Good day to All,
I need to talk to multiple DBs from Python using peewee; a sketch of what I mean is below.
First I need to put an entry in the types table of the MainDb. I only have the Name & Type, and the ID gets auto-incremented after the entry is made in the table. Then, using the auto-incremented ID (MainID), I need to put a data entry in the respective type table of the respective type DB.
Then I need to do some operations using SELECT queries with the MainID in the Python code, and finally I need to delete the entries for that MainID in the MainDb as well as in the respective type DB, whether the operations in the Python code execute successfully or not (throw any error or exception).
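Roughly this flow, with peewee 2.x (all the connection settings and table/field names below are placeholders; my real schema differs):

from peewee import MySQLDatabase, Model, CharField, IntegerField

# Placeholder connection settings and names.
main_db = MySQLDatabase('MainDb', host='localhost', user='root', passwd='secret')
type_db = MySQLDatabase('TypeDb', host='localhost', user='root', passwd='secret')

class Types(Model):
    name = CharField()
    type = CharField()

    class Meta:
        database = main_db
        db_table = 'types'   # peewee 2.x uses db_table

class TypeData(Model):
    main_id = IntegerField()
    data = CharField()

    class Meta:
        database = type_db
        db_table = 'type_data'

entry = Types.create(name='foo', type='bar')   # ID is auto-incremented here
main_id = entry.id
try:
    TypeData.create(main_id=main_id, data='...')
    # ... SELECT queries using main_id go here ...
finally:
    # Clean up in both DBs whether the operations succeeded or failed.
    TypeData.delete().where(TypeData.main_id == main_id).execute()
    Types.delete().where(Types.id == main_id).execute()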
Currently I am using a plain MySQLdb connection, opened and closed around a cursor, to execute the queries, and to get the last auto-incremented ID I am using the cursor's lastrowid (cursor.lastrowid).
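A minimal version of that (connection details are placeholders):

import MySQLdb

con = MySQLdb.connect(host='localhost', user='root', passwd='secret', db='MainDb')
cur = con.cursor()
cur.execute("INSERT INTO types (Name, Type) VALUES (%s, %s)", ('foo', 'bar'))
main_id = cur.lastrowid   # the auto-incremented ID of the new row
con.commit()
cur.close()
con.close()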
I also referred to a question on Stack Overflow; as per my understanding it covers the case of a single DB (like the main DB alone), but what do I need to do for my situation?
OS: Windows 7 64-bit,
DB: MySQL v5.7,
Python: v2.7,
peewee: v2.10
Thanks in advance
Related
I have a table with 30k clients, with ClientID as the primary key.
I'm getting data from API calls and inserting it into the table using Python.
I'd like to find a way to insert rows for new clients and, if the ClientID that comes with the API call already exists in the table, update the existing record with the client's updated information.
Thanks!!
A snippet of code would be nice, to show us what exactly you are doing right now. I presume you are using an ORM like SQLAlchemy? If so, then you are looking at doing an UPSERT type of operation.
That is already answered HERE
Alternatively, if you are executing raw queries without an ORM, you could write a custom procedure and pass the required parameters. HERE is a good write-up on how that is done in MSSQL under high concurrency. You could use it as a starting point for understanding and then re-write it for PostgreSQL.
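For the ORM route, a minimal sketch of such an UPSERT with SQLAlchemy 1.1+ against PostgreSQL might look like this (the table and column names are assumptions):

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.dialects.postgresql import insert

engine = create_engine('postgresql://user:pass@host/dbname')  # placeholder DSN

metadata = MetaData()
clients = Table('clients', metadata,                 # hypothetical table
                Column('ClientID', Integer, primary_key=True),
                Column('name', String),
                Column('email', String))

def upsert_clients(rows):
    # rows is a list of dicts coming from the API calls.
    stmt = insert(clients).values(rows)
    # On a ClientID collision, overwrite the other columns with the new values.
    stmt = stmt.on_conflict_do_update(
        index_elements=['ClientID'],
        set_={'name': stmt.excluded.name, 'email': stmt.excluded.email})
    with engine.begin() as conn:
        conn.execute(stmt)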
I've been trying to write a bulk insert to a table.
I've already connected and tried to use SQLAlchemy's bulk insert functions, but it's not really bulk inserting; it inserts the rows one by one (the DBA traced the DB and showed me).
I wrote a class for the table:
class SystemLog(Base):
    __tablename__ = 'systemlogs'
    # fields go here...
Because the bulk insert functions don't work, I want to try to use a stored procedure.
I have a stored procedure named 'insert_new_system_logs' that receives a table variable as a parameter.
How can I call it with a table from Python SQLAlchemy?
My SQLAlchemy version is 1.0.6
I can't paste my code because it's in a closed network.
I don't have to use SQLAlchemy; I just want to bulk insert my logs.
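For what it's worth, pyodbc 4.0.25+ can pass a table-valued parameter as a plain list of row tuples, and SQLAlchemy exposes the raw DBAPI connection, so a sketch might look like this (the TVP's column layout and the DSN are assumptions):

from sqlalchemy import create_engine

engine = create_engine('mssql+pyodbc://user:pass@mydsn')  # placeholder DSN

log_rows = [                       # hypothetical rows matching the TVP's columns
    (1, 'startup', '2015-06-01 10:00:00'),
    (2, 'shutdown', '2015-06-01 10:05:00'),
]

raw = engine.raw_connection()      # drop down to the pyodbc connection
try:
    cursor = raw.cursor()
    # pyodbc sends a list of tuples bound to one parameter as a TVP.
    cursor.execute("{CALL insert_new_system_logs (?)}", (log_rows,))
    raw.commit()
finally:
    raw.close()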
I want to dump Oracle objects like tables and stored procedures using cx_Oracle from Python.
Is there any tutorial on how to do this?
If you are looking for the source code for tables you can use the following:
select DBMS_METADATA.GET_DDL('TABLE','<table_name>') from DUAL;
For stored procedures you can use:
select text from all_source where name = '<procedure name>' order by line
In general this is not a cx_Oracle-specific problem; just query the Oracle-specific tables (like all_source) or functions (like get_ddl) and read the results in like any other query. There are more of these sorts of tables (like user_source, for source that you, the current user, own) in Oracle, but I'm doing this off the top of my head and don't have easy access to an Oracle DB to remind myself.
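Wired up through cx_Oracle, that might look like the following (credentials/DSN and object names are placeholders):

import cx_Oracle

con = cx_Oracle.connect('user', 'password', 'host/service')
cur = con.cursor()

# GET_DDL returns a CLOB, so read() it into a string.
cur.execute("select DBMS_METADATA.GET_DDL('TABLE', :name) from DUAL",
            name='MY_TABLE')
print(cur.fetchone()[0].read())

# Procedure source comes back one row per line.
cur.execute("select text from all_source "
            "where name = :name and type = 'PROCEDURE' order by line",
            name='MY_PROC')
print(''.join(row[0] for row in cur))

cur.close()
con.close()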
Intro
I'm writing an application in Python against a Cassandra 1.2 cluster (7 nodes, replication factor 3), and I'm accessing Cassandra from Python using the cql library (CQL 3.0).
The problem
The application is built in such a way that, when a CQL statement is run against an unconfigured column family, it automatically creates the table and retries the CQL statement. For example, if I try to run this:
SELECT * FROM table1
And table1 doesn't exist, then the application will run the corresponding CREATE TABLE for table1 and retry the previous SELECT. The problem is that, after the creation of the table, the SELECT (the retry) fails with this error:
Request did not complete within rpc_timeout
The question
I assume the cluster needs some time to propagate the creation of the table, or something like that? If I wait a few seconds between the creation of the table and the retry of the SELECT statement, everything works, but I want to know exactly why, and whether there is a better way of doing it. Perhaps making the CREATE TABLE wait for the changes to propagate before returning? Is there a way of doing that?
Thanks in advance
I am assuming you are using cqlsh. The default consistency level for cqlsh is ONE, meaning a statement returns after the first node completes, but not necessarily before all nodes complete. On a subsequent read you aren't guaranteed to hit a node that has the completed table. You can check this by turning on tracing, but that will affect performance.
You can enforce a stronger consistency level, which should make the CREATE wait until the table is created on all nodes.
CREATE TABLE ... USING CONSISTENCY ALL
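If you are on the Python cql driver rather than cqlsh, another option is to poll until every node reports the same schema version before retrying; a sketch, assuming the DB-API-style cql driver and Cassandra 1.2's system.local and system.peers tables:

import time

def wait_for_schema_agreement(cursor, timeout=10.0):
    # Block until all nodes report the same schema version, or time out.
    deadline = time.time() + timeout
    while time.time() < deadline:
        cursor.execute("SELECT schema_version FROM system.local")
        local_version = cursor.fetchone()[0]
        cursor.execute("SELECT schema_version FROM system.peers")
        peer_versions = [row[0] for row in cursor.fetchall()]
        if all(v == local_version for v in peer_versions):
            return True
        time.sleep(0.5)
    return False

# Usage after the automatic CREATE TABLE, before retrying the SELECT:
# cursor.execute(create_table_cql)
# wait_for_schema_agreement(cursor)
# cursor.execute("SELECT * FROM table1")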
I would like to copy the contents of a MySQL database from one server to another using a third server. From the shell prompt this could be done with:
mysqldump --host=hostname1 --user=username --password="mypwd" acme | mysql --host=hostname2 --user=username --password="mypwd" acme
However, how do I do this from within a Python script without using os.system or any of the other subprocess methods? I've read through the MySQLdb docs, but don't see a way to do a bulk export/import. Thank you!
If you don't want to use mysqldump from the command line (via the os.system methods), you are pretty much tied to getting the data straight from MySQL and then putting it on the other server. In that respect your question looks very similar to Get Insert Statement for existing row in MySQL.
You can use a query to get the schema creation SQL:
SHOW CREATE TABLE MyTable;
Then you need to implement a script that just queries the data and inserts it into the other server.
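A sketch of such a script with plain MySQLdb, reusing the hosts/credentials from the question's example (MyTable is a placeholder):

import MySQLdb

src = MySQLdb.connect(host='hostname1', user='username', passwd='mypwd', db='acme')
dst = MySQLdb.connect(host='hostname2', user='username', passwd='mypwd', db='acme')
scur, dcur = src.cursor(), dst.cursor()

# Recreate the table on the target from the source's DDL.
scur.execute("SHOW CREATE TABLE MyTable")
dcur.execute(scur.fetchone()[1])

# Stream the rows across in batches.
scur.execute("SELECT * FROM MyTable")
placeholders = ', '.join(['%s'] * len(scur.description))
while True:
    rows = scur.fetchmany(1000)
    if not rows:
        break
    dcur.executemany("INSERT INTO MyTable VALUES (" + placeholders + ")", rows)
dst.commit()

src.close()
dst.close()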
You could also look into third-party applications that allow you to copy data from one database to another.