Psycopg2 copy_to not fully executing

import psycopg2

def backup_action():
    # connect to the PostgreSQL server using the selected connection string
    print('Connecting to the PostgreSQL database...')
    conn = psycopg2.connect(clicked.get())
    cursor = conn.cursor()
    # dump the mtr table to a CSV file
    f = open(cur_path + "/" + "kopia" + ".csv", 'w')
    cursor.copy_to(f, 'mtr', sep=",")
    cursor.close()
I have a problem with copy_to executing only partially. I have two databases: a small one for testing, with almost-empty tables, and a big one with the actual data. When I execute this for the smaller one it works just fine, but when I try it on the bigger one, it modifies the CSV file yet leaves it empty.
I once had a similar problem doing an actual backup in pyodbc, and I resolved it by delaying closing the connection. I have no idea whether that's actually the problem here, and I don't know if psycopg2 offers a similar solution.
Please help.

I don't know the exact cause of the problem, but using copy_expert instead worked for me.
I'd post the code, but for some reason the page doesn't let me.
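Roughly, a copy_expert version of the question's code might look like this (the table name mtr and file name come from the question; the connection string is a placeholder):
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")  # placeholder connection string
with conn.cursor() as cursor, open("kopia.csv", "w") as f:
    # copy_expert takes a full COPY statement, so you can also ask for
    # proper CSV quoting and a header row
    cursor.copy_expert("COPY mtr TO STDOUT WITH CSV HEADER", f)
conn.close()
Note that the with block closes the file on exit, which flushes Python's write buffer; the code in the question never closes f, which by itself could leave a partially written or empty CSV.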

Related

How to close SQLite database connection when using Python and Jupyter Notebook

I am using Jupyter Notebook to query an SQLite database running on a Raspberry Pi. I'm a beginner trying to learn to write 'good' Python, and other threads say I should close the database connection after I have finished with it. I had been using
if conn:
    conn.close()
but this morning I decided to experiment with with, so to check it was working I added print(conn). This returned <sqlite3.Connection object at 0x6d13####>. More searching showed that with will commit but not close SQLite connections, so I added
with closing(sqlite3.connect(db_file)) as conn:
which according to that same link should fix it. But print still returned an object. I then tried adding the print test to my original if / .close() method, but that still returned an object. Is there something wrong with both of my close methods, am I misunderstanding what print(conn) is telling me, or is there something about Jupyter that is stopping either method from closing the connection? This link suggests that Jupyter might be the problem. If it is Jupyter, how should I close the connection, or should I just stop worrying about it?
Thanks for your help
I needed to use two different sqlite databases in Jupyter. The problem was that even if I started the second connection with a different name, the program was still using the first one.
I solved the problem by assigning None to the connection
conn.close()
conn = None
and only then I was able to connect to the second database.
I know that it doesn't make sense, but it works.
Your code appears to be fine. print(conn) will always print <sqlite3.Connection object at 0x######>, even after conn.close() is called; closing a connection invalidates the underlying database handle but does not delete the Python object.
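A quick way to see this (a throwaway in-memory database stands in for the real file):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.close()

print(conn)  # still prints <sqlite3.Connection object at 0x...>

try:
    conn.execute("SELECT 1")  # the handle really is closed
except sqlite3.ProgrammingError as e:
    print(e)  # "Cannot operate on a closed database."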

Python loses connection to MySQL database after about a day

I am developing a web-based application using Python, Flask, MySQL, and uWSGI. However, I am not using SQL Alchemy or any other ORM. I am working with a preexisting database from an old PHP application that wouldn't play well with an ORM anyway, so I'm just using mysql-connector and writing queries by hand.
The application works correctly when I first start it up, but when I come back the next morning I find that it has become broken. I'll get errors like mysql.connector.errors.InterfaceError: 2013: Lost connection to MySQL server during query or the similar mysql.connector.errors.OperationalError: 2055: Lost connection to MySQL server at '10.0.0.25:3306', system error: 32 Broken pipe.
I've been researching it and I think I know what the problem is. I just haven't been able to find a good solution. As best as I can figure, the problem is the fact that I am keeping a global reference to the database connection, and since the Flask application is always running on the server, eventually that connection expires and becomes invalid.
I imagine it would be simple enough to just create a new connection for every query, but that seems like a far from ideal solution. I suppose I could also build some sort of connection caching mechanism that would close the old connection after an hour or so and then reopen it. That's the best option I've been able to come up with, but I still feel like there ought to be a better one.
I've looked around, and most people who have been receiving these errors have huge or corrupted tables, or something to that effect. That is not the case here: the old PHP application still runs fine, the tables all have fewer than about 50,000 rows and fewer than 30 columns, and the Python application runs fine until it has sat for about a day.
So, here's to hoping someone has a good solution for keeping a continually open connection to a MySQL database. Or maybe I'm barking up the wrong tree entirely; if so, hopefully someone can point me in the right direction.
I have it working now. Using pooled connections seemed to fix the issue for me.
import mysql.connector

# create the pool once at startup
mysql.connector.connect(
    host='10.0.0.25',
    user='xxxxxxx',
    passwd='xxxxxxx',
    database='xxxxxxx',
    pool_name='batman',
    pool_size=3
)

def connection():
    """Get a connection and a cursor from the pool"""
    db = mysql.connector.connect(pool_name='batman')
    return (db, db.cursor())
I call connection() before each query function and then close the cursor and connection before returning. Seems to work. Still open to a better solution though.
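For illustration, a hypothetical query function built on the connection() helper above (the table and column names are made up):
def get_user_name(user_id):
    db, cursor = connection()
    try:
        cursor.execute("SELECT name FROM users WHERE id = %s", (user_id,))
        row = cursor.fetchone()
        return row[0] if row else None
    finally:
        cursor.close()
        db.close()  # for a pooled connection, close() returns it to the pool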
Edit
I have since found a better solution. (I was still occasionally running into issues with the pooled connections). There is actually a dedicated library for Flask to handle mysql connections, which is almost a drop-in replacement.
From bash: pip install Flask-MySQL
Add MYSQL_DATABASE_HOST, MYSQL_DATABASE_USER, MYSQL_DATABASE_PASSWORD, MYSQL_DATABASE_DB to your Flask config. Then in the main Python file containing your Flask App object:
from flaskext.mysql import MySQL
mysql = MySQL()
mysql.init_app(app)
And to get a connection: mysql.get_db().cursor()
All other syntax is the same, and I have not had any issues since. Been using this solution for a long time now.
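As a sketch, a hypothetical route using that pattern (the route, table, and column names are illustrative):
@app.route('/users')
def list_users():
    # get_db() returns a connection tied to the current app context
    cursor = mysql.get_db().cursor()
    cursor.execute("SELECT id, name FROM users")
    rows = cursor.fetchall()
    cursor.close()
    return str(rows)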

Discrepancy between my SQL python data and the DB Browser

I am using sqlite combined with tkinter to write and delete records within my Python program. The deletion works perfectly fine in my program and also when I restart the program, the record does not exist anymore.
However, I always cross check using the Linux standard software DB Browser for SQLite and look at my SQL Table. Strangely, all records still exist in the DB Browser. Now I am wondering, why's that? Why is it gone within my Python sqlite queries but not in the DB Browser? Somehow the records are still there. How can I completely destroy my records?
For deletion I use:
(The user can choose a specific entry using a listbox. I then "translate" the selected item into its specific ID and trigger the deletion.)
self.c.execute("DELETE FROM financial_table WHERE ID=?",(entry,))
self.conn.commit()
For my query I use:
(I query the data for a specific year and month.)
self.c.execute("SELECT ID, Date, Item, Price FROM financial_table WHERE strftime('%Y-%m', Date) = '{}' ORDER BY Date ".format(date))
single_dates = self.c.fetchall()
Thank you very much for your help.
The solution to my question is: I am stupid!
I was tired yesterday evening and looked at the wrong SQL file in a subfolder which had the same name as the one from my Python program. So it is actually working. Please excuse my stupidity.
@Bruceskyaus
Despite my stupidity I learned from your answer, especially the try ... except block. I am going to implement it. Thanks.
You may have a problem with controlling transactions on your database, but it could also be the connection itself. Make sure you don't have any uncommitted DML statements on a different connection (i.e. an INSERT, UPDATE, or DELETE in your DB Browser that wasn't committed); these could cause the conn.commit() to fail. With SQLite, an uncommitted transaction can lock the entire database for a brief period of time.
Try ensuring that there is a new cursor for the delete statement and call conn.close() after the conn.commit(). Before you execute the code, make sure that no other connections are accessing the database - including the DB Browser. Only check in the DB Browser when you have shut down the application (for this test). This eliminates multithreading or locking as a possible cause. See also SQLite - Data Persistence and SQLite - Controlling Transactions
It is also helpful to trap all errors for DML statements using a try...except block. Something like this:
import sqlite3

try:
    self.conn = sqlite3.connect('mydb.db')
    self.c = self.conn.cursor()
    self.c.execute("DELETE FROM financial_table WHERE ID=?", (entry,))
    self.conn.commit()
except sqlite3.Error as e:
    print("An error occurred:", e.args[0])
finally:
    self.conn.close()

Using a base dialect with pyodbc in SQLAlchemy

I can connect via pyODBC to an unsupported database over ODBC. Queries appear to execute correctly. If I try to connect using mssql+pyodbc, I can't connect properly (an "image not found" error).
I've tried "base:///", "base+pyodbc:///", and "pyodbc:///".
Do I need to write my own "dialect" that doesn't make any changes to base, and are there any useful (up to date) guides on how to do this?
EDIT:
import pyodbc
conn = pyodbc.connect(DSN="ODBCCONNECTIONNAME", UID="ODBCUSER", PWD="PASSWORD")
cursor = conn.cursor()
Also works with this line replacing the connection above:
conn = pyodbc.connect("DSN=fmp_production;UID=odbc_user;PWD=Pwd222")
I can then run selects, and modifications fine, using standard SQL.
EDIT:
Okay, so I'm getting an error: "Abort trap: 6". Basically, my Python process is crashing out. I've tried testing with a functioning ODBC connection to a MySQL database, using:
engine = create_engine('mysql+pyodbc://root:rootpwd@testenvironment')
If I include the name of the database, I get an "image not found" error instead. But I think the actual problem is whatever is crashing my Python process.
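One possible direction for the dialect question: SQLAlchemy lets you register a custom dialect, and a minimal pass-through dialect can just combine the generic defaults with the pyodbc connector. A rough, untested sketch (the dialect name and registration key are made up, and the details vary across SQLAlchemy versions):
from sqlalchemy.connectors.pyodbc import PyODBCConnector
from sqlalchemy.engine import default
from sqlalchemy.dialects import registry

class BaseODBCDialect(PyODBCConnector, default.DefaultDialect):
    """A dialect that adds nothing beyond the generic defaults."""
    name = 'baseodbc'

# make create_engine able to find the class by URL prefix
registry.register("baseodbc.pyodbc", __name__, "BaseODBCDialect")

# usage: create_engine("baseodbc+pyodbc://user:pwd@dsn_name")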

Overwrite a database

I have an online database and connect to it by using MySQLdb.
db = MySQLdb.connect(......)
cur = db.cursor()
cur.execute("SELECT * FROM YOUR_TABLE_NAME")
data = cur.fetchall()
Now, I want to write the whole database to my localhost (overwrite). Is there any way to do this?
Thanks
If I'm reading you correctly, you have two database servers, A and B (where A is a remote server and B is running on your local machine) and you want to copy a database from server A to server B?
In all honesty, if this is a one-off, consider using the mysqldump command-line tool, either directly or by calling it from Python.
If not, the last answer on http://bytes.com/topic/python/answers/24635-dump-table-data-mysqldb details the SQL needed to define a procedure that outputs tables and data, though this may well miss subtleties that mysqldump does not.
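If you do drive it from Python, something along these lines should work (hosts, users, passwords, and the database name are placeholders):
import subprocess

# dump the remote database to an in-memory SQL script ...
dump = subprocess.run(
    ["mysqldump", "-h", "remote.example.com", "-u", "remote_user",
     "-pREMOTE_PWD", "mydb"],
    check=True, capture_output=True,
)

# ... and replay it against the local server, overwriting matching tables
subprocess.run(
    ["mysql", "-h", "localhost", "-u", "local_user", "-pLOCAL_PWD", "mydb"],
    input=dump.stdout, check=True,
)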
