I am using Jupyter Notebook to query an SQLite database running on a Raspberry Pi. I'm a beginner trying to learn to write 'good' Python, and other threads say I should close the database connection after I have finished with it. I had been using
if conn:
    conn.close()
but this morning I decided to try experimenting with a with statement, so to check it was working I added print(conn). This returned <sqlite3.Connection object at 0x6d13####>. More searching showed that a with statement will commit but not close an SQLite connection, so I added
from contextlib import closing

with closing(sqlite3.connect(db_file)) as conn:
    ...
which, according to that same link, should fix it. But print(conn) still returned an object. I then tried adding the print test to my original if / .close() method, but that still returned an object. Is there something wrong with both of my close methods, am I misunderstanding what print(conn) is telling me, or is there something about Jupyter that is stopping either method from closing the connection? This link suggests that Jupyter might be the problem. If it is Jupyter, how should I close the connection, or should I just stop worrying about it?
Thanks for your help
I needed to use two different SQLite databases in Jupyter. The problem was that even when I opened the second connection under a different name, the program was still using the first one.
I solved the problem by assigning None to the connection
conn.close()
conn = None
and only then I was able to connect to the second database.
I know that it doesn't make sense, but it works.
Your code appears to be fine: print(conn) will always show <sqlite3.Connection object at 0x######>, even after conn.close() is called. Closing a connection does not destroy the Python object, so printing it tells you nothing about whether the connection is still open.
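If you want to verify this, note that a closed sqlite3 connection raises an exception as soon as you try to use it. A minimal sketch, assuming the db_file from the question:

import sqlite3
from contextlib import closing

with closing(sqlite3.connect(db_file)) as conn:
    conn.execute("SELECT 1")   # fine: the connection is open inside the block

print(conn)                    # still prints <sqlite3.Connection object at 0x...>
try:
    conn.execute("SELECT 1")   # any use of the closed connection now fails
except sqlite3.ProgrammingError as exc:
    print(exc)                 # "Cannot operate on a closed database."

So both of your close methods were very likely working all along; print(conn) just can't show it.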
import psycopg2

def backup_action():
    # connect to the PostgreSQL server, reading the connection
    # parameters from the dropdown selection
    print('Connecting to the PostgreSQL database...')
    conn = psycopg2.connect(clicked.get())
    cursor = conn.cursor()
    f = open(cur_path + "/" + "kopia" + ".csv", 'w')
    cursor.copy_to(f, 'mtr', sep=",")
    cursor.close()
I have a problem with copy_to executing only partially. I have two databases: one for testing with small, almost empty tables, and one big database with the actual data. When I execute this for the smaller one it works just fine, but when I try it on the bigger one, it modifies the CSV file but leaves it empty.
I once had a similar problem doing an actual backup with pyodbc, and I resolved it by delaying closing the connection. I have no idea if that's actually the problem here, and I don't really know if psycopg2 offers a similar solution.
Please help.
I don't know the exact cause of the problem, but using copy_expert worked for me.
I'd post the code, but for some reason the page doesn't let me.
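For reference, a minimal sketch of what the copy_expert variant could look like; the table name mtr, clicked.get(), and cur_path are carried over from the question:

import psycopg2

conn = psycopg2.connect(clicked.get())
with conn.cursor() as cursor, open(cur_path + "/kopia.csv", "w") as f:
    # COPY ... TO STDOUT streams the table contents through the client connection
    cursor.copy_expert("COPY mtr TO STDOUT WITH CSV", f)
conn.close()

As an aside, the file opened in the original code is never closed, so buffered rows may never be flushed to disk; the with block above closes (and flushes) it automatically, which alone could explain an empty CSV.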
Situation
I have a plotly-dash application running in a docker container (based on python3.7-slim).
The app is accessing a postgres database and visualizes the queried data.
However, if the app has not been used for some time (I would estimate around 24-48 hours; we first noticed this issue on Mondays, after nobody had used the app over the weekend), i.e. if no data has been queried from the database, the app freezes and the logs show some errors related to the database.
I cannot fully access the logs, but they contain this error:
AttributeError: 'Connection' object has no attribute '_Connection__connection'
followed by all the pieces of code that tried to query data from the database (but not what exactly went wrong).
The problem was always solved by restarting the app (and thus making a new connection to the database).
Assumption
As stated above, this always occurred after a period of inactivity, so my assumption is that the engine disconnects after some idle time.
Code Sample
For accessing the database, I have a DatabaseConnection class. The relevant part of the code contains something like this:
from sqlalchemy import create_engine
...
engine = create_engine(f"postgresql+psycopg2://{user}:{passw}@{url}:{port}/{db_name}")
self.engine = engine.connect()
...
Question
What is the best solution for overcoming the issue of the disconnect after some inactivity?
How could I possibly check whether the database connection is still active and if not, reconnect it somehow?
Is there a better way to access the database than through an engine object?
Is there something wrong with my approach in general?
Please let me know if you require further information. Thanks in advance.
There is an error in my code. It should be
self.engine = create_engine(f"postgresql+psycopg2://{user}:{passw}@{url}:{port}/{db_name}")
and the second line should be omitted. I misunderstood what engine.connect() does: it returns a Connection object (not an engine, as my attribute name suggests).
Then, for each query I execute, I use the context manager like this:
with self.engine.connect() as conn:
    table1 = pd.read_sql_table("table1", con=conn)
That way, the Connection object is closed after it has been used, but the engine may open new connections whenever necessary.
In my previous solution, the Connection was killed after some idle time.
(Based on this GitHub Discussion)
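To address the other part of the question (checking whether a connection is still alive and reconnecting), SQLAlchemy can also test pooled connections before handing them out. A minimal sketch, reusing the connection parameters from the question; the pool_recycle value is an arbitrary example:

from sqlalchemy import create_engine

engine = create_engine(
    f"postgresql+psycopg2://{user}:{passw}@{url}:{port}/{db_name}",
    pool_pre_ping=True,  # test each pooled connection with a lightweight ping before use
    pool_recycle=3600,   # proactively replace connections older than an hour
)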
I am developing a web-based application using Python, Flask, MySQL, and uWSGI. However, I am not using SQL Alchemy or any other ORM. I am working with a preexisting database from an old PHP application that wouldn't play well with an ORM anyway, so I'm just using mysql-connector and writing queries by hand.
The application works correctly when I first start it up, but when I come back the next morning I find that it has become broken. I'll get errors like mysql.connector.errors.InterfaceError: 2013: Lost connection to MySQL server during query or the similar mysql.connector.errors.OperationalError: 2055: Lost connection to MySQL server at '10.0.0.25:3306', system error: 32 Broken pipe.
I've been researching it and I think I know what the problem is. I just haven't been able to find a good solution. As best as I can figure, the problem is the fact that I am keeping a global reference to the database connection, and since the Flask application is always running on the server, eventually that connection expires and becomes invalid.
I imagine it would be simple enough to just create a new connection for every query, but that seems like a far from ideal solution. I suppose I could also build some sort of connection caching mechanism that would close the old connection after an hour or so and then reopen it. That's the best option I've been able to come up with, but I still feel like there ought to be a better one.
I've looked around, and most people that have been receiving these errors have huge or corrupted tables, or something to that effect. That is not the case here. The old PHP application still runs fine, the tables all have less than about 50,000 rows, and less than 30 columns, and the Python application runs fine until it has sat for about a day.
So, here's hoping someone has a good solution for keeping a continually open connection to a MySQL database. Or maybe I'm barking up the wrong tree entirely; if so, hopefully someone knows a better approach.
I have it working now. Using pooled connections seemed to fix the issue for me.
import mysql.connector

mysql.connector.connect(
    host='10.0.0.25',
    user='xxxxxxx',
    passwd='xxxxxxx',
    database='xxxxxxx',
    pool_name='batman',
    pool_size=3
)

def connection():
    """Get a connection and a cursor from the pool"""
    db = mysql.connector.connect(pool_name='batman')
    return (db, db.cursor())
I call connection() before each query function and then close the cursor and connection before returning. Seems to work. Still open to a better solution though.
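For illustration, the per-query pattern described above might look like this (the query and table are placeholders):

def fetch_rows():
    db, cursor = connection()
    try:
        cursor.execute("SELECT id, name FROM users")  # placeholder query
        return cursor.fetchall()
    finally:
        cursor.close()
        db.close()  # for a pooled connection, close() returns it to the pool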
Edit
I have since found a better solution. (I was still occasionally running into issues with the pooled connections.) There is actually a dedicated library for Flask to handle MySQL connections, which is almost a drop-in replacement.
From bash: pip install Flask-MySQL
Add MYSQL_DATABASE_HOST, MYSQL_DATABASE_USER, MYSQL_DATABASE_PASSWORD, MYSQL_DATABASE_DB to your Flask config. Then in the main Python file containing your Flask App object:
from flaskext.mysql import MySQL
mysql = MySQL()
mysql.init_app(app)
And to get a connection: mysql.get_db().cursor()
All other syntax is the same, and I have not had any issues since. Been using this solution for a long time now.
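Putting the pieces together, a minimal sketch of the whole setup; the host, credentials, and table name are placeholders:

from flask import Flask
from flaskext.mysql import MySQL

app = Flask(__name__)
app.config['MYSQL_DATABASE_HOST'] = '10.0.0.25'
app.config['MYSQL_DATABASE_USER'] = 'xxxxxxx'
app.config['MYSQL_DATABASE_PASSWORD'] = 'xxxxxxx'
app.config['MYSQL_DATABASE_DB'] = 'xxxxxxx'

mysql = MySQL()
mysql.init_app(app)

@app.route('/count')
def count():
    # get_db() hands back a connection managed by the extension for this request
    cursor = mysql.get_db().cursor()
    cursor.execute("SELECT COUNT(*) FROM some_table")  # placeholder table
    (n,) = cursor.fetchone()
    return str(n)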
I am building a web application in Flask.
We have opened up the database window of PyCharm and established a data source to a SQL server database.
My question is what does establishing a data source do?
Does it remove the need to connect to a database manually? Like, for example:
db = MySQLdb.connect("localhost", "testuser", "test123", "TESTDB")
If the answer is yes, and it does remove the need to set up
db = MySQLdb.connect("localhost", "testuser", "test123", "TESTDB")
then how can you access the data in the database, and establish a cursor object?
The JetBrains IDEs such as PyCharm or IntelliJ have a database browser, basically productized as its own IDE called DataGrip, but that's beside the point.
The fact is, no, it doesn't replace the need for code: you could have zero code and still make a database connection, or write only code and never touch the database window, ever (because you don't need PyCharm to write said code).
So they are separate things, just like how "SQL Server" means something completely different from "MySQL" (e.g. you might need a different library).
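To make that concrete, connecting to SQL Server from code typically goes through a different library than MySQLdb, e.g. pyodbc. A minimal sketch; the driver name, server, and credentials are placeholders:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=TESTDB;UID=testuser;PWD=test123"
)
cursor = conn.cursor()
cursor.execute("SELECT @@VERSION")
print(cursor.fetchone()[0])
conn.close()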
So, I'm using a Jupyter Lab notebook to connect to our production database. After a few days of work, we noticed that the server shows hundreds upon hundreds of active connections to the database, listed as established (running "netstat -na").
This is terribly bad, and we identified the issue as coming from the Python kernel opening connections to the server without ever actually closing them, even when explicitly told to do so.
This is a redacted version of the code we are using to connect to the server, run in a notebook cell by itself, separate from the other code. We isolated the issue and we are certain it comes from these lines of code:
from pymongo import MongoClient

client = MongoClient(url, maxIdleTimeMS=120000)
db = client["database"]
coll = db["data"]
query = # Our query
data = list(coll.find(query))
client.close()
Why is this happening? What are we doing wrong? Why doesn't the .close() method actually close the connection?
I have been using MongoDB for quite a while now in our production environment and have faced such problems in the past.
data = list(coll.find(query)): this line actually materializes the results of the query that your cursor returns, and it causes the connection to stay alive. The result of a query is a lazy cursor and should be consumed as-is, in a loop. Materializing the cursor into a list() pulls all the data into memory, which can crash at times, as opposed to the cursor, which just points at the first entry in the result set.
You can simply perform the following operation on the cursor:
for elem in cursor:
    do_something(elem)
and not require a call to the close() method.
Secondly, with Jupyter notebooks, you need to stop the session once you are done with your work. Until you do, the notebook will keep the connection to MongoDB alive, eating up resources along the way.
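Combining both points, a minimal sketch; url, query, and the "database"/"data" names are carried over from the question, and process() is a placeholder for whatever you do with each document:

from pymongo import MongoClient

# MongoClient supports the context-manager protocol; close() is called on exit
with MongoClient(url, maxIdleTimeMS=120000) as client:
    coll = client["database"]["data"]
    for doc in coll.find(query):  # consume the cursor lazily instead of list()
        process(doc)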