python pg module error messages

The pg module in Python for interacting with PostgreSQL is not giving any error message for DML queries.
Is there an alternative to the pg module that gives meaningful error messages?
>>> import pg
>>> conn = pg.connect(dbname="db", user="postgres", host="localhost")
>>> print conn.query("delete from item where item_id=0")
None
>>> print conn.query("delete from item where item_id=0")
None  # This should have been an error message

Why are you expecting an error message? A DELETE does not raise an error in the server if no records were found. So why do you expect a generally applicable database driver to raise an error?
I can't think of any database driver that would issue an error in that case because there may be perfectly legitimate reasons for it. In database terms, for example, an error usually means you are going to roll back a transaction.
If you are going to wrap this in your own API, my recommendation is that you either decide how you want your application to address this, or at most raise warnings instead.
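A minimal sketch of the warning approach, assuming (as your session above suggests) that query() returns a falsy value for a DELETE that matched nothing; the wrapper name and the string-formatted SQL are purely illustrative:

import warnings

def delete_item(conn, item_id):
    # illustrative wrapper, not part of the pg module; adjust the falsy
    # check to whatever your driver actually returns for "0 rows affected"
    result = conn.query("delete from item where item_id=%d" % item_id)
    if not result:
        warnings.warn("DELETE matched no rows (item_id=%d)" % item_id)
    return result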

Do you know the psycopg2 module? It seems a very nice module for interacting with PostgreSQL from Python. There is a tutorial available, besides the module's documentation. In the examples given in the docs, commands that fail do indeed print out error messages.
I personally don't have experience in this module. But it looks very nice!
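For example, a rough sketch with psycopg2, reusing the table from the question (the connection parameters are assumptions): failing statements raise psycopg2.Error, and cursor.rowcount tells you whether a DELETE actually matched anything:

import psycopg2

conn = psycopg2.connect(dbname="db", user="postgres", host="localhost")
cur = conn.cursor()
try:
    cur.execute("delete from item where item_id = 0")
    # a DELETE that matches nothing is not an error, but rowcount reports 0
    print("rows deleted: %d" % cur.rowcount)
    conn.commit()
except psycopg2.Error as e:
    # syntax errors, missing tables, constraint violations etc. end up here
    print("database error: %s" % e)
    conn.rollback()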

Related

Catch "HTTPError 404" in python for gcloud commands

Trying to catch this error:
ERROR: (gcloud.compute.instances.add-labels) HTTPError 404: The resource 'projects/matei-testing-4010-5cbdeeff/zones/us-east1-b/instances/all' was not found
Tried different versions of code and none worked for me.
My current code does not seem to catch an error:
import subprocess
from googleapiclient import discovery, errors

try:
    print("Applying labels")
    gcloud_value = f'gcloud compute instances add-labels all --labels="key=value" --zone=us-east1-b'
    process = subprocess.run([gcloud_value], shell=True)
except errors.HttpError:
    print("Command did not succeed because of the following error: {}".format(errors.HttpError))
How do I catch the error to use it later?
Thank you
Try this:
import subprocess
gcloud_value = 'gcloud compute instances add-labels all --labels="key=value" --zone=us-east1-b'
process = subprocess.run(gcloud_value, shell=True, capture_output=True)
print(process.stdout.decode('utf-8'))
print(process.stderr.decode('utf-8'))
print(process.returncode)
One would expect gcloud to emit errors to stderr. Therefore by examining process.stderr you should be able to figure out what (if anything) has gone wrong. Also, if process.returncode is non-zero you should be able to deduce that it didn't work but that depends entirely on how the underlying application (gcloud in this case) is written. There's plenty of stuff out there that returns zero even when there was a failure!
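If you would rather have the failure surface as a catchable exception instead of inspecting returncode by hand, one sketch building on the snippet above (capture_output needs Python 3.7+) is to pass check=True:

import subprocess

cmd = 'gcloud compute instances add-labels all --labels="key=value" --zone=us-east1-b'
try:
    # check=True raises CalledProcessError for any non-zero exit code
    process = subprocess.run(cmd, shell=True, capture_output=True, check=True)
    print(process.stdout.decode('utf-8'))
except subprocess.CalledProcessError as e:
    # gcloud writes messages like "HTTPError 404 ..." to stderr
    print("gcloud failed (exit code {}): {}".format(e.returncode, e.stderr.decode('utf-8')))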
I strongly encourage you to consider using Google's client libraries to interact with its services rather than subprocess'ing.
As you're experiencing, calling out to a shell not only limits error handling, it often requires imprecise string "munging" and ad hoc auth handling.
Google generally provides machine-generated "perfect" SDK implementations of all its services. For Cloud (except Compute!?), there are also Cloud Client libraries.
I encourage you to explore using the Python SDK for Compute Engine:
https://cloud.google.com/compute/docs/tutorials/python-guide
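As a rough sketch of what catching the 404 looks like with the machine-generated client (the project, zone and instance names are placeholders, and application default credentials are assumed):

from googleapiclient import discovery, errors

compute = discovery.build('compute', 'v1')
try:
    request = compute.instances().get(project='my-project', zone='us-east1-b', instance='all')
    instance = request.execute()
except errors.HttpError as e:
    if e.resp.status == 404:
        # the instance (or project/zone) does not exist
        print("Resource not found: {}".format(e))
    else:
        raise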

Is there a way to set secure_auth to false in MySQLdb.connect in Python 2.7.5?

I am attempting to run a script written in Python 2.7.5 (not using Django). When it tries to connect to a remote MySQL server with the MySQLdb.connect() method, it throws the following error:
_mysql_exceptions.OperationalError: (2049, "Connection using old (pre-4.1.1) authentication protocol refused (client option 'secure_auth' enabled)")
I have done reading about this issue:
Django/MySQL-python - Connection using old (pre-4.1.1) authentication protocol refused (client option 'secure_auth' enabled)
mysql error 2049 connection using old (pre-4-1-1) authentication from mac
Is there a way to set a parameter in the MySQLdb.connect() method to set secure_auth to false, without having to change any passwords or run the command from the command line? I have looked at the official docs and there does not appear to be anything in there.
I have tried adding secure_auth=False to the parameters but throws an error (shown in the code below).
Python:
def get_cursor():
    global _cursor
    if _cursor is None:
        try:
            db = MySQLdb.connect(user=_usr, passwd=_pw, host='external.website.com', port=3306, db=_usr, charset="utf8")
            # tried this but it doesn't work (as expected, but tried anyway); it throws this error:
            # TypeError: 'secure_auth' is an invalid keyword argument for this function
            # db = MySQLdb.connect(user=_usr, passwd=_pw, host='external.website.com', port=3306, db=_usr, charset="utf8", secure_auth=False)
            _cursor = db.cursor()
        except MySQLdb.OperationalError:
            print "error connecting"
            raise
    return _cursor
I spent an inordinate amount of time working through the MySQLdb source code and determined that this simply cannot be done without patching MySQLdb's C wrapping code. Theoretically, you should be able to pass the SECURE_CONNECTION flag to specify that you do not want to use the insecure old passwords:
MySQLdb.connect(..., client_flag=MySQLdb.constants.CLIENT.SECURE_CONNECTION)
But the MySQLdb code never actually checks that flag, and never configures the secure_connection option when calling the MySQL connection code, so it always defaults to requiring new-style passwords.
Possible fixes include:
Patch the MySQLdb code
Use an old version of the MySQL client libraries
Update the passwords on the MySQL server
Create a single new user with a new-style password
Sorry I don't have a better answer. I just ran into this problem myself!
I know Moses' answer has been validated, but I wanted to offer my workaround based on what he suggested.
I had previously installed mysql_python for my Python and had the brew version of mysql installed.
I deleted all of that.
I looked for a way to install MySQLdb from source, using its last stable version.
I compiled it (following the instructions here), installed it, and then looked for a stable version of the MySQL client (the MySQL website is the best place for that) and installed the 5.5 version, which fit my requirements perfectly.
I made MySQL launch automatically, then restarted my computer (you can just restart Apache) and checked that all paths were correct and the right includes were in the right places (you can check that against the link above).
And now it all works fine!
Hope it helps.
SSL is a separate parameter that you can set in the connection call. Here is a note from the source code; try checking the mysql_ssl_set() documentation.
ssl
    dictionary or mapping, contains SSL connection parameters;
    see the MySQL documentation for more details
    (mysql_ssl_set()). If this is set, and the client does not
    support SSL, NotSupportedError will be raised.
This document talks about all the secure parameters - http://dev.mysql.com/doc/refman/5.0/en/mysql-ssl-set.html
I don't see anything to disable secure auth at a glance.
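For completeness, a rough sketch of passing that ssl mapping (the certificate paths are placeholders, _usr and _pw are the same variables as in the question's code, and this configures SSL rather than disabling secure_auth):

import MySQLdb

db = MySQLdb.connect(
    user=_usr, passwd=_pw, host='external.website.com', port=3306, db=_usr, charset="utf8",
    # the ssl mapping is handed to mysql_ssl_set(); keys like 'ca', 'cert', 'key' are supported
    ssl={'ca': '/path/to/ca.pem', 'cert': '/path/to/client-cert.pem', 'key': '/path/to/client-key.pem'},
)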

MySQL and python-mysql (mysqldb) crashing under heavy load

I was just putting the finishing touches to a site built using web.py, MySQL and python-mysql (the mysqldb module), feeling good about having protected against SQL injections and the like, when I leant on the refresh button, sending 50 or so simultaneous requests, and it crashed my server! I reproduced the error and found that I get the following two errors interchangeably, sometimes one and sometimes the other:
Error 1:
127.0.0.1:60712 - - [12/Sep/2013 09:54:34] "HTTP/1.1 GET /" - 500 Internal Server Error
Exception _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") in <bound method Cursor.__del__ of <MySQLdb.cursors.Cursor object at 0x10b287750>> ignored
Traceback (most recent call last):
Error 2:
python(74828,0x10b625000) malloc: *** error for object 0x7fd8991b6e00: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap: 6
Clearly the requests are straining MySQL and causing it to fall over, so my question is how do I protect against this happening?
My server is set up using Ubuntu 13.04, nginx, MySQL (which I connect to with the MySQLdb Python module), web.py and FastCGI.
When the web.py app starts up it connects to the database as so:
def connect():
    global con
    con = mdb.connect(host=HOST, user=USER, passwd=PASSWORD, db=DATABASE)
    if con is None:
        print 'error connecting to database'
The con object is assigned to a global variable so various parts of the application can access it.
I access the database data like this:
def get_page(name):
    global con
    with con:
        cur = con.cursor()
        cur.execute("SELECT `COLUMN_NAME` FROM `INFORMATION_SCHEMA`.`COLUMNS` WHERE `TABLE_SCHEMA`='jt_website' AND `TABLE_NAME`='pages'")
        table_info = cur.fetchall()
One idea I had was to open and close the database connection before and after each request, but that seems like overkill to me; does anybody have any opinions on this?
What sort of methods do people use to protect their database connections in python and other environments and what sort of best practices should I be following?
I don't use web.py, but the docs and tutorials show a different way to deal with the database.
They suggest using a global object (you create it in .connect), which will probably be a global proxy in the Flask style.
Try organizing your code as in this example (dead link) and see if it happens again.
The error you reported seems to be a concurrency problem, which is normally handled automatically by the framework.
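As a rough sketch of that style, using web.py's own database layer (the connection parameters are the same placeholders as in your connect()); web.database() manages a connection per request instead of sharing one global MySQLdb connection:

import web

db = web.database(dbn='mysql', host=HOST, user=USER, pw=PASSWORD, db=DATABASE)

def get_page(name):
    # web.py runs the query on a per-request connection, so concurrent
    # requests don't trample a single shared cursor
    return db.query("SELECT `COLUMN_NAME` FROM `INFORMATION_SCHEMA`.`COLUMNS` "
                    "WHERE `TABLE_SCHEMA`='jt_website' AND `TABLE_NAME`='pages'")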
About the latter question:
What sort of methods do people use to protect their database connections in python and other environments and what sort of best practices should I be following?
It's different depending on the web framework you use. Django, for example, hides everything and it just works.
Flask lets you choose what you want to do. You can use flask-sqlalchemy, which uses the very good SQLAlchemy ORM and manages the connection proxy for the web application.

pymongo: "OperationFailure: database error: error querying server"

We're occasionally getting the following error when doing queries:
OperationFailure: database error: error querying server
There is no specific query causing this, and when repeating the process things work. Has anybody else seen this error?
Our setup is a cluster of Ubuntu VMs on Amazon EC2; we're using Python 2.7.3 and pymongo v2.3. We're also using MongoEngine, but we still get this exception from non-MongoEngine code.
To those discovering this question:
We were never able to fully diagnose the problem with this, our hunch is that the database connection tends to fail every once in a while for whatever reason. From our research into distributed computing, this is a common problem and needs to be handled explicitly.
In the end, we adapted our system to become robust to DB connection failures by catching OperationFailure exceptions along with similar ones and re-establishing the database connection. This resolved the problem along with a number of similar ones we were having.
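A minimal sketch of that kind of retry wrapper (the helper name, retry counts and the reconnect hook are illustrative, not pymongo API):

import time
from pymongo.errors import AutoReconnect, OperationFailure

def run_with_retry(operation, reconnect=None, max_retries=3, delay=1):
    # operation is any callable that hits the database; reconnect is an
    # optional caller-supplied hook that rebuilds the connection
    for attempt in range(max_retries):
        try:
            return operation()
        except (AutoReconnect, OperationFailure):
            if attempt == max_retries - 1:
                raise
            if reconnect is not None:
                reconnect()
            time.sleep(delay)

# e.g. doc = run_with_retry(lambda: collection.find_one({'name': 'example'}))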
Seems the query failed on the server - to diagnose you'd need to check the server logs.

python MySQLdb - how to log warnings

MySQLdb is a really nice library for handling SQL connections. I have found only one problem: it writes all warning messages directly to stdout. Is there any possibility of redirecting them to the standard Python logger instead?
Replace warnings.showwarning() with a function that works as desired.
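A minimal sketch of that approach, assuming MySQLdb surfaces its warnings through the standard warnings module (the logger name and message format are arbitrary):

import logging
import warnings

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("mysql.warnings")

def log_warning(message, category, filename, lineno, file=None, line=None):
    # route anything that would have been printed to the console to the logger instead
    logger.warning("%s:%s: %s: %s", filename, lineno, category.__name__, message)

warnings.showwarning = log_warning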
