How to receive notices with PyGreSQL?

I'm using PyGreSQL 4.1.1 with Postgres 9.5, and have written some stored functions. I use RAISE with different levels inside of the functions for debugging purposes, which works very well in psql, but I haven't found a way to access those messages in Python.
Example:
CREATE OR REPLACE FUNCTION my_function() RETURNS BOOLEAN AS $_$
BEGIN
    RAISE NOTICE 'A notice from my function.';
    RETURN TRUE;
END
$_$ LANGUAGE plpgsql;
My Python code looks like this:
import pgdb

conn = pgdb.connect(database='mydb', user='myself')
cursor = conn.cursor()
cursor.execute("SELECT my_function()")
How can I access the notice (A notice from my function.) after running my_function()?

Thanks to @klin's comment, I found a somewhat unclean solution. The pgdb.Connection object stores the underlying pg.Connection object in a private attribute named _cnx, so you can set the notice receiver like this:
def my_notice_receiver(notice):
    logging.info("Notice: %s", notice)

conn._cnx.set_notice_receiver(my_notice_receiver)
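Putting the question and the workaround together, a minimal end-to-end sketch (assuming the same database, user, and stored function as above; note that _cnx is private and may change between PyGreSQL versions):
import logging
import pgdb

logging.basicConfig(level=logging.INFO)

def my_notice_receiver(notice):
    logging.info("Notice: %s", notice)

conn = pgdb.connect(database='mydb', user='myself')
conn._cnx.set_notice_receiver(my_notice_receiver)  # _cnx is the wrapped pg.Connection

cursor = conn.cursor()
cursor.execute("SELECT my_function()")  # the NOTICE raised inside my_function() is now logged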

Is it possible to check a variable's value in a function when testing?

I have the following function:
import jinja2
from airflow.providers.postgres.hooks.postgres import PostgresHook  # Airflow 2.x import path

def create_tables(tables: list) -> None:
    template = jinja2.Template(
        open("/opt/airflow/etl_scripts/postgres/staging_orientdb_create_tables.sql", "r").read()
    )
    pg_hook = PostgresHook(POSTGRES_CONN_ID)  # Airflow's Postgres hook
    conn = pg_hook.get_conn()
    for table in tables:
        sql = template.render(TABLE_NAME=table)
        with conn.cursor() as cur:
            cur.execute(sql)
        conn.commit()
Is there a way to check the content of the internal "sql" variable, or what "execute" was called with?
My test looks like this, but because the function returns nothing, I have nothing to assert on:
from unittest.mock import MagicMock, mock_open

def test_create_tables(mocker):
    pg_mock = MagicMock(name="pg")
    mocker.patch.object(
        PostgresHook,
        "get_conn",
        return_value=pg_mock
    )
    mock_file_content = "CREATE TABLE IF NOT EXISTS {{table}}"
    mocker.patch("builtins.open", mock_open(read_data=mock_file_content))
    create_tables(tables=["mock_table_1", "mock_table_2"])
You cannot access internal variables of a function from the outside.
I strongly suggest refactoring your create_tables function if you already know you want to test a specific part of its algorithm.
You could extract a function like get_sql_from_table, for example, then test that separately and mock it out in your test_create_tables.
Although, since all it does is call the template rendering function from an external library, I am not sure it is even something you should be testing. You should assume/know that they test their functions themselves. The same goes for the Postgres functions.
But the general advice stands: if you want to verify that a specific part of your code does what you expect it to do, factor it out into its own unit/function and test that separately, as in the sketch below.
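A minimal sketch of that refactor (the helper name render_table_sql and the exact template variable are illustrative, not taken from the original code):
import jinja2

def render_table_sql(template_str: str, table: str) -> str:
    """Render the CREATE TABLE statement for a single table."""
    return jinja2.Template(template_str).render(TABLE_NAME=table)

def test_render_table_sql():
    sql = render_table_sql("CREATE TABLE IF NOT EXISTS {{ TABLE_NAME }}", "mock_table_1")
    assert sql == "CREATE TABLE IF NOT EXISTS mock_table_1"
create_tables would then call render_table_sql for each table, and test_create_tables could mock the helper out entirely.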

Segmentation fault extending psycopg2._psycopg.cursor

This small code snippet results in SIGSEGV. (I thought that wouldn't be possible in a garbage-collected language like Python, but I'm used to being an ace at creating new kinds of bugs.) The database exists and the connection works. I was trying to extend the psycopg2._psycopg.cursor class with a method that returns query results in dictionary form. What am I doing wrong?
import psycopg2

class dcursor(psycopg2._psycopg.cursor):
    def __init__(self, parent_cursor):
        self = parent_cursor

    def dictfetchall(self):
        "Returns all rows from a cursor as a dict"
        desc = cursor.description
        return [
            dict(zip([col[0] for col in desc], row))
            for row in cursor.fetchall()
        ]

conn = psycopg2.connect("dbname=dbpgs user=openerp")
cur = dcursor(conn.cursor())
cur.execute('select name_related from hr_employee;')
print cur.dictfetchall()
The cursor signature takes a connection as its first argument. The way you have overridden __init__ makes it take a cursor instead; a segfault follows. Your class is more a wrapper than a cursor. You are also not calling the base class's __init__, and self = parent_cursor doesn't do anything.
The right way to subclass a cursor taking your example is something like:
class dcursor(psycopg2.extensions.cursor):
    def dictfetchall(self):
        "Returns all rows from a cursor as a dict"
        desc = self.description
        return [
            dict(zip([col[0] for col in desc], row))
            for row in self.fetchall()
        ]

conn = psycopg2.connect("dbname=dbpgs user=openerp")
cur = conn.cursor(cursor_factory=dcursor)
cur.execute('select name_related from hr_employee;')
print cur.dictfetchall()
but see also fog's suggestion about using DictCursor.
It is possible, because psycopg2 is a module written in C; it only exposes its API to Python. You can see the code here: http://github.com/psycopg/psycopg2.git
I guess what you've encountered is a bug in Psycopg. That said, the underscore in the _psycopg module name indicates that the classes defined there are not really meant to be subclassed.
Why don't you define dictfetchall() as a standalone helper function? It doesn't access any internal state of the cursor object, so there's no real need to make it a cursor method.
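For example, a standalone version reusing the dict/zip comprehension from the question (assuming the conn object from above):
def dictfetchall(cursor):
    """Return all rows from a cursor as a list of dicts keyed by column name."""
    desc = cursor.description
    return [dict(zip([col[0] for col in desc], row))
            for row in cursor.fetchall()]

cur = conn.cursor()
cur.execute('select name_related from hr_employee;')
print(dictfetchall(cur))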
psycopg2 is written in C, and unless you know what you're doing it is possible to cause a SIGSEGV when calling/extending the module. All common functions and methods carefully check their arguments, both to avoid breakage and for security, but there are areas where, right now, the burden of doing the Right Thing is on the client code. You just hit one of those areas: extending the connection or cursor types.
To do this right you need to do some specific work in your __init__ method, as shown here:
https://github.com/psycopg/psycopg2/blob/master/lib/extras.py#L49
Specifically cursor (and connection) are new-style classes and need to be initialized using super() and the full list of parameters passed to __init__. At minimum:
def __init__(self, *args, **kwargs):
    super(DictCursorBase, self).__init__(*args, **kwargs)
I linked that example specifically because it already does what you need, i.e., it fetches data and makes it available as dicts. Just import psycopg2.extras.DictCursor (to use a dict-like row class) or psycopg2.extras.RealDictCursor (to use a real dict for every row) and you're done.
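For instance, a minimal sketch with RealDictCursor, reusing the connection string and query from the question:
import psycopg2
import psycopg2.extras

conn = psycopg2.connect("dbname=dbpgs user=openerp")
cur = conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
cur.execute("select name_related from hr_employee")
print(cur.fetchall())  # each row is a plain dict keyed by column name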
I had the same error, and the solution was to replace psycopg2 with a different version: I swapped version 2.6 for version 2.4 and the problem was fixed.
You can verify the change by running the Python interpreter and importing psycopg2.

MySQLdb Python - test databases

I would like to run some tests for code that uses a MySQL database. Right now, the code consists of multiple modules that all import a common module mainlib. This module makes the connection:
db = MySQLdb.connect(host='localhost', user='admin', password='admin', db='MyDatabase')
I would like to do tests using a test database instead of the real database.
I was thinking I could close the connection (mainlib.db.close()) and create a new connection in the test script:
db = MySQLdb.connect(host='localhost', user='admin', password='admin', db='TestDatabase')
and name the new cursor with the same global variable. But I am unsure how the imports in the other modules work. In any case, this method doesn't seem to work: I get InterfaceError: (0, '') and no data back from my test database cursor.
Does anyone know how to switch to a test database without modifying the source code?
Python's "global" variables don't have global scope. They are module-scope variables. So the same-named global in different modules isn't the same variable.
I think you might be closing mainlib.db and then setting mytestcode.db to a new database. All the rest of your code of course continues to use mainlib.db, which is now closed.
Try mainlib.db = MySQLdb.connect(...), and the same for the cursor. Directly modifying another module's variables is ugly, but it works as you'd expect.
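For example, the test script might do something like this (a sketch, assuming mainlib exposes db and cursor as module globals, as in the question):
import MySQLdb
import mainlib

mainlib.db.close()
mainlib.db = MySQLdb.connect(host='localhost', user='admin',
                             password='admin', db='TestDatabase')
mainlib.cursor = mainlib.db.cursor()
# every module that reads mainlib.db / mainlib.cursor now sees the test database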
The alternative would be to introduce a way of configuring how mainlib opens the DB. For example, you could have a function like this in mainlib:
import MySQLdb

db = None
dbname = None
cursor = None

def connectdb(name=None):
    """
    Set up the global database connection and cursor, if it isn't already.

    Omit 'name' when the caller doesn't care what database is used,
    and is happy to accept whatever database is already connected or
    connect to a default database.

    Since there cannot be multiple global databases, an exception is thrown
    if 'name' is specified, the global connection already exists, and the
    names don't match.
    """
    global db, dbname, cursor
    if db is None:
        if name is None:
            name = 'MyDatabase'
        db = MySQLdb.connect(host='localhost', user='admin', password='admin', db=name)
        dbname = name
        cursor = db.cursor()
    elif name not in (None, dbname):
        raise Exception('cannot connect to the specified db: the global '
                        'connection already exists and connects to a different db')
Now, in your normal program (not in every module, just the top level) you call mainlib.connectdb() right after importing mainlib. In your test code you call mainlib.connectdb('TestDatabase').
Optionally, you could have connectdb return the cursor and/or the connection object. That way, everything that uses the global db can go through this function.
Personally, I prefer not to use globals for this at all: I would have a function to create a database connection, and I would pass that connection as a parameter into anything that needs it (see the sketch below). However, I realise that tastes vary in this respect.
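A minimal sketch of that dependency-injection style (the load_users function is purely illustrative):
import MySQLdb

def connect(dbname='MyDatabase'):
    return MySQLdb.connect(host='localhost', user='admin',
                           password='admin', db=dbname)

def load_users(db):
    """Example consumer: receives the connection it needs as a parameter."""
    cursor = db.cursor()
    cursor.execute("SELECT * FROM users")
    return cursor.fetchall()

# Production code: load_users(connect())
# Test code:       load_users(connect('TestDatabase'))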
A quick fix would be to use the same cursor but be explicit about the database when selecting a table. For instance, if you have a table T in both databases, you could do
select * from MyDatabase.T    # if you want to use the real table
or
select * from TestDatabase.T  # if you want to use the test table

Creating pooled objects (Python)

I am writing a script that requires interacting with several databases (not concurrently). To facilitate this, I am maintaining the DB-related information (connections etc.) in a dictionary. As an aside, I am using SQLAlchemy for all interaction with the DB; I don't know whether that is relevant to this question or not.
I have a function to set up the pool. It looks somewhat like this:
from sqlalchemy import create_engine, MetaData, Table

def setupPool():
    global pooled_objects
    for name in NAMES:
        engine = create_engine("postgresql+psycopg2://postgres:pwd@localhost/%s" % name)
        metadata = MetaData(engine)
        conn = engine.connect()
        tbl = Table('my_table', metadata, autoload=True)
        info = {'db_connection': conn, 'table': tbl}
        pooled_objects[name] = info
I am not sure whether there are any gotchas in the code above, since I am reusing the same variable names on every iteration, and it's not clear (to me at least) how the underlying references to the resources (connections) are handled. For example, will creating another engine (to a different db) and assigning it to the engine variable cause the previous instance to be 'harvested' by the GC, since no code is using that reference yet while the pool is still being set up?
In short, is the code above OK? If not, why not, and how may I fix it with respect to the issues mentioned above?
The code you have is perfectly good.
Just because you reuse the same variable name does not mean you are overriding (or freeing) another object that was previously assigned to that name. In fact, you can look at the names as temporary labels for your objects.
Now, you store the final objects in the global dictionary pooled_objects, which means that until your program ends or you delete entries from it explicitly, the GC is not going to free them.
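A tiny illustration of that point (my addition, not from the original answer):
pooled_objects = {}
for name in ("db1", "db2"):
    obj = object()              # 'obj' is rebound on every iteration...
    pooled_objects[name] = obj  # ...but the dict keeps each object reachable

print(len(pooled_objects))  # 2 -- neither object was garbage collected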

How do I get a raw, compiled SQL query from a SQLAlchemy expression?

I have a SQLAlchemy query object and want to get the text of the compiled SQL statement, with all its parameters bound (e.g. no %s or other variables waiting to be bound by the statement compiler or MySQLdb dialect engine, etc).
Calling str() on the query reveals something like this:
SELECT id WHERE date_added <= %s AND date_added >= %s ORDER BY count DESC
I've tried looking in query._params but it's an empty dict. I wrote my own compiler using this example of the sqlalchemy.ext.compiler.compiles decorator but even the statement there still has %s where I want data.
I can't quite figure out when my parameters get mixed in to create the query; when examining the query object they're always an empty dictionary (though the query executes fine and the engine prints it out when you turn echo logging on).
I'm starting to get the message that SQLAlchemy doesn't want me to know the underlying query, as that would break the generality of the expression API's interface across all the different DB-APIs. I don't mind if the query gets executed before I find out what it was; I just want to know!
This blogpost by Nicolas Cadou provides an updated answer.
Quoting from the blog post, this is suggested and worked for me:
from sqlalchemy.dialects import postgresql
print str(q.statement.compile(dialect=postgresql.dialect()))
Where q is defined as:
q = DBSession.query(model.Name).distinct(model.Name.value) \
    .order_by(model.Name.value)
Or just any kind of session.query().
The documentation uses literal_binds to print a query q including parameters:
print(q.statement.compile(compile_kwargs={"literal_binds": True}))
The above approach has the caveats that it is only supported for basic types, such as ints and strings, and furthermore if a bindparam() without a pre-set value is used directly, it won't be able to stringify that either.
The documentation also issues this warning:
Never use this technique with string content received from untrusted
input, such as from web forms or other user-input applications.
SQLAlchemy’s facilities to coerce Python values into direct SQL string
values are not secure against untrusted input and do not validate the
type of data being passed. Always use bound parameters when
programmatically invoking non-DDL SQL statements against a relational
database.
This should work with SQLAlchemy >= 0.6:
from sqlalchemy.sql import compiler
from psycopg2.extensions import adapt as sqlescape
# or use the appropriate escape function from your db driver

def compile_query(query):
    dialect = query.session.bind.dialect
    statement = query.statement
    comp = compiler.SQLCompiler(dialect, statement)
    comp.compile()
    enc = dialect.encoding
    params = {}
    for k, v in comp.params.iteritems():
        if isinstance(v, unicode):
            v = v.encode(enc)
        params[k] = sqlescape(v)
    return (comp.string.encode(enc) % params).decode(enc)
Thing is, SQLAlchemy never mixes the data with your query. The query and the data are passed separately to your underlying database driver; the interpolation of data happens in your database.
SQLAlchemy passes the query, as you've seen it in str(myquery), to the database, and the values go in a separate tuple.
You could use some approach where you interpolate the data into the query yourself (as albertov suggested below), but that's not the same thing that SQLAlchemy executes.
For the MySQLdb backend I modified albertov's awesome answer (thanks so much!) a bit. I'm sure they could be merged to check if comp.positional was True but that's slightly beyond the scope of this question.
def compile_query(query):
    from sqlalchemy.sql import compiler
    from MySQLdb.converters import conversions, escape

    dialect = query.session.bind.dialect
    statement = query.statement
    comp = compiler.SQLCompiler(dialect, statement)
    comp.compile()
    enc = dialect.encoding
    params = []
    for k in comp.positiontup:
        v = comp.params[k]
        if isinstance(v, unicode):
            v = v.encode(enc)
        params.append(escape(v, conversions))
    return (comp.string.encode(enc) % tuple(params)).decode(enc)
First let me preface by saying that I assume you're doing this mainly for debugging purposes -- I wouldn't recommend trying to modify the statement outside of the SQLAlchemy fluent API.
Unfortunately there doesn't seem to be a simple way to show the compiled statement with the query parameters included. SQLAlchemy doesn't actually put the parameters into the statement -- they're passed into the database engine as a dictionary. This lets the database-specific library handle things like escaping special characters to avoid SQL injection.
But you can do this in a two-step process reasonably easily. To get the statement, you can do as you've already shown, and just print the query:
>>> print(query)
SELECT field_1, field_2 FROM table WHERE id=%s;
You can get one step closer with query.statement, to see the parameter names. Note :id_1 below vs %s above -- not really a problem in this very simple example, but could be key in a more complicated statement.
>>> print(query.statement)
>>> print(query.statement.compile()) # seems to be equivalent, you can also
# pass in a dialect if you want
SELECT field_1, field_2 FROM table WHERE id=:id_1;
Then, you can get the actual values of the parameters by getting the params property of the compiled statement:
>>> print(query.statement.compile().params)
{u'id_1': 1}
This worked for a MySQL backend at least; I would expect it's also general enough for PostgreSQL without needing to use psycopg2.
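If you just want to eyeball the statement with the values substituted in, a rough debugging-only sketch combining the two steps (my addition; naive string replacement with no proper escaping, so never execute the result):
compiled = query.statement.compile()
sql = str(compiled)
for name, value in compiled.params.items():
    sql = sql.replace(":" + name, repr(value))
print(sql)  # e.g. SELECT field_1, field_2 FROM table WHERE id=1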
For a PostgreSQL backend using psycopg2, you can listen for the do_execute event, then use the cursor, statement, and type-coerced parameters along with Cursor.mogrify() to inline the parameters. You can return True to prevent actual execution of the query.
import sqlalchemy

class QueryDebugger(object):
    def __init__(self, engine, query):
        with engine.connect() as connection:
            try:
                sqlalchemy.event.listen(engine, "do_execute", self.receive_do_execute)
                connection.execute(query)
            finally:
                sqlalchemy.event.remove(engine, "do_execute", self.receive_do_execute)

    def receive_do_execute(self, cursor, statement, parameters, context):
        self.statement = statement
        self.parameters = parameters
        self.query = cursor.mogrify(statement, parameters)
        # Don't actually execute
        return True
Sample usage:
>>> engine = sqlalchemy.create_engine("postgresql://postgres@localhost/test")
>>> metadata = sqlalchemy.MetaData()
>>> users = sqlalchemy.Table('users', metadata, sqlalchemy.Column("_id", sqlalchemy.String, primary_key=True), sqlalchemy.Column("document", sqlalchemy.dialects.postgresql.JSONB))
>>> s = sqlalchemy.select([users.c.document.label("foobar")]).where(users.c.document.contains({"profile": {"iid": "something"}}))
>>> q = QueryDebugger(engine, s)
>>> q.query
'SELECT users.document AS foobar \nFROM users \nWHERE users.document #> \'{"profile": {"iid": "something"}}\''
>>> q.statement
'SELECT users.document AS foobar \nFROM users \nWHERE users.document #> %(document_1)s'
>>> q.parameters
{'document_1': '{"profile": {"iid": "something"}}'}
The following solution uses the SQLAlchemy Expression Language and works with SQLAlchemy 1.1. This solution does not mix the parameters with the query (as requested by the original author), but provides a way of using SQLAlchemy models to generate SQL query strings and parameter dictionaries for different SQL dialects. The example is based on the tutorial http://docs.sqlalchemy.org/en/rel_1_0/core/tutorial.html
Given the class,
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer(), primary_key=True)
    name = Column(String(80), unique=True)
    value = Column(Integer())
we can produce a query statement using the select function.
from sqlalchemy.sql import select
statement = select([foo.name, foo.value]).where(foo.value > 0)
Next, we can compile the statement into a query object.
query = statement.compile()
By default, the statement is compiled using a basic 'named' implementation that is compatible with SQL databases such as SQLite and Oracle. If you need to specify a dialect such as PostgreSQL, you can do
from sqlalchemy.dialects import postgresql
query = statement.compile(dialect=postgresql.dialect())
Or if you want to explicitly specify the dialect as SQLite, you can change the paramstyle from 'qmark' to 'named'.
from sqlalchemy.dialects import sqlite
query = statement.compile(dialect=sqlite.dialect(paramstyle="named"))
From the query object, we can extract the query string and query parameters
query_str = str(query)
query_params = query.params
and finally execute the query.
conn.execute(query_str, query_params)
You can use events from the ConnectionEvents family: after_cursor_execute or before_cursor_execute.
In the SQLAlchemy UsageRecipes by @zzzeek you can find this example:
Profiling
...
@event.listens_for(Engine, "before_cursor_execute")
def before_cursor_execute(conn, cursor, statement,
                          parameters, context, executemany):
    conn.info.setdefault('query_start_time', []).append(time.time())
    logger.debug("Start Query: %s" % statement % parameters)
...
Here you can get access to your statement.
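A self-contained variant of the same idea (my sketch, assuming an in-memory SQLite engine) that logs every statement with its parameters:
import logging

from sqlalchemy import create_engine, event, text
from sqlalchemy.engine import Engine

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("sqltrace")

@event.listens_for(Engine, "before_cursor_execute")
def trace(conn, cursor, statement, parameters, context, executemany):
    logger.debug("Query: %s  parameters: %r", statement, parameters)

engine = create_engine("sqlite://")
with engine.connect() as conn:
    conn.execute(text("SELECT :x"), {"x": 1})  # logs the statement and its parameters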
UPDATE: Came up with yet another case where the previous solution here wasn't properly producing the correct SQL statement. After a bit of diving around in SQLAlchemy, it becomes apparent that you not only need to compile for a particular dialect, you also need to take the compiled query and initialize it for the correct DBAPI connection context. Otherwise, things like type bind processors don't get executed and values like JSON.NULL don't get properly translated.
Note, this makes this solution very particular to Flask + Flask-SQLAlchemy + psycopg2 + PostgreSQL. You may need to translate this solution to your environment by changing the dialect and how you reference your connection. However, I'm pretty confident this produces the exact SQL for all data types.
The result below is a simple method to drop in and occasionally but reliably grab the exact, compiled SQL that would be sent to my PostgreSQL backend by just interrogating the query itself:
import sqlalchemy.dialects.postgresql.psycopg2
from flask import current_app

def query_to_string(query):
    dialect = sqlalchemy.dialects.postgresql.psycopg2.dialect()
    compiled_query = query.statement.compile(dialect=dialect)
    sqlalchemy_connection = current_app.db.session.connection()
    context = dialect.execution_ctx_cls._init_compiled(
        dialect,
        sqlalchemy_connection,
        sqlalchemy_connection.connection,
        compiled_query,
        None
    )
    mogrified_query = sqlalchemy_connection.connection.cursor().mogrify(
        context.statement,
        context.parameters[0]
    )
    return mogrified_query.decode()

query = [ .... some ORM query .... ]
print(f"compiled SQL = {query_to_string(query)}")
I've created this little function, which I import when I want to print the full query; it assumes I'm in the middle of a test where the dialect is already bound:
import re

def print_query(query):
    regex = re.compile(":(?P<name>\w+)")
    params = query.statement.compile().params
    sql = regex.sub("'{\g<name>}'", str(query.statement)).format(**params)
    print("\nPrinting SQLAlchemy query:\n")
    print(sql)
    return sql
I think .statement would possibly do the trick:
http://docs.sqlalchemy.org/en/latest/orm/query.html?highlight=query
>>> local_session.query(sqlalchemy_declarative.SomeTable.text).statement
<sqlalchemy.sql.annotation.AnnotatedSelect at 0x6c75a20; AnnotatedSelect object>
>>> x = local_session.query(sqlalchemy_declarative.SomeTable.text).statement
>>> print(x)
SELECT sometable.text
FROM sometable
If you are using PyMySQL with SQLAlchemy, you can do one trick.
I was in a hurry and lost a lot of time, so I changed the driver to print the current statement with its parameters.
SQLAlchemy intentionally does not support full stringification of literal values. But PyMySQL has a mogrify method that does exactly that; SQLAlchemy just has no hook to call it during ORM inserts/updates (db.add, commit, flush), when it controls the cursor itself.
So, find where the driver is installed:
pip show pymysql
In that folder, find and edit the file cursors.py. In the method
def execute(self, query, args=None):
under the line
query = self.mogrify(query, args)
just add
print(query)
It will work like a charm. Debug, resolve the issue, and then remove the print.
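A less invasive variant of the same trick (my sketch, not from the answer) is to subclass PyMySQL's cursor instead of editing the installed file, and hand it to SQLAlchemy via connect_args:
import pymysql.cursors
from sqlalchemy import create_engine

class LoggingCursor(pymysql.cursors.Cursor):
    def execute(self, query, args=None):
        print(self.mogrify(query, args))  # the fully interpolated SQL string
        return super(LoggingCursor, self).execute(query, args)

engine = create_engine(
    "mysql+pymysql://user:pwd@localhost/test",  # adjust credentials as needed
    connect_args={"cursorclass": LoggingCursor},
)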
