I recently became aware of the SQLAlchemy raw_connection() method, which gives you access to the underlying DBAPI connection for the given relational database. I was hoping I could use this method to load an existing database into memory like so:
import sqlite3
from sqlalchemy import create_engine

engine = create_engine('sqlite://')
connection = engine.raw_connection()
src = sqlite3.connect("Name of existing db")
src.backup(connection)
Unfortunately, the object returned by raw_connection() is a _ConnectionFairy object, and the following error occurs: TypeError: backup() argument 1 must be sqlite3.Connection, not _ConnectionFairy
Does anyone know a workaround for this, or another working way to do what I am trying to do?
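One possible workaround (an untested sketch; the exact attribute name varies by SQLAlchemy version): the _ConnectionFairy is only a pool proxy, so handing backup() the DBAPI connection it wraps should satisfy the type check:

import sqlite3
from sqlalchemy import create_engine

engine = create_engine('sqlite://')
raw = engine.raw_connection()
# newer SQLAlchemy exposes the real sqlite3.Connection as .driver_connection,
# older versions as .connection -- verify which one your version provides
dbapi_conn = getattr(raw, 'driver_connection', None) or raw.connection

src = sqlite3.connect("Name of existing db")
src.backup(dbapi_conn)   # copies the on-disk database into the in-memory one
src.close()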
Related
After patching psycopg2 with the latest aws-xray-python-sdk (v2.2.0), my alembic script started to throw an exception: TypeError: argument 2 must be a connection, cursor or None.
The `TypeError: argument 2 must be a connection, cursor or None` in Psycopg2 thread seems to indicate that setting a proper creator when creating the engine using sqlalchemy.create_engine would solve this problem.
However, I currently use engine_from_config() to create my SQLAlchemy engine.
Is there any way to specify the creator, or otherwise solve this issue, while using engine_from_config()?
Thanks
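If it helps, engine_from_config() forwards extra keyword arguments on to create_engine(), so a creator can, I believe, be passed directly alongside the config dict. A rough sketch; the DSN and the `configuration` name below are placeholders for whatever you already use:

import psycopg2
from sqlalchemy import engine_from_config

def plain_psycopg2_connect():
    # hypothetical DSN; returns an unwrapped psycopg2 connection
    return psycopg2.connect("dbname=mydb user=me password=secret host=localhost")

# `configuration` stands for the dict you already feed to engine_from_config()
engine = engine_from_config(configuration, prefix='sqlalchemy.',
                            creator=plain_psycopg2_connect)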
New to MongoDB, I am trying to optimize bulk writes to the database. I do not understand how to initialize the Bulk() operations, though.
My understanding is that since the inserts will be done on a collection, this is where (or rather "on what") initializeUnorderedBulkOp() should be initialized:
The code below covers all the cases I can think of:
import pymongo
conn = pymongo.MongoClient('mongodb.example.com', 27017)
db = conn.test
coll = db.testing
# tried all of the following
# this one does not make much sense to me as I insert to a collection, added for completeness
bulk = db.initializeUnorderedBulkOp()
# that one seems to be the most reasonable to me
bulk = coll.initializeUnorderedBulkOp()
# that one is from http://blog.mongodb.org/post/84922794768/mongodbs-new-bulk-api
bulk = db.collection('testing').initializeUnorderedBulkOp()
# the staging and execution
# bulk.find({'name': 'hello'}).update({'name': 'hello', 'who': 'world'})
# bulk.execute()
The exception raised is:
TypeError: 'Collection' object is not callable. If you meant to call the 'initializeUnorderedBulkOp' method on a 'Database' object it is failing because no such method exists.
(with 'Database' for the first case and 'Collection' for the last two)
How should I use initializeUnorderedBulkOp() ?
In Python (using the pymongo module), the method name is not initializeUnorderedBulkOp but initialize_unordered_bulk_op.
You have to call it, as you correctly guessed, on the collection (in your case, coll.initialize_unordered_bulk_op() should work).
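For example, on an older pymongo (3.x, where this bulk builder API still exists) the whole round trip looks roughly like this; note that update() in a bulk write needs $-operators:

import pymongo

conn = pymongo.MongoClient('mongodb.example.com', 27017)
coll = conn.test.testing

bulk = coll.initialize_unordered_bulk_op()
bulk.find({'name': 'hello'}).update({'$set': {'who': 'world'}})
result = bulk.execute()   # returns a result document with counters such as nMatched/nModified
print(result)

On current pymongo releases this builder API has been removed in favour of Collection.bulk_write() with UpdateOne/UpdateMany request objects.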
While trying to do the following operation:
for line in blines:
    line.account = get_customer(line.AccountCode)
I am getting an error while trying to assign a value to line.account:
DetachedInstanceError: Parent instance <SunLedgerA at 0x16eda4d0> is not bound to a Session; lazy load operation of attribute 'account' cannot proceed
Am I doing something wrong??
"detached" means you're dealing with an ORM object that is not associated with a Session. The Session is the gateway to the relational database, so anytime you refer to attributes on the mapped object, the ORM will sometimes need to go back to the database to get the current value of that attribute. In general, you should only work with "attached" objects - "detached" is a temporary state used for caching and for moving objects between sessions.
See Quickie Intro to Object States, then probably read the rest of that document too ;).
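One common way to get a detached instance back into the "attached" state is Session.merge(). A minimal sketch against the question's loop, assuming a Session factory is available; merge() returns the attached copy, which is the one to keep working with:

session = Session()
for line in blines:
    line = session.merge(line)                        # re-attach: copies state into this session
    line.account = get_customer(line.AccountCode)     # lazy loads can now proceed
session.commit()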
I had the same problem with Celery. Adding lazy='subquery' to the relationship solved my problem.
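For reference, a minimal sketch of what that change looks like (the schema here is made up; only SunLedgerA is taken from the question):

from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Customer(Base):
    __tablename__ = 'customers'
    id = Column(Integer, primary_key=True)

class SunLedgerA(Base):
    __tablename__ = 'sun_ledger_a'
    id = Column(Integer, primary_key=True)
    account_id = Column(Integer, ForeignKey('customers.id'))
    # lazy='subquery' eagerly loads the related rows with the parent query,
    # so the attribute is already populated if the instance later detaches
    account = relationship('Customer', lazy='subquery')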
I encountered this type of DetachedInstanceError when I prematurely closed the query session, i.e. code was still dealing with the SQLAlchemy model objects AFTER the session was closed. So one clue is to double-check that you do not close the session until you no longer need to interact with the model objects, e.g. lazy-loaded model attributes.
I had the same problem when unit testing.
The solution was to call everything within the "with" context:
with self.app.test_client() as c:
    res = c.post('my_url/test', data=XYZ, content_type='application/json')
Then it worked.
Adding the lazy attribute didn't work for me.
To access an attribute that is mapped from another table, you have to access it while the session is still open.
from contextlib import contextmanager
from sqlalchemy.orm import sessionmaker

@contextmanager
def get_db_session(engine):
    SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
    db = SessionLocal()
    try:
        yield db
    except Exception:
        db.rollback()
        raise
    finally:
        db.close()

with get_db_session(engine) as sess:
    data = sess.query(Groups).all()
    # `group_users` is a relationship to another table
    print([x.group_users for x in data])  # success: the session is still open

print([x.group_users for x in data])  # fail: the session is closed, so the lazy load raises DetachedInstanceError
I use the MySQL COMPRESS() function to insert compressed blob data into the database.
In a previous question I was instructed to use
func.compress
(mysql Compress() with sqlalchemy)
The problem now is that I want to read also the data from the database.
In mysql I would have done
SELECT UNCOMPRESS(text) FROM ...
Probably I should use a getter in the class.
I tried to do something like:
def get_html(self):
    return func.uncompress(self.text)
but this does not work. It returns a sqlalchemy.sql.expression.Function, not the string.
Moreover, I could not find which functions SQLAlchemy's func contains.
Any ideas on how I could write a getter on the object so that I get back the uncompressed data?
func is actually a really fancy factory object for special function objects which are rendered to SQL at query time - you cannot evaluate them in Python since Python would not know how your database implements compress(). That's why it doesn't work.
SQLAlchemy lets you map SQL expressions to mapped class attributes. If you're using the declarative syntax, extend your class like so (it's untested, but I'm confident this is the way to go):
from sqlalchemy import func
from sqlalchemy.orm import column_property

class Demo(...):
    # `data` is the Column that stores the compressed blob
    data_uncompressed = column_property(func.uncompress(data))
Now whenever SQLAlchemy loads an instance from the database, the SELECT query will contain SELECT ..., UNCOMPRESS(demotable.data), ... FROM demotable.
Edit by Giorgos Komninos:
I used the "plain descriptor" approach described at
http://docs.sqlalchemy.org/en/rel_0_7/orm/mapper_config.html#using-a-plain-descriptor
and it worked.
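For comparison, a rough, untested sketch of that "plain descriptor" idea with a made-up model: emit UNCOMPRESS() through the object's current session only when the attribute is read:

from sqlalchemy import Column, Integer, LargeBinary, func
from sqlalchemy.orm import declarative_base, object_session

Base = declarative_base()

class Page(Base):
    __tablename__ = 'pages'
    id = Column(Integer, primary_key=True)
    text = Column(LargeBinary)        # stores the COMPRESS()-ed blob

    @property
    def html(self):
        # runs SELECT UNCOMPRESS(pages.text) WHERE pages.id = :id on access
        return object_session(self).query(func.uncompress(Page.text)) \
                                   .filter(Page.id == self.id) \
                                   .scalar()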
I know how to query on a model now. Suppose there is a Question model:
class Question(Base):
    __tablename__ = "questions"
    id = Column(...)
    user_id = Column(...)
    ...
Now, I can do:
question = Session.query(Question).filter_by(user_id=123).one()
But, now, I have a table (not a model) questions:
questions = Table('questions', Base.metadata,
                  Column('id', ...),
                  Column('user_id', ...),
                  ...)
How to query it as what I do with models?
Session.query(questions).filter_by(user_id=123).one()
This will report an error:
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "E:\Python27\lib\site-packages\sqlalchemy-0.6.3-py2.7.egg\sqlalchemy\orm\query.py", line 851, in filter_by
    for key, value in kwargs.iteritems()]
  File "E:\Python27\lib\site-packages\sqlalchemy-0.6.3-py2.7.egg\sqlalchemy\orm\util.py", line 567, in _entity_descriptor
    desc = entity.class_manager[key]
AttributeError: 'NoneType' object has no attribute 'class_manager'
But:
Session.query(questions).all()
is OK.
Does filter_by only work for models? How can I query tables?
I think it's Session.query(questions).filter(questions.c.user_id==123).one()
You can query tables that were created with the Table constructor using Session.query(Base.metadata.tables['myTable']).all().
This is a bit of a late answer, but what the existing ones are missing is the fact that you can work both with Sessions and with Engines & Connections, and that you do not need a Session if you defined the sqlalchemy.schema.Table directly.
But when should you use a Session, and when a Connection?
The SQLAlchemy documentation has the following to say regarding this:
It's important to note that when using the SQLAlchemy ORM, these objects [Engines and Connections] are not generally accessed; instead, the Session object is used as the interface to the database. However, for applications that are built around direct usage of textual SQL statements and/or SQL expression constructs without involvement by the ORM's higher level management services, the Engine and Connection are king (and queen?)
In short:
If you are using the ORM (so you write Python classes for your data models), you will work with a Session. If you are directly defining Tables, then you don't need to involve the ORM (and its related management services) and can work directly with a Connection.
So, what would working with a Connection look like?
Here is a simple example, adapted from the docs about connections that also answers your question about how to query a table:
from sqlalchemy import create_engine
engine = create_engine('postgresql://user:pw@localhost:5432/mydatabase')

# Assuming questions is a sqlalchemy.schema.Table instance
with engine.begin() as connection:
    query = questions.select().where(questions.c.user_id == 1)
    q1 = connection.execute(query).fetchone()
See also the docs about sqlalchemy.schema.Table.select for more info.