I wrote my own implementation of Pyramid's ISession interface, which stores the session in a database. Everything works nicely, but somehow pyramid_tm chokes on it. As soon as it is activated, it says this:
DetachedInstanceError: Instance <Session at 0x38036d0> is not bound to a Session;
attribute refresh operation cannot proceed
(Don't get confused here: the <Session ...> is the class name of my model; the "... to a Session" most likely refers to SQLAlchemy's Session, which I call DBSession to avoid confusion.)
I have looked through mailing lists and SO, and it seems that whenever someone has this problem, they are either
spawning a new thread, or
manually calling transaction.commit().
I do neither of those things. The special thing here, however, is that my session gets passed around by Pyramid a lot. First I do DBSession.add(session) and then return session. Afterwards I can still work with the session, flash new messages, etc.
However, it seems once the request finishes, I get this exception. Here is the full traceback:
Traceback (most recent call last):
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/waitress-0.8.1-py2.7.egg/waitress/channel.py", line 329, in service
task.service()
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/waitress-0.8.1-py2.7.egg/waitress/task.py", line 173, in service
self.execute()
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/waitress-0.8.1-py2.7.egg/waitress/task.py", line 380, in execute
app_iter = self.channel.server.application(env, start_response)
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/pyramid/router.py", line 251, in __call__
response = self.invoke_subrequest(request, use_tweens=True)
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/pyramid/router.py", line 231, in invoke_subrequest
request._process_response_callbacks(response)
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/pyramid/request.py", line 243, in _process_response_callbacks
callback(self, response)
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/miniblog/miniblog/models.py", line 218, in _set_cookie
print("Setting cookie %s with value %s for session with id %s" % (self._cookie_name, self._cookie, self.id))
File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/attributes.py", line 168, in __get__
return self.impl.get(instance_state(instance),dict_)
File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/attributes.py", line 451, in get
value = callable_(passive)
File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/state.py", line 285, in __call__
self.manager.deferred_scalar_loader(self, toload)
File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/mapper.py", line 1668, in _load_scalar_attributes
(state_str(state)))
DetachedInstanceError: Instance <Session at 0x7f4a1c04e710> is not bound to a Session; attribute refresh operation cannot proceed
For this case, I deactivated the debug toolbar; when it is activated, the error gets thrown from there instead. It seems the problem is accessing the object at any point after the request has finished.
I realize I could try to detach it somehow, but that doesn't seem like the right way, because the object then couldn't be modified without explicitly adding it to a session again.
So since I don't spawn new threads and I don't explicitly call commit, I guess the transaction is being committed before the request has fully finished, and the object is accessed again afterwards. How do I handle this problem?
I believe what you're seeing here is a quirk of the fact that response callbacks and finished callbacks are executed after tweens. They are positioned between your app's egress and the middleware. pyramid_tm, being a tween, commits the transaction before your response callback executes, causing the error upon later access.
Getting the order of these things right is difficult. One possibility off the top of my head is to register your own tween under pyramid_tm that performs a flush on the session, grabs the id, and sets the cookie on the response, as sketched below.
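A rough sketch of that idea (not a drop-in solution: the DBSession import path, the cookie name, and the session's id attribute are assumptions about your app):

from myapp.models import DBSession  # hypothetical import path for your scoped session

def session_cookie_tween_factory(handler, registry):
    def session_cookie_tween(request):
        response = handler(request)
        # still inside pyramid_tm's transaction here, so the flush is safe
        DBSession.flush()  # assigns the session row its id
        response.set_cookie('session_id', str(request.session.id))
        return response
    return session_cookie_tween

# In your configuration, place it under pyramid_tm so it runs inside the transaction:
# config.add_tween('myapp.tweens.session_cookie_tween_factory',
#                  under='pyramid_tm.tm_tween_factory')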
I sympathize with this issue, as anything that happens after the transaction has been committed is a real gray area in Pyramid where it's not always clear that the session should not be touched. I'll make a note to continue thinking about how to improve this workflow for Pyramid in the future.
I first tried registering a tween, and it sort of worked, but the data did not get saved. I then stumbled upon the SQLAlchemy event system and found the after_commit event. Using this, I could set up detaching the session object after the commit has been done by pyramid_tm. I think this provides full flexibility and doesn't impose any requirements on ordering.
My final solution:
from sqlalchemy.event import listen
from sqlalchemy.orm import Session as SASession
import logging

log = logging.getLogger(__name__)

def detach(db_session):
    from pyramid.threadlocal import get_current_request
    request = get_current_request()
    log.debug("Expunging (detaching) session for DBSession")
    db_session.expunge(request.session)

listen(SASession, 'after_commit', detach)
The only drawback: it requires calling get_current_request(), which is discouraged. However, I saw no other way of getting hold of the session, since the event is called by SQLAlchemy. I thought about some ugly wrapping, but I think that would have been way too risky and unstable.
Related
I am developing a script to create a record in an Odoo model. I need to run one of this model's methods on specific records. In my case, the method I need to run on a specific record doesn't take any parameters (just self). I want to know how I can run the method on a specific record of the model through an XML-RPC call from the client to the Odoo server. Below is how I tried to call the method and pass the id of a specific record.
xmlrpc_object.execute('test_db', user, 'admin', 'test.test', 'action_check_constraint', [record_id])
action_check_constraint checks some constraints on each record of the model and, if all the constraints pass, changes the state of the record or raises validation errors. But the above method call over XML-RPC raises the error below:
xmlrpc.client.Fault: <Fault cannot marshal None unless allow_none is enabled: 'Traceback (most recent call last):\n File "/home/ibrahim/workspace/odoo13/odoo/odoo/addons/base/controllers/rpc.py", line 60, in xmlrpc_1\n response = self._xmlrpc(service)\n File "/home/ibrahim/workspace/odoo13/odoo/odoo/addons/base/controllers/rpc.py", line 50, in _xmlrpc\n return dumps((result,), methodresponse=1, allow_none=False)\n File "/usr/local/lib/python3.8/xmlrpc/client.py", line 968, in dumps\n data = m.dumps(params)\n File "/usr/local/lib/python3.8/xmlrpc/client.py", line 501, in dumps\n dump(v, write)\n File "/usr/local/lib/python3.8/xmlrpc/client.py", line 523, in __dump\n f(self, value, write)\n File "/usr/local/lib/python3.8/xmlrpc/client.py", line 527, in dump_nil\n raise TypeError("cannot marshal None unless allow_none is enabled")\nTypeError: cannot marshal None unless allow_none is enabled\n'>
> /home/ibrahim/workspace/scripts/automate/automate_record_creation.py(328)create_record()
Can anyone help with the correct and best way of calling a model's method (with no parameter except self) on a specific record through an XML-RPC client to the Odoo server?
That error is raised because the xmlrpc library does not allow None as a return value by default. You can change that behaviour by simply allowing it.
The following line is from Odoo's external API documentation, extended to allow None as a return value:
models = xmlrpc.client.ServerProxy(
    '{}/xmlrpc/2/object'.format(url), allow_none=True)
For more information about xmlrpc.client.ServerProxy, see the Python documentation.
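A complete call might then look roughly like this (a sketch based on Odoo's standard external API; the URL, credentials, and record id are placeholders):

import xmlrpc.client

url = 'http://localhost:8069'
db, username, password = 'test_db', 'admin', 'admin'
record_id = 42  # id of the record to act on

common = xmlrpc.client.ServerProxy('{}/xmlrpc/2/common'.format(url))
uid = common.authenticate(db, username, password, {})

# allow_none=True lets the proxy marshal a None return value
models = xmlrpc.client.ServerProxy(
    '{}/xmlrpc/2/object'.format(url), allow_none=True)

# call the parameterless method on one specific record by passing a list of ids
models.execute_kw(db, uid, password,
                  'test.test', 'action_check_constraint', [[record_id]])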
You can get the error if action_check_constraint does not return anything (by default None).
Try to run the server with the log-level option set to debug_rpc_answer to get more details.
After a lot of searching and trying, I first used this fix to work around the error, but I don't think that is best practice. I then found OdooRPC, which does the same job but handles this case, so there is no such error for model methods that return None. Using OdooRPC solved my problem, and I was able to do what I needed to do over RPC in Odoo.
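For reference, the equivalent call with OdooRPC looks roughly like this (the host, port, credentials, and record id are placeholders):

import odoorpc

odoo = odoorpc.ODOO('localhost', port=8069)
odoo.login('test_db', 'admin', 'admin')

record = odoo.env['test.test'].browse(42)  # 42 stands in for the record id
record.action_check_constraint()  # a None return value is handled fine here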
The Situation: Simple Class with Basic Attributes
In an application I'm working on, instances of a particular class are persisted at the end of their lifecycle, and while they are not subsequently modified, their attributes may need to be read. For example, the end_time of the instance, or its ordinal position relative to other instances of the same class (the first instance initialized gets value 1, the next gets value 2, etc.).
import time

class Foo(object):
    def __init__(self, position):
        self.start_time = time.time()
        self.end_time = None
        self.position = position
        # ...

    def finishFoo(self):
        self.end_time = time.time()
        self.duration = self.end_time - self.start_time
        # ...
The Goal: Persist an Instance using SQLAlchemy
Following what I believe to be a best practice - using a scoped SQLAlchemy Session, as suggested here, by way of contextlib.contextmanager - I save the instance in a newly-created Session which immediately commits. The very next line references the newly persistent instance by mentioning it in a log record, which throws a DetachedInstanceError because the attribute it's referencing expired when the Session committed.
from contextlib import contextmanager

class Database(object):
    # ...
    @contextmanager
    def scopedSession(self):
        session = self.sessionmaker()
        try:
            yield session
            session.commit()
        except:
            session.rollback()
            logger.warn("blah blah blah...")
        finally:
            session.close()
    # ...

    def saveMyFoo(self, foo):
        with self.scopedSession() as sql_session:
            sql_session.add(foo)
        logger.info("Foo number {0} finished at {1} has been saved."
                    "".format(foo.position, foo.end_time))
        ## Here the DetachedInstanceError is raised
Two Known Possible Solutions: No Expiring or No Scope
I know I can set the expire_on_commit flag to False to circumvent this issue, but I'm concerned this is a questionable practice -- automatic expiration exists for a reason, and I'm hesitant to arbitrarily lump all ORM-tied classes into a non-expiry state without sufficient reason and understanding behind it. Alternatively, I can forget about scoping the Session and just leave the transaction pending until I explicitly commit at a (much) later time.
So my question boils down to this:
Is a scoped/context-managed Session being used appropriately in the case I described?
Is there an alternative way to reference expired attributes that is a better/more preferred approach? (e.g. using a property to wrap the steps of catching expiration/detached exceptions or to create & update a non-ORM-linked attribute that "mirrors" the ORM-linked expired attribute)
Am I misunderstanding or misusing the SQLAlchemy Session and ORM? It seems contradictory to me to use a contextmanager approach when that precludes the ability to subsequently reference any of the persisted attributes, even for a task as simple and broadly applicable as logging.
The Actual Exception Traceback
The example above is simplified to focus on the question at hand, but should it be useful, here's the actual exact traceback produced. The issue arises when str.format() is run in the logger.debug() call, which tries to execute the Set instance's __repr__() method.
Unhandled Error
Traceback (most recent call last):
File "/opt/zenith/env/local/lib/python2.7/site-packages/twisted/python/log.py", line 73, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/opt/zenith/env/local/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/opt/zenith/env/local/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
return func(*args,**kw)
File "/opt/zenith/env/local/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 614, in _doReadOrWrite
why = selectable.doRead()
--- <exception caught here> ---
File "/opt/zenith/env/local/lib/python2.7/site-packages/twisted/internet/udp.py", line 248, in doRead
self.protocol.datagramReceived(data, addr)
File "/opt/zenith/operations/network.py", line 311, in datagramReceived
self.reactFunction(datagram, (host, port))
File "/opt/zenith/operations/schema_sqlite.py", line 309, in writeDatapoint
logger.debug("Data written: {0}".format(dataz))
File "/opt/zenith/operations/model.py", line 1770, in __repr__
repr_info = "Set: {0}, User: {1}, Reps: {2}".format(self.setNumber, self.user, self.repCount)
File "/opt/zenith/env/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 239, in __get__
return self.impl.get(instance_state(instance), dict_)
File "/opt/zenith/env/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 589, in get
value = callable_(state, passive)
File "/opt/zenith/env/local/lib/python2.7/site-packages/sqlalchemy/orm/state.py", line 424, in __call__
self.manager.deferred_scalar_loader(self, toload)
File "/opt/zenith/env/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 563, in load_scalar_attributes
(state_str(state)))
sqlalchemy.orm.exc.DetachedInstanceError: Instance <Set at 0x1c96b90> is not bound to a Session; attribute refresh operation cannot proceed
1.
Most likely, yes. It's being used correctly insofar as saving data to the database goes. However, because your transaction only spans the update, you may run into race conditions when updating the same row. Depending on the application, this can be okay.
2.
Not expiring attributes is the right way to do it. The reason the expiration is there by default is because it ensures that even naive code works correctly. If you are careful, it shouldn't be a problem.
3.
It's important to separate the concept of the transaction from the concept of the session. The contextmanager does two things: it maintains the session as well as the transaction. The lifecycle of each ORM instance is limited to the span of each transaction. This is so you can assume the state of the object is the same as the state of the corresponding row in the database. This is why the framework expires attributes when you commit, because it can no longer guarantee the state of the values after the transaction commits. Hence, you can only access the instance's attributes while a transaction is active.
After you commit, any subsequent attribute you access will result in a new transaction being started so that the ORM can once again guarantee the state of the values in the database.
But why do you get an error? This is because your session is gone, so the ORM has no way of starting a transaction. If you do a session.commit() in the middle of your context manager block, you'll notice a new transaction being started if you access one of the attributes.
Well, what if I want to just access the previously-fetched values? Then, you can ask the framework not to expire those attributes.
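As a minimal, self-contained sketch of that option (with a stripped-down stand-in for the question's Foo class):

import time
from sqlalchemy import create_engine, Column, Integer, Float
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    position = Column(Integer)
    end_time = Column(Float)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

# expire_on_commit=False keeps already-loaded attribute values usable after commit
Session = sessionmaker(bind=engine, expire_on_commit=False)

session = Session()
foo = Foo(position=1, end_time=time.time())
session.add(foo)
session.commit()
session.close()

# No DetachedInstanceError: the attributes were not expired at commit time.
print("Foo number {0} finished at {1}".format(foo.position, foo.end_time))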
Saltstack + docker-py AttributeError: 'RecentlyUsedContainer' object has no attribute 'lock'
I have been digging into this issue to no avail. I'm trying to use SaltStack to manage my docker images/containers but ran into this problem.
Initially I was using the salt state docker.running, but that presented as the command does not exist. When I changed the state to docker.pulled, I got the traceback I posted over at that GitHub issue:
ID: scheduler
Function: docker.pulled
Result: False
Comment: An exception occurred in this state: Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1563, in call
**cdata['kwargs'])
File "/usr/lib/python2.7/dist-packages/salt/states/dockerio.py", line 271, in pulled
returned = pull(name, tag=tag, insecure_registry=insecure_registry)
File "/usr/lib/python2.7/dist-packages/salt/modules/dockerio.py", line 1599, in pull
client = _get_client()
File "/usr/lib/python2.7/dist-packages/salt/modules/dockerio.py", line 277, in _get_client
client._version = client.version()['ApiVersion']
File "/usr/local/lib/python2.7/dist-packages/docker/client.py", line 837, in version
return self._result(self._get(url), json=True)
File "/usr/local/lib/python2.7/dist-packages/docker/clientbase.py", line 86, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 310, in get
#: Stream response content default.
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 279, in request
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 374, in send
url=request.url,
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 155, in send
**proxy_kwargs)
File "/usr/local/lib/python2.7/dist-packages/docker/unixconn/unixconn.py", line 74, in get_connection
with self.pools.lock:
AttributeError: 'RecentlyUsedContainer' object has no attribute 'lock'
Started: 09:33:42.873628
Duration: 22.115 ms
After searching Google a bit more and coming up with nothing, I went ahead and started reading the source.
After reading unixconn.py and realizing that RecentlyUsedContainer was coming from urllib3, I went and tracked down the source for that and discovered that there was a _lock attribute that was changed to lock a while ago. That seemed strange.
I looked closer at the imports and realized that unixconn.py was attempting to use requests' built-in urllib3 and then falling back to the standalone urllib3. So I checked out the requests urllib3 and found that it did indeed have the _lock -> lock change, but it was in a newer version of requests than mine. So I upgraded requests and tried again. Still no dice: same AttributeError.
Now things start to get weird.
In order to get information back to my salt master, I started mucking with the docker-py and urllib3 code on my salt minion. At first I raised exceptions containing urllib3.__file__ to make sure I was using the right file. But occasionally the file name it returned pointed to a file and a folder that did not exist. Usually it was displaying /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/_collections.pyc, but when I deleted that file, thinking that a cached .pyc might be causing the problem, it would still report that as the __file__, even though it didn't exist.
Then I discovered inspect.getfile. And I got the same bizarre behavior: I could delete the .pyc file and yet inspect.getfile(self.pools) would still return the non-existent file.
To make life even better, I've added
raise Exception('Pining for the Fjords')
to
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/_collections.py
at the end of RecentlyUsedContainer.__init__. Yet that exception never gets raised.
And I have just confirmed that something is in fact lying to me, because despite changing get_connection in unixconn.py to
def get_connection(self, url, proxies=None):
    import inspect
    r = RecentlyUsedContainer(10)
    raise Exception(inspect.getfile(r.__class__) + '\n' + r.__doc__)
which reports /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/_collections.pyc, when I go and edit that .pyc and modify RecentlyUsedContainer's docstring, I still get the original docstring.
And finally, when I edit /usr/lib/python2.7/dist-packages/urllib3/_collections.pyc and change its docstring (or the same path but _collections.py instead)...
I still get the same docstring!
Why is the wrong code getting executed here, and how can I find out where it is so I can fix the problem?
So I finally figured out the problem:
It did have something to do with salt. For some reason, the way the salt minion imported the docker-py library put some sort of partial hold on the imports. I suspect that salt was re-importing just the docker-py library specifically, which is why changes I made to those files would show up.
However, since the Python import mechanism checks for already-imported modules first, the urllib3 code was never re-imported.
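That caching is easy to see (a small sketch, assuming urllib3 is installed):

import sys
import urllib3  # the first import executes the module file and caches it

# A later `import urllib3` anywhere in the process just returns the cached
# object from sys.modules, so edits to the file on disk stay invisible until
# the process restarts (or the module is explicitly reloaded).
print(sys.modules['urllib3'].__file__)
print(sys.modules['urllib3'] is urllib3)  # True: the very same module object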
Ultimately all that is required is to restart the salt minion:
salt 'my-minion' cmd.run "nohup /bin/sh -c 'sleep 10 && salt-call --local service.restart salt-minion'"
I am updating data on a Neo4j server using Python (2.7.6) and Py2Neo (1.6.4). My load function is:
from py2neo import neo4j, node, rel, cypher

session = cypher.Session('http://my_neo4j_server.com.mine:7474')

def load_data():
    tx = session.create_transaction()
    for row in dataframe.iterrows():  # dataframe is a pandas DataFrame
        name = row[1].name
        id = row[1].id
        merge_query = "MERGE (a:label {name:'%s', name_var:'%s'}) " % (id, name)
        tx.append(merge_query)
    tx.commit()
When I execute this from Spyder in Windows it works great. All the data from the dataframe is committed to Neo4j and visible in the graph. However, when I run this from a Linux server (different from the Neo4j server) I get the following error at tx.commit(). Note that I have the same versions of Python and py2neo.
INFO:py2neo.packages.httpstream.http:>>> POST http://neo4j1.qs:7474/db/data/transaction/commit [1360120]
INFO:py2neo.packages.httpstream.http:<<< 200 OK [chunked]
ERROR:__main__:some part of process failed
Traceback (most recent call last):
File "my_file.py", line 132, in load_data
tx.commit()
File "/usr/local/lib/python2.7/site-packages/py2neo/cypher.py", line 242, in commit
return self._post(self._commit or self._begin_commit)
File "/usr/local/lib/python2.7/site-packages/py2neo/cypher.py", line 208, in _post
j = rs.json
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/httpstream/http.py", line 563, in json
return json.loads(self.read().decode(self.encoding))
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/httpstream/http.py", line 634, in read
data = self._response.read()
File "/usr/local/lib/python2.7/httplib.py", line 543, in read
return self._read_chunked(amt)
File "/usr/local/lib/python2.7/httplib.py", line 597, in _read_chunked
raise IncompleteRead(''.join(value))
IncompleteRead: IncompleteRead(128135 bytes read)
This post (IncompleteRead using httplib) suggests that this is an httplib error. I am not sure how to handle it since I am not calling httplib directly.
Any suggestions for getting this load to work on Linux or what the IncompleteRead error message means?
UPDATE:
The IncompleteRead error is being caused by a Neo4j error being returned. The line returned in _read_chunked that is causing the error is:
pe}"}]}],"errors":[{"code":"Neo.TransientError.Network.UnknownFailure"
Neo4j docs say this is an unknown network error.
Although I can't say for sure, this implies some kind of local network issue between client and server rather than a bug within the library. Py2neo wraps httplib (which is pretty solid itself) and, from the stack trace, it looks as though the client is expecting more chunks from a chunked response.
To diagnose further, you could make some curl calls from your Linux application server to your database server and see what succeeds and what doesn't. If that works, try writing a quick and dirty python script to make the same calls with httplib directly.
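A quick-and-dirty version of that httplib check could look like this (a sketch only; the host, port, and Cypher statement are placeholders taken from the question):

import httplib  # Python 2; use http.client on Python 3
import json

conn = httplib.HTTPConnection('my_neo4j_server.com.mine', 7474)
payload = json.dumps({
    "statements": [{"statement": "MERGE (a:label {name:'x', name_var:'y'})"}]
})
conn.request('POST', '/db/data/transaction/commit', payload,
             {'Content-Type': 'application/json'})
response = conn.getresponse()
print("{0} {1}".format(response.status, response.reason))
print(response.read())  # a truncated body here points at the server/network rather than py2neo
conn.close()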
UPDATE 1: Given the update above and the fact that the server streams its responses, I'm thinking that the chunk size might represent the intended payload but the error cuts the response short. Recreating the issue with curl certainly seems like the best next step to help determine whether it is a fault in the driver, the server or something else.
UPDATE 2: Looking again this morning, I notice that you're using Python substitution for the properties within the MERGE statement. As good practice, you should use parameter substitution at the Cypher level:
merge_query = "MERGE (a:label {name:{name}, name_var:{name_var}})"
merge_params = {"name": id, "name_var": name}
tx.append(merge_query, merge_params)
Here is the query in Pymongo
import mong #just my library for initializing
collection_1 = mong.init(collect="col_1")
collection_2 = mong.init(collect="col_2")
for name in collection_2.find({"field1": {"$exists": 0}}):
    try:
        to_query = name['something']
        actual_id = collection_1.find_one({"something": to_query})['_id']
        crap_id = name['_id']
        collection_2.update({"_id": crap_id}, {"$set": {"new_name": actual_id}}, upsert=True)
    except:
        open('couldn_find_id.txt', 'a').write(str(name))
All this is doing is taking a field from one collection, finding the id for that field, and updating a field in another collection with it. It works for about 1000-5000 iterations, but then it periodically fails with the following and I have to restart the script.
> Traceback (most recent call last):
File "my_query.py", line 6, in <module>
for name in collection_2.find({"field1":{"$exists":0}}):
File "/home/user/python_mods/pymongo/pymongo/cursor.py", line 814, in next
if len(self.__data) or self._refresh():
File "/home/user/python_mods/pymongo/pymongo/cursor.py", line 776, in _refresh
limit, self.__id))
File "/home/user/python_mods/pymongo/pymongo/cursor.py", line 720, in __send_message
self.__uuid_subtype)
File "/home/user/python_mods/pymongo/pymongo/helpers.py", line 98, in _unpack_response
cursor_id)
pymongo.errors.OperationFailure: cursor id '7578200897189065658' not valid at server
^C
bye
Does anyone have any idea what this failure is, and how I can turn it into an exception to continue my script even at this failure?
Thanks
The reason for the problem is described in PyMongo's FAQ:
Cursors in MongoDB can timeout on the server if they’ve been open for
a long time without any operations being performed on them. This can
lead to an OperationFailure exception being raised when attempting to
iterate the cursor.
This is because of the timeout argument of collection.find():
timeout (optional): if True (the default), any returned cursor is
closed by the server after 10 minutes of inactivity. If set to False,
the returned cursor will never time out on the server. Care should be
taken to ensure that cursors with timeout turned off are properly
closed.
Passing timeout=False to the find should fix the problem:
for name in collection_2.find({"field1":{"$exists":0}}, timeout=False):
But, be sure you are closing the cursor properly.
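For example (a sketch using the names from the question; the per-document work is elided):

cursor = collection_2.find({"field1": {"$exists": 0}}, timeout=False)
try:
    for name in cursor:
        pass  # ... the per-document lookup and update from the question goes here
finally:
    cursor.close()  # explicit cleanup, since no-timeout cursors never expire on the server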
Also see:
mongodb cursor id not valid error