I'm expectedly getting a CypherExecutionException. I would like to catch it but I can't seem to find the import for it.
Where is it?
How do I find it myself next time?
Depending on which version of py2neo you're using, and which Cypher endpoint (legacy or transactional), this may be one of the errors generated dynamically from the server response. Newer functionality (the transactional endpoint) no longer does this and instead holds hard-coded definitions for all exceptions, for exactly this reason. That wasn't possible for the legacy endpoint, where the full list of possible exceptions was undocumented.
You should, however, be able to catch py2neo.error.GraphError instead, which is the base class from which these dynamic errors inherit. You can then inspect the attributes of that error for more specific checking.
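The pattern looks like this. It is a pure-Python sketch of the mechanism (py2neo is not imported here; the GraphError stand-in mimics how py2neo builds exception subclasses at runtime), so the same try/except shape applies when you `from py2neo.error import GraphError` in real code:

```python
# Stand-in for py2neo's base class; in real code: from py2neo.error import GraphError
class GraphError(Exception):
    pass

# py2neo's legacy endpoint builds subclasses like this dynamically,
# named after the exception reported by the server:
CypherExecutionException = type("CypherExecutionException", (GraphError,), {})

try:
    raise CypherExecutionException("node not found")
except GraphError as error:
    # the concrete class name tells you which server-side error occurred
    name = error.__class__.__name__

print(name)  # -> CypherExecutionException
```

Because the subclass only exists once the server has reported it, catching the base class and checking `__class__.__name__` is the reliable way to branch on the specific error.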
I would like, given a python module, to monkey patch all functions, classes and attributes it defines. Simply put, I would like to log every interaction a script I do not directly control has with a module I do not directly control. I'm looking for an elegant solution that will not require prior knowledge of either the module or the code using it.
I found several high-level tools that help with wrapping, decorating, patching, etc., and I've gone over the code of some of them, but I cannot find an elegant way to create a proxy of any given module and wrap it automatically, as seamlessly as possible apart from appending logic to every interaction (recording input arguments and return values, for example).
In case someone else is looking for a more complete proxy implementation: although there are several Python proxy solutions similar to what the OP is looking for, I could not find one that also proxies classes and arbitrary class objects, as well as automatically proxying function return values and arguments, which is what I needed.
I've got some code written for that purpose as part of a full proxying/logging Python execution tool, and I may turn it into a separate library in the future. If anyone's interested, you can find the core of the code in a pull request. Drop me a line if you'd like this as a standalone library.
My code automatically returns wrapper/proxy objects for any proxied object's attributes, functions, and classes. The purpose is to log and replay some of the code, so I've got the equivalent "replay" code and some logic to store all proxied objects to a JSON file.
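To give a flavor of the core idea, here is a deliberately minimal sketch (a hypothetical `proxy_module` helper, not the code from the pull request): it logs every function call made through the proxy. The fuller solution described above would additionally recurse into classes, return values, and arguments.

```python
import functools
import math  # used below as the module being proxied

def proxy_module(module, log):
    """Return a proxy for `module` that records every function call.

    Minimal sketch: only top-level callables are wrapped; classes and
    return values are not recursively proxied here.
    """
    class ModuleProxy:
        def __getattr__(self, name):
            attr = getattr(module, name)
            if callable(attr):
                @functools.wraps(attr)
                def wrapper(*args, **kwargs):
                    result = attr(*args, **kwargs)
                    # record the interaction: name, inputs, output
                    log.append((name, args, kwargs, result))
                    return result
                return wrapper
            return attr
    return ModuleProxy()

calls = []
m = proxy_module(math, calls)
m.sqrt(9)
print(calls)  # -> [('sqrt', (9,), {}, 3.0)]
```

The `__getattr__` hook is what makes this work without prior knowledge of the module: attributes are resolved and wrapped lazily, at the moment the client code touches them.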
The "Python Database API Specification v2.0" document (PEP 249) specifies a DatabaseError base class for exceptions that a database driver would throw related to, well, database errors.
But as far as I can tell, the DB-API has no abstract implementation within Python. Is it really the case that each driver has its own separate set of exceptions here, simply with the same names?
Looking at the sqlite3 module source code, it appears this might indeed be the case! As far as I can tell, the C code [initializing pysqlite_DatabaseError](https://github.com/python/cpython/blob/c30098c8c6014f3340a369a31df9c74bdbacc269/Modules/_sqlite/module.c#L378) is as follows:
if (!(pysqlite_DatabaseError = PyErr_NewException(MODULE_NAME ".DatabaseError",
                                                  pysqlite_Error, NULL))) {
    goto error;
}
…seemingly inheriting from pysqlite_Error, which in turn inherits from PyExc_Exception which I assume corresponds to core Python's own Exception.
Am I missing something here? This seems like an unfortunate design for errors which may bubble out of an API that uses an arbitrary database driver internally.
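The inheritance chain described above can be checked directly at the REPL against the stdlib sqlite3 driver; the MRO confirms that its DatabaseError hangs off the plain built-in Exception rather than any shared DB-API base class:

```python
import sqlite3

# sqlite3's DatabaseError chains to the built-in Exception, not to a
# common base class shared with other DB-API drivers:
print(sqlite3.DatabaseError.__mro__)
# -> (<class 'sqlite3.DatabaseError'>, <class 'sqlite3.Error'>,
#     <class 'Exception'>, <class 'BaseException'>, <class 'object'>)
```

So code that wraps an arbitrary driver cannot write `except DatabaseError:` against a shared class; it has to catch the exception type exported by the specific driver module it imported.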
I am trying to set custom causes for Jenkins builds using the jenkinsapi library.
The library's Job class has an invoke() method for triggering new builds, which accepts a cause parameter.
# this is a jenkinsapi.Job method
def invoke(self, securitytoken=None, block=False,
           build_params=None, cause=None, files=None, delay=5):
The cause param is dealt with like this:
if cause:
build_params['cause'] = cause
I am trying to find out what format to use when defining a custom cause. To do this, I first extracted the cause of an existing build using the jenkinsapi.Build method get_causes().
This yields a list of dictionaries as expected (only 1 cause), example:
[{'shortDescription': 'description of cause',
'userId': 'userid',
'userName': 'username'}]
With this knowledge, I tried invoking builds while specifying cause as a list of dictionaries in the same format, but this didn't work: upon collecting the causes from the new build, only the normal default cause was there.
So, my question is what do I need to do to create a custom cause?
I've found two ways to add a custom cause, but only one works through the Jenkins API. I'm still hoping there's an alternative solution.
To get the custom cause setting to work, I had to enable this setting in each Jenkins job:
After enabling that setting, I was able to trigger the Job with a custom cause which would show up in the console.
job.invoke(securitytoken="asecuretoken", cause="A custom cause.")
The main trouble with this route is that it doesn't provide as much information as I've seen from custom plugins. Writing a plugin is the alternative I've found to using the cause parameter this way, but it requires more work to implement.
A good example that customizes build messages based on a REST request is the GitLab Jenkins plugin.
The documentation for GAE's ndb.put_multi is severely lacking. NDB Entities and Keys - Python — Google Cloud Platform shows that it returns a list of keys (list_of_keys = ndb.put_multi(list_of_entities)), but it says nothing about failures. NDB Functions doesn't provide much more information.
Spelunking through the code (below) shows me that, at least for now, put_multi just aggregates the Future.get_result()s returned from the async method, which itself delegates to the entities' put code. Now, the docs for the NDB Future Class indicate that a result will be returned or else an exception will be raised. I've been told, however, that the result will be None if a particular put failed (I can't find any authoritative documentation to that effect, but if it's anything like db.get then that would make sense).
So all of this boils down to some questions I can't find the answers to:
Clearly, the return value is a list - is it a list with some elements possibly None? Or are exceptions used instead?
When there is an error, what should be re-put? Can all entities be re-put (idempotent), or only those whose return value are None (if that's even how errors are communicated)?
How common are errors (One answer: 1/3000)? Do they show up in logs (because I haven't seen any)? Is there a way to reliably simulate an error for testing?
Usage of the function in an open source library implies that the operation is idempotent, but that's about it. (Other usages don't even bother checking the return value or catching exceptions.)
Handling Datastore Errors makes no mention of anything but exceptions.
I agree with your reading of the code: put_multi() reacts to an error the same way put_async().get_result() does. If put() would raise an exception, put_multi() will also, and will be unhelpful about which of the multiple calls failed. I'm not aware of a circumstance where put_multi() would return None for some entries in the key list.
You can re-put entities that have been put, assuming no other user has updated those entities since the last put attempt. Entities that are created with system-generated IDs have their in-memory keys updated, so re-putting these would overwrite the existing entities and not create new ones. I wouldn't call it idempotent exactly because the retry would overwrite any updates to those entities made by other processes.
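Since failures surface as exceptions and re-putting simply overwrites (with the concurrent-writer caveat above), the handling reduces to a plain retry loop over the whole batch. The sketch below is pure Python so it runs standalone: `flaky_put` and `TimeoutError` are stand-ins; in real code the operation would be `ndb.put_multi(entities)` and the exception the datastore error you want to retry on:

```python
def put_with_retry(do_put, retryable_error, attempts=3):
    """Retry the whole batch; safe only if overwriting is acceptable."""
    for attempt in range(attempts):
        try:
            return do_put()
        except retryable_error:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller

# simulate a put that times out twice, then succeeds
state = {"calls": 0}
def flaky_put():
    state["calls"] += 1
    if state["calls"] < 3:
        raise TimeoutError("simulated datastore timeout")
    return ["key1", "key2"]

print(put_with_retry(flaky_put, TimeoutError))  # -> ['key1', 'key2']
```

Note that the whole entity list is re-put on each attempt, which matches the answer's point: the retry is an overwrite, not a true idempotent replay.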
Of course, if you need more control over the result, you can perform this update in a transaction, though all entities would need to be in the same entity group for this to work with a primitive transaction. (Cross-group transactions support up to five distinct entity groups.) A failure during a transaction would ensure that none of the entities are created or updated if any of the attempts fail.
I don't know a general error rate for update failures. Such failures are most likely to include contention errors or "hot tablets" (too many updates to nearby records in too short a time, resulting in a timeout), which would depend on app behavior. All such errors are reported by the API as exceptions.
The easiest way to test error handling call paths would be to wrap the Model class and override the methods with a test mode behavior. You could get fancier and dig into the stub API used by testbed, which may have a way to hook into low-level calls and simulate errors. (I don't think this is a feature of testbed directly but you could use a similar technique.)
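One concrete way to do that wrapping is with unittest.mock, sketched here with a stand-in model class (`MyModel` and the `Timeout` exception are illustrative names, not ndb's real ones):

```python
from unittest import mock

class Timeout(Exception):
    """Stand-in for a datastore timeout exception."""

class MyModel:
    def put(self):
        return "a-real-key"

# patch put() so the error-handling code path runs during the test
with mock.patch.object(MyModel, "put", side_effect=Timeout("simulated")):
    try:
        MyModel().put()
        handled = False
    except Timeout:
        handled = True  # the code under test saw the simulated failure

print(handled)  # -> True
```

Outside the `with` block the real put() is restored, so the patch only affects the test.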
I run parallel write requests against my ZODB, which contains multiple BTree instances. When the server accesses the same objects inside such a BTree, I get a ConflictError for the IOBucket class. For all my Django base classes I have _p_resolveConflict set up, but I can't implement it for IOBucket because it's a C-based class.
I did a deeper analysis but still don't understand why it complains about the IOBucket class and what it writes into it. Also, what would be the right strategy to resolve this?
Many thanks for any help!
IOBucket is part of the persistence structure of a BTree; it exists to try and reduce conflict errors, and it does try and resolve conflicts where possible.
That said, conflicts are not always avoidable, and you should restart your transaction. In Zope, for example, the whole request is re-run up to 5 times if a ConflictError is raised. Conflicts are ZODB's way of handling the (hopefully rare) occasion where two different requests tried to change the exact same data structure.
Restarting your transaction means calling transaction.begin() and applying the same changes again. The .begin() will fetch any changes made by the other process and your commit will be based on the fresh data.
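The retry loop can be sketched as follows. So that the example runs standalone, `ConflictError` is a stand-in class (the real one is `ZODB.POSException.ConflictError`) and `txn_body` represents "begin, apply your changes, commit":

```python
class ConflictError(Exception):
    """Stand-in for ZODB.POSException.ConflictError."""

def run_with_retries(txn_body, attempts=5):
    # Zope-style: re-run the whole unit of work, up to `attempts` times.
    for attempt in range(attempts):
        try:
            # real code: transaction.begin(); apply changes; transaction.commit()
            return txn_body()
        except ConflictError:
            if attempt == attempts - 1:
                raise  # still conflicting after the limit: give up

# simulate a transaction that conflicts once, then succeeds on fresh data
state = {"tries": 0}
def txn_body():
    state["tries"] += 1
    if state["tries"] < 2:
        raise ConflictError("two requests changed the same IOBucket")
    return "committed"

print(run_with_retries(txn_body))  # -> committed
```

The important part is that the *whole* body is re-executed, not just the commit: the changes must be recomputed against the fresh state that begin() fetches.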