pymongo Collection initialize_unordered_bulk_op method broken - python

TypeError("'Collection' object is not callable. If you meant to call the 'initialize_unordered_bulk_op' method on a 'Collection' object it is failing because no such method exists.")
Anyone else run into this?

Ran into this problem in pymongo v4.0. The release notes mention it, but it wasn't very obvious:
Changed in version 4.0: Removed the reindex, map_reduce, inline_map_reduce, parallel_scan, initialize_unordered_bulk_op, initialize_ordered_bulk_op, group, count, insert, save, update, remove, find_and_modify, and ensure_index methods.
Refer to Migration Guide

Related

force object to be `dirty` in sqlalchemy

Is there a way to force an object mapped by SQLAlchemy to be considered dirty? For example, in the context of SQLAlchemy's Object Relational Tutorial, the problem can be demonstrated as follows:
a = session.query(User).first()
a.__dict__['name'] = 'eh'
session.dirty
yielding,
IdentitySet([])
I am looking for a way to force the user a into a dirty state.
This problem arises because the class mapped by SQLAlchemy takes control of the attribute getter/setter methods, thus preventing SQLAlchemy from registering changes.
I came across the same problem recently and it was not obvious.
Objects themselves are not dirty; their attributes are, since SQLAlchemy writes back only changed attributes, not the whole object (as far as I know).
If you set an attribute using set_attribute and the new value differs from the original attribute data, SQLAlchemy detects that the object is dirty:
from sqlalchemy.orm.attributes import set_attribute
set_attribute(obj, data_field_name, data)
If you want to mark the object dirty regardless of the original attribute value, no matter if it has changed or not, use flag_modified:
from sqlalchemy.orm.attributes import flag_modified
flag_modified(obj, data_field_name)
The flag_modified approach only works if the attribute already has a value present. The SQLAlchemy documentation states:
Mark an attribute on an instance as ‘modified’.
This sets the ‘modified’ flag on the instance and establishes an
unconditional change event for the given attribute. The attribute must
have a value present, else an InvalidRequestError is raised.
Starting with version 1.2, if you want to mark an entire instance, flag_dirty is the solution:
Mark an instance as ‘dirty’ without any specific attribute mentioned.
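Putting the pieces together, a minimal runnable sketch (assuming SQLAlchemy 1.4+ and an in-memory SQLite database; the User model here is a stand-in for the tutorial's):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.orm.attributes import flag_modified

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(User(id=1, name="ed"))
session.commit()

a = session.query(User).first()
a.__dict__["name"] = "eh"      # bypasses instrumentation...
assert a not in session.dirty  # ...so the change is not registered

flag_modified(a, "name")       # force an unconditional change event
assert a in session.dirty      # now the object is considered dirty
session.commit()               # emits the UPDATE
```

flag_dirty (1.2+) is used the same way but takes only the instance, with no attribute name.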

AttributeError: 'PullRequest' object has no attribute 'issue_comments'

I'm using https://github.com/sigmavirus24/github3.py
and I'm having a problem getting issue_comments from a PR.
for pr in repo.iter_pulls():
    for comment in pr.issue_comments():
        print comment
I'm getting
AttributeError: 'PullRequest' object has no attribute
'issue_comments'
What am I doing wrong here? review_comments, for example, works just fine.
The review_comments method was added very recently and was backported from the next planned version of github3.py (1.0). When it was backported, to reduce migration headaches from 0.9.x to 1.0, we decided to not prefix it with iter_ like the other similar methods. In short, the method you are looking for is: iter_issue_comments.
The following should work:
TEMPLATE = """{0.user} commented on #{0.number} at {0.created_at} saying:
{0.body}
"""

for pr in repo.iter_pulls():
    for comment in pr.iter_issue_comments():
        print(TEMPLATE.format(comment))

Strange error after SQLAlchemy update: 'list' object has no attribute '_all_columns'

Here's a simplified version of my query:
subquery = (select(['latitude'])
            .select_from(func.unnest(func.array_agg(Room.latitude))
                         .alias('latitude'))
            .limit(1)
            .as_scalar())
Room.query.with_entities(Room.building, subquery).group_by(Room.building).all()
When executing it I get an error deep inside SQLAlchemy:
File ".../sqlalchemy/sql/selectable.py", line 429, in columns
self._populate_column_collection()
File ".../sqlalchemy/sql/selectable.py", line 992, in _populate_column_collection
for col in self.element.columns._all_columns:
AttributeError: 'list' object has no attribute '_all_columns'
Inspecting it in a debugger shows me this:
>>> self.element
<sqlalchemy.sql.functions.Function at 0x7f72d4fcae50; unnest>
>>> str(self.element)
'unnest(array_agg(rooms.id))'
>>> self.element.columns
[<sqlalchemy.sql.functions.Function at 0x7f72d4fcae50; unnest>]
The problem started with SQLAlchemy 0.9.4; in 0.9.3 everything worked fine.
When running it in SQLAlchemy 0.9.3 the following query is executed (as expected):
SELECT rooms.building AS rooms_building,
(SELECT latitude
FROM unnest(array_agg(rooms.latitude)) AS latitude
LIMIT 1) AS anon_1
FROM rooms
GROUP BY rooms.building
Am I doing something wrong here or is it a bug in SQLAlchemy?
This turned out to be a bug in SQLAlchemy: https://bitbucket.org/zzzeek/sqlalchemy/issue/3137/decide-what-funcxyz-alias-should-do
func.foo().alias() should in fact be equivalent to func.foo().select().alias(), however in this case that will push out a second level of nesting here which you don't want. So to make that correction to the API probably needs to be a 1.0 thing, unless I can determine that func.foo().alias() is totally unusable right now.
The proper way to do it, according to the SQLAlchemy developer, is this:
subquery = (select(['*'])
            .select_from(func.unnest(func.array_agg(Room.latitude)))
            .limit(1)
            .as_scalar())
Most likely the next version (0.9.8 I assume) is going to have the old behavior restored:
I'm restoring the old behavior, but for now just use select(['*']). the column is unnamed. PG's behavior of assigning the column name based on the alias in the FROM is a little bit magic (e.g., if the function returned multiple columns, then it ignores that name and uses the ones the function reports?)

How do I update the status of an asset in VersionOne using the REST API

How can I update the status of an asset in V1 using the Rest API?
I would assume I could do something like this using the Python SDK:
from v1pysdk import V1Meta

v1 = V1Meta()
for s in v1.PrimaryWorkitem.filter("Number='D-01240'"):
    s.StoryStatus = v1.StoryStatus(134)
v1.commit()
This is at least how I understand the Python SDK examples here:
https://github.com/versionone/VersionOne.SDK.Python
However, this does not change anything, even though I have the rights to change the status.
Try using:
s.Status = v1.StoryStatus(134)
According to ~/meta.v1?xsl=api.xsl#PrimaryWorkitem, the attribute on PrimaryWorkitem of type StoryStatus is named Status, so I think it's just a mistaken attribute name.
What's probably happening is that you're setting a new attribute on that Python object, but since StoryStatus is not one of the setters that the SDK created from the instance schema metadata, it doesn't attempt to add it to the uncommitted data collection; thus the commit is a no-op and yields neither an error nor any action.
It might be possible to fence off arbitrary attribute access on those objects so that misspelled names raise errors. I'll investigate adding that.
Try doing:
s.set(Status = v1.StoryStatus(134))

should I use .rank or .order_id?

Accessing the order_id property of a ScoredDocument object in a SearchResults object generates the following warning in the log:
DeprecationWarning: order_id is deprecated; use rank instead
logging.debug(document.order_id)
However, the documentation here refers to order_id: https://developers.google.com/appengine/docs/python/search/scoreddocumentclass
Which is correct? I am using SDK 1.7.3.
Documentation updates more slowly than the code; you should follow whatever the latest code recommends.
You should use rank. I've filed a bug to fix that documentation. (I work on the Search API)
The SDK release notes for version 1.6.6 (May 22, 2012) say:
"The Search API has deprecated the order_id attribute on Document class. It has been replaced with the rank attribute."
So obviously you should use rank.
