SaltStack + docker-py AttributeError: 'RecentlyUsedContainer' object has no attribute 'lock'
I have been digging into this issue to no avail. I'm trying to use SaltStack to manage my docker images/containers but ran into this problem.
Initially I was using the salt state docker.running, but that failed, reporting that the command does not exist. When I changed the state to docker.pulled, I got the traceback I posted over at that GitHub issue:
ID: scheduler
Function: docker.pulled
Result: False
Comment: An exception occurred in this state: Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1563, in call
**cdata['kwargs'])
File "/usr/lib/python2.7/dist-packages/salt/states/dockerio.py", line 271, in pulled
returned = pull(name, tag=tag, insecure_registry=insecure_registry)
File "/usr/lib/python2.7/dist-packages/salt/modules/dockerio.py", line 1599, in pull
client = _get_client()
File "/usr/lib/python2.7/dist-packages/salt/modules/dockerio.py", line 277, in _get_client
client._version = client.version()['ApiVersion']
File "/usr/local/lib/python2.7/dist-packages/docker/client.py", line 837, in version
return self._result(self._get(url), json=True)
File "/usr/local/lib/python2.7/dist-packages/docker/clientbase.py", line 86, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 310, in get
#: Stream response content default.
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 279, in request
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 374, in send
url=request.url,
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 155, in send
**proxy_kwargs)
File "/usr/local/lib/python2.7/dist-packages/docker/unixconn/unixconn.py", line 74, in get_connection
with self.pools.lock:
AttributeError: 'RecentlyUsedContainer' object has no attribute 'lock'
Started: 09:33:42.873628
Duration: 22.115 ms
After searching Google a bit more and coming up with nothing, I went ahead and started reading the source.
After reading unixconn.py and realizing that RecentlyUsedContainer was coming from urllib3, I went and tracked down the source for that and discovered that there was a _lock attribute that was changed to lock a while ago. That seemed strange.
I looked closer at the imports and realized that unixconn.py was attempting to use requests' built-in urllib3 and then falling back to the standalone urllib3. So I checked out the requests urllib3 and found that it did, indeed, have the _lock -> lock change. But it was newer than my version of requests. So I upgraded requests and tried again. Still no dice - same AttributeError.
Now things start to get weird.
In order to get information back to my salt master, I started mucking with the docker-py and urllib3 code on my salt minion. At first I raised exceptions with urllib3.__file__ to make sure I was using the right file. But occasionally the path it returned pointed to a file and folder that did not exist. Usually it displayed /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/_collections.pyc, but when I deleted that file, thinking that maybe the cached .pyc was causing the problem, it would still report that as the __file__, even though it didn't exist.
Then I discovered inspect.getfile. And I got the same bizarre behavior - I could delete the .pyc file and yet inspect.getfile(self.pools) would return the non-existent file.
To make life even better, I added
raise Exception('Pining for the Fjords')
at the end of RecentlyUsedContainer.__init__ in
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/_collections.py
Yet that exception is never raised.
And I have just confirmed that something is in fact lying to me, because despite changing unixconn.py's get_connection to
def get_connection(self, url, proxies=None):
    import inspect
    r = RecentlyUsedContainer(10)
    raise Exception(inspect.getfile(r.__class__) + '\n' + r.__doc__)
which reports /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/_collections.pyc. Yet when I go and edit that .pyc and modify RecentlyUsedContainer's docstring, I still get the original docstring.
And finally, when I edit /usr/lib/python2.7/dist-packages/urllib3/_collections.pyc and change its docstring (or do the same with _collections.py at that path)...
I still get the same docstring!
Why is the wrong code getting executed here, and how can I find out where it is so I can fix the problem?
So I finally figured out the problem:
It did have something to do with salt. For some reason, the way the salt minion imported the docker-py library put some sort of partial hold on the imports. I suspect that salt was re-importing just the docker-py library specifically, which is why my changes to those files would show up.
However, since the Python import mechanism searches for already-imported modules first, the urllib3 code was never re-imported.
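This is standard Python behaviour and easy to demonstrate outside of Salt: imports are cached in sys.modules, so once a module is loaded, editing or even deleting its file on disk has no effect on the running process. A minimal sketch:
import sys

import urllib3  # first import: the file is read and cached in sys.modules
print(sys.modules['urllib3'].__file__)

# Editing or deleting the file on disk now changes nothing for this
# process; a repeated import simply returns the cached module object.
import urllib3  # no file access happens here

# Only evicting the cache entry (or restarting the process, which is
# what restarting the salt minion below achieves) forces a re-read.
del sys.modules['urllib3']
import urllib3  # the file is read from disk again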
Ultimately all that is required is to restart the salt minion:
salt 'my-minion' cmd.run "nohup /bin/sh -c 'sleep 10 && salt-call --local service.restart salt-minion'"
I am working with UPnPy, and I immediately notice an issue when attempting to discover devices on my local network. Here is the basic code I am using:
import upnpy
upnp = upnpy.UPnP()
devices = upnp.discover()
This throws the following exception:
Traceback (most recent call last):
File "C:\Users\name\Projects\pythonProject\main.py", line 5, in <module>
devices = upnp.discover()
File "C:\Users\name\Projects\pythonProject\venv\lib\site-packages\upnpy\upnp\UPnP.py", line 33, in discover
for device in self.ssdp.m_search(discover_delay=delay, st='upnp:rootdevice', **headers):
File "C:\Users\name\Projects\pythonProject\venv\lib\site-packages\upnpy\ssdp\SSDPRequest.py", line 50, in m_search
devices = self._send_request(self._get_raw_request())
File "C:\Users\name\Projects\pythonProject\venv\lib\site-packages\upnpy\ssdp\SSDPRequest.py", line 100, in _send_request
device = SSDPDevice(addr, response.decode())
File "C:\Users\name\Projects\pythonProject\venv\lib\site-packages\upnpy\ssdp\SSDPDevice.py", line 87, in __init__
self._get_services_request()
File "C:\Users\name\Projects\pythonProject\venv\lib\site-packages\upnpy\ssdp\SSDPDevice.py", line 23, in wrapper
return func(device, *args, **kwargs)
File "C:\Users\name\Projects\pythonProject\venv\lib\site-packages\upnpy\ssdp\SSDPDevice.py", line 54, in wrapper
return func(instance, *args, **kwargs)
File "C:\Users\name\Projects\pythonProject\venv\lib\site-packages\upnpy\ssdp\SSDPDevice.py", line 171, in _get_services_request
event_sub_url = service.getElementsByTagName('eventSubURL')[0].firstChild.nodeValue
AttributeError: 'NoneType' object has no attribute 'nodeValue'
I have been researching the cause of this but I have found nothing. I am using UPnPy version 1.1.8. I use PyCharm as my IDE. I've tried using previous versions of UPnPy but none seem to be working. Any help would be appreciated. Thanks!
Most likely you have a non-compliant UPnP device on your home network that is serving non-standard/broken XML at its description location, and upnpy is not smart enough to handle the parsing error and perhaps ignore that device.
That scenario is more common than you might think: many Smart TVs (LG ones for sure) have an embedded device that advertises itself via UPnP, but its description endpoint answers with JSON instead of XML!
Some suggestions:
Use a different library or app (you could try my own), at least to identify the culprit. Turn on verbosity and look for warnings and parsing-error logs.
Use a sniffer such as tcpdump to capture network packets (UDP port 1900) and look for SSDP NOTIFY advertisements, then manually open each LOCATION URL in a browser to see if it serves valid XML.
Selectively turn off / unplug devices you think might be UPnP-enabled, such as Smart TVs, home theaters, video game consoles, routers, etc., to see which one was serving the bogus XML.
Edit your local copy of upnpy to handle that error, for example by enclosing the failing function/line in a try/except block and printing some details about what it is trying to parse prior to the error (see the sketch below).
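For the last option, a minimal sketch of what such a patch inside upnpy's SSDPDevice.py might look like (the element lookup is taken from the traceback above; the surrounding error handling is an assumption, not upstream code):
# Hypothetical patch around the failing line in upnpy/ssdp/SSDPDevice.py:
try:
    event_sub_url = service.getElementsByTagName('eventSubURL')[0].firstChild.nodeValue
except (AttributeError, IndexError):
    # firstChild is None (empty tag) or the tag is missing entirely:
    # print what we were parsing and skip this service instead of crashing.
    print('Skipping malformed service description: %r' % service.toxml())
    event_sub_url = None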
Following the docs here for Auth Code Flow, I can't seem to get the example to work.
import apis
import spotipy
import spotipy.util as util
username = input("Enter username: ")
scope = "user-library-read"
token = util.prompt_for_user_token(username, scope,
                                   client_id=apis.SPOTIFY_CLIENT,
                                   client_secret=apis.SPOTIFY_SECRET,
                                   redirect_uri="http://localhost")
if token:
    sp = spotipy.Spotify(auth=token)
    results = sp.current_user_saved_tracks()
    for item in results['items']:
        track = item['track']
        print(track['name'] + ' - ' + track['artists'][0]['name'])
else:
    print("Can't get token for", username)
I get a 400 Bad Request error:
Traceback (most recent call last):
File "/home/termozour/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/193.6494.30/plugins/python/helpers/pydev/pydevd.py", line 1434, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/termozour/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/193.6494.30/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/media/termozour/linux_main/PycharmProjects/SpotiDowner/main.py", line 27, in <module>
sp = spotipy.Spotify(auth=token)
File "/media/termozour/linux_main/PycharmProjects/SpotiDowner/venv/lib/python3.7/site-packages/spotipy/util.py", line 92, in prompt_for_user_token
token = sp_oauth.get_access_token(code, as_dict=False)
File "/media/termozour/linux_main/PycharmProjects/SpotiDowner/venv/lib/python3.7/site-packages/spotipy/oauth2.py", line 382, in get_access_token
raise SpotifyOauthError(response.reason)
spotipy.oauth2.SpotifyOauthError: Bad Request
Yes, I've seen the other ideas people had for fixing this issue, but none worked for me:
I tried resetting the client secret, changing my URI to a website or to localhost, and double-checking my client ID and secret ID - they are all fine.
What I also tried was to go all the way to spotipy/oauth2.py, where the traceback ends; I added a neat little print(response.text) and got a marvelous {"error":"invalid_grant","error_description":"Invalid authorization code"}
Any ideas or insight?
I have to specify that I ran this code from PyCharm Professional (2019.3.3), and the issue came from a trailing space.
When the code runs, spotipy asks you to paste the confirmation URL you are redirected to (based on the redirect URI you set in the app), which contains the authorization code.
The issue was how PyCharm handles URLs in its terminal window when you run the project. When you put a URL in and press enter, PyCharm opens the URL in the browser (for some reason), so the workaround is to add a space after the URL and then press enter. That works for PyCharm, but it sometimes screws up the pasted value. In this case, it did.
I tried running the code from the PyCharm terminal (python3 ), pasted the URL and pressed enter directly. Not only did it not open the browser window, it also accepted the URL and let me get my info from Spotify.
In any case, the code is fine; it was the IDE that was breaking it all. A good suggestion would be for the library itself to strip trailing spaces from the URL, if any, when parsing it.
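Such a fix could be as small as stripping the pasted value before it is parsed; a hypothetical sketch (this is not the actual spotipy code):
# Hypothetical hardening of the "paste the redirect URL" prompt:
# .strip() removes the trailing space PyCharm's terminal forces you to add.
response_url = input("Enter the URL you were redirected to: ").strip()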
This 'bug' has been reported here and has been marked as fixed (spoiler alert: not fixed) and will be fixed in PyCharm 2020.1
I tried running the code normally from PyCharm 2020.1 and it all works fine, so I can confirm: in my case it was an IDE issue.
import boto3
import json
import time
client = boto3.client('elbv2')
desired_capacity=8
client.set_desired_capacity(
    AutoScalingGroupName='Test-Web',
    DesiredCapacity=desired_capacity,
    HonorCooldown=True)
and
boto3==1.7.1
When I run this script I get:
File "deploy_staging_web.py", line 6, in <module>
client.set_desired_capacity(
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 601, in __getattr__
self.__class__.__name__, item)
AttributeError: 'ElasticLoadBalancingv2' object has no attribute 'set_desired_capacity'
I intended to use Python to scale AWS instances up and down.
I'm not inside any virtual environment at the moment.
Why is this error being thrown, and how do I get around it?
It is even mentioned in the official documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/autoscaling.html#AutoScaling.Client.set_desired_capacity
The official documentation is for the latest version, not your much older one. Upgrade your boto3 package to the latest; the most recent version is 1.9.243.
The problem turned out to be a silly one.
boto3 has moved the various functions around.
set_desired_capacity is no longer part of 'elbv2'.
It is part of 'autoscaling': https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/autoscaling.html#AutoScaling.Client.set_desired_capacity
while 'describe_target_health' is still part of 'elbv2': https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html?highlight=elb#ElasticLoadBalancingv2.Client.describe_target_health
Updating
client = boto3.client('elbv2')
to
client = boto3.client('autoscaling')
has solved my problem.
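Putting it together, a sketch of the corrected script ('Test-Web' comes from the question; the target group ARN is a placeholder):
import boto3

# set_desired_capacity lives on the 'autoscaling' client...
autoscaling = boto3.client('autoscaling')
autoscaling.set_desired_capacity(
    AutoScalingGroupName='Test-Web',
    DesiredCapacity=8,
    HonorCooldown=True)

# ...while target health checks stay on the 'elbv2' client.
elbv2 = boto3.client('elbv2')
health = elbv2.describe_target_health(
    TargetGroupArn='arn:aws:elasticloadbalancing:...')  # placeholder ARN
print(health['TargetHealthDescriptions'])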
I am updating data on a Neo4j server using Python (2.7.6) and Py2Neo (1.6.4). My load function is:
from py2neo import neo4j, node, rel, cypher

session = cypher.Session('http://my_neo4j_server.com.mine:7474')

def load_data():
    tx = session.create_transaction()
    for row in dataframe.iterrows():  # dataframe is a pandas dataframe
        name = row[1].name
        id = row[1].id
        merge_query = "MERGE (a:label {name:'%s', name_var:'%s'}) " % (id, name)
        tx.append(merge_query)
    tx.commit()
When I execute this from Spyder on Windows it works great. All the data from the dataframe is committed to Neo4j and visible in the graph. However, when I run this from a Linux server (different from the Neo4j server) I get the following error at tx.commit(). Note that I have the same versions of Python and py2neo.
INFO:py2neo.packages.httpstream.http:>>> POST http://neo4j1.qs:7474/db/data/transaction/commit [1360120]
INFO:py2neo.packages.httpstream.http:<<< 200 OK [chunked]
ERROR:__main__:some part of process failed
Traceback (most recent call last):
File "my_file.py", line 132, in load_data
tx.commit()
File "/usr/local/lib/python2.7/site-packages/py2neo/cypher.py", line 242, in commit
return self._post(self._commit or self._begin_commit)
File "/usr/local/lib/python2.7/site-packages/py2neo/cypher.py", line 208, in _post
j = rs.json
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/httpstream/http.py", line 563, in json
return json.loads(self.read().decode(self.encoding))
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/httpstream/http.py", line 634, in read
data = self._response.read()
File "/usr/local/lib/python2.7/httplib.py", line 543, in read
return self._read_chunked(amt)
File "/usr/local/lib/python2.7/httplib.py", line 597, in _read_chunked
raise IncompleteRead(''.join(value))
IncompleteRead: IncompleteRead(128135 bytes read)
This post (IncompleteRead using httplib) suggests that it is an httplib error. I am not sure how to handle that, since I am not calling httplib directly.
Any suggestions for getting this load to work on Linux, or for what the IncompleteRead error message means?
UPDATE:
The IncompleteRead error is being caused by a Neo4j error being returned. The line returned in _read_chunked that is causing the error is:
pe}"}]}],"errors":[{"code":"Neo.TransientError.Network.UnknownFailure"
Neo4j docs say this is an unknown network error.
Although I can't say for sure, this implies some kind of local network issue between client and server rather than a bug within the library. Py2neo wraps httplib (which is pretty solid itself) and, from the stack trace, it looks as though the client is expecting more chunks from a chunked response.
To diagnose further, you could make some curl calls from your Linux application server to your database server and see what succeeds and what doesn't. If that works, try writing a quick and dirty python script to make the same calls with httplib directly.
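A quick and dirty version of that second test might look like this (a sketch, reusing the host and transactional endpoint that appear in the log above):
import httplib  # Python 2.7, as used in the question
import json

# POST a trivial statement to the same endpoint py2neo uses and read
# the chunked response; an IncompleteRead here would point away from py2neo.
conn = httplib.HTTPConnection('neo4j1.qs', 7474)
payload = json.dumps({"statements": [{"statement": "RETURN 1"}]})
conn.request('POST', '/db/data/transaction/commit', payload,
             {'Content-Type': 'application/json'})
response = conn.getresponse()
print response.status, response.reason
print response.read()
conn.close()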
UPDATE 1: Given the update above and the fact that the server streams its responses, I'm thinking that the chunk size might represent the intended payload but the error cuts the response short. Recreating the issue with curl certainly seems like the best next step to help determine whether it is a fault in the driver, the server or something else.
UPDATE 2: Looking again this morning, I notice that you're using Python string substitution for the properties within the MERGE statement. As good practice, you should use parameter substitution at the Cypher level:
merge_query = "MERGE (a:label {name:{name}, name_var:{name_var}})"
merge_params = {"name": id, "name_var": name}
tx.append(merge_query, merge_params)
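As a bonus, parameterized queries avoid quoting and injection problems, and they let the server cache the execution plan once for the whole loop instead of re-parsing a slightly different query string for every row.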
I wrote my own implementation of Pyramid's ISession interface, which stores the session in a database. Everything works really nicely, but somehow pyramid_tm throws up on this. As soon as it is activated, it says this:
DetachedInstanceError: Instance <Session at 0x38036d0> is not bound to a Session;
attribute refresh operation cannot proceed
(Don't get confused here: the <Session ...> is the class name of my model; the "... to a Session" most likely refers to SQLAlchemy's Session, which I call DBSession to avoid confusion.)
I have looked through mailing lists and SO, and it seems that whenever someone has this problem, they are either
spawning a new thread, or
manually calling transaction.commit()
I do neither of those things. However, the special thing here is that my session gets passed around by Pyramid a lot. First I do DBSession.add(session) and then return session. Afterwards I can work with the session, flash new messages, etc.
However, it seems once the request finishes, I get this exception. Here is the full traceback:
Traceback (most recent call last):
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/waitress-0.8.1-py2.7.egg/waitress/channel.py", line 329, in service
task.service()
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/waitress-0.8.1-py2.7.egg/waitress/task.py", line 173, in service
self.execute()
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/waitress-0.8.1-py2.7.egg/waitress/task.py", line 380, in execute
app_iter = self.channel.server.application(env, start_response)
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/pyramid/router.py", line 251, in __call__
response = self.invoke_subrequest(request, use_tweens=True)
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/pyramid/router.py", line 231, in invoke_subrequest
request._process_response_callbacks(response)
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/pyramid/request.py", line 243, in _process_response_callbacks
callback(self, response)
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/miniblog/miniblog/models.py", line 218, in _set_cookie
print("Setting cookie %s with value %s for session with id %s" % (self._cookie_name, self._cookie, self.id))
File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/attributes.py", line 168, in __get__
return self.impl.get(instance_state(instance),dict_)
File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/attributes.py", line 451, in get
value = callable_(passive)
File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/state.py", line 285, in __call__
self.manager.deferred_scalar_loader(self, toload)
File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/mapper.py", line 1668, in _load_scalar_attributes
(state_str(state)))
DetachedInstanceError: Instance <Session at 0x7f4a1c04e710> is not bound to a Session; attribute refresh operation cannot proceed
For this case, I deactivated the debug toolbar (the error also gets thrown from there once I activate it). The problem seems to be accessing the object at any point after the request finishes.
I realize I could try to detach it somehow, but this doesn't seem like the right way as the element couldn't be modified without explicitly adding it to a session again.
So if I don't spawn new threads and don't explicitly call commit, I guess the transaction is being committed before the request is fully finished, and the session is accessed again afterwards. How do I handle this problem?
I believe what you're seeing here is a quirk of the fact that response callbacks and finished callbacks are actually executed after tweens. They are positioned just between your app's egress and the middleware. pyramid_tm, being a tween, is committing the transaction before your response callback executes, causing the error upon later access.
Getting the order of these things correct is difficult. A possibility off the top of my head is to register your own tween under pyramid_tm that performs a flush on the session, grabs the id, and sets the cookie on the response.
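Something like the following, registered under pyramid_tm so it runs while the transaction is still open (a sketch; DBSession, the cookie name, and the tween's module path are assumptions about your app, not Pyramid API):
from myapp.models import DBSession  # assumption: your app's scoped session

def session_cookie_tween_factory(handler, registry):
    # Registered under pyramid_tm, so the transaction is still open here.
    def session_cookie_tween(request):
        response = handler(request)
        DBSession.flush()  # assigns the session row its id
        response.set_cookie('session_id', str(request.session.id))
        return response
    return session_cookie_tween

# In your Configurator setup:
config.add_tween('myapp.tweens.session_cookie_tween_factory',
                 under='pyramid_tm.tm_tween_factory')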
I sympathize with this issue, as anything that happens after the transaction has been committed is a real gray area in Pyramid where it's not always clear that the session should not be touched. I'll make a note to continue thinking about how to improve this workflow for Pyramid in the future.
I first tried registering a tween, and it somehow worked, but the data did not get saved. I then stumbled upon the SQLAlchemy event system and found the after_commit event. Using this, I could set up the detaching of the session object after the commit issued by pyramid_tm was done. I think this provides full flexibility and doesn't impose any requirements on ordering.
My final solution:
import logging

from sqlalchemy.event import listen
from sqlalchemy.orm import Session as SASession

log = logging.getLogger(__name__)

def detach(db_session):
    from pyramid.threadlocal import get_current_request
    request = get_current_request()
    log.debug("Expunging (detaching) session for DBSession")
    db_session.expunge(request.session)

listen(SASession, 'after_commit', detach)
Only drawback: it requires calling get_current_request(), which is discouraged. However, I saw no other way of getting hold of the session, as the event gets called by SQLAlchemy. I thought about some ugly wrapping, but that would have been way too risky and unstable.