I am working with UPnPy, and I immediately notice an issue when attempting to discover devices on my local network. Here is the basic code I am using:
import upnpy
upnp = upnpy.UPnP()
devices = upnp.discover()
This throws the following exception:
Traceback (most recent call last):
File "C:\Users\name\Projects\pythonProject\main.py", line 5, in <module>
devices = upnp.discover()
File "C:\Users\name\Projects\pythonProject\venv\lib\site-packages\upnpy\upnp\UPnP.py", line 33, in discover
for device in self.ssdp.m_search(discover_delay=delay, st='upnp:rootdevice', **headers):
File "C:\Users\name\Projects\pythonProject\venv\lib\site-packages\upnpy\ssdp\SSDPRequest.py", line 50, in m_search
devices = self._send_request(self._get_raw_request())
File "C:\Users\name\Projects\pythonProject\venv\lib\site-packages\upnpy\ssdp\SSDPRequest.py", line 100, in _send_request
device = SSDPDevice(addr, response.decode())
File "C:\Users\name\Projects\pythonProject\venv\lib\site-packages\upnpy\ssdp\SSDPDevice.py", line 87, in __init__
self._get_services_request()
File "C:\Users\name\Projects\pythonProject\venv\lib\site-packages\upnpy\ssdp\SSDPDevice.py", line 23, in wrapper
return func(device, *args, **kwargs)
File "C:\Users\name\Projects\pythonProject\venv\lib\site-packages\upnpy\ssdp\SSDPDevice.py", line 54, in wrapper
return func(instance, *args, **kwargs)
File "C:\Users\name\Projects\pythonProject\venv\lib\site-packages\upnpy\ssdp\SSDPDevice.py", line 171, in _get_services_request
event_sub_url = service.getElementsByTagName('eventSubURL')[0].firstChild.nodeValue
AttributeError: 'NoneType' object has no attribute 'nodeValue'
I have been researching the cause of this but I have found nothing. I am using UPnPy version 1.1.8. I use PyCharm as my IDE. I've tried using previous versions of UPnPy but none seem to be working. Any help would be appreciated. Thanks!
Most likely you have a non-compliant UPnP device on your home network that is serving non-standard/broken XML at its description location, and UPnPy is not smart enough to handle the parsing error and perhaps skip that device.
That scenario is more common than you might think: many Smart TVs (LG ones for sure) have an embedded device that advertises itself as UPnP but whose description endpoint answers with JSON instead of XML!
Some suggestions:
Use a different library or app (you could try my own), at least to identify the culprit. Turn on verbosity and look for warnings and parsing-error logs.
Use a sniffer such as tcpdump to capture network packets (UDP port 1900) and look for SSDP NOTIFY advertisements, then manually open each LOCATION URL in a browser to see if it returns valid XML.
Selectively turn off / unplug devices you think might be UPnP-enabled, such as Smart TVs, home theaters, video game consoles, routers, etc., to see which one was serving the bogus XML.
Edit your local copy of upnpy to handle that error, for example by enclosing the function/line in a try/except block and printing some details about what it is trying to parse prior to the error (a rough sketch follows below).
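For that last suggestion, here is a hypothetical sketch of such a local patch inside SSDPDevice._get_services_request; the loop and variable names are assumptions based on the traceback, not the actual upnpy source:
# Hypothetical local patch; 'services' stands in for whatever collection of
# <service> elements upnpy iterates over in _get_services_request.
for service in services:
    try:
        event_sub_url = service.getElementsByTagName('eventSubURL')[0].firstChild.nodeValue
    except (AttributeError, IndexError) as e:
        # A device returned a description with a missing or empty <eventSubURL>;
        # print what was being parsed and skip it instead of crashing discovery.
        print('Skipping malformed service description: %r (%s)' % (service.toxml()[:200], e))
        continue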
I'm trying to upload a video to YouTube using a Python script.
So the code given here (upload_video.py) is supposed to work, and I've followed the setup, which includes enabling the YouTube API and getting OAuth secret keys and whatnot. You may notice that the code is in Python 2, so I used 2to3 to make it run with Python 3.7. The issue is that for some reason, I'm asked to log in when I execute upload_video.py:
Now this should not be occurring, as that's the whole point of having a client_secrets.json file: you don't need to explicitly log in. So once I exit this in-shell browser, here's what I see:
Here's the first line:
/usr/lib/python3.7/site-packages/oauth2client/_helpers.py:255: UserWarning: Cannot access upload_video.py-oauth2.json: No such file or directory
warnings.warn(_MISSING_FILE_MESSAGE.format(filename))
Now I don't understand why upload_video.py-oauth2.json is needed, since in the upload_video.py file the OAuth2 secrets file is set to "client_secrets.json".
Anyway, I created the file upload_video.py-oauth2.json and copied the contents of client_secrets.json into it. I didn't get the weird login then, but I got another error:
Traceback (most recent call last):
File "upload_video.py", line 177, in <module>
youtube = get_authenticated_service(args)
File "upload_video.py", line 80, in get_authenticated_service
credentials = storage.get()
File "/usr/lib/python3.7/site-packages/oauth2client/client.py", line 407, in get
return self.locked_get()
File "/usr/lib/python3.7/site-packages/oauth2client/file.py", line 54, in locked_get
credentials = client.Credentials.new_from_json(content)
File "/usr/lib/python3.7/site-packages/oauth2client/client.py", line 302, in new_from_json
module_name = data['_module']
KeyError: '_module'
So basically now I've hit a dead end. Any ideas about what to do now?
See the code of function get_authenticated_service in upload_video.py: you should not create the file upload_video.py-oauth2.json by yourself! This file is created upon the completion of the OAuth2 flow via the call to run_flow within get_authenticated_service.
You may also read the doc OAuth 2.0 for Mobile & Desktop Apps for thorough info about the authorization flow on standalone computers.
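If it helps to see the shape of that flow, here is a hedged sketch of the oauth2client pattern that get_authenticated_service follows; the exact constants and argument handling in Google's sample differ, and the "%s-oauth2.json" naming (presumably why the file is called upload_video.py-oauth2.json) is an assumption here:
import sys

from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from oauth2client.tools import argparser, run_flow

CLIENT_SECRETS_FILE = "client_secrets.json"  # OAuth client info downloaded from the API console
SCOPE = "https://www.googleapis.com/auth/youtube.upload"

flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE, scope=SCOPE)

# The token cache is named after the script itself, e.g. upload_video.py-oauth2.json.
storage = Storage("%s-oauth2.json" % sys.argv[0])
credentials = storage.get()  # None (plus that UserWarning) on the very first run

if credentials is None or credentials.invalid:
    # Opens the consent screen once; on success, run_flow writes the tokens into
    # storage, creating the -oauth2.json file for you.
    credentials = run_flow(flow, storage, argparser.parse_args([]))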
I have already seen the examples on here of using Python's os library to get a local file's timestamp by passing it the local path (i.e. /var/www/html/etc.../filename.txt), but when I try to pass getmtime a link, it cannot process it.
Here is what the code looks like:
import os
print(os.path.getmtime('https://www.sec.gov/Archives/edgar/data/1474439/000169655519000022/xslF345X03/wf-form4_156772823294389.xml'))
Here is the error I get:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python3.7/genericpath.py", line 55, in getmtime
return os.stat(filename).st_mtime
FileNotFoundError: [Errno 2] No such file or directory: 'https://www.sec.gov/Archives/edgar/data/1474439/000169655519000022/xslF345X03/wf-form4_156772823294389.xml'
I know that this link exists.
So it obviously doesn't like me passing it a link. Is there another function that you use to pass links, to get the last modification time of a remote file?
A URL is not necessarily a file. You can ask the remote server to tell you about the link, and it may or may not provide a Last-Modified header, at its discretion. It could also lie, if so instructed. To do this, you need to make an HTTP request; the easiest way to do that from Python is the nice requests library.
import requests
import dateutil.parser

# The URL from the question; a HEAD request asks the server for headers only.
url = 'https://www.sec.gov/Archives/edgar/data/1474439/000169655519000022/xslF345X03/wf-form4_156772823294389.xml'
response = requests.head(url)
last_modified = response.headers.get('Last-Modified')
if last_modified:
    last_modified = dateutil.parser.parse(last_modified)
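If you need a value directly comparable to what os.path.getmtime returns for local files (seconds since the epoch), you can convert the parsed datetime, assuming the server actually sent the header:
if last_modified:
    # The parsed value is a timezone-aware datetime; .timestamp() gives epoch
    # seconds, the same unit os.path.getmtime() returns.
    print(last_modified.timestamp())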
I'm trying to programmatically add a blacklisted IP to the firewall. I try this but get an error. I'm not that new to Python, but I'm not all that proficient at reading documentation, so here it is if it helps.
https://media.readthedocs.org/pdf/smc-python/latest/smc-python.pdf
https://smc-python.readthedocs.io/en/latest/index.html
from smc import session
from smc_monitoring.monitors.blacklist import BlacklistQuery
from smc.core.engines import Engine
from smc.administration.system import System
session.login(url='http://nope', api_key='supersecret')
print("logged in")
# # Method 1 ERROR
system = System()
print(system.smc_version)
system.blacklist(src='1.1.1.1/32', dst='2.2.2.2/32', duration=3600)
session.logout()
Traceback (most recent call last):
  File "/home/matthew/PycharmProjects/GitSMC/BlacklistTest.py", line 12, in <module>
    system.blacklist(src='1.1.1.1/32', dst='2.2.2.2/32', duration=3600)
  File "/home/matthew/PycharmProjects/GitSMC/venv/lib/python3.7/site-packages/smc/administration/system.py", line 159, in blacklist
    json=prepare_blacklist(src, dst, duration, **kw))
  File "/home/matthew/PycharmProjects/GitSMC/venv/lib/python3.7/site-packages/smc/base/mixins.py", line 32, in make_request
    result = getattr(request, method)()
  File "/home/matthew/PycharmProjects/GitSMC/venv/lib/python3.7/site-packages/smc/api/common.py", line 66, in create
    return self._make_request(method='POST')
  File "/home/matthew/PycharmProjects/GitSMC/venv/lib/python3.7/site-packages/smc/api/common.py", line 101, in _make_request
    raise err
smc.api.exceptions.ActionCommandFailed: Invalid JSON format: At line 1 and column 17, end_point1 is not recognized as JSON attribute.
There are multiple ways to blacklist, either through the System entry point like you have above, or individually against a single firewall/cluster.
If using the System entry point, the blacklist entry will go to all SMC managed firewalls.
Based on the message, it appears you might be using a newer version of smc-python (i.e. >6.5.x).
In that case it's best to use the engine level blacklisting:
from smc.elements.other import Blacklist
engine = Engine('myfw')
blacklist = Blacklist()
blacklist.add_entry(src='1.1.1.1/32', dst='2.2.2.2/32')
engine.blacklist_bulk(blacklist)
I just noticed that the System entry point does not have a blacklist function for SMC 6.5 (which hasn't technically been fully certified for this library yet), but I will add it to the develop branch, as 6.5.x will be officially supported in the next couple of weeks.
If you are using SMC version <= 6.4.x, you can use the engine.blacklist, or System.blacklist commands.
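For completeness, a hedged sketch of the engine-level call for SMC <= 6.4.x, assuming engine.blacklist accepts the same src/dst/duration keywords as the System.blacklist call in your script:
from smc import session
from smc.core.engines import Engine

session.login(url='http://nope', api_key='supersecret')

# Assumption: engine.blacklist mirrors System.blacklist's keyword arguments on <= 6.4.x.
engine = Engine('myfw')
engine.blacklist(src='1.1.1.1/32', dst='2.2.2.2/32', duration=3600)

session.logout()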
Saltstack + docker-py AttributeError: 'RecentlyUsedContainer' object has no attribute 'lock'
I have been digging into this issue to no avail. I'm trying to use SaltStack to manage my docker images/containers but ran into this problem.
Initially I was using the salt state docker.running, but that failed as though the command did not exist. When I changed the state to docker.pulled, I got the traceback I posted over at that GitHub issue:
ID: scheduler
Function: docker.pulled
Result: False
Comment: An exception occurred in this state: Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1563, in call
**cdata['kwargs'])
File "/usr/lib/python2.7/dist-packages/salt/states/dockerio.py", line 271, in pulled
returned = pull(name, tag=tag, insecure_registry=insecure_registry)
File "/usr/lib/python2.7/dist-packages/salt/modules/dockerio.py", line 1599, in pull
client = _get_client()
File "/usr/lib/python2.7/dist-packages/salt/modules/dockerio.py", line 277, in _get_client
client._version = client.version()['ApiVersion']
File "/usr/local/lib/python2.7/dist-packages/docker/client.py", line 837, in version
return self._result(self._get(url), json=True)
File "/usr/local/lib/python2.7/dist-packages/docker/clientbase.py", line 86, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 310, in get
#: Stream response content default.
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 279, in request
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 374, in send
url=request.url,
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 155, in send
**proxy_kwargs)
File "/usr/local/lib/python2.7/dist-packages/docker/unixconn/unixconn.py", line 74, in get_connection
with self.pools.lock:
AttributeError: 'RecentlyUsedContainer' object has no attribute 'lock'
Started: 09:33:42.873628
Duration: 22.115 ms
After searching Google a bit more and coming up with nothing, I went ahead and started reading the source.
After reading unixconn.py and realizing that RecentlyUsedContainer was coming from urllib3, I went and tracked down the source for that and discovered that there was a _lock attribute that was changed to lock a while ago. That seemed strange.
I looked closer at the imports and realized that unixconn.py was attempting to use requests' built-in urllib3 and then falling back to the standalone urllib3. So I checked out the requests urllib3 and found that it did, indeed, have the _lock -> lock change. But it was newer than my version of requests. So I upgraded requests and tried again. Still no dice - same AttributeError.
Now things start to get weird.
In order to get information back to my salt master, I started mucking with the docker-py and urllib3 code on my salt minion. At first I raised exceptions with urllib3.__file__ to make sure I was using the right file. But occasionally the file name it returned pointed to a file and a folder that did not exist. Usually it displayed /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/_collections.pyc, but when I deleted that file, thinking that maybe the cached .pyc was causing a problem, it would still report that as the __file__, even though it didn't exist.
Then I discovered inspect.getfile. And I got the same bizarre behavior - I could delete the .pyc file and yet inspect.getfile(self.pools) would return the non-existent file.
To make life even better, I've added
raise Exception('Pining for the Fjords')
to
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/_collections.py
at the end of RecentlyUsedContainer.__init__. Yet that exception is never raised.
And I have just confirmed that something is in fact lying to me, because despite changing unixconn.py
def get_connection(self, url, proxies=None):
    import inspect
    r = RecentlyUsedContainer(10)
    raise Exception(inspect.getfile(r.__class__) + '\n' + r.__doc__)
which returns /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/_collections.pyc. Yet when I go and edit that .pyc and modify RecentlyUsedContainer's docstring, I still get the original docstring.
And finally, when I edit /usr/lib/python2.7/dist-packages/urllib3/_collections.pyc and change its docstring (or the same path but _collections.py instead)...
I still get the same docstring!
Why is the wrong code getting executed here, and how can I find out where it is so I can fix the problem?
So I finally figured out the problem:
It did have something to do with Salt. For some reason, the way the salt minion imported the docker-py library put some sort of... partial hold on the imports. I suspect that what was happening was that Salt was re-importing just the docker-py library specifically, so when I made changes to those files the changes would show up.
However, since the Python import mechanism searches for already-imported modules first, the urllib3 code was never re-imported.
Ultimately all that is required is to restart the salt minion:
salt 'my-minion' cmd.run "nohup /bin/sh -c 'sleep 10 && salt-call --local service.restart salt-minion'"
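For background on why editing the files on disk appeared to change nothing, here is a small, generic illustration of that import caching; urllib3 is used only as an example module, and nothing here is specific to Salt or docker-py:
import sys

# First import: Python reads the module from disk and caches it in sys.modules.
import urllib3._collections as coll
print(coll.__file__)  # the file Python actually loaded

# Any later import of the same name returns the cached object,
# even if the .py/.pyc on disk has been edited or deleted since.
print(sys.modules['urllib3._collections'] is coll)  # True

# Only an explicit reload (reload() in Python 2, importlib.reload() in Python 3)
# or restarting the process (here, the salt minion) re-reads the file from disk.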
I am updating data on a Neo4j server using Python (2.7.6) and Py2Neo (1.6.4). My load function is:
from py2neo import neo4j, node, rel, cypher

session = cypher.Session('http://my_neo4j_server.com.mine:7474')

def load_data():
    tx = session.create_transaction()
    for row in dataframe.iterrows():  # dataframe is a pandas dataframe
        name = row[1].name
        id = row[1].id
        merge_query = "MERGE (a:label {name:'%s', name_var:'%s'}) " % (id, name)
        tx.append(merge_query)
    tx.commit()
When I execute this from Spyder in Windows it works great. All the data from the dataframe is committed to neo4j and visible in the graph. However, when I run this from a linux server (different from the neo4j server) I get the following error at tx.commit(). Note that I have the same version of python and py2neo.
INFO:py2neo.packages.httpstream.http:>>> POST http://neo4j1.qs:7474/db/data/transaction/commit [1360120]
INFO:py2neo.packages.httpstream.http:<<< 200 OK [chunked]
ERROR:__main__:some part of process failed
Traceback (most recent call last):
File "my_file.py", line 132, in load_data
tx.commit()
File "/usr/local/lib/python2.7/site-packages/py2neo/cypher.py", line 242, in commit
return self._post(self._commit or self._begin_commit)
File "/usr/local/lib/python2.7/site-packages/py2neo/cypher.py", line 208, in _post
j = rs.json
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/httpstream/http.py", line 563, in json
return json.loads(self.read().decode(self.encoding))
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/httpstream/http.py", line 634, in read
data = self._response.read()
File "/usr/local/lib/python2.7/httplib.py", line 543, in read
return self._read_chunked(amt)
File "/usr/local/lib/python2.7/httplib.py", line 597, in _read_chunked
raise IncompleteRead(''.join(value))
IncompleteRead: IncompleteRead(128135 bytes read)
This post (IncompleteRead using httplib) suggests that it is an httplib error. I am not sure how to handle it since I am not calling httplib directly.
Any suggestions for getting this load to work on Linux or what the IncompleteRead error message means?
UPDATE:
The IncompleteRead error is being caused by a Neo4j error being returned. The line returned in _read_chunked that is causing the error is:
pe}"}]}],"errors":[{"code":"Neo.TransientError.Network.UnknownFailure"
Neo4j docs say this is an unknown network error.
Although I can't say for sure, this implies some kind of local network issue between client and server rather than a bug within the library. Py2neo wraps httplib (which is pretty solid itself) and, from the stack trace, it looks as though the client is expecting more chunks from a chunked response.
To diagnose further, you could make some curl calls from your Linux application server to your database server and see what succeeds and what doesn't. If that works, try writing a quick and dirty Python script to make the same calls with httplib directly.
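Something like this, assuming the endpoint from your log output (http://neo4j1.qs:7474/db/data/transaction/commit) and a trivial statement; this is only a sketch for reproducing the read behaviour outside py2neo:
import httplib
import json

# POST a minimal Cypher statement to the same transactional endpoint py2neo uses.
conn = httplib.HTTPConnection('neo4j1.qs', 7474)
body = json.dumps({"statements": [{"statement": "RETURN 1"}]})
conn.request('POST', '/db/data/transaction/commit', body,
             {'Content-Type': 'application/json', 'Accept': 'application/json'})
resp = conn.getresponse()
print(resp.status)
print(resp.reason)
print(resp.read())  # if this also raises IncompleteRead, the problem is below py2neo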
UPDATE 1: Given the update above and the fact that the server streams its responses, I'm thinking that the chunk size might represent the intended payload but the error cuts the response short. Recreating the issue with curl certainly seems like the best next step to help determine whether it is a fault in the driver, the server or something else.
UPDATE 2: Looking again this morning, I notice that you're using Python substitution for the properties within the MERGE statement. As good practice, you should use parameter substitution at the Cypher level:
merge_query = "MERGE (a:label {name:{name}, name_var:{name_var}})"
merge_params = {"name": id, "name_var": name}
tx.append(merge_query, merge_params)
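Putting that together, a minimal sketch of load_data() rewritten with parameters, keeping the same py2neo 1.6 session/transaction calls (and the same name/name_var-to-id/name mapping) as your snippet:
def load_data():
    tx = session.create_transaction()
    for row in dataframe.iterrows():  # dataframe is a pandas dataframe
        name = row[1].name
        id = row[1].id
        merge_query = "MERGE (a:label {name:{name}, name_var:{name_var}})"
        merge_params = {"name": id, "name_var": name}
        tx.append(merge_query, merge_params)
    tx.commit()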