I'm new to Python. My task is to scan for wifi and send the data to a server; below is the format I have to send. It works fine when I enter it manually in the browser URL text box:
http://223.56.124.58:8080/ppod-web/ProcessRawData?data={"userId":"2220081127-14","timestamp":"2010-04-12 10:54:24","wifi":{"ssid":"guest","rssi":"80"}}
here is my code:
import httplib
import urllib
params = urllib.urlencode('{\"userId\":\"20081127-14\",\"timestamp\":\"2010-04-12 10:54:24\",\"wifi\":{\"ssid\":\"guest\",\"rssi\":\"80\"}}')
headers = {"Content-type":"application/x-www-form-urlencoded","Accept":"text/plain"}
conn = httplib.HTTPConnection("http://223.56.124.58:8080")
conn.request("POST","ppod-web/ProcessRawData?data=",params,headers)
response = conn.getresponse()
print response.status
print "-----"
print response.reason
data = response.read()
print data
conn.close()
thanks
Most likely, the issue with the script you posted in the question is that you cannot directly do:
conn=httplib.HTTPConnection("http://223.56.124.58:8080/wireless")
The exception is triggered in getaddrinfo(), which calls the C function getaddrinfo(); that returns EAI_NONAME, documented as:
"The node or service is not known; or both node and service are NULL; or AI_NUMERICSERV was specified in hints.ai_flags and service was not a numeric port-number string."
There obviously is a problem with the parameters passed to getaddrinfo, and most likely you are trying to get address information for the host "223.56.124.58:8080/wireless". Oops!
Indeed, you cannot directly connect to a URL. As the documentation clearly states and shows, you connect to the server:
conn = httplib.HTTPConnection("223.56.124.58", 8080)
Then you can do:
conn.request("POST", "wireless", params, headers)
What about the script you are actually using?
conn.request("POST","http://202.45.139.58:8080/ppod-web",params,headers)
Even if the connection was correctly formed, the second argument is sent verbatim as the request path, so the server would see a request for "http://202.45.139.58:8080/ppod-web" rather than "/ppod-web", which is almost certainly not what you want. What you really want probably is:
conn = httplib.HTTPConnection("202.45.139.58", 8080)
conn.request("POST", "ppod-web", params, headers)
The error is shown for this line because most likely HTTPConnection is a lazy object and only attempts to actually connect to the server when you call request().
After you're done fixing the above, you'll need to fix params.
>>> urllib.urlencode({"wifi":{"ssid":"guest","rssi","80"}})
SyntaxError: invalid syntax
>>> urllib.urlencode({"wifi":{"ssid":"guest","rssi":"80"}})
'wifi=%7B%27rssi%27%3A+%2780%27%2C+%27ssid%27%3A+%27guest%27%7D'
To get what you think you want to get, you should do:
>>> urllib.urlencode({"data": {"wifi":{"ssid":"guest","rssi":"80"}}})
'data=%7B%27wifi%27%3A+%7B%27rssi%27%3A+%2780%27%2C+%27ssid%27%3A+%27guest%27%7D%7D'
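Note that this output is the URL-encoded Python repr of the inner dict, not JSON. If the server really expects the JSON document shown in the working browser URL (an assumption on my part), one way to build it is to serialize with json.dumps first and urlencode the result:
import json
import urllib

wifi = {"ssid": "guest", "rssi": "80"}
# Encode the JSON text, not the dict itself; key order inside the JSON may vary.
params = urllib.urlencode({"data": json.dumps({"wifi": wifi})})
# params is now 'data=%7B%22wifi%22%3A+...%7D' -- the JSON document, percent-encoded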
Instead of:
conn = httplib.HTTPConnection("http://223.56.124.58:8080/wireless")
conn.request("POST", "data", params, headers)
try:
conn = httplib.HTTPConnection("223.56.124.58", port=8080)
conn.request("POST", "/wireless", params, headers)
Not sure if it will resolve all your problems, but at least your code will conform to the method/constructor signatures.
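Putting the two answers together, a rough sketch of the corrected flow might look like the following. It mirrors the browser URL the poster says works, so the JSON travels in the query string; whether the server also accepts it in a POST body is an assumption I can't verify:
import httplib
import json
import urllib

payload = {
    "userId": "2220081127-14",
    "timestamp": "2010-04-12 10:54:24",
    "wifi": {"ssid": "guest", "rssi": "80"},
}
query = urllib.urlencode({"data": json.dumps(payload)})

# Host and port only in the constructor; no scheme, no path.
conn = httplib.HTTPConnection("223.56.124.58", 8080)
# The path (with the query string) goes to request().
conn.request("GET", "/ppod-web/ProcessRawData?" + query)
response = conn.getresponse()
print response.status, response.reason
print response.read()
conn.close()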
The traceback doesn't come from the same code you pasted.
On the error traceback there's a line:
conn.request("POST","http://202.45.139.58:8080/ppod-web",params,headers)
It is line 9 of http.py, but it is not in the code you pasted.
Please paste the actual code.
Related
I'm trying to make a POST request in Python, but I get an internal server error when issuing the request.
I'm trying to intercept it with a try statement, but that doesn't seem to work.
import logging
import requests
logging.basicConfig(filename='python.log', filemode='w', level=logging.DEBUG)
url = "https://redacted-url.com/my-api/check_email"
json = {"email":request.params["email"].strip(), "list":request.params["list"]}
headers = {"Content-Type":"application/json", "Accept": "text/plain"}
try:
    r = requests.post(url, headers=headers, json=json)
except requests.exceptions.RequestException as e:
    logging.error(e, exc_info=True)
A: I have no idea where that log file would be stored. Do I have to add the full path on the server? What if I just use «python.log»? Where would it be stored?
B: the try/except doesn't seem to work, I still get an internal server error
C: the error definitely occurs on the line r = requests.post(url, headers=headers, json=json). If I comment that out, the error doesn't occur.
D: Since I don't get an error that's meaningful: What am I doing wrong with that request? This is actually my main problem, but it would be nice to figure out how to log that error and how to intercept it.
Last but not least: If I run the same command from the terminal, the request is processed fine. WTH???
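One thing worth noting (not something the post states): requests.post only raises RequestException for network-level failures such as DNS errors or timeouts; an HTTP 500 comes back as a normal response object, so the except block never fires for it. A hedged sketch of how the server's error could be logged, with placeholder values standing in for request.params:
import logging
import requests

logging.basicConfig(filename='python.log', filemode='w', level=logging.DEBUG)

url = "https://redacted-url.com/my-api/check_email"
payload = {"email": "user@example.com", "list": "newsletter"}   # placeholders
headers = {"Content-Type": "application/json", "Accept": "text/plain"}

try:
    r = requests.post(url, headers=headers, json=payload, timeout=10)
    if r.status_code >= 400:
        # The body of a 500 often carries the server's own error message.
        logging.error("Server returned %s: %s", r.status_code, r.text)
    r.raise_for_status()   # optionally turn HTTP errors into exceptions too
except requests.exceptions.RequestException as e:
    logging.error(e, exc_info=True)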
I'm trying to connect to the OPS API but get an error when trying to connect to the URL. I get the access_token just fine, as detailed in the documentation (page 34), but when I try to connect to the URL I'm interested in, I get a 'Name or Service not found' error.
The documentation states (page 35) that the client should access the OPS resource over an encrypted HTTPS connection, which I think might be the missing step in my code creating this error (or not).
Below is the code I use (replacing #### with my access_token):
from http.client import HTTPSConnection
c = HTTPSConnection('ops.epo.org/3.2/rest-services/published-data/search?q=Automation', port=443)
headers2 = {'Authorization': 'Bearer ########kv5'}
c.request('GET', '/', headers=headers2)
res = c.getresponse()
data = res.read()
Many thanks.
Not sure why this issue was happening earlier, but it seems to be fine now when I run the following code:
import requests

headers = {'Authorization': 'Bearer %s' % token}
query = requests.get('http://ops.epo.org/3.2/rest-services/published-data/search?q=Automation', headers=headers)
query.content
I get a status response code of 200, and I can parse the content just fine.
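For completeness, the http.client approach from the question can also be made to work by giving HTTPSConnection only the host name and moving the path and query string into request(). This is just a sketch; the token placeholder is kept from the question:
from http.client import HTTPSConnection

conn = HTTPSConnection('ops.epo.org', port=443)     # host only, no path
headers = {'Authorization': 'Bearer ########kv5'}   # placeholder token
conn.request('GET', '/3.2/rest-services/published-data/search?q=Automation', headers=headers)
res = conn.getresponse()
print(res.status, res.reason)
data = res.read()
conn.close()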
I'm trying to use httplib to check whether each URL in a list of 30k+ websites still works. Each URL is read in from a .csv file into a matrix, and that matrix then goes through a for-loop for each URL in the file. Afterwards (where my problem is), I run a function, runInternet(url), which takes the URL string and returns True if the URL works and False if it doesn't.
I've used this as my baseline, and have also looked into this. While I've tried both, I don't quite understand the latter, and neither works...
def runInternet(url):
    try:
        page = httplib.HTTPConnection(url)
        page.connect()
    except httplib.HTTPException as e:
        return False
    return True
However, all the links come back as broken! I randomly chose a few of them, and they work when I enter them in my browser... so what's happening? I've narrowed the problem down to this line:
page = httplib.HTTPConnection(url)
Edit: I tried substituting 'www.google.com' for the url and the program works; when I print e, it says nonnumeric port...
You could troubleshoot this by allowing the HTTPException to propagate instead of catching it. The specific exception type would likely help understand what is wrong.
I suspect though that the problem is this line:
page = httplib.HTTPConnection(url)
The first argument to the constructor is not a URL. Instead, it's a host name. For example, this code sample passing a URL to the constructor fails:
page = httplib.HTTPConnection('https://www.google.com/')
page.connect()
httplib.InvalidURL: nonnumeric port: '//www.google.com/'
Instead, if I pass host name to the constructor, and then URL to the request method, then it works:
conn = httplib.HTTPConnection('www.google.com')
conn.request('GET', '/')
resp = conn.getresponse()
print resp.status, resp.reason
200 OK
For reference, here is the relevant abridged documentation of HTTPConnection:
class HTTPConnection
| Methods defined here:
|
| __init__(self, host, port=None, strict=None, timeout=<object object>, source_address=None)
...
| request(self, method, url, body=None, headers={})
| Send a complete request to the server.
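Applied to the function from the question, one possible rewrite (a sketch, not the poster's final code) splits each URL into host and path before connecting, and catches socket errors as well, since those are not HTTPException subclasses:
import httplib
from urlparse import urlparse

def runInternet(url):
    """Return True if the URL answers with a non-error status, False otherwise."""
    try:
        # Tolerate bare host names from the CSV, e.g. 'www.example.com'.
        p = urlparse(url if '//' in url else 'http://' + url)
        conn = httplib.HTTPConnection(p.netloc, timeout=10)   # host[:port], not the full URL
        conn.request('HEAD', p.path or '/')
        resp = conn.getresponse()
        conn.close()
        return resp.status < 400
    except Exception:
        # Covers httplib.HTTPException, socket.error, invalid ports, etc.
        return False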
so I want to check if a URL is reachable from python, and I got this code from googling:
import http.client
from urllib.parse import urlparse

def checkUrl(url):
    p = urlparse(url)
    conn = http.client.HTTPConnection(p.netloc)
    conn.request('HEAD', p.path)
    resp = conn.getresponse()
    return resp.status < 400
Here is my URL: https://eurotableau.nomisonline.com.
It works fine if I just pass that into the function; resp.status is 302. However, if I add port 443 at the end of it, https://eurotableau.nomisonline.com:443, it returns False and resp.status is 400. I tried both URLs in Google Chrome and both of them work. So my question is: why is this happening? Is there any way I can include the port value and still get a valid resp.status (< 400)? Thanks.
Use http.client.HTTPSConnection instead. The plain old HTTPConnection ignores the protocol that is part of the URL.
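A sketch of the function adjusted that way (same Python 3 modules as in the question): HTTPSConnection negotiates TLS, which is what port 443 expects, whereas sending plain HTTP there is typically answered with a 400.
import http.client
from urllib.parse import urlparse

def checkUrl(url):
    p = urlparse(url)
    # Choose the connection class from the scheme instead of ignoring it.
    if p.scheme == 'https':
        conn = http.client.HTTPSConnection(p.netloc)
    else:
        conn = http.client.HTTPConnection(p.netloc)
    conn.request('HEAD', p.path or '/')
    resp = conn.getresponse()
    conn.close()
    return resp.status < 400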
If you do not require the HEAD method but just wish to check whether the host is available, then why not do:
from urllib2 import urlopen
try:
    u = urlopen("https://eurotableau.nomisonline.com")
    u.close()
    print "Everything fine!"
except Exception, e:
    if hasattr(e, "code"):
        print "Server is there but something is wrong with rest of URL"
    else:
        print "Server is on vacations or was never there!"
    print e
This will establish a connection with the server, but it won't download any data unless you read it. It will only read a few KB to get the headers (as with the HEAD method) and wait for you to request more, but you close it right there.
So, you can catch an exception and see what the problem is, or if there is no exception, just close the connection.
urllib2 will handle HTTPS and protocol://user#URL:PORT for you neatly.
No worries about anything.
I'm trying to write a small program that will simply display the header information of a website. Here is the code:
import urllib2
url = 'http://some.ip.add.ress/'
request = urllib2.Request(url)
try:
    html = urllib2.urlopen(request)
except urllib2.URLError, e:
    print e.code
else:
    print html.info()
If 'some.ip.add.ress' is google.com, then the header information is returned without a problem. However, if it's an IP address that requires basic authentication before access, it returns a 401. Is there a way to get the header (or any other) information without authentication?
I've worked it out.
After the try has failed due to unauthorized access, the following modification will print the header information:
print e.info()
instead of:
print e.code
Thanks for looking :)
If you want just the headers, instead of using urllib2, you should go lower level and use httplib
import httplib
conn = httplib.HTTPConnection(host)
conn.request("HEAD", path)
print conn.getresponse().getheaders()
If all you want are HTTP headers, then you should make a HEAD request, not a GET request. You can see how to do this by reading Python - HEAD request with urllib2.
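A minimal sketch of that approach with urllib2 (Python 2, as in the question): subclassing Request so urlopen issues HEAD instead of GET, and reading the headers from the error object when the server answers 401.
import urllib2

class HeadRequest(urllib2.Request):
    """Make urlopen send HEAD instead of GET."""
    def get_method(self):
        return "HEAD"

try:
    response = urllib2.urlopen(HeadRequest('http://some.ip.add.ress/'))
    print response.info()
except urllib2.HTTPError, e:
    print e.code
    print e.info()   # headers are available even on a 401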