I'm familiar with CURL in PHP but am using it for the first time in Python with pycurl.
I keep getting the error:
Exception Type: error
Exception Value: (2, '')
I have no idea what this could mean. Here is my code:
data = {'cmd': '_notify-synch',
        'tx': str(request.GET.get('tx')),
        'at': paypal_pdt_test}
post = urllib.urlencode(data)
b = StringIO.StringIO()
ch = pycurl.Curl()
ch.setopt(pycurl.URL, 'https://www.sandbox.paypal.com/cgi-bin/webscr')
ch.setopt(pycurl.POST, 1)
ch.setopt(pycurl.POSTFIELDS, post)
ch.setopt(pycurl.WRITEFUNCTION, b.write)
ch.perform()
ch.close()
The error refers to the line ch.setopt(pycurl.POSTFIELDS, post).
I do it like this:
post_params = [
    ('ASYNCPOST', True),
    ('PREVIOUSPAGE', 'yahoo.com'),
    ('EVENTID', 5),
]
resp_data = urllib.urlencode(post_params)
mycurl.setopt(pycurl.POSTFIELDS, resp_data)
mycurl.setopt(pycurl.POST, 1)
...
mycurl.perform()
I know this is an old post but I've just spent my morning trying to track down this same error. It turns out that there's a bug in pycurl that was fixed in 7.16.2.1 that caused setopt() to break on 64-bit machines.
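If you suspect you're hitting that bug, a quick way to check what you actually have installed (a minimal sketch):
import pycurl
# Shows the pycurl and libcurl versions in one string,
# e.g. 'PycURL/7.19.0 libcurl/7.26.0 ...'
print pycurl.version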
It would appear that your pycurl installation (or curl library) is damaged somehow. From the curl error codes documentation:
CURLE_FAILED_INIT (2)
Very early initialization code failed. This is likely to be an internal error or problem.
You will possibly need to re-install or recompile curl or pycurl.
However, to do a simple POST request like you're doing, you can actually use Python's urllib instead of cURL:
import urllib
# 'data' is the dict from your question
postdata = urllib.urlencode(data)
resp = urllib.urlopen('https://www.sandbox.paypal.com/cgi-bin/webscr', data=postdata)
# resp is a file-like object, which means you can iterate it,
# or read the whole thing into a string
output = resp.read()
# resp.code returns the HTTP response code
print resp.code # 200
# resp has other useful data; .info() returns an httplib.HTTPMessage
http_message = resp.info()
print http_message['content-length'] # '1536' or the like
print http_message.type # 'text/html' or the like
print http_message.typeheader # 'text/html; charset=UTF-8' or the like
# Make sure to close
resp.close()
To open an https:// URL, you may need to install PyOpenSSL:
http://pypi.python.org/pypi/pyOpenSSL
Some distributions include this; others provide it as an extra package through your favorite package manager.
Edit: Have you called pycurl.global_init() yet? I still recommend urllib/urllib2 where possible, as your script will be more easily moved to other systems.
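For reference, a minimal sketch of that initialization call, assuming the standard pycurl API:
import pycurl
# Explicitly initialize libcurl before creating any Curl objects;
# GLOBAL_ALL covers both the SSL and Win32 socket subsystems.
pycurl.global_init(pycurl.GLOBAL_ALL)
ch = pycurl.Curl()
# ... use ch as before ...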
I'm trying to post to a server using the following script:
import requests
data = {
    'query': 'GetProcess',
    'getFrom': '2018-12-06 10:10:10.000',
}
response = requests.post('http://localhost/monitor', data=data)
I cannot find where exactly, but the space character in the getFrom element is being replaced with a +: '2018-12-06+10:10:10.000'
This doesn't match the syntax SQL expects on our server, so the query fails.
I read here (https://stackoverflow.com/a/12528097) that setting the Content-type might help. I tried text/html, text/plain, application/json, and nothing seems to change.
Interestingly, the following (equivalent?) bash command succeeds:
curl -d 'query=GetProcess&getFrom=2018-12-06 10:10:10.000' localhost/monitor
I'm looking for a way to make my server receive "getFrom" : "2018-12-06 10:10:10.000" in the request body.
I found a way to make this work. The problem was the URL encoding that requests applies to form data by default. The requests documentation shows how to work around this behavior using PreparedRequests: http://docs.python-requests.org/en/master/user/advanced/#prepared-requests
Essentially, instead of using the requests.post() wrapper, make the function calls explicitly. This way, you will be able to control exactly what is sent. In my case, the solution was to do:
import requests
data = {
    'query': 'GetProcess',
    'getFrom': '2018-12-06 10:10:10.000'
}
s = requests.Session()
req = requests.Request('POST', 'http://' + ipAddress + '/monitor', data=data)
prepped = s.prepare_request(req)
# Undo the '+' that urlencode substituted for the space
prepped.body = prepped.body.replace("+", " ")
response = s.send(prepped)
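As a side note, requests sends a plain string passed as data verbatim, so an alternative (an untested sketch) is to build the body yourself and keep the literal space, mirroring what curl -d does:
import requests

# Hand-built body keeps the literal space, just like
# curl -d 'query=GetProcess&getFrom=2018-12-06 10:10:10.000'
body = 'query=GetProcess&getFrom=2018-12-06 10:10:10.000'
response = requests.post('http://localhost/monitor', data=body,
                         headers={'Content-Type': 'application/x-www-form-urlencoded'})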
I'm trying to create a super-simplistic Virtual In / Out Board using wx/Python. I've got the following code in place for one of my requests to the server where I'll be storing the data:
data = urllib.urlencode({'q': 'Status'})
u = urllib2.urlopen('http://myserver/inout-tracker', data)
for line in u.readlines():
    print line
Nothing special going on there. The problem I'm having is that, based on how I read the docs, this should perform a Post Request because I've provided the data parameter and that's not happening. I have this code in the index for that url:
if (!isset($_POST['q'])) { die ('No action specified'); }
echo $_POST['q'];
And every time I run my Python App I get the 'No action specified' text printed to my console. I'm going to try to implement it using the Request Objects as I've seen a few demos that include those, but I'm wondering if anyone can help me explain why I don't get a Post Request with this code. Thanks!
-- EDITED --
This code does work and Posts to my web page properly:
data = urllib.urlencode({'q': 'Status'})
h = httplib.HTTPConnection('myserver:8080')
headers = {"Content-type": "application/x-www-form-urlencoded",
           "Accept": "text/plain"}
h.request('POST', '/inout-tracker/index.php', data, headers)
r = h.getresponse()
print r.read()
I am still unsure why the urllib2 library doesn't Post when I provide the data parameter - to me the docs indicate that it should.
u = urllib2.urlopen('http://myserver/inout-tracker', data)
h.request('POST', '/inout-tracker/index.php', data, headers)
Using the path /inout-tracker without a trailing / doesn't fetch index.php. Instead, the server issues a 302 redirect to the version with the trailing /.
Doing a 302 will typically cause clients to convert a POST to a GET request.
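So a one-character fix should be enough here (a sketch, assuming the same server setup as in the question):
import urllib, urllib2

data = urllib.urlencode({'q': 'Status'})
# The trailing slash avoids the 302 redirect that downgrades the POST to a GET
u = urllib2.urlopen('http://myserver/inout-tracker/', data)
print u.read()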
I am checking for url status with this code:
h = httplib2.Http()
hdr = {'User-Agent': 'Mozilla/5.0'}
resp = h.request("http://" + url, headers=hdr)
if int(resp[0]['status']) < 400:
    return 'ok'
else:
    return 'bad'
and getting
Error -3 while decompressing data: incorrect header check
The URL I am checking is:
http://www.sueddeutsche.de/wirtschaft/deutschlands-innovationsangst-wir-neobiedermeier-1.2117528
the Exception Location is:
Exception Location: C:\Python27\lib\site-packages\httplib2-0.9-py2.7.egg\httplib2\__init__.py in _decompressContent, line 403
try:
    encoding = response.get('content-encoding', None)
    if encoding in ['gzip', 'deflate']:
        if encoding == 'gzip':
            content = gzip.GzipFile(fileobj=StringIO.StringIO(new_content)).read()
        if encoding == 'deflate':
            content = zlib.decompress(content)  ## <---- error line
        response['content-length'] = str(len(content))
        # Record the historical presence of the encoding in a way the won't interfere.
        response['-content-encoding'] = response['content-encoding']
        del response['content-encoding']
except IOError:
    content = ""
The HTTP status is 200, which is fine for my case, but I am still getting this error.
I actually need only the HTTP status; why is it reading the whole content?
You may have your reasons for choosing httplib2, but it's very easy to get the status code of an HTTP request using the requests module. Install it with the following command:
$ pip install requests
See an extremely simple example below.
In [1]: import requests as rq
In [2]: url = "http://www.sueddeutsche.de/wirtschaft/deutschlands-innovationsangst-wir-neobiedermeier-1.2117528"
In [3]: r = rq.get(url)
In [4]: r
Out[4]: <Response [200]>
Unless you have a hard constraint that requires httplib2 specifically, this solves your problem.
This may be a bug (or just uncommon design decision) in httplib2. I don't get this problem with urllib2 or httplib in the 2.x stdlib, or urllib.request or http.client in the 3.x stdlib, or the third-party libraries requests, urllib3, or pycurl.
So, is there a reason you need to use this particular library?
If so:
I actually need only http status, why is it reading the whole content?
Well, most HTTP libraries are going to read and parse the whole content, or at least the headers, before returning control. That way they can respond to simple requests about the headers or chunked encoding or MIME envelope or whatever without any delay.
Also, many of them automate things like 100 continue, 302 redirect, various kinds of auth, etc., and there's no way they could do that if they didn't read ahead. In particular, according to the description for httplib2, handling these things automatically is one of the main reasons you should use it in the first place.
Also, the first TCP read is nearly always going to include the headers anyway, so why not read them?
This means that if the headers are invalid, you may get an exception immediately. They may still provide a way to get the status code (or the raw headers, or other information) anyway.
As a side note, if you only want the HTTP status, you should probably send a HEAD request rather than a GET. Unless you're writing and testing a server, you can almost always rely on the fact that, as the RFC says, the status and headers should be identical to what you'd get with GET. In fact, that would almost certainly solve things in this case: if there is no body to decompress, the fact that httplib2 has gotten confused into thinking the body is gzipped when it isn't won't matter.
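For example, with the same httplib2 setup and url variable as in the question, a HEAD request might look like this (a sketch; it assumes the server honors HEAD correctly):
import httplib2

h = httplib2.Http()
hdr = {'User-Agent': 'Mozilla/5.0'}
# HEAD returns status and headers but no body, so there is
# nothing for httplib2 to (mis)decompress.
resp, content = h.request("http://" + url, "HEAD", headers=hdr)
print resp.status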
I've got a cURL command that does what I need, and I'm trying to translate it into python. Here's the cURL:
curl http://example.com:1234/faye -d 'message={"channel":"/test","data":"hello world"}'
This talks to a Faye server and publishes a message to the channel /test. This works. I'm trying to do that same publishing from within Python. I've looked at this and this, and neither of them helped me; I get a 400 error with both of those methods. Here's some of the stuff I've tried from within the Python shell:
import urllib2, json, requests
addr = 'http://example.com:1234/faye'
data = {'message': {'channel': '/test', 'data': 'hello from python'}}
data_as_json = json.dumps(data)
requests.post(addr, data=data)
requests.post(addr, params=data)
requests.post(addr, data=data_as_json)
requests.post(addr, params=data_as_json)
req = urllib2.Request(addr, data)
urllib2.urlopen(req)
req = urllib2.Request(addr, data_as_json)
urllib2.urlopen(req)
# All these things give 400 errors
Unfortunately I can't wireshark the connection since it's over an SSH tunnel (so everything's encrypted and on the wrong ports). Using the --trace option from cURL I can see that it's not url-encoding the data, so I know I don't need to do that. I also really don't want to Popen cURL itself.
message in this case is the name of a POST variable, and shouldn't be included in the JSON.
Thus, what you actually want to do is this:
data = urllib.urlencode({'message': json.dumps({'channel': '/test', 'data': 'hello from python'})})
conn = urllib2.urlopen('http://example.com:1234/faye', data=data)
print conn.read()
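If you'd rather stay with requests, the same idea applies: form-encode a flat dict whose message value is the JSON string (a sketch under the same assumptions):
import json, requests

# requests form-encodes the flat dict, so the JSON string ends up
# in the POST variable 'message', just like the curl -d command.
payload = {'message': json.dumps({'channel': '/test', 'data': 'hello from python'})}
response = requests.post('http://example.com:1234/faye', data=payload)
print response.status_code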
I have a python program that takes pictures and I am wondering how I would write a program that sends those pictures to a particular URL.
If it matters, I am running this on a Raspberry Pi.
(Please excuse my simplicity, I am very new to all this)
Many folks turn to the requests library for this sort of thing.
For something lower level, you might use urllib2
I've been using the requests package as well. Here's an example POST from the requests documentation.
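Since the goal is sending pictures, here is a minimal sketch of a multipart file upload with requests; the URL and the form field name 'file' are placeholders, and the server decides what it actually expects:
import requests

# 'files' makes requests build a multipart/form-data upload.
url = 'http://example.com/upload'  # placeholder URL
with open('picture.jpg', 'rb') as f:
    response = requests.post(url, files={'file': f})
print response.status_code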
If you'd rather use cURL, try PycURL.
Install it using:
sudo pip install pycurl
Here is an example of how to send data using it:
import pycurl
import json
import urllib
import cStringIO
url = 'your_url'
first_param = '12'
dArrayData = [{'data' : 'first'}, {'data':'second'}]
json_to_send = json.dumps(dArrayData, separators=(',',':'), sort_keys=False)
curlClient = pycurl.Curl()
curlClient.setopt(curlClient.USERAGENT, 'curl-user-agent')
# Sets the url of the service
curlClient.setopt(curlClient.URL, url)
# Sets the request to be of the type POST
curlClient.setopt(curlClient.POST, True)
# Sets the params of the post request
send_params = 'first_param=' + first_param + '&data=' + urllib.quote(json_to_send)
curlClient.setopt(curlClient.POSTFIELDS, send_params)
# Setting the buffer for the response to be written to
bufResponse = cStringIO.StringIO()
curlClient.setopt(curlClient.WRITEFUNCTION, bufResponse.write)
# Setting to fail on error
curlClient.setopt(curlClient.FAILONERROR, True)
# Sets the time out for the connections
curlClient.setopt(pycurl.CONNECTTIMEOUT, 25)
curlClient.setopt(pycurl.TIMEOUT, 25)
response = ''
try:
    # Performs the request
    curlClient.perform()
except pycurl.error as err:
    errno, errString = err.args
    print '========'
    print 'ERROR sending the data:'
    print '========'
    print 'CURL error code:', errno
    print 'CURL error Message:', errString
else:
    response = bufResponse.getvalue()
    # Do whatever you want with the response, e.g. parse it as JSON.
finally:
    curlClient.close()
    bufResponse.close()
The requests library is the most supported and advanced way to do this.