Curl Equivalent in Python

I have a python program that takes pictures and I am wondering how I would write a program that sends those pictures to a particular URL.
If it matters, I am running this on a Raspberry Pi.
(Please excuse my simplicity, I am very new to all this)

Many folks turn to the requests library for this sort of thing.
For something lower level, you might use urllib2.
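For instance, here is a minimal sketch of sending a picture with requests; the URL, field name, and file name are placeholders you would swap for your own:
import requests

url = 'http://example.com/upload'  # placeholder: wherever the pictures should go

# open the image in binary mode and send it as a multipart file upload
with open('picture.jpg', 'rb') as f:
    response = requests.post(url, files={'file': f})

print response.status_code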

I've been using the requests package as well. Here's an example POST from the requests documentation.
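Something along these lines (the quickstart snippet, using httpbin.org as a stand-in endpoint):
import requests

r = requests.post('http://httpbin.org/post', data={'key': 'value'})
print r.text  # httpbin echoes the posted form back as JSON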

If you feel you really want to use cURL, try PycURL.
Install it using:
sudo pip install pycurl
Here is an example of how to send data using it:
import pycurl
import json
import urllib
import cStringIO

url = 'your_url'
first_param = '12'
dArrayData = [{'data': 'first'}, {'data': 'second'}]
json_to_send = json.dumps(dArrayData, separators=(',', ':'), sort_keys=False)

curlClient = pycurl.Curl()
curlClient.setopt(curlClient.USERAGENT, 'curl-user-agent')
# Set the URL of the service
curlClient.setopt(curlClient.URL, url)
# Make the request a POST
curlClient.setopt(curlClient.POST, True)
# Set the params of the POST request
send_params = 'first_param=' + first_param + '&data=' + urllib.quote(json_to_send)
curlClient.setopt(curlClient.POSTFIELDS, send_params)
# Set the buffer for the response to be written to
bufResponse = cStringIO.StringIO()
curlClient.setopt(curlClient.WRITEFUNCTION, bufResponse.write)
# Fail on HTTP errors (4xx/5xx)
curlClient.setopt(curlClient.FAILONERROR, True)
# Set the timeouts for the connection
curlClient.setopt(pycurl.CONNECTTIMEOUT, 25)
curlClient.setopt(pycurl.TIMEOUT, 25)
response = ''
try:
    # Perform the request
    curlClient.perform()
except pycurl.error as err:
    errno, errString = err.args  # unpack via .args; direct unpacking broke in Python 2.6+
    print '========'
    print 'ERROR sending the data:'
    print '========'
    print 'CURL error code:', errno
    print 'CURL error message:', errString
else:
    response = bufResponse.getvalue()
    # Do whatever you want with the response, e.g. parse it as JSON
finally:
    curlClient.close()
    bufResponse.close()

The requests library is the most supported and advanced way to do this.

Related

making a simple GET/POST with url Encoding python

I have a custom URL of the form
http://somekey:somemorekey@host.com/getthisfile.json
I tried everything I could find but keep getting errors:
Method 1:
from httplib2 import Http
ipdb> from urllib import urlencode
h=Http()
ipdb> resp, content = h.request("3b8138fedf8:1d697a75c7e50@abc.myshopify.com/admin/shop.json")
Error:
No help on =Http()
Got this method from here
Method 2:
import urllib
urllib.urlopen(url).read()
Error:
*** IOError: [Errno url error] unknown url type: '3b8108519e5378'
I guess something is wrong with the encoding, so I tried:
ipdb> url.encode('idna')
*** UnicodeError: label empty or too long
Is there any way to make this complex URL GET call easy?
You are using a PDB-based debugger instead of an interactive Python prompt. h is a command in PDB. Use ! to prevent PDB from trying to interpret the line as a command:
!h = Http()
urllib requires that you pass it a fully qualified URL; your URL is lacking a scheme:
urllib.urlopen('http://' + url).read()
Your URL does not appear to use any international characters in the domain name, so you do not need to use IDNA encoding.
You may want to look into the 3rd-party requests library; it makes interacting with HTTP servers much easier and more straightforward:
import requests
r = requests.get('http://abc.myshopify.com/admin/shop.json', auth=("3b8138fedf8", "1d697a75c7e50"))
data = r.json() # interpret the response as JSON data.
The current de facto HTTP library for Python is Requests.
import requests

response = requests.get(
    "http://abc.myshopify.com/admin/shop.json",
    auth=("3b8138fedf8", "1d697a75c7e50")
)
response.raise_for_status()  # Raise an exception if an HTTP error occurs
print response.content  # Do something with the content.

How do I get HTTP header info without authentication using python?

I'm trying to write a small program that will simply display the header information of a website. Here is the code:
import urllib2

url = 'http://some.ip.add.ress/'
request = urllib2.Request(url)
try:
    html = urllib2.urlopen(request)
except urllib2.URLError, e:
    print e.code
else:
    print html.info()
If 'some.ip.add.ress' is google.com then the header information is returned without a problem. However, if it's an IP address that requires basic authentication before access, then it returns a 401. Is there a way to get header (or any other) information without authentication?
I've worked it out.
After the try block fails due to unauthorized access, the following modification will print the header information:
print e.info()
instead of:
print e.code
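Putting that together, a sketch of the corrected handler; .info() is available because a 401 raises urllib2.HTTPError, whose instances double as response objects:
import urllib2

request = urllib2.Request('http://some.ip.add.ress/')
try:
    response = urllib2.urlopen(request)
except urllib2.HTTPError, e:
    # the error response still carries headers
    print e.info()
else:
    print response.info()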
Thanks for looking :)
If you want just the headers, instead of using urllib2, you should go lower level and use httplib:
import httplib

host = 'some.ip.add.ress'  # the server to query
path = '/'
conn = httplib.HTTPConnection(host)
conn.request("HEAD", path)
print conn.getresponse().getheaders()
If all you want are the HTTP headers, then you should make a HEAD request, not a GET request. You can see how to do this by reading Python - HEAD request with urllib2.
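The usual recipe there subclasses urllib2.Request to override the method; a sketch:
import urllib2

class HeadRequest(urllib2.Request):
    # urllib2 decides GET vs POST via get_method(); override it to send HEAD
    def get_method(self):
        return "HEAD"

response = urllib2.urlopen(HeadRequest('http://some.ip.add.ress/'))
print response.info()  # headers only; the body of a HEAD response is empty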

Http POST Curl in python

I'm having trouble understanding how to issue an HTTP POST request using curl from inside Python.
I'm trying to post to the Facebook Open Graph. Here is the example they give, which I'd like to replicate exactly in Python.
curl -F 'access_token=...' \
-F 'message=Hello, Arjun. I like this new API.' \
https://graph.facebook.com/arjun/feed
Can anyone help me understand this?
You can use httplib to POST with Python, or the higher-level urllib / urllib2:
import urllib
params = {}
params['access_token'] = '*****'
params['message'] = 'Hello, Arjun. I like this new API.'
params = urllib.urlencode(params)
f = urllib.urlopen("https://graph.facebook.com/arjun/feed", params)
print f.read()
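For comparison, a sketch of the same POST with the requests library; this assumes, as the urllib version above does, that the endpoint accepts an ordinary form-encoded body in place of curl's multipart -F fields:
import requests

params = {
    'access_token': '*****',
    'message': 'Hello, Arjun. I like this new API.',
}
response = requests.post('https://graph.facebook.com/arjun/feed', data=params)
print response.text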
There is also a Facebook-specific higher-level library for Python that does all the POSTing for you:
https://github.com/pythonforfacebook/facebook-sdk/
https://github.com/facebook/python-sdk
Why use curl in the first place?
Python has extensive libraries for Facebook, and built-in libraries for web requests, so calling another program and reading its output is unnecessary.
That said,
First, from the Python documentation:
data may be a string specifying additional data to send to the server,
or None if no such data is needed. Currently HTTP requests are the
only ones that use data; the HTTP request will be a POST instead of a
GET when the data parameter is provided. data should be a buffer in
the standard application/x-www-form-urlencoded format. The
urllib.urlencode() function takes a mapping or sequence of 2-tuples
and returns a string in this format. urllib2 module sends HTTP/1.1
requests with Connection:close header included.
So,
import urllib2, urllib

parameters = {}
parameters['token'] = 'sdfsdb23424'
parameters['message'] = 'Hello world'
target = 'http://www.target.net/work'

parameters = urllib.urlencode(parameters)
# note: urllib2.urlopen raises urllib2.HTTPError for 4xx/5xx responses,
# so the error branch below is mostly defensive
handler = urllib2.urlopen(target, parameters)

while True:
    if handler.code < 400:
        print 'done'
        # call your job
        break
    elif handler.code >= 400:
        print 'bad request or error'
        # failed
        break

Trouble with pycurl.POSTFIELDS

I'm familiar with CURL in PHP but am using it for the first time in Python with pycurl.
I keep getting the error:
Exception Type: error
Exception Value: (2, '')
I have no idea what this could mean. Here is my code:
import pycurl
import urllib
import StringIO

# request and paypal_pdt_test come from the surrounding Django view
data = {'cmd': '_notify-synch',
        'tx': str(request.GET.get('tx')),
        'at': paypal_pdt_test
        }
post = urllib.urlencode(data)
b = StringIO.StringIO()
ch = pycurl.Curl()
ch.setopt(pycurl.URL, 'https://www.sandbox.paypal.com/cgi-bin/webscr')
ch.setopt(pycurl.POST, 1)
ch.setopt(pycurl.POSTFIELDS, post)
ch.setopt(pycurl.WRITEFUNCTION, b.write)
ch.perform()
ch.close()
The error is referring to the line ch.setopt(pycurl.POSTFIELDS, post)
I do it like this:
post_params = [
('ASYNCPOST',True),
('PREVIOUSPAGE','yahoo.com'),
('EVENTID',5),
]
resp_data = urllib.urlencode(post_params)
mycurl.setopt(pycurl.POSTFIELDS, resp_data)
mycurl.setopt(pycurl.POST, 1)
...
mycurl.perform()
I know this is an old post but I've just spent my morning trying to track down this same error. It turns out that there's a bug in pycurl that was fixed in 7.16.2.1 that caused setopt() to break on 64-bit machines.
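If it helps to rule that out, pycurl reports its own and libcurl's versions:
import pycurl

print pycurl.version  # e.g. 'PycURL/7.19.0 libcurl/7.21.0 ...'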
It would appear that your pycurl installation (or curl library) is damaged somehow. From the curl error codes documentation:
CURLE_FAILED_INIT (2)
Very early initialization code failed. This is likely to be an internal error or problem.
You will possibly need to re-install or recompile curl or pycurl.
However, to do a simple POST request like you're doing, you can actually use Python's urllib instead of cURL:
import urllib
postdata = urllib.urlencode(data)
resp = urllib.urlopen('https://www.sandbox.paypal.com/cgi-bin/webscr', data=postdata)
# resp is a file-like object, which means you can iterate it,
# or read the whole thing into a string
output = resp.read()
# resp.code returns the HTTP response code
print resp.code # 200
# resp has other useful data, .info() returns a httplib.HTTPMessage
http_message = resp.info()
print http_message['content-length'] # '1536' or the like
print http_message.type # 'text/html' or the like
print http_message.typeheader # 'text/html; charset=UTF-8' or the like
# Make sure to close
resp.close()
To open an https:// URL, you may need to install PyOpenSSL:
http://pypi.python.org/pypi/pyOpenSSL
Some distributions include this; others provide it as an extra package through your favorite package manager.
Edit: Have you called pycurl.global_init() yet? I still recommend urllib/urllib2 where possible, as your script will be more easily moved to other systems.
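For reference, explicit initialization with a matching cleanup looks like this sketch:
import pycurl

pycurl.global_init(pycurl.GLOBAL_DEFAULT)  # explicit libcurl initialization
# ... create and use pycurl.Curl() handles here ...
pycurl.global_cleanup()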

Adding Cookie to SOAPpy Request

I'm trying to send a SOAP request using SOAPpy as the client. I've found some documentation stating how to add a cookie by extending SOAPpy.HTTPTransport, but I can't seem to get it to work.
I tried to use the example here,
but the server I'm trying to send the request to started throwing 415 errors, so I'm trying to accomplish this without using ClientCookie, or by figuring out why the server is throwing 415's when I do use it. I suspect it might be because ClientCookie uses urllib2 & http/1.1, whereas SOAPpy uses urllib & http/1.0
Does someone know how to make ClientCookie use HTTP/1.0, if that is even the problem, or a way to add a cookie to the SOAPpy headers without using ClientCookie? I've tried this code with other services; it only seems to throw errors when sending requests to Microsoft servers.
I'm still finding my footing with python, so it could just be me doing something dumb.
import sys, os, string
from SOAPpy import WSDL, HTTPTransport, Config, SOAPAddress, Types
import ClientCookie

Config.cookieJar = ClientCookie.MozillaCookieJar()

class CookieTransport(HTTPTransport):
    def call(self, addr, data, namespace, soapaction=None, encoding=None,
             http_proxy=None, config=Config):
        if not isinstance(addr, SOAPAddress):
            addr = SOAPAddress(addr, config)
        cookie_cutter = ClientCookie.HTTPCookieProcessor(config.cookieJar)
        hh = ClientCookie.HTTPHandler()
        hh.set_http_debuglevel(1)
        # TODO proxy support
        opener = ClientCookie.build_opener(cookie_cutter, hh)
        t = 'text/xml'
        if encoding != None:
            t += '; charset="%s"' % encoding
        opener.addheaders = [("Content-Type", t),
                             ("Cookie", "Username=foobar"),  # ClientCookie should handle
                             ("SOAPAction", "%s" % (soapaction))]
        response = opener.open(addr.proto + "://" + addr.host + addr.path, data)
        data = response.read()
        # get the new namespace
        if namespace is None:
            new_ns = None
        else:
            new_ns = self.getNS(namespace, data)
        print '\n' * 4, '-' * 50
        # return response payload
        return data, new_ns

url = 'http://www.authorstream.com/Services/Test.asmx?WSDL'
proxy = WSDL.Proxy(url, transport=CookieTransport)
print proxy.GetList()
Error 415 is because of an incorrect Content-Type header.
Install HttpFox for Firefox, or whatever tool you prefer (Wireshark, Charles, or Fiddler), to see what headers you are sending. Try Content-Type: application/xml.
...
t = 'application/xml';
if encoding != None:
t += '; charset="%s"' % encoding
...
If you are trying to send a file to the web server, use Content-Type: application/x-www-form-urlencoded.
A nice hack to use cookies with SOAPpy calls: Using Cookies with SOAPpy calls
