Python code like curl

In curl I do this:
curl -u email:password http://api.foursquare.com/v1/venue.json?vid=2393749
How can I do the same thing in Python?

Here's the equivalent in pycurl:
import pycurl
from StringIO import StringIO  # Python 2; pycurl writes str chunks here

response_buffer = StringIO()

curl = pycurl.Curl()
curl.setopt(curl.URL, "http://api.foursquare.com/v1/venue.json?vid=2393749")
# HTTP basic auth credentials, the equivalent of curl's -u option
curl.setopt(curl.USERPWD, '%s:%s' % ('youruser', 'yourpassword'))
# collect the response body in the buffer instead of dumping it to stdout
curl.setopt(curl.WRITEFUNCTION, response_buffer.write)
curl.perform()
curl.close()

response_value = response_buffer.getvalue()
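On Python 3 the StringIO module is gone and pycurl hands the write target bytes; a minimal sketch of the same request using io.BytesIO (same URL and placeholder credentials as above):
import pycurl
from io import BytesIO

buffer = BytesIO()
curl = pycurl.Curl()
curl.setopt(curl.URL, "http://api.foursquare.com/v1/venue.json?vid=2393749")
curl.setopt(curl.USERPWD, 'youruser:yourpassword')
curl.setopt(curl.WRITEDATA, buffer)  # pycurl writes the raw response bytes here
curl.perform()
curl.close()

response_value = buffer.getvalue().decode('utf-8')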

"The problem could be that the Python libraries, per HTTP-Standard, first send an unauthenticated request, and then only if it's answered with a 401 retry, are the correct credentials sent. If the Foursquare servers don't do "totally standard authentication" then the libraries won't work.
Try using headers to do authentication:"
taked from Python urllib2 Basic Auth Problem
import urllib2
import base64

req = urllib2.Request('http://api.foursquare.com/v1/venue.json?vid=%s' % self.venue_id)
# header name and value are separate arguments; the original answer passed
# 'Authorization: Basic ' as the header name, which sends a malformed header
req.add_header('Authorization', 'Basic %s' % base64.b64encode('email:password'))
res = urllib2.urlopen(req)
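For Python 3, where urllib2 became urllib.request and b64encode operates on bytes, a hedged equivalent (venue_id stands in for the self.venue_id attribute above):
import base64
import urllib.request

venue_id = 2393749  # hypothetical stand-in for self.venue_id
req = urllib.request.Request('http://api.foursquare.com/v1/venue.json?vid=%s' % venue_id)
credentials = base64.b64encode(b'email:password').decode('ascii')
req.add_header('Authorization', 'Basic %s' % credentials)
res = urllib.request.urlopen(req)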

I'm more comfortable running the command-line curl through subprocess. This avoids all of the potential version-matching headaches of Python, pycurl, and libcurl. The observation that pycurl hasn't been touched in two years, and is only listed as supported through Python 2.5, made me wary.
-- John
import subprocess

def curl(*args):
    curl_path = '/usr/bin/curl'
    curl_list = [curl_path]
    for arg in args:
        curl_list.append(arg)
    # capture stdout; stderr (curl's progress meter) is piped away separately
    curl_result = subprocess.Popen(
        curl_list,
        stderr=subprocess.PIPE,
        stdout=subprocess.PIPE).communicate()[0]
    return curl_result

answer = curl('-u', 'email:password', 'http://api.foursquare.com/v1/venue.json?vid=2393749')
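On Python 3.7+, subprocess.run reads a little more directly; a sketch under the same assumption that curl is on the PATH:
import subprocess

def curl(*args):
    # check=True raises CalledProcessError if curl exits non-zero;
    # capture_output collects both stdout and stderr (Python 3.7+)
    result = subprocess.run(['curl', *args], capture_output=True, check=True)
    return result.stdout

answer = curl('-u', 'email:password', 'http://api.foursquare.com/v1/venue.json?vid=2393749')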

If you use human_curl, you can write something like this:
import human_curl as hurl
r = hurl.get('http://api.foursquare.com/v1/venue.json?vid=2393749', auth=('email','password'))
The JSON data is in r.content.
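human_curl has been unmaintained for years; for comparison, the same call with the requests library (an alternative, not part of the original answer):
import requests

r = requests.get('http://api.foursquare.com/v1/venue.json?vid=2393749',
                 auth=('email', 'password'))
print(r.content)  # the JSON body as bytes; r.json() parses it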

Use pycurl
http://pycurl.sourceforge.net/
There is a discussion on SO pointing to tutorials:
What good tutorials exist for learning pycURL?
A typical example:
import pycurl

class ContentCallback:
    def __init__(self):
        self.contents = ''

    def content_callback(self, buf):
        self.contents = self.contents + buf

t = ContentCallback()
curlObj = pycurl.Curl()
curlObj.setopt(curlObj.URL, 'http://www.google.com')
curlObj.setopt(curlObj.WRITEFUNCTION, t.content_callback)
curlObj.perform()
curlObj.close()
print t.contents
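Under Python 3 the write callback receives bytes and print is a function; a hedged port of the same pattern:
import pycurl

class ContentCallback:
    def __init__(self):
        self.contents = b''

    def content_callback(self, buf):
        self.contents += buf  # buf arrives as bytes in Python 3

t = ContentCallback()
curlObj = pycurl.Curl()
curlObj.setopt(curlObj.URL, 'http://www.google.com')
curlObj.setopt(curlObj.WRITEFUNCTION, t.content_callback)
curlObj.perform()
curlObj.close()
print(t.contents.decode('utf-8', errors='replace'))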

Related

curl post for elasticsearch in python

I have a curl POST to do for elasticsearch:
curl -XPOST "http://localhost:9200/index/name" --data-binary "@file.json"
How can I do this in a Python shell? Basically, I need to loop over many JSON files, so I want to be able to do this in a for loop.
import glob
import os
import requests

def index_data(path):
    item = []
    for filename in glob.glob(path):
        item.append(filename[55:81] + '.json')
    return item

def send_post(url, datafiles):
    r = requests.post(url, data=file(datafiles, 'rb').read())
    data = r.text
    return data

def main():
    url = 'http://localhost:9200/index/name'
    metpath = r'C:\pathtofiledirectory\*.json'
    jsonfiles = index_data(metpath)
    send_post(url, jsonfiles)

if __name__ == "__main__":
    main()
I tried to do this, but it is giving me a TypeError:
TypeError: coercing to Unicode: need string or buffer, list found
You can use the requests HTTP client:
import requests

files = ['file.json', 'file1.json', 'file2.json', 'file3.json', 'file4.json']
for item in files:
    req = requests.post('http://localhost:9200/index/name', data=file(item, 'rb').read())
    print req.text
From your edit, you would need:
for item in jsonfiles:
    send_post(url, item)
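Putting the fix together as Python 3, where the file() builtin no longer exists and open() replaces it (URL and glob pattern taken from the question):
import glob
import requests

def send_post(url, datafile):
    # POST one file's raw bytes as the request body
    with open(datafile, 'rb') as f:
        r = requests.post(url, data=f.read())
    return r.text

url = 'http://localhost:9200/index/name'
for path in glob.glob(r'C:\pathtofiledirectory\*.json'):
    print(send_post(url, path))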
Use the requests library.
import requests
r = requests.post(url, data=data)
It is as simple as that.
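One caveat: recent Elasticsearch versions reject bodies without an explicit content type, so you may also need to send the header yourself; a hedged variant:
import requests

with open('file.json', 'rb') as f:
    r = requests.post('http://localhost:9200/index/name',
                      data=f.read(),
                      headers={'Content-Type': 'application/json'})
print(r.text)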

accessing papertrail api information with Python3 script

I have a PaperTrail account, and I am trying to write a Python script that accesses the PaperTrail logs and grabs the information as JSON. This is my current attempt, and it's ugly -- I think I got fouled up when trying to convert Python 2 to Python 3, and I have a somewhat unclear understanding of APIs/JSON as well.
import http.client, urllib, time, os, json

PAPERTRAIL_TOKEN = "[xxx]"
INTERVAL = 10 * 60

conn = http.client.HTTPSConnection(host = 'papertrailapp.com')
conn.request(
    method = 'GET',
    url = '/api/v1/events/search.json'
    headers = {'X-Papertrail-Token' : os.environ['PAPERTRAIL_TOKEN']})
response = conn.getresponse()
I've made some small changes to your program:
- added a shebang line: #!/usr/bin/env python3
- added a , at the end of the url line to correct the syntax
- pretty-printed the JSON
PAPERTRAIL_TOKEN = "[xxx]" is not used - the program looks in the environment for this, so make sure to set that before running it: export PAPERTRAIL_TOKEN=xxx
#!/usr/bin/env python3
import http.client, urllib, time, os, json

PAPERTRAIL_TOKEN = "[xxx]"
INTERVAL = 10 * 60

conn = http.client.HTTPSConnection(host = 'papertrailapp.com')
conn.request(
    method = 'GET',
    url = '/api/v1/events/search.json',
    headers = {'X-Papertrail-Token' : os.environ['PAPERTRAIL_TOKEN']})
response = conn.getresponse()
print(json.dumps(json.loads(response.read()), indent=4))
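The unused INTERVAL constant suggests the script was meant to poll; a minimal sketch of that loop, assuming simply re-issuing the same search every ten minutes is acceptable:
import http.client
import json
import os
import time

INTERVAL = 10 * 60  # ten minutes, as in the question

while True:
    conn = http.client.HTTPSConnection('papertrailapp.com')
    conn.request(
        'GET',
        '/api/v1/events/search.json',
        headers={'X-Papertrail-Token': os.environ['PAPERTRAIL_TOKEN']})
    response = conn.getresponse()
    print(json.dumps(json.loads(response.read()), indent=4))
    conn.close()
    time.sleep(INTERVAL)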

Python 3 Get and parse JSON API [duplicate]

This question already has answers here:
HTTP requests and JSON parsing in Python [duplicate]
(8 answers)
Closed 8 months ago.
How would I parse a JSON API response with Python?
I currently have this:
import urllib.request
import json
url = 'https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty'
def response(url):
    with urllib.request.urlopen(url) as response:
        return response.read()

res = response(url)
print(json.loads(res))
I'm getting this error:
TypeError: the JSON object must be str, not 'bytes'
What is the Pythonic way to deal with JSON APIs?
Version 1: (do a pip install requests before running the script)
import requests
r = requests.get(url='https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty')
print(r.json())
Version 2: (do a pip install wget before running the script)
import wget

fs = wget.download(url='https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty')
with open(fs, 'r') as f:
    content = f.read()
print(content)
You can use the Python 3 standard library:
import urllib.request
import json

url = 'http://www.reddit.com/r/all/top/.json'
req = urllib.request.Request(url)

## parse the response
r = urllib.request.urlopen(req).read()
cont = json.loads(r.decode('utf-8'))
counter = 0

## walk the parsed json
for item in cont['data']['children']:
    counter += 1
    print("Title:", item['data']['title'], "\nComments:", item['data']['num_comments'])
    print("----")

## print formatted
#print(json.dumps(cont, indent=4, sort_keys=True))
print("Number of titles: ", counter)
The output will look like this:
...
Title: Maybe we shouldn't let grandma decide things anymore.
Comments: 2018
----
Title: Carrie Fisher and Her Stunt Double Sunbathing on the Set of Return of The Jedi, 1982
Comments: 880
----
Title: fidget spinner
Comments: 1537
----
Number of titles: 25
I would usually use the requests package with the json package. The following code should be suitable for your needs:
import requests
import json
url = 'https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty'
r = requests.get(url)
print(json.loads(r.content))
Output
[11008076,
11006915,
11008202,
....,
10997668,
10999859,
11001695]
The only thing missing in the original question is a call to the decode method on the response object (and even then, not for every Python 3 version). It's a shame no one pointed that out and everyone jumped on a third-party library.
Using only the standard library, for the simplest of use cases:
import json
from urllib.request import urlopen

def get(url, object_hook=None):
    with urlopen(url) as resource:  # 'with' is important to close the resource after use
        return json.load(resource, object_hook=object_hook)
Simple use case:
data = get('http://url') # '{ "id": 1, "$key": 13213654 }'
print(data['id']) # 1
print(data['$key']) # 13213654
Or, if you prefer, a riskier variant:
from types import SimpleNamespace
data = get('http://url', lambda o: SimpleNamespace(**o)) # '{ "id": 1, "$key": 13213654 }'
print(data.id) # 1
print(data.$key) # invalid syntax
# though you can still do
print(data.__dict__['$key'])
With Python 3:
import requests
import json
url = 'http://IP-Address:8088/ws/v1/cluster/scheduler'
r = requests.get(url)
data = json.loads(r.content.decode())

How to monitor progress of a HTTP PUT upload using Python Requests and Clint

I am writing a simple commandline application - transfer.py - to allow for uploading and downloading files from the transfer.sh service as a learning exercise, using the 'requests' library for HTTP. Thanks to some answers on here, I was able to implement a progress bar using python-clint and python-requests for monitoring the file download - said functionality being seen here.
Anyway, I got very, very lost when trying to implement the same kind of progress bar to monitor the upload - which uses HTTP PUT. I understand conceptually it should be very similar, but cannot for some reason figure it out, and would be very thankful if someone could point me in the right direction on this. I tried a few methods using multipart encoders and suchlike, but those lead to the file being mangled on the way up (the service accepts raw PUT requests, and multipart encoding messes it up seemingly).
The end goal is to write a script to AES encrypt the file to be uploaded with a random key, upload it to the service, and print a link + encryption key that can be used by a friend to download/decrypt the file, mostly for fun and to fill in some knowledge-gaps in my python.
I recommend you use requests_toolbelt with the clint.textui.progress module. I found this code, which will do the job:
from clint.textui.progress import Bar as ProgressBar
from requests_toolbelt import MultipartEncoder, MultipartEncoderMonitor
import requests

def create_callback(encoder):
    encoder_len = encoder.len
    bar = ProgressBar(expected_size=encoder_len, filled_char='=')

    def callback(monitor):
        bar.show(monitor.bytes_read)

    return callback

def create_upload():
    return MultipartEncoder({
        'form_field': 'value',
        'another_form_field': 'another value',
        'first_file': ('progress_bar.py', open(__file__, 'rb'), 'text/plain'),
        'second_file': ('progress_bar.py', open(__file__, 'rb'), 'text/plain'),
    })

if __name__ == '__main__':
    encoder = create_upload()
    callback = create_callback(encoder)
    monitor = MultipartEncoderMonitor(encoder, callback)
    r = requests.post('https://httpbin.org/post', data=monitor,
                      headers={'Content-Type': monitor.content_type})
    print('\nUpload finished! (Returned status {0} {1})'.format(
        r.status_code, r.reason))
The following code should work for you:
import requests
import os
from tqdm import tqdm
from tqdm.utils import CallbackIOWrapper

def upload_from_file(src, dst):
    file_size = os.path.getsize(src)
    with open(src, "rb") as fd:
        with tqdm(desc="Uploading", total=file_size, unit="B",
                  unit_scale=True, unit_divisor=1024) as t:
            # wrap the file object so every read() by requests advances the bar
            reader_wrapper = CallbackIOWrapper(t.update, fd, "read")
            response = requests.put(dst, data=reader_wrapper)
            response.raise_for_status()

SRC = '/path/to/file'
DST = '/url/to/upload'
upload_from_file(SRC, DST)
Just define your own SRC and DST variables.
Then you can just copy and paste the code.
You can try to use DST='http://httpbin.org/put' for testing.
Enjoy!

getting the file size with pycurl

I want to write a downloader with Python and I use PycURL as my library, but I have a problem.
I can't get the size of the file which I want to download. Here is part of my code:
import pycurl
url = 'http://www.google.com'
c = pycurl.Curl()
c.setopt(c.URL, url)
print c.getinfo(c.CONTENT_LENGTH_DOWNLOAD)
c.perform()
When I test this code in the Python shell, it's OK, but when I write it as a function and run it, it gives me -1 instead of the size.
What is the problem?
(code's been edited)
This answer adds the missing c.setopt(c.NOBODY, 1) and is otherwise the same as the one given some months ago:
import pycurl
c = pycurl.Curl()
c.setopt(c.URL, 'http://www.alfe.de')
c.setopt(c.NOBODY, 1)
c.perform()
c.getinfo(c.CONTENT_LENGTH_DOWNLOAD)
Calling c.setopt(c.NOBODY, 1) before calling c.perform() avoids downloading the contents of the file ("No Body", but all headers).
From the pycurl documentation on the Curl object:
The getinfo method should not be called unless perform has been called
and finished.
You're calling getinfo before you've called perform.
Here is a simplified version of your example; does this work?
import pycurl
url = 'http://www.google.com'
c = pycurl.Curl()
c.setopt(c.URL, url)
c.perform()
print c.getinfo(c.CONTENT_LENGTH_DOWNLOAD)
You should see the HTML content followed by the size.
Try adding debug output to see what actually happens. After you create the Curl object, do this:
def curl_debug(debug_type, msg):
    print("debug: %s %s" % (repr(debug_type), repr(msg)))

c.setopt(pycurl.VERBOSE, 1)
c.setopt(pycurl.DEBUGFUNCTION, curl_debug)
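For reference, a self-contained sketch combining the NOBODY trick from the first answer with the debug callback (the URL is the one used earlier; Python 3 print syntax):
import pycurl

def curl_debug(debug_type, msg):
    # pycurl passes an int event type and the raw message bytes
    print("debug: %s %s" % (repr(debug_type), repr(msg)))

c = pycurl.Curl()
c.setopt(c.URL, 'http://www.alfe.de')
c.setopt(c.NOBODY, 1)                   # fetch headers only, skip the body
c.setopt(c.VERBOSE, 1)
c.setopt(c.DEBUGFUNCTION, curl_debug)
c.perform()
print(c.getinfo(c.CONTENT_LENGTH_DOWNLOAD))  # -1 if no Content-Length header
c.close()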
