How do I use Python requests to download a processed file? - python

I'm using Django 1.8.1 with Python 3.4 and I'm trying to use requests to download a processed file. The following code works perfectly for a normal requests.get call that downloads the exact (unprocessed) file at the server location.
The file needs to be processed based on the passed data (shown below as "data"). This data needs to be passed to the Django backend, which should use it to set variables, run an internal program on the server, and output a .gcode file instead of the .stl filetype.
Python file:
import requests, os, json

SERVER = 'http://localhost:8000'
authuser = 'admin#google.com'
authpass = 'passwords'

# data not implemented yet
##############################################
data = {'FirstName': 'Steve', 'Lastname': 'Escovar'}
############################################

category = requests.get(SERVER + '/media/uploads/9128342/141303729.stl', auth=(authuser, authpass))

# download to path file
path = "/home/bradman/Downloads/requestdata/newfile.stl"
if category.status_code == 200:
    with open(path, 'wb') as f:
        for chunk in category:
            f.write(chunk)
I'm quite confused about this, but I think the best course of action is to pass the data along with requests.get and then write some function in my Django views.py to grab it. Does anyone have any ideas?

To send data with the request you can use either
requests.get(..., params=data)
(the data is sent as query parameters in the URL)
or
requests.post(..., data=data)
(the data is sent in the request body, like an HTML form).
BTW: some APIs need both params= and data= in a single GET or POST request to send all the required information.
Read the requests documentation.
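As a rough sketch of how the two sides could fit together (the URL, view name, and the run_internal_program helper below are illustrative assumptions, not part of the question):

# client side: send the data as query parameters along with the download request
import requests

SERVER = 'http://localhost:8000'
data = {'FirstName': 'Steve', 'Lastname': 'Escovar'}
category = requests.get(SERVER + '/process/9128342/141303729/',  # hypothetical processing URL
                        params=data, auth=(authuser, authpass))  # same credentials as above

# server side (views.py): read the parameters, run the processing step, return the result
from django.http import HttpResponse

def process_stl(request, folder, name):  # hypothetical view
    first_name = request.GET.get('FirstName')  # 'Steve'
    last_name = request.GET.get('Lastname')    # 'Escovar'
    gcode_bytes = run_internal_program(folder, name, first_name, last_name)  # placeholder for your converter
    return HttpResponse(gcode_bytes, content_type='application/octet-stream')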

Related

Create a task with attachments in Asana using Asana api

I am trying to send an image using Asana's API but it just attaches a blank file.
This is the code I have been using.
client.attachments.create_on_task(task_id=123456789, file_content="Url_of_file", file_name='Name_of_File', file_content_type="image/jpeg")
I have tried using different file formats like .txt and .png, but for some reason the Asana API is blocking my requests. It just posts a blank image on Asana.
I have tried converting the file to base64 as well, but it still doesn't work.
The original documentation (below) shows that we need to pass two arguments: one for the file's content and one for the file itself ('file').
def create_on_task(self, task_id, file_content, file_name, file_content_type=None, **options):
    """Upload an attachment for a task. Accepts a file object or string, file name, and optional file Content-Type"""
    path = '/tasks/%d/attachments' % (task_id)
    return self.client.request('post', path, files=[('file', (file_name, file_content, file_content_type))], **options)
But when I try to pass the arguments for the file and the file content, it shows me an error.
Can somebody please help me with this?
Another user on the Asana development forum had the same problem using cURL (https://forum.asana.com/t/sending-file-with-api/16897/2). Apparently it has something to do with the 'multipart form upload'.
Looking through another thread here on Stack Overflow (How to send a "multipart/form-data" with requests in python?), it turned out the file simply had to be opened in binary mode.
So the parameters would be:
client.attachments.create_on_task(task_id=<task id here>, file_content=open(filename_with_path, 'rb'), file_name='Name_of_File', file_content_type="image/jpeg")
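Put together, a minimal sketch of the working call, assuming the older python-asana client where asana.Client.access_token(...) builds the client; the token, task id, and file path are placeholders:

import asana

# hypothetical setup - substitute your own token, task id and file path
client = asana.Client.access_token('YOUR_PERSONAL_ACCESS_TOKEN')
task_id = 123456789
filename_with_path = '/path/to/photo.jpg'

# pass an open binary file object, not the file's contents or a URL
with open(filename_with_path, 'rb') as f:
    client.attachments.create_on_task(
        task_id=task_id,
        file_content=f,
        file_name='photo.jpg',
        file_content_type='image/jpeg',
    )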

Download entire history of a Wikipedia page

I'd like to download the entire revision history of a single article on Wikipedia, but am running into a roadblock.
It is very easy to download an entire Wikipedia article, or to grab pieces of its history using the Special:Export URL parameters:
curl -d "" 'https://en.wikipedia.org/w/index.php?title=Special:Export&pages=Stack_Overflow&limit=1000&offset=1' -o "StackOverflow.xml"
And of course I can download the entire site including all versions of every article from here, but that's many terabytes and way more data than I need.
Is there a pre-built method for doing this? (Seems like there must be.)
The example above only gets information about the revisions, not the actual contents themselves. Here's a short Python script that downloads the full content and metadata of every revision of a page into individual JSON files:
import mwclient
import json
import time

site = mwclient.Site('en.wikipedia.org')
page = site.pages['Wikipedia']

for i, (info, content) in enumerate(zip(page.revisions(), page.revisions(prop='content'))):
    info['timestamp'] = time.strftime("%Y-%m-%dT%H:%M:%S", info['timestamp'])
    print(i, info['timestamp'])
    with open("%s.json" % info['timestamp'], "w") as f:
        f.write(json.dumps({'info': info, 'content': content}, indent=4))
Wandering around aimlessly looking for clues to another question I have myself — my way of saying I know nothing substantial about this topic! — I just came upon this a moment after reading your question: http://mwclient.readthedocs.io/en/latest/reference/page.html. Have a look for the revisions method.
EDIT: I also see http://mwclient.readthedocs.io/en/latest/user/page-ops.html#listing-page-revisions.
Sample code using the mwclient module:
#!/usr/bin/env python3
import logging, mwclient, pickle, os
from mwclient import Site
from mwclient.page import Page

logging.root.setLevel(logging.DEBUG)

logging.debug('getting page...')
env_page = os.getenv("MEDIAWIKI_PAGE")
page_name = env_page if env_page is not None else 'Stack Overflow'
page_name = Page.normalize_title(page_name)
site = Site('en.wikipedia.org')  # https by default; change with `scheme=`
page = site.pages[page_name]

logging.debug('extracting revisions (may take a really long time, depending on the page)...')
revisions = []
for i, revision in enumerate(page.revisions()):
    revisions.append(revision)

logging.debug('saving to file...')
with open('{}Revisions.mediawiki.pkl'.format(page_name), 'wb+') as f:
    pickle.dump(revisions, f, protocol=0)  # protocol 0 keeps the pickle portable across machines
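As a small follow-up (not part of the original answer), the pickle can be read back later like this; adjust page_name to match the file the script actually wrote:

import pickle

page_name = 'Stack_Overflow'  # whatever page name the script above used
with open('{}Revisions.mediawiki.pkl'.format(page_name), 'rb') as f:
    revisions = pickle.load(f)

print(len(revisions), 'revisions loaded')
print(revisions[0])  # each entry is a dict of revision metadata (user, timestamp, comment, ...)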

Upload image to facebook using the Python API

I have searched the web far and wide for a still-working example of uploading a photo to Facebook through the Python API (Python for Facebook). Questions like this have been asked on Stack Overflow before, but none of the answers I have found work anymore.
What I got working is:
import facebook as fb

cfg = {
    "page_id": "my_page_id",
    "access_token": "my_access_token"
}

api = get_api(cfg)
msg = "Hello world!"
status = api.put_wall_post(msg)
where I have defined the get_api(cfg) function like this:
def get_api(cfg):
    graph = fb.GraphAPI(cfg['access_token'], version='2.2')
    # Get page token to post as the page. You can skip
    # the following if you want to post as yourself.
    resp = graph.get_object('me/accounts')
    page_access_token = None
    for page in resp['data']:
        if page['id'] == cfg['page_id']:
            page_access_token = page['access_token']
    graph = fb.GraphAPI(page_access_token)
    return graph
And this does indeed post a message to my page.
However, if I instead want to upload an image everything goes wrong.
# Upload a profile photo for a Page.
api.put_photo(image=open("path_to/my_image.jpg", 'rb').read(), message="Here's my image")
I get the dreaded GraphAPIError: (#324) Requires upload file, for which none of the solutions on Stack Overflow work for me.
If I instead issue the following command
api.put_photo(image=open("path_to/my_image.jpg",'rb').read(), album_path=cfg['page_id'] + "/picture")
I get GraphAPIError: (#1) Could not fetch picture for which I haven't been able to find a solution either.
Could someone out there please point me in the right direction or provide me with a currently working example? It would be greatly appreciated, thanks!
A 324 Facebook error can result from a few things, depending on how the photo upload call was made:
a missing image
an image not recognised by Facebook
incorrect directory path reference
A raw cURL call looks like
curl -F 'source=@my_image.jpg' 'https://graph.facebook.com/me/photos?access_token=YOUR_TOKEN'
As long as the above call works, you can be sure the photo is acceptable to Facebook's servers.
An example of how a 324 error can occur
touch meow.jpg
curl -F 'source=@meow.jpg' 'https://graph.facebook.com/me/photos?access_token=YOUR_TOKEN'
This can also occur for corrupted image files as you have seen.
Using .read() will dump the actual data
Empty File
>>> image=open("meow.jpg",'rb').read()
>>> image
''
Image File
>>> image=open("how.png",'rb').read()
>>> image
'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00...
Neither of these will work with the api.put_photo call; as you have seen, and as Klaus D. mentioned, the call should be made without read().
So this call
api.put_photo(image=open("path_to/my_image.jpg", 'rb').read(), message="Here's my image")
actually becomes
api.put_photo('\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00...', message="Here's my image")
Which is just a string, which isn't what is wanted.
One needs the image reference <open file 'how.png', mode 'rb' at 0x1085b2390>
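In other words, a sketch of the corrected call: pass the open file object itself rather than the bytes returned by .read() (the path and message are placeholders):

# pass the open binary file object instead of the result of .read()
with open("path_to/my_image.jpg", 'rb') as image_file:
    api.put_photo(image=image_file, message="Here's my image")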
I know this is old and doesn't answer the question with the specified API; however, I came upon this via a search, and hopefully my solution will help travelers on a similar path.
Using requests and tempfile
A quick example of how I do it using the tempfile and requests modules.
Download an image and upload to Facebook
The script below grabs an image from a given URL, saves it to a file within a temporary directory, and automatically cleans up when finished.
In addition, I can confirm this works when running as a Flask service on Google Cloud Run, whose container runtime contract lets us store the file in memory.
import tempfile
import requests

# setup stuff - certainly change this
filename = "your-desired-filename"
image_url = "your-image-url"
act_id = "your account id"
access_token = "your access token"

# create the temporary directory and build the file path inside it
temp_dir = tempfile.TemporaryDirectory()
directory = temp_dir.name
filepath = f"{directory}/{filename}"

# stream the image bytes
res = requests.get(image_url, stream=True)

# write them to your filename in the temporary directory
# assuming this works - add logic for non-200 status codes
with open(filepath, "wb+") as f:
    f.write(res.content)

# prep the payload for the facebook call
files = {
    "filename": open(filepath, "rb"),
}
url = f"https://graph.facebook.com/v10.0/{act_id}/adimages?access_token={access_token}"

# send the POST request
res = requests.post(url, files=files)
res.raise_for_status()

if res.status_code == 200:
    # get your image data back
    image_upload_data = res.json()
    if "images" in image_upload_data:
        image_upload_data = image_upload_data["images"][filepath.split("/")[-1]]
    print(image_upload_data)

temp_dir.cleanup()  # paranoid: make sure the temporary directory is removed either way

open('file_path').read() - save contents back to file

I need to upload a file to the server using urllib2. I cannot use any external libraries (like requests and others) because I am using OpenOffice's Python, so I needed a simple way to post file data.
So I came up with:
import urllib, urllib2

post_url = "http://localhost:8000/admin/oo_file_uploader?user_id=%s&file_id=%s" % (user_id, file_id)
file_path = doc.Location.replace('file://', '')
data = urllib.urlencode({"file": open(file_path).read()})
urllib2.urlopen(post_url, data)
which posts something to the server.
I wonder if it is possible to save posted contents back to the file using python/django?
This expands somewhat on @zero323's answer. You will want to make sure that you implement some sort of security to prevent random files being uploaded by unauthorized users, which is what the @file_upload_security decorator is implied to handle.
@file_upload_security
def oo_file_uploader(request, user_id=None, file_id=None):
    if request.method == 'POST':
        # Exception handling skipped if get() fails.
        user = User.objects.get(id=user_id)
        save_to_file = MyFiles.objects.get(id=file_id)
        # You will probably want to ensure 'file' is in post data.
        file_contents = save_to_file.parse_post_to_content(request.POST['file'])
        with open(save_to_file.path_to_file, 'w') as fw:
            fw.write(file_contents)
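The decorator itself is left to the reader in the original answer. Purely as an illustrative sketch (the shared token and the idea of pulling user_id/file_id out of the query string are assumptions, not part of the answer), it might look something like this:

from functools import wraps
from django.http import HttpResponseForbidden

UPLOAD_TOKEN = 'change-me'  # hypothetical shared secret the uploader would send along

def file_upload_security(view_func):
    """Hypothetical guard: reject callers without the token and hand the query-string ids to the view."""
    @wraps(view_func)
    def wrapper(request, *args, **kwargs):
        if request.GET.get('token') != UPLOAD_TOKEN:
            return HttpResponseForbidden("Upload not allowed")
        kwargs.setdefault('user_id', request.GET.get('user_id'))
        kwargs.setdefault('file_id', request.GET.get('file_id'))
        return view_func(request, *args, **kwargs)
    return wrapper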

urllib2.urlopen not getting all content

I am a beginner in Python trying to pull some data from reddit.com.
More precisely, I am trying to send a request to http://www.reddit.com/r/nba/.json to get the JSON content of the page and then parse it for entries about a specific team or player.
To automate the data gathering, I am requesting the page like this:
import urllib2
FH = urllib2.urlopen("http://www.reddit.com/r/nba/.json")
rnba = FH.readlines()
rnba = str(rnba[0])
FH.close()
I am also pulling the content like this on a copy of the script, just to be sure:
import requests

FH = requests.get("http://www.reddit.com/r/nba/.json", timeout=10)
rnba_json = FH.json()
FH.close()
However, I am not getting the full data that is presented when I manually go to
http://www.reddit.com/r/nba/.json with either method, in particular when I call
print len(rnba_json['data']['children']) # prints 20-something child stories
but when I do the same loading the copy-pasted JSON string like this:
import json
import urllib2
fh = r"""{"kind": "Listing", "data": {"modhash": ..."""# long JSON string
r_nba = json.loads(fh) #loads the json string from the site into json object
print len(r_nba['data']['children']) #prints upwards of 100 stories
I get more story links. I know about the timeout parameter but providing it did not resolve anything.
What am I doing wrong or what can I do to get all the content presented when I pull the page in the browser?
To get the max allowed, you'd use the API like: http://www.reddit.com/r/nba/.json?limit=100
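A quick sketch of the same thing with requests; the User-Agent string is just a placeholder (reddit throttles generic user agents, which can also make scripted responses look smaller than what the browser shows):

import requests

# ask for the maximum number of stories per request
res = requests.get(
    "http://www.reddit.com/r/nba/.json",
    params={"limit": 100},
    headers={"User-Agent": "my-nba-scraper/0.1"},  # placeholder; set something descriptive
    timeout=10,
)
rnba_json = res.json()
print(len(rnba_json["data"]["children"]))  # up to 100 stories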
