Generate multiple HTTP requests from list values - Python

I have a list with three hashes, and the script should create an HTTP request for each hash in the list and then save the file for that hash.
For some reason, the script generates only one HTTP request, so I manage to download only one file instead of three.
all_hashes = ['07a3355f81f0dbd9f5a9a', 'e0f1d8jj3d613ad5ebda6d', 'dsafhghhhffdsfdd']

for hash in all_hashes:
    params = {'apikey': 'xxxxxxxxxxxxx', 'hash': hash}
    response = requests.get('https://www.test.com/file/download', params=params)

downloaded_file = response.content
name = response.headers['x-goog-generation']

if response.status_code == 200:
    with open('%s.bin' % name, 'wb') as f:
        f.write(response.content)

Your response-checking and saving code should also be inside the loop, e.g.:
all_hashes = ['07a3355f81f0dbd9f5a9a', 'e0f1d8jj3d613ad5ebda6d', 'dsafhghhhffdsfdd']

for hash in all_hashes:
    params = {'apikey': 'xxxxxxxxxxxxx', 'hash': hash}
    response = requests.get('https://www.test.com/file/download', params=params)
    downloaded_file = response.content
    name = response.headers['x-goog-generation']
    if response.status_code == 200:
        with open('%s.bin' % name, 'wb') as f:
            f.write(response.content)
Currently your response holds only the last request's result, since your saving code is executed after the loop has finished.
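Since all requests go to the same host, reusing a single requests.Session also keeps the TCP connection alive across downloads. A minimal sketch of the corrected loop, assuming the same hypothetical endpoint, API key parameter, and 'x-goog-generation' header as in the question (the session argument is injectable purely so the function can be exercised without a network):

```python
def download_hashes(all_hashes, api_key, session=None):
    # Reuse one session so all requests share a connection. The
    # endpoint and the 'x-goog-generation' header come from the
    # question; the service itself is hypothetical.
    if session is None:
        import requests
        session = requests.Session()
    saved = []
    for h in all_hashes:
        params = {'apikey': api_key, 'hash': h}
        response = session.get('https://www.test.com/file/download', params=params)
        if response.status_code == 200:
            name = response.headers['x-goog-generation']
            with open('%s.bin' % name, 'wb') as f:
                f.write(response.content)
            saved.append(name)
    return saved
```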

Related

Bypass Google automated query security check

I have a list of about 300 Google Drive links to PDF files which I have to download.
Using Python's requests library, I request the files from Google's server.
After 30 to 36 downloads, Google blocks my requests and returns:
We're sorry... but your computer or network may be sending automated queries. To protect our users, we can't process your request right now.
I am using the following code:
import requests

def download_file_from_google_drive(id, destination):
    URL = "https://docs.google.com/uc?export=download"
    session = requests.Session()
    response = session.get(URL, params={'id': id}, stream=True)
    if response.status_code != 200:
        print(response.status_code)
        return response.status_code
    print('downloading ' + destination)
    token = get_confirm_token(response)
    if token:
        params = {'id': id, 'confirm': token}
        response = session.get(URL, params=params, stream=True)
    save_response_content(response, destination)

def get_confirm_token(response):
    for key, value in response.cookies.items():
        if key.startswith('download_warning'):
            return value
    return None

def save_response_content(response, destination):
    CHUNK_SIZE = 32768
    with open(destination, "wb") as f:
        i = 0
        for chunk in response.iter_content(CHUNK_SIZE):
            print(str(i) + '%')
            i = i + 1
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)
    print('downloaded ' + destination)

if __name__ == "__main__":
    file_id = 'file id'
    destination = file_id + '.pdf'
    download_file_from_google_drive(file_id, destination)
I am iterating download_file_from_google_drive over my list.
Can I bypass the security check?
I tried using a VPN, which changes my IP address, but nothing works.
After about an hour, downloading starts working again.
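Google does not publish its rate limits, so any workaround is guesswork, but spacing requests out and backing off after a refusal usually helps more than switching IP addresses. A sketch of such a throttled loop; the delay values are assumptions, and download_one stands in for the asker's download_file_from_google_drive (which already returns the status code on failure):

```python
import random
import time

def download_with_backoff(ids, download_one, base_delay=2.0, max_retries=5, sleep=time.sleep):
    # download_one(file_id) -> HTTP status code; it stands in for
    # download_file_from_google_drive. The delay values are guesses,
    # since Google does not document its limits.
    for file_id in ids:
        for attempt in range(max_retries):
            if download_one(file_id) == 200:
                break
            # exponential backoff with jitter after a refusal
            sleep(base_delay * (2 ** attempt) + random.random())
        # pause between successful downloads as well
        sleep(base_delay + random.random())
```

The sleep argument is injectable only so the pacing logic can be tested without actually waiting.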

How to download mail attachment through script?

I am writing a Python script to fetch mail attachments through the Graph API.
In the Graph Explorer, I can perfectly download file attachments by manually pressing the download button after calling:
https://graph.microsoft.com/v1.0/me/messages/{message-id}/attachments/{attachment-id}/$value
However, when trying to make the same request in my Python script, all I get returned is 'Response [200]' (so the request works, but the file is not reachable).
I try to make the request like this:
def get_mails_json():
    requestHeaders = {'Authorization': 'Bearer ' + result["access_token"],
                      'Content-Type': 'application/json'}
    queryResults = msgraph_request(graphURI + "/v1.0/me/messages?$filter=isRead ne true", requestHeaders)
    return json.dumps(queryResults)

try:
    data = json.loads(mails)
    values = data['value']
    for i in values:
        mail_id = i['id']
        mail_subj = i['subject']
        if i['hasAttachments'] != False:
            attachments = o365.get_attachments(mail_id)
            attachments = json.loads(attachments)
            attachments = attachments['value']
            for i in attachments:
                details = o365.get_attachment_details(mail_id, i["id"])
except Exception as e:
    print(e)

def get_attachment_details(mail, attachment):
    requestHeaders = {'Authorization': 'Bearer ' + result["access_token"],
                      'Content-Type': 'application/json'}
    queryResults = msgraph_request(graphURI + "/v1.0/me/messages/" + mail + "/attachments/" + attachment + '/$value', requestHeaders)
    return json.dumps(queryResults)
Is there a way for me to download the file AT ALL through my Python script?
I found a simple solution for downloading the file through a Python script!
I used chip's answer, found on this thread:
thread containing chip's answer
I make the request for the attachment like so:
def get_attachment_details(mail, attachment):
    requestHeaders = {'Authorization': 'Bearer ' + result["access_token"],
                      'Content-Type': 'application/file'}
    resource = graphURI + "/v1.0/me/messages/" + mail + "/attachments/" + attachment + '/$value'
    payload = {}
    results = requests.request("GET", resource, headers=requestHeaders, data=payload, allow_redirects=False)
    return results.content
This gets me the encoded bytes of the file, which I then decode and write to a file like so:
for i in attachments:
    details = o365.get_attachment_details(mail_id, i["id"])
    toread = io.BytesIO()
    toread.write(details)
    with open(i['name'], 'wb') as f:
        f.write(toread.getbuffer())
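As a side note, results.content is already a bytes object, so the io.BytesIO buffer is an extra step; writing the payload straight to disk does the same thing (save_attachment is a hypothetical helper name, not part of the answer's o365 module):

```python
def save_attachment(name, details):
    # details is the raw bytes returned by get_attachment_details;
    # no BytesIO round trip is needed before writing to disk
    with open(name, 'wb') as f:
        f.write(details)
```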

urllib.request.Request - How to send csv files as input in the request

I need to send a CSV file as input in the request, along with the URL, data, and headers, as this is an import request. But Request does not support files. Can you assist me with how to send file input with urllib.request?
headers = {
    'SESSION-ID': 'xxx',
    'keyid': 'xxx'
}
payload = {"delimiter": "COMMA", "textQualifier": "DOUBLE_QUOTE", "codepage": "UTF8",
           "dateFormat": "ISO"}
payload1 = json.dumps(payload).encode("utf-8")
payload2 = {'importSettings': payload1}
data = json.dumps(payload2).encode("utf-8")
#print(data)
files = [('file', ('import.csv', csvstr, 'text/csv'))]
url = xxx
try:
    req = urllib.request.Request(url, data, headers, files=files)
    with urllib.request.urlopen(req) as f:
        res = f.read()
        print(res.decode())
        return res
except Exception as e:
    print(e)
Getting this error: __init__() got an unexpected keyword argument 'files'
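That error is expected: the files= keyword belongs to the requests library; urllib.request.Request only accepts a raw body, so a multipart/form-data payload has to be assembled by hand. A sketch under that assumption, with the field names modeled on the question's requests-style files list and an invented endpoint:

```python
import json
import uuid
import urllib.request

def build_multipart(fields, files):
    # Assemble a multipart/form-data body by hand; urllib.request has
    # no files= keyword (that belongs to the requests library).
    boundary = uuid.uuid4().hex
    lines = []
    for name, value in fields.items():
        lines += ['--' + boundary,
                  'Content-Disposition: form-data; name="%s"' % name,
                  '',
                  value]
    for name, (filename, content, ctype) in files.items():
        lines += ['--' + boundary,
                  'Content-Disposition: form-data; name="%s"; filename="%s"' % (name, filename),
                  'Content-Type: ' + ctype,
                  '',
                  content]
    lines += ['--' + boundary + '--', '']
    body = '\r\n'.join(lines).encode('utf-8')
    return body, 'multipart/form-data; boundary=' + boundary

# Assembling (not sending) the request; the endpoint here is made up.
body, content_type = build_multipart(
    {'importSettings': json.dumps({'delimiter': 'COMMA', 'codepage': 'UTF8'})},
    {'file': ('import.csv', 'a,b\n1,2\n', 'text/csv')})
req = urllib.request.Request('https://example.com/import', data=body,
                             headers={'Content-Type': content_type, 'SESSION-ID': 'xxx'})
```

Note also that importSettings should be sent as its own form field, not double-encoded into the JSON body as in the question.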

Python Graph API request only retrieving 1 attachment from an email

I'm using a Python script to send an API request to get the attachments of an email. The email I'm using has 4 attachments (plus pictures in the signature of the email). The Python request only retrieves 1 attachment along with the pics from the signature. When using Postman with the exact same information, it retrieves all attachments along with the pics.
Any ideas on how I can get the other attachments?
import requests

url = 'https://graph.microsoft.com/v1.0/users/{{users email}}/messages/{{messageID}}/attachments'
body = None
head = {"Content-Type": "application/json;charset=UTF-8", "Authorization": "Bearer " + accessToken}
response1 = requests.get(url, data=body, headers=head)
response = response1.text
Below is the response from the Python script, with only 7 items, and the Postman response with 10 items.
The code below retrieves multiple attachments (files being an array of attachment names):
def execute(accessToken, messageID, files, noAttachments):
    import os
    from os import path
    import requests
    import base64
    import json
    if noAttachments == "False":
        url = 'https://graph.microsoft.com/v1.0/users/{{users email}}/messages/{{messageID}}/attachments'
        body = {}
        head = {"Authorization": "Bearer " + accessToken}
        responseCode = requests.request("GET", url, headers=head, data=body)
        response = responseCode.text
        test = json.loads(responseCode.text.encode('utf8'))
        x, contentBytes = response.split('"contentBytes":"', 1)
        if len(files) == 1:
            imgdata = base64.b64decode(str(contentBytes))
            filename = "C:/Temp/SAPCareAttachments/" + files[0]
            with open(filename, 'wb') as f:
                f.write(imgdata)
        else:
            for file in test["value"]:
                imgdata = base64.b64decode(file["contentBytes"])
                if file["name"] in files:
                    filename = "C:/Temp/" + file["name"]
                    with open(filename, 'wb') as f:
                        f.write(imgdata)
    print(responseCode)
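One common cause of a short attachment list from Graph is paging: the service may split a collection across pages linked by @odata.nextLink, and a single GET only returns the first page. Whether paging is the culprit here is an assumption, but a loop that follows the links costs little; a sketch (session is injectable only so the loop can be exercised without a network):

```python
def get_all_attachments(url, access_token, session=None):
    # Follow @odata.nextLink until the collection is exhausted;
    # by default use a real requests session.
    if session is None:
        import requests
        session = requests.Session()
    head = {"Authorization": "Bearer " + access_token}
    items = []
    while url:
        data = session.get(url, headers=head).json()
        items.extend(data.get("value", []))
        url = data.get("@odata.nextLink")
    return items
```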

Python having problems writing/reading and testing in a correct format

I’m trying to make a program that will do the following:
- check if auth_file exists
- if yes -> read the file and try to log in using data from that file
  - if the data is wrong -> request new data
- if no -> request the data, then create the file and fill it with the requested data
So far:
import json
import getpass
import os
import requests

filename = ".auth_data"
auth_file = os.path.realpath(filename)
url = 'http://example.com/api'
headers = {'content-type': 'application/json'}

def load_auth_file():
    try:
        f = open(auth_file, "r")
        auth_data = f.read()
        r = requests.get(url, auth=auth_data, headers=headers)
        if r.reason == 'OK':
            return auth_data
        else:
            print "Incorrect login..."
            req_auth()
    except IOError:
        f = file(auth_file, "w")
        f.write(req_auth())
        f.close()

def req_auth():
    user = str(raw_input('Username: '))
    password = getpass.getpass('Password: ')
    auth_data = (user, password)
    r = requests.get(url, auth=auth_data, headers=headers)
    if r.reason == 'OK':
        return user, password
    elif r.reason == "FORBIDDEN":
        print "Incorrect login information..."
        req_auth()
        return False
I have the following problem (understanding and applying the correct way): I can't find a correct way of storing the data returned from req_auth() in auth_file, in a format that can be read and used in load_auth_file().
PS: Of course I'm a beginner in Python, and I'm sure I have missed some key elements here :(
To read and write data, you can use json:
>>> with open('login.json', 'w') as f:
...     f.write(json.dumps({'user': 'abc', 'pass': '123'}))
>>> with open('login.json', 'r') as f:
...     data = json.loads(f.read())
>>> print data
{u'user': u'abc', u'pass': u'123'}
A few improvements I'd suggest:
- Have a function that tests the login (arguments: user, pwd) and returns True/False
- Save the data inside req_auth, because req_auth is called only when you have incorrect/missing data
- Add an optional argument tries=0 to req_auth, and test against it for a maximum number of tries
(1):

def check_login(user, pwd):
    r = requests.get(url, auth=(user, pwd), headers=headers)
    return r.reason == 'OK'
For (2), you can use json (as described above), csv, etc. Both of those are extremely easy, though json might make more sense since you're already using it.
For (3):

def req_auth(tries=0):  # accept an optional argument for no. of tries
    # your existing code here
    if check_login(user, password):
        pass  # save data here
    else:
        if tries < 3:  # an exit condition and an error message
            req_auth(tries + 1)  # increment no. of tries on every failed attempt
        else:
            print "You have exceeded the number of failed attempts. Exiting..."
There are a couple of things I would approach differently, but you're off to a good start.
Instead of trying to open the file initially, I'd check for its existence:

if not os.path.isfile(auth_file):
Next, when you're writing the output you should use context managers:

with open(auth_file, 'w') as fh:
    fh.write(data)
And finally, as a storage option (not terribly secure), it might work well to put the information you're saving in JSON format:
userdata = dict()
userdata['username'] = raw_input('Username: ')
userdata['password'] = getpass.getpass('Password: ')

# saving
with open(auth_file, 'w') as fho:
    fho.write(json.dumps(userdata))

# loading
with open(auth_file) as fhi:
    userdata = json.loads(fhi.read())
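Putting those pieces together, the whole flow the question describes can be sketched as a single function (Python 3 here; check_login and prompt are injected stand-ins for the asker's requests call and raw_input/getpass prompts, so the file handling stays separate from the network details):

```python
import json
import os

def load_or_request_auth(auth_file, check_login, prompt):
    # check_login(user, pwd) -> bool and prompt() -> (user, pwd) are
    # injected so this flow can be tested without a server.
    if os.path.isfile(auth_file):
        with open(auth_file) as fh:
            data = json.load(fh)
        # reuse stored credentials if they still work
        if check_login(data['username'], data['password']):
            return data
    # missing or stale credentials: ask, then persist as JSON
    user, pwd = prompt()
    data = {'username': user, 'password': pwd}
    with open(auth_file, 'w') as fh:
        json.dump(data, fh)
    return data
```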
