How to send POST requests using multithreading in Python?

I'm trying to use multithreading to send POST requests with tokens from a txt file.
I've only managed to send GET requests; if I try to send POST requests it results in an error.
I tried changing the GET to POST but it raises an error.
I want to send a POST request for each token and check whether the JSON response says the token is true or false.
Here is the code:
import threading
import time
from queue import Queue
import requests

file_lines = open("tokens.txt", "r").readlines()  # Gets the tokens from the txt file.

for line in file_lines:
    param = {
        "Token": line.replace('/n', '')
    }

def make_request(url):
    """Makes a web request, prints the thread name, URL, and
    response text.
    """
    resp = requests.get(url)
    with print_lock:
        print("Thread name: {}".format(threading.current_thread().name))
        print("Url: {}".format(url))
        print("Response code: {}\n".format(resp.text))

def manage_queue():
    """Manages the url_queue and calls the make request function"""
    while True:
        # Stores the URL and removes it from the queue so no
        # other threads will use it.
        current_url = url_queue.get()
        # Calls the make_request function
        make_request(current_url)
        # Tells the queue that the processing on the task is complete.
        url_queue.task_done()

if __name__ == '__main__':
    # Set the number of threads.
    number_of_threads = 5
    # Needed to safely print in multi-threaded programs.
    print_lock = threading.Lock()
    # Initializes the queue that all threads will pull from.
    url_queue = Queue()
    # The list of URLs that will go into the queue.
    urls = ["https://www.google.com"] * 30
    # Start the threads.
    for i in range(number_of_threads):
        # Send the threads to the function that manages the queue.
        t = threading.Thread(target=manage_queue)
        # Makes the thread a daemon so it exits when the program finishes.
        t.daemon = True
        t.start()
    start = time.time()
    # Puts the URLs in the queue
    for current_url in urls:
        url_queue.put(current_url)
    # Wait until all threads have finished before continuing the program.
    url_queue.join()
    print("Execution time = {0:.5f}".format(time.time() - start))
I want to send a POST request for each token in the txt file.
Errors I get when replacing get with post:
Traceback (most recent call last):
  File "C:\Users\Creative\Desktop\multithreading.py", line 40, in <module>
    url_queue = Queue()
NameError: name 'Queue' is not defined

Traceback (most recent call last):
  File "C:\Users\Creative\Desktop\multithreading.py", line 22, in manage_queue
    current_url = url_queue.post()
AttributeError: 'Queue' object has no attribute 'post'

I also tried a solution using tornado and async, but none of them worked.

I finally managed to send POST requests using multithreading.
If anyone spots an error or can suggest an improvement to my code, feel free to do so :)
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed
from time import time

url_list = [
    "https://www.google.com/api/"
]
tokens = {'Token': '326729'}

def download_file(url):
    html = requests.post(url, stream=True, data=tokens)
    return html.content

start = time()
processes = []
with ThreadPoolExecutor(max_workers=200) as executor:
    for url in url_list:
        processes.append(executor.submit(download_file, url))
for task in as_completed(processes):
    print(task.result())

print(f'Time taken: {time() - start}')
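The snippet above posts a single hard-coded token. To match the original goal of sending one POST per token from tokens.txt and checking the JSON response, a minimal sketch along these lines might work; the endpoint URL and the 'valid' field name are assumptions, not from the original post:

import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

URL = "https://www.example.com/api/verify"  # assumed endpoint

def check_token(token):
    # One POST per token; the 'Token' form field matches the question.
    resp = requests.post(URL, data={"Token": token}, timeout=10)
    # 'valid' is a hypothetical field name -- adjust to the real JSON response.
    return token, resp.json().get("valid")

with open("tokens.txt") as f:
    tokens = [line.strip() for line in f if line.strip()]

with ThreadPoolExecutor(max_workers=10) as executor:
    futures = [executor.submit(check_token, tok) for tok in tokens]
    for future in as_completed(futures):
        token, is_valid = future.result()
        print(token, "->", is_valid)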

Related

How to continuously pull data from a URL in Python?

I have a link, e.g. www.someurl.com/api/getdata?password=..., and when I open it in a web browser it sends a constantly updating document of text. I'd like to make an identical connection in Python, and dump this data to a file live as it's received. I've tried using requests.Session(), but since the stream of data never ends (and dropping it would lose data), the get request also never ends.
import requests
s = requests.Session()
x = s.get("www.someurl.com/api/getdata?password=...") #never terminates
What's the proper way to do this?
I found the answer I was looking for here: Python Requests Stream Data from API
Full implementation:
import requests

url = "www.someurl.com/api/getdata?password=..."
s = requests.Session()
with open('file.txt', 'a') as fp:
    with s.get(url, stream=True) as resp:
        for line in resp.iter_lines(chunk_size=1):
            fp.write(str(line))
Note that chunk_size=1 is necessary for the data to immediately respond to new complete messages, rather than waiting for an internal buffer to fill before iterating over all the lines. I believe chunk_size=None is meant to do this, but it doesn't work for me.
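A small refinement, if the literal b'...' representations and missing newlines in file.txt are a problem: decode each line before writing and flush so the file tracks the stream. A sketch under the same assumptions as the snippet above:

import requests

url = "www.someurl.com/api/getdata?password=..."
with requests.Session() as s:
    with s.get(url, stream=True) as resp:
        with open('file.txt', 'a') as fp:
            for line in resp.iter_lines(chunk_size=1):
                if line:  # iter_lines yields b'' for keep-alive newlines
                    fp.write(line.decode('utf-8', errors='replace') + '\n')
                    fp.flush()  # write each message to disk as it arrives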
You can keep making GET requests to the URL:
import requests
import time

url = "www.someurl.com/api/getdata?password=..."
sess = requests.session()
while True:
    req = sess.get(url)
    time.sleep(10)
This will terminate the request after 1 second:
import multiprocessing
import time
import requests

data = None

def get_from_url(x):
    s = requests.Session()
    data = s.get("www.someurl.com/api/getdata?password=...")

if __name__ == '__main__':
    while True:
        p = multiprocessing.Process(target=get_from_url, name="get_from_url", args=(1,))
        p.start()
        # Wait 1 second for get request
        time.sleep(1)
        p.terminate()
        p.join()
        # do something with the data
        print(data)  # or smth else
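Note that as written, data inside get_from_url is a local variable in a separate process, so the parent's print(data) will always show None. A minimal sketch of one way to actually get something back, using a multiprocessing.Queue (the URL is the same placeholder as above, and passing only the status code is an illustrative choice):

import multiprocessing
import time
import requests

def get_from_url(result_queue):
    s = requests.Session()
    # stream=True returns as soon as headers arrive, so we can report quickly.
    resp = s.get("www.someurl.com/api/getdata?password=...", stream=True)
    result_queue.put(resp.status_code)  # hand the result to the parent process

if __name__ == '__main__':
    result_queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=get_from_url, args=(result_queue,))
    p.start()
    time.sleep(1)   # give the request one second
    p.terminate()
    p.join()
    if not result_queue.empty():
        print(result_queue.get())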

My threads seem to be hanging at the last couple of tasks (image downloads with requests)

I'm using threads to download images from the ImageNet database.
Here is the link:
http://www.image-net.org/
First, I searched the ImageNet database for "puppies" and was able to get a text file with 1000+ URLs (image URLs).
(If you don't want to go that route, I've uploaded the URLs (the first 400 or so) to this pastebin: https://pastebin.com/yTcHq0iw )
I then read the first 200 lines of the text file (thus 200 URLs) and used threads to download those 200 images.
If I download only 100 images (read only the first 100 URLs), the threads execute perfectly.
However, if I try something like 150+ (150, 175, 200, etc.), the threads will download the first 147 or so (or around 172 if I'm using 175), then just hang for about 30 seconds before finishing up the last few images.
I'm using Requests to download the images, so I'm not sure whether Requests is having trouble making some connections and that is the cause. I'm not too familiar with Requests' lower-level API, so I'm not sure how to fix it if it is a Requests problem. I found some code on the internet and tried overriding some of Requests' built-in options, but these tweaks haven't solved the "hanging" problem.
Here is my code:
import os
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
import time
import queue
from threading import Thread

SAVE_DIR = r'C:\Users\Moondra\Desktop\TEMP\Puppy_threading'  # is a constant

def decorator_function(func):
    def wrapper(*args, **kwargs):
        session = requests.Session()
        retry = Retry(connect=0, backoff_factor=0.2)
        adapter = HTTPAdapter(max_retries=retry)
        session.mount('http://', adapter)
        session.mount('https://', adapter)
        return func(*args, session=session, **kwargs)
    return wrapper

# Using threading:
image_count = 0

# @decorator_function  (optional decorator_function)
def download_image(session=None):
    global image_count
    if not session:
        session = requests.Session()
    while not q.empty():
        try:
            r = session.get(q.get(block=False))
        except (requests.exceptions.RequestException, UnicodeError) as e:
            print(e)
            image_count += 1
            q.task_done()
            continue
        image_count += 1
        q.task_done()
        print('image', image_count)
        with open(os.path.join(
                SAVE_DIR, 'image_{}.jpg'.format(image_count)),
                'wb') as f:
            f.write(r.content)

q = queue.Queue()
with open(r'C:\Users\Moondra\Desktop\puppies.txt', 'rt') as f:
    for i in range(200):
        line = f.readline()
        q.put(line.strip())
print(q.qsize())

threads = []
start = time.time()
for i in range(50):
    t = Thread(target=download_image)
    t.setDaemon(True)
    threads.append(t)
    t.start()

q.join()

for t in threads:
    t.join()
    print(t.name, 'has joined')

end = time.time()
print('time taken: {:.4f}'.format(end - start))
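No accepted answer is shown here, but a common cause of a few workers hanging is a download that stalls without failing; requests waits indefinitely unless a timeout is given. A minimal self-contained sketch of adding a (connect, read) timeout, which would slot into the session.get() call above; the 5/30 second values are arbitrary assumptions:

import requests

session = requests.Session()
# timeout=(connect_timeout, read_timeout): a stalled connection now raises
# requests.exceptions.Timeout instead of blocking the worker thread forever.
try:
    r = session.get("http://www.image-net.org/", timeout=(5, 30))
    print(len(r.content), "bytes downloaded")
except requests.exceptions.RequestException as e:
    print("request failed or timed out:", e)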

Concurrent POST requests w/ attachment Python

I have a server which waits for a request containing a picture:
@app.route("/uploader_ios", methods=['POST'])
def upload_file_ios():
    imagefile = request.files['imagefile']
I can submit a post request quite easily using requests in python like so:
url = "<myserver>/uploader_ios"
files = {'imagefile': open(fname, 'rb')}
%time requests.post(url, files=files).json() # 2.77s
However, what I would like to do is submit 1,000 or perhaps 100,000 requests at the same time. I wanted to try to do this using asyncio because I have been able to use it for GET requests without a problem. However, I can't seem to create a valid POST request that the server accepts.
My attempt is below:
import aiohttp
import asyncio
import json

# Testing with small amount
concurrent = 2
url_list = ['<myserver>/uploader_ios'] * 10

def handle_req(data):
    return json.loads(data)['English']

def chunked_http_client(num_chunks, s):
    # Use semaphore to limit number of requests
    semaphore = asyncio.Semaphore(num_chunks)

    @asyncio.coroutine
    # Return co-routine that will work asynchronously and respect
    # locking of semaphore
    def http_get(url):
        nonlocal semaphore
        with (yield from semaphore):
            # Attach files
            files = aiohttp.FormData()
            files.add_field('imagefile', open(fname, 'rb'))
            response = yield from s.request('post', url, data=files)
            print(response)
            body = yield from response.content.read()
            yield from response.wait_for_close()
        return body
    return http_get

def run_experiment(urls, _session):
    http_client = chunked_http_client(num_chunks=concurrent, s=_session)
    # http_client returns futures, save all the futures to a list
    tasks = [http_client(url) for url in urls]
    dfs_route = []
    # wait for futures to be ready then iterate over them
    for future in asyncio.as_completed(tasks):
        data = yield from future
        try:
            out = handle_req(data)
            dfs_route.append(out)
        except Exception as err:
            print("Error {0}".format(err))
    return dfs_route

with aiohttp.ClientSession() as session:  # We create a persistent connection
    loop = asyncio.get_event_loop()
    calc_routes = loop.run_until_complete(run_experiment(url_list, session))
The issue is that the response I get is:
.../uploader_ios) [400 BAD REQUEST]>
I am assuming this is because I am not correctly attaching the image file.
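No answer is shown here, but one plausible cause (an assumption, not something stated in the post) is that a single file object opened once is shared by all concurrent requests, so later requests post an exhausted file. A minimal sketch that opens the file per request and sets a filename and content type on the form field, written with the current aiohttp async/await syntax; fname and the endpoint are placeholders from the question:

import aiohttp
import asyncio

fname = "test.jpg"              # placeholder, as in the question
url = "<myserver>/uploader_ios"

async def upload_one(session, url):
    # Re-open the file for every request so each POST sends the full content.
    with open(fname, 'rb') as f:
        data = aiohttp.FormData()
        data.add_field('imagefile', f,
                       filename=fname,
                       content_type='image/jpeg')
        async with session.post(url, data=data) as resp:
            return await resp.text()

async def main():
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*[upload_one(session, url) for _ in range(10)])
        print(results)

asyncio.run(main())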

write an asynchronous http client using twisted framework

I want to write an asynchronous HTTP client using the Twisted framework that fires 5 requests simultaneously to 5 different servers, then compares the responses and displays a result. Could someone please help with this?
For this situation I'd suggest using treq and DeferredList to aggregate the responses then fire a callback when all the URLs have been returned. Here is a quick example:
import treq
from twisted.internet import reactor, defer, task

def fetchURL(*urls):
    dList = []
    for url in urls:
        d = treq.get(url)
        d.addCallback(treq.content)
        dList.append(d)
    return defer.DeferredList(dList)

def compare(responses):
    # the responses are returned in a list of tuples
    # Ex: [(True, b'')]
    for status, content in responses:
        print(content)

def main(reactor):
    urls = [
        'http://swapi.co/api/films/schema',
        'http://swapi.co/api/people/schema',
        'http://swapi.co/api/planets/schema',
        'http://swapi.co/api/species/schema',
        'http://swapi.co/api/starships/schema',
    ]
    d = fetchURL(*urls)     # returns Deferred
    d.addCallback(compare)  # fire compare() once the URLs return w/ a response
    return d                # wait for the DeferredList to finish

task.react(main)
# usually you would run reactor.run() but react() takes care of that
In the main function, a list of URLs is passed into fetchURL(). There, each URL triggers an async request that returns a Deferred, which is appended to a list. The final list is then used to create and return a DeferredList object. Finally, we add a callback (compare() in this case) to the DeferredList that will access each response. You would put your comparison logic in the compare() function.
You don't necessarily need Twisted to make asynchronous HTTP requests. You can use Python threads and the wonderful requests package.
from threading import Thread
import requests

def make_request(url, results):
    response = requests.get(url)
    results[url] = response

def main():
    results = {}
    threads = []
    for i in range(5):
        url = 'http://webpage/{}'.format(i)
        t = Thread(target=make_request, kwargs={'url': url, 'results': results})
        t.start()
        threads.append(t)
    for t in threads:   # iterate over the list (not threads())
        t.join()
    print(results)

Make Post Requests with Files Simultaneously

I wrote a simple server and it runs well.
Now I want to write some code that makes many POST requests to my server simultaneously, to simulate a stress test. I'm using Python.
Suppose the URL of my server is http://myserver.com.
file1.jpg and file2.jpg are the files that need to be uploaded to the server.
Here is my testing code. I use threading and urllib2.
async_posts.py
from Queue import Queue
from threading import Thread
from poster.encode import multipart_encode
from poster.streaminghttp import register_openers
import urllib2, sys, time

num_thread = 4
queue = Queue(2 * num_thread)

def make_post(url):
    register_openers()
    data = {"file1": open("path/to/file1.jpg"), "file2": open("path/to/file2.jpg")}
    datagen, headers = multipart_encode(data)
    request = urllib2.Request(url, datagen, headers)
    start = time.time()
    res = urllib2.urlopen(request)
    end = time.time()
    return res.code, end - start  # Return the status code and duration of this request.

def daemon():
    while True:
        url = queue.get()
        status, duration = make_post(url)
        print status, duration
        queue.task_done()

for _ in range(num_thread):
    thd = Thread(target=daemon)
    thd.daemon = True
    thd.start()

try:
    urls = ["http://myserver.com"] * num_thread
    for url in urls:
        queue.put(url)
    queue.join()
except KeyboardInterrupt:
    sys.exit(1)
When num_thread is small (e.g. 4), my code runs smoothly. But as soon as I switch num_thread to a slightly larger number, say 10, all the threading breaks down and keeps throwing httplib.BadStatusLine errors.
I don't know why my code goes wrong, or maybe there is a better way to do this?
As a reference, my server is written in Python using Flask and gunicorn.
Thanks in advance.
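No answer is shown here, but since the question asks whether there is a better way, here is a minimal sketch of the same stress test using requests and concurrent.futures on Python 3, which avoids poster/urllib2 entirely; http://myserver.com and the file paths are placeholders from the question, and the worker/request counts are arbitrary:

import time
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

URL = "http://myserver.com"
NUM_REQUESTS = 10

def make_post(url):
    # Open the files per request so each upload gets its own fresh handles.
    with open("path/to/file1.jpg", "rb") as f1, open("path/to/file2.jpg", "rb") as f2:
        start = time.time()
        resp = requests.post(url, files={"file1": f1, "file2": f2}, timeout=30)
        return resp.status_code, time.time() - start

with ThreadPoolExecutor(max_workers=10) as executor:
    futures = [executor.submit(make_post, URL) for _ in range(NUM_REQUESTS)]
    for future in as_completed(futures):
        status, duration = future.result()
        print(status, duration)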
