Downloading a large file in parts using multiple parallel threads - python

I have a use case where a large remote file needs to be downloaded in parts, by using multiple threads.
Each thread must run simultaneously (in parallel), grabbing a specific part of the file.
The expectation is to combine the parts into a single (original) file once all parts have been successfully downloaded.
Perhaps using the requests library could do the job, but then I am not sure how I would multithread this into a solution that combines the chunks together.
import requests

url = 'https://url.com/file.iso'
headers = {"Range": "bytes=0-1000000"}  # first megabyte
r = requests.get(url, headers=headers)
I was also thinking of using curl, with Python orchestrating the downloads, but I am not sure that's the correct way to go. It just seems too complex and strays away from a vanilla Python solution. Something like this:
curl --range 200000000-399999999 -o file.iso.part2
Can someone explain how you'd go about something like this? Or post a code example of something that works in Python 3? I usually find the Python-related answers quite easily, but the solution to this problem seems to be eluding me.

Here is a version using Python 3 with asyncio. It's just an example and can be improved, but it should give you everything you need.
get_size: send a HEAD request to get the size of the file
download_range: download a single chunk
download: download all the chunks and merge them
import asyncio
import concurrent.futures
import functools
import os

import requests

# WARNING:
# Here I'm pointing to a publicly available sample video.
# If you are planning on running this code, make sure the
# video is still available as it might change location or get deleted.
# If necessary, replace it with a URL you know is working.
URL = 'https://download.samplelib.com/mp4/sample-30s.mp4'
OUTPUT = 'video.mp4'


async def get_size(url):
    response = requests.head(url)
    size = int(response.headers['Content-Length'])
    return size


def download_range(url, start, end, output):
    headers = {'Range': f'bytes={start}-{end}'}
    response = requests.get(url, headers=headers)
    with open(output, 'wb') as f:
        for part in response.iter_content(1024):
            f.write(part)


async def download(run, loop, url, output, chunk_size=1000000):
    file_size = await get_size(url)
    chunks = range(0, file_size, chunk_size)
    tasks = [
        run(
            download_range,
            url,
            start,
            start + chunk_size - 1,
            f'{output}.part{i}',
        )
        for i, start in enumerate(chunks)
    ]
    await asyncio.wait(tasks)

    with open(output, 'wb') as o:
        for i in range(len(chunks)):
            chunk_path = f'{output}.part{i}'
            with open(chunk_path, 'rb') as s:
                o.write(s.read())
            os.remove(chunk_path)


if __name__ == '__main__':
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=3)
    loop = asyncio.new_event_loop()
    run = functools.partial(loop.run_in_executor, executor)
    asyncio.set_event_loop(loop)
    try:
        loop.run_until_complete(
            download(run, loop, URL, OUTPUT)
        )
    finally:
        loop.close()
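On Python 3.7+, the event-loop boilerplate at the bottom can be trimmed with asyncio.run. A minimal sketch, assuming the download coroutine and the URL/OUTPUT constants from the code above are in scope:

```python
import asyncio
import concurrent.futures
import functools


async def main(url, output):
    # `download` is the coroutine defined in the answer above
    # (assumed to be in scope here).
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=3)
    loop = asyncio.get_running_loop()
    run = functools.partial(loop.run_in_executor, executor)
    await download(run, loop, url, output)

# asyncio.run(main(URL, OUTPUT))
```

asyncio.run creates, runs, and closes the loop for you, so the try/finally block is no longer needed.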

The best way I found is to use a module called pySmartDL.
Step 1: pip install pySmartDL
Step 2: to download the file you could use
from pySmartDL import SmartDL

obj = SmartDL(url, destination)
obj.start()
Note: this gives you a download meter by default.
In case you need to hook the download progress to a GUI you could use
import time

obj = SmartDL(url, dest, progress_bar=False)
obj.start(blocking=False)
while not obj.isFinished():
    download_percentage = round(obj.get_progress() * 100, 2)
    time.sleep(0.2)
    print(download_percentage)
If you want to use more threads you can use
obj = SmartDL(url, destination, threads=7)  # by default threads = 5
obj.start()
Downloads: http://pypi.python.org/pypi/pySmartDL/
Documentation: http://itaybb.github.io/pySmartDL/
Project page: https://github.com/iTaybb/pySmartDL/
Bugs and Issues: https://github.com/iTaybb/pySmartDL/issues

You could use grequests to download in parallel.
import grequests

URL = 'https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-10.1.0-amd64-netinst.iso'
CHUNK_SIZE = 104857600  # 100 MB

HEADERS = []
for x in range(4):  # file size is > 300 MB, so we download in 4 parts
    start = x * CHUNK_SIZE
    stop = start + CHUNK_SIZE - 1  # Range bounds are inclusive on both ends
    HEADERS.append({"Range": "bytes=%s-%s" % (start, stop)})

rs = (grequests.get(URL, headers=h) for h in HEADERS)
downloads = grequests.map(rs)

with open('/tmp/debian-10.1.0-amd64-netinst.iso', 'wb') as f:
    for download in downloads:
        print(download.status_code)
        f.write(download.content)
PS: I did not check whether the ranges are correctly determined or whether the downloaded md5sum matches! This should just show in general how it could work.
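For the md5sum check mentioned in the PS, hashlib can stream the file so the whole ISO never has to sit in memory. A small helper (the path in the comment is just the example destination from above):

```python
import hashlib


def md5sum(path, block_size=1 << 20):
    # Stream the file in 1 MiB blocks so large ISOs don't need
    # to fit in memory.
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(block_size), b''):
            digest.update(block)
    return digest.hexdigest()

# print(md5sum('/tmp/debian-10.1.0-amd64-netinst.iso'))
```

Compare the result against the checksum published next to the image on the Debian mirror.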

You can also use ThreadPoolExecutor (or ProcessPoolExecutor) from concurrent.futures instead of using asyncio. The following shows how to modify bug's answer by using ThreadPoolExecutor:
Bonus: the following snippet also uses tqdm to show a progress bar of the download. If you don't want to use tqdm, just comment out the with tqdm(total=file_size, ...) block. More information on tqdm is here; it can be installed with pip install tqdm. By the way, tqdm can also be used with asyncio.
import concurrent.futures
import os
from concurrent.futures import as_completed

import requests
from tqdm import tqdm


def download_part(url_and_headers_and_partfile):
    url, headers, partfile = url_and_headers_and_partfile
    response = requests.get(url, headers=headers)
    # Same value as in the main block below, but that's not required:
    chunk_size = 1024 * 1024
    # Track the number of bytes written so tqdm can report progress.
    size = 0
    with open(partfile, 'wb') as f:
        for chunk in response.iter_content(chunk_size):
            if chunk:
                size += f.write(chunk)
    return size


def make_headers(start, chunk_size):
    end = start + chunk_size - 1
    return {'Range': f'bytes={start}-{end}'}


url = 'https://download.samplelib.com/mp4/sample-30s.mp4'
file_name = 'video.mp4'

response = requests.get(url, stream=True)
file_size = int(response.headers.get('content-length', 0))
chunk_size = 1024 * 1024
chunks = range(0, file_size, chunk_size)
my_iter = [
    [url, make_headers(chunk, chunk_size), f'{file_name}.part{i}']
    for i, chunk in enumerate(chunks)
]

with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    jobs = [executor.submit(download_part, i) for i in my_iter]
    with tqdm(total=file_size, unit='iB', unit_scale=True,
              unit_divisor=1024, leave=True, colour='cyan') as bar:
        for job in as_completed(jobs):
            size = job.result()
            bar.update(size)

with open(file_name, 'wb') as outfile:
    for i in range(len(chunks)):
        chunk_path = f'{file_name}.part{i}'
        with open(chunk_path, 'rb') as s:
            outfile.write(s.read())
        os.remove(chunk_path)
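Before splitting a download like this, it can be worth checking that the server supports partial requests at all; servers that do usually advertise Accept-Ranges: bytes. A minimal helper, assuming you pass it the headers mapping from a requests.head() response:

```python
def accepts_byte_ranges(headers):
    # Servers that support partial downloads advertise "Accept-Ranges: bytes".
    # `headers` can be a plain dict or requests' case-insensitive mapping;
    # a plain dict lookup is case-sensitive, which is fine for a sketch.
    return headers.get('Accept-Ranges', '').lower() == 'bytes'

# Usage (requires network access):
# head = requests.head(url, allow_redirects=True)
# if not accepts_byte_ranges(head.headers):
#     ...  # fall back to a single plain GET
```

If the server ignores Range headers, it returns 200 with the full body instead of 206 Partial Content, and every "part" would be the whole file.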

Related

Multiprocessing Pool usage with requests

Good day
I am working on a directory scanner and trying to speed it up as much as possible. I have been looking into using multiprocessing; however, I do not believe I am using it correctly.
from multiprocessing import Pool
import requests
import sys

def dir_scanner(wordlist=sys.argv[1], dest_address=sys.argv[2], file_ext=sys.argv[3]):
    print(f"Scanning Target: {dest_address} looking for files ending in {file_ext}")
    # read a wordlist
    dir_file = open(f"{wordlist}").read()
    dir_list = dir_file.splitlines()
    # empty list for discovered dirs
    discovered_dirs = []
    # make requests for each potential dir location
    for dir_item in dir_list:
        req_url = f"http://{dest_address}/{dir_item}.{file_ext}"
        req_dir = requests.get(req_url)
        print(req_url)
        if req_dir.status_code == 404:
            pass
        else:
            print("Directory Discovered ", req_url)
            discovered_dirs.append(req_url)
    with open("discovered_dirs.txt", "w") as f:
        for directory in discovered_dirs:
            print(directory, file=f)

if __name__ == '__main__':
    with Pool(processes=4) as pool:
        dir_scanner(sys.argv[1], sys.argv[2], sys.argv[3])
Is the above example the correct usage of Pool? Ultimately I am attempting to speed up the requests that are being made to the target.
UPDATE: Perhaps not the most elegant solution, but:
from multiprocessing import Pool
import requests
import sys

# USAGE EXAMPLE: python3 dir_scanner.py <wordlist> <target address> <file extension>

discovered_dirs = []

# read in the wordlist
dir_file = open(f"{sys.argv[1]}").read()
dir_list = dir_file.splitlines()

def make_request(dir_item):
    # create a GET request URL based on items in the wordlist
    req_url = f"http://{sys.argv[2]}/{dir_item}.{sys.argv[3]}"
    return req_url, requests.get(req_url)

# map the requests made by make_request to speed things up
with Pool(processes=4) as pool:
    for req_url, req_dir in pool.map(make_request, dir_list):
        # if the request resp is a 404, move on
        if req_dir.status_code == 404:
            pass
        # if not a 404 resp then add it to the list
        else:
            print("Directory Discovered ", req_url)
            discovered_dirs.append(req_url)

# create a new file with the directories that were discovered
with open("discovered_dirs.txt", "w") as f:
    for directory in discovered_dirs:
        print(directory, file=f)
Right now, you are creating a pool and not using it.
You can use pool.map to distribute the requests across multiple processes:
...

def make_request(dir_item):
    req_url = f"http://{dest_address}/{dir_item}.{file_ext}"
    return req_url, requests.get(req_url)

with Pool(processes=4) as pool:
    for req_url, req_dir in pool.map(make_request, dir_list):
        print(req_url)
        if req_dir.status_code == 404:
            pass
        else:
            print("Directory Discovered ", req_url)
            discovered_dirs.append(req_url)

...
In the example above, the function make_request is executed in subprocesses.
The Python documentation gives a lot of examples.
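Since the scanner is network-bound rather than CPU-bound, a thread pool is often the better fit here; multiprocessing.dummy exposes the same Pool API backed by threads, so it's a drop-in swap. A sketch with a trivial stand-in function (the real make_request needs network access):

```python
# Same Pool API as multiprocessing, but backed by threads, which
# usually suits I/O-bound work like issuing HTTP requests.
from multiprocessing.dummy import Pool


def work(item):
    # Hypothetical stand-in for make_request, which needs a live target.
    return item * item


with Pool(processes=4) as pool:
    results = pool.map(work, [1, 2, 3, 4])
```

Threads avoid the per-process startup and pickling overhead, and requests sessions can even be shared between them.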

Memory usage of urllib during requests

I have the following general setup for Python 2 and 3 support for downloading an ~8MiB binary payload:
import six

if six.PY3:
    from urllib.request import Request, urlopen
else:
    from urllib2 import Request, urlopen

def request(url, method='GET'):
    r = Request(url)
    r.get_method = lambda: method
    response = urlopen(r)
    return response

def download_payload():
    with open('output.bin', 'wb') as f:
        f.write(request(URL).read())
I have the following constraints:
It must work on Python 2 and 3
It must have little to no dependencies whatsoever, as it'll run as an Ansible module on various distributions, Ubuntu, RHEL, Fedora, Debian, etc.
I'd like to minimize the memory usage here, but I'm not seeing any documentation on how urllib works internally; does it just always read the response into memory, or can I do manual buffering on my end to keep the memory usage fixed at my buffer size?
I was thinking of doing something like this:
def download_payload():
    with open('output.bin', 'wb') as f:
        r = request(URL)
        hunk = r.read(8192)
        while len(hunk) > 0:
            f.write(hunk)
            hunk = r.read(8192)
The question I'm running into is whether urllib allows me to buffer reads like this to manually manage the memory. Are there any guarantees on it doing this? I can't find mentions of memory usage or buffering in the docs.
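The response objects returned by urlopen are file-like, and read(n) returns at most n bytes at a time, so a loop like the one above should keep memory bounded by the buffer size. Equivalently, shutil.copyfileobj performs the same buffered copy; a minimal sketch, where the response argument stands in for the object returned by request() above:

```python
import shutil


def save_stream(response, path, bufsize=8192):
    # copyfileobj reads at most `bufsize` bytes at a time from the
    # file-like response, so peak memory stays around the buffer size.
    with open(path, 'wb') as f:
        shutil.copyfileobj(response, f, bufsize)
```

shutil is in the standard library on both Python 2 and 3, so this keeps the no-dependencies constraint.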

How to download multiple files simultaneously and trigger specific actions for each one done?

I need help with a feature I'm trying to implement; unfortunately I'm not very comfortable with multithreading.
My script downloads 4 different files from the internet and calls a dedicated function for each one, then saves them all.
The problem is that I'm doing it step by step, so I have to wait for each download to finish before proceeding to the next one.
I see what I should do to solve this, but I haven't managed to code it.
Actual Behaviour:
url_list = [Url1, Url2, Url3, Url4]
files_list = []
files_list.append(downloadFile(Url1))
handleFile(files_list[-1], type=0)
...
files_list.append(downloadFile(Url4))
handleFile(files_list[-1], type=3)
saveAll(files_list)
Needed Behaviour:
url_list = [Url1, Url2, Url3, Url4]
files_list = []
for url in url_list:
    callThread(files_list.append(downloadFile(url)),      # function
               handleFile(files_list[url.index], type=url.index))  # trigger
    # use a thread for downloading
    # once a file is downloaded, it triggers its associated function
# wait for all files to be treated
saveAll(files_list)
Thanks for your help !
A typical approach is to put the I/O-heavy part (fetching data over the internet) and the data processing into the same function:
import random
import threading
import time
from concurrent.futures import ThreadPoolExecutor

import requests

def fetch_and_process_file(url):
    thread_name = threading.current_thread().name
    print(thread_name, "fetch", url)
    data = requests.get(url).text

    # "process" result
    time.sleep(random.random() / 4)  # simulate work
    print(thread_name, "process data from", url)
    result = len(data) ** 2
    return result

threads = 2
urls = ["https://google.com", "https://python.org", "https://pypi.org"]

executor = ThreadPoolExecutor(max_workers=threads)
with executor:
    results = executor.map(fetch_and_process_file, urls)

print()
print("results:", list(results))
outputs:
ThreadPoolExecutor-0_0 fetch https://google.com
ThreadPoolExecutor-0_1 fetch https://python.org
ThreadPoolExecutor-0_0 process data from https://google.com
ThreadPoolExecutor-0_0 fetch https://pypi.org
ThreadPoolExecutor-0_0 process data from https://pypi.org
ThreadPoolExecutor-0_1 process data from https://python.org
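If each URL really needs its own dedicated handler, as in the question, one option is to pair every URL with its processing function and map over the pairs. A sketch with hypothetical stand-in functions so it runs offline (fetch stands in for downloadFile, the handlers for the per-type functions):

```python
from concurrent.futures import ThreadPoolExecutor


def fetch(url):
    # Hypothetical stand-in for downloadFile(); returns fake data
    # so the sketch runs without network access.
    return f"data from {url}"


def handle_type_0(data):
    return data.upper()


def handle_type_1(data):
    return data[::-1]


def fetch_and_handle(job):
    # Each job carries the URL and the handler to trigger once
    # its download completes.
    url, handler = job
    return handler(fetch(url))


jobs = [("http://example.com/a", handle_type_0),
        ("http://example.com/b", handle_type_1)]

with ThreadPoolExecutor(max_workers=4) as executor:
    files_list = list(executor.map(fetch_and_handle, jobs))
```

executor.map preserves input order, so files_list lines up with url_list and can be passed straight to saveAll().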

aiobotocore-aiohttp - Get S3 file content and stream it in the response

I want to get the content of an uploaded file on S3 using botocore and aiohttp service. As the files may have a huge size:
I don't want to store the whole file content in memory,
I want to be able to handle other requests while downloading files from S3 (aiobotocore, aiohttp),
I want to be able to apply modifications on the files I download, so I want to treat it line by line and stream the response to the client
For now, I have the following code in my aiohttp handler:
import asyncio

import aiobotocore
from aiohttp import web

@asyncio.coroutine
def handle_get_file(loop):
    session = aiobotocore.get_session(loop=loop)
    client = session.create_client(
        service_name="s3",
        region_name="",
        aws_secret_access_key="",
        aws_access_key_id="",
        endpoint_url="http://s3:5000"
    )
    response = yield from client.get_object(
        Bucket="mybucket",
        Key="key",
    )
Each time I read one line from the given file, I want to send the response. Actually, get_object() returns a dict with a Body (a ClientResponseContentProxy object) inside. Using its read() method, how can I get a chunk of the expected response and stream it to the client?
When I do :
for content in response['Body'].read(10):
    print("----")
    print(content)
The code inside the loop is never executed.
But when I do :
result = yield from response['Body'].read(10)
I get the content of the file in result. I am a little bit confused about how to use read() here.
Thanks
That's because the aiobotocore API is different from botocore's: here read() returns a FlowControlStreamReader.read coroutine, which you need to yield from.
It looks something like this (taken from https://github.com/aio-libs/aiobotocore/pull/19):
resp = yield from s3.get_object(Bucket='mybucket', Key='k')
stream = resp['Body']
try:
    chunk = yield from stream.read(10)
    while len(chunk) > 0:
        ...
        chunk = yield from stream.read(10)
finally:
    stream.close()
and actually in your case you can even use readline()
https://github.com/KeepSafe/aiohttp/blob/c39355bef6c08ded5c80e4b1887e9b922bdda6ef/aiohttp/streams.py#L587
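With Python 3.5+ async/await syntax, the same read loop can be written as below. The stream argument stands in for resp['Body'], and this sketch collects the chunks in a list instead of streaming them to the aiohttp response:

```python
async def stream_chunks(stream, chunk_size=10):
    # `stream` stands in for resp['Body']; in the real handler you
    # would write each chunk to the aiohttp response instead of
    # accumulating it here.
    out = []
    try:
        chunk = await stream.read(chunk_size)
        while len(chunk) > 0:
            out.append(chunk)
            chunk = await stream.read(chunk_size)
    finally:
        stream.close()
    return out
```

The try/finally ensures the underlying connection is released even if the client disconnects mid-stream.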

Requests with multiple connections

I use the Python Requests library to download a big file, e.g.:
r = requests.get("http://bigfile.com/bigfile.bin")
content = r.content
The big file downloads at +- 30 Kb per second, which is a bit slow. Every connection to the bigfile server is throttled, so I would like to make multiple connections.
Is there a way to make multiple connections at the same time to download one file?
You can use HTTP Range header to fetch just part of file (already covered for python here).
Just start several threads and fetch different range with each and you're done ;)
import threading
import urllib2

chunk_size = 1024 * 1024  # the original snippet never defined this

def download(url, start):
    req = urllib2.Request(url)
    req.headers['Range'] = 'bytes=%s-%s' % (start, start + chunk_size - 1)
    f = urllib2.urlopen(req)
    parts[start] = f.read()

url = 'http://www.python.org/'
threads = []
parts = {}

# Initialize threads
for i in range(0, 10):
    t = threading.Thread(target=download, args=(url, i * chunk_size))
    t.start()
    threads.append(t)

# Join threads back (order doesn't matter, you just want them all)
for i in threads:
    i.join()

# Sort parts and you're done
result = ''.join(parts[i] for i in sorted(parts.keys()))
Also note that not every server supports Range header (and especially servers with php scripts responsible for data fetching often don't implement handling of it).
Here's a Python script that saves a given url to a file and uses multiple threads to download it:
#!/usr/bin/env python
import sys
from functools import partial
from itertools import count, izip
from multiprocessing.dummy import Pool  # use threads
from urllib2 import HTTPError, Request, urlopen

def download_chunk(url, byterange):
    req = Request(url, headers=dict(Range='bytes=%d-%d' % byterange))
    try:
        return urlopen(req).read()
    except HTTPError as e:
        return b'' if e.code == 416 else None  # treat range error as EOF
    except EnvironmentError:
        return None

def main():
    url, filename = sys.argv[1:]
    pool = Pool(4)  # define number of concurrent connections
    chunksize = 1 << 16
    ranges = izip(count(0, chunksize), count(chunksize - 1, chunksize))
    with open(filename, 'wb') as file:
        for s in pool.imap(partial(download_chunk, url), ranges):
            if not s:
                break  # error or EOF
            file.write(s)
            if len(s) != chunksize:
                break  # EOF (servers with no Range support end up here)

if __name__ == "__main__":
    main()
The end of file is detected if a server returns an empty body, or a 416 HTTP code, or if the response size is not exactly chunksize.
It supports servers that don't understand the Range header (everything is downloaded in a single request in that case; to support large files, change download_chunk() to save to a temporary file and return the filename to be read in the main thread, instead of the file content itself).
It allows you to independently change the number of concurrent connections (pool size) and the number of bytes requested in a single HTTP request.
To use multiple processes instead of threads, change the import:
from multiprocessing.pool import Pool # use processes (other code unchanged)
This solution requires the Linux utility aria2c, but it has the advantage of easily resuming downloads.
It also assumes that all the files you want to download are listed in the HTTP directory listing for location MY_HTTP_LOC. I tested this script on an instance of the lighttpd/1.4.26 HTTP server, but you can easily modify this script to make it work with other setups.
#!/usr/bin/python
import os
import re
import subprocess
import urllib

MY_HTTP_LOC = "http://AAA.BBB.CCC.DDD/"

# retrieve webpage source code
f = urllib.urlopen(MY_HTTP_LOC)
page = f.read()
f.close()

# extract relevant URL segments from source code
rgxp = '(\<td\ class="n"\>\<a\ href=")([0-9a-zA-Z\(\)\-\_\.]+)(")'
results = re.findall(rgxp, str(page))
files = []
for match in results:
    files.append(match[1])

# download (using aria2c) files
for afile in files:
    if os.path.exists(afile) and not os.path.exists(afile + '.aria2'):
        print 'Skipping already-retrieved file: ' + afile
    else:
        print 'Downloading file: ' + afile
        subprocess.Popen(["aria2c", "-x", "16", "-s", "20", MY_HTTP_LOC + str(afile)]).wait()
You could use a module called pySmartDL for this. It uses multiple threads, can do a lot more, and also gives a download bar by default.
For more info check this answer.
