Wrong speed test results with time.clock() between synchronous and asynchronous cases?

I'm reading a tutorial about gevent, and it provides sample code demonstrating the synchronous and asynchronous cases:
import gevent
import random

def task(pid):
    """
    Some non-deterministic task
    """
    gevent.sleep(random.randint(0, 2) * 0.001)
    print('Task', pid, 'done')

def synchronous():
    for i in range(1, 10):
        task(i)

def asynchronous():
    threads = [gevent.spawn(task, i) for i in xrange(1000)]
    gevent.joinall(threads)
This article explains that 'the order of execution in the async case is essentially random and that the total execution time in the async case is much less than the sync case'.
So I used the time module to test it:
print('Synchronous:')
start1 = time.clock()
synchronous()
end1 = time.clock()
print "%.2gs" % (end1-start1)
print('Asynchronous:')
start2 = time.clock()
asynchronous()
end2 = time.clock()
print "%.2gs" % (end2-start2)
However, the time reported for 'asynchronous' is much longer than for 'synchronous':
ubuntu@ip:/tmp$ python gevent_as.py
Synchronous:
0.32s
Asynchronous:
0.64s
ubuntu@ip:/tmp$ python gevent_as.py
Synchronous:
0.3s
Asynchronous:
0.61s
I want to know what's wrong with my test program. Thanks.

The problem is time.clock(): on Linux it measures CPU time rather than wall-clock time, so it doesn't behave the way you expect under Ubuntu. See this question for details: Python - time.clock() vs. time.time() - accuracy?
I changed the test program:
print('Synchronous:')
start1 = time.time()
synchronous()
end1 = time.time()
print "%.2gs" % (end1-start1)
print('Asynchronous:')
start2 = time.time()
asynchronous()
end2 = time.time()
print "%.2gs" % (end2-start2)
Then the 'asynchronous' case is much faster than the 'synchronous' one:
ubuntu@ip:/tmp$ python gevent_as.py
Synchronous:
1.1s
Asynchronous:
0.057s
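As an aside, time.clock() has been removed entirely in Python 3; a minimal sketch of the same measurement with time.perf_counter() (my addition, not from the tutorial), which is a high-resolution wall-clock timer and also counts time spent sleeping:
import time

print('Synchronous:')
start1 = time.perf_counter()
synchronous()
print("%.2gs" % (time.perf_counter() - start1))

print('Asynchronous:')
start2 = time.perf_counter()
asynchronous()
print("%.2gs" % (time.perf_counter() - start2))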

Probably the sleeps are very small and overhead matters. Try replacing 0.001 with 0.1.


How can I improve what I already have?

I have created a Windows internet speed test. I'd like to improve it, make the code more presentable, and better define my functions.
When execution reaches initialise(), it cannot access the variables it needs because they are local to another function. How can I rectify this, given that I use variables across several different functions?
Feel free to use this speed tester as well; I will be working on a phone app to run the code too.
The code prints the current date and time, searches for the connected SSID, initialises the speedtest module, scans for servers, selects the best server, runs the ping test, then the download speed test, then the upload speed test, and finally prints the results on screen and writes them to a simple .txt file for viewing later.
Each function reports its run time using the time module, and the total execution time is printed at the end along with the date and time.
It works perfectly without the functions (and on Android without find_ssid()), but I keep running into trouble with local variables.
import speedtest
from datetime import datetime
import subprocess
import re
import time

def main():
    def date():
        dt_now = datetime.now()
        dtn = dt_now.strftime("%a %d-%m-%Y, %H:%M:%S%p")
        return dtn
    print(date())

    def find_ssid():
        stt = time.time()
        cdop = subprocess.run(["netsh", "WLAN", "show", "interfaces"], capture_output=True).stdout.decode()
        ssid = (re.findall("SSID : (.*)\r", cdop))
        for char in ssid:
            ssid = f"Network Name: {char} \n"
        sid = time.time() - stt
        print(f'SSID found in: {sid:.2f}s')
        print(ssid)
    find_ssid()

    def initialise():
        print("Initialising network speed test... ")
        st = speedtest.Speedtest()
        print("Network speed test active.")
        sta = time.time() - stt
        print(f'Speed test activation time: {sta - sid:.2f}s')

    def scan_servers():
        print("Scanning for available servers...")
        st.get_servers()
        print("Found available servers.")
        sft = time.time() - stt
        print(f'Servers found in: {sft - sta:.2f}s')

    def best_server():
        print("Choosing best server...")
        bserv = st.get_best_server()
        print(f"Best server is: {bserv['sponsor']} - {bserv['host']} located in {bserv['name']}, {bserv['country']}")
        bst = time.time() - stt
        print(f'Best server found in: {bst - sft:.2f}s')

    def ping_test():
        print("Ping testing...")
        p = st.results.ping
        ph = f"Ping: {p:.2f}ms"
        print("Ping test complete.")
        ptt = time.time() - stt
        print(f'Ping test completed in: {ptt - bst:.2f}s')

    def download_speed_test():
        print("Download speed testing...")
        ds = st.download()
        dsh = f"Download speed: {ds / 1024 / 1024:.2f}mb/s"
        print("Download speed test complete.")
        dst = time.time() - stt
        print(f'Download speed test completed in: {dst - ptt:.2f}s')

    def upload_speed_test():
        print("Upload speed testing...")
        us = st.upload()
        ust = time.time() - stt
        ush = f"Upload speed: {us / 1024 / 1024:.2f}mb/s \n"
        print("Upload speed test complete. \n")
        print(f'Upload speed test completed in: {ust - dst:.2f}s')

    def result():
        print("Speed test results are: \n")
        print(ssid)
        print(ph)
        print(dsh)
        print(ush)
        ttn = datetime.now()
        fdt = ttn.strftime("%a %d-%m-%Y, %H:%M:%S%p")
        tt = time.time() - stt
        print(f"Start Time: {dtn}")
        print(f"Finish Time: {fdt}")
        print(f'Total execution time: {tt:.2f}s')
        results = [ssid, ph, dsh, ush, dtn]
        txt = "Speedtest Results.txt"
        with open(txt, 'a') as f:
            f.write("\n")
            f.write("\n".join(results))
            f.write("\n")
        f.close()

main()
You can run this in one line, I believe:
ssid = (re.findall("SSID : (.*)\r", cdop))
for char in ssid:
    ssid = f"Network Name: {char} \n"
which should make it quicker; have a look at list comprehensions.
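For illustration, a sketch of what that might look like as a list comprehension (my wording, not the original answer's), keeping the "last match wins" behaviour of the loop above:
ssid_lines = [f"Network Name: {name} \n" for name in re.findall("SSID : (.*)\r", cdop)]
ssid = ssid_lines[-1] if ssid_lines else ""  # same result as repeatedly rebinding ssid in the loop
As for the local-variable problem the question asks about, a common pattern is to return values from each function and pass them as parameters to the next, rather than relying on names defined inside another function. A minimal sketch of the idea, with hypothetical signatures rather than a drop-in replacement:
def find_ssid():
    ...  # as in the question
    return ssid  # hand the value back instead of keeping it local

def initialise(stt):
    print("Initialising network speed test... ")
    st = speedtest.Speedtest()
    print(f'Speed test activation time: {time.time() - stt:.2f}s')
    return st

stt = time.time()
ssid = find_ssid()
st = initialise(stt)  # st can now be passed on to scan_servers(st), best_server(st), and so on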

Python multiprocessing finish the work correctly, but the processes still alive (Linux)

I use Python multiprocessing to compute some sort of score on DNA sequences from a large file.
For that I wrote and use the script below.
I use a Linux machine with 48 CPUs in a Python 3.8 environment.
The code works fine, terminates the work correctly, and prints the processing time at the end.
Problem: when I use the htop command, I find that all 48 processes are still alive.
I don't know why, and I don't know what to add to my script to avoid this.
import csv
import sys
import datetime  # needed for datetime.timedelta below
import concurrent.futures
from itertools import combinations
import psutil
import time

nb_cpu = psutil.cpu_count(logical=False)

def fun_job(seq_1, seq_2):  # seq_i : (id, string)
    start = time.time()
    score_dist = compute_score_dist(seq_1[1], seq_2[1])
    end = time.time()
    return seq_1[0], seq_2[0], score_dist, end - start  # id seq1, id seq2, score, time

def help_fun_job(nested_pair):
    return fun_job(nested_pair[0], nested_pair[1])

def compute_using_multi_processing(list_comb_ids, dict_ids_seqs):
    start = time.perf_counter()
    with concurrent.futures.ProcessPoolExecutor(max_workers=nb_cpu) as executor:
        results = executor.map(help_fun_job,
                               [((pair_ids[0], dict_ids_seqs[pair_ids[0]]), (pair_ids[1], dict_ids_seqs[pair_ids[1]]))
                                for pair_ids in list_comb_ids])
        save_results_to_csv(results)
    finish = time.perf_counter()
    proccessing_time = str(datetime.timedelta(seconds=round(finish - start, 2)))
    print(f' Processing time Finished in {proccessing_time} hh:mm:ss')

def main():
    print("nb_cpu in this machine : ", nb_cpu)
    file_path = sys.argv[1]
    dict_ids_seqs = get_dict_ids_seqs(file_path)
    list_ids = list(dict_ids_seqs)  # This will convert the dict_keys to a list
    list_combined_ids = list(combinations(list_ids, 2))
    compute_using_multi_processing(list_combined_ids, dict_ids_seqs)

if __name__ == '__main__':
    main()
Thank you for your help.
Edit: added the complete code for fun_job (after @Booboo's answer)
from Bio import Align

def fun_job(seq_1, seq_2):  # seq_i : (id, string)
    start = time.time()
    aligner = Align.PairwiseAligner()
    aligner.mode = 'global'
    score_dist = aligner.score(seq_1[1], seq_2[1])
    end = time.time()
    return seq_1[0], seq_2[0], score_dist, end - start  # id seq1, id seq2, score, time
When the with ... as executor: block exits, there is an implicit call to executor.shutdown(wait=True). This waits for all pending futures to be done executing "and the resources associated with the executor have been freed", which presumably includes terminating the processes in the pool (if possible?). Why your program reports that all the futures have completed executing while the processes have not terminated is a bit of a mystery. But you haven't provided the code for fun_job, so who can say why this is so?
One thing you might try is to switch to using the multiprocessing.pool.Pool class from the multiprocessing module. It supports a terminate method, which is implicitly called when its context manager with block exits, that explicitly attempts to terminate all processes in the pool:
# import concurrent.futures
import multiprocessing
...  # etc.

def compute_using_multi_processing(list_comb_ids, dict_ids_seqs):
    start = time.perf_counter()
    with multiprocessing.Pool(processes=nb_cpu) as executor:
        results = executor.map(help_fun_job,
                               [((pair_ids[0], dict_ids_seqs[pair_ids[0]]), (pair_ids[1], dict_ids_seqs[pair_ids[1]]))
                                for pair_ids in list_comb_ids])
        save_results_to_csv(results)
    finish = time.perf_counter()
    proccessing_time = str(datetime.timedelta(seconds=round(finish - start, 2)))
    print(f' Processing time Finished in {proccessing_time} hh:mm:ss')
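If you want to verify from inside the program whether the workers actually exited (my own check, not part of the original answer), you can inspect multiprocessing.active_children() after the with block; calling join() there is allowed because terminate() has already run:
def compute_using_multi_processing(list_comb_ids, dict_ids_seqs):
    with multiprocessing.Pool(processes=nb_cpu) as executor:
        ...  # map and save_results_to_csv as above
    executor.join()  # wait for the terminated workers to really exit
    print("workers still alive:", multiprocessing.active_children())  # expect an empty list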

Set limit and check memory usage of subprocess.Popen in Python

I am developing an online judge using Django on Debian. I run all user scripts with subprocess.Popen. To check a script's time usage I use the time module, and to enforce a timeout I pass the timeout parameter to communicate() on the process object and handle the subprocess.TimeoutExpired exception. Is it possible to do something similar to check the memory usage of the process and to limit it?
Here is a code sample of how I do it now:
try:
    execution = subprocess.Popen(execute_line.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                                 stdin=subprocess.PIPE, shell=False)
    execution.stdin.write(bytes(test.input_data, 'UTF-8'))
    execution.stdin.flush()
    start_time = time.time()
    test_output, test_error_string = execution.communicate(timeout=time_limit)
    end_time = time.time()
    finish_time = end_time - start_time
    test_output = test_output.decode('utf-8')
    test_error_string = test_error_string.decode('utf-8')
except subprocess.TimeoutExpired:
    end_time = time.time()
    finish_time = end_time - start_time
There is no straightforward, one-liner way to check memory consumption.
Solutions:
You can use psutil (pip install psutil), for example:
import os
import time
import json
import psutil

def get_process_memory():
    process = psutil.Process(os.getpid())
    return process.memory_info().rss

def elapsed_since(start):
    # simple helper; the original snippet assumed one was defined elsewhere
    return time.time() - start

def track(func):
    def wrapper(*args, **kwargs):
        mem_before = get_process_memory() / 1024 ** 2
        start = time.time()
        result = func(*args, **kwargs)
        elapsed_time = elapsed_since(start)
        mem_after = get_process_memory() / 1024 ** 2
        metrics = {
            'callable': func.__name__,
            'memory_before': mem_before,
            'memory_after': mem_after,
            'memory_used': mem_after - mem_before,
            'exec_time': elapsed_time
        }
        print(f"{json.dumps(metrics, indent=4)}")
        return result
    return wrapper
Use a similar approach with the resource Python library (only works on Unix systems):
# Memory consumption with psutil (MB)
import os, psutil; print(psutil.Process(os.getpid()).memory_info().rss / 1024 ** 2)
# Memory consumption with resource (MB) - Only works on Unix
import resource; print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024)
Use tracemalloc - a bit more complex IMHO
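For completeness, a minimal tracemalloc sketch (my addition; note that it only tracks Python-level allocations made in the current process, not in a child started with Popen):
import tracemalloc

tracemalloc.start()
data = [x ** 2 for x in range(100_000)]  # hypothetical workload to measure
current, peak = tracemalloc.get_traced_memory()  # both values are in bytes
print(f"current: {current / 1024 ** 2:.2f} MB, peak: {peak / 1024 ** 2:.2f} MB")
tracemalloc.stop()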
Use an external process/software to monitor metrics of your processes
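Since the question also asks about enforcing a limit rather than only measuring, here is a minimal sketch of capping the child's memory on Unix with resource.setrlimit via Popen's preexec_fn. memory_limit is a hypothetical value; execute_line comes from the question's code:
import resource
import subprocess

memory_limit = 256 * 1024 * 1024  # hypothetical cap: 256 MB of address space

def limit_child_memory():
    # Runs in the child just before exec; RLIMIT_AS caps total virtual memory,
    # so allocations beyond the limit fail (MemoryError / ENOMEM) in the judged script.
    resource.setrlimit(resource.RLIMIT_AS, (memory_limit, memory_limit))

execution = subprocess.Popen(execute_line.split(),
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                             stdin=subprocess.PIPE, shell=False,
                             preexec_fn=limit_child_memory)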

Python parallel library takes longer than sequential execution

I am trying to leverage multiprocessing by using the Parallel class from joblib. However, strangely, I see that the parallel execution takes longer than the sequential version. Below is the code I am running for the comparison:
import time
from joblib import Parallel, delayed

def compute_features(summary, article):
    feature_dict = {}
    feature_dict["f1"] = summary
    feature_dict["f2"] = article
    return feature_dict

def construct_input(n):
    summaries = []
    articles = []
    for i in range(n):
        summaries.append("summary_" + str(i))
        articles.append("articles_" + str(i))
    return summaries, articles

def sequential_test(n):
    print("Sequential test")
    start_time = time.time()
    summaries, articles = construct_input(n)
    feature_list = []
    for i in range(n):
        feature_list.append(compute_features(summaries[i], articles[i]))
    total_time = time.time() - start_time
    print("Total Time Sequential : %s" % total_time)
    # print(feature_list)

def parallel_test(n):
    print("Parallel test")
    start_time = time.time()
    summaries, articles = construct_input(n)
    feature_list = []
    executor = Parallel(n_jobs=8, backend="multiprocessing", prefer="processes", verbose=True)
    # executor = Parallel(n_jobs=4, prefer="threads")
    tasks = (delayed(compute_features)(summaries[i], articles[i]) for i in range(n))
    results = executor(tasks)
    for result in results:
        feature_list.append(result)
    total_time = time.time() - start_time
    print("Total Time Parallel : %s" % total_time)
    # print(feature_list)

if __name__ == "__main__":
    n = 500000
    sequential_test(n)
    parallel_test(n)
I get the following output when I run the code above
Sequential test
Total Time Sequential : 1.200118064880371
Parallel test
[Parallel(n_jobs=8)]: Using backend MultiprocessingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 56 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 49136 tasks | elapsed: 1.0s
[Parallel(n_jobs=8)]: Done 500000 out of 500000 | elapsed: 4.7s finished
Total Time Parallel : 5.427206039428711
I am running this code on a mac with the following configuration
Can you guys please help me understand why this is so? And if the hardware were to change, say to a GPU, would the code be any faster? I appreciate your responses. Thanks in advance.

For loop on API

I have a list named "y" of around 28K numbers, and I am running a for loop over it that calls an API to send messages, but this takes a lot of time (1.2797 seconds per call, to be exact).
Code:
import timeit
import requests  # needed for requests.post below

start = timeit.default_timer()
for i in y:
    data = {'From': 'XXXX', 'To': str(i),
            'Body': "ABC ABC"}
    requests.post('https://xxxx:xx#api.xxx.com/v1/Accounts/xxx/Sms/send', data=data)
stop = timeit.default_timer()
print('Time: ', stop - start)
How can I reduce the time for this ?
Asyncio and multithreading are two possible solutions to optimize your code, and both basically do the same thing under the hood:
Threaded
import timeit
import threading
import time

y = list(range(50))

def post_data(server, data, sleep_time=1.5):
    time.sleep(sleep_time)
    # requests.post(server, data=data)

start = timeit.default_timer()
server = 'https://xxxx:xx#api.xxx.com/v1/Accounts/xxx/Sms/send'
threads = []
for i in y:
    # If you don't need to wait for your threads, don't hold them in memory after they are done and use
    # threading.Thread(target, args).start()
    # instead. Especially important if you want to send a large number of messages.
    threads.append(threading.Thread(target=post_data,
                                    args=(server, {'From': 'XXXX', 'To': str(i), 'Body': "ABC ABC"})))
    threads[-1].start()

for thread in threads:
    # optional if you want to wait for completion of the concurrent posts
    thread.join()

stop = timeit.default_timer()
print('Time: ', stop - start)
Asyncio
Referring to this answer.
import timeit
import time
import asyncio
from concurrent.futures import ThreadPoolExecutor

y = list(range(50))
_executor = ThreadPoolExecutor(len(y))
loop = asyncio.get_event_loop()

def post_data(server, data, sleep_time=1.5):
    time.sleep(sleep_time)
    # requests.post(server, data=data)

async def post_data_async(server, data):
    return await loop.run_in_executor(_executor, lambda: post_data(server, data))

async def run(y, server):
    return await asyncio.gather(*[post_data_async(server, {'From': 'XXXX', 'To': str(i), 'Body': "ABC ABC"})
                                  for i in y])

start = timeit.default_timer()
server = 'https://xxxx:xx#api.xxx.com/v1/Accounts/xxx/Sms/send'
loop.run_until_complete(run(y, server))
stop = timeit.default_timer()
print('Time: ', stop - start)
When using an API that does not support asyncio but would profit from concurrency, like your use-case, I'd tend towards using threading as it's easier to read IMHO. If your API/Library does support asyncio, go for it! It's great!
On my machine, with a list of 50 elements and 50 instances of time.sleep(1.5), the asyncio solution clocks in at about 1.515 seconds of runtime while the threaded solution needs about 1.509 seconds.
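A third variant that keeps the code short (my addition, not part of the original answer) is concurrent.futures.ThreadPoolExecutor; a minimal sketch assuming the same post_data helper and y list as above:
import timeit
from concurrent.futures import ThreadPoolExecutor

start = timeit.default_timer()
server = 'https://xxxx:xx#api.xxx.com/v1/Accounts/xxx/Sms/send'
# map() blocks until every call has finished; max_workers bounds the number of
# concurrent requests instead of spawning one thread per message.
with ThreadPoolExecutor(max_workers=32) as pool:
    list(pool.map(lambda i: post_data(server, {'From': 'XXXX', 'To': str(i), 'Body': "ABC ABC"}), y))
stop = timeit.default_timer()
print('Time: ', stop - start)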
