QProcess execution time - python

I would like to measure execution time of a QProcess object.
Is there an internal attribute, method or object in PySide for execution time measurements?
The current approach is to measure it from the outside using time.time().
Example code:
from PySide import QtCore
import time
p = QtCore.QProcess()
start_time = time.time()
p.start('ping -n 5 127.0.0.1 >nul')
p.waitForFinished(-1)
end_time = time.time() - start_time
print(end_time)

One way you could do this is as follows. This uses the system's time command to get the time of execution.
from PySide import QtCore

p = QtCore.QProcess()
# time -p is a Unix utility, so use ping's Unix flag (-c) here; QProcess.start
# does not go through a shell, so a '>nul' redirection would just be passed to
# the command as a literal argument.
p.start('time -p ping -c 5 127.0.0.1')
p.waitForFinished(-1)
# time -p writes its report to standard error, not standard output.
output = p.readAllStandardError()
print(output)
# TODO: you will have to regex the output to get the values you want.
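A minimal sketch of that regex step, assuming the POSIX time -p report format (real, user and sys, one value per line) and that you convert the QByteArray to a plain string first (e.g. str(output)):

import re

def parse_time_output(text):
    # time -p reports something like:
    #   real 4.01
    #   user 0.00
    #   sys  0.00
    times = {}
    for key in ('real', 'user', 'sys'):
        match = re.search(r'^{0}\s+([\d.]+)'.format(key), text, re.MULTILINE)
        if match:
            times[key] = float(match.group(1))
    return times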
Here is another approach:
from PySide import QtCore

timer = QtCore.QTime()

def handle_proc_stop(*vargs):
    procTime = timer.elapsed()
    print("Process took {} milliseconds".format(procTime))

p = QtCore.QProcess()
p.started.connect(timer.start)
p.finished.connect(handle_proc_stop)
# no shell redirection here: QProcess passes arguments straight to the program
p.start('ping -n 5 127.0.0.1')
p.waitForFinished(-1)
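A close variant of the same idea, assuming your Qt build exposes QtCore.QElapsedTimer (added in Qt 4.7), which is designed for interval measurement and avoids QTime's midnight wrap-around:

from PySide import QtCore

timer = QtCore.QElapsedTimer()

def handle_finished(*vargs):
    print("Process took {} milliseconds".format(timer.elapsed()))

p = QtCore.QProcess()
p.started.connect(timer.start)
p.finished.connect(handle_finished)
p.start('ping -n 5 127.0.0.1')
p.waitForFinished(-1)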

Related

Using both multiprocessing and multithreading in a Python script to speed up execution

I have the following range of subnets: 10.106.44.0/24 - 10.106.71.0/24. I am writing a Python script to ping each IP in all the subnets. To speed up this script I am trying to use both multiprocessing and multithreading. I am creating a new process for each subnet and creating a new thread to ping each host in that subnet. I would like to ask two questions:
Is this the best approach for this problem?
If yes, how would I go about implementing this?
I would first try using threading. You can try creating a thread pool whose size is the total number of pings you have to do, but ultimately I believe this will not do much better than a thread pool whose size equals the number of CPU cores you have (explanation below). Here is a comparison of both approaches, threading and multiprocessing:
ThreadPoolExecutor (255 threads)
from concurrent.futures import ThreadPoolExecutor
import os
import platform
import subprocess
import time

def ping_ip(ip_address):
    param = '-n' if platform.system().lower() == 'windows' else '-c'
    try:
        output = subprocess.check_output(f"ping {param} 1 {ip_address}", shell=True, universal_newlines=True)
        if 'unreachable' in output:
            return False
        else:
            return True
    except Exception:
        return False

def main():
    t1 = time.time()
    ip_addresses = ['192.168.1.154'] * 255
    #with ThreadPoolExecutor(os.cpu_count()) as executor: # uses number of CPU cores
    with ThreadPoolExecutor(len(ip_addresses)) as executor:
        results = list(executor.map(ping_ip, ip_addresses))
    #print(results)
    print(time.time() - t1)

if __name__ == '__main__':
    main()
Prints:
2.049474000930786
You can try experimenting with fewer threads (max_workers argument to the ThreadPoolExecutor constructor). See: concurrent.futures
I found that running 8 threads, which is the number of cores I had, did just about as well (timing: 2.2745485305786133). I believe the reason is that even though pinging is an I/O-bound task, the subprocess call has to create a new process internally, which uses a fair amount of CPU, so the concurrency ends up somewhat processor-limited.
ProcessPoolExecutor (8 cores)
from concurrent.futures import ProcessPoolExecutor
import os
import platform
import subprocess
import time

def ping_ip(ip_address):
    param = '-n' if platform.system().lower() == 'windows' else '-c'
    try:
        output = subprocess.check_output(f"ping {param} 1 {ip_address}", shell=True, universal_newlines=True)
        if 'unreachable' in output:
            return False
        else:
            return True
    except Exception:
        return False

def main():
    t1 = time.time()
    ip_addresses = ['192.168.1.154'] * 255
    with ProcessPoolExecutor() as executor:
        results = list(executor.map(ping_ip, ip_addresses))
    #print(results)
    print(time.time() - t1)

if __name__ == '__main__':
    main()
Prints:
2.509838819503784
Note that on my Linux system you have to be a superuser to issue a ping command.

Run same function simultaneously with different arguments

I have been attempting to make a small Python program to monitor and return ping results from different servers. I have reached a point where pinging each device sequentially has become inefficient and slow. I want to continuously ping each of my targets at the same time in my Python program.
What would the best approach to this be? Thanks for your time
def get_latency(ip_address, port):
    from tcp_latency import measure_latency
    from datetime import datetime
    now = datetime.now()
    current_time = now.strftime("%Y-%m-%d %H:%M:%S")
    latency = str(measure_latency(host=ip_address, port=port, runs=1, timeout=1))[1:-1]
    #add to table and upload to database function()

ip_address_list = [('google.com', '80'), ('bing.com', '80')]

#Problem
#run function simultaneously but with different arguments
get_latency(ip_address_list[0][0], ip_address_list[0][1])
get_latency(ip_address_list[1][0], ip_address_list[1][1])
A for loop does not run the calls simultaneously.
You can use threading to run them concurrently.
See this:
import threading

def get_latency(ip_address, port):
    from tcp_latency import measure_latency
    from datetime import datetime
    now = datetime.now()
    current_time = now.strftime("%Y-%m-%d %H:%M:%S")
    latency = str(measure_latency(host=ip_address, port=port, runs=1, timeout=1))[1:-1]
    #add to table and upload to database function()

ip_address_list = [('google.com', '80'), ('bing.com', '80')]

#adding to threads
t1 = threading.Thread(target=get_latency, args=(ip_address_list[0][0], ip_address_list[0][1]))
t2 = threading.Thread(target=get_latency, args=(ip_address_list[1][0], ip_address_list[1][1]))

# starting threads
t1.start()
t2.start()

# wait until thread 1 is completely executed
t1.join()
# wait until thread 2 is completely executed
t2.join()

# both threads completely executed
print("Done!")
You can use a for loop for this purpose.
Something like this:
for i in range(len(ip_address_list)):
    print(get_latency(ip_address_list[i][0], ip_address_list[i][1]))
Also, you should import the modules before defining the function, and have the function return the results:
from tcp_latency import measure_latency
from datetime import datetime

def get_latency(ip_address, port):
    .
    .
    .
    return results
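Putting the two answers together, a minimal sketch (assuming the tcp_latency package is installed) that runs the calls concurrently and collects the returned results could look like this:

from concurrent.futures import ThreadPoolExecutor
from datetime import datetime
from tcp_latency import measure_latency

def get_latency(ip_address, port):
    current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    runs = measure_latency(host=ip_address, port=int(port), runs=1, timeout=1)
    return current_time, ip_address, runs[0] if runs else None

ip_address_list = [('google.com', '80'), ('bing.com', '80')]

# one worker per target; each (host, port) tuple is unpacked into the call
with ThreadPoolExecutor(len(ip_address_list)) as executor:
    results = list(executor.map(lambda pair: get_latency(*pair), ip_address_list))

print(results)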

How do I make a subprocess run for a set amount of time, then return to the loop and wait for a trigger?

This is what I have so far...
from gpiozero import MotionSensor
import subprocess
import threading
import time

pir = MotionSensor(4)

while True:
    pir.wait_for_motion()
    print("Start Playing Music")
    subprocess.call(['mplayer', '-vo', 'null', '-ao', 'alsa', '-playlist', 'myplaylist', '-shuffle'])
The music playing part works great, but as for the timing, I've tried threading and time, but all they seem to do is pause the code for a given amount of time. I want to run the subprocess for a given amount of time, then return to waiting for motion. I'm still learning. Thanks for your help.
Python 2.7 - 3.x
Create your subprocess command. I have chosen Popen.
Popen doesn't block, allowing you to interact with the process while it's running, or continue with other things in your Python program. The call to Popen returns a Popen object.
You can read about the difference between subprocess.Popen and subprocess.call here.
You can use the shlex module to split your command string, which is very convenient.
After that, you can run your command in a thread. From that point you can manage the task running in the thread.
Example code:
import logging
import shlex
import subprocess
import sys
import threading

logging.basicConfig(filename='log.log',
                    filemode='a',
                    format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s',
                    datefmt='%H:%M:%S',
                    level=logging.INFO)

log = logging.getLogger(__name__)

def exec_cmd(command):
    try:
        cmd = subprocess.Popen(shlex.split(command),  # nosec
                               shell=False,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE,
                               universal_newlines=True)
        _thread_command(cmd)
        out, err = cmd.communicate()
        log.error(err) if err else log.info(out)
    except subprocess.CalledProcessError as su_err:
        log.error('Calledprocerr: %s', su_err)
    except OSError as os_error:
        log.error('Could not execute command: %s', os_error)

def _thread_command(task, timeout=5):
    """
    Thread. If task is longer than <timeout> - kill.
    :param task: task to execute.
    """
    task_thread = threading.Thread(target=task.wait)
    task_thread.start()
    task_thread.join(timeout)
    if task_thread.is_alive():  # do whatever you want with your task, for example, kill:
        task.kill()
        logging.error('Timeout! Executed time is more than: %s', timeout)
        sys.exit(1)

if __name__ == '__main__':
    exec_cmd('sleep 10')  # put your string command here
Tested on CentOS:
[kchojnowski@zabbix4-worker1 ~]$ cat log.log
11:31:48,348 root ERROR Timeout! Executed time is more than: 5
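Applied back to the original motion loop, a minimal sketch of the same idea (assuming gpiozero and mplayer as in the question, and a hypothetical 300-second play window):

from gpiozero import MotionSensor
import subprocess
import threading

pir = MotionSensor(4)
PLAY_SECONDS = 300  # hypothetical play window - pick whatever you need

while True:
    pir.wait_for_motion()
    print("Start Playing Music")
    player = subprocess.Popen(['mplayer', '-vo', 'null', '-ao', 'alsa',
                               '-playlist', 'myplaylist', '-shuffle'])
    waiter = threading.Thread(target=player.wait)
    waiter.start()
    waiter.join(PLAY_SECONDS)   # give mplayer up to PLAY_SECONDS to finish on its own
    if waiter.is_alive():       # still playing after the window, so stop it
        player.terminate()
        waiter.join()
    # fall through and wait for the next motion event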

python apscheduler, an easier way to run jobs?

I have jobs scheduled through apscheduler. I have 3 jobs so far, but will soon have many more. I'm looking for a way to scale my code.
Currently, each job is its own .py file, and in each file I have turned the script into a function named run(). Here is my code.
from apscheduler.scheduler import Scheduler
import logging
import job1
import job2
import job3

logging.basicConfig()
sched = Scheduler()

@sched.cron_schedule(day_of_week='mon-sun', hour=7)
def runjobs():
    job1.run()
    job2.run()
    job3.run()

sched.start()
This works; right now the code is just dumb, but it gets the job done. When I have 50 jobs, though, the code will be extremely long. How do I scale it?
Note: the actual names of the jobs are arbitrary and don't follow a pattern. The file is named scheduler.py and I run it using execfile('scheduler.py') in the Python shell.
import urllib
import threading
import datetime

pages = ['http://google.com', 'http://yahoo.com', 'http://msn.com']

#------------------------------------------------------------------------------
# Getting the pages WITHOUT threads
#------------------------------------------------------------------------------
def job(url):
    response = urllib.urlopen(url)
    html = response.read()

def runjobs():
    for page in pages:
        job(page)

start = datetime.datetime.now()
runjobs()
end = datetime.datetime.now()
# use total_seconds() rather than .microseconds, which is only the
# sub-second component of the timedelta
print "jobs run in {} seconds WITHOUT threads" \
    .format((end - start).total_seconds())

#------------------------------------------------------------------------------
# Getting the pages WITH threads
#------------------------------------------------------------------------------
def job(url):
    response = urllib.urlopen(url)
    html = response.read()

def runjobs():
    threads = []
    for page in pages:
        t = threading.Thread(target=job, args=(page,))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

start = datetime.datetime.now()
runjobs()
end = datetime.datetime.now()
print "jobs run in {} seconds WITH threads" \
    .format((end - start).total_seconds())
Look at http://furius.ca/pubcode/pub/conf/bin/python-recursive-import-test
This will help you import all Python / .py files.
While importing, you can build a list that keeps a reference to each job's run function, for example:
[job1.run, job2.run]
Then iterate through the list and call each function (see the sketch below).
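A minimal sketch of that idea using only the standard library (assuming the job modules live in a hypothetical jobs/ package and each one exposes a run() function):

import importlib
import pkgutil

import jobs  # hypothetical package directory containing job1.py, job2.py, ...

def load_job_functions():
    # import every module in the jobs package and collect its run() callable
    run_functions = []
    for _, name, _ in pkgutil.iter_modules(jobs.__path__):
        module = importlib.import_module('jobs.' + name)
        if hasattr(module, 'run'):
            run_functions.append(module.run)
    return run_functions

def runjobs():
    for run in load_job_functions():
        run()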
Thanks Arjun

Running apscheduler in a Python script as a daemon?

I have a job.py which has the following code.
import datetime
import logging
import sys
import os
from apscheduler.scheduler import Scheduler
from src.extractors.pExtractor import somejob

def run_job():
    start = datetime.datetime.now()
    logging.debug('Proposal extraction job starting')
    somejob.main()
    end = datetime.datetime.now()
    duration = end - start
    logging.debug('job completed, took ' + str(duration.seconds) + ' seconds')

def main():
    logging.basicConfig(filename='/tmp/pExtractor.log', level=logging.DEBUG,
                        format='%(levelname)s[%(asctime)s]: %(message)s')
    sched = Scheduler()
    sched.start()
    sched.add_interval_job(run_job, minutes=2)

if __name__ == '__main__':
    main()
When I run this on the command prompt, it exits immediately:
INFO[2012-04-03 13:31:02,825]: Started thread pool with 0 core threads and 20 maximum threads
INFO[2012-04-03 13:31:02,827]: Scheduler started
INFO[2012-04-03 13:31:02,827]: Added job "run_job (trigger: cron[minute='2'], next run at: 2012-04-03 14:02:00)" to job store "default"
INFO[2012-04-03 13:31:02,828]: Shutting down thread pool
How can I make this run as a daemon?
Write your main() as below.
def main():
    # [... your_code_as_in_your_question ...]
    while True:
        pass  # keep the main thread alive so the scheduler's background thread keeps running
Additionally it shouldn't hurt to consider PEP 3143.
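If you do go the PEP 3143 route, a minimal sketch using the python-daemon package (the PEP's reference implementation; assumed here to be installed) might look like this:

import time

import daemon  # pip install python-daemon
from apscheduler.scheduler import Scheduler

def run_job():
    pass  # your job body here

def main():
    sched = Scheduler()
    sched.start()
    sched.add_interval_job(run_job, minutes=2)
    while True:
        time.sleep(60)  # keep the main thread alive for the scheduler's background thread

if __name__ == '__main__':
    # detach from the terminal and keep running in the background
    with daemon.DaemonContext():
        main()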
