Run the same function simultaneously with different arguments - Python

I have been attempting to make a small Python program to monitor and return ping results from different servers. I have reached a point where pinging each device in sequence has become inefficient and slow. I want to continuously ping each of my targets at the same time from my Python script.
What would be the best approach to this? Thanks for your time.
def get_latency(ip_address, port):
    from tcp_latency import measure_latency
    from datetime import datetime
    now = datetime.now()
    current_time = now.strftime("%Y-%m-%d %H:%M:%S")
    latency = str(measure_latency(host=ip_address, port=port, runs=1, timeout=1))[1:-1]
    #add to table and upload to database function()

ip_address_list = [('google.com', '80'), ('bing.com', '80')]

#Problem
#run function simultaneously but with different arguments
get_latency(ip_address_list[0][0], ip_address_list[0][1])
get_latency(ip_address_list[1][0], ip_address_list[1][1])

A for loop does not run the calls simultaneously.
You can use threading to run them simultaneously.
See this:
import threading

def get_latency(ip_address, port):
    from tcp_latency import measure_latency
    from datetime import datetime
    now = datetime.now()
    current_time = now.strftime("%Y-%m-%d %H:%M:%S")
    latency = str(measure_latency(host=ip_address, port=port, runs=1, timeout=1))[1:-1]
    #add to table and upload to database function()

ip_address_list = [('google.com', '80'), ('bing.com', '80')]

# adding to threads
t1 = threading.Thread(target=get_latency, args=(ip_address_list[0][0], ip_address_list[0][1]))
t2 = threading.Thread(target=get_latency, args=(ip_address_list[1][0], ip_address_list[1][1]))

# starting threads
t1.start()
t2.start()

# wait until thread 1 is completely executed
t1.join()
# wait until thread 2 is completely executed
t2.join()

# both threads completely executed
print("Done!")

You can use a for loop for this purpose.
Something like this:
for i in range(len(ip_address_list)):
    print(get_latency(ip_address_list[i][0], ip_address_list[i][1]))
Also, you should import the modules before defining the function, and return the results:
from tcp_latency import measure_latency
from datetime import datetime

def get_latency(ip_address, port):
    .
    .
    .
    return results
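A minimal sketch of what that could look like, reusing the body from the question (the returned tuple is just one possible shape for the results):

from tcp_latency import measure_latency
from datetime import datetime

def get_latency(ip_address, port):
    # timestamp the measurement, then run a single one-second TCP latency probe
    current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    latency = str(measure_latency(host=ip_address, port=port, runs=1, timeout=1))[1:-1]
    return current_time, latency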

Related

Writing to a text file within a thread in Python

I am trying to run this script in the background of a Flask web app. This is example code of what I am trying to do, without the PIR sensor connected. I am essentially running an infinite loop and would like to write to a file periodically from within a thread. I do not understand what is wrong and why the file is empty.
import threading
from datetime import datetime as dt
from time import sleep

global_lock = threading.Lock()

def write_to_file():
    while global_lock.locked():
        continue

def motionlog():
    global_lock.acquire()
    f = open("motionlog.txt", mode = "w")
    f.write("Motion Detection Log" + "\n")
    while True:
        #output_lock.acquire()
        f.write("Motion Detected at "+ dt.now().strftime("%m_%d_%Y-%I:%M:%S_%p")+"\n")
        print('Motion Detected')
        sleep(5)
    global_lock.release()

t1 = threading.Thread(target=motionlog)
t1.start()
t1.join()
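One likely cause of the empty file is that writes are buffered and the file is never flushed or closed while the infinite loop runs. A minimal sketch of the logging thread with an explicit flush (an assumption about the fix, not something confirmed by the poster):

import threading
from datetime import datetime as dt
from time import sleep

def motionlog():
    with open("motionlog.txt", mode="w") as f:
        f.write("Motion Detection Log\n")
        while True:
            f.write("Motion Detected at " + dt.now().strftime("%m_%d_%Y-%I:%M:%S_%p") + "\n")
            f.flush()   # push buffered data to disk on every iteration
            sleep(5)

# daemon thread so it does not keep the Flask app from exiting
t1 = threading.Thread(target=motionlog, daemon=True)
t1.start()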

How can I ensure the timeliness of a callback function in Python?

I have a callback function, but the delegate that issues the callback occasionally takes several seconds to provide updates (because it is waiting for data over a remote connection). For my use case this is troublesome because I need to run a function at regular times, or quit the program. What is the easiest way to have a timer in Python that runs a function, or quits the application, if I haven't had an update from the delegate within a certain space of time, like five seconds?
def parseMessage(client, userdata, message): # CALLBACK FUNCTION THAT LISTENS FOR NEW MESSAGES
    signal = int(str(message.payload.decode("utf-8")))
    writeToSerial(signal)

def exceptionState(): # THIS IS THE FUNCTION I WOULD LIKE TO RUN IF THERE'S NO CALLBACK
    print("ERROR, LINK IS DOWN, DISABLING SERVER")
    exit()

def mqttSignal():
    client.on_message = parseMessage # THIS INVOKES THE CALLBACK FUNCTION
    client.loop_forever()
This sounds like a good scenario for setting up a background watchdog thread that exits if you haven't received an event within a timeout. A simple implementation might look like this:
from datetime import datetime, timedelta
from threading import Thread
from time import sleep
import os

class Watcher:
    timeout = timedelta(minutes=5)

    def __init__(self):
        self.last_signal = datetime.now()
        Thread(target=self.exception_state).start()

    def parse_message(self):
        self.last_signal = datetime.now()
        # Other handling code here

    def exception_state(self):
        while True:
            if datetime.now() - self.last_signal > self.timeout:
                print("No signal received.")
                # exit() raised from a non-main thread only stops that thread,
                # so terminate the whole process explicitly
                os._exit(1)
            sleep(5)
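A hypothetical way to wire this into the MQTT setup from the question (client and writeToSerial are the question's own names, not part of the answer above):

watcher = Watcher()

def parseMessage(client, userdata, message):
    watcher.parse_message()   # reset the watchdog timestamp on every message
    signal = int(str(message.payload.decode("utf-8")))
    writeToSerial(signal)

client.on_message = parseMessage
client.loop_forever()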

Call a Python function at specific timestamps

I am trying to send a query to an API every full minute because the API updates its data every minute and I want the updated data immediately. It is important that the timing is very precise, and I want to run everything continuously in the end.
This is vaguely what I am trying to do:
import time, sched

time = 1549667056000 # starting timestamp

def get_data(): # function to get some data from the API via requests
    # gets the data

while True:
    s.scheduler(time)
    s.run(get_data()) # gets the data at the specified time(stamp)
    time = time + 60000 # adds 1 minute to the timestamp
Should I do it this way, or is there an even smarter way of getting data from a REST API exactly every full minute?
You could use asyncio.sleep
For Python < 3.7
import asyncio

MINUTE = 60  # seconds

def get_data():
    print("Getting data")

async def main():
    while True:
        get_data()
        await asyncio.sleep(MINUTE)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
For Python 3.7+
import asyncio

def get_data():
    print("Getting data")

async def main():
    while True:
        get_data()
        await asyncio.sleep(60)

# This is the only thing that changes
asyncio.run(main())
Edit
As per your comment, if you're really worried about making sure this gets called every 60 seconds:
You could take the time before get_data is called and subtract the elapsed time from 60; you just need to make sure that if get_data takes over 60 seconds, you wait 0 seconds (or not at all).
Something like this for your main() should work:
# make sure to:
import time

async def main():
    while True:
        t = time.time()
        get_data()
        time_diff = int(time.time() - t)
        await asyncio.sleep(max(60 - time_diff, 0))
Thanks to everyone for helping out.
This answer worked pretty well for me in the end:
import time

starttime = time.time()
while True:
    print(time.time())
    time.sleep(60.0 - ((time.time() - starttime) % 60.0))
I let it run overnight and there was no drift over time. The time between executions is exactly 60 seconds, no matter how long the code in the loop takes to execute.

Python APScheduler: an easier way to run jobs?

I have jobs scheduled through APScheduler. I have 3 jobs so far, but will soon have many more. I'm looking for a way to scale my code.
Currently, each job is its own .py file, and in each file I have turned the script into a function named run(). Here is my code.
from apscheduler.scheduler import Scheduler
import logging
import job1
import job2
import job3

logging.basicConfig()
sched = Scheduler()

@sched.cron_schedule(day_of_week='mon-sun', hour=7)
def runjobs():
    job1.run()
    job2.run()
    job3.run()

sched.start()
This works; right now the code is just stupid, but it gets the job done. But when I have 50 jobs, the code will be stupidly long. How do I scale it?
Note: the actual names of the jobs are arbitrary and don't follow a pattern. The name of the file is scheduler.py and I run it using execfile('scheduler.py') in the Python shell.
import urllib
import threading
import datetime

pages = ['http://google.com', 'http://yahoo.com', 'http://msn.com']

#------------------------------------------------------------------------------
# Getting the pages WITHOUT threads
#------------------------------------------------------------------------------
def job(url):
    response = urllib.urlopen(url)
    html = response.read()

def runjobs():
    for page in pages:
        job(page)

start = datetime.datetime.now()
runjobs()
end = datetime.datetime.now()
print "jobs run in {} microseconds WITHOUT threads" \
    .format((end - start).microseconds)

#------------------------------------------------------------------------------
# Getting the pages WITH threads
#------------------------------------------------------------------------------
def job(url):
    response = urllib.urlopen(url)
    html = response.read()

def runjobs():
    threads = []
    for page in pages:
        t = threading.Thread(target=job, args=(page,))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

start = datetime.datetime.now()
runjobs()
end = datetime.datetime.now()
print "jobs run in {} microseconds WITH threads" \
    .format((end - start).microseconds)
Look at
http://furius.ca/pubcode/pub/conf/bin/python-recursive-import-test
This will help you import all Python / .py files.
While importing, you can build a list that keeps a reference to each job's run function, for example:
[job1.run, job2.run]
Then iterate through the list and call each function :)
Thanks, Arjun
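A hypothetical sketch of that idea, discovering every module in a jobs package and scheduling one function that calls each module's run() (the jobs package name is made up for illustration; the scheduler API mirrors the question's APScheduler 2.x code):

import importlib
import pkgutil
import logging

from apscheduler.scheduler import Scheduler

import jobs  # hypothetical package holding job1.py, job2.py, ...

logging.basicConfig()
sched = Scheduler()

@sched.cron_schedule(day_of_week='mon-sun', hour=7)
def runjobs():
    # import every module found in the jobs package and call its run()
    for _, name, _ in pkgutil.iter_modules(jobs.__path__):
        module = importlib.import_module('jobs.' + name)
        module.run()

sched.start()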

Execute blocking calls in parallel in Python

I need to make a blocking XML-RPC call from my Python script to several physical servers simultaneously and perform actions based on the response from each server independently.
To explain in detail, let us assume the following pseudo code:
while True:
    response = call_to_server1() # blocking and takes very long time
    if response == this:
        do that
I want to do this for all the servers simultaneously and independently, but from the same script.
Use the threading module.
Boilerplate threading code (I can tailor this if you give me a little more detail on what you are trying to accomplish)
import threading

def run_me(func):
    while not stop_event.isSet():
        response = func() # blocking and takes very long time
        if response == this:
            do that

def call_to_server1():
    # code to call server 1...
    return magic_server1_call()

def call_to_server2():
    # code to call server 2...
    return magic_server2_call()

# used to stop your loop.
stop_event = threading.Event()

t = threading.Thread(target=run_me, args=(call_to_server1,))
t.start()

t2 = threading.Thread(target=run_me, args=(call_to_server2,))
t2.start()

# wait for threads to return.
t.join()
t2.join()

# we are done....
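Since nothing in the snippet above ever sets the event, the join() calls will block indefinitely; a hedged sketch of shutting the loops down after a fixed time (the 60 seconds is an arbitrary placeholder):

import time

time.sleep(60)      # let the polling loops run for a while
stop_event.set()    # run_me's while-condition becomes false and the loops exit
t.join()
t2.join()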
You can use the multiprocessing module:
import multiprocessing

def call_to_server(ip, port):
    ....
    ....

process = []
for i in xrange(server_count):
    process.append(multiprocessing.Process(target=call_to_server, args=(ip, port)))
    process[i].start()

# waiting for processes to stop
for p in process:
    p.join()
You can use multiprocessing plus queues. With one single sub-process, this is an example:
import multiprocessing
import time

def processWorker(input, result):
    def remoteRequest(params):
        ## this is my remote request
        return True
    while True:
        work = input.get()
        if 'STOP' in work:
            break
        result.put(remoteRequest(work))

input = multiprocessing.Queue()
result = multiprocessing.Queue()

p = multiprocessing.Process(target=processWorker, args=(input, result))
p.start()

requestlist = ['1', '2']
for req in requestlist:
    input.put(req)

for i in xrange(len(requestlist)):
    res = result.get(block=True)
    print 'retrieved ', res

input.put('STOP')
time.sleep(1)
print 'done'
To have more than one sub-process, simply use a list object to store all the sub-processes you start.
The multiprocessing queue is a process-safe object.
Then you can keep track of which request is being executed by each sub-process by simply storing the request associated with a work id (the work id can be a counter incremented when the queue gets filled with new work). Using multiprocessing.Queue is robust since you do not need to rely on parsing stdout/stderr, and you also avoid the related limitations.
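A minimal sketch of that multi-worker variant, reusing processWorker and the two queues defined above (the worker count of 4 is arbitrary):

workers = []
for _ in range(4):  # arbitrary number of workers
    p = multiprocessing.Process(target=processWorker, args=(input, result))
    p.start()
    workers.append(p)

# one 'STOP' sentinel per worker so every loop terminates
for _ in workers:
    input.put('STOP')
for p in workers:
    p.join()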
Then you can also set a timeout on how long you want a get call to wait at most, e.g.:
import Queue

try:
    res = result.get(block=True, timeout=10)
except Queue.Empty:
    print 'timed out waiting for a result'
Use Twisted.
It has a lot of useful stuff for working with networks. It is also very good at working asynchronously.
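A hedged sketch of how that might look with Twisted's XML-RPC client (the server URLs and the remote method name are made-up placeholders):

from twisted.internet import reactor
from twisted.web.xmlrpc import Proxy

def handle(response, name):
    # act on each server's response independently
    print('%s answered: %r' % (name, response))

for name, url in [('server1', 'http://server1:8080/RPC2'),
                  ('server2', 'http://server2:8080/RPC2')]:
    proxy = Proxy(url)
    d = proxy.callRemote('some_method')   # hypothetical remote method name
    d.addCallback(handle, name)

reactor.run()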
