How to run several looping functions at the same time in Python?

I've been working on a project that I was assigned: a sort of parking lot where the cars that enter are generated automatically (done). I've put them into a waiting list (because I have to represent them with a GUI module later) so they can later be assigned a spot in the parking lot, and then they must leave the parking lot (also at random).
The problem arises now that I've created a function that continuously creates cars at random: I can't call any other function, because the first one is looping.
The question is: is there a way to call several looping functions at the same time?
Thanks

the question is, is there a way to call several looping functions at the same time?
This is a great question and there are several ways to do it.
Threading can let your functions run concurrently. The data flow between the threads should be managed using the Queue module:
import threading
from queue import Queue   # the module is named "Queue" in Python 2

# Inter-thread communication
wait_to_park = Queue()
wait_to_exit = Queue()

# Start the simulation (generate_cars, park_cars, unpark_cars are your functions)
tg = threading.Thread(target=generate_cars)
tp = threading.Thread(target=park_cars)
tu = threading.Thread(target=unpark_cars)
tg.start(); tp.start(); tu.start()

# Wait for the simulation to finish
tg.join()
wait_to_park.join()   # blocks until every queued car has been marked task_done()
tp.join()
wait_to_exit.join()
tu.join()
Alternatively, you can use an event-loop such as the sched module to coordinate the events. Generators may help with this -- they work like functions that can be suspended and restarted.
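For example, here is a minimal single-threaded sketch of the sched approach; the delays and the print placeholders are made up for illustration:
import random
import sched

s = sched.scheduler()

def generate_car():
    print('car arrives')
    # each event reschedules itself, giving the effect of a looping function
    s.enter(random.uniform(0.5, 2.0), 1, generate_car)

def depart_car():
    print('car leaves')
    s.enter(random.uniform(1.0, 3.0), 1, depart_car)

s.enter(0, 1, generate_car)
s.enter(0, 1, depart_car)
s.run()  # interleaves both "loops" in one thread (runs until interrupted)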

Maybe import random and then set up ranges in which you want certain events to happen?
import random

def mainLoop():
    while True:
        x = random.randrange(1, 100)
        if x < 10:
            doSomething()
        elif x < 60:
            doSomethingMoreFrequently()
        elif x == 60:
            doSomethingRarely()
        # etcetera
If you LITERALLY want to call several looping functions at the same time, be prepared to learn about threading. Threading is difficult, and I never use it unless 100% necessary.
But this should be simple enough to achieve without it.
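For example, applied to the parking-lot simulation (a hedged sketch; the three handler functions are hypothetical stand-ins for the asker's real logic):
import random

def generate_car():
    print('a new car joins the waiting list')

def assign_spot():
    print('a waiting car moves into a free spot')

def remove_car():
    print('a parked car leaves')

def simulate():
    while True:
        x = random.randrange(1, 100)
        if x < 30:
            generate_car()
        elif x < 60:
            assign_spot()
        elif x < 70:
            remove_car()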

Don't have both infinitely loop; have them each do work if needed and return (or possibly yield). Then have your main event loop call both. Something like this:
from time import sleep

def car_arrival():
    if need_to_generate_car:
        # do car generation stuff
        return

def car_departure():
    if time_for_car_to_leave:
        # do car leaving stuff
        return

def event_loop():
    while sim_running:
        car_arrival()
        car_departure()
        sleep(0.5)
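If you want to lean on yield instead, here is a hedged generator-based sketch of the same loop; it assumes the same hypothetical flags (need_to_generate_car, time_for_car_to_leave, sim_running) as the snippet above:
from time import sleep

def car_arrival_task():
    while True:
        if need_to_generate_car:
            pass  # do car generation stuff
        yield     # hand control back to the event loop

def car_departure_task():
    while True:
        if time_for_car_to_leave:
            pass  # do car leaving stuff
        yield

def event_loop():
    tasks = [car_arrival_task(), car_departure_task()]
    while sim_running:
        for task in tasks:
            next(task)  # run each task up to its next yield
        sleep(0.5)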

Related

Is there a way to have code running concurrently (more specifically PyAutoGui)?

I have the following code
import pyautogui
from pyautogui import press

def leftdoor():
    press('a')
    pyautogui.sleep(1)
    press('a')

def rightdoor():
    press('d')
    pyautogui.sleep(1)
    press('d')

leftdoor()
rightdoor()
and when I run the code, what happens is that the letter A is pressed, 1 second is waited, and then it's pressed again; then the same happens for the D key. However, is there a way for me to press them both down, expressing that in code by calling both functions without having to wait for the .sleep of the previous function?
There are two ways to run your code concurrently:
Combine the functions (might not be possible for large functions)
In the case of your code, it would look like this:
def door():
    press('a')
    press('d')
    pyautogui.sleep(1)
    press('a')
    press('d')

door()
If this isn't what you're looking for, use threading.
Threading
Here is a link to a tutorial on the module, and the code is below.
from threading import Thread # Module import
rdt = Thread(target=rightdoor) # Create two Thread objects
ldt = Thread(target=leftdoor)
rdt.start() # start and join the objects
ldt.start()
rdt.join()
ldt.join()
print("Finished execution") # done!
Note that using this does not absolutely guarantee that a and d will be pressed at the same time (I got a ~10 millisecond delay at most, and it might have come from the program I used to time it), but it should work for all practical purposes.

Share variables across scripts

I have 2 separate scripts working with the same variables.
To be more precise, one script edits the variables and the other one uses them (it would be nice if it could edit them too, but that's not absolutely necessary).
This is what I am currently doing:
When script 1 edits a variable, it dumps it into a JSON file.
Script 2 repeatedly opens the JSON file to get the variables.
This method is really not elegant, and the while loop is really slow.
How can I share variables across scripts?
My first script gets data from a MIDI controller and sends web requests.
My second script is for LED strips (those run thanks to the same MIDI controller). Both scripts run in a "while True" loop.
I can't simply put them in the same script, since every web request would slow the LEDs down. I am currently just sharing the variables via a JSON file.
If enough people ask for it I will post the whole code, but I have been told not to.
Considering the information you provided, meaning...
Both scripts run in a "while True" loop.
I can't simply put them in the same script, since every web request would slow the LEDs down.
To me, you have 2 choices:
Use a client/server model. You have 2 machines: one acts as the server, the second as the client. The server runs a script with an infinite loop that consistently updates the data, plus an API that simply reads and exposes the current state of your file/database to the client. The client, on the other machine, simply requests the current data and processes it (a minimal sketch follows this list).
Make a single multiprocessing script. Each script would run in a separate process and manage its own memory. Since you also want to share variables between your two programs, you could pass as an argument an object that would be shared between both of them. See this resource to help you.
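Here is a minimal sketch of the client/server idea (choice 1) using only the standard library; the port, the state dictionary, and the payload are assumptions, not your actual data:
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

state = {'value': 0}  # updated elsewhere by the server's main loop

class StateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(state).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

# server side (blocks; the loop that updates state would run in another thread)
HTTPServer(('0.0.0.0', 8000), StateHandler).serve_forever()

# client side, on the other machine:
#   import urllib.request, json
#   data = json.loads(urllib.request.urlopen('http://<server>:8000').read())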
Note that there are more solutions to this. For instance, you're using a JSON file that you keep opening and closing (that is probably what takes the most time in your program). You could instead use a real database that is opened once and queried many times, while still being updated.
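And a hedged sketch of that database idea with the standard library's sqlite3; the table and key names are made up:
import sqlite3

conn = sqlite3.connect('shared.db')  # both scripts open the same file
conn.execute('CREATE TABLE IF NOT EXISTS vars (key TEXT PRIMARY KEY, value TEXT)')

# writer script: update a variable
conn.execute("INSERT OR REPLACE INTO vars VALUES ('brightness', '128')")
conn.commit()

# reader script: fetch the current value
row = conn.execute("SELECT value FROM vars WHERE key = 'brightness'").fetchone()
print(row[0] if row else None)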
a Manager from multiprocessing lets you do this sort of thing pretty easily
first I simplify your "midi controller and sends web-request" code down to something that just sleeps for random amounts of time and updates a variable in a managed dictionary:
from time import sleep
from random import random

def slow_fn(d):
    i = 0
    while True:
        sleep(random() ** 2)
        i += 1
        d['value'] = i
next we simplify the "LED strip" control down to something that just prints to the screen:
from time import perf_counter

def fast_fn(d):
    last = perf_counter()
    while True:
        sleep(0.05)
        value = d.get('value')
        now = perf_counter()
        print(f'fast {value} {(now - last) * 1000:.2f}ms')
        last = now
you can then run these functions in separate processes:
import multiprocessing as mp

if __name__ == '__main__':   # required on platforms that spawn child processes
    with mp.Manager() as manager:
        d = manager.dict()
        procs = []
        for fn in [slow_fn, fast_fn]:
            p = mp.Process(target=fn, args=[d])
            procs.append(p)
            p.start()
        for p in procs:
            p.join()
the "fast" output happens regularly with no obvious visual pauses

How to instantiate and destroy classes in python effectively and efficiently

My classes have become complicated, and I think the way I am instantiating and not closing them may be a problem... sometimes the program just hangs: code that took 0.2 seconds to run will take 15 seconds. It happens randomly, and rebooting seems to fix it; restarting the program without rebooting does not. I'm using Ctrl+C to stop the program, running on CentOS 7.
Here's my class layout:
def do_stuff(self):
    pool_class = pools()
    sql_pool_insert = pool_class.generate_sql()
    print "pools done"
    sql_HD_insert = hard_drives(pool_class).generate_sql()
    print "hds done"

class pools:
    # HUGE __init__ section
    pass

class devs:
    # HUGE __init__ section
    pass

class hard_drives(object):
    def __init__(self, pool_class=pools()):  # the default argument I'm worried about
        self.drives_table = []
        self.assigned_drive_names = []
        self.unassigned_drives_table = []
        # for each pool, generate the drives list, actively combining
        for pool in pool_class.pool_name:
            self.drives_table = self.drives_table + pool_class.Vdevs(pool).get_drive_table()
        i = 0
Outside of that file, I have my main program that imports the above as one of many packages. The main file calls the functions that sit above the classes (because they use the classes), and sometimes the main program uses discrete functions inside the classes. The problem is that the pools and devs __init__ sections are HUGE! It takes 0.7 seconds to run all that code in some cases, so I want to instantiate correctly and close effectively. Here is a sample from the main program:
if new_pool == 1:
    result = mypackage.pools().create_pool(pool_to_create)
I really feel like I'm doing two or three things wrong. Perhaps I should just create an instance of the pools class at the top of my main program's while loop, use that instance everywhere in the loop, and close it at the end; the loop runs every 0.25 seconds. I want the pools class to be initiated at the beginning of the while loop and closed at the end. Is this a good approach? The question applies equally to the MySQL InnoDB cursors I'm using, which may be another factor in the issue: how should these be handled so that instances of cursors and classes don't get out of hand? Heck, maybe I should be instantiating the entire package instead of one class in the package, i.e.:
with package as MyPackage:
    package.pool()  # do whatever; all inits in the package have run once, and you
                    # don't have to worry about them running again so long as you
                    # use the package object for everything???
Maybe I'm just confused... it comes down to this: how do I control when a class's __init__ function runs? It really only needs to run at the beginning of the loop, so that the loop has fresh data to work with, and then the instance gets used everywhere else... but you see how the pools class is used by the hard_drives class... how would I handle that?
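As a hedged sketch of the per-iteration idea described above (fresh_pools is a hypothetical helper; pools and hard_drives are the classes from the question), a context manager makes the once-per-pass setup and cleanup explicit:
import contextlib
import time

@contextlib.contextmanager
def fresh_pools():
    pool_class = pools()          # the expensive __init__ runs once per iteration
    try:
        yield pool_class
    finally:
        pass                      # close any cursors/connections pools holds here

while True:
    with fresh_pools() as pool_class:
        hd = hard_drives(pool_class)  # reuse the same instance; no second init
        # ... rest of the loop body ...
    time.sleep(0.25)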

Python simplest form of multiprocessing

I've been trying to read up on threading and multiprocessing, but all the examples are too intricate and advanced for my level of Python/programming knowledge. I want to run a function which consists of a while loop, and while that loop runs I want to continue with the program, eventually changing the condition for the while loop and ending that process. This is the code:
import time

class Example():
    def __init__(self):
        self.condition = False

    def func1(self):
        self.condition = True
        while self.condition:
            print "Still looping"
            time.sleep(1)
        print "Finished loop"

    def end_loop(self):
        self.condition = False
Then I make the following function calls:
ex = Example()
ex.func1()
time.sleep(5)
ex.end_loop()
What I want is for func1 to run for 5 s before end_loop() is called, changing the condition and thus ending the loop and the function. I.e. I want one process to start and "go" into func1, and at the same time I want time.sleep(5) to be called, so the processes "split" on arriving at func1: one process enters the function while the other continues down the program, starting with the time.sleep(5) execution.
This must be the most basic example of multiprocessing, yet I've had trouble finding a simple way to do it!
Thank you
EDIT 1: Regarding do_something: in my real problem, do_something is replaced by code that communicates with another program via a socket, receives packages with coordinates every 0.02 s, and stores them in member variables of the class. I want this constant updating of the coordinates to start, and then be able to read the coordinates via other functions at the same time.
However, that is not so relevant. What if do_something were replaced by:
time.sleep(1)
print "Still looping"
How do I solve my problem then?
EDIT2: I have tried multiprocessing like this:
from multiprocessing import Process
ex = Example()
p1 = Process(target=ex.func1())
p2 = Process(target=ex.end_loop())
p1.start()
time.sleep(5)
p2.start()
When I ran this, I never got to p2.start(), so that did not help. And even if it had, this is not really what I'm looking for either: what I want is just to start the process p1 and then continue with time.sleep and ex.end_loop().
The first problem with your code is these calls:
p1 = Process(target=ex.func1())
p2 = Process(target=ex.end_loop())
With ex.func1() you're calling the function and passing the return value as the target parameter. Since the function doesn't return anything, you're effectively calling
p1 = Process(target=None)
p2 = Process(target=None)
which makes, of course, no sense.
After fixing that, the next problem will be shared data: when using the multiprocessing package, you implement concurrency using multiple processes which, by default, cannot simply share data. Have a look at Sharing state between processes in the package's documentation to read about this. Especially take the first sentence into account: "when doing concurrent programming it is usually best to avoid using shared state as far as possible"!
So you might also want to have a look at Exchanging objects between processes to read about how to send/receive data between two different processes. Instead of simply setting a flag to stop the loop, it might be better to send a message (or set an event) signalling that the loop should terminate, as in the sketch below.
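A hedged sketch of that signalling idea using multiprocessing.Event; the function and variable names are made up, and this is only one of several ways to do it:
import time
from multiprocessing import Process, Event

def loop(stop):
    while not stop.is_set():
        print('Still looping')
        time.sleep(1)
    print('Finished loop')

if __name__ == '__main__':
    stop = Event()
    p = Process(target=loop, args=(stop,))
    p.start()
    time.sleep(5)
    stop.set()   # signal the child process that the loop should end
    p.join()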
Also note that processes are a heavyweight form of concurrency: they spawn multiple OS processes, which comes with relatively big overhead. multiprocessing's main purpose is to avoid the problems imposed by Python's Global Interpreter Lock (google it to read more...). If your problem isn't much more complex than what you've told us, you might want to use the threading package instead: threads have less overhead than processes and can also access the same data (although you really should read about synchronization when doing this...).
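For completeness, a minimal sketch of that threading alternative applied to the Example class from the question (threads share memory, so the plain self.condition flag works without any extra machinery):
import time
import threading

ex = Example()                          # the class from the question
t = threading.Thread(target=ex.func1)   # note: no parentheses after func1
t.start()
time.sleep(5)
ex.end_loop()                           # the thread sees the changed flag
t.join()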
I'm afraid, multiprocessing is an inherently complex subject. So I think you will need to advance your programming/python skills to successfully use it. But I'm sure you'll manage this, the python documentation about this is comprehensive and there are a lot of other resources about this.
To tackle your EDIT2 problem, you could try using the shared memory map Value.
import time
from multiprocessing import Process, Value

class Example():
    def func1(self, cond):
        while cond.value == 1:
            print('do something')
            time.sleep(1)
        return

if __name__ == '__main__':
    ex = Example()
    cond = Value('i', 1)
    proc = Process(target=ex.func1, args=(cond,))
    proc.start()
    time.sleep(5)
    cond.value = 0
    proc.join()
(Note the target=ex.func1 without the parentheses and the comma after cond in args=(cond,).)
But look at the answer provided by MartinStettner to find a good solution.

What is the "correct" way to make a stoppable thread in Python, given stoppable pseudo-atomic units of work?

I'm writing a threaded program in Python. This program is interrupted very frequently by user (Ctrl+C) interaction, and by other programs sending various signals, all of which should stop thread operation in various ways. The thread does a bunch of units of work (I call them "atoms") in sequence.
Each atom can be stopped quickly and safely, so making the thread itself stop is fairly trivial, but my question is: what is the "right", or canonical way to implement a stoppable thread, given stoppable, pseudo-atomic pieces of work to be done?
Should I poll a stop_at_next_check flag before each atom (example below)? Should I decorate each atom with something that does the flag-checking (basically the same as the example, but hidden in a decorator)? Or should I use some other technique I haven't thought of?
Example (simple stopped-flag checking):
from threading import Thread

class stoppable(Thread):
    stop_at_next_check = False
    current_atom = None

    def __init__(self):
        Thread.__init__(self)

    def do_atom(self, atom):
        if self.stop_at_next_check:
            return False
        self.current_atom = atom
        self.current_atom.do_work()
        return True

    def run(self):
        # get "work to be done" objects atom1, atom2, etc. from somewhere
        if not self.do_atom(atom1):
            return
        if not self.do_atom(atom2):
            return
        # ...etc.

    def die(self):
        self.stop_at_next_check = True
        self.current_atom.stop()
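For the decorator variant mentioned in the question, a hedged sketch (checks_stop_flag is a hypothetical name; it just hides the same flag check shown above):
import functools

def checks_stop_flag(method):
    # run the wrapped atom only if the thread hasn't been told to stop
    @functools.wraps(method)
    def wrapper(self, atom):
        if self.stop_at_next_check:
            return False
        self.current_atom = atom
        method(self, atom)
        return True
    return wrapper

# inside the stoppable class, do_atom then shrinks to:
#     @checks_stop_flag
#     def do_atom(self, atom):
#         atom.do_work()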
Flag checking seems right, but you missed an opportunity to simplify it by using a list of atoms. If you put the atoms in a list, you can use a single for loop without needing a do_atom() method, and the problem of where to do the check solves itself.
def run(self):
    atoms = ...  # get atoms from somewhere
    for atom in atoms:
        if self.stop_at_next_check:
            break
        self.current_atom = atom
        atom.do_work()
Create a "thread x should continue processing" flag, and when you're done with the thread, set the flag to false.
Killing a thread directly is considered bad form, because you might get a fractional chunk of work completed.
A tad late, but I have created a small library, ants, that solves this problem. In your example an atomic unit is represented by a worker.
Example
from ants import worker

@worker
def hello():
    print("hello world")

t = hello.start()
...
t.stop()
In the above example hello() will run in a separate thread, being called in a while True: loop, thus spitting out "hello world" as fast as possible.
You can also have triggering events; e.g. replace hello.start() above with hello.start(lambda: time.sleep(5)) and it will trigger every 5th second.
The library is very new and work is ongoing on GitHub: https://github.com/fa1k3n/ants.git
Future work includes adding a colony for having several workers working on different parts of the same data, and a queen for worker communication and control, like sync.
