Python Multiprocessing calling object method [duplicate]

This question already has an answer here: Running multiple threads at the same time
I want to use Python's multiprocessing module to start a new process which creates another object and calls that object's loops_forever method.
In my main class I have:
import OtherService
from multiprocessing import Process
my_other_service = OtherService(address=ADDRESS)
my_other_process = Process(target=my_other_service.loops_forever())
print("got here")
my_other_process.start()
print("done")
When I run this code, "got here" never gets printed. loops_forever gets called just above the "got here" print, and control never returns back to my main class.
What am I doing wrong here?
I have used multiprocessing before in this fashion:
my_other_process = Process(target=OtherService, kwargs={"address":ADDRESS})
my_other_process.start()
which correctly calls OtherService's init function and runs the init function as a separate process.
The only difference is that this time I want to call the init function AND then run the loops_forever method as a separate process.

When you do target=my_other_service.loops_forever(), you call loops_forever. To pass it as a function, rather than call it, you would drop the parentheses, like target=my_other_service.loops_forever.
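For illustration, a minimal sketch of the corrected call, using the names from the question (OtherService, ADDRESS, and loops_forever are the question's own placeholders, so this is not standalone-runnable):

from multiprocessing import Process

# Build the object in the parent, then hand the bound method (no parentheses)
# to Process; the child process only calls it once start() is invoked.
my_other_service = OtherService(address=ADDRESS)
my_other_process = Process(target=my_other_service.loops_forever)
print("got here")          # now prints immediately
my_other_process.start()   # loops_forever runs in the child process
print("done")

One caveat: with the spawn start method (the default on Windows and macOS), my_other_service itself has to be picklable, since it is sent to the child process along with the bound method.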

Throwing a coroutine to event loop without blocking [duplicate]

This question already has answers here: How can I call an async function without await?
I've been using Python for years now, and recently I've discovered asynchronous programming.
However, I'm having a rough time trying to implement my ideas...
Let's say I have a regular (synchronous) function g() that returns a value it computes. Inside this function, I want to call another function that is asynchronous and runs in a loop - we will call this function f(). f() is called inside g(), but the value g() returns is computed before f() is even called. However, I obviously want g() to return the value it computed and keep running f() "in the background".
import asyncio

def g(a, b):
    y = a + b
    asyncio.run(f())  # This is blocking!
    return y

async def f():
    for _ in range(3):
        await asyncio.sleep(1)
        print("I'm still alive!")

print(g(3, 5))
# async code continues here...
# Output:
# I'm still alive!
# I'm still alive!
# I'm still alive!
# 8
# Expected\Wanted:
# 8
# I'm still alive!
# I'm still alive!
# I'm still alive!
Is something like this even possible? Or am I missing something?
Thanks in advance!
There are a few misunderstandings here, I believe. First of all, you cannot have anything truly simultaneous without using a thread or a process. async is syntactic sugar for a callback-based, event-driven architecture.
In short, coroutines still run in the same thread, and as you know, one thread can only do one thing at a time. If you want a background task that keeps running and printing "I'm still alive!", then async alone is not what you are looking for.
Also, aren't you curious about where the event loop is? The loop is created and managed by asyncio.run, which is roughly equivalent to:
loop = asyncio.get_event_loop()
loop.run_until_complete(f())
So you see, you need to run/trigger the loop, and it is blocking.
Basically, asynchronous programming doesn't work the way you thought (I guess). There is no magic inside; it is just a normal blocking event loop. We add multiple tasks to it, and the tasks run one by one. Some tasks have callback functions, which add more tasks to the loop. That's it.
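To illustrate the thread-or-process point above, here is a minimal sketch (my own illustration, not part of the original answer) that gets the wanted output by running f()'s event loop in a daemon thread, so g() can return immediately:

import asyncio
import threading
import time

async def f():
    for _ in range(3):
        await asyncio.sleep(1)
        print("I'm still alive!")

def g(a, b):
    y = a + b
    # Run f() on its own event loop in a background thread; g() returns right away.
    threading.Thread(target=lambda: asyncio.run(f()), daemon=True).start()
    return y

print(g(3, 5))   # prints 8 immediately
time.sleep(4)    # keep the main thread alive so f()'s prints can appear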

Change Variables In Loop Of Other Python File

I have multiple Python files in different folders that work together to make my program function. They consist of a main.py file that creates new threads for each file and then starts them with the necessary parameters. This works great while the parameters are static, but if a variable changes in main.py, the change doesn't propagate to the other files. I also can't import main.py into otherfile.py to get the new variable, since it is in a parent directory.
I have created an example below. What should happen is that main.py creates a new thread and calls otherfile.py with set params. After 5 seconds, the variable in main.py changes and so should the var in otherfile.py (so it starts printing the number 5 instead of 10), but I haven't found a way to update it in otherfile.py.
The folder structure is as follows:
|-main.py
|-other
|  |-otherfile.py
Here is the code in both files:
main.py
from time import sleep
from threading import Thread

var = 10

def newthread():
    from other.otherfile import loop
    nt = Thread(target=loop(var))
    nt.daemon = True
    nt.start()

newthread()
sleep(5)
var = 5  # change the var, otherfile.py should start printing it now (doesn't)
otherfile.py
from time import sleep

def loop(var):
    while True:
        sleep(1)
        print(var)
In Python, there are two types of objects:
Immutable objects can’t be changed.
Mutable objects can be changed.
int is immutable, so you must use a mutable container such as a list or a dict.
from time import sleep
from threading import Thread

var = [10]

def newthread():
    from other.otherfile import loop
    nt = Thread(target=loop, args=(var,), daemon=True)
    nt.start()

newthread()
sleep(5)
var[0] = 5
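For this to print the number rather than the whole list, loop() in otherfile.py would also index the shared list on every iteration; a sketch of that adjustment (my addition, assuming the same file layout):

from time import sleep

def loop(var):
    while True:
        sleep(1)
        print(var[0])  # reads the current value from the shared list each time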
This happens because of how objects are passed into functions in Python. You'll hear that everything is passed by reference in Python, but since integers are immutable, when you rebind var to a new value you're actually creating a new object, and your thread still holds a reference to the integer with a value of 10.
To get around this, I wrote a simple wrapper class for an integer:
class IntegerHolder():
    def __init__(self, n):
        self.value = n

    def set_value(self, n):
        self.value = n

    def get_value(self):
        return self.value
Then, instead of var = 10, I did i = IntegerHolder(10), and after the sleep(5) call, I simply did i.set_value(5), which updates the wrapper object. The thread still has the same reference to the IntegerHolder object i, and when i.get_value() is called in the thread, it will return 5, as required.
You can also do this with a Python list, since lists are objects — it's just that this implementation makes it clearer what's going on. You'd just do var = [10] and do var[0] = 5, which would work since your thread should still keep a reference to the same list object as the main thread.
Two more errors:
Instead of Thread(target=loop(var)), you need to do Thread(target=loop, args=(i,)). This is because target is supposed to be a callable object, which is basically a function. Writing loop(var) calls the function right there, before the Thread constructor even runs, so your program loops forever inside loop() waiting for a return value to pass as target, and the thread never actually gets created. You can verify this with your favorite Python debugger, or print statements.
Setting nt.daemon = True allows main.py to exit before the thread finishes. This means that as soon as i.set_value(5) is called, the main program terminates and your integer wrapper object ceases to exist. This makes your thread very confused when it tries to access the wrapper object, and by very confused, I mean it throws an exception and dies because threads do that. You can verify this by catching the exit code of the thread. Deleting that line fixes things (nt.daemon = False by default), but it's probably safer to do a nt.join() call in the main thread, which waits for a thread to finish execution.
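Putting the two fixes together with the IntegerHolder class above, a corrected main.py might look roughly like this (my sketch; the import path follows the question's folder layout, and loop() is assumed to call i.get_value() inside its while loop):

from time import sleep
from threading import Thread
from other.otherfile import loop  # question's folder layout

i = IntegerHolder(10)                # wrapper object defined above
nt = Thread(target=loop, args=(i,))  # pass the callable and its arguments separately
nt.start()

sleep(5)
i.set_value(5)  # the running thread now sees 5 via i.get_value()
nt.join()       # wait for the thread (with an infinite loop() this blocks indefinitely)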
And one warning, because programming wouldn't be complete without warnings:
Whenever different threads try to access a value, if AT LEAST ONE thread is modifying the value, this can cause a race condition. This means that all accesses at that point should be wrapped in a lock/mutex to prevent this. The Python (3.7.4) docs have more info about this.
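As an illustration of that warning (my addition, not part of the original answer), the wrapper could guard its accesses with a threading.Lock:

import threading

class LockedIntegerHolder():
    def __init__(self, n):
        self._lock = threading.Lock()
        self._value = n

    def set_value(self, n):
        with self._lock:  # only one thread may touch _value at a time
            self._value = n

    def get_value(self):
        with self._lock:
            return self._value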
Let me know if you have any more questions!

Python: Start a thread and have its return value assigned to a variable further down in the script [duplicate]

This question already has answers here: How to get the return value from a thread?
I am currently writing a program that is required to be as fast as possible.
Currently, one of the functions looks like this:
def function():
    value = get_value()
    # Multiple lines of code
    if value == "1":
        print("success")
I want to know if there is a way of calling the get_value() function at the start of the function, immediately running the multiple lines of code, and then, whenever get_value() finishes and returns a value, having the value variable updated and ready for the if statement.
Thanks!
This is what futures are for. With the concurrent.futures module, you'd do something like:
import concurrent.futures

# Properly, this would be created once, in a main method, using a with statement.
# Use ProcessPoolExecutor instead if the functions involved are CPU bound, rather
# than I/O bound, to work around the GIL.
executor = concurrent.futures.ThreadPoolExecutor()

def function():
    value = executor.submit(get_value)
    # Multiple lines of code
    if value.result() == "1":
        print("success")
This creates a pool of workers that you can submit tasks to, receiving futures, which can be waited for when you actually need the result. I'd recommend looking at the examples in the documentation for more full-featured usage.
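For example, the with-statement form mentioned in the comment above might look like this (get_value here is a stand-in for the asker's function):

import concurrent.futures

def get_value():
    # stand-in for the asker's slow, I/O-bound call
    return "1"

def function(executor):
    future = executor.submit(get_value)  # starts running in a worker thread
    # Multiple lines of code run here while get_value() is in flight
    if future.result() == "1":           # blocks only if the result isn't ready yet
        print("success")

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor() as executor:
        function(executor)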
The other approach here, for largely I/O bound cases based on sockets, subprocesses, etc., is using asyncio with async/await, but that requires a complete rewrite of your code, and is out of scope for a short answer.

How threading.Timer works in Python [duplicate]

This question already has answers here: How Python threading Timer work internally?
I want to run a function every n seconds. After some research, I figured out this code:
import threading

def print_hello():
    threading.Timer(5.0, print_hello).start()
    print("hello")

print_hello()
Will a new thread be created every 5 sec when print_hello() is called?
Timer is a thread. It's created when you instantiate Timer(). That thread waits the given amount of time and then calls the function. Since the function creates a new Timer, yes, a new thread is created every 5 seconds.
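To see this for yourself, a small variation of the question's code (my sketch) prints the number of live threads on each call:

import threading

def print_hello():
    threading.Timer(5.0, print_hello).start()
    print("hello, active threads:", threading.active_count())

print_hello()
# Each Timer thread exits after it fires, so the count stays small,
# but a fresh Timer thread is indeed spawned on every call.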
A little indentation of the code would help to better understand the question.
Formatted code:
from threading import Timer

def print_hello():
    Timer(5.0, print_hello, []).start()
    print("Hello")

print_hello()
This code works by spawning a new thread every 5 seconds, because print_hello schedules itself again on every call.
In my case this worked:
import threading

def hello():
    threading.Timer(2, hello).start()

hello()
Note that the function must be passed to Timer() without parentheses; writing hello() would call it immediately instead of scheduling it.

New thread life-cycle within Tkinter object

I've got an issue working with the threading class within a Tkinter GUI. When the Tkinter GUI is initiated, I create new Thread and Queue objects, set the thread as a daemon, and start it. In the Tkinter GUI, I have a button that calls an internal method. This method then calls put on the Queue object and is posted below. The Thread object performs all the necessary actions that I expect.
def my_method_threaded(self, my_name):
    try:
        self.queue.put(("test", dict(name=my_name)))
        self.label_str_var.set('')
        self.queue.join()
    except:
        self.error_out(msg=traceback.format_exc())
However, I am encountering an issue AFTER it has finished. If I call self.queue.join(), the set call is never executed and the app freezes after the thread has completed its task. If I comment out the join() call, the set call IS executed, but the button only works the first time; after that it does nothing (I am tracking what the run() method is doing using a logger, and it is only ever called the first time).
I am assuming there is an issue with calling join() and the Tkinter loop, which is why the first issue occurs. Can anyone shed any light on the second issue? If you need more code, then let me know.
Edit: A second issue I've just noticed is that the while True loop executes my action twice even though I have called self.queue.task_done(). Code for the run method is below:
def run(self):
    args = self._queue.get()
    my_name = args[1]["name"]
    while True:
        if my_name == "Barry":
            # calls a static method elsewhere
            self.queue.task_done()
