First of all, here are my two Python files:
sred.py:
import _thread, time

class Thread:
    def __init__(self, time: int, say: str):
        self.time = time
        self.say = say

    def create():
        id = _thread.get_ident()
        for i in range(5):
            print("HALLO", id)
        return
The second one:
main.py
from sred import Thread
import time,_thread
_thread.start_new_thread(Thread.create,())
When executing this, it doesn't print anything out. Why?
UPDATE:
import _thread

class Thread:
    @classmethod
    def create():
        id = _thread.get_ident()
        for i in range(5):
            print("HALLO", id)
        return
main.py:
from sred import Thread
import time,_thread
_thread.start_new_thread(Thread().create,())
Is this now right, or is there still something wrong?
The create method is missing self as a parameter -- it looks like it should also be a @classmethod if you want to call it as it's written now. Note that your __init__ method is never getting called, because you never instantiate any Thread objects. You may want it to read:
_thread.start_new_thread(Thread().create, ())
i.e., instantiate a thread, then pass its create method to be executed in the new thread. I'm not sure what's happening, but I suspect that something is erroring and the stacktrace is being suppressed by something.
Also, you need to delete the space after the for statement -- it's significant, and it should be throwing you a syntax error about an unexpected indent.
EDIT:
This version runs on my machine:
import _thread

class Thread:
    def create(self):
        id = _thread.get_ident()
        for i in range(5):
            print("HALLO", id)
        return

_thread.start_new_thread(Thread().create, ())
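One caveat worth adding (this is an assumption on my part, not something from the answer above): threads started with _thread do not keep the interpreter alive, so if the main script finishes right after start_new_thread, the worker may be killed before it prints anything. A minimal sketch that simply keeps the main thread around for a moment:

import _thread, time

class Thread:
    def create(self):
        id = _thread.get_ident()
        for i in range(5):
            print("HALLO", id)

_thread.start_new_thread(Thread().create, ())

# Keep the main thread alive long enough for the worker to print;
# without this, the process may exit before any output appears.
time.sleep(1)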
In a custom class I have the following code:
class CustomClass():
    triggerQueue: multiprocessing.Queue

    def __init__(self):
        self.triggerQueue = multiprocessing.Queue()

    def poolFunc(queueString):
        print(queueString)

    def listenerFunc(self):
        pool = multiprocessing.Pool(5)
        while True:
            try:
                queueString = self.triggerQueue.get_nowait()
                pool.apply_async(func=self.poolFunc, args=(queueString,))
            except queue.Empty:
                break
What I intend to do is:
add a trigger to the queue (not implemented in this snippet) -> works as intended
run an endless loop within the listenerFunc that reads all triggers from the queue (if any are found) -> works as intended
pass the trigger to poolFunc, which is to be executed asynchronously -> not working
It works as soon as I move my poolFunc() outside of the class, like this:
def poolFunc(queueString):
    print(queueString)

class CustomClass():
    [...]
But why is that so? Do I have to pass the self argument somehow? Is it impossible to perform it this way in general?
Thank you for any hint!
There are several problems going on here.
Your instance method, poolFunc, is missing a self parameter.
You are never properly terminating the Pool. You should take advantage of the fact that a multiprocessing.Pool object is a context manager.
You're calling apply_async, but you're never waiting for the results. Read the documentation: you need to call the get method on the AsyncResult object to receive the result; if you don't do this before your program exits your poolFunc function may never run.
By making the Queue object part of your class, you won't be able to pass instance methods to workers.
We can fix all of the above like this:
import multiprocessing
import queue

triggerQueue = multiprocessing.Queue()

class CustomClass:
    def poolFunc(self, queueString):
        print(queueString)

    def listenerFunc(self):
        results = []
        with multiprocessing.Pool(5) as pool:
            while True:
                try:
                    queueString = triggerQueue.get_nowait()
                    results.append(pool.apply_async(self.poolFunc, (queueString,)))
                except queue.Empty:
                    break
            for res in results:
                print(res.get())

c = CustomClass()
for i in range(10):
    triggerQueue.put(f"testval{i}")
c.listenerFunc()
You can, as you mention, also replace your instance method with a static method, in which case we can keep triggerQueue as part of the class:
import multiprocessing
import queue

class CustomClass:
    def __init__(self):
        self.triggerQueue = multiprocessing.Queue()

    @staticmethod
    def poolFunc(queueString):
        print(queueString)

    def listenerFunc(self):
        results = []
        with multiprocessing.Pool(5) as pool:
            while True:
                try:
                    queueString = self.triggerQueue.get_nowait()
                    results.append(pool.apply_async(self.poolFunc, (queueString,)))
                except queue.Empty:
                    break
            for r in results:
                print(r.get())

c = CustomClass()
for i in range(10):
    c.triggerQueue.put(f"testval{i}")
c.listenerFunc()
But we still need to reap the apply_async results.
Okay, I found an answer and a workaround:
The answer is based on the answer of noxdafox to this question.
Instance methods cannot be serialized that easily. What the pickle protocol does when serialising a function is simply turning it into a string.
For a child process it would be quite hard to find the right object your instance method is referring to, due to separate process address spaces.
A functioning workaround is to declare poolFunc() as a static method, like this:
@staticmethod
def poolFunc(queueString):
    print(queueString)
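As a rough illustration of what goes wrong with the original instance method (this sketch is mine, not part of the quoted answer; Demo is just a stand-in for CustomClass): pickling a bound method drags the whole instance along with it, and the instance holds a multiprocessing.Queue, which refuses to be pickled at all.

import multiprocessing
import pickle

class Demo:
    def __init__(self):
        self.triggerQueue = multiprocessing.Queue()

    def poolFunc(self, queueString):
        print(queueString)

d = Demo()
# Pickling the bound method also pickles the instance, and the Queue
# attribute raises a RuntimeError along the lines of "Queue objects
# should only be shared between processes through inheritance".
pickle.dumps(d.poolFunc)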
I'm new to threading and Python. I would like to understand how to pass multiple arguments from one class to another class in Python using threading.
I'm using a main thread to call a class, Process; then inside its run I'm doing some business logic and calling another class, build, using a thread and passing multiple arguments.
The run of the build class is getting executed, but inside the build class I'm unable to access those arguments, and hence I'm not able to proceed further.
I'm not sure if my approach is right. Any suggestions will be appreciated.
Below is my main class:

from threading import Thread
import logging as log
from process import Process

if __name__ == '__main__':
    try:
        proc = Process()
        proc.start()
    except Exception as e:
        # log some error
Inside Process:
# all the dependencies are imported

class Process(Thread):
    '''
    classdocs
    '''

    def __init__(self):
        '''
        Constructor
        '''
        Thread.__init__(self)
        # other initializations

    def run(self):
        # some other logic
        self.notification(pass_some_data)

    # inside notification I'm calling another thread
    def notification(self, passed_data):
        # passed_data is converted to dict1
        # tup1 is being formed from another function
        # build is a class, and if I don't pass None, I get a group name error
        th = build(None, (tup1,), (dict1,))
        th.start()
# inside build

class build(Thread):
    def _init_(self, tup1, dict1):
        super(build, self).__init__(self)
        self.tup1 = tup1
        self.dict1 = dict1

    def run(self):
        # some business logic
        # I'm unable to get the arguments being passed here
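For reference, here is a minimal sketch of the usual pattern for passing arguments into a threading.Thread subclass (tup1 and dict1 mirror the question; the rest is illustrative, not the asker's actual code). Note the double underscores in __init__ and that the parent constructor is called without an extra self:

from threading import Thread

class build(Thread):
    def __init__(self, tup1, dict1):
        super(build, self).__init__()
        self.tup1 = tup1
        self.dict1 = dict1

    def run(self):
        # The constructor arguments are now available as instance attributes.
        print("tuple:", self.tup1)
        print("dict:", self.dict1)

th = build(("some", "tuple"), {"some": "dict"})
th.start()
th.join()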
I want to get the value of a tornado object by key.
This is my code:

import tornado.ioloop
import beanstalkt

beanstalk = beanstalkt.Client(host='host', port=port)
beanstalk.connect()
print("ok1")
beanstalk.watch('contracts')
stateTube = beanstalk.stats_tube('contracts', callback=show)
print("ok2")
ioloop = tornado.ioloop.IOLoop.instance()
ioloop.start()
print("ok3")
And this is the function `show()`:

def show(s):
    pprint(s['current-jobs-ready'])
    ioloop.stop
When I look at the documentation, it says the callback is called with a dict containing the tube stats. And when I execute this code, I get this:
ok1
ok2
3
In fact I have the result I wanted ("3"), but I don't understand why my program continues running. Why doesn't the ioloop close? I don't get ok3 when I run this. How can I close the ioloop and get ok3?
beanstalk.stats_tube is async: it returns a Future, which represents a result that has not yet been resolved.
As the README says, your callback show will be executed with a dict that contains the resolved result. So you could define show like this:
def show(stateTube):
    pprint(stateTube['current-jobs-ready'])

beanstalk.stats_tube('contracts', callback=show)

from tornado.ioloop import IOLoop
IOLoop.current().start()
Note that you pass show, not show(): you're passing the function itself, not calling the function and passing its return value.
The other way to resolve a Future, besides passing a callback, is to use it in a coroutine:
from tornado import gen
from tornado.ioloop import IOLoop

@gen.coroutine
def get_stats():
    stateTube = yield beanstalk.stats_tube('contracts')
    pprint(stateTube['current-jobs-ready'])

loop = IOLoop.current()
loop.spawn_callback(get_stats)
loop.start()
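If you also want the program to reach the code after the loop (the ok3 print), one option is IOLoop.run_sync, which starts the loop, runs the coroutine to completion, and then stops the loop again. This is a sketch that assumes the same beanstalk client as above:

from pprint import pprint
from tornado import gen
from tornado.ioloop import IOLoop

@gen.coroutine
def get_stats():
    stateTube = yield beanstalk.stats_tube('contracts')
    pprint(stateTube['current-jobs-ready'])

# run_sync starts the IOLoop, waits for the coroutine to finish and then
# stops the loop, so execution continues past it.
IOLoop.current().run_sync(get_stats)
print("ok3")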
I need to add some extra functionality to RethinkDB's run() method. Here's what I've come up with:
from rethinkdb.ast import RqlQuery

class ExtendedRqlQuery(RqlQuery):
    def run(self, c=None, **global_optargs):
        if not c:
            with connection.get_conn() as conn:
                return super(ExtendedRqlQuery, self).run(conn, **global_optargs)
        else:
            return super(ExtendedRqlQuery, self).run(c, **global_optargs)

RqlQuery = ExtendedRqlQuery
The problem is that, if I run a query without any connection parameter, the default behavior occurs, as if my patch didn't go into effect. What am I doing wrong?
EDIT:
I've put up a test and it seems very odd to me:
import rethinkdb as r
# Here is where I do my patching
from . import utils

class Test(TestCase):
    def test(self):
        print r.ast.RqlQuery
        r.table('table').insert({'a': 1}).run()
The print statement says <class 'remodel.utils.ExtendedRqlQuery'> (which is right), but the following statement still uses the original implementation from rethinkdb/ast.py.
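A hedged note on why this can happen: even when rethinkdb.ast.RqlQuery is rebound to ExtendedRqlQuery (as the print statement suggests), the AST classes rethinkdb built at import time, such as the ones behind r.table(...).insert(...), still inherit from the original class object, so their run() is unchanged. A sketch of one possible workaround is to wrap run on the original class itself, which every existing subclass inherits; connection.get_conn() is taken from the question, and the rest is illustrative:

from rethinkdb.ast import RqlQuery

_original_run = RqlQuery.run

def run_with_default_connection(self, c=None, **global_optargs):
    # Fall back to our own connection when none is supplied
    # (connection.get_conn() as in the question's snippet).
    if c is None:
        with connection.get_conn() as conn:
            return _original_run(self, conn, **global_optargs)
    return _original_run(self, c, **global_optargs)

# Patching the method on the original class affects every existing
# subclass, since they all look run() up on RqlQuery.
RqlQuery.run = run_with_default_connection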
import sys
from threading import Thread

is_online = 1

class CommandListenerThread(Thread):
    global is_online

    def run(self):
        while is_online:
            next_command = sys.stdin.readlines()
            if next_command == 'exit':
                is_online = 0
            else:
                print next_command

listener = CommandListenerThread()
listener.start()
When I run this Python code, it shows an error: "UnboundLocalError: local variable 'is_online' referenced before assignment".
I tested other code that accesses a global variable inside a class in the same way, and it works fine. So what is wrong with this specific code?
The code may look weird, using a thread to listen to the command line, but it is just a part of my program that raises this error when I run the whole program.
Thank you, guys.
Move global is_online into run() to solve the error.
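For example (a minimal sketch of that fix; I've also swapped readlines() for readline().strip(), since readlines() blocks until EOF, but that part is incidental to the error):

import sys
from threading import Thread

is_online = 1

class CommandListenerThread(Thread):
    def run(self):
        # The global declaration must be inside the function that assigns
        # to the variable; otherwise the assignment below makes is_online
        # local to run() and the while test raises UnboundLocalError.
        global is_online
        while is_online:
            next_command = sys.stdin.readline().strip()
            if next_command == 'exit':
                is_online = 0
            else:
                print(next_command)

listener = CommandListenerThread()
listener.start()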
To address your other question (in a comment below): why not make it a static class variable?
class CommandListenerThread(Thread):
    is_online = 1

    def run(self):
        print CommandListenerThread.is_online
In case you have to use other code with a global is_online, you can take the DI (dependency injection) approach, as follows:
import sys
from threading import Thread

is_online = 2

class CommandListenerThread(Thread):
    def __init__(self, is_online):
        super(CommandListenerThread, self).__init__()
        CommandListenerThread.is_online = is_online  # now it's a static member
        # if you want to make it an instance member, use self.is_online

    def run(self):
        print CommandListenerThread.is_online

listener = CommandListenerThread(is_online)  # inject the value into the constructor
listener.start()