Is it the right way to run 2 infinite loops in Python? - python

I wrote some code in Python with 2 infinite loops, like this:
import threading
import time

ticker = 0

def ticking():
    global ticker
    while True:
        time.sleep(1)
        ticker += 1
        print("ticker = {}".format(ticker))

def main_line():
    while True:
        print("Hello world")
        time.sleep(4)

t1 = threading.Thread(target = ticking)
t2 = threading.Thread(target = main_line)
t1.start()
t2.start()
#t1.join()

if __name__ == "__main__":
    t1.join()
    #t2.join()
If I don't join any thread it doesn't work, but when I join one thread, the other thread works too, and I don't know why.
Can anyone explain this to me?

join() is used to block the calling thread until the joined thread finishes, throws an exception, or times out.
join([timeout]) Wait until the thread terminates. This blocks the
calling thread until the thread whose join() method is called
terminates – either normally or through an unhandled exception – or
until the optional timeout occurs.
In your case, you have started both threads, so both threads are working. But once you block the main thread with t1.join(), the main thread stays around and you can see the output from both running threads.
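To illustrate the quoted behaviour, here is a minimal sketch (not taken from the question; the worker names and intervals are made up) that joins with a timeout in a loop. Both workers keep running the whole time; join() only controls how long the main thread waits.

import threading
import time

def worker(name, delay):
    while True:
        print("{} is running".format(name))
        time.sleep(delay)

t1 = threading.Thread(target=worker, args=("ticker", 1))
t2 = threading.Thread(target=worker, args=("main_line", 4))
t1.start()
t2.start()

# join() with a timeout hands control back to the main thread every second,
# but both worker threads keep printing the whole time.
while t1.is_alive():
    t1.join(1)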

It is because you also start the other thread. If you comment out t2.start(), then only the t1 thread runs:
t1.start()
# t2.start()

Related

Python threading with connection as connect in loop

I can't start 2 separate threads on 1 connection to the server.
I want to run 2 functions in a loop, but it looks like they are executed one by one.
Is there any other option to run functions every x seconds/minutes?
Is threading a good tool to use?
import time
import datetime
import ts3  # query library providing TS3ServerConnection (missing from the original snippet)
from threading import Thread

def func1(ts3conn):
    print(f"func 1, pause 2sec NOW:{datetime.datetime.now()}")
    time.sleep(2)

def func2(ts3conn):
    print(f"func2, pause 10sec NOW:{datetime.datetime.now()}")
    time.sleep(10)

if __name__ == "__main__":
    with ts3.query.TS3ServerConnection("telnet://serveradmin:passwod#1.2.3.4:10011") as ts3conn:
        ts3conn.exec_("use", sid=1)
        ts3conn.exec_("clientupdate", client_nickname="BOT")
        while True:
            Thread(target=func1(ts3conn)).start()
            Thread(target=func2(ts3conn)).start()
func 1, pause 2sec NOW:2019-08-24 23:53:19.139951
func2, pause 10sec NOW:2019-08-24 23:53:21.141273
func 1, pause 2sec NOW:2019-08-24 23:53:31.141770
func2, pause 10sec NOW:2019-08-24 23:53:33.142568
func 1, pause 2sec NOW:2019-08-24 23:53:43.143090
func2, pause 10sec NOW:2019-08-24 23:53:45.143880
The target parameter expects a callable (i.e. a function), but you're passing the result of calling that callable, so the sleep occurs before the thread is even created.
So instead of this:
Thread(target=func1(ts3conn)).start()
... try something like this:
Thread(target=func1, args=(ts3conn,)).start()
Unfortunately, when you fix this issue you're going to have another problem: the while loop is not going to wait for the threads to finish before creating new ones, and will keep creating new threads until the application crashes. You may want to try something like this:
while True:
    t1 = Thread(target=func1, args=(ts3conn,))
    t2 = Thread(target=func2, args=(ts3conn,))
    t1.start()
    t2.start()
    # wait for the threads to finish
    t1.join()
    t2.join()
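If the goal is simply to run each function on its own fixed interval, another option is to give each function its own long-lived thread. This is only a sketch: the run_every helper is invented here, it assumes func1 and func2 do their work without sleeping themselves, and whether a single ts3conn object can safely be shared between threads depends on the ts3 library.

from threading import Thread
import time

def run_every(func, interval, *args):
    # Invented helper: call func(*args) repeatedly, sleeping `interval`
    # seconds between calls.
    while True:
        func(*args)
        time.sleep(interval)

# Each function gets its own thread and its own schedule, so a slow
# func2 never delays func1 (func1/func2/ts3conn are the names from the
# question; the sleeps inside them would no longer be needed).
t1 = Thread(target=run_every, args=(func1, 2, ts3conn))
t2 = Thread(target=run_every, args=(func2, 10, ts3conn))
t1.start()
t2.start()
t1.join()   # keep the main thread (and the ts3 connection) alive
t2.join()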

Python input() call blocks other threads from printing to console

Thread1: Just blocks on user-input, and then adds the input to a queue before going back to blocking.
Thread2: Prints to console.
I don't see Thread2's output unless I enter something on the console, i.e. unblock Thread1. Is there a way to output to the console while also blocking on input in another thread?
import threading
import time

def reader():
    while(1):
        text = input()

def writer():
    while(1):
        time.sleep(1)
        print("Thread 2")

t1 = threading.Thread(target=reader)
t2 = threading.Thread(target=writer)
t1.start()
t2.start()

while(1):
    # Do nothing
    time.sleep(1)
Environment: Windows, WPy 3.6.7
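No answer is reproduced here, but one possible direction, sketched under the assumption that avoiding a blocking input() call sidesteps the console contention, is to poll the keyboard with the standard msvcrt module (Windows only). This is untested against the environment above, and the line-buffering logic is invented for the example.

import msvcrt
import queue
import threading
import time

lines = queue.Queue()

def reader():
    buf = []
    while True:
        if msvcrt.kbhit():              # a keypress is waiting
            ch = msvcrt.getwche()       # read and echo one character without blocking
            if ch in ('\r', '\n'):      # Enter completes the line
                lines.put(''.join(buf))
                buf = []
            else:
                buf.append(ch)
        else:
            time.sleep(0.05)            # avoid a busy loop

def writer():
    while True:
        time.sleep(1)
        print("Thread 2")

threading.Thread(target=reader).start()
threading.Thread(target=writer).start()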

Why does my multiprocess queue not appear to be thread safe?

I am building a watchdog timer that runs another Python program, and if it fails to find a check-in from any of the threads, shuts down the whole program. This is so it will, eventually, be able to take control of needed communication ports. The code for the timer is as follows:
from multiprocessing import Process, Queue
from time import sleep
from copy import deepcopy

PATH_TO_FILE = r'.\test_program.py'
WATCHDOG_TIMEOUT = 2

class Watchdog:
    def __init__(self, filepath, timeout):
        self.filepath = filepath
        self.timeout = timeout
        self.threadIdQ = Queue()
        self.knownThreads = {}

    def start(self):
        threadIdQ = self.threadIdQ
        process = Process(target = self._executeFile)
        process.start()
        try:
            while True:
                unaccountedThreads = deepcopy(self.knownThreads)
                # Empty queue since last wake. Add new thread IDs to knownThreads,
                # and account for all known thread IDs in queue
                while not threadIdQ.empty():
                    threadId = threadIdQ.get()
                    if threadId in self.knownThreads:
                        unaccountedThreads.pop(threadId, None)
                    else:
                        print('New threadId < {} > discovered'.format(threadId))
                        self.knownThreads[threadId] = False
                # If there is a known thread that is unaccounted for, then it has either hung or crashed.
                # Shut everything down.
                if len(unaccountedThreads) > 0:
                    print('The following threads are unaccounted for:\n')
                    for threadId in unaccountedThreads:
                        print(threadId)
                    print('\nShutting down!!!')
                    break
                else:
                    print('No unaccounted threads...')
                sleep(self.timeout)
        # Account for any exceptions thrown in the watchdog timer itself
        except:
            process.terminate()
            raise
        process.terminate()

    def _executeFile(self):
        with open(self.filepath, 'r') as f:
            exec(f.read(), {'wdQueue' : self.threadIdQ})

if __name__ == '__main__':
    wd = Watchdog(PATH_TO_FILE, WATCHDOG_TIMEOUT)
    wd.start()
I also have a small program to test the watchdog functionality
from time import sleep
from threading import Thread
from queue import SimpleQueue

Q_TO_Q_DELAY = 0.013

class QToQ:
    def __init__(self, processQueue, threadQueue):
        self.processQueue = processQueue
        self.threadQueue = threadQueue
        Thread(name='queueToQueue', target=self._run).start()

    def _run(self):
        pQ = self.processQueue
        tQ = self.threadQueue
        while True:
            while not tQ.empty():
                sleep(Q_TO_Q_DELAY)
                pQ.put(tQ.get())

def fastThread(q):
    while True:
        print('Fast thread, checking in!')
        q.put('fastID')
        sleep(0.5)

def slowThread(q):
    while True:
        print('Slow thread, checking in...')
        q.put('slowID')
        sleep(1.5)

def hangThread(q):
    print('Hanging thread, checked in')
    q.put('hangID')
    while True:
        pass

print('Hello! I am a program that spawns threads!\n\n')

threadQ = SimpleQueue()

Thread(name='fastThread', target=fastThread, args=(threadQ,)).start()
Thread(name='slowThread', target=slowThread, args=(threadQ,)).start()
Thread(name='hangThread', target=hangThread, args=(threadQ,)).start()

QToQ(wdQueue, threadQ)
As you can see, I need to have the threads put into a queue.Queue, while a separate object slowly feeds the output of the queue.Queue into the multiprocessing queue. If instead I have the threads put directly into the multiprocessing queue, or do not have the QToQ object sleep in between puts, the multiprocessing queue will lock up, and will appear to always be empty on the watchdog side.
Now, as the multiprocessing queue is supposed to be thread and process safe, I can only assume I have messed something up in the implementation. My solution seems to work, but also feels hacky enough that I feel I should fix it.
I am using Python 3.7.2, if it matters.
I suspect that test_program.py exits.
I changed the last few lines to this:
tq = threadQ
# tq = wdQueue # option to send messages direct to WD

t1 = Thread(name='fastThread', target=fastThread, args=(tq,))
t2 = Thread(name='slowThread', target=slowThread, args=(tq,))
t3 = Thread(name='hangThread', target=hangThread, args=(tq,))

t1.start()
t2.start()
t3.start()

QToQ(wdQueue, threadQ)

print('Joining with threads...')
t1.join()
t2.join()
t3.join()
print('test_program exit')
The calls to join() mean that the test program never exits all by itself, since none of the threads ever exit.
So, as is, t3 hangs, the watchdog program detects the unaccounted-for thread, and it stops the test program.
If t3 is removed from the above program, then the other two threads are well behaved and the watchdog program allows the test program to continue indefinitely.

Python thread is not threading

I've been looking all over google and can't seem to get this working.
I'm trying to thread 2 functions, both of which are infinite loops.
Looking at the extract below, it only starts the 1st thread and does not proceed to the next one in line.
PS: When I swap the 2 threads around, then I have the same problem with the 2nd thread.
def syslog_service():
    syslog_server = socketserver.UDPServer((syslog_host, syslog_port), Syslog_Server)
    syslog_server.serve_forever()

def cleanup_old_logs_service():
    # lock = threading.Lock()
    # threading.Thread.__init__(self)
    global syslog_retention_hours
    global RUNNING
    while RUNNING:
        # cleanup_old_logs_service.lock.acquire()
        cleanup.old_logs(syslog_retention_hours)
        # cleanup_old_logs_service.lock.release()
        time.sleep(10)

if __name__ == "__main__":
    try:
        logger.info("Starting main thread")
        config()
        logger.info("Starting system testing")
        test()
        logger.info("Config loaded")
        thread1 = cleanup_old_logs_service()
        thread2 = syslog_service()
        thread1.start()
        logger.info("Syslog cleanup service running")
        thread2.start()
        logger.info("Syslog server running")
The reason why only the first thread is executed is that you actually have only ONE thread in your program. When you write thread1 = cleanup_old_logs_service() and thread2 = syslog_service(), you are not creating new threads, but just assigning the return values of your functions to 2 different variables. For this reason, as soon as the program reaches thread1 = cleanup_old_logs_service(), it executes cleanup_old_logs_service() and gets stuck in its infinite loop.
To create a new thread, I would import the threading module, create a new threadObj object and start the thread as follows:
import threading
threadObj = threading.Thread(target=cleanup_old_logs_service)
threadObj.start()
This way, the function cleanup_old_logs_service() will be executed in a new thread.
By saying thread1 = cleanup_old_logs_service() you are actually executing the function cleanup_old_logs_service, not saving a reference to a thread. You would have to say:
import threading # If you have not already
thread1 = threading.Thread(target=cleanup_old_logs_service)
thread2 = threading.Thread(target=syslog_service)
# Now you can start the thread
thread1.start()
logger.info("Syslog cleanup service running")
thread2.start()
logger.info("Syslog server running")
You can look at https://docs.python.org/3.5/library/threading.html for documentation and https://pymotw.com/2/threading/ for examples. I believe you will also need to use locks to manage access to your shared resources.
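As a rough illustration of that last point, here is a minimal sketch of guarding a shared resource with threading.Lock. The shared list and the function bodies are invented for the example; they are not part of the original program.

import threading

log_lock = threading.Lock()
recent_logs = []              # hypothetical resource touched by both services

def record_log(entry):
    with log_lock:            # only one thread mutates the list at a time
        recent_logs.append(entry)

def cleanup_old_logs():
    with log_lock:            # cleanup sees a consistent view of the list
        del recent_logs[:-100]    # keep only the newest 100 entries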

Python exiting multiple threads

I'm trying to see how multiple threads work in order to use them in an automation project. I can run the threads, but I cannot find a way to exit both threads completely: they restart after each keyboard interrupt. Is there a way to exit both threads with a keyboard interrupt?
import thread
from time import sleep

# parameters when starting
temp_c = 32
T_hot = 30
T_cold = 27
interval_temp = 2

def ctrl_fan(temp_c, T_hot, interval_temp):
    while True:
        if temp_c >= T_hot:
            print 'refreshing'
        else:
            print ' fan stopped'
        sleep(interval_temp)
    print 'shutting everything off'

def ctrl_light(temp_c, T_cold, interval_temp):
    while True:
        if temp_c <= T_cold:
            print 'warming'
        else:
            print 'light stopped'
        sleep(interval_temp)
    print 'shutting everything off'

try:
    thread.start_new_thread(ctrl_fan, (temp_c, T_hot, interval_temp, ))
    sleep(1)
    thread.start_new_thread(ctrl_light, (temp_c, T_cold, interval_temp, ))
except (KeyboardInterrupt, SystemExit):
    thread.exit()
    print "Error: unable to start thread"
Sure,
Firstly I'd recommend using the slightly higher level threading module instead of the thread module.
To start a thread with threading use the following
import threading
t = threading.Thread(target=ctrl_fan, args=(temp_c, T_hot, interval_temp))
t.start()
There are a few things you'll need to do to get the program to exit on a Ctrl-C interrupt.
Firstly, you will want to set the threads to be daemon threads, so that they allow the program to exit when the main thread exits (t.daemon = True).
You will also want the main thread to wait on the completion of the threads; you can use t.join() to do this. However, this won't raise a KeyboardInterrupt exception until the thread finishes. There is a workaround for this though:
while t.is_alive():
    t.join(1)
Providing a timeout value gets around this.
I'd be tempted to pull this together into a subclass, to get the behaviour you want
import threading

class CustomThread(threading.Thread):
    def __init__(self, *args, **kwargs):
        threading.Thread.__init__(self, *args, **kwargs)
        self.daemon = True

    def join(self, timeout=None):
        if timeout is None:
            while self.is_alive():
                threading.Thread.join(self, 10)
        else:
            return threading.Thread.join(self, timeout)

t1 = CustomThread(target=ctrl_fan, args=(temp_c, T_hot, interval_temp))
t1.start()
t2 = CustomThread(target=ctrl_light, args=(temp_c, T_cold, interval_temp))
t2.start()

t1.join()
t2.join()
The explanation is, again, in the documentation (https://docs.python.org/2/library/thread.html):
Threads interact strangely with interrupts: the KeyboardInterrupt exception will be received by an arbitrary thread. (When the signal module is available, interrupts always go to the main thread.)
You'd certainly find answers on https://stackoverflow.com/, like:
Propagate system call interruptions in threads
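Building on that quote, one common pattern worth sketching: let the main thread be the only one that sees the KeyboardInterrupt and have it signal the workers through a threading.Event. The ctrl_loop function below is a simplified, invented stand-in for ctrl_fan and ctrl_light, not the original code.

import threading
from time import sleep

stop_event = threading.Event()

def ctrl_loop(name, interval):
    # Simplified stand-in for ctrl_fan / ctrl_light: loop until asked to stop.
    while not stop_event.is_set():
        print('%s running' % name)
        sleep(interval)
    print('%s shutting off' % name)

t1 = threading.Thread(target=ctrl_loop, args=('fan', 2))
t2 = threading.Thread(target=ctrl_loop, args=('light', 2))
t1.start()
t2.start()

try:
    while t1.is_alive() or t2.is_alive():
        sleep(1)              # KeyboardInterrupt is delivered to the main thread
except KeyboardInterrupt:
    stop_event.set()          # ask both workers to finish their current pass
    t1.join()
    t2.join()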
