How can I access a running thread - Python

By starting main, I start a thread that keeps a connection to an OPC UA server alive (and a few more things).
I now want to call a function inside this thread, but I don't want to import everything again (because it takes too long).
Inside if __name__ == "__main__": it works, but when I run a second script, goIntoThread.py, it does not work. Obviously because I didn't import the modules...
What are my options to trigger e.g. thd.doSomethingInThread() without importing everything again?
Thanks a lot!
main.py
import time

def importOnlyMain():
    global KeepConnected
    from keepConnected import KeepConnected

if __name__ == "__main__":
    importOnlyMain()
    global thd
    thd = KeepConnected()
    thd.start()
    time.sleep(3)
    thd.doSomethingInThread()

def goIntoThread():
    print("Going to Thread")
    thd.doSomethingInThread()
goIntoThread.py
import main
main.goIntoThread()
Comment: I get the following error:
thd.setBool()
NameError: global name 'thd' is not defined


How to create a new console in Python to print messages

I have a Python script running in a console, and I want to create another console for printing important messages, without running another Python script to do that.
I first tried to use win32console.AllocConsole() directly, but I got "Access is denied"
(seemingly because one process can attach to at most one console, according to the docs).
So I tried creating a new process using multiprocessing:
import sys, os
import win32api, win32con, win32console
import multiprocessing

def ShowConsole():
    win32console.FreeConsole()
    win32console.AllocConsole()
    sys.stdout = open("CONOUT$", "w")
    sys.stderr = open("CONOUT$", "w")
    print("Test")
    os.system("pause")

if __name__ == '__main__':
    p = multiprocessing.Process(target=ShowConsole)
    p.start()
But when I ran the code in PowerShell, it exited immediately with no message, and no new console was created.
None of the possible solutions I found on Stack Overflow works for me. What should I do?
Update: It turns out that multiprocessing.Process fails to call the ShowConsole function. I used multiprocessing.dummy.Process as an alternative and it works as expected.
The reason why multiprocessing.Process fails to call the target is still unclear.
There's nothing wrong with your example above; it pops up the console as shown below. I added a "hello" in the main section to differentiate.
But since you want to pass values from the first console to the second,
here's a better example. Use put/get to pass the information from the first console to the second console.
import win32console
import multiprocessing
import time

def secondconsole(output):
    win32console.FreeConsole()
    win32console.AllocConsole()
    while True:
        print(output.get())

if __name__ == "__main__":
    output = multiprocessing.Queue()
    multiprocessing.Process(target=secondconsole, args=[output]).start()
    while True:
        print("Hello World")
        output.put("Hello to second console")  # here you provide the data to the second console
        time.sleep(3)  # sleep for 3 seconds, just for testing
It looks like the issue might be with the way you are trying to open the console using sys.stdout and sys.stderr. Try using the following code instead:
import sys, os
import win32api, win32con, win32console
import multiprocessing

def ShowConsole():
    win32console.FreeConsole()
    win32console.AllocConsole()
    os.dup2(win32console.GetStdHandle(win32console.STD_OUTPUT_HANDLE), sys.stdout.fileno())
    os.dup2(win32console.GetStdHandle(win32console.STD_ERROR_HANDLE), sys.stderr.fileno())
    print("Test")
    os.system("pause")

if __name__ == '__main__':
    p = multiprocessing.Process(target=ShowConsole)
    p.start()

control a daemon thread with routes in flask

How do you make two routes control a daemon thread in Python?
Flask backend file:
from flask import Flask
from time import time, sleep
from threading import Thread

app = Flask(__name__)

def intro():
    while True:
        sleep(3)
        print(f"Current time : {time()}")

@app.route('/startbot')
def start_bot():
    global bot_thread
    bot_thread = Thread(target=intro, daemon=True)
    bot_thread.start()
    return "bot started"

@app.route('/stopbot')
def stop_bot():
    bot_thread.join()
    return

if __name__ == "__main__":
    app.run()
When trying to kill the thread, the curl request in the terminal does not return to the console, and the thread keeps printing data to the terminal.
The idea I had was to declare a variable that holds the reference to bot_thread and use the routes to control it.
To test this I used curl http://localhost:port/startbot and curl http://localhost:port/stopbot.
I can start the bot just fine, but when I try to kill it, I get the following:
NameError: name 'bot_thread' is not defined
Any help and dos and don'ts will be much appreciated.
Take into consideration that after killing the thread, a user can create a new one and also be able to kill it.
Here is a Minimal Reproducible Example:
from threading import Thread

def intro():
    print("hello")

global bot_thread

def start_bot():
    bot_thread = Thread(target=intro, daemon=True)
    return

def stop_bot():
    if bot_thread:
        bot_thread.join()

if __name__ == "__main__":
    import time
    start_bot()  # simulating a request on it
    time.sleep(1)  # some time passes ...
    stop_bot()  # simulating a request on it
Traceback (most recent call last):
  File "/home/stack_overflow/so71056246.py", line 25, in <module>
    stop_bot()  # simulating a request on it
  File "/home/stack_overflow/so71056246.py", line 17, in stop_bot
    if bot_thread:
NameError: name 'bot_thread' is not defined
My IDE makes the error visually clear to me: the bot_thread assigned inside start_bot is never used, because the assignment creates a local variable, not the global one, although they share the same name. This is a classic pitfall for Python programmers; see this question or this one for example.
So :
def start_bot():
    global bot_thread
    bot_thread = Thread(target=intro, daemon=True)
    return
but
Traceback (most recent call last):
  File "/home/stack_overflow/so71056246.py", line 26, in <module>
    stop_bot()  # simulating a request on it
  File "/home/stack_overflow/so71056246.py", line 19, in stop_bot
    bot_thread.join()
  File "/usr/lib/python3.9/threading.py", line 1048, in join
    raise RuntimeError("cannot join thread before it is started")
RuntimeError: cannot join thread before it is started
Hence :
def start_bot():
    global bot_thread
    bot_thread = Thread(target=intro, daemon=True)
    bot_thread.start()
    return
which finally gives :
hello
EDIT
When trying to kill the thread the curl request in the terminal does not return back to the console and the thread keeps on printing data to the terminal
@prometheus: the bot_thread runs the intro function. Because it contains an infinite loop (while True), it never reaches the end of the function (the implicit return), so the thread is never considered finished. Because of that, when the main thread tries to join (wait until the thread finishes, then get the result), it waits endlessly, because the bot thread is stuck in the loop.
So you have to make it possible to exit the while loop. For example (like in the example I linked in a comment), by using another global variable, a flag, that gets set in the main thread (route stop_bot) and that is checked in the intro loop. Like so:
from time import time, sleep
from threading import Thread

def intro():
    global the_bot_should_continue_running
    while the_bot_should_continue_running:
        print(time())
        sleep(1)

global bot_thread
global the_bot_should_continue_running

def start_bot():
    global bot_thread, the_bot_should_continue_running
    bot_thread = Thread(target=intro, daemon=True)
    the_bot_should_continue_running = True  # before the `start`!
    bot_thread.start()
    return

def stop_bot():
    if bot_thread:
        global the_bot_should_continue_running
        the_bot_should_continue_running = False
        bot_thread.join()

if __name__ == "__main__":
    start_bot()  # simulating a request on it
    sleep(5.5)  # some time passes ...
    stop_bot()  # simulating a request on it
prints 6 times then exits.

AttributeError("module 'name' has no attribute 'def name'",)

I'm creating a virtual OS with different files, like boot.py and Run.py.
It worked for a while, until I typed this in:

if Run.run == 0:
    error()
else:
    start()

There are no errors shown in the editor, like red underlines. But now I get this error:
AttributeError("module 'boot' has no attribute 'bootup'",)
Here's the code:
boot.py:
# Imports
import Run

# Variables
name = "PyOS"
version = 1
creator = "© Dragon Softwares"

def bootup(run):
    if Run.run == 0:
        error()
    else:
        start()

def start():
    print("Starting PyOS")
    print("Version ", version)

def error():
    print("Please Run 'Run.py' To bootup the system")

bootup()
Run.py:
# Imports
import time
import boot

# Variables
run = 0

def main():
    run = 1
    time.sleep(0.1)
    boot.bootup

main()
You have a circular import. Don't have boot import Run and vice versa.
What happens now is:

1. Run.py is executed as the __main__ module.
2. The line import boot is reached and executed.
3. boot has not been imported before, so boot.py is found and loaded to be the boot module.
4. The line import Run is reached and executed.
5. Run has not been imported before, so Run.py is found and loaded to be the Run module. (Note: that Run.py is also used for __main__ does not matter here.)
6. The line import boot is reached and executed.
7. boot is already being imported, so the partially initialized module is used to set the name boot in the Run module.
8. Various other names are set in the Run module namespace (run and main).
9. The line main() is reached and executed.
10. The main() function references boot.bootup, which doesn't exist yet, because the boot module's top-level code never reached the def bootup(run): line.
Instead of importing Run, pass the run value as a parameter to the boot function:
def main():
    run = 1
    time.sleep(0.1)
    boot.bootup(run)
Also, don't call bootup() at the end of the boot module. That function call will be executed when you import boot, and makes no sense there. If you want to present an error message, use an if __name__ == '__main__': guard:
if __name__ == '__main__':
    import sys
    error()
    sys.exit(1)
The if test will only ever be true if you ran boot.py as the script.

Maximum number of running instances reached (1) - apscheduler

I'm using apscheduler to recursively run a function in my application. The basic source of this functionality is as follows:
class ClassName(QtGui.QWidget):
    def __init__(self):
        super(ClassName, self).__init__()
        from apscheduler.scheduler import Scheduler
        sched = Scheduler(standalone=True)
        sched.daemonic = False
        sched.add_cron_job(self.FunctionName, second='*/5')

    def FunctionName(self):
        print("Hello World!")
        # Function contents here

if __name__ == '__main__':
    import sys
    app = QtGui.QApplication(sys.argv)
    ClassName = ClassName()
    sys.exit(app.exec_())
My understanding from reading this thread is that if the called function hasn't completed when the next job starts, the "Maximum number of running instances reached" error can be raised. I understand the theory behind the solution in that thread, but am unsure about its application.
Should I explicitly terminate/kill a process at the end of the called function, to ensure that the original thread is removed before the next one is started?

Why does a PyInstaller-packed program create a guardian process when creating a new process on Windows?

I am trying to write an alarm clock program in Python, using the multiprocessing module, on Windows 7.
It all runs fine in the interpreter. But when packed into one file by PyInstaller, every time the code creates a process, there turn out to be two processes: one is the parent and the other is its child. When the code kills the parent process, the child becomes an orphan process.
The code:
from multiprocessing import Process, freeze_support
import os
import time
import winsound

def startout(seconds, name):
    freeze_support()
    print name + ': pid ' + str(os.getpid()) + ' is created'
    startTime = time.time()
    while (time.time() - startTime) < seconds:
        time.sleep(1)
    winsound.PlaySound('SystemQuestion', winsound.SND_ALIAS)
    print name + ' end'

class alarmCenter:
    def __init__(self):
        self.alarmList = {'alarm1': None, 'alarm2': None, 'alarm3': None}

    def newAlarm(self, seconds, name):
        if self.alarmList[name] != None:
            if self.alarmList[name].is_alive():
                return False
        ala = Process(target=startout, args=(seconds, name))
        ala.daemon = True
        ala.start()
        self.alarmList[name] = ala
        return True

    def stopAlarm(self, name):
        try:
            self.alarmList[name].terminate()
            self.alarmList[name].join()
            self.alarmList[name] = None
        except Exception:
            pass

    def terminateAll(self):
        for each in self.alarmList.keys():
            if self.alarmList[each] != None:
                self.alarmList[each].terminate()

if __name__ == '__main__':
    freeze_support()
    # ....
Note that multiprocessing.freeze_support() is already there.
Could anyone please show me how to kill the child process or fix this bug?
