Okay, so I have multiple problems with the code below:
When the key chosen in the combo box is held down, it keeps printing "You Pressed It"; is there a way to avoid this?
When I press the set hotkey, the label changes but the while loop in process() doesn't react; it's supposed to run a series of tasks, but I simplified it to a print for this question.
run = False

def press():
    global run
    while True:
        if keyboard.read_key(hotCombo.get()):
            print("You Pressed It")
            run = not run
            keyboard.wait(hotCombo.get())
        if run == True:
            status["text"] = "Working"
        else:
            status["text"] = "Not Working"

def process():
    while run == True:
        print("running")
I've been tinkering with it and found more problems.
I ended up with this, but while it's printing "run" I can't seem to stop it:
def process():
    global run
    while True:
        if keyboard.read_key(hotCombo.get()):
            print("kijanbdsjokn")
            run = not run
            keyboard.wait(hotCombo.get())
        if run == True:
            status["text"] = "Working"
        else:
            status["text"] = "Not Working"
        while run == True:
            print("run")
            time.sleep(1)
Can I ask why I can't just integrate tkinter into a working Python script using threading?
A Python script is generally linear. You do things in sequence and then you exit.
In a tkinter program, your code consists of three things.
Code to set up the window and widgets.
Initialization of global variables (doesn't really matter if you hide them in a class instance; they're still globals).
Most of it will be functions/methods that are called as callbacks from tkinter when it is in the mainloop.
So in a tkinter program most of your code is a guest in the mainloop, where it is executed in small pieces in reaction to events. This is a completely different kind of program: it was called event-driven or message-based programming long before that became cool in web servers and frameworks.
So, can you integrate a script in a tkinter program? Yes, it is possible.
There are basically three ways you can do it:
Split the code up into small pieces that can be called via after timeouts. This involves the most reorganization of your code. To keep the GUI responsive, event handlers (like timeouts) should not take too long; 50 ms seems to be a reasonable upper limit.
Run it in a different thread. We will cover that in more detail below.
Run it in a different process. Broadly similar to running in a thread (the APIs of threading.Thread and multiprocessing.Process are almost the same by design). The largest difference is that communication between processes has to be done explicitly via e.g. a Queue or Pipe.
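A minimal sketch of the first approach, with invented names and a plain list standing in for tkinter's event queue (a real GUI would call root.after(50, step) and let mainloop invoke the callback):

```python
# Toy stand-in for tkinter's after(): callbacks queued for the "mainloop".
pending = []

def after(delay_ms, callback):
    pending.append(callback)

def make_stepper(n, results):
    """Split "count to n" into small pieces, one piece per timeout."""
    state = {'i': 0}
    def step():
        state['i'] += 1          # one small piece of work, well under 50 ms
        if state['i'] < n:
            after(50, step)      # reschedule the next piece
        else:
            results.append(state['i'])
    return step

results = []
after(50, make_stepper(3, results))
while pending:                   # drain the queue, as mainloop would
    pending.pop(0)()
```

The key point is that each call to step() returns quickly, so between pieces the mainloop stays free to repaint widgets and handle clicks.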
There are some things that you have to take into account when using extra threads, especially in a tkinter program.
1) Python version
You need to use Python 3. This will not work well in Python 2 for reasons that are beyond the scope of this answer. Better preemption of threads in Python 3 is a big part of it.
2) Multithreaded tkinter build
The tkinter module (or rather the underlying tcl interpreter) needs to be built with threading enabled. I gather that the official python.org builds for ms-windows are, but apart from that YMMV. On some UNIX-like systems such as Linux or *BSD, the packages/ports systems give you a choice in this.
3) Make your code into a function
You need to wrap up the core of your original script in a function so you can start it in a thread.
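As a sketch (the function name and the work it does are made up here), wrapping the script's core in a function makes it a valid thread target:

```python
import threading

results = []

def script_main():
    # the core of the original script, now callable as a thread target
    total = sum(range(10))
    results.append(total)

worker = threading.Thread(target=script_main, daemon=True)
worker.start()
worker.join()   # a GUI would not block like this; it's only here to observe the result
```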
4) Make the function thread-friendly
You probably want to be able to interrupt that thread if it takes too long. So you have to adapt it to check regularly if it should continue. Checking if a global named run is True is one method. Note that the threading API does not allow you to just terminate a thread.
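For example, here is a sketch (names invented) of that pattern using a threading.Event instead of a bare global, which reads a little more explicitly:

```python
import threading
import time

stop = threading.Event()
ticks = []

def long_task():
    while not stop.is_set():   # check regularly instead of running unconditionally
        ticks.append('tick')   # one small slice of the original script's work
        time.sleep(0.01)

t = threading.Thread(target=long_task, daemon=True)
t.start()
time.sleep(0.05)
stop.set()                     # politely ask the thread to finish
t.join()
```

The thread exits on its own at the next check; nothing is forcibly killed.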
5) The normal perils of multithreading
You have to be careful with modifying widgets or globals from both threads at the same time.
At the time of writing, the Python GIL helps you here. Since it assures that only one thread at a time is executing Python bytecode, any change that can be done in a single bytecode is multithreading safe as a side effect.
For example, look at the modification of a global in the modify function:
In [1]: import dis
In [2]: data = []
In [3]: def modify():
   ...:     global data
   ...:     newdata = [1, 2, 3]
   ...:     data = newdata
   ...:
In [4]: dis.dis(modify)
  3           0 BUILD_LIST               0
              2 LOAD_CONST               1 ((1, 2, 3))
              4 LIST_EXTEND              1
              6 STORE_FAST               0 (newdata)

  4           8 LOAD_FAST                0 (newdata)
             10 STORE_GLOBAL             0 (data)
             12 LOAD_CONST               0 (None)
             14 RETURN_VALUE
See how the new list is built separately, and only when it is complete is it assigned to the global. (This was not by accident.)
It takes only a single bytecode instruction (STORE_GLOBAL) to set a global variable to a newly created list. So at no point can the value of data be ambiguous.
But a lot of things take more than one bytecode. So there is a chance that one thread is preempted in favor of another while it is modifying a variable or widget. How big that chance is depends on how often these situations happen and how long they take.
By default, a running thread is asked to release the GIL every 5 ms (see sys.getswitchinterval()). So a change that takes longer than that is very likely to be preempted, as is any task that drops the GIL for I/O.
So if you see weird things happening, make sure to use Locks to regulate access to shared resources.
It helps if e.g. a widget or variable is only modified from one thread, and only read from all the other threads.
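As a sketch of that advice, a Lock around a read-modify-write keeps a shared counter consistent, even though counter += 1 compiles to several bytecodes:

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:            # counter += 1 is several bytecodes; guard it
            counter += 1

threads = [threading.Thread(target=bump, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

Without the lock, two threads can both read the old value before either stores the new one, and increments get lost.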
One way to handle your key is to turn it into a two-phase loop:
def press():
    global run
    while True:
        while not keyboard.read_key(hotCombo.get()):
            time.sleep(0.2)
        run = True
        status["text"] = "Working"
        while keyboard.read_key(hotCombo.get()):
            print("running")
            time.sleep(0.2)
        run = False
        status["text"] = "Not Working"
Related
I have 2 separate scripts working with the same variables.
To be more precise, one code edits the variables and the other one uses them (It would be nice if it could edit them too but not absolutely necessary.)
This is what I am currently doing:
When script 1 edits a variable, it dumps it into a JSON file.
Script 2 repeatedly opens the JSON file to get the variables.
This method is really not elegant, and the while loop is really slow.
How can I share variables across scripts?
My first script gets data from a MIDI controller and sends web requests.
My second script is for LED strips (those run thanks to the same MIDI controller). Both scripts run in a while True loop.
I can't simply put them in the same script, since every web request would slow the LEDs down. I am currently just sharing the variables via a JSON file.
If enough people ask for it I will post the whole code, but I have been told not to.
Considering the information you provided, meaning...
Both scripts run in a while True loop.
I can't simply put them in the same script, since every web request would slow the LEDs down.
To me, you have two choices:
Use a client/server model. You have 2 machines. One acts as the server, and the second as the client. The server has a script with an infinite loop that consistently updates the data, and you would have an API that would just read and expose the current state of your file/database to the client. The client would be on another machine, and as I understand it, it would simply request the current data, and process it.
Make a single multiprocessing script. Each part would run in a separate process and manage its own memory. Since you also want to share variables between your two programs, you could pass as an argument an object that would be shared between both of them. See this resource to help you.
Note that there are more solutions to this. For instance, you're using a JSON file that you are constantly opening and closing (that is probably what takes the most time in your program). You could use a real database that is opened only once and queried many times, while still being updated.
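As a sketch of that alternative (the key and value here are invented), the standard library's sqlite3 module can play that role; in practice both scripts would open the same database file on disk:

```python
import sqlite3

# ':memory:' keeps the demo self-contained; two real scripts would
# connect to the same file, e.g. sqlite3.connect('shared.db')
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)')

# writer side: upsert the current value
conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ('brightness', '200'))
conn.commit()

# reader side: fetch the current value
value = conn.execute(
    "SELECT value FROM kv WHERE key = ?", ('brightness',)).fetchone()[0]
```

SQLite handles the file locking for you, which is exactly the part that is fragile when two processes share a plain JSON file.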
A Manager from multiprocessing lets you do this sort of thing pretty easily.
First, I simplified your "midi controller and sends web-request" code down to something that just sleeps for random amounts of time and updates a variable in a managed dictionary:
from time import sleep
from random import random

def slow_fn(d):
    i = 0
    while True:
        sleep(random() ** 2)
        i += 1
        d['value'] = i
Next, we simplify the "LED strip" control down to something that just prints to the screen:
from time import perf_counter

def fast_fn(d):
    last = perf_counter()
    while True:
        sleep(0.05)
        value = d.get('value')
        now = perf_counter()
        print(f'fast {value} {(now - last) * 1000:.2f}ms')
        last = now
You can then run these functions in separate processes:
import multiprocessing as mp

if __name__ == '__main__':  # required on platforms that spawn processes
    with mp.Manager() as manager:
        d = manager.dict()
        procs = []
        for fn in [slow_fn, fast_fn]:
            p = mp.Process(target=fn, args=[d])
            procs.append(p)
            p.start()
        for p in procs:
            p.join()
The "fast" output happens regularly, with no obvious visual pauses.
My classes have become complicated, and I think the way I am instantiating and not closing them may be a problem. Sometimes the program just hangs: code that took 0.2 seconds to run will take 15 seconds. It happens randomly, and rebooting seems to fix it; restarting the program without rebooting does not. I'm using Ctrl+C to stop the program, running on CentOS 7.
Here's my class layout:
def do_stuff(self):
    pool_class = pools()
    sql_pool_insert = pool_class.generate_sql()
    print "pools done"
    sql_HD_insert = hard_drives(pool_class).generate_sql()
    print "hds done"

class pools:
    ...

class devs:
    ...

class hard_drives(object):
    def __init__(self, pool_class=pools()):
        self.drives_table = []
        self.assigned_drive_names = []
        self.unassigned_drives_table = []
        # for each pool, generate drives list, actively combining
        for pool in pool_class.pool_name:
            self.drives_table = self.drives_table + pool_class.Vdevs(pool).get_drive_table()
            i = 0
Outside of that file, my main program imports the above as one of many packages. The main file calls the functions defined above the classes (because they use the classes), and sometimes the main program uses discrete functions inside the classes. The problem is that the pools and devs __init__ sections are HUGE; it takes 0.7 seconds to run all that code in some cases, so I want to instantiate them correctly and close them effectively. Here is a sample from the main program:
if (new_pool==1):
    result = mypackage.pools().create_pool(pool_to_create)
I really feel like I'm doing two or three things wrong. Perhaps I should just create an instance of the pools class at the top of my main program's while loop, use it everywhere in the loop, and close it at the end; the loop runs every 0.25 seconds. I want the pools class to be initialized at the beginning of the while loop and closed at the end. Is this a good approach? This question applies equally to the MySQL InnoDB cursors I'm using, which may be another factor in the issue; how should these be handled so that instances of cursors and classes don't get out of hand? Heck, maybe I should be instantiating the entire package instead of one class in the package, i.e.:
with package as MyPackage:
    package.pool()  # do whatever; all inits in the package have run once, and you
                    # don't have to worry about them running again so long as you
                    # use the package object for everything???
Maybe I'm just confused. It comes down to this: how do I control when a class's __init__ function is run? It really only needs to run at the beginning of the loop, so that the loop has fresh data to work with, and then be used everywhere else. But you see how the pools class is used by the hard_drives class; how would I handle that?
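The with idea above can be made concrete with a context manager. This is only a hypothetical sketch reusing the question's pools name, with a dummy attribute standing in for the real expensive setup:

```python
class pools(object):
    def __init__(self):
        # the expensive setup from the question would go here; it now runs
        # exactly once per with-block
        self.pool_name = ['tank']     # dummy data for the sketch

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()                  # cleanup runs even on exceptions

    def close(self):
        self.pool_name = None

with pools() as p:   # fresh data at the top of each loop iteration
    names = list(p.pool_name)
```

Anything that needs the same instance (such as hard_drives in the question) would take p as an argument inside the with-block, rather than constructing its own pools().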
I read in the Python documentation that Queue.Queue() is a safe way of passing variables between different threads. I didn't really know that there was a safety issue with multithreading. For my application, I need to develop multiple objects with variables that can be accessed from multiple different threads. Right now I just have the threads accessing the object variables directly. I won't show my code here because there's way too much of it, but here is an example to demonstrate what I'm doing.
from threading import Thread
import time
import random

class switch:
    def __init__(self, id):
        self.id = id
        self.is_on = False

    def toggle(self):
        self.is_on = not self.is_on

switches = []
for i in range(5):
    switches.append(switch(i))

def record_switch():
    switch_record = {}
    while True:
        time.sleep(10)
        current = {}
        current['time'] = time.ctime()
        for i in switches:
            current[i.id] = i.is_on
        switch_record.update(current)

def toggle_switch():
    while True:
        time.sleep(random.random() * 100)
        for i in switches:
            i.toggle()

toggle = Thread(target=toggle_switch, args=())
record = Thread(target=record_switch, args=())
toggle.start()
record.start()
So as I understand it, the queue object can only be used to put and get values, which clearly won't work for me. Is what I have here "safe"? If not, how can I program this so that I can safely access a variable from multiple different threads?
Whenever you have threads modifying a value other threads can see, then you are going to have safety issues. The worry is that a thread will try to modify a value when another thread is in the middle of modifying it, which has risky and undefined behavior. So no, your switch-toggling code is not safe.
The important thing to know is that changing the value of a variable is not guaranteed to be atomic. If an action is atomic, it means that action will always happen in one uninterrupted step. (This differs very slightly from the database definition.) Changing a variable's value, especially a list value, can often take multiple steps at the processor level. When you are working with threads, all of those steps are not guaranteed to happen at once, before another thread starts working. It's entirely possible that thread A will be halfway through changing variable x when thread B suddenly takes over. Then if thread B tries to read variable x, it's not going to find a correct value. Even worse, if thread B tries to modify variable x while thread A is halfway through doing the same thing, bad things can happen. Whenever you have a variable whose value can change somehow, all accesses to it need to be made thread-safe.
If you're modifying variables instead of passing messages, you should be using a Lock object.
In your case, you'd have a global Lock object at the top:
from threading import Lock
switch_lock = Lock()
Then you would surround the critical piece of code with the acquire and release functions.
for i in switches:
    switch_lock.acquire()
    current[i.id] = i.is_on
    switch_lock.release()

for i in switches:
    switch_lock.acquire()
    i.toggle()
    switch_lock.release()
Only one thread may ever acquire a lock at a time (this kind of lock, anyway). When any of the other threads try, they'll be blocked and wait for the lock to become free again. So by putting locks around critical sections of code, you make it impossible for more than one thread to look at, or modify, a given switch at any time. You can put this around any bit of code you want to be kept exclusive to one thread at a time.
EDIT: as martineau pointed out, locks are integrated well with the with statement, if you're using a version of Python that has it. This has the added benefit of automatically unlocking if an exception happens. So instead of the above acquire and release system, you can just do this:
for i in switches:
    with switch_lock:
        i.toggle()
Can someone please tell me if the following is thread-safe or not, and if it isn't, what must I do to make it so?
Note: this is only a small sample; I'm not sure if it runs.
TIMER = True
time_lock = threading.Lock()

def timing():
    while TIMER:
        # some logic will be here; for now, print the time
        print time.time()

timer = threading.Thread(target=timing)
timer2 = threading.Thread(target=timing)
timer.start()
timer2.start()

while True:
    time_lock.acquire()
    if doSomeStuff():
        TIMER = True
    if otherThings():
        break
    time_lock.acquire()
    TIMER = False
    time_lock.release()

time_lock.acquire()
TIMER = False
time_lock.release()
It depends a bit on which implementation you are using, and what you are doing.
First, in the de facto standard implementation of python ("cpython"), only one thread is allowed to run at a time since some internals of the python interpreter aren't thread-safe. This is controlled by the Global Interpreter Lock (aka "GIL"). So in cpython, you don't really need extra locks at all; the GIL makes sure that only one thread at a time is running python code and possibly changing variables. This is a feature of the implementation, not of the language.
Second, if only one thread writes to a simple variable and others only read it you don't need a lock either, for obvious reasons. It is however up to you as the programmer to make sure that this is the case, and it is easy to make mistakes with that.
Even assigning to a simple variable might not need a lock. In Python, variables are more like labels used to refer to an object than boxes you can put something in. So simple assignments are atomic (in the sense that they cannot be interrupted halfway), as you can see when you look at the generated Python bytecode:
In [1]: import dis

In [2]: x = 7

In [3]: def setx():
   ...:     global x
   ...:     x = 12
   ...:

In [4]: dis.dis(setx)
  3           0 LOAD_CONST               1 (12)
              3 STORE_GLOBAL             0 (x)
              6 LOAD_CONST               0 (None)
              9 RETURN_VALUE
The only code that changes x is the single STORE_GLOBAL opcode. So the variable is either changed or it isn't; there is no inconsistent state in between.
But if you e.g. want to test a variable for a certain value, and perform an action while that test still holds true, you do need a lock. Because another thread could have changed the variable just after you tested it.
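A sketch of that test-then-act situation (the account-balance names are invented here): the check and the modification must sit inside the same lock, or another thread could change the variable between them.

```python
import threading

balance = 100
lock = threading.Lock()

def withdraw(amount):
    global balance
    with lock:                 # the test and the action form one atomic step
        if balance >= amount:
            balance -= amount
            return True
        return False

ok = withdraw(30)
```

If the `if balance >= amount` check were done outside the lock, two threads could both pass it and overdraw the balance.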
Things like appending to a list or swapping variables are not atomic. But again in cpython these would be protected by the GIL. In other implementations you'd need a lock around such operations to protect against possible inconsistencies.
If I understand you correctly, you want to signal your thread when to stop. Since this is a situation where only one thread writes to a shared variable and only once, you do not need a lock.
Locks are necessary when you are doing concurrent modification of a shared datastructure that cannot be read atomically.
I'm writing a threaded program in Python. This program is interrupted very frequently, by user interaction (Ctrl+C) and by other programs sending various signals, all of which should stop thread operation in various ways. The thread does a bunch of units of work (I call them "atoms") in sequence.
Each atom can be stopped quickly and safely, so making the thread itself stop is fairly trivial, but my question is: what is the "right", or canonical way to implement a stoppable thread, given stoppable, pseudo-atomic pieces of work to be done?
Should I poll a stop_at_next_check flag before each atom (example below)? Should I decorate each atom with something that does the flag-checking (basically the same as the example, but hidden in a decorator)? Or should I use some other technique I haven't thought of?
Example (simple stopped-flag checking):
class stoppable(Thread):
    stop_at_next_check = False
    current_atom = None

    def __init__(self):
        Thread.__init__(self)

    def do_atom(self, atom):
        if self.stop_at_next_check:
            return False
        self.current_atom = atom
        self.current_atom.do_work()
        return True

    def run(self):
        # get "work to be done" objects atom1, atom2, etc. from somewhere
        if not self.do_atom(atom1):
            return
        if not self.do_atom(atom2):
            return
        # ...etc

    def die(self):
        self.stop_at_next_check = True
        self.current_atom.stop()
Flag checking seems right, but you missed an occasion to simplify it by using a list for atoms. If you put atoms in a list, you can use a single for loop without needing a do_atom() method, and the problem of where to do the check solves itself.
def run(self):
    atoms = ...  # get atoms
    for atom in atoms:
        if self.stop_at_next_check:
            break
        self.current_atom = atom
        atom.do_work()
Create a "thread x should continue processing" flag, and when you're done with the thread, set the flag to false.
Killing a thread directly is considered bad form, because you might get a fractional chunk of work completed.
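A minimal sketch of that flag (the class and attribute names are made up): the run() loop checks the flag between chunks, so every chunk either completes or never starts.

```python
import threading

class StoppableWorker(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.keep_going = True      # "thread should continue processing"
        self.done = 0

    def run(self):
        # check the flag between chunks, never mid-chunk
        while self.keep_going and self.done < 1000:
            self.done += 1          # one complete chunk of work

w = StoppableWorker()
w.start()
w.join(timeout=1)     # here the work simply finishes on its own
w.keep_going = False  # this is how a signal handler would request a stop
w.join()
```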
A tad late, but I have created a small library, ants, that solves this problem. In your example, an atomic unit is represented by a worker.
Example
from ants import worker

@worker
def hello():
    print("hello world")

t = hello.start()
...
t.stop()
In the above example, hello() will run in a separate thread, being called in a while True: loop, thus spitting out "hello world" as fast as possible.
You can also have triggering events; e.g., replace hello.start() with hello.start(lambda: time.sleep(5)) and it will trigger every 5 seconds.
The library is very new, and work is ongoing on GitHub: https://github.com/fa1k3n/ants.git
Future work includes adding a colony for having several workers working on different parts of the same data; I'm also planning a queen for worker communication and control, like synchronization.