manually trigger a new thread in python - python

I am trying to implement a program that will be able to execute 2 functions asynchronously, i.e. let each function be triggered regardless of whether the other one is running and how long it's been running for.
I know Python supports threading, but all the examples I have seen call the functions from within the script, so there is a predetermined order and time separation between them.
My question is how to get past that and trigger the functions myself whenever I am ready.
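For what it's worth, Python threads can be started at any moment rather than in a fixed sequence; here is a minimal sketch (the function names and delays are made up for illustration):

```python
import threading
import time

results = []

def task(name, delay):
    """A worker that can be launched at any moment, independently of the other."""
    time.sleep(delay)
    results.append(name)

# Threads are triggered whenever you decide, not in a predetermined order:
t1 = threading.Thread(target=task, args=("slow", 0.2))
t1.start()

time.sleep(0.05)  # ...any amount of unrelated work can happen in between...

t2 = threading.Thread(target=task, args=("fast", 0.0))
t2.start()

t1.join()
t2.join()
print(results)  # "fast" finishes first even though it was started later
```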

Related

Cancelling plpython runtime in postgresql

I have a PostgreSQL function that wraps around a plpython3u function.
Now every time I run that function and want to cancel it mid-run, it only stops once the Python function finishes, which defeats the whole purpose of cancelling. This is especially problematic if the Python function goes into a never-ending loop.
Is there a way to force cancellation on the python function, or any setup that makes it listen to the cancellation command?
(I'm using DBeaver)
Thank you.

How to keep python running to respond to callback being called

I am trying to write a python script which registers a function as a listener for certain events. However, by the time the listener callback function gets called, the python process has ended, since it is just a short script. What is the best way to keep the python process running so that the callback function can be run when it gets an update?
In case it is relevant, I am trying to do this to get state updates from a drone running ardupilot. Whenever the drone's attitude changes, I want my callback function to be run.
Thanks in advance for any help!
You can achieve that with a while loop that keeps calling your listener function for as long as your chosen exit criteria evaluate to true.
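As a rough sketch of that idea (poll_attitude here is a made-up stand-in for whatever ardupilot API you are actually using):

```python
import time

updates = []

def poll_attitude():
    """Stand-in for the real drone query; replace with your ardupilot call."""
    return {"roll": len(updates)}

def on_attitude_change(attitude):
    updates.append(attitude)

# Keep the process alive, dispatching to the callback until an exit criterion.
while len(updates) < 3:          # your real stop condition goes here
    on_attitude_change(poll_attitude())
    time.sleep(0.01)             # avoid burning CPU between polls
```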

Pause Execution in Python

I am implementing a Python plugin that is part of a larger C++ program. The goal of this program is to allow the user to input a command's actions in Python. It currently receives a string from the C++ function and runs it via the exec() function. The user can then use an API to affect changes on the larger C++ program.
The current feature I am working on is a pause-execution feature. It needs to remember where it is in the code execution as well as the state of any local variables, and resume execution once a condition has been met. I am not very familiar with Python, and I would like some advice on how to implement this feature. My first design ideas:
1) Using the yield command.
This seemed like a good idea at the start, since when you use the next command it remembers everything I needed it to, but the problem is that yield only returns to the previous level in the call stack as far as I can tell. So if the user calls a function that yields, it will simply return to the user's code, and not to the larger C++ program. As far as I can tell there isn't a way to propagate the yield command up the stack?
2) Threading
Create a main Python thread that spawns a thread for each command executed and kills it when it is done. If a command needs to be suspended and restarted, it could do so through a queue of locks.
Those were the only two options I came up with. I am not sure the yield function would work or is what it was designed to do. I think the Threading approach would work but might be overkill, and take a long time to develop. I also was looking for some sort of Task Module in Python, but couldn't find exactly what I was looking for. I was wondering if anyone has any other suggestions as I am not very familiar with Python.
EDIT: As mentioned in the comments, I did not explain what needs to happen when the script "pauses". The Python plugin needs to allow the C++ program to continue execution. In my mind this means either A) returning, if we are talking about a single-threaded approach, or B) sending a message (function call?) to C++.
EDIT EDIT: As stated, I didn't fully explain the problem. I will make another post with a better statement of what currently exists and what needs to happen, as well as some pseudocode. I am new to Stack Overflow, so if this is not the appropriate response please let me know.
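For what option 2 could look like, here is a rough sketch using threading.Event instead of a queue of locks (my substitution, since Event gives pause/resume almost for free); command_body stands in for the user's exec()'d code:

```python
import threading
import time

resume = threading.Event()
resume.set()            # start in the unpaused state
log = []

def command_body():
    """The user's command, run in its own thread; it checkpoints at safe points."""
    for step in range(4):
        resume.wait()   # blocks here while paused, keeping all local state
        log.append(step)
        time.sleep(0.01)

worker = threading.Thread(target=command_body)
worker.start()

time.sleep(0.015)
resume.clear()          # "pause": the worker stops at its next checkpoint
time.sleep(0.05)        # the C++ side would keep running here
resume.set()            # "resume": the worker continues with its locals intact
worker.join()
print(log)              # all four steps complete, in order
```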
Whenever a signal is sent in Python, execution is immediately paused until whatever signal handler function is being used is finished executing; at that point, the execution continues right where it left off. My suggestion would be to use one of the user-defined signals (signal.SIGUSR1 and signal.SIGUSR2). Take a look at the signal documentation here:
https://docs.python.org/2/library/signal.html
At the beginning of the program, you'd define a signal handler function like so:
import signal

def signal_pause(signum, frame):
    if signum == signal.SIGUSR1:
        # Do your pause here - function processing, etc.
        pass
    else:
        pass
Then in the main program somewhere, you'll switch out the default signal handler for the one you just created:
signal.signal(signal.SIGUSR1, signal_pause)
And finally, whenever you want to pause, you'll send the SIGUSR1 signal like so:
os.kill(os.getpid(),signal.SIGUSR1)
Your code will immediately pause, saving its state, and head to the signal_pause function to do whatever you need to do. Once that function exits, normal program execution will resume.
EDIT: this assumes you want to do something sophisticated while you're pausing the program. If all you want to do is wait a few seconds or ask for some user input, there are some much easier ways (time.sleep or input respectively).
EDIT EDIT: this assumes you're on a Unix system.
If you need to communicate with a C program, then sockets are probably the way to go.
https://docs.python.org/2/library/socket.html
One of your two programs acts as the socket server, and the other connects to it as the socket client. When you want the C++ program to continue, you use socket.send() to transmit a continue message. Then your Python program would use socket.recv(), which will cause it to wait around until it receives a message back from the C++ program.
If you need two programs to send signals to each other, this is probably the safest way to go about it.
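A minimal sketch of that handshake, with both ends in Python for brevity (the real client would be your C++ program, and the message contents here are made up):

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 0  # port 0: let the OS pick a free port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)
port = server.getsockname()[1]

def cpp_side():
    """Stand-in for the C++ program; in reality this would be C++ socket code."""
    client = socket.create_connection((HOST, port))
    client.recv(1024)            # wait for the "continue" message
    client.sendall(b"done")      # reply once its work is finished
    client.close()

t = threading.Thread(target=cpp_side)
t.start()

conn, _ = server.accept()
conn.sendall(b"continue")        # tell the other program to proceed
reply = conn.recv(1024)          # block until it answers back
conn.close()
server.close()
t.join()
print(reply)  # b'done'
```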

using python sched module and enterabs to make a function run at a certain time

I cannot seem to find a simple example of how to schedule an event in Python.
I want to be able to pass a date and time string as an argument into a function.
For example:
For example, the string "m/d/Y HH:MM" would set the time at which a future function runs, after the code has been executed - like a function that is waiting to go off after I run it.
It seems like the main problem is formatting the string correctly, but a simple example would really help to see how to 'schedule' a function to run.
You don't give enough context to understand what you are trying to do in a larger frame - but, generally speaking, "this is not how it works" in Python.
An "ordinary" Python program is a single-threaded, synchronous program - it will run one task, after another, after another; when everything is done, the program exits, and the interpreter exits along with it.
so, something along these lines (with a fictitious "schedule" function):
def main():
    print("Hello World")
    schedule(60, main)

main()
would not work in Python, if the call to schedule would return immediately - the main function would exit, and the program would try to resume after the main() call, and terminate. There needs to be a piece of code left running, which can count time, and delays, maybe receive network or user generated events, and dispatch them to previously arranged callback functions in order for a program to keep running.
Such a piece of code, which can account for time and dispatch calls, is usually called a "reactor" - and there is none running in a plain Python program. Unlike, say, in a JavaScript program, where the browser, or other JavaScript environment provides such hosting by default.
That is why most Python web or network frameworks and all GUI toolkits provide such a core - it is usually invoked at the end of one's main script via a method or function named mainloop, serve_forever, start, and so on. From that point on, your main script, which has set up the appropriate callbacks, scheduled things, and so on, stops - the reactor will be the piece of code calling things.
That is where I say your question misses the context of what you want to do: at first you just want to test some scheduling - but afterwards you will want that inside a larger system, and that system should be built using an appropriate framework for your real task at hand: for example Django, Tornado, or Pyramid if it is a web-server system; GTK, Qt, or Tk if it is a GUI program; PyOgre, Kivy, or pyglet if it is a multimedia program; Twisted for a generic network server of another protocol; or something else, like Celery or Kamaelia - these are only general examples.
That said, Python's standard library does offer a "generic" scheduler function - it implements such a loop, with the bare core of that functionality. If you are doing nothing else and nothing fancy, it will block until it reaches the time to call your scheduled function, at which point it will exit and return control to your main program. If your called function schedules other things, it will keep running, and so on.
See the documentation and example at:
http://docs.python.org/2/library/sched.html
You can use functions from the datetime module instead of time.time to set the absolute timings you are asking for. Also check the documentation there for threading.Timer - which, in a naive way, can do more or less what you have in mind if you want to run a simple function after a given delay, in parallel to whatever other code is running, and don't want to rewrite your application to be event-based. But simple as it may seem, it will have many drawbacks in a larger system - you should pick one of the frameworks listed.
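Tying it back to the question, here is a small Python 3 sketch that parses a date-and-time string (with seconds added to the question's format so the demo fires quickly) and uses sched.enterabs to run a function at that absolute moment:

```python
import sched
import time
from datetime import datetime, timedelta

fired = []

def alarm(message):
    fired.append(message)

def schedule_at(when_string, fmt="%m/%d/%Y %H:%M:%S"):
    """Parse a date/time string and schedule alarm() for that absolute moment."""
    when = datetime.strptime(when_string, fmt)
    scheduler = sched.scheduler(time.time, time.sleep)
    scheduler.enterabs(when.timestamp(), 1, alarm, argument=("it is time",))
    scheduler.run()  # blocks until the scheduled time, then calls alarm()

# Demo: a moment one second from now, formatted the way the question describes.
target = datetime.now() + timedelta(seconds=1)
schedule_at(target.strftime("%m/%d/%Y %H:%M:%S"))
print(fired)
```

Note that scheduler.run() blocks the main thread until the event fires - which is exactly the "reactor taking over" behavior described above.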

communication with long running tasks in python

I have a python GUI app that uses a long running function from a .so/.dll it calls through ctypes.
I'm looking for a way to communicate with the function while it's running in a separate thread or process, so that I can request it to terminate early (which requires some work on the C side before returning a partial result). I suppose this will need some kind of signal receiving or reading from pipes, but I want to keep it as simple as possible.
What would you consider the best approach to solve this kind of problem? I am able to change the code on both the python and C sides.
There are two parts you'll need to answer here: one is how to communicate between the two processes (your GUI and the process executing the function), and the other is how to change your function so it responds to asynchronous requests ("oh, I've been told to just return whatever I've got").
Working out the answer to the second question will probably dictate the answer to the first. You could do it by signals (in which case you get a signal handler that gets control of the process, can look for more detailed instructions elsewhere, and change your internal data structures before returning control to your function), or you could have your function monitor a control interface for commands (every millisecond, check to see if there's a command waiting, and if there is, see what it is).
In the first case, you'd want ANSI C signal handling (signal(), sighandler_t), in the second you'd probably want a pipe or similar (pipe() and select()).
You mention that you can change both the C and Python sides. To avoid having to write any sockets or signal code in C, it might be easiest to break up the large C function into 3 smaller separate functions that perform setup, a small parcel of work, and cleanup. The work parcel should be between about 1 ms and 1 second run time to strike a balance between responsiveness and low overhead. It can be tough to break up calculations into even chunks like this in the face of changing data sizes, but you would have the same challenge in a single big function that also did I/O.
Write a worker process in Python that calls those 3 functions through ctypes. Have the worker process check a Queue object for a message from the GUI to stop the calculation early. Make sure to use the non-blocking Queue.get_nowait call instead of Queue.get. If the worker process finds a message to quit early, call the C cleanup code and return the partial result.
If you're on *nix register a signal handler for SIGUSR1 or SIGINT in your C program then from Python use os.kill to send the signal.
You said it: signals and pipes.
It doesn't have to be too complex, but it will be a heck of a lot easier if you use an existing structure than if you try to roll your own.
