Passing arguments to a constantly running Python application - python

I have a Python program that runs constantly on Linux; every so often, outside data needs to be fed into this program so it can alter a file.
How do I structure the Python code so that it receives these arguments for further processing?
I found some material on outgoing arguments, Running external program using pipes and passing arguments in python
But I'm looking for incoming arguments.
I'm flexible about how the arguments get passed in.

You need some kind of Inter Process Communication.
For example, you can feed the program's standard input. Your code can read it from sys.stdin, but this requires the program that started your process to hand its stdin handle to another process.
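For illustration, a minimal sketch of the stdin approach, treating each incoming line as one piece of data (the handler here is just a placeholder):

import sys

# Block on standard input; each line is one incoming "argument".
for line in sys.stdin:
    data = line.strip()
    if not data:
        continue
    # Placeholder handler: replace with whatever alters your file.
    print('received:', data)

Another process can then feed it through a pipe, e.g. echo "new value" | python yourscript.py for a one-shot, or through a named pipe (mkfifo) if feeders come and go.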
Another way is to create a socket of some kind. That's far more scalable, allows connecting to the program when it's running on another machine, and allows non-Python processes to easily communicate with your process.
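And a minimal sketch of the socket approach, assuming a one-message-per-connection TCP server on localhost (the port number and protocol are arbitrary choices here):

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('127.0.0.1', 5000))
server.listen(1)

while True:
    conn, _ = server.accept()
    with conn:
        data = conn.recv(4096).decode()
        # Placeholder handler: alter the file based on the received data.
        print('received:', data)

Any process, Python or not, can then deliver data with something as simple as printf 'new value' | nc 127.0.0.1 5000.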

Related

Pass Data from One .exe to Another

I'm relatively inexperienced with C++, but I need to build a framework to shuffle some data around. Not necessarily relevant, but the general flow path of my data needs to go like this:
1. Data is generated in a Python script
2. The Python object is passed to a compiled C++ extension
3. The C++ extension makes some changes and passes the data (presumably a pointer?) to compiled C++/CUDA code (an .exe)
4. The C++/CUDA .exe does stuff
5. The data is handed back to the Python script and sent to more Python functions
Step 3. is where I'm having trouble. How would I go about calling the .exe containing the CUDA code in a way that it can access the data that is seen in the C++ python extension? I assume I should be able to pass a pointer somehow, but I'm having trouble finding resources that explain how. I've seen references to creating shared memory, but I'm unclear on the details there, as well.
There are many ways two executables can exchange data.
Some examples:
write/read data to/from a shared file (don't forget locking so they don't stumble on each other).
use TCP or UDP sockets between the processes to exchange data.
use shared memory.
if one application starts the other you can pass data via command-line arguments or in the environment.
use pipes between the processes.
use Unix domain sockets between the processes.
And there are more options but the above are probably the most common ones.
What you need to research is IPC (Inter-Process Communication).
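To make the first option concrete, here is a minimal Python sketch of the shared-file approach with advisory locking (fcntl is Unix-only, and the file path is a placeholder; Windows would need msvcrt.locking instead):

import fcntl

# Writer process: take an exclusive lock before writing.
with open('/tmp/exchange.dat', 'w') as f:
    fcntl.flock(f, fcntl.LOCK_EX)
    f.write('payload from process A')
    fcntl.flock(f, fcntl.LOCK_UN)

# Reader process: take a shared lock while reading.
with open('/tmp/exchange.dat', 'r') as f:
    fcntl.flock(f, fcntl.LOCK_SH)
    data = f.read()
    fcntl.flock(f, fcntl.LOCK_UN)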

C++ open Python Shell process and send commands to it?

So we have this massive Python project responsible for talking to a piece of hardware.
Currently we open up a Python shell, import a bunch of the company's Python classes, and run a bunch of commands to initialize the hardware. In the shell we then execute a bunch of functions loaded from our Python API, passing in references to the initialized hardware we got from initialization.
We would like to be able to do the same thing via C++, and use it as a wrapper of sorts to send commands into a Python shell.
We can't just pass in scripts that initialize, get the hardware reference, run some functions, and exit, because the initialization part takes 5-10 seconds. We want to keep alive the Python instance that holds the variables for communicating with the initialized hardware, so we can initialize once and then send function after function at the hardware at a much faster rate. I'd also like to be able to get the output back to C++.
Hopefully that makes sense what we are trying to do and why, if not let me know.
You can extend Python with C++ fairly easily. Or you can run two processes and use inter-process communication to invoke methods and functionality in the long-lived Python process.
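If you take the IPC route, the Python side could be a long-lived command server wrapping your existing initialization. This is only a sketch; init_hardware, run_command, and the port are stand-ins for your actual API:

import socket

hw = init_hardware()  # hypothetical: your 5-10 second initialization, done once

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 9000))
server.listen(1)
conn, _ = server.accept()
for line in conn.makefile('r'):
    command = line.strip()
    result = run_command(hw, command)  # hypothetical dispatch into your Python API
    conn.sendall((str(result) + '\n').encode())

The C++ side then just connects to the socket, writes one command per line, and reads the result back, so initialization happens once no matter how many commands you send.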

How to write a system agnostic Python daemon/service? [duplicate]

I would like to have my Python program run in the background as a daemon, on either Windows or Unix. I see that the python-daemon package is for Unix only; is there a cross-platform alternative? If possible, I would like to keep the code as simple as I can.
In Windows it's called a "service" and you could implement it pretty easily e.g. with the win32serviceutil module, part of pywin32. Unfortunately the two "mental models" -- service vs daemon -- are very different in detail, even though they serve similar purposes, and I know of no Python facade that tries to unify them into a single framework.
This question is 6 years old, but I had the same problem, and the existing answers weren't cross-platform enough for my use case. Though Windows services are often used in similar ways as Unix daemons, at the end of the day they differ substantially, and "the devil's in the details". Long story short, I set out to try and find something that allows me to run the exact same application code on both Unix and Windows, while fulfilling the expectations for a well-behaved Unix daemon (which is better explained elsewhere) as best as possible on both platforms:
Close open file descriptors (typically all of them, but some applications may need to protect some descriptors from closure)
Change the working directory for the process to a suitable location to prevent "Directory Busy" errors
Change the file access creation mask (os.umask in the Python world)
Move the application into the background and make it dissociate itself from the initiating process
Completely divorce from the terminal, including redirecting STDIN, STDOUT, and STDERR to different streams (often DEVNULL), and prevent reacquisition of a controlling terminal
Handle signals, in particular, SIGTERM.
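For reference, a minimal Unix-only sketch of those steps (the classic double-fork recipe; it omits closing inherited descriptors and writing a PID file):

import os
import signal
import sys

def daemonize():
    # First fork: detach from the launching process.
    if os.fork() > 0:
        sys.exit(0)
    os.setsid()      # new session; drop the controlling terminal
    # Second fork: guarantee we can never reacquire a terminal.
    if os.fork() > 0:
        sys.exit(0)
    os.chdir('/')    # avoid "directory busy" errors
    os.umask(0o022)  # reset the file creation mask
    # Redirect the standard streams to /dev/null.
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)
    # Handle SIGTERM by exiting cleanly.
    signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))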
The fundamental problem with cross-platform daemonization is that Windows, as an operating system, really doesn't support the notion of a daemon: applications that start from a terminal (or in any other interactive context, including launching from Explorer, etc) will continue to run with a visible window, unless the controlling application (in this example, Python) has included a windowless GUI. Furthermore, Windows signal handling is woefully inadequate, and attempts to send signals to an independent Python process (as opposed to a subprocess, which would not survive terminal closure) will almost always result in the immediate exit of that Python process without any cleanup (no finally:, no atexit, no __del__, etc).
Windows services (though a viable alternative in many cases) were basically out of the question for me: they aren't cross-platform, and they're going to require code modification. pythonw.exe (a windowless version of Python that ships with all recent Windows Python binaries) is closer, but it still doesn't quite make the cut: in particular, it fails to improve the situation for signal handling, and you still cannot easily launch a pythonw.exe application from the terminal and interact with it during startup (for example, to deliver dynamic startup arguments to your script, say, perhaps, a password, file path, etc), before "daemonizing".
In the end, I settled on using subprocess.Popen with the creationflags=subprocess.CREATE_NEW_PROCESS_GROUP keyword to create an independent, windowless process:
import subprocess

# Launch the script under the windowless interpreter in its own
# process group, so it is independent of the launching terminal.
independent_process = subprocess.Popen(
    '/path/to/pythonw.exe /path/to/file.py',
    creationflags=subprocess.CREATE_NEW_PROCESS_GROUP
)
However, that still left me with the added challenge of startup communications and signal handling. Without going into a ton of detail, for the former, my strategy was:
pickle the important parts of the launching process' namespace
Store that in a tempfile
Add the path to that file in the daughter process' environment before launching
Extract and return the namespace from the "daemonization" function
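Roughly, the parent's side of that handoff might look like the following (the payload contents and the _DAEMON_STATE variable name are made up for illustration):

import os
import pickle
import subprocess
import tempfile

payload = {'my_arg1': 'value'}  # hypothetical: the namespace to hand off
fd, state_path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    pickle.dump(payload, f)

# Tell the daughter where to find its state via the environment.
env = dict(os.environ, _DAEMON_STATE=state_path)
subprocess.Popen(
    '/path/to/pythonw.exe /path/to/file.py',
    env=env,
    creationflags=subprocess.CREATE_NEW_PROCESS_GROUP
)

# The daughter's "daemonization" function then unpickles it:
#   with open(os.environ['_DAEMON_STATE'], 'rb') as f:
#       namespace = pickle.load(f)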
For signal handling I had to get a bit more creative. Within the "daemonized" process:
Ignore signals in the daemon process, since, as mentioned, they all terminate the process immediately and without cleanup
Create a new thread to manage signal handling
That thread launches daughter signal-handling processes and waits for them to complete
External applications send signals to the daughter signal-handling process, causing it to terminate and complete
Those processes then use the signal number as their return code
The signal handling thread reads the return code, and then calls either a user-defined signal handler, or uses a ctypes API to raise an appropriate exception within the Python main thread
Rinse and repeat for new signals
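That last step leans on CPython's PyThreadState_SetAsyncExc. A minimal sketch of raising an exception in the main thread from a watcher thread (CPython-specific; the SignalReceived exception type is made up here):

import ctypes
import threading

class SignalReceived(Exception):  # made-up exception type for illustration
    pass

def raise_in_main_thread(exc_type=SignalReceived):
    # CPython-specific: schedule exc_type to be raised in the main
    # thread the next time it executes bytecode.
    main_id = threading.main_thread().ident
    ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(main_id),
        ctypes.py_object(exc_type)
    )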
That all being said, for anyone encountering this problem in the future, I've rolled a library called daemoniker that wraps both proper Unix daemonization and the above Windows strategy into a unified facade. The cross-platform API looks like this:
from daemoniker import Daemonizer

with Daemonizer() as (is_setup, daemonizer):
    if is_setup:
        # This code is run before daemonization.
        do_things_here()

    # We need to explicitly pass resources to the daemon; other variables
    # may not be correct
    is_parent, my_arg1, my_arg2 = daemonizer(
        path_to_pid_file,
        my_arg1,
        my_arg2
    )

    if is_parent:
        # Run code in the parent after daemonization
        parent_only_code()

# We are now daemonized, and the parent just exited.
code_continues_here()
Two options come to mind:
Port your program to a Windows service. You can probably share much of your code between the two implementations.
Does your program really use any daemon functionality? If not, you could rewrite it as a simple server that runs in the background, manages communications through sockets, and performs its tasks. It will probably consume more system resources than a daemon would, but it would be quite platform independent.
In general the concept of a daemon is Unix specific, in particular expected behaviour with respect to file creation masks, process hierarchy, and signal handling.
You may find PEP 3143 useful, wherein a proposed continuation of python-daemon for Python 3.2 is considered, and many related daemonizing modules and implementations are discussed.
The reason it's Unix-only is that daemons are a Unix-specific concept, i.e. a background process initiated by the OS and usually running as a child of the root PID.
Windows has no direct equivalent of a Unix daemon; the closest thing I can think of is a Windows service.
There's a program called pythonservice.exe for Windows. I'm not sure it's supported on all versions of Python, though.

User Input Python Script Executing Daemon

I am working on a web service that requires user-submitted Python code to be executed on my server (we have checks for code injection). I have to import a rather large module, so I would like to make sure that I am not starting up Python and importing the module from scratch each time something runs (it takes about 4-6s).
To do this I was planning to create a Python (3.2) daemon that imports the user code as a module, executes it, and then deletes/garbage-collects that module. I need to make sure the module is completely gone from RAM, since this process will run until the server is restarted. I have read a bunch of things that say this is a very difficult thing to do in Python.
What is the best way to do this? Would it be better to use exec to define a function with the user input code (for variable scoping) and then execute that function and somehow remove the function? Or is there a better way to do this process that I have missed?
You could perhaps consider creating a pool of Python daemon processes?
Their purpose would be to serve one request each and to die afterwards.
You would have to write a pool manager that ensures there are always X daemon processes waiting for an incoming request (X being the number of waiting daemon processes, depending on the required workload). The pool manager would have to observe the pool of daemon processes and start new instances every time a process finishes, along the lines of the sketch below.
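A minimal sketch of such a pool manager using multiprocessing (POOL_SIZE and serve_one_request are placeholders; a real version would hand requests to workers via a queue or socket):

import multiprocessing
import time

POOL_SIZE = 4  # hypothetical number of waiting workers

def worker():
    # Serve exactly one request, then exit, so the user module's memory
    # is reclaimed by the operating system along with the process.
    serve_one_request()  # hypothetical: import and execute the user code

def main():
    pool = []
    while True:
        # Replace finished workers so POOL_SIZE processes are always waiting.
        pool = [p for p in pool if p.is_alive()]
        while len(pool) < POOL_SIZE:
            p = multiprocessing.Process(target=worker)
            p.start()
            pool.append(p)
        time.sleep(0.1)

if __name__ == '__main__':
    main()

Because each request is served by a throwaway process, the imported user module is guaranteed to be gone from RAM when the process exits, which sidesteps the module-unloading problem entirely; on Unix, forked workers also inherit the parent's already-imported large module, avoiding the 4-6s import each time.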

create subprocess object from already running process

I would like to create a subprocess.Popen object from an already running process... Is that possible somehow?
Another idea would be to serialize (pickle) the subprocess object and write it to a database, so that if the main process restarts it could get the subprocess.Popen objects back from the database. I'm unsure whether that works.
create a subprocess.Popen object from an already running process
Do you mean from an already running sub-process? The only way I know of to pass objects between processes is to pickle them and write them out either to a file or a database as you suggested.
Typically, sub-processes cannot be spawned from already running sub-processes, but you can keep a reference to the new process you want to create and spawn it from the main process. This could get really ugly, and I strongly suggest against it. Why, specifically, do you need to extend your process tree more than two levels deep? This info might lead to a better answer.
Assuming you want to communicate with the "subprocess" and must do so using its standard i/o streams, you could create a wrapper around the executable that maps its stdin/out/err to a socket or named pipe.
The program that intends to control the "subprocess" can then start and stop communications at any time. You may have to provide for a locking mechanism too.
Then, assuming you're on Linux, you can access the stdin/out/err of a running process through /proc/<pid>/fd/<0,1,2>. You won't connect these to a subprocess.Popen object, but open('/proc/<pid>/fd/1', 'rb') will behave like Popen().stdout.
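A minimal sketch of that, assuming Linux and a known PID (this works where the descriptor points to something you are permitted to open, such as a pipe or a regular file):

pid = 12345  # hypothetical PID of the already running process

# Behaves like Popen().stdout for the existing process (Linux only).
with open('/proc/%d/fd/1' % pid, 'rb') as stdout:
    for line in stdout:
        print(line.decode().rstrip())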
