How do I execute two programs from Python at the same time? - python

This post explains how to launch a single external program from Python.
How shall I launch multiple programs (or threads) at the same time?
My intended application is a video slide show: I want to launch an image sequence player and a music player at the same time.
Thanks in advance.

subprocess.Popen doesn't block unless you explicitly ask it to by calling communicate on the returned object, so you can call it more than once to start more than one process.
If you do need to communicate with both sub-processes simultaneously (read their STDOUT, for instance), then invoke subprocess.Popen in separate threads. Each thread can manage a sub-process and communicate with it. Naturally, this leaves you to do all the synchronization, but that depends heavily on your specific application.
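For the slide-show use case in the question, a minimal sketch (the player commands feh and mpg123 are placeholders; substitute whatever image and music players you actually use):

```python
import subprocess

# Popen returns immediately, so both players run at the same time.
image_player = subprocess.Popen(["feh", "--slideshow-delay", "5", "slides/"])
music_player = subprocess.Popen(["mpg123", "soundtrack.mp3"])

# Optionally block until both have finished before the script exits.
image_player.wait()
music_player.wait()
```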

Related

Spawn a subprocess but kill it if main process gets killed

I am creating a program in Python that listens to various user interactions and logs them. I have these requirements/restrictions:
I need a separate process that sends those logs to a remote database every hour.
I can't do it in the current process because it blocks the UI.
If the main process stops, the background process should also stop.
I've been reading about subprocess, but I can't seem to find anything on how to stop both simultaneously. I need the equivalent of spawn_link, if anybody knows Erlang/Elixir.
Thanks!
To answer the question in the title (for visitors from Google): there are robust solutions on Linux and Windows using OS-specific APIs, and less robust but more portable psutil-based solutions.
To fix your specific problem (it is an XY problem): use a daemon thread instead of a process.
A thread lets you perform I/O without blocking the GUI, even if the GUI toolkit you've chosen doesn't provide an async I/O API such as tkinter's createfilehandler() or gtk's io_add_watch(); a sketch follows.
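A minimal sketch of the daemon-thread approach; the log buffer and the upload function are hypothetical stand-ins for your real code:

```python
import threading
import time

pending = []  # hypothetical in-memory log buffer, filled by the UI code

def collect_logs():
    logs = list(pending)
    del pending[:]
    return logs

def push_to_remote_db(logs):
    print("uploading %d log entries" % len(logs))  # stand-in for the real upload

def upload_loop(interval=3600):
    # Body of the daemon thread: flush pending logs every `interval` seconds.
    while True:
        time.sleep(interval)
        logs = collect_logs()
        if logs:
            push_to_remote_db(logs)

# daemon=True means the thread is killed automatically when the main
# process exits - the spawn_link-like "stop together" behaviour asked for.
threading.Thread(target=upload_loop, daemon=True).start()
```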

Is it possible to create a long running process in NodeJs

Is it possible to create a long running process in NodeJs to handle many background operations without interrupting the main thread; something like Celery in Python.
Hint: it's highly preferable to be able to manage that long-running process from outside the main process, in case it fails or needs to be restarted.
http://nodejs.org/api/child_process.html is the right API to create long-running processes; you will have complete control over the child processes (access to stdin/out/err, can send signals, etc.). This approach, however, requires that your Node process is the parent of those children. If you want a child to outlive the parent, take a look at options.detached during child creation (and the following child.unref()).
Please note, however, that Node.js is extremely well suited to avoiding such an architecture. Typically Node.js does all the background work in the main thread. I've been writing apps with lots of traffic (thousands of requests per second), with DB, Redis and RabbitMQ access all from the main thread and without any child processes - and it worked fine, as it should, thanks to Node's evented I/O system.
I generally use the child_process API only to launch separate executables (e.g. ffmpeg to transcode a video file); apart from such scenarios, separate processes are probably not what you want.
There is also the cluster API, which allows a single master to manage numerous worker processes, though I don't think it's what you're looking for, either.
You can create a child process to handle your background operations, and then use messages to pass data between the new process and your main thread.
http://nodejs.org/api/child_process.html
Update
It looks like you need a server-side queue, something like beanstalkd http://kr.github.io/beanstalkd/ plus https://www.npmjs.com/package/fivebeans.

User Input Python Script Executing Daemon

I am working on a web service that requires user-input Python code to be executed on my server (we have checks for code injection). I have to import a rather large module, so I would like to make sure that I am not starting up Python and importing the module from scratch each time something runs (it takes about 4-6s).
To do this I was planning to create a Python (3.2) daemon that imports the user-input code as a module, executes it, and then deletes/garbage-collects that module. I need to make sure that the module is completely gone from RAM, since this process will continue until the server is restarted. I have read a bunch of things that say this is a very difficult thing to do in Python.
What is the best way to do this? Would it be better to use exec to define a function with the user-input code (for variable scoping) and then execute that function and somehow remove the function? Or is there a better way to do this process that I have missed?
You could perhaps consider creating a pool of Python daemon processes?
Their purpose would be to serve one request and then die afterwards.
You would have to write a pool manager that ensures there are always X daemon processes waiting for an incoming request (X being the number of waiting daemon processes, depending on the required workload). The pool manager would have to observe the pool of daemon processes and start new instances every time a process finishes; a sketch follows.
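A rough sketch of such a pool manager using the multiprocessing module; the heavy import and the queue-based hand-off are assumptions about how requests arrive:

```python
import multiprocessing as mp
import time

def worker(request_queue):
    # Serve exactly one request, then exit. Because the whole process
    # dies, the OS reclaims everything the user code allocated - no need
    # to fight Python's garbage collector to unload the module.
    # import big_module  # hypothetical 4-6s import, paid while idle in the pool
    user_code = request_queue.get()
    exec(user_code, {})  # run the user code in a throwaway namespace

def pool_manager(request_queue, pool_size=4):
    workers = []
    while True:
        workers = [w for w in workers if w.is_alive()]  # reap finished workers
        while len(workers) < pool_size:  # top the pool back up to pool_size
            w = mp.Process(target=worker, args=(request_queue,))
            w.start()
            workers.append(w)
        time.sleep(0.5)  # simple polling; a real manager might join() with a timeout

if __name__ == "__main__":
    q = mp.Queue()
    q.put("print('hello from a pooled daemon process')")
    pool_manager(q)  # runs forever, keeping the pool topped up
```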

python running multiple instances

Hi, let's assume I have a simple program in Python. This program runs every five minutes through cron, but I don't know how to write it so that the program can run multiple processes of itself simultaneously. I want to speed things up...
I'd handle the forking and process control inside your main Python program. Let cron spawn only a single process, and let that process be a master for (possibly multiple) worker processes.
As for how you can create multiple workers, there's the threading module for multithreading and the multiprocessing module for multiprocessing. You can also keep your actual worker code in separate files and use the subprocess module.
Now that I think about it, maybe you should use supervisord to do the actual process control and simply write the actual work code.
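If the work splits into independent items, a minimal master/worker sketch with multiprocessing (process_item is a hypothetical stand-in for the real job):

```python
import multiprocessing

def process_item(item):
    return item * item  # hypothetical unit of work

if __name__ == "__main__":
    items = range(100)  # whatever the cron job needs to get through
    # cron starts this one master process; the pool fans the work out
    # to one worker per CPU core by default.
    with multiprocessing.Pool() as pool:
        results = pool.map(process_item, items)
    print(sum(results))
```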

Python: when to use pty.fork() versus os.fork()

I'm uncertain whether to use pty.fork() or os.fork() when spawning external background processes from my app. (Such as chess engines)
I want the spawned processes to die if the parent is killed, as with spawning apps in a terminal.
What are the ups and downs between the two forks?
The child process created with os.fork() inherits stdin/stdout/stderr from the parent process, while the child created with pty.fork() is connected to a new pseudo-terminal. You need the latter when you write a program like xterm: pty.fork() in the parent process returns a descriptor for the controlling terminal of the child process, so you can visually represent data from it and translate user actions into terminal input sequences.
Update:
From pty(7) man page:
A process that expects to be connected to a terminal, can open the slave end of a pseudo-terminal and then be driven by a program that has opened the master end. Anything that is written on the master end is provided to the process on the slave end as though it was input typed on a terminal. For example, writing the interrupt character (usually control-C) to the master device would cause an interrupt signal (SIGINT) to be generated for the foreground process group that is connected to the slave. Conversely, anything that is written to the slave end of the pseudo-terminal can be read by the process that is connected to the master end.
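A minimal sketch of the master/slave pattern quoted above; "bc -q" is just an example of an interactive, terminal-expecting program:

```python
import os
import pty

pid, master_fd = pty.fork()
if pid == 0:
    # Child: its stdin/stdout/stderr are the slave end of a fresh
    # pseudo-terminal, so the program behaves as if a user were typing.
    os.execvp("bc", ["bc", "-q"])
else:
    # Parent: drive the child through the master end.
    os.write(master_fd, b"2 + 2\n")
    print(os.read(master_fd, 1024).decode())  # echoed input, then "4" (may need a second read)
    os.write(master_fd, b"quit\n")
    os.waitpid(pid, 0)
```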
In the past I've always used the subprocess module for this. It provides a good API for communicating with subprocesses.
You can use call(*popenargs, **kwargs) for blocking execution, and the Popen class can handle async execution.
Check out the docs for more info.
As far as os.fork vs pty.fork goes, both are highly platform-dependent, and neither will work (or at least is tested) on Windows. Judging by the docs, the pty module seems to be the more constrained of the two, the main difference being the pseudo-terminal aspect. So if you aren't willing to architect your code in such a way as to be able to use the subprocess module, I'd probably go with os.fork instead of pty.fork.
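For example, a trivial sketch of the blocking vs. non-blocking difference with subprocess:

```python
import subprocess

# Blocking: call() waits for the command to finish and returns its exit code.
rc = subprocess.call(["ls", "-l"])

# Non-blocking: Popen returns immediately and the process runs in the background.
proc = subprocess.Popen(["sleep", "5"])
print("still running:", proc.poll() is None)
proc.wait()  # block later, once the result is actually needed
```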
Pseudoterminals are necessary for some applications that really expect a terminal. An interactive shell is one example, but there are many others. The pty.fork option is not there as another os.fork but as a specific API for using a pseudoterminal.
