Say I have a python script that pulls messages off a queue and processes them:
process_queue_emails.py
Now I want to somehow run multiple processes of this file at once. How would I do that? I need it to run in the background, and I'm guessing on a separate port? (This is on Ubuntu.)
So I want to write messages to the queue in my web application, and then I want these worker processes (the .py file above) to receive and respond to the messages in parallel, i.e. I need to run each of them in its own process.
The zdaemon module can be used to write daemonized processes. Or you can look into the multiprocessing module of Python. An alternative is using supervisord to start arbitrary scripts or programs as daemons.
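As a rough illustration of the supervisord route, a config section like the one below would keep four copies of the script running in the background and restart them if they die (the interpreter path, working directory and process count are assumptions, not something from the question):

[program:process_queue_emails]
command=/usr/bin/python /home/me/process_queue_emails.py
process_name=%(program_name)s_%(process_num)02d
numprocs=4
directory=/home/me
autostart=true
autorestart=true

The workers don't need separate ports: each one simply blocks on the queue, and the queue itself is what spreads the messages across them.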
Related
Two python scripts, A and B, have compatibility issues and need separate conda environments. Here's the scenario: when script A runs, it sends data to process B (script B is running in a different terminal), and process B returns the output to process A (process A cannot be put to sleep). I have been using pickle files for exchanging data between these two processes, but this method seems slow, and I would like to speed it up, which is necessary for my work.
make one program a child of the other using the subprocess module and have them communicate over stdin and stdout (fastest; note that you have to activate the other anaconda environment in the command that launches the child, and wrapping the pipes in an io.TextIOWrapper makes communication a lot easier, as easy as working with sockets). A sketch of this appears after this list.
have one application be a server attached to a socket on localhost, and have the other application be the client, using the socket module (most organized and scalable solution).
make a part of memory shared so that both applications can read from and write to it, using multiprocessing.shared_memory (requires proper synchronization, but can be faster than the first option for transferring GBs of data at a time).
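A minimal sketch of the first option, assuming script B is launched with env B's own interpreter and the two sides exchange one JSON document per line (the interpreter path, file names and message format are all illustrative assumptions):

# parent.py - runs under env A and talks to script B over pipes instead of pickle files
import json
import subprocess

proc = subprocess.Popen(
    ["/home/me/miniconda3/envs/envB/bin/python", "-u", "script_b.py"],  # hypothetical path to env B's python
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,  # text-mode pipes, so each message can be a single JSON line
)

def ask_b(payload):
    # send one request line to B and block until B answers with one line
    proc.stdin.write(json.dumps(payload) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

print(ask_b({"x": 21}))

Script B's side is symmetric: it loops over sys.stdin, decodes each line, does its work, and prints one JSON line back with flush=True.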
I am doing research that requires me to run multiple experiments with many permutations of parameters. My main issue is that I would like to free up my main machine for my daily tasks and offload my experiments to other machines (I have two extra laptops).
Currently I am using redis-rq to queue and run tasks on those "remote servers". Once redis is running on my main machine and on my remote servers, I simply run my code, which queues the tasks to the specific redis-rq port on my remote (via ssh). This seems to work fine, except that I need to make sure to push my code to my remote server beforehand, otherwise the task will fail; the remote will essentially have old code on it.
I have two questions:
Does this pattern make sense or is there a better way for me to offload tasks to remote servers?
Is there a way I can ensure the remote always has up-to-date code when I start queueing tasks? (I'm currently thinking of including a function in my code that copies the current directory to the remote via scp.)
Thanks for your help with this,
NB: my code is all in python and I would prefer to keep it that way
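A rough sketch of the idea in question 2 (the host name, paths and task function are assumptions): mirror the working copy to the worker machine with rsync right before enqueueing, so the remote never runs stale code.

import subprocess

from redis import Redis
from rq import Queue

REMOTE = "me@laptop1"            # hypothetical worker machine
REMOTE_DIR = "~/experiments/"    # hypothetical project path on the worker

def sync_code():
    # mirror the current directory onto the worker before queueing anything
    subprocess.run(["rsync", "-az", "--delete", "./", REMOTE + ":" + REMOTE_DIR], check=True)

sync_code()
queue = Queue(connection=Redis(host="laptop1", port=6379))
job = queue.enqueue("experiments.run_experiment", {"learning_rate": 0.01})

This assumes the rq worker on the laptop was started from the synced project directory, so the string reference to the task function resolves there.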
I have a separate process that I want to run alongside the python process I have managed by uWSGI. I wanted to use the attach-daemon option to start this process, but it seems that the bash command specified in attach-daemon does not get called until after the python process's app has started up. However, I need the process to be running before the python process starts up in order for everything to run correctly. Is there any way to specify the order in which things get started? It's not even necessary to me that I use attach-daemon, if there's a simpler way to initialize a set of managed processes in a defined order.
Use --lazy-apps; this way the app will be loaded by each worker after the master has been fully spawned (and its external daemons started).
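In an ini config that could look roughly like the following; the module name and daemon command are placeholders, only lazy-apps and attach-daemon are the options being discussed:

[uwsgi]
# hypothetical WSGI entry point and helper command, for illustration only
module = myapp:app
master = true
processes = 4
# with lazy-apps, workers import the app only after the master (and its attached daemons) are up
lazy-apps = true
attach-daemon = /usr/local/bin/my_background_service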
How to start an always on Python Interpreter on a server?
If bash starts multiple python programs, how can I run them on just one interpreter?
And how can I start a new interpreter after tracking the number of bash requests, say, so that after X requests to python programs a new interpreter starts?
EDIT: Not a copy of https://stackoverflow.com/questions/16372590/should-i-run-1000-python-scripts-at-once?rq=1
Requests may come pouring in sequentially
You cannot have new Python programs started through bash run on the same interpreter; each program will always have its own. If you want to limit the number of Python programs running, the best approach would be to have a Python daemon process running on your server, and instead of creating a new program through bash on each request you would signal the daemon process to create a thread to handle the task.
To run a program forever in python:
while True:
    do_work()
You could look at spawning threads for incoming requests. Look at the threading.Thread class.
from threading import Thread
task = Thread(target=do_work, args=())
task.start()
You probably want to take a look at http://docs.python.org/3/library/threading.html and http://docs.python.org/3/library/multiprocessing.html. Threading is more lightweight, but only allows one thread to execute Python code at a time (meaning it won't take advantage of multicore/hyperthreaded systems). Multiprocessing allows true simultaneous execution, but can be a bit less lightweight than threading on systems that don't use lightweight subprocesses, and it may not be necessary if the threads/processes spend most of their time on I/O requests.
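Putting the pieces above together, here is a minimal sketch of the daemon approach (the port number and the body of do_work() are assumptions): one long-lived interpreter listens on localhost and spawns a thread per request, so bash only sends a message instead of starting a new python process.

# daemon.py - a single long-running interpreter that handles requests in threads
import socket
from threading import Thread

def do_work(data):
    # placeholder for whatever the individual python programs used to do
    print("handling request:", data)

def serve(host="127.0.0.1", port=9999):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen()
    while True:
        conn, _ = server.accept()
        data = conn.recv(4096)
        conn.close()
        Thread(target=do_work, args=(data,)).start()  # one thread per request

if __name__ == "__main__":
    serve()

From bash, a request is then just something like echo "job 42" | nc 127.0.0.1 9999, and counting requests to decide when to start another daemon can be done inside serve().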
I'm hosting a Python script with Python for Delphi components inside my Delphi application. I'd like the script to create background tasks which keep running.
Is it possible to create threads which keep running even after the script execution ends (but not the host process, which keeps going)? I've noticed that the program gets stuck if the executing script ends while a thread is still running. However, if I wait until the thread has finished, everything works fine.
I'm trying to use the standard "threading" module for the threads.
Python has its own threading module that comes standard, if it helps; you can create thread objects with it.
threading Documentation
thread Documentation
The thread module offers low-level threading and synchronization using simple Lock objects.
Again, not sure if this helps since you're using Python under a Delphi environment.
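For example, a minimal sketch of threads synchronizing on a Lock (the shared counter and the loop are just placeholders):

import threading

lock = threading.Lock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        with lock:  # serialize access to the shared counter
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000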
If a process dies, all its threads die with it, so a solution might be a separate process.
See if creating an xmlrpc server might help you; it is a simple solution for interprocess communication.
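A minimal sketch of that idea using the standard library (the port and the start_task function are assumptions): the worker runs as its own process, so it keeps going no matter when each hosted script finishes.

# worker_server.py - run this as a separate process, outside the Delphi-hosted interpreter
from xmlrpc.server import SimpleXMLRPCServer

def start_task(name):
    # placeholder for kicking off a long-running background job
    print("starting background task:", name)
    return "started " + name

server = SimpleXMLRPCServer(("127.0.0.1", 8000), allow_none=True)
server.register_function(start_task)
server.serve_forever()

The hosted script then only needs xmlrpc.client.ServerProxy("http://127.0.0.1:8000/").start_task("emails") and can return to Delphi immediately.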
Threads by definition are part of the same process. If you want them to keep running, they need to be forked off into a new process; see os.fork() and friends.
You'll probably want the new process to end (via exit() or the like) immediately after spawning the script.
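A minimal sketch of the fork approach on POSIX systems (the job function is a placeholder): the child keeps working after the script that spawned it has returned to the host.

import os
import time

def long_running_job():
    # placeholder for the real background work
    time.sleep(60)

pid = os.fork()  # POSIX only; not available on Windows
if pid == 0:
    # child process: does the work, then exits on its own
    long_running_job()
    os._exit(0)
else:
    # parent (the hosted script): returns immediately, leaving the child running
    print("spawned background worker, pid", pid)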