Running the same Pexpect program simultaneously on the same server but in different terminals - python

I have a Python pexpect script, say program1.py, which logs in to one router per invocation using pexpect.spawn and performs the required operations with sendline() and expect().
If I run this same program from multiple prompts on my server, with each instance logging in to a different router, only one instance seems to get its expect() input while the other times out at child.expect() -> read_nonblocking().
Example:
In prompt-1 on my RHEL server I execute the program to log in to router X with these arguments:
bash$ python program1.py 10.11.12.13/2001 configure_MGMT
In prompt-2 on my RHEL server I execute the program to log in to router Y with these arguments:
bash$ python program1.py 20.20.12.13/2020 configure_MGMT
One of the programs runs successfully, while the other hits a TIMEOUT at the first child.expect() call.
Is this due to the GIL?
Is there a workaround for this?
(I wish to avoid multiprocessing here, because my webserver handles the multiprocessing aspect and executes the same program multiple times.)
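For illustration, the structure described above might look something like this minimal sketch (the telnet transport, prompt patterns, credentials and commands are all assumptions, not the real program1.py):
import sys
import pexpect

target, action = sys.argv[1], sys.argv[2]        # e.g. 10.11.12.13/2001 configure_MGMT
ip, port = target.split('/')

# Connect to the router's console port; the prompt patterns below are placeholders.
child = pexpect.spawn('telnet %s %s' % (ip, port), timeout=30)
child.expect('Username:')
child.sendline('admin')          # placeholder credentials
child.expect('Password:')
child.sendline('secret')
child.expect('#')                # wait for the router prompt
child.sendline('show version')   # 'action' would drive the real commands
child.expect('#')
child.close()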

The GIL has nothing to do with this, because independent processes do not share a GIL. The most likely cause of this is that your router only supports one log-in at a time. The easiest way to verify that this is the problem is to remove Python from the equation by manually logging in to the router from two different terminal sessions at the same time.

Found the answer to this. Since the programs were being executed synchronously rather than in parallel, pexpect would time out waiting for input while the process waited to get scheduled.
To run them all as background processes in parallel, append '&' to the end of the command line.
Example:
bash$ python program1.py 20.20.12.13/2020 configure_MGMT &
Thanks.

Python Subprocess in Multiple Terminals in VSCode

I'm using Python's subprocess to spawn new processes. The processes are independent of each other, and each outputs some data related to the account creation.
import subprocess
from time import sleep

for token in userToken:
    p = subprocess.Popen(['python3', 'create_account.py', token])
    sleep(1)
I'm trying to find a way to get the output of each of the Python scripts to run in the separate VSCode terminals to clearly see how the processes are running.
For example, in VSCode you can split the terminals as in the screenshot below. It would be great if each of the processes would have its own terminal window.
I've also checked that you can run tasks in VSCode in separate terminals, as described here. Is there a way to launch multiple subprocesses in separate terminals like that?
If that's not possible, is there another way I can run subprocess in multiple terminals in VSCode?
By default, VS Code runs Python code in a single terminal.
If you want to run Python code in two or more VS Code terminals separately, rather than sequentially, you can manually enter the run command in each terminal, for example:
The command to run the python file 'c.py': "..:/.../python.exe ..:/.../c.py".
Apart from manually entering the run command in two or more newly created terminals so the scripts execute at the same time, VS Code currently has no built-in support for this.
I have submitted a feature request for this on GitHub:
GitHub link: Can VSCode automatically run python scripts in two or more terminals at the same time?
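If external OS console windows are acceptable instead of VS Code's integrated terminals, one workaround is to give each child process its own console. A minimal sketch for Windows (CREATE_NEW_CONSOLE is Windows-only; on Linux you would launch a terminal emulator such as gnome-terminal instead, and the token list is a placeholder):
import subprocess
from time import sleep

userToken = ['tok1', 'tok2']   # placeholder for the real tokens

for token in userToken:
    # Each account-creation script gets its own console window (Windows only).
    subprocess.Popen(
        ['python', 'create_account.py', token],
        creationflags=subprocess.CREATE_NEW_CONSOLE,
    )
    sleep(1)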

Threads not being executed under supervisord

I am working on a basic crawler which crawls 5 websites concurrently using threads.
For each site it creates a new thread. When I run the program from the shell, the output log indicates that all 5 threads run as expected.
But when I run this program under supervisord, the log indicates that only 2 threads are being run every time! The log shows that all 5 threads have started, but only the same two of them are actually executing and the rest get stuck.
I cannot understand why this inconsistency happens between running it from a shell and running it under supervisord. Is there something I am not taking into account?
Here is the code which creates the threads:
for sid in entries:
    url = entries[sid]
    threading.Thread(target=self.crawl_loop,
                     args=(sid, url)).start()
UPDATES:
As suggested by tdelaney in the comments, I changed the working directory in the supervisord configuration and now all the threads run as expected. I still don't understand why setting the working directory to the crawler file's directory fixes the issue, though. Perhaps someone who knows how supervisord manages processes can explain?
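For illustration, a supervisord program section with an explicit working directory might look roughly like this (the program name and paths are hypothetical):
[program:crawler]
command=python /opt/crawler/crawler.py
directory=/opt/crawler
autostart=true
autorestart=true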
AFAIK Python threads can't run truly in parallel because of the Global Interpreter Lock; threading only gives you the appearance of simultaneous execution, and your code will still use one core only.
https://wiki.python.org/moin/GlobalInterpreterLock
https://en.wikibooks.org/wiki/Python_Programming/Threading
Therefore it is possible that it does not actually run your threads in parallel. You could use multiprocessing instead:
https://docs.python.org/2/library/multiprocessing.html
I was having the same silent problem, but then realised that I was setting daemon to True, which was causing problems under supervisor.
https://docs.python.org/2/library/threading.html#threading.Thread.daemon
So the answer is: daemon = True when running the script yourself, False when running under supervisor.
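In code, that distinction looks something like this sketch (crawl_loop and entries stand in for the crawler from the question):
import threading

entries = {"site1": "http://example.com"}   # placeholder for the real site list

def crawl_loop(sid, url):
    pass   # the actual crawling work

for sid, url in entries.items():
    t = threading.Thread(target=crawl_loop, args=(sid, url))
    t.daemon = False   # non-daemon threads keep the process alive under supervisord
    t.start()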
Just to say, I was experiencing a very similar problem.
In my case, I was working on a low powered machine (RaspberryPi), with threads that were dedicated to listening to a serial device (an Arduino nano on /dev/ttyUSB0). Code worked perfectly on the command line - but the serial reading thread stalled under supervisor.
After a bit of hacking around (and trying all of the options here), I tried running python in unbuffered mode and managed to solve the issue! I got the idea from https://stackoverflow.com/a/17961520/741316.
In essence, I simply invoked python with the -u flag.

Spawn a subprocess in foreground, even after python exits

Assume there exists a python script resolve_ip.py which magically returns the string IP of a machine we care about.
I'm interested in learning how to achieve the python equivalent of the following bash command:
user#host:~$ ssh $(./resolve_ip.py)
In this bash example, after the python script runs, it is replaced, or rather substituted, with its return value, which is in turn provided as an input to the program ssh. The result is that the python program terminates and the user interacts with the ssh session.
The problem is that this solution requires either two scripts (a bash script to run ssh combined with the python script that returns the argument) or human intervention to type the bash command directly, as formatted above.
Question:
Is there a way, only using python, to start/fork an interactive service (like ssh), by using subprocess or some comparable module, and have this forked child process remain alive in the foreground, even after its parent (python) has sys.exit()'d?
Considerations:
In this case, the point of this question is not to script the submission of ssh credentials to ssh from within Python. It is how to spawn a subprocess which continues to run in the foreground after its parent/spawner terminates.
EDIT:
The following resources are related:
Run a program from python, and have it continue to run after the script is killed
how to spawn new independent process in python
Indefinite daemonized process spawning in Python
Python spawn off a child subprocess, detach, and exit
I think you want to "exec". Like this:
import resolve_ip
import os
host = resolve_ip.get_host_to_use() # you figure this part out
os.execlp('ssh', 'ssh', host)
That will replace the Python interpreter process image with the ssh process image, and keep running. So you will end up as if you had run ssh instead of Python, and the interpreter will no longer execute at all.
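If resolve_ip.py has to remain a standalone script rather than an importable module, the same exec trick can be combined with subprocess to capture its output first (a sketch; it assumes the script prints exactly one address):
import os
import subprocess

# Run the helper script and capture the IP address it prints.
host = subprocess.check_output(['python', './resolve_ip.py']).decode().strip()

# Replace the current Python process with ssh; the session stays in the foreground.
os.execlp('ssh', 'ssh', host)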

Python: How to Run multiple programs on same interpreter

How to start an always on Python Interpreter on a server?
If bash starts multiple python programs, how can I run them all on just one interpreter?
And how can I start a new interpreter after tracking the number of bash requests, say, after every X requests to python programs a new interpreter should start?
EDIT: Not a copy of https://stackoverflow.com/questions/16372590/should-i-run-1000-python-scripts-at-once?rq=1
Requests may come pouring in sequentially
You cannot have new Python programs started through bash run on the same interpreter; each program will always get its own. If you want to limit the number of Python programs running, the best approach would be to have a Python daemon process running on your server and, instead of creating a new program through bash on each request, signal the daemon process to create a thread to handle the task.
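A minimal sketch of that kind of daemon, assuming requests arrive over a local TCP socket (the port and the handle_task body are placeholders):
import socketserver
import threading

def handle_task(payload):
    print("handling:", payload)   # placeholder for the real work

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        payload = self.rfile.readline().strip().decode()
        # Hand the task off to its own thread so the daemon keeps accepting requests.
        threading.Thread(target=handle_task, args=(payload,)).start()

if __name__ == "__main__":
    server = socketserver.TCPServer(("127.0.0.1", 9000), Handler)
    server.serve_forever()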
To run a program forever in python:
while True:
    do_work()
You could look at spawning a thread for each incoming request; see the threading.Thread class.
from threading import Thread
task = Thread(target=do_work, args=())
task.start()
You probably want to take a look at http://docs.python.org/3/library/threading.html and http://docs.python.org/3/library/multiprocessing.html. Threading is more lightweight but only allows one thread to execute Python code at a time (meaning it won't take advantage of multicore/hyperthreaded systems), while multiprocessing allows true simultaneous execution but is somewhat heavier; it may also be unnecessary if the threads/processes spend most of their time waiting on I/O.
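For completeness, a tiny multiprocessing sketch of the same idea (do_work and the request list are placeholders):
from multiprocessing import Pool

def do_work(request):
    return request * 2   # placeholder for the real task

if __name__ == "__main__":
    requests = [1, 2, 3, 4]
    with Pool(processes=4) as pool:
        print(pool.map(do_work, requests))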

python subprocess: raising an error when a process prompts

I have a python script that runs on a server after hours and invokes many shell subprocesses. None of the programs that are called should be prompting, but sometimes it happens and the script hangs, waiting for input until the user (me) notices and gets angry. :)
Tried: Using p.communicate() with stdin=PIPE, as written in the python subprocess documentation.
Running: Ubuntu 10.10, Python 2.6
I don't want to respond to the prompts, I want the script to raise an error and continue. Any thoughts?
Thanks,
Alexander.
As a catch-all solution to any problems in subprocesses I'd recommend using timeouts for all shell calls. There's no built-in timeout support in subprocess module calls, so you need to use signals. See details here: Using module 'subprocess' with timeout
You need a time-out while waiting for your tasks to complete and then have your script kill or terminate the process (in addition to raising the error).
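A minimal sketch of such a signal-based timeout (the helper name and 60-second default are placeholders; SIGALRM is Unix-only):
import signal
import subprocess

class Timeout(Exception):
    pass

def _raise_timeout(signum, frame):
    raise Timeout()

def run_with_timeout(cmd, seconds=60):
    # Run a command, but give up if it hangs (e.g. waiting on a prompt).
    signal.signal(signal.SIGALRM, _raise_timeout)
    signal.alarm(seconds)
    p = subprocess.Popen(cmd)
    try:
        return p.wait()
    except Timeout:
        p.kill()
        raise RuntimeError("%r timed out after %d seconds" % (cmd, seconds))
    finally:
        signal.alarm(0)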
Pexpect is a Python tool for dealing with subprocesses that may generate output (and may need input as a result). It will help you easily deal with the various cases, including managing timeouts.
See: http://www.noah.org/wiki/pexpect
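For example, a sketch of catching an unexpected prompt or hang with pexpect (the command and prompt pattern are hypothetical):
import pexpect

child = pexpect.spawn('/usr/local/bin/nightly_job.sh')   # hypothetical command
index = child.expect([pexpect.EOF, pexpect.TIMEOUT, '[Pp]assword:'], timeout=120)
if index != 0:
    # The job either hung or asked for input; kill it and raise instead of waiting forever.
    child.close(force=True)
    raise RuntimeError('subprocess prompted or timed out instead of finishing')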
