I have two Python scripts that have to run simultaneously because they interact with each other. One script is a 'server' script running locally, and the other is a client script that connects to it via a socket. Normally I just open a couple of terminal tabs and run the server script in one and the client in the other. After starting and stopping each script over and over, I wanted to make a bash alias to run both scripts with just one command, and came up with this:
gnome-terminal --tab -e "python server.py" --tab -e "python client.py"
However, now the server script raises an SQLite OperationalError saying that one of my data tables doesn't exist. But when I run the scripts manually, everything works fine. I have no clue what is going on, but I thought that maybe running the scripts together wasn't giving the server script enough time to initialize and make its connection to the database. So I put a time.sleep(5) in the client script, but as soon as it starts I get the same error.
Anyone have an idea what could be happening? Or does anyone know of any alternatives for starting two python scripts with one command?
Try combining the two commands into one:
gnome-terminal --tab -x bash -c "python server.py & sleep 5; python client.py"
I think it is better to put the sleep command (if needed) outside the client, since there may be situations where the server is already started and the client does not have to sleep.
The -x flag means
-x, --execute
Execute the remainder of the command line inside the terminal.
The command calls bash:
bash -c "python server.py & sleep 5; python client.py"
bash in turn, has a -c flag which means
-c string   If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
You might want to experiment with
gnome-terminal --tab -e "python server.py & sleep 5; python client.py"
That might work too. When you run bash explicitly, your ~/.bashrc is read; without calling bash, I think /bin/sh is used by default.
If you get
"socket.error: [Errno 98] Address already in use",
it probably means that your server has already been started, and running the server a second time fails.
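A more robust alternative to a fixed sleep is to have the client retry the connection until the server is ready. A minimal sketch (the host and port are assumptions; use whatever server.py actually listens on):

import socket
import time

HOST, PORT = "127.0.0.1", 5000  # assumed values, adjust to match server.py

def connect_with_retry(timeout=30.0, delay=0.5):
    # Keep retrying until the server accepts, instead of sleeping a fixed time.
    deadline = time.monotonic() + timeout
    while True:
        try:
            return socket.create_connection((HOST, PORT), timeout=delay)
        except OSError:
            if time.monotonic() >= deadline:
                raise
            time.sleep(delay)

sock = connect_with_retry()

This way the client works whether the server took half a second or five seconds to come up, and also when the server was already running.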
Related
This is a long bash script (400+ lines) that is originally invoked from a Django app like so:
os.system('./bash_script.sh &> bash_log.log')
It stops on a random command in the script. If the order of commands is changed, it hangs on another command in approx. the same location.
SSHing into the machine that runs the Django app and running sudo ./bash_script.sh asks for a password and then runs all the way through.
I can't see the message it shows when it hangs; I couldn't make it redirect to the log file. I assume it's a sudo password prompt.
Things I've tried:
sudo -v in the script: didn't help.
SSHing to the machine and manually extending the sudo timeout in /etc/sudoers: didn't help, I think because the Django app was already running and uses the previous timeout.
Splitting the script in two and running one part in a separate thread, like so:
import os
from subprocess import Popen
from threading import Thread

def basher(command, log_path):
    # Open the log for writing; a bare open() defaults to read mode.
    with open(log_path, 'w') as log:
        Popen(command, stdout=log, stderr=log).wait()

script_thread = Thread(target=basher, args=('./bash_script_pt1.sh', 'bash_log_pt1.log'))
script_thread.start()
os.system('./bash_script_pt2.sh &> bash_log_pt2.log')  # I know the docs prefer subprocess, not sure if it's better in this case
script_thread.join()
The logs showed that part 1 ended ok, but part 2 still hangs, albeit later in the code than when they were together.
I thought of editing /etc/sudoers from inside the Python code and then re-logging in via su - user. There are snippets showing how to pass the password using pty, but I don't understand the mechanics of it and could not get it to work.
I also noticed that ps aux | grep bash_script.sh shows that the script is being run twice: as
/bin/bash bash_script.sh
and as
sh -c bash_script.sh.
I assume os.system has an internal shell=True going on.
I don't understand the Linux entities/mechanics in play to figure out what's happening.
My guess is that the Django app has different, more limited permissions than the script itself does, and the script inherits those restrictions because it is being executed by the app.
You need to find out what permissions the script has when you run it directly from bash, what it has when you run it via Django, and then figure out what the difference is.
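A quick way to compare the two environments is to log who the script runs as. A minimal sketch (put it at the top of the script via a small Python helper, or in the Django view that launches it):

import getpass
import os

# Record identity and environment so the bash-invoked run
# can be diffed against the Django-invoked run.
print("user:", getpass.getuser(), "uid:", os.getuid(), "euid:", os.geteuid())
print("cwd:", os.getcwd())
print("PATH:", os.environ.get("PATH"))

Diffing this output between the two invocations should show whether it is the user, the working directory, or the environment that differs.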
Not able to send commands to the shell I logged into
Originally, I wrote a Python script. It was able to send commands like
subprocess.run(['kubectl', 'config', 'get-context'])
but when it came time to get to the child shell (in this case bash), the command wouldn't run until I exited that shell, and it would say things like it couldn't find the command.
I then tried to do it with the "sh" module, but was also unsuccessful.
I thought maybe using Python was the problem, and I also realized my ultimate goal was to use a different shell (cypher-shell), so I skipped straight to that with bash as the parent shell. There I have a line that is sometimes successful, sometimes not:
kubectl run -it --rm cypher-shell --image=gcr.io/cloud-marketplace/neo4j-public/causal-cluster-k8s:3.4 --restart=Never --namespace=default --command -- ./bin/cypher-shell -u neo4j -p "password" -a "domain.name"
But even when it successfully logs in, it just hangs until I manually exit, and then it runs the next commands.
Note: I saw this and so, perhaps, it's not a child shell? Run shell command from child shell
I can't say I know exactly what you are doing, but if I understand your objective correctly, you want the Python program to continue logging while the script continues to run? The problem is that the logger keeps running and holds up your program. The way I would deal with that is to run the logger as a background process.
With bash, that would be ./script.sh &, which makes it run without holding back the rest of the program.
Hopefully that may give you an idea! Good luck.
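From Python, the equivalent is to start the process without waiting on it. A sketch (script name taken from the example above):

import subprocess

# Start the long-running script in the background; without calling
# wait(), the rest of the program keeps running.
proc = subprocess.Popen(['./script.sh'])

# ... do other work here ...

proc.terminate()  # optional cleanup when you are done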
I have set up ssh-keygen etc. so I can just ssh to the remote hosts without a password; the shell is bash.
I am using a for loop to ssh to multiple nodes (remote hosts) and want to execute a script, but it doesn't seem to work:
for i in {1..10};
do
ssh -f node$i "python script.py $i"
done
The script hangs in the terminal and nothing happens.
Also, I can ssh manually and use Python; PYTHONPATH etc. are configured for the nodes.
There was csh on the nodes, so I used a .cshrc with exec /bin/bash, which at least gives me a bash shell when I log in manually, so the problem doesn't seem to be there.
Instead of wrapping the Python script in a shell script, you should probably have a Python script that connects to all the remote hosts via ssh and executes stuff.
Paramiko is a very good framework for this kind of use case. It will be much easier to maintain your script this way in the long run.
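For instance, a minimal Paramiko sketch of the same loop (node names and the script invocation are taken from the question):

import paramiko

for i in range(1, 11):
    host = f"node{i}"
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect(host)  # reuses the same key-based auth as passwordless ssh
    stdin, stdout, stderr = client.exec_command(f"python script.py {i}")
    print(host, stdout.read().decode(), stderr.read().decode())
    client.close()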
Use -o BatchMode=yes, and maybe you need -t to force pseudo-tty allocation, and -n to prevent reading from stdin.
I have this bash script:
#!/bin/sh
# launcher.sh
echo "Remote Control Server is starting up..."
sudo python RControlPanel.py &
wait &
sudo python startup.py &
wait
The first Python file is a Flask server, which needs to start up first.
The second file initialises the components on the Raspberry Pi and turns on a couple of LEDs and such. The way the script is written requires the Flask application to start first and the components to be initialised afterwards.
It seems that the Flask application takes longer to start up, and the bash script goes ahead and runs startup.py anyway.
Is it possible to make sure the Flask app is running before carrying on to the next script? I thought wait at the end would work, but it doesn't. I have even tried sleep.
Update: I'm not quite sure, but I think when the Flask app runs, it enters an endless loop and waits for requests, like a normal web server does. Maybe that's why the solutions below won't work.
I suppose the Flask server opens some HTTP port? Let's say it is port 8080; then you could poll the app like so:
while ! curl http://localhost:8080 -m1 -o/dev/null -s ; do
sleep 0.1
done
Options:
-m1 to allow at most 1 second for the HTTP request. If your firewall is configured to silently drop packets to closed ports, this should make it go faster.
-o/dev/null so the HTTP response body doesn't get printed.
-s to hide any errors, as they are expected.
Add -S if you still want to see the "Connection refused" messages scroll by until the server is up.
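If it is more convenient to do the wait from Python instead, here is a standard-library sketch of the same poll (port 8080 assumed, as above):

import time
import urllib.request
import urllib.error

while True:
    try:
        urllib.request.urlopen("http://localhost:8080", timeout=1)
        break
    except urllib.error.HTTPError:
        break  # the server answered, even if with an error status: it is up
    except OSError:
        time.sleep(0.1)  # not listening yet; try again shortly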
I came up with a solution using Thomas's answer.
I created an sh file called webserver.sh:
echo "Remote Control Server is starting up..."
sudo python RControlPanel.py
Then a second file called components.sh:
while ! curl http://127.0.0.1:80 -m1 -o/dev/null -s ; do
sleep 0.1
echo "Web Server still loading" #This line is for testing purposes
done
sudo python startup.py
echo "Startup Initialazation done. System Ready!"
And a third file, launcher.sh:
./webserver.sh &
./components.sh
The first file starts up the web server only. No other code should go in there, because the Flask app is an endless loop and everything underneath it would be skipped.
The second file begins by using Thomas's code to check whether the web server is running. If it is not, it keeps looping until the web server (Flask app) comes alive, and then runs the startup.py script, which initialises the components.
The third file just calls webserver.sh (in the background) and components.sh, so I can run my whole project with a single file, no matter which one starts first.
Just use:
sudo python RControlPanel.py && sudo python startup.py &
The double && ensures that the second command runs only after the first returns exit status zero.
I want to run a Python script on my server (the script has a GUI), but I want to start it from ssh. Something like this:
ssh me@server -i my_key "nohup python script.py"
... > let the script run forever
BUT it complains "unable to access video driver", since it is trying to use my ssh terminal as output.
Can I somehow make my command's output go to the server machine and not to my terminal? Basically something like wake-on-LAN functionality: tell the server you want something, and it will do everything using its own system (not sending any output back).
What about
ssh me@server -i my_key "nohup python script.py >/dev/null 2>&1"
You can use redirection to some remote logfile instead of /dev/null of course.
? :)
EDIT: GUI applications on X usually use the $DISPLAY variable to know where they should be displayed. Moreover, X11 display servers use authorization to permit or deny applications connecting to the display. The commands
export DISPLAY=:0 && xhost +
may be helpful for you (note that xhost + disables X access control entirely, so use it with care).
Isn't it possible for you to use a Python ssh library instead of calling an external application?
It would:
run as one process
guarantee that the invocation is the same across all possible systems
lose the overhead of spawning an external process
send everything through ssh (you won't have to worry about input like "; possibly locally executed command")
If not, go with what Piotr Wades suggested.
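A sketch of that approach with Paramiko (host, user, key path, and the DISPLAY value are all assumptions):

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect("server", username="me", key_filename="my_key")
# Point the script at the server's own display and detach it,
# discarding output just like the nohup command above.
client.exec_command("export DISPLAY=:0; nohup python script.py >/dev/null 2>&1 &")
client.close()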