Python3 restart a script from another script - python

This is the situation:
I have 2 Python scripts running in 2 different shells on Linux:
1. python3 server.py
2. python3 roomcontrol.py
I need the user to be able to restart roomcontrol.py from server.py.
I tried with subprocess:
import os
from subprocess import call
dir = os.path.dirname(os.path.realpath(__file__)) + "/roomcontrol.py"
call(["python3", dir])
These instructions just start a new instance of "roomcontrol.py" in the shell of "server.py"; I need to restart roomcontrol.py in its own shell, or close its shell and open a new one.
Edit:
I also tried:
import os
import subprocess
dir = os.path.dirname(os.path.realpath(__file__)) + "/roomcontrol.py"
subprocess.Popen([dir], stdout=subprocess.PIPE, shell=True)
It doesn't work.
It writes a lot of stuff in the same shell as server.py, my cursor becomes a cross, and if I click somewhere it writes more stuff like before. A small example of what it writes:
import: unable to grab mouse `': Resource temporarily unavailable # error/xwindow.c/XSelectWindow/9199.
import: unable to grab mouse `': Resource temporarily unavailable # error/xwindow.c/XSelectWindow/9199.
.
.
.
from: can't read /var/mail/xml.dom
/home/stark/Desktop/TrackingOk/Release/roomcontrol.py: 9: /home/stark/Desktop/Tr: not foundlease/roomcontrol.py:
/home/stark/Desktop/TrackingOk/Release/roomcontrol.py: 10: /home/stark/Desktop/T: not foundelease/roomcontrol.py: try:

Create a new .sh file ("restart.sh" for example):
#!/bin/bash
kill $(pgrep -f 'python3 roomcontrol.py')
python3 roomcontrol.py &
Then just call
os.system('./restart.sh')
somewhere in your "server.py" script.
PS: You have to make the .sh file executable by running the following command:
chmod +x restart.sh
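If you prefer to keep everything in Python, you can invoke the same script through subprocess instead of os.system; a minimal sketch, assuming restart.sh sits in the same directory as server.py:
import os
import subprocess

# assumption: restart.sh lives next to server.py and is executable (chmod +x)
script = os.path.join(os.path.dirname(os.path.realpath(__file__)), "restart.sh")
subprocess.run([script], check=True)  # raises CalledProcessError if the script fails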
Edit: I'm not sure how you can start a process from a different shell, but you can start "roomcontrol.py" in another terminal window with the following (bash) command:
gnome-terminal -x sh -c 'python3 roomcontrol.py'
But then you'd have to replace "restart.sh" by
#!/bin/bash
kill -9 $(pgrep -f 'sh -c python3 roomcontrol.py')
gnome-terminal -x sh -c 'python3 roomcontrol.py'
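For completeness, the same kill-and-relaunch can also be driven from server.py without a separate .sh file; a minimal sketch, assuming roomcontrol.py was started as "python3 roomcontrol.py" and lives next to server.py (as above, the relaunched process will not get its own terminal unless you go through gnome-terminal):
import os
import signal
import subprocess

# find the PID(s) of the running "python3 roomcontrol.py" instance
pids = subprocess.run(["pgrep", "-f", "python3 roomcontrol.py"],
                      capture_output=True, text=True).stdout.split()
for pid in pids:
    os.kill(int(pid), signal.SIGTERM)   # ask the old instance to exit

# start a fresh instance from the scripts' directory so the command line
# stays "python3 roomcontrol.py"; it shares server.py's shell
here = os.path.dirname(os.path.realpath(__file__))
subprocess.Popen(["python3", "roomcontrol.py"], cwd=here)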

Related

Script to open terminal in Kali, create multiple new tabs, rename and colour code them

I need help creating a script, bash or python, whichever works best within Kali Linux.
What I need the script to do is open up the default terminal (ZSH), create the following new tabs - main, msf, nc listener, http server, searchsploit - and then colour code them.
This is the bash script to create the tabs. I am not sure about the colour coding.
#!/bin/bash
mate-terminal --tab --title="main"
mate-terminal --tab --title="msf"
mate-terminal --tab --title="NC Listener"
mate-terminal --tab --title="http server"
mate-terminal --tab --title="Searchsploit"
Write a script:
sleep 1m;gnome-terminal --geometry=150x50 --tab --title="echo" -e "bash -c \"echo "hello";echo "there";exec bash\"" --tab --title="idea" -e "bash -c \"/opt/idea-IU-111.69/bin/idea.sh;exec bash\"" --tab --title="sql" -e "bash -c \"mysql -uroot -pigdefault;\"" --tab --title="firefox" -e "bash -c \"/usr/bin/firefox www.gmail.com;\""
The script above will open a terminal with 4 tabs:
echo "hello there"
Idea
MySql
Firefox
The script will also set the titles in the terminal for each tab.
One can personalize the above script accordingly.
Description of the script :
sleep 1m : Executes the script after 1 minute so that the system finishes
its startup process.
gnome-terminal : Open a terminal.
--geometry=150x50 : Set the screen size for the terminal.
--tab : Open a new tab.
--title : Set the title for the tab.
-e : Execute the argument inside the terminal.
exec bash : Starts a new bash after executing all the commands.
This command is required if you do not want to close the current tab.
Save the script anywhere in the file system and make the file executable using the following command:
$ chmod +x file.sh
Add this file to startup applications :
Go to System > Preferences > Startup Applications
Click on add and in "command" write:
bash path/to/your/file.sh
Close and it's done!
Next time when you start your system, all the tasks mentioned in the script will be automated.
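If you would rather trigger the same startup from a Python script instead of the Startup Applications dialog, a minimal sketch is to launch the .sh file with subprocess; the path below is only a placeholder for wherever you saved file.sh:
import subprocess

# placeholder path: point this at the startup script described above
subprocess.Popen(["bash", "/path/to/your/file.sh"])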

Can't open sh file

I'm trying to execute an sh file from a Python script.
My Python script:
os.system('sh run.sh')
My sh file:
echo 'The house is blue' | /opt/palavras/por.pl > output.txt
Error:
sh: 0: Can't open run.sh
How can I fix it?
Make sure that your bash script has the right permissions (i.e. it is executable). In a terminal run:
chmod +x run.sh
and then try (assuming that run.sh is in the same directory as your python script)
import os
os.system('./run.sh')
I do not believe this would run from the terminal either, because the file has to be made executable before it can be run. Try:
os.system('chmod +x run.sh && ./run.sh')
instead.
See https://askubuntu.com/questions/38661/how-do-i-run-sh-files for details on running sh files; os.system() in Python passes the whole string to a shell, and && chains the two commands so they run in order.
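The error sh: 0: Can't open run.sh usually means the file could not be found from the current working directory, so another option is to build an absolute path and call the script with subprocess; a minimal sketch, assuming run.sh sits next to the Python script:
import os
import subprocess

# assumption: run.sh is stored in the same directory as this Python script
script = os.path.join(os.path.dirname(os.path.abspath(__file__)), "run.sh")
# running it via "sh" does not require the executable bit;
# check=True raises an error if the script exits with a non-zero status
subprocess.run(["sh", script], check=True)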

Run python script from rc.local does not execute

I want to run a Python script on boot of Ubuntu 14.04 LTS.
My rc.local file is as follows:
sudo /home/hduser/morey/zookeeper-3.3.6/bin/zkServer.sh start
echo "test" > /home/hduser/test3
sudo /home/hduser/morey/kafka/bin/kafka-server-start.sh /home/hduser/morey/kafka/config/server.properties &
echo "test" > /home/hduser/test1
/usr/bin/python /home/hduser/morey/kafka/automate.py &
echo "test" > /home/hduser/test2
exit 0
Everything except my Python script is working fine, even the echo statement after the Python script line, but the Python script doesn't seem to run.
My Python script is as follows:
import sys
from subprocess import Popen, PIPE, STDOUT
cmd = ["sudo", "./sbt", "project java-examples", "run"]
proc = Popen(cmd, shell=False, stdout=PIPE, stdin=PIPE, stderr=STDOUT)
proc.communicate(input='1\n')
proc.stdin.close()
which works perfectly fine if executed individually.
I went through the following questions: link
I did a lot of research but couldn't find a solution.
Edit: the echo statements are for testing purposes only. The second actual command (not counting the echo statements) starts a server which keeps on running, and the Python script also starts a listener which runs in an infinite loop, if this is any help.
The Python script tries to launch ./sbt. Are you sure what the current directory is when rc.local runs? The rule is: always use absolute paths in system scripts.
Do not run the Python script in the background; run it in the foreground and do not exit from its parent script. Better to call another script from "rc.local" that does all the echoing and script launching.
Run that script from "rc.local", not in the background (no &).
You do not need "sudo" as "rc.local" is run as root.
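A minimal sketch of how the absolute-path advice could look inside the Python script, with /home/hduser/project standing in for whatever directory actually contains ./sbt:
from subprocess import Popen, PIPE, STDOUT

# assumption: /home/hduser/project is the directory that contains ./sbt
cmd = ["./sbt", "project java-examples", "run"]   # sudo is not needed when launched from rc.local
proc = Popen(cmd, cwd="/home/hduser/project",     # resolve ./sbt against an explicit directory
             stdout=PIPE, stdin=PIPE, stderr=STDOUT)
proc.communicate(input="1\n")                     # use b"1\n" if this runs under Python 3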
If you want to run a Python script at system boot, there is an alternative solution which I have used.
1: Create an sh file, e.g. sample.sh, and copy-paste the following content:
#!/bin/bash
clear
python yourscript.py
2: Now add a cron job at reboot. If you are using Linux you can do it as follows:
a: Run crontab -e (install cron with sudo apt-get install cron if needed)
b: Add the line @reboot /full/path/to/sh/file > /home/path/error.log 2>&1
Then restart your device.

Run $Path command in Terminal in a python script

I use IPython notebook and I want to call a terminal command:
fft <in> <out>
my "fft" is in my $PATH so using a terminal, this would work.
How can I run this command in my ipython notebook?
the problem is that my fft executable is in my $PATH folder, and python won't recognize this
Found the solution:
import os
os.system("xterm -e 'bash -c \"fft -i 3 AddedK AddedK_ifft; exit -f exec bash\"' ")
xterm opens a new terminal
fft ...; calls the function fft
exit -f closes the terminal
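If opening an xterm window is not actually required, the executable can be called directly: subprocess searches $PATH just like an interactive shell does. A minimal sketch, reusing the fft arguments from above:
import subprocess

# fft is resolved via $PATH, exactly as in a terminal;
# capture_output keeps its stdout/stderr available inside the notebook
result = subprocess.run(["fft", "-i", "3", "AddedK", "AddedK_ifft"],
                        capture_output=True, text=True)
print(result.stdout)
In an IPython notebook, the shell escape !fft <in> <out> achieves the same thing.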

Changing Process Name using Shell for nagios monitoring with check_procs

I have a python script to start a process which I want to monitor using Nagios. When I run that script and perform ps -ef on my Ubuntu EC2 instance, it shows the process as python <filename>.py --arguments. For Nagios to monitor that process using check_procs, we need to supply the process name. Here the process name becomes 'python'.
/usr/lib/nagios/plugins/check_procs -C python
It returns the output that one Python process is running. This is fine when I'm running one Python process, but if I'm running multiple Python scripts and want to monitor only a few, then I have to give that particular process name. If, in the above command, I give the Python script name, it throws an error. So I want to mask the whole python <filename>.py --arguments as some other name so that when performing check_procs I can give that new name.
If anyone has any idea, please let me know. I have checked other Stack Overflow questions which suggest changing the Python process name using setproctitle, but I want to do it using the shell.
Regards,
Sanket
You can use the check_procs command to look at arguments, which includes the module name. The following command will let you know if the python module 'module.py' is running.
/usr/lib/nagios/plugins/check_procs -c 1:1 -a module.py -C python
The -c argument lets you set the critical range. 1:1 will trigger a critical status if there are more or fewer than 1 matching process running.
The -a argument will filter based on processes that contain the args 'module.py' (change it to the name of the module you want to monitor)
The -C argument will make sure that the process is a python process
If you need help figuring out how to create the service definition, I had to figure that out too. Just let me know.
REFERENCE:
check_procs plugin manpage
http://nagiosplugins.org/man/check_procs
You can't change the process name from pure Python, although you can use a wrapper (for example, written in C) to do so.
However, what you should do instead is make your program a daemon and use a pidfile. Have a look at the Python daemon API and its implementation, python-daemon.
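Writing the pidfile is only a couple of lines inside the monitored script; a minimal sketch, with /tmp/myscript.pid as a placeholder path that the monitoring side would have to agree on:
import atexit
import os

PIDFILE = "/tmp/myscript.pid"  # placeholder path agreed on with the monitoring check

with open(PIDFILE, "w") as f:
    f.write(str(os.getpid()))        # record this process's PID at startup

atexit.register(os.remove, PIDFILE)  # remove the file on normal exit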
check_procs already handles this situation.
check_procs can tell the difference between scripts launched as an argument to the interpreter and jobs run directly via a hashbang interpreter, even though both of these look the same in the ps output. The latter case will not be listed by check_procs -C python!
If you run your scripts explicitly via python, as python <filename.py>, then you can monitor them with check_procs -C python -a filename.py.
If you put #!/usr/bin/python in your scripts and run them as ./filename.py, then you can monitor with check_procs -C filename.py.
Example command line session showing this behavior:
#make test.py directly executable. See code below
$ chmod a+x test.py
#launch via python explicitly:
$ /usr/bin/python ./test.py &
[1] 27094
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 1 process with command name 'python'
PROCS OK: 0 processes with command name 'test.py'
PROCS OK: 1 process with args 'test.py'
#launch via python implicitly
$ ./test.py &
[2] 27134
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 1 process with command name 'python'
PROCS OK: 1 process with command name 'test.py'
PROCS OK: 2 processes with args 'test.py'
#PS 'COMMAND' output looks the same
$ ps 27094 27134
PID TTY STAT TIME COMMAND
27094 pts/6 S 0:00 /usr/bin/python ./test.py
27134 pts/6 S 0:00 /usr/bin/python ./test.py
#kill the explicit test
$ kill 27094
[1] - terminated /usr/bin/python ./test.py
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 0 processes with command name 'python'
PROCS OK: 1 process with command name 'test.py'
PROCS OK: 1 process with args 'test.py'
#kill the implicit test
$ kill 27134
[2] + terminated ./test.py
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 0 processes with command name 'python'
PROCS OK: 0 processes with command name 'test.py'
PROCS OK: 0 processes with args 'test.py'
test.py is a python script that sleeps for 2 minutes. It is chmod +x and has a hashbang #! line invoking /usr/bin/python.
#!/usr/bin/python
import time
time.sleep(120)
Create a pid file and use that file for the process lookup with Nagios.
I'm not saying this is the best solution (it wouldn't scale well at all), but you can create a symbolic link to the python command and execute your script using this link. e.g.
ln -s `which python` ~/mypython
~/mypython myscript.py
Scripts launched using the link should show up as mypython in ps.
You can use subprocess.Popen to change the executable name, but you'd have to use a wrapper script (or some weird fork magic). The following code causes ps to list the executable as kwyjibo /tmp/test.py instead of /usr/bin/python /tmp/test.py:
import subprocess
p = subprocess.Popen(['kwyjibo', '/tmp/test.py'], executable='/usr/bin/python')
