I have written a Python script which scans my Gmail INBOX for a particular mail, and if that mail is present it opens up a GUI. I have tested this script and it works correctly.
I want to run this script whenever the network connection is established, so I have added a script to NetworkManager's dispatcher.d directory. My bash script is shown below.
#!/bin/bash
# /etc/NetworkManager/dispatcher.d/90filename.sh
IF=$1
STATUS=$2

if [ "$IF" == "wlan0" ] # for wireless internet
then
    case "$STATUS" in
        up)
            logger -s "NM Script up triggered"
            python /home/rahul/python/expensesheet/emailReader.py
            logger -s "emailReader completed"
            python3.2 /home/rahul/python/expensesheet/GUI.py &
            wait $!
            exitValue=$?
            logger -s "GUI completed with exit status $exitValue"
            ;;
        down)
            logger -s "NM Script down triggered"
            # place custom code here
            ;;
        pre-up)
            logger -s "NM Script pre-up triggered"
            # place custom code here
            ;;
        post-down)
            logger -s "NM Script post-down triggered"
            # place custom code here
            ;;
        *)
            ;;
    esac
fi
I have used Tkinter to design my GUI.
My problem is that emailReader (which has no GUI) gets executed correctly, but GUI.py doesn't get executed: it exits with exit status 1.
Can somebody throw some light on this matter and explain what I'm doing wrong?
NetworkManager is a process that runs outside of your X server (e.g. NetworkManager gets started on bootup before your window manager does; the two are totally unrelated).
Therefore, any script started by NetworkManager will not (directly) be able to access the GUI. It is very similar to what happens when you switch from your desktop to a virtual terminal (e.g. Ctrl-Alt-F1) and then try to run your GUI from there: you will most likely get an error like "Can't open display".
If you want to start a GUI program, you have two possibilities:
tell a notification daemon (a sub-process of your window manager) to start your GUI
tell your GUI to start on the correct display (the one where your desktop is running)
I'd go for the first solution (notification daemons are designed for that very purpose), but how to do it depends heavily on the window manager you use.
The second solution is a bit dirtier and involves potential security holes, but basically try starting DISPLAY=:0.0 myguiapp.py instead of plain myguiapp.py (this assumes you are running an X server on localhost:0.0).
You can check whether this works by simply launching the command with the DISPLAY prefix from a virtual terminal.
To get the display you are actually using, simply run echo $DISPLAY in a terminal within your X server.
Usually, remote connections to your running X server are disabled (as they would allow non-privileged users to take over your desktop: everything from starting new GUI programs, which is what you want, to installing keyloggers); if that's the case, check man xhost (or go for solution #1).
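Applied to the dispatcher setup above, solution #2 might look like the following sketch (it assumes the desktop runs on :0.0 and that xhost access control permits the connection):

import os
import subprocess

# Launch the Tkinter GUI on the desktop's X display rather than in
# NetworkManager's display-less environment.
env = dict(os.environ, DISPLAY=':0.0')
subprocess.Popen(['python3.2', '/home/rahul/python/expensesheet/GUI.py'], env=env)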
UPDATE
For the first solution, you probably want to check out libraries like libnotify (there are Python bindings in python-notify and python-notify2).
If you want more than simple notification popups, you will probably have to dig into D-Bus.
A simple example (I haven't tested it personally, though) can be found here.
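As a starting point, a minimal python-notify2 sketch (untested here; it assumes the notify2 package is installed and a D-Bus session is running):

import notify2

# Ask the session's notification daemon to pop up a message.
notify2.init('emailReader')
note = notify2.Notification('Expense sheet', 'New mail found')
note.show()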
Related
This is a long bash script (400+ lines) that is originally invoked from a django app like so -
os.system('./bash_script.sh &> bash_log.log')
It stops on a random command in the script. If the order of commands is changed, it hangs on another command in approximately the same location.
SSHing to the machine that runs the django app and running sudo ./bash_script.sh asks for a password and then runs all the way.
The message it presents when it hangs doesn't show up in the log file; I couldn't make it redirect there. I assume it's a sudo password request.
Tried -
sudo -v in the script - didn't help.
ssh to the machine and manually extend the sudo timeout in /etc/sudoers - didn't help, I think because the django app was already running and using the previous timeout.
splitting the script in two, and running one in separate thread, like so -
from subprocess import Popen
from threading import Thread
import os

def basher(command, log_path):
    with open(log_path, 'w') as log:  # open for writing so Popen can redirect into it
        Popen(command, stdout=log, stderr=log).wait()

script_thread = Thread(target=basher, args=('./bash_script_pt1.sh', 'bash_log_pt1.log'))
script_thread.start()
os.system('./bash_script_pt2.sh &> bash_log_pt2.log')  # I know it's deprecated, not sure if maybe it's better in this case
script_thread.join()
The logs showed that part 1 ended ok, but part 2 still hangs, albeit later in the code than when they were together.
I thought to edit /etc/sudoers from inside the Python code and then re-login via su - user. There are snippets around showing how to pass the password using a pty, but I don't understand the mechanics of it and could not get it to work.
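(For what it's worth, a rough sketch of that idea without a pty is sudo's -S flag, which reads the password from stdin; SUDO_PASSWORD below is a placeholder, and hardcoding a real password is obviously a security trade-off:)

from subprocess import PIPE, Popen

# Feed the sudo password on stdin instead of via a pty.
with open('bash_log.log', 'w') as log:
    proc = Popen(['sudo', '-S', './bash_script.sh'],
                 stdin=PIPE, stdout=log, stderr=log)
    proc.communicate(input=b'SUDO_PASSWORD\n')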
I also noted that ps aux | grep bash_script.sh shows that the script is being run twice. As -
/bin/bash bash_script.sh
and as
sh -c bash_script.sh.
I assume os.system has an internal shell=True going on.
I don't understand the Linux entities/mechanics in play to figure out what's happening.
My guess is that the django app has different and more limited permissions than the script itself does, and the script is inheriting those restrictions because it is being executed by the app.
You need to find out what permissions the script has when you run it just from bash, and what it has when you run it via django, and then figure out what the difference is.
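One way to compare the two contexts is a small dump script (a sketch; run it once from an interactive shell and once via the django app, then diff the outputs):

import getpass
import os
import subprocess

# Dump identity and environment details so the two invocation
# contexts can be compared side by side.
print('user  : %s' % getpass.getuser())
print('uid   : %s euid: %s' % (os.getuid(), os.geteuid()))
print('cwd   : %s' % os.getcwd())
print('PATH  : %s' % os.environ.get('PATH'))
print('id    : %s' % subprocess.check_output(['id']).strip())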
Hardware setup (computer, etc)
Ubuntu server 18.04.1
Serial-to-USB converter with 8 ports
Python version
2.7.15r1
Python program description
When the program starts, it creates some threads:
One thread for the Modbus server.
One thread for each connected serial port (/dev/ttyUSBn), which starts reading the data.
Problem explanation
When I run the script using the normal command (python2.7 myProgram.py) it works: the Modbus server starts and I can read the values, and I can also see the USB-serial converter blink on the TX/RX LEDs.
If I check the data that was read, it is correct, so the program is working properly.
The problem comes out when I set up a crontab job that runs my Python script!
The Modbus server starts properly, but I can't see the USB-serial converter LEDs blink and the Python program doesn't print the data it reads. That means the program is not working on the "serial" side.
To create the job I used the following commands:
crontab -e
selected nano (the default option)
added at the end of the file the cron command: @reboot /usr/bin/python2.7 /myProgram.py
I can't figure out where the problem is; the program is not catching any exception, and the process keeps running until I stop it manually. If I stop it and run it manually after that, it starts and works properly.
To help you:
I have also tried to run it using systemctl; the problem is the same.
At boot the service starts, and if I check it I can read Active (running), but the software is not reading from the serial port.
The questions are:
How can I solve it?
Is there something wrong with the crontab job?
Maybe the crontab job can't access the /dev/ directory? How can I solve this?
I'm very confused about that, I hope the question is properly created and formatted.
EDIT 30/11/18:
I have removed the crontab command and created a service to run the program, using this procedure.
If I run the command service supervision start, I can see that the process starts, but htop shows only 4 processes.
In this case the program is not reading from the serial port, though the Modbus server is working; there are just 4 processes and the CPU load is too high.
If I run it manually with the command: python2.7 LibSupervisione.py
The output of the htop command then shows more processes, one for each thread that I create, and the load on the CPU is properly distributed.
Your script probably requires a console or some environment variables, but in a systemd-started process you don't have these automatically.
The easiest way would be to wrap your command in /bin/bash -c "your command" in the ExecStart field of your systemd unit, to get a shell-like environment, like this:
ExecStart=/bin/bash -c "/usr/bin/python2.7 /myProgram.py"
WorkingDirectory=yourWorkingDir
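For context, a full unit file might look roughly like this (a sketch only: the description, restart policy, and install target are assumptions, not taken from the original setup):

[Unit]
Description=Modbus/serial supervision script (sketch)
After=network.target

[Service]
ExecStart=/bin/bash -c "/usr/bin/python2.7 /myProgram.py"
WorkingDirectory=yourWorkingDir
Restart=on-failure

[Install]
WantedBy=multi-user.target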
Why do you need to use cron? Use a systemd timer instead.
If you can run your code with the service like this: sudo service <service-name> start and get a good status using sudo service <service-name> status, you can test it in crontab -e like this (run every 5 minutes for a test):
*/5 * * * * service <service-name> start
*/10 * * * * service <service-name> stop
Then switch to @reboot once the above test works.
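That is, once the test behaves, the whole job collapses to a single line (with <service-name> as a placeholder for your service):
@reboot service <service-name> start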
OR:
Finally, if you want to run your code/service at system startup, do it via rc.local instead of a cron job:
Edit the rc.local file with an editor (with sudo permission), then:
#!/bin/sh -e
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
service <service-name> start
exit 0
[NOTE]:
This is the procedure for creating a service from your code.
I want a Python script to be executed at bootup on my Raspberry Pi 2, so I put it into .bashrc.
Launching the script with crontab didn't work.
But I only want to execute it once, not every time I enter a terminal or every time I log in via ssh.
My poor try of course didn't work and it's obvious why.
python_running=false
if [ "$python_running" = false ] ; then
./launcher.sh
$python_running = true
fi
EDIT:
My main problem is that the python script needs internet access. The connection has to be established before the script is executed.
After the first answer and comments I realized that .bashrc is not a good place for launching the script at bootup. It works with autologin, but is not a proper solution.
But what could be a proper solution to run the script only once?
.bashrc is definitely not a proper place to do that. To start the script at bootup, the best and easiest solution I found is crontab:
sudo crontab -e
then add the following line to the end of the file:
@reboot sh /home/pi/launcher.sh > /home/pi/logs/cronlog 2>&1
But to use crontab, the shell script needs to be changed to wait/poll for the internet connection:
ROUTER_IP=192.168.0.1

while ! ping -c1 "$ROUTER_IP" > /dev/null 2>&1; do
    echo "network is not up yet"
    sleep 3
done
echo "network is up now"
python3 myScript.py &
Polling might not be the best option, but there's nothing wrong with creating one sleep process every 3 seconds.
OK, so we need to clarify some terminology.
The Pi (or any Unix system) doesn't really distinguish between a "console" login and an ssh (remote) login; it's going to drop you into a shell either way.
However, if you want something to start on bootup (which is what I think you want), then look at /etc/rc.d - have a look here - but in case that link goes away, put a command in /etc/rc.local.
I've read a lot of other posts about monitoring Python scripts, but haven't been able to find anything like what I am hoping to do. Essentially, I have 2 desktops running Linux. Each computer has multiple Python scripts running non-stop 24/7. Most of them are web scraping, while a few others are scrubbing and processing data. I have built pretty extensive exception handling into them that sends me an email in the event of any error or crash, but there are some situations I haven't been able to get emailed about (such as if the script itself just freezes, the computer crashes, the computer loses its internet connection, etc.).
So, I'm trying to build a sort of check-in service where a Python script checks in to the service multiple times throughout its run, and if it doesn't check in within X amount of time, the service sends me an email. I don't know if this is something that can be done with the signal or asyncore module(s) and/or sockets, or what would even be a good place to start.
Has anyone had any experience in writing anything like this? Or can point me in the right direction?
Take a look at supervision tools like monit or supervisord.
Those tools are built to do what you described.
For example: create a simple init.d script for your python process:
#!/bin/sh
PID_FILE=/var/run/myscript.pid
LOG_FILE=/mnt/logs/myscript.log
SOURCE=/usr/local/src/myscript

case $1 in
start)
    exec /usr/bin/python $SOURCE/main_tread.py >> "$LOG_FILE" 2>&1 &
    echo $! > "$PID_FILE"
    ;;
stop)
    kill `cat "$PID_FILE"`
    ;;
*)
    echo "Usage: wrapper {start|stop}"
    ;;
esac
exit 0
Then add this to the monit config:
check process myscript pidfile /var/run/myscript.pid
start program = "/etc/init.d/myscript start"
stop program = "/etc/init.d/myscript stop"
check file myscript.pid path /var/run/myscript.pid
if changed checksum then alert
Also check the documentation; it has pretty good examples of how to set up alerts and send emails.
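To get the "check-in" behaviour from the question, one sketch is to have each script touch a heartbeat file as it works, and let monit alert when the file goes stale (the path and the 15-minute window are assumptions):

import time

HEARTBEAT = '/tmp/scraper.heartbeat'  # hypothetical path

while True:
    do_scraping_work()  # placeholder for one unit of your real work
    with open(HEARTBEAT, 'w') as f:
        f.write(str(time.time()))  # updating the file's mtime is the check-in

and in the monit config:

check file scraper_heartbeat path /tmp/scraper.heartbeat
if timestamp > 15 minutes then alert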
Upstart is a good choice too, but I'm afraid it is only available for Ubuntu and Red Hat based distros.
I’m writing a web app that uses Selenium to screen-scrape another website. This screen-scraping only happens once a day, so I’d rather not leave Selenium and Xvfb running all the time.
I’m trying to figure out how to start Xvfb and Selenium from Python, and then stop them once the screen-scraping’s done.
If I was doing it manually, I'd start them at the command line and hit Ctrl-C to stop them. I'm trying to do the same thing from Python.
I seem to be able to successfully start Xvfb like this:
xvfb = Popen('Xvfb :99 -nolisten tcp', shell=True)
But when I’ve tried to terminate it:
xvfb.terminate()
and then tried to start it again (by repeating my initial command), it tells me it’s already running.
I don't know why you want to run Xvfb as root. Your usual X server needs to run as root (on many, but not all, unices) only so that it can access the video hardware; by definition, that's not an issue for Xvfb.
Pass the arguments as a list instead of going through a shell, and give Xvfb a scratch framebuffer directory:
import subprocess
import tempfile

tempdir = tempfile.mkdtemp()
xvfb = subprocess.Popen(['Xvfb', ':99', '-nolisten', 'tcp', '-fbdir', tempdir])
When you terminate the X server, you may see a zombie process. This is in fact not a process (it's dead), just an entry in the process table that goes away when the parent process either reads the child's exit status or itself dies. Zombies are mostly harmless, but it's cleaner to call wait to read the exit status.
xvfb.terminate()
# At this point, `ps -C Xvfb` may still show a running process
# (because signal delivery is asynchronous) or a zombie.
xvfb.wait()
# Now the child is dead and reaped (assuming it didn't catch SIGTERM).
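Putting the pieces together, a minimal start/scrape/stop sketch (run_selenium_scrape is a placeholder for your actual scraping code):

import os
import subprocess
import tempfile

tempdir = tempfile.mkdtemp()
xvfb = subprocess.Popen(['Xvfb', ':99', '-nolisten', 'tcp', '-fbdir', tempdir])
try:
    os.environ['DISPLAY'] = ':99'  # point the browser at the virtual display
    run_selenium_scrape()          # placeholder: your Selenium routine
finally:
    xvfb.terminate()
    xvfb.wait()                    # reap the child so no zombie is left behind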
I assume you can parametrize your system to allow any user to launch Xvfb, as explained here, solving all your problems.
EDIT
The correct command line is:
sudo chmod u+s `which Xvfb`