I have been working with a Raspberry Pi 3 for about 3 months, and I had a problem when I started working with it.
I couldn't find an efficient and safe way to run a Python script on the Raspberry Pi when it turns on (without a monitor, mouse, or keyboard). At the moment I have added "$sudo run myscript.py &" to /etc/profile, but sometimes when I turn it on my script doesn't run until I connect a monitor, mouse, and keyboard, log into the GUI, and run the script from there; after that it works fine (again without mouse and keyboard).
I want to know: is there any solution that ensures my script will run after I turn the Raspberry Pi on?
Thanks a lot
You will want to set up a service and use sudo service <my_service> [start, stop, restart] to get it working on startup. See here for reference.
/etc/profile is executed when a new shell session is being started, so unless you start at least one shell session your script will not be run. Moreover, it will be terminated when the session stops, and if you start multiple sessions then the script will also be started for each session, which is probably not what you want.
Depending on your init system you would need to create a SysVinit or systemd service. Assuming you use a systemd-based distro (which is currently the default for most Linux distributions), you need to do the following:
Step 1: Place your script in a location from which it can be executed by the service. For example, /usr/local/bin/ may be a good choice.
Step 2: Create a service file. Assuming you want to name it myscript.service, create a file at /etc/systemd/system/myscript.service with the following content:
[Unit]
Description=myscript
[Service]
ExecStart=/usr/bin/python /usr/local/bin/myscript.py
[Install]
WantedBy=multi-user.target
Step 3: Reload systemd daemon and enable your service:
systemctl daemon-reload
systemctl enable myscript
Now, after you restart your system, your service should be started automatically. You can verify that with systemctl status myscript, which shows the service status.
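If it helps to see the other half, here is a minimal sketch of what /usr/local/bin/myscript.py could look like (the file name and the one-minute loop are just placeholders, not your actual script). Since the service has no terminal attached, anything the script prints ends up in the journal and can be read with journalctl -u myscript.
# Hypothetical /usr/local/bin/myscript.py -- a minimal stand-in for your script.
import sys
import time

def main():
    while True:
        print("myscript is alive")
        sys.stdout.flush()  # flush so the output reaches the journal immediately
        time.sleep(60)

if __name__ == "__main__":
    main()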
I am working on an RFID-based access control system for which I have a working python script. For some specific details if they matter, the main processing is done on a pi zero w, which is connected by USB to a microcontroller that handles the input from the RFID module and sends it to the pi in string format for simplicity. The pi then compares the string received to a yaml file and a schedule and uses GPIO to switch on or off a door strike using a power supply. The issue I'm running into is that the script stops running after about 30 minutes, and I'm not quite sure why, but I think the ideal solution in any case is to daemonize it, because a cron job is too subject to failure and a daemon seems very appropriate for this use. Does anyone have any suggestions for daemonizing the script such that it will start on boot and restart itself if it detects a failure or that it is no longer running?
As larsks said, you can create a systemd service:
sudo nano /etc/systemd/system/yourscript.service
This file should be something like this (read the documentation for more information):
[Unit]
Description=My cool script
After=multi-user.target
[Service]
User=root
WorkingDirectory=/path/to/your/script/directory/
Restart=on-failure
RestartSec=5s
ExecStart=/usr/bin/python3 your_script.py
StandardOutput=append:/var/log/your_script.log
StandardError=append:/var/log/your_script.log
SyslogIdentifier=coolawesomescript
[Install]
WantedBy=multi-user.target
Then enable and start it:
foo@bar:~$ sudo systemctl enable yourscript
foo@bar:~$ sudo systemctl start yourscript
Now your script will automatically restart when it crashes.
You can check whether your script actually works by typing sudo systemctl status yourscript
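One detail worth pairing with Restart=on-failure: the script should let a fatal error actually terminate the process with a non-zero exit code instead of swallowing it, otherwise systemd sees a healthy service and never restarts it. Here is a rough sketch of that structure, where read_tag() and open_door() are hypothetical placeholders for your real RFID/GPIO code, not your actual implementation:
# Sketch only: read_tag() and open_door() are placeholders for your RFID/GPIO code.
import sys
import time

def read_tag():
    # placeholder: replace with your serial/RFID read; return a tag string or None
    return None

def open_door(tag):
    # placeholder: replace with your yaml/schedule check and GPIO switching
    pass

def main():
    while True:
        tag = read_tag()   # if this raises, the handler below ends the process
        if tag:
            open_door(tag)
        time.sleep(0.1)

if __name__ == "__main__":
    try:
        main()
    except Exception as exc:
        print("fatal error: %s" % exc, file=sys.stderr)
        sys.exit(1)   # non-zero exit code, so Restart=on-failure restarts the unit
If the log files configured with StandardOutput=append: stay empty, Python's output buffering is the usual cause; adding Environment=PYTHONUNBUFFERED=1 to the [Service] section (or flushing after each print) usually fixes that.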
I need to deploy a Python script on an AWS machine with Ubuntu Server 18.04.
In the script there is a TCP server using a custom TCP port (let's say 9999), which handles the clients' requests in different threads.
The problem is that I don't know what the best practice is for keeping this script running if there is any problem (the main TCP server thread dying for whatever reason).
Furthermore, I don't really know what the best practice is for running this kind of script on an AWS EC2 instance.
So far I have been starting the script manually via SSH. Everything in the script logic works well; the problem is how to start such a script and keep it running.
You should take a look at the systemd suite. It can be used to manage the status of your script: it can restart the script if it dies, and start it again when the node is rebooted.
Here's an example service.
Create the file below in this location: /lib/systemd/system/example.service
[Unit]
Description=A short description of the script.
[Service]
Type=simple
# Script location
ExecStart=/path/to/some/script.py
# Restart the script in all circumstances (e.g. if it exits successfully, fails, or crashes).
Restart=always
[Install]
WantedBy=multi-user.target
Then set the service to start automatically on boot and start the service:
chmod 644 /lib/systemd/system/example.service
systemctl enable example
systemctl start example
There are a lot of resources available if you want to learn more about systemd. I'd suggest the links below:
[0] https://www.freedesktop.org/wiki/Software/systemd/
[1] https://github.com/torfsen/python-systemd-tutorial
[2] https://www.linode.com/docs/quick-answers/linux/start-service-at-boot/#create-a-custom-systemd-service
[3] https://medium.com/@benmorel/creating-a-linux-service-with-systemd-611b5c8b91d6
As for general best practices, it is difficult to provide advice without knowing more about your script. It is not recommended to use the Python HTTPServer module for production workloads, because it only implements basic security checks.
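For illustration only, here is a rough sketch of how a threaded TCP server on port 9999 can be structured so that a fatal error in the main thread ends the process with a non-zero exit code, letting a unit with Restart=always bring it back up. The echo handler is a placeholder, not your actual protocol:
# Rough sketch: threaded TCP server on port 9999; EchoHandler is a placeholder.
import socketserver
import sys

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)   # placeholder protocol: echo one message back
        if data:
            self.request.sendall(data)

class ThreadedTCPServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True   # avoid "address already in use" right after a restart
    daemon_threads = True        # client threads won't keep a dying process alive

if __name__ == "__main__":
    try:
        with ThreadedTCPServer(("0.0.0.0", 9999), EchoHandler) as server:
            server.serve_forever()
    except Exception as exc:
        print("server died: %s" % exc, file=sys.stderr)
        sys.exit(1)   # non-zero exit, so Restart=always starts it again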
Hardware setup (computer, etc)
Ubuntu server 18.04.1
Serial-to-USB converter with 8 ports
Python version
2.7.15r1
Python program description
When the program starts, it creates some threads:
One thread for the Modbus server.
One thread for each connected serial port (/dev/ttyUSBn), which starts reading the data.
Problem explanation
When I run the script with the normal command (python2.7 myProgram.py) it works: the Modbus server starts and I can read the values, and I can also see the TX/RX LEDs blink on the USB-serial converter.
If I check the data that is read, it is correct, so the program is working properly.
The problem comes up when I set up a crontab job that runs my Python script!
The Modbus server starts properly, but I can't see the USB-serial converter LEDs blink and the Python program doesn't print the data it reads. That means the program is not working on the "serial" side.
To create the job I used the following commands:
crontab -e
selected nano (default option)
added at the end of the file the cron command: @reboot /usr/bin/python2.7 /myProgram.py
I can't figure out where the problem is; the program does not catch any exception and the process keeps running until I stop it manually. If I stop it and then run it manually, it starts and works properly.
To help you:
I have also tried to run it using systemctl; the problem is the same.
At boot the service starts, and if I check it I can read Active (running), but the software is not reading from the serial port.
The questions are:
How can I solve it?
Is there something wrong with the crontab job?
Maybe the crontab job can't access the /dev/ directory? How can I solve this?
I'm very confused about that, I hope the question is properly created and formatted.
EDIT 30/11/18:
I have removed the crontab command and created a service to run the program, using this procedure.
If I run the command service supervision start, I can see that the process is running correctly, and in htop I have only 4 processes.
In this case the program is not reading from the serial port, but the Modbus server is working. I have just 4 processes and the CPU load is too high.
If I run it manually with the command python2.7 LibSupervisione.py, htop instead shows more processes, one for each thread that I create, and the load on the CPU is properly distributed.
Your script probably requires a console or some environment variables, but in a systemd-started process you don't have these automatically.
The easiest way would be to wrap your command in /bin/bash -c "your command" in the ExecStart field of your systemd unit, which gives it a shell-like environment, like this:
ExecStart=/bin/bash -c "/usr/bin/python2.7 /myProgram.py"
WorkingDirectory=yourWorkingDir
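If you want to confirm that theory, one option is a small throwaway diagnostic (not part of your program) that you run by hand, from cron, and from the service, and then compare the three outputs: it prints a few environment variables and whether the USB serial devices are accessible.
# Throwaway diagnostic: run it manually, from cron and from systemd, then compare.
import glob
import os

print("USER=%s HOME=%s PATH=%s" % (os.environ.get("USER"),
                                   os.environ.get("HOME"),
                                   os.environ.get("PATH")))
for dev in sorted(glob.glob("/dev/ttyUSB*")):
    print("%s readable=%s writable=%s" % (dev,
                                          os.access(dev, os.R_OK),
                                          os.access(dev, os.W_OK)))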
Why do you need to use cron? Use a systemd timer instead.
If you can run your code with the service like this: sudo service <service-name> start and get a good status using sudo service <service-name> status, then you can test it in crontab -e like this (run every 5 minutes for the test):
*/5 * * * * service <service-name> start
*/10 * * * * service <service-name> stop
Then, once the above test works, use @reboot instead.
OR:
Finally, if you want to run your code/service at system startup, do this instead of a cron job:
Edit the rc.local file in an editor with sudo, then:
#!/bin/sh -e
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
service <service-name> start
exit 0
[NOTE]:
This is the procedure for creating a service from your code.
So, here is my little problem:
I have a small Python program that has to run 24/7 with internet access, so using my laptop is not really a solution. However, I can use a local server, and my program is saved on that server. Is there a way to start the program headless on the server, so it can run for a long period of time?
Thanks
This post assumes you are using Linux. If this is not the case, I will still keep this answer around for anyone else; the general principles will apply to any OS regardless.
While setsid is one way to put a program into the background, it is usually not what you want for a number of reasons:
If you ssh into the server again, there is no easy way to see the output of the program. Any output will not be kept.
If the program crashes, it won't be restarted.
If the server reboots, it won't be started.
There is no easy way to see the status, stop or restart it.
One slightly better method would be to use tmux (or the older screen). These can be used to detach a process but still have access to its output (see this answer).
However, if you want to do things correctly, you should use a process manager/supervisor, such as systemd or supervisord.
For systemd, you can create the following file: /etc/systemd/system/yourprogramname.service
Inside it, place the following text:
[Unit]
Description=YourDescription
[Service]
ExecStart=/usr/bin/python3 /your/full/script/location.py
Restart=always
[Install]
WantedBy=multi-user.target
(These files support a number of additional options; you can view them at http://0pointer.de/public/systemd-man/systemd.service.html)
Then reload the units with systemctl daemon-reload and enable your unit at boot with systemctl enable yourprogramname.service.
You can then:
Start it: systemctl start yourprogramname
Retart it: systemctl restart yourprogramname
Stop it: systemctl stop yourprogramname
Get the status: systemctl status yourprogramname
View the full logs: journalctl -u yourprogramname
(these commands all require sudo)
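If you are writing the script with systemd in mind, a minimal sketch of a long-running program that cooperates with it might look like the following: output goes to the journal, and the SIGTERM that systemctl stop sends is handled so the program exits cleanly. The 30-second work loop is just a placeholder, not your actual program.
# Minimal sketch: print() output reaches journalctl, and SIGTERM from
# `systemctl stop` is handled so the program shuts down cleanly.
import signal
import sys
import time

running = True

def handle_sigterm(signum, frame):
    global running
    running = False

signal.signal(signal.SIGTERM, handle_sigterm)

while running:
    print("still working...", flush=True)   # visible via journalctl -u yourprogramname
    time.sleep(30)

print("got SIGTERM, shutting down", flush=True)
sys.exit(0)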
I have a Python script with a while True: loop in it that I would like to run on startup on a Raspberry Pi running Jessie.
So far I have a startup bash script in /etc/init.d called startup.sh, which contains:
sudo python3 /home/pi/Desktop/Scripts/bluez3.py &
When the Raspberry Pi starts up, the script does run, but after 20 minutes it seems to stop. I have logging in my script and the timestamp stops exactly 20 minutes in.
I did some reading and I think the best option would be to run the Python script as a service on the Raspberry Pi. However, I have not been able to find a decent tutorial on how to do this (not helped by my lack of Python knowledge).
My question is: is there another way to resolve my problem, or does anyone know of a good tutorial on how to make the Python script into a service?
Thanks!
Given the name of your script, I'm guessing it's related to some Bluetooth stuff. It's likely that after 20 minutes, whatever you're checking or needing in your script becomes inaccessible and throws an exception or something like that: a resource being locked, a BT device being disconnected, a module being unloaded or unavailable, or [insert edge case reason here]…
That being said, before creating a systemd service, you can first play with supervisord (managed with supervisorctl), which is just an apt install supervisor away.
Then, if you really want to launch it as a service, you can find plenty of examples in /lib/systemd/system/*.service, like the following:
[Unit]
Description=Your service
Wants=
# I guess you need bluetooth initialised first
After=bluetooth.target
[Service]
ExecStart=/usr/bin/python3 /home/pi/Desktop/Scripts/bluez3.py
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=always
[Install]
WantedBy=multi-user.target
which I customized from the sshd.service file 😉
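To actually see why the script dies after roughly 20 minutes once it runs under systemd, one option is to let the main loop log the full traceback before exiting, so journalctl -u <your-service> shows the cause and Restart=always starts it again. A sketch, with do_bluetooth_work() as a hypothetical stand-in for whatever bluez3.py really does:
# Sketch: log the traceback before dying so `journalctl -u <your-service>` shows
# why the script stopped, then let Restart=always start it again.
import logging
import sys
import time

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def do_bluetooth_work():
    pass   # stand-in for whatever bluez3.py actually does on each cycle

if __name__ == "__main__":
    try:
        while True:
            do_bluetooth_work()
            time.sleep(1)
    except Exception:
        logging.exception("main loop died")   # full traceback goes to stderr -> journal
        sys.exit(1)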