I'm trying to make a Python script run as a service.
It needs to run automatically after a reboot.
I have tried copying it into the init.d folder, but without any luck.
Can anyone help? (If it requires a cron job, I haven't configured one before, so I would be glad if you could explain how to do it.)
(Running CentOS)
Run this command:
crontab -e
and then add
@reboot /usr/bin/python /path/to/yourpythonscript
Save and quit; your Python script will then run automatically after every reboot.
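If you are on CentOS 7 or later (which use systemd), a unit file is a more robust alternative to cron, since systemd can restart the script if it dies. A minimal sketch; the unit name is hypothetical and the script path is the same placeholder as above:
sudo tee /etc/systemd/system/yourpythonscript.service > /dev/null <<'EOF'
[Unit]
Description=Run my Python script at boot
After=network.target

[Service]
ExecStart=/usr/bin/python /path/to/yourpythonscript
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# reload systemd, start the service now, and enable it at boot
sudo systemctl daemon-reload
sudo systemctl start yourpythonscript.service
sudo systemctl enable yourpythonscript.service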
There is no intrinsic reason why Python should be different from any other scripting language here.
Here is someone else using Python in init.d: blog.scphillips.com/posts/2013/07/… In fact, that post deals with a lot that I don't deal with here, so I recommend just following it.
For the Ubuntu variant:
Open the /etc/rc.local file with:
nano /etc/rc.local
Add the following line just before the exit 0 line:
start-stop-daemon -b -S -x /usr/bin/python -- /root/python/test.py
or
give the absolute path of your command, i.e.:
nohup /usr/bin/python2 /home/kamran/auto_run_py_script_1.py &
The start-stop-daemon command runs our program as a daemon. The -b switch causes the program to be executed in the background, the -S switch tells it to start the program, and the -x switch specifies the executable to run; anything after -- is passed to that executable as arguments (here, the path of the script).
To test it, run:
sudo sh /etc/rc.local
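If you also need to stop the script cleanly later, start-stop-daemon can record a pid file when starting and use it when stopping; a sketch, with a hypothetical pid-file path:
# start in the background and write a pid file (-m) so the process can be found later
start-stop-daemon -b -S -m -p /var/run/test.pid -x /usr/bin/python -- /root/python/test.py
# stop it again using the same pid file
start-stop-daemon -K -p /var/run/test.pid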
I am running an Instagram bot using Python and Selenium. I use a bash script to run a Python script with the account's credentials (username, password, hashtags, etc.). I run multiple Instagram accounts, so I have made multiple copies of this file. Is there a way to put this in a single file that I can click on and run?
To open multiple terminals running their assigned account?
I've already tried just adding them all to one big file, but each script won't run until the previous one finishes.
Also, since I'm using Selenium, multithreading in Python is somewhat difficult, but I would not mind going that route if someone could point me to where I could start with that.
#!/bin/sh
cd PycharmProjects/InstaBot
python3 W.py
I highly recommend that everyone read about Bash Job Control.
Getting into multithreading is ridiculous overkill if your bottleneck has nothing to do with the CPU.
for script in PycharmProjects/InstaBot/*.py; do
python3 "$script" &
done
jobs
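To combine this with per-account credentials, one option is to keep one credentials file per account and loop over those instead; a sketch, assuming W.py accepts a credentials file as its first argument and that the files live in a hypothetical credentials/ directory:
#!/bin/sh
cd PycharmProjects/InstaBot || exit 1
for creds in credentials/*.txt; do
    python3 W.py "$creds" &    # launch each bot as a background job
done
jobs    # list the running background jobs
wait    # block until every bot has finished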
Only one process in a shell can run in the foreground, so each command gets executed only when the previous one completes.
Adding the "&" symbol at the end of a command line tells the shell to execute that command in the background. This way the shell starts Python and continues without waiting.
This will execute two instances simultaneously, but they will both output to the same terminal:
#!/bin/sh
cd PycharmProjects/InstaBot
python3 W.py first_credentials &
python3 W.py second_credentials &
You can use the same technique to start a new terminal process for each python script:
#!/bin/sh
cd PycharmProjects/InstaBot
gnome-terminal -e "python3 W.py first_credentials" &
gnome-terminal -e "python3 W.py second_credentials" &
I've got a batch script that is pretty straightforward:
start "" "%SYSTEMDRIVE%\Program Files (x86)\Git\bin\sh.exe" --login -i -c "cd C:\python-script && python run.py"
I've not done a lot with batch scripts before, but I can tell this one just opens Git Bash interactively, then runs a command to cd into a directory and run python run.py.
When I run this .bat from a cmd prompt, Git Bash opens ever so briefly and then immediately closes, but the script never fires.
Is there any way I can pause things temporarily so I can see any error the Git Bash window may be throwing?
To let a pause display in the current window, test by removing the start command so the command executes in the local window. Also, as mentioned in the comments, if Python is installed but not found on the PATH, you need to specify the full path to it, which should then work:
"%ProgramFiles(x86)%\Git\bin\sh.exe" --login -i -c "cd C:/python-script && C:/python37/python.exe run.py"
I'm using pysftp in python 3.7 to connect to a Linux box and issue commands.
I am running a dask-worker in the background.
Here's the base command to start a dask-worker:
dask-worker my_domain:8786
Here's the command that gets sent to the Linux box (you can see I wrap it with nohup to allow it to keep running without hanging my local Python process):
nohup dask-worker my_domain:8786 > /dev/null 2>&1 &
Issuing this command works great! But I have one more thing I want to include, and I can't get it to work with this nohup wrapper. Maybe you can tell me why.
I want to run the dask-worker from a specific folder, like this:
cd /home/user/some_folder/ && dask-worker my_domain:8786
That works great, but when I wrap that in a nohup, it just hangs: my process on my machine issuing commands to Linux boxes doesn't move on to the next command.
So this doesn't work:
nohup cd /home/user/some_folder/ && dask-worker my_domain:8786 > /dev/null 2>&1 &
Do you know why? Or how I can get it to spin off my dask-worker process at that specific location? Is there a better way of doing this than chaining Linux commands together?
Here's the reason I'm doing it this way: I cannot get the pysftp.chdir() command to actually work. It doesn't change the directory on the Linux box, and maybe that's because it's CentOS 7, I don't know, but the prepackaged commands to set a working directory don't stick for me.
Thank you kindly for your attention.
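The usual explanation is that nohup only applies to the single command right after it (here, cd, which is a shell builtin anyway); the && and the dask-worker part are handled by the calling shell, outside of nohup. Wrapping the whole chain in one shell invocation keeps everything under nohup; a sketch, untested against this exact setup:
nohup sh -c 'cd /home/user/some_folder/ && exec dask-worker my_domain:8786' > /dev/null 2>&1 &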
This is probably a very simple question, but I just can't figure it out. I've coded a Python GUI application (it uses PyQt), and all I want to do is launch it with an executable script. I'm still debugging the application, so it would be nice if the terminal stayed open so I can see any errors, print statements, or exceptions thrown.
This is what I've got in my script:
#!/bin/sh
x-terminal-emulator -e cd dirname && python GUIapp.py
It successfully runs the Python application, but once the application loads, the terminal automatically closes. The application continues to run after the terminal closes.
I know I can open a terminal and then simply type in "cd dirname && python GUIapp.py", but I'm lazy.
What am I missing here?
Use the --hold or --noclose option. They have the same function under different names.
So change the script to:
#!/bin/sh
x-terminal-emulator --noclose -e cd dirname && python GUIapp.py
If you are looking for a command option, always check --help. In this case it gives you the needed information.
user#host:~/projects$ x-terminal-emulator --help
Usage: x-terminal-emulator [Qt-options] [KDE-options] [options] [args]
Terminal emulator
Generic options:
[...]
Options:
[...]
--hold, --noclose Do not close the initial session automatically when it ends.
[...]
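One more detail worth checking: as written, the && is parsed by the shell running the script, not by the new terminal, so only cd dirname is passed to -e and the Python application still starts from the original shell. Quoting the whole command keeps it inside the new terminal; a sketch:
#!/bin/sh
x-terminal-emulator --noclose -e sh -c "cd dirname && python GUIapp.py"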
I want to run a Python script at boot on Lubuntu 15.04. This Python script writes a string into a text file placed at /home/aUser/aFile.txt.
My /etc/rc.local file is:
#!/bin/sh -e
python /home/aUser/theScript.py &
exit 0
And the script /home/aUser/theScript.py is:
#!/usr/bin/python
f = open('/home/aUser/aFile.txt','w');
f.write("Some string...");
f.close();
Actually the Python script does more and runs an infinite loop, which is why I run it in the background with &. Of course I have Python installed:
~$ python --version
Python 2.7.9
I checked whether /etc/rc.local is called at boot, and it is. As proof, I added a test to /etc/rc.local like this:
#!/bin/sh -e
python /home/aUser/theScript.py &
echo "Test" >> /home/aUser/aTest.txt
exit 0
and the file /home/aUser/aTest.txt is written/created at boot.
So everything looks correct. Further proof:
if I run /etc/rc.local manually, the file aFile.txt is correctly written.
However, if I start (or reboot) the OS, the file is not written at boot.
I suspect it could be a permissions/user problem: I know that /etc/rc.local is run as root, but even if I set root or aUser as the owner of the file, the situation is the same. Running the Python script from /etc/rc.local as user aUser (with the su command) does not solve the problem either.
OK, I found the problem and fixed it, thanks to @Zac's comment.
Actually the Python script tries to open a network connection before writing the file: at boot time, when the script is run from /etc/rc.local (so it is indeed run), the network is still not ready (probably because it is a wireless network), therefore an exception is raised and the entire script stops. Catching the exception solves the problem.
So in the end it was my fault, (not) helped by rc.local, which does not provide an easy way to debug.
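For anyone debugging something similar: one easy way to make rc.local failures visible is to keep the script's output instead of discarding it, for example by redirecting it to a log file; a minimal sketch (the log path is hypothetical):
#!/bin/sh -e
# capture stdout and stderr of the boot-time run so any exception ends up in the log
python /home/aUser/theScript.py >> /home/aUser/theScript.log 2>&1 &
exit 0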