Python script run from rc.local does not execute

I want to run a Python script on boot of Ubuntu 14.04 LTS.
My rc.local file is as follows:
sudo /home/hduser/morey/zookeeper-3.3.6/bin/zkServer.sh start
echo "test" > /home/hduser/test3
sudo /home/hduser/morey/kafka/bin/kafka-server-start.sh /home/hduser/morey/kafka/config/server.properties &
echo "test" > /home/hduser/test1
/usr/bin/python /home/hduser/morey/kafka/automate.py &
echo "test" > /home/hduser/test2
exit 0
Everything except my Python script works fine, even the echo statement placed after the Python script line, but the Python script itself doesn't seem to run.
My Python script is as follows:
from subprocess import Popen, PIPE, STDOUT

cmd = ["sudo", "./sbt", "project java-examples", "run"]  # note: relative path to sbt
proc = Popen(cmd, shell=False, stdout=PIPE, stdin=PIPE, stderr=STDOUT)
proc.communicate(input='1\n')  # writes the input, closes stdin and waits for the process
which works perfectly fine if executed individually.
I went through the following questions: link
I did a lot of research but couldn't find a solution.
Edit: the echo statements are for testing purposes only. The second actual command (not counting the echo statements) starts a server which keeps on running, and the Python script also starts a listener which runs in an infinite loop, if that is any help.

The Python script tries to launch ./sbt. Are you sure what the current directory is when rc.local runs? The rule is: always use absolute paths in system scripts.
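A minimal sketch of the script with the relative path removed, assuming sbt lives in /home/hduser/morey (a guess for illustration; adjust to the real location):

from subprocess import Popen, PIPE, STDOUT

SBT_DIR = "/home/hduser/morey"  # hypothetical install directory of sbt
cmd = ["sudo", SBT_DIR + "/sbt", "project java-examples", "run"]
# pin the working directory explicitly instead of relying on whatever rc.local uses
proc = Popen(cmd, cwd=SBT_DIR, shell=False, stdout=PIPE, stdin=PIPE, stderr=STDOUT)
proc.communicate(input='1\n')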

Do not run the Python script in the background; run it in the foreground, and do not exit from its parent script. Better, call another script from rc.local that does all the work of the echo statements and the script launching.
Run that script from rc.local, not in the background (no &).
You do not need sudo, as rc.local is run as root.
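For example, a launcher along these lines (a sketch only: startup.py is a hypothetical name, and the paths are taken from the question). rc.local then needs just the one line /home/hduser/startup.py, with no & and no sudo:

#!/usr/bin/python
# /home/hduser/startup.py -- hypothetical launcher called from rc.local
import subprocess

# zkServer.sh start returns once ZooKeeper is launched
subprocess.call(["/home/hduser/morey/zookeeper-3.3.6/bin/zkServer.sh", "start"])
# the Kafka server stays in the foreground, so keep it in the background with Popen
subprocess.Popen(["/home/hduser/morey/kafka/bin/kafka-server-start.sh",
                  "/home/hduser/morey/kafka/config/server.properties"])
# run the listener in the foreground so this launcher does not exit underneath it
subprocess.call(["/usr/bin/python", "/home/hduser/morey/kafka/automate.py"])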

If you want to run a Python script at system boot, there is an alternative solution which I have used.
1. Create an sh file, e.g. sample.sh, and paste in the following content:
#!/bin/bash
clear
python yourscript.py
2. Now add a cron job that runs at reboot. On Linux:
a. Run crontab -e (install cron first if needed: sudo apt-get install cron)
b. Add the line: @reboot /full/path/to/sh/file > /home/path/error.log 2>&1
Then restart your device.


Jenkins not printing output of python script in console

I have a Python script (myscript.py) as follows:
#!/bin/python
import os
import optparse
import subprocess
import sys
sys.stdout.flush()
print("I can see this message on Jenkins console output")
cmd="sshpass -p 'xxx' ssh test@testmachine 'cmd /c cd C:\stage && test.bat'"
retval=subprocess.call(cmd,shell=True)
print retval
In Jenkins, I have a job with an Execute Shell step as follows:
#!/bin/sh
./myscript.py
Problem:
Jenkins console shows only "I can see this message on Jenkins console output".
If there is any output from the subprocess call, it does not print it out on the console.
If I PuTTY into Server A and run the same command (./myscript.py) in a shell, I can see the output of the subprocess call.
How can I print this output of subprocess call on Jenkins console?
FYI: As you can see from my command, the subprocess call is running a batch file on Windows; Jenkins is running on Linux; there is SSH set up between the two machines.
Edit:
My test.bat looks like this:
echo off
RMDIR /S /Q C:\Test
IF %ERRORLEVEL% NEQ 0 (
ECHO Could not delete
EXIT /b %ERRORLEVEL%
)
If I run this batch file locally on the Windows server, it returns 1 (because I am holding a file open in the Test folder).
But when the Python script calls this batch file using the subprocess call, all I get is a zero for retval.
Why is this and how to fix this? If I can capture the correct retval, I can make the Jenkins job fail.
Edit 12/12:
Helllo!! Anybody! Somebody! Help!
I wonder if it has anything to do with stdout being buffered.
Can you try setting PYTHONUNBUFFERED before running your command?
export PYTHONUNBUFFERED=true
In my Jenkins environment, executing python scripts with the unbuffered argument makes the output appear immediately. Like this:
python3 -u some_script.py
More information comes from the help menu (python3 --help):
-u : force the stdout and stderr streams to be unbuffered;
this option has no effect on stdin; also PYTHONUNBUFFERED=x
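Note that the script in the question calls sys.stdout.flush() before the print, which flushes an empty buffer. If you would rather fix it inside the script, flush after each write (a sketch):

import sys

print("I can see this message on Jenkins console output")
sys.stdout.flush()  # flush after writing, so the line reaches Jenkins before the subprocess starts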
TL;DR
The fix is to use some conditional execution (the || operator) on rmdir to fix the errorlevel being returned.
Investigation
This was a corker of a bug, with quite a few twists and turns! We initially suspected that the stdout chain was broken somehow, so looked into that through explicit use of pipes in Popen and then removing sshpass from your command and so using the output from ssh directly.
However, that didn't do the trick, so we moved on to looking at the return code of the command. With sshpass removed, ssh should return the result of the command that was run. However, this was always 0 for you.
At this point, I found a known bug in Windows where rmdir (which is the same as rd) doesn't always set errorlevel correctly. The fix is to use some conditional execution (the || operator) on rmdir to fix up the errorlevel, e.g. RMDIR /S /Q C:\Test || EXIT /B 1, which forces a failed delete to return 1.
See batch: Exit code for "rd" is 0 on error as well for full details.
When you execute your script in a shell, Python sets your shell's STDOUT as the subprocess's STDOUT, so everything that gets executed gets printed to your terminal. I'm not sure why, but when you're executing in Jenkins the subprocess is not inheriting the shell's STDOUT so its output is not displayed.
In all likelihood, the best way to solve your problem is to PIPE the STDOUT (and STDERR for good measure) and print them after the process ends. Also, if you exit with the exit code of your subprocess and that code is not 0, it will mark your Jenkins job as failed.
import subprocess
import sys

p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, shell=True)
out, err = p.communicate()  # read both pipes to the end, then wait; p.wait() alone can deadlock once a pipe buffer fills
print('Got the following output from the script:\n', out.decode())
print('Got the following errors from the script:\n', err.decode())
print('Script returned exit code:', p.returncode)
sys.exit(p.returncode)

Opening a terminal application from python and running custom scripts inside it

I'm working with software called dc_shell, which provides a terminal command (also called dc_shell) on a CentOS Linux server. When I run the dc_shell command I am dropped into its own terminal, where I can run scripts/commands. (This is all done manually.)
The real problem is that I want to do this whole task from a Python program: I have Python code which does some work, and afterwards it has to open dc_shell and run some commands inside it.
I have used subprocess.Popen before, and it works fine with commands like ls or other general terminal commands. But when I run the dc_shell command it seems to crash and nothing happens, and when I try to terminate the session I get the following errors in my terminal.
Here's my code:
import subprocess

def run_scripts():
    commandtext = 'cd ..; dc_shell-xg-t; set_app_var link_library "slow.db"; set_app_var target_library "slow.db"; set_app_var symbol_library "tsmc18.sdb";'
    print(commandtext)
    process = subprocess.Popen(commandtext, stdout=subprocess.PIPE, shell=True)
    proc_stdout = process.communicate()[0].strip()
    print(proc_stdout)
and the output is:
cd ..; dc_shell-xg-t; set_app_var link_library "slow.db"; set_app_var target_library "slow.db"; set_app_var symbol_library "tsmc18.sdb";
and nothing happens... and after terminating I get:
[User@server python]$ /bin/sh: set_app_var: command not found
/bin/sh: set_app_var: command not found
/bin/sh: set_app_var: command not found
Do you need to use dc_shell to run your commands?
If so, dc_shell should be your executable, and the rest of the commands its arguments.
You should never use shell=True due to security considerations (the warning in the 2.x docs for subprocess seems much clearer to me).
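One way to do that, sketched under the assumption that dc_shell-xg-t reads its commands from stdin like an ordinary shell (the set_app_var lines are the ones from the question):

import subprocess

proc = subprocess.Popen(
    ["dc_shell-xg-t"],   # dc_shell itself is the executable; no shell=True needed
    cwd="..",            # stands in for the 'cd ..' of the original command string
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
commands = (
    'set_app_var link_library "slow.db"\n'
    'set_app_var target_library "slow.db"\n'
    'set_app_var symbol_library "tsmc18.sdb"\n'
    'exit\n'
)
out, _ = proc.communicate(commands.encode())
print(out.decode())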

subprocess Popen in python with command that changes environment

I'm trying to run a script from Python using the subprocess module, executing commands sequentially.
I'm doing this on UNIX, but before I launch Python in a new shell I need to execute a command (ppack_gnu) that sets up the environment for Python (and prints some lines to the console).
The thing is that when I run this command from Python's subprocess, the process hangs and waits for the command to finish, whereas when I type it in the UNIX console it jumps to the next line automatically.
Examples below:
From UNIX:
[user1@1:~]$ ppack_gnu; echo 1
You appear to be in prefix already (SHELL=/opt/soft/cdtng/tools/ppack_gnu/3.2/bin/bash)
1
[user1@1:~]$
From PYTHON:
processes.append(Popen("ppack_gnu; echo 1", shell=True, stdin=subprocess.PIPE))
This will print "Entering Gentoo Prefix /opt/soft/cdtng/tools/ppack_gnu/3.2 - run 'bash -l' to source full bash profiles"
in the Python console and then hang...
Popen() does not hang: it returns immediately while ppack_gnu may be still running in the background.
The fact that you see the shell prompt does not mean that the command has returned:
⟫ echo $$
9302 # current shell
⟫ bash
⟫ echo $$
12131 # child shell
⟫ exit
⟫ echo $$
9302 # current shell
($$ -- PID of the current shell)
Even in bash, you can't change the environment variables of the parent shell (without gdb or similar hacks); that is why the source command exists.
stdin=PIPE suggests that you want to pass commands to the shell started by ppack_gnu. Perhaps you need to add process.stdin.flush() after the corresponding process.stdin.write(b'command\n').
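Something along these lines (a sketch, assuming ppack_gnu starts a child shell that keeps reading commands from its stdin):

from subprocess import Popen, PIPE

process = Popen("ppack_gnu", shell=True, stdin=PIPE)
process.stdin.write(b"echo 1\n")  # command for the child (prefix) shell
process.stdin.flush()             # push it past Python's buffering
process.stdin.write(b"exit\n")    # let the child shell terminate so wait() returns
process.stdin.flush()
process.wait()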

Python paramiko hangs when trying to run bash shell

I need to run a regression script in bash shell on a remote server. I am able to successfully connect and execute different commands using paramiko. But when I try to execute '/bin/bash' my Python script hangs forever:
stdin,stdout,stderr = ssh.exec_command("pwd;/bin/bash;echo $SHELL")
Without /bin/bash, echo $SHELL works fine and returns the following:
[u'/home/akar\n', u'/tools/cfr/bin/tcsh\n']
Is there any workaround?
My first question is: what is the purpose of the bash you are executing? Literally, the command means:
pwd;        # prints '/home/akar\n', as seen in the result
/bin/bash;  # bash takes over the console (stdin/stdout/stderr) from here, so the call blocks
echo $SHELL # only runs once you type 'exit' to leave the bash started on the previous line
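If the goal is to run the regression script under bash rather than to open an interactive bash, make bash run it non-interactively so the channel closes and the reads return (a sketch reusing the ssh client from the question; /path/to/regression.sh is a placeholder):

stdin, stdout, stderr = ssh.exec_command("/bin/bash -c '/path/to/regression.sh'")
print(stdout.read().decode())
print(stderr.read().decode())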

My Raspbian doesn't reboot via a Python application

I am desperately trying to find a way to force my Raspberry Pi running Raspbian to restart when a certain condition is met (in a Python script), but I have had no success so far...
I have tried the following statements using Popen:
sudo reboot -i -p
sudo reboot -f
sudo shutdown -r -f now
I thought the problem could be calling it through the Python application itself, so I wrote a small C program to kill all running Python applications and then reboot, but no success...
My Raspberry Pi has sufficient power (the red LED is always on), and all the commands described above work fine when called directly from the command window.
Any help is appreciated!
Thanks,
EDITED:
Adding my python script as required:
from subprocess import Popen, PIPE

def reboot():
    echo.echo("Rebooting...")
    db.write_alarm(get_alarm_status())
    upload.upload_log()
    reboot_statement = "sudo shutdown -r -f now"
    popen_args = reboot_statement.split(" ")
    Popen(popen_args, stdout=PIPE, stderr=PIPE)
Try this:
create a file called reboot.py with the following contents:
import os
os.system("shutdown -r now")
then call it like this:
sudo python reboot.py
Assuming this works you can probably invoke your original script with sudo to get it to work.
You should pass shell=True if you want the shell to process the arguments:
Popen("sudo shutdown -r -f now", stdout=PIPE, stderr=PIPE, shell=True)
