The Jenkins job is set up so that it checks out the latest version of a git repo and executes some Python code. The repo is checked out to our Linux lab PC and the code runs there.
In the script, we check the status of some lab-PC network interfaces. I made a small script which executes the following lines, but it throws an error like "No such file or directory." The command itself is fine; it fails because the Linux environment is not visible. The strange thing is that we have about 10 testcases: in 6 of them it works perfectly fine, in 4 it fails, and it always fails for just those 4. The sequence of events is exactly the same in all the testcases ...
import subprocess  # 'logger' is assumed to be configured elsewhere in the script

res = subprocess.check_output(['ip', 'link', 'show', 'dev', '<interface name>'])
logger.info(res)
The script works when executed locally, so there is a Jenkins issue behind this. Does anybody have any tips to resolve this?
The problem was solved by putting 'sudo' before the command. Even though the command doesn't require sudo rights when run locally, it does when run from Jenkins.
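As a sketch of that fix applied to the snippet from the question ('<interface name>' is still a placeholder; the 'sudo' prefix is the only change):
import subprocess

# Same call as in the question, with 'sudo' prepended as described above.
# The Jenkins user may need passwordless sudo for this to work non-interactively.
res = subprocess.check_output(['sudo', 'ip', 'link', 'show', 'dev', '<interface name>'])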
In Jenkins you can open the node's Script Console, where you can execute Groovy commands on the server, and test there that the command works.
I'm trying to run a python script on my raspberrypi using cron.
I did the following:
crontab -e # To edit a crontab job
After the cron file opened, I added the following line:
@reboot /usr/bin/python /home/pi/path/to/file/example.py > /home/pi/cronlogs/mylog.log # JOB_ID_!
If I understand the documentation correctly, this cron job should be executed every time the system boots up. However, in my case the script is not executed when I reboot the computer.
What's strange:
I checked the log file and it's empty, so it looks as though everything went fine
If I run the given command manually (so basically write the following code to the terminal) it executes and works correctly: /usr/bin/python /home/pi/path/to/file/example.py > /home/pi/cronlogs/mylog.log
I guess I missed something really obvious, but I can't see it. Can I please ask for advice on how to debug this? Thanks!
The cron definition looks correct; I just checked this on my Pi running Debian stretch and it works OK:
@reboot /usr/bin/python /home/pi/example.py > /home/pi/mylog.log
Some other possible reasons it might not work:
working directory issue (if you're using relative paths)
a long running script (being a scraping script it might take a while to complete) - you can check if it's still running using ps aux | grep python
the script does not output anything (would need some more details about the script)
Just to be sure you catch any errors from the script, redirect stderr to stdout by appending 2>&1
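For example, the entry from the question would become (same paths, only the redirect added):
@reboot /usr/bin/python /home/pi/path/to/file/example.py > /home/pi/cronlogs/mylog.log 2>&1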
I am using dryscrape in a Python script. The Python script is called by a bash script, which is run by cron. For those who may not be aware, dryscrape is a headless browser (it uses QtWebkit in the background, so it requires an X session).
Here are the main points concerning the issue I'm having:
When I run the python script from the command line, it works
When I run the bash script from the command line, it works too
I figured this might have something to do with the environments differing between my command prompt and the cron job, so I modified my bash script to source my .profile as follows:
#!/bin/bash
. /full/path/to/my/home/directory/.profile
python script_to_run.py
This is what my crontab entry looks like:
0,55 14-22 * * 1-5 /path/to/script.sh >> $(date "+/path/to/logs/\%Y\%m\%d.mydownload.log" )
By the way, I know that the job is being run (I can see entries in /var/log/syslog, and the script also writes to a log file - which is where I get the error message below):
In all cases, I got the following error message:
Could not connect to X server. Try calling dryscrape.start_xvfb()
before creating a session
I have installed the prerequisites on my machine (obviously, since it runs at the command line). At the moment I have run out of ideas.
What is causing the script to run fine at the console, and then fail when run by cron?
Relevant details:
OS: Ubuntu 16.04 LTS
bash: version 4.3.46(1)
cron user: myself (i.e. same user at the command prompt)
dryscrape: version 1.0.1
The solution to this was to call the dryscrape.start_xvfb() method before starting the dryscrape session.
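A minimal sketch of that fix (the URL is a placeholder, not from the question):
import dryscrape

# Start a virtual X server first so the headless QtWebkit backend
# has a display to attach to, even when run from cron.
dryscrape.start_xvfb()

session = dryscrape.Session()
session.visit('http://example.com')  # placeholder URL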
The cron user does not have a display, so you cannot run any command which requires one.
You need to modify the Python script so that it does not use any kind of display (check carefully, because some Python calls, even though they do not open any display, internally check the DISPLAY variable).
The best way to test is to ssh into the machine (an ssh session has no display) and check whether you can run the script from there without errors.
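A quick way to confirm what the script actually sees in each environment (a small sketch; under cron, DISPLAY is normally unset):
import os

# In an interactive X session this typically prints something like ':0';
# under cron or a plain ssh session it usually prints '<not set>'.
print('DISPLAY =', os.environ.get('DISPLAY', '<not set>'))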
I wrote a Python script that uses subprocess.Popen() to run and manipulate two GUI programs: Firefox and VLC player. I am using the Ubuntu 14.04 LTS operating system in desktop mode.
My problem is that when I try to run that Python script at system startup, the script runs but Firefox and VLC don't start.
So far, I have tried making a shell script that runs my Python script and executing it from crontab with @reboot /home/user/startup.sh. I set all permissions for every script involved. I gave my user root permissions, so everything is OK on that front.
I also tried running my script by putting the command sudo python /path/to/my/script.py in the /etc/rc.local file, but that did not help either.
I googled and found people using .desktop files placed in the ~/.config/autostart/ directory, but that also failed. Here is an example of what I wrote:
[Desktop Entry]
Type=Application
Exec="sudo python /home/user/path_to_my_script/my_script.py"
X-GNOME-Autostart-enabled=true
Name=screensplayer
Comment=screensplayer
And I saved this as program.desktop in the ~/.config/autostart/ directory, but it does not work. I am sure there is a way to fix this but I don't know how. Any help will be appreciated!
Found the solution to my problem. When you run commands with subprocess.Popen in Python like this:
import os
import subprocess

FNULL = open(os.devnull, 'w')
FIREFOX_START = ["sudo", "firefox", "test.com/index.html"]
subprocess.Popen(FIREFOX_START, stdout=FNULL, stderr=subprocess.STDOUT)
it won't launch the apps because of the "sudo" prefix; when I removed it, it worked.
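For clarity, here is the working call with the "sudo" prefix removed (a self-contained version of the snippet above):
import os
import subprocess

FNULL = open(os.devnull, 'w')
# Same command as above, minus "sudo".
subprocess.Popen(["firefox", "test.com/index.html"], stdout=FNULL, stderr=subprocess.STDOUT)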
Also, run gnome-session-properties in a terminal and add a new startup application; be aware that you have to execute the Python script without sudo, like this:
python /home/user/path_to_script/script.py
Also, I granted my user root privileges, so keep that in mind.
I'm trying to run a Python script from Bamboo. I created a script task and wrote inline "python myFile.py". Should I be listing the full path for python?
I changed the working directory to the location of myFile.py, so that is not the problem. Is there anything else I need to do in the plan configuration to run this script properly? It isn't running, but I know it should, because the script works fine from the terminal on my local machine. Thanks
I run a lot of Python tasks from Bamboo, so it is possible. Using the Script task is generally painless...
You should be able to use your script task to run the commands directly and have stdout written to the logs. That being the case, you can run:
'which python' -- outputs the path of the python executable being used.
'pip list' -- outputs the list of modules installed with pip.
You should verify that the output of these commands in Bamboo matches the output when you run them manually on the server. I'm guessing they won't match up, and once that is addressed, everything will work fine.
If not, comment back and we can look at a few other things.
For the future, there are a handful of ways you can package things with Python that could help with this problem (e.g. automatically installing missing modules, etc.).
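One common sketch of that idea (assuming a requirements.txt is checked in alongside myFile.py; the requirements file is an assumption, not something from the question):
pip install -r requirements.txt
python myFile.py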
You can also use the Script Task directly with an inline Python script to run your myFile.py:
/usr/bin/python <<EOF
print "Hello, World!"
EOF
Check this page for a more complex example:
https://www.langhornweb.com/display/BAT/Run+Python+script+as+a+Bamboo+task?desktop=true&macroName=seo-metadata
I'm running Windows 8 and would like to launch a Spark cluster. I'm using this tutorial. It doesn't work from the Windows CLI, so I tried installing and using Cygwin. With that I was able to change the environment variables and also run the ec2 script, but I get the error:
ERROR: The identity file must be accessible only by you.
You can fix this with: chmod 400 "SpakPlaygroundKeyPair.pem"
So I'm stuck here. I saw that in this question it was suggested to run the Python file directly, which is actually what I want to do, but I'm not sure how. For example, when you run the script, you have to specify things like:
--key-pair=SpakPlaygroundKeyPair --identity-file=SpakPlaygroundKeyPair.pem --region=us-east-1 --zone=us-east-1a --instance-type=t2.micro launch my-spark-cluster
How do you tell that to the python script?
I ran into the same issue on Windows 10. Luckily the file-permission requirement is coded into the spark_ec2.py script, and is not a fundamental limitation of the AWS Python API.
I ended up commenting out the following lines in the spark_ec2.py script:
if not (file_mode & S_IRUSR) or not oct(file_mode)[-2:] == '00':
    print("ERROR: The identity file must be accessible only by you.", file=stderr)
    print('You can fix this with: chmod 400 "{f}"'.format(f=opts.identity_file),
          file=stderr)
    sys.exit(1)
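With that check commented out, you can invoke the script directly with the same arguments as in the question (a sketch; spark_ec2.py is normally called via the spark-ec2 wrapper, which just passes these flags through):
python spark_ec2.py --key-pair=SpakPlaygroundKeyPair --identity-file=SpakPlaygroundKeyPair.pem --region=us-east-1 --zone=us-east-1a --instance-type=t2.micro launch my-spark-cluster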
Simply run the suggested fix, like this:
$ chmod 400 "SpakPlaygroundKeyPair.pem"
This should give only you read permission on the pem file.