I'm trying to provide the minimal amount of information necessary here, so I've left a lot out. There are lots of similar questions around, but the most common answer (use chmod +x) isn't working for me.
I have a Python script and a shell script that sit next to each other in a GitHub Enterprise repository. In Jenkins, I check out the code from this repository. The two key steps in my Jenkinsfile look like this:
dir("$WORK/python") {
    sh "chmod +x test.sh"
    sh "python3 foo.py -t '${AUTH}'"
}
Here, $WORK is the location on the Jenkins node that the code checks out to, and python (yes, poorly named) is the folder in the repository where the Python and shell scripts live. Now, foo.py calls the shell script like this:
import subprocess

try:
    cmd = f'test.sh {repo_name}'
    subprocess.Popen(cmd.split())
except Exception as e:
    print(f'Error during repository scan: {e}')
Here, repo_name is just an argument, defined above this snippet, that I'm asking the shell script to do something with. When I run the job in Jenkins, it technically executes without error, but the exception branch above does run:
11:37:24 Error during repository scan - [Errno 13] Permission denied: 'test.sh'
I wanted to be sure that the chmod in the Jenkinsfile was running, so I opened a terminal on the machine that the code checks out to and found that the execute permissions were indeed set correctly:
-rw-r--r-- 1 adm domain users 4106 Feb 6 14:24 foo.py
-rwxr-xr-x 1 adm domain users 619 Feb 6 14:37 test.sh
I've gone around on this most of the day. What the heck is going on?
Doing this fixed it:
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
cmd = f'{BASE_DIR}/test.sh {repo_name}'
subprocess.Popen(cmd.split())
Why? In my unfamiliarity with someone else's code, I failed to notice that, buried further up before the call to the shell script, we change into a folder one level down:
os.chdir('repo_content')
The shell script does not, of course, live there, so calling it without specifying the path won't work. I'm not sure why this results in [Errno 13] Permission denied: 'test.sh', as that would seem to imply that the shell script was found; one possible explanation is that when the exec family of functions searches PATH and some candidate fails with a permission error, "permission denied" rather than "not found" is what gets reported if no other candidate succeeds. In any case, fully qualifying the path as above does exactly what I had hoped.
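A short repro may make the failure mode clearer. This is a minimal sketch assuming the same layout as in the question (repo_content is a subfolder next to the script, and 'some-repo' is a placeholder argument):

import os
import subprocess

# Resolve the script's own folder once, before any chdir happens.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

os.chdir('repo_content')  # cwd is now one level down from the script's folder

# A bare 'test.sh' is searched for on PATH, not in the folder this .py
# file lives in, so the call below fails after the chdir:
# subprocess.Popen(['test.sh', 'some-repo'])

# Anchoring the path to __file__ works regardless of the current directory:
subprocess.Popen([os.path.join(BASE_DIR, 'test.sh'), 'some-repo'])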
Related
I have a PHP script that basically runs a shell script. The shell script runs a Python file, main.py, stores its output in a file, and then moves the Python file to a different folder.
My PHP code:
<?php
if ($_GET['run']) {
    # This code will run if ?run=true is set.
    exec("/home/ubuntu/python/script.sh");
}
?>
<!-- This link will add ?run=true to your URL, myfilename.php?run=true -->
<a href="myfilename.php?run=true">Click Me!</a>
My script's code is this:
#!/bin/bash
/usr/bin/python3 /home/ubuntu/python/main.py > /home/ubuntu/python/output/mainOut
mv /home/ubuntu/python/main.py /home/ubuntu/python/outputMain
My script works fine when I run it from the command line, but with PHP it gives me a permission denied error. I tried changing the user name and adding sudo to the script, but it is not working. Any suggestion will help.
This is my error log:
mv: cannot move '/home/ubuntu/python/main.py' to '/home/ubuntu/python/outputMain/main.py': Permission denied
Quick and dirty workaround: sudo chmod 777 /home/ubuntu/python -R
If you want to solve it properly, here's my thinking:
First, check which user and group apache/nginx runs as.
Then check that user's permissions on the file/folder in question (not strictly necessary, but it shows the problem more specifically).
Finally, add the nginx user to the owning group with usermod -aG GROUP USER, then run sudo chmod 775 DIR -R (or you can set the permissions more precisely).
The difference between 777 and 775 is: 777 lets any user on your server edit that file, while 775 only allows the owner and group members to edit it.
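To make that check concrete, here is a small diagnostic sketch in Python (a hypothetical helper, not part of the answer above; the target path is the one from the question). Running it as the web server's user, for example via sudo -u www-data python3 perm_check.py, shows whether that user could write to the destination:

import grp
import os
import pwd
import stat

# Hypothetical diagnostic: report who owns the destination directory and
# whether the *current* process would be allowed to write into it.
target = '/home/ubuntu/python/outputMain'

st = os.stat(target)
print('owner:', pwd.getpwuid(st.st_uid).pw_name)
print('group:', grp.getgrgid(st.st_gid).gr_name)
print('mode: ', stat.filemode(st.st_mode))
print('writable by this process:', os.access(target, os.W_OK))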
If I'm not wrong, you just need to perform sudo chmod 755 or 777 on the .sh, and that's all.
Can you help me? I need to run a script on startup that starts some services; I use Django with Python on an Ubuntu server.
I have seen many crontab examples, and crontab is what I will use: every time the server restarts, I want to run a script that activates the virtual environment and then runs python3 manage.py runserver_plus. I also restart the server every night, and that part worked in crontab, but I can't execute what the script contains. Can you help me? I am not an expert, but I have managed to do something.
Is it the path of the script?
I tried running the command directly and got no results. Here is what I have:
root@server:/home/admin-server# pwd
/home/admin-server
root@server:/home/admin-server# ls -l
drwxrwxr 3 admin-server admin-server 4096 Nov 20 17:25 control_flota
-rwxr--r-- 1 root root 141 Nov 20 18:00 server_script.sh
My new script still gives no results :/ and I don't know why:
#!/bin/bash
echo "Welcome"
cd /home/admin-server/control_flota/
source venvp1/bin/activate
echo "Thanks"
You can activate the virtual environment from within the shell script, prior to running any manage.py commands:
#!/bin/bash
cd /your_code_directory
source env/bin/activate
python ./manage.py runserver_plus
Ensure you save the file with the .sh extension, then give it execute rights:
chmod u+x your_script.sh
You should then be able to call it from cron; use sudo crontab if you run into permission issues.
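If cron still seems to pick up the wrong interpreter, one quick sanity check (a hypothetical snippet, not part of the script above) is to log which Python the job actually runs:

import os
import sys

# When the venv is active, sys.executable should point inside env/bin.
print('interpreter:', sys.executable)
print('sys.prefix:', sys.prefix)
print('PATH:', os.environ.get('PATH', ''))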
I had to push an .exe file to Heroku to be able to create invoice PDFs. It works locally without any problems, but on Heroku I get an error:
OSError: [Errno 13] Permission denied
Probably because I am not allowed to execute .exe files. So I somehow need to create a rule that allows this file to be executed.
I pushed wkhtmltopdf.exe to Heroku, and I access this file in my method to create a PDF:
import os
import pdfkit

MYDIR = os.path.dirname(__file__)
path_wkthmltopdf = os.path.join(MYDIR, "static", "executables", "wkhtmltopdf.exe")
config = pdfkit.configuration(wkhtmltopdf=path_wkthmltopdf)
I have not been able to find a solution yet.
EDIT:
I tried giving permissions with chmod through the Heroku bash, and I also tried a Linux executable, but I still get the same error:
~/static/executables $ chmod a+x wkhtmltopdf-linux.exe
~ $ chmod a+x static/executables/wkhtmltopdf-linux.exe
Using sudo gave me:
bash: sudo: command not found
I'm not very familiar with Heroku, but if you can somehow get terminal access to your application's environment (for example, ssh to your server), you need to change the permissions of that file so it can be executed. To do that, run this in that terminal:
sudo chmod a+x /path/to/file/FILENAME
Also, I'm pretty sure your app on Heroku runs on Linux, specifically on Ubuntu, since it's the default (link). That means there might be difficulties running Windows executables.
Okay, I managed to fix this with a buildpack. In addition, wkhtmltopdf-pack must be installed and added to requirements.txt.
Then you have to set a config var in Heroku for the wkhtmltopdf executable, which will be generated from the files provided in the buildpack. Do not search for an .exe file.
heroku config:set WKHTMLTOPDF_BINARY=wkhtmltopdf-pack
You can also see all your config vars in the Heroku dashboard under Settings; you can create them there instead of using the CLI.
Then you have to tell the pdfkit configuration where to find the WKHTMLTOPDF_BINARY:
In my config.py:
import os
import subprocess

WKHTMLTOPDF_CMD = subprocess.Popen(
    ['which', os.environ.get('WKHTMLTOPDF_BINARY', 'wkhtmltopdf')],  # note we default to 'wkhtmltopdf' as the binary name
    stdout=subprocess.PIPE).communicate()[0].strip()
For the pdfkit configuration:
config = pdfkit.configuration(wkhtmltopdf=app.config['WKHTMLTOPDF_CMD'])
Now you should be able to create the PDF; for example:
the_pdf = pdfkit.from_string("something", False, configuration=config)
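As an aside, spawning which can be avoided: since Python 3.3 the standard library offers shutil.which, which performs the same lookup in-process. A minimal sketch under the same WKHTMLTOPDF_BINARY convention:

import os
import shutil

# Returns the absolute path of the first matching executable on PATH,
# or None if nothing is found.
WKHTMLTOPDF_CMD = shutil.which(os.environ.get('WKHTMLTOPDF_BINARY', 'wkhtmltopdf'))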
Credit to this tutorial:
https://artandlogic.com/2016/12/generating-pdfs-wkhtmltopdf-heroku/
I have created an executable .sh script which runs a Django management command.
cron.sh
#!/bin/sh
. /path/to/env/activate
cd /path/to/project
/path/to/env/bin/python manage.py some_command
I can confirm that this script and the manage.py command work by executing the script directly in a terminal:
$ /path/to/cron.sh
When I do the same via crontab, it's not working as expected.
What am I doing wrong? I can confirm there is nothing wrong with crontab itself: it executes the cron.sh file, but /path/to/env/bin/python manage.py some_command is not working as expected.
The cron log also shows:
CRON[14768]: (root) CMD /path/to/cron.sh > /dev/null 2>&1
I am using the Bitnami Django AMI (Ubuntu 14.04.5 LTS).
Update
After removing > /dev/null, I am now getting this error:
"Cannot locate wrapped file"
It seems that it is a PATH problem. I do not know whether Django uses specific paths that must be set, but AFAIK the crontab PATH is very limited for security reasons. Just to check whether that is the problem, you could run the following in a shell terminal:
echo $PATH
You will get a complete PATH, for instance:
/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
In your crontab, put it above your code:
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
Tell me if this works. If it does, try to trim the provided PATH down, or even better, use absolute locations in your code.
I have to say that I don't know whether you can perform a cd in cron like this. I have always used absolute paths, or cd /some/dir && /path/to/script args.
P.S.: I cannot make comments yet; for this reason I put this in an answer.
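To see exactly which environment cron hands to your job, you can point the same crontab entry at a tiny diagnostic script first (a hypothetical helper; the output location is arbitrary):

import os

# Dump the environment this process received; when run from cron, this
# shows the (usually very short) PATH the job actually gets.
with open('/tmp/cron_env.txt', 'w') as f:
    for key, value in sorted(os.environ.items()):
        f.write(f'{key}={value}\n')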
The problem is that you're not using the script that Bitnami uses to load all the environment variables (/opt/bitnami/scripts/setenv.sh).
I would try using this script:
#!/bin/sh
. /opt/bitnami/scripts/setenv.sh
. /path/to/env/activate
cd /path/to/project
/path/to/env/bin/python manage.py some_command
I would like to display "hello world" via MPI on different Google Cloud compute instances with the help of the following code:
from mpi4py import MPI
size = MPI.COMM_WORLD.Get_size()
rank = MPI.COMM_WORLD.Get_rank()
name = MPI.Get_processor_name()
print("Hello, World! I am process/rank {} of {} on {}.\n".format(rank, size, name))
The problem is that even though I can ssh between all of these instances without problems, I get a permission denied error message when I try to run my script. I use the following command to invoke my script:
mpirun --host localhost,instance_1,instance_2 python hello_world.py
And I get the following error message:
Permission denied (publickey).
--------------------------------------------------------------------------
ORTE was unable to reliably start one or more daemons.
This usually is caused by:
* not finding the required libraries and/or binaries on
one or more nodes. Please check your PATH and LD_LIBRARY_PATH
settings, or configure OMPI with --enable-orterun-prefix-by-default
* lack of authority to execute on one or more specified nodes.
Please verify your allocation and authorities.
* the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base).
Please check with your sys admin to determine the correct location to use.
* compilation of the orted with dynamic libraries when static are required
(e.g., on Cray). Please check your configure cmd line and consider using
one of the contrib/platform definitions for your system type.
* an inability to create a connection back to mpirun due to a
lack of common network interfaces and/or no route found between
them. Please check network connectivity (including firewalls
and network routing requirements).
--------------------------------------------------------------------------
Additional information:
I installed Open MPI on all of my nodes
I had Google automatically set up all of my ssh keys by using gcloud to log into each instance from each instance
instance type: n1-standard-1
instance OS: Linux Debian (default)
Thank you for your help :-)
New Information:
(thanks @Zulan for pointing out that I should edit my previous post instead of creating a new answer for the new information)
So, I tried to do the same with MPICH instead of Open MPI. However, I ran into a similar error message.
Command:
mpirun --host localhost,instance_1,instance_2 python hello_world.py
Error message:
Host key verification failed.
I can ssh between my two instances without problems, and through the gcloud commands the ssh keys should have been set up properly and automatically.
So, does somebody have an idea what the problem could be? I also checked the PATH, the firewall rules, and my ability to write startup files into the temp folder. Can someone please try to recreate this problem? Also, should I raise this question with Google? (I've never done such a thing before; I'm quite unsure.)
Thanks for helping :)
So I finally found a solution. Wow, this problem was driving me nuts.
It turned out that I needed to generate ssh keys manually for the script to work. I have no idea why, because the Google services had already set up keys via gcloud compute ssh, but well, it worked :)
Steps I did:
instance_1 $ ssh-keygen -t rsa
instance_1 $ cd .ssh
instance_1 $ cat id_rsa.pub >> authorized_keys
instance_1 $ gcloud compute copy-files id_rsa.pub
instance_1 $ gcloud compute ssh instance_2
instance_2 $ cd .ssh
instance_2 $ cat id_rsa.pub >> authorized_keys
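Before rerunning mpirun, it may help to confirm that non-interactive ssh works to every host the launcher will contact. Here is a small diagnostic sketch (the host names are placeholders from the question); BatchMode=yes makes ssh fail instead of prompting, which mirrors what the MPI daemons experience:

import subprocess

hosts = ['instance_1', 'instance_2']  # placeholder host names

for host in hosts:
    # 'true' is a no-op remote command; exit code 0 means key-based,
    # prompt-free login to that host works.
    result = subprocess.run(
        ['ssh', '-o', 'BatchMode=yes', host, 'true'],
        capture_output=True, text=True)
    status = 'ok' if result.returncode == 0 else 'FAILED: ' + result.stderr.strip()
    print(f'{host}: {status}')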
I will open another topic and ask why I cannot use ssh instance_2 even though gcloud compute ssh instance_2 works. See: Difference between the commands "gcloud compute ssh" and "ssh"