When I run fabric.py to deploy my site to Ubuntu,
I get this error:
[192.168.15.143] run: rm -rf /home/user/project/weather_station/
[192.168.15.143] out: rm: cannot remove '/home/user/project/weather_station/logs/gunicorn.log': Permission denied
[192.168.15.143] out:
Fatal error: run() received nonzero return code 1 while executing!
Requested: rm -rf /home/user/project/weather_station/
Executed: /bin/bash -l -c "rm -rf /home/user/project/weather_station/"
Aborting.
Disconnecting from 192.168.15.143... done.
My thinking is that the error is caused by the permission denial.
I referenced this.
So I changed run('rm -rf {}'.format(PROJECT_DIR)) into sudo('rm -rf {}'.format(PROJECT_DIR)),
but I still get the error. Is there another approach?
Is the /home/user/project/weather_station/logs/gunicorn.log file in use by an active process? If gunicorn is running and using this file as its log file, then "Permission denied" is exactly what should happen. If this is the case, then you need to reconsider what you're trying to do, as you shouldn't be deleting a file that's being used.
In the case of a log file, the obvious solution would be to configure gunicorn to use a different location, like /home/user/logs/weather_station, so that it's outside of the path that you're trying to delete.
That point aside, if you stop the gunicorn process before executing this rm command, then your command should run successfully.
The broad issue, however, is that (I think) you're trying to delete a log file that's in use. You either need to configure gunicorn to use a different location for its log file, or else you need to stop gunicorn before you attempt to delete it, as in the sketch below.
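For example, a minimal Fabric 1.x sketch of that order of operations, assuming sudo() accepts the warn_only keyword (Fabric 1.5+), that gunicorn can be stopped with pkill, and that PROJECT_DIR is the path from the question; the task name is made up:

from fabric.api import sudo

PROJECT_DIR = '/home/user/project/weather_station/'

def teardown():
    # Stop gunicorn first so nothing is still writing to gunicorn.log;
    # warn_only lets the task continue if gunicorn isn't running.
    sudo('pkill gunicorn', warn_only=True)
    # With gunicorn stopped, the delete should go through.
    sudo('rm -rf {}'.format(PROJECT_DIR))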
I finally used sudo chmod 777 /home/user/project/weather_station/logs/gunicorn.log
Then it worked.
I have this PHP script that basically runs a shell script. The shell script runs a Python file named main.py, stores the output in a file, and afterwards moves the script (main.py) into a different folder.
My PHP code:
<?php
if ($_GET['run']) {
    # This code will run if ?run=true is set.
    exec("/home/ubuntu/python/script.sh");
}
?>
<!-- This link will add ?run=true to your URL, myfilename.php?run=true -->
<a href="myfilename.php?run=true">Click Me!</a>
My script code is this:
#!/bin/bash
/usr/bin/python3 /home/ubuntu/python/main.py > /home/ubuntu/python/output/mainOut
mv /home/ubuntu/python/main.py /home/ubuntu/python/outputMain
My script works fine when I run it from the command line, but with PHP it gives me a permission denied error. I tried changing the user name and adding sudo to the script, but it is not working. Any suggestion will help.
This is my error log:
mv: cannot move '/home/ubuntu/python/main.py' to '/home/ubuntu/python/outputMain/main.py': Permission denied
Quick and dirty workaround: sudo chmod 777 /home/ubuntu/python -R
If you want to solve it properly, here's my thinking:
First, check which user & group apache/nginx runs as (scripted in the sketch after this list).
Then check its permissions on that file/folder (not strictly necessary, but it shows the problem more specifically).
Finally, add the nginx user to the owning group, like usermod -aG GROUP USER, then sudo chmod 775 DIR -R (or you can set the permissions more precisely).
The difference between 777 and 775 is: 777 lets any user on your server edit that file, while 775 only allows the owner and group members to edit it.
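If you'd rather script those first two checks, here is a minimal Python sketch; the worker process names and the path are assumptions based on this question:

import os
import pwd
import subprocess

# Step 1: find which user the web server / PHP workers run as.
ps = subprocess.check_output(['ps', '-eo', 'user,comm']).decode()
workers = {line.split()[0] for line in ps.splitlines()[1:]
           if line.split()[-1] in ('apache2', 'nginx', 'php-fpm')}
print('web server runs as:', workers)

# Step 2: check who owns the directory the script writes to.
st = os.stat('/home/ubuntu/python/outputMain')
print('owner:', pwd.getpwuid(st.st_uid).pw_name,
      'mode:', oct(st.st_mode & 0o777))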
If I'm not wrong, you need to perform sudo chmod 755 or 777 on the .sh file, and that's all.
I'm running a Docker container that executes commands on a server.
It then logs the output to files.
I have another container that runs every few minutes and picks up everything from docker logs {name}.
I'm looking for a way to read the execution log files and print them to STDOUT for the logger service to pick the data up.
I tried something like:
subprocess.call("cat {latest_file}".format(latest_file=latest_file), shell=True) but it only prints to the console while it's running.
My question is: can I add my own files/directories to the Docker logger?
Assuming that you know the name of the log file beforehand, you can let the application continue to log as is (i.e. to a file) and symlink that file to /dev/stdout or /dev/stderr.
This is quite a common solution; for example, nginx does it:
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
EDIT:
Please note that this will only work if the application doing the logging runs as PID 1. If you have forked a process or similar, you will have to explicitly write to the stdout/stderr file descriptors of PID 1 (available under /proc/1/fd):
# forward request and error logs to docker log collector
RUN ln -sf /proc/1/fd/1 /var/log/nginx/access.log \
&& ln -sf /proc/1/fd/2 /var/log/nginx/error.log
(Please see this answer for more details)
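If symlinking isn't an option (say, the file name changes per run), a rough Python alternative is to follow the file yourself and copy new lines to the container's stdout; latest_file below is a made-up stand-in for the path from the question:

import sys
import time

latest_file = '/var/log/executions/latest.log'  # assumed path

def follow(path):
    # Yield lines as they are appended to the file, like `tail -f`.
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(0.5)

for line in follow(latest_file):
    sys.stdout.write(line)
    sys.stdout.flush()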
Interestingly enough, when I tried to clone a repository using the bash shell
ssh -t -i ~/.ssh/security.pem root@xx.xxx.xx.xx 'rm -rf myproject && git clone -b mybranch https://github.com/myproject.git'
everything works beautifully.
But when I tried to do it from a Python subprocess call, like
subprocess.check_call("ssh -t -i ~/.ssh/security.pem root@xx.xxx.xx.xx 'rm -rf myproject && git clone -b mybranch https://github.com/myproject.git'", shell=True)
I got the following error:
fatal: destination path 'myproject' already exists and is not an empty directory.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\subprocess.py", line 540, in check_call
    raise CalledProcessError(retcode, cmd)
If you are running a long bash command, you should split its arguments into a list. For example, if I wanted to run "ls -a" using the subprocess library, I would do:
subprocess.call(["ls","-a"])
Check out the docs for reference: https://docs.python.org/2/library/subprocess.html
But if you still have trouble removing a local folder, shutil.rmtree() from the shutil library will work.
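Applied to the ssh command above, the list form would look roughly like this; note the remote command stays one argument, because the remote shell is what runs both halves of the &&:

import os
import subprocess

remote_cmd = ('rm -rf myproject && '
              'git clone -b mybranch https://github.com/myproject.git')
subprocess.check_call([
    'ssh', '-t',
    '-i', os.path.expanduser('~/.ssh/security.pem'),  # no shell, so expand ~ ourselves
    'root@xx.xxx.xx.xx',
    remote_cmd,
])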
Here is what I found out: it was shell=True. For some reason, Python would execute the first shell command 'rm -rf myproject' on the remote machine, then close the SSH connection, and finally execute the second command 'git clone -b mybranch https://github.com/myproject.git' on the local machine. In my case, I have the same git repository 'myproject' in my local directory, so git tried to clone into my local directory and complained about it. After changing to shell=False, or leaving it out, everything works. I'm not sure why the shell value causes that!
I'm trying to prefill env.password using --initial-password-prompt, but remote is throwing back some strangeness. Let's say that I'm trying to cat a root-owned file as testuser, with 600 permissions on the file. I'm calling sudo('cat /home/testuser/test.txt'), and getting this back:
[testuser@testserver] sudo: cat /home/testuser/test.txt
[testuser@testserver] out: cat: /home/testuser/test.txt: Permission denied
[testuser#testserver] out:
Fatal error: sudo() received nonzero return code 1 while executing!
Requested: cat /home/testuser/test.txt
Executed: sudo -S -p 'sudo password:' -u "testuser" /bin/bash -l -c "cat /home/testuser/test.txt"
Is that piping the prompt right back into the input? I tried using sudo() with pty=False to see if it was an issue with the pseudoterminal, but to no avail.
Here's the weird part: calling run('sudo cat /home/testuser/test.txt') and invoking fab without --initial-password-prompt passes back a password prompt from remote, and on entering the password, everything works fine.
Naturally, running ssh -t testuser@testserver 'sudo cat /home/user/test.txt' prompts for a password and returns the contents of the file correctly. Do I have an issue with my server's shell config, or is the issue with how I'm using sudo()?
Down the line, I'm likely to set up a deploy user with passwordless sudo and restricted commands. That'll probably make the issue moot, but I'd like to figure this one out if possible. I'm running an Ubuntu 14.10 VPS, in case that's relevant.
Oh, my mistake. I had foolishly set env.sudo_user to my deploy user testuser, thinking that it specified the invoking user on the remote host. In fact, it specifies the target user, so I was attempting to sudo into myself. Whoops.
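For anyone who hits the same confusion, a minimal sketch of the distinction (Fabric 1.x names, values from this question; read_secret is a made-up task name):

from fabric.api import env, sudo

env.user = 'testuser'         # the user Fabric connects to the host as
# env.sudo_user = 'testuser'  # wrong: this is the user sudo() becomes on
                              # the remote, which produced -u "testuser" above

def read_secret():
    sudo('cat /home/testuser/test.txt')  # targets root by default, as intended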
I'm struggling to get a subversion post commit hook to work.
When I commit, I get the error:
Failed to start '/svn/web/hooks/post-commit' hook
I read around a bit among people who had similar problems, and they all related to a missing environment or incorrect file permissions, so I ran this:
sudo -u www-data env - ./post-commit /svn/web 70
But it worked fine! I added logging to the file, which works when I run it with the above command, but not when I commit to the repo.
Any ideas? I gave everyone execute permission (chmod a+x post-commit).
It was a problem with my line endings.
fromdos post-commit
Did the trick.
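If fromdos (or dos2unix) isn't available, the same CRLF-to-LF conversion can be done in a couple of lines of Python:

# Rewrite the hook script with Unix line endings.
with open('post-commit', 'rb') as f:
    data = f.read()
with open('post-commit', 'wb') as f:
    f.write(data.replace(b'\r\n', b'\n'))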