I use rsync to move files from my home computer to a server. Here's the command I use to transfer and update only those directories whose names match a glob pattern. I execute this command from the toplevel/ directory in the directory structure I show below.
rsync -r --progress --include='**201609*/***' --exclude='*' -avzh files/ user#server.edu:/user/files
Here's what the file structure of the working directory on my home file looks like:
- toplevel
    - items
    - files
        - 20160315
        - 20160910
            - dir1
                - really_cool_file1
        - 20160911
            - dir2
This works fine, and the file structure on user#server.edu:/user/files is the same as on my home computer.
I wrote a Python script to do this, but it doesn't work: it also transfers files/20160315, which is not what I want.
#!/usr/bin/env python3
import os
from subprocess import run

os.chdir("toplevel")
command_run = ["rsync", "-r",
               "--progress",
               "--include='**201609*/***'",
               "--exclude='*'",
               "-avzh",
               "files/", "user#server.edu:/user/files"]
run(command_run, shell=False, check=True)
What's going on here? I had the same problem when command_run was a string, and I passed it to subprocess.run() with shell=True.
Some of those quotes are normally removed by the shell before the arguments reach the called program. With the default shell=False no shell runs, so you need to remove them yourself. This little script will show you what your parameters need to look like:
test.py
#!/usr/bin/env python3
import sys
print(sys.argv)
And then running it with your command line:
~/tmp $ ./test.py -r --progress --include='**201609*/***' --exclude='*' -avzh files/ user#server.edu:/user/files
['./test.py', '-r', '--progress', '--include=**201609*/***', '--exclude=*', '-avzh', 'files/', 'user#server.edu:/user/files']
~/tmp $
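Based on that output, a minimal sketch of the corrected argument list, with the inner quotes stripped since no shell will remove them (the actual transfer is left commented out because it needs the remote host):

```python
from subprocess import run

# Quotes that protect the patterns from a shell must NOT appear here:
# with shell=False the strings go to rsync exactly as written.
command_run = ["rsync", "-r",
               "--progress",
               "--include=**201609*/***",
               "--exclude=*",
               "-avzh",
               "files/", "user#server.edu:/user/files"]
# run(command_run, check=True)  # needs the remote host to actually run
print(command_run[3])  # --include=**201609*/***
```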
I have a bash script that I can run flawlessly in my Rpi terminal in its folder:
./veye_mipi_i2c.sh -r -f mirrormode -b 10
it works like this: Usage: ./veye_mipi_i2c.sh [-r/w] [-f] function name -p1 param1 -p2 param2 -b bus
options:
    -r                  read
    -w                  write
    -f [function name]  function name
    -p1 [param1]        param1 of each function
    -p2 [param2]        param2 of each function
    -b [i2c bus num]    i2c bus number
When I try to run it in Python (2) via my Spyder editor with os.system, I get a "0" return value, which I interpret as "successfully executed", but in fact the script has not been executed and the functions have not been performed. I know this because the script is supposed to change the camera's behaviour, and by checking the images I take afterwards I can see that nothing has changed.
import os
status = os.system('/home/pi/VeyeMipi/Camera_Folder/veye_mipi_i2c.sh -w -f mirrormode -p1 0x04 -b 10')
print status
Any idea what causes this? The bash script uses two other scripts that lie in the same folder (read and write). Could it be that it cannot execute these additional scripts when started through Python? It doesn't make sense to me, but neither do a lot of things....
Many thanks
Ok, I understand that my question was not exemplary because it lacked a minimal reproducible example, but as I did not understand what the problem was, I was not able to create one.
I have found out what the problem was. The script I am calling requires two more scripts that live in the same folder, namely the "write" script and the "read" script. When executing it in a terminal from inside that folder there was no problem, because the folder was the working directory.
I tried to execute the script within the Spyder editor and added the file location to the PATH in the user interface, but it was still unable to execute the "write" script in the folder.
Simply executing it in the terminal did the trick.
It would help if you fix your scripts so they don't depend on the current working directory (that's a very bad practice).
In the meantime, running
import subprocess
p = subprocess.run(['./veye_mipi_i2c.sh', '-r', '-f', 'mirrormode', '-b', '10'], cwd='/home/pi/VeyeMipi/Camera_Folder')
print(p.returncode)
which runs the script with its own folder as the working directory, should help.
Use subprocess and capture the output:
import subprocess
output = subprocess.run(stuff, capture_output=True)
Check output.stderr and output.stdout
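For example, a small sketch (using a deliberately failing child process) of what that inspection looks like:

```python
import subprocess
import sys

# Run a child that writes to stderr and exits non-zero, then inspect
# the captured streams instead of trusting a 0/non-0 status alone.
result = subprocess.run(
    [sys.executable, "-c", "import sys; sys.stderr.write('oops'); sys.exit(3)"],
    capture_output=True, text=True)
print(result.returncode)  # 3
print(result.stderr)      # oops
```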
The following command works:
$ pycco *.py
# generates literate-style documentation
# for all .py files in the current folder
And the following snippet in my tox.ini file works as expected:
[testenv:pycco]
deps =
    pycco
commands =
    pycco manage.py
    # generates literate-style documentation
    # for manage.py
But if I try to use a glob:
[testenv:pycco]
deps =
    pycco
commands =
    pycco *.py
...I get the following error:
File "/home/user/Documents/project/.tox/pycco/lib/python3.7/site-packages/pycco/main.py", line 79, in generate_documentation
code = open(source, "rb").read().decode(encoding)
FileNotFoundError: [Errno 2] No such file or directory: '*.py'
How can I pass *.py to pycco via tox?
The problem here is that pycco does not support glob expansion itself. What makes pycco *.py work in a terminal is that, before the execution happens, the shell expands *.py to the actual file names and passes those to the OS.
When tox runs your command there is no shell involved, so whatever you write is passed on to the OS as-is; pycco therefore receives the literal argument *.py, hence the error.
You can work around this by either explicitly listing the file paths or using the python interpreter to do the expansion:
python -c 'from glob import glob; import subprocess; subprocess.check_call(["pycco"] + glob("*.py"))'
Put the above command inside your tox commands and things will work, as Python is now doing the shell's job of expanding "*.py" to the actual list of files.
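Unpacked into a readable script, the one-liner does the equivalent of the following (the pycco call is shown but commented out, since it only makes sense inside the tox environment):

```python
from glob import glob
import subprocess

# Perform the expansion the shell would normally do for us.
files = sorted(glob("*.py"))
print(files)

# Then invoke pycco with the expanded list, exactly as the shell would:
# subprocess.check_call(["pycco"] + files)
```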
You cannot do this directly because pycco does not (currently) support glob expansions. Instead you can create a shell script execute_pycco.sh as follows:
#!/bin/sh
pycco *.py
Update tox.ini as follows:
[testenv:pycco]
deps =
    pycco
commands =
    ./execute_pycco.sh
You will now execute your shell script in the "pycco" environment created by tox. This method also allows you to define more elaborate scripts:
#!/bin/sh
filelist=$( find . -name '*.py' | grep -v ".tox" )
# make a list of all .py files in all subfolders,
# except the .tox/ subfolder
pycco -ip $filelist
# generate literate-style documentation for all
# files in the list
I have a folder called TEST containing:
script.py
script.sh
The bash file is :
#!/bin/bash
# Run the python script
python script.py
If I run the bash file like this:
./TEST/script.sh
I get the following error:
python: can't open file 'script.py': [Errno 2] No such file or directory
How can I tell my script.sh to look in its own directory (which may change), so that I can run it from outside the TEST directory?
Trickier still: my Python file uses a sqlite database, and I have the same problem when calling the script from outside the folder; it doesn't look inside the folder to find the database!
Alternative
You are able to run the script directly by adding this line to the top of your python file:
#!/usr/bin/env python
and then making the file executable:
$ chmod +x script.py
With this, you can run the script directly with ./TEST/script.py
What you asked for specifically
This works to get the path of the script, and then pass that to python.
#!/bin/sh
SCRIPTPATH="$( cd "$(dirname "$0")" ; pwd -P )"
python "$SCRIPTPATH/script.py"
Also potentially useful:
You mentioned having this problem when accessing a sqlite DB in the same folder; running the script directly, as above, will not solve that by itself. I imagine this question may be of use to you for that problem: How do I get the path of the Python script I am running in?
You could use $0, which is the name of the currently executing program as invoked, combined with dirname, which provides the directory component of a file path, to determine the path (absolute or relative) that the shell script was invoked under. Then you can apply it to the python invocation.
This example worked for me:
$ t/t.sh
Hello, world!
$ cat t/t.sh
#!/bin/bash
python "$(dirname $0)/t.py"
Take it a step further and change your current working directory, which will also be inherited by python, thus helping it find its database:
$ t/t.sh; cat t/t.sh ; cat t/t.py ; cat t/message.txt
hello, world!
#!/bin/bash
cd "$(dirname $0)"
python t.py
with open('message.txt') as msgf:
    print(msgf.read())
hello, world!
From the shell script, you can always find your current directory: Getting the source directory of a Bash script from within. While the accepted answer to that question provides a very comprehensive and robust solution, your relatively simple case only really needs something like
#!/bin/bash
dir="$(dirname "${BASH_SOURCE[0]}")"
# Run the python script
python "$dir/script.py"
Another way to do it would be to change the directory from which you run the script:
#!/bin/bash
dir="$(dirname "${BASH_SOURCE[0]}")"
# Run the python script
(cd "$dir"; python script.py)
The parentheses ((...)) around cd and python create a subshell, so the directory does not change for the rest of your bash script. This may not be necessary if you don't do anything else in the bash portion, but it is still useful to have if you ever decide to, say, source your script instead of running it as a subprocess.
If you do not change the directory in bash, you can do it in Python using a combination of sys.argv[0], os.path.dirname and os.chdir:
import sys
import os
...
os.chdir(os.path.dirname(sys.argv[0]))
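sys.argv[0] can itself be a bare filename depending on how the script was launched, so a slightly more robust sketch resolves it to an absolute path first:

```python
import os
import sys

# Resolve the directory containing this script even when it was
# launched via a relative path, then make it the working directory so
# files that sit next to the script (e.g. a sqlite database) are found.
script_dir = os.path.dirname(os.path.abspath(sys.argv[0]))
os.chdir(script_dir)
print(os.getcwd())
```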
In my terminal, I can see a python program in execution:
python3 app.py
where can I find app.py?
I've tried looking at /proc/$pid/exe, but it links to the Python interpreter.
I have many app.py programs in my system, I want to find out exactly which is in execution with that pid.
I ran a short test on my machine and came up with this; maybe it helps.
Find the process ID (PID) of the job in question:
$ ps -u $USER -o pid,cmd | grep app.py
The PID will be in the first column; assign this number to the variable PID.
Find the current working directory of that job:
$ ls -l /proc/$PID/cwd
(for more info: cat /proc/$PID/environ)
Your app.py file will be in this directory.
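On Linux, the same lookup can be scripted from Python (a sketch; it inspects the current process's own PID here for demonstration, but any PID you own works the same way):

```python
import os

pid = os.getpid()  # substitute the PID of the 'python3 app.py' process

# /proc/<pid>/cwd is a symlink to the process's working directory;
# /proc/<pid>/cmdline holds the NUL-separated argument vector.
cwd = os.readlink(f"/proc/{pid}/cwd")
with open(f"/proc/{pid}/cmdline", "rb") as f:
    argv = f.read().split(b"\x00")
print(cwd)
print(argv[0])
```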
Check the file
/proc/[PID]/environ
There is a PWD variable that contains the full path of the directory containing the executable file.
If you are on Mac OS X, please try:
sudo find / -type f -fstype local -iname "app.py"
If you are not on Mac OS X, you can use:
sudo find / -mount -type f -iname "app.py"
The find command will start from your root folder and search recursively for all files called "app.py" (case-insensitive).
I am trying to run a script which in turn should execute a basic python script.
This is the shell script:
#!usr/bin/bash
mv ~/Desktop/source/movable.py ~/Desktop/dest
cd ~/Desktop/dest
pwd
ls -lah
chmod +x movable.py
python movable.py
echo "Just ran a python file from a shell script"
This is the python script:
#!usr/bin/python
import os
print("movable transfered to dest")
os.system("pwd")
os.system("mv ~/Desktop/dest/movable.py ~/Desktop/source")
print("movable transfered to dest")
os.system("cd ~/Desktop/source")
os.system("pwd")
Q1. The shell script is not executing the python file. What am I doing wrong?
Q2. Do I need to write the first line #!usr/bin/python in the python script?
Thank you.
You are missing a '/' in the shebang line:
#!usr/bin/python
should be
#!/usr/bin/python
Another thing I noticed: you are calling cd via os.system. Since that command is executed in a subshell, it won't change the current directory of the calling script's process.
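A quick sketch demonstrating the difference: the cd run through os.system happens in a throwaway subshell, while os.chdir affects the Python process itself:

```python
import os

start = os.getcwd()

os.system("cd /tmp")           # cd happens in a child shell, then that shell exits
print(os.getcwd() == start)    # True: our working directory is unchanged

os.chdir("/tmp")               # this changes the Python process's directory
print(os.path.samefile(os.getcwd(), "/tmp"))  # True

os.chdir(start)                # restore the original directory
```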