I'm trying to write a zsh script that contains a Python one-liner which takes an argument.
#!/bin/zsh
foo_var="foo"
python -c "import sys; print sys.argv" $foo_var
(This isn't my actual code but this is the gist of what I was doing.)
That code outputs the following:
['-c', 'foo']
The one-liner got a little longer than I wanted, so I put it in a heredoc, like this:
#!/bin/zsh
bar_var="bar"
python << EOF
import sys
print sys.argv
EOF
$bar_var
(Again, not my actual code but same idea.)
which outputs:
['']
./doctest.zsh:14: command not found: bar
I need $bar_var to be on the same line as python so it will get passed as an argument, but I can't have anything on the same line as the closing 'EOF'. I also can't have anything before the heredoc because python will interpret it as a filename.
Is there a way to work around the mandatory newline after the second EOF, or better yet, is there generally a better way to do this?
(Also this is my first SO post, so please let me know if I've done something wrong in that sense)
This might do what you want; the - tells python to read the program from standard input (the heredoc), so the word after it is treated as an ordinary argument and ends up in sys.argv:
python - $bar_var << EOF
import sys
print sys.argv
EOF
In a script called generator.py I have this code:
import sys

arg = ''
for i, a in enumerate(sys.argv):
    if i != 0:
        arg += a + " "
If I print arg I can see the shell arguments minus the script's name.
That was fine, but when I passed a pattern like *.h I discovered that the shell resolves it for me, and instead of *.h the script prints:
file.h file2.h .... filen.h
My question is: how can I get the unresolved command-line string?
My guess is that you probably can't catch it - the arguments to the script get resolved by the shell before they are passed to sys.argv.
You're calling the script from the command line, and the shell resolves the glob for you and passes the result to the Python script.
If you don't want the shell to resolve the glob, just escape the wildcard *.
Your shell is expanding the * because it treats it as a wildcard.
Possible solutions are passing "*.h", '*.h' or \*.h from the command line to your script, which will then receive the literal string *.h.
As @SpoonMeiser suggested, you can check what your script actually receives by echoing the command first:
echo script.py arg1 arg2 blahblah
This will print back what is effectively called after the expansion of the glob and give you an idea of what happened.
Also thanks to @SpoonMeiser for introducing the concept of a glob for this case, which you can read about in more detail here.
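As a sketch of the quoting approach (the invocation and pattern are just the ones from this question): pass the pattern quoted so the shell leaves it alone, then expand it yourself inside Python only if you actually need the file list:

# Invoked as: python generator.py "*.h"  (the quotes stop the shell from expanding it)
import glob
import sys

pattern = sys.argv[1]                     # the literal string "*.h"
print("raw argument:", pattern)
print("expanded in Python:", glob.glob(pattern))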
I want to run a Python program on macOS Sierra that reads Terminal's output after I automatically enter a command. For example, I would type in Terminal:
$ pwd
and then Terminal would output something like:
/Users/username
How would I have Python scan what Terminal outputs and set it to a variable as a string?
>>> output = (whatever Terminal outputs)
>>> print(output)
"/Users/username"
By the way, the other posts I found do not explain in much detail how to do this on macOS, so I don't think this is a duplicate.
You could redirect the output to a file and read the file.
$ pwd > output.txt
Then read the file and take further actions based on its contents.
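A minimal sketch of that approach, assuming you ran something like pwd > output.txt in Terminal first (the filename is just the one used above):

# Read the redirected command output back into a Python string.
with open("output.txt") as f:
    output = f.read().strip()
print(output)  # e.g. /Users/username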
Use the subprocess module; it has some shortcut functions that make things easier and less complicated than using Popen directly.
>>> import subprocess
>>> output = subprocess.check_output("pwd")
>>> print(output)
b'L:\\\r\n'
You can decode this using output.decode("UTF-8") if you like, or you can use the universal_newlines keyword argument to have it done automatically, as well as sorting out newlines.
>>> subprocess.check_output("pwd", universal_newlines=True)
'L:\\\n'
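For the decode route mentioned above, the same session would look something like this (the value shown is just the illustrative one from before):

>>> output = subprocess.check_output("pwd")
>>> output.decode("UTF-8")
'L:\\\r\n'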
Edit: Following @Silvio's sensible suggestion of passing all the arguments as a list, you can do the following:
subprocess.check_output(["ls", "-l"])
Or, if you have a string sourced from elsewhere, you can call .split(), which splits it on whitespace into a list of arguments.
subprocess.check_output("ls -l /".split())
Note: I'm using Python 3 with Gnu on Windows, so I have \r\n line endings and a pwd command.
Is there a way to retrieve the path to the interpreter a UNIX shell would use for a given script? (preferably in a Python API or as shell command)?
To be used like this:
$ get_bang ./myscript.py
/usr/bin/python3
Of course I could extract it manually with a regex, but I suspect that in the real world it's more complicated than just handling the first line, and I don't want to re-invent the wheel.
The reason I need this is I want to call the script from inside another script and I want to add parameters to the interpreter.
Actually, it isn't more complicated than reading (the first word) of the first line.
Try putting the shebang on the second line (or even just putting a space before the #) and see what happens.
Also see http://www.in-ulm.de/~mascheck/various/shebang/ and http://homepages.cwi.nl/~aeb/std/hashexclam-1.html for more than you've ever wanted to know about the shebang feature.
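If you want this in Python rather than via the shell, here is a minimal sketch of that idea (the name get_bang is borrowed from the question; it doesn't try to handle the edge cases those links describe):

def get_bang(path):
    # Read only the first line; return what follows '#!', or None if there is no shebang.
    with open(path) as f:
        first_line = f.readline()
    if first_line.startswith("#!"):
        return first_line[2:].strip()
    return None

print(get_bang("./myscript.py"))  # e.g. /usr/bin/python3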
Many ways - for example:
sed -n '1s/^#!//p' filename
prints for example
/bin/sh
or (if multiword)
/usr/bin/env perl
or nothing, if there isn't a shebang.
This might be a simple question, but I am new to bash scripting and have spent quite a bit of time on this with no luck; I hope I can get an answer here.
I am trying to write a bash script that reads individual lines from a text file and passes them along as argument for a python script. I have a list of files (which I have saved into a single text file, all on individual lines) that I need to be used as arguments in my python script, and I would like to use a bash script to send them all through. Of course I can take the tedious way and copy/paste the rest of the python command to individual lines in the script, but I would think there is a way to do this with the "read line" command. I have tried all sorts of combinations of commands, but here is the most recent one I have:
#!/bin/bash
# Command Output Test
cat infile.txt << EOF
while read line
do
VALUE = $line
python fits_edit_head.py $line $line NEW_PARA 5
echo VALUE+"huh"
done
EOF
When I do this, all I get returned is the individual lines from the input file. I have the extra VALUE there to see if it will print that, but it does not. Clearly there is something simple about the "read line" command that I do not understand but after messing with it for quite a long time, I do not know what it is. I admit I am still a rookie to this bash scripting game, and not a very good one at that. Any help would certainly be appreciated.
You probably meant:
while read line; do
VALUE=$line ## No spaces allowed
python fits_edit_head.py "$line" "$line" NEW_PARA 5 ## Quote properly to isolate arguments well
echo "$VALUE+huh" ## You don't expand without $
done < infile.txt
Python may also read STDIN so that it could accidentally read input from infile.txt so you can use another file descriptor:
while read -u 4 line; do
...
done 4< infile.txt
Better yet if you're using Bash 4.0, it's safer and cleaner to use readarray:
readarray -t lines < infile.txt
for line in "${lines[@]}"; do
...
done
I'm trying to pass a file list to my Python script via arguments:
python script.py -o aaa -s bbb "filename.txt" "filename2.txt" "file name3.txt"
Unfortunately, ArgumentParser seems to ignore the quotes, and instead of giving me a list of 3 files it gives me a list of 4 elements, as follows:
1) "filename.txt"
2) "filename2.txt"
3) "file
4) name3.txt"
It completely ignores quotes. How to make it work with quotes?
It's hard to say without seeing what you're using or any code.
Your shell may be interfering, you may need to escape the spaces with \.
Example:
python script.py -o a -f file1.txt file\ 2.csv
It's hard to say without seeing your code, but assuming you are using sys.argv you can pass file arguments with quotes whenever a filename or argument contains blank spaces: python script.py "myFile.txt" "otherFile.jpeg"
Try this simple code to understand:
import sys

for n, p in enumerate(sys.argv):
    print("Parameter: %d = %s" % (n, p))
You can see that the first element of sys.argv is the name of the script you are running.
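For what it's worth, argparse itself keeps a quoted filename as a single element as long as the shell delivers it that way; here is a minimal sketch matching the invocation in the question (the -o/-s options and the positional files argument are assumptions based on that command line):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-o")
parser.add_argument("-s")
parser.add_argument("files", nargs="+")   # everything left over is treated as a filename
args = parser.parse_args()
print(args.files)  # expected: ['filename.txt', 'filename2.txt', 'file name3.txt']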
It looks like this is not Python's fault. I'm calling the Python script from inside a bash script, and that is what makes a mess of the quoted parameters (forwarding them with "$@" instead of an unquoted variable keeps the quoting intact).