How can I extract the filename after the '>' sign from a command line in Python? For example:
python func.py -f file1.txt -c > output.txt
I want to get the output file name 'output.txt' from the above command line.
Thanks
You can't.
When you write something like command > file in shell, the shell:
creates the file if it doesn't exist,
opens it, and
assigns the descriptor to the stdout of command.
The called process (and it doesn't matter if that's Python or something else) doesn't know what happens to the data after it's written because it only sees the descriptor provided by the shell. It's a bit like having one end of a really long pipe: you know you can put stuff in, but you can't see what happens to it on the other end.
If you want to retrieve the file name in that call, you need to either:
pass it to your Python program as an argument and handle the redirection from Python, e.g. python func.py -f file1.txt -c output.txt (see the sketch after this list), or
output it from the shell, e.g. echo 'output.txt'; python func.py -f file1.txt -c > output.txt
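A minimal sketch of the first option using argparse; the -f and -c handling here is an assumption about what func.py actually does with those flags:
import argparse

# func.py: take the output filename as a normal argument instead of relying
# on shell redirection, so the program itself can see the name.
parser = argparse.ArgumentParser()
parser.add_argument("-f", dest="input_file")
parser.add_argument("-c", action="store_true")
parser.add_argument("output_file")  # e.g. output.txt
args = parser.parse_args()

print("writing to", args.output_file)   # the name is now visible to the program
with open(args.output_file, "w") as out:
    out.write("whatever would previously have gone to stdout\n")
Invoked as python func.py -f file1.txt -c output.txt.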
Related
I am trying to access a Python function from the command line, and I would like to write a command that prints the output in the terminal. The command below doesn't work. What could I change?
python -c 'from laser import Laser; laser = Laser();l = laser.embed_sentences("hello", lang = "en").shape == (1, 1024); print(l)'
(base) ~ % python -c 'print("hello, world")'
hello, world
Printing works fine for me when running Python through python -c. Are you sure your terminal isn't truncating your output by omitting the last (and in this case, only) line? You could try creating a single-line file (no newline at the end) and then running cat [filename]; that is how I sometimes discover that my terminal is doing this.
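For example, you could create such a test file from Python itself (a small sketch of the suggestion above):
# Write a single-line file with no trailing newline, then run
# `cat no_newline.txt` in your terminal and see whether the line shows up.
with open("no_newline.txt", "w") as f:
    f.write("this line has no trailing newline")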
-c cmd : program passed in as string (terminates option list)
That is the correct flag to use. This must be a CLI config issue, or the script is taking longer to run than you expect, so it appears that no output is generated.
Does python -c 'print("hello")' work?
I want to use pygmentize to highlight some script files (python/bash/...) that have no extension. But pygmentize requires me to specify the lexer using -l; it does not automatically identify the file type from the content.
I have the following options at hand, but none of them work now:
use file -b --mime-type. But this command outputs x-python and x-shellscript instead of python and bash, and I don't know the mapping rules
use vim -e -c 'echo &ft|q' the_file. For any file, with or without an extension, vim has a mechanism to guess the file type. But it doesn't work here, since the output goes to the vim window and disappears after q.
What can I do?
@Samborski's method works fine in the normal case, but it does not work in python subprocess.check_output since a pts (pseudo-terminal) is not allocated. If you use nvim, you can use this more straightforward way:
HOME=_ nvim --headless -es file <<EOF
call writefile([&ft], "/dev/stdout")
EOF
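Calling that from Python might look like this (a sketch; it assumes nvim is on PATH and Python 3 with check_output's input= argument):
import os
import subprocess

def detect_filetype(path):
    # Feed the same Ex command to headless nvim over stdin and capture what it
    # writes to /dev/stdout; HOME is overridden as above to skip user config.
    out = subprocess.check_output(
        ["nvim", "--headless", "-es", path],
        input=b'call writefile([&ft], "/dev/stdout")\n',
        env={**os.environ, "HOME": "_"},
    )
    return out.decode().strip()

print(detect_filetype("some_file"))  # e.g. "python" or "sh"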
You can use vim this way:
vim -c ':silent execute ":!echo " . &ft . " > /dev/stdout"' -c ':q!' the_file
It simply constructs the command to run in the shell by string concatenation.
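Alternatively, option 1 from the question can be made to work by mapping file's MIME types onto pygmentize lexer names. A sketch, where the mapping is a hand-maintained assumption covering only the two cases mentioned above:
import subprocess

LEXER_BY_MIME = {
    "text/x-python": "python",
    "text/x-shellscript": "bash",
}

def highlight(path):
    # `file -b --mime-type` prints e.g. "text/x-python"; translate that into a
    # lexer name that pygmentize's -l option understands.
    mime = subprocess.check_output(["file", "-b", "--mime-type", path]).decode().strip()
    lexer = LEXER_BY_MIME.get(mime)
    if lexer is None:
        raise ValueError("no lexer known for %s (%s)" % (path, mime))
    subprocess.call(["pygmentize", "-l", lexer, path])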
I am working with this python script: https://github.com/sarchar/addressgen/blob/master/genaddress.py
Currently it works via command line like so:
python3 genaddress.py -p passphrase
How would I alter the script to accept a text file of passphrases?
I know this might not directly answer the question (how would I alter the script), but it should achieve a similar result in bash with the following command, assuming each passphrase has its own unique output:
cat passphrases.txt | xargs -I phrase python3 genaddress.py -p phrase
This iterates through each line in passphrases.txt and passes it to your script, one line at a time.
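If you would rather stay in Python, a hypothetical wrapper along the same lines (it likewise does not modify genaddress.py; the passphrase file name is an assumption):
import subprocess

# Run the existing script once per line of passphrases.txt, skipping blanks.
with open("passphrases.txt") as f:
    for line in f:
        passphrase = line.rstrip("\n")
        if passphrase:
            subprocess.run(["python3", "genaddress.py", "-p", passphrase], check=True)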
I have a problem. Whenever I write a Python script, say like this:
#!/usr/local/bin/python
print "hello"
Then using
chmod +x a.py
and then running ./a.py, it does not print anything in the terminal.
Moreover, whenever I write a comment below the shebang line, it gives me an error saying
#: bad interpreter : No such file or directory
but when I run the script like this: python a.py, it works as usual.
Can someone tell me what's wrong and how to fix it?
This is almost certainly because your line ending is a carriage-return/line feed combination - which Windows-style editors will create. Unix regards the LF as the end of the line, so it's looking for an executable called "python\r". When you run it with an explicit call to the interpreter the shebang line is just treated as a comment.
Under Linux, a text-replacement fix:
sed -i 's/^M//g' filename
(note that ^M here is a single carriage-return character, not a caret followed by a capital M; to type it, hold Ctrl, press V, then press M while still holding Ctrl, i.e. Ctrl+V followed by Ctrl+M)
This fixes the bad line ending in the shebang line when the error message looks like this:
/usr/bin/python3 ^M: bad interpreter: No such file or directory
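If you prefer, the same fix can be done from Python itself; a minimal sketch, assuming the affected file is small enough to read into memory:
# Rewrite the file with Unix line endings by stripping the carriage returns.
from pathlib import Path

path = Path("a.py")  # the affected script
path.write_bytes(path.read_bytes().replace(b"\r\n", b"\n"))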
Is there a way to create a Python script which wraps an entire bash command, including the pipes?
For example, if I have the following simple script
import sys
print sys.argv
and call it like so (from bash or ipython), I get the expected outcome:
[pkerp@pendari trell]$ python test.py ls
['test.py', 'ls']
If I add a pipe, however, the output of the script gets redirected to the pipe sink:
[pkerp@pendari trell]$ python test.py ls > out.txt
And the > out.txt portion is not in sys.argv. I understand that the shell automatically processes this output, but I'm curious if there's a way to force the shell to ignore it and pass it to the process being called.
The point of this is to create something like a wrapper for the shell. I'd like to run the commands regularly, but keep track of the strace output for each command (including the pipes). Ideally I'd like to keep all of the bash features, such as tab-completion and up and down arrows and history search, and then just pass the completed command through a python script which invokes a subprocess to handle it.
Is this possible, or would I have to write my own shell to do this?
Edit
It appears I'm asking the exact same thing as this question.
The only thing you can do is pass the entire shell command as a string, then let Python pass it back to a shell for execution.
$ python test.py "ls > out.txt"
Inside test.py, something like
import subprocess, sys
subprocess.call("strace " + sys.argv[1], shell=True, executable="/bin/bash")
to ensure the entire string is passed to the shell (and bash, specifically).
Well, I don't quite see what you are trying to do. The general approach would be to give the desired output destination to the script using command line options: python test.py ls --output=out.txt. Incidentally, strace writes to stderr. You could capture everything using strace python test.py > out 2> err if you want to save everything...
Edit: If your script writes to stderr as well you could use strace -o strace_out python test.py > script_out 2> script_err
Edit2: Okay, I understand better what you want. My suggestion is this: Write a bash helper:
function process_and_evaluate()
{
strace -o /tmp/output/strace_output "$@"
/path/to/script.py /tmp/output/strace_output
}
Put this in a file like ~/helper.sh. Then open a bash shell and source it using . ~/helper.sh.
Now you can run it like this: process_and_evaluate ls -lA.
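For completeness, roughly the same thing driven from Python instead of a sourced bash function (a sketch; the paths are the same placeholders as in the helper above):
import subprocess
import sys

cmd = sys.argv[1:]  # e.g. ["ls", "-lA"]
# Trace the command, then hand the trace file to the processing script.
subprocess.call(["strace", "-o", "/tmp/output/strace_output"] + cmd)
subprocess.call(["/path/to/script.py", "/tmp/output/strace_output"])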
Edit3:
To capture output / error you could extend the macro like this:
function process_and_evaluate()
{
out=$1
err=$2
shift 2
strace -o /tmp/output/strace_output "$@" > "$out" 2> "$err"
/path/to/script.py /tmp/output/strace_output
}
You would have to use the (less obvious) process_and_evaluate out.txt err.txt ls -lA.
This is the best that I can come up with...
At least in your simple example, you could just run the python script as an argument to echo, e.g.
$ echo $(python test.py ls) > test.txt
$ more test.txt
['test.py', 'ls']
Enclosing a command in parentheses with a dollar sign first executes the contents, then passes the output as arguments to echo.