How to call xargs from a Python script

I am new to python and still at the level of basic learning. Recently I tried to write a script to generate new folders according to the number supplied in the input text file. After creating those folders I want to copy a file into all those folders at the same time. I can do it by typing
echo equil{1..x} | xargs -n 1 cp *.txt *
in the terminal, and it works fine. Here x is the number of folders I have in my working directory. But my concern is to make it automatic, i.e. to call it from the script, so that the user doesn't need to type this line every time in the terminal. That is why I tried this
sub2 = subprocess.call(['echo', 'equil{1..x}', '|', 'xargs', '-n', '1', 'cp', '*.txt *'])
Can anyone please guide me and show me the mistake? Actually I am not getting any error; rather, it is printing this
equil{1..x} | xargs -n 1 cp *.txt *
in the terminal after executing the rest of the script.

You have to use subprocess.Popen if you want to send data to/from stdin/stdout of your subprocesses. And you have to Popen a subprocess for each of the executables, i.e. in your example, one for echo and one for xargs.
There is an example in the docs: https://docs.python.org/2/library/subprocess.html#replacing-shell-pipeline
Another here: Call a shell command containing a 'pipe' from Python and capture STDOUT
However, instead of running echo to produce some lines, you can directly write them in python to the process stdin.
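For example, here is a minimal sketch of that approach: the folder names are generated in Python and written straight to xargs' stdin, and the *.txt glob is expanded in Python since no shell is involved (the folder count x = 5 is a hypothetical stand-in for the number read from the input file):
import glob
import subprocess

x = 5  # hypothetical: the number of folders from the input file
names = '\n'.join('equil%d' % i for i in range(1, x + 1))

# expand *.txt in Python and feed the folder names to xargs' stdin,
# so neither an echo process nor a shell is needed
p = subprocess.Popen(['xargs', '-n', '1', 'cp'] + glob.glob('*.txt'),
                     stdin=subprocess.PIPE)
p.communicate(names.encode())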

I don't think you can use subprocess.call() like this with pipes. For recipes on how to use pipes, see
https://docs.python.org/2/library/subprocess.html#replacing-shell-pipeline
I.e. you would use communicate() over two Popen processes.
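For instance, a rough sketch of the two-process recipe from the docs applied to the question's pipeline (the file name a.txt is a hypothetical stand-in for the real source files):
import subprocess

echo = subprocess.Popen(['echo', 'equil1', 'equil2', 'equil3'],
                        stdout=subprocess.PIPE)
xargs = subprocess.Popen(['xargs', '-n', '1', 'cp', 'a.txt'],
                         stdin=echo.stdout)
echo.stdout.close()  # lets echo receive SIGPIPE if xargs exits first
xargs.communicate()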

Related

Run shell commands with subprocess while displaying full messages

I want to run multiple Terminal commands from Python using subprocess, and not only execute the commands but also print their output in full to my stdout in real time (as I would see it if running the commands directly in the Terminal).
Now, using the advice here I was able to run multiple Bash commands from Python:
import subprocess

def subprocess_cmd(command):
    # run the command string in a shell and capture its stdout
    process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
    proc_stdout = process.communicate()[0].strip()
    print(proc_stdout)

subprocess_cmd('echo a; echo b; cd /home/; ls')
Output:
b'a\nb\n<Files_in_my_home_folder>'
So far so good. But if I try to run ls -w (which should raise an error),
subprocess_cmd('echo a; echo b; cd /home/; ls -w')
output:
b'a\nb'
whereas the error message should be shown as it would in Terminal:
ls: option requires an argument -- 'w'
Try 'ls --help' for more information.
I would like to print out whatever is in Terminal (simultaneously with running the command) for whatever the command is, be it running some executable, or a shell command like ls.
I am using Python 3.7+, so any solution using subprocess.run or similar is also welcome. However, I'm not sure subprocess.run handles multiple commands together, and using capture_output=True, text=True does not print error messages as they occur.
The stdout=subprocess.PIPE (or the shorthand capture_output=True which subsumes this and a few related settings) says that you want Python to read the output. If you simply want the subprocess to spill whatever it prints directly to standard output and/or standard error, you can simply leave out this keyword argument.
As always, don't use Popen if you can avoid it (and usually avoid shell=True if you can, though that is not possible in your example).
subprocess.check_call('echo a; echo b; cd /home/; ls', shell=True)
To briefly reiterate, this bypasses Python entirely, and lets the subprocess write to its (and Python's) standard output and/or standard error without Python's involvement or knowledge. If you need for Python to know what's printed, you'll need to have your script capture it, and have Python print it if required.
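For instance, a minimal sketch with subprocess.run (available since Python 3.5), which streams both stdout and stderr to the terminal as they happen because neither stream is captured:
import subprocess

# check=True mirrors check_call: raise CalledProcessError on a nonzero
# exit status; stdout and stderr go straight to the terminal
subprocess.run('echo a; echo b; cd /home/; ls', shell=True, check=True)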

How to run a python script with a list of arguments in linux terminal?

I have a .txt file with a list of input arguments for a Python script. I would like to execute the script with each argument in the list from the Linux terminal, but only the last one runs.
I'm trying this:
for p in $(cat list.txt); do eval $(echo script.py -u $p); done;
There are multiple ways to do it, but the one I would most recommend is the first below.
Change your Python script to take an argument which is the path to your list.txt. I would suggest using the argparse module in Python to handle arguments. Once the argument is read, just loop through the contents of the file in Python and use them the way you want to (see the sketch after this list). This way you can handle the arguments much better, and the program will be more robust.
You can run the Python script using command substitution with cat on list.txt: python script.py $(cat list.txt)
You can use a pipe with grep and awk: python script.py $(grep pattern file | awk '{print}' ORS=' ')
But I would still suggest the first option. It's more robust: you can handle arguments better and control errors as well.
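A minimal sketch of the first option, where process() is a hypothetical stand-in for whatever script.py -u did with each argument; you would then run it once as python script.py list.txt instead of looping in the shell:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('listfile', help='path to list.txt')
args = parser.parse_args()

with open(args.listfile) as f:
    for line in f:
        arg = line.strip()
        if arg:  # skip blank lines
            process(arg)  # hypothetical: the per-argument work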

Unset or remove most recent line of .bash_history from within Python script

The Issue
I have a Python script that when I run it from the command line I do not want to record anything within .bash_history.
The reason for this is that the script uses the Python argparse library which allows me to pass in arguments to the python code directly from the command line.
For example I could write the script so that it would use "123456" as a value in the script:
$ ./scriptname.py -n 123456
The issue is that I don't want the value 123456 stored in .bash_history. In fact, I'd rather the entire command was never stored into the .bash_history file in the first place.
What I've Tried
Subprocess & history -c
I've added the subprocess library to the top of my script and then included this directly after to attempt to proactively clear the current history of the shell I am working in:
subprocess.call("history -c", shell=True)
Theoretically this should clear the history of the current shell. I don't see errors from this so I'm assuming that it runs in some other shell. When I run it outside of the script (directly after running the command to invoke the script) it works properly.
Subprocess & unset HISTFILE
I have also used subprocess with the following with no success:
subprocess.call("unset HISTFILE", shell=True)
os.system & history -c
I've also used the os library for Python and included the following in the script:
os.system("history -c")
os.system and unset HISTFILE
I've also tried unset HISTFILE with os.system to no avail.
os.system("unset HISTFILE")
Preferred Solution Characteristics
I realize that I could simply type in unset HISTFILE or history -c after using the command. But I want this to be as much as possible a self-contained script.
Ideally the solution would prevent the ./somescript.py command from ever being recorded within .bash_history.
I need this script to output text to the terminal based on the input so I can't close the terminal immediately afterwards either.
I imagine there must be a way to do this from within the python script itself - this is my preference.
This really isn't very feasible... Adding the entry to the history file is performed by the interactive shell, and it occurs after the command has completed and the parent shell exits. It is, strictly speaking, possible, if you were to make your Python program spawn a hacky background process that did something like read the history file in a loop, re-writing it. I really can't advocate anything like this, but you could append your script with something like:
os.system("nohup bash -ic 'while :; do read -d \"\" history < \"$HISTFILE\"; echo \"$history\" | sed -e\"s#^%s.*##\" -e\"/^$/d\" > \"$HISTFILE\"; sleep 1; done &' >/dev/null 2>&1" % sys.argv[0])
I think a much better way to accomplish your goal of not recording any arguments would be to use something like var = raw_input("") instead of passing sensitive arguments on the command line.
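For example, a minimal sketch using getpass, so the value is typed at runtime and never reaches the command line or .bash_history (on Python 3 you could also use plain input(); raw_input() is its Python 2 equivalent):
from getpass import getpass

# the prompt happens inside the running script, so nothing sensitive
# ever appears on the command line; getpass also hides the typed input
number = getpass('Enter the value: ')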
You could also perhaps create a shell function to wrap your script, something like my_script(){ set +o history; python_script.py "$@"; set -o history; }?

Persistent Terminal Session in Python

I may not at all understand this correctly, but I am trying to allow a Python program to interface with a subprocess that runs commands as if on a Linux shell.
For example, I want to be able to run "cd /" and then "pwd" later in the program and get "/".
I am currently trying to use subprocess.Popen and the communicate() method to send and receive data. The first command, sent with the Popen constructor, runs fine and gives proper output. But I cannot send another command via communicate(input="pwd").
My code so far:
from subprocess import Popen, PIPE
term=Popen("pwd", stdout=PIPE, stdin=PIPE)
print(flush(term.communicate()))
term.communicate(input="cd /")
print(flush(term.communicate(input="pwd")))
Is there a better way to do this? Thanks.
Also, I am running Python 3.
First of all, you need to understand that running a shell command and running a program aren't the same thing.
Let me give you an example:
>>> import subprocess
>>> subprocess.call(['/bin/echo', '$HOME'])
$HOME
0
>>> subprocess.call(['/bin/echo $HOME'], shell=True)
/home/kkinder
0
Notice that without the shell=True parameter, the text of $HOME is not expanded. That's because the /bin/echo program doesn't parse $HOME, Bash does. What's really happening in the second call is something analogous to this:
>>> subprocess.call(['/bin/bash', '-c', '/bin/echo $HOME'])
/home/kkinder
0
Using the shell=True parameter basically says to the subprocess module, go interpret this text using a shell.
So, you could add shell=True, but then the problem is that once the command finishes, its state is lost. Each application in the stack has its own working directory, so at each level the directory will be something like this:
bash - /foo/bar
python - /foo
bash via subprocess - /
After your command executes, the python process's path stays the same and the subprocess's path is discarded once the shell finishes your command.
Basically, what you're asking for isn't practical. What you would need to do is open a pipe to Bash, interactively feed it commands your user types, then read the output in a non-blocking way. That's going to involve a complicated pipe, threads, etc. Are you sure there's not a better way?
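As a rough illustration of the direction that would take, here is a minimal sketch of one long-lived bash process. Note that communicate() can only be called once per process, so all commands are fed in a single batch; truly interactive use would need the non-blocking reads and threads mentioned above:
from subprocess import Popen, PIPE

# state such as the working directory persists between commands
# because they all run in the same shell process
shell = Popen(['bash'], stdin=PIPE, stdout=PIPE, universal_newlines=True)
out, _ = shell.communicate('cd /\npwd\n')
print(out)  # prints "/"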

Is there a way to hold a big list of strings in memory created from a Python script, and made available to the current bash shell?

Imagine I build a long list of file paths in a Python program, and then want to use it in a terminal.
I could store it into a file.
I could pipe it outside the Python program so bash can read it.
But can I just set some sort of variable so that the current shell (the one running the script) has access to the output as an ordinary bash variable?
You could set an environment variable.
But to give the current shell access to it, you probably have to use a little hack:
In your Python script, create a temporary .sh file which will set the environment variable using export yourname=yourlist; the current shell then has to source it (a child process cannot change its parent's environment, so running the file is not enough) and delete it.
Another way would be to use e.g. Unix Domain Sockets using python's socket module, and read it in your shell using socat or netcat or use python's mmap module.
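A minimal sketch of the temp-file hack (MYPATHS and setvar.sh are hypothetical names; shlex.quote requires Python 3.3+):
import shlex

paths = ['/tmp/a.txt', '/tmp/b.txt']  # hypothetical list built by the program
with open('setvar.sh', 'w') as f:
    f.write('export MYPATHS=%s\n' % shlex.quote('\n'.join(paths)))

# the shell itself must then run:  source setvar.sh && rm setvar.sh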
Create a python program to print the files to stdout.
Execute the python script using backticks to store the output of the script in a bash variable.
Note the difference in accessing the bash variable: using "$var" preserves the newlines, whilst plain $var changes them to spaces.
paddy$ echo $SHELL
/bin/bash
paddy$ unset frompython
paddy$ cat alltxt.py
from glob import glob
print("\n".join(glob("*.txt")))
paddy$ python alltxt.py
entry3.txt
entry2.txt
test_in2.txt
infile.txt
test_in.txt
entry.txt
entry1.txt
testfile.txt
paddy$ frompython=`python alltxt.py`
paddy$ echo $frompython
entry3.txt entry2.txt test_in2.txt infile.txt test_in.txt entry.txt entry1.txt testfile.txt
paddy$ echo "$frompython"
entry3.txt
entry2.txt
test_in2.txt
infile.txt
test_in.txt
entry.txt
entry1.txt
testfile.txt
paddy$
