Output of shell script run on Windows contains prompt - python

I have a Python script that runs a shell script using the subprocess module. It has to run on any platform, so I have two shell scripts: one for Linux/macOS (cm) and one for Windows (cm.cmd).
Let's say they both contain just a single command example_command -param.
The code that runs the shell script looks like this:
json = subprocess.run(['cm'], shell=True, capture_output=True)
This way, thanks to the shell handling the execution of the script (shell=True), it runs the script cm on Linux/macOS platforms and cm.cmd on Windows.
The output of the script is JSON, and it works properly on Linux/macOS platforms. The only problem is on Windows, where the output also contains the shell prompt, which obviously breaks the JSON.
The captured output in the json variable may look like this:
My prompt c:\ $ example_command -param
{ "json_data": ... }
How can I keep the prompt out of the subprocess output?

It's caused by a cmd.exe feature called command echoing, which is enabled by default but can be disabled using the echo command. From the documentation:
Syntax
echo [on | off]
Parameters
[on | off] Turns on or off the command echoing feature. Command echoing is on by default.
If you add echo off as the first line of the script, it disables command echoing for all subsequent commands, but the echo off command itself is still echoed. To suppress even that, simply prefix it with @.
The at sign (@) as a command prefix has the same effect as echo off, but only for a single command.
So to summarize it: simply add @echo off as the first line of the shell script (or batch file, in Windows terminology) and that's it. Only the output of the command(s) executed in the script will be sent to stdout.
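Putting it together, a minimal sketch (Python 3.7+, with @echo off as the first line of cm.cmd; example_command and the script names are taken from the question):

import json
import subprocess

# shell=True lets the shell resolve `cm`: cm.cmd on Windows, cm elsewhere.
result = subprocess.run('cm', shell=True, capture_output=True, text=True)
data = json.loads(result.stdout)  # clean JSON now that nothing is echoed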

Related

Run shell commands with subprocess while displaying full messages

I want to run multiple terminal commands from Python using subprocess, and not only execute the commands but also print the output that appears in the terminal in full to my stdout, so I can see it in real time (as I would if running the commands directly in the terminal).
Now, using the advice here I was able to run multiple Bash commands from Python:
import subprocess

def subprocess_cmd(command):
    process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
    proc_stdout = process.communicate()[0].strip()
    print(proc_stdout)

subprocess_cmd('echo a; echo b; cd /home/; ls')
Output:
b'a\nb\n<Files_in_my_home_folder>'
So far so good. But if I try to run ls -w (which should raise an error),
subprocess_cmd('echo a; echo b; cd /home/; ls -w')
Output:
b'a\nb'
whereas the error message should be shown as it would in Terminal:
ls: option requires an argument -- 'w'
Try 'ls --help' for more information.
I would like to print out whatever is in Terminal (simultaneously with running the command) for whatever the command is, be it running some executable, or a shell command like ls.
I am using Python 3.7+, so any solution using subprocess.run or similar is also welcome. However, I'm not sure whether that handles multiple commands together, and using capture_output=True, text=True doesn't print the error messages.
The stdout=subprocess.PIPE (or the shorthand capture_output=True, which subsumes this and a few related settings) says that you want Python to read the output. If you simply want the subprocess to spill whatever it prints directly to standard output and/or standard error, you can simply leave out this keyword argument.
As always, don't use Popen if you can avoid it (and usually avoid shell=True if you can, though that is not possible in your example).
subprocess.check_call('echo a; echo b; cd /home/; ls', shell=True)
To briefly reiterate, this bypasses Python entirely, and lets the subprocess write to its (and Python's) standard output and/or standard error without Python's involvement or knowledge. If you need for Python to know what's printed, you'll need to have your script capture it, and have Python print it if required.
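If you need both at once, live output on the terminal and a copy inside Python, a common pattern is to read the pipe line by line; this is one of the cases where Popen is warranted. A minimal sketch, with stderr merged into stdout so error messages show up too:

import subprocess

def run_and_echo(command):
    # Merge stderr into stdout so error messages appear in order as well.
    process = subprocess.Popen(command, shell=True, text=True,
                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    captured = []
    for line in process.stdout:  # printed as soon as the subprocess emits it
        print(line, end='')
        captured.append(line)
    process.wait()
    return ''.join(captured)

run_and_echo('echo a; echo b; cd /home/; ls -w')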

Bash script: Open Python, execute some lines, then let user take control

I have a Python program, A.py, that creates binary data upon completion. To help users analyze the output, I want to add a small script, B.sh, to the output directory that fires up a Python console and executes some commands, C, that load the data and prepare them such that a user sees what is available. After executing C, the script B.sh should keep the Python console open.
First attempt at B.sh:
I figured out that
#!/bin/sh
xterm -e python
opens a Python console and keeps it open but doesn't execute anything within that console.
Second attempt at B.sh:
I figured out that
#!/bin/sh
xterm -e python -i C.py
executes C.py (I'd prefer not to have to write an additional file for the startup commands, but I could live with that) and keeps the window open, but doesn't show what was done. More specifically, the user would be presented with the outputs of C, but not the commands that were used to achieve the outputs.
Instead, I'd like the user to be presented with a console like this:
>>> [info,results] = my_package.load(<tag>)
>>> my_package.plot(results)
>>> print(info)
<output>
>>> my_package.analyze(results)
<output>
>>>
You can do this with expect. Save the following in a file called demo.tcl:
#!/usr/local/bin/expect -f
# Spawn Python and await prompt
spawn /usr/local/bin/python3
expect ">>>"
# Send Python statement and await prompt
send "print('Hello world!')\n"
expect ">>>"
# Pass control to user so they can interact with Python
interact
Then make it executable with:
chmod +x demo.tcl
And run with:
xterm -e ./demo.tcl
After the "Hello world!" you can continue interacting, for example printing the system version info.
Your paths for Python and expect may be different, so check and alter to suit.
For anyone who happens to be using macOS (a.k.a. OSX), you can install expect with homebrew as follows:
brew install expect
And, since Macs don't ship with X11 any more, rather than install XQuartz and run xterm, you can start a new Terminal and run the Python in there quite simply with:
open -a Terminal.app demo.tcl
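If you'd rather keep everything in Python, the third-party pexpect package follows the same spawn/expect/interact pattern as the expect script above. A rough sketch of a demo.tcl equivalent (pip install pexpect; paths as above):

import pexpect

# Spawn Python and await its prompt
child = pexpect.spawn('/usr/local/bin/python3')
child.expect('>>>')
# Send a Python statement and await the prompt again
child.sendline("print('Hello world!')")
child.expect('>>>')
# Pass control to the user so they can interact with Python
child.interact()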
As suggested by user pask, I simply printed the commands before executing them.
Furthermore, I added the -c switch so I could put the Python commands directly in B.sh instead of having to write a separate file C.py.
Here is the B.sh I am using now:
#!/bin/sh
xterm -e python -i -c "print('>>> import my_package');import my_package;print('>>> [info,results] = my_package.load()');[info,results] = my_package.load()"
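If that -c string grows unwieldy, it can also be generated from a list of commands rather than written by hand. A hypothetical sketch of the same echo-then-execute idea (my_package as above):

import subprocess

commands = [
    "import my_package",
    "[info, results] = my_package.load()",
]
# Build "print the command, then run it" pairs, joined into one -c string.
body = ";".join("print({!r});{}".format(">>> " + c, c) for c in commands)
subprocess.call(["xterm", "-e", "python", "-i", "-c", body])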

How to redirect default program output?

I am trying to pipe the output of my Python scripts to pygmentize.
In the registry I've set it like this:
"C:\Anaconda\python.exe" "%1" %* | pygmentize
It's not working (nothing is piped at all). What is wrong?
You can use PowerShell for this:
C:\> PowerShell -Command "python [yourprogram.py] | pygmentize -l py3"
-Command
Executes the specified commands (and any parameters) as though they were typed at the Windows PowerShell command prompt, and then exits, unless the NoExit parameter is specified. Essentially, any text after -Command is sent as a single command line to PowerShell (this is different from how -File handles parameters sent to a script).
The value of Command can be "-", a string, or a script block. If the value of Command is "-", the command text is read from standard input.
Script blocks must be enclosed in braces ({}). You can specify a script block only when running PowerShell.exe in Windows PowerShell. The results of the script are returned to the parent shell as deserialized XML objects, not live objects.
If the value of Command is a string, Command must be the last parameter in the command, because any characters typed after the command are interpreted as the command arguments.
To write a string that runs a Windows PowerShell command, use the format:
"& {<command>}"
where the quotation marks indicate a string and the invoke operator (&) causes the command to be executed.
The documentation for -Command quoted above is from Microsoft's PowerShell.exe command-line reference.
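Applied to the registry association from the question, the command value might then look something like this (an untested sketch: the PowerShell path can vary, and the %1/%* substitution is performed by the file association, not by PowerShell):

"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -Command "& 'C:\Anaconda\python.exe' '%1' %* | pygmentize"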

Unset or remove most recent line of .bash_history from within Python script

The Issue
I have a Python script, and when I run it from the command line I do not want anything recorded in .bash_history.
The reason is that the script uses the Python argparse library, which allows me to pass arguments to the Python code directly from the command line.
For example, I could write the script so that it uses "123456" as a value:
$ ./scriptname.py -n 123456
The issue is that I don't want the value 123456 stored in .bash_history. In fact, I'd rather the entire command was never stored into the .bash_history file in the first place.
What I've Tried
Subprocess & history -c
I've imported the subprocess module at the top of my script and then included the following directly after it, to attempt to proactively clear the current shell's history:
subprocess.call("history -c", shell=True)
In theory this should clear the history of the current shell. I don't see errors from it, so I'm assuming it runs in some other shell. When I run the command outside of the script (directly after invoking the script) it works properly.
Subprocess & unset HISTFILE
I have also used subprocess with the following with no success:
subprocess.call("unset HISTFILE", shell=True)
os.system & history -c
I've also used Python's os module and included the following in the script:
os.system("history -c")
os.system and unset HISTFILE
I've also tried unset HISTFILE with os.system to no avail.
os.system("unset HISTFILE")
Preferred Solution Characteristics
I realize that I could simply type unset HISTFILE or history -c after running the command, but I want the script to be as self-contained as possible.
Ideally the solution would prevent the ./scriptname.py command from ever being recorded in .bash_history.
I need this script to output text to the terminal based on the input so I can't close the terminal immediately afterwards either.
I imagine there must be a way to do this from within the python script itself - this is my preference.
This really isn't very feasible... Adding the entry to the history file is performed by the interactive shell, and it occurs after the command has completed and the parent shell exits. It is, strictly speaking, possible, if you were to make your Python program spawn a hacky background process that reads the history file in a loop and rewrites it. I really can't advocate anything like this, but you could append your script with something like:
os.system("nohup bash -ic 'while :; do read -d \"\" history < \"$HISTFILE\"; echo \"$history\" | sed -e\"s#^%s.*##\" -e\"/^$/d\" > \"$HISTFILE\"; sleep 1; done &' >/dev/null 2>&1" % sys.argv[0])
I think a much better way to accomplish your goal of not recording any arguments would be to use something like var = raw_input("") instead of passing sensitive argument on the command line.
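A minimal sketch of that approach, using getpass so the value is read interactively and never appears on the command line (and therefore never lands in .bash_history):

import getpass

# Read the sensitive value at runtime instead of taking it via argparse;
# getpass also keeps it from being echoed to the screen.
value = getpass.getpass("Enter value for -n: ")
# ... use `value` wherever the argparse value was used before ...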
You could also perhaps create a shell function to wrap your script, something like my_script(){ set +o history; python_script.py "$@"; set -o history; }?

How to start an IPython shell, automatically run a few commands inside it, and leave it open?

I tried
echo "print 'hello'" | ipython
Which runs the command but ipython immediately exits afterwards.
Any ideas? Thanks!
Edit:
I actually need to pass the command into the interactive Django shell, e.g.:
echo "print 'hello'" | python manage.py shell
so the -i switch gimel suggested doesn't seem to work (the shell still exits after execution)
Use the same flag used by the standard interpreter, -i.
-i
When a script is passed as first argument or the -c option is used, enter interactive mode after executing the script or the command, even when sys.stdin does not appear to be a terminal. The PYTHONSTARTUP file is not read.
A Linux example, using the -c command line flag:
$ ipython -i -c 'print "hello, ipython!"'
hello, ipython!
In [2]: print "right here"
right here
In [3]:
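When -i isn't honored (as with the Django shell mentioned in the edit), a similar effect is possible from inside Python with the standard library's code module: run your setup statements, then hand the namespace to an interactive console. A minimal generic sketch (the setup line is a placeholder):

import code

greeting = 'hello'  # placeholder setup; run whatever commands you need here
print(greeting)

# Stay open: start a REPL that can see everything defined above.
code.interact(local=globals())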
Try using the ipy_user_conf.py file inside your ~/.ipython directory.
I'm not sure about ipython, but the basic Python interpreter has a command-line parameter that gives you the prompt after it executes the file you've given it. I don't have an interpreter handy to tell you what it is, but you can find it using python --help. It should do exactly what you want.
Running a custom startup script/profile script with the Django shell was marked as closed: wontfix.
However, there is a shell_plus Django extension discussed in that ticket which seems to do what you want. I haven't had a chance to check it out, but it looks like at the very least it can auto-import all the models of all installed apps (which I usually find myself doing).
Shell plus.py in django-command-extensions on Google Code
django-command-extensions homepage on Google Code
django_extensions on Github
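For reference, the usual way to try shell_plus today (the project has moved since those Google Code links) is to install django-extensions, add it to INSTALLED_APPS, and run the management command:

pip install django-extensions
# settings.py: add 'django_extensions' to INSTALLED_APPS
python manage.py shell_plus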
