When I type
$ git br<tab>
Git automatically completes the option into this:
$ git branch
Suppose I want to imitate this functionality in my fictitious program say.py:
#!/usr/bin/python
import sys

args = ['morning', 'night']
if sys.argv[1] == args[0]:
    print "Mr. Tacha Vinci! Good morning!"
elif sys.argv[1] == args[1]:
    print "Mr. Tacha Vinci, sweet dreams..."
Such that when I do:
$ say.py mor<tab>
I get:
$ say.py morning
It's not Git that does the completion, it's your shell -- most probably Bash, and specifically by way of readline. (The link is to the Python bindings for this library, which is what you would use to provide completion inside a running Python program. To create Bash completions, look at e.g. the ABS intro.)
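For a quick sketch of the Bash side, the shell built-in `complete` can register a static word list for say.py (the word list here is an assumption, mirroring the `args` list inside the script):

```shell
# Register a word-list completion for say.py (run this in your shell,
# or put it in ~/.bashrc). The words mirror the args list in say.py.
complete -W "morning night" say.py

# compgen is the machinery behind complete; it shows which words would
# be offered for a given prefix:
compgen -W "morning night" -- mor
```

After sourcing this, typing `say.py mor<tab>` in that shell session completes to `say.py morning`.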
Related
A project I'm working on uses a custom CLI handler instead of Python's cmd.Cmd class. Without going into too much detail, the handler features TAB-key completion to assist the operator with command usage. The feature works as expected on Windows (using pyreadline) and Linux (using GNU readline).
Here is an example of the expected behavior (assume "cmd > " is the prompt and that [TAB] is a push of the TAB key):
cmd > [TAB]
cd exit load save # all the available commands
cmd > c[TAB] # autocompletes to 'cd'
cmd > cd [TAB]
cd ./folder1 cd ./folder2 cd ./folder3 # folders in the cwd
cmd > cd C:\[TAB]
cd C:\Users cd C:\Windows... # enumerates folders in C:\ (on windows)
cmd > cd /[TAB]
cd /bin cd /opt cd /usr... # enumerates folders from root (on linux)
The custom class defines the following tab completion method, which is set using readline.set_completer():
def tab_completer(self, text, state):
    # readline delims are set to "" so we get the whole line as a single string
    words = re.split(r'\s+', text)
    # find_subcompleter populates a list of possible matches or next words;
    # each command implements its own completer stub depending on the function
    # (e.g. cd will complete directory names)
    retval = self.find_subcompleter(words.pop())
    try:
        return retval[state]
    except IndexError:
        return None
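As a minimal, self-contained sketch of the same wiring (the static word list here is a stand-in for find_subcompleter, which I don't have):

```python
import readline

COMMANDS = ("cd", "exit", "load", "save")  # stand-in for the real command set

def tab_completer(text, state):
    # Return the state-th command matching the typed prefix; readline keeps
    # calling with state = 0, 1, 2, ... until the completer returns None.
    matches = [c for c in COMMANDS if c.startswith(text)]
    try:
        return matches[state]
    except IndexError:
        return None

readline.set_completer_delims("")          # hand the whole line to the completer
readline.set_completer(tab_completer)
readline.parse_and_bind("tab: complete")   # GNU readline binding syntax
```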
The function works as expected on Windows (10, Python 3.6.6) and Linux (CentOS 7, Python 3.6.8), but something strange happens on macOS (10.15.7, Python 3.8.2 via Xcode, in a zsh terminal):
cmd > [TAB]
cd exit load save # this is good
cmd > c[TAB] # still autocompletes to 'cd', good
cmd > cd [TAB]
cd exit load save # as if I've typed nothing!
For those of you wondering, this behavior happens with ANY command, not just with cd.
I'm aware that the underlying readline implementation on macOS uses libedit due to GNU licensing. I just haven't seen anyone else (to date) mention this difference on any other forums. A possible solution that comes to mind is to add a conditional for libedit implementations to use get_line_buffer() and redisplay() to mimic the correct behavior. Any pointers in the right direction are appreciated!
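For the conditional you describe, a common detection trick is to look for "libedit" in the readline module's docstring and pick the matching binding syntax (a sketch; libedit uses its own `bind` syntax rather than GNU readline's):

```python
import readline

# macOS ships a libedit-backed readline module; its docstring says so.
using_libedit = "libedit" in (readline.__doc__ or "")

if using_libedit:
    readline.parse_and_bind("bind ^I rl_complete")  # libedit syntax
else:
    readline.parse_and_bind("tab: complete")        # GNU readline syntax

print("libedit" if using_libedit else "gnu")
```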
Thank you
I am having an issue after I run a python unit test file. Once the file exits I can only interact with my console after pressing "i" and using other vim keybindings. I also noticed that using the arrow keys to traverse what I typed will delete a random number of characters at the end of the line.
EX:
$ ./tests.py -v
<output>
$ <cannot type>
<press "i">
$ I can now type
<press <- >
$ I can no
I am using RHEL 7 and bash. I've tried googling this issue but I'm either formatting the question poorly or it is an uncommon issue.
Thank you for the help.
EDIT:
The actual test.py contains private code, but this example contains the same essential code.
test.py
#!/usr/bin/env python
import unittest

class TestUtil(unittest.TestCase):
    def test_hello_world(self):
        text = "Hello World!"
        self.assertEqual("Hello World!", text)
        print(text)

if __name__ == '__main__':
    unittest.main()
It sounds as if your shell is being placed into vi-mode. This is a readline mode where you can use vi editing keys instead of the more commonly used emacs keys.
There are two ways I know of that this can happen: running
set -o vi
in bash, or having
set editing-mode vi
in your ~/.inputrc.
Technically, to turn it off you use set +o vi. However, that will disable all inline editing. It is more likely that you wish to go back to emacs mode, which is usually the default. To do that, do this instead:
set -o emacs
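To check which editing mode your bash session is currently in, before and after switching (a sketch; `set -o` prints one flag per line):

```shell
# List the editing-mode flags; at most one of these is "on" at a time.
set -o | grep -E '^(emacs|vi)'

# Switch back to emacs mode and confirm it is on:
set -o emacs
set -o | grep '^emacs'
```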
Background
I have some Python scripts which use libraries only available to Python 2.7 and so I want to be able to run those in Python version 2.7.
I was thinking that it would be great if I could put some code at the top of my Python file which would detect if it was being run in Python 3 and if so execute itself in version 2.7. Is this possible?
For example, the pseudo code would be:
if (self.getPythonVersion != 2.7):
    os.execute('python27 ' + os.cwd + 'script.py')
    exit()
Edit: This is for my personal use, not for distributing.
Answer
I used mgilson's answer below to get this to work for me. I was not able to get os.exec() to work, but I didn't spend a long time on that. The second script worked for me. Here is what I used and worked for me:
if sys.version_info[:2] > (2, 7):
    code = subprocess.call(['python27', sys.argv[0]])
    raise SystemExit(code)
Nope, this isn't possible for the simple reason that a user could have python3.x installed and not have python2.x installed.
If you know that they have python2.7 installed, then you can use something like your work-around above. However, in that case, you'll have to make sure that you can support both python3.x and python2.x in the same source (which is generally a non-trivial task).
You can detect the python version via sys.version_info and I think you can swap out the process using something in the os.exec* family of functions...
e.g.:
import os, sys

if sys.version_info[:2] > (2, 7):
    # os.execvpe searches PATH and replaces the current process;
    # the first element of the argv list should be the program name itself
    os.execvpe('python27', ['python27'] + sys.argv, os.environ)
Here's another variant that you can try (it'll create a new process instead of replacing the old one however):
import sys, subprocess

if sys.version_info[:2] > (2, 7):
    code = subprocess.call(['python27'] + sys.argv)
    raise SystemExit(code)

print(sys.version_info)
You can try adding the python2.7 shebang line at the top of your script:
#!/usr/bin/env python2.7
Make sure it is in your path though, and this should work.
This ugly hack (exec magic) should work on most UNIX-like systems. It relies on the triple-quote handling difference between python and sh: first sh runs the script, then it re-runs the script with a suitable python binary, if one is found.
#!/bin/sh
_HACK_='''________BEGIN_SH_CODE_____________'
ispy2() {
    case $1$2$3$4$5$6$7$8$9 in
        *ython2.*) return 0 ;;
        *) return 1 ;;
    esac
}
for c in python python2 python3 \
         /usr/local/bin/python* \
         /usr/bin/python* \
         /bin/python*
do
    ispy2 `$c -V 2>&1` && exec $c "$0" "$@"
done
echo "could not find python 2 binary"
exit 1
_HACK_='________BEGIN_PYTHON_CODE___________'''
import sys
print sys.version
print sys.argv
The nasty ispy2() function is a hack to remove whitespace from the python -V output in case of different word-splitting behaviors (I did not want to rely on any binary besides /bin/sh).
I would advise against this for reasons raised by mgilson. However you can check the python version with:
import sys
sys.version_info[0]
In case you still want to do this.
I'm working in Linux and am wondering how to have python tell whether it is being run directly from a terminal or via a GUI (like alt-F2) where output will need to be sent to a window rather than stdout which will appear in a terminal.
In bash, this is done by:
if [ -t 0 ] ; then
    echo "I'm in a terminal"
else
    zenity --info --title "Hello" --text "I'm being run without a terminal"
fi
How can this be accomplished in python? In other words, what is the equivalent of [ -t 0 ]?
$ echo ciao | python -c 'import sys; print(sys.stdin.isatty())'
False
Of course, your GUI-based IDE might choose to "fool" you by opening a pseudo-terminal instead (you can do it yourself to other programs with pexpect, and, what's sauce for the goose...!-), in which case isatty or any other within-Python approach cannot tell the difference. But the same trick would also "fool" your example bash program (in exactly the same way) so I guess you're aware of that. OTOH, this will make it impossible for the program to accept input via a normal Unix "pipe"!
A more reliable approach might therefore be to explicitly tell the program whether it must output to stdout or where else, e.g. with a command-line flag.
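A sketch of that flag-based approach (the `--gui` flag name is my invention), falling back to the tty check only when the caller doesn't say:

```python
import argparse
import sys

parser = argparse.ArgumentParser()
# Hypothetical flag that a desktop launcher would pass explicitly:
parser.add_argument("--gui", action="store_true",
                    help="send output to a window instead of stdout")

# Simulate being launched from a shortcut that passes --gui:
args = parser.parse_args(["--gui"])

# Use the tty check only as a fallback when the flag is absent:
use_gui = args.gui or not sys.stdin.isatty()
print(use_gui)  # prints: True
```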
I scoured SE for an answer to this but everywhere indicated the use of sys.stdout.isatty() or os.isatty(sys.stdout.fileno()). Neither of these dependably caught my GUI test cases.
Testing standard input was the only thing that worked for me:
sys.stdin.isatty()
I had the same issue, and I did as follows:
import sys

mode = 1
try:
    if sys.stdin.isatty():
        mode = 0
except AttributeError:  # stdin is NoneType if not in terminal mode
    pass

if mode == 0:
    pass  # code if terminal mode ...
else:
    pass  # code if gui mode ...
There are several examples of this on PLEAC which counts for a third case: running at an interactive Python prompt.
In bash I use this script:
$ cat ~/bin/test-term.sh
#!/bin/bash
#See if $TERM has been set when called from Desktop shortcut
echo TERM environment variable: $TERM > ~/Downloads/test-term.txt
echo "Using env | grep TERM output below:" >> ~/Downloads/test-term.txt
env | grep TERM >> ~/Downloads/test-term.txt
exit 0
When you create a desktop shortcut to call the script the output is:
$ cat ~/Downloads/test-term.txt
TERM environment variable: dumb
Using env | grep TERM output below:
Notice that grepping the env output returns nothing?
Now call the script from the command line:
$ cat ~/Downloads/test-term.txt
TERM environment variable: xterm-256color
Using env | grep TERM output below:
TERM=xterm-256color
This time the TERM variable from the env command returns xterm-256color.
In Python you can use:
import os

result = os.popen("echo $TERM").read().strip()
result2 = os.popen("env | grep TERM").read()
Then check the results. I haven't done this in python yet but will probably need to soon for my current project. I came here looking for a ready-made solution but no one has posted one like this yet.
I need to make an export like this in Python :
# export MY_DATA="my_export"
I've tried to do :
# -*- python-mode -*-
# -*- coding: utf-8 -*-
import os
os.system('export MY_DATA="my_export"')
But when I list the exports, "MY_DATA" does not appear:
# export
How can I do an export with Python without saving "my_export" to a file?
export is a command that you give directly to the shell (e.g. bash), to tell it to add or modify one of its environment variables. You can't change your shell's environment from a child process (such as Python), it's just not possible.
Here's what's happening when you try os.system('export MY_DATA="my_export"')...
/bin/bash process, command `python yourscript.py` forks python subprocess
  |_ /usr/bin/python process, command `os.system()` forks /bin/sh subprocess
       |_ /bin/sh process, command `export ...` changes its local environment
When the bottom-most /bin/sh subprocess finishes running your export ... command, then it's discarded, along with the environment that you have just changed.
You actually want to do
import os
os.environ["MY_DATA"] = "my_export"
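To see why this is the right direction, a small sketch: changes to os.environ are inherited by every child process the Python process starts afterwards (here the child is just another python interpreter echoing the variable back):

```python
import os
import subprocess
import sys

# Setting os.environ affects this process and everything it launches later:
os.environ["MY_DATA"] = "my_export"

# A child process inherits the variable automatically:
out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.environ['MY_DATA'])"]
)
print(out.decode().strip())  # prints: my_export
```

What it cannot do, as explained above, is reach upward into the parent shell's environment.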
Another way to do this, if you're in a hurry and don't mind the hacky aftertaste, is to have the python script print the commands that set the environment, and then execute its output in your bash environment. Not ideal but it can get the job done in a pinch. It's not very portable across shells, so YMMV.
$(python -c 'print "export MY_DATA=my_export"')
(you can also enclose the statement in backticks in some shells ``)
Not that simple:
python -c "import os; os.putenv('MY_DATA','1233')"
$ echo $MY_DATA # <- empty
But:
python -c "import os; os.putenv('MY_DATA','123'); os.system('bash')"
$ echo $MY_DATA #<- 123
I have an excellent answer.
#! /bin/bash
output=$(git diff origin/master..origin/develop | \
python -c '
# DO YOUR HACKING
variable1_to_be_exported="Yo Yo"
variable2_to_be_exported="Honey Singh"
# … and so on
magic=""
magic += "export onShell_var1=\"" + str(variable1_to_be_exported) + "\"\n"
magic += "export onShell_var2=\"" + str(variable2_to_be_exported) + "\""
print magic
'
)
eval "$output"
echo "$onShell_var1" # Output will be Yo Yo
echo "$onShell_var2" # Output will be Honey Singh
Mr Alex Tingle is correct about those processes and sub-process matters, and this is how it can still be achieved.
Key concept:
Whatever python prints is captured in the bash variable [output]
We can execute any command held in a string using eval
So, prepare your python print output as meaningful bash commands
Use eval to execute it in bash
And you can see your results
NOTE
Always quote the argument to eval with double quotes, or else bash will mess up your \n's and the output will be strange
PS: I don't like bash, but you have to use it
I've had to do something similar on a CI system recently. My options were to do it entirely in bash (yikes) or use a language like python which would have made programming the logic much simpler.
My workaround was to do the programming in python and write the results to a file.
Then use bash to export the results.
For example:
# do calculations in python
with open("./my_export", "w") as f:
f.write(your_results)
# then in bash
export MY_DATA="$(cat ./my_export)"
rm ./my_export # if no longer needed
You could try os.environ["MY_DATA"] instead.
Kind of a hack because it's not really python doing anything special here, but if you run the export command in the same sub-shell, you will probably get the result you want.
import os
cmd = "export MY_DATA='1234'; echo $MY_DATA" # or whatever command
os.system(cmd)
In the hope of providing clarity over common confusion...
I have written many python <--> bash <--> elfbin toolchains and the proper way to see it is this:
Each process (originator) has an environment state inherited from whatever invoked it. Any change remains local to that process. Transferring environment state is a function in itself and runs in two directions, each with its own caveats. The most common thing is to modify the environment before running a sub-process. To go down to the metal, look at the exec() call in C. There is a variant that takes a pointer to environment data. This is the only actually supported transfer of environment in typical OSes.
Shell scripts will create a state to pass when running children when you do an export. Otherwise they just use what they got in the first place.
In all other cases it will be some generic mechanism used to pass a set of data to allow the calling process itself to update its environment based on the child process's output.
Ex:
ENVUPDATE=$(CMD_THAT_OUTPUTS_KEYVAL_LISTS)
echo "$ENVUPDATE" > "$TMPFILE"
source "$TMPFILE"
The same can of course be done using json, xml or other things as long as you have the tools to interpret and apply.
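A sketch of the json variant (the child command here is a stand-in python one-liner; in practice it would be whatever program emits your key/value pairs):

```python
import json
import os
import subprocess
import sys

# Hypothetical child process that prints a flat JSON object of key/value pairs:
child = [sys.executable, "-c",
         'import json; print(json.dumps({"K1": "v1", "K2": "v2"}))']
out = subprocess.check_output(child)

# The parent applies the update to its own environment:
os.environ.update(json.loads(out))
print(os.environ["K1"], os.environ["K2"])  # prints: v1 v2
```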
The need for this may be (50% chance) a sign of misconstruing the basic primitives and that you need a better config or parameter interchange in your solution.....
Oh, in python I would do something like...
(need improvement depending on your situation)
import os
import re

RE_KV = re.compile(r'([a-z]\w*)\s*=\s*(.*)')

OUTPUT = RunSomething(...)  # assuming output like 'k1=v1 k2=v2'
for kv in OUTPUT.split(' '):
    try:
        k, v = RE_KV.match(kv).groups()
        os.environ[k] = str(v)
    except AttributeError:
        # the not-a-property case...
        pass
One line solution:
eval `python -c 'import sysconfig;print("python_include_path={0}".format(sysconfig.get_path("include")))'`
echo $python_include_path # prints /home/<usr>/anaconda3/include/python3.6m in my case
Breakdown:
Python call
python -c 'import sysconfig;print("python_include_path={0}".format(sysconfig.get_path("include")))'
It launches a python script that
imports sysconfig
gets the python include path corresponding to this python binary (use "which python" to see which one is being used)
prints the string "python_include_path={0}" with {0} being the path from the previous step
Eval call
eval `python -c 'import sysconfig;print("python_include_path={0}".format(sysconfig.get_path("include")))'`
It executes the output of the python script in the current bash instance. In my case, it's executing:
python_include_path=/home/<usr>/anaconda3/include/python3.6m
In other words, it's setting the environment variable "python_include_path" with that path for this shell instance.
Inspired by:
http://blog.tintoy.io/2017/06/exporting-environment-variables-from-python-to-bash/
import os
import shlex
from subprocess import Popen, PIPE

os.environ["MY_DATA"] = "my_export"  # set the variable on this process first
res = Popen(shlex.split("cmd xxx -xxx"), stdin=PIPE, stdout=PIPE, stderr=PIPE,
            env=os.environ).communicate('y\ny\ny\n'.encode('utf8'))
stdout = res[0]
stderr = res[1]
os.system('/home/user1/exportPath.ksh')
exportPath.ksh:
export MY_DATA="my_export"