I want to call up an editor in a Python script to solicit input from the user, much like crontab -e or git commit does.
Here's a snippet from what I have running so far. (In the future, I might use $EDITOR instead of vim so that folks can customize to their liking.)
tmp_file = '/tmp/up.' + ''.join(random.choice(string.ascii_uppercase + string.digits) for x in range(6))
edit_call = ["vim", tmp_file]
edit = subprocess.Popen(edit_call, stdin=subprocess.PIPE, stdout=subprocess.PIPE, shell=True)
My problem is that by using Popen with pipes, the I/O from my Python script never reaches the running copy of vim, and I can't find a way to just pass the I/O through to vim. I get the following errors.
Vim: Warning: Output is not to a terminal
Vim: Warning: Input is not from a terminal
What's the best way to call a CLI program from python, hand control over to it, and then pass it back once you're finished with it?
Calling up $EDITOR is easy. I've written this kind of code to call up an editor:
import sys, tempfile, os
from subprocess import call
EDITOR = os.environ.get('EDITOR', 'vim') # that easy!
initial_message = '' # if you want to set up the file somehow
with tempfile.NamedTemporaryFile(suffix=".tmp") as tf:
    tf.write(initial_message)
    tf.flush()
    call([EDITOR, tf.name])

    # do the parsing with `tf` using regular File operations.
    # for instance:
    tf.seek(0)
    edited_message = tf.read()
The good thing here is the libraries handle creating and removing the temporary file.
In Python 3, the above fails with 'str' does not support the buffer interface:
$ python3 editor.py
Traceback (most recent call last):
File "editor.py", line 9, in <module>
tf.write(initial_message)
File "/usr/lib/python3.4/tempfile.py", line 399, in func_wrapper
return func(*args, **kwargs)
TypeError: 'str' does not support the buffer interface
For Python 3, use initial_message = b"" to declare a bytes literal.
Then use edited_message.decode("utf-8") to decode the bytes into a string.
import sys, tempfile, os
from subprocess import call
EDITOR = os.environ.get('EDITOR','vim') #that easy!
initial_message = b"" # if you want to set up the file somehow
with tempfile.NamedTemporaryFile(suffix=".tmp") as tf:
    tf.write(initial_message)
    tf.flush()
    call([EDITOR, tf.name])

    # do the parsing with `tf` using regular File operations.
    # for instance:
    tf.seek(0)
    edited_message = tf.read()

print(edited_message.decode("utf-8"))
Result:
$ python3 editor.py
look a string
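Alternatively (a sketch, assuming Python 3 and a UTF-8 locale), you can open the temporary file in text mode so no manual encoding or decoding is needed:
import os
import tempfile
from subprocess import call

EDITOR = os.environ.get('EDITOR', 'vim')
initial_message = ""  # if you want to set up the file somehow

# mode="w+" opens the file in text mode, so str goes in and str comes out
with tempfile.NamedTemporaryFile(mode="w+", suffix=".tmp", encoding="utf-8") as tf:
    tf.write(initial_message)
    tf.flush()
    call([EDITOR, tf.name])
    tf.seek(0)
    edited_message = tf.read()

print(edited_message)
This has the same editor caveat as the bytes version above (see the note about Vim's save strategy in a later answer).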
Package python-editor:
$ pip install python-editor
$ python
>>> import editor
>>> result = editor.edit(contents="text to put in editor\n")
More details here: https://github.com/fmoo/python-editor
click is a great library for command-line processing, and it has some utilities. click.edit() is portable and uses the EDITOR environment variable. In the session below I typed the line stuff into the editor; notice it is returned as a string. Nice.
(venv) /tmp/editor $ export EDITOR='mvim -f'
(venv) /tmp/editor $ python
>>> import click
>>> click.edit()
'stuff\n'
Check out the docs: https://click.palletsprojects.com/en/7.x/utils/#launching-editors
My entire experience:
/tmp $ mkdir editor
/tmp $ cd editor
/tmp/editor $ python3 -m venv venv
/tmp/editor $ source venv/bin/activate
(venv) /tmp/editor $ pip install click
Collecting click
Using cached https://files.pythonhosted.org/packages/fa/37/45185cb5abbc30d7257104c434fe0b07e5a195a6847506c074527aa599ec/Click-7.0-py2.py3-none-any.whl
Installing collected packages: click
Successfully installed click-7.0
You are using pip version 19.0.3, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
(venv) /tmp/editor $ export EDITOR='mvim -f'
(venv) /tmp/editor $ python
Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 16:52:21)
[Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import click
>>> click.edit()
'stuff\n'
>>>
The PIPE is the problem. Vim is an application that depends on its stdin/stdout channels being terminals, not files or pipes. Removing the stdin/stdout parameters worked for me.
I would avoid using os.system as it should be replaced by the subprocess module.
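For instance, a minimal sketch along those lines (the temporary file name is just a placeholder):
import subprocess

tmp_file = '/tmp/up.XXXXXX'  # placeholder; generate a real temporary name as in the question
# No stdin/stdout redirection: vim inherits the script's terminal,
# and call() blocks until the editor exits.
subprocess.call(['vim', tmp_file])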
The accepted answer does not work for me: edited_message stays the same as initial_message. As explained in the comments, this is caused by Vim's save strategy: it can write the edited buffer to a new file and swap it into place, so the still-open temporary file handle keeps pointing at the original, unedited contents.
There are possible workarounds, but they are not portable to other editors. Instead, I strongly recommend using the click.edit function. With it, your code will look like this:
import click
initial_message = "edit me!"
edited_message = click.edit(initial_message)
print(edited_message)
Click is a third-party library, but you should probably use it anyway if you are writing a console script; click is to argparse as requests is to urllib.
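One detail worth handling: click.edit() returns None when the user closes the editor without saving, so a slightly fuller sketch is:
import click

initial_message = "edit me!"
edited_message = click.edit(initial_message)
if edited_message is None:
    # the editor was closed without saving
    print("No changes were made.")
else:
    print(edited_message)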
Related
When installing gcloud for Mac, I get this error when I run the install.sh command according to the docs here:
Traceback (most recent call last):
File "/path_to_unzipped_file/google-cloud-sdk/bin/bootstrapping/install.py", line 8, in <module>
from __future__ import absolute_import
ImportError: No module named __future__
I poked through and echoed out some stuff in the install shell script. It is setting the environment variables correctly (pointing to my default python installation, pointing to the correct location of the gcloud SDK).
If I just enter the python interpreter (using the same default python that the install script points to when running install.py) I can import the module just fine:
>>> from __future__ import absolute_import
>>>
Only other information worth noting is my default python setup is a virtual environment that I create from python 2.7.15 installed through brew. The virtual environment python bin is first in my PATH so python and python2 and python2.7 all invoke the correct binary. I've had no other issues installing packages on this setup so far.
If I echo the final line of the install.sh script that calls the install.py script it shows /path_to_virtualenv/bin/python -S /path_to_unzipped_file/google-cloud-sdk/bin/bootstrapping/install.py which is the correct python. Or am I missing something?
The script uses the -S command-line switch, which disables loading the site module on start-up.
However, it is a custom site module installed in each virtualenv that makes the virtualenv work. As such, the -S switch and virtualenvs are incompatible: with -S set, fundamental imports such as from __future__ break down entirely.
You can either remove the -S switch from the install.sh command or use a wrapper script to strip it from the command line as you call your real virtualenv Python.
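A minimal sketch of such a wrapper (the virtualenv path is a placeholder taken from the question; adjust it, make the script executable, and point CLOUDSDK_PYTHON at it):
#!/usr/bin/env python
# Hypothetical wrapper: drop the -S flag before handing the remaining
# arguments to the real virtualenv interpreter.
import os
import sys

VENV_PYTHON = "/path_to_virtualenv/bin/python"  # placeholder path

args = [a for a in sys.argv[1:] if a != "-S"]
os.execv(VENV_PYTHON, [VENV_PYTHON] + args)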
I had the error below when trying to run gcloud commands.
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/gcloud.py", line 20, in <module>
from __future__ import absolute_import
ImportError: No module named __future__
If you have your virtualenv sourced automatically you can specify the environment variable CLOUDSDK_PYTHON i.e. set -x CLOUDSDK_PYTHON /usr/bin/python to not use the virtualenv python.
In google-cloud-sdk/install.sh, go to the last line and remove the $CLOUDSDK_PYTHON_ARGS variable, as below.
Before:
"$CLOUDSDK_PYTHON" $CLOUDSDK_PYTHON_ARGS "${CLOUDSDK_ROOT_DIR}/bin/bootstrapping/install.py" "$@"
After:
"$CLOUDSDK_PYTHON" "${CLOUDSDK_ROOT_DIR}/bin/bootstrapping/install.py" "$@"
I am trying to get tab completion to work while running pdb on OS X 10.10.5. I have installed the homebrew version of python 2.7.13 because it appears (also see this) that Apple does not ship with a functional readline. If I have a trivial script, trivial.py
var1 = "this"
var2 = "is annoying"
and I run /usr/local/bin/python -m pdb trivial.py and, at the first (Pdb) prompt, enter import readline, rlcompleter; I subsequently get tab completion. However, if I put the following in my .pdbrc
import readline
import rlcompleter
tab completion does not work. How is this not the exact same thing? Shouldn't tab completion work when put in my .pdbrc?
I get the same behavior on linux.
ie. with no .pdbrc
$ python3 -m pdb foo.py
(Pdb) in<tab> gives interact
(Pdb) interact
(Pdb) import rlcompleter
(Pdb) in<tab>
(Pdb) in input( int(
If I have import rlcompleter in my .pdbrc, I only get interact when I type in<tab>. I get the same result even after I import rlcompleter interactively.
$ python3 -m pdb -c 'import rlcompleter' foo.py
Also prevents tab completion.
Comparing the output of
$ python3 -vv -m pdb -c 'import rlcompleter' foo.py
and
$ python3 -vv -m pdb foo.py
resulted in a segfault, so I would consider this a bug. I suggest you file one. Mention that the import rlcompleter in .pdbrc might be causing the Pdb completekey= setting to be overwritten, or that the cmd module could be misinitialized. FWIW, here is the source I was looking at to glean some additional info: Pdb source
I found this
Using this method I was able to get tab completion to work. His code uses a .pdbrc in the source directory and a hidden Python script in the home directory. The file has comments showing where to split it into two parts.
How can we start ipython REPL and instruct it to pass some command-line arguments to the underlying python interpreter?
For example, we can open a python REPL with increased verbosity by using
python -v
But I could not see how to pass that flag through when opening IPython.
I'd say the best way to do that would be to explicitly launch ipython with python:
python /usr/bin/ipython
as the ipython executable is just a Python script; or you can launch ipython by telling python to load the IPython library:
python -m IPython.frontend.terminal.ipapp
and then you can add all the native python arguments:
python -v /usr/bin/ipython
python -v -m IPython.frontend.terminal.ipapp
HTH
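Note that on recent IPython versions the IPython.frontend path no longer exists; the package itself is runnable as a module, so the same idea becomes, for example:
python -v -m IPython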
You could write your own ipython shebang script.
Here I copied my ipython script and added the -v
#!/usr/local/bin/python3.5 -v
# -*- coding: utf-8 -*-
import re
import sys
from IPython import start_ipython
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(start_ipython())
Now when I execute ./vipython I get many pages of import information upon startup and shutdown.
I gather from other SO questions that I might not be able to add multiple options to such a shebang line.
How to use multiple arguments with a shebang (i.e. #!)?
So for example
#!/usr/local/bin/python3.5 -vv
works, but
#!/usr/local/bin/python3.5 -v -v
doesn't - it gives me
1008:~/mypy$ ./vipython
Unknown option: -
usage: /usr/local/bin/python3.5 [option] ... [-c cmd | -m mod | file | -] [arg] ...
Try `python -h' for more information.
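If your env supports the -S option (as newer GNU coreutils and the BSD/macOS env do), you can instead let env split the shebang arguments:
#!/usr/bin/env -S python3.5 -v -v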
That -v option affects the behavior of the interpreter itself, not just the REPL. You get the extra import information regardless of whether you add the -i option.
Here's the default script that launches ipython (or at least one version of that)
1522:~/mypy$ cat /usr/local/bin/ipython3.5
#!/usr/local/bin/python3.5
# -*- coding: utf-8 -*-
import re
import sys
from IPython import start_ipython
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(start_ipython())
and from within an ipython session:
In [1153]: from IPython import start_ipython
In [1154]: start_ipython??
String form: <function start_ipython at 0xb697edac>
File: /usr/lib/python3/dist-packages/IPython/__init__.py
Definition: start_ipython(argv=None, **kwargs)
Source:
def start_ipython(argv=None, **kwargs):
    "..."
    from IPython.terminal.ipapp import launch_new_instance
    return launch_new_instance(argv=argv, **kwargs)
Eventually the argv are passed to an argparse parser. IPython populates that parser with arguments derived from its config files. So you have several options for setting parameters: default config, profile config, and the command line. But all of this happens after the interpreter has been launched. Some things are acted on in the same way as with an interpreter REPL, but not all (-m, but not -v).
When -v is used as zmo suggests, we see all the imports of ipython code - which are quite a few. Are you interested in those, or are you more interested in imports related to your own script?
I use ipython a lot to test answers, especially for numpy. In fact my default ipython call has the --pylab flag. But to test stand alone scripts I use plain python (often called from a terminal window in my editor). Sometimes though I'll %run a script from within ipython. That loads the module into the main workspace, making it easy to perform %timeit tests on functions.
Other ipython scripts use
#!/usr/bin/python
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
    sys.exit(
        load_entry_point(...)
I don't write much code using this style, but I don't see how that kind of initialization can pass argv through to the interpreter. By the time the module has been loaded and starts running, the interpreter is already running.
In general it looks like ipython handles options like -i, -m, and -c in basically the same way as regular python. It may be doing so with its own code, rather than delegating to the interpreter. But things like -v, -O, and -t apply to the interpreter, not the REPL, and aren't handled by ipython code.
I have a package that I would like to automatically install and use from within my own Python script.
Right now I have this:
>>> # ... code for downloading and un-targzing
>>> from subprocess import call
>>> call(['python', 'setup.py', 'install'])
>>> from <package> import <name>
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named <package>
Then I can continue like this:
>>> exit()
$ python
>>> from <package> import <name>
And it works just fine. For some reason, Python is able to pick up the package just fine if I restart after running the setup.py file, but not if I don't. How can I make it work without having the restart step in the middle?
(Also, is there a superior alternative to using subprocess.call() to run setup.py within a python script? Seems silly to spawn a whole new Python interpreter from within one, but I don't know how else to pass that install argument.)
Depending on your Python version, you want to look into imp or importlib.
e.g. for Python 3, you can do:
from importlib.machinery import SourceFileLoader

directory_name = ...  # os.path to the module directory
# __init__.py is the module entry point
s = SourceFileLoader("new_package_name", directory_name + "/__init__.py").load_module()
or, if you're feeling brave that your Python path knows about the directory:
__import__('new_package_name')
Hope this helps,
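As for the subprocess question, one alternative (a sketch; package_dir and package_name are placeholders for your actual package) is to install with the same interpreter that is running the script and then refresh the import machinery, which is often enough to avoid the restart:
import importlib
import subprocess
import sys

package_dir = "/path/to/unpacked/package"  # placeholder for the un-targzed directory
package_name = "package_name"              # placeholder for the package's import name

# install into the interpreter that is currently running this script
subprocess.check_call([sys.executable, "-m", "pip", "install", package_dir])

# tell the import system to re-scan path entries it has already cached
importlib.invalidate_caches()
module = importlib.import_module(package_name)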
I downloaded seaborn from GitHub.
Through the command prompt, cd to the downloads\seaborn folder
python setup.py install
Then, using Spyder from Anaconda, I checked if it was installed by running the following in a console:
import pip
sorted(["%s==%s" % (i.key, i.version)
for i in pip.get_installed_distributions()])
Seeing that it was not there, I went to Tools and selected "Update module names list".
Trying the previous code again in a Python console, the lib was still not showing.
Restarting Spyder and trying import seaborn worked.
Hope this helps.
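Note that pip.get_installed_distributions() was removed from pip's public API around pip 10; a roughly equivalent listing (a sketch) can be built on pkg_resources:
import pkg_resources

# list installed distributions as name==version strings
installed = sorted(
    "%s==%s" % (dist.project_name, dist.version)
    for dist in pkg_resources.working_set
)
print("\n".join(installed))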
When using Python in an interactive shell I'm able to import the cx_Oracle module with no problem. Ex:
me#server~/ $ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cx_Oracle
>>>
As you can see, importing works without a hitch. However, when I try to run a Python script doing the same thing, I get an error:
me#server~/ $ sudo script.py
Traceback (most recent call last):
File "/usr/local/bin/script.py", line 19, in <module>
import cx_Oracle
ImportError: No module named 'cx_Oracle'
Here is the important section from script.py:
# 16 other lines above here
# Imports
import sys
import cx_Oracle
import psycopg2
...
I'm befuddled here. Other pertinent information: the server I'm running is Ubuntu 14.04.1 LTS (upgraded from 12.04), 64-bit. which python and sudo which python both point to the same location. Also, doing this as root via sudo su - gets the same results: the import is OK from the interactive shell but errors from the script.
Nothing other than the OS upgrade happened between when this worked and when it stopped working.
Sorry, all. This was a silly mistake on my part. Turns out the script in question was using Python 3, and when the server was upgraded, Python 3 went from version 3.2 to version 3.4.
Once the cx_Oracle module was set up for the 3.4 version, everything worked as expected.
Phil, your final note about the shebang was what led me to discover this, so kudos to you! The reason I didn't mark your response as the answer is that technically it wasn't, but it led me down the right path.
Cheers!
sudo starts a new shell environment, which can point to a different python executable (with different installed modules).
You can verify this with which python and sudo which python
EDIT: so if they point to the same executable, then you should look at sys.path to find differences. In both environments you can run:
python -c "import sys; print('\n'.join(sys.path))"
sudo python -c "import sys; print('\n'.join(sys.path))"
Look for differences. If there are none:
A common error in import situations like this is that python will first look in the local directory. So if you happen to be running python and importing something that is only found locally (i.e. cx_Oracle is a subdirectory of your current location), you will get an import error if you change directories.
Final note: I have assumed here that the shebang of script.py points to the same executable as which python. That is, that python script.py and script.py return the same error.
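A quick diagnostic sketch for this kind of mismatch is to print the interpreter and search path at the top of the script; if they differ from what the interactive session shows, the shebang or the sudo environment is the culprit:
import sys

# show which interpreter is running this script and where it looks for modules
print(sys.executable)
print(sys.version)
print("\n".join(sys.path))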