I would like to play around in the python interpreter but with a bunch of imports and object setup completed. Right now I'm launching the interpreter on the command line and doing the setup work every time. Is there any way to launch the command line interpreter with all the initialization work done?
Ex:
# Done automatically.
import foo
import baz
l = [1,2,3,4]
# Launch the interpreter.
launch_interpreter()
>> print l
>> [1,2,3,4]
You can create a script with the code you wish to run automatically, then use python -i to run it. For example, create a script (let's call it script.py) with this:
import foo
import baz
l = [1,2,3,4]
Then run the script
$ python -i script.py
>>> print l
[1, 2, 3, 4]
After the script has completed running, python leaves you in an interactive session with the results of the script still around.
If you really want some things done every time you run python, you can set the environment variable PYTHONSTARTUP to a script which will be run every time you start python. See the documentation on the interactive startup file.
I use PYTHONSTARTUP.
My .bash_profile sets it to a .pyrc file in my home folder, which has the import statements in it.
https://docs.python.org/3/using/cmdline.html#envvar-PYTHONSTARTUP
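For example, a minimal sketch of that setup (the .pyrc name is just the one mentioned above; any file name works):
# in ~/.bash_profile (or ~/.bashrc)
export PYTHONSTARTUP=~/.pyrc

# in ~/.pyrc -- run at the start of every interactive session
import foo
import baz
l = [1, 2, 3, 4]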
I came across this question when trying to configure a new workstation for my research and found that the answers above didn't quite suit my desire: to contain the entire configuration within one file (meaning I wouldn't create a separate script.py as suggested by @srgerg).
This is how I ended up achieving my goal:
export PYTHONPATH=$READ_GEN_PATH:$PYTHONPATH
alias prepy="python3 -i -c \"
from naive_short_read_gen import ReadGen
from neblue import neblue\""
In this case neblue is in the CWD (so no path extension is required there), whereas naive_short_read_gen is in an arbitrary directory on my system, which is specified via $READ_GEN_PATH.
You could do this in a single line if necessary: alias prepy=PYTHONPATH=$EXTRA_PATH:$PYTHONPATH python3 -i -c ....
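Spelled out, the one-liner needs quoting around the whole command, something like this (same imports as the multi-line alias above):
alias prepy='PYTHONPATH=$EXTRA_PATH:$PYTHONPATH python3 -i -c "from naive_short_read_gen import ReadGen; from neblue import neblue"'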
You can use the -s option while starting the command line. The details are given in the documentation here
I think I know what you want to do. You might want to check out IPython, because you cannot start the standard Python interpreter with your setup already done without using the -i option (at least not directly).
This is what I did in my project:
def ipShell():
    '''Starts the interactive IPython shell'''
    import IPython
    from IPython.config.loader import Config

    cfg = Config()
    cfg.TerminalInteractiveShell.confirm_exit = False
    IPython.embed(config=cfg, display_banner=False)

# Then add the following line to start the shell
ipShell()
You need to be careful, though, because the shell will have the namespace of the module in which the function ipShell() is defined. If you put the definition in the file you run, then you will be able to access the globals() you want. There could be other workarounds to inject the namespace you want.
EDIT
The following function defaults to the caller's namespace (__main__.__dict__).
def ipShell():
    '''Starts the interactive IPython shell
    with the namespace __main__.__dict__'''
    import IPython
    from __main__ import __dict__ as ns
    from IPython.config.loader import Config

    cfg = Config()
    cfg.TerminalInteractiveShell.confirm_exit = False
    IPython.embed(config=cfg, user_ns=ns, display_banner=False)
without any extra arguments.
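Note that IPython.config.loader no longer exists in newer IPython releases (4+), where Config moved to traitlets. A rough, untested sketch of the same idea against a recent IPython (treat the details as an assumption and check the IPython docs):
def ipShell():
    '''Starts the interactive IPython shell
    with the namespace __main__.__dict__'''
    import IPython
    from __main__ import __dict__ as ns
    from traitlets.config import Config  # moved here from IPython.config.loader

    cfg = Config()
    cfg.TerminalInteractiveShell.confirm_exit = False
    IPython.embed(config=cfg, user_ns=ns)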
Related
I'm trying to find a way to run an executable script that can be downloaded from the web from Python, without saving it as a file. The script can be python code or bash or whatever - it should execute appropriately based on the shebang. I.e. if the following were saved in a file called script, then I want something that will run ./script without needing to save the file:
#!/usr/bin/env python3
import sys
from my_module import *
scoped_hash = sys.argv[1]
print(scoped_hash)
I have a function that reads such a file from the web and attempts to execute it:
def execute_artifact(command_string):
    os.system('sh | ' + command_string)
Here's what happens when I call it:
>>> print(string)
'#!/usr/bin/env python3\nimport sys\nfrom my_module import *\n\nscoped_hash = sys.argv[1]\n\nobject_string = read_artifact(scoped_hash)\nparsed_object = parse_object(object_string)\nprint(parsed_object)\n'
>>> execute_artifact(string)
sh-3.2$ Version: ImageMagick 7.0.10-57 Q16 x86_64 2021-01-10 https://imagemagick.org
Copyright: © 1999-2021 ImageMagick Studio LLC
License: https://imagemagick.org/script/license.php
Features: Cipher DPC HDRI Modules OpenMP(4.5)
Delegates (built-in): bzlib freetype gslib heic jng jp2 jpeg lcms lqr ltdl lzma openexr png ps tiff webp xml zlib
Usage: import [options ...] [ file ]
Bizarrely, ImageMagick is called. I'm not sure what's going on, but I'm sure there's a better way to do this. Can anyone help me?
EDIT: This answer was added before OP updated requirements to include:
The script can be python code or bash or whatever - it should execute appropriately based on the shebang.
Some may still find the below helpful if they decided to try to parse the shebang themselves:
Probably, the sanest way to do this is to pass the string to the python interpreter as standard input:
import subprocess
p = subprocess.Popen(["python"], stdin=subprocess.PIPE)
p.communicate(command_string.encode())
My instinct tells me this entire thing is fraught with pitfalls. Perhaps, at least, you want to launch it using the same executable that launched your current process, so:
import subprocess
import sys
p = subprocess.Popen([sys.executable], stdin=subprocess.PIPE)
p.communicate(command_string.encode())
If you want to use arguments, passing the code in as a string with the -c option works, and the remaining arguments are then available to the script, so:
import subprocess
import sys
command_string = """
import sys
print(f"{sys.argv=}")
"""
completed_process = subprocess.run([sys.executable, "-c", command_string, "foo", "bar", "baz"])
The above prints:
sys.argv=['-c', 'foo', 'bar', 'baz']
This cannot be done in full generality.
If you want the shebang line to be interpreted as usual, you must write the script to a file. This is a hard requirement of the protocol that makes shebangs work. When a script with a shebang line is executed by the operating system, the kernel (and yes, it’s not the shell which does it, unlike what the question implies) reads the script and invokes the interpreter specified in the shebang, passing the pathname of the script as a command line argument. For that mechanism to work, the script must exist in the file system where the interpreter can find it. (It’s a rather fragile design, leading to some security issues, but it is what it is.)
Many interpreters will allow you to specify the program text on standard input or on the command line, but it is nowhere guaranteed that it will work for any interpreter. If you know you are working with an interpreter which can do it, you can simply try to parse the shebang line yourself and invoke the interpreter manually:
import io
import subprocess
import re
_RE_SHBANG = re.compile(br'^#!\s*(\S+)(?:\s+(.*))?\s*\n$')
def execute(script_body):
    stream = io.BytesIO(script_body)
    shebang = stream.readline()
    m = _RE_SHBANG.match(shebang)
    if not m:
        # not a shebang
        raise ValueError(shebang)
    interp, arg = m.groups()
    arg = (arg,) if arg is not None else ()
    return subprocess.call([interp, *arg, '-c', script_body])
The above will work for POSIX shell and Python scripts, but not e.g. for Perl, node.js or standalone Lua scripts, as the respective interpreters take the -e option instead of -c (and the latter doesn’t even ignore shebangs in code given on the command line, so that needs to be separately stripped too). Feeding the script to the interpreter through standard input is also possible, but considerably more involved, and will prevent the script itself from using the standard input stream. That is also possible to overcome, but it doesn’t change the fact that it’s just a makeshift workaround that isn’t anywhere guaranteed to work in the first place. Better to simply write the script to a file anyway.
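If you do go that route, here is a minimal sketch of the write-to-a-file variant (the function name is just illustrative):
import os
import stat
import subprocess
import tempfile

def execute_via_tempfile(script_body):
    # Write the script out, mark it executable, and let the kernel
    # interpret the shebang line as it normally would.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(script_body)
        path = f.name
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    try:
        return subprocess.call([path])
    finally:
        os.unlink(path)
(One caveat: some systems mount the temporary directory noexec, in which case you would point tempfile at a different directory.)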
Let's say I have a large Python library of functions and I want these functions (or some large number of them) to be available as commands in Bash.
First, disregarding Bash command options and arguments, how could I get a function in a Python file containing a number of functions to run using a single-word Bash command? I do not want to have the functions available via commands of a command 'suite'. So, let's say I have a function called zappo in this Python file (say, called library1.py). I would want to call this function using a single-word Bash command like zappo, not something like library1 zappo.
Second, how could options and arguments be handled? I was thinking that a nice way could be to capture all of the options and arguments of the Bash command and then use them within the Python functions using docopt parsing at the function level.
Yes, but the answer might not be as simple as you hope. No matter what you do, you're going to have to create something in your bash shell for each function you want to run. However, you could have a Python script generate aliases stored in a file which gets sourced.
Here's the basic idea:
#!/usr/bin/python
import sys
import __main__  # <-- This allows us to call methods in __main__
import inspect   # <-- This allows us to look at methods in __main__

########### Function/Class.Method Section ##############
# Update this with functions you want in your shell     #
#########################################################

def takesargs():
    # Just an example that reads args
    print(str(sys.argv))
    return

def noargs():
    # and an example that doesn't
    print("doesn't take args")
    return

########################################################

# Make sure there's at least 1 arg (since arg 0 will always be this file)
if len(sys.argv) > 1:
    # This fetches the function info we need to call it
    func = getattr(__main__, str(sys.argv[1]), None)
    if callable(func):
        # Actually call the function with the name we received
        func()
    else:
        print("No such function")
else:
    # If no args were passed to this function, just output a list of aliases
    # for this script that can be appended to .bashrc or similar.
    funcs = inspect.getmembers(__main__, predicate=inspect.isfunction)
    for func in funcs:
        print("alias {0}='./suite.py {0}'".format(func[0]))
Obviously, if you're using methods in a class instead of functions in __main__, change references from __main__ to your class, and change the predicate in the inspect call to inspect.ismethod. Also, you probably would want to use absolute paths for the aliases, etc.
Sample output:
~ ./suite.py
alias noargs='./suite.py noargs'
alias takesargs='./suite.py takesargs'
~ ./suite.py > ~/pyliases
~ echo ". ~/pyliases" >> ~/.bashrc
~ . ~/.bashrc
~ noargs
doesn't take args
~ takesargs blah
['./suite.py', 'takesargs', 'blah']
If you use the method I've suggested above, you can actually have your .bashrc run ~/suite.py > ~/pyliases before it sources the aliases from the file. Then your environment gets updated every time you log in/start a new terminal session. Just edit your python function file, and then . ~/.bashrc and the functions will be available.
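In other words, the relevant part of ~/.bashrc ends up looking something like this (a sketch, assuming suite.py is executable and lives in your home directory):
# regenerate the aliases, then source them
$HOME/suite.py > ~/pyliases
. ~/pyliases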
Is it possible to call a script from the command prompt in windows (or bash in linux) to open Maya and then subsequently run a custom script (possibly changing each time its run) inside Maya? I am searching for something a bit more elegant than changing the userSetup file and then running Maya.
The goal here is to be able to open a .mb file, run a script to position the scene inside, setup a generic set of lights and then render the scene to a specific place and file type. I want to be able to set this up as a scheduled task to check for any new scene files in a directory and then open maya and go.
Thanks for the help!
For something like this you can use Maya standalone instead of the full blown UI mode. It is faster. It is ideal for batch scheduled jobs like these. Maya standalone is just Maya running without the GUI. Once you have initialized your Maya standalone, you can import and call any scripts you want, as part of the original calling script. To start you off here is an example: (Feel free to use this as a reference/modify it to meet your needs)
In your script you first initialize Maya standalone.
import maya.standalone
maya.standalone.initialize("Python")
import maya.cmds as cmds
cmds.loadPlugin("Mayatomr") # Load all plugins you might need
That will get Maya running. Now we open and/or import all the files necessary (e.g. lights, models etc.)
# full path to your Maya file to OPEN
maya_file_to_open = r"C:/Where/Ever/Your/Maya_Scene_Files/Are/your_main_maya_file.mb"
# Open your file
opened_file = cmds.file(maya_file_to_open, o=True)
# full path to your Maya file to IMPORT
maya_file_to_import = r"C:/Where/Ever/Your/Maya_Scene_Files/Are/your_maya_file.mb"
# Have a namespace if you want (recommended)
namespace = "SomeNamespaceThatIsNotAnnoying"
# Import the file. the variable "nodes" will hold the names of all nodes imported, just in case.
nodes = cmds.file(maya_file_to_import, i=True,
                  renameAll=True,
                  mergeNamespacesOnClash=False,
                  namespace=namespace,
                  returnNewNodes=True,
                  options="v=0;",
                  type="mayaBinary"  # any file type you want. this is just an example.
                  )
#TODO: Do all your scene setup/ positioning etc. if needed here...
#Tip: you can use cmds.viewFit(cam_name, fitFactor=1) to fit your camera on to selected objects
Now we save this file out and call the Maya batch renderer to render it out
render_file = "C:/Where/Ever/Your/Maya_Scene_Files/Are/your_RENDER_file.mb"
cmds.file(rename=render_file)
cmds.file(force=True, save=True, options='v=1;p=17', type='mayaBinary')
import sys
from os import path
from subprocess import Popen
render_project = r"C:/Where/Ever/YourRenderProjectFolder"
renderer_folder = path.split(sys.executable)[0]
renderer_exec_name = "Render"
params = [renderer_exec_name]
params += ['-percentRes', '75']
params += ['-alpha', '0']
params += ['-proj', render_project]
params += ['-r', 'mr']
params += [render_file]
p = Popen(params, cwd=renderer_folder)
stdout, stderr = p.communicate()
That's it! Of course, your script will have to be run using Maya's Python interpreter (Mayapy).
Do check out the docs for all the commands used for more options, esp.:
cmds.file()
cmds.viewFit()
cmds.loadPlugin()
Subprocess and Popen
PLUS, because of the awesomeness of Python, you can use modules like sched (docs) to schedule the running of this method in your Python code.
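For instance, a bare-bones sched sketch, where render_scene is a hypothetical wrapper around the open/import/render steps above:
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def render_scene():
    # hypothetical: open the scene, set up lights, kick off the render
    pass

def periodic():
    # render, then schedule the next run in 10 minutes
    render_scene()
    scheduler.enter(600, 1, periodic)

scheduler.enter(600, 1, periodic)
scheduler.run()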
Hope this was useful. Have fun with this. Cheers.
A lot depends on what you need to do.
If you want to run a script that has access to Maya functionality, you can run a Maya standalone instance as in Kartik's answer. The mayapy binary installed in the same folder as your Maya is the Maya Python interpreter; you can run it directly the same way you'd run python.exe. Mayapy has the same command flags as a regular python interpreter.
Inside a mayapy session, once you call standalone.initialize() you will have a running Maya session - with a few exceptions, it is as if you were running inside a script tab in a regular maya session.
To force Maya to run a particular script on startup, you can use the -c flag, just the way you would with python. For example, you can start up Maya and print out the contents of an empty scene like this (note: I'm assuming mayapy.exe is on your path. You can just CD to the maya bin directory too).
mayapy -c 'import maya.standalone; maya.standalone.initialize(); import maya.cmds as cmds; print cmds.ls()'
>>> [u'time1', u'sequenceManager1', u'renderPartition', u'renderGlobalsList1', u'defaultLightList1', u'defaultShaderList1', u'postProcessList1', u'defaultRenderUtilityList1', u'defaultRenderingList1', u'lightList1', u'defaultTextureList1', u'lambert1', u'particleCloud1', u'initialShadingGroup', u'initialParticleSE', u'initialMaterialInfo', u'shaderGlow1', u'dof1', u'defaultRenderGlobals', u'defaultRenderQuality', u'defaultResolution', u'defaultLightSet', u'defaultObjectSet', u'defaultViewColorManager', u'hardwareRenderGlobals', u'hardwareRenderingGlobals', u'characterPartition', u'defaultHardwareRenderGlobals', u'lightLinker1', u'persp', u'perspShape', u'top', u'topShape', u'front', u'frontShape', u'side', u'sideShape', u'hyperGraphInfo', u'hyperGraphLayout', u'globalCacheControl', u'brush1', u'strokeGlobals', u'ikSystem', u'layerManager', u'defaultLayer', u'renderLayerManager', u'defaultRenderLayer']
You can run mayapy interactively - effectively a command-line version of Maya - using the -i flag. This will start mayapy and give you a command prompt:
mayapy -i -c "import maya.standalone; maya.standalone.initialize()"
which again starts the standalone for you but keeps the session going instead of running a command and quitting.
To run a script file, just pass in the file as an argument. In that case you'd want to do as Kartik suggests and include the standalone.initialize() in the script. Then call it with
mayapy path/to/script.py
To suppress the userSetup, you can create an environment variable called MAYA_SKIP_USERSETUP_PY and set it to a non-zero value; that will load Maya without running usersetup. You can also change environment variables or path variables before running mayapy; for example I can run mayapy from two different environments with these two bash aliases (in windows you'd use SET instead of EXPORT to change the env vars):
alias mp_zip="export MAYA_DEV=;mayapy -i -c \"import maya.standalone; maya.standalone.initialize()\""
alias mp_std="export MAYA_DEV=C:/UL/tools/python/ulmaya;export ZOMBUILD='C:/ul/tools/python/dist/ulmaya.zip';mayapy -i -c \"import maya.standalone; maya.standalone.initialize()\""
This blog post includes a python module for spinning up Mayapy instances with different environments as needed.
If you want to interact with a running Maya from another environment - say, if you're trying to remote-control it from a handheld device or a C program - you can use the Maya commandPort to handle simple requests via TCP. For more complex situations you could set up a basic remoting service of your own like this, or use a pre-existing Python RPC module like RPyC or ZeroMQ.
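As a rough sketch of the commandPort route (flag names quoted from memory, so double-check them against the Maya docs):
# inside Maya (e.g. in userSetup.py): open a TCP port that accepts Python
import maya.cmds as cmds
cmds.commandPort(name=":7002", sourceType="python")

# from any other process: send Python source over the socket
import socket
conn = socket.create_connection(("localhost", 7002))
conn.sendall(b"import maya.cmds as cmds; cmds.ls()\n")
conn.close()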
I often have the case that I'll be writing a script, and I'm up to a part of the script where I want to play around with some of the variables interactively. Getting to that part requires running a large part of the script I've already written.
In this case it isn't trivial to run this program from inside the shell. I would have to recreate the conditions of that function somehow.
What I want to do is call a function, like runshell(), which will run the python shell at that point in the program, keeping all variables in scope, allowing me to poke around in it.
How would I go about doing that?
import code
code.interact(local=locals())
But using the Python debugger is probably more what you want:
import pdb
pdb.set_trace()
By far the most convenient method that I have found is:
import IPython
IPython.embed()
You get all your global and local variables and all the creature comforts of IPython: tab completion, auto indenting, etc.
You have to install the IPython module to use it of course:
pip install ipython
For practicality I'd like to add that you can put the debugger trace in a one liner:
import pdb; pdb.set_trace()
Which is a nice line to add to an editor that supports snippets, like TextMate or Vim+SnipMate. I have it set up to expand "break" into the above one liner.
You can use the python debugger (pdb) set_trace function.
For example, if you invoke a script like this:
def whatever():
    x = 3
    import pdb
    pdb.set_trace()

if __name__ == '__main__':
    whatever()
You get the scope at the point when set_trace is called:
$ python ~/test/test.py
--Return--
> /home/jterrace/test/test.py(52)whatever()->None
-> pdb.set_trace()
(Pdb) x
3
(Pdb)
Not exactly a perfect source, but I've written a few manholes before; here is one I wrote for an abandoned pet project: http://code.google.com/p/devdave/source/browse/pymethius/trunk/webmud/handlers/konsole.py
And here is one from the Twisted Library http://twistedmatrix.com/trac/browser/tags/releases/twisted-8.1.0/twisted/manhole/telnet.py the console logic is in Shell.doCommand
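If those links go stale, the core idea of a manhole is small enough to sketch with the standard library. This is illustrative only (not the Twisted implementation): it accepts one TCP connection and runs an interactive console over it, exposing whatever namespace you pass in.
import code
import socket
import sys

def manhole(port=4444, local_ns=None):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", port))
    server.listen(1)
    conn, _ = server.accept()
    fh = conn.makefile("rw")

    class RemoteConsole(code.InteractiveConsole):
        def raw_input(self, prompt=""):
            # write the prompt to the socket, then read one line back
            fh.write(prompt)
            fh.flush()
            line = fh.readline()
            if not line:
                raise EOFError
            return line.rstrip("\n")

        def write(self, data):
            # tracebacks and error messages also go to the socket
            fh.write(data)
            fh.flush()

    old_stdout = sys.stdout
    sys.stdout = fh  # so print() output from the code you poke at reaches the client
    try:
        RemoteConsole(local_ns or {}).interact(banner="manhole")
    finally:
        sys.stdout = old_stdout
        fh.close()
        conn.close()
        server.close()
Call manhole(local_ns=locals()) at the point you want to inspect, then connect with e.g. "nc localhost 4444".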