Multiple -m command line arguments (Python)

I want to run both cProfile (mainly for time measurement) and also a memory profiler that I found here. However, both require the -m command line argument to be given, which doesn't exactly play nicely.
Is there a way to have both running? All I've managed to do so far is get the interpreter yelling at me.
If you need any more information, let me know and I'll do my best to provide it. Thanks in advance!

It is not possible to start two modules using two -m arguments, because everything on the command line after -m is handed to the named module as sys.argv. This is not described explicitly in the documentation, but you can verify it experimentally.
Create two Python files, a.py and b.py.
Contents of a.py:
import sys
print('a')
print(sys.argv)
Contents of b.py:
print('b')
Now try to run both using two -m arguments:
$ python -m a -m b
Output:
a
['/home/lesmana/tmp/a.py', '-m', 'b']
As you can see, module b is never started: the second -m is not handled by Python itself but passed to module a as an ordinary argument.
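If the goal is simply to get both profilers running, one workaround (a sketch, not part of the original answer) is to drive cProfile from inside the program instead of via -m, which leaves the single -m slot free for the memory profiler. cProfile.run() is standard library; the main() function below is an assumed placeholder:
import cProfile

def main():
    # stand-in workload for whatever your program does
    total = sum(i * i for i in range(10000))
    print(total)

if __name__ == '__main__':
    # cProfile.run executes the statement and prints timing stats,
    # so no "-m cProfile" is needed on the command line
    cProfile.run('main()')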

While it's now evident to me that you can't use two -m arguments in the same command, I managed to pull together something of a solution. It's a bit roundabout, and not exactly perfect, though.
I used 2 .bat files, which can be seen here. On the left hand side is the .bat that handles cProfiler, and on the right is the .bat that handles the memory profiler.
The code for the python programs seen in the .bat handling the memory profiler can be seen here and here.
The first program adds a # to the line directly above the function in my main code here, which means that the program can actually run, and cProfile can do its thing.
The second program removes that #, meaning that the memory profiler can work.
For this system to work properly with my layout, the profile line needs to start out commented (as "#profile").
It's a bit kludgey, and could use some refinement to automate it further (such as removing the need to specify the name of the file in the .bat file handling the memory profiler), but it'll do for now. I also realize that it's quite a specific case, but who knows, maybe someone is in the exact same position as I was...
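The linked helper scripts are no longer visible, but a minimal sketch of the comment-toggling idea they describe might look like this (the "@profile"/"#profile" marker and the command-line interface are my assumptions):
# toggle_profile.py - hypothetical reconstruction of the two helper scripts
import sys

def set_profile_marker(path, enable):
    with open(path) as f:
        lines = f.readlines()
    with open(path, 'w') as f:
        for line in lines:
            stripped = line.strip()
            if enable and stripped == '#profile':
                f.write(line.replace('#profile', '@profile'))  # enable memory profiler
            elif not enable and stripped == '@profile':
                f.write(line.replace('@profile', '#profile'))  # enable cProfile
            else:
                f.write(line)

if __name__ == '__main__':
    # usage: python toggle_profile.py target.py on|off
    set_profile_marker(sys.argv[1], sys.argv[2] == 'on')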

Related

Batch Rendering file from a python script without opening Maya

I have one Maya scene and a Python script that imports obj files into it. I need to create a batch render file which calls the Maya file and applies the script without opening Maya.
I have this code in a .sh file:
#!/bin/bash
"/Applications/Autodesk/maya2016/Maya.app/Contents/bin/Render" -r file -s 1 -e 4 -cam camera1 -rd "/Users/MyComp/Documents/maya/projects/default/images" "/Users/MyComp/Documents/maya/projects/default/Scenes/test1.mb"
But I have this code into the script which can be an issue or maybe not:
def renderFile(i):
    cmds.setAttr("defaultRenderGlobals.imageFilePrefix", i, type="string")
    cmds.render(batch=True)
If I execute this .sh file it renders without the python script. How can I add the python script?
I need that file for render-farm purposes.
I know it's an old thread, but thought I'd jump in just in case someone finds this thread in a search.
The comments seem a little confused. This comes from the fact that there are two different Python interpreters being talked about. The first is the system-level one, which the original question seems to be talking about. In that case, you can use any of the various shell command launchers (like subprocess.Popen) that suit your need. Here you are looking to run the render command like you would any other command in the shell.
In the responses, people are referring to the other interpreter, the custom Maya Python interpreter (mayapy.exe). In that case you are working with actual Maya libraries, and it's the same as working with Python in its shell, with the added Maya libraries/environment.
The two have different uses, the first is to control things like they were in the shell and the second is controlling things inside of a Maya context. Hope that clarifies things.
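For the first case, a minimal sketch of launching the same Render command from Python with subprocess (paths and flags taken verbatim from the question's .sh file):
import subprocess

# The same Render invocation as the .sh file, as an argument list
cmd = [
    "/Applications/Autodesk/maya2016/Maya.app/Contents/bin/Render",
    "-r", "file", "-s", "1", "-e", "4",
    "-cam", "camera1",
    "-rd", "/Users/MyComp/Documents/maya/projects/default/images",
    "/Users/MyComp/Documents/maya/projects/default/Scenes/test1.mb",
]
subprocess.call(cmd)  # blocks until the render finishes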

Python pdb on python script run as package

I have a python program that I usually run as a part of a package:
python -m mymod.client
in order to deal with relative imports inside mymod/client.py. How do I run this with pdb, the Python debugger? The following does not work:
python -m pdb mymod.client
It yields the error:
Error: mymod.client does not exist
EDIT #1 (to address possible duplicity of question)
My question isn't really about running two modules simultaneously in Python; rather it is about how to use pdb on a Python script that has relative imports inside it, which one usually deals with by running the script with "python -m."
Restated, my question could then be: how do I use pdb on such a script while not having to change the script itself just to have it run with pdb (i.e., preserving the relative imports inside the script as much as possible)? Shouldn't this be possible, or am I forced to refactor in some way if I want to use pdb? If so, what would be the minimal changes to the structure of the script that I'd have to introduce?
In summary, I don't care how I run the script, just so long as I can get it working with pdb without changing its internal structure (relative imports, etc.) too much.
I think I have a solution.
Run it like this:
python -m pdb path/mymod/client.py arg1 arg2
That will run it as a script, but will not treat it as a package.
At the top of client.py, the first line should be:
import mymod
That will get the package itself loaded.
I am still playing with this, but it seems to work so far.
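Worth noting (my addition, not part of the original answer): since Python 3.7, pdb itself accepts -m, so the package can be debugged directly:
$ python -m pdb -m mymod.client arg1 arg2
Here the second -m is parsed by pdb rather than by the interpreter, so it doesn't collide with the single -m that Python allows.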
This is not possible. Though unstated in the documentation, Python will not parse two modules via the -m command line option.

Run Python script in Python environment?

When starting Python in the terminal, can we run a Python script from inside the Python environment?
I know I can run it from bash, but I don't know if I can run it in the Python environment. The purpose is to see, when the script goes wrong, the values of the variables at that time.
The purpose is to see when the script goes wrong, the values of the variables at that time.
You have two options for that (neither of which is precisely what you're asking, but both are the proper way to achieve the desired outcome).
First, the pdb module:
import pdb; pdb.set_trace()
This enters the debugger at whatever point you place this code. Useful for seeing variables.
Second, running the command with -i:
$ python -i script.py
This drops you into the full interpreter after the script finishes, with all variables intact.
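A tiny illustration (the script name and contents are made up):
Contents of script.py:
x = 41
y = x + 1
Running it with -i leaves the variables available at the prompt:
$ python -i script.py
>>> y
42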

Calling a .py script from a specific file path in Python interpreter

I am just getting started with Python.
How do I call a test script from C:\X\Y\Z directory when in Python interpreter command line in interactive mode? How do I specify the full path for the file when it is not in the current working directory?
I can call a test script when using the Windows run command with "python -i c:\X\Y\Z\filename.py" and it runs fine. But I want to be able to call it from the Python terminal with the ">>>" prompt.
(I searched and searched for two hours and could not find an answer to this, although it seems like it should be a common question for a beginner and an easy thing to do.)
Thanks
Since you are using backslashes in the file path, Python interprets those as escape characters. When writing the file path in Python, make sure to use forward slashes.
with open("C:/X/Y/Z/filename.py", "r") as file:
exec(file.read())
Double backslashes also work, but I prefer the cleaner look of forward slashes.
If you want to import it into the REPL:
import sys
sys.path.append(r'c:\X\Y\Z')
import filename
If you want to execute code from a file within the interpreter, you can use execfile (Python 2 only):
execfile('C:/X/Y/Z/filename.py')
(/ works as a path separator on all operating systems; if you use \, you need to escape it ('C:\\X\\Y\\Z\\filename.py') or use a raw string literal (r'C:\X\Y\Z\filename.py'))
If you are using IPython (and you should; it's much more useful than vanilla interactive Python), you can use the magic function run (or with the % prefix: %run):
run C:\\X\\Y\\Z\\filename.py
%run C:\\X\\Y\\Z\\filename.py
See this link for more information about magic functions.
And by the way, it even has auto-completion of filenames.
Exec the heck out of it
Python 2.x:
execfile("C:\\X\Y\\Z")
Python 3+:
with open("C:\\X\Y\\Z", "r") as f:
exec(f.read())
Still, that is very bad practice: it executes code from a string (at some point), instead of using the preferred and safer way of importing modules. Also, when you import a module, any of its code after "if __name__ == '__main__':" won't run (because __name__ in an imported module won't be '__main__', as it would be if you ran the file as a single script).
It is bad for many reasons, in some sense strongly connected to the Zen of Python, but if you're a beginner, this should speak to you:
When you do anything in interactive mode, you work in some namespace (this term is very important for understanding Python; if you don't know it, check the Python language reference). When you exec()/execfile() something without providing globals()/locals(), you may end up with a modified namespace.
Modified namespace?
What does it mean? Let's have a script like this:
radius = 3

def field_of_circle(r):
    return r*r*3.14

print(field_of_circle(radius))
Now, you have following session:
>>>radius = 5
>>>execfile("script_above.py")
28.26
>>>print(radius)
3
You see what happens? Variables defined by you in the interactive session get overwritten by the values from the end of the script. The same goes for modifying already imported external modules. Let's have a very simple module:
x = 1
and an executed script:
import very_simple_module
very_simple_module.x = 3
Now, here's an interactive interpreter session:
>>>import very_simple_module
>>>print(very_simple_module.x)
1
>>>execfile("executed_script.py")
>>>print(very_simple_module.x)
3
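To avoid clobbering your session (my addition, not in the original answer), you can hand exec() its own globals dictionary; the script then runs inside that dict instead of the interactive namespace:
>>>radius = 5
>>>ns = {"__name__": "__main__"}
>>>exec(open("script_above.py").read(), ns)
28.26
>>>print(radius)
5
>>>print(ns["radius"])
3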
Run another interpreter
Interactive sessions are very useful for many things, but running python scripts is not one of them.
Unless... you want to play tough and use the Python shell as a system shell. Then you can use subprocess (in the standard library) or sh (which can be found on PyPI):
>>>import subprocess
>>>subprocess.call(["python", "C:\\X\\Y\\Z"])
>>>from sh import python
>>>python("C:\\X\\Y\\Z")
Those won't have the problem of modifying the interactive interpreter's namespace.
See script as module
Also, there is one more option: in the interactive session, add the directory containing the script to the Python path, and import a module named after the script:
>>>import sys
>>>if "C:\\X\\Y" not in sys.path:
sys.path.append("C:\\X\\Y")
>>>import Z
Remember that the directory in which the interpreter was started is automatically on the Python path, so if you ran python in the same directory as your script, you just need the last line above.
The interpreter's namespace won't change, but code after "if __name__ == '__main__':" won't be executed. Still, you can access the script's variables:
>>>radius = 5
>>>import first_example_script
>>>print(radius)
5
>>>print(first_example_script.radius)
3
Also, you can have a module name conflict. For example, if your script is sys.py, then this solution won't work, because Python will import the builtin sys module before yours.

Python meta-debugging

Heyo,
Just started writing an assembler for the imaginary computer my class is creating wire-by-wire, since the one the TAs provided sucks hard. I chose Python even though I've never really used it that much (but I know the basic syntax) and am loving it.
My favorite ability is how I can take a method I just wrote, paste it into the shell and then unit test it by hand (I'm using IDLE).
I'm just wondering if there is a way to expose all the symbols in my python code to the shell automatically, so I can debug without copying and pasting my code into the shell every time (especially when I make a modification in the code).
Cheers
You can import the module that your code is in. This will expose all of the symbols, prefixed with the module name.
The details for the easiest way to do it depend on your operating system but you can always do:
>>> sys.path.append('/path/to/directory/that/my/module/is/in/')
>>> import mymod #.py
Later, after you make a change, you can just do:
>>> reload(mymod)
and the symbols will now reference the new values. Note that from mymod import foo will break reload in the sense that foo will not be updated after a call to reload. So just use mymod.foo.
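A note for Python 3 (my addition, not the answerer's): reload is no longer a builtin there; the same workflow uses importlib:
>>> import importlib
>>> importlib.reload(mymod)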
Essentially the trick is to get the directory containing the file onto your PYTHONPATH environment variable. You can do this from .bashrc on Linux, for example. I don't know how to go about doing it on other operating systems. I use virtualenv, which has a nice wrapper and a workon command, so I just have to type workon foo and it runs shell scripts (that I had to write) that add the necessary directories to my Python path.
When I was just starting off, though, I made one permanent addition to my PYTHONPATH env variable and kept every module I wrote in there.
Another alternative is to execute your module with the -i option.
$ python -i mymod.py
This will execute the module through to completion and then leave you at the interpreter. This isn't IDLE, though; it's a little rougher, but you are now in your module's namespace (or rather, the module's namespace is the global namespace).
Check out IPython. It's an enhanced interactive Python shell. You can %run your script and it will automatically expose all your global objects to the shell. It's very easy to use and powerful. You can even debug your code using it.
For example, if your script is:
import numpy as np
def f(x):
    return x + 1
You can do the following:
%run yourScript.py
x = np.eye(4)
y = f(x)
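As a small aside on the debugging claim: if the script raises an exception, IPython's %debug magic drops you into a post-mortem debugger at the point of failure:
%run yourScript.py
# if it raises, inspect the failure afterwards:
%debug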
