Python: exe file from script, significant performance decrease - python

I am testing C++ code compiled to an exe (0 errors, 0 warnings). The code is a console application. I run the application in the following ways:
a) from the Windows 7 command line: average time 497 sec
b) from a Python script using
subprocess.call()
with an average time of 1201 sec!
Results:
The application runs almost 3 times longer from the Python script than from the command line... Is this significant performance decrease normal?

Are you measuring from the point that subprocess.call() is executed, or from the point that you load the Python script? I would imagine that a large portion of that time comes from waiting for the Python interpreter to load, the subprocess module to load, any other modules you import, and so on. If the Python script that calls the program will end up being large, then this overhead will become insignificant. If it will be short, you may be better off creating a Windows batch (.bat) file to call the program (assuming those still exist in Win7... I haven't used Windows in a while).
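To separate interpreter startup from the child process's own runtime, you can time just the subprocess.call() itself. A minimal sketch; the command shown is a stand-in, substitute your compiled exe:

```python
import subprocess
import sys
import time

def timed_call(cmd):
    # Measures only the child process's runtime, not Python's own startup.
    start = time.time()
    returncode = subprocess.call(cmd)
    return returncode, time.time() - start

# Placeholder command; substitute ["myapp.exe"] (your compiled binary) here.
rc, seconds = timed_call([sys.executable, "-c", "pass"])
print(rc)
```

If the time measured this way matches the command-line run, the overhead is in interpreter/module startup rather than in subprocess itself.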

Related

How to run memory_profiler in python inside 3D Studio Max

I am writing a script for 3ds Max. I think it has some sort of memory leak; it gets slower over time. It's about 3000 lines of code with many different variables, and I cannot determine what causes this problem.
So I thought I could use memory_profiler.
The problem is that I cannot run it from inside of 3ds Max. Python is installed with this software and can only be run with a command (in MaxScript, the internal 3ds Max language):
python.Execute "print 'hello'"
or
python.ExecuteFile "demoBentCylinder.py"
So I think the only way to run the memory profiler would be to run the command:
python -m memory_profiler script.py
From a running script, in a similar way to execfile() (I know that execfile only imports code).
Is this possible? Is there another way to run memory_profiler on my code?
Regards
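One possible workaround, sketched under assumptions: shell out from the embedded interpreter to a full external Python installation with subprocess. The interpreter path and script path below are hypothetical and must match your setup, and memory_profiler must be installed in that external Python:

```python
import subprocess

def run_with_module(python_exe, module, script_path):
    # Equivalent to typing "python -m <module> <script>" on a command line;
    # returns the child's stdout, or raises CalledProcessError on failure.
    return subprocess.check_output([python_exe, "-m", module, script_path])

# Hypothetical paths; adjust to your installation:
# output = run_with_module(r"C:\Python27\python.exe", "memory_profiler",
#                          r"C:\scripts\script.py")
```

This sidesteps python.Execute entirely, but note it profiles the script in a separate process, not inside 3ds Max's own interpreter.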

Why does Python only save the bytecode for a script if it is imported?

Given that executing Python bytecode will be faster than running the original source code because Python does not have to recompile, why does Python only save the compiled bytecode when a script is imported? Wouldn't it be better to save the .pyc file for every script that's executed?
The startup of the Python interpreter takes time anyway (even if you might not notice it that much), so the saving simply doesn't matter, and it is more convenient to be able to just run a script you might have updated than to always compile it manually before executing it.
be faster than running the original source code
Btw, Python does not 'run' the source directly. The initial source from the main script is compiled and then executed as well.
Also keep in mind (Introduction to Python):
A program doesn't run any faster when it is read from a ‘.pyc’ or
‘.pyo’ file than when it is read from a ‘.py’ file; the only thing
that's faster about ‘.pyc’ or ‘.pyo’ files is the speed with which
they are loaded.
Further they say:
When a script is run by giving its name on the command line, the
bytecode for the script is never written to a ‘.pyc’ or ‘.pyo’ file.
Thus, the startup time of a script may be reduced by moving most of
its code to a module and having a small bootstrap script that imports
that module. It is also possible to name a ‘.pyc’ or ‘.pyo’ file
directly on the command line.
You can always test it. Here's an anecdote from my machine:
~$ time python test.py
real 0m0.029s
user 0m0.025s
sys 0m0.004s
~$ time python test.pyc
real 0m0.031s
user 0m0.025s
sys 0m0.004s
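If you do want bytecode for a top-level script anyway, the standard-library py_compile module can write it explicitly. A minimal sketch:

```python
import py_compile

def compile_script(path):
    # Compile a .py file to bytecode without importing it; returns the path
    # of the generated bytecode file (its exact location and name depend on
    # the Python version, e.g. __pycache__/ on Python 3).
    return py_compile.compile(path)
```

For example, compile_script("test.py") produces the bytecode file that, as the quoted docs note, can then be named directly on the command line.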

Py2exe executable shows black command line window

There was a simple task - well, in Python it took around one hundred lines of code (the task was only to ask the user for input files, process them, and write the results to disk). But the requirement was that it must be executable on PCs without a Python interpreter. I used py2exe to make an executable file (this increased the size from 3 KB to ~12 MB, but that doesn't really matter).
The problem is that when one tries to run this *.exe, it shows a black command line window for half a minute, and only after that the file-selecting dialogue. Is there a way to get rid of that half-minute delay? Or maybe there are other .py to .exe converters which would suit this situation better?
In py2exe
console = ['yourscript.py']
will generate a command window; use the following instead
windows = ['yourscript.py']
http://www.py2exe.org/index.cgi/ListOfOptions
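In a py2exe setup.py, that option looks roughly like this (a sketch under the assumption that py2exe is installed; the script name is a placeholder):

```python
# setup.py -- build with: python setup.py py2exe
from distutils.core import setup
import py2exe  # noqa: F401 -- registers the "py2exe" command

setup(
    # "windows" builds a GUI executable with no console window;
    # "console" would show the black command-line window instead.
    windows=["yourscript.py"],
)
```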
This is perfectly normal when making .exe files from Python code. When you make an executable file, Python itself is bundled into the .exe, so that the user does not have to install Python on their machine to make it work. Python is an interpreted language and requires the interpreter to be bundled.
You could always try alternatives to see if the resulting file is smaller, but chances are it's not a big deal.
If it is the code that is taking a long time, you may consider posting it on Stack Exchange's Code Review to see if there is anything that could be improved.
Further, if you are using Python 2.7, you should consider checking out PyInstaller. It is surprisingly easy; however, it has a couple of problems - especially with the PySide framework - though it works great for plain PyQt.
pyinstaller myscript.py --name="MyApp" --onefile
However, for a full list of optional parameters you should really check out the documentation.

Call a Python Script from parallel shell scripts at the same time

I have a question about the Python interpreter. How does it treat the same script running 100 times, for example with different sys.argv entries? Does it create a separate memory space for each script, or something different?
The system is Linux, CentOS 6.5. Is there any operational limit that can be observed and tuned?
You won't have any problem with what you're trying to do. You can call the same script in parallel many times, with different input arguments (sys.argv entries). For each run, a new memory space will be allocated - every invocation is an independent process.
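That each invocation is an independent process with its own sys.argv can be sketched like this, launching several copies in parallel (the one-liner stands in for your real script):

```python
import subprocess
import sys

def launch_parallel(arg_lists):
    # Start one interpreter process per argument list, then wait for all.
    # Every child gets its own memory space and its own sys.argv.
    procs = [subprocess.Popen([sys.executable] + args) for args in arg_lists]
    return [p.wait() for p in procs]

# 3 parallel runs of the same (placeholder) code with different arguments:
codes = launch_parallel(
    [["-c", "import sys; print(sys.argv)", str(i)] for i in range(3)]
)
print(codes)
```

The only shared resources are OS-level ones (files, sockets), so the practical limits are the usual per-user process and memory limits (ulimit on CentOS), not anything specific to Python.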

Python - Two processes after compiling?

I'm currently working on a small python script, for controlling my home PC (really just a hobby project - nothing serious).
Inside the script, there are two threads running at the same time, using thread (I might start using threading instead), like this:
thread.start_new_thread( Function, (Args) )
It works as intended when testing the script... but after building the code with PyInstaller there are two processes (one for each thread, I think).
How do I fix this?
Just kill the loader from the main program if it really bothers you. Here's one way to do it.
import os
import win32com.client

proc_name = 'MyProgram.exe'
my_pid = os.getpid()
wmi = win32com.client.GetObject('winmgmts:')
all_procs = wmi.InstancesOf('Win32_Process')
for proc in all_procs:
    if proc.Properties_("Name").Value == proc_name:
        proc_pid = proc.Properties_("ProcessID").Value
        if proc_pid != my_pid:
            print "killed my loader %s\n" % (proc_pid)
            os.kill(proc_pid, 9)
Python code does not need to be "compiled with PyInstaller".
Products like PyInstaller or py2exe are useful for creating a single executable file that you can distribute to third parties, or relocate inside your computer without worrying about the Python installation - however, they don't add "speed", nor is the resulting binary file any more "finished" than your original .py (or .pyw on Windows) file.
What these products do is create another copy of the Python interpreter, along with all the modules your program uses, and pack them inside a single file. It is likely that PyInstaller keeps a second process running to check things on the main script (such as launching it; maybe there are options to keep the script running, and so on). This is not part of a standard Python program.
It is not likely that PyInstaller splits the threads into two separate processes, as that would cause compatibility problems - threads run in the same process and can transparently access the same data structures.
How a "canonical" Python program runs: the main process, as seen by the OS, is the Python binary (python.exe on Windows). It finds the Python script it was called with - if there is a ".pyc" file for it, that is loaded - otherwise, it loads your ".py" file and compiles it to Python bytecode (not to a Windows executable). This compilation is automatic and transparent to people running the program. It is analogous to Java's compilation from a .java file to a .class, but no explicit step is needed by the programmer or user - it happens in place - and other factors control whether Python will store the resulting bytecode as a .pyc file or not.
To sum up: there is no performance penalty in running the ".py" script directly instead of generating an .exe file with PyInstaller or another product. You do have a disk-space impact if you do, though, as you will have one copy of the Python interpreter and libraries for each of your scripts.
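The automatic compile-then-execute step described above can be demonstrated by hand with the built-in compile(), which turns source text into a bytecode object before it is executed:

```python
# The interpreter's implicit step, done explicitly: source text -> code
# object (bytecode) -> execution. Normally this happens transparently.
source = "x = 2 + 3\nresult = x * 10\n"
code_obj = compile(source, "<example>", "exec")

namespace = {}
exec(code_obj, namespace)
print(namespace["result"])  # 50
```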
The URL pointed to by Janne Karila in the comment nails it - it's even worse than I thought:
in order to run your script, PyInstaller unpacks the Python DLLs and modules into a temporary directory. The time and system resources needed to do that, compared with a single script run, are non-trivial.
http://www.pyinstaller.org/export/v2.0/project/doc/Manual.html?format=raw#how-one-file-mode-works
