How to run memory_profiler in Python inside 3D Studio Max

I am writing a script for 3ds Max. I think it has some sort of memory leak, because it gets slower over time. The script is about 3000 lines of code with many different variables, so I cannot determine what causes the problem.
So I thought I could use memory_profiler.
The problem is that I cannot run it from inside 3ds Max. Python is installed with the software and can only be run with a command (in MaxScript, the internal 3ds Max language):
python.Execute "print 'hello'"
or
python.ExecuteFile "demoBentCylinder.py"
So I think the only way to run the memory profiler would be to run the command:
python -m memory_profiler script.py
from a running script, in a similar way to execfile() (I know that execfile() only imports code).
Is this possible? Is there another way to run memory_profiler on my code?
Regards
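
One possible workaround, sketched here under the assumption that memory_profiler can be installed into the Python interpreter that ships with 3ds Max (e.g. via pip into its site-packages): rather than launching python -m memory_profiler externally, import the profile decorator directly in the script, so it also works when run through python.ExecuteFile. The function below is a made-up stand-in for the real code:

import memory_profiler

@memory_profiler.profile
def build_scene():
    # Hypothetical placeholder for the real 3ds Max work; memory_profiler
    # prints line-by-line memory usage when the decorated function is called.
    data = [x ** 2 for x in range(100000)]
    return sum(data)

build_scene()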


Running Python Scripts Outside of IDLE

1. I would like some help with this part of https://automatetheboringstuff.com/appendixb/ about running Python scripts outside of IDLE and sending command-line arguments.
2. I converted my code to a .exe with PyInstaller; what is the difference between this and running it as a script?
3. How are scripts made? I see experienced people say "I made a script to do something for me". How is that done?
*I am a beginner, so please keep the answers as simple as possible.
If you're wondering about command-line arguments, look into the argparse library: https://docs.python.org/3/library/argparse.html.
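For example, a minimal argparse sketch (the script name and arguments are made up for illustration):

import argparse

# Describe the command-line interface.
parser = argparse.ArgumentParser(description="Greet someone from the command line.")
parser.add_argument("name", help="who to greet")
parser.add_argument("--shout", action="store_true", help="print in upper case")
args = parser.parse_args()

greeting = "Hello, " + args.name + "!"
print(greeting.upper() if args.shout else greeting)

Running python greet.py World --shout would then print HELLO, WORLD!.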
The difference between the .exe and the normal script is that the .exe versions (conventionally) can be redistributed to other systems that don't have Python installed while still being able to run. If you're only making the script for yourself, the only real benefit is that you don't have to open your IDE (code editor) every time you want to run the code. However, if the script is still in development, you'd have to recompile it every time you made a modification in order to run it as an executable, which is very impractical.
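For reference, PyInstaller is typically invoked as pyinstaller --onefile script.py (with script.py standing in for your entry point), which bundles the interpreter and your code into a single executable in the dist folder.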
Your third part is very, very vague. Python is very versatile, and I recommend you continue looking at the automatetheboringstuff.com website if you're interested in making scripts that complete repetitive activities (although I must strongly advise you against using scripts maliciously).

Run .m file from Python script on Linux

I have read the suggestions but did not find an answer.
How can I run an Octave .m file from a Python script on Linux?
Do I have to change the directory?
I want to run the Octave .m file, with its functions, in the background without a GUI.
I tried it with the os module from Python:
(INFO: the .m file runs without bugs in Octave)
import os
os.system("octave-cli /home/myscripts/test.m")
If I run this in the Python console in Spyder, I get "0" as output.
Shouldn't it be "1" if there are no bugs? Do I have to make the test.m file executable?
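
Note that os.system() returns the command's exit status, and 0 conventionally means success, so getting 0 indicates the script ran without errors; you also do not need to make test.m executable when you pass it to octave-cli explicitly. A minimal sketch using subprocess instead (assuming octave-cli is on the PATH), which additionally captures the script's printed output:

import subprocess

# Run the Octave script headless and capture its stdout/stderr as text.
result = subprocess.run(
    ["octave-cli", "/home/myscripts/test.m"],
    capture_output=True,
    text=True,
)

print("exit code:", result.returncode)  # 0 means Octave exited successfully
print("output:", result.stdout)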

Running two Python scripts with a bash file

I would like to run two Python scripts at the same time on my laptop without any decrease in their calculation speed.
I have searched and saw this question saying that we should use a bash file.
I searched further but did not understand what I should do or how to run those scripts this way with bash:
python script1.py &
python script2.py &
I am inexperienced with this and do not understand how or where to do it, so I need your professional advice.
I am using Windows 64-bit.
Best
PS: The answer I accepted is a way to run two tasks in parallel, but it does not decrease the calculation time for the two tasks at all.
If you can install GNU Parallel on Windows under Git Bash (ref), then you can run the two scripts on separate CPUs this way:
▶ (cat <<EOF) | parallel --jobs 2
python script1.py
python script2.py
EOF
Note from the parallel man page:
--jobs N
Number of jobslots on each machine. Run up to N jobs in parallel.
0 means as many as possible. Default is 100% which will run one job per
CPU on each machine.
Note that the question has been updated to state that parallelisation does not improve calculation time, which is not generally a correct statement.
While the benefits of parallelisation are highly machine- and workload-dependent, parallelisation significantly improves the processing time of CPU-bound processes on multi-core computers.
Here is a demonstration based on calculating 50,000 digits of Pi using a spigot algorithm (code) on my quad-core MacBook Pro:
Single task (52s):
▶ time python3 spigot.py
...
python3 spigot.py 52.73s user 0.32s system 98% cpu 53.857 total
Running the same computation twice in GNU parallel (74s):
▶ (cat <<EOF) | time parallel --jobs 2
python3 spigot.py
python3 spigot.py
EOF
...
parallel --jobs 2 74.19s user 0.48s system 196% cpu 37.923 total
Of course this is on a system that is busy running an operating system and all my other apps, so it doesn't halve the processing time, but it is a big improvement all the same.
See also this related Stack Overflow answer.
I use a batch file which contains these lines:
start python script1.py
start python script2.py
This opens a new window for each start statement.
A quite easy way to run parallel jobs of any kind is to use nohup, which redirects the output to a file called nohup.out (by default). In your case you would just write:
nohup python script1.py > output_script1 &
nohup python script2.py > output_script2 &
That's it. With nohup you can also log out, and the scripts will continue running until they have finished.
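
Since the question mentions Windows, here is a minimal cross-platform sketch using Python's own subprocess module (assuming script1.py and script2.py sit in the current directory) that launches both scripts without waiting for either:

import subprocess
import sys

# Popen returns immediately, so both scripts run concurrently
# (on separate CPU cores, if the machine has them free).
procs = [
    subprocess.Popen([sys.executable, "script1.py"]),
    subprocess.Popen([sys.executable, "script2.py"]),
]

# Optionally wait for both to finish and report their exit codes.
for p in procs:
    p.wait()
    print(p.args, "exited with", p.returncode)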

Run Python script in parallel with MATLAB

I have 2 scripts:
python script
matlab script
I need to run these two scripts in parallel (no output for either of them). I was thinking of calling the Python script from the MATLAB script.
I know it is possible to run a Python script from MATLAB like this:
systemCommand='my_script.py'
system(systemCommand)
However, this way the MATLAB script will wait for the Python script to return, and the rest of my MATLAB script will not be executed.
Any ideas?
As mentioned near the end of MATLAB's system documentation, in the "Tips" section: to run a system command in the background (on *nix), you can append an ampersand (&) to the end of your command.
system('my_script.py &')
If you're on Windows, you'll want to use the following to prevent a command window from opening.
system('start /b my_script.py');

Python: exe file from script, significant performance decrease

I am testing C++ code compiled to an exe (0 errors, 0 warnings). The code is a console application. I run the application in the following ways:
a) from the Windows 7 command line: average time 497 s
b) from a Python script using
subprocess.call()
with an average time of 1201 s!
Results:
The application takes about 2.4 times as long from the Python script as from the command line... Is this significant performance decrease normal?
Are you measuring from the point that subprocess.call() is executed, or from the point that you load the Python script? I would imagine that a large portion of that time comes from waiting for the Python interpreter to load, the subprocess module to load, any other modules you import, and so on. If the Python script that calls the program ends up being large, then I think this overhead will become insignificant. If it will be short, you may be better off creating a Windows batch (.bat) file to call the program (assuming those still exist in Windows 7... I haven't used Windows in a while).
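One way to separate the two effects is to time only the child process from inside Python. A minimal sketch, using a hypothetical executable name myapp.exe:

import subprocess
import time

# Measure only the external process, excluding Python's own startup cost.
start = time.perf_counter()
exit_code = subprocess.call(["myapp.exe"])
elapsed = time.perf_counter() - start

print("exit code:", exit_code)
print("elapsed: %.1f s" % elapsed)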
