Execute script after Blender is fully loaded - python

How do I automatically execute a python script after Blender has fully loaded?
Context
My script generates a scene based on a seed. I want to create a couple thousand images, but since Blender leaks memory after a hundred generations or so, everything becomes significantly slower and eventually crashes. I want to mitigate the problem by creating only x images per session and completely restarting Blender after each session.
Problem
If I load the blend file manually and click the play button in the script editor, everything works as expected. When I try to call the script after startup, it crashes in add_curve_spirals.py line 184, since context.space_data is None.
Since manually starting the script works fine, the problem is that Blender is in some sort of wrong state. Starting it with or without the GUI (--background) does not affect this.
Failed solutions
blender myfile.blend --python myscript.py executes the script before the context is fully ready and thus produces the error.
Using a handler to delay execution (bpy.app.handlers.load_post) calls my script after the file is completely loaded, but the context is still not ready and the same error occurs.
Setting the script in Blender to auto execute on startup (Text/Register) also produces the error.
Using sockets, as suggested here, to send commands to Blender at a later time. The server script that waits for incoming commands blocks Blender during startup and prevents it from fully loading, so the effect is the same as executing the script directly.
Using timed events (bpy.app.timers.register(render_fun, first_interval=10)).
These are all the ways that I found to automatically execute a script. In every case the script seems to be executed too early, or in the wrong state, and all of them fail in the same way.
I want to stress that the script itself is not the issue here. Even if I could work around this particular line, many similar problems might follow, and I don't want to rewrite my whole script. So what is the best way to automatically invoke it in the right state? For reference, the timer-based attempt is sketched below.
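A minimal sketch of that timer-based attempt (render_fun stands for the script's entry point, whose body is not shown in the question):

    import bpy

    def render_fun():
        # ... generate the scene and render here; this is what crashed in the addon ...
        return None  # returning None means the timer runs only once

    bpy.app.timers.register(render_fun, first_interval=10)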

It turns out that the problem was the execution context. This became clear after invoking the timed event manually: even after the scene was completely loaded, the timed event was still executed in the wrong context.
Since the crash happened in the add_curve_spirals addon, the solution was to provide a context override to the operator invocation. The rest of my script was not equally sensitive to the context and worked just fine.
It was not clear to me how exactly I should override the context, but this works for now (collected from other parts of the internet, so I don't understand all the details):
import bpy

def get_context():
    # Create a context override that works when Blender is executed from the command line.
    idx = bpy.context.window_manager.windows[:].index(bpy.context.window)
    window = bpy.context.window_manager.windows[idx]
    screen = window.screen
    # Collect all 3D viewport areas, sorted by size (smallest first).
    views_3d = sorted(
        [a for a in screen.areas if a.type == 'VIEW_3D'],
        key=lambda a: (a.width * a.height))
    a = views_3d[0]
    # Assemble the override dictionary passed to the operator call.
    o = {"window": window,
         "screen": screen,
         "area": a,
         "space_data": a.spaces.active,
         "region": a.regions[-1]}
    return o
Final invocation: bpy.ops.curve.spirals(get_context(), spiral_type='ARCH', radius=radius, turns=turns, dif_z=dif_z, ...
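With the override in place, the timer callback from the failed attempt above can stay the same; only the operator call changes. A sketch, with hypothetical parameter values (the real ones come from the seed-based generator, and the remaining operator arguments are elided in the original):

    def render_fun():
        radius, turns, dif_z = 1.0, 5, 0.1  # hypothetical values
        bpy.ops.curve.spirals(get_context(), spiral_type='ARCH',
                              radius=radius, turns=turns, dif_z=dif_z)
        # ... rest of the scene generation and rendering ...
        return None  # run once; do not reschedule the timer

    bpy.app.timers.register(render_fun, first_interval=10)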

Related

What are the potential reasons for my problem with running a multiprocessed Python script from elsewhere?

I have a Python script that uses multiprocessing, specifically Pool().map() from the package multiprocessing.
When I run the script locally on my machine, I remember to put my code under if __name__ == "__main__", and I run it by simply pressing Run in my IDE.
It works, does everything I expect it to.
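For reference, a minimal sketch of that pattern (the function names here are hypothetical):

    from multiprocessing import Pool

    def process_item(item):
        # Work that runs in a separate worker process.
        return item * item

    if __name__ == "__main__":
        # The guard keeps child processes from re-executing this block when
        # they import the module (essential with the 'spawn' start method).
        with Pool() as pool:
            results = pool.map(process_item, range(10))
        print(results)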
However, at work, we have this server (I believe it uses C#) which takes Python scripts and executes them. When I upload my script to this server, the multiprocessing part of the code fails.
It is hard for me to tell what's going on (I do not have access to the server, so I have limited info to work with, and really this problem is not my responsibility, so I am asking purely out of curiosity). However, it seems that the code does not throw an error but rather gets stuck in an infinite loop of some sort, creating and ending new sessions over and over. No work happens inside these sessions; they just begin and end.
Moreover, the code never actually enters the multiprocessed part (i.e. the functions that I map to the Pool), since if it did, it would fail (for debugging, I put a raise Exception at the start of the code that is meant to run in parallel, just to check if it ever reaches it … but it never does).
Any clue what is going on, and how it is meant to be fixed?

Stop a python script without losing data

We have been running a script on my partner's computer for 18 hours. We underestimated how long it would take, and now we need to turn in the results. Is it possible to stop the script from running but still have access to all the lists we are building?
We need to add additional code to the one we are currently running that will use the lists being populated right now. Is there a way to stop the process, but still use (what has been generated of) the lists in the next portion of code?
My partner was using python interactively.
Update
We were able to successfully print the results and copy and paste after interrupting the program with control-C.
Well, the OP doesn't seem to need an answer anymore, but I'll answer anyway for anyone else coming across this.
While it is true that stopping the program will delete all data from memory, you can still save it. You can inject a debug session and save whatever you need before you kill the process.
Both PyCharm and PyDev support attaching their debugger to a running python application.
See here for an explanation of how it works in PyCharm.
Once you've attached the debugger, you can set a breakpoint in your code and the program will stop when it hits that line the next time. Then you can inspect all variables and run some code via the 'Evaluate' feature. This code may save whatever variable you need.
I've tested this with PyCharm 2018.1.1 Community Edition and Python 3.6.4.
In order to do so, I ran this code, which I saved as test.py:
import collections
import time

data = collections.deque(maxlen=100)
i = 0
while True:
    data.append(i % 1000)
    i += 1
    time.sleep(0.001)
via the command python3 test.py from an external Windows PowerShell instance.
Then I opened that file in PyCharm and attached the debugger. I set a breakpoint at the line i += 1 and it halted right there. Then I evaluated the following code fragment:
import json
with open('data.json', 'w') as ofile:
    json.dump(list(data), ofile)
And found all entries from data in the json file data.json.
Follow-up:
This even works in an interactive session! I ran the very same code in a Jupyter notebook cell and then attached the debugger to the kernel. Still having test.py open, I set the breakpoint again on the same line as before, and the kernel halted. Then I could see all the variables from the interactive notebook session.
I don't think so. Stopping the program should also release all of the memory it was using.
edit: See Swenzel's comment for one way of doing it.

running 2 python scripts without them affecting each other

I have 2 Python scripts I'm trying to run side by side. However, each of them has to open, close, and reopen independently from the other. Also, one of the scripts runs inside a shell script.
Flaskserver.py & ./pyinit.sh
Flaskserver.py is just a Flask server that needs to be restarted every now and again to load a new page (I can't define all pages, as the HTML is interchangeable). The pyinit script runs as xinit ./pyinit.sh (it's selenium-webdriver Python code).
So when the Flaskserver changes and restarts, ./pyinit needs to wait about 20 seconds and then restart as well.
Either one of these can produce errors, so I need to be able to check whether Flaskserver has an error before restarting ./pyinit. If ./pyinit errors, I need to set the Flaskserver to a default value and then relaunch both of them.
I know a little about subprocess, but I'm unsure how it can deal with errors and stop-start code.
Rather than using subprocess, I would recommend creating a separate thread for each of your processes using multithreading.
Multithreading will not solve the problem if global variables are colliding; running them as separate scripts might avoid that, but then you might collide on something else, like a log file.
Now, if you keep both workers running from a single parent process that takes care of keeping them separated and assigning different global variables where necessary, you should be able to keep better control. Using things like join and lock from the threading library will also ensure that they don't collide, and it should be easy to put one worker to sleep while the other is running (as per waiting 20 secs).
You can keep a thread list as a global variable, as well as your lock; a minimal sketch follows below. I have done this successfully with CherryPy's server, for example. For more details about multithreading, look into the question I linked above; it's very well explained.
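A minimal sketch of that supervisor idea, with hypothetical worker functions standing in for the two scripts:

    import threading
    import time

    lock = threading.Lock()
    threads = []  # global thread list, as suggested above

    def run_flask_server():
        with lock:
            print("Flask server worker starting")
        # ... launch and monitor the server here ...

    def run_pyinit():
        time.sleep(20)  # wait for the server to come up before starting
        with lock:
            print("pyinit worker starting")
        # ... launch and monitor the selenium code here ...

    for target in (run_flask_server, run_pyinit):
        t = threading.Thread(target=target)
        t.start()
        threads.append(t)

    for t in threads:
        t.join()  # wait for both workers before exiting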

Possible to run a delayed code execution?

Is it possible to run a small set of code automatically after a script has run?
I am asking this because, for some reason, if I add this set of code to the main script, it works, but it displays a list of tab errors (the tab is already there, but it states that it cannot find it, or something of that sort).
I realized that after running my script, Maya seems to 'load' its own setup of refreshing, along with some plugins made by my company. As such, if I run the small set of code after my main script execution and the Maya/plugin 'refresher', it works with no problem. I would like to make the process as automated as possible, all within a script if that is possible...
Thus, is it possible to do so? Like a delayed sort of execution method?
FYI, the main script's execution time depends on the number of elements in the scene. The more there are, the longer it takes...
Maya has a command, maya.cmds.evalDeferred, that is meant for this purpose. It waits until no more Maya processing is pending and then evaluates itself; a sketch follows below.
You can also use maya.cmds.scriptJob for the same purpose.
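A minimal sketch of the evalDeferred approach, assuming it runs inside Maya (run_post_setup is a hypothetical name for the delayed code):

    import maya.cmds as cmds

    def run_post_setup():
        # Code that needs Maya to be idle (fully refreshed) before it runs.
        print("running after Maya finished processing")

    # evalDeferred queues the callable and runs it once Maya is idle.
    cmds.evalDeferred(run_post_setup)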
Note: While eval is considered dangerous and insecure, in a Maya context it's really normal, mainly because everything in Maya is inherently insecure, as nearly all GUI items are just eval commands that the user may modify. So the second you let anybody use your Maya shell, your security is breached.

Simplest way to have Python output from a compiled package?

Prior info: I'm on a Mac.
Q: How can I get terminal-like text output from the program execution, if I compile it with py2app for redistribution?
My case is a program that copies a lot of big files and takes a while to process, so I would like to at least have an output notification every time each file is copied.
This is easy if I run it on the command line, I can just print a new line.
But when I make a self-sufficient package, it simply opens on the bottom dock, with no window, and closes upon completion.
A simple text window would be fine.
Thanks in advance.
If you want to create a simple text window, you need to pick a GUI framework to do that with. For something this simple, there's no reason not to use Tkinter (which comes with any Python) or PyObjC (which is pre-installed with Apple's Python 2.7), unless you happen to be more familiar with wx, gobject, Qt, etc.
At any rate, however you do it, you'll need to write a function that takes a message and appends it to the text window (maybe creating it lazily, if necessary), and call that function wherever you would normally print. You may also want to write and install a logging handler that does the same thing, so you can just log.info stuff. (You could instead create a file-like object that does this and redirect stdout and/or stderr, but unless you have no control over the printing code, that's going to be a lot more work.)
The only real problem here is that a GUI needs an event loop, and you probably just wrote your code as a sequential script.
One way around that is to turn your whole current script into a background thread. If you're using a GUI library that allows you to access the widgets from background threads, everything is easy; your printfunc just does textwidget.append(msg). If not, it may at least have a call_on_main_thread type function, so your printfunc does call_on_main_thread(textwidget.append, msg). If worst comes to worst (and I believe with Tkinter, it does), you have to create an explicit queue to push messages through, and write a queue handler in the event loop; a minimal version is sketched below. This recipe should give you an idea. Replace the body of workerThread with your code, and end it with self.endApplication(). (There are probably better examples out there; this was just what I found first in a quick search.)
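For illustration, a minimal Tkinter version of that queue pattern (all names hypothetical; the worker stands in for the file-copying loop):

    import queue
    import threading
    import time
    import tkinter as tk

    msg_queue = queue.Queue()

    def worker():
        # Stand-in for the long-running copy job; pushes progress messages.
        for i in range(5):
            time.sleep(1)
            msg_queue.put("copied file %d" % i)

    def poll_queue():
        # Runs on the main thread; drains pending messages into the text widget.
        try:
            while True:
                text.insert(tk.END, msg_queue.get_nowait() + "\n")
        except queue.Empty:
            pass
        root.after(100, poll_queue)  # check again in 100 ms

    root = tk.Tk()
    text = tk.Text(root)
    text.pack()
    threading.Thread(target=worker, daemon=True).start()
    poll_queue()
    root.mainloop()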
The other way around that is to have your code cooperatively operate with the event loop. Some libraries, like wx, have functions like SafeYield that make things work if you just call it after every chunk of processing. Others don't have that, but have a way to explicitly drive the event loop from your code. Others have neither—but every event loop framework has to have a way to schedule new events, so you can break your code up into a sequence of functions that each finish quickly and then do something like root.after_idle(nextfunc).
However… are you sure you need to do this?
First, any app, including one created by py2app, will send its stdout to the terminal if you run it with Foo.app/Contents/MacOS/Foo. And you can even set things up so that open Foo.app works that way, if you want. Obviously this doesn't help for people who just double-click the app in Finder (because then there is no terminal), but sometimes it's sufficient to just have the output available when people need it and know how to follow instructions.
And you can take this farther: Create a Foo.command file that just does something like $(dirname $0)/Foo.app/Contents/MacOS/Foo, and when you double-click that file, it launches Terminal.app and runs your script.
Or you can get even simpler: just use logging to syslog the output, and if you want to see when each file is done, just watch the log messages go by in Console.app; a minimal setup is sketched below.
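A sketch of that logging setup (the socket path shown is the usual local syslog socket on macOS; adjust if your system differs):

    import logging
    import logging.handlers

    # /var/run/syslog is the local syslog socket on macOS.
    handler = logging.handlers.SysLogHandler(address='/var/run/syslog')
    log = logging.getLogger(__name__)
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    log.info("copied file foo.dat")  # visible in Console.app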
Finally, do you even need py2app in the first place? If you don't have any external dependencies, just rename your script to Foo.command, and double-clicking it will run it in Terminal.app. If you do have external dependencies, you might still be able to get away with bundling it all together as a folder with a .command in it, instead of as a .app.
Obviously none of these ideas are exactly a professional or newbie-friendly way to build an interface, so if that matters, you will have to create a GUI.
