Normally, I would use "blender -P script.py" to run a Python script. In that case, a new Blender process is started to execute the script. What I am trying to do now is to run a script using a Blender process that is already running, instead of starting a new one.
I have not seen any source on this so far, which makes me concerned about whether this approach is actually feasible.
Any help would be appreciated.
Blender isn't designed to be started from the CLI and then keep receiving more commands from the CLI while it is running. It does, however, include a text editor that can open text files and run a text block as a Python script, and it also includes a Python console that can be used to interactively type in commands while Blender is running. You may also find this addon useful, as it lets you run a text block in the Python console; that leaves you with an interactive session containing the variables as they exist at the end of the script's execution.
There is a CLI option to run Blender as a Python console: blender --python-console. The GUI does not get updated while this console is running, so you could open and exec several scripts, and when you exit the console Blender will update its GUI and allow interactive use; or, if you start in background mode with -b, Blender will quit when you exit the console.
My solution was to launch Blender from the console with a Python script (blender --python script.py) that contains a while loop and creates a server socket to receive requests to run specific code. The loop prevents Blender from opening the GUI, and the socket handles multiple requests inside the same Blender process.
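For reference, here is a minimal sketch of that kind of server script; the host, port, and "quit" convention are my own assumptions. Run it with blender --python script.py:

import socket

HOST, PORT = "127.0.0.1", 5555  # assumed address; pick any free local port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    while True:  # this loop keeps Blender from reaching its GUI event loop
        conn, addr = srv.accept()
        with conn:
            code = conn.recv(65536).decode("utf-8")
            if code.strip() == "quit":  # assumed shutdown convention
                break
            try:
                exec(code, {"__name__": "__main__"})  # run the received code inside this Blender process
                conn.sendall(b"ok")
            except Exception as exc:
                conn.sendall(str(exc).encode("utf-8"))

A client in any other process can then connect to the port and send Python source (for example import bpy; bpy.ops.mesh.primitive_cube_add()) as plain text.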
I have an embedded Linux system that needs to run a Python script whenever it boots. The Python script needs to have a terminal interface so the user can interact with it and see output. The script also spawns another process, written in C, to transfer large amounts of data over SPI.
I've managed to get the script to start on launch and have terminal access by adding
@reboot /usr/bin/screen -d -m python3 /scripts/my_script.py
to the crontab. I can then do "screen -r" and interact with the script. However, if launched this way, the script fails to start the external SPI program. In Python I launch it with subprocess.Popen:
proc = subprocess.Popen(["./spi_newpins", "-o", "/media/SD/" + latest_file])
and this works perfectly whenever I launch the script manually, even within screen, just not when it is launched by crontab. Does anyone have any ideas on how to get the SPI subprocess to also work from crontab?
Fixed now: I had to use an absolute path in the spi_newpins call
proc = subprocess.Popen(["/scripts/./spi_newpins", "-o", "/media/SD/" + latest_file])
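A related sketch, in case paths inside spi_newpins itself are relative too; the cwd value is an assumption. Cron jobs start in the user's home directory with a minimal environment, so an explicit working directory can be passed as well:

import subprocess

proc = subprocess.Popen(
    ["/scripts/spi_newpins", "-o", "/media/SD/" + latest_file],
    cwd="/scripts",  # assumed working directory; cron does not start in the script's folder
)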
I have a Python program that runs as a Linux service.
This service updates itself by downloading a new version of the code from an FTP server and launching a bash script to update the service.
In this script there is a line that destroys the current service before recreating it with the new source code.
I run this bash script with:
subprocess.call("sudo bash /home/pi/install.sh",shell=True)
I understand that this subprocess lives inside my Python program. The bash script stops the Linux service, which stops the Python program, which kills the subprocess itself... so the update never finishes.
What are possible solutions to this problem?
I think there are several ways to do it, one of them being (maybe not the most elegant?) to have your Python script schedule a cron job for the bash script using python-crontab.
Say it's 13:00 and you want your job to run: have the Python script schedule a cron job for 13:05 (just to add a time buffer).
You can then remove the cron job after the bash job has run, either manually or from your bash script (or have it call a Python script that uses python-crontab to remove it; that is fairly easy to do).
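A minimal sketch of that idea with python-crontab; the time, the comment tag, and the script path are assumptions:

from crontab import CronTab

cron = CronTab(user=True)
job = cron.new(command="sudo bash /home/pi/install.sh", comment="self-update")
job.setall("5 13 * * *")  # fires at 13:05; remove the entry again once it has run
cron.write()

The cron entry can later be removed with cron.remove_all(comment="self-update") followed by cron.write(), either from the bash script or from a small follow-up Python script.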
Don't let the bash script stop the service. Just let it exit with a specific exit code if it installed a new version, and restart the service accordingly from the Python code.
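A sketch of that flow; the exit code 42, the service name, and the use of systemctl are assumptions, not part of the answer:

import subprocess
import sys

rc = subprocess.call(["sudo", "bash", "/home/pi/install.sh"])
if rc == 42:  # assumed convention: install.sh exits 42 when it installed a new version
    # Restarting is the last thing this process does; the service manager brings up the new code.
    subprocess.Popen(["sudo", "systemctl", "restart", "myservice"])
    sys.exit(0)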
I am a beginner in Maya, and I want to run commands from a terminal against the Maya window that is already open, to modify things in the scene, like adding a sphere, for example in Python:
from pymel.all import *
sphere()
or in MEL:
polySphere -r 1 -sx 20 -sy 20 -ax 0 1 0 -cuv 2 -ch 1;
I found the Maya API, and I found a lot of content about running Maya "headless" (without the GUI), but nothing yet about running commands against the open window, maybe because I don't know the right terms. I would prefer a Python solution, but if you know one in MEL you can post it here too!
Is there any way to do this without specifying the open document path?
You have four basic options for programmatic control of Maya:
1. Running scripts inside the script editor in Maya. This requires an open GUI Maya instance; you either type the commands yourself or initiate scripts by loading and executing them in the script editor. This works fine for automating repetitive tasks but requires manual intervention on your part.
2. Sending individual commands to Maya over a TCP connection using the Maya command port (see the sketch after this list). This is basically like connecting to another computer over telnet: you can control the Maya session, but you'll be communicating entirely via text. It's commonly used, for example, by people writing scripts in Sublime Text who want to test them in Maya without switching windows.
3. Running a command-line-only copy of Maya using the mayapy Python interpreter that ships with Maya and the maya.standalone module, which hosts a non-GUI Maya session. That lets you execute Python commands in Maya without needing the GUI at all; it's a common tool for automation tasks.
4. Passing a script argument to Maya at startup with the -c (for "command") flag. Maya will open and run that script. For legacy reasons the command must be MEL, not Python, but you can get around that by using the MEL command python along with a Python command in quotes.
All of these are useful; the right one really depends on what you need to do. For long-running tasks, however, #3 is probably the most reliable method because it's easy to iterate on and test.
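For reference, here is a minimal sketch of option 2, the command port; the port number 7001 is an assumption. Inside a running Maya session, open a Python command port once (from the script editor or userSetup):

import maya.cmds as cmds
cmds.commandPort(name=":7001", sourceType="python")

Then, from any external Python interpreter, send Python source over TCP:

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("127.0.0.1", 7001))
    s.sendall(b"import maya.cmds as cmds; cmds.polySphere(r=1)\n")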
I'm writing a program to monitor a running Python script. There will be multiple instances of this script, which is a server, running from different locations on the computer. My program will check that all instances being "watched" are up and running and will restart them as necessary.
The server script cannot be edited.
My problem is that the server process just shows up as the Python executable, and I am unable to determine the location of the specific server script on the computer.
Is there any way to determine which script is actually running in a specific python.exe/pythonw.exe process, and also its path?
ENV: Windows only
You can run the following command from the command line:
WMIC PROCESS get Caption,Commandline,Processid,ExecutablePath
This will show you which module is being executed.
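If you need the same information from inside the monitoring program rather than at a prompt, a sketch using psutil is below; psutil is not part of the answer above, it is an alternative to parsing WMIC output:

import psutil

# The command line of a python.exe/pythonw.exe process normally contains the script path.
for proc in psutil.process_iter(["name", "exe", "cmdline"]):
    if proc.info["name"] in ("python.exe", "pythonw.exe"):
        print(proc.info["exe"], proc.info["cmdline"])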
Pretty much what the title says: I would like to be able to connect to a Python process running under paster or uwsgi and use pdb functionality.
Using winpdb, you can attach to a running process like this:
Insert
import rpdb2; rpdb2.start_embedded_debugger('mypassword')
inside your script.
Launch your script (through paster or uwsgi) as usual.
Run winpdb
Click File>Attach
Type in password (e.g. "mypassword"), select the process.
To detach, click File>Detach. The script will continue to run, and can be attached to again later.
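For completeness, a minimal sketch of step 1, assuming a WSGI entry point served by paster or uwsgi; only the two rpdb2 lines come from the answer above, the application function is illustrative:

import rpdb2
rpdb2.start_embedded_debugger('mypassword')  # waits here for winpdb to attach (subject to rpdb2's default timeout)

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello']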