Transferring a mesh from one process to another in Python

I've been racking my brain over this but haven't come up with anything yet.
I want my script to execute a .py file inside another, already started process. I have a Maya process open, and from inside another application (for example Modo) I want to run the file hello.py (print 'hello!') inside that exact Maya.
I already got the PID of that Maya process, but I don't know how to actually send it a command to execute.
Is there some attribute/flag in the subprocess or signal modules that I could be missing? Or is it done another way?
import os

openedMaya = []
r = os.popen('tasklist /v').read().strip().split('\n')
for i in range(len(r)):
    s = r[i]
    if 'maya.exe' in s and ': untitled' in s:
        openedMaya.append(s)
# openedMaya is a list, so parse the PID out of the first matching row
mayaPID = openedMaya[0].split('maya.exe')[1].split('Console')[0].strip()
I need a command that could execute hello.py in that Maya process.

You could use RPyC to act as a bridge so that you can communicate from one software to another. The idea is that you use RPyC to run an idle server in Maya, where the PYTHONPATH is also pointing to your hello.py script. This server stays active in the session, but the user shouldn't notice it exists.
Then in your other software you use RPyC to connect on the same port as the server and send it a message, which triggers the command in Maya.
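A minimal sketch of the idea (the port number 18812 and the service/method names are assumptions for illustration, not part of RPyC or Maya). The server half runs inside Maya, e.g. started from userSetup.py on a background thread:

import rpyc
from rpyc.utils.server import ThreadedServer

class MayaService(rpyc.Service):
    def exposed_run_script(self, path):
        # Execute a .py file inside this live Maya session
        with open(path) as f:
            exec(f.read(), {'__name__': '__main__'})

ThreadedServer(MayaService, port=18812).start()  # blocks until shutdown

The client half runs in the other application and triggers the command:

import rpyc

conn = rpyc.connect('localhost', 18812)
conn.root.run_script('hello.py')  # calls exposed_run_script in Maya
conn.close()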
It's slightly more overhead, but I have been able to use this successfully for stand-alone tools to trigger events in Maya. As far as using subprocess, you can use it to run a command in a new Maya session, but I don't think there's a way to use it for an existing one.
Hope that nudges you in the right direction.

Maybe an easier way would be to transfer your mesh by using an intermediate file. One process creates the file, another process (running inside the host app) reads it in.
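A rough sketch of that approach (the exchange path, the OBJ format, and the maya.cmds import call are assumptions):

import os
import time

EXCHANGE_PATH = '/tmp/mesh_exchange.obj'  # assumed shared location

def wait_for_mesh(path, poll=1.0):
    # Block until the producing process has finished writing the file
    while not os.path.exists(path):
        time.sleep(poll)
    return path

# Inside the host app (e.g. Maya), import the mesh once it appears:
# import maya.cmds as cmds
# cmds.file(wait_for_mesh(EXCHANGE_PATH), i=True, type='OBJ')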

Thanks for the advice. In the end I found a solution: open Maya's command port by running a MEL command (at startup):
commandPort -n ":<some_port>";
and connect from Modo to that port through a socket:
import socket

HOST = '127.0.0.1'
PORT = <some_port>
ADDR = (HOST, PORT)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(ADDR)
client.send(<message_that_you_want_to_send>)
data = client.recv(1024)
client.close()
and I'm able to do whatever I want inside that opened Maya, as long as I send MEL commands.
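Putting it together, a usage sketch that runs hello.py in the live session (the port 7002 is an assumption; execfile is the Python 2 idiom Maya shipped with at the time, and on Python 3 the payload must be bytes):

import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', 7002))  # Maya started with: commandPort -n ":7002";
# MEL's python() command evaluates a Python string inside the session
client.send('python("execfile(\'hello.py\')");')
print(client.recv(1024))
client.close()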
Thanks for the help though!

Related

How to check if Python script is already running

I have a Python script on Ubuntu which sometimes runs for more than 24 hours. I have set up cron to run this script every day. However, if the script is still running, I would like the new instance to terminate. I have already found some solutions, but they seem complicated. I would like to add a few lines at the beginning of the script that check whether the script is already running: if yes, return; else, continue.
I like this command:
pgrep -a python | grep 'script.py'
Is it possible to make a smart solution for this problem?
There is no simple way to do it. As mentioned in the comments, you can create some kind of lock file. But I prefer using sockets. I'm not sure if it works the same on Linux, but on Windows I use this:
import socket

class AppMutex:
    """
    Class serves as a single-instance mutex handler (my application can be run only once).
    It uses the OS property that a single UDP port can be bound only once at a time.
    """
    @staticmethod
    def enable():
        """
        By calling this you bind a UDP connection on the specified port.
        If binding fails then the port is already opened from somewhere else.
        """
        try:
            AppMutex.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP
            AppMutex.sock.bind(("127.0.0.1", 40000))
        except OSError:
            raise Exception("Application can be run only once.")
A simple call at the beginning of your script:
AppMutex.enable()
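Since the question mentions Ubuntu, a lock-file variant that works well on Linux (a sketch; the lock path is an assumption): fcntl.flock takes an advisory lock that the kernel releases automatically when the process dies, so a stale lock file left behind by a crash is harmless.

import fcntl
import sys

lock_file = open('/tmp/script.py.lock', 'w')  # assumed lock path
try:
    # Non-blocking exclusive lock; fails if another instance holds it
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    sys.exit('Script is already running.')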

Running infinite loop and getting commands from "outside" (e.g. shell or other scripts)

I am working on my Raspberry Pi, which is driving some WS2812B RGB LEDs. I can control the light and everything with the neopixel library and Python, so far so good.
I want this Python script to run an infinite loop that only deals with light management: dimming LEDs, changing colors, and lots more. But I want to be able to get commands from other scripts. Let's say I want to type a shell command that will change the color; my infinite Python script (the LED handler) should recognize this command and softly change the light mode to the desired color.
One idea is to constantly look into a text file for new commands, with the shell script inserting command lines into this text file.
But can you tell me if there is a better way of doing it?
Many thanks in advance.
One method would be to expose a TCP server, then communicate with the Python process over TCP. A simple example of how to create a TCP server is here, showcasing both the server script (running the LEDs) and the command scripts: example
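A minimal sketch of that approach using the standard socketserver module (the port and the command format are assumptions): the LED script runs the server, and any shell or script can send it a line of text.

import socketserver

class CommandHandler(socketserver.StreamRequestHandler):
    def handle(self):
        command = self.rfile.readline().decode().strip()
        print('Received command:', command)  # hand off to the LED loop here

server = socketserver.TCPServer(('127.0.0.1', 5555), CommandHandler)
server.serve_forever()

From the shell, a command could then be sent with e.g. echo "color red" | nc 127.0.0.1 5555.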
I suggest opening a port with your Python script and making it receive commands from that port (network programming). Although this makes your project more complicated, it is a very robust implementation.
You can use ZeroMQ and host it locally. It provides bindings for Python. Here is an example script (sender and receiver):
from threading import Thread
import zmq

class Sender(Thread):
    def run(self):
        context = zmq.Context()
        socket = context.socket(zmq.PUB)
        socket.connect('tcp://127.0.0.1:8000')
        while True:
            socket.send_string(input('Enter command: '))

class Receiver(Thread):
    def run(self):
        context = zmq.Context()
        socket = context.socket(zmq.SUB)
        socket.bind('tcp://127.0.0.1:8000')
        socket.setsockopt(zmq.SUBSCRIBE, b'')
        while True:
            data = socket.recv().decode('ascii')
            print(data)  # Do stuff with data.
The receiver would be the instance that controls the lights on the RPi, and the sender is the command-line script that lets you input the various commands. An advantage is that ZeroMQ provides bindings for various programming languages, and you can also send/receive commands over a network.
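For a quick local test, both threads can be started from one script (in practice the sender and receiver would be separate processes; this is just a sketch):

if __name__ == '__main__':
    Receiver().start()
    Sender().start()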
Another solution is to allow commands from a network connection. The script with the "infinite loop" will read input from a socket and perform the commands.

Only one python program running (like Firefox)?

When I open Firefox, then run the command:
firefox http://somewebsite
the URL opens in a new tab of Firefox (the same thing happens with Chromium as well). Is there some way to replicate this behavior in Python? For example, calling:
processStuff.py file/url
then calling:
processStuff.py anotherfile
should not start two different processes, but send a message to the currently running program. For example, you could have info in one tabbed dialog box instead of 10 single windows.
Adding bounty for anyone who can describe how Firefox/Chromium do this in a cross-platform way.
The way Firefox does it: the first instance creates a socket file (or a named pipe on Windows). This serves both as a way for later Firefox instances to detect the first instance and as a channel to communicate with it, forwarding it the URL before dying. Since a socket file or named pipe is only accessible from processes running on the local system (as files are), no network client can access it. As they are files, firewalls will not block them either (it's like writing to a file).
Here is a naive implementation to illustrate my point. On first launch, the socket file lock.sock is created. Further launches of the script will detect the lock and send the URL to it:
import socket
import os

SOCKET_FILENAME = 'lock.sock'

def server():
    print 'I\'m the server, creating the socket'
    s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    s.bind(SOCKET_FILENAME)
    try:
        while True:
            print 'Got a URL: %s' % s.recv(65536)
    except KeyboardInterrupt, exc:
        print 'Quitting, removing the socket file'
        s.close()
        os.remove(SOCKET_FILENAME)

def client():
    print 'I\'m the client, opening the socket'
    s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    s.connect(SOCKET_FILENAME)
    s.send('http://stackoverflow.com')
    s.close()

def main():
    if os.path.exists(SOCKET_FILENAME):
        try:
            client()
        except socket.error:
            print "Bad socket file, program closed unexpectedly?"
            os.remove(SOCKET_FILENAME)
            server()
    else:
        server()

main()
You should implement a proper protocol (send proper datagrams instead of hardcoding the length, for instance), maybe using SocketServer, but that is beyond the scope of this question. The Python Socket Programming HOWTO might also help you. I have no Windows machine available, so I cannot confirm that it works on that platform.
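For what it's worth, a sketch of what the server half could look like with SocketServer (Python 2 naming to match the code above; the handler details are my assumptions):

import SocketServer

class URLHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        data, sock = self.request  # datagram servers pass (data, socket)
        print 'Got a URL: %s' % data

server = SocketServer.UnixDatagramServer('lock.sock', URLHandler)
server.serve_forever()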
You could create a data directory where your program creates a "lock file" once it is running, after first checking that the file doesn't exist yet.
If it exists, you should try to communicate with the existing process, which creates a socket or a pipe or something like that and communicates its address or its path in an appropriate way.
There are many different ways to do so, depending on which platform the program runs on.
While I doubt this is how Firefox/Chrome does it, it would be possible to achieve your goal without sockets, relying solely on the file system. I found it difficult to put into text, so see below for a rough flow chart of how it could be done. I would consider this approach similar to a cookie :). One last thought: with this approach it could also be possible to store workspaces or tabs across multiple sessions.
EDIT
Per a comment, environment variables are not shared between processes. All of my work thus far has been a single process calling multiple modules. Sorry for any confusion.
I think you could use multiprocessing connections with a subprocess to accomplish this. Your script would just have to try to connect to the "remote" listener on localhost, and if it's not available, it could start one itself.
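A sketch of that idea with multiprocessing.connection (the port, authkey, and error handling are assumptions; Python 3 raises ConnectionRefusedError when nothing is listening):

from multiprocessing.connection import Client, Listener
import sys

ADDRESS = ('localhost', 6000)
AUTHKEY = b'processStuff'  # hypothetical shared secret

try:
    # An instance is already running: forward our arguments to it
    conn = Client(ADDRESS, authkey=AUTHKEY)
    conn.send(sys.argv[1:])
    conn.close()
except ConnectionRefusedError:
    # Otherwise become the single running instance
    with Listener(ADDRESS, authkey=AUTHKEY) as listener:
        while True:
            with listener.accept() as conn:
                print('Got arguments:', conn.recv())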
A very basic way is to use sockets.
http://wiki.python.org/moin/ParallelProcessing
Use threading: http://www.valuedlessons.com/2008/06/message-passing-conccurrency-actor.html
Example of socket programming: http://code.activestate.com/recipes/52218-message-passing-with-socket-datagrams/

python, running command line servers - they're not listening properly

I'm attempting to start a server app (in Erlang; it opens ports and listens for HTTP requests) via the command line using pexpect (or even directly using subprocess.Popen()).
The app starts fine and logs (via pexpect) to the screen fine, and I can interact with it via the command line as well...
The issue is that the server won't listen for incoming requests. The app listens when I start it up manually, by typing commands in the command line; using subprocess/pexpect stops the app from listening somehow...
When I start it manually, "netstat -tlp" displays the app as listening; when I start it via Python (subprocess/pexpect), netstat does not register the app...
I have a feeling it has something to do with the environment, the way Python forks things, etc.
Any ideas?
Thank you.
Basic example:
Note:
"-pz" - just adds ./ebin to the module path for the erl VM (library search path)
"-run" - runs moduleName, without any parameters.

import pexpect

command_str = "erl -pz ./ebin -run moduleName"
child = pexpect.spawn(command_str)
child.interact()  # Give control of the child to the user
All of this stuff works correctly, which is strange. I have logging inside my code and all the log messages output as they should. The server wouldn't listen even if I started its process via a bash script, so I don't think it's the Python code that's causing it (that's why I have a feeling it's something about the way the new OS process is started).
It could be to do with the way that command line arguments are passed to the subprocess.
Without more specific code, I can't say for sure, but I had this problem working on sshsplit ( https://launchpad.net/sshsplit )
To pass arguments correctly (in this example "ssh -ND 3000"), you should use something like this:

import subprocess

openargs = ["ssh", "-ND", "3000"]
print "Launching %s" % (" ".join(openargs))
p = subprocess.Popen(openargs, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
This will not only allow you to see exactly what command you are launching, but should also correctly pass the values to the executable. Although I can't say for sure without seeing some code, this seems the most likely cause of failure (could it also be that the program requires a specific working directory, or a configuration file?).
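On that last point, a sketch of pinning down the working directory and environment explicitly (the application path is an assumption):

import os
import subprocess

p = subprocess.Popen(
    ["erl", "-pz", "./ebin", "-run", "moduleName"],
    cwd="/path/to/app",        # assumed application directory
    env=dict(os.environ),      # pass the full shell environment through
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)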

To stop returning through SSH using Pexpect

I am trying to use pexpect to SSH into a computer, but I do not want to return back to the original computer. The code I have is:
#!/usr/bin/python2.6
import pexpect, os

def ssh():
    # Logs into the computer through SSH
    ssh_newkey = 'Are you sure you want to continue connecting'
    # my ssh command line
    p = pexpect.spawn('ssh build@10.51.11.10')
    i = p.expect([ssh_newkey, 'password:', pexpect.EOF])
    p.sendline("password")
    i = p.expect('-bash-3.2')
    print os.getcwd()

ssh()
This allows me to SSH into the computer, but when I run os.getcwd(), pexpect has returned me to the original computer. You see, I want to SSH into another computer and use its environment, not drag my environment along using pexpect. Can anyone suggest how to get this working, or an alternative way?
Thanks
The process that launches ssh is never going to leave the computer it runs on. When you ssh into another computer, you start a new process there. That process is an entirely separate thing, a separate program to run. If you want to do anything on the remote machine, you have to either send the commands to execute over the connection, or copy over the program you want to run and execute it remotely.
Your instance connected to the other machine is p. p.sendline() what you want to run on the other machine and p.expect() the result. In the case outlined:

p.sendline("pwd && hostname")
p.expect("-bash-3.2")  # although it's better to set the prompt yourself so that this can be ported to any machine
response = p.before
print "received response [[" + response + "]]"
Try that. Also try the pxssh module to use SSH with Python. It is built on pexpect and has all of the methods to do exactly what you want here.
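A sketch of the pxssh route (host, user, and password are the placeholders from the question; newer pexpect versions ship pxssh as a submodule):

from pexpect import pxssh

s = pxssh.pxssh(encoding='utf-8')
s.login('10.51.11.10', 'build', 'password')
s.sendline('pwd && hostname')  # runs on the remote machine
s.prompt()                     # wait for the remote prompt
print(s.before)                # output produced before the prompt
s.logout()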
