Feasibility of using pipes for Ruby-Python communication

Currently, I have two programs, one written in Ruby and the other in Python. I need to read a file in Ruby, but I first need a library written in Python to parse the file. Currently, I use XMLRPC to have the two programs communicate; porting the Python library to Ruby is out of the question. However, I have read that XMLRPC carries some performance overhead, and recently I read that another solution for the Ruby-Python conundrum is the use of pipes. So I tried to experiment with that. For example, I wrote this master script in Ruby:
(0..2).each do
  slave = IO.popen(['python', 'slave.py'], 'r+')
  slave.write "master"
  slave.close_write
  line = slave.readline
  while line do
    sleep 1
    p eval line
    break if slave.eof
    line = slave.readline
  end
end
The following is the Python slave:
import sys

cmd = sys.stdin.read()
while cmd:
    x = cmd
    for i in range(0, 5):
        print "{'%i'=>'%s'}" % (i, x)
        sys.stdout.flush()
    cmd = sys.stdin.read()
Everything seems to work fine:
~$ ruby master.rb
{"0"=>"master"}
{"1"=>"master"}
{"2"=>"master"}
{"3"=>"master"}
{"4"=>"master"}
{"0"=>"master"}
{"1"=>"master"}
{"2"=>"master"}
{"3"=>"master"}
{"4"=>"master"}
{"0"=>"master"}
{"1"=>"master"}
{"2"=>"master"}
{"3"=>"master"}
{"4"=>"master"}
My question is: is it really feasible to use pipes for passing objects between Ruby and Python? One consideration is that there may be multiple instances of master.rb running. Will concurrency be an issue? Can pipes handle heavy workloads and large objects passed between the processes? If so, would this be a better alternative to RPC?

Yes. No. If you implement it, yes. Depends on what your application needs.
Basically, if all you need is simple data passing, pipes are fine; if you need to be constantly calling functions on objects in your remote process, then you'll probably be better off using some form of existing RPC instead of reinventing the wheel. Whether that should be XMLRPC or something else is another matter.
Note that RPC has to use some underlying IPC mechanism, which could well be pipes, but might also be sockets, message queues, shared memory, whatever.
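To make the "simple data passing" case concrete, here is a sketch of a line-delimited JSON protocol over a pipe, which also avoids eval on the master side. Python plays both roles here for brevity; the parent could just as well be Ruby using IO.popen and JSON.parse. All message field names are illustrative.

```python
import json
import subprocess
import sys

# Child process: reads one JSON request per line, replies with one JSON line.
SLAVE = r'''
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    reply = {"echo": req["cmd"], "n": len(req["cmd"])}
    sys.stdout.write(json.dumps(reply) + "\n")
    sys.stdout.flush()  # push the reply through the pipe immediately
'''

proc = subprocess.Popen([sys.executable, "-c", SLAVE],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)
proc.stdin.write(json.dumps({"cmd": "master"}) + "\n")
proc.stdin.flush()
reply = json.loads(proc.stdout.readline())
print(reply)
proc.stdin.close()
proc.wait()
```

Because each message is a single line, the reader never blocks waiting for EOF, and each master instance spawns its own child with its own pipe pair, so multiple masters do not interfere with one another.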

Related

How to reuse subprocess

There's a Windows Python application.
The application must gather various data about the network configuration to build its internal data structures.
The process currently takes 10 seconds, making the application barely usable.
Those 10 seconds are spent creating separate subprocesses, which call PowerShell to get the data.
I think process creation is heavy on Windows, so I wanted to reuse a single process to see if it makes a difference.
I'm using code similar to:
if not ps:
    ps = sp.Popen([conf.prog.powershell],
                  stdout=sp.PIPE,
                  stdin=sp.PIPE,
                  universal_newlines=True)
And later:
ps.stdin.write(' '.join(cmd + ['|', 'select %s' % ', '.join(fields), '|', 'fl', '\n']))
# ... ps.stdout.read()/readline()
It hangs, so I've searched for alternatives.
They were:
Use communicate in subprocess to avoid any deadlocks - I can't do that, because communicate waits for the process to finish and I don't want the process to finish.
Use pexpect, but it's not fully functional on Windows, and when I used it the PowerShell console took over the Python console.
Use my own threads for read/write (inspired by http://eyalarubas.com/python-subproc-nonblock.html) - subsequent commands sent to the subprocess instance didn't cause any action.
I couldn't find any Python library that avoids spawning processes to get the PowerShell data (COM?), other than reading the registry.
The libraries I found (netifaces, psutil) didn't offer the requested functionality.
So, does anybody have a working code example for the mentioned case (or can provide alternative way to get the information)?
Python 2.7, but I don't think it matters
OS: Win7/Win10
Regards,
Robert
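One pattern that avoids the blocking-read hang is to follow every command with a sentinel line the child is asked to echo back, then read line by line until the sentinel appears, never calling a bare read(). A sketch below, with a small Python child standing in for PowerShell (not assumed available here); the command strings are illustrative only.

```python
import subprocess
import sys

SENTINEL = "__CMD_DONE__"

# Stand-in for a long-lived shell: echoes each input line back, like a REPL.
CHILD = r'''
import sys
for line in sys.stdin:
    sys.stdout.write("reply: " + line)
    sys.stdout.flush()
'''

proc = subprocess.Popen([sys.executable, "-c", CHILD],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True, bufsize=1)

def run_command(cmd):
    # Send the command, then a sentinel line; collect output lines until the
    # sentinel comes back.  We only ever wait for a line, never for EOF, so
    # the read cannot deadlock the way a bare stdout.read() does.
    proc.stdin.write(cmd + "\n" + SENTINEL + "\n")
    proc.stdin.flush()
    lines = []
    while True:
        line = proc.stdout.readline().rstrip("\n")
        if line.endswith(SENTINEL):
            return lines
        lines.append(line)

first = run_command("Get-NetAdapter")   # illustrative command
second = run_command("Get-DnsClient")   # same process reused, not respawned
proc.stdin.close()
proc.wait()
```

With PowerShell the sentinel could be produced by appending something like an echo of a marker string to each command, so the reader knows where one command's output ends.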

Python simple communication between python programs?

I have two python programs.
One is a kind of data-gathering program.
The other is for analysis and prediction using Tensorflow.
(on Windows OS. python3.5. local )
The data gathering program requires 32-bit env because of API it is using.
And as you know, the other program requires 64-bit env because of TensorFlow.
So
Q: I just need to send a dict to the TensorFlow program and get a single integer back in return.
What is the simplest way to send data between them?
thanks for your time.
The simplest way would be to have one program save the data into a file, and then have the other program read the file. The recommended way to do this is to use JSON, via the json module.
import json

# Write
with open('file.txt', 'w') as file:
    json.dump(myDict, file)

# Read
with open('file.txt') as file:
    myDict = json.load(file)
However, depending on your use case, it might not be the best way. Sockets are a common solution. Managers are also very robust, but are overkill in my opinion.
For more information, I recommend checking out the list that the Python team maintains, of mechanisms that you can use for communication between processes.
If you want to connect both programs over the network, may I suggest you take a look at Pyro4? Basically what that does for you is enable you to do normal Python method calls, but over the network, to code running on another computer or in another Python process. You (almost) don't have to worry about low-level network details with it.
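As a sketch of the socket approach: the 64-bit process listens on localhost, receives one JSON dict, and replies with a single integer. Both ends run in one script here (the server side in a thread) just to keep the example self-contained; the len() "prediction" is a placeholder for the real TensorFlow model.

```python
import json
import socket
import threading

# "64-bit" side: accept one connection, read a JSON dict, reply with an int.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    data = json.loads(conn.makefile("r").readline())
    prediction = len(data)          # placeholder for the TensorFlow model
    conn.sendall((str(prediction) + "\n").encode())
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# "32-bit" side: connect, send the dict as one JSON line, read the int back.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall((json.dumps({"a": 1, "b": 2}) + "\n").encode())
result = int(cli.makefile("r").readline())
cli.close()
t.join()
srv.close()
print(result)
```

In the real setup each program would of course hold one end only, and the 32-bit process would connect to a fixed, agreed-upon port.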

terminating application through python script

How do I close (terminate) Windows applications from a Python script? When I switch on my PC, I find many applications running, like MSOSYNC.exe and ONENOTEM.exe, that are not very useful, and I want to close them. I tried the "subprocess" module and some others, but they are not working. Which method should I use?
The subprocess.Popen class you tried constructs an object with methods to deal with exactly this. Read the documentation:
import subprocess
proc = subprocess.Popen(['c:\\windows\\system32\\notepad.exe','C:\\file1.txt'])
proc.terminate()

Run python program from Erlang

I want to read some data from a port in Python in a `while True` loop.
Then I want to grab that data from Erlang with a function call.
So technically, the loop sets some global variables, and on a request from Erlang those variables are returned.
I am using ErlPort for this communication, but what I found is that I can make calls and casts to the Python code, yet I cannot start a Python function (in this case the main loop) and just let it run: when I tried to start it with the call function, Erlang blocks, since it is obviously waiting for a response.
How can I do this?
Any alternative approach is also welcome if you think this is not the correct way to do it.
If I understand the question correctly, you want to receive some data from an external port in Python, aggregate it and then transfer it to Erlang.
If you can use threads with your Python code, you can probably do it the following way:
Run external port receive loop in a thread
Once data is aggregated push it as a message to Erlang. (Unfortunately you can't currently use threads and call Erlang functions from Python with ErlPort)
The following is an example Python module which works with ErlPort:
from time import sleep
from threading import Thread
from erlport.erlterms import Atom
from erlport import erlang

def start(receiver):
    Thread(target=receive_loop, args=[receiver]).start()
    return Atom("ok")

def receive_loop(receiver):
    while True:
        data = ""
        for chunk in ["Got ", "BIG ", "Data"]:
            data += chunk
            sleep(2)
        erlang.cast(receiver, [data])
The for loop represents some data aggregation procedure.
And in Erlang shell it works like this:
1> {ok, P} = python:start().
{ok,<0.34.0>}
2> python:call(P, external_port, start, [self()]).
ok
3> timer:sleep(6000).
ok
4> flush().
Shell got [<<"Got BIG Data">>]
ok
Ports communicate with the Erlang VM through standard input/output. Does your Python program use stdin/stdout for other purposes? If yes, that may be the reason for the problem.

How to write python code to access input and output from a program written in C?

There is a program written and compiled in C, with typical data input from a Unix shell; on the other hand, I'm using Windows.
I need to send input to this program from the output of my own code written in Python.
What is the best way to go about doing this? I've read about pexpect, but not sure really how to implement it; can anyone explain the best way to go about this?
I recommend you use the Python subprocess module.
It is the replacement for the os.popen() function call, and it allows you to execute a program while interacting with its standard input/output/error streams through pipes.
example use:
import subprocess

process = subprocess.Popen("test.exe", stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE)
process.stdin.write(b"hello world !")
process.stdin.close()   # close stdin first, or the read() below may deadlock
print(process.stdout.read().decode('latin1'))
process.stdout.close()
status = process.wait()
If you don't need to deal with responding to interactive questions and prompts, don't bother with pexpect, just use subprocess.communicate, as suggested by Adrien Plisson.
However, if you do need pexpect, the best way to get started is to look through the examples on its home page, then start reading the documentation once you've got a handle on what exactly you need to do.
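For the non-interactive case, communicate() really is the whole recipe: hand it all the input at once and it returns all the output, with no deadlock risk. A sketch, using a small Python child in place of the compiled C program:

```python
import subprocess
import sys

# Stand-in child process: uppercases whatever it receives on stdin.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write(sys.stdin.read().upper())"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# communicate() writes the input, closes stdin, reads stdout to EOF
# and waits for the child -- all in the right order.
out, _ = child.communicate(b"hello world !")
print(out.decode())
```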