Simple communication between Python programs?

I have two python programs.
One is kind of data gathering program.
The other is for analysis and prediction using Tensorflow.
(On Windows, Python 3.5, both running locally.)
The data gathering program requires a 32-bit environment because of an API it uses.
And as you know, the other program requires a 64-bit environment because of TensorFlow.
So
Q: I just need to send a dict to the TensorFlow program, and it sends one integer back in return.
What is the simplest way to send data between the two?
thanks for your time.

The simplest way would be to have one program save the data into a file, and then have the other program read the file. The recommended way to do this is to use JSON, via the json module.
import json

# Write
with open('file.txt', 'w') as file:
    file.write(json.dumps(myDict))

# Read
with open('file.txt') as file:
    myDict = json.load(file)
However, depending on your use case, it might not be the best way. Sockets are a common solution. Managers are also very robust, but are overkill in my opinion.
For more information, I recommend checking out the list that the Python team maintains, of mechanisms that you can use for communication between processes.
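If you do try sockets, a minimal sketch might look like the following, assuming the 64-bit TensorFlow program listens on localhost port 5000; the port, the predict() stub, my_dict, and the single recv() call are all placeholders (a real version would need proper message framing for large dicts):
import json
import socket

# --- 64-bit (TensorFlow) side: receive a dict, reply with an integer ---
def predict(data):
    return 1  # stand-in for the real TensorFlow prediction

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 5000))
server.listen(1)
conn, _ = server.accept()
request = json.loads(conn.recv(65536).decode('utf-8'))  # fine for small dicts
conn.sendall(str(predict(request)).encode('utf-8'))
conn.close()

# --- 32-bit (data gathering) side: send the dict, read the integer back ---
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', 5000))
client.sendall(json.dumps(my_dict).encode('utf-8'))
result = int(client.recv(64).decode('utf-8'))
client.close()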

If you want to connect both programs over the network, may I suggest you take a look at Pyro4? Basically what that does for you is enable you to do normal Python method calls, but over the network, to code running on another computer or in another Python process. You (almost) don't have to worry about low-level network details with it.
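A rough Pyro4 sketch, assuming the 64-bit TensorFlow process acts as the server; the class name, method, object id and port are placeholders:
import Pyro4

@Pyro4.expose
class Predictor(object):
    def predict(self, data):
        return 1  # stand-in for the real TensorFlow prediction

# --- server (64-bit TensorFlow process) ---
daemon = Pyro4.Daemon(host="127.0.0.1", port=9090)
uri = daemon.register(Predictor(), objectId="predictor")
print(uri)  # something like PYRO:predictor@127.0.0.1:9090
daemon.requestLoop()

# --- client (32-bit data gathering process) ---
predictor = Pyro4.Proxy("PYRO:predictor@127.0.0.1:9090")
result = predictor.predict({"some": "data"})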

Related

Inconsistent python mmap behaviour with /dev/mem

I've been working on a project in PHP which requires mmap'ing /dev/mem to gain access to the hardware peripheral registers. As there's no native mmap, the simplest way I could think of to achieve this was to construct a python subprocess, which communicated with the PHP app via stdin/stdout.
I have run into a strange issue which only occurs while reading addresses, not writing them. The subprocess functions correctly (for reading) with the following:
mem.write(sys.stdin.read(length))
So, I expected that I could conversely write memory segments back to the parent using the following:
sys.stdout.write(mem.read(length))
If I mmap a standard file, both commands work as expected (irrespective of the length of the read/write). If I map the /dev/mem "file," I get nonsense back during the read. It's worth noting that the area I'm mapping is outside the physical memory address space and is used to access the peripheral registers.
The work-around I have in place is the following:
for x in range(0, length / 4):
    sys.stdout.write(str(struct.pack('L', struct.unpack_from('L', mem, mem.tell())[0])))
    mem.seek(4, os.SEEK_CUR)
This makes the reads behave as expected.
What I can't understand is why reading from the address using unpack_from should see anything different to reading it directly. The same (non-working) thing occurs if I try to just assign a read to a variable.
In case additional context is helpful, I'm running this on a Raspberry Pi/Debian 8. The file that contains the above issue is here. The project that uses it is here.

dump1090 offline raspberry pi

I'm currently doing a project in which I'm making an ADS-B flight radar on an LED matrix, controlled by a Raspberry Pi. I've found a program called dump1090 which receives and decodes the data from my SDR receiver. I can find lots of examples of how to forward that data to a webserver or whatever, but I can't seem to find anything on how to programmatically listen to the data dump1090 produces. Does anyone know how you can programmatically receive dump1090's data in order to use it in a program? (Any language would do, but Python is perhaps the most obvious choice.)
You should be able to start dump1090 from a programming language of your choice (C/C++/Java/Python/etc.) and read its stdout pipe.
Personally, on the Raspberry Pi, I find Python nicer to use since it's easier to test and iterate without needing to compile. Python provides the subprocess module, which allows you to run dump1090 (or any other application) from within Python and have a look at the output (using subprocess.check_output('dump1090'), for example). Have a look at check_output and the Popen options to see what works best with your application.
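For example, a rough sketch of the Popen approach (the dump1090 binary name and any command-line flags depend on your installation, so treat them as placeholders):
import subprocess

proc = subprocess.Popen(['dump1090'], stdout=subprocess.PIPE)
for raw_line in proc.stdout:
    # each iteration yields one line of dump1090's decoded output
    line = raw_line.decode('utf-8', errors='replace').rstrip()
    print(line)  # parse/handle the ADS-B message here instead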

Feasibility of using pipe for ruby-python communication

Currently, I have two programs, one running in Ruby and the other in Python. I need to read a file in Ruby, but I first need a library written in Python to parse the file. At the moment, I use XMLRPC to have the two programs communicate. Porting the Python library to Ruby is out of the question. However, I have read that using XMLRPC carries some performance overhead. Recently, I read that another solution to the Ruby-Python conundrum is the use of pipes, so I tried to experiment with that. For example, I wrote this master script in Ruby:
(0..2).each do
  slave = IO.popen(['python','slave.py'], mode='r+')
  slave.write "master"
  slave.close_write
  line = slave.readline
  while line do
    sleep 1
    p eval line
    break if slave.eof
    line = slave.readline
  end
end
The following is the Python slave:
import sys

cmd = sys.stdin.read()
while cmd:
    x = cmd
    for i in range(0, 5):
        print "{'%i'=>'%s'}" % (i, x)
        sys.stdout.flush()
    cmd = sys.stdin.read()
Everything seems to work fine:
~$ ruby master.rb
{"0"=>"master"}
{"1"=>"master"}
{"2"=>"master"}
{"3"=>"master"}
{"4"=>"master"}
{"0"=>"master"}
{"1"=>"master"}
{"2"=>"master"}
{"3"=>"master"}
{"4"=>"master"}
{"0"=>"master"}
{"1"=>"master"}
{"2"=>"master"}
{"3"=>"master"}
{"4"=>"master"}
My question is, is it really feasible to use pipes for working with objects between Ruby and Python? One consideration is that there may be multiple instances of master.rb running. Will concurrency be an issue? Can pipes handle extensive operations and objects passed between them? If so, would it be a better alternative to RPC?
Yes. No. If you implement it, yes. Depends on what your application needs.
Basically, if all you need is simple data passing, pipes are fine; if you need to be constantly calling functions on objects in your remote process, then you'll probably be better off using some form of existing RPC instead of reinventing the wheel. Whether that should be XMLRPC or something else is another matter.
Note that RPC will have to use some underlying IPC mechanism, which could well be pipes, but might also be sockets, message queues, shared memory, whatever.
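As a sketch of the "simple data passing" case, the slave could emit one JSON document per line instead of eval-able literals, and the Ruby master would parse each line with its JSON library rather than eval (an illustrative variant, not a drop-in replacement for the code above):
import json
import sys

for raw in sys.stdin:
    cmd = raw.strip()
    if not cmd:
        continue
    for i in range(5):
        # one JSON object per line; the master parses each line with
        # its own JSON library instead of eval
        sys.stdout.write(json.dumps({str(i): cmd}) + "\n")
        sys.stdout.flush()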

Python: Two script working with same file , one updating it another deleting the data when processed

Firstly, I am new to Python.
Now my question goes like this: I have a callback script running on a remote machine which sends some data and runs a script on my local machine, and that script processes the data and writes it to a file. Another local script of mine needs to process the file data entry by entry and delete each entry from the file once it is done. The problem is that the file may be updated continuously. How do I synchronize the work so that it doesn't mess up my file?
Also, please suggest whether the same work can be done in some better way.
I would suggest you look into named pipes or sockets, which seem more suited to your purpose than a file, at least if it's really just between those two applications and you have control over the source code of both.
For example, on Unix, you could create a named pipe (see os.mkfifo):
import os
os.mkfifo("/some/unique/path")
And then access it like a file:
dest = open("/some/unique/path", "w") # on the sending side
src = open("/some/unique/path", "r") # on the reading side
The data will be queued between your processes. It's really a first-in-first-out queue, but it behaves (mostly) like a file.
If you cannot go for named pipes like this, I'd suggest using IP sockets over localhost from the socket module, preferably DGRAM (UDP) sockets, as you do not need to do any connection handling there. You seem to know how to do networking already.
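A minimal sketch of the DGRAM-over-localhost idea, assuming one record per datagram and an arbitrary port number (the two sides are separate processes; start the receiver first):
import socket

# Sending side: the script that used to append to the file
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto('one record'.encode('utf-8'), ('127.0.0.1', 50007))

# Receiving side: the script that processes the data
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(('127.0.0.1', 50007))
while True:
    data, addr = receiver.recvfrom(4096)
    record = data.decode('utf-8')
    # process one record per datagram here; nothing has to be deleted
    # from a shared file because each record is consumed exactly once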
I would suggest using a database whose transactions allow for concurrent processing.

Using cProfile or line_profile on a Python script in /cgi-bin/?

Is there a way to run cProfile or line_profile on a script on a server?
i.e., how could I get the results from either of those two profilers for http://www.Example.com/cgi-bin/myScript.py?
Thanks!
Not sure what line_profile is. For cProfile, you just need to direct the results to a file you can later read on the server (depending on what kind of access you have to the server).
To quote the example from the docs,
import cProfile
cProfile.run('foo()', 'fooprof')
and put all the rest of the code into a def foo(): -- then later retrieve that fooprof file and analyze it at leisure (assuming your script runs with permissions to write it in the first place, of course).
Of course you can ensure different runs get profiled into different files, etc, etc -- whether this is practical also depends on what kind of access and permissions you're getting from your hosting provider, i.e., how are you allowed to persist data, in a way that lets you retrieve that data later? That's not a question of Python, it's a question of contracts between you and your hosting provider;-).
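Putting those pieces together, one possible layout for the CGI script might look like this; the /tmp path is only a placeholder and has to be somewhere the CGI user is allowed to write:
import cProfile

def foo():
    # the CGI script's real work goes here
    pass

if __name__ == '__main__':
    cProfile.run('foo()', '/tmp/fooprof')

# Later, on the server (or after downloading the profile file):
#   import pstats
#   pstats.Stats('/tmp/fooprof').sort_stats('cumulative').print_stats(20)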
