I want to read some data from a port in Python inside a while True loop.
Then I want to grab that data from Python in Erlang via a function call.
So, technically, inside this while True loop some global variables will be set, and on a request from Erlang those variables will be returned.
I am using ErlPort for this communication, but what I found is that I can make calls and casts to the Python code, yet I cannot start a function in Python (in this case the main loop) and just let it run. When I tried to run it with the call function, Erlang hung, since it obviously waits for a response.
How can I do this?
Any other alternative approach is also welcome if you think this is not the correct way to do it.
If I understand the question correctly you want to receive some data from an external port in Python, aggregate it and then transfer it to Erlang.
If you can use threads in your Python code, you can probably do it the following way:
Run the external port's receive loop in a thread
Once the data is aggregated, push it to Erlang as a message. (Unfortunately, you currently can't use threads and call Erlang functions from Python with ErlPort.)
The following is an example Python module which works with ErlPort:
from time import sleep
from threading import Thread

from erlport.erlterms import Atom
from erlport import erlang

def start(receiver):
    # Called from Erlang: spawn the receive loop and return immediately
    Thread(target=receive_loop, args=[receiver]).start()
    return Atom("ok")

def receive_loop(receiver):
    while True:
        data = ""
        for chunk in ["Got ", "BIG ", "Data"]:
            data += chunk
            sleep(2)
        # Push the aggregated data to the Erlang process as a message
        erlang.cast(receiver, [data])
The for loop represents some data aggregation procedure.
And in Erlang shell it works like this:
1> {ok, P} = python:start().
{ok,<0.34.0>}
2> python:call(P, external_port, start, [self()]).
ok
3> timer:sleep(7000).
ok
4> flush().
Shell got [<<"Got BIG Data">>]
ok
Ports communicate with the Erlang VM via standard input/output. Does your Python program use stdin/stdout for other purposes? If so, that may be the cause of the problem.
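If debug prints are the culprit, a common workaround (my assumption, not something stated in the question) is to keep stdout reserved for the port protocol and send diagnostics to stderr:

import sys

# stdout belongs to the Erlang port protocol; write diagnostics to stderr instead
sys.stderr.write("debug: aggregated another chunk\n")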
I'm building photovoltaic motorized solar trackers. They're controlled by Raspberry Pis running a Python script. The RPis are connected to my public OpenVPN server for remote control and continuous software development. That part is working fine. Recently a passionate customer asked me for some sort of telemetry data for his tracker: say, its current orientation, measured wind speed, etc. Being new to Python, I'm really struggling with this part.
I've decided to use the socket approach from guides like this. The Python script listens on a socket, and my OpenVPN server, which is also a web server, connects to it using PHP's fsockopen. Python sends the telemetry data, and PHP makes it user friendly and displays it on the web. Everything so far works; however, I don't know how to design my Python script around it.
The problem is that my script has to run continuously, and socket.accept() halts its execution while waiting for a connection. I didn't find any obvious solution on the web. Would multi-threading work for this? It sounds a bit like overkill.
Is there a way to run socket listening asynchronously? Like, for example, pigpio callback's which I'm using abundantly?
Or alternatively, is there a better way to accomplish my goal?
I tried remotely accessing a status file that my script maintains, but that proved to be extremely involved to set up and prone to errors while the file was being written.
I also tried running a second script. The problem is, then I have no access to the relevant data, or I need to read the aforementioned status file, which leads to the same problems as above.
Relevant bit of code is literally only this:
# Main loop
try:
    while True:
        # Telemetry
        conn, addr = S.accept()
        conn.send(data.encode())
        conn.close()
Best regards.
For a simple case like this I would probably just wrap the socket code into a separate thread.
With multithreading in Python, the Global Interpreter Lock (GIL) means that only one thread executes Python bytecode at a time, so you don't really need to add any further locks around the data if you're just reading the values and don't care that they may also be updated at the same time.
Your code would essentially read something like:
from threading import Thread

def handle_telemetry_requests():
    # Main loop
    try:
        while True:
            # Telemetry
            conn, addr = S.accept()
            conn.send(data.encode())
            conn.close()
    except:
        # Error handling here (this will cause the thread to exit if any error occurs)
        pass
socket_thread = Thread(target=handle_telemetry_requests)
socket_thread.daemon = True
socket_thread.start()
Setting the daemon flag means that when the main application ends, the thread will also be terminated.
Python does provide the asyncio module, which may give you the callbacks you're looking for (though I don't have any experience with it).
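For completeness, here is a minimal sketch of what the asyncio route could look like; the port, the telemetry dict and the control-loop body are illustrative assumptions, not code from the question (requires Python 3.7+):

import asyncio

telemetry = {"orientation": 0.0, "wind_speed": 0.0}  # hypothetical shared state

async def handle_client(reader, writer):
    # Send a snapshot of the current telemetry and close the connection
    writer.write(str(telemetry).encode())
    await writer.drain()
    writer.close()

async def control_loop():
    while True:
        # ... drive the tracker and update telemetry here ...
        await asyncio.sleep(1)

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 5000)
    # Serve requests and run the tracker loop concurrently in one thread
    await asyncio.gather(server.serve_forever(), control_loop())

asyncio.run(main())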
Other options are to run a Flask server in the Python app, which will handle the sockets for you, so you can just code endpoints that return the data; or think about using an MQTT broker, to which the current data can be written and from which other apps can receive updates by subscribing.
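As a rough illustration of the Flask option (the endpoint name, port and shared dict are my assumptions, not part of the answer above):

from threading import Thread
from flask import Flask, jsonify

app = Flask(__name__)
telemetry = {"orientation": 0.0, "wind_speed": 0.0}  # hypothetical shared state

@app.route("/telemetry")
def get_telemetry():
    # Reads of the dict are safe enough under the GIL, as discussed above
    return jsonify(telemetry)

# Daemon thread so the tracker's main loop keeps control of the main thread
Thread(target=lambda: app.run(host="0.0.0.0", port=5000), daemon=True).start()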
I have a task where I need to run a Python file (call it app.py) that brings up a server (using Flask). This is done in the run_tests function. Then I want to query this server with some test inputs that I have. This is done in the function get_sentences_and_test (I do not include its code here, for simplicity of the question; it waits for the server to be up, using sleep instructions, and then queries it). I use the Python multiprocessing package for the process, together with subprocess.
My program has a very simple structure like:
import subprocess
from multiprocessing import Process

def run_tests():
    subprocess.call(['python3', 'path/to/app.py'])  # placeholder path to app.py

if __name__ == '__main__':
    api_proc = Process(target=run_tests)
    api_proc.start()
    get_sentences_and_test(api_proc)
    api_proc.terminate()
My problem is that this code works OK and does what it's supposed to do. However, the server process that the subprocess call in run_tests starts is not killed once the program is done, and the port it listens on stays occupied; I have to kill it manually.
I want to know:
How can I kill the process that occupies this port?
What is the best practice for doing this? This should be a day-to-day problem for people working with services and multiprocessing/threading, yet I didn't find a simple solution or many sources on this issue.
I have a script running on my Raspberry Pi; this script is started by a command from a PHP page. I have multiple if statements, and now I would like to pass new arguments to the script without stopping it. I found lots of information about passing arguments to a Python script, but not about whether it's possible to pass new arguments while the script is already running. Thanks in advance!
The best option, for me, is to use a configuration file as input for your script. Some simple YAML will do. Then, in a separate thread, you observe the hash of the file; if it changes, that means somebody has updated your file and you must re-read and adjust your inputs. Basically you have that observer running constantly, as sketched below.
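A minimal sketch of that observer, assuming a YAML file named config.yaml and a one-second polling interval (both placeholders):

import hashlib
import time
from threading import Thread

CONFIG_PATH = "config.yaml"  # hypothetical file written by the PHP page

def watch_config(on_change, interval=1.0):
    last_hash = None
    while True:
        with open(CONFIG_PATH, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if last_hash is not None and digest != last_hash:
            on_change()  # the file changed: re-read and adjust your inputs here
        last_hash = digest
        time.sleep(interval)

# Replace the lambda with your real reload handler
Thread(target=watch_config, args=[lambda: None], daemon=True).start()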
You really need some sort of IPC mechanism. As you are executing/updating the script from a PHP application, I'd suggest you look into something like ZeroMQ, which supports both Python and PHP and will allow you to do a quick and dirty Pub/Sub implementation.
The basic idea: treat your Python script as a subscriber to messages coming from the PHP application, which publishes them as and when needed. To achieve this, you'll want to start your Python "script" once and leave it running in the background, listening for messages on ZeroMQ. Something like this should get you going:
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")

while True:
    # Wait for the next message from your PHP application
    message = socket.recv()
    print("Received a message: %s" % message)
    # Here you should do the work you need to do in your script
    # Once you are done, tell the PHP application you are done
    socket.send(b"Done and dusted")
Then, in your PHP application, you can use something like the following to send a message to your Python service
$context = new ZMQContext();
// Socket to talk to server
$requester = new ZMQSocket($context, ZMQ::SOCKET_REQ);
$requester->connect("tcp://localhost:5555");
$requester->send("ALL THE PARAMS TO SEND TO YOUR PYTHON SCRIPT");
$reply = $requester->recv();
Note: I found the above examples with a quick Google search (and amended them slightly for educational purposes), but they aren't tested and are purely meant to get you started. For more information, visit ZeroMQ and php-zmq.
Have fun.
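Note that the snippet above is actually request/reply (REQ/REP) rather than true Pub/Sub. If you want the fire-and-forget Pub/Sub pattern mentioned earlier, the Python side would look roughly like this (the port and connection details are assumptions, not tested code):

import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5556")
socket.setsockopt(zmq.SUBSCRIBE, b"")  # subscribe to all messages

while True:
    # Block until the PHP side publishes new parameters
    message = socket.recv()
    print("New parameters: %s" % message)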
I have two Python scripts. The first is a Flask server, and the second is an NRF24L01 receiver/transmitter script (on a Raspberry Pi 3). Both scripts run at the same time. I want to pass variables (which are not constant) between these two scripts. How can I do that in the simplest way?
How about a Python RPC setup? I.e., run a server in each script, and each script can also be a client that invokes Remote Procedure Calls on the other.
https://docs.python.org/2/library/simplexmlrpcserver.html#simplexmlrpcserver-example
I'd like to propose a complete solution based on Sush's proposition. For the last few days I've been struggling with the problem of communicating between two processes run separately (in my case, on the same machine). There are lots of solutions (sockets, RPC, simple RPC or other servers), but all of them have some limitations. What worked for me was the SimpleXMLRPCServer module: fast, reliable, and better than direct socket operations in every aspect. A fully functioning server, which can be cleanly closed from the client, is just as short as this:
from SimpleXMLRPCServer import SimpleXMLRPCServer

quit_please = 0

s = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)  # allow_none enables use of methods without a return value
s.register_introspection_functions()  # enables use of s.system.listMethods()
s.register_function(pow)  # example of a function natively supported by Python, exposed as a server method

# Register a function under a different name
def example_method(x):
    # whatever needs to be done goes here
    return 'Entered value is ' + x

s.register_function(example_method, 'example')

def kill():
    global quit_please
    quit_please = 1

s.register_function(kill)

while not quit_please:
    s.handle_request()
My main help was a 15-year-old article found here.
Also, a lot of tutorials use s.serve_forever(), which is a real pain to stop cleanly without multithreading (see the sketch below).
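For reference, here is one way serve_forever() can be stopped cleanly, at the cost of a helper thread; a sketch under the same Python 2 assumptions as above:

from SimpleXMLRPCServer import SimpleXMLRPCServer
from threading import Thread

s = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
t = Thread(target=s.serve_forever)
t.start()

# ... later, from another thread:
s.shutdown()  # unblocks serve_forever()
t.join()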
To communicate with the server, all you need is basically these two lines:
import xmlrpclib
serv = xmlrpclib.ServerProxy('http://localhost:8000')
Example:
>>> import xmlrpclib
>>> serv = xmlrpclib.ServerProxy('http://localhost:8000')
>>> serv.example('Hello world')
'Entered value is Hello world'
And that's it! Fully functional, fast and reliable communication. I am aware that there are always improvements to be made, but for most cases this approach will work flawlessly.
Currently I have two programs, one written in Ruby and the other in Python. I need to read a file in Ruby, but first I need a library written in Python to parse the file. Currently I use XMLRPC to have the two programs communicate. Porting the Python library to Ruby is out of the question. However, I've found and read that using XMLRPC has some performance overhead. Recently I read that another solution to the Ruby-Python conundrum is the use of pipes, so I tried to experiment with that. For example, I wrote this master script in Ruby:
(0..2).each do
  slave = IO.popen(['python', 'slave.py'], 'r+')
  slave.write "master"
  slave.close_write
  line = slave.readline
  while line do
    sleep 1
    p eval line
    break if slave.eof
    line = slave.readline
  end
end
The following is the Python slave:
import sys

cmd = sys.stdin.read()
while cmd:
    x = cmd
    for i in range(0, 5):
        print "{'%i'=>'%s'}" % (i, x)
        sys.stdout.flush()
    cmd = sys.stdin.read()
Everything seems to work fine:
~$ ruby master.rb
{"0"=>"master"}
{"1"=>"master"}
{"2"=>"master"}
{"3"=>"master"}
{"4"=>"master"}
{"0"=>"master"}
{"1"=>"master"}
{"2"=>"master"}
{"3"=>"master"}
{"4"=>"master"}
{"0"=>"master"}
{"1"=>"master"}
{"2"=>"master"}
{"3"=>"master"}
{"4"=>"master"}
My question is: is it really feasible to use pipes for working with objects between Ruby and Python? One consideration is that there may be multiple instances of master.rb running. Will concurrency be an issue? Can pipes handle extensive operations and objects passed between the processes? If so, would pipes be a better alternative to RPC?
Yes. No. If you implement it, yes. Depends on what your application needs.
Basically, if all you need is simple data passing, pipes are fine; if you need to be constantly calling functions on objects in your remote process, then you'll probably be better off using some form of existing RPC instead of reinventing the wheel. Whether that should be XMLRPC or something else is another matter; for the simple case, see the sketch below.
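For example, the slave above could emit one JSON document per line (a sketch, not the poster's code), so the Ruby master can require 'json' and use JSON.parse(line) instead of eval:

import json
import sys

cmd = sys.stdin.read()
while cmd:
    for i in range(0, 5):
        print json.dumps({str(i): cmd})  # one JSON object per line
        sys.stdout.flush()
    cmd = sys.stdin.read()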
Note that RPC has to use some underlying IPC mechanism, which could well be pipes, but might also be sockets, message queues, shared memory, whatever.