I have two Python scripts. The first is a Flask server and the second is an NRF24L01 receiver/transmitter script (on a Raspberry Pi 3). Both scripts run at the same time. I want to pass variables (the variables are not constant) between these two scripts. How can I do that in the simplest way?
How about a Python RPC setup? I.e. run a server in each script, and each script can also be a client to invoke Remote Procedure Calls on the other.
https://docs.python.org/2/library/simplexmlrpcserver.html#simplexmlrpcserver-example
I'd like to propose a complete solution based on Sush's proposition. For the last few days I've been struggling with the problem of communicating between two processes run separately (in my case, on the same machine). There are lots of solutions (sockets, RPC, simple RPC or other servers) but all of them had some limitations. What worked for me was the SimpleXMLRPCServer module. Fast, reliable and better than direct socket operations in every aspect. A fully functioning server which can be cleanly closed from the client is as short as this:
from SimpleXMLRPCServer import SimpleXMLRPCServer

quit_please = 0

s = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)  # allow_none enables use of methods without a return value
s.register_introspection_functions()  # enables use of s.system.listMethods()
s.register_function(pow)  # example of a function natively supported by Python, exposed as a server method

# Register a function under a different name
def example_method(x):
    # whatever needs to be done goes here
    return 'Entered value is ' + x
s.register_function(example_method, 'example')

def kill():
    global quit_please
    quit_please = 1
    #return True
s.register_function(kill)

while not quit_please:
    s.handle_request()
My main help was a 15-year-old article found here.
Also, a lot of tutorials use s.serve_forever(), which is a real pain to stop cleanly without multithreading.
To communicate with the server, all you need is basically these 2 lines:
import xmlrpclib
serv = xmlrpclib.ServerProxy('http://localhost:8000')
Example:
>>> import xmlrpclib
>>> serv = xmlrpclib.ServerProxy('http://localhost:8000')
>>> serv.example('Hello world')
'Entered value is Hello world'
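To shut the server down cleanly from the client, call the kill method registered above (it flips quit_please, so the handle_request() loop exits); with introspection enabled you can also list the available methods:
>>> serv.system.listMethods()   # introspection: see what the server exposes
>>> serv.kill()                 # stops the server's handle_request() loop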
And that's it! Fully functional, fast and reliable communication. I am aware that there are always improvements to be made, but for most cases this approach will work flawlessly.
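One side note that isn't part of the original snippets: they target Python 2. On Python 3 the same classes live under different module names, so a port would start roughly like this:
from xmlrpc.server import SimpleXMLRPCServer  # Python 3 replacement for the SimpleXMLRPCServer module
from xmlrpc.client import ServerProxy         # Python 3 replacement for xmlrpclib.ServerProxy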
I'm building photovoltaic motorized solar trackers. They're controlled by Raspberry Pis running a Python script. The RPis are connected to my public OpenVPN server for remote control and continuous software development. That's working fine. Recently a passionate customer asked me for some sort of telemetry data for his tracker - let's say its current orientation, measured wind speed, etc. Being new to Python, I'm really struggling with this part.
I've decided to use the socket approach from guides like this. The Python script listens on a socket, and my OpenVPN server, which is also a web server, connects to it using PHP's fsockopen. Python sends the telemetry data, PHP makes it user friendly and displays it on the web. Everything so far works; however, I don't know how to design my Python script around it.
The problem is that my script has to run continuously, and socket.accept() halts its execution while waiting for a connection. I didn't find any obvious solution on the web. Would multi-threading work for this? It sounds a bit like overkill.
Is there a way to run socket listening asynchronously? Like, for example, the pigpio callbacks which I'm using abundantly?
Or alternatively, is there a better way to accomplish my goal?
I tried remotely accessing a status file that my script maintains, but that proved to be extremely involved to set up and prone to errors while the file was being written.
I also tried running a second script. The problem is that I then have no access to the relevant data, or I need to read the aforementioned status file, which leads to the same problems as above.
The relevant bit of code is literally only this:
# Main loop
try:
    while True:
        # Telemetry
        conn, addr = S.accept()
        conn.send(data.encode())
        conn.close()
Best regards.
For a simple case like this I would probably just wrap the socket code into a separate thread.
With multithreading in Python, the Global Interpreter Lock (GIL) means that only one thread executes at a time, so you don't really need to add any further locks around the data if you're just reading the values and don't care if they're also being updated at the same time.
Your code would essentially read something like:
from threading import Thread

def handle_telemetry_requests():
    # Main loop
    try:
        while True:
            # Telemetry
            conn, addr = S.accept()
            conn.send(data.encode())
            conn.close()
    except:
        # Error handling here (this will cause the thread to exit if any error occurs)
        pass

socket_thread = Thread(target=handle_telemetry_requests)
socket_thread.daemon = True
socket_thread.start()
Setting the daemon flag means that when the main application ends, the thread will also be terminated.
Python does provide the asyncio module, which may provide the callbacks you're looking for (though I don't have any experience with this).
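If you want to explore that route, here is a minimal sketch of what an asyncio-based listener could look like (untested against your setup; the port number and the module-level data variable are assumptions mirroring your snippet):
import asyncio

async def handle_client(reader, writer):
    # Send the current telemetry and close, mirroring the blocking accept/send/close above
    writer.write(data.encode())
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 5000)
    async with server:
        # your main control loop would also need to run as an asyncio task
        # for this to stay non-blocking
        await server.serve_forever()

asyncio.run(main())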
Other options are to run a Flask server in the Python apps, which will handle the sockets for you, so you can just code the endpoints to return the data. Or think about using an MQTT broker: the current data can be published to it, and other apps can subscribe to updates.
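As a rough illustration of the MQTT idea, assuming the paho-mqtt package (1.x API) and a broker running on localhost; the topic name is made up:
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883)
client.loop_start()  # the network loop runs in a background thread

# Somewhere in the tracker's main loop:
client.publish("trackers/1/telemetry", data)  # subscribers (e.g. the web side) pick this up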
I am currently developing a web app using the aiohttp module. I'm using:
aiohttp.web, asyncio, uvloop, aiohttp_session, aiohttp_security, aiomysql, and aioredis
I have run some benchmarks against it, and while they're pretty good, I can't help but want more. I know that Python is, by nature, single-threaded. aiohttp uses async so as to be non-blocking, but am I correct in assuming that it is not utilizing all CPU cores?
My idea: Run multiple instances of my aiohttp.web code via concurrent.futures in multiprocessing mode. Each process would serve the site on a different port. I would then put a load balancer in front of them. MySQL and Redis can be used to share state where necessary such as for sessions.
Question: Given a server with several CPU cores, will this result in the desired performance increase? If so, is there any specific pattern to pursue in order to avert problems? I can't think of anything that these aio modules are doing that would require that there only be a single thread though I could be wrong.
Note: This is not a subjective question as I've posed it. Either the module is currently bound to one thread/process or it isn't; it either can benefit from a multiprocessing module + load balancer or it can't.
You're right: asyncio uses only one CPU (one event loop uses one thread and thus one CPU only).
Whether your whole project is network or CPU bound is something I can't say.
You have to try.
You could use nginx or haproxy as load balancer.
You might even try using no load balancer at all. I never tried this feature for load balancing, just as a proof of concept for a fail-over system.
With newer kernels, multiple processes can listen on the same port (when using the SO_REUSEPORT option), and I guess it's the kernel that does the round robin.
Here is a small link to an article comparing the performance of a typical nginx configuration vs. an nginx setup with the SO_REUSEPORT feature:
https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/
It seems SO_REUSEPORT might distribute the CPU load rather evenly, but might increase the variation of response times. Not sure whether this is relevant to your setup, but I thought I'd let you know.
Added 2020-02-04:
My solution added 2019-12-09 works, but triggers a deprecation warning.
When I have more time and can test it myself, I will post the improved solution here. For the time being you can find it at AIOHTTP - Application.make_handler(...) is deprecated - Adding Multiprocessing
Added 2019-12-09:
Here is a small example of an HTTP server that can be started multiple times listening on the same socket.
The kernel would distribute the tasks. I never checked whether this is efficient or not though.
reuseport.py:
import asyncio
import os
import socket
import time

from aiohttp import web

def mk_socket(host="127.0.0.1", port=8000, reuseport=False):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if reuseport:
        SO_REUSEPORT = 15
        sock.setsockopt(socket.SOL_SOCKET, SO_REUSEPORT, 1)
    sock.bind((host, port))
    return sock

async def handle(request):
    name = request.match_info.get('name', "Anonymous")
    pid = os.getpid()
    text = "{:.2f}: Hello {}! Process {} is treating you\n".format(
        time.time(), name, pid)
    time.sleep(0.5)  # intentionally blocking sleep to simulate CPU load
    return web.Response(text=text)

if __name__ == '__main__':
    host = "127.0.0.1"
    port = 8000
    reuseport = True
    app = web.Application()
    sock = mk_socket(host, port, reuseport=reuseport)
    app.add_routes([web.get('/', handle),
                    web.get('/{name}', handle)])
    loop = asyncio.get_event_loop()
    coro = loop.create_server(
        protocol_factory=app.make_handler(),
        sock=sock,
    )
    srv = loop.run_until_complete(coro)
    loop.run_forever()
And one way to test it:
./reuseport.py & ./reuseport.py &
sleep 2 # sleep a little so servers are up
for n in 1 2 3 4 5 6 7 8 ; do wget -q http://localhost:8000/$n -O - & done
The output might look like:
1575887410.91: Hello 1! Process 12635 is treating you
1575887410.91: Hello 2! Process 12633 is treating you
1575887411.42: Hello 5! Process 12633 is treating you
1575887410.92: Hello 7! Process 12634 is treating you
1575887411.42: Hello 6! Process 12634 is treating you
1575887411.92: Hello 4! Process 12634 is treating you
1575887412.42: Hello 3! Process 12634 is treating you
1575887412.92: Hello 8! Process 12634 is treating you
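A small aside on the hardcoded SO_REUSEPORT = 15 in mk_socket above: on Linux with Python 3.4+ the constant is exposed by the socket module, so a slightly more portable variant (sketched here as a hypothetical mk_socket_portable, not part of the original) could check for it:
import socket

def mk_socket_portable(host="127.0.0.1", port=8000, reuseport=False):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if reuseport and hasattr(socket, "SO_REUSEPORT"):
        # use the stdlib constant instead of the hardcoded Linux value 15
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind((host, port))
    return sock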
I think it's better not to reinvent the wheel and to use one of the solutions proposed in the documentation:
https://docs.aiohttp.org/en/stable/deployment.html#nginx-supervisord
I have a simple Twisted web server application serving my math requests. Everything is working fine (I've hidden big code pieces which are not related to my question):
#import section ...

class PlsPage(Resource):
    isLeaf = True

    def render_POST(self, request):
        reactor.callLater(0, self._delayedRender, request)
        return NOT_DONE_YET

    def _delayedRender(self, request):
        #some actions before
        crossval_scores = cross_validation.cross_val_score(pls1, X, y=numpy.asarray(Y), scoring=my_custom_scorer, cv=KFold(700, n_folds=700))
        #some actions after
        request.finish()

reactor.listenTCP(12000, server.Site(PlsPage()))
reactor.run()
Then I tried to speed up the cross_validation calculation by setting n_jobs, for example, to 3:
crossval_scores = cross_validation.cross_val_score(pls1, X, y=numpy.asarray(Y), scoring=my_custom_scorer, cv=KFold(700, n_folds=700), n_jobs=3)
After that I got exactly 3 exceptions:
twisted.internet.error.CannotListenError: Couldn't listen on any:12000: [Errno 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted.
For some reason I can't call cross_val_score with n_jobs > 1 inside _delayedRender.
Here is the traceback of the exception; for some reason reactor.listenTCP is trying to start 3 times too.
Any ideas how to get it work?
UPD1. I created a file PLS.py and moved all the code there, except the last 2 lines:
from twisted.web import server
from twisted.internet import reactor, threads
import PLS
reactor.listenTCP(12000, server.Site(PLS.PlsPage()))
reactor.run()
But the problem still persists. I also found that this problem occurs only on Windows; my Linux machine runs these scripts fine.
scikit-learn apparently uses the multiprocessing module in order to achieve concurrency. The multiprocessing module transmits data between processes using pickle, which, among other... idiosyncratic problems that it causes, will cause some of the modules imported in your parent process to be imported in your worker processes.
Your PLS_web.py "module", however, is not actually a module; it's a script. Since you have put reactor.listenTCP and reactor.run at the bottom of it, it actually does stuff when you import it rather than just loading its code.
This particular error arises because your web server is being run 4 times (once for the controller process, once for each of the three jobs), and each of the 3 runs beyond the first encounters an error because the first server is already listening on port 12000.
You should move the reactor.run/reactor.listenTCP lines elsewhere, into a top-level script. A good rule of thumb is that these lines should never appear in the same file as a class or def statement; define your code in one place and start it up in another. Once you've moved them to a file that doesn't get imported (and you might even want to put them in a file whose name isn't a legal module identifier, like run-my-server.py), then multiprocessing might be able to import all the code it needs and do its job.
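As a sketch of that advice (building on the PLS.py split from your update; the __main__ guard is my own addition, since on Windows multiprocessing re-imports the main module in each worker process):
# run-my-server.py -- top-level startup script, separate from the module defining PlsPage
from twisted.web import server
from twisted.internet import reactor

import PLS

if __name__ == '__main__':
    # The guard keeps re-imported worker processes from trying to
    # listen on port 12000 again.
    reactor.listenTCP(12000, server.Site(PLS.PlsPage()))
    reactor.run()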
Better yet, don't write those lines at all; write a Twisted application plugin and run your program with twistd. If you don't have to put the reactor.run statement anywhere, you can't put it in the wrong place :).
I want to read some data from a port in Python in a while True loop.
Then I want to grab the data from Python in Erlang on a function call.
So technically, in this while True loop some global variables are going to be set, and on a request from Erlang those variables will be returned.
I am using erlport for this communication, but what I found was that I can make calls and casts to the Python code but not run a function in Python (in this case the main one) and let it keep running. When I tried to run it with the call function, Erlang doesn't work and is obviously waiting for a response.
How can I do this?
Any alternative approach is also welcome if you think this is not the correct way to do it.
If I understand the question correctly you want to receive some data from an external port in Python, aggregate it and then transfer it to Erlang.
If you can use threads in your Python code, you can probably do it the following way:
Run the external port receive loop in a thread.
Once the data is aggregated, push it as a message to Erlang. (Unfortunately, you can't currently use threads and call Erlang functions from Python with ErlPort.)
The following is an example Python module which works with ErlPort:
from time import sleep
from threading import Thread

from erlport.erlterms import Atom
from erlport import erlang

def start(receiver):
    Thread(target=receive_loop, args=[receiver]).start()
    return Atom("ok")

def receive_loop(receiver):
    while True:
        data = ""
        for chunk in ["Got ", "BIG ", "Data"]:
            data += chunk
            sleep(2)
        erlang.cast(receiver, [data])
The for loop represents some data aggregation procedure.
And in Erlang shell it works like this:
1> {ok, P} = python:start().
{ok,<0.34.0>}
2> python:call(P, external_port, start, [self()]).
ok
3> timer:sleep(6).
ok
4> flush().
Shell got [<<"Got BIG Data">>]
ok
Ports communicate with the Erlang VM via standard input/output. Does your Python program use stdin/stdout for other purposes? If so, that may be the cause of the problem.
Can you recommend a Python tool/module that allows scheduling tasks on remote machines in a network?
Note that the solution must be able not only to run certain jobs/commands on remote machines, but also to verify that the jobs etc. are still running (for example, consider the case where a machine dies after a task has been assigned to it).
RPyC, or Remote Python Call, is a transparent and symmetric Python library for remote procedure calls, clustering and distributed computing. Here is an example from Wikipedia:
import rpyc

conn = rpyc.classic.connect("hostname")  # assuming a classic server is running on 'hostname'
print conn.modules.sys.path
conn.modules.sys.path.append("lucy")
print conn.modules.sys.path[-1]

# a version of 'ls' that runs remotely
def remote_ls(path):
    ros = conn.modules.os
    for filename in ros.listdir(path):
        stats = ros.stat(ros.path.join(path, filename))
        print "%d\t%d\t%s" % (stats.st_size, stats.st_uid, filename)

remote_ls("/usr/bin")

# and exceptions...
try:
    f = conn.builtin.open("/non/existent/file/name")
except IOError:
    pass
To check whether the remote server has died after assigning it a job, you can use the ping method of the Connection class. The complete API is described here.
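For example, a liveness check could look roughly like this (a sketch; ping() raises an exception if the other side doesn't echo back in time):
def is_alive(conn):
    # Returns True if the remote RPyC server still responds, False otherwise
    try:
        conn.ping()
        return True
    except Exception:
        return False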
Fabric (http://docs.fabfile.org/en/1.0.1/index.html) is a pretty good toolkit for various sysadmin and deployment tasks. It comes with a few predefined tasks but also gives you the flexibility to add what you need.
I highly recommend it.
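A minimal fabfile sketch for the Fabric 1.x API linked above (the host name and the command being checked are placeholders):
# fabfile.py
from fabric.api import run, env

env.hosts = ['user@remote-host']

def check_job():
    # Runs over SSH on every host in env.hosts; fails loudly if the command fails
    run('pgrep -f my_long_running_job')
Tasks are then invoked by name, e.g. fab check_job.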
You should be able to use Python WMI; for *NIX-based systems it's a wrapper around SSH and cron.