How to keep the Python interpreter in memory across executions?

I need to repeatedly call short Python programs.
Since the programs are trivial but use several (standard) modules, and the target hardware (an embedded ARM9 running Linux) is not very powerful, the loading time of the interpreter plus libraries greatly exceeds the programs' runtime.
Is there a way to keep a python interpreter in memory and "just" feed it a program to execute?
I know I can write a fairly simple "C" wrapper spawning the interpreter and then feed it my programs via PyRun_SimpleFile(), but that looks like overkill. Surely there's some simpler (and probably more "pythonic") way of achieving the same.
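To make it concrete, the kind of thing I have in mind could be sketched in pure Python roughly like this (the socket path and the one-script-path-per-line protocol are made up purely for illustration):
# Rough sketch: a resident interpreter that is fed script paths over a Unix
# socket and runs each one in-process, so modules stay cached in sys.modules.
import os
import runpy
import socket
SOCK_PATH = "/tmp/pyrunner.sock"
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
server.listen(1)
while True:
    conn, _ = server.accept()
    with conn, conn.makefile() as lines:
        for path in lines:
            path = path.strip()
            if path:
                # modules imported by earlier scripts remain loaded
                runpy.run_path(path, run_name="__main__")
Each short program could then be triggered with something like echo /path/to/prog.py | socat - UNIX-CONNECT:/tmp/pyrunner.sock.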

There are probably many ways of solving this problem.
A simple one would be to combine all your short programs into a single web application, potentially one that listens on a local Unix socket rather than a network socket. E.g., using the minimal Flask application from the Flask quickstart:
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
    return 'Hello, World!\n'
You could expose it on a local Unix socket like this, assuming you've put the above code into a script called myapp.py:
uwsgi --http-socket /tmp/app.sock --manage-script-name --plugin python --mount /=myapp:app
And now you can access it like this (note the single / in http:/; that's because we don't need a hostname when connecting to a local socket):
$ curl --unix-socket /tmp/app.sock http:/
Hello, world!
This would let you start your Python process once and let it run persistently, thus avoiding the cost of startup and module loading on subsequent calls, while giving you a way to run different functions, pass input parameters, etc.
Here's an example that takes a filename as input and performs some transformations on the file:
from flask import request
@app.route('/cmd1', methods=['POST'])
def cmd1():
    inputfile = request.form.get('inputfile')
    with open(inputfile) as fd:
        output = fd.read().replace('Hello', 'Goodbye')
    return output
Assuming that we have:
$ cat data
Hello world
We can call:
$ curl --unix-socket /tmp/app.sock http:/cmd1 -d inputfile=$PWD/data
Goodbye world
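If you want to call the service from Python rather than curl, you can also talk HTTP over the Unix socket with nothing but the standard library; a rough sketch (the /tmp/data input path is just a placeholder):
# Sketch: POST to the Unix-socket service from Python instead of curl.
import http.client
import socket
class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that connects to a Unix socket instead of a host/port."""
    def __init__(self, socket_path):
        super().__init__('localhost')
        self.socket_path = socket_path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)
conn = UnixHTTPConnection('/tmp/app.sock')
conn.request('POST', '/cmd1', body='inputfile=/tmp/data',
             headers={'Content-Type': 'application/x-www-form-urlencoded'})
print(conn.getresponse().read().decode())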

Related

Passing arguments to a running python script

I have a script running on my Raspberry Pi; this script is started by a command from a PHP page. I have multiple if statements, and now I would like to pass new arguments to the script without stopping it. I found lots of information about passing arguments to a Python script, but not about whether it is possible to pass new arguments while the script is already running. Thanks in advance!
The best option, in my view, is to use a configuration file as the input to your script.
Some simple YAML will do. Then, in a separate thread, observe the hash of the file; if it changes, that
means somebody has updated your file and you must re-read/adjust your inputs.
Basically you have that observer running all the time.
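A rough sketch of that observer thread, assuming the PyYAML package and a config file named config.yaml (both of which are my own assumptions here):
# Sketch: watch a YAML config file and reload it whenever its hash changes.
import hashlib
import threading
import time
import yaml  # PyYAML, assumed to be installed
CONFIG_PATH = "config.yaml"
current_config = {}
def watch_config(interval=1.0):
    """Poll the file's hash and reload the config whenever it changes."""
    global current_config
    last_hash = None
    while True:
        try:
            with open(CONFIG_PATH, "rb") as fd:
                new_hash = hashlib.md5(fd.read()).hexdigest()
        except OSError:
            new_hash = None
        if new_hash is not None and new_hash != last_hash:
            last_hash = new_hash
            with open(CONFIG_PATH) as fd:
                current_config = yaml.safe_load(fd) or {}
            # re/adjust your inputs here
        time.sleep(interval)
threading.Thread(target=watch_config, daemon=True).start()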
You need some sort of IPC mechanism, really. As you are executing/updating the script from a PHP application, I'd suggest you look into something like ZeroMQ, which supports both Python and PHP and will allow you to do a quick and dirty Pub/Sub implementation.
The basic idea is: treat your Python script as a subscriber to messages coming from the PHP application, which publishes them as and when needed. To achieve this, you'll want to start your Python "script" once and leave it running in the background, listening for messages on ZeroMQ. Something like this should get you going:
import zmq
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")
while True:
    # Wait for the next message from your PHP application
    message = socket.recv()
    print("Received a message: %s" % message)
    # Here you should do the work you need to do in your script
    # Once you are done, tell the PHP application you are done
    socket.send(b"Done and dusted")
Then, in your PHP application, you can use something like the following to send a message to your Python service
$context = new ZMQContext();
// Socket to talk to server
$requester = new ZMQSocket($context, ZMQ::SOCKET_REQ);
$requester->connect("tcp://localhost:5555");
$requester->send("ALL THE PARAMS TO SEND TO YOUR PYTHON SCRIPT");
$reply = $requester->recv();
Note: I found the above examples with a quick Google search (and amended them slightly for educational purposes), but they aren't tested and are purely meant to get you started. For more information, see ZeroMQ and php-zmq.
Have fun.

Communication between Python Scripts

I have 2 Python scripts. The 1st is a Flask server and the 2nd is an NRF24L01 receiver/transmitter script (on a Raspberry Pi 3). Both scripts are running at the same time. I want to pass variables (the variables are not constant) between these 2 scripts. How can I do that in the simplest way?
How about a Python RPC setup? I.e., run a server in each script, and each script can also be a client to invoke remote procedure calls on the other.
https://docs.python.org/2/library/simplexmlrpcserver.html#simplexmlrpcserver-example
I'd like to propose a complete solution based on Sush's proposition. For the last few days I've been struggling with the problem of communicating between two processes run separately (in my case, on the same machine). There are lots of solutions (sockets, RPC, simple RPC or other servers) but all of them have some limitations. What worked for me was the SimpleXMLRPCServer module. Fast, reliable and better than direct socket operations in every aspect. A fully functioning server which can be cleanly closed from the client is just this short:
from SimpleXMLRPCServer import SimpleXMLRPCServer
quit_please = 0
s = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)  # allow_none enables use of methods without a return value
s.register_introspection_functions()  # enables use of s.system.listMethods()
s.register_function(pow)  # example of a function natively supported by Python, exposed as a server method
# Register a function under a different name
def example_method(x):
    # whatever needs to be done goes here
    return 'Entered value is ' + x
s.register_function(example_method, 'example')
def kill():
    global quit_please
    quit_please = 1
    # return True
s.register_function(kill)
while not quit_please:
    s.handle_request()
My main help was a 15-year-old article found here.
Also, a lot of tutorials use s.serve_forever(), which is a real pain to stop cleanly without multithreading.
To communicate with the server all you need to do is basically 2 lines:
import xmlrpclib
serv = xmlrpclib.ServerProxy('http://localhost:8000')
Example:
>>> import xmlrpclib
>>> serv = xmlrpclib.ServerProxy('http://localhost:8000')
>>> serv.example('Hello world')
'Entered value is Hello world'
And that's it! Fully functional, fast and reliable communication. I am aware that there are always some improvements to be done but for most cases this approach will work flawlessly.
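One caveat: the module names above are Python 2. On Python 3 the same approach works with the renamed standard-library modules, roughly like this:
# Python 3 name: SimpleXMLRPCServer lives in xmlrpc.server
from xmlrpc.server import SimpleXMLRPCServer
quit_please = False
s = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
def example_method(x):
    return 'Entered value is ' + x
s.register_function(example_method, 'example')
def kill():
    global quit_please
    quit_please = True
s.register_function(kill)
while not quit_please:
    s.handle_request()
On the client side, xmlrpclib becomes xmlrpc.client, and ServerProxy is used exactly as shown above.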

using twistd to run a twisted application but script run twice

sample code here
# main.py
from twisted.application import service, internet
application = service.Application("x")
service.IProcess(application).processName = "x"
print "some log...."
If I run this main.py with:
twistd -y main.py
I get two "some log...." lines.
Is this code run twice?
The "process name" feature you're using works by re-executing the process with a new argv[0]. There is no completely reliable way to save an arbitrary object (like the Application) across this process re-execution. This means that the .py file has to be re-evaluated in the new process to recreate the Application object so twistd knows what you want it to do.
You might want to consider using setproctitle rather than twistd's built-in process title feature. (For that matter, maybe twistd should just use it if it's available...)
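A minimal sketch of that alternative, assuming the third-party setproctitle package is installed:
# Set the process title ourselves instead of using twistd's processName feature,
# so the .py file is not re-evaluated by a re-exec.
from setproctitle import setproctitle
from twisted.application import service
setproctitle("x")
application = service.Application("x")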

Run python program from Erlang

I want to read some data from a port in Python in a while-True loop.
Then I want to grab the data from Python in Erlang on a function call.
So technically, in this while-True loop some global variables are going to be set, and on a request from Erlang those variables will be returned.
I am using ErlPort for this communication, but what I found was that I can make calls and casts to the Python code, yet not run a function in Python (in this case the main) and let it keep running. When I tried to run it with the call function, Erlang doesn't work and is obviously waiting for a response.
How can I do this?
Any alternative approaches are also welcome if you think this is not the correct way to do it.
If I understand the question correctly you want to receive some data from an external port in Python, aggregate it and then transfer it to Erlang.
If you can use threads in your Python code, you can probably do it the following way:
Run external port receive loop in a thread
Once data is aggregated push it as a message to Erlang. (Unfortunately you can't currently use threads and call Erlang functions from Python with ErlPort)
The following is an example Python module which works with ErlPort:
from time import sleep
from threading import Thread
from erlport.erlterms import Atom
from erlport import erlang
def start(receiver):
    Thread(target=receive_loop, args=[receiver]).start()
    return Atom("ok")
def receive_loop(receiver):
    while True:
        data = ""
        for chunk in ["Got ", "BIG ", "Data"]:
            data += chunk
            sleep(2)
        erlang.cast(receiver, [data])
The for loop represents some data aggregation procedure.
And in Erlang shell it works like this:
1> {ok, P} = python:start().
{ok,<0.34.0>}
2> python:call(P, external_port, start, [self()]).
ok
3> timer:sleep(6).
ok
4> flush().
Shell got [<<"Got BIG Data">>]
ok
Ports communicate with the Erlang VM via standard input/output. Does your Python program use stdin/stdout for other purposes? If so, that may be the reason for the problem.

Execute arbitrary python code remotely - can it be done?

I'm working on a grid system which has a number of very powerful computers. These can be used to execute python functions very quickly. My users have a number of python functions which take a long time to calculate on workstations, ideally they would like to be able to call some functions on a remote powerful server, but have it appear to be running locally.
Python has an old function called "apply" - it's mostly useless these days now that python supports the extended-call syntax (e.g. **arguments), however I need to implement something that works a bit like this:
rapply = Rapply( server_hostname ) # Set up a connection
result = rapply( fn, args, kwargs ) # Remotely call the function
assert result == fn( *args, **kwargs ) #Just as a test, verify that it has the expected value.
Rapply should be a class which can be used to remotely execute some arbitrary code (fn could be literally anything) on a remote server. It will send back the result which the rapply function will return. The "result" should have the same value as if I had called the function locally.
Now let's suppose that fn is a user-provided function; I need some way of sending it over the wire to the execution server. If I could guarantee that fn was always something simple, it could just be a string containing Python source code... but what if it were not so simple?
What if fn has local dependencies? It could be a simple function which uses a class defined in a different module; is there a way of encapsulating fn and everything that fn requires which is not standard library? An ideal solution would not require the users of this system to have much knowledge about Python development. They simply want to write their function and call it.
Just to clarify, I'm not interested in discussing what kind of network protocol might be used to implement the communication between the client & server. My problem is how to encapsulate a function and its dependencies as a single object which can be serialized and remotely executed.
I'm also not interested in the security implications of running arbitrary code on remote servers - let's just say that this system is intended purely for research and it is within a heavily firewalled environment.
Take a look at PyRO (Python Remote Objects). It has the ability to set up services on all the computers in your cluster and invoke them directly, or indirectly through a name server and a publish-subscribe mechanism.
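For a rough idea of the shape of this (a sketch using the Pyro4 package; the class name, object id and port below are my own placeholders), the server side might look like:
# Expose a worker object on one of the powerful machines with Pyro4.
import Pyro4
@Pyro4.expose
class Worker(object):
    def compute(self, x):
        # the expensive user calculation would go here
        return x * x
daemon = Pyro4.Daemon(host="0.0.0.0", port=9090)
uri = daemon.register(Worker(), objectId="worker")
print("Serving at", uri)
daemon.requestLoop()
A client then does worker = Pyro4.Proxy("PYRO:worker@server-hostname:9090") and calls worker.compute(21) as if it were local. Note that this exposes pre-registered services rather than shipping arbitrary functions, so it only covers part of the question.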
It sounds like you want to do the following.
Define a shared filesystem space.
Put ALL your python source in this shared filesystem space.
Define simple agents or servers that will "execfile" a block of code.
Your client then contacts the agent (REST protocol with POST methods works well for this) with the block of code.
The agent saves the block of code and does an execfile on that block of code.
Since all agents share a common filesystem, they all have the same Python library structure.
We do this with a simple WSGI application we call a "batch server". We have a RESTful protocol for creating and checking on remote requests.
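A minimal sketch of such an agent, using only the standard library's wsgiref and Python 3's exec() in place of execfile(); the /run endpoint and the convention that the submitted code leaves its output in a variable called result are my own assumptions:
# "Batch server" sketch: accept POSTed Python source and execute it in-process.
# This assumes the trusted, firewalled environment described in the question.
from wsgiref.simple_server import make_server
def application(environ, start_response):
    if environ.get('REQUEST_METHOD') == 'POST' and environ.get('PATH_INFO') == '/run':
        length = int(environ.get('CONTENT_LENGTH') or 0)
        code = environ['wsgi.input'].read(length).decode('utf-8')
        namespace = {}
        exec(code, namespace)  # run the submitted block of code
        body = str(namespace.get('result', '')).encode('utf-8')
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [body]
    start_response('404 Not Found', [('Content-Type', 'text/plain')])
    return [b'not found']
if __name__ == '__main__':
    make_server('', 8080, application).serve_forever()
A client could then submit a job with something like curl -X POST --data-binary @job.py http://agent:8080/run.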
Stackless had the ability to pickle and unpickle running code. Unfortunately the current implementation doesn't support this feature.
You could use a ready-made clustering solution like Parallel Python. You can relatively easily set up multiple remote slaves and run arbitrary code on them.
You could use an SSH connection to the remote PC and run the commands on the other machine directly. You could even copy the Python code to the machine and execute it.
Syntax:
cat ./test.py | sshpass -p 'password' ssh user@remote-ip "python - script-arguments-if-any for test.py script"
1) here "test.py" is the local python script.
2) sshpass used to pass the ssh password to ssh connection
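If you'd rather drive this from Python than from a shell pipeline, a rough sketch using the standard subprocess module (the user, host and script arguments below are placeholders, and key-based SSH auth is assumed instead of sshpass):
# Stream a local script into a remote Python interpreter over SSH.
import subprocess
with open("test.py", "rb") as script:
    result = subprocess.run(
        ["ssh", "user@remote-ip", "python - script-arguments-if-any"],
        stdin=script,
        capture_output=True,
    )
print(result.stdout.decode())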
