Calling Python scripts in Meteor - python

What is the best way to have my Meteor app call a Python script that resides on the same machine as the Meteor server-side code? All I want to do is have Meteor pass a string to a function in Python and have Python return a string to Meteor.
I was thinking that I could have Python monitor MongoDB, extract values, and write them back to MongoDB once computed, but it seems much cleaner to have Meteor call the function in Python directly.
I am new to DDP and was not able to get very far with python-meteor (https://github.com/hharnisc/python-meteor).
Is ZeroRPC (http://zerorpc.dotcloud.com/) a good way to do it?
Thanks.

Great question.
I have looked at using DDP and ZeroRPC and even having Python write directly to Mongo.
For me, the easiest way to have Meteor and Python talk was to set up the Python script as a Flask app, add an API to that app, and have Meteor talk to Python through the API.
To get this setup working I used:
Flask API (https://flask-restful.readthedocs.org/en/0.3.1/quickstart.html#a-minimal-api)
The Meteor HTTP package (http://docs.meteor.com/#/full/http_call)
To test it you can build something basic like this (a Python script that converts text to upper case):
from flask import Flask
from flask_restful import Api, Resource  # flask.ext.* imports are gone from current Flask; import flask_restful directly

app = Flask(__name__)
api = Api(app)

class ParseText(Resource):
    def get(self, text):
        # Upper-case the path segment; Flask-RESTful serializes the return value to JSON.
        return text.upper()

api.add_resource(ParseText, '/<string:text>')

if __name__ == '__main__':
    app.run(debug=True)  # debug=True is for testing to see if calls are working.
Then in Meteor use HTTP.get to test calling the API.
If you are running everything locally then the call from Meteor would probably look something like: Meteor.http.get("http://127.0.0.1:5000/test");
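If you want to sanity-check the Flask side on its own before wiring up Meteor, a quick Python check could look roughly like this (assuming the script above is running on Flask's default development port):
import requests

# Hedged test call against the Flask API started above; the URL assumes
# Flask's default development server on 127.0.0.1:5000.
resp = requests.get("http://127.0.0.1:5000/test")
print(resp.json())  # expect "TEST", since Flask-RESTful serializes the return value to JSON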

I have implemented something similar in the past using a RESTful approach.
Meteor's observeChanges triggers an HTTP request from the server to a RESTful API endpoint (built in Flask); Flask handles the request by calling the relevant Python scripts/functions, and Meteor then handles the response in its callback.
There are of course many other approaches you can consider, like using DDP, child_process, etc. I also considered python-meteor, but decided against it after taking into account that the RESTful approach is more portable and scalable (it works on the same machine or across different machines, so you can add servers to handle more requests... you get the idea).
Everyone's use case is different, and I found the RESTful approach to be the best fit for mine. I hope this answer is useful and expands the choices you consider so you can pick the one that best fits your case. Good luck.

Related

How can we automate a Python client application which is used as an interface for users to call APIs

We have made a Python client which is used as an interface for users. Some functions are defined in the client which internally call the APIs and give output to the users.
My requirement is to automate the Python client's functions and validate the output.
Please suggest tools to use.
There are several ways to do that:
You can write multiple tests for your application as test cases which call your functions, get the results, and validate them. This is called a "feature test". To do that, you can use Python's "unittest" library and run the tests periodically; a minimal sketch follows this list.
If you have a web application you can use "selenium" to build automated test flows. (You can also run it inside a Docker container.)
The other solution is to write another Python application that calls your functions or sends requests wherever you want, gets the specific data, and validates it. (This is the same idea as the two other solutions, with a different implementation.)
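As a rough illustration of the first option, a minimal unittest sketch might look like this; my_client and get_user are placeholders for your real module and function, not an actual API:
import unittest

# Placeholder import: substitute your real client module and function here.
from my_client import get_user

class TestClientFunctions(unittest.TestCase):
    def test_get_user_returns_expected_fields(self):
        result = get_user(42)          # call the client function under test
        self.assertIn("name", result)  # validate the shape of the output
        self.assertIn("email", result)

if __name__ == "__main__":
    unittest.main()
You could run this on a schedule (cron, a CI job, etc.) to get the periodic validation mentioned above.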
The most straightforward way is to use Python for this; the simplest solution would be a library like pytest. A more comprehensive option would be something like the Robot Framework.
Given that you have jmeter in your tags, I assume that at some point you will want to run a performance test; however, it might be easier to use Locust for this, as it's a pure-Python load-testing framework (a short sketch follows below).
If you still want to use JMeter, it's possible to call Python programs using the OS Process Sampler.
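In case Locust is the route you take, a minimal locustfile sketch could look like this; the host and endpoint are assumptions standing in for whatever your client actually calls:
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    # Assumed base URL of the API the client talks to.
    host = "http://localhost:8000"
    wait_time = between(1, 3)  # simulated users pause 1-3 seconds between tasks

    @task
    def call_endpoint(self):
        # Assumed endpoint; replace with the call your client makes.
        self.client.get("/status")
Run it with `locust -f locustfile.py` and set the user count and spawn rate from the web UI.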

Trouble executing python flask-assistant code in cloud function

I'm trying to run flask-assistant code in a Cloud Function. The code works fine on my local machine, but it is not working as a Cloud Function. I'm using the HTTP trigger. The function crashes every time it is triggered.
from flask import Flask
from flask_assistant import Assistant, ask, tell

app = Flask(__name__)
assist = Assistant(app, route='/')

@assist.action('TotalSales')
def greet_and_start(request):
    app.run
    speech = "Hey! 1500?"
    return ask(speech)

if __name__ == '__main__':
    app.run(debug=True)
When you write a Google Cloud Function in Python, all you need to write is the function that handles the request. For example:
def hello_get(request):
    return 'Hello World!'
Cloud Functions handles all the work to create the Flask environment and handle the incoming request. All you need to do is provide the handler that does the processing. This is the core idea behind Cloud Functions, which provide "serverless" infrastructure: the number and existence of actual running servers are removed from your world, and you can concentrate only on what you want your logic to do. It is not surprising that your example program doesn't work, as it is trying to do too much. Here is a link to a Google Cloud Functions tutorial for Python that illustrates a simple sample.
https://cloud.google.com/functions/docs/tutorials/http
Let me recommend that you study this and related documentation on Cloud Functions found here:
https://cloud.google.com/functions/docs/
Other good references include:
YouTube: Next 17 - Building serverless applications with Google Cloud Functions
Migrating from a Monolith to Microservices (Cloud Next '19)
Run Cloud Functions Everywhere (Cloud Next '19)
Functions as a Service (Cloud Next '19)
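To make this concrete for the code in the question, a stripped-down handler could look roughly like the sketch below. This is only a sketch, not the flask-assistant API: Cloud Functions passes in a Flask request object and deploys that single function, so there is no Assistant() setup, no route, and no app.run(); returning a plain string is just a placeholder response, not a Dialogflow-formatted webhook reply.
def total_sales(request):
    # Cloud Functions hands this function the incoming Flask request object.
    speech = "Hey! 1500?"
    return speech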

Caching Google API calls for unit tests

I've got a Google App Engine project that uses the Google Cloud Language API, and I'm using the Google API Client Library (Python) to make the API calls.
When running my unit tests, I make quite a few calls to the API. This slows down my testing and also incurs costs.
I'd like to cache the calls to the Google API to speed up my tests and avoid the API charges, and I'd rather not roll my own if another solution is available.
I found this Google API page, which suggests doing this:
import httplib2
http = httplib2.Http(cache=".cache")
And I've added these lines to my code (there is another option to use GAE memcache, but that won't be persisted between test code invocations); right after these lines, I create my API call connection:
NLP = discovery.build("language", "v1", API_KEY)
The caching isn't working and the above solution seems too simple so I suspect I am missing something.
UPDATE:
I updated my tests so that App Engine is not used (just a regular unit test) and I also figured out that I can pass the http I created to the Google API client like this:
NLP = discovery.build("language", "v1", http, API_KEY)
Now, the initial discovery call is cached but the actual API calls are not cached; e.g., this call is not cached:
result = NLP.documents().annotateText(body=data).execute()
The suggested code:
http = httplib2.Http(cache=".cache") is trying to cache to the local filesystem in a directory called ".cache". On App Engine, you cannot write to the local filesystem, so this does nothing.
Instead, you could try caching to Memcache. The other suggestion on the Python Client docs referenced is to do exactly this:
from google.appengine.api import memcache
http = httplib2.Http(cache=memcache)
Since all App Engine apps get free access to shared memcache this should be better than nothing.
If this fails, you could also try memoization. I've had success memoizing calls to slow or flaky APIs, but it comes at the cost of increased memory usage (so I need bigger instances).
EDIT: I see from your comment you're having this problem locally. I was originally thinking that memoization would be an alternative, but the need to hack on httplib2 makes that overly complicated. I'm back to thinking about how to convince httplib2 to do the right thing.
If you're trying to make a test run faster by caching an API call result, stop and consider whether you may have taken a wrong turn.
If you can restructure your code such that you can replace the API call with a unittest.mock, your tests will run much, much faster.
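For example, here is a hedged sketch of mocking the call from the question with unittest.mock; the canned payload is made up, and a real Language API response will look different:
from unittest import mock

# Stand-in for the real discovery client; execute() returns a canned payload
# instead of hitting the Language API.
NLP = mock.MagicMock()
NLP.documents.return_value.annotateText.return_value.execute.return_value = {
    "documentSentiment": {"score": 0.4}
}

result = NLP.documents().annotateText(body={"document": {"content": "hi"}}).execute()
assert result["documentSentiment"]["score"] == 0.4
In a real test you would patch the place where your code builds the client (e.g. with mock.patch) rather than constructing the mock inline like this.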
I just came across vcrpy which seems to do exactly this. I'll update this answer after I've had a chance to try it out.
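For reference, the usual vcrpy pattern looks roughly like this; the cassette path is an arbitrary choice, and NLP and data stand for the objects from the question's setup code:
import vcr

# The first run records the real HTTP exchange to the cassette file;
# later runs replay it without touching the network.
@vcr.use_cassette("fixtures/annotate_text.yaml")
def test_annotate_text():
    result = NLP.documents().annotateText(body=data).execute()
    assert "sentences" in result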

Web application: Hold large object between requests

I'm working on a web application related to genome searching. This application makes use of this suffix tree library through Cython bindings. Objects of this type are large (hundreds of MB up to ~10GB) and take as long to load from disk as it takes to process them in response to a page request. I'm looking for a way to load several of these objects once on server boot and then use them for all page requests.
I have tried using a remote manager / client setup using the multiprocessing module, modeled after this demo, but it fails when the client connects with an error message that says the object is not picklable.
I would suggest writing a small Flask (or even raw WSGI… But it's probably simpler to use Flask, as it will be easier to get up and running quickly) application which loads the genome database then exposes a simple API. Something like this:
from flask import Flask

app = Flask(__name__)
database = load_database()  # expensive load happens once, at startup

@app.route('/get_genomes')
def get_genomes():
    return database.all_genomes()

app.run(debug=True)
Or, you know, something a bit more sensible.
Also, if you need to be handling more than one request at a time (I believe that app.run will only handle one at a time), start by threading… And if that's too slow, you can os.fork() after the database is loaded and run multiple request handlers from there (that way they will all share the same database in memory).
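A very rough sketch of that fork-after-load idea, where load_database() and serve() are placeholders for the question's suffix-tree loading and whatever WSGI server ends up being used:
import os

database = load_database()   # expensive load happens exactly once

workers = 4                  # arbitrary worker count
for _ in range(workers - 1):
    if os.fork() == 0:       # child process: stop forking, go serve requests
        break

serve(database)              # parent and children share the loaded pages copy-on-write
Note that for every process to accept connections on the same port you would typically create the listening socket before forking (or put a load balancer in front).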

What's the best strategy for creating a web interface for a python application?

I've created a simple gstreamer-based python audio application with a GTK+ GUI for picking and playing a webstream from a XML list. Then I connected my PC speakers output to the input of an old stereo receiver with large loudspeakers and presto, I have a pretty good sound system that is heard over most of my home.
Now I'd like to add a web user interface to remotely control the application from a room other than the one with the computer, but so far all my attempts have been fruitless.
In particular I wonder if it is possible to create a sort of socket with signals like those of GTK GUIs to run methods that change gstreamer parameters.
Or is there a more realistic/feasible strategy?
Thanks in advance for any help!
You could use Bottle, a very simple micro web-framework.
Bottle is a fast, simple and lightweight WSGI micro web-framework for Python. It is distributed as a single file module and has no dependencies other than the Python Standard Library.
Hello world:
from bottle import route, run

@route('/hello/:name')
def index(name='World'):
    return '<b>Hello %s!</b>' % name

run(host='localhost', port=8080)
The fastest and easiest way would probably be using CGI scripts. If you want a more sophisticated approach you could consider using a web framework like Django, TurboGears, or the like.
I would suggest using one of the lighter-weight pure-Python web server options and either write a stand-alone WSGI application or use a micro-framework.
Gevent would be a good option: http://www.gevent.org/servers.html
Here is a sample implementation of a WSGI application using Gevent:
https://bitbucket.org/denis/gevent/src/tip/examples/wsgiserver.py#cl-4
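A minimal hedged sketch of what that looks like; the port and the response body here are placeholders, not part of the player application:
from gevent.pywsgi import WSGIServer

def application(environ, start_response):
    # Plain WSGI callable; a real version would dispatch on environ['PATH_INFO']
    # and call into the gstreamer player.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"player is running"]

WSGIServer(("0.0.0.0", 8080), application).serve_forever()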
For a micro-framework, I'd suggest using Flask.
