The final outcome of my work should be a Python function that takes a JSON object as its only input and returns another JSON object as output. To be more specific, I am a data scientist, and the function I am speaking about is derived from data and delivers predictions (in other words, it is a machine learning model).
So, my question is how to deliver this function to the "tech team" that is going to incorporate it into a web-service.
At the moment I face a few problems. First, the tech team does not necessarily work in a Python environment, so they cannot just "copy and paste" my function into their code. Second, I want to make sure that my function runs in the same environment as the one I developed it in. For example, I can imagine that I use some library that the tech team does not have, or that they have a version that differs from the one I use.
Added:
As a possible solution, I am considering the following: I start a Python process that listens on a socket, accepts incoming strings, transforms them into JSON, passes the JSON to the "published" function, and returns the output JSON as a string. Does this solution have disadvantages? In other words, is it a good idea to "publish" a Python function as a background process listening on a socket?
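For concreteness, here is a minimal sketch of the kind of process I have in mind (Python 3 names; your_function is a placeholder for my published function, and the protocol assumes one JSON document per line):

import json
import socketserver

class PredictionHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # read one line, parse it as JSON, call the function, write JSON back;
        # connections are handled sequentially, no concurrency here
        line = self.rfile.readline().decode("utf-8")
        result = your_function(json.loads(line))
        self.wfile.write((json.dumps(result) + "\n").encode("utf-8"))

if __name__ == "__main__":
    server = socketserver.TCPServer(("0.0.0.0", 9999), PredictionHandler)
    server.serve_forever()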
You have the right idea with using a socket, but there are plenty of frameworks that do exactly what you want. Like hleggs, I suggest you check out Flask to build a microservice. This will let the other team POST JSON objects in an HTTP request to your Flask application and receive JSON objects back, with no knowledge of the underlying system or additional requirements needed.
Here's a template for a Flask app that receives and responds with JSON:
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/', methods=['POST'])
def index():
    json = request.json
    return jsonify(your_function(json))

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
Edit: embedded my code directly as per Peter Britain's advice
My understanding of your question boils down to:
How can I share a Python library with the rest of my team, who may not be using Python otherwise?
And how can I make sure my code and its dependencies are what the receiving team will run?
And that the receiving team can install things easily mostly anywhere?
This is a simple question with no straightforward answer... you mention that this may be integrated into some web service, but you do not know the actual platform for that service.
You also ask:
As a possible solution I consider the following. I start a Python process that listen to a socket, accept incoming strings, transforms them into JSON, gives the JSON to the "published" function and returns the output JSON as a string. Does this solution have disadvantages? In other words, is it a good idea to "publish" a Python function as a background process listening to a socket?
In the simplest case, and just to get started, I would say no in general. Starting a network server such as an HTTP server (one is built into Python) is super easy. But a service (even one qualified as "micro") means infrastructure, security, and so on:
What if the port you expect is not available on the deployment machine?
What happens when you restart that machine?
How will your server start or restart when there is a failure?
Would you need also to eventually provide an upstart or systemd service (on Linux)?
Will your simple socket or web server support multiple concurrent requests?
Is there a security risk in exposing a socket?
Etc, etc. When deployed, my experience with "simple" socket servers is that they end up being not so simple after all.
In most cases, it will be simpler to avoid redistributing a socket service at first. And the proposed approach here could be used to package a whole service at a later stage in a simpler way if you want.
What I suggest instead is a simple command line interface nicely packaged for installation.
The minimal set of things to consider would be:
provide a portable mechanism to call your function on many OSes
ensure that you package your function such that it can be installed with all the correct dependencies
make it easy to install and of course provide some doc!
Step 1. The simplest common denominator would be to provide a command line interface that accepts the path to a JSON file and prints JSON to stdout.
This would run on Linux, Mac and Windows.
The instructions here should work on Linux or Mac and would need a slight adjustment for Windows (only for the configure.sh script further down)
A minimal Python script could be:
#!/usr/bin/env python
"""
Simple wrapper for calling a function accepting JSON and returning JSON.

Save to predictor.py and use this way::

    python predictor.py sample.json
    [
      "a",
      "b",
      4
    ]
"""
from __future__ import absolute_import, print_function
import json
import sys


def predict(json_input):
    """
    Return predictions as a JSON string based on the provided `json_input`
    JSON string data.
    """
    # this will error out immediately if the JSON is not valid
    validated = json.loads(json_input)
    # <....> your code there
    with_predictions = validated
    # return a pretty-printed JSON string
    return json.dumps(with_predictions, indent=2)


def main():
    """
    Print the JSON string results of a prediction, loading an input JSON
    file from a file path provided as a command line argument.
    """
    args = sys.argv[1:]
    json_input = args[0]
    with open(json_input) as inp:
        print(predict(inp.read()))


if __name__ == '__main__':
    main()
Because the input is passed as the path to a JSON file rather than the content itself, this can also handle potentially large inputs.
Step 2. Package your function. In Python this is achieved by creating a setup.py script. This also takes care of installing any dependent code from PyPI and ensures that the versions of the libraries you depend on are the ones you expect. Here I added nltk as an example dependency; add yours: this could be scikit-learn, pandas, numpy, etc. This setup.py also automatically creates a bin/predict script, which will be your main command line interface:
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
from __future__ import absolute_import, print_function
from setuptools import setup
from setuptools import find_packages


setup(
    name='predictor',
    version='1.0.0',
    license='public domain',
    description='Predict your life with JSON.',
    packages=find_packages(),
    # add all your direct requirements here
    install_requires=['nltk >= 3.2, < 4.0'],
    # add all your command line entry points here
    entry_points={'console_scripts': ['predict = prediction.predictor:main']},
)
In addition, as is common for Python and to keep the setup code simpler, I created a "Python package" directory and moved the predictor inside it.
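For reference, the resulting layout would look roughly like this (names assumed to match the entry point declared in the setup.py above):

predictor/
    setup.py
    configure.sh
    prediction/
        __init__.py
        predictor.py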
Step 3. You now want to package things such that they are easy to install. A simple configure.sh script does the job. It installs virtualenv, pip and setuptools, then creates a virtualenv in the same directory as your project and installs your prediction tool there (pip install . is essentially the same as python setup.py install). With this script you ensure that the code that will be run is the code you want, with the correct dependencies. Furthermore, you ensure that this is an isolated installation with minimal dependencies and minimal impact on the target system. This was tested with Python 2 but will quite likely work on Python 3 too.
#!/bin/bash
#
# configure and installs predictor
#
ARCHIVE=15.0.3.tar.gz
mkdir -p tmp/
wget -O tmp/venv.tgz https://github.com/pypa/virtualenv/archive/$ARCHIVE
tar --strip-components=1 -xf tmp/venv.tgz -C tmp
/usr/bin/python tmp/virtualenv.py .
. bin/activate
pip install .
echo ""
echo "Predictor is now configured: run it with:"
echo " bin/predict <path to JSON file>"
At the end you have a fully configured, isolated and easy to install piece of code with a simple highly portable command line interface.
You can see it all in this small repo: https://github.com/pombredanne/predictor
You just clone or fetch a zip or tarball of the repo, then go through the README and you are in business.
Note that for a more involved approach to more complex applications, including vendoring the dependencies for easy installation without depending on the network, you can check https://github.com/nexB/scancode-toolkit, which I also maintain.
And if you really want to expose a web service, you could reuse this approach and package it with a simple web server (like the one built into the Python standard library, or bottle, or Flask, or gunicorn) and provide a configure.sh that installs it all and generates the command line to launch it.
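As an illustration only, here is a minimal sketch of such a wrapper using nothing but the standard library (Python 3's http.server; it assumes the predict() function from the package above is importable as prediction.predictor.predict):

from http.server import BaseHTTPRequestHandler, HTTPServer

from prediction.predictor import predict  # the predict() defined earlier


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read the request body, pass the JSON string to predict() and
        # send its JSON string result back
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8")
        result = predict(body)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(result.encode("utf-8"))


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), PredictHandler).serve_forever()

All the caveats listed earlier (ports, restarts, concurrency, security) still apply; this only shows how the command line tool and a web front-end can share the same packaged function.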
Your task is, in general terms, about productionizing a machine learning model, where the consumer of the model may not be working in the same environment as the one used to develop it. I have been trying to tackle this problem for the past few years. It is faced by many companies and is aggravated by mismatches in skill sets, objectives, and environments (languages, runtimes) between data scientists and developers. From my experience, the following solutions/options are available, each with its own advantages and downsides.
Option 1: Build the prediction part of your model as a standalone web service using any lightweight tool in Python (for example, Flask). You should try to decouple model development/training from the prediction part as much as possible. The model you have developed must be serialized to some form that the web server can use.
How frequently is your machine learning model updated? If it is not updated very frequently, the serialized model file (for example, a Python pickle file) can be saved to a common location accessible to the web server (say, S3) and loaded into memory. The standalone web server should offer APIs for prediction.
Please note that exposing a single model prediction using Flask is simple. But scaling this web server if needed, configuring it with the right set of libraries, and authenticating incoming requests are all non-trivial tasks. You should choose this route only if you have dev teams ready to help with these.
If the model gets updated frequently, versioning your model file would be a good option. You can in fact piggyback on any version control system by checking in the whole model file, provided it is not too large. The web server can de-serialize (pickle.load) this file at startup/update time and convert it into a Python object on which you can call prediction methods.
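A rough sketch of that pattern (the file name, the shared location, and the model's predict() interface are assumptions for illustration, not a prescribed API):

import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# load the serialized model once at startup; in practice this file could be
# fetched from a shared location such as S3 before the server starts
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.json                      # incoming JSON payload
    features = payload["features"]              # assumed input layout
    prediction = model.predict([features])      # assumed scikit-learn-like model
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)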
Option 2: Use the Predictive Model Markup Language (PMML). PMML was developed specifically for this purpose: a predictive-model data interchange format that is independent of the environment. A data scientist can develop a model and export it to a PMML file, and the web server used for prediction can then consume the PMML file to make predictions. You should definitely check out the Openscoring project, which allows you to expose machine learning models via REST APIs for deploying models and making predictions.
Pros: PMML is a standardized format, and Openscoring is a mature project with a good development history.
Cons: PMML may not support all models. Openscoring is primarily useful if your tech team's development platform of choice is the JVM. Exporting machine learning models from Python is not straightforward, but R has good support for exporting models as PMML files.
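For completeness, if the model is scikit-learn based, the third-party sklearn2pmml package is one possible export route (the API below is an assumption; verify it against the package's documentation):

from sklearn.linear_model import LogisticRegression
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

# X_train, y_train: your training data (placeholders here)
pipeline = PMMLPipeline([("classifier", LogisticRegression())])
pipeline.fit(X_train, y_train)

# write a PMML file that a JVM-side scorer such as Openscoring can consume
sklearn2pmml(pipeline, "model.pmml")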
Option 3: Some vendors offer dedicated solutions for this problem. You will have to evaluate the cost of licensing, the cost of hardware, and the stability of the offerings when considering this route.
Whichever option you choose, please consider the long-term costs of supporting it. If your work is at a proof-of-concept stage, a Python Flask-based web server with pickled model files will be the best route. Hope this answer helps you!
As already suggested in other answers, the best option would be to create a simple web service. Besides Flask, you may want to try bottle, which is a very thin one-file web framework. Your service may look as simple as:
from bottle import route, run, request

@route('/', method='POST')
def index():
    return my_function(request.json)

run(host='0.0.0.0', port=8080)
To keep the environments the same, check out virtualenv to create an isolated environment (avoiding conflicts with already installed packages) and pip to install exact versions of packages into that virtual environment.
I guess you have three possibilities:
Convert the Python function to a JavaScript function:
Assuming the "tech team" uses JavaScript for the web service, you may try to convert your Python function directly to a JavaScript function (which will be really easy to integrate into a web page) using empythoned (based on emscripten).
The downside of this method is that each time you need to update/upgrade your Python function, you also need to convert it to JavaScript again, then check and validate that the function continues to work.
Simple API server + jQuery
If the conversion method is impossible, I agree with @justin-bell: you may use Flask,
getting JSON as input > passing the JSON to your function > running the Python function > converting the result to JSON > serving the JSON result.
Assuming you choose the Flask solution, the "tech team" will only need to send an asynchronous GET/POST request containing all the arguments as a JSON object whenever they need a result from your Python function.
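For example, a call from the consuming side is just an HTTP request with a JSON body; in Python it might look like this (using the requests library; the URL and payload are made up):

import requests

payload = {"feature_a": 1.0, "feature_b": "xyz"}   # made-up input
response = requests.post("http://localhost:5000/", json=payload)
print(response.json())                              # the function's JSON result

Any other HTTP client (jQuery's $.ajax, curl, etc.) works the same way.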
WebSocket server + socket.io
You can also take a look at WebSockets to dispatch to the web service (look at Flask + WebSocket on your side and socket.io on the web-service side).
=> WebSockets are really useful when you need to push/receive data with low cost and latency to (or from) a lot of users (I'm not sure that WebSockets will be the best fit for your need).
Regards
I'm using Py2neo in a project. Most of the time the neo4j server runs on localhost so in order to connect to the graph I just do:
g = Graph()
But when I run tests I'd like to connect to a different graph, preferably one I can trash without any consequences.
I'd like to have a "production" graph, possibly set up in such a way that even though it also runs on localhost, the tests won't have access to it.
Can this be done?
UPDATE 0 - A better way to put this question might have been: how can I get my localhost Neo4j to serve two databases on two different ports? Once I've got that working, it's trivial to use the REST client to connect to one or the other. I'm running the latest .deb version of Neo4j on an Ubuntu workstation (if that matters).
You can have multiple instances of Neo4j running on the same machine by configuring them to use different ports, i.e. 7474 for development and 7473 for tests.
Graph() defaults to http://localhost:7474/db/data/ but you can also pass a connection URI explicitly:
dev = Graph()
test = Graph("http://localhost:7473/db/data/")
prod = Graph("https://remotehost.com:6789/db/data/")
You can run neo4j server on a different machine and access it through REST service.
Inside neo4j-server.properties, you can uncomment the line that sets the IP address to 0.0.0.0.
This would allow the server to be accessed from anywhere. Now, I don't know about Python, but with Java I am using a REST library to access that server (the Java REST library for Neo4j). Take a look here:
https://github.com/rash805115/bookeeping/blob/master/src/main/java/database/service/impl/Neo4JRestServiceImpl.java
Update 0: There are three ways to complete your wish.
Method 1: Start a neo4j instance on a separate machine, then access that instance using some REST API. To do that, go to conf/neo4j-server.properties, find this line and uncomment it.
#org.neo4j.server.webserver.address=0.0.0.0
Method 2: Start two neo4j instances on the same machine but on different ports, and use the REST service to access them. To do this, copy the neo4j distribution into two separate folders, then change these lines in conf/neo4j-server.properties so that the ports differ in at least one of them.
First instance:
org.neo4j.server.webserver.port=7474
org.neo4j.server.webserver.https.port=7473
Second instance:
org.neo4j.server.webserver.port=8484
org.neo4j.server.webserver.https.port=8483
Method 3: From your comments it appears you want to do this, and indeed it is the easiest method: have two separate databases on the same Neo4j instance. To do this you don't have to change any configuration files, just a line in your code. I have not done this in Python exactly, but I have done the same in Java. Let me give you the Java code so you can see how easy it is.
Production Code:
package rash.experiments.neo4j;

import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.cypher.javacompat.ExecutionResult;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class Neo4JEmbedded
{
    public static void main(String args[])
    {
        GraphDatabaseService graphDatabaseService = new GraphDatabaseFactory().newEmbeddedDatabase("db/productiondata/");
        ExecutionEngine executionEngine = new ExecutionEngine(graphDatabaseService);

        try(Transaction transaction = graphDatabaseService.beginTx())
        {
            executionEngine.execute("create (node:Person {userId: 1})");
            transaction.success();
        }

        ExecutionResult executionResult = executionEngine.execute("match (node) return count(node)");
        System.out.println(executionResult.dumpToString());
    }
}
Test Code:
package rash.experiments.neo4j;

import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.cypher.javacompat.ExecutionResult;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class Neo4JEmbedded
{
    public static void main(String args[])
    {
        GraphDatabaseService graphDatabaseService = new GraphDatabaseFactory().newEmbeddedDatabase("db/testdata/");
        ExecutionEngine executionEngine = new ExecutionEngine(graphDatabaseService);

        try(Transaction transaction = graphDatabaseService.beginTx())
        {
            executionEngine.execute("create (node:Person {userId: 1})");
            transaction.success();
        }

        ExecutionResult executionResult = executionEngine.execute("match (node) return count(node)");
        System.out.println(executionResult.dumpToString());
    }
}
Note the difference in this line:
GraphDatabaseService graphDatabaseService = new GraphDatabaseFactory().newEmbeddedDatabase("db/testdata/");
This creates two separate folders, db/productiondata and db/testdata. Both folders contain separate data, and your code can use either one depending on your requirement.
I am pretty sure that in your Python code you have to do almost the same thing, something like this (note that this code might not be correct):
g = Graph("/db/productiondata")
g = Graph("/db/testdata")
Unfortunately, this is a problem without a perfect solution right now. There are however a few options available which may suffice for what you need.
First, have a look at the py2neo build script: https://github.com/nigelsmall/py2neo/blob/release/2.0.5/bau
This is a bash script that spawns a new database instance for each version that needs testing, starting with an empty store and shutting it down afterwards. It uses the default port 7474, but it should be an easy change to tweak this automatically in the properties file. Specifically, you'll probably want to look at the test, neo4j_start and neo4j_stop functions.
Additionally, py2neo provides an extension called neobox:
http://py2neo.org/2.0/ext/neobox.html
This is intended to be a quick and simple way to set up new database instances running on free ports and might be helpful in this case.
Note that, generally speaking, clearing down the data store between tests is a bad idea, as it is a slow operation and can seriously impact the running time of your test suite. For that reason, a test database that lives for the duration of all tests is a better idea, although it requires a little thought when writing tests so that they don't overlap.
Going forward, Neo4j will gain DROP functionality to help with this kind of work but it will likely be a few releases before this appears.
I have an existing website deployed on Google App Engine for Python. Now I have set up the local development server on my system, but I don't know how to get the updated database from the live server. There is no export option in Google's developer console.
Also, I don't want to read the data from the production Datastore on each request; I want to set it up locally once. The Google manual says that the local datastore is stored in an SQLite file.
Any hint would be appreciated.
First, make sure your app.yaml enables the "remote" built-in, with a stanza such as:
builtins:
- remote_api: on
This app.yaml of course must be the one deployed to your appspot.com (or whatever) "production" GAE app.
Then, it's a job for /usr/local/google_appengine/bulkloader.py or wherever you may have installed the bulkloader component. Run it with -h to get a list of the many, many options you can pass.
You may need to generate an application-specific password for this use on your google accounts page. Then, the general use will be something like:
/usr/local/google_appengine/bulkloader.py --dump --url=http://your_app.appspot.com/_ah/remote_api --filename=allkinds.sq3
You may not (yet) be able to use this "all kinds" query -- the server only generates the needed statistics for the all-kinds query "periodically", so you may get an error message including info such as:
[ERROR ] Unable to download kind stats for all-kinds download.
[ERROR ] Kind stats are generated periodically by the appserver
[ERROR ] Kind stats are not available on dev_appserver.
If that's the case, then you can still get things "one kind at a time" by adding the option --kind=EntityKind and running the bulkloader repeatedly (with separate sqlite3 result files) for each kind of entity.
Once you've dumped (kind by kind if you have to, all at once if you can) the production datastore, you can use the bulkloader again, this time with --restore and addressing your localhost dev_appserver instance, to rebuild the latter's datastore.
It should be possible to explicitly list kinds in the --kind flag (by separating them with commas and putting them all in parentheses) but unfortunately I think I've found a bug stopping that from working -- I'll try to get it fixed but don't hold your breath. In any case, this feature is not documented (I just found it by studying the open-source release of bulkloader.py) so it may be best not to rely on it!-)
More info about the then-new bulkloader can be found in a blog post by Nick Johnson at http://blog.notdot.net/2010/04/Using-the-new-bulkloader (though it doesn't cover newer functionalities such as the sqlite3 format of results in the "zero configuration" approach I outlined above). There's also a demo, with plenty of links, at http://bulkloadersample.appspot.com/ (also a bit outdated, alas).
Check out the remote API. This will tunnel your database calls over HTTP to the production database.
Hi, I am trying to write Python functional tests for our application. It involves several external components, and we are mocking them all out. We have a good framework for mocking a service, but not yet one for mocking a database.
SQLite is very lightweight and I thought of using it, but it is serverless. Is there a way I can write some Python wrapper to make it act as a server, or should I look at other options like HSQLDB?
I don't understand your problem. Why do you care that it's serverless?
My standard technique for this is:
use SQLAlchemy
in tests, configure it with sqlite:/// or sqlite:///:memory:
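A minimal sketch of that technique (the table and model are made up; SQLAlchemy 1.4+ style assumed):

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):                      # made-up model for illustration
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

# in tests point at an in-memory SQLite database; in production swap the URL
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
session = Session()
session.add(User(name="alice"))
session.commit()
assert session.query(User).count() == 1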
I'm on an App Engine project where I'd like to put in a link to a JavaScript test runner that should only exist when running the development server. I've made some experiments in a local shell, with configuration loaded using the technique found in NoseGAE, versus live on the 'App Engine Console' [1], and it looks to me like one distinction between a real instance and the dev server is the presence of the module google.appengine.tools. That led me to this utility function:
def is_dev():
    """
    Tells us if we're running under the development server or not.

    :return:
        ``True`` if the code is running under the development server.
    """
    try:
        from google.appengine import tools
        return True
    except ImportError:
        return False
The question (finally!) would be: is this a bad idea? And in that case, can anyone suggest a better approach?
[1] http://con.appspot.com/console/ (try it! very handy indeed)
The standard way to test for the development server is as follows:
DEBUG = os.environ['SERVER_SOFTWARE'].startswith("Dev")
Relying on the existence or nonexistence of a particular module - especially an undocumented one - is probably a bad idea.
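A slightly more defensive variant, in case SERVER_SOFTWARE is absent altogether (for example when the code runs outside App Engine, such as in unit tests):

import os

def is_dev_server():
    # "Development/x.y" on the dev server, "Google App Engine/x.y.z" in
    # production; default to "" so the check never raises KeyError
    return os.environ.get("SERVER_SOFTWARE", "").startswith("Dev")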
I'd recommend doing it this way:
import os

def onDevServer():
    return os.environ['SERVER_SOFTWARE'].find('Development') >= 0
This looks at the environment you're running in, and returns True if you're on the development server. It's a much cleaner way than checking an import, in my opinion.
I'm not a Google App Engine developer, but I wouldn't make this 100% dynamic; I would also look at a value from a config file. I'm pretty sure you will run into the situation where you want to see this console on the prod system (Google's servers) or run your local version without the dev code (for testing).
To sum it up: such logic is fine for small stuff, like adding a debug link, but provide a way to override it (e.g. by a configuration value).
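A rough sketch of that idea, combining an explicit configuration flag with the environment check as a fallback (the settings module and flag name are made up):

import os

try:
    from settings import FORCE_DEBUG_LINKS   # True/False, or None for "auto"
except ImportError:
    FORCE_DEBUG_LINKS = None

def show_debug_links():
    if FORCE_DEBUG_LINKS is not None:         # explicit override wins
        return FORCE_DEBUG_LINKS
    # fall back to detecting the development server from the environment
    return os.environ.get("SERVER_SOFTWARE", "").startswith("Dev")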