Consume Python DAO from cocoa/objective c front end - python

My plan is to develop a multi-tier, multi-platform database application.
I would like to consume the data from Cocoa/Objective-C apps, .NET apps, and web browsers.
I don't really know where to start and have been looking at Python, but I can't find out whether Cocoa/Objective-C apps can consume Python data objects.
Can anyone point me in the right direction as to how to achieve my goal?
My requirements are:
The data layer should be platform independent.
The whole system should be scalable, hence multi-tier.
Data access should be possible from Cocoa, .NET and web-based clients.

You can make Python and Objective-C work together. Since Objective-C lets you use 100% normal C, you can use the Python C interface. It's very tedious, though.
There's also PyObjC, which acts as a bridge between Objective-C and Python. The documentation is pretty good, and it will be much simpler than using the Python C interface directly.
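For a taste of what the bridge feels like, here is a minimal sketch from the Python side (the Foundation import is standard PyObjC; the dictionary-as-data-object is just illustrative):

from Foundation import NSMutableDictionary

# PyObjC exposes Foundation classes as ordinary Python objects,
# bridging item access to setObject:forKey: / objectForKey:.
record = NSMutableDictionary.dictionary()
record["name"] = "Alice"
record["age"] = 30
print(record["name"])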
You could also try using Thrift. Thrift is like Google's Protocol Buffers, but it has support for generating Objective-C classes. You will have to write some boilerplate code to convert your data objects into Thrift objects, but after that is done you can pass information among any of the languages Thrift supports. Documentation is on the thin side; I wrote a tutorial on using it with Objective-C, available on the Thrift wiki, some time ago, but I'm not sure whether it is up to date, as there have been several releases of Thrift since then.
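As a rough sketch of what the Python side might look like (the user module, UserService and load_user_from_database are hypothetical; the generated code would come from your own .thrift file via thrift --gen py):

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from thrift.server import TServer
from user import UserService  # hypothetical generated module

class UserServiceHandler:
    # Method names must match the service definition in the .thrift file.
    def getUser(self, user_id):
        return load_user_from_database(user_id)  # hypothetical DAO call

processor = UserService.Processor(UserServiceHandler())
server = TServer.TSimpleServer(
    processor,
    TSocket.TServerSocket(port=9090),
    TTransport.TBufferedTransportFactory(),
    TBinaryProtocol.TBinaryProtocolFactory(),
)
server.serve()  # Objective-C and .NET clients connect with their own generated stubs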

Related

How to integrate BIRT with Python Django Project by using Py4j

Hi, is there anyone who can help me integrate BIRT reports with a Django project? Or any suggestions for connecting third-party reporting tools, like Crystal Reports or Crystal Clear Report, with Django?
Some of the 3rd-party Crystal Reports viewers listed here provide a full command-line API, so your Python code can preview/export/print reports via subprocess.call().
The resulting process can span anything between an interactive Crystal Reports viewer session (the user can log in, set/change parameters, print, export) and fully automated (no user interaction) report printing/exporting.
While this would simplify your code, it would restrict deployment to Windows.
For prototyping, or if you don't mind the performance, you can call BIRT from the command line.
For example, download the POJO runtime and use the script genReport.bat (IIRC) to generate a report to a file (e.g. in PDF format). You can specify the output options and the report parameters on the command line.
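A minimal sketch of that approach from Python (the script path and flag names are assumptions; check the genReport script shipped with your runtime for the exact options):

import subprocess

subprocess.run(
    [
        "ReportEngine/genReport.bat",  # assumed location in the POJO runtime
        "-f", "PDF",                   # assumed flag: output format
        "-o", "invoice.pdf",           # assumed flag: output file
        "-p", "invoiceId=42",          # assumed flag: report parameter
        "invoice.rptdesign",
    ],
    check=True,  # raise CalledProcessError if BIRT exits with an error
)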
However, the BIRT startup is a heavy overhead (several seconds).
To achieve reasonable performance, it is much better to perform this initialization only once.
There are at least two possible ways to achieve this:
You can use the BIRT viewer servlet (which is included as a WAR file with the POJO runtime). You start the servlet in a web server, then use HTTP requests to generate reports.
This looks technically old-fashioned (e.g. no JSON requests), but it should work. However, I have never used this approach.
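An untested sketch of what such a request could look like from Python (the deployment URL and the report parameter are assumptions; __report and __format are the viewer's conventional parameter names):

import requests

resp = requests.get(
    "http://reports.example.com/birt/run",  # assumed servlet URL
    params={
        "__report": "invoice.rptdesign",
        "__format": "pdf",
        "invoiceId": 42,  # hypothetical report parameter
    },
    timeout=120,
)
resp.raise_for_status()
with open("invoice.pdf", "wb") as f:
    f.write(resp.content)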
The other option is to write your own BIRT server.
In our product, we followed this approach.
You can take the viewer servlet as a template for seeing how this could work.
The basic idea is:
You start one (or possibly more than one) Java process.
The Java process initializes the BIRT runtime (this is what takes some seconds).
After that, the Java process listens for requests somehow (we used a plain socket listener, but of course you could use HTTP or some REST server framework as well).
A request would contain the following information:
which module to run
which output format
report parameters (specific to the module)
possibly other data/metadata, e.g. for authentication
The server would then create a RunAndRenderTask, or separate RunTasks and RenderTasks.
Depending on your reports, you might consider returning the resulting output (e.g. PDF) directly as a response, or using an asynchronous approach.
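A hypothetical sketch of the client side of such a request (the length-prefixed-JSON wire format, host and port are all assumptions; your own protocol may look quite different):

import json
import socket
import struct

request = {
    "module": "invoice.rptdesign",    # which module to run
    "format": "PDF",                  # which output format
    "parameters": {"invoiceId": 42},  # report parameters
    "auth": {"token": "..."},         # other data/metadata
}

payload = json.dumps(request).encode("utf-8")
with socket.create_connection(("report-server.example.com", 9000)) as sock:
    sock.sendall(struct.pack(">I", len(payload)) + payload)  # length prefix
    # Read the response (e.g. the rendered PDF) until the server closes.
    chunks = []
    while True:
        chunk = sock.recv(65536)
        if not chunk:
            break
        chunks.append(chunk)
pdf_bytes = b"".join(chunks)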
Note that BIRT will happily create several reports at the same time - multi-threading is no problem (except for the initialization), given enough RAM.
Be warned, however, that you will need at least a few days to build a POC for this "create your own server" approach, and probably some weeks for production quality.
So if you just want to build something fast to see if BIRT is the right tool for you, start with the command-line approach, then try the servlet approach, and only if you find that the servlet approach is not quite good enough should you go the "create your own server" way.
It's a pity that currently there doesn't seem to exist an open-source, production-quality, modern BIRT REST service.
That would make a really good contribution to the BIRT open-source project... (https://github.com/eclipse/birt)

Pure python database driver

Is it possible to have a database driver written in pure Python that doesn't need an underlying system library/shared object to connect to a database?
Apologies for the necro-bump, but this still comes up in a Google search for pure Python drivers. So:
Implementing a database driver in pure Python is conceptually quite straightforward, but only if the wire protocol it uses is documented. Then you (just) write a handler for each type of message to and from the database server in byte format, and away you go. The devil is in the detail, of course, and that's why the protocol has to be documented, unless you are patient enough to reverse-engineer it (and to handle undocumented changes!).
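As a taste of the byte-level work involved, here is a sketch that builds the documented PostgreSQL v3 startup message (a length prefix, the protocol version, then null-terminated key/value pairs):

import struct

def startup_message(user: str, database: str) -> bytes:
    body = struct.pack(">i", 196608)  # protocol version 3.0, i.e. 3 << 16
    for key, value in (("user", user), ("database", database)):
        body += key.encode() + b"\x00" + value.encode() + b"\x00"
    body += b"\x00"  # a single null byte terminates the parameter list
    # The message begins with its total length, including the length field itself.
    return struct.pack(">i", len(body) + 4) + body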
There is a pure Python driver for MSSQL (called python-tds), and has been for a long time (v1.0, Jan 2013). There are also pure Python drivers for PostgreSQL (pg8000) and MySQL (I can't remember the name). I haven't done an exhaustive search for other databases, as I don't generally use them.
Pure Python drivers are excellent for cross-platform development, for using alternative Python implementations, and for simplifying packaging. I especially like them for putting a Python program onto Android: you don't need to worry about how to cross-compile database client libraries.
Yes. It is possible to implement the Python database API as stated in PEP 249.
Even more: such database API implementations exist.
E.g. nuodb-python
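Whichever driver you pick, PEP 249 means the calling code looks the same; a minimal sketch using pg8000 (connection details are placeholders):

import pg8000

conn = pg8000.connect(user="app", password="secret", database="appdb")
cur = conn.cursor()
cur.execute("SELECT name FROM users WHERE id = %s", (42,))  # pg8000 uses the 'format' paramstyle
print(cur.fetchone())
conn.close()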

Are there reactive state libraries like Mobx for Python?

I'm looking for reactive state libraries like MobX for Python, i.e. on the server side rather than the client side of a web application.
MobX is similar to classic reactive libraries like RxPY, but has a different focus: it is not so much about low-level event dispatching, but about reacting to data changes, recalculating derived values (but only those affected, and lazily for non-observed dependent values). And MobX determines the dependencies of calculated values automatically.
The Vue framework also has such functionality built in, with an even better syntax, with the upside (as well as downside) of being closely tied to the framework.
Alas, both are JavaScript and targeted at the client side / user interface.
So my specific questions are:
Are there similar reactive state libraries for Python?
Do these provide integration for storing/observing data in files?
(This would essentially be an inotify-based build system, but more fine-grained and more flexible.)
Do these provide integration with relational databases?
(Yes, there is a conceptual gap to be bridged, and it probably works only as long as a single server instance accesses the database. It would still be very useful for a wide range of applications.)
Do these provide integration with webserver frameworks?
(i.e. received HTTP requests trigger state changes and recalculations, and some calculated values are JSON structures which are observed by the client through WebSockets, long polling or messaging systems.)
I wrote one. It's called MoPyX. It's toolkit-independent, so you can just observe plain objects, but it is geared towards UIs.
See: https://github.com/germaniumhq/mopyx
PySide2 demo: https://github.com/germaniumhq/mopyx-sample
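A small sketch of how MoPyX reads, based on its README (treat the decorator names as illustrative; they may differ between releases):

from mopyx import model, render, action

@model
class State:
    def __init__(self):
        self.name = "initial"

state = State()

@render
def display():
    # Re-runs automatically whenever a model property it read changes.
    print(f"name is {state.name}")

display()  # prints "name is initial" and registers the dependency

@action
def rename(new_name):
    state.name = new_name  # triggers the @render function above

rename("updated")  # prints "name is updated"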

Python Interprocess w/REST Interface options

Is there any tool/library for Python that will aid in interprocess communication while keeping the API/client code easy to maintain?
I'm writing an application wherein I own both the server and the client portions, but I'd like to make it expandable to others via a REST interface (or something similarly accessible). Is my only option to write the boilerplate connective tissue for REST communication?
The REST interface should be implemented with small functions that call the actual Python API, which you will implement anyway.
If you search here on SO, the most frequent recommendation will be to use Flask to expose the REST interface.
There are libraries around that will try to turn the methods of a class into REST paths and such; those may save you a couple of hours at the onset, but cost you many hours down the road.
This morning I coded a backend service that way. The Requests calls to the external service are hidden behind a module, so the business logic doesn't know where the objects come from (an ORM?), and the business logic produces objects that a simple Flask layer consumes to produce the JSON required by each matched URL.
@app.route("/api/users")
def users():
    return json_response(
        api.users(limit=request.args.get('limit', None)),
    )
A one-liner.

using neo4J (server) from python with transaction

I'm currently building a web service using Python/Flask and would like to build my data layer on top of Neo4j, since my core data structure is inherently a graph.
I'm a bit confused by the different technologies offered by Neo4j for that case. Especially:
I originally planned on using the REST API through py2neo, but the lack of transactions is a bit of a problem.
The "embedded database" Neo4j doesn't seem to suit my case very well. I guess it's useful when you're working with batch and one-time analytics, and don't need to store the database on a different server from the web server.
I've stumbled upon the neo4django project, but I'm not sure it offers transaction support (since there is no native Neo4j client for Python), nor whether it would be a problem to use it outside Django itself. In fact, after having looked at the project's documentation, I feel like it has exactly the same limitations, i.e. no transactions (but then, how can you build a real-world service when you can corrupt your model upon a single connection timeout?). I don't even understand what the use for that project is.
Could anyone recommend anything? I feel completely stuck.
Thanks
None of the REST API clients will be able to explicitly support (proper) transactions, since that functionality is not available through the Neo4j REST API. There are a few alternatives, such as Cypher queries and batched execution, which all operate within a single atomic transaction on the server side; however, my general approach for client applications is to try to build code which can gracefully handle partially complete data, removing the need for explicit transaction control.
Often, this approach will make heavy use of unique indexing, and this is one reason that I have provided a large number of "get_or_create" type methods within py2neo. Cypher itself is incredibly powerful and also provides uniqueness capabilities, in particular through the CREATE UNIQUE clause. Using these, you can make your writes idempotent, and you can err on the side of "doing it more than once", safe in the knowledge that you won't end up with duplicate data.
Agreed, this approach doesn't give you transactions per se, but in most cases it can give you an equivalent end result. It's certainly worth challenging yourself as to where in your application transactions are truly necessary.
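For illustration, an idempotent write in the py2neo 1.x era might look something like this (API and Cypher syntax of that era; treat it as a sketch rather than a recipe):

from py2neo import neo4j, cypher

graph_db = neo4j.GraphDatabaseService("http://localhost:7474/db/data/")

# Running this more than once never duplicates the node or the relationship.
query = (
    "START u=node:users(email={email}) "
    "CREATE UNIQUE u-[:LIVES_IN]->(c {name: {city}}) "
    "RETURN c"
)
cypher.execute(graph_db, query, {"email": "alice@example.com", "city": "Houston"})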
Hope this helps
Nigel
I think neo4django makes use of neo4j-rest-client, which does support transactions through the batch resource in the Neo4j REST interface.
The syntax is quite similar to the one used by the Neo4j Python embedded API:
>>> n = gdb.nodes.create()
>>> n["age"] = 25
>>> n["place"] = "Houston"
>>> n.properties
{'age': 25, 'place': 'Houston'}
>>> with gdb.transaction():
....: n.delete("age")
....:
>>> n.properties
{u'place': u'Houston'}
More information can be found in the neo4j-rest-client documentation about transactions.
