How to integrate BIRT with Python Django Project by using Py4j - python

Hi, is there anyone who can help me integrate BIRT reports with a Django project? Or does anyone have suggestions for connecting third-party reporting tools, such as Crystal Reports or Crystal Clear Report, with Django?

Some of the 3rd-party Crystal Reports viewers listed here provide a full command line API, so your Python code can preview/export/print reports via subprocess.call().
The resulting process can range from an interactive Crystal Report viewer session (the user can log in, set/change parameters, print, export) to a fully automated (no user interaction) report printing/exporting run.
While this would simplify your code, it would restrict deployment to Windows.
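For illustration, a minimal sketch of such a subprocess call. The executable path and flag names are placeholders for whichever viewer you pick; only subprocess.call() itself is fixed, so check your viewer's documentation for the real command line options:

import subprocess

# Placeholder path and flags for a hypothetical 3rd-party viewer CLI.
exit_code = subprocess.call([
    r"C:\Program Files\SomeViewer\viewer.exe",
    "-report", r"C:\reports\sales.rpt",
    "-export", r"C:\out\sales.pdf",
    "-format", "PDF",
])
if exit_code != 0:
    raise RuntimeError("report export failed with exit code %d" % exit_code)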

For prototyping, or if you don't mind the performance, you can call BIRT from the command line.
For example, download the POJO runtime and use the script genReport.bat (IIRC) to generate a report to a file (e.g. in PDF format). You can specify the output options and the report parameters on the command line.
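A rough sketch of driving that script from Python. The script location and flag names are from memory and may differ between BIRT versions; run the script without arguments to see its actual usage:

import subprocess

# Paths, flag names, and the report design are assumptions; adjust to
# your POJO runtime installation and BIRT version.
subprocess.check_call([
    r"C:\birt-runtime\ReportEngine\genReport.bat",
    "--format", "PDF",                    # output format
    "--output", r"C:\out\invoice.pdf",    # file to write
    "--parameter", "customerId=42",       # report parameter
    r"C:\reports\invoice.rptdesign",      # report design to run
])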
However, the BIRT startup is heavy overhead (several seconds).
To achieve reasonable performance, it is much better to perform this initialization only once.
To achieve this goal, there are at least two possible ways:
You can use the BIRT viewer servlet (which is included as a WAR file with the POJO runtime). So you start the servlet with a web server, then you use HTTP requests to generate reports.
This looks technically old-fashioned (e.g. no JSON requests), but it should work. However, I have never used this approach myself.
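If you do try it, generating a report boils down to an HTTP request from your Django code. A sketch, assuming the traditional viewer URL scheme; the host, context path, port, and the __report/__format parameter names should be verified against the viewer version you deploy:

import requests

resp = requests.get(
    "http://localhost:8080/birt/run",
    params={
        "__report": "invoice.rptdesign",  # report design to run
        "__format": "pdf",                # output format
        "customerId": "42",               # report parameter, passed by name
    },
    timeout=60,
)
resp.raise_for_status()
with open("invoice.pdf", "wb") as f:
    f.write(resp.content)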
The other option is to write your own BIRT server.
In our product, we followed this approach.
You can take the viewer servlet as a template for seeing how this could work.
The basic idea is:
You start one (or possibly more than one) Java process.
The Java process initializes the BIRT runtime (this is what takes some seconds).
After that, the Java process listens for requests somehow (we used a plain socket listener, but of course you could use HTTP or some REST server framework as well).
A request would contain the following information (see the client sketch below):
which module to run
which output format
report parameters (specific to the module)
possibly other data/metadata, e.g. for authentication
On the BIRT side, such a request translates into a RunAndRenderTask, or into separate RunTask and RenderTask instances.
Depending on your reports, you might consider returning the resulting output (e.g. PDF) directly as a response, or using an asynchronous approach.
Note that BIRT will happily create several reports at the same time - multi-threading is no problem (except for the initialization), given enough RAM.
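To make the idea concrete, here is an illustrative Python client for such a server. The wire protocol (length-prefixed JSON over a plain socket) and all field names are invented for this sketch; your own server defines the real contract:

import json
import socket

request = {
    "module": "invoice.rptdesign",     # which module to run
    "format": "PDF",                   # which output format
    "parameters": {"customerId": 42},  # module-specific parameters
    "token": "secret",                 # e.g. for authentication
}
payload = json.dumps(request).encode("utf-8")

with socket.create_connection(("localhost", 9100), timeout=120) as sock:
    # 4-byte big-endian length header, then the JSON body
    sock.sendall(len(payload).to_bytes(4, "big") + payload)
    # response: same framing, body is the rendered report (e.g. PDF bytes)
    size = int.from_bytes(sock.recv(4), "big")
    chunks, received = [], 0
    while received < size:
        chunk = sock.recv(min(65536, size - received))
        if not chunk:
            raise ConnectionError("server closed the connection early")
        chunks.append(chunk)
        received += len(chunk)

with open("invoice.pdf", "wb") as f:
    f.write(b"".join(chunks))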
Be warned, however, that you will need at least a few days to build a POC for this "create your own server" approach, and probably some weeks to reach production quality.
So if you just want to build something fast to see if BIRT is the right tool for you, start with the command line approach, then try the servlet approach, and only if you find that the servlet approach is not quite good enough should you go the "create your own server" way.
It's a pity that currently there doesn't seem to exist an open-source, production-quality, modern BIRT REST service.
That would make a really good contribution to the BIRT open-source project... (https://github.com/eclipse/birt)

Related

API Design Questions using Django for OS tasks (REST vs RPC)

Background:
I have an application that is supposed to automate some infrastructure- and OS-heavy tasks that happen on a network file system (for example: mounting volumes, shutting down / bringing up servers, creating directories, moving data around, ssh-ing, etc.). Ultimately there are a lot of OS-level commands that need to be run in sequence for each action. Our consumer/client likely does not know this sequence, but knows "I want to do task X".
Tech stack: Python/Django
I have been tasked with setting this application up, but am perplexed about the best approach for modularity, both from the API standpoint and in the overall design. Currently we have a similar application that is SOAP-style (RPC), but the way it is written is not very modular. For example, one function will have a ton of random hardcoded subprocess commands - not the approach I want to emulate here.
Initially I was leaning towards a REST API, since Django has the nice django-rest-framework plug-in, but I am having trouble modelling these very action-oriented tasks. The more I read online, the more I come to believe I really need to think of every little action as a resource, with the client having to GET/POST/PUT to each of them to keep things modular. But when I boil that down further, it looks like I may need to set up 15+ endpoints for each situation, and the client likely isn't going to want to call all 15 endpoints to get the single behavior they want. That being said, moving to RPC so users can have one endpoint that 'moves the moon in a single call' might not be the best approach either.
I think one of the issues I see is that our application does a lot of work on a file system, and not all of it is contained within our application's database. I reckon that's kind of a central point of this application, but I have trouble modelling things that require file system actions outside our application's database.
Question 1:
One example action that our client might want to call would be responsible for ssh-ing to a remote server and running a command. How might you model this in REST?
Question 2:
How do you all model file system actions in your applications?
Question 3:
After reviewing the above, does RPC seem like the better option?
Other:
Any other help or feedback, even in general, is much appreciated.
REST is similar to SOAP in the sense that you call operations in SOAP, and REST just maps those operations to web resources and HTTP methods.
For example
z = DoSSHStuffOnARemoteServer(x=1, y=2)
vs
POST /RemoteServer/SSHStuff {x:1,y:2}
If it times out because it takes a lot of time, then you can do
202 accepted
{type: "transaction": href: "/RemoteServer/SSHStuff/123", status: "pending"}
and poll it every 5-10 minutes, or use websockets to update the status. After it is done:
200 ok
{type: "transaction": href: "/RemoteServer/SSHStuff/123", status: "done", result: {z:3}}
So there is no magic. Just keep in mind that REST lives in the presentation layer of your application: it returns view models, and the entire structure is connected to the application services if you do DDD. It should not reflect the database structure unless you have an anemic domain model, or what others call a thick client.
Normally I would not expose anything about RemoteServer/SSHStuff; I would just tell the client what will be done and stay silent about how it will be done. They don't need to know anything about how you store data, or how many servers you have with what protocols and applications. It should not be their concern. The only things they need to know are what will be done, how long it takes to respond, and what the response will be. Everything else is irrelevant to them, and it is a security risk if you share too much of it.
When we design an interface, whether a REST service interface or just an OOP interface, we always do it to hide implementation details. I hope that helps, have a nice day!

Scriptable HTTP benchmark (preferable in Python)

I'm searching for a good way to stress test a web application. Basically I'm searching for something like ab with a scriptable interface. Ideally I want to define some tasks that simulate different actions on the webapp (register an account, log in, search, etc.), and the tool runs a whole bunch of processes that execute these tasks*. As a result I would like something like "average request time", "slowest request (per URI)", etc.
*: To be independent of the client bandwidth, I will run these tests from some EC2 instances, so in a perfect world the tool would already support this - otherwise I will script it using boto.
If you're familiar with the Python requests package, locust makes it very easy to write load tests.
http://locust.io/
I've used it to write all of our perf tests.
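For example, a minimal locustfile (using the current locust API; the paths, credentials, and task weights are placeholders for your own webapp), run with locust -f locustfile.py:

from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 5)  # simulated think time between actions

    def on_start(self):
        # each simulated user logs in once at the start
        self.client.post("/login", {"username": "test", "password": "test"})

    @task(3)
    def search(self):
        self.client.get("/search?q=foo")

    @task(1)
    def profile(self):
        self.client.get("/profile")

Locust then aggregates exactly the kind of statistics you asked about (average response time, slowest requests per URI), and it supports distributed runs across multiple machines, which fits the EC2 setup.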
You can also take a look at these tools:
palb (Python Apache-Like Benchmark Tool) - an HTTP benchmark tool whose command line interface resembles ab's.
It lacks the advanced features of ab, but it supports multiple URLs (from arguments, files, stdin, and Python code).
Multi-Mechanize - Performance Test Framework in Python
Multi-Mechanize is an open source framework for performance and load testing.
Runs concurrent Python scripts to generate load (synthetic transactions) against a remote site or service.
Can be used to generate workload against any remote API accessible from Python.
Test output reports are saved as HTML or JMeter-compatible XML.
Pylot (Python Load Tester) - Web Performance Tool
Pylot is a free open source tool for testing performance and scalability of web services.
It runs HTTP load tests, which are useful for capacity planning, benchmarking, analysis, and system tuning.
Pylot generates concurrent load (HTTP Requests), verifies server responses, and produces reports with metrics.
Test suites are executed and monitored from a GUI or shell/console.
(Pylot on Google Code)
The Grinder
Default script language is Jython.
Pretty compact how-to guide.
Tsung
Maybe a bit unusual on first use, but really good for stress-testing.
Step-by-step guide.
+1 for locust.io in the answer above.
I would recommend JMeter.
See: http://jmeter.apache.org/
You can set up JMeter as a proxy for your browser to record actions like login, and then stress test your web application. You can also write scripts for it.
Don't forget FunkLoad; it's very easy to use.

Python Web Backend

I am an experienced Python developer starting to work on a web service backend system. The system constantly feeds data from the web into a MySQL database. This data is later displayed by a frontend (there is no connection between the frontend and the backend). The backend constantly downloads flight information from the web (some of the data is fetched via APIs, and some by downloading and parsing text / xls files). I already have a script that downloads the data, parses it, and inserts it into the MySQL db - all in one big loop. The frontend is just a bunch of PHP pages that properly display the data by querying the MySQL server.
It is crucial that this web service be robust, strong and reliable.
Therefore, I have been looking into the proper way to design it, and have settled on the following parts to comprise my system:
1) django as a framework (for HTTP connections and for using Piston)
2) Piston as an API provider (this is great because then my front-end can use the API instead of actually running queries)
3) SQLAlchemy as the DB layer (I don't like how little control you get when using the django ORM; I want to be able to run a more complex DB framework)
4) Apache with mod_wsgi to run everything
5) And finally, Celery (or django-cron) to actually run my infinite loop that pulls the data off the web, hopefully in some sort of organized task format. This is the part I am least sure of, and any pointers are appreciated.
This all sounds great. I have used django before to write websites (i.e. request handlers that return data). However, other than using Celery or django-cron, I can't really see how it fits the role of a constantly-feeding backend. I just wanted to run this by you guys to hear your ideas / comments. Any input you have, or pointers to documentation and/or other libraries, would be greatly appreciated!
If you are about to use SQLAlchemy, I would refrain from using Django: Django is fine if you are using the whole stack, but as you are about to rip the models off, I do not see much value in using it, and I would take a look at other options (perhaps Pylons or plain old CherryPy would do).
Even more so if the frontends will not run queries, but only ask the API provider.
As for robustness, I am more satisfied with starting separate fcgi processes with supervise and using a more lightweight web server (lighttpd / nginx), but that's a matter of taste.
For the "infinite loop" part, it depends on what behavior you want: if there is a problem with the source, would you just like to skip the step or repeat it multiple times when source is back up?
Periodic Tasks might be good for former, while cron that would just spawn scraping tasks is better for latter.
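A minimal sketch of the periodic-task option with Celery; the module name, broker URL, and schedule are placeholders:

from celery import Celery

app = Celery("feeder", broker="redis://localhost:6379/0")

# celery beat enqueues this task every 5 minutes;
# run with: celery -A feeder worker --beat
app.conf.beat_schedule = {
    "pull-flight-data": {
        "task": "feeder.pull_flight_data",
        "schedule": 300.0,
    },
}

@app.task(name="feeder.pull_flight_data")
def pull_flight_data():
    # download, parse, and insert into MySQL; if the source is down,
    # this run fails and the next scheduled run simply tries again
    pass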

fastcgi, cherrypy, and python

So I'm trying to do more web development in python, and I've picked cherrypy, hosted by lighttpd w/ fastcgi. But my question is a very basic one: why do I need to restart lighttpd (or apache) every time I change my application code, or the code for an underlying library?
I realize this question stems from a basic mis- (i.e. poor) understanding of the fastcgi model, so I'm open to any schooling here, but I'm used to just changing a PHP file and seeing the change show up, versus having to bounce the web server.
Any elucidation/useful mockery appreciated.
This is because of performance. For development, autoreloading is helpful. But in production, you don't want to autoreload. This is actually a decently-sized bottleneck in, say, PHP: every time you access a PHP webpage, the server has to parse and load the page from scratch. With Python, the script is already loaded and running after the first access.
As has been pointed out, CherryPy has an autoreload setting. I'd recommend using the CherryPy built-in server for development and lighttpd for production. That will likely save you some time. The tutorial shows you how to do this.
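For completeness, a sketch of toggling that setting, assuming a reasonably recent CherryPy (autoreload is on by default in the development environment):

import cherrypy

class Root:
    @cherrypy.expose
    def index(self):
        return "hello"

cherrypy.config.update({
    "engine.autoreload.on": True,    # watch source files, restart on change
    # "environment": "production",   # uncomment to disable dev features
})
cherrypy.quickstart(Root())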
From a system-software writer's point of view: this all depends on how the metadata about the server process is organized within your daemon (lighttpd or fcgi). Some programs are designed for one-time-only initialization; mostly, this allows a much simpler and better-performing internal programming model.
Often it is very hard to make a server process reload its config data in an easy way. You might have to introduce locks and external event objects (signals in UNIX). When you can synchronize the data structures by design - i.e., by initializing only once - why complicate things by making the data model modifiable multiple times?

Is it more efficient to parse external XML or to hit the database?

I was wondering, when dealing with a web service API that returns XML, whether it's better (faster) to call the external service each time and parse the XML (using ElementTree) for display on your site, or to save the records into the database (after parsing once, or however many times you need to each day) and make database calls instead for that same information.
First off -- measure. Don't just assume that one is better or worse than the other.
Second, if you really don't want to measure, I'd guess the database is a bit faster (assuming the database is relatively local compared to the web service). Network latency usually is more than parse time unless we're talking a really complex database or really complex XML.
Everyone is being very polite in answering this question: "it depends"... "you should test"... and so forth.
True, the question does not go into great detail about the application and network topologies involved, but if the question is even being asked, then it's likely a) the DB is "local" to the application (on the same subnet, the same machine, or in memory), and b) the web service is not. After all, the OP uses the phrases "external service" and "display on your own site." The phrase "parsing it once or however many times you need to each day" also suggests a set of data that doesn't exactly change every second.
The classic SOA myth is that the network is always available; going a step further, I'd say it's a myth that the network is always available with low latency. Unless your own internal systems are crap, sending an HTTP query across the Internet will always be slower than a query to a local DB or DB cluster. There are any number of reasons for this: number of hops to the remote server, outage or degradation issues that you can't control on the remote end, and the internal processing time for the remote web service application to analyze your request, hit its own persistence backend (aka DB), and return a result.
Fire up your app. Measure the latency and response times to your DB. Now do the same for the remote web service. Unless your DB is also across the Internet, you'll notice a huge difference.
It's not at all hard for a competent technologist to scale a DB, or to take load off the DB entirely using memcached and other caching paradigms; the latency between servers sitting near each other in the datacentre is monumentally lower than between machines communicating over the Internet (and more secure, to boot). Even if achieving this scale requires some thought, it's under your control, unlike a remote web service whose scaling and latency are totally opaque to you. I, for one, would not be too happy with the idea that the availability and responsiveness of my site depend on someone else entirely.
Finally, what happens if the remote web service is unavailable? Imagine a world where every request to your site involves a request over the Internet to some other site. What happens if that other site is unavailable? Do your users watch a spinning cursor of death for several hours? Do they enjoy an Error 500 while your site borks on this unexpected external dependency?
If you find yourself adopting an architecture whose fundamental features depend on a remote Internet call for every request, think very carefully about your application before deciding if you can live with the consequences.
Consuming the web service is more efficient because there are a lot more things you can do to scale your web services and web server (via caching, etc.). By introducing a middle layer, you also have the option of changing the returned data format (e.g. you can decide to use JSON rather than XML). Scaling a database is much harder (involving replication, etc.), so in general, reduce hits on the DB if you can.
There is not enough information to say for sure in the general case. Why don't you run some tests and find out? Since it sounds like you are using Python, you will probably want to use the timeit module; see the sketch after the list below.
Some things that could affect the result:
Performance of the web service you are using
Reliability of the web service you are using
Distance between servers
Amount of data being returned
I would guess that if the data is cacheable, a cached version will be faster, but that does not necessarily mean using a local RDBMS; it might mean something like memcached or an in-memory cache in your application.
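A rough harness for that measurement; the URL and the database call are placeholders for your own setup:

import timeit
import urllib.request
import xml.etree.ElementTree as ET

def fetch_and_parse():
    # placeholder endpoint; substitute the real web service URL
    data = urllib.request.urlopen("http://example.com/api/records").read()
    return ET.fromstring(data)

def query_db():
    # placeholder; substitute your actual database query here
    pass

# ten runs each; compare totals, or use timeit.repeat() for best-of-N
print("web service:", timeit.timeit(fetch_and_parse, number=10))
print("database:   ", timeit.timeit(query_db, number=10))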
It depends - who is calling the web service? Is the web service called every time a user hits the page? If so, I'd recommend introducing a caching layer of some sort - many web service APIs throttle the number of hits you can make per hour.
Whether you choose to parse the cached XML on the fly or fetch the data from a database probably won't matter (unless we are talking enterprise scaling here). Personally, I'd much rather make a simple SQL call than write a DOM parser (which is much more prone to exceptional scenarios).
It varies from case to case; you'll have to measure (or at least make an educated guess).
You'll have to consider several things.
Web service
it might hit database itself
it can be cached
it will introduce network latency and might be unreliable
or it could be in local network and faster than accessing even local disk
DB
might be slow, since it needs to access disk (although databases have internal caches, those are usually not targeted at this access pattern)
should be reliable
Technology itself doesn't mean much in terms of speed - in one case the database parses SQL, in the other an XML parser parses XML, and the database is usually accessed via a socket as well, so you have both parsing and network costs in either case.
Caching data in your application if applicable is probably a good idea.
As a few people have said, it depends, and you should test it.
Often external services are slow, and caching them locally (in a database, or in memory with something like memcached) is faster. But perhaps not.
Fortunately, it's cheap and easy to test.
Test, definitely. As a rule of thumb, XML is good for communicating between apps, but once you have the data inside your app, everything should go into a database table. This may not apply in all cases, but 95% of the time it has for me. Any time I tried to store data any other way (e.g. XML in a content management system), I ended up wishing I had just used good old sprocs and SQL Server.
It sounds like you essentially want to cache results, and are wondering if it's worth it. If so, I would NOT use a database (I assume you are thinking of a relational DB): RDBMSs are not good for caching, even though many people use them that way. You don't need persistence or ACID.
If the choice were between Oracle/MySQL and the external web service, I would start with just using the service.
Instead, consider real caching systems, local or not (memcached, simple in-memory caches, etc.).
Or, if you must use a DB, use a key/value store; BDB works well. Store the response message in its serialized form (XML), try to fetch it from the cache first, and only on a miss hit the service and parse. Or, if there's a convenient and more compact serialization, store and fetch that.
