Tool to Compare Locust Load Test Results - python

I'm looking for recommendations for a tool that can be used to compare the load test statistics that Locust outputs. Currently, after each run, Locust produces either an HTML page in its web UI or a CSV file. I would like to compare these documents over the course of multiple test runs to see, for example, whether a release degrades performance.
I've reviewed the list of locust extensions and found nothing.

You can check out the locust-influx package or the "Locust Monitoring with Grafana in Just 15 Minutes" article.
The idea is that Locust sends its results to InfluxDB, and you can then build a Grafana dashboard that visualises and compares different test run results.
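For reference, here is a minimal sketch of that idea (this is not locust-influx itself, just an illustration): a Locust request-event listener that writes one point per request to InfluxDB 2.x via the influxdb-client package. The bucket, org, token and measurement names are placeholders.

from locust import HttpUser, task, events
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details; a real setup would likely batch writes.
influx = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = influx.write_api(write_options=SYNCHRONOUS)

@events.request.add_listener
def report_to_influx(request_type, name, response_time, response_length, exception, **kwargs):
    # One data point per request, tagged so Grafana can group by endpoint.
    point = (
        Point("locust_requests")
        .tag("method", request_type)
        .tag("name", name)
        .field("response_time_ms", response_time)
        .field("response_length", response_length or 0)
        .field("failed", 1 if exception else 0)
    )
    write_api.write(bucket="locust", record=point)

class WebsiteUser(HttpUser):
    host = "https://example.com"

    @task
    def index(self):
        self.client.get("/")

If you also tag each point with a release or run identifier, Grafana can overlay two runs on the same panel, which covers the "did this release degrade performance" comparison.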

I like Dmitri T's answer. I've also considered JTL Reporter in the past but never got approval from my company to use it.
The approach is the same as the Grafana setup Dmitri suggested: use Locust's event hooks to create "listeners" that ship Locust's stats off to a service that stores, analyzes, and visualizes the data, making run comparisons easy.
https://jtlreporter.site/docs/integrations/locust

Locust Dashboards (part of locust-plugins; it stores results in Postgres/TimescaleDB and reports in Grafana) has a useful view for comparing runs over time.
https://github.com/SvenskaSpel/locust-plugins/tree/master/locust_plugins/dashboards

Related

Is it possible to load a Google Ads custom report via API into production DB

Looking through the API documentation it seems that there's currently no way to access a custom report via the API. If this is, in fact, the case, is there a workaround to make this possible?
The goal is to get a modified version of this report, which is shown in the web interface.
No, unfortunately you need to build the report yourself and call it via the API.
Depending on how complex the report is, it can be done pretty quickly. You can generate the GAQL needed for your API query using this tool: https://developers.google.com/google-ads/api/fields/v7/overview_query_builder
This will save you typing out all the resources manually, and will even validate it for you.
If you're stuck, let us know what report you're trying to generate and we can help with the GAQL.
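In the meantime, here is a rough, hypothetical sketch of how a GAQL query produced by that builder could be run through the official google-ads Python client; the customer ID, selected fields and credentials path are placeholders.

from google.ads.googleads.client import GoogleAdsClient

# Placeholder credentials file and customer ID.
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT campaign.name, metrics.clicks, metrics.impressions
    FROM campaign
    WHERE segments.date DURING LAST_30_DAYS
"""

response = ga_service.search_stream(customer_id="1234567890", query=query)
for batch in response:
    for row in batch.results:
        print(row.campaign.name, row.metrics.clicks, row.metrics.impressions)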

Any way to track custom statistics in locust

Locust is a great and simple load testing tool. By default it only tracks response times and content length, from which it can derive RPS, etc. Is there any way to track custom statistics in Locust as well?
In my case, the site I'm testing returns a couple of stats via headers, for example a count of SQL queries executed within a request. It would be very helpful to track some of these statistics alongside the standard response times.
I do not see any way to do that in Locust, however. Is there a simple way of doing it?
The only customization I could find in the docs is setting custom URL names on a request.
Manually storing some of the stats is not that straightforward either, as Locust is distributed, so I would like to avoid doing anything too custom.
Edit: There is an example of how custom stats can be passed around, however those do not show up in the UI and require a custom export. Is there any way to add additional data in Locust that will get logged both in the UI and in the data export?
Maybe something like:
from locust import TaskSet, task

class MyTaskSet(TaskSet):
    @task
    def my_task(self):
        response = self.client.get("/foo")
        # self.record() is hypothetical; Locust has no such method today.
        self.record(foo=response.headers.get('x-foo'))
As far as I know, there is no simple way of visualizing custom data in Locust. However, by looking at https://github.com/locustio/locust/blob/master/locust/main.py#L370, you could replace Locust's main run function and inject some custom logic into https://github.com/locustio/locust/blob/master/locust/web.py. This seems like low-hanging fruit for the Locust devs to make this part of the code more adjustable out of the box, so I'd suggest opening an issue on their GitHub.
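That said, newer Locust versions (released after this answer) expose the web UI's Flask app, so a listener-based sketch like the following may get you most of the way without patching Locust itself. The "x-foo" header and the /custom-stats route mirror the hypothetical example in the question; note that in distributed mode each worker only sees its own requests, so you would still need to aggregate across workers.

from collections import defaultdict
from locust import HttpUser, task, events

# name -> list of values seen in the custom header (assumed numeric here).
custom_stats = defaultdict(list)

@events.request.add_listener
def track_foo_header(name, response=None, **kwargs):
    if response is not None:
        foo = response.headers.get("x-foo")
        if foo is not None:
            custom_stats[name].append(int(foo))

@events.init.add_listener
def add_custom_stats_page(environment, **kwargs):
    # Only the master/standalone process runs the web UI.
    if environment.web_ui:
        @environment.web_ui.app.route("/custom-stats")
        def show_custom_stats():
            # Per-endpoint average of the custom header value, returned as JSON.
            return {
                name: sum(values) / len(values)
                for name, values in custom_stats.items()
                if values
            }

class MyUser(HttpUser):
    host = "https://example.com"

    @task
    def my_task(self):
        self.client.get("/foo")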

Flask website backend structure guidance?

I have a basic personal project website that I am using to learn some web dev fundamentals, as well as database (SQL) fundamentals (if SQL is even the right technology to use?).
I have the basic skeleton up and running but as I am new to this, I want to make sure I am doing it in the most efficient and "correct" way possible.
Currently the site has a main index (landing) page, and from there the user can select one of a few subpages. For the sake of understanding, each of these subpages represents a different surf break, and each displays relevant info about that particular break, e.g. wave height, wind, and tide.
As I have already been able to successfully scrape this data, my main questions are: how would I go about inserting this data into a database for future use (historical graphs, trends)? How would I ensure data is added to the database on a continuous schedule (once per day)? How would I use data that was scraped earlier, say at noon, to be displayed/used at 12:05 PM rather than scraping it again?
Any other tips, guidance, or resources you can point me to are much appreciated.
This kind of data is called time series. There are specialized database engines for time series, but with a not-extreme volume of observations - (timestamp, wave height, wind, tide, which break it is) tuples - a SQL database will be perfectly fine.
Try to model your data as a table in Postgres or MySQL. Start by making a table and manually inserting some fake data in a GUI client for your database. When it looks right, you have your schema. The corresponding CREATE TABLE statement is your DDL. You should be able to write SELECT queries against your table that yield the data you want to show on your webapp. If these queries are awkward, it's a sign that your schema needs revision. Save your DDL. It's (sort of) part of your source code. I imagine two tables: a listing of surf breaks, and a listing of observations. Each row in the listing of observations would reference the listing of surf breaks. If you're on a Mac, Sequel Pro is a decent tool for playing around with a MySQL database, and playing around is probably the best way to learn to use one.
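As a concrete (and entirely hypothetical) starting point, here is that two-table schema sketched with SQLite, which is handy for playing around locally; the same DDL, with minor type tweaks, carries over to Postgres or MySQL. Column names are made up for illustration.

import sqlite3

conn = sqlite3.connect("surf.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS surf_breaks (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);

CREATE TABLE IF NOT EXISTS observations (
    id            INTEGER PRIMARY KEY,
    surf_break_id INTEGER NOT NULL REFERENCES surf_breaks(id),
    observed_at   TIMESTAMP NOT NULL,
    wave_height_m REAL,
    wind_kts      REAL,
    tide_m        REAL,
    UNIQUE (surf_break_id, observed_at)  -- one observation per break per timestamp
);
""")
conn.commit()

The UNIQUE constraint becomes useful for the import step below: it lets the database reject duplicate observations for you.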
Next, try to insert data into the table from a Python script. Starting with fake data is fine, but mold your Python script to read from your upstream source (the result of scraping) and insert into the table. What does your scraping code output? Is it a function you can call? A CSV you can read? That'll dictate how this script works.
It'll help if this import script is idempotent: you can run it multiple times and it won't make a mess by inserting duplicate rows. It'll also help if it is incremental: once your dataset grows large, it will be very expensive to recompute the whole thing. Try to deal with importing a specific interval at a time. A command-line tool is fine. You can specify the interval as a command-line argument, or figure it out from the current time.
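A hypothetical version of such a script, matching the SQLite schema above: idempotent because duplicate (surf_break_id, observed_at) rows are ignored via the UNIQUE constraint, and incremental because it imports one day at a time. scrape_observations() stands in for your existing scraping code.

import sqlite3
import sys
from datetime import date

def scrape_observations(day):
    # Placeholder for your scraper: return a list of
    # (surf_break_id, observed_at, wave_height_m, wind_kts, tide_m) tuples.
    return []

def import_day(day):
    conn = sqlite3.connect("surf.db")
    with conn:  # wraps the inserts in one transaction
        conn.executemany(
            """INSERT OR IGNORE INTO observations
               (surf_break_id, observed_at, wave_height_m, wind_kts, tide_m)
               VALUES (?, ?, ?, ?, ?)""",
            scrape_observations(day),
        )
    conn.close()

if __name__ == "__main__":
    # Interval as a command-line argument, defaulting to today.
    target_day = sys.argv[1] if len(sys.argv) > 1 else date.today().isoformat()
    import_day(target_day)

In Postgres, the equivalent of INSERT OR IGNORE is INSERT ... ON CONFLICT DO NOTHING.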
The general problem here, loading data from one system into another on a regular schedule, is called ETL. You have a very simple case of it, and can use very simple tools, but if you want to read about it, that's what it's called. If instead you could get a continuous stream of observations - say, straight from the sensors - you would have a streaming ingestion problem.
You can use cron on Linux to make this script run on a schedule. You'll want to know whether it ran successfully - this opens a whole other can of worms about monitoring and alerting. There are various open-source systems that will let you emit metrics from your programs, basically a "hey, this happened" tick, see those metrics plotted on graphs, and ask to be emailed/texted/paged if something is happening too frequently or too infrequently. (These systems are, incidentally, one of the main applications of time-series databases.) Don't get bogged down with this upfront, but keep it in mind. Statsd, Grafana, and Prometheus are some names to get you started Googling in this direction. You could also simply have your script send an email on success or failure, but people tend to start ignoring such emails.
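If you do want a lightweight version of that later, a sketch along these lines would do; the statsd host, the metric prefix and the importer module are hypothetical.

import statsd
from importer import import_day  # hypothetical module containing the import script above

metrics = statsd.StatsClient("localhost", 8125, prefix="surf_import")

def run_with_metrics(day):
    try:
        import_day(day)
        metrics.incr("success")  # plot/alert on this in Grafana
    except Exception:
        metrics.incr("failure")
        raise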
You'll have written some functions to interact with your database engine. Extract these in a Python module. This forms the basis of your Data Access Layer. Reuse it in your Flask application. This will be easiest if you keep all this stuff in the same Git repository. You can use your chosen database engine's Python client directly, or you can use an abstraction layer like SQLAlchemy. This decision is controversial and people will have opinions, but just pick one. Whatever database API you pick, please learn what a SQL injection attack is and how to use user-supplied data in queries without opening yourself up to SQL injection. Your database API's documentation should cover the latter.
The / page of your Flask application will be based on a SQL query like SELECT * FROM surf_breaks. Render a link to the break-specific page for each one.
You'll have another page like /breaks/n where n identifies a surf break (an integer that increments as you insert surf break rows is customary). This page will be based on a query like SELECT * FROM observations WHERE surf_break_id = n. In each case, you'll call functions in your Data Access Layer for a list of rows, and then in a template, iterate through those rows and render some HTML. There are various Javascript and Python graphing libraries you can feed this list of rows into and get graphs out of (client side or server side). If you're interested in something like a week-over-week change, you should be able to express that in one SQL query and get that dataset directly from the database engine.
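Putting those two pages together with a tiny Data Access Layer might look roughly like this; table and column names follow the earlier schema sketch, the template names are hypothetical, and the queries are parameterized to keep SQL injection out.

import sqlite3
from flask import Flask, render_template

app = Flask(__name__)

def query(sql, args=()):
    # Minimal Data Access Layer: run a parameterized query, return rows as dict-like objects.
    conn = sqlite3.connect("surf.db")
    conn.row_factory = sqlite3.Row
    rows = conn.execute(sql, args).fetchall()
    conn.close()
    return rows

@app.route("/")
def index():
    breaks = query("SELECT * FROM surf_breaks")
    return render_template("index.html", breaks=breaks)

@app.route("/breaks/<int:break_id>")
def show_break(break_id):
    observations = query(
        "SELECT * FROM observations WHERE surf_break_id = ? ORDER BY observed_at",
        (break_id,),
    )
    return render_template("break.html", observations=observations)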
For performance, try not to get into a situation where more than one SQL query happens during a page load. By default, you'll be doing some unnecessary work by going back to the database and recomputing the page every time someone requests it. If this becomes a problem, you can add a reverse proxy cache in front of your Flask app. In your case this is easy, since nothing users do to the app causes its content to change. Simply invalidate the cache when you import new data.

Implementation progressive visualization from Python to google maps?

I'm mostly working on backend stuff, except now in a project I need to use Python to do computing and visualize the results on Google Maps. Think of it as, for example, computing the geographical clusters of people tweeting in New York City.
The Python program runs for about 10 seconds and then outputs one iteration of data, which is a JSON object of coordinates. I'm wondering how I should connect this data to Google Maps?
What I thought of was letting Python write data into a file, with JS polling that file every few milliseconds. However, that sounds too hacky. I'm just wondering, is there a better way to do it?
I'm really a newbie to JS, so please forgive my ignorance.
Thanks
The normal way an HTML page gets data from a backend service (like your coordinate generator that produces output every 10 seconds) is to poll a web service (usually a JSON feed) for updates.
All of the dynamic Google Maps stuff happens within a browser, and that page polls a JSON endpoint, or uses something fancier like websockets to stream data into the browser window.
For the frontend, consider using jQuery, which makes polling JSON dead simple. Here are some examples.
Your "python program" should dump results into a simple database. While relational and traditional databases like MySQL or PostgreSQL should suffice, i'd encourage you to use a NoSQL database, which handles capped collections. This prevents you from having to clean old data out from a cron schedule. It additionally allows storing data in ranged buckets for some cool playback style histories.
You should then have a simple web server that can handle the JSON requests from the HTML frontend page and simply pulls data from MongoDB. This can be done quickly in any one of the Python web frameworks like Flask, Bottle or Pyramid. You could also play with something a little sexier like Node.js. The only requirement here is that a database driver exists for it.
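A minimal sketch of that endpoint with Flask and pymongo might look like this; the "tweets" database, "clusters" collection and field names are hypothetical, assuming your Python job inserts one document per iteration.

from flask import Flask, jsonify
from pymongo import MongoClient, DESCENDING

app = Flask(__name__)
clusters = MongoClient("mongodb://localhost:27017")["tweets"]["clusters"]

@app.route("/clusters/latest")
def latest_clusters():
    # Return the most recently inserted batch of coordinates for the map page to poll.
    doc = clusters.find_one(sort=[("created_at", DESCENDING)])
    if doc is None:
        return jsonify({"coordinates": []})
    return jsonify({"coordinates": doc["coordinates"]})

The map page would then call /clusters/latest every few seconds and redraw the markers or clusters from the returned coordinates.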
Hope that gives a 10,000 foot view of what you need to do now.

RSS aggregation packages

We are looking to add a news/articles section to an existing site which will be powered by aggregating content via RSS feeds. The requirements are
Be able to aggregate lots of feeds. Initially we will start with a small number of feeds, and eventually we may be aggregating a few hundred of them.
We don't want to display the whole post on our site. We will display a summary or short description, and when the user clicks "read more", they will be taken to the original post on the external site.
We would like to grab the image(s) related to a post and display them as a small thumbnail alongside the post on our site.
Create an automated tag cloud out of all the aggregated content.
Categorize aggregated content by using category/sub-category structure.
The aggregation piece should perform well.
Our web app is built using Django, so I am looking into selecting one of the following packages. Based on our requirements, which package would you recommend?
django-planet
django-news
planetplanet
feedjack
If you have a good idea of what you want, why not just try them all? And if you have pretty strict requirements, write it yourself: roll your own aggregator with feedparser.
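If you do roll your own, a rough feedparser sketch covering the requirements above (summary, link back to the original post, a thumbnail when the feed provides one) could start like this; thumbnail handling is approximate, since feeds expose images in different ways.

import feedparser

def fetch_entries(feed_url):
    feed = feedparser.parse(feed_url)
    entries = []
    for entry in feed.entries:
        # Media RSS thumbnails, if the feed provides them.
        thumbnails = entry.get("media_thumbnail") or []
        entries.append({
            "title": entry.get("title", ""),
            "summary": entry.get("summary", ""),
            "link": entry.get("link", ""),  # "read more" target on the external site
            "thumbnail": thumbnails[0]["url"] if thumbnails else None,
        })
    return entries

for item in fetch_entries("https://example.com/feed.xml"):
    print(item["title"], item["link"])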
