How is data preprocessing with Python possible for Klipfolio - python

I am not sure this is exactly the right place to ask, but I need any information about it.
I am going to create a dashboard with Klipfolio and I want to do data preprocessing with Python and integrate it into Klipfolio, but unfortunately Klipfolio does not have any specific place to do it.
Has anyone used Klipfolio and done the data preprocessing for it with Python?

While Klipfolio does not have any Python integrations, it does connect to various types of SQL databases. One workaround is to dump your processed data from Python into a SQL database and then connect that database to Klipfolio to create data sources for building the visualizations.

You can either connect directly to the database, or, if you are running Python on a server, you can use the "REST/URL" method in Klipfolio to connect directly to your Python code and integrate its output into your dashboard.
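A minimal sketch of that REST/URL route, assuming the preprocessing runs behind a small Flask service that returns JSON (the endpoint, input file, and column names here are placeholders):

# pip install flask pandas
from flask import Flask, jsonify
import pandas as pd

app = Flask(__name__)

@app.route("/metrics")  # hypothetical endpoint a Klipfolio REST/URL data source would call
def metrics():
    # Do the preprocessing in Python, e.g. aggregate the raw records.
    raw = pd.read_csv("raw_data.csv")                               # assumed input file
    summary = raw.groupby("region", as_index=False)["sales"].sum()  # assumed columns
    # Klipfolio can parse the returned JSON array into a data source.
    return jsonify(summary.to_dict(orient="records"))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)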

Related

csv->Oracle DB autoscheduled import

I have a basic csv report that is produced by another team on a daily basis; each report has 50k rows, and the reports are saved on a shared drive every day. I also have an Oracle DB.
I need to create an auto-scheduled (or at least less manual) process to import those csv reports into the Oracle DB. What solution would you recommend for it?
I did not find such a solution in SQL Developer, since that is an upload from a file and not a query. I was thinking about a Python cron script that would run automatically on a daily basis, transform the csv report into a txt file with the needed SQL syntax (insert into...), and then connect to the Oracle DB and run that txt file as a SQL command to insert the data.
But this looks complicated.
Maybe you know another solution that you would recommend?
Create an external table to allow you to access the content of the CSV as if it were a regular table. This assumes the file name does not change day-to-day.
Create a scheduled job to import the data in that external table and do whatever you want with it.
One common blocking issue that prevents using 'external tables' is that external tables require the data to be on the computer hosting the database. Not everyone has access to those servers. Or sometimes the external transfer of data to that machine + the data load to the DB is slower than doing a direct path load from the remote machine.
SQL*Loader with direct path load may be an option: https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-sql-loader.html#GUID-8D037494-07FA-4226-B507-E1B2ED10C144 This will be faster than Python.
If you do want to use Python, then read the cx_Oracle manual Batch Statement Execution and Bulk Loading. There is an example of reading from a CSV file.
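A minimal sketch of that cx_Oracle bulk-load route, assuming a target table REPORT_DATA whose columns match the CSV (the connection details, table, and column names are placeholders):

# pip install cx_Oracle
import csv
import cx_Oracle

conn = cx_Oracle.connect("user", "password", "dbhost/orclpdb1")  # placeholder connection
cur = conn.cursor()
sql = "INSERT INTO report_data (col1, col2, col3) VALUES (:1, :2, :3)"

batch = []
with open("daily_report.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)                      # skip the header row
    for row in reader:
        batch.append(row)
        if len(batch) == 10000:       # insert in chunks to keep memory bounded
            cur.executemany(sql, batch)
            batch = []
if batch:
    cur.executemany(sql, batch)

conn.commit()
cur.close()
conn.close()

A daily cron entry (or Windows Task Scheduler job) pointing at this script would cover the scheduling part.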

Is there a way to display db2 data in grafana

We use Db2 at our company. I would like to find a way to query Db2 for data and display that data in Grafana, for example to get the number of completed transactions.
I see Grafana supports MySQL natively but not Db2. Is there a way to just add the Db2 driver/libraries?
Worst case, is writing the queries in Python and then simply displaying that recorded data with Grafana an effective solution?
Thanks
Don't know if you found what you need, but in case you didn't, you might consider using Db2 REST services and the Grafana plugin 'Simple JSON Datasource'.
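A minimal sketch of that route, assuming a small Flask service implements the Simple JSON Datasource endpoints and pulls the figure from Db2 via ibm_db (the connection string, SQL, and metric name are placeholders):

# pip install flask ibm_db
import time
from flask import Flask, jsonify
import ibm_db

app = Flask(__name__)
CONN_STR = "DATABASE=MYDB;HOSTNAME=dbhost;PORT=50000;UID=user;PWD=secret;"  # placeholder

@app.route("/")
def health():
    return "OK"  # used by the plugin's "Test connection"

@app.route("/search", methods=["POST"])
def search():
    return jsonify(["completed_transactions"])  # metrics offered to Grafana

@app.route("/query", methods=["POST"])
def query():
    conn = ibm_db.connect(CONN_STR, "", "")
    stmt = ibm_db.exec_immediate(conn, "SELECT COUNT(*) FROM transactions WHERE status = 'COMPLETED'")
    row = ibm_db.fetch_tuple(stmt)
    ibm_db.close(conn)
    # One time-series target; each datapoint is [value, epoch milliseconds].
    return jsonify([{"target": "completed_transactions",
                     "datapoints": [[row[0], int(time.time() * 1000)]]}])

if __name__ == "__main__":
    app.run(port=3003)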
In the meantime, two suitable datasources have been developed
grafana-odbc-datasource
grafana-db2-datasource
Unfortunately, both require an enterprise license.
We're currently evaluating other approaches like copying the data in a PostgreSQL database with an ETL-like job.
A generic ODBC/JDBC plugin is really needed.

Storing/Copying PostgreSQL Database to Another Server through SQLAlchemy

I know there are ways of storing data/tables from one server to another, such as the instructions provided here. However, because I use Python to scrape, create, and store data, I am wondering whether I could accomplish this process directly with SQLAlchemy. More precisely, after I store the scraped data in the database I create through SQLAlchemy on my own computer, can I simultaneously store/copy that database or those tables to another computer/server directly through SQLAlchemy? Can anyone help? Thanks so much.
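A minimal sketch of one way this could look with plain SQLAlchemy, assuming both databases are reachable from the scraping machine (the connection URLs and table name are placeholders):

from sqlalchemy import create_engine, MetaData, Table, select, insert

src_engine = create_engine("postgresql+psycopg2://user:pwd@localhost/scrape_db")    # local DB
dst_engine = create_engine("postgresql+psycopg2://user:pwd@remote-host/scrape_db")  # remote DB

metadata = MetaData()
# Reflect the existing table definition from the local database.
table = Table("scraped_items", metadata, autoload_with=src_engine)

# Recreate the same table on the remote server (no-op if it already exists).
metadata.create_all(dst_engine)

# Copy the rows across.
with src_engine.connect() as src, dst_engine.begin() as dst:
    rows = [dict(r._mapping) for r in src.execute(select(table))]
    if rows:
        dst.execute(insert(table), rows)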

Python ORM - save or read sql data from/to files

I'm completely new to managing data using databases so I hope my question is not too stupid but I did not find anything related using the title keywords...
I want to set up a SQL database to store computation results; these are performed using a Python library. My idea was to use a Python ORM like SQLAlchemy or peewee to store the results in a database.
However, the computations are done by several people on many different machines, including some that are not directly connected to the internet: it is therefore impossible to simply use one common database.
What would be useful to me would be a way of saving the data in the ORM's format to be able to read it again directly once I transfer the data to a machine where the main database can be accessed.
To summarize, I want to do:
On the 1st machine: Python data -> ORM object -> ORM.fileformat
After transfer on a connected machine: ORM.fileformat -> ORM object -> SQL database
Would anyone know if existing ORMs offer that kind of feature?
Is there a reason why some of the machines cannot be connected to the internet?
If you really can't, what I would do is setup a database and the Python app on each machine where data is collected/generated. Have each machine use the app to store into its own local database and then later you can create a dump of each database from each machine and import those results into one database.
Not the ideal solution but it will work.
Ok,
thanks to MAhsan's and Padraic's answers I was able to find out how this can be done: the CSV format is indeed easy to use for import/export from a database.
Here are examples for SQLAlchemy (import 1, import 2, and export) and peewee.
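A minimal sketch of that CSV round trip with SQLAlchemy, assuming a simple results table (the model and column names are illustrative):

import csv
from sqlalchemy import create_engine, Column, Integer, Float, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Result(Base):               # illustrative model for one computation result
    __tablename__ = "results"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    value = Column(Float)

def export_to_csv(db_url, path):
    """On the offline machine: dump the ORM table to a CSV file."""
    engine = create_engine(db_url)
    with Session(engine) as session, open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "value"])
        for r in session.query(Result):
            writer.writerow([r.id, r.name, r.value])

def import_from_csv(db_url, path):
    """On the connected machine: load the CSV back into the main database."""
    engine = create_engine(db_url)
    Base.metadata.create_all(engine)
    with Session(engine) as session, open(path, newline="") as f:
        for row in csv.DictReader(f):
            session.merge(Result(id=int(row["id"]), name=row["name"], value=float(row["value"])))
        session.commit()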

How do I perform "mysqldump" from within Python?

I would like to copy the contents of a MySQL database from one server to another using a third server. This could be done from the shell prompt using this:
mysqldump --host=hostname1 --user=username --password="mypwd" acme | mysql --host=hostname2 --user=username --password="mypwd" acme
However, how do I do this from within a Python script without using os.system or any of the other subprocess methods? I've read through the MySQLdb docs, but don't see a way to do a bulk export/import. Thank you!
If you don't want to use mysqldump from the command line (using the os.system methods), you are more or less tied to getting the data straight out of MySQL and then putting it into the other server. In that respect your question looks very similar to Get Insert Statement for existing row in MySQL.
You can use a query to get the schema creation SQL:
SHOW CREATE TABLE MyTable;
And then you need to implement a script that simply queries the data and inserts it into the other server.
You could also look into third party applications that allows you to copy data from one database to another.
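A minimal sketch of that approach with PyMySQL (hosts, credentials, database, and table name are taken from the example above; MySQLdb exposes the same DB-API calls):

# pip install pymysql
import pymysql

src = pymysql.connect(host="hostname1", user="username", password="mypwd", database="acme")
dst = pymysql.connect(host="hostname2", user="username", password="mypwd", database="acme")

with src.cursor() as s, dst.cursor() as d:
    # Recreate the table schema on the destination server.
    s.execute("SHOW CREATE TABLE MyTable")
    create_sql = s.fetchone()[1]
    d.execute("DROP TABLE IF EXISTS MyTable")
    d.execute(create_sql)

    # Copy the rows across in one batch (chunk this for large tables).
    s.execute("SELECT * FROM MyTable")
    rows = s.fetchall()
    if rows:
        placeholders = ", ".join(["%s"] * len(rows[0]))
        d.executemany("INSERT INTO MyTable VALUES ({})".format(placeholders), rows)

dst.commit()
src.close()
dst.close()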
