How does DBeaver programmatically export data as .sql files? - python

I'm building a Python app that helps authorised users retrieve sample data from production MySQL databases into cloud data stores for auditing and tracking purposes. I'm able to provide the data as CSV files.
However, the auditors now need to import this data into databases in SQL format. For this, I'm currently exporting the query results manually as SQL INSERT statements using DBeaver, so that users can ingest them by executing the .sql files. It would be great if I could use a library that enables this feature in my app.
I tried searching the DBeaver code base for any such libraries but could not identify one. Is there a smarter, Pythonic way to achieve this?
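For reference, a minimal sketch of what such an export could look like in plain Python, assuming pymysql and entirely placeholder connection details, query, and table name (this is not DBeaver's own mechanism):

```python
# Rough sketch: run the query, then write each row back out as an INSERT
# statement so the auditors can execute the resulting .sql file.
import pymysql

conn = pymysql.connect(host="prod-host", user="auditor",
                       password="***", database="shop")

query = "SELECT id, name, price FROM products LIMIT 100"   # placeholder query
target_table = "products"                                   # placeholder table

with conn.cursor() as cur, open("products_sample.sql", "w") as out:
    cur.execute(query)
    columns = [desc[0] for desc in cur.description]
    placeholders = ", ".join(["%s"] * len(columns))
    template = (f"INSERT INTO {target_table} ({', '.join(columns)}) "
                f"VALUES ({placeholders})")
    for row in cur:
        # mogrify renders the statement with properly escaped literals
        out.write(cur.mogrify(template, row) + ";\n")

conn.close()
```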

Related

csv->Oracle DB autoscheduled import

I have a basic CSV report that is produced by another team on a daily basis; each report has 50k rows, and the reports are saved to a share drive every day. I also have an Oracle DB.
I need to create an auto-scheduled (or at least less manual) process to import those CSV reports into the Oracle DB. What solution would you recommend?
I did not find such a solution in SQL Developer, since its import is an upload from a file rather than a query. I was thinking about a Python cron script that would run automatically on a daily basis, transform the CSV report into a txt file with the needed SQL syntax (INSERT INTO ...), then connect to the Oracle DB and run the txt file as SQL commands to insert the data.
But this looks complicated.
Maybe you know another solution that you would recommend?
Create an external table to allow you to access the content of the CSV as if it were a regular table. This assumes the file name does not change day-to-day.
Create a scheduled job to import the data in that external table and do whatever you want with it.
One common blocking issue with external tables is that they require the data file to be on the server hosting the database. Not everyone has access to those servers, and sometimes transferring the data to that machine plus loading it into the DB is slower than doing a direct path load from the remote machine.
SQL*Loader with direct path load may be an option: https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-sql-loader.html#GUID-8D037494-07FA-4226-B507-E1B2ED10C144 This will be faster than Python.
If you do want to use Python, read the section on Batch Statement Execution and Bulk Loading in the cx_Oracle manual. There is an example of reading from a CSV file.
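For illustration, a hedged sketch of that batch-loading approach; the table, columns, and connection details below are placeholders, not the manual's exact example:

```python
import csv
import cx_Oracle

conn = cx_Oracle.connect("scott", "tiger", "dbhost/orclpdb1")
cur = conn.cursor()
insert_sql = "INSERT INTO report_stage (col1, col2, col3) VALUES (:1, :2, :3)"

batch = []
with open("daily_report.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)                      # skip the header row
    for line in reader:
        batch.append((line[0], line[1], line[2]))
        if len(batch) == 10000:       # flush in chunks to keep memory bounded
            cur.executemany(insert_sql, batch)
            batch = []
if batch:
    cur.executemany(insert_sql, batch)
conn.commit()
conn.close()
```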

Is there a way to display db2 data in grafana

We use Db2 at our company. I would like to find a way to query Db2 for data and display that data in Grafana, for example to show the number of completed transactions.
I see Grafana supports MySQL natively but not Db2. Is there a way to just add the Db2 driver/libraries?
Worst case, is writing queries in Python and then simply displaying that recorded data with Grafana an effective solution?
Thanks
Don't know if you found what you need, but in case you didn't, you might consider using Db2 REST services and the Grafana plugin 'Simple JSON Datasource'.
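To sketch the "queries in Python + Simple JSON Datasource" idea: a tiny Flask app that runs a Db2 query and returns it in the plugin's timeseries format. The connection string, SQL, and metric name are placeholders, and the response shape follows my understanding of that plugin's /query contract:

```python
import time
import ibm_db_dbi
from flask import Flask, jsonify

app = Flask(__name__)
CONN_STR = "DATABASE=SAMPLE;HOSTNAME=db2host;PORT=50000;PROTOCOL=TCPIP;UID=user;PWD=***"

@app.route("/")
def health():
    return "OK"  # the plugin uses the root endpoint as its connection test

@app.route("/search", methods=["POST"])
def search():
    return jsonify(["completed_transactions"])  # metrics offered to Grafana

@app.route("/query", methods=["POST"])
def query():
    conn = ibm_db_dbi.connect(CONN_STR)
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM transactions WHERE status = 'COMPLETED'")
    (count,) = cur.fetchone()
    conn.close()
    now_ms = int(time.time() * 1000)
    # Simple JSON Datasource expects [{"target": ..., "datapoints": [[value, epoch_ms], ...]}]
    return jsonify([{"target": "completed_transactions",
                     "datapoints": [[count, now_ms]]}])

if __name__ == "__main__":
    app.run(port=8080)
```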
In the meantime, two suitable datasources have been developed:
grafana-odbc-datasource
grafana-db2-datasource
Unfortunately, both require an enterprise license.
We're currently evaluating other approaches, like copying the data into a PostgreSQL database with an ETL-like job.
A generic ODBC/JDBC plugin is really needed.

Pull data from Tableau Server into Pandas Dataframe

My goal is to join three datasources that are only available to me through Tableau Server (no direct database access). The data is too large to efficiently use Tableau's Data Blending.
One way forward is to pull the data from the three Tableau Server datasources into a Pandas dataframe, do the necessary manipulations, and save an Excel file to use as a datasource for a visualization in Tableau.
I have found lots of information on the TabPy module, which allows one to convert a Pandas dataframe to a Tableau Data Extract, but have not found much on how to pull data from Tableau Server in an automated fashion.
I have also read about tabcmd as a way of automating tasks, but do not have the necessary admin permissions.
Let me know if you need further information.
Tabcmd does not require admin privileges. Anyone with permissions to Server can use it, but it will respect the privileges you do have. You can install tabcmd on computers other than your server without needing extra license keys.
That being said, it's very simple to automate data downloading. Take the URL to your workbook and add ".csv" to the end of it. The .csv goes at the end of the URL path, not after any query parameters you have.
For example: http://[Tableau Server Location]/views/[Workbook Name]/[View Name].csv
Using URL parameters, you can customize the data filters and how it looks. Just make sure you put the .csv before the ? for any query parameters.
More info on this plus a few other hacks at http://www.vizwiz.com/2014/03/the-greatest-tableau-tip-ever-exporting.html.
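In Python, the same trick can be scripted with requests and pandas; a minimal sketch with a placeholder URL, and with authentication left out since it depends on how your Tableau Server handles sign-in:

```python
import io
import pandas as pd
import requests

url = "http://tableau.example.com/views/MyWorkbook/MyView.csv?Region=EMEA"

session = requests.Session()
# ... authenticate the session here as your Tableau Server requires ...
resp = session.get(url)
resp.raise_for_status()

df = pd.read_csv(io.StringIO(resp.text))
print(df.head())
```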
You can use pantab to both read from and write to Hyper extracts: https://pantab.readthedocs.io/en/latest/
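A tiny example of the pantab round trip (function names as in the pantab docs linked above; exact signatures may vary by version):

```python
import pandas as pd
import pantab

df = pd.DataFrame({"product": ["a", "b"], "sales": [10, 20]})
pantab.frame_to_hyper(df, "sample.hyper", table="sales")             # DataFrame -> .hyper
round_trip = pantab.frame_from_hyper("sample.hyper", table="sales")  # .hyper -> DataFrame
```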

User friendly SQLite database csv file import update solution

I was wondering if there is a way to allow a user to export a SQLite database as a .csv file, make some changes to it in a program like Excel, then upload that .csv file back to the table it came from using a record UPDATE method.
Currently I have a client that needed an inventory and pricing management system for their e-commerce store. I designed a database system and logic in Python 3 and SQLite. The system from a programming standpoint works flawlessly.
The problem I have is that there are some less-than-technical office staff that need to edit things like product markup within the database. Currently, I have them set up with SQLite DB Browser; from there they can edit products one at a time and write the changes to the database. They can also export tables to a .csv file for data manipulation in Excel.
The main issue is getting that .csv file back into the table it was exported from using an UPDATE method. When importing a .csv file to a table in SQLite DB Browser there is no way to perform an update import. It can only insert new rows by default, and due to my table constraints that is a problem.
I like SQLite DB Browser because it is clean and simple and does exactly what I need. However, as soon as you have to edit more than one thing at a time and filter information in more complicated ways, it starts to lack the functionality needed.
Is there a solution out there for SQLite DB Browser to tackle this problem? Is there a better software option all together to interact with a SQLite database that would give me that last bit of functionality?
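For comparison, one scripted route outside DB Browser is a small Python helper that reads the edited CSV back and applies it as UPDATEs; a minimal sketch, assuming a hypothetical products table keyed on sku:

```python
import csv
import sqlite3

conn = sqlite3.connect("inventory.db")
with open("products_edited.csv", newline="") as f:
    rows = list(csv.DictReader(f))  # CSV headers must match the placeholders below

with conn:  # runs the whole batch in a single transaction
    conn.executemany(
        "UPDATE products SET markup = :markup, price = :price WHERE sku = :sku",
        rows,
    )
conn.close()
```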
Have you tried SQLiteForExcel? However, some coding is required.
So after researching some off-the-shelf options, I found that the Devart Excel Add-ins did exactly what I needed. They are paid add-ins; however, they seem to support almost all modern databases, including SQLite. Once the add-in is installed you can connect to a database and manipulate the data returned just like normal in Excel, including bulk edits and advanced filtering; all changes are highlighted and can easily be written to the database with one click.
Overall I thought it was a pretty solid solution, and everyone seems to be very happy with it, as it made interacting with a database intuitive and non-threatening to the more technically challenged.

Python ORM - save or read sql data from/to files

I'm completely new to managing data using databases so I hope my question is not too stupid but I did not find anything related using the title keywords...
I want to setup a SQL database to store computation results; these are performed using a python library. My idea was to use a python ORM like SQLAlchemy or peewee to store the results to a database.
However, the computations are done by several people on many different machines, including some that are not directly connected to the internet: it is therefore impossible to simply use one common database.
What would be useful to me would be a way of saving the data in the ORM's format to be able to read it again directly once I transfer the data to a machine where the main database can be accessed.
To summarize, I want to do:
On the 1st machine: Python data -> ORM object -> ORM.fileformat
After transfer on a connected machine: ORM.fileformat -> ORM object -> SQL database
Would anyone know if existing ORMs offer that kind of feature?
Is there a reason why some of the machines cannot be connected to the internet?
If you really can't, what I would do is set up a database and the Python app on each machine where data is collected/generated. Have each machine use the app to store results into its own local database, and then later you can create a dump of each database from each machine and import those results into one database.
Not the ideal solution but it will work.
Ok, thanks to MAhsan's and Padraic's answers I was able to find out how this can be done: the CSV format is indeed easy to use for import/export from a database.
Here are examples for SQLAlchemy (import 1, import 2, and export) and peewee
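For completeness, a rough sketch of that CSV round trip with SQLAlchemy (1.4+-style API); the Result model, columns, and file names are illustrative only:

```python
import csv
from sqlalchemy import Column, Float, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Result(Base):
    __tablename__ = "results"
    id = Column(Integer, primary_key=True)
    label = Column(String)
    value = Column(Float)

def export_to_csv(session, path):
    """On the offline machine: dump ORM objects to a portable CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["label", "value"])
        for r in session.query(Result):
            writer.writerow([r.label, r.value])

def import_from_csv(session, path):
    """On the connected machine: read the CSV back into ORM objects."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            session.add(Result(label=row["label"], value=float(row["value"])))
    session.commit()

# usage sketch: a local SQLite file on the offline machine
engine = create_engine("sqlite:///local_results.db")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
```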
