Optimize SQLite Table Access Efficiency in Python

My app reads dozens of SQLite databases ranging in size from 1 MB to 100 MB, each with a simple table structure like:
"create table dictionary(id INTEGER PRIMARY KEY, topics, definition)"
On startup the app reads all the databases to extract the "topics" column data. With the DB configured as above, extracting the topics data is a lengthy process. It seems the whole DB file is being read just to access the relatively small "topics" column.
If I add another table within the same file holding just the topics data,
"create table dictionary(id INTEGER PRIMARY KEY, topics)"
access is quicker. However, if I make another DB file with just the topics column, access is much quicker. I don't really want to use two files to access a single DB.
My questions:
Does table creation order affect the speed of access?
Is there a way to read only the part of the file that holds the queried table/column, thus saving time and hard drive grinding?
Is it possible to read a small table/column from a huge file nearly as quickly as the small table alone from a small file? And if so, how?
Thanks,
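As a side note on the "read only part of the file" question: one option that keeps everything in a single file is an index on the topics column, which SQLite can treat as a covering index for a topics-only query, so it reads the (small) index B-tree rather than the full table rows. This is only a hedged sketch against the schema quoted above, not part of the original question; the index name idx_topics and the file name are made up for illustration.

import sqlite3

conn = sqlite3.connect("dictionary.db")  # hypothetical file name

# One-time setup: an index on "topics" lets SQLite answer a topics-only
# query from the index pages instead of the table rows, which also carry
# the much larger "definition" column.
conn.execute("CREATE INDEX IF NOT EXISTS idx_topics ON dictionary(topics)")

# This query can typically be satisfied entirely from idx_topics
# (a covering index for it), so far fewer pages of the file are read.
topics = [row[0] for row in conn.execute("SELECT topics FROM dictionary")]

conn.close()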

Related

Database, text file, or Excel file in Django

If you have some fixed data in Django, for example ten rows and five columns:
Is it better to create a database table for it and read it from the database, or is that not good, and is it better to create a dictionary and read the data from the dictionary?
In terms of speed, logic, and so on.
If the database is not a good choice, should I write the data as a dictionary in a Django view, or in a text file, or in an Excel file?
Whichever method is better, please explain why.
It depends upon the application, but if there is doubt, create a model for it and put it in the database. Here's why I say that:
If your data needs to be changed, or if you want to view it, you can easily do so in the Django Admin app.
If your application contains models which relate to this data, you can use a foreign key to reference it, rather than replicating it or using references that aren't enforced by the database.
It makes it much easier to run queries over your whole database if everything is in the database. For example, say you have a table of "houses" and each house has a "color", but you've stored the list of color names in a dictionary outside the database. Now you want a list of houses that are "Bright Blue". First you have to look in your dictionary to find the id of the color "Bright Blue", then you have to do your database lookup using the id you found. That turns what would normally be a very simple one-line query in Django into something much harder.
By the same logic, if you wanted a list of houses along with their color, this would be a very simple query if done entirely in the database but is extra work if you keep some data elsewhere.
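To make the one-line-query point concrete, here is a minimal sketch of the houses/colors example as Django models; the model and field names are hypothetical, not from the answer above.

from django.db import models

class Color(models.Model):
    name = models.CharField(max_length=50)

class House(models.Model):
    address = models.CharField(max_length=200)
    color = models.ForeignKey(Color, on_delete=models.PROTECT)

# With both tables in the database, each lookup is a one-liner:
bright_blue_houses = House.objects.filter(color__name="Bright Blue")
houses_with_colors = House.objects.select_related("color").all()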

How can I have a database with thousands of tables with varying numbers of columns, all of the same class, in the Django / SQLAlchemy ORM?

I have financial statement data on thousands of different companies. Some of the companies have data only for 2019, but for some I have decade-long data. Each company's financial statement has its own table, structured as follows (the first row holds the column headers):
lineitem   2019   2018   2017
2          1000    800    600
3206        700    300   -200
56           50    100    100
200        1200     90    700
This structure is preferred over a flatter lineitem-year-amount layout because a single query gives me output already shaped like a financial statement table. lineitem is a ForeignKey linking to the primary key of a mapping table with over 10,000 records; 3206 can, for example, mean "Debt to credit institutions". I also have a companyIndex table which holds the company ID, company name, and table name. I am able to get the data into the database and run queries using sqlite3 in Python, but advanced queries are somewhat of a challenge at times, not to mention that they can take a lot of time and not be very readable. I like the potential of using the ORM in Django or SQLAlchemy. The SQLAlchemy ORM seems to want me to know the name of the table I am about to create and how many columns it will have, but I don't know that in advance, since a script of mine parses a CSV data dump containing the company ID and financial statement data for however many years the company has operated. Also, one year later I will have to update each table with one additional year of data.
I have been watching and reading tutorials on Django and SQLAlchemy, but have not been able to try things out much in practice due to this initial problem, which is a prerequisite for succeeding in my project. I have googled and googled, and checked Stack Overflow for a solution, but not found any solved questions (which is really surprising, since I always find the solution on here).
So how can I insert the data using Django/SQLAlchemy given the structure I plan to have it fit into? How can the selected table(s) (based on company ID or company name) be objects in the ORM just like any other object, letting me select the data I want at the granularity level I want?
Ideally there is a solution to this in Django, but since I haven't found anything I suspect there is not or that how I have structured the database is insanity.
You cannot find a solution because there is none.
You are mixing the input data format with the table schema.
You establish an initial database table schema and then add data as rows to the tables.
You never touch the database table columns again, unless you decide that the schema has to be altered to support different, usually additional, functionality in the application, because, for example, at a certain point in the application's lifetime new attributes become required for the data. Not because there is more data, which simply translates to new data rows in one or more tables.
So first you decide on a proper schema for the database tables, based on the data records you will be reading or importing from somewhere.
Then you make sure the database is normalized to third normal form (3NF).
You really have to understand this. (I have only skimmed the usual write-ups on normalization rather than read them in full, but I assume they are correct.) It is fundamental database knowledge you cannot escape. After learning it properly and with practice it becomes second nature and you will apply the rules without even noticing.
Then your problems will vanish, and you can do what you want with whatever relational database or ORM you want to use.
The only remaining problem is that input data needs validation, and sometimes it is not given to us in the proper form. So the program, or an initial import procedure, or further data import operations, may need to give data some massaging before writing the proper data rows into the existing tables.
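As an illustration of what such a normalized, fixed schema might look like for the financial statement data above, here is a hedged Django sketch; the model names Company, LineItem, and FinancialFact are made up, and the field sizes are guesses.

from django.db import models

class Company(models.Model):
    name = models.CharField(max_length=200)

class LineItem(models.Model):                  # the ~10,000-record mapping table
    code = models.IntegerField(unique=True)    # e.g. 3206
    description = models.CharField(max_length=200)

class FinancialFact(models.Model):             # one row per company, line item, and year
    company = models.ForeignKey(Company, on_delete=models.CASCADE)
    line_item = models.ForeignKey(LineItem, on_delete=models.PROTECT)
    year = models.IntegerField()
    amount = models.DecimalField(max_digits=18, decimal_places=2)

    class Meta:
        unique_together = [("company", "line_item", "year")]

# A new year of data is just more rows, never new columns, e.g.:
# FinancialFact.objects.filter(company__name="Some Company", year=2019)

One query per company then returns rows that can be pivoted into the statement layout shown in the question.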

User friendly SQLite database csv file import update solution

I was wondering if there is a way to allow a user to export a SQLite database as a .csv file, make some changes to it in a program like Excel, then upload that .csv file back to the table it came from using a record UPDATE method.
Currently I have a client that needed an inventory and pricing management system for their e-commerce store. I designed a database system and logic in Python 3 and SQLite. The system from a programming standpoint works flawlessly.
The problem I have is that some less than technical office staff need to edit things like product markup within the database. Currently, I have them set up with SQLite DB Browser; from there they can edit products one at a time and write the changes to the database. They can also export tables to a .csv file for data manipulation in Excel.
The main issue is getting that .csv file back into the table it was exported from using an UPDATE method. When importing a .csv file into a table in SQLite DB Browser there is no way to perform an update import. It can only insert new rows by default, and due to my table constraints that is a problem.
I like SQLite DB Browser because it is clean and simple and does exactly what I need. However, as soon as you have to edit more than one thing at a time and filter information in more complicated ways, it starts to lack the functionality needed.
Is there a solution out there for SQLite DB Browser to tackle this problem? Is there a better software option altogether for interacting with a SQLite database that would give me that last bit of functionality?
Have you tried SQLiteForExcel? Some coding is required, however.
So after researching some off-the-shelf options I found that the Devart Excel Add-ins did exactly what I needed. They are paid add-ins; however, they seem to support almost all modern databases, including SQLite. Once the add-in is installed you can connect to a database and manipulate the returned data just like normal in Excel, including bulk edits and advanced filtering; all changes are highlighted and can easily be written to the database with one click.
Overall I thought it was a pretty solid solution, and everyone seems to be very happy with it, as it made interacting with a database intuitive and non-threatening to the more technically challenged.
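If a small script is an acceptable alternative to an off-the-shelf add-in, the UPDATE-style import the question asks about can also be done directly with Python's csv and sqlite3 modules. This is only a hedged sketch, not part of the answers above; the file names, table name, and column names (sku, price, markup) are hypothetical.

import csv
import sqlite3

conn = sqlite3.connect("inventory.db")  # hypothetical database file

with open("products_edited.csv", newline="") as f:
    rows = list(csv.DictReader(f))      # expects a header row matching the placeholders below

with conn:  # wraps the whole import in a single transaction
    conn.executemany(
        "UPDATE products SET price = :price, markup = :markup WHERE sku = :sku",
        rows,
    )

conn.close()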

Multiple pandas users connecting to SQL DB

New to Pandas & SQL. Haven't found an answer specific to this config, and not sure if standard SQL wisdom applies when introducing pandas to the mix.
Doing a school project that involves ~300 GB of data in ~6 GB .csv chunks.
The school advised syncing data via Dropbox, but this seemed impractical for a 4-person team.
So the current solution is an AWS EC2 & RDS instance (MySQL, I think it'll be, with one table).
What I wanted to confirm before we start setting it up:
If multiple users are working with (and occasionally modifying) the data, can this arrangement manage conflicts? e.g., if user A uses pandas to construct a dataframe from a query, are the records in that query frozen if user B tries to work with them?
My assumption is that the data in the frame are in memory, and the records in the SQL database are free to be modified by others until the dataframe is written back to the db, but I'm hoping that either I'm wrong or there's a simple solution here (like a random sample query for each user or something).
A pandas DataFrame object does not interact directly with the db. Once you read it in, it sits in memory locally. You would have to use a method like DataFrame.to_sql to write your changes back to the MySQL DB. For more information on reading and writing to SQL tables, see the pandas documentation for read_sql and to_sql.
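A hedged sketch of that round trip, assuming a SQLAlchemy engine and a table named measurements with a value column (all names made up for illustration):

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical RDS MySQL connection string
engine = create_engine("mysql+pymysql://user:password@rds-host:3306/schooldb")

# The query result is copied into local memory; nothing stays locked
# in MySQL once the read finishes.
df = pd.read_sql("SELECT * FROM measurements WHERE year = 2019", engine)

df["value"] = df["value"] * 1.1  # local edits only change the DataFrame

# Changes reach the database only when explicitly written back.
df.to_sql("measurements_processed", engine, if_exists="replace", index=False)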

Large text database: Convert to SQL or use as is

My python project involves an externally provided database: A text file of approximately 100K lines.
This file will be updated daily.
Should I load it into an SQL database, and deal with the diff daily? Or is there an effective way to "query" this text file?
ADDITIONAL INFO:
Each "entry", or line, contains three fields - any one of which can be used as an index.
The update is in the form of the entire database; I would have to manually generate a diff.
The queries are just looking up records and displaying the text.
Querying the database will be a fundamental task of the application.
How often will the data be queried? At one extreme, if only once per day, a sequential search might be more efficient than maintaining a database or index.
For more frequent queries alongside the daily update, you could build and maintain your own index for more efficient lookups. Most likely, though, it would be worth the negligible (if any) sacrifice in speed to use an SQL database (or another database, depending on your needs) in return for simpler and more maintainable code.
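A hedged sketch of the SQL-database route with sqlite3, assuming the three fields are tab-delimited and indexing each one so any of them can be used for lookups (the file, table, and field names are made up):

import sqlite3

conn = sqlite3.connect("entries.db")  # hypothetical file name

conn.execute("CREATE TABLE IF NOT EXISTS entries(field_a TEXT, field_b TEXT, field_c TEXT)")
for col in ("field_a", "field_b", "field_c"):
    conn.execute(f"CREATE INDEX IF NOT EXISTS idx_{col} ON entries({col})")

with conn, open("daily_dump.txt") as f:
    conn.execute("DELETE FROM entries")  # reload the full dump each day
    conn.executemany(
        "INSERT INTO entries VALUES (?, ?, ?)",
        (line.rstrip("\n").split("\t") for line in f),  # assumes tab-delimited lines
    )

# Lookups then hit an index instead of scanning the whole file:
rows = conn.execute("SELECT * FROM entries WHERE field_b = ?", ("some key",)).fetchall()
conn.close()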
What I've done before is create SQLite databases from txt files which were created from database extracts, one SQLite db for each day.
One can query across the SQLite dbs to check values and so on, and create additional tables of data.
I added an additional column of data that was the SHA1 of the text line so that I could easily identify lines that were different.
It worked in my situation and hopefully may form the barest sniff of an acorn of an idea for you.
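A hedged sketch of that per-line SHA-1 idea, with each daily dump loaded into its own SQLite file as described; the table and column names are made up.

import hashlib
import sqlite3

def load_daily_dump(txt_path, db_path):
    """Load one day's text dump into its own SQLite file, storing a
    SHA-1 of each raw line so differing lines are easy to spot later."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS entries(line TEXT, sha1 TEXT)")
    with conn, open(txt_path) as f:
        conn.executemany(
            "INSERT INTO entries VALUES (?, ?)",
            ((line, hashlib.sha1(line.encode()).hexdigest()) for line in f),
        )
    conn.close()

# Lines present today but not yesterday can then be found by attaching
# one db to the other and comparing the sha1 columns:
#   ATTACH 'yesterday.db' AS prev;
#   SELECT line FROM entries WHERE sha1 NOT IN (SELECT sha1 FROM prev.entries);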
