MS Access Relationship Questions - python

I've got a .mdb that opens with the ability to select a value from a drop-down (Setup Main window with a Program Data field). This then generates the Setup Data window. This information is all pulled from relationships between different tables, yet when I view Relationships there are no connections between the tables. The small buttons to the mid/upper right (Op Sheet, Tool Sheet, etc.) all generate a report to be printed. I also cannot find where these reports come from or the connections between them and the Setup Data window. Can anyone familiar with MS Access give some guidance on viewing these things?
My goal is to recreate these reports in a Python pandas DataFrame or similar. I've already connected to the .mdb using pyodbc and can load the tables into a DataFrame, but I'm unsure how these connections are made.

You can try holding the [Shift] key down as you open the database in Access. That may bypass the startup code (if the developer hasn't disabled that feature) and allow you to open the Form(s) in Design View. From there you can inspect the Events associated with the Form controls to see what they do (e.g., a button Click event may call DoCmd.OpenReport to open a Report).
Specifically with regard to Relationships, the developer may have simply not bothered to define them in the Relationships window. However, you can still identify the relationships between the tables by looking at the JOINs in saved queries, including the queries defined as the Record Source of a Report.
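Assuming the saved queries are accessible, Access exposes saved SELECT queries through ODBC much like views, so you can often pull one straight into pandas and then reproduce its JOINs yourself with merge. A rough sketch (the query, table, and column names are made up; adjust the driver string and path to your setup):

```python
import pyodbc
import pandas as pd

# Connection string for the Access ODBC driver; adjust the path to your file.
conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\path\to\your.mdb;"
)

# Saved SELECT queries behave like views over ODBC, so you can often read
# one directly. 'SetupData' is a hypothetical query name from your .mdb.
setup = pd.read_sql("SELECT * FROM SetupData", conn)

# Alternatively, load the underlying tables and reproduce the JOIN in pandas.
# Table and key names here are invented for illustration.
programs = pd.read_sql("SELECT * FROM Programs", conn)
tools = pd.read_sql("SELECT * FROM Tools", conn)
merged = programs.merge(tools, on="ProgramID", how="inner")

conn.close()
```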

Related

How to give all users the ability to view a base data set, while allowing them to add their own data entries to view and share with other users

Just a heads up, I am new to using web frameworks. My only experience so far comes from the VSCode, Django, and Mozilla MDN tutorials, which I'm still making my way through alongside my own project.
I'm creating a web app that has an "official" database table that all website/app users can view. But I also want to let users add their own data entries to the table, which only they can view and edit, and to let them grant other users/friends access to their created entries to expand the total number of entries available, without everyone using the site having to work out which entries are "official" and which are user-created.
So, what would be the best method for setting up user accounts to have access to the main database table and their own data set, which they can grant access for others to view?
Would this mean creating a table for each user, and if so how can this be set up automatically upon account creation?
I've read that creating a new table in the database can be cumbersome later on if lots of accounts with their own tables of data are created.
I've looked through the Django documentation, but it seems to be more focussed on user account creation and authorisation. And regarding databases, I can't find any questions/posts that relate to what I'm trying to make, especially about creating a personal list of data entries for each user upon account creation.
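To make it concrete, this is roughly the structure I have in mind, written as Django models (just a sketch, and all of the names are made up):

```python
from django.conf import settings
from django.db import models

class Entry(models.Model):
    """One table holds both "official" and user-created entries."""
    name = models.CharField(max_length=200)  # plus whatever fields the data set needs
    # A null owner marks an "official" entry that everyone can see.
    owner = models.ForeignKey(
        settings.AUTH_USER_MODEL,
        null=True, blank=True,
        on_delete=models.CASCADE,
        related_name="entries",
    )
    # Users the owner has granted access to.
    shared_with = models.ManyToManyField(
        settings.AUTH_USER_MODEL,
        blank=True,
        related_name="shared_entries",
    )

def visible_entries(user):
    """Official entries, plus the user's own and anything shared with them."""
    return Entry.objects.filter(
        models.Q(owner__isnull=True)
        | models.Q(owner=user)
        | models.Q(shared_with=user)
    ).distinct()
```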
Thank you, for taking the time to read this, even if you don't have an answer!

Flask website backend structure guidance?

I have a basic personal project website that I am using to learn some web dev fundamentals and database (SQL) fundamentals as well (if SQL is even the right technology to use?).
I have the basic skeleton up and running but as I am new to this, I want to make sure I am doing it in the most efficient and "correct" way possible.
Currently the site has a main index (landing) page, and from there the user can select one of a few subpages. For the sake of understanding, each of these subpages represents a different surf break, and each displays relevant info about that particular break, e.g. wave height, wind, and tide.
As I have already been able to successfully scrape this data, my main questions are: how would I go about inserting this data into a database for future use (historical graphs, trends)? How would I ensure data is added to the database on a regular schedule (once per day)? And how would I use data that was scraped earlier, say at noon, to be displayed at 12:05 PM rather than scraping it again?
Any other tips, guidance, or resources you can point me to are much appreciated.
This kind of data is called time series. There are specialized database engines for time series, but with a not-extreme volume of observations - (timestamp, wave height, wind, tide, which break it is) tuples - a SQL database will be perfectly fine.
Try to model your data as a table in Postgres or MySQL. Start by making a table and manually inserting some fake data in a GUI client for your database. When it looks right, you have your schema. The corresponding CREATE TABLE statement is your DDL. You should be able to write SELECT queries against your table that yield the data you want to show on your webapp. If these queries are awkward, it's a sign that your schema needs revision. Save your DDL; it's (sort of) part of your source code.
I imagine two tables: a listing of surf breaks, and a listing of observations. Each row in the listing of observations would reference the listing of surf breaks. If you're on a Mac, Sequel Pro is a decent tool for playing around with a MySQL database, and playing around is probably the best way to learn to use one.
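To make the two-table idea concrete, here is one possible DDL, wrapped in a Python/SQLite script so it runs anywhere (the column names are invented, and in Postgres or MySQL the types would differ slightly):

```python
import sqlite3

conn = sqlite3.connect("surf.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS surf_breaks (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);

CREATE TABLE IF NOT EXISTS observations (
    id            INTEGER PRIMARY KEY,
    surf_break_id INTEGER NOT NULL REFERENCES surf_breaks(id),
    observed_at   TIMESTAMP NOT NULL,
    wave_height_m REAL,
    wind_kts      REAL,
    tide_m        REAL,
    UNIQUE (surf_break_id, observed_at)  -- one observation per break per timestamp
);
""")
conn.commit()
conn.close()
```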
Next, try to insert data into the table from a Python script. Starting with fake data is fine, but mold your Python script to read from your upstream source (the result of scraping) and insert into the table. What does your scraping code output? Is it a function you can call? A CSV you can read? That'll dictate how this script works.
It'll help if this import script is idempotent: you can run it multiple times and it won't make a mess by inserting duplicate rows. It'll also help if it is incremental: once your dataset grows large, it will be very expensive to recompute the whole thing, so try to import one specific interval at a time. A command-line tool is fine. You can specify the interval as a command-line argument, or figure it out from the current time.
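A sketch of what an idempotent, incremental import could look like against the schema above, using SQLite's INSERT OR REPLACE (in Postgres you would reach for INSERT ... ON CONFLICT instead):

```python
import sqlite3
from datetime import datetime, timedelta

def import_observations(conn, rows):
    """Insert scraped rows. Re-running with the same rows is harmless because
    the UNIQUE (surf_break_id, observed_at) constraint deduplicates them."""
    conn.executemany(
        """INSERT OR REPLACE INTO observations
           (surf_break_id, observed_at, wave_height_m, wind_kts, tide_m)
           VALUES (?, ?, ?, ?, ?)""",
        rows,
    )
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect("surf.db")
    # In reality these rows would come from your scraping code; this is fake data.
    now = datetime.utcnow().replace(minute=0, second=0, microsecond=0)
    fake = [(1, (now - timedelta(hours=h)).isoformat(), 1.2, 10.0, 0.5)
            for h in range(3)]
    import_observations(conn, fake)
```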
The general problem here, loading data from one system into another on a regular schedule, is called ETL. You have a very simple case of it, and can use very simple tools, but if you want to read about it, that's what it's called. If instead you could get a continuous stream of observations - say, straight from the sensors - you would have a streaming ingestion problem.
You can use cron, the standard Linux job scheduler, to make this script run on a schedule. You'll want to know whether it ran successfully - this opens a whole other can of worms about monitoring and alerting. There are various open-source systems that will let you emit metrics from your programs (basically a "hey, this happened" tick), see those metrics plotted on graphs, and ask to be emailed/texted/paged if something is happening too frequently or too infrequently. (These systems are, incidentally, one of the main applications of time-series databases.) Don't get bogged down with this upfront, but keep it in mind. Statsd, Grafana, and Prometheus are some names to get you started Googling in this direction. You could also simply have your script send an email on success or failure, but people tend to start ignoring such emails.
You'll have written some functions to interact with your database engine. Extract these into a Python module. This forms the basis of your Data Access Layer. Reuse it in your Flask application. This will be easiest if you keep all this stuff in the same Git repository. You can use your chosen database engine's Python client directly, or you can use an abstraction layer like SQLAlchemy. This decision is controversial and people will have opinions, but just pick one. Whatever database API you pick, please learn what a SQL injection attack is and how to use user-supplied data in queries without opening yourself up to one; your database API's documentation should cover the latter.
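A sketch of what such a module might look like, assuming the SQLite file from the earlier sketches. Note the ? placeholders: that is how the sqlite3 driver keeps user-supplied values out of the SQL string (other drivers use %s, but the idea is the same):

```python
# dal.py - a minimal data access layer shared by the import script and the Flask app.
import sqlite3

DB_PATH = "surf.db"  # assumption: the same SQLite file as in the sketches above

def get_conn():
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row  # rows behave like dicts in templates
    return conn

def list_breaks(conn):
    return conn.execute("SELECT * FROM surf_breaks ORDER BY name").fetchall()

def observations_for_break(conn, break_id):
    # Parameterized query: user-supplied break_id never touches the SQL string.
    return conn.execute(
        "SELECT * FROM observations WHERE surf_break_id = ? ORDER BY observed_at",
        (break_id,),
    ).fetchall()
```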
The / page of your Flask application will be based on a SQL query like SELECT * FROM surf_breaks. Render a link to the break-specific page for each one.
You'll have another page like /breaks/n where n identifies a surf break (an integer that increments as you insert surf break rows is customary). This page will be based on a query like SELECT * FROM observations WHERE surf_break_id = n. In each case, you'll call functions in your Data Access Layer for a list of rows, and then in a template, iterate through those rows and render some HTML. There are various Javascript and Python graphing libraries you can feed this list of rows into and get graphs out of (client side or server side). If you're interested in something like a week-over-week change, you should be able to express that in one SQL query and get that dataset directly from the database engine.
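Wired into Flask, those two pages might look roughly like this (templates omitted; dal is the hypothetical module sketched above):

```python
from flask import Flask, render_template
import dal

app = Flask(__name__)

@app.route("/")
def index():
    conn = dal.get_conn()
    breaks = dal.list_breaks(conn)
    # index.html would loop over breaks and link to /breaks/<id> for each one.
    return render_template("index.html", breaks=breaks)

@app.route("/breaks/<int:break_id>")
def break_detail(break_id):
    conn = dal.get_conn()
    rows = dal.observations_for_break(conn, break_id)
    # break.html would iterate over rows and render a table or feed a graph.
    return render_template("break.html", observations=rows)
```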
For performance, try not to get into a situation where more than one SQL query happens during a page load. By default, you'll be doing some unnecessary work by going back to the database and recomputing the page every time someone requests it. If this becomes a problem, you can add a reverse proxy cache in front of your Flask app. In your case this is easy, since nothing users do to the app causes its content to change: simply invalidate the cache when you import new data.

User friendly SQLite database csv file import update solution

I was wondering if there is a way to allow a user to export a SQLite database as a .csv file, make some changes to it in a program like Excel, then upload that .csv file back to the table it came from using a record UPDATE method.
Currently I have a client that needed an inventory and pricing management system for their e-commerce store. I designed a database system and logic in Python 3 and SQLite. The system from a programming standpoint works flawlessly.
The problem I have is that there are some less than technical office staff who need to edit things like product markup within the database. Currently, I have them set up with SQLite DB Browser; from there they can edit products one at a time and write the changes to the database. They can also export tables to a .csv file for data manipulation in Excel.
The main issue is getting that .csv file back into the table it was exported from using an UPDATE method. When importing a .csv file to a table in SQLite DB Browser there is no way to perform an update import; it can only insert new rows by default, and due to my table constraints that is a problem.
I like SQLite DB Browser because it is clean and simple and does exactly what I need. However, as soon as you have to edit more than one thing at a time or filter information in more complicated ways, it starts to lack the functionality needed.
Is there a solution out there for SQLite DB Browser to tackle this problem? Is there a better software option all together to interact with a SQLite database that would give me that last bit of functionality?
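For reference, the kind of update import I want to hand them would amount to something like this in Python (a rough sketch; the table and column names are stand-ins for my real schema):

```python
import csv
import sqlite3

conn = sqlite3.connect("inventory.db")

# Each CSV row updates the matching database row instead of inserting a new one.
with open("products_edited.csv", newline="") as f:
    for row in csv.DictReader(f):
        conn.execute(
            "UPDATE products SET price = ?, markup = ? WHERE sku = ?",
            (row["price"], row["markup"], row["sku"]),
        )

conn.commit()
conn.close()
```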
Have you tried SQLiteForExcel? However, some coding is required.
So after researching some off-the-shelf options, I found that the Devart Excel Add-ins did exactly what I needed. They are paid add-ins; however, they seem to support almost all modern databases, including SQLite. Once the add-in is installed you can connect to a database and manipulate the data returned just like normal in Excel, including bulk edits and advanced filtering; all changes are highlighted and can easily be written to the database with one click.
Overall I thought it was a pretty solid solution, and everyone seems to be very happy with it, as it made interacting with a database intuitive and non-threatening to the more technically challenged.

Dynamic database tables in django

I am working on a project which requires me to create a table for every user who registers on the website, using that user's username as the table name. The columns in the table are the same for every user.
While researching I found this: Django dynamic model fields. I am not sure how to use django-mutant to accomplish this. Also, is there any way I could do this without using any external apps?
PS: The backend I am using is MySQL.
An interesting question, which might be of wider interest.
Creating one table per user is a maintenance nightmare. You should instead define a single table to hold all users' data, and then use the database's capabilities to retrieve only those rows pertaining to the user of interest (after checking permissions if necessary, since it is not a good idea to give any user unrestricted access to another user's data without specific permissions having been set).
Adopting your proposed solution requires that you construct SQL statements containing the relevant user's table name. Successive queries to the database will mostly be different, and this will slow the work down, because every SQL statement has to be "prepared" (the syntax has to be checked, the names of tables and columns have to be verified, the requesting user's permission to access the named resources has to be authorized, and so on).
By using a single table (model) the same queries can be used repeatedly, with parameters used to vary specific data values (in this case the name of the user whose data is being sought). Your database work will move along faster, you will only need a single model to describe all users' data, and database management will not be a nightmare.
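Concretely, the single-table approach is just one model with a foreign key, and the "parameter" is the user you filter on (a sketch; field names are invented):

```python
from django.conf import settings
from django.db import models

class UserData(models.Model):
    # One table for everyone; rows are distinguished by this foreign key.
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    # ...the columns that would otherwise go in each per-user table...
    payload = models.TextField()

# In a view, the same query serves every user; only the parameter changes:
#     UserData.objects.filter(user=request.user)
```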
A further advantage is that Django (which you appear to be using) has an extensive user-based permission model, and can easily be used to authenticate user login (once you know how). These advantages are so compelling I hope you will recant from your heresy and decide you can get away with a single table (and, if you are planning to use standard Django logins, a relationship with the User model that comes as a central part of any Django project).
Please feel free to ask more questions as you proceed. It seems you are new to database work, and so I have tried to present an appropriate level of detail. There are many pitfalls such as this if you cannot access knowledgeable advice. People on SO will help you.
This page shows how to create a model and install its table to the database on the fly. So, you could use type('table_with_username', (models.Model,), attrs) to create a model and use django.core.management to install it to the database.
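For completeness, a sketch of that dynamic approach (the app and field names are illustrative, and the single-table advice above still stands):

```python
from django.db import connection, models

def create_user_table(username):
    """Build a model class at runtime and create its table, bypassing migrations."""
    attrs = {
        "__module__": "myapp.models",  # assumption: your app is named myapp
        "value": models.CharField(max_length=200),
    }
    model = type(f"table_{username}", (models.Model,), attrs)
    with connection.schema_editor() as editor:
        editor.create_model(model)
    return model
```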

PyQt Automatic Repeating Forms

I'm currently attempting to migrate a legacy VBA/Microsoft Access application to Python and PyQt. I've had no problems migrating any of the logic, and most of the forms have been a snap, as well. However, I've hit a problem on the most important part of the application--the main data-entry form.
The form is basically a row of text boxes corresponding to fields in the database. The user simply enters data into a field, tabs to the next, and repeats. When he comes to the end of the record/row, he tabs again, and the form automatically creates a new blank row for him to start entering data in again. (In actuality, it displays a "blank" row below the current new record, which the user can also click into to start a new record.) It also allows the user to scroll up and down to see all of the current subset of records he's working on.
Is there a way to replicate this functionality in PyQt? I haven't managed to find a way to get Qt to do this easily. Access takes care of it automatically; no code outside the form is required. Is it that easy in PyQt (or even close), or is this something that's going to need to be programmed from scratch?
You should look into the QSqlTableModel and QTableView classes. QSqlTableModel offers an abstraction of a relational table that can be used inside one of the Qt view classes, a QTableView for example. The functionality you describe can be implemented with moderate effort just by using these two classes.
QSqlTableModel also supports editing of database fields.
My guess is that the only functionality you will have to implement manually is the "Tab" at the end of the table to create a new row, if you want to keep that.
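A minimal sketch of that pair in PyQt5, assuming a SQLite file data.db with a table named records:

```python
import sys
from PyQt5.QtWidgets import QApplication, QTableView
from PyQt5.QtSql import QSqlDatabase, QSqlTableModel

app = QApplication(sys.argv)

# Assumption: a SQLite file 'data.db' containing a table 'records'.
db = QSqlDatabase.addDatabase("QSQLITE")
db.setDatabaseName("data.db")
if not db.open():
    sys.exit("Could not open database")

model = QSqlTableModel()
model.setTable("records")
model.setEditStrategy(QSqlTableModel.OnFieldChange)  # write edits back immediately
model.select()

view = QTableView()
view.setModel(model)
view.show()

# To mimic Access's trailing blank row, subclass QTableView, catch Tab on the
# last cell, and call model.insertRow(model.rowCount()).
sys.exit(app.exec_())
```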
I don't know much about Access, but using the ODBC driver you should be able to use the actual Access database for your development or testing (there is some older information here), though you might want to consider moving to SQLite, MySQL, or another actual SQL database.
