Is it possible to store the result of an executed fetchall() query from psycopg2 in memory, so that I don't have to run it again against my database with a million entries?
What's the best way to store the rows locally on a machine so that Python can dissect the list at any time?
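For reference, a rough sketch of one common approach: run the query once and pickle the result list to disk, so later runs can just reload it (the table name and connection parameters here are made up):

import pickle
import psycopg2

conn = psycopg2.connect(dbname="mydb", user="me")  # placeholder credentials
cur = conn.cursor()
cur.execute("SELECT * FROM big_table")  # hypothetical table
rows = cur.fetchall()  # a list of tuples, held entirely in memory
cur.close()
conn.close()

# Persist the result so later runs can skip the query entirely.
with open("rows.pickle", "wb") as f:
    pickle.dump(rows, f)

# Any time later, in any Python process:
with open("rows.pickle", "rb") as f:
    rows = pickle.load(f)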
I'm having an issue connecting to a local Cassandra DB using the Python Cassandra driver. Intermittently, my CREATE TABLE queries are not executed.
There is also an issue where I can see the table via DESC TABLE in cqlsh, but executing a SELECT query tells me that the table doesn't exist.
Restarting the virtual machine where Cassandra is installed seems to fix this for a while.
What is the cause of this, and how do I fix it?
I have a complex SQL script that updates several tables, and a separate Python script that creates reports. I use MS SQL Server for SQL and VSCode for Python, and I have to switch applications to run each file independently.
Not sure if this is possible, but could you run SQL code and Python code in separate code chunks within the same file in VSCode? I don't mean executing the SQL script from Python (that doesn't work because the query has dozens of temp tables and multiple outputs), but running them independently in the same file, so that I only run one file for my outputs.
All ideas are welcome. Thanks.
Is there an in-memory DB for Python similar to HSQLDB? MySQL is the DB the application uses, and for running end-to-end test cases we currently bring up a clone of the actual DB, which adds some delay and a couple of manual steps.
I have tried SQLite3 but ran into trouble running the DDL queries generated for MySQL.
What are good options for bringing up a temporary DB to run all test cases and shutting it down after test execution?
Thank you
MySQL has an in-memory storage engine (https://dev.mysql.com/doc/refman/5.5/en/memory-storage-engine.html). I've never used it, but I guess it will help you run the tests quickly.
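I haven't tested this myself, but creating a MEMORY table from Python might look roughly like this (the mysql-connector-python driver and all connection details are assumptions):

import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="test",
                               password="test", database="testdb")
cur = conn.cursor()

# ENGINE=MEMORY keeps the table's data in RAM; it is lost when the
# server restarts, which is usually acceptable for throwaway test data.
cur.execute("""
    CREATE TABLE session_cache (
        id INT PRIMARY KEY,
        payload VARCHAR(255)
    ) ENGINE=MEMORY
""")
conn.commit()
conn.close()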
So I've been looking all over the internet and have only found resources/tutorials on how to connect to a MySQL server, but my question is: how do you host a MySQL server on both Windows and Linux?
I am not quite sure what you are asking, but if the question is how to run a database for Python independent of the OS, consider using SQLite.
From the link (emphasis mine)
SQLite is an embedded SQL database engine. Unlike most other SQL databases, SQLite does not have a separate server process. SQLite reads and writes directly to ordinary disk files. A complete SQL database with multiple tables, indices, triggers, and views is contained in a single disk file. The database file format is cross-platform - you can freely copy a database between 32-bit and 64-bit systems or between big-endian and little-endian architectures. These features make SQLite a popular choice as an Application File Format. Think of SQLite not as a replacement for Oracle but as a replacement for fopen().
So it allows you to use a database from your Python code without the hassle of running a server or setting something up locally.
Note that an SQLite database can also live entirely in memory if you want to avoid writing to disk.
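For example, a quick sketch of an in-memory SQLite database (table and data here are made up):

import sqlite3

# ":memory:" creates a throwaway database that lives only in RAM;
# it disappears when the connection is closed.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()
print(cur.execute("SELECT * FROM users").fetchall())  # [(1, 'alice')]
conn.close()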
Unless you have a very specific reason to start the server from Python (i.e. you want to programmatically do things you wouldn't do from the command line), I think the best you can do is install an instance of MySQL server on your local machine, run it, and then connect to it from Python.
Bear in mind that your local installation of MySQL will be running on localhost (127.0.0.1).
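Once the server is running, connecting from Python looks something like this (the driver choice and credentials are placeholders; any DB-API driver such as pymysql works similarly):

import mysql.connector

conn = mysql.connector.connect(
    host="127.0.0.1",   # your local MySQL instance
    port=3306,
    user="root",
    password="your_password",
    database="mydb",
)
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())
conn.close()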
I need to migrate data that I created in SQLite3 to a MySQL database on my website. The website is on a hosted server, and I have remote access to the MySQL database. I initially thought it would be easy, but I'm not finding any good information on it, and everything I read seems to imply that you need to dump the SQLite3 file, convert it to a MySQL dump file using messy scripts, and then import that into MySQL.
(example: Quick easy way to migrate SQLite3 to MySQL?)
My question: wouldn't it be better to read the data and import it straight into MySQL from a script? I haven't used MySQL with Python at all, but it seems intuitive that fewer steps mean fewer chances for things to be misread. I am also trying to save a little time by not having to understand how a MySQL dump file works.
If you have a script (Python if possible), a tool, or a link that will help me out, that would be great, but my main concern is how to go about doing it. I can't realistically check everything (otherwise I would just copy and paste), so I want to be sure that I am going about it the right way.
I am using Django; perhaps there is a Django-specific method for this, but I haven't been able to find one.
The reason the "messy" scripts are required is that it's generally a difficult problem to solve.
If you are lucky there won't be too many schema incompatibilities between the databases.
These may help
db_dump.py
dbpickle.py
What is your favorite solution for managing database migrations in django?
Have you tried using django-admin's dumpdata and loaddata?
e.g. dump the sqlite3 db:
python manage.py dumpdata > mydata.json
then switch your database settings to MySQL, recreate the tables, and load:
python manage.py loaddata mydata.json
dumpdata dumps JSON by default, but you can set the format to XML too.
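The "switch to MySQL" step is just a change to your settings; a sketch of the DATABASES setting with placeholder credentials:

# settings.py
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "mysite",
        "USER": "me",
        "PASSWORD": "secret",
        "HOST": "example.com",
        "PORT": "3306",
    }
}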
Alternatively, you can read the whole SQLite3 db using the Python DB-API, SQLAlchemy, or the Django ORM, and dump it into the MySQL db.
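A rough sketch of that alternative using the plain DB-API (table name, columns, and connection details are all made up; your real schema will differ):

import sqlite3
import mysql.connector  # any MySQL DB-API driver works

src = sqlite3.connect("db.sqlite3")
dst = mysql.connector.connect(host="example.com", user="me",
                              password="secret", database="mysite")
dst_cur = dst.cursor()

# Assumes the destination table already exists with a compatible schema,
# e.g. created by manage.py syncdb; "myapp_entry" is a hypothetical table.
for row in src.execute("SELECT id, title, body FROM myapp_entry"):
    dst_cur.execute(
        "INSERT INTO myapp_entry (id, title, body) VALUES (%s, %s, %s)",
        row,
    )

dst.commit()
src.close()
dst.close()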