I am trying to use SQLAlchemy to connect to a MySQL database. I have set charset=utf8&use_unicode=0 in the connection URL. This worked with almost all databases, but not with one particular one. I believe it is because that server has the 'init-connect' variable set to 'SET NAMES latin2;', and I have no privileges to change that.
It works for me if I send an explicit SET NAMES utf8 query; however, if there is a temporary disconnection, then after reconnecting my program breaks again because it gets latin2-encoded data from the server.
Is it possible to create some hook to always send the SET NAMES when sqlalchemy connects? Or any other way to solve this problem?
Sounds like what you want is a custom PoolListener. This SO answer explains how to write one in the context of SQLite's PRAGMA foreign_keys=ON:
Sqlite / SQLAlchemy: how to enforce Foreign Keys?
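For illustration, here is a rough sketch using SQLAlchemy's event API, which is the modern equivalent of a PoolListener; the connection URL is a placeholder:

from sqlalchemy import create_engine, event

# Placeholder URL; substitute your real driver and credentials.
engine = create_engine("mysql://user:password@host/dbname?charset=utf8")

# Runs for every new DBAPI connection, including connections created
# after a temporary disconnection, so the session charset is always set.
@event.listens_for(engine, "connect")
def set_names(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("SET NAMES utf8")
    cursor.close()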
I have a table with 30k clients, with the ClientID as primary key.
I'm getting data from API calls and inserting them into the table using python.
I'd like to find a way to insert rows for new clients and, if the ClientID that comes with the API call already exists in the table, update the existing record with the updated information for this client.
Thanks!!
A snippet of code would be nice to show us what exactly you are doing right now. I presume you are using an ORM like SQLAlchemy? If so, then you are looking at doing an UPSERT type of operation.
That is already answered HERE
Alternatively, if you are executing raw queries without an ORM, you could write a custom procedure and pass the required parameters. HERE is a good write-up on how that is done in MSSQL under high concurrency. You could use it as a starting point for understanding and then rewrite it for PostgreSQL.
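As a sketch of what the UPSERT looks like with SQLAlchemy Core against PostgreSQL (the table and column names here are assumptions), the dialect's insert construct supports ON CONFLICT directly:

from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.dialects.postgresql import insert

# Hypothetical setup: reflect the existing clients table.
engine = create_engine("postgresql://user:pass@localhost/mydb")
metadata = MetaData()
clients = Table("clients", metadata, autoload_with=engine)

def upsert_client(conn, row):
    # Insert the client row, or update it if ClientID already exists.
    stmt = insert(clients).values(**row)
    update_cols = {c: stmt.excluded[c] for c in row if c != "ClientID"}
    stmt = stmt.on_conflict_do_update(
        index_elements=["ClientID"],
        set_=update_cols,
    )
    conn.execute(stmt)

with engine.begin() as conn:
    upsert_client(conn, {"ClientID": 123, "name": "Acme", "tier": "gold"})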
How to dump/load data from a python test response to a database table (SQL)?
Assuming I know nothing, can you guide me or list all the possible ways to dump/load/store data from a pytest response to a SQL table?
Below are the high-level steps you should take to load data into a SQL database. The lack of context makes it impractical to go into further detail.
1. Set up a database (choose one, install it, configure it).
2. (Usually) change the database schema to suit your needs. (Could also happen after step 3.)
3. Connect to the database from wherever you have the data.
4. Insert the data into the database.
Maybe this example will help.
I don't know what you mean by "responses".
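To make steps 3 and 4 concrete, here is a minimal sketch assuming SQLite and a made-up results table:

import sqlite3

# Hypothetical data extracted from a test response.
response_data = {"id": 1, "status": "passed", "duration": 0.42}

conn = sqlite3.connect("results.db")  # step 3: connect to the database
conn.execute(
    "CREATE TABLE IF NOT EXISTS results "
    "(id INTEGER PRIMARY KEY, status TEXT, duration REAL)"
)
conn.execute(  # step 4: insert the data
    "INSERT INTO results (id, status, duration) VALUES (?, ?, ?)",
    (response_data["id"], response_data["status"], response_data["duration"]),
)
conn.commit()
conn.close()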
When creating an MLflow tracking server and specifying that a SQL Server database is to be used as the backend store, MLflow creates a bunch of tables within the dbo schema. Does anyone know if it is possible to specify a different schema in which to create these tables?
It is possible to alter mlflow/mlflow/store/sqlalchemy_store.py to change the schema in which the tables are created.
It is very likely that this is the wrong solution for you, since you would go out of sync with the open-source code and lose newer features that touch this file, unless you maintain the fork yourself. Could you maybe reply with your use case?
You can use Postgres URI options:
Sample Postgres URI with options:
"postgresql://postgres:postgres@localhost:5432/postgres?options=-csearch_path%3Ddbo,mlflow_schema"
In your MLflow code:
mlflow.set_tracking_uri("postgresql://postgres:postgres@localhost:5432/postgres?options=-csearch_path%3Ddbo,mlflow_schema")
Don't forget to create the 'mlflow_schema' schema first.
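For instance, a one-off sketch with SQLAlchemy (the URL mirrors the sample above):

from sqlalchemy import create_engine, text

# Create the schema before pointing MLflow at it.
engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
with engine.begin() as conn:
    conn.execute(text("CREATE SCHEMA IF NOT EXISTS mlflow_schema"))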
how-to-specify-schema-in-psycopg2-connection-method
I'm using MSSQL Server as the backend store. I could use a schema other than dbo by specifying the default schema for the SQL Server user that MLflow uses.
In my case, if the MLflow tables (e.g. experiments) already exist in dbo, then those tables will be used. If not, MLflow will create them in the user's default schema.
I know there are ways of copying data/tables from one server to another, such as the instructions provided here. However, since I use Python to scrape, create, and store data, I am wondering whether I could do this directly with SQLAlchemy. More precisely, after I store the scraped data in the database I created through SQLAlchemy on my own computer, can I simultaneously store/copy those databases/tables to another computer/server directly through SQLAlchemy? Can anyone help? Thanks so much.
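Roughly, what I am picturing is something like this (just a sketch; the connection URLs and table name are made up, and it leans on pandas rather than pure SQLAlchemy):

import pandas as pd
from sqlalchemy import create_engine

# Placeholder URLs for my local and remote MySQL servers.
local = create_engine("mysql+pymysql://user:pass@localhost/scraped")
remote = create_engine("mysql+pymysql://user:pass@other-server/scraped")

# Pull the local table and push it to the remote server.
df = pd.read_sql_table("listings", local)
df.to_sql("listings", remote, if_exists="replace", index=False)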
I have a MySQL server providing access to both a database for the Django ORM and a separate database called "STATES" that I built. I would like to query tables in my STATES database and return results (typically a couple of rows) to Django for rendering, but I don't know the best way to do this.
One way would be to use Django directly. Maybe I could move the relevant tables into the Django ORM database? I'm nervous about doing this because the STATES database contains large tables (10 million rows x 100 columns), and I worry about deleting that data or corrupting it in some other way (I'm not very experienced with Django). I also imagine I should avoid creating a separate connection for each query, so should I use the Django connection to query the STATES tables?
Alternatively, I could treat the STATES database as if it existed on a totally different server: import SQLAlchemy, create a connection, query STATE.table, return the result, and close the connection.
Which is better, or is there another path?
The docs describe how to connect to multiple databases by adding another database ("state_db") to DATABASES in settings.py. I can then do the following:
from django.db import connections
def query(lname):
    c = connections['state_db'].cursor()
    c.execute("SELECT last_name FROM STATE.table WHERE last_name=%s;", [lname])
    rows = c.fetchall()
    ...
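For reference, the DATABASES entry behind connections['state_db'] might look roughly like this (the engine and credentials are made up):

# settings.py (sketch): a second entry alongside the default database.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'django_db',
        'USER': 'dbuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
    },
    'state_db': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'STATES',
        'USER': 'dbuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
    },
}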
This is slower than I expected, but I'm guessing it is close to optimal because it reuses the open connection and stays within Django without adding extra complexity.