How to load a remote database into cache, in Python?

I want to load a database from a remote server to my memory/cache, so that I don't have to make network calls every time I want to use the database.
I am doing this in Python and the database is Cassandra. How should I do it? I have heard about memcached and beaker. Which library is best for this purpose?

If you are trying to get some data from a database, use the pyodbc module. This module can be used to download data from a given table in a database. More details can be found there.
An example how to connect:
import pyodbc

cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=SQLSRV01;'
                      'DATABASE=DATABASE;UID=USER;PWD=PASSWORD')
cursor = cnxn.cursor()
cursor.execute("SQL_QUERY")
for row in cursor.fetchall():
    print(row)
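The original question is about keeping results in memory so repeated lookups avoid network calls. Independently of the driver, the idea can be sketched as query-result memoization (the cache dict, `cached_query` helper, and stub cursor below are illustrative assumptions, not part of any library; in production you would likely use memcached or beaker with an expiry policy):

```python
# Simple in-memory cache of query results, keyed by query text.
_cache = {}

def cached_query(cursor, query):
    """Return rows for `query`, hitting the database only on a cache miss."""
    if query not in _cache:
        cursor.execute(query)
        _cache[query] = cursor.fetchall()
    return _cache[query]

# A stub cursor standing in for a real pyodbc/Cassandra cursor, so the
# caching behaviour can be demonstrated without a database connection.
class StubCursor:
    def __init__(self, rows):
        self.rows = rows
        self.executions = 0
    def execute(self, query):
        self.executions += 1
    def fetchall(self):
        return self.rows

cur = StubCursor([("row1",), ("row2",)])
cached_query(cur, "SELECT * FROM t")   # first call hits the "database"
cached_query(cur, "SELECT * FROM t")   # second call is served from cache
```

After the two calls above, `cur.executions` is 1: the second lookup never touched the cursor.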

Related

Is it possible to import data from AWS DynamoDB table into Microsoft SQL server?

I want to import data from dynamodb table into SQL Server.
I use Python boto3.
Basically, you need to use pymssql:
A simple database interface for Python that builds on top of FreeTDS
to provide a Python DB-API (PEP-249) interface to Microsoft SQL
Server.
You create a connection:
conn = pymssql.connect(server, user, password, "tempdb")
cursor = conn.cursor(as_dict=True)
Then you can use execute or executemany to build an INSERT statement.
It is better to save this data in a CSV file and then use a BULK INSERT statement, as that is faster when you are working with a large amount of data.
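The glue between the two services can be sketched as a transformation step (`items_to_insert` is a hypothetical helper; a real script would obtain `items` from a boto3 `scan` and hand the result to pymssql's `executemany`):

```python
def items_to_insert(table, columns, items):
    """Turn DynamoDB-style items (dicts) into an INSERT statement plus
    parameter tuples suitable for cursor.executemany (hypothetical helper).
    Attributes missing from an item become NULL (None)."""
    placeholders = ", ".join(["%s"] * len(columns))
    sql = "INSERT INTO {} ({}) VALUES ({})".format(
        table, ", ".join(columns), placeholders
    )
    rows = [tuple(item.get(col) for col in columns) for item in items]
    return sql, rows

items = [
    {"id": 1, "name": "alpha"},
    {"id": 2},  # no "name" attribute -> NULL
]
sql, rows = items_to_insert("dbo.my_table", ["id", "name"], items)
# With a real connection: cursor.executemany(sql, rows); conn.commit()
```

pymssql uses the `%s` parameter style, which is why the placeholders are built that way.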

Python script, query to GCP postgresql db from local machine?

I have a GCP workspace, complete with a Postgresql database. On a frequent basis, I need to insert and/or select rows from the db. I've been searching for a python script that will (A) connect to GCP, then (B) connect to the db, then (C) query a specific table. I'd prefer not to hard code my credentials if possible, that way I could share this script with others on my team, and provided that they were authorized users, it would run without any hiccups.
Does anyone have such a script?
I believe I just answered your question here: Access GCP Cloud SQL from AI notebook?
Using the Cloud SQL Python Connector which was mentioned in the other post, you can run a script that looks something like this to connect to your database and run a query:
# Copyright 2021 Google LLC.
# SPDX-License-Identifier: Apache-2.0
import os
from google.cloud.sql.connector import connector

# Connect to the database
conn = connector.connect(
    os.getenv("INSTANCE_CONNECTION_NAME"),
    "pg8000",
    user=os.getenv("DB_USER"),
    password=os.getenv("DB_PASSWORD"),
    db=os.getenv("DB_NAME")
)

# Execute a query
cursor = conn.cursor()
cursor.execute("SELECT * from my_table")

# Fetch the results
result = cursor.fetchall()

# Do something with the results
for row in result:
    print(row)
The instance connection name should be in the format project:region:instance. If you don't want to hard code database credentials, you can read them in from environment variables instead.
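Since the project:region:instance format is easy to get wrong, a small sanity check can fail fast before attempting a connection (a hypothetical helper, not part of the Cloud SQL connector):

```python
import re

def looks_like_instance_connection_name(value):
    """Return True if `value` matches the project:region:instance shape
    expected by the Cloud SQL connector (hypothetical validation helper)."""
    return bool(re.fullmatch(r"[^:]+:[^:]+:[^:]+", value or ""))

looks_like_instance_connection_name("my-project:us-central1:my-instance")  # True
looks_like_instance_connection_name("my-instance")                          # False
```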

Why should we set local_infile=1 in SQLAlchemy to load a local file? "Load file not allowed" issue in SQLAlchemy

I am using sqlalchemy to connect to MySQL database and found a strange behavior.
If I run the query
LOAD DATA LOCAL INFILE 'C:\\\\Temp\\\\JaydenW\\\\iata_processing\\\\icer\\\\rename\\\\ICER_2017-10-12T090337Z023870.csv'
it pops an error:
sqlalchemy.exc.InternalError: (pymysql.err.InternalError) (1148, u'The used command is not allowed with this MySQL version')
[SQL: u"LOAD DATA LOCAL INFILE 'C:\\\\Temp\\\\JaydenW\\\\iata_processing\\\\icer\\\\rename\\\\ICER_2017-10-12T090337Z023870.csv' INTO TABLE genie_etl.iata_icer_etl LINES TERMINATED BY '\\n' IGNORE 1 Lines (rtxt);"]
(Background on this error at: http://sqlalche.me/e/2j85)
And I found that the reason is that I need to set the parameter as:
args = "mysql+pymysql://" + username + ":" + password + "@" + hostname + "/" + database + "?local_infile=1"
If I use the official MySQL connection library, I do not need to do so.
myConnection = MySQLdb.connect(host=hostname, user=username, passwd=password, db=database)
Can anyone help me to understand the difference between the two mechanisms?
The reason is that the mechanisms use different drivers.
In SQLAlchemy you appear to be using the pymysql engine, which uses the PyMySQL Connection class to create the DB connection. That one requires the user to explicitly pass the local_infile parameter if they want to use the LOAD DATA LOCAL command.
The other example uses MySQLdb, which is basically a wrapper around the MySQL C API (and to my knowledge not the official connection library; that would be MySQL Connector Python, which is also available on SQLAlchemy as mysqlconnector). This one apparently creates the connection in a way that the LOAD DATA LOCAL is enabled by default.
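Assembling the pymysql URL with `local_infile` enabled is mostly string plumbing; a minimal sketch (`build_mysql_url` is a hypothetical helper, not part of SQLAlchemy; percent-encoding the credentials keeps characters like '@' in a password from breaking the URL):

```python
from urllib.parse import quote_plus

def build_mysql_url(username, password, hostname, database, local_infile=True):
    """Build a SQLAlchemy URL for the pymysql driver (hypothetical helper).

    Appending ?local_infile=1 is what enables LOAD DATA LOCAL INFILE
    when the connection is created by PyMySQL.
    """
    url = "mysql+pymysql://{}:{}@{}/{}".format(
        quote_plus(username), quote_plus(password), hostname, database
    )
    if local_infile:
        url += "?local_infile=1"
    return url

url = build_mysql_url("user", "p@ss", "db.example.com", "genie_etl")
# → "mysql+pymysql://user:p%40ss@db.example.com/genie_etl?local_infile=1"
```

The returned string would then be passed to `sqlalchemy.create_engine`.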

Overwrite a database

I have an online database and connect to it by using MySQLdb.
db = MySQLdb.connect(......)
cur = db.cursor()
cur.execute("SELECT * FROM YOUR_TABLE_NAME")
data = cur.fetchall()
Now, I want to write the whole database to my localhost (overwrite). Is there any way to do this?
Thanks
If I'm reading you correctly, you have two database servers, A and B (where A is a remote server and B is running on your local machine) and you want to copy a database from server A to server B?
In all honesty, if this is a one-off, consider using the mysqldump command-line tool, either directly or calling it from python.
If not, the last answer on http://bytes.com/topic/python/answers/24635-dump-table-data-mysqldb details the SQL needed to define a procedure to output tables and data, though this may well miss subtleties that mysqldump handles.
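Calling mysqldump from Python is mostly a matter of assembling the command line; a minimal sketch under stated assumptions (`build_dump_command` is a hypothetical helper, and the host and user names are placeholders):

```python
import subprocess

def build_dump_command(host, user, password, database):
    """Assemble a mysqldump invocation (hypothetical helper)."""
    return [
        "mysqldump",
        "--host", host,
        "--user", user,
        "--password=" + password,  # mysqldump takes the password inline after '='
        database,
    ]

cmd = build_dump_command("remote.example.com", "me", "secret", "mydb")
# To actually run it, redirect stdout to a dump file:
# with open("dump.sql", "w") as f:
#     subprocess.run(cmd, stdout=f, check=True)
# The dump can then be restored locally with: mysql -u me -p mydb < dump.sql
```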

An example of how I directly connect to Cassandra by CQL

I need a full example that shows me how I can connect to Cassandra by CQL. I am working on a project that should be written in (Python) Django, and I want to do some things directly, not only using cqlengine as my project driver.
Thanks for your help.
This is how I made Python communicate with Cassandra.
First, you need to establish a connection:
import cql

con = cql.connect(host="127.0.0.1", port=9160, keyspace="testKS")
Then generate a cursor and run a query:
cur = con.cursor()
cur.execute("select * from TestCF")
And I can see the result with:
cur.fetchone()
or
cur.fetchall()
This queries nicely and doesn't need an extra library such as pycassa or cqlengine.
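The cql library follows the Python DB-API convention, where rows are pulled from the cursor rather than from the return value of execute. A sketch with a stub cursor (`FakeCursor` is purely illustrative, not part of any driver) shows the fetchone/fetchmany/fetchall contract:

```python
class FakeCursor:
    """Stub cursor illustrating the DB-API fetch contract (illustrative only)."""
    def __init__(self, rows):
        self._rows = list(rows)
        self._pos = 0

    def fetchone(self):
        # Return the next row, or None once the result set is exhausted
        if self._pos >= len(self._rows):
            return None
        row = self._rows[self._pos]
        self._pos += 1
        return row

    def fetchmany(self, size):
        # Return up to `size` rows, advancing the position
        batch = self._rows[self._pos:self._pos + size]
        self._pos += len(batch)
        return batch

    def fetchall(self):
        # Return all remaining rows
        rest = self._rows[self._pos:]
        self._pos = len(self._rows)
        return rest

cur = FakeCursor([("a", 1), ("b", 2), ("c", 3)])
first = cur.fetchone()   # → ("a", 1)
rest = cur.fetchall()    # → [("b", 2), ("c", 3)]
```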
