How to make queries to a remote Postgres database in different views in Django?

I'm building a project where I need to access remote databases and get data from them.
I'm connecting to a remote Postgres database and getting a list of all tables in my class-based view like so:
try:
    # connect to the PostgreSQL server
    conn = psycopg2.connect(
        host='host',
        database='db_name',
        user='username',
        password='password',
        port='port',
    )
    # create a cursor
    cursor = conn.cursor()
    cursor.execute("select relname from pg_class where relkind='r' and relname !~ '^(pg_|sql_)';")
    rows = cursor.fetchall()
    # close the communication with the PostgreSQL server
    cursor.close()
except (Exception, psycopg2.DatabaseError) as error:
    print(error)
Now, in another view I want to make other queries (like retrieving specific rows from a specific table). How can I do that?
The goal is to take all the credentials from the user's input in a template to connect to the db, then on another template choose which table and which rows to use to get certain data.
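One way to structure this is to save the submitted credentials (for example in the session) and open a short-lived connection inside each view. A minimal sketch, assuming a hypothetical session key db_credentials and a hypothetical table_rows view; psycopg2.sql is used so the user-chosen table name is quoted safely:

import psycopg2
from psycopg2 import sql

def get_connection(request):
    # Hypothetical helper: reuse the credentials the first view
    # stored in the session to open a fresh connection per request.
    creds = request.session['db_credentials']
    return psycopg2.connect(
        host=creds['host'],
        database=creds['database'],
        user=creds['user'],
        password=creds['password'],
        port=creds['port'],
    )

def table_rows(request, table_name):
    # Second view: fetch rows from the table the user picked.
    conn = get_connection(request)
    try:
        with conn.cursor() as cursor:
            # sql.Identifier quotes the table name; still validate it
            # against the table list fetched in the first view.
            query = sql.SQL("SELECT * FROM {} LIMIT 100").format(
                sql.Identifier(table_name))
            cursor.execute(query)
            rows = cursor.fetchall()
    finally:
        conn.close()
    # ... render `rows` in the template ...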

Related

How to Make SQL Table Show on PyCharm

I have a Postgres db running on Docker. I am able to access this db via my SQL client DBeaver, and when I run select statements I see the expected results.
I would like to be able to query this db via a Python script, and after some searching found the psycopg2 package.
When I run the code below it 'looks' successful: the conn and cursor objects appear as variables.
import pandas as pd
import psycopg2

# connect to db
conn = psycopg2.connect(
    host="localhost",
    database="postgres",
    user="postgres",
    password="example")

# create a cursor
cur = conn.cursor()
However, when trying to query the db using cur.execute(), the variable ex_data is None. This exact same query via my SQL client returns a table of data.
ex_data = cur.execute('select * from myschema.blah limit 10;')
How can I query my db via Python using psycopg2? The desired result would be a data frame with the result set from the query string above.
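In psycopg2, cursor.execute() always returns None; the rows have to be pulled with a fetch call (or the query handed to pandas). A minimal sketch along those lines:

import pandas as pd
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    database="postgres",
    user="postgres",
    password="example")
cur = conn.cursor()

# Option 1: fetch through the cursor and build the frame yourself
cur.execute('select * from myschema.blah limit 10;')
rows = cur.fetchall()                            # list of tuples
cols = [desc[0] for desc in cur.description]     # column names
df = pd.DataFrame(rows, columns=cols)

# Option 2: let pandas run the query directly
df = pd.read_sql('select * from myschema.blah limit 10;', conn)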

Redshift queries not working with psycopg2

I am creating a Python script to interact with schema permissions (and related tables) on Redshift. As suggested in some other Stack Overflow posts, I am using the psycopg2 library.
When I try to execute some simple SELECT queries I have no problems: I can execute them and see results with no issues.
The problem comes when, for example, I try to create a new schema or grant/revoke permissions. These kinds of queries don't appear to produce any effect.
Here is a very simple example in which I attempt to create a new schema:
conn_string = "dbname='{}' port='{}' host='{}' user='{}' password='{}'".format(DB_NAME, DB_PORT, DB_HOST, DB_USER, DB_PWD)
con = psycopg2.connect(conn_string)
sql = "CREATE SCHEMA new_schema"
cur = con.cursor()
cur.execute(sql)
But when I look into the Redshift DB I don't see any new schema called new_schema. The same behaviour happens when I try to run a permission grant/revoke query.
Does anyone know what is going on?
You have to commit the transaction.
con = psycopg2.connect(conn_string)
sql = "CREATE SCHEMA new_schema"
cur = con.cursor()
cur.execute(sql)
con.commit()
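Alternatively, psycopg2 connections can be switched to autocommit mode, so DDL and permission statements take effect immediately without an explicit commit:

con = psycopg2.connect(conn_string)
con.autocommit = True   # each statement is committed as soon as it runs
cur = con.cursor()
cur.execute("CREATE SCHEMA new_schema")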

Python MySQLdb doesn't return all the data from the database

I'm using the Python package MySQLdb to fetch data from a MySQL database. However, I notice that I can't fetch the entirety of the data.
import MySQLdb
db = MySQLdb.connect(host=host, user=user, passwd=password)
cur = db.cursor()
query = "SELECT count(*) FROM table"
cur.execute(query)
This returns a number less than what I get if I execute the exact same query in MySQL Workbench. I've noticed that the data it doesn't return is the data that was inserted into the database most recently. Where am I going wrong?
You are not committing the inserted rows on the other connection.
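If the writer is committing but this script still sees stale counts, the likely culprit is the implicit transaction MySQLdb opens on the reading connection: under InnoDB's default REPEATABLE READ isolation it keeps returning the same snapshot. Ending that transaction before re-querying refreshes the view; a small sketch:

import MySQLdb

db = MySQLdb.connect(host=host, user=user, passwd=password)
cur = db.cursor()

# End the implicit transaction so the next query starts a fresh
# snapshot that includes rows committed by other connections.
db.commit()
cur.execute("SELECT count(*) FROM table")
print(cur.fetchone()[0])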

Python not inserting data into MySQL

I am testing Python and MySQL. I am able to create and delete tables, but I am unable to insert data into them. I searched Stack Overflow and most answers suggest using commit(), so I used it, but even then the data is not inserted into the database. Please help me.
This is the code I use; it creates the table but does not insert data:
import MySQLdb
db = MySQLdb.connect("localhost","user","password")
cxn = MySQLdb.connect(db='test')
cursor = cxn.cursor()
cursor.execute("CREATE TABLE users(name VARCHAR(40),id VARCHAR(40))")
cursor.execute("INSERT INTO users(name,id) VALUES('John','1')")
db.commit()
print "Opertion completed successfully"
Are db and cxn connections to the same database?
You should establish your connection using following:
db = MySQLdb.connect(host="localhost",
                     db="test",
                     user="user",
                     passwd="password")
The cursor should then be derived from this connection via:
cursor = db.cursor()
I would hazard that your issue is coming from the ambiguity between db and cxn.
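Putting that together with the commit, a sketch of the corrected script (one connection throughout):

import MySQLdb

db = MySQLdb.connect(host="localhost",
                     db="test",
                     user="user",
                     passwd="password")
cursor = db.cursor()
cursor.execute("CREATE TABLE users(name VARCHAR(40), id VARCHAR(40))")
cursor.execute("INSERT INTO users(name, id) VALUES('John', '1')")
db.commit()   # commit on the same connection that ran the INSERT
print("Operation completed successfully")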

Creating a pyodbc cursor to multiple databases

I am processing events stored in MS Access databases using pyodbc.
Each month is a separate file / database, and I would like to process events from multiple months.
Is it possible to create a cursor to a view spanning multiple months, i.e. multiple database connections?
Edit 1: And without having to write a new database? (Something like UNION VIEW maybe?)
You'll need to make multiple connections and cursors, but you should be able to process the data.
Let's say the files are stored as month_1.mdb, month_2.mdb, etc. in C:\access.
import pyodbc

# Set up each connection; we need a way to build each file's name.
# Note the doubled braces so str.format() leaves the driver name alone.
connect_string = ("Driver={{Microsoft Access Driver (*.mdb, *.accdb)}};"
                  "DBQ=C:\\access\\month_{}.mdb;")
# Assuming that you'll get the same data from each database
sql = "SELECT column_1, column_2 FROM table"
# Connect to each file
connections = [pyodbc.connect(connect_string.format(n)) for n in range(1, 12 + 1)]
# Create a cursor for each file
cursors = [conn.cursor() for conn in connections]
# Query each file and save the data
data = []
for cur in cursors:
    cur.execute(sql)
    data.extend(cur.fetchall())
OK, so now you have all the data. You can create an in-memory database with the sqlite3 module and then do queries against it.
import sqlite3

# Create your temporary database
connection = sqlite3.connect(":memory:")
cursor = connection.cursor()
# Set up a place to hold the data fetched previously
cursor.execute("CREATE TABLE t(x INTEGER, y INTEGER)")
# Dump all the data into the database
for column_1, column_2 in data:
    cursor.execute("INSERT INTO t VALUES (?, ?)", [column_1, column_2])
connection.commit()
# Now you can run queries against the new view of your data
sql = "SELECT x, COUNT(*) FROM t GROUP BY x"
