Download data from SQL in Python

I want to download data from SQL Server via Python, but instead of downloading the whole dataset I only need a few specific variables.
I am restricted to using only pandas' read_sql with a pyodbc connection.
My code is the following:
# call from SQL
import pandas as pd
import pyodbc

# a raw string avoids backslash-escape problems in the server path
conn = pyodbc.connect(r"""DRIVER={SQL Server};
                          Server=BXTS131133.eu.rabonet.com\LWID_LAB_03;
                          Database=CORP_Modelling;
                          Trusted_connection=yes;""")

SQL1 = 'SELECT * FROM [CORP_Modelling].[LDM_Freeze_1].[JointObligorMonthly]'
df = pd.read_sql(SQL1, conn)
Nevertheless, suppose that I want to download only a few variables/attributes from SQL. For example, from the table specified in SQL1 I only want to download:
var_to_download = ['MeasurementPeriodID', 'JointObligorID' ]
I cannot work out how to modify the above code so that it downloads only these variables.
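A minimal sketch of one way to do this, assuming the names in var_to_download match the column names in the table exactly: list only the wanted columns in the SELECT clause (built here from the list) and pass the query to read_sql as before.

import pandas as pd
import pyodbc

conn = pyodbc.connect(r"""DRIVER={SQL Server};
                          Server=BXTS131133.eu.rabonet.com\LWID_LAB_03;
                          Database=CORP_Modelling;
                          Trusted_connection=yes;""")

var_to_download = ['MeasurementPeriodID', 'JointObligorID']

# build "SELECT MeasurementPeriodID, JointObligorID FROM ..." from the list
cols = ', '.join(var_to_download)
SQL1 = f'SELECT {cols} FROM [CORP_Modelling].[LDM_Freeze_1].[JointObligorMonthly]'
df = pd.read_sql(SQL1, conn)

This way the server sends only the two requested columns over the wire instead of the whole table.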

Related

"Memory error" when using pd.read_sql_query method

I am trying to create a dataframe using the data in a Redshift table,
but I am getting a "Memory error" because the data I am fetching is huge in volume.
How do I solve this issue? (I found that chunking is one option; how do I implement chunking?) Is there any other library useful for such situations?
The following is an example code
import pandas as pd
import psycopg2

# 'pass' is a reserved word in Python and cannot be a variable name,
# and psycopg2's keyword for the database is 'dbname', not 'db_name'
conn = psycopg2.connect(host=host_name, user=usr, port=pt,
                        password=pwd, dbname=DB)

sql_query = "SELECT * FROM Table_Name"
df = pd.read_sql_query(sql_query, conn)   # the query comes first, then the connection
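A minimal sketch of the chunking approach, assuming the connection above: passing chunksize to read_sql_query makes pandas return an iterator of DataFrames instead of materialising the whole result set in memory at once.

chunks = pd.read_sql_query(sql_query, conn, chunksize=50_000)

for chunk in chunks:
    # each chunk is a DataFrame of up to 50,000 rows; process it and let it
    # be garbage-collected before the next one is fetched
    process(chunk)   # process() is a placeholder for your own logic

If even single chunks are too large, reduce chunksize, or select only the columns you actually need.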

Converting CSV to DB for SQL

I am trying to convert a .csv file I've downloaded into a .db file so that I can analyze it in DBeaver with SQLite3.
I'm using Anaconda Prompt and python within it.
Can anyone point out where I'm mistaken?
import pandas as pd
import sqlite3
df = pd.read_csv('0117002-eng.csv')
df.to_sql('health', conn)
And I just haven't been able to figure out how to set up conn appropriately. All the guides I've read have you do something like:
conn = sqlite3.connect("file.db")
But, as I mentioned, I only have the csv file, and when I did try that, it didn't work either.
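A minimal sketch of the whole flow, assuming the CSV sits in the working directory: sqlite3.connect() creates the .db file on disk if it does not already exist, so you do not need a database file before calling it.

import pandas as pd
import sqlite3

# connect() creates health.db in the working directory if it is missing
conn = sqlite3.connect('health.db')

df = pd.read_csv('0117002-eng.csv')
df.to_sql('health', conn, if_exists='replace', index=False)

conn.commit()
conn.close()

After this runs, health.db can be opened directly in DBeaver.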

error in getting pandas time series data into mysql database on localhost

I am trying to retrieve data from the National Stock Exchange for a given scrip name.
I have already created a database named "NSE" in MySQL, but did not create any table.
I am using the following script to retrieve per-minute data from the NSE website (let's say I want to retrieve data for the scrip (stock) 'CYIENT'):
from alpha_vantage.timeseries import TimeSeries
import matplotlib.pyplot as plt
import sys
import pymysql

# database connection
conn = pymysql.connect(host="localhost", user="root", passwd="pwd123", database="NSE")
c = conn.cursor()

your_key = "WLLS3TVOG22C6P9J"

def stockchart(symbol):
    ts = TimeSeries(key=your_key, output_format='pandas')
    data, meta_data = ts.get_intraday(symbol=symbol, interval='1min', outputsize='full')
    sql.write_frame(data, con=conn, name='NSE', if_exists='replace', flavor='mysql')
    print(data.head())
    data['close'].plot()
    plt.title('Stock chart')
    plt.show()

symbol = input("Enter symbol name:")
stockchart(symbol)

# committing the connection, then closing it
conn.commit()
conn.close()
On running the above script I get the following error:
NameError: name 'sql' is not defined
I am also not sure whether the above script will create a table in NSE for the (user-input) stock 'CYIENT'.
Before answering, I hope the code is a mock, not the real code. Otherwise, I'd suggest changing your credentials.
Now, I believe you are trying to use pandas.io.sql.write_frame (for pandas<=0.13.1). However, you forgot to import the module, so the interpreter doesn't recognize the name sql. To fix it, just add
from pandas.io import sql
to the beginning of the script.
Notice the parameters you use in the function call: with if_exists='replace', the table NSE will be dropped and recreated every time you run the function, and it will contain whatever data contains at that moment.
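On any recent pandas version write_frame no longer exists. As a sketch of the modern equivalent, assuming SQLAlchemy is installed, you can replace the write_frame call with DataFrame.to_sql over a SQLAlchemy engine:

from sqlalchemy import create_engine

# the connection string reuses the same local MySQL credentials as the script above
engine = create_engine("mysql+pymysql://root:pwd123@localhost/NSE")

# inside stockchart(), instead of sql.write_frame(...):
data.to_sql(name='NSE', con=engine, if_exists='replace')

to_sql also creates the table if it does not exist, which answers the second part of the question.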

Error in reading an sql file using pandas

I have an sql file stored locally on my PC. I want to open and read it using the pandas library. Here is what I have tried:
import pandas as pd
import sqlite3

# raw strings avoid backslash-escape problems in Windows paths
my_file = r'C:\Users\me\Downloads\database.sql'

# I am creating an empty database
conn = sqlite3.connect(r'C:\Users\test\Downloads\test.db')

# I am reading my file
df = pd.read_sql(my_file, conn)
However, I am receiving the following error:
DatabaseError: Execution failed on sql 'C:\Users\me\Downloads\database.sql': near "C": syntax error
Try moving the file to D:\.
Sometimes Python is not granted access to read/write on C:, so that may be the issue.
You can also try an alternative method using cursors:
cur = conn.cursor()
cur.execute("SELECT * FROM some_table")   # some_table is a placeholder; the execute step was missing above
r = cur.fetchall()
r will then contain your dataset as a list of tuples.
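The underlying problem, though, is that pd.read_sql expects a SQL query, not a path to a .sql file, which is why the path itself is reported as a syntax error. A minimal sketch of one way around this, assuming database.sql is a script of SQL statements that creates and fills at least one table (my_table below is a hypothetical name): run the script against the SQLite connection first, then query the resulting table.

import pandas as pd
import sqlite3

conn = sqlite3.connect(r'C:\Users\test\Downloads\test.db')

# execute every statement in the .sql script against the new database
with open(r'C:\Users\me\Downloads\database.sql', encoding='utf-8') as f:
    conn.executescript(f.read())

# now read a table that the script created; 'my_table' is a placeholder name
df = pd.read_sql('SELECT * FROM my_table', conn)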

How to export parsed data from Python to an Oracle table in SQL Developer?

I have used Python to parse a txt file for specific information (dates, $ amounts, lbs, etc) and now I want to export that data to an Oracle table that I made in SQL Developer.
I have successfully connected Python to Oracle with the cx_Oracle module, but I am struggling to export or even print any data to my database from Python.
I am not proficient at using SQL; I know simple queries and that's about it. I have explored the Oracle docs and haven't found straightforward export commands. When exporting data to an Oracle table via Python, is it Python code I will be using or SQL code? Is it the same as importing a CSV file, for example?
I would like to understand how to write to an Oracle table from Python; I need to parse and export a very large amount of data, so this won't be a one-time export/import. Ideally I would also like a way to preview my import to ensure it aligns correctly with my already created Oracle table, or a simple undo action if one exists.
If my problem is unclear I am more than happy to clarify it. Thanks for all help.
My code so far:
import cx_Oracle
dsnStr = cx_Oracle.makedsn("sole.wh.whoi.edu", "1526", "sole")
con = cx_Oracle.connect(user="myusername", password="mypassword", dsn=dsnStr)
print (con.version)
#imp 'Book1.csv' [this didn't work]
cursor = con.cursor()
print (cursor)
con.close()
From Import a CSV file into Oracle using CX_Oracle & Python 2.7 you can see the overall plan.
So if you have already parsed the data into a CSV, you can do it like this:
import cx_Oracle
import csv

dsnStr = cx_Oracle.makedsn("sole.wh.whoi.edu", "1526", "sole")
con = cx_Oracle.connect(user="myusername", password="mypassword", dsn=dsnStr)
print(con.version)

cursor = con.cursor()

text_sql = '''
INSERT INTO tablename (firstfield, secondfield) VALUES (:1, :2)
'''

my_file = r'C:\CSVData\Book1.csv'
with open(my_file, newline='') as f:   # text mode with newline='' is the csv idiom in Python 3
    cr = csv.reader(f)
    for row in cr:
        print(row)
        cursor.execute(text_sql, row)

con.commit()   # without a commit the inserted rows are not persisted
print('Imported')
con.close()
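Since the question mentions a very large amount of data, a sketch of a faster variant under the same assumptions (same table and CSV layout): cx_Oracle's executemany sends all the rows in a single batched call instead of one execute per row.

with open(my_file, newline='') as f:
    rows = list(csv.reader(f))

cursor.executemany(text_sql, rows)   # one batched call instead of a per-row loop
con.commit()

For a quick preview before committing, you can query the table back with cursor.execute("SELECT * FROM tablename").fetchmany(10) and undo with con.rollback() if it looks wrong.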
