program hangs... am I closing the connection/program correctly - python

I am new to pyodbc and have written what I think is a simple program, but it does not close/quit. When I run it, it hangs until I manually break the program, although it does output the data. Am I writing it correctly?
import pyodbc
import csv
conn = pyodbc.connect('DSN=connect;UID=gog;PWD=humbleb')
cursor2 = conn.cursor()
cursor2.execute("SELECT PTIME, PVALUE FROM HISTORY_TABLE WHERE POINT = 'POINT' AND PTIME> '2017-04-12' AND PTIME<'2017-04-13' AND HISTTYPE='AVG' AND PERIOD=1200")
i=1
sample = cursor2.fetchall()
for rows in sample:
    with open('C:/directory/record{0}.csv'.format(i), 'w') as f:
        csv.writer(f).writerow(rows)
    i += 1
cursor2.close()
conn.close()
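No answer is quoted here, but one hedged sketch of the same flow moves the cleanup into a try/finally so conn.close() is always reached even if writing one of the CSV files raises an error; if the hang persists after that, it is more likely on the ODBC driver side than in this script:
import csv
import pyodbc

# A minimal sketch (DSN and credentials taken from the question): make sure the
# cursor and connection are closed even if writing a CSV file fails.
conn = pyodbc.connect('DSN=connect;UID=gog;PWD=humbleb')
try:
    cursor2 = conn.cursor()
    cursor2.execute("SELECT PTIME, PVALUE FROM HISTORY_TABLE "
                    "WHERE POINT = 'POINT' AND PTIME > '2017-04-12' AND PTIME < '2017-04-13' "
                    "AND HISTTYPE='AVG' AND PERIOD=1200")
    for i, row in enumerate(cursor2.fetchall(), start=1):
        # newline='' avoids blank lines in CSV output on Windows (Python 3)
        with open('C:/directory/record{0}.csv'.format(i), 'w', newline='') as f:
            csv.writer(f).writerow(row)
    cursor2.close()
finally:
    conn.close()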

Related

Why does it only insert 1000 elements?

I want to insert 1 million rows into my database with this code, but it only inserts 1000 and I don't know why.
I have 2 CSV files with 1000 rows each, like this:
Katherina,Rasmus,82-965-3140,29/09/1962,krasmus8thetimescouk
import psycopg2
import csv
print("\n")
csv_file1=open('/home/oscarg/Downloads/base de datos/archivo1.csv', "r")
csv_file2=open('/home/oscarg/Downloads/base de datos/archivo2.csv', "r")
try:
connection = psycopg2.connect(user = "oscar",
password = "",
host = "127.0.0.1",
port = "5432",
database = "challenge6_7")
cursor = connection.cursor()
csv_reader1 = csv.reader(csv_file1, delimiter=',')
for row in csv_reader1:
csv_reader2 = csv.reader(csv_file2, delimiter=',')
contador=+1
for row2 in csv_reader2:
nombre=row[0]+" "+row2[0]
apellido=row[1]+" "+row2[1]
cedula_id=row[2]+row2[2]
if not(contador%1000):
fecha_nacimiento="'"+row[3]+"'"
else:
fecha_nacimiento="'"+row2[3]+"'"
if not (contador%3):
email=row[4]+"#hotmail.com"
else:
email=row2[4]+"#gmail.com"
postgres_insert_query = " INSERT INTO cliente (nombre, apellido, cedula_id,fecha_nacimiento, cliente_email) VALUES (%s,%s, %s, %s,%s)"
record_to_insert = (nombre, apellido, cedula_id, fecha_nacimiento, email)
cursor.execute(postgres_insert_query, record_to_insert)
connection.commit()
if (contador==1000):
contador=0
except (Exception, psycopg2.Error) as error :
print(error.pgerror)
finally:
#closing database connection.
if(connection):
cursor.close()
connection.close()
print("PostgreSQL connection is closed")
csv_file1.close()
csv_file2.close()
It inserts 1000 rows and then stops. Is it a problem with my code, psycopg2, or my database?
It is possible that the reader has already reached end of file by the second iteration over the second CSV file, so nothing is being read.
You might want to store the rows in a list first, then iterate over them.
See: Python import csv to list
Edit: This is the issue. I made a little test myself.
import csv
csv_file1=open("a.csv", "r")
csv_file2=open("1.csv", "r")
csv_reader1 = csv.reader(csv_file1, delimiter=',')
for row in csv_reader1:
    csv_file2=open("1.csv", "r") # Removing this line makes the code run N times
                                 # instead of N x N (a million in your example).
    csv_reader2 = csv.reader(csv_file2, delimiter=',')
    for row2 in csv_reader2:
        print(row, row2)
I tested it by re-opening the file (not just the reader) inside the first loop. However, opening the file again and again does not seem like best practice; you should store the rows in a list if you don't have memory limitations.
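A minimal sketch of that list-based approach, assuming the same two CSV files from the question; only the loop structure is shown, with the INSERT logic omitted:
import csv

# Read the second file once and keep its rows in memory, so the inner loop
# can run on every pass of the outer loop.
with open('/home/oscarg/Downloads/base de datos/archivo2.csv', 'r') as f2:
    rows2 = list(csv.reader(f2, delimiter=','))

with open('/home/oscarg/Downloads/base de datos/archivo1.csv', 'r') as f1:
    for row in csv.reader(f1, delimiter=','):
        for row2 in rows2:
            pass  # build nombre, apellido, etc. and execute the INSERT here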

SQL query returns blank output when running inside Python script

I have a Python script that is supposed to loop through a text file and take the domain from each line as an argument to a SQL query. The issue is that when I pass domain_name in as an argument, the JSON output the script produces is blank. If I set domain_name directly inside the SQL query, the script outputs perfectly formatted JSON. As you can see at the top of my script, right below def connect_to_db(), I start looping through the text file. I'm not sure where in my code the error is occurring; any assistance would be greatly appreciated!
Code
from __future__ import print_function
try:
    import psycopg2
except ImportError:
    raise ImportError('\n\033[33mpsycopg2 library missing. pip install psycopg2\033[1;m\n')
    sys.exit(1)
import re
import sys
import json
import pprint

DB_HOST = 'crt.sh'
DB_NAME = 'certwatch'
DB_USER = 'guest'

def connect_to_db():
    filepath = 'test.txt'
    with open(filepath) as fp:
        for cnt, domain_name in enumerate(fp):
            print("Line {}: {}".format(cnt, domain_name))
            print(domain_name)
            domain_name = domain_name.rstrip()
            conn = psycopg2.connect("dbname={0} user={1} host={2}".format(DB_NAME, DB_USER, DB_HOST))
            cursor = conn.cursor()
            cursor.execute(
                "SELECT c.id, x509_commonName(c.certificate), x509_issuerName(c.certificate) FROM certificate c, certificate_identity ci WHERE c.id = ci.certificate_id AND ci.name_type = 'dNSName' AND lower(ci.name_value) = lower('%s') AND x509_notAfter(c.certificate) > statement_timestamp();".format(
                    domain_name))
            unique_domains = cursor.fetchall()
            # print out the records using pretty print
            # note that the NAMES of the columns are not shown, instead just indexes.
            # for most people this isn't very useful so we'll show you how to return
            # columns as a dictionary (hash) in the next example.
            pprint.pprint(unique_domains)
            outfilepath = domain_name + ".json"
            with open(outfilepath, 'a') as outfile:
                outfile.write(json.dumps(unique_domains, sort_keys=True, indent=4))

if __name__ == "__main__":
    connect_to_db()
Don't use format to build your SQL statement. Use %s placeholders (psycopg2's parameter style) and pass a tuple of the values:
c.execute('''SELECT c.id, x509_commonName(c.certificate),
    x509_issuerName(c.certificate) FROM certificate c, certificate_identity ci WHERE
    c.id = ci.certificate_id AND ci.name_type = 'dNSName' AND lower(ci.name_value) =
    lower(%s) AND x509_notAfter(c.certificate) > statement_timestamp()''', (domain_name,))
More generically:
c.execute('''SELECT columnX FROM tableA WHERE columnY = %s AND columnZ = %s''',
          (desired_columnY_value, desired_columnZ_value))
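Applied to the loop in the question, that would look roughly like this (a sketch; the surrounding function stays unchanged):
domain_name = domain_name.rstrip()
cursor.execute(
    """SELECT c.id, x509_commonName(c.certificate), x509_issuerName(c.certificate)
       FROM certificate c, certificate_identity ci
       WHERE c.id = ci.certificate_id
         AND ci.name_type = 'dNSName'
         AND lower(ci.name_value) = lower(%s)
         AND x509_notAfter(c.certificate) > statement_timestamp()""",
    (domain_name,))  # the driver quotes the value itself; no manual format() needed
unique_domains = cursor.fetchall()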

Read and write postgres script using python

I have Postgres tables and I want to run a PostgreSQL script file on these tables using Python, then write the results of the queries to a CSV file. The script file has multiple queries separated by semicolons (;). A sample script is shown below.
Script file:
--Duplication Check
select p.*, c.name
from scale_polygons_v3 c inner join cartographic_v3 p
on (metaphone(c.name_displ, 20) LIKE metaphone(p.name, 20)) AND c.kind NOT IN (9,10)
where ST_Contains(c.geom, p.geom);
--Area Check
select sp.areaid,sp.name_displ,p.road_id,p.name
from scale_polygons_v3 sp, pak_roads_20162207 p
where st_contains(sp.geom,p.geom) and sp.kind = 1
and p.areaid != sp.areaid;
When I run the Python code it executes successfully without any error, but the problem comes when writing the results of the queries to a CSV file: only the result of the last executed query ends up in the file. The first query's result is overwritten by the second, the second by the third, and so on until the last query.
Here is my python code:
import psycopg2
import sys
import csv
import datetime, time
def run_sql_file(filename, connection):
    '''
    The function takes a filename and a connection as input
    and will run the SQL query on the given connection
    '''
    start = time.time()
    file = open(filename, 'r')
    sql = s = " ".join(file.readlines())
    #sql = sql1[3:]
    print "Start executing: " + " at " + str(datetime.datetime.now().strftime("%Y-%m-%d %H:%M")) + "\n"
    print "Query:\n", sql + "\n"
    cursor = connection.cursor()
    cursor.execute(sql)
    records = cursor.fetchall()
    with open('Report.csv', 'a') as f:
        writer = csv.writer(f, delimiter=',')
        for row in records:
            writer.writerow(row)
    connection.commit()
    end = time.time()
    row_count = sum(1 for row in records)
    print "Done Executing:", filename
    print "Number of rows returned:", row_count
    print "Time elapsed to run the query:", str((end - start)*1000) + ' ms'
    print "\t ==============================="

def main():
    connection = psycopg2.connect("host='localhost' dbname='central' user='postgres' password='tpltrakker'")
    run_sql_file("script.sql", connection)
    connection.close()

if __name__ == "__main__":
    main()
What is wrong with my code?
If you are able to change the SQL script a bit then here is a workaround:
#!/usr/bin/env python
import psycopg2
script = '''
declare cur1 cursor for
select * from (values(1,2),(3,4)) as t(x,y);
declare cur2 cursor for
select 'a','b','c';
'''
print script
conn = psycopg2.connect('');
# Cursors exist and are available only inside the transaction
conn.autocommit = False;
# Create cursors from script
conn.cursor().execute(script);
# Read names of cursors
cursors = conn.cursor();
cursors.execute('select name from pg_cursors;')
cur_names = cursors.fetchall()
# Read data from each available cursor
for cname in cur_names:
    print cname[0]
    cur = conn.cursor()
    cur.execute('fetch all from ' + cname[0])
    rows = cur.fetchall()
    # Here you can save the data to the file
    print rows
conn.rollback()
print 'done'
Disclaimer: I am a total newbie with Python.
The simplest approach is to output each query to a different file, using copy_expert:
query = '''
select p.*, c.name
from
scale_polygons_v3 c
inner join
cartographic_v3 p on metaphone(c.name_displ, 20) LIKE metaphone(p.name, 20) and c.kind not in (9,10)
where ST_Contains(c.geom, p.geom)
'''
copy = "copy ({}) to stdout (format csv)".format(query)
f = open('Report.csv', 'wb')
cursor.copy_expert(copy, f, size=8192)
f.close()
query = '''
select sp.areaid,sp.name_displ,p.road_id,p.name
from scale_polygons_v3 sp, pak_roads_20162207 p
where st_contains(sp.geom,p.geom) and sp.kind = 1 and p.areaid != sp.areaid;
'''
copy = "copy ({}) to stdout (format csv)".format(query)
f = open('Report2.csv', 'wb')
cursor.copy_expert(copy, f, size=8192)
f.close()
If you want to append the second output to the same file, just keep the first file object open.
Note that COPY must output to STDOUT for the result to be available to copy_expert.
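For example, to append both results to one Report.csv you could keep a single file handle open and call copy_expert twice (a sketch; duplication_check_query and area_check_query are hypothetical names for the two SELECTs from script.sql):
# One file handle, two COPY ... TO STDOUT calls: the second result is appended
# directly after the first. The query variables stand in for the two SELECT
# statements shown earlier.
with open('Report.csv', 'wb') as f:
    for query in (duplication_check_query, area_check_query):
        cursor.copy_expert("copy ({}) to stdout (format csv)".format(query), f, size=8192)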

Python csv from database query adding a custom column to csv file

Here is what I am trying to achieve: my current code works fine and the query runs on my SQL Server, but I will need to gather information from several servers. How would I add a column with the dbserver listed in that column?
import pyodbc
import csv
f = open("dblist.ini")
dbserver,UID,PWD = [ variable[variable.find("=")+1 :] for variable in f.readline().split("~")]
connectstring = "DRIVER={SQL server};SERVER=" + dbserver + ";DATABASE=master;UID="+UID+";PWD="+PWD
cnxn = pyodbc.connect(connectstring)
cursor = cnxn.cursor()
fd = open('mssql1.txt', 'r')
sqlFile = fd.read()
fd.close()
cursor.execute(sqlFile)
with open("out.csv", "wb") as csv_file:
    csv_writer = csv.writer(csv_file, delimiter = '!')
    csv_writer.writerow([i[0] for i in cursor.description]) # write headers
    csv_writer.writerows(cursor)
You could add the extra information in your sql query. For example:
select "dbServerName", * from table;
Your cursor will return an extra column in front of your real data that holds the db server name. The downside to this method is that you're transferring a little extra data.
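An alternative (a sketch, not the SQL approach above) is to prepend the server name on the Python side when writing the CSV, reusing the dbserver value already read from dblist.ini:
with open("out.csv", "wb") as csv_file:
    csv_writer = csv.writer(csv_file, delimiter='!')
    # extra "dbserver" header in front of the real column names
    csv_writer.writerow(["dbserver"] + [i[0] for i in cursor.description])
    for row in cursor:
        csv_writer.writerow([dbserver] + list(row))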

New to python: My method to import CSV to SQLite DB

I just started python programming and find it very useful so far coming from a Delphi/Lazarus background.
I recently downloaded trend data from a SCADA system and needed to import the data into a sqlite db. I thought I would share my python script here.
This process would have taken a lot more programming in Pascal. Now I just create a GUI with Lazarus and use TProcess to run the script with some parameters and the data is in the db.
Sample of trend data
Time,P1_VC70004PID_DRCV,P1_VC70004PID_DRPV,P1_VC70004PID_DRSP
6:00:30,27.75,3000,3000
6:01:00,27.75,3000,3000
6:01:30,27.75,3000,3000
6:02:00,27.75,3000,3000
6:02:30,27.75,3000,3000
6:03:00,27.75,3000,3000
6:03:30,27.75,3000,3000
6:04:00,27.75,3000,3000
6:04:30,27.75,3000,3000
6:05:00,27.75,3000,3000
Python code:
import csv
import sqlite3
import sys
FileName = sys.argv[1]
TableName = "data"
db = "trenddata.db3"
conn = sqlite3.connect(db)
conn.text_factory = str # allows utf-8 data to be stored
c = conn.cursor()
c.execute("DROP TABLE IF EXISTS " + TableName)
c.execute("VACUUM")
i = 0
f = open(FileName, 'rt')
try:
    reader = csv.reader(f)
    for row in reader:
        if i == 0:
            ## Create Table header section from Header info in CSV doc
            c.execute("CREATE TABLE %s (%s)" % (TableName, ", ".join(row)))
        else:
            ## Import row data into database
            c.execute("INSERT INTO %s VALUES ('%s')" % (TableName, "', '".join(row)))
        i += 1
    conn.commit()
finally:
    f.close()
    conn.close()
print("Imported %s records" % (i))
