I have an Access table like this (the date format is mm/dd/yyyy):
col_1_id    col_2
1           1/1/2003
2
3           1/5/2009
4
5           3/2/2008
The output should be the rows where col_1_id is between 2 and 4 (blank cells must stay blank):
2
3           1/5/2009
4
I tried this with a SQL query, but the output prints 'None' in the blank cells, and I need those cells to stay blank.
The other problem is that when I try to insert these values into another table, only the rows that have a date value get inserted:
the code stops when it reaches a row without a date. I need the rows inserted as they are.
Here is what I tried in Python:
import pyodbc

DBfile = 'Data.mdb'
conn = pyodbc.connect('Driver={Microsoft Access Driver (*.mdb)};DBQ=' + DBfile)
cursor = conn.cursor()

sql_table = "CREATE TABLE Table_new (Col_1 integer, Col_2 Date)"
cursor.execute(sql_table)
conn.commit()

i = 2
while i <= 4:
    # A holds the rows fetched from the source table (not shown here)
    sql = "INSERT INTO Table_new (Col_1, Col_2) VALUES ('%s', '%s')" % (A[i][0], A[i][1])
    cursor.execute(sql)
    conn.commit()
    i = i + 1

cursor.close()
conn.close()
Instead of passing A[i][x] through as-is, why not simply add OR logic to eliminate the possibility of None?
For any cell you wish to keep "blank" (assuming you mean an empty string), say A[i][1], just do
A[i][1] or ""
which will yield the empty string "" if the cell gives you None.
The string representation of None is actually 'None', not the empty string. Try:
"... ('%s', '%s')" % (A[i][0], A[i][1] if A[i][1] else '')
I have a dataframe named Data2 and I wish to put its values into a PostgreSQL table. For reasons, I cannot use to_sql, as some of the values in Data2 are numpy arrays.
This is Data2's schema:
cursor.execute(
    """
    DROP TABLE IF EXISTS Data2_mcw;
    CREATE TABLE Data2_mcw (
        time timestamp without time zone,
        u bytea,
        v bytea,
        w bytea,
        spd bytea,
        dir bytea,
        temp bytea
    );
    """
)
My code segment:
for col in Data2_mcw.columns:
    for row in Data2_mcw.index:
        value = Data2_mcw[col].loc[row]
        if type(value).__module__ == np.__name__:
            value = pickle.dumps(value)
        cursor.execute(
            """
            INSERT INTO Data2_mcw(%s)
            VALUES (%s)
            """,
            (col.replace('\"', ''), value)
        )
Error generated:
psycopg2.errors.SyntaxError: syntax error at or near "'time'"
LINE 2: INSERT INTO Data2_mcw('time')
How do I rectify this error?
Any help would be much appreciated!
There are two problems I see with this code.
The first problem is that you cannot use bind parameters for column names, only for values. The first of the two %s placeholders in your SQL string is invalid. You will have to use string concatenation to set column names, something like the following (assuming you are using Python 3.6+):
cursor.execute(
    f"""
    INSERT INTO Data2_mcw({col})
    VALUES (%s)
    """,
    (value,))
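If the column names can't be fully trusted, or may need quoting, psycopg2 also ships a psycopg2.sql module for composing identifiers safely; a brief sketch of the same statement built with it:

from psycopg2 import sql

cursor.execute(
    sql.SQL("INSERT INTO Data2_mcw({}) VALUES (%s)").format(sql.Identifier(col)),
    (value,))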
The second problem is that a SQL INSERT statement inserts an entire row. It does not insert a single value into an already-existing row, as you seem to be expecting it to.
Suppose your dataframe Data2_mcw looks like this:
   a  b  c
0  1  2  7
1  3  4  9
Clearly, this dataframe has six values in it. If you were to run your code on this dataframe, then it would insert six rows into your database table, one for each value, and the data in your table would look like the following:
a    b    c
1
3
     2
     4
          7
          9
I'm guessing you don't want this: you'd rather your database table contained the following two rows instead:
a  b  c
1  2  7
3  4  9
Instead of inserting one value at a time, you will have to insert one entire row at a time. This means you have to swap your two loops around, build the SQL string once beforehand, and collect all the values for a row before passing them to the database. Something like the following should hopefully work (please note that I don't have a Postgres database to test this against):
column_names = ",".join(Data2_mcw.columns)
placeholders = ",".join(["%s"] * len(Data2_mcw.columns))
sql = f"INSERT INTO Data2_mcw({column_names}) VALUES ({placeholders})"
for row in Data2_mcw.index:
    values = []
    for col in Data2_mcw.columns:
        value = Data2_mcw[col].loc[row]
        if type(value).__module__ == np.__name__:
            value = pickle.dumps(value)
        values.append(value)
    cursor.execute(sql, values)
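If the dataframe is large, batching the inserts may also be worthwhile; a sketch using psycopg2.extras.execute_values, again untested against a live database:

from psycopg2.extras import execute_values

rows = []
for row in Data2_mcw.index:
    values = []
    for col in Data2_mcw.columns:
        value = Data2_mcw[col].loc[row]
        if type(value).__module__ == np.__name__:
            value = pickle.dumps(value)
        values.append(value)
    rows.append(values)
# execute_values expands the single VALUES %s into one tuple per row
execute_values(cursor, f"INSERT INTO Data2_mcw({column_names}) VALUES %s", rows)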
I'm trying to save three values from SQLite in a Python list. All values are from different columns but the same row. If a value is null I don't want to add it to the list. This is the code I wrote:
def create_list(self, chat):
    list = []
    for x in range(1, 3):
        column_name = "list" + str(x)
        value = c.execute("SELECT (?) FROM user WHERE (?) NOTNULL AND id = (?)", (column_name, column_name, chat)).fetchall()
        if value != None:
            list.append(value[0][0])
    print(list)
Instead of printing the SQLite values in a list it just prints ['list1', 'list2', 'list3']. (If one of the values in the table is null it doesn't print that one; for example, if the value in column list3 is null it just prints ['list1', 'list2'].)
How can I fix this so that it saves the actual SQLite values in the list?
I had the same problem.
The sqlite3 package gives you None for NULL values after fetching the data, so you need a condition that prints 'Empty' or something similar when it encounters them, something like:
conn = sqlite3.connect("someDataBase.db")
db = conn.cursor()
db.execute('''
    SELECT name, email
    FROM person
    WHERE name = 'some dude'
''')
result = db.fetchone()
if result is None:  # no matching row was found
    print('Empty')
else:
    var = result[0]
    print(var)
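For what it's worth, the question's code has a second issue that explains the exact symptom: ? placeholders bind values, never column names, so SELECT (?) returns the literal string 'list1' rather than the column's contents. A sketch of a fix, interpolating the column name (safe here only because the names come from a fixed, known list) and binding just the id; note range(1, 4) so that list3 is included:

def create_list(self, chat):
    values = []
    for x in range(1, 4):  # columns list1, list2, list3
        column_name = "list" + str(x)  # from a fixed list, so safe to interpolate
        row = c.execute(
            f"SELECT {column_name} FROM user WHERE {column_name} IS NOT NULL AND id = ?",
            (chat,)).fetchone()
        if row is not None:
            values.append(row[0])
    print(values)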
I want to add a new column to a db file and fill it with values at an interval of 2. Here is the code I wrote...
import sqlite3

WorkingFile = "C:\\test.db"
con = sqlite3.connect(WorkingFile)
cur = con.cursor()
cur.execute("ALTER TABLE MyTable ADD COLUMN WorkingID long")  # add a column "WorkingID"
rows = cur.fetchall()

iCount = 0
for row in rows:
    iCount = iCount + 2
    print(iCount)
    # Here I have a question: how do I write the WHERE clause?
    cur.execute("UPDATE MyTable SET WorkingID = ?", (iCount,))

con.commit()
cur.close()
The code above gives me a new column with the same value in every row, something like this:
WorkingID
10
10
10
...
But I want a result like this:
WorkingID
2
4
6
8
10
...
My question about the code is that I don't know how to write the WHERE clause under UPDATE. Would you please help me out? Thanks.
The SQL engine cannot tell the difference between the rows.
You should add an autoincrement or ROWID column in order to have it set as ID.
I do not know the structure of your table as you don't create it here but rather alter it.
If you do happen to create it, create a new INTEGER PRIMARY KEY column which you can then use WHERE on like so:
cur.execute(
"CREATE TABLE MyTable"
"(RowID INTEGER PRIMARY KEY,"
"WorkingID INTEGER)")
If your table has some unique ID, get a sorted list of them (their actual values do not matter, only their order):
ids = [row[0] for row in cur.execute("SELECT id FROM MyTable ORDER BY id")]
iCount = 0
for id in ids:
    iCount += 2
    cur.execute("UPDATE MyTable SET WorkingID = ? WHERE id = ?",
                [iCount, id])
If you don't have such a column, use the rowid instead.
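A minimal sketch of that rowid variant (every ordinary SQLite table exposes a built-in rowid unless it was created WITHOUT ROWID):

rowids = [row[0] for row in cur.execute("SELECT rowid FROM MyTable ORDER BY rowid")]
iCount = 0
for rid in rowids:
    iCount += 2
    cur.execute("UPDATE MyTable SET WorkingID = ? WHERE rowid = ?", [iCount, rid])
con.commit()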
If you can't add an ID to your table, there is a rather hacky way. Before your loop, run this query:

UPDATE MyTable SET WorkingID = 2 LIMIT 1

and after that:

iCount = 2
for row in rows:
    iCount = iCount + 2
    print(iCount)
    cur.execute("UPDATE MyTable SET WorkingID = ? WHERE WorkingID IS NULL LIMIT 1", (iCount,))

It's not a good way, but it should work (note that UPDATE ... LIMIT is only available when SQLite is compiled with the SQLITE_ENABLE_UPDATE_DELETE_LIMIT option).
With Python's sqlite3 library, can I have a variable number of placeholders in an SQL statement such as

INSERT INTO table VALUES (?,?)

where ? are the placeholders, in a way that is safe from an SQL injection attack?
I want a general function (below) that checks the number of columns and writes data into a row, but works for any table with any number of columns.
I looked at:
Python Sqlite3: INSERT INTO table VALUE(dictionary goes here) and
PHP MYSQL 'INSERT INTO $table VALUES ......' variable number of fields
but I'm still not sure.
def rowin(self, TableName, ColumnData=[]):
    # First check the number of columns in the table TableName to confirm ColumnData fits
    check = "PRAGMA table_info(%s)" % TableName
    conn = sqlite3.connect(self.database_file)
    c = conn.cursor()
    c.execute(check)
    ColCount = len(c.fetchall())
    # Compare TableName's column count to len(ColumnData)
    if ColCount == len(ColumnData):
        # I want the number of ? to equal ColCount
        c.executemany('''INSERT INTO {tn} VALUES (?,?)'''.format(tn=TableName), ColumnData)
        conn.commit()
    else:
        print("Input doesn't match number of columns")
def rowin(self, TableName, ColumnData=[]):
    # First count the number of columns in the table TableName
    check = "PRAGMA table_info(%s)" % TableName
    conn = sqlite3.connect(self.database_file)
    c = conn.cursor()
    c.execute(check)
    # Assign the number of columns to ColCount
    ColCount = len(c.fetchall())
    # Create a placeholder for each value going to each column
    qmark = "?"
    for cols in range(1, len(ColumnData)):
        qmark += ",?"
    # Then check that the columns in the table match the incoming number of data
    if ColCount == len(ColumnData):
        # Now qmark holds one "?" for each item in the ColumnData list
        c.execute('''INSERT INTO {tn} VALUES ({q})'''.format(tn=TableName, q=qmark), ColumnData)
        conn.commit()
        print("Database updated")
    else:
        print("Input doesn't match number of columns")
I have a tuple with a single value that's the result of a database query (it gives me the max ID currently in the database). I need to add 1 to that value to use in my next query, which creates a new profile associated with the next ID.
I'm having trouble converting the tuple into an integer so that I can add 1 (I tried the roundabout way here, turning the value into a string and then into an int). Help, please.
sql = """
SELECT id
FROM profiles
ORDER BY id DESC
LIMIT 1
"""
cursor.execute(sql)
results = cursor.fetchall()
maxID = int(','.join(str(results)))
newID = maxID + 1
If you are expecting just the one row, then use cursor.fetchone() instead of fetchall() and simply index into the one row that that method returns:
cursor.execute(sql)
row = cursor.fetchone()
newID = row[0] + 1
Rather than use an ORDER BY, you can ask the database directly for the maximum value:
sql = """SELECT MAX(id) FROM profiles"""