Inserting through pymssql but no rows appear in the database - python

I'm quite new to Python and I'm trying to write a script that puts values into a SQL database.
It's a simple two-column table that looks like this:
CREATE TABLE [dbo].[pythonInsertTest](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [value] [varchar](50) NULL
)
I tried doing a SELECT query from Python and that worked! So the connection or anything of that sort is not the problem. But when I insert into the table with the following code:
import pymssql
conn = pymssql.connect(server='XXXX', user='XXXX', password='XXXX', database='XXXX')
cursor = conn.cursor()
cursor.execute("insert into pythonInsertTest(value) OUTPUT INSERTED.ID VALUES('test')")
row = cursor.fetchone()
while row:
    print "Inserted Product ID : " + str(row[0])
    row = cursor.fetchone()
the response is:
Inserted Product ID : 20
Exit status: 0
However, if I look in my SQL manager and select all the rows in said table, the row I just added is not in there... But when I manually insert a row through an SQL query in the manager, it is added.
Thing to note: it did skip the ID that was "inserted" through my Python script.
Anyone seen this before or knows what to do?

Ha, I've banged my head on this a few times as well. As far as I can tell, you are missing a commit statement. As per this example, add a conn.commit(), and hopefully you will be golden. That also explains the skipped ID: SQL Server consumes IDENTITY values even when the transaction is rolled back, so your uncommitted insert still used one up.
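Put together, a minimal sketch of the fixed script (same placeholders as in the question):

import pymssql

conn = pymssql.connect(server='XXXX', user='XXXX', password='XXXX', database='XXXX')
cursor = conn.cursor()
cursor.execute("insert into pythonInsertTest(value) OUTPUT INSERTED.ID VALUES('test')")
row = cursor.fetchone()
while row:
    print("Inserted Product ID : " + str(row[0]))
    row = cursor.fetchone()
conn.commit()  # without this, the insert is rolled back when the connection closes
conn.close()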

Related

Is there a way to iterate through a list and get row and column counts of multiple Redshift tables using Python?

I am trying to get the row and column count for tables that are present in different schemas in Redshift, using Python's redshift_connector. I tried the code below and am receiving a ProgrammingError that says the following. Not sure what the issue is? Basically, I am trying to substitute the schemas and tables that are present in the list tables and get the row count for each table.
{'S': 'ERROR', 'C': '123489', 'M': 'Syntax error at or near "{"', 'P': '22...', 'L': '714', 'R': 'yyerror'}
Thanks in advance for your time and efforts!
Python Code:
import redshift_connector
tables=['schema1.tablename','schema2.table2']
conn=redshift_connector.connect(
    host='my_host',
    port="my_port",
    database='my_db',
    user="user",
    password='password')
cur=conn.cursor()
cur.execute('select count(*) from {",".join("\'"+y+"\'" for y in tables)}')
results=cur.fetchall()
print("The table {} contained".format(tables[0]),*results[0],"rows"+"\n") #Printing row counts along with table names
cur.close()
conn.close()
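For reference, a minimal sketch of one way this is often done (same placeholder credentials as above; the port is assumed to be Redshift's default 5439): the missing f prefix in the execute call above means the braces were sent to Redshift literally, which is exactly what the syntax error complains about, and in any case table names can't be bound as query parameters, so the usual approach is to format each identifier into the SQL string and run one count query per table.

import redshift_connector

tables = ['schema1.tablename', 'schema2.table2']

conn = redshift_connector.connect(
    host='my_host',   # placeholders, as in the question
    port=5439,        # must be an int; 5439 is Redshift's default (assumption)
    database='my_db',
    user='user',
    password='password')
cur = conn.cursor()

for table in tables:
    # Identifiers can't be sent as bind parameters, so format them in directly.
    # This is only reasonable because the table list is hardcoded, not user input.
    cur.execute('select count(*) from {}'.format(table))
    count = cur.fetchone()[0]
    print("The table {} contained {} rows".format(table, count))

cur.close()
conn.close()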

Insert pandas DataFrame into Actian PSQL database table using Python

I want to import the data of the file "save.csv" into my Actian PSQL database table "new_table" but I got this error:
ProgrammingError: ('42000', "[42000] [PSQL][ODBC Client Interface][LNA][PSQL][SQL Engine]Syntax Error: INSERT INTO 'new_table'<< ??? >> ('name','address','city') VALUES (%s,%s,%s) (0) (SQLPrepare)")
Below is my code:
import pandas as pd
import pyodbc

connection = 'Driver={Pervasive ODBC Interface};server=localhost;DBQ=DEMODATA'
db = pyodbc.connect(connection)
c = db.cursor()
#create table i.e new_table
csv = pd.read_csv(r"C:\Users\user\Desktop\save.csv")
for row in csv.iterrows():
    insert_command = """INSERT INTO new_table(name,address,city) VALUES (row['name'],row['address'],row['city'])"""
    c.execute(insert_command)
c.commit()
Pandas has a built-in function, DataFrame.to_sql(), that empties a DataFrame into a SQL database. This might be what you are looking for. Using this you don't have to manually insert one row at a time; you can insert the entire DataFrame at once.
If you want to keep using your method, the issue might be that the table "new_table" hasn't been created yet in the database, and thus you first need something like this:
CREATE TABLE new_table
(
Name [nvarchar](100) NULL,
Address [nvarchar](100) NULL,
City [nvarchar](100) NULL
)
EDIT:
You can use to_sql() like this on tables that already exist in the database:
df.to_sql(
    "new_table",
    schema="name_of_the_schema",
    con=engine,          # a SQLAlchemy engine/connection for the database
    if_exists="append",  # <--- this will append to an already existing table
    chunksize=10000,
    index=False,
)
I have tried the same; in my case the table is already created. I just want to insert each row from the pandas DataFrame into the database using Actian PSQL.
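If the table already exists, a minimal row-by-row sketch (assuming the same pyodbc connection and CSV file as in the question): the column values need to be passed as bind parameters rather than written inside the SQL string, and pyodbc uses ? placeholders.

import pandas as pd
import pyodbc

connection = 'Driver={Pervasive ODBC Interface};server=localhost;DBQ=DEMODATA'
db = pyodbc.connect(connection)
c = db.cursor()

df = pd.read_csv(r"C:\Users\user\Desktop\save.csv")
insert_command = "INSERT INTO new_table(name, address, city) VALUES (?, ?, ?)"
for _, row in df.iterrows():
    # iterrows() yields (index, Series) pairs; bind the values as parameters
    c.execute(insert_command, (row['name'], row['address'], row['city']))
db.commit()  # persist the inserts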

Table won't alter when using psycopg

I'm having some trouble altering tables in my postgres database. I'm using psycopg2 and working out of Python. I tried to add a serial primary key. It took a long time (large table), and threw no error, so it did something, but when I went to check, the new column wasn't there.
I'm hoping this is something silly that I've missed, but right now I'm at a total loss.
import psycopg2
username = *****
password = *****
conn = psycopg2.connect(database='mydb',user=username,password=password)
query = "ALTER TABLE mytable ADD COLUMN sid serial PRIMARY KEY"
cur = conn.cursor()
cur.execute(query)
conn.close()
Other things I've tried while debugging:
It doesn't work when I remove PRIMARY KEY.
It doesn't work when I try a different data type.
You need to add a commit statement in order for your changes to be reflected in the table. Add this before you close the connection.
conn.commit()
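Put together, a minimal sketch (same placeholders as in the question): psycopg2 opens a transaction implicitly, so closing the connection without committing rolls the ALTER TABLE back.

import psycopg2

username = '*****'  # placeholders, as above
password = '*****'
conn = psycopg2.connect(database='mydb', user=username, password=password)
cur = conn.cursor()
cur.execute("ALTER TABLE mytable ADD COLUMN sid serial PRIMARY KEY")
conn.commit()  # without this, the change is rolled back on close
conn.close()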

Python Update in MySQL

I've been struggling to get this really simple query to actually work in Python. I am a complete Python newb, and strings seem to be handled a lot differently than what I am accustomed to.
The query I am trying...
cur = db.cursor()
cur.execute("SELECT bluetooth_Id FROM student_Data")
data = cur.fetchall()
for row in data:
    bluetoothId = row[0]
    result = bluetooth.lookup_name(bluetoothId, timeout=5)
    print(result)
    if(result != None):
        cur.execute("UPDATE student_Data SET attendance = 1 WHERE bluetooth_Id= %s",(bluetoothId))
        print("UPDATE student_Data SET attendance = 1 WHERE bluetooth_Id= %s",(bluetoothId))
What seems to be the problem is that my actual SQL query is not correctly formatted. I know this because the last print statement returns this:
('UPDATE student_Data SET attendance = 1 WHERE bluetooth_Id= %s', 'th:is:is:my:bt:id')
and of course that id is not actually the id it returns... I didn't want to give that to you :)
I am following examples to the dot and not getting anywhere. My Bluetooth is on, the program sees my Bluetooth ID, it processes through the list of IDs already in my MySQL table, but it isn't updating any records.
And I did check to make sure I entered my ID in the MySQL table correctly, so that is not the problem either!
Update
I was able to get the correct MySQL query created using this:
cur.execute("UPDATE student_Data SET attendance = 1 WHERE bluetooth_Id = '%s"%(bluetoothId)+"'")
which creates
UPDATE student_Data SET attendance = 1 WHERE bluetooth_Id = '11:11:11:11:11:11'
but the MySQL table still isn't updating correctly.. I'll have to look into seeing why that is...
The solution was this:
cur.execute("UPDATE student_Data SET attendance = 1 WHERE bluetooth_Id = '%s"%(bluetoothId)+"'")
and then I had to add
db.commit()
after the execute, in order to actually commit the changes to the MySQL table.
Thanks all for the help:)
You need to do this:
cur.execute("UPDATE student_Data SET attendance = 1 WHERE bluetooth_Id= %s"%(bluetoothId))
If you are doing substitution, you need a % instead of a comma
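For what it's worth, the parameterized form also works if the value is passed as a one-element tuple (note the trailing comma), and it avoids the quoting problems that %-substitution can introduce. A sketch, assuming the same cursor and connection as above:

cur.execute(
    "UPDATE student_Data SET attendance = 1 WHERE bluetooth_Id = %s",
    (bluetoothId,),  # one-element tuple; the trailing comma matters
)
db.commit()  # the update isn't saved until it's committed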

Search through SQL table and delete row if cell already exists

I have a table using SQLite with Python. The table always has 3 columns and could have many rows. Each of the cells is a string. Here is an example table:
serial_num    date_measured    status
1234A         1-1-2015         passed
4321B         6-21-2015        failed
1423C         12-25-2015       passed
......
My program prompts me for a serial number. This is saved as a variable called serialNum. How can I delete (or overwrite) an entire row if serialNum equals any of the strings in the serial_num column in my table?
I've seen many examples of how to delete (or overwrite) a row in a table if I know all the values in each cell of that row, but my trouble is that the only cell that could ever be the same in each row is the serial number. I need to do a search through the serial_num column, and if any string in that column equals the current value of my serialNum variable, I need to delete (or overwrite) that row.
import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
c.execute('''CREATE TABLE test (serial_num text, date_measured text, status text)''')
c.execute("INSERT INTO test VALUES ('1234A', '1-1-2015', 'passed')")
c.execute("INSERT INTO test VALUES ('4321B', '6-21-2015', 'failed')")
c.execute("INSERT INTO test VALUES ('1423C', '12-25-2015', 'passed')")
conn.commit()
Does anyone know a simple way to do this? I've seen others say that an ID must be used or a temporary table, but I would hope there might be an easier way to accomplish my task. Any advice would be great.
SQL supports this: simply use DELETE:
"delete from test where serial_num=<some input>;"
or in this case
c.execute("delete from test where serial_num=%s;", serialNum);
There's no need to search through the list when using SQL. SQL is declarative: you tell it what to do in your query, not how to do it. Don't loop through all your rows to check which to delete: tell the database what to delete and the engine will find the best/fastest way to satisfy that goal.
Hope I interpreted your question correctly:
for row in c.execute('SELECT * FROM test WHERE serial_num = ?', (serialNum,)):
    # do whatever you want with the row
    print row
I was able to figure out a working solution:
sql = "DELETE FROM test WHERE serial_num = ?"
c.execute(sql, (serialNum,))
The comma after serialNum has to be there because execute() expects a sequence of parameters, and (serialNum,) makes a one-element tuple. Thank you @Michiel Arien for the head start.
