I'm trying to delete rows from a PostgreSQL table on a condition.
I want all rows to be deleted if the column "TagNaam" equals a variable var_tagnaam.
I've tried the following code and some variants of it, but I can't get it to work.
There are no errors, though.
cur.execute("DELETE FROM opc_actuelewaardentags WHERE 'TagNaam' = %s", (var_tagnaam,))
Is there something wrong with the syntax?
Edit:
Maybe it is clearer with the additional code below; the error might be in the surrounding code?
for i in range(len(taginhoud)):
    (var_tagnaam, var_tagwaarde, var_tagkwaliteit, var_tagtime) = taginhoud[i]
    print(var_tagnaam)
    cur.execute("DELETE FROM opc_actuelewaardentags WHERE 'TagNaam' = %s", (var_tagnaam,))
    conn.commit()
    cur.execute('INSERT INTO opc_actuelewaardentags ("TagNaam", "TagWaarde", "TagKwaliteit", create_date, write_date) VALUES (%s,%s,%s,now(),now())',
                (var_tagnaam, var_tagwaarde, var_tagkwaliteit))
    conn.commit()
So what I try to do here is:
Retrieve "var_tagnaam" from the list "taginhoud".
Then, in table opc_actuelewaardentags, find all rows where column "TagNaam" equals the value in "var_tagnaam" (which should be a string).
Then delete those rows. This part doesn't work.
Then insert new rows with data. This part works.
Could this code be wrong for what I want to do?
I have already tried many things to solve the upper/lower case problem.
Edit 2: The query worked in pgAdmin; now trying to do the same thing in Python.
I ran this query in pgAdmin and it deleted the rows:
delete FROM opc_actuelewaardentags where "TagNaam" = 'Bakkerij.Device1.DB100INT8';
My attempt to make it as similar as possible in Python:
var_tagnaam2 = "'"+var_tagnaam+"'"
cur.execute("DELETE FROM opc_actuelewaardentags WHERE \"TagNaam\" = %s", (var_tagnaam2,))
conn.commit()
I tried to escape the double quotes in an attempt to make it the same as in pgAdmin.
'TagNaam' is not a valid column name identifier in SQL. You must not use single or double quotes when writing a database, table, or column name, but you can use a backtick (`).
Invalid:
DELETE FROM opc_actuelewaardentags WHERE 'TagNaam' = 'test';
DELETE FROM opc_actuelewaardentags WHERE "TagNaam" = 'test';
Valid:
DELETE FROM opc_actuelewaardentags WHERE TagNaam = 'test';
DELETE FROM opc_actuelewaardentags WHERE `TagNaam` = 'test';
DELETE FROM opc_actuelewaardentags WHERE "TagNaam" = 'test';
Update: According to the PostgreSQL docs, double quotes are valid around table and column names; they are especially needed when a key word is used as a table or column name. So, contrary to the "Invalid" list above, the following is valid:
DELETE FROM opc_actuelewaardentags WHERE "TagNaam" = 'test';
More is here...
I don't have a PostgreSQL server, only a MySQL server.
For MySQL:
mysql> select * from user where '1' = '1';
+------+
| id   |
+------+
|    2 |
|    1 |
+------+
2 rows in set (0.05 sec)

mysql> select * from user;
+------+
| id   |
+------+
|    2 |
|    1 |
+------+
2 rows in set (0.00 sec)

mysql> select * from user where '1' = "1";
+------+
| id   |
+------+
|    2 |
|    1 |
+------+
2 rows in set (0.00 sec)

mysql> select * from user where 'id' = "1";
Empty set (0.00 sec)

mysql> select * from user where 'id' = 1;
Empty set, 1 warning (0.02 sec)

mysql> select * from user where id = 1;
+------+
| id   |
+------+
|    1 |
+------+
1 row in set (0.02 sec)

mysql> select * from user where 'id' = "id";
+------+
| id   |
+------+
|    2 |
|    1 |
+------+
2 rows in set (0.00 sec)
The SQL grammar should be similar. Therefore,
cur.execute("DELETE FROM opc_actuelewaardentags WHERE 'TagNaam' = %s", (var_tagnaam,))
should be
cur.execute("DELETE FROM opc_actuelewaardentags WHERE TagNaam = %s", (var_tagnaam,))
or
cur.execute("DELETE FROM opc_actuelewaardentags WHERE `TagNaam` = %s", (var_tagnaam,))
The analysis above is wrong.
"Simple Postgresql Statement - column name does not exists" gives the answer.
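In psycopg2 terms, a minimal sketch of what that linked answer points at, assuming the question's table (the connection parameters are placeholders): double-quote the mixed-case identifier and pass the value, unquoted, through %s.
import psycopg2
from psycopg2 import sql

# Placeholder connection parameters; adjust to your environment.
conn = psycopg2.connect("dbname=test user=postgres")
cur = conn.cursor()

var_tagnaam = "Bakkerij.Device1.DB100INT8"  # sample value from the question

# Double quotes preserve the mixed-case identifier; %s binds the value.
cur.execute('DELETE FROM opc_actuelewaardentags WHERE "TagNaam" = %s',
            (var_tagnaam,))

# Equivalent, letting psycopg2 compose the quoted identifier:
query = sql.SQL("DELETE FROM opc_actuelewaardentags WHERE {col} = %s").format(
    col=sql.Identifier("TagNaam"))
cur.execute(query, (var_tagnaam,))
conn.commit()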
RobbeM wrote: "Edit 2: The query worked in pgAdmin; now trying to do the same thing in Python."
I've had the same symptoms: I could delete table rows in pgAdmin or in an SQL console, but the Python code wouldn't work. The problem was that I was accidentally creating the cursor before establishing the connection with the PostgreSQL server:
c = db_conn.cursor()
db_conn = psycopg2.connect(conn_string)
So the solution for me was to create the cursor after establishing the connection with the database:
db_conn = psycopg2.connect(conn_string)
c = db_conn.cursor()
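Filled out as a complete sketch (conn_string is a placeholder), since the cursor has to come from an already-open connection:
import psycopg2

conn_string = "dbname=test user=postgres host=localhost"  # placeholder

db_conn = psycopg2.connect(conn_string)  # connect first
c = db_conn.cursor()                     # then create the cursor
c.execute('DELETE FROM opc_actuelewaardentags WHERE "TagNaam" = %s',
          ("Bakkerij.Device1.DB100INT8",))
db_conn.commit()
db_conn.close()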
Related
I'm trying to create a query using psycopg2's SQL string composition, in which I need to use curly brackets inside my query to update a key's value in a jsonb column. Something like this:
update myschema.users set data = jsonb_set(data, '{someId}', '100')
This is how I'm trying to write this query using Sql Composition string in Python:
statement = SQL(
    "UPDATE {schema}.{table} set data = jsonb_set(data, '{{key}}', '{value}') {where};"
).format(
    schema=Identifier(schema_var),
    table=Identifier(table_var),
    key=SQL(id_key),
    value=SQL(id_value),
    where=SQL(where),
)
But by running this, a new key literally called key is added to the jsonb value. And if I try to run it with just one pair of curly brackets, like this:
statement = SQL(
    "UPDATE {schema}.{table} set data = jsonb_set(data, '{key}' ...."  # The rest is the same
I get this error:
Array value must start with "{" or dimension information
How can I fix this?
To solve this issue I needed to use three nested curly brackets, like this:
statement = SQL(
    "UPDATE {schema}.{table} set data = jsonb_set(data, '{{{key}}}' ...."  # The rest is the same
This way, the someId key actually gets updated in the database.
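The reason three braces are needed: SQL().format() parses the template like str.format(), so {{ and }} are escapes for literal braces. In '{{{key}}}' the outer pairs render as literal { and } around the substituted {key}, producing the '{someId}' text-array path that jsonb_set() expects. A minimal sketch, with hypothetical stand-ins for the question's variables:
from psycopg2.sql import SQL, Identifier

# Hypothetical stand-ins for the question's variables.
schema_var, table_var = "myschema", "users"
id_key, id_value = "someId", "100"
where = "WHERE id = 1"

statement = SQL(
    "UPDATE {schema}.{table} SET data = jsonb_set(data, '{{{key}}}', '{value}') {where};"
).format(
    schema=Identifier(schema_var),
    table=Identifier(table_var),
    key=SQL(id_key),
    value=SQL(id_value),
    where=SQL(where),
)
# statement.as_string(conn) renders:
# UPDATE "myschema"."users" SET data = jsonb_set(data, '{someId}', '100') WHERE id = 1;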
You are overthinking this.
Load data into table:
\d json_import
Table "public.json_import"
Column | Type | Collation | Nullable | Default
-----------+---------+-----------+----------+-----------------------------------------
id | integer | | not null | nextval('json_import_id_seq'::regclass)
jsonb_fld | jsonb | | |
insert into json_import values (1, '{"test": "dog"}'::jsonb);
select * from json_import;
id | jsonb_fld
----+-----------------
1 | {"test": "dog"}
import psycopg2
from psycopg2 import sql
con = psycopg2.connect("dbname=test user=postgres host=localhost port=5432")
cur = con.cursor()
sql_str = sql.SQL(
    'update {table} set jsonb_fld = jsonb_set(jsonb_fld, %(key)s, %(val)s) '
    'where id = 1'
).format(table=sql.Identifier('json_import'))
cur.execute(sql_str, {'key': '{test}', 'val': '"cat"'})
con.commit()
select * from json_import;
id | jsonb_fld
----+-----------------
1 | {"test": "cat"}
The values for jsonb_set() should be passed in as parameters, not as part of the composition process.
UPDATE
Using the same sql_str and assigning the values to variables:
key_val = '{test}'
fld_val = '"cat"'
cur.execute(sql_str, {'key': key_val, 'val': fld_val})
con.commit()
cur.execute("select * from json_import")
cur.fetchone()
(1, {'test': 'cat'})
I have a Postgres table with a _text type (note the underscore) and am unable to determine how to insert the string [] into that table.
Here is my table definition:
CREATE TABLE public.newtable (
    column1 _text NULL
);
I have the postgis extension enabled:
CREATE EXTENSION IF NOT EXISTS postgis;
And my python code:
conn = psycopg2.connect()
conn.autocommit = True
cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
rows = [("[]",)]
insert_query = f"INSERT INTO newtable (column1) values %s"
psycopg2.extras.execute_values(cur, insert_query, rows, template=None, page_size=100)
This returns the following error:
psycopg2.errors.InvalidTextRepresentation: malformed array literal: "[]"
LINE 1: INSERT INTO newtable (column1) values ('[]')
^
DETAIL: "[" must introduce explicitly-specified array dimensions.
How can I insert this data? What does this error mean? And what is a _text type in Postgres?
Pulling my comments together:
CREATE TABLE public.newtable (
    column1 _text NULL
);
--_text gets transformed into text[]
\d newtable
Table "public.newtable"
Column | Type | Collation | Nullable | Default
---------+--------+-----------+----------+---------
column1 | text[] | | |
insert into newtable values ('{}');
select * from newtable ;
column1
---------
{}
In Python:
import psycopg2
con = psycopg2.connect(dbname="test", host='localhost', user='postgres')
cur = con.cursor()
cur.execute("insert into newtable values ('{}')")
con.commit()
cur.execute("select * from newtable")
cur.fetchone()
([],)
cur.execute("truncate newtable")
con.commit()
cur.execute("insert into newtable values (%s)", [[]])
con.commit()
cur.execute("select * from newtable")
cur.fetchone()
([],)
From the psycopg2 docs on type adaptation: Postgres arrays are adapted to Python lists and vice versa.
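Applied back to the original execute_values() call, a minimal sketch assuming the same newtable: pass a Python list instead of the string "[]" and the adapter produces an empty Postgres array.
import psycopg2
import psycopg2.extras

# Placeholder connection parameters; adjust to your environment.
conn = psycopg2.connect(dbname="test", host="localhost", user="postgres")
conn.autocommit = True
cur = conn.cursor()

# A Python list ([]), not the string "[]", adapts to an empty array.
rows = [([],)]
psycopg2.extras.execute_values(
    cur, "INSERT INTO newtable (column1) VALUES %s", rows, page_size=100)

cur.execute("SELECT * FROM newtable")
print(cur.fetchone())  # ([],)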
UPDATE
Finding the _text type in the Postgres system catalog pg_type. In psql:
\x
Expanded display is on.
select * from pg_type where typname = '_text';
-[ RECORD 1 ]--+-----------------
oid | 1009
typname | _text
typnamespace | 11
typowner | 10
typlen | -1
typbyval | f
typtype | b
typcategory | A
typispreferred | f
typisdefined | t
typdelim | ,
typrelid | 0
typelem | 25
typarray | 0
typinput | array_in
typoutput | array_out
typreceive | array_recv
typsend | array_send
typmodin | -
typmodout | -
typanalyze | array_typanalyze
typalign | i
typstorage | x
typnotnull | f
typbasetype | 0
typtypmod | -1
typndims | 0
typcollation | 100
typdefaultbin | NULL
typdefault | NULL
typacl | NULL
Refer to the pg_type link above for information on what the columns mean. One clue is the typcategory of A, which "Table 52.63. typcategory Codes" at that link maps to "Array types"; the typinput, typoutput, etc. values point the same way.
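For instance, the typelem of 25 in the record above is itself an oid in pg_type that resolves to the array's element type; a quick check, assuming a local test database:
import psycopg2

# Placeholder connection parameters; adjust to your environment.
con = psycopg2.connect(dbname="test", host="localhost", user="postgres")
cur = con.cursor()

# typelem = 25 in the _text record points at the element type.
cur.execute("select typname from pg_type where oid = %s", (25,))
print(cur.fetchone())  # ('text',)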
I am trying to check whether a table name exists in the database, and it throws an error saying the schema doesn't exist. Getting values from the table works fine. Following is the code that I am trying:
*** Settings ***
Library    DatabaseLibrary
Library    Collections

*** Test Cases ***
Connect to Vertica and Check if table exist
    Connect To Database Using Custom Params    vertica_python    database='pmdb',user='dbadmin', password='warehouse', host='10.166.12.242', port=5433
    Table Must Exist    DCA_ITOC_RESOURCE_D
    #${tableName}    Query    select table_name from tables where table_schema='OBR' AND table_name='DCA_ITOC_RESOURCE_D'
    #List Should Contain Value    ${tableName}    DCA_ITOC_RESOURCE_D
Test result
root#hyi01lr0bsaehost92:/var/robot-tests# pybot database-tests.robot
==============================================================================
Database-Tests
==============================================================================
Connect to Vertica and Check if table exist | FAIL |
MissingSchema: Severity: ERROR, Message: Schema "information_schema" does not exist, Sqlstate: 3F000, Routine: RangeVarGetObjid, File: /scratch_a/release/svrtar1291/vbuild/vertica/Catalog/Namespace.cpp, Line: 288, SQL: u"SELECT * FROM information_schema.tables WHERE table_name='DCA_ITOC_RESOURCE_D'"
------------------------------------------------------------------------------
Database-Tests | FAIL |
1 critical test, 0 passed, 1 failed
1 test total, 0 passed, 1 failed
==============================================================================
Output: /var/robot-tests/output.xml
Log: /var/robot-tests/log.html
Report: /var/robot-tests/report.html
This worked for me after adding a Vertica query to assertion.py in the DatabaseLibrary module:
def table_must_exist(self, tableName, sansTran=False):
    """
    Check if the table given exists in the database. Set optional input `sansTran` to True to run the command without an
    explicit transaction commit or rollback.
    For example, given we have a table `person` in a database
    When you do the following:
    | Table Must Exist | person |
    Then you will get the following:
    | Table Must Exist | person | # PASS |
    | Table Must Exist | first_name | # FAIL |
    Using optional `sansTran` to run the command without an explicit transaction commit or rollback:
    | Table Must Exist | person | True |
    """
    logger.info('Executing : Table Must Exist | %s ' % tableName)
    if self.db_api_module_name in ["cx_Oracle"]:
        selectStatement = ("SELECT * FROM all_objects WHERE object_type IN ('TABLE','VIEW') AND owner = SYS_CONTEXT('USERENV', 'SESSION_USER') AND object_name = UPPER('%s')" % tableName)
    elif self.db_api_module_name in ["sqlite3"]:
        selectStatement = ("SELECT name FROM sqlite_master WHERE type='table' AND name='%s' COLLATE NOCASE" % tableName)
    elif self.db_api_module_name in ["ibm_db", "ibm_db_dbi"]:
        selectStatement = ("SELECT name FROM SYSIBM.SYSTABLES WHERE type='T' AND name=UPPER('%s')" % tableName)
    else:
        selectStatement = ("SELECT * FROM v_catalog.columns WHERE table_schema='OBR' AND table_name='%s'" % tableName)
    #else:
    #    selectStatement = ("SELECT * FROM information_schema.tables WHERE table_name='%s'" % tableName)
    num_rows = self.row_count(selectStatement, sansTran)
    if num_rows == 0:
        raise AssertionError("Table '%s' does not exist in the db" % tableName)
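Outside Robot Framework, the same check can be sketched directly with vertica_python (connection values are taken from the question; the v_catalog query mirrors the patched branch above):
import vertica_python

# Connection values from the question; adjust to your environment.
conn_info = {'host': '10.166.12.242', 'port': 5433, 'user': 'dbadmin',
             'password': 'warehouse', 'database': 'pmdb'}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # Vertica keeps table metadata in v_catalog, not information_schema.
    cur.execute("SELECT COUNT(*) FROM v_catalog.tables "
                "WHERE table_schema = 'OBR' AND table_name = 'DCA_ITOC_RESOURCE_D'")
    print(cur.fetchone()[0] > 0)  # True if the table exists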
I am trying to do an insert query in SQL. It indicates that it succeeded, but shows no record in the database. Here's my code:
conn = MySQLdb.connect("localhost",self.user,"",self.db)
cursor = conn.cursor()
id_val = 123456
path_val = "/homes/error.path"
host_val = "123.23.45.64"
time_val = 7
cursor.execute("INSERT INTO success (id,path,hostname,time_elapsed) VALUES (%s,%s,%s,%s)", (id_val, path_val,host_val,time_val))
print "query executed"
rows = cursor.fetchall()
print rows
this outputs the following
query executed
()
It gives me no errors, but the database seems to be empty. I tried my SQL query in the mysql console and executed the following command:
INSERT INTO success (id,path,hostname,time_elapsed)
VALUES (1,'sometext','hosttext',4);
This works fine as I can see the database got populated.
mysql> SELECT * FROM success LIMIT 5;
+----+----------+----------+--------------+
| id | path | hostname | time_elapsed |
+----+----------+----------+--------------+
| 1 | sometext | hosttext | 4 |
+----+----------+----------+--------------+
So I am guessing the SQL query is right. Not sure why my cursor.execute is not taking effect. Could someone please point me in the right direction? I can't seem to figure out the bug. Thanks.
After sending your INSERT, you should commit your changes to the database:
cursor.execute("INSERT INTO success (id,path,hostname,time_elapsed) VALUES (%s,%s,%s,%s)", (id_val, path_val,host_val,time_val))
conn.commit()
When you want to read the data back, you first have to send a query, just as you did in the mysql console.
So before you fetch the data, execute the SELECT command:
cursor.execute("SELECT * FROM success")
rows = cursor.fetchall()
print rows
If you want to do it in a Pythonic way:
cursor.execute("SELECT * FROM success")
for row in cursor:
print(row)
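As an aside, if the commit after every statement is easy to forget, MySQLdb connections can be switched to autocommit; a minimal sketch with placeholder credentials:
import MySQLdb

# Placeholder credentials; adjust to your environment.
conn = MySQLdb.connect("localhost", "user", "", "db")
conn.autocommit(True)  # commit implicitly after each execute()
cursor = conn.cursor()

cursor.execute(
    "INSERT INTO success (id, path, hostname, time_elapsed) VALUES (%s, %s, %s, %s)",
    (123457, "/homes/ok.path", "123.23.45.64", 5))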
I don't see why it's not working. I have created several databases and tables with no problem, but I am stuck with this table, which was created from a Django data model. To clarify what I have done: I created a new database and table from the mysql console, tried to insert from Python, and that works. But this one is strange to me.
class Experiment(models.Model):
    user = models.CharField(max_length=25)
    filetype = models.CharField(max_length=10)
    createddate = models.DateField()
    uploaddate = models.DateField()
    time = models.CharField(max_length=20)
    size = models.CharField(max_length=20)
    located = models.CharField(max_length=50)
Here is the view in the mysql console:
mysql> describe pmass_experiment;
+-------------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+-------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| user | varchar(25) | NO | | NULL | |
| filetype | varchar(10) | NO | | NULL | |
| createddate | date | NO | | NULL | |
| uploaddate | date | NO | | NULL | |
| time | varchar(20) | NO | | NULL | |
| size | varchar(20) | NO | | NULL | |
| located | varchar(50) | NO | | NULL | |
+-------------+-------------+------+-----+---------+----------------+
8 rows in set (0.01 sec)
The pmass_experiment table above was created by the Django ORM after python manage.py syncdb.
Now I am trying to insert data into pmass_experiment through Python MySQLdb:
import MySQLdb
import datetime, time
import sys

conn = MySQLdb.connect(
    host="localhost",
    user="root",
    passwd="root",
    db="experiment")
cursor = conn.cursor()
user = 'tchand'
ftype = 'mzml'
size = '10MB'
located = 'c:\\'
date = datetime.date.today()
time = str(datetime.datetime.now())[10:19]

# Insert into database
sql = """INSERT INTO pmass_experiment (user,filetype,createddate,uploaddate,time,size,located)
         VALUES (user, ftype, date, date, time, size, located)"""
try:
    # Execute the SQL command
    cursor.execute(sql)
    # Commit your changes in the database
    conn.commit()
except:
    # Rollback in case there is any error
    conn.rollback()

# disconnect from server
conn.close()
But unfortunately nothing is inserted. I am guessing it may be due to the primary key (id) in the table not incrementing automatically.
mysql> select * from pmass_experiment;
Empty set (0.00 sec)
Can you simply point out my mistake?
Thanks
sql = """INSERT INTO pmass_experiment (user,filetype,createddate,uploaddate,time,size,located)
VALUES (user, ftype, date, date, time, size, located)"""
Parametrize your sql and pass in the values as the second argument to cursor.execute:
sql = """INSERT INTO pmass_experiment (user,filetype,createddate,uploaddate,time,size,located)
VALUES (%s, %s, %s, %s, %s, %s, %s)"""
try:
# Execute the SQL command
cursor.execute(sql,(user, ftype, date, date, time, size, located))
# Commit your changes in the database
conn.commit()
except Exception as err:
# logger.error(err)
# Rollback in case there is any error
conn.rollback()
It is a good habit to always parametrize your SQL, since this helps prevent SQL injection.
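For illustration, a hypothetical hostile value shows what parametrization protects against; with plain string interpolation the quote in the value breaks out of the literal and the rest of the input becomes SQL text:
# Hypothetical hostile value, for illustration only.
user_input = "x'); DROP TABLE pmass_experiment; -- "

# Unsafe: the value is pasted straight into the statement text.
print("INSERT INTO pmass_experiment (user) VALUES ('%s')" % user_input)
# INSERT INTO pmass_experiment (user) VALUES ('x'); DROP TABLE pmass_experiment; -- ')

# Safe: cursor.execute(sql, params) lets the driver escape the value
# and send it as data, never as SQL text.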
The original sql
INSERT INTO pmass_experiment (user,filetype,createddate,uploaddate,time,size,located)
VALUES (user, ftype, date, date, time, size, located)
seems to be syntactically valid: the unquoted names in the VALUES list are parsed as column references, which refer to the (not yet set) values of the new row itself. An experiment in the mysql shell shows it inserts a row of NULL values:
mysql> insert into foo (first,last,value) values (first,last,value);
Query OK, 1 row affected (0.00 sec)
mysql> select * from foo order by id desc;
+-----+-------+------+-------+
| id | first | last | value |
+-----+-------+------+-------+
| 802 | NULL | NULL | NULL |
+-----+-------+------+-------+
1 row in set (0.00 sec)
So I'm not sure why you are not seeing any rows committed to the database table.
Nevertheless, the original SQL is probably not doing what you intend.