How can I find a missing table in Postgres - Python

I'm using the Python psycopg2 library to create a table in a PostgreSQL database:
self.conn = pg.connect(host='localhost', user='eba', password='****', database='eba')
sql = '''CREATE TABLE public.tempimport (
    id integer NOT NULL DEFAULT nextval('tempimport_id_seq'::regclass),
    tablename character varying(32) COLLATE pg_catalog."default",
    index_ character varying(32) COLLATE pg_catalog."default",
    CONSTRAINT tempimport_pkey PRIMARY KEY (id)
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE public.tempimport
    OWNER to eba;'''
cur = self.conn.cursor()
cur.execute(sql)
cur.close()
Afterwards, if I run:
cur = self.conn.cursor()
cur.execute("SELECT count(*) FROM pg_catalog.pg_tables WHERE tablename = 'tempimport'")
x = cur.fetchall()
print(x)
I get 1 as the answer, that is, the table exists.
However, if I log in to the same database with the same user/password using pgAdmin and run the same SELECT COUNT(*)... statement there, I get 0 as the answer.
Where is the table I created from code?
How can I find out where it is?

Try using information_schema:
SELECT *
FROM information_schema.tables
WHERE table_name = 'tempimport'
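If the table shows up from the creating connection but still not from pgAdmin, one more thing worth checking: psycopg2 opens a transaction implicitly, so DDL executed through it stays invisible to other sessions until it is committed. A minimal sketch, reusing the names from the question:
cur = self.conn.cursor()
cur.execute(sql)    # CREATE TABLE runs inside an open transaction
self.conn.commit()  # without this, other sessions (e.g. pgAdmin) never see the table
cur.close()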

Related

Issue while trying to select record in mysql using Python

Error Message
You have an error in your SQL syntax; check the manual that
corresponds to your MariaDB server version for the right syntax to use
near '%s' at line 1
MySQL Database Table
CREATE TABLE `tblorders` (
    `order_id` int(11) NOT NULL,
    `order_date` date NOT NULL,
    `order_number` varchar(50) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
ALTER TABLE `tblorders`
    ADD PRIMARY KEY (`order_id`),
    ADD UNIQUE KEY `order_number` (`order_number`);
ALTER TABLE `tblorders`
    MODIFY `order_id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=4;
Code
import mysql.connector

mydb = mysql.connector.connect(host="localhost", user="root", password="", database="mydb")
mycursor = mydb.cursor()
sql = "Select order_id from tblorders where order_number=%s"
val = ("1221212")
mycursor.execute(sql, val)
Am I missing anything?
You must pass the arguments as a list or a tuple, but a single value in parentheses is just a scalar: ("1221212") is a string, not a one-element tuple.
Here are some workarounds to ensure that val is interpreted as a tuple or a list:
sql = "Select order_id from tblorders where order_number=%s"
val = ("1221212",)
mycursor.execute(sql, val)
sql = "Select order_id from tblorders where order_number=%s"
val = ["1221212"]
mycursor.execute(sql, val)
This is a thing about Python that I always find weird, but it makes a kind of sense.
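A quick way to see the difference (the trailing comma is what makes a one-element tuple):
print(type(("1221212")))   # <class 'str'>   - parentheses are just grouping
print(type(("1221212",)))  # <class 'tuple'> - trailing comma makes it a tuple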
In case you want to insert data, you have to modify your SQL. Use INSERT instead of SELECT, like this:
INSERT INTO tblorders (order_number) VALUES ("122121");
That statement will add a new record to the table. Also note that with the MariaDB connector for Python (the mariadb package) the placeholder is ? instead of the %s used by mysql.connector:
sql = "INSERT INTO tblorders (order_number) VALUES (?);"
val = "1231231"
mycursor.execute(sql, [val])

How to insert values into a postgresql database with serial id using sqlalchemy

I have a function that I use to update tables in PostgreSQL. It works great to avoid duplicate insertions by creating a temp table and dropping it upon completion. However, I have a few tables with serial ids and I have to pass the serial id in a column. Otherwise, I get an error that the keys are missing. How can I insert values in those tables and have the serial key get assigned automatically? I would prefer to modify the function below if possible.
def export_to_sql(df, table_name):
    from sqlalchemy import create_engine
    engine = create_engine(f'postgresql://{user}:{password}@{host}:5432/{user}')
    df.to_sql(con=engine,
              name='temporary_table',
              if_exists='append',
              index=False,
              method='multi')
    with engine.begin() as cnx:
        insert_sql = f'INSERT INTO {table_name} (SELECT * FROM temporary_table) ON CONFLICT DO NOTHING; DROP TABLE temporary_table'
        cnx.execute(insert_sql)
Code used to create the tables:
CREATE TABLE symbols
(
    symbol_id serial NOT NULL,
    symbol varchar(50) NOT NULL,
    CONSTRAINT PK_symbols PRIMARY KEY ( symbol_id )
);
CREATE TABLE tweet_symols(
    tweet_id varchar(50) REFERENCES tweets,
    symbol_id int REFERENCES symbols,
    PRIMARY KEY (tweet_id, symbol_id),
    UNIQUE (tweet_id, symbol_id)
);
CREATE TABLE hashtags
(
    hashtag_id serial NOT NULL,
    hashtag varchar(140) NOT NULL,
    CONSTRAINT PK_hashtags PRIMARY KEY ( hashtag_id )
);
CREATE TABLE tweet_hashtags
(
    tweet_id varchar(50) NOT NULL,
    hashtag_id integer NOT NULL,
    CONSTRAINT FK_344 FOREIGN KEY ( tweet_id ) REFERENCES tweets ( tweet_id )
);
CREATE INDEX fkIdx_345 ON tweet_hashtags
(
    tweet_id
);
The INSERT statement does not define the target columns, so Postgresql will attempt to insert values into a column that was defined as SERIAL.
We can work around this by providing a list of target columns, omitting the serial types. To do this we use SQLAlchemy to fetch the metadata of the table that we are inserting into from the database, then build the list of target columns. SQLAlchemy doesn't tell us if a column was created using SERIAL, but we will assume that it was if it is a primary key and is set to autoincrement. Primary key columns defined with GENERATED ... AS IDENTITY will also be filtered out - this is probably desirable, as they behave in the same way as SERIAL columns.
import sqlalchemy as sa

def export_to_sql(df, table_name):
    engine = sa.create_engine(f'postgresql://{user}:{password}@{host}:5432/{user}')
    df.to_sql(con=engine,
              name='temporary_table',
              if_exists='append',
              index=False,
              method='multi')
    # Fetch table metadata from the database
    table = sa.Table(table_name, sa.MetaData(), autoload_with=engine)
    # Get the names of columns to be inserted,
    # assuming auto-incrementing PKs are serial types
    column_names = ','.join(
        [f'"{c.name}"' for c in table.columns
         if not (c.primary_key and c.autoincrement)]
    )
    with engine.begin() as cnx:
        insert_sql = sa.text(
            f'INSERT INTO {table_name} ({column_names}) (SELECT * FROM temporary_table) ON CONFLICT DO NOTHING; DROP TABLE temporary_table'
        )
        cnx.execute(insert_sql)
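A hypothetical call against the symbols table above: the DataFrame carries only the symbol column, and symbol_id is left to the sequence.
import pandas as pd
df = pd.DataFrame({'symbol': ['AAPL', 'MSFT']})
export_to_sql(df, 'symbols')  # symbol_id is assigned by the serial sequence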

How to upsert pandas DataFrame to MySQL with SQLAlchemy

I'm pushing data from a DataFrame into MySQL. Right now it only adds new data to the table if the data does not exist (appending). This works perfectly, but I also want my code to check if a record already exists and, if so, update it. So I need append + update. I really don't know how to start fixing this as I got stuck... has someone tried this before?
This is my code:
engine = create_engine("mysql+pymysql://{user}:{pw}@localhost/{db}"
                       .format(user="root",
                               pw="*****",
                               db="my_db"))
my_df.to_sql('my_table', con=engine, if_exists='append')
You can use the following solution on the DB side:
First, create a table to receive the data inserted from Pandas (let's call it test):
CREATE TABLE `test` (
    `id` INT(11) NOT NULL AUTO_INCREMENT,
    `name` VARCHAR(100) NOT NULL,
    `capacity` INT(11) NOT NULL,
    PRIMARY KEY (`id`)
);
Second, create a table for the resulting data (let's call it cumulative_test) with exactly the same structure as test:
CREATE TABLE `cumulative_test` (
    `id` INT(11) NOT NULL AUTO_INCREMENT,
    `name` VARCHAR(100) NOT NULL,
    `capacity` INT(11) NOT NULL,
    PRIMARY KEY (`id`)
);
Third, set up a trigger so that each insert into the test table inserts or updates a record in the second table:
DELIMITER $$
CREATE
    /*!50017 DEFINER = 'root'@'localhost' */
    TRIGGER `before_test_insert` BEFORE INSERT ON `test`
    FOR EACH ROW BEGIN
        DECLARE _id INT;
        SELECT id INTO _id
        FROM `cumulative_test` WHERE `cumulative_test`.`name` = NEW.name;
        IF _id IS NOT NULL THEN
            UPDATE cumulative_test
            SET `cumulative_test`.`capacity` = `cumulative_test`.`capacity` + NEW.capacity
            WHERE `cumulative_test`.`id` = _id;
        ELSE
            INSERT INTO `cumulative_test` (`name`, `capacity`)
            VALUES (NEW.name, NEW.capacity);
        END IF;
    END;
$$
DELIMITER ;
So you just keep inserting values into the test table and get the accumulated results in the second table. The logic inside the trigger can be adapted to your needs.
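From the Python side nothing changes - with the trigger in place, a plain append is enough. A sketch assuming the engine from the question:
import pandas as pd
my_df = pd.DataFrame({'name': ['tank A'], 'capacity': [10]})
my_df.to_sql('test', con=engine, if_exists='append', index=False)
# the BEFORE INSERT trigger keeps cumulative_test up to date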
Similar to the approach used for PostgreSQL here, you can use INSERT ... ON DUPLICATE KEY UPDATE in MySQL:
import pandas as pd
import sqlalchemy as sa

# engine built as in the question
engine = sa.create_engine("mysql+pymysql://root:*****@localhost/my_db")

with engine.begin() as conn:
    # step 0.0 - create test environment
    conn.execute(sa.text("DROP TABLE IF EXISTS main_table"))
    conn.execute(
        sa.text(
            "CREATE TABLE main_table (id int primary key, txt varchar(50))"
        )
    )
    conn.execute(
        sa.text(
            "INSERT INTO main_table (id, txt) VALUES (1, 'row 1 old text')"
        )
    )
    # step 0.1 - create DataFrame to UPSERT
    df = pd.DataFrame(
        [(2, "new row 2 text"), (1, "row 1 new text")], columns=["id", "txt"]
    )
    # step 1 - create temporary table and upload DataFrame
    conn.execute(
        sa.text(
            "CREATE TEMPORARY TABLE temp_table (id int primary key, txt varchar(50))"
        )
    )
    df.to_sql("temp_table", conn, index=False, if_exists="append")
    # step 2 - merge temp_table into main_table
    conn.execute(
        sa.text(
            """\
            INSERT INTO main_table (id, txt)
            SELECT id, txt FROM temp_table
            ON DUPLICATE KEY UPDATE txt = VALUES(txt)
            """
        )
    )
    # step 3 - confirm results
    result = conn.execute(
        sa.text("SELECT * FROM main_table ORDER BY id")
    ).fetchall()
    print(result)  # [(1, 'row 1 new text'), (2, 'new row 2 text')]

MySQL (python 3.6) - SELECT * FROM table - returns number of rows instead of table

I am working with MySQL but I am seeing some unexpected behaviour.
I have past experience with SQLite, but I guess I am missing something here.
Using the query SELECT * FROM tableName I would expect the content of the table as output.
Instead I get an int: the count of rows in the table.
Here is the piece of code I am using.
import MySQLdb
conn=MySQLdb.connect(host="xxx",user="xxx",passwd="xxx")
cursor = conn.cursor()
cursor.execute("create database if not exists Test;")
cursor.execute("use Test;")
cursor.execute("create table if not exists City (id int not null primary key auto_increment, city varchar(50), unique(city));")
cursor.execute("insert into City (city) values ('Firenze');")
cursor.execute("insert into City (city) values ('Roma');")
conn.commit()
print(cursor.execute("select city from City;"))
I would expect to get:
Firenze
Roma
Instead I get:
2
If I run the same query from a SQL client I get the expected output. Any clever ideas?
Thanks :)
You are missing the fetchall() call in your code.
fetchall() fetches the result rows of the last executed statement.
import MySQLdb

conn = MySQLdb.connect(host="xxx", user="xxx", passwd="xxx")
cursor = conn.cursor()
cursor.execute("create database if not exists Test;")
cursor.execute("use Test;")
cursor.execute("create table if not exists City (id int not null primary key auto_increment, city varchar(50), unique(city));")
cursor.execute("insert into City (city) values ('Firenze');")
cursor.execute("insert into City (city) values ('Roma');")
conn.commit()
cursor.execute("select city from City;")  # returns the row count, not the rows
myresult = cursor.fetchall()
for x in myresult:
    print(x)
The thing is that print(cursor.execute("select city from City;")) prints the return value of execute(), which is the number of rows, i.e. the row count.
For the complete records use something like this
myresult = cursor.fetchall()
for x in myresult:
    print(x)

How to use a MySQL table name as a variable in Python?

I want to create a new MySQL table for each unique user, but I'm getting these errors:
1. 'bytes' object has no attribute 'encode'
2. Can't convert 'bytes' object to str implicitly
3. You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''Text'(
id int(11) NOT NULL AUTO_INCREMENT,
UserName text NOT NULL' at line 1
c.execute("CREATE TABLE IF NOT EXISTS {table_name}".format(table_name=belekas), (belekas) + """(
`id` int(11) NOT NULL AUTO_INCREMENT,
`UserName` text NOT NULL,
`Data` date NOT NULL,
`Laikas` time NOT NULL,
`KeyStrokes` text NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8""")
con.commit()
c.execute("INSERT INTO {table_name} VALUES (id, %s, Data, Laikas, %s)".format(table_name=belekas),
(belekas, vartotojas, tekstas))
con.commit()
I tried using:
c.execute("CREATE TABLE IF NOT EXISTS" + vartotojas + """(
and this:
c.execute("CREATE TABLE IF NOT EXISTS" + repr(vartotojas.decode('utf-8')) + """(
and this:
c.execute("CREATE TABLE IF NOT EXISTS {this_table}".format(this_table=vartotojas), (vartotojas.encode("utf-8")) + """(
Can someone suggest a solution for this problem?
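For reference, an identifier (table or column name) cannot be bound as a %s parameter - the driver would quote it as a string literal, which produces exactly the syntax error above. A common workaround, sketched here with a hypothetical quoted_name() helper and the names from the question: decode and whitelist-validate the name, splice it into the statement, and keep %s placeholders for the values only:
import re

def quoted_name(name):
    # Hypothetical helper: decode and whitelist-validate the identifier,
    # since identifiers cannot be passed as %s parameters.
    if isinstance(name, bytes):
        name = name.decode('utf-8')
    if not re.fullmatch(r'\w+', name):
        raise ValueError('unsafe table name: %r' % name)
    return '`' + name + '`'

table = quoted_name(belekas)
c.execute("""CREATE TABLE IF NOT EXISTS {} (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `UserName` text NOT NULL,
    `Data` date NOT NULL,
    `Laikas` time NOT NULL,
    `KeyStrokes` text NOT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8""".format(table))
con.commit()
c.execute("INSERT INTO {} (`UserName`, `KeyStrokes`) VALUES (%s, %s)".format(table),
          (vartotojas, tekstas))
con.commit()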
