I'm trying to create a schema in a Postgres database using psycopg2.
For some reason the schema is not created, and later on the code crashes because it tries to refer to the missing schema. The connection is set to autocommit mode, which definitely works, because I can create a database with this specific connection.
For debugging purposes I have wrapped every step in its own try/except statement.
The code is below; as it stands, it does not raise any exceptions, but the follow-up code crashes because the schema is missing.
import logging
import psycopg2
from psycopg2 import sql

def createDB(dbName, connString):
    conn = psycopg2.connect(connString)
    conn.set_session(autocommit=True)  # autocommit must be True, else CREATE DATABASE will fail: https://www.psycopg.org/docs/usage.html#transactions-control
    cursor = conn.cursor()
    createDB = sql.SQL('CREATE DATABASE {};').format(
        sql.Identifier(dbName)
    )
    createSchema = sql.SQL('CREATE SCHEMA IF NOT EXISTS schema2;')
    searchpath = sql.SQL('ALTER DATABASE {} SET search_path TO public, schema2;').format(
        sql.Identifier(dbName)
    )
    dropDB = sql.SQL('DROP DATABASE IF EXISTS {};').format(
        sql.Identifier(dbName)
    )
    try:
        cursor.execute(dropDB)
    except Exception as e:
        print('drop DB failed')
        logging.error(e)
        conn.close()
        exit()
    try:
        cursor.execute(createDB)
    except Exception as e:
        print('create DB failed')
        logging.error(e)
        conn.close()
        exit()
    try:
        cursor.execute(createSchema)
        print('schema created')
    except Exception as e:
        print('create schema failed')
        logging.error(e)
        conn.close()
        exit()
    try:
        cursor.execute(searchpath)
    except Exception as e:
        print('set searchpath failed')
        logging.error(e)
        conn.close()
        exit()
    conn.close()
Adding an explicit commit does not do the trick either.
What am I missing?
EDIT
I have added a small screenshot with the console logs. As you can see, the code below gets executed.
EDIT 2
Out of sheer curiosity, I have tried to execute this very SQL statement in pgAdmin:
CREATE SCHEMA IF NOT EXISTS schema2
and it works just fine, which shows that my SQL is not wrong, so back to square one.
EDIT 3 -- Solution
So I have come up with a solution, thanks to @jjanes for pointing me in the right direction. This function does not connect to a specific database but to the server as a whole, since I'm using it to create new databases, hence the connection string looks something like this:
user=postgres password=12345 host=localhost port=5432
This allows me to perform server-level operations like CREATE DATABASE and DROP DATABASE. But schemas are a database-level operation. Moving the exact same logic to the part of the code that is connected to the newly created database works like a charm.
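A minimal sketch of that two-step flow, using the connection string from above and a placeholder database name:

import psycopg2
from psycopg2 import sql

server_conn_string = 'user=postgres password=12345 host=localhost port=5432'
db_name = 'newdb'  # placeholder name

# Server-level work: CREATE DATABASE on a connection without a dbname.
conn = psycopg2.connect(server_conn_string)
conn.set_session(autocommit=True)
with conn.cursor() as cur:
    cur.execute(sql.SQL('CREATE DATABASE {};').format(sql.Identifier(db_name)))
conn.close()

# Database-level work: reconnect to the newly created database, then create the schema.
conn = psycopg2.connect(server_conn_string + ' dbname=' + db_name)
conn.set_session(autocommit=True)
with conn.cursor() as cur:
    cur.execute('CREATE SCHEMA IF NOT EXISTS schema2;')
conn.close()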
You create the schema in the original database specified by the connect string. Once you create the new database, you need to connect to it in order to work in it. Otherwise, you are just working in the old database.
Related
I want to insert given values from my Docker app service into the MariaDB service.
The connection has been established, because I can execute SELECT * FROM via the mariadb.connection.cursor.
First of all I create the connection:
import sys
import mariadb

def get_conn() -> mariadb.connection:
    try:
        conn = mariadb.connect(
            user="XXX",
            database="XXX",
            password="XXX",
            host="db",
            port=33030,
        )
    except mariadb.Error as e:
        print(f'Error connecting to MariaDB Platform: {e}')
        sys.exit(1)
    return conn
Then I create a mariadb.connection.cursor object:
def get_cur() -> mariadb.connection.cursor:
    conn = get_conn()
    cur = conn.cursor()
    return cur
Finally I want to insert new values in the table testing:
def write_data():
    cursor = get_cur()
    conn = get_conn()
    cursor.execute('INSERT INTO testing (title) VALUE ("2nd automatic entry");')
    print("Executed Query")
    conn.commit()
    cursor.close()
    conn.close()
    print("Closed Connection")
    return True
To test whether the entries are inserted, I started with one manual entry, then executed the write_data() function, and to finish off I inserted a second manual entry via the console.
After the procedure the table looks like:
Note that the id is on AUTO_INCREMENT. So the function write_data() was not skipped entirely, because the second manual entry got the id 3 and not 2.
You're committing a transaction in a different connection than the one your cursor belongs to.
get_conn() creates a new database connection and returns it.
get_cur() calls get_conn, which gives it its own new connection, creates a cursor object belonging to that connection, and returns the cursor.
In write_data, you obtain a cursor by calling get_cur - behind the scenes that opens connection A and returns a cursor belonging to it.
Then you call get_conn - that gives you a separate connection B.
You run execute on the cursor (connection A) but commit connection B, so the INSERT on connection A is never committed.
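A minimal sketch of the fix, keeping the cursor and the commit on the same connection (reusing get_conn and the testing table from the question):

def write_data():
    conn = get_conn()          # one connection for everything
    cursor = conn.cursor()     # the cursor belongs to that same connection
    cursor.execute('INSERT INTO testing (title) VALUES ("2nd automatic entry");')
    conn.commit()              # commit on the connection the cursor belongs to
    cursor.close()
    conn.close()
    return True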
PS: This was a really fun problem to debug, thanks :)
It's really easy, in a new table with new code, to unintentionally do an INSERT without a COMMIT. That is especially true with the Python connector, which does not enable autocommit by default. A dropped connection with an open transaction rolls back the transaction. And a rolled-back INSERT does not release the auto-incremented ID value for reuse.
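A quick way to see that behaviour, sketched against the question's testing table (with the placeholder credentials from above):

import mariadb

conn = mariadb.connect(user="XXX", password="XXX", database="XXX", host="db", port=33030)
cur = conn.cursor()

cur.execute('INSERT INTO testing (title) VALUES ("rolled back row");')
conn.rollback()   # the row disappears, but its auto-increment id has already been consumed

cur.execute('INSERT INTO testing (title) VALUES ("committed row");')
conn.commit()

cur.execute('SELECT id, title FROM testing;')
print(cur.fetchall())  # the committed row's id shows a gap where the rollback happened

cur.close()
conn.close()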
This kind of thing happens, and it's no cause for alarm.
A wise database programmer won't rely on a set of autoincrementing IDs with no gaps in it.
try:
    connection = mysql.connector.connect(host='localhost', database='USER', user='root', password='password')
    sql_select_Query = "select * from AuthSys WHERE mac = '%s'" % mac
    cursor = connection.cursor()
    cursor.execute(sql_select_Query)
    row_headers = [x[0] for x in cursor.description]
    records = cursor.fetchall()
except mysql.connector.Error as e:
    return [e]
finally:
    if connection.is_connected():
        connection.close()
        cursor.close()
        print("MySQL connection is closed")
I want to store host='localhost', database='USER', user='root', password='password' securely in my Python project, so that whoever uses my script does not get access to my database.
Note: I am new to Stack Overflow. If I wrote something wrong, please point me in the right direction. Thanks in advance.
You should probably put the credentials in a separate config file that isn't deployed with the project, and pass the path of this file to the main entry point of the application, something like this:
python main.py --config=/your-path/to/your-config-file.ini
You will also need to parse this --config argument and then read and parse the your-config-file.ini file.
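A rough sketch of that wiring with argparse and configparser (the [database] section and key names in the .ini file are assumptions):

import argparse
import configparser

import mysql.connector

# python main.py --config=/your-path/to/your-config-file.ini
parser = argparse.ArgumentParser()
parser.add_argument('--config', required=True)
args = parser.parse_args()

# The .ini file is assumed to contain a [database] section with these keys.
config = configparser.ConfigParser()
config.read(args.config)
db = config['database']

connection = mysql.connector.connect(
    host=db['host'],
    database=db['database'],
    user=db['user'],
    password=db['password'],
)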
If you don't have too many such settings, one common option is to get them from system environment variables.
import os

user = os.environ["myuser"]
password = os.environ["mypassword"]
connection = mysql.connector.connect(host='localhost', database='USER', user=user, password=password)
See https://12factor.net/ factor 3.
I’d prefix all app settings environment names with something common, giving bkapp_user, bkapp_password.
I need to execute certain MySQL commands in a Python script, which is a straightforward task. For testing purposes I have boiled down the commands to this:
import mysql.connector

script = """
CREATE DATABASE `new_project`;
CREATE TABLE `new_project`.`category` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) NOT NULL,
  UNIQUE KEY `unq_name` (`name`),
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
"""

connection = mysql.connector.connect(
    host="localhost",
    port="3306",
    user="root",
    passwd="somepassword",
)
cursor = connection.cursor()

try:
    print("begin execution")
    cursor.execute(script, multi=True)
    warnings = cursor.fetchwarnings()
    if warnings:
        for warning in warnings:
            print(warning)
    connection.commit()
    cursor.close()
    connection.close()
    print("connection closed")
except mysql.connector.Error as err:
    print(err.msg)
The user credentials are replaced with the correct information when running the script.
The output of this script is
begin execution
connection closed
with no errors, warnings or other output. The database new_project is not created. When I run the same MySQL commands in another interface, they work as expected and create the database and the table.
I must be overlooking something very simple.
The documentation of execute mentions that the method returns an iterator with the results of each query when multi=True. It seems the queries do not actually do anything until that iterator has been processed, regardless of commit(). However, CREATE statements do not produce any results, and trying to iterate over the return value of execute raises an exception: generator raised StopIteration. This is related to a bug in the connector module and has been fixed in version 8.0.13, which supports Python 3.7.
The solution is to always iterate over the return value of execute, even if no return data is expected, and to upgrade the connector module. If the upgrade is not feasible, it is possible to catch the failed iteration and continue.
The fixed code (including the part for earlier versions of the connector module) now looks something like this:
try:
    results = cursor.execute(script, multi=True)
    try:
        # Consume the iterator even though CREATE statements return no rows;
        # the statements are not executed until the iterator is processed.
        for result in results:
            pass
    except Exception as e:
        # Older connector versions raise "generator raised StopIteration" here.
        pass
    warnings = cursor.fetchwarnings()
    if warnings:
        for warning in warnings:
            print(warning)  # handle warning
    connection.commit()
    cursor.close()
    connection.close()
except mysql.connector.Error as err:
    print(err.msg)  # handle error
Try using password="somepassword" instead of passwd="somepassword", and remove multi=True. It'll create a warning but will still execute both of your statements.
I am trying to update my DB2 database using pyodbc in Python. The SQL statement runs normally without errors when run on the database directly. When I run the code below, I get no errors and the code executes successfully, but when I query the database, the changes are not saved.
try:
    conn2 = pyodbc.connect("DRIVER={iSeries Access ODBC Driver};SYSTEM=" + Config_Main.iseriesServer + ";DATABASE=" + Config_Main.iseriesDB + ";UID=" + Config_Main.iseriesUser + ";PWD=" + Config_Main.iseriesPass)
    db2 = conn2.cursor()
    for row in encludeData:
        count = len(str(row[2]))
        srvid = row[2]
        if count < 10:
            sql3 = "UPDATE SVCEN2DEV.SRVMAST SET svbrch = ? WHERE svtype != '*DCS-' AND svacct = ? AND svcid LIKE '%?' and svbrch = ?"
            db2.execute(sql3, (row[4], row[1], "%" + str(srvid), row[5]))
        else:
            sql3 = "UPDATE SVCEN2DEV.SRVMAST SET svbrch = ? WHERE svtype != '*DCS-' AND svacct = ? AND svcid = ? and svbrch = ?"
            db2.execute(sql3, (row[4], row[1], srvid, row[5]))
    conn2.commit()
except pyodbc.Error as e:
    logging.error(e)
I have tried setting conn2.autocommit = True, and I have also tried moving the conn2.commit() inside the for loop to commit after each iteration. I also tried a different driver: {IBM i Access ODBC Driver}.
EDIT:
Sample of encludeData
['4567890001','4567890001','1234567890','1234567890','foo','bar']
After changing the except statement to catch general exceptions, the code above now produces this error:
IntegrityError('23000', '[23000] [IBM][System i Access ODBC Driver][DB2 for i5/OS]SQL0803 - Duplicate key value specified. (-803) (SQLExecDirectW)')
As OP found out, the application-layer language, Python, may not surface specific database exceptions such as duplicate index or foreign key violations; these can fail silently or only be logged on the server side. Usually, errors that prevent the SQL query from running at all, like incorrect identifiers and syntax errors, will raise an error on the client side.
Therefore, as a programming best practice, use exception handling such as Python's try/except/finally (or the equivalent in other general-purpose languages) around any external API such as a database connection, in order to catch and properly handle runtime issues.
The code below will print any exception raised by the statements in the try block, including connection and query execution, and regardless of success or failure will run the finally statements.
try:
    conn2 = pyodbc.connect(...)
    db2 = conn2.cursor()

    sql = "..."
    db2.execute(sql, params)
    conn2.commit()
except Exception as e:
    print(e)
finally:
    db2.close()
    conn2.close()
I use psycopg2 to access my Postgres database in Python. My function should create a new database; the code looks like this:
def createDB(host, username, dbname):
    adminuser = settings.DB_ADMIN_USER
    adminpass = settings.DB_ADMIN_PASS
    try:
        conn = psycopg2.connect(user=adminuser, password=adminpass, host=host)
        cur = conn.cursor()
        cur.execute("CREATE DATABASE %s OWNER %s" % (nospecial(dbname), nospecial(username)))
        conn.commit()
    except Exception, e:
        raise e
    finally:
        cur.close()
        conn.close()

def nospecial(s):
    pattern = re.compile('[^a-zA-Z0-9_]+')
    return pattern.sub('', s)
When I call createDB my postgres server throws an error:
CREATE DATABASE cannot run inside a transaction block
with the error code 25001, which stands for "ACTIVE SQL TRANSACTION".
I'm pretty sure that there is no other connection running at the same time and every connection I used before calling createDB is shut down.
It looks like your cursor() is actually a transaction:
http://initd.org/psycopg/docs/cursor.html#cursor
Cursors created from the same connection are not isolated, i.e., any changes done to the database by a cursor are immediately visible by the other cursors. Cursors created from different connections can or can not be isolated, depending on the connections’ isolation level. See also rollback() and commit() methods.
Skip the cursor and just execute your query. Drop commit() as well, you can't commit when you don't have a transaction open.
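One common way around the "cannot run inside a transaction block" error (and what the psycopg2 question at the top of this page relies on) is to put the connection into autocommit mode before issuing CREATE DATABASE; a minimal sketch with placeholder credentials and names:

import psycopg2

conn = psycopg2.connect(user='adminuser', password='adminpass', host='localhost')
conn.set_session(autocommit=True)  # CREATE DATABASE must run outside a transaction block

cur = conn.cursor()
cur.execute("CREATE DATABASE newdb OWNER someuser")  # database and owner names are placeholders
cur.close()
conn.close()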