Is there a way to connect to a PostgreSQL database from Python, similar to the way sqlite3 connects to a database?
For example, with sqlite a connection is created with conn = sqlite3.connect(curruser.dbname). What is the equivalent connection syntax for Postgres?
You can use the psycopg2 driver; the connection syntax is similar, except that you'll need to specify some more information in a connection string:
import psycopg2

conn_string = "host='localhost' dbname='my_database' user='postgres' password='secret'"
conn = psycopg2.connect(conn_string)
Here are some examples: Using psycopg2 with PostgreSQL
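The pieces of that connection string can also be assembled programmatically, which keeps credentials out of the source file. A minimal sketch (the make_conn_string helper and its parameters are made up for illustration):

```python
# Hypothetical helper: build a libpq-style key/value connection string
# from keyword arguments (e.g. loaded from a config file).
def make_conn_string(**params):
    return " ".join("{}='{}'".format(key, val) for key, val in params.items())

conn_string = make_conn_string(host="localhost", dbname="my_database",
                               user="postgres", password="secret")
print(conn_string)  # host='localhost' dbname='my_database' user='postgres' password='secret'
# psycopg2.connect(conn_string) would then open the connection
# (that step needs a running PostgreSQL server, so it is not executed here).
```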
Related
How can I make the connection to the MSSQL database using pymssql?
Below is a snippet of my code where I try to make the connection encrypted
conn = pymssql.connect(
    server=conn.host,
    user=conn.login,
    password=conn.password,
    database=self.schema or conn.schema,
    port=conn.port,
)
return conn
I am using a simple Python script to connect to PostgreSQL; later, the script will also create tables in PostgreSQL.
My code is:
try:
    conn = "postgresql://postgres:<password>#localhost:5432/<database_name>"
    print('connected')
except:
    print('not connected')
conn.close()
When I run python connect.py (my file name), it throws this error:
Instance of 'str' has no 'commit' member
I'm pretty sure this is because it treats 'conn' as a string instead of a database connection. I've followed this documentation (33.1.1.2) but I'm not sure if I'm doing it right. How do I correct this code so that the script connects to my PostgreSQL server instead of just creating a string?
P.S.: I'm quite new to this.
You are trying to call a method on a string object; assigning a connection string to a variable does not open a connection.
Instead, you should first establish a connection to your database.
You can use psycopg2, which is a common Python driver for PostgreSQL. (Note that psycopg2.connect() also accepts a full libpq connection string or postgresql:// URI as its first argument.)
After installing psycopg2 you can do the following to establish a connection and query your database:
import psycopg2

connection = None
try:
    connection = psycopg2.connect(user="yourUser",
                                  password="yourPassword",
                                  host="serverHost",
                                  port="serverPort",
                                  database="databaseName")
    cursor = connection.cursor()
except (Exception, psycopg2.Error) as error:
    print("Error while connecting:", error)
finally:
    # guard against a NameError when connect() itself failed
    if connection is not None:
        connection.close()  # this also releases any open cursors
You can follow this tutorial
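As a side note for anyone who, like the asker, starts from a postgresql:// URL: psycopg2.connect() also accepts a single libpq URI, as long as the separator before the host is '@' rather than '#'. A sketch of building one (the make_postgres_uri helper is made up; the percent-encoding comes from the standard library):

```python
from urllib.parse import quote

# Hypothetical helper: build a postgresql:// URI, percent-encoding the
# password so characters like '@' or '/' don't break the URI syntax.
def make_postgres_uri(user, password, host, port, dbname):
    return "postgresql://{}:{}@{}:{}/{}".format(
        user, quote(password, safe=""), host, port, dbname)

uri = make_postgres_uri("postgres", "p@ss/word", "localhost", 5432, "my_database")
print(uri)  # postgresql://postgres:p%40ss%2Fword@localhost:5432/my_database
# conn = psycopg2.connect(uri)  # needs a running server, not executed here
```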
I'm using SQLAlchemy (Core only, not ORM) to create a connection to a SQL Server 2008 SP3.
When looking at the process' network connections, I noticed that the TCP/IP connection to the SQL Server (port 1433) remains open (ESTABLISHED).
Sample code:
from urllib.parse import quote_plus
from sqlalchemy.pool import NullPool
import sqlalchemy as sa
# parameters are read from a config file
db_params = quote_plus(';'.join(['{}={}'.format(key, val) for key, val in db_config.items()]))
# Hostname based connection
engine = sa.create_engine('mssql:///?odbc_connect={}'.format(db_params),
poolclass=NullPool)
conn = engine.connect()
conn.close()
engine.dispose()
engine = None
I added the NullPool and the engine.dispose() afterwards, thinking they might solve the lingering connection, but alas.
I'm using a hostname-based connection as specified here.
Versions:
Python 3.5.0 (x32 on Win7)
SQLAlchemy 1.0.10
pyODBC 3.0.10
Edit: I've rewritten my code to use pyODBC only, instead of SQLAlchemy + pyODBC, and the issue remains. So as far as I can see, the issue is caused by pyODBC keeping the connection open.
When using only pyODBC, the issue is caused by connection pooling, as discussed here.
As described in the docs:
pooling
A Boolean indicating whether connection pooling is enabled.
This is a global (HENV) setting, so it can only be modified before the
first connection is made. The default is True, which enables ODBC
connection pooling.
Thus:
import pyodbc
pyodbc.pooling = False
conn = pyodbc.connect(db_connection_string)
conn.close()
It seems that when using SQLAlchemy and disabling the SA pooling by using the NullPool, this isn't passed down to pyODBC.
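A workaround is to clear pyodbc's module-level pooling flag yourself before SQLAlchemy (or anything else) opens the first connection. A sketch, with the import guarded so the snippet degrades gracefully when pyodbc isn't installed:

```python
import importlib

def disable_odbc_pooling():
    """Turn off pyodbc's global (HENV) connection pooling.

    This must run before the first connection is opened, because
    pooling is a process-wide setting. Returns True if pyodbc was
    found and the flag cleared, False if pyodbc isn't installed.
    """
    try:
        pyodbc = importlib.import_module("pyodbc")
    except ImportError:
        return False
    pyodbc.pooling = False
    return True

pooling_disabled = disable_odbc_pooling()
# Only after this should create_engine(..., poolclass=NullPool) be called,
# so that conn.close() / engine.dispose() really drop the TCP connection.
```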
I'm running this from PyDev in Eclipse...
import pymysql
conn = pymysql.connect(host='localhost', port=3306, user='userid', passwd='password', db='fan')
cur = conn.cursor()
print "writing to db"
cur.execute("INSERT INTO cbs_transactions(leagueID) VALUES ('test val')")
print "wrote to db"
The result is, at the top of the Console it says C:...test.py, and in the Console:
writing to db
wrote to db
So it's not terminating until after the execute command. But when I look in the table in MySQL it's empty. A record did not get inserted.
First off, why isn't it writing the record? Second, how can I see a log or error to find out what happened? Usually there is some kind of error in red if the code fails.
Did you commit it? conn.commit()
PyMySQL disables autocommit by default; you can add autocommit=True to connect():
conn = pymysql.connect(
    host='localhost',
    user='user',
    passwd='passwd',
    db='db',
    autocommit=True
)
or call conn.commit() after the insert.
You can either do
conn.commit() before calling close
or
enable autocommit via conn.autocommit(True) right after creating the connection object.
Both ways have been suggested from various people at a duplication of the question that can be found here: Database does not update automatically with MySQL and Python
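The effect of the missing commit is easy to reproduce with the standard-library sqlite3 module (used here only because it needs no server; a MySQL InnoDB table behaves the same way): a second connection cannot see the inserted row until the writer commits.

```python
import os
import sqlite3
import tempfile

# A file-backed database so two independent connections share it.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE cbs_transactions (leagueID TEXT)")
writer.commit()

# Insert a row but do NOT commit yet.
writer.execute("INSERT INTO cbs_transactions (leagueID) VALUES ('test val')")

reader = sqlite3.connect(path)
before = reader.execute("SELECT COUNT(*) FROM cbs_transactions").fetchone()[0]

writer.commit()  # now the row becomes visible to other connections

after = reader.execute("SELECT COUNT(*) FROM cbs_transactions").fetchone()[0]
print(before, after)  # 0 1
writer.close()
reader.close()
```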
The BACKUP statement can't be used in a transaction when it is executed with a pyodbc cursor. It seems that pyodbc executes the query inside a default transaction.
I have also tried using autocommit mode and adding a COMMIT statement before the BACKUP statement. Neither of these works.
#can't execute the backup statement in transaction
cur.execute("backup database database_name to disk = 'backup_path'")
#not working too
cur.execute("commit;backup database database_name to disk = 'backup_path'")
Is it possible to execute the backup statement by pyodbc? Thanks in advance!
----- Additional info -----
The backup operation is encapsulated in a function such as:
def backupdb(con, name, save_path):
    # with autocommit mode, this should be pyodbc.connect(con, autocommit=True)
    con = pyodbc.connect(con)
    query = "backup database %s to disk = '%s'" % (name, save_path)
    cur = con.cursor()
    cur.execute(query)
    cur.commit()
    con.close()
If the function is called by following code,
backupdb('DRIVER={SQL Server};SERVER=.\sqlexpress;DATABASE=master;Trusted_Connection=yes',
'DatabaseName',
'd:\\DatabaseName.bak')
then the exception will be:
File "C:/Documents and Settings/Administrator/Desktop/bakdb.py", line 14, in <module>
    'd:\\DatabaseName.bak')
File "C:/Documents and Settings/Administrator/Desktop/bakdb.py", line 7, in backupdb
    cur.execute(query)
ProgrammingError: ('42000', '[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot perform a backup or restore operation within a transaction. (3021) (SQLExecDirectW); [42000] [Microsoft][ODBC SQL Server Driver][SQL Server]BACKUP DATABASE is terminating abnormally. (3013)')
With the keyword autocommit=True, the function runs silently, but no backup file is generated in the backup folder.
Assuming you are using SQL Server, specify autocommit=True when the connection is built:
>>> import pyodbc
>>> connection = pyodbc.connect(driver='{SQL Server Native Client 11.0}',
server='InstanceName', database='master',
trusted_connection='yes', autocommit=True)
>>> backup = "BACKUP DATABASE [AdventureWorks] TO DISK = N'AdventureWorks.bak'"
>>> cursor = connection.cursor().execute(backup)
>>> connection.close()
This is using pyodbc 3.0.7 with Python 3.3.2. I believe with older versions of pyodbc you needed to use Cursor.nextset() for the backup file to be created. For example:
>>> import pyodbc
>>> connection = pyodbc.connect(driver='{SQL Server Native Client 11.0}',
server='InstanceName', database='master',
trusted_connection='yes', autocommit=True)
>>> backup = "E:\AdventureWorks.bak"
>>> sql = "BACKUP DATABASE [AdventureWorks] TO DISK = N'{0}'".format(backup)
>>> cursor = connection.cursor().execute(sql)
>>> while cursor.nextset():
...     pass
>>> connection.close()
It's worth noting that I didn't have to use Cursor.nextset() for the backup file to be created with the current version of pyodbc and SQL Server 2008 R2.
After hours I found a solution. The restore must be performed on master, other sessions must be terminated, the DB must be set OFFLINE, then RESTORE, and then set ONLINE again.
import pyodbc

def backup_and_restore():
    server = 'localhost,1433'
    database = 'myDB'
    username = 'SA'
    password = 'password'
    cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=' + server +
                          ';DATABASE=MASTER;UID=' + username + ';PWD=' + password)
    cnxn.autocommit = True

    def execute(cmd):
        # drain all result sets so the BACKUP/RESTORE actually completes
        cursor = cnxn.cursor()
        cursor.execute(cmd)
        while cursor.nextset():
            pass
        cursor.close()

    execute("BACKUP DATABASE [myDB] TO DISK = N'/usr/src/app/myDB.bak'")
    # do something .......
    execute("ALTER DATABASE [myDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;")
    execute("ALTER DATABASE [myDB] SET OFFLINE;")
    execute("RESTORE DATABASE [myDB] FROM DISK = N'/usr/src/app/myDB.bak' WITH REPLACE")
    execute("ALTER DATABASE [myDB] SET ONLINE;")
    execute("ALTER DATABASE [myDB] SET MULTI_USER;")