I'm trying to connect to an Oracle DB using AWS Lambda Python code.
My code is below:
import sys, os
import traceback

import cx_Oracle

def main_handler(event, context):
    # Enter your database connection details here
    host = "server_ip_or_name"
    port = 1521
    sid = "server_sid"
    username = "myusername"
    password = "mypassword"
    try:
        dsn = cx_Oracle.makedsn(host, port, sid)
        print(dsn)
        connection = cx_Oracle.Connection("%s/%s@%s" % (username, password, dsn))
        cursor = connection.cursor()
        cursor.execute("select 1 / 0 from dual")
    except cx_Oracle.DatabaseError as exc:
        error, = exc.args
        print("Oracle-Error-Code:", error.code, file=sys.stderr)
        print("Oracle-Error-Message:", error.message, file=sys.stderr)
        tb = traceback.format_exc()
    else:
        tb = "No error"
    finally:
        print(tb)

if __name__ == "__main__":
    main_handler(sys.argv[0], None)
I have already added all the dependencies in the "lib" folder, thanks to AWS Python Lambda with Oracle.
When running this code, I'm getting:
DatabaseError: ORA-21561: OID generation failed
I've tried to connect using both the IP of the Oracle server and its name: same error.
Here is the output of the error:
Oracle-Error-Code: 21561
Oracle-Error-Message: ORA-21561: OID generation failed
Traceback (most recent call last):
File "/var/task/main.py", line 20, in main_handler
connection = cx_Oracle.Connection("%s/%s@%s" % (username, password, dsn))
DatabaseError: ORA-21561: OID generation failed
For those who have successfully run cx_Oracle in AWS Lambda Python, can you please help?
Thanks
OK, here is the explanation:
Oracle has a funny behavior: if the hostname returned by the hostname command can't be resolved, the client fails to connect to the DB. Fortunately, on Linux one can override DNS for a session by writing an alias file in /tmp and then setting the environment variable HOSTALIASES to point at that file.
So adding this code to my function helps generate that file, and now I can successfully connect:
str_host = os.uname()[1]
with open('/tmp/HOSTALIASES', 'w') as f:
    f.write(str_host + ' localhost\n')
os.environ['HOSTALIASES'] = '/tmp/HOSTALIASES'  # point the resolver at the alias file
Hope it can help someone else !
See the following other question for the resolution to this problem.
sqlplus remote connection giving ORA-21561
Effectively the client requires a host name in order to generate a unique identifier which is used when connecting to the database.
The accepted solution for this is correct, but please also be aware that the HOSTALIASES mechanism requires working DNS (as strange as that sounds).
I struggled with this for a few hours after implementing the accepted solution, and realised that I was not permitting outbound DNS on the security group attached to my Lambda function's VPC interface (my connection was by IP address to the Oracle DB, so I initially did not think DNS was required).
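As a quick sanity check (my own sketch, not part of the original answer), you can test from inside the Lambda runtime whether a given name resolves at all, before blaming HOSTALIASES:

```python
import socket

def can_resolve(name):
    """Return True if `name` resolves from inside this runtime.

    Useful for confirming that the security group on the Lambda's
    VPC interface actually permits outbound DNS.
    """
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False
```

Calling this with your DB host and with the value of os.uname()[1] shows whether it is the database name or the Lambda's own hostname that fails to resolve.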
Related
I am trying to connect to a PostgreSQL database hosted on a Google Cloud Platform instance using unix sockets and the SQLAlchemy library. My database instance is configured to accept unix socket connections.
When I run my application locally, I use the following line to connect to the database and it works perfectly:
pool = create_engine("postgresql://{user}:{password}@/{dbname}?host={socket}".format(**params_dic))
However, when I run the same application on Google Cloud Platform, I get an error with the following connection string:
pool = create_engine(
    engine.url.URL.create(
        drivername="postgresql+psycopg2",
        username=params_dic['user'],
        password=params_dic['password'],
        database=params_dic['dbname'],
        query={"unix_socket": "{}/.s.PGSQL.5432".format(params_dic['socket'])},
    ),
    pool_size=5,
    max_overflow=2,
    pool_timeout=30,
    pool_recycle=100,
)
The error message is: (psycopg2.ProgrammingError) invalid dsn: invalid connection option "unix_socket"
How can I connect to a PostgreSQL database on Google Cloud Platform using unix sockets and SQLAlchemy?
I tried connecting to GCP Postgres with a unix socket and SQLAlchemy, expected success, but got the error "invalid dsn: invalid connection option "unix_socket"".
Also, I don't want to use a Public IP connection.
I was able to solve it by using the psycopg2 library directly; there was also a missing line in the .yaml file to add the instance to the unix socket. Thank you all.
The function:
import psycopg2 as ppg2  # the ppg2 alias used below

def connect(params_dic):
    """
    Generate the connection to the database
    """
    if is_gcp():  # is_gcp(): author's helper that detects whether we run on GCP
        print('connecting from GCP...')
        conn = None
        try:
            conn = ppg2.connect(
                host=params_dic['socket_dir'] + '/' + params_dic['socket'],
                database=params_dic['dbname'],
                user=params_dic['user'],
                password=params_dic['password'],
            )
            return conn
        except Exception as error:
            print('problem in the ppg2 connect call: ', error)
    else:
        print("Running locally")
        conn = None
        try:
            # connect to the PostgreSQL server
            conn = ppg2.connect(**params_dic)
        except (Exception, ppg2.DatabaseError) as error:
            print(error)
    return conn
The missing line in the .yaml file:
beta_settings:
  cloud_sql_instances: project-name:region:instance-name
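For completeness, the same unix-socket connection can also be expressed in SQLAlchemy itself: psycopg2 does not understand a unix_socket option (that key belongs to MySQL drivers), but it does accept the socket directory through the ordinary host parameter. A sketch, with made-up credentials and instance name:

```python
from sqlalchemy.engine.url import URL

# Hypothetical values; substitute your own user, db, and instance connection name.
params_dic = {
    "user": "myuser",
    "password": "mypassword",
    "dbname": "mydb",
    "socket_dir": "/cloudsql",
    "socket": "project-name:region:instance-name",
}

# psycopg2 takes the *directory* containing .s.PGSQL.5432 as "host".
url = URL.create(
    drivername="postgresql+psycopg2",
    username=params_dic["user"],
    password=params_dic["password"],
    database=params_dic["dbname"],
    query={"host": "{}/{}".format(params_dic["socket_dir"], params_dic["socket"])},
)
```

Passing this url to sqlalchemy.create_engine(url, pool_size=5, ...) should then build the pool without the invalid-dsn error.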
I am using a simple Python script to connect to PostgreSQL, and in the future it will create tables in PostgreSQL just using the script.
My code is:
try:
    conn = "postgresql://postgres:<password>@localhost:5432/<database_name>"
    print('connected')
except:
    print('not connected')
conn.close()
When I run python connect.py (my file name), it throws this error:
Instance of 'str' has no 'commit' member
Pretty sure this is because it detects 'conn' as a string instead of a database connection. I've followed this documentation (33.1.1.2) but am not sure if I'm doing it right. How do I correct this code so that the script connects to my PostgreSQL server instead of just creating a string?
P.S.: I'm quite new to this.
You are trying to call a method on a string object.
Instead, you should establish a connection to your db first.
You can use psycopg2, which is a common Python driver for PostgreSQL; it is typically given keyword arguments rather than a single connection string.
After installing psycopg2 you can do the following to establish a connection and query your database:
import psycopg2

connection = None  # so the finally block is safe if connect() fails
try:
    connection = psycopg2.connect(user="yourUser",
                                  password="yourPassword",
                                  host="serverHost",
                                  port="serverPort",
                                  database="databaseName")
    cursor = connection.cursor()
except (Exception, psycopg2.Error) as error:
    print("Error while connecting", error)
finally:
    if connection:
        cursor.close()
        connection.close()
You can follow this tutorial
I am getting this error when executing my code in Python.
Here is my Python - DataBaseHelper.py:
import psycopg2

class DataBaseHelper:
    database = "testdata"; user = "test"; password = "pass123"; host = "mtest.75tyey.us-east-1.rds.amazonaws.com"

    # create and return the connection to the database
    def getConection(self):
        self.conn = psycopg2.connect(database=self.database, user=self.user,
                                     password=self.password, host=self.host,
                                     port="5432")
        return self.conn
Then I am importing this file and using it in another Python file - MyScript.py:
import sys
import uuid
from DBHelper import DataBaseHelper
from ExecutionLogHelper import ExecutionLogHelper
from GotUtility import GotUtility

class MyScript:
    def __init__(self):
        self.con = DataBaseHelper().getConection()
        self.logHelper = ExecutionLogHelper()
        self.uuid = self.logHelper.Get_TEST_Run_Id()
When I run my code concurrently, it gives me this error:
psycopg2.errors.AdminShutdown: terminating connection due to administrator command
SSL connection has been closed unexpectedly
I am not able to understand why I am getting this error. When I run the Python program again, it works. I checked that the Postgres server is running, with no restart and no shutdown signal. This keeps happening every few hours.
This is happening because psycopg2 is trying to connect to AWS PostgreSQL over SSL and failing to do so.
Try connecting with sslmode="disable":
def getConection(self):
    self.conn = psycopg2.connect(database=self.database,
                                 user=self.user,
                                 password=self.password,
                                 host=self.host,
                                 port="5432",
                                 sslmode="disable")
    return self.conn
Method 1 will not work if your AWS PostgreSQL instance is configured to force an SSL connection, i.e. the parameter rds.force_ssl = 1. If you set rds.force_ssl, all non-SSL connections are refused. In that case, try connecting with something like this:
$ psql -h testpg.cdhmuqifdpib.us-east-1.rds.amazonaws.com -p 5432 "dbname=testpg user=testuser sslrootcert=rds-ca-2015-root.pem sslmode=verify-full"
For more on how to connect to AWS RDS over ssl using various drivers : AWS RDS SSL.
After a little digging, I found a few answers. According to this link:
This error message comes from intervention by a program external to
Postgres: http://www.postgresql.org/message-id/4564.1284559661@sss.pgh.pa.us
To elaborate, this link says:
If the user stops the postgresql server with "service postgresql stop", or if any SIGINT has been sent to the postgresql PID, then this error will occur.
Solution:
Since your code is running concurrently, multiple transactions at the same time, on the same row, could be causing this error, so you've got to make sure that doesn't happen. Look here for more details:
When you update rows in a transaction, those rows are locked until the transaction is committed.
If you are unable to do that, I suggest you enable query logging and look to see if something odd is in it.
I am not able to connect to the MySQL server using Python; it gives an error which says:
MySQLdb._exceptions.OperationalError: (1130, "Host 'LAPTOP-0HDEGFV9' is not allowed to connect to this MySQL server")
The code I'm using:
import MySQLdb

db = MySQLdb.connect(host="LAPTOP-0HDEGFV9",  # your host, usually localhost
                     user="root",             # your username
                     passwd="abcd13de",
                     db="testing")            # name of the database
cur = db.cursor()
cur.execute("SELECT * FROM Employee")
for row in cur.fetchall():
    print(row[0])
db.close()
This is an authorization problem, not a connectivity problem. Is the db running locally? If not, confirm with the admin where it is hosted. If so, try changing the host parameter to 127.0.0.1.
As described here the admin can get the hostname by running:
select ##hostname;
show variables where Variable_name like '%host%';
If the connection was timing out you could try setting the connect_timeout kwarg but that's already None by default.
I have two virtual machines: one is used as the Agent, the other as the Master. A MySQL server is installed on the Master (the MySQL client is forbidden to be installed on the Master, which means I can't use the mysql command line there). The MySQL server has a default user 'root'. Now I want to write a Python script on the Agent which uses the 'MySQLdb' module. The test code is simple, as below:
#!/usr/bin/python
import MySQLdb

def main():
    try:
        conn = MySQLdb.connect(host="master_ip", user='root', passwd='xxx', db='xxx', port=3306)
        cur = conn.cursor()
        count = cur.execute('select * from table')
        print(count)
        cur.close()
        conn.close()
    except MySQLdb.Error as e:
        print("Mysql Error %d: %s" % (e.args[0], e.args[1]))

if __name__ == "__main__":
    main()
However, when I execute it on the Agent, there is an error:
Mysql Error 1045: Access denied for user 'root'@'Agent ip' (using password: YES)
So I don't know why the user is 'root'@'Agent ip' rather than the default user of the MySQL server on the Master. Does anyone know how to solve this problem?
There is a command named GRANT in MySQL. You have to grant permission for root@AgentIP (where AgentIP is the IP of the system from which you will access the db).
The command to be run in the mysql client on the server:
GRANT ALL ON mydb.* TO 'root'@'YourIP' IDENTIFIED BY 'YourPassword';
(Note that on MySQL 8.0 and later, IDENTIFIED BY is no longer accepted inside GRANT; create the user with CREATE USER first, then GRANT.)
Only then will the MySQL server running on the remote system grant access to the database.
If not sure of the IP details, you can also specify 'root'@'%', which will allow requests from a user named root from anywhere. This is not a recommended way, but the option exists.