Psycopg2 could not connect to server - python

I have a small problem when I try to connect to my DB with psycopg2 and Python. I have this little script:
#!/usr/bin/python3
import time
import psycopg2
import sys

def main():
    # Get a connection
    conn = psycopg2.connect(database='my_db', host='10.10.2.1', port='5433', user='me', password='my_password_that_i_dont_show_here')
    # conn.cursor will return a cursor object, to perform queries
    cursor = conn.cursor()
    # Execute our query
    cursor.execute("select date(created_at), email, firstname, lastname, locale from users where date(created_at) = current_date;")
    # Retrieve the records from the database
    records = cursor.fetchall()
    print(records)

if __name__ == "__main__":
    main()
That worked well on Windows, but now I'm on Ubuntu and I get this error:
Traceback (most recent call last):
File "Bureau/script.py", line 29, in <module>
main()
File "Bureau/script.py", line 15, in main
conn = psycopg2.connect(database='my_db', host='10.10.2.1', port='5433', user='me', password='best_password_ever')
File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
psycopg2.OperationalError: could not connect to server: Connection timed out
Is the server running on host "10.10.2.1" and accepting
TCP/IP connections on port 5433?
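A timeout at connect time usually means the host/port is unreachable from the new machine (firewall, PostgreSQL not listening on that interface, or listen_addresses/pg_hba.conf not allowing the Ubuntu box) rather than a problem in the Python code itself. A minimal sketch for narrowing it down, reusing the host and port from the question; the database name and credentials are placeholders:

import socket
import psycopg2

HOST, PORT = '10.10.2.1', 5433   # values from the question

# Check raw TCP reachability first; if this times out, the problem is
# the network/firewall or PostgreSQL's listen_addresses, not psycopg2.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
try:
    sock.connect((HOST, PORT))
    print('TCP connection OK')
finally:
    sock.close()

# Then let psycopg2 fail fast with an explicit connect_timeout (seconds).
conn = psycopg2.connect(database='my_db', host=HOST, port=PORT,
                        user='me', password='...', connect_timeout=5)
print('psycopg2 connection OK')
conn.close()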

Related

How to connect to an fdb database using Python on Ubuntu?

I am trying to connect to an fdb database using Python. I am connected remotely to Ubuntu. I have tried this:
path = '/home/ubuntu/Firebird4.0/A.fdb'
con = fdb.connect(host=host, database=path, user=user, password=pswd2, charset='UTF8')
and many combinations for the path of the fdb file on Ubuntu. How can I solve this problem? Is it the wrong host, port, password, or something else?
I get this error every time:
Traceback (most recent call last):
File "/home/ubuntu/test.py", line 24, in <module>
conn = connection()
File "/home/ubuntu/test.py", line 15, in connection
con = fdb.connect(host=host, database=path, user=user, password=pswd2, charset='UTF8')
File "/home/ubuntu/.local/lib/python3.10/site-packages/fdb/fbcore.py", line 869, in connect
raise exception_from_status(DatabaseError, _isc_status,
fdb.fbcore.DatabaseError: ('Error while connecting to database:\n- SQLCODE: -902\n- Unable to complete network request to host "15.206.100.218".\n- Failed to establish a connection.', -902, 335544721)
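The SQLCODE -902 / "Unable to complete network request" error generally means the Firebird server could not be reached on its port at all (3050 by default), rather than a wrong path or password, so it is worth confirming that port 3050 on the server is open to the machine you are connecting from. A minimal sketch, assuming a default Firebird setup; the address, credentials, and path are placeholders taken from the question:

import fdb

host = '15.206.100.218'                      # server address from the traceback
path = '/home/ubuntu/Firebird4.0/A.fdb'      # database file on the server

# The remote connection string has the form "host/port:database_path";
# 3050 is the default fbserver port and must be reachable (security
# group / firewall) from this machine.
con = fdb.connect(
    dsn='%s/3050:%s' % (host, path),
    user='SYSDBA',          # assumed user, replace with yours
    password='masterkey',   # placeholder
    charset='UTF8',
)

cur = con.cursor()
cur.execute('select current_timestamp from rdb$database')
print(cur.fetchone())
con.close()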

After a check, the database can no longer be connected to

I connected to PostgreSQL the way the Heroku documentation describes:
import dj_database_url
import os
import psycopg2
DATABASE_URL = os.environ['DATABASE_URL']
conn = psycopg2.connect(DATABASE_URL, sslmode='require')
DATABASES['default'] = dj_database_url.config(conn_max_age=600, ssl_require=True)
And I also connect in my code:
conn = psycopg2.connect(
    database=data.database,
    user=data.user,
    password=data.password,
    host=data.host,
    port=data.port,
    sslmode='require'
)
cursor = conn.cursor()
They (Heroku) ran some checks on my database, and after they finished nothing works for me anymore and I cannot connect to the DB.
I get this error:
2021-07-07T18:21:39.307157+00:00 app[worker.1]: Traceback (most recent call last):
2021-07-07T18:21:39.307173+00:00 app[worker.1]: File "/app/main.py", line 23, in <module>
2021-07-07T18:21:39.308848+00:00 app[worker.1]: conn = psycopg2.connect(
2021-07-07T18:21:39.308891+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.9/site-packages/psycopg2/__init__.py", line 122, in connect
2021-07-07T18:21:39.309172+00:00 app[worker.1]: conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
2021-07-07T18:21:39.309299+00:00 app[worker.1]: psycopg2.OperationalError: FATAL: password authentication failed for user "cozqztdivphuut"
All the information (password, user, database, etc.) is entered correctly.
Please help, because I can't understand this^^
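One thing worth noting: Heroku rotates Postgres credentials from time to time (for example during maintenance), so the user and password you copied into data.* may simply no longer be valid. A minimal sketch that always reads the current credentials from the DATABASE_URL config var at runtime, based on the first snippet above:

import os
import psycopg2

# DATABASE_URL is set by the Heroku Postgres add-on and always contains
# the current credentials, even after Heroku rotates them.
conn = psycopg2.connect(os.environ['DATABASE_URL'], sslmode='require')

cur = conn.cursor()
cur.execute('SELECT version();')
print(cur.fetchone())
conn.close()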

Error connecting to a Postgres database using psycopg2

I am trying to connect to a Postgres database to run some validation queries, establishing the connection via psycopg2. I have tried passing actual values for the host instead of variables.
I also tried the connection using psql to verify the connection parameters, password, etc., and they are right too.
But with the code below, it always errors out with the following error:
conn1 = psycopg2.connect(host=db_host,database=db_instance_name,user=masterusername,port=db_port,password=masterpassword)
cur1 = conn1.cursor()
print ('\n************Connection is successful ************ \n')
Traceback (most recent call last):
File "restored_db_details_draft.py", line 50, in <module>
conn_test = psycopg2.connect(host="***",database="my_db_name",user="master",port="9876",password="****")
File "/Users/me/Library/Python/3.8/lib/python/site-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
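Since psql works with the same parameters, it can help to rule out an SSL or timeout mismatch on the Python side. A minimal sketch that tries both SSL modes and fails fast; the connection values are placeholders standing in for the ones that already work with psql:

import psycopg2

# Placeholders; substitute the values that already work with psql.
params = dict(host='my-db-host.example.com', database='my_db_name',
              user='master', port=9876, password='****')

# Trying both SSL modes helps separate an SSL handshake problem from a
# genuine server-side disconnect.
for ssl in ('require', 'disable'):
    try:
        conn1 = psycopg2.connect(connect_timeout=10, sslmode=ssl, **params)
        print('Connected with sslmode=%s' % ssl)
        conn1.close()
        break
    except psycopg2.OperationalError as exc:
        print('sslmode=%s failed: %s' % (ssl, exc))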

Connecting to Kerberized hadoop cluster using python module impyla

I am using the impyla module to connect to a Kerberized Hadoop cluster. I want to access HiveServer2/Hive, but I am getting the error below:
test_conn.py
from impala.dbapi import connect
import os

connection_string = 'hdp296m1.XXX.XXX.com'
conn = connect(host=connection_string, port=21050, auth_mechanism="GSSAPI", kerberos_service_name='testuser#Myrealm.COM', password='testuser')
cursor = conn.cursor()
cursor.execute('select count(*) from t_all_types_simple_t')
print cursor.description
results = cursor.fetchall()
Stacktrace:
[vagrant#localhost vagrant]$ python test_conn.py
Traceback (most recent call last):
File "test_conn.py", line 4, in <module>
conn = connect(host=connection_string, port=21050, auth_mechanism="GSSAPI",kerberos_service_name='testuser#Myrealm.COM',password='testuser')
File "/usr/lib/python2.7/site-packages/impala/dbapi.py", line 147, in connect
auth_mechanism=auth_mechanism)
File "/usr/lib/python2.7/site-packages/impala/hiveserver2.py", line 758, in connect
transport.open()
File "/usr/lib/python2.7/site-packages/thrift_sasl/__init__.py", line 61, in open
self._trans.open()
File "/usr/lib64/python2.7/site-packages/thrift/transport/TSocket.py", line 101, in open
message=message)
thrift.transport.TTransport.TTransportException: Could not connect to hdp296m1.XXX.XXX.com:21050
testuser is my Kerberos principal, which I will be using to do kinit.
Your connection parameters appear to be incorrect. Try:
from impala.dbapi import *
import sys, os

# set your parms
host = os.environ.get("CDH_HIVE", 'x.x.x.x')
port = int(os.environ.get("CDH_HIVE_port", '10000'))   # port should be an int
auth_mechanism = os.environ.get("CDH_auth", 'GSSAPI')
user = 'hive'
db = 'mydb'
# No password, use kinit
password = ''
# hive is the principal with krb
kbservice = 'hive'

class Hive:
    def __init__(self, db):
        self.database = db
        self.__conn = connect(host=host,
                              port=port,
                              auth_mechanism=auth_mechanism,
                              user=user,
                              password=password,
                              database=db,
                              kerberos_service_name=kbservice)
        self.__cursor = self.__conn.cursor()

h = Hive(db)
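The key difference from the question is that kerberos_service_name must be the Hive/Impala service principal (typically 'hive'), not your own user principal, and the ticket comes from kinit rather than a password. A hypothetical usage sketch on top of the variables defined above, assuming kinit has already been run for the testuser principal and HiveServer2 is listening on port 10000:

# Connect with GSSAPI and run the query from the question.
conn = connect(host=host,
               port=port,
               auth_mechanism='GSSAPI',
               kerberos_service_name='hive',   # service principal, not your user
               database=db)
cur = conn.cursor()
cur.execute('select count(*) from t_all_types_simple_t')
print(cur.fetchall())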

Crawler can't connect to MySQL database under heavy load

I have a crawler written in Python, which can't connect to the database after crawling for a while. Sometimes it works for 10 minutes, after which the following error log appears:
(2003, "Can't connect to MySQL server on 'localhost' (10055)")
(2003, "Can't connect to MySQL server on 'localhost' (10055)")
Traceback (most recent call last):
File "C:\Users\Admin\Documents\eclipse\workspace\Crawler\src\Crawlers\Zanox.py", line 73, in <module>
c.main()
File "C:\Users\Admin\Documents\eclipse\workspace\Crawler\src\Crawlers\Zanox.py", line 38, in main
self.getInfo()
File "C:\Users\Admin\Documents\eclipse\workspace\Crawler\src\Crawlers\Zanox.py", line 69, in getInfo
comparator.main()
File "C:\Users\Admin\Documents\eclipse\workspace\Crawler\src\CrawlerHelpScripts\Comparator.py", line 23, in main
self.compare()
File "C:\Users\Admin\Documents\eclipse\workspace\Crawler\src\CrawlerHelpScripts\Comparator.py", line 36, in compare
deliveryInfo = self.db.getDeliveryInfo()
File "C:\Users\Admin\Documents\eclipse\workspace\Crawler\src\Database\dell.py", line 29, in getDeliveryInfo
result = self.db.select(com, vals)
File "C:\Users\Admin\Documents\eclipse\workspace\Crawler\src\Database\Database.py", line 24, in select
self.con.close()
_mysql_exceptions.ProgrammingError: closing a closed connection
So at a certain point it just can't connect to the database running on localhost, and afterwards it produces a ProgrammingError. Handling that exception is not the problem, because the crawler keeps running, but it also keeps producing the "Can't connect" error.
Here is the code I use for selecting from / inserting into my database:
def select(self, com, vals):
    try:
        self.con = mdb.connect(self.host, self.user, self.password, self.database)
        cur = self.con.cursor()
        cur.execute(com, vals)
        ver = cur.fetchall()
        return ver
    except mdb.Error as e:
        print e
    finally:
        if self.con:
            self.con.close()

def insert(self, com, vals):
    try:
        self.con = mdb.connect(self.host, self.user, self.password, self.database)
        cur = self.con.cursor()
        cur.execute(com, vals)
        self.con.commit()
    except mdb.Error as e:
        print e
    finally:
        if self.con:
            self.con.close()
Also, the crawler is NOT multi-threaded. Any ideas why it keeps losing connection?
EDIT: It seems that the crawler works fine until it has inserted about 15,000 records into the database. If the table already holds about 15,000 records, the crawler produces the error much more quickly.
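Error 10055 (WSAENOBUFS) on Windows typically points to socket exhaustion: opening and closing a fresh MySQL connection for every select/insert eventually leaves thousands of sockets in TIME_WAIT, and once the table holds ~15,000 rows the per-row queries hit that limit quickly. A minimal sketch of one way around it, reusing a single connection for the whole crawl instead of reconnecting per query (assuming the same MySQLdb setup as above):

import MySQLdb as mdb

class Database(object):
    """Opens one connection up front and reuses it for every query."""

    def __init__(self, host, user, password, database):
        self.con = mdb.connect(host, user, password, database)

    def select(self, com, vals):
        cur = self.con.cursor()
        try:
            cur.execute(com, vals)
            return cur.fetchall()
        finally:
            cur.close()

    def insert(self, com, vals):
        cur = self.con.cursor()
        try:
            cur.execute(com, vals)
            self.con.commit()
        finally:
            cur.close()

    def close(self):
        # Close once, when the crawler is completely done.
        self.con.close()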
