DB2 connectivity from Python - ibm_db.connect running continuously

I have been searching for an answer to this for hours, but the closest thing I can find is one unanswered question describing a similar issue, which unfortunately never got a resolution.
I had a working connection to an IBM Db2 database, but the web console was erroring out, so I was forced to delete the instance and create a new one. I changed nothing in the connection code other than the connection values. Since changing those values, the ibm_db.connect call runs indefinitely: there is no output and no error even after leaving it running for 10 minutes. If I deliberately change the values to force an error, it does error out saying the values are incorrect. I have no clue what the problem is, as I have no information to go on. My only thought is that SSL could have something to do with it.
dsn_driver = connection_data['dsn_driver']
dsn_database = connection_data['dsn_database']
dsn_hostname = connection_data['dsn_hostname']
dsn_port = connection_data['dsn_port']
dsn_protocol = connection_data['dsn_protocol']
dsn_uid = connection_data['dsn_uid']
dsn_pwd = connection_data['dsn_pwd']

dsn = (
    "DRIVER={0};"
    "DATABASE={1};"
    "HOSTNAME={2};"
    "PORT={3};"
    "PROTOCOL={4};"
    "UID={5};"
    "PWD={6};").format(dsn_driver, dsn_database, dsn_hostname,
                       dsn_port, dsn_protocol, dsn_uid, dsn_pwd)

try:
    connection = ibm_db.connect(dsn, "", "")
    print("Connected to database: ", dsn_database,
          "as user: ", dsn_uid, "on host: ", dsn_hostname)
    return connection
except:
    print("Unable to connect: ", ibm_db.conn_errormsg())
The breakpoint is at connection = ibm_db.connect(dsn, "", "")
This data is loaded from a local JSON file with the following values (sensitive values redacted):
{
    "dsn_driver": "{IBM DB2 ODBC DRIVER}",
    "dsn_database": "BLUDB",
    "dsn_hostname": "hostname",
    "dsn_port": "port",
    "dsn_protocol": "TCPIP",
    "dsn_uid": "uid",
    "dsn_pwd": "pwd"
}
I have tried everything I can think of, but since nothing is output, I unfortunately do not know where to start. If someone has experience with this, please let me know.
Thank you.
Edit: I did end up getting this error message returned from the ibm_db.connect method:
Unable to connect: [IBM][CLI Driver] SQL30081N A communication error has been detected. Communication protocol being used: "TCP/IP". Communication API being used: "SOCKETS". Location where the error was detected: "xxx.xx.xxx.xxx". Communication function detecting the error: "recv". Protocol specific error code(s): "10054", "*", "0". SQLSTATE=08001 SQLCODE=-30081

A couple of points for clarification:
When you say "the ibm_db.connect function runs continuously", do you mean you see the CPU spinning, or just that the Python process doesn't progress past the connect?
What type of database are you connecting to? Db2 LUW or Db2 for z/OS?
Have you tried to make sure that the connectivity is still working? I.e. did you try the suggestion from the other linked post? This:
To verify that there is network connectivity between you and the database, you can try telnet xxxxxx 1234 (or nc xxxxxx 1234), where xxxxxx and 1234 are the service hostname and port, respectively.
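If telnet or nc is not available, the same reachability check can be done from Python itself. A minimal sketch (the hostname and port below are placeholders for your service's values):
import socket

# Succeeds only if something is listening at hostname:port
# and the network path (firewall, routing) is open.
def can_connect(hostname, port, timeout=10):
    try:
        with socket.create_connection((hostname, port), timeout=timeout):
            return True
    except OSError as exc:
        print("Connection failed:", exc)
        return False

print(can_connect("hostname", 50000))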
From a debugging point of view, I'd be looking at the logs of the intervening processes:
- Db2 Connect log, if you are using it
- Db2 target logs
- TCP/IP and z/OS Connect address spaces, if z/OS. BAQ region? (not sure if that would just be my site)
- Firewall - I know that you had a working connection, but it's always best to check the obvious as well
As you've pointed out, without an error message it's hard to know where to start.
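Also, regarding the SSL suspicion in the question: if the recreated instance only accepts SSL connections, a plain TCPIP DSN pointed at the old port can hang or die with exactly this kind of SQL30081N. A hedged sketch of the same DSN with SSL enabled; SECURITY=SSL is a documented Db2 CLI keyword, but the 50001 port is only an assumption and should come from the instance's service credentials:
# Sketch only: same DSN as in the question, but with SSL turned on.
# The port below is a guess - check your instance's service credentials.
dsn = (
    "DRIVER={{IBM DB2 ODBC DRIVER}};"
    "DATABASE={0};"
    "HOSTNAME={1};"
    "PORT=50001;"      # SSL port, often different from the plaintext 50000
    "PROTOCOL=TCPIP;"
    "SECURITY=SSL;"    # Db2 CLI keyword forcing an SSL connection
    "UID={2};"
    "PWD={3};").format(dsn_database, dsn_hostname, dsn_uid, dsn_pwd)
connection = ibm_db.connect(dsn, "", "")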

Related

Process finished with exit code -1073741819 (0xC0000005) in PyCharm, RESTART: Shell in IDLE, and closes console

I'm trying to connect to c-treeSQL using Python. I know I have the correct driver because I can connect using George Poulose's Query Tool.
I have tried these variations, and each one has crashed:
import pyodbc
## Instructions from http://doc.4d.com/4Dv17/4D/17/Using-a-connection-string.200-3786162.en.html
# conn = pyodbc.connect('Driver={c-treeACE ODBC Driver};Host=<Host from driver config>;UID=<User name>;PWD=<Password>;DATABASE=liveSQL;')
# Connection string from Query Tool
# conn = pyodbc.connect('Driver={c-treeACE ODBC Driver};ODBC;DSN=DOTLIVEREP;Host=<Host from driver config>;UID=<User name>;PWD=<Password>;DATABASE=liveSQL;SERVICE=6597 ;CHARSET NAME=;MAXROWS=;OPTIONS=;;PRSRVCUR=OFF;;FILEDSN=;SAVEFILE=;FETCH_SIZE=;QUERY_TIMEOUT=;SCROLLCUR=OFF;')
# Connection string from Query Tool. Added driver parameter
conn = pyodbc.connect('ODBC;DSN=DOTLIVEREP;Host=<Host from driver config>;UID=<User name>;PWD=<Password>;DATABASE=liveSQL;SERVICE=6597 ;CHARSET NAME=;MAXROWS=;OPTIONS=;;PRSRVCUR=OFF;;FILEDSN=;SAVEFILE=;FETCH_SIZE=;QUERY_TIMEOUT=;SCROLLCUR=OFF;')
print('Success')
Each one of these connection strings causes a crash in PyCharm, IDLE, and the console.
I'm not sure what is causing this.
You should replace each placeholder field in the connection string with the specific value for your application.
For example, instead of <Host from driver config>: say your host is MYHOST, then you should put MYHOST. The same goes for <User name>: put your username, e.g. jacob, and so on. The fields in angle brackets are the ones where you have to supply a value:
# Connection string from Query Tool. Added driver parameter
conn = pyodbc.connect('ODBC;DSN=DOTLIVEREP;Host=MYHOST;UID=jacob;PWD=<Password>;DATABASE=liveSQL;SERVICE=6597 ;CHARSET NAME=;MAXROWS=;OPTIONS=;;PRSRVCUR=OFF;;FILEDSN=;SAVEFILE=;FETCH_SIZE=;QUERY_TIMEOUT=;SCROLLCUR=OFF;')
Even though I haven't had exactly the same issue as you, here's my fix for this error when using pyodbc and MS SQL Server:
I tried to connect to multiple databases on the same server right at the beginning of my code. When I tried connecting to a second database on the same server, my code threw this error.
What fixed it was closing the first connection before opening the second: connect, do the work that needs to be done, and close the connection afterwards. Then I connected to my second database and everything worked fine, as sketched below.
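A minimal sketch of that close-before-reconnect pattern (the DSNs and credentials here are placeholders, not values from the question):
import pyodbc

# First connection: do the work, then close it *before* opening the next one.
conn1 = pyodbc.connect('DSN=FIRST_DB;UID=user;PWD=secret')
try:
    cur = conn1.cursor()
    cur.execute('SELECT 1')
    print(cur.fetchone())
finally:
    conn1.close()

# Only now open the second connection to the same server.
conn2 = pyodbc.connect('DSN=SECOND_DB;UID=user;PWD=secret')
try:
    cur = conn2.cursor()
    cur.execute('SELECT 1')
    print(cur.fetchone())
finally:
    conn2.close()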

Parallel-SSH - how to close ssh channel after a certain time?

Ok, so it's possible that the answer to this question is simply "stop using parallel-ssh and write your own code using netmiko/paramiko. Also, upgrade to Python 3 already."
But here's my issue: I'm using parallel-ssh to try to hit as many as 80 devices at a time. These devices are notoriously unreliable, and they occasionally freeze up after giving one or two lines of output. Then, the parallel-ssh code hangs for hours, leaving the script running, well, until I kill it. I've jumped onto the VM running the scripts after a weekend and seen a job that's been stuck for 52 hours.
The relevant pieces of my first code, the one that hangs:
from pssh.pssh2_client import ParallelSSHClient

def remote_ssh(ip_list, ssh_user, ssh_pass, cmd):
    client = ParallelSSHClient(ip_list, user=ssh_user, password=ssh_pass,
                               timeout=180, retry_delay=60, pool_size=100,
                               allow_agent=False)
    result = client.run_command(cmd, stop_on_errors=False)
    return result
The next thing I tried was the channel_timeout option, because if it takes more than 4 minutes to get the command output, then I know the device froze, and I need to move on and cycle it later in the script:
from pssh.pssh_client import ParallelSSHClient

def remote_ssh(ip_list, ssh_user, ssh_pass, cmd):
    client = ParallelSSHClient(ip_list, user=ssh_user, password=ssh_pass,
                               channel_timeout=180, retry_delay=60,
                               pool_size=100, allow_agent=False)
    result = client.run_command(cmd, stop_on_errors=False)
    return result
This version never actually connects to anything. Any advice? I haven't been able to find anything other than channel_timeout to attempt to kill an ssh session after a certain amount of time.
The code is creating a client object inside a function and then returning only the output of run_command, which includes remote channels to the SSH server.
Since the client object is never returned by the function, it goes out of scope and gets garbage collected by Python, which closes the connection.
Trying to use remote channels on a closed connection will never work. If you capture a stack trace of the stuck script, it is most probably hanging on a remote channel or connection.
Change your code to keep the client alive. Ideally, the client should also be reused.
from pssh.pssh2_client import ParallelSSHClient

def remote_ssh(ip_list, ssh_user, ssh_pass, cmd):
    client = ParallelSSHClient(ip_list, user=ssh_user, password=ssh_pass,
                               timeout=180, retry_delay=60, pool_size=100,
                               allow_agent=False)
    result = client.run_command(cmd, stop_on_errors=False)
    return client, result
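With the client returned alongside the result, the caller can keep it alive while consuming output. A hedged usage sketch, assuming the parallel-ssh 1.x dict-style output of run_command (check your version's docs):
client, output = remote_ssh(ip_list, ssh_user, ssh_pass, cmd)
client.join(output)  # client must stay in scope until output is consumed
for host, host_output in output.items():
    for line in host_output.stdout:
        print("%s: %s" % (host, line))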
Make sure you understand where the code is going wrong before jumping to conclusions that will not solve the issue, i.e. capture a stack trace of where it is hanging. The same code doing the same thing will break the same way.

Postgres SSL SYSCALL error: EOF detected with python and psycopg

Using the psycopg2 package with Python 2.7, I keep getting the titled error: psycopg2.DatabaseError: SSL SYSCALL error: EOF detected
It only occurs when I add a WHERE column LIKE ''%X%'' clause to my pgrouting query. An example:
SELECT id1 as node, cost
FROM PGR_Driving_Distance(
    'SELECT id, source, target, cost
     FROM edge_table
     WHERE cost IS NOT NULL and column LIKE ''%x%'' ',
    1, 10, false, false)
Threads on the internet suggest it is an SSL issue, but whenever I comment out the pattern-matching side of things, the query and the connection to the database work fine.
This is on a local database running on Xubuntu 13.10.
After further investigation: it looks like this may be caused by the pgrouting extension crashing the database, because it is a bad query and there are no links which have this pattern.
Will post an answer soon...
The error: psycopg2.OperationalError: SSL SYSCALL error: EOF detected
The setup: Airflow + Redshift + psycopg2
When: Queries take a long time to execute (more than 300 seconds).
A socket timeout occurs in this instance. What solves this specific variant of the error is adding keepalive arguments to the connection string.
keepalive_kwargs = {
    "keepalives": 1,
    "keepalives_idle": 30,
    "keepalives_interval": 5,
    "keepalives_count": 5,
}
connection = psycopg2.connect(connection_string, **keepalive_kwargs)
Redshift requires a keepalives_idle of less than 300. A value of 30 worked for me; your mileage may vary. It is also possible that keepalives_idle is the only argument you need to set, but make sure keepalives is set to 1.
Link to docs on postgres keepalives.
Link to airflow doc advising on 300 timeout.
I ran into this problem when running a slow query on a DigitalOcean droplet. All other SQL ran fine, and the same query worked on my laptop. After scaling up from 512 MB to a 1 GB RAM instance, it works fine, so it seems this error can occur when the process runs out of memory.
Very similar answer to what @FoxMulder900 did, except I could not get his first SELECT to work. This works, though:
WITH long_running AS (
    SELECT pid, now() - pg_stat_activity.query_start AS duration, query, state
    FROM pg_stat_activity
    WHERE (now() - pg_stat_activity.query_start) > interval '1 minutes'
      AND state = 'active'
)
SELECT * FROM long_running;
If you want to kill the processes from long_running, just comment out the last line and insert SELECT pg_cancel_backend(long_running.pid) FROM long_running; the full variant is written out below.
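Put together, the kill variant of the query above looks like this (same CTE, only the final SELECT is swapped):
WITH long_running AS (
    SELECT pid, now() - pg_stat_activity.query_start AS duration, query, state
    FROM pg_stat_activity
    WHERE (now() - pg_stat_activity.query_start) > interval '1 minutes'
      AND state = 'active'
)
-- SELECT * FROM long_running;
SELECT pg_cancel_backend(long_running.pid) FROM long_running;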
This issue occurred for me when I had some rogue queries running causing tables to be locked indefinitely. I was able to see the queries by running:
SELECT * from STV_RECENTS where status='Running' order by starttime desc;
then kill them with:
SELECT pg_terminate_backend(<pid>);
I encountered the same error. CPU and RAM usage looked fine, and the solution by @antonagestam didn't work for me.
Basically, the issue was at the step of engine creation. pool_pre_ping=True solved the problem:
engine = sqlalchemy.create_engine(connection_string, pool_pre_ping=True)
What it does is send a SELECT 1 query to check the connection each time it is about to be used. If the check fails, the connection is recycled and checked again; upon success, the query is then executed.
sqlalchemy docs on pool_pre_ping
In my case, I had the same error in the Python logs. I checked the log files in /var/log/postgresql/, and there were a lot of error messages like could not receive data from client: Connection reset by peer and unexpected EOF on client connection with an open transaction. This can happen due to network issues.
In my case it was the OOM killer (the query was too heavy).
Check dmesg:
dmesg | grep -A2 Kill
In my case:
Out of memory: Kill process 28715 (postgres) score 150 or sacrifice child
I got this error running a large UPDATE statement on a 3 million row table. In my case it turned out the disk was full. Once I had added more space the UPDATE worked fine.
You may need to express % as %% because % is the placeholder marker. http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries
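A short sketch of both options (the column name here is illustrative, not from the question; the doubling is only needed when parameters are also passed to execute()):
# Literal % doubled because %s placeholders are also in use:
cur.execute(
    "SELECT id FROM edge_table WHERE cost > %s AND name LIKE '%%x%%'",
    (0,))

# Usually cleaner: bind the whole pattern as a parameter instead.
cur.execute(
    "SELECT id FROM edge_table WHERE name LIKE %s",
    ('%x%',))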

Python background process not writing to MySQL db

I apologize if this question has been asked before, but I was unable to find any record of this issue. Full disclosure: I've only been using Python for a few months and MySQL for about 1 month.
I've written a short Python script on a Raspberry Pi (running Raspbian Wheezy) that sniffs wifi packets and writes signal strength info to a MySQL database. I've also created a small PHP file that grabs the info from the database and presents it in a table (pretty basic). All components of this little system work exactly as planned, however...
When I run the Python script in the background (sudo python my_script.py &) it does not appear to update the MySQL database with new readings. Yet it also throws no errors and outputs to the console without a problem (I have a line printed each time a wifi packet is intercepted and its RSSI is added to the database). I encountered the same problem when starting the script at boot up using the /etc/rc.local file. No errors, but no updates in the database either.
Is the problem on the Python side of things? A MySQL setting that I need to change? Is there something else I'm completely missing?
EDITED TO ADD CODE:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import MySQLdb as mdb
import sys
from scapy.all import *

# Create connection to MySQL database called 'DATABASE' at localhost with username 'USERNAME' and password 'PASSWORD'
HOST = "localhost"
USER = "USERNAME"
PW = "PASSWORD"
DB = "DATABASE"
con = mdb.connect(HOST, USER, PW, DB)

# set interface that will be used to monitor wifi
interface = "mon0"

with con:
    cur = con.cursor()

# This function will be called every time a packet is intercepted. Packet is passed to function as 'p'
def sniffmgmt(p):
    # These are the 3 types of management frames that are sent exclusively by clients (allows us to weed out packets sent by APs)
    stamgmtstypes = (0, 2, 4)
    if p.haslayer(Dot11):
        # Make sure packet is a client-only type
        if p.subtype in stamgmtstypes:
            # Calculate RSSI
            sig_str = -(256 - (ord(p.notdecoded[-4:-3])))
            # Update database with most recent detection
            cur.execute("REPLACE INTO attendance(mac_address, rssi, timestamp) VALUES('%s', %d, NOW())" % (p.addr2, sig_str))
            # Print MAC address that was detected (debugging only)
            print "MAC Address (%s) has been logged" % (p.addr2)

# Tell scapy what interface to use (see above) and which function to call when a packet is intercepted. lfilter limits packets to management frames.
sniff(iface=interface, prn=sniffmgmt, store=0, lfilter=lambda x: x.type == 0)
Thanks!
Kyle
P.S. For those who are wondering: this is not intended for malicious use, it is being used to investigate product tracking techniques at our warehouse.
I expect you're not calling commit on the db transaction.
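A minimal sketch of the fix, assuming MySQLdb's default of autocommit being disabled: either commit after each write, or enable autocommit once after connecting.
# Option 1: commit explicitly after each write
cur.execute("REPLACE INTO attendance(mac_address, rssi, timestamp) "
            "VALUES('%s', %d, NOW())" % (p.addr2, sig_str))
con.commit()  # without this, the REPLACE stays invisible to other sessions

# Option 2: enable autocommit once, right after mdb.connect(...)
con.autocommit(True)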

webapp in python

I have got a very simple idea in mind that I want to try out. Say I have a browser, Chrome for instance, and I want to look up the IP of a domain name, say www.google.com. I use Windows 7 and I have set the DNS lookup properties to manual, giving the address 127.0.0.1, where my server (written in Python) is running. I started my server and I could see the DNS query, but the output was very weird, showing smiley-face control characters like this:
WAITING FOR CONNECTION.........
.........recieved from : ('127.0.0.1', 59339)
╟╝☺ ☺ ♥www♠google♥com ☺ ☺
The WAITING FOR CONNECTION and received from lines are from my server. How do I get a human-readable DNS query?
This is my server code (quite elementary, but still):
from time import sleep
import socket

host = ''
port = 53
addr_list = (host, port)
buf_siz = 1024

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(addr_list)

while True:
    print 'WAITING FOR CONNECTION.........'
    data, addr = udp.recvfrom(buf_siz)
    print '.........recieved from : ', addr
    sleep(3)
    print data
DNS uses a compression algorithm and represents each part of the domain name as [length]string (as far as I remember), e.g. [3]www[6]google[3]com.
Have a look at the DNS RFCs, e.g. http://www.zoneedit.com/doc/rfc/rfc1035.txt
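A minimal sketch of decoding the query name by hand, assuming a simple query with no name compression (the QNAME starts right after the fixed 12-byte DNS header):
def parse_qname(packet):
    # Labels follow the 12-byte header as [length][bytes]...,
    # terminated by a zero-length octet.
    data = bytearray(packet)
    i = 12
    labels = []
    while data[i] != 0:
        length = data[i]
        labels.append(bytes(data[i + 1:i + 1 + length]).decode('ascii'))
        i += 1 + length
    return '.'.join(labels)
For the server above, printing parse_qname(data) instead of data should show something like www.google.com.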
