IDE Crash causes hung job on Server - python

Good Day All! I am using pyodbc to connect to a Microsoft SQL Server using a Native Client 11.0 ODBC connection. Occasionally something will happen to cause Spyder to crash, leaving my query hanging on the server. When this happens, all variables are lost, so I'm not able to cancel the job that is still running on the server or close the connection. My DBAs do not have rules in place to cancel long-running queries, but hung queries like this block ETLs. I have my ODBC connection set up the way they've requested, so the question is: what else can I do to prevent issues for my partners when Spyder crashes? Note: I've imported pandas as "pd".
try:
    data_conn = pyodbc.connect(dECTV)
    data_conn.timeout = 1000
    tfn = pd.read_sql(tele, data_conn)
    print("Call information retrieved")
except Exception:
    print('!~!~!~!\n Exception has been Raised for Inbound information!~!~!~!')
    tfn = pd.read_csv(export_location + r'\TFN_Details.csv')
finally:
    data_conn.close()  # close() needs parentheses, otherwise the connection is never closed
    print("Connection Closed. Moving on.")

BTW, I've done a lot of reading over the last two hours and have what I consider to be a solution, but I wanted to see if others agree. My thought is to execute the following before running anything new on the same server.
EXEC sp_who 'my_login_id';
KILL resulting_SPID;  -- KILL expects the numeric session ID returned by sp_who, not a quoted string
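In Python terms, a rough sketch of that cleanup is below (hedged: the parameterized sp_who call, the column access, and the permissions needed for KILL are assumptions that would need verification against your server):

# Sketch: on the next run, kill any of my own sessions left over from a crash.
# Reuses the dECTV connection string; 'my_login_id' is a placeholder.
cleanup_conn = pyodbc.connect(dECTV, autocommit=True)
cur = cleanup_conn.cursor()
cur.execute("SELECT @@SPID")
my_spid = cur.fetchone()[0]            # don't kill the session doing the cleanup
cur.execute("EXEC sp_who ?", 'my_login_id')
for row in cur.fetchall():
    if int(row.spid) != int(my_spid):
        # KILL cannot be parameterized; the SPID comes from sp_who, not user input
        cur.execute("KILL %d" % int(row.spid))
cleanup_conn.close()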

Related

Force killing of q session after query is done

I am trying to force killing (not closing) a q session once my query is done to save resources on my machine.
It is currently working using:
conn.sendAsync("exit 0")
Problem is, if I run a query right after it (trying to reopen the connection and run another query), it might fail because the previous connection could still be in the process of being killed, as the call is asynchronous.
Therefore, I am trying to do the same thing with a synchronous query, but when trying:
conn.sendSync("exit 0")
I get:
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
python-BaseException
Can I specify a timeout such that the q session will be killed automatically after say 10 seconds instead, or maybe there is another way to force killing the q session?
My code looks like this:
conn = qc.QConnection(host='localhost', port=12345, timeout=10000)
conn.open()
res = None
try:
    res = conn.sendSync(query, numpy_temporals=True)
except Exception as e:
    print(f'Error running {query}: {e}')

conn.sendSync("exit 0")
conn.close()
I'd suggest we take a step back and re-evaluate whether it's really the right thing to kill the KDB process after your Python program runs a query. If the program isn't responsible for bringing the KDB process up, most likely it should not bring the process down.
Given that the rationale is saving resources, the process presumably keeps a lot of data in memory and therefore takes time to start up. That is another reason you shouldn't kill it if you need to use it a second time.
You shouldn't be killing a kdb process you intend to query again. Some suggestions on points in your question:
once my query is done to save resources -> you can manually call garbage collection with .Q.gc[] to free up memory, or alternatively (and perhaps better) enable immediate garbage collection with -g 1 on start. Note that if your query creates large global variables, that memory will not be freed up / returned.
https://code.kx.com/q/ref/dotq/#qgc-garbage-collect
https://code.kx.com/q/basics/syscmds/#g-garbage-collection-mode
killed automatically after say 10 seconds -> if your intention here is to stop client queries (such as those from your Python process) from running for more than 10 seconds, you can set a query timeout with -T 10 on start, or while the process is running with \T 10 / system "T 10"
https://code.kx.com/q/basics/cmdline/#-t-timeout
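For the qpython client shown above, a gentler end-of-query cleanup could look like this (a sketch reusing the conn object from the question; .Q.gc[] reclaims memory on the kdb+ side without killing the process):

try:
    res = conn.sendSync(query, numpy_temporals=True)
finally:
    # free memory on the q process instead of terminating it
    conn.sendSync('.Q.gc[]')
    conn.close()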

Django - possible to retry any DB operation on failure?

We are having issues recently with our prod servers connecting to Oracle. Intermittently we are getting "DatabaseError: ORA-12502: TNS:listener received no CONNECT_DATA from client". This issue is completely random, goes away by itself within a second, and it's not a Django problem; we can replicate it with SQLPlus from the servers.
We opened a ticket with Oracle support, but in the meantime I'm wondering if it's possible to simply retry any DB-related operation when it fails.
The problem is that I can't use try/catch blocks in the code to handle this, since it can happen on ANY DB interaction in the entire codebase. I have to do this at a lower level so that I do it only once. Is there any way to install an error handler or something like that directly at the django.db.backends.oracle level so that it covers the whole codebase? Basically, all I want to do is this:
try:
    execute_sql()
except DatabaseError as e:
    if "ORA-12502" in str(e):
        time.sleep(1)
        # re-try the same operation
        execute_sql()
Is this even possible, or am I out of luck?
Thanks!
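As a rough illustration of the retry logic itself (a sketch only; retry_on_ora12502, the error-string check, and the single retry are placeholders, not a Django-level hook):

import time

def retry_on_ora12502(func, retries=1, delay=1.0):
    """Call func(), retrying after a short pause if ORA-12502 shows up."""
    for attempt in range(retries + 1):
        try:
            return func()
        except Exception as e:
            if 'ORA-12502' in str(e) and attempt < retries:
                time.sleep(delay)   # transient listener hiccup; wait and retry
                continue
            raise

# usage: retry_on_ora12502(lambda: MyModel.objects.count())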

Is it ideal to use a try statement to handle connection reset by peer error on my python worker process on Heroku

I made a Python web app and deployed it to Heroku successfully, and it works well to an extent.
The problem is that once in a while the worker process throws a "connection reset by peer" error, and I have to go in and redeploy, only for it to happen again.
This affects the entire web app: those small glitches cause the program to malfunction and produce inconsistent, if not wrong, information. So I'm trying to validate whether an exception-handling statement in the following format would work:
def conti():
    # opens the connection to the site
    # performs the operations needed
    # closes the connection
    ...

try:
    conti()
except Exception:
    conti()
How can I make the try statement sort of recursive, so that whenever the error happens the program still continues?
Do I need to put the try statement in a recursive function to handle the error?
Thank you.
My recommendation is to consider a connection pool. If you are on Heroku and using PostgreSQL, you are probably already using psycopg2, which has a pool built in. See psycopg2 and infinite python script
This will avoid either recursion or explicit connection state/error detection in your code.
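A minimal sketch of what that might look like (assuming psycopg2 with Heroku's DATABASE_URL; the pool sizes and the query are placeholders):

import os
from psycopg2 import pool

# small pool shared by the worker; connections are reused instead of reopened per call
db_pool = pool.SimpleConnectionPool(1, 5, dsn=os.environ['DATABASE_URL'])

def conti():
    conn = db_pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")   # placeholder for the real work
        conn.commit()
    finally:
        db_pool.putconn(conn)         # always return the connection to the pool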

Python using try to reduce timeout wait

I am using the Exscript module, which has a call conn.connect('IP address').
It tries to open a telnet session to that IP.
It will generate an error after connection times out.
The timeout exception is set somewhere in the code of the module, or it may just be the default for telnet (not sure).
This timeout is too long and slows down the script if one device is not reachable. Is there something we can do with try/except here? Something like:
Try for 3 secs:
    then process the code
except:
    print "timed out"
We changed the API. Mike Pennington only recently introduced the new connect_timeout parameter for that specific use case.
New solution (current master, latest release on pypi 2.1.451):
conn = Telnet(connect_timeout=3)
We changed the API because you usually don't want to wait for unreachable devices, but want to wait for commands to finish (some take a little longer).
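Put together, usage might look roughly like this (a sketch; the import path and the broad exception handling are assumptions, not checked against every Exscript version):

from Exscript.protocols import Telnet

conn = Telnet(connect_timeout=3)   # give up on unreachable devices after ~3 seconds
try:
    conn.connect('192.0.2.1')      # placeholder IP
except Exception:
    print("timed out")
else:
    # authenticate / run commands here, then clean up
    conn.close()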
I think you can use
conn = Telnet(timeout=3)
I don't know whether the timeout is in seconds; if it's in milliseconds, try 3000.

Segfault with pymssql when cannot connect to server on multiple threads

We came across this when our MS SQL server became unreachable. It caused a bug in our code that brought our program to a screeching halt and, of course, the pitchforks and torches of users to our door. We've been able to boil our problem down to this: if a user, Bob, attempts to connect to the downed database, he will of course wait while the program attempts to connect. If, while Bob is waiting, a second user, Joe, attempts to connect, he will wait as well. After a while Bob will time out and get a proper error raised. However, when Joe's connection times out, a segmentation fault occurs, bringing everything down.
We've been able to reliably reproduce this error with the following code
import threading
import datetime
import time
import pymssql

class ThreadClass(threading.Thread):
    def run(self):
        now = datetime.datetime.now()
        print "%s connecting at time: %s" % (self.getName(), now)
        # host is unreachable, so both threads sit waiting to time out
        conn = pymssql.connect(host="10.255.255.1", database='blah',
                               user="blah", password="pass")

for i in range(2):
    t = ThreadClass()
    t.start()
    time.sleep(1)
This will cause a segfault after the first thread raises its error. Is there a way to stop this segfault and make it properly raise an error, or is there something I'm missing here?
Pymssql version 1.0.2 and python 2.6.6.
We went and asked the question over at pymssql's user group as well, to cover all our bases. According to one of the developers, pymssql is not thread-safe in the current stable release. It sounds like it might be in the development 1.9 release or in the next major 2.0 release. We will probably switch to a different module, or maybe use some sort of connection pooler with it, but that's probably more of a band-aid fix and not really ideal.
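One possible stop-gap while on 1.0.2 (an assumption-level workaround, not a confirmed fix: it only helps if the segfault is triggered by concurrent connect attempts) is to serialize connection attempts with a lock:

import threading
import pymssql

_connect_lock = threading.Lock()

def safe_connect(**kwargs):
    # only one thread at a time inside pymssql.connect()
    with _connect_lock:
        return pymssql.connect(**kwargs)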
