Force killing of q session after query is done - python

I am trying to force kill (not just close) a q session once my query is done, to save resources on my machine.
It is currently working using:
conn.sendAsync("exit 0")
The problem is that if I run another query right after (reopening the connection and running a new query), it might fail because the previous session may still be in the process of being killed, since the exit command was sent asynchronously.
Therefore, I am trying to do the same thing with a synchronous query, but when trying:
conn.sendSync("exit 0")
I get:
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
python-BaseException
Can I specify a timeout so that the q session is killed automatically after, say, 10 seconds instead, or is there another way to force kill the q session?
My code looks like this:
conn = qc.QConnection(host='localhost', port=12345, timeout=10000)
conn.open()
res = None
try:
    res = conn.sendSync(query, numpy_temporals=True)
except Exception as e:
    print(f'Error running {query}: {e}')
conn.sendSync("exit 0")
conn.close()
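For reference, a minimal sketch of how the snippet above could at least tolerate that reset, assuming the same qpython QConnection API (the q process exits before it can reply, so the error is expected):

from qpython import qconnection as qc

conn = qc.QConnection(host='localhost', port=12345, timeout=10000)
conn.open()
try:
    res = conn.sendSync(query, numpy_temporals=True)  # query assumed defined as above
finally:
    try:
        conn.sendSync("exit 0")  # the q process dies before replying, so a reset/EOF here is expected
    except Exception:
        pass
    conn.close()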

I'd suggest taking a step back and re-evaluating whether killing the KDB process after your Python program runs a query is really the right thing to do. If the program isn't responsible for bringing the KDB process up, it most likely should not bring it down.
Given that the rationale is saving resources, the process presumably holds a lot of data in memory and therefore takes time to start up, which is another reason you shouldn't kill it if you need to use it a second time.

You shouldn't be killing a kdb process you intend to query again. Some suggestions on points in your question:
"once my query is done to save resources" -> you can manually trigger garbage collection with .Q.gc[] to free up memory, or alternatively (and perhaps better) enable immediate garbage collection with -g 1 at startup; a small sketch of the first option follows the links below. Note that if your query creates large global variables, that memory will not be freed up / returned.
https://code.kx.com/q/ref/dotq/#qgc-garbage-collect
https://code.kx.com/q/basics/syscmds/#g-garbage-collection-mode
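For illustration, a rough sketch of triggering that collection from Python, reusing the qpython connection from the question (.Q.gc[] returns the number of bytes handed back to the OS):

freed = conn.sendSync('.Q.gc[]')  # ask the running q process to garbage collect instead of killing it
print(f'q returned {freed} bytes to the OS')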
"killed automatically after say 10 seconds" -> if your intention here is to stop client queries (such as those from your Python process) from running for more than 10 seconds, you can set a query timeout with -T 10 at startup, or while the process is running with \T 10 / system "T 10".
https://code.kx.com/q/basics/cmdline/#-t-timeout
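Both flags are given on the q command line when the process is brought up; a hypothetical startup line (script name and port are placeholders):

q myscript.q -p 12345 -g 1 -T 10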

Related

IDE Crash causes hung job on Server

Good Day All! I am using pyodbc to connect to a Microsoft SQL Server using a Native Client 11.0 ODBC connection. Occasionally something will happen to cause Spyder to crash, leaving my query hanging on the server. When this happens, all variables are lost, so I'm not able to cancel the job that is still on the server or close the connection. My DBAs do not have rules in place to cancel long-running queries, but hung queries like this block ETLs. I have my ODBC connection set up the way they've requested, so the question is: what else can I do to prevent issues for my partners when Spyder crashes? Note: I've imported pandas as "pd".
try:
    data_conn = pyodbc.connect(dECTV)
    data_conn.timeout = 1000
    tfn = pd.read_sql(tele, data_conn)
    print("Call information retrieved")
except:
    print('!~!~!~!\n Exception has been Raised for Inbound information !~!~!~!')
    tfn = pd.read_csv(export_location + r'\TFN_Details.csv')
finally:
    data_conn.close()
    print("Connection Closed. Moving on.")
BTW, I've done a lot of reading over the last two hours and have what I consider to be a solution, but I wanted to see if others agree. My thoughts would be to execute the following before running anything new on the same server.
exec sp_who 'my_login_id'; kill 'resulting_SPID';
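For illustration, a rough sketch of that idea in Python, assuming the pyodbc connection string dECTV from the question and a hypothetical login name; sp_who is queried for that login's sessions and each leftover SPID is killed from a fresh autocommit connection:

cleanup_conn = pyodbc.connect(dECTV, autocommit=True)   # KILL cannot run inside a transaction
cur = cleanup_conn.cursor()
my_spid = cur.execute("SELECT @@SPID").fetchval()        # don't kill the session doing the cleanup
for row in cur.execute("EXEC sp_who 'my_login_id'").fetchall():
    if row.spid != my_spid:
        cur.execute(f"KILL {int(row.spid)}")             # KILL takes a literal SPID, not a parameter
cleanup_conn.close()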

psycopg2 cursor hanging on terminated Redshift query

I am using psycopg2 (2.6.1) to connect to Amazon's Redshift.
I have a query that should last about 1 second, but about 1 time out of every 20 concurrent tries it just hangs forever (I manually kill them after 1 hour). To address this, I configured the statement_timeout setting before my query, as such:
rcur.execute("SET statement_timeout TO 60000")
rcur.execute(query)
so that after 1 minute the query gives up and I can try again (the second try does complete quickly, as expected). But even with this setting (which I confirmed works by setting the timeout to 1 ms and watching it raise an exception), the Python code sometimes hangs instead of raising an exception: it never reaches the print directly after rcur.execute(query). I can also see in the Redshift AWS dashboard that the query was "terminated" after 59 seconds, yet my code still hangs for an hour instead of raising an exception.
Does anyone know how to resolve this, or have a better method of dealing with typically short queries that occasionally take unnaturally long and simply need to be cancelled and retried?
I think you need to configure the keepalive settings for the Redshift connection.
Follow the steps in this AWS doc to do that:
http://docs.aws.amazon.com/redshift/latest/mgmt/connecting-firewall-guidance.html
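That document covers OS-level TCP keepalive settings. As an alternative sketch, libpq-level keepalive parameters can also be passed directly to psycopg2.connect, so a dead connection surfaces as an error instead of hanging (the endpoint, credentials and values below are illustrative, not from the question):

import psycopg2

conn = psycopg2.connect(
    host='my-cluster.redshift.amazonaws.com',  # placeholder endpoint
    port=5439,
    dbname='mydb',
    user='myuser',
    password='mypassword',
    keepalives=1,             # enable TCP keepalive probes
    keepalives_idle=60,       # seconds of idle time before the first probe
    keepalives_interval=10,   # seconds between probes
    keepalives_count=3,       # unanswered probes before the connection is dropped
)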

Python using try to reduce timeout wait

I am using the Exscript module, which has a call conn.connect('IP address').
It tries to open a telnet session to that IP.
It will generate an error after the connection times out.
The timeout is set somewhere in the module's code, or it may just be the telnet default (I'm not sure).
This timeout is too long and slows the script down if one device is not reachable. Is there something we can do with try/except here? Something like:
Try for 3 secs:
then process the code
except:
print " timed out"
We changed the API. Mike Pennington only recently introduced the new connect_timeout parameter for that specific use case.
New solution (current master, latest release on pypi 2.1.451):
conn = Telnet(connect_timeout=3)
We changed the API because you usually don't want to wait for unreachable devices, but want to wait for commands to finish (some take a little longer).
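A minimal sketch of how that might look, assuming the Exscript Telnet protocol class and the connect_timeout parameter described above (the IP address is a placeholder):

from Exscript.protocols import Telnet

conn = Telnet(connect_timeout=3)   # give up on connecting after roughly 3 seconds
try:
    conn.connect('192.0.2.10')     # placeholder IP address
    # ... authenticate and run commands here ...
    conn.close()
except Exception:
    print("timed out")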
I think you can use
conn = Telnet(timeout=3)
I don't know whether the timeout is in seconds; if it's in milliseconds, try 3000.

Is there a way to stop the execution of a mysql query in python if a python exception occurs?

I'm running a python script that executes mysql queries with many rows. The query takes a lot of time to run and sometimes I need to stop the script to change something or to start over. Is there a way to tell the mysql server that the script is terminating (for example if a KeyboardInterrupt exception occurs) and that the result of the query is no longer needed?
I tried closing the mysql connection and it didn't work. The execution of the query keeps on going.
I know that I can kill the query from the server directly but I would like the script to do it by itself.
I'm using the mysql-connector-python library, version is 2.0.2.
You can wrap your code in a try/except block and kill the query once an exception is raised. Something like this:
pid = <get the pid of the query>
try:
    ...  # your code
except:
    cursor.execute('KILL %s', (pid,))
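Filling that in, a sketch of one way it could work with mysql-connector-python: the connection id of the querying session is recorded up front, and on KeyboardInterrupt a second connection issues the KILL (it has to be a separate connection, because the first one is still blocked on the query). The connection parameters and long_running_query are placeholders:

import mysql.connector

params = dict(host='localhost', user='myuser', password='mypassword', database='mydb')  # placeholders

conn = mysql.connector.connect(**params)
cursor = conn.cursor()
cursor.execute('SELECT CONNECTION_ID()')     # remember the server-side id of this session
pid = cursor.fetchone()[0]
try:
    cursor.execute(long_running_query)       # the statement you may need to abort
    rows = cursor.fetchall()
except KeyboardInterrupt:
    killer = mysql.connector.connect(**params)
    killer.cursor().execute(f'KILL QUERY {pid}')   # abort the statement running on the first connection
    killer.close()
    raise
finally:
    conn.close()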

Closing python MySQL script

I was wondering if one of you could advise me on how to tackle a problem I am having. I developed a Python script that updates data in a database (MySQL) on every iteration of an endless while loop. What I want to ensure is that if the script is accidentally closed or stopped halfway through, it waits until all the data has been loaded into the database and the MySQL connection is closed (to prevent incomplete queries). Is there a way to tell the program to wait until the loop is done before it closes?
I hope this all makes sense, feel free to ask questions.
Thank you for your time in advance.
There are some things you can do to prevent a program from being closed unexpectedly (signal handlers, etc), but they only work in some cases and not others. There is always the chance of a system shutdown, power failure or SIGKILL that will terminate your program whether you like it or not. The canonical solution to this sort of problem is to use database transactions.
If you do your work in a transaction, then the database will simply roll back any changes if your script is interrupted, so you will not have any incomplete queries. The worst that can happen is that you need to repeat the query from the beginning next time.
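As a sketch of that idea with mysql-connector-python (the table, the collect_updates helper and the credentials are made up for illustration), each iteration's writes are grouped into one transaction, so an interruption before commit leaves nothing half-written; if the whole process dies, the server rolls the open transaction back on its own:

import mysql.connector

conn = mysql.connector.connect(host='localhost', user='myuser',
                               password='mypassword', database='mydb')  # placeholders
cursor = conn.cursor()

while True:
    batch = collect_updates()   # hypothetical helper returning (value, id) pairs for this iteration
    try:
        conn.start_transaction()
        for row in batch:
            cursor.execute('UPDATE readings SET value = %s WHERE id = %s', row)
        conn.commit()           # the writes only become permanent here
    except Exception:
        conn.rollback()         # a failed iteration leaves the database untouched
        raise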
I assume you are asking for a way to ensure that if someone presses Ctrl+C or Ctrl+Z, the program does not stop executing until it has completed all of the data insertion.
There are two approaches to it.
1) Insert all the data into the database inside a transaction. With a transaction, nothing is permanently inserted until you commit, so you commit once all the data has been written; if someone closes the application before that point, the transaction is simply never committed.
2) You can trap the interrupt signals from Ctrl+C and Ctrl+Z so that your program keeps running uninterrupted; see the sketch below.
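A minimal sketch of the second approach using Python's standard signal module: the handler only records the request, and the loop exits cleanly after the current iteration has finished (run_one_iteration_and_commit is a hypothetical stand-in for the real database work; SIGTSTP, the Ctrl+Z signal, is POSIX-only):

import signal

stop_requested = False

def request_stop(signum, frame):
    # Remember the request instead of dying mid-iteration.
    global stop_requested
    stop_requested = True
    print('Interrupt received, finishing the current batch before exiting...')

signal.signal(signal.SIGINT, request_stop)        # Ctrl+C
if hasattr(signal, 'SIGTSTP'):
    signal.signal(signal.SIGTSTP, request_stop)   # Ctrl+Z (POSIX only)

while not stop_requested:
    run_one_iteration_and_commit()                # hypothetical: one loop pass of database work
print('Last batch committed, exiting cleanly.')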
Use a with statement. Some examples here.
Define some exception handling in a context manager, like:
class Cursor(object):
    def __init__(self, username=None, password=None):
        # init your connection here
        self.connection = None

    def __iter__(self):
        # for reading the content of the cursor
        return iter([])

    def __enter__(self):
        # something executed before the connection is established
        return self

    def __exit__(self, ext_type, exc_value, traceback):
        # something executed when there is an error or the connection finishes
        pass

with Cursor() as cursor:
    print(cursor)
    connection = cursor.connection
    print(connection)
