Cleaning up after AWS Lambda execution context is closed with Python

From the Best Practices for Working with AWS Lambda Functions:
Take advantage of execution context reuse to improve the performance of your function. Initialize SDK clients and database connections outside of the function handler, [...]
I would like to implement this principle to improve my Lambda function, where a database handle is initialized and closed every time the function is invoked. Take the following example:
def lambda_handler(event, context):
    # Open a connection to the database
    db_handle = connect_database()
    # Do something with the database
    result = perform_actions(db_handle)
    # Clean up, close the connection
    db_handle.close()
    # Return the result
    return result
From my understanding of the AWS documentation, the code should be optimized as follows:
# Initialize the database connection outside the handler
db_handle = connect_database()

def lambda_handler(event, context):
    # Do something with the database and return the result
    return perform_actions(db_handle)
This would result in the db_handle.close() method not being called, thus potentially leaking a connection.
How should I handle the cleanup of such resources when using AWS Lambda with Python?

Many people are looking for the same thing as you. I believe it is impossible at this time, but you could handle the issue from the database side.
Take a look at this one
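For example, most database servers can be configured to drop connections that stay idle too long, which reaps handles abandoned by recycled Lambda execution environments. A minimal sketch, assuming PostgreSQL 14 or later (where idle_session_timeout exists) and a psycopg2 connection named db_handle:

# Ask the server to close this session if it idles for more than 5 minutes.
# NOTE: idle_session_timeout requires PostgreSQL 14+; older servers only
# offer idle_in_transaction_session_timeout.
cur = db_handle.cursor()
cur.execute("SET idle_session_timeout = '5min'")
db_handle.commit()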

The connection leak would only happen while the Lambda execution environment is alive; in other words, the connection would time out (be closed) after the execution environment is destroyed.
Whether a global connection object is worth implementing depends on your particular use case:
- how much of the total execution time is taken by the database initialization
- how often your function is called
- how you handle database connection errors
If you want to have a bit more control of the connection you can try this approach which recycles the database connection every two hours or when encountering a database-related exception:
import datetime

# Initialize the global object to hold database connection and timestamp
db_conn = {
    "db_handle": None,
    "init_dt": None
}

def lambda_handler(event, context):
    # Check the database connection; (re)connect if there is none
    if not db_conn["db_handle"]:
        db_conn["db_handle"] = connect_database()
        db_conn["init_dt"] = datetime.datetime.now()
    # Do something with the database and return the result
    try:
        result = do_work(db_conn["db_handle"])
    except DBError:
        # Drop the broken connection so the next invocation reconnects
        try:
            db_conn["db_handle"].close()
        except Exception:
            pass
        db_conn["db_handle"] = None
        return "db error occurred"
    # Check the connection age and recycle it after two hours
    if datetime.datetime.now() - db_conn["init_dt"] > datetime.timedelta(hours=2):
        db_conn["db_handle"].close()
        db_conn["db_handle"] = None
    return result
Please note I haven't tested the above on Lambda so you need to check it with your setup.

Related

How do I reconnect to PostgreSQL in a psycopg2 threaded connection class? Failure caused by SSL SYSCALL error: EOF detected in Azure

Our application was working fine till we ported to Azure Database for PostgreSQL. Then periodically our application fails for no apparent reason, and we have SSL SYSCALL errors all over the place, on DELETEs and so on. We have tried everything described on the internet: keepalive args, RAM, memory and everything else. We want to try automatically re-establishing the connection, but we have a threaded connection pool. I have looked at this thread: Psycopg2 auto reconnect inside a class
But our functions that read the database are in another class. So we have two questions:
1) What is the cause of the SSL SYSCALL errors? I have searched all threads and the usual suspects are ruled out.
2) How do I reconnect on failure inside a threaded connection pool class? This is being used in a Flask app.
Here is how our app is structured:
import psycopg2
import psycopg2.pool

class DBClass(object):
    _instance = None
    conn = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = object.__new__(cls)
            try:
                max_conn = 12
                keepalive_args = {
                    "keepalives": 1,
                    "keepalives_idle": 25,
                    "keepalives_interval": 4,
                    "keepalives_count": 9,
                }
                cls._instance.pool = psycopg2.pool.ThreadedConnectionPool(
                    3, max_conn,
                    dbname=..., host=..., user=..., password=..., port=...,
                    **keepalive_args)
            except Exception as ex:
                cls._instance = None
                raise ex
        return cls._instance

    def __enter__(self):
        self.conn = self._instance.pool.getconn()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._instance.pool.putconn(self.conn)

    def __del__(self):
        self._instance.pool.closeall()

# Another Python module has a class called clsEmployee. We have dozens of
# functions using the above-mentioned database class. Something like this:
with DBClass() as db:
    pg_conn = db.conn
    cur = pg_conn.cursor()
    cur.execute("SELECT * from emp")
    row = cur.fetchone()[0]
There are many ways you could handle this.
The solution proposed in Psycopg2 auto reconnect inside a class will still work if the calls that execute DB work are outside of the DBClass. You just need functions that call the database, and you wrap them with a decorator. All the decorator is doing is adding a loop that allows the function to be called multiple times, wrapping the actual function in a try/except and reconnecting on an except. This is actually a pretty standard way of handling this type of problem, as it works for DBs, APIs, or anything that could fail. The one thing you may want to do is add an exponential backoff to your retry (where the sleep call is); a sketch follows.
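A minimal sketch of that decorator approach, assuming psycopg2's OperationalError is the failure to catch and that re-entering your DBClass context manager is enough to pick up a fresh connection (adapt the reconnect step to your pool):

import time
import functools
import psycopg2

def retry_on_db_error(max_retries=3, base_delay=0.5):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except psycopg2.OperationalError:
                    if attempt == max_retries - 1:
                        raise
                    # Exponential backoff before retrying the whole call
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator

@retry_on_db_error()
def fetch_employee():
    # Each retry re-enters the context manager, so a broken connection
    # can be swapped for a fresh one from the pool
    with DBClass() as db:
        cur = db.conn.cursor()
        cur.execute("SELECT * FROM emp")
        return cur.fetchone()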
The other option you have is to create your own subclass of cursor that has the same retry logic inside an overridden version of execute; see the sketch below. This will accomplish the same thing; it's just a case of what you think is easier to work with.
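A sketch of that cursor subclass, plugged in through psycopg2's cursor_factory hook; note that this retries on the same connection, so a connection that is truly dead still has to be replaced at the pool level:

import time
import psycopg2
import psycopg2.extensions

class RetryingCursor(psycopg2.extensions.cursor):
    # A cursor whose execute() retries transient operational failures
    def execute(self, query, vars=None):
        for attempt in range(3):
            try:
                return super().execute(query, vars)
            except psycopg2.OperationalError:
                if attempt == 2:
                    raise
                time.sleep(0.5 * 2 ** attempt)  # exponential backoff

# Usage: cur = pg_conn.cursor(cursor_factory=RetryingCursor)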
Since this is being used in a Flask app, you could also modify the first approach and, instead of doing the retry at the model code level, do the retry at the Flask route level.

Simple example of working with the neo4j Python driver?

Is there a simple example of working with the neo4j python driver?
How do I just pass cypher query to the driver to run and return a cursor?
If I'm reading, for example, this, it seems the demo has a class wrapper with a private member function that gets passed to the session:
session.write_transaction(self._create_and_return_greeting, ...
That then gets called with a transaction as the first parameter...
def _create_and_return_greeting(tx, message):
That in turn runs the Cypher:
result = tx.run("CREATE (a:Greeting) "
This seems 10X more complicated than it needs to be.
I did just try something simpler:

def raw_query(query, **kwargs):
    neodriver = neo_connect()  # cached dbconn
    with neodriver.session() as session:
        result = session.run(query, **kwargs)
        return result.data()
But this results in a socket error on the query, probably because the session goes out of scope?
[dfcx/__init__] ERROR | Underlying socket connection gone (_ssl.c:2396)
[dfcx/__init__] ERROR | Failed to write data to connection IPv4Address(('neo4j-core-8afc8558-3.production-orch-0042.neo4j.io', 7687)) (IPv4Address(('34.82.120.138', 7687)))
Also I can't return a cursor/iterator, just the data()
When the session goes out of scope, the query result seems to die with it.
If I manually open and close a session, then I'd have the same problems?
Python must be the most popular language this DB is used with; does everyone use a different driver?
Py2neo seems cute, but it's completely lacking ORM wrapper functions for most of the Cypher language features, so you have to drop down to raw Cypher anyway. And I'm not sure it supports **kwargs argument interpolation in the same way.
I guess that big raise should help iron out some kinks :D
Slightly longer version trying to get a working DB wrapper:
from typing import Union
import logging
import neo4j

raw_driver = None  # module-level cache for the driver

def neo_connect() -> Union[neo4j.BoltDriver, neo4j.Neo4jDriver]:
    global raw_driver
    if raw_driver:
        # print('reuse driver')
        return raw_driver
    neoconfig = NEOCONFIG
    raw_driver = neo4j.GraphDatabase.driver(
        neoconfig['url'],
        auth=(neoconfig['user'], neoconfig['pass']))
    if raw_driver is None:
        raise BaseException("cannot connect to neo4j")
    else:
        return raw_driver

def raw_query(query, **kwargs):
    # just get data, no cursor
    neodriver = neo_connect()
    session = neodriver.session()
    # logging.info('neoquery %s', query)
    # with neodriver.session() as session:
    try:
        result = session.run(query, **kwargs)
        data = result.data()
        return data
    except neo4j.exceptions.CypherSyntaxError as err:
        logging.error('neo error %s', err)
        logging.error('failed query: %s', query)
        raise err
    # finally:
    #     logging.info('close session')
    #     session.close()
Update: someone pointed me to this example, which is another way to use the tx wrapper.
https://github.com/neo4j-graph-examples/northwind/blob/main/code/python/example.py#L16-L21
def raw_query(query, **kwargs):
    neodriver = neo_connect()  # cached dbconn
    with neodriver.session() as session:
        result = session.run(query, **kwargs)
        return result.data()
This is perfectly fine and works as intended on my end.
The error you're seeing is stating that there is a connection problem. So there must be something going on between the server and the driver that's outside of its influence.
Also, please note that there is a difference between these ways to run a query:

# Auto-commit transaction
with driver.session() as session:
    result = session.run("<SOME CYPHER>")

# Managed transaction
def work(tx):
    result = tx.run("<SOME CYPHER>")

with driver.session() as session:
    session.write_transaction(work)
The latter one might be 3 lines longer, and the team working on the drivers has collected some feedback regarding this. However, there are more things to consider here. Firstly, changing the API surface is something that needs careful planning and cannot be done in, say, a patch release. Secondly, there are technical hurdles to overcome. Here are the semantics, anyway:
Auto-commit transaction. Runs only that query as one unit of work.
If you run a new auto-commit transaction within the same session, the previous result will buffer all available records for you (depending on the query, this will consume a lot of memory). This can be avoided by calling result.consume(). However, if the session goes out of scope, the result will be consumed automatically. This means you cannot extract further records from it. Lastly, any error will be raised and needs handling in the application code.
Managed transaction. Runs whatever unit of work you want within that function. A transaction is implicitly started and committed (unless you rollback explicitly) around the function.
If the transaction ends (end of function or rollback), the result will be consumed and become invalid. You'll have to extract all records you need before that.
This is the recommended way of using the driver because it will not raise all errors but handles some internally (where appropriate) and retries the work function (e.g. if the server is only temporarily unavailable). Since the function might be executed multiple times, you must make sure it's idempotent.
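To make the record-buffering point concrete, here is a minimal sketch (the query and the get_names function are illustrative) that extracts every record inside the managed transaction, before the result is consumed:

def get_names(tx):
    result = tx.run("MATCH (p:Person) RETURN p.name AS name")
    # Materialize the records while the transaction is still open;
    # the result becomes invalid once the function returns
    return [record["name"] for record in result]

with driver.session() as session:
    names = session.read_transaction(get_names)  # retried on transient failures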
Closing thoughts:
Please remember that Stack Overflow is monitored on a best-effort basis, and what can be perceived as hasty comments may get in the way of getting helpful answers to your questions.

Not all Python code executing in AWS Lambda function

I have a simple lambda function which prints an event and then attempts to insert a row into a database. It runs with no error, but does not execute all of the code.
The event gets printed, but the row never gets inserted into the table. Nothing I put after the connection call gets executed, not even a print statement. I'm guessing something is wrong with the connection, but as far as I know I have no way of telling what. Are there more logs somewhere? In CloudWatch I see at the end: Task timed out after 3.00 seconds
import boto3
import psycopg2

s3 = boto3.client('s3')

def insert_data(event=None, context=None):
    print(event)
    connection = psycopg2.connect(user="xxxx", password="xxxx",
                                  host="xxxx", port="xx",
                                  database="xxxx")
    cursor = connection.cursor()
    postgres_insert_query = "INSERT INTO dronedata (name,lat,long,other) VALUES ('img2','54','43','from lambda')"
    cursor.execute(postgres_insert_query)
    connection.commit()
    count = cursor.rowcount
    print(count, "Record inserted successfully into mobile table")
The typical security setup is:
- A security group on the AWS Lambda function (Lambda-SG) that permits all outbound access (no need for inbound rules)
- A security group on the database (either an EC2 instance or Amazon RDS) (DB-SG) that permits inbound access on the appropriate port from Lambda-SG
That is, DB-SG should specifically reference Lambda-SG in its inbound rules.
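As an illustration, such a rule can be created with boto3; the security group IDs below are placeholders for your actual DB-SG and Lambda-SG:

import boto3

ec2 = boto3.client('ec2')

# DB-SG allows inbound database traffic (port 5432 shown) from Lambda-SG
ec2.authorize_security_group_ingress(
    GroupId='sg-0aaaaaaaaaaaaaaaa',  # placeholder: DB-SG
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 5432,
        'ToPort': 5432,
        'UserIdGroupPairs': [{'GroupId': 'sg-0bbbbbbbbbbbbbbbb'}],  # placeholder: Lambda-SG
    }],
)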
Yes, you have to increase the default timeout from 3 seconds to something higher:
Timeout – The amount of time that Lambda allows a function to run before stopping it. The default is 3 seconds. The maximum allowed value is 900 seconds.
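This can be done in the console, or programmatically; a sketch using boto3 (the function name is a placeholder):

import boto3

lambda_client = boto3.client('lambda')

# Raise the timeout from the 3-second default to 30 seconds
lambda_client.update_function_configuration(
    FunctionName='insert_data',  # placeholder: your function's name
    Timeout=30,
)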
Also, since psycopg2 is an external lib, please upload that lib along with your code in your Lambda function's deployment package. The underlying issue is that the function is not able to connect to the database, and that's why you are facing a timeout.

Using decorators for database access with psycopg2

I am constructing a model that does large parts of its calculations in a Postgresql database (for performance reasons). It looks somewhat like this:
import psycopg2

def sql_func1(conn):
    # prepare some data, crunch some numbers, etc.
    curs = conn.cursor()
    curs.execute("SOME SQL COMMAND")
    conn.commit()  # commit() lives on the connection, not the cursor
    curs.close()

if __name__ == "__main__":
    connection = psycopg2.connect(dbname='name', user='user', password='pass',
                                  host='localhost', port=1234)
    sql_func1(connection)
    sql_func2(connection)
    sql_func3(connection)
    connection.close()
The script uses around 30 individual functions like sql_func1. Obviously it is a little awkward to manage the connection and cursor in each function all the time. Thus I started using a decorator as described here. Now I can simply wrap sql_func1 with a decorator @db_connect and pass the connection from there. However, that means I am opening and closing the connection all the time, which is not good practice either. The psycopg2 FAQ says:
Creating a connection can be slow (think of SSL over TCP) so the best practice is to create a single connection and keep it open as long as required. It is also good practice to rollback or commit frequently (even after a single SELECT statement) to make sure the backend is never left "idle in transaction". See also psycopg2.pool for lightweight connection pooling.
Could you please give me some insight into what would be the ideal practice in my case? Should I rather use a decorator that passes the cursor object instead of the connection? If so, please provide a code sample for the decorator. As I am rather new to programming, please also let me know in case you think my overall approach is wrong.
What about storing the connection in a global variable without closing it in the finally block? Something like this (according to the example you linked):
import psycopg2

cnn = None

def with_connection(f):
    def with_connection_(*args, **kwargs):
        global cnn
        if not cnn:
            cnn = psycopg2.connect(DSN)
        try:
            rv = f(cnn, *args, **kwargs)
        except Exception:
            cnn.rollback()
            raise
        else:
            cnn.commit()  # or maybe not
        return rv
    return with_connection_
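A usage sketch (DSN and the SQL are placeholders): any function wrapped this way receives the shared connection as its first argument, so the single connection stays open across calls:

@with_connection
def sql_func1(conn):
    # The decorator injects the shared connection as the first argument
    curs = conn.cursor()
    curs.execute("SOME SQL COMMAND")
    curs.close()

sql_func1()  # no connection handling needed at the call site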

How to connect to Cassandra inside a Pylons app?

I created a new Pylons project, and would like to use Cassandra as my database server. I plan on using Pycassa to be able to use cassandra 0.7beta.
Unfortunately, I don't know where to instantiate the connection to make it available in my application.
The goal would be to :
- Create a pool when the application is launched
- Get a connection from the pool for each request, and make it available to my controllers and libraries (in the context of the request). The best would be to get a connection from the pool "lazily", i.e. only if needed
- If a connection has been used, release it when the request has been processed
Additionally, is there something important I should know about it ? When I see some comments like "Be careful when using a QueuePool with use_threadlocal=True, especially with retries enabled. Synchronization may be required to prevent the connection from changing while another thread is using it.", what does it mean exactly ?
Thanks.
--
Pierre
Well, I worked a little more. In fact, using a connection manager was probably not a good idea, as this should be the template context. Additionally, opening a connection for each thread is not really a big deal; opening a connection per request would be.
I ended up with just pycassa.connect_thread_local() in app_globals, and there I go.
Okay.
I worked a little, I learned a lot, and I found a possible answer.
Creating the pool
The best place to create the pool seems to be in the app_globals.py file, which is basically a container for objects which will be accessible "throughout the life of the application". Exactly what I want for a pool, in fact.
I just added my init code at the end of the file; it takes its settings from the Pylons configuration file:
"""Creating an instance of the Pycassa Pool"""
kwargs = {}
# Parsing servers
if 'cassandra.servers' in config['app_conf']:
servers = config['app_conf']['cassandra.servers'].split(',')
if len(servers):
kwargs['server_list'] = servers
# Parsing timeout
if 'cassandra.timeout' in config['app_conf']:
try:
kwargs['timeout'] = float(config['app_conf']['cassandra.timeout'])
except:
pass
# Finally creating the pool
self.cass_pool = pycassa.QueuePool(keyspace='Keyspace1', **kwargs)
I could have done better, like moving that into a function, or supporting more parameters (pool size, ...). Which I'll do.
Getting a connection at each request
Well, there seems to be a simple way: in the file base.py, add something like c.conn = g.cass_pool.get() before calling WSGIController, and something like c.conn.return_to_pool() after; a sketch of this follows. It is simple and it works, but it gets a connection from the pool even when the controller doesn't need one. I have to dig a little deeper.
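A sketch of that simple variant, assuming the stock Pylons BaseController layout (the try/finally is my addition, to guarantee the connection is returned even on errors):

# base.py (sketch)
class BaseController(WSGIController):
    def __call__(self, environ, start_response):
        c.conn = g.cass_pool.get()
        try:
            return WSGIController.__call__(self, environ, start_response)
        finally:
            c.conn.return_to_pool()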
Creating a connection manager
I had the simple idea to create a class which would be instantiated at each request in the base.py file, and which would automatically grab a connection from the pool when requested (and release it afterwards). This is a really simple class:
class LocalManager:
    '''Requests a connection from a Pycassa Pool when needed,
    and releases it at the end of the object's life'''

    def __init__(self, pool):
        '''Class constructor'''
        assert isinstance(pool, Pool)
        self._pool = pool
        self._conn = None

    def get(self):
        '''Grabs a connection from the pool if not already done, and returns it'''
        if self._conn is None:
            self._conn = self._pool.get()
        return self._conn

    def __getattr__(self, key):
        '''It's cooler to write "c.conn" than "c.get()" in the code, isn't it?'''
        if key == 'conn':
            return self.get()
        else:
            return self.__dict__[key]

    def __del__(self):
        '''Releases the connection, if needed'''
        if self._conn is not None:
            self._conn.return_to_pool()
Just add c.cass = LocalManager(g.cass_pool) before calling WSGIController in base.py, and del(c.cass) after, and I'm all done.
And it works:

conn = c.cass.conn
cf = pycassa.ColumnFamily(conn, 'TestCF')
print(cf.get('foo'))
\o/
I don't know if this is the best way to do this. If not, please let me know =)
Plus, I still did not understand the "synchronization" part in the Pycassa source code, whether it is needed in my case, and what I should do to avoid problems.
Thanks.
