Is there a way to create/modify connections through the Airflow API - Python

Going through Admin -> Connections, we have the ability to create/modify a connection's params, but I'm wondering if I can do the same through the API so I can programmatically set up connections.
airflow.models.Connection seems like it only deals with actually connecting to the instance rather than saving a connection to the list. It seems like a function that should have been implemented, but I'm not sure where I can find the docs for this specific functionality.

Connection is actually a model, which you can use to query for and insert a new connection:
from airflow import settings
from airflow.models import Connection

conn = Connection(
    conn_id=conn_id,
    conn_type=conn_type,
    host=host,
    login=login,
    password=password,
    port=port
)  # create a connection object

session = settings.Session()  # get the session
session.add(conn)
session.commit()  # it will insert the connection object programmatically

You can also add, delete, and list connections from the Airflow CLI if you need to do it outside of Python/Airflow code, via bash, in a Dockerfile, etc.
airflow connections --add ...
Usage:
airflow connections [-h] [-l] [-a] [-d] [--conn_id CONN_ID]
                    [--conn_uri CONN_URI] [--conn_extra CONN_EXTRA]
                    [--conn_type CONN_TYPE] [--conn_host CONN_HOST]
                    [--conn_login CONN_LOGIN] [--conn_password CONN_PASSWORD]
                    [--conn_schema CONN_SCHEMA] [--conn_port CONN_PORT]
https://airflow.apache.org/cli.html#connections
It doesn't look like the CLI currently supports modifying an existing connection, but there is a Jira issue for it with an active open PR on GitHub.
AIRFLOW-2840 - cli option to update existing connection
https://github.com/apache/incubator-airflow/pull/3684
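If you need to modify a connection from Python in the meantime, a hedged workaround is to update the existing row through the same session/Connection model shown above (the conn_id and new values below are placeholders):

from airflow import settings
from airflow.models import Connection

session = settings.Session()
existing = (
    session.query(Connection)
    .filter(Connection.conn_id == "my_prod_db")  # hypothetical conn_id
    .one_or_none()
)
if existing is not None:
    existing.host = "new-host.example.com"  # placeholder values
    existing.password = "new-password"
    session.add(existing)
    session.commit()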

First check whether the connection already exists; if it doesn't, create a new Connection using from airflow.models import Connection:
import logging

from airflow import settings
from airflow.models import Connection


def create_conn(conn_id, conn_type, host, login, pwd, port, desc):
    conn = Connection(conn_id=conn_id,
                      conn_type=conn_type,
                      host=host,
                      login=login,
                      password=pwd,
                      port=port,
                      description=desc)
    session = settings.Session()
    existing = session.query(Connection).filter(Connection.conn_id == conn.conn_id).first()
    if existing is not None:
        logging.warning(f"Connection {conn.conn_id} already exists")
        return None
    session.add(conn)
    session.commit()
    logging.info(Connection.log_info(conn))
    logging.info(f'Connection {conn_id} is created')
    return conn
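A call to the helper above might look like this (all argument values are placeholders):

create_conn(conn_id="my_postgres",
            conn_type="postgres",
            host="db.example.com",
            login="user",
            pwd="secret",
            port=5432,
            desc="example Postgres connection")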

You can populate connections using environment variables using the connection URI format.
The environment variable naming convention is AIRFLOW_CONN_<conn_id>, all uppercase.
So if your connection id is my_prod_db then the variable name should be AIRFLOW_CONN_MY_PROD_DB.
In general, Airflow’s URI format is like so:
my-conn-type://my-login:my-password@my-host:5432/my-schema?param1=val1&param2=val2
Note that connections registered in this way do not show up in the Airflow UI.
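As a minimal sketch, the my_prod_db example above could be exported from Python before Airflow reads it (the URI below just reuses the generic format shown above; all values are placeholders):

import os

# Hypothetical connection; Airflow resolves AIRFLOW_CONN_MY_PROD_DB
# whenever something asks for the conn_id "my_prod_db".
os.environ["AIRFLOW_CONN_MY_PROD_DB"] = (
    "my-conn-type://my-login:my-password@my-host:5432/my-schema?param1=val1&param2=val2"
)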

Using session = settings.Session() assumes the Airflow database backend has been initialized. For those who haven't set it up in their development environment, a hybrid method using both the Connection class and environment variables is a workaround.
Below is an example of setting up an S3Hook:
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
from airflow.models.connection import Connection
import os
import json

aws_default = Connection(
    conn_id="aws_default",
    conn_type="aws",
    login='YOUR-AWS-KEY-ID',
    password='YOUR-AWS-KEY-SECRET',
    extra=json.dumps({'region_name': 'us-east-1'})
)

os.environ["AIRFLOW_CONN_AWS_DEFAULT"] = aws_default.get_uri()

s3_hook = S3Hook(aws_conn_id='aws_default')
s3_hook.list_keys(bucket_name='YOUR-BUCKET', prefix='YOUR-FILENAME')

Related

Programmatically set connections / variables in Airflow

Is there a way to set connections / variables programmatically in Airflow? I am aware this is defeating the very purpose of not exposing these details in the code, but for debugging it would really help me big time if I could do something like the following pseudo code:
# pseudo code
from airflow import connections
connections.add({name: '...',
                 user: '...'})
Connection is a DB entity and you can create it. See below:
from airflow import settings
from airflow.models import Connection

conn = Connection(
    conn_id=conn_id,
    conn_type=conn_type,
    host=host,
    login=login,
    password=password,
    port=port
)

session = settings.Session()
session.add(conn)
session.commit()
As for variables - just use the API. See example below
from airflow.models import Variable
Variable.set("my_key", "my_value")
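Variables can also hold JSON if you let Airflow serialize and deserialize them; a small sketch (the key and value are illustrative):

from airflow.models import Variable

# Store a dict as JSON and read it back as a dict.
Variable.set("my_settings", {"env": "prod", "retries": 3}, serialize_json=True)
settings_dict = Variable.get("my_settings", deserialize_json=True)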

How to connect to GCP Composer Airflow's metadata database from Python?

I'm having trouble connecting to the Airflow's metadata database from Python.
The connection is set, and I can query the metadata database, using the UI's Ad hoc query window.
If I try to use the same connection from Python, it doesn't work.
So, in detail, I have two connections set up, both working from the UI:
Connection 1:
Conn type: MYSQL
Host: airflow-sqlproxy-service
Schema: composer-1-6-1-airflow-1-10-0-57315b5a
Login: root
Connection 2:
Conn type: MYSQL
Host: 127.0.0.1
Schema: composer-1-6-1-airflow-1-10-0-57315b5a
Login: root
As I said, both of them work from the UI (Data Profiling -> Ad Hoc Query).
But whenever I create a DAG and try to trigger it from a PythonOperator using various hooks, I'm always getting the same error message:
Sample code 1:
import logging

from airflow import DAG
from datetime import datetime, timedelta
from airflow.operators.python_operator import PythonOperator
from airflow.operators.mysql_operator import MySqlOperator
from airflow.hooks.mysql_hook import MySqlHook


class ReturningMySqlOperator(MySqlOperator):
    def execute(self, context):
        self.log.info('Executing: %s', self.sql)
        hook = MySqlHook(mysql_conn_id=self.mysql_conn_id,
                         schema=self.database)
        return hook.get_records(
            self.sql,
            parameters=self.parameters)


with DAG(
    "a_JSON_db",
    start_date=datetime(2020, 11, 19),
    max_active_runs=1,
    schedule_interval=None,
    # catchup=False  # enable if you don't want historical dag runs to run
) as dag:

    t1 = ReturningMySqlOperator(
        task_id='basic_mysql',
        mysql_conn_id='airflow_db_local',
        sql="select * from xcom")

    def get_records(**kwargs):
        ti = kwargs['ti']
        xcom = ti.xcom_pull(task_ids='basic_mysql')
        string_to_print = 'Value in xcom is: {}'.format(xcom)
        # Get data in your logs
        logging.info(string_to_print)

    t2 = PythonOperator(
        task_id='records',
        provide_context=True,
        python_callable=get_records)
Sample code 2:
def get_dag_ids(**kwargs):
    mysql_hook = MySqlOperator(task_id='query_table_mysql',
                               mysql_conn_id="airflow_db",
                               sql="SELECT MAX(execution_date) FROM task_instance WHERE dag_id = 'Test_Dag'")
    MySql_Hook = MySqlHook(mysql_conn_id="airflow_db_local")
    records = MySql_Hook.get_records(sql="SELECT MAX(execution_date) FROM task_instance")
    print(records)


t1 = PythonOperator(
    task_id="get_dag_nums",
    python_callable=get_dag_ids,
    provide_context=True)
The error message is this:
ERROR - (2003, "Can't connect to MySQL server on '127.0.0.1' (111)")
I looked up the config, and I found this environment variable:
core sql_alchemy_conn mysql+mysqldb://root:@127.0.0.1/composer-1-6-1-airflow-1-10-0-57315b5a env var
I tried to use a Postgres connection with this URI as well, with the same error message (as above).
I'm starting to think GCP Airflow's IAP is blocking me from having access from a Python DAG.
My Airflow composer version is the following:
composer-1.6.1-airflow-1.10.0
Can anyone help me?
Only the service account associated with Composer will have read access to the tenant project for the metadata database. This includes connecting from the underlying Kubernetes (Airflow) cluster to the tenant project hosting Airflow's Cloud SQL instance.
The accepted connection methods are SQLAlchemy and using the Kubernetes cluster as a proxy. Connections from GKE will use the airflow-sqlproxy-service.default service discovery name for connecting.
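A minimal sketch of the SQLAlchemy route, assuming the code runs inside the Composer GKE cluster and reusing the root/empty-password credentials and schema shown in the question (adjust these to your environment):

from sqlalchemy import create_engine, text

# Connect through the in-cluster SQL proxy service instead of 127.0.0.1.
engine = create_engine(
    "mysql+mysqldb://root:@airflow-sqlproxy-service.default:3306/"
    "composer-1-6-1-airflow-1-10-0-57315b5a"
)

with engine.connect() as conn:
    rows = conn.execute(text("SELECT MAX(execution_date) FROM task_instance")).fetchall()
    print(rows)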
You also have a CLI option through GKE. Use the following command to run a temporary deployment+pod using the mysql:latest image, which comes preinstalled with the mysql CLI tool:
$ kubectl run mysql-cli-tmp-deployment --generator=run-pod/v1 --rm --stdin --tty --image mysql:latest -- bash
Once in the shell, we can use the mysql tool to open an interactive session with the Airflow database:
$ mysql --user root --host airflow-sqlproxy-service --database airflow-db
Once the session is open, standard queries can be executed against the database.

How to use connection pooling with psycopg2 (postgresql) with Flask

How should I use psycopg2 with Flask? I suspect it wouldn't be good to open a new connection every request so how can I open just one and make it globally available to the application?
from flask import Flask
app = Flask(__name__)
app.config.from_object('config') # Now we can access the configuration variables via app.config["VAR_NAME"].
import psycopg2
import myapp.views
Using connection pooling is needed with Flask or any web server; as you rightfully mentioned, it is not wise to open and close connections for every request.
psycopg2 offers connection pooling out of the box: an AbstractConnectionPool class which you can extend and implement, or a SimpleConnectionPool class that can be used as-is. Depending on how you run the Flask app, you may want to use ThreadedConnectionPool, which the docs describe as
A connection pool that works with the threading module.
Creating a simple Flask app and adding a connection pool to it:
import psycopg2
from psycopg2 import pool
from flask import Flask

app = Flask(__name__)

postgreSQL_pool = psycopg2.pool.SimpleConnectionPool(1, 20, user="postgres",
                                                     password="pass##29",
                                                     host="127.0.0.1",
                                                     port="5432",
                                                     database="postgres_db")


@app.route('/')
def hello_world():
    # Use getconn() to get a connection from the connection pool
    ps_connection = postgreSQL_pool.getconn()
    # use cursor() to get a cursor as normal
    ps_cursor = ps_connection.cursor()
    #
    # use ps_cursor to interact with DB
    #
    # close cursor
    ps_cursor.close()
    # release the connection back to the connection pool
    postgreSQL_pool.putconn(ps_connection)
    return 'Hello, World!'
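If the app handles requests from multiple threads (as many WSGI servers do), a ThreadedConnectionPool can be swapped in with the same constructor arguments; this is just a sketch reusing the placeholder credentials from above:

from psycopg2 import pool

postgreSQL_pool = pool.ThreadedConnectionPool(1, 20, user="postgres",
                                              password="pass##29",
                                              host="127.0.0.1",
                                              port="5432",
                                              database="postgres_db")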
The Flask app itself is not complete or production-ready; please follow the instructions in the Flask docs to manage DB credentials and to use the pool object across the Flask app within the Flask context.
I would strongly recommend using libraries such as SQLAlchemy along with Flask (available as a wrapper), which will maintain connections and manage the pooling for you, allowing you to focus on your logic.
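For illustration, a minimal sketch of that approach with the Flask-SQLAlchemy wrapper; the database URI and pool options are placeholders, not a production configuration:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import text

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql+psycopg2://postgres:pass@127.0.0.1:5432/postgres_db"
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {"pool_size": 5, "max_overflow": 10}
db = SQLAlchemy(app)


@app.route('/')
def hello_world():
    # The engine behind db manages the connection pool; just run the query.
    result = db.session.execute(text("SELECT 1")).scalar()
    return 'Hello, World! ({})'.format(result)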

How to create a pymongo connection per request in Flask

In my Flask application, I hope to use pymongo directly, but I am not sure what the best way is to create a pymongo connection for each request and how to reclaim the connection resources.
I know Connection in pymongo is thread-safe and has built-in pooling. I guess I need to create a global Connection instance, and use before_request to put it in flask g.
In the app.py:
from flask import Flask, g
from pymongo import Connection

from admin.views import admin

app = Flask(__name__)  # the Flask app object, created as usual
connection = Connection()
db = connection['test']


@app.before_request
def before_request():
    g.db = db


@app.teardown_request
def teardown_request(exception):
    if hasattr(g, 'db'):
        # FIX
        pass
In admin/views.py:
from flask import g


@admin.route('/')
def index():
    # do something with g.db
    pass
It actually works. So questions are:
Is this the best way to use Connection in flask?
Do I need to explicitly reclaim resources in teardown_request and how to do it?
I still think this is an interesting question, but why no response... So here is my update.
For the first question, I think using current_app is clearer in Flask.
In app.py
app = Flask(__name__)
connection = Connection()
db = connection['test']
app.db = db
In view.py:
from flask import current_app
db = current_app.db
# do anything with db
And by using current_app, you can use the application factory pattern to create more than one app, as described at http://flask.pocoo.org/docs/patterns/appfactories/
And for the second question, I'm still figuring it out.
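Tying that together, a hedged sketch of the application-factory variant mentioned above, reusing the (older) pymongo Connection API from the question; the names are illustrative:

from flask import Flask
from pymongo import Connection


def create_app():
    app = Flask(__name__)
    connection = Connection()
    app.db = connection['test']  # views then reach it via current_app.db
    return app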
Here's an example of using the flask-pymongo extension.
Put your MongoDB URI (up to the db name) in app.config like below:
from flask_pymongo import PyMongo

app.config['MONGO_URI'] = 'mongodb://192.168.1.1:27017/your_db_name'
mongo = PyMongo(app, config_prefix='MONGO')
and then in your API method, where you need the db, do the following:
db = mongo.db
Now you can work on this db connection and get your data:
users_count = db.users.count()
I think what you present is ok. Flask is almost too flexible in how you can organize things, not always presenting one obvious and right way. You might make use of the flask-pymongo extension which adds a couple of small conveniences. To my knowledge, you don't have to do anything with the connection on request teardown.

How do I make one instance in Python that I can access from different modules?

I'm writing a web application that connects to a database. I'm currently using a variable in a module that I import from other modules, but this feels nasty.
# server.py
from hexapoda.application import application

if __name__ == '__main__':
    from paste import httpserver
    httpserver.serve(application, host='127.0.0.1', port='1337')

# hexapoda/application.py
from mongoalchemy.session import Session

db = Session.connect('hexapoda')

import hexapoda.tickets.controllers

# hexapoda/tickets/controllers.py
from hexapoda.application import db

def index(request, params):
    tickets = db.query(Ticket)
The problem is that I get multiple connections to the database (I guess that because I import application.py in two different modules, the Session.connect() function gets executed twice).
How can I access db from multiple modules without creating multiple connections (i.e. only call Session.connect() once in the entire application)?
Try the Twisted framework with something like:
from twisted.enterprise import adbapi

class db(object):
    def __init__(self):
        self.dbpool = adbapi.ConnectionPool('MySQLdb',
                                            db='database',
                                            user='username',
                                            passwd='password')

    def query(self, sql):
        self.dbpool.runInteraction(self._query, sql)

    def _query(self, tx, sql):
        tx.execute(sql)
        print(tx.fetchone())
That's probably not what you want to do - a single connection per app means that your app can't scale.
The usual solution is to connect to the database when a request comes in and store that connection in a variable with "request" scope (i.e. it lives as long as the request).
A simple way to achieve that is to put it in the request:
request.db = ...connect...
Your web framework probably offers a way to annotate methods or something like a filter which sees all requests. Put the code to open/close the connection there.
If opening connections is expensive, use connection pooling.
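For example, in Flask (used here purely to illustrate the request-scoped open/close pattern; the pool sizes and DSN are placeholders), the hooks would look roughly like this:

from flask import Flask, g
from psycopg2 import pool

app = Flask(__name__)
# Small illustrative pool; psycopg2 passes the DSN string through to connect().
db_pool = pool.ThreadedConnectionPool(1, 10, "dbname=mydb user=me password=secret host=127.0.0.1")


@app.before_request
def open_connection():
    # Borrow a connection for the lifetime of this request.
    g.db = db_pool.getconn()


@app.teardown_request
def close_connection(exc):
    # Return the connection to the pool when the request ends.
    conn = g.pop("db", None)
    if conn is not None:
        db_pool.putconn(conn)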
