How to setup Celery to talk ssl to Azure Redis Instance - python

Using the great answer to "How to configure celery-redis in django project on microsoft azure?", I can configure Celery to use Azure Redis Cache using the non-ssl port, 6379, using the following Python code:
from celery import Celery
# This one works
url = 'redis://:<access key>@<my server>.redis.cache.windows.net:6379/0'
# I want to use a url that specifies ssl like one of the following:
# url = 'redis://:<my key>=@<my server>.redis.cache.windows.net:6380/0'
# url = 'redis://:<my key>@<my server>.redis.cache.windows.net:6380/0?ssl=True'
app = Celery('tasks', broker=url)
@app.task
def add(x, y):
    return x + y
However, I would like Celery to use SSL and communicate with the Azure Redis Cache on port 6380. If I change the port to 6380, I get an "Error while reading from socket" error a few minutes after running the following command:
celery -A tasks worker --loglevel=INFO -Q "celery" -Ofair
Does anyone know how to configure this, on the Celery or Azure side, so that Celery can communicate with the Azure Redis Cache over SSL on the default SSL port, 6380?
I am using the latest version of Celery (4.0.2).
Note that code like the following works with no problem when connecting directly from a Linux client (on Azure) over SSL on port 6380 using Python's redis library:
import redis
redis.StrictRedis(host='<my host>.redis.cache.windows.net', port=6380, db=0, password='<my key>', ssl=True)

It's already possible using rediss:// instead of redis://.
url = 'rediss://:<access key>@<my server>.redis.cache.windows.net:6380/0'

For the broker, you should be able to set the broker_use_ssl configuration option.
For the backend, the option redis_backend_use_ssl was made available in the 4.1.0 release.
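A minimal sketch of what that might look like, assuming Celery 4.1+ so both options exist (the host and access key are the placeholders from the question):
import ssl
from celery import Celery

url = 'redis://:<access key>@<my server>.redis.cache.windows.net:6380/0'
app = Celery('tasks', broker=url, backend=url)

# TLS settings for the broker connection (Azure Redis listens for SSL on 6380)
app.conf.broker_use_ssl = {'ssl_cert_reqs': ssl.CERT_REQUIRED}
# TLS settings for the Redis result backend (option added in Celery 4.1.0)
app.conf.redis_backend_use_ssl = {'ssl_cert_reqs': ssl.CERT_REQUIRED}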
The ability to enable SSL via the URL isn't available yet: https://github.com/celery/celery/issues/2833
Also, note that official support for Windows was dropped in 4.0. However, you might be able to get it working by following the instructions at https://github.com/celery/celery/issues/4082

Related

Django and Celery error when using RabbitMQ in Centos "[Errno 111] Connection Refused"

I'm using Django and Celery with RabbitMQ as the message broker. While developing in Windows I installed RabbitMQ and configured Celery inside Django like this:
celery.py
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'main.settings')
app = Celery('DjangoExample')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
__init__.py
from .celery import app as celery_app
__all__ = ['celery_app']
When running Celery inside my development Windows machine everything works correctly and tasks are being executed as expected.
Now I'm trying to deploy the app inside a Centos7 machine.
I installed RabbitMQ and I tried running Celery with the following command:
celery -A main worker -l INFO
But I get a "connection refused" error:
[2021-02-24 17:39:58,221: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
I don't have any special configuration for Celery inside my settings.py since it was working fine in Windows without it.
You can find the settings.py here:
https://github.com/adrenaline681/DjangoExample/blob/master/main/settings.py
Here is a screenshot of the celery error:
And here is the status of my RabbitMQ Server that shows that it's currently installed and running.
Here is an image of the RabbitMQ Management Plugin web interface, where you can see the port used for amqp:
Does anyone know why this is happening?
How can I get Celery to work correctly with RabbitMQ inside of Centos7?
Many thanks in advance!
I had a similar problem and it was SELinux blocking access between those two processes, I mean RabbitMQ and Python. To check my guess, please disable SELinux temporarily and check whether it works then. If it does, you have to configure SELinux to grant Python access to connect to RabbitMQ. To disable SELinux temporarily you can run in a shell:
# setenforce 0
See more about disabling SELinux either temporarily or permanently, and about SELinux in general. But actually I would not recommend disabling SELinux; it is better to configure it to grant the access that is needed.
You said you are developing on Windows but you showed some outputs that look like Linux. Are you using Docker or some other container? I don't know if you are, but you can likely adapt my advice to your setup.
If you are using Docker, you'll need to have Django's settings.py point at the Docker container running RabbitMQ instead of 127.0.0.1. The URL you provided for your settings.py file doesn't work, so I cannot see what you have in there.
Here are my CELERY_... settings:
# Celery settings
CELERY_BROKER_URL = 'amqp://user:TheUserName@rabbitmq'
CELERY_RESULT_BACKEND = 'redis://redis:6379/'
I set the host to the container_name of the service hosting it, because my docker-compose file has these:
services:
  rabbitmq:
    ...
    container_name: rabbitmq
  redis:
    ...
    container_name: redis
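If you need the same settings.py to work both on your Windows machine and inside Docker, one option is to read the URLs from environment variables; a sketch with hypothetical variable names (not from the original post), defaulting to the docker-compose service names above:
# settings.py (sketch): hypothetical env var names, defaults match the container names
import os

CELERY_BROKER_URL = os.environ.get('CELERY_BROKER_URL', 'amqp://user:TheUserName@rabbitmq')
CELERY_RESULT_BACKEND = os.environ.get('CELERY_RESULT_BACKEND', 'redis://redis:6379/')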

Can't get multiple uwsgi workers to work with flask-socketio

In development, flask-socketio (4.1.0) with uwsgi is working nicely with just 1 worker and standard initialization.
Now I'm preparing for production and want to make it work with multiple workers.
I've done the following:
Added redis message_queue in init_app:
socketio = SocketIO()
socketio.init_app(app, async_mode='gevent_uwsgi', message_queue=app.config['SOCKETIO_MESSAGE_QUEUE'])
(Sidenote: we are using redis in the app itself as well)
gevent monkey patching at top of the file that we run with uwsgi
from gevent import monkey
monkey.patch_all()
run uwsgi with:
uwsgi --http 0.0.0.0:63000 --gevent 1000 --http-websockets --master --wsgi-file rest.py --callable application --py-autoreload 1 --gevent-monkey-patch --workers 4 --threads 1
This doesn't seem to work. The connection starts rapidly alternating between a connection and 400 Bad Request responses. I suspect these correspond to the 'Invalid session ...' errors I see when I enable SocketIO logging.
Initially it was not using redis at all;
redis-cli > PUBSUB CHANNELS *
returned an empty result even with workers=1.
It seemed the following (taken from another SO answer) fixed that:
# https://stackoverflow.com/a/19117266/492148
import gevent
import redis.connection
redis.connection.socket = gevent.socket
After doing so I got a "flask-socketio" pubsub channel with updating data.
But after returning to multiple workers, the issue came back. Given that changing the redis socket did seem to move things in the right direction, I feel the monkey patching isn't working properly yet, but the code I used matches all the examples I can find and is at the very top of the file that uwsgi loads.
You can run as many workers as you like, but only if you run each worker as a standalone single-worker uwsgi process. Once you have all those workers running each on its own port, you can put nginx in front to load balance using sticky sessions. And of course you also need the message queue for the workers to use when coordinating broadcasts.
Eventually I found https://github.com/miguelgrinberg/Flask-SocketIO/issues/535,
so it seems you can't have multiple workers with uwsgi either, as it needs sticky sessions. The documentation mentions that for gunicorn, but I did not interpret that to extend to uwsgi.
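As a side note, the same Redis message queue also lets an external process (a script or Celery task, for example) push events to clients connected to any of the workers; a rough sketch, assuming the queue URL used in the app config:
from flask_socketio import SocketIO

# no app is passed here: this instance only writes to the shared message queue
external_sio = SocketIO(message_queue='redis://localhost:6379/0')
external_sio.emit('status', {'msg': 'broadcast from outside the uwsgi workers'})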

Gevent/Gevent-websocket not being used by Flask-SocketIO

I am building a web interface/data API using Flask and Flask-SocketIO for websocket communication. I would like to start shifting to a more development-ready setup using Gevent/Gevent-websocket, Gunicorn, and eventually Nginx for load balancing. However, after installing Gevent and Gevent-websocket, I am still getting a warning message when starting the SocketIO server:
WebSocket transport not available. Install eventlet or gevent and gevent-websocket for improved performance.
According to the Flask-SocketIO docs,
When the application is in debug mode the Werkzeug development server is still used and configured properly inside socketio.run(). In production mode the eventlet web server is used if available, else the gevent web server is used. If eventlet and gevent are not installed, the Werkzeug development web server is used.
This implies that the use of Gevent should be automated behind the scenes as part of Flask-SocketIO. I checked my Python installs with pip list and confirmed that I have Gevent 1.3.4 and Gevent-websocket 0.10.1 installed. Here is the initialization code for the SocketIO server:
app.py
flaskApp = Flask(__name__)
flaskApp.config['SESSION_TYPE'] = 'filesystem'
Session(flaskApp)
socketio = SocketIO(flaskApp, async_mode='threading', manage_session=False)
def createApp():
    flaskApp.secret_key = "super secret"
    socketio.run(flaskApp, host='0.0.0.0', port=80)
start.py
app.register_blueprint(monitor.blueprint)
...
createApp()
Why is Flask-SocketIO not detecting my Gevent install?
The portion of the docs that you quoted refers to the async_mode argument, and how it is set by default. You are setting async_mode='threading', so that disables the automatic selection of an async mode. Remove the argument, and then you'll get eventlet or gevent, depending on what you have installed.
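With that change, the initialization from app.py would look something like this (a sketch keeping the other arguments from the question):
# no async_mode argument: Flask-SocketIO picks eventlet or gevent automatically if installed
socketio = SocketIO(flaskApp, manage_session=False)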

Python websocket support on Azure web appservice?

Does Azure App Service have native websocket support for Python like it does for node.js/.NET?
I'm assuming as of right now, the answer is no, and you'd need to use a VM to achieve this?
(fyi. there's a similar question here but it's been deleted.)
The answer is yes, there is Python websocket support on Azure Web Apps. The necessary steps and guidelines are below.
First of all, you need to set the WEB SOCKETS option under Application settings to ON in the Azure portal, as the blog post explains; this applies whatever language you use.
Azure IIS supports Python web apps via WSGI; you can refer to the tutorial and follow its content to build and configure your Python web app with WSGI.
There is a similar SO thread, Combining websockets and WSGI in a python app, that answers whether websockets are feasible with WSGI in Python. As references, there are some packages that support this combination, such as Eventlet and dwebsocket for Django; you can search for the words websocket & wsgi to find more.
Hope it helps.
When using Python, Azure App Service on Linux uses Gunicorn by default as the web server for all incoming requests. WebSocket connections start with a special HTTP GET request containing an "Upgrade" header, which must be handled accordingly by the server. There are a few WSGI-compatible WebSocket libraries out there; for this example I'm using geventwebsocket.
First, create a new Azure App Service Plan + Service:
az appservice plan create -g <ResourceGroupName> -n MyAppPlan --is-linux --number-of-workers 4 --sku S1
az webapp create -g <ResourceGroupName> -p MyAppPlan -n <AppServiceName> --runtime "PYTHON|3.7"
Save the following sample to server.py:
from gevent import pywsgi
from geventwebsocket.handler import WebSocketHandler
def websocket_app(environ, start_response):
    if environ["PATH_INFO"] == '/echo':
        ws = environ["wsgi.websocket"]
        while not ws.closed:
            message = ws.receive()
            ws.send(message)
Create a file requirements.txt with the following content
gevent
gevent-websocket
Create a file .deployment with the following content
[config]
SCM_DO_BUILD_DURING_DEPLOYMENT = true
Put all three files in a zip file called upload.zip and deploy it to Azure:
az webapp deployment source config-zip -g <ResourceGroupName> -n <AppServiceName> --src upload.zip
Set the startup command; here we tell Gunicorn to use a GeventWebSocketWorker for requests and to serve the application in the file server.py, function name websocket_app.
az webapp config set -g <ResourceGroupName> -n <AppServiceName> --startup-file "gunicorn --bind=0.0.0.0 -k geventwebsocket.gunicorn.workers.GeventWebSocketWorker server:websocket_app"
Enable WebSockets in Azure
az webapp config set -g <ResourceGroupName> -n <AppServiceName> --web-sockets-enabled true
After startup, you should now be able to send requests to the server and get an echo response (assuming the Python websockets package is installed - pip install websockets)
python -m websockets ws://<AppServiceName>.azurewebsites.net/echo
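Alternatively, a small test client using the same websockets package (asyncio API, websockets 8+ assumed; the host is the placeholder from the commands above):
import asyncio
import websockets

async def main():
    uri = "ws://<AppServiceName>.azurewebsites.net/echo"
    async with websockets.connect(uri) as ws:
        await ws.send("hello")
        print(await ws.recv())  # the /echo endpoint should send "hello" back

asyncio.run(main())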

Using python-memcached and the Pyramid Framework Pserve Server

My web application uses the Pyramid framework and runs on a Debian Linux system. I'm adding python-memcached to the application but cannot get objects to be stored and retrieved. I get a null value when I retrieve an object from memcached using the key I set it with. The testing/debugging server I am using is the Pyramid framework's pserve server.
import memcache
mc = memcache.Client(['127.0.0.1:6543'], debug=0)
mc.set('key1', 'value1', 10)
val = mc.get('key1')
The returned val is None.
The command I use to run the application is:
$ pserve development.ini --reload
I doubt your memcache server is being run on port 6543 -- assuming you're using the default pyramid config file, your development server is running on port 6543, your memcache server is probably on port 11211. Try running the memcache server and then set
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
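With memcached actually listening on 11211, the round trip from the question should then return the stored value; a quick sanity check (assuming a local memcached daemon is running):
import memcache

mc = memcache.Client(['127.0.0.1:11211'], debug=0)
mc.set('key1', 'value1', 10)   # 10-second expiry, as in the question
print(mc.get('key1'))          # should print 'value1' instead of None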
