Gunicorn access to other workers' memory - python

Currently I've got a trading bot application with a UI to start and stop running bots. Behind it there is also a bot manager, which keeps track of all running bots and can start and stop them. Each bot is started in its own thread, and the manager must have access to that thread's memory (in order to stop it when necessary). If I specify more than one worker, then depending on which worker a request lands on, I have no access to the thread I need.
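To illustrate (a simplified sketch; run_bot stands in for the actual trading loop, the real code differs):

import threading
import time

def run_bot(bot_id, stop_event):
    # stand-in for the real trading loop
    while not stop_event.is_set():
        time.sleep(1)

class BotManager:
    def __init__(self):
        self.bots = {}  # bot_id -> (thread, stop_event)

    def start(self, bot_id):
        stop_event = threading.Event()
        thread = threading.Thread(target=run_bot, args=(bot_id, stop_event))
        thread.start()
        self.bots[bot_id] = (thread, stop_event)

    def stop(self, bot_id):
        thread, stop_event = self.bots.pop(bot_id)
        stop_event.set()  # the bot loop sees the flag and exits
        thread.join()

# the manager lives inside one worker process; with --workers 2+,
# a request may land on a worker that doesn't hold the thread it needs
manager = BotManager()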
At the moment, I've got this gunicorn setup in Docker:
gunicorn app:application --worker-tmp-dir /dev/shm --bind 0.0.0.0:8000 --timeout 600 --workers 1 --threads 4
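The same settings can also be written as a gunicorn config file (equivalent options per gunicorn's documentation; started with gunicorn -c gunicorn.conf.py app:application):

# gunicorn.conf.py -- same settings as the command line above
bind = "0.0.0.0:8000"
worker_tmp_dir = "/dev/shm"
timeout = 600  # workers silent for longer than this are killed and restarted
workers = 1    # one process, so all bot threads live in the same memory
threads = 4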
The problem:
Yesterday one of the bots stopped because gunicorn apparently ran out of memory and restarted the worker, killing the running bot in the process. (Can't have that.) These bots should be able to run for months if needed. Is there a way to fix this, or to tell gunicorn to never stop workers or processes? Or maybe I should use a different Python server like waitress or uWSGI?

Related

How to make my selenium script run forever on heroku

I have created a selenium bot that posts every 20 minutes on Instagram.
I deployed my project to Heroku, but I don't know how to make it run forever.
I tried heroku run python mycode.py in the command prompt, but the program stops working when I close the command prompt.
heroku run is for ad hoc interactive stuff.
For a long-running background process you should define a worker process in your Procfile:
worker: python mycode.py
Commit that change and redeploy. Then scale up a dyno to run it:
heroku ps:scale worker=1
This will either consume free dyno hours or, if you are using paid dynos, incur costs.
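For this to work, mycode.py itself needs to loop forever so the dyno stays busy. A minimal sketch (post_to_instagram stands in for your Selenium code):

import time

def post_to_instagram():
    ...  # stand-in for the actual Selenium posting logic

while True:
    post_to_instagram()
    time.sleep(20 * 60)  # wait 20 minutes between posts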

One time Python script on Heroku

I would like to run a Python script on Heroku, but only once, stopping at the end of the script.
Right now my script runs endlessly: when it finishes, it restarts from the beginning.
How can I stop it at the end of the script?
Right now my Procfile looks like the following:
web: python ValueAppScript.py
worker: python ValueAppScript.py
Thank you
First of all, you probably don't want to declare the same command as both a web and a worker. If your script listens for HTTP requests it should be a web process, otherwise a worker makes more sense:
worker: python ValueAppScript.py
Heroku restarts a worker dyno whenever its process exits, which is why your script appears to start over from the beginning. Since you don't want your worker running all the time, scale it down to zero dynos:
heroku ps:scale worker=0
If you wish to run it once interactively, you can use heroku run:
heroku run python ValueAppScript.py
If you want it to run on a schedule, e.g. once per day, you can use the Heroku Scheduler. Since you have defined this as a worker process you should be able to just use worker as the command.

Gunicorn + Gevent + kafka python + Flask : Consumer STOPPING after idle time

We have a Python Flask app that uses a Kafka consumer; the Flask app runs through Gunicorn with a gevent worker.
We spawn 3 threads from the same script with different parameters, which creates 3 Kafka consumers; we don't run the same script 3 times.
Once a consumer has started and been idle for a few minutes, it exits the poll loop.
After restarting the consumer via the Flask API (a curl command) we get some records from Kafka, but after some time the consumer goes idle again and cannot resume.
Traffic in Kafka is low (it is not continuous); records arrive hours apart.
We have dockerized it and it runs in a container.
We have to run the curl command manually (3 times for 3 threads) to start the service for each consumer.
Running the script with nohup python3 script.py (i.e. without Gunicorn) works perfectly and continuously.
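For reference, each thread's consume loop is essentially this (simplified; the topic, broker, and handler are placeholders):

from kafka import KafkaConsumer  # kafka-python

def handle(message):
    ...  # placeholder for the real record processing

consumer = KafkaConsumer(
    "some-topic",                     # placeholder topic
    bootstrap_servers="broker:9092",  # placeholder broker
    group_id="some-group",
)

while True:
    # poll() returns an empty dict when nothing arrives within the
    # timeout; the loop should keep polling through the idle hours
    records = consumer.poll(timeout_ms=1000)
    for tp, messages in records.items():
        for message in messages:
            handle(message)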
Any thoughts on this?

Detect gunicorn worker restart

We have a Django application served through gunicorn sync workers. A Tornado server runs in a thread from the Django application itself (let's not argue about the architecture, it's legacy code) and is tightly coupled to the worker.
It binds itself to a free port whose state is stored in a db. Now every time a worker restarts I need to start Tornado again, but since the port is still marked as used, it won't start.
I need to somehow detect that the worker went down and mark the port as available again. However, I am not able to detect that.
import signal
def stop_handler(signum, frame): ...  # stand-in: mark the port free
signal.signal(signal.SIGTERM, stop_handler)
signal.signal(signal.SIGINT, stop_handler)
The above are only triggered when I kill the worker manually, not when gunicorn restarts the worker itself.
How can I detect this?
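Update: gunicorn's config file supports server hooks that fire on worker lifecycle events, which looks more reliable than in-process signal handlers. A sketch of what I'm trying (release_port stands in for the db update):

# gunicorn.conf.py

def release_port(pid):
    ...  # stand-in: mark the port for this worker as free in the db

def worker_exit(server, worker):
    # called in the worker process just after it exits
    release_port(worker.pid)

def child_exit(server, worker):
    # called in the master process after a worker dies, so it fires even
    # when the worker is killed hard and never runs its own cleanup
    release_port(worker.pid)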

Celery beat sometimes stops working

I'm using the latest stable Celery (4) with RabbitMQ in my Django project.
RabbitMQ is running on a separate server within the local network. Beat periodically just stops sending tasks to the worker, without any errors, and only restarting it resolves the issue.
There are no exceptions in the worker (I checked the logs, and I'm also using Sentry to catch exceptions). It just stops sending tasks.
Service config:
[Unit]
Description=*** Celery Beat
After=network.target
[Service]
User=***
Group=***
WorkingDirectory=/opt/***/web/
Environment="PATH=/opt/***/bin"
ExecStart=/opt/***/bin/celery -A *** beat --max-interval 30
[Install]
WantedBy=multi-user.target
Is it possible to fix this? Or are there any good alternatives? (Cron doesn't seem like the best solution.)
Your description sounds a lot like this open bug: https://github.com/celery/celery/issues/3409
There are a lot of details there, but the high-level description is that if the connection to RabbitMQ is lost, beat is unable to regain it.
Unfortunately, I can't see that anyone has definitively solved this issue.
You could start debugging it by raising the log level:
ExecStart=/opt/***/bin/celery -A *** beat --loglevel DEBUG --max-interval 30
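Until the underlying bug is fixed, a blunt workaround (my suggestion, not something from the bug thread) is to let systemd restart beat automatically, including a periodic forced restart in case beat hangs without exiting:

[Service]
User=***
Group=***
WorkingDirectory=/opt/***/web/
Environment="PATH=/opt/***/bin"
ExecStart=/opt/***/bin/celery -A *** beat --max-interval 30
Restart=always
RestartSec=30
# recycle beat every 6 hours even if it is merely hung, not crashed
RuntimeMaxSec=6h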
