I am experiencing an issue between Celery 5.2.7 and RabbitMQ 3.11 (both running through Docker). My program periodically sends tasks to Celery, as shown below:
import time
from tasks import getElements

def collectElements(elements):
    for el in elements:
        getElements.apply_async(queue="queue_getelements", kwargs={"elDict": el.__dict__})

collectElements(elements)
time.sleep(600)
while True:
    collectElements(elements)
    time.sleep(120)
Strangely, the queue queue_getelements freezes after the first launch of collectElements(elements) (after the 600-second sleep), and the following message appears after 30 minutes (the default consumer_timeout):
[2022-10-05 02:45:32,706: CRITICAL/MainProcess] Unrecoverable error: PreconditionFailed(406, 'PRECONDITION_FAILED - delivery acknowledgement on channel 1 timed out. Timeout value used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more', (0, 0), '')
I have tried to increase the default consumer_timeout in the RabbitMQ configuration (as suggested as a solution here and here), but the freeze still happens before the first loop of my program. Celery seems to receive the tasks in the queue only the first time, and freezes afterwards. If I restart the worker after stopping it, it again receives the tasks that are waiting on RabbitMQ:
celery -A tasks worker --loglevel=info -E -Q queue_getelements -c 4
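For reference, the consumer_timeout change I attempted was along these lines (a sketch: it assumes RabbitMQ 3.11 reads a rabbitmq.conf mounted into the container, and the value is given in milliseconds):

# /etc/rabbitmq/rabbitmq.conf, mounted into the RabbitMQ container
# consumer_timeout is in milliseconds; this raises it from 30 minutes to 2 hours
consumer_timeout = 7200000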
Has anyone experienced this issue before? Any help would be appreciated, thank you in advance!
I am using Django Channels for a real-time progress bar. With the help of this progress bar the client gets live feedback on a simulation. This simulation can take longer than 5 minutes, depending on the data size. Now to the problem: the client can start the simulation successfully, but during this time no other page can be loaded. Furthermore, I get the following error message:
Application instance <Task pending coro=<StaticFilesWrapper.__call__() running at /.../python3.7/site-packages/channels/staticfiles.py:44> wait_for=<Future pending cb=[_chain_future.<locals>._call_check_cancel() at /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/futures.py:348, <TaskWakeupMethWrapper object at 0x123aa88e8>()]>> for connection <WebRequest at 0x123ab7e10 method=GET uri=/2/ clientproto=HTTP/1.1> took too long to shut down and was killed.
Only after the simulation is finished can further pages be loaded.
There are already posts on this topic, but they do not contain any solutions that work for me, for example the following link:
https://github.com/django/channels/issues/1119
A suggestion there is to downgrade the Channels version. However, that led me to other errors, and it also has to work with the newer versions.
The code of the consumer, routing and asgi is implemented exactly as described in the tutorial: https://channels.readthedocs.io/en/stable/tutorial/index.html
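To illustrate the failure mode I suspect, here is a minimal hypothetical sketch (not my actual code) of a consumer that runs the long simulation inline, which would block Daphne's event loop:

import json
import time
from channels.generic.websocket import AsyncWebsocketConsumer

def run_simulation(params):
    # placeholder for the real computation, which can take > 5 minutes
    time.sleep(300)
    return {"status": "done"}

class SimulationConsumer(AsyncWebsocketConsumer):
    async def receive(self, text_data=None, bytes_data=None):
        params = json.loads(text_data)
        # blocking call directly on the event loop: while it runs,
        # Daphne cannot serve any other request
        result = run_simulation(params)
        await self.send(text_data=json.dumps(result))

If this is the cause, offloading the blocking call (for example with asgiref's sync_to_async(run_simulation, thread_sensitive=False), or an external task queue) should keep the loop free.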
requirements.txt:
Django==3.1.2
channels==3.0.3
channels-redis==3.3.1
asgiref==3.2.10
daphne==3.0.2
I am grateful for any hint or tip.
Best regards,
Dennis
I have a Flask app that uses WSGI. For a few tasks I'm planning to use Celery with RabbitMQ. But as the title says, I am facing an issue where the Celery tasks run fine for a few minutes and then, after a long period of inactivity, the worker just dies off.
Celery config:
CELERY_BROKER_URL = 'amqp://guest:guest@localhost:5672//'
BROKER_HEARTBEAT = 10
BROKER_HEARTBEAT_CHECKRATE = 2.0
BROKER_POOL_LIMIT = None
From this question, I added BROKER_HEARTBEAT and BROKER_HEARTBEAT_CHECKRATE.
I run the worker inside the venv with celery -A acmeapp.celery worker & to run it in the background. When checking the status during the first few minutes, it shows that one node is online and gives an OK response. But after a few hours of the app being idle, when I check the Celery status, it shows Error: No nodes replied within time constraint..
I am new to Celery and I don't know what to do now.
Your Celery worker might be trying to reconnect to the broker until it reaches the retry limit. If that is the case, setting these options in your config file will fix that problem:
BROKER_CONNECTION_RETRY = True
BROKER_CONNECTION_MAX_RETRIES = 0
The first line makes it retry whenever the connection fails, and the second one disables the retry limit (0 means retry forever).
If that alone does not solve it, you can also try a higher connection timeout (specified in seconds) for your app, using this option:
BROKER_CONNECTION_TIMEOUT = 120
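For completeness, combining these with the heartbeat settings from the question, the broker section of the config might look like this (a sketch reusing the question's old-style uppercase setting names):

CELERY_BROKER_URL = 'amqp://guest:guest@localhost:5672//'
BROKER_HEARTBEAT = 10              # send a heartbeat every 10 seconds
BROKER_HEARTBEAT_CHECKRATE = 2.0   # check heartbeats at twice that rate
BROKER_POOL_LIMIT = None           # disable the broker connection pool
BROKER_CONNECTION_RETRY = True     # retry whenever the connection fails
BROKER_CONNECTION_MAX_RETRIES = 0  # 0 = no retry limit
BROKER_CONNECTION_TIMEOUT = 120    # connection timeout, in seconds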
Hope it helps!
I need a function that starts several beanstalk workers before it starts recording videos with different cameras. All of this runs through beanstalk. Since I need the workers running before the video recording starts, I tried launching them with a subprocess, but that does not work. The most curious thing is that if I run the same subprocess alone, from a separate Python script outside of this function (in the shell), it works! This is my code (the one which is not working):
import os
import subprocess

os.chdir(path_to_the_manage_py)  # the directory that contains manage.py
subprocess.call("python manage.py beanstalk_worker -w 4", shell=True)

phase = get_object_or_404(Phase, pk=int(phase_id))
cameras = Video.objects.filter(phase=phase)

###########################################################################
## BEANSTALK
###########################################################################
num_workers = 4
time_to_run = 86400
[...]
for camera in cameras:
    arg = phase_id + ' ' + settings.PATH_ORIGIN_VIDEOS + ' ' + camera.name
    beanstalk_client.call('video.startvlcserver', arg=arg, ttr=time_to_run)
I want to start the workers from the code because it is annoying to have to start them by hand for every video recording I want to do.
Thanks in advance.
I am not quite sure that subprocess.call is what you are looking for. I believe the issue is that subprocess.call is synchronous: it spawns the new process, but then blocks until the command finishes, all within the context of the web request. Since a worker is meant to keep running, the call never returns. This ties up resources, and if the request times out or the user cancels, weird things could happen.
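If you do want to launch the workers from Python anyway, a non-blocking sketch (the project path is a placeholder) would use subprocess.Popen, which starts the process and returns immediately:

import subprocess

# Unlike subprocess.call, Popen does not wait for the command to finish:
# it starts the worker processes and returns right away.
worker = subprocess.Popen(
    ["python", "manage.py", "beanstalk_worker", "-w", "4"],
    cwd="/path/to/project",  # placeholder: the directory containing manage.py
)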
I have never used beanstalkd, but with celery (another job queue) the celeryd worker process is always running, waiting for jobs. This makes it easy to manage using supervisord. If you look at beanstalkd deployment guides, I wouldn't be surprised if they recommend doing the same thing. This should include starting your beanstalk workers outside of the context of a view.
From the command line
python manage.py beanstalk_worker -w 4
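If you want supervisord to keep the workers running, a minimal stanza might look like this (the program name and paths are placeholders):

[program:beanstalk_workers]
command=python manage.py beanstalk_worker -w 4
directory=/path/to/project
autostart=true
autorestart=true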
Once your beanstalkd workers are set up and running, you can send jobs to the queue from your view using an async beanstalk API call:
https://groups.google.com/forum/#!topic/django-users/Vyho8TFew2I
So I've just pushed my Twitter bot to Heroku and set it to run every hour on the half hour with the Heroku Scheduler add-on. However, for whatever reason, it's running every 10 minutes instead. Is this a bug in the scheduler? Here's an excerpt from my logs, from when the scheduler ran it successfully and then it tried to run again ten minutes later:
2013-01-30T19:30:20+00:00 heroku[scheduler.4875]: Starting process with command `python ff7ebooks.py`
2013-01-30T19:30:21+00:00 heroku[scheduler.4875]: State changed from starting to up
2013-01-30T19:30:24+00:00 heroku[scheduler.4875]: Process exited with status 0
2013-01-30T19:30:24+00:00 heroku[scheduler.4875]: State changed from up to complete
2013-01-30T19:34:34+00:00 heroku[web.1]: State changed from crashed to starting
2013-01-30T19:34:42+00:00 heroku[web.1]: Starting process with command `python ff7ebooks.py`
2013-01-30T19:34:44+00:00 heroku[web.1]: Process exited with status 0
2013-01-30T19:34:44+00:00 heroku[web.1]: State changed from starting to crashed
I can provide whatever info anyone needs to help me diagnose this issue. The [web.1] log messages repeat every couple of minutes. I don't want to spam my followers.
If anyone else has this issue, I figured it out. I enabled the scheduler and then scaled down to 0 dynos, so that a dyno is only allocated when the job is scheduled to run. Before that, Heroku was running my process continuously, and (my assumption is that) Twitter only let it connect to a socket every few minutes, which resulted in the sporadic tweeting.
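For reference, scaling the always-on process down can also be done from the CLI (assuming the process type is named web, as in the logs above):

heroku ps:scale web=0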
I would like to share the solution of someone who helped me with a one-off script (i.e. a Python script that starts, runs to completion, and exits, rather than keeping running).
Any questions, let me know and I will help --> andreabalbo.com
Hi Andrea
I have also just created a random process-type in my Procfile:
tmp-process-type: command:test
I did not toggle on the process-type in the Heroku Dashboard. After installing the Advanced Scheduler, I created a trigger with command "tmp-process-type" that runs every minute. Looking at my logs I can see that every minute a process started with "command:test", confirming that the process-type in the Procfile is working. I then toggled on the process-type in the Heroku Dashboard. This showed up immediately in my logs:
Scaled to tmp-process-type@1:Free web@0:Free by user ...
This is because after toggling, Heroku will spin up a normal dyno that it will try to keep up. Since your script is a task that ends, the dyno dies and Heroku will automatically restart it, causing your task to be run multiple times.
In summary, the following steps should solve your problem:
1. Toggle your process-type off (but leave it in the Procfile)
2. Install advanced-scheduler
3. Create a trigger (recurring or one-off) with command "tmp-process-type"
4. Look at your logs to see if anything weird shows up
With kind regards, Oscar
I fixed this problem with only one action at the end: I set the number of workers to 0. The scheduler still runs "python ELO_voetbal.py" and automatically starts a dyno for it. So I did not use the Advanced Scheduler or place "tmp-process-type" anywhere.
I've gotten Celery tasks happening ok, using the default settings in the tutorials and rabbitmq running on Ubuntu. All is fine when I schedule tasks with no delay, but when I give them an eta, they get scheduled far in the future, as if my clock were off somewhere.
Here is some python code that is asking for tasks:
for index, to_address in enumerate(email_addresses):
    # schedule one email every two seconds
    delay = index * 2
    log.info("MessageUsersFormView.process_action() scheduling task, "
             "email to %s, countdown = %i" % (to_address, delay))
    tasks.send_email.apply_async(args=[to_address, subject, body],
                                 countdown=delay)
So the first one should go out immediately, and then every two seconds. Looking at my celery console, the first one happens immediately, and then the others are scheduled two seconds apart, but starting tomorrow:
[2012-03-09 17:32:40,988: INFO/MainProcess] Got task from broker: stabil.tasks.send_email[24fafc0b-071b-490b-a808-29d47bbee435]
[2012-03-09 17:32:40,989: INFO/MainProcess] Got task from broker: stabil.tasks.send_email[3eb6c3ea-2c84-4368-babe-8a2ac0093836] eta:[2012-03-10 01:32:42.971072-08:00]
[2012-03-09 17:32:40,991: INFO/MainProcess] Got task from broker: stabil.tasks.send_email[a53110d6-b704-4d9c-904a-8d74b99a33af] eta:[2012-03-10 01:32:44.971779-08:00]
[2012-03-09 17:32:40,992: INFO/MainProcess] Got task from broker: stabil.tasks.send_email[2363329b-47e7-4edd-b38e-b09fed232003] eta:[2012-03-10 01:32:46.972422-08:00]
I'm totally new to both Celery and RabbitMQ so any tips on how to fix this or where to look for the cause would be great. This is on a VMWare virtual machine of Ubuntu, but I have the clock set correctly.
Thanks!
I think it is actually working as you expect. The time on the left (between the square brackets, before INFO/MainProcess) is presented in local time, but the eta is shown as UTC time. For instance:
Take the ETA time presented in the second line of your console output:
2012-03-10 01:32:42.971072-08:00
Subtract 8 hours (-08:00 is the timezone offset) and you get:
2012-03-09 17:32:42.971072
Which is just 2 seconds after the sent time:
2012-03-09 17:32:40,989
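A quick way to check that arithmetic in Python (values taken from the log lines above):

from datetime import datetime, timedelta

# the eta from the log, minus its -08:00 offset
eta = datetime(2012, 3, 10, 1, 32, 42, 971072)
print(eta - timedelta(hours=8))  # -> 2012-03-09 17:32:42.971072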
I hope that makes sense. Dealing with times often gives me a headache.