While running the status command I get the following error:
I am using RabbitMQ as the messaging broker.
I am following this blog.
sudo /etc/init.d/celeryd status
Error: No nodes replied within time constraint
How can I debug this error?
I have also checked this question. The answer there did not help.
django/celery - celery status: Error: No nodes replied within time constraint
Edit:
After checking the logs of celery beat, I found the following error:
celerybeat raised exception <class 'gdbm.error'>: error(13, 'Permission denied')
Perhaps this is caused by celery not having write permissions for the celerybeat-schedule file. The docs you linked to show celery configured to use /var/run/celerybeat-schedule as the celery beat schedule file.
Does your process have write permissions to that file? If that directory is owned by root (as it should be) and your process is running as anything other than the root user, that could cause the permission denied errors.
Check that your permissions are correct and then try deleting that file then restarting everything.
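A minimal sketch of the permission check, assuming the default schedule path from the linked docs. A throwaway temp file stands in for /var/run/celerybeat-schedule here so the commands are safe to experiment with; substitute the real path on your box.

```shell
# Simulate the schedule file and inspect its ownership and mode.
schedule=$(mktemp)
chmod 400 "$schedule"                    # simulate a file the worker cannot write
stat -c 'owner=%U mode=%a' "$schedule"   # who owns it, and with what permissions?
chmod 600 "$schedule"                    # grant the owner write access again
stat -c 'mode=%a' "$schedule"
rm -f "$schedule"
```

On the real file you would typically `sudo chown` it to the user celery beat runs as, or simply delete it and let celery beat recreate it on the next start.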
Use the following command to find the problem:
C_FAKEFORK=1 sh -x /etc/init.d/celeryd start
This usually happens because there are problems in your source project (permission issues, syntax errors, etc.).
As mentioned in the Celery docs:
If the worker starts with “OK” but exits almost immediately afterwards
and there is nothing in the log file, then there is probably an error
but as the daemons standard outputs are already closed you’ll not be
able to see them anywhere. For this situation you can use the
C_FAKEFORK environment variable to skip the daemonization step
Good Luck
Source: Celery Docs
I'm having the same problem.
Restarting rabbitmq fixed it:
sudo systemctl restart rabbitmq-server
and the strange thing is that I needed to wait at least 100 seconds.
In my case, I think there was a disk problem.
I'm getting the following Airflow issue:
When I run DAGs that have multiple tasks in them, Airflow randomly sets some of the tasks to the failed state and also doesn't show any logs on the UI. I went to my running worker container and saw that the log files for those failed tasks were also not created.
Going to Celery Flower, I found these logs on failed tasks:
airflow.exceptions.AirflowException: Celery command failed on host
How to solve this?
My environment is:
airflow:2.3.1
Docker compose
Celery Executor
Worker, webserver, scheduler and triggerer in different containers
Docker compose hosted on Ubuntu
I also saw this https://stackoverflow.com/a/69201032/11949273 answer that might be related.
Anyone with these same issues?
Edit:
On my EC2 instance I got more vCPUs and fine-tuned the airflow/celery worker parameters, and that solved it. It was probably an issue with a lack of CPU, or something else along those lines.
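A rough sanity check along the lines of the fix above (a sketch, not official Airflow guidance): compare the vCPUs the worker container actually sees with the Celery concurrency it is configured for.

```shell
# How many vCPUs does this container really get?
cpus=$(nproc)
echo "visible vCPUs: $cpus"
# worker_concurrency is set in airflow.cfg or via the
# AIRFLOW__CELERY__WORKER_CONCURRENCY environment variable; if it greatly
# exceeds the vCPU count, tasks can be killed before they write a log file.
echo "configured concurrency: ${AIRFLOW__CELERY__WORKER_CONCURRENCY:-unset}"
```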
I faced a similar issue. In my case, Inspect -> Console showed an error with replaceAll in an old browser (Chrome 83.x). Chrome 98.x does not have this issue.
I am getting this error:
An error occurred initializing the application server: Failed to locate pgAdmin4.py, terminating server thread.
As it fails, it prompts to adjust the Python and application paths. I read an answer on Stack Overflow where the person said he deleted the path and it worked for him, so I did the same, but it still gave me the same error, and I don't see the prompt again.
So I went to the official pgAdmin site, only to see that if it fails I must enter the Python and application paths. How can I configure the paths for pgAdmin? I am using Fedora 27.
Try to just delete the config file. You may have an old one from a previous install.
rm ~/.config/pgadmin/pgadmin4.conf
Probably your first error was actually
An error occurred initialising the application server:
Failed to launch the application server, server thread exiting.
Most likely you are missing some dependency like python3-flask-babelex.
e.g., on Fedora install:
sudo dnf install python3-flask-babelex
You see the following error (the one you mentioned) when you have a misconfigured user config file, which was created after you edited the default values at the prompt:
An error occurred initializing the application server:
Failed to locate pgAdmin4.py, terminating server thread.
This error can be solved by either fixing your config or deleting it to use default values:
e.g., on Fedora, check that your user config is correct:
vi ~/.config/pgadmin/pgadmin4.conf
Primarily check that path variables in [General] section are ok.
# example
[General]
ApplicationPath=/usr/lib/python3.6/site-packages/pgadmin4-web/
PythonPath=/usr/lib/python3.6/site-packages:/usr/lib64/python3.6/site-packages
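A quick way to verify the [General] paths is to check that ApplicationPath really contains pgAdmin4.py. The path below matches the example config above and will differ between installs; adjust it to whatever your own config says.

```shell
# Does ApplicationPath actually point at the pgAdmin4 web app?
app_path=/usr/lib/python3.6/site-packages/pgadmin4-web
if [ -f "$app_path/pgAdmin4.py" ]; then
    echo "ApplicationPath looks correct"
else
    echo "pgAdmin4.py not found under $app_path -- fix ApplicationPath"
fi
```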
For me, the solution was to sudo dnf remove pgadmin4* then sudo find / -iname "*pgadmin4*" and delete any scraps lying around, then sudo dnf install pgadmin4* - everything is now working fine.
I started with fresh install of readthedocs.org:
http://read-the-docs.readthedocs.org/en/latest/install.html
Next I have added SocialApp GitHub in admin panel and then connected my superuser to that GitHub account.
Then I went to GitHub and forked the readthedocs repository.
https://github.com/Drachenfels/Test-Fork
Next I clicked import projects. The task never concludes, but when I refresh the page, the repos are there.
I picked forked repository Test-Fork and I clicked build.
The task never finishes; when I refresh or start another one, they are stuck in the state "Triggered". There is no error, nothing.
What is more, I am using the default configuration of readthedocs.
I have running in the background following processes:
./manage.py runserver 9000
./manage.py celerybeat --verbosity=3
./manage.py celeryd -E
./manage.py celerycam
redis-server
Do I miss anything at this point?
It looks to me like, despite celery being active and running, tasks are never initiated, killed, or errored.
The problem was not with celery; tasks were running eagerly (which I suspected but was not sure of), so as soon as they were triggered they were executed.
The problem was that the task responsible for building the documentation (update_docs) was failing silently. Thus the "Triggered" state never concluded and the build was never initiated. It happens that this error was my own fault: I ran the django server on a different port than the one in the default settings. An exception was thrown, it was never logged, the state of the task was never updated, and readthedocs was left in limbo. I hope this will help some lost souls out there.
Is there any way to capture a python error and restart the gunicorn worker that it came from?
I get intermittent InterfaceError: connection already closed errors that essentially stop a worker from processing further requests that require the database. Restarting the worker manually (or via newrelic http hooks) gets rid of the issue.
Stack is heroku + newrelic.
Obviously there's an underlying issue with the code somewhere, but whilst we try to find it, it'd be good to know that the workers are re-starting reliably.
Thanks
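While hunting for the root cause, gunicorn's built-in worker recycling can serve as a stopgap: the --max-requests option restarts each worker after a bounded number of requests, so a wedged worker cannot linger indefinitely. A sketch of a Heroku Procfile entry, where myapp.wsgi is a placeholder for your actual WSGI module:

```shell
# Procfile sketch: recycle each worker after ~500 requests;
# --max-requests-jitter staggers the restarts so workers don't all die at once.
web: gunicorn myapp.wsgi --max-requests 500 --max-requests-jitter 50
```

Note this recycles workers on a schedule rather than reacting to the specific InterfaceError: it bounds how long a broken worker survives, but it does not detect the failure itself.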
I am trying to install an init.d script to run celery for scheduling tasks. When I tried to start it with sudo /etc/init.d/celeryd start, it throws the error "User does not exist: 'celery'".
my celery configuration file (/etc/default/celeryd) contains these:
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
I know that these are wrong; that is why it throws the error.
The documentation just says this:
CELERYD_USER
User to run celeryd as. Default is current user.
Nothing more about it.
Any help will be appreciated.
I am adding a proper answer in order to be clearly visible:
Workers are Unix processes that run the various celery tasks. As you can see in the documentation, CELERYD_USER and CELERYD_GROUP determine the user and group these workers will run as in your Unix environment.
So, what happened initially in your case is that celery tried to start the worker as a user named "celery", which did not exist on your system. When you commented out these two options, celery started the workers as the user that issued the command sudo /etc/init.d/celeryd start, which in this case is the root (administrator) user (the default is the current user).
However, it is recommended to run the workers as an unprivileged user and not as root, for obvious reasons. So I recommend actually adding the celery user and group, using the small tutorial found here http://www.cyberciti.biz/faq/unix-create-user-account/, and uncommenting again the
CELERYD_USER="celery"
CELERYD_GROUP="celery"
options.
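Following that recommendation, a minimal sketch of creating the unprivileged user and group (commands assume a typical Linux system with shadow-utils and require root):

```shell
# Create a dedicated group, then a system user with no home directory and
# no login shell, matching the names used in /etc/default/celeryd.
sudo groupadd celery
sudo useradd --system --no-create-home -g celery -s /bin/false celery
```

After that, restore CELERYD_USER="celery" and CELERYD_GROUP="celery" in /etc/default/celeryd and run sudo /etc/init.d/celeryd start again.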