Trouble deploying Django application with Celery to Elastic Beanstalk

I have a working Django application deployed to Elastic Beanstalk. I am trying to add some asynchronous commands to it, so I am adding Celery. Currently I am running container commands through python.config in my .ebextensions.
I have added the command:
06startworker:
  command: "source /var/app/venv/*/bin/activate && celery -A project worker --loglevel INFO -B &"
to my python.config. When I add this command and try to deploy, my Elastic Beanstalk instance times out and the deployment fails.
I have confirmed that the connection to my Redis server is working and my application can connect to it. Checking cfn-init.log I see:
Command 01wsgipass succeeded
Test failed with code 1
...
Command 06startworker succeeded
So I think that adding the 06startworker command somehow interferes with my 01wsgipass command, which runs fine when I don't have the start-worker command.
For reference my wsgi command is:
01wsgipass:
  command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'
I'm at a bit of a loss as to how to troubleshoot from here; the logs I'm getting are not very helpful.
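A detail that may matter here (a sketch, not a verified fix): container commands run synchronously during deployment, and a backgrounded process that still holds the command's stdout/stderr can keep the deploy hook waiting until it times out. Fully detaching the worker with nohup and redirecting its output would look like this; the log path is an assumption:
06startworker:
  command: "source /var/app/venv/*/bin/activate && nohup celery -A project worker --loglevel INFO -B > /tmp/celery-worker.log 2>&1 &"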

Related

When I run the heroku ps:scale web=1 command I am getting this error. Can anyone help me with this?

When I run the heroku ps:scale web=1 command I am getting this error:
▸ Missing required flag:
▸ -a, --app APP app to run command against
▸ See more help with --help
Can anyone help me with this?
You have to specify the name of the app you intend to run the command against, even if you only have a single app.
When running commands from the CLI, it is advised to always specify the app name using
--app 'app-name'
on every command.
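For example, with a hypothetical app name:
heroku ps:scale web=1 --app my-app-name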
If you want to avoid doing that, you can point your heroku git remote at a specific app, using the command
heroku git:remote -a 'app_name'
That should save you from having to type the app name every time.
About scaling dynos, you can read the Heroku documentation on dyno scaling.

How can I run a one-off dyno inside another one-off dyno on Heroku using custom django-admin commands?

I need to create a one-off dyno in a Django application deployed on Heroku using custom django-admin commands. I want to use Heroku Scheduler to run the command heroku run python manage.py test_function2, which creates a one-off dyno running test_function2. From test_function2 I would then like to create more one-off dynos. Example code is below. My problem is with the line command = 'heroku run:detached myworker2': when I use it in test_function2 I get the error sh: 1: heroku: not found.
The Heroku documentation says "One-off dynos are created using heroku run." Does anyone have an idea how I can create a Heroku one-off dyno when I am already in one?
test_function2:
import os
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def handle(self, *args, **options):
        command = 'heroku run:detached myworker2'
        os.system(command)
Procfile:
web: sh -c 'gunicorn backend.wsgi --log-file -'
myworker2: python manage.py test_function2
myworker: python manage.py test_function
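Since the Heroku CLI is not installed inside dynos, one workaround (a sketch, not a confirmed solution) is to create the one-off dyno over the Heroku Platform API with an HTTP call instead of shelling out. The HEROKU_API_KEY and HEROKU_APP_NAME config vars are assumptions you would set yourself, and the requests library is assumed to be available:
# Sketch: create a one-off dyno via the Heroku Platform API instead of the CLI.
# Assumes HEROKU_API_KEY and HEROKU_APP_NAME are set as config vars on the app.
import os
import requests
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def handle(self, *args, **options):
        resp = requests.post(
            f"https://api.heroku.com/apps/{os.environ['HEROKU_APP_NAME']}/dynos",
            headers={
                "Accept": "application/vnd.heroku+json; version=3",
                "Authorization": f"Bearer {os.environ['HEROKU_API_KEY']}",
            },
            # "attach": False behaves like heroku run:detached
            json={"command": "python manage.py test_function", "attach": False},
        )
        resp.raise_for_status()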

Debug a Docker container that quits immediately?

I am following the official docker tutorial:
https://docs.docker.com/get-started/part2/#build-the-app
I can successfully build the Docker image (after creating the Dockerfile, app.py and requirements.txt) and see it:
docker build -t friendlyhello .
docker ps -a
However, it quits immediately when running
docker run -p 4000:80 friendlyhello
I cannot find a way to determine why it did not work:
1) docker ps -a says the container exited.
2) docker logs "container name" returns no log information.
3) I can attach a shell to it:
docker run -p 4000:80 friendlyhello /bin/sh
but I did not manage to find (grep) any logging information there (in /var/log).
4) Running in the foreground with -t and detached with -d did not help.
What else could I do?
Note: a docker exec on an exited (stopped) container should not be possible (see moby issue 30361)
docker logs and docker inspect on a stopped container should still be possible, but docker exec indeed not.
You should see
Error response from daemon: Container a21... is not running
So a docker inspect of the image you are running should reveal the entrypoint and cmd, as in this answer.
The normal behavior is the one described in this answer.
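For example, docker inspect's --format flag takes a Go template, so the image's entrypoint/cmd and the stopped container's exit status can be printed directly:
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' friendlyhello
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' <container-id>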
I had this exact same issue...and it drove me nuts. I am using Docker Toolbox as I am running Windows 7. I ran docker events & prior to my docker run -p 4000:80 friendlyhello. It showed me nothing more than that the container starts and exits pretty much straight away. docker logs <container id> showed nothing.
I was just about to give up when I came across a troubleshooting page with the suggestion to remove the docker machine and re-create it. I know that might sound like a sledgehammer type solution, but the examples seemed to show that the re-create downloads the latest release. I followed the steps shown and it worked! If it helps anyone, the steps I ran were:
docker-machine stop default
docker-machine rm default
docker-machine create --driver virtualbox default
Re-creating the example files, building the image and then running it now gives me:
$ docker run -p 4000:80 friendlyhello
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
And with Docker Toolbox running, I can access this at http://192.168.99.100:4000/ and now I get:
Hello World!
Hostname: ca4507de3f48
Visits: cannot connect to Redis, counter disabled

Docker container with Newrelic Python agent and Supervisord is not sending data

I have a dockerised Django app running in uWSGI under supervisord, and I am trying to monitor the app using New Relic APM. The New Relic Python agent installation commands are written in the Dockerfile, and in wsgi.py the code below is included:
import newrelic.agent
newrelic.agent.initialize('/opt/testapp/newrelic.ini')
My supervisord.conf file:
[program:newrelic]
command=newrelic-admin run-program uwsgi --thunder-lock --ini /opt/testapp/uwsgi.ini --protocol http
autostart=true
autorestart=true
redirect_stderr=true
Below are my Dockerfile commands to copy the supervisord conf file and run supervisord:
COPY config/supervisord-newrelic.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
After running docker-compose up, the app starts and runs without any issues, and the app name is listed in the New Relic APM dashboard, but no data appears there.
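One thing worth checking (a sketch, not a confirmed fix): the agent wrapper started by newrelic-admin run-program can locate its config through the NEW_RELIC_CONFIG_FILE environment variable, and supervisord does not pass the parent shell's environment to programs by default, so it can be set explicitly in the program section:
[program:newrelic]
environment=NEW_RELIC_CONFIG_FILE="/opt/testapp/newrelic.ini"
command=newrelic-admin run-program uwsgi --thunder-lock --ini /opt/testapp/uwsgi.ini --protocol http
autostart=true
autorestart=true
redirect_stderr=true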

Your recommendation to reset a Postgres database on Heroku after CircleCI tests

I am using CircleCI for tests and pushing a Python application to Heroku in order to also run web GUI tests on that machine.
What is your recommended way of filling up the Heroku instance with certain database content?
I don't think that CircleCI should access the Heroku instance or the database directly.
The Heroku deploy hooks only seem to be able to call a web hook, but I would like to run a command to reset the database.
If you're using the built-in heroku deployments, you won't be able to do this, e.g. if your configuration looks like this:
deployment:
  staging:
    branch: master
    heroku:
      appname: foo-bar-123
You can instead configure your deployment to run several commands:
deployment:
  production:
    branch: production
    commands:
      - "[[ ! -s \"$(git rev-parse --git-dir)/shallow\" ]] || git fetch --unshallow"
      - git push git@heroku.com:foo-bar-123.git $CIRCLE_SHA1:refs/heads/master
      - heroku run rake db:migrate --app foo-bar-123:
          timeout: 400 # if your deploys take a long time
The comment from Two-Bit Alchemist would be a good command to reset the database.
The above examples are taken from: https://circleci.com/docs/continuous-deployment-with-heroku#heroku-setup
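For the reset itself, commands along these lines could be appended to the commands list (a sketch: pg:reset is destructive, the app name is the placeholder used above, and for a Django app the migrate step replaces rake db:migrate):
- heroku pg:reset DATABASE_URL --app foo-bar-123 --confirm foo-bar-123
- heroku run python manage.py migrate --app foo-bar-123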
