Docker healthcheck shows Not Found in Django - Python

I added a health check to my Dockerfile:
HEALTHCHECK --interval=1m --timeout=5s --retries=2 --start-period=10s \
CMD wget -qO- http://localhost:8070/healthcheck || exit 1
In my project's main urls.py file I added this entry:
url(r'^healthcheck/', lambda r: HttpResponse())
The project is deployed and serving requests, so as far as I can tell the healthcheck entry is valid; however, I keep getting:
2017-12-17 13:25:27,891 WARNING base 51 140551932685128 Not Found:
/healthcheck
written to the logs (once a minute).
The log entry also appears when I run the wget command from inside the server.
Is it an issue with the healthcheck syntax, the Django URL entry, or wget inside Docker?
Please assist. Thanks.

The healthcheck URL:
http://localhost:8070/healthcheck
should be:
http://localhost:8070/healthcheck/
because of Django's trailing-slash handling: the URL pattern r'^healthcheck/' only matches the path with the trailing slash.
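For completeness, a minimal sketch of the matching pieces (assuming Django 1.x-style url() routing, as in the question):
# urls.py - note the trailing slash in the pattern
from django.conf.urls import url
from django.http import HttpResponse

urlpatterns = [
    url(r'^healthcheck/$', lambda r: HttpResponse()),
]
with the Dockerfile check pointing at the slashed URL:
HEALTHCHECK --interval=1m --timeout=5s --retries=2 --start-period=10s \
CMD wget -qO- http://localhost:8070/healthcheck/ || exit 1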

Related

Docker-compose flask app not printing output from 'print'

I have a flask app that has one route and nothing complex going on, running in a docker container. I cannot for the life of me get print statements to show up in the logs (docker-compose logs -f <containername>). So far, I have tried various answers that supposedly have fixed this problem for others including:
Calling print("test", flush=True)
Setting PYTHONUNBUFFERED=1 and verifying it is set in the actual container with echo
Setting PYTHONUNBUFFERED=0
Running python with the -u flag
Using the logging module (logger.warning, logger.info, etc)
So far nothing has worked. The Flask app is starting perfectly fine, but no output from my print statements is shown. I have sanity-checked that I'm editing the correct file by adding random syntax errors and watching the app brick itself. I'm using Python 3.8 and docker-compose 2.
Try this:
import sys
print('It is working', file=sys.stderr)
Unlike stdout, stderr is not block-buffered, so the message shows up in the container logs immediately.
I found this question while looking for answers to a similar problem. I was running a Flask app in a conda environment in a container and wasn't getting any log output even though the Flask app itself was working fine. I added the following lines to my Dockerfile and it started logging as expected:
ENV PYTHONUNBUFFERED=1
RUN echo "source activate my_env" > ~/.bashrc
ENV PATH /opt/conda/envs/my_env/bin:$PATH
CMD ["python", "api.py"]
You can see logs with either docker-compose or docker.
With docker-compose you have to use the SERVICE name.
Note: you passed the container name, but you have to pass the service name:
NOT: $ docker-compose logs -f <container_name>
USE: $ docker-compose logs -f <SERVICE_NAME>
With docker you use the container name or container ID:
$ docker logs -f CONTAINER_ID | CONTAINER_NAME
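If the distinction is unclear, here is a minimal docker-compose.yml sketch (the service and container names are hypothetical):
services:
  web:                              # service name -> docker-compose logs -f web
    build: .
    container_name: my-flask-app    # container name -> docker logs -f my-flask-app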

404 error when deploying Scrapy

I am trying to deploy a Django+Scrapy project on Ubuntu 16.04. When I run scrapyd-deploy, as described in the docs, I get:
Packing version 1526639948
Deploying to project "first_scrapy" in http://my_ip/addversion.json
Deploy failed (404): <full HTML code of '404.html' page>
When I run scrapyd-deploy -l - I see:
default http://my_ip
My scrapy.cfg:
[settings]
default = first_scrapy.settings
[deploy]
url = http://my_ip
username = root
password = rootpassword
project = first_scrapy
What am I doing wrong?
UPDATE 1:
If I change the url in my scrapy.cfg to url = http://my_ip:6800, it still throws a 404 error. Next I tried running scrapyd in a second console, and for the first time I got a different response - details are here.
So the question now is: how do I keep scrapyd running so that it is still working after I close the console?
You just have to change directory into your project folder and then run the scrapyd command with nohup, which makes sure scrapyd doesn't get closed after you disconnect from the server:
cd /path/to/your/project && nohup scrapyd >& /dev/null &
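A variant that keeps scrapyd's output around, in case you need to debug later (the log file name is just an example):
cd /path/to/your/project && nohup scrapyd > scrapyd.log 2>&1 &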

Django and Gunicorn: 403 Forbidden

I have a Django application inside /home//my_app that I am trying to deploy using gunicorn:
sudo gunicorn --workers=2 -b :8081 tutorial.wsgi:application
After deploying the application with the command above, I log into another ssh instance (on the same server) and run the following command:
wget 127.0.0.1:8081
This returns a 403 FORBIDDEN.
Things I have tried:
1. Tried chmod 755, and even 777, on the app directory (did not work)
2. Tried moving the app directory to /etc/www/myapp (did not work)
3. Tried running all commands with root access (did not work)
It is worth noting that I am not that familiar with Linux and that this error is literally driving me crazy.
SOLVED IT:
After installing cURL in order to inspect the HTTP headers, it turned out that the service worked but returned a 403 because of a missing authorization token. Oops.
Please make sure you have coded views.py and urls.py to serve a GET request at /.
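For reference, a minimal sketch of such a view and URL entry (assuming the project is named tutorial, as in the gunicorn command above, and Django 1.x-style url() routing):
# tutorial/urls.py
from django.conf.urls import url
from django.http import HttpResponse

def index(request):
    # answers GET / with a 200 so wget 127.0.0.1:8081 succeeds
    return HttpResponse('ok')

urlpatterns = [
    url(r'^$', index),
]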

Create Django Superuser on AWS Elastic Beanstalk

I have a custom user class called MyUser. It works fine locally with registrations, logins and so on. I'm trying to deploy my application to AWS Elastic Beanstalk and I'm running into some problems with creating my superuser.
I tried making a script file and running it as the official AWS guide suggests. That didn't work well, so I decided to try a secondary method suggested here and create a custom manage.py command to create my user.
When I deploy I get the following errors in the log.
[Instance: i-8a0a6d6e Module: AWSEBAutoScalingGroup ConfigSet: null] Command failed on instance. Return code: 1 Output: [CMD-AppDeploy/AppDeployStage0/EbExtensionPostBuild] command failed with error code 1: Error occurred during build: Command 02_createsu failed.
[2015-03-10T08:05:20.464Z] INFO [17937] : Command processor returning results:
{"status":"FAILURE","api_version":"1.0","truncated":"false","results":[{"status":"FAILURE","msg":"[CMD-AppDeploy/AppDeployStage0/EbExtensionPostBuild] command failed with error code 1: Error occurred during build: Command 02_createsu failed","returncode":1,"events":[]}]}
[2015-03-10T08:05:20.463Z] ERROR [17937] : Command execution failed: [CMD-AppDeploy/AppDeployStage0/EbExtensionPostBuild] command failed with error code 1: Error occurred during build: Command 02_createsu failed (ElasticBeanstalk::ActivityFatalError)
at /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.1.0/gems/beanstalk-core-1.1/lib/elasticbeanstalk/activity.rb:189:in `rescue in exec'
...
caused by: command failed with error code 1: Error occurred during build: Command 02_createsu failed (Executor::NonZeroExitStatus)
The code looks like the following:
This is my mysite.config file in .ebextensions/
01_syncdb and 03_collectstatic work fine.
container_commands:
  01_syncdb:
    command: "django-admin.py migrate --noinput"
    leader_only: true
  02_createsu:
    command: "manage.py createsu"
    leader_only: true
  03_collectstatic:
    command: "django-admin.py collectstatic --noinput"

option_settings:
  - namespace: aws:elasticbeanstalk:container:python
    option_name: WSGIPath
    value: treerating/wsgi.py
  - option_name: DJANGO_SETTINGS_MODULE
    value: treerating.settings
This is my /profiles/management/commands/createsu.py file:
from django.core.management.base import BaseCommand
from profiles.models import MyUser

class Command(BaseCommand):
    def handle(self, *args, **options):
        if MyUser.objects.count() == 0:
            MyUser.objects.create_superuser("admin", "treerating", "password")
And I have __init__.py files in both /management/ and /commands/ folders.
I tried this command locally from the command line and it works fine, creating the user without errors. So there shouldn't be any issue with the command itself or with MyUser.objects.create_superuser().
EDIT: I tried changing my handle() function to only set a variable to True, and I still get the same errors. So it seems the problem is not related to the create_superuser() call or to handle() itself, but rather to something about invoking manage.py.
Any ideas?
EDIT 2:
I tried executing the command over SSH and it failed. I then followed the instructions in this post and set the Python paths manually with:
source /opt/python/run/venv/bin/activate
and
source /opt/python/current/env
I was then able to successfully create my user.
The official AWS Django deployment guide does not mention anything about this. But I guess you are supposed to set your Python paths in the .config file somehow. I'm not sure exactly how to do this, so if someone still wants to answer that, I will test it and accept it as the answer if it solves the deployment errors.
Double-check the link to your secondary method. You can set the Python path in the option settings (.ebextensions/02_python.config) that you've created:
option_settings:
  "aws:elasticbeanstalk:application:environment":
    DJANGO_SETTINGS_MODULE: "iotd.settings"
    "PYTHONPATH": "/opt/python/current/app/iotd:$PYTHONPATH"
    "ALLOWED_HOSTS": ".elasticbeanstalk.com"
However, I've done this and am still experiencing the issue you've described, so you'll have to see if it fixes it.
EDIT: It turns out my issue was a file-structure issue. I had the management directory in the project directory, when it should have been placed one level deeper, in the directory of one of my apps.
That put it one level deeper beneath manage.py and settings.py than is shown in the example, but it is working fine now.
I know this could be late, but I just wanted to share that I solved this issue by moving the file /profiles/management/commands/createsu.py into the app folder you are using.
In my case it was:
easy/easyapp/management/commands/createsu.py
where easy is my project and easyapp my app.
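Sketched out, the layout that worked looks like this (directory names taken from this answer, with the __init__.py files the question mentions):
easy/                      # project root, next to manage.py
    easyapp/               # the Django app
        management/
            __init__.py
            commands/
                __init__.py
                createsu.py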
Another alternative that worked for me is to go directly into the config.yml file and change the WSGI path there. You can get at it with the eb config command; go down 50 lines or so, make your changes, then escape and save. This is only an environment-specific solution, though.
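Roughly what the relevant lines look like inside the eb config editor (the WSGIPath value is copied from the question's settings and is just an example):
  aws:elasticbeanstalk:container:python:
    WSGIPath: treerating/wsgi.py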

Unable to deploy Portia spider with scrapyd-deploy

Could you please help me figure out what I'm doing wrong? Here are the steps:
followed the Portia install manual found here: https://github.com/scrapinghub/portia - all ok
created a new project, entered a URL, tagged an item - all ok
clicked "continue browsing", browsed through the site, items were being extracted as expected - all ok
Next I wanted to deploy my spider:
1st try: I ran, as the docs specify, scrapyd-deploy your_scrapyd_target -p project_name - got an error: scrapyd wasn't installed
fix: pip install scrapyd
2nd try: I launched the scrapyd server and accessed http://localhost:6800/ - all ok
After a brief read of the scrapyd docs I found out I had to edit my project's scrapy.cfg file (slyd/data/projects/new_project/scrapy.cfg) and add the following:
[deploy:local]
url = http://localhost:6800/
Went back to the console and checked that all is ok:
$:> scrapyd-deploy -l
local http://localhost:6800/
$:> scrapyd-deploy -L local
default
Seemed ok, so I gave it another try:
$ scrapyd-deploy local -p default
Packing version 1418722113
Deploying to project "default" in http://localhost:6800/addversion.json
Server response (200):
{"status": "error", "message": "IOError: [Errno 21] Is a directory: '/Users/Mike/www/portia/slyd/data/projects/new_project'"}
What am I missing?
For anyone who stumbles upon this issue, the fix is to run scrapyd from a directory other than the project's directory.
See details here : https://github.com/scrapinghub/portia/issues/128
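In practice that can be as simple as starting scrapyd from your home directory and deploying from the project directory (paths from the question; the home directory is just an example):
cd ~ && scrapyd &
cd /Users/Mike/www/portia/slyd/data/projects/new_project
scrapyd-deploy local -p default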
