CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}
I am trying to connect to Redis to save my object in it, but it gives me this error when I try to connect:
Error 10061 connecting to 127.0.0.1:6379. No connection could be made
because the target machine actively refused it
How does it work, and what should I put in LOCATION? I am also behind my company's proxy. I need a detailed explanation of LOCATION.
If your Redis is password-protected, you should have a config like this:
CACHES.update({
    "redis": {
        "BACKEND": "redis_cache.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "PASSWORD": "XXXXXXXXXXX",
            "CLIENT_CLASS": "redis_cache.client.DefaultClient",
        },
    },
})
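Since the question asks for detail on LOCATION: it is a standard Redis connection URL, and Redis clients speak their own TCP protocol, so a corporate HTTP proxy setting generally does not apply; the host and port just need to be directly reachable from your machine. A commented sketch of the URL's parts (values here are illustrative):

# Anatomy of a LOCATION value (illustrative):
#   redis://127.0.0.1:6379/1
#   scheme   host     port database index (Redis ships with databases 0-15 by default)
# With a password, the URL form becomes:
LOCATION = "redis://:XXXXXXXXXXX@127.0.0.1:6379/1"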
First start the redis server. Your OS will provide a mechanism to do that, e.g. on some Linuxes you could use systemctl start redis, or /etc/init.d/redis start or similar. Or you could just start it directly with:
$ redis-server
which will run it as a foreground process.
Then try running the redis-cli ping command. Receiving a PONG response indicates that redis is in fact up and running on your local machine:
$ redis-cli ping
PONG
Once you have that working try Django again.
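Once Redis responds to the ping, a quick way to confirm the Django side is wired up is to exercise the cache from a shell. A minimal sketch, assuming the CACHES setting above and that django-redis is installed:

# Run inside `python manage.py shell`
from django.core.cache import cache

cache.set("smoke-test", "ok", timeout=30)  # write a value with a 30-second TTL
print(cache.get("smoke-test"))             # prints "ok" if Redis is reachable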
I have taken a free trial of Redis and it gave me an endpoint with a password. I haven't done anything with Redis or Celery before, so I really don't have any idea how it works. In the Celery docs everyone connects to localhost, but how can I connect to this endpoint?
CELERY_BROKER_URL='redis://localhost:6379',
CELERY_RESULT_BACKEND='redis://localhost:6379'
What should I replace this with? Where should I put the password?
My endpoint looks something like this: redis-18394.c252.######.cloud.redislabs.com:18394. Should I add the password at the end of this, after a /?
According to celery's documentation, the format is
redis://:password@hostname:port/db_number
By default, Redis has 16 databases, so you can use any number from 0-15 for db_number. Use a different db number for the broker and the result backend.
https://docs.celeryproject.org/en/stable/getting-started/backends-and-brokers/redis.html#configuration
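Applied to your settings, it would look something like this (a sketch; the password is a placeholder and the masked part of the hostname comes from your Redis Labs dashboard):

CELERY_BROKER_URL = 'redis://:your_password@redis-18394.c252.######.cloud.redislabs.com:18394/0'
CELERY_RESULT_BACKEND = 'redis://:your_password@redis-18394.c252.######.cloud.redislabs.com:18394/1'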
You can use channels_redis for this:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": ["redis://:password@your_ip:6379"],
        },
    },
}
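channels_redis also accepts plain (host, port) tuples when no password is involved; an equivalent sketch under that assumption:

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("your_ip", 6379)],  # (host, port) tuple, no auth
        },
    },
}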
I'm trying to deploy a super simple containerized flask app to ECS. I know the image is docker-compose-able and building fine because I have it as part of a CodePipeline that's building, tagging and pushing the final docker image to ECR.
Build completed on Sat Feb 15 21:48:44 UTC 2020
[Container] 2020/02/15 21:48:44 Running command echo Pushing the Docker images...
Pushing the Docker images...
[Container] 2020/02/15 21:48:44 Running command docker push $REPOSITORY_URI:latest
...
025f20c0831b: Pushed
98e916abdf11: Pushed
I should probably clarify at this point that the application works locally, and I've set the app.run() host to 0.0.0.0.
At this point I have an ECS cluster with a running task and a public IP, and logs that indicate the app has started. I have also modified the security group's inbound rules to allow port 5000 from 0.0.0.0/0 and ::/0.
Theoretically this means you should be able to visit the app at 3.80.1.115:5000, but instead you get the browser's standard "what did you even just type" error page.
I'm just not sure what's happening anymore, because I seem so close: the app is running without errors in ECS and everything looks wonderful, except the app is, well, inaccessible. Thoughts?
Maybe you should review the networkMode and portMappings settings of your ECS task definition.
In the security group, you're allowing traffic on port TCP/80, but what I can read in your application logs (Running on http://0.0.0.0:5000/) is that your app is listening on a different port, TCP/5000.
Here is an example of an Nginx task listening on port TCP/80:
{
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx:latest",
      "memory": 256,
      "cpu": 256,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "awslogs-nginx-ecs",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ],
  "volumes": [],
  "networkMode": "awsvpc",
  "placementConstraints": [],
  "family": "nginx",
  "memory": "512",
  "cpu": "256"
}
You can see other examples here.
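For your case, the key change would be a containerPort that matches the port Flask binds to. A sketch registering such a task definition with boto3 (the family name, image URI, role ARN, and sizes are illustrative assumptions, not values from your setup):

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")
ecs.register_task_definition(
    family="flask-app",  # hypothetical family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    # Fargate needs an execution role to pull the image from ECR
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "flask-app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/flask-app:latest",
            "essential": True,
            "portMappings": [
                # must match the port the app listens on (app.run on 0.0.0.0:5000)
                {"containerPort": 5000, "protocol": "tcp"},
            ],
        }
    ],
)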
I'm testing a pipeline in which an SSH client talks to many servers to gather data from them. I currently test my software by spinning up a Docker machine to simulate an SSH server. Now I want to automate testing in Jenkins. How can I spin up Docker SSH servers in Jenkins that don't act as agents but rather wait for an agent to contact them with an SSH request? Here's the current Jenkins pipeline; the Dockerfile creates the machine that runs the Python script, but I still need to create a Docker SSH server for it to talk to.
pipeline {
    agent { dockerfile true }
    stages {
        stage('Checkout') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'somecredentials', url: 'a git repo']]])
            }
        }
        stage('Run Tests') {
            steps {
                sh "python ./rsyncUnitTests.py"
            }
        }
    }
    post {
        failure {
            sendEmail('foo@bar.org')
        }
        changed {
            sendEmail('foo@bar.org')
        }
    }
}
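One way to get a throwaway SSH server for the tests to talk to, without making it a Jenkins agent, is to have the test script itself start and stop an sshd container. A sketch, assuming the Docker CLI is reachable from inside the build container and using the linuxserver/openssh-server image; the port and credentials are illustrative:

import subprocess
import time

def start_ssh_server():
    # Launch a disposable sshd container and publish its port for the tests.
    container_id = subprocess.check_output([
        "docker", "run", "-d", "--rm",
        "-e", "USER_NAME=testuser",
        "-e", "USER_PASSWORD=testpass",
        "-e", "PASSWORD_ACCESS=true",
        "-p", "2222:2222",
        "linuxserver/openssh-server",
    ]).decode().strip()
    time.sleep(5)  # crude startup wait; polling the port would be more robust
    return container_id

def stop_ssh_server(container_id):
    subprocess.run(["docker", "stop", container_id], check=True)

rsyncUnitTests.py could then call start_ssh_server() during setup and stop_ssh_server() during teardown, pointing its SSH client at localhost:2222.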
I have been following this article - https://blog.mangoforbreakfast.com/2017/02/13/django-channels-on-aws-elastic-beanstalk-using-an-alb/
to get my django-channels app working on AWS, but only non-WebSocket requests are getting handled.
My channel layer setting is:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [os.environ.get('REDIS_URL', 'redis://localhost:6379')],
        },
        "ROUTING": "malang.routing.channel_routing",
    },
}
I have two target groups, as mentioned in the article: one forwards the path / to port 80, and the other forwards /ws/* to port 5000.
My supervisord.conf is:
[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
command=/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 5000 malang.asgi:channel_layer
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
user=root
stdout_logfile=/tmp/daphne.out.log
[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
command= /opt/python/run/venv/bin/python manage.py runworker
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=4
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log
When I check the result of supervisorctl status in the AWS logs, it shows both programs running fine. But I still get a 404 response for ws requests.
Please help, and let me know if you need more info.
Does the project run locally? If not, the issue is with the software. If so, the issue is with your deployment. I would check the security group/firewall/ELB configuration to ensure the correct ports are accessible.
It makes no sense to run a Redis backend locally on each instance, and given your configuration it does not look like you have actually deployed one.
Redis is a cache system that allows data sharing across different instances. Architecturally it is closer to a database than to a simple daemon thread.
You should use an external Redis cache instead and point your Django conf at it.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "ROUTING": "<YOUR_APP>.routing.application",
        "CONFIG": {
            "hosts": ["redis://"+REDIS_URL+":6379"],
        },
    },
}
See the AWS ElastiCache service for that.
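In practice that just means pointing REDIS_URL at the ElastiCache endpoint instead of localhost. A sketch with a hypothetical primary endpoint (yours comes from the ElastiCache console):

import os

# Hypothetical ElastiCache primary endpoint; set REDIS_URL in the
# Elastic Beanstalk environment properties rather than hard-coding it.
REDIS_URL = os.environ.get(
    "REDIS_URL",
    "my-cluster.abc123.0001.use1.cache.amazonaws.com",
)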
I'm trying to host a python HTTPS server on Amazon Web Services using Docker.
The application "application.py" works well when executed on my local computer on the arbitrary port 8012.
The problem is that the required port is not open when I try to run the same application remotely.
Here are the configuration files:
Dockerfile
FROM python:2-onbuild
ADD . /mnt
EXPOSE 8012
WORKDIR /mnt
CMD [ "python", "./application.py" ]
Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "8012",
      "HostPort": "8012"
    }
  ]
}
Inspecting the running docker instance on AWS gives the following parameters:
"Config": {
"ExposedPorts": {
"8012/tcp": {}
}
}
...
"NetworkSettings": {
"PortMapping": null,
"Ports": {
"8012/tcp": null
}
}
Does someone know how to turn this "403 connection refused" into a "200 OK" by opening the port?
Since the application is meant to scale automatically, I can only use the configuration files and can't use the following:
docker run -p 8012:8012...
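Before redeploying, it may be worth confirming locally that the container really serves on 8012. A minimal sketch (run on your own machine with the container already started by your compose setup; verify=False assumes a self-signed certificate):

import requests

resp = requests.get("https://localhost:8012/", verify=False)
print(resp.status_code)  # expect 200 if the app is serving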