I'm trying to get a Docker service up using the Python SDK. My service is an nginx container that's supposed to bind container port 80 to port 80 on the machine. The nginx config has a redirect clause for all HTTP traffic to HTTPS.
When launching the service, I'm using the following parameters:
params = {
    'endpoint_spec': docker.types.EndpointSpec(ports={80: 80}),
    'name': name,
    'hostname': name,
    'image': NGINX_IMAGE_NAME,
    'networks': [NETWORK_NAME],
    'mounts': ['/home/ubuntu/.ssh/certs:/certs:ro'],
    'log_driver': 'syslog',
    'log_driver_options': {'syslog-address': 'udp://localhost:514'},
    'constraints': ['node.labels.environment==test']
}
api.services.create(**params)
The behaviour I was expecting was port 80 exposed only on the nodes that have the environment label set to test. What I got instead is port 80 open on every node in the Docker swarm. Is this intended? Can one restrict the binding to particular nodes?
EDIT:
Apparently the Docker ingress network (the swarm routing mesh) is responsible for my dilemma. Any hints on how to disable this particular behavior?
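If the goal is to bind the port only on nodes that actually run the service, publishing the port in host mode (which bypasses the routing mesh) should do it. A minimal sketch, assuming docker-py 3.x, where the tuple form of the ports mapping is (target_port, protocol, publish_mode):

import docker

client = docker.from_env()

# 'host' publish mode binds the port only on nodes that run a task for
# this service, instead of on every node via the ingress routing mesh.
endpoint_spec = docker.types.EndpointSpec(ports={80: (80, 'tcp', 'host')})

client.services.create(
    image=NGINX_IMAGE_NAME,  # as defined in the question
    name=name,
    endpoint_spec=endpoint_spec,
    constraints=['node.labels.environment==test'],
)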
I created an application but it is not linked to my domain
(e.g. "site.com", "www.site.com"); when I access it I get an error.
I need to make my Elastic Beanstalk application connect to my domain (jamelaumn.com); I'm the owner.
Here are screenshots of my application load balancer:
Currently I have no rules on the EB load balancer.
My EC2 load balancer:
Based on the comments and your updates, I see two issues.
The SSL certificate is set up for jamelaumn.com. This will not work; it must be set up for *.jamelaumn.com or api.jamelaumn.com. So you have to create a new SSL certificate and add it to your ALB.
You have to redirect port 80 (HTTP) to 443 (HTTPS) on your load balancer. The process is described in How can I redirect HTTP requests to HTTPS using an Application Load Balancer?
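If you prefer to script that redirect, a hedged sketch with boto3 (the load balancer ARN is a placeholder; substitute your ALB's actual ARN):

import boto3

elbv2 = boto3.client('elbv2')

# Placeholder ARN for illustration only.
LB_ARN = 'arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123'

# Create an HTTP :80 listener whose default action is a permanent
# redirect to HTTPS :443 on the same host, path, and query.
elbv2.create_listener(
    LoadBalancerArn=LB_ARN,
    Protocol='HTTP',
    Port=80,
    DefaultActions=[{
        'Type': 'redirect',
        'RedirectConfig': {
            'Protocol': 'HTTPS',
            'Port': '443',
            'StatusCode': 'HTTP_301',
        },
    }],
)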
Context
I have an application which uses a service running in my kubernetes cluster.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  ...
spec:
  ports:
  - port: 5672
  ...
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
...
rabbitmq ClusterIP 10.105.0.215 <none> 5672/TCP 25h
That application also has a Python client, which at some point needs to connect to the service (for example, using pika). Of course, the client runs outside the cluster, but on a machine with a working kubectl configuration.
I would like to write the code of the "client" module as if it were running inside the cluster (or close to that):
import pika

host = 'rabbitmq'
port = 5672

class AMQPClient(object):
    def __init__(self):
        """Creates a connection with an AMQP broker"""
        self.parameters = pika.ConnectionParameters(host=host, port=port)
        self.connection = pika.BlockingConnection(self.parameters)
        self.channel = self.connection.channel()
Issue
When I run the code I get the following error:
$ ./client_fun some_arguments
2020-09-18 09:36:31,137 - ERROR - Address resolution failed: gaierror(-3, 'Temporary failure in name resolution')
Of course: "rabbitmq" is not resolvable on my network, only inside the k8s cluster network.
However, since the kubernetes Python client uses a proxy interface, according to manually-constructing-apiserver-proxy-urls it should be possible to access the service using a URL similar to this:
host = 'https://xxx.xxx.xxx.xxx:6443/api/v1/namespaces/default/services/rabbitmq/proxy'
This is not working, so something else is missing.
In theory, kubectl can already reach the cluster, so maybe there is an easy way for my application to access the rabbitmq service without using a NodePort.
Note the following:
The service does not necessarily use the HTTP/HTTPS protocol.
The IP of the cluster might be different each time, so the proxy URL cannot be hardcoded; a kubernetes Python client function should be used to get the IP and port, similar to kubectl cluster-info (see manually-constructing-apiserver-proxy-urls).
Port-forwarding to the internal service might be a perfect solution, see forward-a-local-port-to-a-port-on-the-pod and the sketch below.
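For development, that port-forward route can be sketched as follows, assuming kubectl is already configured for the cluster and the tunnel runs in a separate shell:

# In a separate shell, forward the service port to localhost:
#   kubectl port-forward service/rabbitmq 5672:5672
import pika

# With the tunnel up, the client connects exactly as it would in-cluster,
# just pointing at localhost instead of the service name.
parameters = pika.ConnectionParameters(host='localhost', port=5672)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()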
If you want to use the Python client you need the python package: https://github.com/kubernetes-client/python
In this package you can find how to connect to k8s.
If you want to use the k8s API directly, you need a k8s token. The k8s API docs are useful; you can also see exactly which API calls kubectl makes by adding -v 9, like: kubectl get ns -v 9
I would suggest not accessing rabbitmq through the kubernetes API server as a proxy. It puts load on the kubernetes API server, and the API server becomes a point of failure.
I would instead expose rabbitmq using a LoadBalancer-type service. If you are not in a supported cloud environment (AWS, Azure, etc.) you could use MetalLB as the load balancer implementation.
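A minimal sketch of such a Service, assuming the rabbitmq pods carry an app: rabbitmq label (adjust the selector to match your deployment):

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-external
spec:
  type: LoadBalancer
  selector:
    app: rabbitmq   # assumed pod label
  ports:
  - port: 5672
    targetPort: 5672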
I am trying to use a Cloud SQL / MySQL instance for my App Engine account. The app is a Python Django 2.1.5 App Engine app. I have created a MySQL instance in Google Cloud.
I have added the following to my app.yaml file, copied from the SQL instance details:
beta_settings:
  cloud_sql_instances: <INSTANCE_CONNECTION_NAME>=tcp:<TCP_PORT>
I have granted the Cloud SQL Client role to my App Engine project xxx-app's service account xxx-app@appspot.gserviceaccount.com. I have created a DB user account specific to the app XYZ which can connect from all hosts (the % option).
My connection details in settings.py are the following:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'my-db',
        'USER': 'appengine',
        'PASSWORD': 'xxx',
        'HOST': '111.111.11.11',  # used actual IP
        'PORT': '3306'
    }
}
I have also tried as per https://github.com/GoogleCloudPlatform/appengine-django-skeleton/blob/master/mysite/settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'HOST': '/cloudsql/<your-project-id>:<your-cloud-sql-instance>',
        'NAME': '<your-database-name>',
        'USER': 'root',
    }
}
I cannot connect from my local machine either. However, if I do an Add Network with my local IP and then try to connect, the local connection works. The app runs fine locally after authorizing my local IP address using CIDR notation.
My problems:
I am unable to connect to Cloud SQL without adding the App Engine-assigned IP address. It gives me an error:
OperationalError: (2003, "Can't connect to MySQL server on '0.0.0.0' ([Errno 111] Connection refused)")
Where can I find App Engine's assigned IP address? I don't mind even if it is temporary. I understand that if I need a static IP address I will have to create a Compute Engine VM instance.
App Engine doesn't make any guarantees regarding the IP address of a particular instance, and it may change at any time. Since it is a serverless platform, it abstracts away the infrastructure to let you focus on your app.
There are two options when using App Engine Flex: a Unix domain socket or a TCP port. Which one App Engine provides for you depends on how you specify it in your app.yaml:
cloud_sql_instances: <INSTANCE_CONNECTION_NAME> provides a Unix socket at
/cloudsql/<INSTANCE_CONNECTION_NAME>
cloud_sql_instances: <INSTANCE_CONNECTION_NAME>=tcp:<TCP_PORT> provides a local TCP port (127.0.0.1:<TCP_PORT>).
You can find more information about this on the Connecting from App Engine page.
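As a hedged sketch of how settings.py can pick the right option per environment (the GAE_INSTANCE check, database name, and credentials are illustrative assumptions): the Unix socket when deployed, and a local TCP port, e.g. through the Cloud SQL Proxy, during development:

import os

if os.getenv('GAE_INSTANCE'):  # assumed set when running on App Engine Flex
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'HOST': '/cloudsql/<INSTANCE_CONNECTION_NAME>',  # Unix socket
            'NAME': 'my-db',
            'USER': 'appengine',
            'PASSWORD': 'xxx',
        }
    }
else:
    # Local development, e.g. with the Cloud SQL Proxy listening on 127.0.0.1:3306
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'HOST': '127.0.0.1',
            'PORT': '3306',
            'NAME': 'my-db',
            'USER': 'appengine',
            'PASSWORD': 'xxx',
        }
    }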
I've encountered this issue before, and after hours of scratching my head all I needed to do was enable the "Cloud SQL Admin API"; the deployed site then connected to the database. This also sets the permissions on your GAE service account for the Cloud SQL Proxy to connect to your GAE service.
Q. How do I set Django ALLOWED_HOSTS on an Elastic Beanstalk instance to allow the Elastic Load Balancer IP?
Background
I deployed a Django website on Elastic Beanstalk. The website's domain is added to ALLOWED_HOSTS, so normal requests are accepted by Django correctly.
ALLOWED_HOSTS = ['.mydomain.com']
The Elastic Load Balancer visits the Elastic Beanstalk instances directly by IP address for health checks, so the next lines allow the health check:
# add the Elastic Beanstalk instance's local IP
import requests
aws_ip = requests.get('http://169.254.169.254/latest/meta-data/local-ipv4', timeout=0.1).text
ALLOWED_HOSTS.append(aws_ip)
But I still get invalid host errors, so it seems the Elastic Beanstalk instances are also visited with the Elastic Load Balancer's public IP. There are solutions online for plain EC2 deployments, where you can configure the HTTP server to set the Host header when the instance is visited by IP directly. But we cannot configure Apache on Elastic Beanstalk. So how do I add the Elastic Load Balancer IP to ALLOWED_HOSTS?
There is no good reason to accept traffic that is directed to your ELB's IP. For the health check, my preferred method:
import requests
try:
    internal_ip = requests.get('http://instance-data/latest/meta-data/local-ipv4').text
except requests.exceptions.ConnectionError:
    pass
else:
    ALLOWED_HOSTS.append(internal_ip)
del requests
No complicated Apache configuration, which depends on your domain
Fails quickly on DNS; no need to rely on a timeout
I believe the best approach would be to configure Apache to handle request host validation. Even with Beanstalk you should be able to configure Apache using .ebextensions.
The general idea is to check incoming requests for the 'ELB-HealthChecker/1.0' User-Agent and for the health check URL you set as the request's REQUEST_URI. Those requests can have their Host header changed to an allowed host with the RequestHeader set Host directive.
If you really don't want to configure Apache, you could implement a custom middleware to override Django's CommonMiddleware and allow the health checker requests to bypass Django's ALLOWED_HOSTS validation; see the sketch below.
I went into greater detail in this answer if you need more on implementing one of these solutions.
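For illustration, a minimal sketch of that middleware route, written as a standalone middleware rather than a CommonMiddleware override (the /health/ path is an assumption; use your configured health check URL):

# healthcheck_middleware.py -- place it near the top of MIDDLEWARE
from django.http import HttpResponse

class HealthCheckMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Answer ELB health checks before anything calls request.get_host(),
        # so ALLOWED_HOSTS is never consulted for these requests.
        user_agent = request.META.get('HTTP_USER_AGENT', '')
        if user_agent.startswith('ELB-HealthChecker') and request.path == '/health/':
            return HttpResponse('ok')
        return self.get_response(request)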
Adding the local IP of the EC2 instance to the allowed hosts worked for me.
settings.py:
import socket
hostname = socket.gethostname()
local_ip = socket.gethostbyname(hostname)
ALLOWED_HOSTS = ['...', local_ip]
This adds the list of IPs assigned to that host.
Use only the IP address, with no http:// prefix or /latest/meta-data/... path.
In settings.py this is the configuration I use, and it works well for me (for DEV at least):
# AWS config for ElasticBeanstalk
ALLOWED_HOSTS = [
    '127.0.0.1',
    'localhost',
    '.compute-1.amazonaws.com',  # allows viewing of instances directly
    '.elasticbeanstalk.com'
]
I hope it helps.
I have a SAML2 service provider (Open edX Platform, if that makes a difference), configured according to the docs and otherwise working normally. It runs at http://lms.local:8000 and works just fine with the TestShib test Identity Provider and other 3rd-party providers.
Problems begin when nginx reverse proxy is introduced. The setup is as follows:
nginx, obviously, runs on port 80
LMS (the service provider) runs on port 8000
lms.local is aliased to localhost via hosts file
Nginx has the following site config:
server {
    listen 80;
    server_name lms.local;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        if ($request_method = 'OPTIONS') {
            return 204;
        }
    }
}
The problem is the following: python-social-auth detects that the server runs on lms.local:8000 (via request.META['HTTP_PORT']). So, if an attempt is made to use SAML SSO via the nginx proxy, it fails with the following message:
Authentication failed: SAML login failed: ['invalid_response'] (The response was received at http://lms.local:8000/auth/complete/tpa-saml/ instead of http://lms.local/auth/complete/tpa-saml/)
If that helps, an exception that causes this message is thrown in python-saml.OneLogin_Saml2_Response.is_valid.
The question is: is it possible to run the SP behind a reverse proxy on the same domain, but on a different port? The Shibboleth wiki says it is totally possible to run an SP behind a reverse proxy on a different domain, but says nothing about ports.
In this particular case the reverse proxy was sending X-Forwarded-Host and X-Forwarded-Port headers, so I just modified the Django strategy to use those values instead of what Django provides (i.e. request.get_host and request.META['SERVER_PORT']), which yielded two pull requests:
https://github.com/edx/edx-platform/pull/9848
https://github.com/omab/python-social-auth/pull/741
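For reference, a sketch of the nginx side of that setup: the site config above extended so the proxy actually sends those two headers (standard nginx variables; adjust as needed):

location / {
    proxy_pass http://localhost:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
}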