I'm looking for the most modular way to use Ansible to provision a server that will host multiple Node.js applications. The setup I currently have doesn't scale.
The roles I have are common, nginx, nodejs, mongodb, apps.
The apps hash/dictionary
I maintain a dict called apps in roles/apps/defaults/main.yml and this is both the solution and problem:
apps:
  shark:
    repo: git@example.com:shark.git
    subdomain: shark
    port: 3001
  tiger:
    repo: git@example.com:tiger.git
    subdomain: tiger
    port: 3002
Example of how the apps dict is used in roles/apps/tasks/main.yml:
- name: clone repos
  git: repo={{ item.value.repo }}
  with_dict: apps

- name: create vhost
  template: src=vhost.j2 dest=/etc/nginx/sites-available/{{ item.value.subdomain }}
  with_dict: apps
  sudo: yes
playbook.yml
Both staging.yml and production.yml playbooks are set to run all of the aforementioned roles. That means that both servers will install all of the apps configured in the apps dict.
I need to figure out a more modular pattern for roles/apps so that I can install different apps in different environments. Ideally, I'd like to be able to specify which apps go to which machine directly in staging.yml and production.yml.
Any idea how this can be accomplished? Feel free to suggest entirely different methods of configuring multiple apps.
Another solution I figured out was to create dictionaries for the various apps in roles/apps/defaults/main.yml:
shark:
  repo: git@example.com:shark.git
  subdomain: shark
  port: 3001

tiger:
  repo: git@example.com:tiger.git
  subdomain: tiger
  port: 3002
Note that they're not enclosed inside an apps dict.
Then, specify the apps dict contents in staging.yml or production.yml instead:
- hosts: example
  vars:
    apps:
      - '{{ shark }}'
      - '{{ tiger }}'
This allows you to direct which applications are included in which playbook.
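Since apps is now a list of dicts rather than a dict, the tasks in roles/apps/tasks/main.yml would loop with with_items and reference the fields directly. A minimal sketch (the /srv/{{ item.subdomain }} checkout path is only an assumption for illustration, since the original tasks omit a dest):

- name: clone repos
  git: repo={{ item.repo }} dest=/srv/{{ item.subdomain }}  # dest path assumed for illustration
  with_items: "{{ apps }}"

- name: create vhost
  template: src=vhost.j2 dest=/etc/nginx/sites-available/{{ item.subdomain }}
  with_items: "{{ apps }}"
  sudo: yes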
If you're willing to maintain another dict, you could parameterize the with_dict to:
- name: clone repos
  git: repo={{ item.value.repo }}
  with_dict: "{{ env_app_dict }}"
and specify your env_app_dict in either the inventory file or via the command line.
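For example (a sketch that assumes a staging group in your inventory and reuses the per-app dicts defined above), a group_vars/staging.yml file could set:

# group_vars/staging.yml (hypothetical file; any group or host vars file would work)
env_app_dict:
  shark: '{{ shark }}'
  tiger: '{{ tiger }}'

Alternatively, the dict can be passed at runtime with -e/--extra-vars.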
edit: Alternatively, try the lookup plugin that I wrote and specify folders for apps_common, apps_production, and apps_staging.
With this plugin, you'd put a collection of common items in apps_common and:
- name: clone common repos
  git: repo="{{ item.repo }}"
  fileglob_to_dict:
    - "{{ 'apps_common/*' }}"
and then specify the apps_env file parameter (as in the original answer) either by targeting a host pattern (e.g. - hosts: staging in the play) or by setting the parameter on the command line or in the inventory file.
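For instance, a play targeting the staging hosts might load the environment-specific folder in addition to the common one. A sketch that assumes the custom fileglob_to_dict plugin above and an apps_staging folder laid out like apps_common:

- hosts: staging
  tasks:
    - name: clone staging repos
      git: repo="{{ item.repo }}"
      fileglob_to_dict:
        - "{{ 'apps_staging/*' }}"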
I have deployed a Pod with several containers. In my Pod I have certain environment variables that I can access in a Python script with os.getenv(). However, if I try to use os.getenv() to access the container-level environment variables, I get an error stating they don't exist (the value is None). When I run kubectl describe pod <POD_Name> I see that all the environment variables (both Pod and container) are set.
Any ideas?
The issue was with creating Helm tests. To get the environment variables from the containers in a Helm test, the environment variables need to be duplicated in the test.yaml file or injected from a shared ConfigMap.
To add a little theory to your answer:
See this documentation about ConfigMaps.
A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
There you can also find an example of a Pod that uses values from a ConfigMap to configure its containers:
env:
  # Define the environment variable
  - name: PLAYER_INITIAL_LIVES # Notice that the case is different here
                               # from the key name in the ConfigMap.
    valueFrom:
      configMapKeyRef:
        name: game-demo           # The ConfigMap this value comes from.
        key: player_initial_lives # The key to fetch.
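Tying this back to the Helm-test case above: instead of duplicating each variable in test.yaml, the test Pod can pull in the whole ConfigMap with envFrom. A minimal sketch (the test name, image, and hook annotation are assumptions; note that with envFrom the variable names match the ConfigMap keys, e.g. player_initial_lives):

apiVersion: v1
kind: Pod
metadata:
  name: env-check-test         # hypothetical test Pod name
  annotations:
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: env-check
      image: python:3.11-slim  # any image with Python would do
      # prints the value injected from the ConfigMap key
      command: ["python", "-c", "import os; print(os.getenv('player_initial_lives'))"]
      envFrom:
        - configMapRef:
            name: game-demo    # same ConfigMap as in the example above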
I'm trying to port to Python an ECS services deployment that at the moment is done with a bunch of bash scripts containing commands like the following:
ecs-cli compose -f foo.yml -p foo --cluster bar --ecs-params "dir/ecs-params.yml" service up
I thought that the easiest/fastest way could be using boto3 (which I already use extensively elsewhere, so it's a safe spot), but I couldn't figure out from the documentation what the equivalent of the above command would be.
Thanks in advance.
UPDATE: this is the content of foo.yml:
version: '3'
services:
  my-service:
    image: ecr-image:version
    env_file:
      - ./some_envs.env
      - ./more_envs.env
    command: python3 src/main.py param1 param2
    logging:
      driver: awslogs
      options:
        awslogs-group: /my-service-log-group
        awslogs-region: my-region
        awslogs-stream-prefix: my-prefix
UPDATE2: this is the content of dir/ecs-params.yml:
version: 1
task_definition:
  task_role_arn: my-role
  services:
    my-service:
      cpu_shares: my-cpu-shares
      mem_reservation: my-mem-reservation
The ecs-cli is a high-level construct that wraps many lower-level API calls into a workflow. It's NOT the same thing, but you can think of the ecs-cli compose ... service up command as the trigger to deploy what's included in your foo.yml file. Based on what's in your foo.yml file you can walk backwards and try to map it to single atomic ECS API calls.
None of this answers your question but, for background, the ecs-cli is no longer what we suggest using for deploying on ECS. Its evolution is Copilot (if you are not starting from a docker compose story) OR the new docker compose integration with ECS (if docker compose is your jam).
If you want to / can post the content of your foo.yml file, I can take a stab at how many lower-level API calls you'd need to make to do the same (or suggest some other alternatives).
[UPDATE]
Based on the content of your two files you could try this one docker compose file:
services:
  my-service:
    image: ecr-image:version
    env_file:
      - ./some_envs.env
      - ./more_envs.env
    x-aws-policies:
      - <my-role>
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 2048M
Some of the ECS params are interpreted off the compose spec (e.g. resource limits). Others do not have a specific compose-to-ECS mapping, so they are managed through x-aws extensions (e.g. the IAM role). Please note that this compose integration only deploys to Fargate, so cpu_shares does not make much sense and you'd need to use limits (to pick the right Fargate task size). As a reminder, this is an alternative CLI way to deploy the service to ECS, but it does not solve how you'd translate all the API calls to boto3.
I just hosted my website on DigitalOcean by following the link below.
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04
It works like a charm.
But I also want to host multiple sites on the single droplet, and I have no idea how. Does the name matter when creating the Gunicorn service file and socket file? I mean, do I need to create a separate service file and socket file for each project, and also a separate .sock file for each project?
You can run as many sites as your resources (RAM, disk space) allow. Here are some tips:
Have a separate virtualenv for each site, inside its project folder.
Manage database names to prevent conflicts.
Don't use port 8000; reserve it for tests.
Create a separate systemd service for each project (remember to use a distinct name for each service).
Accordingly, create a separate socket for each site.
Start with 1 worker per site to keep resource costs down.
Create a separate Nginx server block for each site you have.
With these tips you can easily run multiple sites on a single droplet.
Yes, you just have to create separate *.service and *.socket files for each project.
Just don't forget to change all the references in the tutorial from
gunicorn.service
gunicorn.socket
to
your_new_project.service
your_new_project.socket
When I had a similar question, this answer from the DigitalOcean website helped me.
You just have to change the project name and server_name when doing the "Configure Nginx to Proxy Pass to Gunicorn" part. If done correctly, both websites will work after you restart Nginx.
I have an application that I wish to deploy on path: www.example.com/foo
I have another application that I want to deploy on path: www.example.com/bar
My load balancer currently doesn't support that.
How do I accomplish that? I read about path_beg but I can't seem to grasp it correctly. Is there an example that I can follow?
It's pretty straightforward.
frontend main-frontend
    mode http
    bind :80
    use_backend foo-backend if { path_beg /foo }
    use_backend bar-backend if { path_beg /bar }
Then you'd need to declare 2 backends, named "foo-backend" and "bar-backend" pointing to the servers and ports where those apps are listening (could be different servers, or just different ports on the same back-end servers). The names of the backends don't have to have "foo" and "bar" in them, as long as they match the names in the "use_backend" statements.
With this setup, the back-end servers need to be expecting the /foo or /bar at the beginning of the incoming path, because the entire request-path will be forwarded.
It is possible for haproxy to rewrite the path to scrub those out, but that configuration is rather more advanced.
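For example, the two backend declarations might look like this (a sketch; the server names, addresses, and ports are placeholders, not values from the question):

backend foo-backend
    mode http
    server foo-app1 10.0.0.11:3001 check

backend bar-backend
    mode http
    server bar-app1 10.0.0.12:3002 check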
I have a Jenkins shell script which uses something like this to create an Nginx configuration from a template.
nginx.conf.j2:
server {
    listen 80;
    server_name {{ server_name }};
    ...
The rendering process which passes all the environment variables to the template:
env server_name=$SERVER_NAME \
python - <<'EOF' > "nginx.conf"
import os, jinja2
template = jinja2.Template(open(os.environ["nginx.conf.j2"]).read())
print template.render(**os.environ)
EOF
How to do the same using Ansible? I guess it could be something like:
ansible <host-pattern> -m template -a "src=nginx.conf.j2 dest=nginx.conf"
But how to skip <host-pattern> to do it locally? How to pass environment variables to the template?
If you need to force Ansible to run locally you can either create an inventory file that just has localhost in it like this:
[local]
localhost ansible_host=127.0.0.1 ansible_connection=local
Assuming you saved that into a file called local you would then use this like so:
ansible all -i local -m template -a "src=nginx.conf.j2 dest=nginx.conf"
Alternatively you can also use the slightly hacky workaround of providing an inventory as a list directly on the CLI:
ansible all -i "localhost," -m template -a "src=nginx.conf.j2 dest=nginx.conf" --connection=local
Specifically, notice the trailing comma: this makes Ansible see it as a list rather than a string, and it expects inventories to be lists.
However, it sounds like you're trying to use Ansible as a drop-in replacement for the Python snippet you included in your question. If you try the above (as mentioned in the comments) you will also see that Ansible only supports templates in playbooks and not in ad-hoc commands.
Instead, I'd suggest you step back a little and use Ansible more as it was intended: have Jenkins trigger an Ansible playbook with a specified inventory (one that includes your Nginx box) which then configures Nginx.
A really basic example playbook might look something like this:
- hosts: nginx-servers
  tasks:
    - name: Template nginx.conf
      template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
Where nginx-servers in hosts corresponds to an inventory group block that would be defined like so:
[nginx-servers]
nginx1.example.com
nginx2.example.com
With this in place you might then want to start looking at roles, which will greatly improve your ability to re-use the Ansible code you write.
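As for passing environment variables to the template: one option (a sketch, assuming SERVER_NAME is exported in the Jenkins job's environment) is to map them into playbook vars with the env lookup, so the template can keep using {{ server_name }}:

- hosts: nginx-servers
  vars:
    # read SERVER_NAME from the environment of the machine running ansible-playbook
    server_name: "{{ lookup('env', 'SERVER_NAME') }}"
  tasks:
    - name: Template nginx.conf
      template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf

The env lookup runs on the control machine, which is what you want when Jenkins invokes ansible-playbook.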