I have a Jenkins shell script which does something like this to create an Nginx configuration from a template.
nginx.conf.j2:
server {
    listen 80;
    server_name {{ server_name }};
    ...
The rendering process which passes all the environment variables to the template:
env server_name=$SERVER_NAME \
python - <<'EOF' > "nginx.conf"
import os, jinja2
template = jinja2.Template(open("nginx.conf.j2").read())
print(template.render(**os.environ))
EOF
How to do the same using Ansible? I guess it could be something like:
ansible <host-pattern> -m template -a "src=nginx.conf.j2 dest=nginx.conf"
But how do I skip <host-pattern> to run it locally? And how do I pass environment variables to the template?
If you need to force Ansible to run locally, you can either create an inventory file that just has localhost in it, like this:
[local]
localhost ansible_host=127.0.0.1 ansible_connection=local
Assuming you saved that into a file called local, you would then use it like so:
ansible all -i local -m template -a "src=nginx.conf.j2 dest=nginx.conf"
Alternatively, you can use the slightly hacky workaround of providing an inventory as a list directly on the CLI:
ansible all -i "localhost," -m template -a "src=nginx.conf.j2 dest=nginx.conf" --connection=local
Note the trailing comma: it makes Ansible treat the value as a list rather than a string, and Ansible expects inventories to be lists.
However, it sounds like you're trying to use Ansible as a drop-in replacement for the Python snippet you included in your question. If you try the above (as mentioned in the comments), you will also see that Ansible only supports templates in playbooks and not in ad-hoc commands.
Instead, I'd suggest you step back a little and use Ansible more as it was intended: have Jenkins trigger an Ansible playbook, with a specified inventory (that includes your Nginx box), that then configures Nginx.
A really basic example playbook might look something like this:
- hosts: nginx-servers
  tasks:
    - name: Template nginx.conf
      template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
Here nginx-servers in hosts corresponds to an inventory group that would be defined like so:
[nginx-servers]
nginx1.example.com
nginx2.example.com
From there you might want to start looking at roles, which will greatly improve your ability to re-use the Ansible code you write.
I'm trying to port to Python an ECS services deployment that is currently done with a bunch of bash scripts containing commands like the following:
ecs-cli compose -f foo.yml -p foo --cluster bar --ecs-params "dir/ecs-params.yml" service up
I thought the easiest/fastest way could be to use boto3 (which I already use extensively elsewhere, so it's familiar ground), but I couldn't work out from the documentation what the equivalent of the above command would be.
Thanks in advance.
UPDATE: this is the content of foo.yml:
version: '3'
services:
  my-service:
    image: ecr-image:version
    env_file:
      - ./some_envs.env
      - ./more_envs.env
    command: python3 src/main.py param1 param2
    logging:
      driver: awslogs
      options:
        awslogs-group: /my-service-log-group
        awslogs-region: my-region
        awslogs-stream-prefix: my-prefix
UPDATE2: this is the content of dir/ecs-params.yml:
version: 1
task_definition:
  task_role_arn: my-role
  services:
    my-service:
      cpu_shares: my-cpu-shares
      mem_reservation: my-mem-reservation
The ecs-cli is a high-level construct that creates a workflow wrapping many lower-level API calls. It's NOT the same thing, but you can think of the ecs-cli compose up command as the trigger to deploy what's included in your foo.yml file. Based on what's in your foo.yml file, you can walk backwards and try to map it to single atomic ECS API calls.
None of this answers your question but, for background, the ecs-cli is no longer what we suggest using to deploy on ECS. Its evolution is Copilot (if you are not starting from a docker compose setup) or the new docker compose integration with ECS (if docker compose is your jam).
If you want to / can post the content of your foo.yml file, I can take a stab at how many lower-level API calls you'd need to make to do the same (or suggest some other alternatives).
[UPDATE]
Based on the content of your two files, you could try this docker compose file:
services:
  my-service:
    image: ecr-image:version
    env_file:
      - ./some_envs.env
      - ./more_envs.env
    x-aws-policies:
      - <my-role>
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 2048M
Some of the ECS params are derived from the compose spec itself (e.g. resource limits). Others do not have a specific compose-to-ECS mapping, so they are managed through x-aws extensions (e.g. the IAM role). Please note that the compose integration only deploys to Fargate, so cpu_shares does not make much sense there and you'd need to use limits (to pick the right Fargate task size). As a reminder, this is an alternative CLI way to deploy the service to ECS, but it does not solve how you'd translate ALL the API calls to boto3.
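For reference, here is a rough sketch of what the lower-level boto3 calls behind that workflow could look like for the foo.yml and ecs-params.yml you posted. The env-file parser, the cpu/memory numbers and the desired count are assumptions, and a Fargate service would additionally need launchType, requiresCompatibilities and networkConfiguration:

import boto3

ecs = boto3.client("ecs", region_name="my-region")

def read_env_file(path):
    # Parse a simple KEY=VALUE env file into the ECS "environment" format.
    env = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                env.append({"name": key, "value": value})
    return env

environment = read_env_file("./some_envs.env") + read_env_file("./more_envs.env")

# 1. Register a task definition, roughly what ecs-cli derives from
#    foo.yml plus dir/ecs-params.yml.
task_def = ecs.register_task_definition(
    family="foo",
    taskRoleArn="my-role",
    containerDefinitions=[
        {
            "name": "my-service",
            "image": "ecr-image:version",
            "command": ["python3", "src/main.py", "param1", "param2"],
            "environment": environment,
            "cpu": 512,                 # stand-in for my-cpu-shares
            "memoryReservation": 1024,  # stand-in for my-mem-reservation (MiB)
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/my-service-log-group",
                    "awslogs-region": "my-region",
                    "awslogs-stream-prefix": "my-prefix",
                },
            },
        }
    ],
)

# 2. Create the service (use update_service instead if it already exists).
ecs.create_service(
    cluster="bar",
    serviceName="foo",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=1,
)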
I am writing a script in Python which is run by a Zabbix action.
I want to set values in the Default subject and Default message fields of the action and then use those values in my script. So I run the script and forward all the needed macros as script parameters, like:
python /path/script.py -A "{HOST.NAME}" -B "{ALERT.MESSAGE}" -C "{ALERT.SUBJECT}"
I can only get the HOST.NAME value; for the others I get just the macro name but no value.
Do you have any idea where the problem is? Are those macros unavailable when used in custom scripts?
After doing some research & testing myself, it seems as if these Alert macros are indeed not available in a custom script operation. [1]
You have two options for a workaround:
If you need to be able to execute this script on the host itself, the quick option is to simply replace the macros with the actual text of your subject & alert names. Some testing is definitely necessary to make sure it works in your environment, and it's not the most elegant solution, but something like this may well work with little extra effort:
python /path/script.py -A "{HOST.NAME}" -B "Problem: {EVENT.NAME}" -C "Problem started at {EVENT.TIME} on {EVENT.DATE}
Problem name: {EVENT.NAME}
Host: {HOST.NAME}
Severity: {EVENT.SEVERITY}
Original problem ID: {EVENT.ID}
{TRIGGER.URL}"
Verify, of course, that e.g. the newlines do not break your custom script in your environment.
It doesn't look pretty but it may well be the easiest option.
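For illustration, here is a minimal sketch of how /path/script.py could consume those expanded values, assuming the -A/-B/-C flags from the question (the flag handling and the prints below are placeholders, not your actual script):

import argparse

def main():
    parser = argparse.ArgumentParser(description="Handle a Zabbix action alert")
    parser.add_argument("-A", dest="host", help="expanded {HOST.NAME}")
    parser.add_argument("-B", dest="subject", help="expanded subject text")
    parser.add_argument("-C", dest="message", help="expanded multi-line message text")
    args = parser.parse_args()

    # As long as the -C value stays quoted in the action operation,
    # the whole multi-line message arrives as a single argument.
    print("Host:    %s" % args.host)
    print("Subject: %s" % args.subject)
    print("Message:")
    print(args.message)

if __name__ == "__main__":
    main()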
If you can run the command on any host, the nicer option is to create a new Media type, which will let you use these variables and may even make adding this script to other hosts much easier. These macros can definitely be used as part of a custom Media type (see Zabbix Documentation - Media Types) which can include custom scripts.
You'll need to make a bash or similar script file for the Zabbix server to run (which means doing anything on a host outside the Zabbix server itself is going to be more difficult, but not impossible).
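As a hedged sketch, the script the media type runs could be as simple as the following, assuming the media type is configured to pass the send-to address, subject and message as its three script parameters (the log path is just an illustrative placeholder):

#!/usr/bin/env python3
# Lives in the directory pointed to by AlertScriptsPath in zabbix_server.conf.
import sys

def main():
    # Parameter order depends on how the media type's script parameters are
    # configured, commonly {ALERT.SENDTO}, {ALERT.SUBJECT}, {ALERT.MESSAGE}.
    sendto, subject, message = sys.argv[1:4]
    # Replace this with whatever /path/script.py actually needs to do.
    with open("/tmp/zabbix_alert.log", "a") as f:
        f.write("To: %s\nSubject: %s\n%s\n---\n" % (sendto, subject, message))

if __name__ == "__main__":
    main()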
Once the media type is set up, as a bit of a workaround (not ideal, of course) you'll need a user to 'send' to; assigning that media type to the user and then 'sending' the alert to that user with that media type should execute your script with the macros expanded, just like executing the custom command.
[1]: While I did do my own testing on this, I couldn't find any documentation which specifically states that these macros aren't supported in this case, and they definitely look like they should be. I'm more than happy to edit/revoke this answer if anyone can find documentation that confirms or denies this.
I should also explain how it works now. I did something like:
python /path/script.py -A "{HOST.NAME}" -B "Problem: {EVENT.NAME}" -C "Problem started at {EVENT.TIME} on {EVENT.DATE}
Problem name: {EVENT.NAME}
Host: {HOST.NAME}
Severity: {EVENT.SEVERITY}
Original problem ID: {EVENT.ID}
{TRIGGER.URL}"
works for me :)
I want to deploy my scrapy project to an IP that is not listed in the scrapy.cfg file, because the IP can change and I want to automate the deployment process. I tried giving the IP of the server directly in the deploy command, but it did not work. Any suggestions on how to do this?
First, you should consider assigning a domain to the server, so you can always reach it regardless of its dynamic IP. DynDNS comes in handy at times.
Second, you probably won't do the first, because you don't have access to the server, or for some other reason. In that case, I suggest mimicking the above behavior by using your system's hosts file. As the Wikipedia article describes:
The hosts file is a computer file used by an operating system to map hostnames to IP addresses.
For example, let's say you set your url to remotemachine in your scrapy.cfg. You can write a script that edits the hosts file with the latest IP address and execute it before deploying your spider. This approach has the benefit of having a system-wide effect, so if you are deploying multiple spiders, or using the same server for some other purpose, you don't have to update multiple configuration files.
This script could look something like this:
import fileinput
import sys

def update_hosts(hostname, ip):
    # Pick the hosts file location for the current platform.
    if 'linux' in sys.platform:
        hosts_path = '/etc/hosts'
    else:
        hosts_path = r'c:\windows\system32\drivers\etc\hosts'
    # Rewrite the file in place, replacing the line that mentions the hostname.
    # Note the hosts file format is "<ip> <hostname>", IP address first.
    for line in fileinput.input(hosts_path, inplace=True):
        if hostname in line:
            print("{0}\t{1}".format(ip, hostname))
        else:
            print(line.rstrip('\n'))

if __name__ == '__main__':
    hostname = sys.argv[1]
    ip = sys.argv[2]
    update_hosts(hostname, ip)
    print("Done!")
Of course, you should add additional argument checks, etc.; this is just a quick example.
You can then run it prior to deploying, like this:
python updatehosts.py remotemachine <remote_ip_here>
If you want to take it a step further and add this functionality as a simple argument to scrapyd-deploy, you can go ahead and edit your scrapyd-deploy file (it's just a Python script) to add the additional parameter and update the hosts file from within. But I'm not sure that's the best thing to do; leaving this implementation separate and more explicit is probably a better choice.
This is not something you can solve on the scrapyd side.
According to the source code of scrapyd-deploy, it requires the url to be defined in the [deploy] section of the scrapy.cfg.
One of the possible workarounds could be to have a placeholder in scrapy.cfg which you replace with the real IP address of the target server before starting scrapyd-deploy.
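As a rough sketch of that idea, you could keep a template copy of the config with a placeholder in its url and generate scrapy.cfg from it right before running scrapyd-deploy. The file names and the TARGET_IP token below are my own convention, not something scrapyd defines:

import sys

def render_scrapy_cfg(ip, template_path="scrapy.cfg.template", cfg_path="scrapy.cfg"):
    # scrapy.cfg.template holds e.g. "url = http://TARGET_IP:6800/" in its
    # [deploy] section; this writes scrapy.cfg with the real address filled in.
    with open(template_path) as f:
        contents = f.read()
    with open(cfg_path, "w") as f:
        f.write(contents.replace("TARGET_IP", ip))

if __name__ == "__main__":
    render_scrapy_cfg(sys.argv[1])

You would then run something like python render_scrapy_cfg.py <remote_ip_here> just before calling scrapyd-deploy, keeping the template under version control so the placeholder survives repeated runs.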
I'm looking for the most modular way to use ansible to provision a server that will host multiple node.js applications on it. The current setup I have is not scalable.
The roles I have are common, nginx, nodejs, mongodb, apps.
The apps hash/dictionary
I maintain a dict called apps in roles/apps/defaults/main.yml and this is both the solution and problem:
- apps:
    shark:
      repo: git@example.com:shark.git
      subdomain: shark
      port: 3001
    tiger:
      repo: git@example.com:tiger.git
      subdomain: tiger
      port: 3002
Example of how the apps dict is used in roles/apps/tasks/main.yml:
- name: clone repos
  git: repo={{ item.value.repo }}
  with_dict: apps

- name: create vhost
  template:
    src: vhost.j2
    dest: /etc/nginx/sites-available/{{ item.value.subdomain }}
  with_dict: apps
  sudo: yes
playbook.yml
Both staging.yml and production.yml playbooks are set to run all of the aforementioned roles. That means that both servers will install all of the apps configured in the apps dict.
I need to figure out a more modular pattern for roles/apps so that I can install different apps in the different environments. Ideally, I'd like to be able to specify which apps go to which machine directly in staging.yml and production.yml.
Any idea how this can be accomplished? Feel free to suggest entirely different methods of configuring multiple apps.
Another solution I figured out was to create dictionaries for the various apps in roles/apps/defaults/main.yml:
shark:
  repo: git@example.com:shark.git
  subdomain: shark
  port: 3001

tiger:
  repo: git@example.com:tiger.git
  subdomain: tiger
  port: 3002
Note that they're not enclosed inside an apps dict.
Then, specify the apps dict contents in staging.yml or production.yml instead:
- hosts: example
  vars:
    apps:
      - '{{ shark }}'
      - '{{ tiger }}'
This allows you to direct which applications are included in which playbook.
If you're willing to maintain another dict, you could parameterize the with_dict to:
- name: clone repos
  git: repo={{ item.value.repo }}
  with_dict: "{{ env_app_dict }}"
and specify your env_app_dict in either the inventory file or via the command line.
edit: Alternatively, try the lookup plugin that I wrote and specify folders for apps_common, apps_production, and apps_staging.
With this plugin, you'd put a collection of common items in apps_common and:
- name: clone common repos
  git: repo="{{ item.repo }}"
  with_fileglob_to_dict:
    - "{{ 'apps_common/*' }}"
and then simply specify the apps_env file parameter (as in the original answer) that you want by either targeting with a host pattern (e.g. - hosts: staging in the play) or by specifying the parameter on the command line or inventory file.
Is it possible to execute a Python script on a server without using something like Django?
I mean I put script.py on host.com and want to call it like this:
http://www.host.com/script.py
The script then does something like calculating some variables and saving them to a MySQL database.
edit: I assume I have to use something like cgi :-\
In short, yes. http://wiki.python.org/moin/CgiScripts. You'll have to either put your scripts in a cgi-bin folder or adjust the configuration for your web server.
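For example, a bare-bones CGI version of script.py could look like the sketch below; the calculation and the MySQL step are only indicated, since your actual logic, schema and credentials are unknown:

#!/usr/bin/env python3
# Minimal CGI sketch; place it in cgi-bin/ (or enable CGI for .py files).
import os

query = os.environ.get("QUERY_STRING", "")  # raw query string, e.g. "a=1&b=2"
result = 2 + 2                              # stand-in for your calculations

# import pymysql  # here you could save `result` to MySQL with pymysql/MySQLdb

print("Content-Type: text/plain")  # CGI output: headers first, then a blank line
print()
print("query  = %s" % query)
print("result = %s" % result)

Remember that the file has to be executable by the web server user and that, for Apache for example, the directory needs CGI enabled (a cgi-bin ScriptAlias or the ExecCGI option).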