Task run by all EC2 instances - Python

I have a Python script that sends mail to users. I want to run this script on all instances that are in the running state, at a particular interval of time (decided by each instance's task scheduler).
I am not allowed to use AWS Lambda.
Is there any way I can do this? Can I use an AWS image (AMI)?

It appears that you wish to run a script on multiple Amazon EC2 instances at a particular time.
There are basically two ways to do this:
Using cron on each instance
Each Amazon EC2 instance can trigger its own script. On Linux, you would use cron. On Windows, you would define a Scheduled Task.
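For example, a crontab entry that runs the mail script every 30 minutes might look like this (the interval and script path are placeholders):
*/30 * * * * /usr/bin/python3 /home/ec2-user/send_mail.py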
Running commands using AWS Systems Manager Run Command
If you wish to externally trigger a command on multiple Amazon EC2 instances, you can use the AWS Systems Manager Run Command. You will first define the commands to be run, and then nominate the instances on which to run the command. The Run Command will manage the process of running the script, gather the results, retry failures and report the results.
The benefit of using the Run Command is that you can centrally manage the process. It is very easy to edit the script and run it when desired. In contrast, if using cron you would need to update the script on every instance.
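A minimal sketch of that pattern with boto3, assuming the instances run the SSM agent and have an instance profile that permits Systems Manager (the script path is a placeholder; for larger fleets, Run Command can also target instances by tag):
import boto3

ec2 = boto3.client('ec2')
ssm = boto3.client('ssm')

# Find all instances currently in the 'running' state
reservations = ec2.describe_instances(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]
)['Reservations']
instance_ids = [i['InstanceId'] for r in reservations for i in r['Instances']]

# Ask the SSM agent on each instance to run the mail script
ssm.send_command(
    InstanceIds=instance_ids,
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': ['python3 /opt/scripts/send_mail.py']},
)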

Related

Is there a way to create or track metrics using AWS CloudWatch for processes that are running on a server?

For example, my instance has these processes running:
Python,
Kafka (ZooKeeper)
Is there any way to find out whether Python and Kafka are working or not on an AWS EC2 instance, using CloudWatch?
You can create a script that uses the ps command to check the process, and run that script through crontab. Using crontab, you can continuously keep an eye on the process.
You can use the commands below to work with the crontab:
crontab -l # This is to list the crontab
crontab -e # This is to edit the crontab
Then you can use put-metric-data to send the data to CloudWatch.
Below is a sample script:
#!/bin/bash
# Check the process; replace the placeholder with your own ps/grep pipeline
Process_Check=$(<your-custom-ps-command>)
# Publish the result as a custom CloudWatch metric
aws cloudwatch put-metric-data --metric-name process-check --dimensions Instance=<Instance-ID> --namespace "Custom" --value "$Process_Check"
The above script collects the process details and then sends them to AWS CloudWatch using put-metric-data. Keep sending data points through crontab so the metric stays current.
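If you prefer to stay in Python, here is a minimal sketch of the same idea using boto3 and pgrep (the process pattern, metric name, and instance ID are placeholders):
import subprocess
import boto3

# Count processes whose command line matches a pattern (placeholder pattern)
out = subprocess.run(['pgrep', '-c', '-f', 'kafka'],
                     capture_output=True, text=True).stdout.strip()
count = int(out) if out else 0

# Publish the count as a custom CloudWatch metric
boto3.client('cloudwatch').put_metric_data(
    Namespace='Custom',
    MetricData=[{
        'MetricName': 'process-check',
        'Dimensions': [{'Name': 'Instance', 'Value': '<Instance-ID>'}],
        'Value': count,
    }],
)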
Refer to the links below to create and view custom CloudWatch metrics.
Create Custom CloudWatch Metrics
Publishing Custom Metrics

Running Python Script in an existing EC2 instance on AWS

I have an API (in Python) which has to alter files inside an EC2 instance that is already running. I'm searching the boto3 documentation, but could only find functions to start new EC2 instances, not to connect to an already existing one.
I am currently thinking of replicating the API's functions to alter the files in a script inside the EC2 instance, and having the API simply start that script on the EC2 instance by accessing it using some sort of SSH library.
Would that be the correct approach, or is there some boto3 function (or something in one of the other Amazon/AWS libraries) that allows me to start a script inside existing instances?
An Amazon EC2 instance is just like any computer on the Internet. It is running an operating system (eg Linux or Windows), and it has standard security built in. The fact that it is an Amazon EC2 instance has no impact.
So, the question really becomes: How do I run a command on a remote computer?
Typical ways of doing this include:
Connecting to the computer (eg via SSH) and running a command
Running a service on the computer that listens on a particular port (eg responding to an API request)
Using remote shell commands to run an operation on another computer
Fortunately, AWS offers an additional option: Use the AWS Systems Manager Run Command:
AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. A managed instance is any Amazon EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager. Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale. You can use Run Command from the AWS console, the AWS Command Line Interface, AWS Tools for Windows PowerShell, or the AWS SDKs. Run Command is offered at no additional cost.
Administrators use Run Command to perform the following types of tasks on their managed instances: install or bootstrap applications, build a deployment pipeline, capture log files when an instance is terminated from an Auto Scaling group, and join instances to a Windows domain, to name a few.
Basically, it is an agent installed on the instance (or, for that matter, on any computer on the Internet) and commands can be sent to the computer that are executed by the agent. In fact, the same command can be sent to hundreds of computers if desired.
The AWS Systems Manager Run Command can be triggered by an API call, such as a program using boto3.
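A minimal sketch of triggering Run Command from boto3 and reading the output (the instance ID and script path are placeholders; the instance needs the SSM agent and an appropriate IAM instance profile):
import time
import boto3

ssm = boto3.client('ssm')

# Ask the SSM agent on the instance to run the script
resp = ssm.send_command(
    InstanceIds=['<Instance-ID>'],
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': ['python3 /opt/scripts/alter_files.py']},
)
command_id = resp['Command']['CommandId']

# Give the invocation a moment to register, then fetch the result
time.sleep(2)
result = ssm.get_command_invocation(CommandId=command_id,
                                    InstanceId='<Instance-ID>')
print(result['Status'], result['StandardOutputContent'])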
Unless you have a specific service running on that machine which allows you to modify the mentioned files, I would attempt to log on to the EC2 instance over the network, just as with any other machine.
You can access an EC2 machine via SSH with use of the paramiko or pexpect libraries.
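A minimal sketch with paramiko (host name, user, key path, and script path are placeholders):
import paramiko

# Connect to the instance over SSH using a key pair
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('ec2-xx-xx-xx-xx.compute.amazonaws.com',
               username='ec2-user', key_filename='/path/to/key.pem')

# Run the script that lives on the instance and read its output
stdin, stdout, stderr = client.exec_command('python3 /opt/scripts/alter_files.py')
print(stdout.read().decode())
client.close()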
If you want to execute a script inside an existing EC2 instance, you could use the reference from the existing answer here: Boto Execute shell command on ec2 instance
IMO, to be able to start a script inside the EC2 instance, the script should already be present on the EC2 instance.

One-time running of Python code when an AWS EC2 instance starts

I have two EC2 instances named "aws-example" and "aws-sandbox". At the same time, they are Docker machines: "aws-example" is a manager and "aws-sandbox" is a worker in a Docker swarm.
I wrote two Python scripts. When I run the script on "aws-example", it stops the "aws-sandbox" instance and starts it again.
When I run the script on "aws-sandbox", the worker has to leave the swarm and join again.
I do all of this by hand. However, I have to automate it. How do I run the Python script once on "aws-sandbox" when the "aws-sandbox" instance starts? I've investigated services like AWS Lambda, CloudWatch, etc., and I'm very confused. Is there anyone who knows a clear pathway?
Make use of a @reboot entry in cron; it should work. For example (the script path is a placeholder):
@reboot /usr/bin/python3 /path/to/script.py
For more info, see the crontab(5) documentation.

Automatically run Python script twice a day

I have a Python script for SSH which helps run various Linux commands on a remote server using the paramiko module. All the outputs are saved in a text file, and the script runs properly. Now I want to run this script automatically twice a day, at 11am and 5pm, every day.
How can I run this script automatically every day at the given times without invoking it manually each time? Is there any software or module?
Thanks for your help.
If you're running Windows, your best bet would be to create a Scheduled Task to execute Python itself, passing the path to your script as an argument.
If you're using OSX or Linux, cron is your friend. References abound for how to create scheduled events in crontab; the crontab man page is a good start for setting up cron tasks.
One thing to mention is permissions. If you're running this from a Linux machine, you'll want to ensure you set up the cron job to run under the right account (best practice is not to use your own).
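For the 11am and 5pm schedule, a crontab entry might look like this (the interpreter and script path are placeholders):
0 11,17 * * * /usr/bin/python3 /path/to/script.py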
Assuming you are running on a *nix system, cron is definitely a good option. If you are running a Linux system that uses systemd, you could try creating a timer unit. It is probably more work than cron, but it has some advantages.
I won't go through all the details here, but basically:
Create a service unit that runs your program.
Create a timer unit that activates the service unit at the prescribed times.
Start and enable the timer unit.
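A minimal sketch, assuming the script lives at /path/to/script.py (unit names and paths are placeholders):
# /etc/systemd/system/myscript.service
[Unit]
Description=Run my Python script

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /path/to/script.py

# /etc/systemd/system/myscript.timer
[Unit]
Description=Run myscript.service at 11:00 and 17:00

[Timer]
OnCalendar=*-*-* 11,17:00:00
Persistent=true

[Install]
WantedBy=timers.target
Then start and enable the timer with: systemctl enable --now myscript.timer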

Execute remote python script via SSH

I want to execute a Python script on several (15+) remote machines using SSH. After invoking the script/command I need to disconnect the SSH session and keep the processes running in the background for as long as they are required.
I have used Paramiko and PySSH in the past, so I have no problems using them again. The only thing I need to know is how to disconnect an SSH session in Python (since normally the local script would wait for each remote machine to complete processing before moving on).
This might work, or something similar:
ssh user@remote.host 'nohup python scriptname.py > /dev/null 2>&1 &'
Basically, have a look at the nohup command. Quoting the remote command and redirecting its output lets ssh return immediately while the script keeps running on the remote machine.
On Linux machines, you can run the script with 'at'.
echo "python scriptname.py" ¦ at now
If you are going to perform repetitive tasks on many hosts, like for example deploying software and running setup scripts, you should consider using something like Fabric
Fabric is a Python (2.5 or higher) library and command-line tool for
streamlining the use of SSH for application deployment or systems
administration tasks.
It provides a basic suite of operations for executing local or remote
shell commands (normally or via sudo) and uploading/downloading files,
as well as auxiliary functionality such as prompting the running user
for input, or aborting execution.
Typical use involves creating a Python module containing one or more
functions, then executing them via the fab command-line tool.
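The quoted description refers to the classic Fabric 1.x workflow; with the newer Fabric 2.x API, a minimal sketch might look like this (host names, user, and script path are placeholders):
from fabric import SerialGroup

# Run the script on several hosts, one after another
hosts = SerialGroup('user@host1', 'user@host2', 'user@host3')
hosts.run('nohup python scriptname.py > /dev/null 2>&1 &')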
You can even use tmux in this scenario.
As per the tmux documentation:
tmux is a terminal multiplexer. It lets you switch easily between several programs in one terminal, detach them (they keep running in the background) and reattach them to a different terminal. And do a lot more
From a tmux session, you can run a script, quit the terminal, log in again and check back, as tmux keeps the session alive until the server restarts.
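For example, you could start the script in a detached tmux session like this (the session name and script path are placeholders):
tmux new-session -d -s myjob 'python /path/to/scriptname.py'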
How to configure tmux on a cloud server
