I am writing a Python program that needs to stay up for 30 days straight. It connects as an MQTT client to a broker and listens for messages on a number of topics.
I am using an EC2 instance running Amazon Linux AMI, and I wonder how I could set this up to run constantly for that duration.
I looked at cron jobs and rebooting every X days, but preferably the system should have no downtime at all.
However, I am unsure how to set this up and make sure the script restarts if the server or program were ever to fail.
The client will connect to an Amazon VPC through OpenVPN, then run the script and keep it running. Would this be possible to set up?
The version I am running is:
Amazon Linux AMI 2018.03.0.20180811 x86_64 HVM GP2
NAME="Amazon Linux AMI"
VERSION="2018.03"
ID_LIKE="rhel fedora"
VERSION_ID="2018.03"
You can accomplish this by using Auto Scaling to automatically maintain the required number of EC2 instances. If an instance becomes unresponsive or fails health checks, Auto Scaling will launch a new one. See: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-maintain-instance-levels.html
You'll want to make an AMI of your system to launch new instances from, or perhaps put your configuration into a user data script.
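For a rough idea of the moving parts, here is a minimal boto3 sketch of a self-healing group of one instance; the group, template, and subnet names are all placeholders, and it assumes your script is already baked into the AMI behind the launch template:

```python
# Hypothetical sketch: a one-instance Auto Scaling group that replaces
# the instance if it fails EC2 health checks. All names/IDs are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="mqtt-listener-asg",
    LaunchTemplate={
        "LaunchTemplateName": "mqtt-listener-template",  # built from your AMI
        "Version": "$Latest",
    },
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,               # always keep exactly one instance alive
    VPCZoneIdentifier="subnet-0123456789abcdef0",  # placeholder subnet ID
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300,      # seconds before health checks begin
)
```

Note that Auto Scaling replaces a failed instance, but it won't restart a crashed script on a healthy one; pairing this with a process supervisor on the instance (for example, a systemd unit with Restart=always) covers that case.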
If your use case is simply to receive messages over MQTT, I would recommend that you take a look at the AWS IoT Core service as a solution rather than running an EC2 instance. This will solve your downtime issues because it's a managed service with a high degree of resiliency built in.
You can choose to route the messages to a variety of targets, including storing them in S3 for batch processing or using AWS Lambda to process them as they arrive, without having to run EC2 instances. With Lambda, you get 1 million invocations per month for free, so if your volume is below that, your compute cost will be zero too.
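If you route messages to Lambda via an IoT rule, the handler simply receives the MQTT payload as its event. A minimal sketch, assuming a hypothetical rule like SELECT *, topic() AS mqtt_topic FROM 'sensors/#' (the topic filter and field name are placeholders):

```python
# Minimal sketch of a Lambda handler behind an AWS IoT rule. The rule's
# SQL determines the shape of 'event'; 'mqtt_topic' only exists because
# the hypothetical rule above injects it with topic().
import json

def lambda_handler(event, context):
    topic = event.get("mqtt_topic", "<unknown>")
    print(f"Message on {topic}: {json.dumps(event)}")
    # ... process or persist the message here ...
    return {"status": "ok"}
```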
Related
I'm building an application that can run user-submitted python code. I'm considering the following approaches:
Spinning up a new AWS Lambda function for each user's request and running the submitted code in it, deleting the Lambda function afterwards. I'm aware of AWS Lambda's time limit, so this would be used only to run small functions.
Spinning up a new EC2 instance to run a user's code, one instance per user. Keep the instance running while the user is still interacting with my application, and kill the instance after the user is done.
Same as the second approach, but also spinning up a Docker container inside the EC2 instance to add an additional layer of isolation (is this necessary?)
Are there any security vulnerabilities I need to be aware of? Will the user be able to do anything if they gain access to environment variables in their own lambda function/ec2 machine? Are there any better solutions?
Any code you run on AWS Lambda has the permissions of the associated function's execution role. Be very careful what you supply.
Even logging and metrics access can be manipulated to incur additional costs.
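For example, Lambda exposes the execution role's temporary credentials to the function through environment variables, so any submitted code can trivially read them. A short illustration:

```python
# Why untrusted code in your own function is dangerous: the execution
# role's temporary credentials sit in the environment, readable by any
# code running inside the function.
import os

creds = {
    k: os.environ.get(k, "")
    for k in ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_SESSION_TOKEN")
}
# Submitted code could exfiltrate these and call any API the role permits,
# so scope the execution role down to the bare minimum.
print({k: (v[:4] + "...") if v else "" for k, v in creds.items()})
```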
I am looking for a swift oVirt shutdown procedure/project for an environment with a large number of active VMs. I found an open pull request on GitHub for parallel shutdown, i.e. https://github.com/oVirt/ovirt-ansible-shutdown-env/pulls. I wonder if there are other or better solutions.
I am trying to develop or find a working solution that integrates with an NMS like LibreNMS to receive alerts and, on a power-failure alert, performs a swift oVirt shutdown including the hosted engine, VMs, etc.
Does a solution like this already exist? This should be quite common for handling power outages.
Currently I use ovirtclt to autostart selected VMs, so you can use it for shutdown as well.
Usage:
vm-list
vm-start|vm-stop|vm-shutdown|vm-reboot|vm-suspend|vm-get <vmName>
vm-start-all|vm-stop-all|vm-shutdown-all
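If you would rather script it directly, here is a rough sketch using the oVirt Python SDK (ovirtsdk4) to shut VMs down in parallel; the engine URL and credentials are placeholders, and the hosted engine VM itself should be handled last:

```python
# Rough sketch: parallel VM shutdown via the oVirt Python SDK.
# Engine URL and credentials are placeholders.
from concurrent.futures import ThreadPoolExecutor

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",  # placeholder
    username="admin@internal",
    password="secret",
    insecure=True,  # use ca_file=... instead in production
)
vms_service = connection.system_service().vms_service()

def shutdown(vm):
    # Issue an ACPI shutdown for anything that isn't already down
    if vm.status != types.VmStatus.DOWN:
        vms_service.vm_service(vm.id).shutdown()

with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(shutdown, vms_service.list()))

connection.close()
```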
I have a Django webapp using Celery and Supervisord, connected to a t2.micro RabbitMQ instance. I want to upgrade to a t2.large but am wondering if taking a snapshot will affect anything. I did not originally build this setup, so I am trying to learn as I go. Will proceeding with the upgrade only require me to switch the RabbitMQ IP address? What precautions should I take?
Taking a snapshot of any form of datastore usually imposes a certain tax on the underlying hardware in terms of CPU and IOPS. Given you are currently running on a t2 instance, and assuming you have burst credits remaining, taking a snapshot is probably acceptable, as the instance size suggests your traffic is low. Once you have provisioned the new instance, setting its connection string (IP address, or DNS name if you set one up through a proxy) in your Django settings should be sufficient to start routing traffic to the new instance.
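In a typical Celery setup that is a single setting; a sketch, assuming your project reads Celery config from Django settings with the CELERY_ namespace (host and credentials below are placeholders):

```python
# settings.py -- point Celery at the new RabbitMQ instance.
# Host, port, vhost, and credentials are placeholders.
CELERY_BROKER_URL = "amqp://user:password@new-rabbitmq-host:5672//"
```

After changing it, restart the Supervisord-managed Celery workers so they reconnect to the new broker.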
Just FYI, AWS has a hosted RabbitMQ option available which takes care of much of the heavy lifting for you :)
I wrote a Python script which scrapes a website and sends emails if a certain condition is met. It repeats itself every day in a loop.
I converted the Python file to an EXE and it runs as an application on my computer. But I don't think this is the best solution to my needs since my computer isn't always on and connected to the internet.
Is there a specific website I can host my Python code on which will allow it to always run?
More generally, I am trying to get the bigger picture of how this works. What do you actually have to do to have a Python script running on the cloud? Do you just upload it? What steps do you have to undertake?
Thanks in advance!
Well, I think one of the best options is pythonanywhere.com. There you can upload your Python script (script.py), run it, and you're done.
I did this with my Telegram bot.
You can deploy your application using AWS Elastic Beanstalk. It will provide you with the whole Python environment, along with server configuration that can be adjusted to your needs. It's a PaaS offering from the AWS cloud.
The best and cheapest solution I have found so far is to use Amazon EventBridge with AWS Lambda.
AWS Lambda allows you to upload and execute any script you want in most popular programming languages, without needing to pay for a server every month.
And you can use EventBridge to trigger the execution of a Lambda function.
You only get charged for what you use in AWS Lambda, and it is extremely cheap. Below is the pricing for Lambda in the AWS N. Virginia region. For most scripts, the minimum memory is more than enough. So running a script every hour for a month, where each run takes 5 seconds to finish, costs about $0.00756 a month (less than a cent!).
Memory (MB)    Price per 1ms
128            $0.0000000021
512            $0.0000000083
1024           $0.0000000167
1536           $0.0000000250
2048           $0.0000000333
3072           $0.0000000500
4096           $0.0000000667
5120           $0.0000000833
6144           $0.0000001000
7168           $0.0000001167
8192           $0.0000001333
9216           $0.0000001500
10240          $0.0000001667
Then you can use EventBridge to schedule an AWS Lambda function to run every minute, hour, etc.
Here are some articles to help you run any script every minute, hour, etc.
How to Create Lambda Functions in Python
How to Schedule Running an AWS Lambda Function
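As a sketch of the scheduling side with boto3 (the function name, account ID, and region are placeholders, and the Lambda function itself must already exist):

```python
# Hypothetical sketch: an hourly EventBridge rule that invokes an
# existing Lambda function. All ARNs and names are placeholders.
import boto3

events = boto3.client("events")

rule_arn = events.put_rule(
    Name="run-my-script-hourly",
    ScheduleExpression="rate(1 hour)",
    State="ENABLED",
)["RuleArn"]

events.put_targets(
    Rule="run-my-script-hourly",
    Targets=[{
        "Id": "my-script-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:my-script",
    }],
)

# EventBridge also needs permission to invoke the function:
boto3.client("lambda").add_permission(
    FunctionName="my-script",
    StatementId="allow-eventbridge-hourly",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
```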
I wrote a Python script that pulls data from a 3rd-party API and pushes it into a SQL table I set up in AWS RDS. I want to automate this script so that it runs every night; it only takes about a minute to run. I need to find a good place and way to set it up for that.
I could set up an EC2 instance with a cron job and run it from there, but it seems expensive to keep an EC2 instance alive all day for only one minute of run time per night. Would AWS Data Pipeline work for this purpose? Are there other, better alternatives?
(I've seen similar topics discussed when googling around but haven't seen recent answers.)
Thanks
Based on your case, I think you can try using ShellCommandActivity in Data Pipeline. It will launch an EC2 instance and execute the command you give Data Pipeline on your schedule. After finishing the task, the pipeline will terminate the EC2 instance.
Here is doc:
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-shellcommandactivity.html
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-ec2resource.html
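For a rough idea of what that looks like with boto3 (role names, schedule, and the command are placeholders; treat this as illustrative rather than a drop-in definition):

```python
# Hypothetical sketch: a Data Pipeline that spins up an EC2 instance
# nightly, runs one command, and terminates the instance afterwards.
import boto3

dp = boto3.client("datapipeline")

pipeline_id = dp.create_pipeline(
    name="nightly-script", uniqueId="nightly-script-1"
)["pipelineId"]

dp.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=[
        {"id": "Default", "name": "Default", "fields": [
            {"key": "scheduleType", "stringValue": "cron"},
            {"key": "schedule", "refValue": "NightlySchedule"},
            {"key": "role", "stringValue": "DataPipelineDefaultRole"},
            {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
        ]},
        {"id": "NightlySchedule", "name": "NightlySchedule", "fields": [
            {"key": "type", "stringValue": "Schedule"},
            {"key": "period", "stringValue": "1 days"},
            {"key": "startDateTime", "stringValue": "2021-01-01T02:00:00"},  # placeholder
        ]},
        {"id": "NightlyEc2", "name": "NightlyEc2", "fields": [
            {"key": "type", "stringValue": "Ec2Resource"},
            {"key": "instanceType", "stringValue": "t2.micro"},
            {"key": "terminateAfter", "stringValue": "15 Minutes"},
        ]},
        {"id": "RunScript", "name": "RunScript", "fields": [
            {"key": "type", "stringValue": "ShellCommandActivity"},
            {"key": "runsOn", "refValue": "NightlyEc2"},
            {"key": "command", "stringValue": "python3 my_script.py"},  # placeholder command
        ]},
    ],
)

dp.activate_pipeline(pipelineId=pipeline_id)
```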
Alternatively, you could use a 3rd-party service like Crono. Crono is a simple REST API to manage time-based jobs programmatically.