I have two EC2 instances named "aws-example" and "aws-sandbox". They are also Docker machines: "aws-example" is a manager and "aws-sandbox" is a worker in a Docker swarm.
I wrote two Python scripts. When I run the script on "aws-example", it stops the "aws-sandbox" instance and starts it again.
When I run the script on "aws-sandbox", the worker has to leave the swarm and join it again.
I currently do all of this by hand, but I need to automate it. How do I run a one-time Python script on "aws-sandbox" when the instance starts? I've investigated services like AWS Lambda and CloudWatch, and I'm very confused. Can anyone point me to a clear pathway?
Make use of @reboot /path/to/script.py in cron; it should work.
For more info check this out.
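For example, a minimal crontab entry could look like this (the interpreter path and script path below are placeholders for your own):

    # open the current user's crontab for editing
    crontab -e

    # add this line; cron runs it once at every boot
    @reboot /usr/bin/python3 /path/to/script.py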
I have some Python scripts that I want to run daily from a Windows PC.
My current workflow is:
The desktop PC stays on all day, every day, except for a weekly restart over the weekend.
After the restart I open VS Code and run a little bash script ./start.sh that kicks off the tasks.
The above works reasonably well, but it is also fairly painful: I need to re-run start.sh whenever I close VS Code (e.g. for an update). Also, the processes use some local Python libraries, so I need to stop them whenever I want to update those libraries.
With regards to how to do this properly, 4 tools came to mind:
Windows Scheduler
Airflow
Prefect (https://www.prefect.io/)
Rocketry (https://rocketry.readthedocs.io/en/stable/)
However, I can't quite get my head around the fundamental issue: if Prefect/Airflow/Rocketry run on my PC, then nothing will restart them after the PC reboots. I'm also not sure these tools will give me the isolation I'd prefer.
Docker came to mind: I could put each task into a Docker image and run them via some form of Docker swarm or something like that. But I'm not sure whether I'm reinventing the wheel.
I'm 100% sure I'm not the first person in this situation. Could anyone point me to a guide on how this could be done well?
Note:
I am not considering running the Python scripts in the cloud. They interact with local tools that are only licensed for my PC.
You can definitely use Prefect for that. It's very lightweight and seems to match what you're looking for. You install it with pip install prefect and start the Orion API server with prefect orion start. Once you create a Deployment and start an agent with prefect agent start -q default, you can even configure the schedule from the UI.
For more information about Deployments, check our FAQ section.
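Collected in one place, the commands mentioned above (assuming the Prefect 2.x/Orion versions this answer describes):

    pip install prefect
    prefect orion start              # start the Orion API server
    prefect agent start -q default   # start an agent polling the default work queue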
It sounds like Rocketry could also be suitable. Rocketry can shut itself down using a task. You could write a task that:
Runs on the main thread and process (blocking new tasks from starting)
Waits or terminates all the currently running tasks (use the session)
Calls session.shut_down() which sets a flag to the scheduler.
There is also an app configuration, shut_cond, which is simply a condition: if it is true, the scheduler exits, so you can use this as an alternative.
Then, after the line app.run(), you simply add a line that runs the shutdown -r (restart) shell command, for example via the subprocess library. You then need something that starts Rocketry again once the restart completes. For that, perhaps this could be an answer: https://superuser.com/a/954957, or use Windows Task Scheduler to create a simple startup task that starts Rocketry.
Especially if you had Linux machines (Raspberry Pis, for example), you could integrate Rocketry with FastAPI and make a small cluster in which the Rocketry apps communicate with each other; just set the Rocketry script up as a startup service. One machine could be a backup that calls another machine's API to run the Linux restart command. The backup then executes the tasks until the primary machine answers requests again (i.e. is up and running).
But as the author of the library, I'm possibly biased toward my own projects. That said, Rocketry is very capable on complex scheduling problems; that's the purpose of the project.
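A minimal sketch of the shutdown-then-restart idea described above (the "daily after 23:00" schedule and the shutdown flags are assumptions, not from the question):

    import subprocess

    from rocketry import Rocketry
    from rocketry.args import Session

    app = Rocketry()

    # Assumed schedule; execution="main" runs on the main thread/process,
    # blocking new tasks from starting while it runs
    @app.task("daily after 23:00", execution="main")
    def plan_restart(session=Session()):
        # Set the scheduler's shutdown flag; app.run() returns after this
        session.shut_down()

    if __name__ == "__main__":
        app.run()
        # Reached only once the scheduler has exited: restart the machine
        subprocess.run(["shutdown", "-r"])  # exact flags vary by OS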
You can use schtasks on Windows to schedule tasks such as running a bash script or a Python script, and it's pretty reliable too.
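For example (the task name, script path, and start time below are placeholders):

    schtasks /create /tn "DailyPythonJob" /tr "python C:\scripts\my_script.py" /sc daily /st 09:00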
I'm trying to create a task on Unix similar to a Windows Task Scheduler task, one that runs at specific times of the day and is even triggered after a server restart. The aim of this job is to execute a Python file.
My question is two parts:
1). How do I write a job that I can schedule at multiple times of the day? I tried to write a cron job using the crontab command, but it gives "You <user> are not allowed to access to (crontab) because of pam configuration". I would like to know a way to schedule the triggering of the Python script without needing root/admin rights.
2). How can I schedule a job whose scheduling stays in effect even after the server is restarted? While going through various resources, I found systemd, which can be used to start and stop services, for example: https://linuxconfig.org/how-to-write-a-simple-systemd-service. But I'm unable to find how I can write a service file that will run my Python script.
Can someone please guide me on how to run a job which executes my Python script at specific times of day and keeps working even after a server bounce?
First, the PAM error says you do not have permission, so check /etc/security/access.conf and add the line:
+ : youruser : cron crond :0 tty1 tty2 tty3 tty4 tty5 tty6
To run a cron job on boot, add a line like this to your crontab:
@reboot /path/to/your_program
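Putting both parts of the question together, a user crontab (edited with crontab -e, no root needed once PAM allows access) might look like this; the times and path are placeholders:

    # run the script at 09:00 and 17:00 every day
    0 9,17 * * * /usr/bin/python3 /path/to/your_program
    # and run it once at every boot
    @reboot /usr/bin/python3 /path/to/your_program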
I have a Python script that sends mail to users. I want this script to be run by all instances that are in the running state, at a particular time interval (decided by each instance's task scheduler).
I am not allowed to use AWS Lambda.
Is there any way I can do this? Can I use an AWS image?
It appears that you wish to run a script on multiple Amazon EC2 instances at a particular time.
There are basically two ways to do this:
Using cron on each instance
Each Amazon EC2 instance can trigger its own script. On Linux, you would use cron. On Windows, you would define a Scheduled Task.
Running commands using AWS Systems Manager Run Command
If you wish to externally trigger a command on multiple Amazon EC2 instances, you can use the AWS Systems Manager Run Command. You will first define the commands to be run, and then nominate the instances on which to run the command. The Run Command will manage the process of running the script, gather the results, retry failures and report the results.
The benefit of using the Run Command is that you can centrally manage the process. It is very easy to edit the script and run it when desired. In contrast, if using cron you would need to update the script on every instance.
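For example, with the AWS CLI (the tag filter and script path below are placeholders):

    aws ssm send-command \
        --document-name "AWS-RunShellScript" \
        --targets "Key=tag:Role,Values=mailer" \
        --parameters 'commands=["python3 /opt/scripts/send_mail.py"]' \
        --comment "Send mail from all tagged instances"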
I have a Python script for SSH which helps run various Linux commands on remote servers using the paramiko module. All the outputs are saved in a text file, and the script runs properly. Now I want to run this script automatically twice a day, at 11am and 5pm every day.
How can I run this script automatically at the given times every day without launching it manually each time? Is there any software or module for this?
Thanks for your help.
If you're running Windows, your best bet would be to create a Scheduled Task to execute Python itself, passing the path to your script as an argument.
If you're using OSX or Linux, CRON is your friend. References abound for how to create scheduled events in crontab; this is a good start for setting up CRON tasks.
One thing to mention is permissions. If you're running this from a Linux machine, you'll want to ensure you set up the CRON job to run under the right account (best practice not to use your own).
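For the 11am/5pm schedule from the question, the crontab entry would look like this (the interpreter and script paths are placeholders):

    # minute hour day-of-month month day-of-week command
    0 11,17 * * * /usr/bin/python3 /path/to/ssh_script.py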
Assuming you are running on a *nix system, cron is definitely a good option. If you are running a Linux system that uses systemd, you could try creating a timer unit. It is probably more work than cron, but it has some advantages.
I won't go through all the details here, but basically:
Create a service unit that runs your program.
Create a timer unit that activates the service unit at the prescribed times.
Start and enable the timer unit.
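A minimal sketch of the two units, using the 11:00/17:00 times from the question (unit names and paths are placeholders):

    # /etc/systemd/system/myscript.service
    [Unit]
    Description=Run my Python script

    [Service]
    Type=oneshot
    ExecStart=/usr/bin/python3 /path/to/script.py

    # /etc/systemd/system/myscript.timer
    [Unit]
    Description=Run myscript.service at 11:00 and 17:00

    [Timer]
    OnCalendar=*-*-* 11:00:00
    OnCalendar=*-*-* 17:00:00
    Persistent=true

    [Install]
    WantedBy=timers.target

    # start the timer now and enable it so it survives reboots
    systemctl enable --now myscript.timer

One advantage over cron: with Persistent=true, a run missed while the machine was off is executed at the next boot.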
My friends and I have written a simple Telegram bot in Python. The script runs on a remote shared host. The problem is that for some reason the script stops from time to time, and we want some sort of mechanism to check whether it is running and restart it if necessary.
However, we don't have access to ssh, we can't run bash scripts and I couldn't find a way to install supervisord. Is there a way to achieve the same result by using a different method?
P.S. I would appreciate it if you gave a detailed explanation, as I'm a newbie hobbyist. However, I have no problem with researching and learning new things.
You can have a small supervisor Python script whose only purpose is to start (and restart) your main application Python script. When your application crashes, the supervisor notices and restarts it.
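A minimal sketch of such a supervisor (the bot's filename bot.py is a placeholder):

    import subprocess
    import sys
    import time

    # Restart the bot whenever it exits, for any reason
    while True:
        print("starting bot.py ...")
        result = subprocess.run([sys.executable, "bot.py"])
        print(f"bot.py exited with code {result.returncode}; restarting in 5 seconds")
        time.sleep(5)  # small delay so a crash loop doesn't spin the CPU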