One-time Python script on Heroku

I would like to run a Python script on Heroku, but I would like it to run only once and stop at the end of the script.
Right now my script runs endlessly: when it reaches the end, it restarts from the beginning.
How can I stop it at the end of the script?
Right now my Procfile looks like this:
web: python ValueAppScript.py
worker: python ValueAppScript.py
Thank you.

First of all, you probably don't want to declare the same command as both a web and a worker. If your script listens for HTTP requests it should be a web process, otherwise a worker makes more sense:
worker: python ValueAppScript.py
Since you don't want your worker running all the time, scale it down to zero dynos:
heroku ps:scale worker=0
If you wish to run it once interactively, you can use heroku run:
heroku run python ValueAppScript.py
If you want it to run on a schedule, e.g. once per day, you can use the Heroku Scheduler. Since you have defined this as a worker process you should be able to just use worker as the command.
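One more thing worth checking: a scaled-up worker dyno is restarted by Heroku whenever its process exits, so scaling it to zero is the key step, but the script itself should also exit when it is done rather than looping back to the start. A minimal sketch, assuming your current version wraps everything in an endless loop (do_work is a placeholder for your real logic, not taken from your code):
# ValueAppScript.py (sketch): do the work once, then let the process exit
def do_work():
    ...  # your existing logic goes here

if __name__ == "__main__":
    do_work()
    # no while True here: once do_work() returns, the process (and any
    # one-off dyno started with heroku run or the Scheduler) simply ends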

Related

Deploy python worker to heroku without running immediately

I have a simple Python script that I deploy on Heroku as a worker:
Procfile:
worker: python main.py
The script is scheduled to run every day at a specific time with the Heroku Scheduler. I don't want it to run at any other time.
Every time I push new changes to Heroku (git push heroku master), the script runs automatically, which I want to avoid.
How can I do that?
I looked into using another scheduler that is set up from within the script, such as APScheduler. Would this be a solution? Would I need to change my script?
Thanks!
You can scale the worker dyno formation down to zero:
If you want to stop running a particular process type entirely, simply scale it to 0
So:
heroku ps:scale worker=0
Jobs scheduled via Heroku Scheduler should still run as configured.
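If you did want the in-process scheduler approach mentioned in the question instead, a minimal APScheduler sketch might look like the following (the clock.py name, the daily_job function and the 07:00 UTC time are placeholders, not taken from your code); you would run it as an always-on process, e.g. clock: python clock.py, scaled to one dyno:
# clock.py - minimal sketch using APScheduler's BlockingScheduler
from apscheduler.schedulers.blocking import BlockingScheduler

sched = BlockingScheduler(timezone="UTC")

@sched.scheduled_job("cron", hour=7, minute=0)
def daily_job():
    print("Running the daily job")  # placeholder for what main.py currently does

sched.start()  # blocks forever and fires the job at the configured time
The trade-off is that this keeps a dyno running around the clock just to fire one job per day, whereas Heroku Scheduler with the worker scaled to zero only uses dyno time while the job actually runs.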

How to make my selenium script run forever on heroku

I have created a Selenium bot that posts to Instagram every 20 minutes.
I deployed my project to Heroku, but I don't know how to make it run forever.
I tried heroku run python mycode.py in the command prompt, but the program stops working when I close the command prompt.
heroku run is for ad hoc interactive stuff.
For a long-running background process you should define a worker process in your Procfile:
worker: python mycode.py
Commit that change and redeploy. Then scale up a dyno to run it:
heroku ps:scale worker=1
This will either consume free dyno hours or, if you are using paid dynos, incur costs.
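The worker still has to provide its own loop, since the dyno only keeps the process alive. A minimal sketch of the "post every 20 minutes" structure, where post_to_instagram is a placeholder for your existing Selenium code:
# mycode.py (sketch)
import time

POST_INTERVAL_SECONDS = 20 * 60  # 20 minutes

def post_to_instagram():
    ...  # your existing Selenium logic goes here

if __name__ == "__main__":
    while True:
        post_to_instagram()
        time.sleep(POST_INTERVAL_SECONDS)  # wait 20 minutes before the next post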

run python console script from heroku

I've deployed a Python script on Heroku, and I can run it from my local terminal with the
heroku run python script.py
command. But when I close the local terminal, the script stops.
Is there a way to run the deployed script on the Heroku server, independent of my local machine?
You can check:
https://devcenter.heroku.com/articles/heroku-cli-commands#heroku-run-detached
heroku run:detached -t python script.py should do the trick for you
Reddy Abel Tintaya Conde's answer using heroku run:detached is good for ad hoc stuff.
But if your script should run continuously, automatically restarting when it fails, you should define a process for it in your Procfile. Such processes are often called worker processes:
worker: python script.py
Then you can scale your worker process up (or down) with heroku ps:scale:
heroku ps:scale worker=1
Whether you run your script this way or via heroku run:detached, remember that this consumes free dyno hours or, if you are using paid dynos, incurs costs.
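If you do go the run:detached route, you can check on the one-off dyno afterwards with something like the following (run.1 is just an example dyno name; the exact flags can vary between CLI versions):
heroku run:detached python script.py   # start the script in a one-off dyno
heroku ps                              # list dynos and note the one-off dyno's name, e.g. run.1
heroku logs --dyno run.1 --tail        # follow that dyno's output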

Can I have multiple commands run in a manifest.yml file?

I have a question about manifest.yml files and the command argument. I am trying to run multiple Python scripts, and I was wondering whether there is a better way to accomplish this?
command: python3 CD_Subject_Area.py python3 CD_SA_URLS.py
Please let me know how I could call more than one script at a time. Thanks!
To run a couple of short-term (i.e. run and eventually exit) commands, you would want to use Cloud Foundry tasks. The reason to use tasks rather than adding a custom command to manifest.yml or a Procfile is that a task only runs once.
If you add the commands above as you have them, they may run many times. This is because an application on Cloud Foundry is expected to run forever; if it exits, the platform considers it to have crashed and will restart it. So when your command ends, even if it exits successfully (i.e. exit 0), the platform still treats that as a crash and will run it again, and again, and again, until you stop your app.
With a task, you'd do the following instead:
cf push your application. This will start and stage the application. You can simply leave the command/-c argument as empty and do not include a Procfile[1][2]. The push will execute, the buildpack will run and stage your app, and then it will fail to start because there is no command. That is OK.
Run cf stop to put your app into the stopped state. This will tell the platform to stop trying to restart it.
Run cf run-task <app-name> <command>. For example, cf run-task my-cool-app "python3 CD_Subject_Area.py". This will execute the task in its own container, and the task will run to completion. Looking at cf tasks <app-name> will show you the result, and cf logs <app-name> --recent will show you the output.
You can then repeat this to run any number of other task commands. You don't need to wait for the original one to finish; they all execute in separate containers, so each task is completely isolated from the others.
[1] - An alternative is to set the command to sleep 99999 or something like that, don't map a route, and set the health check type to process. The app should then start successfully. You'll still stop it, but this just avoids an unseemly error.
[2] - If you've already set a command and want to back that out, remove it from your manifest.yml and run cf push -c null, or simply cf delete your app and cf push it again. Otherwise, the command will be retained in Cloud Controller, which isn't what you'd want.
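For reference, the workaround from footnote [1] might look roughly like this in manifest.yml (my-task-app and the sleep duration are placeholders; no-route and health-check-type are standard app manifest attributes):
applications:
- name: my-task-app
  command: sleep 99999        # keeps the container alive so the app "starts" cleanly
  no-route: true              # a task-only app doesn't need an HTTP route
  health-check-type: process  # only check that the process is running
You would still cf stop the app afterwards and kick off work with cf run-task as described above.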

Persistent background task Django + Heroku

I have a persistent background process currently running as a standalone Python script on my Ubuntu server, managed by Supervisor. However, I am migrating to Heroku and wonder if anyone has experience setting up the same kind of environment.
The specifications of the script;
Fetch information from external API
Do calculations on the data
Store the data to the database
If the script used less than 5 seconds, sleep for the remaining time, else run again
I could run a cronjob every 5 seconds, but from time to time steps 1-3 can take up to a full hour.
Any tips?
Thanks.
What you want to do is create a worker process. Simply define a command line script so that you can call it easily, then in your Procfile, add a new worker entry like so:
# Procfile
web: python manage.py runserver # example
worker: python manage.py start_cronjob # command to run your background process
Once you've got this defined in your Procfile, go ahead and push your app to Heroku, then scale up a worker process:
$ heroku scale worker=1
This will launch a single worker process.
To view the logs and ensure things are working as expected, you can say:
$ heroku logs --tail --ps worker
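For completeness, here is a rough sketch of what the start_cronjob management command could look like, following the loop described in the question (fetch, calculate, store, then sleep out the rest of a 5-second window); fetch_from_api, calculate and store are placeholders for your real code:
# yourapp/management/commands/start_cronjob.py (sketch)
import time

from django.core.management.base import BaseCommand


def fetch_from_api():
    return {}     # placeholder: fetch information from the external API


def calculate(data):
    return data   # placeholder: do calculations on the data


def store(results):
    pass          # placeholder: store the data in the database


class Command(BaseCommand):
    help = "Persistent background loop: fetch, calculate, store, sleep"

    def handle(self, *args, **options):
        while True:
            started = time.monotonic()
            store(calculate(fetch_from_api()))
            elapsed = time.monotonic() - started
            if elapsed < 5:
                time.sleep(5 - elapsed)  # used less than 5 seconds: sleep the rest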
