I run airflow webserver in one terminal and it works fine.
But when I run airflow scheduler in another terminal, it stops the webserver and the scheduler doesn't start either. I tried changing the webserver port to 8070, but it still gets stuck.
Currently, I'm using the APScheduler library in my Django project.
First code:
job = scheduler.add_job(schedule_job, 'cron', day_of_week='mon-fri', hour=14, minute=5)
When I run this cron job on localhost it works smoothly, but on my cPanel hosting it does not. If I run "python manage.py runserver" in the cPanel terminal, the job works as long as the terminal stays open; once I close the terminal, it stops working.
The server was started from inside the terminal.
Another thing:
Second code:
job = scheduler.add_job(schedule_job, 'interval', seconds=50)
This interval job runs fine on localhost and also on cPanel (N.B. it works smoothly on cPanel without the terminal running).
I can't figure out why the first job does not work on cPanel without the terminal open, while the second one works fine without the terminal.
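For context, the jobs above are registered roughly like this (a minimal sketch; the BackgroundScheduler import and the start() call are assumptions, since only the add_job lines are shown):

# minimal APScheduler sketch (scheduler setup assumed, not taken from the project)
from apscheduler.schedulers.background import BackgroundScheduler

def schedule_job():
    print("job ran")

scheduler = BackgroundScheduler()
scheduler.add_job(schedule_job, 'cron', day_of_week='mon-fri', hour=14, minute=5)
scheduler.add_job(schedule_job, 'interval', seconds=50)
scheduler.start()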
I am looking for help deploying my Flask app. I've already written the app and it works well. I'm currently using the following command in the directory of my Flask code:
sudo uwsgi --socket 0.0.0.0:70 --protocol=http -w AppName:app --buffer-size=32768
This is on my Amazon Lightsail instance. I have the instance linked to a static public IP, and if I navigate to the website, it works great. However, to get the command to continuously run in the background even after logging out of the Lightsail, I first start a screen command, execute the above line of code, and then detach the screen using ctrl-a-d.
The problem is, if the app crashes (which is understandable since it is very large and under development), or if the command is left running for too long, the process is killed, and it is no longer being served.
I am looking for a better method of deploying a flask app on Amazon Lightsail so that it will redeploy the app in the event of a crash without any interaction from myself.
Generally you would write your own systemd unit file to keep your application running, auto-restart it when it crashes, and start it when your instance boots.
There are many tutorials out there showing how to write such a unit file. Some examples:
Systemd: Service File Examples
Creating a Linux service with systemd
How to write startup script for Systemd?
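For a rough idea, a unit file wrapping the uwsgi command above could look something like this (the file name, user, and paths are assumptions you would adapt):

[Unit]
Description=uWSGI server for the Flask app
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/AppName
ExecStart=/usr/local/bin/uwsgi --socket 0.0.0.0:70 --protocol=http -w AppName:app --buffer-size=32768
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Save it as e.g. /etc/systemd/system/flaskapp.service, then run sudo systemctl daemon-reload and sudo systemctl enable --now flaskapp.service. Restart=always brings the app back up after a crash, and the [Install] section starts it at boot.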
You can use pm2
Starting an application with PM2 is straightforward. It will automatically discover the interpreter to run your application depending on the script extension. This is configurable via the Ecosystem config file, as I will show you later in this article.
All you need is to install pm2 and then run:
pm2 start appy.py
Great, this application will now run forever, meaning that if the process exits or throws an exception it will get automatically restarted. If you exit the console and connect again, you will still be able to check the application state.
To list applications managed by PM2, run:
pm2 ls
You can also check the logs:
pm2 logs
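If the auto-detected interpreter is not the one you want, it can also be set explicitly (the process name below is just an example):
pm2 start appy.py --interpreter python3 --name flask-app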
Keeping Processes Alive at Server Reboot
If you want to keep your application online across unexpected (or expected) server restarts, you will want to set up an init script to tell your system to boot PM2 and your applications.
It’s really simple with PM2, just run this command (without sudo):
pm2 startup
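Follow the instructions that command prints, and then run pm2 save so PM2 stores the current process list and can resurrect your app after a reboot:
pm2 save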
Pm2 Manage-Python-Processes
I have a small ETL to configure, for which it was decided to use Airflow on a server. The Airflow scheduler and webserver are started using supervisor and I get no errors when doing so; supervisorctl status shows:
airflow-scheduler RUNNING pid 6007, uptime 0:08:53
airflow-webserver RUNNING pid 6017, uptime 0:08:46
app RUNNING pid 21737, uptime 1
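The supervisor program definitions behind those entries are along these lines (the commands and options shown here are simplified placeholders, not the exact config on the server):

[program:airflow-webserver]
command=airflow webserver -p 9000 --hostname 0.0.0.0
autostart=true
autorestart=true

[program:airflow-scheduler]
command=airflow scheduler
autostart=true
autorestart=true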
The problem is:
The scheduler runs fine under supervisor, and the webserver is running on 0.0.0.0 port 9000, but whenever I try to reach it directly from my browser I get a connection timeout.
I've checked the supervisor logs and error output for the process that runs the webserver, and everything seems to be OK.
If I check myserverdomain:9000/admin/ I get nothing, or a connection timeout.
Any recommendations?
How do I run a Python script that is in my main directory with the Heroku Scheduler?
Normally I run this from the command line with heroku run python "script.py", but this syntax is clearly not correct for the Heroku Scheduler. Where it says "rake do_something", what should the correct syntax be to run a Python script? I've tried "python script.py" and this does not work either.
Thanks!
The Heroku Scheduler will try to run any command you give it. For Python, if you added a mytask.py to your app repo, you could have the Scheduler run:
python mytask.py
Instead of waiting for the Scheduler to run to see if the command works as expected, you can also test run it like this:
heroku run python mytask.py # or heroku run:detached ...
heroku logs --tail
Another way to use the Scheduler would be to extend your app with a CLI tool or a script runner that shares the app context. A popular one for Flask is Flask-Script.
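A minimal sketch of that approach could look like this (the manage.py name, the app import path, and the task body are assumptions, not part of the question):

# manage.py -- minimal Flask-Script sketch
from flask_script import Manager

from app import app  # assumes the Flask app object lives in app.py

manager = Manager(app)

@manager.command
def mytask():
    # runs with access to the app and its config, as described above
    print("task ran")

if __name__ == "__main__":
    manager.run()

The Scheduler entry would then be python manage.py mytask.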
Note: the "rake" references in the Heroku Scheduler docs example is for running tasks with ruby.
I'm trying to use airflow to define a specific workflow that I want to manually trigger from the command line.
I create the DAG and add a bunch of tasks.
from datetime import datetime

import airflow

dag = airflow.DAG(
    "DAG_NAME",
    start_date=datetime(2015, 1, 1),
    schedule_interval=None,
    default_args=args)
I then run in the terminal
airflow trigger_dag DAG_NAME
and nothing happens. The scheduler is running in another thread. Any direction is much appreciated. Thank You
I just encountered the same issue.
Assuming you are able to see your dag in airflow list_dags or via the web server then:
Not only did I have to turn on the dag in the web UI, but I also had to ensure that airflow scheduler was running as a separate process.
Once I had the scheduler running I was able to successfully execute my dag using airflow trigger_dag <dag_id>
My dag configuration is not significantly different from yours. I also have schedule_interval=None
You may have disabled the workflow.
To enable the workflow manually, open up the Airflow web server with:
$ airflow webserver -p 8080
Go to http://localhost:8080. You should see the list of all available DAGs, each with an on/off toggle button. By default everything is set to off. Search for your DAG and toggle it on. Now try triggering the workflow from the terminal; it should work.
First, make sure your Airflow database connection string is working, whether it is Postgres, SQLite (the default), or any other database. Then run the command
airflow initdb
This command should not show any connection errors.
Secondly, make sure your webserver is running as a separate process:
airflow webserver
Then run your scheduler as another separate process:
airflow scheduler
Finally, once the scheduler is running, trigger your DAG from another terminal:
airflow trigger_dag dag_id
Also make sure the DAG name and its tasks show up in the DAG and task lists:
airflow list_dags
airflow list_tasks dag_id
And if the DAG is switched off in your UI, toggle it on.
You should 'unpause' the DAG you want to trigger: use airflow unpause xxx_dag and then airflow trigger_dag xxx_dag, and it should work.
airflow trigger_dag -e <execution_date> <dag_id>
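For example, using the DAG from the question and an arbitrary execution date:
airflow trigger_dag -e 2015-06-01 DAG_NAME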