I am running Python Flask as my backend and React as my frontend. Every time I start my app, I have to run export FLASK_APP=app and then flask run in terminal 1, and npm start in terminal 2. How do I write a single script that starts both processes?
Here is my attempt:
#!/bin/bash
export FLASK_APP=microblog.py
flask run > /dev/null
npm start --prefix ~/app
Try this:
#!/bin/bash
export FLASK_APP=microblog.py
flask run > /dev/null & pids=$!
npm start --prefix ~/app & pids+=" $!"
trap "kill $pids" SIGTERM SIGINT
wait $pids
This script starts both Flask and npm in the background and stores their PIDs. After that, we set up a trap: if you hit Ctrl-C, both programs will be killed.
The wait line blocks until both the Flask and npm processes have finished, so you can easily terminate both with Ctrl-C.
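If it helps, here is a typical way to use it (the file name start.sh and its location next to microblog.py are my assumptions, not something from the question):
# Save the script above as start.sh in the Flask project directory,
# make it executable, then launch both processes; Ctrl-C stops them both.
chmod +x start.sh
./start.sh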
Related
I am trying to run my Flask application on my Linux server using nohup, but each time I run it with nohup, I have to press Ctrl-C to kill it before I can do anything else on the server.
Assume I have 2 files on my server, under /home/app/:
flask_app.py [which is my Flask application]
flk.sh
Inside my flk.sh I have included:
nohup python /home/app/flask_app.py
When I run the shell script on my server with sh flk.sh,
my terminal hangs while the app is running, and all output goes into nohup.out.
How can I run nohup on my Flask application without having to press Ctrl-C before I can run other commands?
You usually want to use nohup command arg1 arg2 &. You're just missing the ampersand.
Briefly, nohup keeps your job from being killed when the shell hangs up, and & sends the job to the background, so it isn't tied to your current shell.
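As a minimal sketch, the flk.sh from the question would then look something like this (redirecting output to a named log file instead of the default nohup.out is just my assumption):
#!/bin/sh
# Start the Flask app immune to hangups and in the background,
# sending stdout and stderr to a log file (assumed name) so the terminal stays free.
nohup python /home/app/flask_app.py > /home/app/flask_app.log 2>&1 &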
I have a shell script called kill.sh that helps me restart a Python script I've written. I normally use pkill -f main.py to kill my forever-running Python script. However, when I put that command into the shell script, it does not work.
My script
pkill -f main.py
ps aux | grep main.py # Still shows the process running.
Just executing pkill -f main.py directly on the bash command line works as expected. Why is this?
This is not a satisfactory answer, as I could not find the root cause of why pkill -f does not work from inside a script. I ended up using a systemd service file to manage my Python process instead. Here's an example, FYI.
[Unit]
Description=Service Name
[Service]
Environment=PYTHONUNBUFFERED=1
ExecStart=/path/to/python /path/to/python/script.py
Restart=on-failure
RestartSec=5s
WorkingDirectory=/python/project/dir/
Name the file main.service and place it in /lib/systemd/system/.
Run the service: systemctl start main.service
Stop the service: systemctl stop main.service
Restart the service: systemctl restart main.service
Show status and output: systemctl status main.service -l
Now I don't have to worry about multiple processes running. If the program dies it'll even restart.
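If you also want the service to start at boot (not covered above, so treat this as an assumption about your setup), the usual additions are an [Install] section at the end of main.service plus systemctl enable:
[Install]
WantedBy=multi-user.target

# after editing the unit file, reload systemd and enable the service at boot
sudo systemctl daemon-reload
sudo systemctl enable main.service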
I have transferred the forecasts.py file from GitHub to my virtual machine via Azure Pipelines. If I start the script from the virtual machine's terminal with python3 forecasts.py &, everything goes smoothly and the script keeps running in the background. For some reason, I get the following message from Azure Pipelines if I try to start the script the same way:
The STDIO streams did not close within 10 seconds of the exit event from process '/bin/bash'. This may indicate a child process inherited the STDIO streams and has not yet exited.
The core content of the forecasts.py is the following:
import schedule
import time

def job():
    print("I'm working...")

schedule.every().minute.at(":00").do(job)

while True:
    schedule.run_pending()
    time.sleep(5)
This script should print "I'm working..." once per minute. Should I start the script in some other way?
EDIT
azure-pipelines.yml might help to solve this:
variables:
- name: system.debug
  value: true

jobs:
- deployment: fmi_forecasts_deployment
  displayName: fmi_forecasts
  environment:
    name: AnalyticsServices
    resourceType: VirtualMachine
  strategy:
    rolling:
      maxParallel: 2  # for percentages, mention as x%
      preDeploy:
        steps:
        - download: current
        - script: echo initialize, cleanup, backup, install certs
      deploy:
        steps:
        - checkout: self
        - script: sudo apt install python3-pip
          displayName: 'Update pip'
        - script: python3 -m pip install -r requirements.txt
          displayName: 'Install requirements.txt modules'
        - script: rsync -a $(Build.SourcesDirectory) /home/ubuntu/$(Build.Repository.Name)/
          displayName: 'Sync files to $(Build.Repository.Name)'
        - task: Bash@3
          inputs:
            targetType: 'inline'
            script: python3 /home/ubuntu/$(Build.Repository.Name)/s/forecasts.py &
          displayName: 'Start the script'
      routeTraffic:
        steps:
        - script: echo routing traffic
      postRouteTraffic:
        steps:
        - script: echo health check post-route traffic
      on:
        failure:
          steps:
          - script: echo Restore from backup! This is on failure
        success:
          steps:
          - script: echo Notify! This is on success
EDIT
I edited the forecasts.py file to print "Sleeping..." every 5 seconds. When I execute it with nohup python -u /home/ubuntu/$(Build.Repository.Name)/s/forecasts.py & I receive the following logs. So the script works, but when I look at the running processes on the VM, there is no Python process running. The script dies when the pipeline ends, I assume.
##[debug]The task was marked as "done", but the process has not closed after 5 seconds. Treating the task as complete.
According to the debug log, this is more of an informational message indicating that some process is still running and has not been cleaned up, rather than an error message; it is not written to the standard error stream and does not fail the task.
If you want the script to continue running in the background after the task has finished, you could try using the Start-Process command to launch the script. This makes sure the launched job keeps running when the task is finished, but the job will still be closed when the build is finished.
Start-Process powershell.exe -ArgumentList '-file xxx\forecasts.py'
For details, please refer to the workaround in this ticket.
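Another thing you could try (my assumption, not something from the answer above) is to fully detach the process's STDIO streams in the Bash step, so the step has nothing left to wait on; note the agent may still clean the process up when the job ends, as you observed:
# Redirect stdout/stderr to a log file (assumed path) and close stdin,
# so the step's STDIO streams can close while forecasts.py keeps running.
nohup python3 -u /home/ubuntu/$(Build.Repository.Name)/s/forecasts.py > /home/ubuntu/forecasts.log 2>&1 < /dev/null &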
I am currently running ngrok and a Python app concurrently on a specific port so I can text my Raspberry Pi and have it respond to my message via Twilio. Each time my Raspberry Pi boots up or reboots, I need to manually start the services again with ./ngrok http 5000 and python /path/to/file/app.py. To avoid that, I edited my cron jobs as follows and wrote a script called startService.py. However, it doesn't seem to be functioning properly, as I do not receive answers to texts after a reboot. Any ideas?
Cron:
# m h dom mon dow command
*/5 * * * * python /rasp/system/systemCheck.py
@reboot python /Rasp/system/twilio/startService.py &
startService.py
import os
os.system('/./ngrok http -subdomain=ABC123 5000')
os.system('python /Rasp/system/twilio/starter/app.py')
You did not mention your OS. Assuming your OS uses Upstart for booting, you can create an Upstart job so your process is automatically spawned at boot (and respawned if it dies). For ngrok, create a file /etc/init/ngrok.conf:
# Ngrok
#
# Create tunnel provided by ngrok.io
description "Ngrok Tunnel"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
respawn limit 10 5
umask 022
exec /ngrok http -subdomain={{My Reserved Subdomain}} 5000
Now it will automatically start at boot. If you want to start it manually, just issue the command:
$ sudo service ngrok start
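The same pattern should work for the Python app as well; a rough sketch, assuming the path from your startService.py and an arbitrary job name:
# /etc/init/twilio-app.conf (assumed name)
description "Twilio responder app"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
respawn limit 10 5
exec python /Rasp/system/twilio/starter/app.py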
Just as a suggestion: putting your binary in the root directory / is not good practice.
Reference:
http://notes.rioastamal.net/2016/05/starting-ngrok-automatically-at-boot.html
After multiple failed attempts, I seem to have come up with a working system. I first had to authorize the root user to use my ngrok account by doing the following:
sudo su
./ngrok authtoken {{Insert Your Auth Token Here}}
exit
Then, I created ngrokLauncher.sh and appLauncher.sh as shown.
ngrokLauncher.sh:
#!/bin/sh
# ngrokLauncher.sh
cd /
./ngrok http -subdomain={{My Reserved Subdomain}} 5000 &
cd /
appLauncher.sh:
#!/bin/sh
# appLauncher.sh
cd /Storage/system/twilio/starter
python app.py &
cd /
Then, I modified the file permissions so they could be run on startup: sudo chmod 755 ngrokLauncher.sh && sudo chmod 755 appLauncher.sh. Lastly, I edited the cron jobs as follows:
Crontab:
# m h dom mon dow command
*/5 * * * * python /SKYNET/system/systemCheck.py
@reboot sh /Storage/ngrokLauncher.sh >/Storage/logs/cronlog 2>&1
@reboot sh /Storage/appLauncher.sh >/Storage/logs/cronlog 2>&1
Then, after running sudo reboot, the system restarted and I received a response to my SMS from the Python app.
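If you want a quick sanity check after the reboot (just a suggestion, not part of the original setup), something like this lists both background processes with their full command lines:
# pgrep -a prints PID plus command line; -f matches against the full command line
pgrep -af ngrok
pgrep -af app.py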
I added a Bottle server that uses Python's cassandra library, but it exits with this error: Bottle FATAL Exited too quickly (process log may have details). The log shows this:
File "/usr/local/lib/python2.7/dist-packages/cassandra/cluster.py", line 1765, in _reconnect_internal
    raise NoHostAvailable("Unable to connect to any servers", errors)
So I tried to run it manually using supervisorctl start Bottle, and then it started with no issue. The conclusion: the Bottle service starts too fast (before the Cassandra service it needs, which is also supervised), so a delay is needed!
This is what I use:
[program:uwsgi]
command=bash -c 'sleep 5 && uwsgi /etc/uwsgi.ini'
Not happy enough with the sleep hack, I created a startup script and launched supervisorctl start processname from there.
[program:startup]
command=/startup.sh
startsecs = 0
autostart = true
autorestart = false
startretries = 1
priority=1
[program:myapp]
command=/home/website/venv/bin/gunicorn /home/website/myapp/app.py
autostart=false
autorestart=true
process_name=myapp
startup.sh
#!/bin/bash
sleep 5
supervisorctl start myapp
This way supervisor fires the startup script once, and that script starts myapp after 5 seconds; mind the autostart=false and autorestart=true on myapp.
I had a similar issue where starting 64 Python rq-worker processes using supervisorctl was raising CPU and RAM alerts at every restart. What I did was the following:
command=/bin/bash -c "sleep %(process_num)02d && virtualenv/bin/python3 manage.py rqworker --name %(program_name)s_my-rq-worker_%(process_num)02d default low"
Basically, before running the Python command, I sleep for N seconds, where N is the process number, which means supervisor will start one rq-worker process per second.
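For context, %(process_num)02d comes from supervisor's numprocs/process_name expansion; a minimal sketch of what the surrounding [program] block might look like (the program name and directory are placeholders, not from the original config):
[program:my-rq-worker]
; spawn 64 workers; each sleeps for its own process number in seconds first,
; so roughly one rq-worker comes up per second instead of all 64 at once
numprocs=64
process_name=%(program_name)s_%(process_num)02d
directory=/path/to/django/project
command=/bin/bash -c "sleep %(process_num)02d && virtualenv/bin/python3 manage.py rqworker --name %(program_name)s_my-rq-worker_%(process_num)02d default low"
autostart=true
autorestart=true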