I want a Python file to run automatically at 8 am every day going forward. I'm trying to use the schedule library, as suggested in the second answer here, on Windows.
import schedule
import time

def query_fun(t):
    print('Do a bunch of things')
    print("I'm working...", t)
    df.to_csv('C:/Documents/Query_output.csv', encoding='utf-8')

schedule.every().day.at("08:00").do(query_fun, 'It is 08:00')

while True:
    schedule.run_pending()
    time.sleep(60)  # wait one minute
But 8am has come and gone, and the csv file hasn't been updated, and it doesn't look like the script runs when I want it to.
Edit: Based on this, I used pythonw.exe to run the script from the command line: C:\Program Files\Python3.7>pythonw.exe daily_query.py, but the script still doesn't run when expected.
You took out the key part of the script by commenting it out. How is the script supposed to magically wake up at 8 AM and do something? The point is to keep it running at all times and let the schedule library trigger the job at the right moment (that is, run any pending jobs at time T on day D). What you are doing right now is just declaring the function and then exiting without doing anything.
The idea is to keep the script running in the background, matching the current time against the scheduled time and running any pending jobs as per your logic. You run your script in the background and forget about it until 8 AM:
nohup python MyScheduledProgram.py &
nohup detaches the program from your terminal, so no output gets printed there; you can still view the output in nohup.out.
Here you can easily see what the script does:
schedule.every().day.at("08:00").do(query_fun, 'It is 08:00')
tells the scheduler to run the function at 8 am.
But the other essential part is this loop:
while True:
    schedule.run_pending()
    time.sleep(60)  # wait one minute
This part checks whether a scheduled job is due, runs it if so, then waits 60 seconds and checks again.
EDIT:
The question is about a Windows machine, so the rest of this answer does not apply there.
If you are on a Linux machine, you should consider using a crontab:
Open a terminal and type
crontab -e
After you have selected the editor you want (let's take nano), it opens a file where you can add various entries.
Just add:
0 8 * * * /usr/bin/python3 /home/path/to/skript.py
Then save with Ctrl + O and exit nano with Ctrl + X.
The script will then run every day at 8 am. First, test the command
/usr/bin/python3 /home/path/to/skript.py
by hand to make sure the script does not produce an error.
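If the job still does not seem to fire, a variant of the entry that appends the script's output to a log file makes debugging easier (the log path here is just an example):
0 8 * * * /usr/bin/python3 /home/path/to/skript.py >> /home/path/to/cron.log 2>&1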
Related
I want to start a Python script, automatically stop that script after 2 minutes, run another command, and keep repeating this forever (in a loop):
Cd c:/location.of.script/
pythonscript.py
Stop (like ctrl+c) pythonscript.py after 120s
Del -f cookies.file
.
.
.
Is this even possible with a batch file on Windows 10? If so, can someone please help me with this?
I've been looking everywhere but found nothing except the exit() command, which stops the script from the inside; that isn't what I want to do.
You can change your Python script to exit after 2 minutes, and then write a batch file with a loop that runs forever: it runs the Python script and then deletes cookies.file. I don't know if that's exactly what you want, but you can do it by putting a timer in your Python script.
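If a batch file turns out to be awkward, the same loop can also be written as a small Python wrapper. This is only a rough sketch using the paths and file names from the question (Python 3.3+ for the wait timeout):
import os
import subprocess

os.chdir('c:/location.of.script/')                       # cd c:/location.of.script/

while True:
    p = subprocess.Popen(['python', 'pythonscript.py'])  # start the script
    try:
        p.wait(timeout=120)                              # let it run for at most 120 seconds
    except subprocess.TimeoutExpired:
        p.kill()                                         # forcibly stop it after 120 s
        p.wait()
    if os.path.exists('cookies.file'):
        os.remove('cookies.file')                        # del -f cookies.file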
You can make a separate thread that keeps track of the time and terminates the code after some time.
An example of such code could be:
import threading

def eternity():  # your method goes here
    while True:
        pass

t = threading.Thread(target=eternity)  # create a thread running your function
t.start()  # let it run using start (not run!)
t.join(3)  # join it, with your timeout in seconds
And this code is copied from https://stackoverflow.com/a/30186772/4561068
I have a Python script that does some work. I want to re-run this script automatically, and also relaunch it after any crash or freeze.
I can do something like this:
while True:
    try:
        main()
    except Exception:
        os.execv(sys.executable, ['python'] + sys.argv)
But, for some unknown reason, this still crashes or freezes once every few days. When I see it has crashed, I type python main.py in cmd and it starts, so I don't know why os.execv doesn't do this job by itself. I guess it's because this code is part of the app itself. So I would prefer a separate script/app that controls the relaunching externally; I hope that will be more stable.
So this script should work in this way:
Start any script
Check that the script's process is still working, for example by watching some file's modification time, and track it by process name/ID/etc.
When it disappears from the process list, launch it again
When the file was last changed more than 5 minutes ago, stop the process, wait a few seconds, and launch it again
In general: be cross-platform (Linux/Windows)
(Not important) log all crashes
I can write this myself (I'm working on it right now), but I'm pretty sure something like this must already have been done by somebody; I just can't find it on Google/GitHub.
UPDATE: I added the code from hansaplast's answer to GitHub, along with some changes: relauncher. Feel free to copy/use it.
As it needs to work both on Windows and on Linux, I don't know of a way to do that with standard tools, so here's a DIY solution:
from subprocess import Popen
import os
import time

# change into scripts directory
abspath = os.path.abspath(__file__)
dname = os.path.dirname(abspath)
os.chdir(dname)

while True:
    p = Popen(['python', 'my_script.py', 'arg1', 'arg2'])
    time.sleep(20)  # give the program some time to write into logfile
    while True:
        if p.poll() is not None:
            print('crashed or regularly terminated')
            break
        file_age_in_s = time.time() - os.path.getmtime('output.log')
        if file_age_in_s > 60:
            print('frozen, killing process')
            p.kill()
            break
        time.sleep(1)
    print('restarting..')
Explanation:
time.sleep(20): give script 20 seconds to write into the log file
poll(): regularly check if script died (either crashed or regularly terminated, you can check the return value of poll() to differentiate that)
getmtime(): regularly check output.log and see whether it was changed within the past 60 seconds
time.sleep(1): between every check wait for 1s as otherwise it would eat up too many system resources
The script assumes that the check-script and the run-script are in the same directory. If that is not the case, change the lines beneath "change into scripts directory"
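For completeness, a hedged, standalone sketch of how the poll() return value can distinguish a crash from a regular termination:
from subprocess import Popen
import time

p = Popen(['python', 'my_script.py'])

while True:
    rc = p.poll()  # None while the child is still running
    if rc is not None:
        if rc == 0:
            print('terminated regularly')
        else:
            print('crashed with return code', rc)
        break
    time.sleep(1)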
I personally like the supervisor daemon, but it has two issues here:
It is only for unix systems
It restarts app only on crashes, not freezes.
But it has a simple XML-RPC API, which makes it easier to write a freeze-watchdog app. You could just start your process under supervisor and restart it via the supervisor API when you see that it has frozen (see the sketch after the config below).
You can install it via apt install supervisor on Ubuntu and write a config like this:
[program:main]
user=vladimir
command=python3 /var/local/main/main.py
process_name=%(program_name)s
directory=/var/local/main
autostart=true
autorestart=true
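With that config in place, a watchdog could use supervisor's XML-RPC API to restart the program when its log goes quiet. A rough sketch, assuming the [inet_http_server] section is enabled in supervisord.conf on localhost:9001 (it is not by default) and that main.py regularly writes to /var/local/main/output.log:
import os
import time
from xmlrpc.client import ServerProxy

LOGFILE = '/var/local/main/output.log'
server = ServerProxy('http://localhost:9001/RPC2')

while True:
    age = time.time() - os.path.getmtime(LOGFILE)
    if age > 300:  # no log output for 5 minutes -> treat the program as frozen
        server.supervisor.stopProcess('main')   # 'main' is the [program:main] name above
        server.supervisor.startProcess('main')
    time.sleep(60)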
I have a Python script that runs all day long, checking the time every 60 seconds so it can start/end tasks (other Python scripts) at specific times of day.
The script is almost working. Tasks start at the right time and are opened in a new cmd window, so the main script can keep running and sampling the time. The only problem is that it just won't kill the tasks.
import os
import time
import signal
import subprocess
import ctypes

freq = 60  # sampling frequency in seconds

while True:
    print 'Sampling time...'
    now = int(time.time())

    # initialize the task.. lets say 8:30am
    if (time.strftime("%H:%M", time.localtime(now)) == '08:30'):
        # The following method is used so python opens another cmd window and keeps original script running and sampling time
        pro = subprocess.Popen(["start", "cmd", "/k", "python python-task.py"], shell=True)

    # kill process attempts.. lets say 11:40am
    if (time.strftime("%H:%M", time.localtime(now)) == '11:40'):
        pro.kill()  # not working - nothing happens
        pro.terminate()  # not working - nothing happens
        os.kill(pro.pid, signal.SIGINT)  # not working - windows error 5 access denied
        # Kill the process using ctypes - not working - nothing happens
        ctypes.windll.kernel32.TerminateProcess(int(pro._handle), -1)
        # Kill process using windows taskkill - nothing happens
        os.popen('TASKKILL /PID ' + str(pro.pid) + ' /F')

    time.sleep(freq)
Important Note: the task script python-task.py will run indefinitely. That's exactly why I need to be able to "force" kill it at a certain time while it is still running.
Any clues? What am I doing wrong? How do I kill it?
You're killing the shell that spawns your sub-process, not your sub-process.
Edit: From the documentation:
The only time you need to specify shell=True on Windows is when the command you wish to execute is built into the shell (e.g. dir or copy). You do not need shell=True to run a batch file or console-based executable.
Warning
Passing shell=True can be a security hazard if combined with untrusted input. See the warning under Frequently Used Arguments for details.
So, instead of passing a single string, pass each argument separately in the list, and eschew using the shell. You probably want to use the same executable for the child as for the parent, so it's usually something like:
pro = subprocess.Popen([sys.executable, "python-task.py"])
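(This assumes import sys at the top of the script.) Because there is no intermediate cmd.exe now, the normal termination calls act on python-task.py itself, for example:
import sys
import subprocess

pro = subprocess.Popen([sys.executable, "python-task.py"])
# ... later, at the cutoff time:
pro.terminate()   # ends python-task.py directly
pro.wait()        # collect the exit status so nothing lingers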
I already searched for solutions to my question and found some, but they either don't work for me or are very complicated for what I want to achieve.
I have a Python (2.7) script that creates 3 BaseHTTPServers using threads. I now want the script to be able to close itself and restart. For this, I create an extra file called "restart_script" with this content:
sleep 2
python2 myScript.py
I then start this script and after that, close my own python script:
os.system("nohup bash restart_script & ")
exit()
This works quite well: the Python script closes and the new one pops up 2 seconds later, but the BaseHTTPServers do not come up; they report that the address is already in use (socket.error, Errno 98).
I initiate the server with:
httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
Then I let it serve forever:
thread.start_new_thread(httpd.serve_forever, tuple())
I alternatively tried this:
httpd_thread = threading.Thread(target=httpd.serve_forever)
httpd_thread.daemon = True
httpd_thread.start()
But this has the same result.
If I kill the script with Ctrl+C and then start it again right after that, everything works fine. I think that as long as the script restarts itself, the old process is still somehow active, and I need to somehow disown it so that the sockets can be cleared.
I am running on Linux (Xubuntu).
How can I really really kill my own script and then bring it up again seconds later so that all sockets are closed?
I found an answer to my specific problem.
I just use another script that starts my main program using os.system(). When the script wants to restart, it simply exits normally and the other script starts it again, over and over...
If I want to actually stop my script, I create a file, and the other script checks whether this file exists (see the small sketch after the helper script below).
The restart-helper-script looks like this:
import os, time

cwd = os.getcwd()

# first start --> remove shutdown:
try:
    os.remove(os.path.join(cwd, "shutdown"))
except:
    pass

while True:
    # check if shutdown requested:
    if os.path.exists(os.path.join(cwd, "shutdown")):
        break
    # else start script:
    os.system("python2 myMainScript.py")
    # after it is done, wait 2 seconds: (just to make sure sockets are closed.. might be optional)
    time.sleep(2)
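On the main script's side, a plain exit means "restart me", while creating the sentinel file first tells the helper to stop for good. A small sketch of that side (the function names are just examples):
import os
import sys

def request_restart():
    # no sentinel file: the helper script will simply start us again
    sys.exit(0)

def shutdown_for_good():
    # create the sentinel file the helper script checks for, then exit
    open(os.path.join(os.getcwd(), "shutdown"), "w").close()
    sys.exit(0)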
I take notes in class on my computer and share these notes via a public folder on dropbox. When I take notes in class, I create a lot of unnecessary files (I take notes in LaTeX) before I generate a PDF. I don't want to clutter my dropbox space with the unnecessary files, and would rather post only the PDFs to dropbox.
In order to facilitate all of this, I set up a cronjob that runs a python script (below) after every class (weekly). Sometimes, I stay back for a few minutes while I fix something in my notes before I export a PDF, so the python script has a bunch of sleeps in it, waiting for the PDF to be generated. I accidentally manually ran that script today, and need help stopping it.
import os
import subprocess
from sys import exit as crash
from datetime import date as dt
from time import sleep

def getToday():
    answer = dt.strftime(dt.today(), "%b") + str(int(dt.strftime(dt.today(), "%d")))
    return answer

def zipNotes(date):
    today = getToday()
    while 1:
        if today not in os.listdir('.'):
            with open("FuzzyLog", 'a') as logfile:
                logfile.write("Sleeping\n")
            sleep(60*5)  # sleep 5 minutes
            continue
        if "Notes.pdf" not in os.listdir(today):
            with open("FuzzyLog", 'a') as logfile:
                logfile.write("pdf not exported. Sleeping\n")
            sleep(60*5)  # sleep 5 minutes
            continue
        subprocess.call("""zip Notes.zip */Notes.pdf""", shell=True)
        crash(0)

zipNotes(getToday())
Since the script doesn't find any files made today (I could easily just create a dummy file, but that's not a "proper" solution), it loops through the sleep condition infinitely. Since the looping conditions are quite simple, the process isn't active for long enough for me to "catch it in the act" and grab its PID to kill it.
ps aux | grep python doesn't show me which PID is running the python script I want to kill, nor does ps -ax | grep python or ps -e | grep python.
Does anyone have any idea how I can track a python script while it's sleeping?
I'm on Mac OSX 10.7.5 (Lion), if that matters
The traditional way Unix daemons handle this problem is with "pid files".
When your program starts up (or when your launcher starts your program up), it creates a file in a well-known location (/var/run/<PROGRAM NAME>.pid for system daemons; per-user daemons don't have a universal equivalent), writes the PID into that file (as in the ASCII string 12345\n for PID 12345), and deletes it when it exits. So, you can just kill $(cat /var/run/myprogram.pid).
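A minimal per-user version of that in Python could look like this (the file name and location are just examples):
import atexit
import os

PIDFILE = os.path.expanduser('~/.fuzzy_notes.pid')

with open(PIDFILE, 'w') as f:
    f.write('%d\n' % os.getpid())

atexit.register(os.remove, PIDFILE)  # remove the file again on a clean exit
After that, kill $(cat ~/.fuzzy_notes.pid) stops the sleeping script without having to hunt for its PID.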
However, it doesn't seem like you need this here. You could easily design the program to be shut down cleanly, instead of designing it to be killable.
Or, even easier, remove the sleep; instead of having cron run your script every hour and having the script sleep for a minute at a time until the PDF file is created, just have launchd run the script when the PDF is created.