Python script completes, but Windows Task Scheduler still shows it as running

I wrote a small script to clear out stopped torrents in Transmission. The script writes a log, does its work in Transmission, and exits. Script below:
# Module imports
import transmissionrpc
import os
import logging
import sys
import datetime

# Set variables before the main logic
logdir = 'D:\\scripts\\logs'
myDate = datetime.datetime.now().strftime("%y-%m-%d")
myTime = datetime.datetime.now().strftime("%H:%M")
myDateTime = datetime.datetime.now().strftime("%y-%m-%d %H:%M")

if not os.path.exists(logdir):
    os.makedirs(logdir)

logger = logging.getLogger('transmissionrpc')
logdate = datetime.datetime.now().strftime("%y-%m-%d %H%M")
logfile = logdir + "\\CTS-" + logdate + '.log'
hdlr = logging.FileHandler(logfile)
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.INFO)

logger.info("Begin Transmission clean")
tc = transmissionrpc.Client('localhost', port=9091, user='USER', password='PASS')
for t in tc.get_torrents():
    if t.status == 'stopped':
        tc.remove_torrent(t.id, delete_data=True)
        print('Removing Torrent %s - %s' % (t.id, t.name))
        logger.info('Removing Torrent %s - %s' % (t.id, t.name))
logger.info("No more stopped torrents. Exiting")
sys.exit()
I'm running the script in Task Scheduler as pythonw D:\Path\to\script.py.
How can I get Task Scheduler to properly show that the script has ended?

Add these lines to your code:
import os
print(os.getpid())
Compare the number it prints with the process ID shown in Task Scheduler. If they don't match, Task Scheduler is tracking a different process from the one your script actually ran in.
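Since pythonw runs with no console, the printed PID won't be visible anywhere; a minimal sketch that writes the PID into the script's own log instead (reusing the 'transmissionrpc' logger the question already configures, purely as an assumption about where the output should go):
import os
import logging

# Assumes the 'transmissionrpc' logger from the script above is already
# configured with a FileHandler, so the PID lands in the same log file.
logger = logging.getLogger('transmissionrpc')
logger.info("Script running as PID %s", os.getpid())
Checking that PID against the one Task Scheduler reports tells you whether the scheduler is watching the right process.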

Related

Python polling library: show a message for each iteration

A polling method is implemented and it runs every second to check the request status. Is it possible to add a log line for each iteration of the polling?
result = poll(
lambda: getSomething(),
timeout=100,
step=1,
check_success=IsPollingSuccessfull
)
I need something like,
Waiting for the response + time
Waiting for the response + time
Waiting for the response + time
Waiting for the response + time
EDIT:
I want to print log to the console.
Have you considered Python's logging module? Here is the documentation.
You can create a logger instance that saves all messages to a file. Then you can use it everywhere in your code and log anything you'd like at different logging levels.
Here is how I create and use the logger:
# Method to create an instance of a logger
import logging

def set_logger(context, file_name, verbose=False):
    logger = logging.getLogger(context)
    logger.setLevel(logging.DEBUG if verbose else logging.INFO)
    formatter = logging.Formatter(
        '[%(asctime)s][%(levelname)s]:' + context + ':[%(filename).30s:%(funcName).30s:%(lineno)3d]:%(message)s',
        datefmt='%Y-%m-%d %H:%M:%S\x1b[0m')
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
    console_handler.setFormatter(formatter)
    logger.handlers = []
    logger.addHandler(console_handler)
    file_handler = logging.FileHandler(path_to_save_logger + file_name)  # path_to_save_logger: directory of your choice
    file_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    return logger
Then create the instance and use it:
from termcolor import colored
my_logger = set_logger(colored('GENERAL', 'yellow'), "/tmp/my_logger.txt", verbose=True)
my_logger.info("this is an info message")
my_logger.debug("this is a debug message")
.....
EDIT: assuming you're using polling2.poll()
You can add logging to your poll() call via the log argument - documentation:
import logging

poll(lambda: getSomething(),
     timeout=100,
     step=1,
     check_success=IsPollingSuccessfull,
     log=logging.DEBUG)
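If you specifically want a custom "Waiting for the response" line on every iteration, another sketch (assuming polling2, and reusing the question's getSomething and IsPollingSuccessfull names) is to log inside the polled target itself, since the target runs once per step:
import datetime
import logging
from polling2 import poll

logging.basicConfig(level=logging.INFO)

def get_something_logged():
    # Called once per polling step, so each iteration produces one log line.
    logging.info("Waiting for the response %s", datetime.datetime.now())
    return getSomething()

result = poll(
    get_something_logged,
    timeout=100,
    step=1,
    check_success=IsPollingSuccessfull
)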

Python Watchdog with Slurm Output

I'm trying to use python-watchdog to monitor the output of SLURM jobs on a supercomputer. For some reason, the watchdog program isn't detecting changes in the files, even though a tail -f shows that the file is indeed being changed. Here's my watchdog program:
import logging
import socket
import sys
import time

from watchdog.observers import Observer
from watchdog.events import PatternMatchingEventHandler

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s - %(message)s',
                    datefmt='%Y-%m-%d %H:%M:%S',
                    filename="/work/ollie/pgierz/PISM/pindex_vostok_ds50/scripts/pindex_vostok_ds50.watchdog")

def on_created(event):
    logging.info(f"hey, {event.src_path} has been created!")

def on_deleted(event):
    logging.info(f"what the f**k! Someone deleted {event.src_path}!")

def on_modified(event):
    logging.info(f"hey buddy, {event.src_path} has been modified")

def on_moved(event):
    logging.info(f"ok ok ok, someone moved {event.src_path} to {event.dest_path}")

if __name__ == "__main__":
    if "ollie" in socket.gethostname():
        logging.info("Not watching on login node...")
        sys.exit()
    # Only do this on compute node:
    patterns = "*"
    ignore_patterns = "*.watchdog"
    ignore_directories = False
    case_sensitive = True
    my_event_handler = PatternMatchingEventHandler(
        patterns, ignore_patterns, ignore_directories, case_sensitive
    )
    my_event_handler.on_created = on_created
    my_event_handler.on_deleted = on_deleted
    my_event_handler.on_modified = on_modified
    my_event_handler.on_moved = on_moved
    path = "/work/ollie/pgierz/PISM/pindex_vostok_ds50/scripts"
    #path = "/work/ollie/pgierz/PISM/pindex_vostok_ds30/"
    go_recursively = True
    my_observer = Observer()
    my_observer.schedule(my_event_handler, path, recursive=go_recursively)
    my_observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        my_observer.stop()
        my_observer.join()
This is just a suspicion, but could it be that the filesystem doesn't actually register the file as being "changed" since it is still open from the batch job? Doing an ls -l or stat on the output files shows it was "modified" when the job started. Do I need to tell slurm to "flush" the file?
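If the suspicion about the filesystem is right, inotify-style events may simply never arrive on a network or parallel filesystem. watchdog also ships a polling-based observer that periodically stats the watched tree instead of relying on kernel events; a minimal sketch of that swap (same handler, path, and recursion flag as in the script above, and assuming the installed watchdog version provides PollingObserver):
from watchdog.observers.polling import PollingObserver

# PollingObserver re-scans the directory on a timer rather than waiting for
# filesystem events, which tends to behave better on network filesystems.
my_observer = PollingObserver(timeout=5)  # re-scan roughly every 5 seconds
my_observer.schedule(my_event_handler, path, recursive=go_recursively)
my_observer.start()
The rest of the script (the try/except loop, stop() and join()) stays the same.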

Python inotify to monitor for IN_CLOSE_WRITE and IN_MOVED_TO events

I am monitoring a directory for new files being moved into it or created.
Upon detecting a new file, I call another Python script to process it.
#!/usr/bin/python
import os
import signal
import sys
import logging
import inotify.adapters
import subprocess

_DEFAULT_LOG_FORMAT = ''
_LOGGER = logging.getLogger(__name__)

def _configure_logging():
    _LOGGER.setLevel(logging.DEBUG)
    ch = logging.StreamHandler()
    formatter = logging.Formatter(_DEFAULT_LOG_FORMAT)
    ch.setFormatter(formatter)
    _LOGGER.addHandler(ch)

def exit_gracefully(signum, frame):
    signal.signal(signal.SIGINT, original_sigint)
    sys.exit(1)

signal.signal(signal.SIGINT, exit_gracefully)

def main():
    i = inotify.adapters.Inotify()
    i.add_watch(b'/home/sort/tmp')
    try:
        for event in i.event_gen():
            if event is not None:
                if 'IN_MOVED_TO' in event[1] or 'IN_CLOSE_WRITE' in event[1]:
                    (header, type_names, watch_path, filename) = event
                    _LOGGER.info("%s FILENAME=%s/%s",
                                 type_names,
                                 watch_path.decode('utf-8'), filename.decode('utf-8'))
                    fnp = str(event[2] + "/" + event[3])
                    print fnp
                    proc = subprocess.Popen([orgpath, fnp], stderr=subprocess.STDOUT, bufsize=1)
                    #proc.communicate()
    finally:
        i.remove_watch(b'/home/sort/tmp')

if __name__ == '__main__':
    _configure_logging()
    orgdir = os.path.dirname(os.path.realpath(sys.argv[0]))
    orgpath = os.path.join(orgdir, "organize.py")
    original_sigint = signal.getsignal(signal.SIGINT)
    signal.signal(signal.SIGINT, exit_gracefully)
    print("Watching /home/sort/tmp for new files")
    main()
The end goal is to process only one file at a time, as I call an API to scrape for metadata. Too many calls to the API in a short period could result in the API key being banned or temporarily blocked.
Right now, when I copy more than one file into the monitored directory, the processing script gets called on each file at the same time.
Try putting a for loop around the code that runs the Python file:
for files in directory:
    ...code that runs the python file
If it is still running too fast, you can put a timer on it to throttle the API calls:
import time

for files in directory:
    ...code that runs the python file
    time.sleep(5)
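Another option, building on the asker's own script, is to make each call blocking so that files are processed strictly one after another; the script above already has proc.communicate() commented out. A sketch of that change, plus a throttle (the 5-second pause is an arbitrary figure, not anything the API documents):
import time

# Inside the event loop: wait for organize.py to finish before handling the
# next file, then pause briefly to stay under the API rate limit.
proc = subprocess.Popen([orgpath, fnp], stderr=subprocess.STDOUT, bufsize=1)
proc.communicate()  # blocks until organize.py exits
time.sleep(5)       # arbitrary delay between API-heavy runs
Because inotify queues the events, the remaining files are still picked up one by one when the loop comes back around (subject to the kernel's inotify queue limits).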

Python logging: a different file for each loop iteration

I am using a Python 2.7 script that runs 24/7. I want a different log file produced by the logging module each time the loop executes. Each file would have a timestamp as its filename to avoid confusion.
So far I've got:
def main():
    while True:
        datetimenow = datetime.datetime.now().strftime("%Y-%m-%d-%H:%M:%S")
        logging.basicConfig(format='%(asctime)s %(levelname)-8s %(message)s',
                            datefmt='%a, %d %b %Y %H:%M:%S', filename="logs/" + datetimenow + '.log',
                            level=logging.INFO)
        logging.getLogger().addHandler(logging.StreamHandler())
        # ... start of action

if __name__ == "__main__":
    main()
This produces one file, and when the loop starts again it doesn't close it and open a new one.
Also, it seems that the console output is doubled, as each line is printed to the console twice.
Any ideas how to fix these?
OK, I got it working by removing the basicConfig snippet and building two handlers: one inside the loop for the file, with a different timestamp each pass, and one at module level for the console. The key is to remove the file handler at the end of the loop, before adding it again with a different date. Here is the complete example:
import logging
import time
import datetime

logger = logging.getLogger('simple_example')
logger.setLevel(logging.INFO)
con = logging.StreamHandler()
con.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
con.setFormatter(formatter)
logger.addHandler(con)

def main():
    a = 0
    while True:
        datetimenow = datetime.datetime.now().strftime("%Y-%m-%d-%H:%M:%S")
        ch = logging.FileHandler("logs/" + datetimenow + '.log')
        ch.setLevel(logging.INFO)
        formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
        ch.setFormatter(formatter)
        logger.addHandler(ch)
        time.sleep(5)
        a += 1
        logger.warning("logging step " + str(a))
        time.sleep(5)
        logger.removeHandler(ch)

if __name__ == "__main__":
    main()
The time.sleep(5) calls are just for testing, so that it doesn't loop too fast.
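For completeness, the same idea can be packaged in a small helper that swaps the file handler at the top of each pass; this is only a sketch of the approach above (the helper name is made up here), not a different mechanism:
import datetime
import logging

def start_new_logfile(logger, previous_handler=None):
    # Hypothetical helper: drop the old file handler (if any) and attach a
    # fresh one named after the current timestamp.
    if previous_handler is not None:
        logger.removeHandler(previous_handler)
        previous_handler.close()
    stamp = datetime.datetime.now().strftime("%Y-%m-%d-%H:%M:%S")
    handler = logging.FileHandler("logs/" + stamp + ".log")
    handler.setLevel(logging.INFO)
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)-8s %(message)s'))
    logger.addHandler(handler)
    return handler
Calling ch = start_new_logfile(logger, ch) at the top of the loop (with ch = None before the first pass) replaces the addHandler/removeHandler pair shown above.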

No shell prompt message, just a blinking cursor after starting a Python script as a daemon?

python-daemon-1.5.2-1.el6.noarch
Below is the script that I received from a developer:
import threading
import multiprocessing, os, signal, time, Queue
import time
from suds.client import Client
from hotqueue import HotQueue
from config import config

queue = HotQueue(config['redis_hotqueue_list'], host=config['redis_host'], port=int(config['redis_port']), password=config['redis_pass'], charset="utf-8", db=0)

@queue.worker()
def sendMail(item):
    key = item[0]
    domain = item[1]
    fromemail = item[2]
    fromname = item[3]
    subject = item[4]
    content = item[5]
    toemail = item[6]
    cc = item[7]
    bcc = item[8]
    replyto = item[9]
    # Convert to string variable
    url = config['sendmail_tmdt_url']
    client = Client(url)
    client.service.send_mail(key, domain, fromemail, subject, content, toemail, fromname, '', '', '')

for i in range(10):
    t = threading.Thread(target=sendMail)
    t.setDaemon(True)
    t.start()

while True:
    time.sleep(50)
As you can see, he's using the threading module so it can be run as a daemon.
I'm going to switch to using the python-daemon library, following this blog post.
Here's my first try:
from daemon import runner
import logging
import time
import threading
import multiprocessing, os, signal, time, Queue
import time
from suds.client import Client
from hotqueue import HotQueue
from config import config

class Mail():
    def __init__(self):
        self.stdin_path = '/dev/null'
        self.stdout_path = '/dev/tty'
        self.stderr_path = '/dev/tty'
        self.pidfile_path = '/var/run/sendmailworker/sendmailworker.pid'
        self.pidfile_timeout = 1

    def run(self):
        while True:
            queue = HotQueue(config['redis_hotqueue_list'], host=config['redis_host'], port=int(config['redis_port']), password=config['redis_pass'], charset=r"utf-8", db=0)

            @queue.worker()
            def sendMail(item):
                key = item[0]
                domain = item[1]
                fromemail = item[2]
                fromname = item[3]
                subject = item[4]
                content = item[5]
                toemail = item[6]
                cc = item[7]
                bcc = item[8]
                replyto = item[9]
                # Convert to string variable
                url = config['sendmail_tmdt_url']
                client = Client(url)
                client.service.send_mail(key, domain, fromemail, subject, content, toemail, fromname, '', '', '')
                logger.debug("result")
            #sleep(50)

mail = Mail()
logger = logging.getLogger("sendmailworker")
logger.setLevel(logging.INFO)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler = logging.FileHandler("/var/log/sendmailworker/sendmailworker.log")
handler.setFormatter(formatter)
logger.addHandler(handler)
daemon_runner = runner.DaemonRunner(mail)
daemon_runner.daemon_context.files_preserve = [handler.stream]
daemon_runner.do_action()
It works, but I have to press Ctrl-C to get the shell prompt back after starting it:
/etc/init.d/sendmailworker start
Starting server
# started with pid 2586
^C
#
How can I get rid of this problem?
Appending an ampersand doesn't help:
# /etc/init.d/sendmailworker start &
[1] 4094
# Starting server
started with pid 4099
^C
[1]+ Done /etc/init.d/sendmailworker start
#
As @Celada pointed out, I actually already had my shell prompt, but it doesn't display [root@hostname ~]# as usual, just a blinking cursor. Simply pressing Enter makes my shell prompt reappear. So the question should be: how do I make the started with pid xxxxx message come first, on the same line as Starting server, and then display my shell prompt?
The stop function is working fine:
[root@hostname ~]# /etc/init.d/sendmailworker stop
Stopping server
Terminating on signal 15
[root@hostname ~]#
How can I do something similar for the start function? Something like this:
[root@hostname ~]# /etc/init.d/sendmailworker start
Starting server
started with pid 30624
[root@hostname ~]#
You can get your expected behaviour by changing:
self.stdout_path = '/dev/tty'
self.stderr_path = '/dev/tty'
to:
self.stdout_path = '/dev/null'
self.stderr_path = '/dev/null'
In your case, I'd recommend writing the init script as a plain shell script.
FYI, I cannot find any documentation for runner other than its source code.
