Read Apache log file in real time and send an e-mail - Python

I need to read an Apache log file in real time from the server, and if some string is found an e-mail has to be sent. I have adapted the code found here to read the log file. Next, how do I send this e-mail? Do I have to issue a sleep command? Please advise.
Note: since this is real time, after sending the e-mail the Python program has to begin reading the log file again. This process continues.
import time
import os

# open the file
filename = '/var/log/apache2/access.log'
file = open(filename, 'r')

while 1:
    where = file.tell()
    line = file.readline()
    if not line:
        time.sleep(1)
        file.seek(where)
    else:
        if 'MyTerm' in line:
            print line

Well, if you want it to be real time and not get stuck on sending mails, you could start a separate thread to send the email. Here is how you use threads in Python (thread and threading):
http://www.tutorialspoint.com/python/python_multithreading.htm
Next, you can easily send an email in Python using smtplib. Here is another example from the same website (which I use and it is pretty good):
http://www.tutorialspoint.com/python/python_sending_email.htm
You need to do this to keep the log-reading loop as fast as possible and to be sure it never waits on mailing.
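A minimal sketch of that combination, reusing the polling loop above (the SMTP host, the addresses and the search string are assumptions; adjust them to your setup):
import smtplib
import threading
import time
from email.mime.text import MIMEText

def send_alert(line):
    # build and send one notification; runs in its own thread so the
    # log-reading loop is never blocked by a slow mail server
    msg = MIMEText('Found in access.log: ' + line)
    msg['Subject'] = 'Apache log alert'
    msg['From'] = 'alerts@example.com'       # assumption
    msg['To'] = 'admin@example.com'          # assumption
    server = smtplib.SMTP('localhost')       # assumption: local MTA
    server.sendmail(msg['From'], [msg['To']], msg.as_string())
    server.quit()

logfile = open('/var/log/apache2/access.log', 'r')
while 1:
    where = logfile.tell()
    line = logfile.readline()
    if not line:
        time.sleep(1)
        logfile.seek(where)
    elif 'MyTerm' in line:
        threading.Thread(target=send_alert, args=(line,)).start()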
Now some pitfalls you have to take care of:
You must be careful not to start too many threads. For instance, suppose you are parsing (let's just assume for the moment) the log every second, but sending an email takes 10 seconds. It is easy to see (this is an exaggerated example, of course) that you will start many threads and fill the available resources. I don't know how often the string you are expecting will pop up each second, but it is a scenario you must consider.
Again depending on the workload, you can implement a streaming algorithm and avoid emails entirely. I don't know if it applies in your case, but I prefer to remind you about this scenario too.
You can create a queue, put a certain number of messages in it and send them together, thus avoiding sending many mails at once (again assuming you don't need to trigger an alarm for every single occurrence of your target string); a sketch follows.
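A rough sketch of that batching idea (BATCH_SIZE and the send_alert helper from the sketch above are assumptions; adapt the policy to your alert volume):
import threading

BATCH_SIZE = 20          # assumption: tune to how often the string appears
pending = []
pending_lock = threading.Lock()

def queue_alert(line):
    # collect matches and send one combined mail per batch instead of
    # one mail (and one thread) per matching line
    with pending_lock:
        pending.append(line)
        if len(pending) >= BATCH_SIZE:
            body = '\n'.join(pending)
            del pending[:]
            threading.Thread(target=send_alert, args=(body,)).start()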
UPDATE
If you really want to create the perfect program, you can do something else: use event triggering when the log file is modified. This way you avoid sleep entirely, and each time something is appended to the file your handler will be called, so you can parse the new content and send the email if required. Take a look at watchdog (a sketch follows the links below):
http://pythonhosted.org/watchdog/
and this:
python pyinotify to monitor the specified suffix files in a dir
https://github.com/seb-m/pyinotify
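For the event-driven variant, a minimal watchdog sketch might look like this (the path, the search string and the print placeholder are assumptions; you would call your mailer instead):
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

LOGFILE = '/var/log/apache2/access.log'

class LogHandler(FileSystemEventHandler):
    def __init__(self):
        self.logfile = open(LOGFILE, 'r')
        self.logfile.seek(0, 2)            # start at the end of the file

    def on_modified(self, event):
        if event.src_path != LOGFILE:
            return
        # read only the lines appended since the last event
        for line in self.logfile.readlines():
            if 'MyTerm' in line:
                print(line)                # send the e-mail here instead

observer = Observer()
observer.schedule(LogHandler(), path='/var/log/apache2/', recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()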

Related

What is the most efficient way to run independent processes from the same application in Python

I have a script that ultimately executes two functions. It polls for data on a time interval (it runs as a daemon, and the data is retrieved from a shell command run on the local system) and, once it receives this data, will: 1.) function 1 - write this data to a log file, and 2.) function 2 - inspect the data and send an email IF the data meets certain criteria.
The logging will happen every time, but the alert may not. The issue is that, in cases where an alert needs to be sent, if the email connection stalls or takes a long time to connect to the server, it obviously causes the next polling of the data to stall (for an unknown amount of time, depending on the server), and in my case it is very important that the polling interval remains consistent (for analytics purposes).
What is the most efficient way, if any, to keep the email process working independently of the logging process while still operating within the same application and depending on the same data? I was considering creating a separate thread for the mailer, but that kind of seems like overkill in this case.
I'd rather not set a short timeout on the email connection, because I want to give the process some chance to connect to the server, while still allowing the logging to be written consistently on the given interval. Some code:
def send(self, msg_):
    """
    Send the alert message
    :param str msg_: the message to send
    """
    self.msg_ = msg_
    ar = alert.Alert()
    ar.send_message(msg_)

def monitor(self):
    """
    Post to the log file and
    send the alert message when
    applicable
    """
    read = r.SensorReading()
    msg_ = read.get_message()  # the data
    if msg_:  # if there is data in general...
        x = read.get_failed()  # store bad data
        msg_ += self.write_avg(read)
        msg_ += "==============================================="
        self.ctlog.update_templog(msg_)  # write general data to log
        if x:
            self.send(x)  # if bad data, send...
This is exactly the kind of case you want to use threading/subprocesses for. Fork off a thread for the email, which times out after a while, and keep your daemon running normally.
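A self-contained sketch of that hand-off (the sleep stands in for a stalled SMTP connection; in the real code the thread target would be self.send):
import threading
import time

def send_alert(msg_):
    # stand-in for the real mailer (alert.Alert().send_message in the question)
    time.sleep(10)                       # simulate a slow or stalled connection
    print('alert sent: ' + msg_)

def monitor_once(msg_, bad):
    print('logged: ' + msg_)             # logging always happens on schedule
    if bad:
        t = threading.Thread(target=send_alert, args=(bad,))
        t.daemon = True                  # a stuck mail won't block shutdown
        t.start()                        # returns immediately

# the polling loop keeps its fixed interval regardless of mail delays
for i in range(3):
    monitor_once('reading %d' % i, 'bad value' if i == 1 else None)
    time.sleep(1)
time.sleep(10)   # only so this demo lives long enough to see the alert arrive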
Possible approaches that come to mind:
Multiprocessing
Multithreading
Parallel Python
My personal choice would be multiprocessing as you clearly mentioned independent processes; you wouldn't want a crashing thread to interrupt the other function.
You may also refer to this before making your design choice: Multiprocessing vs Threading Python
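If you go the multiprocessing route, the hand-off looks much the same; a minimal sketch (the sleep again stands in for a stalled mail server):
import multiprocessing
import time

def send_alert(msg_):
    # runs in its own process, so a hung or crashing mailer cannot
    # take the polling/logging process down with it
    time.sleep(10)
    print('alert sent: ' + msg_)

if __name__ == '__main__':
    p = multiprocessing.Process(target=send_alert, args=('bad data',))
    p.start()
    print('logging continues on schedule')   # main process is not blocked
    p.join()                                  # wait at shutdown, or pass a timeout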
Thanks everyone for the responses. It helped very much. I went with threading, but also updated the code to be sure it handled failing threads. I ran some regressions and found that the subsequent processes were no longer being interrupted by stalled connections and the log was being updated on a consistent schedule. Thanks again!!

Best practice for writing text to file when abrupt program closure is likely with Python

I'm working in an environment where I'd like to leave a script running overnight but, for policy reasons, cannot assume the PC will be left powered on (it auto-shuts down ungracefully after an arbitrary time period).
I have a Python script that is writing to a text file. During testing, when I ungracefully terminate the program, on some occasions a line of text will only be partially written out to the file. I'm also using the csv module.
Attempt at approximate code here:
import csv

outCSV = open("filename.txt", "a")
# more code here for writing a multiline non-CSV "header" block if the file doesn't already exist
csvWriter = csv.writer(outCSV, lineterminator='\n')

# loop through a list, using values to derive other data for writing out later
lookupList = range(5)
for row in lookupList:
    # function to return a list of data elements from a web source for the CSV writer, using range(100) for mock data
    outDataRow = range(100)
    csvWriter.writerow(outDataRow)
    # save after each row in case the script is closed abruptly
    outCSV.flush()
print "done!"
I realize the above example is trivial; it probably runs too fast to reliably close the script while csvWriter.writerow() is in the middle of writing out a line. The actual project involves checking some web-based content, where each URL takes up to 15 seconds to load and then writes potentially hundreds of items to a line. I'm looking more for a conceptual answer (I suspect the issue is when csvWriter.writerow(outDataRow) is still executing and the program closes).
So far the best idea I've had is to build in an error checker that goes over the output (once I restart the next day), looks for incomplete records and redoes those lines. Wondering if there is a smarter way?
P.S. I tried searching, but even picking effective keywords was difficult; pardon if this is a duplicate question (add the keywords used to find it in the comments?)
I think you should be looking at the signal module. Here is a brief explanation related to your case.
When the operating system shuts down or restarts, it first sends signals to all running programs, effectively telling them "please close now!". Programs are responsible for cleaning up and closing. Signals are used for other things too, but this is the part relevant to you.
In Python (and other languages), we write a function to handle a signal, that is, to clean up and exit when the signal is received. Signal handling is done like this in Python:
import signal  # you need this to handle signals
import time    # used in this simple example

# open a file
f = open('textfile.txt', 'w')

# signal handler. it takes two parameters:
# signum: the signal number. there are many signals, each with its own number
# frame: the current stack frame at the moment the signal arrived; not needed
#        for simple clean-up like this
def sig_handler(signum, frame):
    # print debugging information
    print('got', signum, 'signal, closing file...')
    # close the file before exiting
    f.close()
    print('exiting')
    # exit. you have to call this to end the program from here
    exit(1)

# register the signal handler:
# when the SIGTERM signal is received, call sig_handler.
# if the program receives the signal without a handler registered for it,
# it will terminate without any clean-up
signal.signal(signal.SIGTERM, sig_handler)

# infinite loop. will never end unless a signal stops the program
while True:
    # write to the file
    print('test line', file=f)
    # delay to simulate slow writing to the file
    time.sleep(1)
You should find out which signal is sent when the operating system shuts down, or simply handle all possible signals with the same handler. I think SIGTERM is the one used to terminate processes during shutdown, but I am not sure.
There is one signal you can never handle (SIGKILL on Unix-like systems; Windows has its own forceful termination), and your program will simply be forced to close without any clean-up. But that one is sent only in rare cases (e.g. the program hangs and does not close after SIGTERM or other signals are sent).
Hope this helps...

Python2: How to parse a logfile that is held open in another process reliably?

I'm trying to write a Python script that will parse a logfile produced by another daemon. This is being done on Linux. I want to be able to parse the log file reliably.
In other words, periodically, we run a script that reads the log file, line by line, and does something with each line. The logging script would need to see every line that may end up in the log file. It could run say once per minute via cron.
Here's the problem that I'm not sure exactly how to solve. Since the other process has a write handle to the file, it could write to the file at the same time that I am reading from the same log file.
Also, every so often we would want to clear this logfile so its size does not get out of control. But the process producing the log file has no way to clear the file other than regularly stopping, truncating or deleting the file, and then restarting. (I feel like logrotate has some method of doing this, but I don't know if logrotate depends on the daemon being aware, or if it's actually closing and restarting daemons, etc. Not to mention I don't want other logs rotated, just this one specific log; and I don't want this script to require other possible users to setup logrotate.)
Here's the problems:
Since the logger process could write to the file while I already have an open file handle, I feel like I could easily miss records in the log file.
If the logger process were to decide to stop, clear the log file, and restart, and the log analyzer didn't run at exactly the same time, log entries would be lost. Similarly, if the log analyzer causes the logger to stop logging while it analyzes, any information the logger daemon drops because it isn't listening would also be lost.
If I were to use a method like "note the size of the file since last time and seek there if the file is larger", then what would happen if, for some reason, between runs, the logger reset the logfile, but then had reason to log even more than it contained last time? E.g. We execute a log analyze loop. We get 50 log entries, so we set a mark that we have read 50 entries. Next time we run, we see 60 entries. But, all 60 are brand new; the file had been cleared and restarted since the last log run. Instead we end up seeking to entry 51 and missing 50 entries! Either way it doesn't solve the problem of needing to periodically clear the log.
I have no control over the logger daemon. (Imagine we're talking about something like syslog here. It's not syslog but same idea - a process that is pretty critical holds a logfile open.) So I have no way to change its logging method. It starts at init time, opens a log file, and writes to it. We want to be able to clear that logfile AND analyze it, making sure we get every log entry through the Python script at some point.
The ideal scenario would be this:
The log daemon runs at system init.
Via cron, the Python log analyzer runs once per minute (or once per 5 minutes or whatever is deemed appropriate)
The log analyzer collects every single line from the current log file and immediately truncates it, causing the log file to be blanked out. Python maintains the original contents in a list.
The logger then continues to go about its business, with the now blank file. In the mean time, Python can continue to parse the entries at its leisure from the Python list in memory.
I've very, very vaguely studied FIFOs, but am not sure if that would be appropriate. In that scenario the log analyzer would run as a daemon itself, while the original logger writes to a FIFO. I have very little knowledge in this area, however, and don't know if it'd be a solution or not.
So I guess the question really is twofold:
How to reliably read EVERY entry written to the log from Python? Including if the log grows, is reset, etc.
How, if possible, to truncate a file that has an open write handle? (Ideally, this would be something I could do from Python; I could do something like logfile.readlines(); logfile.truncate() so that no entries would get lost. But it seems like, unless the logger process was well aware of this, it would end up causing more problems than it solves.)
Thanks!
I don't see any particular reason why you shouldn't be able to read a log file created by syslogd. You are saying that you are using some process similar to syslog, and that this process is keeping your log file open? Since you are asking for ideas, I would recommend you use syslog! http://pic.dhe.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=%2Fcom.ibm.ztpf-ztpfdf.doc_put.cur%2Fgtpc1%2Fhsyslog.html
It is running anyway, so use it. An easy way to write to the log is the logger command:
logger "MYAP: hello"
In a Python script you can do it like this:
import os
os.system('logger "MYAP: hello"')
Also remember you can actually configure syslogd. http://pic.dhe.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=%2Fcom.ibm.ztpf-ztpfdf.doc_put.cur%2Fgtpc1%2Fconstmt.html
Also, about your problem with empty logs: syslog is not clearing logs. There are other tools for that; on Debian, for example, logrotate is used. In this scenario, if your log is empty, you can check the backup file created by logrotate.
Since it looks like your problem is with the logging tool, my advice would be to use syslog for logging and another tool for rotating logs. Then you can easily parse the logs. And if by any means (I don't know if it is even possible with syslog) you miss some data, remember you will get it in the next iteration anyway ;)
Another idea would be to copy your logfile and work with the copy...
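A minimal sketch of that copy-then-parse idea (the paths and the per-line check are assumptions; it does not by itself solve the truncation bookkeeping):
import shutil

SOURCE = '/var/log/mydaemon.log'         # assumption: the daemon's log file
SNAPSHOT = '/tmp/mydaemon.log.snapshot'  # assumption: any scratch location

# take a point-in-time copy so parsing never races the writer
shutil.copyfile(SOURCE, SNAPSHOT)

with open(SNAPSHOT) as snapshot:
    for line in snapshot:
        # replace this with the real per-line handling
        if 'ERROR' in line:
            print(line.rstrip())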

How to sleep a python script running as a cronjob?

I wrote a python script to monitor a log file on a CentOS server for a specific value and send an email when it finds it. It runs as a cron every 5 minutes.
My question is what is the best way to put this script to sleep after it has sent the first email. I don't want it to be sending emails every 5 mins, but it needs to wake up and check the log again after an hour or so. This is assuming the problem can be fixed in under an hour. The people who are receiving the email don't have shell access to disable the cron.
I thought about sleep but I'm not sure if cron will try to run the script again if another process is active (sleeping).
cron will absolutely run the script again. You need to think this through a little more carefully than just "sleep" and "email every 10 minutes."
You need to write out your use cases.
System sends message and user does something.
System sends message and user does nothing. Why email the user again? What does 2 emails do that 1 email didn't do? Perhaps you should SMS or email someone else.
How does the user register that something was done? How will they cancel or stop this cycle of messages?
What if something is found in the log, an email is sent and then (before the sleep finishes) the thing is found again in the log. Is that a second email? It is two incidents. Or is that one email with two incidents?
@Lennart, @S. Lott: I think the question was somewhat the other way around - the script runs as a cron job every five minutes, but after sending an error email it shouldn't send another for at least an hour (even if the error state persists).
The obvious answer, I think, is to save a self-log - for each problem detected, an id and a timestamp for the last time an email was sent. When a problem is detected, check the self-log; if the last email for this problem-id was less than an hour ago, don't send the email. Then your program can exit normally until called again by cron.
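A rough sketch of that self-log (the file location, JSON format and one-hour window are assumptions):
import json
import os
import time

STATE_FILE = '/var/tmp/alert_state.json'   # assumption: where the self-log lives
SUPPRESS_SECONDS = 3600                    # don't repeat an email within an hour

def should_send(problem_id):
    # load the last-sent timestamps, if any
    state = {}
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            state = json.load(f)
    if time.time() - state.get(problem_id, 0) < SUPPRESS_SECONDS:
        return False
    # record that we are emailing about this problem now
    state[problem_id] = time.time()
    with open(STATE_FILE, 'w') as f:
        json.dump(state, f)
    return True
Each cron run then calls should_send('some-problem-id') when it finds the string, and skips the email when it returns False.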
When your script sends the email, make it also create a text file "email_sent.txt". Then make it check for the existence of this text file before sending email. If it exists, don't send the email. If it does not exist, send the email and create the text file.
The text file serves as an indicator that the email has already been sent and does not need to be sent again.
You are running it every five minutes. Why would you sleep it? Just exit. If you want to make sure it doesn't send email every five minutes, then make the program only send an email if there is anything to send.
If you sleep it for an hour, and run it every five minutes, after an hour you'll have 12 copies running (and twelve emails sent) so that's clearly not the way to go forward. :-)
Another way to go about this might be to run your script as a daemon and, instead of having cron run it every five minutes, put your logic in a loop. Something like this...
import time

while True:
    # check_my_logfile() looks for what you want.
    # If it finds what you're looking for, it sends
    # an email and returns True.
    if check_my_logfile():
        # Then you can sleep for 10 minutes.
        time.sleep(600)
    else:
        # Otherwise, you can sleep for 5 minutes.
        time.sleep(300)
Since you are monitoring a log file, it might be worth looking into things that already do log file monitoring. Logwatch is one, but there are log analysis tools that handle all of these things for you:
http://chuvakin.blogspot.com/2010/09/on-free-log-management-tools.html
is a good wrap-up of some options. They would handle yelling at people. There are also system monitoring tools such as OpenNMS or Nagios; they do these things too.
I agree with what other people have said above: cron ALWAYS runs the job at the specified time. There is a tool called at which lets you run jobs in the future, so you could schedule a job 5 minutes out and then, at runtime, decide when you need to run again and submit a job to at for whatever time that is (be it 5 minutes, 10 minutes or an hour). You'd still need to keep state somewhere (like what @infrared said) to figure out what got sent when, and whether you should still care.
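A sketch of that at hand-off (the script path is a placeholder; this assumes at is installed and the atd service is running):
import subprocess

def schedule_next_check(minutes):
    # at(1) reads the command to run from stdin and the time spec
    # from its arguments, e.g. "at now + 60 minutes"
    command = '/usr/bin/python /path/to/check_log.py\n'   # placeholder path
    proc = subprocess.Popen(['at', 'now', '+', str(minutes), 'minutes'],
                            stdin=subprocess.PIPE, universal_newlines=True)
    proc.communicate(command)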
I'd still suggest using a system monitoring tool, which would easily grow and scale and handles people being able to say 'I'm working on XX NOW stop yelling at me' for instance.
Good luck!

Progress bar with long web requests

In a Django application I am working on, I have just added the ability to archive a number of files (starting at 50 MB in total) to a zip file. Currently, I am doing something like this:
get files to zip
zip all files
send HTML response
Obviously, this causes a big wait at step two, where the files are being compressed. What can I do to make this process a whole lot better for the user? Having a progress bar would be best, but even a static page saying 'please wait' or whatever would be an improvement.
Any thoughts and ideas would be loved.
You should keep in mind that showing a progress bar may not be a good idea, since you can get timeouts or make your server suffer from lots of simultaneous requests.
Put the zipping task in a queue and have a callback notify the user somehow - by e-mail, for instance - that the process has finished.
Take a look at django-lineup
Your code will look pretty much like:
from lineup import registry
from lineup import _debug
def create_archive(queue_id, queue):
    queue.set_param("zip_link", _create_archive(resource=queue.context_object, user=queue.user))
    return queue

def create_archive_callback(queue_id, queue):
    _send_email_notification(subject=queue.get_param("zip_link"), user=queue.user)
    return queue

registry.register_job('create_archive', create_archive, callback=create_archive_callback)
In your views, create queued tasks by:
from lineup.factory import JobFactory
j = JobFactory()
j.create_job(self, 'create_archive', request.user, your_resource_object_containing_files_to_zip, { 'extra_param': 'value' })
Then run your queue processor (probably inside of a screen session):
./manage.py run_queue
Oh, and on the subject, you might also be interested in estimating zip file creation time. I got pretty slick answers there.
Fun fact: You might be able to use a progress bar to trick users into thinking that things are going faster than they really are.
http://www.chrisharrison.net/projects/progressbars/index.html
You could use a 'log-file' to keep track of the zipped files, and of how many files still remain.
The procedural way should be like this:
Count the number of files and write it to a text file, in a format like totalfiles.filesprocessed
Every time you zip a file, simply update that log
So, if you have to zip 3 files, the log file will grow as:
3.0 -> begin, no files processed yet
3.1 -> 1 file of 3 processed, 33% of the task complete
3.2 -> 2 files of 3 processed, 66% of the task complete
3.3 -> 3 files of 3 processed, 100% of the task complete
Then, with a simple AJAX function (on an interval), check the log file every second.
In Python, opening, reading and writing such a small file should be very quick, but it might cause some trouble if many users are doing it at the same time. You'll obviously need to create a log file for each request, maybe with a random name, and delete it after the task is completed.
One problem is that, to let the AJAX call read the log file, you'll need to open and close the file handle in Python every time you update it.
Alternatively, for a more accurate progress meter, you could even use file sizes instead of the number of files as the parameter.
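On the writing side, a sketch of that progress log (the zipping itself and the file names are placeholders):
import zipfile

def zip_with_progress(paths, zip_path, progress_path):
    total = len(paths)
    with zipfile.ZipFile(zip_path, 'w') as archive:
        for done, path in enumerate(paths, start=1):
            archive.write(path)
            # rewrite the log as "total.processed" after every file so the
            # polling AJAX call can compute a percentage
            with open(progress_path, 'w') as progress:
                progress.write('%d.%d' % (total, done))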
Better than a static page, show a JavaScript dialog (using Shadowbox, jQuery UI or some custom method) with a throbber (you can get some at http://www.ajaxload.info/). You can also show the throbber in your page, without dialogs. Most users only want to know their action is being handled, and can live without reliable progress information ("Please wait, this could take some time...").
jQuery UI also has a progress bar API. You could make periodic AJAX queries to a dedicated page on your website to get a progress report and update the progress bar accordingly. Depending on how often the archiving is run, how many users can trigger it and how you authenticate your users, this could be quite hard.
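The server side of that polling could be a small view that returns the contents of a progress file like the one sketched above (URL wiring, the file location and an authenticated user are assumptions):
from django.http import HttpResponse

def zip_progress(request):
    # return "total.processed" so the page's JavaScript can update the bar
    progress_path = '/tmp/zip_progress_%s.log' % request.user.pk   # placeholder
    try:
        with open(progress_path) as f:
            return HttpResponse(f.read(), content_type='text/plain')
    except IOError:
        # no archive running (or it has finished and cleaned up)
        return HttpResponse('0.0', content_type='text/plain')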
