Python/bash: wget a file and see if its contents match the current Unix timestamp

Scenario
On one web server, a user hits execute, which produces a file called now.txt containing the current Unix timestamp.
On the client machines I need a way to check whether that file's timestamp is within 5 minutes of the current time and, if so, run another command.
I was thinking of having a cron job on each client machine run every 5 minutes, wget the file from the web server, read the Unix timestamp from its contents, and compare it with the current time.
Not sure if that makes sense, and not sure if I have over-egged it, so it would be good to get some advice.
Python script:
wget the file
check the file's timestamp is within 5 minutes of the current time
run another command
import wget
from datetime import datetime

url = 'http://example.com/test/now.txt'
filename = wget.download(url)
try:
    f = open("now.txt", "rb")
    age = datetime.utcnow() - datetime.utcfromtimestamp(long(f.read()))
except:
    pass

To read a Unix timestamp saved in a remote file and compare it with the current time on the client (assuming the clocks are synchronized, e.g. via NTP):
#!/usr/bin/env python3
import time
from urllib.request import urlopen

with urlopen('http://example.com/test/now.txt') as r:
    timestamp = int(r.read())

if abs(time.time() - timestamp) < 300:
    print('the current time is within 5 minutes')
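To cover the last step in the question (run another command when the timestamp is fresh), here is a minimal sketch using subprocess; the command path is a placeholder for whatever should actually run on the client:
#!/usr/bin/env python3
import subprocess
import time
from urllib.request import urlopen

with urlopen('http://example.com/test/now.txt') as r:
    timestamp = int(r.read())

# 300 seconds = 5 minutes
if abs(time.time() - timestamp) < 300:
    # placeholder command; replace with whatever the client machine should run
    subprocess.run(['/usr/local/bin/do-something.sh'], check=True)
The script itself can then be the cron job scheduled every 5 minutes, so there is no need for a separate wget step.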

Related

Python script doesn't work when started via another script

I'm currently working on a Raspberry Pi 4 and wrote a Python script that sends a mail with a picture, then renames the file and puts it in another folder.
The script works fine when I start it with the command
sudo python script.py
but when I start it from another script it won't execute the renaming part.
Now the question: what is my mistake?
import os
import time
from sendmail import mail
from sendmail import file_rename
from time import sleep

pic = '/home/pi/Monitor/Bewegung.jpg'
movie = '/home/pi/Monitor/Aufnahme.avi'
archiv = '/home/pi/Archiv/'
time = time.strftime('%d.%m.%Y %H:%M')

mail(filename=pic)
file_rename(oldname=pic, name='Serverraum Bild' + time, format='.jpg', place=archiv)
file_rename(oldname=movie, name='Serverraum Video' + time, format='.avi', place=archiv)
I see that you are starting the script as a user with sudo privileges.
but when I start it from another script it won't execute the renaming part
This makes me suspect that the calling script does not have the correct permissions to rename/move the file. You can view the permissions of the script with the following command:
ls -la callerscript.py
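It can also help to check, from inside the calling script, whether the user it actually runs as can write to the paths involved. A small sketch (the paths are taken from the question; nothing else is assumed about how the caller is invoked):
import os

paths = ['/home/pi/Monitor/Bewegung.jpg',
         '/home/pi/Monitor/Aufnahme.avi',
         '/home/pi/Archiv/']

for p in paths:
    # os.access checks against the real uid/gid of the running process
    print(p, 'writable' if os.access(p, os.W_OK) else 'NOT writable')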

Python subprocess script failing

I have written the script below to delete files in a folder that don't fall within the "keep" period, i.e. delete everything except files whose names partly match one of the dates to keep.
The command works from the shell but fails when run via subprocess.
/bin/rm /home/backups/!(*"20170920"*|*"20170919"*|*"20170918"*|*"20170917"*|*"20170916"*|*"20170915"*|*"20170914"*)
#!/usr/bin/env python
from datetime import datetime
from datetime import timedelta
import subprocess

### Editable Variables
keepdays=7
location="/home/backups"

count=0
date_string=''
for count in range(0,keepdays):
    if(date_string!=""):
        date_string+="|"
    keepdate = (datetime.now() - timedelta(days=count)).strftime("%Y%m%d")
    date_string+="*\""+keepdate+"\"*"

full_cmd="/bin/rm "+location+"/!("+date_string+")"
subprocess.call([full_cmd], shell=True)
This is what the script returns:
#./test.py
/bin/rm /home/backups/!(*"20170920"*|*"20170919"*|*"20170918"*|*"20170917"*|*"20170916"*|*"20170915"*|*"20170914"*)
/bin/sh: 1: Syntax error: "(" unexpected
Python version is Python 2.7.12
Just as @hjpotter said, subprocess uses /bin/sh as the default shell, which doesn't support the kind of globbing you want to do (see the official documentation). You can change that with the executable parameter of subprocess.call(), pointing it at a more appropriate shell (/bin/bash or /bin/zsh, for example): subprocess.call([full_cmd], executable="/bin/bash", shell=True) (note that bash only honours !(...) patterns when its extglob option is enabled).
BUT you can be a lot better served by Python itself; you don't need to call a subprocess to delete files:
#!/usr/bin/env python
from datetime import datetime
from datetime import timedelta
import re
import os
import os.path

### Editable Variables
keepdays=7
location="/home/backups"

now = datetime.now()
keeppatterns = set((now - timedelta(days=count)).strftime("%Y%m%d") for count in range(0, keepdays))
for filename in os.listdir(location):
    dates = set(re.findall(r"\d{8}", filename))
    if not dates or dates.isdisjoint(keeppatterns):
        abs_path = os.path.join(location, filename)
        print("I am about to remove", abs_path)
        # uncomment the line below when you are sure it won't delete any valuable file
        #os.remove(abs_path)

Looking for a way to de-reference a bash var wrapped in a python command call

I'm trying to find a way to de-reference goldenClusterID so I can use it in an AWS CLI command to terminate my cluster. This program compensates for the dynamic job-flow numbers generated each day so that the normal scheduled Data Pipeline shutdown is applicable. I can os.system("less goldenClusterID") all day and it gives me the right answer, but it won't give up the goodies with a straight de-reference. Suggestions?
from __future__ import print_function
import json
import urllib
import boto3
import commands
import os
import re
import datetime
import awscli
foundCluster = ""
rawClusterNum = ""
mainClusterNum = ""
goldenClusterID = ""
# Next, we populate the list file with clusters currently active
os.system("aws emr list-clusters --active >> foundCluster")
# We search for a specific Cluster Name
os.system("fgrep 'AnAWSEMRCluster' foundCluster")
os.system("grep -B 1 DrMikesEMRCluster foundCluster >> rawClusterNum")
# Look for the specific Cluster ID in context with it's Cluster Name
os.system("fgrep 'j-' rawClusterNum >> mainClusterNum")
# Regex the Cluster ID from the line
os.system("grep -o '\j-[0-9a-zA-Z]*' mainClusterNum >> goldenClusterID")
# Read the Cluster ID from the file and run AWS Terminate on it
os.system("aws emr describe-cluster --cluster-id %s" % goldenClusterID")
os.system("aws emr terminate-clusters --cluster-ids goldenClusterID")
os.system("rm *")
Never mind, I figured it out. Too much coffee and not enough sleep. The answer is to use:
goldkeyID=open('goldenClusterID', 'r').read()
os.system("aws emr describe-cluster --cluster-id %s" % goldkeyID)

Sqlite python - attempt to write a read only database

I have a simple Python script that puts data in a database. Both the script and
the database are owned by www-data. When I run sudo python and type the
commands one by one it works, but if I run python monitor.py or sudo python monitor.py it doesn't work; it says "attempt to write a read only database".
This is my script (it receives data from an Arduino):
from serial import Serial
from time import sleep
import sqlite3

serial_port = '/dev/ttyACM0'
serial_bauds = 9600

# store the temperature in the database
def log_light(value):
    conn = sqlite3.connect('/var/db/arduino.db')
    curs = conn.cursor()
    curs.execute("UPDATE sensor1 set status = (?)", (value,))
    # commit the changes
    conn.commit()
    conn.close()

def main():
    s = Serial(serial_port, serial_bauds)
    s.write('T')
    sleep(0.05)
    line = s.readline()
    temperature = line
    s.write('H')
    sleep(0.05)
    line = s.readline()
    humidity = line
    s.write('L')
    sleep(0.05)
    line = s.readline()
    light = line
    log_light(light)

if __name__ == "__main__":
    main()
It sounds like a permission problem: write access to the database is granted only to its owner. Either change the owner (chown) so it matches the user the script actually runs as, or give write access to the group (chmod). Note that SQLite also needs write access to the directory containing the database file, since it creates a journal file next to it.
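As a quick diagnostic, here is a small sketch (the database path is taken from the question) that prints the owner and permission bits of the database file and its directory, so it is obvious which user or group needs write access:
import os
import pwd
import stat

db_path = '/var/db/arduino.db'   # path taken from the question

# SQLite needs write access to both the file and its directory
for path in (db_path, os.path.dirname(db_path)):
    st = os.stat(path)
    owner = pwd.getpwuid(st.st_uid).pw_name
    print(path, owner, oct(stat.S_IMODE(st.st_mode)))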

Python: service dies but PID remains; log file updates every minute

I have a service that, as the title says, dies and leaves a stale PID behind. This particular service logs every minute, so basically I want to create a Python script that checks the log file's modification time; if the file has not been updated in, say, 3 minutes, restart the service.
I am new to scripting/programming, so I need help with the logic here please.
This is what I have so far to do the check on the file age:
#!/usr/bin/python
from os import path
from datetime import datetime, timedelta

#file = "/var/log/file.log"

def check_file(seconds, file_name):
    try:
        time_diff = datetime.now() - timedelta(seconds=seconds)
        # use the modification time, since the service writes to the log every minute
        file_time = datetime.fromtimestamp(path.getmtime(file_name))
        if file_time < time_diff:
            return [True, "File: %s. Older than %s seconds, file_time: %s, time_diff: %s" % (file_name, seconds, file_time, time_diff)]
        else:
            return [False, "File: %s. Not older than %s seconds, file_time: %s, time_diff: %s" % (file_name, seconds, file_time, time_diff)]
    except Exception as e:
        return [False, e]
Before writing custom code to handle this problem (and then having two problems), I'd look at:
fixing the service itself so that it doesn't die,
adding monitoring to that service at the box level (see this for example).
If these options are not practical, I would start with a bash based solution over Python.
In either case, you'll need to make sure that the restarting script doesn't die either.
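If you do stay with Python, a minimal sketch of wiring the age check to a restart could look like this (the service name and log path are placeholders; a systemd-managed service is assumed):
#!/usr/bin/env python
import subprocess
from os import path
from datetime import datetime, timedelta

LOG_FILE = "/var/log/file.log"   # placeholder path from the question
SERVICE = "example.service"      # placeholder service name
MAX_AGE = 180                    # 3 minutes

def is_stale(file_name, seconds):
    # stale if the log has not been modified within the last `seconds`
    cutoff = datetime.now() - timedelta(seconds=seconds)
    return datetime.fromtimestamp(path.getmtime(file_name)) < cutoff

if __name__ == "__main__":
    if is_stale(LOG_FILE, MAX_AGE):
        subprocess.call(["systemctl", "restart", SERVICE])
Run it from cron every few minutes; cron then plays the role of the watchdog that doesn't itself die.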
