I am new to Python. I need to create a door.lock file that contains the current date and time, and I need to overwrite this file every x minutes with the current date and time. I'm using it as a pseudo lock file so that, on startup, the software can tell whether it previously crashed and how long ago. My issue is that I can't seem to overwrite the file; so far I have failed at both creating and appending to it. I created the following as a test:
from datetime import datetime, timedelta
ending = False
LOCK_FILENAME = "door.lock" # The lock file
LOCK_FILE_UPDATE = True
MINS_LOCK_FILE_UPDATE = 1 # the (x) time in minutes to write to lock file
NEXT_LOCK_FILE_UPDATE = datetime.now()
lock_file = open(LOCK_FILENAME, "w")
now = datetime.now()
NOW_STRING1 = str(now.strftime("%Y-%m-%d_%a_%H:%M"))
lock_file.write(NOW_STRING1)
print "First Now String"
print NOW_STRING1
# ==============================================================================
# Main Loop:
while ending is False:
    # ==========================================================================
    # Check if it is time to do a LOCK FILE time update
    now = datetime.now()
    NOW_STRING1 = str(now.strftime("%Y-%m-%d_%a_%H:%M"))
    if LOCK_FILE_UPDATE:  # if LOCK_FILE_UPDATE is set to True in DM settings
        if NEXT_LOCK_FILE_UPDATE <= datetime.now():
            lock_file.write(NOW_STRING1)
            print NOW_STRING1
            NEXT_LOCK_FILE_UPDATE = datetime.now() + timedelta(minutes=MINS_LOCK_FILE_UPDATE)
When I cat the resulting door.lock file, it is empty. Will someone pinpoint my error(s) for me? TIA
You need to push the buffer to the file. You can do that with a close() and a re-open for the next write.
lock_file.close()
...
lock_file = open(LOCK_FILENAME, "a")
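Applied to the loop from the question, a minimal sketch might look like this (it reuses the setup and variable names from your code; calling lock_file.flush() after each write would also work, without reopening):

lock_file = open(LOCK_FILENAME, "w")
lock_file.write(NOW_STRING1)
lock_file.close()  # closing pushes the buffered text to disk

while ending is False:
    now = datetime.now()
    NOW_STRING1 = str(now.strftime("%Y-%m-%d_%a_%H:%M"))
    if LOCK_FILE_UPDATE and NEXT_LOCK_FILE_UPDATE <= datetime.now():
        lock_file = open(LOCK_FILENAME, "a")  # re-open for the next write
        lock_file.write(NOW_STRING1)
        lock_file.close()
        NEXT_LOCK_FILE_UPDATE = datetime.now() + timedelta(minutes=MINS_LOCK_FILE_UPDATE)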
If you are logging events, you'd be better off using a logger than a plain text file.
The solution from #MAC will work, except that it will append, and it seems you don't want that. So just open again with the 'w' option, or better yet use 'w+', so the file is truncated (which, from what I gather, is what you want) and can also be read.
Also, keep in mind your changes won't get written down until you close the file (having said that, consider opening/closing inside your loop instead; see the sketch after the snippet below).
lock_file = open(LOCK_FILENAME, "w+")
now = datetime.now()
NOW_STRING1 = str(now.strftime("%Y-%m-%d_%a_%H:%M"))
lock_file.write(NOW_STRING1)
# your loop and so on ...
lock_file.close()
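A minimal sketch of that open/close-inside-the-loop variant, reusing the names from the question:

while ending is False:
    now = datetime.now()
    NOW_STRING1 = str(now.strftime("%Y-%m-%d_%a_%H:%M"))
    if LOCK_FILE_UPDATE and NEXT_LOCK_FILE_UPDATE <= datetime.now():
        # "w" truncates, so each update overwrites the previous timestamp,
        # and leaving the with-block closes (and flushes) the file right away
        with open(LOCK_FILENAME, "w") as lock_file:
            lock_file.write(NOW_STRING1)
        NEXT_LOCK_FILE_UPDATE = datetime.now() + timedelta(minutes=MINS_LOCK_FILE_UPDATE)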
I have a data logger to record the temperature. I want to save these readings along with the epoch time in a CSV file. I tried the following code; no error is reported, but the CSV file stays empty. Can anyone help me figure out the problem?
import board
import busio
import adafruit_mcp9600
import time
i2c = busio.I2C(board.SCL,board.SDA,frequency = 100000)
mcp = adafruit_mcp9600.MCP9600(i2c, 0x60, tctype = "J")
with open("/home/pi/Documents/test.csv", "a") as log:
    while True:
        temp = mcp.temperature
        temptime = time.time()
        log.write("{0},{1}\n".format(str(temptime), str(temp)))
        time.sleep(1)
Assuming the libraries you're importing work correctly, I think this is because the writer is not flushing its buffer, so it looks like nothing is being written.
The solution is to flush with log.flush() each time you write a log line.
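Applied to your code, that would look like this (a sketch; the sensor setup above is assumed unchanged):

with open("/home/pi/Documents/test.csv", "a") as log:
    while True:
        temp = mcp.temperature
        temptime = time.time()
        log.write("{0},{1}\n".format(temptime, temp))
        log.flush()  # push the buffered row to disk immediately
        time.sleep(1)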
To see the buffering behavior in isolation, try a simpler example:
A)
import time

def go():
    i = 0
    with open("/home/some/dir/test.csv", "a") as nice:
        while True:
            nice.write(f"hello,{i},{time.time()}\n")
            i += 1
            time.sleep(5)

if __name__ == "__main__":
    go()
versus
B)
import time

def go():
    i = 0
    while True:
        with open("/home/some/dir/test.csv", "a") as nice:
            nice.write(f"hello,{i},{time.time()}\n")
        i += 1
        time.sleep(5)

if __name__ == "__main__":
    go()
When I refresh the file in case A, new rows do not appear to be written. They are in case B, though.
If I modify case A) and add nice.flush() after each write, it fixes the issue.
The above two blocks are just to demonstrate what you're seeing. I'm not suggesting you do one or the other. Ultimately, I would not suggest doing anything like this, and I would instead use the logging package and configure a proper logger if you're indeed trying to create log files.
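If you do go the logging route, a minimal sketch might look like this (the file name and format string are placeholders; logging.FileHandler flushes after every record by default, which is exactly the behavior you're missing):

import logging
import time

logging.basicConfig(
    filename="temps.log",              # placeholder log file
    level=logging.INFO,
    format="%(created)f,%(message)s",  # epoch time, then the message
)

while True:
    logging.info("%s", 23.5)  # replace 23.5 with your mcp.temperature reading
    time.sleep(1)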
I am simply trying to add a counter around my for loop to count how many files in my directory qualify by containing 'VCCS'. The iteration logic works: the loop runs once per file in the directory. However, my counter keeps reporting 1. The relevant files_in_directory lines are marked with the comment # here.
I am also getting this warning from Pylint on files_in_directory = 0: Constant name "files_in_directory" doesn't conform to UPPER_CASE naming style pylint(invalid-name).
I've tried moving the = 0 initialization above the for and above the try. Any thoughts?
if __name__ == "__main__":
    try:
        currentDT = datetime.datetime.now()
        files_in_directory = 0  # here
        for filename in os.listdir(config.DIRECTORY_LOCATION):
            if filename.__contains__('VCCS'):
                old_stdout = sys.stdout
                log_file = open("./logs/metrics.log", "w")
                sys.stdout = log_file
                files_in_directory += 1  # here
                PENDING_RECORDS = FindPendingRecords().get_excel_data()
                # Do operations on PENDING_RECORDS
                # Reads excel to map data from excel to vital
                MAP_DATA = FindPendingRecords().get_mapping_data()
                # Configures Driver
                VITAL_ENTRY = VitalEntry()
                # Start chrome and navigate to vital website
                VITAL_ENTRY.instantiate_chrome()
                # Begin processing Records
                VITAL_ENTRY.process_records(PENDING_RECORDS, MAP_DATA)
                print(f"Date: ")
                print(str(currentDT))
                print(f"Files in Directory #{files_in_directory}")  # here
                sys.stdout = old_stdout
                log_file.close()
    except Exception as exc:
        # print(exc)
        raise
Note: this answer takes the place of many comments, in the interest of readability.
Your question as it sits is not an MCVE (Minimal, Complete, Verifiable Example). To make it more succinct and to identify the exact cause:
import os

if __name__ == "__main__":
    # Remove try block, just let it raise an error
    my_counter = 0
    for file in os.listdir("some_directory"):
        # you don't need to call the __contains__ method
        # as the 'in' keyword will invoke that for you
        if "VCCS" in file:
            # increment your counter first
            my_counter += 1
            print(file, my_counter)
Now there is no question about what is modifying my_counter, and this will print the file you are looking at alongside the counter.
Once you've ironed out that behavior, you can start adding your other functions back in:
import os

if __name__ == "__main__":
    # Remove try block, just let it raise an error
    my_counter = 0
    for file in os.listdir("some_directory"):
        if 'VCCS' in file:
            my_counter += 1
            print(my_counter, file)
            # Add functions back in one by one
            PENDING_RECORDS = FindPendingRecords().get_excel_data()
Continue this process until you identify what is causing your behavior. As it stands, I don't see anything explicit that could be overwriting that counter variable, so I suspect that either A) the code you posted does not reflect what is being run or B) you are modifying/resetting files_in_directory somewhere else in the module.
Suggested edit:
I would recommend you add the other code from your module, to see what's going on. That way we have a clearer picture of what's happening as your code runs.
First things first: are you sure you're passing the right directory, and that this directory does contain more than one file with VCCS in its filename?
I would also try running this code without the try/except block, to see whether you get an error after the first increment.
LMK what you get; I hope this helps.
I am dealing with a CRON job that places a text file containing 9000 lines of device names.
The job recreates the file every day with an updated list from a network crawler in our domain.
What I was running into is that, with the following worker running my import into my database, db.[name].id kept growing using the method below.
scheduler.py
# -*- coding: utf-8 -*-
from gluon.scheduler import Scheduler

def demo1():
    db(db.asdf.id > 0).delete()
    db.commit()
    with open('c:\(project)\devices.list') as f:
        content = f.readlines()
        for line in content:
            db.asdf.insert(asdf=line)
    db.commit()

mysched = Scheduler(db, tasks=dict(demo1=demo1))
default.py (initial kickoff)
#auth.requires_membership('!Group-IS_MASTER')
def rgroup():
    mysched.queue_task('demo1', start_time=request.now, stop_time=None, prevent_drift=True, repeats=0, period=86400)
    return 'you are member of a group!'
So the next time the job kicked off, it would start at db.[name].id = 9001, and every day the ID would grow by 9000 or so, depending on the crawler's return. It just looked sloppy, and I didn't want to run into issues years down the road with database limitations that I don't know about.
(I'm a DB newb. I know, I don't know stuff.)
SOOOOOOO.....
This is what I came up with, and I don't know whether it is best practice or not. One issue I ran into is that when using db.[name].drop() in the same function that creates entries, the table didn't exist and my job status went to 'FAILED'. So I define the table in the job; see below:
scheduler.py
from gluon.scheduler import Scheduler

def demo1():
    db.asdf.drop()  # <===== Kill db.asdf
    db.commit()     # <===== Commit Kill
    db.define_table('asdf', Field('asdf'), auth.signature)  # <==== Phoenix Rebirth!!!
    with open('c:\(project)\devices.list') as f:
        content = f.readlines()
        for line in content:
            db.asdf.insert(asdf=line)
    db.commit()  # <=========== Magic

mysched = Scheduler(db, tasks=dict(demo1=demo1))
Regarding the "Phoenix Rebirth" line in the comments above: is that the best way to achieve my goal?
It starts my IDs back at 1, which is what I want, but is that how I should be going about it?
Thanks!
P.S. Forgive my example with windows dir structure as my current non-prod sandbox is my windows workstation. :(
Why wouldn't you check whether the line is already present before inserting its corresponding record?
...
with open('c:\(project)\devices.list') as f:
    content = f.readlines()
    for line in content:
        # distinguishing t_ for tables and f_ for fields
        db_matching_entries = db(db.t_asdf.f_asdf == line).select()
        if len(db_matching_entries) == 0:
            db.t_asdf.insert(f_asdf=line)
        else:
            # here you could update your record, just in case ;-)
            pass
db.commit()  # <=========== Magic
I've got a similar process that takes a few seconds to complete with 2k-3k entries. Yours should not take longer than half a minute.
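As a side note, if your web2py version is recent enough, the DAL's update_or_insert() can collapse that select-then-insert pair into a single call. A sketch, untested against your schema:

with open('c:\(project)\devices.list') as f:
    for line in f:
        # inserts the device name if no record matches,
        # otherwise updates the matching record in place
        db.t_asdf.update_or_insert(db.t_asdf.f_asdf == line, f_asdf=line)
db.commit()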
I would like to run a script every five minutes via the Windows Task Scheduler. The script reads a JSON web service of service requests and writes specified fields to an Esri geodatabase.
Each service request carries a timestamp, so the data can be queried by time.
My question is how do I create the logic in this script to say, "Hey, I ran successfully for 12:00PM-12:05:59PM, something happened for 12:06-12:11:59, but I will run for 12:11-12:16:59, and since I am so nice I will grab data for 12:06-12:11."
The logic that I have built into my code so far is:
import datetime
DateofDataCreation = 2015-02-17 16:53:25
i = 5
Start = datetime.datetime.now()
now_minus_5 = Start - datetime.timedelta(minutes =i)
if DateofDataCreation >= now_minus_5:
    WriteToDatabase
else:
    print "No Current Data"
I am not sure I understand your question exactly. Based on what I think you are asking, try this:
import datetime

# If you change the time to be after now(), then it will print "WriteToDatabase".
# If the time is before, then it will print "No Current Data".
DateofDataCreation = datetime.datetime(2015, 2, 17, 16, 53, 25)

i = 5
Start = datetime.datetime.now()
now_minus_5 = Start - datetime.timedelta(minutes=i)

if DateofDataCreation >= now_minus_5:
    print("WriteToDatabase")
else:
    print("No Current Data")
I have created a data transfer program using Python and the pyserial module. I am currently using it to send a text file over a radio link between a Raspberry Pi and my computer. The problem is that the file I am trying to send, which contains 5000 lines of text and is 93.0 KB in size, takes about a full minute to send, and I need this done within seconds. My radio device has a data rate of 250 kbps, which is obviously not being reached. I am sure there are many optimizations to be made with the file reading and such that would increase the transfer speed. Here is the code; any help would be greatly appreciated.
Code to send (located on the Raspberry Pi)
def s_file():
    print 'start'
    readline = lambda: iter(lambda: ser.read(1), "\n")
    name = "".join(readline())
    print name
    file_loc = directory_name + name
    sleep(1)
    print('Waiting for command from client to send file...')
    while "".join(readline()) != "<<SENDFILE>>":
        pass
    with open(file_loc) as FileObj:
        for lines in FileObj:
            ser.write(lines)
    ser.write("\n<<EOF>>\n")
    print 'done'
Code to receive (on my laptop)
def r_f_bird(self):  # send command to bird to start func,
    if ser_open == True:
        readline = lambda: iter(lambda: ser.read(1), "\n")
        NAME = self.tb2.get()
        ser.write('/' + NAME)
        print NAME
        sleep(0.5)
        ser.write('\n<<SENDFILE>>\n')
        start = clock()
        with open(str(NAME), "wb") as outfile:
            while True:
                line = "".join(readline())
                if line == "<<EOF>>":
                    break
                print >> outfile, line
        elapsed = clock() - start
        print elapsed
        ser.flush()
    else:
        pass
Perhaps the overhead of calling ser.read(1) once per byte is slowing things down. It seems like you have a \n at the end of each line, so try using pySerial's readline() method rather than rolling your own. Changing line = "".join(readline()) to line = ser.readline() ought to do it. You will also need to change your loop end condition to == "<<EOF>>\n".
You may also need to add a ser.flush() on the writing side.
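A sketch of the receiving loop with those changes applied (names taken from your code; the serial setup and surrounding class are assumed to exist):

start = clock()
with open(str(NAME), "wb") as outfile:
    while True:
        line = ser.readline()  # reads up to and including "\n"
        if line == "<<EOF>>\n":
            break
        outfile.write(line)  # the trailing "\n" is kept, so no print >> needed
elapsed = clock() - start
print elapsed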