Create dir with datetime name and subfiles within directory (Python)

I'm currently looking to create a directory on Linux using Python v2.7, with the directory name as the date and time (i.e. 27-10-2011 23:00:01). My code for this is below:
import time
import os
dirfmt = "/root/%4d-%02d-%02d %02d:%02d:%02d"
dirname = dirfmt % time.localtime()[0:6]
os.mkdir(dirname)
This code works fine and generates the directory as requested. Nonetheless, what I'd also like to do then is create two csv files and a log file within this directory, all with the same name. As the directory name is dynamically generated, I'm unsure how to move into this directory to create these files. I'd like the directory together with the three files to all have the same name (the csv files will be prefixed with a letter). So for example, given the above, I'd like a directory created called "27-10-2011 23:00:01" and then within it, two csv files called "a27-10-2011 23:00:01.csv" and "b27-10-2011 23:00:01.csv" and a log file called "27-10-2011 23:00:01.log".
My code for the file creation is below:
csvafmt = "a%4d-%02d-%02d %02d:%02d:%02d.csv"
csvbfmt = "b%4d-%02d-%02d %02d:%02d:%02d.csv"
logfmt = "%4d-%02d-%02d %02d:%02d:%02d.log"
csvafile = csvafmt % time.localtime()[0:6]
csvbfile = csvbfmt % time.localtime()[0:6]
logfile = logfmt % time.localtime()[0:6]
fcsva = open(csvafile, 'wb')
fcsvb = open(csvbfile, 'wb')
flog = open(logfile, 'wb')
Any suggestions how I can do this so that the second remains the same throughout? I appreciate this code would only take a split second to run but within that time, the second may change. I assume the key to this lies within altering "time.localtime" but I remain unsure.
Thanks

Sure, just save the time in a variable and then use that variable for the substitutions:
now = time.localtime()[0:6]
dirname = dirfmt % now
csvafile = os.path.join(dirname, csvafmt % now)
csvbfile = os.path.join(dirname, csvbfmt % now)
logfile = os.path.join(dirname, logfmt % now)
Edited to include creating the complete path to your csv and log files.
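Putting it all together, a minimal sketch using your own formats (you could also build the stamp once with time.strftime instead of %-formatting a tuple):

import os
import time

dirfmt = "/root/%4d-%02d-%02d %02d:%02d:%02d"
csvafmt = "a%4d-%02d-%02d %02d:%02d:%02d.csv"
csvbfmt = "b%4d-%02d-%02d %02d:%02d:%02d.csv"
logfmt = "%4d-%02d-%02d %02d:%02d:%02d.log"

# Capture the timestamp once so every name shares the same second
now = time.localtime()[0:6]

dirname = dirfmt % now
os.mkdir(dirname)

fcsva = open(os.path.join(dirname, csvafmt % now), 'wb')
fcsvb = open(os.path.join(dirname, csvbfmt % now), 'wb')
flog = open(os.path.join(dirname, logfmt % now), 'wb')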

Only call time.localtime once.
current_time = time.localtime()[0:6]
csvafile = csvafmt % current_time
csvbfile = csvbfmt % current_time
logfile = logfmt % current_time

Related

How to move a copy of specific files in one folder to another folder based on having the latest "Created date" in Python?

I have a folder that contains about 5,000 files and I need to move the files that start with certain names to another folder. The issue is that I need to only take the file with the most recent created date for that name.
For example:
Let's say one of the names I've been given is "Sam Thompson" and there are three files that match that name...
Sam_Thompson_02012023
Sam_Thompson_02052023
Sam_Thompson_02102023 (let's assume this is the most recently created file)
I need to move only the last file to a different folder as it was created most recently.
I have figured out how to move files based on the name piece but have struggled to find resources online that closely match my issue.
The below code works perfectly -- it moves the files that start with Sam_Thompson to the appropriate folder. The part that doesn't work is the commented-out piece: and max(file_name, key=os.path.getctime):
I think I'm going about getting the max date by file name incorrectly so any input is appreciated. Most of the material I've looked at online is specific to scenarios where you just want to return the name of the latest file (not move it) and it's typically looking across the whole folder not by each given name.
import shutil
import os

source_folder = 'insert path'
destination_folder = 'insert path'
s = 'Sam_Thompson'

for file_name in os.listdir(source_folder):
    source = source_folder + file_name
    destination = destination_folder + file_name
    if os.path.isfile(source) and file_name.startswith(s):  # and max(file_name, key=os.path.getctime):
        shutil.copy(source, destination)
        print('Copied:', file_name)
The issue is that your file_name is only looking at one file at a time. You could store the date and then check whether each subsequent file has a more recent date, but it's easier to pull all the valid files at once with a list comprehension.
import os
import shutil

source_folder = ''
destination_folder = ''
s = ''

valid_files = [file for file in os.listdir(source_folder) if file.startswith(s)]
# pick the file with the most recent creation time
most_recent = max(valid_files, key=lambda x: os.path.getctime(os.path.join(source_folder, x)))

source = os.path.join(source_folder, most_recent)
destination = os.path.join(destination_folder, most_recent)
shutil.copy(source, destination)
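Since you mention being given several names, here is a sketch that loops the same logic over a list of them (the names list is an assumption; fill in your own):

import os
import shutil

source_folder = ''
destination_folder = ''
names = ['Sam_Thompson']  # assumed: the full list of given name prefixes

for s in names:
    valid_files = [f for f in os.listdir(source_folder) if f.startswith(s)]
    if not valid_files:
        continue  # no files for this name
    most_recent = max(valid_files, key=lambda f: os.path.getctime(os.path.join(source_folder, f)))
    shutil.copy(os.path.join(source_folder, most_recent),
                os.path.join(destination_folder, most_recent))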

Trying to loop a command to run multiple times

I'm trying to create a program that deletes files after X days. I got that part working, but now I need to write code that deletes the folder the files are located in ONLY if it's empty.
This is for a server that is taking log files from builds, and those are getting placed in thousands of other folders. I need to delete those upper level folders if they are empty
import os
import shutil
import sys
import time

# for path you need to use the location of the log files
path = "C:/GroupData"
# Check current working directory.
retval = os.getcwd()
# this will list the name of the current directory
print("Current working directory %s" % retval)
# Now change the directory
# You will put the same location as you did for path
os.chdir('C:/GroupData/Temp')
workdir = os.getcwd()
# Check current working directory.
retval = os.getcwd()
# this will list the new name of the working directory
print("Directory changed successfully %s" % retval)

from subprocess import call

def get_file_directory(file):
    return os.path.dirname(os.path.abspath(file))

# this code is getting what today's date is
now = time.time()
# files whose modified date is more than 10 days older than today will be deleted
cutoff = now - (10 * 86400)

files = os.listdir(os.path.join(get_file_directory('C://GroupData//Temp'), "temp"))
file_path = os.path.join(get_file_directory('C://GroupData//Temp'), "temp/")
# this locates what the file name is and looks to see how long ago it was modified
for xfile in files:
    files1 = os.listdir(os.path.join(get_file_directory('C://GroupData//Temp//' + xfile), xfile))
    file_path1 = os.path.join(get_file_directory('C://GroupData//Temp//' + xfile), xfile)
    for xfile1 in files1:
        if os.path.isfile(str(file_path1) + "\\" + xfile1):
            t = os.stat(str(file_path1) + "\\" + xfile1)
            # m is how long ago it was last modified
            m = t.st_mtime
            # if the modified date is older than the cutoff the file will be deleted
            if m < cutoff:
                os.remove(str(file_path1) + "\\" + xfile1)
files = os.listdir(os.path.join(get_file_directory('C://GroupData//Temp'), "temp"))
file_path = os.path.join(get_file_directory('C://GroupData//Temp'), "temp/")
os.rmdir(str(file_path1) + "\\")
The os.rmdir at the bottom works, but only for one folder at a time; I need it to repeat so that it can delete all the empty folders in one run, as this will be run automatically.
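One way to delete every empty folder in a single pass is to walk the tree bottom-up, so subfolders are handled before their parents. A minimal sketch, assuming the same C:/GroupData/Temp root as above:

import os

root = "C:/GroupData/Temp"  # assumed root; point this at the log location

# topdown=False walks bottom-up, so children are removed before parents are checked
for dirpath, dirnames, filenames in os.walk(root, topdown=False):
    # re-check on disk: children may just have been removed in an earlier iteration
    if dirpath != root and not os.listdir(dirpath):
        os.rmdir(dirpath)  # os.rmdir only removes empty directories

This could run right after the file-deletion loop, so each run first removes the old files and then sweeps away any folders left empty.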

Getting sub directories list in a text file and append that txt file with new subdirectory name

I am trying to write a script which will list all the subdirectories in a directory into a txt file.
This script will run every hour through a cron job, so I need it to append new subdir names to the txt file created in the previous run.
For example:
/Directory
/subdir1
/subdir2
/subdir3
The txt file should have the following columns:
subdir_name    timestamp     first_filename_in_that_subdir
subdir1        2015-23-12    abc.dcm
subdir2        2014-23-6     ghj.nii
.
.
.
I know how to get the list of directories using os.listdir, but I don't know how to approach this problem since I want to keep writing new names to the same txt file. Any idea how I should do that in Python?
Edit: With os.listdir I am getting the sub-directory names but not the timestamp. The other problem is: how can I create two columns, one with the sub-directory name and the other with its timestamp, as shown above?
With @Termi's help I got this code working:
import time
import os
from datetime import datetime

parent_dir = '/dicom/'
sub_dirs = os.walk(parent_dir).next()[1]

with open('exam_list.txt','a+') as f:
    lines = f.readlines()
    present_dirs = [line.split('\t')[0] for line in lines]
    for sub in sub_dirs[1:len(sub_dirs)]:
        sub = sub + '/0001'
        latest_modified = os.path.getctime(os.path.join(parent_dir,sub))
        if sub not in present_dirs and time.time() - latest_modified < 4600:
            created = datetime.strftime(datetime.fromtimestamp(latest_modified),'%Y-%d-%m')
            file_in_subdir = os.walk(os.path.join(parent_dir,sub)).next()[2][1]
            f.write("%s\t%s\t%s\n"%(sub,created,file_in_subdir))
This code, when typed into the Python terminal, works well, with the variables sub, created, and file_in_subdir all holding values; however, it does not write them to the file mentioned at the beginning of the code.
I also checked whether file writing itself was the problem, using the following code:
with open('./exam_list.txt','a+') as f:
    f.write("%s\t%s\n"%(sub,file_in_subdir))
Those two lines create the file properly, as I intended, so I am not able to point out what the error is.
To get the immediate sub-directories in the parent directory use os.walk('path/to/parent/dir').next()[1].
os.walk().next() gives a tuple (current_dir, [sub-dirs], [files]), so next()[1] gives the sub-directories.
Opening the file with 'a+' will allow you to both read and append to the file. Then store the sub-directories that are already in the file:
with open('dirname.txt','a+') as f:
    lines = f.readlines()
    present_dirs = [line.split('\t')[0] for line in lines]
Now for each sub-directory, check whether it is already present in the file and, if not, add it. Since you execute this every hour, you can restrict it to sub-directories created (or, on Linux systems, modified) in the last hour by using getctime:
time.time() - os.path.getctime(os.path.join(parent_dir,sub)) < 3600
Now for any new sub-directory, use os.walk('path/to/subdir').next()[2] to get the filenames inside:
import time
import os
from datetime import datetime

parent_dir = '/path/to/parent/directory'
sub_dirs = os.walk(parent_dir).next()[1]

with open('dirname.txt','a+') as f:
    lines = f.readlines()
    present_dirs = [line.split('\t')[0] for line in lines]
    for sub in sub_dirs:
        latest_modified = os.path.getctime(os.path.join(parent_dir,sub))
        if sub not in present_dirs and time.time() - latest_modified < 3600:
            created = datetime.strftime(datetime.fromtimestamp(latest_modified),'%Y-%d-%m')
            file_in_subdir = os.walk(os.path.join(parent_dir,sub)).next()[2][0]
            f.write("%s\t%s\t%s\n"%(sub,created,file_in_subdir))
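Note that .next() on the walk generator is Python 2 syntax; on Python 3 the equivalent is the next() built-in:

sub_dirs = next(os.walk(parent_dir))[1]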
with open('some.txt', 'a') as output:
    output.write('whatever you want to add')
Opening a file with 'a' as a parameter appends everything you write to it to its end.
You can use walk from the os package.
It's better than listdir.
You can read more about it here.
An example:
import os
from os.path import join, getctime

with open('output.txt', 'w+') as output:
    for root, dirs, files in os.walk('/Some/path/'):
        for name in files:
            create_time = getctime(join(root, name))
            output.write('%s\t%s\t%s\n' % (root, name, create_time))

Python: List files in subdirectories with specific extension created in last 24 hours

First of all, I'm new to programming and Python in particular, so I'm struggling to find the right solution.
I'm trying to recursively search for files with a specific extension which have been created only in the last 24 hours, and then print the result to the screen, save it to a file, and copy those files to a directory.
Below is an example of the code which does most of what I would like to achieve, except that it finds all files with the given extension; I need only the files created in the last 24 hours or less.
import os
import shutil

topdir = r"C:\Docs"
dstdir = r"C:\test"
exten = ".png"

for dname, names, files in os.walk(topdir):
    for name in files:
        if name.lower().endswith(exten):
            # Prints result of walk
            print(os.path.join(dname, name))
            # copy all files with given extension to the dst folder
            path = os.path.realpath(os.path.join(dname, name))
            shutil.copy2(path, dstdir)
First compute the cutoff (this needs import datetime at the top of your script):

compare_date = datetime.datetime.today() - datetime.timedelta(hours=24)

Inside the nested loop, you can add this code (note it should stat the full path, since name alone is relative to the walked directory):

create_dt = os.stat(os.path.join(dname, name)).st_mtime
created_date = datetime.datetime.fromtimestamp(create_dt)
if created_date > compare_date:
    print(name)
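Putting the two pieces together, a minimal sketch of the whole loop, using the same paths and extension as the question:

import datetime
import os
import shutil

topdir = r"C:\Docs"
dstdir = r"C:\test"
exten = ".png"

# anything modified in the last 24 hours
compare_date = datetime.datetime.today() - datetime.timedelta(hours=24)

for dname, names, files in os.walk(topdir):
    for name in files:
        if name.lower().endswith(exten):
            path = os.path.realpath(os.path.join(dname, name))
            created_date = datetime.datetime.fromtimestamp(os.stat(path).st_mtime)
            if created_date > compare_date:
                print(os.path.join(dname, name))
                shutil.copy2(path, dstdir)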

Need to process all files in a directory, but am only getting one

I have a Python script that is successfully processing single files. I am trying to write a for loop to have it get all the files in a directory that the user inputs. However, it is only processing one of the files in the directory.
The code below is followed by the rest of the script that does the analysis on data. Do I need to somehow close the for loop?
import os

print "Enter the location of the files: "; directory = raw_input()
path = r"%s" % directory

for file in os.listdir(path):
    current_file = os.path.join(path, file)
    data = open(current_file, "rb")
    # Here's an abridged version of the data analysis
    for i in range(0, 10):
        fluff = data.readline()
    Input_Parameters = fluff.split("\t")
    output.write("%f\t%f\t%f\t%f\t%.3f\t%.1f\t%.2f\t%.2f\t%s\n" % (Voc, Isc, Vmp, Imp, Pmax, 100 * Eff, IscErr, 100 * (1 - (P2 / Pmax)), file))
    data.close()
In general, if something does not work, try to get something simpler working first. I removed the data analysis part and kept the code below. This works for me. I noticed that if I have a directory in my path, the open will fail; I am not sure whether that is the case for you as well.
import os
import sys

path = '.'

for file in os.listdir(path):
    current = os.path.join(path, file)
    if os.path.isfile(current):
        data = open(current, "rb")
        print len(data.read())
The current code in your answer looks basically OK to me. I noticed that it's writing to the same output file for each of the files it processes, so that may lead you to think it's not processing them all.
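If the single shared output file is what's misleading you, a sketch that writes one output file per input instead (the .out naming scheme here is hypothetical):

import os

path = '.'
for file in os.listdir(path):
    current_file = os.path.join(path, file)
    if not os.path.isfile(current_file):
        continue
    # hypothetical scheme: "data.txt" -> "data.out", one result file per input
    out_name = os.path.splitext(current_file)[0] + ".out"
    with open(current_file, "rb") as data, open(out_name, "w") as output:
        output.write(data.readline())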
For debugging:

for file in os.listdir(path):
    current_file = os.path.join(path, file)
    print current_file

Check the output of current_file.
BTW: are you indenting your code with tabs? There are different indent lengths in your code, which is bad style.
