Duplicating and renaming images based on filename range (xxxx-xxxx.jpg) - python

I have a bunch of images that have filenames that represent a range of values that I need to split into individual images. For example, for an image with the filename 1000-1200.jpg, I need 200 individual copies of the image named 1000.jpg, 1001.jpg, 1002.jpg, etc.
I know a bit of Python, but any suggestions on the quickest way to go about this would be much appreciated.
EDIT: Here's what I have so far. The only issue is that it strips leading zeros from the filename and I'm not quite sure how to fix that.
import os
from shutil import copyfile

fileList = []
filePath = 'C:\\AD\\Scripts\\to_split'

for file in os.listdir(filePath):
    if file.endswith(".jpg"):
        fileList.append(file)

for file in fileList:
    fileName = os.path.splitext(file)[0].split("-")
    rangeStart = fileName[0]
    rangeEnd = fileName[1]
    for part in range(int(rangeStart), int(rangeEnd)+1):
        copyfile(os.path.join(filePath, file), os.path.join(filePath, str(part) + ".jpg"))

Let's break the problem down:
Step 1. Get all files in the folder.
Step 2. For each file, get the range string from the filename.
Step 3. Split that string into two ints a and b with str.split("-").
Step 4. For x in range(a, b + 1), copy the file and name the copy str(x).
A sketch along these lines is below.
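A minimal sketch, assuming the folder path from the question; it keeps leading zeros by padding each number with str.zfill to the width of the original range start:
import os
from shutil import copyfile

filePath = 'C:\\AD\\Scripts\\to_split'

for file in os.listdir(filePath):
    if not file.endswith(".jpg"):
        continue
    # "0990-1010.jpg" -> rangeStart "0990", rangeEnd "1010"
    rangeStart, rangeEnd = os.path.splitext(file)[0].split("-")
    width = len(rangeStart)  # pad every copy to this many digits
    for part in range(int(rangeStart), int(rangeEnd) + 1):
        newName = str(part).zfill(width) + ".jpg"
        copyfile(os.path.join(filePath, file), os.path.join(filePath, newName))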

Related

Extracting a differentiating numerical value from multiple files - PowerShell/Python

I have multiple text files containing different text.
They all contain a single occurrence of the same two lines I am interested in:
================================================================
Result: XX/100
I am trying to write a script to collect all those XX values (numerical values between 0 and 100), and paste them in a CSV file with the text file name in column A and the numerical value in column B.
I have considered using Python or PowerShell for this purpose.
How can I identify the line where "Result" appears directly below the "===..." line, collect its content up to '\n', and then strip away "Result: " and "/100"?
"Result" and other numerical values can appear elsewhere in the files, but never in the quoted format directly below "=====" like the line I'm interested in.
Thank you!
Edit: I have written this poor naive attempt to collect the numerical values.
import os

dir_path = os.path.dirname(os.path.realpath(__file__))

for filename in os.listdir(dir_path):
    if filename.endswith(".txt"):
        with open(filename, "r") as f:
            lineFound = False
            for index, line in enumerate(f):
                if lineFound:
                    line = line.replace("Result: ", "")
                    line = line.replace("/100", "")
                    line = line.strip()
                    grade = line
                    lineFound = False
                    print(grade, end='')
                    continue
                if index > 3:
                    if "================================================================" in line:
                        lineFound = True
I'd still be happy to learn if there's a simple way to do this with PowerShell tbh
For the output, I used csv writer to append the results to a file one by one.
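For reference, a minimal sketch of that CSV-writing step with the csv module, appending one row per file; the (filename, grade) pairs here are made-up placeholders for whatever the loop above collects:
import csv

# hypothetical results gathered by the loop above: (filename, grade) pairs
results = [("file1.txt", "87"), ("file2.txt", "92")]

with open("output.csv", "a", newline="") as csvfile:
    writer = csv.writer(csvfile)
    for filename, grade in results:
        writer.writerow([filename, grade])  # column A: file name, column B: value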
So there are two steps involved here; the first is to get a list of files. There are a ton of answers for that one on Stack Overflow, but this one is stupidly complete.
Once you have the list of files, you can simply load them one by one and then do a couple of str.split() calls to get the value you want.
Finally, write the results into a CSV file. Since the CSV file is a simple one, you don't need to use the csv library for this.
See the code example below. Note that I copied/pasted the function for generating the list of files from my personal GitHub repo; I reuse that one a lot.
import os


def get_files_from_path(path: str = ".", ext: str or list = None) -> list:
    """Find files in path and return them as a list.

    Gets all files in folders and subfolders.

    See the answer on the link below for a ridiculously
    complete answer for this.
    https://stackoverflow.com/a/41447012/9267296

    Args:
        path (str, optional): Which path to start on.
            Defaults to '.'.
        ext (str/list, optional): Optional file extension.
            Defaults to None.

    Returns:
        list: list of file paths
    """
    result = []
    for subdir, dirs, files in os.walk(path):
        for fname in files:
            filepath = f"{subdir}{os.sep}{fname}"
            if ext == None:
                result.append(filepath)
            elif type(ext) == str and fname.lower().endswith(ext.lower()):
                result.append(filepath)
            elif type(ext) == list:
                for item in ext:
                    if fname.lower().endswith(item.lower()):
                        result.append(filepath)
    return result


filelist = get_files_from_path("path/to/files/", ext=".txt")

split1 = "================================================================\nResult: "
split2 = "/100"

with open("output.csv", "w") as outfile:
    outfile.write('filename, value\n')
    for filename in filelist:
        with open(filename) as infile:
            value = infile.read().split(split1)[1].split(split2)[0]
            print(value)
            outfile.write(f'"{filename}", {value}\n')
You could try this.
In this example the filename written to the CSV will be its full (absolute) path. You may just want the base filename.
Uses the same, albeit seemingly unnecessary, mechanism for deriving the source directory. It would be unusual to have your Python script in the same directory as your data.
import os
import glob

equals = '=' * 64
dir_path = os.path.dirname(os.path.realpath(__file__))
outfile = os.path.join(dir_path, 'foo.csv')

with open(outfile, 'w') as csv:
    print('A,B', file=csv)
    for file in glob.glob(os.path.join(dir_path, '*.txt')):
        prev = None
        with open(file) as indata:
            for line in indata:
                t = line.split()
                if len(t) == 2 and t[0] == 'Result:' and prev and prev.startswith(equals):
                    v = t[1].split('/')
                    if len(v) == 2 and v[1] == '100':
                        print(f'{file},{v[0]}', file=csv)
                    break
                prev = line
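If only the base name is wanted in column A, os.path.basename strips the directory part; a small illustration with a hypothetical path:
import os

path = '/some/dir/sample.txt'    # hypothetical full path
print(os.path.basename(path))    # prints: sample.txt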

Script to loop through and match files based on file name and append

I have a directory with many files that are named like:
1234_part1.pdf
1234.pdf
5432_part1.pdf
5432.pdf
2323_part1.pdf
2323.pdf
etc.
I am trying to merge the PDFs where the first number part of the file names is the same.
I have code that can do this one pair at a time, but with over 500 files in the directory I am not sure how to loop through them. Here is what I have so far:
from PyPDF2 import PdfFileMerger, PdfFileReader
merger = PdfFileMerger()
merger.append(PdfFileReader(file('c:/example/1234_part1.pdf', 'rb')))
merger.append(PdfFileReader(file('c:/example/1234.pdf', 'rb')))
merger.write("c:/example/ouput/1234_combined.pdf")
Ideally the output file would be 'xxxx_combined_<today's date>.pdf'.
i.e. 1234_combined_051719.pdf
Also, if a number has only the part1 file or only the plain file, it should not be combined:
i.e. if there was a 9999_part1.pdf, but no 9999.pdf, then there would be no output for the '9999_combined_<today's date>.pdf'.
Try using os.listdir() to get all of the files in your directory. Then use .split() on each filename to isolate the PDF file number, and look for that number in the list of files you made.
import os
from PyPDF2 import PdfFileMerger, PdfFileReader

dir = 'my/dir/of/pdfs/'
file_list = os.listdir(dir)

num_list = []
for fname in file_list:
    if '_' in fname:  # if the filename has an underscore in it
        file_num = fname.split('_')[0]  # gets first element in list of splits
    else:
        file_num = fname.split('.')[0]
    if file_num not in num_list:
        num_list.append(file_num)

# now you have a list of all of your file numbers, you can grab all files
# in the file_list containing that number
for num in num_list:
    pdf_parts = [x for x in file_list if num in x]  # grabs all files with that number
    if len(pdf_parts) < 2:  # if there is only one pdf with that num ...
        continue  # skip it!
    # your pdf append operation here for each item in the pdf_parts list.
    # something like this maybe ...
    merger = PdfFileMerger()
    # sorts list by filename length in descending order so that
    # '_part' files come first
    pdf_parts.sort(key=len, reverse=True)
    for part in pdf_parts:
        merger.append(PdfFileReader(open(dir + part, 'rb')))
    merger.write('out/dir/' + num + '_combined.pdf')
You can do it like this:
from PyPDF2 import PdfFileMerger, PdfFileReader
from os import listdir
from datetime import datetime

file_names = listdir(r'D:\Code\python-examples\PDF')

for file_name in file_names:
    if "_" in file_name:
        digits = file_name.split('_')[0]
        if f'{digits}.pdf' in file_names:
            with open(f'{digits}.pdf', 'rb') as digit_file, open(f'{digits}_part1.pdf', 'rb') as part1_file:
                merger = PdfFileMerger()
                merger.append(PdfFileReader(part1_file))
                merger.append(PdfFileReader(digit_file))
                merger.write(f'{digits}_combined_{datetime.now().strftime("%m%d%y")}.pdf')
A couple of notes:
It's recommended to use with when opening files.
You can use datetime.now().strftime("%m%d%y") to get the date format you mentioned.
I also uploaded the code, along with relevant files, to my GitHub page. If anyone wants to try it themselves, they can check it out.

How to save an image file with its original name

I want to save the output images using the original image names.
I tried this code; in most cases it works, but some of the output files end up with the wrong names. How can I do it better?
import glob
import os

import cv2

cropped_images = "GrabCut"
if not os.path.exists(cropped_images):
    os.makedirs(cropped_images)

# Load data
filepath = "Data"
images = [cv2.imread(file) for file in glob.glob(filepath + "/*.jpg")]

file_names = []
for filename in os.listdir(filepath):
    org_image_name = os.path.splitext(filename)[0]
    file_names.append(org_image_name)

for i, image in enumerate(images):
    # DO SOMETHING...
    img_name = file_names[i]
    cropped_images_path = os.path.join(cropped_images, img_name + '.jpg')
    cv2.imwrite(cropped_images_path, image)
The reason you get the wrong names is that the lists made by glob and os.listdir are not the same: they may contain different files (glob only gets .jpg files while listdir gets everything), be in a different order, or both. You can change the filenames in a list, orig_files, to make a corresponding list of new filenames, new_files.
It also looks like it makes more sense to just read one image at a time (you only use them one at a time) so I moved that into the loop. You can also use os.path.basename to get the filename, and zip to iterate through multiple lists together.
import glob
import os

import cv2

cropped_images = "GrabCut"
if not os.path.exists(cropped_images):
    os.makedirs(cropped_images)

# Load data
filepath = "Data"
orig_files = [file for file in glob.glob(filepath + "/*.jpg")]
new_files = [os.path.join(cropped_images, os.path.basename(f)) for f in orig_files]

for orig_f, new_f in zip(orig_files, new_files):
    image = cv2.imread(orig_f)
    # DO SOMETHING...
    cv2.imwrite(new_f, image)

Rename files in a directory with incremental index

INPUT: I want to add increasing numbers to file names in a directory sorted by date. For example, add "01_", "02_", "03_"...to these files below.
test1.txt (oldest text file)
test2.txt
test3.txt
test4.txt (newest text file)
Here's the code so far. I can get the file names, but each character in the file name seems to be its own item in a list.
import os

for file in os.listdir("/Users/Admin/Documents/Test"):
    if file.endswith(".txt"):
        print(file)
The EXPECTED results are:
01_test1.txt
02_test2.txt
03_test3.txt
04_test4.txt
with test1 being the oldest and test4 being the newest.
How do I add a 01_, 02_, 03_, 04_ to each file name?
I've tried something like this. But it adds a '01_' to every single character in the file name.
new_test_names = ['01_'.format(i) for i in file]
print (new_test_names)
If you want to number your files by age, you'll need to sort them first: call sorted and pass a key parameter. Using os.path.getmtime as the key sorts by modification time, oldest first.
Use glob.glob to get all the text files in a given directory. It is not recursive as of now, but a recursive extension is a minimal addition if you are using Python 3.
Use str.zfill to build prefixes of the form 0x_.
Use os.rename to rename your files.
import glob
import os

sorted_files = sorted(
    glob.glob('path/to/your/directory/*.txt'), key=os.path.getmtime)

for i, f in enumerate(sorted_files, 1):
    try:
        head, tail = os.path.split(f)
        os.rename(f, os.path.join(head, str(i).zfill(2) + '_' + tail))
    except OSError:
        print('Invalid operation')
It always helps to make a check using try-except, to catch any errors that shouldn't be occurring.
This should work:
import glob
new_test_names = ["{:02d}_{}".format(i, filename) for i, filename in enumerate(glob.glob("/Users/Admin/Documents/Test/*.txt"), start=1)]
Or without list comprehension:
for i, filename in enumerate(glob.glob("/Users/Admin/Documents/Test/*.txt"), start=1):
    print("{:02d}_{}".format(i, filename))
Three things to learn about here:
glob, which makes this sort of file matching easier.
enumerate, which lets you write a loop with an index variable.
format, specifically the 02d modifier, which prints two-digit numbers (zero-padded).
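The snippet above only prints the new names. To actually rename the files, a sketch combining glob, enumerate, and format with os.rename, sorted by modification time as the question asks (same directory assumed):
import glob
import os

files = sorted(glob.glob("/Users/Admin/Documents/Test/*.txt"), key=os.path.getmtime)
for i, filepath in enumerate(files, start=1):
    head, tail = os.path.split(filepath)
    # e.g. ".../test1.txt" -> ".../01_test1.txt"
    os.rename(filepath, os.path.join(head, "{:02d}_{}".format(i, tail)))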
Two methods to format an integer with a leading zero:
1. Use .format:
import os

i = 1
for file in os.listdir("/Users/Admin/Documents/Test"):
    if file.endswith(".txt"):
        print('{0:02d}'.format(i) + '_' + file)
        i += 1
2. Use .zfill:
import os

i = 1
for file in os.listdir("/Users/Admin/Documents/Test"):
    if file.endswith(".txt"):
        print(str(i).zfill(2) + '_' + file)
        i += 1
The easiest way is to simply have a variable, such as i, which will hold the number and prepend it to the string using some kind of formatting that guarantees it will have at least 2 digits:
import os

i = 1
for file in os.listdir("/Users/Admin/Documents/Test"):
    if file.endswith(".txt"):
        print('%02d_%s' % (i, file))  # %02d means your number will have at least 2 digits
        i += 1
You can also take a look at enumerate and glob to make your code even shorter (but make sure you understand the fundamentals before using it).
test_dir = '/Users/Admin/Documents/Test'

txt_files = [file
             for file in os.listdir(test_dir)
             if file.endswith('.txt')]

numbered_files = ['%02d_%s' % (i + 1, file)
                  for i, file in enumerate(txt_files)]

Searching multiple text files for two strings?

I have a folder with many text files (EPA10.txt, EPA55.txt, EPA120.txt..., EPA150.txt). I have 2 strings that are to be searched in each file and the result of the search is written in a text file result.txt. So far I have it working for a single file. Here is the working code:
if 'LZY_201_335_R10A01' and 'LZY_201_186_R5U01' in open('C:\\Temp\\lamip\\EPA150.txt').read():
    with open("C:\\Temp\\lamip\\result.txt", "w") as f:
        f.write('Current MW in node is EPA150')
else:
    with open("C:\\Temp\\lamip\\result.txt", "w") as f:
        f.write('NOT EPA150')
Now I want this to be repeated for all the text files in the folder. Please help.
Given that you have some number of files named from EPA1.txt to EPA150.txt but you don't know all the names, you can put them all inside one folder and then get a list of their filenames with the os.listdir() method, e.g. listdir("C:/Temp/lamip").
Also, your if statement is wrong; you should do this instead:
text = file.read()
if "string1" in text and "string2" in text:
Here's the code:
from os import listdir

with open("C:/Temp/lamip/result.txt", "w") as f:
    for filename in listdir("C:/Temp/lamip"):
        with open('C:/Temp/lamip/' + filename) as currentFile:
            text = currentFile.read()
            if ('LZY_201_335_R10A01' in text) and ('LZY_201_186_R5U01' in text):
                f.write('Current MW in node is ' + filename[:-4] + '\n')
            else:
                f.write('NOT ' + filename[:-4] + '\n')
PS: You can use / instead of \\ in your paths; forward slashes work fine on Windows too.
Modularise! Modularise!
Well, not in the sense of having to write distinct Python modules, but in the sense of isolating the different tasks at hand.
Find the files you wish to search.
Read the file and locate the text.
Write the result into a separate file.
Each of these tasks can be solved independently. For example, to list the files you have os.listdir, which you might want to filter.
For step 2, it does not matter whether you have 1 or 1,000 files to search. The routine is the same. You merely have to iterate over each file found in step 1. This indicates that step 2 could be implemented as a function that takes the filename (and possible search-string) as argument, and returns True or False.
Step 3 is the combination of each element from step 1 and the result of step 2.
The result:
import os

files = [fn for fn in os.listdir('C:/Temp/lamip') if fn.endswith('.txt')]
# perhaps filter `files`


def does_fn_contain_string(filename):
    with open('C:/Temp/lamip/' + filename) as blargh:
        content = blargh.read()
    return 'string1' in content and 'string2' in content  # use `or` if either match is enough


with open('results.txt', 'w') as output:
    for fn in files:
        if does_fn_contain_string(fn):
            output.write('Current MW in node is {0}\n'.format(fn[:-4]))
        else:
            output.write('NOT {0}\n'.format(fn[:-4]))
You can do this by creating a for loop that runs through all your .txt files in the current working directory.
import os

with open("result.txt", "w") as resultfile:
    for result in [txt for txt in os.listdir(os.getcwd()) if txt.endswith(".txt")]:
        text = open(result).read()
        if 'LZY_201_335_R10A01' in text and 'LZY_201_186_R5U01' in text:
            resultfile.write('Current MW in node is {0}\n'.format(result[:-4]))
        else:
            resultfile.write('NOT {0}\n'.format(result[:-4]))
