How to split the filename using Python

I am using Python to create an XML file with Element and SubElement.
I have a list of zip files in my folder, listed below:
Retirement_participant-plan_info_v1_getPlankeys_rev1_2021_03_09.zip
Retirement_participant-plan_info_resetcache_secretmanager_rev1_2021_03_09.zip
Retirement_participant-plan_info_v1_mypru_plankeys_rev1_2021_03_09.zip
Retirement_participant-plan_info_resetcache_param_value_rev1_2021_03_09.zip
Retirement_participant-plan_info_resetcache_param_v1_balances_rev1_2021_03_09.zip
I want to split those zip file names and get names like this:
Retirement_participant-plan_info_v1_getPlankeys
Retirement_participant-plan_info_resetcache_secretmanager
Retirement_participant-plan_info_v1_mypru_plankeys
Retirement_participant-plan_info_resetcache_param_value
Retirement_participant-plan_info_resetcache_param_v1_balances
PS: I want to remove _rev1_2021_03_09.zip while creating the name from the zip file.
Here is my Python code. It works for Retirement_participant-plan_info_v1_getPlankeys_rev1_2021_03_09.zip, but it does not work when the zip file name is longer, e.g. Retirement_participant-plan_info_resetcache_param_v1_balances_rev1_2021_03_09.zip:
Proxies = SubElement(proxy, 'Proxies')
path = "./"
for f in os.listdir(path):
    if '.zip' in f:
        Proxy = SubElement(Proxies, 'Proxy')
        name = SubElement(Proxy, 'name')
        fileName = SubElement(Proxy, 'fileName')
        a = f.split('_')
        name.text = '_'.join(a[:3])
        fileName.text = str(f)

You can str.split by '_rev1_':
>>> filenames
['Retirement_participant-plan_info_v1_getPlankeys_rev1_2021_03_09.zip',
'Retirement_participant-plan_info_resetcache_secretmanager_rev1_2021_03_09.zip',
'Retirement_participant-plan_info_v1_mypru_plankeys_rev1_2021_03_09.zip',
'Retirement_participant-plan_info_resetcache_param_value_rev1_2021_03_09.zip',
'Retirement_participant-plan_info_resetcache_param_v1_balances_rev1_2021_03_09.zip']
>>> names = [fname.split('_rev1_')[0] for fname in filenames]
>>> names
['Retirement_participant-plan_info_v1_getPlankeys',
'Retirement_participant-plan_info_resetcache_secretmanager',
'Retirement_participant-plan_info_v1_mypru_plankeys',
'Retirement_participant-plan_info_resetcache_param_value',
'Retirement_participant-plan_info_resetcache_param_v1_balances']
The same can be achieved with str.rsplit by limiting maxsplit to 4:
>>> names = [fname.rsplit('_', 4)[0] for fname in filenames]
>>> names
['Retirement_participant-plan_info_v1_getPlankeys',
'Retirement_participant-plan_info_resetcache_secretmanager',
'Retirement_participant-plan_info_v1_mypru_plankeys',
'Retirement_participant-plan_info_resetcache_param_value',
'Retirement_participant-plan_info_resetcache_param_v1_balances']

If the rev and date are always the same (2021_03_09), just replace them with the empty string:
filenames = [f.replace("_rev1_2021_03_09.zip", "") for f in os.listdir(path)]
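For reference, here is a rough sketch of how either approach could slot into the XML-building loop from the question (assuming the same proxy element and imports as in the original code):
import os
from xml.etree.ElementTree import SubElement

path = "./"
Proxies = SubElement(proxy, 'Proxies')  # 'proxy' is assumed to exist, as in the question
for f in os.listdir(path):
    if f.endswith('.zip'):
        Proxy = SubElement(Proxies, 'Proxy')
        name = SubElement(Proxy, 'name')
        fileName = SubElement(Proxy, 'fileName')
        name.text = f.split('_rev1_')[0]  # drop the _rev1_<date>.zip suffix
        fileName.text = f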

Related

How to get the filename with the max number in the filename in a directory in Python?

I have some XML files in a folder, for example 'assests/2020/2010.xml', 'assests/2020/20005.xml', 'assests/2020/20999.xml', etc. I want to get the filename with the max value in the '2020' folder. For the above three files the output should be 20999.xml.
I am trying as following:
import glob
import os
list_of_files = glob.glob('assets/2020/*')
# latest_file = max(list_of_files, key=os.path.getctime)
# print (latest_file)
I couldn't find the logic to get the required file.
Here is the resource that has the best answer to my query, but I couldn't build my logic from it.
You can use pathlib to glob for the xml files and access the Path object attributes like .name and .stem:
from pathlib import Path
list_of_files = Path('assets/2020/').glob('*.xml')
print(max((Path(fn).name for fn in list_of_files), key=lambda fn: int(Path(fn).stem)))
Output:
20999.xml
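Since Path.glob already yields Path objects, the same thing can be written a little more directly (a minor variation on the answer above):
from pathlib import Path

xml_files = Path('assets/2020/').glob('*.xml')
largest = max(xml_files, key=lambda p: int(p.stem))  # compare the numeric stems
print(largest.name)  # 20999.xml for the example files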
I can't test it out right now, but you may try this:
files = []
for filename in list_of_files:
    filename = str(filename)
    filename = filename.replace('.xml', '')  # assuming it's not printing your complete directory path
    filename = int(filename)
    files += [filename]
print(files)
This should get you your filenames in integer format and now you should be able to sort them in descending order and get the first item of the sorted list.
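For example, a short sketch continuing the snippet above:
files.sort(reverse=True)        # sort the integer names in descending order
print(str(files[0]) + '.xml')   # largest filename, e.g. 20999.xml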
Use re to search for the appropriate endings in your file paths. If found, use re again to extract the number.
import re

list_of_files = [
    'assests/2020/2010.xml',
    'assests/2020/20005.xml',
    'assests/2020/20999.xml'
]
highest_nr = -1
highest_nr_file = ''
for f in list_of_files:
    re_result = re.findall(r'\d+\.xml$', f)
    if re_result:
        nr = int(re.findall(r'\d+', re_result[0])[0])
        if nr > highest_nr:
            highest_nr = nr
            highest_nr_file = f
print(highest_nr_file)
Result
assests/2020/20999.xml
You can also try this way.
import os, re

path = "assests/2020/"
files = [
    "assests/2020/2010.xml",
    "assests/2020/20005.xml",
    "assests/2020/20999.xml"
]
n = [int(re.findall(r'\d+\.xml$', file)[0].split('.')[0]) for file in files]
output = str(max(n)) + ".xml"
print("Biggest max file name of .xml file is", os.path.join(path, output))
Output:
Biggest max file name of .xml file is assests/2020/20999.xml
import glob

xmlFiles = []
# this will store the numeric part of all the xml file names in your directory
for file in glob.glob("*.xml"):
    xmlFiles.append(int(file[:-4]))  # strip the '.xml' extension so the comparison is numeric
# this will print the maximum one
print(str(max(xmlFiles)) + ".xml")

How to get the filename without some special extensions in Python

I have a file that has some special extension. Sometimes it is '.exe', or 'exe.gz', or 'exe.tar.gz'... I want to get the filename only. I am using the code below to get the filename abc, but it does not work for all cases:
import os
filename = 'abc.exe'
base = os.path.basename(filename)
print(os.path.splitext(base)[0])
filename = 'abc.exe.gz'
base = os.path.basename(filename)
print(os.path.splitext(base)[0])
Note that I know the list of extensions, such as ['.exe', 'exe.gz', 'exe.tar.gz', '.gz'].
You can just split with the . char and take the first element:
>>> filename = 'abc.exe'
>>> filename.split('.')[0]
'abc'
>>> filename = 'abc.exe.gz'
>>> filename.split('.')[0]
'abc'
How about a workaround like this?
suffixes = ['.exe', '.exe.gz', '.exe.tar.gz', '.gz']

def get_basename(filename):
    for suffix in suffixes:
        if filename.endswith(suffix):
            return filename[:-len(suffix)]
    return filename
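Applied to the names from the question, this gives (a quick check against the suffixes list above):
print(get_basename('abc.exe'))         # abc
print(get_basename('abc.exe.gz'))      # abc
print(get_basename('abc.exe.tar.gz'))  # abc
Note that the function returns on the first matching suffix, so longer suffixes such as '.exe.gz' need to come before '.gz' in the list (or the list should be sorted by length, descending); otherwise 'abc.exe.gz' would come back as 'abc.exe'.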

How to read n files into n variables and then add those variables to a list?

This is my code:
file_input1 = open('Amazon_Indi_Seller.py', 'r')
f1 = file_input1.read().lower()
file_input2 = open('Amazon_Prices.py', 'r')
f2 = file_input2.read().lower()
documents = [f1, f2]

import nltk, string, numpy
stemmer = nltk.stem.porter.PorterStemmer()
lemmer = nltk.stem.WordNetLemmatizer()

def LemTokens(tokens):
    return [lemmer.lemmatize(token) for token in tokens]

remove_punct_dict = dict((ord(punct), None) for punct in string.punctuation)

def LemNormalize(text):
    return LemTokens(nltk.word_tokenize(text.lower().translate(remove_punct_dict)))

from sklearn.feature_extraction.text import CountVectorizer
LemVectorizer = CountVectorizer(tokenizer=LemNormalize, stop_words='english')
LemVectorizer.fit_transform(documents)
Instead of reading 2 files, I want to read all the files in a directory, and read them individually so that later I can add those variables to a list named documents.
You can use the code mentioned below:
import os

def read_files(file):
    file_input1 = open(file, 'r')
    f1 = file_input1.read()
    return f1

files = ['sample.py', 'Amazon_Indi_Seller.py']
data = list()
for file in files:
    data.append(read_files(file))
print(data)
The above code will read the files mentioned in the list.
import os

def read_files(file):
    file_input1 = open(file, 'r')
    f1 = file_input1.read()
    return f1

src = r'DIRECTORY PATH'
data = list()
for file in os.listdir(src):
    data.append(read_files(os.path.join(src, file)))  # os.listdir returns bare names, so join them with the directory path
print(data)
And the above code will read all the files from the directory mentioned.
You could collect all the file contents in a list, for example:
lst = []
for file in os.listdir():
    file_input = open(file, "r")
    lst.append(file_input.read())
One extra recommendation - in general it might be wise to store the contents of a file as a collection of its lines by for example using file_input.readlines() which returns a list of lines.
Create a list of all filenames and then iterate over the filename list and add their content to a dictionary.
from collections import defaultdict  # imported default dictionary

result = defaultdict()  # created empty default dictionary
filenames = ['name1.py', 'name2.py', 'name3.py']  # added filenames to a list
for name in filenames:  # iterate over filename list
    with open(name, 'r') as stream:  # open each file
        data = stream.readlines()  # read contents line by line (readlines returns a list of lines)
        result[name] = data  # set name as key and content as value in dictionary
print(result)
In this way you will have a dictionary with keys as filenames and values as their contents
If the directory may include other directories with files which you want to read too, use os.walk.
Here is sample code from the official documentation:
import os
from os.path import join, getsize

for root, dirs, files in os.walk('python/Lib/email'):
    print(root, "consumes", end=" ")
    print(sum(getsize(join(root, name)) for name in files), end=" ")
    print("bytes in", len(files), "non-directory files")
    if 'CVS' in dirs:
        dirs.remove('CVS')  # don't visit CVS directories
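For the question's use case, the walk might look roughly like this (a sketch; the directory path and the .py filter are assumptions):
import os

documents = []
for root, dirs, files in os.walk('path/to/your/scripts'):  # hypothetical directory
    for name in files:
        if name.endswith('.py'):  # only the script files; adjust the filter as needed
            with open(os.path.join(root, name), 'r') as fh:
                documents.append(fh.read().lower())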

How to search through both zipped and unzipped folders for a specific line

I'm trying to implement a Python script that takes a folder from the user (which can be zipped or unzipped) and searches through all the files in the folder to output the specific lines that my regular expression matches. My code below works for regular unzipped folders, but I can't figure out how to do the same with zipped folders that are passed to the function. Below is my code, thanks in advance!
def myFunction(folder_name):
    path = folder_name
    for (path, subdirs, files) in os.walk(path):
        # Specify here the format of files you hope to search from (ex: ".txt" or ".log")
        files = [f for f in os.listdir(path) if f.endswith('.txt') or f.endswith('.log') or f.endswith('-release') or f.endswith('.out') or f.endswith('messages') or f.endswith('.zip')]
        files.sort()  # file is sorted list
        files = [os.path.join(path, name) for name in files]  # join the path and the name, so the files can be opened and scanned by the open() function
        # The following for loop searches all files with the selected format
        for filename in files:
            # print('start parsing... ' + str(datetime.datetime.now()))
            matched_line = []
            try:
                with open(filename, 'r', encoding='utf-8') as f:
                    f = f.readlines()
            except:
                with open(filename, 'r') as f:
                    f = f.readlines()
            # print('Finished parsing... ' + str(datetime.datetime.now()))
            for line in f:
                # strip out \x00 from read content, in case it's encoded differently
                line = line.replace('\x00', '')
                RE2 = r'^Version: \d.+\d.+\d.\w\d.+'
                RE3 = r'^.+version.(\d+.\d+.\d+.\d+)'
                pattern2 = re.compile('(' + RE2 + '|' + RE3 + ')', re.IGNORECASE)
                for match2 in pattern2.finditer(line):
                    matched_line.append(line)
                    print(line)

# Calling the function to use it
myFunction(r"SampleZippedFolder.zip")
The try and except block of my code was my attempt to open the zipped folder and read it. I'm still not very clear on how to open the zipped folder or how it works. Please let me know how I can modify my code to make it work, much appreciated!
One possibility is to first determine what kind of object folder_name is, using zipfile and os.path.isdir(), and whichever check succeeds, get the list of files and proceed. Maybe something like this:
import zipfile, os, re

def myFunction(folder_name):
    files = None  # nothing yet
    path = folder_name
    if zipfile.is_zipfile(path):
        print('ZipFile: {}'.format(path))
        f = zipfile.ZipFile(path)
        files = f.namelist()
        # for name in f.namelist():  # debugging
        #     print('file: {}'.format(name))
    elif os.path.isdir(path):
        print('Folder: {}'.format(path))
        files = os.listdir(path)
        # for name in os.listdir(path):  # debugging
        #     print('file: {}'.format(name))
    # should now have a list of files
    # proceed processing the files
    for filename in files:
        ...
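From there, the way the lines are read differs between the two cases. A minimal sketch of that loop body (assuming the files list and the f ZipFile object from the answer above, and that the zip members are text files):
for filename in files:
    if zipfile.is_zipfile(path):
        # members of a zip archive are opened through the ZipFile object and come back as bytes
        with f.open(filename) as member:
            lines = member.read().decode('utf-8', errors='replace').splitlines()
    else:
        with open(os.path.join(path, filename), 'r', errors='replace') as fh:
            lines = fh.readlines()
    # ...run the regex matching from the question over 'lines' here
Note that namelist() can also include directory entries inside the archive, so those may need to be skipped.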

Python - Loop through list within regex

Right, I'm relatively new to Python, which you will likely see in my code, but is there any way to iterate through a list within a regex?
Basically, I'm looping through each filename within a folder, getting a code (2-6 digits) from the filename, and I want to compare it with a list of codes in a text file, which have a name attached, in the format "1234_Name" (without the quotation marks). If the code exists in both lists, I want to print out the list entry, i.e. 1234_Name. Currently my code only seems to look at the first entry in the text file's list and I'm not sure how to make it look through them all to find matches.
import os, re

sitesfile = open('C:/Users/me/My Documents/WORK_PYTHON/Renaming/testnames.txt', 'r')
filefolder = r'C:/Users/me/My Documents/WORK_PYTHON/Renaming/files/'
sites = sitesfile.read()
site_split = re.split('\n', sites)
old = []
newname = []
for site in site_split:
    newname.append(site)
for root, dirs, filenames in os.walk(filefolder):
    for filename in filenames:
        fullpath = os.path.join(root, filename)
        filename_split = os.path.splitext(fullpath)
        filename_zero, fileext = filename_split
        filename_zs = re.split("/", filename_zero)
        filenm = re.search(r"[\w]+", str(filename_zs[-1:]))  # get only filename, not path
        filenmgrp = filenm.group()
        pacode = re.search('\d\d+', filenmgrp)
        if pacode:
            pacodegrp = pacode.group()
            match = re.match(pacodegrp, site)
            if match:
                print(site)
Hope this makes sense - thanks a lot in advance!
So, use this code instead:
import os
import re

def locate(pattern=r'\d+[_]', root=os.curdir):
    for path, dirs, files in os.walk(os.path.abspath(root)):
        for filename in re.findall(pattern, ' '.join(files)):
            yield os.path.join(path, filename)
..this will only return files in a folder that match a given regex pattern.
with open('list_file.txt', 'r') as f:
    lines = [x.split('_')[0] for x in f.readlines()]

print_out = []
for f in locate(<your code regex>, <your directory>):
    if f in lines:
        print_out.append(f)
print(print_out)
...find the valid codes in your list_file first, then compare the files that come back with your given regex.
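As a more complete sketch of what the question describes, the code below extracts the 2-6 digit code from each filename and prints the matching "1234_Name" entries from the text file (the paths are taken from the question; the exact regex is an assumption):
import os, re

filefolder = r'C:/Users/me/My Documents/WORK_PYTHON/Renaming/files/'
with open('C:/Users/me/My Documents/WORK_PYTHON/Renaming/testnames.txt', 'r') as f:
    entries = [line.strip() for line in f if line.strip()]  # e.g. '1234_Name'
codes = {entry.split('_')[0]: entry for entry in entries}   # map code -> full entry

for root, dirs, filenames in os.walk(filefolder):
    for filename in filenames:
        found = re.search(r'\d{2,6}', os.path.splitext(filename)[0])  # assumed 2-6 digit code
        if found and found.group() in codes:
            print(codes[found.group()])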
