Searching for a file in Python

Using the code below, I want to find all files whose base name is sh, whatever the extension: sh.c, sh.txt, sh.cpp, etc. But the code only finds a match if I write lookfor = "sh.txt" or lookfor = "sh.pdf" instead of lookfor = "sh".
How can I make lookfor = "sh" match every file named sh? Please help.
import os
from os.path import join

lookfor = "sh"
for root, dirs, files in os.walk('C:\\'):
    if lookfor in files:
        print "found: %s" % join(root, lookfor)
        break

Try glob:
import glob
print glob.glob('sh.*')  # this lists all sh.* files in the current directory
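If you need to search a whole tree rather than just the current directory, the same idea extends to recursive globbing on Python 3.5+ (a sketch; adjust the root path as needed):
import glob

# '**' with recursive=True descends into subdirectories;
# 'sh.*' matches any file whose base name is sh and that has an extension
for path in glob.glob('C:\\**\\sh.*', recursive=True):
    print(path)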

Replace:
    if lookfor in files:
with:
    for filename in files:
        if filename.rsplit('.', 1)[0] == lookfor:
filename.rsplit('.', 1)[0] strips the rightmost part of the name after the last dot (i.e. the extension). If the name contains several dots, only the final part is removed and the rest of the filename is kept.
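For example, a quick check in the interpreter shows how multi-dot names behave:
>>> 'sh.tar.gz'.rsplit('.', 1)[0]
'sh.tar'
>>> 'sh'.rsplit('.', 1)[0]  # no dot at all: the name comes back unchanged
'sh'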

Presumably you want to search for files with sh as their basename. (The part of the name excluding the path and extension.) You can do this with the filter function from the fnmatch module.
import os
from os.path import join
import fnmatch

lookfor = "sh.*"
for root, dirs, files in os.walk('C:\\'):
    for found in fnmatch.filter(files, lookfor):
        print "found: %s" % join(root, found)

The line
    if lookfor in files:
says that the following code should be executed if the list files contains the exact string given in lookfor.
However, you want the test to be whether a file name starts with the given string and continues with a dot (.).
Besides, you want the actual file name to be determined.
So your code should be:
import os
from os.path import join, splitext

lookfor = "sh"
found = None
for root, dirs, files in os.walk('C:\\'):
    for filename in files:  # test them one after the other
        if splitext(filename)[0] == lookfor:
            found = join(root, filename)
            print "found: %s" % found
            break
    if found: break
This can even be improved, as I don't like the way I break out of the outer for loop.
Maybe you want to have it as a function:
def look(lookfor, where):
    import os
    from os.path import join, splitext
    for root, dirs, files in os.walk(where):
        for filename in files:  # test them one after the other
            if splitext(filename)[0] == lookfor:
                return join(root, filename)

found = look("sh", "C:\\")
if found is not None:
    print "found: %s" % found

import os
from os.path import join

lookfor = "sh."
for root, dirs, files in os.walk('C:\\'):
    for filename in files:
        if filename.startswith(lookfor):
            print "found: %s" % join(root, filename)
You may want to read the docs for fnmatch and glob too.
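On Python 3.4 and later, pathlib offers a compact equivalent (a sketch, using the same sh.* pattern as the glob answers above):
from pathlib import Path

# rglob walks the tree recursively; 'sh.*' matches sh plus any extension
for path in Path('C:\\').rglob('sh.*'):
    print("found:", path)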

Related

Find and copy files from directories using wildcard [duplicate]

This is what I have:
glob(os.path.join('src','*.c'))
but I want to search the subfolders of src. Something like this would work:
glob(os.path.join('src','*.c'))
glob(os.path.join('src','*','*.c'))
glob(os.path.join('src','*','*','*.c'))
glob(os.path.join('src','*','*','*','*.c'))
But this is obviously limited and clunky.
pathlib.Path.rglob
Use pathlib.Path.rglob from the pathlib module, which was introduced in Python 3.5.
from pathlib import Path

for path in Path('src').rglob('*.c'):
    print(path.name)
If you don't want to use pathlib, you can use glob.glob('**/*.c'), but don't forget to pass the recursive keyword parameter, and note that it can take an inordinate amount of time on large directories.
For cases where you need to match files beginning with a dot (.), such as files in the current directory or hidden files on a Unix-based system, use the os.walk solution below.
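A minimal illustration of that caveat (assuming a hypothetical hidden directory .build containing foo.c):
import glob, os

# glob wildcards never match names that start with a dot,
# so .build/foo.c is skipped here ...
print(glob.glob('**/*.c', recursive=True))
# ... while os.walk reports hidden directories and files like any others
for root, dirs, files in os.walk('.'):
    print(root, files)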
os.walk
For older Python versions, use os.walk to recursively walk a directory and fnmatch.filter to match against a simple expression:
import fnmatch
import os

matches = []
for root, dirnames, filenames in os.walk('src'):
    for filename in fnmatch.filter(filenames, '*.c'):
        matches.append(os.path.join(root, filename))
For Python >= 3.5 you can use ** with recursive=True:
import glob

for f in glob.glob('/path/**/*.c', recursive=True):
    print(f)
If recursive is True (default is False), the pattern ** will match any files and zero
or more directories and subdirectories. If the pattern is followed by
an os.sep, only directories and subdirectories match.
Similar to other solutions, but using fnmatch.fnmatch instead of glob, since os.walk already listed the filenames:
import os, fnmatch

def find_files(directory, pattern):
    for root, dirs, files in os.walk(directory):
        for basename in files:
            if fnmatch.fnmatch(basename, pattern):
                filename = os.path.join(root, basename)
                yield filename

for filename in find_files('src', '*.c'):
    print 'Found C source:', filename
Also, using a generator allows you to process each file as it is found, instead of finding all the files first and then processing them.
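For example, a generator makes it cheap to stop at the first match instead of scanning the entire tree (using find_files from above):
first_match = next(find_files('src', '*.c'), None)  # None when nothing matches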
I've modified the glob module to support ** for recursive globbing, e.g:
>>> import glob2
>>> all_header_files = glob2.glob('src/**/*.c')
https://github.com/miracle2k/python-glob2/
Useful when you want to provide your users with the ability to use the ** syntax, and thus os.walk() alone is not good enough.
Starting with Python 3.4, one can use the glob() method of one of the Path classes in the new pathlib module, which supports ** wildcards. For example:
from pathlib import Path

for file_path in Path('src').glob('**/*.c'):
    print(file_path)  # do whatever you need with these files
Update:
Starting with Python 3.5, the same syntax is also supported by glob.glob().
import os
import fnmatch

def recursive_glob(treeroot, pattern):
    results = []
    for base, dirs, files in os.walk(treeroot):
        goodfiles = fnmatch.filter(files, pattern)
        results.extend(os.path.join(base, f) for f in goodfiles)
    return results
fnmatch gives you exactly the same patterns as glob, so this is really an excellent replacement for glob.glob with very close semantics. An iterative version (i.e. a generator), in other words a replacement for glob.iglob, is a trivial adaptation: just yield the intermediate results as you go, instead of extending a single results list to return at the end.
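A sketch of that generator adaptation (recursive_iglob is a hypothetical name, by analogy with glob.iglob):
import os
import fnmatch

def recursive_iglob(treeroot, pattern):
    # yield matches as the walk proceeds instead of collecting them
    for base, dirs, files in os.walk(treeroot):
        for f in fnmatch.filter(files, pattern):
            yield os.path.join(base, f)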
You'll want to use os.walk to collect filenames that match your criteria. For example:
import os

cfiles = []
for root, dirs, files in os.walk('src'):
    for file in files:
        if file.endswith('.c'):
            cfiles.append(os.path.join(root, file))
Here's a solution with nested list comprehensions, os.walk and simple suffix matching instead of glob:
import os

cfiles = [os.path.join(root, filename)
          for root, dirnames, filenames in os.walk('src')
          for filename in filenames if filename.endswith('.c')]
It can be compressed to a one-liner:
import os;cfiles=[os.path.join(r,f) for r,d,fs in os.walk('src') for f in fs if f.endswith('.c')]
or generalized as a function:
import os

def recursive_glob(rootdir='.', suffix=''):
    return [os.path.join(looproot, filename)
            for looproot, _, filenames in os.walk(rootdir)
            for filename in filenames if filename.endswith(suffix)]

cfiles = recursive_glob('src', '.c')
If you do need full glob style patterns, you can follow Alex's and
Bruno's example and use fnmatch:
import fnmatch
import os

def recursive_glob(rootdir='.', pattern='*'):
    return [os.path.join(looproot, filename)
            for looproot, _, filenames in os.walk(rootdir)
            for filename in filenames
            if fnmatch.fnmatch(filename, pattern)]

cfiles = recursive_glob('src', '*.c')
Consider pathlib.Path.rglob().
This is like calling Path.glob() with "**/" added in front of the given relative pattern:
import pathlib

for p in pathlib.Path("src").rglob("*.c"):
    print(p)
import os, glob

for each in glob.glob('path/**/*.c', recursive=True):
    print(f'Name with path: {each} \nName without path: {os.path.basename(each)}')
1. glob.glob('*.c'): matches all files ending in .c in the current directory
2. glob.glob('*/*.c'): matches .c files in the immediate subdirectories only, but not in the current directory
3. glob.glob('**/*.c'): same as 2; without recursive=True, ** behaves like a single *
4. glob.glob('*.c', recursive=True): same as 1, since the pattern contains no **
5. glob.glob('*/*.c', recursive=True): same as 2
6. glob.glob('**/*.c', recursive=True): matches all files ending in .c in the current directory and in all subdirectories
In case this may interest anyone, I've profiled the top three proposed methods.
I have about 500K files in the globbed folder (in total), and 2K files that match the desired pattern.
Here's the (very basic) code:
import glob
import json
import fnmatch
import os
from pathlib import Path
from time import time
def find_files_iglob():
    return glob.iglob("./data/**/data.json", recursive=True)

def find_files_oswalk():
    for root, dirnames, filenames in os.walk('data'):
        for filename in fnmatch.filter(filenames, 'data.json'):
            yield os.path.join(root, filename)

def find_files_rglob():
    return Path('data').rglob('data.json')

t0 = time()
for f in find_files_oswalk(): pass
t1 = time()
for f in find_files_rglob(): pass
t2 = time()
for f in find_files_iglob(): pass
t3 = time()
print(t1-t0, t2-t1, t3-t2)
And the results I got were:
os.walk: ~3.6 sec
rglob: ~14.5 sec
iglob: ~16.9 sec
The platform: Ubuntu 16.04, x86_64 (Core i7).
Recently I had to recover my pictures with the extension .jpg. I ran photorec and recovered 4579 directories with 2.2 million files inside, having a tremendous variety of extensions. With the script below I was able to select 50133 files having the .jpg extension within minutes:
#!/usr/bin/env python2.7
import glob
import shutil
import os
src_dir = "/home/mustafa/Masaüstü/yedek"
dst_dir = "/home/mustafa/Genel/media"
for mediafile in glob.iglob(os.path.join(src_dir, "*", "*.jpg")):  # "*" is for the subdirectory
    shutil.copy(mediafile, dst_dir)
Based on other answers, this is my current working implementation, which retrieves nested XML files in a root directory:
import glob
import os

files = []
for root, dirnames, filenames in os.walk(myDir):
    files.extend(glob.glob(root + "/*.xml"))
I'm really having fun with python :)
For Python 3.5 and later:
import glob

# file_names_array = glob.glob('path/*.c', recursive=True)
# The above only matches files directly under path/, as noted by NeStack.
# Updated version:
file_names_array = glob.glob('path/**/*.c', recursive=True)
Further, you might need:
for full_path_in_src in file_names_array:
    print(full_path_in_src)  # would be like 'abc/xyz.c'
    # the full system path would be like 'path till src/abc/xyz.c'
Johan and Bruno provide excellent solutions on the minimal requirement as stated. I have just released Formic which implements Ant FileSet and Globs which can handle this and more complicated scenarios. An implementation of your requirement is:
import formic
fileset = formic.FileSet(include="/src/**/*.c")
for file_name in fileset.qualified_files():
    print file_name
Another way to do it using just the glob module. Just seed the rglob function with a starting base directory and a pattern to match, and it will return a list of matching file names.
import glob
import os

def _getDirs(base):
    return [x for x in glob.iglob(os.path.join(base, '*')) if os.path.isdir(x)]

def rglob(base, pattern):
    results = []
    results.extend(glob.glob(os.path.join(base, pattern)))
    for d in _getDirs(base):
        # d is already a path relative to base, so recurse on it directly
        results.extend(rglob(d, pattern))
    return results
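For example (a hypothetical call, mirroring the other answers):
cfiles = rglob('src', '*.c')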
Or with a list comprehension:
>>> root = r"c:\User\xtofl"
>>> binfiles = [os.path.join(base, f)
...             for base, _, files in os.walk(root)
...             for f in files if f.endswith(".jpg")]
If the files are on a remote file system or inside an archive, you can use an implementation of the fsspec AbstractFileSystem class. For example, to list all the files in a zipfile:
from fsspec.implementations.zip import ZipFileSystem
fs = ZipFileSystem("/tmp/test.zip")
fs.glob("/**") # equivalent: fs.find("/")
or to list all the files in a publicly available S3 bucket:
from s3fs import S3FileSystem
fs_s3 = S3FileSystem(anon=True)
fs_s3.glob("noaa-goes16/ABI-L1b-RadF/2020/045/**") # or use fs_s3.find
You can also use it for a local filesystem, which may be interesting if your implementation should be filesystem-agnostic:
from fsspec.implementations.local import LocalFileSystem
fs = LocalFileSystem()
fs.glob("/tmp/test/**")
Other implementations include Google Cloud, GitHub, SFTP/SSH, Dropbox, and Azure. For details, see the fsspec API documentation.
Just made this. It will print files and directories in a hierarchical way. (I didn't use fnmatch or walk, though.)
#!/usr/bin/python
import os, glob, sys

def dirlist(path, c=1):
    for i in glob.glob(os.path.join(path, "*")):
        if os.path.isfile(i):
            filepath, filename = os.path.split(i)
            print '----' * c + filename
        elif os.path.isdir(i):
            dirname = os.path.basename(i)
            print '----' * c + dirname
            c += 1
            dirlist(i, c)
            c -= 1

path = os.path.normpath(sys.argv[1])
print(os.path.basename(path))
dirlist(path)
This one accepts either a compiled regular expression or an fnmatch pattern:
import fnmatch, os

def filepaths(directory, pattern):
    for root, dirs, files in os.walk(directory):
        for basename in files:
            try:
                matched = pattern.match(basename)
            except AttributeError:
                matched = fnmatch.fnmatch(basename, pattern)
            if matched:
                yield os.path.join(root, basename)

# usage
if __name__ == '__main__':
    from pprint import pprint as pp
    import re
    path = r'/Users/hipertracker/app/myapp'
    pp([x for x in filepaths(path, re.compile(r'.*\.py$'))])
    pp([x for x in filepaths(path, '*.py')])
In addition to the suggested answers, you can do this with some lazy generation and list comprehension magic:
import os, glob, itertools

results = itertools.chain.from_iterable(glob.iglob(os.path.join(root, '*.c'))
                                        for root, dirs, files in os.walk('src'))

for f in results: print(f)
Besides fitting in one line and avoiding unnecessary lists in memory, this also has the nice side effect that you can use it in a way similar to the ** operator, e.g. you could use os.path.join(root, 'some/path/*.c') in order to get all .c files in all subdirectories of src that have this structure.
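For instance, a sketch of that variant (the some/path layout is hypothetical); it collects every .c file sitting in a some/path subdirectory anywhere under src:
import os, glob, itertools

results = itertools.chain.from_iterable(
    glob.iglob(os.path.join(root, 'some/path/*.c'))
    for root, dirs, files in os.walk('src'))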
This is working code for Python 2.7. As part of my devops work, I was required to write a script which would move the config files marked live-appName.properties to appName.properties. There could be files with other extensions as well, like live-appName.xml.
Below is working code for this, which finds the files in the given directories (at nested levels) and then renames (moves) them to the required filenames:
import fnmatch
import os
import shutil

def flipProperties(searchDir):
    print "Flipping properties to point to live DB"
    for root, dirnames, filenames in os.walk(searchDir):
        for filename in fnmatch.filter(filenames, 'live-*.*'):
            targetFileName = os.path.join(root, filename.split("live-")[1])
            print "File " + os.path.join(root, filename) + " will be moved to " + targetFileName
            shutil.move(os.path.join(root, filename), targetFileName)
This function is called from a main script
flipProperties(searchDir)
Hope this helps someone struggling with similar issues.
Simplified version of Johan Dahlin's answer, without fnmatch.
import os

matches = []
for root, dirnames, filenames in os.walk('src'):
    matches += [os.path.join(root, f) for f in filenames if f[-2:] == '.c']
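If you need more than one extension, str.endswith also accepts a tuple, so a small variation of the same idea handles several suffixes at once ('.h' here is just an example):
import os

matches = []
for root, dirnames, filenames in os.walk('src'):
    # endswith with a tuple matches any of the listed suffixes
    matches += [os.path.join(root, f) for f in filenames if f.endswith(('.c', '.h'))]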
Here is my solution using list comprehension to search for multiple file extensions recursively in a directory and all subdirectories:
import os, glob

def _globrec(path, *exts):
    """ Glob recursively a directory and all subdirectories for multiple file extensions.

    Note: glob is case-insensitive, i.e. for '\*.jpg' you will get files ending
    with .jpg and .JPG

    Parameters
    ----------
    path : str
        A directory name
    exts : tuple
        File extensions to glob for

    Returns
    -------
    files : list
        list of files matching extensions in exts in path and subfolders
    """
    dirs = [a[0] for a in os.walk(path)]
    f_filter = [d + e for d in dirs for e in exts]
    return [f for files in [glob.iglob(files) for files in f_filter] for f in files]

my_pictures = _globrec(r'C:\Temp', '\*.jpg', '\*.bmp', '\*.png', '\*.gif')
for f in my_pictures:
    print f
import sys, os, glob

dir_list = ["c:\\books\\heap"]
while len(dir_list) > 0:
    cur_dir = dir_list[0]
    del dir_list[0]
    list_of_files = glob.glob(cur_dir + '\\*')
    for book in list_of_files:
        if os.path.isfile(book):
            print(book)
        else:
            dir_list.append(book)
I modified the top answer in this posting and created this script, which will loop through all files in a given directory (searchdir) and the sub-directories under it, and print the filename, root directory, modified/creation dates, and size.
Hope this helps someone walk a directory and get file info.
import time
import fnmatch
import os

def fileinfo(file):
    filename = os.path.basename(file)
    rootdir = os.path.dirname(file)
    lastmod = time.ctime(os.path.getmtime(file))
    creation = time.ctime(os.path.getctime(file))
    filesize = os.path.getsize(file)
    print "%s**\t%s\t%s\t%s\t%s" % (rootdir, filename, lastmod, creation, filesize)

searchdir = r'D:\Your\Directory\Root'
matches = []
for root, dirnames, filenames in os.walk(searchdir):
    ## for filename in fnmatch.filter(filenames, '*.c'):
    for filename in filenames:
        ## matches.append(os.path.join(root, filename))
        ## print matches
        fileinfo(os.path.join(root, filename))
Here is a solution that will match the pattern against the full path and not just the base filename.
It uses fnmatch.translate to convert a glob-style pattern into a regular expression, which is then matched against the full path of each file found while walking the directory.
re.IGNORECASE is optional, but desirable on Windows since the file system itself is not case-sensitive. (I didn't bother compiling the regex because docs indicate it should be cached internally.)
import fnmatch
import os
import re

def findfiles(dir, pattern):
    patternregex = fnmatch.translate(pattern)
    for root, dirs, files in os.walk(dir):
        for basename in files:
            filename = os.path.join(root, basename)
            if re.search(patternregex, filename, re.IGNORECASE):
                yield filename
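Since the pattern is applied to the whole path, directory names can participate in the match as well; for example (the tests directory is hypothetical):
# matches e.g. src/tests/foo.c as well as src/a/tests/b/bar.c
for filename in findfiles('src', '*tests*.c'):
    print(filename)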
I needed a solution for Python 2.x that works fast on large directories.
I ended up with this:
import subprocess

foundfiles = subprocess.check_output("ls src/*.c src/**/*.c", shell=True)
for foundfile in foundfiles.splitlines():
    print foundfile
Note that you might need some exception handling in case ls doesn't find any matching file.
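A sketch of that exception handling (still Python 2.x, as above):
import subprocess

try:
    foundfiles = subprocess.check_output("ls src/*.c src/**/*.c", shell=True)
except subprocess.CalledProcessError:
    foundfiles = ''  # ls exits non-zero when nothing matches
for foundfile in foundfiles.splitlines():
    print foundfile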

How can I check if there are any more JSON files remaining in a directory in Python? [duplicate]


Python 3: search subdirectories for a file

I'm using PyCharm on a Mac. In the script below I'm calling the os.path.isfile function on a file called dwnld.py. It prints "File exists", since dwnld.py is in the same directory as the script (/Users/BobSpanks/PycharmProjects/my scripts).
If I were to put dwnld.py in a different location, how can I make the code below search all subdirectories starting from /Users/BobbySpanks for dwnld.py? I tried reading the os.path notes but couldn't really find what I needed. I'm new to Python.
import os.path

File = "dwnld.py"
if os.path.isfile(File):
    print("File exists")
else:
    print("File doesn't exist")
You can use the glob module for this:
import glob
import os

pattern = '/Users/BobbySpanks/**/dwnld.py'
for fname in glob.glob(pattern, recursive=True):
    if os.path.isfile(fname):
        print(fname)
A simplified version, without checking whether dwnld.py is actually a file:
for fname in glob.glob(pattern, recursive=True):
    print(fname)
Theoretically, it could be a directory now.
If recursive is true, the pattern '**' will match any files and zero
or more directories and subdirectories.
This might work for you:
import os

File = 'dwnld.py'
for root, dirs, files in os.walk('/Users/BobbySpanks/'):
    if File in files:
        print("File exists")
os.walk(top, topdown=True, onerror=None, followlinks=False)
Generate the file names in a directory tree by walking the tree either top-down or bottom-up. For each directory in the tree rooted at directory top (including top itself), it yields a 3-tuple (dirpath, dirnames, filenames).
Source
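If you also want to know where the file was found, a small variation joins the walk root with the name (a sketch):
import os

File = 'dwnld.py'
for root, dirs, files in os.walk('/Users/BobbySpanks/'):
    if File in files:
        print("Found at", os.path.join(root, File))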
Try this:
import os

File = "dwnld.py"
for root, dirs, files in os.walk('.'):
    for file in files:  # loop through directories and files
        if file == File:  # compare to your specified condition
            print("File exists")
Taken from: https://stackoverflow.com/a/31621120/5135450
You may also use the pathlib module built into Python and use its glob method.
Code for that:
from pathlib import Path

File = "dwnld.py"
for fname in Path('/Users/BobbySpanks').glob('**/' + File):
    if fname.is_file():
        print("File exists")
Something like this, using os.listdir(dir):
import os

my_dirs = os.listdir(os.getcwd())
for dirs in my_dirs:
    if os.path.isdir(dirs):
        os.chdir(os.path.join(os.getcwd(), dirs))
        # do even more

Read file in unknown directory

I need to read and edit several files. The issue is that I know roughly where these files are, but not exactly.
All the files are called QqTest.txt and live in various different directories.
I know that the parent directories are called:
mdcArray = ['MDC0021','MDC0022','MDC0036','MDC0055','MDC0057',
            'MDC0059','MDC0061','MDC0062','MDC0063','MDC0065',
            'MDC0066','MDC0086','MDC0095','MDC0098','MDC0106',
            'MDC0110','MDC0113','MDC0114','MDC0115','MDC0121',
            'MDC0126','MDC0128','MDC0135','MDC0141','MDC0143',
            'MDC0153','MDC0155','MDC0158']
but after that there is another unknown subdirectory that contains QqTest.txt,
so I need to read QqTest.txt from /MDC[number]/unknownDir/QqTest.txt.
How do I wildcard-read the file in Python, similar to how I would in bash, i.e.:
/MDC0022/*/QqTest.txt
You can use a Python module called glob to do this. It enables Unix style pathname pattern expansions.
import glob
glob.glob("/MDC0022/*/QqTest.txt")
If you want to do it for all items in the list you can try this.
for item in mdcArray:
    required_files = glob.glob("{0}/*/QqTest.txt".format(item))
    # process files here
Glob documentation
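Since the goal is to read and edit the files, a minimal sketch of processing each match (mdcArray as defined in the question; the editing step is left as a placeholder):
import glob

for item in mdcArray:
    for path in glob.glob("{0}/*/QqTest.txt".format(item)):
        with open(path) as f:
            contents = f.read()
        # ... edit contents here, then write them back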
You could search your root folders as follows:
import os
mdcArray = ['MDC0021','MDC0022','MDC0036','MDC0055','MDC0057',
            'MDC0059','MDC0061','MDC0062','MDC0063','MDC0065',
            'MDC0066','MDC0086','MDC0095','MDC0098','MDC0106',
            'MDC0110','MDC0113','MDC0114','MDC0115','MDC0121',
            'MDC0126','MDC0128','MDC0135','MDC0141','MDC0143',
            'MDC0153','MDC0155','MDC0158']
for root in mdcArray:
    for dirpath, dirnames, filenames in os.walk(root):
        for filename in filenames:
            if filename == 'QqTest.txt':
                file = os.path.join(dirpath, filename)
                print "Found - {}".format(file)
This would display something like the following:
Found - MDC0022\test\QqTest.txt
The os.walk function can be used to traverse your folder structure.
To search all folders for MDC<number> in the path, you could use the following approach:
import os
import re

for dirpath, dirnames, filenames in os.walk('.'):
    if re.search(r'MDC\d+', dirpath):
        for filename in filenames:
            if filename == 'QqTest.txt':
                file = os.path.join(dirpath, filename)
                print "Found - {}".format(file)
You might use os.walk. Not exactly what you wanted, but it will do the job.
import os

rootDir = '.'
for dirName, subdirList, fileList in os.walk(rootDir):
    print('Found directory: %s' % dirName)

How to use glob() to find files recursively?

