Write filenames of different extensions into different text files - Python

import os

exts = ['ppt', 'pptx', 'doc', 'docx', 'txt', 'pdf', 'epub']
files = []
for root, dirnames, filenames in os.walk('.'):
    for i in exts:
        for file in filenames:
            if file.endswith(i):
                file1 = os.path.join(root, file)
                print(file1)
                with open(os.getcwd() + r"\ally_" + i + ".txt", 'w+') as f:
                    f.write("%s\n" % file1)
I'm trying this code. How do I write the names of all files on my system with, e.g., the doc extension into a file named all_docs.txt on my desktop? file.write() inside the for loop only writes the last line for each extension into the files.

You need to open the log file in append mode (a) and not in write mode (w), because with w the file gets truncated (all content deleted) before anything new is written to it.
You can look into the docs of open(). This answer also has an overview of all the file modes.
Does it work with a for you?

with open(os.getcwd() + r"\ally_" + i + ".txt", 'w+') as f:
    f.write("%s\n" % file1)
According to https://docs.python.org/2/library/functions.html#open the "w+" operation truncates the file.
Modes 'r+', 'w+' and 'a+' open the file for updating (reading and writing); note that 'w+' truncates the file.
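A tiny self-contained demonstration of the difference (using a throwaway temp file, not part of the original question):

```python
import os
import tempfile

# "w" truncates on every open; "a" preserves what is already there.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("first\n")
with open(path, "w") as f:   # "w" truncates: "first" is gone
    f.write("second\n")
with open(path, "a") as f:   # "a" appends: "second" survives
    f.write("third\n")
print(open(path).read())     # second
                             # third
```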

The mode w+ for open truncates the file; this is why you lose lines and only the last one remains.
Another small problem is that this way of joining the path and the file name is not portable. You should use os.path.join for that purpose.
with open(os.path.join(os.getcwd(), "ally_" + i + ".txt"), 'a') as f:
    f.write("%s\n" % file1)
Another issue can be weak performance if there are many directories and files: in your code you run through the filenames of each directory once per extension, and you open the output file again and again.
One more issue is how the extension is checked. In most cases the extension can be determined by checking the end of the file name, but sometimes that is misleading. E.g. '.doc' is an extension, but in the filename 'Medoc' the ending 'doc' is just three letters of the name.
So I give an example solution for these problems:
import os

exts = ['ppt', 'pptx', 'doc', 'docx', 'txt', 'pdf', 'epub']
outfiles = {}
for root, dirnames, filenames in os.walk('.'):
    for filename in filenames:
        _, ext = os.path.splitext(filename)
        ext = ext[1:]  # we do not need "."
        if ext in exts:
            file1 = os.path.join(root, filename)
            if ext not in outfiles:
                outfiles[ext] = open(os.path.join(os.getcwd(), "ally_" + ext + ".txt"), 'a')
            outfiles[ext].write("%s\n" % file1)
for ext, file in outfiles.items():  # use iteritems() on Python 2
    file.close()
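As a variant on the answer above, here is a sketch that collects all paths in one walk and writes each list file exactly once, avoiding both the repeated scans and the long-lived open handles (the function names are my own, not from the thread):

```python
import os
from collections import defaultdict

def collect_by_extension(top, exts):
    """Walk `top` once and group matching file paths by extension."""
    groups = defaultdict(list)
    for root, dirnames, filenames in os.walk(top):
        for filename in filenames:
            ext = os.path.splitext(filename)[1][1:]  # drop the leading "."
            if ext in exts:
                groups[ext].append(os.path.join(root, filename))
    return groups

def write_lists(groups, outdir):
    """Write each group to ally_<ext>.txt, opening each output file exactly once."""
    for ext, paths in groups.items():
        with open(os.path.join(outdir, "ally_" + ext + ".txt"), "w") as f:
            f.write("\n".join(paths) + "\n")
```

Because os.path.splitext is used, a file named Medoc correctly yields no extension and is not matched.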

Related

Read contents of a file from a list of files with os.listdir() (Python)

I need to read the contents of a file from the list of files from a directory with os.listdir. My working scriptlet is as follows:
import os

path = "/Users/Desktop/test/"
for filename in os.listdir(path):
    with open(filename, 'rU') as f:
        t = f.read()
        t = t.split()
        print(t)
print(t) gives me all the contents from all the files in the directory (path) at once.
But I'd like to print the contents of the first file, then the contents of the second, and so on, until all the files in the directory have been read.
Please guide! Thanks.
You can print the file name.
Print the content after the file name.
import os

path = "/home/vpraveen/uni_tmp/temp"
for filename in os.listdir(path):
    with open(filename, 'rU') as f:
        t = f.read()
        print(filename + " Content : ")
        print(t)
First, you should build the path of each file using os.path.join(path, filename); otherwise you'll open the wrong files if you change the variable path. Second, your script already provides the contents of all files, starting with the first one. I added a few lines to the script to print the file path and an empty line, so you can see where one file's contents end and the next begin:
import os

path = "/Users/Desktop/test/"
for filename in os.listdir(path):
    filepath = os.path.join(path, filename)
    with open(filepath, 'rU') as f:
        content = f.read()
        print(filepath)
        print(content)
        print()
os.listdir returns the names of the files only. You need to os.path.join that name with the path the files live in - otherwise Python will look for them in your current working directory (os.getcwd()), and if that happens not to be the same as path, Python will not find the files:
import os

path = "/Users/Desktop/test/"
for filename in os.listdir(path):
    print(filename)
    file_path = os.path.join(path, filename)
    print(file_path)
    ..
If you have pathlib at your disposal you can also:

from pathlib import Path

path = "/Users/Desktop/test/"
p = Path(path)
for file in p.iterdir():
    if not file.is_file():
        continue
    print(file)
    print(file.read_text())
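Building on the answers above, a small sketch (the helper name is made up for illustration) that yields one file at a time in sorted name order, which makes the "first file, then second file" behaviour explicit:

```python
import os

def read_files_in_order(path):
    """Yield (filename, contents) pairs, one file at a time, in sorted name order."""
    for filename in sorted(os.listdir(path)):
        filepath = os.path.join(path, filename)
        if not os.path.isfile(filepath):
            continue  # skip subdirectories
        with open(filepath) as f:
            yield filename, f.read()
```

Being a generator, it reads each file lazily, so you can process one file completely before the next is opened.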

How to Save file names and their directories path in a text file using Python

I am trying to find a string that is contained in files under a directory, then store the matching file names and their directories in a new text file or something.
I have got as far as walking through a directory, finding the string, and printing a result, but I am not sure of the next step.
Please help, I'm completely new to coding and Python.
import glob, os

# Open a source as a file and assign it as source
source = open('target.txt').read()
filedirectories = []
# locating the source file and printing the directories.
os.chdir("/Users/a1003584/desktop")
for root, dirs, files in os.walk(".", topdown=True):
    for name in files:
        print(os.path.join(root, name))
        if source in open(os.path.join(root, name)).read():
            print('treasure found.')
Don't do a string comparison if you're looking for a dictionary. Instead use the json module, like this:
import json
import os

filesFound = []

def searchDir(dirName):
    for name in os.listdir(dirName):
        path = os.path.join(dirName, name)
        # If it is a file.
        if os.path.isfile(path):
            try:
                with open(path) as f:
                    fileCon = json.load(f)
            except ValueError:
                print("Not a JSON file.")
                continue
            if "KeySearchedFor" in fileCon:
                filesFound.append(path)
        # If it is a directory.
        else:
            searchDir(path)

# Change this to the directory you're looking in.
searchDir(os.path.expanduser("~/Desktop"))
with open(os.path.expanduser("~/Desktop/OutFile.txt"), 'w') as out:
    out.write("\n".join(filesFound))
This should write the output to a CSV file:
import csv
import os

with open('target.txt') as infile:
    source = infile.read()

with open("output.csv", 'w') as fout:
    outfile = csv.writer(fout)
    outfile.writerow("Directory FileName FilePath".split())
    for root, dirnames, fnames in os.walk("/Users/a1003584/desktop", topdown=True):
        for fname in fnames:
            with open(os.path.join(root, fname)) as infile:
                if source not in infile.read():
                    continue
            outfile.writerow([root, fname, os.path.join(root, fname)])
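For comparison, a minimal sketch of the same search written as a reusable function (the function name and the errors="ignore" choice are my own assumptions, not from the thread); it skips unreadable files instead of crashing:

```python
import os

def find_files_containing(needle, top):
    """Return the path of every readable file under `top` whose contents contain `needle`."""
    hits = []
    for root, dirs, files in os.walk(top):
        for name in files:
            path = os.path.join(root, name)
            try:
                # errors="ignore" lets us scan files with odd bytes as text
                with open(path, errors="ignore") as f:
                    if needle in f.read():
                        hits.append(path)
            except OSError:
                continue  # unreadable file: skip it
    return hits
```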

Zipfile not writing data

I have tried to zip log files that I have created, but nothing is written!
Code:
def del_files():
    '''Adapted from answer # http://stackoverflow.com/questions/7217196/python-delete-old-files by jterrace'''
    dir_to_search = os.path.join(os.getcwd(), 'log')
    for dirpath, dirnames, filenames in os.walk(dir_to_search):
        for file in filenames:
            curpath = os.path.join(dirpath, file)
            log(curpath)
            if curpath != path:
                log("Archiving old log files...")
                with zipfile.ZipFile("log_old.zip", "w") as ZipFile:
                    ZipFile.write(curpath)
                    ZipFile.close()
                log("archived")
For one thing, you are overwriting the output zip file on each iteration:
with zipfile.ZipFile("log_old.zip", "w") as ZipFile:
mode "w" means to create a new file, or truncate an existing file. Probably you mean to append to the zip file, in which case it can be opened for append by using mode "a". Or you could open the zip file outside of the outer for loop.
Your code should result in log_old.zip containing a single file - the last one found by os.walk().
Opening the archive outside of the main loop is better since the file will only be opened once, and it will be closed automatically because of the context manager (with):
with zipfile.ZipFile("log_old.zip", "w") as zf:
    dir_to_search = os.path.join(os.getcwd(), 'log')
    for dirpath, dirnames, filenames in os.walk(dir_to_search):
        for file in filenames:
            curpath = os.path.join(dirpath, file)
            zf.write(curpath)
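The answer also mentions append mode as the other option; a minimal sketch of that variant (the helper name is hypothetical), which reopens the archive with "a" so earlier members survive each call:

```python
import zipfile

def append_to_archive(zip_path, file_path):
    """Add one file to the archive; mode "a" creates the zip if missing
    and keeps previously added members intact."""
    with zipfile.ZipFile(zip_path, "a") as zf:
        zf.write(file_path)
```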

Python file I/O and zipfile: trying to loop through all the files in a folder and then loop through the text in each file using Python

Trying to extract all the zip files, giving each destination folder the same name as its archive.
Then looping through all the files in the folder, and through the lines within those files, to write them to a different text file.
This is my code so far:
#!/usr/bin/env python3
import glob
import os
import zipfile

zip_files = glob.glob('*.zip')
for zip_filename in zip_files:
    dir_name = os.path.splitext(zip_filename)[0]
    os.mkdir(dir_name)
    zip_handler = zipfile.ZipFile(zip_filename, "r")
    zip_handler.extractall(dir_name)
    path = dir_name
fOut = open("Output.txt", "w")
for filename in os.listdir(path):
    for line in filename.read().splitlines():
        print(line)
        fOut.write(line + "\n")
fOut.close()
This is the error that I encounter:
    for line in filename.read().splitlines():
AttributeError: 'str' object has no attribute 'read'
You need to open the file and also join the path to the file. Also, using splitlines and then adding a newline to each line is a bit redundant:
path = dir_name
with open("Output.txt", "w") as fOut:
    for filename in os.listdir(path):
        # join filename to path to avoid the file not being found
        with open(os.path.join(path, filename)) as f:
            for line in f:
                fOut.write(line)
You should always use with to open your files, as it will close them automatically. If the files are not large you can simply fOut.write(f.read()) and remove the inner loop.
You also set path = dir_name, which means path will be set to whatever the last value of dir_name was in your first loop, which may or may not be what you want. You can also use iglob to avoid creating a full list: zip_files = glob.iglob('*.zip').
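Putting the answer's fixes together, a hedged sketch of the whole extract-then-concatenate flow as one function (the name and the makedirs(..., exist_ok=True) choice are mine, not from the thread):

```python
import glob
import os
import zipfile

def extract_and_concat(pattern, out_path):
    """Extract each zip matching `pattern` into a folder named after it,
    then append the text of every extracted file to one output file."""
    with open(out_path, "w") as fout:
        for zip_filename in glob.iglob(pattern):
            dir_name = os.path.splitext(zip_filename)[0]
            os.makedirs(dir_name, exist_ok=True)
            with zipfile.ZipFile(zip_filename) as zh:
                zh.extractall(dir_name)
            for filename in os.listdir(dir_name):
                with open(os.path.join(dir_name, filename)) as f:
                    fout.write(f.read())
```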

How to recursively go through all subdirectories and read files?

I have a root-ish directory containing multiple subdirectories, all of which contain a file named data.txt. What I would like to do is write a script that takes in the "root" directory, reads through all of the subdirectories, reads every "data.txt" in them, and then writes stuff from every data.txt file to an output file.
Here's a snippet of my code:
import os
import sys

rootdir = sys.argv[1]
with open('output.txt', 'w') as fout:
    for root, subFolders, files in os.walk(rootdir):
        for file in files:
            if file == 'data.txt':
                # print file
                with open(file, 'r') as fin:
                    for lines in fin:
                        dosomething()
My dosomething() part I've tested and confirmed to work when I run it on just one file. I've also confirmed that if I tell it to print the file instead (the commented-out line), the script prints out 'data.txt'.
Right now if I run it Python gives me this error:
File "recursive.py", line 11, in <module>
    with open(file,'r') as fin:
IOError: [Errno 2] No such file or directory: 'data.txt'
I'm not sure why it can't find it -- after all, it prints out data.txt if I uncomment the 'print file' line. What am I doing incorrectly?
You need to use full paths; your file variable is just a local filename without a directory path. The root variable is that path:
with open('output.txt', 'w') as fout:
    for root, subFolders, files in os.walk(rootdir):
        if 'data.txt' in files:
            with open(os.path.join(root, 'data.txt'), 'r') as fin:
                for lines in fin:
                    dosomething()
[os.path.join(dirpath, filename)
 for dirpath, dirnames, filenames in os.walk(rootdir)
 for filename in filenames]
A functional approach to getting the tree looks shorter, cleaner, and more Pythonic.
You can wrap the os.path.join(dirpath, filename) call in any function to process the files you get, or save the list of paths for further processing.
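The comprehension above lists every file; filtering it down to the data.txt case from the question gives a one-expression version (wrapped in a hypothetical helper for clarity):

```python
import os

def find_data_files(rootdir):
    """Collect the full path of every data.txt below rootdir."""
    return [os.path.join(dirpath, filename)
            for dirpath, dirnames, filenames in os.walk(rootdir)
            for filename in filenames
            if filename == 'data.txt']
```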
