Copying two source files to one destination in python

You guys were super helpful for my last question, so I figured I'd see if you can help me out again. Right now, I have a bunch of folders named P2_## with each of them containing two folders, 0_output and 1_output. Inside each of the output folders I have a file named Bright_Combo.txt. What I want to do is copy the data from both output folders into a Bright_Sum.txt file in the P2_## folder. This is the code I've got so far, but the problem is that it only copies data from the 1_output folder and in one case saves an empty copy of the Bright_Sum file into a 0_output folder.
import os
import re
import shutil

def test():
    file_paths = []
    filenames = []
    for root, dirs, files in os.walk("/Users/Bashe/Desktop/121210 p2"):
        for file in files:
            if re.match("Bright_Combo.txt", file):
                file_paths.append(root)
                filenames.append(file)
    return file_paths, filenames

def test2(file_paths, filenames):
    for file_path, filename in zip(file_paths, filenames):
        moving(file_path, filename)

def moving(root, file):
    bcombo = open(os.path.join(root, os.pardir, "Bright_Sum.txt"), 'w')
    shutil.copy(os.path.join(root, "Bright_Combo.txt"), os.path.join(root, os.pardir, "Bright_sum.txt"))

file_paths, filenames = test()
test2(file_paths, filenames)
Thanks for the help everyone =)
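For reference, here is a minimal sketch of the append-both-files idea the answers below use, assuming the P2_## folders sit directly inside the desktop folder named in the question:
import os

base = "/Users/Bashe/Desktop/121210 p2"

for entry in os.listdir(base):
    p2_folder = os.path.join(base, entry)
    if not entry.startswith("P2_") or not os.path.isdir(p2_folder):
        continue
    # write one Bright_Sum.txt per P2_## folder, containing both Bright_Combo.txt files
    with open(os.path.join(p2_folder, "Bright_Sum.txt"), "w") as sum_file:
        for output in ("0_output", "1_output"):
            combo = os.path.join(p2_folder, output, "Bright_Combo.txt")
            if os.path.exists(combo):
                with open(combo) as src:
                    sum_file.write(src.read())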

Well, I cannot give you a complete solution, but I can give you an idea...
This is what I implemented for your use case:
import os, re, shutil

f = []
file = 'Bright_Combo.txt'
for root, dirs, files in os.walk('/home/ghantasa/test'):
    if file in files:
        f.append(os.path.join(root, file))

for fil in f:
    with open(fil, 'r') as readfile:
        data = readfile.readlines()
    with open(os.path.join('/'.join(fil.split('/')[:-2]), 'Bright_Sum.txt'), 'a') as writefile:
        writefile.write(''.join(data))
That worked for me, and I hope you can tweak it to your needs.
Hope this helps :)

If you just want to append the second file to the first, you can use bash directly. The code below assumes the P2_## folders are located directly under the root path.
root="/Users/Bashe/Desktop/121210 p2"
for folder in "$root"/P2_*; do
    cp "$folder/0_output/Bright_Combo.txt" "$folder/Bright_Sum.txt"
    cat "$folder/1_output/Bright_Combo.txt" >> "$folder/Bright_Sum.txt"
done

Related

Extract full Path and File Name

Attempting to write a function that walks a file system and returns the absolute path and filename for use in another function.
Example "/testdir/folderA/222/filename.ext".
Having tried multiple versions of this I cannot seem to get it to work properly.
filesCheck = []

def findFiles(filepath):
    files = []
    for root, dirs, files in os.walk(filepath):
        for file in files:
            currentFile = os.path.realpath(file)
            print(currentFile)
            if os.path.exists(currentFile):
                files.append(currentFile)
    return files

filesCheck = findFiles(/testdir)
This returns only one "filename.ext".
If I substitute currentFile = os.path.join(root, file) for os.path.realpath(file), it goes into a loop in the first directory. I tried os.path.join(dir, file) and it fails, as one of my folders is named 222.
I have gone round in circles and got somewhat close, but haven't been able to get it to work.
Running on Linux with Python 3.6.
There are several things wrong with your code.
Multiple values are being assigned to the variable name files.
You're not adding the root directory to each filename os.walk() returns, which can be done with os.path.join().
You're not passing a string to the findFiles() function.
If you fix those things, there's no longer a need to call os.path.exists() because you can be sure the file exists.
Here's a working version:
import os

def findFiles(filepath):
    found = []
    for root, dirs, files in os.walk(filepath):
        for file in files:
            currentFile = os.path.realpath(os.path.join(root, file))
            found.append(currentFile)
    return found

filesCheck = findFiles('/testdir')
print(filesCheck)
Hi, I think this is what you need. Perhaps you could give it a try :)
from os import walk

path = "C:/Users/SK/Desktop/New folder"
files = []
for (directoryPath, directoryNames, allFiles) in walk(path):
    for file in allFiles:
        files.append([file, f"{directoryPath}/{file}"])
print(files)
Output:
[ ['index.html', 'C:/Users/SK/Desktop/New folder/index.html'], ['test.py', 'C:/Users/SK/Desktop/New folder/test.py'] ]

How to read particular text files out of multiple files in sub-directories in python

I have one folder which contains 5 sub-folders.
Each sub-folder contains some 'x.txt', 'y.txt' and 'z.txt' files, and this repeats in every sub-folder.
Now I need to read and print only the 'y.txt' file from all sub-folders.
My problem is that I'm unable to read and print the 'y.txt' files. Can you tell me how to solve this problem?
Below is the code I have written for reading the y.txt file:
import os, sys
import pandas as pd

file_path = ('/Users/Naga/Desktop/Python/Data')
for root, dirs, files in os.walk(file_path):
    for name in files:
        print(os.path.join(root, name))
        pd.read_csv('TextInformation.txt', delimiter=";", names=['Name', 'Value'])
error: File TextInformation.txt does not exist: 'TextInformation.txt'
You could also try the following approach to fetch all y.txt files from your subdirectories:
import glob
import pandas as pd

# get all y.txt files from all subdirectories
all_files = glob.glob('/Users/Naga/Desktop/Python/Data/*/y.txt')

for file in all_files:
    data_from_this_file = pd.read_csv(file, sep=" ", names=['Name', 'Value'])
    # do something with the data
Subsequently, you can apply your code to all the files within the list all_files. The great thing with glob is that you can use wildcards (*). Using them, you don't need the names of the subdirectories (you can even use a wildcard within the filename, e.g. *y.txt). Also see the documentation on glob.
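If the y.txt files can sit more than one directory level deep, glob also supports recursive matching with ** (Python 3.5+); a small sketch under that assumption:
import glob

# '**' together with recursive=True matches any number of nested sub-folders
all_files = glob.glob('/Users/Naga/Desktop/Python/Data/**/y.txt', recursive=True)
print(all_files)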
Your issue is that you forgot to add the parent path of the 'y.txt' file. I suggest this code for you, hope it helps.
import os

pth = '/Users/Naga/Desktop/Python/Data'
list_sub = os.listdir(pth)
filename = 'TextInformation.txt'
for sub in list_sub:
    TextInfo = open('{}/{}/{}'.format(pth, sub, filename), 'r').read()
    print(TextInfo)
Here is a little code snippet. You can personalize it any way you like, but it works for this case.
import os

for dirPath, foldersInDir, fileName in os.walk(path_to_main_folder):
    if fileName:  # the original "is not []" check was always true; this skips folders with no files
        for file in fileName:
            if file.endswith('y.txt'):
                loc = os.sep.join([dirPath, file])
                y_txt = open(loc)
                y = y_txt.read()
                print(y)
But keep in mind that path_to_main_folder is the path that contains the subfolders.

Copying files in python using shutil

I have the following directory structure:
-mailDir
    -folderA
        -sub1
        -sub2
        -inbox
            -1.txt
            -2.txt
            -89.txt
            -subInbox
            -subInbox2
    -folderB
        -sub1
        -sub2
        -inbox
            -1.txt
            -2.txt
            -200.txt
            -577.txt
The aim is to copy all the txt files under inbox folder into another folder.
For this I tried the below code
import os
from os import path
import shutil

rootDir = "mailDir"
destDir = "destFolder"
eachInboxFolderPath = []
for root, dirs, files in os.walk(rootDir):
    for dirName in dirs:
        if dirName == "inbox":
            eachInboxFolderPath.append(root + "\\" + dirName)

for ii in eachInboxFolderPath:
    for i in os.listdir(ii):
        shutil.copy(path.join(ii, i), destDir)
If the inbox directory only has .txt files, the above code works fine. But since the inbox folder under the folderA directory has sub-directories along with the .txt files, the code returns a permission-denied error. What I understood is that shutil.copy won't copy folders.
The aim is to copy only the txt files in every inbox folder to some other location. If the file names are the same in different inbox folders, I have to keep both files. How can we improve the code in this case? Please note that other than the .txt files, everything else is a folder.
One simple solution is to filter out any i that does not have the .txt extension by using the string endswith() method.
import os
from os import path
import shutil

rootDir = "mailDir"
destDir = "destFolder"
eachInboxFolderPath = []
for root, dirs, files in os.walk(rootDir):
    for dirName in dirs:
        if dirName == "inbox":
            eachInboxFolderPath.append(root + "\\" + dirName)

for ii in eachInboxFolderPath:
    for i in os.listdir(ii):
        if i.endswith('.txt'):
            shutil.copy(path.join(ii, i), destDir)
This should ignore any folders and non-txt files that are found with os.listdir(ii). I believe that is what you are looking for.
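The question also asks to keep both copies when two inbox folders contain the same file name. One way to extend the snippet above (a sketch, assuming destFolder already exists) is to prefix each copy with the name of the folder that owns the inbox, giving e.g. folderA_1.txt and folderB_1.txt:
import os
from os import path
import shutil

rootDir = "mailDir"
destDir = "destFolder"

for root, dirs, files in os.walk(rootDir):
    if path.basename(root) == "inbox":
        # e.g. "folderA" for mailDir/folderA/inbox
        owner = path.basename(path.dirname(root))
        for name in files:
            if name.endswith('.txt'):
                # folderA_1.txt, folderB_1.txt, ... so name clashes are kept apart
                shutil.copy(path.join(root, name), path.join(destDir, owner + "_" + name))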
Just remembered that I once wrote several files to solve this exact problem before. You can find the source code here on my Github.
In short, there are two functions of interest here:
list_files(loc, return_dirs=False, return_files=True, recursive=False, valid_exts=None)
copy_files(loc, dest, rename=False)
For your case, you could copy and paste these functions into your project and modify copy_files like this:
# these helpers come with the linked source file: sub is re.sub, join is os.path.join,
# copy is shutil.copy, and list_files is defined alongside copy_files
from re import sub
from os.path import join
from shutil import copy

def copy_files(loc, dest, rename=False):
    # get files with full path
    files = list_files(loc, return_dirs=False, return_files=True, recursive=True, valid_exts=('.txt',))

    # copy files in list to dest
    for i, this_file in enumerate(files):
        # change name if renaming
        if rename:
            # replace slashes with hyphens to preserve unique name
            out_file = sub(r'^./', '', this_file)
            out_file = sub(r'\\|/', '-', out_file)
            out_file = join(dest, out_file)
            copy(this_file, out_file)
            files[i] = out_file
        else:
            copy(this_file, dest)

    return files
Then just call it like so:
copy_files('mailDir', 'destFolder', rename=True)
The renaming scheme might not be exactly what you want, but it will at least not overwrite your files. I believe this should solve all your problems.
Here you go:
import os
from os import path
import shutil

destDir = '<absolute-path>'
for root, dirs, files in os.walk(os.getcwd()):
    # Only copy from 'inbox' directories.
    if not root.endswith('inbox'):
        continue
    # Filter out only '.txt' files.
    files = [f for f in files if f.endswith('.txt')]
    for f in files:
        p = path.join(root, f)
        # print p
        shutil.copy(p, destDir)
Quick and simple.
Sorry, I forgot the part where you also need unique file names. The above solution only works when the file names across the inbox folders are distinct.
For copying files from multiple inboxes and having a unique name in the destination folder, you can try this:
import os
from os import path
import shutil

sourceDir = os.getcwd()
fixedLength = len(sourceDir)
destDir = '<absolute-path>'
filteredFiles = []
for root, dirs, files in os.walk(sourceDir):
    # Filter out only '.txt' files in all the inbox directories.
    if root.endswith('inbox'):
        # here I am joining the file name to the full path while filtering txt files
        files = [path.join(root, f) for f in files if f.endswith('.txt')]
        # add the filtered files to the main list
        filteredFiles.extend(files)

# making a tuple of file path and file name
filteredFiles = [(f, f[fixedLength+1:].replace('/', '-')) for f in filteredFiles]

for (f, n) in filteredFiles:
    print 'copying file...', f
    # copying from the path to the dest directory with specific name
    shutil.copy(f, path.join(destDir, n))

print 'copied', str(len(filteredFiles)), 'files to', destDir
If you need to copy all files instead of just txt files, then change the condition f.endswith('.txt') to path.isfile(path.join(root, f)) (or simply drop the condition, since os.walk() only lists files there) while filtering out the files.

Using subprocess over different files in python

I've got a problem with a short script, it'd be great if you could have a look!
import os
import subprocess

root = "/Users/software/fmtomov1.0/remaker_lastplot/source_relocation/observed_arrivals_loc3d"

def loop_loc3d(file_in):
    """Loops loc3d over the source files"""
    return subprocess.call(['loc3d'], shell=True)

def relocation():
    for subdir, dirs, files in os.walk(root):
        for file in files:
            file_in = open(os.path.join(subdir, file), 'r')
            return loop_loc3d(file_in)
I think the script is quite easy to understand; it's very simple. However, I'm not getting the result I wanted. In a few words, I just want 'loc3d' to operate over all the file contents present in the 'observed_arrivals_loc3d' directory, which means that I need to open all the files, and that's what I've actually done. In fact, if I try to 'print files' after:
for subdir, dirs, files in os.walk(root)
I'll get the name of every file. Furthermore, if I try a 'print file_in' after
file_in = open(os.path.join(subdir, file), 'r')
I get something like this line for every file:
<open file '/Users/software/fmtomov1.0/remaker_lastplot/source_relocation/observed_arrivals_loc3d/EVENT2580', mode 'r' at 0x78fe38>
subprocess has been tested alone on only one file and it's working.
Overall I'm getting no errors, just -11, which means absolutely nothing to me. The output from loc3d should be completely different.
So does the code look fine to you? Is there anything I'm missing? Any suggestions?
Thanks for your help!
I assume you would call loc3d filename from the CLI. If so, then:
def loop_loc3d(filename):
    """Loops loc3d over the source files"""
    return subprocess.call(['loc3d', filename])

def relocation():
    for subdir, dirs, files in os.walk(root):
        for file in files:
            filename = os.path.join(subdir, file)
            return loop_loc3d(filename)
In other words, don't open the file yourself, let loc3d do it.
Currently your relocation method will return after the first iteration (for the first file). You shouldn't need to return at all.
def loop_loc3d(filename):
    """Loops loc3d over the source files"""
    return subprocess.call(['loc3d', filename])

def relocation():
    for subdir, dirs, files in os.walk(root):
        for file in files:
            filename = os.path.join(subdir, file)
            loop_loc3d(filename)
This is only one of the issues. The other concerns loc3d itself: try providing the full path to the loc3d executable.
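For example, if loc3d is not on the PATH that the script sees, the call might look like this (the location below is only a guess; use wherever loc3d is actually installed):
import subprocess

def loop_loc3d(filename):
    """Loops loc3d over the source files"""
    # hypothetical absolute path; replace with the real install location of loc3d
    return subprocess.call(['/usr/local/bin/loc3d', filename])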
A -11 exit code might mean that the command was killed by a signal: Segmentation fault.
It is a bug in loc3d; a well-behaved program should not produce a segmentation fault on any user input.
Feed loc3d only files that it can understand. Print the filenames, or use subprocess.check_call() to find out which file it doesn't like:
#!/usr/bin/env python
import fnmatch
import os
import subprocess

def loc3d_files(root):
    for dirpath, dirs, files in os.walk(root, topdown=True):
        # skip hidden directories
        dirs[:] = [d for d in dirs if not d.startswith('.')]
        # process only known files
        for file in fnmatch.filter(files, "*some?pattern[0-9][0-9].[ch]"):
            yield os.path.join(dirpath, file)

for path in loc3d_files(root):
    print path
    subprocess.check_call(['loc3d', path])  # raise on any error
Just found out that loc3d, as unutbu said, relies on several variables, and in this specific case on a file called 'observed_arrivals' that I have to create and delete every time from my directory. In Pythonic terms it means:
import os
import shutil
import subprocess

def loop_loc3d(file_in):
    """Loops loc3d over the source files"""
    return subprocess.call(["loc3d"], shell=True)

path = "/Users/software/fmtomo/remaker_lastplot/source_relocation"
path2 = "/Users/Programming/working_directory/2test"
new_file_name = 'observed_arrivals'

def define_object_file():
    for filename in os.listdir("."):
        file_in = os.rename(filename, new_file_name)  # get the observed_arrivals file
        file_in = shutil.copy("/Users/simone/Programming/working_directory/2test/observed_arrivals",
                              "/Users/software/fmtomo/remaker_lastplot/source_relocation")
        os.chdir(path)  # go where loc3d is
        loop_loc3d(file_in)
        os.remove("/Users/software/fmtomo/remaker_lastplot/source_relocation/observed_arrivals")
        os.remove("/Users/Programming/working_directory/2test/observed_arrivals")
        os.chdir(path2)
Now, this is working very well, so it should answer my question. I guess it's quite easy to understand; it's just copying, changing directories and that kind of thing.

Finding most recently edited file in python

I have a set of folders, and I want to be able to run a function that will find the most recently edited file and tell me the name of the file and the folder it is in.
Folder layout:
root
Folder A
File A
File B
Folder B
File C
File D
etc...
Any tips to get me started, as I've hit a bit of a wall?
You should look at the os.walk function, as well as os.stat, which can let you do something like:
import os

max_mtime = 0
for dirname, subdirs, files in os.walk("."):
    for fname in files:
        full_path = os.path.join(dirname, fname)
        mtime = os.stat(full_path).st_mtime
        if mtime > max_mtime:
            max_mtime = mtime
            max_dir = dirname
            max_file = fname

print max_dir, max_file
It helps to wrap the built-in directory walking in a function that yields only full paths to files. Then you can just take the function that returns all files and pick out the one that has the highest modification time:
import os

def all_files_under(path):
    """Iterates through all files that are under the given path."""
    for cur_path, dirnames, filenames in os.walk(path):
        for filename in filenames:
            yield os.path.join(cur_path, filename)

latest_file = max(all_files_under('root'), key=os.path.getmtime)
If anyone is looking for a one-line way to do it (note that os.scandir() only looks at the given directory, not its subfolders):
latest_edited_file = max(os.scandir("path\\to\\search"), key=lambda entry: entry.stat().st_mtime).name
Use os.walk to list the files.
Use os.stat to get each file's modified timestamp (st_mtime).
Put the timestamps and filenames in a list and sort it by timestamp; the largest timestamp is the most recently edited file. A sketch of this approach is below.
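A minimal sketch of that sort-based approach, assuming the search starts from the current directory:
import os

entries = []
for dirname, subdirs, files in os.walk("."):
    for fname in files:
        full_path = os.path.join(dirname, fname)
        # store (mtime, path) pairs so sorting by timestamp is straightforward
        entries.append((os.stat(full_path).st_mtime, full_path))

entries.sort()
if entries:
    mtime, newest = entries[-1]  # largest timestamp = most recently edited file
    print(newest)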
For multiple files, if anyone came here for that:
import glob, os

files = glob.glob("/target/directory/path/*/*.mp4")
files.sort(key=os.path.getmtime)
for file in files:
    print(file)
This will print every .mp4 file found one folder level below /target/directory/path/, with the most recently modified file paths at the bottom.
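If only the single newest file is wanted from that sorted list, it is the last element (assuming the glob matched at least one file):
if files:
    latest = files[-1]
    print(latest)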
You can use os.walk. See: http://docs.python.org/library/os.html
Use os.path.walk() to traverse the directory tree and os.stat().st_mtime to get the mtime of the files.
The function you pass to os.path.walk() (the visit parameter) just needs to keep track of the largest mtime it's seen and where it saw it.
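A minimal sketch of that visit-callback approach; note that os.path.walk() exists only in Python 2 (it was removed in Python 3 in favour of os.walk()), and 'root' here is the top folder from the question's layout:
import os

def visit(state, dirname, names):
    # state carries the largest mtime seen so far and where it was seen
    for name in names:
        full_path = os.path.join(dirname, name)
        if os.path.isfile(full_path):
            mtime = os.stat(full_path).st_mtime
            if mtime > state['mtime']:
                state.update(mtime=mtime, dirname=dirname, filename=name)

state = {'mtime': 0, 'dirname': None, 'filename': None}
os.path.walk('root', visit, state)
print state['dirname'], state['filename']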
I'm using path = r"C:\Users\traveler\Desktop":
import os

def all_files_under(path):
    """Iterates through all files that are under the given path."""
    for cur_path, dirnames, filenames in os.walk(path):
        for filename in filenames:
            yield os.path.join(cur_path, filename)

latest_file = max(all_files_under('root'), key=os.path.getmtime)
What am I missing here?
