I had 120 files in my source folder that I needed to move to a new directory (the destination). The destination is created in the function I wrote, based on a string in the filename. Here is the code I used:
import os
import shutil
import fnmatch
import pandas as pd

path = '/path/to/source'
dropbox = '/path/to/dropbox'
files = [i for i in os.listdir(path) if i.startswith("SSE")]

sam_lis = list()
for sam in files:
    sam_id = sam.split('_')[5]
    sam_lis.append(sam_id)
sam_lis = pd.unique(sam_lis).tolist()
# Using the above list
ID = sam_lis
def filemover(ID, files, dropbox):
    """
    Function to move files from the common place to the destination folder.
    """
    for samples in ID:
        for fs in files:
            if samples in fs:
                destination = dropbox + "/" + samples + "/raw/"
                if not os.path.isdir(destination):
                    os.makedirs(destination)
                for rawfiles in fnmatch.filter(files, pat="*"):
                    if samples in rawfiles:
                        shutil.move(os.path.join(path, rawfiles),
                                    os.path.join(destination, rawfiles))
In the function, I am creating the destination folders based on the IDs derived from the files list. When I ran this for the first time, it threw a file-not-exists error.
However, when I later checked the source, all files starting with SSE were missing, even though they were there at the beginning. I want some insights here:
Does shutil.move ever move files somewhere like a temp folder instead of the destination folder?
Does shutil.move delete the files from the source under any circumstances?
Is there any way I can test my script to find the potential reasons for the missing files?
Any help or suggestions are much appreciated.
This is late, but people don't seem to understand the OP's question. If you move a file into a non-existent folder, the file can seem to become an unreadable binary and get lost. It has happened to me twice, once in Git Bash and the other time using shutil.move in Python. In the Python case, it happens when your shutil.move destination points to a folder path instead of to a full file path.
For example, if you run the code below while dst_folder does not exist, a situation similar to what the OP described will happen:
import glob
import os
import shutil

src_folder = r'C:/Users/name'
dst_folder = r'C:/Users/name/data_images'  # assume this folder does not exist
file_names = glob.glob(r'C:/Users/name/*.jpg')

for file in file_names:
    file_name = os.path.basename(file)
    shutil.move(os.path.join(src_folder, file_name), dst_folder)
Note that dst_folder is just a folder path. The destination should be os.path.join(dst_folder, file_name); otherwise, if dst_folder does not exist, each file is renamed onto that path instead of being moved into it. This will cause what the OP described in his question. I found something similar at the link here, with a more detailed explanation of what went wrong: File moving mistake with Python
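A minimal sketch of the corrected loop, assuming you create the folder first and pass a full file path to shutil.move:

import glob
import os
import shutil

src_folder = r'C:/Users/name'
dst_folder = r'C:/Users/name/data_images'

os.makedirs(dst_folder, exist_ok=True)  # make sure the folder exists first
for file in glob.glob(os.path.join(src_folder, '*.jpg')):
    # move to an explicit destination file path, not a bare folder name
    shutil.move(file, os.path.join(dst_folder, os.path.basename(file)))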
shutil.move does not delete your files. If for any reason your files failed to move to a given location, check the directory where your code is stored for a '+' folder; your files are most likely stored there.
I have code that locates files in a folder by name and moves them to another set path.
import os, shutil

files = os.listdir("D:/Python/Test")

one = "one"
two = "two"
oney = "fold1"
twoy = "fold2"

def findfile(x, y):
    for file in files:
        if x in file.lower():
            src = 'D:/Python/Test/' + file
            dest = 'D:/Python/Test/' + y
            if not os.path.exists(dest):
                os.makedirs(dest)
            shutil.move(src, dest)

findfile(one, oney)
findfile(two, twoy)
In this case, the program moves all the files in the Test folder to another path depending on the name. Let's take one as an example:
If there is a .png whose name contains one, it will be moved to the fold1 folder. The problem is that my code does not distinguish between types of entries, and I would like it to exclude folders from the search.
If there is a folder whose name contains one, it should not be moved to the fold1 folder; an entry should only be moved if it is a file! The other files should still be moved.
The files in the folder contain the string one; they are not called exactly that.
I don't know if I have explained myself very well; if something is not clear, leave me a comment and I will try to explain it better!
Thanks in advance for your help!
os.path.isdir(path)
The os.path.isdir() method in Python is used to check whether the specified path is an existing directory or not. This method follows symbolic links: if the specified path is a symbolic link pointing to a directory, the method returns True.
Check with that function before moving.
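For example, applied to the findfile loop from the question, a minimal sketch (the hard-coded Test path comes from the question's code):

import os, shutil

def findfile(x, y):
    for file in files:
        src = 'D:/Python/Test/' + file
        if os.path.isdir(src):
            continue  # skip folders, only move regular files
        if x in file.lower():
            dest = 'D:/Python/Test/' + y
            if not os.path.exists(dest):
                os.makedirs(dest)
            shutil.move(src, dest)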
Hope this helps :)
Currently, I am working on a project in which I am synchronizing two folders. In the following example my folders are named Folder_1 (the source) and Folder_2 (the destination). I want to do the following things:
If files which are present in Folder_1 are not present in Folder_2, copy the files from Folder_1 to Folder_2, and vice versa.
If I rename any file in either folder, it gets updated in the other folder instead of copying a new file with the updated name.
If I delete any file from one folder, it should get deleted from the other folder as well.
I have done half of the first point: I am able to copy files from Folder_1 to Folder_2. The second part, copying files from Folder_2 to Folder_1, is still remaining.
Following is my code:
import os, shutil

path = 'C:/Users/saqibshakeel035/Desktop/Folder_1/'
copyto = 'C:/Users/saqibshakeel035/Desktop/Folder_2/'

files = os.listdir(path)
files.sort()

for f in files:
    src = path + f
    dst = copyto + f
    try:
        if os.stat(src).st_mtime < os.stat(dst).st_mtime:
            continue
    except OSError:
        pass
    shutil.copy(src, dst)  # this also covers the case when the file doesn't exist in the destination

print('Files copied from ' + path + ' to ' + copyto + '!')
What can I amend or do so that I can synchronize both folders completely?
Thanks in advance :)
(Not the same approach as yours, but it gets the work done as expected from your query.)
Simple code using dirsync:
from dirsync import sync
source_path = '/Give/Source/Folder/Here'
target_path = '/Give/Target/Folder/Here'
sync(source_path, target_path, 'sync') #for syncing one way
sync(target_path, source_path, 'sync') #for syncing the opposite way
See documentation here for more options: dirsync - PyPI
You can, of course, add exception handling manually if you want.
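For example, a minimal sketch with manual exception handling (my assumption: wrapping both sync calls in one try block is acceptable for your use case):

from dirsync import sync

try:
    sync(source_path, target_path, 'sync')  # one way
    sync(target_path, source_path, 'sync')  # the opposite way
except Exception as exc:
    # dirsync reports problems such as a missing source directory
    print('Sync failed:', exc)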
So I have code that completes an analysis for me on imaging data. This code is working fine except that the data is not being outputted into the correct folder. Instead, it is being outputted one folder too high. This is something I obviously want to fix in the long run, but as I am under certain time constraints I want to input code into my script that will simply move the files to the correct folder/directory that I have created. I have tried the mv and shutil commands but they don't seem to be working. I would be grateful if someone had a suggestion for how to fix/improve my method of moving these files to the correct location. If someone has a suggestion for why the files are being outputted to the wrong directory as well that would be awesome. I am relatively new to coding and no expert so please forgive any obvious mistakes. Thank you.
This is where I set up my directories:
subject_dir = os.getcwd()
dti_dir = os.path.abspath(os.path.join(subject_dir, 'dti'))
dti_input_dir = os.path.abspath(os.path.join(dti_dir, 'input'))
This is where I entered a few shortcuts:
eddy_prefix = 'eddy'
input_path = dti_input_dir
output_prefix = 'dtifit'
output_path = '../dti/dtifit'
output_basename = os.path.abspath(os.path.join(dti_dir, output_path))
infinite_path = os.path.join(os.getenv('INFINITE_PATH'), 'infinite')
dti30 = 'dti30.nii.gz'
dti30_brain = 'bet.b0.dti30.nii.gz'
dti30_mask = 'bet.b0.dti30_mask.nii.gz'
This is where I ran my test.
The test runs, but my data is being output into dti_dir and not output_basename (this is my second question):
dti = fsl.DTIFit()
dti.inputs.dwi = os.path.join(input_path, eddy_prefix + '.nii.gz')
dti.inputs.bvecs = os.path.join(input_path, eddy_prefix + '.eddy_rotated_bvecs')
dti.inputs.bvals = os.path.abspath(os.path.join(infinite_path, 'dti30.bval'))
dti.inputs.base_name = output_basename
dti.inputs.mask = os.path.join(input_path, dti30_mask)
dti.cmdline
Creating the output directory if it doesn't exist.
This works fine, and the directory is created in the proper location:
if not os.path.exists(output_basename):
    os.makedirs(output_basename)
print('DTI Command line')
print(dti.cmdline)
res = dti.run()
print('DTIFIT Complete!')
Here I try to move the files, and I get IOError: [Errno 2] No such file or directory, even though I know the files exist:
src = dti_dir
dst = output_basename

files = os.listdir(src)
for f in files:
    if f.startswith("dtifit_"):
        shutil.move(f, dst)
Your problem is most likely in the very first line. You may want to change that to be more exact, pointing to an explicit directory rather than os.getcwd(). os.getcwd() returns wherever the process is executing from, i.e. wherever you kicked off your Python run, so it can change. You probably want to hard-code it somehow.
For example, you could use something like this to point to the directory that the file lives in:
Find current directory and file's directory
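A minimal sketch of that idea (assuming the script file itself lives where subject_dir should point):

import os

# anchor paths to the script's own location instead of the current working directory
script_dir = os.path.dirname(os.path.abspath(__file__))
subject_dir = script_dir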
Another thing you could consider doing is printing out dst in your last few lines and seeing if it's what you expect. You could also use PDB to inspect that value live:
https://pymotw.com/2/pdb/
Edit:
Your problem is using a relative path with shutil.move(). You should make sure that the path is absolute; please see this answer for more info:
stackoverflow.com/a/22230227/7299836
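In other words, a minimal sketch of the corrected move (same variables as in the question; the absolute source path is built with os.path.join):

import os
import shutil

for f in os.listdir(src):
    if f.startswith("dtifit_"):
        # join the source directory so shutil.move gets a full path, not a bare name
        shutil.move(os.path.join(src, f), dst)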
How do I get the data from multiple .txt files placed in a specific folder? I started with this but could not fix it. It gives an error like 'No such file or directory: .idea' (??).
(Let's say I have a folder A, and in it there are x.txt, y.txt, z.txt and so on. I am trying to get and print the information from all the files x, y, z.)
import os

def find_get(folder):
    for file in os.listdir(folder):
        f = open(file, 'r')
        for data in open(file, 'r'):
            print data

find_get('filex')
Thanks.
If you just want to print each line:
import glob
import os

def find_get(path):
    for f in glob.glob(os.path.join(path, "*.txt")):
        # glob already returns the path-joined name, so open f directly
        with open(f) as data:
            for line in data:
                print(line)
glob will find only your .txt files in the specified path.
Your error comes from not joining the path to the filename; unless the file was in the same directory you were running the code from, Python would not be able to find the file without the full path. Another issue is that you seem to have a directory named .idea, which would also give you an error when you try to open it as a file. This also presumes you actually have permission to read the files in the directory.
If your files were larger, I would avoid reading them all into memory and/or storing the full content.
First of all, make sure you add the folder name to the file name so you can find the file relative to where the script is executed.
To do so you want to use os.path.join, which, as its name suggests, joins paths. So, using a generator:
def find_get(folder):
    for filename in os.listdir(folder):
        relative_file_path = os.path.join(folder, filename)
        with open(relative_file_path) as f:
            # read() gives the entire data from the file
            yield f.read()

# this consumes the generator to a list
files_data = list(find_get('filex'))
See what we got in the list that consumed the generator:
print(files_data)
It may be more convenient to produce tuples which can be used to construct a dict:
def find_get(folder):
    for filename in os.listdir(folder):
        relative_file_path = os.path.join(folder, filename)
        with open(relative_file_path) as f:
            # read() gives the entire data from the file
            yield (relative_file_path, f.read())

# this consumes the generator to a dict
files_data = dict(find_get('filex'))
You will now have a mapping from each file's path to its content.
Also, take a look at the answer by @Padraic Cunningham. He brought up the glob module, which is suitable in this case.
The error you're facing is simple: listdir returns filenames, not full pathnames. To turn them into pathnames you can access from your current working directory, you have to join them to the directory path:
for filename in os.listdir(directory):
    pathname = os.path.join(directory, filename)
    with open(pathname) as f:
        ...  # do stuff
So, in your case, there's a file named .idea in the folder directory, but you're trying to open a file named .idea in the current working directory, and there is no such file.
There are at least four other potential problems with your code that you also need to think about and possibly fix after this one:
You don't handle errors. There are many very common reasons you may not be able to open and read a file--it may be a directory, you may not have read access, it may be exclusively locked, it may have been moved since your listdir, etc. And those aren't logic errors in your code or user errors in specifying the wrong directory, they're part of the normal flow of events, so your code should handle them, not just die. Which means you need a try statement.
You don't do anything with the files but print out every line. Basically, this is like running cat folder/* from the shell. Is that what you want? If not, you have to figure out what you want and write the corresponding code.
You open the same file twice in a row, without closing in between. At best this is wasteful, at worst it will mean your code doesn't run on any system where opens are exclusive by default. (Are there such systems? Unless you know the answer to that is "no", you should assume there are.)
You don't close your files. Sure, the garbage collector will get to them eventually--and if you're using CPython and know how it works, you can even prove the maximum number of open file handles that your code can accumulate is fixed and pretty small. But why rely on that? Just use a with statement, or call close.
However, none of those problems are related to your current error. So, while you have to fix them too, don't expect fixing one of them to make the first problem go away.
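A minimal sketch combining those fixes (path joining, a try statement, and a with statement; what you actually do with each line is still up to you):

import os

def find_get(folder):
    for filename in os.listdir(folder):
        pathname = os.path.join(folder, filename)
        try:
            with open(pathname) as f:  # closed automatically, even on errors
                for line in f:
                    print(line)
        except (IOError, OSError) as exc:
            # directories, permission problems, files moved after listdir, ...
            print('skipping {}: {}'.format(pathname, exc))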
Full variant:
import os

def find_get(path):
    files = {}
    for file in os.listdir(path):
        if os.path.isfile(os.path.join(path, file)):
            with open(os.path.join(path, file), "r") as data:
                files[file] = data.read()
    return files

print(find_get("filex"))
Output:
{'1.txt': 'dsad', '2.txt': 'fsdfs'}
After that you could generate one file from that content, etc.
Key things:
os.listdir returns a list of file names without the full path, so you need to join the initial path with each found item before operating on it.
A dict is ideal here :)
os.listdir returns both files and folders, so you need to check whether each list item is really a file.
You should check that each entry is actually a file and not a folder, since you can't open folders for reading. Also, you can't just open a relative-path file, since it is under a folder; you should build the correct path with os.path.join. Check below:
import os

def find_get(folder):
    for file in os.listdir(folder):
        # join first: checking the bare name would look in the working directory
        if not os.path.isfile(os.path.join(folder, file)):
            continue  # skip directories
        with open(os.path.join(folder, file), 'r') as f:
            for line in f:
                print(line)
I am attempting to write a simple script to recursively rip through a directory and check if any of the files have been changed. I only have the traversal so far:
import fnmatch
import os
from optparse import OptionParser

rootPath = os.getcwd()
pattern = '*.js'

for root, dirs, files in os.walk(rootPath):
    for filename in files:
        print(os.path.join(root, filename))
I have two issues:
1. How do I tell if a file has been modified?
2. How can I check if a directory has been modified? - I need to do this because the folder I wish to traverse is huge. If I can check if the dir has been modified and not recursively rip through an unchanged dir, this would greatly help.
Thanks!
If you are comparing two files between two folders, you can use os.path.getmtime() on both files and compare the results. If they're the same, they haven't been modified. Note that this will work on both files and folders.
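A minimal sketch of that pairwise comparison (dir1, dir2 and name are hypothetical; it assumes the same file name exists in both folders):

import os

def same_mtime(dir1, dir2, name):
    a = os.path.join(dir1, name)
    b = os.path.join(dir2, name)
    # equal modification times are taken to mean "not modified"
    return os.path.getmtime(a) == os.path.getmtime(b)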
The typical fast way to tell if a file has been modified is to use os.path.getmtime(path) (assuming a Linux or similar environment). This will give you the modification timestamp, which you can compare to a stored timestamp to determine if a file has been modified.
getmtime() works on directories too, but it will only tell you whether a file has been added, removed or renamed in the directory; it will not tell you whether a file has been modified inside the directory.
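A minimal sketch of the stored-timestamp idea (the last_seen dict is my assumption for where you keep the previous timestamps):

import os

last_seen = {}  # path -> mtime recorded on the previous pass

def modified_since_last_pass(path):
    mtime = os.path.getmtime(path)
    changed = last_seen.get(path) != mtime
    last_seen[path] = mtime
    return changed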
This is my own implementation of what you might be looking for. Mind that, besides timestamps, you might want to track files that have been added or deleted too (like I do). If not, you can just change the code on the line:
if now == before:
Here is the code:
import glob
import os

before = []  # snapshot of (filename, mtime) pairs from the previous check

# check if any txt file in folder "wd" has been modified (rewritten, added or deleted)
def src_dir_modified(wd):
    now = []
    global before
    all_files = glob.glob(os.path.join(wd, '*.txt'))
    for infile in all_files:
        now.append([infile, os.stat(infile).st_mtime])
    if now == before:  # compare files and their time stamps
        return False
    else:
        before = now
        print('Source code has been modified.')
        return True
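A possible usage sketch (the polling interval and folder path are my assumptions):

import time

while True:
    if src_dir_modified('/path/to/wd'):
        # react to the change: re-run the analysis, reload data, etc.
        pass
    time.sleep(5)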
If you can admit the use of a command-line tool, you could use rsync instead of re-inventing the wheel. rsync uses file modification time and file size to decide if a file has been changed or not.
rsync --verbose --recursive --dry-run dir1 dir2 should get the differences between files in dir1 and dir2. You can write the output to a log file to act on it.
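If you want to drive it from Python, a minimal sketch using the standard library (assumes rsync is on your PATH; dir1 and dir2 are placeholders):

import subprocess

# dry-run: print what rsync would transfer without changing anything
result = subprocess.run(
    ['rsync', '--verbose', '--recursive', '--dry-run', 'dir1', 'dir2'],
    capture_output=True, text=True)
print(result.stdout)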