Read multiple excel files from a folder into pandas - python

I would like to read several Excel files contained in a folder on the Desktop of my MacBook into pandas.
The folder on the Desktop contains a folder (project_dataset) with all the Excel files and the Jupyter notebook where I am writing the code (draft progetto).
I wrote the following code:
path = os.getcwd()
files = os.listdir(path)
files
Output:
['.DS_Store', 'draft progetto.ipynb', '.ipynb_checkpoints', 'project_dataset']
Then when I run:
files_xls = [f for f in files if f[3:] == 'xlsx']
files_xls
I get an empty list as output!
Why is this?

IIUC,
this is something that can be done much more easily with pathlib and Unix-style glob matching.
from pathlib import Path
import pandas as pd
#one liner
your_path = 'path_to_excel_files'
df = pd.concat([pd.read_excel(f) for f in Path(your_path).rglob('*.xlsx')])
Breaking it down.
# find the excel files
# if you want to change the path do Path('your_path')...
files = [file for file in Path.cwd().rglob('*.xlsx')]
#create a list of dataframes.
dfs_list = [pd.read_excel(file) for file in files]
#concat
df = pd.concat(dfs_list)
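As for why your original list comprehension came back empty: f[3:] keeps everything from the fourth character onward (for 'report.xlsx' that is 'ort.xlsx'), so it never equals 'xlsx'. A minimal sketch of the suffix check you probably wanted, using the same os.listdir approach from your question:
import os

files = os.listdir(os.getcwd())

# str.endswith checks the suffix; f[3:] only drops the first three
# characters of the name, so the comparison with 'xlsx' never matches
files_xls = [f for f in files if f.endswith('.xlsx')]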

Related

Pandas (Python) -> Export to xlsx with multiple sheets

I'm trying to read some .xlsx files from a directory that was created earlier using the current timestamp (the files are stored there). Now I want to read those .xlsx files and put them into a single .xlsx file with multiple sheets, but I tried multiple ways and it didn't work. I tried:
Final file: Usage-SvnAnalysis.xlsx
The script I tried:
import pandas as pd
import numpy as np
from timestampdirectory import createdir
import os
dest = createdir()
dfSvnUsers = pd.read_csv(dest, "SvnUsers.xlsx")
dfSvnGroupMembership = pd.read_csv(dest, "SvnGroupMembership.xlsx")
xlwriter = pd.ExcelWriter("Usage-SvnAnalysis.xlsx")
dfSvnUsers.to_excel(xlwriter, sheet_name='SvnUsers', index = False )
dfSvnGroupMembership.to_excel(xlwriter, sheet_name='SvnGroupMembership', index = False )
xlwriter.close()
The folder that is created automatically with the current timestamp contains the files.
This is one of the files that I want to add as a sheet in the final .xlsx.
This is how I create the directory with the current time and return dest to export the files into.
I changed the script a bit; this is how it looks now, but I am still getting an error:
File "D:\Py_location_projects\testfi\Usage-SvnAnalysis.py", line 8, in
with open(file, 'r') as f: FileNotFoundError: [Errno 2] No such file or directory: 'SvnGroupMembership.xlsx'
The files exist, but the script can't get the root path to that directory because I create that directory in another script using the timestamp, and I return the path as dest.
dest = createdir() represents the path where the files are. What I need to do is just access this dest, read the files from there, and export them into a single .xlsx as its sheets (in this case sheet1 and sheet2), because I tried to read only 2 files from that dir.
import pandas as pd
import numpy as np
from timestampdirectory import createdir
import os
dest = createdir()
files = os.listdir(dest)
for file in files:
    with open(file, 'r') as f:
        dfSvnUsers = open(os.path.join(dest, 'SvnUsers.xlsx'))
        dfSvnGroupMembership = open(os.path.join(dest, 'SvnGroupMembership.xlsx'))
xlwriter = pd.ExcelWriter("Usage-SvnAnalysis.xlsx")
dfSvnUsers.to_excel(xlwriter, sheet_name='SvnUsers', index = False )
dfSvnGroupMembership.to_excel(xlwriter, sheet_name='SvnGroupMembership', index = False )
xlwriter.close()
I think you should try reading the Excel files with pd.read_excel instead of pd.read_csv.
import os
dfSvnUsers = pd.read_excel(os.path.join(dest, "SvnUsers.xlsx"))
dfSvnGroupMembership = pd.read_excel(os.path.join(dest, "SvnGroupMembership.xlsx"))
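Putting it together, a minimal sketch of the corrected script, assuming createdir() from your own timestampdirectory module returns the timestamped folder that holds the two workbooks:
import os
import pandas as pd
from timestampdirectory import createdir  # your module, per the question

dest = createdir()  # path of the timestamped folder

# read each workbook from the timestamped directory
dfSvnUsers = pd.read_excel(os.path.join(dest, "SvnUsers.xlsx"))
dfSvnGroupMembership = pd.read_excel(os.path.join(dest, "SvnGroupMembership.xlsx"))

# write both frames into one workbook, one sheet each
with pd.ExcelWriter("Usage-SvnAnalysis.xlsx") as xlwriter:
    dfSvnUsers.to_excel(xlwriter, sheet_name='SvnUsers', index=False)
    dfSvnGroupMembership.to_excel(xlwriter, sheet_name='SvnGroupMembership', index=False)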

(python) read specific type of .xlsx file name in a folder

I searched a few related discussions, such as
Read most recent excel file from folder PYTHON; however, they do not quite fit my requirement.
Suppose I have a folder with the following .xlsx files.
I want to read the files whose names contain "T2xxMhz", i.e., the last 7 files.
I have the following codes
import os
import pandas as pd
folder = r'C:\Users\work' # <--- find the folder
files = os.listdir(folder) # <--- find files in the folder 'work'
dfs ={}
for i, file in enumerate(files):
    if file.endswith('.xlsx'):
        dfs[i] = pd.read_excel(os.path.join(folder, file), sheet_name='Z=143', header=None, skiprows=[0], usecols="B:M")  # <--- read specific sheet with the name 'Z=143'
        num = i + 1  # <--- number of files.
However, with this code I cannot differentiate between the two types of file names, 'PYTEST' and 'T2XXX'.
How can I deal with this problem? Any suggestions and hints, please!
Use the glob module; it supports Unix shell-style wildcards in its patterns.
import glob
dir = 'path/to/files/'
flist = glob.glob(dir + 'T*Mhz*')
print(flist)
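If you then want the matched workbooks as dataframes, the glob result drops straight into the loop from the question. A sketch, where the sheet name and column range are taken from your code and the '.xlsx' suffix in the pattern is assumed to match your files:
import glob
import os
import pandas as pd

folder = r'C:\Users\work'
flist = glob.glob(os.path.join(folder, 'T*Mhz*.xlsx'))  # only the T2xxMhz workbooks

dfs = {}
for i, file in enumerate(flist):
    # same read options as in the question
    dfs[i] = pd.read_excel(file, sheet_name='Z=143', header=None,
                           skiprows=[0], usecols="B:M")
num = len(flist)  # number of matched files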

How to find a required file and read it in a zip file?

I have zip files, and each zip file contains three subfolders (i.e. ini, log, and output). I want to read a file from the output folder; it contains three csv files with different names. Suppose the three file names are: initial.csv, intermediate.csv, and final.csv, and I just want to read the final.csv file.
The code that I tried to read file is:
import glob
import zipfile
import numpy as np
import pandas as pd

zipfiles = glob.glob('/home/data/*.zip')
for i in np.arange(len(zipfiles)):
    zip = zipfile.ZipFile(zipfiles[i])
    f = zip.open(zip.namelist().startswith('final'))
    data = pd.read_csv(f, usecols=[3,7])
The error I got is: 'list' object has no attribute 'startswith'.
How can I find the correct file and read it?
Replace
f = zip.open(zip.namelist().startswith('final'))
with
f = zip.open('output/final.csv')
If you can "find" it:
filename = [name for name in zip.namelist() if name.startswith('output/final')][0]
f = zip.open(filename)
To find the zip files in sub dirs, let's switch to pathlib, which uses glob-style matching:
from pathlib import Path
import zipfile
import pandas as pd
dfs = []
files = Path('/home/data/').rglob('*.zip')  # rglob recursively trawls all child dirs
for file in files:
    zip = zipfile.ZipFile(file)
    ....
    # your stuff
    df = pd.read_csv(f, usecols=[3,7])
    dfs.append(df)
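A fuller sketch of that idea, combining the namelist() search from above with the pathlib loop; the 'output/final' prefix is an assumption about how the members are named inside each archive, and the column choice comes from the question:
from pathlib import Path
import zipfile
import pandas as pd

dfs = []
for zip_path in Path('/home/data/').rglob('*.zip'):  # every archive under the data folder
    with zipfile.ZipFile(zip_path) as zf:
        # pick the member in the output folder whose name starts with 'final'
        member = next(name for name in zf.namelist()
                      if name.startswith('output/final'))
        with zf.open(member) as f:
            dfs.append(pd.read_csv(f, usecols=[3, 7]))

df = pd.concat(dfs)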

Search and copy files listed in a dataframe

Hi, I'm working on a simple script that copies files from one directory to another based on a dataframe that contains a list of invoices.
Is there any way to do this as a partial match? I want all the files that contain "F11000", "G13000", and so on, continuing the loop until there is no more data in the DF.
I tried to figure it out by myself, and I'm pretty sure changing the "x" in the copy function will do the trick, but I can't see how.
import pandas as pd
import os
import glob
import shutil
data = {'Invoice':['F11000','G13000','H14000']}
df = pd.DataFrame(data, columns=['Invoice'])
path = 'D:/Pyfilesearch'
dest = 'D:/Dest'
def find(name, path):
    for root, dirs, files in os.walk(path):
        if name in files:
            return os.path.join(root, name)

def copy():
    for x in df['Invoice']:
        shutil.copy(find(x, path), dest)

copy()
Using pathlib
pathlib is part of the standard library and treats paths as objects with methods instead of strings (see Python 3's pathlib Module: Taming the File System).
The script assumes dest is an existing directory.
.rglob searches subdirectories for files.
from pathlib import Path
import pandas as pd
import shutil
# convert paths to pathlib objects
path = Path('D:/Pyfilesearch')
dest = Path('D:/Dest')
# find files and copy
for v in df.Invoice.unique():  # iterate through unique column values
    files = list(path.rglob(f'*{v}*'))  # create a list of files for a value
    files = [f for f in files if f.is_file()]  # if not using a file extension, verify the item is a file
    for f in files:  # iterate through and copy files
        print(f)
        shutil.copy(f, dest)
Copy to subdirectories for each value
path = Path('D:/Pyfilesearch')
for v in df.Invoice.unique():
    dest = Path('D:/Dest')
    files = list(path.rglob(f'*{v}*'))
    files = [f for f in files if f.is_file()]
    dest = dest / v  # create path with value
    if not dest.exists():  # check if directory exists
        dest.mkdir(parents=True)  # if not, create directory
    for f in files:
        shutil.copy(f, dest)
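If you would rather stay with the os.walk approach from your own script, the same partial match can be done by checking each file name for the invoice substring. A sketch reusing df, path, and dest from the question; find_partial and copy_partial are hypothetical helper names, not part of any library:
import os
import shutil

def find_partial(name, path):
    # yield every file under path whose name contains the given substring
    for root, dirs, files in os.walk(path):
        for f in files:
            if name in f:  # partial match, e.g. 'F11000' anywhere in the file name
                yield os.path.join(root, f)

def copy_partial():
    for x in df['Invoice']:
        for match in find_partial(x, path):
            shutil.copy(match, dest)

copy_partial()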

How to read particular text files out-of multiple files in a sub directories in python

I have one folder, and within it there are 5 sub-folders.
Each sub-folder contains 'x.txt', 'y.txt' and 'z.txt' files, and this repeats in every sub-folder.
Now I need to read and print only the 'y.txt' file from all sub-folders.
My problem is that I'm unable to read and print the 'y.txt' files. Can you tell me how to solve this problem?
Below is the code I have written for reading the y.txt file:
import os, sys
import pandas as pd
file_path = ('/Users/Naga/Desktop/Python/Data')
for root, dirs, files in os.walk(file_path):
    for name in files:
        print(os.path.join(root, name))
        pd.read_csv('TextInformation.txt', delimiter=";", names=['Name', 'Value'])
Error: File TextInformation.txt does not exist: 'TextInformation.txt'
You could also try the following approach to fetch all y.txt files from your subdirectories:
import glob
import pandas as pd
# get all y.txt files from all subdirectories
all_files = glob.glob('/Users/Naga/Desktop/Python/Data/*/y.txt')
for file in all_files:
    data_from_this_file = pd.read_csv(file, sep=" ", names=['Name', 'Value'])
    # do something with the data
Subsequently, you can apply your code to all the files within the list all_files. The great thing with glob is that you can use wildcards (*). Using them, you don't need the names of the subdirectories (you can even use one within the filename, e.g. *y.txt). Also see the documentation on glob.
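If you want all of those files in one frame afterwards, a small sketch building on the same loop; the two-column layout comes from your read_csv call, and the added source column is purely illustrative:
import glob
import pandas as pd

all_files = glob.glob('/Users/Naga/Desktop/Python/Data/*/y.txt')

frames = []
for file in all_files:
    df = pd.read_csv(file, sep=" ", names=['Name', 'Value'])
    df['source'] = file  # keep track of which sub-folder each row came from
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
print(combined)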
Your issue is that you forgot to add the parent path of the 'y.txt' file. I suggest this code for you; hope it helps.
import os
pth = '/Users/Naga/Desktop/Python/Data'
list_sub = os.listdir(pth)
filename = 'TextInformation.txt'
for sub in list_sub:
    TextInfo = open('{}/{}/{}'.format(pth, sub, filename), 'r').read()
    print(TextInfo)
I got you a little code; you can personalize it any way you like, but the code works for you.
import os
for dirPath, foldersInDir, fileName in os.walk(path_to_main_folder):
    if fileName:  # only look at directories that actually contain files
        for file in fileName:
            if file.endswith('y.txt'):
                loc = os.sep.join([dirPath, file])
                y_txt = open(loc)
                y = y_txt.read()
                print(y)
But keep in mind that path_to_main_folder is the path that has the subfolders.
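One thing to watch with endswith('y.txt'): it also matches names such as 'my.txt'. If you want exactly 'y.txt', compare the name directly; a small sketch of that tweak, with the folder from the question assumed as path_to_main_folder:
import os

path_to_main_folder = '/Users/Naga/Desktop/Python/Data'  # assumed from the question

for dirPath, foldersInDir, fileName in os.walk(path_to_main_folder):
    for file in fileName:
        if file == 'y.txt':  # exact name, so e.g. 'my.txt' is not picked up
            with open(os.path.join(dirPath, file)) as y_txt:
                print(y_txt.read())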
