Python Script saves files one directory above user input directory - python

I am writing a Python script which uses tkinter to take the user's input for a .xlsx file, group the data in it by location, and then export an individual CSV file for each unique value of location, keeping only the columns I tell it to keep. The issue is that while taking the user's input for the directory to store the files in, the script is saving the files one directory above it.
For example, let's say the user selects \Desktop\XYZ\Test as the directory for the files to be saved in; the script saves the exported file one directory above it, i.e. \Desktop\XYZ, while adding the name of the subdirectory, Test, to the exported file name. The code I'm using is attached below.
This is probably a simple issue, but being a newbie I'm at my wit's end trying to resolve this, so any help is appreciated.
Code:
import pandas as pd
import csv
import locale
import os
import sys
import unicodedata
import tkinter as tk
from tkinter import simpledialog
from tkinter import filedialog
from tkinter.filedialog import askopenfilename
from tkinter import *
from tkinter import ttk
ROOT = tk.Tk()
ROOT.withdraw()
data_df = pd.read_excel(askopenfilename())
grouped_df = data_df.groupby('LOCATION')
folderpath = filedialog.askdirectory()
for data in grouped_df.LOCATION:
    grouped_df.get_group(data[0]).to_csv(folderpath + data[0] + ".csv", encoding='utf-8', mode='w+')
    filename = data[0]
    f = pd.read_csv(folderpath + filename + ".csv", sep=',')
    # print(f)
    keep_col = ['ID', 'NAME', 'DATA1', 'DATA4']
    new_f = f[keep_col]
    new_f.to_csv(folderpath + data[0] + ".csv", index=False)
Sample data
P.S. There will be data in the DATA3 and DATA4 columns, but I just didn't enter it here.
How the Script is giving the output:
Thanks in Advance!

It seems like the return value of filedialog.askdirectory() ends with the folder the user selected, without a trailing slash, i.e.:
\Desktop\XYZ\Test
The full path created by folderpath+data[0]+".csv", with an example value of "potato" for data[0], will therefore be
\Desktop\XYZ\Testpotato.csv
You need to at least append the \ manually:
for data in grouped_df.LOCATION:
    grouped_df.get_group(data[0]).to_csv(folderpath + "\\" + data[0] + ".csv", encoding='utf-8', mode='w+')
    filename = data[0]
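A more portable alternative (a minimal sketch on top of the code above, not part of the original answer) is to build the path with os.path.join, which inserts the correct separator for the platform:
import os

for data in grouped_df.LOCATION:
    # os.path.join adds the separator between the folder and the file name
    out_path = os.path.join(folderpath, data[0] + ".csv")
    grouped_df.get_group(data[0]).to_csv(out_path, encoding='utf-8', mode='w+')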

Related

Getting excel file from anywhere in computer with openpyxl/Python

I'm working on a project to automate some Excel processes, but so far I have only found a way to select Excel files if they're in the same folder as the Python file. How do I make it so that I can select an Excel file without it being in the same folder as the Python script? I know I can type the whole path, but I would like to only use the file name to select it ("sample.xlsx").
You can use the Python built-in tkinter filedialog.
A simple sample would be:
import tkinter as tk
from tkinter import filedialog
root = tk.Tk()
root.withdraw()
path2urExcel = filedialog.askopenfilename(title="Select the desired excel sheet ..")
Then proceed with your script.
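For example, the returned path can be passed straight to pandas (a small usage sketch; reading the sheet with pd.read_excel is an assumption about what you do next):
import pandas as pd

df = pd.read_excel(path2urExcel)  # read the workbook the user picked
print(df.head())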

Take a folder of PDF files and convert them to CSV and save them in another folder with the same name but ending with csv

I am trying to take a folder of existing PDFs, convert a copy of each one to CSV, and save it in a different location...
the hierarchy is as follows:
Users/.../Test/PDF
Users/.../Test/CSV
I then want to take all the CSV files, add the file name to the first column of each, and append them all together while deleting redundant blank lines: a line is kept if column B contains an integer, otherwise it is deleted.
Here is the code that I have so far:
import os
from pathlib import Path
import csv
import tabula
import shutil
statis = []
pdf_folder = Path("/Users/bensorensen/Documents/Test/PDF/")
csv_folder = Path("/Users/bensorensen/Documents/Test/CSV/")
pdf_files = pdf_folder.glob('*.pdf')
for pdf in pdf_files:
    if item.endswith('pdf'):
        tabula.convert_into(df, output, output_format="csv", stream=True, pages='all')
        shutil.move(pdf_folder, csv_folder)
Any help or additional eyes would be most appreciated.
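In case it helps, here is a minimal sketch of what the conversion loop could look like (assuming tabula-py's convert_into and the Test/PDF and Test/CSV layout above; a suggestion, not the asker's code):
import tabula
from pathlib import Path

pdf_folder = Path("/Users/bensorensen/Documents/Test/PDF")
csv_folder = Path("/Users/bensorensen/Documents/Test/CSV")
csv_folder.mkdir(parents=True, exist_ok=True)

for pdf in pdf_folder.glob('*.pdf'):
    # same file name, .csv extension, written into the CSV folder
    out = csv_folder / (pdf.stem + '.csv')
    tabula.convert_into(str(pdf), str(out), output_format="csv", stream=True, pages='all')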

Reading files from folder and appending it to xlsx file

I have a folder that has, say, a few hundred files and is growing every hour. I am trying to consolidate all the data into a single file for analysis. But the script I wrote is not very effective at processing these data, as it reads all the content in the folder and appends it to an xlsx file every time. The processing time is simply too long.
What I am seeking is to enhance and improve my script:
1) to only read and extract data from new files that have not been previously read;
2) to extract and append these data to update the xlsx file.
I just need some enlightenment to help me improve the script.
Part of my code is as follows:
import pandas as pd
import numpy as np
import os
import dask.dataframe as dd
import glob
import schedule
import time
import re
import datetime as dt
def job():
    # Select the path to download the files
    path = r'V:\DB\ABCD\BEFORE\8_INCHES'
    files = glob.glob(path + "/*.csv")
    df = None
    # Extracting of information from files
    for i, file in enumerate(files):
        if i == 0:
            df = np.transpose(pd.read_csv(file, delimiter="|", index_col=False))
            df['Path'] = file
            df['Machine No'] = re.findall("MC-11", str(df["Path"]))
            df['Process'] = re.findall("ABCD", str(df["Path"]))
            df['Before/After'] = re.findall("BEFORE", str(df["Path"]))
            df['Wafer Size'] = re.findall("8_INCHES", str(df["Path"]))
            df['Employee ID'] = df["Path"].str.extract(r'(?<!\d)(\d{6})(?!\d)', expand=False)
            df['Date'] = df["Path"].str.extract(r'(\d{4}_\d{2}_\d{2})', expand=False)
            df['Lot Number'] = df["Path"].str.extract(r'(\d{7}\D\d)', expand=False)
            df['Part Number'] = df["Path"].str.extract(r'([A-Z]{2,3}\d{3,4}[A-Z][A-Z]\d{2,4}[A-Z])', expand=False)
            df["Part Number"].fillna("ENGINNERING SAMPLE", inplace=True)
        else:
            tmp = np.transpose(pd.read_csv(file, delimiter="|", index_col=False))
            tmp['Path'] = file
            tmp['Machine No'] = tmp["Path"].str.extract(r'(\D{3}\d{2})', expand=False)
            tmp['Process'] = tmp["Path"].str.extract(r'(\w{8})', expand=False)
            tmp['Before/After'] = tmp["Path"].str.extract(r'([B][E][F][O][R][E])', expand=False)
            tmp['Wafer Size'] = tmp["Path"].str.extract(r'(\d\_\D{6})', expand=False)
            tmp['Employee ID'] = tmp["Path"].str.extract(r'(?<!\d)(\d{6})(?!\d)', expand=False)
            tmp['Date'] = tmp["Path"].str.extract(r'(\d{4}_\d{2}_\d{2})', expand=False)
            tmp['Lot Number'] = tmp["Path"].str.extract(r'(\d{7}\D\d)', expand=False)
            tmp['Part Number'] = tmp["Path"].str.extract(r'([A-Z]{2,3}\d{3,4}[A-Z][A-Z]\d{2,4}[A-Z])', expand=False)
            tmp["Part Number"].fillna("ENGINNERING SAMPLE", inplace=True)
            df = df.append(tmp)
    df.to_excel(r'C:\Users\hoosk\Documents\Python Scripts\hoosk\test26_feb_2020.xlsx')

# schedule to run every hour
schedule.every(1).hour.do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
In general terms you'll want to do the following:
1) Read in the xlsx file at the start of your script.
2) Extract a set of all the filenames already parsed (the Path attribute).
3) For each file you iterate over, check whether it is contained in the set of already processed files.
This assumes that existing files don't have their content updated. If that could happen, you may want to track a metric like the last change date (a checksum would be most reliable, but probably too expensive to compute).
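A minimal sketch of that idea (assuming the consolidated workbook keeps the Path column shown in the question; the exact paths are taken from the question and purely illustrative):
import glob
import pandas as pd

workbook = r'C:\Users\hoosk\Documents\Python Scripts\hoosk\test26_feb_2020.xlsx'
csv_dir = r'V:\DB\ABCD\BEFORE\8_INCHES'

existing = pd.read_excel(workbook)
processed = set(existing['Path'])  # one entry per file already parsed

new_files = [f for f in glob.glob(csv_dir + "/*.csv") if f not in processed]
# parse only new_files as before, then append the result to `existing`
# and rewrite the workbook once at the end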

Python not reading all the information from excel properly

I am trying to open a few Excel files inside a directory and then be able to do stuff with the data (like take the average of one row across three files).
My main goal right now is just to be able to display the information in each Excel file. I used the following code to do so. But when I display it, it prints out elements '0' through '29', then it skips 30-50, and then it prints out 51-80.
Here is a snip of my output on python:
import numpy as np
import scipy.io as sio
import scipy
import matplotlib.pyplot as plt
import os
import pandas as pd
from tkinter import filedialog
from tkinter import *
import matplotlib.image as image
import xlsxwriter
import openpyxl
import xlwt
import xlrd
#GUI
root=Tk()
root.withdraw() #closes tkinter window pop-up
path=filedialog.askdirectory(parent=root,title='Choose a file')
path=path+'/'
print('Folder Selected',path)
files=os.listdir(path)
length=len(files)
print('Files inside the folder',files)
Files=[]
for s in os.listdir(path):
    Files.append(pd.read_excel(path + s))
print(Files)
I'm quite sure your data is being read correctly. The dots between rows 29 and 51 show that there is more data there; pandas elides these rows so your console looks cleaner. If you want to see all the rows, you could use the solution from this answer:
with pd.option_context('display.max_rows', None, 'display.max_columns', 3):
    print(Files)
Here, None removes the display limit on rows and 3 sets the display limit on columns. Here you can find more info on options.
This is actually the standard way to print the data, notice the ellipses between 29 and 51:
29 7.8000 [cont.]
...
51 12.19999 [cont.]
You can still operate on every row. To get the number of rows in a dataframe, you can call
len(df.index)
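And as a sketch of operating on rows hidden behind the ellipsis (the row index 40 is an illustrative assumption; Files is the list of DataFrames from the question and the columns are assumed numeric):
import numpy as np

# element-wise average of the same row across the first three files;
# row 40 may be printed as "..." but is still present in each DataFrame
row_avg = np.mean([df.iloc[40].to_numpy() for df in Files[:3]], axis=0)
print(row_avg)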

Read multiple .xlsx files from a directory into separate Pandas data frames based on file name

I want to load multiple xlsx files with varying structures from a directory and assign these their own data frame based on the file name. I have 30+ files with differing structures but for brevity please consider the following:
3 excel files [wild_animals.xlsx, farm_animals.xlsx, domestic_animals.xlsx]
I want to assign each its own data frame, so if the file name contains 'wild' it is assigned to wild_df, if 'farm' then farm_df, and if 'domestic' then dom_df. This is just the first step in a process, as the actual files contain a lot of 'noise' that needs to be cleaned depending on file type etc.; the file names will also change on a weekly basis, with only a few key markers staying the same.
My assumption is that the glob module is the best way to begin, but in terms of taking very specific parts of the file name and using them to assign to a specific df I become a bit lost, so any help is appreciated.
I asked a similar question a while back, but it was part of a wider question, most of which I have now solved.
I would parse them into a dictionary of DataFrames:
import os
import glob
import pandas as pd
files = glob.glob('/path/to/*.xlsx')
dfs = {}
for f in files:
    dfs[os.path.splitext(os.path.basename(f))[0]] = pd.read_excel(f)
then you can access them as normal dictionary elements:
dfs['wild_animals']
dfs['domestic_animals']
etc.
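If you really want the keyword-based names from the question (wild_df and so on), here is a small sketch on top of that dictionary (the keyword matching is an assumption about the file names):
# pick the first DataFrame whose file name contains the keyword
wild_df = next(df for name, df in dfs.items() if 'wild' in name)
farm_df = next(df for name, df in dfs.items() if 'farm' in name)
dom_df = next(df for name, df in dfs.items() if 'domestic' in name)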
You need to get all the xlsx files; then, using a dict comprehension, you can access any element:
import pandas as pd
import os
import glob
path = 'Your_path'
extension = 'xlsx'
os.chdir(path)
result = [i for i in glob.glob('*.{}'.format(extension))]
dfs = {elm: pd.ExcelFile(elm) for elm in result}
For completeness, I wanted to show the solution I ended up using, very close to Khelili's suggestion with a few tweaks to suit my particular code, including not creating a DataFrame at this stage:
import os
import pandas as pd
import openpyxl as excel
import glob
#setting up path
path = 'data_inputs'
extension = 'xlsx'
os.chdir(path)
files = [i for i in glob.glob('*.{}'.format(extension))]
#Grouping files - brings multiple files of same type together in a list
wild_groups = ([s for s in files if "wild" in s])
domestic_groups = ([s for s in files if "domestic" in s])
#Sets up a dictionary associated with the file groupings to be called in another module
file_names = {"WILD":wild_groups, "DOMESTIC":domestic_groups}
...
