Transfer data from excel worksheet (openpyxl) to database table (dbf) - python

I have a simple problem: read an Excel worksheet, treat every row (about 83 columns) as a unique database record, build a local record from it, and finally append and write it to a DBF file.
I can extract all the values from Excel and add them to a list, but the list is not in the correct form and I don't know how to prepare/convert it into a database record. I am using openpyxl, dbf and Python 3.7.
At the moment I am only testing and trying to prepare the data for row 3 (hence min_row = max_row = 3).
I understand that the data should be in the format
(('', '', '', ... 83 entries),
 ('', '', '', ... 83 entries)
)
But I do not know how to convert the list data into a record,
or, alternatively, how to read the Excel data directly into a DBF-appendable format.
tbl_tst.open(mode=dbf.READ_WRITE)  # all fields are character strings
for everyrow in ws_IntMstDBF.iter_rows(min_row=3, max_row=3, max_col=ws_IntMstDBF.max_column - 1):
    datum = []  # set([83]), will defining datum as () help solve the problem?
    for idx, cells in enumerate(everyrow):
        if cells.value is None:  # for None entries, enter empty string
            datum.append("")
            continue
        datum.append(cells.value)  # else enter cell values
    tbl_tst.append(datum)  # append that record to table !!! "list is not record" error here
tbl_tst.close()
The error complains that a list is being used to append to the table, when it should be a record etc. Please advise how I can convert Excel rows into appendable DBF table data.
raise TypeError("data to append must be a tuple, dict, record, or template; not a %r" % type(data))
TypeError: data to append must be a tuple, dict, record, or template; not a <class 'list'>

Change
tbl_tst.append(datum)
to
tbl_tst.append(tuple(datum))
and that will get rid of that error. As long as all your cell data has the appropriate type then the append should work.
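For context, here is a minimal sketch of the loop with that one-line change applied. It reuses the table and worksheet objects from the question (tbl_tst, ws_IntMstDBF) and assumes every field in the DBF table is a character field, so values are cast to str:

import dbf

tbl_tst.open(mode=dbf.READ_WRITE)
for everyrow in ws_IntMstDBF.iter_rows(min_row=3, max_row=3,
                                       max_col=ws_IntMstDBF.max_column - 1):
    # build one record's worth of values, substituting "" for empty cells
    # and casting to str because all fields here are character fields
    datum = ["" if cell.value is None else str(cell.value) for cell in everyrow]
    tbl_tst.append(tuple(datum))  # dbf accepts a tuple, dict, record, or template
tbl_tst.close()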

Thank you for the responses. I went off on a bit of a tangent since last night while trying different solutions.
One solution that worked for me is as follows:
I made sure that the worksheet data I am using is all string/text, converting any null entries to string type containing a space character. The following code does this task:
# housekeeping
for eachrow in ws_IntMstDBF.iter_rows(min_row=2, max_row=ws_IntMstDBF.max_row, max_col=ws_IntMstDBF.max_column):
    for idx, cells in enumerate(eachrow):
        if cells.value is None:  # change every null cell to string type and put 0x20 (space)
            cells.data_type = 's'
            cells.value = " "
After writing the worksheet, I reopened it as a pandas dataframe and verified that the contents were all string type and that no "nan" values remained in the dataframe.
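A quick way to run that check (a minimal sketch; xlspath is the same path resolved in the listing below):

import pandas as pd

df_check = pd.read_excel(xlspath, dtype=str)   # force every column to string
print(df_check.dtypes)                         # all columns should show as 'object'
print(df_check.isna().any().any())             # should print False if no NaN remains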
Then I used the df2dbf function from Dani Arribas-Bel, modified it to suit the data I am working with, and converted to dbf.
The code which imports the dataframe and converts it to dbf format is as follows:
abspath = Path(__file__).resolve()  # resolve relative path to absolute
rootpath = abspath.parents[3]       # root (my source file is 3 sub-directories deep)
xlspath = rootpath / 'sub-dir1' / 'sub-dir2' / 'sub-dir3' / 'test.xlsx'
# the code above only resolves the file location, ignore
pd_Mst_df = pd.read_excel(xlspath)
# print(pd_Mst_df)  # for debug
print("... Writing Master DBF file ")
df2dbf(pd_Mst_df, dbfpath)  # dbfpath is defined in the same way as xlspath
The function df2dbf uses pysal to write the dataframe in dbf format.
I made some modifications to the code to set the row length and character types, as follows:
import pandas as pd
import pysal as ps
import numpy as np

# excerpt from function df2dbf
else:
    type2spec = {int: ('N', 20, 0),
                 np.int64: ('N', 20, 0),
                 float: ('N', 36, 15),
                 np.float64: ('N', 36, 15),
                 str: ('C', 200, 0)
                 }
    # types = [type(df[i].iloc[0]) for i in df.columns]
    types = [type('C') for i in range(0, len(df.columns))]  # force every column to the 'C' (string) spec
    specs = [type2spec[t] for t in types]
    db = ps.open(dbf_path, 'w')
# code continues in function df2dbf
The pandas dataframe didn't require further modifications, as all source data was formatted correctly before being committed to the Excel file.
I will provide the link to pysal and df2dbf as soon as I find it on stackoverflow.

Check out the Python pandas library...
To read the data from Excel into a pandas dataframe, you could use pandas.read_excel.
Once the data is read into a pandas dataframe, you can manipulate it and afterwards write it to a database using pandas.DataFrame.to_sql.
See also this explanation for dealing with database IO.
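A minimal sketch of that round trip, assuming a hypothetical 'test.xlsx' file and a local SQLite database (any database supported by SQLAlchemy would work the same way, and the table name 'master_records' is a placeholder):

import pandas as pd
import sqlite3

# read the first worksheet into a dataframe
df = pd.read_excel('test.xlsx', sheet_name=0)

# write the dataframe to a database table, replacing it if it already exists
with sqlite3.connect('test.db') as con:
    df.to_sql('master_records', con, if_exists='replace', index=False)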

Related

Comparing and updating CSV files using lists

I'm writing something that will take two CSVs: #1 is a list of emails with the number received for each; #2 is a catalog of every email address on record, with a count of received emails per reporting period and the date annotated at the top of each column.
import csv
from datetime import datetime

datestring = datetime.strftime(datetime.now(), '%m-%d')
storedEmails = []
newEmails = []
sortedList = []
holderList = []

with open('working.csv', 'r') as newLines, open('archive.csv', 'r') as oldLines:  # readers to make lists
    f1 = csv.reader(newLines, delimiter=',')
    f2 = csv.reader(oldLines, delimiter=',')
    print('Processing new data...')
    for row in f2:
        storedEmails.append(list(row))  # add archived data to a list
    storedEmails[0].append(datestring)  # append header row with new date column
    for col in f1:
        if col[1] == 'email' and col[2] == 'To Address':  # new list containing new email data
            newEmails.append(list(col))
    counter = len(newEmails)

n = len(storedEmails[0])  # using header row len to fill zeros if no email received
print(storedEmails[0])
print(n)
print('Updating email lists and tallies, this could take a minute...')

with open('archive.csv', 'w', newline='') as toWrite:  # writer to overwrite old csv
    writer = csv.writer(toWrite, delimiter=',')
    for i in newEmails:
        del i[:3]  # strip useless identifiers from data
        if int(i[1]) > 30:  # only keep emails with sufficient traffic
            sortedList.append(i)  # add these emails to new sorted list
    for i in storedEmails:
        for entry in sortedList:  # compare stored emails with the new emails, on match append row with new # of emails
            if i[0] == entry[0]:
                i.append(entry[1])
                counter -= 1
            else:
                holderList.append(entry)  # if no match, it is a new email that meets criteria to land itself on the list
                break  # break inner loop after iteration of outer email, to move to next email and avoid multiple entries
    storedEmails = storedEmails + holderList  # combine lists for archived csv rewrite
    for i in storedEmails:
        if len(i) < n:
            i.append('0')  # if email on list but didn't have any activity this period, append with 0 to keep records intact
        writer.writerow(i)

print('SortedList', sortedList)
print(len(sortedList))
print('storedEmails', storedEmails)
print(len(storedEmails))
print('holderList', holderList)
print(len(holderList))
print('There are', counter, 'new emails being added to the list.')
print('All done!')
The CSVs will look similar to this.
working.csv:
1,asdf#email.com,'to address',31
2,fsda#email.com,'to address',19
3,zxcv#email.com,'to address',117
4,qwer#gmail.com,'to address',92
5,uiop#fmail.com,'to address',11
archive.csv:
date,01-sep
asdf#email.com,154
fsda#email.com,128
qwer#gmail.com,77
ffff#xmail.com,63
What I want after processing is:
date,01-sep,27-sep
asdf#email.com,154,31
fsda#email.com,128,19
qwer#gmail.com,77,92
ffff#xmail.com,63,0
zxcv#email.com,0,117
I'm not sure where I've gone wrong, but it keeps producing duplicate entries. Some of the functionality is there, but I've been at it for too long and I'm getting tunnel vision trying to figure out what I have done wrong with my loops.
I know my zero-filler section at the end is wrong as well, as it will append onto the end of a newly created record instead of populating zeros up to its first appearance.
I'm sure there are far more efficient ways to do this; I'm new to programming, so it's probably overly complicated and messy. Initially I tried to compare CSV to CSV and realized that wasn't possible since you can't read and write at the same time, so I switched to using lists, which I also know won't work forever due to memory limitations when the list gets big.
-EDIT-
Using Trenton's pandas solution:
I ran a script on working.csv so it instead produces the following:
asdf#email.com,1000
bsdf#gmail.com,500
xyz#fmail.com,9999
I have modified your solution to reflect this change:
import pandas as pd
from datetime import datetime
import csv

# get the date string
datestring = datetime.strftime(datetime.now(), '%d-%b')

# filter original list to grab only emails of interest
with open('working.csv', 'r') as fr, open('writer.csv', 'w', newline='') as fw:
    reader = csv.reader(fr, delimiter=',')
    writer = csv.writer(fw, delimiter=',')
    for row in reader:
        if row[1] == 'Email' and row[2] == 'To Address':
            writer.writerow([row[3], row[4]])

# read archive
arch = pd.read_csv('archive.csv')
# rename columns
arch.rename(columns={'email': 'date'}, inplace=True)
# read working, but only the two columns that are needed
working = pd.read_csv('writer.csv', header=None, usecols=[0, 1])  # I assume usecols isn't necessary anymore, but I'm not sure
# rename columns
working.rename(columns={0: 'email', 1: datestring}, inplace=True)
# only emails greater than 30 or already in arch
working = working[(working[datestring] > 30) | (working.email.isin(arch.email))]
# merge
arch_updated = pd.merge(arch, working, on='email', how='outer').fillna(0)
# save to csv
arch_updated.to_csv('archive.csv', index=False)
I apparently still have no idea how this works, because I'm now getting:
Traceback (most recent call last):
  File "---/agsdga.py", line 29, in <module>
    working = working[(working[datestring] > 30) | (working.email.isin(arch.email))]
  File "---\Python\Python38-32\lib\site-packages\pandas\core\generic.py", line 5130, in __getattr__
    return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'email'
Process finished with exit code 1
-UPDATE-
It is working now as:
import pandas as pd
from datetime import datetime
import csv

# get the date string
datestring = datetime.strftime(datetime.now(), '%d-%b')

with open('working.csv', 'r') as fr, open('writer.csv', 'w', newline='') as fw:
    reader = csv.reader(fr, delimiter=',')
    writer = csv.writer(fw, delimiter=',')
    for row in reader:
        if row[1] == 'Email' and row[2] == 'To Address':
            writer.writerow([row[3], row[4]])

# read archive
arch = pd.read_csv('archive.csv')
# rename columns
arch.rename(columns={'date': 'email'}, inplace=True)
# read working, but only the two columns that are needed
working = pd.read_csv('writer.csv', header=None, usecols=[0, 1])
# rename columns
working.rename(columns={0: 'email', 1: datestring}, inplace=True)
# only emails greater than 30 or already in arch
working = working[(working[datestring] > 30) | (working.email.isin(arch.email))]
# merge
arch_updated = pd.merge(arch, working, on='email', how='outer').fillna(0)
# save to csv
arch_updated.to_csv('archive.csv', index=False)
The errors above were caused because I changed
arch.rename(columns={'date': 'email'}, inplace=True)
to
arch.rename(columns={'email': 'date'}, inplace=True)
I ran into further complications because I stripped the header row from the test archive, since I didn't think the header mattered; even with header=None I still got issues. I'm still not clear why the header is so important when we are assigning our own values to the columns for the purposes of the dataframe, but it's working now. Thanks for all the help!
I'd approach it like this:
- Load the data with pandas.read_csv.
- .rename some columns:
  - Renaming the columns in working depends on the column index, since working.csv has no column headers.
  - When the working dataframe is created, look at the dataframe to verify the correct columns have been loaded and that the correct column index is being used for renaming.
  - The date column of arch should really be email, because headers identify what's below them, not the other column headers.
  - Once the column name has been changed in archive.csv, the rename won't be required any longer.
- pandas.merge on the email column:
  - Since both dataframes have a column named email, the merged result will only have one email column.
  - If the merge occurs on two different column names, the result will have two columns containing email addresses.
  - See pandas: Merge, join, concatenate and compare.

As long as the columns in the files are consistent, this should work without modification:
import pandas as pd
from datetime import datetime
# get the date string
datestring = datetime.strftime(datetime.now(), '%d-%b')
# read archive
arch = pd.read_csv('archive.csv')
# rename columns
arch.rename(columns={'date': 'email'}, inplace=True)
# read working, but only the two columns that are needed
working = pd.read_csv('working.csv', header=None, usecols=[1, 3])
# rename columns
working.rename(columns={1: 'email', 3: datestring}, inplace=True)
# only emails greater than 30 or already in arch
working = working[(working[datestring] > 30) | (working.email.isin(arch.email))]
# merge
arch_updated = pd.merge(arch, working, on='email', how='outer').fillna(0)
# save to csv
arch_updated.to_csv('archive.csv', index=False)
# display(arch_updated)
email 01-sep 27-Aug
asdf#email.com 154.0 31.0
fsda#email.com 128.0 19.0
qwer#gmail.com 77.0 92.0
ffff#xmail.com 63.0 0.0
zxcv#email.com 0.0 117.0
So, the problem is that you have two sets of data. Both store their data with a "key" entry (the email address) plus an additional piece of data that you want condensed down into one place. Recognising that both sets of data share the same kind of "key" simplifies this greatly.
Imagine each key as being the name of a bucket. Each bucket needs two pieces of info, one piece from one csv and the other piece from the other csv.
Now, I must take a small detour to explain a dictionary in python. Here is a definition stolen from here
A dictionary is a collection which is unordered, changeable and indexed.
A collection is a container, like a list, that holds data. Unordered and indexed means that the dictionary is not accessed like a list, where data is reachable by its index position; instead, the dictionary is accessed using keys, which can be anything like a string or a number (technically the key must be hashable, but that's too in-depth here). And finally, changeable means that the dictionary can have its stored data changed (once again, oversimplified).
Example:
dictionary = dict()
key = "Something like a string or a number!"
dictionary[key] = "any kind of value can be stored here! Even lists and other dictionaries!"
print(dictionary[key]) # Would print the above string
Here is the structure that I suggest you use instead of most of your lists:
dictionary[email] = [item1, item2]
This way, you can avoid using multiple lists and massively simplify your code. If you are still iffy on the usage of dictionaries, there are a lot of articles and videos on them. Good luck!
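To make that concrete, here is a minimal sketch of the bucket idea applied to the two files from the question. It assumes the simplified working.csv format from the edit (one email,count pair per line) and the archive.csv layout shown above:

import csv
from datetime import datetime

datestring = datetime.strftime(datetime.now(), '%d-%b')
buckets = {}  # email -> list of counts, one entry per reporting period

# load the archive; row[0] is the email, row[1:] are the existing counts
with open('archive.csv', 'r') as f:
    rows = list(csv.reader(f))
header = rows[0] + [datestring]   # e.g. ['date', '01-sep', '27-Sep']
periods = len(rows[0]) - 1        # how many periods the archive already has
for row in rows[1:]:
    buckets[row[0]] = row[1:]

# fold in the new counts, keeping emails with more than 30 messages
# or emails that are already being tracked
with open('working.csv', 'r') as f:
    for row in csv.reader(f):
        if not row:
            continue
        email, count = row[0], row[1]
        if int(count) > 30 or email in buckets:
            buckets.setdefault(email, ['0'] * periods).append(count)

# rewrite the archive; any tracked email with no activity this period gets a 0
with open('archive.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(header)
    for email, counts in buckets.items():
        counts += ['0'] * (len(header) - 1 - len(counts))
        writer.writerow([email] + counts)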

Reading rows in CSV file and appending a list creates a list of lists for each value

I am copying list output data from a DataCamp course so I can recreate the exercise in Visual Studio Code or Jupyter Notebook. From the DataCamp Python interactive window, I type the name of the list, highlight the output and paste it into a new file in VS Code. I use find-and-replace to delete all the commas and spaces, leaving 142 numeric values, one per line, which I save as life_exp.csv. It looks like this:
43.828
76.423
72.301
42.731
75.32
81.235
79.829
75.635
64.062
79.441
When I read the file using either pandas read_csv (with values.tolist()) or csv.reader (with a for loop appending to an existing, empty list), both approaches give me a list of lists, which then does not display the data correctly when I try to create matplotlib histograms.
I also used Notepad to save the data as a .csv, and both ways of saving the data produce the same issue.
import matplotlib.pyplot as plt
import csv

life_exp = []
with open('C:\data\life_exp.csv', 'rt') as life_expcsv:
    exp_read = csv.reader(life_expcsv, delimiter='\n')
    for row in exp_read:
        life_exp.append(row)
And
import pandas as pd
life_exp_df = pd.read_csv('c:\\data\\life_exp.csv', header = None)
life_exp = life_exp_df.values.tolist()
When you print life_exp after importing using csv, you get:
[['43.828'],
['76.423'],
['72.301'],
['42.731'],
['75.32'],
['81.235'],
['79.829'],
['75.635'],
['64.062'],
['79.441'],
['56.728'],
….
And when you print life_exp after importing using pandas read_csv, you get the same thing, but at least now it's not a string:
[[43.828],
[76.423],
[72.301],
[42.731],
[75.32],
[81.235],
[79.829],
[75.635],
[64.062],
[79.441],
[56.728],
…
And when you call plt.hist(life_exp) on either version of the list, each value ends up as its own bin of 1.
I just want to read each value in the csv file and put each value into a simple Python list.
I have spent days scouring Stack Overflow thinking someone must have done this, but I can't seem to find an answer. I am very new to Python, so your help is greatly appreciated.
Try:
import pandas as pd
life_exp_df = pd.read_csv('c:\\data\\life_exp.csv', header = None)
# Select the values of your first column as a list
life_exp = life_exp_df.iloc[:, 0].tolist()
instead of:
life_exp = life_exp_df.values.tolist()
With csv.reader, each line is parsed into a list using the delimiter you provide. In this case you provide '\n' as the delimiter, but it will still take that single item and return it as a list.
When you append each row, you are essentially appending that list to another list. The simplest workaround is to index into row to extract the value:
with open('C:\data\life_exp.csv', 'rt') as life_expcsv:
    exp_read = csv.reader(life_expcsv, delimiter='\n')
    for row in exp_read:
        life_exp.append(row[0])
However, if your data is not guaranteed to be formatted the way you have provided, you will need to handle that a bit differently:
with open('C:\data\life_exp.csv', 'rt') as life_expcsv:
    exp_read = csv.reader(life_expcsv, delimiter='\n')
    for row in exp_read:
        for number in row:
            life_exp.append(number)
A bit cleaner with a list comprehension:
with open('C:\data\life_exp.csv', 'rt') as life_expcsv:
    exp_read = csv.reader(life_expcsv, delimiter='\n')
    life_exp = [number for row in exp_read for number in row]

Inserting xls data into Oracle DB using cx_Oracle executemany

I am trying to insert an xls file into an Oracle table using cx_Oracle. Below is how I am trying to achieve this:
import re
import xlrd
from xlrd import open_workbook
from datetime import datetime

wb = open_workbook('test.xls')
values = []
sheets = wb.sheet_names()
xl_sheet = wb.sheet_by_name(sheets[0])  # the sheet to load
sql_str = preparesql('MATTERS')  # function I have created which returns the insert statement used to load the table
for row in range(1, xl_sheet.nrows):
    col_names = xl_sheet.row(0)
    col_value = []
    for name, col in zip(col_names, range(xl_sheet.ncols)):
        searchObj = re.search(r"Open Date|Matter Status Date", name.value, re.M | re.I)
        if searchObj:
            if xl_sheet.cell(row, col).value == '':
                value = ''
            else:
                value = datetime(*xlrd.xldate_as_tuple(xl_sheet.cell(row, col).value, wb.datemode))
                value = value.strftime('%d-%b-%Y ')
        else:
            value = xl_sheet.cell(row, col).value
        col_value.append(value)
    values.append(col_value)
cur.executemany(sql_str, values, batcherrors=True)  # cur is an existing cx_Oracle cursor
But when I tested it against multiple xls files, for some of them it threw a TypeError. I can't share the data due to restrictions from the client. I feel the issue is related to the dtypes of the columns in Excel compared to the DB. Is there any way I can match the dtypes of the values list above to the datatypes of the columns in the DB, or is there another way to get the insert done? I tried using dataframe.to_sql but it takes a lot of time. I am able to insert the same data by looping through the rows in the values list.
I suggest you import the data into a pandas dataframe; then it becomes very easy to work with the data types. You can change the data type of a whole column and then insert easily.
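A minimal sketch of that approach, still inserting with cx_Oracle's executemany. The column names, the date format and the preparesql('MATTERS') insert statement are placeholders carried over from the question, not a tested implementation:

import pandas as pd

# read the workbook; every column arrives with a pandas dtype you can inspect and change
df = pd.read_excel('test.xls', sheet_name=0)
print(df.dtypes)

# coerce the date columns and format them the way the Oracle insert expects
for date_col in ['Open Date', 'Matter Status Date']:
    df[date_col] = pd.to_datetime(df[date_col], errors='coerce').dt.strftime('%d-%b-%Y')

# force the remaining values to plain strings so every row has a uniform shape
df = df.fillna('').astype(str)

# hand the rows to cx_Oracle as a list of tuples
cur.executemany(sql_str, list(df.itertuples(index=False, name=None)), batcherrors=True)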

Finding Excel cell reference using Python

Here is the Excel file in question (screenshot not reproduced): years run across row 1 and line items such as 'COGS' run down column A.
Context: I am writing a program which can pull values from a PDF and put them in the appropriate cell in an Excel file.
Question: I want to write a function which takes a column value (e.g. 2014) and a row value (e.g. 'COGS') as arguments and returns the cell reference where those two intersect (e.g. 'C3' for 2014 COGS).
def find_correct_cell(year=2014, item='COGS'):
    # do something similar to what the =MATCH function in Excel does
    return cell_reference  # returns 'C3'
I have already tried using openpyxl like this to change the values of some random empty cells where I can store these values:
col_num = '=match(2014, A1:E1)'
row_num = '=match("COGS", A1:A5)'
But I want to grab those values without having to arbitrarily write to those random empty cells. Plus, even with this method, when I read those cells (F5 and F6) it reads the formulae in those cells and not the face value of 3.
Any help is appreciated, thanks.
Consider a translated VBA solution as the Match function can adequately handle your needs. Python can access the Excel VBA Object Library using a COM interface with the win32com module. Please note this solution assumes you are using Excel for PC. Below includes the counterpart VBA function.
VBA Function (native interface)
If the function below is placed in a standard Excel module, it can be called from a spreadsheet cell as =FindCell(..., ###).
' MATCHES ROW AND COL INPUT FOR CELL ADDRESS OUTPUT
Function FindCell(item As String, year As Integer) As String
    FindCell = Cells(Application.Match(item, Range("A1:A5"), 0), _
                     Application.Match(year, Range("A1:E1"), 0)).Address
End Function

Debug.Print FindCell("COGS", 2014)
' $C$3
Python Script (foreign interface, requiring all objects to be declared)
Try/Except/Finally is used to properly close the Excel process regardless of whether the script succeeds or fails.
import win32com.client

# MATCHES ROW AND COL INPUT FOR CELL ADDRESS OUTPUT
def FindCell(item, year):
    return(xlWks.Cells(xlApp.WorksheetFunction.Match(item, xlWks.Range("A1:A5"), 0),
                       xlApp.WorksheetFunction.Match(year, xlWks.Range("A1:E1"), 0)).Address)

try:
    xlApp = win32com.client.Dispatch("Excel.Application")
    xlWbk = xlApp.Workbooks.Open('C:/Path/To/Workbook.xlsx')
    xlWks = xlWbk.Worksheets("SHEETNAME")
    print(FindCell("COGS", 2014))
    # $C$3
except Exception as e:
    print(e)
finally:
    xlWbk.Close(False)
    xlApp.Quit()
    xlWks = None
    xlWbk = None
    xlApp = None
There are a surprising number of details you need to get right to manipulate Excel files this way with openpyxl. First, it's worth knowing that the xlsx file contains two representations of each cell - the formula, and the current value of the formula. openpyxl can return either, and if you want values you should specify data_only=True when you open the file. Also, openpyxl is not able to calculate a new value when you change the formula for a cell - only Excel itself can do that. So inserting a MATCH() worksheet function won't solve your problem.
The code below does what you want, mostly in Python. It uses the "A1" reference style, and does some calculations to turn column numbers into column letters. This won't hold up well if you go past column Z. In that case, you may want to switch to numbered references to rows and columns. There's some more info on that here and here. But hopefully this will get you on your way.
Note: This code assumes you are reading a workbook called 'test.xlsx', and that 'COGS' is in a list of items in 'Sheet1!A2:A5' and 2014 is in a list of years in 'Sheet1!B1:E1'.
import openpyxl

def get_xlsx_region(xlsx_file, sheet, region):
    """Return a rectangular region from the specified file.

    The data are returned as a list of rows, where each row contains a list
    of cell values."""
    # 'data_only=True' tells openpyxl to return values instead of formulas
    # 'read_only=True' makes openpyxl much faster (fast enough that it
    # doesn't hurt to open the file once for each region).
    wb = openpyxl.load_workbook(xlsx_file, data_only=True, read_only=True)
    reg = wb[sheet][region]
    return [[cell.value for cell in row] for row in reg]

# cache the lists of years and items
# get the first (only) row of the 'B1:E1' region
years = get_xlsx_region('test.xlsx', 'Sheet1', 'B1:E1')[0]
# get the first (only) column of the 'A2:A5' region
items = [r[0] for r in get_xlsx_region('test.xlsx', 'Sheet1', 'A2:A5')]

def find_correct_cell(year, item):
    # find the indexes for 'COGS' and 2014
    year_col = chr(ord('B') + years.index(year))  # only works in the A:Z range
    item_row = 2 + items.index(item)
    cell_reference = year_col + str(item_row)
    return cell_reference

print(find_correct_cell(year=2014, item='COGS'))
# C3
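If the sheet ever grows past column Z, one option (an addition of mine, not part of the original answer) is to let openpyxl do the letter arithmetic with its openpyxl.utils helpers instead of chr()/ord():

from openpyxl.utils import get_column_letter, column_index_from_string

# column 2 is 'B', so an offset into the years list maps cleanly to a letter,
# even beyond 'Z' (column 28 -> 'AB', for example)
def find_correct_cell(year, item):
    year_col = get_column_letter(column_index_from_string('B') + years.index(year))
    item_row = 2 + items.index(item)
    return year_col + str(item_row)

print(find_correct_cell(year=2014, item='COGS'))  # C3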

pandas HDF select does not recognise column name

I'm trying to process a large (2 GB) csv file on a machine with only 4 GB of RAM (don't ask) to produce a different, formatted csv containing a subset of the data that needs some processing. I'm reading the file and creating an HDFStore that I query later for the data I require for output. Everything works except that I can't retrieve data from the store using Term; the error message says that PLOT is not a column name. The individual variables look fine and the store is what I expect; I just can't see where the error is. (NB pandas 0.14 and numpy 1.9.0.) I'm very new to this, so apologies for the clunky code.
# wibble wobble -*- coding: utf-8 -*-
# short version
def filesport():
    import pandas as pd
    import numpy as np
    from pandas.io.pytables import Term

    Location = r"CL_short.csv"
    store = pd.HDFStore('blarg.h5')
    maxlines = sum(1 for line in open(Location))
    print maxlines
    # set chunk small for test file
    chunky = 4
    plotty = pd.DataFrame(columns=['PLOT'])
    dfdum = pd.DataFrame(columns=['PLOT', 'mDate', 'D100'])

    # read file in chunks to avoid RAM blowing up
    bucket = pd.read_csv(Location, iterator=True, chunksize=chunky, usecols=['PLOT', 'mDate', 'D100'])
    for chunk in bucket:
        store.append('wibble', chunk, format='table', data_columns=['PLOT', 'mDate', 'D100'], ignore_index=True)

    # retrieve plot numbers and select unique items
    plotty = store.select('wibble', "columns = ['PLOT']")
    plotty.drop_duplicates(inplace=True)

    # iterate through unique plots to retrieve data and put in dataframe for output
    for index, row in plotty.iterrows():
        dfdum = store.select('wibble', [Term('PLOT', '=', plotty.iloc[index]['PLOT'])])
        # process dfdum for output to new csv

    print("successful completion")

filesport()
Here is the final listing, for those who fight through the tumbleweed to reach this point and are similarly bemused by processing large .csv files and the various methods of retrieving/processing the data. The biggest problem was getting the syntax of the pytables Term right. Despite several examples indicating that it was possible to use 'A > 20' etc., this never worked for me. I set up a string condition containing the Term query and this worked (it is in the documentation, to be fair).
I also found it easier to query the HDF store directly for the unique items as a list, which could then be sorted and iterated through to retrieve the data plot by plot. Note that I wanted the final csv file to have the plot and then all the D100 data in date order, hence the pivot at the end.
Reading the csv file in chunks meant that each plot retrieved from the store had a header, and this got written to the final csv, which messed things up. I'm sure there's a more elegant way of writing only one header than the one I've shown here.
It works, taking about 2 hours to process the data and produce the final csv file (initial file 2 GB, 30+ million lines, data for 100,000+ unique plots; the machine has 4 GB of RAM but runs 32-bit, which means that only 2.5 GB of RAM was available).
Good luck if you have a similar problem, and I hope you find this useful.
# wibble wobble -*- coding: utf-8 -*-
def filesport():
    import pandas as pd
    import numpy as np
    from pandas.io.pytables import Term

    print (pd.__version__)
    print (np.__version__)

    Location = r"conliq_med.csv"
    store = pd.HDFStore('blarg.h5')
    maxlines = sum(1 for line in open(Location))
    print maxlines
    chunky = 100000

    # read file in chunks to avoid RAM blowing up, select only the needed columns
    bucket = pd.read_csv(Location, iterator=True, chunksize=chunky, usecols=['PLOT', 'mDate', 'D100'])
    for chunk in bucket:
        store.append('wibble', chunk, format='table', data_columns=['PLOT', 'mDate', 'D100'], ignore_index=True)

    # retrieve unique plots and sort
    plotty = store.select_column('wibble', 'PLOT').unique()
    plotty.sort()

    # set flag for writing the file header
    i = 0
    # iterate through unique plots to retrieve data and put in dataframe for output
    for item in plotty:
        condition = 'PLOT =' + str(item)
        dfdum = store.select('wibble', [Term(condition)])
        dfdum["mDate"] = pd.to_datetime(dfdum["mDate"], dayfirst=True)
        dfdum.sort(columns=["PLOT", "mDate"], inplace=True)
        dfdum["mDate"] = dfdum["mDate"].map(lambda x: x.strftime("%Y - %m"))
        dfdum = dfdum.pivot("PLOT", "mDate", "D100")
        # only print one header to the file
        if i == 0:
            dfdum.to_csv("CL_OP.csv", mode='a')
            i = 1
        else:
            dfdum.to_csv("CL_OP.csv", mode='a', header=False)

    print("successful completion")

filesport()
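As a side note (my addition, not part of the original answer): in current pandas the pandas.io.pytables.Term import is no longer needed, because HDFStore.select accepts the condition directly as a where string, so the per-plot query would look roughly like this:

# select one plot's rows with a plain where string
# (quote the value inside the string if PLOT is stored as text)
dfdum = store.select('wibble', where='PLOT == {}'.format(item))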
