I am trying to read a spreadsheet into Python pandas without it parsing the dates.
I have tried many of the methods mentioned in a previous posting, but none of them work for me.
This is the code (spreadsheet is just the name of the file):
import pandas as pd

# Read the header row first so that every column can be forced to str
columns = pd.ExcelFile(spreadsheet).parse(tab).columns
converters = {column: str for column in columns}
df1 = pd.read_excel(spreadsheet, sheet_name=tab, parse_dates=False, dtype=converters)
I realize there is a previous posting on this; I've even included its suggested fixes above, but I am still getting parsed dates in the text files that I am creating out of the spreadsheet.
[ 10-07-2022 - For anyone stopping by with the same issue: after much searching, I have yet to find a way that isn't convoluted and complicated to accurately pull mixed-type data from Excel using pandas/Python. My solution is to convert the files using unoconv on the command line, which preserves the formatting, then read into pandas from there. ]
I have to concatenate thousands of individual Excel workbooks, each with a single sheet, into one master sheet. I use a for loop to read them into a DataFrame, then concatenate that DataFrame to a master DataFrame. There is one column in each that could represent currency, percentages, or just contain notes. Sometimes it has been filled out with explicit indicators in the cell, e.g. '$'; other times, someone has used cell formatting to indicate currency while leaving just a decimal in the cell. I've been using a formatting routine to catch some of this but have run into some edge cases.
Consider a case like the following:
In the actual spreadsheet, you see: $0.96
When read_excel siphons this in, it is represented as 0.96. Because of the mixed-type nature of the column, there is no sure way to know whether this means 96% or $0.96.
Is there a way to read Excel files into a DataFrame for analysis and record what is visually represented in the cell, regardless of whether cell formatting was used?
I've tried using dtype="str", dtype="object" and have tried using both the default and openpyxl engines.
UPDATE
Taking the comments below into consideration, I'm rewriting with openpyxl.
import pandas as pd
import openpyxl
from pathlib import Path

def excel_concat(df_source):
    df_master = pd.DataFrame()
    for index, row in df_source.iterrows():
        excel_file = Path(row['Test Path']) / Path(row['Original Filename'])
        wb = openpyxl.load_workbook(filename=excel_file)
        ws = wb.active  # each workbook has a single sheet
        df_data = pd.DataFrame(ws.values)
        df_master = pd.concat([df_master, df_data], ignore_index=True)
    return df_master

df_master1 = excel_concat(df_excel_files)
This appears to be nothing more than a "longcut" to just calling the openpyxl engine with pandas. What am I missing in order to capture the visible values in the Excel files?
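One avenue worth noting: openpyxl does expose each cell's number format string, which is the only place a format-only '$' or '%' lives when the underlying value is just a float. A rough sketch (assuming excel_file as above; mapping format strings back to display text is left to the caller):

import openpyxl

wb = openpyxl.load_workbook(filename=excel_file)
ws = wb.active
for row in ws.iter_rows():
    for cell in row:
        # cell.value is the stored number (e.g. 0.96); cell.number_format
        # is the display format (e.g. '"$"#,##0.00' or '0.00%')
        print(cell.value, cell.number_format)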
Looking here, https://pandas.pydata.org/docs/reference/api/pandas.read_excel.html , I noticed the following:

dtype : Type name or dict of column -> type, default None
Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32}. Use object to preserve data as stored in Excel and not interpret dtype. **If converters are specified, they will be applied INSTEAD of dtype conversion.**

converters : dict, default None
Dict of functions for converting values in certain columns. Keys can either be integers or column labels, values are functions that take one input argument, the Excel cell content, and return the transformed content.
Do you think that might work for you?
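For example, a minimal sketch of wiring up converters ('data.xlsx' and 'Value' are placeholder names for the file and the mixed-type column):

import pandas as pd

# converters run INSTEAD of dtype conversion, so every cell in the
# 'Value' column is passed through str() before it lands in the frame
df = pd.read_excel('data.xlsx', converters={'Value': str})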
I have OHLC data in a .csv file with the stock name repeated in the header row, like this:
M6A=F, M6A=F, M6A=F, M6A=F, M6A=F
Open, High, Low, Close, Volume
I am using pandas read_csv to read it in, then pass all (and only) the 'M6A=F' columns to FastAPI. So far nothing I do will get all the columns: I either get the first column if I filter with "usecols=", or the last column if I filter with "names=".
I don't want to load the entire .csv file and then drop unwanted data, for speed, so I need to filter before extracting the data.
Here is my code example:
import json
import pandas as pd
from fastapi import FastAPI

app = FastAPI()

symbol = ['M6A=F']
df = pd.read_csv('myOHCLVdata.csv', skipinitialspace=True, usecols=lambda x: x in symbol)

def parse_csv(df):
    res = df.to_json(orient="records")
    parsed = json.loads(res)
    return parsed

@app.get("/test")
def historic():
    return parse_csv(df)
What I have done so far:
I checked the documentation for pandas.read_csv and it says "names=" will not allow duplicates.
I use lambdas in the above code to prevent the symbol hanging FastAPI if it does not match a column.
My understanding from other Stack Overflow questions on this is that mangle_dupe_cols=True should be renaming the duplicates to M6A=F.1, M6A=F.2, M6A=F.3, etc. when pandas reads the file into a DataFrame, but that isn't happening, and when I tried setting it to False it says that is not implemented yet.
And answers like the one I found in this Stack Overflow solution don't seem to tally with what is happening in my code, since I am only getting the first column returned, or the last column with the others overwritten. (I included the FastAPI code here as it might be related to the issue or a workaround.)
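One way to keep the pre-filtering but still capture every duplicate-named column (a sketch, untested against your file): read only the header row first, then hand the matching column positions to usecols, since integer positions sidestep the name mangling entirely.

import pandas as pd

# Read just the header line to locate the symbol's columns
header = pd.read_csv('myOHCLVdata.csv', nrows=0, skipinitialspace=True)
# pandas mangles duplicates to 'M6A=F', 'M6A=F.1', ... so match on the prefix
positions = [i for i, col in enumerate(header.columns)
             if col.split('.')[0] == 'M6A=F']

# usecols with integer positions loads only those columns from disk
df = pd.read_csv('myOHCLVdata.csv', skipinitialspace=True, usecols=positions)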
Hey everyone, my question is kind of silly, but I am new to Python.
I am writing a Python script for a C# application, and I ran into a strange issue while working with a CSV document.
When I open it and work with the Date column, it works fine:
df=pd.read_csv("../Debug/StockHistoryData.csv")
df = df[['Date']]
But when I try to work with another column, it throws an error:
df = df[['Close/Last']]
KeyError: "None of [Index(['Close/Last'], dtype='object')] are in the [columns]"
It says there is no such index, but when I print the whole table it works fine and shows all columns:
[Table image]
[Error image]
Take a look at the first row of your CSV file. It contains the column names (comma-separated). From time to time it happens that this initial line contains a space after each comma. For a human being this is quite readable, and even intuitive, but read_csv adds these spaces to the column names, which can be difficult to discover. To check, run print(df.columns) after you read your file and look for any extra spaces in the column names.
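Two quick fixes, assuming trailing or leading spaces are the culprit:

import pandas as pd

# Option 1: tell the parser to drop spaces after the delimiter
df = pd.read_csv("../Debug/StockHistoryData.csv", skipinitialspace=True)

# Option 2: strip whitespace from the column names after reading
df = pd.read_csv("../Debug/StockHistoryData.csv")
df.columns = df.columns.str.strip()

df = df[['Close/Last']]  # should now resolve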
Would really appreciate some help with the following problem. I'm intending to use the pandas library to solve it, so I'd appreciate it if you could explain how this can be done using pandas, if possible.
I want to take the following excel file:
Before
and:
1) Convert the 'before' file into a pandas DataFrame.
2) Look for the text in the 'Site' column. Where this text appears within the string in the 'Domain' column, return the value in the 'Owner' column under 'Output'.
3) The result should look like the 'After' file, which I would like to convert back into CSV format.
After
So essentially this is similar to an Excel VLOOKUP exercise, except it's not an exact match we're looking for between the 'Site' and 'Domain' columns.
I have already attempted this in Excel, but I'm looking at over 100,000 rows, comparing them against over 1,000 sites, which crashes Excel.
I have attempted to store the lookup list in the same file as the list of domains we want to classify with the 'Owner'. If there's a much better way to do this eg storing the lookup list in a separate data frame altogether, then that's fine.
Thanks in advance for any help, i really appreciate it.
Colin
I think the OP's question differs somewhat from the solutions linked in the comments, which either deal with exact lookups (map) or lookups between DataFrames. Here there is a single DataFrame and a partial match to find.
import pandas as pd
import numpy as np

df = pd.ExcelFile('data.xlsx').parse(0)
df = df.astype(str)
# Row-wise partial match: is this row's Site a substring of its Domain?
df['Test'] = df.apply(lambda x: x['Site'] in x['Domain'], axis=1)
df['Output'] = np.where(df['Test'] == True, df['Owner'], '')
df
The lambda applies the in (substring) test across each row, returning a boolean in Test. This then acts as a rule for looking up Owner and placing it in Output.
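Since the question mentions 100,000+ rows, a quicker variant (same column names, untested against the actual file) skips DataFrame.apply and writes the result back out to CSV, continuing from the code above:

# Pair up Site/Domain values and test membership in plain Python,
# which avoids per-row apply overhead on large frames
mask = [site in domain for site, domain in zip(df['Site'], df['Domain'])]
df['Output'] = np.where(mask, df['Owner'], '')
df.to_csv('after.csv', index=False)  # 'after.csv' is a placeholder name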
A script I'd written using an earlier version of pandas no longer works: date parsing is broken. This is my read_html line:
gnu = pd.read_html('gnucash.html', flavor="html5lib", header=0, parse_dates=['Date'])
Pandas identifies the HTML table properly but returns the date as unicode. The HTML has been generated by Gnucash and is in ISO format Y-m-d (no times).
Whatever I do I can't get Pandas to recognise the dates. I tried including a date_parser, but read_html doesn't recognise that.
Apologies @IanS, I've inadvertently deleted your comment. When I set out an example, it worked, so I think the problem is with my HTML file: there must be a non-date buried in the date column. Anyway, pandas did what it ought to with my sample file.
Thanks for taking an interest.
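For anyone hitting the same thing, a sketch for flushing out the offending value (note that read_html returns a list of tables, hence the [0]):

import pandas as pd

gnu = pd.read_html('gnucash.html', flavor="html5lib", header=0)[0]
gnu['Date'] = pd.to_datetime(gnu['Date'], errors='coerce')  # non-dates become NaT
print(gnu[gnu['Date'].isna()])  # the rows whose Date failed to parse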