Issue
I have an Excel file in German number format. It looks like this:
I want to read the first column as numbers into pandas using the following code:
import pandas as pd
import numpy as np
tmp = pd.read_excel("test.xlsx", dtype={"col1": np.float64})
It gives me the error
ValueError: Unable to convert column col1 to type <class 'numpy.float64'>
The issue is in Excel. If I manually change col1 to number format, the error goes away. See this new Excel file:
Approach
I can first read col1 as object into pandas, then replace each , with ., and finally convert the strings to float.
However
This approach is tedious. How can I solve the problem more efficiently?
Unfortunately, older versions of read_excel give you no way to tell pandas which decimal separator is being used.
What you can do instead is create a function to do the conversion and pass it to read_excel via the converters argument:
def fix_decimal(num):
    # convert a value with a comma as decimal separator to float
    # (str() guards against cells that Excel already parsed as numeric)
    return float(str(num).replace(',', '.')) if num else 0

tmp = pd.read_excel("test.xlsx", converters={0: fix_decimal})
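Note that newer pandas versions (1.4+, if I recall correctly) added a decimal argument to read_excel itself, mirroring read_csv. A minimal sketch, assuming such a version:
import pandas as pd

# decimal="," tells the parser that the comma is the decimal separator
tmp = pd.read_excel("test.xlsx", decimal=",")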
Related
I am importing study data into a Pandas data frame using read_csv.
My subject codes are 6 numbers coding, among others, the day of birth. For some of my subjects this results in a code with a leading zero (e.g. "010816").
When I import into Pandas, the leading zero is stripped off and the column is formatted as int64.
Is there a way to import this column unchanged maybe as a string?
I tried using a custom converter for the column, but it does not work - it seems as if the custom conversion takes place before Pandas converts to int.
As indicated in this answer by Lev Landau, there is a simple solution: use the converters option for the column in question in the read_csv function:
converters={'column_name': str}
Let's say I have csv file projects.csv like below:
project_name,project_id
Some Project,000245
Another Project,000478
For example, the code below trims the leading zeros:
from pandas import read_csv
dataframe = read_csv('projects.csv')
print(dataframe)
Result:
project_name project_id
0 Some Project 245
1 Another Project 478
Solution code example:
from pandas import read_csv
dataframe = read_csv('projects.csv', converters={'project_id': str})
print(dataframe)
Required result:
project_name project_id
0 Some Project 000245
1 Another Project 000478
To have all columns as str:
pd.read_csv('sample.csv', dtype=str)
To have certain columns as str:
# column names which need to be string
lst_str_cols = ['prefix', 'serial']
dict_dtypes = {x: 'str' for x in lst_str_cols}
pd.read_csv('sample.csv', dtype=dict_dtypes)
Here is a shorter, robust, and fully working solution:
Simply define a mapping (dictionary) between variable names and the desired data types:
dtype_dic = {'subject_id': str, 'subject_number': 'float'}
Then use that mapping with pd.read_csv():
df = pd.read_csv(yourdata, dtype=dtype_dic)
et voila!
If you have many columns and you don't know which ones contain leading zeros, or you just need to automate your code, you can do the following:
df = pd.read_csv("your_file.csv", nrows=1) # Just take the first row to extract the columns' names
col_str_dic = {column:str for column in list(df)}
df = pd.read_csv("your_file.csv", dtype=col_str_dic) # Now you can read the compete file
You could also do:
df = pd.read_csv("your_file.csv", dtype=str)
By doing this you will have all your columns as strings and you won't lose any leading zeros.
You can do this; it works on all versions of pandas:
pd.read_csv('filename.csv', dtype={'zero_column_name': object})
You can use converters to pad numbers to a fixed width, if you know the width.
For example, if the width is 5:
data = pd.read_csv('text.csv', converters={'column1': lambda x: f"{int(x):05}"})
This will do the trick. It works for pandas==0.23.0 and also for read_excel.
Python 3.6 or higher is required (for the f-string).
I don't think you can specify a column type the way you want (unless there have been recent changes, and unless the 6-digit number is a date that you can convert to datetime). You could try using np.genfromtxt() and create the DataFrame from there.
EDIT: Take a look at Wes McKinney's blog; there might be something there for you. It seems that a new parser is coming in pandas 0.10 in November.
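A hedged sketch of that np.genfromtxt route, reading everything as strings so the leading zeros survive (file and column names taken from the projects.csv example above):
import numpy as np
import pandas as pd

# dtype=str keeps '000245' intact; skip_header=1 drops the header row
raw = np.genfromtxt('projects.csv', delimiter=',', dtype=str, skip_header=1)
df = pd.DataFrame(raw, columns=['project_name', 'project_id'])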
As an example, consider the following my_data.txt file:
id,A
03,5
04,6
To preserve the leading zeros for the id column:
df = pd.read_csv("my_data.txt", dtype={"id":"string"})
df
id A
0 03 5
1 04 6
In the code below I explicitly declared - as an NA (missing) value for an int32 column while reading a CSV file.
But I got the error message
ValueError: Integer column has NA values in column 2
So it looks like pandas recognizes the - as an NA value but treats it as an error. It should just interpret it as NA, because that's how I defined it. Why doesn't it?
#!/usr/bin/env python3
import io
import pandas as pd

print(pd.__version__)

# the '-' in the first data row marks a missing/empty value
csv_data = """idx;FOO;BAR
zero;0;-
eins;one;2
zwei;two;3"""

df = pd.read_csv(io.StringIO(csv_data),
                 header=0,
                 sep=';',
                 dtype={'BAR': 'int32'},
                 na_values={'BAR': '-'})
print(df)
A workaround could be to treat all the columns as strings and convert them after read_csv(). But that is not the kind of solution I am looking for here. I assume I misunderstand the options of read_csv()?
First of all, try changing int32 to Int32 (uppercase I), like this:
dtype={'BAR': 'Int32'}
The uppercase variants of those dtypes are pandas' nullable integer types, which support NA values.
Then I strongly recommend using np.nan instead of -.
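A minimal sketch of the corrected call, reusing csv_data from the question (this assumes a pandas version with nullable integer dtypes, 0.24 or later):
df = pd.read_csv(io.StringIO(csv_data),
                 header=0,
                 sep=';',
                 dtype={'BAR': 'Int32'},   # nullable integer dtype, capital I
                 na_values={'BAR': '-'})
print(df['BAR'])  # the '-' becomes <NA> instead of raising ValueError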
I need to save pandas series and make sure that, once loaded again, they are exactly the same. However, they are not. I tried to manipulate the result in various ways but cannot find a solution. This is my MWE:
import pandas as pd
idx = pd.date_range(start='2010', periods=100, freq='1M')
ts = pd.Series(data=range(100), index=idx)
ts.to_csv('test.csv')
imported_ts = pd.read_csv('test.csv', delimiter=',', index_col=None)
print(ts.equals(imported_ts))
>>> False
What am I doing wrong?
You cannot. A pandas Series contains an index and a data column, each having a type (the dtype) and a (possibly complex) name which itself has a type, plus the values.
A CSV file is just a text file containing text representations of the values and, optionally, the text representation of the names in the first row. Nothing more. When things are simple, meaning the names are plain strings and all values are integers or small decimals (*), the save-load round trip will give you exactly what you initially had.
But in more complex cases, for example date types, or object-dtype columns containing decimal.Decimal values, the generated CSV file only contains a textual representation with no type information. So it is impossible to be sure of the original dtype just by reading the content of a CSV file, which is why the read_csv method has so many options.
(*) by small decimal I mean a small number of digits after the decimal point.
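For the MWE in the question, passing the right options to read_csv gets close to a faithful round trip. A sketch only, with no guarantee that equals() holds across pandas versions:
# restore the DatetimeIndex and squeeze the single data column back to a Series
imported = pd.read_csv('test.csv', index_col=0, parse_dates=True)
imported_ts = imported.iloc[:, 0]
print((imported_ts.values == ts.values).all())  # the values survive; metadata may not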
I resolved this issue by using pickle instead.
import pandas as pd
idx = pd.date_range(start='2010', periods=100, freq='1M')
ts = pd.Series(data=range(100), index=idx)
ts.to_pickle("./test.pkl")
unpickled_df = pd.read_pickle("./test.pkl")
print(ts.equals(unpickled_df))
>>> True
What is happening is that read_csv returns a DataFrame by default, even when there is only a single column. In addition, because CSV carries no type information, the round trip can be harder than my suggestion covers; in that case see Serge Ballesta's answer.
If it is a simple case, try converting the result:
print(ts.equals(imported_ts.iloc[:, 0]))
You are saving the dates as the index but comparing against the values of your DataFrame. Do this instead:
import pandas as pd
idx = pd.date_range(start='2010', periods=100, freq='1M')
ts = pd.Series(data=range(100), index=idx)
ts.to_csv('test.csv')
imported_ts = pd.read_csv('test.csv', delimiter=',', index_col=['Unnamed: 0'])
print(ts.index.equals(imported_ts.index))
Gives
True
I have a Pandas data frame with multiple columns whose types are either float64 or string. I'm trying to use to_csv to write the data frame to an output file, but it outputs big numbers in scientific notation. For example, the number 1344154454156.992676 is saved in the file as 1.344154e+12.
How can I suppress scientific notation in to_csv and keep the numbers as they are in the output file? I tried the float_format parameter of to_csv, but it broke because there are also string columns in the data frame.
Here is some example code:
import pandas as pd
import numpy as np
import os
df = pd.DataFrame({'names': ['a', 'b', 'c'],
                   'values': np.random.rand(3)*100000000000000})
df.to_csv('example.csv')
os.system("cat example.csv")
,names,values
0,a,9.41843213808e+13
1,b,2.23837359193e+13
2,c,9.91801198906e+13
# if i set up float_format:
df.to_csv('example.csv', float_format='{:f}'.format)
ValueError: Unknown format code 'f' for object of type 'str'
How can I get the data saved in the csv without scientific notation, like below?
names values
0 a 94184321380806.796875
1 b 22383735919307.046875
2 c 99180119890642.859375
The float_format argument should be a printf-style format string; use this instead:
df.to_csv('example.csv', float_format='%f')
The format is applied only to float values, so the string columns are left untouched.
Try setting the display option like this (note that it affects how frames are printed, not what to_csv writes):
pd.set_option('display.float_format', '{:f}'.format)
For Python 3.x (tested on 3.6.5 and 3.7):
Options and Settings
For visualization of the dataframe, use pandas.set_option:
import pandas as pd  # import pandas package
import numpy as np   # needed for the random data below

# display options for inspecting the float data once we read it:
pd.set_option('display.html.table_schema', True)  # so we can see the dataframe/table as html
pd.set_option('display.precision', 5)             # set the display precision to 5 digits

df = pd.DataFrame({'names': ['a', 'b', 'c'],
                   'values': np.random.rand(3)*100000000000000})  # generate a random dataframe
Output of the data:
df.dtypes # check datatype for columns
[output]:
names object
values float64
dtype: object
Dataframe:
df # output of the dataframe
[output]:
names values
0 a 6.56726e+13
1 b 1.63821e+13
2 c 7.63814e+13
And now write to_csv using the float_format='%.13f' parameter
df.to_csv('estc.csv',sep=',', float_format='%.13f') # write with precision .13
file output:
,names,values
0,a,65672589530749.0703125000000
1,b,16382088158236.9062500000000
2,c,76381375369817.2968750000000
And now write to_csv using the float_format='%f' parameter
df.to_csv('estc.csv',sep=',', float_format='%f') # this will remove the extra zeros after the '.'
file output:
,names,values
0,a,65672589530749.070312
1,b,16382088158236.906250
2,c,76381375369817.296875
For more details check pandas.DataFrame.to_csv