I've been working with data imported from a CSV. Pandas changed some columns to float, so now the numbers in these columns get displayed as floating points! However, I need them to be displayed as integers, or at least without the decimal part. Is there a way to convert them to integers, or to not display the decimals?
To modify the float output, do this:
df = pd.DataFrame(range(5), columns=['a'])
df.a = df.a.astype(float)
df

Out[33]:
           a
0  0.0000000
1  1.0000000
2  2.0000000
3  3.0000000
4  4.0000000
pd.options.display.float_format = '{:,.0f}'.format
df

Out[35]:
   a
0  0
1  1
2  2
3  3
4  4
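Note that this sets a global display option; to restore the default float formatting later, reset the option:

pd.reset_option('display.float_format')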
Use the pandas.DataFrame.astype(<type>) method to change column dtypes.
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(np.random.rand(3, 4), columns=list("ABCD"))
>>> df
          A         B         C         D
0  0.542447  0.949988  0.669239  0.879887
1  0.068542  0.757775  0.891903  0.384542
2  0.021274  0.587504  0.180426  0.574300
>>> df[list("ABCD")] = df[list("ABCD")].astype(int)
>>> df
   A  B  C  D
0  0  0  0  0
1  0  0  0  0
2  0  0  0  0
EDIT:
To handle missing values:
>>> df
          A         B     C         D
0  0.475103  0.355453  0.66  0.869336
1  0.260395  0.200287   NaN  0.617024
2  0.517692  0.735613  0.18  0.657106
>>> df[list("ABCD")] = df[list("ABCD")].fillna(0.0).astype(int)
>>> df
   A  B  C  D
0  0  0  0  0
1  0  0  0  0
2  0  0  0  0
Considering the following data frame:
>>> df = pd.DataFrame(10*np.random.rand(3, 4), columns=list("ABCD"))
>>> print(df)
          A         B         C         D
0  8.362940  0.354027  1.916283  6.226750
1  1.988232  9.003545  9.277504  8.522808
2  1.141432  4.935593  2.700118  7.739108
Using a list of column names, change the type for multiple columns with applymap():
>>> cols = ['A', 'B']
>>> df[cols] = df[cols].applymap(np.int64)
>>> print(df)
   A  B         C         D
0  8  0  1.916283  6.226750
1  1  9  9.277504  8.522808
2  1  4  2.700118  7.739108
Or for a single column with apply():
>>> df['C'] = df['C'].apply(np.int64)
>>> print(df)
   A  B  C         D
0  8  0  1  6.226750
1  1  9  9  8.522808
2  1  4  2  7.739108
To convert all float columns to int:
>>> df = pd.DataFrame(np.random.rand(5, 4) * 10, columns=list('PQRS'))
>>> print(df)
          P         Q         R         S
0  4.395994  0.844292  8.543430  1.933934
1  0.311974  9.519054  6.171577  3.859993
2  2.056797  0.836150  5.270513  3.224497
3  3.919300  8.562298  6.852941  1.415992
4  9.958550  9.013425  8.703142  3.588733
>>> float_col = df.select_dtypes(include=['float64'])  # this will select float columns only
>>> # list(float_col.columns.values)
>>> for col in float_col.columns.values:
...     df[col] = df[col].astype('int64')
>>> print(df)
   P  Q  R  S
0  4  0  8  1
1  0  9  6  3
2  2  0  5  3
3  3  8  6  1
4  9  9  8  3
This is a quick solution in case you want to convert more columns of your pandas.DataFrame from float to integer, also handling the case where you have NaN values.
cols = ['col_1', 'col_2', 'col_3', 'col_4']
for col in cols:
    # x == x is False only for NaN
    df[col] = df[col].apply(lambda x: int(x) if x == x else "")
I tried with "else x" and "else None", but the result still contained the float numbers, so I used "else """. Note that mixing ints and empty strings leaves the column with object dtype.
Use 'Int64' for NaN support
astype(int) and astype('int64') cannot handle missing values (numpy int)
astype('Int64') (note the capital I) can handle missing values (pandas int)
df['A'] = df['A'].astype('Int64') # capital I
This assumes you want to keep missing values as NaN. If you plan to impute them, you could fillna first as Ryan suggested.
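For example, a minimal sketch of the fillna-first route (using 0 as the fill value is an arbitrary choice):

df = pd.DataFrame({'A': [99.0, np.nan, 42.0]})
df['A'] = df['A'].fillna(0).astype(int)  # missing values become 0, dtype is a plain numpy int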
Examples of 'Int64' (capital I)
If the floats are already rounded, just use astype:
df = pd.DataFrame({'A': [99.0, np.nan, 42.0]})
df['A'] = df['A'].astype('Int64')
#       A
# 0    99
# 1  <NA>
# 2    42
If the floats are not rounded yet, round before astype:
df = pd.DataFrame({'A': [3.14159, np.nan, 1.61803]})
df['A'] = df['A'].round().astype('Int64')
#       A
# 0     3
# 1  <NA>
# 2     2
To read int+NaN data from a file, use dtype='Int64' to avoid the need for converting at all:
import io

csv = io.StringIO('''
id,rating
foo,5
bar,
baz,2
''')
df = pd.read_csv(csv, dtype={'rating': 'Int64'})
#     id  rating
# 0  foo       5
# 1  bar    <NA>
# 2  baz       2
Notes
'Int64' is an alias for Int64Dtype:
df['A'] = df['A'].astype(pd.Int64Dtype()) # same as astype('Int64')
Sized/signed aliases are available:

alias     lower bound                 upper bound
'Int8'    -128                        127
'Int16'   -32,768                     32,767
'Int32'   -2,147,483,648              2,147,483,647
'Int64'   -9,223,372,036,854,775,808  9,223,372,036,854,775,807
'UInt8'   0                           255
'UInt16'  0                           65,535
'UInt32'  0                           4,294,967,295
'UInt64'  0                           18,446,744,073,709,551,615
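A minimal sketch showing a sized alias in action (the values are made up): the data must fit within the chosen bounds, and missing values are preserved as <NA>:

import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 200.0])
print(s.astype('Int16'))        # 200 fits in Int16's bounds; NaN becomes <NA>
print(s.astype('Int16').dtype)  # Int16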
Expanding on @Ryan G's mention of the pandas.DataFrame.astype(<type>) method: one can use the errors='ignore' argument to convert only those columns that do not produce an error, which notably simplifies the syntax. Obviously, caution should be applied when ignoring errors, but for this task it comes in very handy.
>>> df = pd.DataFrame(np.random.rand(3, 4), columns=list('ABCD'))
>>> df *= 10
>>> print(df)
         A        B        C        D
0  2.16861  8.34139  1.83434  6.91706
1  5.85938  9.71712  5.53371  4.26542
2  0.50112  4.06725  1.99795  4.75698
>>> df['E'] = list('XYZ')
>>> df = df.astype(int, errors='ignore')
>>> print(df)
   A  B  C  D  E
0  2  8  1  6  X
1  5  9  5  4  Y
2  0  4  1  4  Z
From pandas.DataFrame.astype docs:
errors : {‘raise’, ‘ignore’}, default ‘raise’
Control raising of exceptions on invalid data for provided dtype.
raise : allow exceptions to be raised
ignore : suppress exceptions. On error return original object
New in version 0.20.0.
The columns that need to be converted to int can also be specified in a dictionary, as below:
df = df.astype({'col1': 'int', 'col2': 'int', 'col3': 'int'})
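For instance, a small runnable sketch of the dictionary form (the column names and data here are made up):

df = pd.DataFrame({'col1': [1.0, 2.0], 'col2': [3.5, 4.5], 'col3': [5.0, 6.0]})
df = df.astype({'col1': 'int', 'col3': 'int'})  # convert only the listed columns
print(df.dtypes)  # col1 and col3 are now integer, col2 stays float64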
>>> import pandas as pd
>>> right = pd.DataFrame({'C': [1.002, 2.003], 'D': [1.009, 4.55], 'key': ['K0', 'K1']})
>>> print(right)
       C      D key
0  1.002  1.009  K0
1  2.003  4.550  K1
>>> right['C'] = right.C.astype(int)
>>> print(right)
   C      D key
0  1  1.009  K0
1  2  4.550  K1
The text of the question explains that the data comes from a CSV. So, I think that showing options for making the conversion when the data is read, and not afterwards, is relevant to the topic.
When importing spreadsheets or CSVs into a dataframe, integer-only columns are commonly converted to float, because Excel stores all numerical values as floats and because of how the underlying libraries work.
When the file is read with read_excel or read_csv, there are a couple of options to avoid the conversion after import:
The dtype parameter allows passing a dictionary of column names and target types, like dtype={"my_column": "Int64"}.
The converters parameter can be used to pass a function that makes the conversion, for example changing NaN's to 0: converters={"my_column": lambda x: int(x) if x else 0}.
The convert_float parameter will convert "integral floats to int (i.e., 1.0 --> 1)", but take care with corner cases like NaN's; this parameter is only available in read_excel. A sketch of the first two options follows.
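A minimal sketch of the first two options (the column name and data are made up):

import io
import pandas as pd

raw = "my_column,other\n1,a\n,b\n3,c\n"

# dtype: parse the column directly as a nullable integer type
df1 = pd.read_csv(io.StringIO(raw), dtype={"my_column": "Int64"})
print(df1["my_column"].dtype)     # Int64, with <NA> for the empty cell

# converters: turn empty strings into 0 while parsing
df2 = pd.read_csv(io.StringIO(raw),
                  converters={"my_column": lambda x: int(x) if x else 0})
print(df2["my_column"].tolist())  # [1, 0, 3]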
To make the conversion in an existing dataframe, several alternatives have been given in other answers, but since v1.0.0 pandas has an interesting function for these cases: convert_dtypes, which will "Convert columns to best possible dtypes using dtypes supporting pd.NA."
As example:
In [3]: import numpy as np
In [4]: import pandas as pd
In [5]: df = pd.DataFrame(
...: {
...: "a": pd.Series([1, 2, 3], dtype=np.dtype("int64")),
...: "b": pd.Series([1.0, 2.0, 3.0], dtype=np.dtype("float")),
...: "c": pd.Series([1.0, np.nan, 3.0]),
...: "d": pd.Series([1, np.nan, 3]),
...: }
...: )
In [6]: df
Out[6]:
   a    b    c    d
0  1  1.0  1.0  1.0
1  2  2.0  NaN  NaN
2  3  3.0  3.0  3.0
In [7]: df.dtypes
Out[7]:
a      int64
b    float64
c    float64
d    float64
dtype: object
In [8]: converted = df.convert_dtypes()
In [9]: converted.dtypes
Out[9]:
a    Int64
b    Int64
c    Int64
d    Int64
dtype: object
In [10]: converted
Out[10]:
   a  b     c     d
0  1  1     1     1
1  2  2  <NA>  <NA>
2  3  3     3     3
Although there are many options here, you can also convert the format of specific columns using a dictionary:

Data = pd.read_csv('Your_Data.csv')
Data_2 = Data.astype({"Column_a": "int32", "Column_b": "float64", "Column_c": "int32"})
print(Data_2.dtypes)  # check the dtypes of the columns

This is a useful and very fast way to change the data format of specific columns for quick data analysis.
I have a DataFrame with integer indexes that are missing some values (i.e. not equally spaced). I want to create a new DataFrame with equally spaced index values and forward-fill the column values. Below is a simple example:
Have:
import pandas as pd
df = pd.DataFrame(['A', 'B', 'C'], index=[0, 2, 4])
   0
0  A
2  B
4  C
I want to use the above to create:
   0
0  A
1  A
2  B
3  B
4  C
Use reindex with method='ffill':
import numpy as np

df = df.reindex(np.arange(0, df.index.max() + 1), method='ffill')
Or:
df = df.reindex(np.arange(df.index.min(), df.index.max() + 1), method='ffill')
print (df)
   0
0  A
1  A
2  B
3  B
4  C
Using reindex and ffill:
df = df.reindex(range(df.index[0], df.index[-1] + 1)).ffill()
print(df)
   0
0  A
1  A
2  B
3  B
4  C
You can do this:
In [319]: df.reindex(list(range(df.index.min(),df.index.max()+1))).ffill()
Out[319]:
   0
0  A
1  A
2  B
3  B
4  C
Reproducible code for the data:

import pandas as pd

data = {"a": "[1,2,3,4]", "b": "[1,2,3,4]"}
df = pd.DataFrame(list(data.items()))
df
   0          1
0  a  [1,2,3,4]
1  b  [1,2,3,4]
I wanted to split/delimit "column 1" and create individual rows for each split value.
expected output:
   0  1
0  a  1
1  a  2
2  a  3
3  a  4
4  b  1
5  b  2
6  b  3
7  b  4
Should I remove the brackets first and then split the values? I really have no idea how to do this. Any reference that would help me solve this, please?
Based on the logic from that answer:
s = df[1]\
    .apply(lambda x: pd.Series(eval(x)))\
    .stack()
s.index = s.index.droplevel(-1)
s.name = "split"
df.join(s).drop(1, axis=1)
Because you have strings containing a list (and not lists) in your cells, you can use eval:
dict_v = {"a": "[1,2,3,4]", "b": "[1,2,3,4]"}
df = pd.DataFrame(list(dict_v.items()))
df = (df.rename(columns={0: 'l'}).set_index('l')[1]
        .apply(lambda x: pd.Series(eval(x))).stack()
        .reset_index().drop('level_1', axis=1).rename(columns={'l': 0, 0: 1}))
or another way could be to create a DataFrame (probably faster) such as:
df = (pd.DataFrame(df[1].apply(eval).tolist(), index=df[0])
        .stack().reset_index(level=1, drop=True)
        .reset_index(name='1'))
your output is
   0  1
0  a  1
1  a  2
2  a  3
3  a  4
4  b  1
5  b  2
6  b  3
7  b  4
All the renames are just there to get exactly your input/output.
If I wanted to go from a long format to a grouped, aggregated format, I would simply do:
s = pd.DataFrame(['a','a','a','a','b','b','c'], columns=['value'])
s.groupby('value').size()
value
a    4
b    2
c    1
dtype: int64
Now if I wanted to revert that aggregation and go from a grouped format to a long format, how would I go about doing that? I guess I could loop through the grouped series and repeat 'a' 4 times and 'b' 2 times etc.
Is there a better way to do this in pandas or any other Python package?
Thankful for any hints
Perhaps .transform can help with this:
s.set_index('value', drop=False, inplace=True)
s['size'] = s.groupby(level='value')['value'].transform('size')
s.reset_index(inplace=True, drop=True)
s
yielding:
  value  size
0     a     4
1     a     4
2     a     4
3     a     4
4     b     2
5     b     2
6     c     1
Another, rather simple approach is to use np.repeat (assuming s2 is the aggregated series):
In [17]: np.repeat(s2.index.values, s2.values)
Out[17]: array(['a', 'a', 'a', 'a', 'b', 'b', 'c'], dtype=object)
In [18]: pd.DataFrame(np.repeat(s2.index.values, s2.values), columns=['value'])
Out[18]:
  value
0     a
1     a
2     a
3     a
4     b
5     b
6     c
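A pandas-native variant of the same idea (not from the original answer) is Index.repeat, which likewise takes per-element repeat counts:

In [19]: pd.DataFrame({'value': s2.index.repeat(s2.values)})
Out[19]:
  value
0     a
1     a
2     a
3     a
4     b
5     b
6     c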
There might be something cleaner, but here's an approach. First, store your groupby results in a dataframe and rename the columns.
agg = s.groupby('value').size().reset_index()
agg.columns = ['key', 'count']
Then, build a frame with columns that track the count for each letter.
counts = agg['count'].apply(lambda x: pd.Series([0] * x))
counts['key'] = agg['key']
In [107]: counts
Out[107]:
   0   1   2   3 key
0  0   0   0   0   a
1  0   0 NaN NaN   b
2  0 NaN NaN NaN   c
Finally, this can be melted and nulls dropped to get your desired frame.
In [108]: pd.melt(counts, id_vars='key').dropna()[['key']]
Out[108]:
  key
0   a
1   b
2   c
3   a
4   b
6   a
9   a
I have a pandas dataframe whose indices look like:
df.index
['a_1', 'b_2', 'c_3', ... ]
I want to rename these indices to:
['a', 'b', 'c', ... ]
How do I do this without specifying a dictionary with explicit keys for each index value?
I tried:
df.rename( index = lambda x: x.split( '_' )[0] )
but this throws up an error:
AssertionError: New axis must be unique to rename
Perhaps you could get the best of both worlds by using a MultiIndex:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.arange(8).reshape(4,2), index=['a_1', 'b_2', 'c_3', 'c_4'])
print(df)
#      0  1
# a_1  0  1
# b_2  2  3
# c_3  4  5
# c_4  6  7
index = pd.MultiIndex.from_tuples([item.split('_') for item in df.index])
df.index = index
print(df)
#      0  1
# a 1  0  1
# b 2  2  3
# c 3  4  5
#   4  6  7
This way, you can access things according to first level of the index:
In [30]: df.loc['c']
Out[30]:
   0  1
3  4  5
4  6  7
or according to both levels of the index:
In [31]: df.loc[('c', '3')]
Out[31]:
0    4
1    5
Name: (c, 3)
Moreover, all the DataFrame methods are built to work with DataFrames with MultiIndices, so you lose nothing.
However, if you really want to drop the second level of the index, you could do this:
df.reset_index(level=1, drop=True, inplace=True)
print(df)
#    0  1
# a  0  1
# b  2  3
# c  4  5
# c  6  7
That's the error you'd get if your function produced duplicate index values:
>>> df = pd.DataFrame(np.random.random((4, 3)), index="a_1 b_2 c_3 c_4".split())
>>> df
            0         1         2
a_1  0.854839  0.830317  0.046283
b_2  0.433805  0.629118  0.702179
c_3  0.390390  0.374232  0.040998
c_4  0.667013  0.368870  0.637276
>>> df.rename(index=lambda x: x.split("_")[0])
[...]
AssertionError: New axis must be unique to rename
If you really want that, I'd use a list comp:
>>> df.index = [x.split("_")[0] for x in df.index]
>>> df
          0         1         2
a  0.854839  0.830317  0.046283
b  0.433805  0.629118  0.702179
c  0.390390  0.374232  0.040998
c  0.667013  0.368870  0.637276
but I'd think about whether that's really the right direction.