pandas read_csv skiprows not working - python

I am trying to skip some rows that have incorrect values in them.
Here is the data when I read it in from a file without using the skiprows argument.
>> df
MstrRecNbrTxt UnitIDNmb PersonIDNmb PersonTypeCde
2194593 P NaN NaN NaN
2194594 300146901 1.0 1.0 1.0
4100689 DAT NaN NaN NaN
4100690 300170330 1.0 1.0 1.0
5732515 DA NaN NaN NaN
5732516 300174170 2.0 1.0 1.0
I want to skip rows 2194593, 4100689, and 5732515. I would expect to not see those rows in the table that I have read in.
>> df = pd.read_csv(file, sep='|', low_memory=False,
                    usecols=cols_to_use,
                    skiprows=[2194593, 4100689, 5732515])
Yet when I print it again, those rows are still there.
>> df
MstrRecNbrTxt UnitIDNmb PersonIDNmb PersonTypeCde
2194593 P NaN NaN NaN
2194594 300146901 1.0 1.0 1.0
4100689 DAT NaN NaN NaN
4100690 300170330 1.0 1.0 1.0
5732515 DA NaN NaN NaN
5732516 300174170 2.0 1.0 1.0
Here is the data:
{'PersonIDNmb': {2194593: nan,
2194594: 1.0,
4100689: nan,
4100690: 1.0,
5732515: nan,
5732516: 1.0},
'PersonTypeCde': {2194593: nan,
2194594: 1.0,
4100689: nan,
4100690: 1.0,
5732515: nan,
5732516: 1.0},
'UnitIDNmb': {2194593: nan,
2194594: 1.0,
4100689: nan,
4100690: 1.0,
5732515: nan,
5732516: 2.0},
'\ufeffMstrRecNbrTxt': {2194593: 'P',
2194594: '300146901',
4100689: 'DAT',
4100690: '300170330',
5732515: 'DA',
5732516: '300174170'}}
What am I doing wrong?
My end goal is to get rid of the NaN values in my dataframe so that the data can be read in as integers and not as floats (because it makes it difficult to join this table to other non-float tables).

Working example... hope this helps! Note that skiprows takes zero-indexed line numbers within the file (counting the header line), not the index labels of the resulting DataFrame, which is why the header offset below is needed.
from io import StringIO
import pandas as pd
import numpy as np
txt = """index,col1,col2
0,a,b
1,c,d
2,e,f
3,g,h
4,i,j
5,k,l
6,m,n
7,o,p
8,q,r
9,s,t
10,u,v
11,w,x
12,y,z"""
indices_to_skip = np.array([2, 6, 11])
# I offset `indices_to_skip` by one in order to account for header
df = pd.read_csv(StringIO(txt), index_col=0, skiprows=indices_to_skip + 1)
print(df)
col1 col2
index
0 a b
1 c d
3 g h
4 i j
5 k l
7 o p
8 q r
9 s t
10 u v
12 y z
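Since the end goal is integer columns rather than skipping specific file lines, another option (a sketch, reusing the separator and column names from the question; the file path is a hypothetical placeholder) is to read everything, drop the rows whose numeric fields failed to parse, and then cast:
import pandas as pd

file = 'data.psv'  # hypothetical placeholder for the question's file path
cols_to_use = ['MstrRecNbrTxt', 'UnitIDNmb', 'PersonIDNmb', 'PersonTypeCde']

df = pd.read_csv(file, sep='|', low_memory=False, usecols=cols_to_use)

# Rows like 'P', 'DAT', 'DA' read in with NaN in the numeric columns, so drop them...
df = df.dropna(subset=['UnitIDNmb', 'PersonIDNmb', 'PersonTypeCde'])

# ...then the float columns can be cast to integers for clean joins
df = df.astype({'UnitIDNmb': 'int64', 'PersonIDNmb': 'int64',
                'PersonTypeCde': 'int64'})
Note also the '\ufeff' prefix on MstrRecNbrTxt in the dumped data: the file appears to start with a UTF-8 byte-order mark, so passing encoding='utf-8-sig' to read_csv would keep that column name clean.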

Related

Winsorizing on column with NaN does not change the max value

Please note that a similar question was asked a while back but never answered (see Winsorizing does not change the max value).
I am trying to winsorize a column in a dataframe using winsorize from scipy.stats.mstats. If there are no NaN values in the column then the process works correctly.
However, NaN values seem to prevent the process from working on the top (but not the bottom) of the distribution. Regardless of what value I set for nan_policy, the NaN values are set to the maximum value in the distribution. I feel like I must be setting the option incorrectly somehow.
Below is an example that can be used to reproduce both the correct winsorizing when there are no NaN values and the problem behavior I am experiencing when NaN values are present. Any help sorting this out would be appreciated.
#Import
import pandas as pd
import numpy as np
from scipy.stats.mstats import winsorize
# initialise data of lists.
data = {'Name':['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T'], 'Age':[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0]}
# Create 2 DataFrames
df = pd.DataFrame(data)
df2 = pd.DataFrame(data)
# Replace two values in 2nd DataFrame with np.nan
df2.loc[5,'Age'] = np.nan
df2.loc[8,'Age'] = np.nan
# Winsorize Age in both DataFrames
winsorize(df['Age'], limits=[0.1, 0.1], inplace = True, nan_policy='omit')
winsorize(df2['Age'], limits=[0.1, 0.1], inplace = True, nan_policy='omit')
# Check min and max values of Age in both DataFrames
print('Max/min value of Age from dataframe without NaN values')
print(df['Age'].max())
print(df['Age'].min())
print()
print('Max/min value of Age from dataframe with NaN values')
print(df2['Age'].max())
print(df2['Age'].min())
It looks like the nan_policy is being ignored. But winsorization is just clipping, so you can handle this with pandas.
def winsorize_with_pandas(s, limits):
    """
    s : pd.Series
        Series to winsorize
    limits : tuple of float
        Tuple of the percentages to cut on each side of the array,
        with respect to the number of unmasked data, as floats between 0. and 1.
    """
    return s.clip(lower=s.quantile(limits[0], interpolation='lower'),
                  upper=s.quantile(1 - limits[1], interpolation='higher'))
winsorize_with_pandas(df['Age'], limits=(0.1, 0.1))
0 3.0
1 3.0
2 3.0
3 4.0
4 5.0
5 6.0
6 7.0
7 8.0
8 9.0
9 10.0
10 11.0
11 12.0
12 13.0
13 14.0
14 15.0
15 16.0
16 17.0
17 18.0
18 18.0
19 18.0
Name: Age, dtype: float64
winsorize_with_pandas(df2['Age'], limits=(0.1, 0.1))
0 2.0
1 2.0
2 3.0
3 4.0
4 5.0
5 NaN
6 7.0
7 8.0
8 NaN
9 10.0
10 11.0
11 12.0
12 13.0
13 14.0
14 15.0
15 16.0
16 17.0
17 18.0
18 19.0
19 19.0
Name: Age, dtype: float64
You can consider filling the missing values with the mean of the column, then winsorizing, and selecting only the originally non-NaN values:
df2 = pd.DataFrame(data)
# Replace two values in 2nd DataFrame with np.nan
df2.loc[5,'Age'] = np.nan
df2.loc[8,'Age'] = np.nan
# mask of non-NaN values
_m = df2['Age'].notna()
df2.loc[_m, 'Age'] = winsorize(df2['Age'].fillna(df2['Age'].mean()), limits=[0.1, 0.1])[_m]
print(df2['Age'].max())
print(df2['Age'].min())
# 18.0
# 3.0
Or, the other option: remove the NaNs before winsorizing.
df2.loc[_m, 'Age'] = winsorize(df2['Age'].loc[_m], limits=[0.1, 0.1])
print(df2['Age'].max())
print(df2['Age'].min())
# 19.0
# 2.0
I used the following code snippet as the basis for my problem (I needed to winsorize on a yearly basis, so I introduced two categories (A, B) in my toy data).
I ran into the same issue of the max p99 values not being replaced because of the NaNs.
import pandas as pd
import numpy as np
# Getting the toy data
# To see all columns and 100 rows
pd.options.display.max_columns = None
pd.set_option('display.max_rows', 100)
df = pd.DataFrame({"Zahl":np.arange(100),"Group":[i for i in "A"*50+"B"*50]})
# Getting NaN Values for first 4 rows
df.loc[0:3,"Zahl"] = np.NaN
# Defining grouped upper/lower percentile values (named p99/p1, computed here at the 90th/10th percentiles)
p99 = df.groupby("Group")["Zahl"].quantile(.9).rename("99%-Quantile")
p1 = df.groupby("Group")["Zahl"].quantile(.1).rename("1%-Quantile")
# Defining the winsorize function
def winsor(value, p99, p1):
    if (value < p99) and (value > p1):
        return value
    elif (value > p99) and (value > p1):
        return p99
    elif (value < p99) and (value < p1):
        return p1
    else:
        # comparisons against NaN are False, so NaN values fall through unchanged
        return value
df["New"] = df.apply(lambda row: winsor(row["Zahl"],p99[row["Group"]],p1[row["Group"]]),axis=1)
The good thing about the winsor function is that it naturally ignores NaN values, because every comparison against NaN is False.
Hope this idea helps with your problem.
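A pandas-only variant of the same idea (a sketch combining the clip approach from the earlier answer with the per-group requirement): groupby().transform applies the clipping within each group and likewise leaves NaNs untouched, since quantile skips NaN by default and clip passes NaN through.
import pandas as pd
import numpy as np

df = pd.DataFrame({"Zahl": np.arange(100.0),
                   "Group": list("A" * 50 + "B" * 50)})
df.loc[0:3, "Zahl"] = np.nan

def clip_group(s, lower=0.1, upper=0.9):
    # Series.quantile ignores NaN, and clip leaves NaN values as NaN
    return s.clip(lower=s.quantile(lower), upper=s.quantile(upper))

df["New"] = df.groupby("Group")["Zahl"].transform(clip_group)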

How to manipulate pandas dataframe with multiple IF statements/ conditions?

I'm relatively new to Python and pandas, and am trying to work out how to write an IF statement (or any other construct) that, once it initially returns a value, continues with other IF statements within a given range.
I have tried .between, .loc, and if statements but am still struggling. I have tried to recreate what is happening in my code but cannot replicate it precisely. Any suggestions or ideas around this problem?
import pandas as pd
data = {'Yrs': [ '2018','2019', '2020', '2021', '2022'], 'Val': [1.50, 1.75, 2.0, 2.25, 2.5] }
data2 = {'F':['2015','2018', '2020'], 'L': ['2019','2022', '2024'], 'Base':['2','5','5'],
'O':[20, 40, 60], 'S': [5, 10, 15]}
df = pd.DataFrame(data)
df2 = pd.DataFrame(data2)
r = pd.DataFrame()
# use this code to get the first value when F <= Yrs
r.loc[(df2['F'] <= df.at[0, 'Yrs']), '2018'] = \
    (1 / pd.to_numeric(df2['Base'])) * (pd.to_numeric(df2['S'])) * \
    (pd.to_numeric(df.at[0, 'Val'])) + (pd.to_numeric(df2['O']))
# use this code to get the rest of the values until L = Yrs
r.loc[(df2['L'] <= df.at[1, 'Yrs']) & (df2['L'] >= df.at[1, 'Yrs']),
      '2019'] = (pd.to_numeric(r['2018']) - pd.to_numeric(df2['O'])) * \
    pd.to_numeric(df.at[1, 'Val'] / pd.to_numeric(df.at[0, 'Val'])) + \
    pd.to_numeric(df2['O'])
r
I expect the output to be (the values may be different, but it's the pattern I want):
2018 2019 2020 2021 2022
0 7.75 8.375 NaN NaN NaN
1 11.0 11.5 12 12.5 13.0
2 NaN NaN 18 18.75 19.25
but I get:
2018 2019 2020 2021 2022
0 7.75 8.375 9.0 9.625 10.25
1 11.0 11.5 12 NaN NaN
2 16.50 17.25 18 NaN NaN
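One sketch of how to get the expected NaN pattern, using the toy data above: the recursion collapses algebraically to (1/Base)·S·Val + O for every year in range (since v_t = (v_first − O)·Val_t/Val_first + O and v_first − O = (1/Base)·S·Val_first), so each year column can be filled in one shot with a mask restricted to each row's [F, L] window:
import pandas as pd

data = {'Yrs': ['2018', '2019', '2020', '2021', '2022'],
        'Val': [1.50, 1.75, 2.0, 2.25, 2.5]}
data2 = {'F': ['2015', '2018', '2020'], 'L': ['2019', '2022', '2024'],
         'Base': ['2', '5', '5'], 'O': [20, 40, 60], 'S': [5, 10, 15]}
df = pd.DataFrame(data)
df2 = pd.DataFrame(data2)

r = pd.DataFrame(index=df2.index, columns=list(df['Yrs']), dtype=float)
for yr, val in zip(df['Yrs'], df['Val']):
    # fill only the rows whose [F, L] year range contains this year
    in_range = (df2['F'] <= yr) & (df2['L'] >= yr)
    r.loc[in_range, yr] = ((1 / pd.to_numeric(df2['Base'])) * df2['S'] * val
                           + df2['O'])[in_range]
The exact values depend on the real formula, but rows are NaN outside each [F, L] window, which matches the expected pattern.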

Ignoring nan values when creating confidence intervals

As the title suggests, I am trying to create confidence intervals based on a table with a ton of NaN values. Here is an example of what I am working with.
Attendence% 2016-10 2016-11 2017-01 2017-02 2017-03 2017-04 ...
Name
Karl nan 0.2 0.4 0.5 0.2 1.0
Alice 1.0 0.7 0.6 nan nan nan
Ryan nan nan 1.0 0.1 0.9 0.2
Don nan 0.5 nan 0.2 nan nan
Becca nan 0.2 0.6 0 nan nan
For reference, in my actual dataframe there are more NaNs than not, and they represent months where they did not need to show up, so replacing the values with 0 will affect the result.
Now, every time I try applying a confidence interval to each name, it returns the mean as NaN, as well as both interval bounds.
Karl (nan, nan, nan)
Alice (nan, nan, nan)
Ryan (nan, nan, nan)
Don (nan, nan, nan)
Becca (nan, nan, nan)
Is there a way to filter out the NaNs so that the formula is applied without taking the NaN values into account? So far this is what I have been doing (unstacked is the table shown above):
def mean_confidence_interval(unstacked, confidence=0.9):
    a = 1.0 * np.array(unstacked)
    n = len(a)
    m, se = np.mean(a), scipy.stats.sem(a)
    h = se * scipy.stats.t.ppf((1 + confidence) / 2., n - 1)
    return m, m - h, m + h

answer = unstacked.apply(mean_confidence_interval)
answer
Use np.nanmean instead of np.mean: https://docs.scipy.org/doc/numpy/reference/generated/numpy.nanmean.html
And replace scipy.stats.sem(a) with scipy.stats.sem(a, nan_policy='omit').
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.sem.html
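Putting both changes together, a sketch of the corrected function (which also counts only the non-NaN observations for the degrees of freedom, since len(a) would include the NaNs):
import numpy as np
import scipy.stats

def mean_confidence_interval(data, confidence=0.9):
    a = 1.0 * np.array(data)
    n = np.count_nonzero(~np.isnan(a))  # number of non-NaN observations
    m = np.nanmean(a)                   # mean ignoring NaNs
    se = scipy.stats.sem(a, nan_policy='omit')
    h = se * scipy.stats.t.ppf((1 + confidence) / 2., n - 1)
    return m, m - h, m + h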

Pandas: Combine two dataframes with the same structure [duplicate]

I want to perform a join/merge/append operation on a dataframe with datetime index.
Let's say I have df1 and I want to add df2 to it. df2 can have fewer or more columns, and overlapping indexes. For all rows where the indexes match, if df2 has the same column as df1, I want the values of df1 be overwritten with those from df2.
How can I obtain the desired result?
How about: df2.combine_first(df1)?
In [33]: df2
Out[33]:
A B C D
2000-01-03 0.638998 1.277361 0.193649 0.345063
2000-01-04 -0.816756 -1.711666 -1.155077 -0.678726
2000-01-05 0.435507 -0.025162 -1.112890 0.324111
2000-01-06 -0.210756 -1.027164 0.036664 0.884715
2000-01-07 -0.821631 -0.700394 -0.706505 1.193341
2000-01-10 1.015447 -0.909930 0.027548 0.258471
2000-01-11 -0.497239 -0.979071 -0.461560 0.447598
In [34]: df1
Out[34]:
A B C
2000-01-03 2.288863 0.188175 -0.040928
2000-01-04 0.159107 -0.666861 -0.551628
2000-01-05 -0.356838 -0.231036 -1.211446
2000-01-06 -0.866475 1.113018 -0.001483
2000-01-07 0.303269 0.021034 0.471715
2000-01-10 1.149815 0.686696 -1.230991
2000-01-11 -1.296118 -0.172950 -0.603887
2000-01-12 -1.034574 -0.523238 0.626968
2000-01-13 -0.193280 1.857499 -0.046383
2000-01-14 -1.043492 -0.820525 0.868685
In [35]: df2.combine_first(df1)
Out[35]:
A B C D
2000-01-03 0.638998 1.277361 0.193649 0.345063
2000-01-04 -0.816756 -1.711666 -1.155077 -0.678726
2000-01-05 0.435507 -0.025162 -1.112890 0.324111
2000-01-06 -0.210756 -1.027164 0.036664 0.884715
2000-01-07 -0.821631 -0.700394 -0.706505 1.193341
2000-01-10 1.015447 -0.909930 0.027548 0.258471
2000-01-11 -0.497239 -0.979071 -0.461560 0.447598
2000-01-12 -1.034574 -0.523238 0.626968 NaN
2000-01-13 -0.193280 1.857499 -0.046383 NaN
2000-01-14 -1.043492 -0.820525 0.868685 NaN
Note that it takes the values from df1 for indices that do not overlap with df2. If this doesn't do exactly what you want I would be willing to improve this function / add options to it.
For a merge like this, the update method of a DataFrame is useful.
Taking the examples from the documentation:
import pandas as pd
import numpy as np

df1 = pd.DataFrame([[np.nan, 3., 5.], [-4.6, 2.1, np.nan],
                    [np.nan, 7., np.nan]])
df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5., 1.6, 4]],
                   index=[1, 2])
Data before the update:
>>> df1
0 1 2
0 NaN 3.0 5.0
1 -4.6 2.1 NaN
2 NaN 7.0 NaN
>>>
>>> df2
0 1 2
1 -42.6 NaN -8.2
2 -5.0 1.6 4.0
Let's update df1 with data from df2:
df1.update(df2)
Data after the update:
>>> df1
0 1 2
0 NaN 3.0 5.0
1 -42.6 2.1 -8.2
2 -5.0 1.6 4.0
Remarks:
It's important to notice that update is an in-place operation, modifying the DataFrame that calls it.
Also note that non-NaN values in df1 are not overwritten by NaN values in df2.
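A minimal sketch contrasting the two approaches on the same toy frames (assuming the df1/df2 from the update example above):
import pandas as pd
import numpy as np

df1 = pd.DataFrame([[np.nan, 3., 5.], [-4.6, 2.1, np.nan],
                    [np.nan, 7., np.nan]])
df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5., 1.6, 4]],
                   index=[1, 2])

# combine_first returns a new frame: values from the caller (df2) win,
# and rows/columns present only in df1 are kept.
merged = df2.combine_first(df1)

# update modifies df1 in place: NaNs in df2 never overwrite df1's values.
df1.update(df2)
Which one fits depends on whether you want a new DataFrame (combine_first, which also keeps df2's extra columns) or an in-place overwrite of df1 (update).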
