How to exclude a few columns and replace negative values in big data? - python

I have a dataframe like the one shown below
import pandas as pd
df = pd.DataFrame({'a': [0, -1, 2], 'b': [-3, 2, 1]})
In my real data I have more than 100 columns. Excluding two of them, I would like to replace the negative values in all the other columns with zero.
I tried this, but it applies to all columns:
df[df < 0] = 0
Is the only way to put all the column names in a list and run through a loop, like below?
col_list = ['a1','a2','a3','a4',..........'a100'] # here the `a21`, `a22` columns are left out of the list
for col in col_list:
    df.loc[df[col] < 0, col] = 0
As you can see, this is lengthy and inefficient.
Can you help me with any efficient approach to do this?

The problem is that `df[df < 0] = 0` cannot be restricted to specified column names directly, so it is necessary to use DataFrame.mask:
col_list = df.columns.difference(['a21','a22'])
m = df[col_list] < 0
df[col_list] = df[col_list].mask(m, 0)
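For a self-contained illustration, a minimal sketch using the question's toy frame plus a hypothetical excluded column 'b' (column names assumed here, not from the original data):

```python
import pandas as pd

# toy frame: keep column 'b' untouched, zero out negatives elsewhere
df = pd.DataFrame({'a': [0, -1, 2], 'b': [-3, 2, 1], 'c': [-5, 4, -2]})

col_list = df.columns.difference(['b'])  # every column except 'b'
m = df[col_list] < 0                     # boolean mask of negative cells
df[col_list] = df[col_list].mask(m, 0)   # replace masked cells with 0

print(df)
#    a  b  c
# 0  0 -3  0
# 1  0  2  4
# 2  2  1  0
```

Column 'b' keeps its negative value while the other columns are floored at zero.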
EDIT:
For the numeric columns, excluding a21 and a22, use DataFrame.select_dtypes with Index.difference:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'a21': list('abcdef'),
    'B': [4, 5, 4, 5, 5, 4],
    'C': [-7, 8, 9, 4, 2, 3],
    'D': [1, 3, 5, -7, 1, 'a'],  # object column because of the last 'a'
    'E': [5, 3, -6, 9, 2, -4],
    'a22': list('aaabbb')
})
col_list = df.select_dtypes(np.number).columns.difference(['a21', 'a22'])
m = df[col_list] < 0
df[col_list] = df[col_list].mask(m, 0)
print (df)
a21 B C D E a22
0 a 4 0 1 5 a
1 b 5 8 3 3 a
2 c 4 9 5 0 a
3 d 5 4 -7 9 b
4 e 5 2 1 2 b
5 f 4 3 a 0 b

How about simply clipping at 0?
df[col_list] = df[col_list].clip(lower=0)
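A sketch of the clipping approach on the same kind of toy frame (column names assumed); clip(lower=0) floors only the selected columns:

```python
import pandas as pd

df = pd.DataFrame({'a': [0, -1, 2], 'b': [-3, 2, 1], 'c': [-5, 4, -2]})
col_list = df.columns.difference(['b'])  # leave 'b' untouched

df[col_list] = df[col_list].clip(lower=0)  # floor selected columns at 0
print(df['c'].tolist())  # [0, 4, 0]
```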

Related

unique and replace in python

I have a huge dataset with more than 100 columns that contain non-null values I want to replace (leaving all the null values as they are). Some columns, however, should stay untouched.
I am planning to do the following:
1) find the unique values in these columns
2) replace these values with 1
Problem:
1) something like this is barely usable for 100+ columns:
np.unique(df[['Col1', 'Col2']].values)
2) how do I then apply loc to all these columns? The code below does not work:
df_2.loc[df_2[['col1','col2','col3']] !=0, ['col1','col2','col3']] = 1
Maybe there is a more reasonable and elegant way to solve the problem. Thanks!
Use DataFrame.mask:
c = ['col1','col2','col3']
df_2[c] = df_2[c].mask(df_2[c] != 0, 1)
Or compare by not-equal with DataFrame.ne and cast the mask to integers with DataFrame.astype:
df_2 = pd.DataFrame({
'A':list('abcdef'),
'col1':[0,5,0,5,5,0],
'col2':[7,8,9,0,2,0],
'col3':[0,0,5,7,0,0],
'E':[5,0,6,9,2,0],
})
c = ['col1','col2','col3']
df_2[c] = df_2[c].ne(0).astype(int)
print (df_2)
A col1 col2 col3 E
0 a 0 1 0 5
1 b 1 1 0 0
2 c 0 1 1 6
3 d 1 0 1 9
4 e 1 1 0 2
5 f 0 0 0 0
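Both solutions give the same 0/1 values; a quick sanity check on a small made-up frame:

```python
import pandas as pd

df_2 = pd.DataFrame({'col1': [0, 5, 0], 'col2': [7, 0, 2], 'col3': [0, 0, 5]})
c = ['col1', 'col2', 'col3']

masked = df_2[c].mask(df_2[c] != 0, 1)  # non-zero values become 1
flagged = df_2[c].ne(0).astype(int)     # boolean compare, cast to int

print((masked.values == flagged.values).all())  # True
```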
EDIT: To select columns by position, use DataFrame.iloc with np.r_:
idx = np.r_[6:71,82]
df_2.iloc[:, idx] = df_2.iloc[:, idx].ne(0).astype(int)
Or first solution:
df_2.iloc[:, idx] = df_2.iloc[:, idx].mask(df_2.iloc[:, idx] != 0, 1)
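A small runnable sketch of the positional selection, with a hypothetical narrow frame standing in for the asker's wide one (np.r_ concatenates slices and single indices into one position array):

```python
import numpy as np
import pandas as pd

df_2 = pd.DataFrame({'A': ['a', 'b'], 'col1': [0, 5], 'col2': [7, 0],
                     'E': [3, 0], 'col3': [0, 9]})
idx = np.r_[1:3, 4]  # positions 1, 2 and 4 -> col1, col2, col3

df_2.iloc[:, idx] = df_2.iloc[:, idx].ne(0).astype(int)
print(df_2.values.tolist())
# [['a', 0, 1, 3, 0], ['b', 1, 0, 0, 1]]
```

Columns 'A' and 'E' are left untouched because their positions are not in idx.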

Swapping values in columns depending on value type in one of the columns

Suppose I have the following pandas dataframe:
df = pd.DataFrame([['A','B'],[8,'s'],[5,'w'],['e',1],['n',3]])
print(df)
0 1
0 A B
1 8 s
2 5 w
3 e 1
4 n 3
If there is an integer in column 1, then I want to swap the value with the value from column 0, so in other words I want to produce this dataframe:
0 1
0 A B
1 8 s
2 5 w
3 1 e
4 3 n
Build a mask that identifies numbers in the second column with to_numeric, using errors='coerce' and Series.notna:
m = pd.to_numeric(df[1], errors='coerce').notna()
Another solution converts to strings with Series.astype and checks Series.str.isnumeric, but this works only for non-negative integers:
m = df[1].astype(str).str.isnumeric()
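To see that integer-only caveat concretely, a small comparison sketch:

```python
import pandas as pd

s = pd.Series(['8', '-7', '3.5', 'w'])

print(s.str.isnumeric().tolist())
# [True, False, False, False] -- misses the sign and the decimal point

print(pd.to_numeric(s, errors='coerce').notna().tolist())
# [True, True, True, False] -- parses '-7' and '3.5' as numbers
```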
Then replace with DataFrame.loc, using DataFrame.values to get a numpy array and avoid column alignment:
df.loc[m, [0, 1]] = df.loc[m, [1, 0]].values
print(df)
0 1
0 A B
1 8 s
2 5 w
3 1 e
4 3 n
Finally, if possible, it is better to promote the first row to column names:
df.columns = df.iloc[0]
df = df.iloc[1:].rename_axis(None, axis=1)
print(df)
A B
1 8 s
2 5 w
3 1 e
4 3 n
or, if the data comes from read_csv, remove header=None there.
Use sorted with a key that tests for int:
df.loc[:] = [
    sorted(t, key=lambda x: not isinstance(x, int))  # ints sort to the front
    for t in zip(*map(df.get, df))                   # iterate rows as tuples
]
df
0 1
0 A B
1 8 s
2 5 w
3 1 e
4 3 n
You can be explicit with the columns if you'd like
df[[0, 1]] = [
sorted(t, key=lambda x: not isinstance(x, int))
for t in zip(df[0], df[1])
]

map DataFrame index and forward fill nan values

I have a DataFrame with integer indexes that are missing some values (i.e. not equally spaced), and I want to create a new DataFrame with equally spaced index values and forward-filled column values. Below is a simple example:
have
import pandas as pd
df = pd.DataFrame(['A', 'B', 'C'], index=[0, 2, 4])
0
0 A
2 B
4 C
want to use the above to create:
0
0 A
1 A
2 B
3 B
4 C
Use reindex with method='ffill':
df = df.reindex(np.arange(0, df.index.max()+1), method='ffill')
Or:
df = df.reindex(np.arange(df.index.min(), df.index.max() + 1), method='ffill')
print (df)
0
0 A
1 A
2 B
3 B
4 C
Using reindex and ffill:
df = df.reindex(range(df.index[0],df.index[-1]+1)).ffill()
print(df)
0
0 A
1 A
2 B
3 B
4 C
You can do this:
In [319]: df.reindex(list(range(df.index.min(),df.index.max()+1))).ffill()
Out[319]:
0
0 A
1 A
2 B
3 B
4 C

Pandas Python : how to create multiple columns from a list

I have a list with columns to create :
new_cols = ['new_1', 'new_2', 'new_3']
I want to create these columns in a dataframe and fill them with zero :
df[new_cols] = 0
Get error :
"['new_1', 'new_2', 'new_3'] not in index"
which is true but unfortunate as I want to create them...
EDIT: This is a duplicate of this question: Add multiple empty columns to pandas DataFrame. However, I am keeping this one too because the accepted answer here was the simple solution I was looking for, and it was not the accepted answer over there.
EDIT 2: While the accepted answer is the simplest, interesting one-liner solutions were posted below.
You need to add the columns one by one.
for col in new_cols:
df[col] = 0
Also see the answers in here for other methods.
Use assign with a dictionary:
df = pd.DataFrame({
'A': ['a','a','a','a','b','b','b','c','d'],
'B': list(range(9))
})
print (df)
   A  B
0  a  0
1  a  1
2  a  2
3  a  3
4  b  4
5  b  5
6  b  6
7  c  7
8  d  8
new_cols = ['new_1', 'new_2', 'new_3']
df = df.assign(**dict.fromkeys(new_cols, 0))
print (df)
A B new_1 new_2 new_3
0 a 0 0 0 0
1 a 1 0 0 0
2 a 2 0 0 0
3 a 3 0 0 0
4 b 4 0 0 0
5 b 5 0 0 0
6 b 6 0 0 0
7 c 7 0 0 0
8 d 8 0 0 0
import pandas as pd
new_cols = ['new_1', 'new_2', 'new_3']
df = pd.DataFrame.from_records([(0, 0, 0)], columns=new_cols)
Is this what you're looking for?
You can use assign:
new_cols = ['new_1', 'new_2', 'new_3']
values = [0, 0, 0] # could be anything, also pd.Series
df = df.assign(**dict(zip(new_cols, values)))
Try looping through the column names before creating the column:
for col in new_cols:
df[col] = 0
We can use apply to loop through a list column in the dataframe, assigning each element to a new field.
For instance, for a dataframe with a list column named keys containing values like:
[10,20,30]
In your case, since the fill value is always 0, we can assign 0 directly instead of looping. But if we have values, we can populate them as below:
...
df['new_01']=df['keys'].apply(lambda x: x[0])
df['new_02']=df['keys'].apply(lambda x: x[1])
df['new_03']=df['keys'].apply(lambda x: x[2])
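A self-contained sketch of that pattern, using a hypothetical 'keys' column of three-element lists:

```python
import pandas as pd

df = pd.DataFrame({'keys': [[10, 20, 30], [40, 50, 60]]})

# pull each list element into its own column
df['new_01'] = df['keys'].apply(lambda x: x[0])
df['new_02'] = df['keys'].apply(lambda x: x[1])
df['new_03'] = df['keys'].apply(lambda x: x[2])

print(df[['new_01', 'new_02', 'new_03']].values.tolist())
# [[10, 20, 30], [40, 50, 60]]
```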

Drop pandas dataframe rows AND columns in a batch fashion based on value

Background: I have a matrix which represents the distance between pairs of points. In this matrix both rows and columns are the data points. For example:
A B C
A 0 999 3
B 999 0 999
C 3 999 0
In this toy example let's say I want to drop C for some reason, because it is far away from any other point. So I first aggregate the count:
df["far_count"] = df[df == 999].count()
and then batch remove them:
df = df[df["far_count"] == 2]
In this example this looks a bit redundant, but please imagine that I have many data points like this (say on the order of 10Ks).
The problem with the above batch removal is that I would like to remove rows and columns at the same time (instead of just rows), and it is unclear to me how to do so elegantly. A naive way is to get a list of such data points and put it in a loop:
for item in list:
    df = df.drop(item, axis=1).drop(item, axis=0)
But I was wondering if there is a better way. (Bonus if we could skip the intermediate far_count step.)
np.random.seed([3,14159])
idx = pd.Index(list('ABCDE'))
a = np.random.randint(3, size=(5, 5))
df = pd.DataFrame(
a.T.dot(a) * (1 - np.eye(5, dtype=int)),
idx, idx)
df
A B C D E
A 0 4 2 4 2
B 4 0 1 5 2
C 2 1 0 2 6
D 4 5 2 0 3
E 2 2 6 3 0
l = ['A', 'C']
m = df.index.isin(l)
df.loc[~m, ~m]
B D E
B 0 5 2
D 5 0 3
E 2 3 0
For your specific case, because the array is symmetric you only need to check one dimension.
m = (df.values == 999).sum(0) == len(df) - 1
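Applied to the toy distance matrix from the question, this one-dimensional check might look like:

```python
import numpy as np
import pandas as pd

idx = list('ABC')
df = pd.DataFrame([[0, 999, 3], [999, 0, 999], [3, 999, 0]],
                  index=idx, columns=idx)

# a point is droppable when it is 999 away from every other point
m = (df.values == 999).sum(0) == len(df) - 1
print(df.loc[~m, ~m])
#    A  C
# A  0  3
# C  3  0
```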
In [66]: x = pd.DataFrame(np.triu(df), df.index, df.columns)
In [67]: x
Out[67]:
A B C
A 0 999 3
B 0 0 999
C 0 0 0
In [68]: mask = x.ne(999).all(1) | x.ne(999).all(0)
In [69]: df.loc[mask, mask]
Out[69]:
A C
A 0 3
C 3 0
