Identify increasing features in a data frame - python

I have a data frame in which some features hold cumulative values. I need to identify those features in order to revert the cumulative values.
This is how my dataset looks (plus about 50 variables):
a b
346 17
76 52
459 70
680 96
679 167
246 180
What I wish to achieve is:
a b
346 17
76 35
459 18
680 26
679 71
246 13
I've seen this answer, but it first reverts the values and then tries to identify the columns. Can't I do it the other way around: first identify the features and then revert the values?
Finding cumulative features in dataframe?
What I do at the moment is run the following code to get the names of the features with cumulative values:
def accmulate_col(value):
    count = 0
    count_1 = False
    name = []
    for i in range(len(value)-1):
        if value[i+1] - value[i] >= 0:
            count += 1
        if value[i+1] - value[i] > 0:
            count_1 = True
    name.append(1) if count == len(value)-1 and count_1 else name.append(0)
    return name
df.apply(accmulate_col)
Afterwards, I manually save these feature names in a list called cum_features and revert the values, creating the desired dataset:
import numpy as np

df_clean = df.copy()
df_clean[cum_features] = df_clean[cum_features].apply(lambda col: np.diff(col, prepend=0))
Is there a better way to solve my problem?

To identify which columns have increasing* values throughout the whole column, you will need to apply conditions on all the values. So in that sense, you have to use the values first to figure out which columns fit the conditions.
With that out of the way, given a dataframe such as:
import pandas as pd

d = {'a': [1, 2, 3, 4],
     'b': [4, 3, 2, 1]}
df = pd.DataFrame(d)

#Output:
   a  b
0  1  4
1  2  3
2  3  2
3  4  1
Figuring out which columns contain increasing values is just a question of using diff on all values in the dataframe, and checking which ones are increasing throughout the whole column.
That can be written as:
out = (df.diff().dropna()>0).all()
#Output:
a True
b False
dtype: bool
Then, you can just use the boolean result to select only the columns marked True:
new_df = df[df.columns[out]]
#Output:
a
0 1
1 2
2 3
3 4
*(The term cumulative doesn't really match the conditions you used. Did you want cumulative or just increasing? Cumulative implies that the value in a particular row/index is the sum of all previous values up to that index, while increasing just means the value in the current row/index is greater than the previous one.)
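Putting the detection and the reversal together for the question's goal, here is a minimal sketch; it checks for non-decreasing columns with >= 0 to mirror the question's own function (rather than the strict > 0 used above) and reverts them with np.diff as in the question:
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [346, 76, 459, 680, 679, 246],
                   'b': [17, 52, 70, 96, 167, 180]})

# Columns whose values never decrease from one row to the next
is_cumulative = (df.diff().dropna() >= 0).all()
cum_features = df.columns[is_cumulative]

# Revert those columns back to per-row values
df_clean = df.copy()
df_clean[cum_features] = df_clean[cum_features].apply(lambda col: np.diff(col, prepend=0))
print(df_clean)   # column 'b' becomes 17, 35, 18, 26, 71, 13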

Related

How to call a column by combining a string and another variable in a python dataframe?

Imagine I have a dataframe with these variables and values:
ID  Weight  LR Weight  UR Weight  Age  LS Age  US Age  Height  LS Height  US Height
 1      63         50         80   20      18      21     165        160        175
 2      75         50         80   22      18      21     172        160        170
 3      49         45         80   17      18      21     180        160        180
I want to create the following additional variables:
ID  Flag_Weight  Flag_Age  Flag_Height
 1            1         1            1
 2            1         0            0
 3            1         0            1
These flags indicate whether the main variable values (e.g. Weight, Age and Height) lie between the corresponding lower and upper limits. The limit columns may start with different two-letter prefixes (in this dataframe I gave four examples: LR, UR, LS, US, but in my real dataframe I have more), and the limit values sometimes differ from ID to ID.
Can you help me create these flags, please?
Thank you in advance.
You can use reshaping using a temporary MultiIndex:
(df.set_index('ID')
   .pipe(lambda d: d.set_axis(pd.MultiIndex.from_frame(
             d.columns.str.extract(r'(^[LU]?).*?\s*(\S+)$')),
         axis=1))
   .stack()
   .assign(flag=lambda d: d[''].between(d['L'], d['U']).astype(int))
   ['flag'].unstack().add_prefix('Flag_').reset_index()
)
Output:
   ID  Flag_Age  Flag_Height  Flag_Weight
0   1         1            1            1
1   2         0            0            1
2   3         0            1            1
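If the regular expression is hard to read, here is a small illustrative sketch (using a subset of the question's column names) of what str.extract produces before it becomes the temporary MultiIndex:
import pandas as pd

cols = pd.Index(['Weight', 'LR Weight', 'UR Weight', 'Age', 'LS Age', 'US Age'])
# Group 1: the leading L/U of the prefix (empty for the measured value itself);
# group 2: the variable name at the end of the label.
print(cols.str.extract(r'(^[LU]?).*?\s*(\S+)$'))
#    0       1
# 0     Weight
# 1  L  Weight
# 2  U  Weight
# 3        Age
# 4  L     Age
# 5  U     Age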
So, if I understood correctly, you want to add columns with these new variables. The simplest solution to this would be df.insert().
You could make it something like this:
df.insert(position at which to insert the new column, name of the new column, values of the new column)
You can build the new values in pretty much every way you can imagine: simply copying a column, applying basic mathematical operations like +, -, *, /, or even applying a whole function that returns the flags based on your conditions as the values of the new column.
If the new columns can simply be appended, you can also just create a new column like this:
df['new column name'] = any values you want
I hope this helped.
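For example, a minimal sketch of that idea for the Weight flag alone, using the column names from the question (the other flags would follow the same pattern):
import pandas as pd

df = pd.DataFrame({'ID': [1, 2, 3],
                   'Weight': [63, 75, 49],
                   'LR Weight': [50, 50, 45],
                   'UR Weight': [80, 80, 80]})

# Direct assignment appends the flag column at the end...
df['Flag_Weight'] = df['Weight'].between(df['LR Weight'], df['UR Weight']).astype(int)

# ...while insert places a column at a chosen position (here right after 'ID')
df.insert(1, 'Flag_Weight_copy', df['Flag_Weight'])
print(df)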

Conditionally dropping columns in a pandas dataframe

I have this dataframe and my goal is to remove any columns that have less than 1000 entries.
Prior to pivoting the df I know I have 880 unique well_id's with entries ranging from 4 to 60k+, and I know I should end up with 102 well_id's.
I tried to accomplish this in a very naïve way by collecting the wells I want to remove in an array and deleting them in a loop, but I keep getting a 'TypeError: Level type mismatch'. When I just use del without a for loop, it works.
# this works
del df[164301.0]
del df['TB-0071']

# this doesn't work
for id in unwanted_id:
    del df[id]
Any help is appreciated, Thanks.
You can use the dropna method:
df.dropna(axis=1, thresh=1000)  # keep only columns that have at least 1000 non-NA values
The advantage of this method is that you don't need to create a list.
Also don't forget to add the usual inplace = True if you want the changes to be made in place.
You can use pandas drop method:
df.drop(columns=['colName'], inplace=True)
You can actually pass a list of column names:
unwanted_ids = [164301.0, 'TB-0071']
df.drop(columns=unwanted_ids, inplace=True)
Sample:
df[:5]

  from to  freq
0    A  X    20
1    B  Z     9
2    A  Y     2
3    A  Z     5
4    A  X     8

df.drop(columns=['from', 'to'])

   freq
0    20
1     9
2     2
3     5
4     8
And to get those column names with more than 1000 unique values, you can use something like this:
counts = df.nunique()[df.nunique()>1000].to_frame('uCounts').reset_index().rename(columns={'index':'colName'})
counts
colName uCounts
0 to 1001
1 freq 1050
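If the goal is exactly the question's criterion (keep only columns with at least 1000 entries), a one-line sketch that combines the counting and the drop, assuming "entries" means non-null values:
# Drop every column that has fewer than 1000 non-null entries
df = df.drop(columns=df.columns[df.count() < 1000])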

keep a random lowest value per row in a Python Pandas dataset

I have a dataframe where every line is ranked on several attributes against all the other rows. A single line can have the same rank in two attributes (meaning a row can be the best in a few attributes), as shown in rows 2 and 3 below:
      att_1  att_2  att_3  att_4
ID
984       5      3      1     46
794       1      1     99     34
6471     20      2      3      2
Per line, I want to keep the index (ID) and the name of the column with the lowest value. In case more than one cell shares the lowest value, I have to select one of them at random to keep a normal distribution.
I managed to convert the df into a numpy array and run the following:
idx = np.argmin(h_data.values, axis=1)
But when there are tied minimum values I always get the first column.
Desired output:
ID MIN
984 att_3
794 att_2
6471 att_1
Thank you!
Use list comprehension with numpy.random.choice:
df['MIN'] = [np.random.choice(df.columns[x == x.min()], 1)[0] for x in df.values]
print (df)
att_1 att_2 att_3 att_4 MIN
ID
984 5 3 1 46 att_3
794 1 1 99 34 att_1
6471 20 2 3 2 att_2
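Since np.random.choice picks uniformly among the tied minima, reruns can give different results; if you need reproducible output (an assumption beyond what the question asks for), seed NumPy before building the MIN column:
att_cols = ['att_1', 'att_2', 'att_3', 'att_4']

np.random.seed(42)  # any fixed seed makes the random tie-breaking repeatable
df['MIN'] = [np.random.choice(df[att_cols].columns[x == x.min()], 1)[0]
             for x in df[att_cols].values]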
If you want to do something for each row (or column), you should try the .apply method:
df.apply(np.argmin, axis=1)  # row-wise
df.apply(np.argmin, axis=0)  # column-wise
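Note that np.argmin (and its pandas counterpart idxmin) always reports the first minimum when there are ties, so this variant does not give the random selection the question asks for; the label-based pandas equivalent would be something like:
# Label of the first (not random) minimum in each row
df[['att_1', 'att_2', 'att_3', 'att_4']].idxmin(axis=1)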

Function within a function involving each column of a DataFrame in Python

As the question states, I'm trying to learn how to run a function on each element belonging to a column within a DataFrame without having to define that column directly. The point is that I would like to be able to enter any given set of DataFrame and find each element within each column that fulfills a particular condition.
The sample I've included illustrates what I'm trying to do. I know the code below doesn't work; I thought that writing def fun(dataframe[column]) would do the trick, but that syntax is invalid, unfortunately.
Basically, the reason for this is that I have multiple sets of data where I'd like to locate each element that is above a set threshold.
Thanks a lot in advance!
df = pd.DataFrame(np.random.randint(0, 100, size=(3, 3)), columns=list('ABC'))

def fun(dataframe):
    for column in dataframe:
        def fun(column):
            mean = sum(column)/len(column)
            print(mean)
            for element in column:
                if element < mean*1.1:
                    element = 0
                    print(element)

fun(df)
As #MadPhysicist mentioned in a comment, pandas was created to reduce the need for explicit for-looping.
If I understand your specific case correctly, you intend to replace with zero any element that is less than 1.1 times the mean value of its column. Here's one way to do that in idiomatic pandas:
# Set a random seed for repeatability
np.random.seed(314159)

# Create example data
df = pd.DataFrame(np.random.randint(0, 100, size=(3, 3)), columns=list('ABC'))
df

    A   B   C
0  11  34  93
1  79   0  81
2  66  43  71

# By default, df.mean() computes the mean of each numeric column (not row)
df.mean()

A    52.000000
B    25.666667
C    81.666667
dtype: float64

# We can use boolean indexing to replace values less than
# 1.1 * column mean with zero
# docs: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
df[df < 1.1 * df.mean()] = 0
df

    A   B   C
0   0  34  93
1  79   0   0
2  66  43   0
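Since the question wants this to work on any given DataFrame, here is a minimal sketch that wraps the same boolean-indexing idea in a reusable function (the function name and the factor argument are illustrative, not from the original post):
import numpy as np
import pandas as pd

def zero_below_threshold(frame, factor=1.1):
    """Return a copy of frame with values below factor * their column mean set to 0."""
    out = frame.copy()
    out[out < factor * out.mean()] = 0
    return out

df = pd.DataFrame(np.random.randint(0, 100, size=(3, 3)), columns=list('ABC'))
print(zero_below_threshold(df))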

Merge subgroup into adjacent subgroup after groupby

If we run the following code
import numpy as np
import pandas as pd

np.random.seed(0)
features = ['f1', 'f2', 'f3']
df = pd.DataFrame(np.random.rand(5000, 4), columns=features + ['target'])
for f in features:
    df[f] = np.digitize(df[f], bins=[0.13, 0.66])
df['target'] = np.digitize(df['target'], bins=[0.5]).astype(float)
df.groupby(features)['target'].agg(['mean', 'count']).head(9)
We get average values for each grouping of the feature set:
                 mean  count
f1 f2 f3
0  0  0      0.571429      7
      1      0.414634     41
      2      0.428571     28
   1  0      0.490909     55
      1      0.467337    199
      2      0.486726    113
   2  0      0.518519     27
      1      0.446281    121
      2      0.541667     72
In the table above, some of the groups have too few observations and I want to merge them into an 'adjacent' group by some rule. For example, I may want to merge group [0,0,0] into group [0,0,1] since it has no more than 30 observations. Is there a good way of performing such group combinations based on column values without creating a separate dictionary? More specifically, I may want to merge the smallest-count group into its adjacent group (the next group in index order) until the total number of groups is no more than 10.
A simple way to do it is with a for loop over the indexes meeting your condition:
df_group = df.groupby(features)['target'].agg(['mean', 'count'])

# First reset_index to get easier manipulation
df_group = df_group.reset_index()
list_indexes = df_group[df_group['count'] <= 58].index.values  # put any threshold you want

# loop over list_indexes
for ind in list_indexes:
    # check the condition again, in case merging a row at a previous
    # iteration has already increased the count above your criteria
    if df_group.loc[ind, 'count'] <= 58:
        # add the count values to the next row
        df_group.loc[ind+1, 'count'] = df_group.loc[ind+1, 'count'] + df_group.loc[ind, 'count']
        # do anything you want with mean here
        # drop the merged row
        df_group = df_group.drop(axis=0, index=ind)

# Re-index your df
df_group = df_group.set_index(features)
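The question also mentions repeatedly merging the smallest group into its neighbour until at most 10 groups remain; a rough sketch of that variant (it only maintains the count column and leaves the mean of the surviving row untouched, which is an assumption about what "merge" should do):
df_group = df.groupby(features)['target'].agg(['mean', 'count']).reset_index()

# Merge the smallest-count group into the next group (by index order)
# until no more than 10 groups remain.
while len(df_group) > 10:
    ind = df_group['count'].idxmin()
    # use the next row if it exists, otherwise fall back to the previous one
    neighbour = ind + 1 if ind + 1 in df_group.index else ind - 1
    df_group.loc[neighbour, 'count'] += df_group.loc[ind, 'count']
    df_group = df_group.drop(index=ind).reset_index(drop=True)

df_group = df_group.set_index(features)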
