I have a DataFrame with values 1 and 2.
I want to add a row at the end of the DataFrame that counts the number of 1s in each column. It should be similar to
COUNTIF(A:A,1) dragged across all columns in Excel.
I tried something like df.loc['lastrow']=df.count()[1], but the result is not correct.
How can I do this, and what does this function (count()[1]) actually do?
You can append after sum:
df.append(df.eq(1).sum(), ignore_index=True)
You can just compare your dataframe to the value you are interested in (1 for example), and then perform a sum on these booleans, like:
>>> df
0 1 2 3 4
0 2 2 2 1 2
1 2 2 2 2 2
2 2 1 2 1 1
3 1 2 2 1 1
4 2 2 1 2 1
5 2 2 2 2 2
6 1 1 1 1 2
7 2 2 1 1 1
8 1 1 1 2 1
9 2 2 1 2 1
>>> (df == 1).sum()
0 3
1 3
2 5
3 5
4 6
dtype: int64
You can thus append that row, like:
>>> df.append((df == 1).sum(), ignore_index=True)
0 1 2 3 4
0 2 2 2 1 2
1 2 2 2 2 2
2 2 1 2 1 1
3 1 2 2 1 1
4 2 2 1 2 1
5 2 2 2 2 2
6 1 1 1 1 2
7 2 2 1 1 1
8 1 1 1 2 1
9 2 2 1 2 1
10 3 3 5 5 6
The last row here thus contains, for each column, the number of 1s in the previous rows. (As for the count()[1] part of your attempt: df.count() counts the non-NA values in each column, so df.count()[1] is the number of non-missing entries in column 1, not the number of 1s, which is why the result was wrong.)
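Note that DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on recent versions the same idea can be written with pd.concat. A minimal sketch of that variant, assuming the counting row should simply be stacked below the existing data:
import pandas as pd

# Count the 1s per column, turn the resulting Series into a one-row frame,
# and stack it below the original data.
counts = df.eq(1).sum().to_frame().T
out = pd.concat([df, counts], ignore_index=True)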
I have the following dataframe:
Group from to
1 2 1
1 1 2
1 3 2
1 3 1
2 1 4
2 3 1
2 1 2
2 3 1
I want to create a 4th column that counts how many times each (from, to) combination occurs within each group, and drops any repeated combination within each group (leaving only one row).
Expected output:
Group from to weight
1 2 1 1
1 1 2 1
1 3 2 1
1 3 1 1
2 1 4 1
2 3 1 2
2 1 2 1
In the expected output, the second (from=3, to=1) row in group 2 was dropped because it is a duplicate.
In your case we just need groupby with size:
out = df.groupby(df.columns.tolist()).size().to_frame(name='weight').reset_index()
Out[258]:
Group from to weight
0 1 1 2 1
1 1 2 1 1
2 1 3 1 1
3 1 3 2 1
4 2 1 2 1
5 2 1 4 1
6 2 3 1 2
You can group by the 3 columns using .groupby() and take their size by GroupBy.size(), as follows:
df_out = df.groupby(['Group', 'from', 'to'], sort=False).size().reset_index(name='weight')
Result:
print(df_out)
Group from to weight
0 1 2 1 1
1 1 1 2 1
2 1 3 2 1
3 1 3 1 1
4 2 1 4 1
5 2 3 1 2
6 2 1 2 1
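As another option, on pandas 1.1+ DataFrame.value_counts can count the combinations in a single call; a small sketch, assuming the three columns are exactly Group, from and to:
# Count occurrences of each (Group, from, to) combination;
# sort=False avoids reordering by count.
df_out = (df.value_counts(['Group', 'from', 'to'], sort=False)
            .reset_index(name='weight'))
print(df_out)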
I have a df that looks like this:
time val
0 1
1 1
2 2
3 3
4 1
5 2
6 3
7 3
8 3
9 3
10 1
11 1
How do I create new columns that count how many times a condition occurs, without incrementing while it doesn't change? In this case, I want to create a column for each unique value in val that holds the cumulative number of occurrences at the given row, but does not increment when the value is the same as in the previous row.
Expected outcome below:
time val sum_1 sum_2 sum_3
0 1 1 0 0
1 1 1 0 0
2 2 1 1 0
3 3 1 1 1
4 1 2 1 1
5 2 2 2 1
6 3 2 2 2
7 3 2 2 2
8 3 2 2 2
9 3 2 2 2
10 1 3 2 2
11 1 3 2 2
EDIT
To be more specific with the condition:
I want to count the number of times a unique value appears in val. For example, using the code below, I could get this result:
df['sum_1'] = (df['val'] == 1).cumsum()
df['sum_2'] = (df['val'] == 2).cumsum()
df['sum_3'] = (df['val'] == 3).cumsum()
time val sum_1 sum_2 sum_3
0 0 1 1 0 0
1 1 1 2 0 0
2 2 2 2 1 0
3 3 3 2 1 1
4 4 1 3 1 1
5 5 2 3 2 1
However, this code counts EVERY occurrence of the condition. I want to treat consecutive occurrences of the same value as a single group and count only the number of such consecutive groupings. In the rows shown above, 1 occurs 3 times in total, but only 2 times as a consecutive grouping.
You can mark the first value of each consecutive run by comparing the column with its shifted values using Series.ne and Series.shift, chain that mask with the equality test via & (bitwise AND), and run the code for all unique values of column val:
uniq = df['val'].unique()
m = df['val'].ne(df['val'].shift())
for c in uniq:
    df[f'sum_{c}'] = (df['val'].eq(c) & m).cumsum()
print (df)
time val sum_1 sum_2 sum_3
0 0 1 1 0 0
1 1 1 1 0 0
2 2 2 1 1 0
3 3 3 1 1 1
4 4 1 2 1 1
5 5 2 2 2 1
6 6 3 2 2 2
7 7 3 2 2 2
8 8 3 2 2 2
9 9 3 2 2 2
10 10 1 3 2 2
11 11 1 3 2 2
For better performance (I hope), here is a numpy alternative:
import numpy as np

a = df['val'].to_numpy()
uniq = np.unique(a)
# True where the value differs from the previous row, i.e. at the start of each run
m = np.concatenate(([False], a[:-1])) != a
arr = np.cumsum((a[:, None] == uniq) & m[:, None], axis=0)
df = df.join(pd.DataFrame(arr, index=df.index, columns=uniq).add_prefix('sum_'))
print (df)
time val sum_1 sum_2 sum_3
0 0 1 1 0 0
1 1 1 1 0 0
2 2 2 1 1 0
3 3 3 1 1 1
4 4 1 2 1 1
5 5 2 2 2 1
6 6 3 2 2 2
7 7 3 2 2 2
8 8 3 2 2 2
9 9 3 2 2 2
10 10 1 3 2 2
11 11 1 3 2 2
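A loop-free pandas variant is also possible: build one indicator column per unique value with get_dummies, keep the indicator only on the first row of each consecutive run, and accumulate. A sketch under the same assumptions as above:
import pandas as pd

# True only on the first row of each consecutive run of equal values
m = df['val'].ne(df['val'].shift())

# 0/1 indicator per unique value, zeroed where the run merely continues,
# then accumulated down the rows.
counts = (pd.get_dummies(df['val'], dtype=int)
            .mul(m, axis=0)
            .cumsum()
            .add_prefix('sum_'))
df = df.join(counts)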
I would like to combine two columns in a new column.
Let's suppose I have:
Index A B
0 1 0
1 1 0
2 1 0
3 1 0
4 1 0
5 1 2
6 1 2
7 1 2
8 1 2
9 1 2
10 1 2
Now I would like to create a column C with the entries from column A for Index 0 to 4 and from column B for Index 5 to 10. It should look like this:
Index A B C
0 1 0 1
1 1 0 1
2 1 0 1
3 1 0 1
4 1 0 1
5 1 2 2
6 1 2 2
7 1 2 2
8 1 2 2
9 1 2 2
10 1 2 2
Is there Python code that does this? Thanks in advance!
If Index is an actual column, you can use numpy.where and specify your condition:
import numpy as np
df['C'] = np.where(df['Index'] <= 4, df['A'], df['B'])
Index A B C
0 0 1 0 1
1 1 1 0 1
2 2 1 0 1
3 3 1 0 1
4 4 1 0 1
5 5 1 2 2
6 6 1 2 2
7 7 1 2 2
8 8 1 2 2
9 9 1 2 2
10 10 1 2 2
If Index is your actual index, you can slice it with iloc and create the column with concat:
df['C'] = pd.concat([df['A'].iloc[:5], df['B'].iloc[5:]])
print(df)
A B C
0 1 0 1
1 1 0 1
2 1 0 1
3 1 0 1
4 1 0 1
5 1 2 2
6 1 2 2
7 1 2 2
8 1 2 2
9 1 2 2
10 1 2 2
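A pandas-only equivalent of the positional choice is Series.where, which keeps a value where a condition holds and falls back to another column otherwise; a small sketch, assuming the default RangeIndex shown in the example:
# Take A for index values 0..4, otherwise take B
df['C'] = df['A'].where(df.index <= 4, df['B'])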
I want to create a DataFrame that has the columns feature1, month and feature_segment. I have over 3,000 unique values in feature1 and 3 feature_segments, and I now have to map each feature to every month and feature_segment.
For example, for feature1 = 1 the mapping should create a data frame like this:
feature1 month feature_Segment
1 1 1
1 1 2
1 1 3
1 2 1
1 2 2
1 2 3
1 3 1
1 3 2
1 3 3
1 4 1
1 4 2
1 4 3
1 5 1
1 5 2
1 5 3
1 6 1
1 6 2
1 6 3
1 7 1
1 7 2
1 7 3
1 8 1
1 8 2
1 8 3
1 9 1
1 9 2
1 9 3
1 10 1
1 10 2
1 10 3
1 11 1
1 11 2
1 11 3
1 12 1
1 12 2
1 12 3
Now is there any way to create this data frame without using a for loop?
All the df columns are in lists.
Use itertools.product:
from itertools import product
feature = [1]
feature_Segment = [1,2,3]
month = range(1, 13)
df = pd.DataFrame(product(feature, month, feature_Segment),
                  columns=['feature1','month','feature_Segment'])
print (df.head(10))
feature1 month feature_Segment
0 1 1 1
1 1 1 2
2 1 1 3
3 1 2 1
4 1 2 2
5 1 2 3
6 1 3 1
7 1 3 2
8 1 3 3
9 1 4 1
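If you prefer to stay inside pandas, pd.MultiIndex.from_product builds the same cartesian product directly; a sketch reusing the same lists:
df = (pd.MultiIndex
        .from_product([feature, month, feature_Segment],
                      names=['feature1', 'month', 'feature_Segment'])
        .to_frame(index=False))
print(df.head(10))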
I have a pandas data frame and group it by two columns (for example col1 and col2). For fixed values of col1 and col2 (i.e. within a group) there can be several different values in col3. I would like to count the number of distinct values in the third column for each group.
For example, If I have this as my input:
1 1 1
1 1 1
1 1 2
1 2 3
1 2 3
1 2 3
2 1 1
2 1 2
2 1 3
2 2 3
2 2 3
2 2 3
I would like to have this table (data frame) as the output:
1 1 2
1 2 1
2 1 3
2 2 1
df.groupby(['col1','col2'])['col3'].nunique().reset_index()
In [17]: df
Out[17]:
0 1 2
0 1 1 1
1 1 1 1
2 1 1 2
3 1 2 3
4 1 2 3
5 1 2 3
6 2 1 1
7 2 1 2
8 2 1 3
9 2 2 3
10 2 2 3
11 2 2 3
In [19]: df.groupby([0,1])[2].apply(lambda x: len(x.unique()))
Out[19]:
0  1
1  1    2
   2    1
2  1    3
   2    1
dtype: int64
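Note that len(x.unique()) can be replaced by the built-in nunique aggregation used in the first answer; to get the result as a regular data frame with the default integer column labels from this example, a sketch could look like this (the result column name n_distinct is just illustrative):
out = df.groupby([0, 1])[2].nunique().reset_index(name='n_distinct')
print(out)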