Summing columns according to a pattern in column names - Python

Let's start with a very simplified, abstract example. I have a dataframe like this:
import pandas as pd
d = {'1-A': [1, 2], '1-B': [3, 4], '2-A': [3, 4], '5-B': [2, 7]}
df = pd.DataFrame(data=d)
   1-A  1-B  2-A  5-B
0    1    3    3    2
1    2    4    4    7
I'm looking for an elegant, pandastic solution to get a dataframe like this:
   1  2  5
0  4  3  2
1  6  4  7
To make the example more concrete: column 1-A means person id=1, expense category A. Rows are monthly expenses. In the result, I want monthly expenses per person summed across categories (so column 1 is the sum of columns 1-A and 1-B). Note that when there are no expenses, there is no column of 0s. Of course, it should work for more columns (more ids and categories).
I'm quite sure a smart solution with a clean separation of column selection and the summing operation exists for this.

Use groupby with a lambda function that splits each column name and selects the first value; for grouping by columns, add axis=1:
df1 = df.groupby(lambda x: x.split('-')[0], axis=1).sum()
#alternative
#df1 = df.groupby(df.columns.str.split('-').str[0], axis=1).sum()
print (df1)
   1  2  5
0  4  3  2
1  6  4  7
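If you are on a recent pandas (2.x), where groupby(..., axis=1) is deprecated, the same grouping can be done by transposing first; a minimal sketch:

```python
import pandas as pd

d = {'1-A': [1, 2], '1-B': [3, 4], '2-A': [3, 4], '5-B': [2, 7]}
df = pd.DataFrame(data=d)

# Transpose, group the (now row) labels on the id prefix, sum, transpose back.
df1 = df.T.groupby(lambda x: x.split('-')[0]).sum().T
print(df1)
```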

Related

How to add value of dataframe to another dataframe?

I want to add a row of one dataframe to every row of another dataframe.
df1 = pd.DataFrame({"a": [1, 2],
                    "b": [3, 4]})
df2 = pd.DataFrame({"a": [4], "b": [5]})
I want to add the df2 values to every row of df1.
I use df1 + df2 and get the following result:
     a    b
0  5.0  8.0
1  NaN  NaN
But I want to get the following result:
   a  b
0  5  7
1  7  9
Any help would be dearly appreciated!
If you really need to add the values per column (meaning the number of columns in df2 equals the number of rows in df1), use:
df = df1.add(df2.loc[0].to_numpy(), axis=0)
print (df)
   a  b
0  5  7
1  7  9
If you need to add by rows instead, aligning df2's row with df1's columns, the output is different:
df = df1.add(df2.loc[0], axis=1)
print (df)
   a  b
0  5  8
1  6  9
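The two orientations side by side, as one runnable sketch:

```python
import pandas as pd

df1 = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
df2 = pd.DataFrame({"a": [4], "b": [5]})

# Broadcast df2's values along the index: row 0 of df1 gets +4, row 1 gets +5.
per_index = df1.add(df2.loc[0].to_numpy(), axis=0)

# Broadcast df2's row along the columns: column 'a' gets +4, column 'b' gets +5.
per_column = df1.add(df2.loc[0], axis=1)

print(per_index)
print(per_column)
```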

Add multiple columns at once to multiindex Pandas dataframe

I have a multiindex pandas dataframe df:
First    Foo   Bar
Second Begin Begin
1          5     1
2          4     4
3          6     6
And I want to add two columns with the same second-level name, 'End':
First    Foo       Bar
Second Begin End Begin End
1          5   1     1   2
2          4   5     4   4
3          6   7     6   7
From this source (new):
First  Foo  Bar
1        1    2
2        5    4
3        7    7
I tried things like df[:] = new[:] but this returned only NaN
An alternative would be to use something like a for-loop but that's not the Pandas approach. Searching the web did not give me any insights as to solving this problem.
How can I add new columns with the same name and shape to every first level of a multiindex Pandas dataframe?
Edit:
This approach, df[('Foo', 'End')] = new['Foo'] followed by df[('Bar', 'End')] = new['Bar'], is not an option because in my actual problem there are not two columns to be added, but hundreds of columns.
Multi-level column names are passed as tuples, like df[('Foo', 'End')].
import pandas as pd
# test data
col = pd.MultiIndex.from_arrays([['Foo', 'Bar'], ['Begin', 'Begin']], names=['First', 'Second'])
df = pd.DataFrame([[5, 1], [4, 4], [6, 6]], columns=col)
new = pd.DataFrame({'Foo': [1, 5, 7], 'Bar': [2, 4, 7]})
# write new columns
df[('Foo', 'End')] = new['Foo']
df[('Bar', 'End')] = new['Bar']
# display(df)
First    Foo   Bar Foo Bar
Second Begin Begin End End
0          5     1   1   2
1          4     4   5   4
2          6     6   7   7
For many columns, col (the column name in new) must correspond to the top-level column name in df:
for col in new.columns:
    df[(col, 'new col name')] = new[col]
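If the loop over hundreds of columns is a concern, a vectorized sketch: pd.concat with a dict key creates the 'End' level in one shot (the level name 'End' comes from the example above), then swaplevel restores the original level order.

```python
import pandas as pd

# Rebuild the example data from the answer above.
col = pd.MultiIndex.from_arrays([['Foo', 'Bar'], ['Begin', 'Begin']],
                                names=['First', 'Second'])
df = pd.DataFrame([[5, 1], [4, 4], [6, 6]], columns=col)
new = pd.DataFrame({'Foo': [1, 5, 7], 'Bar': [2, 4, 7]})

# pd.concat with a dict key adds 'End' as an outer column level;
# swaplevel moves the original names back to the top level.
end = pd.concat({'End': new}, axis=1).swaplevel(0, 1, axis=1)
out = pd.concat([df, end], axis=1).sort_index(axis=1)
print(out)
```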

Python DataFrame count how many different elements

I need to count how many different elements are in my DataFrame (df).
My df has the day of the month (as a number: 1, 2, 3, ..., 31) on which a certain variable was measured. Three columns describe the day number. There are multiple measurements per day, so my columns have repeated values. I need to know how many days in a month the variable was measured, ignoring how many measurements were done on each day. So I was thinking of counting the days while ignoring repeated values.
As an example the data of my df would look like this:
col1  col2  col3
   2     2     2
   2     2     3
   3     3     3
   3     4     8
I need an output that tells me that in that DataFrame the numbers are 2, 3, 4 and 8.
Thanks!
Just do:
df=pd.DataFrame({"col1": [2,2,3,3], "col2": [2,2,3,4], "col3": [2,3,3,8]})
df.stack().unique()
Outputs:
[2 3 4 8]
You can use the drop_duplicates function on your dataframe, like:
import pandas as pd
df = pd.DataFrame({'a':[2,2,3], 'b':[2,2,3], 'c':[2,2,3]})
   a  b  c
0  2  2  2
1  2  2  2
2  3  3  3
df = df.drop_duplicates()
print(df['a'].count())
out: 2
Or you can use numpy to get the unique values in the dataframe:
import pandas as pd
import numpy as np
df = pd.DataFrame({'X' : [2, 2, 3, 3], 'Y' : [2,2,3,4], 'Z' : [2,3,3,8]})
df_unique = np.unique(np.array(df))
print(df_unique)
#Output [2 3 4 8]
#for the count of days:
print(len(df_unique))
#Output 4
How about:
Assuming this is your initial df:
   col1  col2  col3
0     2     2     2
1     2     2     2
2     3     3     3
Then:
count_df = pd.DataFrame()
for i in df.columns:
    df2 = df[i].value_counts()
    count_df = pd.concat([count_df, df2], axis=1)
final_df = count_df.sum(axis=1)
final_df = pd.DataFrame(data=final_df, columns=['Occurrences'])
print(final_df)
   Occurrences
2            6
3            3
You can use pandas.unique() like so:
pd.unique(df.to_numpy().flatten())
I have done some basic benchmarking, this method appears to be the fastest.
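For reference, the three approaches above agree on the example data; a combined sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"col1": [2, 2, 3, 3],
                   "col2": [2, 2, 3, 4],
                   "col3": [2, 3, 3, 8]})

# Three ways to get the distinct values across all columns:
a = df.stack().unique()                  # via a reshaped Series
b = np.unique(df.to_numpy())             # sorted by numpy
c = pd.unique(df.to_numpy().flatten())   # order of first appearance

print(sorted(a), list(b), sorted(c))
```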

Get count of counts of unique values in pandas dataframe

I'm trying to get the counts of the counts of unique values for a column in a pandas dataframe.
Sample data below:
In [3]: df = pd.DataFrame([[1, 1], [2, 1], [3, 2], [4, 3], [5, 1]], columns=['AppointmentId', 'PatientId'])
In [4]: df
Out[4]:
   AppointmentId  PatientId
0              1          1
1              2          1
2              3          2
3              4          3
4              5          1
The actual dataset has over 50,000 unique values of PatientId. I want to visualize appointment counts per patient, but simply grouping by PatientId and getting group sizes doesn't work well for plotting, because that would be 50,000 bars.
For that reason, I'm trying to plot how many patients had a specific number of appointments, instead of plotting the number of appointments against PatientId.
Based on sample data above I want to get something like this:
   AppointmentCount  PatientCount
0                 1             2
1                 3             1
I approach this by first grouping on PatientId and getting the group sizes, but I can't find a way to count those sizes after grouping.
In [24]: appointment_counts = df.groupby('PatientId').size()
In [25]: appointment_counts
Out[25]:
PatientId
1    3
2    1
3    1
dtype: int64
In [26]: type(appointment_counts)
Out[26]: pandas.core.series.Series
After your groupby, add value_counts:
df.groupby('PatientId').size().value_counts()
Out[877]:
1    2
3    1
dtype: int64
Then you can add rename:
df.groupby('PatientId').size().value_counts().reset_index().rename(columns={'index':'Aid',0:'Pid'})
Out[883]:
   Aid  Pid
0    1    2
1    3    1
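Putting that together as one runnable sketch, with explicit column names (rename_axis avoids hard-coding the 'index'/0 labels that reset_index would otherwise produce):

```python
import pandas as pd

df = pd.DataFrame([[1, 1], [2, 1], [3, 2], [4, 3], [5, 1]],
                  columns=['AppointmentId', 'PatientId'])

counts = (df.groupby('PatientId').size()      # appointments per patient
            .value_counts()                   # patients per appointment count
            .rename_axis('AppointmentCount')  # name the index explicitly
            .reset_index(name='PatientCount'))
print(counts)
```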

How to merge a Series and DataFrame

If you came here looking for information on how to
merge a DataFrame and Series on the index, please look at this
answer.
The OP's original intention was to ask how to assign series elements
as columns to another DataFrame. If you are interested in knowing the
answer to this, look at the accepted answer by EdChum.
Best I can come up with is
df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]}) # see EDIT below
s = pd.Series({'s1': 5, 's2': 6})
for name in s.index:
    df[name] = s[name]

   a  b  s1  s2
0  1  3   5   6
1  2  4   5   6
Can anybody suggest better syntax / faster method?
My attempts:
df.merge(s)
AttributeError: 'Series' object has no attribute 'columns'
and
df.join(s)
ValueError: Other Series must have a name
EDIT The first two answers posted highlighted a problem with my question, so please use the following to construct df:
df = pd.DataFrame({'a':[np.nan, 2, 3], 'b':[4, 5, 6]}, index=[3, 5, 6])
with the final result
     a  b  s1  s2
3  NaN  4   5   6
5  2.0  5   5   6
6  3.0  6   5   6
Update
From v0.24.0 onwards, you can merge a DataFrame and a Series, as long as the Series is named.
df.merge(s.rename('new'), left_index=True, right_index=True)
# If series is already named,
# df.merge(s, left_index=True, right_index=True)
Nowadays, you can simply convert the Series to a DataFrame with to_frame(). So (if joining on index):
df.merge(s.to_frame(), left_index=True, right_index=True)
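Not from the answers above, but worth noting as a sketch: since the goal here is to broadcast each series element as a constant column, DataFrame.assign can unpack the series directly, assuming the series labels are valid column names:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [np.nan, 2, 3], 'b': [4, 5, 6]}, index=[3, 5, 6])
s = pd.Series({'s1': 5, 's2': 6})

# **s unpacks the series as keyword arguments; scalars broadcast to all rows.
out = df.assign(**s)
print(out)
```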
You could construct a dataframe from the series and then merge with the dataframe.
So you specify the data as the series values repeated len(s) times, set the columns to the series index, and set the left_index and right_index params to True:
In [27]:
df.merge(pd.DataFrame(data = [s.values] * len(s), columns = s.index), left_index=True, right_index=True)
Out[27]:
   a  b  s1  s2
0  1  3   5   6
1  2  4   5   6
EDIT: for the situation where you want the df constructed from the series to use the index of df, you can do the following:
df.merge(pd.DataFrame(data = [s.values] * len(df), columns = s.index, index=df.index), left_index=True, right_index=True)
This assumes the index length matches.
Here's one way:
df.join(pd.DataFrame(s).T).fillna(method='ffill')
To break down what happens here...
pd.DataFrame(s).T creates a one-row DataFrame from s which looks like this:
   s1  s2
0   5   6
Next, join concatenates this new frame with df:
   a  b   s1   s2
0  1  3  5.0  6.0
1  2  4  NaN  NaN
Lastly, the NaN values at index 1 are filled with the previous values in the column using fillna with the forward-fill (ffill) argument:
   a  b   s1   s2
0  1  3  5.0  6.0
1  2  4  5.0  6.0
To avoid using fillna, it's possible to use pd.concat to repeat the rows of the DataFrame constructed from s. In this case, the general solution is:
df.join(pd.concat([pd.DataFrame(s).T] * len(df), ignore_index=True))
Here's another solution to address the indexing challenge posed in the edited question:
df.join(pd.DataFrame(s.repeat(len(df)).values.reshape((len(df), -1), order='F'),
                     columns=s.index,
                     index=df.index))
s is transformed into a DataFrame by repeating the values and reshaping (specifying 'Fortran' order), and also passing in the appropriate column names and index. This new DataFrame is then joined to df.
Nowadays, a much simpler and more concise solution can achieve the same task. Leveraging the capability of DataFrame.apply() to turn a returned Series into columns of its belonging DataFrame, we can use:
df.join(df.apply(lambda x: s, axis=1))
Result:
     a  b  s1  s2
3  NaN  4   5   6
5  2.0  5   5   6
6  3.0  6   5   6
Here, we used DataFrame.apply() with a simple lambda function as the applied function on axis=1. The lambda simply returns the Series s:
df.apply(lambda x: s, axis=1)
Result:
   s1  s2
3   5   6
5   5   6
6   5   6
The result has already inherited the row index of the original DataFrame df. Consequently, we can simply join df with this interim result by DataFrame.join() to get the desired final result (since they have the same row index).
This capability of DataFrame.apply() to turn a Series into columns of its belonging DataFrame is well documented in the official documentation as follows:
By default (result_type=None), the final return type is inferred from
the return type of the applied function.
The default behaviour (result_type=None) depends on the return value of the
applied function: list-like results will be returned as a Series of
those. However if the apply function returns a Series these are
expanded to columns.
The official document also includes example of such usage:
Returning a Series inside the function is similar to passing
result_type='expand'. The resulting column names will be the Series
index.
df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1)
   foo  bar
0    1    2
1    1    2
2    1    2
If I could suggest setting up your dataframes like this (auto-indexing):
df = pd.DataFrame({'a':[np.nan, 1, 2], 'b':[4, 5, 6]})
then you can set up your s1 and s2 values thus (using df.shape[0] to get the number of rows of df):
s = pd.DataFrame({'s1':[5]*df.shape[0], 's2':[6]*df.shape[0]})
then the result you want is easy:
display (df.merge(s, left_index=True, right_index=True))
Alternatively, just add the new values to your dataframe df:
df = pd.DataFrame({'a': [np.nan, 1, 2], 'b': [4, 5, 6]})
df['s1']=5
df['s2']=6
display(df)
Both return:
     a  b  s1  s2
0  NaN  4   5   6
1  1.0  5   5   6
2  2.0  6   5   6
If you have another list of data (instead of just a single value to apply), and you know it is in the same sequence as df, eg:
s1=['a','b','c']
then you can attach this in the same way:
df['s1']=s1
returns:
     a  b s1
0  NaN  4  a
1  1.0  5  b
2  2.0  6  c
You can easily set a pandas.DataFrame column to a constant. This constant can be an int, as in your example. If the column you specify isn't in df, pandas will create a new column with the name you specify. So after your dataframe is constructed (from your question):
df = pd.DataFrame({'a':[np.nan, 2, 3], 'b':[4, 5, 6]}, index=[3, 5, 6])
You can just run:
df['s1'], df['s2'] = 5, 6
You could write a loop or comprehension to do this for all the elements of a list of tuples, or the keys and values of a dictionary, depending on how your real data is stored.
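As a sketch of that loop over a dictionary-like series, using Series.items():

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [np.nan, 2, 3], 'b': [4, 5, 6]}, index=[3, 5, 6])
s = pd.Series({'s1': 5, 's2': 6})

# Assign each (label, value) pair as a constant column.
for name, value in s.items():
    df[name] = value
print(df)
```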
If df is a pandas.DataFrame, then df['new_col'] = list_object will add the list or Series list_object (of length len(df)) as a column named 'new_col'. df['new_col'] = scalar (such as 5 or 6 in your case) also works and is equivalent to df['new_col'] = [scalar] * len(df).
So a two-line code serves the purpose:
df = pd.DataFrame({'a':[1, 2], 'b':[3, 4]})
s = pd.Series({'s1':5, 's2':6})
for x in s.index:
    df[x] = s[x]
Output:
   a  b  s1  s2
0  1  3   5   6
1  2  4   5   6
