Combine 2 DataFrames by column names - Python

Hello!
I have loaded a few datasets; the only thing they have in common is that they have the same column names, BUT the number of columns/rows and the data are different, so it looks like I cannot use merge or concat because there is nothing in common (no shared ID, for example). I want to put each df on top of the other and leave the "extra" columns with NaN values.
df1:
| Column A | Column B |
| -------- | -------- |
| ID 1 | Cell 2 |
| ID 2 | Cell 4 |
df2:
| Column A | Column B | ColumnC |
| -------- | -------- | ------- |
| ID 3 | Cell 2 | info |
| ID 4 | Cell 4 | info |
I want something like this:
df:
| Column A | Column B | ColumnC |
| -------- | -------- | ------- |
| ID 1 | Cell 2 | NaN |
| ID 2 | Cell 4 | NaN |
| ID 3 | Cell 2 | info |
| ID 4 | Cell 4 | info |
Thanks a lot for your time!
I have tried something like df = pd.concat(['df1','df2'], axis=1) and merge.
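A minimal sketch of what usually works here, assuming the frames are built from the example data above: pd.concat with axis=0 (the default) stacks the frames vertically, aligns columns by name, and fills the missing ColumnC entries of df1 with NaN.
import pandas as pd

df1 = pd.DataFrame({'Column A': ['ID 1', 'ID 2'],
                    'Column B': ['Cell 2', 'Cell 4']})
df2 = pd.DataFrame({'Column A': ['ID 3', 'ID 4'],
                    'Column B': ['Cell 2', 'Cell 4'],
                    'ColumnC': ['info', 'info']})

# axis=0 stacks rows; columns are matched by name and any column missing
# from one frame is filled with NaN. Pass the frames themselves, not strings.
df = pd.concat([df1, df2], axis=0, ignore_index=True)
print(df)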

Related

Filtering a pandas dataframe to remove duplicates with a criterion

I am new to pandas dataframes, so I apologize in case there's an easy or even built-in way to do this.
Let's say I have a dataframe df with 3 columns: A (a string), B (a float) and C (a bool). Values of column A are not unique. B is a random number, and rows with the same A value can have different values of B. Column C is True if the value of A is repeated in the dataset.
An example
| | A | B | C |
|---|-----|-----|-------|
| 0 | cat | 10 | True |
| 1 | dog | 10 | False |
| 2 | cat | 20 | True |
| 3 | bee | 100 | False |
(The column C is actually redundant and could be obtained with df['C']=df['A'].duplicated(keep=False))
What I want to obtain is a dataframe where, for duplicated entries of A (C==True), only the row with the highest B value is kept.
I know how to get the list of rows with maximum value of B:
df.loc[df[df['C']].groupby('A')['B'].idxmax()] #is this the best way actually?
but what I want is the opposite: filter df to get only the entries that are not duplicated (C==False) plus, for the duplicated ones, the row with the highest B.
One possibility could be to concatenate df[~df['C']] with the previous table, but is that really the best way?
One approach:
res = df.loc[df.groupby("A")["B"].idxmax()]  # loc, since idxmax returns index labels
print(res)
Output
A B C
3 bee 100 False
2 cat 20 True
1 dog 10 False
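For completeness, a sketch of the concatenation idea mentioned in the question, assuming pandas is imported as pd and df is the example frame above: keep the non-duplicated rows untouched and run the groupby only on the duplicated ones.
res = pd.concat([
    df[~df['C']],                                       # A values that are not duplicated
    df.loc[df[df['C']].groupby('A')['B'].idxmax()],     # for duplicated A, keep the max-B row
]).sort_index()
print(res)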

Replace the cell with the most frequent word in Pandas DataFrame

I have a DataFrame like this:
df = pd.DataFrame({'Source1': ['Corona,Corona,Corona','Sars,Sars','Corona,Sars',
'Sars,Corona','Sars'],
'Area': ['A,A,A,B','A','A,B,B,C','C,C,B,C','A,B,C']})
df
Source1 Area
0 Corona,Corona,Corona A,A,A,B
1 Sars,Sars A
2 Corona,Sars A,B,B,C
3 Sars,Corona C,C,B,C
4 Sars A,B,C
I want to check each cell in each column (the real data has many columns), find the frequency of each unique word (the words are separated by ','), and replace the whole entry with the most frequent word.
In the case of a tie, it doesn't matter which word is used. So the desired output would look like this:
df
Source Area
0 Corona A
1 Sars A
2 Corona B
3 Sars C
4 Sars A
In this case, I randomly chose to pick the first word when there is a tie, but it really doesn't matter.
Thanks in advance.
Create DataFrames with Series.str.split and expand=True, then use DataFrame.mode and select the first column by position:
df['Source1'] = df['Source1'].str.split(',', expand=True).mode(axis=1).iloc[:, 0]
df['Area'] = df['Area'].str.split(',', expand=True).mode(axis=1).iloc[:, 0]
print (df)
Source1 Area
0 Corona A
1 Sars A
2 Corona B
3 Sars C
4 Sars A
Another idea with collections.Counter.most_common:
from collections import Counter
f = lambda x: [Counter(y.split(',')).most_common(1)[0][0] for y in x]
df[['Source1', 'Area']] = df[['Source1', 'Area']].apply(f)
#all columns
#df = df.apply(f)
print (df)
Source1 Area
0 Corona A
1 Sars A
2 Corona B
3 Sars C
4 Sars A
Here is an approach that can be executed in a single line for each series and requires no extra imports.
df['Area'] = df['Area'].apply(lambda x: max(x.replace(',',''), key=x.count))
After stripping the commas from the Area values, max with the key=x.count argument picks the character with the greatest number of occurrences in the original string (or the first one in the case of a tie).
You could also use something similar (demonstrated with the Source1 series), returning the maximum from the list of elements created by splitting the field.
df['Source1'] = df['Source1'].apply(lambda x: max(list(x.split(',')), key=x.count))
+---+---------+------+
| | Source1 | Area |
+---+---------+------+
| 0 | Corona | A |
| 1 | Sars | A |
| 2 | Corona | B |
| 3 | Sars | C |
| 4 | Sars | A |
+---+---------+------+
Two methods are shown above to highlight the choices; both would work adequately on either or both series.

Copy data from 1 data-set to another on the basis of Unique ID by map function

I am matching two large datasets and trying to perform update, remove and create operations on the original dataset by comparing it with the other dataset. How can I update 2 or 3 columns out of 10 in the original dataset and keep the other columns' values the same as before?
I tried merge, but to no avail; it does not work for me.
Original data:
id | full_name | date
1 | John | 02-23-2006
2 | Paul Elbert | 09-29-2001
3 | Donag | 11-12-2013
4 | Tom Holland | 06-17-2016
other data:
id | full_name | date
1 | John | 02-25-2018
2 | Paul | 03-09-2001
3 | Donag | 07-09-2017
4 | Tom | 05-09-2016
After trying this I checked manually and didn't get the expected results.
original[['id']].merge(other[['id','date']],on='id')
Can I solve this problem with map? When the IDs match, update all values in the date column without changing any value in the name column of the original dataset.
Use pandas.Series.map:
df['date'] = df['id'].map(other_df.set_index('id')['date'])
print(df)
id full_name date
0 1 John 02-25-2018
1 2 Paul Elbert 03-09-2001
2 3 Donag 07-09-2017
3 4 Tom Holland 05-09-2016
to check other conditions (illustrated with a hypothetical status column):
cond = df.status.str.contains('new')
df.loc[cond, 'date'] = df.loc[cond, 'id'].map(other_df.set_index('id')['date'])
Pandas' DataFrame.update does this, if you properly set id as your index on both the original and other:
original.update(other[["date"]])
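A minimal sketch of that, using the column names from the example data ('id', 'full_name', 'date'):
# Align both frames on 'id', then let update() overwrite only the 'date' column.
original = original.set_index('id')
other = other.set_index('id')
original.update(other[['date']])   # full_name is left untouched
original = original.reset_index()
print(original)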

Python/Pandas: Pivot table

In a jupyter notebook, I have a dataframe created from different merged datasets.
record_id | song_id | user_id | number_times_listened
0 | ABC | Shjkn4987 | 3
1 | ABC | Dsfds2347 | 15
2 | ABC | Fkjhh9849 | 7
3 | XYZ | Shjkn4987 | 20
4 | XXX | Shjkn4987 | 5
5 | XXX | Swjdh0980 | 1
I would like to create a pivot table dataframe by song_id listing the number of user_ids and the sum of number_times_listened.
I know that I need to create a for loop with the count and sum functions, but I cannot make it work. I also tried the pandas module's pd.pivot_table.
df = pd.pivot_table(data, index='song_ID', columns='userID', values='number_times_listened', aggfunc='sum')
OR something like this?
total_user = []
total_times_listened = []
for x in data:
    total_user.append(sum('user_id'))
    total_times_listened.append(count('number_times_listened'))
return df('song_id','total_user','total_times_listened')
You can pass a dictionary of column names as keys and a list of functions as values:
funcs = {'number_times_listened':['sum'], 'user_id':['count']}
Then simply use df.groupby on column song_id:
df.groupby('song_id').agg(funcs)
The output:
number_times_listened user_id
sum count
song_id
ABC 25 3
XXX 6 2
XYZ 20 1
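On pandas 0.25 or newer, named aggregation is an equivalent alternative that avoids the two-level column header; a sketch using the column names from the question:
res = df.groupby('song_id').agg(
    total_times_listened=('number_times_listened', 'sum'),   # sum of listens per song
    user_count=('user_id', 'count'),                          # number of user rows per song
)
print(res)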
Not sure if this is related but the column names and casing in your example don't match your Python code.
In any case, the following works for me on Python 2.7:
CSV File:
record_id song_id user_id number_times_listened
0 ABC Shjkn4987 3
1 ABC Dsfds2347 15
2 ABC Fkjhh9849 7
3 XYZ Shjkn4987 20
4 XXX Shjkn4987 5
5 XXX Swjdh0980 1
Python code:
csv_data = pd.read_csv('songs.csv')
df = pd.pivot_table(csv_data, index='song_id', columns='user_id', values='number_times_listened', aggfunc='sum').fillna(0)
The resulting pivot table looks like:
user_id Dsfds2347 Fkjhh9849 Shjkn4987 Swjdh0980
song_id
ABC 15 7 3 0
XXX 0 0 5 1
XYZ 0 0 20 0
Is this what you're looking for? Keep in mind that song_id, user_id pairs are unique in your dataset, so the aggregate function isn't actually doing anything in this specific example since there's nothing to group by on these two columns.
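If pd.pivot_table is preferred, an aggfunc dictionary gives the per-song summary the question originally asked for (a sketch reusing csv_data from the answer above):
summary = pd.pivot_table(csv_data, index='song_id',
                         values=['number_times_listened', 'user_id'],
                         aggfunc={'number_times_listened': 'sum',
                                  'user_id': 'count'})
print(summary)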

Use pandas groupby.size() results for arithmetical operation

I have the following problem, which I am stuck on and unfortunately cannot resolve by myself or with the similar questions I found on Stack Overflow.
To keep it simple, I'll give a short example of my problem:
I have a DataFrame with several columns, one of which indicates the ID of a user. The same user may have several entries in this data frame:
| | userID | col2 | col3 |
+---+-----------+----------------+-------+
| 1 | 1 | a | b |
| 2 | 1 | c | d |
| 3 | 2 | a | a |
| 4 | 3 | d | e |
Something like this. Now I want to know the number of rows that belong to a certain userID. For this I tried df.groupby('userID').size(), whose result I then want to use in another simple calculation, such as a division.
But when I try to save the result of the calculation in a separate column, I keep getting NaN values.
Is there a way to solve this so that I get the result of the calculations in a separate column?
Thanks for your help!
edit//
To make clear how my output should look: the upper dataframe is my main data frame, so to speak. Besides this frame I have a second frame that looks like this:
| | userID | value | value/appearances |
+---+-----------+----------------+-------+
| 1 | 1 | 10 | 10 / 2 = 5 |
| 3 | 2 | 20 | 20 / 1 = 20 |
| 4 | 3 | 30 | 30 / 1 = 30 |
So in the column 'value/appearances' I basically want the number from the value column divided by the number of appearances of that user in the main dataframe. For the user with ID=1 this would be 10/2, since this user has a value of 10 and 2 rows in the main dataframe.
I hope this makes it a bit clearer.
IIUC you want to do the following: groupby on 'userID' and call transform on the grouped column, passing 'size' to identify the method to call:
In [54]:
df['size'] = df.groupby('userID')['userID'].transform('size')
df
Out[54]:
userID col2 col3 size
1 1 a b 2
2 1 c d 2
3 2 a a 1
4 3 d e 1
What you tried:
In [55]:
df.groupby('userID').size()
Out[55]:
userID
1 2
2 1
3 1
dtype: int64
When assigned back to the df, the result aligns with the df index, so it introduces NaN for the last row:
In [57]:
df['size'] = df.groupby('userID').size()
df
Out[57]:
userID col2 col3 size
1 1 a b 2
2 1 c d 1
3 2 a a 1
4 3 d e NaN
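To get the value/appearances column from the edit, one sketch (assuming the second frame is called summary, a hypothetical name, with the 'userID' and 'value' columns shown above):
counts = df.groupby('userID').size()          # appearances of each user in the main frame
summary['value/appearances'] = summary['value'] / summary['userID'].map(counts)
print(summary)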
