Need to extract specific value from excel column using python pandas dataframe
The column Product that I am trying to extract from looks like the sample below, and I need to extract only the Product # from it. The column also has other numbers, but the Product # always comes after the term 'UK Pro' and is a 3- to 4-digit number in any given row of data.
In[1]:
df['Product'].head()
#Dataframe looks like this:
Out[1]:
Checking center : King 2000 : UK Pro 1000 : London
Checking center : Queen 321 : UK Pro 250 : Spain
CC : UK Pro 3000 : France
CC : UK Pro 810 : Poland
Expected Output:
Product #
1000
250
3000
810
Started with this:
df['Product #'] = df1['Product'].str.split(':').str[1]
But this only gives me the piece between the first two occurrences of the ':' delimiter.
Then tried this:
df1['Product #'] = df1['Product'].str.split('UK Pro', 1).str[0].str.strip()
You can use pandas.Series.str.extract:
df["Product #"] = df["Product"].str.extract(r"UK Pro (\d+)", expand=False)
# Output :
print(df)
Product #
0 NaN
1 NaN
2 1000
3 NaN
4 NaN
5 250
6 NaN
7 3000
8 NaN
9 810
10 NaN
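For reference, here is a minimal, self-contained version of the same approach, using only the four sample rows from the question (the column name Product is taken from the question):

import pandas as pd

# minimal reproduction with the four sample rows
df = pd.DataFrame({'Product': [
    'Checking center : King 2000 : UK Pro 1000 : London',
    'Checking center : Queen 321 : UK Pro 250 : Spain',
    'CC : UK Pro 3000 : France',
    'CC : UK Pro 810 : Poland',
]})

# capture the digits that follow 'UK Pro'
df['Product #'] = df['Product'].str.extract(r'UK Pro (\d+)', expand=False)
print(df['Product #'])
# 0    1000
# 1     250
# 2    3000
# 3     810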
I'd like to compare two data frames and find their differences. xyz has all of the same columns as abc, but it has an additional column.
In the comparison, I'd like to match up the two like columns (Sport), but only show the SportLeague in the output (if a difference exists, that is). For example, instead of showing 'Soccer' as a difference, show 'Soccer:MLS', which is the adjacent column in xyz.
Here are the two data frames (reproducible code below):
import pandas as pd
import numpy as np
abc = {'Sport' : ['Football', 'Basketball', 'Baseball', 'Hockey'], 'Year' : ['2021','2021','2022','2022'], 'ID' : ['1','2','3','4']}
abc = pd.DataFrame({k: pd.Series(v) for k, v in abc.items()})
abc
xyz = {'Sport' : ['Football', 'Football', 'Basketball', 'Baseball', 'Hockey', 'Soccer'], 'SportLeague' : ['Football:NFL', 'Football:XFL', 'Basketball:NBA', 'Baseball:MLB', 'Hockey:NHL', 'Soccer:MLS'], 'Year' : ['2022','2019', '2022','2022','2022', '2022'], 'ID' : ['2','0', '3','2','4', '1']}
xyz = pd.DataFrame({k: pd.Series(v) for k, v in xyz.items()})
xyz = xyz.sort_values(by = ['ID'], ascending = True)
xyz
Code already tried:
abc.compare(xyz, align_axis=1, keep_shape=False, keep_equal=False)
This raises an error (since the data frames don't have the exact same columns).
For example, if xyz['Sport'] does not show up anywhere within abc['Sport'], then show xyz['SportLeague'] as the difference between the data frames.
Further clarification of the logic:
Does abc['Sport'] appear anywhere in xyz['Sport']? If not, indicate "Not Found in xyz data frame". If it does exist, are its corresponding abc['Year'] and abc['ID'] values the same? If not, show "Change from xyz['Year'] and xyz['ID'] to abc['Year'] and abc['ID']".
Does xyz['Sport'] appear anywhere in abc['Sport']? If not, indicate "Remove xyz['SportLeague']".
What I've explained above is similar to the .compare method. However, the data frames in this example may not be the same length and may have different numbers of variables.
If I understand you correctly, we basically want to merge both DataFrames, and then apply a number of comparisons between both DataFrames, and add a column that explains the course of action to be taken, given a certain result of a given comparison.
Note: in the example here I have added one sport ('Cricket') to your df abc, to trigger the condition abc['Sport'] does not exist in xyz['Sport'].
abc = {'Sport' : ['Football', 'Basketball', 'Baseball', 'Hockey','Cricket'], 'Year' : ['2021','2021','2022','2022','2022'], 'ID' : ['1','2','3','4','5']}
abc = pd.DataFrame({k: pd.Series(v) for k, v in abc.items()})
print(abc)
Sport Year ID
0 Football 2021 1
1 Basketball 2021 2
2 Baseball 2022 3
3 Hockey 2022 4
4 Cricket 2022 5
I've left xyz unaltered. Now, let's merge these two dfs:
df = xyz.merge(abc, on='Sport', how='outer', suffixes=('_xyz','_abc'))
print(df)
Sport SportLeague Year_xyz ID_xyz Year_abc ID_abc
0 Football Football:XFL 2019 0 2021 1
1 Football Football:NFL 2022 2 2021 1
2 Soccer Soccer:MLS 2022 1 NaN NaN
3 Baseball Baseball:MLB 2022 2 2022 3
4 Basketball Basketball:NBA 2022 3 2021 2
5 Hockey Hockey:NHL 2022 4 2022 4
6 Cricket NaN NaN NaN 2022 5
Now, we have a df where we can evaluate your set of conditions using np.select(conditions, choices, default). Like this:
conditions = [ df.Year_abc.isnull(),
df.Year_xyz.isnull(),
(df.Year_xyz != df.Year_abc) & (df.ID_xyz != df.ID_abc),
df.Year_xyz != df.Year_abc,
df.ID_xyz != df.ID_abc
]
choices = [ 'Sport not in abc',
'Sport not in xyz',
'Change year and ID to xyz',
'Change year to xyz',
'Change ID to xyz']
df['action'] = np.select(conditions, choices, default=np.nan)
The result is below, with a new action column noting which course of action to take.
Sport SportLeague Year_xyz ID_xyz Year_abc ID_abc \
0 Football Football:XFL 2019 0 2021 1
1 Football Football:NFL 2022 2 2021 1
2 Soccer Soccer:MLS 2022 1 NaN NaN
3 Baseball Baseball:MLB 2022 2 2022 3
4 Basketball Basketball:NBA 2022 3 2021 2
5 Hockey Hockey:NHL 2022 4 2022 4
6 Cricket NaN NaN NaN 2022 5
action
0 Change year and ID to xyz # match, but mismatch year and ID
1 Change year and ID to xyz # match, but mismatch year and ID
2 Sport not in abc # no match: Sport in xyz, but not in abc
3 Change ID to xyz # match, but mismatch ID
4 Change year and ID to xyz # match, but mismatch year and ID
5 nan # complete match: no action needed
6 Sport not in xyz # no match: Sport in abc, but not in xyz
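One small side note: because default=np.nan is combined with string choices, np.select casts it to the literal string 'nan' (visible in row 5 above). If you prefer an explicit label for the no-action case, you could pass a string default instead, e.g.:

df['action'] = np.select(conditions, choices, default='No action needed')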
Let me know if this is a correct interpretation of what you are looking to achieve.
I have data for many countries over a period of time (2001-2003). It looks something like this:
index  year  country  inflation  GDP
1      2001  AFG      nan        48
2      2002  AFG      nan        49
3      2003  AFG      nan        50
4      2001  CHI      3.0        nan
5      2002  CHI      5.0        nan
6      2003  CHI      7.0        nan
7      2001  USA      nan        220
8      2002  USA      4.0        250
9      2003  USA      2.5        280
I want to drop countries in case there is no data (i.e. values are missing for all years) for any given variable.
In the example table above, I want to drop AFG (because it is missing all values for inflation) and CHI (GDP missing). I don't want to drop observation #7 just because one year is missing.
What's the best way to do that?
This should work by filtering out countries where every value is NaN in either column (inflation or GDP):
(
df.groupby(['country'])
.filter(lambda x: not x['inflation'].isnull().all() and not x['GDP'].isnull().all())
)
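On the example data this keeps only the USA rows, since AFG has no inflation values at all and CHI has no GDP values. A quick self-contained sketch to verify (data taken from the question):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'year':      [2001, 2002, 2003] * 3,
    'country':   ['AFG'] * 3 + ['CHI'] * 3 + ['USA'] * 3,
    'inflation': [np.nan, np.nan, np.nan, 3.0, 5.0, 7.0, np.nan, 4.0, 2.5],
    'GDP':       [48, 49, 50, np.nan, np.nan, np.nan, 220, 250, 280],
})

kept = df.groupby('country').filter(
    lambda x: not x['inflation'].isnull().all() and not x['GDP'].isnull().all()
)
print(kept['country'].unique())  # ['USA']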
Note: if you have more than two columns, you can use a more general version of this:
df.groupby(['country']).filter(lambda x: not x.isnull().all().any())
If you want this to apply only to a specific range of years rather than to all years, you can set up a mask and change the code a bit:
mask = (df['year'] >= 2002) & (df['year'] <= 2003) # mask of years
grp = df.groupby(['country']).filter(lambda x: not x[mask].isnull().all().any())
You can also try this:
# check where the sum is equal to 0 - means no values in the column for a specific country
group_by = df.groupby(['country']).agg({'inflation':sum, 'GDP':sum}).reset_index()
# extract only countries with information on both columns
indexes = group_by[ (group_by['GDP'] != 0) & ( group_by['inflation'] != 0) ].index
final_countries = list(group_by.loc[ group_by.index.isin(indexes), : ]['country'])
# keep the rows contains the countries
df = df.drop(df[~df.country.isin(final_countries)].index)
You could reshape the data frame from long to wide, drop nulls, and then convert back to long.
To convert from long to wide, you can use pivot functions.
Here's code for dropping nulls, after it's reshaped:
df.dropna(axis=0, how='any', inplace=True)  # delete rows where any value is null
To convert back to long, you can use pd.melt.
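For illustration, a minimal sketch of that round trip on the example columns (year, country, inflation, GDP). Note it uses stack rather than melt to undo the MultiIndex columns that pivot produces; the exact arguments may need adjusting to your data:

# long -> wide: one row per country, columns are (variable, year) pairs
wide = df.pivot(index='country', columns='year', values=['inflation', 'GDP'])

# drop countries that contain any nulls, as above
wide = wide.dropna(axis=0, how='any')

# wide -> long again
long_again = wide.stack(level='year').reset_index()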
I want to find a matching row for another row in a Pandas dataframe. Given this example frame:
name location type year area delta
0 building NY a 2019 650.3 ?
1 building NY b 2019 400.0 ?
2 park LA a 2017 890.7 ?
3 lake SF b 2007 142.2 ?
4 park LA b 2017 333.3 ?
...
Each row has a matching row where all values are equal, except for the "type" and the "area". For example, rows 0 and 1 match, and rows 2 and 4, ...
I want to somehow get the matching rows; and write the difference between their areas in their "delta" column (e.g. |650.3 - 400.0| = 250.3 for row 0).
The "delta" column doesn't exist yet, but an empty column could be easily added with df["Delta"] = 0. I just don't know how to be able to fill the delta column for ALL rows.
I tried getting a matching row with df[name = 'building' & location = 'type' ... ~& type = 'a']; but I can't edit the result I get from that. Maybe I also don't quite understand when I get a copy, and when a reference.
I hope my problem is clear. If not, I am happy to explain further.
Thanks a lot already for your help!
IIUC, you want groupby.transform:
df['delta']=( df.groupby(df.columns.difference(['type','area']).tolist())
.transform('diff').abs() )
print(df)
name location type year area delta
0 building NY a 2019 650.3 NaN
1 building NY b 2019 400.0 250.3
2 park LA a 2017 890.7 NaN
3 lake SF b 2007 142.2 NaN
4 park LA b 2017 333.3 557.4
If you want to write the difference in both rows of the delta column:
df['delta']=( df.groupby(df.columns.difference(['type','area']).tolist())
.transform(lambda x: x.diff().bfill()).abs() )
print(df)
name location type year area delta
0 building NY a 2019 650.3 250.3
1 building NY b 2019 400.0 250.3
2 park LA a 2017 890.7 557.4
3 lake SF b 2007 142.2 NaN
4 park LA b 2017 333.3 557.4
Detail:
df.columns.difference(['type','area']).tolist()
# or: [*df.columns.difference(['type','area'])]
# output: ['location', 'name', 'year']
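As a side note: depending on your pandas version, transforming the whole frame this way may raise an error because the non-numeric type column cannot be diffed. If you hit that, scoping the transform to the area column keeps the same logic (a sketch of the first variant):

group_cols = df.columns.difference(['type', 'area']).tolist()
df['delta'] = df.groupby(group_cols)['area'].transform('diff').abs()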
A solution with merge:
df['other_type'] = np.where(df['type']=='a', 'b', 'a')
(df.merge(df,
left_on=['name','location', 'year', 'type'],
right_on=['name','location', 'year', 'other_type'],
suffixes=['','_r'])
 .assign(delta=lambda x: (x['area'] - x['area_r']).abs())
.drop(['area_r', 'other_type_r'], axis=1)
)
I have a dataset structures as below:
index country city Data
0 AU Sydney 23
1 AU Sydney 45
2 AU Unknown 2
3 CA Toronto 56
4 CA Toronto 2
5 CA Ottawa 1
6 CA Unknown 2
I want to replace 'Unknown' in the city column with the mode (most frequent city) for that country. The result would be:
...
2 AU Sydney 2
...
6 CA Toronto 2
I can get the city modes with:
city_modes = df.groupby('country')['city'].apply(lambda x: x.mode().iloc[0])
And I can replace values with:
df['column']=df.column.replace('Unknown', 'something')
But I can't work out how to combine these so that 'Unknown' is only replaced, per country, by that country's most common city.
Any ideas?
Use transform to get a Series the same size as the original DataFrame, then set the new values with numpy.where:
city_modes = df.groupby('country')['city'].transform(lambda x: x.mode().iloc[0])
df['column'] = np.where(df['column'] == 'Unknown',city_modes, df['column'])
Or:
df.loc[df['column'] == 'Unknown', 'column'] = city_modes
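Applied to the example data, where the relevant column is city, a minimal self-contained sketch might look like this (column names taken from the question):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'country': ['AU', 'AU', 'AU', 'CA', 'CA', 'CA', 'CA'],
    'city': ['Sydney', 'Sydney', 'Unknown', 'Toronto', 'Toronto', 'Ottawa', 'Unknown'],
    'Data': [23, 45, 2, 56, 2, 1, 2],
})

city_modes = df.groupby('country')['city'].transform(lambda x: x.mode().iloc[0])
df['city'] = np.where(df['city'] == 'Unknown', city_modes, df['city'])
# rows 2 and 6 now read 'Sydney' and 'Toronto' respectively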
I am trying to remove data from a group once the Week values become non-sequential (i.e. jump by more than 1). In other words, if there is a gap in the weeks, I want to remove that row and all subsequent rows in that group. Below is a simple example of the structure of the data I have and also the ideal output I am looking for. The data needs to be grouped by Country and Product.
import pandas as pd
data = {'Country' : ['US','US','US','US','US','DE','DE','DE','DE','DE'],'Product' : ['Coke','Coke','Coke','Coke','Coke','Apple','Apple','Apple','Apple','Apple'],'Week' : [1,2,3,4,6,1,2,3,5,6] }
df = pd.DataFrame(data)
print(df)
#Current starting Dataframe.
Country Product Week
0 US Coke 1
1 US Coke 2
2 US Coke 3
3 US Coke 4
4 US Coke 6
5 DE Apple 1
6 DE Apple 2
7 DE Apple 3
8 DE Apple 5
9 DE Apple 6
#Ideal Output below:
Country Product Week
0 US Coke 1
1 US Coke 2
2 US Coke 3
3 US Coke 4
5 DE Apple 1
6 DE Apple 2
7 DE Apple 3
So the output removes Week 6 for US Coke, because the preceding week was 4.
For DE Apple, Weeks 5 and 6 were removed because the week preceding Week 5 was 3. Note this also eliminates DE Apple Week 6, even though its preceding week is 5 (a diff() of 1).
This should work: it flags the first week-to-week gap in each group and keeps only the rows before it.
df.groupby(['Country', 'Product']).apply(lambda sdf: sdf[(sdf.Week.diff(1).fillna(1) != 1).astype('int').cumsum() == 0]).reset_index(drop=True)
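The same logic can also be spelled out step by step, which may be easier to read (an equivalent sketch of the one-liner above):

def trim_after_gap(g):
    gap = g['Week'].diff().fillna(1).ne(1)  # True on the first row after a gap
    return g[~gap.cummax()]                 # keep only the rows before the first gap

out = df.groupby(['Country', 'Product'], group_keys=False).apply(trim_after_gap)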
Another method that might be more readable (i.e. generate the expected consecutive weeks and check them against the observed weeks):
df['expected_week'] = df.groupby(['Country', 'Product']).Week.transform(lambda s: range(s.min(), s.min() + s.size))
df[df.Week == df.expected_week]
You could try out this method...
import numpy as np

def eliminate(x):
    # consecutive weeks minus a consecutive counter stay constant; a gap bumps the value
    x['g'] = x['Week'] - np.arange(x.shape[0])
    # keep only the rows before the first gap
    x = x[x['g'] == x['g'].min()]
    return x.drop(columns='g')

out = df.groupby('Product').apply(eliminate).reset_index(level=0, drop=True)
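If you want to follow the stated requirement of grouping by both keys, the same function can be applied per (Country, Product) pair; on this sample data the result is identical, since each product belongs to a single country:

out = df.groupby(['Country', 'Product'], group_keys=False).apply(eliminate)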