I would like to replace the null values of stadium attendance (affluence in French) with their means. Therefore I do this to get the mean by season / team:
test = data.groupby(['season','domicile']).agg({'affluence':'mean'})
This code works and gives me what I want (data is a DataFrame):
affluence
season domicile
1999 AS Monaco 10258.647059
AS Saint-Etienne 27583.375000
FC Nantes 28334.705882
Girondins de Bordeaux 30084.941176
Montpellier Hérault SC 13869.312500
Olympique Lyonnais 35453.941176
Olympique de Marseille 51686.176471
Paris Saint-Germain 42792.647059
RC Strasbourg Alsace 19845.058824
Stade Rennais FC 13196.812500
2000 AS Monaco 8917.937500
AS Saint-Etienne 26508.750000
EA Guingamp 13056.058824
FC Nantes 31913.235294
Girondins de Bordeaux 29371.588235
LOSC 16793.411765
Olympique Lyonnais 34564.529412
Olympique de Marseille 50755.176471
Paris Saint-Germain 42716.823529
RC Strasbourg Alsace 13664.875000
Stade Rennais FC 19264.062500
Toulouse FC 19926.294118
....
So now I would like to filter on the season and the team, for example test[test.season == 1999]. However, this doesn't work because the result has only one column, 'affluence'. It gives me the error:
'DataFrame' object has no attribute 'season'
I tried:
test = data[['season','domicile','affluence']].groupby(['season','domicile']).agg({'affluence':'mean'})
which gives the same result as above. So I thought of maybe indexing by season/team, but how? And after that, how do I access it?
Thanks
Doing test = data.groupby(['season','domicile'], as_index=False).agg({'affluence':'mean'}) should do the trick for what you're trying to do.
The parameter as_index=False is particularly useful when you do not want to deal with MultiIndexes.
Example:
import pandas as pd
data = {
    'A': [0, 0, 0, 1, 1, 1, 2, 2, 2],
    'B': list('abcdefghi')
}
df = pd.DataFrame(data)
print(df)
# A B
# 0 0 a
# 1 0 b
# 2 0 c
# 3 1 d
# 4 1 e
# 5 1 f
# 6 2 g
# 7 2 h
# 8 2 i
grp_1 = df.groupby('A').count()
print(grp_1)
# B
# A
# 0 3
# 1 3
# 2 3
grp_2 = df.groupby('A', as_index=False).count()
print(grp_2)
# A B
# 0 0 3
# 1 1 3
# 2 2 3
After a groupby operation, the columns you grouped by become the index. You can access the index via df.index (or test.index in your case).
In your case, you created a MultiIndex. A detailed description of how to handle a DataFrame with a MultiIndex can be found in the pandas documentation.
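For example, you can select from the grouped result through the MultiIndex with .loc or .xs. A minimal sketch against your test frame (the season/team values are taken from your sample output):
# all teams for one season (first index level)
print(test.loc[1999])
# a single season/team combination
print(test.loc[(1999, 'AS Monaco'), 'affluence'])
# cross-section on the second index level: one team across all seasons
print(test.xs('AS Monaco', level='domicile'))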
However, you could recreate a standard dataframe again by using:
df = pd.DataFrame({
    'season': test.index.get_level_values('season'),
    'domicile': test.index.get_level_values('domicile'),
    'affluence': test['affluence'].values
})
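Simpler still, test.reset_index() turns the index levels back into ordinary columns in one call, after which your original filter works as intended:
test = test.reset_index()
print(test[test.season == 1999])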
I want this matrix as outcome:
outcome = {
    "vendor": ['A','B','C','D','E'],
    "country": ['Spain','Spain','Germany','Italy','Italy'],
    "yeardum": ['2015','2020','2014','2016','2019'],
    "sales_year_data": ['15','205','24','920','1310'],
    "country_image_data": ['2','5','-6','7','-1'],
}
df_inv = pd.DataFrame(outcome)
The data in column "sales_year_data" of df_inv comes from df1:
sales_year_data = {
    "country": ['Spain','France','Germany','Belgium','Italy'],
    "2014": ['45','202','24','216','219'],
    "2015": ['15','55','214','2016','209'],
    "2016": ['615','2333','205','207','920'],
    "2017": ['1215','255','234','2116','101'],
    "2018": ['415','1320','214','2516','2019'],
    "2019": ['215','220','5614','416','1310'],
    "2020": ['205','202','44','296','2011'],
}
df1 = pd.DataFrame(sales_year_data)
As you can see in the column "sales_year_data" of df_inv, the number 15 sits at the intersection in df1 of year 2015 and Spain, 205 at the intersection of Spain and 2020, 24 at the intersection of Germany and 2014, and so on.
The data in column "country_image_data" of df_inv comes from df2:
country_change_data = {
    "country": ['Spain','Spain','Germany','Italy','Italy'],
    "2014": ['4','2','-6','6','9'],
    "2015": ['2','5','-5','2','3'],
    "2016": ['5','3','5','7','9'],
    "2017": ['8','7','5','6','1'],
    "2018": ['5','1','4','6','2'],
    "2019": ['1','2','4','6','-1'],
    "2020": ['5','2','4','6','2'],
}
df2 = pd.DataFrame(country_change_data)
As you can see in the column "country_image_data" of df_inv, the number 2 sits at the intersection in df2 of year 2015 and Spain, 5 at the intersection of Spain and 2020, -6 at the intersection of Germany and 2014, and so on.
If my original dataframe is:
inv = {
    "vendor": ['A','B','C','D','E'],
    "country": ['Spain','Spain','Germany','Italy','Italy'],
    "yeardum": ['2015','2020','2014','2016','2019'],
}
df0 = pd.DataFrame(inv)
How could I automate the lookups into df1 and df2 at the intersections of interest, building df_inv starting from df0?
This does it.
sales_counters = {}
country_counters = {}
new_df_data = []
for _, row in df0.iterrows():
    c = row['country']
    y = row['yeardum']
    # Count how many times each country has been seen so far, so repeated
    # countries pick up successive rows of the lookup tables.
    sales_idx = sales_counters[c] = sales_counters.get(c, -1) + 1
    country_idx = country_counters[c] = country_counters.get(c, -1) + 1
    d1 = df1[df1['country'] == c]
    d2 = df2[df2['country'] == c]
    # Clamp the position so a country repeated more often in df0 than in a
    # lookup table falls back to that table's last matching row.
    sales_year = d1.iloc[min(sales_idx, d1.shape[0] - 1)][y]
    country_image = d2.iloc[min(country_idx, d2.shape[0] - 1)][y]
    new_df_data.append([sales_year, country_image])
df0 = pd.concat([df0, pd.DataFrame(new_df_data)], axis=1).rename(
    {0: 'sales_year_data', 1: 'country_image_data'}, axis=1)
Test:
>>> df0
vendor country yeardum sales_year_data country_image_data
0 A Spain 2015 15 2
1 B Spain 2020 205 2
2 C Germany 2014 24 -6
3 D Italy 2016 920 7
4 E Italy 2019 1310 -1
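As an aside, for lookup tables in which each country appears only once (true for df1 above, but not for df2), a vectorized alternative is to reshape the table to long format and merge. A sketch, assuming string-typed years on both sides as in the example data:
# melt turns the year columns into a single 'yeardum' column
sales_long = df1.melt(id_vars='country', var_name='yeardum',
                      value_name='sales_year_data')
df0_merged = df0.merge(sales_long, on=['country', 'yeardum'], how='left')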
I am new in this field and stuck on this problem. I have two datasets
all_batsman_df, this df has 5 columns ('years','team','pos','name','salary'):
years team pos name salary
0 1991 SF 1B Will Clark 3750000.0
1 1991 NYY 1B Don Mattingly 3420000.0
2 1991 BAL 1B Glenn Davis 3275000.0
3 1991 MIL DH Paul Molitor 3233333.0
4 1991 TOR 3B Kelly Gruber 3033333.0
all_batting_statistics_df, this df has 31 columns
Year Rk Name Age Tm Lg G PA AB R ... SLG OPS OPS+ TB GDP HBP SH SF IBB Pos Summary
0 1988 1 Glen Davis 22 SDP NL 37 89 83 6 ... 0.289 0.514 48.0 24 1 1 0 1 1 987
1 1988 2 Jim Acker 29 ATL NL 21 6 5 0 ... 0.400 0.900 158.0 2 0 0 0 0 0 1
2 1988 3 Jim Adduci* 28 MIL AL 44 97 94 8 ... 0.383 0.641 77.0 36 1 0 0 3 0 7D/93
3 1988 4 Juan Agosto* 30 HOU NL 75 6 5 0 ... 0.000 0.000 -100.0 0 0 0 1 0 0 1
4 1988 5 Luis Aguayo 29 TOT MLB 99 260 237 21 ... 0.354 0.663 88.0 84 6 1 1 1 3 564
I want to merge these two datasets on 'year' and 'name'. But the problem is that the two data frames spell some names differently: the first dataset has 'Glenn Davis' while the second has 'Glen Davis'.
Now I would like to know: how can I merge the two using the difflib library even though the names differ?
Any help will be appreciated ...
Thanks in advance.
I have used the code below, which I found in another question on this platform, but it is not working for me. I am adding a new column after matching names in both of the datasets. I know this is not a good approach; kindly suggest if I can do it in a better way.
import cdifflib  # compiled drop-in replacement for difflib's SequenceMatcher

df_a = all_batting_statistics_df
df_b = all_batters
df_a = df_a.astype(str)
df_b = df_b.astype(str)
df_a['merge_year'] = df_a['Year']  # we will use these as the merge keys
df_a['merge_name'] = df_a['Name']
for year_a, name_a in df_a[['Year', 'Name']].values:
    for ixb, (year_b, name_b) in enumerate(df_b[['years', 'name']].values):
        if cdifflib.CSequenceMatcher(None, year_a, year_b).ratio() > .6:
            df_b.loc[ixb, 'merge_year'] = year_a  # creates a merge key in df_b
        if cdifflib.CSequenceMatcher(None, name_a, name_b).ratio() > .6:
            df_b.loc[ixb, 'merge_name'] = name_a  # creates a merge key in df_b
merged_df = pd.merge(df_a, df_b, on=['merge_name', 'merge_year'], how='inner')
You can do
import difflib
df_b['name'] = df_b['name'].apply(lambda x: \
difflib.get_close_matches(x, df_a['name'])[0])
to replace names in df_b with the closest match from df_a, then do your merge. See also this post.
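One caveat: get_close_matches returns an empty list when nothing scores above its cutoff (0.6 by default), so the [0] above raises an IndexError for unmatched names. A slightly defensive variant, falling back to the original name (column names as given in the question):
import difflib

def closest_name(x, candidates):
    # up to one match above the cutoff, best first; keep x if there is none
    matches = difflib.get_close_matches(x, candidates, n=1, cutoff=0.6)
    return matches[0] if matches else x

df_b['name'] = df_b['name'].apply(lambda x: closest_name(x, df_a['Name']))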
Let me approach your problem by assuming that you need a data set with two columns, 'year' and 'name'.
1. We will first fix all the names which are wrong.
Assuming you know the misspelled names in all_batting_statistics_df, correct them like this:
all_batting_statistics_df = all_batting_statistics_df.replace(regex=r'^Glen Davis$', value='Glenn Davis')
Once you have corrected all the spellings, start from the smaller dataset whose names you know, so it doesn't take long.
2. We need both data sets to have the same columns, i.e. only 'year' and 'name'.
Use this to drop the columns we don't need:
all_batsman_df_1 = all_batsman_df.drop(columns=['team','pos','salary'])
all_batting_statistics_df_1 = all_batting_statistics_df.drop(columns=['Rk','Age','Tm','Lg','G','PA','AB','R','Summary'])
I cannot see all 31 columns, so I only listed some; you will have to add the rest to the code above.
3. We need to change the column names to match, i.e. 'year' and 'name', using DataFrame.rename:
df_new_1 = all_batting_statistics_df_1.rename(columns={'Year': 'year', 'Name': 'name'})
4. Next, to merge them we will use this:
all_batsman_df_1.merge(df_new_1, left_on=['years', 'name'], right_on=['year', 'name'])
FINAL THOUGHTS:
If you don't want to do all this, find a way to export the data sets to Google Sheets or Microsoft Excel and edit them there with those more visual tools. If you like pandas, it's not that difficult either; you will find a way. All the best!
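Putting the four steps together, a minimal sketch (assuming the column names shown in the question, and that the spelling fixes from step 1 have already been applied):
df_new_1 = (all_batting_statistics_df[['Year', 'Name']]
            .rename(columns={'Year': 'year', 'Name': 'name'}))
df_batsman = (all_batsman_df[['years', 'name']]
              .rename(columns={'years': 'year'}))
merged = df_batsman.merge(df_new_1, on=['year', 'name'])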
Working through Pandas Cookbook. Counting the Total Number of Flights Between Cities.
import pandas as pd
import numpy as np
# import matplotlib.pyplot as plt
print('NumPy: {}'.format(np.__version__))
print('Pandas: {}'.format(pd.__version__))
print('-----')
desired_width = 320
pd.set_option('display.width', desired_width)
pd.options.display.max_rows = 50
pd.options.display.max_columns = 14
# pd.options.display.float_format = '{:,.2f}'.format
file = "e:\\packt\\data_analysis_and_exploration_with_pandas\\section07\\data\\flights.csv"
flights = pd.read_csv(file)
print(flights.head(10))
print()
# This returns the total number of rows for each group.
flights_ct = flights.groupby(['ORG_AIR', 'DEST_AIR']).size()
print(flights_ct.head(10))
print()
# Get the number of flights between Atlanta and Houston in both directions.
print(flights_ct.loc[[('ATL', 'IAH'), ('IAH', 'ATL')]])
print()
# Sort the origin and destination cities:
# flights_sort = flights.sort_values(by=['ORG_AIR', 'DEST_AIR'], axis=1)
flights_sort = flights[['ORG_AIR', 'DEST_AIR']].apply(sorted, axis=1)
print(flights_sort.head(10))
print()
# Passing just the first row.
print(sorted(flights.loc[0, ['ORG_AIR', 'DEST_AIR']]))
print()
# Once each row is independently sorted, the column names are no longer correct.
# We will rename them to something generic, then again find the total number of flights between all cities.
rename_dict = {'ORG_AIR': 'AIR1', 'DEST_AIR': 'AIR2'}
flights_sort = flights_sort.rename(columns=rename_dict)
flights_ct2 = flights_sort.groupby(['AIR1', 'AIR2']).size()
print(flights_ct2.head(10))
print()
When I get to this line of code, my output differs from the author's:
```flights_sort = flights[['ORG_AIR', 'DEST_AIR']].apply(sorted, axis=1)```
My output does not contain any column names. As a result, when I get to:
```flights_ct2 = flights_sort.groupby(['AIR1', 'AIR2']).size()```
it throws a KeyError. This makes sense, as I am trying to rename columns when no column names exist.
My question is: why are the column names gone? All other output matches the author's output exactly:
NumPy: 1.16.3
Pandas: 0.24.2
-----
MONTH DAY WEEKDAY AIRLINE ORG_AIR DEST_AIR SCHED_DEP DEP_DELAY AIR_TIME DIST SCHED_ARR ARR_DELAY DIVERTED CANCELLED
0 1 1 4 WN LAX SLC 1625 58.0 94.0 590 1905 65.0 0 0
1 1 1 4 UA DEN IAD 823 7.0 154.0 1452 1333 -13.0 0 0
2 1 1 4 MQ DFW VPS 1305 36.0 85.0 641 1453 35.0 0 0
3 1 1 4 AA DFW DCA 1555 7.0 126.0 1192 1935 -7.0 0 0
4 1 1 4 WN LAX MCI 1720 48.0 166.0 1363 2225 39.0 0 0
5 1 1 4 UA IAH SAN 1450 1.0 178.0 1303 1620 -14.0 0 0
6 1 1 4 AA DFW MSY 1250 84.0 64.0 447 1410 83.0 0 0
7 1 1 4 F9 SFO PHX 1020 -7.0 91.0 651 1315 -6.0 0 0
8 1 1 4 AA ORD STL 1845 -5.0 44.0 258 1950 -5.0 0 0
9 1 1 4 UA IAH SJC 925 3.0 215.0 1608 1136 -14.0 0 0
ORG_AIR DEST_AIR
ATL ABE 31
ABQ 16
ABY 19
ACY 6
AEX 40
AGS 83
ALB 33
ANC 2
ASE 1
ATW 10
dtype: int64
ORG_AIR DEST_AIR
ATL IAH 121
IAH ATL 148
dtype: int64
*** No column names *** Why?
0 [LAX, SLC]
1 [DEN, IAD]
2 [DFW, VPS]
3 [DCA, DFW]
4 [LAX, MCI]
5 [IAH, SAN]
6 [DFW, MSY]
7 [PHX, SFO]
8 [ORD, STL]
9 [IAH, SJC]
dtype: object
The author's output. Note the column names are present.
sorted returns a list object and obliterates the columns:
In [11]: df = pd.DataFrame([[1, 2], [3, 4]], columns=["A", "B"])
In [12]: df.apply(sorted, axis=1)
Out[12]:
0 [1, 2]
1 [3, 4]
dtype: object
In [13]: type(df.apply(sorted, axis=1).iloc[0])
Out[13]: list
It's possible that this wouldn't have been the case in earlier pandas... but it would still be bad code.
You can do this by passing the columns explicitly:
In [14]: df.apply(lambda x: pd.Series(sorted(x), df.columns), axis=1)
Out[14]:
A B
0 1 2
1 3 4
A more efficient way to do this is to sort the underlying NumPy array:
In [21]: df = pd.DataFrame([[1, 2], [3, 1]], columns=["A", "B"])
In [22]: df
Out[22]:
A B
0 1 2
1 3 1
In [23]: arr = df[["A", "B"]].values
In [24]: arr.sort(axis=1)
In [25]: df[["A", "B"]] = arr
In [26]: df
Out[26]:
A B
0 1 2
1 1 3
As you can see this sorts each row.
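If you prefer not to mutate an intermediate array in place, np.sort returns a sorted copy that can be assigned straight back, which is the same idea in one step:
import numpy as np
df[["A", "B"]] = np.sort(df[["A", "B"]].values, axis=1)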
A final note. I just applied @AndyHayden's NumPy-based solution from above.
flights_sort = flights[["ORG_AIR", "DEST_AIR"]].values
flights_sort.sort(axis=1)
flights[["ORG_AIR", "DEST_AIR"]] = flights_sort
All I can say is … Wow. What an enormous performance difference. I get the exact same correct answer, and I get it as soon as I click the mouse, compared to the pandas lambda solution (also provided by @AndyHayden), which takes about 20 seconds to perform the sort. The dataset is 58,000+ rows; the NumPy solution returns the sort instantly.
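If you want to put numbers on the gap, the standard library's timeit module works directly here; a quick sketch against the flights frame loaded above (timings will of course vary by machine, and the apply version runs only once because it is so much slower):
import timeit
numpy_time = timeit.timeit(
    lambda: np.sort(flights[['ORG_AIR', 'DEST_AIR']].values, axis=1), number=10)
apply_time = timeit.timeit(
    lambda: flights[['ORG_AIR', 'DEST_AIR']].apply(sorted, axis=1), number=1)
print('numpy x10: {:.3f}s, apply x1: {:.3f}s'.format(numpy_time, apply_time))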
I have 2 dataframes, parent and child, and I want to concatenate both in a groupby manner.
df_parent
parent parent_value
0 Super Sun 0
1 Alpha Mars 4
2 Pluto 9
df_child
child value
0 Planet Sun 100
1 one Sun direction 101
2 Ice Pluto Tune 101
3 Life on Mars 99
4 Mars Robot 105
5 Sun Twins 200
I want the output to be in the order order = ['Sun', 'Pluto', 'Mars']:
Sun
- children
Pluto
- children
Mars
- children
Children are matched to their parent by keyword; refer to parent_dict:
parent_dict = {'Super Sun': 'Sun',
               'Alpha Mars': 'Mars',
               'Pluto': 'Pluto'}
expected output
child value
0 Super Sun 0 # parent
1 Planet Sun 100 # child
2 one Sun direction 101 # child
3 Sun Twins 200 # child
4 Pluto 9 # parent
5 Ice Pluto Tune 101 # child
6 Alpha Mars 4 # parent
7 Life on Mars 99 # child
8 Mars Robot 105 # child
So far I have tried iterating the master list and both dataframes, but I don't get the expected output. Here is my code:
output_df = pd.DataFrame()
for o in order:
    key = o
    for j, row in df_parent.iterrows():
        if key in row[0]:
            output_df.at[j, 'parent'] = key
            output_df.at[j, 'value'] = row[1]
            for k, row1 in df_child.iterrows():
                if key in row1[0]:
                    output_df.at[j, 'parent'] = key
                    output_df.at[j, 'value'] = row[1]
print(output_df)
Output:
parent value
0 Sun 0.0
2 Pluto 9.0
1 Mars 4.0
You can concatenate both dataframes after some preparation. First create a column keyword in both df_parent and df_child, used for sorting later. To do so, you can use np.select such as:
import numpy as np
import pandas as pd

order = ['Sun', 'Pluto', 'Mars']
condlist_parent = [df_parent['parent'].str.contains(word) for word in order]
df_parent['keyword'] = np.select(condlist=condlist_parent, choicelist=order, default=None)
condlist_child = [df_child['child'].str.contains(word) for word in order]
df_child['keyword'] = np.select(condlist=condlist_child, choicelist=order, default=None)
giving for example for df_parent:
parent parent_value keyword
0 Super Sun 0 Sun
1 Alpha Mars 4 Mars
2 Pluto 9 Pluto
Now you can combine both frames with pd.concat and use Categorical to order the dataframe according to the list order. The rename is used to fit your expected output and to make the concatenation line up (columns should have the same name in both dataframes).
df_all = pd.concat([df_parent.rename(columns={'parent': 'child', 'parent_value': 'value'}),
                    df_child],
                   ignore_index=True)
# to order the column keyword with the list order
df_all['keyword'] = pd.Categorical(df_all['keyword'], ordered=True, categories=order)
# now sort_values by the column keyword, reset_index and drop the helper column
df_output = (df_all.sort_values('keyword')
             .reset_index(drop=True)
             .drop(columns='keyword'))  # the last two calls are cosmetic
The output is then:
child value
0 Super Sun 0
1 Planet Sun 100
2 one Sun direction 101
3 Sun Twins 200
4 Pluto 9
5 Ice Pluto Tune 101
6 Alpha Mars 4
7 Life on Mars 99
8 Mars Robot 105
Note: the parents come before their children because df_child is concatenated after df_parent, and not the other way round.
Here is one solution, iterating both dataframes, but this seems like a very long procedure:
output_df = pd.DataFrame()
c = 0
for o in order:
    key = o
    for j, row in df_parent.iterrows():
        if key in row[0]:
            output_df.at[c, 'parent'] = row[0]
            output_df.at[c, 'value'] = row[1]
            c += 1
    for k, row1 in df_child.iterrows():
        if key in row1[0]:
            output_df.at[c, 'parent'] = row1[0]
            output_df.at[c, 'value'] = row1[1]
            c += 1
Output:
parent value
0 Super Sun 0.0
1 Planet Sun 100.0
2 one Sun direction 101.0
3 Sun Twins 200.0
4 Pluto 9.0
5 Ice Pluto Tune 101.0
6 Alpha Mars 4.0
7 Life on Mars 99.0
8 Mars Robot 105.0
Consider concatenating both dataframes and ordering by a keyword lookup:
order = ['Sun', 'Pluto', 'Mars']

def find_keyword(str_param):
    output = None
    # ITERATE THROUGH LIST AND RETURN MATCHING POSITION
    for i, v in enumerate(order):
        if v in str_param:
            output = i
    return output
# RENAME COLS AND CONCAT DFs
df_combined = pd.concat([df_parent.rename(columns={'parent': 'item', 'parent_value': 'value'}),
                         df_child.rename(columns={'child': 'item'})],
                        ignore_index=True)
# CREATE KEYWORD COL WITH DEFINED FUNCTION
df_combined['keyword'] = df_combined['item'].apply(find_keyword)
# SORT BY KEYWORD AND DROP HELPER COL
df_combined = (df_combined.sort_values(['keyword', 'value'])
               .drop(columns=['keyword'])
               .reset_index(drop=True))
print(df_combined)
# item value
# 0 Super Sun 0
# 1 Planet Sun 100
# 2 one Sun direction 101
# 3 Sun Twins 200
# 4 Pluto 9
# 5 Ice Pluto Tune 101
# 6 Alpha Mars 4
# 7 Life on Mars 99
# 8 Mars Robot 105
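Worth noting: the parent_dict from the question can replace the substring matching on the parent side, since it already maps each parent name to its keyword; the child names still need a substring search. A one-line sketch usable with either answer above:
df_parent['keyword'] = df_parent['parent'].map(parent_dict)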
I'm trying to convert a string column of my dataset to a float type. Here is some context:
import pandas as pd
import numpy as np
import xlrd
file_location = "/Users/sekr2/Desktop/Jari/Leistungen/leistungen2_2017.xlsx"
workbook = xlrd.open_workbook(file_location)
sheet = workbook.sheet_by_index(0)
df = pd.read_excel("/Users/.../bla.xlsx")
df.head()
Leistungserbringer Anzahl Leistung AL TL TaxW Taxpunkte
0 McGregor Sarah 12 'Konsilium' 147.28 87.47 KVG 234.75
1 McGregor Sarah 12 'Grundberatung' 47.00 67.47 KVG 114.47
2 McGregor Sarah 12 'Extra 5min' 87.28 87.47 KVG 174.75
3 McGregor Sarah 12 'Respirator' 147.28 102.01 KVG 249.29
4 McGregor Sarah 12 'Besuch' 167.28 87.45 KVG 254.73
To keep working on this I need to create a new column:
df['Leistungswert'] = df['Taxpunkte'] * df['Anzahl'] * df['TaxW']
TaxW shows the string 'KVG' for each entry above, and I know from the data that 'KVG' = 0.89. I have hit a wall trying to convert the string into a float. I cannot just hard-code a new float column, because this code should work with further inputs: the TaxW column holds about 7 different entries, each with its own value.
I'm thankful for all information on this matter.
Assuming 'KVG' isn't the only possible string value in TaxW, you should store a mapping of strings to their float equivalent, like this:
map_ = {'KVG' : 0.89, ... } # add more fields here
Then, you can use Series.map:
In [424]: df['Leistungswert'] = df['Taxpunkte'] * df['Anzahl'] * df['TaxW'].map(map_); df['Leistungswert']
Out[424]:
0 2507.1300
1 1222.5396
2 1866.3300
3 2662.4172
4 2720.5164
Name: Leistungswert, dtype: float64
Alternatively, you can use df.transform:
In [435]: df['Leistungswert'] = df.transform(lambda x: x['Taxpunkte'] * x['Anzahl'] * map_[x['TaxW']], axis=1); df['Leistungswert']
Out[435]:
0 2507.1300
1 1222.5396
2 1866.3300
3 2662.4172
4 2720.5164
Name: Leistungswert, dtype: float64
Alternative solution which uses the map_ mapping from @COLDSPEED:
In [237]: df.assign(TaxW=df['TaxW'].map(map_)) \
.eval("Leistungswert = Taxpunkte * Anzahl * TaxW", inplace=False)
Out[237]:
Leistungserbringer Anzahl Leistung AL TL TaxW Taxpunkte Leistungswert
0 McGregor Sarah 12 Konsilium 147.28 87.47 0.89 234.75 2507.1300
1 McGregor Sarah 12 Grundberatung 47.00 67.47 0.89 114.47 1222.5396
2 McGregor Sarah 12 Extra 5min 87.28 87.47 0.89 174.75 1866.3300
3 McGregor Sarah 12 Respirator 147.28 102.01 0.89 249.29 2662.4172
4 McGregor Sarah 12 Besuch 167.28 87.45 0.89 254.73 2720.5164
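One caveat for the Series.map-based variants (an assumption about future inputs, not something visible in this sample): map returns NaN for any TaxW value missing from map_, and that NaN propagates silently into Leistungswert, while the transform variant raises a KeyError instead. A quick guard before computing:
# isin with a dict checks against its keys
unmapped = df.loc[~df['TaxW'].isin(map_), 'TaxW'].unique()
if len(unmapped):
    raise ValueError('no float equivalent for TaxW values: {}'.format(unmapped))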