Ranking with multiple occurrences of ties in pandas - python

I need to rank my df by some columns. Have a look at the print below.
The rows need to be ranked from 1 to 20 by the column df['pontos_na_rodada'].
Any ties - which will occur - have to be broken by the highest value in the column df['saldo_gols']. If a tie persists, it should be broken by the column df['gols_feitos'], and lastly by the columns df['Red Cards'] and df['Yellow Cards'], where for these two columns the lower value is the better one.
Can someone give me a hand?
Example of the data (the printed DataFrame):
league_season league_round fixture_id team.id resultado \
50885 2020 1.0 327986 118 3.0
46622 2020 1.0 327992 119 3.0
50863 2020 1.0 327986 120 0.0
60003 2020 1.0 327987 121 1.0
46637 2020 1.0 327991 123 3.0
46774 2020 1.0 327990 124 0.0
55991 2020 1.0 327994 126 3.0
46700 2020 1.0 327985 127 0.0
46730 2020 1.0 327988 128 1.0
46652 2020 1.0 327991 129 0.0
46758 2020 1.0 327990 130 3.0
50908 2020 1.0 327989 131 1.0
60024 2020 1.0 327987 133 1.0
46684 2020 1.0 327993 134 3.0
50931 2020 1.0 327989 144 1.0
46606 2020 1.0 327992 147 0.0
55970 2020 1.0 327994 151 0.0
46668 2020 1.0 327993 154 0.0
46743 2020 1.0 327988 794 1.0
46714 2020 1.0 327985 1062 3.0
gols_feitos saldo_gols Red Cards Yellow Cards pontos_na_rodada \
50885 2.0 1.0 0.0 3.0 3.0
46622 1.0 1.0 0.0 4.0 3.0
50863 1.0 -1.0 1.0 2.0 0.0
60003 1.0 0.0 0.0 1.0 1.0
46637 3.0 1.0 0.0 3.0 3.0
46774 0.0 -1.0 0.0 3.0 0.0
55991 3.0 3.0 0.0 NaN 3.0
46700 0.0 -1.0 0.0 3.0 0.0
46730 1.0 0.0 0.0 NaN 1.0
46652 2.0 -1.0 0.0 3.0 0.0
46758 1.0 1.0 0.0 2.0 3.0
50908 0.0 0.0 0.0 2.0 1.0
60024 1.0 0.0 0.0 1.0 1.0
46684 2.0 2.0 0.0 2.0 3.0
50931 0.0 0.0 0.0 NaN 1.0
46606 0.0 -1.0 0.0 3.0 0.0
55970 0.0 -3.0 0.0 3.0 0.0
46668 0.0 -2.0 1.0 3.0 0.0
46743 1.0 0.0 0.0 1.0 1.0
46714 1.0 1.0 0.0 2.0 3.0
rank
50885 NaN
46622 NaN
50863 NaN
60003 NaN
46637 NaN
46774 NaN
55991 NaN
46700 NaN
46730 NaN
46652 NaN
46758 NaN
50908 NaN
60024 NaN
46684 NaN
50931 NaN
46606 NaN
55970 NaN
46668 NaN
46743 NaN
46714 NaN

I just figured out an answer to share here:
# Lower card counts are better, so negate them so that an ascending sort
# on every column puts the better team later
df['Red Cards'] = df['Red Cards'] * -1
df['Yellow Cards'] = df['Yellow Cards'] * -1
# Sort by every tiebreaker column, then number the rows within each
# (season, round) group from best (1) to worst (20)
df['rank'] = (df.sort_values(by=['league_season', 'league_round', 'pontos_na_rodada',
                                 'saldo_gols', 'gols_feitos', 'Red Cards', 'Yellow Cards'])
                .groupby(['league_season', 'league_round'])
                .cumcount(ascending=False) + 1)
df[(df['league_round'] == 10) & (df['league_season'] == 2020)].sort_values(by='rank')
The result:
league_season league_round fixture_id team.id resultado \
49809 2020 10.0 328084 119 0.0
50032 2020 10.0 328076 133 3.0
49919 2020 10.0 328079 1062 3.0
49671 2020 10.0 328078 126 1.0
49964 2020 10.0 328077 121 1.0
49855 2020 10.0 328083 127 0.0
49648 2020 10.0 328078 128 1.0
49694 2020 10.0 328080 130 1.0
49740 2020 10.0 328075 124 3.0
49832 2020 10.0 328083 129 3.0
49899 2020 10.0 328081 144 3.0
49717 2020 10.0 328080 154 1.0
49876 2020 10.0 328081 118 0.0
49602 2020 10.0 328082 134 3.0
49987 2020 10.0 328077 123 1.0
49763 2020 10.0 328075 131 0.0
50009 2020 10.0 328076 120 0.0
49786 2020 10.0 328084 151 3.0
49625 2020 10.0 328082 147 0.0
49942 2020 10.0 328079 794 0.0
gols_feitos saldo_gols Red Cards Yellow Cards pontos_na_rodada \
49809 0.0 -1.0 -0.0 -3.0 20.0
50032 3.0 1.0 -0.0 -2.0 18.0
49919 2.0 1.0 -0.0 -1.0 18.0
49671 2.0 0.0 -0.0 -2.0 18.0
49964 2.0 0.0 -1.0 -3.0 18.0
49855 0.0 -2.0 -0.0 NaN 17.0
49648 2.0 0.0 -0.0 -3.0 15.0
49694 1.0 0.0 -1.0 -1.0 15.0
49740 2.0 1.0 -1.0 -2.0 14.0
49832 2.0 2.0 -0.0 -1.0 13.0
49899 1.0 1.0 -0.0 -2.0 13.0
49717 1.0 0.0 -1.0 -2.0 12.0
49876 0.0 -1.0 -1.0 -2.0 12.0
49602 1.0 1.0 -0.0 -4.0 11.0
49987 2.0 0.0 -1.0 -3.0 11.0
49763 1.0 -1.0 -0.0 -4.0 10.0
50009 2.0 -1.0 -0.0 -2.0 9.0
49786 1.0 1.0 -1.0 -4.0 8.0
49625 0.0 -1.0 -1.0 -2.0 8.0
49942 1.0 -1.0 -0.0 -1.0 7.0
rank
49809 1
50032 2
49919 3
49671 4
49964 5
49855 6
49648 7
49694 8
49740 9
49832 10
49899 11
49717 12
49876 13
49602 14
49987 15
49763 16
50009 17
49786 18
49625 19
49942 20
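A variation that produces the same ranking without mutating the card columns: per-column ascending flags take the place of the negation. This is a sketch under the same column names; note that sort_values always places NaN last, so rows with missing card values may tie-break slightly differently:
df['rank'] = (df.sort_values(
                  by=['league_season', 'league_round', 'pontos_na_rodada',
                      'saldo_gols', 'gols_feitos', 'Red Cards', 'Yellow Cards'],
                  ascending=[True, True, False, False, False, True, True])
                .groupby(['league_season', 'league_round'])
                .cumcount() + 1)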

Related

I have a dataset, and when I try to compute its correlation with df.corr(), the output is blank in Jupyter and NaN in Spyder.

My dataset looks like this, and my code is:
import pandas as pd
df = pd.read_csv("Admission_data.csv")
correlations = df.corr()
correlations
admit gre gpa rank
0.0 380.0 3.609999895095825 3.0
1.0 660.0 3.6700000762939453 3.0
1.0 800.0 4.0 1.0
1.0 640.0 3.190000057220459 4.0
0.0 520.0 2.930000066757202 4.0
1.0 760.0 3.0 2.0
1.0 560.0 2.9800000190734863 1.0
0.0 400.0 3.0799999237060547 2.0
1.0 540.0 3.390000104904175 3.0
0.0 700.0 3.9200000762939453 2.0
0.0 800.0 4.0 4.0
0.0 440.0 3.2200000286102295 1.0
1.0 760.0 4.0 1.0
0.0 700.0 3.0799999237060547 2.0
1.0 700.0 4.0 1.0
0.0 480.0 3.440000057220459 3.0
0.0 780.0 3.869999885559082 4.0
0.0 360.0 2.559999942779541 3.0
0.0 800.0 3.75 2.0
1.0 540.0 3.809999942779541 1.0
0.0 500.0 3.1700000762939453 3.0
1.0 660.0 3.630000114440918 2.0
0.0 600.0 2.819999933242798 4.0
0.0 680.0 3.190000057220459 4.0
1.0 760.0 3.3499999046325684 2.0
1.0 800.0 3.6600000858306885 1.0
1.0 620.0 3.609999895095825 1.0
1.0 520.0 3.740000009536743 4.0
1.0 780.0 3.2200000286102295 2.0
0.0 520.0 3.2899999618530273 1.0
0.0 540.0 3.7799999713897705 4.0
0.0 760.0 3.3499999046325684 3.0
0.0 600.0 3.4000000953674316 3.0
1.0 800.0 4.0 3.0
0.0 360.0 3.140000104904175 1.0
0.0 400.0 3.049999952316284 2.0
0.0 580.0 3.25 1.0
0.0 520.0 2.9000000953674316 3.0
1.0 500.0 3.130000114440918 2.0
1.0 520.0 2.680000066757202 3.0
0.0 560.0 2.4200000762939453 2.0
1.0 580.0 3.319999933242798 2.0
1.0 600.0 3.1500000953674316 2.0
0.0 500.0 3.309999942779541 3.0
0.0 700.0 2.940000057220459 2.0
1.0 460.0 3.450000047683716 3.0
1.0 580.0 3.4600000381469727 2.0
0.0 500.0 2.9700000286102295 4.0
0.0 440.0 2.4800000190734863 4.0
0.0 400.0 3.3499999046325684 3.0
0.0 640.0 3.859999895095825 3.0
0.0 440.0 3.130000114440918 4.0
0.0 740.0 3.369999885559082 4.0
1.0 680.0 3.2699999809265137 2.0
0.0 660.0 3.3399999141693115 3.0
1.0 740.0 4.0 3.0
0.0 560.0 3.190000057220459 3.0
0.0 380.0 2.940000057220459 3.0
0.0 400.0 3.6500000953674316 2.0
0.0 600.0 2.819999933242798 4.0
1.0 620.0 3.180000066757202 2.0
0.0 560.0 3.319999933242798 4.0
0.0 640.0 3.6700000762939453 3.0
1.0 680.0 3.8499999046325684 3.0
0.0 580.0 4.0 3.0
0.0 600.0 3.5899999141693115 2.0
0.0 740.0 3.619999885559082 4.0
0.0 620.0 3.299999952316284 1.0
0.0 580.0 3.690000057220459 1.0
0.0 800.0 3.7300000190734863 1.0
0.0 640.0 4.0 3.0
0.0 300.0 2.9200000762939453 4.0
0.0 480.0 3.390000104904175 4.0
0.0 580.0 4.0 2.0
0.0 720.0 3.450000047683716 4.0
0.0 720.0 4.0 3.0
0.0 560.0 3.359999895095825 3.0
1.0 800.0 4.0 3.0
0.0 540.0 3.119999885559082 1.0
1.0 620.0 4.0 1.0
0.0 700.0 2.9000000953674316 4.0
0.0 620.0 3.069999933242798 2.0
0.0 500.0 2.7100000381469727 2.0
0.0 380.0 2.9100000858306885 4.0
1.0 500.0 3.5999999046325684 3.0
0.0 520.0 2.9800000190734863 2.0
0.0 600.0 3.319999933242798 2.0
0.0 600.0 3.4800000190734863 2.0
0.0 700.0 3.2799999713897705 1.0
1.0 660.0 4.0 2.0
0.0 700.0 3.8299999237060547 2.0
1.0 720.0 3.640000104904175 1.0
0.0 800.0 3.9000000953674316 2.0
0.0 580.0 2.930000066757202 2.0
1.0 660.0 3.440000057220459 2.0
0.0 660.0 3.3299999237060547 2.0
0.0 640.0 3.5199999809265137 4.0
0.0 480.0 3.569999933242798 2.0
0.0 700.0 2.880000114440918 2.0
0.0 400.0 3.309999942779541 3.0
I tried the correlation method options, e.g. method='pearson' or numeric_value=True, but nothing works.
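A likely cause worth checking: df.corr() silently drops non-numeric columns, so if the CSV was parsed with object (string) dtypes the result comes back empty or all NaN. A minimal sketch, assuming the file name from the question and that coercing to numeric is what is needed:
import pandas as pd

df = pd.read_csv("Admission_data.csv")

# Force every column to numeric; unparseable values become NaN
df = df.apply(pd.to_numeric, errors="coerce")

# Correlate only the columns that still contain data
correlations = df.dropna(axis=1, how="all").corr()
print(correlations)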

Relative minimum values in pandas

I have the following dataframe in pandas:
Race_ID Athlete_ID Finish_time
0 1.0 1.0 56.1
1 1.0 3.0 60.2
2 1.0 2.0 57.1
3 1.0 4.0 57.2
4 2.0 2.0 56.2
5 2.0 1.0 56.3
6 2.0 3.0 56.4
7 2.0 4.0 56.5
8 3.0 1.0 61.2
9 3.0 2.0 62.1
10 3.0 3.0 60.4
11 3.0 4.0 60.0
12 4.0 2.0 55.0
13 4.0 1.0 54.0
14 4.0 3.0 53.0
15 4.0 4.0 52.0
where Race_ID is in descending order of time (i.e. 1 is the most recent race and 4 is the oldest race).
I want to add a new column Relative_time#t-1, which is the athlete's Finish_time in the previous race relative to the fastest time in that previous race. Hence the output would look something like:
Race_ID Athlete_ID Finish_time Relative_time#t-1
0 1.0 1.0 56.1 56.3/56.2
1 1.0 3.0 60.2 56.4/56.2
2 1.0 2.0 57.1 56.2/56.2
3 1.0 4.0 57.2 56.5/56.2
4 2.0 2.0 56.2 62.1/60
5 2.0 1.0 56.3 61.2/60
6 2.0 3.0 56.4 60.4/60
7 2.0 4.0 56.5 60/60
8 3.0 1.0 61.2 54/52
9 3.0 2.0 62.1 55/52
10 3.0 3.0 60.4 53/52
11 3.0 4.0 60.0 52/52
12 4.0 2.0 55.0 0
13 4.0 1.0 54.0 0
14 4.0 3.0 53.0 0
15 4.0 4.0 52.0 0
Here's the code:
import numpy as np
import pandas as pd

data = [[1,1,56.1,'56.3/56.2'],
[1,3,60.2,'56.4/56.2'],
[1,2,57.1,'56.2/56.2'],
[1,4,57.2,'56.5/56.2'],
[2,2,56.2,'62.1/60'],
[2,1,56.3,'61.2/60'],
[2,3,56.4,'60.4/60'],
[2,4,56.5,'60/60'],
[3,1,61.2,'54/52'],
[3,2,62.1,'55/52'],
[3,3,60.4,'53/52'],
[3,4,60,'52/52'],
[4,2,55,'0'],
[4,1,54,'0'],
[4,3,53,'0'],
[4,4,52,'0']]
df = pd.DataFrame(data,columns=['Race_ID','Athlete_ID','Finish_time','Relative_time#t-1'],dtype=float)
I intentionally made Relative_time#t-1 a str instead of a float to show the formula.
Here is what I have tried:
df.sort_values(by = ['Race_ID', 'Athlete_ID'], ascending=[True, True], inplace=True)
df['Finish_time#t-1'] = df.groupby('Athlete_ID')['Finish_time'].shift(-1)
df['Finish_time#t-1'] = df['Finish_time#t-1'].replace(np.nan, 0, regex = True)
So I get the numerator for the new column but I don't know how to get the minimum time for each Race_ID (i.e. the value in the denominator)
Thank you in advance.
Try this:
(df.groupby('Athlete_ID')['Finish_time']
   .shift(-1)
   .div(df['Race_ID'].map(
       df.groupby('Race_ID')['Finish_time']
         .min()
         .shift(-1)))
   .fillna(0))
Output:
0 1.001779
1 1.003559
2 1.000000
3 1.005338
4 1.035000
5 1.020000
6 1.006667
7 1.000000
8 1.038462
9 1.057692
10 1.019231
11 1.000000
12 0.000000
13 0.000000
14 0.000000
15 0.000000
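The denominator works because groupby('Race_ID')['Finish_time'].min() yields one minimum per race, shift(-1) lines each race up with the minimum of the race before it, and map broadcasts those values back onto the full frame. Spelled out, and assigned to the column name from the question:
prev_race_min = df.groupby('Race_ID')['Finish_time'].min().shift(-1)
# Race_ID: 1 -> 56.2, 2 -> 60.0, 3 -> 52.0, 4 -> NaN
df['Relative_time#t-1'] = (df.groupby('Athlete_ID')['Finish_time'].shift(-1)
                             .div(df['Race_ID'].map(prev_race_min))
                             .fillna(0))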

how to sort dataframe rows in pandas wrt to months from Jan to Dec

How can we sort the rows of the dataframe below by month, from Jan to Dec? Currently the dataframe is in alphabetical order.
0 Col1 Col2 Col3 ... Col22 Col23 Col24
1 April 53.0 0.0 ... 11.0 0.0 0.0
2 August 43.0 0.0 ... 11.0 3.0 5.0
3 December 36.0 0.0 ... 4.0 1.0 0.0
4 February 48.0 0.0 ... 16.0 0.0 0.0
5 January 55.0 0.0 ... 24.0 4.0 0.0
6 July 45.0 0.0 ... 4.0 8.0 1.0
7 June 34.0 0.0 ... 4.0 8.0 1.0
8 March 34.0 2.0 ... 24.0 4.0 1.0
9 May 52.0 1.0 ... 3.0 2.0 1.0
10 November 33.0 0.0 ... 7.0 2.0 3.0
11 October 21.0 1.0 ... 7.0 1.0 2.0
12 September 27.0 0.0 ... 5.0 3.0 3.0
We can also use pd.date_range with the month_name() method and month attribute:
month = pd.date_range(start='2018-01', freq='M', periods=12)
df.loc[df['Col1'].map(dict(zip(month.month_name(), month.month))).sort_values().index]
Col1 Col2 Col3 Col22 Col23 Col24
5 January 55.0 0.0 24.0 4.0 0.0
4 February 48.0 0.0 16.0 0.0 0.0
8 March 34.0 2.0 24.0 4.0 1.0
1 April 53.0 0.0 11.0 0.0 0.0
9 May 52.0 1.0 3.0 2.0 1.0
7 June 34.0 0.0 4.0 8.0 1.0
6 July 45.0 0.0 4.0 8.0 1.0
2 August 43.0 0.0 11.0 3.0 5.0
12 September 27.0 0.0 5.0 3.0 3.0
11 October 21.0 1.0 7.0 1.0 2.0
10 November 33.0 0.0 7.0 2.0 3.0
3 December 36.0 0.0 4.0 1.0 0.0
You can use calendar to create a month-name to month-number mapping, then sort the values and reindex:
import calendar
df.reindex(df['Col1'].map({month: i for i, month in enumerate(calendar.month_name)})
             .sort_values().index)
Col1 Col2 Col3 ... Col22 Col23 Col24
5 January 55.0 0.0 ... 24.0 4.0 0.0
4 February 48.0 0.0 ... 16.0 0.0 0.0
8 March 34.0 2.0 ... 24.0 4.0 1.0
1 April 53.0 0.0 ... 11.0 0.0 0.0
9 May 52.0 1.0 ... 3.0 2.0 1.0
7 June 34.0 0.0 ... 4.0 8.0 1.0
6 July 45.0 0.0 ... 4.0 8.0 1.0
2 August 43.0 0.0 ... 11.0 3.0 5.0
12 September 27.0 0.0 ... 5.0 3.0 3.0
11 October 21.0 1.0 ... 7.0 1.0 2.0
10 November 33.0 0.0 ... 7.0 2.0 3.0
3 December 36.0 0.0 ... 4.0 1.0 0.0
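Another option worth knowing: turn Col1 into an ordered Categorical, after which a plain sort_values does the right thing. A sketch assuming the same Col1 column of month names:
import calendar
import pandas as pd

# Month names in calendar order; calendar.month_name[0] is an empty string
months = list(calendar.month_name)[1:]
df['Col1'] = pd.Categorical(df['Col1'], categories=months, ordered=True)
df = df.sort_values('Col1')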

Python - DataFrame: Multiply multiple columns by another column and save in new columns

I couldn't find an efficient way of doing this.
I have below DataFrame in Python with columns from A to Z
A B C ... Z
0 2.0 8.0 1.0 ... 5.0
1 3.0 9.0 0.0 ... 4.0
2 4.0 9.0 0.0 ... 3.0
3 5.0 8.0 1.0 ... 2.0
4 6.0 8.0 0.0 ... 1.0
5 7.0 9.0 1.0 ... 0.0
I need to multiply each of the columns from B to Z by A (B x A, C x A, ..., Z x A) and save the results in new columns (R1, R2, ..., R25).
I would have something like this:
A B C ... Z R1 R2 ... R25
0 2.0 8.0 1.0 ... 5.0 16.0 2.0 ... 10.0
1 3.0 9.0 0.0 ... 4.0 27.0 0.0 ... 12.0
2 4.0 9.0 0.0 ... 3.0 36.0 0.0 ... 12.0
3 5.0 8.0 1.0 ... 2.0 40.0 5.0 ... 10.0
4 6.0 8.0 0.0 ... 1.0 48.0 0.0 ... 6.0
5 7.0 9.0 1.0 ... 0.0 63.0 7.0 ... 0.0
I was able to calculate the results using the code below, but from there I would need to merge back with the original df, which doesn't sound efficient. There must be a simpler, cleaner way of doing this.
df.loc[:,'B':'D'].multiply(df['A'], axis="index")
That's an example, my real DataFrame has 160 columns x 16k rows.
Create the new column names with a list comprehension and then join to the original:
df1 = df.loc[:,'B':'D'].multiply(df['A'], axis="index")
df1.columns = ['R{}'.format(x) for x in range(1, len(df1.columns) + 1)]
df = df.join(df1)
print (df)
A B C Z R1 R2
0 2.0 8.0 1.0 5.0 16.0 2.0
1 3.0 9.0 0.0 4.0 27.0 0.0
2 4.0 9.0 0.0 3.0 36.0 0.0
3 5.0 8.0 1.0 2.0 40.0 5.0
4 6.0 8.0 0.0 1.0 48.0 0.0
5 7.0 9.0 1.0 0.0 63.0 7.0
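The same idea scales to the real frame (160 columns x 16k rows); a sketch assuming the columns to multiply are everything after A:
df1 = df.iloc[:, 1:].multiply(df['A'], axis='index')   # every column after A
df1.columns = ['R{}'.format(x) for x in range(1, len(df1.columns) + 1)]
df = pd.concat([df, df1], axis=1)   # a single concat, no repeated column inserts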

Efficiently updating NaN's in a pandas dataframe from a prior row & specific columns value

I have a pandas DataFrame; it looks like this:
# Output
# A B C D
# 0 3.0 6.0 7.0 4.0
# 1 42.0 44.0 1.0 3.0
# 2 4.0 2.0 3.0 62.0
# 3 90.0 83.0 53.0 23.0
# 4 22.0 23.0 24.0 NaN
# 5 5.0 2.0 5.0 34.0
# 6 NaN NaN NaN NaN
# 7 NaN NaN NaN NaN
# 8 2.0 12.0 65.0 1.0
# 9 5.0 7.0 32.0 7.0
# 10 2.0 13.0 6.0 12.0
# 11 NaN NaN NaN NaN
# 12 23.0 NaN 23.0 34.0
# 13 61.0 NaN 63.0 3.0
# 14 32.0 43.0 12.0 76.0
# 15 24.0 2.0 34.0 2.0
What I would like to do is fill each NaN with the B value of the nearest preceding row that has one. For column D, however, I would like NaNs replaced with zeros.
I've looked into ffill and fillna, but neither seems able to do the job on its own.
My solution so far:
def fix_abc(row, column, df):
    # If the row/column value is null/nan
    if pd.isnull(row[column]):
        # Get the value of column B from the row before
        prior = row.name
        value = df[prior-1:prior]['B'].values[0]
        # If that value is empty too, keep going to the row before that
        while pd.isnull(value) and prior >= 1:
            prior = prior - 1
            value = df[prior-1:prior]['B'].values[0]
    else:
        value = row[column]
    return value

df['A'] = df.apply(lambda x: fix_abc(x, 'A', df), axis=1)
df['B'] = df.apply(lambda x: fix_abc(x, 'B', df), axis=1)
df['C'] = df.apply(lambda x: fix_abc(x, 'C', df), axis=1)

def fix_d(x):
    if pd.isnull(x['D']):
        return 0
    return x['D']

df['D'] = df.apply(lambda x: fix_d(x), axis=1)
This feels quite inefficient and slow, so I'm wondering if there is a quicker, more efficient way to do it.
Example output;
# A B C D
# 0 3.0 6.0 7.0 3.0
# 1 42.0 44.0 1.0 42.0
# 2 4.0 2.0 3.0 4.0
# 3 90.0 83.0 53.0 90.0
# 4 22.0 23.0 24.0 0.0
# 5 5.0 2.0 5.0 5.0
# 6 2.0 2.0 2.0 0.0
# 7 2.0 2.0 2.0 0.0
# 8 2.0 12.0 65.0 2.0
# 9 5.0 7.0 32.0 5.0
# 10 2.0 13.0 6.0 2.0
# 11 13.0 13.0 13.0 0.0
# 12 23.0 13.0 23.0 23.0
# 13 61.0 13.0 63.0 61.0
# 14 32.0 43.0 12.0 32.0
# 15 24.0 2.0 34.0 24.0
I have dumped the code, including the data for the dataframe, into a Python fiddle (here).
fillna allows for various ways to do the filling. In this case, column D can simply be filled with 0, column B can be filled via pad (forward fill), and then columns A and C can be filled from column B:
Code:
df['D'] = df.D.fillna(0)
df['B'] = df.B.fillna(method='pad')
df['A'] = df.A.fillna(df['B'])
df['C'] = df.C.fillna(df['B'])
Test Code:
import pandas as pd
from io import StringIO

df = pd.read_fwf(StringIO(u"""
A B C D
3.0 6.0 7.0 4.0
42.0 44.0 1.0 3.0
4.0 2.0 3.0 62.0
90.0 83.0 53.0 23.0
22.0 23.0 24.0 NaN
5.0 2.0 5.0 34.0
NaN NaN NaN NaN
NaN NaN NaN NaN
2.0 12.0 65.0 1.0
5.0 7.0 32.0 7.0
2.0 13.0 6.0 12.0
NaN NaN NaN NaN
23.0 NaN 23.0 34.0
61.0 NaN 63.0 3.0
32.0 43.0 12.0 76.0
24.0 2.0 34.0 2.0"""), header=1)
print(df)
df['D'] = df.D.fillna(0)
df['B'] = df.B.fillna(method='pad')
df['A'] = df.A.fillna(df['B'])
df['C'] = df.C.fillna(df['B'])
print(df)
Results:
A B C D
0 3.0 6.0 7.0 4.0
1 42.0 44.0 1.0 3.0
2 4.0 2.0 3.0 62.0
3 90.0 83.0 53.0 23.0
4 22.0 23.0 24.0 NaN
5 5.0 2.0 5.0 34.0
6 NaN NaN NaN NaN
7 NaN NaN NaN NaN
8 2.0 12.0 65.0 1.0
9 5.0 7.0 32.0 7.0
10 2.0 13.0 6.0 12.0
11 NaN NaN NaN NaN
12 23.0 NaN 23.0 34.0
13 61.0 NaN 63.0 3.0
14 32.0 43.0 12.0 76.0
15 24.0 2.0 34.0 2.0
A B C D
0 3.0 6.0 7.0 4.0
1 42.0 44.0 1.0 3.0
2 4.0 2.0 3.0 62.0
3 90.0 83.0 53.0 23.0
4 22.0 23.0 24.0 0.0
5 5.0 2.0 5.0 34.0
6 2.0 2.0 2.0 0.0
7 2.0 2.0 2.0 0.0
8 2.0 12.0 65.0 1.0
9 5.0 7.0 32.0 7.0
10 2.0 13.0 6.0 12.0
11 13.0 13.0 13.0 0.0
12 23.0 13.0 23.0 34.0
13 61.0 13.0 63.0 3.0
14 32.0 43.0 12.0 76.0
15 24.0 2.0 34.0 2.0
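On newer pandas (2.x), fillna(method='pad') is deprecated; an equivalent sketch with the current API would be:
df['D'] = df['D'].fillna(0)
df['B'] = df['B'].ffill()          # pad == forward fill
df['A'] = df['A'].fillna(df['B'])
df['C'] = df['C'].fillna(df['B'])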
