Rename specific columns that have numeric headers to str+number - python

I originally have r CSV files.
I created one dataframe with 9 columns, r of which have numbers as headers.
I would like to target only those columns and rename them to 'Apple' plus a running number, as in ['Apple']+range(len(files)).
Example:
I have 3 csv files.
The current 3 targeted columns in my dataframe are:
0 1 2
0 444.0 286.0 657.0
1 2103.0 2317.0 2577.0
2 157.0 200.0 161.0
3 4000.0 3363.0 4986.0
4 1042.0 541.0 872.0
5 1607.0 1294.0 3305.0
I would like:
Apple1 Apple2 Apple3
0 444.0 286.0 657.0
1 2103.0 2317.0 2577.0
2 157.0 200.0 161.0
3 4000.0 3363.0 4986.0
4 1042.0 541.0 872.0
5 1607.0 1294.0 3305.0
Thank you

IIUC, you can initialise an itertools.count object and reset the columns in a list comprehension:
from itertools import count
cnt = count(1)
df.columns = ['Apple{}'.format(next(cnt)) if str(x).isdigit() else x
              for x in df.columns]
This will also work very well if the digits are not contiguous, but you want them to be renamed with a contiguous suffix:
print(df)
1 Col1 5 Col2 500
0 1240.0 552.0 1238.0 52.0 1370.0
1 633.0 435.0 177.0 2201.0 185.0
2 1518.0 936.0 385.0 288.0 427.0
3 212.0 660.0 320.0 438.0 1403.0
4 15.0 556.0 501.0 1259.0 1298.0
5 177.0 718.0 1420.0 833.0 984.0
cnt = count(1)
df.columns = ['Apple{}'.format(next(cnt)) if str(x).isdigit() else x
              for x in df.columns]
print(df)
Apple1 Col1 Apple2 Col2 Apple3
0 1240.0 552.0 1238.0 52.0 1370.0
1 633.0 435.0 177.0 2201.0 185.0
2 1518.0 936.0 385.0 288.0 427.0
3 212.0 660.0 320.0 438.0 1403.0
4 15.0 556.0 501.0 1259.0 1298.0
5 177.0 718.0 1420.0 833.0 984.0
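In case it helps to test this end to end, here is a minimal self-contained sketch (the toy frame is an assumption, mirroring the non-contiguous example above):
import pandas as pd
from itertools import count

# Toy frame with integer headers mixed in among string headers (assumed data).
df = pd.DataFrame({1: [1240.0, 633.0], 'Col1': [552.0, 435.0], 500: [1370.0, 185.0]})

cnt = count(1)  # yields 1, 2, 3, ... on successive next() calls
df.columns = ['Apple{}'.format(next(cnt)) if str(x).isdigit() else x
              for x in df.columns]
print(df.columns.tolist())  # ['Apple1', 'Col1', 'Apple2']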

You can use rename with a callable (older pandas also accepted a mapper in rename_axis, but that usage is deprecated):
df.rename(columns=lambda x: 'Apple{}'.format(int(x)+1) if str(x).isdigit() else x)
Out[9]:
Apple1 Apple2 Apple3
0 444.0 286.0 657.0
1 2103.0 2317.0 2577.0
2 157.0 200.0 161.0
3 4000.0 3363.0 4986.0
4 1042.0 541.0 872.0
5 1607.0 1294.0 3305.0
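A side note on the ['Apple']+range(len(files)) idea in the question: in Python 3 a list and a range cannot be concatenated directly (it raises a TypeError), and it would not combine the string with each number anyway. Building the names explicitly is simpler; a sketch with a hypothetical file list:
files = ['a.csv', 'b.csv', 'c.csv']  # hypothetical list of input files
new_names = ['Apple{}'.format(i + 1) for i in range(len(files))]
# new_names == ['Apple1', 'Apple2', 'Apple3']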

Related

Add to values of a DataFrame using coordinates

I have a dataframe a:
Out[68]:
p0_4 p5_7 p8_9 p10_14 p15 p16_17 p18_19 p20_24 p25_29 \
0 1360.0 921.0 676.0 1839.0 336.0 668.0 622.0 1190.0 1399.0
1 308.0 197.0 187.0 411.0 67.0 153.0 172.0 336.0 385.0
2 76.0 59.0 40.0 72.0 16.0 36.0 20.0 56.0 82.0
3 765.0 608.0 409.0 1077.0 220.0 359.0 342.0 873.0 911.0
4 1304.0 906.0 660.0 1921.0 375.0 725.0 645.0 1362.0 1474.0
5 195.0 135.0 78.0 262.0 44.0 97.0 100.0 265.0 229.0
6 1036.0 965.0 701.0 1802.0 335.0 701.0 662.0 1321.0 1102.0
7 5072.0 3798.0 2865.0 7334.0 1399.0 2732.0 2603.0 4976.0 4575.0
8 1360.0 962.0 722.0 1758.0 357.0 710.0 713.0 1761.0 1660.0
9 743.0 508.0 369.0 1118.0 286.0 615.0 429.0 738.0 885.0
10 1459.0 1015.0 679.0 1732.0 337.0 746.0 677.0 1493.0 1546.0
11 828.0 519.0 415.0 1057.0 190.0 439.0 379.0 788.0 1024.0
12 1042.0 690.0 503.0 1204.0 219.0 451.0 465.0 1193.0 1406.0
p30_44 p45_59 p60_64 p65_74 p75_84 p85_89 p90plus
0 4776.0 8315.0 2736.0 5463.0 2819.0 738.0 451.0
1 1004.0 2456.0 988.0 2007.0 1139.0 313.0 153.0
2 291.0 529.0 187.0 332.0 108.0 31.0 10.0
3 2807.0 5505.0 2060.0 4104.0 2129.0 516.0 252.0
4 4524.0 9406.0 3034.0 6003.0 3366.0 840.0 471.0
5 806.0 1490.0 606.0 1288.0 664.0 185.0 108.0
6 4127.0 8311.0 2911.0 6111.0 3525.0 1029.0 707.0
7 16917.0 27547.0 8145.0 15950.0 9510.0 2696.0 1714.0
8 5692.0 9380.0 3288.0 6458.0 3830.0 1050.0 577.0
9 2749.0 5696.0 2014.0 4165.0 2352.0 603.0 288.0
10 4676.0 7654.0 2502.0 5077.0 3004.0 754.0 461.0
11 2799.0 4880.0 1875.0 3951.0 2294.0 551.0 361.0
12 3288.0 5661.0 1974.0 4007.0 2343.0 623.0 303.0
and a series d:
Out[70]:
2 p45_59
10 p45_59
11 p45_59
Is there a simple way to add 1 to the values in a at the index/column coordinates given by d?
I have tried:
a[d] +=1
However this adds 1 to every value in the column, not just the values with indices 2, 10 and 11.
Thank you in advance.
You might want to try this:
a.loc[list(d.index), list(d.values)] += 1
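Note that .loc with two lists selects the full cross-product of rows and columns. That is fine here because every entry of d is the same label ('p45_59'), but if d could map different rows to different columns, incrementing pair by pair is safer. A sketch, assuming d as shown above:
# Increment one (row, column) coordinate at a time.
for row, col in zip(d.index, d.values):
    a.loc[row, col] += 1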

Taking the mean value of N last days, including NaNs

I have this data frame:
ID Date X 123_Var 456_Var 789_Var
A 16-07-19 3 777.0 250.0 810.0
A 17-07-19 9 637.0 121.0 529.0
A 20-07-19 2 295.0 272.0 490.0
A 21-07-19 3 778.0 600.0 544.0
A 22-07-19 6 741.0 792.0 907.0
A 25-07-19 6 435.0 416.0 820.0
A 26-07-19 8 590.0 455.0 342.0
A 27-07-19 6 763.0 476.0 753.0
A 02-08-19 6 717.0 211.0 454.0
A 03-08-19 6 152.0 442.0 475.0
A 05-08-19 6 564.0 340.0 302.0
A 07-08-19 6 105.0 929.0 633.0
A 08-08-19 6 948.0 366.0 586.0
B 07-08-19 4 509.0 690.0 406.0
B 08-08-19 2 413.0 725.0 414.0
B 12-08-19 2 170.0 702.0 912.0
B 13-08-19 3 851.0 616.0 477.0
B 14-08-19 9 475.0 447.0 555.0
B 15-08-19 1 412.0 403.0 708.0
B 17-08-19 2 299.0 537.0 321.0
B 18-08-19 4 310.0 119.0 125.0
C 16-07-19 3 777.0 250.0 810.0
C 17-07-19 9 637.0 121.0 529.0
C 20-07-19 2 NaN NaN NaN
C 21-07-19 3 NaN NaN NaN
C 22-07-19 6 741.0 792.0 907.0
C 25-07-19 6 NaN NaN NaN
C 26-07-19 8 590.0 455.0 342.0
C 27-07-19 6 763.0 476.0 753.0
C 02-08-19 6 717.0 211.0 454.0
C 03-08-19 6 NaN NaN NaN
C 05-08-19 6 564.0 340.0 302.0
C 07-08-19 6 NaN NaN NaN
C 08-08-19 6 948.0 366.0 586.0
I want to show the mean value of the last n days (using the Date column), excluding the current day's value.
I'm using this code (what should I do to fix it?):
import pandas as pd

df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
n = 4
cols = df.filter(regex='Var').columns
df = df.set_index('Date')
df_ = df.set_index('ID', append=True).swaplevel(1, 0)
df1 = df.groupby('ID').rolling(window=f'{n+1}D')[cols].count()
df2 = df.groupby('ID').rolling(window=f'{n+1}D')[cols].mean()
df3 = (df1.mul(df2)
          .sub(df_[cols])
          .div(df1[cols].sub(1))
          .add_suffix(f'_{n}'))
df4 = df_.join(df3)
Expected result:
ID Date X 123_Var 456_Var 789_Var 123_Var_4 456_Var_4 789_Var_4
A 16-07-19 3 777.0 250.0 810.0 NaN NaN NaN
A 17-07-19 9 637.0 121.0 529.0 777.000000 250.000000 810.0
A 20-07-19 2 295.0 272.0 490.0 707.000000 185.500000 669.5
A 21-07-19 3 778.0 600.0 544.0 466.000000 196.500000 509.5
A 22-07-19 6 741.0 792.0 907.0 536.500000 436.000000 517.0
A 25-07-19 6 435.0 416.0 820.0 759.500000 696.000000 725.5
A 26-07-19 8 590.0 455.0 342.0 588.000000 604.000000 863.5
A 27-07-19 6 763.0 476.0 753.0 512.500000 435.500000 581.0
A 02-08-19 6 717.0 211.0 454.0 NaN NaN NaN
A 03-08-19 6 152.0 442.0 475.0 717.000000 211.000000 454.0
A 05-08-19 6 564.0 340.0 302.0 434.500000 326.500000 464.5
A 07-08-19 6 105.0 929.0 633.0 358.000000 391.000000 388.5
A 08-08-19 6 948.0 366.0 586.0 334.500000 634.500000 467.5
B 07-08-19 4 509.0 690.0 406.0 NaN NaN NaN
B 08-08-19 2 413.0 725.0 414.0 509.000000 690.000000 406.0
B 12-08-19 2 170.0 702.0 912.0 413.000000 725.000000 414.0
B 13-08-19 3 851.0 616.0 477.0 291.500000 713.500000 663.0
B 14-08-19 9 475.0 447.0 555.0 510.500000 659.000000 694.5
B 15-08-19 1 412.0 403.0 708.0 498.666667 588.333333 648.0
B 17-08-19 2 299.0 537.0 321.0 579.333333 488.666667 580.0
B 18-08-19 4 310.0 119.0 125.0 395.333333 462.333333 528.0
C 16-07-19 3 777.0 250.0 810.0 NaN NaN NaN
C 17-07-19 9 637.0 121.0 529.0 777.000000 250.000000 810.0
C 20-07-19 2 NaN NaN NaN 707.000000 185.500000 669.5
C 21-07-19 3 NaN NaN NaN 637.000000 121.000000 529.0
C 22-07-19 6 741.0 792.0 907.0 NaN NaN NaN
C 25-07-19 6 NaN NaN NaN 741.000000 792.000000 907.0
C 26-07-19 8 590.0 455.0 342.0 741.000000 792.000000 907.0
C 27-07-19 6 763.0 476.0 753.0 590.000000 455.000000 342.0
C 02-08-19 6 717.0 211.0 454.0 NaN NaN NaN
C 03-08-19 6 NaN NaN NaN 717.000000 211.000000 454.0
C 05-08-19 6 564.0 340.0 302.0 717.000000 211.000000 454.0
C 07-08-19 6 NaN NaN NaN 564.000000 340.000000 302.0
C 08-08-19 6 948.0 366.0 586.0 564.000000 340.000000 302.0
The exact number of decimal places doesn't matter.
These threads might help:
Taking the mean value of N last days
Taking the min value of N last days
Try:
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
df1 = (df.groupby('ID')[['Date', '123_Var', '456_Var', '789_Var']]
         .rolling('4D', on='Date', closed='left')
         .mean())
dfx = (df.set_index(['ID', 'Date'])
         .join(df1.reset_index().set_index(['ID', 'Date']), rsuffix='_4')
         .reset_index()
         .drop('level_1', axis=1))
print(dfx.to_string())
ID Date X 123_Var 456_Var 789_Var 123_Var_4 456_Var_4 789_Var_4
0 A 2019-07-16 3 777.0 250.0 810.0 NaN NaN NaN
1 A 2019-07-17 9 637.0 121.0 529.0 777.000000 250.000000 810.0
2 A 2019-07-20 2 295.0 272.0 490.0 707.000000 185.500000 669.5
3 A 2019-07-21 3 778.0 600.0 544.0 466.000000 196.500000 509.5
4 A 2019-07-22 6 741.0 792.0 907.0 536.500000 436.000000 517.0
5 A 2019-07-25 6 435.0 416.0 820.0 759.500000 696.000000 725.5
6 A 2019-07-26 8 590.0 455.0 342.0 588.000000 604.000000 863.5
7 A 2019-07-27 6 763.0 476.0 753.0 512.500000 435.500000 581.0
8 A 2019-08-02 6 717.0 211.0 454.0 NaN NaN NaN
9 A 2019-08-03 6 152.0 442.0 475.0 717.000000 211.000000 454.0
10 A 2019-08-05 6 564.0 340.0 302.0 434.500000 326.500000 464.5
11 A 2019-08-07 6 105.0 929.0 633.0 358.000000 391.000000 388.5
12 A 2019-08-08 6 948.0 366.0 586.0 334.500000 634.500000 467.5
13 B 2019-08-07 4 509.0 690.0 406.0 NaN NaN NaN
14 B 2019-08-08 2 413.0 725.0 414.0 509.000000 690.000000 406.0
15 B 2019-08-12 2 170.0 702.0 912.0 413.000000 725.000000 414.0
16 B 2019-08-13 3 851.0 616.0 477.0 170.000000 702.000000 912.0
17 B 2019-08-14 9 475.0 447.0 555.0 510.500000 659.000000 694.5
18 B 2019-08-15 1 412.0 403.0 708.0 498.666667 588.333333 648.0
19 B 2019-08-17 2 299.0 537.0 321.0 579.333333 488.666667 580.0
20 B 2019-08-18 4 310.0 119.0 125.0 395.333333 462.333333 528.0
21 C 2019-07-16 3 777.0 250.0 810.0 NaN NaN NaN
22 C 2019-07-17 9 637.0 121.0 529.0 777.000000 250.000000 810.0
23 C 2019-07-20 2 NaN NaN NaN 707.000000 185.500000 669.5
24 C 2019-07-21 3 NaN NaN NaN 637.000000 121.000000 529.0
25 C 2019-07-22 6 741.0 792.0 907.0 NaN NaN NaN
26 C 2019-07-25 6 NaN NaN NaN 741.000000 792.000000 907.0
27 C 2019-07-26 8 590.0 455.0 342.0 741.000000 792.000000 907.0
28 C 2019-07-27 6 763.0 476.0 753.0 590.000000 455.000000 342.0
29 C 2019-08-02 6 717.0 211.0 454.0 NaN NaN NaN
30 C 2019-08-03 6 NaN NaN NaN 717.000000 211.000000 454.0
31 C 2019-08-05 6 564.0 340.0 302.0 717.000000 211.000000 454.0
32 C 2019-08-07 6 NaN NaN NaN 564.000000 340.000000 302.0
33 C 2019-08-08 6 948.0 366.0 586.0 564.000000 340.000000 302.0
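The key detail above is closed='left', which excludes the right edge of each time window, so a row's own value never enters its mean. A tiny sketch of the effect on toy data (not from the question):
import pandas as pd

s = pd.DataFrame({'x': [1.0, 2.0, 4.0]},
                 index=pd.date_range('2019-07-16', periods=3, freq='D'))
print(s.rolling('2D').mean())                 # window includes the current day
print(s.rolling('2D', closed='left').mean())  # previous day(s) only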

pandas concat list of dataframes to one dataframe

Trying to combine a list of dataframes into one dataframe. The data looks like:
Date station_id Hour Temp
0 2004-01-01 1 1 46.0
1 2004-01-01 1 2 46.0
2 2004-01-01 1 3 45.0
3 2004-01-01 1 4 41.0
...
433730 2008-06-30 11 3 64.0
433731 2008-06-30 11 4 64.0
433732 2008-06-30 11 5 64.0
433733 2008-06-30 11 6 64.0
This gives me a list of dataframes:
stations = [x for _,x in df.groupby('station_id')]
When I reset the indices of the frames in "stations" and concat, I can get a dataframe, but not the shape I'd like:
for i in range(0, 11):
    stations[i].reset_index(drop=True, inplace=True)
pd.concat(stations, axis=1)
Date station_id Hour Temp Date station_id Hour Temp
0 2004-01-01 1 1 46.0 2004-01-01 2 1 38.0
1 2004-01-01 1 2 46.0 2004-01-01 2 2 36.0
2 2004-01-01 1 3 45.0 2004-01-01 2 3 35.0
3 2004-01-01 1 4 41.0 2004-01-01 2 4 30.0
I'd much rather get to a df like this:
Date Hour Stn1 Stn2
0 2004-01-01 1 46.0 38.0
1 2004-01-01 2 46.0 36.0
2 2004-01-01 3 45.0 35.0
3 2004-01-01 4 41.0 30.0
How do I do this?
Based on your expected output, you are looking for a pivot table with index=['Date', 'Hour'], columns='station_id', values='Temp'. Demo:
# A bunch of example data
df
Date station_id Hour Temp
0 2004-01-01 1 1 10.0
1 2004-01-01 1 2 20.0
2 2004-01-01 1 3 30.0
3 2004-01-01 1 4 40.0
4 2004-01-01 2 1 50.0
5 2004-01-01 2 2 60.0
6 2004-01-01 2 3 70.0
7 2004-01-01 2 4 80.0
8 2004-01-01 3 1 90.0
9 2004-01-01 3 2 100.0
10 2004-01-02 3 1 110.0
11 2004-01-02 3 2 120.0
12 2004-01-01 4 4 130.0
13 2004-01-02 4 5 140.0
# Create pivot table, with ['Date', 'Hour'] in a MultiIndex
res = df.pivot_table(columns='station_id', index=['Date', 'Hour'], values='Temp')
# Add 'Stn' prefix to each column name
res = res.add_prefix('Stn')
# Remove the name of the columns' index, which is 'station_id'
res.columns.name = None
# Reset MultiIndex into columns
res.reset_index(inplace=True)
res
Date Hour Stn1 Stn2 Stn3 Stn4
0 2004-01-01 1 10.0 50.0 90.0 NaN
1 2004-01-01 2 20.0 60.0 100.0 NaN
2 2004-01-01 3 30.0 70.0 NaN NaN
3 2004-01-01 4 40.0 80.0 NaN 130.0
4 2004-01-02 1 NaN NaN 110.0 NaN
5 2004-01-02 2 NaN NaN 120.0 NaN
6 2004-01-02 5 NaN NaN NaN 140.0
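A note on the choice of pivot_table here: it silently aggregates duplicate (Date, Hour, station_id) combinations (the default aggfunc is 'mean'). If duplicates should raise an error instead of being averaged, DataFrame.pivot can be used; a sketch, assuming pandas >= 1.1 for the list-valued index:
res = df.pivot(index=['Date', 'Hour'], columns='station_id', values='Temp').add_prefix('Stn')
res.columns.name = None  # drop the 'station_id' axis name
res = res.reset_index()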
For what it's worth, this gets me where I want to go.
stations = [x for _, x in df.groupby('station_id')]
for i in range(0, 11):
    stations[i].reset_index(drop=True, inplace=True)
    stations[i].rename(columns={'Temp': 'Stn' + str(i + 1)}, inplace=True)
    stations[i].drop(columns='station_id', inplace=True)
    if i > 0:
        stations[i].drop(columns=['Date', 'Hour'], inplace=True)
stations = pd.concat(stations, axis=1)
Feels a bit brute force to me, though. Additional pythonic suggestions welcome.

find duplicates and mark as variant

I'm trying to create a data frame where I mark duplicates as variants in a column. To further illustrate my question:
I have a pandas dataframe like this:
Case ButtonAsInteger
0 1 130
1 1 133
2 1 42
3 2 165
4 2 158
5 2 157
6 3 158
7 3 159
8 3 157
9 4 130
10 4 133
11 4 43
... ... ...
I have converted it into this form:
grouped = activity2.groupby(['Case'])
values = grouped['ButtonAsInteger'].agg('sum')
id_df = grouped['ButtonAsInteger'].apply(lambda x: pd.Series(x.values)).unstack(level=-1)
0 1 2 3 4 5 6 7 8 9
Case
1 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN
2 165.0 158.0 157.0 141.0 142.0 142.0 142.0 142.0 142.0 147.0
3 158.0 159.0 157.0 147.0 166.0 170.0 169.0 130.0 133.0 133.0
4 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN
And now I want to find duplicates and mark each duplicate as a variant. So in this example, Case 1 and 4 should get variant 1. Like this:
Variants 0 1 2 3 4 5 6 7 8 9
Case
1 1 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN
2 2 165.0 158.0 157.0 141.0 142.0 142.0 142.0 142.0 142.0 147.0
3 3 158.0 159.0 157.0 147.0 166.0 170.0 169.0 130.0 133.0 133.0
4 1 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN
I have already tried the method from https://stackoverflow.com/a/44999009, but it doesn't work on my data frame, and unfortunately I don't know why.
It would probably be possible to use a double for loop, checking each row against every other row for duplicates, but I doubt that would be efficient on a large dataset.
I have also included my grouping procedure above, because perhaps duplicates can already be handled at that point?
This groups by all columns and returns the group index (+ 1, because zero-based indexing is the default). I think this should be what you want.
id_df['Variant'] = id_df.groupby(
    id_df.columns.values.tolist()).grouper.group_info[0] + 1
The resulting data frame, given input data like yours above:
0 1 2 Variant
Case
1 130 133 42 1
2 165 158 157 3
3 158 159 157 2
4 130 133 42 1
There could be a syntactically nicer way to access the group index, but I didn't find one.
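As a follow-up, recent pandas exposes this group number through the public GroupBy.ngroup method, which avoids reaching into the .grouper internals. A sketch (dropna=False requires pandas >= 1.1 and keeps rows whose key contains NaN in their own groups):
cols = id_df.columns.values.tolist()
id_df['Variant'] = id_df.groupby(cols, dropna=False).ngroup() + 1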

Python: Create a new column of date from an existing column of date by subtracting consecutive rows [duplicate]

This question already has answers here:
Adding a column thats result of difference in consecutive rows in pandas
(4 answers)
Closed 5 years ago.
Code:
import pandas as pd
df = pd.read_csv('xyz.csv', usecols=['transaction_date', 'amount'])
df=pd.concat(g for _, g in df.groupby("amount") if len(g) > 3)
df=df.reset_index(drop=True)
print(df)
Output:
transaction_date amount
0 2016-06-02 50.0
1 2016-06-02 50.0
2 2016-06-02 50.0
3 2016-06-02 50.0
4 2016-06-02 50.0
5 2016-06-02 50.0
6 2016-07-04 50.0
7 2016-07-04 50.0
8 2016-09-29 225.0
9 2016-10-29 225.0
10 2016-11-29 225.0
11 2016-12-30 225.0
12 2017-01-30 225.0
13 2016-05-16 1000.0
14 2016-05-20 1000.0
I need to add another column next to the amount column which gives the difference between corresponding rows of transaction_date
e.g.
transaction_date amount delta(days)
0 2016-06-02 50.0 -
1 2016-06-02 50.0 0
2 2016-06-02 50.0 0
3 2016-06-02 50.0 0
4 2016-06-02 50.0 0
5 2016-06-02 50.0 0
6 2016-07-04 50.0 32
7 2016-07-04 50.0 .
8 2016-09-29 225.0 .
9 2016-10-29 225.0 .
10 2016-11-29 225.0
There are probably better methods, but you can use pandas.Series.shift (note that shift(-1) gives next-row minus current-row, so each difference lands on the earlier row):
>>> df.transaction_date.shift(-1) - df.transaction_date
0 0 days
1 0 days
2 0 days
3 0 days
4 0 days
5 32 days
6 0 days
7 87 days
8 30 days
9 31 days
10 31 days
11 31 days
12 -259 days
13 4 days
14 NaT
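Note that this assumes transaction_date already has a datetime dtype; if it came out of read_csv as strings, convert it first:
df['transaction_date'] = pd.to_datetime(df['transaction_date'])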
I think you need diff + dt.days:
df['delta(days)'] = df['transaction_date'].diff().dt.days
print (df)
transaction_date amount delta(days)
0 2016-06-02 50.0 NaN
1 2016-06-02 50.0 0.0
2 2016-06-02 50.0 0.0
3 2016-06-02 50.0 0.0
4 2016-06-02 50.0 0.0
5 2016-06-02 50.0 0.0
6 2016-07-04 50.0 32.0
7 2016-07-04 50.0 0.0
8 2016-09-29 225.0 87.0
9 2016-10-29 225.0 30.0
10 2016-11-29 225.0 31.0
11 2016-12-30 225.0 31.0
12 2017-01-30 225.0 31.0
13 2016-05-16 1000.0 -259.0
14 2016-05-20 1000.0 4.0
But if you need to compute it per group, add groupby:
df['delta(days)'] = df.groupby('amount')['transaction_date'].diff().dt.days
print (df)
transaction_date amount delta(days)
0 2016-06-02 50.0 NaN
1 2016-06-02 50.0 0.0
2 2016-06-02 50.0 0.0
3 2016-06-02 50.0 0.0
4 2016-06-02 50.0 0.0
5 2016-06-02 50.0 0.0
6 2016-07-04 50.0 32.0
7 2016-07-04 50.0 0.0
8 2016-09-29 225.0 NaN
9 2016-10-29 225.0 30.0
10 2016-11-29 225.0 31.0
11 2016-12-30 225.0 31.0
12 2017-01-30 225.0 31.0
13 2016-05-16 1000.0 NaN
14 2016-05-20 1000.0 4.0
To get exactly the output you've requested (sorting optional), use shift to get the timedelta, then dt.days to get an int:
df.transaction_date = pd.to_datetime(df.transaction_date)
df.sort_values('transaction_date', inplace=True)
df['delta(days)'] = (df['transaction_date'] - df['transaction_date'].shift(1)).dt.days
Output:
transaction_date amount delta(days)
13 2016-05-16 1000.0 NaN
14 2016-05-20 1000.0 4.0
0 2016-06-02 50.0 13.0
1 2016-06-02 50.0 0.0
2 2016-06-02 50.0 0.0
3 2016-06-02 50.0 0.0
4 2016-06-02 50.0 0.0
5 2016-06-02 50.0 0.0
6 2016-07-04 50.0 32.0
7 2016-07-04 50.0 0.0
8 2016-09-29 225.0 87.0
9 2016-10-29 225.0 30.0
10 2016-11-29 225.0 31.0
11 2016-12-30 225.0 31.0
12 2017-01-30 225.0 31.0
