New variable that connects year and quarter - python

Hi, I am a Stata user trying to port my code to pandas. I have panel data, as shown below, and I am looking for a command that creates a constant variable according to the year and quarter each row belongs to. In Stata this would be gen new_variable = yq(year, quarter).
My dataframe looks like this:
id year quarter
1 2007 1
1 2007 2
1 2007 3
1 2007 4
1 2008 1
1 2008 2
1 2008 3
1 2008 4
1 2009 1
1 2009 2
1 2009 3
1 2009 4
2 2007 1
2 2007 2
2 2007 3
2 2007 4
2 2008 1
2 2008 2
2 2008 3
2 2008 4
3 2009 2
3 2009 3
3 2010 2
3 2010 3
My expected output should look like this (the values inside new_variable are arbitrary; I just need a value that is always the same for a given year and quarter, and consecutive across quarters):
id year quarter new_variable
1 2007 1 220
1 2007 2 221
1 2007 3 222
1 2007 4 223
1 2008 1 224
1 2008 2 225
1 2008 3 226
1 2008 4 227
1 2009 1 228
1 2009 2 229
1 2009 3 230
1 2009 4 231
2 2007 1 220
2 2007 2 221
2 2007 3 222
2 2007 4 223
2 2008 1 224
2 2008 2 225
2 2008 3 226
2 2008 4 227
3 2009 2 229
3 2009 3 230
3 2010 2 233
3 2010 3 234

My solution extends @johnchase's idea: build a dictionary from the Cartesian product of year and quarter that maps the string representation year + quarter to consecutive integers.
import pandas as pd

# Sort so the enumeration is chronological even if the data is not
ys = sorted(df['year'].unique())
qs = sorted(df['quarter'].unique())
new_idx = pd.MultiIndex.from_product([ys, qs], names=['year', 'quarter'])
yq = [''.join([str(a), str(b)]) for a, b in new_idx.values]
# yq
# ['20071', '20072', '20073', '20074',
#  '20081', '20082', '20083', '20084',
#  '20091', '20092', '20093', '20094',
#  '20101', '20102', '20103', '20104']

# Map each year-quarter string to a consecutive integer starting at 220
mapper = {k: i + 220 for i, k in enumerate(yq)}
df['new_variable'] = (df['year'].astype(str) + df['quarter'].astype(str)).map(mapper)
df
id year quarter new_variable
0 1 2007 1 220
1 1 2007 2 221
2 1 2007 3 222
3 1 2007 4 223
4 1 2008 1 224
5 1 2008 2 225
6 1 2008 3 226
7 1 2008 4 227
8 1 2009 1 228
9 1 2009 2 229
10 1 2009 3 230
11 1 2009 4 231
12 2 2007 1 220
13 2 2007 2 221
14 2 2007 3 222
15 2 2007 4 223
16 2 2008 1 224
17 2 2008 2 225
18 2 2008 3 226
19 2 2008 4 227
20 3 2009 2 229
21 3 2009 3 230
22 3 2010 2 233
23 3 2010 3 234
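
For reference, the mapper isn't strictly necessary. A minimal arithmetic sketch, assuming quarters always run 1-4; the offset is chosen so that 2007q1 maps to 220, mirroring Stata's yq(), which likewise counts quarters from a fixed epoch:

# quarters elapsed since 2007q1, shifted so that 2007q1 -> 220
df['new_variable'] = (df['year'] - 2007) * 4 + (df['quarter'] - 1) + 220

pandas also has a quarterly period type, the closest analogue of yq(); its integer ordinals count quarters since 1970q1 instead of 220. Note that building a PeriodIndex from year/quarter fields is deprecated in recent pandas versions:

df['new_variable'] = pd.PeriodIndex(year=df['year'], quarter=df['quarter'], freq='Q').asi8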

Related

How to create multiple triangles based on given number of simulations?

Below is my code:
import chainladder as cl

triangle = cl.load_sample('genins')
# Use the bootstrap sampler to get resampled triangles
bootstrapdataframe = cl.BootstrapODPSample(n_sims=4, random_state=42).fit(triangle).resampled_triangles_
# Convert to a dataframe
resampledtriangledf = bootstrapdataframe.to_frame()
print(resampledtriangledf)
In the above code I set n_sims (the number of simulations) to 4, which generates the dataframe below:
0 2001 12 254,926
0 2001 24 535,877
0 2001 36 1,355,613
0 2001 48 2,034,557
0 2001 60 2,311,789
0 2001 72 2,539,807
0 2001 84 2,724,773
0 2001 96 3,187,095
0 2001 108 3,498,646
0 2001 120 3,586,037
0 2002 12 542,369
0 2002 24 1,016,927
0 2002 36 2,201,329
0 2002 48 2,923,381
0 2002 60 3,711,305
0 2002 72 3,914,829
0 2002 84 4,385,757
0 2002 96 4,596,072
0 2002 108 5,047,861
0 2003 12 235,361
0 2003 24 960,355
0 2003 36 1,661,972
0 2003 48 2,643,370
0 2003 60 3,372,684
0 2003 72 3,642,605
0 2003 84 4,160,583
0 2003 96 4,480,332
0 2004 12 764,553
0 2004 24 1,703,557
0 2004 36 2,498,418
0 2004 48 3,198,358
0 2004 60 3,524,562
0 2004 72 3,884,971
0 2004 84 4,268,241
0 2005 12 381,670
0 2005 24 1,124,054
0 2005 36 2,026,434
0 2005 48 2,863,902
0 2005 60 3,039,322
0 2005 72 3,288,253
0 2006 12 320,332
0 2006 24 1,022,323
0 2006 36 1,830,842
0 2006 48 2,676,710
0 2006 60 3,375,172
0 2007 12 330,361
0 2007 24 1,463,348
0 2007 36 2,771,839
0 2007 48 4,003,745
0 2008 12 282,143
0 2008 24 1,782,267
0 2008 36 2,898,699
0 2009 12 362,726
0 2009 24 1,277,750
0 2010 12 321,247
1 2001 12 219,021
1 2001 24 755,975
1 2001 36 1,360,298
1 2001 48 2,062,947
1 2001 60 2,356,983
1 2001 72 2,781,187
1 2001 84 2,987,837
1 2001 96 3,118,952
1 2001 108 3,307,522
1 2001 120 3,455,107
1 2002 12 302,932
1 2002 24 1,022,459
1 2002 36 1,634,938
1 2002 48 2,538,708
1 2002 60 3,005,695
1 2002 72 3,274,719
1 2002 84 3,356,499
1 2002 96 3,595,361
1 2002 108 4,100,065
1 2003 12 489,934
1 2003 24 1,233,438
1 2003 36 2,471,849
1 2003 48 3,672,629
1 2003 60 4,157,489
1 2003 72 4,498,470
1 2003 84 4,587,579
1 2003 96 4,816,232
1 2004 12 518,680
1 2004 24 1,209,705
1 2004 36 2,019,757
1 2004 48 2,997,820
1 2004 60 3,630,442
1 2004 72 3,881,093
1 2004 84 4,080,322
1 2005 12 453,963
1 2005 24 1,458,504
1 2005 36 2,036,506
1 2005 48 2,846,464
1 2005 60 3,280,124
1 2005 72 3,544,597
1 2006 12 369,755
1 2006 24 1,209,117
1 2006 36 1,973,136
1 2006 48 3,034,294
1 2006 60 3,537,784
1 2007 12 477,788
1 2007 24 1,524,537
1 2007 36 2,170,391
1 2007 48 3,355,093
1 2008 12 250,690
1 2008 24 1,546,986
1 2008 36 2,996,737
1 2009 12 271,270
1 2009 24 1,446,353
1 2010 12 510,114
2 2001 12 170,866
2 2001 24 797,338
2 2001 36 1,663,610
2 2001 48 2,293,697
2 2001 60 2,607,067
2 2001 72 2,979,479
2 2001 84 3,127,308
2 2001 96 3,285,338
2 2001 108 3,574,272
2 2001 120 3,630,610
2 2002 12 259,060
2 2002 24 1,011,092
2 2002 36 1,851,504
2 2002 48 2,705,313
2 2002 60 3,195,774
2 2002 72 3,766,008
2 2002 84 3,944,417
2 2002 96 4,234,043
2 2002 108 4,763,664
2 2003 12 239,981
2 2003 24 983,484
2 2003 36 1,929,785
2 2003 48 2,497,929
2 2003 60 2,972,887
2 2003 72 3,313,868
2 2003 84 3,727,432
2 2003 96 4,024,122
2 2004 12 77,522
2 2004 24 729,401
2 2004 36 1,473,914
2 2004 48 2,376,313
2 2004 60 2,999,197
2 2004 72 3,372,020
2 2004 84 3,887,883
2 2005 12 321,598
2 2005 24 1,132,502
2 2005 36 1,710,504
2 2005 48 2,438,620
2 2005 60 2,801,957
2 2005 72 3,182,466
2 2006 12 255,407
2 2006 24 1,275,141
2 2006 36 2,083,421
2 2006 48 3,144,579
2 2006 60 3,891,772
2 2007 12 338,120
2 2007 24 1,275,697
2 2007 36 2,238,715
2 2007 48 3,615,323
2 2008 12 310,214
2 2008 24 1,237,156
2 2008 36 2,563,326
2 2009 12 271,093
2 2009 24 1,523,131
2 2010 12 430,591
3 2001 12 330,887
3 2001 24 831,193
3 2001 36 1,601,374
3 2001 48 2,188,879
3 2001 60 2,662,773
3 2001 72 3,086,976
3 2001 84 3,332,247
3 2001 96 3,317,279
3 2001 108 3,576,659
3 2001 120 3,613,563
3 2002 12 358,263
3 2002 24 1,139,259
3 2002 36 2,236,375
3 2002 48 3,163,464
3 2002 60 3,715,130
3 2002 72 4,295,638
3 2002 84 4,502,105
3 2002 96 4,769,139
3 2002 108 5,323,304
3 2003 12 489,934
3 2003 24 1,570,352
3 2003 36 3,123,215
3 2003 48 4,189,299
3 2003 60 4,819,070
3 2003 72 5,306,689
3 2003 84 5,560,371
3 2003 96 5,827,003
3 2004 12 419,727
3 2004 24 1,308,884
3 2004 36 2,118,936
3 2004 48 2,906,732
3 2004 60 3,561,577
3 2004 72 3,934,400
3 2004 84 4,010,511
3 2005 12 389,217
3 2005 24 1,173,226
3 2005 36 1,794,216
3 2005 48 2,528,910
3 2005 60 3,474,035
3 2005 72 3,908,999
3 2006 12 291,940
3 2006 24 1,136,674
3 2006 36 1,915,614
3 2006 48 2,693,930
3 2006 60 3,375,601
3 2007 12 506,055
3 2007 24 1,684,660
3 2007 36 2,678,739
3 2007 48 3,545,156
3 2008 12 282,143
3 2008 24 1,536,490
3 2008 36 2,458,789
3 2009 12 271,093
3 2009 24 1,199,897
3 2010 12 266,359
Using the above dataframe, I have to create 4 triangles based on the Total column.
For example:
Row Labels 12 24 36 48 60 72 84 96 108 120 Grand Total
2001 254,926 535,877 1,355,613 2,034,557 2,311,789 2,539,807 2,724,773 3,187,095 3,498,646 3,586,037 22,029,119
2002 542,369 1,016,927 2,201,329 2,923,381 3,711,305 3,914,829 4,385,757 4,596,072 5,047,861 28,339,832
2003 235,361 960,355 1,661,972 2,643,370 3,372,684 3,642,605 4,160,583 4,480,332 21,157,261
2004 764,553 1,703,557 2,498,418 3,198,358 3,524,562 3,884,971 4,268,241 19,842,659
2005 381,670 1,124,054 2,026,434 2,863,902 3,039,322 3,288,253 12,723,635
2006 320,332 1,022,323 1,830,842 2,676,710 3,375,172 9,225,377
2007 330,361 1,463,348 2,771,839 4,003,745 8,569,294
2008 282,143 1,782,267 2,898,699 4,963,110
2009 362,726 1,277,750 1,640,475
2010 321,247 321,247
Grand Total 3,795,687 10,886,456 17,245,147 20,344,022 19,334,833 17,270,466 15,539,355 12,263,499 8,546,507 3,586,037 128,812,009
...
Like this I need 4 triangles (4 being the number of simulations) from the first dataframe.
If the user passes n_sims=900, then 900 sets of totals are created, and from these we have to create 900 triangles.
Above I displayed only the triangle for simulation 0, but I need the triangles for 1, 2 and 3 as well.
Try:
# Strip thousands separators so the values are numeric
df['sample_size'] = pd.to_numeric(df['sample_size'].str.replace(',', ''))
# Pivot into a year x development-age triangle, then append a Grand Total row
df.pivot_table('sample_size', 'year', 'no', aggfunc='first')\
  .pipe(lambda x: pd.concat([x, x.sum().to_frame('Grand Total').T]))
Output:
no 12 24 36 48 60 72 84 96 108 120
2001 254926.0 535877.0 1355613.0 2034557.0 2311789.0 2539807.0 2724773.0 3187095.0 3498646.0 3586037.0
2002 542369.0 1016927.0 2201329.0 2923381.0 3711305.0 3914829.0 4385757.0 4596072.0 5047861.0 NaN
2003 235361.0 960355.0 1661972.0 2643370.0 3372684.0 3642605.0 4160583.0 4480332.0 NaN NaN
2004 764553.0 1703557.0 2498418.0 3198358.0 3524562.0 3884971.0 4268241.0 NaN NaN NaN
2005 381670.0 1124054.0 2026434.0 2863902.0 3039322.0 3288253.0 NaN NaN NaN NaN
2006 320332.0 1022323.0 1830842.0 2676710.0 3375172.0 NaN NaN NaN NaN NaN
2007 330361.0 1463348.0 2771839.0 4003745.0 NaN NaN NaN NaN NaN NaN
2008 282143.0 1782267.0 2898699.0 NaN NaN NaN NaN NaN NaN NaN
2009 362726.0 1277750.0 NaN NaN NaN NaN NaN NaN NaN NaN
2010 321247.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
Grand Total 3795688.0 10886458.0 17245146.0 20344023.0 19334834.0 17270465.0 15539354.0 12263499.0 8546507.0 3586037.0
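
The pivot above rebuilds a single triangle: aggfunc='first' keeps the first value per year/age pair, i.e. simulation 0. To get one triangle per simulation, a sketch that applies the same pivot per simulation group; here 'sim' is a hypothetical name for the first column holding the simulation number:

triangles = {}
for sim, g in df.groupby('sim'):
    t = g.pivot_table('sample_size', 'year', 'no', aggfunc='first')
    # append the Grand Total row as above
    triangles[sim] = pd.concat([t, t.sum().to_frame('Grand Total').T])

triangles[0]  # the triangle shown above; triangles[1], [2], [3] likewise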

How to replace values in a column of panel data with values from a list in Python?

I have a database in panel data form:
Date id variable1 variable2
2015 1 10 200
2016 1 17 300
2017 1 8 400
2018 1 11 500
2015 2 12 150
2016 2 19 350
2017 2 15 250
2018 2 9 450
2015 3 20 100
2016 3 8 220
2017 3 12 310
2018 3 14 350
And I have a list with the labels for the ids:
List = ['Argentina', 'Brazil','Chile']
I want to replace the values of id with the labels from my list, so that the output looks like this (thanks in advance):
Date id variable1 variable2
2015 Argentina 10 200
2016 Argentina 17 300
2017 Argentina 8 400
2018 Argentina 11 500
2015 Brazil 12 150
2016 Brazil 19 350
2017 Brazil 15 250
2018 Brazil 9 450
2015 Chile 20 100
2016 Chile 8 220
2017 Chile 12 310
2018 Chile 14 350
map is the way to go, with enumerate:
d = {k:v for k,v in enumerate(List, start=1)}
df['id'] = df['id'].map(d)
Output:
Date id variable1 variable2
0 2015 Argentina 10 200
1 2016 Argentina 17 300
2 2017 Argentina 8 400
3 2018 Argentina 11 500
4 2015 Brazil 12 150
5 2016 Brazil 19 350
6 2017 Brazil 15 250
7 2018 Brazil 9 450
8 2015 Chile 20 100
9 2016 Chile 8 220
10 2017 Chile 12 310
11 2018 Chile 14 350
Try:
df['id'] = df['id'].map({1: 'Argentina', 2: 'Brazil', 3: 'Chile'})
or
df['id'] = df['id'].map({k+1: v for k, v in enumerate(List)})
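If the ids are not guaranteed to run 1..n, a variant that builds the mapping from the sorted unique ids instead (still assuming List is ordered to match them):
d = dict(zip(sorted(df['id'].unique()), List))
df['id'] = df['id'].map(d)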

How to find ChangeCol1/ChangeCol2 and %ChangeCol1/%ChangeCol2 of DF

I have data that looks like this.
Year Quarter Quantity Price TotalRevenue
0 2000 1 23 142 3266
1 2000 2 23 144 3312
2 2000 3 23 147 3381
3 2000 4 23 151 3473
4 2001 1 22 160 3520
5 2001 2 22 183 4026
6 2001 3 22 186 4092
7 2001 4 22 186 4092
8 2002 1 21 212 4452
9 2002 2 19 232 4408
10 2002 3 19 223 4237
I'm trying to figure out how to get the 'MarginalRevenue', where:
MR = (∆TR/∆Q)
MarginalRevenue = (Change in TotalRevenue) / (Change in Quantity)
I found: df.pct_change()
But that seems to get the percentage change for everything.
Also, I'm trying to figure out how to get something related:
ElasticityPrice = (%ΔQuantity/%ΔPrice)
Do you mean something like this?
df['MarginalRevenue'] = df['TotalRevenue'].pct_change() / df['Quantity'].pct_change()
or
df['MarginalRevenue'] = df['TotalRevenue'].diff() / df['Quantity'].diff()
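For the price elasticity defined in the question (%ΔQuantity / %ΔPrice), the same pct_change pattern applies:
df['ElasticityPrice'] = df['Quantity'].pct_change() / df['Price'].pct_change()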

Grouping data series by day intervals with Pandas

I have to perform some data analysis on a seasonal basis.
I have about one and a half years' worth of hourly measurements, from the end of 2015 to the second half of 2017, and I want to sort this data into seasons.
Here's an example of the data I am working with:
Date,Year,Month,Day,Day week,Hour,Holiday,Week Day,Impulse,Power (kW),Temperature (C)
04/12/2015,2015,12,4,6,18,0,6,2968,1781,16.2
04/12/2015,2015,12,4,6,19,0,6,2437,1462,16.2
19/04/2016,2016,4,19,3,3,0,3,1348,809,14.4
19/04/2016,2016,4,19,3,4,0,3,1353,812,14.1
11/06/2016,2016,6,11,7,19,0,7,1395,837,18.8
11/06/2016,2016,6,11,7,20,0,7,1370,822,17.4
11/06/2016,2016,6,11,7,21,0,7,1364,818,17
11/06/2016,2016,6,11,7,22,0,7,1433,860,17.5
04/12/2016,2016,12,4,1,17,0,1,1425,855,14.6
04/12/2016,2016,12,4,1,18,0,1,1466,880,14.4
07/03/2017,2017,3,7,3,14,0,3,3668,2201,14.2
07/03/2017,2017,3,7,3,15,0,3,3666,2200,14
24/04/2017,2017,4,24,2,5,0,2,1347,808,11.4
24/04/2017,2017,4,24,2,6,0,2,1816,1090,11.5
24/04/2017,2017,4,24,2,7,0,2,2918,1751,12.4
15/06/2017,2017,6,15,5,13,1,1,2590,1554,22.5
15/06/2017,2017,6,15,5,14,1,1,2629,1577,22.5
15/06/2017,2017,6,15,5,15,1,1,2656,1594,22.1
15/11/2017,2017,11,15,4,13,0,4,3765,2259,15.6
15/11/2017,2017,11,15,4,14,0,4,3873,2324,15.9
15/11/2017,2017,11,15,4,15,0,4,3905,2343,15.8
15/11/2017,2017,11,15,4,16,0,4,3861,2317,15.3
As you can see I have data on three different years.
My idea was to convert the first column with pd.to_datetime(), then group the rows by day/month regardless of the year, in dd/mm intervals (e.g., if winter runs from 21/12 to 21/03, create a new dataframe with all rows whose date falls in that interval in any year). However, I couldn't work out how to neglect the year, which makes things more complicated.
EDIT:
A desired output would be:
df_spring
Date,Year,Month,Day,Day week,Hour,Holiday,Week Day,Impulse,Power (kW),Temperature (C)
19/04/2016,2016,4,19,3,3,0,3,1348,809,14.4
19/04/2016,2016,4,19,3,4,0,3,1353,812,14.1
07/03/2017,2017,3,7,3,14,0,3,3668,2201,14.2
07/03/2017,2017,3,7,3,15,0,3,3666,2200,14
24/04/2017,2017,4,24,2,5,0,2,1347,808,11.4
24/04/2017,2017,4,24,2,6,0,2,1816,1090,11.5
24/04/2017,2017,4,24,2,7,0,2,2918,1751,12.4
df_autumn
Date,Year,Month,Day,Day week,Hour,Holiday,Week Day,Impulse,Power (kW),Temperature (C)
04/12/2015,2015,12,4,6,18,0,6,2968,1781,16.2
04/12/2015,2015,12,4,6,19,0,6,2437,1462,16.2
04/12/2016,2016,12,4,1,17,0,1,1425,855,14.6
04/12/2016,2016,12,4,1,18,0,1,1466,880,14.4
15/11/2017,2017,11,15,4,13,0,4,3765,2259,15.6
15/11/2017,2017,11,15,4,14,0,4,3873,2324,15.9
15/11/2017,2017,11,15,4,15,0,4,3905,2343,15.8
15/11/2017,2017,11,15,4,16,0,4,3861,2317,15.3
And so on for the remaining seasons.
Define each season by filtering the relevant rows on the Day and Month columns, as shown here for winter:
df_winter = df.loc[((df['Day'] >= 21) & (df['Month'] == 12)) | (df['Month'] == 1) | (df['Month'] == 2) | ((df['Day'] <= 21) & (df['Month'] == 3))]
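To label all four seasons at once while still ignoring the year, a sketch along the same lines. The boundary days are assumptions chosen to match the winter filter above (note that the desired df_spring treats early March as spring, so adjust the cut-offs to taste):

def season(month, day):
    # encode (month, day) as an integer, e.g. 21 March -> 321,
    # so season boundaries can be compared without the year
    key = month * 100 + day
    if key >= 1221 or key <= 321:
        return 'winter'
    elif key <= 621:
        return 'spring'
    elif key <= 921:
        return 'summer'
    return 'autumn'

df['Season'] = [season(m, d) for m, d in zip(df['Month'], df['Day'])]
df_spring = df[df['Season'] == 'spring']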
You can simply filter your dataframe with Month.isin():
# spring
df[df['Month'].isin([3,4])]
Date Year Month Day Day week Hour Holiday Week Day Impulse Power (kW) Temperature (C)
2 19/04/2016 2016 4 19 3 3 0 3 1348 809 14.4
3 19/04/2016 2016 4 19 3 4 0 3 1353 812 14.1
10 07/03/2017 2017 3 7 3 14 0 3 3668 2201 14.2
11 07/03/2017 2017 3 7 3 15 0 3 3666 2200 14.0
12 24/04/2017 2017 4 24 2 5 0 2 1347 808 11.4
13 24/04/2017 2017 4 24 2 6 0 2 1816 1090 11.5
14 24/04/2017 2017 4 24 2 7 0 2 2918 1751 12.4
# autumn
df[df['Month'].isin([11,12])]
Date Year Month Day Day week Hour Holiday Week Day Impulse Power (kW) Temperature (C)
0 04/12/2015 2015 12 4 6 18 0 6 2968 1781 16.2
1 04/12/2015 2015 12 4 6 19 0 6 2437 1462 16.2
8 04/12/2016 2016 12 4 1 17 0 1 1425 855 14.6
9 04/12/2016 2016 12 4 1 18 0 1 1466 880 14.4
18 15/11/2017 2017 11 15 4 13 0 4 3765 2259 15.6
19 15/11/2017 2017 11 15 4 14 0 4 3873 2324 15.9
20 15/11/2017 2017 11 15 4 15 0 4 3905 2343 15.8
21 15/11/2017 2017 11 15 4 16 0 4 3861 2317 15.3

Select the last row by id of a panel data and use np.where

I want to select the last position per id and check whether the variable fecha (a variable assigned to the year and quarter) is greater than 252, so that I can use the result in np.where.
id clae6 year quarter fecha fecha_dif2 position
1 475230.0 2007 1 220 -1 1
1 475230.0 2007 2 221 -1 2
1 475230.0 2007 3 222 -1 3
1 475230.0 2007 4 223 -1 4
1 475230.0 2008 1 224 -1 5
2 475230.0 2007 1 220 -1 1
2 475230.0 2007 2 221 -1 2
2 475230.0 2007 3 222 -1 3
2 475230.0 2007 4 223 -1 4
2 475230.0 2008 1 224 -1 5
3 475230.0 2010 1 232 -1 1
3 475230.0 2010 2 233 -1 2
3 475230.0 2010 3 234 -1 3
3 475230.0 2010 4 235 -1 4
3 475230.0 2011 1 236 -1 5
3 475230.0 2011 2 237 -1 6
Without groupby:
df.drop_duplicates(['id'],keep='last').fecha.gt(252)
Out[213]:
4 False
9 False
15 False
Name: fecha, dtype: bool
df['fechatest'] = df.drop_duplicates(['id'], keep='last').fecha.gt(252)
df.fillna(False)
Out[216]:
id clae6 year quarter fecha fecha_dif2 position fechatest
0 1 475230.0 2007 1 220 -1 1 False
1 1 475230.0 2007 2 221 -1 2 False
2 1 475230.0 2007 3 222 -1 3 False
3 1 475230.0 2007 4 223 -1 4 False
4 1 475230.0 2008 1 224 -1 5 False
5 2 475230.0 2007 1 220 -1 1 False
6 2 475230.0 2007 2 221 -1 2 False
7 2 475230.0 2007 3 222 -1 3 False
8 2 475230.0 2007 4 223 -1 4 False
9 2 475230.0 2008 1 224 -1 5 False
10 3 475230.0 2010 1 232 -1 1 False
11 3 475230.0 2010 2 233 -1 2 False
12 3 475230.0 2010 3 234 -1 3 False
13 3 475230.0 2010 4 235 -1 4 False
14 3 475230.0 2011 1 236 -1 5 False
15 3 475230.0 2011 2 237 -1 6 False
Use groupby with tail first and then compare:
mask = df.groupby('id')['fecha'].tail(1) > 252
#same as
#mask = df.groupby('id')['fecha'].tail(1).gt(252)
print (mask)
4 False
9 False
15 False
Name: fecha, dtype: bool
If need mask same size as df add reindex:
m = df.groupby('id')['fecha'].tail(1).gt(252).reindex(df.index, fill_value=False)
print (m)
0 False
1 False
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 False
10 False
11 False
12 False
13 False
14 False
15 False
Name: fecha, dtype: bool
import numpy as np

df['new'] = np.where(m, 'yes', 'no')
print (df)
id clae6 year quarter fecha fecha_dif2 position new
0 1 475230.0 2007 1 220 -1 1 no
1 1 475230.0 2007 2 221 -1 2 no
2 1 475230.0 2007 3 222 -1 3 no
3 1 475230.0 2007 4 223 -1 4 no
4 1 475230.0 2008 1 224 -1 5 no
5 2 475230.0 2007 1 220 -1 1 no
6 2 475230.0 2007 2 221 -1 2 no
7 2 475230.0 2007 3 222 -1 3 no
8 2 475230.0 2007 4 223 -1 4 no
9 2 475230.0 2008 1 224 -1 5 no
10 3 475230.0 2010 1 232 -1 1 no
11 3 475230.0 2010 2 233 -1 2 no
12 3 475230.0 2010 3 234 -1 3 no
13 3 475230.0 2010 4 235 -1 4 no
14 3 475230.0 2011 1 236 -1 5 no
15 3 475230.0 2011 2 237 -1 6 no
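If instead every row of a qualifying id should be flagged (not just its last row), transform('last') broadcasts each group's final fecha back to all of its rows:
m = df.groupby('id')['fecha'].transform('last').gt(252)
df['new'] = np.where(m, 'yes', 'no')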
