Python Pandas: How to update values for another column in groupby?

I have a dataframe with time series data.
meter date value
0 1002 19501 0.362
1 1002 19502 0.064
2 1002 19503 0.119
3 1002 19504 0.023
4 1002 19505 0.140
Now I need to change the date to a numeric order (1, 2, 3, etc. up to 336) for each unique value in meter. There are 336 rows for each unique meter value, so that shouldn't be too difficult, but I am stuck at getting the right result here.
I tried the following:
def change_timestamp(df):
    timestamp_uniform = [i for i in range(1, 337)]
    timestamp = pd.Series(data=timestamp_uniform)
    df.date = timestamp.values
    return df.date
by_meter = meters_weekly.groupby('meter')
by_meter.apply(change_timestamp)
but the output was just dates repeated.
Any ideas on how to fix that?

You could try something like this (note that cumcount() numbers from 0, so add 1 to get 1 through 336) -
df['new_date'] = df.groupby('meter').cumcount() + 1
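For example, a minimal sketch on a toy frame (two hypothetical meters with three rows each, standing in for the 336-row groups; the values are illustrative):
import pandas as pd

df = pd.DataFrame({'meter': [1002, 1002, 1002, 1003, 1003, 1003],
                   'value': [0.362, 0.064, 0.119, 0.140, 0.023, 0.081]})
# cumcount() numbers the rows 0, 1, 2, ... within each meter group;
# adding 1 yields the desired 1, 2, ..., N sequence per meter
df['date'] = df.groupby('meter').cumcount() + 1
print(df)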

Related

How do you add the value for a certain column from a previous row to your current row in Python Pandas? [duplicate]

In python, how can I reference previous row and calculate something against it? Specifically, I am working with dataframes in pandas - I have a data frame full of stock price information that looks like this:
Date Close Adj Close
251 2011-01-03 147.48 143.25
250 2011-01-04 147.64 143.41
249 2011-01-05 147.05 142.83
248 2011-01-06 148.66 144.40
247 2011-01-07 147.93 143.69
Here is how I created this dataframe:
import pandas
url = 'http://ichart.finance.yahoo.com/table.csv?s=IBM&a=00&b=1&c=2011&d=11&e=31&f=2011&g=d&ignore=.csv'
data = pandas.read_csv(url)
## now I sorted the data frame ascending by date
## (sort(columns=...) was removed in later pandas; sort_values is the current method)
data = data.sort_values(by='Date')
Starting with row number 2, or in this case, I guess it's 250 (PS - is that the index?), I want to calculate the difference between 2011-01-03 and 2011-01-04, for every entry in this dataframe. I believe the appropriate way is to write a function that takes the current row, then figures out the previous row, and calculates the difference between them, then use the pandas apply function to update the dataframe with the value.
Is that the right approach? If so, should I be using the index to determine the difference? (note - I'm still in python beginner mode, so index may not be the right term, nor even the correct way to implement this)
I think you want to do something like this:
In [26]: data
Out[26]:
Date Close Adj Close
251 2011-01-03 147.48 143.25
250 2011-01-04 147.64 143.41
249 2011-01-05 147.05 142.83
248 2011-01-06 148.66 144.40
247 2011-01-07 147.93 143.69
In [27]: data.set_index('Date').diff()
Out[27]:
Close Adj Close
Date
2011-01-03 NaN NaN
2011-01-04 0.16 0.16
2011-01-05 -0.59 -0.58
2011-01-06 1.61 1.57
2011-01-07 -0.73 -0.71
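Under the hood, diff() is shorthand for subtracting a shifted copy of the column, so the same calculation written out with shift(), which matches the "reference previous row" framing of the question, looks like this:
prices = data.set_index('Date')
# shift(1) moves every value down one row, pairing each row with its
# predecessor; the first row has no predecessor and becomes NaN
prices['Close_diff'] = prices['Close'] - prices['Close'].shift(1)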
To calculate the difference of a single column, here is what you can do.
df=
A B
0 10 56
1 45 48
2 26 48
3 32 65
We want to compute the row difference in A only, and then keep only the rows where that difference is less than 15.
df['A_dif'] = df['A'].diff()
df=
    A   B  A_dif
0  10  56    NaN
1  45  48   35.0
2  26  48  -19.0
3  32  65    6.0
df = df[df['A_dif']<15]
df=
    A   B  A_dif
2  26  48  -19.0
3  32  65    6.0
(NaN < 15 evaluates to False, so the first row is dropped by the filter.)
I don't know pandas, and I'm pretty sure it has something specific for this; however, I'll give you the pure-Python solution, which might be of some help even if you need to use pandas:
import csv
import io
import urllib.request

# This basically retrieves the CSV file and loads it in a list, converting
# all numeric values to floats
url = 'http://ichart.finance.yahoo.com/table.csv?s=IBM&a=00&b=1&c=2011&d=11&e=31&f=2011&g=d&ignore=.csv'
reader = csv.reader(io.TextIOWrapper(urllib.request.urlopen(url)), delimiter=',')
# We sort the output list so the records are ordered by date
cleaned = sorted([[r[0]] + [float(v) for v in r[1:]] for r in list(reader)[1:]])
for i, row in enumerate(cleaned):  # enumerate() yields two-tuples: (<id>, <item>)
    if i == 0:
        continue  # the first row has no previous row to diff against
    # This calculates the difference of each numeric field with the same
    # field in the row before this one
    print(row[0], [(row[j] - cleaned[i - 1][j]) for j in range(1, 7)])

Converting Pandas DataFrame dates so that I can pick out particular dates

I have two dataframes with particular data that I'm needing to merge.
Date Greenland Antarctica
0 2002.29 0.00 0.00
1 2002.35 68.72 19.01
2 2002.62 -219.32 -59.36
3 2002.71 -242.83 46.55
4 2002.79 -209.12 63.31
.. ... ... ...
189 2020.79 -4928.78 -2542.18
190 2020.87 -4922.47 -2593.06
191 2020.96 -4899.53 -2751.98
192 2021.04 -4838.44 -3070.67
193 2021.12 -4900.56 -2755.94
[194 rows x 3 columns]
and
Date Mean Sea Level
0 1993.011526 -38.75
1 1993.038692 -39.77
2 1993.065858 -39.61
3 1993.093025 -39.64
4 1993.120191 -38.72
... ... ...
1021 2020.756822 62.83
1022 2020.783914 62.93
1023 2020.811006 62.98
1024 2020.838098 63.00
1025 2020.865190 63.00
[1026 rows x 2 columns]
My ultimate goal is to pull out the data from the second dataframe (the Mean Sea Level column) that comes from (roughly) the same time frame as the dates in the first dataframe, and then merge that back in with the first dataframe.
However, the only ways I can think of for selecting out certain dates involve first converting all of the dates in the Date columns of both dataframes to something Pandas recognizes, and I have been unable to figure out how to do that. I figured out some code (below) that can convert individual dates to a more common date format, but it's been difficult to successfully apply it to all of the dates in a dataframe. Also, I'm not sure I can then get Pandas to convert that to a date format that Pandas recognizes.
from datetime import datetime

def fraction2datetime(year_fraction: float) -> datetime:
    year = int(year_fraction)
    fraction = year_fraction - year
    first = datetime(year, 1, 1)
    aux = datetime(year + 1, 1, 1)
    return first + (aux - first) * fraction
I also looked at pandas.to_datetime but I don't see a way to have it read the format the dates are initially in.
So does anyone have any guidance on this? Firstly with the conversion of dates, but also with the task of picking out the dates from the second dataframe if possible. Any help would be greatly appreciated.
Suppose you have these 2 dataframes:
df1:
Date Greenland Antarctica
0 2020.79 -4928.78 -2542.18
1 2020.87 -4922.47 -2593.06
2 2020.96 -4899.53 -2751.98
3 2021.04 -4838.44 -3070.67
4 2021.12 -4900.56 -2755.94
df2:
Date Mean Sea Level
0 2020.756822 62.83
1 2020.783914 62.93
2 2020.811006 62.98
3 2020.838098 63.00
4 2020.865190 63.00
To convert the dates:
from datetime import datetime

def fraction2datetime(year_fraction: float) -> datetime:
    year = int(year_fraction)
    fraction = year_fraction - year
    first = datetime(year, 1, 1)
    aux = datetime(year + 1, 1, 1)
    return first + (aux - first) * fraction

df1["Date"] = df1["Date"].apply(fraction2datetime)
df2["Date"] = df2["Date"].apply(fraction2datetime)
print(df1)
print(df2)
print(df1)
print(df2)
Prints:
Date Greenland Antarctica
0 2020-10-16 03:21:35.999999 -4928.78 -2542.18
1 2020-11-14 10:04:47.999997 -4922.47 -2593.06
2 2020-12-17 08:38:24.000001 -4899.53 -2751.98
3 2021-01-15 14:23:59.999999 -4838.44 -3070.67
4 2021-02-13 19:11:59.999997 -4900.56 -2755.94
Date Mean Sea Level
0 2020-10-03 23:55:28.012795 62.83
1 2020-10-13 21:54:02.073603 62.93
2 2020-10-23 19:52:36.134397 62.98
3 2020-11-02 17:51:10.195198 63.00
4 2020-11-12 15:49:44.255992 63.00
For the join, you can use pd.merge_asof. For example this will join on nearest date within 30-day tolerance (you can tweak these values as you want):
x = pd.merge_asof(
    df1, df2, on="Date", tolerance=pd.Timedelta(days=30), direction="nearest"
)
print(x)
Will print:
Date Greenland Antarctica Mean Sea Level
0 2020-10-16 03:21:35.999999 -4928.78 -2542.18 62.93
1 2020-11-14 10:04:47.999997 -4922.47 -2593.06 63.00
2 2020-12-17 08:38:24.000001 -4899.53 -2751.98 NaN
3 2021-01-15 14:23:59.999999 -4838.44 -3070.67 NaN
4 2021-02-13 19:11:59.999997 -4900.56 -2755.94 NaN
You can specify a timestamp format in to_datetime() for standard string dates; for a custom format like this fractional year, you can use apply() with a conversion function. If performance is a concern, be aware that apply() does not perform as well as built-in vectorized pandas methods.
To combine the DataFrames you can use an outer join on the date column.
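For the conversion step, if you want to avoid apply() on large frames, here is a rough vectorized sketch (assuming the fractional-year Date column from the question; the frame name df2 is illustrative):
import pandas as pd

year = df2['Date'].astype(int)
start = pd.to_datetime(year.astype(str), format='%Y')      # Jan 1 of each year
end = pd.to_datetime((year + 1).astype(str), format='%Y')  # Jan 1 of the next year
# scale the fractional part of each year by that year's actual length
df2['Date'] = start + (df2['Date'] - year) * (end - start)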

How to apply a filter condition on a percentage string column using pandas?

I am working on the below df but am unable to apply a filter on the percentage field, even though it works in normal Excel.
I need to apply the filter condition > 100.00% on that particular field using pandas.
I tried reading it from HTML, CSV and Excel in pandas but was unable to use the condition.
It requires a float conversion, but that is not working with the given data.
I am assuming that the values you have are read as strings in Pandas:
import pandas as pd

data = ['4,700.00%', '3,900.00%', '1,500.00%', '1,400.00%', '1,200.00%', '0.15%', '0.13%', '0.12%', '0.10%', '0.08%', '0.07%']
df = pd.DataFrame(data)
df.columns = ['data']
printing the df:
data
0 4,700.00%
1 3,900.00%
2 1,500.00%
3 1,400.00%
4 1,200.00%
5 0.15%
6 0.13%
7 0.12%
8 0.10%
9 0.08%
10 0.07%
then:
df['data'] = df['data'].str.rstrip('%').str.replace(',','').astype('float')
df_filtered = df[df['data'] > 100]
Results:
data
0 4700.0
1 3900.0
2 1500.0
3 1400.0
4 1200.0
I have used the below code as well: .str.rstrip('%') and .str.replace(',','').astype('float'). It is working fine.

Using python, print max and min values and the date associated with the max and min values

I am new to programming and am trying to write a program that evaluates and prints the max AVE_SPEED value and the date associated with that value from a csv file.
This would be an example of the file data set:
STATION DATE AVE_SPEED
0 US68 2018-03-22 0.00
1 US68 2018-03-23 0.00
2 US68 2018-03-24 0.00
3 US68 2018-03-26 0.24
4 US68 2018-03-27 2.28
5 US68 2018-03-28 0.21
6 US10 2018-03-29 0.04
7 US10 2018-03-30 0.00
8 US10 2018-03-31 0.00
9 US10 2018-04-01 0.00
10 US10 2018-04-02 0.02
This is what I have come up with so far but it just prints the entire set at the end.
import pandas as pd

df = pd.read_csv(r'data_01.csv')
max1 = df['AVE_SPEED'].max()
print('Max Speed in MPH: ' + str(max1))
groupby_max1 = df.groupby(['DATE']).max()
print('Maximum Average Speed Value and Date of Occurrence: ' + str(groupby_max1))
Your initial average speed max is correct in pandas.
To find the corresponding date, I would do the following:
import pandas as pd

df = pd.read_csv(r'data_01.csv')
max1 = df['AVE_SPEED'].max()
print('Max Speed in MPH: ' + str(max1))
date_of_max = df[df['AVE_SPEED'] == max1]['DATE'].values[0]
Effectively, you're creating another dataframe where every "AVE_SPEED" must equal the max speed (it should be a single row unless there are multiple instances of the same max speed). From there, you return the 'DATE' value of that dataframe/row.
You can then print/return the max velocity and corresponding date as needed.
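A more compact equivalent, for what it's worth (a sketch assuming the column names from the sample data):
import pandas as pd

df = pd.read_csv(r'data_01.csv')
# idxmax() returns the index label of the first occurrence of the maximum,
# and .loc pulls the DATE value at that label in a single step
date_of_max = df.loc[df['AVE_SPEED'].idxmax(), 'DATE']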
I would like to suggest a non-pandas approach to this, as a lot of new programmers focus on learning pandas instead of learning python; especially here it might be easier to understand what plain python is doing instead of using a dataframe:
with open('data_01.csv') as f:
    data = f.readlines()[1:]  # ditch the header
data = [x.split() for x in data]  # turn each line into a list of its values
data.sort(key=lambda x: -float(x[-1]))  # sort by the last item in each list (the speed), descending
print(data[0][2])  # print the date (index 2) from the first item in your sorted data

Pandas Comparing Two Data Frames

I have two dataframes. I will explain my requirement in the form of a loop, because this is how I visualize the problem.
I realize that there can be another solution, so if this can be done differently, please feel free to share! I am new to Pandas, so I'm struggling with this solution. Thank you in advance for looking at my question!!
I have 2 dataframes that have 3 columns: ID, ODO, ODOLength. ODOLength is the running difference for each ODO record, which I got using: abs(Df1['Odo'] - Df1['Odo'].shift(-1))
OldDataSet = {'id' : [10,20,30,40,50,60,70,80,90,100,110,120,130,140],'Odo': [-1.09,1.02,26.12,43.12,46.81,56.23,111.07,166.38,191.27,196.41,207.74,231.61,235.84,240.04], 'OdoLength':[2.11,25.1,17,3.69,9.42,54.84,55.31,24.89,5.14,11.33,23.87,4.23,4.2,4.09]}
NewDataSet = {'id' : [1000,2000,3000,4000,5000,6000,7000,8000,9000,10000,11000,12000,13000,14000],'Odo': [1.51,2.68,4.72,25.03,42,45.74,55.15,110.05,165.41,170.48,172.39,190.35,195.44,206.78], 'OdoLength':[1.17,2.04,20.31,16.97,3.74,9.41,54.9,55.36,5.07,1.91,17.96,5.09,11.34,23.89]}
FinalResultDataSet = {'DFOneId':[10,20,30,40,50,60,70,80,90,100,110], 'DFTwoID' : [1000,3000,4000,5000,6000,7000,8000,11000,12000,13000,14000], 'OdoDiff': [2.6,3.7,1.09,1.12,1.07,1.08,1.02,6.01,0.92,0.97,0.96], 'OdoLengthDiff':[0.94,4.79,0.03,0.05,0.01,0.06,0.05,6.93,0.05,0.01,0.02], 'OdoAndLengthDiff':[1.66,1.09,1.06,1.07,1.06,1.02,0.97,0.92,0.87,0.96,0.94]}
df1= pd.DataFrame(OldDataSet)
df2 = pd.DataFrame(NewDataSet)
FinalDf = pd.DataFrame(FinalResultDataSet)
The logic behind how to get the FinalDF is as follows: take Odo and OdoLength from df1 and subtract them from each Odo and OdoLength column in df2. Take the lowest value of the difference and match those rows. For the next comparison of Df1 and Df2, begin with the first Df2 record that does not yet have a match. If a Df2 record does not give the minimum value for the Df1 record currently being compared, that Df2 record is not included in the final dataset. For example, Df1 ID 20 was compared to Df2 ID 2000 and the result was 21.4 ((DfOne.Odo: 1.02 - DfTwo.Odo: 2.68) + (DfOne.OdoLength: 25.1 - DfTwo.OdoLength: 2.04) = 21.4), but when Df1 ID 20 is compared to Df2 ID 3000 the result is 1.09 ((DfOne.Odo: 1.02 - DfTwo.Odo: 4.72) + (DfOne.OdoLength: 25.1 - DfTwo.OdoLength: 20.31) = 1.09). In this case, Df2 ID 3000 is matched to Df1 ID 20 and Df2 ID 2000 is dropped because its difference was larger. At that point Df2 ID 2000 is not considered for any other matches, so the next Df1 record comparison starts at Df2 ID 4000, the next value without a match.
As I said, I am open to all suggestions!
Thanks!
You can use merge_asof
Step 1: combine the dataframes
df1['match'] = df1.Odo + df1.OdoLength
df2['match'] = df2.Odo + df2.OdoLength
out = pd.merge_asof(df1, df2, on='match', direction='nearest')
out.drop_duplicates(['id_y'])
Out[728]:
Odo_x OdoLength_x id_x match Odo_y OdoLength_y id_y
0 -1.09 2.11 10 1.02 1.51 1.17 1000
1 1.02 25.10 20 26.12 4.72 20.31 3000
2 26.12 17.00 30 43.12 25.03 16.97 4000
3 43.12 3.69 40 46.81 42.00 3.74 5000
4 46.81 9.42 50 56.23 45.74 9.41 6000
5 56.23 54.84 60 111.07 55.15 54.90 7000
6 111.07 55.31 70 166.38 110.05 55.36 8000
7 166.38 24.89 80 191.27 172.39 17.96 11000
8 191.27 5.14 90 196.41 190.35 5.09 12000
9 196.41 11.33 100 207.74 195.44 11.34 13000
10 207.74 23.87 110 231.61 206.78 23.89 14000
Step 2
Then you can do something like below to get your new column
out['OdoAndLengthDiff'] = out.OdoLength_x - out.OdoLength_y + out.Odo_x - out.Odo_y
BTW, I did not drop the helper match column; after you get all the new values you need, you can drop it by using out = out.drop(['match'], axis=1)
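If you also want the separate OdoDiff and OdoLengthDiff columns from the expected FinalResultDataSet, a sketch along the same lines (the rounding matches the sample values in the question):
out['OdoDiff'] = (out.Odo_x - out.Odo_y).abs().round(2)
out['OdoLengthDiff'] = (out.OdoLength_x - out.OdoLength_y).abs().round(2)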
