I am facing a problem dealing with NaN values in the Temperature column, grouped by the City column, using interpolate().
The df is:
import pandas as pd

data = {
    'City': ['Greenville', 'Charlotte', 'Los Gatos', 'Greenville', 'Carson City', 'Greenville', 'Greenville',
             'Charlotte', 'Carson City', 'Greenville', 'Charlotte', 'Fort Lauderdale', 'Rifle', 'Los Gatos',
             'Fort Lauderdale'],
    'Rec_times': ['2019-05-21 08:29:55', '2019-01-27 17:43:09', '2020-12-13 21:53:00', '2019-07-17 11:43:09',
                  '2018-04-17 16:51:23', '2019-10-07 13:28:09', '2020-01-07 11:38:10', '2019-11-03 07:13:09',
                  '2020-11-19 10:45:23', '2020-10-07 15:48:19', '2020-10-07 10:53:09', '2017-08-31 17:40:49',
                  '2016-08-31 17:40:49', '2021-11-13 20:13:10', '2016-08-31 19:43:29'],
    'Temperature': [30, 45, 26, 33, 50, None, 29, None, 48, 32, 47, 33, None, None, 28],
    'Pressure': [30, None, 26, 43, 50, 36, 29, None, 48, 32, None, 35, 23, 49, None]
}
df = pd.DataFrame(data)
df
Output:
City Rec_times Temperature Pressure
0 Greenville 2019-05-21 08:29:55 30.0 30.0
1 Charlotte 2019-01-27 17:43:09 45.0 NaN
2 Los Gatos 2020-12-13 21:53:00 26.0 26.0
3 Greenville 2019-07-17 11:43:09 33.0 43.0
4 Carson City 2018-04-17 16:51:23 50.0 50.0
5 Greenville 2019-10-07 13:28:09 NaN 36.0
6 Greenville 2020-01-07 11:38:10 29.0 29.0
7 Charlotte 2019-11-03 07:13:09 NaN NaN
8 Carson City 2020-11-19 10:45:23 48.0 48.0
9 Greenville 2020-10-07 15:48:19 32.0 32.0
10 Charlotte 2020-10-07 10:53:09 47.0 NaN
11 Fort Lauderdale 2017-08-31 17:40:49 33.0 35.0
12 Rifle 2016-08-31 17:40:49 NaN 23.0
13 Los Gatos 2021-11-13 20:13:10 NaN 49.0
14 Fort Lauderdale 2016-08-31 19:43:29 28.0 NaN
I want to deal with the NaN values in the Temperature column by grouping the records by City and using interpolate(method='time').
Ex:
Consider the city 'Greenville': it has 5 temperatures (30, 33, NaN, 29 and 32) recorded at different times. The NaN should be replaced by grouping the records by City and interpolating within that group using interpolate(method='time').
Note: If you know another, more optimal method to replace the NaN in Temperature, you can post it as an 'Other solution'.
Use a lambda function with a DatetimeIndex created by DataFrame.set_index, combined with GroupBy.transform:
df["Rec_times"] = pd.to_datetime(df["Rec_times"])
df['Temperature'] = (df.set_index('Rec_times')
                       .groupby("City")['Temperature']
                       .transform(lambda x: x.interpolate(method='time'))
                       .to_numpy())
One possible idea for values that are still missing after the interpolation is to replace them by the mean of all values, like:
df.Temperature = df.Temperature.fillna(df.Temperature.mean())
My understanding is that you want to replace the NaNs in the Temperature column by interpolating the temperatures of that specific city.
I would have to think about a more sophisticated solution. But here is a simple hack:
df["Rec_times"] = pd.to_datetime(df["Rec_times"]) # .interpolate requires datetime
df["idx"] = df.index # to restore original ordering
df_new = pd.DataFrame() # will hold new data
for (city,group) in df.groupby("City"):
group = group.set_index("Rec_times", drop=False)
df_new = pd.concat((df_new, group.interpolate(method='time')))
df_new = df_new.set_index("idx").sort_index() # Restore original ordering
df_new
Note that the interpolation for Rifle will still yield NaN, since its only data point is itself NaN.
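If you also want that leftover NaN filled, a minimal follow-up sketch, reusing the column-mean idea from the other answer (an assumption on my part that the overall mean is an acceptable fallback):
# Any value still missing after the per-city interpolation (e.g. Rifle)
# falls back to the overall mean of the Temperature column.
df_new["Temperature"] = df_new["Temperature"].fillna(df_new["Temperature"].mean())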
I have a multi-indexed dataframe (the real one has more columns):
2020-12-22 09:47:50 2020-12-23 16:43:45 2020-12-22 15:00
Lines VehicleNumber
102 9405 3 NaN 3
9415 NaN NaN NaN
9416 NaN NaN NaN
Now I want to sort the columns so that the earliest date comes first and the latest comes last. After that I want to delete the columns that are not between two dates, say 2020-12-22 10:00:00 < date < 2020-12-23 10:00:00. I tried transposing the dataframe, but that does not seem to work with a MultiIndex.
Expected output:
2020-12-22 15:00 2020-12-23 16:43:45
Lines VehicleNumber
102 9405 3 NaN
9415 NaN NaN
9416 NaN NaN
So first we sort the columns by date and then check whether they fall between the two dates (2020-12-22 10:00:00 < date < 2020-12-23 10:00:00), deleting the columns that don't.
First, convert the string column labels to datetimes:
In [2244]: df.columns = pd.to_datetime(df.columns)
Then, sort df based on datetimes:
In [2246]: df = df.reindex(sorted(df.columns), axis=1)
Suppose you want to keep only the columns that are greater than the following:
In [2251]: x = '2020-12-22 10:00:00'
Use a list comprehension:
In [2257]: m = [i for i in df.columns if i > pd.to_datetime(x)]
In [2258]: df[m]
Out[2258]:
2020-12-22 15:00:00 2020-12-23 16:43:45
Lines VehicleNumber
102 9405.0 3.0 NaN
9415 NaN NaN NaN
9416 NaN NaN NaN
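If you also need the upper bound from the question, the same list comprehension extends to both sides. A sketch, treating both bounds as exclusive as written in the question:
In [2260]: lo = pd.to_datetime('2020-12-22 10:00:00')
In [2261]: hi = pd.to_datetime('2020-12-23 10:00:00')
In [2262]: m = [c for c in df.columns if lo < c < hi]  # keep only columns strictly between the two dates
In [2263]: df[m]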
My original dataframe looked like:
timestamp variables value
1 2017-05-26 19:46:41.289 inf 0.000000
2 2017-05-26 20:40:41.243 tubavg 225.489639
... ... ... ...
899541 2017-05-02 20:54:41.574 caspre 684.486450
899542 2017-04-29 11:17:25.126 tvol 50.895000
Now I want to bucket this dataset by time, which can be done with the code:
df['timestamp'] = pd.to_datetime(df['timestamp'], errors='coerce')
df = df.groupby(pd.Grouper(key='timestamp', freq='5min'))
But I also want all the different metrics to become columns in the new dataframe. For example the first two rows from the original dataframe would look like:
timestamp inf tubavg caspre tvol ...
1 2017-05-26 19:46:41.289 0.000000 225.489639 xxxxxxx xxxxx
... ... ... ...
xxxxx 2017-05-02 20:54:41.574 xxxxxx xxxxxx 684.486450 50.895000
As can be seen, the time has been bucketed into 5-minute intervals, and each distinct value of variables should become its own column across all the buckets. Each bucket takes the timestamp of the first record that falls into it.
In order to solve this I have tried a couple of different approaches, but everything I try runs into errors.
Try unstacking the variables level from the rows to the columns with .unstack(1). The parameter is 1 because we want the second level of the index (0 would be the first).
Then, drop the level of the multi-index you just created to make it a little bit cleaner with .droplevel().
Finally, use pd.Grouper. Since the date/time is on the index, you don't need to specify a key.
df['timestamp'] = pd.to_datetime(df['timestamp'], errors='coerce')
df = df.set_index(['timestamp','variables']).unstack(1)
df.columns = df.columns.droplevel()
df = df.groupby(pd.Grouper(freq='5min')).mean().reset_index()
df
Out[1]:
variables timestamp caspre inf tubavg tvol
0 2017-04-29 11:15:00 NaN NaN NaN 50.895
1 2017-04-29 11:20:00 NaN NaN NaN NaN
2 2017-04-29 11:25:00 NaN NaN NaN NaN
3 2017-04-29 11:30:00 NaN NaN NaN NaN
4 2017-04-29 11:35:00 NaN NaN NaN NaN
... ... ... ... ...
7885 2017-05-26 20:20:00 NaN NaN NaN NaN
7886 2017-05-26 20:25:00 NaN NaN NaN NaN
7887 2017-05-26 20:30:00 NaN NaN NaN NaN
7888 2017-05-26 20:35:00 NaN NaN NaN NaN
7889 2017-05-26 20:40:00 NaN NaN 225.489639 NaN
Another way would be to .groupby the variables as well and then .unstack(1) again:
df['timestamp'] = pd.to_datetime(df['timestamp'], errors='coerce')
df = df.groupby([pd.Grouper(freq='5min', key='timestamp'), 'variables']).mean().unstack(1)
df.columns = df.columns.droplevel()
df = df.reset_index()
df
Out[1]:
variables timestamp caspre inf tubavg tvol
0 2017-04-29 11:15:00 NaN NaN NaN 50.895
1 2017-05-02 20:50:00 684.48645 NaN NaN NaN
2 2017-05-26 19:45:00 NaN 0.0 NaN NaN
3 2017-05-26 20:40:00 NaN NaN 225.489639 NaN
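For completeness, the same reshape can be written in a single step with pivot_table, starting again from the original long frame (a sketch, assuming the column names timestamp, variables and value from the question):
df['timestamp'] = pd.to_datetime(df['timestamp'], errors='coerce')
# pivot_table groups into 5-minute buckets and spreads `variables` into columns in one call
out = df.pivot_table(index=pd.Grouper(key='timestamp', freq='5min'),
                     columns='variables', values='value', aggfunc='mean').reset_index()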
TL;DR: how do I create a dataframe/series from one or more columns in an existing non-indexed dataframe, based on the column(s) containing a specific piece of text?
I'm relatively new to Python and data analysis (this is my first time posting a question on Stack Overflow, though I used to code regularly), and I've been hunting for an answer for a long time without any success.
I have a dataframe imported from an Excel file that doesn't have named/indexed columns. I am trying to extract data from nearly 2000 of these files, which all have slightly different columns of data (of course - why make it simple... or follow a template... or simply use something other than poorly formatted Excel spreadsheets...).
The original dataframe (from a poorly structured XLS file) looks a bit like this:
0 NaN RIGHT NaN
1 Date UCVA Sph
2 2007-01-13 00:00:00 6/38 [-2.00]
3 2009-11-05 00:00:00 6/9 NaN
4 2009-11-18 00:00:00 6/12 NaN
5 2009-12-14 00:00:00 6/9 [-1.25]
6 2018-04-24 00:00:00 worn CL [-5.50]
3 4 5 6 7 8 9 \
0 NaN NaN NaN NaN NaN NaN NaN
1 Cyl Axis BSCVA Pentacam remarks K1 K2 K2 back
2 [-2.75] 65 6/9 NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN NaN NaN
4 NaN NaN 6/5 Pentacam 46 43.9 -6.6
5 [-5.75] 60 6/6-1 NaN NaN NaN NaN
6 [+7.00} 170 6/7.5 NaN NaN NaN NaN
... 17 18 19 20 21 22 \
0 ... NaN NaN NaN NaN NaN NaN
1 ... BSCVA Pentacam remarks K1 K2 K2 back K max
2 ... 6/5 NaN NaN NaN NaN NaN
3 ... NaN NaN NaN NaN NaN NaN
4 ... NaN Pentacam 44.3 43.7 -6.2 45.5
5 ... 6/4-4 NaN NaN NaN NaN NaN
6 ... 6/5 NaN NaN NaN NaN NaN
I want to extract a set of dataframes/series that I can then combine back together to get a 'tidy' dataframe e.g.:
1 Date R-UCVA R-Sph
2 2007-01-13 00:00:00 6/38 [-2.00]
3 2009-11-05 00:00:00 6/9 NaN
4 2009-11-18 00:00:00 6/12 NaN
5 2009-12-14 00:00:00 6/9 [-1.25]
6 2018-04-24 00:00:00 worn CL [-5.50]
1 R-Cyl R-Axis R-BSCVA R-Penta R-K1 R-K2 R-K2 back
2 [-2.75] 65 6/9 NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN NaN NaN
4 NaN NaN 6/5 Pentacam 46 43.9 -6.6
5 [-5.75] 60 6/6-1 NaN NaN NaN NaN
6 [+7.00} 170 6/7.5 NaN NaN NaN NaN
etc., etc. So I'm trying to write some code that will pull out a set of columns that I define by looking for words such as "Date" or "UCVA". Then I plan to stitch them back together into a single dataframe, with a patient identifier as an extra column, and then cycle through all the XLS files, appending the whole lot to a single CSV file that I can then do useful things with (like put into an Access database - yes, I know, but it has to be easy to use and already installed on an NHS computer - and statistical analysis).
Any suggestions? I hope that's enough information.
Thanks very much in advance.
Kind regards
Vicky
Here is something that will hopefully get you started.
I have prepared a text.xlsx file, and I can read it as follows:
path = 'text.xlsx'
df = pd.read_excel(path, header=[0,1])
# Deal with two levels of headers, here I just join them together crudely
df.columns = df.columns.map(lambda h: ' '.join(h))
# Slight hack because I messed with the column names
# I create two dataframes, one with the first column, one with the second column
df1 = df[[df.columns[0],df.columns[1]]]
df2 = df[[df.columns[0], df.columns[2]]]
# Stacking them on top of each other
result = pd.concat([df1, df2])
print(result)
#Merging them on the Date column
result = pd.merge(left=df1, right=df2, on=df1.columns[0])
print(result)
This gives the output
RIGHT Sph RIGHT UCVA Unnamed: 0_level_0 Date
0 NaN 6/38 2007-01-13 00:00:00
1 NaN 6/37 2009-11-05 00:00:00
2 NaN 9/56 2009-11-18 00:00:00
0 [-2.00] NaN 2007-01-13 00:00:00
1 NaN NaN 2009-11-05 00:00:00
2 NaN NaN 2009-11-18 00:00:00
and
Unnamed: 0_level_0 Date RIGHT UCVA RIGHT Sph
0 2007-01-13 00:00:00 6/38 [-2.00]
1 2009-11-05 00:00:00 6/37 NaN
2 2009-11-18 00:00:00 9/56 NaN
Some pointers:
How to merge two header rows? See this question and answer.
How to select pandas columns conditionally? See e.g. this or this
How to merge dataframes? There is a very good guide in the pandas doc
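As a rough sketch of the "pull columns by keyword" part of your question (assuming the joined string headers produced above, and a hypothetical keywords list that you would adapt to your files):
keywords = ['Date', 'UCVA', 'Sph']  # hypothetical: the header words you care about
# keep any column whose (joined) header mentions one of the keywords
cols = [c for c in df.columns if any(k in c for k in keywords)]
subset = df[cols]
print(subset)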
So I have a DataFrame:
Units fcast currerr curpercent fcastcum unitscum cumerrpercent
2013-09-01 3561 NaN NaN NaN NaN NaN NaN
2013-10-01 3480 NaN NaN NaN NaN NaN NaN
2013-11-01 3071 NaN NaN NaN NaN NaN NaN
2013-12-01 3234 NaN NaN NaN NaN NaN NaN
2014-01-01 2610 2706 -96 -3.678161 2706 2610 -3.678161
2014-02-01 NaN 3117 NaN NaN 5823 NaN NaN
2014-03-01 NaN 3943 NaN NaN 9766 NaN NaN
And I want to load the index of the current month (found by taking the last row that has Units filled in) into a variable, curr_month, which will have a number of uses (including text display and use as a slicing operator).
This is way ugly but almost works:
curr_month=mergederrs['Units'].dropna()
curr_month=curr_month[-1:].index
curr_month
But curr_month is
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-01-01]
Length: 1, Freq: None, Timezone: None
Which is unhashable, so this fails:
mergederrs[curr_month:]
The docs are great for creating the DataFrame but a bit sparse on getting individual items out of it!
I'd probably write
>>> df.Units.last_valid_index()
Timestamp('2014-01-01 00:00:00')
but a slight tweak on your approach should work too:
>>> df.Units.dropna().index[-1]
Timestamp('2014-01-01 00:00:00')
It's the difference between somelist[-1:] and somelist[-1].
[Note that I'm assuming that all of the nan values come at the end. If there are valids and then NaNs and then valids, and you want the last valid in the first group, that would be slightly different.]
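Either way you get back a scalar Timestamp, so it can be used directly for display and slicing. A small usage sketch, keeping the df naming used above:
curr_month = df.Units.last_valid_index()
print(f"Current month: {curr_month:%Y-%m}")  # text display
df.loc[curr_month:]                          # slice from the current month onward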