SettingWithCopyWarning reason in code snippet - python

When processing some medical training data to train a classifier for different medical tests, I got the SettingWithCopyWarning from pandas. I have already read about it and know that it usually comes from chained indexing of a DataFrame, but I can't figure out where in my code below I use chained indexing.
#Turn the 12 measurement rows of each patient into a single row per patient
#the different test results are named: result, result_1, ..., result_11 (for each result)
#the pid and Age columns are kept only once, while all other column fields of the 12 measurement rows are concatenated
#into one single row; the Time field therefore appears 12 times per row
imputed_features.sort_values(by=['pid','Time'], inplace=True)
sorted_features = train_features.sort_values(by=['pid','Time'])
measurements = []
columns = []
for i in range(12):
    measurements.append(imputed_features.groupby(['pid'], as_index=False).nth(i))
    measurements[i].reset_index(drop=True, inplace=True)
    if i == 0:
        columns = [col for col in measurements[i].columns]
    else:
        measurements[i].drop(['pid', 'Age'], axis=1, inplace=True)
        for j in measurements[i].columns:
            columns.append(f'{j}_{i}')
#the resulting aggregated_features DataFrame
aggregated_features = pd.concat(measurements[0:12], axis=1, ignore_index=True)
aggregated_features.columns = columns
aggregated_features.to_csv('aggregated_features.csv', index=False)

I think it's because you're slicing the list of frames when you pass it to concat; try passing the whole list:
pd.concat(measurements, ....)
If you still get a warning, maybe copying the 'measurements' before merging will improve it?
measures = measurements.copy()
pd.concat(measures[0:12], ...)
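If the warning persists even with the list copy, one further tweak (just a sketch, not the answer's exact suggestion) is to take an explicit DataFrame copy of each group slice before mutating it in place, so that no frame in measurements is a view of imputed_features:

for i in range(12):
    # .copy() detaches the group slice from imputed_features, so the
    # in-place reset_index/drop calls below work on an independent frame
    nth_rows = imputed_features.groupby(['pid'], as_index=False).nth(i).copy()
    measurements.append(nth_rows)
    # ... rest of the loop body unchanged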


How to add multiple columns to a dataframe based on calculations

I have a csv dataset (with > 8m rows) that I load into a dataframe. The csv has columns like:
...,started_at,ended_at,...
2022-04-01 18:23:32,2022-04-01 22:18:15
2022-04-02 01:16:34,2022-04-02 02:18:32
...
I am able to load the dataset into my dataframe, but then I need to add multiple calculated columns to the dataframe for each row. In other words, unlike this SO question, I do not want the rows of the new columns to all have the same initial value (col 1 all NaN, col 2 all "dogs", etc.).
Right now, I can add my columns by doing something like:
df['start_time'] = df.apply(lambda row: add_start_time(row['started_at']), axis = 1)
df['start_cat'] = df.apply(lambda row: add_start_cat(row['start_time']), axis = 1)
df['is_dark'] = df.apply(lambda row: add_is_dark(row['started_at']), axis = 1)
df['duration'] = df.apply(lambda row: calc_dur(row['started_at'], row['ended_at']), axis = 1)
But it seems inefficient since the entire dataset is processed N times (once for each call).
It seems that I should be able to calculate all of the new columns in a single go, but I am missing some conceptual approach.
Examples:
def calc_dur(started_at, ended_at):
    # started_at, ended_at are datetime64[ns]; converted at csv load
    diff = ended_at - started_at
    return diff.total_seconds() / 60

def add_start_time(started_at):
    # started_at is datetime64[ns]; converted at csv load
    return started_at.time()

def add_is_dark(started_at):
    # TZ is pytz.timezone('US/Central')
    # chi_town is the astral lookup for Chicago
    st = started_at.replace(tzinfo=TZ)
    chk = sun(chi_town.observer, date=st, tzinfo=chi_town.timezone)
    return st >= chk['dusk'] or st <= chk['dawn']
Update 1
Following on the information from MoRe, I was able to get the essentials working. I needed to augment it by adding the column names, and then specify the index for the merge.
data = pd.Series(df.apply(lambda x: [
    add_start_time(x['started_at']),
    add_is_dark(x['started_at']),
    yrmo(x['year'], x['month']),
    calc_duration_in_minutes(x['started_at'], x['ended_at']),
    add_start_cat(x['started_at'])
], axis = 1))
new_df = pd.DataFrame(data.tolist(),
                      data.index,
                      columns=['start_time', 'is_dark', 'yrmo',
                               'duration', 'start_cat'])
df = df.merge(new_df, left_index=True, right_index=True)
import pandas as pd

data = pd.Series(dataframe.apply(lambda x: [function1(x[column_name]),
                                            function2(x[column_name]),
                                            function3(x[column_name])], axis = 1))
pd.DataFrame(data.tolist(), data.index)
If I understood you correctly, this is your answer. But before anything else, please consider the swifter package :)
First create a series of lists and convert it to columns, as above.
swifter is a simple library (at least I think it is simple) that has only one useful method: apply
import swifter
data.swifter.apply(lambda x: x+1)
It uses parallelism to improve speed on large datasets; on small ones it isn't beneficial and can even be slower.
https://pypi.org/project/swifter/
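For reference, DataFrame.apply can also expand a list-like return value straight into columns via result_type='expand', which skips the intermediate Series of lists; a sketch using the question's helper functions:

new_cols = df.apply(
    lambda row: [add_start_time(row['started_at']),
                 add_is_dark(row['started_at']),
                 calc_dur(row['started_at'], row['ended_at'])],
    axis=1, result_type='expand')
new_cols.columns = ['start_time', 'is_dark', 'duration']
df = df.join(new_cols)

This is still row-wise, so it mainly saves the manual DataFrame construction; a purely arithmetic column such as duration can be vectorized outright, e.g. (df['ended_at'] - df['started_at']).dt.total_seconds() / 60.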

pandas dropna giving different results when applied to a dataframe with 2 columns or the columns as independent dataframes

I'm using the following public dataset to practice linear regression:
https://www.kaggle.com/theforcecoder/wind-power-forecasting
I tried to do a least squares regression using numpy polynomial, and I ran into issues because the columns have NaN values.
Applying dropna to the dataframe from which I extract the columns does not have an effect. I tried both using inplace=True and defining a new dataframe, but neither works:
LSFitdDf = BearingTempsCorr[['WindSpeed', 'BearingShaftTemperature']]
LSFitdDf = LSFitdDf[['WindSpeed', 'BearingShaftTemperature']]
WindSpeed = BearingTempsCorr['WindSpeed']
BearingShaftTemperature = BearingTempsCorr['BearingShaftTemperature']
print(len(WindSpeed))
print(len(BearingShaftTemperature))
and
LSFitdDf = BearingTempsCorr[['WindSpeed', 'BearingShaftTemperature']].dropna()
LSFitdDf = LSFitdDf[['WindSpeed', 'BearingShaftTemperature']]
WindSpeed = BearingTempsCorr['WindSpeed']
BearingShaftTemperature = BearingTempsCorr['BearingShaftTemperature']
print(len(WindSpeed))
print(len(BearingShaftTemperature))
Both produce the same output (length of both columns=323)
However, applying dropna to the columns themselves does drop rows:
WindSpeed = BearingTempsCorr['WindSpeed'].dropna()
BearingShaftTemperature = BearingTempsCorr['BearingShaftTemperature'].dropna()
results in lengths=(316, 312)
However this introduces a new problem: regression cannot be applied because x and y have different lengths
What is going on here?
There is an error in your code:
WindSpeed = BearingTempsCorr['WindSpeed']
BearingShaftTemperature = BearingTempsCorr['BearingShaftTemperature']
You use BearingTempsCorr, but you should use LSFitdDf (where you saved the dropna-filtered values).
WindSpeed = LSFitdDf['WindSpeed']
BearingShaftTemperature = LSFitdDf['BearingShaftTemperature']
P.S. You also have a redundant line, which just copies LSFitdDf into the same variable.
LSFitdDf = LSFitdDf[['WindSpeed', 'BearingShaftTemperature']]
P.P.S. The clearest way to keep the whole dataset but drop rows with NA values in the desired columns is
BearingTempsCorr.dropna(subset=['WindSpeed', 'BearingShaftTemperature'])
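A short end-to-end sketch of that last suggestion, using the names from the question:

clean = BearingTempsCorr.dropna(subset=['WindSpeed', 'BearingShaftTemperature'])
WindSpeed = clean['WindSpeed']
BearingShaftTemperature = clean['BearingShaftTemperature']
# both series now come from the same filtered rows, so their lengths match
# and the least-squares fit can use them together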

Speeding up loop over dataframes

I have written the code given below. There are two pandas dataframes: df contains the columns timestamp_milli and pressure, and df2 contains the columns timestamp_milli and acceleration_z. Both dataframes have around 100'000 rows. In the code shown below, for each row of df I search for the rows of df2 whose time difference to that row lies within a range and is minimal.
Unfortunately the code is extremely slow. Moreover, I'm getting the following message, originating from the line df_temp["timestamp_milli"] = df_temp["timestamp_milli"] - row["timestamp_milli"]:
SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer, col_indexer] = value instead
How can I speedup the code and solve the warning?
acceleration = []
pressure = []
for index, row in df.iterrows():
    mask = (df2["timestamp_milli"] >= (row["timestamp_milli"] - 5)) & (df2["timestamp_milli"] <= (row["timestamp_milli"] + 5))
    df_temp = df2[mask]
    # Select closest point
    if len(df_temp) > 0:
        df_temp["timestamp_milli"] = df_temp["timestamp_milli"] - row["timestamp_milli"]
        df_temp["timestamp_milli"] = df_temp["timestamp_milli"].abs()
        df_temp = df_temp.loc[df_temp["timestamp_milli"] == df_temp["timestamp_milli"].min()]
        for index2, row2 in df_temp.iterrows():
            pressure.append(row["pressure"])
            acc = row2["acceleration_z"]
            acceleration.append(acc)
I have faced a similar problem; using itertuples instead of iterrows showed a significant reduction in time. See also the discussion of why iterrows has issues.
Hope this helps.
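As a further option (not from the answer above; just a sketch assuming timestamp_milli is an integer millisecond value), pd.merge_asof does this nearest-within-tolerance match without any Python loop, provided both frames are sorted by the key:

df = df.sort_values('timestamp_milli')
df2 = df2.sort_values('timestamp_milli')
merged = pd.merge_asof(df, df2,
                       on='timestamp_milli',
                       direction='nearest',
                       tolerance=5)   # only accept matches within +/- 5 ms
pressure = merged['pressure'].tolist()
acceleration = merged['acceleration_z'].tolist()

Rows of df with no match inside the tolerance end up with NaN in acceleration_z, whereas the original loop simply skips them, so a dropna on that column may be needed to reproduce the same lists. Inside the original loop, replacing df2[mask] with df2[mask].copy() is enough to make the SettingWithCopyWarning go away.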

Process subset of data based on variable type in python

I have the below data which I store in a csv (df_sample.csv). I have the column names in a list called cols_list.
df_data_sample:
df_data_sample = pd.DataFrame({
    'new_video':['BASE','SHIVER','PREFER','BASE+','BASE+','EVAL','EVAL','PREFER','ECON','EVAL'],
    'ord_m1':[0,1,1,0,0,0,1,0,1,0],
    'rev_m1':[0,0,25.26,0,0,9.91,'NA',0,0,0],
    'equip_m1':[0,0,0,'NA',24.9,20,76.71,57.21,0,12.86],
    'oev_m1':[3.75,8.81,9.95,9.8,0,0,'NA',10,56.79,30],
    'irev_m1':['NA',19.95,0,0,4.95,0,0,29.95,'NA',13.95]
})
attribute_dict = {
    'new_video': 'CAT',
    'ord_m1':'NUM',
    'rev_m1':'NUM',
    'equip_m1':'NUM',
    'oev_m1':'NUM',
    'irev_m1':'NUM'
}
Then I read each column and do some data processing as below:
cols_list = df_data_sample.columns
# Write to csv.
df_data_sample.to_csv("df_seg_sample.csv",index = False)
#df_data_sample = pd.read_csv("df_seg_sample.csv")
#Create empty dataframe to hold final processed data for each income level.
df_final = pd.DataFrame()
# Read in each column, process, and write to a csv - using csv module
for column in cols_list:
    df_column = pd.read_csv('df_seg_sample.csv', usecols = [column], delimiter = ',')
    if (((attribute_dict[column] == 'CAT') & (df_column[column].unique().size <= 100))==True):
        df_target_attribute = pd.get_dummies(df_column[column], dummy_na=True, prefix=column)
        # Check and remove duplicate columns if any:
        df_target_attribute = df_target_attribute.loc[:,~df_target_attribute.columns.duplicated()]
        for target_column in list(df_target_attribute.columns):
            # If variance of the dummy created is zero: append it to a list and print to log file.
            if ((np.var(df_target_attribute[[target_column]])[0] != 0)==True):
                df_final[target_column] = df_target_attribute[[target_column]]
    elif (attribute_dict[column] == 'NUM'):
        # Let's impute with 0 for numeric variables:
        df_target_attribute = df_column
        df_target_attribute.fillna(value=0, inplace=True)
        df_final[column] = df_target_attribute
attribute_dict is a dictionary containing the mapping of variable name : variable type, as:
{
    'new_video': 'CAT',
    'ord_m1': 'NUM',
    'rev_m1': 'NUM',
    'equip_m1': 'NUM',
    'oev_m1': 'NUM',
    'irev_m1': 'NUM'
}
However, this column-by-column operation takes a long time to run on a dataset of size (5 million rows * 3400 columns). Currently the run time is approximately 12+ hours.
I want to reduce this as much as possible, and one of the ways I can think of is to do the processing for all NUM columns at once and then go column by column for the CAT variables.
However, I am neither sure of the Python code to achieve this nor whether this will really speed up the process.
Can someone kindly help me out!
For numeric columns it is simple:
num_cols = [k for k, v in attribute_dict.items() if v == 'NUM']
print (num_cols)
['ord_m1', 'rev_m1', 'equip_m1', 'oev_m1', 'irev_m1']
df1 = pd.read_csv('df_seg_sample.csv', usecols=num_cols).fillna(0)
But the first part of the code is the performance problem, especially get_dummies called on 5 million rows:
df_target_attribute = pd.get_dummies(df_column[column], dummy_na=True, prefix=column)
Unfortunately it is a problem to process get_dummies in chunks.
There are three things I would advise to speed up your computations:
Take a look at pandas' HDF5 capabilities. HDF is a binary file format for fast reading and writing of data to disk.
I would read in bigger chunks (several columns) of your csv file at once (depending on how big your memory is).
There are many pandas operations you can apply to every column at once. For example nunique() (giving you the number of unique values, so you don't need unique().size). With these column-wise operations you can easily filter columns by selecting with a boolean vector. E.g.
df = df.loc[:, df.nunique() > 100]
# keep only the columns that have more than 100 unique values
Also, this answer from the author of pandas on large data workflows might be interesting for you.
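Combining the two answers, a rough sketch (memory permitting) that handles all NUM columns in one pass and all CAT columns with a single get_dummies call, using the column names from the question:

num_cols = [k for k, v in attribute_dict.items() if v == 'NUM']
cat_cols = [k for k, v in attribute_dict.items() if v == 'CAT']

df_num = pd.read_csv('df_seg_sample.csv', usecols=num_cols).fillna(0)
df_cat = pd.read_csv('df_seg_sample.csv', usecols=cat_cols)

# one-hot encode every categorical column in one call
df_dummies = pd.get_dummies(df_cat, columns=cat_cols, dummy_na=True)
# drop constant (zero-variance) dummies, mirroring the per-column check
df_dummies = df_dummies.loc[:, df_dummies.nunique() > 1]

df_final = pd.concat([df_num, df_dummies], axis=1)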

How to speed up this Python function?

I hope it is OK to ask questions of this type.
I have a get_lags function that takes a data frame, and for each column, shifts the column by each n in the list n_lags. So, if n_lags = [1, 2], the function shifts each column once by 1 and once by 2 positions, creating new lagged columns in this way.
def get_lags(df, n_lags):
    data = df.copy()
    data_with_lags = pd.DataFrame()
    for column in data.columns:
        for i in range(n_lags[0], n_lags[-1]+1):
            new_column_name = str(column) + '_Lag' + str(i)
            data_with_lags[new_column_name] = data[column].shift(-i)
    data_with_lags.fillna(method = 'ffill', limit = max(n_lags), inplace = True)
    return data_with_lags
So, if:
df.columns
ColumnA
ColumnB
Then, get_lags(df, [1 , 2]).columns will be:
ColumnA_Lag1
ColumnA_Lag2
ColumnB_Lag1
ColumnB_Lag2
Issue: working with data frames that have about 100,000 rows and 20,000 columns, this takes forever to run. On a 16 GB RAM, Core i7 Windows machine, I once waited 15 minutes for the code to run before I stopped it. Is there any way I can tweak this function to make it faster?
You'll need shift + concat. Here's the concise version -
def get_lags(df, n_lags):
    return pd.concat(
        [df] + [df.shift(i).add_suffix('_Lag{}'.format(i)) for i in n_lags],
        axis=1
    )
And here's a more memory-friendly version, using a for loop -
def get_lags(df, n_lags):
    df_list = [df]
    for i in n_lags:
        v = df.shift(i)
        v.columns = v.columns + '_Lag{}'.format(i)
        df_list.append(v)
    return pd.concat(df_list, axis=1)
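Usage on the question's own example, assuming a frame with ColumnA and ColumnB:

lagged = get_lags(df[['ColumnA', 'ColumnB']], [1, 2])
# columns: ColumnA, ColumnB, ColumnA_Lag1, ColumnB_Lag1, ColumnA_Lag2, ColumnB_Lag2

Note that, unlike the original function, both versions keep the unshifted columns and use shift(i) rather than shift(-i); negate i if the forward-looking shift of the original is intended.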
This may not apply to your case (I hope I understand what you're trying to do correctly), but you can speed it up massively by not doing it in the first place. Can you treat your columns like a ring buffer?
Instead of changing the columns afterwards, keep track of:
how many columns you can use (how many lag items for each entry)
which lag column was used last
(optionally) how many times you "rotated"
So instead of moving the data, you do something like:
current_column = (current_column + 1) % total_columns
and write to that column next.
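A rough illustration of that idea (hypothetical names, with a NumPy array standing in for the lag columns; not code from the answer):

import numpy as np

class LagRing:
    """Hold the last n_lags snapshots of a column without ever shifting data."""
    def __init__(self, n_rows, n_lags):
        self.buf = np.full((n_rows, n_lags), np.nan)
        self.current = -1                      # most recently written slot

    def push(self, values):
        # advance the write pointer and overwrite the oldest slot
        self.current = (self.current + 1) % self.buf.shape[1]
        self.buf[:, self.current] = values

    def lag(self, k):
        # return the slot written k pushes ago
        return self.buf[:, (self.current - k) % self.buf.shape[1]]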
