I have a dataframe with z-scores for several values. It looks like this:
ID   Cat1   Cat2   Cat3
A    1.05  -1.67   0.94
B   -0.88   0.22  -0.56
C    1.33   0.84   1.19
I want to write a script that tells me which IDs in each category have values beyond a cut-off I specify as needed. Because I am working with z-scores, I need to compare the absolute value against my cut-off.
So if I set my cut-off at 0.75, the resulting dataframe would be:
Cat1  Cat2  Cat3
A     A     A
B     C     C
C
If I set 1.0 as my cut-off value, the dataframe above would instead return:
Cat1  Cat2  Cat3
A     A     C
C
I know that I can do queries like this:
df1 = df[df['Cat1'] > 1]
df1
df1 = df[df['Cat1'] < -1]
df1
to individually query each column and find the information I'm looking for, but this is tedious even if I figure out how to use the abs function to combine the two queries into one. How can I apply this filtering to the whole dataframe?
I've come up with this skeleton of a script:
cut_off = 1.0
cols = list(df.columns)
cols.remove('ID')
for col in cols:
    # FOR EACH CELL IN THIS COLUMN:
    if abs(CELL) < cut_off:
        CELL = NaN
to basically just eliminate any values that don't meet the cut-off. If I can get this to work, it will bring me closer to my goal, but I am stuck and don't even know if I am on the right track. Again, the overall goal is to quickly figure out which cells have absolute values above the cut-off in each category and to list the corresponding IDs.
I apologize if anything is confusing or vague; let me know in the comments and I'll fix it. I've been trying to figure this out for most of today and my brain is somewhat fried.
You don't have to apply the filter column by column; you can index the whole dataframe at once:
df[df > 1]
and you can also assign to that selection:
df[df > 1] = np.nan
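Putting that together with the absolute-value requirement and the goal of listing IDs per category, here is a minimal runnable sketch (assuming the example data above, with ID set as the index):

import pandas as pd

df = pd.DataFrame({'ID': ['A', 'B', 'C'],
                   'Cat1': [1.05, -0.88, 1.33],
                   'Cat2': [-1.67, 0.22, 0.84],
                   'Cat3': [0.94, -0.56, 1.19]}).set_index('ID')

cut_off = 0.75
masked = df[df.abs() > cut_off]   # values at or below the cut-off become NaN

# IDs whose absolute value beats the cut-off, per category:
ids_per_cat = {col: masked[col].dropna().index.tolist() for col in masked.columns}
print(ids_per_cat)
# {'Cat1': ['A', 'B', 'C'], 'Cat2': ['A', 'C'], 'Cat3': ['A', 'C']}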
I have a very large data file (tens of thousands of rows and columns) formatted similarly to this.
name   x  y  gh_00hr_bio_rep1  gh_00hr_bio_rep2  gh_00hr_bio_rep3  gh_06hr_bio_rep1
gene1  x  y  2                 3                 2                 1
gene2  x  y  5                 7                 6                 2
My goal for each gene is to find the mean of each set of repetitions.
At the end I would like to only have columns of mean values titled something like "00hr_bio" and delete all the individual repetitions.
My thinking right now is to use something like this:
for row in df:
    df[avg] = df.iloc[3:].rolling(window=3, axis=1).mean()
But I have no idea how to actually make this work.
The df.iloc[3:] is my way of trying to start from the 3rd column, but I am fairly certain doing it this way does not work.
I don't even know where to begin in terms of "merging" the 3 columns into only 1.
Any suggestions you have will be greatly appreciated as I obviously have no idea what I am doing.
I would first build a Series of final names indexed by the original columns:
names = pd.Series(['_'.join(i.split('_')[:-1]) for i in df.columns[3:]],
                  index=df.columns[3:])
I would then group on axis 1 using it as the key and take the mean:
tmp = df.iloc[:, 3:].groupby(names, axis=1).agg('mean')
This gives a new dataframe, indexed like the original, containing the averaged columns:
   gh_00hr_bio  gh_06hr_bio
0     2.333333          1.0
1     6.000000          2.0
You can then horizontally concat it to the first dataframe, or to its first 3 columns:
result = pd.concat([df.iloc[:, :3], tmp], axis=1)
to get:
    name  x  y  gh_00hr_bio  gh_06hr_bio
0  gene1  x  y     2.333333          1.0
1  gene2  x  y     6.000000          2.0
You're pretty close.
df['avg'] = df.iloc[:, 2:].mean(axis=1)
will get you this:
       x  y  gh_00hr_bio_rep1  gh_00hr_bio_rep2  gh_00hr_bio_rep3  gh_06hr_bio_rep1  avg
gene1  x  y  2                 3                 2                 1                 2.0
gene2  x  y  5                 7                 6                 2                 5.0
If you wish to get the mean from different sets of columns, you could do something like this:
for col in range(10):
    df['avg%i' % col] = df.iloc[:, 2+col*5:7+col*5].mean(axis=1)
This works if you have the same number of columns per average. Otherwise you'd probably want to select by the names of the rep columns, depending on what your data looks like.
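For the name-based route suggested above, a sketch that derives the grouping from the column names themselves (assuming rep columns end in _repN and the first three columns are name/x/y, as in the question):

import pandas as pd

df = pd.DataFrame({'name': ['gene1', 'gene2'], 'x': ['x', 'x'], 'y': ['y', 'y'],
                   'gh_00hr_bio_rep1': [2, 5], 'gh_00hr_bio_rep2': [3, 7],
                   'gh_00hr_bio_rep3': [2, 6], 'gh_06hr_bio_rep1': [1, 2]})

rep_cols = df.columns[3:]
groups = rep_cols.str.replace(r'_rep\d+$', '', regex=True)  # 'gh_00hr_bio_rep1' -> 'gh_00hr_bio'

# Transpose so rep columns become rows, group by the stripped name,
# average, then transpose back:
means = df[rep_cols].T.groupby(groups).mean().T
result = pd.concat([df.iloc[:, :3], means], axis=1)
print(result)
#     name  x  y  gh_00hr_bio  gh_06hr_bio
# 0  gene1  x  y     2.333333          1.0
# 1  gene2  x  y     6.000000          2.0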
I'm trying to do arithmetic among different cells in my dataframe and can't figure out how to operate on each of my groups. I'm trying to find the difference in energy_use between a baseline building (in this example upgrade_name == b is the baseline case) and each upgrade, for each building. I have an arbitrary number of building_id's and arbitrary number of upgrade_names.
I can do this successfully for a single building_id. Now I need to expand this out to a full dataset and am stuck. I will have tens of thousands of buildings and dozens of upgrades for each building.
The answer to this question Iterating within groups in Pandas may be related, but I'm not sure how to apply it to my problem.
I have a dataframe like this:
df = pd.DataFrame({'building_id': [1,2,1,2,1], 'upgrade_name': ['a', 'a', 'b', 'b', 'c'], 'energy_use': [100.4, 150.8, 145.1, 136.7, 120.3]})
In [4]: df
Out[4]:
   building_id upgrade_name  energy_use
0            1            a       100.4
1            2            a       150.8
2            1            b       145.1
3            2            b       136.7
4            1            c       120.3
For a single building_id I have the following code:
upgrades = df.loc[df.building_id == 1, ['upgrade_name', 'energy_use']]
starting_point = upgrades.loc[upgrades.upgrade_name == 'b', 'energy_use']
upgrades['diff'] = upgrades.energy_use - starting_point.values[0]
In [8]: upgrades
Out[8]:
  upgrade_name  energy_use  diff
0            a       100.4 -44.7
2            b       145.1   0.0
4            c       120.3 -24.8
How do I write this for arbitrary numbers of building_id's, instead of my hard-coded building_id == 1?
The ideal solution looks like this (doesn't matter if the baseline differences are 0 or NaN):
In [17]: df
Out[17]:
   building_id upgrade_name  energy_use  ideal
0            1            a       100.4  -44.7
1            2            a       150.8   14.1
2            1            b       145.1    0.0
3            2            b       136.7    0.0
4            1            c       120.3  -24.8
Define a function that computes the difference in energy use for a group of rows (the current building's rows) as follows:
def euDiff(grp):
    euBase = grp[grp.upgrade_name == 'b'].energy_use.values[0]
    return grp.energy_use - euBase
Then compute the difference (for all buildings), applying it to each group:
df['ideal'] = df.groupby('building_id').apply(euDiff)\
                .reset_index(level=0, drop=True)
The result is just as you expected.
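For reference, the same result can be had without apply; a minimal vectorized sketch using where and transform (my variant, assuming the same df):

# Keep energy_use only on baseline ('b') rows, spread each building's
# baseline to all of its rows, then subtract:
baseline = df['energy_use'].where(df['upgrade_name'] == 'b')
df['ideal'] = df['energy_use'] - baseline.groupby(df['building_id']).transform('first')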
Thanks for sharing that example data! It made things a lot easier.
I suggest solving this in two parts:
1. Make a dictionary from your dataframe that contains the baseline energy use for each building.
2. Apply a lambda function to your dataframe to subtract each energy use value from the baseline value associated with that building.
# set index to building_id, convert to a dictionary, keep the energy_use values
building_baseline = df[df['upgrade_name'] == 'b'].set_index('building_id').to_dict()['energy_use']
# apply a lambda to the dataframe; axis=1 passes each row to the lambda
df['diff'] = df.apply(lambda row: row['energy_use'] - building_baseline[row['building_id']], axis=1)
You could also write a function to do this. You don't necessarily need the dictionary either; it just makes things easier. If you're curious about these alternative solutions, let me know and I can add them for you.
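As a sketch of one such alternative, Series.map with the same building_baseline dictionary avoids the row-wise apply entirely:

# Look up each row's baseline by building_id, then subtract column-wise:
df['diff'] = df['energy_use'] - df['building_id'].map(building_baseline)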
Say I have some data in a pandas dataframe that I want to work with.
>>> df = pd.DataFrame([['a',10,5],['a',12,6],['b',4,2],['b',5,10]],
...                   columns=['id','val','val2'])
So the dataframe looks something like this:
>>> df
  id  val  val2
0  a   10     5
1  a   12     6
2  b    4     2
3  b    5    10
What I want to achieve is a dataframe with the id values as column names and val and val2 as row names, where the values are computed as follows:
Build the mean of the value columns per id, giving something like
id  mean-val  mean-val2
a     11         5.5
b      4.5       6
Calculate each mean as a percentage of the per-id sum of both means (e.g. 11 / (11 + 5.5) * 100 = 66.67), rendering
id  perc-val  perc-val2
a      66.67      33.33
b      42.86      57.14
The final dataframe shall look like this:
>>> new_df
          a      b
val   66.67  42.86
val2  33.33  57.14
My approach
I'm quite inexperienced with pandas, so it took me a while to arrive at an unsatisfying approach.
>>> idx = ['val','val2']
>>> lst = [df.groupby('id')[index].mean() for index in idx]
>>> df_new = pd.DataFrame(
... [[x/y*100 for x, y in zip(lst2,sum(lst))] for lst2 in lst],
... index=idx, columns=df['id'].unique())
This works, but I'm not sure if it is guaranteed that either the columns or the rows are named in the right order, or if it's possible that, e.g., the a column is named b and vice versa.
So my actual question is if there is a nicer, cleaner, safer and maybe more efficient way of doing this.
Yes, there is.
If you're taking the mean over every column, you don't have to specify the column names
You can vectorize your division using DataFrame.div (or the / operator, which calls __truediv__)
v = df.groupby('id').mean()
v.T / v.sum(1) * 100                # thanks to @fuglede
# v.div(v.sum(1), axis=0).T * 100  # thanks to @Scott Boston
id            a          b
val   66.666667  42.857143
val2  33.333333  57.142857
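Alignment by label is also what answers the ordering worry: the division matches on the id index, not on position. A self-contained sketch with the question's data:

import pandas as pd

df = pd.DataFrame([['a', 10, 5], ['a', 12, 6], ['b', 4, 2], ['b', 5, 10]],
                  columns=['id', 'val', 'val2'])

v = df.groupby('id').mean()                    # rows a, b; columns val, val2
new_df = v.div(v.sum(axis=1), axis=0).T * 100  # percentages of the row sums
print(new_df)
# id            a          b
# val   66.666667  42.857143
# val2  33.333333  57.142857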
I have a DataFrame df as below. I am wondering how to exclude rows whose values in a particular column, say Vader_Sentiment, fall in the range -0.1 to 0.1, keeping the rest.
I have tried df = df[(df['Vader_Sentiment'] < -0.1) & (df['Vader_Sentiment'] > 0.1)] but it doesn't seem to work.
Text  Vader_Sentiment
A              -0.010
B               0.206
C               0.003
D              -0.089
E               0.025
You can use Series.between():
df.loc[~df.Vader_Sentiment.between(-0.1, 0.1)]
  Text  Vader_Sentiment
1    B            0.206
Three things:
1. The tilde (~) operator denotes the inverse/complement of a boolean mask.
2. Make sure you have numeric data: df.dtypes should show a float type for Vader_Sentiment, not object.
3. You can pass an inclusive parameter to control whether the interval endpoints are closed or open.
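A short usage sketch covering points 2 and 3 (the string form of inclusive assumes pandas >= 1.3; older versions take a boolean):

# Coerce to numeric in case the column came in as strings:
df['Vader_Sentiment'] = pd.to_numeric(df['Vader_Sentiment'])

# Default inclusive='both' keeps the endpoints inside the interval; 'neither' excludes them:
kept = df.loc[~df['Vader_Sentiment'].between(-0.1, 0.1, inclusive='neither')]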
If I have a pandas dataframe such as:
timestamp  label  value  new
etc.       a      1      3.5
           b      2      5
           a      5      ...
           b      6      ...
           a      2      ...
           b      4      ...
I want the new column to be the average of the last two a's and the last two b's, so for the first row it would be the average of 5 and 2, giving 3.5. The data will be sorted by the timestamp. I know I could use a groupby to get the average of all the a's or all the b's, but I'm not sure how to get the average of just the last two. I'm kind of new to Python and coding, so this might not be possible.
Edit: I should also mention this is not for a class or anything; it's something I'm doing on my own, and it will run on a very large dataset. I'm just using this as an example. Also, I would want each a and each b to have its own value for the last-two average, so the new column will have the same dimension as the others. So for the third line it would be the average of 2 and whatever the next a in the dataset is.
IIUC one way (among many) to do that:
In [139]: df.groupby('label').tail(2).groupby('label').mean().reset_index()
Out[139]:
  label  value
0     a    3.5
1     b    5.0
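If you want that broadcast back onto every row (the same dimensionality asked for in the edit), a small sketch that maps the per-label means onto the question's df:

# Per-label mean of the last two rows, mapped onto each row by label:
last2 = df.groupby('label').tail(2).groupby('label')['value'].mean()
df['tail_mean'] = df['label'].map(last2)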
Edited to reflect a change in the question specifying the last two, not the ones following the first, and that you wanted the same dimensionality with values repeated.
import pandas as pd

data = {'label': ['a', 'b', 'a', 'b', 'a', 'b'], 'value': [1, 2, 5, 6, 2, 4]}
df = pd.DataFrame(data)

grouped = df.groupby('label')
results = {'label': [], 'tail_mean': []}
for item, grp in grouped:
    # mean of the last two values in this label's group
    subset_mean = grp['value'].tail(2).mean()
    results['label'].append(item)
    results['tail_mean'].append(subset_mean)
res_df = pd.DataFrame(results)

# merge the per-label means back onto the original rows
df = df.merge(res_df, on='label', how='left')
Outputs:
>> res_df
  label  tail_mean
0     a        3.5
1     b        5.0
>> df
  label  value  tail_mean
0     a      1        3.5
1     b      2        5.0
2     a      5        3.5
3     b      6        5.0
4     a      2        3.5
5     b      4        5.0
Now you have a dataframe of just the results, if you need it, plus a column with the results merged back into the main dataframe. Someone else posted a more succinct way to get the results dataframe; there's probably no reason to do it the longer way shown here unless you also need to perform more operations like this inside the same loop.