Pandas: reshape data with duplicate row names to columns - python

I have a data set that's sort of like this (first lines shown):
Sample Detector Cq
P_1 106 23.53152
P_1 106 23.152458
P_1 106 23.685083
P_1 135 24.465698
P_1 135 23.86892
P_1 135 23.723469
P_1 17 22.524242
P_1 17 20.658733
P_1 17 21.146122
Both "Sample" and "Detector" columns contain duplicated values ("Cq" is unique): to be precise, each "Detector" appears 3 times for each sample, because it's a replicate in the data.
What I need to do is to:
Reshape the table so that the columns contain Samples and the rows Detectors
Rename the duplicate columns so that I know which replicate it is
I thought that DataFrame.pivot would do the trick, but it fails because of the duplicate data. What would be the best approach? Rename the duplicates, then reshape, or is there a better option?
EDIT: I thought it over and I think it's better to state the purpose. I need to store, for each "Sample", the mean and standard deviation of the Cq values of each "Detector".

It looks like what you may be looking for is a hierarchically indexed DataFrame.
Would something like this work?
import numpy as np
import pandas

# build a sample dataframe
a = ['P_1'] * 9
b = [106, 106, 106, 135, 135, 135, 17, 17, 17]
c = np.random.randint(1, 100, 9)
df = pandas.DataFrame(data=list(zip(a, b, c)), columns=['sample', 'detector', 'cq'])
# add a repetition number column (integer division keeps the repeat count an int)
df['rep_num'] = [1, 2, 3] * (len(df) // 3)
# convert to a multi-indexed DataFrame
df_multi = df.set_index(['sample', 'detector', 'rep_num'])
#--------------Resulting Dataframe---------------------
                         cq
sample detector rep_num
P_1    106      1        97
                2        83
                3        81
       135      1        46
                2        92
                3        89
       17       1        58
                2        26
                3        75
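Given the edit about the actual goal, a plain groupby aggregation may be all that's needed; here is a minimal sketch, assuming the flat df built above (before set_index):
# mean and standard deviation of cq for each (sample, detector) pair
stats = df.groupby(['sample', 'detector'])['cq'].agg(['mean', 'std'])
print(stats)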

Related

How to drop identical columns in Pandas dataframe if first x rows of values are identical?

I'm working with a large dataset (921600 rows, 23 columns) with the occasional duplicate column (under different column names, however). I would like to remove the columns with identical values, but df.T.drop_duplicates().T and similar solutions simply take too long, presumably because they check all 921600 rows. Is it possible to remove columns if just the first x rows have identical values?
E.g.: identify that 'channel2' and 'channel2-b' are duplicates by comparing the first x (say 10) rows instead of inspecting all million rows.
channel1 channel2 channel3 channel2-b
0 47 46 27 46
1 84 28 28 28
2 72 79 68 79
... ... ... ... ...
999997 4729 1957 2986 1957
999998 9918 1513 2957 1513
999999 1001 5883 7577 5883
Use DataFrame.duplicated on the transpose of the first N rows from DataFrame.head, then keep only the non-duplicated columns with DataFrame.loc:
N = 2
df = df.loc[:, ~df.head(N).T.duplicated()]
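As a quick sanity check, here is a sketch built from the first three rows of the example above:
import pandas as pd

df = pd.DataFrame({'channel1': [47, 84, 72],
                   'channel2': [46, 28, 79],
                   'channel3': [27, 28, 68],
                   'channel2-b': [46, 28, 79]})
N = 2
# head(N).T puts each column on a row, so duplicated() flags repeated columns
print(df.loc[:, ~df.head(N).T.duplicated()])  # 'channel2-b' is dropped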

Average certain columns based on values in other columns

I would like to average certain column values depending on whether a condition is met in another column. Specifically, if column 1 in the below dataframe is < 1700, I want to include the corresponding value in that row from column 51 in my average calculation. And if column 2 < 1700, I want to also include the value in that row from column 52 in my average calculation.
So, for row 0, the new calculated column for that row would be 64 (the average of 65 and 63). For row 1, the average would be just 80 (the column 51 value), since neither column 2 nor column 3 was less than 1700 and hence neither was included in the average calculation.
This is a simplified example as my actual dataframe has about 10 columns for conditions with 10 corresponding columns of values to average.
As a potential complexity, the column headers are numbers rather than traditional text labels and do not refer to the order of that column in the dataframe since I've excluded certain columns when I imported the csv file. In other words, column 51 isn't the 51st column in the dataframe.
When I run the below code I'm getting the following error:
ValueError: ("No axis named 1 for object type ",
'occurred at index 0')
Is there a more efficient way to code this and avoid this error? Thanks for your help!
import pandas as pd
import numpy as np

test_df = pd.DataFrame({1: [1600, 1600, 1600, 1700, 1800],
                        2: [1500, 2000, 1400, 1500, 2000],
                        3: [2000, 2000, 2000, 2000, 2000],
                        51: [65, 80, 75, 80, 75],
                        52: [63, 82, 85, 85, 75],
                        53: [83, 80, 75, 76, 78]})
test_df
      1     2     3  51  52  53
0  1600  1500  2000  65  63  83
1  1600  2000  2000  80  82  80
2  1600  1400  2000  75  85  75
3  1700  1500  2000  80  85  76
4  1800  2000  2000  75  75  78

def calc_mean_based_on_conditions(row):
    list_of_columns_to_average = []
    for i in range(1, 4):
        if row[i] < 1700:
            list_of_columns_to_average.append(i + 50)
    if not list_of_columns_to_average:
        return np.nan
    else:
        return row[list_of_columns_to_average].mean(axis=1)

test_df['MeanValue'] = test_df.apply(calc_mean_based_on_conditions, axis=1)
Something very relevant (on supporting ints as column names): https://github.com/theislab/anndata/issues/31
Due to this bug/issue, I converted the column names to type string:
test_df = pd.DataFrame({'1': [1600, 1600, 1600, 1700, 1800],
                        '2': [1500, 2000, 1400, 1500, 2000],
                        '3': [2000, 2000, 2000, 2000, 2000],
                        '51': [65, 80, 75, 80, 75],
                        '52': [63, 82, 85, 85, 75],
                        '53': [83, 80, 75, 76, 78]})
Then I created a new dataframe, new_df, to meet our requirements:
new_df = test_df[['1', '2', '3']].where(test_df[['1','2','3']]<1700).notnull()
new_df now looks like this
1 2 3
0 True True False
1 True False False
2 True True False
3 False True False
4 False False False
Then simply rename the columns and select using 'where':
new_df = new_df.rename(columns={"1": "51", "2":"52", "3":"53"})
test_df['mean_value'] = test_df[['51', '52', '53']].where(new_df).mean(axis=1)
This should give you the desired output -
1 2 3 51 52 53 mean_value
0 1600 1500 2000 65 63 83 64.0
1 1600 2000 2000 80 82 80 80.0
2 1600 1400 2000 75 85 75 80.0
3 1700 1500 2000 80 85 76 85.0
4 1800 2000 2000 75 75 78 NaN
I deleted my other answer because it was going down the wrong path. What you want to do is generate a mask of your conditional columns, then use that mask to apply a function to other columns. In this case, 1 corresponds to 51, 2 to 52, etc.
import pandas as pd
import numpy as np

test_df = pd.DataFrame({1: [1600, 1600, 1600, 1700, 1800],
                        2: [1500, 2000, 1400, 1500, 2000],
                        3: [2000, 2000, 2000, 2000, 2000],
                        51: [65, 80, 75, 80, 75],
                        52: [63, 82, 85, 85, 75],
                        53: [83, 80, 75, 76, 78]})
test_df
      1     2     3  51  52  53
0  1600  1500  2000  65  63  83
1  1600  2000  2000  80  82  80
2  1600  1400  2000  75  85  75
3  1700  1500  2000  80  85  76
4  1800  2000  2000  75  75  78
# create a dictionary to map condition columns to value columns
l1 = list(range(1, 4))
l2 = list(range(51, 54))
d = dict(zip(l1, l2))
d
{1: 51, 2: 52, 3: 53}
temp = test_df[l1] < 1700  # subset the condition columns, generate the mask
for _, row in temp.iterrows():  # iterate through the subsetted data
    list_of_columns_for_mean = []  # value columns for the later computation
    for k, v in d.items():  # evaluate the conditional for each row
        if row[k]:
            list_of_columns_for_mean.append(v)
    # the rest should be pretty easy to figure out
This is not an elegant solution, but it is a solution. Unfortunately, I've run out of time to dedicate to it, but hopefully this gets you pointed in a better direction.
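For completeness, one hedged way to finish that loop (a sketch, not part of the original answer) is to collect the per-row mean while iterating:
# finish the iteration: average the selected value columns for each row
means = []
for idx, row in temp.iterrows():
    cols = [v for k, v in d.items() if row[k]]  # value columns whose condition held
    means.append(test_df.loc[idx, cols].mean() if cols else np.nan)
test_df['MeanValue'] = means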
There is probably a better, vectorized way to do this, but you could do it without the function:
import numpy as np
import pandas as pd
from collections import defaultdict

test_df = pd.DataFrame({1: [1600, 1600, 1600, 1700, 1800],
                        2: [1500, 2000, 1400, 1500, 2000],
                        3: [2000, 2000, 2000, 2000, 2000],
                        51: [65, 80, 75, 80, 75],
                        52: [63, 82, 85, 85, 75],
                        53: [83, 80, 75, 76, 78]})

# List of columns that you're applying the condition to
condition_cols = list(range(1, 4))
# Get row and column indices where this condition is true
condition = np.where(test_df[condition_cols].lt(1700))
# Make a dictionary mapping each row to its true columns
cond_map = defaultdict(list)
for r, c in zip(*condition):
    cond_map[r].append(c)
# Get the means of the true columns
means = []
for row in range(len(test_df)):
    if row in cond_map:
        temp = []
        for col in cond_map[row]:
            # positional index starts at zero, so add 51 to map 0 -> 51, 1 -> 52, 2 -> 53
            temp.append(test_df.loc[row, col + 51])
        means.append(temp)
    else:
        # the row has no true columns (i.e. row 4)
        means.append(np.nan)
test_df['Means'] = [np.mean(l) for l in means]
The issue is indexing true rows and columns in a vectorized way.
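For the record, the vectorized indexing is possible: relabel a boolean mask of the condition columns so it aligns with the value columns, then take a row-wise mean. A sketch, assuming the integer-keyed test_df above:
# build the mask on columns 1-3, relabel it to match the value columns 51-53
mask = test_df[[1, 2, 3]].lt(1700)
mask.columns = [51, 52, 53]
# where() keeps only values whose condition held; mean(axis=1) skips the NaNs
test_df['Means'] = test_df[[51, 52, 53]].where(mask).mean(axis=1)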

Generating a scatter plot using two different datasets in python pandas

I have two datasets. Both have different numbers of observations. Is it possible to generate a scatter plot between features from different datasets?
For example, I want to generate a scatter plot between the submission_day column of dataset 1 and the score column of dataset 2.
I am not sure how to do that using python packages.
For example consider the following two datasets:
id_student submission_day
23hv 100
24hv 99
45hv 10
56hv 16
53hv 34
id_student score
23hv 59
25gf 20
24hv 56
45hv 76
I think you need to merge them into one DataFrame and then use DataFrame.plot.scatter:
df = df1.merge(df2, on='id_student')
print(df)
id_student submission_day score
0 23hv 100 59
1 24hv 99 56
2 45hv 10 76
df.plot.scatter(x='submission_day', y='score')
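As a self-contained sketch (sample frames reconstructed from the question):
import pandas as pd
import matplotlib.pyplot as plt

df1 = pd.DataFrame({'id_student': ['23hv', '24hv', '45hv', '56hv', '53hv'],
                    'submission_day': [100, 99, 10, 16, 34]})
df2 = pd.DataFrame({'id_student': ['23hv', '25gf', '24hv', '45hv'],
                    'score': [59, 20, 56, 76]})
df = df1.merge(df2, on='id_student')  # inner join keeps only shared students
df.plot.scatter(x='submission_day', y='score')
plt.show()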

Replacing pandas Series column values with their own indices

I have a chronologically sorted datetime Series (note the index values on the left-hand side):
9 1971-04-10
84 1971-05-18
2 1971-07-08
53 1971-07-11
28 1971-09-12
474 1972-01-01
153 1972-01-13
13 1972-01-26
129 1972-05-06
98 1972-05-13
111 1972-06-10
225 1972-06-15
For my purpose, only the sorted indices matter, so I would like to replace the datetime values with their indices in the original pandas Series (perhaps through reindexing) to return a new Series like this:
0 9
1 84
2 2
3 53
4 28
5 474
6 153
7 13
8 129
9 98
10 111
11 225
where the indices on the left-hand side form the new default index and the values on the right are the original indices of the datetime values.
What is the easiest way to do this?
Thank you.
If you are okay with constructing a new object:
series = pd.Series(old_series.index, index=whateveryouwant)
Specifying the new index is optional; if you omit it, the new Series gets a default RangeIndex of 0, 1, 2, ..., which is exactly the desired output.
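Applied to the example (a sketch with just the first three rows):
import pandas as pd

old_series = pd.Series(pd.to_datetime(['1971-04-10', '1971-05-18', '1971-07-08']),
                       index=[9, 84, 2])
new_series = pd.Series(old_series.index)  # default RangeIndex 0, 1, 2, ...
print(new_series)  # 0 -> 9, 1 -> 84, 2 -> 2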
You can point your index to a list as follows:
df.index = list(range(len(df)))
where df is your dataframe

Numpy 2D array in Python 3.4

I have this code:
import pandas as pd
data = pd.read_csv("test.csv", sep=",")
The data array looks like this: [screenshot omitted]
The problem is that I can't slice out a column like this:
week = data[:,1]
It should put the second column into week, but instead it raises:
TypeError: unhashable type: 'slice'
How should I do this to make it work?
I'm also wondering what exactly this code does (I don't really understand the np.newaxis part):
week = data['1'][:, np.newaxis]
Result: [screenshot omitted]
There are a few issues here.
First, read_csv uses a comma as a separator by default, so you don't need to specify that.
Second, the pandas csv reader by default uses the first row to get column headings. That doesn't appear to be what you want, so you need to use the header=None argument.
Third, it looks like your first column is the row number. You can use index_col=0 to use that column as the index.
Fourth, for pandas, the first index is the column, not the row. Further, using the standard data[ind] notation is indexing by column name, rather than column number. And you can't use a comma to index both row and column at the same time (you need to use data.loc[row, col] to do that).
So for your case, all you need to do to get the second column is data[2]; or, if you use the first column as the row number, the second column becomes the first column and you would do data[1]. This returns a pandas Series, which is the 1D equivalent of a 2D DataFrame.
So the whole thing should look like this:
import pandas as pd
data = pd.read_csv('test.csv', header=None, index_col=0)
week = data[1]
data looks like this:
1 2 3 4
0
1 10 2 100 12
2 15 5 150 15
3 25 7 240 20
4 22 12 350 20
5 51 13 552 20
6 134 20 880 36
7 150 22 900 38
8 200 29 1020 44
9 212 31 1100 46
10 199 23 1089 45
11 220 32 1145 60
The 0 on its own line is not a row of data; it is the name of the index column.
week looks like this:
0
1 10
2 15
3 25
4 22
5 51
6 134
7 150
8 200
9 212
10 199
11 220
Name: 1, dtype: int64
However, you can give columns (and rows) meaningful names in pandas, and then access them by those names. I don't know the column names, so I just made some up:
import pandas as pd
data = pd.read_csv('test.csv', header=None, index_col=0, names=['week', 'spam', 'eggs', 'grail'])
week = data['week']
In this case, data looks like this:
week spam eggs grail
1 10 2 100 12
2 15 5 150 15
3 25 7 240 20
4 22 12 350 20
5 51 13 552 20
6 134 20 880 36
7 150 22 900 38
8 200 29 1020 44
9 212 31 1100 46
10 199 23 1089 45
11 220 32 1145 60
And week looks like this:
1 10
2 15
3 25
4 22
5 51
6 134
7 150
8 200
9 212
10 199
11 220
Name: week, dtype: int64
For np.newaxis, what that does is add one dimension to the array. So say you have a 1D array (a vector); using np.newaxis on it would turn it into a 2D array. It would turn a 2D array into a 3D array, 3D into 4D, and so on. Depending on where you put it (such as [:, np.newaxis] vs. [np.newaxis, :]), you can determine which dimension to add. So np.arange(10)[np.newaxis, :] (or just np.arange(10)[np.newaxis]) gives you a shape (1, 10) 2D array, while np.arange(10)[:, np.newaxis] gives you a shape (10, 1) 2D array.
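A quick demonstration of those shapes:
import numpy as np

v = np.arange(10)              # 1D, shape (10,)
print(v[np.newaxis, :].shape)  # (1, 10): new leading axis
print(v[:, np.newaxis].shape)  # (10, 1): new trailing axis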
In your case, what the line is doing is getting the column with the name 1, which is a 1D pandas Series, then adding a new dimension to it. However, instead of turning it back into a DataFrame, it instead converts it into a 1D numpy array, then adds one dimension to make it a 2D numpy array.
This, however, is risky long-term: there is no guarantee that this sort of silent conversion won't be changed at some point. To convert a pandas object to a numpy one, use an explicit conversion with the values attribute, so in your case data.values or data['1'].values.
However, you don't really need a numpy array. A series is fine. If you really want a 2D object, you can convert a Series into a DataFrame by using something like data['1'].to_frame().
