"Cannot reindex from a duplicate axis" when groupby.apply() on MultiIndex columns - python

I'm playing around with computing subtotals within a DataFrame that looks like this (note the MultiIndex):
       0    1    2    3    4    5
A 1  0.0  0.0  0.0  0.0  0.0  0.0
  2  0.0  0.0  0.0  0.0  0.0  0.0
B 1  0.0  0.0  0.0  0.0  0.0  0.0
  2  0.0  0.0  0.0  0.0  0.0  0.0
I can successfully add the subtotals with the following code:
(
    df
    .groupby(level=0)
    .apply(
        lambda df: pd.concat(
            [df.xs(df.name), df.sum().to_frame('Total').T]
        )
    )
)
And it looks like this:
           0    1    2    3    4    5
A 1      0.0  0.0  0.0  0.0  0.0  0.0
  2      0.0  0.0  0.0  0.0  0.0  0.0
  Total  0.0  0.0  0.0  0.0  0.0  0.0
B 1      0.0  0.0  0.0  0.0  0.0  0.0
  2      0.0  0.0  0.0  0.0  0.0  0.0
  Total  0.0  0.0  0.0  0.0  0.0  0.0
However, the same approach does not work on the transposed DataFrame, which looks like:
     A         B
     1    2    1    2
0  0.0  0.0  0.0  0.0
1  0.0  0.0  0.0  0.0
2  0.0  0.0  0.0  0.0
3  0.0  0.0  0.0  0.0
4  0.0  0.0  0.0  0.0
5  0.0  0.0  0.0  0.0
And I use the following code:
(
    df2
    .groupby(level=0, axis=1)
    .apply(
        lambda df: pd.concat(
            [df.xs(df.name, axis=1), df.sum(axis=1).to_frame('Total')],
            axis=1
        )
    )
)
I have specified axis=1 everywhere I can think of, but I get an error:
ValueError: cannot reindex from a duplicate axis
I would expect the output to be:
     A                B
     1    2  Total    1    2  Total
0  0.0  0.0    0.0  0.0  0.0    0.0
1  0.0  0.0    0.0  0.0  0.0    0.0
2  0.0  0.0    0.0  0.0  0.0    0.0
3  0.0  0.0    0.0  0.0  0.0    0.0
4  0.0  0.0    0.0  0.0  0.0    0.0
5  0.0  0.0    0.0  0.0  0.0    0.0
Is this a bug? Or have I not specified the axis correctly everywhere? As a workaround, I can obviously transpose the DataFrame, produce the totals, and transpose back, but I'd like to know why it's not working here, and submit a bug report if necessary.
The problem DataFrame can be generated with:
df2 = pd.DataFrame(
    np.zeros([6, 4]),
    columns=pd.MultiIndex.from_product([['A', 'B'], [1, 2]])
)
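For completeness, a minimal sketch of the transpose workaround mentioned above (assuming the usual import numpy as np / import pandas as pd): run the row-wise version, which works, on df2.T, then transpose back:
import numpy as np
import pandas as pd

df2 = pd.DataFrame(
    np.zeros([6, 4]),
    columns=pd.MultiIndex.from_product([['A', 'B'], [1, 2]])
)

# run the working row-wise subtotal logic on the transpose, then flip back
result = (
    df2.T
    .groupby(level=0)
    .apply(
        lambda g: pd.concat(
            [g.xs(g.name), g.sum().to_frame('Total').T]
        )
    )
    .T
)
print(result)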

Related

Python - Adding a column in a text file when certain criteria is met

I want to create a Python script that performs the following steps, but I'm not sure how to tackle it. I have a text file that looks like this (it was a .tsv file converted to .txt):
trans_x trans_y trans_z rot_x rot_y rot_z non_steady_state_outlier00 non_steady_state_outlier01 motion_outlier00 motion_outlier01 motion_outlier02 motion_outlier03 motion_outlier04 motion_outlier05 motion_outlier06 motion_outlier07 motion_outlier08 motion_outlier09 motion_outlier10 motion_outlier11 motion_outlier12 motion_outlier13 motion_outlier14 motion_outlier15 motion_outlier16 motion_outlier17 motion_outlier18 motion_outlier19 motion_outlier20 motion_outlier21 motion_outlier22 motion_outlier23 motion_outlier24 motion_outlier25 motion_outlier26 motion_outlier27 motion_outlier28 motion_outlier29 motion_outlier30 motion_outlier31 motion_outlier32 motion_outlier33 motion_outlier34 motion_outlier35 motion_outlier36 motion_outlier37 motion_outlier38 motion_outlier39 motion_outlier40 motion_outlier41 motion_outlier42 motion_outlier43 motion_outlier44 motion_outlier45 motion_outlier46 motion_outlier47 motion_outlier48 motion_outlier49 motion_outlier50 motion_outlier51 motion_outlier52 motion_outlier53 motion_outlier54 motion_outlier55 motion_outlier56 motion_outlier57
-0.213045 0.188054 -0.0350898 -0.000904171 -0.001059 0.00130768 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.310081 -0.194011 0.00361878 0.000139216 0.000693861 -0.00162252 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.120743 -0.328317 0.00325342 -0.000296934 0.000299405 -0.000840907 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.17164 -0.36066 -0.00439546 -0.000190615 0.000769665 -0.000238167 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
What I need is code that:
Goes through my .txt files and searches to see whether there is a column called non_steady_state_outlier02.
If non_steady_state_outlier02 doesn't exist, add a column with this name after non_steady_state_outlier01 and fill it out so that there is a 1 in the 3rd position and 0 everywhere else (e.g. 0 0 1 0 0 0...).
Many thanks in advance!
Assuming your table is structured as shown above, looping over your directory of text files in Python and using pandas for your condition will work nicely:
import pandas as pd
import os

for filename in os.listdir('.'):
    if filename.endswith('.txt'):
        df = pd.read_csv(filename)
        if 'non_steady_state_outlier02' not in df.columns:
            df['non_steady_state_outlier02'] = pd.Series([0, 0, 1, 0])
            df.to_csv(filename.replace('.txt', '_new.txt'))
        else:
            pass
And if the file is not comma-separated, pass your delimiter to pandas, e.g. for tabs:
df = pd.read_csv(filename, delimiter='\t')
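A hedged sketch building on the above, for the requirement in the question that the new column sit directly after non_steady_state_outlier01 rather than at the end (column and file names are taken from the question; the _new.txt suffix mirrors the answer above):
import os
import pandas as pd

for filename in os.listdir('.'):
    if filename.endswith('.txt'):
        df = pd.read_csv(filename, delimiter='\t')
        if 'non_steady_state_outlier02' not in df.columns:
            # place the new column right after non_steady_state_outlier01
            pos = df.columns.get_loc('non_steady_state_outlier01') + 1
            values = [0.0] * len(df)
            values[2] = 1.0  # a 1 in the 3rd position, 0 everywhere else
            df.insert(pos, 'non_steady_state_outlier02', values)
            df.to_csv(filename.replace('.txt', '_new.txt'), sep='\t', index=False)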

Getting column name where a condition matches in a row

I have a pandas dataframe which looks like this:
     A    B    C    D    E    F    G    H    I
1  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
2  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
3  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
Now, for each row, I have to check which column contains a 1 and then record that column's name in a new column. The final DataFrame would look like this:
     A    B    C    D    E    F    G    H    I  IsTrue
1  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0       B
2  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0       A
3  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0       B
Is there a faster, more Pythonic way to do it?
Here's one way using DataFrame.dot:
df['isTrue'] = df.astype(bool).dot(df.columns)
     A    B    C    D    E    F    G    H    I  isTrue
1  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0       B
2  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0       A
3  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0       B
For even better performance you can use:
df['isTrue'] = df.columns[df.to_numpy().argmax(1)]
What you described is exactly what idxmax does:
>>> df.idxmax(1)
1    B
2    A
3    B
dtype: object
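To store that in the new column the question asks for, a one-liner along these lines should do:
df['IsTrue'] = df.idxmax(axis=1)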

Iterating over pandas DataFrame with identical columns header

I am trying to iterate through the rows and columns of a pandas DataFrame and write the result to a new DataFrame if some condition is met. I am able to iterate over the following DataFrame, which has distinct names for its rows and columns.
      W0O5  W1O5  W2O5  W3O5
W0O5   0.0   0.0   0.0   0.0
W1O5   0.0   0.0   1.0   0.0
W2O5   0.0   1.0   0.0   0.0
W3O5   0.0   0.0   0.0   0.0
I used the following approach:
for i in pandas_df.index:
    for j in pandas_df.columns:
        print(i, j)
        print(pandas_df.at[i, j])
        if pandas_df.at[i, j] == 1:
            single_pandas_df.at['WO5', 'WO5_corner'] = 1
where single_pandas_df is the new DataFrame I created, in which I want to set the value at the corresponding row and column.
However, when I try to iterate through a DataFrame whose rows and columns have identical headers, as below:
     WO5  WO5  WO5  WO5
WO5  0.0  0.0  0.0  0.0
WO5  0.0  0.0  1.0  0.0
WO5  0.0  1.0  0.0  0.0
WO5  0.0  0.0  0.0  0.0
I get an AttributeError saying:
AttributeError: 'BlockManager' object has no attribute 'T'
I know the error is due to the duplicate column names. Is there any way to handle such a case in pandas? All of my DataFrames are like the second case, and I need to get the values at each row and column position.
Thanks in advance.
Update after Yolo's comment:
Actually, I have many such DataFrames, like the ones below:
      DyO7  DyO7  DyO6  DyO7  DyO7  DyO6
DyO7   0.0   3.0   1.0   2.0   1.0   0.0
DyO7   3.0   0.0   0.0   1.0   0.0   1.0
DyO6   1.0   0.0   0.0   0.0   1.0   0.0
DyO7   2.0   1.0   0.0   0.0   3.0   1.0
DyO7   1.0   0.0   1.0   3.0   0.0   0.0
DyO6   0.0   1.0   0.0   1.0   0.0   0.0
and the next one as:
      TaO6  TaO6
TaO6   0.0   1.0
TaO6   1.0   0.0
In these DataFrames, 1, 2 and 3 represent corner, edge and face sharing, respectively. So if item (i, j) in a DataFrame is 1, it goes to the "..."_corner column; if 2, to "..."_edge; and if 3, to "..."_face.
My initial single_pandas_df DataFrame looks like the following:
      DyO6_corner  DyO6_edge  DyO6_face  DyO7_corner  DyO7_edge  DyO7_face  TaO6_corner  TaO6_edge  TaO6_face  WO5_corner  WO5_edge  WO5_face
DyO6          0.0        0.0        0.0          0.0        0.0        0.0          0.0        0.0        0.0         0.0       0.0       0.0
DyO7          0.0        0.0        0.0          0.0        0.0        0.0          0.0        0.0        0.0         0.0       0.0       0.0
TaO6          0.0        0.0        0.0          0.0        0.0        0.0          0.0        0.0        0.0         0.0       0.0       0.0
WO5           0.0        0.0        0.0          0.0        0.0        0.0          0.0        0.0        0.0         0.0       0.0       0.0
After my script above updates this single_pandas_df DataFrame, there will be a 1 at ('WO5', 'WO5_corner'), and it becomes:
      DyO6_corner  DyO6_edge  DyO6_face  DyO7_corner  DyO7_edge  DyO7_face  TaO6_corner  TaO6_edge  TaO6_face  WO5_corner  WO5_edge  WO5_face
DyO6          0.0        0.0        0.0          0.0        0.0        0.0          0.0        0.0        0.0         0.0       0.0       0.0
DyO7          0.0        0.0        0.0          0.0        0.0        0.0          0.0        0.0        0.0         0.0       0.0       0.0
TaO6          0.0        0.0        0.0          0.0        0.0        0.0          0.0        0.0        0.0         0.0       0.0       0.0
WO5           0.0        0.0        0.0          0.0        0.0        0.0          0.0        0.0        0.0         1.0       0.0       0.0
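This related question appears here without an answer, so here is a minimal sketch of one way to handle it (my suggestion, not from the thread): .iat and .iloc are purely positional, so duplicate row and column labels are harmless; read the labels separately to pick the target cell in single_pandas_df:
# Sketch: positional iteration sidesteps the duplicate-label lookup entirely.
# Assumes single_pandas_df is pre-filled with zeros as shown above.
suffix = {1.0: '_corner', 2.0: '_edge', 3.0: '_face'}

for i in range(len(pandas_df.index)):
    for j in range(len(pandas_df.columns)):
        value = pandas_df.iat[i, j]
        if value in suffix:
            row_label = pandas_df.index[i]
            col_label = pandas_df.columns[j]
            single_pandas_df.at[row_label, col_label + suffix[value]] = 1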

Finding the difference between rows of columns using shift

I've been coming here for almost two years now and have always been able to figure things out, but I'm stumped now. Hopefully this is a quick answer.
https://github.com/MPhillips55/Capstone-Project-2---League-of-Legends/blob/master/EDA/test_case.csv
The link above shows what my data looks like. 'min_0', 'min_1' and so on are gold values for League of Legends games at 1-minute intervals, continuing on to 'min_80'. The CSV should be available to download.
I want to subtract the red values from the blue values and store that number on the blue rows for each minute.
Then I want to subtract the blue values from the red values and store that number on the red rows for each minute.
For clarity, I am only interested in the comparison for matching 'match_id's.
Here is an image of my desired output:
Desired Output
I think the right answer is likely something like this:
gold_df.loc[gold_df['red_or_blue_side'] == 'blue', :] = \
    BLUE_VALUES - BLUE_VALUES.shifted_down
gold_df.loc[gold_df['red_or_blue_side'] == 'red', :] = \
    RED_VALUES - RED_VALUES.shifted_up
I'm not clear on two things with that code: I need to select all the columns except the first two to calculate the differences, and I don't know how to select the values and the shifted values across all the relevant columns.
Thank you for the help. Please let me know if more information is needed.
-Mike
You could group by match_id, find the difference in each direction using .diff, and then add the two components: within each pair of rows, .diff() places row2 - row1 on the second row and NaN on the first, while .diff(-1) places row1 - row2 on the first row and NaN on the second, so filling the NaNs with 0 and adding gives each row its own difference from the other.
g = df.groupby('match_id', sort=False)[df.columns[2:]]
df = g.diff().fillna(0) + g.diff(-1).fillna(0)
df
min_0 min_1 min_2 min_3 min_4 min_5 min_6 min_7 min_8 min_9 \
0 0.0 15.0 46.0 -133.0 -60.0 -904.0 -505.0 -852.0 -763.0 -1224.0
1 0.0 -15.0 -46.0 133.0 60.0 904.0 505.0 852.0 763.0 1224.0
2 0.0 0.0 0.0 89.0 -92.0 -174.0 191.0 69.0 253.0 362.0
3 0.0 0.0 0.0 -89.0 92.0 174.0 -191.0 -69.0 -253.0 -362.0
4 0.0 0.0 17.0 -106.0 -136.0 400.0 363.0 829.0 1532.0 1862.0
5 0.0 0.0 -17.0 106.0 136.0 -400.0 -363.0 -829.0 -1532.0 -1862.0
... min_71 min_72 min_73 min_74 min_75 min_76 min_77 min_78 \
0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
5 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
min_79 min_80
0 0.0 0.0
1 0.0 0.0
2 0.0 0.0
3 0.0 0.0
4 0.0 0.0
5 0.0 0.0
To select all columns except the first two, you can use either:
df[df.columns[2:]]
or, equivalently:
df.iloc[:, 2:]
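As a small follow-up sketch (my illustration, reusing gold_df and the column layout from the question), the differences can be written back into the original frame so that match_id and red_or_blue_side stay intact:
cols = gold_df.columns[2:]  # every column except match_id and red_or_blue_side
g = gold_df.groupby('match_id', sort=False)[cols]
gold_df[cols] = g.diff().fillna(0) + g.diff(-1).fillna(0)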

Reorganizing an MxN 2D array of datapoints into an N-dimensional array

I've got a series of measurements in a 2D array such as
T mu1 mu2 mu3 a b c d e
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 2.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 3.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 2.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 3.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 2.0 1.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 2.0 3.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 3.0 1.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 3.0 2.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0
1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0
1.0 0.0 0.0 2.0 0.0 0.0 0.0 0.0 0.0
1.0 0.0 0.0 3.0 0.0 0.0 0.0 0.0 0.0
1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0
1.0 0.0 1.0 2.0 0.0 0.0 0.0 0.0 0.0
1.0 0.0 1.0 3.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 2.0 1.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 2.0 3.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 3.0 1.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 3.0 2.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0
where T, mu1, mu2 and mu3 are the 4 axes of the variables I control (independent variables). a, b, c, d and e are the measurements I've made (dependent variables).
I would like to convert this 2D array into a 5D array in numpy. By specifying T, mu1, mu2 and mu3 (or at least their 4 indexes) I want to be able to retrieve the corresponding a, b, c, d and e values.
Is there a straightforward way to reshape this kind of array by specifying what columns the axes correspond to? The MultiIndex in Pandas seemed to smartly organize it in a table, but seems ill-suited for high dimensional arrays. I won't necessarily know ahead of time what the shape of the ndarray should be, but it seems to me that based on the values it should be possible to reshape the array properly. The increment values for each axis might also be different, but they will always be uniform.
My current idea involves ignoring the mu1, mu2 and mu3 columns, and stacking sets of T data into a 3D array. From there I would stack sets of 3D mu1 data into a 4D array, and repeat the process with mu2 and mu3. This seems like a tedious process that should have a simple solution though.
First, let's make some fake data:
# an N x 5 array containing a regular mesh representing the stimulus params
stim_params = np.mgrid[:2, :3, :4, :5, :6].reshape(5, -1).T
# an N x 3 array representing the output values for each simulation run
output_vals = np.arange(720 * 3).reshape(720, 3)
# shuffle the rows for a bit of added realism
shuf = np.random.permutation(stim_params.shape[0])
stim_params = stim_params[shuf]
output_vals = output_vals[shuf]
Now you can use np.lexsort to get the set of indices that will sort the rows of your 2D array of simulation parameters such that the values in each column are in ascending order. Having done that, you can apply these indices to the rows of simulation output values.
# get the number of unique values for each stimulus parameter
params_shape = tuple(np.unique(col).shape[0] for col in stim_params.T)
# get the set of row indices that will sort the stimulus parameters in ascending
# order, starting with the final column
idx = np.lexsort(stim_params[:, ::-1].T)
# sort and reshape the stimulus parameters:
sorted_params = stim_params[idx].T.reshape((5,) + params_shape)
# sort and reshape the output values
sorted_output = output_vals[idx].T.reshape((3,) + params_shape)
I find that the hardest part is often just trying to wrap your head around what all the different dimensions of the outputs correspond to:
# array of stimulus parameters, with dimensions (n_params, p1, p2, p3, p4, p5)
print(sorted_params.shape)
# (5, 2, 3, 4, 5, 6)
# to check that the sorting worked as expected, we can look at the values of the
# 5th parameter when all the others are held constant at 0:
print(sorted_params[4, 0, 0, 0, 0, :])
# [0 1 2 3 4 5]
# ... and the 1st parameter when we hold all the others constant:
print(sorted_params[0, :, 0, 0, 0, 0])
# [0, 1]
# ... now let the 1st and 2nd parameters covary:
print(sorted_params[:2, :, :, 0, 0, 0])
# [[[0 0 0]
# [1 1 1]]
# [[0 1 2]
# [0 1 2]]]
Hopefully you get the idea. The same indexing logic applies to the sorted simulation outputs:
# array of outputs, with dimensions (n_outputs, p1, p2, p3, p4, p5)
print(sorted_output.shape)
# (3, 2, 3, 4, 5, 6)
# the first output variable whilst holding the first 4 simulation parameters
# constant at 0:
print(sorted_output[0, 0, 0, 0, 0, :])
# [ 0 3 6 9 12 15]
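To close the loop on the original question (retrieving a, b, c, d and e for given parameter values rather than indices), here is a small sketch; the target values below are illustrative, and np.searchsorted works because np.unique returns each axis's values in ascending order:
target = (1, 2, 3, 4, 5)  # hypothetical values for the 5 stimulus parameters

# sorted unique values along each parameter axis
axes = [np.unique(col) for col in stim_params.T]

# map each value to its positional index, then grab all 3 outputs at that point
idx = tuple(np.searchsorted(ax, v) for ax, v in zip(axes, target))
print(sorted_output[(slice(None),) + idx])
# [2157 2158 2159]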
