How to parse the output of Pandas' stack() function - python

stack() is an excellent pandas function. It returns a stacked DataFrame or Series. How can I parse this output and print it well formatted (e.g. using the to_markdown() function)?
>>> df_single_level_cols
     weight  height
cat       0       1
dog       2       3
>>> df_single_level_cols.stack()
cat  weight    0
     height    1
dog  weight    2
     height    3
dtype: int64

You mean that with to_markdown the two index levels are displayed as tuples, right? Then try to use:
print(df_single_level_cols.stack().reset_index().to_markdown())
Maybe you have to adjust the index and the column names.
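For example (a sketch; the renamed column names below are just illustrative, not anything mandated by pandas):

import pandas as pd

df_single_level_cols = pd.DataFrame(
    [[0, 1], [2, 3]], index=['cat', 'dog'], columns=['weight', 'height'])

# stack() returns a Series with a two-level MultiIndex; reset_index()
# moves both index levels into regular columns, so to_markdown() prints
# them as separate columns instead of index tuples.
stacked = df_single_level_cols.stack().reset_index()
stacked.columns = ['animal', 'attribute', 'value']  # illustrative names
print(stacked.to_markdown())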

Related

How to change the value of a column items using pandas?

This is my first question on Stack Overflow.
I'm implementing a machine learning classification algorithm and I want to generalize it to any input dataset that has its target class in the last column. For that, I want to modify all the values of this column without needing to know the names of the columns or rows, using pandas in Python.
For example, let's suppose I load a dataset:
dataset = pd.read_csv('random_dataset.csv')
Let's say the last column has the following data:
0    dog
1    dog
2    cat
3    dog
4    cat
I want to change each "dog" appearance to 1 and each "cat" appearance to 0, so that the column would look like:
0    1
1    1
2    0
3    1
4    0
I have found some ways of changing the values of specific cells using pandas, but for this case, what would be the best way to do that?
I appreciate each answer.
You can use pandas.Categorical:
df['column'] = pd.Categorical(df['column']).codes
You can also use the built-in functionality for this:
df['column'] = df['column'].astype('category').cat.codes
Use map() and map the values as per the requirement:
df['col_name'] = df['col_name'].map({'dog' : 1 , 'cat': 0})
Or use factorize() (encode the object as an enumerated type) if you just want arbitrary numeric codes:
df['col_name'] = df['col_name'].factorize()[0]
Note that factorize() assigns codes by order of first appearance, so here it would give dog -> 0 and cat -> 1; the output below is from the map approach.
OUTPUT:
0    1
1    1
2    0
3    1
4    0
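Since the question asks to do this without knowing the column names, a minimal sketch that selects the last column by position (the DataFrame below is hypothetical, standing in for random_dataset.csv):

import pandas as pd

# Hypothetical data standing in for random_dataset.csv
df = pd.DataFrame({'feature': [10, 20, 30, 40, 50],
                   'target': ['dog', 'dog', 'cat', 'dog', 'cat']})

# df.columns[-1] is the name of the last column, whatever it is;
# category codes are assigned alphabetically, so cat -> 0 and dog -> 1 here.
last = df.columns[-1]
df[last] = df[last].astype('category').cat.codes
print(df)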

Convert data frame with a single column to a difference matrix

I have a data frame with a single column of values and an index of sample names:
>>> df = pd.DataFrame(data={'value':[1,3,4]},index=['cat','dog','bird'])
>>> print(df)
      value
cat       1
dog       3
bird      4
I would like to convert this to a square matrix wherein each cell of the matrix shows the difference between every set of two values:
      cat  dog  bird
cat     0    2     3
dog     2    0     1
bird    3    1     0
Is this possible? If so, how do I go about doing this?
I have tried to use scipy.spatial.distance.squareform to convert my starting data frame into a matrix, but apparently what I am starting with is not the right type of vector (squareform expects a condensed distance vector). Any help would be much appreciated!
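For what it's worth, one way to build that matrix (a sketch, not from the original thread) is plain numpy broadcasting: subtract the value vector from itself and take the absolute value, which reproduces the matrix above.

import numpy as np
import pandas as pd

df = pd.DataFrame(data={'value': [1, 3, 4]}, index=['cat', 'dog', 'bird'])

# A column vector minus a row vector broadcasts to the full pairwise
# difference matrix; abs() makes it symmetric, as in the example.
vals = df['value'].to_numpy()
diff = pd.DataFrame(np.abs(vals[:, None] - vals[None, :]),
                    index=df.index, columns=df.index)
print(diff)

Alternatively, scipy.spatial.distance.pdist(df[['value']], metric='cityblock') passed through squareform should give the same square matrix, since pdist takes a 2-D array of observations rather than the condensed vector squareform expects on its own.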

Create a column based on multiple column distinct count pandas [duplicate]

I want to add an aggregated, grouped, nunique column to my pandas dataframe, but not aggregate the entire dataframe. I'm trying to do this in one line and avoid creating a new aggregated object and merging that, etc.
My df has track, type, and id. I want the number of unique ids for each track/type combination as a new column in the table (but without collapsing track/type combos in the resulting df): same number of rows, one more column.
Something like this isn't working:
df['n_unique_id'] = df.groupby(['track', 'type'])['id'].nunique()
nor is
df['n_unique_id'] = df.groupby(['track', 'type'])['id'].transform(nunique)
This last one works with some aggregating functions but not others. The following works (but is meaningless on my dataset):
df['n_unique_id'] = df.groupby(['track', 'type'])['id'].transform(sum)
In R this is easily done in data.table with
df[, n_unique_id := uniqueN(id), by = c('track', 'type')]
thanks!
df.groupby(['track', 'type'])['id'].transform(nunique)
implies that there is a name nunique in the namespace that refers to some function. transform will take a function, or a string that it knows a function for, and 'nunique' is definitely one of those strings.
As pointed out by @root, the methods that pandas uses to perform a transformation indicated by these strings are often optimized and should generally be preferred to passing your own functions. This is true even for passing numpy functions in some cases.
For example, transform('sum') should be preferred over transform(sum).
Try this instead
df.groupby(['track', 'type'])['id'].transform('nunique')
demo
df = pd.DataFrame(dict(
    track=list('11112222'), type=list('AAAABBBB'), id=list('XXYZWWWW')))
print(df)
  id track type
0  X     1    A
1  X     1    A
2  Y     1    A
3  Z     1    A
4  W     2    B
5  W     2    B
6  W     2    B
7  W     2    B
df.groupby(['track', 'type'])['id'].transform('nunique')
0    3
1    3
2    3
3    3
4    1
5    1
6    1
7    1
Name: id, dtype: int64
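To get the new column the question asks for, assign the result back:

df['n_unique_id'] = df.groupby(['track', 'type'])['id'].transform('nunique')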

Slice column in pandas DataFrame and average the results

If I have a pandas DataFrame such as:
timestamp  label  value  new
etc.       a      1      3.5
           b      2      5
           a      5      ...
           b      6      ...
           a      2      ...
           b      4      ...
I want the new column to be the average of the last two a's and the last two b's... so for the first row it would be the average of 5 and 2, which gives 3.5. It will be sorted by the timestamp. I know I could use a groupby to get the average of all the a's or all the b's, but I'm not sure how to get an average of just the last two. I'm kinda new to Python and coding, so this might not even be possible.
Edit: I should also mention this is not for a class or anything; this is just for something I'm doing on my own, and it will be on a very large dataset. I'm just using this as an example. Also, I would want each a and each b to have its own value for the last-2 average, so the dimension of the new column will be the same as the others. So for the third line it would be the average of 2 and whatever the next a would be in the dataset.
IIUC one way (among many) to do that:
In [139]: df.groupby('label').tail(2).groupby('label').mean().reset_index()
Out[139]:
  label  value
0     a    3.5
1     b    5.0
Edited to reflect a change in the question specifying the last two, not the ones following the first, and that you wanted the same dimensionality with values repeated.
import pandas as pd

data = {'label': ['a', 'b', 'a', 'b', 'a', 'b'], 'value': [1, 2, 5, 6, 2, 4]}
df = pd.DataFrame(data)
grouped = df.groupby('label')
results = {'label': [], 'tail_mean': []}
for item, grp in grouped:
    # mean of the last two values in this group
    subset_mean = grp['value'].tail(2).mean()
    results['label'].append(item)
    results['tail_mean'].append(subset_mean)
res_df = pd.DataFrame(results)
df = df.merge(res_df, on='label', how='left')
Outputs:
>>> res_df
  label  tail_mean
0     a        3.5
1     b        5.0
>>> df
  label  value  tail_mean
0     a      1        3.5
1     b      2        5.0
2     a      5        3.5
3     b      6        5.0
4     a      2        3.5
5     b      4        5.0
Now you have a dataframe of your results only, if you need them, plus a column with the results merged back into the main dataframe. Someone else posted a more succinct way to get the results dataframe; there's probably no reason to do it the longer way shown here unless you also need to perform more operations like this inside the same loop.
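For completeness, the repeated-back column can also be produced in one line with transform (a sketch on the same demo data; a lambda like this forgoes the optimized string-based fast path mentioned in an earlier answer):

df['tail_mean'] = df.groupby('label')['value'].transform(lambda s: s.tail(2).mean())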

How to convert Series into DataFrame preserving the columns?

I have a large DataFrame with the following contents:
obj0   type  obj1    counts
car    P     wheel   2
tree   P     branch  5
box    I     ball    23
wheel  P     bus     3
grass  A     cow     10
...
I want to see the summed counts for each obj0 and type pair, so I do:
obj0_sums = pdf.groupby(['obj0', 'type'])['counts'].sum()
obj0  type
car   P        5
tree  P       15
box   I       50
...
Then I need to do other operations on obj0_sums later, but it is a Series where the obj0 and type columns have become the index. How can I make it stay a DataFrame and preserve the columns after the groupby operation?
try this:
In [262]: df.groupby(['obj0', 'type'], as_index=False)['counts'].sum()
Out[262]:
    obj0 type  counts
0    box    I      23
1    car    P       2
2  grass    A      10
3   tree    P       5
4  wheel    P       3
from docs:
as_index : boolean, default True
For aggregated output, return object with group labels as the index. Only relevant for DataFrame input. as_index=False is effectively “SQL-style” grouped output.
I think you might be looking for
pdf.groupby(['obj0', 'type'], as_index=False)['counts'].sum()
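Equivalently, a common alternative (not from the original answers) is to keep the default grouping behaviour and move the index levels back into columns afterwards:

pdf.groupby(['obj0', 'type'])['counts'].sum().reset_index()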
