fill_null() values with other columns' data - python

I have a question regarding filling null values: is it possible to backfill data from other columns, as in pandas?
A working pandas example of how to backfill data:
df.loc[:, ['A', 'B', 'C']] = df[['A', 'B', 'C']].fillna(
    value={'A': df['D'],
           'B': df['D'],
           'C': df['D'],
           })
Here is my Polars attempt to backfill data from column D into column A where the value is null, but it's not working:
df = pl.DataFrame(
    {"date": ["2020-01-01 00:00:00", "2020-01-07 00:00:00", "2020-01-14 00:00:00"],
     "A": [3, 4, 7],
     "B": [3, 4, 5],
     "C": [0, 1, 2],
     "D": [1, 2, 5]})
df = df.with_column(pl.col("date").str.strptime(pl.Datetime, "%Y-%m-%d %H:%M:%S"))
date_range = df.select(
    pl.arange(df["date"][0], df["date"][-1] + 1, step=1000 * 60 * 60 * 24)
    .cast(pl.Datetime)
    .alias("date"))
df = date_range.join(df, on="date", how="left")
df['D'] = df['D'].fill_null("forward")
print(df)
df[:, ['A']] = df[['A']].fill_null({
    'A': df['D']
})
print(df)
Kind regards,
Tom

In the example you show and the accompanying pandas code, fillna doesn't fill any null values, because the other columns are also NaN. So I am going to assume that you want to fill missing values with values of another column that doesn't have missing values, but correct me if I am wrong.
import polars as pl
from polars import col

df = pl.DataFrame({
    "a": [0, 1, 2, 3, None, 5, 6, None, 8, None],
    "b": range(10),
})
out = df.with_columns([
    pl.when(col("a").is_null()).then(col("b")).otherwise(col("a")).alias("a"),
    pl.when(col("a").is_null()).then(col("b").shift(1)).otherwise(col("a")).alias("a_filled_lag"),
    pl.when(col("a").is_null()).then(col("b").mean()).otherwise(col("a")).alias("a_filled_mean"),
])
print(out)
In the example above, we use a when -> then -> otherwise expression to fill missing values with another column's values. Think of it as an if/else expression, but applied to whole columns.
I gave 3 examples: one where we fill with the other column's value directly, one where we fill with the lagged value, and one where we fill with the mean value of the other column.
The snippet above produces:
shape: (10, 4)
┌─────┬─────┬──────────────┬───────────────┐
│ a ┆ b ┆ a_filled_lag ┆ a_filled_mean │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 ┆ f64 │
╞═════╪═════╪══════════════╪═══════════════╡
│ 0 ┆ 0 ┆ 0 ┆ 0.0 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1 ┆ 1 ┆ 1 ┆ 1 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 2 ┆ 2 ┆ 2 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 3 ┆ 3 ┆ 3 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ ... ┆ ... ┆ ... ┆ ... │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 5 ┆ 5 ┆ 5 ┆ 5 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 6 ┆ 6 ┆ 6 ┆ 6 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 7 ┆ 7 ┆ 6 ┆ 4.5 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 8 ┆ 8 ┆ 8 ┆ 8 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 9 ┆ 9 ┆ 8 ┆ 4.5 │
└─────┴─────┴──────────────┴───────────────┘
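Recent Polars versions also accept an expression as the fill value, which collapses the first case to a one-liner (a minimal sketch, assuming a version where fill_null takes expressions):
import polars as pl
df = pl.DataFrame({
    "a": [0, 1, 2, 3, None, 5, 6, None, 8, None],
    "b": range(10),
})
# Fill nulls in "a" with the corresponding value from "b".
out = df.with_columns([pl.col("a").fill_null(pl.col("b"))])
print(out)
If your version ships pl.coalesce, pl.coalesce(["a", "b"]) expresses the same idea.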

The fillna() method is used to fill null values in pandas:
df['D'] = df['D'].fillna(df['A'].mean())
The code above replaces the null values in column D with the mean value of column A.
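For reference, a Polars analogue of that pandas line, sketched with the same when/then/otherwise pattern as the answer above (the column names are just illustrative):
import polars as pl
df = pl.DataFrame({"A": [1.0, 2.0, 3.0], "D": [4.0, None, 6.0]})
# Replace nulls in D with the mean of A.
out = df.with_columns([
    pl.when(pl.col("D").is_null())
    .then(pl.col("A").mean())
    .otherwise(pl.col("D"))
    .alias("D")
])
print(out)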

Related

Replicate pandas ngroup behaviour in polars

I am currently trying to replicate ngroup behaviour in polars to get consecutive group indexes (the dataframe will be grouped over two columns). For the R crowd, this would be achieved in the dplyr world with dplyr::group_indices or the newer dplyr::cur_group_id.
As shown in the repro, I've tried a couple of avenues without much success; both approaches miss group sequentiality and merely return row counts by group.
Quick repro:
import polars as pl
import pandas as pd
df = pd.DataFrame(
    {
        "id": ["a", "a", "a", "a", "b", "b", "b", "b"],
        "cat": [1, 1, 2, 2, 1, 1, 2, 2],
    }
)
df_pl = pl.from_pandas(df)
print(df.groupby(["id", "cat"]).ngroup())
# This is the desired behaviour
# 0 0
# 1 0
# 2 1
# 3 1
# 4 2
# 5 2
# 6 3
# 7 3
print(df_pl.select(pl.count().over(["id", "cat"])))
# This only counts observations per group
# ┌───────┐
# │ count │
# │ --- │
# │ u32 │
# ╞═══════╡
# │ 2 │
# │ 2 │
# │ 2 │
# │ 2 │
# │ 2 │
# │ 2 │
# │ 2 │
# │ 2 │
# └───────┘
print(df_pl.groupby(["id", "cat"]).agg([pl.count().alias("test")]))
# shape: (4, 3)
# ┌─────┬─────┬──────┐
# │ id ┆ cat ┆ test │
# │ --- ┆ --- ┆ --- │
# │ str ┆ i64 ┆ u32 │
# ╞═════╪═════╪══════╡
# │ a ┆ 1 ┆ 2 │
# │ a ┆ 2 ┆ 2 │
# │ b ┆ 1 ┆ 2 │
# │ b ┆ 2 ┆ 2 │
# └─────┴─────┴──────┘
Edit
As @jqurious points out, we can use rank for this:
(df.with_row_count("idx")
   .select(
       pl.first("idx").over(["id", "cat"]).rank("dense") - 1
   )
)
shape: (8, 1)
┌─────┐
│ idx │
│ --- │
│ u32 │
╞═════╡
│ 0 │
│ 0 │
│ 1 │
│ 1 │
│ 2 │
│ 2 │
│ 3 │
│ 3 │
└─────┘
This works because pl.first("idx").over(["id", "cat"]) gives every row the index of the first row in its group, and a dense rank of those first indices numbers the groups consecutively from 1 (hence the - 1).
The following might be clearer:
df = pl.DataFrame(
    {
        "id": ["a", "a", "a", "a", "b", "b", "b", "b"],
        "cat": [1, 1, 2, 2, 1, 1, 2, 2],
    }
)
(
    # Add row count to each line to create an index.
    df.with_row_count("idx")
    # Group on id and cat column.
    .groupby(
        ["id", "cat"],
        maintain_order=True,
    )
    .agg(
        # Create a list of all index positions per group.
        pl.col("idx")
    )
    # Add a new row count for each group.
    .with_row_count("ngroup")
    # Expand idx list column to separate rows.
    .explode("idx")
    # Reorder columns.
    .select(["idx", "ngroup", "id", "cat"])
    # Optionally sort by original order.
    .sort("idx")
)
┌─────┬────────┬─────┬─────┐
│ idx ┆ ngroup ┆ id ┆ cat │
│ --- ┆ --- ┆ --- ┆ --- │
│ u32 ┆ u32 ┆ str ┆ i64 │
╞═════╪════════╪═════╪═════╡
│ 0 ┆ 0 ┆ a ┆ 1 │
│ 1 ┆ 0 ┆ a ┆ 1 │
│ 2 ┆ 1 ┆ a ┆ 2 │
│ 3 ┆ 1 ┆ a ┆ 2 │
│ 4 ┆ 2 ┆ b ┆ 1 │
│ 5 ┆ 2 ┆ b ┆ 1 │
│ 6 ┆ 3 ┆ b ┆ 2 │
│ 7 ┆ 3 ┆ b ┆ 2 │
└─────┴────────┴─────┴─────┘
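If you would rather keep the original columns alongside the group index, the rank trick from the edit above can be added as a column (a sketch built only from the expressions already shown):
import polars as pl
df = pl.DataFrame({
    "id": ["a", "a", "a", "a", "b", "b", "b", "b"],
    "cat": [1, 1, 2, 2, 1, 1, 2, 2],
})
# ngroup-like index: dense-rank the first row index of each (id, cat) group.
out = df.with_row_count("idx").with_columns([
    (pl.first("idx").over(["id", "cat"]).rank("dense") - 1).alias("ngroup")
])
print(out)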

How to compute a group weighted average controlling for null values in Polars?

A group weighted average without nulls is pretty straightforward:
import polars as pl
data = {"id":[1, 1, 2, 2], "a" : [2, 1, 1, 3], "b":[0,1,2,3], "weights":[0.5, 1, 0.2, 3]}
df = pl.DataFrame(data)
weighted_average = (pl.col(["a", "b"]) * pl.col("weights")).sum() / pl.col("weights").sum()
df.groupby("id").agg(weighted_average)
shape: (2, 3)
┌─────┬──────────┬──────────┐
│ id ┆ a ┆ b │
│ --- ┆ --- ┆ --- │
│ i64 ┆ f64 ┆ f64 │
╞═════╪══════════╪══════════╡
│ 1 ┆ 1.333333 ┆ 0.666667 │
├╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 2.875 ┆ 2.9375 │
└─────┴──────────┴──────────┘
However, the result for a column whose group contains None/null values is invalid, because the denominator of the expression is not subset by the null mask of the column containing nulls.
Example:
data = {"id":[1, 1, 2, 2], "a" : [2, None, 1, 3], "b":[0,1,2,3], "weights":[0.5, 1, 0.2, 3]}
df = pl.DataFrame(data)
weighted_average = (pl.col(["a", "b"]) * pl.col("weights")).sum() / pl.col("weights").sum()
df.groupby("id").agg(weighted_average)
shape: (2, 3)
┌─────┬──────────┬──────────┐
│ id ┆ a ┆ b │
│ --- ┆ --- ┆ --- │
│ i64 ┆ f64 ┆ f64 │
╞═════╪══════════╪══════════╡
│ 2 ┆ 2.875 ┆ 2.9375 │
├╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ 1 ┆ 0.666667 ┆ 0.666667 │
└─────┴──────────┴──────────┘
The value of column a for group 1 should be equal to 2 (2 * 0.5 / 0.5), but is instead calculated as 2 * 0.5 / (0.5 + 1) = 0.66.
How do I get the right results?
I.e., how do I subset the denominator by the other column's null mask when required?
You can mask out the weights using the null mask of the other column.
data = {
    "id": [1, 1, 2, 2],
    "a": [2, None, 1, 3],
    "b": [0, 1, 2, 3],
    "weights": [0.5, 1, 0.2, 3]
}
df = pl.DataFrame(data)
# mask out the weights that are null in 'a' or 'b'
masked_weights = pl.col("weights") * pl.col(["a", "b"]).is_not_null()
weighted_average = (pl.col(["a", "b"]) * pl.col("weights")).sum() / masked_weights.sum()
(df.groupby("id")
.agg(weighted_average)
)
Outputs:
shape: (2, 3)
┌─────┬───────┬──────────┐
│ id ┆ a ┆ b │
│ --- ┆ --- ┆ --- │
│ i64 ┆ f64 ┆ f64 │
╞═════╪═══════╪══════════╡
│ 1 ┆ 2.0 ┆ 0.666667 │
├╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 2.875 ┆ 2.9375 │
└─────┴───────┴──────────┘
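The masking trick generalizes. A small helper can build this expression for any set of value columns (a sketch; weighted_mean and its parameters are hypothetical names, not Polars API):
import polars as pl

def weighted_mean(value_cols, weight_col="weights"):
    # Weighted-average expression that ignores the weights of null values:
    # is_not_null() acts as a 0/1 mask, zeroing weights wherever the value is null.
    values = pl.col(value_cols)
    weights = pl.col(weight_col)
    return (values * weights).sum() / (weights * values.is_not_null()).sum()

df = pl.DataFrame({
    "id": [1, 1, 2, 2],
    "a": [2, None, 1, 3],
    "b": [0, 1, 2, 3],
    "weights": [0.5, 1, 0.2, 3],
})
print(df.groupby("id").agg(weighted_mean(["a", "b"])))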

How to cumulatively sum the first elements when using the .over() function for a specific column in Python Polars

I was wondering if someone could please enlighten me.
I am trying to cumulatively sum pty_nber over/groupby a specific column (Declaration).
My original idea was to use something along the lines of:
dataset.filter(pl.col("pty_nber").first().over("Declaration").cumsum() < 30 )
But unfortunately, the cumsum does not take the .over() into account and just cumulatively sums all the rows. So rather than summing 4 + 7 + 8 etc., it sums 4 + 4 + 4 + 4 + 7 ...
The goal is to show at least a few complete declarations and not cut in the middle.
Thanks in advance :)
As an example, please see below: I want to filter on the cumulative sum being over a certain threshold, such as 30, while making sure that no single declaration is left incomplete (i.e. missing some of the pty_nber rows for that specific declaration).
I am not sure if you want to keep just the highlighted entries from your example or keep the whole dataframe and only remove the rows where the cumulative sum is greater than 30, so I implemented it in two ways:
Data
I simplified your example data a little:
import polars as pl
df = pl.DataFrame(
    {
        "declaration": [2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4],
        "item": [1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4],
        "pty_nber": [12, 12, 12, 9, 9, 9, 9, 16, 16, 16, 16],
    }
)
First:
df.groupby("declaration", maintain_order=True).first().filter(
pl.col("pty_nber").cumsum() < 30
)
shape: (2, 3)
┌─────────────┬──────┬──────────┐
│ declaration ┆ item ┆ pty_nber │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═════════════╪══════╪══════════╡
│ 2 ┆ 1 ┆ 12 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 1 ┆ 9 │
└─────────────┴──────┴──────────┘
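Note that this keeps only the first row of each qualifying declaration. If you want every row of those declarations instead, one option is to filter the per-declaration firsts and join back (a sketch, assuming semi joins are available in your Polars version):
# Declarations whose running total of first pty_nber values stays under 30.
keep = (
    df.groupby("declaration", maintain_order=True)
    .first()
    .filter(pl.col("pty_nber").cumsum() < 30)
)
# Keep every row belonging to those declarations.
df.join(keep.select("declaration"), on="declaration", how="semi")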
Second:
For the second, I use the help of the item column, since I assume the first element of each group has item equal to 1:
df.filter(
    pl.when(pl.col("item") == 1).then(pl.col("pty_nber")).otherwise(0).cumsum() < 30
)
shape: (7, 3)
┌─────────────┬──────┬──────────┐
│ declaration ┆ item ┆ pty_nber │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═════════════╪══════╪══════════╡
│ 2 ┆ 1 ┆ 12 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 2 ┆ 12 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 3 ┆ 12 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 1 ┆ 9 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 2 ┆ 9 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 3 ┆ 9 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 4 ┆ 9 │
└─────────────┴──────┴──────────┘
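If you'd rather not rely on the item numbering, the same idea can be written with a first-row marker (a sketch, assuming your Polars version has Expr.is_first; it was renamed is_first_distinct in later releases):
# Add pty_nber to the running total only on the first row of each declaration.
df.filter(
    pl.when(pl.col("declaration").is_first())
    .then(pl.col("pty_nber"))
    .otherwise(0)
    .cumsum()
    < 30
)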

Take cumsum of each row in polars

E.g. if I have
import polars as pl
df = pl.DataFrame({'a': [1,2,3], 'b': [4,5,6]})
how would I find the cumulative sum of each row?
Expected output:
a b
0 1 5
1 2 7
2 3 9
Here's the equivalent in pandas:
>>> import pandas as pd
>>> pd.DataFrame({'a': [1,2,3], 'b': [4,5,6]}).cumsum(axis=1)
a b
0 1 5
1 2 7
2 3 9
but I can't figure out how to do it in polars.
Edit: Polars 0.14.18 and later
As of Polars 0.14.18, we can use the new polars.cumsum function to simplify this. (Note: this is slightly different from the polars.Expr.cumsum expression, in that it acts as a root expression.)
Using the same DataFrame as below:
my_cols = [s.name for s in df if s.is_numeric()]
(
    df
    .select([
        pl.exclude(my_cols),
        pl.cumsum(my_cols).alias('result')
    ])
    .unnest('result')
)
shape: (3, 4)
┌─────┬─────┬─────┬─────┐
│ id ┆ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╪═════╡
│ a ┆ 1 ┆ 5 ┆ 12 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┤
│ b ┆ 2 ┆ 7 ┆ 15 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┤
│ c ┆ 3 ┆ 9 ┆ 18 │
└─────┴─────┴─────┴─────┘
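(In later Polars releases these functions were renamed again; assuming a recent 1.x install, the horizontal helper would be spelled roughly as below. This is a hedged sketch, so check your version's docs.)
# pl.cum_sum_horizontal returns a struct column, analogous to pl.cumsum above.
(
    df
    .select([
        pl.exclude(my_cols),
        pl.cum_sum_horizontal(my_cols).alias('result'),
    ])
    .unnest('result')
)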
Before Polars 0.14.18
Polars is column-oriented and as such does not have the concept of an axis. Still, we can use the list evaluation context to solve this.
First, let's expand your data slightly:
df = pl.DataFrame({
    "id": ['a', 'b', 'c'],
    "a": [1, 2, 3],
    "b": [4, 5, 6],
    "c": [7, 8, 9],
})
df
shape: (3, 4)
┌─────┬─────┬─────┬─────┐
│ id ┆ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╪═════╡
│ a ┆ 1 ┆ 4 ┆ 7 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┤
│ b ┆ 2 ┆ 5 ┆ 8 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┤
│ c ┆ 3 ┆ 6 ┆ 9 │
└─────┴─────┴─────┴─────┘
The Algorithm
Here's a general-purpose performant algorithm that will solve this. We'll walk through it below.
my_cols = [s.name for s in df if s.is_numeric()]
(
    df
    .with_column(
        pl.concat_list(my_cols)
        .arr.eval(pl.element().cumsum())
        .arr.to_struct(name_generator=lambda idx: my_cols[idx])
        .alias('result')
    )
    .drop(my_cols)
    .unnest('result')
)
shape: (3, 4)
┌─────┬─────┬─────┬─────┐
│ id ┆ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╪═════╡
│ a ┆ 1 ┆ 5 ┆ 12 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┤
│ b ┆ 2 ┆ 7 ┆ 15 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┤
│ c ┆ 3 ┆ 9 ┆ 18 │
└─────┴─────┴─────┴─────┘
How it works
First, we'll select the names of the numeric columns. You can name these explicitly if you like, e.g., my_cols=['a','b','c'].
Next, we'll gather up the column values into a list using polars.concat_list.
my_cols = [s.name for s in df if s.is_numeric()]
(
    df
    .with_column(
        pl.concat_list(my_cols)
        .alias('result')
    )
)
shape: (3, 5)
┌─────┬─────┬─────┬─────┬───────────┐
│ id ┆ a ┆ b ┆ c ┆ result │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 ┆ list[i64] │
╞═════╪═════╪═════╪═════╪═══════════╡
│ a ┆ 1 ┆ 4 ┆ 7 ┆ [1, 4, 7] │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ b ┆ 2 ┆ 5 ┆ 8 ┆ [2, 5, 8] │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ c ┆ 3 ┆ 6 ┆ 9 ┆ [3, 6, 9] │
└─────┴─────┴─────┴─────┴───────────┘
From here, we'll use the arr.eval context to run our cumsum on the list.
my_cols = [s.name for s in df if s.is_numeric()]
(
    df
    .with_column(
        pl.concat_list(my_cols)
        .arr.eval(pl.element().cumsum())
        .alias('result')
    )
)
shape: (3, 5)
┌─────┬─────┬─────┬─────┬────────────┐
│ id ┆ a ┆ b ┆ c ┆ result │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 ┆ list[i64] │
╞═════╪═════╪═════╪═════╪════════════╡
│ a ┆ 1 ┆ 4 ┆ 7 ┆ [1, 5, 12] │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ b ┆ 2 ┆ 5 ┆ 8 ┆ [2, 7, 15] │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ c ┆ 3 ┆ 6 ┆ 9 ┆ [3, 9, 18] │
└─────┴─────┴─────┴─────┴────────────┘
Next, we'll break the list into a struct using arr.to_struct, and name the fields the corresponding names from our selected numeric columns.
my_cols = [s.name for s in df if s.is_numeric()]
(
    df
    .with_column(
        pl.concat_list(my_cols)
        .arr.eval(pl.element().cumsum())
        .arr.to_struct(name_generator=lambda idx: my_cols[idx])
        .alias('result')
    )
)
shape: (3, 5)
┌─────┬─────┬─────┬─────┬───────────┐
│ id ┆ a ┆ b ┆ c ┆ result │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 ┆ struct[3] │
╞═════╪═════╪═════╪═════╪═══════════╡
│ a ┆ 1 ┆ 4 ┆ 7 ┆ {1,5,12} │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ b ┆ 2 ┆ 5 ┆ 8 ┆ {2,7,15} │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ c ┆ 3 ┆ 6 ┆ 9 ┆ {3,9,18} │
└─────┴─────┴─────┴─────┴───────────┘
And finally, we'll use unnest to break the struct into columns. (But first we must drop the original columns or else we'll get two columns with the same name.)
my_cols = [s.name for s in df if s.is_numeric()]
(
    df
    .with_column(
        pl.concat_list(my_cols)
        .arr.eval(pl.element().cumsum())
        .arr.to_struct(name_generator=lambda idx: my_cols[idx])
        .alias('result')
    )
    .drop(my_cols)
    .unnest('result')
)
shape: (3, 4)
┌─────┬─────┬─────┬─────┐
│ id ┆ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╪═════╡
│ a ┆ 1 ┆ 5 ┆ 12 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┤
│ b ┆ 2 ┆ 7 ┆ 15 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┤
│ c ┆ 3 ┆ 9 ┆ 18 │
└─────┴─────┴─────┴─────┘
There may be a simpler and faster way, but here is the programmatic solution:
1. Concatenate the values along the columns into a list.
2. Calculate the cumulative sum over the list (the result is still a list).
3. Get the values for each column from the result.
import polars as pl
df = pl.DataFrame({'a': [1,2,3], 'b': [4,5,6]})
df.select([
    pl.concat_list(pl.all())
    .arr.eval(pl.element().cumsum())
    .alias('cs')
]).select([
    pl.col('cs').arr.get(i).alias(name)
    for i, name in enumerate(df.columns)
])
shape: (3, 2)
┌─────┬─────┐
│ a ┆ b │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═════╪═════╡
│ 1 ┆ 5 │
├╌╌╌╌╌┼╌╌╌╌╌┤
│ 2 ┆ 7 │
├╌╌╌╌╌┼╌╌╌╌╌┤
│ 3 ┆ 9 │
└─────┴─────┘

Polars python equivalent to glimpse and summary in R

I couldn't find a function that summarizes the content of a Polars DataFrame the way glimpse and summary do in R.
Polars has a describe method:
df = pl.DataFrame({
    'a': [1.0, 2.8, 3.0],
    'b': [4, 5, 6],
    "c": [True, False, True]
})
df.describe()
shape: (5, 4)
╭──────────┬───────┬─────┬──────╮
│ describe ┆ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ f64 ┆ f64 ┆ f64 │
╞══════════╪═══════╪═════╪══════╡
│ "mean" ┆ 2.267 ┆ 5 ┆ null │
├╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌┤
│ "std" ┆ 1.102 ┆ 1 ┆ null │
├╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌┤
│ "min" ┆ 1 ┆ 4 ┆ 0.0 │
├╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌┤
│ "max" ┆ 3 ┆ 6 ┆ 1 │
├╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌┤
│ "median" ┆ 2.8 ┆ 5 ┆ null │
This reports, like R's summary, descriptive statistics per column. I have not used glimpse before, but a quick Google search suggests it does something similar to Polars' head, but with the output stacked vertically, so it is easier to digest when there are many columns.
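For what it's worth, newer Polars versions also ship a glimpse method on DataFrame directly (assuming a version recent enough to include it):
# Prints one line per column: name, dtype, and the first few values.
df.glimpse()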
