How can I find median of an RDD of integers using a distributed method, IPython, and Spark? The RDD is approximately 700,000 elements and therefore too large to collect and find the median.
This question is similar to the question below; however, that answer is in Scala, which I do not know.
How can I calculate exact median with Apache Spark?
Using the thinking for the Scala answer, I am trying to write a similar answer in Python.
I know I first want to sort the RDD. I do not know how. I see the sortBy (sorts this RDD by the given keyfunc) and sortByKey (sorts this RDD, which is assumed to consist of (key, value) pairs) methods. I think both expect key-value pairs, and my RDD only has integer elements.
First, I was thinking of doing myrdd.sortBy(lambda x: x)?
Next I will find the length of the rdd (rdd.count()).
Finally, I want to find the element or 2 elements at the center of the rdd. I need help with this method too.
EDIT:
I had an idea. Maybe I can index my RDD so that key = index and value = element, and then try to sort by value? I don't know if this is possible because there is only a sortByKey method.
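For reference, here is a minimal sketch of that plan (sort, zip with an index, count, then look up the middle element); it assumes the RDD is called myrdd and is only meant to illustrate the idea; the answers below do this more carefully:
sorted_rdd = myrdd.sortBy(lambda x: x).zipWithIndex().map(lambda t: (t[1], t[0]))  # (index, value)
n = sorted_rdd.count()
if n % 2 == 1:
    median = sorted_rdd.lookup(n // 2)[0]
else:
    median = (sorted_rdd.lookup(n // 2 - 1)[0] + sorted_rdd.lookup(n // 2)[0]) / 2.0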
Ongoing work
SPARK-30569 - Add DSL functions invoking percentile_approx
Spark 2.0+:
You can use approxQuantile method which implements Greenwald-Khanna algorithm:
Python:
df.approxQuantile("x", [0.5], 0.25)
Scala:
df.stat.approxQuantile("x", Array(0.5), 0.25)
where the last parameter is the relative error. The lower the number, the more accurate the results and the more expensive the computation.
Since Spark 2.2 (SPARK-14352) it supports estimation on multiple columns:
df.approxQuantile(["x", "y", "z"], [0.5], 0.25)
and
df.approxQuantile(Array("x", "y", "z"), Array(0.5), 0.25)
The underlying methods can also be used in SQL aggregations (both global and grouped) via the approx_percentile function:
> SELECT approx_percentile(10.0, array(0.5, 0.4, 0.1), 100);
[10.0,10.0,10.0]
> SELECT approx_percentile(10.0, 0.5, 100);
10.0
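For instance, a minimal end-to-end sketch (assuming a SparkSession named spark); the DataFrame API and the SQL aggregate give the same kind of estimate:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(1, 1001).withColumnRenamed("id", "x")  # x = 1..1000

# DataFrame API: median with a 1% relative error
print(df.approxQuantile("x", [0.5], 0.01))  # e.g. [500.0]

# SQL aggregate: the same estimate via approx_percentile
df.createOrReplaceTempView("t")
spark.sql("SELECT approx_percentile(x, 0.5, 100) AS median FROM t").show()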
Spark < 2.0
Python
As I've mentioned in the comments, it is most likely not worth all the fuss. If the data is relatively small, like in your case, then simply collect and compute the median locally:
import numpy as np
np.random.seed(323)
rdd = sc.parallelize(np.random.randint(1000000, size=700000))
%time np.median(rdd.collect())
np.array(rdd.collect()).nbytes
It takes around 0.01 second on my few-years-old computer and around 5.5 MB of memory.
If the data is much larger, sorting will be a limiting factor, so instead of getting an exact value it is probably better to sample, collect, and compute locally. But if you really want to use Spark, something like this should do the trick (if I didn't mess up anything):
import time
import numpy as np
from numpy import floor

def quantile(rdd, p, sample=None, seed=None):
    """Compute a quantile of order p in [0, 1].

    :param rdd: a numeric RDD
    :param p: quantile (between 0 and 1)
    :param sample: fraction of the RDD to use. If not provided, the whole dataset is used
    :param seed: random number generator seed to be used with sample
    """
    assert 0 <= p <= 1
    assert sample is None or 0 < sample <= 1

    seed = seed if seed is not None else time.time()
    rdd = rdd if sample is None else rdd.sample(False, sample, seed)

    rddSortedWithIndex = (rdd
        .sortBy(lambda x: x)
        .zipWithIndex()
        .map(lambda xi: (xi[1], xi[0]))  # (index, value); tuple unpacking in lambdas is Python 2 only
        .cache())

    n = rddSortedWithIndex.count()
    h = (n - 1) * p

    rddX, rddXPlusOne = (
        rddSortedWithIndex.lookup(x)[0]
        for x in int(floor(h)) + np.array([0, 1]))

    return rddX + (h - floor(h)) * (rddXPlusOne - rddX)
And some tests:
np.median(rdd.collect()), quantile(rdd, 0.5)
## (500184.5, 500184.5)
np.percentile(rdd.collect(), 25), quantile(rdd, 0.25)
## (250506.75, 250506.75)
np.percentile(rdd.collect(), 75), quantile(rdd, 0.75)
## (750069.25, 750069.25)
Finally lets define median:
from functools import partial
median = partial(quantile, p=0.5)
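For example (using the rdd defined earlier):
median(rdd)                         # exact median over the whole RDD
median(rdd, sample=0.01, seed=42)   # approximate median from a 1% sample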
So far so good, but it takes 4.66 s in local mode without any network communication. There is probably a way to improve this, but why even bother?
Language independent (Hive UDAF):
If you use HiveContext you can also use Hive UDAFs. With integral values:
rdd.map(lambda x: (float(x), )).toDF(["x"]).registerTempTable("df")
sqlContext.sql("SELECT percentile_approx(x, 0.5) FROM df")
With continuous values:
sqlContext.sql("SELECT percentile(x, 0.5) FROM df")
In percentile_approx you can pass an additional argument which determines the number of records to use.
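For example, that parameter can be set explicitly (a small sketch reusing the temp table registered above):
sqlContext.sql("SELECT percentile_approx(x, 0.5, 10000) FROM df")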
Here is the approach I used, using window functions (with pyspark 2.2.0).
from pyspark.sql import DataFrame

class median():
    """ Create median class with over method to pass partition """
    def __init__(self, df, col, name):
        assert col
        self.column = col
        self.df = df
        self.name = name

    def over(self, window):
        from pyspark.sql.functions import percent_rank, pow, first
        first_window = window.orderBy(self.column)                                  # first, order by the column we want to compute the median for
        df = self.df.withColumn("percent_rank", percent_rank().over(first_window))  # add percent_rank column; percent_rank = 0.5 corresponds to the median
        second_window = window.orderBy(pow(df.percent_rank - 0.5, 2))               # order by (percent_rank - 0.5)^2 ascending
        return df.withColumn(self.name, first(self.column).over(second_window))     # the first row of the window corresponds to the median

def addMedian(self, col, median_name):
    """ Method to be added to the native Spark DataFrame class """
    return median(self, col, median_name)

# Add method to DataFrame class
DataFrame.addMedian = addMedian
Then call the addMedian method to calculate the median of col2:
from pyspark.sql import Window
median_window = Window.partitionBy("col1")
df = df.addMedian("col2", "median").over(median_window)
Finally, you can group by if needed:
df.groupby("col1", "median")
Adding a solution if you want an RDD-only method and don't want to move to DataFrames.
This snippet can get you a percentile for an RDD of doubles.
If you input percentile as 50, you should obtain your required median.
Let me know if there are any corner cases not accounted for.
/**
 * Gets the nth percentile entry for an RDD of doubles
 *
 * @param inputScore : Input scores consisting of an RDD of doubles
 * @param percentile : The percentile cutoff required (between 0 to 100), e.g. 90%ile of [1,4,5,9,19,23,44] = ~23.
 *                     It prefers the higher value when the desired quantile lies between two data points
 * @return : The number best representing the percentile in the RDD of doubles
 */
def getRddPercentile(inputScore: RDD[Double], percentile: Double): Double = {
  val numEntries = inputScore.count().toDouble
  val retrievedEntry = (percentile * numEntries / 100.0).min(numEntries).max(0).toInt

  inputScore
    .sortBy { case (score) => score }
    .zipWithIndex()
    .filter { case (score, index) => index == retrievedEntry }
    .map { case (score, index) => score }
    .collect()(0)
}
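If you want the same RDD-only approach from PySpark, a rough Python equivalent might look like this (a sketch, not the original author's code):
def get_rdd_percentile(input_score, percentile):
    # input_score: RDD of floats; percentile: value between 0 and 100
    num_entries = input_score.count()
    retrieved_entry = min(max(int(percentile * num_entries / 100.0), 0), num_entries - 1)
    return (input_score
            .sortBy(lambda score: score)
            .zipWithIndex()                            # (score, index)
            .filter(lambda t: t[1] == retrieved_entry)
            .map(lambda t: t[0])
            .collect()[0])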
There are two ways that can be used: one is the approxQuantile method and the other the percentile_approx function. However, neither method may give accurate results when there is an even number of records.
import pyspark.sql.functions as F

# df.select(F.percentile_approx("COLUMN_NAME_FOR_WHICH_MEDIAN_TO_BE_COMPUTED", 0.5).alias("MEDIAN"))
# might not give proper results when there is an even number of records

df.select(
    ((F.percentile_approx("COLUMN_NAME_FOR_WHICH_MEDIAN_TO_BE_COMPUTED", 0.5)
      + F.percentile_approx("COLUMN_NAME_FOR_WHICH_MEDIAN_TO_BE_COMPUTED", 0.51)) * 0.5
    ).alias("MEDIAN")
)
I have written a function which takes a data frame as input and returns a dataframe with the median computed over a partition. order_col is the column for which we want to calculate the median; part_col is the level at which we want to calculate it:
from pyspark.sql import Window
import pyspark.sql.functions as F


def calculate_median(dataframe, part_col, order_col):
    win = Window.partitionBy(*part_col).orderBy(order_col)
    # count_row = dataframe.groupby(*part_col).distinct().count()
    dataframe.persist()
    dataframe.count()
    temp = dataframe.withColumn("rank", F.row_number().over(win))
    temp = temp.withColumn(
        "count_row_part",
        F.count(order_col).over(Window.partitionBy(part_col))
    )
    temp = temp.withColumn(
        "even_flag",
        F.when(
            F.col("count_row_part") % 2 == 0,
            F.lit(1)
        ).otherwise(
            F.lit(0)
        )
    ).withColumn(
        "mid_value",
        F.floor(F.col("count_row_part") / 2)
    )
    temp = temp.withColumn(
        "avg_flag",
        F.when(
            (F.col("even_flag") == 1) &
            ((F.col("rank") == F.col("mid_value")) |
             ((F.col("rank") - 1) == F.col("mid_value"))),
            F.lit(1)
        ).otherwise(
            F.when(
                F.col("rank") == F.col("mid_value") + 1,
                F.lit(1)
            )
        )
    )
    temp.show(10)
    return temp.filter(
        F.col("avg_flag") == 1
    ).groupby(
        part_col + ["avg_flag"]
    ).agg(
        F.avg(F.col(order_col)).alias("median")
    ).drop("avg_flag")
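A quick way to try it out (a sketch, assuming a SparkSession named spark):
data = [("a", 1.0), ("a", 2.0), ("a", 3.0), ("b", 10.0), ("b", 20.0)]
sdf = spark.createDataFrame(data, ["grp", "val"])
calculate_median(sdf, ["grp"], "val").show()
# expected medians: 2.0 for group "a" (odd count) and 15.0 for group "b" (even count)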
For exact median computation, you can use the following function with the PySpark DataFrame API:
from typing import Union

import pyspark.sql.functions as F
from pyspark.sql import Column


def median_exact(col: Union[Column, str]) -> Column:
    """
    For grouped aggregations, Spark provides a way via the pyspark.sql.functions.percentile_approx("col", .5) function,
    since for large datasets computing the median is computationally expensive.
    This function manually computes the median and should only be used for small to mid-sized datasets / groupings.

    :param col: Column to compute the median for.
    :return: A pyspark `Column` containing the median calculation expression
    """
    list_expr = F.filter(F.collect_list(col), lambda x: x.isNotNull())
    sorted_list_expr = F.sort_array(list_expr)
    size_expr = F.size(sorted_list_expr)

    even_num_elements = (size_expr % 2) == 0
    odd_num_elements = ~even_num_elements

    return F.when(size_expr == 0, None).otherwise(
        F.when(odd_num_elements, sorted_list_expr[F.floor(size_expr / 2)]).otherwise(
            (
                sorted_list_expr[(size_expr / 2 - 1).cast("long")]
                + sorted_list_expr[(size_expr / 2).cast("long")]
            )
            / 2
        )
    )
Apply it like this:
output_df = input_spark_df.groupby("group").agg(
median_exact("elems").alias("elems_median")
)
We can calculate the median and quantiles in Spark using the following code:
df.stat.approxQuantile(col, [quantiles], error)
For example, finding the median in the following dataframe [1, 2, 3, 4, 5]:
df.stat.approxQuantile(col, [0.5], 0)
The lower the error, the more accurate the results.
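For instance, a small self-contained sketch (assuming a SparkSession named spark):
df = spark.createDataFrame([(1,), (2,), (3,), (4,), (5,)], ["num"])
print(df.stat.approxQuantile("num", [0.5], 0))  # [3.0] -- exact because the error is 0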
From version 3.4+ (and already in 3.3.1) the median function is directly available:
https://github.com/apache/spark/blob/e170a2eb236a376b036730b5d63371e753f1d947/python/pyspark/sql/functions.py#L633
import pyspark.sql.functions as f
df.groupBy("grp").agg(f.median("val"))
I guess the respective documentation will be added once the version is finally released.
Related
I have been trying to use the lambda and apply() method to create a new column based on other columns. The calculation I want to do calculates a range of "Stage Reaction" values based on a range of "Alpha 1" values and "Alpha 2" values, along with being based on a constant "Stage Loading" value. Code below:
import pandas as pd
import numpy as np

data = {
    'Stage Loading':[0.1],
    'Alpha 1':[[0.1,0.12,0.14]],
    'Alpha 2':[[0.1,0.12,0.14]]
}
pdf = pd.DataFrame(data)

def findstageloading(row):
    stload = row('Stage Loading')
    for alpha1, alpha2 in zip(row('Alpha 1'), row('Alpha 2')):
        streact = 1 - 0.5 * stload * (np.tan(alpha1) - np.tan(alpha2))
    return streact

pdf['Stage Reaction'] = pdf.apply(lambda row: findstageloading, axis = 1)
print(pdf.to_string())
The problem is that this code returns a message
"<function findstageloading at 0x000002272AF5F0D0>"
for the new column.
Can anyone tell me why? I want it to return a list of values
[0.9420088227267556, 1.0061754635552815, 1.0579911772732444]
Your lambda is just returning the function itself; just use pdf['Stage Reaction'] = pdf.apply(findstageloading, axis=1).
Also, you need square brackets to access columns, not round ones; otherwise Python thinks you're calling a function.
Also, I'm not sure where your output came from but if you want to do pairwise arithmetic, you can use vectorisation and omit the zip.
def findstageloading(row):
    stload = row['Stage Loading']  # not row('Stage Loading')
    alpha1, alpha2 = row['Alpha 1'], row['Alpha 2']
    streact = 1 - 0.5 * stload * (np.tan(alpha1) - np.tan(alpha2))
    return streact
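Putting it together, a complete corrected version might look like this (same data as in the question; note that with Alpha 1 equal to Alpha 2 the tangents cancel, so each row simply gets an array of ones):
import numpy as np
import pandas as pd

pdf = pd.DataFrame({
    "Stage Loading": [0.1],
    "Alpha 1": [[0.1, 0.12, 0.14]],
    "Alpha 2": [[0.1, 0.12, 0.14]],
})

def findstageloading(row):
    stload = row["Stage Loading"]
    alpha1, alpha2 = np.array(row["Alpha 1"]), np.array(row["Alpha 2"])
    return 1 - 0.5 * stload * (np.tan(alpha1) - np.tan(alpha2))

pdf["Stage Reaction"] = pdf.apply(findstageloading, axis=1)
print(pdf.to_string())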
I have two functions:
First (z_score) computes rolling z-score values for a given df column
Second (z_score_cum) computes the cumulative z-score without forward-looking bias
# rolling z_score
def z_score(df, window):
    val_column = df.columns[0]
    col_mean = df[val_column].rolling(window=window).mean()
    col_std = df[val_column].rolling(window=window).std()
    df['zscore' + '_' + str(window) + 'D'] = (df[val_column] - col_mean) / col_std
    return df
# cumulative z_score
def z_score_cum(data_frame):
    # calculating length of original data frame to standardize
    len_ = len(data_frame)
    # storing column name & making a copy of data frame
    val_column = data_frame.columns[0]
    data_frame_standardized_final = data_frame.copy()
    # calculating statistics
    data_frame_standardized_final['mean_past'] = [np.mean(data_frame_standardized_final[val_column][0:lv+1]) for lv in range(0, len_)]
    data_frame_standardized_final['std_past'] = [np.std(data_frame_standardized_final[val_column][0:lv+1]) for lv in range(0, len_)]
    data_frame_standardized_final['z_score_cum'] = (data_frame_standardized_final[val_column] - data_frame_standardized_final['mean_past']) / data_frame_standardized_final['std_past']
    return data_frame_standardized_final[['z_score_cum']]
I would like to somehow combine these two into one z-score function so that, whenever I pass a time window as a parameter, it computes the z-score based on that window and additionally adds one column with the cumulative z-score. Currently, I am creating a list of time windows (here in days), passing them in a loop while calling the function, and joining the additional column separately, which I don't think is the optimal way of processing.
d_list = [n * 21 for n in range(1, 13)]
df_zscore = df.copy()
for i in d_list:
    df_zscore = z_score(df_zscore, i)

df_zscore_cum = z_score_cum(df)
df_z_scores = pd.concat([df_zscore, df_zscore_cum], axis=1)
Eventually, I made it this way:
def calculate_z_scores(self, list_of_windows, freq_flag='D'):
    """
    Calculates rolling z-scores and cumulative z-scores based on a given list
    of time windows

    Parameters
    ----------
    list_of_windows : list
        a list of time windows.
    freq_flag : string
        frequency flag. The default is 'D' (daily)

    Returns
    -------
    data frame
        a data frame with calculated rolling & cumulative z-scores.
    """
    z_scores_data_frame = self.original_data_frame.copy()
    # get column with values (1st column)
    val_column = z_scores_data_frame.columns[0]
    len_ = len(z_scores_data_frame)
    # calculating statistics for the cumulative z-score
    z_scores_data_frame['mean_past'] = [np.mean(z_scores_data_frame[val_column][0:lv+1]) for lv in range(0, len_)]
    z_scores_data_frame['std_past'] = [np.std(z_scores_data_frame[val_column][0:lv+1]) for lv in range(0, len_)]
    z_scores_data_frame['zscore_cum'] = (z_scores_data_frame[val_column] - z_scores_data_frame['mean_past']) / z_scores_data_frame['std_past']
    # taking care of rolling z-scores
    for i in list_of_windows:
        col_mean = z_scores_data_frame[val_column].rolling(window=i).mean()
        col_std = z_scores_data_frame[val_column].rolling(window=i).std()
        z_scores_data_frame['zscore' + '_' + str(i) + freq_flag] = (z_scores_data_frame[val_column] - col_mean) / col_std
    cols_to_leave = [c for c in z_scores_data_frame.columns if 'zscore' in c]
    self.z_scores_data_frame = z_scores_data_frame[cols_to_leave]
    return self.z_scores_data_frame
Just a side note: this is a method of my class, but after minor modifications it could be used as a standalone function.
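For context, a hypothetical usage sketch (the class name ZScoreAnalyzer and the attribute wiring are assumptions, not part of the original code):
# assumes a wrapper class that stores the input frame as self.original_data_frame
# and exposes calculate_z_scores as a method
analyzer = ZScoreAnalyzer(price_df)        # price_df: one-column DataFrame indexed by date
windows = [n * 21 for n in range(1, 13)]   # 21, 42, ..., 252 daily observations
z_scores = analyzer.calculate_z_scores(windows, freq_flag="D")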
I am currently trying to implement Category Utility (as defined here) in Python using Pandas. I have a rough draft of the code, but I am fairly sure it's wrong, specifically the part that loops over the clusters and calculates the inner_sum, since category utility requires going through all possible values for each attribute. Could anyone help me improve this draft so that it properly calculates the category utility of the given clusters?
Here is the code:
import pandas as pd
from typing import List

def probability(df: pd.DataFrame, clause: str) -> pd.DataFrame:
    """
    Gets the probabilities of the values within the given data frame
    of the provided clause
    """
    return df.groupby(clause).size().div(len(df))

def conditional_probability(df: pd.DataFrame, clause: str, given: str) -> pd.DataFrame:
    """
    Gets the conditional probability of the values within the provided data
    frame of the provided clause assuming that the given is true.
    """
    base_probabilities: pd.DataFrame = probability(df, clause=given)
    return (df.groupby([clause, given]).size().div(len(df))
              .div(base_probabilities, axis=0, level=given))

def category_utility(clusters: pd.DataFrame) -> float:
    # k is the number of clusters.
    k: int = len(clusters)
    # probabilities of all clusters. To be used to get P(C_l)
    probs_of_clusters: pd.DataFrame = probability(clusters, 'clusters')
    # probabilities of all attributes being any possible value.
    # To be used to get P(a_i = v_ij)
    probs_of_attr_vals: pd.DataFrame = probability(clusters, 'attributes')
    # Probabilities of all attributes being any possible value given the cluster they're in.
    # To be used to get P(a_i = v_ij | C_l)
    cond_prob_of_attr_vals: pd.DataFrame = conditional_probability(clusters, clause='attributes', given='clusters')

    tracked_cu: List[float] = []
    for cluster in clusters['clusters']:
        # The probability of the current cluster.
        # P(C_l)
        prob_of_curr_cluster: float = probs_of_clusters[cluster]
        # The summation of the square difference between an attribute being in a cluster and just overall existing in the data.
        # E (P(a_i = v_ij | C_l) ^ 2 - P(a_i = v_ij) ^ 2)
        inner_sum: float = sum([cond_prob_of_attr_vals[attr] ** 2 - probs_of_attr_vals[attr] for attr in clusters['attributes']])
        tracked_cu += inner_sum * prob_of_curr_cluster
    return sum(tracked_cu) / k
Any help with correctly implementing this would be appreciated.
I have a function which I'm trying to apply in parallel and within that function I call another function that I think would benefit from being executed in parallel. The goal is to take in multiple years of crop yields for each field and combine all of them into one pandas dataframe. I have the function I use for finding the closest point in each dataframe, but it is quite intensive and takes some time. I'm looking to speed it up.
I've tried creating a pool and using map_async on the inner function. I've also tried doing the same with the loop for the outer function. The latter is the only thing I've gotten to work the way I intended it to. I can use this, but I know there has to be a way to make it faster. Check out the code below:
from multiprocessing import Pool

import pandas as pd
from geopy import distance  # great_circle distance with a .feet attribute

return_columns = []
return_columns_cb = lambda x: return_columns.append(x)

def getnearestpoint(gdA, gdB, retcol):
    dist = lambda point1, point2: distance.great_circle(point1, point2).feet

    def find_closest(point):
        distances = gdB.apply(
            lambda row: dist(point, (row["Longitude"], row["Latitude"])), axis=1
        )
        return (gdB.loc[distances.idxmin(), retcol], distances.min())

    append_retcol = gdA.apply(
        lambda row: find_closest((row["Longitude"], row["Latitude"])), axis=1
    )
    return append_retcol
def combine_yield(field):
    # field is a list of the files for the field I'm working with
    # lots of pre-processing
    # dfs in this case is a list of the dataframes for the current field
    # mdf is the dataframe with the most points, which I popped from this list
    p = Pool()
    for i in range(0, len(dfs)):
        p.apply_async(getnearestpoint, args=(mdf, dfs[i], dfs[i].columns[-1]), callback=return_columns_cb)
    for col in return_columns:
        mdf = mdf.append(col)
    '''I unzip my points back to longitude and latitude here in the final
    dataframe so I can write to csv without tuples'''
    mdf[["Longitude", "Latitude"]] = pd.DataFrame(
        mdf["Point"].tolist(), index=mdf.index
    )
    return mdf

def multiprocess_combine_yield():
    '''do stuff to get dictionary below with each field name as key and values
    as all the files for that field'''
    yield_by_field = {'C01': ('files...'), ...}
    # The farm I'm working on has 30 fields and the loop below is too slow
    for k, v in yield_by_field.items():
        combine_yield(v)
What I need help with: I envision using a pool to imap or apply_async over each tuple of files in the dictionary, and then, within the combine_yield function applied to that tuple of files, I want to be able to process the distance function in parallel. That function bogs the program down because it calculates the distance between every point in each of the dataframes for each year of yield. The files average around 1200 data points, and multiplying that by 30 fields means I need something better. Maybe the efficiency improvement lies in finding a better way to pull in the closest point. I still need something that gives me the value from gdB as well as the distance, though, because of what I do later when selecting which rows to use from the 'mdf' dataframe.
Thanks to @ALollz's comment, I figured this out. I went back to my getnearestpoint function and, instead of doing a bunch of Series.apply, I am now using cKDTree from scipy.spatial to find the closest point, and then a vectorized haversine distance to calculate the true distances on each of these matched points. Much, much quicker. Here are the basics of the code:
import re
from itertools import zip_longest

import numpy as np
import pandas as pd
from scipy.spatial import cKDTree

def getnearestpoint(gdA, gdB, retcol):
    gdA_coordinates = np.array(
        list(zip(gdA.loc[:, "Longitude"], gdA.loc[:, "Latitude"]))
    )
    gdB_coordinates = np.array(
        list(zip(gdB.loc[:, "Longitude"], gdB.loc[:, "Latitude"]))
    )
    tree = cKDTree(data=gdB_coordinates)
    distances, indices = tree.query(gdA_coordinates, k=1)

    # These column names are done as so due to formatting of my 'retcols'
    df = pd.DataFrame.from_dict(
        {
            f"Longitude_{retcol[:4]}": gdB.loc[indices, "Longitude"].values,
            f"Latitude_{retcol[:4]}": gdB.loc[indices, "Latitude"].values,
            retcol: gdB.loc[indices, retcol].values,
        }
    )
    gdA = pd.merge(left=gdA, right=df, left_on=gdA.index, right_on=df.index)
    gdA.drop(columns="key_0", inplace=True)
    return gdA
def combine_yield(field):
    # same preprocessing as before
    for i in range(0, len(dfs)):
        mdf = getnearestpoint(mdf, dfs[i], dfs[i].columns[-1])

    main_coords = np.array(list(zip(mdf.Longitude, mdf.Latitude)))
    lat_main = main_coords[:, 1]
    longitude_main = main_coords[:, 0]
    longitude_cols = [
        c for c in mdf.columns for m in [re.search(r"Longitude_B\d{4}", c)] if m
    ]
    latitude_cols = [
        c for c in mdf.columns for m in [re.search(r"Latitude_B\d{4}", c)] if m
    ]
    year_coords = list(zip_longest(longitude_cols, latitude_cols, fillvalue=np.nan))
    for i in year_coords:
        year = re.search(r"\d{4}", i[0]).group(0)
        year_coords = np.array(list(zip(mdf.loc[:, i[0]], mdf.loc[:, i[1]])))
        year_coords = np.deg2rad(year_coords)
        lat_year = year_coords[:, 1]
        longitude_year = year_coords[:, 0]
        diff_lat = lat_main - lat_year
        diff_lng = longitude_main - longitude_year
        d = (
            np.sin(diff_lat / 2) ** 2
            + np.cos(lat_main) * np.cos(lat_year) * np.sin(diff_lng / 2) ** 2
        )
        mdf[f"{year} Distance"] = 2 * (2.0902 * 10 ** 7) * np.arcsin(np.sqrt(d))
    return mdf
Then I'll just do Pool.map(combine_yield, (v for k,v in yield_by_field.items()))
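For completeness, a minimal sketch of that final step (guarded by __main__ so the pool can start its workers safely; yield_by_field is the field-to-files dictionary from before):
from multiprocessing import Pool

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(combine_yield, (v for k, v in yield_by_field.items()))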
This has made a substantial difference. Hope it helps anyone else in a similar predicament.
My raw data comes in a tabular format. It contains observations from different variables. Each observation with the variable name, the timestamp and the value at that time.
Variable [string], Time [datetime], Value [float]
The data is stored as Parquet in HDFS and loaded into a Spark DataFrame (df).
From that dataframe, I now want to calculate default statistics like the mean, standard deviation and others for each variable. Afterwards, once the mean has been retrieved, I want to filter/count those values of that variable that lie closely around the mean.
Based on the answer to my other question, I came up with this code:
from pyspark.sql.window import Window
from pyspark.sql.functions import *
from pyspark.sql.types import *

w1 = Window().partitionBy("Variable")
w2 = Window.partitionBy("Variable").orderBy("Time")

def stddev_pop_w(col, w):
    # Built-in stddev doesn't support windowing
    return sqrt(avg(col * col).over(w) - pow(avg(col).over(w), 2))

def isInRange(value, mean, stddev, radius):
    try:
        if (abs(value - mean) < radius * stddev):
            return 1
        else:
            return 0
    except AttributeError:
        return -1

delta = col("Time").cast("long") - lag("Time", 1).over(w2).cast("long")
#f = udf(lambda (value, mean, stddev, radius): abs(value - mean) < radius * stddev, IntegerType())
#f2 = udf(lambda value, mean, stddev: isInRange(value, mean, stddev, 2), IntegerType())
#f3 = udf(lambda value, mean, stddev: isInRange(value, mean, stddev, 3), IntegerType())

df_ = df_all \
    .withColumn("mean", mean("Value").over(w1)) \
    .withColumn("std_deviation", stddev_pop_w(col("Value"), w1)) \
    .withColumn("delta", delta)
#    .withColumn("stddev_2", f2("Value", "mean", "std_deviation")) \
#    .withColumn("stddev_3", f3("Value", "mean", "std_deviation")) \
#df2.show(5, False)
Question: The last two commented-out lines won't work. They give an AttributeError because the incoming values for stddev and mean are null. I guess this happens because I'm referring to columns that are themselves only calculated on the fly and have no value at that moment. But is there a way to achieve this?
Currently I'm doing a second run like this:
df = df_.select("*", \
    abs(df_.Value - df_.mean).alias("max_deviation_mean"), \
    when(abs(df_.Value - df_.mean) < 2 * df_.std_deviation, 1).otherwise(1).alias("std_dev_mean_2"), \
    when(abs(df_.Value - df_.mean) < 3 * df_.std_deviation, 1).otherwise(1).alias("std_dev_mean_3"))
The solution is to use the aggregateByKey function (an operation on key-value pair RDDs), which aggregates the values per partition and node before shuffling those aggregates around the computing nodes, where they are combined into one resulting value.
The pseudo-code looks like this. It is inspired by this tutorial, but uses two instances of StatCounter since we are summarizing two different statistics at once:
from pyspark.statcounter import StatCounter

# value[0] is the timestamp and value[1] is the float-value
# we are using two instances of StatCounter to sum up two different statistics

def mergeValues(s1, v1, s2, v2):
    s1.merge(v1)
    s2.merge(v2)
    return (s1, s2)

def combineStats(s1, s2):
    s1[0].mergeStats(s2[0])
    s1[1].mergeStats(s2[1])
    return s1

# here df is assumed to be a pair RDD of (Variable, (Time, Value))
(df.aggregateByKey((StatCounter(), StatCounter()),
                   lambda s, values: mergeValues(s[0], values[0], s[1], values[1]),
                   lambda s1, s2: combineStats(s1, s2))
   .mapValues(lambda s: (s[0].min(), s[0].max(), s[1].max(), s[1].min(),
                         s[1].mean(), s[1].variance(), s[1].stdev(), s[1].count()))
   .collect())
This cannot work because when you execute
from pyspark.sql.functions import *
you shadow the built-in abs with pyspark.sql.functions.abs, which expects a column, not a local Python value, as input.
Also, the UDF you created doesn't handle NULL entries.
Don't use import * unless you're aware of what exactly is imported. Instead, use an alias:
from pyspark.sql.functions import abs as abs_
or import the module:
from pyspark.sql import functions as sqlf
sqlf.col("x")
Always check the input inside a UDF or, even better, avoid UDFs unless necessary.
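For example, a null-safe version of the range check from the question could look roughly like this (a sketch, not a drop-in replacement for the original code):
from pyspark.sql import functions as sqlf
from pyspark.sql.types import IntegerType

def is_in_range(value, mean, stddev, radius=2):
    # return None (NULL) instead of failing when any input is missing
    if value is None or mean is None or stddev is None:
        return None
    return 1 if abs(value - mean) < radius * stddev else 0

is_in_range_udf = sqlf.udf(is_in_range, IntegerType())
# df_ = df_.withColumn("stddev_2", is_in_range_udf("Value", "mean", "std_deviation"))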