Sum of PySpark columns ignoring NaN values - python

I have a PySpark dataframe that looks like this:
+---+----+----+
| id|col1|col2|
+---+----+----+
| 1| 1| 3|
| 2| NaN| 4|
| 3| 3| 5|
+---+----+----+
I would like to sum col1 and col2 so that the result looks like this:
+---+----+----+---+
| id|col1|col2|sum|
+---+----+----+---+
| 1| 1| 3| 4|
| 2| NaN| 4| 4|
| 3| 3| 5| 8|
+---+----+----+---+
Here's what I have tried:
import pandas as pd
import pyspark.sql.functions as F

test = pd.DataFrame({
    'id': [1, 2, 3],
    'col1': [1, None, 3],
    'col2': [3, 4, 5]
})
test = spark.createDataFrame(test)
test.withColumn('sum', F.col('col1') + F.col('col2')).show()
This code returns:
+---+----+----+---+
| id|col1|col2|sum|
+---+----+----+---+
| 1| 1| 3| 4|
| 2| NaN| 4|NaN| # <-- I want a 4 here, not this NaN
| 3| 3| 5| 8|
+---+----+----+---+
Can anyone help me with this?

Use F.nanvl to replace NaN with a given value (0 here):
import pyspark.sql.functions as F
result = test.withColumn('sum', F.nanvl(F.col('col1'), F.lit(0)) + F.col('col2'))
For your comment (keep NaN in the sum only when both columns are NaN):
result = test.withColumn('sum',
    F.when(
        F.isnan(F.col('col1')) & F.isnan(F.col('col2')),
        F.lit(float('nan'))
    ).otherwise(
        F.nanvl(F.col('col1'), F.lit(0)) + F.nanvl(F.col('col2'), F.lit(0))
    )
)
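Note that nanvl only handles NaN values; if the missing entries show up as SQL nulls instead (for example when the dataframe is built without going through pandas), coalesce is the analogous function. A minimal sketch under that assumption:
import pyspark.sql.functions as F

# Same idea for null values rather than NaN: coalesce is the null analogue of nanvl
result = test.withColumn(
    'sum',
    F.coalesce(F.col('col1'), F.lit(0)) + F.coalesce(F.col('col2'), F.lit(0))
)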

Related

Spread List of Lists to Spark DF with PySpark?

I'm currently struggling with the following issue:
Let's take the following list of lists:
[[1, 2, 3], [4, 5], [6, 7]]
How can I create the following Spark DF out of it, with one row per element of each sublist:
| min_value | value |
---------------------
|         1 |     1 |
|         1 |     2 |
|         1 |     3 |
|         4 |     4 |
|         4 |     5 |
|         6 |     6 |
|         6 |     7 |
The only way I've managed to do this is by processing the list into another list with for-loops, which then already represents all rows of my DF; that is probably not the best way to solve this.
THX & BR
IntoNumbers
You can create a dataframe and use explode and array_min to get the desired output:
import pyspark.sql.functions as F
l = [[1, 2, 3], [4, 5], [6, 7]]
df = spark.createDataFrame(
    [[l]],
    ['col']
).select(
    F.explode('col').alias('value')
).withColumn(
    'min_value',
    F.array_min('value')
).withColumn(
    'value',
    F.explode('value')
)
df.show()
+-----+---------+
|value|min_value|
+-----+---------+
| 1| 1|
| 2| 1|
| 3| 1|
| 4| 4|
| 5| 4|
| 6| 6|
| 7| 6|
+-----+---------+
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
from pyspark.sql.functions import col, expr
import pyspark.sql.functions as F

data = [[1, 2, 3], [4, 5], [6, 7]]

# Extract the first element of each list in data
j = [item[0] for item in data]

# Zip the first elements to data, create the df and explode the 'value' column
df = spark.createDataFrame(zip(j, data), ['min_value', 'value']) \
    .withColumn('value', F.explode(col('value')))
df.show()
+---------+-----+
|min_value|value|
+---------+-----+
| 1| 1|
| 1| 2|
| 1| 3|
| 4| 4|
| 4| 5|
| 6| 6|
| 6| 7|
+---------+-----+
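One caveat: taking item[0] as the minimum only works because each sublist here happens to start with its smallest value. If that is not guaranteed, compute the minimum explicitly:
# Use the actual minimum of each sublist instead of its first element
j = [min(item) for item in data]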
Here is my Spark-style solution to this:
Basic imports and SparkSession creation for example purposes:
import pyspark.sql.functions as F
from pyspark.sql import SparkSession
spark = SparkSession.builder \
    .master("local") \
    .appName("Stackoverflow problem 1") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()
Creating an arbitrary list of lists:
list_of_lists = [[1,3,5], [1053,23], [1,3], [5,2,15,23,5,3]]
Solution below:
# Create an rdd from the list of lists
rdd_ll = spark.sparkContext.parallelize(list_of_lists)
# Compute the min from the previous rdd and store in another rdd
rdd_min = rdd_ll.map(lambda x: min(x))
# create a dataframe by zipping the two rdds and using the explode function
df = spark.createDataFrame(rdd_min.zip(rdd_ll)) \
    .withColumn('value', F.explode('_2')) \
    .drop('_2') \
    .withColumnRenamed('_1', 'min_value')
Output
df.show()
+---------+-----+
|min_value|value|
+---------+-----+
| 1| 1|
| 1| 3|
| 1| 5|
| 23| 1053|
| 23| 23|
| 1| 1|
| 1| 3|
| 2| 5|
| 2| 2|
| 2| 15|
| 2| 23|
| 2| 5|
| 2| 3|
+---------+-----+
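As a side note, the zip works here because rdd_min is a plain map over rdd_ll, so the two RDDs have identical partitioning. A sketch of an equivalent variant that skips the second RDD altogether by carrying the minimum along in a single map:
import pyspark.sql.functions as F

# Carry the min along with each list in one map, then explode
df = spark.createDataFrame(
    rdd_ll.map(lambda x: (min(x), x)), ['min_value', 'value']
).withColumn('value', F.explode('value'))
df.show()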

Add distinct count of a column to each row in PySpark

I need to add the distinct count of a column to each row in a PySpark dataframe.
Example:
If the original dataframe is this:
+----+----+
|col1|col2|
+----+----+
|abc | 1|
|xyz | 1|
|dgc | 2|
|ydh | 3|
|ujd | 1|
|ujx | 3|
+----+----+
Then I want something like this:
+----+----+----+
|col1|col2|col3|
+----+----+----+
|abc | 1| 3|
|xyz | 1| 3|
|dgc | 2| 3|
|ydh | 3| 3|
|ujd | 1| 3|
|ujx | 3| 3|
+----+----+----+
I tried df.withColumn('total_count', f.countDistinct('col2')) but it gives an error.
You can count the distinct elements in the column and create a new column with that value:
distincts = df.dropDuplicates(["col2"]).count()
df = df.withColumn("col3", f.lit(distincts))
Cross-join to the distinct count, as below:
df2 = df.crossJoin(df.select(F.countDistinct('col2').alias('col3')))
df2.show()
+----+----+----+
|col1|col2|col3|
+----+----+----+
| abc| 1| 3|
| xyz| 1| 3|
| dgc| 2| 3|
| ydh| 3| 3|
| ujd| 1| 3|
| ujx| 3| 3|
+----+----+----+
You can use Window, collect_set and size:
from pyspark.sql import functions as F, Window
df = spark.createDataFrame([("abc", 1), ("xyz", 1), ("dgc", 2), ("ydh", 3), ("ujd", 1), ("ujx", 3)], ['col1', 'col2'])
window = Window.orderBy("col2").rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
df.withColumn("col3", F.size(F.collect_set(F.col("col2")).over(window))).show()
+----+----+----+
|col1|col2|col3|
+----+----+----+
| abc| 1| 3|
| xyz| 1| 3|
| dgc| 2| 3|
| ydh| 3| 3|
| ujd| 1| 3|
| ujx| 3| 3|
+----+----+----+
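If an approximate count is good enough, approx_count_distinct can be used as a window function (exact countDistinct is not supported over a window), which avoids building the full set per row. A sketch along the same lines:
from pyspark.sql import functions as F, Window

# Approximate distinct count over an unbounded window
w = Window.orderBy("col2").rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
df.withColumn("col3", F.approx_count_distinct("col2").over(w)).show()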

Joining with a lookup table in PySpark

I have 2 tables: Table 'A' and Table 'Lookup'
Table A:
ID Day
A 1
B 1
C 2
D 4
The lookup table has percentage values for each ID-Day combination.
Table Lookup:
ID 1 2 3 4
A 20 10 50 30
B 0 50 0 50
C 50 10 10 30
D 10 25 25 40
My expected output is to have an additional field in Table 'A' named 'Percent' with values filled in from the lookup table:
ID Day Percent
A 1 20
B 1 0
C 2 10
D 4 40
Since both the tables are large, I do not want to pivot any of the tables.
I have written the code in Scala; you can adapt the same approach for Python.
scala> TableA.show()
+---+---+
| ID|Day|
+---+---+
| A| 1|
| B| 1|
| C| 2|
| D| 4|
+---+---+
scala> lookup.show()
+---+---+---+---+---+
| ID| 1| 2| 3| 4|
+---+---+---+---+---+
| A| 20| 10| 50| 30|
| B| 0| 50| 0| 50|
| C| 50| 10| 10| 30|
| D| 10| 25| 25| 40|
+---+---+---+---+---+
import org.apache.spark.sql.Row

//Function to retrieve data from the lookup row by column name
val lookupUDF = (r: Row, s: String) => r.getAs[Any](s).toString

//Join over key column "ID"
val joindf = TableA.join(lookup, "ID")

//Final output DataFrame creation
val final_df = joindf.map(x =>
  (x.getAs[Any]("ID").toString, x.getAs[Any]("Day").toString, lookupUDF(x, x.getAs[Any]("Day").toString))
).toDF("ID", "Day", "Percentage")
final_df.show()
+---+---+----------+
| ID|Day|Percentage|
+---+---+----------+
| A| 1| 20|
| B| 1| 0|
| C| 2| 10|
| D| 4| 40|
+---+---+----------+
(Posting my answer a day after I posted the question)
I was able to solve this by converting the tables to a pandas dataframe.
from pyspark.sql.types import *

schema = StructType([
    StructField("id", StringType()),
    StructField("day", StringType()),
    StructField("1", IntegerType()),
    StructField("2", IntegerType()),
    StructField("3", IntegerType()),
    StructField("4", IntegerType())
])

# Day field is string type, so it can be used to index the lookup columns by name
data = [['A', '1', 20, 10, 50, 30], ['B', '1', 0, 50, 0, 50], ['C', '2', 50, 10, 10, 30], ['D', '4', 10, 25, 25, 40]]
df = spark.createDataFrame(data, schema=schema)
df.show()
# After joining the 2 tables on "id", the tables would look like this:
+---+---+---+---+---+---+
| id|day| 1| 2| 3| 4|
+---+---+---+---+---+---+
| A| 1| 20| 10| 50| 30|
| B| 1| 0| 50| 0| 50|
| C| 2| 50| 10| 10| 30|
| D| 4| 10| 25| 25| 40|
+---+---+---+---+---+---+
# Converting to a pandas dataframe
pandas_df = df.toPandas()
id day 1 2 3 4
A 1 20 10 50 30
B 1 0 50 0 50
C 2 50 10 10 30
D 4 10 25 25 40
# UDF:
def udf(x):
    return x[x['day']]
pandas_df['percent'] = pandas_df.apply(udf, axis=1)
# Converting back to a Spark DF:
spark_df = sqlContext.createDataFrame(pandas_df)
+---+---+---+---+---+---+-------+
| id|day|  1|  2|  3|  4|percent|
+---+---+---+---+---+---+-------+
|  A|  1| 20| 10| 50| 30|     20|
|  B|  1|  0| 50|  0| 50|      0|
|  C|  2| 50| 10| 10| 30|     10|
|  D|  4| 10| 25| 25| 40|     40|
+---+---+---+---+---+---+-------+
spark_df.select("id", "day", "percent").show()
+---+---+-------+
| id|day|percent|
+---+---+-------+
| A| 1| 20|
| B| 1| 0|
| C| 2| 10|
| D| 4| 40|
+---+---+-------+
I would appreciate it if someone posted an answer in PySpark without the pandas conversion.
from pyspark.sql.functions import col

df = spark.createDataFrame([{'ID': 'A', 'Day': 1},
                            {'ID': 'B', 'Day': 1},
                            {'ID': 'C', 'Day': 2},
                            {'ID': 'D', 'Day': 4}])
df1 = spark.createDataFrame([{'ID': 'A', '1': 20, '2': 10, '3': 50, '4': 30},
                             {'ID': 'B', '1': 0, '2': 50, '3': 0, '4': 50},
                             {'ID': 'C', '1': 50, '2': 10, '3': 10, '4': 30},
                             {'ID': 'D', '1': 10, '2': 25, '3': 25, '4': 40}])

df1 = df1.withColumn('1', col('1').cast('int')).withColumn('2', col('2').cast('int')) \
         .withColumn('3', col('3').cast('int')).withColumn('4', col('4').cast('int'))
df = df.withColumn('Day', col('Day').cast('int'))

df_final = df.join(df1, 'ID')
df_final_rdd = df_final.rdd
print(df_final_rdd.collect())

def create_list(r, s):
    s = str(s)
    k = (r['ID'], r['Day'], r[s])
    return k

l = []
for element in df_final_rdd.collect():
    l.append(create_list(element, element['Day']))

rdd = spark.sparkContext.parallelize(l)
df = spark.createDataFrame(rdd).toDF('ID', 'Day', 'Percent')
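A pure DataFrame variant of the same idea, without collecting to the driver, is to pick the matching lookup column with a chain of when expressions on the joined df_final. This is just a sketch and assumes the Day values are limited to the lookup's column names ('1' through '4'):
from pyspark.sql import functions as F

# Pick the value of the column whose name matches the Day for that row
day_cols = ['1', '2', '3', '4']
percent = F.coalesce(*[F.when(F.col('Day') == int(c), F.col(c)) for c in day_cols])
df_final.select('ID', 'Day', percent.alias('Percent')).show()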

Pyspark: how to duplicate a row n time in dataframe?

I've got a dataframe like this, and I want to duplicate each row n times if the column n is bigger than one:
A B n
1 2 1
2 9 1
3 8 2
4 1 1
5 3 3
And transform it like this:
A B n
1 2 1
2 9 1
3 8 2
3 8 2
4 1 1
5 3 3
5 3 3
5 3 3
I think I should use explode, but I don't understand how it works...
Thanks
With Spark 2.4.0+, this is easier with builtin functions: array_repeat + explode:
from pyspark.sql.functions import expr
df = spark.createDataFrame([(1,2,1), (2,9,1), (3,8,2), (4,1,1), (5,3,3)], ["A", "B", "n"])
new_df = df.withColumn('n', expr('explode(array_repeat(n,int(n)))'))
>>> new_df.show()
+---+---+---+
| A| B| n|
+---+---+---+
| 1| 2| 1|
| 2| 9| 1|
| 3| 8| 2|
| 3| 8| 2|
| 4| 1| 1|
| 5| 3| 3|
| 5| 3| 3|
| 5| 3| 3|
+---+---+---+
The explode function returns a new row for each element in the given array or map.
One way to exploit this function is to use a udf to create a list of size n for each row. Then explode the resulting array.
from pyspark.sql.functions import udf, explode
from pyspark.sql.types import ArrayType, IntegerType
df = spark.createDataFrame([(1,2,1), (2,9,1), (3,8,2), (4,1,1), (5,3,3)] ,["A", "B", "n"])
+---+---+---+
| A| B| n|
+---+---+---+
| 1| 2| 1|
| 2| 9| 1|
| 3| 8| 2|
| 4| 1| 1|
| 5| 3| 3|
+---+---+---+
# use udf function to transform the n value to n times
n_to_array = udf(lambda n : [n] * n, ArrayType(IntegerType()))
df2 = df.withColumn('n', n_to_array(df.n))
+---+---+---------+
| A| B| n|
+---+---+---------+
| 1| 2| [1]|
| 2| 9| [1]|
| 3| 8| [2, 2]|
| 4| 1| [1]|
| 5| 3|[3, 3, 3]|
+---+---+---------+
# now use explode
df2.withColumn('n', explode(df2.n)).show()
+---+---+---+
|  A|  B|  n|
+---+---+---+
| 1| 2| 1|
| 2| 9| 1|
| 3| 8| 2|
| 3| 8| 2|
| 4| 1| 1|
| 5| 3| 3|
| 5| 3| 3|
| 5| 3| 3|
+---+---+---+
I think the udf answer by @Ahmed is the best way to go, but here is an alternative method that may be as good or better for small n:
First, collect the maximum value of n over the whole DataFrame:
import pyspark.sql.functions as f

max_n = df.select(f.max('n').alias('max_n')).first()['max_n']
print(max_n)
#3
Now create an array of length max_n for each row, containing the numbers in range(max_n). This intermediate step results in a DataFrame like:
df.withColumn('n_array', f.array([f.lit(i) for i in range(max_n)])).show()
#+---+---+---+---------+
#| A| B| n| n_array|
#+---+---+---+---------+
#| 1| 2| 1|[0, 1, 2]|
#| 2| 9| 1|[0, 1, 2]|
#| 3| 8| 2|[0, 1, 2]|
#| 4| 1| 1|[0, 1, 2]|
#| 5| 3| 3|[0, 1, 2]|
#+---+---+---+---------+
Now we explode the n_array column, and filter to keep only the values in the array that are less than n. This will ensure that we have n copies of each row. Finally we drop the exploded column to get the end result:
df.withColumn('n_array', f.array([f.lit(i) for i in range(max_n)]))\
    .select('A', 'B', 'n', f.explode('n_array').alias('col'))\
    .where(f.col('col') < f.col('n'))\
    .drop('col')\
    .show()
#+---+---+---+
#| A| B| n|
#+---+---+---+
#| 1| 2| 1|
#| 2| 9| 1|
#| 3| 8| 2|
#| 3| 8| 2|
#| 4| 1| 1|
#| 5| 3| 3|
#| 5| 3| 3|
#| 5| 3| 3|
#+---+---+---+
However, we are creating a max_n-length array for each row, as opposed to just an n-length array in the udf solution. It's not immediately clear to me how this will scale vs. udf for large max_n, but I suspect the udf will win out.
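Another Spark 2.4+ variant in the same spirit, sketched here: explode a 1..n sequence per row and drop the helper column, which avoids both the Python udf and the max_n-sized arrays:
from pyspark.sql import functions as F

# One helper value per desired copy of the row, then discard it
df.withColumn('i', F.explode(F.expr('sequence(1, int(n))'))) \
  .drop('i') \
  .show()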

Window timeseries with step in Spark/Scala

I have this input :
timestamp,user
1,A
2,B
5,C
9,E
12,F
The result wanted is :
timestampRange,userList
1 to 2,[A,B]
3 to 4,[] Or null
5 to 6,[C]
7 to 8,[] Or null
9 to 10,[E]
11 to 12,[F]
I tried using Window, but the problem is that it doesn't include the empty timestamp ranges.
Any hints would be helpful.
I don't know if a windowing function will cover the gaps between ranges, but you can take the following approach:
Define a dataframe, df_ranges:
val ranges = List((1,2), (3,4), (5,6), (7,8), (9,10))
val df_ranges = sc.parallelize(ranges).toDF("start", "end")
+-----+---+
|start|end|
+-----+---+
| 1| 2|
| 3| 4|
| 5| 6|
| 7| 8|
| 9| 10|
+-----+---+
Data with the timestamp column, df_data :
val data = List((1,"A"), (2,"B"), (5,"C"), (9,"E"))
val df_data = sc.parallelize(data).toDF("timestamp", "user")
+---------+----+
|timestamp|user|
+---------+----+
| 1| A|
| 2| B|
| 5| C|
| 9| E|
+---------+----+
Join the two dataframe on the start, end, timestamp columns:
val joined = df_ranges.join(df_data, df_ranges.col("start").equalTo(df_data.col("timestamp")).or(df_ranges.col("end").equalTo(df_data.col("timestamp"))), "left")
+-----+---+---------+----+
|start|end|timestamp|user|
+-----+---+---------+----+
| 1| 2| 1| A|
| 1| 2| 2| B|
| 5| 6| 5| C|
| 9| 10| 9| E|
| 3| 4| null|null|
| 7| 8| null|null|
+-----+---+---------+----+
Now do a simple aggregation with the collect_list function:
import org.apache.spark.sql.functions.collect_list

joined.groupBy("start", "end").agg(collect_list("user")).orderBy("start").show()
+-----+---+------------------+
|start|end|collect_list(user)|
+-----+---+------------------+
| 1| 2| [A, B]|
| 3| 4| []|
| 5| 6| [C]|
| 7| 8| []|
| 9| 10| [E]|
+-----+---+------------------+
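The ranges do not have to be hard-coded either; they can be derived from the data itself. A sketch in PySpark (matching the rest of this page), assuming a fixed window width of 2 as in the question:
from pyspark.sql import functions as F

df_data = spark.createDataFrame([(1, "A"), (2, "B"), (5, "C"), (9, "E"), (12, "F")], ["timestamp", "user"])

# Build the 2-wide ranges covering min..max, then left-join the events into them
bounds = df_data.select(F.min("timestamp").alias("lo"), F.max("timestamp").alias("hi")).first()
df_ranges = spark.range(bounds["lo"], bounds["hi"] + 1, 2) \
    .select(F.col("id").alias("start"), (F.col("id") + 1).alias("end"))

df_ranges.join(df_data, (F.col("timestamp") >= F.col("start")) & (F.col("timestamp") <= F.col("end")), "left") \
    .groupBy("start", "end").agg(F.collect_list("user").alias("userList")) \
    .orderBy("start").show()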
