Method for all rows of a PySpark DataFrame - python

I'm having trouble designing a working udf for my task in PySpark (python=2.7, pyspark=1.6).
I have a DataFrame called data which looks like this:
+-----------------+
| sequence|
+-----------------+
| idea can|
| fuel turn|
| found today|
| a cell|
|administration in|
+-----------------+
And for each row in data I'd like to lookup info in an other DataFrame called ggrams (based on the attribute sequence), compute aggregates and return that as a new column in data.
My feeling is that I should do it that way :
from pyspark.sql.types import IntegerType
from pyspark.sql.functions import udf

def compute_aggregates(x):
    res = ggrams.filter(ggrams.ngram.startswith(x)) \
                .groupby("ngram").sum("match_count")
    return res.collect()[0]['sum(match_count)']

aggregate = udf(compute_aggregates, IntegerType())
result = data.withColumn('aggregate', aggregate('sequence'))
But that returns a PicklingError.
PicklingError: Could not serialize object: Py4JError: An error occurred while calling o1759.__getstate__. Trace:
py4j.Py4JException: Method __getstate__([]) does not exist

The error is thrown because you cannot reference another DataFrame inside a udf. The simplest way to fix this is to collect the DataFrame you want to look things up in.
The other option is a cross join, but I can say from experience that collecting the other DataFrame is faster (I have no math/statistics to back this up, though).
So:
1. Collect the DataFrame you want to use in the udf.
2. Refer to this collected DataFrame (which is now a list of rows) inside your udf; you can/must now use plain Python logic, since you are working with a list of objects (see the sketch below).
Note: try to limit the DataFrame that you are collecting to a minimum; select only the columns you need.
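For example, a minimal sketch of this approach, assuming ggrams has the ngram and match_count columns from the question:
from pyspark.sql.types import IntegerType
from pyspark.sql.functions import udf

# Collect only the columns we need from ggrams into a plain Python list on the driver
ggrams_rows = ggrams.select("ngram", "match_count").collect()

def compute_aggregates(x):
    # plain Python against the collected list, no DataFrame access inside the udf
    return sum(r["match_count"] for r in ggrams_rows if r["ngram"].startswith(x))

aggregate = udf(compute_aggregates, IntegerType())
result = data.withColumn("aggregate", aggregate(data["sequence"]))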
Update:
If you are dealing with a very big set that would make collecting impossible, a cross join will most likely not work either (it didn't for me at least). The problem is then that the huge cross product of the two dataframes takes so much time to create that the connection to the worker node times out, which causes a broadcast error. So see if there is any way to limit the columns that you are using, or whether there is a possibility to filter out rows that you know for sure will not be used.
If all this fails, see if you can create some batch approach*, so run only the first X rows with collected data, and once that is done, load the next X rows. This will most likely be terribly slow, but at least it won't time out (I think; I have not tried this personally, since collecting was possible in my case).
*Batch both the dataframe you are running the udf on and the other dataframe, since you still cannot collect inside the udf, because you cannot access the dataframe from there. A rough sketch of this idea follows.
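Purely as an illustration of that batch idea (untested, and only a sketch: batch_size is an arbitrary illustrative value, and data/ggrams are the DataFrames from the question):
from functools import reduce

batch_size = 10000

# attach a stable index so `data` can be sliced into batches of rows
indexed = data.rdd.zipWithIndex().map(lambda x: (x[1], x[0].sequence))
total = indexed.count()

results = []
for start in range(0, total, batch_size):
    end = start + batch_size
    batch = indexed.filter(lambda x: start <= x[0] < end).collect()
    sequences = [seq for _, seq in batch]

    # collect only the ggrams rows that could possibly match this batch
    cond = reduce(lambda a, b: a | b,
                  [ggrams.ngram.startswith(s) for s in sequences])
    relevant = ggrams.filter(cond).collect()

    for _, seq in batch:
        agg = sum(r.match_count for r in relevant if r.ngram.startswith(seq))
        results.append((seq, agg))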

Related

Does Spark eject DataFrame data from memory every time an action is executed?

I'm trying to understand how to leverage cache() to improve my performance. Since cache retains a DataFrame in memory "for reuse", it seems like I need to understand the conditions that eject the DataFrame from memory to better understand how to leverage it.
After defining transformations I call an action; is the dataframe, once the action completes, gone from memory? That would imply that if I execute an action on a dataframe but then continue to do other stuff with the data, all the previous parts of the DAG, from the read to the action, will need to be redone.
Is this accurate?
The fact is that after an action is executed, another DAG is created. You can check this via the Spark UI.
In code you can try to identify where one DAG ends and a new one starts by looking for actions.
When looking at code you can use this simple rule:
When a function is transforming one df into another df, it's not an action but only a lazily evaluated transformation, even if this is a join or something else which requires shuffling.
When a function is returning a value other than a df, then you are using an action (for example count, which returns a Long).
Let's take a look at this code (it's Scala but the API is similar). I created this example just to show you the mechanism; this could be done better of course :)
import org.apache.spark.sql.functions.{col, lit, format_number}
import org.apache.spark.sql.{DataFrame, Row}

val input = spark.read.format("csv")
  .option("header", "true")
  .load("dbfs:/FileStore/shared_uploads/***#gmail.com/city_temperature.csv")

val dataWithTempInCelcius: DataFrame = input
  .withColumn("avg_temp_celcius", format_number((col("AvgTemperature").cast("integer") - lit(32)) / lit(1.8), 2).cast("Double"))

val dfAfterGrouping: DataFrame = dataWithTempInCelcius
  .filter(col("City") === "Warsaw")
  .groupBy(col("Year"), col("Month"))
  .max("avg_temp_celcius") // Not an action, we are still doing a transformation from one df to another df

val maxTemp: Row = dfAfterGrouping
  .orderBy(col("Year"))
  .first() // <--- first is our first action, output from first is a Row and not a df

dataWithTempInCelcius
  .filter(col("City") === "Warsaw")
  .count() // <--- count is our second action
Here you can see what the problem is. Between the first and the second action I am doing a transformation which was already done in the first DAG. This intermediate result was not cached, so in the second DAG Spark is unable to get the filtered dataframe from memory, which leads to recomputation. Spark is going to fetch the data again, apply our filter and then calculate the count.
In the Spark UI you will find two separate DAGs, and both of them read the source csv.
If you cache the intermediate result after the first .filter(col("City") === "Warsaw") and then use this cached DF to do the grouping and the count, you will still find two separate DAGs (the number of actions has not changed), but this time in the plan for the second DAG you will find an "In memory table scan" instead of a read of a csv file - that means that Spark is reading data from the cache.
Now you can see the in-memory relation in the plan. There is still a read-csv node in the DAG but, as you can see, for the second action it is skipped (0 bytes read).
** I am using a Databricks cluster with Spark 3.2; the Spark UI may look different in your environment.
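In PySpark terms, the caching described above would look roughly like this (a sketch reusing the dataWithTempInCelcius DataFrame from the Scala example):
from pyspark.sql.functions import col

# cache the filtered intermediate result so both actions can reuse it
warsaw = dataWithTempInCelcius.filter(col("City") == "Warsaw").cache()

max_temp = (warsaw
            .groupBy("Year", "Month")
            .max("avg_temp_celcius")
            .orderBy("Year")
            .first())  # first action

row_count = warsaw.count()  # second action: the plan now shows an in-memory table scan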
Quote...
Using cache() and persist() methods, Spark provides an optimization mechanism to store the intermediate computation of a Spark DataFrame so they can be reused in subsequent actions.
See https://sparkbyexamples.com/spark/spark-dataframe-cache-and-persist-explained/

Why is it not possible to determine the partitions in a dataframe if it is possible to get the count of partitions in Spark?

Using df.rdd.getNumPartitions(), we can get the count of partitions. But how do we get the partitions?
I also tried to pick something up from the documentation and from all the attributes of a dataframe (using dir(df)). However, I could not find any API that would give the partitions; repartition, coalesce and getNumPartitions were all that I could find.
I read this and deduced that Spark does not know the partitioning key(s). My question is: if it does not know the partitioning key(s), and hence does not know the partitions, how can it know their count? And if it can, how do I determine the partitions?
How about checking what each partition contains using mapPartitionsWithIndex?
This code will work for a small dataset:
def f(splitIndex, elements):
    elements_text = ",".join(list(elements))
    yield splitIndex, elements_text

rdd.mapPartitionsWithIndex(f).take(10)
pyspark provides the spark_partition_id() function.
spark_partition_id()
A column for partition ID.
Note: This is indeterministic because it depends on data partitioning
and task scheduling.
>>> from pyspark.sql.functions import spark_partition_id
>>> spark.range(1, 1000000) \
...     .withColumn("spark_partition", spark_partition_id()) \
...     .groupby("spark_partition") \
...     .count().show(truncate=False)
+---------------+------+
|spark_partition|count |
+---------------+------+
|1 |500000|
|0 |499999|
+---------------+------+
Partitions are numbered from zero to n-1 where n is the number you get from getNumPartitions().
Is that what you're after? Or did you actually mean Hive partitions?

Transforming Python Lambda function without return value to Pyspark

I have a working lambda function in Python that computes the highest similarity between each string in dataset1 and the strings in dataset2. During an iteration, it writes the string, the best match and the similarity, together with some other information, to BigQuery. There is no return value, as the purpose of the function is to insert a row into a BigQuery dataset. This process takes rather long, which is why I wanted to use Pyspark and Dataproc to speed it up.
Converting the pandas dataframes to Spark was easy. I am having trouble registering my udf, because it has no return value and pyspark requires one. In addition, I don't understand how to map the pandas 'apply' function to the pyspark variant. So basically my question is how to transform the python code below to work on a spark dataframe.
The following code works in a regular Python environment:
def embargomatch(name, code, embargo_names):
    # find best match
    # insert best match and additional information into bigquery
    pass

customer_names.apply(lambda x: embargomatch(x['name'], x['customer_code'], embargo_names), axis=1)
Because pyspark requires a return type, I added 'return 1' to the udf and tried the following:
customer_names = spark.createDataFrame(customer_names)
from pyspark.sql.types import IntegerType
from pyspark.sql.functions import udf
embargo_match_udf = udf(lambda x: embargomatch(x['name'], x['customer_code'], embargo_names), IntegerType())
Now I'm stuck trying to apply the select function, as I don't know what parameters to give it.
I suspect you're stuck on how to pass multiple columns to the udf -- here's a good answer to that question: Pyspark: Pass multiple columns in UDF.
Rather than creating a udf based on a lambda that wraps your function, consider simplifying by creating a udf based on embargomatch directly.
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

embargo_names = ...

# The parameters here are the columns passed into the udf
def embargomatch(name, customer_code):
    pass

embargo_match_udf = udf(embargomatch, IntegerType())
customer_names.select(embargo_match_udf('name', 'customer_code').alias('column_name'))
That being said, it's suspect that your udf doesn't return anything -- I generally see udfs as a way to add columns to the dataframe, but not to have side effects. If you want to insert records into bigquery, consider doing something like this:
customer_names.select('column_name').write.parquet('gs://some/path')
os.system("bq load --source_format=PARQUET [DATASET].[TABLE] gs://some/path")

Spark DF pivot error: Method pivot([class java.lang.String, class java.lang.String]) does not exist

I am a newbie at using Spark dataframes. I am trying to use the pivot method with Spark (Spark version 2.x) and running into the following error:
Py4JError: An error occurred while calling o387.pivot. Trace:
py4j.Py4JException: Method pivot([class java.lang.String, class java.lang.String]) does not exist
Even though I am using first as the agg function here, I do not really need to apply any aggregation.
My dataframe looks like this:
+-----+-----+----------+-----+
| name|value| date| time|
+-----+-----+----------+-----+
|name1|100.0|2017-12-01|00:00|
|name1|255.5|2017-12-01|00:15|
|name1|333.3|2017-12-01|00:30|
Expected:
+-----+----------+-----+-----+-----+
| name| date|00:00|00:15|00:30|
+-----+----------+-----+-----+-----+
|name1|2017-12-01|100.0|255.5|333.3|
The way I am trying:
df = df.groupBy(["name","date"]).pivot(pivot_col="time",values="value").agg(first("value")).show
What is my mistake here?
The problem is the values="value" parameter in the pivot function. This should be used for a list of actual values to pivot on, not a column name. From the documentation:
values – List of values that will be translated to columns in the output DataFrame.
and an example:
df4.groupBy("year").pivot("course", ["dotNET", "Java"]).sum("earnings").collect()
[Row(year=2012, dotNET=15000, Java=20000), Row(year=2013, dotNET=48000, Java=30000)]
For the example in the question, values should be set to ["00:00", "00:15", "00:30"]. However, the values argument is not required (although providing it makes the pivot more efficient), so you can simply change to:
df = df.groupBy(["name","date"]).pivot("time").agg(first("value"))
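If you do want to supply the values explicitly, a sketch based on the question's data would be:
from pyspark.sql.functions import first

df = (df.groupBy(["name", "date"])
        .pivot("time", ["00:00", "00:15", "00:30"])
        .agg(first("value")))
df.show()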

Un-persisting all dataframes in (py)spark

I have a spark application with several points where I would like to persist the current state. This is usually after a large step, or when caching a state that I would like to use multiple times. It appears that when I call cache on my dataframe a second time, a new copy is cached to memory. In my application, this leads to memory issues when scaling up. Even though a given dataframe is a maximum of about 100 MB in my current tests, the cumulative size of the intermediate results grows beyond the allotted memory on the executor. See below for a small example that shows this behavior.
cache_test.py:
from pyspark import SparkContext
from pyspark.sql import HiveContext

spark_context = SparkContext(appName='cache_test')
hive_context = HiveContext(spark_context)

df = (hive_context.read
      .format('com.databricks.spark.csv')
      .load('simple_data.csv')
      )
df.cache()
df.show()

df = df.withColumn('C1+C2', df['C1'] + df['C2'])
df.cache()
df.show()

spark_context.stop()
simple_data.csv:
1,2,3
4,5,6
7,8,9
Looking at the application UI, there is a copy of the original dataframe, in addition to the one with the new column. I can remove the original copy by calling df.unpersist() before the withColumn line. Is this the recommended way to remove a cached intermediate result (i.e. call unpersist before every cache())?
Also, is it possible to purge all cached objects? In my application, there are natural breakpoints where I can simply purge all memory and move on to the next file. I would like to do this without creating a new spark application for each input file.
Thank you in advance!
Spark 2.x
You can use Catalog.clearCache:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
...
spark.catalog.clearCache()
Spark 1.x
You can use SQLContext.clearCache method which
Removes all cached tables from the in-memory cache.
from pyspark.sql import SQLContext
from pyspark import SparkContext
sqlContext = SQLContext.getOrCreate(SparkContext.getOrCreate())
...
sqlContext.clearCache()
We use this quite often
for (id, rdd) in sc._jsc.getPersistentRDDs().items():
    rdd.unpersist()
    print("Unpersisted {} rdd".format(id))
where sc is a sparkContext variable.
When you call cache on a dataframe it is evaluated lazily; the data only gets cached when you perform an action on it like count(), show() etc.
In your case, after the first cache you call show(), which is why that dataframe is cached in memory. You then perform another transformation on the dataframe to add an additional column, cache the new dataframe, and call the action show again, which caches the second dataframe in memory. If your memory is only big enough to hold one dataframe, then when you cache the second one Spark will remove the first dataframe from memory, as there is not enough space to hold both.
Thing to keep in mind: you should not cache a dataframe unless you are using it in multiple actions; otherwise it is an overhead in terms of performance, as caching itself is a costly operation.
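To tie this back to the original question, a sketch of the unpersist-before-cache pattern (using the same df as in cache_test.py):
df.cache()
df.show()

# release the cached original before caching the derived dataframe,
# so only one copy stays in memory
df.unpersist()

df = df.withColumn('C1+C2', df['C1'] + df['C2'])
df.cache()
df.show()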
