How to stack two columns into a single one in PySpark?

I have the following PySpark DataFrame:
id col1 col2
A 2 3
A 2 4
A 4 6
B 1 2
I want to stack col1 and col2 in order to get a single column as follows:
id col3
A 2
A 3
A 4
A 6
B 1
B 2
How can I do so?
df = (
    sc.parallelize([
        ('A', 2, 3), ('A', 2, 4), ('A', 4, 6),
        ('B', 1, 2),
    ]).toDF(["id", "col1", "col2"])
)

The simplest approach is to merge col1 and col2 into an array column and then explode it:
df.show()
+---+----+----+
| id|col1|col2|
+---+----+----+
| A| 2| 3|
| A| 2| 4|
| A| 4| 6|
| B| 1| 2|
+---+----+----+
df.selectExpr('id', 'explode(array(col1, col2))').show()
+---+---+
| id|col|
+---+---+
| A| 2|
| A| 3|
| A| 2|
| A| 4|
| A| 4|
| A| 6|
| B| 1|
| B| 2|
+---+---+
You can drop duplicates if you don't need them.
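If you only need the distinct values per id, as in your desired output, you can chain it directly; a minimal sketch, reusing the df defined above and naming the exploded column col3 as in the question:
df.selectExpr('id', 'explode(array(col1, col2)) as col3') \
    .dropDuplicates() \
    .orderBy('id', 'col3') \
    .show()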

To do this, group by "id", collect the values from both "col1" and "col2" into one list in the aggregation, and then explode that list back into a single column.
To get only the unique numbers, drop the duplicates afterwards.
Since your desired result also has the numbers sorted, sort the concatenated list inside the aggregation with sort_array.
The following code:
from pyspark.sql.functions import concat, collect_list, explode, col, sort_array
df = (
    sc.parallelize([
        ('A', 2, 3), ('A', 2, 4), ('A', 4, 6),
        ('B', 1, 2),
    ]).toDF(["id", "col1", "col2"])
)
result = df.groupBy("id") \
    .agg(sort_array(concat(collect_list("col1"), collect_list("col2"))).alias("all_numbers")) \
    .orderBy("id") \
    .withColumn("number", explode(col("all_numbers"))) \
    .dropDuplicates() \
    .select("id", "number")
result.show()
will yield:
+---+------+
| id|number|
+---+------+
| A| 2|
| A| 3|
| A| 4|
| A| 6|
| B| 1|
| B| 2|
+---+------+

A rather simple solution if the number of columns involved is small.
df = (
    sc.parallelize([
        ('A', 2, 3), ('A', 2, 4), ('A', 4, 6),
        ('B', 1, 2),
    ]).toDF(["id", "col1", "col2"])
)
df.show()
+---+----+----+
| id|col1|col2|
+---+----+----+
| A| 2| 3|
| A| 2| 4|
| A| 4| 6|
| B| 1| 2|
+---+----+----+
df1 = df.select(['id', 'col1'])
df2 = df.select(['id', 'col2']).withColumnRenamed('col2', 'col1')
df_new = df1.union(df2)
df_new = df_new.drop_duplicates()
df_new.show()
+---+----+
| id|col1|
+---+----+
| A| 3|
| A| 4|
| B| 1|
| A| 6|
| A| 2|
| B| 2|
+---+----+
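If you also want the rows ordered and the column named col3 as in the desired output, a small follow-up on the same result (col3 is the name used in the question):
df_new.withColumnRenamed('col1', 'col3').orderBy('id', 'col3').show()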

Related

How to generate the max values for new columns in PySpark dataframe?

Suppose I have a pyspark dataframe df.
+---+---+
|  a|  b|
+---+---+
|  1|200|
|  2|300|
|  4| 50|
+---+---+
I'd like to add a new column c.
column c = max(0, column b - 100)
+---+---+---+
| a| b| c|
+---+---+---+
| 1|200|100|
| 2|300|200|
| 4| 50| 0|
+---+---+---+
How should I generate the new column c in pyspark dataframe? Thanks in advance!
Hope you are looking for something like this:
from pyspark.sql.functions import col, lit, greatest
df = spark.createDataFrame(
    [
        (1, 200),
        (2, 300),
        (4, 50),
    ],
    ["a", "b"]
)
df_new = df.withColumn("c", greatest(lit(0), col("b") - lit(100)))
df_new.show()
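An equivalent way to express the same logic with a conditional, in case you prefer when/otherwise over greatest; just a sketch of the same computation:
from pyspark.sql.functions import when, col

df_new = df.withColumn("c", when(col("b") - 100 > 0, col("b") - 100).otherwise(0))
df_new.show()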

Add distinct count of a column to each row in PySpark

I need to add the distinct count of a column to each row in a PySpark dataframe.
Example:
If the original dataframe is this:
+----+----+
|col1|col2|
+----+----+
|abc | 1|
|xyz | 1|
|dgc | 2|
|ydh | 3|
|ujd | 1|
|ujx | 3|
+----+----+
Then I want something like this:
+----+----+----+
|col1|col2|col3|
+----+----+----+
|abc | 1| 3|
|xyz | 1| 3|
|dgc | 2| 3|
|ydh | 3| 3|
|ujd | 1| 3|
|ujx | 3| 3|
+----+----+----+
I tried df.withColumn('total_count', f.countDistinct('col2')) but it gives an error.
You can count the distinct elements in the column and create a new column with that value:
distincts = df.dropDuplicates(["col2"]).count()
df = df.withColumn("col3", f.lit(distincts))
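For reference, a self-contained sketch of this approach, assuming an active SparkSession named spark and pyspark.sql.functions imported as f:
from pyspark.sql import functions as f

df = spark.createDataFrame(
    [("abc", 1), ("xyz", 1), ("dgc", 2), ("ydh", 3), ("ujd", 1), ("ujx", 3)],
    ["col1", "col2"])
distincts = df.dropDuplicates(["col2"]).count()  # 3 distinct values in col2
df = df.withColumn("col3", f.lit(distincts))
df.show()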
Cross join with the distinct count, as below:
df2 = df.crossJoin(df.select(F.countDistinct('col2').alias('col3')))
df2.show()
+----+----+----+
|col1|col2|col3|
+----+----+----+
| abc| 1| 3|
| xyz| 1| 3|
| dgc| 2| 3|
| ydh| 3| 3|
| ujd| 1| 3|
| ujx| 3| 3|
+----+----+----+
You can use Window, collect_set and size:
from pyspark.sql import functions as F, Window
df = spark.createDataFrame([("abc", 1), ("xyz", 1), ("dgc", 2), ("ydh", 3), ("ujd", 1), ("ujx", 3)], ['col1', 'col2'])
window = Window.orderBy("col2").rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
df.withColumn("col3", F.size(F.collect_set(F.col("col2")).over(window))).show()
+----+----+----+
|col1|col2|col3|
+----+----+----+
| abc| 1| 3|
| xyz| 1| 3|
| dgc| 2| 3|
| ydh| 3| 3|
| ujd| 1| 3|
| ujx| 3| 3|
+----+----+----+

PySpark making dataframe with three columns from RDD with tuple and int

I have a RDD in a form of:
[(('1', '10'), 1), (('10', '1'), 1), (('1', '12'), 1), (('12', '1'), 1)]
What I have done is
df = spark.createDataFrame(rdd, ["src", "rp"])
which gives me one column for the tuple and one for the int, so it looks like this:
+-------+-----+
| src|rp |
+-------+-----+
|[1, 10]| 1|
|[10, 1]| 1|
|[1, 12]| 1|
|[12, 1]| 1|
+-------+-----+
But I can't figure out how to make a src column from the first element of [x, y] and a dst column from the second element, so that I would have a dataframe with three columns src, dst and rp:
+-------+-----+-----+
| src|dst |rp |
+-------+-----+-----+
| 1| 10| 1|
| 10| 1| 1|
| 1| 12| 1|
| 12| 1| 1|
+-------+-----+-----+
You need an intermediate transformation on your RDD to make it a flat list of three elements:
spark.createDataFrame(rdd.map(lambda l: [l[0][0], l[0][1], l[1]]), ["src", "dst", "rp"])
+---+---+---+
|src|dst| rp|
+---+---+---+
| 1| 10| 1|
| 10| 1| 1|
| 1| 12| 1|
| 12| 1| 1|
+---+---+---+
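For completeness, a self-contained version of this approach, assuming sc is an active SparkContext and spark an active SparkSession:
rdd = sc.parallelize([(('1', '10'), 1), (('10', '1'), 1), (('1', '12'), 1), (('12', '1'), 1)])
df = spark.createDataFrame(rdd.map(lambda l: [l[0][0], l[0][1], l[1]]), ["src", "dst", "rp"])
df.show()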
You can just do a simple select on the dataframe to separate out the columns. No need to do an intermediate transformation as the other answer suggests.
from pyspark.sql.functions import col
df = sqlContext.createDataFrame(rdd, ["src", "rp"])
df = df.select(col("src._1").alias("src"), col("src._2").alias("dst"),col("rp"))
df.show()
Here's the result
+---+---+---+
|src|dst| rp|
+---+---+---+
| 1| 10| 1|
| 10| 1| 1|
| 1| 12| 1|
| 12| 1| 1|
+---+---+---+
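For context, this works because each Python tuple is converted into a struct column whose fields are named _1 and _2, which is why they can be addressed as src._1 and src._2; printing the schema should show something along these lines:
df.printSchema()
# root
#  |-- src: struct (nullable = true)
#  |    |-- _1: string (nullable = true)
#  |    |-- _2: string (nullable = true)
#  |-- rp: long (nullable = true)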

How to encode labels from array in pyspark

For example, I have a DataFrame with categorical features in the name column:
from pyspark.sql import SparkSession
spark = (SparkSession.builder.master("local").appName("example")
         .config("spark.some.config.option", "some-value").getOrCreate())
features = [(['a', 'b', 'c'], 1),
            (['a', 'c'], 2),
            (['d'], 3),
            (['b', 'c'], 4),
            (['a', 'b', 'd'], 5)]
df = spark.createDataFrame(features, ['name','id'])
df.show()
Out:
+---------+----+
| name| id |
+---------+----+
|[a, b, c]| 1|
| [a, c]| 2|
| [d]| 3|
| [b, c]| 4|
|[a, b, d]| 5|
+---------+----+
What I want to get:
+--------+--------+--------+--------+----+
| name_a | name_b | name_c | name_d | id |
+--------+--------+--------+--------+----+
| 1 | 1 | 1 | 0 | 1 |
+--------+--------+--------+--------+----+
| 1 | 0 | 1 | 0 | 2 |
+--------+--------+--------+--------+----+
| 0 | 0 | 0 | 1 | 3 |
+--------+--------+--------+--------+----+
| 0 | 1 | 1 | 0 | 4 |
+--------+--------+--------+--------+----+
| 1 | 1 | 0 | 1 | 5 |
+--------+--------+--------+--------+----+
I found the same question, but there was nothing helpful in it.
I tried to use VectorIndexer from PySpark ML, but I ran into problems transforming the name field to a vector type.
from pyspark.ml.feature import VectorIndexer
indexer = VectorIndexer(inputCol="name", outputCol="indexed", maxCategories=5)
indexerModel = indexer.fit(df)
I get the following error:
Column name must be of type org.apache.spark.ml.linalg.VectorUDT#3bfc3ba7 but was actually ArrayType
I found a solution here but it looks overcomplicated. However, I'm not sure if it can be done only with VectorIndexer.
If you want to use the output with Spark ML, it is best to use CountVectorizer:
from pyspark.ml.feature import CountVectorizer
# Add binary=True if needed
df_enc = (CountVectorizer(inputCol="name", outputCol="name_vector")
          .fit(df)
          .transform(df))
df_enc.show(truncate=False)
+---------+---+-------------------------+
|name |id |name_vector |
+---------+---+-------------------------+
|[a, b, c]|1 |(4,[0,1,2],[1.0,1.0,1.0])|
|[a, c] |2 |(4,[0,1],[1.0,1.0]) |
|[d] |3 |(4,[3],[1.0]) |
|[b, c] |4 |(4,[1,2],[1.0,1.0]) |
|[a, b, d]|5 |(4,[0,2,3],[1.0,1.0,1.0])|
+---------+---+-------------------------+
Otherwise collect distinct values:
from pyspark.sql.functions import array_contains, col, explode
names = [
    x[0] for x in
    df.select(explode("name").alias("name")).distinct().orderBy("name").collect()]
and select the columns with array_contains:
df_sep = df.select("*", *[
    array_contains("name", name).cast("integer").alias("name_{}".format(name))
    for name in names]
)
df_sep.show()
+---------+---+------+------+------+------+
| name| id|name_a|name_b|name_c|name_d|
+---------+---+------+------+------+------+
|[a, b, c]| 1| 1| 1| 1| 0|
| [a, c]| 2| 1| 0| 1| 0|
| [d]| 3| 0| 0| 0| 1|
| [b, c]| 4| 0| 1| 1| 0|
|[a, b, d]| 5| 1| 1| 0| 1|
+---------+---+------+------+------+------+
With explode from pyspark.sql.functions and pivot:
from pyspark.sql import functions as F
features = [(['a', 'b', 'c'], 1),
            (['a', 'c'], 2),
            (['d'], 3),
            (['b', 'c'], 4),
            (['a', 'b', 'd'], 5)]
df = spark.createDataFrame(features, ['name', 'id'])
df.show()
+---------+---+
| name| id|
+---------+---+
|[a, b, c]| 1|
| [a, c]| 2|
| [d]| 3|
| [b, c]| 4|
|[a, b, d]| 5|
+---------+---+
df = df.withColumn('exploded', F.explode('name'))
df.drop('name').groupby('id').pivot('exploded').count().show()
+---+----+----+----+----+
| id| a| b| c| d|
+---+----+----+----+----+
| 5| 1| 1|null| 1|
| 1| 1| 1| 1|null|
| 3|null|null|null| 1|
| 2| 1|null| 1|null|
| 4|null| 1| 1|null|
+---+----+----+----+----+
Sort by id and convert null to 0
df.drop('name').groupby('id').pivot('exploded').count().na.fill(0).sort(F.col('id').asc()).show()
+---+---+---+---+---+
| id| a| b| c| d|
+---+---+---+---+---+
| 1| 1| 1| 1| 0|
| 2| 1| 0| 1| 0|
| 3| 0| 0| 0| 1|
| 4| 0| 1| 1| 0|
| 5| 1| 1| 0| 1|
+---+---+---+---+---+
explode returns a new row for each element in the given array or map. You can then use pivot to "transpose" the new column.
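The whole thing can also be written as one chain; a sketch starting again from the features list defined above (i.e. before the exploded column was added to df):
result = (spark.createDataFrame(features, ['name', 'id'])
          .select('id', F.explode('name').alias('cat'))
          .groupby('id').pivot('cat').count()
          .na.fill(0)
          .sort('id'))
result.show()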

How to get unique pairs of values in DataFrame

Given a pySpark DataFrame, how can I get all possible unique combinations of the columns col1 and col2?
I can get the unique values for a single column, but I cannot get the unique pairs of col1 and col2:
df.select('col1').distinct().rdd.map(lambda r: r[0]).collect()
I tried this, but it doesn't seem to work:
df.select(['col1','col2']).distinct().rdd.map(lambda r: r[0]).collect()
Here's what I tried. Your mapping keeps only the first element of each row; map to a tuple of both columns instead:
>>> df = spark.createDataFrame([(1,2),(1,3),(1,2),(2,3)],['col1','col2'])
>>> df.show()
+----+----+
|col1|col2|
+----+----+
| 1| 2|
| 1| 3|
| 1| 2|
| 2| 3|
+----+----+
>>> df.select('col1','col2').distinct().rdd.map(lambda r:r[0]).collect() ## your mapping
[1, 2, 1]
>>> df.select('col1','col2').distinct().show()
+----+----+
|col1|col2|
+----+----+
| 1| 3|
| 2| 3|
| 1| 2|
+----+----+
>>> df.select('col1','col2').distinct().rdd.map(lambda r:(r[0],r[1])).collect()
[(1, 3), (2, 3), (1, 2)]
Try the expression below:
`df[['col1', 'col2']].drop_duplicates()`
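Note that drop_duplicates is just an alias for dropDuplicates. To display or collect the unique pairs, a small sketch using the DataFrame from the answer above:
pairs = df[['col1', 'col2']].drop_duplicates()
pairs.show()
# As a list of (col1, col2) tuples:
unique_pairs = [(r['col1'], r['col2']) for r in pairs.collect()]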
