How to convert array of strings into array of struct with conditions - python

I have a pyspark dataframe with single column _c0.
a|b|c|clm4=1|clm5=3
a|b|c|clm4=9|clm6=60|clm7=23
I am trying to convert it to a dataframe of selected columns like this
clm1,clm2,clm3,clm4,clm6,clm7,clm8
a, b, c, 1, null,null,null
a, b, c, 9, 60, 23, null
Please note that I removed clm5 and added clm8.
I am using the below code:
transform_expr = """
transform(split(_c0, '[|]'), (x, i) ->
struct(
IF(x like '%=%', substring_index(x, '=', 1), concat('_c0', i+1)),
substring_index(x, '=', -1)
)
)
"""
df = df.select("_c0", explode(map_from_entries(expr(transform_expr))).alias("col_name", "col_value")).groupby("_c0").pivot('col_name').agg(first('col_value')).drop("_c0")
The problem is that I have multiple huge files on which I want to perform this action, and the result for each file should contain the same columns (which is also a long list), with null values when a column is not present in the input file. How can I add a condition to the above code so that it selects only the columns present in a list of column names?

You can put the desired columns in a list and use it to filter the transformed array:
column_list = ["clm1", "clm2", "clm3", "clm4", "clm6", "clm7", "clm8"]
Now add this filter after the transform step using the filter function:
column_filter = ','.join(f"'{c}'" for c in column_list)
transform_expr = f"""
filter(transform(split(_c0, '[|]'), (x, i) ->
struct(
IF(x like '%=%', substring_index(x, '=', 1), concat('clm', i+1)) as name,
substring_index(x, '=', -1) as value
)
), x -> x.name in ({column_filter}))
"""
This will filter out all the columns that are not present in the list.
Finally, add the missing columns as nulls using a simple select expression:
df = df.select("_c0", explode(map_from_entries(expr(transform_expr))).alias("col_name", "col_value")).groupby("_c0").pivot('col_name').agg(first('col_value')).drop("_c0")
## add missing columns as nulls
final_columns = [col(c).alias(c) if c in df.columns else lit(None).alias(c) for c in column_list]
df.select(*final_columns).show()
#+----+----+----+----+----+----+----+
#|clm1|clm2|clm3|clm4|clm6|clm7|clm8|
#+----+----+----+----+----+----+----+
#| a| b| c| 9| 60| 23|null|
#| a| b| c| 1|null|null|null|
#+----+----+----+----+----+----+----+
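For completeness, here is a minimal self-contained sketch that strings the steps above together (assuming a running SparkSession named spark and the two sample lines from the question; everything else is taken from the answer's own snippets):
from pyspark.sql.functions import col, explode, expr, first, lit, map_from_entries

# sample input built from the two lines shown in the question
df = spark.createDataFrame(
    [("a|b|c|clm4=1|clm5=3",), ("a|b|c|clm4=9|clm6=60|clm7=23",)], ["_c0"]
)

column_list = ["clm1", "clm2", "clm3", "clm4", "clm6", "clm7", "clm8"]
column_filter = ','.join(f"'{c}'" for c in column_list)

transform_expr = f"""
filter(transform(split(_c0, '[|]'), (x, i) ->
    struct(
        IF(x like '%=%', substring_index(x, '=', 1), concat('clm', i+1)) as name,
        substring_index(x, '=', -1) as value
    )
), x -> x.name in ({column_filter}))
"""

pivoted = (
    df.select("_c0", explode(map_from_entries(expr(transform_expr))).alias("col_name", "col_value"))
      .groupby("_c0").pivot('col_name').agg(first('col_value'))
      .drop("_c0")
)

# add missing columns as nulls, in the fixed order given by column_list
final_columns = [col(c) if c in pivoted.columns else lit(None).alias(c) for c in column_list]
pivoted.select(*final_columns).show()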

Related

How to Update a dataframe column by taking value from another dataframe?

I have two dataframes df_1 and df_2:
rdd = spark.sparkContext.parallelize([
    (1, '', '5647-0394'),
    (2, '', '6748-9384'),
    (3, '', '9485-9484')])
df_1 = spark.createDataFrame(rdd, schema=['ID', 'UPDATED_MESSAGE', 'ZIP_CODE'])
# +---+---------------+---------+
# | ID|UPDATED_MESSAGE| ZIP_CODE|
# +---+---------------+---------+
# | 1| |5647-0394|
# | 2| |6748-9384|
# | 3| |9485-9484|
# +---+---------------+---------+
rdd = spark.sparkContext.parallelize([
    ('JAMES', 'INDIA_WON', '6748-9384')])
df_2 = spark.createDataFrame(rdd, schema=['NAME', 'CODE', 'ADDRESS_CODE'])
# +-----+---------+------------+
# | NAME| CODE|ADDRESS_CODE|
# +-----+---------+------------+
# |JAMES|INDIA_WON| 6748-9384|
# +-----+---------+------------+
I need to update df_1 column 'UPDATED_MESSAGE' with the value 'INDIA_WON' from df_2 column 'CODE'. Currently the column 'UPDATED_MESSAGE' is null. I need to update every row with the value 'INDIA_WON'. How can we do it in PySpark?
The condition here is: if we find the 'ADDRESS_CODE' value in df_1 column 'ZIP_CODE', we need to populate all the values in 'UPDATED_MESSAGE' with 'INDIA_WON'.
I hope I've interpreted what you need correctly. If so, your logic seems strange. It seems that your tables are very small. Spark is an engine for big data (millions to billions of records); if your tables are small, consider doing this in Pandas.
from pyspark.sql import functions as F
df_2 = df_2.groupBy('ADDRESS_CODE').agg(F.first('CODE').alias('CODE'))
df_joined = df_1.join(df_2, df_1.ZIP_CODE == df_2.ADDRESS_CODE, 'left')
df_filtered = df_joined.filter(~F.isnull('ADDRESS_CODE'))
if bool(df_filtered.head(1)):
    df_1 = df_1.withColumn('UPDATED_MESSAGE', F.lit(df_filtered.head()['CODE']))
df_1.show()
# +---+---------------+---------+
# | ID|UPDATED_MESSAGE| ZIP_CODE|
# +---+---------------+---------+
# | 1| INDIA_WON|5647-0394|
# | 2| INDIA_WON|6748-9384|
# | 3| INDIA_WON|9485-9484|
# +---+---------------+---------+
The below Python method returns either the original df_1 when no ZIP_CODE match has been found in df_2, or a modified df_1 where the column UPDATED_MESSAGE is filled in with the value from the df_2.CODE column:
from pyspark.sql.functions import col, lit

def update_df1(df_1, df_2):
    if df_1.join(df_2, on=(col("ZIP_CODE") == col("ADDRESS_CODE")), how="inner").count() == 0:
        return df_1
    code = df_2.collect()[0]["CODE"]
    return df_1.withColumn("UPDATED_MESSAGE", lit(code))

update_df1(df_1, df_2).show()
+---+---------------+---------+
| ID|UPDATED_MESSAGE| ZIP_CODE|
+---+---------------+---------+
| 1| INDIA_WON|5647-0394|
| 2| INDIA_WON|6748-9384|
| 3| INDIA_WON|9485-9484|
+---+---------------+---------+
I propose using a broadcast join in this case, to avoid an excessive shuffle.
Code and logic below:
from pyspark.sql.functions import broadcast

new = (df_1.drop('UPDATED_MESSAGE')  # drop the null column
       .join(broadcast(df_2.drop('NAME')), how='left', on=df_1.ZIP_CODE == df_2.ADDRESS_CODE)  # broadcast join
       .drop('ADDRESS_CODE')  # drop column no longer needed
       .toDF('ID', 'ZIP_CODE', 'UPDATED_MESSAGE')  # rename the new df
       ).show()
Why use dataframes when Spark SQL is so much easier?
Turn data frames into temporary views.
%python
df_1.createOrReplaceTempView("tmp_zipcodes")
df_2.createOrReplaceTempView("tmp_person")
Write simple Spark SQL to get the answer.
%sql
select
    a.id,
    case when b.code is null then '' else b.code end as update_message,
    a.zip_code
from tmp_zipcodes as a
left join tmp_person as b
    on a.zip_code = b.address_code
Output from the query. Use spark.sql() to make a dataframe if you need to write it to disk.
Overwrite the whole data frame with the new answer:
sql_txt = """
select
a.id,
case when b.code is null then '' else b.code end as update_message,
a.zip_code
from tmp_zipcodes as a
left join tmp_person as b
on a.zip_code = b.address_code
"""
df_1 = spark.sql(sql_txt)
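If you prefer to stay in the DataFrame API, a rough equivalent of the SQL above could look like the sketch below (hedged: it fills UPDATED_MESSAGE per matching row with df_2.CODE and an empty string otherwise, mirroring the case/when):
from pyspark.sql import functions as F

df_result = (
    df_1.join(df_2.select("CODE", "ADDRESS_CODE"),
              df_1.ZIP_CODE == df_2.ADDRESS_CODE, "left")
        .select("ID",
                F.coalesce(F.col("CODE"), F.lit("")).alias("UPDATED_MESSAGE"),
                "ZIP_CODE")
)
df_result.show()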

Iterate over array column and stop as soon as condition is met in pyspark

I have this dataframe:
se_cols = [
    "id",
    "name",
    "associated_countries"
]
person = (
    1,
    "GABRIELE",
    ["ITA", "BEL", "BVI"],
)
company = (
    2,
    "Bad Company",
    ["CYP", "RUS", "ITA"],
)
se_data = [person, company]
se = spark.createDataFrame(se_data).toDF(*se_cols)
Now, what I want is to be able to iterate over each array in the "associated_countries" column and, as soon as I find one country that belongs to a certain set, select that row.
The way I could think of was to use F.exists with a dictionary whose keys are the ISO codes of the target countries I'm looking for.
secrecy = {"CYP":"cyprus", "BVI":"british virgin island"}
def at_least_one_secrecy(x_arr, secrecy_map=secrecy):
    for x in x_arr:
        if secrecy_map.get(x, False) is False:
            continue
        else:
            return True
    return False
se.withColumn("linked_to_secrecy", F.exists("associated_countries", lambda x_arr: at_least_one_secrecy(x_arr=x_arr))).show()
But this returns the error:
TypeError: Column is not iterable
PS: I know this could be solved by adding a column "target_countries" where each row would contain my target ISOs as an array, and doing some sort of array_overlap > 0 condition between "associated_countries" and "target_countries". But consider that I have a huge dataset, and that would be very expensive.
You can use the arrays_overlap function with a literal array that contains your ISO country codes:
from pyspark.sql import functions as F
secrecy_array = F.array(*[F.lit(x) for x in secrecy.keys()])
se.withColumn(
    "linked_to_secrecy",
    F.arrays_overlap(F.col("associated_countries"), secrecy_array)
).show()
#+---+-----------+--------------------+-----------------+
#| id| name|associated_countries|linked_to_secrecy|
#+---+-----------+--------------------+-----------------+
#| 1| GABRIELE| [ITA, BEL, BVI]| true|
#| 2|Bad Company| [CYP, RUS, ITA]| true|
#+---+-----------+--------------------+-----------------+
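As a side note on the original attempt: F.exists (available in the PySpark API from Spark 3.1) passes each array element to the lambda as a Column, not the whole array as a Python list, which is why looping over it raised "Column is not iterable". A hedged sketch of that approach using Column.isin:
from pyspark.sql import functions as F

# `se` and `secrecy` are the dataframe and dict defined above;
# the lambda receives one element (a Column) at a time.
se.withColumn(
    "linked_to_secrecy",
    F.exists("associated_countries", lambda c: c.isin(*secrecy.keys()))
).show()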

Explode column with dense vectors in multiple rows

I have a Dataframe with two columns: BrandWatchErwaehnungID and word_counts.
The word_counts column is the output of CountVectorizer (a sparse vector). After dropping the empty rows, I have created two new columns, one with the indices of the sparse vector and one with their values.
help0 = countedwords_text['BrandWatchErwaehnungID', 'word_counts'].rdd \
    .filter(lambda x: x[1].indices.size != 0) \
    .map(lambda x: (x[0], x[1], DenseVector(x[1].indices), DenseVector(x[1].values))).toDF() \
    .withColumnRenamed("_1", "BrandWatchErwaenungID").withColumnRenamed("_2", "word_counts") \
    .withColumnRenamed("_3", "word_indices").withColumnRenamed("_4", "single_word_counts")
I needed to convert them to dense vectors before adding them to my Dataframe because Spark did not accept numpy.ndarray. My problem is that I now want to explode that Dataframe on the word_indices column, but the explode method from pyspark.sql.functions only supports arrays or maps as input.
I have tried:
help1 = help0.withColumn('b' , explode(help0.word_indices))
and get the following error:
cannot resolve 'explode(`word_indices`)' due to data type mismatch: input to function explode should be array or map type
Afterwards I tried:
help1 = help0.withColumn('b' , explode(help0.word_indices.toArray()))
Which also did not work...
Any suggestions?
You have to use a udf:
from pyspark.sql.functions import udf, explode
from pyspark.sql.types import *
from pyspark.ml.linalg import *
#udf("array<integer>")
def indices(v):
if isinstance(v, DenseVector):
return list(range(len(v)))
if isinstance(v, SparseVector):
return v.indices.tolist()
df = spark.createDataFrame([
(1, DenseVector([1, 2, 3])), (2, SparseVector(5, {4: 42}))],
("id", "v"))
df.select("id", explode(indices("v"))).show()
# +---+---+
# | id|col|
# +---+---+
# | 1| 0|
# | 1| 1|
# | 1| 2|
# | 2| 4|
# +---+---+
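On Spark 3.0+, a possible alternative to the udf (a sketch, not part of the original answer) is pyspark.ml.functions.vector_to_array, which turns the ML vector into a plain array column; posexplode then yields the position and value, and filtering out zeros recovers the non-zero indices:
from pyspark.ml.functions import vector_to_array
from pyspark.sql.functions import posexplode, col

# posexplode produces two columns: `pos` (the index) and `col` (the value)
(df.select("id", posexplode(vector_to_array("v")))
   .where(col("col") != 0.0)
   .show())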

How to perform union on two DataFrames with different amounts of columns in Spark?

I have 2 DataFrames:
I need a union like this:
The unionAll function doesn't work because the number and the names of the columns are different.
How can I do this?
In Scala you just have to append all missing columns as nulls.
import org.apache.spark.sql.functions._
// let df1 and df2 be the DataFrames to merge
val df1 = sc.parallelize(List(
  (50, 2),
  (34, 4)
)).toDF("age", "children")

val df2 = sc.parallelize(List(
  (26, true, 60000.00),
  (32, false, 35000.00)
)).toDF("age", "education", "income")
val cols1 = df1.columns.toSet
val cols2 = df2.columns.toSet
val total = cols1 ++ cols2 // union
def expr(myCols: Set[String], allCols: Set[String]) = {
  allCols.toList.map(x => x match {
    case x if myCols.contains(x) => col(x)
    case _ => lit(null).as(x)
  })
}
df1.select(expr(cols1, total):_*).unionAll(df2.select(expr(cols2, total):_*)).show()
+---+--------+---------+-------+
|age|children|education| income|
+---+--------+---------+-------+
| 50| 2| null| null|
| 34| 4| null| null|
| 26| null| true|60000.0|
| 32| null| false|35000.0|
+---+--------+---------+-------+
Update
Both temporary DataFrames will have the same order of columns, because we are mapping through total in both cases.
df1.select(expr(cols1, total):_*).show()
df2.select(expr(cols2, total):_*).show()
+---+--------+---------+------+
|age|children|education|income|
+---+--------+---------+------+
| 50| 2| null| null|
| 34| 4| null| null|
+---+--------+---------+------+
+---+--------+---------+-------+
|age|children|education| income|
+---+--------+---------+-------+
| 26| null| true|60000.0|
| 32| null| false|35000.0|
+---+--------+---------+-------+
Spark 3.1+
df = df1.unionByName(df2, allowMissingColumns=True)
Test results:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
data1=[
(1 , '2016-08-29', 1 , 2, 3),
(2 , '2016-08-29', 1 , 2, 3),
(3 , '2016-08-29', 1 , 2, 3)]
df1 = spark.createDataFrame(data1, ['code' , 'date' , 'A' , 'B', 'C'])
data2=[
(5 , '2016-08-29', 1, 2, 3, 4),
(6 , '2016-08-29', 1, 2, 3, 4),
(7 , '2016-08-29', 1, 2, 3, 4)]
df2 = spark.createDataFrame(data2, ['code' , 'date' , 'B', 'C', 'D', 'E'])
df = df1.unionByName(df2, allowMissingColumns=True)
df.show()
# +----+----------+----+---+---+----+----+
# |code| date| A| B| C| D| E|
# +----+----------+----+---+---+----+----+
# | 1|2016-08-29| 1| 2| 3|null|null|
# | 2|2016-08-29| 1| 2| 3|null|null|
# | 3|2016-08-29| 1| 2| 3|null|null|
# | 5|2016-08-29|null| 1| 2| 3| 4|
# | 6|2016-08-29|null| 1| 2| 3| 4|
# | 7|2016-08-29|null| 1| 2| 3| 4|
# +----+----------+----+---+---+----+----+
Spark 2.3+
diff1 = [c for c in df2.columns if c not in df1.columns]
diff2 = [c for c in df1.columns if c not in df2.columns]
df = df1.select('*', *[F.lit(None).alias(c) for c in diff1]) \
.unionByName(df2.select('*', *[F.lit(None).alias(c) for c in diff2]))
Test results:
from pyspark.sql import SparkSession, functions as F
spark = SparkSession.builder.getOrCreate()
data1=[
(1 , '2016-08-29', 1 , 2, 3),
(2 , '2016-08-29', 1 , 2, 3),
(3 , '2016-08-29', 1 , 2, 3)]
df1 = spark.createDataFrame(data1, ['code' , 'date' , 'A' , 'B', 'C'])
data2=[
(5 , '2016-08-29', 1, 2, 3, 4),
(6 , '2016-08-29', 1, 2, 3, 4),
(7 , '2016-08-29', 1, 2, 3, 4)]
df2 = spark.createDataFrame(data2, ['code' , 'date' , 'B', 'C', 'D', 'E'])
diff1 = [c for c in df2.columns if c not in df1.columns]
diff2 = [c for c in df1.columns if c not in df2.columns]
df = df1.select('*', *[F.lit(None).alias(c) for c in diff1]) \
.unionByName(df2.select('*', *[F.lit(None).alias(c) for c in diff2]))
df.show()
# +----+----------+----+---+---+----+----+
# |code| date| A| B| C| D| E|
# +----+----------+----+---+---+----+----+
# | 1|2016-08-29| 1| 2| 3|null|null|
# | 2|2016-08-29| 1| 2| 3|null|null|
# | 3|2016-08-29| 1| 2| 3|null|null|
# | 5|2016-08-29|null| 1| 2| 3| 4|
# | 6|2016-08-29|null| 1| 2| 3| 4|
# | 7|2016-08-29|null| 1| 2| 3| 4|
# +----+----------+----+---+---+----+----+
Here is my Python version:
from pyspark.sql import SparkSession, HiveContext
from pyspark.sql.functions import lit
from pyspark.sql import Row
def customUnion(df1, df2):
    cols1 = df1.columns
    cols2 = df2.columns
    total_cols = sorted(cols1 + list(set(cols2) - set(cols1)))

    def expr(mycols, allcols):
        def processCols(colname):
            if colname in mycols:
                return colname
            else:
                return lit(None).alias(colname)
        cols = map(processCols, allcols)
        return list(cols)

    appended = df1.select(expr(cols1, total_cols)).union(df2.select(expr(cols2, total_cols)))
    return appended
Here is sample usage:
data = [
Row(zip_code=58542, dma='MIN'),
Row(zip_code=58701, dma='MIN'),
Row(zip_code=57632, dma='MIN'),
Row(zip_code=58734, dma='MIN')
]
firstDF = spark.createDataFrame(data)
data = [
Row(zip_code='534', name='MIN'),
Row(zip_code='353', name='MIN'),
Row(zip_code='134', name='MIN'),
Row(zip_code='245', name='MIN')
]
secondDF = spark.createDataFrame(data)
customUnion(firstDF,secondDF).show()
Here is the code for Python 3.0 using pyspark:
from pyspark.sql.functions import lit
def __order_df_and_add_missing_cols(df, columns_order_list, df_missing_fields):
    """ return ordered dataFrame by the columns order list with null in missing columns """
    if not df_missing_fields:  # no missing fields for the df
        return df.select(columns_order_list)
    else:
        columns = []
        for colName in columns_order_list:
            if colName not in df_missing_fields:
                columns.append(colName)
            else:
                columns.append(lit(None).alias(colName))
        return df.select(columns)

def __add_missing_columns(df, missing_column_names):
    """ Add missing columns as null in the end of the columns list """
    list_missing_columns = []
    for col in missing_column_names:
        list_missing_columns.append(lit(None).alias(col))
    return df.select(df.schema.names + list_missing_columns)

def __order_and_union_d_fs(left_df, right_df, left_list_miss_cols, right_list_miss_cols):
    """ return union of data frames with ordered columns by left_df. """
    left_df_all_cols = __add_missing_columns(left_df, left_list_miss_cols)
    right_df_all_cols = __order_df_and_add_missing_cols(right_df, left_df_all_cols.schema.names,
                                                        right_list_miss_cols)
    return left_df_all_cols.union(right_df_all_cols)

def union_d_fs(left_df, right_df):
    """ Union between two dataFrames, if there is a gap of column fields,
    it will append all missing columns as nulls """
    # Check for None input
    if left_df is None:
        raise ValueError('left_df parameter should not be None')
    if right_df is None:
        raise ValueError('right_df parameter should not be None')
    # For data frames with equal columns and order - regular union
    if left_df.schema.names == right_df.schema.names:
        return left_df.union(right_df)
    else:  # Different columns
        # Save dataFrame columns name list as set
        left_df_col_list = set(left_df.schema.names)
        right_df_col_list = set(right_df.schema.names)
        # Diff columns between left_df and right_df
        right_list_miss_cols = list(left_df_col_list - right_df_col_list)
        left_list_miss_cols = list(right_df_col_list - left_df_col_list)
        return __order_and_union_d_fs(left_df, right_df, left_list_miss_cols, right_list_miss_cols)
A very simple way to do this: select the columns in the same order from both dataframes and use unionAll.
from pyspark.sql.functions import lit

df1.select('code', 'date', 'A', 'B', 'C', lit(None).alias('D'), lit(None).alias('E')) \
   .unionAll(df2.select('code', 'date', lit(None).alias('A'), 'B', 'C', 'D', 'E'))
Here's a pyspark solution.
It assumes that if a field in df1 is missing from df2, then you add that missing field to df2 with null values. However, it also assumes that if the field exists in both dataframes but the type or nullability of the field is different, then the two dataframes conflict and cannot be combined. In that case I raise a TypeError.
from pyspark.sql.functions import lit

def harmonize_schemas_and_combine(df_left, df_right):
    left_types = {f.name: f.dataType for f in df_left.schema}
    right_types = {f.name: f.dataType for f in df_right.schema}
    left_fields = set((f.name, f.dataType, f.nullable) for f in df_left.schema)
    right_fields = set((f.name, f.dataType, f.nullable) for f in df_right.schema)

    # First go over left-unique fields
    for l_name, l_type, l_nullable in left_fields.difference(right_fields):
        if l_name in right_types:
            # The field exists on the right but differs in type or nullability
            r_type = right_types[l_name]
            if l_type != r_type:
                raise TypeError("Union failed. Type conflict on field %s. left type %s, right type %s" % (l_name, l_type, r_type))
            else:
                raise TypeError("Union failed. Nullability conflict on field %s. left nullable %s, right nullable %s" % (l_name, l_nullable, not(l_nullable)))
        df_right = df_right.withColumn(l_name, lit(None).cast(l_type))

    # Now go over right-unique fields
    for r_name, r_type, r_nullable in right_fields.difference(left_fields):
        if r_name in left_types:
            # The field exists on the left but differs in type or nullability
            l_type = left_types[r_name]
            if r_type != l_type:
                raise TypeError("Union failed. Type conflict on field %s. right type %s, left type %s" % (r_name, r_type, l_type))
            else:
                raise TypeError("Union failed. Nullability conflict on field %s. right nullable %s, left nullable %s" % (r_name, r_nullable, not(r_nullable)))
        df_left = df_left.withColumn(r_name, lit(None).cast(r_type))

    # Make sure columns are in the same order
    df_left = df_left.select(df_right.columns)
    return df_left.union(df_right)
I somehow find most of the Python answers here a bit too clunky in their writing if you're just going with the simple lit(None) workaround (which is also the only way I know). As an alternative, this might be useful:
from pyspark.sql.functions import lit

# df1 and df2 are assumed to be the given dataFrames from the question

# Get the lacking columns for each dataframe and set them to null in the respective dataFrame.
# First do so for df1...
for column in [column for column in df2.columns if column not in df1.columns]:
    df1 = df1.withColumn(column, lit(None))

# ... and then for df2
for column in [column for column in df1.columns if column not in df2.columns]:
    df2 = df2.withColumn(column, lit(None))
Afterwards just do the union() you wanted to do.
Caution: If your column-order differs between df1 and df2 use unionByName()!
result = df1.unionByName(df2)
Modified Alberto Bonsanto's version to preserve the original column order (the OP implied the order should match the original tables). Also, the match part caused an IntelliJ warning.
Here's my version:
def unionDifferentTables(df1: DataFrame, df2: DataFrame): DataFrame = {
  val cols1 = df1.columns.toSet
  val cols2 = df2.columns.toSet
  val total = cols1 ++ cols2 // union
  val order = df1.columns ++ df2.columns
  val sorted = total.toList.sortWith((a, b) => order.indexOf(a) < order.indexOf(b))

  def expr(myCols: Set[String], allCols: List[String]) = {
    allCols.map({
      case x if myCols.contains(x) => col(x)
      case y => lit(null).as(y)
    })
  }

  df1.select(expr(cols1, sorted): _*).unionAll(df2.select(expr(cols2, sorted): _*))
}
in pyspark:
df = df1.join(df2, ['each', 'shared', 'col'], how='full')
I had the same issue and using join instead of union solved my problem.
So, for example with Python, instead of this line of code:
result = left.union(right), which will fail to execute for a different number of columns,
you should use this one:
result = left.join(right, left.columns if (len(left.columns) < len(right.columns)) else right.columns, "outer")
Note that the second argument contains the common columns between the two DataFrames. If you don't use it, the result will have duplicate columns with one of them being null and the other not.
Hope it helps.
There is a much more concise way to handle this issue, with a moderate sacrifice of performance.
def unionWithDifferentSchema(a: DataFrame, b: DataFrame): DataFrame = {
  sparkSession.read.json(a.toJSON.union(b.toJSON).rdd)
}
This is the function which does the trick. Applying toJSON to each dataframe makes a JSON union. This preserves the ordering and the datatypes.
The only catch is that toJSON is relatively expensive (though not by much; you'll probably see a 10-15% slowdown). However, this keeps the code clean.
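A rough PySpark equivalent of the same trick might look like this (a sketch; reading back through JSON re-infers the schema, so column types may not be preserved exactly):
# df.toJSON() returns an RDD of JSON strings in PySpark, which spark.read.json accepts
merged = spark.read.json(df1.toJSON().union(df2.toJSON()))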
My version for Java:
private static Dataset<Row> unionDatasets(Dataset<Row> one, Dataset<Row> another) {
StructType firstSchema = one.schema();
List<String> anotherFields = Arrays.asList(another.schema().fieldNames());
another = balanceDataset(another, firstSchema, anotherFields);
StructType secondSchema = another.schema();
List<String> oneFields = Arrays.asList(one.schema().fieldNames());
one = balanceDataset(one, secondSchema, oneFields);
return another.unionByName(one);
}
private static Dataset<Row> balanceDataset(Dataset<Row> dataset, StructType schema, List<String> fields) {
for (StructField e : schema.fields()) {
if (!fields.contains(e.name())) {
dataset = dataset
.withColumn(e.name(),
lit(null));
dataset = dataset.withColumn(e.name(),
dataset.col(e.name()).cast(Optional.ofNullable(e.dataType()).orElse(StringType)));
}
}
return dataset;
}
Here's the version in Scala, also answered here, along with a Pyspark version:
(Spark - Merge / Union DataFrame with Different Schema (column names and sequence) to a DataFrame with Master common schema)
It takes a list of dataframes to be unioned. Same-named columns across all the dataframes should have the same datatype.
def unionPro(DFList: List[DataFrame], spark: org.apache.spark.sql.SparkSession): DataFrame = {

  /**
   * This Function Accepts DataFrame with same or Different Schema/Column Order.With some or none common columns
   * Creates a Unioned DataFrame
   */

  import spark.implicits._

  val MasterColList: Array[String] = DFList.map(_.columns).reduce((x, y) => (x.union(y))).distinct

  def unionExpr(myCols: Seq[String], allCols: Seq[String]): Seq[org.apache.spark.sql.Column] = {
    allCols.toList.map(x => x match {
      case x if myCols.contains(x) => col(x)
      case _ => lit(null).as(x)
    })
  }

  // Create EmptyDF , ignoring different Datatype in StructField and treating them same based on Name ignoring cases
  val masterSchema = StructType(DFList.map(_.schema.fields).reduce((x, y) => (x.union(y))).groupBy(_.name.toUpperCase).map(_._2.head).toArray)
  val masterEmptyDF = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], masterSchema).select(MasterColList.head, MasterColList.tail: _*)

  DFList.map(df => df.select(unionExpr(df.columns, MasterColList): _*)).foldLeft(masterEmptyDF)((x, y) => x.union(y))
}
Here is the sample test for it -
val aDF = Seq(("A", 1), ("B", 2)).toDF("Name", "ID")
val bDF = Seq(("C", 1, "D1"), ("D", 2, "D2")).toDF("Name", "Sal", "Deptt")
unionPro(List(aDF, bDF), spark).show
Which gives output as -
+----+----+----+-----+
|Name| ID| Sal|Deptt|
+----+----+----+-----+
| A| 1|null| null|
| B| 2|null| null|
| C|null| 1| D1|
| D|null| 2| D2|
+----+----+----+-----+
This function takes in two dataframes (df1 and df2) with different schemas and unions them.
First we need to bring them to the same schema by adding all (missing) columns from df1 to df2 and vice versa. To add a new empty column to a df we need to specify the datatype.
import pyspark.sql.functions as F
def union_different_schemas(df1, df2):
    # Get a list of all column names in both dfs
    columns_df1 = df1.columns
    columns_df2 = df2.columns
    # Get a list of datatypes of the columns
    data_types_df1 = [i.dataType for i in df1.schema.fields]
    data_types_df2 = [i.dataType for i in df2.schema.fields]
    # We go through all columns in df1 and if they are not in df2, we add
    # them (and specify the correct datatype too)
    for col, typ in zip(columns_df1, data_types_df1):
        if col not in df2.columns:
            df2 = df2.withColumn(col, F.lit(None).cast(typ))
    # Now df2 has all missing columns from df1, let's do the same for df1
    for col, typ in zip(columns_df2, data_types_df2):
        if col not in df1.columns:
            df1 = df1.withColumn(col, F.lit(None).cast(typ))
    # Now df1 and df2 have the same columns, not necessarily in the same
    # order, therefore we use unionByName
    combined_df = df1.unionByName(df2)
    return combined_df
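A minimal usage sketch (the two tiny frames here are made up purely for illustration):
df_a = spark.createDataFrame([(1, "x")], ["id", "a"])
df_b = spark.createDataFrame([(2, 3.5)], ["id", "b"])

# expected columns: id, a, b -- with nulls where a frame lacked the column
union_different_schemas(df_a, df_b).show()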
PYSPARK
The Scala version from Alberto works great. However, if you want to make a for-loop or some dynamic assignment of variables, you can face some problems. The solution comes with Pyspark - clean code:
from pyspark.sql.functions import *
#defining dataframes
df1 = spark.createDataFrame(
[
(1, 'foo','ok'),
(2, 'pro','ok')
],
['id', 'txt','check']
)
df2 = spark.createDataFrame(
[
(3, 'yep',13,'mo'),
(4, 'bro',11,'re')
],
['id', 'txt','value','more']
)
#retrieving columns
cols1 = df1.columns
cols2 = df2.columns
#getting columns from df1 and df2
total = list(set(cols2) | set(cols1))
#defining function for adding nulls (None in case of pyspark)
def addnulls(yourDF):
    for x in total:
        if not x in yourDF.columns:
            yourDF = yourDF.withColumn(x, lit(None))
    return yourDF
df1 = addnulls(df1)
df2 = addnulls(df2)
#additional sorting for correct unionAll (it concatenates DFs by column number)
df1.select(sorted(df1.columns)).unionAll(df2.select(sorted(df2.columns))).show()
+-----+---+----+---+-----+
|check| id|more|txt|value|
+-----+---+----+---+-----+
| ok| 1|null|foo| null|
| ok| 2|null|pro| null|
| null| 3| mo|yep| 13|
| null| 4| re|bro| 11|
+-----+---+----+---+-----+
from functools import reduce
from pyspark.sql import DataFrame
import pyspark.sql.functions as F
def unionAll(*dfs, fill_by=None):
    clmns = {clm.name.lower(): (clm.dataType, clm.name) for df in dfs for clm in df.schema.fields}

    dfs = list(dfs)
    for i, df in enumerate(dfs):
        df_clmns = [clm.lower() for clm in df.columns]
        for clm, (dataType, name) in clmns.items():
            if clm not in df_clmns:
                # Add the missing column
                dfs[i] = dfs[i].withColumn(name, F.lit(fill_by).cast(dataType))
    return reduce(DataFrame.unionByName, dfs)
unionAll(df1, df2).show()
Case-insensitive column matching
Returns the actual column case
Supports the existing datatypes
The default fill value can be customized
Pass multiple dataframes at once (e.g. unionAll(df1, df2, df3, ..., df10))
here's another one:
def unite(df1: DataFrame, df2: DataFrame): DataFrame = {
  val cols1 = df1.columns.toSet
  val cols2 = df2.columns.toSet
  val total = (cols1 ++ cols2).toSeq.sorted
  val expr1 = total.map(c => {
    if (cols1.contains(c)) c else "NULL as " + c
  })
  val expr2 = total.map(c => {
    if (cols2.contains(c)) c else "NULL as " + c
  })
  df1.selectExpr(expr1: _*).union(
    df2.selectExpr(expr2: _*)
  )
}
Union and outer union for Pyspark DataFrame concatenation. This works for multiple data frames with different columns.
from functools import reduce
import pyspark as ps
import pyspark.sql.functions as sqlfunc

def union_all(*dfs):
    return reduce(ps.sql.DataFrame.unionAll, dfs)

def outer_union_all(*dfs):
    all_cols = set([])
    for df in dfs:
        all_cols |= set(df.columns)
    all_cols = list(all_cols)
    print(all_cols)

    def expr(cols, all_cols):
        def append_cols(col):
            if col in cols:
                return col
            else:
                return sqlfunc.lit(None).alias(col)
        cols_ = map(append_cols, all_cols)
        return list(cols_)

    union_df = union_all(*[df.select(expr(df.columns, all_cols)) for df in dfs])
    return union_df
One more generic method to union a list of DataFrames.
def unionFrames(dfs: Seq[DataFrame]): DataFrame = {
  dfs match {
    case Nil => session.emptyDataFrame // or throw an exception?
    case x :: Nil => x
    case _ =>
      // Preserving column order from left to right DF's column order
      val allColumns = dfs.foldLeft(collection.mutable.ArrayBuffer.empty[String])((a, b) => a ++ b.columns).distinct
      val appendMissingColumns = (df: DataFrame) => {
        val columns = df.columns.toSet
        df.select(allColumns.map(c => if (columns.contains(c)) col(c) else lit(null).as(c)): _*)
      }
      dfs.tail.foldLeft(appendMissingColumns(dfs.head))((a, b) => a.union(appendMissingColumns(b)))
  }
}
This is my pyspark version:
from functools import reduce
from pyspark.sql.functions import lit
def concat(dfs):
    # when the dataframes to combine do not have the same order of columns
    # https://datascience.stackexchange.com/a/27231/15325
    return reduce(lambda df1, df2: df1.union(df2.select(df1.columns)), dfs)

def union_all(dfs):
    columns = reduce(lambda x, y: set(x).union(set(y)), [i.columns for i in dfs])

    for i in range(len(dfs)):
        d = dfs[i]
        for c in columns:
            if c not in d.columns:
                d = d.withColumn(c, lit(None))
        dfs[i] = d

    return concat(dfs)
Alternatively, you could use a full join.
list_of_files = ['test1.parquet', 'test2.parquet']

def merged_frames():
    if list_of_files:
        frames = [spark.read.parquet(path) for path in list_of_files]
        if frames:
            # start from the first frame and full-join the rest on the shared key
            result_df = frames[0]
            for frame in frames[1:]:
                result_df = result_df.join(frame, 'primary_key', how='full')
            display(result_df)
If you are loading from files, I guess you could just use the read function with a list of files.
# file_paths is list of files with different schema
df = spark.read.option("mergeSchema", "true").json(file_paths)
The resulting dataframe will have merged columns.
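Note that mergeSchema is primarily documented for Parquet (and ORC) sources, so a hedged Parquet variant of the same idea would be:
# file_paths is a list of parquet files with different schemas
df = spark.read.option("mergeSchema", "true").parquet(*file_paths)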

Create new column with function in Spark Dataframe based on a string search of another column

I have a spark dataframe with a column that contains string values (e.g. 'xyztext\afadfa'). I wish to create a new column where the values are '0' or '1' depending on whether the original column contains certain text (e.g. 'text').
Example of result:
## +---+---+------+---------+
## | x1| x2| x3 | xnew |
## +---+---+------+---------+
## | 1| a| xtext| 1 |
## | 3| B| abcht| 0 |
EDIT: I have tried this previously (and have now added .cast('int'), thanks to SGVD), but receive a 'column is not callable' error when I insert the column name:
df1 = df.withColumn('Target', df.column.contains('text').cast('int'))
The best I have achieved so far is creating a column with 0's in it by:
from pyspark.sql.functions import lit
df1 = df.withColumn('Target', lit(0))
I have also tried an if then else statement to create the vector but am having no luck:
def targ(string):
    if df.column.contains('text'): return '1'
    else: return '0'
Spark columns have a cast method to cast between types, and you can cast a boolean type to an integer, where True is cast to 1 and False to 0. In Scala, you could use Column#contains to check for a substring. PySpark does not have this method, but you can use the instr function instead:
import pyspark.sql.functions as F
df1 = df.withColumn('Target', (F.instr(df.column, 'text') > 0).cast('int'))
You can also write this function as a SQL expression:
df1 = df.withColumn('Target', F.expr("INSTR(column, 'text') > 0").cast('int'))
Or, completely in SQL without the cast:
df1 = df.withColumn('Target', F.expr("IF(INSTR(column, 'text') > 0, 1, 0)"))
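As a small, hedged update: more recent PySpark versions expose Column.contains directly, so (assuming the column is named x3 as in the example output above) the original attempt can be written as:
import pyspark.sql.functions as F

# Column.contains returns a boolean column; cast it to int for the 0/1 flag
df1 = df.withColumn('Target', F.col('x3').contains('text').cast('int'))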
