How to change dataframe column names in PySpark?
I come from a pandas background and am used to reading data from CSV files into a dataframe and then simply changing the column names to something useful with:
df.columns = new_column_name_list
However, the same doesn't work in PySpark dataframes created using sqlContext.
The only solution I could figure out to do this easily is the following:
df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', inferschema='true', delimiter='\t').load("data.txt")
oldSchema = df.schema
for i, k in enumerate(oldSchema.fields):
    k.name = new_column_name_list[i]
df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', delimiter='\t').load("data.txt", schema=oldSchema)
This essentially loads the data twice: first to infer the schema, then again after renaming the fields of that schema, with the updated schema applied to the second load.
Is there a better and more efficient way to do this like we do in pandas?
My Spark version is 1.5.0
There are many ways to do that:
Option 1. Using selectExpr.
data = sqlContext.createDataFrame([("Alberto", 2), ("Dakota", 2)],
["Name", "askdaosdka"])
data.show()
data.printSchema()
# Output
#+-------+----------+
#| Name|askdaosdka|
#+-------+----------+
#|Alberto| 2|
#| Dakota| 2|
#+-------+----------+
#root
# |-- Name: string (nullable = true)
# |-- askdaosdka: long (nullable = true)
df = data.selectExpr("Name as name", "askdaosdka as age")
df.show()
df.printSchema()
# Output
#+-------+---+
#| name|age|
#+-------+---+
#|Alberto| 2|
#| Dakota| 2|
#+-------+---+
#root
# |-- name: string (nullable = true)
# |-- age: long (nullable = true)
Option 2. Using withColumnRenamed; note that this method also lets you "overwrite" an existing column. The original code used Python 2's xrange; range is used below so it also runs on Python 3.
from functools import reduce
oldColumns = data.schema.names
newColumns = ["name", "age"]
df = reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx], newColumns[idx]), range(len(oldColumns)), data)
df.printSchema()
df.show()
Option 3. Using alias; in Scala you can also use as.
from pyspark.sql.functions import col
data = data.select(col("Name").alias("name"), col("askdaosdka").alias("age"))
data.show()
# Output
#+-------+---+
#| name|age|
#+-------+---+
#|Alberto| 2|
#| Dakota| 2|
#+-------+---+
Option 4. Using sqlContext.sql, which lets you use SQL queries on DataFrames registered as tables.
sqlContext.registerDataFrameAsTable(data, "myTable")
df2 = sqlContext.sql("SELECT Name AS name, askdaosdka as age from myTable")
df2.show()
# Output
#+-------+---+
#| name|age|
#+-------+---+
#|Alberto| 2|
#| Dakota| 2|
#+-------+---+
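In Spark 2.0+ the same idea works through a SparkSession instead of sqlContext; a minimal equivalent sketch:
data.createOrReplaceTempView("myTable")
df2 = spark.sql("SELECT Name AS name, askdaosdka AS age FROM myTable")
df2.show()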
df = df.withColumnRenamed("colName", "newColName")\
.withColumnRenamed("colName2", "newColName2")
The advantage of this approach: with a long list of columns, you can change just a few names without touching the rest, which is very convenient. It is especially useful when joining tables with duplicate column names; a hedged sketch of that join case follows.
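A minimal sketch of the join case (the dataframes and names here are made up for illustration):
left = spark.createDataFrame([(1, 10)], ["id", "count"])
right = spark.createDataFrame([(1, 20)], ["id", "count"])
# Rename the ambiguous column on each side before joining,
# so the joined result has unambiguous names
joined = (left.withColumnRenamed("count", "left_count")
          .join(right.withColumnRenamed("count", "right_count"), on="id"))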
If you want to change all column names, try df.toDF(*cols).
In case you would like to apply a simple transformation to all column names, this code does the trick (here, replacing all spaces with underscores):
new_column_name_list= list(map(lambda x: x.replace(" ", "_"), df.columns))
df = df.toDF(*new_column_name_list)
Thanks to @user8117731 for the toDF trick.
df.withColumnRenamed('age', 'age2')
If you want to rename a single column and keep the rest as it is:
from pyspark.sql.functions import col
new_df = old_df.select(*[col(s).alias(new_name) if s == column_to_change else s for s in old_df.columns])
This is the approach that I used:
create pyspark session:
import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('changeColNames').getOrCreate()
create dataframe:
df = spark.createDataFrame(data = [('Bob', 5.62,'juice'), ('Sue',0.85,'milk')], schema = ["Name", "Amount","Item"])
view df with column names:
df.show()
+----+------+-----+
|Name|Amount| Item|
+----+------+-----+
| Bob| 5.62|juice|
| Sue| 0.85| milk|
+----+------+-----+
create a list with new column names:
newcolnames = ['NameNew','AmountNew','ItemNew']
change the column names of the df:
for c, n in zip(df.columns, newcolnames):
    df = df.withColumnRenamed(c, n)
view df with new column names:
df.show()
+-------+---------+-------+
|NameNew|AmountNew|ItemNew|
+-------+---------+-------+
| Bob| 5.62| juice|
| Sue| 0.85| milk|
+-------+---------+-------+
I made an easy-to-use function to rename multiple columns of a pyspark dataframe,
in case anyone wants to use it:
def renameCols(df, old_columns, new_columns):
    for old_col, new_col in zip(old_columns, new_columns):
        df = df.withColumnRenamed(old_col, new_col)
    return df
old_columns = ['old_name1','old_name2']
new_columns = ['new_name1', 'new_name2']
df_renamed = renameCols(df, old_columns, new_columns)
Be careful, both lists must be the same length.
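Since zip stops at the shorter list, a mismatch would silently rename only some of the columns; a sketch with an explicit guard (my addition, not part of the original function):
def renameCols(df, old_columns, new_columns):
    # fail fast instead of silently ignoring extra names
    assert len(old_columns) == len(new_columns), "old_columns and new_columns must be the same length"
    for old_col, new_col in zip(old_columns, new_columns):
        df = df.withColumnRenamed(old_col, new_col)
    return df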
Another way to rename just one column (using import pyspark.sql.functions as F):
df = df.select( '*', F.col('count').alias('new_count') ).drop('count')
Method 1:
df = df.withColumnRenamed("old_column_name", "new_column_name")
Method 2:
If you want to do some computation and store the result under a new name (assuming pyspark.sql.functions is imported as F):
df = df.withColumn("new_column_name", F.when(F.col("old_column_name") > 1, F.lit(1)).otherwise(F.col("old_column_name")))
df = df.drop("old_column_name")
You can use the following function to rename all the columns of your dataframe.
def df_col_rename(X, to_rename, replace_with):
    """
    :param X: spark dataframe
    :param to_rename: list of original names
    :param replace_with: list of new names
    :return: dataframe with updated names
    """
    import pyspark.sql.functions as F
    mapping = dict(zip(to_rename, replace_with))
    X = X.select([F.col(c).alias(mapping.get(c, c)) for c in to_rename])
    return X
In case you need to update only a few columns' names, you can use the same column name in the replace_with list (or see the keep-all variant after the examples below).
To rename all columns
df_col_rename(X,['a', 'b', 'c'], ['x', 'y', 'z'])
To rename only some columns
df_col_rename(X,['a', 'b', 'c'], ['a', 'y', 'z'])
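Note that df_col_rename returns only the columns listed in to_rename, since the select keeps nothing else. If you want to rename a few columns while keeping all of them, a small variant (my tweak, same assumptions as above) iterates over X.columns instead:
def df_col_rename_keep_all(X, to_rename, replace_with):
    import pyspark.sql.functions as F
    mapping = dict(zip(to_rename, replace_with))
    # mapping.get(c, c) leaves any unmapped column untouched
    return X.select([F.col(c).alias(mapping.get(c, c)) for c in X.columns])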
We can use col.alias to rename a column:
from pyspark.sql.functions import col
df.select(['vin',col('timeStamp').alias('Date')]).show()
We can use various approaches to rename a column.
First, let's create a simple DataFrame.
df = spark.createDataFrame([("x", 1), ("y", 2)],
["col_1", "col_2"])
Now let's try to rename col_1 to col_3. Below are a few approaches to do the same.
# Approach - 1 : using withColumnRenamed function.
df.withColumnRenamed("col_1", "col_3").show()
# Approach - 2 : using alias function.
df.select(df["col_1"].alias("col3"), "col_2").show()
# Approach - 3 : using selectExpr function.
df.selectExpr("col_1 as col_3", "col_2").show()
# Rename all columns
# Approach - 4 : using toDF function. Here you need to pass the list of all columns present in DataFrame.
df.toDF("col_3", "col_2").show()
Here is the output.
+-----+-----+
|col_3|col_2|
+-----+-----+
| x| 1|
| y| 2|
+-----+-----+
I hope this helps.
One way you can use alias to change a column name:
col('my_column').alias('new_name')
Another way you can use alias (possibly not mentioned above):
df.my_column.alias('new_name')
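On its own an alias doesn't modify the dataframe; it takes effect inside a select. A minimal usage sketch, assuming a column named my_column:
new_df = df.select(df.my_column.alias('new_name'))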
You can put this in a for loop, using zip to pair each old column name with a new one.
new_name = ["id", "sepal_length_cm", "sepal_width_cm", "petal_length_cm", "petal_width_cm", "species"]
new_df = df
for old, new in zip(df.columns, new_name):
    new_df = new_df.withColumnRenamed(old, new)
I like to use a dict to rename the df.
rename = {'old1': 'new1', 'old2': 'new2'}
for col in df.schema.names:
    # rename.get(col, col) keeps columns that aren't in the dict unchanged
    df = df.withColumnRenamed(col, rename.get(col, col))
For a single column rename, you can still use toDF(). For example,
df1.selectExpr("SALARY*2").toDF("REVISED_SALARY").show()
There are multiple approaches you can use (these assume from pyspark.sql.functions import col):
df1 = df.withColumn("new_column", col("old_column")).drop("old_column")
df1 = df.withColumn("new_column", col("old_column"))
df1 = df.select(col("old_column").alias("new_column"))
from pyspark.sql.types import StructType,StructField, StringType, IntegerType
CreatingDataFrame = [("James","Sales","NY",90000,34,10000),
("Michael","Sales","NY",86000,56,20000),
("Robert","Sales","CA",81000,30,23000),
("Maria","Finance","CA",90000,24,23000),
("Raman","Finance","CA",99000,40,24000),
("Scott","Finance","NY",83000,36,19000),
("Jen","Finance","NY",79000,53,15000),
("Jeff","Marketing","CA",80000,25,18000),
("Kumar","Marketing","NY",91000,50,21000)
]
schema = StructType([
    StructField("employee_name", StringType(), True),
    StructField("department", StringType(), True),
    StructField("state", StringType(), True),
    StructField("salary", IntegerType(), True),
    StructField("age", IntegerType(), True),  # the data holds integer ages; StringType here would fail schema verification
    StructField("bonus", IntegerType(), True)
])
OurData = spark.createDataFrame(data=CreatingDataFrame,schema=schema)
OurData.show()
GrouppedBonusData=OurData.groupBy("department").sum("bonus")
GrouppedBonusData.show()
GrouppedBonusData.printSchema()
from pyspark.sql.functions import col
BonusColumnRenamed = GrouppedBonusData.select(col("department").alias("department"), col("sum(bonus)").alias("Total_Bonus"))
BonusColumnRenamed.show()
GrouppedBonusData.groupBy("department").count().show()
GrouppedSalaryData=OurData.groupBy("department").sum("salary")
GrouppedSalaryData.show()
SalaryColumnRenamed = GrouppedSalaryData.select(col("department").alias("Department"), col("sum(salary)").alias("Total_Salary"))
SalaryColumnRenamed.show()
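As an aside, the aggregated columns here could also be renamed without re-selecting every column; a minimal alternative sketch using withColumnRenamed:
BonusColumnRenamed = GrouppedBonusData.withColumnRenamed("sum(bonus)", "Total_Bonus")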
Try the following method, which allows you to rename columns consistently across multiple files.
Reference: https://www.linkedin.com/pulse/pyspark-methods-rename-columns-kyle-gibson/
from pyspark.sql.functions import col
df_initial = spark.read.format('com.databricks.spark.csv').load('data.txt')  # illustrative path
rename_dict = {
'Alberto':'Name',
'Dakota':'askdaosdka'
}
df_renamed = df_initial \
.select([col(c).alias(rename_dict.get(c, c)) for c in df_initial.columns])
def renameColumns(df):
    rename_dict = {
        'FName': 'FirstName',
        'LName': 'LastName',
        'DOB': 'BirthDate'
    }
    return df.select([col(c).alias(rename_dict.get(c, c)) for c in df.columns])

df_renamed = spark.read.load('/mnt/datalake/bronze/testData') \
    .transform(renameColumns)
The simplest solution is using withColumnRenamed:
renamed_df = df.withColumnRenamed('name_1', 'New_name_1').withColumnRenamed('name_2', 'New_name_2')
renamed_df.show()
And if you would like to do this like we do with Pandas, you can use toDF:
Create an ordered list of new column names and pass it to toDF:
df_list = ["newName_1", "newName_2", "newName_3", "newName_4"]
renamed_df = df.toDF(*df_list)
renamed_df.show()
This is an easy way to rename multiple columns with a loop:
cols_to_rename = ["col1","col2","col3"]
for col in cols_to_rename:
    df = df.withColumnRenamed(col, "new_{}".format(col))
List comprehension + f-string:
df = df.toDF(*[f'n_{c}' for c in df.columns])
Simple list comprehension:
df = df.toDF(*[c.lower() for c in df.columns])
The closest statement to df.columns = new_column_name_list is:
import pyspark.sql.functions as F
df = df.select(*[F.col(name_old).alias(name_new)
                 for (name_old, name_new)
                 in zip(df.columns, new_column_name_list)])
This doesn't require any rarely-used functions, and emphasizes some patterns that are very helpful in Spark. You could also break up the steps if you find this one-liner to be doing too many things:
import pyspark.sql.functions as F
column_mapping = [F.col(name_old).alias(name_new)
for (name_old, name_new)
in zip(df.columns, new_column_name_list)]
df = df.select(*column_mapping)
Related
Give two different data frame inputs to PySpark UDF and save output in new data frame
I am trying to use a Python function with PySpark data frames. I need to give two data frames as input and would like to store the results in another data frame. The Python function I want to use:

@udf(StringType())
def fuzz_ratio(df1, df2):
    return np.vectorize(fuzz.token_sort_ratio(df1, df2))

This is how I am trying to use the above function:

result_df.withColumn("VAL", fuzz_ratio(col(df1.VAL), col(df2.VAL)))

df1 and df2 are the inputs. The VAL columns of both data frames contain the values I need to pass to fuzz_ratio, and the output should be saved in the VAL column of result_df. VAL is the column name in all the dataframes; in df1 and df2, VAL is of string type.
When you move both columns to the same dataframe, something like the following pandas_udf could be used. pandas_udf is vectorized for performance; it's not the same as a regular Spark udf.

Input:

from pyspark.sql import functions as F
import pandas as pd
from fuzzywuzzy import fuzz

df = spark.createDataFrame([('danial khilji', 'danial'), ('a', 'as')], ['col1', 'col2'])
df.show()
# +-------------+------+
# |         col1|  col2|
# +-------------+------+
# |danial khilji|danial|
# |            a|    as|
# +-------------+------+

Script:

@F.pandas_udf('long')
def fuzz_ratio(c1: pd.Series, c2: pd.Series) -> pd.Series:
    return pd.concat([c1, c2], axis=1).apply(lambda x: fuzz.token_sort_ratio(x[0], x[1]), axis=1)

df.withColumn('fuzz_ratio', fuzz_ratio('col1', 'col2')).show()
# +-------------+------+----------+
# |         col1|  col2|fuzz_ratio|
# +-------------+------+----------+
# |danial khilji|danial|        63|
# |            a|    as|        67|
# +-------------+------+----------+
Get last / delimited value from Dataframe column in PySpark
I am trying to get the last string after '/'. The column can look like this: "lala/mae.da/rg1/zzzzz" (not necessarily only 3 /), and I'd like to return: zzzzz

In SQL and Python it's very easy, but I would like to know if there is a way to do it in PySpark.

Solving it in Python:

original_string = "lala/mae.da/rg1/zzzzz"
last_char_index = original_string.rfind("/")
new_string = original_string[last_char_index+1:]

or directly:

new_string = original_string.rsplit('/', 1)[1]

And in SQL:

RIGHT(MyColumn, CHARINDEX('/', REVERSE(MyColumn))-1)

For PySpark I was thinking something like this:

df = df.select(col("MyColumn").rsplit('/', 1)[1])

but I get the following error: TypeError: 'Column' object is not callable, and I am not even sure Spark allows me to do rsplit at all. Do you have any suggestion on how I can solve this?
Adding another solution even though @Pav3k's answer is great. element_at gets an item at a specific position out of a list:

from pyspark.sql import functions as F
df = df.withColumn('my_col_split', F.split(df['MyColumn'], '/'))\
       .select('MyColumn', F.element_at(F.col('my_col_split'), -1).alias('rsplit'))

>>> df.show(truncate=False)
+---------------------+------+
|MyColumn             |rsplit|
+---------------------+------+
|lala/mae.da/rg1/zzzzz|zzzzz |
|fefe                 |fefe  |
|fe/fe/frs/fs/fe32/4  |4     |
+---------------------+------+

Pav3k's DF used.
import pandas as pd
from pyspark.sql import functions as F

df = pd.DataFrame({"MyColumn": ["lala/mae.da/rg1/zzzzz", "fefe", "fe/fe/frs/fs/fe32/4"]})
df = spark.createDataFrame(df)
df.show(truncate=False)
# output
+---------------------+
|MyColumn             |
+---------------------+
|lala/mae.da/rg1/zzzzz|
|fefe                 |
|fe/fe/frs/fs/fe32/4  |
+---------------------+

(
    df
    .withColumn("NewCol", F.split("MyColumn", "/"))
    .withColumn("NewCol", F.col("NewCol")[F.size("NewCol") - 1])
    .show()
)
# output
+--------------------+------+
|            MyColumn|NewCol|
+--------------------+------+
|lala/mae.da/rg1/z...| zzzzz|
|                fefe|  fefe|
| fe/fe/frs/fs/fe32/4|     4|
+--------------------+------+
Since Spark 2.4, you can use the split built-in function to split your string, then the element_at built-in function to get the last element of the resulting array, as follows:

from pyspark.sql import functions as F
df = df.select(F.element_at(F.split(F.col("MyColumn"), '/'), -1))
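The same two functions are also available as SQL expressions if you prefer selectExpr; a minimal sketch (the last_part alias is mine):
df = df.selectExpr("element_at(split(MyColumn, '/'), -1) AS last_part")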
Lead, Lag and Window Function with concat function
I have to transform data by basically merging lines until |#| is found in the data. I have transformed it using the lead/lag functions but am unsure how to proceed:

from pyspark.sql import functions as F
from pyspark.sql import Window
from pyspark.sql.functions import *

df = spark.read.text('text.dat')

# Adding an index column so each row gets its row number; Spark distributes
# the data, so we need this to maintain the order of the rows
df_1 = df.rdd.map(lambda r: r).zipWithIndex().toDF(['value', 'index'])
df_1.createOrReplaceTempView("linenumber")

# zipWithIndex creates an array; turn it back into a string
df_2 = spark.sql("select value.value as value, index from linenumber")
df_2.createOrReplaceTempView("linenumber2")

# Splitting and extracting the location value from the header and assigning null
df_new = spark.sql("select value, case when value like '%|##|' then value else null end as orgval, case when value like '%|#|' then 1 else 0 end as valrow, index from linenumber2")
w = Window().partitionBy().orderBy(col("index"))
df_new = df_new.select("*", lag("valrow").over(w).alias("validrows"))
df_new.createOrReplaceTempView("linenumber3")
spark.sql("select * from linenumber3 order by index").show(100)

Please help.
Here is my code and explanation:

from pyspark.sql import functions as f, Row
from pyspark.sql.window import Window

df = spark.createDataFrame([
    Row(Value='A', LineNumber=6),
    Row(Value='B', LineNumber=7),
    Row(Value='C', LineNumber=8),
    Row(Value='D|#|', LineNumber=9),
    Row(Value='A|#|', LineNumber=10),
    Row(Value='E', LineNumber=11),
    Row(Value='F', LineNumber=12),
    Row(Value='G|#|', LineNumber=13),
    Row(Value='I', LineNumber=23),
    Row(Value='J', LineNumber=24),
    Row(Value='K', LineNumber=25),
    Row(Value='L', LineNumber=25)
])

df = df.withColumn('filename', f.input_file_name())
df = df.repartition('filename')
w = Window.partitionBy('filename').orderBy('index')

# Creating an id to enable window functions
df = df.withColumn('index', f.monotonically_increasing_id())

# Identifying if the previous row has the |#| delimiter
df = df.withColumn('delimiter', f.lag('Value', default=False).over(w).contains('|#|'))

# Creating a column to group all values that must be concatenated
df = df.withColumn('group', f.sum(f.col('delimiter').cast('int')).over(w))

# Grouping them, removing |#|, collecting all values and concatenating them
df = (df
      .groupBy('group')
      .agg(f.concat_ws(',', f.collect_list(f.regexp_replace('Value', '\|#\|', ''))).alias('ConcalValue'),
           f.min('LineNumber').alias('LineNumber')))

# Selecting only desired columns
(df
 .select(f.col('ConcalValue').alias('Concal Value'), f.col('LineNumber').alias('Initial Line Number'))
 .sort('LineNumber')
 .show(truncate=False))

Output:
+------------+-------------------+
|Concal Value|Initial Line Number|
+------------+-------------------+
|     A,B,C,D|                  6|
|           A|                 10|
|       E,F,G|                 11|
|     I,J,K,L|                 23|
+------------+-------------------+
How to perform union on two DataFrames with different amounts of columns in Spark?
I have 2 DataFrames and I need a union of them. The unionAll function doesn't work because the number and the names of the columns are different. How can I do this?
In Scala you just have to append all missing columns as nulls:

import org.apache.spark.sql.functions._

// let df1 and df2 be the DataFrames to merge
val df1 = sc.parallelize(List(
  (50, 2),
  (34, 4)
)).toDF("age", "children")

val df2 = sc.parallelize(List(
  (26, true, 60000.00),
  (32, false, 35000.00)
)).toDF("age", "education", "income")

val cols1 = df1.columns.toSet
val cols2 = df2.columns.toSet
val total = cols1 ++ cols2 // union

def expr(myCols: Set[String], allCols: Set[String]) = {
  allCols.toList.map(x => x match {
    case x if myCols.contains(x) => col(x)
    case _ => lit(null).as(x)
  })
}

df1.select(expr(cols1, total):_*).unionAll(df2.select(expr(cols2, total):_*)).show()

+---+--------+---------+-------+
|age|children|education| income|
+---+--------+---------+-------+
| 50|       2|     null|   null|
| 34|       4|     null|   null|
| 26|    null|     true|60000.0|
| 32|    null|    false|35000.0|
+---+--------+---------+-------+

Update: both temporary DataFrames will have the same order of columns, because we map through total in both cases.

df1.select(expr(cols1, total):_*).show()
df2.select(expr(cols2, total):_*).show()

+---+--------+---------+------+
|age|children|education|income|
+---+--------+---------+------+
| 50|       2|     null|  null|
| 34|       4|     null|  null|
+---+--------+---------+------+

+---+--------+---------+-------+
|age|children|education| income|
+---+--------+---------+-------+
| 26|    null|     true|60000.0|
| 32|    null|    false|35000.0|
+---+--------+---------+-------+
Spark 3.1+:

df = df1.unionByName(df2, allowMissingColumns=True)

Test results:

from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()

data1 = [
    (1, '2016-08-29', 1, 2, 3),
    (2, '2016-08-29', 1, 2, 3),
    (3, '2016-08-29', 1, 2, 3)]
df1 = spark.createDataFrame(data1, ['code', 'date', 'A', 'B', 'C'])
data2 = [
    (5, '2016-08-29', 1, 2, 3, 4),
    (6, '2016-08-29', 1, 2, 3, 4),
    (7, '2016-08-29', 1, 2, 3, 4)]
df2 = spark.createDataFrame(data2, ['code', 'date', 'B', 'C', 'D', 'E'])

df = df1.unionByName(df2, allowMissingColumns=True)
df.show()
# +----+----------+----+---+---+----+----+
# |code|      date|   A|  B|  C|   D|   E|
# +----+----------+----+---+---+----+----+
# |   1|2016-08-29|   1|  2|  3|null|null|
# |   2|2016-08-29|   1|  2|  3|null|null|
# |   3|2016-08-29|   1|  2|  3|null|null|
# |   5|2016-08-29|null|  1|  2|   3|   4|
# |   6|2016-08-29|null|  1|  2|   3|   4|
# |   7|2016-08-29|null|  1|  2|   3|   4|
# +----+----------+----+---+---+----+----+

Spark 2.3+:

diff1 = [c for c in df2.columns if c not in df1.columns]
diff2 = [c for c in df1.columns if c not in df2.columns]
df = df1.select('*', *[F.lit(None).alias(c) for c in diff1]) \
    .unionByName(df2.select('*', *[F.lit(None).alias(c) for c in diff2]))

Test results (same data as above):

from pyspark.sql import SparkSession, functions as F
spark = SparkSession.builder.getOrCreate()

data1 = [
    (1, '2016-08-29', 1, 2, 3),
    (2, '2016-08-29', 1, 2, 3),
    (3, '2016-08-29', 1, 2, 3)]
df1 = spark.createDataFrame(data1, ['code', 'date', 'A', 'B', 'C'])
data2 = [
    (5, '2016-08-29', 1, 2, 3, 4),
    (6, '2016-08-29', 1, 2, 3, 4),
    (7, '2016-08-29', 1, 2, 3, 4)]
df2 = spark.createDataFrame(data2, ['code', 'date', 'B', 'C', 'D', 'E'])

diff1 = [c for c in df2.columns if c not in df1.columns]
diff2 = [c for c in df1.columns if c not in df2.columns]
df = df1.select('*', *[F.lit(None).alias(c) for c in diff1]) \
    .unionByName(df2.select('*', *[F.lit(None).alias(c) for c in diff2]))
df.show()
# (same output as above)
Here is my Python version:

from pyspark.sql import SparkSession, HiveContext
from pyspark.sql.functions import lit
from pyspark.sql import Row

def customUnion(df1, df2):
    cols1 = df1.columns
    cols2 = df2.columns
    total_cols = sorted(cols1 + list(set(cols2) - set(cols1)))
    def expr(mycols, allcols):
        def processCols(colname):
            if colname in mycols:
                return colname
            else:
                return lit(None).alias(colname)
        cols = map(processCols, allcols)
        return list(cols)
    appended = df1.select(expr(cols1, total_cols)).union(df2.select(expr(cols2, total_cols)))
    return appended

Here is sample usage:

data = [
    Row(zip_code=58542, dma='MIN'),
    Row(zip_code=58701, dma='MIN'),
    Row(zip_code=57632, dma='MIN'),
    Row(zip_code=58734, dma='MIN')
]
firstDF = spark.createDataFrame(data)

data = [
    Row(zip_code='534', name='MIN'),
    Row(zip_code='353', name='MIN'),
    Row(zip_code='134', name='MIN'),
    Row(zip_code='245', name='MIN')
]
secondDF = spark.createDataFrame(data)

customUnion(firstDF, secondDF).show()
Here is the code for Python 3.0 using pyspark:

from pyspark.sql.functions import lit


def __order_df_and_add_missing_cols(df, columns_order_list, df_missing_fields):
    """ return ordered dataFrame by the columns order list with null in missing columns """
    if not df_missing_fields:  # no missing fields for the df
        return df.select(columns_order_list)
    else:
        columns = []
        for colName in columns_order_list:
            if colName not in df_missing_fields:
                columns.append(colName)
            else:
                columns.append(lit(None).alias(colName))
        return df.select(columns)


def __add_missing_columns(df, missing_column_names):
    """ Add missing columns as null at the end of the columns list """
    list_missing_columns = []
    for col in missing_column_names:
        list_missing_columns.append(lit(None).alias(col))
    return df.select(df.schema.names + list_missing_columns)


def __order_and_union_d_fs(left_df, right_df, left_list_miss_cols, right_list_miss_cols):
    """ return union of data frames with ordered columns by left_df """
    left_df_all_cols = __add_missing_columns(left_df, left_list_miss_cols)
    right_df_all_cols = __order_df_and_add_missing_cols(right_df, left_df_all_cols.schema.names,
                                                        right_list_miss_cols)
    return left_df_all_cols.union(right_df_all_cols)


def union_d_fs(left_df, right_df):
    """ Union between two dataFrames; if there is a gap of column fields,
    it will append all missing columns as nulls """
    # Check for None input
    if left_df is None:
        raise ValueError('left_df parameter should not be None')
    if right_df is None:
        raise ValueError('right_df parameter should not be None')
    # For data frames with equal columns and order - regular union
    if left_df.schema.names == right_df.schema.names:
        return left_df.union(right_df)
    else:  # Different columns
        # Save dataFrame column name lists as sets
        left_df_col_list = set(left_df.schema.names)
        right_df_col_list = set(right_df.schema.names)
        # Diff columns between left_df and right_df
        right_list_miss_cols = list(left_df_col_list - right_df_col_list)
        left_list_miss_cols = list(right_df_col_list - left_df_col_list)
        return __order_and_union_d_fs(left_df, right_df, left_list_miss_cols, right_list_miss_cols)
A very simple way to do this: select the columns in the same order from both dataframes and use unionAll (assuming from pyspark.sql.functions import lit):

df1.select('code', 'date', 'A', 'B', 'C', lit(None).alias('D'), lit(None).alias('E')) \
   .unionAll(df2.select('code', 'date', lit(None).alias('A'), 'B', 'C', 'D', 'E'))
Here's a pyspark solution. It assumes that if a field in df1 is missing from df2, then you add that missing field to df2 with null values. However, it also assumes that if the field exists in both dataframes but the type or nullability of the field differs, then the two dataframes conflict and cannot be combined. In that case I raise a TypeError.

from pyspark.sql.functions import lit

def harmonize_schemas_and_combine(df_left, df_right):
    left_types = {f.name: f.dataType for f in df_left.schema}
    right_types = {f.name: f.dataType for f in df_right.schema}
    left_fields = set((f.name, f.dataType, f.nullable) for f in df_left.schema)
    right_fields = set((f.name, f.dataType, f.nullable) for f in df_right.schema)

    # First go over left-unique fields
    for l_name, l_type, l_nullable in left_fields.difference(right_fields):
        if l_name in right_types:
            r_type = right_types[l_name]
            if l_type != r_type:
                raise TypeError("Union failed. Type conflict on field %s. left type %s, right type %s" % (l_name, l_type, r_type))
            else:
                raise TypeError("Union failed. Nullability conflict on field %s. left nullable %s, right nullable %s" % (l_name, l_nullable, not l_nullable))
        df_right = df_right.withColumn(l_name, lit(None).cast(l_type))

    # Now go over right-unique fields
    for r_name, r_type, r_nullable in right_fields.difference(left_fields):
        if r_name in left_types:
            l_type = left_types[r_name]
            if r_type != l_type:
                raise TypeError("Union failed. Type conflict on field %s. right type %s, left type %s" % (r_name, r_type, l_type))
            else:
                raise TypeError("Union failed. Nullability conflict on field %s. right nullable %s, left nullable %s" % (r_name, r_nullable, not r_nullable))
        df_left = df_left.withColumn(r_name, lit(None).cast(r_type))

    # Make sure columns are in the same order
    df_left = df_left.select(df_right.columns)
    return df_left.union(df_right)
I somehow find most of the python answers here a bit too clunky in their writing if you're just going with the simple lit(None) workaround (which is also the only way I know). As an alternative, this might be useful:

# df1 and df2 are assumed to be the given dataFrames from the question

# Get the lacking columns for each dataframe and set them to null in the respective dataFrame.
# First do so for df1...
for column in [column for column in df1.columns if column not in df2.columns]:
    df1 = df1.withColumn(column, lit(None))

# ... and then for df2
for column in [column for column in df2.columns if column not in df1.columns]:
    df2 = df2.withColumn(column, lit(None))

Afterwards just do the union() you wanted to do.
Caution: if your column order differs between df1 and df2, use unionByName()!

result = df1.unionByName(df2)
Modified Alberto Bonsanto's version to preserve the original column order (the OP implied the order should match the original tables). Also, the match part caused an IntelliJ warning. Here's my version:

def unionDifferentTables(df1: DataFrame, df2: DataFrame): DataFrame = {
  val cols1 = df1.columns.toSet
  val cols2 = df2.columns.toSet
  val total = cols1 ++ cols2 // union

  val order = df1.columns ++ df2.columns
  val sorted = total.toList.sortWith((a, b) => order.indexOf(a) < order.indexOf(b))

  def expr(myCols: Set[String], allCols: List[String]) = {
    allCols.map({
      case x if myCols.contains(x) => col(x)
      case y => lit(null).as(y)
    })
  }

  df1.select(expr(cols1, sorted): _*).unionAll(df2.select(expr(cols2, sorted): _*))
}
In pyspark:

df = df1.join(df2, ['each', 'shared', 'col'], how='full')
I had the same issue and using join instead of union solved my problem. So, for example with Python, instead of this line of code: result = left.union(right), which will fail to execute for a different number of columns, you should use this one:

result = left.join(right, left.columns if (len(left.columns) < len(right.columns)) else right.columns, "outer")

Note that the second argument contains the common columns between the two DataFrames. If you don't use it, the result will have duplicate columns, with one of them being null and the other not. Hope it helps.
There is a more concise way to handle this issue with a moderate sacrifice of performance:

def unionWithDifferentSchema(a: DataFrame, b: DataFrame): DataFrame = {
  sparkSession.read.json(a.toJSON.union(b.toJSON).rdd)
}

This is the function which does the trick. Calling toJSON on each dataframe makes a JSON union. This preserves the ordering and the datatypes. The only catch is that toJSON is relatively expensive (you'll probably see a 10-15% slowdown). However, this keeps the code clean.
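A rough PySpark equivalent of the same toJSON trick (my sketch, same performance caveat):
def union_with_different_schema(a, b):
    # toJSON() yields an RDD of JSON strings; spark.read.json merges their schemas
    return spark.read.json(a.toJSON().union(b.toJSON()))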
My version for Java:

private static Dataset<Row> unionDatasets(Dataset<Row> one, Dataset<Row> another) {
    StructType firstSchema = one.schema();
    List<String> anotherFields = Arrays.asList(another.schema().fieldNames());
    another = balanceDataset(another, firstSchema, anotherFields);
    StructType secondSchema = another.schema();
    List<String> oneFields = Arrays.asList(one.schema().fieldNames());
    one = balanceDataset(one, secondSchema, oneFields);
    return another.unionByName(one);
}

private static Dataset<Row> balanceDataset(Dataset<Row> dataset, StructType schema, List<String> fields) {
    for (StructField e : schema.fields()) {
        if (!fields.contains(e.name())) {
            dataset = dataset.withColumn(e.name(), lit(null));
            dataset = dataset.withColumn(e.name(),
                dataset.col(e.name()).cast(Optional.ofNullable(e.dataType()).orElse(StringType)));
        }
    }
    return dataset;
}
Here's the version in Scala, also answered here (Spark - Merge / Union DataFrame with Different Schema (column names and sequence) to a DataFrame with Master common schema); a Pyspark version exists as well. It takes a List of dataframes to be unioned; same-named columns across the dataframes should have the same datatype:

def unionPro(DFList: List[DataFrame], spark: org.apache.spark.sql.SparkSession): DataFrame = {

  /**
   * This Function Accepts DataFrames with same or different Schema/Column Order, with some or no common columns,
   * and creates a unioned DataFrame
   */

  import spark.implicits._

  val MasterColList: Array[String] = DFList.map(_.columns).reduce((x, y) => (x.union(y))).distinct

  def unionExpr(myCols: Seq[String], allCols: Seq[String]): Seq[org.apache.spark.sql.Column] = {
    allCols.toList.map(x => x match {
      case x if myCols.contains(x) => col(x)
      case _ => lit(null).as(x)
    })
  }

  // Create an empty DF, ignoring different datatypes in StructField and treating them the same based on name, ignoring case
  val masterSchema = StructType(DFList.map(_.schema.fields).reduce((x, y) => (x.union(y))).groupBy(_.name.toUpperCase).map(_._2.head).toArray)

  val masterEmptyDF = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], masterSchema).select(MasterColList.head, MasterColList.tail: _*)

  DFList.map(df => df.select(unionExpr(df.columns, MasterColList): _*)).foldLeft(masterEmptyDF)((x, y) => x.union(y))
}

Here is a sample test for it:

val aDF = Seq(("A", 1), ("B", 2)).toDF("Name", "ID")
val bDF = Seq(("C", 1, "D1"), ("D", 2, "D2")).toDF("Name", "Sal", "Deptt")
unionPro(List(aDF, bDF), spark).show

Which gives output as:

+----+----+----+-----+
|Name|  ID| Sal|Deptt|
+----+----+----+-----+
|   A|   1|null| null|
|   B|   2|null| null|
|   C|null|   1|   D1|
|   D|null|   2|   D2|
+----+----+----+-----+
This function takes in two dataframes (df1 and df2) with different schemas and unions them. First we need to bring them to the same schema by adding all (missing) columns from df1 to df2 and vice versa. To add a new empty column to a df we need to specify the datatype.

import pyspark.sql.functions as F

def union_different_schemas(df1, df2):
    # Get a list of all column names in both dfs
    columns_df1 = df1.columns
    columns_df2 = df2.columns
    # Get a list of datatypes of the columns
    data_types_df1 = [i.dataType for i in df1.schema.fields]
    data_types_df2 = [i.dataType for i in df2.schema.fields]
    # We go through all columns in df1 and if they are not in df2, we add
    # them (and specify the correct datatype too)
    for col, typ in zip(columns_df1, data_types_df1):
        if col not in df2.columns:
            df2 = df2.withColumn(col, F.lit(None).cast(typ))
    # Now df2 has all missing columns from df1; let's do the same for df1
    for col, typ in zip(columns_df2, data_types_df2):
        if col not in df1.columns:
            df1 = df1.withColumn(col, F.lit(None).cast(typ))
    # Now df1 and df2 have the same columns, not necessarily in the same
    # order, therefore we use unionByName
    combined_df = df1.unionByName(df2)
    return combined_df
PYSPARK: the Scala version from Alberto works great. However, if you want to make a for-loop or some dynamic assignment of variables you can face some problems. A solution in Pyspark, with clean code:

from pyspark.sql.functions import *

# defining dataframes
df1 = spark.createDataFrame(
    [(1, 'foo', 'ok'), (2, 'pro', 'ok')],
    ['id', 'txt', 'check']
)
df2 = spark.createDataFrame(
    [(3, 'yep', 13, 'mo'), (4, 'bro', 11, 're')],
    ['id', 'txt', 'value', 'more']
)

# retrieving columns
cols1 = df1.columns
cols2 = df2.columns

# getting all columns from df1 and df2
total = list(set(cols2) | set(cols1))

# defining a function for adding nulls (None in the case of pyspark)
def addnulls(yourDF):
    for x in total:
        if x not in yourDF.columns:
            yourDF = yourDF.withColumn(x, lit(None))
    return yourDF

df1 = addnulls(df1)
df2 = addnulls(df2)

# additional sorting for correct unionAll (it concatenates DFs by column position)
df1.select(sorted(df1.columns)).unionAll(df2.select(sorted(df2.columns))).show()

+-----+---+----+---+-----+
|check| id|more|txt|value|
+-----+---+----+---+-----+
|   ok|  1|null|foo| null|
|   ok|  2|null|pro| null|
| null|  3|  mo|yep|   13|
| null|  4|  re|bro|   11|
+-----+---+----+---+-----+
from functools import reduce
from pyspark.sql import DataFrame
import pyspark.sql.functions as F

def unionAll(*dfs, fill_by=None):
    clmns = {clm.name.lower(): (clm.dataType, clm.name) for df in dfs for clm in df.schema.fields}
    dfs = list(dfs)
    for i, df in enumerate(dfs):
        df_clmns = [clm.lower() for clm in df.columns]
        for clm, (dataType, name) in clmns.items():
            if clm not in df_clmns:
                # Add the missing column
                dfs[i] = dfs[i].withColumn(name, F.lit(fill_by).cast(dataType))
    return reduce(DataFrame.unionByName, dfs)

unionAll(df1, df2).show()

- Case-insensitive column matching; returns the actual column case
- Supports the existing datatypes
- The default fill value is customizable
- Pass multiple dataframes at once (e.g. unionAll(df1, df2, df3, ..., df10))
Here's another one:

def unite(df1: DataFrame, df2: DataFrame): DataFrame = {
  val cols1 = df1.columns.toSet
  val cols2 = df2.columns.toSet
  val total = (cols1 ++ cols2).toSeq.sorted
  val expr1 = total.map(c => {
    if (cols1.contains(c)) c else "NULL as " + c
  })
  val expr2 = total.map(c => {
    if (cols2.contains(c)) c else "NULL as " + c
  })
  df1.selectExpr(expr1:_*).union(
    df2.selectExpr(expr2:_*)
  )
}
Union and outer union for Pyspark DataFrame concatenation. This works for multiple data frames with different columns:

from functools import reduce
import pyspark as ps
from pyspark.sql import functions as sqlfunc

def union_all(*dfs):
    return reduce(ps.sql.DataFrame.unionAll, dfs)

def outer_union_all(*dfs):
    all_cols = set([])
    for df in dfs:
        all_cols |= set(df.columns)
    all_cols = list(all_cols)
    print(all_cols)

    def expr(cols, all_cols):
        def append_cols(col):
            if col in cols:
                return col
            else:
                return sqlfunc.lit(None).alias(col)
        cols_ = map(append_cols, all_cols)
        return list(cols_)

    union_df = union_all(*[df.select(expr(df.columns, all_cols)) for df in dfs])
    return union_df
One more generic method to union a list of DataFrames:

def unionFrames(dfs: Seq[DataFrame]): DataFrame = {
  dfs match {
    case Nil => session.emptyDataFrame // or throw an exception?
    case x :: Nil => x
    case _ =>
      // Preserving column order from left to right, following each DF's column order
      val allColumns = dfs.foldLeft(collection.mutable.ArrayBuffer.empty[String])((a, b) => a ++ b.columns).distinct
      val appendMissingColumns = (df: DataFrame) => {
        val columns = df.columns.toSet
        df.select(allColumns.map(c => if (columns.contains(c)) col(c) else lit(null).as(c)): _*)
      }
      dfs.tail.foldLeft(appendMissingColumns(dfs.head))((a, b) => a.union(appendMissingColumns(b)))
  }
}
This is my pyspark version:

from functools import reduce
from pyspark.sql.functions import lit

def concat(dfs):
    # when the dataframes to combine do not have the same order of columns
    # https://datascience.stackexchange.com/a/27231/15325
    return reduce(lambda df1, df2: df1.union(df2.select(df1.columns)), dfs)

def union_all(dfs):
    columns = reduce(lambda x, y: set(x).union(set(y)), [i.columns for i in dfs])
    for i in range(len(dfs)):
        d = dfs[i]
        for c in columns:
            if c not in d.columns:
                d = d.withColumn(c, lit(None))
        dfs[i] = d
    return concat(dfs)
Alternatively, you could use a full join:

list_of_files = ['test1.parquet', 'test2.parquet']

def merged_frames():
    if list_of_files:
        frames = [spark.read.parquet(path) for path in list_of_files]
        if frames:
            result_df = frames[0]
            # join each remaining frame onto the accumulated result
            for frame in frames[1:]:
                result_df = result_df.join(frame, 'primary_key', how='full')
            display(result_df)
If you are loading from files, I guess you could just use the read function with a list of files:

# file_paths is a list of files with different schemas
df = spark.read.option("mergeSchema", "true").json(file_paths)

The resulting dataframe will have merged columns.
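Note that mergeSchema is primarily a Parquet reader option; for Parquet inputs the equivalent sketch would be:
df = spark.read.option("mergeSchema", "true").parquet(*file_paths)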
PySpark converting a column of type 'map' to multiple columns in a dataframe
Input: I have a column Parameters of type map of the form:

from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
d = [{'Parameters': {'foo': '1', 'bar': '2', 'baz': 'aaa'}}]
df = sqlContext.createDataFrame(d)
df.collect()
# [Row(Parameters={'foo': '1', 'bar': '2', 'baz': 'aaa'})]
df.printSchema()
# root
# |-- Parameters: map (nullable = true)
# |    |-- key: string
# |    |-- value: string (valueContainsNull = true)

Output: I want to reshape it in PySpark so that all the keys (foo, bar, etc.) become columns, namely:

[Row(foo='1', bar='2', baz='aaa')]

Using withColumn works:

(df
 .withColumn('foo', df.Parameters['foo'])
 .withColumn('bar', df.Parameters['bar'])
 .withColumn('baz', df.Parameters['baz'])
 .drop('Parameters')
).collect()

But I need a solution that doesn't explicitly mention the column names, as I have dozens of them.
Since keys of the MapType are not a part of the schema, you'll have to collect these first, for example like this:

from pyspark.sql.functions import explode

keys = (df
    .select(explode("Parameters"))
    .select("key")
    .distinct()
    .rdd.flatMap(lambda x: x)
    .collect())

When you have this, all that is left is a simple select:

from pyspark.sql.functions import col

exprs = [col("Parameters").getItem(k).alias(k) for k in keys]
df.select(*exprs)
Performant solution

One of the question constraints is to dynamically determine the column names, which is fine, but be warned that this can be really slow. Here's how you can avoid typing and write code that'll execute quickly.

cols = list(map(
    lambda f: F.col("Parameters").getItem(f).alias(str(f)),
    ["foo", "bar", "baz"]))
df.select(cols).show()

+---+---+---+
|foo|bar|baz|
+---+---+---+
|  1|  2|aaa|
+---+---+---+

Notice that this runs a single select operation. Don't run withColumn multiple times because that's slower. The fast solution is only possible if you know all the map keys. You'll need to revert to the slower solution if you don't know all the unique values for the map keys.

Slower solution

The accepted answer is good. My solution is a bit more performant because it doesn't call .rdd or flatMap().

import pyspark.sql.functions as F

d = [{'Parameters': {'foo': '1', 'bar': '2', 'baz': 'aaa'}}]
df = spark.createDataFrame(d)

keys_df = df.select(F.explode(F.map_keys(F.col("Parameters")))).distinct()
keys = list(map(lambda row: row[0], keys_df.collect()))
key_cols = list(map(lambda f: F.col("Parameters").getItem(f).alias(str(f)), keys))
df.select(key_cols).show()

+---+---+---+
|bar|foo|baz|
+---+---+---+
|  2|  1|aaa|
+---+---+---+

Collecting results to the driver node can be a performance bottleneck. It's good to execute list(map(lambda row: row[0], keys_df.collect())) as a separate command to make sure it's not running too slowly.
Performance-wise, and without hard-coding column names, use this:

from pyspark.sql import functions as F

df = df.withColumn("_c", F.to_json("Parameters"))
json_schema = spark.read.json(df.rdd.map(lambda r: r._c)).schema
df = df.withColumn("_c", F.from_json("_c", json_schema))
df = df.select("_c.*")
df.show()
# +----+----+---+
# | bar| baz|foo|
# +----+----+---+
# |   2| aaa|  1|
# |null|null|  1|
# +----+----+---+

It uses neither distinct nor collect. It calls rdd once, so that the extracted schema has a suitable format to use in from_json.