Create new dataFrame based on reformatted columns from old dataFrame - python

I imported the data from a database
df = spark.read.format("com.mongodb.spark.sql.DefaultSource").option("uri",
"mongodb://127.0.0.1/test.db").load()
I have selected the double columns using
double_list = [name for name,types in df.dtypes if types == 'double']
Credits to @Ramesh Maharjan.
To remove special characters we use
removedSpecials = [''.join(y for y in x if y.isalnum()) for x in double_list]
The question is:
How can I create a new dataframe based on df with ONLY the double_list columns?

If you already have the list of column names with double as the datatype, then the next step is to remove the special characters, which can be done using .isalnum():
removedSpecials = [''.join(y for y in x if y.isalnum()) for x in double_list]
Once you have the list of column names with the special characters removed, it's just a .withColumnRenamed() API call:
for (x, y) in zip(double_list, removedSpecials):
    df = df.withColumnRenamed(x, y)
df.show(truncate=False) should give you the dataframe with the double-datatype columns renamed.
If you don't want the columns that are not in double_list, i.e. not of double datatype, then you can use select:
df.select(*removedSpecials).show(truncate=False)
The reason for doing
for (x, y) in zip(double_list, removedSpecials):
    df = df.withColumnRenamed(x, y)
before doing
df.select(*removedSpecials).show(truncate=False)
is that there might be special characters like . in the original names, which prevent concise solutions like df.select([df[x].alias(y) for (x, y) in zip(double_list, removedSpecials)]).show(truncate=False) from working.
I hope the answer is helpful.
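Putting those steps together, here is a minimal end-to-end sketch; the sample dataframe is hypothetical (in the question, df comes from the MongoDB reader) and just illustrates the select-doubles, strip-specials, rename, select flow:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# hypothetical sample data with special characters in the double column names
df = spark.createDataFrame([(1, 2.5, 3.7)], ["id", "price.usd", "weight kg"])

# 1. columns whose dtype is double
double_list = [name for name, types in df.dtypes if types == 'double']

# 2. strip special characters from those names
removedSpecials = [''.join(y for y in x if y.isalnum()) for x in double_list]

# 3. rename first (withColumnRenamed handles dots and spaces), then keep only those columns
for (x, y) in zip(double_list, removedSpecials):
    df = df.withColumnRenamed(x, y)

df.select(*removedSpecials).show(truncate=False)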

Scala code, which you can convert into Python:
import sqlContext.implicits._
// sample df
df.show()
+----+--------------------+--------+
|data| Week|NumCCol1|
+----+--------------------+--------+
| aac|01/28/2018-02/03/...| 2.0|
| aac|02/04/2018-02/10/...| 23.0|
| aac|02/11/2018-02/17/...| 105.0|
+----+--------------------+--------+
df.printSchema()
root
|-- data: string (nullable = true)
|-- Week: string (nullable = true)
|-- NumCCol1: double (nullable = false)
import org.apache.spark.sql.functions.col
// names of the double columns
val doubleCols = df.schema.fields.collect { case x if x.dataType.typeName == "double" => x.name }
val df1 = df.select(doubleCols.map(col): _*)
// df with only double columns
df1.show()
use df1.withColumnRenamed to rename the columns
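For the Python side, a minimal sketch of the same selection (assuming an existing DataFrame df) would be:
# keep only the columns whose dtype is double
double_cols = [name for name, dtype in df.dtypes if dtype == 'double']
df1 = df.select(*double_cols)
df1.show()
You can then apply the withColumnRenamed loop from the first answer to rename them.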

Related

Data Frames being read in with varying number of columns, how do I dynamically change data types of only columns that are Boolean to String data type?

In my notebook, I have Data Frames being read in that will have a variable number of columns every time the notebook is run. How do I dynamically change the data types of only the columns that are Boolean data types to String data type?
This is a problem I faced, so I am posting the answer in case it helps someone else.
The name of the data frame is "df".
Here we dynamically convert every column in the incoming dataset that is a Boolean data type to a String data type:
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

def bool_col_DataTypes(DataFrame):
    """This function accepts a Spark DataFrame as an argument. It returns a list of all Boolean columns in your dataframe."""
    DataFrame = dict(DataFrame.dtypes)
    list_of_bool_cols_for_conversion = [x for x, y in DataFrame.items() if y == 'boolean']
    return list_of_bool_cols_for_conversion

list_of_bool_columns = bool_col_DataTypes(df)

for i in list_of_bool_columns:
    df = df.withColumn(i, F.col(i).cast(StringType()))

new_df = df
from pyspark.sql.types import StructType, StructField, BooleanType, StringType, IntegerType

data = [(True, 'Lion', 1),
        (False, 'fridge', 2),
        (True, 'Bat', 23)]
schema = StructType([StructField('Answer', BooleanType(), True),
                     StructField('Entity', StringType(), True),
                     StructField('ID', IntegerType(), True)])
df = spark.createDataFrame(data, schema)
df.printSchema()
Schema
root
|-- Answer: boolean (nullable = true)
|-- Entity: string (nullable = true)
|-- ID: integer (nullable = true)
Transformation
from pyspark.sql.functions import col
df1 = df.select(*[col(x).cast('string').alias(x) if y == 'boolean' else col(x) for x, y in df.dtypes])
df1.printSchema()
root
|-- Answer: string (nullable = true)
|-- Entity: string (nullable = true)
|-- ID: integer (nullable = true)

PySpark dataframe to python dataframe doesn't carry nested list of dictionaries

I have this pyspark dataframe as:
|-- countryCode: string (nullable = true)
|-- words: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- confidence: string (nullable = true)
| | |-- locale: string (nullable = true)
I am trying to convert it to a Python (pandas) dataframe, but the list of dictionaries in "words" won't keep its format of a list of dictionaries; it becomes a list of tuples.
How could I avoid it?
in pyspark:
[Row(countryCode='TR', words=[Row(confidence='0.50127727', locale='en')])]
converting to python dataframe:
scraped_data.select("countryCode", "words").toPandas()
python dataframe:
countryCode words
TR [(0.50127727, en)]
what I want is the list of dictionaries in my python dataframe for "words" as [{'confidence':'0.50127727', 'locale':'en'}]
Convert struct to Map Type using pyspark's create_map() before you send to pandas. create_map() is a PySpark SQL function that converts StructType to MapType column.
Data
from pyspark.sql import SparkSession, Row
df= spark.createDataFrame([Row(countryCode='TR', Words =[Row(confidence='0.50127727', locale='en')])])
df.show()
Solution
from pyspark.sql.functions import col, lit, create_map
df = df.withColumn("WordsMap", create_map(
    lit("confidence"), col("Words.confidence"),
    lit("locale"), col("Words.locale")
)).drop("Words")
pdf = df.select("countryCode", "WordsMap").toPandas()
Outcome
print(pdf)
countryCode WordsMap
0 TR {'confidence': ['0.50127727'], 'locale': ['en']}
I noticed that when we convert the PySpark dataframe to a pandas dataframe, the printed output makes it look as if the list-of-dictionaries format is not preserved, but the format is in fact preserved in the new dataframe and you can still access the key-value stores:
for idx, row in scraped_data_df.iterrows():
    for kw in row['words']:
        if kw['confidence'] not in test.keys():
It basically has all the dictionary fields in it, even though dataframe.head() only shows tuples: [(0.50127727, en)].
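If you specifically want a list of plain dicts in the pandas column, a minimal sketch (assuming, as in the output above, that each element comes back as a pyspark.sql.Row) is to convert the rows after toPandas():
pdf = scraped_data.select("countryCode", "words").toPandas()
# Row objects support asDict(), so map each struct element to a plain dict
pdf["words"] = pdf["words"].apply(lambda rows: [r.asDict() for r in rows])
print(pdf.loc[0, "words"])   # [{'confidence': '0.50127727', 'locale': 'en'}]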

Comparing schema of dataframe using Pyspark

I have a data frame (df).
For showing its schema I use:
from pyspark.sql.functions import *
df1.printSchema()
And I get the following result:
#root
# |-- name: string (nullable = true)
# |-- age: long (nullable = true)
Sometimes the schema changes (the column type or name):
df2.printSchema()
#root
# |-- name: array (nullable = true)
# |-- gender: integer (nullable = true)
# |-- age: long (nullable = true)
I would like to compare the two schemas (df1 and df2) and get only the differences in types and column names (sometimes a column can move to another position).
The results should be a table (or data frame) something like this:
column    df1      df2       diff
name      string   array     type
gender    N/A      integer   new column
(The age column is the same and didn't change. If a column is omitted there will be an 'omitted' indication.)
How can I do it efficiently if I have many columns in each?
Without any external library, we can find the schema difference using:
from pyspark.sql.session import SparkSession
from pyspark.sql import DataFrame

def schema_diff(spark: SparkSession, df_1: DataFrame, df_2: DataFrame):
    s1 = spark.createDataFrame(df_1.dtypes, ["d1_name", "d1_type"])
    s2 = spark.createDataFrame(df_2.dtypes, ["d2_name", "d2_type"])
    difference = (
        s1.join(s2, (s1.d1_name == s2.d2_name) & (s1.d1_type == s2.d2_type), how="outer")
        .where(s1.d1_type.isNull() | s2.d2_type.isNull())
        .select(s1.d1_name, s1.d1_type, s2.d2_name, s2.d2_type)
        .fillna("")
    )
    return difference
fillna is optional; I prefer to view the missing values as empty strings.
The join matches on both name and type, and the where clause keeps rows where either side's type is null; this surfaces a column even when it exists in both dataframes but with different types.
This will also show all columns that are in the second dataframe but not in the first.
Usage:
diff = schema_diff(spark, df_1, df_2)
diff.show(diff.count(), truncate=False)
You can try creating two pandas dataframes with the metadata from both DF1 and DF2, like below,
import pandas as pd

pd_df1 = pd.DataFrame(df1.dtypes, columns=['column', 'data_type'])
pd_df2 = pd.DataFrame(df2.dtypes, columns=['column', 'data_type'])
and then join those two pandas dataframes with an 'outer' join.
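Completing that suggestion, a minimal sketch of the outer join (continuing from the pd_df1/pd_df2 above):
# outer-merge on the column name; the indicator column records which side each name came from
diff = pd_df1.merge(pd_df2, on='column', how='outer', suffixes=('_df1', '_df2'), indicator=True)
# keep columns that are missing on one side or whose types disagree
diff = diff[(diff['_merge'] != 'both') | (diff['data_type_df1'] != diff['data_type_df2'])]
print(diff)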
A custom function that could be useful for someone.
def SchemaDiff(DF1, DF2):
    # Getting schema for both dataframes in a dictionary
    DF1Schema = {x[0]: x[1] for x in DF1.dtypes}
    DF2Schema = {x[0]: x[1] for x in DF2.dtypes}
    # Column present in DF1 but not in DF2
    DF1MinusDF2 = dict.fromkeys((set(DF1.columns) - set(DF2.columns)), '')
    for column_name in DF1MinusDF2:
        DF1MinusDF2[column_name] = DF1Schema[column_name]
    # Column present in DF2 but not in DF1
    DF2MinusDF1 = dict.fromkeys((set(DF2.columns) - set(DF1.columns)), '')
    for column_name in DF2MinusDF1:
        DF2MinusDF1[column_name] = DF2Schema[column_name]
    # Find data type changed in DF1 as compared to DF2
    UpdatedDF1Schema = {k: v for k, v in DF1Schema.items() if k not in DF1MinusDF2}
    UpdatedDF1Schema = {**UpdatedDF1Schema, **DF2MinusDF1}
    DF1DataTypesChanged = {}
    for column_name in UpdatedDF1Schema:
        if UpdatedDF1Schema[column_name] != DF2Schema[column_name]:
            DF1DataTypesChanged[column_name] = DF2Schema[column_name]
    return DF1MinusDF2, DF2MinusDF1, DF1DataTypesChanged
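A possible usage, assuming the df1/df2 schemas shown in the question (the printed values are illustrative):
only_in_df1, only_in_df2, changed_types = SchemaDiff(df1, df2)
print(only_in_df1)     # e.g. {}
print(only_in_df2)     # e.g. {'gender': 'int'}
print(changed_types)   # e.g. {'name': 'array<string>'}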
You can simply check whether the two schemas are identical with
df1.schema == df2.schema
(note that printSchema() only prints the schema and returns None, so comparing its return values tells you nothing).

Change column type from string to date in Pyspark

I'm trying to change my column type from string to date. I have consulted answers from:
How to change the column type from String to Date in DataFrames?
Why I get null results from date_format() PySpark function?
When I tried to apply the answers from link 1, I got null results instead, so I referred to the answer from link 2, but I don't understand this part:
output_format = ... # Some SimpleDateFormat string
from pyspark.sql.functions import col, unix_timestamp, to_date
# sample data
df = sc.parallelize([['12-21-2006'],
                     ['05-30-2007'],
                     ['01-01-1984'],
                     ['12-24-2017']]).toDF(["date_in_strFormat"])
df.printSchema()
df = df.withColumn('date_in_dateFormat',
                   to_date(unix_timestamp(col('date_in_strFormat'), 'MM-dd-yyyy').cast("timestamp")))
df.show()
df.printSchema()
Output is:
root
|-- date_in_strFormat: string (nullable = true)
+-----------------+------------------+
|date_in_strFormat|date_in_dateFormat|
+-----------------+------------------+
| 12-21-2006| 2006-12-21|
| 05-30-2007| 2007-05-30|
| 01-01-1984| 1984-01-01|
| 12-24-2017| 2017-12-24|
+-----------------+------------------+
root
|-- date_in_strFormat: string (nullable = true)
|-- date_in_dateFormat: date (nullable = true)
simple way:
from pyspark.sql.types import *
df_1 = df.withColumn("col_with_date_format",
df["col_with_date_format"].cast(DateType()))
Here is an easier way, using the default to_date function:
from pyspark.sql import functions as F
df= df.withColumn('col_with_date_format',F.to_date(df.col_with_str_format))
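Note that to_date without a format expects the default yyyy-MM-dd pattern, so for MM-dd-yyyy strings like those in the question it can return null. A minimal sketch passing the format explicitly (available since Spark 2.2):
from pyspark.sql import functions as F
df = df.withColumn('date_in_dateFormat', F.to_date('date_in_strFormat', 'MM-dd-yyyy'))
df.show()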

How to change dataframe column names in PySpark?

I come from pandas background and am used to reading data from CSV files into a dataframe and then simply changing the column names to something useful using the simple command:
df.columns = new_column_name_list
However, the same doesn't work in PySpark dataframes created using sqlContext.
The only solution I could figure out to do this easily is the following:
df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', inferschema='true', delimiter='\t').load("data.txt")
oldSchema = df.schema
for i, k in enumerate(oldSchema.fields):
    k.name = new_column_name_list[i]
df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', delimiter='\t').load("data.txt", schema=oldSchema)
This basically defines the variable twice: it infers the schema first, then renames the columns, and then loads the dataframe again with the updated schema.
Is there a better and more efficient way to do this like we do in pandas?
My Spark version is 1.5.0
There are many ways to do that:
Option 1. Using selectExpr.
data = sqlContext.createDataFrame([("Alberto", 2), ("Dakota", 2)],
["Name", "askdaosdka"])
data.show()
data.printSchema()
# Output
#+-------+----------+
#| Name|askdaosdka|
#+-------+----------+
#|Alberto| 2|
#| Dakota| 2|
#+-------+----------+
#root
# |-- Name: string (nullable = true)
# |-- askdaosdka: long (nullable = true)
df = data.selectExpr("Name as name", "askdaosdka as age")
df.show()
df.printSchema()
# Output
#+-------+---+
#| name|age|
#+-------+---+
#|Alberto| 2|
#| Dakota| 2|
#+-------+---+
#root
# |-- name: string (nullable = true)
# |-- age: long (nullable = true)
Option 2. Using withColumnRenamed, notice that this method allows you to "overwrite" the same column. For Python3, replace xrange with range.
from functools import reduce
oldColumns = data.schema.names
newColumns = ["name", "age"]
df = reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx], newColumns[idx]), xrange(len(oldColumns)), data)
df.printSchema()
df.show()
Option 3. Using alias; in Scala you can also use as.
from pyspark.sql.functions import col
data = data.select(col("Name").alias("name"), col("askdaosdka").alias("age"))
data.show()
# Output
#+-------+---+
#| name|age|
#+-------+---+
#|Alberto| 2|
#| Dakota| 2|
#+-------+---+
Option 4. Using sqlContext.sql, which lets you use SQL queries on DataFrames registered as tables.
sqlContext.registerDataFrameAsTable(data, "myTable")
df2 = sqlContext.sql("SELECT Name AS name, askdaosdka as age from myTable")
df2.show()
# Output
#+-------+---+
#| name|age|
#+-------+---+
#|Alberto| 2|
#| Dakota| 2|
#+-------+---+
df = df.withColumnRenamed("colName", "newColName")\
.withColumnRenamed("colName2", "newColName2")
Advantage of this approach: with a long list of columns you may want to change only a few column names, which is very convenient in those scenarios. It is also very useful when joining tables with duplicate column names (see the sketch below).
If you want to change all columns names, try df.toDF(*cols)
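For the duplicate-column-names case, here is a minimal sketch with two hypothetical dataframes, renaming the clashing column on one side before the join so the result is unambiguous:
df_a = spark.createDataFrame([(1, "x")], ["id", "value"])
df_b = spark.createDataFrame([(1, "y")], ["id", "value"])
joined = df_a.join(df_b.withColumnRenamed("value", "value_b"), on="id")
joined.show()
# Output
#+---+-----+-------+
#| id|value|value_b|
#+---+-----+-------+
#|  1|    x|      y|
#+---+-----+-------+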
In case you would like to apply a simple transformation on all column names, this code does the trick: (I am replacing all spaces with underscore)
new_column_name_list= list(map(lambda x: x.replace(" ", "_"), df.columns))
df = df.toDF(*new_column_name_list)
Thanks to @user8117731 for the toDF trick.
df.withColumnRenamed('age', 'age2')
If you want to rename a single column and keep the rest as it is:
from pyspark.sql.functions import col
new_df = old_df.select(*[col(s).alias(new_name) if s == column_to_change else s for s in old_df.columns])
this is the approach that I used:
create pyspark session:
import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('changeColNames').getOrCreate()
create dataframe:
df = spark.createDataFrame(data = [('Bob', 5.62,'juice'), ('Sue',0.85,'milk')], schema = ["Name", "Amount","Item"])
view df with column names:
df.show()
+----+------+-----+
|Name|Amount| Item|
+----+------+-----+
| Bob| 5.62|juice|
| Sue| 0.85| milk|
+----+------+-----+
create a list with new column names:
newcolnames = ['NameNew','AmountNew','ItemNew']
change the column names of the df:
for c, n in zip(df.columns, newcolnames):
    df = df.withColumnRenamed(c, n)
view df with new column names:
df.show()
+-------+---------+-------+
|NameNew|AmountNew|ItemNew|
+-------+---------+-------+
| Bob| 5.62| juice|
| Sue| 0.85| milk|
+-------+---------+-------+
I made an easy to use function to rename multiple columns for a pyspark dataframe,
in case anyone wants to use it:
def renameCols(df, old_columns, new_columns):
    for old_col, new_col in zip(old_columns, new_columns):
        df = df.withColumnRenamed(old_col, new_col)
    return df
old_columns = ['old_name1','old_name2']
new_columns = ['new_name1', 'new_name2']
df_renamed = renameCols(df, old_columns, new_columns)
Be careful, both lists must be the same length.
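Since zip silently stops at the shorter list, one possible guard is to assert the lengths inside the function before the loop:
# a defensive check you could add at the top of renameCols
assert len(old_columns) == len(new_columns), "old_columns and new_columns must have the same length"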
Another way to rename just one column (using import pyspark.sql.functions as F):
df = df.select( '*', F.col('count').alias('new_count') ).drop('count')
Method 1:
df = df.withColumnRenamed("old_column_name", "new_column_name")
Method 2:
If you want to do some computation and rename the new values:
import pyspark.sql.functions as F
df = df.withColumn("new_column_name", F.when(F.col("old_column_name") > 1, F.lit(1)).otherwise(F.col("old_column_name")))
df = df.drop("old_column_name")
You can use the following function to rename all the columns of your dataframe.
def df_col_rename(X, to_rename, replace_with):
    """
    :param X: spark dataframe
    :param to_rename: list of original names
    :param replace_with: list of new names
    :return: dataframe with updated names
    """
    import pyspark.sql.functions as F
    mapping = dict(zip(to_rename, replace_with))
    X = X.select([F.col(c).alias(mapping.get(c, c)) for c in to_rename])
    return X
In case you need to update only a few columns' names, you can use the same column name in the replace_with list
To rename all columns
df_col_rename(X,['a', 'b', 'c'], ['x', 'y', 'z'])
To rename some columns
df_col_rename(X,['a', 'b', 'c'], ['a', 'y', 'z'])
We can use col.alias to rename a column:
from pyspark.sql.functions import col
df.select(['vin',col('timeStamp').alias('Date')]).show()
We can use various approaches to rename the column name.
First, let create a simple DataFrame.
df = spark.createDataFrame([("x", 1), ("y", 2)],
["col_1", "col_2"])
Now let's try to rename col_1 to col_3. Below are a few approaches to do the same.
# Approach - 1 : using withColumnRenamed function.
df.withColumnRenamed("col_1", "col_3").show()
# Approach - 2 : using alias function.
df.select(df["col_1"].alias("col3"), "col_2").show()
# Approach - 3 : using selectExpr function.
df.selectExpr("col_1 as col_3", "col_2").show()
# Rename all columns
# Approach - 4 : using toDF function. Here you need to pass the list of all columns present in DataFrame.
df.toDF("col_3", "col_2").show()
Here is the output.
+-----+-----+
|col_3|col_2|
+-----+-----+
| x| 1|
| y| 2|
+-----+-----+
I hope this helps.
A way that you can use 'alias' to change the column name:
col('my_column').alias('new_name')
Another way that you can use 'alias' (possibly not mentioned):
df.my_column.alias('new_name')
You can put it in a for loop, and use zip to pair each column name in the two arrays.
new_name = ["id", "sepal_length_cm", "sepal_width_cm", "petal_length_cm", "petal_width_cm", "species"]
new_df = df
for old, new in zip(df.columns, new_name):
    new_df = new_df.withColumnRenamed(old, new)
I like to use a dict to rename the df.
rename = {'old1': 'new1', 'old2': 'new2'}
for col in df.schema.names:
    df = df.withColumnRenamed(col, rename.get(col, col))  # keep the original name if it is not in the dict
For a single column rename, you can still use toDF(). For example,
df1.selectExpr("SALARY*2").toDF("REVISED_SALARY").show()
There are multiple approaches you can use:
from pyspark.sql.functions import col
df1 = df.withColumn("new_column", col("old_column")).drop("old_column")
df1 = df.withColumn("new_column", col("old_column"))
df1 = df.select(col("old_column").alias("new_column"))
from pyspark.sql.types import StructType,StructField, StringType, IntegerType
CreatingDataFrame = [("James","Sales","NY",90000,34,10000),
("Michael","Sales","NY",86000,56,20000),
("Robert","Sales","CA",81000,30,23000),
("Maria","Finance","CA",90000,24,23000),
("Raman","Finance","CA",99000,40,24000),
("Scott","Finance","NY",83000,36,19000),
("Jen","Finance","NY",79000,53,15000),
("Jeff","Marketing","CA",80000,25,18000),
("Kumar","Marketing","NY",91000,50,21000)
]
schema = StructType([ \
StructField("employee_name",StringType(),True), \
StructField("department",StringType(),True), \
StructField("state",StringType(),True), \
StructField("salary", IntegerType(), True), \
StructField("age", StringType(), True), \
StructField("bonus", IntegerType(), True) \
])
OurData = spark.createDataFrame(data=CreatingDataFrame,schema=schema)
OurData.show()
GrouppedBonusData=OurData.groupBy("department").sum("bonus")
GrouppedBonusData.show()
GrouppedBonusData.printSchema()
from pyspark.sql.functions import col
BonusColumnRenamed = GrouppedBonusData.select(col("department").alias("department"), col("sum(bonus)").alias("Total_Bonus"))
BonusColumnRenamed.show()
GrouppedBonusData.groupBy("department").count().show()
GrouppedSalaryData=OurData.groupBy("department").sum("salary")
GrouppedSalaryData.show()
from pyspark.sql.functions import col
SalaryColumnRenamed = GrouppedSalaryData.select(col("department").alias("Department"), col("sum(salary)").alias("Total_Salary"))
SalaryColumnRenamed.show()
Try the following method. It allows you to rename columns of multiple files.
Reference: https://www.linkedin.com/pulse/pyspark-methods-rename-columns-kyle-gibson/
from pyspark.sql.functions import col

df_initial = spark.read.load('com.databricks.spark.csv')
rename_dict = {
    'Alberto': 'Name',
    'Dakota': 'askdaosdka'
}
df_renamed = df_initial \
    .select([col(c).alias(rename_dict.get(c, c)) for c in df_initial.columns])
The same idea wrapped in a function, so it can be used with transform:
def renameColumns(df):
    rename_dict = {
        'FName': 'FirstName',
        'LName': 'LastName',
        'DOB': 'BirthDate'
    }
    return df.select([col(c).alias(rename_dict.get(c, c)) for c in df.columns])

df_renamed = spark.read.load('/mnt/datalake/bronze/testData') \
    .transform(renameColumns)
The simplest solution is using withColumnRenamed:
renamed_df = df.withColumnRenamed('name_1', 'New_name_1').withColumnRenamed('name_2', 'New_name_2')
renamed_df.show()
And if you would like to do this like we do with Pandas, you can use toDF:
Create an ordered list of new column names and pass it to toDF:
df_list = ["newName_1", "newName_2", "newName_3", "newName_4"]
renamed_df = df.toDF(*df_list)
renamed_df.show()
This is an easy way to rename multiple columns with a loop:
cols_to_rename = ["col1","col2","col3"]
for col in cols_to_rename:
    df = df.withColumnRenamed(col, "new_{}".format(col))
List comprehension + f-string:
df = df.toDF(*[f'n_{c}' for c in df.columns])
Simple list comprehension:
df = df.toDF(*[c.lower() for c in df.columns])
The closest statement to df.columns = new_column_name_list is:
import pyspark.sql.functions as F
df = df.select(*[F.col(name_old).alias(name_new)
                 for (name_old, name_new)
                 in zip(df.columns, new_column_name_list)])
This doesn't require any rarely-used functions, and emphasizes some patterns that are very helpful in Spark. You could also break up the steps if you find this one-liner to be doing too many things:
import pyspark.sql.functions as F
column_mapping = [F.col(name_old).alias(name_new)
                  for (name_old, name_new)
                  in zip(df.columns, new_column_name_list)]
df = df.select(*column_mapping)
