The data looks like this:
+---+-----+-----------------------------+
| id|point|                         data|
+---+-----+-----------------------------+
|abc|    6|{"key1":"124", "key2": "345"}|
|dfl|    7|{"key1":"777", "key2": "888"}|
|4bd|    6|{"key1":"111", "key2": "788"}|
+---+-----+-----------------------------+
I am trying to break it into the following format:
+---+-----+----+----+
| id|point|key1|key2|
+---+-----+----+----+
|abc|    6| 124| 345|
|dfl|    7| 777| 888|
|4bd|    6| 111| 788|
+---+-----+----+----+
The explode function breaks the dataframe into multiple rows (see the sketch after the note below), but that is not the desired result.
Note: the following solution does not answer my question:
PySpark "explode" dict in column
As long as you are using Spark version 2.1 or higher, pyspark.sql.functions.from_json should get you your desired result, but you would need to first define the required schema
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType

schema = StructType(
    [
        StructField('key1', StringType(), True),
        StructField('key2', StringType(), True)
    ]
)

df.withColumn("data", from_json("data", schema))\
    .select(col('id'), col('point'), col('data.*'))\
    .show()
which should give you
+---+-----+----+----+
| id|point|key1|key2|
+---+-----+----+----+
|abc| 6| 124| 345|
|dfl|    7| 777| 888|
|4bd| 6| 111| 788|
+---+-----+----+----+
As suggested by @pault, the data field is a string field. Since the keys are the same (i.e. 'key1', 'key2') in the JSON string across rows, you might also use json_tuple() (this function is new in version 1.6 according to the documentation):
from pyspark.sql import functions as F
df.select('id', 'point', F.json_tuple('data', 'key1', 'key2').alias('key1', 'key2')).show()
Below is my original post, which is most likely WRONG if the original table comes from df.show(truncate=False) and thus the data field is NOT a Python data structure.
Since you had exploded the data into rows, I assumed the column data was a Python data structure instead of a string:
from pyspark.sql import functions as F
df.select('id', 'point', F.col('data').getItem('key1').alias('key1'), F.col('data')['key2'].alias('key2')).show()
As mentioned by @jxc, json_tuple should work fine if you are not able to define the schema beforehand and only need to deal with a single level of JSON string. I think it's more straightforward and easier to use. Strangely, I didn't find anyone else mention this function before.
In my use case, the original dataframe schema is StructType(List(StructField(a,StringType,true))), and the JSON string column is shown as:
+---------------------------------------+
|a |
+---------------------------------------+
|{"k1": "v1", "k2": "2", "k3": {"m": 1}}|
|{"k1": "v11", "k3": "v33"} |
|{"k1": "v13", "k2": "23"} |
+---------------------------------------+
Expand the JSON fields into new columns with json_tuple:
from pyspark.sql import functions as F

df = df.select(F.col('a'),
               F.json_tuple(F.col('a'), 'k1', 'k2', 'k3').alias('k1', 'k2', 'k3'))
df.schema
df.show(truncate=False)
The documentation doesn't say much about it, but at least in my use case the new columns extracted by json_tuple are StringType, and it only extracts a single level of the JSON string (a sketch for reaching a nested key follows the output below).
StructType(List(StructField(k1,StringType,true),StructField(k2,StringType,true),StructField(k3,StringType,true)))
+---------------------------------------+---+----+-------+
|a |k1 |k2 |k3 |
+---------------------------------------+---+----+-------+
|{"k1": "v1", "k2": "2", "k3": {"m": 1}}|v1 |2 |{"m":1}|
|{"k1": "v11", "k3": "v33"} |v11|null|v33 |
|{"k1": "v13", "k2": "23"} |v13|23 |null |
+---------------------------------------+---+----+-------+
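Since json_tuple only reaches the first level, here is a hedged sketch for pulling a nested key such as k3.m out with get_json_object and a JSON path (the k3_m alias is my own choice; column names follow the example above):
from pyspark.sql import functions as F

# json_tuple for the flat keys, get_json_object with a JSON path for the nested one
df.select(
    F.json_tuple('a', 'k1', 'k2').alias('k1', 'k2'),
    F.get_json_object('a', '$.k3.m').alias('k3_m'),
).show(truncate=False)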
This works for my use case:
from pyspark.sql.functions import from_json

data1 = spark.read.parquet(path)

# Infer the JSON schema by re-reading the string column with spark.read.json
json_schema = spark.read.json(data1.rdd.map(lambda row: row.json_col)).schema
data2 = data1.withColumn("data", from_json("json_col", json_schema))

# Build the final column list: every original column plus the expanded JSON fields
col1 = data2.columns
col1.remove("data")
col2 = data2.select("data.*").columns
append_str = "data."
col3 = [append_str + val for val in col2]
col_list = col1 + col3

data3 = data2.select(*col_list).drop("json_col")
All credits to Shrikant Prabhu
You can simply use SQL
SELECT id, point, data.*
FROM original_table
This way the schema of the new table will adapt if the data changes, and you won't have to do anything in your pipeline.
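For completeness, a minimal PySpark sketch of the same SQL approach, assuming data is already a struct column (for example, the result of from_json above); a raw JSON string column would need to be parsed first. The view name original_table matches the query:
# Register the DataFrame as a temporary view and run the same query with spark.sql.
df.createOrReplaceTempView("original_table")
spark.sql("SELECT id, point, data.* FROM original_table").show()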
In this approach you just need to set the name of the column with the JSON content.
There is no need to set up the schema; everything is inferred automatically.
json_col_name = 'data'

# Discover the JSON keys from the first row, then expand them with selectExpr
keys = df.head()[json_col_name].keys()
jsonFields = [f"{json_col_name}.{key} {key}" for key in keys]
main_fields = [key for key in df.columns if key != json_col_name]
df_new = df.selectExpr(main_fields + jsonFields)
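If the data column is still a raw JSON string rather than a map or struct, a hedged variant of the same idea is to discover the keys locally with json.loads and expand them with get_json_object (the json_fields name is mine):
import json

# Parse one row locally just to discover the key names.
keys = json.loads(df.head()[json_col_name]).keys()
json_fields = [f"get_json_object({json_col_name}, '$.{key}') AS {key}" for key in keys]
df_new = df.selectExpr(main_fields + json_fields)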
Related
I have three pyspark dataframes. I want to put the dataset reference in a dictionary, write a loop, perform some operations on these three dataframes, and then save them for further analysis. But I struggle with the last step. I have two questions:
In my code below, how do I access the results in TRANSFORMS? When I type: print(TRANSFORMS[0])
I only get this cryptic result:
<function __main__.multi_output(Input_table, table_name='ONE')>
Is there a mistake in my code so that the transformations are never made?
How do I modify the function so it saves three datasets like df_1_result, df_2_result, df_3_result which I can then later use in further analysis?
import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.functions import *

spark = SparkSession.builder.appName('Sparky').getOrCreate()

# Create the initial dataframe
data = [("James","M",60000), ("Michael","M",70000),
        ("Robert",None,400000), ("Maria","F",500000),
        ("Jen","",None)]
columns = ["name","gender","salary"]
df_when = spark.createDataFrame(data = data, schema = columns)

# Create three identical datasets
df_1 = df_when
df_2 = df_when
df_3 = df_when

TRANSFORMS = []
DATASETS = {
    "ONE" : df_1,
    "TWO" : df_2,
    "THREE" : df_3,
}

for table_name, table_location in list(DATASETS.items()):
    def multi_output(Input_table, table_name=table_name):
        if table_name=="ONE":
            output_table = Input_table.drop("name")
        elif table_name=="TWO":
            output_table== Input_table.drop("gender")
        elif table_name=="THREE":
            output_table = Input_table.drop("salary")
        return output_table
    TRANSFORMS.append(multi_output)
There are a couple of issues here:
Issue 1: TRANSFORMS.append(multi_output) simply adds the function definition to the TRANSFORMS list. The function is never invoked. Also, we should define it outside the for-loop.
Issue 2: The statement under the second condition has a typo: it uses == (comparison) instead of = (assignment), so output_table is never assigned.
The code below should work as expected.
import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.functions import *

spark = SparkSession.builder.appName('Sparky').getOrCreate()

# Create the initial dataframe
data = [("James","M",60000), ("Michael","M",70000),
        ("Robert",None,400000), ("Maria","F",500000),
        ("Jen","",None)]
columns = ["name","gender","salary"]
df_when = spark.createDataFrame(data = data, schema = columns)

# Create three identical datasets
df_1 = df_when
df_2 = df_when
df_3 = df_when

TRANSFORMS = []
DATASETS = {
    "ONE" : df_1,
    "TWO" : df_2,
    "THREE" : df_3,
}

# Define the function once, outside the loop
def multi_output(Input_table, table_name):
    output_table = Input_table
    if table_name=="ONE":
        output_table = Input_table.drop("name")
    elif table_name=="TWO":
        output_table = Input_table.drop("gender")
    elif table_name=="THREE":
        output_table = Input_table.drop("salary")
    return output_table

# Call the function and store the resulting DataFrames
for table_name, table_location in list(DATASETS.items()):
    TRANSFORMS.append(multi_output(table_location, table_name))

len(TRANSFORMS)
TRANSFORMS[0].show()
TRANSFORMS[1].show()
TRANSFORMS[2].show()
+------+------+
|gender|salary|
+------+------+
| M| 60000|
| M| 70000|
| null|400000|
| F|500000|
| | null|
+------+------+
+-------+------+
| name|salary|
+-------+------+
| James| 60000|
|Michael| 70000|
| Robert|400000|
| Maria|500000|
| Jen| null|
+-------+------+
+-------+------+
| name|gender|
+-------+------+
| James| M|
|Michael| M|
| Robert| null|
| Maria| F|
| Jen| |
+-------+------+
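For the second part of the question (saving the results under names like df_1_result), a hedged sketch is to keep the outputs in a dict keyed by table name instead of a list; the results name is mine:
# Store each transformed DataFrame under its table name for later analysis.
results = {name: multi_output(table, name) for name, table in DATASETS.items()}
df_1_result = results["ONE"]
df_2_result = results["TWO"]
df_3_result = results["THREE"]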
I have this dataframe :
cSchema = StructType([StructField("id1", StringType()), StructField("id2", StringType()), StructField("params", StringType())\
,StructField("Col2", IntegerType())])
test_list = [[1, 2, '{"param1": "val1", "param2": "val2"}', 1], [1, 3, '{"param1": "val4", "param2": "val5"}', 3]]
df = spark.createDataFrame(test_list,schema=cSchema)
+---+---+--------------------+----+
|id1|id2| params|Col2|
+---+---+--------------------+----+
| 1| 2|{"param1": "val1"...| 1|
| 1| 3|{"param1": "val4"...| 3|
+---+---+--------------------+----+
I want to explode params into columns :
+---+---+----+------+------+
|id1|id2|Col2|param1|param2|
+---+---+----+------+------+
| 1| 2| 1| val1| val2|
| 1| 3| 3| val4| val5|
+---+---+----+------+------+
So I coded this :
schema2 = StructType([StructField("param1", StringType()), StructField("param2", StringType())])
df.withColumn(
    "params", from_json("params", schema2)
).select(
    col('id1'), col('id2'), col('Col2'), col('params.*')
).show()
The problem is that the params schema is dynamic (variable schema2); it may change from one execution to another, so I need to infer the schema dynamically (it's OK to have all columns as StringType). I can't figure out a way to do this.
Can anyone help me with that, please?
In PySpark the syntax should be:
import pyspark.sql.functions as F
schema = F.schema_of_json(df.select('params').head()[0])
df2 = df.withColumn(
    "params", F.from_json("params", schema)
).select(
    'id1', 'id2', 'Col2', 'params.*'
)
df2.show()
+---+---+----+------+------+
|id1|id2|Col2|param1|param2|
+---+---+----+------+------+
| 1| 2| 1| val1| val2|
| 1| 3| 3| val4| val5|
+---+---+----+------+------+
Here is how you can do it in Scala; hopefully you can translate it to Python.
Get the schema dynamically with schema_of_json from the value and use from_json to read.
val schema = schema_of_json(df.first().getAs[String]("params"))

df.withColumn("params", from_json($"params", schema))
  .select("id1", "id2", "Col2", "params.*")
  .show(false)
If you want to get a larger sample of data to compare, you can read the params field into a list, convert that to an RDD, then read using "spark.read.json()"
params_list = df.select("params").rdd.flatMap(lambda x: x).collect()
params_rdd = sc.parallelize(params_list)
spark.read.json(params_rdd).schema
Caveat here being that you probably don't want to load too much data, as it's all being stuffed into local variables. Try taking the top 1000 or whatever an appropriate sample size may be.
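A hedged sketch of that sampling idea, using limit() so only a bounded number of rows is collected for schema inference (1000 is an arbitrary sample size):
# Sample a bounded number of rows before inferring the JSON schema.
sample_rdd = df.select("params").limit(1000).rdd.map(lambda r: r[0])
inferred_schema = spark.read.json(sample_rdd).schema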
I want to convert my list of dictionaries into a DataFrame. This is the list:
mylist =
[
{"type_activity_id":1,"type_activity_name":"xxx"},
{"type_activity_id":2,"type_activity_name":"yyy"},
{"type_activity_id":3,"type_activity_name":"zzz"}
]
This is my code:
from pyspark.sql.types import StringType
df = spark.createDataFrame(mylist, StringType())
df.show(2,False)
+-----------------------------------------+
| value|
+-----------------------------------------+
|{type_activity_id=1,type_activity_id=xxx}|
|{type_activity_id=2,type_activity_id=yyy}|
|{type_activity_id=3,type_activity_id=zzz}|
+-----------------------------------------+
I assume that I should provide some mapping and types for each column, but I don't know how to do it.
Update:
I also tried this:
schema = ArrayType(
    StructType([StructField("type_activity_id", IntegerType()),
                StructField("type_activity_name", StringType())
               ]))
df = spark.createDataFrame(mylist, StringType())
df = df.withColumn("value", from_json(df.value, schema))
But then I get null values:
+-----+
|value|
+-----+
| null|
| null|
+-----+
In the past, you were able to simply pass a dictionary to spark.createDataFrame(), but this is now deprecated:
mylist = [
{"type_activity_id":1,"type_activity_name":"xxx"},
{"type_activity_id":2,"type_activity_name":"yyy"},
{"type_activity_id":3,"type_activity_name":"zzz"}
]
df = spark.createDataFrame(mylist)
#UserWarning: inferring schema from dict is deprecated,please use pyspark.sql.Row instead
# warnings.warn("inferring schema from dict is deprecated,"
As this warning message says, you should use pyspark.sql.Row instead.
from pyspark.sql import Row
spark.createDataFrame(Row(**x) for x in mylist).show(truncate=False)
#+----------------+------------------+
#|type_activity_id|type_activity_name|
#+----------------+------------------+
#|1 |xxx |
#|2 |yyy |
#|3 |zzz |
#+----------------+------------------+
Here I used ** (keyword argument unpacking) to pass the dictionaries to the Row constructor.
You can do it like this. You will get a dataframe with 2 columns.
mylist = [
{"type_activity_id":1,"type_activity_name":"xxx"},
{"type_activity_id":2,"type_activity_name":"yyy"},
{"type_activity_id":3,"type_activity_name":"zzz"}
]
myJson = sc.parallelize(mylist)
myDf = sqlContext.read.json(myJson)
Output :
+----------------+------------------+
|type_activity_id|type_activity_name|
+----------------+------------------+
| 1| xxx|
| 2| yyy|
| 3| zzz|
+----------------+------------------+
In Spark version 2.4 it is possible to do this directly with:
df=spark.createDataFrame(mylist)
>>> mylist = [
... {"type_activity_id":1,"type_activity_name":"xxx"},
... {"type_activity_id":2,"type_activity_name":"yyy"},
... {"type_activity_id":3,"type_activity_name":"zzz"}
... ]
>>> df1=spark.createDataFrame(mylist)
>>> df1.show()
+----------------+------------------+
|type_activity_id|type_activity_name|
+----------------+------------------+
| 1| xxx|
| 2| yyy|
| 3| zzz|
+----------------+------------------+
I was also facing the same issue when creating a dataframe from a list of dictionaries.
I resolved it using namedtuple.
Below is my code using the data provided.
from collections import namedtuple

final_list = []
mylist = [{"type_activity_id":1,"type_activity_name":"xxx"},
          {"type_activity_id":2,"type_activity_name":"yyy"},
          {"type_activity_id":3,"type_activity_name":"zzz"}
         ]

ExampleTuple = namedtuple('ExampleTuple', ['type_activity_id', 'type_activity_name'])
for my_dict in mylist:
    namedtupleobj = ExampleTuple(**my_dict)
    final_list.append(namedtupleobj)

sqlContext.createDataFrame(final_list).show(truncate=False)
output
+----------------+------------------+
|type_activity_id|type_activity_name|
+----------------+------------------+
|1 |xxx |
|2 |yyy |
|3 |zzz |
+----------------+------------------+
My version information is as follows:
spark: 2.4.0
python: 3.6
It is not necessary to have the mylist variable; since it was available, I used it to create the namedtuple objects, but the namedtuple objects can also be created directly.
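As a hedged illustration of that last note, the namedtuple objects can also be built directly, without the intermediate dictionaries:
# Build the namedtuples directly instead of unpacking dicts.
rows = [ExampleTuple(1, "xxx"), ExampleTuple(2, "yyy"), ExampleTuple(3, "zzz")]
sqlContext.createDataFrame(rows).show(truncate=False)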
I have two data frames to compare. Both have the same number of columns, and the comparison result should contain the mismatching field, the two values, and the ID.
Dataframe one
+-----+---+--------+
| name| id| City|
+-----+---+--------+
| Sam| 3| Toronto|
| BALU| 11| YYY|
|CLAIR| 7|Montreal|
|HELEN| 10| London|
|HELEN| 16| Ottawa|
+-----+---+--------+
Dataframe two
+-------------+-----------+-------------+
|Expected_name|Expected_id|Expected_City|
+-------------+-----------+-------------+
| SAM| 3| Toronto|
| BALU| 11| YYY|
| CLARE| 7| Montreal|
| HELEN| 10| Londn|
| HELEN| 15| Ottawa|
+-------------+-----------+-------------+
Expected Output
+---+------------+--------------+-----+
| ID|Actual_value|Expected_value|Field|
+---+------------+--------------+-----+
| 7| CLAIR| CLARE| name|
| 3| Sam| SAM| name|
| 10| London| Londn| City|
+---+------------+--------------+-----+
Code
Create example data
from pyspark.sql import SQLContext
from pyspark.context import SparkContext
from pyspark.sql.functions import *
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
from pyspark.sql import SparkSession
sc = SparkContext()
sql_context = SQLContext(sc)
spark = SparkSession.builder.getOrCreate()
spark.sparkContext.setLogLevel("ERROR") # log only on fails
df_Actual = sql_context.createDataFrame(
    [("Sam", 3, 'Toronto'), ("BALU", 11, 'YYY'), ("CLAIR", 7, 'Montreal'),
     ("HELEN", 10, 'London'), ("HELEN", 16, 'Ottawa')],
    ["name", "id", "City"]
)

df_Expected = sql_context.createDataFrame(
    [("SAM", 3, 'Toronto'), ("BALU", 11, 'YYY'), ("CLARE", 7, 'Montreal'),
     ("HELEN", 10, 'Londn'), ("HELEN", 15, 'Ottawa')],
    ["Expected_name", "Expected_id", "Expected_City"]
)
Create empty dataframe for Result
field = [
    StructField("ID", StringType(), True),
    StructField("Actual_value", StringType(), True),
    StructField("Expected_value", StringType(), True),
    StructField("Field", StringType(), True)
]
schema = StructType(field)
Df_Result = sql_context.createDataFrame(sc.emptyRDD(), schema)
Join expected and actual on id's
df_cobined = df_Actual.join(df_Expected, (df_Actual.id == df_Expected.Expected_id))
col_names=df_Actual.schema.names
Loop through each column to find mismatches
for col_name in col_names:
    # Filter for column values not matching
    df_comp = df_cobined.filter(col(col_name) != col("Expected_" + col_name))\
                        .select(col('id'), col(col_name), col("Expected_" + col_name))
    # Add not matching column name
    df_comp = df_comp.withColumn("Field", lit(col_name))
    # Add to final result
    Df_Result = Df_Result.union(df_comp)
Df_Result.show()
This code works as expected. However, in the real case I have more columns and millions of rows to compare, and with this code the comparison takes a long time to finish. Is there a better way to improve the performance and get the same result?
One way to avoid doing the union is the following:
Create a list of columns to compare: to_compare
Next select the id column and use pyspark.sql.functions.when to compare the columns. For those with a mismatch, build an array of structs with 3 fields: (Actual_value, Expected_value, Field) for each column in to_compare
Explode the temp array column and drop the nulls
Finally select the id and use col.* to expand the values from the struct into columns.
Code (a struct holds the mismatched Actual_value, Expected_value, and Field for each compared column):
import pyspark.sql.functions as f

# these are the fields you want to compare
to_compare = [c for c in df_Actual.columns if c != "id"]

df_new = df_cobined.select(
        "id",
        f.array([
            f.when(
                f.col(c) != f.col("Expected_" + c),
                f.struct(
                    f.col(c).alias("Actual_value"),
                    f.col("Expected_" + c).alias("Expected_value"),
                    f.lit(c).alias("Field")
                )
            ).alias(c)
            for c in to_compare
        ]).alias("temp")
    )\
    .select("id", f.explode("temp"))\
    .dropna()\
    .select("id", "col.*")
df_new.show()
#+---+------------+--------------+-----+
#| id|Actual_value|Expected_value|Field|
#+---+------------+--------------+-----+
#| 7| CLAIR| CLARE| name|
#| 10| London| Londn| City|
#| 3| Sam| SAM| name|
#+---+------------+--------------+-----+
Join only those records where expected id equals actual and there is mismatch in any other column:
df1.join(df2, (df1.id == df2.id) & ((df1.name != df2.name) | (df1.age != df2.age) | ...))
This means the per-column work is done only across mismatched rows, instead of the whole dataset.
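A hedged sketch of that idea against the example df_Actual/df_Expected from the question, using functools.reduce so the "..." above does not have to be written out by hand (the compare_cols and any_mismatch names are mine):
from functools import reduce
import pyspark.sql.functions as F

# OR together one inequality per compared column.
compare_cols = [c for c in df_Actual.columns if c != "id"]
any_mismatch = reduce(lambda a, b: a | b,
                      [F.col(c) != F.col("Expected_" + c) for c in compare_cols])

# The join keeps only rows whose ids match and where at least one column differs.
df_mismatched = df_Actual.join(
    df_Expected,
    (df_Actual.id == df_Expected.Expected_id) & any_mismatch
)
df_mismatched.show()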
For those who are looking for an answer: I transposed the data frames and then did a comparison.
from pyspark.sql.functions import array, col, explode, struct, lit

def Transposedf(df, by, colheader):
    # Filter dtypes and split into column names and type description
    cols, dtypes = zip(*((c, t) for (c, t) in df.dtypes if c not in by))
    # Spark SQL supports only homogeneous columns
    assert len(set(dtypes)) == 1, "All columns have to be of the same type"
    # Create and explode an array of (column_name, column_value) structs
    kvs = explode(array([struct(lit(c).alias("Field"), col(c).alias(colheader)) for c in cols])).alias("kvs")
    return df.select(by + [kvs]).select(by + ["kvs.Field", "kvs." + colheader])
Then the comparison looks like this
def Compare_df(df_Expected, df_Actual):
    df_combined = (df_Actual
                   .join(df_Expected, ((df_Actual.id == df_Expected.id)
                                       & (df_Actual.Field == df_Expected.Field)
                                       & (df_Actual.Actual_value != df_Expected.Expected_value)))
                   .select([df_Actual.id, df_Actual.Field, df_Actual.Actual_value, df_Expected.Expected_value])
                   )
    return df_combined
I called these two functions as follows:
df_Actual=Transposedf(df_Actual, ["id"],'Actual_value')
df_Expected=Transposedf(df_Expected, ["id"],'Expected_value')
#Compare the expected and actual
df_result=Compare_df(df_Expected,df_Actual)
I'm messing around with dataframes in pyspark 1.4 locally and am having issues getting the dropDuplicates method to work. It keeps returning the error:
"AttributeError: 'list' object has no attribute 'dropDuplicates'"
Not quite sure why as I seem to be following the syntax in the latest documentation.
#loading the CSV file into an RDD in order to start working with the data
rdd1 = sc.textFile("C:\myfilename.csv").map(lambda line: (line.split(",")[0], line.split(",")[1], line.split(",")[2], line.split(",")[3])).collect()
#loading the RDD object into a dataframe and assigning column names
df1 = sqlContext.createDataFrame(rdd1, ['column1', 'column2', 'column3', 'column4']).collect()
#dropping duplicates from the dataframe
df1.dropDuplicates().show()
It is not an import problem. You simply call .dropDuplicates() on a wrong object. While class of sqlContext.createDataFrame(rdd1, ...) is pyspark.sql.dataframe.DataFrame, after you apply .collect() it is a plain Python list, and lists don't provide dropDuplicates method. What you want is something like this:
df1 = (sqlContext
       .createDataFrame(rdd1, ['column1', 'column2', 'column3', 'column4'])
       .dropDuplicates())

df1.collect()
If you have a data frame and want to remove all duplicates, with reference to duplicates in a specific column (called 'colName'):
count before dedupe:
df.count()
do the de-dupe (convert the column you are de-duping to string type):
from pyspark.sql.functions import col
df = df.withColumn('colName',col('colName').cast('string'))
df.drop_duplicates(subset=['colName']).count()
You can use a sorted groupby to check that the duplicates have been removed:
df.groupBy('colName').count().toPandas().set_index("count").sort_index(ascending=False)
In summary, the distinct() and dropDuplicates() methods both remove duplicates, with one difference that is essential: dropDuplicates() lets you consider only a subset of the columns.
data = [("James","","Smith","36636","M",60000),
("James","Rose","","40288","M",70000),
("Robert","","Williams","42114","",400000),
("Maria","Anne","Jones","39192","F",500000),
("Maria","Mary","Brown","","F",0)]
columns = ["first_name","middle_name","last_name","dob","gender","salary"]
df = spark.createDataFrame(data = data, schema = columns)
df.printSchema()
df.show(truncate=False)
df.groupBy('first_name').agg(count(
'first_name').alias("count_duplicates")).filter(
col('count_duplicates') >= 2).show()
df.dropDuplicates(['first_name']).show()
# output
+----------+-----------+---------+-----+------+------+
|first_name|middle_name|last_name|dob |gender|salary|
+----------+-----------+---------+-----+------+------+
|James | |Smith |36636|M |60000 |
|James |Rose | |40288|M |70000 |
|Robert | |Williams |42114| |400000|
|Maria |Anne |Jones |39192|F |500000|
|Maria |Mary |Brown | |F |0 |
+----------+-----------+---------+-----+------+------+
+----------+----------------+
|first_name|count_duplicates|
+----------+----------------+
| James| 2|
| Maria| 2|
+----------+----------------+
+----------+-----------+---------+-----+------+------+
|first_name|middle_name|last_name| dob|gender|salary|
+----------+-----------+---------+-----+------+------+
| James| | Smith|36636| M| 60000|
| Maria| Anne| Jones|39192| F|500000|
| Robert| | Williams|42114| |400000|
+----------+-----------+---------+-----+------+------+
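A hedged side-by-side of the two methods on the example data above (the counts in the comments follow the tables shown):
# distinct() compares every column, so all five rows are kept;
# dropDuplicates() restricted to first_name keeps one row per name.
print(df.distinct().count())                      # 5
print(df.dropDuplicates(["first_name"]).count())  # 3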