I am trying to extract regex patterns from a column using PySpark. I have one data frame which contains the regex patterns and another which contains the strings I'd like to match.
columns = ['id', 'text']
vals = [
(1, 'here is a Match1'),
(2, 'Do not match'),
(3, 'Match2 is another example'),
(4, 'Do not match'),
(5, 'here is a Match1')
]
df_to_extract = sql.createDataFrame(vals, columns)
columns = ['id', 'Regex', 'Replacement']
vals = [
(1, 'Match1', 'Found1'),
(2, 'Match2', 'Found2'),
]
df_regex = sql.createDataFrame(vals, columns)
I'd like to match each 'Regex' value against the 'text' column of 'df_to_extract', and produce a result containing each matching id together with the 'Replacement' that corresponds to the 'Regex'. For example:
+---+------------+
| id| replacement|
+---+------------+
| 1| Found1|
| 3| Found2|
| 5| Found1|
+---+------------+
Thanks!
One way is to use pyspark.sql.functions.expr, which allows you to use a column value as a parameter in your join condition.
For example:
from pyspark.sql.functions import expr
df_to_extract.alias("e")\
.join(
df_regex.alias("r"),
on=expr(r"e.text LIKE concat('%', r.Regex, '%')"),
how="inner"
)\
.select("e.id", "r.Replacement")\
.show()
#+---+-----------+
#| id|Replacement|
#+---+-----------+
#| 1| Found1|
#| 3| Found2|
#| 5| Found1|
#+---+-----------+
Here I used the SQL expression:
e.text LIKE concat('%', r.Regex, '%')
which joins all rows where the text column is LIKE the Regex column, with the % acting as wildcards to capture anything before and after.
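Note that LIKE does plain substring matching. Since the column holds regex patterns, a variant using RLIKE might look like this (a sketch, assuming the patterns are valid Spark/Java regular expressions):

from pyspark.sql.functions import expr

# hedged sketch: same join, but treating r.Regex as a true regular expression
df_to_extract.alias("e")\
    .join(df_regex.alias("r"), on=expr("e.text RLIKE r.Regex"), how="inner")\
    .select("e.id", "r.Replacement")\
    .show()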
Dataframe Schema:
root
 |-- LAST_UPDATE_DATE
 |-- ADDR_1
 |-- ADDR_2
 |-- ERROR
If the "ERROR" col is null i want to change df like :
df = df.withColumn("LAST_UPDATE_DATE", current_timestamp()) \
.withColumn("ADDR_1", lit("ADDR_1")) \
.withColumn("ADDR_2", lit("ADDR_2"))
else :
df = df.withColumn("ADDR_1", lit("0"))
I have checked "when-otherwise", but only one column can be changed in that scenario.
Desired output :
//+----------------+------+------+-----+
//|LAST_UPDATE_DATE|ADDR_1|ADDR_2|ERROR|
//+----------------+------+------+-----+
//|2022-06-17 07:54|ADDR_1|ADDR_2| null|
//| null| null| null| 1|
//+----------------+------+------+-----+
Why not use when-otherwise for each withColumn? The condition can be taken out for convenience.
Example:
import pyspark.sql.functions as F

error_event = F.col('ERROR').isNull()
df = (
    df
    .withColumn('LAST_UPDATE_DATE', F.when(error_event, F.current_timestamp()))
    .withColumn('ADDR_1', F.when(error_event, F.lit('ADDR_1'))
                           .otherwise(1))
)
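Extending that same pattern to all of the columns from the question would look something like this (a sketch; the otherwise branch of ADDR_1 mirrors the question's else logic, and columns with no otherwise stay null):

from pyspark.sql import functions as F

error_event = F.col('ERROR').isNull()
df = (
    df
    # no error: stamp the row and fill both address columns; on error, only ADDR_1 gets '0'
    .withColumn('LAST_UPDATE_DATE', F.when(error_event, F.current_timestamp()))
    .withColumn('ADDR_1', F.when(error_event, F.lit('ADDR_1')).otherwise(F.lit('0')))
    .withColumn('ADDR_2', F.when(error_event, F.lit('ADDR_2')))
)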
I have this dataframe :
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

cSchema = StructType([
    StructField("id1", StringType()),
    StructField("id2", StringType()),
    StructField("params", StringType()),
    StructField("Col2", IntegerType())
])
test_list = [['1', '2', '{"param1": "val1", "param2": "val2"}', 1], ['1', '3', '{"param1": "val4", "param2": "val5"}', 3]]
df = spark.createDataFrame(test_list, schema=cSchema)
+---+---+--------------------+----+
|id1|id2| params|Col2|
+---+---+--------------------+----+
| 1| 2|{"param1": "val1"...| 1|
| 1| 3|{"param1": "val4"...| 3|
+---+---+--------------------+----+
I want to explode params into columns:
+---+---+----+------+------+
|id1|id2|Col2|param1|param2|
+---+---+----+------+------+
| 1| 2| 1| val1| val2|
| 1| 3| 3| val4| val5|
+---+---+----+------+------+
So I coded this :
from pyspark.sql.functions import from_json, col

schema2 = StructType([StructField("param1", StringType()), StructField("param2", StringType())])
df.withColumn(
    "params", from_json("params", schema2)
).select(
    col('id1'), col('id2'), col('Col2'), col('params.*')
).show()
The problem is that the params schema is dynamic (variable schema2); it may change from one execution to another, so I need to infer the schema dynamically (it's OK to have all columns as strings)... And I can't figure out a way to do this.
Can anyone help me with that, please?
In PySpark the syntax should be:
import pyspark.sql.functions as F
schema = F.schema_of_json(df.select('params').head()[0])
df2 = df.withColumn(
"params", F.from_json("params", schema)
).select(
'id1', 'id2', 'Col2', 'params.*'
)
df2.show()
+---+---+----+------+------+
|id1|id2|Col2|param1|param2|
+---+---+----+------+------+
| 1| 2| 1| val1| val2|
| 1| 3| 3| val4| val5|
+---+---+----+------+------+
Here is how you can do it in Scala; hopefully you can adapt it to Python.
Get the schema dynamically with schema_of_json from the value and use from_json to read.
val schema = schema_of_json(df.first().getAs[String]("params"))
df.withColumn("params", from_json($"params", schema))
.select("id1", "id2", "Col2", "params.*")
.show(false)
If you want to infer the schema from a larger sample of the data, you can read the params field into a list, convert that to an RDD, then read it using spark.read.json().
params_list = df.select("params").rdd.flatMap(lambda x: x).collect()
params_rdd = sc.parallelize(params_list)
spark.read.json(params_rdd).schema
Caveat here being that you probably don't want to load too much data, as it's all being stuffed into local variables. Try taking the top 1000 or whatever an appropriate sample size may be.
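A minimal sketch of that sampling approach, assuming the df, spark, and sc objects from the question, might look like:

import pyspark.sql.functions as F

# infer the schema from (at most) the first 1000 params values instead of the whole column
params_sample = df.select("params").limit(1000).rdd.flatMap(lambda x: x).collect()
sampled_schema = spark.read.json(sc.parallelize(params_sample)).schema
df2 = df.withColumn("params", F.from_json("params", sampled_schema)).select("id1", "id2", "Col2", "params.*")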
I have a Spark dataframe which has a column 'X'. The column contains elements which are in the form:
u'[23,4,77,890,455,................]'
How can I convert this unicode string to a list? That is, my output should be
[23,4,77,890,455...................]
I have to apply this to each element in the 'X' column.
I have tried df.withColumn("X_new", ast.literal_eval(x)) and got the error "Malformed String".
I also tried df.withColumn("X_new", json.loads(x)) and got the error "Expected String or Buffer",
and df.withColumn("X_new", json.dumps(x)), which says JSON not serialisable,
and also df_2 = df.rdd.map(lambda x: x.encode('utf-8')), which says rdd has no attribute encode.
I don't want to use collect and toPandas() because they are memory consuming (but if that's the only way, please do tell). I am using PySpark.
Update: cph_sto gave an answer using a UDF. Though it worked well, I find that it is slow. Can somebody suggest any other method?
import ast
from pyspark.sql.functions import udf, col
values = [(u'[23,4,77,890.455]',10),(u'[11,2,50,1.11]',20),(u'[10.05,1,22.04]',30)]
df = sqlContext.createDataFrame(values,['list','A'])
df.show()
+-----------------+---+
| list| A|
+-----------------+---+
|[23,4,77,890.455]| 10|
| [11,2,50,1.11]| 20|
| [10.05,1,22.04]| 30|
+-----------------+---+
# Creating a UDF to convert the string list to proper list
string_list_to_list = udf(lambda row: ast.literal_eval(row))
df = df.withColumn('list',string_list_to_list(col('list')))
df.show()
+--------------------+---+
| list| A|
+--------------------+---+
|[23, 4, 77, 890.455]| 10|
| [11, 2, 50, 1.11]| 20|
| [10.05, 1, 22.04]| 30|
+--------------------+---+
Extension of the Q, as asked by OP -
# Creating a UDF to find length of resulting list.
length_list = udf(lambda row: len(row))
df = df.withColumn('length_list',length_list(col('list')))
df.show()
+--------------------+---+-----------+
| list| A|length_list|
+--------------------+---+-----------+
|[23, 4, 77, 890.455]| 10| 4|
| [11, 2, 50, 1.11]| 20| 4|
| [10.05, 1, 22.04]| 30| 3|
+--------------------+---+-----------+
Since it's a string, you could remove the first and last characters:
From '[23,4,77,890,455]' to '23,4,77,890,455'
Then apply the split() function to generate an array, taking ',' as the delimiter, as sketched below.
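A minimal sketch of that approach, assuming the 'X' column from the question (the result is an array of strings; cast it afterwards if numeric values are needed):

from pyspark.sql import functions as F

# strip the leading '[' and trailing ']', then split on ',' into an array of strings
df = df.withColumn(
    "X_new",
    F.split(F.expr("substring(X, 2, length(X) - 2)"), ",")
)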
Please use the below code to strip the unicode; note that each RDD element is a Row, so the string has to be pulled out of the 'X' column first:
df.rdd.map(lambda row: row.X.encode("ascii", "ignore"))
I have 2 data frames to compare; both have the same number of columns, and the comparison result should contain the field that is mismatching, the actual and expected values, and the ID.
Dataframe one
+-----+---+--------+
| name| id| City|
+-----+---+--------+
| Sam| 3| Toronto|
| BALU| 11| YYY|
|CLAIR| 7|Montreal|
|HELEN| 10| London|
|HELEN| 16| Ottawa|
+-----+---+--------+
Dataframe two
+-------------+-----------+-------------+
|Expected_name|Expected_id|Expected_City|
+-------------+-----------+-------------+
| SAM| 3| Toronto|
| BALU| 11| YYY|
| CLARE| 7| Montreal|
| HELEN| 10| Londn|
| HELEN| 15| Ottawa|
+-------------+-----------+-------------+
Expected Output
+---+------------+--------------+-----+
| ID|Actual_value|Expected_value|Field|
+---+------------+--------------+-----+
| 7| CLAIR| CLARE| name|
| 3| Sam| SAM| name|
| 10| London| Londn| City|
+---+------------+--------------+-----+
Code
Create example data
from pyspark.sql import SQLContext
from pyspark.context import SparkContext
from pyspark.sql.functions import *
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
from pyspark.sql import SparkSession
sc = SparkContext()
sql_context = SQLContext(sc)
spark = SparkSession.builder.getOrCreate()
spark.sparkContext.setLogLevel("ERROR") # log only on fails
df_Actual = sql_context.createDataFrame(
[("Sam", 3,'Toronto'), ("BALU", 11,'YYY'), ("CLAIR", 7,'Montreal'),
("HELEN", 10,'London'), ("HELEN", 16,'Ottawa')],
["name", "id","City"]
)
df_Expected = sql_context.createDataFrame(
[("SAM", 3,'Toronto'), ("BALU", 11,'YYY'), ("CLARE", 7,'Montreal'),
("HELEN", 10,'Londn'), ("HELEN", 15,'Ottawa')],
["Expected_name", "Expected_id","Expected_City"]
)
Create empty dataframe for Result
field = [
StructField("ID",StringType(), True),
StructField("Actual_value", StringType(), True),
StructField("Expected_value", StringType(), True),
StructField("Field", StringType(), True)
]
schema = StructType(field)
Df_Result = sql_context.createDataFrame(sc.emptyRDD(), schema)
Join expected and actual on id's
df_cobined = df_Actual.join(df_Expected, (df_Actual.id == df_Expected.Expected_id))
col_names=df_Actual.schema.names
Loop through each column to find mismatches
for col_name in col_names:
    # Filter for column values not matching
    df_comp = df_cobined.filter(col(col_name) != col("Expected_" + col_name))\
        .select(col('id'), col(col_name), col("Expected_" + col_name))
    # Add not matching column name
    df_comp = df_comp.withColumn("Field", lit(col_name))
    # Add to final result
    Df_Result = Df_Result.union(df_comp)
Df_Result.show()
This code works as expected. However, in the real case, I have more columns and millions of rows to compare. With this code, it takes more time to finish the comparison. Is there a better way to increase the performance and get the same result?
One way to avoid doing the union is the following:
1. Create a list of columns to compare: to_compare.
2. Select the id column and use pyspark.sql.functions.when to compare the columns. For those with a mismatch, build an array of structs with 3 fields, (Actual_value, Expected_value, Field), one for each column in to_compare.
3. Explode the temp array column and drop the nulls.
4. Finally select the id and use col.* to expand the values from the struct into columns.
Code (an array of structs holds the mismatched field together with its actual and expected values):
import pyspark.sql.functions as f
# these are the fields you want to compare
to_compare = [c for c in df_Actual.columns if c != "id"]
df_new = df_cobined.select(
    "id",
    f.array([
        f.when(
            f.col(c) != f.col("Expected_" + c),
            f.struct(
                f.col(c).alias("Actual_value"),
                f.col("Expected_" + c).alias("Expected_value"),
                f.lit(c).alias("Field")
            )
        ).alias(c)
        for c in to_compare
    ]).alias("temp")
)\
    .select("id", f.explode("temp"))\
    .dropna()\
    .select("id", "col.*")
df_new.show()
#+---+------------+--------------+-----+
#| id|Actual_value|Expected_value|Field|
#+---+------------+--------------+-----+
#| 7| CLAIR| CLARE| name|
#| 10| London| Londn| City|
#| 3| Sam| SAM| name|
#+---+------------+--------------+-----+
Join only those records where the expected id equals the actual one and there is a mismatch in any other column:
df1.join(df2, (df1.id == df2.id) & ((df1.name != df2.name) | (df1.age != df2.age) | ...))
This means you only work with the mismatched rows, instead of the whole dataset.
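A runnable version of that idea for the example DataFrames might look like this (a sketch; functools.reduce is used to OR the per-column mismatch conditions together):

from functools import reduce
from pyspark.sql import functions as F

compare_cols = [c for c in df_Actual.columns if c != "id"]
# OR together one inequality condition per column, then join on id plus "any mismatch"
any_mismatch = reduce(lambda a, b: a | b, [F.col(c) != F.col("Expected_" + c) for c in compare_cols])
df_mismatched = df_Actual.join(df_Expected, (df_Actual.id == df_Expected.Expected_id) & any_mismatch)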
For those who are looking for an answer: I transposed the data frames and then did a comparison.
from pyspark.sql.functions import array, col, explode, struct, lit
def Transposedf(df, by, colheader):
    # Filter dtypes and split into column names and type description
    cols, dtypes = zip(*((c, t) for (c, t) in df.dtypes if c not in by))
    # Spark SQL supports only homogeneous columns
    assert len(set(dtypes)) == 1, "All columns have to be of the same type"
    # Create and explode an array of (column_name, column_value) structs
    kvs = explode(array([struct(lit(c).alias("Field"), col(c).alias(colheader)) for c in cols])).alias("kvs")
    return df.select(by + [kvs]).select(by + ["kvs.Field", "kvs." + colheader])
Then the comparison looks like this
def Compare_df(df_Expected, df_Actual):
    df_combined = (df_Actual
        .join(df_Expected, ((df_Actual.id == df_Expected.id)
                            & (df_Actual.Field == df_Expected.Field)
                            & (df_Actual.Actual_value != df_Expected.Expected_value)))
        .select([df_Actual.id, df_Actual.Field, df_Actual.Actual_value, df_Expected.Expected_value])
    )
    return df_combined
I called these 2 functions as follows:
df_Actual=Transposedf(df_Actual, ["id"],'Actual_value')
df_Expected=Transposedf(df_Expected, ["id"],'Expected_value')
#Compare the expected and actual
df_result=Compare_df(df_Expected,df_Actual)
I have a Dataframe with two columns: BrandWatchErwaehnungID and word_counts.
The word_counts column is the output of CountVectorizer (a sparse vector). After dropping the empty rows, I created two new columns, one with the indices of the sparse vector and one with their values.
help0 = countedwords_text['BrandWatchErwaehnungID', 'word_counts'].rdd\
    .filter(lambda x: x[1].indices.size != 0)\
    .map(lambda x: (x[0], x[1], DenseVector(x[1].indices), DenseVector(x[1].values))).toDF()\
    .withColumnRenamed("_1", "BrandWatchErwaehnungID").withColumnRenamed("_2", "word_counts")\
    .withColumnRenamed("_3", "word_indices").withColumnRenamed("_4", "single_word_counts")
I needed to convert them to dense vectors before adding them to my Dataframe because Spark did not accept numpy.ndarray. My problem is that I now want to explode that Dataframe on the word_indices column, but the explode method from pyspark.sql.functions only supports array or map types as input.
I have tried:
help1 = help0.withColumn('b' , explode(help0.word_indices))
and get the following error:
cannot resolve 'explode(`word_indices`)' due to data type mismatch: input to function explode should be array or map type
Afterwards I tried:
help1 = help0.withColumn('b' , explode(help0.word_indices.toArray()))
which also did not work...
Any suggestions?
You have to use udf:
from pyspark.sql.functions import udf, explode
from pyspark.sql.types import *
from pyspark.ml.linalg import *
#udf("array<integer>")
def indices(v):
if isinstance(v, DenseVector):
return list(range(len(v)))
if isinstance(v, SparseVector):
return v.indices.tolist()
df = spark.createDataFrame([
(1, DenseVector([1, 2, 3])), (2, SparseVector(5, {4: 42}))],
("id", "v"))
df.select("id", explode(indices("v"))).show()
# +---+---+
# | id|col|
# +---+---+
# | 1| 0|
# | 1| 1|
# | 1| 2|
# | 2| 4|
# +---+---+
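Applied to the DataFrame from the question, this would look something like the following (a sketch; it assumes the help0 DataFrame and column names shown above, and reuses the indices udf since it handles SparseVector directly):

# hedged sketch: one output row per non-zero index of the sparse word_counts vector
help1 = help0.withColumn("b", explode(indices("word_counts")))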