PySpark: Is there an equivalent method to pandas info()?

Is there an equivalent to the pandas info() method in PySpark?
I am trying to get basic statistics about a dataframe in PySpark, such as:
Number of columns and rows
Number of nulls
Size of the dataframe
The info() method in pandas provides all of these statistics.
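There is no single built-in equivalent, but the row and column counts are easy to get directly. A minimal sketch, assuming your dataframe is df:
n_rows = df.count()        # number of rows (triggers a Spark job)
n_cols = len(df.columns)   # number of columns
df.printSchema()           # column names and types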

There is also the summary method, which gives the row count and some other descriptive statistics. It is similar to the describe method already mentioned.
From PySpark manual:
df.summary().show()
+-------+------------------+-----+
|summary| age| name|
+-------+------------------+-----+
| count| 2| 2|
| mean| 3.5| null|
| stddev|2.1213203435596424| null|
| min| 2|Alice|
| 25%| 2| null|
| 50%| 2| null|
| 75%| 5| null|
| max| 5| Bob|
+-------+------------------+-----+
or
df.select("age", "name").summary("count").show()
+-------+---+----+
|summary|age|name|
+-------+---+----+
| count| 2| 2|
+-------+---+----+

To figure out type information about the dataframe you can use df.schema (or df.printSchema() for a tree view):
spark.read.csv('matchCount.csv', header=True).schema
StructType(List(StructField(categ,StringType,true),StructField(minv,StringType,true),StructField(maxv,StringType,true),StructField(counts,StringType,true),StructField(cutoff,StringType,true)))
For summary stats you can also have a look at the describe method in the documentation.
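For example, a quick one-liner on the same kind of dataframe (the age and name columns here are just illustrative):
df.describe('age', 'name').show()   # count, mean, stddev, min, max per column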

Check this answer to get a count of the null and not null values.
from pyspark.sql.functions import isnan, when, count, col
import numpy as np
df = spark.createDataFrame(
    [(1, 1, None), (1, 2, float(5)), (1, 3, np.nan), (1, 4, None), (1, 5, float(10)), (1, 6, float('nan')), (1, 6, float('nan'))],
    ('session', "timestamp1", "id2"))
df.show()
# +-------+----------+----+
# |session|timestamp1| id2|
# +-------+----------+----+
# | 1| 1|null|
# | 1| 2| 5.0|
# | 1| 3| NaN|
# | 1| 4|null|
# | 1| 5|10.0|
# | 1| 6| NaN|
# | 1| 6| NaN|
# +-------+----------+----+
df.select([count(when(isnan(c), c)).alias(c) for c in df.columns]).show()
# +-------+----------+---+
# |session|timestamp1|id2|
# +-------+----------+---+
# | 0| 0| 3|
# +-------+----------+---+
df.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in df.columns]).show()
# +-------+----------+---+
# |session|timestamp1|id2|
# +-------+----------+---+
# | 0| 0| 5|
# +-------+----------+---+
df.describe().show()
# +-------+-------+------------------+---+
# |summary|session| timestamp1|id2|
# +-------+-------+------------------+---+
# | count| 7| 7| 5|
# | mean| 1.0| 3.857142857142857|NaN|
# | stddev| 0.0|1.9518001458970662|NaN|
# | min| 1| 1|5.0|
# | max| 1| 6|NaN|
# +-------+-------+------------------+---+
There is no equivalent to pandas.DataFrame.info() that I know of.
printSchema is useful, and toPandas().info() works for small dataframes, but when I use pandas.DataFrame.info() I am often looking at the null values.

I could not find a good answer, so I use the slight cheat of
dataFrame.toPandas().info()
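Be aware that toPandas() collects the entire dataframe to the driver, so this only works when the data fits in driver memory. A safer variant is to inspect a sample first (the 1000-row limit below is an arbitrary choice):
dataFrame.limit(1000).toPandas().info()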

I wrote a pyspark function that emulates Pandas.DataFrame.info()
from collections import Counter
def spark_info(df, abbreviate_columns=True, include_nested_types=False, count=None):
    """Similar to Pandas.DataFrame.info which produces output like:
    <class 'pandas.core.frame.DataFrame'>
    RangeIndex: 201100 entries, 0 to 201099
    Columns: 151 entries, first_col to last_col
    dtypes: float64(20), int64(6), object(50)
    memory usage: 231.7+ MB
    """
    classinfo = "<class 'pyspark.sql.dataframe.DataFrame'>"
    _cnt = count if count else df.count()
    numrows = f"Total Rows: {str(_cnt)}"
    _cols = (
        ', to '.join([df.columns[0], df.columns[-1]])
        if abbreviate_columns
        else ', '.join(df.columns))
    columns = f"{len(df.columns)} entries: {_cols}"
    _typs = [
        col.dataType
        for col in df.schema
        if include_nested_types or (
            'ArrayType' not in str(col.dataType) and
            'StructType' not in str(col.dataType) and
            'MapType' not in str(col.dataType))
    ]
    dtypes = ', '.join(
        f"{str(typ)}({cnt})"
        for typ, cnt in Counter(_typs).items())
    mem = 'memory usage: ? bytes'
    return '\n'.join([classinfo, numrows, columns, dtypes, mem])
I wasn't sure how to estimate the size of a PySpark dataframe. It depends on the full Spark execution plan and configuration, but maybe try this answer for ideas.
Note that not all dtype summaries are included; by default nested types are excluded. Also, df.count() is computed, which can take a while, unless you compute it first and pass it in.
Suggested usage:
>>> df = spark.createDataFrame(((1, 'a', 2),(2,'b',3)), ['id', 'letter', 'num'])
>>> print(spark_info(df, count=2))
<class 'pyspark.sql.dataframe.DataFrame'>
Total Rows: 2
3 entries: id, to num
LongType(2), StringType(1)
memory usage: ? bytes

Related

pyspark iterate N rows from Data Frame to each execution

def fun_1(csv):
    # returns int[] of length = number of newlines in string csv
    ...

def fun_2(csv):  # my workaround to pass one CSV line at a time
    return fun_1(csv)[0]
The input data frame is df:
+----+----+-----+
|col1|col2|CSVs |
+----+----+-----+
| 1| a|2,0,1|
| 2| b|2,0,2|
| 3| c|2,0,3|
| 4| a|2,0,1|
| 5| b|2,0,2|
| 6| c|2,0,3|
| 7| a|2,0,1|
+----+----+-----+
Below is a code snippet which works but takes a long time:
from pyspark.sql.functions import udf
from pyspark.sql import functions as sf
funudf = udf(fun_2) # wish it could be fun_1
df = df.withColumn('pred', funudf(sf.col('csv')))
fun_1 has a memory issue and can only handle at most 50000 rows at a time; I wish I could use funudf = udf(fun_1).
Hence, how can I split the PySpark DF into segments of 50000 rows and call funudf -> fun_1?
The output should have two columns: 'col1' from the input and the funudf return value.
You can achieve the desired result of forcing PySpark to operate on fixed batches of rows by using the groupByKey method exposed in the RDD API. Using groupByKey will force PySpark to shuffle all the data for a single key to a single executor.
NOTE: for this very same reason using groupByKey is often discouraged because of the network cost.
Strategy:
1. Add a column that groups your data into batches of the desired size, and groupByKey.
2. Define a function that reproduces the logic of your UDF (and also returns an id for joining later on). It operates on a pyspark.resultiterable.ResultIterable, the result of groupByKey. Apply the function to your groups using mapValues.
3. Convert the resulting RDD into a DataFrame and join it back in.
Example:
import pandas as pd
from pyspark.sql import types as t

# Synthesize DF
data = {'_id': range(9), 'group': ['a', 'b', 'c', 'a', 'b', 'c', 'a', 'b', 'c'], 'vals': [2.0*i for i in range(9)]}
df = spark.createDataFrame(pd.DataFrame(data))
df.show()
##
# Step - 1 Convert to rdd and groupByKey to force each group to separate executor
##
kv = df.rdd.map(lambda r: (r.group, [r._id, r.group, r.vals]))
groups = kv.groupByKey()
##
# Step 2 - Calculate function
##
# Dummy function standing in for the real UDF logic: triples each group's vals
def mult3(ditr):
    data = ditr.data
    ids = [v[0] for v in data]
    vals = [3 * v[2] for v in data]
    return list(zip(ids, vals))
# run mult3 and flatten results
mv = groups.mapValues(mult3).map(lambda r: r[1]).flatMap(lambda r: r)  # rdd[(id, val)]
##
# Step 3 - Join results back into base DF
##
# convert results into a DF and join back in
schema = t.StructType([t.StructField('_id', t.LongType()), t.StructField('vals_x_3', t.FloatType())])
df_vals = spark.createDataFrame(mv, schema)
joined = df.join(df_vals, '_id')
joined.show()
>>>
+---+-----+----+
|_id|group|vals|
+---+-----+----+
| 0| a| 0.0|
| 1| b| 2.0|
| 2| c| 4.0|
| 3| a| 6.0|
| 4| b| 8.0|
| 5| c|10.0|
| 6| a|12.0|
| 7| b|14.0|
| 8| c|16.0|
+---+-----+----+
+---+-----+----+--------+
|_id|group|vals|vals_x_3|
+---+-----+----+--------+
| 0| a| 0.0| 0.0|
| 7| b|14.0| 42.0|
| 6| a|12.0| 36.0|
| 5| c|10.0| 30.0|
| 1| b| 2.0| 6.0|
| 3| a| 6.0| 18.0|
| 8| c|16.0| 48.0|
| 2| c| 4.0| 12.0|
| 4| b| 8.0| 24.0|
+---+-----+----+--------+

Spark SQL Dataset: Split Multiple Array Columns to individual rows

I'm new to Spark SQL and the Dataset / Dataframe API.
My Dataset has a number of columns, 2 of which hold multiple values / arrays.
I want to step through the arrays of each row positionally and output a new row for each set of corresponding positional entries in the arrays, as shown in the 2 tables below.
For example:
Input dataframe / dataset
+---+---------+-----+
| id| le|leloc|
+---+---------+-----+
| 1|[aaa,bbb]|[1,2]|
| 2|[ccc,ddd]|[3,4]|
+---+---------+-----+
Expected Output dataset
I need output as per below, the data is transformed from columns to rows:
+---+---------+-----+
| id| le|leloc|
+---+---------+-----+
| 1|aaa |1 |
| 1|bbb |2 |
| 2|ccc |3 |
| 2|ddd |4 |
+---+---------+-----+
%python
from pyspark.sql.functions import *
from pyspark.sql.types import *
# Gen some data
df1 = spark.createDataFrame([ ( 1, list(['A','B','X']), list(['1', '2', '8']) ) for x in range(2)], ['value1', 'array1', 'array2'] )
df2 = spark.createDataFrame([ ( 2, list(['C','D','Y']), list(['3', '4', '9']) ) for x in range(2)], ['value1', 'array1', 'array2'] )
df = df1.union(df2).distinct()
# from here specifically for you
col_temp_expr = "transform(array1, (x, i) -> concat(x, ',', array2[i]))"
dfA = df.withColumn("col_temp", expr(col_temp_expr))
dfB = dfA.select("value1", "array2", explode((col("col_temp")))) # Not an array
dfC = dfB.withColumn('tempArray', split(dfB['col'], ',')) # Now an array
dfC.select("value1", dfC.tempArray[0], dfC.tempArray[1]).show()
returns:
+------+------------+------------+
|value1|tempArray[0]|tempArray[1]|
+------+------------+------------+
| 1| A| 1|
| 1| B| 2|
| 1| X| 8|
| 2| C| 3|
| 2| D| 4|
| 2| Y| 9|
+------+------------+------------+
You can rename the cols. This example had more elements per array than yours.
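On Spark 2.4+ you can also skip the concat/split round trip by zipping the arrays into structs directly inside the transform expression; a sketch along the same lines, using the column names of the generated df above:
from pyspark.sql.functions import expr, explode, col

zip_expr = "transform(array1, (x, i) -> named_struct('le', x, 'leloc', array2[i]))"
dfZ = (df
       .withColumn("z", explode(expr(zip_expr)))
       .select("value1", col("z.le"), col("z.leloc")))
dfZ.show()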

Find count of distinct values between two same values in a csv file using pyspark

I'm working in PySpark to deal with big CSV files (more than 50 GB).
Now I need to find the number of distinct values between two references to the same value.
For example,
input dataframe:
+----+
|col1|
+----+
| a|
| b|
| c|
| c|
| a|
| b|
| a|
+----+
output dataframe:
+----+-----+
|col1|col2 |
+----+-----+
| a| null|
| b| null|
| c| null|
| c| 0|
| a| 2|
| b| 2|
| a| 1|
+----+-----+
I have been struggling with this for the past week. I tried window functions and many other things in Spark, but couldn't get anywhere. It would be a great help if someone knows how to solve this. Thank you.
Comment if you need any clarification on the question.
I am providing a solution, with some assumptions.
Assuming the previous reference can be found within at most the previous 'n' rows: if 'n' is a reasonably small value, I think this is a good solution.
I assumed you can find the previous reference within 5 rows.
from pyspark.sql import Window
from pyspark.sql.functions import udf, col, lag, array, monotonically_increasing_id
from pyspark.sql.types import IntegerType

def get_distincts(lst, current_value):
    cnt = {}
    flag = False
    for i in lst:
        if current_value == i:
            flag = True
            break
        else:
            cnt[i] = "some_value"
    if flag:
        return len(cnt)
    else:
        return None

get_distincts_udf = udf(get_distincts, IntegerType())

df = spark.createDataFrame([["a"], ["b"], ["c"], ["c"], ["a"], ["b"], ["a"]]).toDF("col1")
# You can replace this, if you have some unique id column
df = df.withColumn("seq_id", monotonically_increasing_id())
window = Window.orderBy("seq_id")
df = df.withColumn("list", array([lag(col("col1"), i, None).over(window) for i in range(1, 6)]))
df = df.withColumn("col2", get_distincts_udf(col('list'), col('col1'))).drop('seq_id', 'list')
df.show()
which results
+----+----+
|col1|col2|
+----+----+
| a|null|
| b|null|
| c|null|
| c| 0|
| a| 2|
| b| 2|
| a| 1|
+----+----+
You can try the following approach:
add a monotonically_increasing id column to keep track of the order of rows
find prev_id for each col1 and save the result to a new df
for the new DF (alias 'd1'), make a LEFT JOIN to the DF itself (alias 'd2') with the condition (d2.id > d1.prev_id) & (d2.id < d1.id)
then groupby('d1.col1', 'd1.id') and aggregate with countDistinct('d2.col1')
The code based on the above logic and your sample data is shown below:
from pyspark.sql import functions as F, Window
df1 = spark.createDataFrame([ (i,) for i in list("abccaba")], ["col1"])
# create a WinSpec partitioned by col1 so that we can find the prev_id
win = Window.partitionBy('col1').orderBy('id')
# set up id and prev_id
df11 = df1.withColumn('id', F.monotonically_increasing_id())\
.withColumn('prev_id', F.lag('id').over(win))
# check the newly added columns
df11.sort('id').show()
# +----+---+-------+
# |col1| id|prev_id|
# +----+---+-------+
# | a| 0| null|
# | b| 1| null|
# | c| 2| null|
# | c| 3| 2|
# | a| 4| 0|
# | b| 5| 1|
# | a| 6| 4|
# +----+---+-------+
# let's cache the new dataframe
df11.persist()
# do a self-join on id and prev_id and then do the aggregation
df12 = df11.alias('d1') \
.join(df11.alias('d2')
, (F.col('d2.id') > F.col('d1.prev_id')) & (F.col('d2.id') < F.col('d1.id')), how='left') \
.select('d1.col1', 'd1.id', F.col('d2.col1').alias('ids')) \
.groupBy('col1','id') \
.agg(F.countDistinct('ids').alias('distinct_values'))
# display the result
df12.sort('id').show()
# +----+---+---------------+
# |col1| id|distinct_values|
# +----+---+---------------+
# | a| 0| 0|
# | b| 1| 0|
# | c| 2| 0|
# | c| 3| 0|
# | a| 4| 2|
# | b| 5| 2|
# | a| 6| 1|
# +----+---+---------------+
# release the cached df11
df11.unpersist()
Note that you will need to keep this id column to sort the rows; otherwise the resulting rows will come back in an arbitrary order each time you collect them.
Here 'block' is nothing but the character/value you want to read and search for from the csv; the snippet below assumes you run this logic once per block while iterating over them.
reuse_distance = []
block_dict = {}
stack_dict = {}
stack_list = []
reuse_list = []
counter_reuse = 0
counter_stack = 0

for block in blocks:  # 'blocks' is assumed to be the sequence of values read from the csv
    stack_dist = -1
    reuse_dist = -1
    if block in block_dict:
        reuse_dist = counter_reuse - block_dict[block] - 1
        block_dict[block] = counter_reuse
        counter_reuse += 1
        stack_dist_ind = stack_list.index(block)
        stack_dist = counter_stack - stack_dist_ind - 1
        del stack_list[stack_dist_ind]
        stack_list.append(block)
    else:
        block_dict[block] = counter_reuse
        counter_reuse += 1
        counter_stack += 1
        stack_list.append(block)
    reuse_distance.append([block, stack_dist, reuse_dist])

I want to fill missing values with the last row value in PySpark:

My df has multiple columns.
Query I tried:
df = df.withColumn('Column_required', F.when(df.Column_present > 1, df.Column_present).otherwise(lag(df.Column_present)))
Not able to get otherwise to work.
Column on which I want the operation:
Column_present   Column_required
40000            40000
Null             40000
Null             40000
500              500
Null             500
Null             500
I think the solution might be to use last instead of lag:
from pyspark.sql import Window
from pyspark.sql.functions import when, last

df_new = spark.createDataFrame([
    (1, 40000), (2, None), (3, None), (4, None),
    (5, 500), (6, None), (7, None)
], ("id", "Col_present"))
df_new.withColumn('Column_required', when(df_new.Col_present > 1, df_new.Col_present).otherwise(last(df_new.Col_present, ignorenulls=True).over(Window.orderBy("id")))).show()
This will produce your desired output:
+---+-----------+---------------+
| id|Col_present|Column_required|
+---+-----------+---------------+
| 1| 40000| 40000|
| 2| null| 40000|
| 3| null| 40000|
| 4| null| 40000|
| 5| 500| 500|
| 6| null| 500|
| 7| null| 500|
+---+-----------+---------------+
But be aware that the window function requires a column to perform the sorting; that's why I used the id column in this example. If your dataframe does not contain a sortable column, you can create an id column yourself with monotonically_increasing_id().
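A sketch of that variant, assuming there is no sortable column (reusing df_new minus its id purely for illustration; coalesce here expresses "keep the value if present, otherwise take the last non-null"):
from pyspark.sql import Window
from pyspark.sql.functions import coalesce, col, last, monotonically_increasing_id

df_noid = df_new.drop("id")  # pretend there is no sortable column
w = Window.orderBy("row_id")
filled = (df_noid
          .withColumn("row_id", monotonically_increasing_id())
          .withColumn("Column_required",
                      coalesce(col("Col_present"),
                               last(col("Col_present"), ignorenulls=True).over(w)))
          .drop("row_id"))
filled.show()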

Add column sum as new column in PySpark dataframe

I'm using PySpark and I have a Spark dataframe with a bunch of numeric columns. I want to add a column that is the sum of all the other columns.
Suppose my dataframe had columns "a", "b", and "c". I know I can do this:
df.withColumn('total_col', df.a + df.b + df.c)
The problem is that I don't want to type out each column individually and add them, especially if I have a lot of columns. I want to be able to do this automatically or by specifying a list of column names that I want to add. Is there another way to do this?
This was not obvious. I see no row-based sum of columns defined in the Spark DataFrame API.
Version 2
This can be done in a fairly simple way:
newdf = df.withColumn('total', sum(df[col] for col in df.columns))
df.columns is supplied by pyspark as a list of strings giving all of the column names in the Spark Dataframe. For a different sum, you can supply any other list of column names instead.
I did not try this as my first solution because I wasn't certain how it would behave. But it works.
Version 1
This is overly complicated, but works as well.
You can do this:
use df.columns to get a list of the names of the columns
use that names list to make a list of the columns
pass that list to something that will invoke the column's overloaded add function in a fold-type functional manner
With Python's reduce, some knowledge of how operator overloading works, and the pyspark Column code, that becomes:
def column_add(a, b):
    return a.__add__(b)

newdf = df.withColumn('total_col',
                      reduce(column_add, (df[col] for col in df.columns)))
Note this is Python's reduce (from functools in Python 3), not a Spark RDD reduce, and the second argument to reduce needs the extra parentheses because it is a generator expression.
Tested, Works!
$ pyspark
>>> df = sc.parallelize([{'a': 1, 'b':2, 'c':3}, {'a':8, 'b':5, 'c':6}, {'a':3, 'b':1, 'c':0}]).toDF().cache()
>>> df
DataFrame[a: bigint, b: bigint, c: bigint]
>>> df.columns
['a', 'b', 'c']
>>> def column_add(a,b):
... return a.__add__(b)
...
>>> df.withColumn('total', reduce(column_add, ( df[col] for col in df.columns ) )).collect()
[Row(a=1, b=2, c=3, total=6), Row(a=8, b=5, c=6, total=19), Row(a=3, b=1, c=0, total=4)]
The most straightforward way of doing it is to use the expr function:
from pyspark.sql.functions import *
data = data.withColumn('total', expr("col1 + col2 + col3 + col4"))
The solution
newdf = df.withColumn('total', sum(df[col] for col in df.columns))
posted by @Paul works. Nevertheless I was getting the error, as many others have seen,
TypeError: 'Column' object is not callable
After some time I found the problem (at least in my case). The problem is that I previously imported some pyspark functions with the line
from pyspark.sql.functions import udf, col, count, sum, when, avg, mean, min
That line imported the PySpark sum function, while df.withColumn('total', sum(df[col] for col in df.columns)) is supposed to use the normal Python sum function.
You can delete the reference to the pyspark function with del sum.
Otherwise, in my case I changed the import to
import pyspark.sql.functions as F
and then referenced the functions as F.sum.
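Another way to be explicit about which sum you mean, regardless of what has been imported, is to go through the builtins module (a small sketch using a df with numeric columns as in the question):
import builtins

newdf = df.withColumn('total', builtins.sum(df[c] for c in df.columns))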
Summing multiple columns from a list into one column
PySpark's sum function doesn't support column addition (it is an aggregation over rows).
This can be achieved using the expr function.
from pyspark.sql.functions import expr
cols_list = ['a', 'b', 'c']
# Creating an addition expression using `join`
expression = '+'.join(cols_list)
df = df.withColumn('sum_cols', expr(expression))
This gives us the desired sum of columns.
My problem was similar to the above (a bit more complex), as I had to add consecutive column sums as new columns in a PySpark dataframe. This approach uses code from Paul's Version 1 above:
import pyspark
from pyspark.sql import SparkSession
import pandas as pd
spark = SparkSession.builder.appName('addColAsCumulativeSUM').getOrCreate()
df = spark.createDataFrame(
    data=[(1, 2, 3), (4, 5, 6), (3, 2, 1),
          (6, 1, -4), (0, 2, -2), (6, 4, 1),
          (4, 5, 2), (5, -3, -5), (6, 4, -1)],
    schema=['x1', 'x2', 'x3'])
df.show()
+---+---+---+
| x1| x2| x3|
+---+---+---+
| 1| 2| 3|
| 4| 5| 6|
| 3| 2| 1|
| 6| 1| -4|
| 0| 2| -2|
| 6| 4| 1|
| 4| 5| 2|
| 5| -3| -5|
| 6| 4| -1|
+---+---+---+
colnames=df.columns
Add new columns that are cumulative sums (consecutive):
for i in range(0, len(colnames)):
    colnameLst = colnames[0:i+1]
    colname = 'cm' + str(i+1)
    df = df.withColumn(colname, sum(df[col] for col in colnameLst))
df.show()
+---+---+---+---+---+---+
| x1| x2| x3|cm1|cm2|cm3|
+---+---+---+---+---+---+
| 1| 2| 3| 1| 3| 6|
| 4| 5| 6| 4| 9| 15|
| 3| 2| 1| 3| 5| 6|
| 6| 1| -4| 6| 7| 3|
| 0| 2| -2| 0| 2| 0|
| 6| 4| 1| 6| 10| 11|
| 4| 5| 2| 4| 9| 11|
| 5| -3| -5| 5| 2| -3|
| 6| 4| -1| 6| 10| 9|
+---+---+---+---+---+---+
'cumulative sum' columns added are as follows:
cm1 = x1
cm2 = x1 + x2
cm3 = x1 + x2 + x3
df = spark.createDataFrame([("linha1", "valor1", 2), ("linha2", "valor2", 5)], ("Columna1", "Columna2", "Columna3"))
df.show()
+--------+--------+--------+
|Columna1|Columna2|Columna3|
+--------+--------+--------+
| linha1| valor1| 2|
| linha2| valor2| 5|
+--------+--------+--------+
df = df.withColumn('DivisaoPorDois', df[2]/2)
df.show()
+--------+--------+--------+--------------+
|Columna1|Columna2|Columna3|DivisaoPorDois|
+--------+--------+--------+--------------+
| linha1| valor1| 2| 1.0|
| linha2| valor2| 5| 2.5|
+--------+--------+--------+--------------+
df = df.withColumn('Soma_Colunas', df[2]+df[3])
df.show()
+--------+--------+--------+--------------+------------+
|Columna1|Columna2|Columna3|DivisaoPorDois|Soma_Colunas|
+--------+--------+--------+--------------+------------+
| linha1| valor1| 2| 1.0| 3.0|
| linha2| valor2| 5| 2.5| 7.5|
+--------+--------+--------+--------------+------------+
A very simple approach would be to just use select instead of withColumn, as below:
df = df.select('*', (col("a") + col("b") + col("c")).alias("total"))
This should give you the required sum, with minor changes based on your requirements.
The following approach works for me:
Import the pyspark sql functions:
from pyspark.sql import functions as F
Use F.expr with an expression over the column names:
data_frame.withColumn('Total_Sum', F.expr('col_name1 + col_name2 + ... + col_nameN'))
