How can I show the highest and lowest numbers as two rows? - python
I want only the highest count and the lowest count as two rows, but I am getting the whole output. Should I use the dense_rank or rank window function? Here is what I have so far:
popular_eco_move = spark.sql("""
    select a.eco, b.eco_name, count(b.eco_name) as number_of_occurance
    from chess_game as a, chess_eco_codes as b
    where a.eco = b.eco
    group by a.eco, b.eco_name
    order by number_of_occurance desc
""")
popular_eco_move.show(10)
+---+--------------------+-------------------+
|eco| eco_name|number_of_occurance|
+---+--------------------+-------------------+
|C42| Petrov Defense| 64|
|E15| Queen's Indian| 56|
|C88| Ruy Lopez| 46|
|D37|Queen's Gambit De...| 44|
|B90| Sicilian, Najdorf| 38|
|C67| Ruy Lopez| 37|
|B12| Caro-Kann Defense| 37|
|C11| French| 35|
|C45| Scotch Game| 34|
|D27|Queen's Gambit Ac...| 32|
+---+--------------------+-------------------+
only showing top 10 rows
Result attributes: eco, eco_name, number_of_occurance
The final result should contain only two rows.
Hello, try a WITH clause to store the query and re-use it, like this:
with my_select as (
    select a.eco, b.eco_name, count(b.eco_name) as occurance
    from `game`.`chess_game` as a, `game`.`chess_eco_codes` as b
    where a.eco = b.eco
    group by a.eco, b.eco_name
)
select * from my_select
where occurance = (select max(occurance) from my_select)
   or occurance = (select min(occurance) from my_select)
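If you want to run the same CTE from PySpark, a minimal sketch (assuming the unqualified table names from the question rather than the `game` database used above; top_and_bottom is just an illustrative variable name):

top_and_bottom = spark.sql("""
    with my_select as (
        select a.eco, b.eco_name, count(b.eco_name) as occurance
        from chess_game as a, chess_eco_codes as b
        where a.eco = b.eco
        group by a.eco, b.eco_name
    )
    select * from my_select
    where occurance = (select max(occurance) from my_select)
       or occurance = (select min(occurance) from my_select)
""")
top_and_bottom.show()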
If you're using PySpark, it's worth learning how to write this in a Pythonic way with the DataFrame API instead of just SQL.
from pyspark.sql import functions as F
from pyspark.sql.window import Window as W
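# 'df' below is assumed to be the aggregated DataFrame from the question (popular_eco_move)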
(df
.withColumn('rank_asc', F.dense_rank().over(W.orderBy(F.asc('number_of_occurance'))))
.withColumn('rank_desc', F.dense_rank().over(W.orderBy(F.desc('number_of_occurance'))))
.where((F.col('rank_asc') == 1) | (F.col('rank_desc') == 1))
# .drop('rank_asc', 'rank_desc') # to drop these two temp columns
.show()
)
+---+--------------------+-------------------+--------+---------+
|eco| eco_name|number_of_occurance|rank_asc|rank_desc|
+---+--------------------+-------------------+--------+---------+
|C42| Petrov Defense| 64| 9| 1|
|D27|Queen's Gambit Ac...| 32| 1| 9|
+---+--------------------+-------------------+--------+---------+
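A non-window alternative, in the spirit of the SQL answer above, is to compute the minimum and maximum counts once and filter on them. A minimal sketch, assuming the same aggregated df as above:

from pyspark.sql import functions as F

# collect the smallest and largest counts in one pass
bounds = df.agg(
    F.min('number_of_occurance').alias('lo'),
    F.max('number_of_occurance').alias('hi'),
).first()

# keep only the rows that hit either bound
df.where(F.col('number_of_occurance').isin(bounds['lo'], bounds['hi'])).show()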
Related
Translating a SAS Ranking with Tie set to HIGH into PySpark
I'm trying to replicate the following SAS code in PySpark:

PROC RANK DATA = aud_baskets OUT = aud_baskets_ranks GROUPS=10 TIES=HIGH;
BY customer_id;
VAR expenditure;
RANKS basket_rank;
RUN;

The idea is to rank all expenditures within each customer_id block. The data looks like this:

+-----------+--------------+-----------+
|customer_id|transaction_id|expenditure|
+-----------+--------------+-----------+
|          A|             1|         34|
|          A|             2|         90|
|          B|             1|         89|
|          A|             3|          6|
|          B|             2|          8|
|          B|             3|          7|
|          C|             1|         96|
|          C|             2|          9|
+-----------+--------------+-----------+

In PySpark, I tried this:

spendWindow = Window.partitionBy('customer_id').orderBy(col('expenditure').asc())
aud_baskets = (aud_baskets_ranks.withColumn('basket_rank', ntile(10).over(spendWindow)))

The problem is that PySpark doesn't let the user change how ties are handled, the way SAS does (as far as I know). I need ties to be moved up to the next tier whenever one of those edge cases occurs, as opposed to dropping them to the rank below. Or is there a way to write this behaviour by hand?
Use dense_rank: it gives the same rank in case of ties and the next rank is not skipped. The ntile function, by contrast, splits the records of each partition into n parts (10 in your case).

from pyspark.sql.functions import col, dense_rank
from pyspark.sql.window import Window

spendWindow = Window.partitionBy('customer_id').orderBy(col('expenditure').asc())
aud_baskets = aud_baskets_ranks.withColumn('basket_rank', dense_rank().over(spendWindow))
Try the following code. It was generated by an automated tool called SPROCKET and should take care of ties.

from pyspark.sql.functions import asc, expr, rank
from pyspark.sql.window import Window

df = aud_baskets
for (colToRank, rankedName) in zip(['expenditure'], ['basket_rank']):
    wA = Window.orderBy(asc(colToRank))
    df_w_rank = df.withColumn('raw_rank', rank().over(wA))
    ties = df_w_rank.groupBy('raw_rank').count().filter("""count > 1""")
    df_w_rank = (df_w_rank.join(ties, ['raw_rank'], 'left')
                 .withColumn(rankedName, expr("""case when count is not null
                                                 then (raw_rank + count - 1)
                                                 else raw_rank end""")))
    rankedNameGroup = rankedName
    n = df_w_rank.count()
    df_with_rank_groups = df_w_rank.withColumn(
        rankedNameGroup,
        expr("""FLOOR({rankedName}*{k}/({n}+1))""".format(k=10, n=n, rankedName=rankedName)))
    df = df_with_rank_groups
aud_baskets_ranks = df_with_rank_groups.drop('raw_rank', 'count')
Pyspark calculate a field on a grouped table
I've got a data frame that looks like this:

+-------+-----+-------------+------------+
|startID|endID|trip_distance|total_amount|
+-------+-----+-------------+------------+
|      1|    3|            5|          12|
|      1|    3|            0|           4|
+-------+-----+-------------+------------+

I need to create a new table that groups the trips by the start and end IDs and then works out the average trip rate. The trip rate is computed from all the trips that share the same start and end IDs: in my case, startID 1 and endID 3 had a total of 2 trips, and for those 2 trips the average trip_distance was 2.5 and the average total_amount was 8, so the trip_rate should be 8/2.5 = 3.2. The end result should look like this:

+-------+-----+-----+---------+
|startID|endID|count|trip_rate|
+-------+-----+-----+---------+
|      1|    3|    2|      3.2|
+-------+-----+-----+---------+

Here is what I'm trying to do:

from pyspark.shell import spark
from pyspark.sql.functions import avg

df = spark.createDataFrame(
    [
        (1, 3, 5, 12),
        (1, 3, 0, 4)
    ],
    ['startID', 'endID', 'trip_distance', 'total_amount']  # add your column labels here
)
df.show()

grouped_table = df.groupBy('startID', 'endID').count().alias('count')
grouped_table.show()

grouped_table = df.withColumn('trip_rate', (avg('total_amount') / avg('trip_distance')))
grouped_table.show()

But I'm getting the following error:

pyspark.sql.utils.AnalysisException: "grouping expressions sequence is empty, and '`startID`' is not an aggregate function. Wrap '((avg(`total_amount`) / avg(`trip_distance`)) AS `trip_rate`)' in windowing function(s) or wrap '`startID`' in first() (or first_value) if you don't care which value you get.;;
Aggregate [startID#0L, endID#1L, trip_distance#2L, total_amount#3L, (avg(total_amount#3L) / avg(trip_distance#2L)) AS trip_rate#44]
+- LogicalRDD [startID#0L, endID#1L, trip_distance#2L, total_amount#3L], false"

I tried wrapping the calculation in an AS function, but I kept getting syntax errors.
Group by, sum, and divide. count and sum can be used inside agg():

from pyspark.sql import functions as F

df.groupBy('startID', 'endID') \
  .agg(F.count(F.lit(1)).alias("count"),
       (F.sum("total_amount") / F.sum("trip_distance")).alias('trip_rate')) \
  .show()
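On the two-row example frame from the question, this should reproduce the expected result (sum(total_amount) = 16 over sum(trip_distance) = 5 gives 3.2):

+-------+-----+-----+---------+
|startID|endID|count|trip_rate|
+-------+-----+-----+---------+
|      1|    3|    2|      3.2|
+-------+-----+-----+---------+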
pyspark - how to select exact number of records per strata using (df.sampleByKey()) in stratified random sampling
I have a Spark dataframe 'orders' (I am using PySpark). It has the following columns: ['id', 'orderdate', 'customerid', 'status']. I am trying to do stratified random sampling using 'status' as the key column. My objective is as follows:

>> create a new dataframe with exactly 5 random records per status

The method I have chosen is .sampleBy('strata_key', {fraction_dict}). The challenge is choosing the exact fraction value for each status so that I get exactly 5 random records per status every time. I have followed the method below.

1. Created a dictionary of total counts per status:

#Total count of records for each order 'status' in the 'orders' dataframe
d = dict([(x['status'], x['count']) for x in orders.groupBy("status").count().collect()])
print(d)

OUTPUT:
{'PENDING_PAYMENT': 15030, 'COMPLETE': 22899, 'ON_HOLD': 3798, 'PAYMENT_REVIEW': 729, 'PROCESSING': 8275, 'CLOSED': 7556, 'SUSPECTED_FRAUD': 1558, 'PENDING': 7610, 'CANCELED': 1428}

2. Created a function which generates the fraction values needed to fetch exactly N records:

#Exact number of records needed per status
N = 5

#function calculates fractions
def fraction_calc(count_dict, N):
    d_mod = {}
    for i in count_dict:
        d_mod[i] = N / count_dict[i]
    return d_mod

#creating dictionary of fractions using above function
fraction = fraction_calc(d, 5)
print(fraction)

OUTPUT:
{'PENDING_PAYMENT': 0.00033266799733865603, 'COMPLETE': 0.000218350146294598, 'ON_HOLD': 0.0013164823591363876, 'PAYMENT_REVIEW': 0.006858710562414266, 'PROCESSING': 0.0006042296072507553, 'CLOSED': 0.0006617257808364214, 'SUSPECTED_FRAUD': 0.003209242618741977, 'PENDING': 0.000657030223390276, 'CANCELED': 0.0035014005602240898}

3. Created the final sampled dataframe using the stratified sampling API .sampleBy():

#creating final sampled dataframe
df_sample = orders.sampleBy("status", fraction)

But I am still not getting exactly 5 records per status. A sample output is below:

#Checking count per status of resultant sample dataframe
df_sample.groupBy("status").count().show()

+---------------+-----+
|         status|count|
+---------------+-----+
|PENDING_PAYMENT|    3|
|       COMPLETE|    6|
|        ON_HOLD|    7|
| PAYMENT_REVIEW|    4|
|     PROCESSING|    6|
|         CLOSED|    6|
|SUSPECTED_FRAUD|    7|
|        PENDING|    9|
|       CANCELED|    5|
+---------------+-----+

What should I do here to achieve my objective?
Found a workaround.

from pyspark.sql.window import Window
from pyspark.sql.functions import rand, row_number

1. Use the rand() builtin function to generate a column 'key' of random numbers, then assign a row number to each element of the partition window created over the "status" column, ordered by 'key':

df_sample = df.withColumn("key", rand()) \
              .withColumn("rnk", row_number()
                          .over(Window.partitionBy("status")
                                .orderBy("key"))) \
              .where("rnk <= 5").drop("key", "rnk")

2. Now I am getting exactly 5 random records per status. A sample output is below; the selected rows will change for every Spark session.

#Checking count per status of resultant sample dataframe
df_sample.groupBy("status").count().show()

+---------------+-----+
|         status|count|
+---------------+-----+
|PENDING_PAYMENT|    5|
|       COMPLETE|    5|
|        ON_HOLD|    5|
| PAYMENT_REVIEW|    5|
|     PROCESSING|    5|
|         CLOSED|    5|
|SUSPECTED_FRAUD|    5|
|        PENDING|    5|
|       CANCELED|    5|
+---------------+-----+
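If the sample needs to be reproducible across runs, rand() also accepts a seed. A minimal sketch of the same idea with a fixed (arbitrary) seed, which stays repeatable as long as the input data and its partitioning do not change:

from pyspark.sql.window import Window
from pyspark.sql.functions import rand, row_number

w = Window.partitionBy("status").orderBy("key")
df_sample = (df.withColumn("key", rand(seed=42))   # seeded random column
               .withColumn("rnk", row_number().over(w))
               .where("rnk <= 5")
               .drop("key", "rnk"))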
How to identify people's relationships based on name and address, and then assign the same ID, through a Linux command or PySpark
I have one CSV file:

D,FNAME,MNAME,LNAME,GENDER,DOB,snapshot,Address
2,66M,J,Rock,F,1995,201211.0,J
3,David,HM,Lee,M,1991,201211.0,J
6,66M,,Rock,F,1990,201211.0,J
0,David,H M,Lee,M,1990,201211.0,B
3,Marc,H,Robert,M,2000,201211.0,C
6,Marc,M,Robert,M,1988,201211.0,C
6,Marc,MS,Robert,M,2000,201211.0,D

I want to assign persons with the same last name living at the same address the same ID or index. It would be better if the ID were made up of only numbers. If persons have a different last name at the same place, their IDs should be different. Such an ID must be unique: people who differ in either address or last name must get different IDs. My expected output is:

D,FNAME,MNAME,LNAME,GENDER,DOB,snapshot,Address,ID
2,66M,J,Rock,F,1995,201211.0,J,11
3,David,HM,Lee,M,1991,201211.0,J,12
6,66M,,Rock,F,1990,201211.0,J,11
0,David,H M,Lee,M,1990,201211.0,B,13
3,Marc,H,Robert,M,2000,201211.0,C,14
6,Marc,M,Robert,M,1988,201211.0,C,14
6,Marc,MS,Robert,M,2000,201211.0,D,15

My data file is around 30 GB. I am thinking of using the groupBy function in Spark, with a key consisting of LNAME and Address, to group those observations together and then assign an ID per key, but I don't know how to do this. After that, maybe I can use flatMap to split the line and return those observations with an ID, but I am not sure about it. In addition, can I also do this in a Linux environment? Thank you.
As discussed in the comments, the basic idea is to partition the data properly so that records with the same LNAME+Address stay in the same partition, run Python code to generate a separate idx on each partition, and then merge them into the final id.

Note: I added some new rows to your sample records; see the result of df_new.show() below.

from pyspark.sql import Window, Row
from pyspark.sql.functions import coalesce, sum as fsum, col, max as fmax, lit, broadcast

# ...skip code to initialize the dataframe

# tweak the number of repartitions N based on the actual data size
N = 5

# Python function to iterate through the sorted list of elements in the same
# partition and assign an in-partition idx based on Address and LNAME.
def func(partition_id, it):
    idx, lname, address = (1, None, None)
    for row in sorted(it, key=lambda x: (x.LNAME, x.Address)):
        if lname and (row.LNAME != lname or row.Address != address):
            idx += 1
        yield Row(partition_id=partition_id, idx=idx, **row.asDict())
        lname = row.LNAME
        address = row.Address

# Repartition based on 'LNAME' and 'Address' and then run mapPartitionsWithIndex()
# to create the in-partition idx. Adjust N so that the records in each partition
# are small enough to be loaded into the executor memory:
df1 = df.repartition(N, 'LNAME', 'Address') \
        .rdd.mapPartitionsWithIndex(func) \
        .toDF()

Get the number of unique rows cnt per partition (based on Address+LNAME), which is max('idx'), and then take the running SUM of this as rcnt:

# idx:  calculated in-partition id
# cnt:  number of unique ids in the same partition: fmax('idx')
# rcnt: starting id for a partition (something like a running count):
#       coalesce(fsum('cnt').over(w1), lit(0))
# w1:   WindowSpec to calculate the above rcnt
w1 = Window.partitionBy().orderBy('partition_id').rowsBetween(Window.unboundedPreceding, -1)

df2 = df1.groupby('partition_id') \
         .agg(fmax('idx').alias('cnt')) \
         .withColumn('rcnt', coalesce(fsum('cnt').over(w1), lit(0)))

df2.show()
+------------+---+----+
|partition_id|cnt|rcnt|
+------------+---+----+
|           0|  3|   0|
|           1|  1|   3|
|           2|  1|   4|
|           4|  1|   5|
+------------+---+----+

Join df1 with df2 and create the final id, which is idx + rcnt:

df_new = df1.join(broadcast(df2), on=['partition_id']).withColumn('id', col('idx') + col('rcnt'))

df_new.show()
#+------------+-------+---+----+-----+------+------+-----+---+--------+---+----+---+
#|partition_id|Address|  D| DOB|FNAME|GENDER| LNAME|MNAME|idx|snapshot|cnt|rcnt| id|
#+------------+-------+---+----+-----+------+------+-----+---+--------+---+----+---+
#|           0|      B|  0|1990|David|     M|   Lee|  H M|  1|201211.0|  3|   0|  1|
#|           0|      J|  3|1991|David|     M|   Lee|   HM|  2|201211.0|  3|   0|  2|
#|           0|      D|  6|2000| Marc|     M|Robert|   MS|  3|201211.0|  3|   0|  3|
#|           1|      C|  3|2000| Marc|     M|Robert|    H|  1|201211.0|  1|   3|  4|
#|           1|      C|  6|1988| Marc|     M|Robert|    M|  1|201211.0|  1|   3|  4|
#|           2|      J|  6|1991|  66M|     F|   Rek| null|  1|201211.0|  1|   4|  5|
#|           2|      J|  6|1992|  66M|     F|   Rek| null|  1|201211.0|  1|   4|  5|
#|           4|      J|  2|1995|  66M|     F|  Rock|    J|  1|201211.0|  1|   5|  6|
#|           4|      J|  6|1990|  66M|     F|  Rock| null|  1|201211.0|  1|   5|  6|
#|           4|      J|  6|1990|  66M|     F|  Rock| null|  1|201211.0|  1|   5|  6|
#+------------+-------+---+----+-----+------+------+-----+---+--------+---+----+---+

df_new = df_new.drop('partition_id', 'idx', 'rcnt', 'cnt')

Some notes: practically, you will need to clean out/normalize the columns LNAME and Address before using them as a uniqueness check. For example, use a separate column uniq_key which combines LNAME and Address as the unique key of the dataframe.
See below for an example with some basic data-cleansing procedures:

from pyspark.sql.functions import coalesce, lit, concat_ws, upper, regexp_replace, trim

#(1) convert NULL to '': coalesce(col, '')
#(2) concatenate LNAME and Address using the NULL char '\x00' or '\0'
#(3) convert to uppercase: upper(text)
#(4) remove all non-[word/whitespace/NULL_char]: regexp_replace(text, r'[^\x00\w\s]', '')
#(5) convert consecutive whitespaces to a single SPACE: regexp_replace(text, r'\s+', ' ')
#(6) trim leading/trailing spaces: trim(text)

df = df.withColumn('uniq_key',
    trim(
      regexp_replace(
        regexp_replace(
          upper(
            concat_ws('\0', coalesce('LNAME', lit('')), coalesce('Address', lit('')))
          ),
          r'[^\x00\s\w]+',
          ''
        ),
        r'\s+',
        ' '
      )
    )
)

Then in the code, replace 'LNAME' and 'Address' with uniq_key to find the idx.

As mentioned by cronoik in the comments, you can also try one of the Window rank functions to calculate the in-partition idx, for example:

from pyspark.sql.functions import spark_partition_id, dense_rank

# use dense_rank to calculate the in-partition idx
w2 = Window.partitionBy('partition_id').orderBy('LNAME', 'Address')

df1 = df.repartition(N, 'LNAME', 'Address') \
        .withColumn('partition_id', spark_partition_id()) \
        .withColumn('idx', dense_rank().over(w2))

After you have df1, use the same method as above to calculate df2 and df_new. This should be faster than using mapPartitionsWithIndex(), which is basically an RDD-based method.

For your real data, adjust N to fit your actual data size. This N only influences the initial partitions; after the dataframe join, the number of partitions is reset to the default (200). You can adjust this with spark.sql.shuffle.partitions, for example when you initialize the Spark session:

spark = SparkSession.builder \
    .... \
    .config("spark.sql.shuffle.partitions", 500) \
    .getOrCreate()
Since you have 30 GB of input data, you probably don't want something that attempts to hold it all in in-memory data structures. Let's use disk space instead.

Here's one approach that loads all your data into a sqlite database, generates an id for each unique last name and address pair, and then joins everything back together:

#!/bin/sh
csv="$1"

# Use an on-disk database instead of in-memory because the source data is 30 GB.
# This will take a while to run.
db=$(mktemp -p .)

sqlite3 -batch -csv -header "${db}" <<EOF
.import "${csv}" people
CREATE TABLE ids(id INTEGER PRIMARY KEY, lname, address, UNIQUE(lname, address));
INSERT OR IGNORE INTO ids(lname, address) SELECT lname, address FROM people;
SELECT p.*, i.id AS ID
  FROM people AS p
  JOIN ids AS i ON (p.lname, p.address) = (i.lname, i.address)
  ORDER BY p.rowid;
EOF

rm -f "${db}"

Example:

$ ./makeids.sh data.csv
D,FNAME,MNAME,LNAME,GENDER,DOB,snapshot,Address,ID
2,66M,J,Rock,F,1995,201211.0,J,1
3,David,HM,Lee,M,1991,201211.0,J,2
6,66M,"",Rock,F,1990,201211.0,J,1
0,David,"H M",Lee,M,1990,201211.0,B,3
3,Marc,H,Robert,M,2000,201211.0,C,4
6,Marc,M,Robert,M,1988,201211.0,C,4
6,Marc,MS,Robert,M,2000,201211.0,D,5

"It's better that ID is made up of only numbers."

If that restriction can be relaxed, you can do it in a single pass by using a cryptographic hash of the last name and address as the ID:

$ perl -MDigest::SHA=sha1_hex -F, -lane '
    BEGIN { $" = $, = "," }
    if ($. == 1) { print @F, "ID" }
    else { print @F, sha1_hex("@F[3,7]") }' data.csv
D,FNAME,MNAME,LNAME,GENDER,DOB,snapshot,Address,ID
2,66M,J,Rock,F,1995,201211.0,J,5c99211a841bd2b4c9cdcf72d7e95e46b2ae08b5
3,David,HM,Lee,M,1991,201211.0,J,c263f9d1feb4dc789de17a8aab8f2808aea2876a
6,66M,,Rock,F,1990,201211.0,J,5c99211a841bd2b4c9cdcf72d7e95e46b2ae08b5
0,David,H M,Lee,M,1990,201211.0,B,e86e81ab2715a8202e41b92ad979ca3a67743421
3,Marc,H,Robert,M,2000,201211.0,C,363ed8175fdf441ed59ac19cea3c37b6ce9df152
6,Marc,M,Robert,M,1988,201211.0,C,363ed8175fdf441ed59ac19cea3c37b6ce9df152
6,Marc,MS,Robert,M,2000,201211.0,D,cf5135dc402efe16cd170191b03b690d58ea5189

Or, if the number of unique lname, address pairs is small enough that they can reasonably be stored in a hash table on your system:

#!/usr/bin/gawk -f
BEGIN { FS = OFS = "," }
NR == 1 {
    print $0, "ID"
    next
}
! ($4, $8) in ids { ids[$4, $8] = ++counter }
{ print $0, ids[$4, $8] }
$ sort -t, -k8,8 -k4,4 <<EOD | awk -F, '$8","$4 != last { ++id; last = $8","$4 } { NR!=1 && $9=id; print }' id=9 OFS=,
D,FNAME,MNAME,LNAME,GENDER,DOB,snapshot,Address
2,66M,J,Rock,F,1995,201211.0,J
3,David,HM,Lee,M,1991,201211.0,J
6,66M,,Rock,F,1990,201211.0,J
0,David,H M,Lee,M,1990,201211.0,B
3,Marc,H,Robert,M,2000,201211.0,C
6,Marc,M,Robert,M,1988,201211.0,C
6,Marc,MS,Robert,M,2000,201211.0,D
EOD
D,FNAME,MNAME,LNAME,GENDER,DOB,snapshot,Address
0,David,H M,Lee,M,1990,201211.0,B,11
3,Marc,H,Robert,M,2000,201211.0,C,12
6,Marc,M,Robert,M,1988,201211.0,C,12
6,Marc,MS,Robert,M,2000,201211.0,D,13
3,David,HM,Lee,M,1991,201211.0,J,14
2,66M,J,Rock,F,1995,201211.0,J,15
6,66M,,Rock,F,1990,201211.0,J,15
$
sampling with weight using pyspark
I have an unbalanced dataframe in Spark using PySpark. I want to resample it to make it balanced. I only found the sample function in PySpark:

sample(withReplacement, fraction, seed=None)

but I want to sample the dataframe with a weight of unitvolume. In Python I can do it like:

df.sample(n, False, weights=log(unitvolume))

Is there any way to do the same using PySpark?
Spark provides tools for stratified sampling, but they work only on categorical data. You could try to bucketize it:

from pyspark.ml.feature import Bucketizer
from pyspark.sql.functions import col, log

df_log = df.withColumn("log_unitvolume", log(col("unitvolume")))
splits = ... # A list of splits

bucketizer = Bucketizer(splits=splits, inputCol="log_unitvolume", outputCol="bucketed_log_unitvolume")

df_log_bucketed = bucketizer.transform(df_log)

Compute statistics:

counts = df_log_bucketed.groupBy("bucketed_log_unitvolume").count()
fractions = ... # Define fractions for each bucket

and use these for sampling:

df_log_bucketed.sampleBy("bucketed_log_unitvolume", fractions)

You can also try to rescale log_unitvolume to the [0, 1] range and then:

from pyspark.sql.functions import rand

df_log_rescaled.where(col("log_unitvolume_rescaled") < rand())
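To give a rough idea of what the elided splits could look like, here is a purely hypothetical set of boundaries; in practice you would derive them from your own distribution of log_unitvolume (for example via approxQuantile):

from pyspark.ml.feature import Bucketizer

# Hypothetical boundaries -- choose these from your own data.
splits = [-float("inf"), 0.0, 2.0, 4.0, 6.0, float("inf")]

bucketizer = Bucketizer(splits=splits,
                        inputCol="log_unitvolume",
                        outputCol="bucketed_log_unitvolume")
df_log_bucketed = bucketizer.transform(df_log)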
I think it is better to simply ignore the .sample() function altogether. Sampling without replacement can be implemented with a uniform random number generator:

import pyspark.sql.functions as F

n_samples_appx = 100
total_weight = df.agg(F.sum('weight')).collect()[0][0]
df.filter(F.rand(seed=843) < F.col('weight') / total_weight * n_samples_appx)

This will randomly include/exclude rows from your dataset, which is typically comparable to sampling with replacement. You should be careful about interpretation if the right-hand side exceeds 1: weighted sampling is a nuanced process that, rigorously speaking, should only be performed with replacement. So if you want to sample with replacement instead, you can use F.rand() to get samples of the Poisson distribution which will tell you how many copies of the row to include, and you can either treat that value as a weight or do some annoying joins & unions to duplicate your rows. But I find that this is typically not required.

You can also do this in a portable, repeatable way with a hash:

import pyspark.sql.functions as F

n_samples_appx = 100
total_weight = df.agg(F.sum('weight')).collect()[0][0]
df.filter(F.hash(F.col('id')) % (total_weight / n_samples_appx * F.col('weight')).astype('int') == 0)

This will sample at a rate of 1-in-modulo, which incorporates your weight. hash() is a consistent and deterministic function, but the sampling will behave like random sampling.
One way to do it is to use a udf to make a sampling column. This column will have a random number multiplied by your desired weight. Then we sort by the sampling column and take the top N. Consider the following illustrative example:

Create Dummy Data

import numpy as np
import string
import pyspark.sql.functions as f

index = range(100)
weights = [i % 26 for i in index]
labels = [string.ascii_uppercase[w] for w in weights]

df = sqlCtx.createDataFrame(
    zip(index, labels, weights),
    ('index', 'label', 'weight')
)

df.show(n=5)
#+-----+-----+------+
#|index|label|weight|
#+-----+-----+------+
#|    0|    A|     0|
#|    1|    B|     1|
#|    2|    C|     2|
#|    3|    D|     3|
#|    4|    E|     4|
#+-----+-----+------+
#only showing top 5 rows

Add Sampling Column

In this example, we want to sample the DataFrame using the column weight as the weight. We define a udf using numpy.random.random() to generate uniform random numbers and multiply by the weight. Then we sort by this column and use limit() to get the desired number of samples.

from pyspark.sql.types import FloatType

N = 10  # the number of samples

def get_sample_value(x):
    return np.random.random() * x

get_sample_value_udf = f.udf(get_sample_value, FloatType())

df_sample = df.withColumn('sampleVal', get_sample_value_udf(f.col('weight')))\
    .sort('sampleVal', ascending=False)\
    .select('index', 'label', 'weight')\
    .limit(N)

Result

As expected, the DataFrame df_sample has 10 rows, and its contents tend to have letters near the end of the alphabet (higher weights).

df_sample.count()
#10

df_sample.show()
#+-----+-----+------+
#|index|label|weight|
#+-----+-----+------+
#|   23|    X|    23|
#|   73|    V|    21|
#|   46|    U|    20|
#|   25|    Z|    25|
#|   19|    T|    19|
#|   96|    S|    18|
#|   75|    X|    23|
#|   48|    W|    22|
#|   51|    Z|    25|
#|   69|    R|    17|
#+-----+-----+------+
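A lighter-weight variant of the same idea, avoiding the Python udf by using the built-in rand() column function (a sketch, assuming the same df, f alias, and N as above):

df_sample = (df
    .withColumn('sampleVal', f.rand() * f.col('weight'))  # uniform random, scaled by weight
    .sort('sampleVal', ascending=False)
    .select('index', 'label', 'weight')
    .limit(N))

Built-in column expressions keep the work inside the JVM and avoid the Python serialization overhead of a udf.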