How do I pass multiple column names dynamically in PySpark?

I am writing a Python function that will do a left anti join on two DataFrames, where the join condition may vary: sometimes the two DataFrames have just one column as the unique key for joining, and sometimes they have more than one column to join on.
So I have written the code below. Please suggest what changes I should make:
def integraty_check(testdata, refdata, cond = []):
    df = func.join_dataframe(testdata, refdata, cond, "leftanti", logger)
    df = df.select(cond)
    func.write_df_as_parquet_file(df, curate_path, logger)
    return df
Here the parameter cond may hold one or more column names.
So, how do I pass a dynamic list of column names when calling the function?
Please suggest what would be the best way to achieve this.

You can use Python's unpacking operator (PEP 448):
df = df.select(*cond)
You can find more examples of how to use the asterisk operator here:
Packing and Unpacking Arguments in Python
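For illustration, here is a minimal sketch of the call site in plain PySpark (the sample DataFrames, column names, and values are made up; the func.* helpers from the question are project-specific, so they are not used here):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
test_df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "region"])
ref_df = spark.createDataFrame([(1, "a")], ["id", "region"])

# cond can hold one column name or several; the same code handles both
cond = ["id", "region"]

# join accepts a list of column names directly...
df = test_df.join(ref_df, cond, "leftanti")
# ...and select takes the unpacked names via the * operator
df = df.select(*cond)
df.show()  # only the rows of test_df with no match in ref_df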

Related

Dynamic column assignment in Python Pandas

I have a pandas dataframe from which I'd like to create some text-related feature columns. I also have a class that calculates those features. Here's my code:
r = ReadabilityMetrics()
text_features = [['sentence_count', r.sentence_count], ['word_count', r.word_count],
                 ['syllable_count', r.syllable_count], ['unique_words', r.unique_words],
                 ['reading_time', r.reading_time], ['speaking_time', r.speaking_time],
                 ['flesch_reading_ease', r.flesch_reading_ease],
                 ['flesch_kincaid_grade', r.flesch_kincaid_grade],
                 ['char_count', r.char_count]]
(df
.assign(**{t:df['description'].apply(f) for t, f in text_features})
)
I iterate over text_features to dynamically create the columns.
My question: how can I remove reference to the methods and make text_features more concise?
For example, I'd like to have text_features = ['sentence_count', 'word_count', 'syllable_count', ...], and since the column names are the same as the function names, dynamically reference the functions. Having a nested list doesn't seem DRY, so I'm looking for a more efficient implementation.
I think you're looking for this:
text_features = ['sentence_count', 'word_count', 'syllable_count', 'unique_words', 'reading_time', 'speaking_time', 'flesch_reading_ease', 'flesch_kincaid_grade', 'char_count']
df.assign(**{func_name: df['description'].apply(getattr(r, func_name)) for func_name in text_features})
for column_name, function in text_features:
    df[column_name] = df['description'].apply(function)
I think this is fine. I would probably define text_features as a list of tuples rather than a list of lists.
If you're sure that it has to be more concise, define text_features as a list of strings.
for column_name in text_features:
    df[column_name] = df['description'].apply(getattr(r, column_name))
I would not try to make it any more concise than this (such as using ** with a dict), so as to keep the solution less esoteric, but this is just a matter of opinion.
In your case, try getattr (note that the method itself is passed to apply, without calling it):
(df
 .assign(**{t: df['description'].apply(getattr(r, t)) for t in text_features})
)
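A minimal, self-contained sketch of the getattr approach, using a hypothetical stand-in class since the ReadabilityMetrics implementation isn't shown in the question:
import pandas as pd

class Metrics:  # hypothetical stand-in for ReadabilityMetrics
    def word_count(self, text):
        return len(text.split())
    def char_count(self, text):
        return len(text)

r = Metrics()
text_features = ['word_count', 'char_count']
df = pd.DataFrame({'description': ['a short text', 'another example here']})

# getattr(r, name) looks up the bound method by its string name, so the
# column name and the method that computes it stay in sync automatically
df = df.assign(**{name: df['description'].apply(getattr(r, name))
                  for name in text_features})
print(df)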

Select all rows in Python pandas

I have a function that prints the sum along a column of a pandas DataFrame, after filtering on some rows to be defined, as well as the percentage this quantity makes up of the same sum without any filter:
def my_function(df, filter_to_apply, col):
    my_sum = np.sum(df[filter_to_apply][col])
    print(my_sum)
    print(my_sum / np.sum(df[col]))
Now I am wondering if there is any way to have a filter_to_apply that actually doesn't do any filter (i.e. keeps all rows), to keep using my function (that is actually a bit more complex and convenient) even when I don't want any filter.
So, some filter_f1 that would do: df[filter_f1] = df and could be used with other filters: filter_f1 & filter_f2.
One possible answer is: df.index.isin(df.index) but I am wondering if there is anything easier to understand (e.g. I tried to use just True but it didn't work).
A Python slice object with no bounds, i.e. slice(None), acts as an object that selects all indexes in an indexable object; it is what a bare : turns into. So df[slice(None)] would select all rows in the DataFrame. You can store that in a variable as an initial value which you can further refine in your logic:
filter_to_apply = slice(None)  # initialize to select all rows
... # logic that may set `filter_to_apply` to something more restrictive
my_function(df, filter_to_apply, col)
This is a way to select all rows:
df.iloc[range(len(df))]
So is this:
df[:]
But I haven't figured out a way to pass : as an argument (a bare : is equivalent to slice(None), which can be passed).
There's an indexer called loc on pandas DataFrames that filters rows. You could do something like this:
df2 = df.loc[<Filter here>]
# the filter can be something like df['price'] > 500 or df['name'] == 'Brian',
# basically something that returns a boolean for each row
total = df2['ColumnToSum'].sum()
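Building on the answers above, here is a sketch of a pass-through filter that keeps every row and still composes with other boolean filters via &, assuming your filters are boolean Series (the price column and threshold are made up for the example):
import pandas as pd

df = pd.DataFrame({'price': [100, 600, 800], 'name': ['Ann', 'Bob', 'Cy']})

# an all-True boolean Series aligned on df's index keeps every row...
keep_all = pd.Series(True, index=df.index)

# ...and, unlike a slice object, it can be combined with other boolean filters
expensive = df['price'] > 500
print(df[keep_all])              # all rows
print(df[keep_all & expensive])  # only rows with price > 500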

In Python, rename variables using a parameter of a function

I am creating a function. One input of this function will be a pandas DataFrame, and one of its tasks is to do some operation with two variables of this DataFrame. These two variables are not fixed, and I want the freedom to determine them using parameters passed as inputs to the function fun.
For example, suppose that at some moment the variables I want to use are 'var1' and 'var2' (but at another time, I may want to use two other variables). Suppose that these variables take values 1, 2, 3, 4 and I want to reduce df by requiring var1 == 1 and var2 == 1. My function is like this:
def fun(df, var = ['input_var1', 'input_var2'], val):
    df = df.rename(columns={var[1]: 'aux_var1', var[2]: 'aux_var2'})
    # Other operations
    df = df.loc[(df.aux_var1 == val) & (df.aux_var2 == val)]
    # end of operations
    # recover the original names
    df = df.rename(columns={'aux_var1': var[1], 'aux_var2': var[2]})
    return df
When I use the function fun, I get this error:
fun(df, var = ['var1','var2'], val = 1)
IndexError: list index out of range
Actually, I want to do other more complex operations and I didn't describe these operations so as not to extend the question. Perhaps the simple example above has a solution that does not need to rename the variables. But maybe this solution doesn't work with the operations I really want to do. So first, I would necessarily like to correct the error when renaming the variables. If you want to give another more elegant solution that doesn't need renaming, I appreciate that too, but I will be very grateful if besides the elegant solution, you offer me the solution about renaming.
Python lists are zero-indexed, i.e. the first element has index 0.
Just change the lines:
df = df.rename(columns={var[1]: 'aux_var1', var[2]: 'aux_var2'})
df = df.rename(columns={'aux_var1': var[1], 'aux_var2': var[2]})
to
df = df.rename(columns={var[0]: 'aux_var1', var[1]: 'aux_var2'})
df = df.rename(columns={'aux_var1': var[0], 'aux_var2': var[1]})
respectively.
In this case you are accessing var[2], but a 2-element list in Python has indices 0 and 1. Index 2 does not exist, and therefore accessing it is out of range.
As has been mentioned in other answers, the error you are receiving is due to the 0-indexing of Python lists, i.e. if you wish to access the first element of the list var, you do that by taking index 0 instead of index 1: var[0].
However, on the topic of renaming, you are able to filter a pandas DataFrame without any column renaming. I can see that you are accessing the column as an attribute of the DataFrame; you can achieve the same via the __getitem__ method, which is more commonly used with square brackets, e.g. df[var[0]].
If you wish to have more generality over your function without any renaming happening, I can suggest this:
from functools import reduce
import pandas as pd

def fun(df, var, val):
    # start from an all-True mask aligned on df's index, then AND in
    # one equality condition per requested column
    _sub = reduce(
        lambda x, y: x & (df[y] == val),
        var,
        pd.Series(True, index=df.index)
    )
    return df[_sub]
This will work with any number of input column variables. Hope this serves as inspiration for the more complicated operations you intend to do.
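For instance, a quick usage check of the sketch above, on made-up data:
df = pd.DataFrame({'var1': [1, 1, 2], 'var2': [1, 2, 1], 'other': [10, 20, 30]})
print(fun(df, ['var1', 'var2'], 1))  # keeps only the first row, where both columns equal 1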

Count the number of digits in a dataframe column

Question
I have an email_alias column and I'd like to find the number of digits in that column (per row), putting the result in another column, using Python. So far I can only count the total number of digits in the entire column.
Attempt
I tried: df['count_numbers'] = sum(c.isdigit() for c in df['email_alias'])
Example:
email_alias      count_numbers
thisisatest111   3
testnumber2      1
I believe this might be the simplest solution.
df['count_numbers'] = df['email_alias'].str.count(r'\d')
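For example, on the sample data from the question this gives:
import pandas as pd

df = pd.DataFrame({'email_alias': ['thisisatest111', 'testnumber2']})
df['count_numbers'] = df['email_alias'].str.count(r'\d')
print(df)
#       email_alias  count_numbers
# 0  thisisatest111              3
# 1     testnumber2              1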
You can apply a custom Python function to the column. I don't think there's a truly vectorized way. sum() here takes advantage of the fact that bools are a subclass of ints, so all True values are equal to 1.
import pandas as pd

def count_digits(string):
    return sum(item.isdigit() for item in string)

df = pd.DataFrame({'a': ['thisisatest111', 'testnumber2']})
df['counts'] = df['a'].apply(count_digits)
Your approach of:
df['count_numbers'] = sum(c.isdigit() for c in df['email_alias'])
could not work, because df['count_numbers'] = ... assigns a single value to every row of that column. Here, apply implicitly iterates over the rows (but at Python speed, so it's not vectorized). Then again, most of the .str accessor methods in pandas do, too, despite the syntax suggesting they will go faster than a for loop.

How to self-reference column in pandas Data Frame?

In Python's pandas, I am using the DataFrame as such:
drinks = pandas.read_csv(data_url)
where data_url is a string URL to a CSV file.
When indexing the frame for all "light drinkers", where a light drinker is defined by having 1 drink, the following is written:
drinks.light_drinker[drinks.light_drinker == 1]
Is there a more DRY-like way to self-reference the "parent"? I.e. something like:
drinks.light_drinker[self == 1]
You can now use query or assign depending on what you need:
drinks.query('light_drinker == 1')
or, to mutate the df:
df.assign(strong_drinker = lambda x: x.light_drinker + 100)
Old answer
Not at the moment, but an enhancement along the lines of your idea is being discussed here. For simple cases, where might be enough. The new API might look like this:
df.set(new_column=lambda self: self.light_drinker*2)
In the most current version of pandas, .where() also accepts a callable!
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.where.html?highlight=where#pandas.DataFrame.where
So, the following is now possible:
drinks.light_drinker.where(lambda x: x == 1)
which is particularly useful in method-chains. However, this will return only the Series (not the DataFrame filtered based on the values in the light_drinker column). This is consistent with your question, but I will elaborate for the other case.
To get a filtered DataFrame, use:
drinks.where(lambda x: x.light_drinker == 1)
Note that this will keep the shape of self (meaning you will have rows where all entries are NaN, because the condition failed for the light_drinker value at that index).
If you don't want to preserve the shape of the DataFrame (i.e you wish to drop the NaN rows), use:
drinks.query('light_drinker == 1')
Note that the items in DataFrame.index and DataFrame.columns are placed in the query namespace by default, meaning that you don't have to reference the self.
I don't know of any way to reference parent objects like self or this in pandas, but perhaps another way of doing what you want, which could be considered more DRY, is where().
drinks.where(drinks.light_drinker == 1, inplace=True)
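To see the difference in shape between the two approaches discussed above, a small sketch (the drinks data and light_drinker values are stand-ins mirroring the question):
import pandas as pd

drinks = pd.DataFrame({'light_drinker': [1, 2, 1], 'country': ['A', 'B', 'C']})

# .where keeps the original shape, turning failed rows into NaN...
print(drinks.where(drinks.light_drinker == 1))

# ...while .query drops the rows where the condition fails
print(drinks.query('light_drinker == 1'))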
