Python. Dataframes. Select rows and apply condition on the selection

Python newbie here. I have written code that solves the issue, but there should be a much better way of doing it.
I have two Series that come from the same table but, due to some earlier processing, I get them as separate sets. (They could be joined into a single DataFrame again, since the entries belong to the same records.)
Ser1            Ser2
| id |          | section |
|----|          |---------|
| 1  |          | A       |
| 2  |          | B       |
| 2  |          | C       |
| 3  |          | D       |
df2
| id | section |
|----|---------|
| 1  | A       |
| 2  | B       |
| 2  | Z       |
| 2  | Y       |
| 4  | X       |
First, I would like to find the entries in Ser1 whose id also appears in df2. Then, for those entries, check whether the value in Ser2 can NOT be found in the section column of df2.
My expected results:
| id | section | result |
|----|---------|--------|
| 1  | A       | False  |  # both id (1) and section (A) are also in df2
| 2  | B       | False  |  # both id (2) and section (B) are also in df2
| 2  | C       | True   |  # id (2) is in df2 but section (C) is not
| 3  | D       | False  |  # id (3) is not in df2; in that case the result should also be False
My code:
for k, v in Ser2.items():
    # rows of df2 that share this entry's id
    rslt_df = df2[df2['id'] == Ser1[k]]
    if rslt_df.empty:
        # id not present in df2 at all -> False
        print(False)
    elif v not in rslt_df['section'].tolist():
        print(True)
    else:
        print(False)
I know the code is not very good, but after reading about merging and list comprehensions I am confused about what the best way to improve it would be.

You can concat the series and compute the "result" with boolean arithmetic (XOR):
out = (
    pd.concat([ser1, ser2], axis=1)
      .assign(result=ser1.isin(df2['id']) != ser2.isin(df2['section']))
)
Output:
   id section  result
0   1       A   False
1   2       B   False
2   2       C    True
3   3       D   False
Intermediates:
m1 = ser1.isin(df2['id'])
m2 = ser2.isin(df2['section'])
   m1     m2     m1 != m2
0  True   True   False
1  True   True   False
2  True   False  True
3  False  False  False
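An aside, not part of the original answer: read literally, "id is in df2 but section is not" is m1 & ~m2 rather than XOR. The two agree on this data, but XOR would also return True for a row whose id is absent from df2 while its section value happens to appear there. A minimal sketch of the stricter form:
out = (
    pd.concat([ser1, ser2], axis=1)
      # True only when the id exists in df2 AND the section does not
      .assign(result=ser1.isin(df2['id']) & ~ser2.isin(df2['section']))
)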


Pyspark: Reorder only a subset of rows among themselves

My data frame:
+-----+--------+-------+
| val | id | reRnk |
+-----+--------+-------+
| 2 | a | yes |
| 1 | b | no |
| 3 | c | no |
| 8 | d | yes |
| 7 | e | yes |
| 9 | f | no |
+-----+--------+-------+
In my desired output, I will re-rank only the rows where reRnk == 'yes'; the ranking will be based on "val".
I don't want to change the rows where reRnk == 'no'. For example, at id = b we have reRnk = 'no', and I want to keep that row at row no. 2.
my desired output will look like this:
+-----+--------+-------+
| val | id | reRnk |
+-----+--------+-------+
| 8 | d | yes |
| 1 | b | no |
| 3 | c | no |
| 7 | e | yes |
| 2 | a | yes |
| 9 | f | no |
+-----+--------+-------+
From what I'm reading, PySpark DataFrames do not have an index by default, so you might need to add one.
I do not know the exact syntax for PySpark, but since it has many similarities with pandas, this might point you in a useful direction:
mask = df.reRnk == 'yes'
df.loc[mask, ['val', 'id']] = (
    df.loc[mask, ['val', 'id']]
      .sort_values('val', ascending=False)
      .set_index(df.loc[mask, ['val', 'id']].index)
)
Basically, we isolate the rows with reRnk == 'yes' and sort them by 'val', but reset the index to the original index; assigning the result back then places the sorted values in the original row positions.
for .loc, https://spark.apache.org/docs/3.2.0/api/python/reference/pyspark.pandas/api/pyspark.pandas.DataFrame.loc.html might be worth a try.
for .sort_values see: https://sparkbyexamples.com/pyspark/pyspark-orderby-and-sort-explained/
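For what it's worth, here is the same idea as a self-contained pandas sketch using the question's data (plain pandas; translating it to pyspark.pandas is left to the reader, and the mask variable is just for readability):
import pandas as pd

df = pd.DataFrame({
    'val': [2, 1, 3, 8, 7, 9],
    'id': list('abcdef'),
    'reRnk': ['yes', 'no', 'no', 'yes', 'yes', 'no'],
})

mask = df.reRnk == 'yes'
# sort only the 'yes' rows by val (descending), then put the sorted values
# back on the original index labels so they land in the original positions
df.loc[mask, ['val', 'id']] = (
    df.loc[mask, ['val', 'id']]
      .sort_values('val', ascending=False)
      .set_index(df.loc[mask].index)
)
print(df)  # rows b, c, f are untouched; a, d, e are re-ranked by val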

Replace column name by Index

I have the below data in a Dataframe.
+----+------+----+------+
| Id | Name | Id | Name |
+----+------+----+------+
| 1 | A | 1 | C |
| 2 | B | 2 | B |
+----+------+----+------+
Though the column names are repeating, it is really a comparison of the first two columns (old data) with the last two columns (new data).
I was trying to rename the second-to-last column by appending _New to it by index, using the code below. Unfortunately, the first column also gets _New appended.
df.rename(columns={df.columns[2]: df.columns[2] + '_New'}, inplace=True)
Here's the result I am getting using the above code.
+--------+------+--------+------+
| Id_New | Name | Id_New | Name |
+--------+------+--------+------+
| 1 | A | 1 | C |
| 2 | B | 2 | B |
+--------+------+--------+------+
My understanding is that it should add _New to only the 2nd last column. Below is the expected result.
+----+------+--------+------+
| Id | Name | Id_New | Name |
+----+------+--------+------+
| 1 | A | 1 | C |
| 2 | B | 2 | B |
+----+------+--------+------+
Is there any way to accomplish this?
df.rename maps columns by label, and both Id columns share the label 'Id', so both get renamed. Instead, you can use a simple loop with a dictionary to keep track of the increments. I generalized the logic here to handle an arbitrary number of duplicates:
cols = {}        # counts how many times each name has been seen
new_cols = []
for c in df.columns:
    if c in cols:
        new_cols.append(f'{c}_New{cols[c]}')
        cols[c] += 1
    else:
        new_cols.append(c)
        cols[c] = 1
df.columns = new_cols
output:
Id Name Id_New1 Name_New1
0 1 A 1 C
1 2 B 2 B
If you really want Id_New, then Id_New2, etc., change:
new_cols.append(f'{c}_New{cols[c]}')
to:
i = cols[c] if cols[c] != 1 else ''
new_cols.append(f'{c}_New{i}')
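As a quick check, here is that variant applied to a hypothetical two-row frame matching the question's layout; it produces exactly the expected headers:
import pandas as pd

df = pd.DataFrame([[1, 'A', 1, 'C'], [2, 'B', 2, 'B']],
                  columns=['Id', 'Name', 'Id', 'Name'])

cols = {}
new_cols = []
for c in df.columns:
    if c in cols:
        # empty suffix for the first duplicate, then 2, 3, ...
        i = cols[c] if cols[c] != 1 else ''
        new_cols.append(f'{c}_New{i}')
        cols[c] += 1
    else:
        new_cols.append(c)
        cols[c] = 1
df.columns = new_cols

print(df.columns.tolist())  # ['Id', 'Name', 'Id_New', 'Name_New']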

Calculate True Positives and False Negatives for Responses Within a DataFrame

I have survey results which I have one-hot encoded. I would like to calculate the sensitivity of each participant's response.
The below is an example of how my DataFrame is structured, whereby:
'Chocolate' and 'Ice-Cream' are correct
'Pizza' and 'None of the Above' are incorrect
Question 1     | Chocolate | Pizza | Ice-Cream | None of the Above |
Participant ID |           |       |           |                   |
1              | 1         | 1     | 1         | 0                 |
2              | 0         | 0     | 1         | 0                 |
3              | 1         | 0     | 1         | 0                 |
I would like to append a column that contains the sum of true positives and another with the sum of false negatives, to then create another with the sensitivity score (for each participant).
The below is an example of what I am trying to do:
Question 1     | Chocolate | ... | True Positive | False Negative | ...
Participant ID |           |     |               |                |
1              | 1         | ... | 2             | 0              |
2              | 0         | ... | 1             | 1              |
3              | 1         | ... | 2             | 1              |
I am not sure where to begin with this! Can anyone help me out?
Thanks a lot!
You could calculate the true positives, false negatives, etc. by using a confusion matrix (e.g. from scikit-learn). Maybe the following code is useful for you:
import pandas as pd
from sklearn.metrics import confusion_matrix

a = [[1, 1, 1, 0], [0, 0, 1, 0], [1, 0, 1, 0]]        # participants' answers
correct = [[1, 0, 1, 0], [1, 0, 1, 0], [1, 0, 1, 0]]  # ground truth per participant

df = pd.DataFrame(data=a)
df.columns = ['chocolate', 'pizza', 'icecream', 'none']

for i in range(len(df)):
    pred = a[i]
    true = correct[i]
    tn, fp, fn, tp = confusion_matrix(true, pred).ravel()
    print(f'Nr:{i} true neg:{tn} false pos:{fp} false neg:{fn} true pos:{tp}')
The output is (which you could put in a DataFrame):
Nr:0 true neg:1 false pos:1 false neg:0 true pos:2
Nr:1 true neg:2 false pos:0 false neg:1 true pos:1
Nr:2 true neg:2 false pos:0 false neg:0 true pos:2
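If you prefer to keep everything inside the DataFrame, here is a vectorized sketch along the same lines. It assumes the question's column names, takes 'Chocolate' and 'Ice-Cream' as the correct answers, and uses sensitivity = TP / (TP + FN):
import pandas as pd

df = pd.DataFrame(
    [[1, 1, 1, 0], [0, 0, 1, 0], [1, 0, 1, 0]],
    columns=['Chocolate', 'Pizza', 'Ice-Cream', 'None of the Above'],
    index=pd.Index([1, 2, 3], name='Participant ID'),
)
correct = ['Chocolate', 'Ice-Cream']  # the options that count as correct

# a true positive is a correct option that was selected;
# a false negative is a correct option that was not selected
df['True Positive'] = df[correct].sum(axis=1)
df['False Negative'] = len(correct) - df['True Positive']
df['Sensitivity'] = df['True Positive'] / (df['True Positive'] + df['False Negative'])
print(df)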

Pandas keeping certain rows based on strings in other rows

I have the following dataframe
+-------+------------+
| index | keep       |
+-------+------------+
| 0     | not useful |
| 1     | start_1    |
| 2     | useful     |
| 3     | end_1      |
| 4     | not useful |
| 5     | start_2    |
| 6     | useful     |
| 7     | useful     |
| 8     | end_2      |
+-------+------------+
There are two pairs of marker strings (start_1/end_1 and start_2/end_2) that indicate that only the rows between them are relevant. Hence, for the dataframe below, the output would consist only of the rows at index 2, 6 and 7 (2 lies between start_1 and end_1; 6 and 7 lie between start_2 and end_2).
d = {'keep': ["not useful", "start_1", "useful", "end_1", "not useful", "start_2", "useful", "useful", "end_2"]}
df = pd.DataFrame(data=d)
What is the most Pythonic/Pandas approach to this problem?
Thanks
Here's one way to do that (in a couple of steps, for clarity). There might be others:
df["sections"] = 0
df.loc[df.keep.str.startswith("start"), "sections"] = 1
df.loc[df.keep.str.startswith("end"), "sections"] = -1
df["in_section"] = df.sections.cumsum()
res = df[(df.in_section == 1) & ~df.keep.str.startswith("start")]
Output:
index keep sections in_section
2 2 useful 0 1
6 6 useful 0 1
7 7 useful 0 1
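If you'd rather not keep the temporary columns, the same cumulative-sum trick fits in a few lines (a compact variant of the above, not a different algorithm):
starts = df.keep.str.startswith("start")
ends = df.keep.str.startswith("end")
# running count of open sections; keep rows inside one, minus the start markers
res = df[(starts.cumsum() - ends.cumsum()).eq(1) & ~starts]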

Creating new complex index with pandas Dataframe columns

I'm trying to concatenate the columns 'A' and 'C' in a Dataframe like the following to use it as a new Index:
      A |  B  | C | ...
---------------------------
0     5 | djn | 0 | ...
1     5 | vlv | 1 | ...
2     5 | bla | 2 | ...
3     5 | ses | 3 | ...
4     5 | dug | 4 | ...
The desired result would be a Dataframe which is similar to the following result:
          A |  B  | C | ...
-------------------------------
05000     5 | djn | 0 | ...
05001     5 | vlv | 1 | ...
05002     5 | bla | 2 | ...
05003     5 | ses | 3 | ...
05004     5 | dug | 4 | ...
I've searched my eyes off; does someone know how to manipulate a dataframe to get such a result?
import pandas as pd

# dummying up a dataframe
cf = pd.DataFrame()
cf['A'] = 5 * [5]
cf['C'] = range(5)
cf['B'] = list('qwert')

# putting together two columns into a new one -- zero-padded string formatting
cf['D'] = (1000 * cf.A + cf.C).map(lambda x: str(x).zfill(5))

# use it as the index
cf.index = cf.D
# we don't need it as a column
cf.drop('D', axis=1, inplace=True)

print(cf.to_csv())
D,A,C,B
05000,5,0,q
05001,5,1,w
05002,5,2,e
05003,5,3,r
05004,5,4,t
That said, I suspect you'd be safer with multi-indexing (what happens if the values in C go above 999....), or with sorting or grouping on multiple columns.
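A minimal sketch of that multi-indexing alternative, on the same dummy data (the choice of (A, C) as the index columns is an assumption for illustration):
import pandas as pd

cf = pd.DataFrame({'A': 5 * [5], 'C': range(5), 'B': list('qwert')})

# a MultiIndex on (A, C) sidesteps the collision risk when C grows past 999
cf = cf.set_index(['A', 'C'])
print(cf.loc[(5, 3)])  # look up a row by its (A, C) pair -> B == 'r'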
