Check if a value in a df appears in a later row - python

First of all, English is not my native language, so sorry for my mistakes.
I am looking to automate some tasks done via Excel with Python.
I have a dataframe ordered by date/time, and I want to check if a customer has contacted me again after already having a response.
So I have a dataframe like this:
| Date | Tel |
| ---------- | ------------ |
| 01-01-2023 | +33000000001 |
| 01-01-2023 | +33000000002 |
| 01-01-2023 | +33000000003 |
| 02-01-2023 | +33000000002 |
| 02-01-2023 | +33000000004 |
I'd like to add a TRUE/FALSE column indicating whether the client contacted me again later:
| Date | Tel | Re-contact |
| ------------ | ------------ | ------------ |
| 01-01-2023 | +33000000001 | FALSE |
| 01-01-2023 | +33000000002 | TRUE |
| 01-01-2023 | +33000000003 | FALSE |
| 02-01-2023 | +33000000002 | FALSE |
| 02-01-2023 | +33000000004 | FALSE |
In Excel, I do this action as follows:
COUNTIFS(A2:A$5;A1)>0
And I would get my TRUE/FALSE if the phone number exists further in my list.
I looked at the documentation to see if a value existed in a list, but I couldn't find a way to see if it existed further down. Also, I'm looking for a quick way to calculate it, as I have 100,000 rows in my dataframe.
# I've tried this so far:
length = len(df.index) - 1
i = 1
for i in range(i, length):
    print(i)
    for x in df['number']:
        if x in df['number'][[i+1, length]]:
            df['Re-contact'] = 'TRUE'
        else:
            df['Re-contact'] = 'FALSE'
    i += 1
It feels very wrong to me, and my code takes too much time. I'm looking for a more efficient way to perform what I'm trying to do.

Use pandas.DataFrame.duplicated over the Tel column to find repeated calls:
df['Re-contact'] = df.Tel.duplicated(keep='last')
Date Tel Re-contact
0 01-01-2023 33000000001 False
1 01-01-2023 33000000002 True
2 01-01-2023 33000000003 False
3 02-01-2023 33000000002 False
4 02-01-2023 33000000004 False
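For reference, a minimal runnable sketch of this approach on the sample data (column names taken from the question; if the frame were not already ordered by date, you would sort it first):
import pandas as pd

df = pd.DataFrame({
    'Date': ['01-01-2023', '01-01-2023', '01-01-2023', '02-01-2023', '02-01-2023'],
    'Tel':  ['+33000000001', '+33000000002', '+33000000003', '+33000000002', '+33000000004'],
})

# The frame is already ordered by date/time, so "contacted again later" is the same as
# "this Tel appears again further down"; duplicated(keep='last') flags every occurrence
# except the last one.
df['Re-contact'] = df['Tel'].duplicated(keep='last')
print(df)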

Related

Pyspark: Reorder only a subset of rows among themselves

my data frame:
+-----+--------+-------+
| val | id | reRnk |
+-----+--------+-------+
| 2 | a | yes |
| 1 | b | no |
| 3 | c | no |
| 8 | d | yes |
| 7 | e | yes |
| 9 | f | no |
+-----+--------+-------+
In my desired output, I will re-rank only the rows where reRnk == yes; the ranking will be done based on "val".
I don't want to change the rows where reRnk = no; for example, at id = b we have reRnk = no, and I want to keep that row at row no. 2.
my desired output will look like this:
+-----+--------+-------+
| val | id | reRnk |
+-----+--------+-------+
| 8 | d | yes |
| 1 | b | no |
| 3 | c | no |
| 7 | e | yes |
| 2 | a | yes |
| 9 | f | no |
+-----+--------+-------+
From what I'm reading, pyspark DF's do not have an index by default. You might need to add this.
I do not know the exact syntax for pyspark; however, since it has many similarities with pandas, this might point you in the right direction:
df.loc[df.reRnk == 'yes', ['val', 'id']] = (
    df.loc[df.reRnk == 'yes', ['val', 'id']]
      .sort_values('val', ascending=False)
      .set_index(df.loc[df.reRnk == 'yes', ['val', 'id']].index)
)
Basically, what we do here is isolate the rows with reRnk == 'yes', sort these rows by val, and reset the index to the original index. Then we assign these new values back to the original rows in the df.
For .loc, https://spark.apache.org/docs/3.2.0/api/python/reference/pyspark.pandas/api/pyspark.pandas.DataFrame.loc.html might be worth a try.
For .sort_values, see: https://sparkbyexamples.com/pyspark/pyspark-orderby-and-sort-explained/
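In plain pandas, the whole idea can be written out as a small runnable sketch (sample data taken from the question; the PySpark translation is left open, as noted above):
import pandas as pd

df = pd.DataFrame({
    'val':   [2, 1, 3, 8, 7, 9],
    'id':    ['a', 'b', 'c', 'd', 'e', 'f'],
    'reRnk': ['yes', 'no', 'no', 'yes', 'yes', 'no'],
})

mask = df['reRnk'] == 'yes'
# Sort only the "yes" rows by val (descending) and write them back onto the
# original row positions; the "no" rows are left untouched.
df.loc[mask, ['val', 'id']] = (
    df.loc[mask, ['val', 'id']]
      .sort_values('val', ascending=False)
      .set_index(df.loc[mask].index)
)
print(df)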

Python - Pandas - Converting column with specific subsets into rows

I have a dataframe that looks like this below with Date, Price and Serial.
+----------+--------+--------+
| Date | Price | Serial |
+----------+--------+--------+
| 2/1/1996 | 0.5909 | 1 |
| 2/1/1996 | 0.5711 | 2 |
| 2/1/1996 | 0.5845 | 3 |
| 3/1/1996 | 0.5874 | 1 |
| 3/1/1996 | 0.5695 | 2 |
| 3/1/1996 | 0.584 | 3 |
+----------+--------+--------+
I would like to make it look like this, where Serial becomes the column names and the data is arranged into the correct Date row and Serial column.
+----------+--------+--------+--------+
| Date | 1 | 2 | 3 |
+----------+--------+--------+--------+
| 2/1/1996 | 0.5909 | 0.5711 | 0.5845 |
| 3/1/1996 | 0.5874 | 0.5695 | 0.584 |
+----------+--------+--------+--------+
I understand I can do this via a loop, but I'm just wondering if there is a more efficient way to do it?
Thanks for your kind help. Also curious if there is a better way to paste such tables rather than attaching images in my questions =x
You can use pandas.pivot_table:
import numpy as np

res = df.pivot_table(index='Date', columns='Serial', values='Price', aggfunc=np.sum) \
        .reset_index()
res.columns.name = ''
Date 1 2 3
0 2/1/1996 0.5909 0.5711 0.5845
1 3/1/1996 0.5874 0.5695 0.5840
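Since each Date/Serial pair occurs only once in this data, plain pivot gives the same result without the aggregation step (and without needing numpy):
res = df.pivot(index='Date', columns='Serial', values='Price').reset_index()
res.columns.name = ''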

graphlab - sframe : How to remove rows which have same ids and condition on a column?

I have a graphlab SFrame where a few rows have the same id value in the "uid" column.
+-------------------+----------------------+----------------------+--------------+
| VIM Document Type | Vendor Number & Zone | Value <5000 or >5000 | Today Status |
+-------------------+----------------------+----------------------+--------------+
| PO_VR_GLB | 1613407EMEAi | Less than 5000 | 0 |
| PO_VR_GLB | 249737LATIN AMERICA | More than 5000 | 1 |
| PO_MN_GLB | 1822317NORTH AMERICA | Less than 5000 | 1 |
| PO_MN_GLB | 1822317NORTH AMERICA | Less than 5000 | 1 |
| PO_MN_GLB | 1822317NORTH AMERICA | Less than 5000 | 1 |
| PO_MN_GLB | 1216902NORTH AMERICA | More than 5000 | 1 |
| PO_MN_GLB | 1213709EMEAi | Less than 5000 | 0 |
| PO_MN_GLB | 882843NORTH AMERICA | More than 5000 | 1 |
| PO_MN_GLB | 2131503ASIA PACIFIC | More than 5000 | 1 |
| PO_MN_GLB | 2131503ASIA PACIFIC | More than 5000 | 1 |
+-------------------+----------------------+----------------------+--------------+
+---------------------+
| uid |
+---------------------+
| 63068$#069 |
| 5789$#13 |
| 12933036$#IN6532618 |
| 12933022$#IN6590132 |
| 12932349$#IN6636468 |
| 12952077$#203250 |
| 13012770$#MUML04184 |
| 12945049$#112370 |
| 13582330$#CI160118 |
| 13012770$#MUML04184 |
+---------------------+
Here, I want to retain all the rows with unique uids, and only one of the rows that share the same uid; the row to be retained can be any row which has Today Status = 1 (i.e. there can be rows where the uid and Today Status are the same but other fields differ; in that case we can keep any one of these rows). I want to do these operations in graphlab SFrames, but I am unable to figure out how to proceed.
You may use SFrame.unique(), which gives you the unique rows:
sf = sf.unique()
Other ways include the groupby() method or the join() method, where you can specify the column name and work from there. You may read their documentation on turi.com for the various options.
Another way (that I personally prefer) is to convert the SFrame to a pandas DataFrame, do the data operations there, and then convert the pandas DataFrame back to an SFrame. It depends on your choice, and I hope this helps.
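A small sketch of that last pandas round-trip, keeping one row per uid and preferring a row with Today Status = 1 (sf stands for the question's SFrame; to_dataframe() and the SFrame constructor are the usual conversion helpers):
import pandas as pd

# pdf = sf.to_dataframe()          # SFrame -> pandas DataFrame
# Stand-in frame with just the relevant columns, for illustration:
pdf = pd.DataFrame({
    'uid':          ['13012770$#MUML04184', '13012770$#MUML04184', '5789$#13'],
    'Today Status': [0, 1, 1],
})

# Put Today Status = 1 rows first within each uid, then keep one row per uid.
deduped = (pdf.sort_values('Today Status', ascending=False)
              .drop_duplicates(subset='uid', keep='first'))
print(deduped)

# sf_out = graphlab.SFrame(deduped)  # convert back to an SFrame if needed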

pandas dataframe get rows based on matched strings in cells

Given the following data frame
+-----+----------------+--------+---------+
| | A | B | C |
+-----+----------------+--------+---------+
| 0 | hello#me.com | 2.0 | Hello |
| 1 | you#you.com | 3.0 | World |
| 2 | us#world.com | hi | holiday |
+-----+----------------+--------+---------+
How can I get all the rows where re.compile(r"[Hh](i|ello)") would match in a cell? That is, from the above example, I would like to get the following output:
+-----+----------------+--------+---------+
| | A | B | C |
+-----+----------------+--------+---------+
| 0 | hello#me.com | 2.0 | Hello |
| 2 | us#world.com | hi | holiday |
+-----+----------------+--------+---------+
I have not been able to find a solution for this. Any help would be very much appreciated.
Using stack to avoid apply
df.loc[df.stack().str.match(r'[Hh](i|ello)').unstack().any(axis=1)]
Using match generates a future warning. The warning is consistent with what we are doing, so that's good. However, findall accomplishes the same thing:
df.loc[df.stack().str.findall(r'[Hh](i|ello)').unstack().any(axis=1)]
You can use the findall function which takes regular expressions.
msk = df.apply(lambda x: x.str.findall(r'[Hh](i|ello)')).any(axis=1)
df[msk]
+---+--------------+-----+---------+
|   | A            | B   | C       |
+---+--------------+-----+---------+
| 0 | hello#me.com | 2   | Hello   |
| 2 | us#world.com | hi  | holiday |
+---+--------------+-----+---------+
any(axis=1) will check if any of the columns in a given row are true. So msk is a single column of True/False values indicating whether or not the regular expression was found in that row.
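For reference, a self-contained version of the same idea on the sample frame. Using a non-capturing group avoids the "match groups" UserWarning that str.contains would otherwise raise, and casting to str sidesteps NaN results for the numeric cell in column B (data reconstructed from the example above):
import pandas as pd

df = pd.DataFrame({
    'A': ['hello#me.com', 'you#you.com', 'us#world.com'],
    'B': [2.0, 3.0, 'hi'],
    'C': ['Hello', 'World', 'holiday'],
})

# Cast every cell to str, test each column for the pattern, then keep rows
# where any column matched.
msk = df.astype(str).apply(lambda col: col.str.contains(r'[Hh](?:i|ello)')).any(axis=1)
print(df[msk])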

search for string in pandas row

How can I search through the entire row of a pandas dataframe for a phrase and, if it exists, create a new column that says 'Yes' and names the columns in that row where it was found? I would like to be able to ignore case as well.
You could use the pandas apply function, which allows you to traverse rows or columns and apply your own function to them.
For example, given a dataframe
+--------------------------------------+------------+---+
| deviceid | devicetype | 1 |
+--------------------------------------+------------+---+
| b569dcb7-4498-4cb4-81be-333a7f89e65f | Google | 1 |
| 04d3b752-f7a1-42ae-8e8a-9322cda4fd7f | Android | 2 |
| cf7391c5-a82f-4889-8d9e-0a423f132026 | Android | 3 |
+--------------------------------------+------------+---+
Define a function
import pandas as pd

def pr(array, value):
    # Names of the columns in this row where the phrase was found
    condition = array[array.str.contains(value).fillna(False)].index.tolist()
    if condition:
        ret = array.append(pd.Series({"condition": ['Yes'] + condition}))
    else:
        ret = array.append(pd.Series({"condition": ['No'] + condition}))
    return ret
Use it
df.apply(pr, axis=1, args=('Google',))
+---+--------------------------------------+------------+---+-------------------+
| | deviceid | devicetype | 1 | condition |
+---+--------------------------------------+------------+---+-------------------+
| 0 | b569dcb7-4498-4cb4-81be-333a7f89e65f | Google | 1 | [Yes, devicetype] |
| 1 | 04d3b752-f7a1-42ae-8e8a-9322cda4fd7f | Android | 2 | [No] |
| 2 | cf7391c5-a82f-4889-8d9e-0a423f132026 | Android | 3 | [No] |
+---+--------------------------------------+------------+---+-------------------+
