How to add the first values in cascading columns? - python

Can anyone help me with a better solution than looping? Let's say we have the following pandas DataFrame made up of 4 columns. I am looking for a way to get the same values as in the "Result" column by some method other than looping.
Here is the logic:
If priority1 = 1, then result = 1.
If priority1 = 1 and priority2 = 1, then result = 2 (ignore the remaining columns if priority1 != 1).
If priority1 = 1, priority2 = 1 and priority3 = 1, then result = 3 (ignore the remaining columns if priority1 or priority2 != 1).
If priority1 = 1, priority2 = 1, priority3 = 1 and priority4 = 1, then result = 4 (ignore the remaining columns if priority1, priority2 or priority3 != 1).
The opposite applies if priority1 is negative (-1).
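For concreteness, a minimal row-wise sketch of this rule (not my actual loop; it reads each row independently, the way the answers below treat the example frame) could look like:
def cascade(row):
    # row is a sequence like (priority1, priority2, priority3, priority4)
    sign = row[0]
    if sign not in (1, -1):
        return 0
    count = 0
    for value in row:
        if value != sign:
            break              # stop at the first column that breaks the run
        count += 1
    return sign * count        # e.g. (1, 1, 1, 0) -> 3, (-1, -1, 0, 0) -> -2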
Here is my final result, produced by some very ugly and inefficient looping:
+-----+-----------+-----------+-----------+-----------+--------+
| | priority1 | priority2 | priority3 | priority4 | Result |
+-----+-----------+-----------+-----------+-----------+--------+
| 0 | | | | | |
| 1 | | 1 | -1 | -1 | |
| 2 | | | | | |
| 3 | | | | 1 | |
| 4 | | | | 1 | |
| 5 | | | | | |
| 6 | | | | -1 | |
| 7 | | | | | |
| 8 | | | | | |
| 9 | 1 | 1 | 1 | 1 | 1 |
| 10 | | | | | |
| 11 | | | | 1 | |
| 12 | | | 1 | | |
| 13 | | | | | |
| 14 | | | -1 | -1 | |
| 15 | | | | | |
| 16 | | | | | |
| 17 | | | | -1 | |
| 18 | | | | | |
| 19 | | | | | |
| 20 | | 1 | 1 | 1 | 2 |
| 21 | | | | | |
| 22 | | | -1 | -1 | |
| 23 | | | | | |
| 24 | | | | | |
| 25 | | | | -1 | |
| 26 | | | | | |
| 27 | | | 1 | 1 | 3 |
| 28 | | | | | |
| 29 | | | | | |
| 30 | | | | -1 | |
| 31 | | | | | |
| 32 | | | | | |
| 33 | | | -1 | -1 | |
| 34 | | | | | |
| 35 | | | 1 | 1 | 4 |
| 36 | | | | | |
| 37 | | | | | |
| 38 | | | | | |
| 39 | | | -1 | -1 | |
| 40 | | | | | |
| 41 | | | | | |
| 42 | | 1 | 1 | 1 | 2 |
| 43 | | | | | |
| 44 | | | | | |
| 45 | | | | -1 | |
| 46 | | | | | |
| 47 | | | | | |
| 48 | | | | | |
| 49 | | | | | |
| 50 | | -1 | -1 | -1 | |
| 51 | | | | | |
| 52 | | | | | |
| 53 | | 1 | 1 | 1 | 2 |
| 54 | | | | | |
| 55 | | | | | |
| 56 | | | | -1 | |
| 57 | | | | | |
| 58 | | | | | |
| 59 | | | -1 | -1 | |
| 60 | | | | | |
| 61 | | | | | |
| 62 | | | 1 | 1 | 3 |
| 63 | | | | | |
| 64 | -1 | -1 | -1 | -1 | -1 |
| 65 | | | | | |
| 66 | | | 1 | 1 | |
| 67 | | | | | |
| 68 | | | | | |
| 69 | | | | | |
| 70 | | | | -1 | |
| 71 | | | | | |
| 72 | | | | | |
| 73 | | | 1 | 1 | |
| 74 | | | | -1 | |
| 75 | | | | | |
| 76 | | | 1 | 1 | |
| 77 | | | | | |
| 78 | | -1 | -1 | -1 | -2 |
| 79 | | | 1 | | |
| 80 | | | 1 | 1 | |
| 81 | | | | | |
| 82 | | | -1 | -1 | -3 |
| 83 | | | 1 | 1 | |
| 84 | | | | | |
| 85 | | | | | |
| 86 | | | | | |
| 87 | | | -1 | -1 | -4 |
| 88 | | | | | |
| 89 | | -1 | -1 | -1 | -2 |
| 90 | | | | | |
| 91 | | | | | |
| 92 | | | | -1 | |
| 93 | | | | | |
| 94 | | | | | |
| 95 | | | 1 | 1 | |
| 96 | | | | | |
| 97 | | | | | |
| 98 | | | | -1 | |
| 99 | | | | 1 | |
| 100 | | | | | |
| 101 | | | -1 | -1 | -3 |
| 102 | | | | | |
| 103 | | | | | |
| 104 | | 1 | 1 | 1 | |
| 105 | | | | | |
| 106 | | | | 1 | |
| 107 | | | | | |
| 108 | | | -1 | -1 | |
| 109 | | | | | |
| 110 | | | | | |
| 111 | | | 1 | 1 | |
| 112 | | | | | |
| 113 | | | | | |
| 114 | | | -1 | -1 | |
| 115 | | | | | |
| 116 | | | 1 | 1 | |
| 117 | | | | | |
| 118 | | | | | |
| 119 | | -1 | -1 | -1 | -2 |
| 120 | | | | | |
| 121 | | | | | |
| 122 | | | | 1 | |
| 123 | | | | | |
| 124 | | | | 1 | |
| 125 | | | | | |
| 126 | | | | | |
| 127 | | | 1 | 1 | |
| 128 | | | | | |
| 129 | | | | | |
| 130 | | | -1 | -1 | -3 |
| 131 | | | | | |
| 132 | | | | | |
| 133 | | | | | |
| 134 | 1 | 1 | 1 | 1 | 1 |
| 135 | | | | -1 | |
| 136 | | | | | |
| 137 | | -1 | -1 | -1 | |
| 138 | | | 1 | | |
| 139 | | | | 1 | |
| 140 | | 1 | 1 | 1 | 2 |
| 141 | | | 1 | 1 | 3 |
| 142 | | | | | |
| 143 | | | | -1 | |
| 144 | | | | | |
| 145 | | | | 1 | 4 |
+-----+-----------+-----------+-----------+-----------+--------+

setup
import pandas as pd
import numpy as np

df = pd.DataFrame([
    [ 1,  0,  0,  0],
    [ 1,  1,  0,  0],
    [ 1,  1,  1,  0],
    [ 1,  1,  1,  1],
    [ 0,  1,  1,  1],
    [ 0,  0,  1,  1],
    [ 0,  0,  0,  1],
    [ 1,  0,  1,  1],  # this should end up 1
    [ 0,  0,  0,  0],
    [-1,  0,  0,  0],
    [-1, -1,  0,  0],
    [-1, -1, -1,  0],
    [-1, -1, -1, -1],
    [ 0, -1, -1, -1],
    [ 0,  0, -1, -1],
    [ 0,  0,  0, -1],
], columns=['priority{}'.format(i) for i in range(1, 5)])
solution
v = df.values
# note: np.bool8 was removed in NumPy 2.0; on current NumPy use plain bool instead
df.assign(Results=(v * v.cumprod(1).astype(np.bool8)).sum(1))
priority1 priority2 priority3 priority4 Results
0 1 0 0 0 1
1 1 1 0 0 2
2 1 1 1 0 3
3 1 1 1 1 4
4 0 1 1 1 0
5 0 0 1 1 0
6 0 0 0 1 0
7 1 0 1 1 1
8 0 0 0 0 0
9 -1 0 0 0 -1
10 -1 -1 0 0 -2
11 -1 -1 -1 0 -3
12 -1 -1 -1 -1 -4
13 0 -1 -1 -1 0
14 0 0 -1 -1 0
15 0 0 0 -1 0
how it works
Grab the numpy array with
v = df.values
treat non-zeros as True with
v.astype(np.bool8)
keep each successive column only while the run of non-zeros is unbroken with
v.astype(np.bool8).cumprod(1)
multiply by v to filter out which values to add, then sum across each row
(v * v.astype(np.bool8).cumprod(1)).sum(1)
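To make the intermediate steps concrete, here is a tiny sketch for a single row (using plain bool, since np.bool8 has been removed from recent NumPy releases):
import numpy as np
row = np.array([[-1, -1, -1, 0]])   # one example row
keep = row.astype(bool).cumprod(1)  # [[1 1 1 0]]: stays 1 until the first zero
print(row * keep)                   # [[-1 -1 -1  0]]
print((row * keep).sum(1))          # [-3]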
naive time test: timing comparisons for small data and big data (plots omitted)

Using piRSquared's example frame (hat tip!), I might do something like
match = (df.abs() == 1) & (df.eq(df.iloc[:, 0], axis=0))
out = match.cumprod(axis=1).sum(axis=1) * df.iloc[:, 0]
which gives me
In [107]: df["Result"] = out
In [108]: df
Out[108]:
priority1 priority2 priority3 priority4 Result
0 1 0 0 0 1
1 1 1 0 0 2
2 1 1 1 0 3
3 1 1 1 1 4
4 0 1 1 1 0
5 0 0 1 1 0
6 0 0 0 1 0
7 0 0 0 0 0
8 -1 0 0 0 -1
9 -1 -1 0 0 -2
10 -1 -1 -1 0 -3
11 -1 -1 -1 -1 -4
12 0 -1 -1 -1 0
13 0 0 -1 -1 0
14 0 0 0 -1 0
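In short, match flags the cells that are ±1 and share the sign of priority1, the cumulative product keeps only the unbroken run starting from the first column, and the row sum counts that run before the sign of priority1 is applied again.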

Related

Python Pivot Table based on multiple criteria

I asked this question earlier in the link SUMIFS in python jupyter. However, I just realized that the solution didn't work, because clients can switch in and switch out on different dates. Basically, they have to switch out first before they can switch in.
Here is the dataframe (sorted based on the date):
+---------------+--------+---------+-----------+--------+
| Switch In/Out | Client | Quality | Date | Amount |
+---------------+--------+---------+-----------+--------+
| Out | 1 | B | 15-Aug-19 | 360 |
| In | 1 | A | 16-Aug-19 | 180 |
| In | 1 | B | 17-Aug-19 | 180 |
| Out | 1 | A | 18-Aug-19 | 140 |
| In | 1 | B | 18-Aug-19 | 80 |
| In | 1 | A | 19-Aug-19 | 60 |
| Out | 2 | B | 14-Aug-19 | 45 |
| Out | 2 | C | 15-Aug-20 | 85 |
| In | 2 | C | 15-Aug-20 | 130 |
| Out | 2 | A | 20-Aug-19 | 100 |
| In | 2 | A | 22-Aug-19 | 30 |
| In | 2 | B | 23-Aug-19 | 30 |
| In | 2 | C | 23-Aug-19 | 40 |
+---------------+--------+---------+-----------+--------+
I would then like to create a new column that splits the rows into different transactions.
+---------------+--------+---------+-----------+--------+------+
| Switch In/Out | Client | Quality | Date | Amount | Rows |
+---------------+--------+---------+-----------+--------+------+
| Out | 1 | B | 15-Aug-19 | 360 | 1 |
| In | 1 | A | 16-Aug-19 | 180 | 1 |
| In | 1 | B | 17-Aug-19 | 180 | 1 |
| Out | 1 | A | 18-Aug-19 | 140 | 2 |
| In | 1 | B | 18-Aug-19 | 80 | 2 |
| In | 1 | A | 19-Aug-19 | 60 | 2 |
| Out | 2 | B | 14-Aug-19 | 45 | 3 |
| Out | 2 | C | 15-Aug-20 | 85 | 3 |
| In | 2 | C | 15-Aug-20 | 130 | 3 |
| Out | 2 | A | 20-Aug-19 | 100 | 4 |
| In | 2 | A | 22-Aug-19 | 30 | 4 |
| In | 2 | B | 23-Aug-19 | 30 | 4 |
| In | 2 | C | 23-Aug-19 | 40 | 4 |
+---------------+--------+---------+-----------+--------+------+
With this, I can apply the pivot formula and take it from there.
However, how do I do this in Python? In Excel I can just use multiple SUMIFS and compare the ins and outs, but I haven't found a way to do the same in pandas.
Thank you!
One simple solution is to iterate and apply a check (a function) to each element, with the results forming a new column; in other words, map.
Using df.index.map, the index of each item is passed as the argument, so we can look up values and compare them. In your case, the aim is to identify each change to "Out" while keeping a counter.
import pandas as pd

switchInOut = ["Out", "In", "In", "Out", "In", "In",
               "Out", "Out", "In", "Out", "In", "In", "In"]
df = pd.DataFrame(switchInOut, columns=['Switch In/Out'])

counter = 1

def changeToOut(i):
    # bump the counter whenever an "Out" follows an "In"
    global counter
    if df["Switch In/Out"].get(i) == "Out" and df["Switch In/Out"].get(i - 1) == "In":
        counter += 1
    return counter

rows = df.index.map(changeToOut)
df["Rows"] = rows
df
Result:
+----+-----------------+--------+
| | Switch In/Out | Rows |
|----+-----------------+--------|
| 0 | Out | 1 |
| 1 | In | 1 |
| 2 | In | 1 |
| 3 | Out | 2 |
| 4 | In | 2 |
| 5 | In | 2 |
| 6 | Out | 3 |
| 7 | Out | 3 |
| 8 | In | 3 |
| 9 | Out | 4 |
| 10 | In | 4 |
| 11 | In | 4 |
| 12 | In | 4 |
+----+-----------------+--------+
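For what it's worth, the same counter can be computed without a global variable by flagging the rows where a new "Out" block starts and taking a cumulative sum (a small sketch reusing the toy df built above; on the full data you would probably want to do this within each Client group):
# a transaction starts at an "Out" that is not preceded by another "Out"
starts = (df['Switch In/Out'] == 'Out') & (df['Switch In/Out'].shift() != 'Out')
df['Rows'] = starts.cumsum()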

Logical indexing in pandas dataframes [duplicate]

This question already has answers here:
How do I Pandas group-by to get sum?
I have some data like this:
+-----------+---------+-------+
| Duration | Outcome | Event |
+-----------+---------+-------+
| 421 | 0 | 1 |
| 421 | 0 | 1 |
| 261 | 0 | 1 |
| 24 | 0 | 1 |
| 27 | 0 | 1 |
| 613 | 0 | 1 |
| 2454 | 0 | 1 |
| 227 | 0 | 1 |
| 2560 | 0 | 1 |
| 229 | 0 | 1 |
| 2242 | 0 | 1 |
| 6680 | 0 | 1 |
| 1172 | 0 | 1 |
| 5656 | 0 | 1 |
| 5082 | 0 | 1 |
| 7239 | 0 | 1 |
| 127 | 0 | 1 |
| 128 | 0 | 1 |
| 128 | 0 | 1 |
| 7569 | 1 | 1 |
| 324 | 0 | 2 |
| 6395 | 0 | 2 |
| 6196 | 0 | 2 |
| 31 | 0 | 2 |
| 228 | 0 | 2 |
| 274 | 0 | 2 |
| 270 | 0 | 2 |
| 275 | 0 | 2 |
| 232 | 0 | 2 |
| 7310 | 0 | 2 |
| 7644 | 1 | 2 |
| 6949 | 0 | 3 |
| 6903 | 1 | 3 |
| 6942 | 0 | 4 |
| 7031 | 1 | 4 |
+-----------+---------+-------+
Now, for each Event, with the Outcome 0/1 considered as Fail/Pass, I want to sum the total Duration of Fail/Pass events separately in 2 new columns (or 1, whatever ensures readability).
I'm new to dataframes and I feel significant logical indexing is involved here. What is the best way to approach this problem?
df.groupby(['Event', 'Outcome'])['Duration'].sum()
This groups by both the event and the outcome, selects the duration column, and takes the sum of each group.
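If you want the Fail/Pass totals as two side-by-side columns, one per Outcome, unstacking the grouped sum is one option (a small sketch, assuming df holds the data above):
# sum Duration per (Event, Outcome), then move Outcome out into the columns
df.groupby(['Event', 'Outcome'])['Duration'].sum().unstack(fill_value=0)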
You can also try:
pd.pivot_table(index='Event',
               columns='Outcome',
               values='Duration',
               data=df,
               aggfunc='sum')
which gives you a table with two columns:
+---------+-------+------+
| Outcome | 0 | 1 |
+---------+-------+------+
| Event | | |
+---------+-------+------+
| 1 | 35691 | 7569 |
| 2 | 21535 | 7644 |
| 3 | 6949 | 6903 |
| 4 | 6942 | 7031 |
+---------+-------+------+

Assigning states of Hidden Markov Models by idealized intensity values

I'm running the pomegranate HMM (http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.html) on my data, loading the results into a pandas DataFrame, and defining the idealized intensity as the median of all the points in that state: df["hmm_idealized"] = df.groupby(["hmm_state"], as_index=False)["Raw"].transform("median"). Sample data:
+-----+-----------------+-------------+------------+
| | hmm_idealized | hmm_state | hmm_diff |
|-----+-----------------+-------------+------------|
| 0 | 99862 | 3 | nan |
| 1 | 99862 | 3 | 0 |
| 2 | 99862 | 3 | 0 |
| 3 | 99862 | 3 | 0 |
| 4 | 99862 | 3 | 0 |
| 5 | 99862 | 3 | 0 |
| 6 | 117759 | 4 | 1 |
| 7 | 117759 | 4 | 0 |
| 8 | 117759 | 4 | 0 |
| 9 | 117759 | 4 | 0 |
| 10 | 117759 | 4 | 0 |
| 11 | 117759 | 4 | 0 |
| 12 | 117759 | 4 | 0 |
| 13 | 117759 | 4 | 0 |
| 14 | 124934 | 2 | -2 |
| 15 | 124934 | 2 | 0 |
| 16 | 124934 | 2 | 0 |
| 17 | 124934 | 2 | 0 |
| 18 | 124934 | 2 | 0 |
| 19 | 117759 | 4 | 2 |
| 20 | 117759 | 4 | 0 |
| 21 | 117759 | 4 | 0 |
| 22 | 117759 | 4 | 0 |
| 23 | 117759 | 4 | 0 |
| 24 | 117759 | 4 | 0 |
| 25 | 117759 | 4 | 0 |
| 26 | 117759 | 4 | 0 |
| 27 | 117759 | 4 | 0 |
| 28 | 117759 | 4 | 0 |
| 29 | 117759 | 4 | 0 |
| 30 | 117759 | 4 | 0 |
| 31 | 117759 | 4 | 0 |
| 32 | 117759 | 4 | 0 |
| 33 | 117759 | 4 | 0 |
| 34 | 117759 | 4 | 0 |
| 35 | 117759 | 4 | 0 |
| 36 | 117759 | 4 | 0 |
| 37 | 117759 | 4 | 0 |
| 38 | 117759 | 4 | 0 |
| 39 | 117759 | 4 | 0 |
| 40 | 106169 | 1 | -3 |
| 41 | 106169 | 1 | 0 |
| 42 | 106169 | 1 | 0 |
| 43 | 106169 | 1 | 0 |
| 44 | 106169 | 1 | 0 |
| 45 | 106169 | 1 | 0 |
| 46 | 106169 | 1 | 0 |
| 47 | 106169 | 1 | 0 |
| 48 | 106169 | 1 | 0 |
| 49 | 106169 | 1 | 0 |
| 50 | 106169 | 1 | 0 |
| 51 | 106169 | 1 | 0 |
| 52 | 106169 | 1 | 0 |
| 53 | 106169 | 1 | 0 |
| 54 | 106169 | 1 | 0 |
| 55 | 106169 | 1 | 0 |
| 56 | 106169 | 1 | 0 |
| 57 | 106169 | 1 | 0 |
| 58 | 106169 | 1 | 0 |
| 59 | 106169 | 1 | 0 |
| 60 | 106169 | 1 | 0 |
| 61 | 106169 | 1 | 0 |
| 62 | 106169 | 1 | 0 |
| 63 | 106169 | 1 | 0 |
| 64 | 106169 | 1 | 0 |
| 65 | 106169 | 1 | 0 |
| 66 | 106169 | 1 | 0 |
| 67 | 106169 | 1 | 0 |
| 68 | 106169 | 1 | 0 |
| 69 | 106169 | 1 | 0 |
| 70 | 106169 | 1 | 0 |
| 71 | 106169 | 1 | 0 |
| 72 | 106169 | 1 | 0 |
| 73 | 106169 | 1 | 0 |
| 74 | 106169 | 1 | 0 |
| 75 | 99862 | 3 | 2 |
| 76 | 99862 | 3 | 0 |
| 77 | 99862 | 3 | 0 |
| 78 | 99862 | 3 | 0 |
| 79 | 99862 | 3 | 0 |
| 80 | 99862 | 3 | 0 |
| 81 | 99862 | 3 | 0 |
| 82 | 99862 | 3 | 0 |
| 83 | 99862 | 3 | 0 |
| 84 | 99862 | 3 | 0 |
| 85 | 99862 | 3 | 0 |
| 86 | 99862 | 3 | 0 |
| 87 | 99862 | 3 | 0 |
| 88 | 99862 | 3 | 0 |
| 89 | 99862 | 3 | 0 |
| 90 | 99862 | 3 | 0 |
| 91 | 99862 | 3 | 0 |
| 92 | 99862 | 3 | 0 |
| 93 | 99862 | 3 | 0 |
| 94 | 99862 | 3 | 0 |
| 95 | 99862 | 3 | 0 |
| 96 | 99862 | 3 | 0 |
| 97 | 99862 | 3 | 0 |
| 98 | 99862 | 3 | 0 |
| 99 | 99862 | 3 | 0 |
| 100 | 99862 | 3 | 0 |
| 101 | 99862 | 3 | 0 |
| 102 | 99862 | 3 | 0 |
| 103 | 99862 | 3 | 0 |
| 104 | 99862 | 3 | 0 |
| 105 | 99862 | 3 | 0 |
| 106 | 99862 | 3 | 0 |
| 107 | 99862 | 3 | 0 |
| 108 | 94127 | 0 | -3 |
| 109 | 94127 | 0 | 0 |
| 110 | 94127 | 0 | 0 |
| 111 | 94127 | 0 | 0 |
| 112 | 94127 | 0 | 0 |
| 113 | 94127 | 0 | 0 |
| 114 | 94127 | 0 | 0 |
| 115 | 94127 | 0 | 0 |
| 116 | 94127 | 0 | 0 |
| 117 | 94127 | 0 | 0 |
| 118 | 94127 | 0 | 0 |
| 119 | 94127 | 0 | 0 |
| 120 | 94127 | 0 | 0 |
| 121 | 94127 | 0 | 0 |
| 122 | 94127 | 0 | 0 |
| 123 | 94127 | 0 | 0 |
| 124 | 94127 | 0 | 0 |
| 125 | 94127 | 0 | 0 |
| 126 | 94127 | 0 | 0 |
| 127 | 94127 | 0 | 0 |
| 128 | 94127 | 0 | 0 |
| 129 | 94127 | 0 | 0 |
| 130 | 94127 | 0 | 0 |
| 131 | 94127 | 0 | 0 |
| 132 | 94127 | 0 | 0 |
| 133 | 94127 | 0 | 0 |
| 134 | 94127 | 0 | 0 |
| 135 | 94127 | 0 | 0 |
| 136 | 94127 | 0 | 0 |
| 137 | 94127 | 0 | 0 |
| 138 | 94127 | 0 | 0 |
| 139 | 94127 | 0 | 0 |
| 140 | 94127 | 0 | 0 |
| 141 | 94127 | 0 | 0 |
| 142 | 94127 | 0 | 0 |
| 143 | 94127 | 0 | 0 |
| 144 | 94127 | 0 | 0 |
| 145 | 94127 | 0 | 0 |
| 146 | 94127 | 0 | 0 |
| 147 | 94127 | 0 | 0 |
| 148 | 94127 | 0 | 0 |
| 149 | 94127 | 0 | 0 |
| 150 | 94127 | 0 | 0 |
| 151 | 94127 | 0 | 0 |
| 152 | 94127 | 0 | 0 |
| 153 | 94127 | 0 | 0 |
| 154 | 94127 | 0 | 0 |
| 155 | 94127 | 0 | 0 |
| 156 | 94127 | 0 | 0 |
| 157 | 94127 | 0 | 0 |
| 158 | 94127 | 0 | 0 |
| 159 | 94127 | 0 | 0 |
| 160 | 94127 | 0 | 0 |
| 161 | 94127 | 0 | 0 |
| 162 | 94127 | 0 | 0 |
| 163 | 94127 | 0 | 0 |
| 164 | 94127 | 0 | 0 |
| 165 | 94127 | 0 | 0 |
| 166 | 94127 | 0 | 0 |
| 167 | 94127 | 0 | 0 |
| 168 | 94127 | 0 | 0 |
| 169 | 94127 | 0 | 0 |
| 170 | 94127 | 0 | 0 |
| 171 | 94127 | 0 | 0 |
| 172 | 94127 | 0 | 0 |
| 173 | 94127 | 0 | 0 |
| 174 | 94127 | 0 | 0 |
| 175 | 94127 | 0 | 0 |
| 176 | 94127 | 0 | 0 |
| 177 | 94127 | 0 | 0 |
| 178 | 94127 | 0 | 0 |
| 179 | 94127 | 0 | 0 |
| 180 | 94127 | 0 | 0 |
| 181 | 94127 | 0 | 0 |
| 182 | 94127 | 0 | 0 |
| 183 | 94127 | 0 | 0 |
| 184 | 94127 | 0 | 0 |
| 185 | 94127 | 0 | 0 |
| 186 | 94127 | 0 | 0 |
| 187 | 94127 | 0 | 0 |
| 188 | 94127 | 0 | 0 |
| 189 | 94127 | 0 | 0 |
| 190 | 94127 | 0 | 0 |
| 191 | 94127 | 0 | 0 |
| 192 | 94127 | 0 | 0 |
| 193 | 94127 | 0 | 0 |
| 194 | 94127 | 0 | 0 |
| 195 | 94127 | 0 | 0 |
| 196 | 94127 | 0 | 0 |
| 197 | 94127 | 0 | 0 |
| 198 | 94127 | 0 | 0 |
| 199 | 94127 | 0 | 0 |
| 200 | 94127 | 0 | 0 |
+-----+-----------------+-------------+------------+
When analyzing the results, I want to count the number of increases in the model. I would like to use df.where(data.hmm_diff > 0).count() to read off how many times the signal increments by one state. However, an increase sometimes spans two states (i.e. it skips the middle state), so I need to reassign the HMM-state labels by sorting the idealized values, so that the lowest state becomes 0, the highest becomes 4, and so on. Is there a way to reassign the hmm_state label from an arbitrary one to one that depends on the idealized intensity?
For example, the hmm_state labeled "1" lies between hmm_states 3 and 4.
It looks like you just need to define a sorted HMM state like this:
state_orders = {v: i for i, v in enumerate(sorted(df.hmm_idealized.unique()))}
df['sorted_state'] = df.hmm_idealized.map(state_orders)
Then you can continue as you did in the question, but taking a diff on this column, and counting the jumps on it.
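Once the relabelled column exists, counting the jumps could look like this (a small sketch that reuses the sorted_state column from above):
jumps = df['sorted_state'].diff()
n_increases = (jumps > 0).sum()      # any upward transition, including ones that skip a state
n_single_steps = (jumps == 1).sum()  # transitions that go up by exactly one state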

pandas column shift with day 0 value as 0

I've got a pivoted pandas DataFrame with columns customer_name, current_date and current_day_count:
+----------+--------------+-------------------+
| customer | current_date | current_day_count |
+----------+--------------+-------------------+
| Mark | 2018_02_06 | 15 |
| | 2018_02_09 | 42 |
| | 2018_02_12 | 33 |
| | 2018_02_21 | 82 |
| | 2018_02_27 | 72 |
| Bob | 2018_02_02 | 76 |
| | 2018_02_23 | 11 |
| | 2018_03_04 | 59 |
| | 2018_03_13 | 68 |
| Shawn | 2018_02_11 | 71 |
| | 2018_02_15 | 39 |
| | 2018_02_18 | 65 |
| | 2018_02_24 | 38 |
+----------+--------------+-------------------+
Now, I want another new column with previous_day_count for each customer, where the value on a customer's first day should be 0. Something like this: customer, current_date, current_day_count, previous_day_count (with the first day's value as 0):
+----------+--------------+-------------------+--------------------+
| customer | current_date | current_day_count | previous_day_count |
+----------+--------------+-------------------+--------------------+
| Mark | 2018_02_06 | 15 | 0 |
| | 2018_02_09 | 42 | 33 |
| | 2018_02_12 | 33 | 82 |
| | 2018_02_21 | 82 | 72 |
| | 2018_02_27 | 72 | 0 |
| Bob | 2018_02_02 | 76 | 0 |
| | 2018_02_23 | 11 | 59 |
| | 2018_03_04 | 59 | 68 |
| | 2018_03_13 | 68 | 0 |
| Shawn | 2018_02_11 | 71 | 0 |
| | 2018_02_15 | 39 | 65 |
| | 2018_02_18 | 65 | 38 |
| | 2018_02_24 | 38 | 0 |
+----------+--------------+-------------------+--------------------+
Try this:
import pandas as pd
import numpy as np

df = pd.DataFrame({'name': ['Mark', 'Mark', 'Mark', 'Mark', 'Bob', 'Bob', 'Bob', 'Bob'],
                   'current_day_count': [18, 28, 29, 10, 19, 92, 7, 43]})
# take the following row's value within each customer
df['previous_day_count'] = df.groupby('name')['current_day_count'].shift(-1)
# blank out the first row of each customer, then turn the NaNs into 0
df.loc[df.groupby('name', as_index=False).head(1).index, 'previous_day_count'] = np.nan
df['previous_day_count'].fillna(0, inplace=True)
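The same result can also be written without the explicit loc assignment, e.g. (a sketch that assumes the frame is already ordered the way the expected output shows):
df['previous_day_count'] = (
    df.groupby('name')['current_day_count']
      .shift(-1)                                    # value from the following row of the same customer
      .where(df.groupby('name').cumcount() > 0, 0)  # force each customer's first row to 0
      .fillna(0)                                    # the last row of each customer also becomes 0
)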

Numpy version of finding the highest and lowest value locations within an interval of another column?

Given the following numpy array, how can I find the locations of the highest and lowest values of column 0 within each interval marked in column 1, using numpy?
import numpy as np
data = np.array([
[1879.289,np.nan],[1879.281,np.nan],[1879.292,1],[1879.295,1],[1879.481,1],[1879.294,1],[1879.268,1],
[1879.293,1],[1879.277,1],[1879.285,1],[1879.464,1],[1879.475,1],[1879.971,1],[1879.779,1],
[1879.986,1],[1880.791,1],[1880.29,1],[1879.253,np.nan],[1878.268,np.nan],[1875.73,1],[1876.792,1],
[1875.977,1],[1876.408,1],[1877.159,1],[1877.187,1],[1883.164,1],[1883.171,1],[1883.495,1],
[1883.962,1],[1885.158,1],[1885.974,1],[1886.479,np.nan],[1885.969,np.nan],[1884.693,1],[1884.977,1],
[1884.967,1],[1884.691,1],[1886.171,1],[1886.166,np.nan],[1884.476,np.nan],[1884.66,1],[1882.962,1],
[1881.496,1],[1871.163,1],[1874.985,1],[1874.979,1],[1871.173,np.nan],[1871.973,np.nan],[1871.682,np.nan],
[1872.476,np.nan],[1882.361,1],[1880.869,1],[1882.165,1],[1881.857,1],[1880.375,1],[1880.66,1],
[1880.891,1],[1880.377,1],[1881.663,1],[1881.66,1],[1877.888,1],[1875.69,1],[1875.161,1],
[1876.697,np.nan],[1876.671,np.nan],[1879.666,np.nan],[1877.182,np.nan],[1878.898,1],[1878.668,1],[1878.871,1],
[1878.882,1],[1879.173,1],[1878.887,1],[1878.68,1],[1878.872,1],[1878.677,1],[1877.877,1],
[1877.669,1],[1877.69,1],[1877.684,1],[1877.68,1],[1877.885,1],[1877.863,1],[1877.674,1],
[1877.676,1],[1877.687,1],[1878.367,1],[1878.179,1],[1877.696,1],[1877.665,1],[1877.667,np.nan],
[1878.678,np.nan],[1878.661,1],[1878.171,1],[1877.371,1],[1877.359,1],[1878.381,1],[1875.185,1],
[1875.367,np.nan],[1865.492,np.nan],[1865.495,1],[1866.995,1],[1866.672,1],[1867.465,1],[1867.663,1],
[1867.186,1],[1867.687,1],[1867.459,1],[1867.168,1],[1869.689,1],[1869.693,1],[1871.676,1],
[1873.174,1],[1873.691,np.nan],[1873.685,np.nan]
])
In the Min/Max column below you can see where the max and min are for each interval.
+-------+----------+-----------+---------+
| index | Value | Intervals | Min/Max |
+-------+----------+-----------+---------+
| 0 | 1879.289 | np.nan | |
| 1 | 1879.281 | np.nan | |
| 2 | 1879.292 | 1 | |
| 3 | 1879.295 | 1 | |
| 4 | 1879.481 | 1 | |
| 5 | 1879.294 | 1 | |
| 6 | 1879.268 | 1 | -1 | min
| 7 | 1879.293 | 1 | |
| 8 | 1879.277 | 1 | |
| 9 | 1879.285 | 1 | |
| 10 | 1879.464 | 1 | |
| 11 | 1879.475 | 1 | |
| 12 | 1879.971 | 1 | |
| 13 | 1879.779 | 1 | |
| 17 | 1879.986 | 1 | |
| 18 | 1880.791 | 1 | 1 | max
| 19 | 1880.29 | 1 | |
| 55 | 1879.253 | np.nan | |
| 56 | 1878.268 | np.nan | |
| 57 | 1875.73 | 1 | -1 |min
| 58 | 1876.792 | 1 | |
| 59 | 1875.977 | 1 | |
| 60 | 1876.408 | 1 | |
| 61 | 1877.159 | 1 | |
| 62 | 1877.187 | 1 | |
| 63 | 1883.164 | 1 | |
| 64 | 1883.171 | 1 | |
| 65 | 1883.495 | 1 | |
| 66 | 1883.962 | 1 | |
| 67 | 1885.158 | 1 | |
| 68 | 1885.974 | 1 | 1 | max
| 69 | 1886.479 | np.nan | |
| 70 | 1885.969 | np.nan | |
| 71 | 1884.693 | 1 | |
| 72 | 1884.977 | 1 | |
| 73 | 1884.967 | 1 | |
| 74 | 1884.691 | 1 | -1 | min
| 75 | 1886.171 | 1 | 1 | max
| 76 | 1886.166 | np.nan | |
| 77 | 1884.476 | np.nan | |
| 78 | 1884.66 | 1 | 1 | max
| 79 | 1882.962 | 1 | |
| 80 | 1881.496 | 1 | |
| 81 | 1871.163 | 1 | -1 | min
| 82 | 1874.985 | 1 | |
| 83 | 1874.979 | 1 | |
| 84 | 1871.173 | np.nan | |
| 85 | 1871.973 | np.nan | |
| 86 | 1871.682 | np.nan | |
| 87 | 1872.476 | np.nan | |
| 88 | 1882.361 | 1 | 1 | max
| 89 | 1880.869 | 1 | |
| 90 | 1882.165 | 1 | |
| 91 | 1881.857 | 1 | |
| 92 | 1880.375 | 1 | |
| 93 | 1880.66 | 1 | |
| 94 | 1880.891 | 1 | |
| 95 | 1880.377 | 1 | |
| 96 | 1881.663 | 1 | |
| 97 | 1881.66 | 1 | |
| 98 | 1877.888 | 1 | |
| 99 | 1875.69 | 1 | |
| 100 | 1875.161 | 1 | -1 | min
| 101 | 1876.697 | np.nan | |
| 102 | 1876.671 | np.nan | |
| 103 | 1879.666 | np.nan | |
| 111 | 1877.182 | np.nan | |
| 112 | 1878.898 | 1 | |
| 113 | 1878.668 | 1 | |
| 114 | 1878.871 | 1 | |
| 115 | 1878.882 | 1 | |
| 116 | 1879.173 | 1 | 1 | max
| 117 | 1878.887 | 1 | |
| 118 | 1878.68 | 1 | |
| 119 | 1878.872 | 1 | |
| 120 | 1878.677 | 1 | |
| 121 | 1877.877 | 1 | |
| 122 | 1877.669 | 1 | |
| 123 | 1877.69 | 1 | |
| 124 | 1877.684 | 1 | |
| 125 | 1877.68 | 1 | |
| 126 | 1877.885 | 1 | |
| 127 | 1877.863 | 1 | |
| 128 | 1877.674 | 1 | |
| 129 | 1877.676 | 1 | |
| 130 | 1877.687 | 1 | |
| 131 | 1878.367 | 1 | |
| 132 | 1878.179 | 1 | |
| 133 | 1877.696 | 1 | |
| 134 | 1877.665 | 1 | -1 | min
| 135 | 1877.667 | np.nan | |
| 136 | 1878.678 | np.nan | |
| 137 | 1878.661 | 1 | 1 | max
| 138 | 1878.171 | 1 | |
| 139 | 1877.371 | 1 | |
| 140 | 1877.359 | 1 | |
| 141 | 1878.381 | 1 | |
| 142 | 1875.185 | 1 | -1 | min
| 143 | 1875.367 | np.nan | |
| 144 | 1865.492 | np.nan | |
| 145 | 1865.495 | 1 | -1 | min
| 146 | 1866.995 | 1 | |
| 147 | 1866.672 | 1 | |
| 148 | 1867.465 | 1 | |
| 149 | 1867.663 | 1 | |
| 150 | 1867.186 | 1 | |
| 151 | 1867.687 | 1 | |
| 152 | 1867.459 | 1 | |
| 153 | 1867.168 | 1 | |
| 154 | 1869.689 | 1 | |
| 155 | 1869.693 | 1 | |
| 156 | 1871.676 | 1 | |
| 157 | 1873.174 | 1 | 1 | max
| 158 | 1873.691 | np.nan | |
| 159 | 1873.685 | np.nan | |
+-------+----------+-----------+---------+
I must specify upfront that this question has already been answered here with a pandas solution. The solution performs reasonably, at about 300 seconds for a table of around 1 million rows. But after some more testing, I see that if the table is over 3 million rows, the execution time increases dramatically, to over 2500 seconds or more. This is obviously too long for such a simple task. How would the same problem be solved with numpy?
Here's one NumPy approach -
# mark the rows that belong to an interval (non-NaN in column 1)
mask = ~np.isnan(data[:, 1])
# start and end indices of each run of interval rows
s0 = np.flatnonzero(mask[1:] > mask[:-1]) + 1
s1 = np.flatnonzero(mask[1:] < mask[:-1]) + 1
lens = s1 - s0
# a group id for every interval row, then sort the values within each group
tags = np.repeat(np.arange(len(lens)), lens)
idx = np.lexsort((data[mask, 0], tags))
starts = np.r_[0, lens.cumsum()]
# cumulative offsets map positions in the compressed (masked) array back to original row numbers
offsets = np.r_[s0[0], s0[1:] - s1[:-1]]
offsets_cumsum = offsets.cumsum()
min_ids = idx[starts[:-1]] + offsets_cumsum
max_ids = idx[starts[1:] - 1] + offsets_cumsum
out = np.full(data.shape[0], np.nan)
out[min_ids] = -1
out[max_ids] = 1
So this is a bit of a cheat since it uses scipy:
import numpy as np
from scipy import ndimage

# label the stretches between the NaN markers, then let ndimage locate the extrema per label
# (newer SciPy also exposes this function directly as ndimage.extrema)
markers = np.isnan(data[:, 1])
groups = np.cumsum(markers)
mins, maxs, min_idx, max_idx = ndimage.measurements.extrema(
    data[:, 0], labels=groups, index=range(2, groups.max(), 2))
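To turn those per-interval positions into a ±1 column like the one built in the first answer, something along these lines should work (a sketch; it assumes extrema reports the positions as coordinate tuples, which is how scipy.ndimage returns them when index is a sequence):
out = np.full(data.shape[0], np.nan)
out[np.asarray(min_idx).ravel()] = -1  # mark the minimum of each selected label
out[np.asarray(max_idx).ravel()] = 1   # mark the maximum of each selected label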
