I have a dataframe with a single column containing 0s and 1s.
They are structured like [0, 0, 1, 1, 0, 0, 1, 1, 1].
My goal is to count only the first 1 of each run of consecutive 1s.
So for [0, 0, 1, 1, 0, 0, 1, 1, 1] the total count should be 2. How can I do this with a for loop and an if condition?
A simple loop (written as a list comprehension, assuming the values are in a list lst):
out = [0] + [int(j - i == 1) for i, j in zip(lst, lst[1:])]
Output:
[0, 0, 1, 0, 0, 0, 1, 0, 0]
Also, you can assign a pd.Series to a DataFrame column like:
df['col'] = (pd.Series(lst).diff() == 1).astype(int)
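A minimal end-to-end sketch of that idea (assuming the column is named col, as in the output further below):

import pandas as pd

df = pd.DataFrame({'col': [0, 0, 1, 1, 0, 0, 1, 1, 1]})
first_ones = (df['col'].diff() == 1).astype(int)  # 1 only at the first 1 of each run
print(first_ones.sum())  # 2
# note: if the series started with a 1, diff() gives NaN on the first row, so that
# first run would need special handling (e.g. fill the NaN with the first value first)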
Found a messy way to do it where I can create a translated list and count the sum.
def FirstValue(data):
    counter = []  # translated list: 1 marks the first 1 of each run, 0 otherwise
    for index, item in enumerate(data):
        if item == 1:
            if data[index - 1] == 1:  # note: at index 0 this looks at data[-1], the last element
                counter.append(0)
        if item == 1:
            if data[index - 1] == 0:
                counter.append(1)
            else:
                counter.append(0)
    return counter
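Hypothetical usage of the helper above (assuming it returns the translated list, as in the version shown):

>>> sum(FirstValue([0, 0, 1, 1, 0, 0, 1, 1, 1]))
2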
(As @Erfan mentioned in the comments:)
>>> df
col
0 0
1 0
2 1
3 1
4 0
5 0
6 1
7 1
8 1
>>> df['col'].diff().eq(1).sum()
2
>>> df['col'].diff().eq(1).astype(int)
0 0
1 0
2 1
3 0
4 0
5 0
6 1
7 0
8 0
Name: col, dtype: int64
Related
I have a pandas data frame that looks like:
Index Activity
0 0
1 0
2 1
3 1
4 1
5 0
...
1167 1
1168 0
1169 0
I want to count how many times it changes from 0 to 1 and when it changes from 1 to 0, but I do not want to count how many 1's or 0's there are.
For example, if I only wanted to count index 0 to 5, the count for 0 to 1 would be one.
How would I go about this? I have tried using some_value
This is a simple approach that can also tell you the index value when the change happens. Just add the index to a list.
c_1to0 = 0
c_0to1 = 0
for i in range(0, df.shape[0] - 1):
    if df.iloc[i]['Activity'] == 0 and df.iloc[i+1]['Activity'] == 1:
        c_0to1 += 1
    elif df.iloc[i]['Activity'] == 1 and df.iloc[i+1]['Activity'] == 0:
        c_1to0 += 1
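For comparison, a vectorized sketch of the same counts (assuming the column is named Activity as above; diff() flags the row where the new value appears rather than the row before it, but the totals match the loop):

changes = df['Activity'].diff()
c_0to1 = (changes == 1).sum()      # number of 0 -> 1 transitions
c_1to0 = (changes == -1).sum()     # number of 1 -> 0 transitions
idx_0to1 = df.index[changes == 1]  # indices at which a 0 -> 1 change lands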
Assuming a dataframe like this
In [5]: data = pd.DataFrame([[9,4],[5,4],[1,3],[26,7]])
In [6]: data
Out[6]:
0 1
0 9 4
1 5 4
2 1 3
3 26 7
I want to count how many times the values in a rolling window/slice of 2 on column 0 are greater or equal to the value in col 1 (4).
For the first 4 in col 1, a slice of 2 on column 0 yields 5 and 1, so the output would be 1, since 5 is greater than 4 but 1 is not. For the second 4, the next slice values on col 0 would be 1 and 26, so the output would again be 1, because only 26 is greater than 4. I can't use a rolling window, since iterating through rolling window values is not implemented.
I need something like a slice of the previous n rows and then I can iterate, compare and count how many times any of the values in that slice are above the current row.
I have done this using lists instead of working directly on the dataframe. Check the code below:
list1, list2 = df[0].values.tolist(), df[1].values.tolist()
outList = []
for ix in range(len(list1)):
    if ix < len(list1) - 2:
        if list2[ix] < list1[ix + 1] and list2[ix] < list1[ix + 2]:
            outList.append(2)
        elif list2[ix] < list1[ix + 1] or list2[ix] < list1[ix + 2]:
            outList.append(1)
        else:
            outList.append(0)
    else:
        outList.append(0)
df['2_rows_forward_moving_tag'] = pd.Series(outList)
Output:
0 1 2_rows_forward_moving_tag
0 9 4 1
1 5 4 1
2 1 3 0
3 26 7 0
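A shift-based sketch that avoids the explicit loop (assuming the example frame data defined in the question, its integer column labels 0 and 1, and the same strict greater-than comparison as the loop above):

look1 = data[0].shift(-1) > data[1]   # next row of col 0 vs current col 1
look2 = data[0].shift(-2) > data[1]   # row after next vs current col 1
count = (look1.astype(int) + look2.astype(int)).where(data[0].shift(-2).notna(), 0)
data['2_rows_forward_moving_tag'] = count.astype(int)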
I want to find the starting index and ending index of every chunk of data in the dataset.
The data is like:
index A wanted_column1 wanted_column2
2000/1/1 0 0
2000/1/2 1 2000/1/2 1
2000/1/3 1 1
2000/1/4 1 1
2000/1/5 0 0
2000/1/6 1 2000/1/6 2
2000/1/7 1 2
2000/1/8 1 2
2000/1/9 0 0
As shown in the data, index and A are the given columns and wanted_column1 and wanted_column2 are what I want to get.
The idea is that the data contains separate chunks of continuous 1s. I want to retrieve the starting index of every chunk and keep a running count of how many chunks there are.
I tried to use shift(-1), but with it I cannot tell a starting index apart from an ending index.
Is this what you need?
df['change'] = df['A'].diff().eq(1)                      # True on the first row of each chunk
df['wanted_column1'] = df[['index','change']].apply(lambda x: x[0] if x[1] else None, axis=1)
df['wanted_column2'] = df['change'].cumsum()             # running chunk counter
df['wanted_column2'] = df[['wanted_column2','A']].apply(lambda x: 0 if x[1]==0 else x[0], axis=1)
df.drop('change', axis=1, inplace=True)
That yields :
index A wanted_column1 wanted_column2
0 2000/1/1 0 None 0
1 2000/1/2 1 2000/1/2 1
2 2000/1/3 1 None 1
3 2000/1/4 1 None 1
4 2000/1/5 0 None 0
5 2000/1/6 1 2000/1/6 2
6 2000/1/7 1 None 2
7 2000/1/8 1 None 2
8 2000/1/9 0 None 0
Edit: performance comparison
gehbiszumeis's solution: 19.9 ms
my solution: 4.07 ms
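The two row-wise apply calls can also be avoided entirely; a sketch of the same logic (assuming the same df with its 'index' column of date strings):

change = df['A'].diff().eq(1)                                    # True on the first row of each chunk
df['wanted_column1'] = df['index'].where(change)                 # chunk start date, NaN elsewhere
df['wanted_column2'] = change.cumsum().where(df['A'].eq(1), 0)   # chunk number, 0 outside chunks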
Assuming your dataframe is df, you can find the indices where df['A'] != 1. The indices just before those are the last indices of a chunk, and the ones just after are the first indices of a chunk. Counting the found indices then gives the number of data chunks.
import pandas as pd
# Read your data
df = pd.read_csv('my_txt.txt', sep=',')
df['wanted_column1'] = None # creating already dummy columns
df['wanted_column2'] = None
# Find the index just after each index where 'A' is not 1, except if it is the last value
# of the dataframe
first = [x + 1 for x in df[df['A'] != 1].index.values if x != len(df)-1]
# Find the index just before each index where 'A' is not 1, except if it is the first value
# of the dataframe
last = [x - 1 for x in df[df['A'] != 1].index.values if x != 0]
# Set the first indices of each chunk at its corresponding position in your dataframe
df.loc[first, 'wanted_column1'] = df.loc[first, 'index']
# You can also set the last indices of each chunk (you only mentioned this in the text,
# not in your expected result listing). Uncomment for the last indices:
# df.loc[last, 'wanted_column1'] = df.loc[last, 'index']
# Count the number of chunks seen so far and fill it into wanted_column2
for i in df.index:
    df.loc[i, 'wanted_column2'] = sum(df.loc[:i, 'wanted_column1'].notna())
# Some polishing of the df after to match your expected result
df.loc[df['A'] != 1, 'wanted_column2'] = 0
This gives
index A wanted_column1 wanted_column2
0 2000/1/1 0 None 0
1 2000/1/2 1 2000/1/2 1
2 2000/1/3 1 None 1
3 2000/1/4 1 None 1
4 2000/1/5 0 None 0
5 2000/1/6 1 2000/1/6 2
6 2000/1/7 1 None 2
7 2000/1/8 1 None 2
8 2000/1/9 0 None 0
and works for all lengths of df and number of chunks in your data
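The counting loop above re-scans the frame once per row, which is quadratic in the length of df. A one-pass sketch of the same count, reusing the columns already created:

df['wanted_column2'] = df['wanted_column1'].notna().cumsum()
df.loc[df['A'] != 1, 'wanted_column2'] = 0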
I have the following:
df['PositionLong'] = 0
df['PositionLong'] = np.where(df['Alpha'] == 1, 1, (np.where(np.logical_and(df['PositionLong'].shift(1) == 1, df['Bravo'] == 1), 1, 0)))
This line basically only takes df['Alpha'] into account but not df['PositionLong'].shift(1). It does not pick it up, and I don't understand why.
It produces this:
df['Alpha'] df['Bravo'] df['PositionLong']
0 0 0
1 1 1
0 1 0
1 1 1
1 1 1
However what I wanted the code to do is this:
df['Alpha'] df['Bravo'] df['PositionLong']
0 0 0
1 1 1
0 1 1
1 1 1
1 1 1
I believe the solution is to loop over each row, but this would take very long.
Can you help me please?
You are looking for a recursive calculation, since each PositionLong value depends on the previous PositionLong value, which itself has to be computed first.
But numpy.where is a regular function, so df['PositionLong'].shift(1) is evaluated once, up front, as a series of 0 values, because you initialise the column with 0.
A manual loop need not be expensive. You can use numba to efficiently implement your recursive algorithm:
from numba import njit
import numpy as np

@njit
def rec_algo(alpha, bravo):
    res = np.empty(alpha.shape)
    res[0] = 1 if alpha[0] == 1 else 0
    for i in range(1, len(res)):
        if (alpha[i] == 1) or ((res[i-1] == 1) and bravo[i] == 1):
            res[i] = 1
        else:
            res[i] = 0
    return res

df['PositionLong'] = rec_algo(df['Alpha'].values, df['Bravo'].values).astype(int)
Result:
print(df)
Alpha Bravo PositionLong
0 0 0 0
1 1 1 1
2 0 1 1
3 1 1 1
4 1 1 1
I am iterating over a pandas dataframe and finding it to be extremely slow. I understand that in pandas you try to vectorize everything, but in this case I specifically need to iterate (or, if it is possible to vectorize, I'm unclear how to do it).
The logic is simple: you have two columns "A" and "B" and a result column "signal". If A equals 1, then you set signal to 1. If B equals 1, then you set signal to 0. Otherwise, signal keeps whatever value it had previously. In other words, column A is an "on" signal, column B is an "off" signal, and "signal" represents the state.
Here is my code:
def signals(indata):
    numrows = len(indata)
    data = pd.DataFrame(index=range(0, numrows))
    data['A'] = indata['A']
    data['B'] = indata['B']
    data['signal'] = 0
    for i in range(1, numrows):
        if data['A'].iloc[i] == 1:
            data['signal'].iloc[i] = 1
        elif data['B'].iloc[i] == 1:
            data['signal'].iloc[i] = 0
        else:
            data['signal'].iloc[i] = data['signal'].iloc[i-1]
    return data
Example input/output:
indata = pd.DataFrame(index = range(0,10))
indata['A'] = [0, 1, 0, 0, 0, 0, 1, 0, 0, 0]
indata['B'] = [1, 0, 0, 0, 1, 0, 0, 0, 1, 1]
signals(indata)
Output:
A B signal
0 0 1 0
1 1 0 1
2 0 0 1
3 0 0 1
4 0 1 0
5 0 0 0
6 1 0 1
7 0 0 1
8 0 1 0
9 0 1 0
This simple logic takes my computer 46 seconds to run on a dataframe of 2000 rows with randomly generated data.
df['signal'] = df.A.groupby((df.A != df.B).cumsum()).transform('first')
df
A B signal
0 0 1 0
1 1 0 1
2 0 0 1
3 0 0 1
4 0 1 0
5 0 0 0
6 1 0 1
7 0 0 1
8 0 1 0
9 0 1 0
The logic here involves dividing the series into groups wherever A and B differ (the cumulative sum of the inequality is the group key); within each group, every row takes the value of A on the group's first row.
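To see the grouping key on the example data (a quick sketch):

grp = (df.A != df.B).cumsum()
print(grp.tolist())
# [1, 2, 2, 2, 3, 3, 4, 4, 5, 6]
# a new group starts on every row where A and B differ, and the transform then
# broadcasts A's value from that first row across the whole group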
You don't need to iterate at all; you can use some Boolean indexing:
#set condition for A
indata.loc[indata.A == 1,'signal'] = 1
#set condition for B
indata.loc[indata.B == 1,'signal'] = 0
#forward fill NaN values
indata.signal.fillna(method='ffill',inplace=True)
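One caveat with this approach: rows before the first A or B hit stay NaN even after the forward fill. If that can happen in your data, a trailing fill keeps the column clean (a sketch):

indata['signal'] = indata['signal'].fillna(0).astype(int)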
The simplest answer to my problem was to not write to the dataframe while iterating through it. I created an array of zeros in numpy, then did my iterative logic in the array. Then I wrote the array to the column in my dataframe.
def signals3(indata):
    numrows = len(indata)
    data = pd.DataFrame(index=range(0, numrows))
    data['A'] = indata['A']
    data['B'] = indata['B']
    out_signal = np.zeros(numrows)
    for i in range(1, numrows):
        if data['A'].iloc[i] == 1:
            out_signal[i] = 1
        elif data['B'].iloc[i] == 1:
            out_signal[i] = 0
        else:
            out_signal[i] = out_signal[i-1]
    data['signal'] = out_signal
    return data
On a dataframe of 2000 rows of random data, this takes only 43 milliseconds as opposed to 46 seconds (~1,000x faster).
I also tried a variant where I assigned the dataframe columns A and B to series, and then iterated through the series. This was a bit faster (27 milliseconds). But it appears most of the slowness is in writing to a dataframe.
Both coldspeed's and djk's answers were faster than my solution (about 4.5 ms), but in practice I'll probably just iterate through series even though that is not optimal.