Pandas Dataframe - Adding Else?

I want to generate Test Data for my Bayesian Network.
This is my current Code:
data = np.random.randint(2, size=(5, 6))
columns = ['p_1', 'p_2', 'OP1', 'OP2', 'OP3', 'OP4']
df = pd.DataFrame(data=data, columns=columns)
df.loc[(df['p_1'] == 1) & (df['p_2'] == 1), 'OP1'] = 1
df.loc[(df['p_1'] == 1) & (df['p_2'] == 0), 'OP2'] = 1
df.loc[(df['p_1'] == 0) & (df['p_2'] == 1), 'OP3'] = 1
df.loc[(df['p_1'] == 0) & (df['p_2'] == 0), 'OP4'] = 1
print(df)
So whenever, for example, p_1 is 1 and p_2 is 1, OP1 should be 1 as well, and all the other OP values in that row should be 0.
When p_1 is 1 and p_2 is 0, OP2 should be 1 and all the others 0, and so on.
But my current output is the following:
   p_1  p_2  OP1  OP2  OP3  OP4
0    0    0    0    0    0    1
1    1    0    1    1    1    1
2    0    0    1    1    0    1
3    0    1    1    1    1    1
4    1    0    0    1    1    0
Is there any way to fix it? What did I do wrong?
I didn't really understand the solutions to other people's questions, so I thought I'd ask here.
I hope that someone can help me.

The problem is that when you instantiate df, the "OP" columns already have some values:
data = np.random.randint(2, size=(5, 6))
columns = ['p_1', 'p_2', 'OP1', 'OP2', 'OP3', 'OP4']
df = pd.DataFrame(data=data, columns=columns)
df
p_1 p_2 OP1 OP2 OP3 OP4
0 1 1 0 1 0 0
1 0 0 1 1 0 1
2 0 1 1 1 0 0
3 1 1 1 1 0 1
4 0 1 1 0 1 0
One way of fixing it while keeping your code is to force all "OP" columns to 0 first:
df["OP1"] = df["OP2"] = df["OP3"] = df["OP4"] = 0
But then you are generating too many random numbers. I'd do this instead:
data = np.random.randint(2, size=(5, 2))
columns = ['p_1', 'p_2']
df = pd.DataFrame(data=data, columns=columns)
df["OP1"] = ((df['p_1'] == 1) & (df['p_2'] == 1)).astype(int)
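Spelled out for every column under the question's mapping, the same idea looks like this (a minimal sketch; only the two parent columns are drawn at random, and each OP column is derived from them):
import numpy as np
import pandas as pd

data = np.random.randint(2, size=(5, 2))
df = pd.DataFrame(data=data, columns=['p_1', 'p_2'])
# Each OP column is 1 exactly when its (p_1, p_2) pattern occurs
df["OP1"] = ((df['p_1'] == 1) & (df['p_2'] == 1)).astype(int)
df["OP2"] = ((df['p_1'] == 1) & (df['p_2'] == 0)).astype(int)
df["OP3"] = ((df['p_1'] == 0) & (df['p_2'] == 1)).astype(int)
df["OP4"] = ((df['p_1'] == 0) & (df['p_2'] == 0)).astype(int)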

You can define tuples for the tests and create the new columns by casting the boolean mask to integers, which maps True/False to 1/0:
vals = [(1,1),(1,0),(0,1),(0,0)]
for i, (a, b) in enumerate(vals, 1):
    df[f'OP{i}'] = ((df['p_1'] == a) & (df['p_2'] == b)).astype(int)
print(df)
p_1 p_2 OP1 OP2 OP3 OP4
0 0 0 0 0 0 1
1 0 1 0 0 1 0
2 0 1 0 0 1 0
3 0 1 0 0 1 0
4 1 0 0 1 0 0
For your original solution, set the OP columns to 0 first, because the randomly generated DataFrame already contains 1 values:
cols = ['OP1', 'OP2', 'OP3', 'OP4']
df[cols] = 0
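Putting it together, the corrected version of the question's code would be:
cols = ['OP1', 'OP2', 'OP3', 'OP4']
df[cols] = 0
df.loc[(df['p_1'] == 1) & (df['p_2'] == 1), 'OP1'] = 1
df.loc[(df['p_1'] == 1) & (df['p_2'] == 0), 'OP2'] = 1
df.loc[(df['p_1'] == 0) & (df['p_2'] == 1), 'OP3'] = 1
df.loc[(df['p_1'] == 0) & (df['p_2'] == 0), 'OP4'] = 1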


How to create new column based off values from existing columns in pandas

I have a dataframe with 171 rows and 11 columns.
The 11 columns contain values that are either 0 or 1.
How can I create a new column that will be either 0 or 1, depending on whether the existing columns have a majority of 0s or 1s?
You could do:
(df.sum(axis=1)>df.shape[1]/2)+0
import numpy as np
import pandas as pd
X = np.asarray([(0, 0, 0),
                (0, 0, 1),
                (0, 1, 1),
                (1, 1, 1)])
df = pd.DataFrame(X)
df['majority'] = (df.mean(axis=1) > 0.5) + 0
df
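For the four sample rows this gives majority values 0, 0, 1 and 1, since the row means are 0, 1/3, 2/3 and 1 respectively.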
Take the mean of each row and compare it against 0.5, using DataFrame.gt (greater than) or DataFrame.ge (greater than or equal); the choice only matters when a row contains the same number of 0s and 1s. Then convert the boolean mask to integers with Series.astype:
np.random.seed(20193)
df = pd.DataFrame(np.random.choice([0,1], size=(5, 4)))
df['new'] = df.mean(axis=1).gt(0.5).astype(int)
print(df)
0 1 2 3 new
0 1 1 0 0 0
1 1 1 1 0 1
2 0 0 1 0 0
3 1 1 0 1 1
4 1 1 1 1 1
np.random.seed(20193)
df = pd.DataFrame(np.random.choice([0,1], size=(5, 4)))
df['new'] = df.mean(axis=1).ge(0.5).astype(int)
print(df)
0 1 2 3 new
0 1 1 0 0 1
1 1 1 1 0 1
2 0 0 1 0 0
3 1 1 0 1 1
4 1 1 1 1 1
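Note row 0, which contains two 0s and two 1s (a mean of exactly 0.5): gt marks it 0 while ge marks it 1.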

Pandas: Count values on a row basis

I have a numeric DataFrame, for example:
x = np.array([[1,2,3],[-1,-1,1],[0,0,0]])
df = pd.DataFrame(x, columns=['A','B','C'])
df
A B C
0 1 2 3
1 -1 -1 1
2 0 0 0
And I want to count, for each row, the number of positive values, negative values and values equal to 0. I've been trying the following:
df['positive_count'] = df.apply(lambda row: (row > 0).sum(), axis = 1)
df['negative_count'] = df.apply(lambda row: (row < 0).sum(), axis = 1)
df['zero_count'] = df.apply(lambda row: (row == 0).sum(), axis = 1)
But I'm getting the following result, which is obviously incorrect:
A B C positive_count negative_count zero_count
0 1 2 3 3 0 1
1 -1 -1 1 1 2 0
2 0 0 0 0 0 5
Does anyone know what might be going wrong, or could help me find the best way to do what I'm looking for?
Thank you.
There are a few ways. First, note what goes wrong above: each apply call also counts the columns added by the previous calls, which is why row 2 ends up with five zeros. One option is using np.sign and get_dummies:
u = (pd.get_dummies(np.sign(df.stack()))
       .sum(level=0)
       .rename({-1: 'negative_count', 1: 'positive_count', 0: 'zero_count'}, axis=1))
u
negative_count zero_count positive_count
0 0 0 3
1 2 0 1
2 0 3 0
df = pd.concat([df, u], axis=1)
df
A B C negative_count zero_count positive_count
0 1 2 3 0 0 3
1 -1 -1 1 2 0 1
2 0 0 0 0 3 0
np.sign treats zero differently from positive and negative values, so it is ideal to use here.
Another option is groupby and value_counts:
(np.sign(df)
    .stack()
    .groupby(level=0)
    .value_counts()
    .unstack(1, fill_value=0)
    .rename({-1: 'negative_count', 1: 'positive_count', 0: 'zero_count'}, axis=1))
negative_count zero_count positive_count
0 0 0 3
1 2 0 1
2 0 3 0
Slightly more verbose but still worth knowing about.
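If you only need the three counts, plain vectorised comparisons restricted to the original columns avoid both apply and the column-accumulation problem entirely (a minimal sketch, assuming the value columns are A, B and C):
cols = ['A', 'B', 'C']
df['positive_count'] = (df[cols] > 0).sum(axis=1)
df['negative_count'] = (df[cols] < 0).sum(axis=1)
df['zero_count'] = (df[cols] == 0).sum(axis=1)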

Compare two columns using pandas 2

I'm comparing two columns in a dataframe (A & B). I have a method that works (C5). It came from this question:
Compare two columns using pandas
I wondered why I couldn't get the other methods (C1 - C4) to give the correct answer:
df = pd.DataFrame({'A': [1,1,1,1,1,2,2,2,2,2],
                   'B': [1,1,1,1,1,1,0,0,0,0]})
#df['C1'] = 1 [df['A'] == df['B']]
df['C2'] = df['A'].equals(df['B'])
df['C3'] = np.where((df['A'] == df['B']),0,1)
def fun(row):
    if ['A'] == ['B']:
        return 1
    else:
        return 0
df['C4'] = df.apply(fun, axis=1)
df['C5'] = df.apply(lambda x : 1 if x['A'] == x['B'] else 0, axis=1)
Use:
df = pd.DataFrame({'A': [1,1,1,1,1,2,2,2,2,2],
                   'B': [1,1,1,1,1,1,0,0,0,0]})
For C1 and C2 you need to compare the columns element-wise with == or eq to get a boolean mask, then convert it to integers (True/False to 1/0). Series.equals, as used in the original C2, returns a single boolean for the whole Series rather than a per-row result:
df['C1'] = (df['A'] == df['B']).astype(int)
df['C2'] = df['A'].eq(df['B']).astype(int)
For C3 the order of the np.where outputs has to be swapped to 1,0, because a match should produce 1:
df['C3'] = np.where((df['A'] == df['B']),1,0)
In the function, the values were never selected from the row Series; row was missing:
def fun(row):
    if row['A'] == row['B']:
        return 1
    else:
        return 0
df['C4'] = df.apply(fun, axis=1)
The C5 solution is correct as is:
df['C5'] = df.apply(lambda x : 1 if x['A'] == x['B'] else 0, axis=1)
print (df)
A B C1 C2 C3 C4 C5
0 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1
5 2 1 0 0 0 0 0
6 2 0 0 0 0 0 0
7 2 0 0 0 0 0 0
8 2 0 0 0 0 0 0
9 2 0 0 0 0 0 0
IIUC you need this:
def fun(row):
    if row['A'] == row['B']:
        return 1
    else:
        return 0

Pandas: Adding zero values where no rows exist (sparse)

I have a Pandas DataFrame with a MultiIndex. The MultiIndex has values in the range (0,0) to (1000,1000), and the column has two fields p and q.
However, the DataFrame is sparse: if there was no measurement corresponding to a particular index, say (3,2), there won't be any row for it. I'd like to make it dense by filling in these rows with p=0 and q=0. Continuing the example, if I do df.loc[3].loc[2], I want it to return p=0 q=0 rather than raising a KeyError (as it currently does).
Clarification: By "sparse", I mean it only in the sense I used it, that zero values are omitted. I'm not referring to anything in Pandas or Numpy internals.
Consider this df
data = {
    (1, 0): dict(p=1, q=1),
    (3, 2): dict(p=1, q=1),
    (5, 4): dict(p=1, q=1),
    (7, 6): dict(p=1, q=1),
}
df = pd.DataFrame(data).T
df
p q
1 0 1 1
3 2 1 1
5 4 1 1
7 6 1 1
Use reindex with fill_value=0 and a pd.MultiIndex.from_product constructed over the full range:
mux = pd.MultiIndex.from_product([range(8), range(8)])
df.reindex(mux, fill_value=0)
p q
0 0 0 0
1 0 0
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
7 0 0
1 0 1 1
1 0 0
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
7 0 0
2 0 0 0
1 0 0
2 0 0
3 0 0
In response to a comment: you can get the min and max of the index levels like this:
def mn_mx(idx):
    return idx.min(), idx.max()
mn0, mx0 = mn_mx(df.index.levels[0])
mn1, mx1 = mn_mx(df.index.levels[1])
mux = pd.MultiIndex.from_product([range(mn0, mx0 + 1), range(mn1, mx1 + 1)])
df.reindex(mux, fill_value=0)
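After reindexing, the lookup from the question behaves as desired (a quick check; note that reindex returns a new frame, so assign it back):
df = df.reindex(mux, fill_value=0)
print(df.loc[3].loc[2])  # Series with p=0, q=0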

classifying a series to a new column in pandas

I want to be able to take my current set of data, which is filled with ints, and classify them according to certain criteria. The table looks something like this:
[in]> df = pd.DataFrame({'A':[0,2,3,2,0,0],'B': [1,0,2,0,0,0],'C': [0,0,1,0,1,0]})
[out]>
A B C
0 0 1 0
1 2 0 0
2 3 2 1
3 2 0 0
4 0 0 1
5 0 0 0
I'd like to classify these in a separate column by string. Being more familiar with R, I tried to create a new column with the rules in that column's definition. Following that I attempted it with .ix and lambdas, which both resulted in type errors (between ints and Series). I'm under the impression that this is a fairly simple question. Although the following is completely wrong, here is the logic from attempt 1:
df['D'] = (
    if ((df['A'] > 0) & (df['B'] == 0) & df['C']==0):
        return "c1";
    elif ((df['A'] == 0) & ((df['B'] > 0) | df['C'] >0)):
        return "c2";
    else:
        return "c3";)
for a final result of:
A B C D
0 0 1 0 "c2"
1 2 0 0 "c1"
2 3 2 1 "c3"
3 2 0 0 "c1"
4 0 0 1 "c2"
5 0 0 0 "c3"
If someone could help me figure this out it would be much appreciated.
I can think of two ways. The first is to write a classifier function and then .apply it row-wise:
>>> import pandas as pd
>>> df = pd.DataFrame({'A':[0,2,3,2,0,0],'B': [1,0,2,0,0,0],'C': [0,0,1,0,1,0]})
>>>
>>> def classifier(row):
...     if row["A"] > 0 and row["B"] == 0 and row["C"] == 0:
...         return "c1"
...     elif row["A"] == 0 and (row["B"] > 0 or row["C"] > 0):
...         return "c2"
...     else:
...         return "c3"
...
>>> df["D"] = df.apply(classifier, axis=1)
>>> df
A B C D
0 0 1 0 c2
1 2 0 0 c1
2 3 2 1 c3
3 2 0 0 c1
4 0 0 1 c2
5 0 0 0 c3
and the second is to use boolean mask indexing with .loc:
>>> df = pd.DataFrame({'A':[0,2,3,2,0,0],'B': [1,0,2,0,0,0],'C': [0,0,1,0,1,0]})
>>> df["D"] = "c3"
>>> df.loc[(df["A"] > 0) & (df["B"] == 0) & (df["C"] == 0), "D"] = "c1"
>>> df.loc[(df["A"] == 0) & ((df["B"] > 0) | (df["C"] > 0)), "D"] = "c2"
>>> df
A B C D
0 0 1 0 c2
1 2 0 0 c1
2 3 2 1 c3
3 2 0 0 c1
4 0 0 1 c2
5 0 0 0 c3
Which one is clearer depends upon the situation. Usually the more complex the logic the more likely I am to wrap it up in a function I can then document and test.
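A third option worth knowing, not shown in the answers above: np.select takes a list of boolean masks and matching choices plus a default, which maps naturally onto if/elif/else chains. A minimal sketch with the same rules:
import numpy as np
conditions = [
    (df['A'] > 0) & (df['B'] == 0) & (df['C'] == 0),   # c1
    (df['A'] == 0) & ((df['B'] > 0) | (df['C'] > 0)),  # c2
]
df['D'] = np.select(conditions, ['c1', 'c2'], default='c3')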
