import pandas as pd
df = pd.DataFrame([[5, 6], [1.2, 3]])
ser = pd.Series([0, 0], name='r3')
df_app = df.append(ser)
print('{}\n'.format(df_app)) #has 3 rows
df_app = df.append(ser, ignore_index=True)
print('{}\n'.format(df_app)) #has 3 rows
df2 = pd.DataFrame([[0,0],[9,9]])
df_app = df.append(df2)
print(format(df_app)) # didn't understand this part: where did the series row go?
OUTPUT

      0  1
0   5.0  6
1   1.2  3
r3  0.0  0

     0  1
0  5.0  6
1  1.2  3
2  0.0  0

     0  1
0  5.0  6
1  1.2  3
0  0.0  0
1  9.0  9
I don't understand where the appended series went in the last append.
df has 2 rows, then the [0, 0] series is appended => 3 rows.
df2 also has 2 rows, so after appending there is a total of only 4 rows. Where did the series row go?
You have been appending your series to the df DataFrame, which stays the same every time: append returns a new DataFrame rather than modifying df in place. You are printing df_app, which is rebuilt from the original df on every call, so each append discards the rows added by the previous one. If you need to accumulate all the rows, append to df_app itself instead of to df.
You should change the code to the following:
df_app = df_app.append(ser)
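For completeness, a minimal sketch of the accumulating version (note that DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, so an equivalent pd.concat form is shown as well):

import pandas as pd

df = pd.DataFrame([[5, 6], [1.2, 3]])
ser = pd.Series([0, 0], name='r3')
df2 = pd.DataFrame([[0, 0], [9, 9]])

# keep reassigning to df_app so each append builds on the previous result
df_app = df.append(ser)       # rows: 0, 1, r3
df_app = df_app.append(df2)   # rows: 0, 1, r3, 0, 1
print(df_app)

# equivalent on modern pandas, where append has been removed
df_app = pd.concat([df, ser.to_frame().T, df2])
print(df_app)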
Imagine the following dataframe
Base dataframe
df = pd.DataFrame.from_dict({'a': [1, 2, 1, 2, 1],
                             'b': [1, 1, 3, 3, 1]})
And then I pick up the a column and replace a few values based on the b column values:
df.loc[df['b'] == 3]['a'].replace(2, 1)
How could I put my a column back into my original df, changing only those specific filtered values?
Wanted result
df = pd.DataFrame.from_dict({'a': [1, 2, 1, 1, 1],
                             'b': [1, 1, 3, 3, 1]})
Do it with update:
df.update(df.loc[df['b'] == 3, ['a']].replace(2, 1))
df
Out[354]:
a b
0 1.0 1
1 2.0 1
2 1.0 3
3 1.0 3
4 1.0 1
You can try df.mask:
df['a'] = df['a'].mask(df['a'].eq(2) & df['b'].eq(3), 1)
print(df)
a b
0 1 1
1 2 1
2 1 3
3 1 3
4 1 1
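An equivalent plain boolean-indexing assignment with .loc (just a sketch, not taken from the answers above) would be:

df.loc[(df['a'] == 2) & (df['b'] == 3), 'a'] = 1
print(df)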
Using Python and this data set https://raw.githubusercontent.com/yadatree/AL/main/AK4.csv I would like to create a new column for each subject that starts with 0 (in the first row) and then holds the difference between each row's SCALE value and the previous row's (row 2 minus row 1, row 3 minus row 2, row 4 minus row 3, etc.).
However, if this produces a negative value, the output should be 0.
Edit: Thank you for the response. That worked perfectly. The only remaining issue is that I'd like to start again with each subject (SUBJECT column). The number of values per subject is not fixed, so something that checks the SUBJECT column and then starts again from 0 would be ideal.
You can use .shift(1) to create a new column holding the value from the previous row; then both values are in the same row and you can subtract one column from the other.
Afterwards you can select all the negative results and assign zero.
import pandas as pd

data = {
    'A': [1, 3, 2, 5, 1],
}
df = pd.DataFrame(data)

df['previous'] = df['A'].shift(1)
df['result'] = df['A'] - df['previous']
print(df)

# df['result'] = df['A'] - df['A'].shift(1)
# print(df)

df.loc[df['result'] < 0, 'result'] = 0
print(df)
Result:

   A  previous  result
0  1       NaN     NaN
1  3       1.0     2.0
2  2       3.0    -1.0
3  5       2.0     3.0
4  1       5.0    -4.0

   A  previous  result
0  1       NaN     NaN
1  3       1.0     2.0
2  2       3.0     0.0
3  5       2.0     3.0
4  1       5.0     0.0
EDIT:
If you use df['result'] = df['A'] - df['A'].shift(1) then you get column result without creating column previous.
And if you use .shift(1, fill_value=0) then it will put 0 instead of NaN in the first row.
EDIT:
You can use groupby('SUBJECT') to group by subject and then put 0 in the first row of every group.
import pandas as pd

data = {
    'S': ['A', 'A', 'A', 'B', 'B', 'B'],
    'A': [1, 3, 2, 1, 5, 1],
}
df = pd.DataFrame(data)

df['result'] = df['A'] - df['A'].shift(1, fill_value=0)
print(df)

df.loc[df['result'] < 0, 'result'] = 0
print(df)

all_groups = df.groupby('S')
first_index = all_groups.apply(lambda grp: grp.index[0])
df.loc[first_index, 'result'] = 0
print(df)
Results:

   S  A  result
0  A  1       1
1  A  3       2
2  A  2      -1
3  B  1      -1
4  B  5       4
5  B  1      -4

   S  A  result
0  A  1       1
1  A  3       2
2  A  2       0
3  B  1       0
4  B  5       4
5  B  1       0

   S  A  result
0  A  1       0
1  A  3       2
2  A  2       0
3  B  1       0
4  B  5       4
5  B  1       0
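A more compact variant (just a sketch, assuming the SUBJECT and SCALE column names mentioned in the question) lets groupby().diff() handle the per-subject restart: the first row of each subject becomes NaN, negatives are clipped to 0, and the NaNs are then filled with 0.

import pandas as pd

df = pd.read_csv('https://raw.githubusercontent.com/yadatree/AL/main/AK4.csv')
df['result'] = (
    df.groupby('SUBJECT')['SCALE']
      .diff()          # row-to-row difference, restarting for each subject
      .clip(lower=0)   # negative differences become 0
      .fillna(0)       # first row of each subject starts at 0
)
print(df.head())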
I am calculating the difference of a dataframe's values at different lags.
Following dataframe is my input
df = pd.DataFrame([[1, 2], [3, 4],[5,6],[7,8]], columns=list('AB'))
To compute the difference between the last three rows and the first three rows, I am doing the following.
df2=df.iloc[1:,:]
df3=df.iloc[:-1,:]
df_out=pd.DataFrame(df2.values-df3.values,index=df2.index)
The calculation is as expected but I want to retain the index 0 with zeros in that row.
df_expected_out=pd.DataFrame([[0,0], [2,2],[2,2],[2,2]], columns=list('AB'))
Please suggest the way forward. Thanks for your time.
I think you need to reindex by the original index:
df_out=pd.DataFrame(df2.values-df3.values,index=df2.index).reindex(df.index, fill_value=0)
print (df_out)
0 1
0 0 0
1 2 2
2 2 2
3 2 2
Another solution:
df_out= df.diff().fillna(0).astype(int)
Or append a first row of zeros to the arrays:
a1 = np.zeros((1, len(df.columns)), dtype=int)
arr = np.append(a1, df2.values, axis=0) - np.append(a1, df3.values, axis=0)
df_out = pd.DataFrame(arr, index=df.index)
print (df_out)
0 1
0 0 0
1 2 2
2 2 2
3 2 2
You can use the shift function:
(df - df.shift()).fillna(0)
Out[9]:
A B
0 0.0 0.0
1 2.0 2.0
2 2.0 2.0
3 2.0 2.0
I have a long list of data in which the meaningful values are sandwiched between 0 values; here is how it looks:
0
0
1
0
0
2
3
1
0
0
0
0
1
0
The lengths of the zero runs and of the meaningful sequences are variable. I want to extract each meaningful sequence into its own row of a dataframe. For example, the above data can be extracted to this:
1
2 3 1
1
I used this code to 'slice' the meaningful data:
import pandas as pd
import numpy as np
raw = pd.read_csv('data.csv')
df = pd.DataFrame(index=np.arange(0, 10000),columns = ['DT01', 'DT02', 'DT03', 'DT04', 'DT05', 'DT06', 'DT07', 'DT08', 'DT02', 'DT09', 'DT10', 'DT11', 'DT12', 'DT13', 'DT14', 'DT15', 'DT16', 'DT17', 'DT18', 'DT19', 'DT20',])
a = 0
b = 0
n = 0
for n in range(0, 999999):
    if raw.iloc[n].values > 0:
        df.iloc[a, b] = raw.iloc[n].values
        a = a + 1
        if raw[n+1] == 0:
            b = b + 1
            a = 0
but I keep getting KeyError: n, where n is the row right after the first row that has a value different from 0.
Where is the problem with my code? And is there any way to improve it, in terms of speed and memory cost?
Thank you very much
You can use:
df['Group'] = df['col'].eq(0).cumsum()
df = df.loc[ df['col'] != 0]
df = df.groupby('Group')['col'].apply(list)
print (df)
Group
2 [1]
4 [2, 3, 1]
8 [1]
Name: col, dtype: object
df = pd.DataFrame(df.values.tolist())  # expand the lists of the grouped Series above into columns
print (df)
0 1 2
0 1 NaN NaN
1 2 3.0 1.0
2 1 NaN NaN
Let's try this; it outputs a dataframe:
df.groupby(df[0].eq(0).cumsum().mask(df[0].eq(0)),as_index=False)[0]\
.apply(lambda x: x.reset_index(drop=True)).unstack(1)
Output:
0 1 2
0 1.0 NaN NaN
1 2.0 3.0 1.0
2 1.0 NaN NaN
Or a string:
df.groupby(df[0].eq(0).cumsum().mask(df[0].eq(0)),as_index=False)[0]\
.apply(lambda x: ' '.join(x.astype(str)))
Output:
0        1
1    2 3 1
2        1
dtype: object
Or as a list:
df.groupby(df[0].eq(0).cumsum().mask(df[0].eq(0)),as_index=False)[0]\
.apply(list)
Output:
0 [1]
1 [2, 3, 1]
2 [1]
dtype: object
Try this, I'll break down the steps:
df.LIST=df.LIST.replace({0:np.nan})
df['Group']=df.LIST.isnull().cumsum()
df=df.dropna()
df.groupby('Group').LIST.apply(list)
Out[384]:
Group
2 [1]
4 [2, 3, 1]
8 [1]
Name: LIST, dtype: object
Data Input
df = pd.DataFrame({'LIST' : [0,0,1,0,0,2,3,1,0,0,0,0,1,0]})
Let's start with packing your original data into a pandas dataframe (in real life, you will probably use pd.read_csv() to generate this dataframe):
raw = pd.DataFrame({'0' : [0,0,1,0,0,2,3,1,0,0,0,0,1,0]})
The default index will help you locate zero spans:
s1 = raw.reset_index()
s1['index'] = np.where(s1['0'] != 0, np.nan, s1['index'])
s1['index'] = s1['index'].fillna(method='ffill').fillna(0).astype(int)
s1[s1['0'] != 0].groupby('index')['0'].apply(list).tolist()
#[[1], [2, 3, 1], [1]]
There may be a smarter way to do this in Python pandas, but the following example should work, yet doesn't:
import pandas as pd
import numpy as np
df1 = pd.DataFrame([[1, 0], [1, 2], [2, 0]], columns=['a', 'b'])
df2 = df1.copy()
df3 = df1.copy()
idx = pd.date_range("2010-01-01", freq='H', periods=3)
s = pd.Series([df1, df2, df3], index=idx)
# This causes an error
s.mean()
I won't post the whole traceback, but the main error message is interesting:
TypeError: Could not convert melt T_s
0 6 12
1 0 6
2 6 10 to numeric
It looks like the dataframe was successfully summed, but not divided by the length of the series.
However, we can take the sum of the dataframes in the series:
s.sum()
... returns:
a b
0 6 12
1 0 6
2 6 10
Why wouldn't mean() work when sum() does? Is this a bug or a missing feature? This does work:
(df1 + df2 + df3)/3.0
... and so does this:
s.sum()/3.0
a b
0 2 4.000000
1 0 2.000000
2 2 3.333333
But this of course is not ideal.
You could (as suggested by @unutbu) use a hierarchical index, but when you have a three-dimensional array you should consider using a pandas Panel, especially when one of the dimensions represents time, as in this case.
The Panel is often overlooked, but it is after all where the name pandas comes from (Panel Data System, or something like that).
Data slightly different from your original so there are not two dimensions with the same length:
df1 = pd.DataFrame([[1, 0], [1, 2], [2, 0], [2, 3]], columns=['a', 'b'])
df2 = df1 + 1
df3 = df1 + 10
Panels can be created a couple of different ways but one is from a dict. You can create the dict from your index and the dataframes with:
s = pd.Panel(dict(zip(idx,[df1,df2,df3])))
The mean you are looking for is simply a matter of operating on the correct axis (axis=0 in this case):
s.mean(axis=0)
Out[80]:
a b
0 4.666667 3.666667
1 4.666667 5.666667
2 5.666667 3.666667
3 5.666667 6.666667
With your data, sum(axis=0) returns the expected result.
EDIT: OK, too late for panels as the hierarchical index approach is already "accepted". I will say that that approach is preferable if the data is known to be "ragged", with an unknown and different number of rows in each grouping. For "square" data, the panel is absolutely the way to go and will be significantly faster, with more built-in operations. Pandas 0.15 has many improvements for multi-level indexing but still has limitations and dark edge cases in real world apps.
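Worth noting for readers on current pandas: pd.Panel was deprecated in 0.20 and removed in 0.25, so on a modern install the same elementwise mean can be taken by stacking the frames into a 3-D NumPy array. A minimal sketch, assuming all frames share the same index and columns:

import numpy as np
import pandas as pd

df1 = pd.DataFrame([[1, 0], [1, 2], [2, 0], [2, 3]], columns=['a', 'b'])
df2 = df1 + 1
df3 = df1 + 10

# stack into a (3, n_rows, n_cols) array and average over the first axis
arr = np.stack([df1.to_numpy(), df2.to_numpy(), df3.to_numpy()])
mean_df = pd.DataFrame(arr.mean(axis=0), index=df1.index, columns=df1.columns)
print(mean_df)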
When you define s with
s = pd.Series([df1, df2, df3], index=idx)
you get a Series with DataFrames as items:
In [77]: s
Out[77]:
2010-01-01 00:00:00 a b
0 1 0
1 1 2
2 2 0
2010-01-01 01:00:00 a b
0 1 0
1 1 2
2 2 0
2010-01-01 02:00:00 a b
0 1 0
1 1 2
2 2 0
Freq: H, dtype: object
The sum of the items is a DataFrame:
In [78]: s.sum()
Out[78]:
a b
0 3 0
1 3 6
2 6 0
but when you take the mean, nanops.nanmean is called:
def nanmean(values, axis=None, skipna=True):
values, mask, dtype, dtype_max = _get_values(values, skipna, 0)
the_sum = _ensure_numeric(values.sum(axis, dtype=dtype_max))
...
Notice that _ensure_numeric (source code) is called on the resultant sum.
An error is raised because a DataFrame is not numeric.
Here is a workaround. Instead of making a Series with DataFrames as items,
you can concatenate the DataFrames into a new DataFrame with a hierarchical index:
In [79]: s = pd.concat([df1, df2, df3], keys=idx)
In [80]: s
Out[80]:
a b
2010-01-01 00:00:00 0 1 0
1 1 2
2 2 0
2010-01-01 01:00:00 0 1 0
1 1 2
2 2 0
2010-01-01 02:00:00 0 1 0
1 1 2
2 2 0
Now you can take the sum and the mean:
In [82]: s.sum(level=1)
Out[82]:
a b
0 3 0
1 3 6
2 6 0
In [84]: s.mean(level=1)
Out[84]:
a b
0 1 0
1 1 2
2 2 0
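On recent pandas releases the level= argument to sum()/mean() has been deprecated and later removed, so the equivalent calls on the same concatenated s are an explicit groupby on the index level (a small sketch):

s.groupby(level=1).sum()
s.groupby(level=1).mean()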