I am new to python and pandas. I have attached a picture of a pandas dataframe.
I need to know how I can fetch data from the last column and how to rename the last column.
You can use:
df = df.rename(columns = {df.columns[-1] : 'newname'})
Or:
df.columns = df.columns[:-1].tolist() + ['new_name']
It seems the solution:
df.columns.values[-1] = 'newname'
is buggy: it mutates the Index's underlying array in place, so the Index's cached lookup table goes stale and pandas functions raise weird errors after the rename (see the KeyError at the end of the sample below).
To fetch data from the last column, it is possible to select by position with iloc:
s = df.iloc[:,-1]
And after the rename, select by name:
s1 = df['newname']
print (s1)
Sample:
import pandas as pd

df = pd.DataFrame({'R':[7,8,9],
                   'T':[1,3,5],
                   'E':[5,3,6],
                   ('Z', 'a'):[7,4,3]})
print (df)
E T R (Z, a)
0 5 1 7 7
1 3 3 8 4
2 6 5 9 3
s = df.iloc[:,-1]
print (s)
0 7
1 4
2 3
Name: (Z, a), dtype: int64
df.columns = df.columns[:-1].tolist() + ['new_name']
print (df)
E T R new_name
0 5 1 7 7
1 3 3 8 4
2 6 5 9 3
df = df.rename(columns = {('Z', 'a') : 'newname'})
print (df)
E T R newname
0 5 1 7 7
1 3 3 8 4
2 6 5 9 3
s = df['newname']
print (s)
0 7
1 4
2 3
Name: newname, dtype: int64
df.columns.values[-1] = 'newname'
s = df['newname']
print (s)
>KeyError: 'newname'
fetch data from the last column
Retrieving the last column using df.iloc[:,-1], as suggested in other answers, works fine only as long as it really is the last column.
However, using an absolute column position like -1 is not a stable solution: if you add some other column, your code will silently select the wrong one.
A stable, generic approach
First of all, make sure all your column names are strings:
# rename columns
df.columns = [str(s) for s in df.columns]
# access column by name - note that str() of a tuple keeps the quotes around each element
df["('vehicle_id', 'reservation_count')"]
rename the last column
It is preferable to have consistent column names for all columns, without brackets in them - it makes your code more readable and your dataset easier to use:
# access column by name
df['vehicle_id_reservation_count']
This is a straightforward conversion of all columns that are named by a tuple:
# rename columns
def rename(col):
    if isinstance(col, tuple):
        col = '_'.join(str(c) for c in col)
    return col

df.columns = list(map(rename, df.columns))
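As a minimal sketch, here is the conversion applied to the sample frame from the first answer (the only assumption is that the tuple column ('Z', 'a') is still present):

import pandas as pd

df = pd.DataFrame({'R':[7,8,9],
                   'T':[1,3,5],
                   'E':[5,3,6],
                   ('Z', 'a'):[7,4,3]})

def rename(col):
    # join tuple elements with underscores; leave plain string names alone
    if isinstance(col, tuple):
        col = '_'.join(str(c) for c in col)
    return col

df.columns = list(map(rename, df.columns))
print(df.columns.tolist())  # ['R', 'T', 'E', 'Z_a']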
You can drop the last column and reassign it with a different name.
This isn't technically renaming the column. However, I think it's intuitive.
Using #jezrael's setup
df = pd.DataFrame({'R':[7,8,9],
                   'T':[1,3,5],
                   'E':[5,3,6],
                   ('Z', 'a'):[7,4,3]})
print(df)
R T E (Z, a)
0 7 1 5 7
1 8 3 3 4
2 9 5 6 3
How can I fetch the last column?
You can use iloc
df.iloc[:, -1]
0 7
1 4
2 3
Name: (Z, a), dtype: int64
You can rename the column after you've extracted it
df.iloc[:, -1].rename('newcolumn')
0 7
1 4
2 3
Name: newcolumn, dtype: int64
To rename it within the dataframe, you can do it in a great number of ways. To continue with the theme that I've started, namely fetching the column and then renaming it:
option 1
start by dropping the last column with iloc[:, :-1]
use join to add the renamed column referenced above
df.iloc[:, :-1].join(df.iloc[:, -1].rename('newcolumn'))
R T E newcolumn
0 7 1 5 7
1 8 3 3 4
2 9 5 6 3
option 2
Or we can use assign to put it back and spare ourselves the separate rename
df.iloc[:, :-1].assign(newname=df.iloc[:, -1])
R T E newname
0 7 1 5 7
1 8 3 3 4
2 9 5 6 3
For changing the column name:
columns = df.columns.values
columns[-1] = "Column name"
Note that this mutates the Index's underlying array in place - the buggy pattern warned about in the first answer - so prefer df.rename for a safe rename.
To fetch data from the dataframe, you can use the loc, iloc and ix methods:
loc fetches values by label
iloc fetches values by integer position
ix could fetch by both label and position, but it is deprecated and has been removed in recent pandas versions
A short sketch follows the links below.
Learn about loc and iloc
http://pandas.pydata.org/pandas-docs/stable/dsintro.html#indexing-selection
Learn more about indexing and selecting data
http://pandas.pydata.org/pandas-docs/stable/indexing.html
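As a minimal sketch of the label/position difference (hypothetical data, just for illustration):

import pandas as pd

df = pd.DataFrame({'A': [10, 20, 30]}, index=['x', 'y', 'z'])

print(df.loc['y', 'A'])   # 20 - selected by row label 'y'
print(df.iloc[1, 0])      # 20 - selected by integer position 1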
Related
What is the most elegant way to create a new dataframe from an existing dataframe, by 1. selecting only certain columns and 2. renaming them at the same time?
For instance, I have the following dataframe, where I want to pick columns B, D and F and rename them to X, Y, Z:
base dataframe
A B C D E F
1 2 3 4 5 6
1 2 3 4 5 6
new dataframe
X Y Z
2 4 6
2 4 6
You can select and rename the columns in one line:
df2=df[['B','D','F']].rename({'B':'X','D':'Y','F':'Z'}, axis=1)
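When the column lists get longer, building the rename mapping with zip keeps the selection and the new names in sync; a small sketch using the names from the question:

cols = ['B','D','F']
new_names = ['X','Y','Z']
df2 = df[cols].rename(columns=dict(zip(cols, new_names)))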
Slightly more general selection of every other column:
df = pd.DataFrame({'A':[1,2,3], 'B':[4,5,6],
                   'C':[7,8,9], 'D':[10,11,12]})
df_half = df.iloc[:, ::2]
with df_half being:
A C
0 1 7
1 2 8
2 3 9
You can then use the rename method mentioned in the answer by #G. Anderson or directly assign to the columns:
df_half.columns = ['X','Y']
returning:
X Y
0 1 7
1 2 8
2 3 9
Given this data frame:
import pandas as pd
df=pd.DataFrame({'A':[1,2,3],'B':[4,5,6],'C':[7,8,9]})
df
A B C
0 1 4 7
1 2 5 8
2 3 6 9
I'd like to create 3 new data frames; one from each column.
I can do this one at a time like this:
a=pd.DataFrame(df[['A']])
a
A
0 1
1 2
2 3
But instead of doing this for each column, I'd like to do it in a loop.
Here's what I've tried:
a = b = c = df.copy()
dfs = [a, b, c]
fields = ['A','B','C']
for d, f in zip(dfs, fields):
    d = pd.DataFrame(d[[f]])
...but when I then print each one, I get the whole original data frame as opposed to just the column of interest.
a
A B C
0 1 4 7
1 2 5 8
2 3 6 9
Update:
My actual data frame will have some columns that I do not need and the columns will not be in any sort of order, so I need to be able to get the columns by name.
Thanks in advance!
A simple list comprehension should be enough.
In [68]: df_list = [df[[x]] for x in df.columns]
Printing out the list, this is what you get:
In [69]: for d in df_list:
...: print(d)
...: print('-' * 5)
...:
A
0 1
1 2
2 3
-----
B
0 4
1 5
2 6
-----
C
0 7
1 8
2 9
-----
Each element in df_list is its own data frame, corresponding to each data frame from the original. Furthermore, you don't even need fields, use df.columns instead.
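If only some columns are needed by name (per the update), a dict comprehension keyed by column name works too; a small sketch with the wanted names in a list:

fields = ['A','B','C']
df_dict = {f: df[[f]] for f in fields}
print(df_dict['A'])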
Or you can try this: instead of creating copies of df, it writes each single-column frame into a named variable via locals(). This method returns the result as individual DataFrames, not a list; however, I think saving the DataFrames into a list is better.
dfs = ['a','b','c']
fields = ['A','B','C']
variables = locals()
for d, f in zip(dfs, fields):
    variables["{0}".format(d)] = df[[f]]
a
Out[743]:
A
0 1
1 2
2 3
b
Out[744]:
B
0 4
1 5
2 6
c
Out[745]:
C
0 7
1 8
2 9
You should use loc with column labels (loc selects by label, so an integer position like df.loc[:, 0] raises a KeyError here):
a = df.loc[:, ['A']]
and then loop through the labels like:
for i, col in enumerate(df.columns):
    dfs[i] = df.loc[:, [col]]
I have a Pandas dataset that I want to clean up prior to applying my ML algorithm. I am wondering if it is possible to remove a row if an element of its columns does not match a set of values. For example, if I have the dataframe:
a b
0 1 6
1 4 7
2 2 4
3 3 7
...
And I desire the values of a to be one of [1,3] and of b to be one of [6,7], such that my final dataset is:
a b
0 1 6
1 3 7
...
Currently, my implementation is not working, as some of my data rows have erroneous strings attached to the value. For example, instead of a value of 1 I'll have something like 1abc. Hence I would like to remove anything that is not exactly an integer with one of those values.
My workaround is also a bit archaic, as I am removing entries for column a that do not have 1 or 3 via:
dataset = dataset[(dataset.commute != 1)]
dataset = dataset[(dataset.commute != 3)]
You can use boolean indexing, combining an isin mask for each column with &:
df1 = df[(df['a'].isin([1,3])) & (df['b'].isin([6,7]))]
print (df1)
a b
0 1 6
3 3 7
Or use numpy.in1d:
import numpy as np

df1 = df[np.in1d(df['a'], [1,3]) & np.in1d(df['b'], [6,7])]
print (df1)
a b
0 1 6
3 3 7
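In newer NumPy versions, np.isin is the recommended replacement for np.in1d; the same filter would read:

df1 = df[np.isin(df['a'], [1,3]) & np.isin(df['b'], [6,7])]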
But if you need to remove all rows with non-numeric values, use to_numeric with errors='coerce', which returns NaN for anything unparseable, and then filter with notnull:
df = pd.DataFrame({'a':['1abc','2','3'],
                   'b':['4','5','dsws7']})
print (df)
a b
0 1abc 4
1 2 5
2 3 dsws7
mask = (pd.to_numeric(df['a'], errors='coerce').notnull() &
        pd.to_numeric(df['b'], errors='coerce').notnull())
df1 = df[mask].astype(int)
print (df1)
a b
1 2 5
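If every column is supposed to be numeric, the same idea extends to the whole frame in one pass; a small sketch, assuming all columns should go through the coercion:

# coerce every column, then drop rows where anything failed to parse
df1 = df.apply(pd.to_numeric, errors='coerce').dropna().astype(int)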
If you need to check whether some value is NaN or None:
df = pd.DataFrame({'a':['1abc',None,'3'],
                   'b':['4','5',np.nan]})
print (df)
a b
0 1abc 4
1 None 5
2 3 NaN
print (df[df.isnull().any(axis=1)])
a b
1 None 5
2 3 NaN
You can use pandas isin()
df = df[df.a.isin([1,3]) & df.b.isin([6,7])]
a b
0 1 6
3 3 7
import pandas as pd
df = pd.DataFrame({
    'id': [1,2,3,4,5,6,7,8,9,10,11],
    'text': ['abc','zxc','qwe','asf','efe','ert','poi','wer','eer','poy','wqr']})
I have a DataFrame with columns:
id text
1 abc
2 zxc
3 qwe
4 asf
5 efe
6 ert
7 poi
8 wer
9 eer
10 poy
11 wqr
I have a list L = [1,3,6,10] which contains list of id's.
I am trying to merge the text column using the list: taking 1 and 3 (the first two values in the list), the text of id 2 is appended to the row with id 1 and the row with id 2 is then deleted; similarly, taking 3 and 6, the text of ids 4 and 5 is appended to id 3 and those rows are deleted; and so on for each pair of consecutive elements of the list.
My final output would look like this:
id text
1 abczxc # joining id 1 and 2
3 qweasfefe # joining id 3,4 and 5
6 ertpoiwereer # joining id 6,7,8,9
10 poywqr # joining id 10 and 11
You can build a grouping Series with where + isin + ffill, and use it for groupby with apply and the join function (an isin + cumsum grouper works similarly; see the sketch below):
s = df.id.where(df.id.isin(L)).ffill().astype(int)
df1 = df.groupby(s)['text'].apply(''.join).reset_index()
print (df1)
id text
0 1 abczxc
1 3 qweasfefe
2 6 ertpoiwereer
3 10 poywqr
It works because:
s = df.id.where(df.id.isin(L)).ffill().astype(int)
print (s)
0 1
1 1
2 3
3 3
4 3
5 6
6 6
7 6
8 6
9 10
10 10
Name: id, dtype: int32
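For completeness, a sketch of the equivalent isin + cumsum grouper mentioned above; it assumes (as here) that the first id of the frame is in L, otherwise the rows before the first boundary would form their own group:

# True at each boundary id; cumsum turns the flags into group numbers 1..4
g = df.id.isin(L).cumsum()
df1 = df.groupby(g).agg({'id': 'first', 'text': ''.join}).reset_index(drop=True)
print(df1)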
I changed the values not in the list to np.nan and then used ffill and groupby. Though #Jezrael's approach is much better - I need to remember to use cumsum :)
import numpy as np

l = [1,3,6,10]
df.loc[~df.id.isin(l), 'id'] = np.nan
df = df.ffill().groupby('id').sum()
text
id
1.0 abczxc
3.0 qweasfefe
6.0 ertpoiwereer
10.0 poywqr
Use pd.cut to create your bins, then groupby with a lambda function to join the text in each group.
df.groupby(pd.cut(df.id,L+[np.inf],right=False, labels=[i for i in L])).apply(lambda x: ''.join(x.text))
EDIT:
(df.groupby(pd.cut(df.id, L+[np.inf],
                   right=False,
                   labels=[i for i in L]))
   .apply(lambda x: ''.join(x.text)).reset_index().rename(columns={0:'text'}))
Output:
id text
0 1 abczxc
1 3 qweasfefe
2 6 ertpoiwereer
3 10 poywqr
I have a DataFrame which I want to pass to a function, derive some information from it, and then return that information. Originally I set up my code like:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'A': [1,1,1,1,2,2,2,3,3,4,4,4],
    'B': [5,5,6,7,5,6,6,7,7,6,7,7],
    'C': [1,1,1,1,1,1,1,1,1,1,1,1]})
def test_function(df):
    df['D'] = 0
    df.D = np.random.rand(len(df))
    grouped = df.groupby('A')
    df = grouped.first()
    df = df['D']
    return df
Ds = test_function(df)
print(df)
print(Ds)
Which returns:
A B C D
0 1 5 1 0.582319
1 1 5 1 0.269779
2 1 6 1 0.421593
3 1 7 1 0.797121
4 2 5 1 0.366410
5 2 6 1 0.486445
6 2 6 1 0.001217
7 3 7 1 0.262586
8 3 7 1 0.146543
9 4 6 1 0.985894
10 4 7 1 0.312070
11 4 7 1 0.498103
A
1 0.582319
2 0.366410
3 0.262586
4 0.985894
Name: D, dtype: float64
My thinking was along the lines of, I don't want to copy my large dataframe, so I will add a working column to it, and then just return the information I want without affecting the original dataframe. This of course doesn't work, because I didn't copy the dataframe, so adding a column is adding a column to the original. Currently I'm doing something like:
add column
results = Derive information
delete column
return results
which feels a bit kludgy to me, but I can't think of a better way to do it without copying the dataframe. Any suggestions?
If you do not want to add a column to your original DataFrame, you could create an independent Series and apply the groupby method to the Series instead:
def test_function(df):
    ser = pd.Series(np.random.rand(len(df)))
    grouped = ser.groupby(df['A'])
    return grouped.first()

Ds = test_function(df)
yields
A
1 0.017537
2 0.392849
3 0.451406
4 0.234016
dtype: float64
Thus, test_function does not modify df at all. Notice that ser.groupby can be passed a sequence of values (such as df['A']) by which to group, instead of just the name of a column.
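A related sketch, for when a copy is acceptable: assign also leaves df untouched, at the cost of copying the frame, which the Series approach above avoids (the function name here is mine, just for illustration):

def test_function_assign(df):
    # assign returns a new frame with D added; the original df is not modified
    return df.assign(D=np.random.rand(len(df))).groupby('A')['D'].first()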