How can I create a DataFrame slice object piece by piece? - python

I have a DataFrame, and I want to select certain rows and columns from it. I know how to do this using loc. However, I want to be able to specify each criterion individually, rather than all in one go.
import numpy as np
import pandas as pd
idx = pd.IndexSlice
index = [np.array(['foo', 'foo', 'qux', 'qux']),
         np.array(['a', 'b', 'a', 'b'])]
columns = ["A", "B"]
df = pd.DataFrame(np.random.randn(4, 2), index=index, columns=columns)
print(df)
print(df.loc[idx['foo', :], idx['A':'B']])
A B
foo a 0.676649 -1.638399
b -0.417915 0.587260
qux a 0.294555 -0.573041
b 1.592056 0.237868
A B
foo a -0.470195 -0.455713
b 1.750171 -0.409216
Requirement
I want to be able to achieve the same result with something like the following bit of code, where I specify each criterion one by one. It's also important that I can use a slice_list to allow dynamic behaviour [i.e. the syntax should work whether there are two, three or ten different criteria in the slice_list].
slice_1 = 'foo'
slice_2 = ':'
slice_list = [slice_1, slice_2]
column_slice = "'A':'B'"
print(df.loc[idx[slice_list], idx[column_slice]])

You can achieve this using the built-in slice function. You can't build slices from strings, because inside a string ':' is just a literal character, not slice syntax.
slice_1 = 'foo'
slice_2 = slice(None)
column_slice = slice('A', 'B')
df.loc[idx[slice_1, slice_2], idx[column_slice]]
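If you need the slice_list form from the question, the pieces can be collected in a list and converted to a tuple before indexing; a minimal sketch, reusing the df and idx defined above:
# build the row indexer piece by piece, then hand .loc a tuple
slice_list = [slice_1, slice_2]
print(df.loc[idx[tuple(slice_list)], idx[column_slice]])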

You might have to build your "slice lists" a little differently than you intended, but here's a relatively compact method using df.merge() and .loc[]:
# Build a "query" dataframe
slice_df = pd.DataFrame(index=[['foo','qux','qux'],['a','a','b']])
# Explicitly name columns
column_slice = ['A','B']
slice_df.merge(df, left_index=True, right_index=True, how='inner').loc[:, column_slice]
Out[]:
A B
foo a 0.442302 -0.949298
qux a 0.425645 -0.233174
b -0.041416 0.229281
This method also requires you to be explicit about your second index and columns, unfortunately. But computers are great at making long tedious lists for you if you ask nicely.
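For instance, a small sketch of asking pandas to build the query index for you, assuming you want every row whose first-level label is in a chosen set:
# build the "query" index programmatically instead of typing it out
wanted = ['foo', 'qux']  # hypothetical first-level selection
tuples = [t for t in df.index if t[0] in wanted]
slice_df = pd.DataFrame(index=pd.MultiIndex.from_tuples(tuples))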
EDIT - Example of a method to dynamically build a slice list that can be used as above.
Here's a function that takes a dataframe and spits out a list that could then be used to create a "query" dataframe to slice the original by. It only works with dataframes with 1 or 2 index levels. Let me know if that's an issue.
import ast

def make_df_slice_list(df):
    if df.index.nlevels == 1:
        # Only one level of index
        slice_list = []
        for dex in df.index.unique():
            if input("DF index: " + str(dex) + " - Include? Y/N: ") == "Y":
                # Add to slice list
                slice_list.append(dex)
    if df.index.nlevels > 1:
        # Multi level
        slice_list = [[] for _ in range(df.index.nlevels)]
        for i in df.index.levels[0]:
            print("DF index:", i, "has subindexes:", list(df.loc[i].index))
            sublist = ast.literal_eval(input("Enter the indexes you'd like as a list: "))
            # If no response, take the first entry
            if len(sublist) == 0:
                sublist = [df.loc[i].index[0]]
            # Add an entry to the first index list for each sub item passed,
            # and each of the sub items to the second index list
            for item in sublist:
                slice_list[0].append(i)
                slice_list[1].append(item)
    return slice_list
I'm not advising this as a way to communicate with your user, just an example. When you use it, answer Y or N at the yes/no prompts, and enter Python-style lists of strings (["a","b"]) or empty lists ([]) at the list prompts. Example:
In [115]: slice_list = make_df_slice_list(df)
DF index: foo has subindexes: ['a', 'b']
Enter the indexes you'd like as a list: []
DF index: qux has subindexes: ['a', 'b']
Enter the indexes you'd like as a list: ['a','b']
In [116]: slice_list
Out[116]: [['foo', 'qux', 'qux'], ['a', 'a', 'b']]
# Back to my original solution, but now passing the list:
slice_df = pd.DataFrame(index=slice_list)
column_slice = ['A','B']
slice_df.merge(df, left_index=True, right_index=True, how='inner').loc[:, column_slice]
Out[117]:
A B
foo a -0.249547 0.056414
qux a 0.938710 -0.202213
b 0.329136 -0.465999

Building on the answer by Ted Petrou:
slices = [('foo', slice(None)), slice('A', 'B')]
print(df.loc[tuple(idx[s] for s in slices)])
A B
foo a -0.465421 -0.591763
b -0.854938 1.221204
slices = [('foo', slice(None)), 'A']
print(df.loc[tuple(idx[s] for s in slices)])
foo a -0.465421
b -0.854938
Name: A, dtype: float64
slices = [('foo', slice(None))]
print(df.loc[tuple(idx[s] for s in slices)])
A B
foo a -0.465421 -0.591763
b -0.854938 1.221204
You have to use tuples when calling __getitem__ (loc[...]) with a 'dynamic' argument.
You could also avoid building the slice objects by hand:
def to_selector(s):
    if isinstance(s, (tuple, list)):
        return tuple(map(to_selector, s))
    ps = [None if len(p) == 0 else p for p in s.split(':')]
    assert len(ps) > 0 and len(ps) <= 2
    if len(ps) == 1:
        assert ps[0] is not None
        return ps[0]
    return slice(*ps)
query = [('foo', ':'), 'A:B']
df.loc[tuple(idx[to_selector(s)] for s in query)]

Do you mean this?
import numpy as np
import pandas as pd
idx = pd.IndexSlice
index = [np.array(['foo', 'foo', 'qux', 'qux']),
         np.array(['a', 'b', 'a', 'b'])]
columns = ["A", "B"]
df = pd.DataFrame(np.random.randn(4, 2), index=index, columns=columns)
print(df)
#
la1 = lambda df: df.loc[idx['foo', :], idx['A':'B']]
la2 = lambda df: df.loc[idx['qux', :], idx['A':'B']]
laList = [la1, la2]
result = list(map(lambda la: la(df), laList))
print(result[0])
print(result[1])
A B
foo a 0.162138 -1.382822
b -0.822986 -0.403766
qux a 0.191695 -1.125841
b 0.669254 -0.704894
A B
foo a 0.162138 -1.382822
b -0.822986 -0.403766
A B
qux a 0.191695 -1.125841
b 0.669254 -0.704894

Did you simply mean this?
df.loc[idx['foo',:], :].loc[idx[:,'a'], :]
In a slightly more general form, for example:
def multiindex_partial_row_slice(df, part_idx, criteria):
    slc = idx[tuple([slice(None) if i != part_idx else criteria
                     for i in range(len(df.index.levels))])]
    return df.loc[slc, :]
multiindex_partial_row_slice(df, 1, slice('a','b'))
Similarly you can always narrow your current column set by appending .loc[:, columns] to your currently sliced view.
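For example, combining both with the helper above (a small sketch):
# rows whose second index level falls in 'a':'b', then only column A
multiindex_partial_row_slice(df, 1, slice('a', 'b')).loc[:, ['A']]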

Related

Python find and replace tool using pandas and a dictionary

Having issues with building a find and replace tool in python. Goal is to search a column in an excel file for a string and swap out every letter of the string based on the key value pair of the dictionary, then write the entire new string back to the same cell. So "ABC" should convert to "BCD". I have to find and replace any occurrence of individual characters.
The below code runs without debugging, but newvalue never creates and I don't know why. No issues writing data to the cell if newvalue gets created.
input: df = pd.DataFrame({'Code1': ['ABC1', 'B5CD', 'C3DE']})
expected output: df = pd.DataFrame({'Code1': ['BCD1', 'C5DE', 'D3EF']})
mycolumns = ["Col1", "Col2"]
mydictionary = {'A': 'B', 'B': 'C', 'C': 'D'}
for x in mycolumns:
    # 1. If the mycolumn value exists in the headerlist of the file
    if x in headerlist:
        # 2. Get column coordinate
        col = df.columns.get_loc(x) + 1
        # 3. iterate through the rows underneath that header
        for ind in df.index:
            # 4. log the row coordinate
            rangerow = ind + 2
            # 5. get the original value of that coordinate
            oldval = df[x][ind]
            for count, y in enumerate(oldval):
                # 6. generate replacement value
                newval = df.replace({y: mydictionary}, inplace=True, regex=True, value=None)
                print("old: " + str(oldval) + " new: " + str(newval))
                # 7. update the cell
                ws.cell(row=rangerow, column=col).value = newval
            else:
                print("not in the string")
    else:
        # print(df)
        print("column doesn't exist in workbook, moving on")
else:
    print("done")
wb.save(filepath)
wb.close()
I know there's something going on with enumerate, and I'm probably not stitching the string back together after I do the replacements? Or maybe a dictionary is the wrong solution for what I am trying to do; the key:value pairing is what led me to use it. I have a little programming background, but very little with Python. I appreciate any help.
newvalue never creates and I don't know why.
DataFrame.replace with inplace=True will return None.
>>> df = pd.DataFrame({'Code1': ['ABC1', 'B5CD', 'C3DE']})
>>> df = df.replace('ABC1','999')
>>> df
Code1
0 999
1 B5CD
2 C3DE
>>> q = df.replace('999','zzz', inplace=True)
>>> print(q)
None
>>> df
Code1
0 zzz
1 B5CD
2 C3DE
>>>
An alternative could be to use str.translate on the column (via its .str accessor) to translate the entire Series:
>>> df = pd.DataFrame({'Code1': ['ABC1', 'B5CD', 'C3DE']})
>>> mydictionary = {'A': 'B', 'B': 'C', 'C': 'D'}
>>> table = str.maketrans('ABC','BCD')
>>> df
Code1
0 ABC1
1 B5CD
2 C3DE
>>> df.Code1.str.translate(table)
0 BCD1
1 C5DD
2 D3DE
Name: Code1, dtype: object
>>>
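Note that str.maketrans also accepts a dict of single-character mappings, so the table can be built straight from mydictionary; a small sketch (same result as above):
>>> table = str.maketrans(mydictionary)
>>> df.Code1.str.translate(table)
0    BCD1
1    C5DD
2    D3DE
Name: Code1, dtype: object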

how to apply on certain condition and append value inside it separated by - in pandas through python

I have created a dataframe which contains the columns Name and Mains.
data = [['Anshu', '8321-1328-11'], ['Hero', '83211-1128-11'], ['Naman', '65432-8765-4']]
df = pd.DataFrame(data, columns = ['Name', 'Mains'])
I want to update the Mains column into a new column df['new_mains'] with the following condition: if the number is separated as 4-4-2, it should be padded with a 0 so that the updated separation becomes 5-4-2. Is it possible to do so in pandas?
Pretty sure it could be done. For example,
def my_func(strn):
    a, b, c = strn.split('-')
    new_a = '0' + a if len(a) == 4 else a
    new_b = '0' + b if len(b) == 3 else b
    new_c = '9' + c if len(c) == 1 else c
    return '-'.join([new_a, new_b, new_c])
And then,
df['New_Mains'] = df['Mains'].apply(my_func)
Note: this goes off the assumption that 'a' is either of length 4 or 5. If 'a', 'b', 'c' can be of any other length, then you can instead do something like the following (which works for the current scenario as well):
new_a = '0' * (5 - len(a)) + a
new_b = '0' * (4 - len(b)) + b
new_c = '9' * (2 - len(c)) + c
More on str.split here. Basically, in your case the string reads something like "99999-9999-99" with "-" as a separator. So,
"99999-9999-99".split('-') #would return
['99999', '9999', '99']
where a = '99999', b = '9999', c = '99'.
new_a, new_b, new_c are variables to hold new values of a, b and c after checking for the conditional statements. Finally join the strings new_a, new_b, new_c to look like original strings from 'Mains' column. More on str.join
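If plain zero-padding of every segment to fixed widths is acceptable, a more compact sketch uses str.zfill (note this pads the last segment with '0', where the answer above used '9'):
# pad each '-'-separated segment with zeros to widths 5, 4 and 2
widths = (5, 4, 2)
df['New_Mains'] = df['Mains'].apply(
    lambda s: '-'.join(p.zfill(w) for p, w in zip(s.split('-'), widths)))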

Swapping rows within the same pandas dataframe

I'm trying to swap the rows within the same DataFrame in pandas.
I've tried running
a = pd.DataFrame(data = [[1,2],[3,4]], index=range(2), columns = ['A', 'B'])
b, c = a.iloc[0], a.iloc[1]
a.iloc[0], a.iloc[1] = c, b
but I just end up with both the rows showing the values for the second row (3,4).
Even the variables b and c are now both assigned to 3 and 4 even though I did not assign them again. Am I doing something wrong?
Use a temporary variable to store the value with .copy(), because otherwise you are changing the values while assigning them: unless you use a copy, the underlying data is modified directly.
a = pd.DataFrame(data = [[1,2],[3,4]], index=range(2), columns = ['A', 'B'])
b, c = a.iloc[0], a.iloc[1]
temp = a.iloc[0].copy()
a.iloc[0] = c
a.iloc[1] = temp
Or you can directly use copy like
a = pd.DataFrame(data = [[1,2],[3,4]], index=range(2), columns = ['A', 'B'])
b, c = a.iloc[0].copy(), a.iloc[1].copy()
a.iloc[0],a.iloc[1] = c,b
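A related sketch: if only the positional order matters, the two rows can also be swapped in one step by passing a list of positions to iloc (this assumes you are happy to reset the index afterwards):
a = pd.DataFrame(data=[[1, 2], [3, 4]], index=range(2), columns=['A', 'B'])
# reorder by position, then restore a clean 0..n-1 index
a = a.iloc[[1, 0]].reset_index(drop=True)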
The accepted answer does not change the index labels along with the rows.
If you only want to alter the order of rows you should use dataframe.reindex(arraylike). Notice that the index has changed.
In this way, it can be extrapolated to more complex situations:
a = pd.DataFrame(data = [[1,2],[3,4]], index=range(2), columns = ['A', 'B'])
rows = a.index.to_list()
# Move the last row to the first index
rows = rows[-1:]+rows[:-1]
a=a.loc[rows]
df = pd.DataFrame(data = [[1,2],[4,5],[6,7]], index=['a','b','c'], columns = ['A', 'B'])
df
df.reindex(['a','c','b'])

Change entire pandas Series based on conditions

In my pandas DataFrame I want to add a new column (NewCol), based on some conditions that follow from data of another column (OldCol).
To be more specific, my column OldCol contains three types of strings:
BB_sometext
sometext1
sometext 1
I want to differentiate between these three types of strings. Right now, I did this using the following code:
df['NewCol'] = pd.Series()
for i in range(0, len(df)):
    if str(df.loc[i, 'OldCol']).split('_')[0] == "BB":
        df.loc[i, 'NewCol'] = "A"
    elif len(str(df.loc[i, 'OldCol']).split(' ')) == 1:
        df.loc[i, 'NewCol'] = "B"
    else:
        df.loc[i, 'NewCol'] = "C"
Even though this code seems to work, I'm sure there is a better way to do something like this, as this seems very inefficient. Does anyone know a better way to do this? Thanks in advance.
In general, you need something like the following formulation:
>>> df.loc[boolean_test, 'NewCol'] = desired_result
Or, for multiple conditions (Note the parentheses around each condition, and the rather unpythonic & instead of and):
>>> df.loc[(boolean_test1) & (boolean_test2), 'NewCol'] = desired_result
Example
Let's start with an example DataFrame:
>>> df = pd.DataFrame(dict(OldCol=['sometext1', 'sometext 1', 'BB_ccc', 'sometext1']))
Then you'd do:
>>> df.loc[df['OldCol'].str.split('_').str[0] == 'BB', 'NewCol'] = "A"
To set all BB_ columns to A. You could even (optionally, for readability) separate out the boolean condition onto its own line:
>>> oldcol_starts_BB = df['OldCol'].str.split('_').str[0] == 'BB'
>>> df.loc[oldcol_starts_BB, 'NewCol'] = "A"
I like this method because it means the reader doesn't have to work out the logic hidden within the split('_').str[0] part.
Then, to set all values with no space in OldCol, which are still not set (i.e. where isnull is true), to B:
>>> oldcol_has_no_space = df['OldCol'].str.find(' ') < 0
>>> newcol_is_null = df['NewCol'].isnull()
>>> df.loc[(oldcol_has_no_space) & (newcol_is_null), 'NewCol'] = 'B'
Then finally, set all remaining values of NewCol to C:
>>> df.loc[df['NewCol'].isnull(), 'NewCol'] = 'C'
>>> df
       OldCol NewCol
0   sometext1      B
1  sometext 1      C
2      BB_ccc      A
3   sometext1      B
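For completeness, a vectorized sketch of the same three-way classification using numpy.select, which applies the first matching condition per row (the 'BB' test here assumes the prefix is always followed by an underscore):
import numpy as np

conditions = [
    df['OldCol'].str.startswith('BB_'),            # type A
    ~df['OldCol'].str.contains(' ', regex=False),  # type B: no space in the text
]
df['NewCol'] = np.select(conditions, ['A', 'B'], default='C')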

Filtering dataframes in pandas : use a list of conditions

I have a pandas dataframe with two dimensions : 'col1' and 'col2'
I can filter certain values of those two columns using :
df[ (df["col1"]=='foo') & (df["col2"]=='bar')]
Is there any way I can filter both columns at once ?
I tried naively to use the restriction of the dataframes to two columns, but my best guesses for the second part of the equality don't work :
df[df[["col1","col2"]]==['foo','bar']]
yields me this error
ValueError: Invalid broadcasting comparison [['foo', 'bar']] with block values
I need to do this because not only the names of the columns, but also the number of columns on which the conditions are set, will vary.
To the best of my knowledge, there is no way in Pandas for you to do what you want. However, although the following solution may not be the prettiest, you can zip a set of parallel lists as follows:
cols = ['col1', 'col2']
conditions = ['foo', 'bar']
df[eval(" & ".join(["(df['{0}'] == '{1}')".format(col, cond)
                    for col, cond in zip(cols, conditions)]))]
The string join results in the following:
>>> " & ".join(["(df['{0}'] == '{1}')".format(col, cond)
for col, cond in zip(cols, conditions)])
"(df['col1'] == 'foo') & (df['col2'] == 'bar')"
Which you then use eval to evaluate, effectively:
df[eval("(df['col1'] == 'foo') & (df['col2'] == 'bar')")]
For example:
df = pd.DataFrame({'col1': ['foo', 'bar', 'baz'], 'col2': ['bar', 'spam', 'ham']})
>>> df
col1 col2
0 foo bar
1 bar spam
2 baz ham
>>> df[eval(" & ".join(["(df['{0}'] == {1})".format(col, repr(cond))
...                     for col, cond in zip(cols, conditions)]))]
col1 col2
0 foo bar
I would like to point out an alternative to the accepted answer, as eval is not necessary for solving this problem.
from functools import reduce

df = pd.DataFrame({'col1': ['foo', 'bar', 'baz'], 'col2': ['bar', 'spam', 'ham']})
cols = ['col1', 'col2']
values = ['foo', 'bar']
conditions = list(zip(cols, values))

def apply_conditions(df, conditions):
    assert len(conditions) > 0
    comps = [df[c] == v for c, v in conditions]
    result = comps[0]
    for comp in comps[1:]:
        result &= comp
    return result

# equivalent version, folding the comparisons with reduce
def apply_conditions(df, conditions):
    assert len(conditions) > 0
    comps = [df[c] == v for c, v in conditions]
    return reduce(lambda c1, c2: c1 & c2, comps[1:], comps[0])

df[apply_conditions(df, conditions)]
I know I'm late to the party on this one, but if you know that all of your values will use the same sign, then you could use functools.reduce. I have a CSV with something like 64 columns, and I have no desire whatsoever to copy and paste them. This is how I resolved it:
from functools import reduce
players = pd.read_csv('players.csv')
# I only want players who have any of the outfield stats over 0.
# That means they have to be an outfielder.
column_named_outfield = lambda x: x.startswith('outfield')
# If a column name starts with outfield, then it is an outfield stat.
# So only include those columns
outfield_columns = filter(column_named_outfield, players.columns)
# Column must have a positive value
has_positive_value = lambda c:players[c] > 0
# We're looking to create a series of filters, so use "map"
list_of_positive_outfield_columns = map(has_positive_value, outfield_columns)
# Given two DF filters, this returns a third representing the "or" condition.
concat_or = lambda x, y: x | y
# Apply the filters through reduce to create a primary filter
is_outfielder_filter = reduce(concat_or, list_of_positive_outfield_columns)
outfielders = players[is_outfielder_filter]
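A hedged alternative to the reduce chain, assuming the outfield columns are all numeric: DataFrame.filter can select the columns by name pattern and any(axis=1) collapses them into the same OR mask:
# rows where any column starting with 'outfield' is positive
is_outfielder_filter = players.filter(regex='^outfield').gt(0).any(axis=1)
outfielders = players[is_outfielder_filter]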
Posting because I ran into a similar issue and found a solution that gets it done in one line, albeit a bit inefficiently:
cols, vals = ["col1", "col2"], ['foo', 'bar']
pd.concat([df.loc[df[cols[i]] == vals[i]] for i in range(len(cols))],
          axis=1, join='inner')
With axis=1 and join='inner', only the index labels present in every filtered frame survive, so this is effectively an & across the conditions (at the cost of repeated columns). To get an | across the conditions instead, concatenate along rows (the default axis=0), omit join='inner', and add a drop_duplicates() at the end.
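For comparison, a small sketch of the same AND filter as a single boolean mask built with numpy, avoiding both eval and the repeated columns (cols and vals as above):
import numpy as np

# element-wise AND over an arbitrary number of per-column conditions
mask = np.logical_and.reduce([df[c] == v for c, v in zip(cols, vals)])
df[mask]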
