Pandas: append row to DataFrame with multiindex in columns - python

I have a DataFrame with a multiindex in the columns and would like to use dictionaries to append new rows.
Let's say that each row in the DataFrame is a city. The columns contains "distance" and "vehicle". And each cell would be the percentage of the population that chooses this vehicle for this distance.
I'm constructing an index like this:
index_tuples = []
for distance in ["near", "far"]:
    for vehicle in ["bike", "car"]:
        index_tuples.append([distance, vehicle])
index = pd.MultiIndex.from_tuples(index_tuples, names=["distance", "vehicle"])
Then I'm creating a dataframe:
dataframe = pd.DataFrame(index=["city"], columns = index)
The structure of the dataframe looks good, although pandas has added NaNs as default values?
Now I would like to set up a dictionary for the new city and add it:
my_home_city = {"near":{"bike":1, "car":0},"far":{"bike":0, "car":1}}
dataframe["my_home_city"] = my_home_city
But this fails:
ValueError: Length of values does not match length of index
Here is the complete error message (pastebin)
UPDATE:
Thank you for all the good answers. I'm afraid I've oversimplified the problem in my example. Actually my index is nested with 3 levels (and it could become more).
So I've accepted the universal answer of converting my nested dictionary into a flat dictionary keyed by tuples. This might not be as clean as the other approaches, but it works for any multiindex setup.
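For reference, here is a minimal sketch of how I generalised that conversion to an arbitrary nesting depth (the flatten_keys helper below is my own naming, not from any of the answers):
def flatten_keys(nested, prefix=()):
    # recursively flatten a nested dict into {tuple_key: value} pairs
    flat = {}
    for key, value in nested.items():
        if isinstance(value, dict):
            flat.update(flatten_keys(value, prefix + (key,)))
        else:
            flat[prefix + (key,)] = value
    return flat

my_home_city = {"near": {"bike": 1, "car": 0}, "far": {"bike": 0, "car": 1}}
d = flatten_keys(my_home_city)
# {('near', 'bike'): 1, ('near', 'car'): 0, ('far', 'bike'): 0, ('far', 'car'): 1}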

A MultiIndex is essentially a list of tuples, so we just need to reshape your dict into a dict keyed by tuples; then we can assign the value directly.
d = {(x,y):my_home_city[x][y] for x in my_home_city for y in my_home_city[x]}
df.loc['my_home_city',:]=d
df
Out[994]:
distance near far
vehicle bike car bike car
city NaN NaN NaN NaN
my_home_city 1 0 0 1
More Info
d
Out[995]:
{('far', 'bike'): 0,
('far', 'car'): 1,
('near', 'bike'): 1,
('near', 'car'): 0}
df.columns.values
Out[996]: array([('near', 'bike'), ('near', 'car'), ('far', 'bike'), ('far', 'car')], dtype=object)

You can append to your dataframe like this:
my_home_city = {"near":{"bike":1, "car":0},"far":{"bike":0, "car":1}}
dataframe.append(pd.DataFrame.from_dict(my_home_city).unstack().rename('my_home_city'))
Output:
distance near far
vehicle bike car bike car
city NaN NaN NaN NaN
my_home_city 1 0 0 1
The trick is to create the dataframe row with from_dict, then unstack to get the structure of your original dataframe with multiindex columns, then rename to set the index label, and finally append.
Or, if you don't want to create the empty dataframe first, you can use this method to create the dataframe with the new data.
pd.DataFrame.from_dict(my_home_city).unstack().rename('my_home_city').to_frame().T
Output:
far near
bike car bike car
my_home_city 0 1 1 0
Explained:
pd.DataFrame.from_dict(my_home_city)
far near
bike 0 1
car 1 0
Now, let's unstack to create the multiindex and get that new data into the structure of the original dataframe.
pd.DataFrame.from_dict(my_home_city).unstack()
far bike 0
car 1
near bike 1
car 0
dtype: int64
We use rename to give that series a name, which becomes the index label of that dataframe row when appended to the original dataframe.
far bike 0
car 1
near bike 1
car 0
Name: my_home_city, dtype: int64
Now, if you converted that series to a frame and transposed it, it would look very much like a new row. However, there is no need to do this: pandas does intrinsic data alignment, so appending this series to the dataframe will auto-align and add the new record.
dataframe.append(pd.DataFrame.from_dict(my_home_city).unstack().rename('my_home_city'))
distance near far
vehicle bike car bike car
city NaN NaN NaN NaN
my_home_city 1 0 0 1
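Note that DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on newer versions the same idea can be written with pd.concat instead (a sketch using the dataframe and my_home_city from above):
row = pd.DataFrame.from_dict(my_home_city).unstack().rename('my_home_city')
dataframe = pd.concat([dataframe, row.to_frame().T])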

I don't think you even need to initialise an empty dataframe. With your my_home_city dict, I can get your desired output with unstack and a transpose:
pd.DataFrame(my_home_city).unstack().to_frame().T
far near
bike car bike car
0 0 1 1 0

Initialize your empty dataframe using MultiIndex.from_product.
distances = ['near', 'far']
vehicles = ['bike', 'car']
df = pd.DataFrame([], columns=pd.MultiIndex.from_product([distances, vehicles]),
                  index=pd.Index([], name='city'))
Your dictionary results in a square matrix (distance by vehicle), so unstack it (which results in a Series), then convert it into a dataframe row by calling to_frame with the relevant city name and transposing the column into a row.
>>> df.append(pd.DataFrame(my_home_city).unstack().to_frame('my_home_city').T)
far near
bike car bike car
city
my_home_city 0 1 1 0

Just to add to all of the answers, this is another (maybe not too different) simple example, presented in a more reproducible way:
import itertools as it
from IPython.display import display # this is just for displaying output purpose
import numpy as np
import pandas as pd
col_1, col_2 = ['A', 'B'], ['C', 'D']
arr_size = len(col_2)
col = pd.MultiIndex.from_product([col_1, col_2])
tmp_df = pd.DataFrame(columns=col)
display(tmp_df)
for s in range(3):  # number of rows to add to tmp_df
    tmp_dict = {x: [np.random.random_sample(1)[0] for i in range(arr_size)] for x in range(arr_size)}
    tmp_ser = pd.Series(it.chain.from_iterable([tmp_dict[x] for x in tmp_dict]), index=col)
    # display(tmp_dict, tmp_ser)
    tmp_df = tmp_df.append(tmp_ser[tmp_df.columns], ignore_index=True)
display(tmp_df)
Some things to note about the above:
The number of items to add should always match len(col_1)*len(col_2), that is, the product of the lengths of the lists your multi-index is made from.
list(it.chain.from_iterable([[2, 3], [4, 5]])) simply flattens to [2, 3, 4, 5]

Try this workaround:
append to a dict,
then convert it to a pandas dataframe,
and at the very last step select the desired columns to create the multi-index with set_index().
d = dict()
for g in predictor_types:
    for col in predictor_types[g]:
        tot = len(ames) - ames[col].count()
        if tot:
            d.setdefault('type', []).append(g)
            d.setdefault('predictor', []).append(col)
            d.setdefault('missing', []).append(tot)
pd.DataFrame(d).set_index(['type', 'predictor']).style.bar(color='DodgerBlue')
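The snippet above uses the answerer's own predictor_types dict and ames dataframe; a minimal self-contained sketch of the same pattern with made-up data could look like this:
import numpy as np
import pandas as pd

# hypothetical stand-ins for the ames dataframe and predictor_types dict above
ames = pd.DataFrame({'LotArea': [8450, np.nan, 11250],
                     'Street': ['Pave', 'Pave', None]})
predictor_types = {'numeric': ['LotArea'], 'categorical': ['Street']}

d = dict()
for g in predictor_types:
    for col in predictor_types[g]:
        tot = len(ames) - ames[col].count()  # number of missing values in this column
        if tot:
            d.setdefault('type', []).append(g)
            d.setdefault('predictor', []).append(col)
            d.setdefault('missing', []).append(tot)

print(pd.DataFrame(d).set_index(['type', 'predictor']))
#                        missing
# type        predictor
# numeric     LotArea          1
# categorical Street           1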

Related

How to find shared characteristics in pandas?

I have a dataset
I want to get to know our customers by looking at the typical shared characteristics (e.g. "Married customers in their 40s like wine"). This would correspond to the itemset {Married, 40s, Wine}.
How can I create a new dataframe called customer_data_onehot such that rows correspond to customers (as in the original dataset) and columns correspond to the categories of each of the ten categorical attributes in the data? The new dataframe should only contain boolean values (True/False or 0/1) such that the value in row i and column j is True (or 1) if and only if the attribute value corresponding to column j holds for the customer corresponding to row i. Display the dataframe.
I have this hint: "Hint: For example, for the attribute "Education" there are 5 possible categories: 'Graduation', 'PhD', 'Master', 'Basic', '2n Cycle'. Therefore, the new dataframe must contain one column for each of those attribute values." But I don't understand how I can achieve this.
Can someone guide me here to achieve the correct solution?
I have this code which imports the csv file and selects 90% of the data from the original dataset.
import pandas as pd
pre_process = pd.read_csv('customer_data.csv')
pre_process = pre_process.sample(frac=0.9, random_state=413808).to_csv('customer_data_2.csv',
                                                                       index=False)
Use get_dummies:
Set up an MRE (minimal reproducible example):
data = {'Customer': ['A', 'B', 'C'],
        'Marital_Status': ['Together', 'Married', 'Single'],
        'Age_Group': ['40s', '60s', '20s']}
df = pd.DataFrame(data)
print(df)
# Output
Customer Marital_Status Age_Group
0 A Together 40s
1 B Married 60s
2 C Single 20s
out = pd.get_dummies(df.set_index('Customer')).reset_index()
print(out)
# Output
Customer Marital_Status_Married Marital_Status_Single Marital_Status_Together Age_Group_20s Age_Group_40s Age_Group_60s
0 A 0 0 1 0 1 0
1 B 1 0 0 0 0 1
2 C 0 1 0 1 0 0
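Since the question asks for boolean columns for each of the ten categorical attributes, the same call can be restricted to an explicit column list and a bool dtype; a sketch (the column list here is an assumption, not the real attribute names):
# df from the MRE above
categorical_cols = ['Marital_Status', 'Age_Group']  # replace with the ten categorical attribute names
customer_data_onehot = pd.get_dummies(df.set_index('Customer'), columns=categorical_cols, dtype=bool).reset_index()
print(customer_data_onehot.dtypes)  # the dummy columns come out as bool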

Pandas: add number of unique values to other dataset (as shown in picture):

I need to add the number of unique values in column C (right table) to the related row in the left table based on the values in common column A (as shown in the picture):
Thank you in advance.
Groupby column A in the second dataset and calculate the count of unique values in column C. Merge it with the first dataset on column A. Rename column C to C-count if needed:
>>> count_df = df2.groupby('A', as_index=False).C.nunique()
>>> output = pd.merge(df1, count_df, on='A')
>>> output.rename(columns={'C':'C-count'}, inplace=True)
>>> output
A B C-count
0 2 22 3
1 3 23 2
2 5 21 1
3 1 24 1
4 6 21 1
Use DataFrameGroupBy.nunique with Series.map for a new column in df1:
df1['C-count'] = df1['A'].map(df2.groupby('A')['C'].nunique())
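Since the tables in the question are only shown as a picture, here is a small sketch with made-up df1/df2 data to show both approaches end to end (the values are assumptions, not the asker's actual data):
import pandas as pd

df1 = pd.DataFrame({'A': [2, 3, 5], 'B': [22, 23, 21]})
df2 = pd.DataFrame({'A': [2, 2, 2, 3, 3, 5],
                    'C': ['x', 'y', 'z', 'x', 'y', 'x']})

# approach 1: groupby + nunique, then merge
count_df = df2.groupby('A', as_index=False).C.nunique()
merged = pd.merge(df1, count_df, on='A').rename(columns={'C': 'C-count'})

# approach 2: map a groupby-nunique Series onto column A
df1['C-count'] = df1['A'].map(df2.groupby('A')['C'].nunique())

# both give C-count = 3 for A=2, 2 for A=3 and 1 for A=5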
This may not be the most efficient way of doing it, so be careful if your dataframes are very big.
Define the following function:
def c_value(a_value, right_table):
    c_ids = []
    for index, row in right_table.iterrows():
        if row['A'] == a_value:
            if row['C'] not in c_ids:
                c_ids.append(row['C'])
    return len(c_ids)
For this function I'm assuming that right_table is a pandas.DataFrame.
Now, do the following to build the new column (assuming that the left table is also a pandas.DataFrame):
new_column = []
for index, row in left_table.iterrows():
    new_column.append(c_value(row['A'], right_table))
left_table["C-count"] = new_column
After this, the left_table DataFrame should be the one desired (as far as I understand what you need).

What is the pythonic way to do a conditional count across pandas dataframe rows with apply?

I'm trying to do a conditional count across records in a pandas dataframe. I'm new at Python and have a working solution using a for loop, but running this on a large dataframe with ~200k rows takes a long time and I believe there is a better way to do this by defining a function and using apply, but I'm having trouble figuring it out.
Here's a simple example.
Create a pandas dataframe with two columns:
import pandas as pd
data = {'color': ['blue', 'green', 'yellow', 'blue', 'green', 'yellow', 'orange', 'purple', 'red', 'red'],
        'weight': [4, 5, 6, 4, 1, 3, 9, 8, 4, 1]}
df = pd.DataFrame(data)
# for each row, count the number of other rows with the same color and a lesser weight
counts = []
for i in df.index:
    c = df.loc[i, 'color']
    w = df.loc[i, 'weight']
    ct = len(df.loc[(df['color'] == c) & (df['weight'] < w)])
    counts.append(ct)
df['counts, same color & less weight'] = counts
For each record, the 'counts, same color & less weight' column is intended to get a count of the other records in the df with the same color and a lesser weight. For example, the result for row 0 (blue, 4) is zero because no other records with color=='blue' have lesser weight. The result for row 1 (green, 5) is 1 because row 4 is also color=='green' but weight==1.
How do I define a function that can be applied to the dataframe to achieve the same?
I'm familiar with apply, for example to square the weight column I'd use:
df['weight squared'] = df['weight'].apply(lambda x: x**2)
... but I'm unclear how to use apply to do a conditional calculation that refers to the entire df.
Thanks in advance for any help.
We can do this with transform and a groupby min:
df.weight.gt(df.groupby('color').weight.transform('min')).astype(int)
0 0
1 1
2 1
3 0
4 0
5 0
6 0
7 0
8 1
9 0
Name: weight, dtype: int64
#df['c...]=df.weight.gt(df.groupby('color').weight.transform('min')).astype(int)
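Note that comparing against transform('min') gives a 0/1 indicator (is this row strictly heavier than its group's minimum?), which matches the expected output here because each color has at most one lighter row. If an actual count of lighter same-color rows is needed, a groupby rank can do it; a sketch:
# rank(method='min') equals 1 + the number of strictly smaller weights within the color group,
# so subtracting 1 gives the count of other rows with the same color and a lesser weight
df['counts, same color & less weight'] = (
    df.groupby('color')['weight'].rank(method='min').sub(1).astype(int)
)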

Convert a list of stringified lists into a dataframe whilst maintaining index

I have the following data frame coming from an API source. I'm trying to wrangle the data without massively changing my original dataframe (essentially I don't want to do a cartesian product).
data = ["[['Key','Metric','Value'],['foo','bar','4'],['foo2','bar2','55.21']]",
        "[['Key','Metric','Value'],['foo','bar','5']]",
        "[['Key','Metric','Value'],['foo','bar','6'],['foo1','bar1',''],['foo2','bar2','57.75']]"]
df = pd.DataFrame({'id': [0, 1, 2], 'arr': data})
print(df)
id arr
0 0 [['Key','Metric','Value'],['foo','bar','4'],['...
1 1 [['Key','Metric','Value'],['foo','bar','5']]
2 2 [['Key','Metric','Value'],['foo','bar','6'],['...
The ['Key','Metric','Value'] list tells the order of the fields within each nested array. What I'm trying to do is arrange it in a dictionary fashion of {key: value}, where the key is the Key and Metric fields joined and the value is the last (-1 index) element of the nested list.
The source data is coming via Excel and the MS Graph API; I don't envisage that it will change, but it may, so I'm trying to come up with a dynamic solution.
my target dataframe is :
target_df = pd.DataFrame({'id': [0, 1, 2],
                          'foo_bar': [4, 5, 6],
                          'foo1_bar1': [np.nan, np.nan, ''],
                          'foo2_bar2': [55.21, np.nan, 57.75]})
print(target_df)
id foo_bar foo1_bar1 foo2_bar2
0 0 4 NaN 55.21
1 1 5 NaN NaN
2 2 6 57.75
My own attempts have been to use literal_eval from the ast library to get the first list, which will always be the Key, Metric & Value columns - there may in future be a Key, Metric, Metric2, Value field - hence my desire to keep things dynamic.
There will always be a single Key & Value field.
My attempt:
from ast import literal_eval
literal_eval(df['arr'][0])[0]
#['Key', 'Metric', 'Value']
With this I replaced the list characters and split on ',', then converted the result to a dataframe:
df['arr'].str.replace('\[|\]','').str.split(',',expand=True)
However, after this I haven't made much clear headway, and I'm wondering if I'm going about this the wrong way?
Try:
df2=df["arr"].map(eval).apply(lambda x: pd.Series({f"{el[0]}_{el[1]}": el[2] for el in x[1:]}))
df2["id"]=df["id"]
Output:
foo_bar foo2_bar2 foo1_bar1 id
0 4 55.21 NaN 0
1 5 NaN NaN 1
2 6 57.75 2
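Since the strings come from an external API, ast.literal_eval is a safer drop-in for eval here (same result, but it only parses Python literals):
from ast import literal_eval
df2 = df["arr"].map(literal_eval).apply(lambda x: pd.Series({f"{el[0]}_{el[1]}": el[2] for el in x[1:]}))
df2["id"] = df["id"]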
IIUC, you can loop over each row and use literal_eval, create dataframes, set_index on the first two columns and transpose. Then concat, rename the columns, and create the column id:
from ast import literal_eval
df_target = pd.concat([pd.DataFrame.from_records(literal_eval(x)).drop(0).set_index([0, 1]).T
                       for x in df.arr.to_numpy()],
                      ignore_index=True,
                      keys=df.id)  # to keep the ids
# rename the columns as wanted
df_target.columns = ['{}_{}'.format(*col) for col in df_target.columns]
# add the ids as a column
df_target = df_target.reset_index().rename(columns={'index': 'id'})
print(df_target)
id foo_bar foo1_bar1 foo2_bar2
0 0 4 NaN 55.21
1 1 5 NaN NaN
2 2 6 57.75
I'm still not entirely sure I understand every aspect of the question, but here's what I have so far.
import ast
import pandas as pd
data = ["[['Key','Metric','Value'],['foo','bar','4'],['foo2','bar2','55.21']]",
        "[['Key','Metric','Value'],['foo','bar','5']]",
        "[['Key','Metric','Value'],['foo','bar','6'],['foo1','bar1',''],['foo2','bar2','57.75']]"]
nested_lists = [ast.literal_eval(elem)[1:] for elem in data]
row_dicts = [{'_'.join([key, metric]): value for key, metric, value in curr_list} for curr_list in nested_lists]
df = pd.DataFrame(data=row_dicts)
print(df)
Output:
foo_bar foo2_bar2 foo1_bar1
0 4 55.21 NaN
1 5 NaN NaN
2 6 57.75
nested_lists and row_dicts are list comprehensions since that makes debugging easier, but you can of course turn them into generator expressions.
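One follow-up worth noting (an assumption about the desired dtypes, since the target frame mixes numbers, NaN and an empty string): the parsed values come out as strings such as '4' and '55.21', so if numeric columns are wanted, pd.to_numeric can be applied afterwards:
# convert the value columns to numbers; errors='coerce' turns '' into NaN
df = df.apply(pd.to_numeric, errors='coerce')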

Python: Removing Rows on Count condition

I have a problem filtering a pandas dataframe.
city
NYC
NYC
NYC
NYC
SYD
SYD
SEL
SEL
...
df.city.value_counts()
I would like to remove rows for cities that have a count frequency of less than 4, which would be SYD and SEL in this instance.
What would be the way to do so without manually dropping them city by city?
Here you go, with filter:
df.groupby('city').filter(lambda x : len(x)>3)
Out[1743]:
city
0 NYC
1 NYC
2 NYC
3 NYC
Solution two, with transform:
sub_df = df[df.groupby('city').city.transform('count')>3].copy()
# add .copy() to avoid a copy warning when you later need to modify the sub df
This is one way using pd.Series.value_counts.
counts = df['city'].value_counts()
res = df[~df['city'].isin(counts[counts < 4].index)]
counts is a pd.Series object. counts < 4 returns a Boolean series. We filter the counts series by the Boolean counts < 4 series (that's what the square brackets achieve). We then take the index of the resultant series to find the cities with fewer than 4 counts. ~ is the negation operator.
Remember a series is a mapping between index and value. The index of a series does not necessarily contain unique values, but this is guaranteed with the output of value_counts.
I think you're looking for value_counts()
# Import the great and powerful pandas
import pandas as pd
# Create some example data
df = pd.DataFrame({
    'city': ['NYC', 'NYC', 'SYD', 'NYC', 'SEL', 'NYC', 'NYC']
})
# Get the count of each value
value_counts = df['city'].value_counts()
# Select the values where the count is 3 or less (adjust the threshold as you like)
to_remove = value_counts[value_counts <= 3].index
# Keep rows where the city column is not in to_remove
df = df[~df.city.isin(to_remove)]
Another solution:
threshold=3
df['Count'] = df.groupby('City')['City'].transform(pd.Series.value_counts)
df=df[df['Count']>=threshold]
df.drop(['Count'], axis = 1, inplace = True)
print(df)
City
0 NYC
1 NYC
2 NYC
3 NYC
