Creating Pandas DataFrame Column with Dashes - python

Background -
I am aware of the various ways to add columns to a pandas DataFrame, such as assign(), insert(), concat(), etc. However, I haven't been able to ascertain how I can add a column (similar to insert() in that I specify the position or index) and have that column populated with dash values.
Expected DataFrame structure - Per the following, I am looking to add colC and populate it with - values spanning the same length as the columns that do have data.
print(df)
colA colB colC
0 True 1 -
1 False 2 -
2 False 3 -
In the above example, as the DataFrame has 3 rows, colC is populated with - values for all 3 rows.

It's as simple as it gets.
>>> df
colA colB
0 True 1
1 False 2
2 False 3
>>> df['colC'] = '-'
>>> df
colA colB colC
0 True 1 -
1 False 2 -
2 False 3 -
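If you also need to control the column's position (the insert()-style behaviour the question mentions), DataFrame.insert broadcasts a scalar the same way; for example, with the position 1 chosen arbitrarily here:
df.insert(1, 'colC', '-')  # places colC between colA and colB, with '-' in every row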

Related

Creating a new map from existing maps in python

This question might be common but I am new to python and would like to learn more from the community. I have 2 map files which have data mapping like this:
map1 : A --> B
map2 : B --> C,D,E
I want to create a new map file which will be A --> C
What is the most efficient way to achieve this in python? A generic approach would be very helpful, as I need to apply the same logic on different files and different columns.
Example:
Map1:
1,100
2,453
3,200
Map2:
100,25,30,
200,300,,
250,190,20,1
My map3 should be:
1,25
2,0
3,300
As 453 is not present in map2, our map3 contains value 0 for key 2.
First create DataFrames:
df1 = pd.read_csv(Map1, header=None)
df2 = pd.read_csv(Map2, header=None)
Then use Series.map on the second column, mapping with a Series built from df2 with its first column set as the index; finally, replace the missing values of unmatched keys with 0:
df1[1] = df1[1].map(df2.set_index(0)[1]).fillna(0, downcast='int')
print (df1)
0 1
0 1 25
1 2 0
2 3 300
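For a self-contained run of the single-column case, here is a sketch using in-memory stand-ins for the Map1/Map2 files (the StringIO setup and the map3.csv file name are my assumptions, not from the question):
from io import StringIO
import pandas as pd

# hypothetical in-memory versions of the sample Map1/Map2 files
map1 = StringIO("1,100\n2,453\n3,200")
map2 = StringIO("100,25,30,\n200,300,,\n250,190,20,1")

df1 = pd.read_csv(map1, header=None)
df2 = pd.read_csv(map2, header=None)
df1[1] = df1[1].map(df2.set_index(0)[1]).fillna(0, downcast='int')
df1.to_csv('map3.csv', header=False, index=False)  # writes 1,25 / 2,0 / 3,300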
EDIT: to map multiple columns, use a left join, remove the all-missing columns with DataFrame.dropna, drop the join columns b and c, and finally replace the missing values:
df1.columns=['a','b']
df2.columns=['c','d','e','f']
df = (df1.merge(df2, how='left', left_on='b', right_on='c')
         .dropna(how='all', axis=1)
         .drop(['b','c'], axis=1)
         .fillna(0)
         .convert_dtypes())
print (df)
a d e
0 1 25 30
1 2 0 0
2 3 300 0

Slice pandas dataframe using .loc with both index values and multiple column values, then set values

I have a dataframe, and I would like to select a subset of the dataframe using both index and column values. I can do both of these separately, but cannot figure out the syntax to do them simultaneously. Example:
import pandas as pd
# sample dataframe:
cid=[1,2,3,4,5,6,17,18,91,104]
c1=[1,2,3,1,2,3,3,4,1,3]
c2=[0,0,0,0,1,1,1,1,0,1]
df=pd.DataFrame(list(zip(c1,c2)),columns=['col1','col2'],index=cid)
df
Returns:
col1 col2
1 1 0
2 2 0
3 3 0
4 1 0
5 2 1
6 3 1
17 3 1
18 4 1
91 1 0
104 3 1
Using .loc, I can collect by index:
rel_index=[5,6,17]
relc1=[2,3]
relc2=[1]
df.loc[rel_index]
Returns:
col1 col2
5 2 1
6 3 1
17 3 1
Or I can select by column values:
df.loc[df['col1'].isin(relc1) & df['col2'].isin(relc2)]
Returning:
col1 col2
5 2 1
6 3 1
17 3 1
104 3 1
However, I cannot do both. When I try the following:
df.loc[rel_index,df['col1'].isin(relc1) & df['col2'].isin(relc2)]
Returns:
IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match)
I have tried a few other variations (such as "&" instead of the ","), but these return the same or other errors.
Once I collect this slice, I am hoping to reassign values on the main dataframe. I imagine this will be trivial once the above is done, but I note it here in case it is not. My goal is to assign something like df2 in the following:
c3=[1,2,3]
c4=[5,6,7]
df2=pd.DataFrame(list(zip(c3,c4)),columns=['col1','col2'],index=rel_index)
to the slice referenced by index and multiple column conditions (overwriting what was in the original dataframe).
The reason for the IndexingError is that you're calling df.loc with arrays of two different sizes.
df.loc[rel_index] has a length of 3 whereas df['col1'].isin(relc1) has a length of 10.
You need the index results to also have a length of 10. If you look at the output of df['col1'].isin(relc1), it is an array of booleans.
You can achieve a similar array with the proper length by replacing df.loc[rel_index] with df.index.isin([5,6,17])
so you end up with:
df.loc[df.index.isin([5,6,17]) & df['col1'].isin(relc1) & df['col2'].isin(relc2)]
which returns:
col1 col2
5 2 1
6 3 1
17 3 1
That said, I'm not sure why your index would ever look like this. Typically when slicing by index you would use df.iloc and your index would match the 0,1,2...etc. format.
Alternatively, you could first search by value - then assign the resulting dataframe to a new variable df2
df2 = df.loc[df['col1'].isin(relc1) & df['col2'].isin(relc2)]
then df2.loc[rel_index] would work without issue.
As for your overall goal, you can simply do the following:
c3=[1,2,3]
c4=[5,6,7]
df2=pd.DataFrame(list(zip(c3,c4)),columns=['col1','col2'],index=rel_index)
df.loc[df.index.isin([5,6,17]) & df['col1'].isin(relc1) & df['col2'].isin(relc2)] = df2
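For reference, after that assignment the overwritten rows should now carry the df2 values:
print(df.loc[rel_index])
#     col1  col2
# 5      1     5
# 6      2     6
# 17     3     7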
@Rexovas explains it quite well; this is an alternative where you can compute the filters on the index before assigning. It is a bit long and involves a MultiIndex, but once you get your head around MultiIndex it should be intuitive:
(df
 # move columns into the index
 .set_index(['col1', 'col2'], append=True)
 # filter based on the index
 .loc(axis=0)[rel_index, relc1, relc2]
 # return col1 and col2 to columns
 .reset_index(level=[-2, -1])
 # assign the new values
 .assign(col1=c3, col2=c4)
)
col1 col2
5 1 5
6 2 6
17 3 7
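If, as the question asks, the new values should end up back in the original df, one option (my addition, not part of the answer above) is to capture the result and assign it back by index label:
out = (df
       .set_index(['col1', 'col2'], append=True)
       .loc(axis=0)[rel_index, relc1, relc2]
       .reset_index(level=[-2, -1])
       .assign(col1=c3, col2=c4))
# write the new col1/col2 values back into df, aligned on the index labels
df.loc[out.index, ['col1', 'col2']] = out[['col1', 'col2']]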

Create dataframe with values from other dataframe's indices and columns

I have a large dataframe df1 that looks like:
0 1 2
0 NaN 1 5
1 0.5 NaN 1
2 1.25 3 NaN
And I want to create another dataframe df2 with three columns where the values for the first two columns correspond to the df1 columns and indices, and the third column is the cell value.
So df2 would look like:
src dst cost
0 0 1 0.5
1 0 2 1.25
2 1 0 5
3 1 2 3
How can I do this?
Thanks
I'm sure there's probably a clever way to do this with pd.pivot or pd.melt but this works:
df2 = (
    # reorganize the data to be row-wise with a multi-index
    df1.stack()
    # drop missing values
    .dropna()
    # name the axes
    .rename_axis(['src', 'dst'])
    # name the values
    .to_frame('cost')
    # return src and dst to columns
    .reset_index(drop=False)
)
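For completeness, a melt-based variant (a sketch of the approach alluded to above, not code from the answer) produces the same table:
df2 = (df1.rename_axis('src')   # name the index so it survives reset_index
          .reset_index()
          .melt(id_vars='src', var_name='dst', value_name='cost')
          .dropna(subset=['cost'])
          .reset_index(drop=True))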

How to shift rows up in Pandas Dataframe based on specific column

How do I shift up all the values in a row for one specific column without affecting the order of the other columns?
For example, let's say I have the following code:
import pandas as pd
data= {'ColA':["A","B","C"],
'ColB':[0,1,2],
'ColC':["First","Second","Third"]}
df = pd.DataFrame(data)
print(df)
I would see the following output:
ColA ColB ColC
0 A 0 First
1 B 1 Second
2 C 2 Third
In my case I want to check whether ColB contains any 0s; if it does, each 0 is removed, all the values below it are pushed up, and the order of the other columns is not affected. Presumably, I would then see the following:
ColA ColB ColC
0 A 1 First
1 B 2 Second
2 C NaN Third
I can't figure out how to do this using either the drop() or shift() methods.
Thank you
Let us do a simple sorted, with a key that pushes the invalid values to the end:
invalid = 0
# stable sort: rows equal to `invalid` compare as True and move to the end
df['ColX'] = sorted(df.ColB, key=lambda x: x == invalid)
# then blank out the invalid values that ended up at the bottom
df.ColX = df.ColX.mask(df.ColX == invalid)
df
Out[351]:
ColA ColB ColC ColX
0 A 0 First 1.0
1 B 1 Second 2.0
2 C 2 Third NaN
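If you want to overwrite ColB in place rather than add a ColX column (a small variation on the above, not from the answer), the same idea works:
df['ColB'] = pd.Series(sorted(df.ColB, key=lambda x: x == invalid), index=df.index)
df['ColB'] = df['ColB'].mask(df['ColB'] == invalid)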
The way I'd do this, IIUC, is to keep the values in ColB which are not 0 and refill the column with them, padding with NaN according to the number of valid values obtained:
# keep only the non-zero values of ColB, in order
m = df.loc[~df.ColB.eq(0), 'ColB'].values
# reset the column, then write the kept values back starting from the top
df['ColB'] = float('nan')
df.loc[:m.size-1, 'ColB'] = m
print(df)
ColA ColB ColC
0 A 1.0 First
1 B 2.0 Second
2 C NaN Third
You can swap 0s for nans and then move up the rest of the values:
import numpy as np
df.ColB.replace(0, np.nan, inplace=True)
df.assign(ColB=df.ColB.shift(df.ColB.count() - len(df.ColB)))
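As a reusable sketch of the same idea (my own helper, not taken from the answers above), you can push a column's valid values to the top and pad the rest with NaN:
import numpy as np
import pandas as pd

def push_up(df, col, invalid=0):
    # keep the valid values in their original order
    valid = df.loc[df[col] != invalid, col].to_numpy()
    # rebuild the column: valid values on top, NaN padding below
    out = np.full(len(df), np.nan)
    out[:valid.size] = valid
    df[col] = out
    return df

df = pd.DataFrame({'ColA': ["A", "B", "C"],
                   'ColB': [0, 1, 2],
                   'ColC': ["First", "Second", "Third"]})
print(push_up(df, 'ColB'))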

Pandas (Python) - Update column of a dataframe from another one with conditions and different columns

I had a problem and I found a solution, but I feel it's the wrong way to do it. Maybe there is a more 'canonical' way to do it.
I already had an answer for a really similar problem, but here the two dataframes do not have the same number of rows. Sorry for the "double post", but the first one is still valid, so I think it's better to make a new one.
Problem
I have two dataframes that I would like to merge without adding extra columns and without erasing existing information. Example:
Existing dataframe (df)
A A2 B
0 1 4 0
1 2 5 1
2 2 5 1
Dataframe to merge (df2)
A A2 B
0 1 4 2
1 3 5 2
I would like to update df with df2 where columns 'A' and 'A2' correspond.
The result would be :
A A2 B
0 1 4 2 <= Update value ONLY
1 2 5 1
2 2 5 1
Here is my solution, but I think it's not a really good one.
import pandas as pd
df = pd.DataFrame([[1,4,0],[2,5,1],[2,5,1]],columns=['A','A2','B'])
df2 = pd.DataFrame([[1,4,2],[3,5,2]],columns=['A','A2','B'])
df = df.merge(df2,on=['A', 'A2'],how='left')
df['B_y'].fillna(0, inplace=True)
df['B'] = df['B_x']+df['B_y']
df = df.drop(['B_x','B_y'], axis=1)
print(df)
I tried this solution:
rows = (df[['A','A2']] == df2[['A','A2']]).all(axis=1)
df.loc[rows,'B'] = df2.loc[rows,'B']
But I get this error because of the mismatched number of rows:
ValueError: Can only compare identically-labeled DataFrame objects
Does anyone have a better way to do this?
Thanks!
I think you can use DataFrame.isin to check which rows are the same in both DataFrames. Then create NaN values with mask, fill them with combine_first, and finally cast to int:
mask = df[['A', 'A2']].isin(df2[['A', 'A2']]).all(1)
print (mask)
0 True
1 False
2 False
dtype: bool
df.B = df.B.mask(mask).combine_first(df2.B).astype(int)
print (df)
A A2 B
0 1 4 2
1 2 5 1
2 2 5 1
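Another option, sketched here as my own suggestion rather than taken from either answer, is to match rows explicitly on the A/A2 keys with a left merge (assuming the key pairs in df2 are unique) and keep the original B where nothing matched:
merged = df.merge(df2, on=['A', 'A2'], how='left', suffixes=('', '_new'))
# take df2's B where a key matched, otherwise keep df's original B
df['B'] = merged['B_new'].fillna(merged['B']).astype(int)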
With a minor tweak to the way the boolean mask gets created, you can get it to work:
cols = ['A', 'A2']
n = df2.shape[0]
# Slice df to df2's shape and compare the key columns elementwise
rows = (df[cols].values[:n] == df2[cols].values).all(1)
# rows only covers the first n rows of df, so translate it to the matching index labels
df.loc[df.index[:n][rows], 'B'] = df2.loc[rows, 'B'].values
df
