How to concat multiple dummy variables to dataframe? - python

I am stuck with a silly issue; can someone please help me by pointing out my mistake?
So, I have 5 categorical variables. I have created their dummies in individual data frames.
seasons = pd.get_dummies(bike['season'], drop_first=True) #3
weathers = pd.get_dummies(bike['weather'], drop_first=True) #3
days = pd.get_dummies(bike['weekday'], drop_first=True)# 6
months = pd.get_dummies(bike['month'], drop_first=True) # 11
years = pd.get_dummies(bike['yr'], drop_first=True) #1
#will add 24 new columns.
Now, when I try to concat them into my main df:
bike = pd.concat([bike, seasons], axis=1)
bike = pd.concat([bike, weathers], axis=1)
bike = pd.concat([bike, months], axis=1)
bike = pd.concat([bike, days], axis=1)
bike = pd.concat([bike, years], axis=1)
bike.info()
I am getting a KeyError: 0 on bike.info().
Now, upon investigating, I found it happens only if I try to concat the year df. The yr column originally indicates one of two years (2018: 0, 2019: 1). After the dummy is created, this is how it looks:
2019
0 0
1 0
2 0
3 0
4 0
Please suggest.
Thanks

First of all, do you know why you are using drop_first=True? Just checking whether this is what you want (removing the first level and keeping only k-1 categorical levels).
If you only need to keep the columns that are not processed by the get_dummies method, you do not need the concat function at all; it is enough to do bike_with_dummies = pd.get_dummies(bike, columns=['season','weather','weekday','month','yr'], drop_first=True). See example 1. If you also want to keep the original columns that were turned into dummies, I would recommend the code in example 2.
Example 1
Say you have this simple DataFrame (taken from the pandas docs):
df = pd.DataFrame({'A': ['a', 'b', 'a'], 'B': ['b', 'a', 'c'], 'C': [1, 2, 3]})
When you run
pd.get_dummies(df, columns=['C'], drop_first=True)
it will keep the original columns ("A" and "B") and convert the selected columns ("C" here) to dummies. The output will look like:
A B C_2 C_3
0 a b 0 0
1 b a 1 0
2 a c 0 1
Example 2
If you also want to keep the original columns ("C" from the example above), I would recommend the following:
cols_to_dummies = ["C"] # columns that should be turned into dummies
df_with_dummies = pd.get_dummies(df, columns=cols_to_dummies, drop_first=True)
df_with_dummies_and_original = pd.concat([df[cols_to_dummies], df_with_dummies], axis=1)
The output will look like this (note that "C" is included now):
C A B C_2 C_3
0 1 a b 0 0
1 2 b a 1 0
2 3 a c 0 1
So in your case you could run this
cols_to_dummies = ['season','weather','weekday','month','yr']
bike_with_dummies = pd.get_dummies(bike, columns=cols_to_dummies, drop_first=True)
bike_with_dummies_and_original = pd.concat([bike[cols_to_dummies], bike_with_dummies], axis=1)
This approach has the advantage that you can simply edit cols_to_dummies to change which columns are turned into dummies, without adding any extra lines of code.
Final comments: if you prefer better column naming, you can use the prefix and prefix_sep parameters, or do the renaming yourself at the end.
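For instance, a sketch of the renaming route (the prefix strings here are made up, not from the original post; get_dummies accepts a dict keyed by the encoded columns):
bike_with_dummies = pd.get_dummies(
    bike,
    columns=['season', 'weather', 'weekday', 'month', 'yr'],
    drop_first=True,
    prefix={'season': 'season', 'weather': 'weather', 'weekday': 'day',
            'month': 'month', 'yr': 'year'},  # hypothetical per-column prefixes
    prefix_sep='_',
)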
If this does not help you, please provide example DataFrame (content of bike dataframe).


dataframe group by for all columns in new dataframe

I want to create a new dataframe with the values grouped by each column header. (The dataset I'm working with was posted as an image.)
I essentially want a new dataframe which sums the occurrences of 1 and 0 for each feature (chocolate, fruity, etc.).
I tried this code with groupby and size:
chocolate = data.groupby(["chocolate"]).size()
bar = data.groupby(["bar"]).size()
hard = data.groupby(["hard"]).size()
display(chocolate, bar, hard)
but this only gives me the sum per feature.
The end result I want (also posted as an image) is a single table of the 0/1 counts per feature.
You could try the following:
res = (
    data
    .drop(columns="competitorname")
    .melt().value_counts()
    .unstack()
    .fillna(0).astype("int").T
)
Eliminate the columns that aren't relevant (I've only seen competitorname, but there could be more).
.melt the dataframe. The result has 2 columns: one with the column names, and another with the respective 0/1 values.
Now .value_counts gives you a series that essentially contains what you are looking for.
Then you just have to .unstack the first index level (column names) and transpose the dataframe.
Example:
data = pd.DataFrame({
    "competitorname": ["A", "B", "C"],
    "chocolate": [1, 0, 0], "bar": [1, 0, 1], "hard": [1, 1, 1]
})
competitorname chocolate bar hard
0 A 1 1 1
1 B 0 0 1
2 C 0 1 1
Result:
variable bar chocolate hard
value
0 1 2 0
1 2 1 3
Alternative with .pivot_table:
res = (
    data
    .drop(columns="competitorname")
    .melt().value_counts().to_frame()
    .pivot_table(index="value", columns="variable", fill_value=0)
    .droplevel(0, axis=1)
)
PS: Please don't post images; provide a little example (like here) that encapsulates your problem.

Replace values of a DataFrame with values of a dictionary

I have two DataFrames with many thousands of rows. The columns all have the dtype string. A snippet looks like this:
DF1:
ID SUCCESSOR
0 0001234 3620031
1 0001235 6640002
2 0002456 8620003
3 0013456 8640004
4 1711999 1283456 <- see DF2
... ... ...
409813 9162467 <NA>
409814 9212466 <NA>
409815 9312466 6975A0C
409816 9452463 <NA>
409817 9591227 <NA>
DF2:
ID
2 1111682
3 1123704
14 1567828
15 1711999 <- that value should be replaced with '1283456'
16 1711834
... ...
845775 970879B
845776 975879B
845777 9275A0A
845778 9285A05
845779 9295A05
Don't be surprised that the second DataFrame is missing a few indexes; I filtered them out beforehand because they were not relevant. The NaNs are also not relevant, because my algorithm bypasses them.
I now want to replace the IDs in the second DataFrame with the successors in the first DataFrame that have the same ID.
The output should be:
ID
2 1111682
3 1123704
14 1567828
15 1283456 <- now replaced
16 1711834
... ...
845775 970879B
845776 975879B
845777 9275A0A
845778 9285A05
845779 9295A05
In order not to blow up the example, I have replaced only one value. In reality there are several replacements.
Two approaches:
In my first approach I iterated through DF1 and used the replace() function; however, this approach takes ages, so it's useless.
In my second approach, I first convert DF1 to a dictionary and then apply the map() function. I do this as described by JohnE here: Remap values in pandas column with a dict.
In a small example it works wonderfully:
df = pd.DataFrame({'col1': {1: 1, 2: 2, 3: 4, 4: 1}, 'col2': {1: 2, 2: np.nan}})
di = {1: "A", 2: "B"}
col1 col2
1 1 2.0
2 2 NaN
3 4 NaN
4 1 NaN
df['col1'].map(di).fillna(df['col1'])
1 A
2 B
3 4
4 A
My function to map DF1 and DF2 looks like this:
def mapping(df1, df2):
    di = dict(zip(df1.ID, df1.SUCCESSOR))  # create the dict
    changes = 1
    while changes > 0:
        changes = 0
        df_old = df2
        print(df2)  # check how df2 looks before mapping
        df2['ID'] = df2['ID'].map(di).fillna(df2['ID'])
        print(df2)  # check how df2 looks after mapping; unfortunately no changes, so the error must be in the line above
        if df_old.equals(df2) == False:
            changes = 1
    return df2
So obviously the error must be in this line:
df2['ID'] = df2['ID'].map(di).fillna(df2['ID'])
However, I just can't figure out why this doesn't work. What is not working here and why?
If anyone can help me, I am indebted to them forever!!!
Best regards,
Alfonso
EDIT:
I found the error, and I feel like an idiot.
My solution worked, but the line df_old = df2 prevented the loop from continuing. MANY THANKS anyway, and sorry if I wasted your time!
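For reference, a minimal sketch of the fix described above: df_old = df2 only binds a second name to the same object, so the equality check can never see a difference. Taking a real copy restores the intended behavior:
def mapping(df1, df2):
    di = dict(zip(df1.ID, df1.SUCCESSOR))
    changes = 1
    while changes > 0:
        changes = 0
        df_old = df2.copy()  # a snapshot, not an alias of df2
        df2['ID'] = df2['ID'].map(di).fillna(df2['ID'])
        if not df_old.equals(df2):
            changes = 1
    return df2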
Here is a one-liner that creates the replacement dict by filtering the dataframes:
df2['ID'] = df2['ID'].replace(dict(zip(
    df2[df2['ID'].isin(df1['ID'])].sort_values(by=['ID']).reset_index()['ID'],
    df1.loc[df1['ID'].isin(df2['ID'])].sort_values(by=['ID']).reset_index()['SUCCESSOR']
)))
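A more readable sketch of the same idea (assuming IDs are unique in DF1): build the ID-to-successor dict once, drop entries without a successor, and let replace ignore IDs that have no mapping:
valid = df1.dropna(subset=['SUCCESSOR'])  # skip IDs with no successor
successor_map = dict(zip(valid['ID'], valid['SUCCESSOR']))
df2['ID'] = df2['ID'].replace(successor_map)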

Excel Copy Paste Way in Python

I have a data frame as below
df = pd.DataFrame([[3,2,1],[4,5,6],[10,20,30]], columns = ['A','B','C'])
A B C
0 3 2 1
1 4 5 6
2 10 20 30
Is there any way in Python to mimic the copy-and-paste function in Excel? For example, I want to copy row 0, columns A and B, and paste them into row 0, columns B and C, so that it becomes
A B C
0 3 3 2
1 4 5 6
2 10 20 30
In a small data frame, I can use:
df.loc[0,'C'] = df.loc[0,'B']
df.loc[0,'B'] = df.loc[0,'A']
But my original data frame is sizable, and I prefer not to do this element by element.
I was also trying to do:
df.loc[0,['A','B']] = df.loc[0,['B','C']]
But my data in row 0 column A becomes NaN.
So is there a way in Python to do something like copy and paste in Excel (simply select a range of data, copy it, and paste it on top of other existing data)? Thanks
anky_91's answer
df.loc[0, ['B', 'C']] = df.loc[0, ['A', 'B']].to_numpy()
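The .to_numpy() call matters because pandas aligns on labels during assignment: assigning a Series labeled A/B to the slots B/C would align on the labels and produce NaNs instead of a paste. Stripping the labels makes the assignment positional, like Excel. A minimal sketch with the question's data:
import pandas as pd

df = pd.DataFrame([[3, 2, 1], [4, 5, 6], [10, 20, 30]], columns=['A', 'B', 'C'])
# .to_numpy() drops the labels, so the values land positionally in B and C
df.loc[0, ['B', 'C']] = df.loc[0, ['A', 'B']].to_numpy()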
shift
There are many ways you can use shift. This is just one.
df.update(df.shift(axis=1)[['B', 'C']])
For reasons that I'm not happy about, you can provide a fill_value to shift to preserve the integer dtype:
df.update(df.shift(axis=1, fill_value=0)[['B', 'C']])
This mostly feels like a bad idea, but if it's what you really want to do, you can use .iloc to address columns by number and just shift them:
In [56]: df.iloc[0, 1:] = df.iloc[0, :-1].values
In [57]: df
Out[57]:
A B C
0 3 3 2
1 4 5 6
2 10 20 30

Comparing specific columns in 2 Excel Files with Pandas: How to select specific/ range of columns? [duplicate]

How do I select columns a and b from df, and save them into a new dataframe df1?
index a b c
1 2 3 4
2 3 4 5
Unsuccessful attempts:
df1 = df['a':'b']
df1 = df.ix[:, 'a':'b']
The column names (which are strings) cannot be sliced in the manner you tried.
Here you have a couple of options. If you know from context which variables you want to slice out, you can just return a view of only those columns by passing a list into the __getitem__ syntax (the []'s).
df1 = df[['a', 'b']]
Alternatively, if it matters to index them numerically and not by their name (say your code should automatically do this without knowing the names of the first two columns) then you can do this instead:
df1 = df.iloc[:, 0:2] # Remember that Python does not slice inclusive of the ending index.
Additionally, you should familiarize yourself with the difference between a view into a Pandas object and a copy of that object. The first of the above methods returns a new in-memory copy of the desired sub-object (the desired slices).
Sometimes, however, indexing in Pandas does not copy, and instead gives you a new variable that refers to the same chunk of memory as the slice of the original object. This can happen with the second way of indexing, so you can modify it with the .copy() method to get a regular copy. When this happens, changing what you think is the sliced object can sometimes alter the original object. It is always good to be on the lookout for this.
df1 = df.iloc[0, 0:2].copy() # To avoid the case where changing df1 also changes df
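A small sketch of the pitfall (exact behavior varies by pandas version; with copy-on-write enabled in newer pandas, writes through such selections no longer silently modify the original):
sub = df.iloc[:, 0:2]           # may be a view into df's data
safe = df.iloc[:, 0:2].copy()   # always an independent copy
safe.iloc[0, 0] = 99            # guaranteed not to touch df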
To use iloc, you need to know the column positions (indices). As the column positions may change, instead of hard-coding indices you can use iloc together with the get_loc method of the dataframe's columns attribute to obtain them:
{df.columns.get_loc(c): c for c in df.columns}
Now you can use this dictionary to access columns through names and using iloc.
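For instance, a sketch with the question's frame: invert the mapping to go from a name to a position, then hand the positions to iloc.
pos = {c: df.columns.get_loc(c) for c in df.columns}  # name -> position
df1 = df.iloc[:, [pos['a'], pos['b']]]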
As of version 0.11.0, columns can be sliced in the manner you tried using the .loc indexer:
df.loc[:, 'C':'E']
is equivalent to
df[['C', 'D', 'E']] # or df.loc[:, ['C', 'D', 'E']]
and returns columns C through E.
A demo on a randomly generated DataFrame:
import pandas as pd
import numpy as np

np.random.seed(5)
df = pd.DataFrame(np.random.randint(100, size=(100, 6)),
                  columns=list('ABCDEF'),
                  index=['R{}'.format(i) for i in range(100)])
df.head()
Out:
A B C D E F
R0 99 78 61 16 73 8
R1 62 27 30 80 7 76
R2 15 53 80 27 44 77
R3 75 65 47 30 84 86
R4 18 9 41 62 1 82
To get the columns from C to E (note that unlike integer slicing, E is included in the columns):
df.loc[:, 'C':'E']
Out:
C D E
R0 61 16 73
R1 30 80 7
R2 80 27 44
R3 47 30 84
R4 41 62 1
R5 5 58 0
...
The same works for selecting rows based on labels. Get the rows R6 to R10 from those columns:
df.loc['R6':'R10', 'C':'E']
Out:
C D E
R6 51 27 31
R7 83 19 18
R8 11 67 65
R9 78 27 29
R10 7 16 94
.loc also accepts a Boolean array so you can select the columns whose corresponding entry in the array is True. For example, df.columns.isin(list('BCD')) returns array([False, True, True, True, False, False], dtype=bool) - True if the column name is in the list ['B', 'C', 'D']; False, otherwise.
df.loc[:, df.columns.isin(list('BCD'))]
Out:
B C D
R0 78 61 16
R1 27 30 80
R2 53 80 27
R3 65 47 30
R4 9 41 62
R5 78 5 58
...
Assuming your column names (df.columns) are ['index', 'a', 'b', 'c'], the data you want is in the third and fourth columns. If you don't know their names when your script runs, you can do this:
newdf = df[df.columns[2:4]]  # Remember, Python is zero-offset! The "third" entry is at slot two.
As EMS points out in his answer, df.ix slices columns a bit more concisely, but the .columns slicing interface might be more natural, because it uses the vanilla one-dimensional Python list indexing/slicing syntax.
Warning: 'index' is a bad name for a DataFrame column. That same label is also used for the real df.index attribute, an Index array. So your column is returned by df['index'] and the real DataFrame index is returned by df.index. An Index is a special kind of Series optimized for lookup of its elements' values. For df.index it's for looking up rows by their label. That df.columns attribute is also a pd.Index array, for looking up columns by their labels.
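A two-line sketch of the distinction (assuming a column literally named 'index' exists):
df['index']  # the *column* labeled 'index'
df.index     # the DataFrame's actual row Index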
In the latest versions of Pandas there is an easy way to do exactly this: column names (which are strings) can be used to select columns directly.
columns = ['b', 'c']
df1 = pd.DataFrame(df, columns=columns)
In [39]: df
Out[39]:
index a b c
0 1 2 3 4
1 2 3 4 5
In [40]: df1 = df[['b', 'c']]
In [41]: df1
Out[41]:
b c
0 3 4
1 4 5
With Pandas:
With column names:
dataframe[['column1', 'column2']]
To select with iloc, using specific column index numbers:
dataframe.iloc[:, [1, 2]]
With loc, column names can be used like this:
dataframe.loc[:, ['column1', 'column2']]
You can use the pandas.DataFrame.filter method to either filter or reorder columns like this:
df1 = df.filter(['a', 'b'])
This is also very useful when you are chaining methods.
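For example, a sketch of such a chain (the dropna step is just an illustration, not from the original answer):
df1 = (
    df
    .filter(['a', 'b'])  # keep only these columns
    .dropna()            # then keep only complete rows
)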
You could provide a list of columns to be dropped and return back the DataFrame with only the columns needed using the drop() function on a Pandas DataFrame.
Just saying
colsToDrop = ['a']
df.drop(colsToDrop, axis=1)
would return a DataFrame with just the columns b and c.
The drop method is documented here.
I found this method to be very useful:
# iloc[row slicing, column slicing]
surveys_df.iloc[0:3, 1:4]
More details can be found here.
Starting with 0.21.0, using .loc or [] with a list with one or more missing labels is deprecated in favor of .reindex. So, the answer to your question is:
df1 = df.reindex(columns=['b','c'])
In prior versions, using .loc[list-of-labels] would work as long as at least one of the keys was found (otherwise it would raise a KeyError). This behavior is deprecated and now shows a warning message. The recommended alternative is to use .reindex().
Read more at Indexing and Selecting Data.
You can use Pandas.
I create the DataFrame:
import pandas as pd
df = pd.DataFrame([[1, 2, 5], [5, 4, 5], [7, 7, 8], [7, 6, 9]],
                  index=['Jane', 'Peter', 'Alex', 'Ann'],
                  columns=['Test_1', 'Test_2', 'Test_3'])
The DataFrame:
Test_1 Test_2 Test_3
Jane 1 2 5
Peter 5 4 5
Alex 7 7 8
Ann 7 6 9
To select one or more columns by name:
df[['Test_1', 'Test_3']]
Test_1 Test_3
Jane 1 5
Peter 5 5
Alex 7 8
Ann 7 9
You can also use:
df.Test_2
And you get column Test_2:
Jane 2
Peter 4
Alex 7
Ann 6
You can also select columns and rows using .loc[]. This is called "slicing". Notice that I take from column Test_1 to Test_3:
df.loc[:, 'Test_1':'Test_3']
The "Slice" is:
Test_1 Test_2 Test_3
Jane 1 2 5
Peter 5 4 5
Alex 7 7 8
Ann 7 6 9
And if you just want Peter and Ann from columns Test_1 and Test_3:
df.loc[['Peter', 'Ann'], ['Test_1', 'Test_3']]
You get:
Test_1 Test_3
Peter 5 5
Ann 7 9
If you want to get one element by row index and column name, you can do it just like df['b'][0]. It is as simple as you can imagine.
Or you can use df.ix[0,'b'] - mixed usage of index and label.
Note: Since v0.20, ix has been deprecated in favour of loc / iloc.
df[['a', 'b']] # Select all rows of columns 'a' and 'b'
df.loc[0:10, ['a', 'b']] # Index 0 to 10 select column 'a' and 'b'
df.loc[0:10, 'a':'b'] # Index 0 to 10 select column 'a' to 'b'
df.iloc[0:10, 3:5] # Index 0 to 10 and column 3 to 5
df.iloc[3, 3:5] # Index 3 of column 3 to 5
Try to use pandas.DataFrame.get (see the documentation):
import pandas as pd
import numpy as np
dates = pd.date_range('20200102', periods=6)
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))
df.get(['A', 'C'])
One different and easy approach: iterating over rows.
Using iterrows:
df1 = pd.DataFrame()  # Creating an empty dataframe
for index, i in df.iterrows():
    df1.loc[index, 'A'] = df.loc[index, 'A']
    df1.loc[index, 'B'] = df.loc[index, 'B']
df1.head()
The different approaches discussed in the previous answers are based on the assumption that either the user knows column indices to drop or subset on, or the user wishes to subset a dataframe using a range of columns (for instance between 'C' : 'E').
pandas.DataFrame.drop() is certainly an option for subsetting data based on a user-defined list of columns (though you have to be cautious: always work on a copy of the dataframe, and the inplace parameter should not be set to True!).
Another option is to use df.columns.difference(), which does a set difference on column names and returns an Index containing the desired columns. Here is the solution:
df = pd.DataFrame([[2,3,4], [3,4,5]], columns=['a','b','c'], index=[1,2])
columns_for_differencing = ['a']
df1 = df.copy()[df.columns.difference(columns_for_differencing)]
print(df1)
The output would be:
b c
1 3 4
2 4 5
You can also use df.pop():
>>> df = pd.DataFrame([('falcon', 'bird', 389.0),
... ('parrot', 'bird', 24.0),
... ('lion', 'mammal', 80.5),
... ('monkey', 'mammal', np.nan)],
... columns=('name', 'class', 'max_speed'))
>>> df
name class max_speed
0 falcon bird 389.0
1 parrot bird 24.0
2 lion mammal 80.5
3 monkey mammal NaN
>>> df.pop('class')
0 bird
1 bird
2 mammal
3 mammal
Name: class, dtype: object
>>> df
name max_speed
0 falcon 389.0
1 parrot 24.0
2 lion 80.5
3 monkey NaN
In short: df.pop(c) removes column c from df in place and returns it.
I've seen several answers on this, but one thing remained unclear to me: how would you select those columns of interest?
The answer is that if you have them gathered in a list, you can just reference the columns using that list.
Example
print(extracted_features.shape)
print(extracted_features)
(63,)
['f000004' 'f000005' 'f000006' 'f000014' 'f000039' 'f000040' 'f000043'
'f000047' 'f000048' 'f000049' 'f000050' 'f000051' 'f000052' 'f000053'
'f000054' 'f000055' 'f000056' 'f000057' 'f000058' 'f000059' 'f000060'
'f000061' 'f000062' 'f000063' 'f000064' 'f000065' 'f000066' 'f000067'
'f000068' 'f000069' 'f000070' 'f000071' 'f000072' 'f000073' 'f000074'
'f000075' 'f000076' 'f000077' 'f000078' 'f000079' 'f000080' 'f000081'
'f000082' 'f000083' 'f000084' 'f000085' 'f000086' 'f000087' 'f000088'
'f000089' 'f000090' 'f000091' 'f000092' 'f000093' 'f000094' 'f000095'
'f000096' 'f000097' 'f000098' 'f000099' 'f000100' 'f000101' 'f000103']
The above list/NumPy array extracted_features specifies 63 columns. The original dataset has 103 columns, and I would like to extract exactly those 63, so I would use
dataset[extracted_features]
and end up with a dataframe containing just those columns.
This is something you would use quite often in machine learning (more specifically, in feature selection). I would like to discuss other ways too, but I think that has already been covered by other Stack Overflow users.
To exclude some columns you can drop them in the column index. For example:
A B C D
0 1 10 100 1000
1 2 20 200 2000
Select all except two:
df[df.columns.drop(['B', 'D'])]
Output:
A C
0 1 100
1 2 200
You can also use the method truncate to select middle columns:
df.truncate(before='B', after='C', axis=1)
Output:
B C
0 10 100
1 20 200
To select multiple columns, extract and view them as follows: df is the previously named data frame. Create a new data frame df1, selecting the columns A to D which you want to extract and view.
df1 = pd.DataFrame(data_frame, columns=['Column A', 'Column B', 'Column C', 'Column D'])
df1
All required columns will show up!
def get_slize(dataframe, start_row, end_row, start_col, end_col):
    assert len(dataframe) > end_row and start_row >= 0
    assert len(dataframe.columns) > end_col and start_col >= 0
    list_of_indexes = list(dataframe.columns)[start_col:end_col]
    ans = dataframe.iloc[start_row:end_row][list_of_indexes]
    return ans
Just use this function
I think this is the easiest way to reach your goal.
import pandas as pd
cols = ['a', 'b']
df1 = pd.DataFrame(df, columns=cols)
# or, equivalently, by position:
df1 = df.iloc[:, 0:2]

add columns different length pandas

I have a problem with adding columns in pandas.
I have a DataFrame with dimensions n×k, and in the process I will need to add columns with dimensions m×1, where m ∈ [1, n], but I don't know m in advance.
When I try to do it:
df['Name column'] = data
# type(data) = list
result:
AssertionError: Length of values does not match length of index
Can I add columns with different length?
If you use the accepted answer, you'll lose your column names, as shown in the accepted answer's example and described in the documentation (emphasis added):
The resulting axis will be labeled 0, ..., n - 1. This is useful if you are concatenating objects where the concatenation axis does not have meaningful indexing information.
It looks like column names ('Name column') are meaningful to the Original Poster / Original Question.
To keep the column names, use pandas.concat but don't pass ignore_index (its default value is False, so you can omit the argument altogether). Continue to use axis=1:
import pandas
# Note these columns have 3 rows of values:
original = pandas.DataFrame({
    'Age': [10, 12, 13],
    'Gender': ['M', 'F', 'F']
})
# Note this column has 4 rows of values:
additional = pandas.DataFrame({
    'Name': ['Nate A', 'Jessie A', 'Daniel H', 'John D']
})
new = pandas.concat([original, additional], axis=1)
# Identical:
# new = pandas.concat([original, additional], ignore_index=False, axis=1)
print(new.head())
# Age Gender Name
#0 10 M Nate A
#1 12 F Jessie A
#2 13 F Daniel H
#3 NaN NaN John D
Notice how John D does not have an Age or a Gender.
Use concat and pass axis=1 and ignore_index=True:
In [38]:
import numpy as np
df = pd.DataFrame({'a':np.arange(5)})
df1 = pd.DataFrame({'b':np.arange(4)})
print(df1)
df
b
0 0
1 1
2 2
3 3
Out[38]:
a
0 0
1 1
2 2
3 3
4 4
In [39]:
pd.concat([df,df1], ignore_index=True, axis=1)
Out[39]:
0 1
0 0 0
1 1 1
2 2 2
3 3 3
4 4 NaN
We can add lists of different sizes to a DataFrame.
Example
a = [0,1,2,3]
b = [0,1,2,3,4,5,6,7,8,9]
c = [0,1]
Find the length of each list:
la, lb, lc = len(a), len(b), len(c)
# now find the max
max_len = max(la, lb, lc)
Resize all lists to the determined max length, padding with empty strings:
if not max_len == la:
    a.extend([''] * (max_len - la))
if not max_len == lb:
    b.extend([''] * (max_len - lb))
if not max_len == lc:
    c.extend([''] * (max_len - lc))
Now all the lists have the same length; create the dataframe:
pd.DataFrame({'A':a,'B':b,'C':c})
The final output is:
   A  B  C
0  0  0  0
1  1  1  1
2  2  2
3  3  3
4     4
5     5
6     6
7     7
8     8
9     9
I had the same issue: two different dataframes without a common column, and I just needed to put them beside each other in a CSV file.
Merge:
In this case, merge does not work, even when adding a temporary column to both dfs and then dropping it, because this method forces both dfs to the same length. It repeats the rows of the shorter dataframe to match the longer dataframe's length.
Concat:
The idea of The Red Pea didn't work for me. It just appended the shorter df to the longer one (row-wise) while leaving an empty column (NaNs) above the shorter df's column.
Solution: You need to do the following:
df1 = df1.reset_index()
df2 = df2.reset_index()
df = [df1, df2]
df_final = pd.concat(df, axis=1)
df_final.to_csv(filename, index=False)
This way, you'll see your dfs besides each other (column-wise), each of which with its own length.
If somebody would like to replace a specific column of a different size instead of adding one:
Based on this answer, I use a dict as an intermediate type.
Create Pandas Dataframe with different sized columns
If the column to be inserted is not a list but already a dict, the respective line can be omitted.
def fill_column(dataframe: pd.DataFrame, values: list, column: str):
    dict_from_list = dict(enumerate(values))    # build a dict from the enumerated list
    dataframe_as_dict = dataframe.to_dict()     # get the DataFrame as a dict
    dataframe_as_dict[column] = dict_from_list  # assign the specific column
    return pd.DataFrame.from_dict(dataframe_as_dict, orient='index').T  # rebuild the DataFrame from the dict
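A quick usage sketch with made-up data (assuming pandas is imported as pd):
df = pd.DataFrame({'Age': [10, 12, 13]})
df = fill_column(df, ['Nate', 'Jessie', 'Daniel', 'John'], 'Name')
# 'Name' now has 4 entries; 'Age' is padded with NaN for the extra row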
