I have this data frame
df =
ID  join  Chapter  ParaIndex  text
0   NaN   1        0          I am test
1   NaN   2        1          it is easy
2   1     3        2          but not so
3   1     3        3          much easy
I want to get this
(merge the rows of column "text" that share the same value in column "join", reindex "ID" and "ParaIndex", and leave the rest unchanged)
dfEdited =
ID  join  Chapter  ParaIndex  text
0   NaN   1        0          I am test
1   NaN   2        1          it is easy
2   1     3        2          but not so much easy
I used this command
dfedited = df.groupby(['join'])['text'].apply(lambda x: ' '.join(x.astype(str))).reset_index()
it only merges the rows that have a numerical value in the "join" column and excludes the rows with NaN,
so I changed it to this:
dfedited = df.groupby(['join'], dropna=False)['text'].apply(lambda x: ' '.join(x.astype(str))).reset_index()
here it merges all rows based on the "join" values, but it treats the rows where "join" is NaN as one group too, and therefore joins them together as well. However, I do not want to join those rows... any idea? Many thanks!
I also used this
dfedited = df.groupby(['join', 'ParaIndex', 'Chapter'], dropna=False)['text'].apply(lambda x: ' '.join(x.astype(str))).reset_index()
it looks better since it keeps all the columns, but nothing gets merged!
I hope you can give an example of the data and code, and work through it step by step rather than writing it in one line without testing. It's hard to help you with this one-line code.
But the main idea is to use merge(..., on='join')
I solved it like this:
dfEdited = (df.assign(key=df['join'].ne(df['join'].shift()).cumsum())
              .groupby('key')
              .agg({'ParaIndex': 'first', 'Chapter': 'first', 'text': ' '.join})
              .reset_index())
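For anyone who wants it step by step, here is a sketch that rebuilds the sample data and splits the one-liner into its two stages (the intermediate print is only for illustration):
import numpy as np
import pandas as pd

# Rebuild the sample frame from the question.
df = pd.DataFrame({'ID': [0, 1, 2, 3],
                   'join': [np.nan, np.nan, 1, 1],
                   'Chapter': [1, 2, 3, 3],
                   'ParaIndex': [0, 1, 2, 3],
                   'text': ['I am test', 'it is easy', 'but not so', 'much easy']})

# Step 1: build a grouping key. Since NaN != NaN, .ne() is True on every
# NaN row, so each NaN row gets its own key; only consecutive equal
# values of "join" share a key.
key = df['join'].ne(df['join'].shift()).cumsum()
print(key.tolist())  # [1, 2, 3, 3]

# Step 2: group on that key, keep the first ParaIndex and Chapter of
# each group, and concatenate the text.
dfEdited = (df.assign(key=key)
              .groupby('key')
              .agg({'ParaIndex': 'first', 'Chapter': 'first', 'text': ' '.join})
              .reset_index())
print(dfEdited)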
So I have a dataframe like this:
   0      1          2           ...
0  Index  Something  Something2  ...
1  1      5          8           ...
2  2      6          9           ...
3  3      7          10          ...
Now, I want to insert some columns in between those "Something" columns, for which I have used this code:
j = 1
for i in range(2, 51):
    if i % 2 != 0 and i != 4:
        df.insert(i, f"% Difference {j}", " ")
        j += 1
where df is the dataframe. Now what happens is that the columns do get inserted, but like this:
   0      1          Difference 1  2           ...
0  Index  Something  NaN           Something2  ...
1  1      5          NaN           8           ...
2  2      6          NaN           9           ...
3  3      7          NaN           10          ...
whereas what I wanted was this:
   0      1          2             3           ...
0  Index  Something  Difference 1  Something2  ...
1  1      5          NaN           8           ...
2  2      6          NaN           9           ...
3  3      7          NaN           10          ...
Edit 1: Using jezrael's logic:
df.columns = df.iloc[0].tolist()
df = df.iloc[1:].reset_index(drop=True)
print(df)
The output of that is still this:
   0      1          2           ...
0  Index  Something  Something2  ...
1  1      5          8           ...
2  2      6          9           ...
3  3      7          10          ...
Any ideas or suggestions as to where or how I am going wrong?
If your dataframe looks like what you've shown in your first code block, your column names aren't Index, Something, etc. - they're actually 0, 1, etc.
Pandas is seeing Index, Something, etc. as data in row 0, NOT as column names (which exist above row 0). So when you add a column with the name Difference 1, you're adding a column above row 0, which is where the range of integers is located.
A couple of potential solutions to this:
If you'd like the actual column names to be Index, Something, etc., then the best solution is to import the data with that row as the headers. What is the source of your data? If it's a CSV, make sure NOT to use the header=None option. If it's from somewhere else, there is likely an option to pass in a list of the column names to use. I can't think of any reason why you'd want a range of integer values as your column names rather than the more descriptive names you have listed.
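For example (a minimal sketch; the file name is hypothetical), the difference at import time looks like this:
import pandas as pd

# With a header row in the file, let pandas use it (the default):
df = pd.read_csv('data.csv')  # the first file row becomes the column names

# header=None instead demotes that row to data and names the columns
# 0, 1, 2, ..., which is exactly the situation in the question:
df_bad = pd.read_csv('data.csv', header=None)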
Alternatively, you can do what @jezrael suggested and convert your first row of data to column names, then delete that data row. I'm not sure why their solution isn't working for you, since the code seems to work fine in my testing. Here's what it's doing:
df.columns = df.iloc[0].tolist()
df.columns tells pandas what to (re)name the columns of the dataframe. df.iloc[0].tolist() creates a list out of the first row of data, which in your case is the column names that you actually want.
df = df.iloc[1:].reset_index(drop=True)
This grabs the 2nd through last rows of data to recreate the dataframe. So you have new column names based on the first row, then you recreate the dataframe starting at the second row. The .reset_index(drop=True) isn't totally necessary to include. That just restarts your actual data rows with an index value of 0 rather than 1.
If for some reason you want to keep the column names as they currently exist (as integers rather than labels), you could do something like the following under the if statement in your for loop:
df.insert(i, i, np.nan, allow_duplicates=True)
df.iat[0, i] = f"% Difference {j}"
df.columns = np.arange(len(df.columns))
The first line inserts a column with an integer label, filled with NaN values to start with (assuming you have numpy imported as np). You need to allow duplicates, otherwise you'll get an error, since the integer value will be the name of a pre-existing column.
The second line changes the value in the 1st row of the newly-created column to what you want.
The third line resets the column names to be a range of integers like you had to start with.
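Putting those three lines together, here is a small sketch on a reconstruction of your frame (the data and the single insert position are assumptions for illustration):
import numpy as np
import pandas as pd

# Reconstruction of the question's frame: integer column names, with the
# real names stored in row 0.
df = pd.DataFrame([['Index', 'Something', 'Something2'],
                   [1, 5, 8],
                   [2, 6, 9],
                   [3, 7, 10]])

j = 1
for i in [2]:  # insert positions; your loop derives these from range(2, 51)
    df.insert(i, i, np.nan, allow_duplicates=True)  # duplicate integer label
    df.iat[0, i] = f"% Difference {j}"              # the "name" lives in row 0
    j += 1

df.columns = np.arange(len(df.columns))  # renumber the columns 0..n-1
print(df)
#        0          1               2           3
# 0  Index  Something  % Difference 1  Something2
# 1      1          5             NaN           8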
As @jezrael suggested, it seems like you might be a little unclear about the difference between column names, indices, and data rows and columns. An index is its own thing, so it's not usually necessary to have a column named Index like you have in your dataframe, especially since that column has the same values as the actual index. Clarifying those sorts of things at import can prevent a lot of hassle later on, so I'd recommend taking a good look at your data source to see if you can create a clearer dataframe to start with!
I want to append some columns in between those "Something" column names
No, there are no columns named Something; you first need to set the first row of data as the column names:
print (df.columns)
Int64Index([0, 1, 2], dtype='int64')
print (df.iloc[0].tolist())
['Index', 'Something', 'Something2']
df.columns = df.iloc[0].tolist()
df = df.iloc[1:].reset_index(drop=True)
print (df)
  Index Something Something2
0     1         5          8
1     2         6          9
2     3         7         10
print (df.columns)
Index(['Index', 'Something', 'Something2'], dtype='object')
Then your solution creates the Difference columns, but the output is different: there are no columns 0, 1, 2, 3.
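A short sketch of the whole flow (sample data rebuilt from the question): once the first row is promoted to column names, a positional insert lands between the named columns:
import pandas as pd

df = pd.DataFrame([['Index', 'Something', 'Something2'],
                   [1, 5, 8],
                   [2, 6, 9],
                   [3, 7, 10]])

# Promote the first data row to column names, then drop it.
df.columns = df.iloc[0].tolist()
df = df.iloc[1:].reset_index(drop=True)

# Now inserting by position puts the new column between the named ones.
df.insert(2, '% Difference 1', pd.NA)
print(df.columns.tolist())
# ['Index', 'Something', '% Difference 1', 'Something2']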
I have two dataframes with unequal numbers of rows. I want to join them horizontally, aligning the second dataframe to the first based on a key ("flag"). However, the flag serves merely as a connector at a specific row of the first (base) df, which means the second dataframe should be pasted in at that connector point. Please see the visual for what I mean, in case it is not clear.
I tried looking into merge, concat, join, etc., but none of them seem to be quite what I am looking for.
dif = df1['flag'].idxmax() - df2['flag'].idxmax()
df2.index = df2.index + dif
df1.merge(df2, how='outer', left_index=True, right_index=True)
You can make use of the above idea. You need to clean up the column names and drop the extra column. It also works if dif is negative, e.g. if you drop the first 3 rows of df1.
You can try:
df_base = pd.DataFrame(data={'flag': [0, 0, 0, 0, 1, 0, 0],
                             'transaction_value': [1, 1, 2, 2, 5, 6, 9]})
df_group2 = pd.DataFrame(data={'flag': [0, 0, 1, 0, 0],
                               'transaction_value': [1, 1, 2, 2, 5]})

diff = df_base['flag'].argmax() - df_group2['flag'].argmax()
df_group2.index = df_group2.index + diff
print(df_base.join(df_group2[['transaction_value']], rsuffix='_group2'))
Output:
   flag  transaction_value  transaction_value_group2
0     0                  1                       NaN
1     0                  1                       NaN
2     0                  2                       1.0
3     0                  2                       1.0
4     1                  5                       2.0
5     0                  6                       2.0
6     0                  9                       5.0
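Note that argmax returns the position of the first maximum (the first 1 in flag), while idxmax in the earlier snippet returns its index label; with a default RangeIndex the two coincide.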
I would like to copy values from one dataframe's column to another dataframe if the values in two other columns are the same.
example df1:
identifier  price
1           NaN
1           NaN
3           NaN
3           NaN
and so on. There are several rows for every identifier.
In my df2, there is only one value for each identifier in "price"
example df2:
Identifier  price
1           3
3           5
I just would like to copy the "price" values in df2 to "price" in df1. It does not matter to me whether the values are copied to every row where the identifiers match or just to the first, since I will alter all but the first entry for each identifier in df1["price"] anyway.
Expected output would be still df1 because there are other columns I still need:
identifier  price
1           3
1           NaN
3           5
3           NaN
OR:
identifier  price
1           3
1           3
3           5
3           5
I could work with both.
I tried np.where, but the different lengths of the dataframes cause problems. I also tried using loc, but I got stuck when defining the value that should be inserted into the cell if the condition holds.
Any help is much appreciated, thank you in advance!
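One possible approach (a sketch built on the sample frames above, assuming exactly one price per identifier in df2) is to map the identifiers onto df2's prices, which produces the second acceptable output:
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'identifier': [1, 1, 3, 3],
                    'price': [np.nan, np.nan, np.nan, np.nan]})
df2 = pd.DataFrame({'Identifier': [1, 3],
                    'price': [3, 5]})

# Look up each identifier's single price in df2; this fills every matching row.
df1['price'] = df1['identifier'].map(df2.set_index('Identifier')['price'])
print(df1)
#    identifier  price
# 0           1      3
# 1           1      3
# 2           3      5
# 3           3      5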
For the following df:
group  participated
A      1
A      1
B      0
A      0
B      1
A      1
B      0
B      0
I want to count the total number of values in the participated column for each value in the group column (a groupby count), and then also count how many 1s there are in each group.
Something like
group  tot_participated  1s
A      4                 3
B      4                 1
I know the first part is simple and can be done by a simple
grouped_df=df.groupby('group').count().reset_index()
but I am unable to wrap my head around the second part. Any help will be greatly appreciated!
You could follow the groupby with an aggregation as below:
grp_df = df.groupby('group', as_index=False).agg({'participated': ['count', 'sum']})
grp_df.columns = ['group','tot_participated','1s']
grp_df.head()
The caveat to using .agg with multiple aggregation functions on the same column is that it creates a MultiIndex for the columns. This can be remedied by resetting the column names, as in the second line.
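An alternative sketch that avoids the MultiIndex entirely is named aggregation (since 1s is not a valid Python identifier, it needs a rename afterwards):
grp_df = (df.groupby('group', as_index=False)
            .agg(tot_participated=('participated', 'count'),
                 ones=('participated', 'sum'))
            .rename(columns={'ones': '1s'}))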
I was able to pull the rows that I would like to delete from a CSV file, but I can't make the drop() function work.
data = pd.read_csv(next(iglob('*.csv')))
data_top = data.head()
data_top = data_top.drop(axis=0)
What needs to be added?
Example of a CSV file. It should delete everything until it reaches the Employee row.
   creation date      Unnamed: 1   Unnamed: 2
0            NaN  type of client          NaN
1            age             NaN          NaN
2            NaN      birth date          NaN
3            NaN             NaN     days off
4       Employee          Salary     External
5            Dan            130e          yes
6        Abraham             10e           no
7       Richmond            201e  third-party
If it is just the top 5 rows you want to delete, then you can do it as follows:
data = pd.read_csv(next(iglob('*.csv')))
data.drop([0,1,2,3,4], axis=0, inplace=True)
Along with axis, you should also pass either a single label or a list (of column names or row indexes).
There are, of course, many other ways to achieve this too. Especially if the case is that the index of rows you want to delete is not just the top 5.
Edit: inplace added, as pointed out in the comments.
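A slightly more general variant (a sketch) drops by position instead of hard-coded labels, so it also works when the index labels are not 0 through 4:
data = data.drop(data.index[:5])  # drop the first five rows, whatever their labels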
Considering the comments and further explanations, and assuming you know the name of the column and that you have a positional index, you can try the following:
data = pd.read_csv(next(iglob('*.csv')))
row = data[data['creation date'] == 'Employee']
n = row.index[0]
data.drop(labels=list(range(n)), inplace=True)
The main goal is to find the index of the row that contains the value 'Employee'. To achieve that, assuming there are no other rows that contain that word, you can filter the dataframe to match the value in question in the specific column.
After that, you extract the index value, which you will use to create a list of labels (given a positional index) that you will drop from the dataframe, as @MAK7 stated in their answer.
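A compact variant of the same idea (a sketch under the same assumptions): a boolean match plus idxmax finds the first matching label directly.
n = data['creation date'].eq('Employee').idxmax()  # label of the first match
data = data.loc[n:].reset_index(drop=True)         # keep everything from that row on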