Join two dataframes on X.n = Y.n + 1 - Python

I have a data frame X with data like:
n  val
------
1    4
2    3
3    0
and another, Y, with the same columns, like:
n   val
-------
1   288
2    12
3   130
4  1230
How can I create an additional column in X that holds the val from the following row of Y, i.e. the val at n + 1?
Expected output is:
n  val  val2
------------
1    4    12
2    3   130
3    0  1230
Apologies, as I'm sure this has been asked before; I'm just having trouble finding it, and I can't figure it out using join or merge, since those seem to only take column names as inputs.

We can do a merge after shifting the key on the Y side (here df1 is X and df2 is Y):
df = df1.merge(df2.assign(n=df2.n - 1), on='n')
   n  val_x  val_y
0  1    4.0     12
1  2    3.0    130
2  3    0.0   1230
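If you'd rather not modify Y before merging, a small alternative sketch (assuming the frames are named X and Y as in the question) shifts the lookup key on the X side and uses map instead:
import pandas as pd

X = pd.DataFrame({'n': [1, 2, 3], 'val': [4, 3, 0]})
Y = pd.DataFrame({'n': [1, 2, 3, 4], 'val': [288, 12, 130, 1230]})

# look up Y's val at key n + 1 instead of shifting Y's key column
X['val2'] = (X['n'] + 1).map(Y.set_index('n')['val'])
This leaves both frames intact and keeps X's original column names.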

Pandas or other Python application to generate a column with 1 to n value based on other two columns with rules

I hope I can explain the question properly.
In basic terms, imagining the df as below:
print(df)
year     id
   1  16100
   1    150
   1    150
   2     66
   2    370
   2    370
   2    530
   3     41
   3     43
   3     61
I need df.seq to be a running 1 to n counter that restarts whenever the year value changes.
df.seq2 should stay at n, instead of advancing to n + 1, when the row above has the same id value.
As an Excel-like formula it would be something like:
df.seq2 = IF(A2=A1, IF(B2=B1, F1, F1+1), 1)
which would produce the desired seq and seq2 output below:
year     id  seq  seq2
   1  16100    1     1
   1    150    2     2
   1    150    3     2
   2     66    1     1
   2    370    2     2
   2    370    3     2
   2    530    4     3
   3     41    1     1
   3     43    2     2
   3     61    3     3
I did test a couple of things (assuming I've already generated df.seq):
comb_df['match'] = comb_df.year.eq(comb_df.year.shift())
comb_df['match2'] = comb_df.id.eq(comb_df.id.shift())
comb_df['seq2'] = np.where(
    (comb_df['match'].shift(1) == True) & (comb_df['match2'].shift(1) == True),
    comb_df['seq'] - 1,
    comb_df['seq2'],
)
The problem is that this doesn't really work when there are multiple duplicates in a row.
Perhaps the issue cannot be solved in a purely vectorized NumPy way and I'd have to iterate over the rows?
There are 2-3 million rows, so performance will be an issue if the solution is very slow.
I need to generate both df.seq and df.seq2.
Any ideas would be extremely helpful!
We can do a groupby with cumcount and factorize:
df['seq'] = df.groupby('year').cumcount() + 1
df['seq2'] = df.groupby('year')['id'].transform(lambda x: x.factorize()[0] + 1)
df
Out[852]:
   year     id  seq  seq2
0     1  16100    1     1
1     1    150    2     2
2     1    150    3     2
3     2     66    1     1
4     2    370    2     2
5     2    370    3     2
6     2    530    4     3
7     3     41    1     1
8     3     43    2     2
9     3     61    3     3
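Note that factorize numbers the ids by first appearance within each year, so it matches the row-adjacent Excel formula only as long as duplicate ids sit on consecutive rows. A fully vectorized sketch that translates the formula literally and avoids the per-group Python lambda, which may matter at 2-3 million rows (same df as above assumed):
new_year = df['year'].ne(df['year'].shift())
new_id = df['id'].ne(df['id'].shift())
df['seq'] = df.groupby('year').cumcount() + 1
# seq2 advances only when id changes from the previous row, restarting at each new year
df['seq2'] = (new_year | new_id).astype(int).groupby(df['year']).cumsum()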

Adding and multiplying values of a dataframe in Python

I have a dataset with multiple columns and rows. The rows are supposed to be summed up based on the unique value in a column. I tried .groupby, but I want to retain the whole dataset, not just the summed-up columns based on one unique column. I further need to multiply these summed values by other columns.
For example:
id    A  B  C  D    E
11    2  1  2  4  100
11    2  2  1  1  100
12    1  3  2  2  200
13    3  1  1  4  190
14  NaN  1  2  2  300
I would like to sum up columns B, C, and D based on the unique id and then multiply the result by columns A and E into a new column F. I do not want to sum up the values of columns A and E.
I would like the resultant dataframe to be something like the following, which also handles NaN by skipping it and carrying on with the calculation:
id    A  B  C  D    E     F
11    2  3  3  5  100  9000
12    1  3  2  2  200  2400
13    3  1  1  4  190  2280
14  NaN  1  2  2  300  1200
If the above is unachievable, then I would like something like the following, where the rows stay the same but the calculation is as described above, based on the same id:
id    A  B  C  D    E     F
11    2  3  3  5  100  9000
11    2  2  1  1  100  9000
12    1  3  2  2  200  2400
13    3  1  1  4  190  2280
14  NaN  1  2  2  300  1200
My logic earlier was to apply groupby on columns B, C, and D and then multiply, but that did not work out for me. If the above dataframes are unachievable, please let me know how I can perform this calculation and then merge/join the results back onto the original file with just the E column.
You must first sum columns B, C, and D vertically for each common id, then take the horizontal product:
result = df.groupby('id').agg({'A': 'first', 'B': 'sum', 'C': 'sum', 'D': 'sum',
                               'E': 'first'})
result['F'] = result.fillna(1).astype('int64').agg('prod', axis=1)
It gives:
      A  B  C  D    E     F
id
11  2.0  3  3  5  100  9000
12  1.0  3  2  2  200  2400
13  3.0  1  1  4  190  2280
14  NaN  1  2  2  300  1200
Beware: id is the index here - use reset_index if you want it to be a normal column.
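For the asker's second layout, where every original row is kept, a rough sketch along the same lines (assuming A holds a real NaN rather than the string 'Nan') broadcasts the group sums back onto the rows with transform:
sums = df.groupby('id')[['B', 'C', 'D']].transform('sum')
# same product as above, with NaN in A treated as 1, repeated on every row of the id
df['F'] = (df['A'].fillna(1) * sums['B'] * sums['C'] * sums['D'] * df['E']).astype('int64')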

pandas number of items in one column per value in another column

I have two dataframes. Say, for example, frame 1 is the student info:
student_id  course
         1       a
         2       b
         3       c
         4       a
         5       f
         6       f
Frame 2 records each interaction a student has with a program:
student_id  day  number_of_clicks
         1    4                60
         1    5                34
         1    7                87
         2    3                33
         2    4                29
         2    8               213
         2    9                46
         3    2               103
I am trying to add the information from frame 2 to frame 1, i.e. for each student I would like to know the number of different days on which they accessed the database, and the sum of all their clicks on those days, e.g.:
student_id  course  no_days  total_clicks
         1       a        3           181
         2       b        4           321
         3       c        1           103
         4       a        0             0
         5       f        0             0
         6       f        0             0
I've tried to do this with groupby, but I couldn't add the information back into frame 1 or figure out how to sum the number of clicks. Any ideas?
First we aggregate your df2 to the desired information using GroupBy.agg. Then we merge that information into df1:
agg = df2.groupby('student_id').agg(
    no_days=('day', 'size'),
    total_clicks=('number_of_clicks', 'sum')
)
df1 = df1.merge(agg, on='student_id', how='left').fillna(0)
   student_id course  no_days  total_clicks
0           1      a      3.0         181.0
1           2      b      4.0         321.0
2           3      c      1.0         103.0
3           4      a      0.0           0.0
4           5      f      0.0           0.0
5           6      f      0.0           0.0
Or if you like one-liners, here's the same method as above, but in a single statement and more in a SQL kind of style:
df1.merge(
    df2.groupby('student_id').agg(
        no_days=('day', 'size'),
        total_clicks=('number_of_clicks', 'sum')
    ),
    on='student_id',
    how='left'
).fillna(0)
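One caveat for both variants: the left merge introduces NaN for students with no interactions, so after fillna(0) the two new columns are floats (hence the 3.0, 181.0 above). A small follow-up, assuming the result was assigned back to df1 as in the first snippet, restores integer dtypes:
df1[['no_days', 'total_clicks']] = df1[['no_days', 'total_clicks']].astype('int64')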
Use merge, fillna on the null values, then aggregate using groupby.agg (the trailing reset_index in the original is unnecessary with as_index=False and is dropped here):
import numpy as np

df = df1.merge(df2, how='left').fillna(0, downcast='infer') \
        .groupby(['student_id', 'course'], as_index=False) \
        .agg({'day': np.count_nonzero, 'number_of_clicks': np.sum})
print(df)
   student_id course  day  number_of_clicks
0           1      a    3               181
1           2      b    4               321
2           3      c    1               103
3           4      a    0                 0
4           5      f    0                 0
5           6      f    0                 0

how do I sort a list of values that correspond to 'n' into a large table ordered by 'n' [duplicate]

This question already has answers here:
Pandas: how to merge two dataframes on a column by keeping the information of the first one?
(4 answers)
Closed 3 years ago.
I have a list of uncertainties that correspond to particular values of n, which I'll call table 1. I would like to add those uncertainties into a comprehensive table of data, table 2, which is ordered numerically by n in ascending order. How can I attach each uncertainty to its corresponding value of n?
My first issue is that my table of uncertainties is a Table, not a dataframe. I have the separate arrays but am not sure how to combine them into a dataframe:
table1 = Table([xrow, yrow])
xrow denotes the array of the n values below in table1, and yrow denotes the corresponding errors.
excerpt of table1:
 n  error
 1  0.0
 2  0.00496
 3  0.0096
 4  0.00913
 6  0.00555
 8  0.00718
10  0.00707
excerpt of table2:
   n       Energy  g  J  error
0  1     0.000000  1  0    NaN
1  2  1827.486200  1  0    NaN
2  3  3626.681500  1  0    NaN
3  4  5396.686500  1  0    NaN
4  5  6250.149500  1  0    NaN
So the end result should look like this:
   n       Energy  g  J    error
0  1     0.000000  1  0        0
1  2  1827.486200  1  0  0.00496
2  3  3626.681500  1  0   0.0096
3  4  5396.686500  1  0  0.00913
4  5  6250.149500  1  0      NaN
i.e. the entries where there is no data remain blank (e.g. n = 5 in the above case).
I should note there is a lot of data: roughly 30k rows in table 2 and 2.5k in table 1.
You can use .merge like this:
import pandas as pd
from io import StringIO
table1 = pd.read_csv(StringIO("""
n error
1 0.0
2 0.00496
3 0.0096
4 0.00913
6 0.00555
8 0.00718
10 0.00707"""), sep=r"\s+")
table2 = pd.read_csv(StringIO("""
n Energy g J error
0 1 0.000000 1 0 NaN
1 2 1827.486200 1 0 NaN
2 3 3626.681500 1 0 NaN
3 4 5396.686500 1 0 NaN
4 5 6250.149500 1 0 NaN"""), sep=r"\s+")
table2["error"] = table1.merge(table2, on="n", how="right")["error_x"]
print(table2)
Output:
   n     Energy  g  J    error
0  1     0.0000  1  0  0.00000
1  2  1827.4862  1  0  0.00496
2  3  3626.6815  1  0  0.00960
3  4  5396.6865  1  0  0.00913
4  5  6250.1495  1  0      NaN
EDIT: using .map should perform better (see comments):
table2["error"] = table2["n"].map(table1.set_index('n')['error'])

select rows in a dataframe in python based on two criteria

Based on the dataframe (1) below, I wish to create a dataframe (2) containing the rows where either y or z is equal to 2. Is there a way to do this conveniently?
And if I were to create a dataframe (3) that contains only the rows from dataframe (1) that are not in dataframe (2), how should I approach it?
id    x  y  z
 0  324  1  2
 1  213  1  1
 2  529  2  1
 3  347  3  2
 4  109  2  2
...
df[df[['y', 'z']].eq(2).any(axis=1)]
Out[1205]:
   id    x  y  z
0   0  324  1  2
2   2  529  2  1
3   3  347  3  2
4   4  109  2  2
You can create df2 easily enough using a condition:
df2 = df1[df1.y.eq(2) | df1.z.eq(2)]
df2
     x  y  z
id
0  324  1  2
2  529  2  1
3  347  3  2
4  109  2  2
Given df2 and df1, you can perform a set-difference operation on the index, like this:
# index.difference returns index labels, so select with .loc
df3 = df1.loc[df1.index.difference(df2.index)]
df3
     x  y  z
id
1  213  1  1
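An equivalent way to get df3 without going through the index is to build the boolean mask once and negate it:
mask = df1.y.eq(2) | df1.z.eq(2)
df2 = df1[mask]   # rows where y or z equals 2
df3 = df1[~mask]  # everything else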
You can do the following:
import pandas as pd
df = pd.read_csv('data.csv')
df2 = df[(df.y == 2) | (df.z == 2)]
print(df2)
Results:
   id    x  y  z
0   0  324  1  2
2   2  529  2  1
3   3  347  3  2
4   4  109  2  2
