I have a DataFrame, DF1:
Id1 Id2
0 286 409
1 286 257
2 409 286
3 257 183
In this DataFrame, I consider the rows 286, 409 and 409, 286 to be the same, and I only want to keep one of them. I am doing all of this to build a network graph with the NetworkX Python library.
I have tried to achieve this by creating another DataFrame with the columns interchanged, DF2:
Id2 Id1
0 409 286
1 257 286
2 286 409
3 183 257
Then I compare these two DataFrames with the isin function, something like this:
DF1[DF1[['Id1', 'Id2']].isin(DF2[['Id2', 'Id1']])]
but it just returns DF1 unchanged.
Expected output DF:
Id1 Id2
0 286 409
1 286 257
3 257 183
Any help would be appreciated, thanks.
I believe you need to sort both columns with np.sort and then filter with DataFrame.duplicated using an inverted mask:
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.sort(DF1[['Id1', 'Id2']].to_numpy(), axis=1), index=DF1.index)
df = DF1[~df1.duplicated()]
print(df)
Id1 Id2
0 286 409
1 286 257
3 257 183
Detail: if numpy.sort is used with axis=1 it sorts per row, so the first and third rows become the same:
print(np.sort(DF1[['Id1', 'Id2']].to_numpy(), axis=1))
[[286 409]
[257 286]
[286 409]
[183 257]]
Then use the DataFrame.duplicated function (it works on a DataFrame, hence the DataFrame constructor):
df1 = pd.DataFrame(np.sort(DF1[['Id1', 'Id2']].to_numpy(), axis=1), index=DF1.index)
print(df1)
0 1
0 286 409
1 257 286
2 286 409
3 183 257
The third row is a duplicate:
print(df1.duplicated())
0 False
1 False
2 True
3 False
dtype: bool
Last, it is necessary to invert the mask to remove the duplicates; the output is filtered with boolean indexing:
print(DF1[~df1.duplicated()])
Id1 Id2
0 286 409
1 286 257
3 257 183
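Since the end goal is a NetworkX graph, it may also help to know that an undirected nx.Graph already treats (286, 409) and (409, 286) as the same edge, so the duplicate collapses when building the graph directly (a minimal sketch, assuming NetworkX is installed and DF1 is the frame above):
import networkx as nx

# Reversed pairs map to the same undirected edge, so the duplicate row
# does not create a second edge in the graph.
G = nx.from_pandas_edgelist(DF1, source='Id1', target='Id2')
print(G.edges())  # expected: the three unique edges (286, 409), (286, 257), (257, 183)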
You can group your DataFrame by a sorted list of the column values:
import pandas as pd
from io import StringIO
data = """Id1 Id2
286 409
286 257
409 286
257 183"""
df = pd.read_csv(StringIO(data), sep=r"\s+")
print(df.groupby(df.apply(lambda x: str(sorted(list(x))), axis=1)).first())
Result:
Id1 Id2
[183, 257] 257 183
[257, 286] 286 257
[286, 409] 286 409
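A small variant of the same idea (a sketch, not part of the original answer): group by the element-wise minimum and maximum of the two columns instead of a stringified sorted list, which avoids building string keys:
import numpy as np

# min/max per row give an order-independent key for each pair
lo = np.minimum(df['Id1'], df['Id2']).rename('lo')
hi = np.maximum(df['Id1'], df['Id2']).rename('hi')
print(df.groupby([lo, hi]).first())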
I have a df with numbers in the second column. Each number represents the length of a DNA sequence. I would like to create two new columns: the first one says where the sequence starts and the second one says where it ends.
This is my current df:
Names LEN
0 Ribosomal_S9: 121
1 Ribosomal_S8: 129
2 Ribosomal_L10: 100
3 GrpE: 166
4 DUF150: 141
.. ... ...
115 TIGR03632: 117
116 TIGR03654: 175
117 TIGR03723: 314
118 TIGR03725: 212
119 TIGR03953: 188
[120 rows x 2 columns]
And this is what I am trying to get:
Names LEN Start End
0 Ribosomal_S9: 121 0 121
1 Ribosomal_S8: 129 121 250
2 Ribosomal_L10: 100 250 350
3 GrpE: 166 350 516
4 DUF150: 141 516 657
.. ... ... ... ..
115 TIGR03632: 117
116 TIGR03654: 175
117 TIGR03723: 314
118 TIGR03725: 212
119 TIGR03953: 188
[120 rows x 4 columns]
Can anyone please point me in the right direction?
Use DataFrame.assign with new columns created by Series.cumsum; for the Start column, shift the cumulative sum with Series.shift:
# convert the column to integers
df['LEN'] = df['LEN'].astype(int)
# alternatively, coerce non-numeric values to missing values
#df['LEN'] = pd.to_numeric(df['LEN'], errors='coerce')
s = df['LEN'].cumsum()
df = df.assign(Start = s.shift(fill_value=0), End = s)
print(df)
Names LEN Start End
0 Ribosomal_S9: 121 0 121
1 Ribosomal_S8: 129 121 250
2 Ribosomal_L10: 100 250 350
3 GrpE: 166 350 516
4 DUF150: 141 516 657
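For reference, a minimal self-contained sketch of the same cumsum/shift idea on a small frame built from the question's first rows:
import pandas as pd

df = pd.DataFrame({'Names': ['Ribosomal_S9:', 'Ribosomal_S8:', 'Ribosomal_L10:'],
                   'LEN': [121, 129, 100]})

s = df['LEN'].cumsum()                       # running total of lengths: 121, 250, 350
df = df.assign(Start=s.shift(fill_value=0),  # previous running total (0 for the first row)
               End=s)                        # the running total itself
print(df)
#            Names  LEN  Start  End
# 0   Ribosomal_S9:  121      0  121
# 1   Ribosomal_S8:  129    121  250
# 2  Ribosomal_L10:  100    250  350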
I have this data frame and I would like to calculate a new column as the mean of salary_1, salary_2 and salary_3:
df = pd.DataFrame({
'salary_1': [230, 345, 222],
'salary_2': [235, 375, 292],
'salary_3': [210, 385, 260]
})
salary_1 salary_2 salary_3
0 230 235 210
1 345 375 385
2 222 292 260
How can I do this in pandas in the most efficient way? In reality I have many more columns and I don't want to write them out one by one.
Something like this:
salary_1 salary_2 salary_3 salary_mean
0 230 235 210 (230+235+210)/3
1 345 375 385 ...
2 222 292 260 ...
Use .mean. By specifying the axis you can take the average across rows or across columns.
df['average'] = df.mean(axis=1)
df
returns
salary_1 salary_2 salary_3 average
0 230 235 210 225.000000
1 345 375 385 368.333333
2 222 292 260 258.000000
If you only want the mean of a few you can select only those columns. E.g.
df['average_1_3'] = df[['salary_1', 'salary_3']].mean(axis=1)
df
returns
salary_1 salary_2 salary_3 average_1_3
0 230 235 210 220.0
1 345 375 385 365.0
2 222 292 260 241.0
An easy way to solve this problem is shown below:
col = df.loc[: , "salary_1":"salary_3"]
where "salary_1" is the start column name and "salary_3" is the end column name
df['salary_mean'] = col.mean(axis=1)
df
This adds a new column to the dataframe showing the mean of the selected columns.
This approach is helpful when you have a large set of columns, or when you only need to operate on a selected subset of them rather than on all of them.
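If the relevant columns share a common prefix, DataFrame.filter can select them without listing the names or relying on their position (a small sketch, assuming every salary column contains "salary" in its name):
import pandas as pd

df = pd.DataFrame({'salary_1': [230, 345, 222],
                   'salary_2': [235, 375, 292],
                   'salary_3': [210, 385, 260]})

# filter(like='salary') keeps every column whose name contains "salary",
# so additional salary_* columns would be picked up automatically.
df['salary_mean'] = df.filter(like='salary').mean(axis=1)
print(df)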
How do I get the highest value in one column for each unique value in another column, and return the same dataframe structure back?
Here is a pandas dataframe example:
reg.nr counter value ID2 categ date
1 37367 421 231385 93 A 20.01.2004
2 37368 428 235156 93 B 21.01.2004
3 37369 408 234251 93 C 22.01.2004
4 37372 403 196292 93 D 23.01.2004
5 55523 400 247141 139 E 24.01.2004
6 55575 415 215818 139 F 25.01.2004
7 55576 402 204404 139 A 26.01.2004
8 69940 402 62244 175 B 27.01.2004
9 69941 402 38274 175 C 28.01.2004
10 69942 404 55171 175 D 29.01.2004
11 69943 416 55495 175 E 30.01.2004
12 69944 407 90231 175 F 31.01.2004
13 69945 411 75382 175 A 01.02.2004
14 69948 405 119129 175 B 02.02.2004
I want to return the row with the highest value of column "counter" for each unique value of column "ID2". Afterwards, the new pandas dataframe should look like this:
reg.nr counter value ID2 categ date
1 37368 428 235156 93 B 21.01.2004
2 55575 415 215818 139 F 25.01.2004
3 69943 416 55495 175 E 30.01.2004
One way is using drop_duplicates:
In [332]: df.sort_values('counter', ascending=False).drop_duplicates(['ID2'])
Out[332]:
reg.nr counter value ID2 categ date
2 37368 428 235156 93 B 21.01.2004
11 69943 416 55495 175 E 30.01.2004
6 55575 415 215818 139 F 25.01.2004
For the desired output, you could sort on two columns and reset the index:
In [336]: (df.sort_values(['ID2', 'counter'], ascending=[True, False])
.drop_duplicates(['ID2']).reset_index(drop=True)
)
Out[336]:
reg.nr counter value ID2 categ date
0 37368 428 235156 93 B 21.01.2004
1 55575 415 215818 139 F 25.01.2004
2 69943 416 55495 175 E 30.01.2004
df.loc[df.groupby('ID2')['counter'].idxmax(), :].reset_index()
index reg.nr counter value ID2 categ date
0 2 37368 428 235156 93 B 21.01.2004
1 6 55575 415 215818 139 F 25.01.2004
2 11 69943 416 55495 175 E 30.01.2004
First, you group your dataframe by column ID2. Then you take the counter column and compute the index of the (first) maximal element of that column in each group. Then you use these indexes to filter your initial dataframe. Finally, you reset the index (if you need to).
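To make the intermediate step concrete, groupby(...).idxmax() returns, for each group, the index label of the (first) row holding the maximum, and loc then picks those whole rows (a sketch, assuming the question's data is loaded as df with the index shown):
print(df.groupby('ID2')['counter'].idxmax())
# ID2
# 93      2
# 139     6
# 175    11
# Name: counter, dtype: int64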
I am aligning two dataframes which look like the following:
Dataframe 1
Timestamp L_x L_y L_a R_x R_y R_a
2403950 621.3 461.3 313 623.3 461.8 260
2404050 622.5 461.3 312 623.3 462.6 260
2404150 623.1 461.5 311 623.4 464 261
2404250 623.6 461.7 310 623.7 465.4 261
2404350 623.8 461.5 309 623.9 466.1 261
Dataframe 2
This dataframe contains the timestamps at which a particular event occurred.
Timestamp
0 2404030
1 2404050
2 2404250
3 2404266
4 2404282
5 2404298
6 2404314
7 2404330
8 2404350
9 2404382
All timestamps are in milliseconds. As you can see, the first dataframe is resampled to 100 milliseconds. What I want to do is align the two dataframes based on a count, i.e. how many events occur during a particular 100 millisecond bin. For example, in the first 100 millisecond bin of dataframe 1 (2403950 - 2404049), only one event occurs according to the second dataframe, namely the one at 2404030, and so on. The aligned table should look like the following:
Timestamp L_x L_y L_a R_x R_y R_a count
2403950 621.3 461.3 313 623.3 461.8 260 1
2404050 622.5 461.3 312 623.3 462.6 260 1
2404150 623.1 461.5 311 623.4 464 261 0
2404250 623.6 461.7 310 623.7 465.4 261 6
2404350 623.8 461.5 309 623.9 466.1 261 2
Thank you for your help and suggestion.
You want to perform integer division of the timestamp by the bin width (i.e. a // b with b = 100), but first need to add 50 to it given your bucketing. Then convert it back into the original units by multiplying by 100 and subtracting 50.
Now, group on this new index and perform a count.
You then merge these counts into your original dataframe and do some formatting to get the data into the desired shape. Make sure to fill the NaNs with zero.
df2['idx'] = (df2.Timestamp + 50) // 100 * 100 - 50
counts = df2.groupby('idx').count()
>>> counts
Timestamp
idx
2403950 1
2404050 1
2404250 6
2404350 2
df_new = df.merge(counts, how='left', left_on='Timestamp', right_index=True, suffixes=['', '_'])
columns = list(df_new)
columns[-1] = 'count'
df_new.columns = columns
df_new['count'].fillna(0, inplace=True)
>>> df_new
Timestamp L_x L_y L_a R_x R_y R_a count
0 2403950 621.3 461.3 313 623.3 461.8 260 1
1 2404050 622.5 461.3 312 623.3 462.6 260 1
2 2404150 623.1 461.5 311 623.4 464.0 261 0
3 2404250 623.6 461.7 310 623.7 465.4 261 6
4 2404350 623.8 461.5 309 623.9 466.1 261 2
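One small follow-up (not part of the original answer): because the left merge introduces NaN for bins with no events, the count column ends up as float after fillna; if integer counts are wanted, it can be cast explicitly:
df_new['count'] = df_new['count'].fillna(0).astype(int)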
I have two DataFrames that have exactly the same dimensions, and I would like to multiply one specific column from each of them together:
My first DataFrame is:
In [834]: patched_benchmark_df_sim
Out[834]:
build_number name cycles
0 390 adpcm 21598
1 390 aes 5441
2 390 blowfish NaN
3 390 dfadd 463
....
284 413 jpeg 766742
285 413 mips 4263
286 413 mpeg2 2021
287 413 sha 348417
[288 rows x 3 columns]
My second DataFrame is:
In [835]: patched_benchmark_df_syn
Out[835]:
build_number name fmax
0 390 adpcm 143.45
1 390 aes 309.60
2 390 blowfish NaN
3 390 dfadd 241.02
....
284 413 jpeg 197.75
285 413 mips 202.39
286 413 mpeg2 291.29
287 413 sha 243.19
[288 rows x 3 columns]
And I would like to take each element of the cycles column of patched_benchmark_df_sim, multiply it by the corresponding element of the fmax column of patched_benchmark_df_syn, and store the result in a new DataFrame with exactly the same structure, containing the build_number and name columns, but where the last column holding the numerical data is called latency, the product of fmax and cycles.
So the output DataFrame has to look something like this:
build_number name latency
0 390 adpcm ## each value here has to be the product of cycles and fmax, and they must correspond to one another ##
......
I tried a straightforward patched_benchmark_df_sim * patched_benchmark_df_syn, but that did not work because my DataFrames have the name column, which is of string type. Is there no built-in pandas method that can do this for me? How should I proceed with the multiplication to get the result I need?
Thank you very much.
The simplest thing to do is to add a new column to the df, then select the columns you want and, if desired, assign them to a new df:
In [356]:
df['latency'] = df['cycles'] * df1['fmax']
df
Out[356]:
build_number name cycles latency
0 390 adpcm 21598 3.098233e+06
1 390 aes 5441 1.684534e+06
2 390 blowfish NaN NaN
3 390 dfadd 463 1.115923e+05
284 413 jpeg 766742 1.516232e+08
285 413 mips 4263 8.627886e+05
286 413 mpeg2 2021 5.886971e+05
287 413 sha 348417 8.473153e+07
In [357]:
new_df = df[['build_number', 'name', 'latency']]
new_df
Out[357]:
build_number name latency
0 390 adpcm 3.098233e+06
1 390 aes 1.684534e+06
2 390 blowfish NaN
3 390 dfadd 1.115923e+05
284 413 jpeg 1.516232e+08
285 413 mips 8.627886e+05
286 413 mpeg2 5.886971e+05
287 413 sha 8.473153e+07
As you've found, you can't multiply non-numeric dfs together the way you tried. The above assumes that the build_number and name columns are the same in both dfs.
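Worth noting (an addition, not from the original answer): the element-wise multiplication above relies on the two frames sharing the same row order/index. If that is not guaranteed, aligning explicitly on the key columns first is safer; a sketch, assuming the column names from the question:
merged = patched_benchmark_df_sim.merge(patched_benchmark_df_syn,
                                        on=['build_number', 'name'])
merged['latency'] = merged['cycles'] * merged['fmax']
new_df = merged[['build_number', 'name', 'latency']]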