I have a dataframe:
John
Kelly
Jay
Max
Kert
I want to create a new dataframe such that the output is as follows:
John_John
John_Kelly
John_Jay
John_Max
John_Kert
Kelly_John
Kelly_Kelly
Kelly_Jay
Kelly_Max
Kelly_Kert
...
Kert_Max
Kert_Kert
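For reference, this input can be reproduced with a minimal setup (assuming the single column is named name):

import pandas as pd

df = pd.DataFrame({'name': ['John', 'Kelly', 'Jay', 'Max', 'Kert']})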
Assuming "name" the column, you can use a cross merge:
df2 = df.merge(df, how='cross')
out = (df2['name_x']+'_'+df2['name_y']).to_frame('name')
Or, with itertools.product:
from itertools import product
out = pd.DataFrame({'name': [f'{a}_{b}' for a,b in product(df['name'], repeat=2)]})
output:
name
0 John_John
1 John_Kelly
2 John_Jay
3 John_Max
4 John_Kert
5 Kelly_John
6 Kelly_Kelly
7 Kelly_Jay
8 Kelly_Max
9 Kelly_Kert
10 Jay_John
11 Jay_Kelly
12 Jay_Jay
13 Jay_Max
14 Jay_Kert
15 Max_John
16 Max_Kelly
17 Max_Jay
18 Max_Max
19 Max_Kert
20 Kert_John
21 Kert_Kelly
22 Kert_Jay
23 Kert_Max
24 Kert_Kert
I have created a data frame which has a rolling quarter mapping, using this code:
abcd = pd.DataFrame()
abcd['Month'] = np.nan
abcd['Month'] = pd.date_range(start='2020-04-01', end='2022-04-01', freq = 'MS')
abcd['Time_1'] = np.arange(1, abcd.shape[0]+1)
abcd['Time_2'] = np.arange(0, abcd.shape[0])
abcd['Time_3'] = np.arange(-1, abcd.shape[0]-1)
db_nd_ad_unpivot = pd.melt(abcd, id_vars=['Month'],
value_vars=['Time_1', 'Time_2', 'Time_3',],
var_name='Time_name', value_name='Time')
abcd_map = db_nd_ad_unpivot[(db_nd_ad_unpivot['Time']>0)&(db_nd_ad_unpivot['Time']< abcd.shape[0]+1)]
abcd_map = abcd_map[['Month','Time']]
The output of the code is a two-column frame mapping each Month to a Time value.
Now, I have created an additional column that gives the month name and year in Mon'YY format, using this code:
abcd_map['Month'] = pd.to_datetime(abcd_map.Month)
# abcd_map['Month'] = abcd_map['Month'].astype(str)
abcd_map['Time_Period'] = abcd_map['Month'].apply(lambda x: x.strftime("%b'%y"))
Now I want to see, for a specific time, the minimum and maximum in the Month column. For example, for time instance 17, the simple groupby gives:
Time Period
17 Aug'21-Sep'21
The desired output is
Time Time_Period
17 Aug'21-Oct'21
I think this happens because min and max are taken on the Month column after the strftime function has converted it to string/object type.
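Indeed, a quick check confirms that string min and max compare alphabetically rather than chronologically:

months = ["Aug'21", "Sep'21", "Oct'21"]
print(min(months), max(months))  # Aug'21 Sep'21 -- 'S' sorts after 'O', so Oct'21 is not the max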
How about converting to string after finding the min and max?
New_df = abcd_map.groupby('Time')['Month'].agg(['min', 'max']).apply(lambda x: x.dt.strftime("%b'%y")).agg('-'.join, axis=1).reset_index()
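The same logic unpacked step by step, for readability:

# aggregate on the datetime column first, so min/max are chronological
agg = abcd_map.groupby('Time')['Month'].agg(['min', 'max'])
# format to Mon'YY only after aggregating
agg = agg.apply(lambda col: col.dt.strftime("%b'%y"))
# join the two formatted endpoints into one Time_Period string
New_df = agg.agg('-'.join, axis=1).reset_index(name='Time_Period')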
Do this:
abcd_map['Month'] = pd.to_datetime(abcd_map['Month'])
abcd_map['Time_Period'] = abcd_map['Month'].apply(lambda x: x.strftime("%b'%y"))
df = abcd_map.groupby('Time').agg(
    sum_col=('Time', 'sum'),
    first_date=('Time_Period', 'min'),
    last_date=('Time_Period', 'max')
).reset_index()
df['TimePeriod'] = df['first_date']+'-'+df['last_date']
df = df.drop(['first_date','last_date'], axis = 1)
df
which returns
Time sum_col TimePeriod
0 1 3 Apr'20-May'20
1 2 6 Jul'20-May'20
2 3 9 Aug'20-Jun'20
3 4 12 Aug'20-Sep'20
4 5 15 Aug'20-Sep'20
5 6 18 Nov'20-Sep'20
6 7 21 Dec'20-Oct'20
7 8 24 Dec'20-Nov'20
8 9 27 Dec'20-Jan'21
9 10 30 Feb'21-Mar'21
10 11 33 Apr'21-Mar'21
11 12 36 Apr'21-May'21
12 13 39 Apr'21-May'21
13 14 42 Jul'21-May'21
14 15 45 Aug'21-Jun'21
15 16 48 Aug'21-Sep'21
16 17 51 Aug'21-Sep'21
17 18 54 Nov'21-Sep'21
18 19 57 Dec'21-Oct'21
19 20 60 Dec'21-Nov'21
20 21 63 Dec'21-Jan'22
21 22 66 Feb'22-Mar'22
22 23 69 Apr'22-Mar'22
23 24 48 Apr'22-Mar'22
24 25 25 Apr'22-Apr'22
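Note that several rows above (for example Jul'20-May'20) have their endpoints reversed: min and max here run on the formatted strings, so the comparison is alphabetical, which is exactly the problem described in the question. A variant of the same groupby that aggregates the datetime Month column instead (a sketch) avoids this:

df = abcd_map.groupby('Time').agg(
    sum_col=('Time', 'sum'),
    first_date=('Month', 'min'),   # chronological min of the datetimes
    last_date=('Month', 'max')     # chronological max of the datetimes
).reset_index()
# format only after aggregating
df['TimePeriod'] = (df['first_date'].dt.strftime("%b'%y") + '-'
                    + df['last_date'].dt.strftime("%b'%y"))
df = df.drop(['first_date', 'last_date'], axis=1)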
I have tried out the following snippet of code for my project:
import pandas as pd
import nltk
from nltk.corpus import wordnet as wn
nltk.download('wordnet')
df=[]
hypo = wn.synset('science.n.01').hyponyms()
hyper = wn.synset('science.n.01').hypernyms()
mero = wn.synset('science.n.01').part_meronyms()
holo = wn.synset('science.n.01').part_holonyms()
ent = wn.synset('science.n.01').entailments()
df = df+hypo+hyper+mero+holo+ent
df_agri_clean = pd.DataFrame(df)
df_agri_clean.columns=["Items"]
print(df_agri_clean)
pd.set_option('display.expand_frame_repr', False)
It gives me this dataframe as output:
Items
0 Synset('agrobiology.n.01')
1 Synset('agrology.n.01')
2 Synset('agronomy.n.01')
3 Synset('architectonics.n.01')
4 Synset('cognitive_science.n.01')
5 Synset('cryptanalysis.n.01')
6 Synset('information_science.n.01')
7 Synset('linguistics.n.01')
8 Synset('mathematics.n.01')
9 Synset('metallurgy.n.01')
10 Synset('metrology.n.01')
11 Synset('natural_history.n.01')
12 Synset('natural_science.n.01')
13 Synset('nutrition.n.03')
14 Synset('psychology.n.01')
15 Synset('social_science.n.01')
16 Synset('strategics.n.01')
17 Synset('systematics.n.01')
18 Synset('thanatology.n.01')
19 Synset('discipline.n.01')
20 Synset('scientific_theory.n.01')
21 Synset('scientific_knowledge.n.01')
Since df is itself a list, printing it directly gives:
[Synset('agrobiology.n.01'), Synset('agrology.n.01'), Synset('agronomy.n.01'), Synset('architectonics.n.01'), Synset('cognitive_science.n.01'), Synset('cryptanalysis.n.01'), Synset('information_science.n.01'), Synset('linguistics.n.01'), Synset('mathematics.n.01'), Synset('metallurgy.n.01'), Synset('metrology.n.01'), Synset('natural_history.n.01'), Synset('natural_science.n.01'), Synset('nutrition.n.03'), Synset('psychology.n.01'), Synset('social_science.n.01'), Synset('strategics.n.01'), Synset('systematics.n.01'), Synset('thanatology.n.01'), Synset('discipline.n.01'), Synset('scientific_theory.n.01'), Synset('scientific_knowledge.n.01')]
I wish to change every word under "Items" like so:
Synset('agrobiology.n.01') => agrobiology.n.01
or
Synset('agrobiology.n.01') => 'agrobiology'
Any help will be appreciated! Thanks!
To access the name of each of these items, call .name() on the Synset. You could use a list comprehension to update the column as follows:
df_agri_clean['Items'] = [df_agri_clean['Items'][i].name() for i in range(len(df_agri_clean))]
df_agri_clean
The output will be as you expected
Items
0 agrobiology.n.01
1 agrology.n.01
2 agronomy.n.01
3 architectonics.n.01
4 cognitive_science.n.01
5 cryptanalysis.n.01
6 information_science.n.01
7 linguistics.n.01
8 mathematics.n.01
9 metallurgy.n.01
10 metrology.n.01
11 natural_history.n.01
12 natural_science.n.01
13 nutrition.n.03
14 psychology.n.01
15 social_science.n.01
16 strategics.n.01
17 systematics.n.01
18 thanatology.n.01
19 discipline.n.01
20 scientific_theory.n.01
21 scientific_knowledge.n.01
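Equivalently, the same update can be written a bit more idiomatically with Series.apply (a sketch, to be run while Items still holds Synset objects):

# assumes Items still holds Synset objects, not strings
df_agri_clean['Items'] = df_agri_clean['Items'].apply(lambda syn: syn.name())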
To strip the ".n.01" suffix from the resulting strings as well, you could do the following:
df_agri_clean['Items'] = [df_agri_clean['Items'][i].name().replace('.n.01', '') for i in range(len(df_agri_clean))]
df_agri_clean
Output (just like your second expected output)
Items
0 agrobiology
1 agrology
2 agronomy
3 architectonics
4 cognitive_science
5 cryptanalysis
6 information_science
7 linguistics
8 mathematics
9 metallurgy
10 metrology
11 natural_history
12 natural_science
13 nutrition.n.03
14 psychology
15 social_science
16 strategics
17 systematics
18 thanatology
19 discipline
20 scientific_theory
21 scientific_knowledge
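Note that nutrition.n.03 is still present above because replace only removes the literal '.n.01'. A sketch that drops any part-of-speech/sense suffix regardless of its number, rebuilding the column from the original synset list df, is:

# keep only the lemma part before the first dot, whatever the sense number is
df_agri_clean['Items'] = [syn.name().split('.')[0] for syn in df]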
I have a dataframe as below:
import pandas as pd
# initialise data of lists.
data = {'Name':['Tom', 'nick', 'krish', 'jack'],
'Book1':[20, 21, 19, 18],
'Book2':[20,'', 12, 20],
'Book3':[31, 21, 17, 16],
'Book4':[31, 19, 18, 16]}
# Create DataFrame
df = pd.DataFrame(data)
# Print the output.
print(df)
Name Book1 Book2 Book3 Book4
Tom 20 20 31 31
nick 21 21 19
krish 19 12 17 18
jack 18 20 16 16
I wish to get the output below, which compares the Book1, Book2, Book3 and Book4 columns. For row Tom there are two 20s and two 31s; since the counts are tied, the value that comes first (Book1) is preferred, so the Output column is 20. For row nick there are two 21s and one 19, so the most frequent value, 21, is taken for the Output column. For row krish no value repeats, so I want the Output column fixed to "Mix".
Output column as below:
Name Book1 Book2 Book3 Book4 Output
Tom 20 20 31 31 20
nick 21 21 19 21
krish 19 12 17 18 Mix
jack 18 20 16 16 16
Does anyone have ideas? I saw there is a mode function, but it did not seem applicable to this case. Please help, thanks!
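To illustrate why mode by itself is not applicable: when no value repeats, Series.mode returns every value rather than a single winner, and a tie returns several values, so the "Mix" case needs explicit handling:

import pandas as pd

print(pd.Series([19, 12, 17, 18]).mode())  # no repeats: all four values come back
print(pd.Series([20, 20, 31, 31]).mode())  # a tie: both 20 and 31 come back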
Use value_counts:
max_val = lambda x: x.value_counts().index[0] \
if x.value_counts().iloc[0] > 1 else 'Mix'
df['Output'] = df.filter(like='Book').apply(max_val, axis=1)
print(df)
# Output:
Name Book1 Book2 Book3 Book4 Output
0 Tom 20 20 31 31 20
1 nick 21 21 19 21
2 krish 19 12 17 18 Mix
3 jack 18 20 16 16 16
Update
If you use Python >= 3.8, you can use the walrus operator (avoiding a double call to value_counts):
max_val = lambda x: v.index[0] if (v := x.value_counts()).iloc[0] > 1 else 'Mix'
df['Output'] = df.filter(like='Book').apply(max_val, axis=1)
We can use your idea on mode to get your desired output. First, we need to convert the relevant columns to numeric data types:
import numpy as np

temp = (df
.filter(like='Book')
.apply(pd.to_numeric)
.mode(1)
)
# compute for values
# nulls exist only if there are duplicates
output = np.where(temp.notna().all(1),
# value if True
'Mix',
# if False, pick the first modal value,
temp.iloc[:, 0])
df.assign(output = output)
Name Book1 Book2 Book3 Book4 output
0 Tom 20 20 31 31 20.0
1 nick 21 21 19 21.0
2 krish 19 12 17 18 Mix
3 jack 18 20 16 16 16.0
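One cosmetic caveat: because mode(1) pads the duplicate rows with NaN, the modal values come back as floats, which is why the output shows 20.0 rather than 20. If integer display matters, a sketch (assuming the book values are whole numbers) is:

output = np.where(temp.notna().all(1),
                  'Mix',
                  # nullable Int64 keeps whole numbers intact before the string cast
                  temp.iloc[:, 0].astype('Int64').astype(str))
df.assign(output=output)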
I have the following coordinate dataframe, divided into blocks. Each block starts with a header pair: seq0_leftend and seq0_rightend, seq1_leftend and seq1_rightend, seq2_leftend and seq2_rightend, seq3_leftend and seq3_rightend, and so on. For each block, whenever a row's coordinates are negative, I would like to extract that row together with the rows immediately above and below it. An example of my dataframe file:
seq0_leftend seq0_rightend
0 7 107088
1 107089 108940
2 108941 362759
3 362760 500485
4 500486 509260
5 509261 702736
seq1_leftend seq1_rightend
0 1 106766
1 106767 108619
2 108620 355933
3 355934 488418
4 488419 497151
5 497152 690112
6 690113 700692
7 700693 721993
8 721994 722347
9 722348 946296
10 946297 977714
11 977715 985708
12 -985709 -990725
13 991992 1042023
14 1042024 1259523
15 1259524 1261239
seq2_leftend seq2_rightend
0 1 109407
1 362514 364315
2 109408 362513
3 364450 504968
4 -504969 -515995
5 515996 671291
6 -671295 -682263
7 682264 707010
8 -707011 -709780
9 709781 934501
10 973791 1015417
11 -961703 -973790
12 948955 961702
13 1015418 1069976
14 1069977 1300633
15 -1300634 -1301616
16 1301617 1344821
17 -1515463 -1596433
18 1514459 1515462
19 -1508094 -1514458
20 1346999 1361467
21 -1361468 -1367472
22 1369840 1508093
seq3_leftend seq3_rightend
0 1 112030
1 112031 113882
2 113883 381662
3 381663 519575
4 519576 528317
5 528318 724500
6 724501 735077
7 735078 759456
8 759457 763157
9 763158 996929
10 996931 1034492
11 1034493 1040984
12 -1040985 -1061402
13 1071212 1125426
14 1125427 1353901
15 1353902 1356209
16 1356210 1392818
seq4_leftend seq4_rightend
0 1 105722
1 105723 107575
2 107576 355193
3 355194 487487
4 487488 496220
5 496221 689560
6 689561 700139
7 700140 721438
8 721458 721497
9 721498 947183
10 947184 978601
11 978602 986595
12 -986596 -991612
13 994605 1046245
14 1046247 1264692
15 1264693 1266814
Finally, I want to write a new csv with the data of interest. An example of the final result that I would like:
seq1_leftend seq1_rightend
11 977715 985708
12 -985709 -990725
13 991992 1042023
seq2_leftend seq2_rightend
3 364450 504968
4 -504969 -515995
5 515996 671291
6 -671295 -682263
7 682264 707010
8 -707011 -709780
9 709781 934501
10 973791 1015417
11 -961703 -973790
12 948955 961702
14 1069977 1300633
15 -1300634 -1301616
16 1301617 1344821
17 -1515463 -1596433
18 1514459 1515462
19 -1508094 -1514458
20 1346999 1361467
21 -1361468 -1367472
22 1369840 1508093
seq3_leftend seq3_rightend
11 1034493 1040984
12 -1040985 -1061402
13 1071212 1125426
seq4_leftend seq4_rightend
11 978602 986595
12 -986596 -991612
13 994605 1046245
I assume that you have a list of DataFrames, let's call it src.
To convert a single DataFrame, define the following function:
def findRows(df):
    # look at the first column, whatever its name is
    col = df.iloc[:, 0]
    if col.lt(0).any():
        # keep rows that are negative themselves, or whose
        # immediate neighbour (the row above or below) is negative
        return df[col.lt(0) | col.shift(1).lt(0) | col.shift(-1).lt(0)]
    else:
        return None
Note that this function starts with reading column 0 from the source
DataFrame, so it is independent of the name of this column.
Then it checks whether any element in this column is < 0.
If found, the returned object is a DataFrame with the rows that
contain a value < 0:
either in the row itself,
or in the previous row,
or in the next row.
If not found, this function returns None (from your expected result
I see that in such a case you don't want even an empty DataFrame).
The first stage is to collect results of this function called on each
DataFrame from src:
result = [ findRows(df) for df in src ]
And the last step is to filter out the elements which are None:
result = list(filter(None.__ne__, result))
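filter(None.__ne__, result) works, but only via an obscure corner of the comparison protocol; a plainer equivalent (a matter of taste) is:

result = [df for df in result if df is not None]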
To see the result, run:
for df in result:
print(df)
For src containing the first three of your DataFrames, I got:
seq1_leftend seq1_rightend
11 977715 985708
12 -985709 -990725
13 991992 1042023
seq2_leftend seq2_rightend
3 364450 504968
4 -504969 -515995
5 515996 671291
6 -671295 -682263
7 682264 707010
8 -707011 -709780
9 709781 934501
10 973791 1015417
11 -961703 -973790
12 948955 961702
14 1069977 1300633
15 -1300634 -1301616
16 1301617 1344821
17 -1515463 -1596433
18 1514459 1515462
19 -1508094 -1514458
20 1346999 1361467
21 -1361468 -1367472
22 1369840 1508093
As you can see, the resulting list contains only results
originating from the second and third source DataFrames.
The first was filtered out, since findRows returned
None for it.
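For completeness, here is a sketch of how src could be built from a plain-text file laid out like the example, and how the kept blocks could be written to one CSV. The file names blocks.txt and extracted.csv are hypothetical, and the sketch assumes whitespace-separated columns with each block introduced by a header line containing 'leftend':

import io
import pandas as pd

src = []
block_lines = []
with open('blocks.txt') as f:              # hypothetical input file
    for line in f:
        # a header line starts a new block; flush the previous one
        if 'leftend' in line and block_lines:
            src.append(pd.read_csv(io.StringIO(''.join(block_lines)), sep=r'\s+'))
            block_lines = []
        block_lines.append(line)
if block_lines:
    src.append(pd.read_csv(io.StringIO(''.join(block_lines)), sep=r'\s+'))

# run findRows over the blocks, drop the None results,
# and write each kept block (header included) into one file
result = [findRows(df) for df in src]
result = [df for df in result if df is not None]
with open('extracted.csv', 'w') as out:
    for df in result:
        df.to_csv(out)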
I'm using this txt file named Gradedata.txt and it looks like this:
Sarah K.,10,9,7,9,10,20,19,19,45,92
John M.,9,9,8,9,8,20,20,18,43,95
David R.,8,7,7,9,6,18,17,17,40,83
Joan A.,9,10,10,10,10,20,19,20,47,99
Nick J.,9,7,10,10,10,20,20,19,46,98
Vicki T.,7,7,8,9,9,17,18,19,44,88
I'm looking for the averages of each column. Each column has its own title (Homework #1, Homework #2, etc., in that order). What I am trying to do should look exactly like this:
Homework #1 8.67
Homework #2 8.17
Homework #3 8.33
Homework #4 9.33
Homework #5 8.83
Quiz #1 19.17
Quiz #2 18.83
Quiz #3 18.67
Midterm #1 44.17
Final #1 92.50
Here is my attempt at accomplishing this task:
with open("GradeData.txt", "rtU") as f:
columns = f.readline().strip().split(" ")
numRows = 0
sums = [0] * len(columns)
for line in f:
if not line.strip():
continue
values = line.split(" ")
for i in xrange(len(values)):
sums[i] += int(values[i])
numRows += 1
for index, summedRowValue in enumerate(sums):
print columns[index], 1.0 * summedRowValue / numRows
I'm getting errors, and I also realize I have to name each assignment average. I need some help here; I appreciate it.
numpy can chew this up in one line:
>>> import numpy as np
>>> np.loadtxt('Gradedata.txt', delimiter=',', usecols=range(1,11)).mean(axis=0)
array([ 8.66666667, 8.16666667, 8.33333333, 9.33333333,
8.83333333, 19.16666667, 18.83333333, 18.66666667,
44.16666667, 92.5 ])
Just transpose and use statistics.mean to get the average, skipping the first col:
import csv
from itertools import islice
from statistics import mean
with open("in.txt") as f:
for col in islice(zip(*csv.reader(f)), 1, None):
print(mean(map(float,col)))
Which will give you:
8.666666666666666
8.166666666666666
8.333333333333334
9.333333333333334
8.833333333333334
19.166666666666668
18.833333333333332
18.666666666666668
44.166666666666664
92.5
If the columns are actually named and you want to pair them:
import csv
from itertools import islice
from statistics import mean
with open("in.txt") as f:
# get column names
cols = next(f).split(",")
for col in islice(zip(*csv.reader(f)),1 ,None):
# keys are column names, values are averages
data = dict(zip(cols[1:],mean(map(float,col))))
Or using pandas.read_csv:
import pandas as pd
df = pd.read_csv("in.txt",index_col=0,header=None)
print(df)
print(df.mean(axis=0))
1 2 3 4 5 6 7 8 9 10
0
Sarah K. 10 9 7 9 10 20 19 19 45 92
John M. 9 9 8 9 8 20 20 18 43 95
David R. 8 7 7 9 6 18 17 17 40 83
Joan A. 9 10 10 10 10 20 19 20 47 99
Nick J. 9 7 10 10 10 20 20 19 46 98
Vicki T. 7 7 8 9 9 17 18 19 44 88
1 8.666667
2 8.166667
3 8.333333
4 9.333333
5 8.833333
6 19.166667
7 18.833333
8 18.666667
9 44.166667
10 92.500000
dtype: float64
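To label the averages with the assignment names from the question instead of the numeric column labels, one way (a sketch) is to rename the columns first:

df.columns = ['Homework #1', 'Homework #2', 'Homework #3', 'Homework #4',
              'Homework #5', 'Quiz #1', 'Quiz #2', 'Quiz #3',
              'Midterm #1', 'Final #1']
print(df.mean(axis=0).round(2))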