I have created a data frame which has a rolling quarter mapping, using the following code:
import numpy as np
import pandas as pd

abcd = pd.DataFrame()
abcd['Month'] = pd.date_range(start='2020-04-01', end='2022-04-01', freq='MS')
abcd['Time_1'] = np.arange(1, abcd.shape[0] + 1)
abcd['Time_2'] = np.arange(0, abcd.shape[0])
abcd['Time_3'] = np.arange(-1, abcd.shape[0] - 1)

db_nd_ad_unpivot = pd.melt(abcd, id_vars=['Month'],
                           value_vars=['Time_1', 'Time_2', 'Time_3'],
                           var_name='Time_name', value_name='Time')
abcd_map = db_nd_ad_unpivot[(db_nd_ad_unpivot['Time'] > 0) &
                            (db_nd_ad_unpivot['Time'] < abcd.shape[0] + 1)]
abcd_map = abcd_map[['Month', 'Time']]
The output of the code looks like this:
Now, I have created an additional column that gives the name of the month and year in Mon'YY format, using this code:
abcd_map['Month'] = pd.to_datetime(abcd_map.Month)
# abcd_map['Month'] = abcd_map['Month'].astype(str)
abcd_map['Time_Period'] = abcd_map['Month'].apply(lambda x: x.strftime("%b'%y"))
Now, for a specific time, I want to see the minimum and maximum of the Month column. For example, for time instance 17, a simple groupby gives:
Time  Time_Period
17    Aug'21-Sep'21
The desired output is:
Time  Time_Period
17    Aug'21-Oct'21
I think this happens because the min and max are taken after the strftime call: strftime converts the column to string/object type, so the min and max are lexicographic rather than chronological.
How about converting to string after finding the min and max?
New_df = abcd_map.groupby('Time')['Month'].agg(['min', 'max']).apply(lambda x: x.dt.strftime("%b'%y")).agg(' '.join, axis=1).reset_index()
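If you want the exact Time / Time_Period layout from the question, with the dash separator, the same idea can be spelled out a little more. This is just a sketch, assuming abcd_map still holds Month as a datetime column:
out = (abcd_map.groupby('Time')['Month']
               .agg(['min', 'max'])                        # min/max taken on datetimes, not strings
               .apply(lambda col: col.dt.strftime("%b'%y"))
               .agg('-'.join, axis=1)
               .rename('Time_Period')
               .reset_index())
For Time 17 this gives Aug'21-Oct'21, because the extremes are found before the values are turned into strings.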
Do this:
abcd_map['Time_Period'] = pd.to_datetime(abcd_map['Month']).dt.strftime("%b'%y")
df = abcd_map.groupby(['Time']).agg(
    sum_col=('Time', np.sum),
    first_date=('Time_Period', np.min),
    last_date=('Time_Period', np.max)
).reset_index()
df['TimePeriod'] = df['first_date']+'-'+df['last_date']
df = df.drop(['first_date','last_date'], axis = 1)
df
which returns
Time sum_col TimePeriod
0 1 3 Apr'20-May'20
1 2 6 Jul'20-May'20
2 3 9 Aug'20-Jun'20
3 4 12 Aug'20-Sep'20
4 5 15 Aug'20-Sep'20
5 6 18 Nov'20-Sep'20
6 7 21 Dec'20-Oct'20
7 8 24 Dec'20-Nov'20
8 9 27 Dec'20-Jan'21
9 10 30 Feb'21-Mar'21
10 11 33 Apr'21-Mar'21
11 12 36 Apr'21-May'21
12 13 39 Apr'21-May'21
13 14 42 Jul'21-May'21
14 15 45 Aug'21-Jun'21
15 16 48 Aug'21-Sep'21
16 17 51 Aug'21-Sep'21
17 18 54 Nov'21-Sep'21
18 19 57 Dec'21-Oct'21
19 20 60 Dec'21-Nov'21
20 21 63 Dec'21-Jan'22
21 22 66 Feb'22-Mar'22
22 23 69 Apr'22-Mar'22
23 24 48 Apr'22-Mar'22
24 25 25 Apr'22-Apr'22
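Note that Time_Period is a string here as well, so the min and max inside each group are again lexicographic; that is why rows such as Time 2 show Jul'20-May'20. A variant of the same answer that aggregates the datetime Month column and formats it afterwards, as a sketch under that assumption:
df = abcd_map.groupby('Time').agg(
    sum_col=('Time', 'sum'),
    first_date=('Month', 'min'),   # chronological, because Month is datetime
    last_date=('Month', 'max'),
).reset_index()
df['TimePeriod'] = (df['first_date'].dt.strftime("%b'%y")
                    + '-' + df['last_date'].dt.strftime("%b'%y"))
df = df.drop(['first_date', 'last_date'], axis=1)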
Suppose we have a dataset.
import pandas as pd

tmp = pd.DataFrame({'hi':    [1, 2, 3, 3, 5, 6, 3, 2, 3, 2, 1],
                    'bye':   [12, 23, 35, 35, 53, 62, 31, 22, 33, 22, 12],
                    'yes':   [12, 2, 32, 3, 5, 6, 23, 2, 32, 2, 21],
                    'no':    [1, 92, 93, 3, 95, 6, 33, 2, 33, 22, 1],
                    'maybe': [91, 2, 32, 3, 95, 69, 3, 2, 93, 2, 1]})
In Python we can easily do tmp.groupby('hi').agg(total_bye=('bye', sum)) to get the sum of bye for each group. However, if I want to reference multiple columns, what is the fastest, most efficient and cleanest (most readable) way to do this in Python? In particular, can I do it using df.groupby(my_cols).agg()? What are the fastest alternatives? I'm open to (and actually prefer) libraries faster than pandas, such as dask or vaex.
For example, in R data.table we can do this pretty easily, and it's super fast
# In R, assume this object is a data.table
# In a single line, the below code groups by 'hi' and then creates the my_new_col column based on whether bye > 5 and yes < 20, taking the sum of 'no' for each group.
tmp[, .(my_new_col = sum(ifelse(bye > 5 & yes < 20, no, 0))), by = 'hi']
# output 1
hi my_new_col
1: 1 1
2: 2 116
3: 3 3
4: 5 95
5: 6 6
# Similarly, we can even group by a rule instead of creating a new col to group by. See below
tmp[, .(my_new_col = sum(ifelse(bye > 5 & yes < 20, no, 0))), by = .(new_rule = ifelse(hi > 3, 1, 0))]
# output 2
new_rule my_new_col
1: 0 120
2: 1 101
# We can even apply multiple aggregate functions in parallel using data.table
agg_fns <- function(x) list(sum = sum(as.double(x), na.rm = T),
                            mean = mean(as.double(x), na.rm = T),
                            min = min(as.double(x), na.rm = T),
                            max = max(as.double(x), na.rm = T))
tmp[,
    unlist(
      list(N = .N,  # add a N column (row count) to the summary
           unlist(mclapply(.SD, agg_fns, mc.cores = 12), recursive = F)),  # apply all agg_fns over all .SDcols
      recursive = F),
    .SDcols = !unique(c(names('hi'), as.character(unlist('hi'))))]
output 3:
N bye.sum bye.mean bye.min bye.max yes.sum yes.mean yes.min yes.max no.sum no.mean no.min
1: 11 340 30.90909 12 62 140 12.72727 2 32 381 34.63636 1
no.max maybe.sum maybe.mean maybe.min maybe.max
1: 95 393 35.72727 1 95
Do we have this same flexibility in python?
You can use agg on all wanted columns and add a prefix:
tmp.groupby('hi').agg('sum').add_prefix('total_')
output:
total_bye total_yes total_no total_maybe
hi
1 24 33 2 92
2 67 6 116 6
3 134 90 162 131
5 53 5 95 95
6 62 6 6 69
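If only a subset of the columns is wanted, you can select them before aggregating and still apply several functions at once, for example:
tmp.groupby('hi')[['bye', 'no']].agg(['sum', 'mean'])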
You can even combine columns and operations flexibly with a dictionary:
tmp.groupby('hi').agg(**{'%s_%s' % (label, c): (c, op)
                         for c in tmp.columns
                         for (label, op) in [('total', 'sum'), ('average', 'mean')]
                         })
output:
total_hi average_hi total_bye average_bye total_yes average_yes total_no average_no total_maybe average_maybe
hi
1 2 1 24 12.000000 33 16.5 2 1.000000 92 46.00
2 6 2 67 22.333333 6 2.0 116 38.666667 6 2.00
3 12 3 134 33.500000 90 22.5 162 40.500000 131 32.75
5 5 5 53 53.000000 5 5.0 95 95.000000 95 95.00
6 6 6 62 62.000000 6 6.0 6 6.000000 69 69.00
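As for the conditional examples from data.table (outputs 1 and 2), no special syntax is needed: one way, shown here only as a sketch rather than a benchmark, is to mask the column first and then aggregate. The same groupby/agg pattern also carries over to dask.dataframe, although it is not benchmarked here.
import numpy as np

# output 1: sum of 'no' where bye > 5 and yes < 20, grouped by 'hi'
masked = tmp['no'].where((tmp['bye'] > 5) & (tmp['yes'] < 20), 0)
out1 = (tmp.assign(my_new_col=masked)
           .groupby('hi', as_index=False)['my_new_col'].sum())

# output 2: same aggregation, grouped by a rule computed on the fly
new_rule = np.where(tmp['hi'] > 3, 1, 0)
out2 = (tmp.assign(my_new_col=masked)
           .groupby(new_rule)['my_new_col'].sum()
           .rename_axis('new_rule').reset_index())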
I have a dataframe called df_location:
locations = {'location_id': [1,2,3,4,5,6,7,8,9,10],
             'temperature_value': [20,21,22,23,24,25,26,27,28,29],
             'humidity_value': [60,61,62,63,64,65,66,67,68,69]}
df_location = pd.DataFrame(locations)
I have another dataframe called df_islands:
islands = {'island_id': [10,20,30,40,50,60],
           'list_of_locations': [[1],[2,3],[4,5],[6,7,8],[9],[10]]}
df_islands = pd.DataFrame(islands)
Each island_id corresponds to one or more locations. As you can see, the locations are stored in a list.
What I'm trying to do is to look up each location in list_of_locations and merge the result into df_location, so that each location_id gets the island_id it belongs to.
Final dataframe should be the following:
merged = {'location_id': [1,2,3,4,5,6,7,8,9,10],
          'temperature_value': [20,21,22,23,24,25,26,27,28,29],
          'humidity_value': [60,61,62,63,64,65,66,67,68,69],
          'island_id': [10,20,20,30,30,40,40,40,50,60]}
df_merged = pd.DataFrame(merged)
I don't know whether there is a method or function in python to do so. I would really appreciate it if someone can give me a solution to this problem.
The pandas method you're looking for to expand your df_islands dataframe is .explode(column_name). From there, rename your column to location_id and then join the dataframes using pd.merge(). It'll perform a SQL-like join method using the location_id as the key.
import pandas as pd

locations = {'location_id': [1,2,3,4,5,6,7,8,9,10],
             'temperature_value': [20,21,22,23,24,25,26,27,28,29],
             'humidity_value': [60,61,62,63,64,65,66,67,68,69]}
df_locations = pd.DataFrame(locations)

islands = {'island_id': [10,20,30,40,50,60],
           'list_of_locations': [[1],[2,3],[4,5],[6,7,8],[9],[10]]}
df_islands = pd.DataFrame(islands)

df_islands = df_islands.explode(column='list_of_locations')
df_islands.columns = ['island_id', 'location_id']
# depending on the pandas version, explode may leave location_id as object dtype;
# casting it keeps the merge keys' types consistent
df_islands['location_id'] = df_islands['location_id'].astype(int)
pd.merge(df_locations, df_islands)
Out[]:
location_id temperature_value humidity_value island_id
0 1 20 60 10
1 2 21 61 20
2 3 22 62 20
3 4 23 63 30
4 5 24 64 30
5 6 25 65 40
6 7 26 66 40
7 8 27 67 40
8 9 28 68 50
9 10 29 69 60
The df.apply() method works here. It's a bit long-winded but it works:
df_location['island_id'] = df_location['location_id'].apply(
    lambda x: [
        df_islands['island_id'][i]
        for i in df_islands.index
        if x in df_islands['list_of_locations'][i]
        # comment out the line above and use this instead if the list is stored as a string:
        # if x in eval(df_islands['list_of_locations'][i])
    ][0]
)
First we select the final value we want if the if statement is True: df_islands['island_id'][i]
Then we loop over each row of df_islands by using df_islands.index.
The if statement then checks, for each row, whether the value of df_location['location_id'] is in that row's list_of_locations, and keeps the island_id when it is.
Finally, since we must contain this long statement in square brackets, it is a list. However, we know that there is only one value in the list so we can index it by using [0] at the end.
I hope this helps and happy for other editors to make the answer more legible!
print(df_location)
location_id temperature_value humidity_value island_id
0 1 20 60 10
1 2 21 61 20
2 3 22 62 20
3 4 23 63 30
4 5 24 64 30
5 6 25 65 40
6 7 26 66 40
7 8 27 67 40
8 9 28 68 50
9 10 29 69 60
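For larger frames a row-wise apply can be slow. As a possible alternative, assuming list_of_locations holds actual Python lists rather than strings, one could build a lookup once with explode and then map it:
# build a location -> island lookup once, then map it onto df_location
lookup = (df_islands.explode('list_of_locations')
                    .set_index('list_of_locations')['island_id'])
df_location['island_id'] = df_location['location_id'].map(lookup)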
I'm working under Python 2.5 (I'm restricted to that version due to an external API) and would like to get the same results as the code below, which I wrote under Python 2.7:
import pandas as pd
df = pd.DataFrame({"lineId":[1,2,3,4], "idCaseMin": [10, 23, 40, 8], "min": [-110, -205, -80, -150], "idCaseMax": [5, 27, 15, 11], "max": [120, 150, 110, 90]})
df = df.set_index("lineId")
df["idMax"] = df["idCaseMax"].where(df["max"]>abs(df["min"]),df["idCaseMin"])
The DataFrame results in:
>>> df
idCaseMax idCaseMin max min idMax
lineId
1 5 10 120 -110 5
2 27 23 150 -205 23
3 15 40 110 -80 15
4 11 8 90 -150 8
The idMax column is defined by whichever of the max and min columns has the greater absolute value: it takes idCaseMax when max wins and idCaseMin otherwise.
I can't use the where method, as it's not available in pandas 0.9.0 (the latest version available for Python 2.5, together with numpy 1.7.1).
So, which options do I have to get same results for idMax column without using pandas where function?
IIUC you can use numpy.where():
In [120]: df['idMax'] = \
np.where(df["max"]<=abs(df["min"]),
df["idCaseMin"],
df["idCaseMax"])
In [121]: df
Out[121]:
idCaseMax idCaseMin max min idMax
lineId
1 5 10 120 -110 5
2 27 23 150 -205 23
3 15 40 110 -80 15
4 11 8 90 -150 8
I'll try to provide an optimised solution that works on 0.9. IIUC ix should work here; the two pieces are selected in a different row order, but column assignment aligns on the index, so each value lands on the correct row.
m = df["max"] > df["min"].abs()
i = df.ix[m, 'idCaseMax']
j = df.ix[~m, 'idCaseMin']
df['idMax'] = i.append(j)
df
idCaseMax idCaseMin max min idMax
lineId
1 5 10 120 -110 5
2 27 23 150 -205 23
3 15 40 110 -80 15
4 11 8 90 -150 8
Your pandas version should already support this (boolean arithmetic):
df['idMax']=(df["max"]>abs(df["min"]))* df["idCaseMax"]+(df["max"]<=abs(df["min"]))* df["idCaseMin"]
df
Out[1388]:
idCaseMax idCaseMin max min idMax
lineId
1 5 10 120 -110 5
2 27 23 150 -205 23
3 15 40 110 -80 15
4 11 8 90 -150 8
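To make the trick above explicit: a boolean Series behaves as 0/1 in arithmetic, so the two masked terms select exactly one of the two candidates per row. A minimal illustration with plain numpy:
import numpy as np

mask = np.array([True, False, True])          # rows where max > |min|
case_max = np.array([5, 27, 15])
case_min = np.array([10, 23, 40])

# True/False act as 1/0, so each row keeps exactly one of the two candidates
print(mask * case_max + (~mask) * case_min)   # -> [ 5 23 15]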
We can use the apply function, as in the code below, to get the same results:
df["idMax"] = df.apply(lambda row: row["idCaseMax"] if row["max"]>abs(row["min"]) else row["idCaseMin"], axis = 1)
I have a problem with the output of my code: the elements of each column are not placed exactly beneath each other.
My original code is too busy, so I have reduced it to a simple one; let's explain this simple one first.
Consider one simple question, as follows: write a code which receives a natural number r as the number of rows, receives another natural number c as the number of columns, and then prints all natural numbers from 1 to r*c in r rows and c columns.
So the code will be something like the following:
r = int(input("How many Rows? ")); ## here r stands for number of rows
c = int(input("How many columns? ")); ## here c stands for number of columns
for i in range(1,r+1):
for j in range (1,c+1):
print(j+c*(i-1)) ,
print
and the output is as follows:
How many Rows? 5
How many columns? 6
1 2 3 4 5 6
7 8 9 10 11 12
13 14 15 16 17 18
19 20 21 22 23 24
25 26 27 28 29 30
>>>
or:
How many Rows? 7
How many columns? 3
1 2 3
4 5 6
7 8 9
10 11 12
13 14 15
16 17 18
19 20 21
>>>
What should I do, to get an output like this?
How many Rows? 5
How many columns? 6
 1  2  3  4  5  6
 7  8  9 10 11 12
13 14 15 16 17 18
19 20 21 22 23 24
25 26 27 28 29 30
>>>
or
How many Rows? 7
How many columns? 3
 1  2  3
 4  5  6
 7  8  9
10 11 12
13 14 15
16 17 18
19 20 21
>>>
Now my original code is something like the following:
def function(n):
R=0;
something...something...something...
something...something...something...
something...something...something...
something...something...something...
return(R)
r = int(input("How many Rows? "));    ## here r stands for number of rows
c = int(input("How many columns? ")); ## here c stands for number of columns
for i in range(0,r+1):
    for j in range(0,c+1):
        n=j+c*(i-1);
        r=function(n);
        print (r)
Now for simplicity, suppose that by some by-hand-manipulation we get:
f(1)=function(1)=17, f(2)=235, f(3)=-8;
f(4)=-9641, f(5)=54278249, f(6)=411;
Now when I run the code, the output is as follows:
How many Rows? 2
How many columns? 3
17
235
-8
-9641
54278249
411
>>>
What should I do to get an output like this:
How many Rows? 2
How many columns? 3
   17      235  -8
-9641 54278249 411
>>>
Also note that I did not want to get something like this:
How many Rows? 2
How many columns? 3
      17      235       -8
   -9641 54278249      411
>>>
Use the rjust method:
r,c = 5,5
for i in range(1,r+1):
    for j in range (1,c+1):
        str_to_printout = str(j+c*(i-1)).rjust(2)
        print(str_to_printout),
    print
Result:
 1  2  3  4  5
 6  7  8  9 10
11 12 13 14 15
16 17 18 19 20
21 22 23 24 25
UPD.
As for your last example, let's say f(n) is defined in this way:
def f(n):
    my_dict = {1:17, 2:235, 3:-8, 4:-9641, 5:54278249, 6:411}
    return my_dict.get(n, 0)
Then you can use the following approach:
r,c = 2,3
# data table with elemets in string format
data_str = [[str(f(j+c*(i-1))) for j in range (1,c+1)] for i in range(1,r+1)]
# transposed data table and list of max len for every column in data_str
data_str_transposed = [list(i) for i in zip(*data_str)]
max_len_columns = [max(map(len, col)) for col in data_str_transposed]
# printing out
# the string " " before 'join' is a delimiter between columns
for row in data_str:
    print(" ".join(elem.rjust(max_len) for elem, max_len in zip(row, max_len_columns)))
Result:
   17      235  -8
-9641 54278249 411
With r,c = 3,3:
   17      235  -8
-9641 54278249 411
    0        0   0
Note that the indent in each column corresponds to the maximum length in this column, and not in the entire table.
Hope this helps. Please comment if you need any further clarifications.
# result stores the final matrix
# max_len stores the length of the maximum element
result, max_len = [], 0
for i in range(1, r + 1):
    temp = []
    for j in range(1, c + 1):
        n = j + c * (i - 1)
        r = function(n)
        if len(str(r)) > max_len:
            max_len = len(str(r))
        temp.append(r)
    result.append(temp)

# printing the values separately to apply rjust() to each and every element
for i in result:
    for j in i:
        print(str(j).rjust(max_len), end=' ')
    print()
Adapted from MaximTitarenko's answer:
You first look for the minimum and maximum value, then decide which is the longer one and use its length as the value for the rjust(x) call.
import random

r,c = 15,5
m = random.sample(xrange(10000), 100)

length1 = len(str(max(m)))
length2 = len(str(min(m)))
longest = max(length1, length2)

for i in range(r):
    for j in range (c):
        str_to_printout = str(m[i*c+j]).rjust(longest)
        print(str_to_printout),
    print
Example output:
 937 9992 8602 4213 7053
1957 9766 6704 8051 8636
 267  889 1903 8693 5565
8287 7842 6933 2111 9689
3948  428 8894 7522  417
3708 8033  878 4945 2771
6393   35 9065 2193 6797
5430 2720  647 4582 3316
9803 1033 7864  656 4556
6751 6342 4915 5986 6805
9490 2325 5237 8513 8860
8400 1789 2004 4500 2836
8329 4322 6616  132 7198
4715  193 2931 3947 8288
1338 9386 5036 4297 2903
You need to use the string method .rjust
From the documentation (linked above):
string.rjust(s, width[, fillchar])
This function right-justifies a string in a field of given width. It returns a string that is at least width characters wide, created by padding the string with the character fillchar (default is a space) until the given width on the right. The string is never truncated.
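For instance:
>>> '42'.rjust(5)
'   42'
>>> '42'.rjust(5, '*')
'***42'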
So we need to calculate the width (in characters) each number should be padded to. That is pretty simple: it is the length of the largest number, r*c, plus 1 (the +1 adds a one-space gap between each column).
Using this, it becomes quite simple to write the code:
r = int(input("How many Rows? "))
c = int(input("How many columns? "))
width = len(str(r*c)) + 1
for i in range(1,r+1):
    for j in range(1,c+1):
        print str(j+c*(i-1)).rjust(width) ,
    print
which for an r, c of 4, 5 respectively, outputs:
  1   2   3   4   5
  6   7   8   9  10
 11  12  13  14  15
 16  17  18  19  20
Hopefully this helps you out and you can adapt this to other situations yourself!
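For reference, the same alignment can also be written with format specifiers instead of rjust; this is only a rough sketch (assuming Python 2.6+ or 3), not part of the original answer:
r, c = 4, 5
width = len(str(r * c)) + 1
for i in range(1, r + 1):
    # '{:>{w}}' right-justifies each number in a field of the given width
    print(' '.join('{:>{w}}'.format(j + c * (i - 1), w=width) for j in range(1, c + 1)))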