Comparing pandas map and merge - python

I have the following df:
df = pd.DataFrame({'key': {0: 'EFG_DS_321',
1: 'EFG_DS_900',
2: 'EFG_DS_900',
3: 'EFG_Q_900',
4: 'EFG_DS_1000',
5: 'EFG_DS_1000',
6: 'EFG_DS_1000',
7: 'ABC_DS_444',
8: 'EFG_DS_900',
9: 'EFG_DS_900',
10: 'EFG_DS_321',
11: 'EFG_DS_900',
12: 'EFG_DS_1000',
13: 'EFG_DS_900',
14: 'EFG_DS_321',
15: 'EFG_DS_321',
16: 'EFG_DS_1000',
17: 'EFG_DS_1000',
18: 'EFG_DS_1000',
19: 'EFG_DS_1000',
20: 'ABC_DS_444',
21: 'EFG_DS_900',
22: 'EFG_DAS_12345',
23: 'EFG_DAS_12345',
24: 'EFG_DAS_321',
25: 'EFG_DS_321',
26: 'EFG_DS_12345',
27: 'EFG_Q_1000',
28: 'EFG_DS_900',
29: 'EFG_DS_321'}})
and I have the following dict:
d = {'ABC_AS_1000': 123,
'ABC_AS_444': 321,
'ABC_AS_231341': 421,
'ABC_AS_888': 412,
'ABC_AS_087': 4215,
'ABC_DAS_1000': 3415,
'ABC_DAS_444': 4215,
'ABC_DAS_231341': 3214,
'ABC_DAS_888': 321,
'ABC_DAS_087': 111,
'ABC_Q_1000': 222,
'ABC_Q_444': 3214,
'ABC_Q_231341': 421,
'ABC_Q_888': 321,
'ABC_Q_087': 41,
'ABC_DS_1000': 421,
'ABC_DS_444': 421,
'ABC_DS_231341': 321,
'ABC_DS_888': 41,
'ABC_DS_087': 41,
'EFG_AS_1000': 213,
'EFG_AS_900': 32,
'EFG_AS_12345': 1,
'EFG_AS_321': 3,
'EFG_DAS_1000': 421,
'EFG_DAS_900': 321,
'EFG_DAS_12345': 123,
'EFG_DAS_321': 31,
'EFG_Q_1000': 41,
'EFG_Q_900': 51,
'EFG_Q_12345': 321,
'EFG_Q_321': 321,
'EFG_DS_1000': 41,
'EFG_DS_900': 51,
'EFG_DS_12345': 321,
'EFG_DS_321': 1}
I want to map d into df, but since the real data is very large and complicated, I'm trying to understand whether map or merge is better in terms of efficiency (running time).
First option: a simple map
res = df['key'].map(d)
Second option: convert d into a DataFrame and perform a merge
d1 = pd.DataFrame.from_dict(d, orient='index', columns=['res'])
res = df.merge(d1, left_on='key', right_index=True)['res']
Any help will be much appreciated (or any better solutions, of course :))

map will be faster than a merge.
If your goal is simply to assign a numerical category to each unique value in df['key'], you could use pandas.factorize, which should be a bit faster than map:
res = df['key'].factorize()[0] + 1
output: array([1, 2, 2, 3, 4, 4, 4, 5, 2, 2, 1, 2, 4, 2, 1, 1, 4, 4, 4, 4, 5, 2, 6, 6, 7, 1, 8, 9, 2, 1])
test on 800k rows:
factorize 28.6 ms ± 153 µs
map 32.1 ms ± 110 µs
merge 68.6 ms ± 1.33 ms
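For reference, a minimal sketch of how such a benchmark could be reproduced, reusing df and d from the question (the tiling factor and timeit setup are assumptions; absolute numbers will vary with hardware and pandas version):
import timeit
import pandas as pd

big = pd.concat([df] * 27000, ignore_index=True)  # 30 rows tiled to ~810k
d1 = pd.DataFrame.from_dict(d, orient='index', columns=['res'])

print(timeit.timeit(lambda: big['key'].factorize(), number=10))
print(timeit.timeit(lambda: big['key'].map(d), number=10))
print(timeit.timeit(lambda: big.merge(d1, left_on='key', right_index=True), number=10))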

Related

Double header dataframe, sumif (possibly groupby?) with python

So here is an image of what I have and what I want to get: https://imgur.com/a/RyDbvZD
Basically, those are SUMIF formulas in Excel, and I would like to recreate them in Python. I was trying with pandas' groupby().sum(), but I have no clue how to group by two header rows like this, and then how to order the data.
Original dataframe:
df = pd.DataFrame( {'Group': {0: 'Name', 1: 20201001, 2: 20201002, 3: 20201003, 4: 20201004, 5: 20201005, 6: 20201006, 7: 20201007, 8: 20201008, 9: 20201009, 10: 20201010}, 'Credit': {0: 'Credit', 1: 65, 2: 69, 3: 92, 4: 18, 5: 58, 6: 12, 7: 31, 8: 29, 9: 12, 10: 41}, 'Equity': {0: 'Stock', 1: 92, 2: 62, 3: 54, 4: 52, 5: 14, 6: 5, 7: 14, 8: 17, 9: 54, 10: 51}, 'Equity.1': {0: 'Option', 1: 87, 2: 30, 3: 40, 4: 24, 5: 95, 6: 77, 7: 44, 8: 77, 9: 88, 10: 85}, 'Credit.1': {0: 'Credit', 1: 62, 2: 60, 3: 91, 4: 57, 5: 65, 6: 50, 7: 75, 8: 55, 9: 48, 10: 99}, 'Equity.2': {0: 'Option', 1: 61, 2: 91, 3: 38, 4: 3, 5: 71, 6: 51, 7: 74, 8: 41, 9: 59, 10: 31}, 'Bond': {0: 'Bond', 1: 4, 2: 62, 3: 91, 4: 66, 5: 30, 6: 51, 7: 76, 8: 6, 9: 65, 10: 73}, 'Unnamed: 7': {0: 'Stock', 1: 54, 2: 23, 3: 74, 4: 92, 5: 36, 6: 89, 7: 88, 8: 32, 9: 19, 10: 91}, 'Bond.1': {0: 'Bond', 1: 96, 2: 10, 3: 11, 4: 7, 5: 28, 6: 82, 7: 13, 8: 46, 9: 70, 10: 46}, 'Bond.2': {0: 'Bond', 1: 25, 2: 53, 3: 96, 4: 70, 5: 52, 6: 9, 7: 98, 8: 9, 9: 48, 10: 58}, 'Unnamed: 10': {0: float('nan'), 1: 63.0, 2: 80.0, 3: 17.0, 4: 21.0, 5: 30.0, 6: 78.0, 7: 23.0, 8: 31.0, 9: 72.0, 10: 65.0}} )
What I want at the end:
df = pd.DataFrame( {'Group': {0: 20201001, 1: 20201002, 2: 20201003, 3: 20201004, 4: 20201005, 5: 20201006, 6: 20201007, 7: 20201008, 8: 20201009, 9: 20201010}, 'Credit': {0: 127, 1: 129, 2: 183, 3: 75, 4: 123, 5: 62, 6: 106, 7: 84, 8: 60, 9: 140}, 'Equity': {0: 240, 1: 183, 2: 132, 3: 79, 4: 180, 5: 133, 6: 132, 7: 135, 8: 201, 9: 167}, 'Stock': {0: 146, 1: 85, 2: 128, 3: 144, 4: 50, 5: 94, 6: 102, 7: 49, 8: 73, 9: 142}, 'Option': {0: 148, 1: 121, 2: 78, 3: 27, 4: 166, 5: 128, 6: 118, 7: 118, 8: 147, 9: 116}} )
Any ideas on where to start with this would be appreciated.
Here you go. The first row seems to hold the real headers, so we first move it into the column names and set the index to Name:
df2 = df.rename(columns = df.loc[0]).drop(index = 0).set_index(['Name'])
Then we group by columns and sum:
df2.groupby(df2.columns, axis=1, sort=False).sum().reset_index()
and we get
Name Credit Stock Option Bond
0 20201001 127.0 146.0 148.0 125.0
1 20201002 129.0 85.0 121.0 125.0
2 20201003 183.0 128.0 78.0 198.0
3 20201004 75.0 144.0 27.0 143.0
4 20201005 123.0 50.0 166.0 110.0
5 20201006 62.0 94.0 128.0 142.0
6 20201007 106.0 102.0 118.0 187.0
7 20201008 84.0 49.0 118.0 61.0
8 20201009 60.0 73.0 147.0 183.0
9 20201010 140.0 142.0 116.0 177.0
I realise the output is not exactly what you asked for, but since we cannot see your SUMIF formulas, I do not know which columns you want to aggregate.
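Note that grouping on axis=1 is deprecated in recent pandas releases; if that affects you, a transpose-based equivalent (a sketch of the same idea) is:
df2.T.groupby(level=0, sort=False).sum().T.reset_index()
Here level=0 groups the transposed rows by their (duplicated) labels, which is what axis=1 grouping did on the columns.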
Edit
Following up on your comment: as far as I can tell, the rules for aggregation are somewhat messy, so the same input column is included in more than one output column (like Equity.1). I do not think there is much you can do with automation here; you can replicate your SUMIF experience by directly referencing the columns you want to add. So I think the following gives you what you want:
df = df.drop(index=0)                              # drop the embedded header row
df2 = df[['Group']].copy()
df2['Credit'] = df['Credit'] + df['Credit.1']      # the two 'Credit' columns
df2['Equity'] = df['Equity'] + df['Equity.1'] + df['Equity.2']
df2['Stock'] = df['Equity'] + df['Unnamed: 7']     # the two 'Stock' columns
df2['Option'] = df['Equity.1'] + df['Equity.2']    # the two 'Option' columns
df2
produces
Group Credit Equity Stock Option
-- -------- -------- -------- ------- --------
1 20201001 127 240 146 148
2 20201002 129 183 85 121
3 20201003 183 132 128 78
4 20201004 75 79 144 27
5 20201005 123 180 50 166
6 20201006 62 133 94 128
7 20201007 106 132 102 118
8 20201008 84 135 49 118
9 20201009 60 201 73 147
10 20201010 140 167 142 116
This also gives you control over which columns to include in the final output.
If you want this more automated, then you need to do something about the labels of your columns, since you would want a unique label for each set of columns you want to aggregate. If the same input column is used in more than one calculation, it is probably easiest to just duplicate it with the right labels, as in the sketch below.
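To illustrate that idea, a sketch starting again from the original df (header row still present); the duplicated column names here (Equity_stock and friends) are hypothetical, chosen only so each output label maps to a unique set of input columns:
df3 = df.drop(index=0).set_index('Group')
df3['Equity_stock'] = df3['Equity']      # 'Equity' also feeds Stock
df3['Equity.1_opt'] = df3['Equity.1']    # 'Equity.1' also feeds Option
df3['Equity.2_opt'] = df3['Equity.2']    # 'Equity.2' also feeds Option
label_map = {'Credit': 'Credit', 'Credit.1': 'Credit',
             'Equity': 'Equity', 'Equity.1': 'Equity', 'Equity.2': 'Equity',
             'Equity_stock': 'Stock', 'Unnamed: 7': 'Stock',
             'Equity.1_opt': 'Option', 'Equity.2_opt': 'Option'}
out = df3[list(label_map)].rename(columns=label_map)
out = out.T.groupby(level=0, sort=False).sum().T.reset_index()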

Is there a formulaic approach to find the frequency of the sum of combinations?

I have 5 strawberries, 2 lemons, and a banana. For each possible combination of these (including selecting 0), there is a total number of objects. I ultimately want a list of the frequencies at which these sums appear.
[1 strawberry, 0 lemons, 0 bananas] = 1 object
[2 strawberries, 0 lemons, 1 banana] = 3 objects
[0 strawberries, 1 lemon, 0 bananas] = 1 object
[2 strawberries, 1 lemon, 0 bananas] = 3 objects
[3 strawberries, 0 lemons, 0 bananas] = 3 objects
For just the above selection of 5 combinations, "1" has a frequency of 2 and "3" has a frequency of 3.
Obviously there are far more possible combinations, each changing the frequency result. Is there a formulaic way to approach the problem to find the frequencies for an entire set of combinations?
Currently, I've set up a brute-force function in Python.
import itertools

special_cards = {
'A':7, 'B':1, 'C':1, 'D':1, 'E':1, 'F':1, 'G':1, 'H':1, 'I':1, 'J':1, 'K':1, 'L':1,
'M':1, 'N':1, 'O':1, 'P':1, 'Q':1, 'R':1, 'S':1, 'T':1, 'U':1, 'V':1, 'W':1, 'X':1,
'Y':1, 'Z':1, 'AA':1, 'AB':1, 'AC':1, 'AD':1, 'AE':1, 'AF':1, 'AG':1, 'AH':1, 'AI':1, 'AJ':1,
'AK':1, 'AL':1, 'AM':1, 'AN':1, 'AO':1, 'AP':1, 'AQ':1, 'AR':1, 'AS':1, 'AT':1, 'AU':1, 'AV':1,
'AW':1, 'AX':1, 'AY':1
}
def _calc_dis_specials(special_cards):
    """Calculate the total combinations when special cards are factored in."""
    # Create an iterator for special card combinations.
    special_paths = _gen_dis_special_list(special_cards)
    freq = {}
    path_count = 0
    for o_path in special_paths:  # Loop through the iterator
        path_count += 1  # Keep track of how many combinations we've evaluated thus far.
        try:  # I've been told I can use a collections.Counter object instead of try/except.
            path_sum = sum(o_path)  # Sum the path (counting objects)
            new_count = freq[path_sum] + 1  # Try to increment the count for our sum.
            freq.update({path_sum: new_count})
        except KeyError:
            freq.update({path_sum: 1})
            print(f"{path_count:,}\n{freq}")  # Only printed when a new sum is first seen.
    print(f"{path_count:,}\n{freq}")
    # Do things with results yadda yadda

def _gen_dis_special_list(special_cards):
    """Generates an iterator over all combinations for special cards."""
    product_args = []
    for value in special_cards.values():  # A card's "value" is the maximum number that can be in a deck.
        product_args.append(range(value + 1))  # Each card can appear 0..value times.
    result = itertools.product(*product_args)
    return result
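As the comment in the loop notes, collections.Counter can replace the try/except bookkeeping; a minimal equivalent sketch:
from collections import Counter

def _calc_dis_specials_counter(special_cards):
    """Same tally as _calc_dis_specials, with Counter doing the bookkeeping."""
    return Counter(sum(path) for path in _gen_dis_special_list(special_cards))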
However, for large numbers of object pools (50+), the number of combinations just gets out of hand: billions upon billions of them. I need a formulaic approach.
Looking at some output, I notice a couple of things:
1
{0: 1}
2
{0: 1, 1: 1}
4
{0: 1, 1: 2, 2: 1}
8
{0: 1, 1: 3, 2: 3, 3: 1}
16
{0: 1, 1: 4, 2: 6, 3: 4, 4: 1}
32
{0: 1, 1: 5, 2: 10, 3: 10, 4: 5, 5: 1}
64
{0: 1, 1: 6, 2: 15, 3: 20, 4: 15, 5: 6, 6: 1}
128
{0: 1, 1: 7, 2: 21, 3: 35, 4: 35, 5: 21, 6: 7, 7: 1}
256
{0: 1, 1: 8, 2: 28, 3: 56, 4: 70, 5: 56, 6: 28, 7: 8, 8: 1}
512
{0: 1, 1: 9, 2: 36, 3: 84, 4: 126, 5: 126, 6: 84, 7: 36, 8: 9, 9: 1}
1,024
{0: 1, 1: 10, 2: 45, 3: 120, 4: 210, 5: 252, 6: 210, 7: 120, 8: 45, 9: 10, 10: 1}
2,048
{0: 1, 1: 11, 2: 55, 3: 165, 4: 330, 5: 462, 6: 462, 7: 330, 8: 165, 9: 55, 10: 11, 11: 1}
4,096
{0: 1, 1: 12, 2: 66, 3: 220, 4: 495, 5: 792, 6: 924, 7: 792, 8: 495, 9: 220, 10: 66, 11: 12, 12: 1}
8,192
{0: 1, 1: 13, 2: 78, 3: 286, 4: 715, 5: 1287, 6: 1716, 7: 1716, 8: 1287, 9: 715, 10: 286, 11: 78, 12: 13, 13: 1}
16,384
{0: 1, 1: 14, 2: 91, 3: 364, 4: 1001, 5: 2002, 6: 3003, 7: 3432, 8: 3003, 9: 2002, 10: 1001, 11: 364, 12: 91, 13: 14, 14: 1}
32,768
{0: 1, 1: 15, 2: 105, 3: 455, 4: 1365, 5: 3003, 6: 5005, 7: 6435, 8: 6435, 9: 5005, 10: 3003, 11: 1365, 12: 455, 13: 105, 14: 15, 15: 1}
65,536
{0: 1, 1: 16, 2: 120, 3: 560, 4: 1820, 5: 4368, 6: 8008, 7: 11440, 8: 12870, 9: 11440, 10: 8008, 11: 4368, 12: 1820, 13: 560, 14: 120, 15: 16, 16: 1}
131,072
{0: 1, 1: 17, 2: 136, 3: 680, 4: 2380, 5: 6188, 6: 12376, 7: 19448, 8: 24310, 9: 24310, 10: 19448, 11: 12376, 12: 6188, 13: 2380, 14: 680, 15: 136, 16: 17, 17: 1}
262,144
{0: 1, 1: 18, 2: 153, 3: 816, 4: 3060, 5: 8568, 6: 18564, 7: 31824, 8: 43758, 9: 48620, 10: 43758, 11: 31824, 12: 18564, 13: 8568, 14: 3060, 15: 816, 16: 153, 17: 18, 18: 1}
524,288
{0: 1, 1: 19, 2: 171, 3: 969, 4: 3876, 5: 11628, 6: 27132, 7: 50388, 8: 75582, 9: 92378, 10: 92378, 11: 75582, 12: 50388, 13: 27132, 14: 11628, 15: 3876, 16: 969, 17: 171, 18: 19, 19: 1}
1,048,576
{0: 1, 1: 20, 2: 190, 3: 1140, 4: 4845, 5: 15504, 6: 38760, 7: 77520, 8: 125970, 9: 167960, 10: 184756, 11: 167960, 12: 125970, 13: 77520, 14: 38760, 15: 15504, 16: 4845, 17: 1140, 18: 190, 19: 20, 20: 1}
2,097,152
{0: 1, 1: 21, 2: 210, 3: 1330, 4: 5985, 5: 20349, 6: 54264, 7: 116280, 8: 203490, 9: 293930, 10: 352716, 11: 352716, 12: 293930, 13: 203490, 14: 116280, 15: 54264, 16: 20349, 17: 5985, 18: 1330, 19: 210, 20: 21, 21: 1}
4,194,304
{0: 1, 1: 22, 2: 231, 3: 1540, 4: 7315, 5: 26334, 6: 74613, 7: 170544, 8: 319770, 9: 497420, 10: 646646, 11: 705432, 12: 646646, 13: 497420, 14: 319770, 15: 170544, 16: 74613, 17: 26334, 18: 7315, 19: 1540, 20: 231, 21: 22, 22: 1}
8,388,608
{0: 1, 1: 23, 2: 253, 3: 1771, 4: 8855, 5: 33649, 6: 100947, 7: 245157, 8: 490314, 9: 817190, 10: 1144066, 11: 1352078, 12: 1352078, 13: 1144066, 14: 817190, 15: 490314, 16: 245157, 17: 100947, 18: 33649, 19: 8855, 20: 1771, 21: 253, 22: 23, 23: 1}
16,777,216
{0: 1, 1: 24, 2: 276, 3: 2024, 4: 10626, 5: 42504, 6: 134596, 7: 346104, 8: 735471, 9: 1307504, 10: 1961256, 11: 2496144, 12: 2704156, 13: 2496144, 14: 1961256, 15: 1307504, 16: 735471, 17: 346104, 18: 134596, 19: 42504, 20: 10626, 21: 2024, 22: 276, 23: 24, 24: 1}
33,554,432
{0: 1, 1: 25, 2: 300, 3: 2300, 4: 12650, 5: 53130, 6: 177100, 7: 480700, 8: 1081575, 9: 2042975, 10: 3268760, 11: 4457400, 12: 5200300, 13: 5200300, 14: 4457400, 15: 3268760, 16: 2042975, 17: 1081575, 18: 480700, 19: 177100, 20: 53130, 21: 12650, 22: 2300, 23: 300, 24: 25, 25: 1}
67,108,864
{0: 1, 1: 26, 2: 325, 3: 2600, 4: 14950, 5: 65780, 6: 230230, 7: 657800, 8: 1562275, 9: 3124550, 10: 5311735, 11: 7726160, 12: 9657700, 13: 10400600, 14: 9657700, 15: 7726160, 16: 5311735, 17: 3124550, 18: 1562275, 19: 657800, 20: 230230, 21: 65780, 22: 14950, 23: 2600, 24: 325, 25: 26, 26: 1}
134,217,728
{0: 1, 1: 27, 2: 351, 3: 2925, 4: 17550, 5: 80730, 6: 296010, 7: 888030, 8: 2220075, 9: 4686825, 10: 8436285, 11: 13037895, 12: 17383860, 13: 20058300, 14: 20058300, 15: 17383860, 16: 13037895, 17: 8436285, 18: 4686825, 19: 2220075, 20: 888030, 21: 296010, 22: 80730, 23: 17550, 24: 2925, 25: 351, 26: 27, 27: 1}
268,435,456
{0: 1, 1: 28, 2: 378, 3: 3276, 4: 20475, 5: 98280, 6: 376740, 7: 1184040, 8: 3108105, 9: 6906900, 10: 13123110, 11: 21474180, 12: 30421755, 13: 37442160, 14: 40116600, 15: 37442160, 16: 30421755, 17: 21474180, 18: 13123110, 19: 6906900, 20: 3108105, 21: 1184040, 22: 376740, 23: 98280, 24: 20475, 25: 3276, 26: 378, 27: 28, 28: 1}
536,870,912
{0: 1, 1: 29, 2: 406, 3: 3654, 4: 23751, 5: 118755, 6: 475020, 7: 1560780, 8: 4292145, 9: 10015005, 10: 20030010, 11: 34597290, 12: 51895935, 13: 67863915, 14: 77558760, 15: 77558760, 16: 67863915, 17: 51895935, 18: 34597290, 19: 20030010, 20: 10015005, 21: 4292145, 22: 1560780, 23: 475020, 24: 118755, 25: 23751, 26: 3654, 27: 406, 28: 29, 29: 1}
1,073,741,824
{0: 1, 1: 30, 2: 435, 3: 4060, 4: 27405, 5: 142506, 6: 593775, 7: 2035800, 8: 5852925, 9: 14307150, 10: 30045015, 11: 54627300, 12: 86493225, 13: 119759850, 14: 145422675, 15: 155117520, 16: 145422675, 17: 119759850, 18: 86493225, 19: 54627300, 20: 30045015, 21: 14307150, 22: 5852925, 23: 2035800, 24: 593775, 25: 142506, 26: 27405, 27: 4060, 28: 435, 29: 30, 30: 1}
Note that I'm only printing when a new key (sum) is found.
I notice that
a new sum is found only on powers of 2 and
the results are symmetrical.
This hints to me that there's a formulaic approach that could work.
Any ideas on how to proceed?
Good news: there is a formula for this, and I'll explain the path to it in case there is any confusion.
Let's look at your initial example: 5 strawberries (S), 2 lemons (L), and a banana (B). Let's lay out all of the fruits:
S S S S S L L B
We can actually rephrase the question now, because the number of times that 3, for example, will be the total number is the number of different ways you can pick 3 of the fruits from this list.
In statistics, the choose function (a.k.a. nCk) answers just this question: how many ways are there to select a group of k items from a group of n items? This is computed as n!/((n-k)!*k!), where "!" is the factorial: a number multiplied by all positive integers less than itself. As such, the frequency of 3s would be (the number of fruits) "choose" (the total in question), or 8 choose 3, which is 8!/(5!*3!) = 56.
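As a quick check of that formula in Python (treating each individual fruit as a distinguishable object, as in the rephrasing above), math.comb produces the whole frequency table at once:
from math import comb  # Python 3.8+

n = 5 + 2 + 1  # 5 strawberries + 2 lemons + 1 banana = 8 objects
freq = {total: comb(n, total) for total in range(n + 1)}
print(freq[3])  # 56 == 8 choose 3
print(freq)     # {0: 1, 1: 8, 2: 28, 3: 56, 4: 70, ..., 8: 1} -- symmetric, as observed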

Group values based on columns and conditions in pandas

I want to group a pandas dataframe column based on the condition that the values are within a range of +20.
Below is the dataframe:
{'Name': {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E', 5: 'F'},
'ID': {0: 100, 1: 23, 2: 19, 3: 42, 4: 11, 5: 78},
'Left': {0: 70, 1: 70, 2: 70, 3: 70, 4: 66, 5: 66},
'Top': {0: 10, 1: 26, 2: 26, 3: 35, 4: 60, 5: 71}}
Here I want to group columns Left and Top.
This is what I did:
df.groupby(['Top'],as_index=False).agg(lambda x: list(x))
This is the result I got:
{'Top': {0: 10, 1: 26, 2: 35, 3: 60, 4: 71},
'Name': {0: ['A'], 1: ['B', 'C'], 2: ['D'], 3: ['E'], 4: ['F']},
'ID': {0: [100], 1: [23, 19], 2: [42], 3: [11], 4: [78]},
'Left': {0: [70], 1: [70, 70], 2: [43], 3: [66], 4: [66]}}
Desired output:
{'Top': {0: [10, 26], 2: 35, 3: [60,71]},
'Name': {0: ['A', 'B', 'C'], 2: ['D'], 3: ['E', 'F']},
'ID': {0: [100, 23, 19], 2: [42], 3: [11, 78]},
'Left': {0: [70, 50, 87], 2: [43], 3: [66, 99]}}
NOTE:
An important thing to consider is that Top values 10 and 26 are within a range of 20, so they form a group. 35 should not be added to that group, even though the difference between 26 and 35 is within 20, because 10 and 26 are already in a group and the difference between 10 (the least value in the group) and 35 is not within 20.
Is there any alternate way to solve this?
EDIT:
I have a different use-case in which the Top values increase, and when the content moves to a new page the Top value resets and starts increasing again. This goes on for different inputs. Finally, I want to group by Input File Name, Page Number and group. How can I group these?
{'Input File Name': {0: 268441,
1: 268441,
2: 268441,
3: 268441,
4: 268441,
5: 268441,
6: 268441,
7: 268441,
8: 268441,
9: 268441,
10: 268441,
11: 268441,
12: 268441,
13: 268441,
14: 268441,
15: 268441,
16: 268441,
17: 268441,
18: 268441,
19: 268441,
20: 268441,
21: 268441,
22: 268441,
23: 268441,
24: 268441,
25: 268441,
26: 268441,
27: 268441,
28: 268441,
29: 268441,
30: 268441,
31: 268441,
32: 268441,
33: 268441,
34: 268441,
35: 268441,
36: 268441,
37: 268441,
38: 268441,
39: 268441},
'Page Number': {0: 1,
1: 1,
2: 1,
3: 1,
4: 1,
5: 1,
6: 1,
7: 1,
8: 1,
9: 1,
10: 1,
11: 1,
12: 1,
13: 1,
14: 1,
15: 1,
16: 1,
17: 1,
18: 1,
19: 1,
20: 2,
21: 2,
22: 2,
23: 2,
24: 2,
25: 2,
26: 2,
27: 2,
28: 2,
29: 2,
30: 2,
31: 2,
32: 2,
33: 2,
34: 2,
35: 2,
36: 2,
37: 2,
38: 2,
39: 2},
'Content': {0: '3708 Forestview Road',
1: 'AvailableForLease&Sale',
2: '1,700± SFMedicalOffice',
3: '3708ForestviewRoad',
4: 'Suite107',
5: 'Raleigh,NC27612',
6: 'BuildingDescription',
7: '22,278± SFClassAOfficeBuilding',
8: 'OnlyOneSuiteLeft toLeaseand/orPurchase',
9: '(1)1,700± SFShell',
10: 'FlexibleLeaseTerms',
11: '2Floorsw/Elevator&Stairsto2',
12: 'Level',
13: 'nd',
14: 'ClassAFinishes',
15: 'On-SitePropertyManagement',
16: 'LargeGlass Windows',
17: '5:1Parking',
18: 'Formoreinformation,contact:',
19: 'OtherTenants: PivotPhysicalTherapy,TheLundy',
20: 'LeasingDetails',
21: 'SpaceDescription',
22: 'LeaseRate',
23: 'CompetitiveNNN+$5.50TICAM',
24: 'Tenant',
25: 'Suite107:1,700± SF',
26: 'Janitorial&Electric',
27: 'Responsibilities',
28: 'ShellSpacew/TIAllowance&Architecturals',
29: 'ClassABuilding',
30: 'SalePrice',
31: '$374,000or$220PSF',
32: 'BeautifulDouble-DoorEntry',
33: '1,700',
34: '± SF',
35: 'Size',
36: 'LargeGlassWindows',
37: 'ColdDarkShellw/TIAllowance',
38: '5:1Parking',
39: 'Upfit'},
'Top': {0: 6,
1: 6,
2: 49,
3: 103,
4: 103,
5: 103,
6: 590,
7: 637,
8: 656,
9: 676,
10: 695,
11: 716,
12: 716,
13: 717,
14: 736,
15: 755,
16: 775,
17: 794,
18: 813,
19: 835,
20: 111,
21: 138,
22: 142,
23: 142,
24: 169,
25: 174,
26: 179,
27: 190,
28: 195,
29: 216,
30: 217,
31: 217,
32: 238,
33: 247,
34: 247,
35: 248,
36: 259,
37: 274,
38: 282,
39: 285}}
You can write a function that assigns a group number to each Top value first and then use groupby on that column:
import pandas as pd
df = pd.DataFrame({'Name': {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E', 5: 'F'},
'ID': {0: 100, 1: 23, 2: 19, 3: 42, 4: 11, 5: 78},
'Left': {0: 70, 1: 70, 2: 70, 3: 70, 4: 66, 5: 66},
'Top': {0: 10, 1: 26, 2: 26, 3: 35, 4: 60, 5: 71}})
def group(l, group_range):
    """Label each value in l with a group number; a group ends once a value
    exceeds the group's first value by more than group_range."""
    groups = []
    current_group = []
    i = 0
    group_count = 1
    while i < len(l):
        a = l[i]
        if len(current_group) == 0:
            current_group_start = a  # first value of a new group
        if a <= current_group_start + group_range:
            current_group.append(group_count)
            i += 1
        else:
            # a is out of range of the group's first value: close the group
            groups.extend(current_group)
            current_group = []
            group_count += 1
    groups.extend(current_group)
    return groups
#group(df['Top'],20) -> [1, 1, 1, 2, 3, 3]
df['group'] = group(df['Top'],20)
df.groupby(['group'],as_index=False).agg(list)
Output:
group ID Left Name Top
0 1 [100, 23, 19] [70, 70, 70] [A, B, C] [10, 26, 26]
1 2 [42] [70] [D] [35]
2 3 [11, 78] [66, 66] [E, F] [60, 71]
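For the edited use-case, the same group function can be applied within each (Input File Name, Page Number) pair. A sketch, assuming the edited data is loaded into df, that Top is already sorted within each page, and that group returns one label per row:
df['group'] = (df.groupby(['Input File Name', 'Page Number'])['Top']
                 .transform(lambda s: group(list(s), 20)))
res = df.groupby(['Input File Name', 'Page Number', 'group'], as_index=False).agg(list)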

Getting the max from a nested default dictionary

I'm trying to obtain the maximum value from every dictionary in a default dictionary of default dictionaries using Python3.
Dictionary Set Up:
d = defaultdict(lambda: defaultdict(int))
My iterator runs through the dictionaries and the csv data I'm using just fine, but when I call max, it doesn't necessarily return the max every time.
Example output:
defaultdict(<class 'int'>, {0: 106, 2: 35, 3: 12})
max = (0, 106)
defaultdict(<class 'int'>, {0: 131, 1: 649, 2: 338, 3: 348, 4: 276, 5: 150, 6: 138, 7: 89, 8: 54, 9: 22, 10: 5, 11: 2})
max = (0, 131)
defaultdict(<class 'int'>, {0: 39, 1: 13, 2: 30, 3: 15, 4: 5, 5: 10, 6: 1, 8: 1})
max = (0, 39)
defaultdict(<class 'int'>, {0: 40, 1: 53, 2: 97, 3: 80, 4: 154, 5: 203, 6: 173, 7: 142, 8: 113, 9: 76, 10: 55, 11: 22, 12: 13, 13: 7})
max = (0, 40)
So sometimes it's right, but far from perfect.
My approach was informed by the answer to this question, but I adapted it to try and make it work for a nested default dictionary. Here's the code I'm using to find the max:
for sub_d in d:
    outer_dict = d[sub_d]
    print(max(outer_dict.items(), key=lambda x: outer_dict.get(x, 0)))
Any insight would be greatly appreciated. Thanks so much.
If you check the values produced by outer_dict.items(), they actually consist of (key, value) tuples, and since those tuples aren't keys in your dictionary, get returns 0 for every one of them, so max just returns the first item.
max(outer_dict.keys(), key=lambda x: outer_dict.get(x, 0))
will get you the key of the max value, and you can then retrieve the value itself by looking that key up in the dictionary.
In
max(outer_dict.items(), key=lambda x: outer_dict.get(x, 0))
the outer_dict.items() call returns an iterator that produces (key, value) tuples of the items in outer_dict. So the key function gets passed a (key, value) tuple as its x argument, and then tries to find that tuple as a key in outer_dict, and of course that's not going to succeed, so the get call always returns 0.
Instead, we can use a key function that extracts the value from the tuple, e.g.:
nested = {
'a': {0: 106, 2: 35, 3: 12},
'b': {0: 131, 1: 649, 2: 338, 3: 348, 4: 276, 5: 150, 6: 138, 7: 89,
8: 54, 9: 22, 10: 5, 11: 2},
'c': {0: 39, 1: 13, 2: 30, 3: 15, 4: 5, 5: 10, 6: 1, 8: 1},
'd': {0: 40, 1: 53, 2: 97, 3: 80, 4: 154, 5: 203, 6: 173, 7: 142,
8: 113, 9: 76, 10: 55, 11: 22, 12: 13, 13: 7},
}
for k, subdict in nested.items():
    print(k, max((t for t in subdict.items()), key=lambda t: t[1]))
output
a (0, 106)
b (1, 649)
c (0, 39)
d (5, 203)
A more efficient alternative to that lambda is to use itemgetter. Here's a version that puts the maxima into a dictionary:
from operator import itemgetter
nested = {
'a': {0: 106, 2: 35, 3: 12},
'b': {0: 131, 1: 649, 2: 338, 3: 348, 4: 276, 5: 150, 6: 138, 7: 89,
8: 54, 9: 22, 10: 5, 11: 2},
'c': {0: 39, 1: 13, 2: 30, 3: 15, 4: 5, 5: 10, 6: 1, 8: 1},
'd': {0: 40, 1: 53, 2: 97, 3: 80, 4: 154, 5: 203, 6: 173, 7: 142,
8: 113, 9: 76, 10: 55, 11: 22, 12: 13, 13: 7},
}
ig1 = itemgetter(1)
maxes = {k: max((t for t in subdict.items()), key=ig1)
         for k, subdict in nested.items()}
print(maxes)
output
{'a': (0, 106), 'b': (1, 649), 'c': (0, 39), 'd': (5, 203)}
We define ig1 outside the dictionary comprehension so that we don't call itemgetter(1) on every iteration of the outer loop.
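Applied back to the nested defaultdict from the question, the same fix is a one-line change to the original loop (a sketch; d is the defaultdict of defaultdicts set up above):
from operator import itemgetter

for sub_d in d:
    outer_dict = d[sub_d]
    # Compare items by their value (element 1 of each (key, value) tuple).
    print(sub_d, max(outer_dict.items(), key=itemgetter(1)))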

Is there a simple way to draw contour or surface maps from 3 columns in a pandas dataframe?

Didn't find an answer to that.
My data looks like this:
import pandas as pd
mapDF = pd.DataFrame({u'N1': {0: 20, 1: 20, 2: 20, 3: 20, 4: 20, 5: 21, 6: 21, 7: 21, 8: 21, 9: 21, 10: 22, 11: 22, 12: 22, 13: 22, 14: 22, 15: 23, 16: 23, 17: 23, 18: 23, 19: 23, 20: 24, 21: 24, 22: 24, 23: 24, 24: 24},
u'N2': {0: 50, 1: 51, 2: 52, 3: 53, 4: 54, 5: 50, 6: 51, 7: 52, 8: 53, 9: 54, 10: 50, 11: 51, 12: 52, 13: 53, 14: 54, 15: 50, 16: 51, 17: 52, 18: 53, 19: 54, 20: 50, 21: 51, 22: 52, 23: 53, 24: 54},
u'optGain': {0: 1.119175, 1: 1.11189, 2: 1.0984430000000001, 3: 1.0648280000000001, 4: 1.0459499999999999, 5: 1.149848, 6: 1.154882, 7: 1.1460840000000001, 8: 1.096886, 9: 1.098012, 10: 1.1416869999999999, 11: 1.118763, 12: 1.118763, 13: 1.098276, 14: 1.068576, 15: 1.1165069999999999, 16: 1.128744, 17: 1.128744, 18: 1.1070770000000001, 19: 1.0678430000000001, 20: 1.100743, 21: 1.096325, 22: 1.087421, 23: 1.090177, 24: 1.089968},
u'simGain': {0: 0.94936399999999999, 1: 0.94052000000000002, 2: 0.93819300000000005, 3: 0.90808299999999997, 4: 0.91296299999999997, 5: 0.94936399999999999, 6: 0.90771599999999997, 7: 0.90771599999999997, 8: 0.91296299999999997, 9: 0.85592699999999999, 10: 0.90232000000000001, 11: 0.90232000000000001, 12: 0.84629200000000004, 13: 0.75560000000000005, 14: 0.75560000000000005, 15: 0.92555200000000004, 16: 0.87239299999999997, 17: 0.87274600000000002, 18: 0.88428399999999996, 19: 0.83454799999999996, 20: 0.86954900000000002, 21: 0.88426899999999997, 22: 0.88746899999999995, 23: 0.82285200000000003, 24: 0.82285200000000003}})
and I am trying to make a contour map or surface plot where N1 and N2 are the x and y axes and the optGain column is the value.
I tried pylab and matplotlib, but neither takes the DataFrame directly and both require meshing the data. This is cumbersome, since I am not looking to plot a function but rather already existing data.
Any help or pointers will be appreciated.
Thanks!
See the answers to this question.
In general, you could do this with
import matplotlib.pyplot as plt
plt.scatter(mapDF.N1.values, mapDF.N2.values, c=mapDF.optGain.values, s=500)
plt.gray()
plt.show()
It will draw the points in a continuous grey scale according to the values.
Edit
The above draws a grey-scale scatter plot. As Syrtis Major notes in the comments, try tricontour for a 2D contour plot. With this function, note the colors parameter, specifically:
If a tuple of matplotlib color args (string, float, rgb, etc), different levels will be plotted in different colors in the order specified.
So you can use your mapDF.optGain.values to build RGB colors for different levels.
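For example, a minimal sketch using tricontourf (the filled variant of tricontour; it triangulates the scattered points internally, so no meshgrid is needed):
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# x and y are the scattered N1/N2 values; the third argument is the value to contour.
tcf = ax.tricontourf(mapDF.N1.values, mapDF.N2.values, mapDF.optGain.values)
fig.colorbar(tcf, ax=ax, label='optGain')
ax.set_xlabel('N1')
ax.set_ylabel('N2')
plt.show()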
