How to calculate the Euclidean value of a dictionary with a tuple as key - Python

I have created a matrix by using a dictionary with a tuple as the key (e.g. {(user, place): 1}).
I need to calculate the Euclidean value for each place in the matrix.
I've created a method to do this, but it is extremely inefficient because it iterates through the entire matrix for each place.
def calculateEuclidian(self, place):
    # requires `import math` at module level
    count = 0
    for key, value in self.matrix.items():
        # count entries whose key is (user, place) and whose value is 1
        if key[1] == place and value == 1:
            count += 1
    return math.sqrt(count)
Is there a way to do this more efficiently?
I need the result to be in a dictionary with the place as a key, and the euclidian as the value.

You can build the result with a dictionary comprehension, using sum() over a generator expression to accumulate the conditionals (True counts as 1, False as 0), and take the square root of the total:
def calculateEuclidian(self, place):
    count = sum(p == place and val == 1 for (_, p), val in self.matrix.items())
    return {place: math.sqrt(count)}
With your current data structure, I doubt there is any way you can avoid iterating through the entire dictionary.

If you cannot use another way (or an auxiliary way) of representing your data, iterating through every element of the dict is as efficient as you can get (asymptotically), since there is no way to ask a dict with tuple keys to give you all elements with keys matching (_, place) (where _ denotes "any value"). There are other, and more succinct, ways of writing the iteration code, but you cannot escape the asymptotic efficiency limitation.
If this is your most common operation, and you can in fact use another way of representing your data, you can use a dict[Place, list[User]] instead. That way, you can, in O(1) time, get the list of all users at a certain place, and all you would need to do is count the items in the list using the len(...) function which is also O(1). Obviously, you'll still need to take the sqrt in the end.
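A minimal sketch of that restructuring (matrix, user and place follow the question; users_at is an illustrative name):
import math

# one-time O(len(matrix)) build: place -> list of users present there
users_at = {}
for (user, place), value in matrix.items():
    if value == 1:
        users_at.setdefault(place, []).append(user)

def calculateEuclidian(place):
    # len() is O(1), so each query avoids scanning the whole matrix
    return math.sqrt(len(users_at.get(place, [])))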

There may be ways to make it more Pythonic, but I do not think you can change the overall complexity, since you are making a query based on both the key and the value. I think you have to search the whole matrix for your instances.

You may want to build a second dictionary from your current one, which isn't suited to this kind of search: a dictionary with place as the key and a list of (user, value) tuples as the value.
Get the tuple list under the place key (that lookup is fast), then count the entries where value is 1 (linear, but over a small subset of the data).
Keep the original dictionary for the Euclidean computation, and note that if the data changes often you will need to keep both dicts in sync (see the sketch below).
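A rough sketch of that auxiliary dictionary (illustrative names; the value == 1 count mirrors the original method):
import math

# place -> list of (user, value) tuples, built once from the original matrix
by_place = {}
for (user, place), value in matrix.items():
    by_place.setdefault(place, []).append((user, value))

def euclidian_for(place):
    entries = by_place.get(place, [])              # fast key lookup
    count = sum(1 for _, v in entries if v == 1)   # linear, but over a small subset
    return math.sqrt(count)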

Related

Is there an efficient way to fill a numba `Dict` in parallel?

I'm having some trouble quickly filling a numba Dict object with key-value pairs (around 63 million of them). Is there an efficient way to do this in parallel?
The documentation (https://numba.pydata.org/numba-doc/dev/reference/pysupported.html#typed-dict) is clear that numba.typed.Dict is not thread-safe, so I think using prange with a single Dict object would be a bad idea. I've tried using a numba List of Dicts, populating them in parallel and then stitching them together with update, but I think this last step is also inefficient.
Note that one thing (which may be important) is that all the keys are unique, i.e. once assigned, a key will not be reassigned a value. I think this property makes the problem amenable to an efficient parallelised solution.
Below is an example of the serial approach, which is slow with a large number of key-value pairs.
from numba import njit, typed, types

d = typed.Dict.empty(
    key_type=types.UnicodeCharSeq(128), value_type=types.int64
)

@njit
def fill_dict(keys_list, values_list, d):
    n = len(keys_list)
    for i in range(n):
        d[keys_list[i]] = values_list[i]

fill_dict(keys_list, values_list, d)
Can anybody help me?
Many thanks.
You don't have to stitch them together if you shard by key: hash each key to an integer and take it modulo num_shard to decide which dictionary it belongs to.
# assuming hash() returns an arbitrary integer computed from the key
# lookup
shard = hash(key) % num_shard
selected_dictionary = dictionary[shard]
value = selected_dictionary[key]

# insertion: only the selected shard needs to be locked
shard = hash(key) % num_shard
selected_dictionary = dictionary[shard]
selected_dictionary[key] = value
The hash could be something as simple as the sum of the ASCII codes of the characters in the key. The modulo-based indexing separates the keys into independent blocks, so the shards can be filled independently with no extra processing beyond the hashing (see the sketch below).
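A plain-Python illustration of the sharding idea (num_shards, shard_of and fill_shards are illustrative names of mine; turning each per-shard fill into an @njit function would follow the question's fill_dict pattern, one shard per parallel worker):
num_shards = 8
shards = [{} for _ in range(num_shards)]   # one dict per shard, filled independently

def shard_of(key):
    # cheap deterministic hash: sum of character codes, as suggested above
    return sum(ord(c) for c in key) % num_shards

def fill_shards(keys_list, values_list):
    for k, v in zip(keys_list, values_list):
        shards[shard_of(k)][k] = v         # each key lands in exactly one shard

def lookup(key):
    return shards[shard_of(key)][key]      # no merge step is ever needed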

Tuple-key dictionary in python: Accessing a whole block of entries

I am looking for an efficient Python method to utilise a hash table that is keyed on two values:
E.g.:
(1,5) --> {a}
(2,3) --> {b,c}
(2,4) --> {d}
Further I need to be able to retrieve whole blocks of entries, for example all entries that have "2" at the 0-th position (here: (2,3) as well as (2,4)).
In another post it was suggested to use list comprehension, i.e.:
sum(val for key, val in dict.items() if key[0] == 'B')
I learned that dictionaries are (probably?) the most efficient way to retrieve a value from an object of key:value pairs. However, querying with only an incomplete tuple key is a bit different from querying the whole key, where I either get a value or nothing. Can Python still return the matching values in time proportional to the number of key:value pairs that match? Or alternatively, is the tuple dictionary (plus list comprehension) better than using pandas.df.groupby() (which would occupy rather a lot of memory)?
The "standard" way would be something like
from random import randint

d = {(randint(1, 10), i): "something" for i in range(200)}

def byfilter(n, d):
    return list(filter(lambda x: x[0] == n, d.keys()))

byfilter(5, d)  # returns a list of tuples where x[0] == 5
Although in similar situations I often used next() to iterate manually, when I didn't need the full list.
However, there may be some use cases where we can optimize this. Suppose you need to do a couple of accesses (or more) by the first element of the key, and you know the dict keys are not changing in the meantime. Then you can extract the keys into a list and sort it, and make use of some itertools functions, namely dropwhile() and takewhile():
from itertools import dropwhile, takewhile

ls = list(d.keys())
ls.sort()  # I do not know why, but this seems faster than ls = sorted(d.keys())

def bysorted(n, ls):
    return list(takewhile(lambda x: x[0] == n, dropwhile(lambda x: x[0] != n, ls)))

bysorted(5, ls)  # returns the same list as above
This can be up to 10x faster in the best case (n=1 in my example) and takes more or less the same time in the worst case (n=10), because we are trimming the number of iterations needed.
Of course you can do the same for accessing keys by x[1]; you just need to add a key parameter to the sort() call, as in the sketch below.
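For instance, a minimal sketch of the second-element variant (reusing the illustrative dict d from above; bysorted_second is a name of mine):
from itertools import dropwhile, takewhile

ls2 = list(d.keys())
ls2.sort(key=lambda x: x[1])   # order by the second element of each tuple key

def bysorted_second(n, ls2):
    return list(takewhile(lambda x: x[1] == n, dropwhile(lambda x: x[1] != n, ls2)))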

Finding min value in a dictionary in O(1) time Python

I need a way to find the minimum value in a dictionary full of Node objects in O(1) time, or really any sublinear time, if possible.
Here's an example of what I'd need:
'''
Nodes have 4 attributes:
- stack
- backwards
- forwards
- total_score
'''
frontier = {str(Node1.stack): Node1,
            str(Node2.stack): Node2, ...}  # note that keys are the stacks as strings

key, smallest = min(frontier.items(), key=lambda pair: pair[1].total_score)
( ^^^ something better than this! ^^^ )
The last line above (key, smallest ... ) is what I have so far. It works fine, but it's too slow. I read online that the min() function takes O(n) time. I have a lot of Nodes to process, so something faster would be amazing.
Edit: I should have mentioned before, but this is running inside an A* algorithm, and frontier is updated dynamically. The operations I need to be able to do are:
Find minimum in O(1), or at least < O(n)
Update values of specific elements quickly
Access attributes easily
It's impossible to get the min value from a dictionary in O(1) time because you have to check every value. However, you can do a fast lookup if you store your data in a heap or a sorted tree, where the data is sorted by value. Trees generally give you insertion and search times of O(log n).
If you literally only need the min value of your data and you don't ever need to look up other values, you could just create a minValue variable that you keep updated every time you insert or remove items.
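For the A*-style frontier described in the edit, a common pattern is to keep a heapq binary heap alongside the dictionary and discard stale entries lazily; a rough sketch (the Node fields follow the question, the stale-entry bookkeeping is my assumption):
import heapq

frontier = {}   # str(node.stack) -> Node, as in the question
heap = []       # (total_score, key) pairs; may contain stale entries

def push(node):
    key = str(node.stack)
    frontier[key] = node
    heapq.heappush(heap, (node.total_score, key))        # O(log n)

def pop_min():
    # skip heap entries whose score no longer matches the dict ("lazy deletion")
    while heap:
        score, key = heapq.heappop(heap)                 # O(log n)
        node = frontier.get(key)
        if node is not None and node.total_score == score:
            del frontier[key]
            return key, node
    raise KeyError("frontier is empty")

def update(node):
    # re-push with the new score; the old heap entry goes stale and is skipped later
    push(node)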
It is impossible to get the smallest value in a dictionary in O(1); it will take O(n).
Alternatively, you can maintain the minimum yourself while adding elements to the dictionary: compare each new value against the current minimum at insertion time, as in the sketch below.
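A minimal sketch of that running-minimum bookkeeping (illustrative names; it only stays valid while you insert, since removing or re-scoring the current minimum would force a rescan):
import math

frontier = {}
min_key, min_score = None, math.inf

def insert(key, node):
    global min_key, min_score
    frontier[key] = node
    if node.total_score < min_score:     # O(1) comparison at insertion time
        min_key, min_score = key, node.total_score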

How to update a large dictionary using a list quickly?

I am looking for a fast way to update the values in an (ordered) dictionary, which contains tens of millions of values, where the updated values are stored in a list/array.
The program I am writing takes the list of keys from the original dictionary (which are numerical tuples) as a numpy array, and passes them through a function which returns an array of new numbers (one for each key value). This array is then multiplied with the corresponding dictionary values (through piece-wise array multiplication), and it is this returned 1-D array of values that we wish to use to update the dictionary. The entries in the new array are stored in the order of the corresponding keys, so I could use a loop to go through the dictionary and update the values one by one. But this is too inefficient. Is there a faster way to update the values in this dictionary that doesn't use loops?
An example of a similar problem would be if the keys in a dictionary represent the x and y-coordinates of points in space, and the values represent the forces being applied at that point. If we want to calculate the torque experienced at each point from the origin, we would first need a function like:
def euclid(xy):
    return (xy[0]**2 + xy[1]**2)**0.5
Which, if xy represents the x, y-tuple, would return the Euclidean distance from the origin. We could then multiply this by the corresponding dictionary value to return the torque, like so:
for xy in dict.keys():
    dict[xy] = euclid(xy) * dict[xy]
But this loop is slow, and we could take advantage of array algebra to get the new values in one operation:
keys = np.array(list(dict.keys()))    # shape (n, 2)
new_dict_values = euclid(keys.T) * np.array(list(dict.values()))  # euclid sees the x and y columns as rows
And it is here that we wish to find a fast method to update the dictionary, instead of utilising:
i = 0
for key in dict.keys():
    dict[key] = new_dict_values[i]
    i += 1
That last piece of code isn't just slow. I don't think it does what you want it to do:
for key in dict.keys():
    for i in range(len(new_dict_values)):
        dict[key] = new_dict_values[i]
For every key in the dictionary, you are iterating through the entire list of new_dict_values and assigning each one to the value of that key, overwriting the value you assigned in the previous iteration of the loop. This will give you a dictionary where every key has the value of the last element in new_dict_values, which I don't think is what you want.
If you are certain that the order of the keys in the dictionary is the same as the order of the values in new_dict_values, then you can do this:
for key, value in zip(dict.keys(), new_dict_values):
    dict[key] = value
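Under the same ordering assumption you can also hand the whole pairing to the dict at once; a small sketch, using d for the dictionary to avoid the shadowed built-in name dict from the question:
# in-place: zip pairs the i-th key with the i-th new value
d.update(zip(list(d.keys()), new_dict_values))

# or build a fresh mapping in one call
d = dict(zip(d.keys(), new_dict_values))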
Edit: Also, in Python there is no need to iterate over a range of indices and access list elements by index. This:
for i in range(len(new_dict_values)):
    dict[key] = new_dict_values[i]
is equivalent to this:
for value in new_dict_values:
    dict[key] = value

Why can’t you use Hash Tables/Dictionaries in the Counting Sort algorithm?

When you use the counting sort algorithm you create a list, and use its indices as keys while adding the number of integer occurrences as the values within the list. Why is this not the same as simply creating a dictionary with the keys as the index and the counts as the values? Such as:
hash_table = collections.Counter(numList)
or
hash_table = {x:numList.count(x) for x in numList}
Once you have created your hash table, you essentially just copy the number of integer occurrences over to another list. Hash tables/dictionaries have O(1) lookup times, so why would this not be preferable if you're simply referencing the key/value pairs?
I've included the algorithm for Counting Sort below for reference:
def counting_sort(the_list, max_value):
    # List of 0's at indices 0...max_value
    num_counts = [0] * (max_value + 1)

    # Populate num_counts
    for item in the_list:
        num_counts[item] += 1

    # Populate the final sorted list
    sorted_list = []

    # For each item in num_counts
    for item, count in enumerate(num_counts):
        # For the number of times the item occurs
        for _ in range(count):
            # Add it to the sorted list
            sorted_list.append(item)

    return sorted_list
You certainly can do something like this. The question is whether it’s worthwhile to do so.
Counting sort has a runtime of O(n + U), where n is the number of elements in the array and U is the maximum value. Notice that as U gets larger and larger the runtime of this algorithm starts to degrade noticeably. For example, if U > n and I add one more digit to U (for example, changing it from 1,000,000 to 10,000,000), the runtime can increase by a factor of ten. This means that counting sort starts to become impractical as U gets bigger and bigger, and so you typically run counting sort when U is fairly small. If you’re going to run counting sort with a small value of U, then using a hash table isn’t necessarily worth the overhead. Hashing items costs more CPU cycles than just doing standard array lookups, and for small arrays the potential savings in memory might not be worth the extra time. And if you’re using a very large value of U, you’re better off switching to radix sort, which essentially is lots of smaller passes of counting sort with a very small value of U.
The other issue is that the reassembly step of counting sort has amazing locality of reference - you simply scan over the counts array and the input array in parallel filling in values. If you use a hash table, you lose some of that locality because the elements in the hash table aren’t necessarily stored consecutively.
But these are more implementation arguments than anything else. Fundamentally, counting sort is less about “use an array” and more about “build a frequency histogram.” It just happens to be the case that a regular old array is usually preferable to a hash table when building that histogram.
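For comparison, a dictionary-based variant of the same frequency-histogram idea, along the lines the question proposes (a sketch, not a claim that it beats the array version):
from collections import Counter

def counting_sort_dict(the_list, max_value):
    counts = Counter(the_list)                      # hash table instead of a counts array
    sorted_list = []
    # walking 0..max_value keeps the output ordered without sorting the keys
    for item in range(max_value + 1):
        sorted_list.extend([item] * counts[item])   # missing keys count as 0
    return sorted_list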
