Is it good practice to use the set function to list all the unique elements, or are there better approaches with lower time and space complexity?
Simply calling set is the best way to find the unique elements if:
the items are hashable, and
you don't require the original ordering preserved
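For example, assuming seq is any iterable of hashable items:
unique = list(set(seq))  # order not preserved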
It's already O(n), and you can't improve on that asymptotically. If you require the ordering preserved:
from collections import OrderedDict
list(OrderedDict.fromkeys(seq))
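On Python 3.7 and later, plain dicts preserve insertion order, so the same trick works without the import:
list(dict.fromkeys(seq))  # keeps the first occurrence of each item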
That's still O(n) asymptotically, but it will generally be slower due to more overhead (Python loop vs C loop). If you have to deal with unhashable elements, you may need an O(n^2) approach using a list:
unique = []
for item in seq:
    if item not in unique:  # linear membership test makes this O(n^2) overall
        unique.append(item)
I have data which consists of numerical keys and values.
I need to increase all keys and values by a number N.
When I use dictionaries for a big amount of data, my code works very slowly.
What is the best way to store this data, and the best way to increase the values of the pairs?
Example:
N=2
{1:4,3:6,2:1}
expected result:
{3:6,5:8,4:2}
We can't actually do anything faster if you want to change the whole contents of the dictionary. Even if you run a for loop, we can't be sure of O(N) complexity, because there can be rehashing operations internally.
The best thing is to smartly use one extra variable in memory for updates. (Note that del is a reserved keyword in Python, so the variable is called offset here.)
Initially:
offset = 0 and d = {1: 4, 3: 6, 2: 1}
When you want to increase the keys and values by N, update:
offset += N
When retrieving the value for key k from the dictionary, use:
d[k - offset] + offset
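Here is a minimal sketch of that trick wrapped in a class (the class and method names are illustrative, not from the original answer):

class ShiftedDict:
    def __init__(self, data):
        self._d = dict(data)
        self._offset = 0  # accumulated shift applied to keys and values

    def increase_all(self, n):
        # O(1): record the shift instead of rewriting every pair
        self._offset += n

    def __getitem__(self, key):
        # Translate the requested key back to the stored key,
        # then apply the shift to the stored value.
        return self._d[key - self._offset] + self._offset

d = ShiftedDict({1: 4, 3: 6, 2: 1})
d.increase_all(2)
print(d[3])  # 6, consistent with the expected {3: 6, 5: 8, 4: 2}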
Otherwise, the best you can do is O(N), whichever data structure you use: you will have to visit each element and increment its key and value.
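For reference, the direct O(N) rebuild looks like this, using the example data from the question:

n = 2
d = {1: 4, 3: 6, 2: 1}
shifted = {k + n: v + n for k, v in d.items()}
print(shifted)  # {3: 6, 5: 8, 4: 2}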
There are two major variables (calls and puts) and several sub-variables (e.g. bid, change, time, etc.). For example, suppose there are five data points in total. I know how to access them separately:
data[u'options'][0]["calls"][0]["change"]['fmt'], data[u'options'][0]["calls"][1]["change"]['fmt'], data[u'options'][0]["calls"][2]["change"]['fmt'], data[u'options'][0]["calls"][3]["change"]['fmt'],data[u'options'][0]["calls"][4]["change"]['fmt']
but that takes too much time. I wonder how to select multiple items in one piece of code.
You can do this with a little bit of list comprehension, if I understand your question properly.
For each value in data["options"][0]["calls"], it adds that value's ["change"]["fmt"] entry to the list:
d = [call["change"]["fmt"] for call in data["options"][0]["calls"]]
If you want a list of EACH value from every set of options, you could do it like so:
d = [[call["change"]["fmt"] for call in option["calls"]] for option in data["options"]]
and now you can say
for option in d:
    for call in option:
        print(call)
[data[u'options'][0]["calls"][i]["change"]['fmt'] for i in range(5)]
I don't quite understand your problem; is this what you're after?
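For reference, a minimal sketch of the data shape all of these snippets assume (the field names follow the question, but the values are made up):

data = {
    "options": [
        {"calls": [
            {"change": {"fmt": "+0.05"}},
            {"change": {"fmt": "-0.10"}},
        ]},
    ],
}
print([call["change"]["fmt"] for call in data["options"][0]["calls"]])
# prints ['+0.05', '-0.10']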
I was going through this answer on Stack Overflow and came to know about the existence of OrderedSet in Python. I would like to know how it is implemented internally. Is it similar to the hash table implementation of sets?
Also, what is the time complexity of some of the common operations like insert, delete, find, etc.?
From the documentation available here:
Implementation based on a doubly linked list and an internal dictionary. This design gives OrderedSet the same big-O running times as regular sets, including O(1) adds, removes, and lookups, as well as O(n) iteration.
There is also a discussion on the topic, see Does Python have an ordered set?
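Here is a minimal sketch of the design that documentation describes (illustrative code, not the recipe's actual implementation): the dictionary maps each element to a node of a circular doubly linked list, so membership is a plain dict lookup, and add/discard splice the list in O(1):

class MiniOrderedSet:
    def __init__(self):
        # Sentinel node of a circular doubly linked list: [key, prev, next]
        self.end = end = []
        end += [None, end, end]
        self.map = {}  # key -> list node [key, prev, next]

    def add(self, key):  # O(1)
        if key not in self.map:
            end = self.end
            prev = end[1]  # current last node
            node = [key, prev, end]
            prev[2] = end[1] = self.map[key] = node

    def discard(self, key):  # O(1)
        node = self.map.pop(key, None)
        if node is not None:
            _, prev, nxt = node
            prev[2] = nxt
            nxt[1] = prev

    def __contains__(self, key):  # O(1), same as a regular set
        return key in self.map

    def __iter__(self):  # O(n), in insertion order
        node = self.end[2]
        while node is not self.end:
            yield node[0]
            node = node[2]

s = MiniOrderedSet()
for x in (3, 1, 3, 2):
    s.add(x)
print(list(s))  # [3, 1, 2]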
If list comprehensions are better than filter, as they perform slightly better and are considered more readable (arguably, in my opinion), why does filter even exist?
I use it all the time, but if the consensus is that list comprehensions are better, what are the reasons we still have the filter function?
Way, way back in the day, way before we had list comprehensions, some guy who liked functional programming wrote up map and filter and submitted the change, and it got put in. That's about it.
Suppose I have a list of a hundred natural numbers, a set of a hundred natural numbers, and a dictionary of a hundred natural numbers (assuming both keys and values are natural numbers). I want to access an element in these data types. Which will be the more efficient and faster way to access it? I know I can use performance tools like timeit or cProfile to check the performance, but how will I know which data type to choose, and when?
At a high level:
list lookups are O(n), where n is the length of the list.
set and dict lookups are O(1) on average, since both are hash-based.
This is basic big-O notation, or complexity analysis.
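A quick way to see this yourself (a minimal sketch; the container sizes and the probed value are arbitrary):

import timeit

data = list(range(100))
as_list = data
as_set = set(data)
as_dict = dict.fromkeys(data)

# Membership test: the list scans linearly; the set and dict hash the key.
for name, container in (("list", as_list), ("set", as_set), ("dict", as_dict)):
    t = timeit.timeit(lambda: 99 in container, number=100_000)
    print(f"{name}: {t:.4f}s")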