Often you see a question asking for a better way of doing something, or just a general looping question, and very often the top answers use some convoluted list/dict/tuple comprehension that takes others longer to understand than it would to write themselves, when a simple, understandable loop would have done the job.
Since I can't imagine it providing any speed benefit, is there any use for comprehensions in Python other than to look smart or be Pythonic?
Thanks.
I believe the goal here is to make your code as concise and efficient as possible. A comprehension can look convoluted, but it is usually somewhat faster than an equivalent for loop that builds a list with .append(): the comprehension's loop runs in specialized bytecode and avoids the repeated attribute lookup and method call on every iteration, which adds up across many iterations in a large application.
Additionally, although it can seem harder to understand at first, for an outside reader a short, well-chosen comprehension is often quicker to grasp than pages of loops when they are trying to get an idea of what you're attempting to accomplish.
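As a rough illustration (exact numbers vary by machine and Python version), you can measure the difference yourself with timeit; the comprehension version is typically measurably faster:

import timeit

def with_loop():
    result = []
    for i in range(1000):
        result.append(i * 2)  # attribute lookup + method call every iteration
    return result

def with_comprehension():
    return [i * 2 for i in range(1000)]  # loop runs in specialized bytecode

print(timeit.timeit(with_loop, number=10_000))
print(timeit.timeit(with_comprehension, number=10_000))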
If code is divided into too many segments, can this make the program slow?
For example, creating a separate file for just a single function.
In my case I'm using Python. Suppose there are two functions that I need in main.py, and I place each of them in a separate file containing just that function.
Suppose also that both functions use the same library, even though they now live in separate files.
How does this affect efficiency, both machine-performance-wise and team-wise?
It depends on the language, the framework you use, etc. However, dividing the code too finely can make it unreadable, which is usually the bigger problem. Since you will (or should) be working in a team most of the time, consider how readable your code will be for them.
However, answering this in a definite way is difficult. You should ask a senior developer on your team for guidelines.
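Machine-performance-wise, splitting two functions into two files costs almost nothing in Python: a module is executed once on first import and then cached in sys.modules, so even if both files import the same library, only one loaded copy exists. A minimal sketch (file names are made up):

# utils_a.py
import math  # executed on first import, then cached in sys.modules

def area(radius):
    return math.pi * radius ** 2

# utils_b.py
import math  # already cached; this import is just a dictionary lookup

def circumference(radius):
    return 2 * math.pi * radius

# main.py
from utils_a import area
from utils_b import circumference

print(area(1.0), circumference(1.0))

The only measurable cost is a tiny one-time import overhead per file at startup; it does not slow the program down while it runs.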
For example, suppose we have two ways to get the median of the lengths of the words in an input sentence string (this is just a simple example):
import statistics

def tokenize(sentence):
    return sentence.split()  # stand-in tokenizer (assumption)

def get_median(sentence):
    words = tokenize(sentence)
    lengths_of_words = [len(word) for word in words]
    median = statistics.median(lengths_of_words)
    return median
This method is 4 lines long, but describes every component.
Its counterpart is:
def get_median(sentence):
    return statistics.median([len(x) for x in tokenize(sentence)])
Even though the second seems more Pythonic and smooth, the first is more descriptive and compartmentalized. I can't find a clear consensus on this: which should generally be preferred, and which is considered more readable? And why?
In my opinion the second way is better because it's far less cluttered. If you're concerned that someone might not understand what the code snippet does, you can just write a comment above it explaining it. Why have cluttered code when you can do the same thing in a way that's easier on the eye when reading?
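For instance, the compact version from the question plus a one-line comment stays short while still telling the reader exactly what it does:

def get_median(sentence):
    # median word length of the tokenized sentence
    return statistics.median([len(x) for x in tokenize(sentence)])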
I'm wondering which approach I should take for my project. I'm going to operate on about 100,000 rows each time.
What I wanted to do is build a nested dict ("{}") and then, whenever I need a value, just look it up directly, for example
data['2018']['09']['Marketing']['AccountName']
The second option is to pull everything into a list ("[]") and, whenever I need a value, run a function that goes through the list and sums the numbers for the given parameters.
I don't know which method is faster; a sketch of both options is below.
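A minimal sketch of the two options (field names are made up): the nested dict answers a query with a handful of O(1) hash lookups, while the list forces an O(n) scan over all ~100,000 rows per query.

# Option 1: nested dict -- one hash lookup per level, O(1) per query
data = {'2018': {'09': {'Marketing': {'AccountName': 1250.0}}}}
value = data['2018']['09']['Marketing']['AccountName']

# Option 2: flat list of row records -- O(n) scan per query
rows = [
    {'year': '2018', 'month': '09', 'dept': 'Marketing',
     'account': 'AccountName', 'amount': 1250.0},
    # ... ~100,000 rows
]

def total(rows, year, month, dept, account):
    return sum(r['amount'] for r in rows
               if r['year'] == year and r['month'] == month
               and r['dept'] == dept and r['account'] == account)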
I'd be thankful if you could shed some light on this.
Thanks in advance.
If performance (speed) is an issue, Python might not be the ideal choice...
Otherwise:
Might I suggest the use of a proper database, such as SQLite (whose sqlite3 module ships with Python)?
And maybe SQLAlchemy as an abstraction layer. (https://docs.sqlalchemy.org/en/latest/orm/tutorial.html)
After all, they were made exactly for this kind of task.
If that seems overkill: Have a look at Pandas.
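For instance, with Pandas the whole "sum the numbers for specific parameters" query becomes a short, vectorized expression (reusing the made-up row records from the sketch in the question above):

import pandas as pd

df = pd.DataFrame(rows)  # the ~100,000 row records from above

mask = ((df['year'] == '2018') & (df['month'] == '09')
        & (df['dept'] == 'Marketing') & (df['account'] == 'AccountName'))
total = df.loc[mask, 'amount'].sum()

The filtering and summing run in compiled code, so this is typically much faster than an equivalent pure-Python loop over the list.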
I was going through this answer on Stack Overflow, and I came to know about the existence of OrderedSet in Python. I would like to know how it is implemented internally. Is it similar to the hash table implementation of sets?
Also, what is the time complexity of some of the common operations like insert, delete, find, etc.?
From the documentation available here:
"Implementation based on a doubly linked list and an internal dictionary. This design gives OrderedSet the same big-Oh running times as regular sets, including O(1) adds, removes, and lookups, as well as O(n) iteration."
There is also a discussion on the topic, see Does Python have an ordered set?
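To make the idea concrete, here is a minimal sketch of an ordered set; it is not the library's actual code (which pairs a doubly linked list with a dict), but instead leans on the fact that Python 3.7+ dicts are insertion-ordered and hash-based:

class OrderedSet:
    def __init__(self, iterable=()):
        # dict keys are unique, hashed, and insertion-ordered (Python 3.7+)
        self._items = dict.fromkeys(iterable)

    def add(self, item):           # O(1) average insert
        self._items[item] = None

    def discard(self, item):       # O(1) average delete
        self._items.pop(item, None)

    def __contains__(self, item):  # O(1) average find
        return item in self._items

    def __iter__(self):            # O(n) iteration, in insertion order
        return iter(self._items)

s = OrderedSet('abracadabra')
print(list(s))  # ['a', 'b', 'r', 'c', 'd']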
I am intermediate in R and a beginner in Python. However, my core abilities lie less in data analysis and more in programming and developing large software systems in teams, and I don't have time to become an expert in both languages.
Given the advances in the Python world with numpy, scipy, and pandas, and Python's prevalence both in data science and in general programming, I think I need to concentrate on Python (even though I enjoy R a lot) and accept that for some tasks I might be, say, 75% as efficient as I would be in R. I'd find that efficiency loss acceptable in order to be a master of one language rather than intermediate in both.
However, I don't know enough about either language to really be sure of my facts. I would be very interested to hear from anyone experienced in both R and Python: what would be the significant disadvantages, if any, of dropping R in favour of Python?
Edit 5: this question on stats.stackexchange is similar and has some great answers.
(Edits 3-4: reverted content/title to original question, which was closed. The original question attracted a lot of expert comment, my attempt to narrow the question to reopen it failed, and I'd prefer to have these comments below the original text they were commenting on.)