One thing I was never able to wrap my head around is why Python uses indentation, unlike other scripting languages. Being dependent on indentation sometimes makes editing Python code very frustrating.
P.S. I have a strong feeling that this question will get closed for not being constructive, as much as I think the answer would be very interesting to know.
I think the answer you want is well written in the Python design FAQ, and I can't summarize it better than by quoting the text:
Why does Python use indentation for grouping of statements?
Guido van Rossum believes that using indentation for grouping is extremely elegant and contributes a lot to the clarity of the average Python program. Most people learn to love this feature after a while.
Since there are no begin/end brackets there cannot be a disagreement between grouping perceived by the parser and the human reader. Occasionally C programmers will encounter a fragment of code like this:
if (x <= y)
        x++;
        y--;
    z++;
Only the x++ statement is executed if the condition is true, but the indentation leads you to believe otherwise. Even experienced C programmers will sometimes stare at it a long time wondering why y is being decremented even for x > y.
Because there are no begin/end brackets, Python is much less prone to coding-style conflicts. In C there are many different ways to place the braces. If you’re used to reading and writing code that uses one style, you will feel at least slightly uneasy when reading (or being required to write) another style.
Many coding styles place begin/end brackets on a line by themselves. This makes programs considerably longer and wastes valuable screen space, making it harder to get a good overview of a program. Ideally, a function should fit on one screen (say, 20-30 lines). 20 lines of Python can do a lot more work than 20 lines of C. This is not solely due to the lack of begin/end brackets – the lack of declarations and the high-level data types are also responsible – but the indentation-based syntax certainly helps.
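For contrast, a rough Python rendering of that fragment (same variable names as the C example above); here the grouping the parser uses is exactly the indentation the reader sees, so the ambiguity cannot arise:
if x <= y:
    x += 1  # only this statement is inside the if
y -= 1      # always executed
z += 1      # always executed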
Is there a way to do the same in C++?
Python
int.__add__(1, 1)
I can make something in C++, but it is still ugly, and not very optimised or short.
I just want this: 1+1, but without +.
I absolutely want to use the int "class".
It's for matrix multiplication, addition, and subtraction...
I just want to have better-looking and shorter code.
I can make something in C++, but it is still ugly, and not very optimised or short.
Last time I checked, int.__add__(1, 1) was way longer than 1+1. Just about anything is longer than 1+1. As for optimisations, you are not in a position to talk about what is more optimised, having not measured anything.
It's for matrix multiplication, addition, and subtraction
The same integer + operator is useful in lots of contexts. Matrix operations are among them. There is no need to single them out.
I just want this: 1+1, but without +
What you want is of secondary importance. Programming is a team sport. You do what everyone else does. Not only because it is likely to be tried and tested and the best thing since sliced bread, but also because if everyone is doing something in a particular way and you are doing it differently, others will find it hard to understand what you mean.
This is not to say you cannot break conventions and introduce innovations. People do it all the time. But they are not asking anyone how to! If you need to ask, you are not in a position to lead the crowd.
I just want to have better-looking and shorter code.
Then absolutely positively use +. A no-brainer.
You want to use a function (or functor) for addition instead of the + operator. C++ has std::plus in <functional> for that. It is a function-object class rather than a plain function, so you construct an instance and then call it. I don't use C++, but either of these should work:
int a = std::plus<int>{}(1, 1);  // explicit specialization for int
int a = std::plus<>{}(1, 1);     // transparent functor (C++14); deduces the operand types
I'm currently experimenting with the Black code formatter for Python.
In more than 90% of cases I am happy with the output (with the default configuration), but it regularly happens that it formats some lines in a way that seems rather ugly to me.
Here is an example, before and after formatting with Black.
Before:
After:
The two lines are syntactically identical (same function, same number of arguments, ...), so it would make sense to format them in the same way. However, because the first line is slightly longer, Black formats it differently, which makes the code much more difficult to read and interpret.
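The original snippets are not shown here, but a hypothetical stand-in for the kind of difference described (assuming Black's default 88-character limit; the names are made up) might look like this:
Before:
result_first = some_function(argument_one, argument_two, argument_three, long_argument_four)
result_second = some_function(argument_one, argument_two, argument_three, argument_four)
After (only the first call exceeds 88 characters, so only it gets wrapped):
result_first = some_function(
    argument_one, argument_two, argument_three, long_argument_four
)
result_second = some_function(argument_one, argument_two, argument_three, argument_four)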
Of course, in this particular case, you could just increase Black's line-length setting, but that doesn't really solve the problem in general, and I would like to stick with the default configuration.
I have come across many such situations, also with other formatters such as Prettier for JavaScript.
How do you handle these situations? Is there, for example, a way to tell Black to ignore these particular lines and not format them?
Let's say I have a lot of .asm files in a Python program (they could also be binary strings, hex strings, or whatever you like). How can I use those files to generate new files that function roughly the same? (It's for an assembly game.)
The thing is, I have a lot of assembly players that were really good at the game, and I wondered if I could somehow use natural selection to breed better assembly bots.
This sounds a lot like superoptimization (wikipedia).
e.g. STOKE starts with a sequence of asm instructions and stochastically modifies it looking for shorter / faster sequences that do the same thing.
(Or STOKE can start from scratch looking for an asm sequence that gives the desired result for a set of test-cases.)
It's open source, so have a look at the algorithms they use to modify asm and test-run the code. Of course it's possible if you have data structures that represent operands and opcodes.
See also Applying Genetic Programming to Bytecode and Assembly, an academic paper from 2014.
I haven't read it, but hopefully it addresses ways to recombine code from different mutations and maybe get something useful more often than you get garbage that steps on the registers from the other code. (That's the major trick with random changes to code, especially in assembly where there are lots of non-useful instruction sequences.)
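As a rough illustration of the overall loop, here is a minimal Python sketch; the mutate, crossover, and fitness functions are placeholders you would have to write for your game, and programs are assumed to be lists of instructions:
import random

def evolve(seed_programs, fitness, mutate, crossover,
           population_size=100, generations=50):
    # seed_programs: instruction lists taken from the good players
    # fitness: scores a candidate by running it in the game
    # mutate / crossover: domain-specific operators you must supply
    population = list(seed_programs)
    while len(population) < population_size:
        population.append(mutate(random.choice(seed_programs)))

    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:population_size // 4]      # keep the best quarter
        children = []
        while len(survivors) + len(children) < population_size:
            a, b = random.sample(survivors, 2)
            children.append(mutate(crossover(a, b)))   # breed, then perturb
        population = survivors + children

    return max(population, key=fitness)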
I find that I am often a little inconsistent in my naming conventions for variables, and I'm just wondering what people consider to be the best approach. The specific convention I am talking about is when a variable needs to be described by a noun and an adjective, and whether the adjective should come before or after the noun. The question is general across all programming languages, although personally I use C++ and Python.
For example, consider writing a GUI, which has two buttons; one on the right, and one on the left. I now need to create two variables to store them. One option would be to have the adjective before the noun, and call them left_button, and right_button. The other option would be to have the adjective after the noun, and call them button_left, and button_right. With the first case, it makes more sense when reading out loud, because in English you would always place the adjective before the noun. However, with the second case, it helps to structure the data semantically, because button reveals the most information about the variable, and left or right is supplementary information.
So what do you think? Should the adjective come before or after the noun? Or is it entirely subjective?
I try to use noun_adj because it conforms to what I use for functions. When I write functions I tend to use verb_noun_adj, for instance:
def get_button_id():
"""Get id property of button object."""
pass
This reads a bit more clearly to me than get_id_button, because with that name it is not entirely clear what you are getting: is it getting button.id, or is it getting a button called 'id', or maybe even something else? Unless you expand the name to be clearer, like get_id_of_button, which may be a bit too verbose for you.
There's probably an equally valid argument against what I'm doing here, but at least I'm being consistent in my madness?
I think writing the adjective before the noun is the more common approach. But if you think about it for a second, writing it after the noun is easier when reading your code: the adjective can easily be seen as an attribute of the noun. From a logical point of view, this is the better way in my opinion.
I have used abc in the past, and I'd like to use it again to enforce pure-virtual-like methods with @abstractmethod. This is in the context of a Python front end to an API which users will extend frequently.
It's a bit too complicated for me to develop a reliably comprehensive test at scale, and I've always used abc as a black-magic closed box, so I don't know where the cost of the abstraction and of the checks for abstract methods lies, when it is likely to be incurred, what the cost would actually be like, or what it would scale with.
I couldn't find satisfactorily complete information about the underlying mechanics anywhere, so any pointers to when and where the magic happens, and at what cost, would be immensely appreciated. (Import? Instantiation? A doubled cost if the instance is extended?)
Some further info about the use case:
Unlike my previous use cases, where there was a very limited number of instances of each base object and abc showed no perceivable overhead, this time it would be for something (nodes in a DAG with a tree view) that can be instantiated and then extended in place hundreds of times, and the number of virtual methods is likely to go up to around a dozen per class.
Inheritance is never multiple, and it's generally quite shallow: at most two or three levels deep, the majority of the time just one.
Python 2.7, due to third-party platform constraints.
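For reference, a minimal sketch of the pattern being described (Python 2.7 syntax; the class and method names are made up):
import abc

class Node(object):
    """Hypothetical base class of the kind described above."""
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def evaluate(self):
        """API users must override this in their subclasses."""

class UserNode(Node):
    def evaluate(self):
        return 42

node = UserNode()   # the abstract-method check happens here, at instantiation
# Node()            # would raise TypeError: can't instantiate abstract class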
Prior to Python 2.6, using ABC came with some significant overhead. Issue 1762 reported this as a bug, and it was fixed for Python 2.6 (by moving some of the ABC machinery into the C implementation of object).
In newer versions of Python there should be very little difference in performance between ABC-using and non-ABC-using classes (the bug report mentions a very small remaining difference in the speed of isinstance checks, but other operations show essentially zero difference in performance).
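If you want to confirm this for your own classes, here is a minimal micro-benchmark sketch (Python 2.7; the classes are stand-ins, and timeit is in the standard library):
import abc
import timeit

class Plain(object):
    def method(self):
        pass

class Abstract(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def method(self):
        pass

class Concrete(Abstract):
    def method(self):
        pass

# Compare instantiation cost with and without ABCMeta, plus isinstance() cost.
print timeit.timeit(Plain, number=1000000)
print timeit.timeit(Concrete, number=1000000)
print timeit.timeit(lambda: isinstance(Concrete(), Abstract), number=1000000)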