Computational cost of ABC [closed] - python

I have used abc in the past, and I'd like to use it again to enforce pure-virtual-like methods with @abstractmethod. This is in the context of a Python front end to an API which users will extend frequently.
It's a bit too complicated for me to develop a reliably comprehensive test of scale, and I've always used abc as a closed box of black magic, so I don't know where the cost of the abstraction and of the checks for abstract methods lies, when it's likely to be incurred, what the cost would actually look like, or what it would scale with.
I couldn't find satisfactorily complete information about the underlying mechanics anywhere, so any pointers to when and where the magic happens, and at what cost, would be immensely appreciated. (Import? Instantiation? A double-dipped cost if the instance is extended?)
Some further info about the use case:
Unlike in previous use cases (for me), where there was a very limited number of instances of each base object and abc showed no perceivable overhead, this time around it would be for something (nodes in a DAG with a tree view) which can be instantiated and then extended in place hundreds of times, and the number of virtual methods is likely to go up to somewhere around a dozen per class.
Inheritance is never multiple, and it's generally quite shallow: at most two or three levels deep, the majority of the time just one.
Python 2.7, due to third-party platform constraints.

Prior to Python 2.6, using ABC came with some significant overhead. Issue 1762 reported this as a bug, and it was fixed for Python 2.6 (by moving some of the ABC machinery into the C implementation of object).
In newer versions of Python, there should be very little difference in performance between ABC-using and non-ABC-using classes (the bug report mentions a very small remaining difference in the speed of isinstance checks, with other operations showing essentially zero performance difference).
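To locate where the cost falls: the check for unimplemented abstract methods happens once, at instantiation time (object's C-level constructor refuses to create an instance of a class whose __abstractmethods__ set is non-empty), so extending an already-created instance in place should cost nothing extra. A minimal Python 2.7-style sketch; the class and method names are made up for illustration:

import abc

class Node(object):
    __metaclass__ = abc.ABCMeta  # Python 3 would subclass abc.ABC instead

    @abc.abstractmethod
    def evaluate(self):
        """Pure-virtual-like method: subclasses must override this."""

class SumNode(Node):
    def evaluate(self):
        return 42

node = SumNode()  # fine: all abstract methods are implemented
try:
    Node()        # the per-instance cost: this one check at creation time
except TypeError as exc:
    print(exc)    # Can't instantiate abstract class Node ...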

Related

Python shift operator uses [closed]

I'm just learning about Python's bitwise operators << and >>. As far as I can see, they take the binary representation of an integer and shift it n places left or right. That would mean that x << y is equivalent to x * (2 ** y).
So my question is: why is there an operator for this? As far as I know, Python doesn't like to give you more than one way of doing things, to avoid confusion. Is there a reason this operator is particularly useful, or typical scenarios where it's used? I know this is a pretty open-ended question, but when searching for this I only come across what the operator does, not why we would use it. Thank you in advance.
The key is in your remark that they "take the binary representation of an integer and shift it n places left or right".
Ask yourself this: how does your computer represent integers at all? Any integer is a sequence of bits (typically a multiple of 8 bits, i.e. bytes), and your computer is built around memory positions, registers, addresses, etc. that hold these integer values.
So it makes sense for a CPU to have an operation that shifts such a value left or right by one or more bits: it gives an extremely fast multiplication or division by a power of two, and powers of two are needed all the time because everything in your computer is binary.
Other operations can be composed from simple addition, subtraction, shift-by-n, and so on. Python exposes shifting to give you access to this very basic, very quick operation, although Python integers aren't always (or even all that often) the same efficient machine integers you operate on directly in many other languages.
But bit-shifting has many applications of its own, and since it is a standard operation of your computer, it only makes sense that Python gives you access to a tool programmers are very used to and that turns up in many common algorithms.
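A quick demonstration of the equivalence, plus one of the most common practical uses (packing boolean flags into a single integer; the flag names here are just illustrative):

x, y = 13, 3
assert x << y == x * (2 ** y)   # 13 * 8 == 104
assert x >> 1 == x // 2         # right shift is floor division by 2

# Bit flags: each permission occupies its own bit.
READ, WRITE, EXECUTE = 1 << 0, 1 << 1, 1 << 2
perms = READ | EXECUTE
print(bool(perms & WRITE))      # False: the WRITE bit is not set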

Can dividing code too much make it inefficient? [closed]

If code is divided into too many segments, can this make the program slow?
For example, creating a separate file for just a single function.
In my case, I'm using Python. Suppose there are two functions that I need in the main.py file, and I place each of them in a separate file containing just that function.
Suppose, also, that the two functions use the same library, and they have been divided into separate files.
How can this affect efficiency, both machine-performance-wise and team-wise?
It depends on the language, the framework you use, etc. However, dividing the code too much can make it unreadable, which is (most of the time) the bigger problem. Since most of the time you will (or should) be working in a team, you should consider how readable your code will be for your teammates.
Answering this in a definitive way is difficult, though; ask a senior developer on your team for guidelines.
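On the machine-performance side specifically: in Python, splitting two functions into two files adds only a one-time import cost per module, because imports are cached in sys.modules; a shared library is likewise loaded only once, no matter how many files import it. A sketch of the hypothetical layout from the question (helper_a.py and helper_b.py are made-up names):

# helper_a.py
import math                      # shared library

def func_a():
    return math.sqrt(2)

# helper_b.py
import math                      # same library: loaded once, then cached

def func_b():
    return math.pi

# main.py
from helper_a import func_a      # each module executes once at first import
from helper_b import func_b

print(func_a(), func_b())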

Can you generate new asm files based on old ones? And if so, what is the most efficient way? [closed]

Let's say I have a lot of .asm files in a Python program (they could also be binary strings, hex strings, or whatever you like). How can I use those files to generate new files that function roughly the same? (It's for an assembly game.)
The thing is, I have a lot of assembly players that were really good at the game, and I wondered if I could somehow use natural selection to breed better assembly bots.
This sounds a lot like superoptimization (wikipedia).
e.g. STOKE starts with a sequence of asm instructions and stochastically modifies it looking for shorter / faster sequences that do the same thing.
(Or STOKE can start from scratch looking for an asm sequence that gives the desired result for a set of test-cases.)
It's open source, so have a look at the algorithms they use to modify asm and test-run the code. Of course this is possible if you have data structures that represent opcodes and operands.
See also Applying Genetic Programming to Bytecode and Assembly, an academic paper from 2014.
I haven't read it, but hopefully it addresses ways to recombine code from different mutations and maybe get something useful more often than you get garbage that steps on the registers used by the other code. (That's the major difficulty with random changes to code, especially in assembly, where there are lots of non-useful instruction sequences.)
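As a rough sketch of the natural-selection idea (not STOKE's actual algorithm), a genetic loop over instruction sequences could look like the following; INSTRUCTION_POOL is a placeholder instruction set, and score would be a fitness function you implement by running a candidate bot in your game:

import random

INSTRUCTION_POOL = ["nop", "inc r0", "dec r1", "jmp start"]  # placeholder

def mutate(program):
    # Replace one randomly chosen instruction in a copy of the program.
    child = list(program)
    child[random.randrange(len(child))] = random.choice(INSTRUCTION_POOL)
    return child

def crossover(a, b):
    # Single-point crossover: splice the head of one parent onto the
    # tail of the other.
    point = random.randrange(1, min(len(a), len(b)))
    return a[:point] + b[point:]

def evolve(population, score, generations=100):
    # Keep the fitter half each generation, refill with mutated children.
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        survivors = population[: len(population) // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(len(population) - len(survivors))]
        population = survivors + children
    return max(population, key=score)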

Which should I use: Python-sgp4, PyEphem, python-skyfield [closed]

The landscape of Python tools that seem to accomplish the task of propagating Earth satellites/celestial bodies is confusing. Depending on what you're trying to do, PyEphem or Python-SGP4 may be more suitable. Which of these should I use if:
I want ECEF/ECI coordinates of an Earth satellite
I want general sky coordinates of a celestial object
I care about near-Earth vs. far-away objects
I want to use two-line element sets (TLEs)
Do any of these accomplish precise orbit determination? If not, where do I go, and what resources are out there, for precise orbit determination?
I kind of know the answers here. For instance, POD is not part of any of these libraries; those computations seem to be very involved, and POD for many objects is available from the IGS. The main reason I ask is for documentation purposes. I'm not familiar with python-skyfield, but I have a hunch it accomplishes what the other two do. --Brandon Rhodes, I await your expertise :)
Michael mentioned it in his comment, but I believe PyEphem is deprecated as of the current Python 3 version. That being said, if you are going to use TLEs, SGP4 was made to handle TLEs in particular: the non-Keplerian and non-Newtonian terms you see in TLEs (B* drag, the second derivative of mean motion, etc.) are passed specifically into the SGP4 propagator. Once you get outside the Earth's neighborhood (beyond GEO), SGP4 is not meant to handle things; it is inherently a near-Earth propagator that does not scale well to an interplanetary or even cislunar regime. In fact, if both apogee and perigee extend beyond GEO, I would tend to avoid SGP4.
It is important to note that SGP4 outputs coordinates in the TEME frame (true equator, mean equinox), which is an inertial frame. If you want ECEF coordinates, you will need a package that converts between inertial and Earth-fixed frames. Even if you don't want Earth-fixed coordinates, I highly recommend making this conversion, so you can then convert to your inertial frame of choice.
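For the TLE case, here is a minimal sketch using the python-sgp4 package (the TLE below is an ISS example of the kind shown in that library's documentation; positions come back in the TEME frame, in kilometers):

from sgp4.api import Satrec, jday

line1 = '1 25544U 98067A   19343.69339541  .00001764  00000-0  40967-4 0  9997'
line2 = '2 25544  51.6439 211.2001 0007417  17.6667  85.6398 15.50103472202482'

satellite = Satrec.twoline2rv(line1, line2)
jd, fr = jday(2019, 12, 9, 12, 0, 0)             # date/time to propagate to
error, position, velocity = satellite.sgp4(jd, fr)
if error == 0:
    print(position)   # TEME position in km; convert separately for ECEF/ECI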

Why does Python use indentation instead of braces and keywords? [closed]

One thing I was never able to wrap my head around is why Python uses indentation, unlike other scripting languages. Being dependent on indentation sometimes makes editing Python code very frustrating.
P.S. I have a strong feeling this question will get closed for not being constructive, much as I think the answer would be very interesting to know.
I think the answer you want is well written in the official FAQ, and I can't summarize it better than by quoting the text:
Why does Python use indentation for grouping of statements?
Guido van Rossum believes that using indentation for grouping is extremely elegant and contributes a lot to the clarity of the average Python program. Most people learn to love this feature after a while.
Since there are no begin/end brackets there cannot be a disagreement between grouping perceived by the parser and the human reader. Occasionally C programmers will encounter a fragment of code like this:
if (x <= y)
        x++;
        y--;
z++;
Only the x++ statement is executed if the condition is true, but the indentation leads you to believe otherwise. Even experienced C programmers will sometimes stare at it a long time wondering why y is being decremented even for x > y.
Because there are no begin/end brackets, Python is much less prone to coding-style conflicts. In C there are many different ways to place the braces. If you’re used to reading and writing code that uses one style, you will feel at least slightly uneasy when reading (or being required to write) another style.
Many coding styles place begin/end brackets on a line by themselves. This makes programs considerably longer and wastes valuable screen space, making it harder to get a good overview of a program. Ideally, a function should fit on one screen (say, 20-30 lines). 20 lines of Python can do a lot more work than 20 lines of C. This is not solely due to the lack of begin/end brackets – the lack of declarations and the high-level data types are also responsible – but the indentation-based syntax certainly helps.
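For contrast, here is the same fragment rendered in Python; with indentation as the only grouping mechanism, the block the reader perceives is exactly the block the parser sees:

x, y, z = 1, 2, 0
if x <= y:
    x += 1   # these two statements form the body of the if
    y -= 1
z += 1       # this one always runs, and the layout says so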
