So the other day I was trying something in Python: writing a custom multiplication function.
def multi(x, y):
    z = 0
    while y > 0:
        z = z + x
        y = y - 1
    return z
However, when I ran it with extremely large numbers like (1 << 90) and (1 << 45), i.e. computing (2 ^ 90) * (2 ^ 45), it took forever to compute.
So I tried looking into different types of multiplication, like the Russian peasant multiplication technique, implemented below, which was extremely fast but not as readable as multi(x, y):
def russian_peasant(x, y):
    z = 0
    while y > 0:
        if y % 2 == 1:
            z = z + x
        x = x << 1
        y = y >> 1
    return z
What I want you to answer is: how do programming languages like Python multiply numbers?
Your multi version runs in O(N), whereas the russian_peasant version runs in O(log N), which is far better than O(N).
To realize how fast your russian_peasant version is, check this out:
from math import log
print round(log(100000000, 2)) # 27.0
So the loop has to be executed just 27 times, but your multi version's while loop has to be executed 100000000 times when y is 100000000.
To answer your other question:
What I want you to answer is how do programming languages like Python multiply numbers?
Python uses the O(N^2) grade-school multiplication algorithm for small numbers, but for big numbers it uses the Karatsuba algorithm.
Basically, multiplication is handled in C code, which is compiled to machine code and executes much faster than pure Python.
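For illustration, here is a toy Karatsuba that splits on decimal digits. It is only a sketch of the idea; CPython's actual implementation lives in C and splits numbers on its internal 30-bit digits, not decimal ones.
def karatsuba(x, y):
    # Base case: numbers small enough for a direct multiply.
    if x < 10 or y < 10:
        return x * y
    # Split both numbers around half the digits of the larger one.
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    a, b = divmod(x, p)   # x = a*p + b
    c, d = divmod(y, p)   # y = c*p + d
    # Three recursive multiplications instead of four.
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    mid = karatsuba(a + b, c + d) - ac - bd   # equals a*d + b*c
    return ac * p * p + mid * p + bd

print(karatsuba(1 << 90, 1 << 45) == (1 << 90) * (1 << 45))   # True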
Programming languages like Python use the multiplication instruction provided by your computer's CPU.
In addition, you have to remember that Python is a very high-level programming language which runs on a virtual machine that itself runs on your computer. As such, it is inherently a few orders of magnitude slower than native code. Translating your algorithm to assembly (or even to C) would result in a massive speedup, although it would still be slower than the CPU's multiplication instruction.
On the plus side, unlike naive assembly/C, Python auto-promotes integers to bignums instead of overflowing when your numbers are bigger than 2**32.
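A quick illustration of that auto-promotion:
x = 2 ** 31 - 1        # still fits in a C int32
print(x * x)           # 4611686014132420609: Python grows the int, no overflow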
The basic answer to your question is this: multiplication using * is handled through C code. In essence, if you write something in pure Python, it's going to be slower than the C implementation. Let me give you an example.
The operator.mul function is implemented in C, but a lambda is implemented in Python. We're going to find the product of all the numbers in a range using functools.reduce, in two ways: once with operator.mul and once with a lambda that does the same thing (on the surface):
from timeit import timeit
setup = """
from functools import reduce
from operator import mul
"""
print(timeit('reduce(mul, range(1, 10))', setup=setup))
print(timeit('reduce(lambda x, y: x * y, range(1, 10))', setup=setup))
Output:
1.48362842561
2.67425475375
operator.mul takes less time, as you can see.
Usually, functional code involving many repeated computations can be made faster using memoization. The basic idea is that if you call a true function (one that always evaluates to the same result for a given argument) with the same arguments twice or more, you're wasting time; that time can be saved by identifying common calls and storing their results in a hash table or another quickly-accessible structure. See https://en.wikipedia.org/wiki/Memoization for the basic theory. It is well-implemented in Common Lisp.
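For instance, the standard library ships a ready-made memoizing decorator (a minimal sketch; fib here is just a convenient pure function to cache):
from functools import lru_cache

@lru_cache(maxsize=None)        # cache results keyed by the arguments
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(300))  # fast: each fib(k) is computed once, then served from cache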
Related
I'm working on an RSA algorithm in Python. My code runs smoothly; whenever I use small primes such as 17 and 41, no problems happen.
However, if I try primes over 1000, the code "stops" running (I mean, it gets stuck in the exponentiation) and doesn't return.
The part where the problem lies is this:
msg = ''
msg = msg + (ord(char ** d) % n for char in array)
I know that exponentiation with numbers like d = 32722667 and char = 912673 requires a lot of computing, but I guess it shouldn't take 5 or 10 minutes without returning anything.
My computer is an i3-6006U and has 4GB of RAM.
If you are interested in the result of x**y % z rather than x**y, you might harness the built-in function pow. From help(pow):
Help on built-in function pow in module builtins:
pow(x, y, z=None, /)
Equivalent to x**y (with two arguments) or x**y % z (with three arguments)
Some types, such as ints, are able to use a more efficient algorithm when
invoked using the three argument form.
What you're trying to do is calculate a modular exponentiation. You should not calculate the result of the exponentiation first and then the result of the modulo; there are more efficient algorithms, as you can read on Wikipedia. In Python you can use the built-in pow function for modular exponentiation, as was already mentioned.
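For reference, here is a sketch of the square-and-multiply idea behind the three-argument pow. The d and char values are the ones from the question; the modulus 18923 is made up for illustration.
def mod_pow(base, exp, mod):
    # Right-to-left binary exponentiation: keep every intermediate
    # value reduced mod `mod`, so the numbers never blow up.
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                     # this bit is set: fold base in
            result = (result * base) % mod
        base = (base * base) % mod      # square for the next bit
        exp >>= 1
    return result

# Instant, whereas 912673 ** 32722667 alone would be astronomically large:
assert mod_pow(912673, 32722667, 18923) == pow(912673, 32722667, 18923)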
I am currently implementing this simple code trying to find the n-th element of the Fibonacci sequence using Python 2.7:
import numpy as np

def fib(n):
    F = np.empty(n+2)
    F[1] = 1
    F[0] = 0
    for i in range(2, n+1):
        F[i] = F[i-1] + F[i-2]
    return int(F[n])
This works fine for n < 79, but after that I get wrong numbers. For example, according to Wolfram Alpha, F79 should be equal to 14472334024676221, but fib(79) gives me 14472334024676220. I think this could be caused by the way Python deals with integers, but I have no idea what exactly the problem is. Any help is greatly appreciated!
The default data type for a numpy array created with np.empty is (on most platforms) a 64-bit float, not an arbitrary-precision integer. A 64-bit float has a 53-bit mantissa, so integers above 2**53 can no longer be represented exactly, and F79 is the first Fibonacci number past that limit.
Pure Python would let you have arbitrarily long integers; numpy does not.
So it's more the way numpy deals with numbers; pure Python would do just fine.
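You can see that 53-bit limit directly:
print(float(2 ** 53) == float(2 ** 53 + 1))   # True: the float can't tell them apart
print(2 ** 53 == 2 ** 53 + 1)                 # False: Python ints are exact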
Python will deal with integers perfectly fine here. Indeed, that is the beauty of python. numpy, on the other hand, introduces ugliness and just happens to be completely unnecessary, and will likely slow you down. Your implementation will also require much more space. Python allows you to write beautiful, readable code. Here is Raymond Hettinger's canonical implementation of iterative fibonacci in Python:
def fib(n):
    x, y = 0, 1
    for _ in range(n):
        x, y = y, x + y
    return x
That is O(n) time and constant space. It is beautiful, readable, and succinct. It will also give you the correct integer as long as you have memory to store the number on your machine. Learn to use numpy when it is the appropriate tool, and as importantly, learn to not use it when it is inappropriate.
Unless you want to generate a list of all the Fibonacci numbers up to Fn, there is no need to use a list, numpy, or anything else like that; a simple loop and 2 variables are enough, as you only really need to know the 2 previous values:
def fib(n):
    Fk, Fk1 = 0, 1
    for _ in range(n):
        Fk, Fk1 = Fk1, Fk + Fk1
    return Fk
Of course, there are better ways to do it using the mathematical properties of the Fibonacci numbers. With those, we know that there is a matrix that gives us the right result:
import numpy

def fib_matrix(n):
    mat = numpy.matrix([[1, 1], [1, 0]], dtype=object) ** n
    return mat[0, 1]
for which I assume numpy uses an optimized matrix exponentiation (exponentiation by squaring), making it more efficient than the previous method.
Using the properties of the underlying Lucas sequence, it is possible to do it without the matrix, just as efficiently as exponentiation by squaring and with the same number of variables as the simple loop, but it is a little harder to understand at first glance because it requires more mathematical background; see the sketch below.
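A sketch of that approach, using the fast-doubling identities F(2k) = F(k) * (2*F(k+1) - F(k)) and F(2k+1) = F(k)**2 + F(k+1)**2:
def fib_doubling(n):
    # Walk the bits of n from the most significant down; each step
    # doubles the index, and an extra step advances it by one.
    a, b = 0, 1                    # F(0), F(1)
    for bit in bin(n)[2:]:
        c = a * (2 * b - a)        # F(2k)
        d = a * a + b * b          # F(2k+1)
        a, b = c, d
        if bit == '1':
            a, b = b, a + b        # shift to F(2k+1), F(2k+2)
    return a

print(fib_doubling(79))            # 14472334024676221, exact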
The closed form, the one with the golden ratio, will give you the result even faster, but it has the risk of being inaccurate because of the use of floating-point arithmetic.
As an additional note to the previous answer by hiro protagonist: if using numpy is a requirement, you can solve your issue very easily by replacing:
F = np.empty(n+2)
with
F = np.empty(n+2, dtype=object)
but it will not do anything more than transfer the computation back to pure Python.
Among the ways to obtain the sum of a list A of ints in Python are the following two:
Built-in sum function: sum(A)
Reduce function with adder lambda: reduce(lambda x, y: x + y, A)
Is there any speed advantage to using either of these, or are their performances roughly the same?
On my machine the "sum" function appears to be way faster than the "reduce" version (at least when summing a range of 1000 numbers 5000 times).
See:
$ cat doit.py
from timeit import timeit
print timeit('reduce(lambda x, y: x + y, range(1000))',number=5000)
print timeit('sum(range(1000))',number=5000)
$ python2 doit.py
0.460000038147
0.0599999427795
Update:
To address the comment, I've updated my answer to also include a 'setup' for creating the array to be summed:
$ cat doit2.py
from timeit import timeit
print timeit('reduce(lambda x, y: x + y, a)',setup='a=range(1000)',number=5000)
print timeit('sum(a)',setup='a=range(1000)',number=5000)
$ python2 doit2.py
0.530030012131
0.0320019721985
Again, the "sum" version appears to be the clear winner.
The answer almost certainly varies depending on the implementation you're using. However, as a best practice, you should assume that the built-in function has the best performance unless you have (a) proved otherwise and (b) shown that the difference is impacting performance in your specific application.
There are two complementary reasons for this. First, it's safe to assume that the people who implement the language are concerned with performance, and that they hear every complaint (justified or not) about performance. Therefore, if there's a better implementation, it's safe to assume that they will change to that implementation as soon as possible.
And if an even better one comes along, you can assume that they'll change to it. This means that you get speed improvements for free, as they're discovered.
Second, it's safe to assume that the built-in function is going to communicate better in your codebase than an inline lambda. It's just simpler to read "sum" and understand "sum" than to parse the lambda. Since programmer time is in general vastly more expensive than CPU time, it makes sense to always optimize for the former over the latter, unless there is a clear and specific reason to do otherwise.
Functional programming is one of the programming paradigms in Python. It treats computation as the evaluation of mathematical functions and avoids state and mutable data. I am trying to understand how Python incorporates functional programming.
Consider the following factorial program (factorial.py):
def factorial(n, total):
    if n == 0:
        return total
    else:
        return factorial(n-1, total*n)

num = raw_input("Enter a natural number: ")
print factorial(int(num), 1)
That code avoids mutable data, because we are not changing the value of any variable. We are only recursively calling the factorial function with a new value.
If the example given above for functional programming is correct, then what does avoiding state mean?
Does functional programming only mean that I must use only functions whenever I have computations (as given in the above example)?
If the given example is wrong, what is a simple example with an explanation?
The example is correct functional programming, but it is also a good example of what not to do in Python, because it is inefficient and doesn't scale. Python doesn't have any tail-call optimisation, so recursive calls should not be used solely to avoid imperative loops. If you really start programming in this style in Python, your programs will eventually end with runtime errors.
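To make that concrete, CPython caps the recursion depth (shown here in Python 3, where the overflow raises RecursionError; the exact limit is configurable):
import sys

def factorial(n, total):
    if n == 0:
        return total
    return factorial(n - 1, total * n)

print(sys.getrecursionlimit())   # typically 1000 in CPython
try:
    factorial(100000, 1)         # far deeper than the default stack allows
except RecursionError as e:
    print(e)                     # "maximum recursion depth exceeded"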
You are describing pure functional programming which is not something Python could be used for.
Python supports functional programming to some degree in the sense that functions are first-class values. That means functions can be passed to other functions and returned as results from functions. And the standard library contains functions also found in most functional programming languages' standard libraries, like map(), filter(), reduce(), and the stuff in the functools and itertools modules.
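A minimal illustration of functions as first-class values (twice is a made-up helper):
def twice(f, x):
    return f(f(x))                 # f is passed in like any other value

print(twice(lambda n: n + 3, 10))  # 16
print(list(map(len, ["a", "bc"]))) # [1, 2]: map takes a function too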
Borrowing from http://ua.pycon.org/static/talks/kachayev/#/8, where the author compares the way one thinks about imperative and functional programs; the example below is his.
Imperative:
expr, res = "28+32+++32++39", 0
for t in expr.split("+"):
    if t != "":
        res += int(t)
print res
Functional:
from operator import add
expr = "28+32+++32++39"
print reduce(add, map(int, filter(bool, expr.split("+"))))
If the given example is wrong, then kindly provide another simple example with an explanation.
It’s not, and it shows a realistic problem to start with, which you’ll see if you call factorial with a huge number. The maximum recursion depth will be reached. Python doesn’t have tail call optimization.
If the example given above for functional programming is correct, then what does avoiding state mean?
It means (in Python) that once a variable has been assigned, you should not reassign it or mutate the value you assigned to it.
Secondly, does functional programming only mean that I must use only functions whenever I have computations (as given in the above example)?
Functional programming is quite broad. Python is a multi-paradigm language that supports some functional programming concepts.
Functional programming means that all computations should be seen as mathematical functions.
I wrote a post about it that explains all the above in greater detail: Use Functional Programming In Python
Basics
Functional programming is a paradigm where we use only expressions and no statements. Statements do something, like an if branch, whereas expressions evaluate to a value. We try to avoid mutability (values getting changed), since we only want pure functions. Pure functions don't have side effects, meaning that a function, given the same input, always produces the same output. We want functions to be pure, since they are easier to debug. By doing this, we describe what a function is, as opposed to giving steps for how to do something, and we write a lot less code. Functional programming is often called declarative, while other approaches are called imperative.
Utilities
Know about built-in functions, since they are so useful. For starters: abs, round, pow, max & min, sum, len, sorted, reversed, zip, and range.
Lambdas
Anonymous functions, or lambdas, are pure functions that take input and produce a value. There isn't any return statement, since the body is the expression whose value is returned. You can also give them a name by assigning them to a variable:
(lambda x, y: x + y)(1, 1)
add = lambda x, y: x + y
add(1, 1)
Ternary operator
Since we are working with expressions, instead of using if statements for logic, we use the ternary operator.
(expression) if (condition) else (expression2)
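For example (parity is just a made-up name):
parity = lambda n: "even" if n % 2 == 0 else "odd"
print(parity(4))   # even
print(parity(7))   # odd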
Map, filter & reduce
We also need a way of looping. This comes with list comprehension.
In this example we add one to each array element.
inc = lambda x: [i + 1 for i in x]
We can also operate on elements that meet a condition.
evens = lambda x: [i for i in x if (i % 2) == 0]
Some people say that this is the correct, Pythonic way of doing things. But people who are more serious about functional style use a different approach: map, filter, and reduce. These are the building blocks of functional programming. While map and filter are built-in functions, reduce used to be one, but now lives in the functools module. To use it, import it:
from functools import reduce
Map is a function that takes a function and calls it on every element of an array. The result of map is a lazy object whose printed form is unfortunately unreadable, so you need to turn it into a tuple or another collection.
inc = lambda x: tuple(map(lambda y: y + 1, x))
Filter is a function that calls a predicate on each element of an array, keeps the elements for which it returns True, and removes the ones for which it returns False. Like map, its result needs to be converted to be readable.
evens = lambda x: tuple(filter(lambda y: (y % 2) == 0, x))
Reduce takes a function that has two parameters: one is the accumulated result of the previous call and one is the new value. It keeps folding elements in until the array is reduced to a single value.
from functools import reduce
m_sum = lambda x: reduce(lambda y, z: y + z, x)
Let and do
A lot of functional programming languages have do. Do is a function that takes n arguments, evaluates all of them and returns the value of the last one. Python doesn't have do, but I created one myself.
do = lambda *args: tuple(map(lambda y: y, args))[-1]
Here's an example using do that prints something and exits.
print_err = lambda x: do(print(x), exit())
We use do to get imperative advantages.
Let allows a name to be bound to a value inside an expression. In Python, most people do that as follows:
def something(x):
    y = x + 10
    return y * 3
Python 3.8 adds the assignment expression operator :=, so that code can now be written as a lambda:
something = lambda x: do(y := x + 10, y * 3)
Working with impure systems
The console, the file system, the web, etc. are stateful and impure, and we need a way to work with them. In some languages, like Haskell, you have monads, which wrap impure computations. Clojure allows the use of impure functions directly. In Python, a multi-paradigm language, we don't need anything special, since it's not purely functional; print, for example, is an impure function.
Recursion
Your example uses recursion. Recursion uses the call stack, a limited chunk of memory used for calling functions; it can fill up and crash the program. A lot of functional programming languages rely on tail-call optimization (and lazy evaluation) to cope with deep recursion, but Python has neither. So avoid deep recursion in Python as much as possible.
Answers
Avoiding state is being immutable, meaning not changing the value of something.
In functional programming, everything is an expression. We don't have that in Python, though, since it's multi-paradigm.
Your code is also not fully functional, because it uses an if statement instead of a ternary expression. In a more functional style, it would be written as follows:
from functools import reduce
factorial = lambda x: reduce(lambda y, z: y*z, range(1,x+1))
This produces a range from 1 to x and multiplies all the values together using reduce.
In my field it's very common to square some numbers, operate on them together, and take the square root of the result. This is done in the Pythagorean theorem and the RMS calculation, for example.
In numpy, I have done the following:
result = numpy.sqrt(numpy.sum(numpy.power(some_vector, 2)))
And in pure python something like this would be expected:
result = math.sqrt(math.pow(A, 2) + math.pow(B,2)) # example with two dimensions.
However, I have been using this pure python form, since I find it much more compact, import-independent, and seemingly equivalent:
result = (A**2 + B**2)**0.5 # two dimensions
result = (A**2 + B**2 + C**2 + D**2)**0.5
I have heard some people argue that the ** operator is sort of a hack, and that taking the square root of a number by exponentiating it by 0.5 is not so readable. But what I'd like to ask is:
"Is there any COMPUTATIONAL reason to prefer the former two alternatives over the third one(s)?"
Thanks for reading!
math.sqrt is the C implementation of the square root and is therefore different from using the ** operator, which calls Python's built-in pow function. Thus, using math.sqrt may give a different answer than using the ** operator, and there is indeed a computational reason to prefer the numpy or math module implementation over the built-in operator. Specifically, the sqrt functions are probably implemented in the most efficient way possible, whereas ** handles many kinds of bases and exponents and is probably unoptimized for the specific case of square root. On the other hand, the built-in pow function handles a few extra cases, like "complex numbers, unbounded integer powers, and modular exponentiation".
See this Stack Overflow question for more information on the difference between ** and math.sqrt.
In terms of which is more "Pythonic", I think we need to discuss the very definition of that word. The official Python glossary states that a piece of code or idea is Pythonic if it "closely follows the most common idioms of the Python language, rather than implementing code using concepts common to other languages." In every single other language I can think of, there is some math module with basic square root functions. However, there are languages that lack a power operator like **, e.g. C++. So ** is probably more Pythonic, but whether or not it's objectively better depends on the use case.
Even in base Python you can do the computation in generic form:
result = sum(x**2 for x in some_vector) ** 0.5
x ** 2 is surely not a hack, and the computation performed is the same (I checked the CPython source code). I actually find it more readable (and readability counts).
Using x ** 0.5 to take the square root, instead, doesn't do exactly the same computations as math.sqrt, as the former is (probably) computed using logarithms and the latter (probably) uses the dedicated square-root instruction of the math processor.
I often use x ** 0.5 simply because I don't want to import math just for that. I'd expect, however, a specific instruction for the square root to work better (more accurately) than a multi-step operation with logarithms.
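A quick way to check both claims on your own machine (absolute timings vary by platform and Python version, so treat these as illustrative):
import math
from timeit import timeit

print(timeit('math.sqrt(x)', setup='import math; x = 2.0'))
print(timeit('x ** 0.5', setup='x = 2.0'))

# Accuracy spot check: on most platforms the two agree bit-for-bit
# for ordinary doubles, but the language does not guarantee it.
x = 2.0
print(math.sqrt(x) == x ** 0.5)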