What is the difference between these two pieces of Python code? I thought both were the same, but the output I am getting is different.
def fibonacci(num):
    a = 1
    b = 1
    series = []
    series.append(a)
    series.append(b)
    for i in range(1, num - 1):
        series.append(a + b)
        # a, b = b, a + b
        a = b
        b = a + b
    return series

print(fibonacci(10))
def fibonacci(num):
    a = 1
    b = 1
    series = []
    series.append(a)
    series.append(b)
    for i in range(1, num - 1):
        series.append(a + b)
        a, b = b, a + b
        # a = b
        # b = a + b
    return series

print(fibonacci(10))
In the first method,
a = b
b = a + b
is an incorrect way of swapping: once you say a = b, you have lost the old value of a, so b = a + b is the same as b = b + b, which is not what you want.
Another way to achieve a result equivalent to a, b = b, a + b is to use a temporary variable to store a, as follows:
tmp = a
a = b
b = tmp + b
The issue here is about storing the values you are calculating: in the first snippet you only compute a + b, whereas in the second snippet you write b = a + b. The value of b changes when you say b = a + b.
Hope my explanation is understandable. You are re-assigning the value of b in the first snippet (b = a + b).
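A quick way to see the difference is to run all three loop bodies once from the same starting pair (a minimal sketch, assuming Python 3; the starting values 1 and 2 are arbitrary):
a, b = 1, 2
a, b = b, a + b   # tuple assignment: both right-hand sides use the old values
print(a, b)       # 2 3

a, b = 1, 2
tmp = a           # temporary variable: same result as the tuple form
a = b
b = tmp + b
print(a, b)       # 2 3

a, b = 1, 2
a = b             # sequential assignment: the old value of a is lost here
b = a + b         # so this is really b = b + b
print(a, b)       # 2 4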
Related
This is a general problem I keep having. As I feel like this question is best asked through an example, I made up a function to illustrate the general issue I am having.
def function(a):
    b = 56
    if a > 0:
        b = b + 2
        a = a - 1
        return function(a)
    else:
        print(b)
Here, I'm trying to set an initial value for b that will change depending on a. For example, if a=1, I would like the function to return 58, but it actually returns 56. I understand that whenever the function loops back around, it resets b to 56, so the function will always return 56 no matter what a is. I was wondering how I could set the initial value to 56 without it resetting every time.
I hope this makes sense! Thanks for the help!
Edited: Depending on your use case, b can be passed as a parameter carrying the recursion state as well. Since b needs to be determined by a, we can set the initial value of b when b is None:
def func(a, b=None):
    if b is None:
        # b is not set; determine b now, depending on a
        b = 56  # fill in other cases here
    if a > 0:
        b += 2
        a -= 1
        return func(a, b)
    else:
        return b

func(1)
The typical approach to doing this sort of thing is to have your recursive function take the "accumulator" (b here) as an additional parameter. Then your actual function just calls it with the desired initial value, here 56.
For your example this would be:
def recursive(a, b):
    if a > 0:
        b = b + 2
        a = a - 1
        return recursive(a, b)
    else:
        print(b)

def function(a):
    return recursive(a, 56)
Note that you could simplify recursive by having the if statement return recursive(a - 1, b + 2). And you probably want to return b rather than simply print it.
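Putting those two suggestions together might look like this (a sketch based on the code above; the initial value 56 and the step of 2 come from the example):
def recursive(a, b):
    if a > 0:
        return recursive(a - 1, b + 2)  # fold both updates into the recursive call
    return b

def function(a):
    return recursive(a, 56)

print(function(1))  # 58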
My code is as follows...
def addition(a, b):
    c = a + b
    return c
And I then want to be able to use c later on in the program as a variable. For example...
d = c * 3
However, I get a NameError that 'c' is not defined... But I have returned c, so why can I not use it later on in the code?! So confused. Thanks!
(This is obviously a simpler version of what I want to do but thought I'd keep it simple so I can understand the basics of why I cannot call on this variable outside my function even though I am returning the variable. Thanks)
You have returned the value of c, but not the variable itself: the name c exists only within the scope in which it was created.
So, if you want to use the returned value, you should assign it to a new name. You can assign it back to a name c, but it could be any name you want.
def addition(a, b):
    c = a + b
    return c

new_var = addition(1, 2)  # new_var gets the value 3
c = addition(2, 3)        # c gets the value 5
Take a look at this nice explanation about variables and scopes (link)
You usually define a function in order to use it later in your code. For that, assign the function's result to another variable c outside the function:
def addition(a, b):
    c = a + b
    return c

c = addition(1, 2)
d = c * 3  # d == 9
Functions allow this kind of code reuse and separation into procedures, so that you can later write in your code
m = addition(4, 5)
and it will store the result of the function in m.
If you want to define c in the function and use it later, you can use global variables.
c = 0

def addition(a, b):
    global c
    c = a + b
    return c
It's not considered good to use globals, though. You could also call the function in the variable assignment.
d = addition(a, b) * 3
For this, you need to put real numbers in the place of a and b. I recommend you use the second option.
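For example, with the addition function defined above (the numbers 2 and 3 here are arbitrary):
d = addition(2, 3) * 3  # addition(2, 3) returns 5, so d == 15
print(d)                # 15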
On the Literate Programs site, I was looking at the Python code for the GCD algorithm.
def gcd(a, b):
    """the euclidean algorithm"""
    while a:
        a, b = b % a, a
    return b
What is going on in the body? Expression evaluation? Is it a compressed form of another structure?
There are two things going on here:
a, b = b%a, a
First, a tuple is created with the contents (b%a, a). Then the contents of that tuple are unpacked and assigned to the names a and b.
Looks like shorthand for:
while a > 0:
    temp = a
    a = b % a
    b = temp
return b
a is receiving the result of b%a while b is receiving the value of a
Works the same as:
while a > 0:
    tmp = a
    a = b % a
    b = tmp
return b
See this post for more information on switching variables: Is there a standardized method to swap two variables in Python?
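If it helps, here is a short trace of that loop (a sketch, assuming Python 3), printing the pair before each update for gcd(12, 18):
def gcd(a, b):
    """the euclidean algorithm"""
    while a:
        print(a, b)        # show the pair before it is updated
        a, b = b % a, a
    return b

print(gcd(12, 18))
# 12 18
# 6 12
# 6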
def fibonacci(num):
    a = 0
    b = 1
    for i in range(num):
        a, b = b, a + b
        print(a)
How does the line inside the loop work?
Somehow a's and b's values change; I can't seem to understand how.
EDIT:
For some reason I got confused and thought that the middle expression b = b was something new...
I didn't read it well.
It really is (a, b) = (b, a+b), which is the basic form of a swap in Python (:
b, a+b creates a tuple
This tuple is unpacked back into a and b
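Making that intermediate tuple explicit (a minimal sketch, assuming Python 3):
a, b = 2, 3
pair = (b, a + b)   # the tuple (3, 5) is built from the old values of a and b
a, b = pair         # then unpacked back into a and b
print(a, b)         # 3 5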
This line a, b = b, a+b is equivalent to (a, b) = (b, a+b), which is a tuple assignment.
The line in question can be more clearly written (through tuple packing on the right side and sequence unpacking on the left side) as:
(a, b) = (b, a + b)
As the assignments to a and b are carried out in parallel, this is exactly the same as:
new_a = b
new_b = a + b
a = new_a
b = new_b
I'm kind of a beginner in Python. I was looking at one of the ways to write a Fibonacci function,
def fib(n):
    a = 0
    b = 1
    while a < n:
        print(a)
        a, b = b, a + b
and I saw the a, b = b, a+b assignment. So, I thought a=b and b=a+b were the same as a, b = b, a+b, so I changed the function to look like this:
def fib(n):
    a = 0
    b = 1
    while a < n:
        print(a)
        a = b
        b = a + b
and I thought it would be right, but when I executed the program, I got a different output. Can someone explain to me the difference between those two forms of assignment?
Thanks, anyway.
b, a+b creates a tuple containing those two values. Then a, b = ... unpacks the tuple and assigns its values to the variables. In your code however you overwrite the value of a first, so the second line uses the new value.
a, b = b, a + b
is roughly equal to:
tmp = a
a = b
b = tmp + b
When Python executes
a,b = b, a+b
it evaluates the right-hand side first, then unpacks the tuple and assigns the values to a and b. Notice that a+b on the right-hand side is using the old value of a.
When Python executes
a=b
b=a+b
it evaluates b and assigns its value to a.
Then it evaluates a+b and assigns that value to b. Notice now that a+b is using the new value for a.
That syntax simultaneously assigns new values to a and b based on the current values. The reason it's not equivalent is that when you write the two separate statements, the second assignment uses the new value of a instead of the old value of a.
In the first example, a isn't updated to the value of b until the entire line has been evaluated, so b is computed from the old a plus the old b.
In your example, you've already set a to b, so the last line (b=a+b) could just as easily be b=b+b.
It's all in the order in which things are evaluated.