How to check that a Python iterator is finite?

Let's say there is a function return_list_from_iterable that takes any iterable as an argument.
def return_list_from_iterable(iterable):
    if is_finite(iterable):  # check if the iterator is finite
        return list(iterable)
Is there a way to check whether an iterator is finite before calling list(iterable)? For example, if itertools.repeat('hello, infinity!') were passed as an argument then, I guess, something bad would happen while the function is running.
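There is no general way to decide whether an arbitrary iterator is finite (that would amount to solving the halting problem), but a common defensive pattern is to cap how many items you are willing to consume, for example with itertools.islice. A minimal sketch, assuming a hypothetical max_items cap that is not part of the original question:

from itertools import islice

def return_list_from_iterable(iterable, max_items=1_000_000):
    # Pull at most max_items + 1 elements so an infinite iterator can't hang us.
    result = list(islice(iterable, max_items + 1))
    if len(result) > max_items:
        raise ValueError("iterable produced more than max_items elements")
    return result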

Related

When do we initialise a value within the function vs. as a default argument?

I have a question about arguments in functions, in particular about initialising an array or other data structure in the function definition, like the following:
def helper(root, result=[]):
    ...
My question is, what is the difference between the above and doing:
def helper(root):
    result = []
I can see why this would be necessary if we were to run recursions, i.e. we would need to use the first case in some instances.
But are there any other instances, and am I right in saying it is necessary in some cases for recursion, or can we always use the latter instead?
Thanks
Default argument values are evaluated once, when the function is defined, so using a list or any other mutable object as a default value is a bad idea: every call shares that same object.
The best way of doing it is like this:
def helper(root, result=None):
    if result is None:
        result = []
Now if you pass only one argument to the function, result will be a fresh empty list.
If you initialize the list in the function definition and call the function multiple times, result won't reset; it will keep the values from previous calls.
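A quick demonstration of that pitfall (a sketch; the calls and values are illustrative, not from the original question):

def helper(root, result=[]):   # the default list is created once, at definition time
    result.append(root)
    return result

print(helper(1))  # [1]
print(helper(2))  # [1, 2] -- the value from the previous call persists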

Recursion does not work with positional arguments

I am trying to do multiplication by recursion (multiplying all the values of a container) in Python. The function receives the elements of the list as positional arguments (*n). On execution I receive a RecursionError saying "maximum recursion depth exceeded". However, the code works fine if I simply use n instead of *n and pass the elements in a list.
Code not working:
def multiply(*n):
    if n:
        return n[0]*multiply(n[1:])
    else:
        return 1

multiply(5, 1, 4, 9)
Working code:
def multiply(n):
    if n:
        return n[0]*multiply(n[1:])
    else:
        return 1

multiply([5, 1, 4, 9])
In the first piece of code, the expression
multiply(n[1:])
is calling multiply with only one argument: the tuple of remaining values. In order to call it with arguments equal to the contents of n[1:], you use the splat operator again, like so:
multiply(*n[1:])
When you tell a function to expect an arbitrary number of positional arguments with *n, you need to accommodate it in that format: with multiple arguments, not with a single iterable that contains all the arguments. If you have a single iterable whose elements should be used as arguments, you have to unpack it with * when you call it. The second function works because it's expecting a single, iterable argument, and you send it a single, iterable argument.
Replace n[1:] with *n[1:].
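Putting that fix into the variadic version gives, as a sketch:

def multiply(*n):
    if n:
        return n[0] * multiply(*n[1:])  # re-unpack the remaining values
    else:
        return 1

print(multiply(5, 1, 4, 9))  # 180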

How can I pass a function's output to a different function as input?

I'm trying to write a function which gets as parameters a list of functions, and parameters for the first one. It then calls them in order, passing the output of the previous one into the next one. (Obviously for this to work all functions must expect the same number of parameters and return the same number of values).
Here's what I tried:
def chain(functions, *first_func_params):
    params = first_func_params
    for func in functions:
        params = func(*params)
    return params
However this works only if all the functions return tuples ('multiple values') or other iterables, because only iterables can be unpacked into the argument list of a function call.
If the functions simply return single values, chain doesn't work.
I could simply check if func's output is a tuple or not and act accordingly. But is there a more elegant solution?
How would you implement this?
You could use:
if not isinstance(params, tuple):
    params = (params,)
Or catch the TypeError exception that will be raised if you try to use * on an object that cannot be unpacked.
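Folded into the original function, a minimal sketch (the lambda pipeline is just an illustrative example, not from the original question):

def chain(functions, *first_func_params):
    params = first_func_params
    for func in functions:
        result = func(*params)
        # Normalize single return values so the next call can unpack them.
        params = result if isinstance(result, tuple) else (result,)
    return params

print(chain([lambda x: x + 1, lambda x: x * 2], 3))  # (8,)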

Calling gen.send() with a new generator in Python 3.3+?

From PEP 342:
Because generator-iterators begin execution at the top of the generator's function body, there is no yield expression to receive a value when the generator has just been created. Therefore, calling send() with a non-None argument is prohibited when the generator iterator has just started, ...
For example,
>>> def a():
...     for i in range(5):
...         print((yield i))
...
>>> g = a()
>>> g.send("Illegal")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't send non-None value to a just-started generator
Why is this illegal? The way I understood the use of yield here, it pauses execution of the function, and returns to that spot the next time that next() (or send()) is called. But it seems like it should be legal to print the first result of (yield i)?
Asked a different way, in what state is the generator 'g' directly after g = a()? I assumed that it had run a() up until the first yield, and since there was a yield it returned a generator, instead of a standard synchronous object return.
So why exactly is calling send with non-None argument on a new generator illegal?
Note: I've read the answer to this question, but it doesn't really get to the heart of why it's illegal to call send (with non-None) on a new generator.
Asked a different way, in what state is the generator 'g' directly after g = a()? I assumed that it had run a() up until the first yield, and since there was a yield it returned a generator, instead of a standard synchronous object return.
No. Right after g = a() it is right at the beginning of the function. It does not run up to the first yield until after you advance the generator once (by calling next(g)).
This is what it says in the quote you included in your question: "Because generator-iterators begin execution at the top of the generator's function body..." It also says it in PEP 255, which introduced generators:
When a generator function is called, the actual arguments are bound to function-local formal argument names in the usual way, but no code in the body of the function is executed.
Note that it does not matter whether the yield statement is actually executed. The mere occurrence of yield inside the function body makes the function a generator, as documented:
Using a yield expression in a function definition is sufficient to cause that definition to create a generator function instead of a normal function.
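To illustrate, the usual pattern is to advance ("prime") the generator once with next() before sending values; a sketch in the same REPL style, using the a() defined above:

>>> g = a()
>>> next(g)          # runs up to the first yield; the generator yields 0
0
>>> g.send("Legal")  # resumes at (yield i), which evaluates to "Legal"
Legal
1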

TypeError: 'str' object is not callable for append()

#!/usr/bin/python

class List:
    list = []

    def append(self, append):
        print(append())
        #self.append = append

    def displayList(self, displayList):
        print(displayList())
        #print(self.append)

def main():
    list = List()
    list.append('abc')
    list.append('def')
    list.append('ghi')
    list.displayList()

if __name__ == '__main__':
    main()
You have a method append (referenced as self.append) with a parameter also named append, and the method calls that passed argument. But in your main you call the object's append method and pass it a string. Remember that inside that method you're calling the passed argument; since that argument is a string, you can't call it.
The other method in your class, displayList, does the exact same thing as append, only you're calling it with no argument at all, which will also generate an error.
Don't attempt to fix these issues by prepending self (print(self.append()) or print(self.displayList())), as that will simply exceed the maximum recursion depth.
Your List class's list is also a class variable, not an instance variable. That will probably result in more problems later on.
I recommend taking a step back and thinking again about what you're trying to do and why. If you're creating this class for fun/education, there are probably better ways to learn. If you're doing it as part of a practical program (i.e., using it as a solution to a particular challenge), you may have an XY Problem.
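If the goal is a simple wrapper around a built-in list, a minimal corrected sketch (the attribute name items and the usage below are illustrative, not from the original post) might look like:

class List:
    def __init__(self):
        self.items = []          # instance attribute, not a class attribute

    def append(self, item):
        self.items.append(item)  # delegate to the built-in list's append

    def displayList(self):
        print(self.items)

lst = List()
lst.append('abc')
lst.append('def')
lst.append('ghi')
lst.displayList()                # prints ['abc', 'def', 'ghi']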
Guess what? It is because a string object is not callable.
