I created an iterator to increment the figure number in various plotting function calls:
figndx=itertools.count()
I then call these throughout my code, passing next(figndx) as the figure-number argument so that it increments each time. For example:
an.plotimg(ref_frame,next(figndx),'Ref Frame')
an.plotimg(new_frame,next(figndx),'New Frame')
etc...
After some particular function call, I want to read back the figndx value and store it in a variable for later use. However, when I inspect figndx, it just shows count(7), for example. How do I extract the 7 from this?
I've tried:
figndx
figndx.__iter__()
and I can't find anything in the suggested methods (when I type the dot) that returns the current value. Can this be done?
Just wrap a count object:
import itertools

class MyCount:
    def __init__(self, *args, **kwargs):
        self._c = itertools.count(*args, **kwargs)
        self._current = next(self._c)

    def __next__(self):
        current = self._current
        self._current = next(self._c)
        return current

    def __iter__(self):
        return self

    def peek(self):
        return self._current
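For example (a quick usage sketch):

figndx = MyCount()
next(figndx)   # 0
next(figndx)   # 1
figndx.peek()  # 2 -- the value the next call to next() will return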
You can create a peeker using itertools.tee and encapsulate the peek:
from itertools import count, tee
def peek(iterator):
    iterator, peeker = tee(iterator)
    return iterator, next(peeker)
Then you can call it like
figndx = count(1)
next(figndx)
next(figndx)
figndx, next_value = peek(figndx)
next_value
# 3
Related
I would like to enforce a maximum number of items for a list, making sure that the code does not allow the function that appends to the list to add more than, for example, 3 items.
Function that appends to list:
transactions = []
def append_hash():
    transactions.append(hash)
How do I prevent append_hash from adding more than three hashes to the transactions list without deleting any previous hashes?
A list is, by definition, of arbitrary size. You'll need a new type instead.
class BoundedListFullError(RuntimeError):
    pass

class BoundedList:
    def __init__(self, max_size, x=None):
        if x is None:
            x = []
        self.values = []
        self.values.extend(x)
        self.max_size = max_size

    def append(self, x):
        if len(self.values) == self.max_size:
            raise BoundedListFullError(self.max_size)
        self.values.append(x)

    def extend(self, xs):
        if len(self.values) + len(xs) > self.max_size:
            raise BoundedListFullError(self.max_size)
        self.values.extend(xs)
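For example (a quick usage sketch):

bl = BoundedList(3, [1, 2])
bl.append(3)   # fine, the list is now full
bl.append(4)   # raises BoundedListFullError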
You could just subclass list and modify the append method:
class MyStack(list):
    def __init__(self, max_size, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.max_size = max_size

    def append(self, value):
        if len(self) >= self.max_size:
            raise ValueError("NO!")
        # Per @chepner's suggestion
        super().append(value)
somestack = MyStack(3)
somestack.append(1)
somestack.append(2)
somestack.append(3)
somestack.append(4) # Raises ValueError
If you control your code and ensure you only ever use your function:
transactions = []
def append_hash(h):
    global transactions
    transactions = (transactions + [h])[:3]

or

def append_hash(h):
    if len(transactions) < 3:
        transactions.append(h)
    # else:
    #     raise some error you need to choose/define
Neither of those will enforce it though - you can still modify the list without your function. You would need a separate class - see chepner's answer.
Adding a fourth hash will silently fail - if you want to raise an exception instead, the second solution can be adapted.
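A minimal sketch of that adaptation (the exception type here is just an example):

def append_hash(h):
    if len(transactions) >= 3:
        raise ValueError("transactions already holds 3 hashes")
    transactions.append(h)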
I'm receiving an unknown number of records for background processing from generators. If there is a more important job, I have to stop to release the process.
The main process is best described as:
def main():
    generator_source = generator_for_test_data()  # 1. contact server to get data.
    uw = UploadWrapper(generator_source)          # 2. wrap the data.
    while not interrupt():                        # 3. check for interrupts.
        row = next(uw)
        if row is None:
            return
        print(long_running_job(row))              # 4. do the work.
Is there a way to get to __next__ without having to plug __iter__?
Having two steps - (1) make an iterator, then (2) iterate over it, just seems clumsy.
There are many cases where I'd prefer to submit a function to a function manager (mapreduce style), but in this case I need an instantiated class with some settings. Registering a single function can therefore only work if that function alone is __next__.
class UploadWrapper(object):
    def __init__(self, generator):
        self.generator = generator
        self._iterator = None

    def __iter__(self):
        for page in self.generator:
            yield from page.data

    def __next__(self):
        if self._iterator is None:           # ugly bit.
            self._iterator = self.__iter__()
        try:
            return next(self._iterator)
        except StopIteration:
            return None
Q: Is there a simpler way?
Working sample added for completeness:
import time
import random

class Page(object):
    def __init__(self, data):
        self.data = data

def generator_for_test_data():
    for t in range(10):
        page = Page(data=[(t, i) for i in range(100, 110)])
        yield page

def long_running_job(row):
    time.sleep(random.randint(1, 10) / 100)
    assert len(row) == 2
    assert row[0] in range(10)
    assert row[1] in range(100, 110)
    return row

def interrupt():  # interrupt check
    if random.randint(1, 50) == 1:
        print("INTERRUPT SIGNAL!")
        return True
    return False

class UploadWrapper(object):
    def __init__(self, generator):
        self.generator = generator
        self._iterator = None

    def __iter__(self):
        for ft in self.generator:
            yield from ft.data

    def __next__(self):
        if self._iterator is None:
            self._iterator = self.__iter__()
        try:
            return next(self._iterator)
        except StopIteration:
            return None

def main():
    gen = generator_for_test_data()
    uw = UploadWrapper(gen)
    while not interrupt():  # check for job interrupt.
        row = next(uw)
        if row is None:
            return
        print(long_running_job(row))

if __name__ == "__main__":
    main()
Your UploadWrapper seems overly complex; there is more than one simpler solution.
My first thought is to ditch the class altogether and just use a function instead:
def uploadwrapper(page_gen):
    for page in page_gen:
        yield from page.data
Just replace uw = UploadWrapper(gen) with uw = uploadwrapper(gen), and that'll work.
If you insist on the class, you can just get rid of the __next__() and replace uw = UploadWrapper(gen) with uw = iter(UploadWrapper(gen)), and it'll work.
In either case, you must also catch the StopIteration in the caller. __next__() is supposed to raise StopIteration when it's done, not return None like yours does. Otherwise it won't work with things that expect a well-behaved iterator, e.g. for loops.
I think you might have some misconceptions about how it all is supposed to fit together, so I'll try my best to explain how it's supposed to work, to the best of my knowledge:
The point of __iter__() is that if you have, e.g., a list, you can get multiple independent iterators by calling iter(). When you have a for loop, you're essentially first getting an iterator with iter() and then calling next() on it on every loop iteration. If you have two nested loops that use the same list, the iterators and their positions are still separate, so there's no conflict. __iter__() is supposed to return an iterator for the container it's on, or, if it's called on an iterator, it's supposed to just return self. In that sense it's kind of wrong for UploadWrapper not to return self in __iter__(), since it wraps a generator and so can't really give independent iterators.

As for why leaving out __next__() works: when you define a generator (i.e. use yield in a function), the generator has an __iter__() (that returns self, as it should) and a __next__() that does what you'd expect. In your original code, you're not really using __iter__() for what it's supposed to be used for: the code works even if you rename it to something else! This is because you never call iter() on the instance and just directly call next().
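For example, a list hands out independent iterators, while calling iter() on an iterator just returns the same object:

xs = [1, 2, 3]
a, b = iter(xs), iter(xs)
next(a)       # 1
next(b)       # 1 -- b has its own position
iter(a) is a  # True -- an iterator's __iter__() returns self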
If you wanted to do it "properly" as a class, I think something like this might suffice:
class UploadWrapper(object):
    def __init__(self, generator):
        self.generator = generator
        self.subgen = iter(next(generator).data)

    def __iter__(self):
        return self

    def __next__(self):
        while True:
            try:
                return next(self.subgen)
            except StopIteration:
                self.subgen = iter(next(self.generator).data)
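A rough usage sketch with the test data generator from the question; the for loop stops naturally when StopIteration propagates:

uw = UploadWrapper(generator_for_test_data())
for row in uw:
    print(long_running_job(row))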
In Clojure I can do something like this:
(-> path
    clojure.java.io/resource
    slurp
    read-string)
instead of doing this:
(read-string (slurp (clojure.java.io/resource path)))
This is called threading in Clojure terminology and helps getting rid of a lot of parentheses.
In Python if I try to use functional constructs like map, any, or filter I have to nest them to each other. Is there a construct in Python with which I can do something similar to threading (or piping) in Clojure?
I'm not looking for a fully featured version since there are no macros in Python, I just want to do away with a lot of parentheses when I'm doing functional programming in Python.
Edit: I ended up using toolz, which supports piping.
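For reference, a minimal sketch of such a chain with toolz.pipe (assuming toolz is installed; f and g are illustrative):

from toolz import pipe

def f(x): return 2 * x + 1
def g(x): return x ** 2

pipe(5, f, g)  # 121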
Here is a simple implementation of @deceze's idea (although, as @Carcigenicate points out, it is at best a partial solution):
import functools
def apply(x, f): return f(x)

def thread(*args):
    return functools.reduce(apply, args)
For example:
def f(x): return 2*x+1
def g(x): return x**2
thread(5,f,g) #evaluates to 121
I wanted to take this to the extreme and do it all dynamically.
Basically, the below Chain class lets you chain functions together similar to Clojure's -> and ->> macros. It supports both threading into the first and last arguments.
Functions are resolved in this order:
Object method
Local defined variable
Built-in variable
The code:
class Chain(object):
    def __init__(self, value, index=0):
        self.value = value
        self.index = index

    def __getattr__(self, item):
        append_arg = True
        try:
            prop = getattr(self.value, item)
            append_arg = False
        except AttributeError:
            try:
                prop = locals()[item]
            except KeyError:
                prop = getattr(__builtins__, item)

        if callable(prop):
            def fn(*args, **kwargs):
                orig = list(args)
                if append_arg:
                    if self.index == -1:
                        orig.append(self.value)
                    else:
                        orig.insert(self.index, self.value)
                return Chain(prop(*orig, **kwargs), index=self.index)
            return fn
        else:
            return Chain(prop, index=self.index)
Thread each result as first arg
file = Chain(__file__).open('r').readlines().value
Thread each result as last arg
result = Chain(range(0, 100), index=-1).map(lambda x: x * x).reduce(lambda x, y: x + y).value
I want to test whether deleting a LinkList's head element is faster than appending an element to the end of the LinkList.
This is my LinkList's main code:
class LNode:
    def __init__(self, elem, next_=None):
        self.elem = elem
        self.next = next_

class LinkList:
    def __init__(self):
        self.__head = None

    # delete head element
    def head_pop(self):
        if self.__head is None:
            raise LinkedListUnderflow("in pop")
        e = self.__head.elem
        self.__head = self.__head.next
        return e

    # add an element at end
    def append(self, elem):
        if self.__head is None:
            self.__head = LNode(elem)
            return
        p = self.__head
        while p.next is not None:
            p = p.next
        p.next = LNode(elem)
import time
#test time
def timetest(f):
    start = time.clock()
    for a in range(0, 1000000):
        f
    end = time.clock()
    print("times:" + str(end - start))
Then I try this:
llist = LinkList()

def append():
    llist.append(666)

def head_pop():
    llist.head_pop()

timetest(append())
timetest(head_pop())
Output:
times:0.029582597002445254
times:0.03032071299821837
As you can see, they take about the same time.
But I think it should be O(n) vs O(1).
What you're doing is passing the result of append() to your time test function, whereas you want to pass the function itself!
Change your time-test to call the f function:
def timetest(f):
    start = time.clock()
    for a in range(0, 1000000):
        f()  # <- note the () here
    end = time.clock()
    print("times:" + str(end - start))
Then use this to test:
timetest(append)
timetest(head_pop)
As you can see, we're passing in the function for the test to call, instead of the RESULT of the function (being called once!)
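As an aside, time.clock() was removed in Python 3.8; a sketch of the same timing loop using time.perf_counter() instead:

import time

def timetest(f):
    start = time.perf_counter()
    for _ in range(1000000):
        f()  # call the function under test
    end = time.perf_counter()
    print("times:" + str(end - start))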
I'm trying to write a simple GUI front end for Plurk using pyplurk.
I have successfully got it to create the API connection, log in, and retrieve and display a list of friends. Now I'm trying to retrieve and display a list of Plurks.
pyplurk provides a GetNewPlurks function as follows:
def GetNewPlurks(self, since):
    '''Get new plurks since the specified time.

    Args:
      since: [datetime.datetime] the timestamp criterion.

    Returns:
      A PlurkPostList object or None.
    '''
    offset = jsonizer.conv_datetime(since)
    status_code, result = self._CallAPI('/Polling/getPlurks', offset=offset)
    return None if status_code != 200 else \
        PlurkPostList(result['plurks'], result['plurk_users'].values())
As you can see this returns a PlurkPostList, which in turn is defined as follows:
class PlurkPostList:
    '''A list of plurks and the set of users that posted them.'''

    def __init__(self, plurk_json_list, user_json_list=[]):
        self._plurks = [PlurkPost(p) for p in plurk_json_list]
        self._users = [PlurkUser(u) for u in user_json_list]

    def __iter__(self):
        return self._plurks

    def GetUsers(self):
        return self._users

    def __eq__(self, other):
        if other.__class__ != PlurkPostList: return False
        if self._plurks != other._plurks: return False
        if self._users != other._users: return False
        return True
Now I expected to be able to do something like this:
api = plurk_api_urllib2.PlurkAPI(open('api.key').read().strip(), debug_level=1)
plurkproxy = PlurkProxy(api, json.loads)
user = plurkproxy.Login('my_user', 'my_pass')
ps = plurkproxy.GetNewPlurks(datetime.datetime(2009, 12, 12, 0, 0, 0))
print ps
for p in ps:
    print str(p)
When I run this, what I actually get is:
<plurk.PlurkPostList instance at 0x01E8D738>
from the "print ps", then:
for p in ps:
TypeError: __iter__ returned non-iterator of type 'list'
I don't understand - surely a list is iterable? Where am I going wrong - how do I access the Plurks in the PlurkPostList?
When you define your own __iter__ method, you should be aware that __iter__ is supposed to return an iterator, not an iterable. You are returning a list, not an iterator over a list, so it fails. You can fix it by doing return iter(self._plurks), for example.
If you wanted to do something a little more complex, like process each item in self._plurks as it's being iterated over, the usual trick is to make your __iter__ method a generator. That way, the return value of the call to __iter__ is the generator, which is an iterator:
def __iter__(self):
    for item in self._plurks:
        yield process(item)
The __iter__ method should return an object which implements the next() method (__next__ in Python 3).
A list does not have a next() method, but it has an __iter__ method, which returns a listiterator object. The listiterator object has a next() method.
You should write:
def __iter__(self):
    return iter(self._plurks)
As an alternative, you can also define the next() function and have __iter__() return self. See Build a Basic Python Iterator for a nice example.
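A minimal sketch of that alternative (the question's code is Python 2, where the method is named next(); in Python 3 it is __next__()):

def __iter__(self):
    self._index = 0
    return self

def next(self):  # would be __next__ in Python 3
    if self._index >= len(self._plurks):
        raise StopIteration
    item = self._plurks[self._index]
    self._index += 1
    return item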