Can we create a queue in the form of a list? - Python

I have been learning the queue data structure recently. How do we actually create a queue? Can we simply use a list and insert and remove items from it, or do I need to do something else? I have tried creating a queue class too. What is the correct method?
class Queue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = []

    def IsEmpty(self):
        return len(self.queue) == 0

    def IsFull(self):
        return len(self.queue) == self.capacity

    def Enqueue(self, x):
        if len(self.queue) == self.capacity:
            return 'Queue overloaded'
        self.queue.append(x)  # newest item goes to the back
        return f'{x} enqueued into queue.'

    def Dequeue(self):
        if len(self.queue) == 0:
            return 'Queue empty'
        x = self.queue.pop(0)  # oldest item comes off the front
        return f'{x} dequeued from queue.'

    def GetFront(self):
        return self.queue[0]

    def GetBack(self):
        return self.queue[-1]

The common ways to implement a queue are basically:
1. Like an ArrayList, you use an array and reallocate a bigger one if it fills up. Unlike an array list, you need to allow the elements in the queue to wrap around from the end to the start. This is what Java's ArrayDeque does, and it is probably the most common implementation. There are variants that waste more memory but don't wrap around; these are usually used if you sometimes have to pass parts of the queue to some other function as a contiguous region.
2. Use a singly-linked list, but also keep a pointer to the tail node so you can enqueue items quickly. This is typically an intrusive list (i.e., you just add next pointers to objects you already have), or...
3. There's a particularly simple lock-free implementation of a thread-safe queue using a singly-linked list with an extra head node (and again a pointer to the tail node). This is what Java's ConcurrentLinkedQueue uses.
Your implementation of a queue is none of these. The Python list is backed by an array like (1), but the first item is always at the start of the array. A pop(0) operation can therefore take a long time, because all the other items have to be moved toward the start.
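In practice, the usual Python answer is the standard library's collections.deque, which supports O(1) appends and pops at both ends. A minimal sketch of using it as a FIFO queue:

from collections import deque

q = deque()
q.append(1)          # enqueue at the back: O(1)
q.append(2)
q.append(3)
front = q.popleft()  # dequeue from the front: O(1), unlike list.pop(0)
print(front)         # 1
print(q[0], q[-1])   # peek at the front and back: 2 3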

How to move the last element of a linked list to first in Python (code below)?

#DSA-Prac-1
class Node:
    def __init__(self,data):
        self.__data=data
        self.__next=None
    def get_data(self):
        return self.__data
    def set_data(self,data):
        self.__data=data
    def get_next(self):
        return self.__next
    def set_next(self,next_node):
        self.__next=next_node

class LinkedList:
    def __init__(self):
        self.__head=None
        self.__tail=None
    def get_head(self):
        return self.__head
    def get_tail(self):
        return self.__tail
    def add(self,data):
        new_node=Node(data)
        if(self.__head is None):
            self.__head=self.__tail=new_node
        else:
            self.__tail.set_next(new_node)
            self.__tail=new_node
    def insert(self,data,data_before):
        new_node=Node(data)
        if(data_before==None):
            new_node.set_next(self.__head)
            self.__head=new_node
            if(new_node.get_next()==None):
                self.__tail=new_node
        else:
            node_before=self.find_node(data_before)
            if(node_before is not None):
                new_node.set_next(node_before.get_next())
                node_before.set_next(new_node)
                if(new_node.get_next() is None):
                    self.__tail=new_node
            else:
                print(data_before,"is not present in the Linked list")
    def display(self):
        temp=self.__head
        while(temp is not None):
            print(temp.get_data())
            temp=temp.get_next()
    def find_node(self,data):
        temp=self.__head
        while(temp is not None):
            if(temp.get_data()==data):
                return temp
            temp=temp.get_next()
        return None
    def delete(self,data):
        node=self.find_node(data)
        if(node is not None):
            if(node==self.__head):
                if(self.__head==self.__tail):
                    self.__tail=None
                self.__head=node.get_next()
            else:
                temp=self.__head
                while(temp is not None):
                    if(temp.get_next()==node):
                        temp.set_next(node.get_next())
                        if(node==self.__tail):
                            self.__tail=temp
                        node.set_next(None)
                        break
                    temp=temp.get_next()
        else:
            print(data,"is not present in Linked list")

def change_order(input_list):
    'I need the code to be written here'
    return input_list

input_list=LinkedList()
input_list.add(9)
input_list.add(3)
input_list.add(56)
input_list.add(6)
input_list.add(2)
input_list.add(7)
input_list.add(4)
result=change_order(input_list)
result.display()
Only the function change_order must be written; no changes should be made to other parts of the program. The input linked list is 9->3->56->6->2->7->4 and the output should be 4->9->3->56->6->2->7. I need an answer for this particular code.
This is what I have tried. Since the head of the LinkedList class is a private attribute, I have difficulty assigning the new head.
def change_order(input_list):
    temp=input_list.get_head()
    while temp and temp.get_next():
        sec_last = temp
        temp=temp.get_next()
    sec_last.set_next(None)
    temp.set_next(input_list.get_head())
Your LinkedList class should already provide all the functionality you need to be able to accomplish this without any need to mess around with the pointers yourself:
data = input_list.get_tail().get_data() # get last element
input_list.delete(data) # remove last element
input_list.insert(data, None) # insert that element first
Note that the list interface assumes that all items in the list are unique; some of these methods don't work properly if you have duplicates and will only ever operate on the first match. If you do the insert before the delete, for example, the delete will remove the item at the head that you just inserted, not the item at the tail that you wanted to remove.
This is essentially a bug/constraint of this list implementation; normally a list interface would provide you with some sort of iterator to allow you to handle cases with multiple matches.
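A quick illustration of that pitfall, using the LinkedList class from the question (hypothetical values; note the duplicate 1s):

lst = LinkedList()
for x in (1, 2, 1):
    lst.add(x)          # list is 1 -> 2 -> 1

lst.insert(1, None)     # insert first: list is 1 -> 1 -> 2 -> 1
lst.delete(1)           # delete() matches the FIRST 1 -- the node just inserted
lst.display()           # prints 1, 2, 1 -- the tail we wanted removed is still there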
If you had to work around that under the parameters of the assignment, being able to modify the head isn't the hard part (you can do that reliably via insert); the hard part is popping off the tail, since the only interface that lets you do that is delete, which flatly won't let you access a node by reference and is therefore always going to do the wrong thing if you're trying to use it to delete the tail when there's a duplicate. The simplest solution IMO is to just convert the entire list into a better format, do whatever you need, and then convert it back. (This is a terrible lesson if the goal is to learn how linked lists work, but it's a good lesson if the goal is to learn how sometimes you need to do silly things to work around someone else's incompetence.)
def change_order(input_list):
    'I need the code to be written here'
    # Define helper functions to convert LinkedList to and from List.
    def convert_linked_list_to_list(linked_list):
        """Converts a LinkedList to a native List."""
        arr = []
        node = linked_list.get_head()
        while node is not None:
            arr.append(node.get_data())
            node = node.get_next()
        return arr
    def rebuild_linked_list_from_list(linked_list, arr):
        """Replaces input LinkedList contents with native List contents."""
        while linked_list.get_head() is not None:
            linked_list.delete(linked_list.get_head().get_data())
        for data in arr:
            linked_list.add(data)
    # Now do the order change using a List.
    arr = convert_linked_list_to_list(input_list)
    rebuild_linked_list_from_list(input_list, arr[-1:] + arr[:-1])
    return input_list
In real life, you'd define these helpers outside of this function, because they're bound to be useful in other situations where you need to manipulate one of these LinkedList containers in ways that its terrible interface doesn't support, but the parameters of the assignment require everything to be contained to change_order, so there you have it.

Efficient growing pools of objects

Is there an established module, or good practice, to work efficiently with large object pools in Python 3?
What I mean by "object pool" is some class capable of:
fetching new instances of specified type, while dynamically extending the memory allocation under the hood when necessary;
maintaining a consistent indexing for previously fetched objects.
Here is a basic example:
class Value:
    __slots__ = ('a','b')
    def __init__(self,a=None,b=None):
        self.a = a
        self.b = b

class BasicPool:
    def __init__(self):
        self.data = []
    def __getitem__(self,k):
        return self.data[k]
    def fetch(self):
        v = Value()
        self.data.append(v)
        return v

class BlockPool:
    def __init__(self,bsize=100):
        self.bsize = bsize
        self.next = bsize
        self.data = []
    def __getitem__(self,k):
        b,k = divmod(k,self.bsize)
        return self.data[b][k]
    def fetch(self):
        self.next += 1
        if self.next >= self.bsize:
            self.data.append([ Value() for _ in range(self.bsize) ])
            self.next = 0
        return self.data[-1][self.next]
The BasicPool doesn't do anything smart: whenever a new instance is requested, it is instantiated and appended to an underlying list. The BlockPool, on the other hand, grows a list of pre-allocated blocks of instances. Surprisingly though, it seems that preallocation is not beneficial in practice:
from timeit import default_timer as timer

def benchmark(P):
    N = int(1e6)
    start = timer()
    for _ in range(N): P.fetch()
    print( timer() - start )

print( 'Basic pool:' )
for _ in range(5): benchmark(BasicPool())
# Basic pool:
# 1.2352294209995307
# 0.5003506309985823
# 0.48115064000012353
# 0.48508202800076106
# 1.1760561199989752

print( 'Block pool:' )
for _ in range(5): benchmark(BlockPool())
# Block pool:
# 0.7272855400005938
# 1.4875716509995982
# 0.726611527003115
# 0.7369502859983186
# 1.4867010340021807
As you can see, the BasicPool is always faster than the BlockPool (I also don't know the cause of these large variations). Pools of objects must be a fairly common need in Python; is the best approach really to use the builtin list.append? Are there smarter containers that can be used to further improve runtime performance, or is this dominated by the instantiation time anyway?
The whole point of the geometric growth of the array underlying a list is to reduce the reallocation overhead to a constant factor. That constant can easily be smaller than that for manually making blocks (principally because of the slow, interpreted manipulation of self.next and self.data in the latter). (Asymptotically, the cost of BlockPool.fetch is still the append, of course.) Moreover, your benchmark doesn’t include the additional cost of destroying the blocks, nor that of the two-step indexing on read.
So list is surely as good as it gets (without writing your own C code). You can improve BasicPool a bit by inheriting from list rather than containing one, eliminating a dictionary lookup per fetch and the interpreted __getitem__ wrapper entirely.
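A minimal sketch of that suggestion (a hypothetical variant, not benchmarked here):

class ListPool(list):
    """A pool that IS a list, so reads go through the C-level list indexing."""
    def fetch(self):
        v = Value()
        self.append(v)  # no self.data attribute lookup on each fetch
        return v

# usage: pool[k] now hits list.__getitem__ directly, with no Python-level wrapper
pool = ListPool()
obj = pool.fetch()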

Shared python generator

I am trying to reproduce the reactive extensions "shared" observable concept with Python generators.
Say I have an API that gives me an infinite stream that I can use like this:
def my_generator():
    for elem in the_infinite_stream():
        yield elem
I could use this generator multiple times like so:
stream1 = my_generator()
stream2 = my_generator()
And the_infinite_stream() will be called twice (once for each generator).
Now say that the_infinite_stream() is an expensive operation. Is there a way to "share" the generator between multiple clients? It seems like tee would do that, but I have to know in advance how many independent generators I want.
The idea is that in other languages (Java, Swift) using the reactive extensions (RxJava, RxSwift) "shared" streams, I can conveniently duplicate the stream on the client side. I am wondering how to do that in Python.
Note: I am using asyncio
I took the tee implementation and modified it so that you can have a varying number of generators from infinite_stream:
import collections

def generators_factory(iterable):
    it = iter(iterable)
    deques = []
    already_gone = []

    def new_generator():
        new_deque = collections.deque()
        new_deque.extend(already_gone)
        deques.append(new_deque)

        def gen(mydeque):
            while True:
                if not mydeque:            # when the local deque is empty
                    try:
                        newval = next(it)  # fetch a new value and
                    except StopIteration:  # PEP 479: don't leak StopIteration
                        return             # out of a generator
                    already_gone.append(newval)
                    for d in deques:       # load it to all the deques
                        d.append(newval)
                yield mydeque.popleft()

        return gen(new_deque)

    return new_generator

# test it:
infinite_stream = [1, 2, 3, 4, 5]
factory = generators_factory(infinite_stream)
gen1 = factory()
gen2 = factory()
print(next(gen1)) # 1
print(next(gen2)) # 1 even after it was produced by gen1
print(list(gen1)) # [2, 3, 4, 5] # the rest after 1
To cache only a bounded number of values, you can change already_gone = [] into already_gone = collections.deque(maxlen=size) and add a size=None parameter to generators_factory, as sketched below.
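A sketch of just the changed lines (assuming subscribers that arrive late can tolerate missing values that have already rotated out of the cache):

def generators_factory(iterable, size=None):
    it = iter(iterable)
    deques = []
    # With maxlen set, values older than the last `size` fall off the front,
    # so a generator created late replays at most `size` cached values.
    already_gone = collections.deque(maxlen=size)
    # ... the rest of the body is unchanged ...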
Consider simple class attributes.
Given
def infinite_stream():
    """Yield a number from a (semi-)infinite iterator."""
    # Alternatively, `yield from itertools.count()`
    yield from iter(range(100000000))

# Helper
def get_data(iterable):
    """Print the state of `data` per stream."""
    return ", ".join([f"{x.__name__}: {x.data}" for x in iterable])
Code
class SharedIterator:
    """Share the state of an iterator with subclasses."""
    _gen = infinite_stream()
    data = None

    @staticmethod
    def modify():
        """Advance the shared iterator + assign new data."""
        cls = SharedIterator
        cls.data = next(cls._gen)
Demo
Given a tuple of client streams (A, B and C),
# Streams
class A(SharedIterator): pass
class B(SharedIterator): pass
class C(SharedIterator): pass
streams = A, B, C
let us modify and print the state of one iterator shared between them:
# Observe changed state in subclasses
A.modify()
print("1st access:", get_data(streams))
B.modify()
print("2nd access:", get_data(streams))
C.modify()
print("3rd access:", get_data(streams))
Output
1st access: A: 0, B: 0, C: 0
2nd access: A: 1, B: 1, C: 1
3rd access: A: 2, B: 2, C: 2
Although any stream can modify the iterator, the class attribute is shared between sub-classes.
See Also
Docs on asyncio.Queue - an async alternative to a shared container (see the sketch below)
Post on the Observer Pattern + asyncio
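A minimal sketch of that asyncio.Queue route (hypothetical names; it assumes the_infinite_stream is an async iterable and that one queue is created per subscriber):

import asyncio

async def broadcast(source, queues):
    """Read the expensive source once; fan each item out to every subscriber queue."""
    async for item in source:
        for q in queues:
            await q.put(item)

async def subscriber(name, q):
    while True:
        item = await q.get()
        print(name, "received", item)

# usage sketch: one broadcaster task, two subscriber tasks sharing the stream
# queues = [asyncio.Queue(), asyncio.Queue()]
# await asyncio.gather(broadcast(the_infinite_stream(), queues),
#                      subscriber("a", queues[0]), subscriber("b", queues[1]))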
You can call "tee" repeatedly to create multiple iterators as needed.
import itertools
import random

it = iter([random.random() for i in range(100)])
base, it_cp = itertools.tee(it)
_, it_cp2 = itertools.tee(base)
_, it_cp3 = itertools.tee(base)
Sample: http://tpcg.io/ZGc6l5.
You can use a single generator and "subscriber generators":
subscribed_generators = []

def my_generator():
    while True:
        elem = yield
        do_something(elem) # or yield do_something(elem) depending on your actual use

def publishing_generator():
    for elem in the_infinite_stream():
        for generator in subscribed_generators:
            generator.send(elem)
        yield elem

subscribed_generators.extend([my_generator(), my_generator()])
for generator in subscribed_generators:
    next(generator) # prime each subscriber so it can accept send()

# Next is just an example that forces iteration over `the_infinite_stream`
for elem in publishing_generator():
    pass
Instead of a generator function, you may also create a class with __next__, __iter__, send, and throw methods. That way you can modify the MyGenerator.__init__ method to automatically add new instances of it to subscribed_generators, as sketched below.
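A minimal sketch of such a class (MyGenerator is the hypothetical name from the previous sentence; do_something stands in for your real per-element work):

class MyGenerator:
    """A push-driven subscriber exposing the generator protocol."""
    def __init__(self):
        subscribed_generators.append(self) # auto-subscribe on construction

    def __iter__(self):
        return self

    def __next__(self):
        return None # nothing to pull; this subscriber is fed via send()

    def send(self, elem):
        do_something(elem)

    def throw(self, exc):
        raise exc # propagate errors pushed by the publisher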
This is somewhat similar to an event-based approach, with a "dumb" implementation:
for elem in the_infinite_stream is similar to emitting an event
for generator ...: generator.send is similar to sending the event to each subscriber
So one way to implement a "more complex but structured solution" would be to use an event-based approach:
For example you can use asyncio.Event
Or some third-party solution like aiopubsub
For any of those approaches you should emit an event for each element from the_infinite_stream, and your instances of my_generator should be subscribed to those events.
Other approaches can also be used, and the best choice depends on the details of your task and on how you are using the event loop in asyncio. For example:
You can implement the_infinite_stream (or a wrapper for it) as a class with "cursors" (objects that track the current position in the stream for different subscribers); then each my_generator registers a new cursor and uses it to get the next item in the infinite stream. In this approach the event loop will not automatically revisit my_generator instances, which might be required if those instances "are not equal" (for example, have some "priority balancing").
An intermediate generator calling all the instances of my_generator (as described earlier). In this approach each instance of my_generator is automatically revisited by the event loop. Most likely this approach is thread-safe.
Event-based approaches:
using asyncio.Event. Similar to the use of an intermediate generator. Not thread-safe.
aiopubsub.
something that uses the Observer pattern.
Make the_infinite_generator (or a wrapper for it) a "singleton" that "caches" the latest event. Some approaches were described in other answers. Other "caching" solutions can be used:
Emit the same element once for each instance of the_infinite_generator (use a class with a custom __new__ method that tracks instances, or use the same instance of a class that has a method returning a "shifted" iterator over the_infinite_stream) until someone calls a special method on an instance of the_infinite_generator (or on the class): infinite_gen.next_cycle. In this case there should always be some "last finalizing generator/processor" that at the end of each event-loop cycle will do the_infinite_generator().next_cycle().
Similar to the previous, but the same event is allowed to fire multiple times in the same my_generator instance (so they should watch for this case). In this approach the_infinite_generator().next_cycle() can be called "periodically" with loop.call_later or loop.call_at. This approach might be needed if "subscribers" should be able to handle/analyze delays, rate limits, timeouts between events, etc.
Many other solutions are possible. It's hard to propose something specific without looking at your current implementation and without knowing the desired behavior of the generators that use the_infinite_stream.
If I understand your description of "shared" streams correctly, you really need "one" the_infinite_stream generator and a "handler" for it. An example that tries to do this:
class StreamHandler:
    def __init__(self):
        self.__real_stream = the_infinite_stream()
        self.__sub_streams = []

    def get_stream(self):
        sub_stream = [] # or better use some Queue/deque object. Using list just to show the basic principle
        self.__sub_streams.append(sub_stream)
        while True:
            while sub_stream:
                yield sub_stream.pop(0)
            next(self)

    def __next__(self):
        next_item = next(self.__real_stream)
        for sub_stream in self.__sub_streams:
            sub_stream.append(next_item)

some_global_variable = StreamHandler()
# Or you can change StreamHandler.__new__ to make it a singleton, or create the instance at the point where the event loop is created

def my_generator():
    for elem in some_global_variable.get_stream():
        yield elem
But if all your my_generator objects are initialized at the same point of the infinite stream and "equally" iterated inside the loop, then this approach will introduce "unnecessary" memory overhead for each sub_stream (used as a queue). Unnecessary because those queues will always be the same (though that can be optimized: if an existing sub_stream is "empty", it can be re-used for new sub_streams with some changes to the "pop logic"). Many other implementations and nuances could be discussed.
If you have a single generator, you can use one queue per "subscriber" and route events to each subscriber as the primary generator produces results.
This has the advantage of allowing the subscribers to move at their own pace, and it can be dropped into existing code with very few changes to the original source.
For example:
def my_gen():
    ...

m1 = Muxer(my_gen)
m2 = Muxer(my_gen)

consumer1(m1).start()
consumer2(m2).start()
As items are pulled from the primary generator they are inserted into queues for each listener. Listeners can subscribe any time by constructing a new Muxer():
import queue
from threading import Lock
from collections import namedtuple

class Muxer():
    Entry = namedtuple('Entry', 'genref listeners, lock')
    already = {}
    top_lock = Lock()

    def __init__(self, func, restart=False):
        self.restart = restart
        self.func = func
        self.queue = queue.Queue()
        with self.top_lock:
            if func not in self.already:
                self.already[func] = self.Entry([func()], [], Lock())
            ent = self.already[func]
            self.genref = ent.genref
            self.lock = ent.lock
            self.listeners = ent.listeners
            self.listeners.append(self)

    def __iter__(self):
        return self

    def __next__(self):
        try:
            e = self.queue.get_nowait()
        except queue.Empty:
            with self.lock:
                try:
                    e = self.queue.get_nowait()
                except queue.Empty:
                    try:
                        e = next(self.genref[0])
                        for other in self.listeners:
                            if not other is self:
                                other.queue.put(e)
                    except StopIteration:
                        if self.restart:
                            self.genref[0] = self.func()
                        raise
        return e
Original source code, including test suite:
https://gist.github.com/earonesty/cafa4626a2def6766acf5098331157b3
The unit tests run many threads concurrently processing the same generated events in sequence. The code is order preserving, with a lock acquired during the single generator's access.
Caveats: the version here uses a singleton to gate access; otherwise it would be possible to accidentally evade its control over the contained generators. It also allows the contained generators to be "restartable", which was a useful feature for me at the time. There is no "close()" feature, simply because I didn't need it. This is an appropriate use case for __del__, however, since the last reference to a listener is the right time to clean up.
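A hedged sketch of what that __del__-based cleanup could look like (a hypothetical subclass, not part of the original gist; it takes the shared lock so the producer doesn't route items to a dead listener mid-iteration):

class ClosingMuxer(Muxer):
    """Muxer variant that unsubscribes itself when garbage collected."""
    def __del__(self):
        lock = getattr(self, 'lock', None)
        if lock is None:
            return # __init__ never completed; nothing was registered
        with lock:
            try:
                self.listeners.remove(self)
            except ValueError:
                pass # already removed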

Python multiprocessing pool with shared data

I'm attempting to speed up a multivariate fixed-point iteration algorithm using multiprocessing; however, I'm running into issues dealing with shared data. My solution vector is actually a named dictionary rather than a vector of numbers. Each element of the vector is actually computed using a different formula. At a high level, I have an algorithm like this:
current_estimate = previous_estimate
while True:
    for state in all_states:
        current_estimate[state] = state.getValue(previous_estimate)
    if norm(current_estimate, previous_estimate) < tolerance:
        break
    else:
        previous_estimate, current_estimate = current_estimate, previous_estimate
I'm trying to parallelize the for-loop part with multiprocessing. The previous_estimate variable is read-only and each process only needs to write to one element of current_estimate. My current attempt at rewriting the for-loop is as follows:
import itertools
from multiprocessing import Manager, Pool

# Class and function definitions
class A(object):
    def __init__(self,val):
        self.val = val

    # representative getValue function
    def getValue(self, est):
        return est[self] + self.val

def worker(state, in_est, out_est):
    out_est[state] = state.getValue(in_est)

def worker_star(a_b_c):
    """ Allow multiple arguments for a pool
        Taken from http://stackoverflow.com/a/5443941/3865495
    """
    return worker(*a_b_c)

# Initialize test environment
manager = Manager()
estimates = manager.dict()
all_states = []
for i in range(5):
    a = A(i)
    all_states.append(a)
    estimates[a] = 0

pool = Pool(processes=2)
prev_est = estimates
curr_est = estimates

pool.map(worker_star, itertools.izip(all_states, itertools.repeat(prev_est), itertools.repeat(curr_est)))
The issue I'm currently running into is that the elements added to the all_states array are not the same as those added to the manager.dict(). I keep getting key errors when trying to access elements of the dictionary using elements of the array. While debugging, I found that none of the elements are the same.
print map(id, estimates.keys())
>>> [19558864, 19558928, 19558992, 19559056, 19559120]
print map(id, all_states)
>>> [19416144, 19416208, 19416272, 19416336, 19416400]
This is happening because the objects you're putting into the estimates DictProxy aren't actually the same objects as those that live in the regular dict. The manager.dict() call returns a DictProxy, which is proxying access to a dict that actually lives in a completely separate manager process. When you insert things into it, they're really being copied and sent to a remote process, which means they're going to have a different identity.
To work around this, you can define your own __eq__ and __hash__ functions on A, as described in this question:
class A(object):
    def __init__(self,val):
        self.val = val

    # representative getValue function
    def getValue(self, est):
        return est[self] + self.val

    def __hash__(self):
        return hash(self.__key())

    def __key(self):
        return (self.val,)

    def __eq__(x, y):
        return x.__key() == y.__key()
This means the key lookups for items in estimates will just use the value of the val attribute to establish identity and equality, rather than the id assigned by Python.
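A quick (hypothetical) check that the redefined equality behaves as intended; two distinct objects with the same val now interchangeably index a dict, which is exactly what the manager's round-trip copies need:

a1 = A(3)
a2 = A(3)                     # a different object carrying the same val
print(a1 == a2)               # True: equality is now based on val
print(hash(a1) == hash(a2))   # True: both hash to the same bucket
d = {a1: 'x'}
print(d[a2])                  # 'x': a copy from another process matches the same way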

Python and threading: Why does "large" list entity get lost when accessing from other method?

A large entity (list) is created in one method (foo) and bound to self.result.
The attempt to access this entity in a second method (transmit) fails starting at a certain size (something between 150,000 and 155,000 characters in the list). Printing (print self.result) from inside transmit leaves me with None.
I guess this is important: self.foo is directly called in a separate thread.
Please help. How do I get such a "large" entity from a separate thread back into the main thread without this limitation?
...
def apply(self):
    self.get_data()
    self.start_foo_thread()

def start_foo_thread(self):
    self.foo_thread = threading.Thread(target=self.foo)
    self.foo_thread.daemon = True
    self.progressbar.start()
    self.foo_thread.start()
    self.master.after(20, self.check_foo_thread)

def check_foo_thread(self):
    if self.foo_thread.is_alive():
        self.master.after(20, self.check_foo_thread)
    else:
        self.progressbar.stop()

def foo(self):
    s = self.stringinput
    n = self.numberinput
    list = multiply_str_into_list(s, n)
    self.result = list_to_text(list)
    print self.result # output is not None

def transmit(self):
    print self.result # output is None for more than about 155,000 characters in the list
    return self.result

def multiply_str_into_list(string, n): # takes a string and multiplies it by n and writes into a list
    n_string = []
    for i in range(0,n):
        n_string.append(string)
    return n_string

def list_to_text(list): # takes a list as input and joins it into a str with each list item on a new line
    a = '\n'.join(list)
    return a
You don't really provide enough information to reproduce the problem, let alone debug it, but my guess is that at some point, you are doing something like this:
self.result = self.result.append(x)
Since .append() modifies the list in place, and returns None, this will clobber the reference to the list. It needn't be .append() either -- all of the methods that mutate lists return None.
As to why it is happening only at a certain size, perhaps you have some code like the above that is triggered only at a certain size, or else it is a red herring.
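For example (hypothetical, since the question doesn't show the actual failing line), the failure mode looks like this:

result = ['a', 'b']
result = result.append('c') # append() mutates in place and returns None
print(result)               # None -- the reference to the list is clobbered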
