Suppose I have a list as follows:
lst = [0,10,20,30,40,50,60,70]
I want the elements of lst from index 5 to index 2 in cyclic order.
lst[5:2] yields []
I want lst[5:2] to give [50, 60, 70, 0, 10]. Is there a simple library function to do this?
Simply split the slice in two when the second index is smaller than the first:
lst = [0,10,20,30,40,50,60,70]
def circslice(l, a, b):
    if b >= a:
        return l[a:b]
    else:
        return l[a:] + l[:b]
circslice(lst, 5, 2)
output: [50, 60, 70, 0, 10]
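An alternative sketch (my own, not from the answer above; `circslice_iter` is a hypothetical name) that avoids the explicit branch by chaining the two halves and slicing the combined iterator:

```python
from itertools import chain, islice

def circslice_iter(seq, a, b):
    # number of elements in the cyclic span from a to b
    length = (b - a) % len(seq)
    # chain the tail and the head, then take just `length` elements
    return list(islice(chain(seq[a:], seq[:a]), length))

lst = [0, 10, 20, 30, 40, 50, 60, 70]
print(circslice_iter(lst, 5, 2))  # [50, 60, 70, 0, 10]
print(circslice_iter(lst, 2, 5))  # [20, 30, 40]
```

Like l[a:b], this returns an empty list when a == b.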
Using a deque as suggested in comments:
from collections import deque
d = deque(lst)
a,b = 5,2
d.rotate(-a)
list(d)[:len(lst)-a+b]
N.B. I find this not very practical, as it requires making a copy of the list to create the deque, and another copy to slice it.
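The second copy can be avoided (a sketch of my own, not from the comments) by slicing the deque lazily with itertools.islice instead of converting it back to a list first:

```python
from collections import deque
from itertools import islice

lst = [0, 10, 20, 30, 40, 50, 60, 70]
a, b = 5, 2

d = deque(lst)   # one copy to build the deque
d.rotate(-a)     # rotates in place, no copy
# islice walks the deque directly instead of materialising list(d) first
result = list(islice(d, len(lst) - a + b))
print(result)    # [50, 60, 70, 0, 10]
```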
For something that allows you to still use the native slicing syntax and that maintains static typing compatibility, you can use a light wrapper class around your sequence:
from typing import Generic, Protocol, TypeVar

S = TypeVar('S', bound="ConcatSequence")

class CircularView(Generic[S]):
    def __init__(self, seq: S) -> None:
        self.seq = seq

    def __getitem__(self, s: slice) -> S:
        if s.start <= s.stop:
            return self.seq[s]
        else:
            wrap = len(self.seq) % s.step if s.step else 0
            return self.seq[s.start::s.step] + self.seq[wrap:s.stop:s.step]
lst = [0, 10, 20, 30, 40, 50, 60, 70]
print(CircularView(lst)[2:5]) # [20, 30, 40]
print(CircularView(lst)[5:2]) # [50, 60, 70, 0, 10]
print(CircularView(lst)[5:2:2]) # [50, 70, 0]
print(CircularView(lst)[5:3:2]) # [50, 70, 0, 20]
print(CircularView(lst)[4:3:3]) # [40, 70, 20]
with the optional protocol for static typing:
class ConcatSequence(Protocol):
    """
    A sequence that implements concatenation via '__add__'.
    This protocol is required instead of using
    'collections.abc.Sequence' since not all sequence types
    implement '__add__' (for example, 'range').
    """
    def __add__(self, other): ...

    def __getitem__(self, item): ...

    def __len__(self): ...
This method passes type checking with mypy.
You could use a function like this:
def circular_indexing(list_, start_index, end_index) -> list:
    return [*list_[start_index:len(list_)], *list_[0:end_index]]
For example:
list1 = [0, 1, 2, 3]
def circular_indexing(list_, start_index, end_index) -> list:
    return [*list_[start_index:len(list_)], *list_[0:end_index]]
print(circular_indexing(list1, 2, 1))
Output: [2, 3, 0]
There are two fast/easy solutions to this problem.
The first, and more complicated, method would be to override the default implementation of the list.__getitem__ method, which has been referenced in other places on Stack Overflow.
This would allow you to reference the slicing as you would normally, i.e. list[5:3], and it would, in theory, behave as you define. This would be a "local expansion" of the default library.
Conversely, you could implement your own function that iterates over your list in a circular manner, meeting your own criteria.
A runnable sketch of that idea:
def foo(lst, left_idx, right_idx):
    if right_idx < left_idx:
        # wrap around when the right bound comes before the left one
        return lst[left_idx:] + lst[:right_idx]
    else:
        # iterate normally
        return lst[left_idx:right_idx]
I am trying to make my custom class iterable by defining an iterator based on Vijay Shankar's answer here:
import numpy as np
import itertools

class MyClass():
    id = itertools.count()

    def __init__(self, location=None):
        self.id = next(MyClass.id)
        self.location = np.random.uniform(0, 1, size=(1, 2)).tolist()

    def __iter__(self):
        for _ in self.__dict__.values():
            yield _

def create():
    objects = []
    objects.append(MyClass())
    counter = 1
    while counter != 20:
        new_object = MyClass()
        objects.append(new_object)
        counter = counter + 1
    return objects
objects = create()
objects = [[item for subsublist in sublist for item in subsublist] for sublist in objects]
However, I still get this error:
objects = [[item for subsublist in sublist for item in subsublist] for sublist in objects]
TypeError: 'MyClass' object is not iterable
How can I fix this problem?
Edit:
Currently, this is what the iterator returns:
>>> print([x for x in create()[0]])
[20, [[0.2552026126490259, 0.48489389169530417]]]
How should one revise it so that it returns like below?
>>> print([x for x in create()[0]])
[20, [0.2552026126490259, 0.48489389169530417]]
Your code has one too many iterations:
[[item for subsublist in sublist for item in subsublist] for sublist in objects]
I count 3 fors, meaning 3 iterations. The first iteration goes over the list from create(), yielding the MyClass() objects. The second iterates the attributes of each MyClass(). The third attempts to iterate over location/id/whatever other attributes the class has. This isn't safe because the attribute id (an int) is not iterable.
List[MyClass] -> MyClass -> Properties of MyClass (id/location) -> ERROR
Your iterator is working. Here's an iteration over just a single MyClass():
print([x for x in create()[0]])
>>> [20, [[0.2552026126490259, 0.48489389169530417]]]
If you want to expand a list of your class instances (instead of just one, as I did above):
my_classes = create()
objects = [[attribute for attribute in my_class] for my_class in my_classes]
print(objects)
>>>[[0, [[0.7226935825759357, 0.18522688980137658]]], [1, [[0.1660964810272717, 0.016810136422152677]]], [2, [[0.1611089351209548, 0.3935547119768953]]], [3, [[0.4589556901947873, 0.18405198063215056]]], [4, [[0.811343515881961, 0.6123114388786854]]], [5, [[0.38830918188777996, 0.23119360704055836]]], [6, [[0.3269834811013743, 0.3608326475799025]]], [7, [[0.9971686351479419, 0.7054058805215702]]], [8, [[0.11316919241038192, 0.07453424664431929]]], [9, [[0.5548059787590179, 0.062422711183232615]]], [10, [[0.38567389514423267, 0.659106105987059]]], [11, [[0.973277039327461, 0.2821071201116454]]], [12, [[0.16566758369419543, 0.3010363002131601]]], [13, [[0.923317671409532, 0.30016022638587536]]], [14, [[0.9757923181511164, 0.5888806462517852]]], [15, [[0.5582498753119571, 0.27190786180188264]]], [16, [[0.28120075553258217, 0.6873211952682786]]], [17, [[0.7016575026994472, 0.5820325771264436]]], [18, [[0.5815482608888624, 0.22729004063915448]]], [19, [[0.2009082164070768, 0.11317171355184519]]]]
Additionally, you may as well use yield from here, since you're yielding another iterable.
class MyClass():
    id = itertools.count()

    def __init__(self, location=None):
        self.id = next(MyClass.id)
        self.location = np.random.uniform(0, 1, size=(1, 2)).tolist()

    def __iter__(self):
        yield from self.__dict__.values()
EDIT:
Per your question about location being a nested list instead of a flat list: just throw away the extra dimension when you assign to self.location.
print(np.random.uniform(0, 1, size=(1, 2)).tolist())
>>> [[0.3649653171602294, 0.8447097505387996]]
print(np.random.uniform(0, 1, size=(1, 2)).tolist()[0])
>>> [0.247024738276844, 0.9303441776787809]
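Putting the pieces together, a sketch of the revised class with both fixes applied (the `values` spelling and the flattened `location`); the example output will vary since `location` is random:

```python
import itertools
import numpy as np

class MyClass:
    id = itertools.count()

    def __init__(self, location=None):
        self.id = next(MyClass.id)
        # [0] drops the outer dimension of the (1, 2) array
        self.location = np.random.uniform(0, 1, size=(1, 2)).tolist()[0]

    def __iter__(self):
        # yields self.id, then self.location
        yield from self.__dict__.values()

obj = MyClass()
print(list(obj))  # e.g. [0, [0.2552026126490259, 0.48489389169530417]]
```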
Is there a way in Python to specify a default offset for a list?
Like:
a = [0, 1, 2, 3, 4, 5, 6]
a.offset = 2
So that whenever an index is used for access or modification, the offset is added to it first:
a[0] == 2
a[4] == 6
There's no built-in way to achieve this. However, you can create a custom class extending list to get this behaviour. When you do my_list[n], the __getitem__() function is triggered internally. You can override this function to return the value at the index plus the offset.
Similarly, list contains other magic functions which you can override to further modify the behaviour of your custom class. For example, __setitem__() is triggered when you assign a value to the list, and __delitem__() is triggered when deleting an item.
Here's sample code for an OffsetList class which takes an additional offset argument when creating the list, and performs index-based operations on the index+offset value.
class OffsetList(list):
    def __init__(self, offset, *args, **kwargs):
        super(OffsetList, self).__init__(*args, **kwargs)
        self.offset = offset

    def _get_offset_index(self, key):
        if isinstance(key, slice):
            key = slice(
                None if key.start is None else key.start + self.offset,
                None if key.stop is None else key.stop + self.offset,
                key.step
            )
        elif isinstance(key, int):
            key += self.offset
        return key

    def __getitem__(self, key):
        key = self._get_offset_index(key)
        return super(OffsetList, self).__getitem__(key)

    def __setitem__(self, key, value):
        key = self._get_offset_index(key)
        return super(OffsetList, self).__setitem__(key, value)

    def __delitem__(self, key):
        key = self._get_offset_index(key)
        return super(OffsetList, self).__delitem__(key)
Sample Run:
# With offset as `0`, behaves as normal list
>>> offset_list = OffsetList(0, [10,20,30,40,50,60])
>>> offset_list[0]
10
# With offset as `1`, returns index+1
>>> offset_list = OffsetList(1, [10,20,30,40,50,60])
>>> offset_list[0]
20
# With offset as `2`, returns index+2
>>> offset_list = OffsetList(2, [10,20,30,40,50,60])
>>> offset_list[0]
30
# Slicing support, with `start` as start+offset and `end` as end+offset
>>> offset_list[1:]
[40, 50, 60]
# Assigning new value, based on index+offset
>>> offset_list[0] = 123
>>> offset_list
[10, 20, 123, 40, 50, 60]
# Deleting value based on index+offset
>>> del offset_list[0]
>>> offset_list
[10, 20, 40, 50, 60]
Similarly, you can modify the behaviour of other magic functions like __len__(), __iter__(), __repr__(), __str__(), etc. as per your needs.
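As a sketch of that idea (a separate minimal class with a hypothetical name, not an extension of the code above), here are __len__ and __iter__ overridden so that only the elements at or past the offset are visible:

```python
class OffsetView(list):
    def __init__(self, offset, *args):
        super().__init__(*args)
        self.offset = offset

    def __len__(self):
        # report only the elements past the offset
        return max(0, super().__len__() - self.offset)

    def __iter__(self):
        # iterate starting at the offset
        for i in range(self.offset, super().__len__()):
            yield super().__getitem__(i)

v = OffsetView(2, [10, 20, 30, 40])
print(len(v))   # 2
print(list(v))  # [30, 40]
```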
There is no such feature in Python -- or in any other language that I know of. Your suggested syntax is reasonable, assuming that you could get the feature approved. However, it has several drawbacks.
Until and unless this feature became common usage, you would confuse anyone trying to read such code. Zero-based and one-based indexing are the "rule"; arbitrary indexing is a violation of long-learned assumptions.
You would seriously crimp Python's right-end indexing: the semantics aren't clear. If someone writes a[-1] to access the last element, should they get that element (this is a language-defined idiom), the original a[1] element (per your definition), a "reflective" a[-3], or index out of bounds trying to move two elements to the right?
Note that Python does give you the capability to define your own functionality: any time you don't like the given data types, you can make your own. You're not allowed to alter the built-in types, but you can do what you like by inheriting from list and writing your own __getitem__ and other methods.
If you're just reading data from the list, you could probably work with a sliced copy of the original:
a = [0, 1, 2, 3, 4, 5, 6]
a = a[2:]
a[0] == 2 # True
a[4] == 6 # True
Keep in mind that this makes a copy of the list using the same variable name so you are losing the original content (indexes 0 and 1). You could keep it in a separate variable if you do need it though:
a = [0, 1, 2, 3, 4, 5, 6]
a0,a = a,a[2:]
a[0] == 2 # True
a[4] == 6 # True
a0[0] == 0 # True
a0[4] == 4 # True
If you really need a view on the original array with read and write capabilities, then I would suggest using a numpy array:
import numpy as np
a = np.array([0, 1, 2, 3, 4, 5, 6])
b = a[2:].view()
b[0] == 2 # True
b[4] == 6 # True
b[1] = 99
print(a) # [ 0 1 2 99 4 5 6]
a[3] == 99 # True
If you want to implement something similar to numpy yourself, you could create a class that represents a "view" on a list with an internal slice property (start, stop, step):
class ListView:
    def __init__(self, aList, start=None, stop=None, step=1):
        self.data = aList
        self.slice = slice(start, stop, step)

    @property
    def indices(self):
        return range(len(self.data))[self.slice]

    def offset(self, index=None):
        if not isinstance(index, slice):
            return self.indices[index]
        first = self.indices[index][0]
        last = self.indices[index][-1]
        step = (index.step or 1) * (self.slice.step or 1)
        return slice(first, last + 1 - 2 * (step < 0), step)

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, index):
        return self.data[self.offset(index)]

    def __repr__(self):
        return self[:].__repr__()

    def __iter__(self):
        return self[:].__iter__()

    def __setitem__(self, index, value):
        self.data[self.offset(index)] = value

    def __delitem__(self, index):
        del self.data[self.offset(index)]
usage:
a = list(range(1,21))
v = ListView(a,3,-2,2)
len(v) # 8
print(a)
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
print(v)
# [4, 6, 8, 10, 12, 14, 16, 18]
v[2] += 80
print(a)
# [1, 2, 3, 4, 5, 6, 7, 88, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
v.slice = slice(-4,None,-3)
print(v)
# [17, 14, 11, 88, 5, 2]
The following code snippet shows how to initialize a Python array from various container classes (tuple, list, dictionary, set, etc.):
import array as arr
ar_iterator = arr.array('h', range(100))
ar_tuple = arr.array('h', (0, 1, 2,))
ar_list = arr.array('h', [0, 1, 2,])
ar_dict = arr.array('h', {0: None, 1: None, 2: None}.keys())
ar_set = arr.array('h', set(range(100)))
ar_fset = arr.array('h', frozenset(range(100)))
The array initialized from range(100) is particularly nice because an iterator does not need to store a hundred elements. It can simply store the current value and a transition function describing how to calculate the next value from the current value (add one to the current value every-time __next__ is called).
However, what if the initial values of an array do not follow a simple pattern, such as counting upwards 0, 1, 2, 3, 4, ..., 99? An iterator might not be practical. It makes no sense to create a list, copy the list to the array, and then delete the list: you would essentially create the array twice and copy it unnecessarily. Is there some way to construct an array directly, by passing in the initial values?
From the python docs (https://docs.python.org/3/library/array.html):
class array.array(typecode[, initializer])
A new array whose items are restricted by typecode, and initialized from the optional initializer value, which must be a list, a bytes-like object, or iterable over elements of the appropriate type.
So it would appear that you are constrained to passing in an initial python container.
Assuming that the initial elements can be derived logically, you could pass a generator as the initialiser. Generators yield their elements as they are iterated over, similar to range.
>>> import array, random
>>> def g():
...     for _ in range(10):
...         yield random.randint(0, 100)
...
>>> arr = array.array('h', g())
>>> arr
array('h', [47, 6, 91, 0, 76, 20, 77, 75, 46, 7])
For simple cases, a generator expression can be used:
>>> arr = array.array('h', (random.randint(0, 100) for _ in range(10)))
>>> arr
array('h', [72, 30, 40, 58, 77, 74, 25, 6, 71, 58])
Is it possible to do the below with list comprehension? Trying to store the maximum value that has been seen at any given point through the loop.
def test(input):
    a = input[0]
    b = []
    for i in input:
        a = max(i, a)
        b.append(a)
    return b
print test([-5,6,19,4,5,20,1,30])
# returns [-5, 6, 19, 19, 19, 20, 20, 30]
You can use itertools.accumulate with the max builtin in Python 3:
from itertools import accumulate
lst = [-5,6,19,4,5,20,1,30]
r = list(accumulate(lst, max)) #[i for i in accumulate(lst, max)]
print(r)
# [-5, 6, 19, 19, 19, 20, 20, 30]
What you present here is a typical form of what is known in functional programming as scan.
A way to do this with a list comprehension that is inefficient is:
[max(input[:i]) for i in range(1, len(input) + 1)]
But this will run in O(n²).
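On Python 3.8+, an assignment expression (the walrus operator) gives an O(n) comprehension; a sketch of my own, not part of the original answer:

```python
lst = [-5, 6, 19, 4, 5, 20, 1, 30]
m = lst[0]  # seed the running maximum with the first element
running_max = [m := max(m, x) for x in lst]
print(running_max)  # [-5, 6, 19, 19, 19, 20, 20, 30]
```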
You can do this with a list comprehension given you use a function with side effects, like the following:
def update_and_store(f, initial=None):
    cache = [initial]
    def g(x):
        cache[0] = f(cache[0], x)
        return cache[0]
    return g
You can then use:
h = update_and_store(max,a[0])
[h(x) for x in a]
Or you can use a dictionary's setdefault(), like:
def update_and_store(f):
    c = {}
    def g(x):
        return c.setdefault(0, f(c.pop(0, x), x))
    return g
and call it with:
h = update_and_store(max)
[h(x) for x in a]
like @AChampion says.
But functions with side-effects are rather unpythonic and not declarative.
But you are better off using a scanl/accumulate approach like the one offered by itertools:
from itertools import accumulate
accumulate(input,max)
If using NumPy is permitted, you can use numpy.maximum.accumulate:
import numpy as np
np.maximum.accumulate([-5,6,19,4,5,20,1,30])
# array([-5, 6, 19, 19, 19, 20, 20, 30])
Take, for example, the Python built-in pow() function.
xs = [1,2,3,4,5,6,7,8]
from functools import partial
list(map(partial(pow,2),xs))
>>> [2, 4, 8, 16, 32, 64, 128, 256]
but how would I raise the xs to the power of 2?
to get [1, 4, 9, 16, 25, 36, 49, 64]?
list(map(partial(pow,y=2),xs))
TypeError: pow() takes no keyword arguments
I know list comprehensions would be easier.
No
According to the documentation, partial cannot do this (emphasis my own):
partial.args
The leftmost positional arguments that will be prepended to the positional arguments
You could always just "fix" pow to have keyword args:
_pow = pow
pow = lambda x, y: _pow(x, y)
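With that wrapper in place, the keyword call from the question works; a sketch (note that it shadows the built-in pow, which the original snippet already does):

```python
from functools import partial

_pow = pow
pow = lambda x, y: _pow(x, y)  # wrapper that accepts the keyword y

xs = [1, 2, 3, 4, 5, 6, 7, 8]
print(list(map(partial(pow, y=2), xs)))  # [1, 4, 9, 16, 25, 36, 49, 64]
```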
I think I'd just use this simple one-liner:
import itertools
print list(itertools.imap(pow, [1, 2, 3], itertools.repeat(2)))
Update:
I also came up with a solution that is more amusing than useful. It's beautiful syntactic sugar, profiting from the fact that the ... literal means Ellipsis in Python 3. It's a modified version of partial, allowing you to omit some positional arguments between the leftmost and rightmost ones. The only drawback is that you can no longer pass Ellipsis as an argument.
import itertools

def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        newkeywords = keywords.copy()
        newkeywords.update(fkeywords)
        return func(*(newfunc.leftmost_args + fargs + newfunc.rightmost_args), **newkeywords)
    newfunc.func = func
    args = iter(args)
    newfunc.leftmost_args = tuple(itertools.takewhile(lambda v: v != Ellipsis, args))
    newfunc.rightmost_args = tuple(args)
    newfunc.keywords = keywords
    return newfunc
>>> print partial(pow, ..., 2, 3)(5) # (5^2)%3
1
>>> print partial(pow, 2, ..., 3)(5) # (2^5)%3
2
>>> print partial(pow, 2, 3, ...)(5) # (2^3)%5
3
>>> print partial(pow, 2, 3)(5) # (2^3)%5
3
So the solution for the original question would be, with this version of partial: list(map(partial(pow, ..., 2), xs))
Why not just create a quick lambda function that reorders the args, and partial that:
partial(lambda p, x: pow(x, p), 2)
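For example (a sketch), binding the exponent first and leaving the base free:

```python
from functools import partial

xs = [1, 2, 3, 4, 5, 6, 7, 8]
# the lambda swaps the argument order so partial can bind the exponent p
square = partial(lambda p, x: pow(x, p), 2)
print(list(map(square, xs)))  # [1, 4, 9, 16, 25, 36, 49, 64]
```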
You could create a helper function for this:
from functools import wraps

def foo(a, b, c, d, e):
    print('foo(a={}, b={}, c={}, d={}, e={})'.format(a, b, c, d, e))

def partial_at(func, index, value):
    @wraps(func)
    def result(*rest, **kwargs):
        args = []
        args.extend(rest[:index])
        args.append(value)
        args.extend(rest[index:])
        return func(*args, **kwargs)
    return result

if __name__ == '__main__':
    bar = partial_at(foo, 2, 'C')
    bar('A', 'B', 'D', 'E')
    # Prints: foo(a=A, b=B, c=C, d=D, e=E)
Disclaimer: I haven't tested this with keyword arguments, so it might blow up because of them somehow. Also I'm not sure if this is what @wraps should be used for, but it seemed right-ish.
You could use a closure:
xs = [1, 2, 3, 4, 5, 6, 7, 8]

def closure(method, param):
    def t(x):
        return method(x, param)
    return t

f = closure(pow, 2)
f(10)  # 100
f = closure(pow, 3)
f(10)  # 1000
You can do this with lambda, which is more flexible than functools.partial():
pow_two = lambda base: pow(base, 2)
print(pow_two(3)) # 9
More generally:
def bind_skip_first(func, *args, **kwargs):
    return lambda first: func(first, *args, **kwargs)
pow_two = bind_skip_first(pow, 2)
print(pow_two(3)) # 9
One down-side of lambda is that some libraries are not able to serialize it.
One way of doing it would be:
def testfunc1(xs):
    from functools import partial
    def mypow(x, y): return x ** y
    return list(map(partial(mypow, y=2), xs))
but this involves re-defining the pow function.
If the use of partial is not 'needed', then a simple lambda would do the trick:
def testfunc2(xs):
    return list(map(lambda x: pow(x, 2), xs))
And a specific way to map the pow of 2 would be:
def testfunc5(xs):
    from operator import mul
    return list(map(mul, xs, xs))
But none of these directly address the problem of partial application in relation to keyword arguments.
Even though this question was already answered, you can get the results you're looking for with a recipe taken from itertools.repeat:
from itertools import repeat
xs = list(range(1, 9)) # [1, 2, 3, 4, 5, 6, 7, 8]
xs_pow_2 = list(map(pow, xs, repeat(2))) # [1, 4, 9, 16, 25, 36, 49, 64]
Hopefully this helps someone.
Yes, you can do it, provided the function takes keyword arguments. You just need to know the name.
In the case of pow() (provided you are using Python 3.8 or newer) you need exp instead of y.
Try to do:
from functools import partial

xs = [1, 2, 3, 4, 5, 6, 7, 8]
print(list(map(partial(pow, exp=2), xs)))
As already said that's a limitation of functools.partial if the function you want to partial doesn't accept keyword arguments.
If you don't mind using an external library 1 you could use iteration_utilities.partial which has a partial that supports placeholders:
>>> from iteration_utilities import partial
>>> square = partial(pow, partial._, 2) # the partial._ attribute represents a placeholder
>>> list(map(square, xs))
[1, 4, 9, 16, 25, 36, 49, 64]
1 Disclaimer: I'm the author of the iteration_utilities library (installation instructions can be found in the documentation in case you're interested).
The very versatile funcy includes an rpartial function that exactly addresses this problem.
xs = [1,2,3,4,5,6,7,8]
from funcy import rpartial
list(map(rpartial(pow, 2), xs))
# [1, 4, 9, 16, 25, 36, 49, 64]
It's just a lambda under the hood:
def rpartial(func, *args):
    """Partially applies last arguments."""
    return lambda *a: func(*(a + args))
If you can't use lambda functions, you can also write a simple wrapper function that reorders the arguments.
def _pow(y, x):
    return pow(x, y)
and then call
list(map(partial(_pow,2),xs))
>>> [1, 4, 9, 16, 25, 36, 49, 64]
Yes
if you create your own partial class:
class MyPartial:
    def __init__(self, func, *args):
        self._func = func
        self._args = args

    def __call__(self, *args):
        return self._func(*args, *self._args)  # swap ordering
xs = [1,2,3,4,5,6,7,8]
list(map(MyPartial(pow,2),xs))
>>> [1, 4, 9, 16, 25, 36, 49, 64]