Say I have the list
a = [[1, 2, 3],
     [4, 5, 6]]
and I have an index that I want to use to access an element of this list.
index = [1,2]
I want to use something like
a[*index] = 9
to mean a[index[0]][index[1]] = 9, but this doesn't work and neither does a[**index] = 9. Is there a similar way to do this without having a chain of index calls?
I would like a method to do this without using any libraries that must be imported.
First of all, a[c, d, e] is equivalent to a[(c, d, e)], which is equivalent to a.__getitem__((c, d, e)). Note the double parentheses. Any __getitem__ implementation that wants to play nicely with the Python data model always expects exactly one (explicit) argument.
That's why unpacking values from index inside the [] does not make much sense. a[*index] gives you a SyntaxError (on older Pythons; since Python 3.11 the starred form is accepted in subscripts, but it still just builds a tuple, which a plain list rejects), and a.__getitem__(*index) gives you a TypeError (because you are providing too many arguments).
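For a quick illustration of that one-argument contract with a plain list (output from a recent Python 3, abbreviated):
>>> a = [[1, 2, 3], [4, 5, 6]]
>>> a.__getitem__(1)        # a single integer works
[4, 5, 6]
>>> a.__getitem__((1, 2))   # a tuple does not, for a plain list
Traceback (most recent call last):
  ...
TypeError: list indices must be integers or slices, not tuple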
Standard Python lists expect integer arguments to __getitem__, but numpy supports indexing with tuples (a numpy array still only takes exactly one argument for __getitem__, but it's allowed to be a tuple).
Demo:
>>> import numpy as np
>>> a = np.array([[1,2,3], [4,5,6]])
>>> a[(1,2)]
6
Of course, you can omit the parentheses, because
>>> a[1,2]
6
is exactly equivalent.
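And since the original question was about assignment, the same tuple indexing also works for writing to a NumPy array; continuing the demo above:
>>> a[1, 2] = 9
>>> a
array([[1, 2, 3],
       [4, 5, 9]])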
You can use reduce(), which is part of the standard library:
>>> a = [[1, 2, 3],
... [4, 5, 6]]
>>> index = [1, 2]
>>> import functools, operator
>>> functools.reduce(operator.getitem, index, a)
6
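If you need the assignment form from the question (a[index[0]][index[1]] = 9), the same idea works: reduce over all but the last index to reach the innermost list, then assign at the final index. A small sketch:
>>> functools.reduce(operator.getitem, index[:-1], a)[index[-1]] = 9
>>> a
[[1, 2, 3], [4, 5, 9]]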
Or, you can write your own class that supports that kind of multi-dimensional indexing:
import functools, operator

class Matrix:
    def __init__(self, lst):
        self._lst = lst

    def __getitem__(self, index):
        return functools.reduce(operator.getitem, index, self._lst)

a = Matrix([[1, 2, 3],
            [4, 5, 6]])

index = [1, 2]
print(a[index])  # -> 6
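If you also want the assignment from the original question to work (a[index] = 9), the class could grow a __setitem__ along the same lines. This is only a sketch extending the class above, assuming index always has at least one element:
import functools, operator

class Matrix:
    def __init__(self, lst):
        self._lst = lst

    def __getitem__(self, index):
        return functools.reduce(operator.getitem, index, self._lst)

    def __setitem__(self, index, value):
        # Walk to the innermost list, then assign at the final index.
        inner = functools.reduce(operator.getitem, index[:-1], self._lst)
        inner[index[-1]] = value

a = Matrix([[1, 2, 3],
            [4, 5, 6]])
a[[1, 2]] = 9
print(a[[1, 2]])  # -> 9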
Otherwise, this is not possible using just lists and without loops or other functions.
Related
given a nested list like
>>> m = [[3, 1], [2, 7]]
I can get an element like this
>>> m[1][0]
2
How can I get the same value if the index is given in a list, i.e. as [1, 0]?
I am looking for something like what the Q programming language offers with its dot operator, as in the code below:
q) m: (3 1; 2 7)
q) m[1][0]
2
q) m . 1 0
2
As a quick-n-dirty solution, you can abuse functools.reduce like this:
from functools import reduce
def get_recursive(lst, idx_list):
    return reduce(list.__getitem__, [lst, *idx_list])
>>> y = [[3, 1], [2, 7]]
>>> get_recursive(y, [0, 1])
1
>>> get_recursive(y, [1, 0])
2
There are quite a few corner cases to handle (plus you'd have to be sure the path exists or else handle any errors that arise) but this should get you started.
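For example, a hypothetical safer variant (get_recursive_safe is my own name, not from the answer above) could catch a broken path and return a default instead:
from functools import reduce

def get_recursive_safe(lst, idx_list, default=None):
    # Return default if an index is out of range or the path hits a non-list.
    try:
        return reduce(list.__getitem__, [lst, *idx_list])
    except (IndexError, TypeError):
        return default

>>> get_recursive_safe([[3, 1], [2, 7]], [1, 5]) is None
True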
Just define a recursive function that indexes the passed list with the first index, then calls itself with that element and the remaining indices, until the index list is empty:
def get(m, l):
    if not l:
        return m
    return get(m[l[0]], l[1:])

print(get([[3, 1], [2, 7]], [1, 0]))
prints: 2
You can convert to NumPy array first:
import numpy as np
m = [[3, 1], [2, 7]]
np.array(m)[1,0]
Output:
2
This solution requires imports, but only from the standard library. Very similar to @cs95's solution, but slightly cleaner in my view.
from functools import reduce
from operator import getitem
nested_list = [[[['test'], ['test1']], ['test2']], ['test3', 'test4']]
indices = [0, 1, 0]
assert reduce(getitem, indices, nested_list) == 'test2'
This question already has answers here:
Why do these list operations (methods: clear / extend / reverse / append / sort / remove) return None, rather than the resulting list?
Working my way into Python (2.7.1), but failing to make sense (for hours) of this:
>>> a = [1, 2]
>>> b = [3, 4]
>>>
>>> a.extend([b[0]])
>>> a
[1, 2, 3]
>>>
>>> a.extend([b[1]])
>>> a
[1, 2, 3, 4]
>>>
>>> m = [a.extend([b[i]]) for i in range(len(b))] # list of lists
>>> m
[None, None]
The first two extends work as expected, but when compacting the same in a list comprehension it fails.
What am I doing wrong?
extend modifies the list in-place.
>>> [a + b[0:i] for i in range(len(b)+1)]
[[1, 2], [1, 2, 3], [1, 2, 3, 4]]
list.extend() extends a list in place. Python standard library methods that alter objects in-place always return None (the default); your list comprehension executed a.extend() twice and thus the resulting list consists of two None return values.
Your a.extend() calls otherwise worked just fine; if you were to print a it would show:
[1, 2, 3, 4, 3, 4]
You don't see the None return value in the Python interpreter, because the interpreter never echoes None results. You could test for that explicitly:
>>> a = []
>>> a.extend(['foo', 'bar']) is None
True
>>> a
['foo', 'bar']
The return value of extend is None.
The extend function extends the list in place with the values you've provided and returns None. That's why you have two None values in your list. I propose you rewrite your comprehension like so:
a = [1, 2]
b = [3, 4]
m = [a + [v] for v in b] # m is [[1,2,3],[1,2,4]]
For Python lists, methods that change the list work in place and return None. This applies to extend as well as to append, remove, insert, ...
In reply to an older question, I sketched a subclass of list that behaves the way you expected list to work.
Why does [].append() not work in python?
This is intended as educational. For pros and cons.. look at the comments to my answer.
I like this for the ability to chain methods and work in a fluent style; then something like
li = FluentList()
li.extend([1,4,6]).remove(4).append(7).insert(1,10).reverse().sort(key=lambda x:x%2)
would be possible.
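For reference, a minimal sketch of such a subclass (my own condensation of the idea, not the exact code from that answer): each mutating method simply delegates to list and then returns self so the calls can be chained.
class FluentList(list):
    # list subclass whose mutating methods return self so calls can be chained

    def append(self, value):
        super().append(value)
        return self

    def extend(self, iterable):
        super().extend(iterable)
        return self

    def insert(self, index, value):
        super().insert(index, value)
        return self

    def remove(self, value):
        super().remove(value)
        return self

    def reverse(self):
        super().reverse()
        return self

    def sort(self, **kwargs):
        super().sort(**kwargs)
        return self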
a.extend() returns None.
You probably want one of these:
>>> m = a + b
>>> m
[1, 2, 3, 4]
>>> a.extend(b)
>>> a
[1, 2, 3, 4]
Aside from that, if you want to iterate over all elements of a list, you just can do it like that:
m = [somefunction(element) for element in somelist]
or
for element in somelist:
    do_some_thing(element)
In most cases there is no need to go over the indices.
And if you want to add just one element to a list, you should use somelist.append(element) instead of somelist.extend([element]).
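To make the difference concrete, a quick sketch:
>>> somelist = [1, 2]
>>> somelist.append(3)       # adds the single element 3
>>> somelist.extend([4, 5])  # adds each element of the iterable
>>> somelist
[1, 2, 3, 4, 5]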
In Python what is equivalent to Ruby's Array.each method? Does Python have a nice and short closure/lambda syntax for it?
[1,2,3].each do |x|
puts x
end
Does Python have a nice and short closure/lambda syntax for it?
Yes, but you don't want it in this case.
The closest equivalent to that Ruby code is:
new_values = map(print, [1, 2, 3])
That looks pretty nice when you already have a function lying around, like print (note that in Python 3, map is lazy, so nothing is actually printed until you iterate over new_values). When you just have some arbitrary expression and you want to use it in map, you need to create a function out of it with a def or a lambda, like this:
new_values = map(lambda x: print(x), [1, 2, 3])
That's the ugliness you apparently want to avoid. And Python has a nice way to avoid it: comprehensions:
new_values = [print(x) for x in values]
However, in this case, you're just trying to execute some statement for each value, not accumulate the new values for each value. So, while this will work (you'll get back a list of None values), it's definitely not idiomatic.
In this case, the right thing to do is to write it explicitly—no closures, no functions, no comprehensions, just a loop:
for x in values:
    print(x)
The most idiomatic:
for x in [1, 2, 3]:
    print(x)
You can use numpy for vectorized arithmetic over an array:
>>> import numpy as np
>>> a = np.array([1, 2, 3])
>>> a * 3
array([3, 6, 9])
You can easily define a lambda that can be used over each element of an array:
>>> array_lambda = np.vectorize(lambda x: x * x)
>>> array_lambda([1, 2, 3])
array([1, 4, 9])
But as others have said, if you want to just print each, use a loop.
There are also libraries that wrap objects to expose all the usual functional programming stuff.
PyDash http://pydash.readthedocs.org/en/latest/
underscorepy (Google github underscore.py)
E.g. pydash allows you to do things like this:
>>> from pydash import py_
>>> from __future__ import print_function
>>> x = py_([1,2,3,4]).map(lambda x: x*2).each(print).value()
2
4
6
8
>>> x
[2, 4, 6, 8]
(Just always remember to "trigger" execution and/or to un-wrap the wrapped values with .value() at the end!)
Without the need for an assignment:
list(print(_) for _ in [1, 2, 3])
or just
[print(_) for _ in [1, 2, 3]]
I have an array that matches the parameters of a function:
TmpfieldNames = []
TmpfieldNames.append(Trademark.name)
TmpfieldNames.append(Trademark.id)
return func(Trademark.name, Trademark.id)
func(Trademark.name, Trademark.id) works, but func(TmpfieldNames) doesn't. How can I call the function without explicitly indexing into the array like func(TmpfieldNames[0], TmpfieldNames[1])?
With * you can unpack arguments from a list or tuple and ** unpacks arguments from a dict.
>>> range(3, 6) # normal call with separate arguments
[3, 4, 5]
>>> args = [3, 6]
>>> range(*args) # call with arguments unpacked from a list
[3, 4, 5]
Example from the (Python 2) documentation; in Python 3, range returns a lazy range object, so wrap the call in list() to see the values.
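For completeness, a small sketch of the ** form with a dict of keyword arguments (the function and names here are made up for illustration):
>>> def describe(name, id):
...     return '%s (%s)' % (name, id)
...
>>> kwargs = {'name': 'Acme', 'id': 42}
>>> describe(**kwargs)
'Acme (42)'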
I think what you are looking for is this:
def f(a, b):
    print(a, b)

arr = [1, 2]
f(*arr)
What you are looking for is:
func(*TmpfieldNames)
But this isn't the typical use case for such a feature; I'm assuming you've created it for demonstration.
I am playing with python and am able to get the intersection of two lists:
result = set(a).intersection(b)
Now if d is a list containing a and b and a third element c, is there a built-in function for finding the intersection of all three lists inside d? So for instance,
d = [[1,2,3,4], [2,3,4], [3,4,5,6,7]]
then the result should be
[3,4]
set.intersection(*map(set, d))
For Python 2.4, you can just define an intersection function:
def intersect(*d):
    sets = iter(map(set, d))
    result = sets.next()
    for s in sets:
        result = result.intersection(s)
    return result
For newer versions of Python:
The intersection method takes an arbitrary number of arguments:
result = set(d[0]).intersection(*d[1:])
Alternatively, you can intersect the first set with itself to avoid slicing the list and making a copy:
result = set(d[0]).intersection(*d)
I'm not really sure which would be more efficient, and I have a feeling it depends on the size of d[0] and the size of the list, unless Python has a built-in check for it like
if s1 is s2:
    return s1
in the intersection method.
>>> d = [[1,2,3,4], [2,3,4], [3,4,5,6,7]]
>>> set(d[0]).intersection(*d)
set([3, 4])
>>> set(d[0]).intersection(*d[1:])
set([3, 4])
>>>
You can get the intersection of an arbitrary number sets using set.intersection(set1, set2, set3...). So you just need to convert your lists into sets and then pass them to this method as follows:
d = [[1,2,3,4], [2,3,4], [3,4,5,6,7]]
set.intersection(*[set(x) for x in d])
result:
{3, 4}
@user3917838
Nice and simple, but it needs some casting to make it work and give a list as a result. It should look like:
from functools import reduce  # needed in Python 3

list(reduce(set.intersection, [set(item) for item in d]))
where:
d = [[1,2,3,4], [2,3,4], [3,4,5,6,7]]
And result is:
[3, 4]
At least in Python 3.4
Using functools.reduce:
from functools import reduce  # you won't need this in Python 2
l=[[1, 2, 3, 4], [2, 3, 4], [3, 4, 5, 6, 7]]
reduce(set.intersection, [set(l_) for l_ in l])
I find reduce() to be particularly useful. In fact, the NumPy documentation recommends using reduce() to intersect multiple lists: see the numpy.intersect1d reference.
To answer your question:
import numpy as np
from functools import reduce
# apply intersect1d to (a list of) multiple lists:
reduce(np.intersect1d, [list_1, list_2, ... list_n])
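For instance, a concrete run against the d from the question might look like this (a small sketch):
import numpy as np
from functools import reduce

d = [[1, 2, 3, 4], [2, 3, 4], [3, 4, 5, 6, 7]]
print(reduce(np.intersect1d, d))  # [3 4]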