I have a list: mylist = [0, 0, 0, 0, 0]
I only want to replace selected elements, say the first, second, and fourth, with a common number, A = 100.
One way to do this:
mylist[:2] = [A]*2
mylist[3] = A
mylist
[100, 100, 0, 100, 0]
I am looking for a one-liner, or an easier method to do this. A more general and flexible answer is preferable.
Especially since you're replacing a sizable chunk of the list, I'd do this immutably:
mylist = [100 if i in (0, 1, 3) else e for i, e in enumerate(mylist)]
It's intentional in Python that making a new list is a one-liner, while mutating a list requires an explicit loop. Usually, if you don't know which one you want, you want the new list. (In some cases it's slower or more complicated, or you've got some other code that has a reference to the same list and needs to see it mutated, or whatever, which is why that's "usually" rather than "always".)
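To make the contrast concrete, here is a minimal sketch of both styles on the question's data (names are illustrative):

```python
mylist = [0, 0, 0, 0, 0]

# New list: rebuild in a single expression (the original is untouched).
rebuilt = [100 if i in (0, 1, 3) else e for i, e in enumerate(mylist)]

# In-place: an explicit loop that mutates the list it is given.
mutated = [0, 0, 0, 0, 0]
for i in (0, 1, 3):
    mutated[i] = 100
```

Both end up as [100, 100, 0, 100, 0]; only the loop changes its list in place.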
If you want to do this more than once, I'd wrap it up in a function, as Volatility suggests:
def elements_replaced(lst, new_element, indices):
    return [new_element if i in indices else e for i, e in enumerate(lst)]
I personally would probably make it a generator, so it yields values lazily instead of returning a list, even if I'm never going to need that, just because I'm stupid that way. But if you actually do need it:
myiter = (100 if i in (0, 1, 3) else e for i, e in enumerate(mylist))
Or:
def elements_replaced(lst, new_element, indices):
    for i, e in enumerate(lst):
        if i in indices:
            yield new_element
        else:
            yield e
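A quick usage sketch, with the generator restated so the snippet is self-contained; the caller decides when (or whether) to materialize a list:

```python
def elements_replaced(lst, new_element, indices):
    # Yields each element lazily, substituting at the given indices.
    for i, e in enumerate(lst):
        yield new_element if i in indices else e

result = list(elements_replaced([0, 0, 0, 0, 0], 100, {0, 1, 3}))
# result == [100, 100, 0, 100, 0]
```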
def replace_element(lst, new_element, indices):
    for i in indices:
        lst[i] = new_element
    return lst
It's definitely a more general solution, not a one-liner though. For example, in your case, you would call:
mylist = replace_element(mylist, 100, [0, 1, 3])
Numpy supports this if you're not opposed to using an np.ndarray:
>>> a = np.zeros(5)
>>> a[[0,1,3]] = 100
>>> a
array([ 100., 100., 0., 100., 0.])
Is this what you're looking for? Make a list of the indexes you want to change, and then loop through that list to change the values.
els_to_replace = [0, 1, 3]
mylist = [0, 0, 0, 0, 0]
for index in els_to_replace:
    mylist[index] = 100
mylist
Out[9]: [100, 100, 0, 100, 0]
I like a list comprehension:
[100 if index in [0, 1, 3] else x for index, x in enumerate(mylist)]
Not a huge fan of this one, but you could try this (although I think all of the above are much more concise and easy to read):
In [22]: from operator import setitem
In [23]: mylist = [0, 0, 0, 0, 0]
In [24]: indices_to_replace = [0, 1, 3]
In [25]: _ = map(lambda x: setitem(mylist, x, 100), indices_to_replace)
In [26]: mylist
Out[26]: [100, 100, 0, 100, 0]
Aside from the questionable readability and the need for an import, @abarnert pointed out a few additional issues: map still creates an unnecessary list (discarded via the _, but created nonetheless), and it won't work in Python 3, where map returns an iterator. You can use the six module to simulate Python 3's map behavior from Python 2.x, and in combination with collections.deque (again as suggested by @abarnert) achieve the same output without creating the additional list in memory: a deque with a maximum length of 0 discards everything it receives from the map iterator (note that with six, map is simulated using itertools.imap).
Again, there is absolutely no need to ever use this - every solution above/below is better :)
In [1]: from collections import deque
In [2]: from six.moves import map
In [3]: from operator import setitem
In [4]: mylist = [0, 0, 0, 0, 0]
In [5]: indices_to_replace = [0, 1, 3]
In [6]: deque(map(lambda x: setitem(mylist, x, 100), indices_to_replace), maxlen=0)
Out[6]: deque([], maxlen=0)
In [7]: mylist
Out[7]: [100, 100, 0, 100, 0]
Assume I have an array like [2,3,4], I am looking for a way in NumPy (or Tensorflow) to convert it to [0,0,1,1,1,2,2,2,2] to apply tf.math.segment_sum() on a tensor that has a size of 2+3+4.
No elegant idea comes to my mind, only loops and list comprehension.
Would something like this work for you?
import numpy
arr = numpy.array([2, 3, 4])
numpy.repeat(numpy.arange(arr.size), arr)
# array([0, 0, 1, 1, 1, 2, 2, 2, 2])
You don't need to use numpy. You can use nothing but list comprehensions:
>>> foo = [2,3,4]
>>> sum([[i]*foo[i] for i in range(len(foo))], [])
[0, 0, 1, 1, 1, 2, 2, 2, 2]
It works like this:
You can create expanded lists by multiplying a single-element list by an integer, so [0] * 2 == [0, 0]. So for each index in the list, we expand with [i]*foo[i]. In other words:
>>> [[i]*foo[i] for i in range(len(foo))]
[[0, 0], [1, 1, 1], [2, 2, 2, 2]]
Then we use sum to reduce the lists into a single list:
>>> sum([[i]*foo[i] for i in range(len(foo))], [])
[0, 0, 1, 1, 1, 2, 2, 2, 2]
Because we are "summing" lists, not integers, we pass [] to sum to make an empty list the starting value of the sum.
(Note that this likely will be slower than numpy, though I have not personally compared it to something like @Patol75's answer.)
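As an aside, the same flattening can be done with itertools.chain.from_iterable, which avoids the repeated list concatenation that sum performs; this is a sketch, not a benchmark:

```python
from itertools import chain

foo = [2, 3, 4]
# Build each [i] * count sublist, then flatten them lazily in one pass.
flat = list(chain.from_iterable([i] * n for i, n in enumerate(foo)))
# flat == [0, 0, 1, 1, 1, 2, 2, 2, 2]
```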
I really like the answer from @Patol75 since it's neat. However, there is no pure tensorflow solution yet, so I provide one which may be kind of complex. Just for reference and fun!
BTW, I didn't see a tf.repeat API in tf master. Please check this PR, which adds tf.repeat support equivalent to numpy.repeat.
import tensorflow as tf
repeats = tf.constant([2,3,4])
values = tf.range(tf.size(repeats)) # [0,1,2]
max_repeats = tf.reduce_max(repeats) # max repeat is 4
tiled = tf.tile(tf.reshape(values, [-1,1]), [1,max_repeats]) # [[0,0,0,0],[1,1,1,1],[2,2,2,2]]
mask = tf.sequence_mask(repeats, max_repeats) # [[1,1,0,0],[1,1,1,0],[1,1,1,1]]
res = tf.boolean_mask(tiled, mask) # [0,0,1,1,1,2,2,2,2]
Patol75's answer uses Numpy but Gort the Robot's answer is actually faster (on your example list at least).
I'll keep this answer up as another solution, but it's slower than both.
Given that a = [2,3,4] this could be done using a loop like so:
b = []
for i in range(len(a)):
    for j in range(a[i]):
        b.append(range(len(a))[i])
Which, as a list comprehension one-liner, is this diabolical thing:
b = [range(len(a))[i] for i in range(len(a)) for j in range(a[i])]
Both end up with b = [0,0,1,1,1,2,2,2,2].
Given data as
data = [ [0, 1], [2,3] ]
I want to index all first elements in the lists inside the list of lists. i.e. I need to index 0 and 2.
I have tried
print data[:][0]
but it outputs the complete first list, i.e.
[0,1]
Even
print data[0][:]
produces the same result.
My question is specifically how to accomplish what I have mentioned. And more generally, how is python handling double/nested lists?
Using list comprehension:
>>> data = [[0, 1], [2,3]]
>>> [lst[0] for lst in data]
[0, 2]
>>> [first for first, second in data]
[0, 2]
Using map:
>>> map(lambda lst: lst[0], data)
[0, 2]
Using map with operator.itemgetter:
>>> import operator
>>> map(operator.itemgetter(0), data)
[0, 2]
Using zip:
>>> zip(*data)[0]
(0, 2)
With this sort of thing, I generally recommend numpy:
>>> data = np.array([ [0, 1], [2,3] ])
>>> data[:,0]
array([0, 2])
As far as how python is handling it in your case:
data[:][0]
Makes a copy of the entire list and then takes the first element (which is the first sublist).
data[0][:]
takes the first sublist and then copies it.
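A short demonstration of the difference, plus the comprehension that actually extracts the first elements:

```python
data = [[0, 1], [2, 3]]

copied_then_indexed = data[:][0]           # copy the outer list, then take its first sublist
indexed_then_copied = data[0][:]           # take the first sublist, then copy it
first_elements = [row[0] for row in data]  # first element of each sublist
```

The first two both give [0, 1]; only the comprehension gives [0, 2].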
The list indexing or nesting in general (be it dict, list or any other iterable) works Left to Right. Thus,
data[:][0]
would work out as
(data[:]) [0] == ([[0, 1], [2,3]]) [0]
which ultimately gives you
[0, 1]
As for possible workarounds or proper methods, falsetru & mgilson have done a good job in that regard.
try this:
print [x for x, y in data[:]]
numpy has a function to set the specified elements to be some value
a=numpy.zeros(10)
numpy.put(a, [2, 3], 1)
However, this method directly changes 'a' and returns None on success.
Is there any way to keep 'a' intact and return the new array instead?
(I don't have to use numpy, and least dependency will be more preferred)
Without a function call, and in my opinion somewhat more readable, you can do this:
>>> a = [0, 0, 0, 0, 0]
>>> b = a[:]
>>> b[1:3] = [2, 3]
>>> a
[0, 0, 0, 0, 0]
>>> b
[0, 2, 3, 0, 0]
Note that a[:] can be slow for large lists, because, as Makato says, it creates a complete copy of the list. This is also the reason why numpy ships with the "in situ" (in place) function put(). If you can, avoid copying the list altogether.
To get a sense of the performance, you can think of a[:] as [i for i in a].
I'm not familiar with numpy, so in case I misunderstood and you want to "insert x at positions [a, b, c]", the other way around, instead, you could do this:
>>> def put(my_list, x, positions):
... return [x if n in positions else i for n, i in enumerate(my_list)]
...
>>> put(a, 1, [1, 2])
[0, 1, 1, 0, 0]
or, alternatively,
>>> b = a[:]
>>> b[1:3] = [1] * 2
>>> b
[0, 1, 1, 0, 0]
Which will be slightly slower.
None that I know of. You can use this (not so efficient, but nice):
f = lambda a: lambda b: lambda v: [(v if i in b else a[i]) for i in range(len(a))]
f ([0,0,0,0,0]) ([2,3]) (1)
If I wanted something more efficient, I would choose this:
def f(a, b, v):
    c = a[:]
    for i in b: c[i] = v
    return c
You could initialize this way:
a = [0]*10
And then just make a copy and modify it.
It's not difficult to implement:
def put(orig, where, what):
    return list(map((lambda i: what if i in where else orig[i]), range(len(orig))))
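A quick check of the behavior, with the function restated as a list comprehension so it also runs on Python 3 (where map returns an iterator):

```python
def put(orig, where, what):
    # Rebuild the list, substituting `what` at each index in `where`.
    return [what if i in where else orig[i] for i in range(len(orig))]

result = put([0, 0, 0, 0, 0], [2, 3], 1)
# result == [0, 0, 1, 1, 0]
```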
If I have a list:
to_modify = [5,4,3,2,1,0]
And then declare two other lists:
indexes = [0,1,3,5]
replacements = [0,0,0,0]
How can I take to_modify's elements as index to indexes, then set corresponding elements in to_modify to replacements, i.e. after running, indexes should be [0,0,3,0,1,0].
Apparently, I can do this through a for loop:
for ind in to_modify:
    indexes[to_modify[ind]] = replacements[ind]
But is there other way to do this?
Could I use operator.itemgetter somehow?
The biggest problem with your code is that it's unreadable. Python code rule number one, if it's not readable, no one's gonna look at it for long enough to get any useful information out of it. Always use descriptive variable names. Almost didn't catch the bug in your code, let's see it again with good names, slow-motion replay style:
to_modify = [5,4,3,2,1,0]
indexes = [0,1,3,5]
replacements = [0,0,0,0]
for index in indexes:
    to_modify[indexes[index]] = replacements[index]
    # to_modify[indexes[index]]
    # indexes[index]
    # Yo dawg, I heard you liked indexes, so I put an index inside your indexes
    # so you can go out of bounds while you go out of bounds.
As is obvious when you use descriptive variable names, you're indexing the list of indexes with values from itself, which doesn't make sense in this case.
Also when iterating through 2 lists in parallel I like to use the zip function (or izip if you're worried about memory consumption, but I'm not one of those iteration purists). So try this instead.
for (index, replacement) in zip(indexes, replacements):
    to_modify[index] = replacement
If your problem is only working with lists of numbers then I'd say that @steabert has the answer you were looking for with that numpy stuff. However you can't use sequences or other variable-sized data types as elements of numpy arrays, so if your variable to_modify has anything like that in it, you're probably best off doing it with a for loop.
numpy has arrays that allow you to use other lists/arrays as indices:
import numpy
S = numpy.array(s)
S[a] = m
Why not just:
map(s.__setitem__, a, m)
You can use operator.setitem.
from operator import setitem
a = [5, 4, 3, 2, 1, 0]
ell = [0, 1, 3, 5]
m = [0, 0, 0, 0]
for b, c in zip(ell, m):
    setitem(a, b, c)
>>> a
[0, 0, 3, 0, 1, 0]
Is it any more readable or efficient than your solution? I am not sure!
A little slower, but readable I think:
>>> s, l, m
([5, 4, 3, 2, 1, 0], [0, 1, 3, 5], [0, 0, 0, 0])
>>> d = dict(zip(l, m))
>>> d  # a dict is better than using two lists, I think
{0: 0, 1: 0, 3: 0, 5: 0}
>>> [d.get(i, j) for i, j in enumerate(s)]
[0, 0, 3, 0, 1, 0]
for index in a:
This will cause index to take on the values of the elements of a, so using them as indices is not what you want. In Python, we iterate over a container by actually iterating over it.
"But wait", you say, "For each of those elements of a, I need to work with the corresponding element of m. How am I supposed to do that without indices?"
Simple. We transform a and m into a list of pairs (element from a, element from m), and iterate over the pairs. Which is easy to do - just use the built-in library function zip, as follows:
for a_element, m_element in zip(a, m):
s[a_element] = m_element
To make it work the way you were trying to do it, you would have to get a list of indices to iterate over. This is doable: we can use range(len(a)) for example. But don't do that! That's not how we do things in Python. Actually directly iterating over what you want to iterate over is a beautiful, mind-liberating idea.
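To see the two styles side by side on the question's data (the index-based version is shown only for contrast):

```python
s = [5, 4, 3, 2, 1, 0]
a = [0, 1, 3, 5]
m = [0, 0, 0, 0]

# Pythonic: iterate over the paired values directly.
for a_element, m_element in zip(a, m):
    s[a_element] = m_element

# Index-based alternative: same result, but it obscures what is paired with what.
s2 = [5, 4, 3, 2, 1, 0]
for i in range(len(a)):
    s2[a[i]] = m[i]
```

Both leave the target list as [0, 0, 3, 0, 1, 0].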
what about operator.itemgetter
Not really relevant here. The purpose of operator.itemgetter is to turn the act of indexing into something, into a function-like thing (what we call "a callable"), so that it can be used as a callback (for example, a 'key' for sorting or min/max operations). If we used it here, we'd have to re-call it every time through the loop to create a new itemgetter, just so that we could immediately use it once and throw it away. In context, that's just busy-work.
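For contrast, here is the kind of job itemgetter is actually suited to, serving as a reusable callable key (a minimal illustration, unrelated to the replacement task):

```python
from operator import itemgetter

pairs = [(2, 'b'), (0, 'c'), (1, 'a')]
# itemgetter(1) is a callable equivalent to lambda x: x[1];
# created once, it keys the sort on each tuple's second element.
by_second = sorted(pairs, key=itemgetter(1))
# by_second == [(1, 'a'), (2, 'b'), (0, 'c')]
```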
You can solve it using dictionary
to_modify = [5,4,3,2,1,0]
indexes = [0,1,3,5]
replacements = [0,0,0,0]
dic = {}
for i in range(len(indexes)):
    dic[indexes[i]] = replacements[i]
print(dic)
for i in indexes:
    to_modify[i] = dic[i]
print(to_modify)
The output will be
{0: 0, 1: 0, 3: 0, 5: 0}
[0, 0, 3, 0, 1, 0]
elif menu.lower() == "edit":
    print("Your games are: " + str(games))
    remove = input("Which one do you want to edit: ")
    add = input("What do you want to change it to: ")
    for i in range(len(games)):
        if str(games[i]) == str(remove):
            games[i] = str(add)
            break
Why not use it like this? It replaces the value directly where it was found, and you can still sort or reverse the list afterwards if needed.
Hi there on a Saturday Fun Night,
I am getting around in python and I am quite enjoying it.
Assume I have a python array:
x = [1, 0, 0, 1, 3]
What is the fastest way to count all non-zero elements in the list (answer: 3)? Also, I would like to do it without for loops if possible, in the most succinct and terse manner possible, say something conceptually like
[counter += 1 for y in x if y > 0]
Now - my real problem is that I have a multi dimensional array and what I really want to avoid is doing the following:
for p in range(BINS):
    for q in range(BINS):
        for r in range(BINS):
            if (mat3D[p][q][r] > 0): some_feature_set_count += 1
From the little python I have seen, my gut feeling is that there is a really clean syntax (and efficient) way how to do this.
Ideas, anyone?
For the single-dimensional case:
sum(1 for i in x if i)
For the multi-dimensional case, you can either nest:
sum(sum(1 for i in row if i) for row in rows)
or do it all within the one construct:
sum(1 for row in rows
    for i in row if i)
If you are using numpy, as suggested by the fact that you're using multi-dimensional arrays in Python, the following is similar to @Marcelo's answer, but a tad cleaner:
>>> a = numpy.array([[1,2,3,0],[0,4,2,0]])
>>> sum(1 for i in a.flat if i)
5
If you go with numpy and your 3D array is a numpy array, this one-liner will do the trick:
numpy.where(your_array_name != 0, 1, 0).sum()
example:
In [23]: import numpy
In [24]: a = numpy.array([ [[0, 1, 2], [0, 0, 7], [9, 2, 0]], [[0, 0, 0], [1, 4, 6], [9, 0, 3]], [[1, 3, 2], [3, 4, 0], [1, 7, 9]] ])
In [25]: numpy.where(a != 0, 1, 0).sum()
Out[25]: 18
While perhaps not concise, this is my choice of how to solve this which works for any dimension:
def sum(li):
    # Note: this shadows the built-in sum; the recursive call below refers to this function.
    s = 0
    for l in li:
        if isinstance(l, list):
            s += sum(l)
        elif l:
            s += 1
    return s
def zeros(n):
    return len(list(filter(lambda x: type(x) == int and x != 0, n))) + sum(map(zeros, filter(lambda x: type(x) == list, n)))
Can't really say if it is the fastest way but it is recursive and works with N dimensional lists.
zeros([1,2,3,4,0,[1,2,3,0,[1,2,3,0,0,0]]]) => 10
I would have slightly changed Marcelo's answer to the following:
len([x for x in my_list if x != 0])
The sum() above tricked me for a second, as I thought he was getting the total value instead of the count, until I saw the 1 hovering at the start. I'd rather be explicit with len().
Using chain to reduce array lookups:
from itertools import chain
BINS = [[[2,2,2],[0,0,0],[1,2,0]],
[[1,0,0],[0,0,2],[1,2,0]],
[[0,0,0],[1,1,1],[1,3,0]]]
sum(1 for c in chain.from_iterable(chain.from_iterable(BINS)) if c > 0)
14
I haven't done any performance checks on this. But it doesn't use any significant memory.
Note that it is using a generator expression, not a list comprehension. Adding the [list comprehension] syntax will create an array to be summed instead of feeding one number at a time to sum.
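The distinction is easy to verify: both forms give the same count on the BINS data above, but the bracketed version materializes a full list of 1s before summing. A small sketch:

```python
from itertools import chain

BINS = [[[2, 2, 2], [0, 0, 0], [1, 2, 0]],
        [[1, 0, 0], [0, 0, 2], [1, 2, 0]],
        [[0, 0, 0], [1, 1, 1], [1, 3, 0]]]

# Generator expression: feeds sum one 1 at a time.
lazy_count = sum(1 for c in chain.from_iterable(chain.from_iterable(BINS)) if c > 0)

# List comprehension: builds the whole [1, 1, ...] list first, then sums it.
eager_count = sum([1 for c in chain.from_iterable(chain.from_iterable(BINS)) if c > 0])
```

Both counts come out to 14; only the intermediate memory use differs.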