Given an unordered list of values like
a = [5, 1, 2, 2, 4, 3, 1, 2, 3, 1, 1, 5, 2]
How can I get the frequency of each value that appears in the list, like so?
# `a` has 4 instances of `1`, 4 of `2`, 2 of `3`, 1 of `4`, 2 of `5`
b = [4, 4, 2, 1, 2] # expected output
In Python 2.7 (or newer), you can use collections.Counter:
>>> import collections
>>> a = [5, 1, 2, 2, 4, 3, 1, 2, 3, 1, 1, 5, 2]
>>> counter = collections.Counter(a)
>>> counter
Counter({1: 4, 2: 4, 5: 2, 3: 2, 4: 1})
>>> counter.values()
dict_values([2, 4, 4, 1, 2])
>>> counter.keys()
dict_keys([5, 1, 2, 4, 3])
>>> counter.most_common(3)
[(1, 4), (2, 4), (5, 2)]
>>> dict(counter)
{5: 2, 1: 4, 2: 4, 4: 1, 3: 2}
>>> # Get the counts in order matching the original specification,
>>> # by iterating over keys in sorted order
>>> [counter[x] for x in sorted(counter.keys())]
[4, 4, 2, 1, 2]
If you are using Python 2.6 or older, you can download an implementation here.
If the list is sorted, you can use groupby from the itertools standard library (if it isn't, you can just sort it first, although this takes O(n lg n) time):
from itertools import groupby
a = [5, 1, 2, 2, 4, 3, 1, 2, 3, 1, 1, 5, 2]
[len(list(group)) for key, group in groupby(sorted(a))]
Output:
[4, 4, 2, 1, 2]
Python 2.7+ introduces Dictionary Comprehension. Building the dictionary from the list will get you the count as well as get rid of duplicates.
>>> a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
>>> d = {x:a.count(x) for x in a}
>>> d
{1: 4, 2: 4, 3: 2, 4: 1, 5: 2}
>>> a, b = d.keys(), d.values()
>>> a
[1, 2, 3, 4, 5]
>>> b
[4, 4, 2, 1, 2]
(Python 2 output; on Python 3, keys() and values() return view objects, so wrap them in list() to get lists.)
Count the number of appearances manually by iterating through the list and counting them up, using a collections.defaultdict to track what has been seen so far:
from collections import defaultdict
appearances = defaultdict(int)
for curr in a:
    appearances[curr] += 1
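A quick check of the result, assuming the list a from the question and Python 3.7+ (where dicts display in insertion order); the sorted-key step is my addition, to match the question's expected output:
>>> dict(appearances)
{5: 2, 1: 4, 2: 4, 4: 1, 3: 2}
>>> [appearances[k] for k in sorted(appearances)]
[4, 4, 2, 1, 2]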
In Python 2.7+, you could use collections.Counter to count items:
>>> a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
>>> from collections import Counter
>>> c = Counter(a)
>>> c.values()
[4, 4, 2, 1, 2]
>>> c.keys()
[1, 2, 3, 4, 5]
Counting the frequency of elements is probably best done with a dictionary:
b = {}
for item in a:
    b[item] = b.get(item, 0) + 1
To remove the duplicates, use a set:
a = list(set(a))
You can do this:
import numpy as np
a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
np.unique(a, return_counts=True)
Output:
(array([1, 2, 3, 4, 5]), array([4, 4, 2, 1, 2], dtype=int64))
The first array is values, and the second array is the number of elements with these values.
So if you want to get just the array of counts, use:
np.unique(a, return_counts=True)[1]
Here's another succinct alternative using itertools.groupby, which also handles unordered input (by sorting it first):
from itertools import groupby
items = [5, 1, 1, 2, 2, 1, 1, 2, 2, 3, 4, 3, 5]
results = {value: len(list(freq)) for value, freq in groupby(sorted(items))}
results
format: {value: num_of_occurrences}
{1: 4, 2: 4, 3: 2, 4: 1, 5: 2}
I would simply use scipy.stats.itemfreq in the following manner:
from scipy.stats import itemfreq
a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
freq = itemfreq(a)
a = freq[:,0]
b = freq[:,1]
you may check the documentation here: http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.stats.itemfreq.html
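Note that itemfreq was deprecated and later removed from SciPy; its documented replacement is np.unique with return_counts=True. A minimal sketch of the equivalent (my addition, not part of the answer above; the variable names are mine):
import numpy as np
a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
freq_values, freq_counts = np.unique(a, return_counts=True)
# freq_values -> array([1, 2, 3, 4, 5]), freq_counts -> array([4, 4, 2, 1, 2])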
import numpy as np
import pandas as pd
from collections import Counter

a = ["E","D","C","G","B","A","B","F","D","D","C","A","G","A","C","B","F","C","B"]
counter = Counter(a)
kk = [list(counter.keys()), list(counter.values())]
pd.DataFrame(np.array(kk).T, columns=['Letter', 'Count'])
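If pandas is already in play, a shorter route (my suggestion, not part of the answer above) is Series.value_counts, which returns the counts directly:
import pandas as pd
a = ["E","D","C","G","B","A","B","F","D","D","C","A","G","A","C","B","F","C","B"]
counts = pd.Series(a).value_counts()  # Series indexed by letter, sorted by count
df = counts.rename_axis('Letter').reset_index(name='Count')  # same shape as the DataFrame above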
seta = set(a)
b = [a.count(el) for el in seta]
a = list(seta) #Only if you really want it.
Suppose we have a list:
fruits = ['banana', 'banana', 'apple', 'banana']
We can find out how many of each fruit we have in the list like so:
import numpy as np
(unique, counts) = np.unique(fruits, return_counts=True)
{x:y for x,y in zip(unique, counts)}
Result:
{'banana': 3, 'apple': 1}
This answer is more explicit
a = [1,1,1,1,2,2,2,2,3,3,3,4,4]
d = {}
for item in a:
    if item in d:
        d[item] = d.get(item) + 1
    else:
        d[item] = 1
for k, v in d.items():
    print(str(k) + ':' + str(v))
# output
#1:4
#2:4
#3:3
#4:2
#remove dups
d = set(a)
print(d)
#{1, 2, 3, 4}
For your first question, iterate over the list and use a dictionary to keep track of each element's existence.
For your second question, just use a set.
def frequencyDistribution(data):
    return {i: data.count(i) for i in data}

print frequencyDistribution([1,2,3,4])
...
{1: 1, 2: 1, 3: 1, 4: 1} # originalNumber: count
I am quite late, but this will also work, and will help others:
a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
freq_list = []
a_l = list(set(a))
for x in a_l:
    freq_list.append(a.count(x))
print 'Freq', freq_list
print 'number', a_l
will produce this:
Freq [4, 4, 2, 1, 2]
number [1, 2, 3, 4, 5]
a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
counts = dict.fromkeys(a, 0)
for el in a: counts[el] += 1
print(counts)
# {1: 4, 2: 4, 3: 2, 4: 1, 5: 2}
a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
# 1. Get counts and store in another list
output = []
for i in set(a):
    output.append(a.count(i))
print(output)
# 2. Remove duplicates using set constructor
a = list(set(a))
print(a)
A set does not allow duplicates, so passing a list to the set() constructor gives an iterable of unique objects. list.count() returns an integer count of how many times the given object appears in the list. With that, each unique object is counted and each count value is stored by appending to the empty list output.
The list() constructor converts set(a) back into a list, referred to by the same variable a.
Output
[4, 4, 2, 1, 2]
[1, 2, 3, 4, 5]
Simple solution using a dictionary.
def frequency(l):
    d = {}
    for i in l:
        if i in d.keys():
            d[i] += 1
        else:
            d[i] = 1
    for k, v in d.items():
        if v == max(d.values()):
            return k, d.keys()

print(frequency([10,10,10,10,20,20,20,20,40,40,50,50,30]))
#!/usr/bin/python

def frq(words):
    freq = {}
    for w in words:
        if w in freq:
            freq[w] = freq.get(w) + 1
        else:
            freq[w] = 1
    return freq

fp = open("poem", "r")
text = fp.read()  # renamed from 'list' to avoid shadowing the built-in
fp.close()

words = text.split()
print words

d = frq(words)
print "frequency of input\n: "
print d

fp1 = open("output.txt", "w+")
for k, v in d.items():
    fp1.write(str(k) + ':' + str(v) + "\n")
fp1.close()
from collections import OrderedDict
a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
def get_count(lists):
    dictionary = OrderedDict()
    for val in lists:
        dictionary.setdefault(val, []).append(1)
    return [sum(val) for val in dictionary.values()]
print(get_count(a))
>>> [4, 4, 2, 1, 2]
To remove duplicates and maintain order:
list(dict.fromkeys(get_count(a)))
>>>[4, 2, 1]
I'm using Counter to generate a frequency dict from the words in a text file in one line of code:
import re
from collections import Counter

def _fileIndex(fh):
    '''Create a dict using Counter from the flat list of words
    found on each line of the open file handle fh.'''
    return Counter(
        wrd.lower()
        for wrdList in (re.findall(r'[a-zA-Z]+', lines) for lines in fh)
        for wrd in wrdList)
For the record, a functional answer:
>>> L = [1,1,1,1,2,2,2,2,3,3,4,5,5]
>>> import functools
>>> functools.reduce(lambda acc, e: [v+(i==e) for i, v in enumerate(acc,1)] if e<=len(acc) else acc+[0 for _ in range(e-len(acc)-1)]+[1], L, [])
[4, 4, 2, 1, 2]
It's cleaner if you count zeroes too:
>>> functools.reduce(lambda acc, e: [v+(i==e) for i, v in enumerate(acc)] if e<len(acc) else acc+[0 for _ in range(e-len(acc))]+[1], L, [])
[0, 4, 4, 2, 1, 2]
An explanation:
we start with an empty acc list;
if the next element e of L is lower than the size of acc, we just update this element: v+(i==e) means v+1 if the index i of acc is the current element e, otherwise the previous value v;
if the next element e of L is greater or equals to the size of acc, we have to expand acc to host the new 1.
Unlike the itertools.groupby approach, the elements do not have to be sorted. Note that you'll get weird results if the list contains negative numbers.
Another approach, albeit one that depends on a heavier but powerful library: NLTK.
import nltk
fdist = nltk.FreqDist(a)
fdist.values()
fdist.most_common()
Found another way of doing this, using sets.
#ar is the list of elements
#convert ar to set to get unique elements
sock_set = set(ar)
#create dictionary of frequency of socks
sock_dict = {}
for sock in sock_set:
    sock_dict[sock] = ar.count(sock)
For an unordered list you should use:
[a.count(el) for el in set(a)]
The output is
[4, 4, 2, 1, 2]
Yet another solution, with a different algorithm and without using collections:
def countFreq(A):
    count = [0] * (max(A) + 1)  # one slot per possible value (assumes non-negative ints)
    for x in A:
        count[x] += 1           # increase occurrence count for value x
    return [c for c in count if c]  # return the non-zero counts
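A quick usage check with the question's list (my example; note this approach only works for non-negative integers):
a = [5, 1, 2, 2, 4, 3, 1, 2, 3, 1, 1, 5, 2]
print(countFreq(a))  # [4, 4, 2, 1, 2]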
num=[3,2,3,5,5,3,7,6,4,6,7,2]
print ('\nelements are:\t',num)
count_dict={}
for elements in num:
    count_dict[elements] = num.count(elements)
print ('\nfrequency:\t',count_dict)
You can use the built-in count() method:
l.count(l[i])
d = []
for i in range(len(l)):
    if l[i] not in d:
        d.append(l[i])
        print(l.count(l[i]))
The above code automatically removes duplicates in a list and also prints the frequency of each element in the original list and the list without duplicates.
Two birds with one stone!
This approach can be tried if you don't want to use any library and keep it simple and short!
a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
marked = []
b = [(a.count(i), marked.append(i))[0] for i in a if i not in marked]
print(b)
Output:
[4, 4, 2, 1, 2]
In Python remove() will remove the first occurrence of value in a list.
How to remove all occurrences of a value from a list?
This is what I have in mind:
>>> remove_values_from_list([1, 2, 3, 4, 2, 2, 3], 2)
[1, 3, 4, 3]
Functional approach:
Python 3.x
>>> x = [1,2,3,2,2,2,3,4]
>>> list(filter((2).__ne__, x))
[1, 3, 3, 4]
or
>>> x = [1,2,3,2,2,2,3,4]
>>> list(filter(lambda a: a != 2, x))
[1, 3, 3, 4]
or
>>> [i for i in x if i != 2]
Python 2.x
>>> x = [1,2,3,2,2,2,3,4]
>>> filter(lambda a: a != 2, x)
[1, 3, 3, 4]
You can use a list comprehension:
def remove_values_from_list(the_list, val):
    return [value for value in the_list if value != val]
x = [1, 2, 3, 4, 2, 2, 3]
x = remove_values_from_list(x, 2)
print x
# [1, 3, 4, 3]
You can use slice assignment if the original list must be modified, while still using an efficient list comprehension (or generator expression).
>>> x = [1, 2, 3, 4, 2, 2, 3]
>>> x[:] = (value for value in x if value != 2)
>>> x
[1, 3, 4, 3]
Repeating the solution of the first post in a more abstract way:
>>> x = [1, 2, 3, 4, 2, 2, 3]
>>> while 2 in x: x.remove(2)
>>> x
[1, 3, 4, 3]
See this simple solution:
>>> [i for i in x if i != 2]
This will return a list containing all elements of x that are not 2.
A better solution with a list comprehension:
x = [ i for i in x if i!=2 ]
All of the answers above (apart from Martin Andersson's) create a new list without the desired items, rather than removing the items from the original list.
>>> import random, timeit
>>> a = list(range(5)) * 1000
>>> random.shuffle(a)
>>> b = a
>>> print(b is a)
True
>>> b = [x for x in b if x != 0]
>>> print(b is a)
False
>>> b.count(0)
0
>>> a.count(0)
1000
>>> b = a
>>> b = filter(lambda v: v != 2, a)
>>> print(b is a)
False
This can be important if you have other references to the list hanging around.
To modify the list in place, use a method like this
>>> def removeall_inplace(x, l):
... for _ in xrange(l.count(x)):
... l.remove(x)
...
>>> removeall_inplace(0, b)
>>> b is a
True
>>> a.count(0)
0
As far as speed is concerned, results on my laptop are (all on a 5000 entry list with 1000 entries removed)
List comprehension - ~400us
Filter - ~900us
.remove() loop - 50ms
So the .remove loop is about 100x slower. Hmm, maybe a different approach is needed. The fastest I've found uses the list comprehension, but then replaces the contents of the original list.
>>> def removeall_replace(x, l):
...     t = [y for y in l if y != x]
...     del l[:]
...     l.extend(t)
removeall_replace() - 450us
Numpy approach and timings against a list/array with 1,000,000 elements:
Timings:
In [10]: a.shape
Out[10]: (1000000,)
In [13]: len(lst)
Out[13]: 1000000
In [18]: %timeit a[a != 2]
100 loops, best of 3: 2.94 ms per loop
In [19]: %timeit [x for x in lst if x != 2]
10 loops, best of 3: 79.7 ms per loop
Conclusion: numpy is 27 times faster (on my notebook) compared to list comprehension approach
PS if you want to convert your regular Python list lst to numpy array:
arr = np.array(lst)
Setup:
import numpy as np
a = np.random.randint(0, 1000, 10**6)
In [10]: a.shape
Out[10]: (1000000,)
In [12]: lst = a.tolist()
In [13]: len(lst)
Out[13]: 1000000
Check:
In [14]: a[a != 2].shape
Out[14]: (998949,)
In [15]: len([x for x in lst if x != 2])
Out[15]: 998949
At the cost of readability, I think this version is slightly faster, as it doesn't force the while to re-examine the list, thus doing exactly the same work remove has to do anyway:
x = [1, 2, 3, 4, 2, 2, 3]
def remove_values_from_list(the_list, val):
    for i in range(the_list.count(val)):
        the_list.remove(val)
remove_values_from_list(x, 2)
print(x)
To remove all duplicate occurrences and leave one in the list:
test = [1, 1, 2, 3]
newlist = list(set(test))
print newlist
[1, 2, 3]
Here is the function I've used for Project Euler:
def removeOccurrences(e):
    return list(set(e))
a = [1, 2, 2, 3, 1]
to_remove = 1
a = [i for i in a if i != to_remove]
print(a)
Perhaps not the most pythonic but still the easiest for me haha
for i in range(a.count(' ')):
    a.remove(' ')
Much simpler I believe.
I believe this is probably faster than any other way if you don't care about the list's order. If you do care about the final order, store the indexes from the original and re-sort by that.
category_ids.sort()
ones_last_index = category_ids.count('1')
del category_ids[0:ones_last_index]
Let
>>> x = [1, 2, 3, 4, 2, 2, 3]
The simplest and most efficient solution, as already posted, is
>>> x[:] = [v for v in x if v != 2]
>>> x
[1, 3, 4, 3]
Another possibility which should use less memory but be slower is
>>> for i in range(len(x) - 1, -1, -1):
...     if x[i] == 2:
...         x.pop(i)  # takes time ~ len(x) - i
>>> x
[1, 3, 4, 3]
Timing results for lists of length 1000 and 100000 with 10% matching entries: 0.16 vs 0.25 ms, and 23 vs 123 ms.
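A minimal sketch of how such timings can be reproduced with timeit (the list length and 10% match rate come from the claim above; note both statements also pay for copying the base list each run):
import timeit

setup = "base = ([2] + [1] * 9) * 100   # length 1000, 10% matching"
comprehension = "x = base[:]; x[:] = [v for v in x if v != 2]"
pop_backwards = (
    "x = base[:]\n"
    "for i in range(len(x) - 1, -1, -1):\n"
    "    if x[i] == 2:\n"
    "        x.pop(i)")
print(timeit.timeit(comprehension, setup=setup, number=1000))
print(timeit.timeit(pop_backwards, setup=setup, number=1000))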
If your list contains duplicates of only one element, for example list_a=[0,0,0,0,0,0,1,3,4,6,7], the code below would be helpful:
list_a=[0,0,0,0,0,0,1,3,4,6,7]
def remove_element(element, the_list):
    the_list = list(set(the_list))
    the_list.remove(element)
    return the_list
list_a=remove_element(element=0,the_list=list_a)
print(list_a)
or
a=list(set(i for i in list_a if i!=2))
a.remove(2)
The basic idea is that sets do not allow duplicates, so first I converted the list into a set (which removes the duplicates), then used the .remove() function to remove the first instance of the element (since we now have only one instance per item).
But if you have duplicates of multiple elements, the below methods would help:
List comprehension
list_a=[1, 2, 3, 4, 2, 2, 3]
remove_element=lambda element,the_list:[i for i in the_list if i!=element]
print(remove_element(element=2,the_list=list_a))
Filter
list_a=[1, 2, 3, 4, 2, 2, 3]
a=list(filter(lambda a: a != 2, list_a))
print(a)
While loop
list_a=[1, 2, 3, 4, 2, 2, 3]
def remove_element(element, the_list):
    while element in the_list:
        the_list.remove(element)
    return the_list
print(remove_element(2,list_a))
for loop (same as List comprehension)
list_a=[1, 2, 3, 4, 2, 2, 3]
a=[]
for i in list_a:
    if i != 2:
        a.append(i)
print(a)
Remove all occurrences of a value from a Python list
lists = [6.9,7,8.9,3,5,4.9,1,2.9,7,9,12.9,10.9,11,7]
def remove_values_from_list():
    for list in lists:
        if list != 7:
            print(list)
remove_values_from_list()
Result: 6.9 8.9 3 5 4.9 1 2.9 9 12.9 10.9 11
Alternatively,
lists = [6.9,7,8.9,3,5,4.9,1,2.9,7,9,12.9,10.9,11,7]
def remove_values_from_list(remove):
    for list in lists:
        if list != remove:
            print(list)
remove_values_from_list(7)
Result: 6.9 8.9 3 5 4.9 1 2.9 9 12.9 10.9 11
I just did this for a list. I am just a beginner. A slightly more advanced programmer can surely write a function like this.
for i in range(len(spam)):
    spam.remove('cat')
    if 'cat' not in spam:
        print('All instances of ' + 'cat ' + 'have been removed')
        break
No one has posted an optimal answer for time and space complexity, so I thought I would give it a shot. Here is a solution that removes all occurrences of a specific value without creating a new array and at an efficient time complexity. The drawback is that the elements do not maintain order.
Time complexity: O(n)
Additional space complexity: O(1)
def main():
    test_case([1, 2, 3, 4, 2, 2, 3], 2)  # [1, 3, 3, 4]
    test_case([3, 3, 3], 3)              # []
    test_case([1, 1, 1], 3)              # [1, 1, 1]

def test_case(test_val, remove_val):
    remove_element_in_place(test_val, remove_val)
    print(test_val)

def remove_element_in_place(my_list, remove_value):
    length_my_list = len(my_list)
    swap_idx = length_my_list - 1
    for idx in range(length_my_list - 1, -1, -1):
        if my_list[idx] == remove_value:
            my_list[idx], my_list[swap_idx] = my_list[swap_idx], my_list[idx]
            swap_idx -= 1
    for pop_idx in range(length_my_list - swap_idx - 1):
        my_list.pop()  # O(1) operation

if __name__ == '__main__':
    main()
A lot of the answers are really good. Here is a simple approach if you are a beginner in Python and want to use the remove() method specifically.
rawlist = [8, 1, 8, 5, 8, 2, 8, 9, 8, 4]
ele_remove = 8
for el in rawlist:
    if el == ele_remove:
        rawlist.remove(ele_remove)
It may be slow for very large lists.
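Be aware that removing items from a list while iterating over it makes the iterator skip elements, so consecutive duplicates can survive. A short demonstration (my addition):
rawlist = [8, 8, 1]
for el in rawlist:
    if el == 8:
        rawlist.remove(8)
print(rawlist)  # [8, 1] - the second 8 was skipped and never removed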
If you didn't have built-in filter or didn't want to use extra space and you need a linear solution...
def remove_all(A, v):
    k = 0
    n = len(A)
    for i in range(n):
        if A[i] != v:
            A[k] = A[i]
            k += 1
    del A[k:]  # truncate in place (A = A[:k] would only rebind the local name)
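For example (assuming the del A[k:] truncation above, so the change is visible to the caller):
lst = [1, 2, 3, 4, 2, 2, 3]
remove_all(lst, 2)
print(lst)  # [1, 3, 4, 3]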
hello = ['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd']
# check every item for a match
for item in range(len(hello) - 1):
    if hello[item] == ' ':
        # if there is a match, rebuild the list from the slice before the item plus the slice after it
        hello = hello[:item] + hello[item + 1:]
print hello
['h', 'e', 'l', 'l', 'o', 'w', 'o', 'r', 'l', 'd']
We can also do in-place remove all using either del or pop:
import random
def remove_values_from_list(lst, target):
    if type(lst) != list:
        return lst
    i = 0
    while i < len(lst):
        if lst[i] == target:
            lst.pop(i)  # length decreased by 1 already
        else:
            i += 1
    return lst
remove_values_from_list(None, 2)
remove_values_from_list([], 2)
remove_values_from_list([1, 2, 3, 4, 2, 2, 3], 2)
lst = remove_values_from_list([random.randrange(0, 10) for x in range(1000000)], 2)
print(len(lst))
Now for the efficiency:
In [21]: %timeit -n1 -r1 x = random.randrange(0,10)
1 loop, best of 1: 43.5 us per loop
In [22]: %timeit -n1 -r1 lst = [random.randrange(0, 10) for x in range(1000000)]
1 loop, best of 1: 660 ms per loop
In [23]: %timeit -n1 -r1 lst = remove_values_from_list([random.randrange(0, 10) for x in range(1000000)]
...: , random.randrange(0,10))
1 loop, best of 1: 11.5 s per loop
In [27]: %timeit -n1 -r1 x = random.randrange(0,10); lst = [a for a in [random.randrange(0, 10) for x in
...: range(1000000)] if x != a]
1 loop, best of 1: 710 ms per loop
As we can see, the in-place version remove_values_from_list() does not require any extra memory, but it takes much more time to run:
11 seconds for the in-place removal of values
710 milliseconds for the list comprehension, which allocates a new list in memory
You can convert your list to a numpy.array and then use np.delete, passing it the indices of all occurrences of the element.
import numpy as np
my_list = [1, 2, 3, 4, 5, 6, 7, 3, 4, 5, 6, 7]
element_to_remove = 3
my_array = np.array(my_list)
indices = np.where(my_array == element_to_remove)
my_array = np.delete(my_array, indices)
my_list = my_array.tolist()
print(my_list)
#output
[1, 2, 4, 5, 6, 7, 4, 5, 6, 7]
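With NumPy, a boolean mask reaches the same result more directly (a sketch of an alternative, not part of the answer above):
import numpy as np
my_array = np.array([1, 2, 3, 4, 5, 6, 7, 3, 4, 5, 6, 7])
my_list = my_array[my_array != 3].tolist()  # keep everything that isn't 3
print(my_list)  # [1, 2, 4, 5, 6, 7, 4, 5, 6, 7]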
About the speed! (Note that this is Python 2 code: print is a statement and range() returns a list, which is why del a[:] works on it.)
import time
s_time = time.time()
print 'start'
a = range(100000000)
del a[:]
print 'finished in %0.2f' % (time.time() - s_time)
# start
# finished in 3.25
s_time = time.time()
print 'start'
a = range(100000000)
a = []
print 'finished in %0.2f' % (time.time() - s_time)
# start
# finished in 2.11
p=[2,3,4,4,4]
p.clear()
print(p)
[]
Only with Python 3 (list.clear() was added in Python 3.3).
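On Python 2 (or anything before 3.3), slice deletion is the long-standing equivalent:
p = [2, 3, 4, 4, 4]
del p[:]    # empties the list in place, like p.clear()
print(p)    # []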
What's wrong with:
Motor = ['1', '2', '2']
for i in Motor:
    if i != '2':
        print(i)
print(Motor)
In R, you could split a vector according to the factors of another vector:
> a <- 1:10
[1] 1 2 3 4 5 6 7 8 9 10
> b <- rep(1:2,5)
[1] 1 2 1 2 1 2 1 2 1 2
> split(a,b)
$`1`
[1] 1 3 5 7 9
$`2`
[1] 2 4 6 8 10
Thus, grouping a list (in terms of python) according to the values of another list (according to the order of the factors).
Is there anything handy in python like that, except from the itertools.groupby approach?
From your example, it looks like each element in b contains the 1-indexed group into which the corresponding element of a should be stored. Python lacks the automatic numeric variables that R seems to have, so we'll return a tuple of lists. If you can use zero-indexed groups, and you only need two lists (i.e., where your R use case has the values 1 and 2, in Python they'll be 0 and 1):
>>> a = range(1, 11)
>>> b = [0,1] * 5
>>> split(a, b)
([1, 3, 5, 7, 9], [2, 4, 6, 8, 10])
Then you can use itertools.compress:
import itertools

def split(x, f):
    # group 0 (where f is falsy) first, then group 1 (where f is truthy)
    return list(itertools.compress(x, (not i for i in f))), list(itertools.compress(x, f))
If you need more general input (multiple numbers), something like the following will return an n-tuple:
def split(x, f):
    count = max(f) + 1
    return tuple(list(itertools.compress(x, (el == i for el in f))) for i in xrange(count))
>>> split([1,2,3,4,5,6,7,8,9,10], [0,1,1,0,2,3,4,0,1,2])
([1, 4, 8], [2, 3, 9], [5, 10], [6], [7])
Edit: warning, this is a groupby solution, which is not what the OP asked for, but it may be of use to someone looking for a less specific way to split the R way in Python.
Here's one way with itertools (Python 2: zip() returns an indexable list there).
import itertools
# make your sample data
a = range(1,11)
b = zip(*zip(range(len(a)), itertools.cycle((1,2))))[1]
{k: zip(*g)[1] for k, g in itertools.groupby(sorted(zip(b,a)), lambda x: x[0])}
# {1: (1, 3, 5, 7, 9), 2: (2, 4, 6, 8, 10)}
This gives you a dictionary, which is analogous to the named list that you get from R's split.
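That snippet relies on Python 2 behavior (zip() returning an indexable list). A rough Python 3 equivalent of the same groupby idea (my translation, not the original author's code):
import itertools

a = list(range(1, 11))
b = [1, 2] * 5
pairs = sorted(zip(b, a))  # sort by group key so groupby sees each group contiguously
result = {k: [v for _, v in g]
          for k, g in itertools.groupby(pairs, key=lambda p: p[0])}
# {1: [1, 3, 5, 7, 9], 2: [2, 4, 6, 8, 10]}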
As a long-time R user, I was wondering how to do the same thing. It's a very handy function for tabulating vectors. This is what I came up with:
a = [1,2,3,4,5,6,7,8,9,10]
b = [1,2,1,2,1,2,1,2,1,2]
from collections import defaultdict
def split(x, f):
    res = defaultdict(list)
    for v, k in zip(x, f):
        res[k].append(v)
    return res
>>> split(a, b)
defaultdict(list, {1: [1, 3, 5, 7, 9], 2: [2, 4, 6, 8, 10]})
You could try:
a = [1,2,3,4,5,6,7,8,9,10]
b = [1,2,1,2,1,2,1,2,1,2]
split_1 = [a[k] for k in (i for i,j in enumerate(b) if j == 1)]
split_2 = [a[k] for k in (i for i,j in enumerate(b) if j == 2)]
results in:
In [22]: split_1
Out[22]: [1, 3, 5, 7, 9]
In [24]: split_2
Out[24]: [2, 4, 6, 8, 10]
To generalise this, you can simply iterate over the unique elements in b:
splits = {}
for index in set(b):
    splits[index] = [a[k] for k in (i for i, j in enumerate(b) if j == index)]
Ok so I have two lists:
x = [1, 2, 3, 4]
y = [1, 1, 2, 5, 6]
I want to compare them in such a way that I get the following output:
x = [3, 4]
y = [1, 5, 6]
The basic idea is to go through each list and compare them. If they have an element in common, remove that element - but only one instance of it, not all of them. If they don't have an element in common, leave it. Two identical lists would become x = [], y = [].
Here is my rather hacked-up and pretty lame solution. I'm hoping others can recommend a better and/or more pythonic way of doing this. Three loops seems excessive...
done = False
while not done:
    done = True
    for x in xlist:
        for y in ylist:
            if x == y:
                xlist.remove(x)
                ylist.remove(y)
                done = False
print xlist, ylist
Thanks as always for taking the time to read this question. XOXO
It's possible that the data structure you are looking for is the multiset (or "bag"), and if so, a good way to implement it in Python is to use collections.Counter:
>>> from collections import Counter
>>> x = Counter([1, 2, 3, 4])
>>> y = Counter([1, 1, 2, 5, 6])
>>> x - y
Counter({3: 1, 4: 1})
>>> y - x
Counter({1: 1, 5: 1, 6: 1})
If you want to convert the Counter objects back to lists with multiplicity, you can use the elements method:
>>> list((x - y).elements())
[3, 4]
>>> list((y - x).elements())
[1, 5, 6]
If you don't care about order, use collections.Counter to do it in one line:
>>> Counter(x)-Counter(y)
Counter({3: 1, 4: 1})
>>> Counter(y)-Counter(x)
Counter({1: 1, 5: 1, 6: 1})
If you care about order, you can probably iterate through your lists grabbing elements from the above dictionaries:
def prune(seq, toPrune):
    """Prunes elements from front of seq in O(N) time"""
    remainder = Counter(seq) - Counter(toPrune)
    R = []
    for x in reversed(seq):
        if remainder.get(x):
            remainder[x] -= 1
            R.insert(0, x)
    return R
Demo:
>>> prune(x,y)
[3, 4]
>>> prune(y,x)
[1, 5, 6]
To build on Gareth's answer:
>>> a = Counter([1, 2, 3, 4])
>>> b = Counter([1, 1, 2, 5, 6])
>>> list((a - b).elements())
[3, 4]
>>> list((b - a).elements())
[1, 5, 6]
Benchmark code:
from collections import Counter
from collections import defaultdict
import random
# short lists
#a = [1, 2, 3, 4, 7, 8, 9]
#b = [1, 1, 2, 5, 6, 8, 8, 10]
# long lists
a = []
b = []
for i in range(0, 1000):
    q = random.choice((1, 2, 3, 4))
    if q == 1:
        a.append(i)
    elif q == 2:
        b.append(i)
    elif q == 3:
        a.append(i)
        b.append(i)
    else:
        a.append(i)
        b.append(i)
        b.append(i)
# Modifies the lists in-place! Naughty! And it doesn't actually work, to boot.
def original(xlist, ylist):
    done = False
    while not done:
        done = True
        for x in xlist:
            for y in ylist:
                if x == y:
                    xlist.remove(x)
                    ylist.remove(y)
                    done = False
    return xlist, ylist  # not strictly necessary, see above
def counter(xlist, ylist):
    x = Counter(xlist)
    y = Counter(ylist)
    return ((x - y).elements(), (y - x).elements())

def nasty(xlist, ylist):
    x = sum(([i] * (xlist.count(i) - ylist.count(i)) for i in set(xlist)), [])
    y = sum(([i] * (ylist.count(i) - xlist.count(i)) for i in set(ylist)), [])
    return x, y

def gnibbler(xlist, ylist):
    d = defaultdict(int)
    for i in xlist: d[i] += 1
    for i in ylist: d[i] -= 1
    return [k for k, v in d.items() for i in range(v)], [k for k, v in d.items() for i in range(-v)]
# substitute algorithm to test in the call
for x in range(0, 100000):
    original(list(a), list(b))
Running the Insufficiently Rigorous Benchmarks[tm] (short lists are the original ones, long lists are randomly generated lists approximately 1000 entries long with a mix of matches and repeats, times given in multipliers of the Original algorithm):
            100K iterations, short lists    1K iterations, long lists
Original    1.0                             1.0
Counter     9.3                             0.06
Nasty       2.9                             1.4
Gnibbler    2.4                             0.02
Note 1: The creation of the Counter object seems to overshadow the actual algorithm at small list sizes.
Note 2: Original and gnibbler are the same at list lengths of approximately 35, above which gnibbler (and Counter) are faster.
Just using collections.defaultdict, so this will work on Python 2.5+:
>>> x = [1, 2, 3, 4]
>>> y = [1, 1, 2, 5, 6]
>>> from collections import defaultdict
>>> d = defaultdict(int)
>>> for i in x:
... d[i] += 1
...
>>> for i in y:
... d[i] -= 1
...
>>> [k for k,v in d.items() for i in range(v)]
[3, 4]
>>> [k for k,v in d.items() for i in range(-v)]
[1, 5, 6]
I find this is better than range (or xrange) if the number of repetitions gets large
>>> from itertools import repeat
>>> [k for k,v in d.items() for i in repeat(None, v)]
Quite nasty :P
a = sum(([i]*(x.count(i)-y.count(i)) for i in set(x)),[])
b = sum(([i]*(y.count(i)-x.count(i)) for i in set(y)),[])
x,y = a,b
This is simple if you don't care about the duplicates:
>>> x=[1,2,3,4]
>>> y=[1,1,2,5,6]
>>> list(set(x).difference(set(y)))
[3, 4]
>>> list(set(y).difference(set(x)))
[5, 6]
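To see what "don't care about the duplicates" means here, compare with the Counter approach when an element repeats; the set difference collapses the repeated 1 (a quick demonstration, my addition):
>>> x = [1, 2, 3, 4]
>>> y = [1, 1, 2, 5, 6]
>>> list(set(y).difference(set(x)))
[5, 6]
>>> from collections import Counter
>>> list((Counter(y) - Counter(x)).elements())
[1, 5, 6]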