I'm trying to create a function that takes two lists and selects an element at random from each of them. Is there any way to do this using the random.seed function?
You can use random.choice to pick a random element from a sequence (like a list).
If your two lists are list1 and list2, that would be:
a = random.choice(list1)
b = random.choice(list2)
Are you sure you want to use random.seed? This initializes the random number generator in a consistent way each time, which can be very useful if you want subsequent runs to be identical, but in general it is not what you want. For example, the following function will always return 8, even though it looks like it should randomly choose a number between 0 and 9.
>>> def not_very_random():
...     random.seed(0)
...     return random.choice(range(10))
...
>>> not_very_random()
8
>>> not_very_random()
8
>>> not_very_random()
8
>>> not_very_random()
8
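Putting that together, here is a minimal sketch of the kind of function the question asks for (the name pick_one_each and the example lists are just illustrative). If you want reproducible runs, seed once at program start rather than inside the function:
import random

def pick_one_each(list1, list2):
    # Return one randomly chosen element from each list.
    return random.choice(list1), random.choice(list2)

random.seed(42)  # optional: seed once, only if you need reproducible runs
print(pick_one_each([1, 2, 3], ['a', 'b', 'c']))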
Note: @F.J's solution is much simpler and better.
Use random.randint to pick a pseudo-random index from the list. Then use that index to select the element:
>>> import random as r
>>> r.seed(14) # used random number generator of ... my head ... to get 14
>>> mylist = [1,2,3,4,5]
>>> mylist[r.randint(0, len(mylist) - 1)]
You can easily extend this to work on two lists (see the sketch at the end of this answer).
Why do you want to use random.seed?
Example (using Python 2.7):
>>> import collections as c
>>> c.Counter([mylist[r.randint(0, len(mylist) - 1)] for x in range(200)])
Counter({1: 44, 5: 43, 2: 40, 3: 39, 4: 34})
Is that random enough?
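And, as a rough sketch, extending the same index-based approach to two lists (the variable names below are just placeholders):
import random as r

r.seed(14)  # optional: only if you want reproducible runs
list_a = [1, 2, 3, 4, 5]
list_b = ['a', 'b', 'c', 'd', 'e']

# Pick an independent pseudo-random index into each list.
elem_a = list_a[r.randint(0, len(list_a) - 1)]
elem_b = list_b[r.randint(0, len(list_b) - 1)]
print(elem_a, elem_b)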
I totally redid my previous answer. Here is a class that pairs the list with its own random-number generator (with an optional seed). This is a minor improvement over F.J.'s, because it gives deterministic behavior for testing. Calling choice() on the first list should not affect the second list, and vice versa:
import random

class rlist():
    def __init__(self, lst, rg=None, rseed=None):
        self.lst = lst
        if rg is not None:
            self.rg = rg
        else:
            self.rg = random.Random()
        if rseed is not None:
            self.rg.seed(rseed)

    def choice(self):
        return self.rg.choice(self.lst)

if __name__ == '__main__':
    rl1 = rlist([1,2,3,4,5], rseed=1234)
    rl2 = rlist(['a','b','c','d','e'], rseed=1234)
    print 'First call:'
    print rl1.choice(), rl2.choice()
    print 'Second call:'
    print rl1.choice(), rl2.choice()
I have a list
data=['_','_','A','B','C',1,2,3,4,5]
I need to randomly get one of the integers 1, 2, 3, 4, 5.
The list keeps getting modified, so I can't simply choose from the last 5 members.
Here's what I tried, but it throws an error when it picks one of the non-integer members:
inp2=int(random.choice(data))
You can filter out the non-integer items:
inp2 = random.choice([x for x in data if isinstance(x, int)])
Though it is similar to Neo's answer, you can also try this (wrapping the filter in list() so it also works on Python 3, where filter returns an iterator):
inp2 = random.choice(list(filter(lambda d: isinstance(d, int), data)))
To create a list of the last 5 elements:
>>> from random import choice
>>> data=['_','_','A','B','C',1,2,3,4,5]
>>> l = len(data)
>>> data[(l-5):l]
[1, 2, 3, 4, 5]
>>> k = data[(l-5):l]
>>> choice(k)
5
>>> choice(k)
2
>>>
random.choice([i for i in data[-5:] if isinstance(i, int)])
It is safer to check the types of the elements in data[-5:] with isinstance().
Try this.
import operator
inp2 = random.choice(filter(operator.isNumberType, data))  # Python 2 only: operator.isNumberType was removed in Python 3
For this specific problem, selecting the last 5 elements is also a good solution:
inp2 = random.choice(data[-5:])
I think the best solution would be to create another list holding just the int values you want to pick from.
I don't know your exact assignment, but if, for example, you have a method for adding to your list, just append to both lists there:
def add(self, element):
    self.data.append(element)
    if type(element) == int:
        self.data_int.append(element)
and then just use:
def get_value(self):
    return random.choice(self.data_int)
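A minimal self-contained sketch of that idea (the class name IntTracker and its attributes are made up for illustration, not taken from the question):
import random

class IntTracker:
    # Keep a parallel list of just the int elements so choosing is trivial.
    def __init__(self):
        self.data = []
        self.data_int = []

    def add(self, element):
        self.data.append(element)
        if isinstance(element, int):
            self.data_int.append(element)

    def get_value(self):
        return random.choice(self.data_int)

tracker = IntTracker()
for item in ['_', '_', 'A', 'B', 'C', 1, 2, 3, 4, 5]:
    tracker.add(item)
print(tracker.get_value())  # prints one of 1..5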
In Python, as far as I know, there are at least three or four ways to create and initialize lists of a given size:
Simple loop with append:
my_list = []
for i in range(50):
    my_list.append(0)
Simple loop with +=:
my_list = []
for i in range(50):
    my_list += [0]
List comprehension:
my_list = [0 for i in range(50)]
List and integer multiplication:
my_list = [0] * 50
In these examples I don't think there would be any performance difference given that the lists have only 50 elements, but what if I need a list of a million elements? Would the use of xrange make any improvement? Which is the preferred/fastest way to create and initialize lists in Python?
Let's run some time tests* with timeit.timeit:
>>> from timeit import timeit
>>>
>>> # Test 1
>>> test = """
... my_list = []
... for i in xrange(50):
...     my_list.append(0)
... """
>>> timeit(test)
22.384258893239178
>>>
>>> # Test 2
>>> test = """
... my_list = []
... for i in xrange(50):
...     my_list += [0]
... """
>>> timeit(test)
34.494779364416445
>>>
>>> # Test 3
>>> test = "my_list = [0 for i in xrange(50)]"
>>> timeit(test)
9.490926919482774
>>>
>>> # Test 4
>>> test = "my_list = [0] * 50"
>>> timeit(test)
1.5340533503559755
>>>
As you can see above, the last method is the fastest by far.
However, it should only be used with immutable items (such as integers). This is because it will create a list with references to the same item.
Below is a demonstration:
>>> lst = [[]] * 3
>>> lst
[[], [], []]
>>> # The ids of the items in `lst` are the same
>>> id(lst[0])
28734408
>>> id(lst[1])
28734408
>>> id(lst[2])
28734408
>>>
This behavior is very often undesirable and can lead to bugs in the code.
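For instance, here is the kind of bug it causes: mutating what looks like a single row actually changes all of them, because every slot refers to the same inner list.
>>> lst = [[]] * 3
>>> lst[0].append(1)  # appends to the one shared inner list
>>> lst
[[1], [1], [1]]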
If you have mutable items (such as lists), then you should use the still very fast list comprehension:
>>> lst = [[] for _ in xrange(3)]
>>> lst
[[], [], []]
>>> # The ids of the items in `lst` are different
>>> id(lst[0])
28796688
>>> id(lst[1])
28796648
>>> id(lst[2])
28736168
>>>
*Note: In all of the tests, I replaced range with xrange. Since the latter produces its values lazily instead of building a whole list first, it is generally faster here (these timings are from Python 2; in Python 3, range already behaves this way).
If you want to see how the timings depend on the list length n:
Pure Python
I tested list lengths up to n = 10000 and the behavior stays the same: the integer multiplication method is the fastest by a clear margin.
Numpy
For lists with more than ~300 elements you should consider numpy.
Benchmark code:
import time

def timeit(f):
    def timed(*args, **kwargs):
        start = time.clock()
        for _ in range(100):
            f(*args, **kwargs)
        end = time.clock()
        return end - start
    return timed

@timeit
def append_loop(n):
    """Simple loop with append"""
    my_list = []
    for i in xrange(n):
        my_list.append(0)

@timeit
def add_loop(n):
    """Simple loop with +="""
    my_list = []
    for i in xrange(n):
        my_list += [0]

@timeit
def list_comprehension(n):
    """List comprehension"""
    my_list = [0 for i in xrange(n)]

@timeit
def integer_multiplication(n):
    """List and integer multiplication"""
    my_list = [0] * n

import numpy as np

@timeit
def numpy_array(n):
    my_list = np.zeros(n)

import pandas as pd

df = pd.DataFrame([(integer_multiplication(n), numpy_array(n)) for n in range(1000)],
                  columns=['Integer multiplication', 'Numpy array'])
df.plot()
Gist here.
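As a rough sketch of what "consider numpy" means in practice (assuming a zero-filled numeric array is acceptable; np.zeros and tolist() are standard numpy calls):
import numpy as np

n = 1000000
arr = np.zeros(n, dtype=int)  # contiguous zero-filled array, allocated very quickly
as_list = arr.tolist()        # convert back to a plain Python list only if you really need one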
There is one more method which, while sounding weird, is handy in the right circumstances. If you need to produce the same list many times (initializing a matrix for roguelike pathfinding and related stuff in my case), you can store a copy of the list in a tuple, then turn it back into a list when you need it. It is noticeably quicker than generating the list via comprehensions and, unlike list multiplication, it does not collapse every row into a single shared object (though the copy is shallow, as shown after the disclaimer below).
# In class definition
def __init__(self):
    self.l = [[1000 for x in range(1000)] for y in range(1000)]
    self.t = tuple(self.l)

def some_method(self):
    self.l = list(self.t)
    self._do_fancy_computation()
    # self.l is changed by this method

# Later in code:
for a in range(10):
    obj.some_method()
Voila, on every iteration you have a fresh copy of the same list in no time!
Disclaimer:
I do not have the slightest idea why this is so quick or whether it works anywhere outside CPython 3.4.
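One caveat worth checking for your own use case: tuple() and list() only copy the outer list, so the nested rows are still shared between the saved tuple and each "fresh" copy. A quick demonstration:
>>> rows = [[0, 0], [0, 0]]
>>> saved = tuple(rows)
>>> fresh = list(saved)
>>> fresh[0].append(99)  # mutating a nested row...
>>> rows[0]              # ...also shows up in the original
[0, 0, 99]
>>> fresh is rows        # only the outer list is new
False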
If you want to create an incrementing list, i.e. one that adds 1 every time, use the range function. In range, the start argument is included and the end argument is excluded, as shown below:
list(range(10,20))
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
If you want to create a list that steps by 2 each time, use this:
list(range(10,20,2))
[10, 12, 14, 16, 18]
Here the third argument is the step size. You can give any start value, end value, and step size to create such lists quickly and easily.
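For instance, a couple of quick variations (the values are just illustrative):
list(range(20, 10, -2))
[20, 18, 16, 14, 12]
list(range(0, 50, 5))
[0, 5, 10, 15, 20, 25, 30, 35, 40, 45]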
I used a loop to add a bunch of elements to a list with:
mylist = []
for x in otherlist:
    mylist.append(x[0:5])
But instead of the expected result ['x1','x2',...], I got: [u'x1', u'x2',...]. Where did the u's come from and why? Also is there a better way to loop through the other list, inserting the first six characters of each element into a new list?
The u means unicode; you probably will not need to worry about it:
mylist.extend(x[:5] for x in otherlist)
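If you are building mylist from scratch rather than appending to an existing one, a list comprehension does the same thing (otherlist is the name from the question):
mylist = [x[:5] for x in otherlist]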
The u means unicode. It's the prefix Python 2 uses when displaying its unicode string type (in Python 3, all strings are unicode).
Most of the time you don't need to worry about it. (Until you do.)
The answers above already covered the "u" part: the strings are unicode strings. As for whether there's a better way to extract the first few characters from the items in a list:
>>> a = ["abcdefgh", "012345678"]
>>> b = map(lambda n: n[0:5], a)
>>> for x in b:
...     print(x)
...
abcde
01234
So, map applies a function (lambda n: n[0:5]) to each element of a and gives you the result for every element. More precisely, in Python 3 it returns a lazy iterator, so the function gets called only as many times as needed (i.e. if your list has 5000 items, but you only pull 10 from the result b, lambda n: n[0:5] gets called only 10 times). In Python 2, map returns a plain list; use itertools.imap if you want the lazy behavior.
>>> a = [1, 2, 3]
>>> def plusone(x):
print("called with {}".format(x))
return x + 1
>>> b = map(plusone, a)
>>> print("first item: {}".format(b.__next__()))
called with 1
first item: 2
Of course, you can apply the function "eagerly" to every element by calling list(b), which will give you a normal list with the function applied to each element on creation.
>>> b = map(plusone, a)
>>> list(b)
called with 1
called with 2
called with 3
[2, 3, 4]
The list.index(x) function returns the index in the list of the first item whose value is x.
Is there a function, list_func_index(), similar to the index() function, that takes a function, f(), as a parameter? The function f() is run on every element, e, of the list until f(e) returns True. Then list_func_index() returns the index of e.
Codewise:
>>> def list_func_index(lst, func):
...     for i in range(len(lst)):
...         if func(lst[i]):
...             return i
...     raise ValueError('no element making func True')
...
>>> l = [8,10,4,5,7]
>>> def is_odd(x): return x % 2 != 0
>>> list_func_index(l,is_odd)
3
Is there a more elegant solution? (and a better name for the function)
You could do that in a one-liner using generators:
next(i for i,v in enumerate(l) if is_odd(v))
The nice thing about generators is that they only compute up to the requested amount. So requesting the first two indices is (almost) just as easy:
y = (i for i,v in enumerate(l) if is_odd(v))
x1 = next(y)
x2 = next(y)
Though, expect a StopIteration exception once the matching indices run out (that is how generators work). In your "take-first" approach this is also a convenient way to know that no such value was found; the list.index() function would throw ValueError here.
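If you would rather get a sentinel value than an exception when nothing matches, next() also accepts a default as its second argument (reusing l and is_odd from the question):
>>> next((i for i, v in enumerate(l) if is_odd(v)), None)
3
>>> print(next((i for i, v in enumerate(l) if v > 100), None))  # no match: the default comes back
None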
One possibility is the built-in enumerate function:
def index_of_first(lst, pred):
    for i, v in enumerate(lst):
        if pred(v):
            return i
    return None
It's typical to refer to a function like the one you describe as a "predicate"; it returns true or false for some question. That's why I call it pred in my example.
I also think it would be better form to return None, since that's the real answer to the question. The caller can choose to explode on None, if required.
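For example, with the list and predicate from the question (the odd-number check is written inline here):
>>> l = [8, 10, 4, 5, 7]
>>> index_of_first(l, lambda x: x % 2 != 0)
3
>>> print(index_of_first(l, lambda x: x > 100))  # nothing matches, so we get None
None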
@Paul's accepted answer is best, but here's a little lateral-thinking variant, mostly for amusement and instruction purposes...:
>>> class X(object):
...     def __init__(self, pred): self.pred = pred
...     def __eq__(self, other): return self.pred(other)
...
>>> l = [8,10,4,5,7]
>>> def is_odd(x): return x % 2 != 0
...
>>> l.index(X(is_odd))
3
essentially, X's purpose is to change the meaning of "equality" from the normal one to "satisfies this predicate", thereby allowing the use of predicates in all kinds of situations that are defined as checking for equality -- for example, it would also let you code, instead of if any(is_odd(x) for x in l):, the shorter if X(is_odd) in l:, and so forth.
Worth using? Not when a more explicit approach like that taken by @Paul is just as handy (especially when changed to use the new, shiny built-in next function rather than the older, less appropriate .next method, as I suggest in a comment to that answer), but there are other situations where it (or other variants of the idea "tweak the meaning of equality", and maybe other comparators and/or hashing) may be appropriate. Mostly, worth knowing about the idea, to avoid having to invent it from scratch one day ;-).
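For instance, the membership test mentioned above does work, because the in operator falls back to X's __eq__ as well (reusing l and is_odd defined earlier in this answer):
>>> X(is_odd) in l
True
>>> X(lambda x: x > 100) in l
False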
Not one single function, but you can do it pretty easily:
>>> test = lambda c: c == 'x'
>>> data = ['a', 'b', 'c', 'x', 'y', 'z', 'x']
>>> map(test, data).index(True)
3
>>>
If you don't want to evaluate the entire list at once you can use itertools, but it's not as pretty:
>>> from itertools import imap, ifilter
>>> from operator import itemgetter
>>> test = lambda c: c == 'x'
>>> data = ['a', 'b', 'c', 'x', 'y', 'z']
>>> ifilter(itemgetter(1), enumerate(imap(test, data))).next()[0]
3
>>>
Just using a generator expression is probably more readable than itertools though.
Note in Python3, map and filter return lazy iterators and you can just use:
from operator import itemgetter
test = lambda c: c == 'x'
data = ['a', 'b', 'c', 'x', 'y', 'z']
next(filter(itemgetter(1), enumerate(map(test, data))))[0] # 3
A variation on Alex's answer. This avoids having to type X every time you want to use is_odd or whichever predicate you need:
>>> class X(object):
...     def __init__(self, pred): self.pred = pred
...     def __eq__(self, other): return self.pred(other)
...
>>> L = [8,10,4,5,7]
>>> is_odd = X(lambda x: x%2 != 0)
>>> L.index(is_odd)
3
>>> less_than_six = X(lambda x: x<6)
>>> L.index(less_than_six)
2
You could do this with a list comprehension:
l = [8,10,4,5,7]
filterl = [a for a in l if a % 2 != 0]
Then filterl will contain all members of the list fulfilling the expression a % 2 != 0. I would say that is a more elegant method...
Intuitive one-liner solution:
i = list(map(lambda value: value > 0, data)).index(True)
Explanation:
We use the map function to create a list containing True or False, based on whether each element in our list passes the condition in the lambda.
Then we convert the map output to a list.
Then, using the index function, we get the index of the first True, which is the same as the index of the first value passing the condition.
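For example, with a hypothetical data list, the first positive value sits at index 3:
>>> data = [-3, -1, 0, 2, 5]
>>> list(map(lambda value: value > 0, data)).index(True)
3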