Is there a way to "compress" an array in python so as to keep the same range but simply decrease the number of elements to a given value?
For example I have an array with 1000 elements and I want to modify it to have 100. Specifically I have a numpy array that is
x = np.linspace(-1, 1, 1000)
But because of the way in which I am using it in my project, I can't simply recreate it using linspace as it will not always be in the domain of -1 to 1 and have 1000 elements. These parameters change and I don't have access to them in the function I am defining. So I need a way to compress the array while keeping the -1 to 1 mapping. Think of it as decreasing the "resolution" of the array. Is this possible with any built in functions or different libraries?
A simple way to "resample" your array is to group it into chunks, then average each chunk:
(Chunking function is from this answer)
# Chunking function
def chunks(l, n):
    # Yield successive n-sized chunks from l.
    for i in range(0, len(l), n):
        yield l[i:i + n]

# Resampling function
def resample(arr, newLength):
    chunkSize = len(arr) // newLength  # integer division, so chunkSize is usable as a slice step
    return [np.mean(chunk) for chunk in chunks(arr, chunkSize)]
# Example:
import numpy as np
x = np.linspace(-1, 1, 15)
y = resample(x, 5)
print(y)
# Result:
# [-0.85714285714285721, -0.4285714285714286, -3.7007434154171883e-17, 0.42857142857142844, 0.8571428571428571]
As you can see, the range of the resampled array does drift inward, but this effect would be much smaller for larger arrays.
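If len(arr) happens to be an exact multiple of newLength, the same chunk-averaging can be done entirely in numpy. A minimal sketch (resampleNumpy is my own name, and it assumes that divisibility holds):

def resampleNumpy(arr, newLength):
    # Assumes len(arr) is an exact multiple of newLength.
    chunkSize = len(arr) // newLength
    return arr.reshape(newLength, chunkSize).mean(axis=1)

print(resampleNumpy(np.linspace(-1, 1, 15), 5))  # same five averages as above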
It's not clear to me whether the arrays will always be generated by numpy.linspace or not. If so, there are simpler ways of doing this, like simply picking every nth element of the original array, where n is determined by the "compression" ratio:

def linearResample(arr, newLength):
    spacing = len(arr) // newLength  # integer division
    return arr[::spacing]
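For example, on an evenly spaced array (a quick demonstration I added; note that if len(arr) is not an exact multiple of newLength, the strided slice can return slightly more than newLength elements):

import numpy as np
x = np.linspace(-1, 1, 1000)
y = linearResample(x, 100)
print(len(y), y[0], y[-1])  # 100 -1.0 0.98198... -- the right endpoint is dropped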
You could pick items at random to avoid introducing bias in the reduction. If the original sample is unordered it would just be:

import random

sample = list(range(1000))

def reduce(sample, count):
    work = sample[:]
    random.shuffle(work)
    return work[:count]
If order matters, use enumerate to track each item's position, then reassemble:
def reduce(sample, count):
    indexed = list(enumerate(sample))
    random.shuffle(indexed)
    trimmed = indexed[:count]
    trimmed.sort()
    return [item for index, item in trimmed]
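For example (output varies from run to run, but the surviving items always keep their original relative order):

data = list(range(20))
print(reduce(data, 5))  # e.g. [1, 4, 9, 13, 17]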
I am seeking to sample n random permutations of a list in Python.
This is my code:
obj = [ 5 8 9 ... 45718 45719 45720]
#type(obj) = numpy.ndarray
pairs = random.sample(list(permutations(obj, 2)), k=150)
Although the code does what I want it to, it causes memory issues. I sometimes get a MemoryError when running on the CPU, and when running on the GPU, my virtual machine crashes.
How can I make the code work in a more memory-efficient manner?
This avoids using permutations at all:
count = len(obj)

def index2perm(i, obj):
    i1, i2 = divmod(i, len(obj) - 1)
    if i1 <= i2:
        i2 += 1
    return (obj[i1], obj[i2])

pairs = [index2perm(i, obj) for i in random.sample(range(count * (count - 1)), k=3)]
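A quick check I added (not part of the original answer) that the index mapping enumerates every ordered pair exactly once and never lands on the diagonal:

small = ['a', 'b', 'c']
all_pairs = [index2perm(i, small) for i in range(len(small) * (len(small) - 1))]
assert len(set(all_pairs)) == 6
assert all(a != b for a, b in all_pairs)
print(all_pairs)
# [('a', 'b'), ('a', 'c'), ('b', 'a'), ('b', 'c'), ('c', 'a'), ('c', 'b')]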
Building on Pablo Ruiz's excellent answer, I suggest wrapping his sampling solution into a generator function that yields unique permutations by keeping track of what it has already yielded:
import numpy as np

def unique_permutations(sequence, r, n):
    """Yield n unique permutations of r elements from sequence"""
    seen = set()
    while len(seen) < n:
        # This line of code adapted from Pablo Ruiz's answer:
        candidate_permutation = tuple(np.random.choice(sequence, r, replace=False))
        if candidate_permutation not in seen:
            seen.add(candidate_permutation)
            yield candidate_permutation
obj = list(range(10))

for permutation in unique_permutations(obj, 2, 15):
    pass  # do something with the permutation

# Or, to save the result as a list:
pairs = list(unique_permutations(obj, 2, 15))
My assumption is that you are sampling a small subset of the very large number of possible permutations, in which case collisions will be rare enough that keeping a seen set will not be expensive.
Warnings: this function turns into an infinite loop if you ask for more permutations than are possible given the inputs. It will also get increasingly slow as n approaches the number of possible permutations, since collisions become increasingly frequent.
If I were to put this function in my code base, I would put a shield at the top that calculated the number of possible permutations and raised a ValueError exception if n exceeded that number, and maybe output a warning if n exceeded one tenth that number, or something like that.
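A minimal sketch of such a guard, assuming Python 3.8+ for math.perm (the function name and the one-tenth threshold are my own choices, not from any library):

import math
import warnings

def check_requested_count(sequence, r, n):
    possible = math.perm(len(sequence), r)  # number of r-permutations
    if n > possible:
        raise ValueError(f"asked for {n} permutations, but only {possible} exist")
    if n > possible // 10:
        warnings.warn("n is a large fraction of all possible permutations; "
                      "the rejection loop will slow down")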
You can avoid materializing the permutation iterator, which could be massive, in memory. You can generate random permutations by sampling the list with replace=False.
import numpy as np

obj = np.array([5, 8, 123, 13541, 42])
k = 15
permutations = [tuple(np.random.choice(obj, 2, replace=False)) for _ in range(k)]
print(permutations)
This problem becomes much harder if you, for example, require that no permutation be repeated.
Edit: code for the no-repetition case
I think this is the best possible approach for the no-repetition case.
We index all possible permutations from 0 to n**2 - n - 1, treating them as the off-diagonal cells of an n x n permutation matrix (the diagonal must be avoided). We sample the indexes without repetition and without listing them all, map each sampled index to the coordinates of a permutation, and then read the permutation off those matrix indices.
import random
import numpy as np

obj = np.array([1, 2, 3, 10, 43, 19, 323, 142, 334, 33, 312, 31, 12])
k = 150
obj_len = len(obj)

indexes = random.sample(range(obj_len**2 - obj_len), k)

def mapm(m):
    # Map an off-diagonal index to its position in the flattened n x n
    # matrix, skipping over the diagonal cells.
    return m + m // obj_len + 1

permutations = [(obj[mapm(i) // obj_len], obj[mapm(i) % obj_len]) for i in indexes]
This approach is not based on any assumption, does not load the permutations into memory, and its performance does not depend on a while loop repeatedly failing to insert duplicates, as no duplicates are ever generated.
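Two quick sanity checks I added, relying on the values in obj being distinct:

assert len(set(permutations)) == k           # sampled indexes are unique, so pairs are too
assert all(a != b for a, b in permutations)  # the diagonal (a, a) is never generated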
I have an array with a length of over 1,000,000 and integer values between 0 and 255 (inclusive). I would like to plot the integers from 0 to 255 on the x-axis and, on the y-axis, the number of times each x value occurs in the array (called Arr in my current code).
I thought about this code:
counts = []
for i in range(256):
    icounter = 0
    for x in range(len(Arr)):
        if Arr[x] == i:
            icounter += 1
    counts.append(icounter)
But is there any way to do this a little faster? (It currently takes several minutes.) I thought about an import ..., but wasn't able to find a suitable package for this.
Use numpy.bincount for this task (see the numpy documentation for more details):

import numpy as np

counts = np.bincount(Arr)
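For the plotting part of the question, a sketch with matplotlib (my addition; it assumes Arr holds non-negative integers, and minlength=256 guarantees one bin per possible value even if the largest values never occur):

import numpy as np
import matplotlib.pyplot as plt

counts = np.bincount(Arr, minlength=256)
plt.bar(np.arange(256), counts)
plt.xlabel('value')
plt.ylabel('count')
plt.show()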
While I completely agree with the previous answers that you should use a standard histogram algorithm, it's quite easy to greatly speed up your own implementation. Its problem is that it passes through the entire input once for every bin, over and over again. It would be much faster to process the input only once, incrementing the relevant bin as you go:
def hist(arr):
    nbins = 256
    result = [0] * nbins  # or np.zeros(nbins)
    for y in arr:
        if 0 <= y < nbins:
            result[y] += 1
    return result
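A quick way to convince yourself that the single-pass version agrees with the library routine (my addition):

import numpy as np

arr = np.random.randint(0, 256, size=100000)
assert hist(arr) == np.bincount(arr, minlength=256).tolist()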
I am attempting to build a simple genetic algorithm that will optimize towards an input string, but am having trouble building the [individual x genome] matrix (row n is individual n's genome). I want to be able to change the population size, mutation rate, and other parameters to study how they affect convergence rate and program efficiency.
This is what I have so far:
import random
import itertools
import numpy as np

def evolve():
    goal = 'Hello, World!'  # string to optimize towards
    ideal = list(goal)

    # converting the string into a list of integers
    for i in range(0, len(ideal)):
        ideal[i] = ord(ideal[i])
    print(ideal)

    popSize = 10  # population size
    genome = len(ideal)  # determining the length of the genome to be the length of the target string
    mut = 0.03  # mutation rate
    S = 4  # tournament size
    best = float("inf")  # initial best is very large
    maxVal = max(ideal)
    minVal = min(ideal)
    print(maxVal)

    i = 0  # counting variables assigned to solve UnboundLocalError
    j = 0
    print(maxVal, minVal)

    # constructing initial population array (individual x genome)
    pop = np.empty([popSize, len(ideal)])
    for i, j in itertools.product(range(i), range(j)):
        pop[i, j] = [i, random.randint(minVal, maxVal)]
    print(pop)
This produces a matrix of the population size with the correct genome length, but the genomes are something like:
[ 6.91364167e-310 6.91364167e-310 1.80613009e-316 1.80613009e-316
5.07224590e-317 0.00000000e+000 6.04100487e+151 3.13149876e-120
1.11787892e+253 1.47872844e-028 7.34486815e+223 1.26594941e-118
7.63858409e+228]
I need them to be random integers corresponding to random ASCII characters.
What am I doing wrong with this method?
Is there a way to make this faster?
I found my current method here:
building an nxn matrix in python numpy, for any n
I found another method that I do not understand, but which seems faster and simpler; if I can use it here I would like to.
Initialise numpy array of unknown length
Thank you for any assistance you can provide.
Your loop isn't executing because i and j are both 0, so range(i) and range(j) are empty. Also, you can't assign a list like [i, random.randint(...)] to an array element, since np.empty defaults to dtype np.float64. I've changed the loop to store only the random number, but if you really want to store lists, you can create pop as pop = np.empty([popSize, len(ideal)], dtype=list).
Otherwise, use this for the last lines:

for i, j in itertools.product(range(popSize), range(len(ideal))):
    pop[i, j] = random.randint(minVal, maxVal)
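If you only need a matrix of random integers, numpy can also build the whole population in one vectorized call, which addresses the "is there a way to make this faster" part of the question. A sketch using the variable names from the question (note that np.random.randint's upper bound is exclusive, hence the + 1):

pop = np.random.randint(minVal, maxVal + 1, size=(popSize, len(ideal)))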
Given the number of rows (or columns), n, of a square matrix, I am trying to get the index pairs of the lower triangular part as a one-dimensional list. So far I have thought of the following solution:
def getLowerTriangularIndices(n):
    inds = []
    for i in range(1, n):
        for j in range(i):
            inds.append((i, j))
    return inds
Considering the two for loops, it would be far better to have a more efficient way of calculating this maybe using numpy. Does anyone have a suggestion?
Numpy has a method for that...
import numpy as np

# create your matrix. If it's not yet a numpy array, make it one
ar = np.array(matrix)
indices = np.tril_indices_from(ar)
This returns a tuple of two arrays. If you want to have them as lists, you could do
indices = [list(x) for x in np.tril_indices_from(ar)]
You actually do not need to have an array to get the indices; there is also np.tril_indices, which takes the matrix size as its argument. Note that both functions include the diagonal by default, whereas your loop excludes it; pass k=-1 to get only the indices strictly below the diagonal.
So your function would read:
def getLowerTriangularIndices(n):
    return [list(x) for x in np.tril_indices(n, -1)]
or, if you want a list of tuples instead:
def getLowerTriangularIndices(n):
    return list(zip(*np.tril_indices(n, -1)))
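A quick check that the tuple version matches the double-loop version for n = 3:

print(getLowerTriangularIndices(3))
# [(1, 0), (2, 0), (2, 1)]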
I have a list a_tot with 1500 elements and I would like to divide this list into two lists in a random way. List a_1 would have 1300 and list a_2 would have 200 elements. My question is about the best way to randomize the original list with 1500 elements. When I have randomized the list, I could take one slice with 1300 and another slice with 200.
One way is to use random.shuffle, another is to use random.sample. Are there any differences in the quality of the randomization between the two methods? The data in list 1 should be a random sample, as should the data in list 2.
Any recommendations?
using shuffle:
random.shuffle(a_tot) #get a randomized list
a_1 = a_tot[0:1300] #pick the first 1300
a_2 = a_tot[1300:] #pick the last 200
using sample
new_t = random.sample(a_tot,len(a_tot)) #get a randomized list
a_1 = new_t[0:1300] #pick the first 1300
a_2 = new_t[1300:] #pick the last 200
The source for shuffle:
def shuffle(self, x, random=None, int=int):
    """x, random=random.random -> shuffle list x in place; return None.

    Optional arg random is a 0-argument function returning a random
    float in [0.0, 1.0); by default, the standard random.random.
    """
    if random is None:
        random = self.random
    for i in reversed(xrange(1, len(x))):
        # pick an element in x[:i+1] with which to exchange x[i]
        j = int(random() * (i+1))
        x[i], x[j] = x[j], x[i]
The source for sample:
def sample(self, population, k):
    """Chooses k unique random elements from a population sequence.

    Returns a new list containing elements from the population while
    leaving the original population unchanged. The resulting list is
    in selection order so that all sub-slices will also be valid random
    samples. This allows raffle winners (the sample) to be partitioned
    into grand prize and second place winners (the subslices).

    Members of the population need not be hashable or unique. If the
    population contains repeats, then each occurrence is a possible
    selection in the sample.

    To choose a sample in a range of integers, use xrange as an argument.
    This is especially fast and space efficient for sampling from a
    large population: sample(xrange(10000000), 60)
    """

    # XXX Although the documentation says `population` is "a sequence",
    # XXX attempts are made to cater to any iterable with a __len__
    # XXX method. This has had mixed success. Examples from both
    # XXX sides: sets work fine, and should become officially supported;
    # XXX dicts are much harder, and have failed in various subtle
    # XXX ways across attempts. Support for mapping types should probably
    # XXX be dropped (and users should pass mapping.keys() or .values()
    # XXX explicitly).

    # Sampling without replacement entails tracking either potential
    # selections (the pool) in a list or previous selections in a set.

    # When the number of selections is small compared to the
    # population, then tracking selections is efficient, requiring
    # only a small set and an occasional reselection. For
    # a larger number of selections, the pool tracking method is
    # preferred since the list takes less space than the
    # set and it doesn't suffer from frequent reselections.

    n = len(population)
    if not 0 <= k <= n:
        raise ValueError, "sample larger than population"
    random = self.random
    _int = int
    result = [None] * k
    setsize = 21        # size of a small set minus size of an empty list
    if k > 5:
        setsize += 4 ** _ceil(_log(k * 3, 4))  # table size for big sets
    if n <= setsize or hasattr(population, "keys"):
        # An n-length list is smaller than a k-length set, or this is a
        # mapping type so the other algorithm wouldn't work.
        pool = list(population)
        for i in xrange(k):             # invariant: non-selected at [0,n-i)
            j = _int(random() * (n-i))
            result[i] = pool[j]
            pool[j] = pool[n-i-1]       # move non-selected item into vacancy
    else:
        try:
            selected = set()
            selected_add = selected.add
            for i in xrange(k):
                j = _int(random() * n)
                while j in selected:
                    j = _int(random() * n)
                selected_add(j)
                result[i] = population[j]
        except (TypeError, KeyError):   # handle (at least) sets
            if isinstance(population, list):
                raise
            return self.sample(tuple(population), k)
    return result
As you can see, in both cases the randomization is done by the line int(random() * n), so the underlying algorithm is essentially the same.
There are two major differences between shuffle() and sample():
1) Shuffle will alter data in-place, so its input must be a mutable sequence. In contrast, sample produces a new list and its input can be much more varied (tuple, string, xrange, bytearray, set, etc).
2) Sample lets you potentially do less work (i.e. a partial shuffle).
It is interesting to show the conceptual relationship between the two by demonstrating that it would have been possible to implement shuffle() in terms of sample():
def shuffle(p):
    p[:] = sample(p, len(p))
Or vice-versa, implementing sample() in terms of shuffle():
def sample(p, k):
    p = list(p)
    shuffle(p)
    return p[:k]
Neither of these is as efficient as the real implementations of shuffle() and sample(), but it does show their conceptual relationship.
The randomization should be just as good with both options. I'd say go with shuffle, because it's more immediately clear to the reader what it does.
random.shuffle() shuffles the given list in-place. Its length stays the same.
random.sample() picks n items out of the given sequence without replacement (the sequence can be a tuple or anything else with a __len__()) and returns them in randomized order.
I think they are much the same, except that one updates the original list and the other only reads it. No difference in quality.
from random import shuffle
from random import sample
x = [[i] for i in range(10)]
shuffle(x)
sample(x,10)
shuffle updates the list in place, while sample returns a new, reshuffled list. sample also lets you specify how many elements to pick, whereas shuffle always produces a list of the same length as its input.