I tried using random.randint(0, 100), but some numbers were the same. Is there a method/module to create a list of unique random numbers?
This will return a list of 10 numbers selected from the range 0 to 99, without duplicates.
import random
random.sample(range(100), 10)
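If you need a different range or a reproducible draw, the same call adapts easily (an illustrative variation of mine, not part of the original answer):
import random

random.seed(42)                            # optional: makes the draw repeatable
print(random.sample(range(1, 101), 10))    # 10 unique integers between 1 and 100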
You can use the shuffle function from the random module like this:
import random
nums = list(range(1, 100)) # list of integers from 1 to 99
# adjust these boundaries to fit your needs
random.shuffle(nums)
print(nums) # <- List of unique random numbers
Note here that the shuffle method doesn't return a list as one might expect; it only shuffles the list you pass in, in place.
You can first create a list of numbers from a to b, where a and b are respectively the smallest and greatest numbers in your list, then shuffle it with the Fisher-Yates algorithm or using Python's random.shuffle method.
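For reference, a minimal Fisher-Yates sketch (my illustration of the manual equivalent of random.shuffle; the bounds a and b below are example values):
import random

def fisher_yates_shuffle(items):
    # Walk the list backwards, swapping each position with a
    # uniformly chosen position at or before it.
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)
        items[i], items[j] = items[j], items[i]
    return items

a, b = 1, 99                                    # example bounds
nums = fisher_yates_shuffle(list(range(a, b + 1)))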
Linear Congruential Pseudo-random Number Generator
O(1) Memory
O(k) Operations
This problem can be solved with a simple Linear Congruential Generator. This requires constant memory overhead (8 integers) and at most 2*(sequence length) computations.
All other solutions use more memory and more compute! If you only need a few random sequences, this method will be significantly cheaper. For ranges of size N, if you want to generate on the order of N unique k-sequences or more, I recommend the accepted solution using the builtin methods random.sample(range(N),k) as this has been optimized in python for speed.
Code
# Return a randomized "range" using a Linear Congruential Generator
# to produce the number sequence. Parameters are the same as for
# python builtin "range".
# Memory -- storage for 8 integers, regardless of parameters.
# Compute -- at most 2*"maximum" steps required to generate sequence.
#
def random_range(start, stop=None, step=None):
    import random, math
    # Set default values the same way "range" does.
    if stop is None: start, stop = 0, start
    if step is None: step = 1
    # Use a mapping to convert a standard range into the desired range.
    mapping = lambda i: (i * step) + start
    # Compute the number of numbers in this range.
    maximum = (stop - start) // step
    # Seed range with a random integer.
    value = random.randint(0, maximum)
    #
    # Construct an offset, multiplier, and modulus for a linear
    # congruential generator. These generators are cyclic and
    # non-repeating when they maintain the properties:
    #
    #   1) "modulus" and "offset" are relatively prime.
    #   2) ["multiplier" - 1] is divisible by all prime factors of "modulus".
    #   3) ["multiplier" - 1] is divisible by 4 if "modulus" is divisible by 4.
    #
    offset = random.randint(0, maximum) * 2 + 1      # Pick a random odd-valued offset.
    multiplier = 4 * (maximum // 4) + 1              # Pick a multiplier 1 greater than a multiple of 4.
    modulus = int(2**math.ceil(math.log2(maximum)))  # Pick a modulus just big enough to generate all numbers (power of 2).
    # Track how many random numbers have been returned.
    found = 0
    while found < maximum:
        # If this is a valid value, yield it in generator fashion.
        if value < maximum:
            found += 1
            yield mapping(value)
        # Calculate the next value in the sequence.
        value = (value * multiplier + offset) % modulus
Usage
The usage of this function "random_range" is the same as for any generator (like "range"). An example:
# Show off random range.
print()
for v in range(3, 6):
    v = 2**v
    l = list(random_range(v))
    print("Need", v, "found", len(set(l)), "(min,max)", (min(l), max(l)))
    print("", l)
print()
Sample Results
Required 8 cycles to generate a sequence of 8 values.
Need 8 found 8 (min,max) (0, 7)
[1, 0, 7, 6, 5, 4, 3, 2]
Required 16 cycles to generate a sequence of 9 values.
Need 9 found 9 (min,max) (0, 8)
[3, 5, 8, 7, 2, 6, 0, 1, 4]
Required 16 cycles to generate a sequence of 16 values.
Need 16 found 16 (min,max) (0, 15)
[5, 14, 11, 8, 3, 2, 13, 1, 0, 6, 9, 4, 7, 12, 10, 15]
Required 32 cycles to generate a sequence of 17 values.
Need 17 found 17 (min,max) (0, 16)
[12, 6, 16, 15, 10, 3, 14, 5, 11, 13, 0, 1, 4, 8, 7, 2, ...]
Required 32 cycles to generate a sequence of 32 values.
Need 32 found 32 (min,max) (0, 31)
[19, 15, 1, 6, 10, 7, 0, 28, 23, 24, 31, 17, 22, 20, 9, ...]
Required 64 cycles to generate a sequence of 33 values.
Need 33 found 33 (min,max) (0, 32)
[11, 13, 0, 8, 2, 9, 27, 6, 29, 16, 15, 10, 3, 14, 5, 24, ...]
The solution presented in this answer works, but it could become problematic with memory if the sample size is small but the population is huge (e.g. random.sample(range(insanelyLargeNumber), 10)).
To fix that, I would go with this:
import random

answer = set()
sampleSize = 10
answerSize = 0

while answerSize < sampleSize:
    r = random.randint(0, 100)
    if r not in answer:
        answerSize += 1
        answer.add(r)

# answer now contains 10 unique, random integers from 0..100
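For reuse, the same rejection-sampling idea can be wrapped in a small helper (a hypothetical function of my own, not from the answer):
import random

def sample_without_replacement(low, high, k):
    # Keep drawing until k distinct values have been collected.
    # Assumes k <= high - low + 1; otherwise this would loop forever.
    if k > high - low + 1:
        raise ValueError("k is larger than the population")
    chosen = set()
    while len(chosen) < k:
        chosen.add(random.randint(low, high))
    return list(chosen)

print(sample_without_replacement(0, 100, 10))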
If you need to sample extremely large numbers, you cannot use range
random.sample(range(10000000000000000000000000000000), 10)
because it throws:
OverflowError: Python int too large to convert to C ssize_t
Also, if random.sample cannot produce the number of items you want due to the range being too small
random.sample(range(2), 1000)
it throws:
ValueError: Sample larger than population
This function resolves both problems:
import random

def random_sample(count, start, stop, step=1):
    def gen_random():
        while True:
            yield random.randrange(start, stop, step)

    def gen_n_unique(source, n):
        seen = set()
        seenadd = seen.add
        for i in (i for i in source() if i not in seen and not seenadd(i)):
            yield i
            if len(seen) == n:
                break

    return [i for i in gen_n_unique(gen_random,
                                    min(count, int(abs(stop - start) / abs(step))))]
Usage with extremely large numbers:
print('\n'.join(map(str, random_sample(10, 2, 10000000000000000000000000000000))))
Sample result:
7822019936001013053229712669368
6289033704329783896566642145909
2473484300603494430244265004275
5842266362922067540967510912174
6775107889200427514968714189847
9674137095837778645652621150351
9969632214348349234653730196586
1397846105816635294077965449171
3911263633583030536971422042360
9864578596169364050929858013943
Usage where the range is smaller than the number of requested items:
print(', '.join(map(str, random_sample(100000, 0, 3))))
Sample result:
2, 0, 1
It also works with negative ranges and steps:
print(', '.join(map(str, random_sample(10, 10, -10, -2))))
print(', '.join(map(str, random_sample(10, 5, -5, -2))))
Sample results:
2, -8, 6, -2, -4, 0, 4, 10, -6, 8
-3, 1, 5, -1, 3
If the list of N numbers from 1 to N is randomly generated, then yes, there is a possibility that some numbers may be repeated.
If you want a list of numbers from 1 to N in a random order, fill an array with integers from 1 to N, and then use a Fisher-Yates shuffle or Python's random.shuffle().
Here is a very small snippet I made; hope this helps!
import random
numbers = list(range(0, 100))
random.shuffle(numbers)
A very simple function that also solves your problem
from random import randint

data = []

def unique_rand(inicial, limit, total):
    data = []
    i = 0
    while i < total:
        number = randint(inicial, limit)
        if number not in data:
            data.append(number)
            i += 1
    return data

data = unique_rand(1, 60, 6)
print(data)
"""
prints something like
[34, 45, 2, 36, 25, 32]
"""
One straightforward alternative is to use np.random.choice(), as shown below:
import numpy as np
np.random.choice(range(10), size=3, replace=False)
This results in three integers that are all different from each other, e.g. [1, 3, 5], [2, 5, 1], ...
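On recent NumPy versions, the same idea is also available through the newer Generator API (an illustrative sketch of mine, not from the original answer):
import numpy as np

rng = np.random.default_rng()
print(rng.choice(10, size=3, replace=False))   # 3 unique integers drawn from 0..9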
The answer provided here works very well with respect to time as well as memory, but it is a bit more complicated as it uses advanced Python constructs such as yield. The simpler answer works well in practice, but the issue with that answer is that it may generate many spurious integers before actually constructing the required set; try it out with populationSize = 1000, sampleSize = 999 (see the estimate below).
In theory, there is a chance that it doesn't terminate.
The answer below addresses both issues, as it is deterministic and somewhat efficient, though currently not as efficient as the other two.
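To put a rough number on those spurious draws (my own estimate, using the standard coupon-collector expectation; not part of the original answer):
# Expected number of uniform random draws needed to collect k distinct
# values out of a population of size N with plain rejection sampling.
def expected_draws(N, k):
    return sum(N / (N - i) for i in range(k))

print(expected_draws(1000, 999))   # roughly 6486 draws to obtain 999 unique values
The deterministic sampler defined next avoids that overhead entirely.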
def randomSample(populationSize, sampleSize):
    populationStr = str(populationSize)
    dTree, samples = {}, []
    for i in range(sampleSize):
        val, dTree = getElem(populationStr, dTree, '')
        samples.append(int(val))
    return samples, dTree
where the functions getElem, percolateUp are as defined below
import random

def getElem(populationStr, dTree, key):
    msd = int(populationStr[0])
    if key not in dTree:
        dTree[key] = list(range(msd + 1))  # use a list so that pop()/remove() work
    idx = random.randint(0, len(dTree[key]) - 1)
    key = key + str(dTree[key][idx])
    if len(populationStr) == 1:
        dTree[key[:-1]].pop(idx)
        return key, (percolateUp(dTree, key[:-1]))
    newPopulation = populationStr[1:]
    if int(key[-1]) != msd:
        newPopulation = str(10**(len(newPopulation)) - 1)
    return getElem(newPopulation, dTree, key)
def percolateUp(dTree, key):
    while dTree[key] == []:
        dTree[key[:-1]].remove(int(key[-1]))
        key = key[:-1]
    return dTree
Finally, the timing on average was about 15ms for a large value of n as shown below,
In [3]: n = 10000000000000000000000000000000
In [4]: %time l,t = randomSample(n, 5)
Wall time: 15 ms
In [5]: l
Out[5]:
[10000000000000000000000000000000L,
5731058186417515132221063394952L,
85813091721736310254927217189L,
6349042316505875821781301073204L,
2356846126709988590164624736328L]
To obtain a program that generates a list of random values without duplicates and that is deterministic, efficient and built with basic programming constructs, consider the function extractSamples defined below:
def extractSamples(populationSize, sampleSize, intervalLst):
    import random
    if sampleSize > populationSize:
        raise ValueError("sampleSize = " + str(sampleSize) + " > populationSize (= " + str(populationSize) + ")")
    samples = []
    while len(samples) < sampleSize:
        i = random.randint(0, len(intervalLst) - 1)
        (a, b) = intervalLst[i]
        sample = random.randint(a, b)
        if a == b:
            intervalLst.pop(i)
        elif a == sample:  # shorten beginning of interval
            intervalLst[i] = (sample + 1, b)
        elif sample == b:  # shorten interval end
            intervalLst[i] = (a, sample - 1)
        else:
            intervalLst[i] = (a, sample - 1)
            intervalLst.append((sample + 1, b))
        samples.append(sample)
    return samples
The basic idea is to keep track of a list of intervals, intervalLst, of possible values from which to select our required elements. This is deterministic in the sense that we are guaranteed to generate a sample within a fixed number of steps (depending solely on populationSize and sampleSize).
To use the above function to generate our required list,
In [3]: populationSize, sampleSize = 10**17, 10**5
In [4]: %time lst1 = extractSamples(populationSize, sampleSize, [(0, populationSize-1)])
CPU times: user 289 ms, sys: 9.96 ms, total: 299 ms
Wall time: 293 ms
We may also compare with an earlier solution (for a lower value of populationSize)
In [5]: populationSize, sampleSize = 10**8, 10**5
In [6]: %time lst = random.sample(range(populationSize), sampleSize)
CPU times: user 1.89 s, sys: 299 ms, total: 2.19 s
Wall time: 2.18 s
In [7]: %time lst1 = extractSamples(populationSize, sampleSize, [(0, populationSize-1)])
CPU times: user 449 ms, sys: 8.92 ms, total: 458 ms
Wall time: 442 ms
Note that I reduced the populationSize value, as it produces a MemoryError for higher values when using the random.sample solution (also mentioned in previous answers here and here). For the above values, we can also observe that extractSamples outperforms the random.sample approach.
P.S.: Though the core approach is similar to my earlier answer, there are substantial modifications in implementation as well as approach, along with an improvement in clarity.
The problem with the set-based approaches ("if the random value is already among the return values, try again") is that their runtime is nondeterministic due to collisions (each of which requires another "try again" iteration), especially when a large number of random values is requested from the range.
An alternative that isn't prone to this nondeterministic runtime is the following:
import bisect
import random
def fast_sample(low, high, num):
    """ Samples :param num: integer numbers in range of
        [:param low:, :param high:) without replacement
        by maintaining a list of ranges of values that
        are permitted.
        This list of ranges is used to map a random number
        of a contiguous range (`r_n`) to a permissible
        number `r` (from `ranges`).
    """
    ranges = [high]
    high_ = high - 1
    while len(ranges) - 1 < num:
        # generate a random number from an ever decreasing
        # contiguous range (which we'll map to the true
        # random number).
        # consider an example with low=0, high=10,
        # part way through this loop with:
        #
        #   ranges = [0, 2, 3, 7, 9, 10]
        #
        #   r_n :-> r
        #     0 :-> 1
        #     1 :-> 4
        #     2 :-> 5
        #     3 :-> 6
        #     4 :-> 8
        r_n = random.randint(low, high_)
        range_index = bisect.bisect_left(ranges, r_n)
        r = r_n + range_index
        for i in range(range_index, len(ranges)):
            if ranges[i] <= r:
                # for as many "gaps" as we iterate over, by that much
                # the true random value (`r`) is shifted.
                r = r_n + i + 1
            elif ranges[i] > r_n:
                break
        # mark `r` as another "gap" of the original
        # [low, high) range.
        ranges.insert(i, r)
        # Fewer values possible.
        high_ -= 1
    # `ranges` happens to contain the result.
    return ranges[:-1]
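A quick illustrative call (my example, not part of the answer):
print(fast_sample(0, 10, 5))   # e.g. five unique values drawn from [0, 10)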
I found a way that is quite a bit faster than using the range function (very slow), and without using the random module from Python (I don't like the random built-in library, because when you seed it, it repeats the pattern of the random number generator).
import numpy as np
nums = set(np.random.randint(low=0, high=100, size=150)) #generate some more for the duplicates
nums = list(nums)[:100]
This is quite fast.
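If falling short of 100 unique values is a concern (set() may swallow more duplicates than the 50 extra draws cover), a permutation sidesteps the issue entirely; a small alternative sketch of mine, not from the original answer:
import numpy as np

# A permutation has no duplicates by construction; take as many values as needed.
nums = np.random.permutation(100)[:100].tolist()   # here the slice keeps all of 0..99, shuffled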
You can use the NumPy library for a quick answer, as shown below.
The given code snippet lists 6 unique numbers in the range 0 to 5. You can adjust the parameters to your comfort.
import numpy as np
import random
a = np.linspace( 0, 5, 6 )
random.shuffle(a)
print(a)
Output
[ 2. 1. 5. 3. 4. 0.]
It doesn't impose the constraints that we see with random.sample, as referred to here.
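Note that np.linspace produces floats (visible in the output above); if you need integers, the same idea works with np.arange (a small variation of mine):
import numpy as np
import random

a = np.arange(6)      # integers 0..5
random.shuffle(a)
print(a)              # e.g. [2 1 5 3 4 0]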
import random

sourcelist = []
resultlist = []

for x in range(100):
    sourcelist.append(x)

for y in sourcelist:
    resultlist.insert(random.randint(0, len(resultlist)), y)

print(resultlist)
Try using...
import random
LENGTH = 100
random_with_possible_duplicates = [random.randrange(-3, 3) for _ in range(LENGTH)]
random_without_duplicates = list(set(random_with_possible_duplicates)) # This removes duplicates
Advantages
Fast, efficient and readable.
Possible Issues
This method can change the length of the list if there are duplicates.
If you wish to ensure that the numbers being added are unique, you could use a set object
(built in since Python 2.4; on older versions, import the sets module).
As others have mentioned, this means the numbers are not truly random.
If the amount of numbers you want is random, you can do something like this. In this case, length is the highest number you want to choose from.
If it notices the new random number was already chosen, it'll subtract 1 from count (since a count was added before it knew whether it was a duplicate or not). If it's not in the list, then do what you want with it and add it to the list so it can't get picked again.
import random

def randomizer():
    count = 0
    user_input = int(input("Enter number for how many rows to randomly select: "))
    numlist = []
    length = 100  # example value -- set this to the highest number you want to choose from
    while 1 <= user_input <= length:
        count = count + 1
        if count > user_input:
            break
        else:
            chosen_number = random.randint(0, length)
            if chosen_number in numlist:
                count = count - 1
                continue
            if chosen_number not in numlist:
                numlist.append(chosen_number)
                # do what you want with chosen_number here
Edit: ignore my answer here. use python's random.shuffle or random.sample, as mentioned in other answers.
to sample integers without replacement between `minval` and `maxval`:
import numpy as np
minval, maxval, n_samples = -50, 50, 10
generator = np.random.default_rng(seed=0)
samples = generator.permutation(np.arange(minval, maxval))[:n_samples]
# or, if minval is 0,
samples = generator.permutation(maxval)[:n_samples]
with jax:
import jax
minval, maxval, n_samples = -50, 50, 10
key = jax.random.PRNGKey(seed=0)
samples = jax.random.permutation(key, jax.numpy.arange(minval, maxval))[:n_samples]
From the CLI in win xp:
python -c "import random; print(sorted(set([random.randint(6,49) for i in range(7)]))[:6])"
In Canada we have the 6/49 Lotto. I just wrap the above code in lotto.bat and run C:\home\lotto.bat or just C:\home\lotto.
Because random.randint often repeats a number, I build a set from the range(7) draws and then shorten it to a length of 6.
Occasionally, if more than one collision occurs among the 7 draws, the resulting list length will be less than 6.
EDIT: However, random.sample(range(6,49),6) is the correct way to go.
I would like to remove the coordinates that lie close to each other or if it is just a duplicate.
For example,
x = [[9, 169], [5, 164],[340,210],[1020,102],[210,312],[12,150]]
In the above list, the first and second elements lie close to each other. How do I remove the second element while preserving the first one?
Following is what I have tried,
def process(input_list, thresh=(10, 10)):
    buffer = input_list.copy()
    n = 0
    prev_cx, prev_cy = 0, 0
    for i in range(len(input_list)):
        elem = input_list[i]
        cx, cy = elem
        if n == 0:
            prev_cx, prev_cy = cx, cy
        else:
            ab_cx, ab_cy = abs(prev_cx - cx), abs(prev_cy - cy)
            if ab_cx <= thresh[0] and ab_cy <= thresh[1]:
                del buffer[i]
        n += 1
    return buffer
x = [[9, 169], [5, 164], [340, 210], [1020, 102], [210, 312], [12, 150]]
processed = process(x)
print(processed)
The problem is that it doesn't recursively check if there are any other duplicates since it only checks the adjacent coordinates. What is an efficient way of filtering the coordinate?
Sample Input with thresh = (10,10):
x = [[12,24], [5, 12],[100,1020], [20,30], [121,214], [15,12]]
Sample output:
x = [[12,24],[100,1020], [121,214]]
Your question is a bit vague, but I'm taking it to mean:
You want to compare all combinations of points
If a combination contains points closer than a threshold
Then remove the point further from the start of the input list
Try this:
import itertools

def process(input_list, threshold=(10, 10)):
    combos = itertools.combinations(input_list, 2)
    points_to_remove = [point2
                        for point1, point2 in combos
                        if abs(point1[0] - point2[0]) <= threshold[0] and abs(point1[1] - point2[1]) <= threshold[1]]
    points_to_keep = [point for point in input_list if point not in points_to_remove]
    return points_to_keep

coords = [[12, 24], [5, 12], [100, 1020], [20, 30], [121, 214], [15, 12]]
print(process(coords))
>>> [[12, 24], [5, 12], [100, 1020], [121, 214]]
The way this works is to generate all combinations of points using itertools (which leaves the points in the original order), and then create a list of points to remove using the threshold. Then it returns a list of points not in that list of points to remove.
You'll notice that I end up with one more point than you. I simply copied what seemed to be your intended functionality (i.e. both dy AND dx <= thresh for point removal). However, if I change the line with the AND statement to remove a point if dy OR dx <= thresh, I get the same output as your sample.
So I'm going to ask you to recheck your sample output.
BTW, it might be useful for you to confirm if checking for x and y proximity separately is what you really want. So as a bonus, I've included a version using the Euclidean distance as well:
import itertools
import math

def process(input_list, threshold=100):
    combos = itertools.combinations(input_list, 2)
    points_to_remove = [point2 for point1, point2 in combos if math.dist(point1, point2) <= threshold]
    points_to_keep = [point for point in input_list if point not in points_to_remove]
    return points_to_keep
coords = [[12,24], [5, 12],[100,1020], [20,30], [121,214], [15,12]]
print(process(coords))
>>> [[12, 24], [100, 1020], [121, 214]]
This version fits your original sample when I used a threshold radius of 100.
I'd split this up a bit different. It's also tricky of course because of the way you have to modify the list.
def remove_close_neighbors(input_list, thresh, position):
    target_item = input_list[position]
    return [item for i, item in enumerate(input_list) if i == position or not is_close(target_item, item, thresh)]
This will remove all the "duplicate" (or close) points, other than the item under consideration.
(Then define is_close to check the threshold condition)
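A minimal is_close could look like this (my sketch, following the (dx, dy) threshold convention used in the question):
def is_close(point_a, point_b, thresh):
    # True when both coordinate differences are within the threshold.
    return (abs(point_a[0] - point_b[0]) <= thresh[0]
            and abs(point_a[1] - point_b[1]) <= thresh[1])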
And then we can go over our items:
def process(input_list, thresh):
    pos = 0
    while pos < len(input_list):
        input_list = remove_close_neighbors(input_list, thresh, pos)
        pos += 1
    return input_list  # return the filtered list
This is by no means the most efficient way to achieve this. Depends on how scalable this needs to be for you. If we're talking "a bajillion points", you will need to look into clever data structures and algorithms. I think a tree structure could be good then, to group points "by sector", because then you don't have to compare each point to each other point all the time.
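For what it's worth, if the point count ever gets large, a spatial index is the usual tool; here is a sketch of mine (not part of this answer) using scipy's cKDTree to find all close pairs without comparing every point to every other point:
import numpy as np
from scipy.spatial import cKDTree

def process_kdtree(points, radius=10.0):
    # Find all pairs closer than `radius` (Euclidean distance) and drop the
    # later point of each close pair, keeping points nearer the list start.
    tree = cKDTree(np.asarray(points, dtype=float))
    to_remove = {max(i, j) for i, j in tree.query_pairs(radius)}
    return [p for idx, p in enumerate(points) if idx not in to_remove]

print(process_kdtree([[12, 24], [5, 12], [100, 1020], [20, 30], [121, 214], [15, 12]]))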
I have a list of 150 numbers from 0 to 149. I would like to use a for loop with 150 iterations in order to generate 150 lists of 6 numbers such that, in each iteration k, the number k is included as well as 5 different random numbers. For example:
S0 = [0, r1, r2, r3, r4, r5] # r1, r2,..., r5 are random numbers between 0 and 150
S1 = [1, r1', r2', r3', r4', r5'] # r1', r2',..., r5' are new random numbers between 0 and 150
...
S149 = [149, r1'', r2'', r3'', r4'', r5'']
In addition, the numbers in each list have to be different and with a minimum distance of 5. This is the code I am using:
import random
import numpy as np

final_list = []
for k in range(150):
    S = [k]
    for it in range(5):
        domain = [ele for ele in range(150) if ele not in S]
        d = 0
        x = k
        while d < 5:
            d = np.inf
            x = random.sample(domain, 1)[0]
            for ch in S:
                if np.abs(ch - x) < d:
                    d = np.abs(ch - x)
        S.append(x)
    final_list.append(S)
Output:
[[0, 149, 32, 52, 39, 126],
[1, 63, 16, 50, 141, 79],
[2, 62, 21, 42, 35, 71],
...
[147, 73, 38, 115, 82, 47],
[148, 5, 78, 115, 140, 43],
[149, 36, 3, 15, 99, 23]]
Now, the code is working, but I would like to know whether it is possible to force the number of repetitions each number gets across all the iterations to be approximately the same. For example, after using the previous code, this plot indicates how many times each number has appeared in the generated lists:
As you can see, there are numbers that have appeared more than 10 times while there are others that have appeared only 2 times. Is it possible to reduce this level of variation so that this plot can be approximated as a uniform distribution? Thanks.
First, I am not sure that your assertion that the current results are not uniformly distributed is necessarily correct. It would seem prudent to me to examine the histogram over several repetitions of the process, rather than just one.
I am not a statistician, but when I want to approximate a uniform distribution (and assuming that the functions in random provide a uniform distribution), what I try to do is simply accept all results returned by the random functions. For that, I need to limit the choices given to these functions ahead of calling them. This is how I would go about your task:
import random
import numpy as np

N = 150

def random_subset(n):
    result = []
    cands = set(range(N))
    for i in range(6):
        result.append(n)                    # Initially, n is the number that must appear in the result
        cands -= set(range(n - 4, n + 5))   # Remove candidates less than 5 away
        n = random.choice(list(cands))      # Select next number
    return result

result = np.array([random_subset(n) for n in range(N)])
print(result)
print(result)
Simply put, whenever I add a number n to the result set, I remove from the candidate set a neighbourhood of the proper size around n, to ensure that no number at a distance of less than 5 can be selected in the future.
The code is not optimized (multiple set-to-list conversions) but it works (as per my understanding).
You can force it to be precisely uniform, if you so desire.
Apologies for the mix of globals and locals, this seemed the most readable. You would want to rewrite according to how variable your constants are =)
import random
SIZE = 150
SAMPLES = 5
def get_samples():
    pool = list(range(SIZE)) * SAMPLES
    random.shuffle(pool)
    items = []
    for i in range(SIZE):
        selection, pool = pool[:SAMPLES], pool[SAMPLES:]
        item = [i] + selection
        items.append(item)
    return items
Then you will have exactly 5 of each (and one more in the leading position, which is a weird data structure).
>>> set(collections.Counter(vv for v in get_samples() for vv in v).values())
{6}
The method above does not guarantee that the 5 drawn numbers in each row are unique; in fact, you would expect roughly 10 of the 150 rows to contain a duplicate. If that is important, you need to filter your distribution a little more and decide how much you value tight uniformity, duplicates, etc.
If your numbers are approximately what you gave above, you can also patch up the results (fairly) and hope to avoid long search times (this does not hold for SAMPLES sizes closer to the number of options):
def get_samples():
    pool = list(range(SIZE)) * SAMPLES
    random.shuffle(pool)
    i = 0
    while i < len(pool):
        if i % SAMPLES == 0:
            seen = set()
        v = pool[i]
        if v in seen:  # swap
            dst = random.choice(range(SIZE))
            pool[dst], pool[i] = pool[i], pool[dst]
            i = dst - dst % SAMPLES  # Restart from swapped segment
        else:
            seen.add(v)
            i += 1
    items = []
    for i in range(SIZE):
        selection, pool = pool[:SAMPLES], pool[SAMPLES:]
        assert len(set(selection)) == SAMPLES, selection
        item = [i] + selection
        items.append(item)
    return items
This will typically take less than 5 passes through to clean up any duplicates, and should leave all arrangements satisfying your conditions equally likely.
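A quick sanity check of mine for the output (not part of the answer): every number should appear exactly SAMPLES times among the drawn positions, and no row should contain a duplicate draw.
import collections

items = get_samples()
counts = collections.Counter(v for row in items for v in row[1:])
print(set(counts.values()))                                 # expect {5}: each number drawn SAMPLES times
print(all(len(set(row[1:])) == SAMPLES for row in items))   # expect True: no duplicates among a row's draws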
The brute force search I'm trying to implement should iterate over the range of values, but it seems to be getting stuck.
I'd ideally like to keep using for loops
I've tried reordering the for loops and using [Angle,Max_Camber,Max_Camber_Position,Thickness] directly instead of assigning it.
Iterations = -1
Current_Max = 0
Airfoil = [0, 1, 1, 10]

for Angle in range(Airfoil[0], 90, 15):
    for Max_Camber in range(Airfoil[1], 11, 2):
        for Max_Camber_Position in range(Airfoil[2], 11, 2):
            for Thickness in range(Airfoil[3], 100, 20):
                Iterations += 1
                print("Iterations = ", Iterations)
                # Commenting this out stops the error
                # The loop should have 749 iterations rather than 17
                Airfoil = [Angle, Max_Camber, Max_Camber_Position, Thickness]
                print("Airfoil = ", Airfoil)
When you assign Airfoil inside the loop with
Airfoil = [Angle, Max_Camber, Max_Camber_Position, Thickness]
you actually change the starting positions of the inner loops, because their range() expressions read from Airfoil and are re-evaluated on every pass of the enclosing loops.
In order to get 749 iterations, store the values you print under a different name, for example:
Iterations = -1
Current_Max = 0
Airfoil = [0, 1, 1, 10]

for Angle in range(Airfoil[0], 90, 15):
    for Max_Camber in range(Airfoil[1], 11, 2):
        for Max_Camber_Position in range(Airfoil[2], 11, 2):
            for Thickness in range(Airfoil[3], 100, 20):
                Iterations += 1
                print("Iterations = ", Iterations)
                Airfoil2 = [Angle, Max_Camber, Max_Camber_Position, Thickness]
                print("Airfoil = ", Airfoil2)
this will work
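To see why reassigning the list misbehaves, here is a tiny illustration of mine (not from the answer): the inner range() is re-evaluated from the mutated list on every pass of the outer loop, so the inner loop keeps shrinking.
vals = [0]
for a in range(3):
    for b in range(vals[0], 5):
        vals = [b]   # mutating vals changes where the *next* inner loop starts
    print("outer pass", a, "-> next inner loop starts at", vals[0])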
You should use a different variable name to create your list before printing it. You could also make use of Python's product() function which helps with avoiding deeply nested for loops for doing exactly what you are doing, for example:
from itertools import product

Iterations = -1
Current_Max = 0
Airfoil = [0, 1, 1, 10]

for Angle, Max_Camber, Max_Camber_Position, Thickness in product(
        range(Airfoil[0], 90, 15), range(Airfoil[1], 11, 2),
        range(Airfoil[2], 11, 2), range(Airfoil[3], 100, 20)):
    Iterations += 1
    print("Iterations = ", Iterations)
    Airfoil2 = [Angle, Max_Camber, Max_Camber_Position, Thickness]
    print("Airfoil = ", Airfoil2)
I have the following code :
x = np.array([[1]])
print x
print np.lib.pad(x,(1,1),mode='constant',constant_values=[4,8])
output :
[[1]]
[[4 4 8]
[4 1 8]
[4 8 8]]
The problem is: in the constant values I want to put the new padding, for example:
print np.lib.pad(x,(1,1),mode='constant',constant_values = [1,2,3,4,5,6,7,8])
and output like :
[[1,2,3]
[8,1,4]
[7,6,5]]
This is a reverse engineering of Print two-dimensional array in spiral order:
import numpy as np

inner_array = np.array([3, 6, 7, 2])
outer_array = np.array([0, 23, 3, 5, 6, 8, 99, 73, 18, 42, 67, 88, 91, 12])

total_array = inner_array[::-1][np.newaxis]
i = 0
while True:
    s = total_array.shape[1]
    try:
        total_array = np.vstack([total_array, outer_array[i:i + s]])
    except ValueError:
        break
    try:
        total_array = np.rot90(total_array, -1)
    except ValueError:
        pass
    i += s
total_array = np.rot90(total_array, -1)
print(total_array)
The answer is quite different since it's pretty customized to my problem (Field Of View).
def updatevalues(self, array, elementsCount):
    counter = 0
    R1 = Agent.GetCenterCoords(array.shape)[0]
    C1 = array.shape[1] - 1
    coords = {'R1': R1, 'R2': R1, 'C1': C1, 'C2': C1, 'Phase': 1}
    array[coords['R1'], coords['C1']] = True
    while counter < elementsCount:
        counter += 2
        self.Phases[coords['Phase']](array, coords)

def Phase1(self, array, coords):
    '''
    Phase 1.
    During this phase we start from the max column (C1, C2) and the middle row (R1, R2)
    and keep moving up and down until we reach the
    minimum row (R1) and the maximum row (R2); then we move to phase 2.
    '''
    coords['R1'] -= 1
    coords['R2'] += 1
    array[coords['R1'], coords['C1']] = True
    array[coords['R2'], coords['C2']] = True
    if coords['R1'] == 0 or coords['R2'] == array.shape[0] - 1:
        coords['Phase'] = 2

def Phase2(self, array, coords):
    '''
    Phase 2.
    During this phase we start from the max column (C1, C2) and the min/max rows (R1, R2)
    and keep decreasing C1 and C2 until
    C1, C2 == 0; then we move to phase 3.
    '''
    coords['C1'] -= 1
    coords['C2'] -= 1
    array[coords['R1'], coords['C1']] = True
    array[coords['R2'], coords['C2']] = True
    if coords['C1'] == 0 or coords['C2'] == 0:
        coords['Phase'] = 3

def Phase3(self, array, coords):
    '''
    Phase 3.
    During this phase we start from the minimum columns (C1, C2) and the min/max rows (R1, R2)
    and keep moving R1 and R2 toward the center until R1 == R2; then we break (the whole border is covered).
    '''
    coords['R1'] += 1
    coords['R2'] -= 1
    array[coords['R1'], coords['C1']] = True
    array[coords['R2'], coords['C2']] = True
    if coords['R1'] == coords['R2']:
        coords['Phase'] = 4

@staticmethod
def GetCenterCoords(shape):
    return (shape[0] - 1) // 2, (shape[1] - 1) // 2
The solution depends on how many values we want to change on the border: we start from the max row and middle column, move to the right, and then move in two directions simultaneously.
Sorry for the complex solution, but as I told you @Ophir, it's a pretty customized solution to my problem (when I posted the question I used a very general form to simplify it).
Hope this helps others one day.