Counting up and then down a range in Python

I am trying to program a standard snake draft, where team A picks, then team B, team C, team C, team B, team A, and so on ad nauseam.
If pick number 13 (or pick number x) just happened, how can I figure out which team picks next for n teams?
I have something like:
def slot(n, x):
    direction = 'down' if (int(x/n) & 1) else 'up'
    spot = (x % n) + 1
    slot = spot if direction == 'up' else ((n+1) - spot)
    return slot
I have a feeling there is a simpler, more pythonic way than this solution. Anyone care to take a hack at it?
So I played around a little more. I am looking for the return of a single value, rather than the best way to count over a looped list. The most literal answer might be:
def slot(n, x):  # 0.15757 sec for 100,000x
    number_range = range(1, n+1) + range(n, 0, -1)
    index = x % (n*2)
    return number_range[index]
This creates a list [1,2,3,4,4,3,2,1], figures out the index (e.g. 13 % (4*2) = 5), and then returns the value at that index (e.g. 3). The longer the list, the slower the function.
We can use some logic to cut the list-making in half. If we are counting up (i.e. (int(x/n) & 1) returns False), we take the obvious index value (x % n); otherwise we subtract that value from n-1, since the index is zero-based:
def slot(n, x):  # 0.11982 sec for 100,000x
    number_range = range(1, n+1) + range(n, 0, -1)
    index = ((n-1) - (x % n)) if (int(x/n) & 1) else (x % n)
    return number_range[index]
Still, avoiding the list altogether is fastest:
def slot(n, x):  # 0.07275 sec for 100,000x
    spot = (x % n) + 1
    slot = ((n+1) - spot) if (int(x/n) & 1) else spot
    return slot
And if I hold the list as a variable rather than spawning one on every call:
number_list = [1,2,3,4,5,6,7,8,9,10,11,12,12,11,10,9,8,7,6,5,4,3,2,1]

def slot(n, x):  # 0.03638 sec for 100,000x
    return number_list[x % (n*2)]
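For reference, the same closed-form idea can be written with divmod and no list at all; this is just a restatement of the list-free version above (a minimal sketch, assuming 0-indexed picks as in the functions here):

def slot(n, x):
    # round number and position within the round for 0-indexed pick x
    rnd, pos = divmod(x, n)
    # even rounds count up, odd rounds count down
    return pos + 1 if rnd % 2 == 0 else n - pos

print(slot(4, 13))  # -> 3 for the worked example above (4 teams, pick 13 just happened)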

Why not use itertools' cycle function:
from itertools import cycle
li = range(1, n+1) + range(n, 0, -1) # e.g. [1, 2, 3, 4, 4, 3, 2, 1]
it = cycle(li)
[next(it) for _ in xrange(10)] # [1, 2, 3, 4, 4, 3, 2, 1, 1, 2]
Note: previously I had answered how to run up and then down without repeating the endpoints, as follows:
it = cycle(range(1, n+1) + range(n-1, 1, -1))  # e.g. [1, 2, 3, 4, 3, 2, 1, 2, 3, ...]
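If you are on Python 3, range objects can no longer be concatenated with +; a roughly equivalent sketch chains the two ranges before cycling:

from itertools import chain, cycle, islice

n = 4
it = cycle(chain(range(1, n + 1), range(n, 0, -1)))
print(list(islice(it, 10)))  # [1, 2, 3, 4, 4, 3, 2, 1, 1, 2]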

Here's a generator that will fulfill what you want.
def draft(n):
    while True:
        for i in xrange(1, n+1):
            yield i
        for i in xrange(n, 0, -1):
            yield i
>>> d = draft(3)
>>> [d.next() for _ in xrange(12)]
[1, 2, 3, 3, 2, 1, 1, 2, 3, 3, 2, 1]

from itertools import chain, cycle

def cycle_up_and_down(first, last):
    up = xrange(first, last+1, 1)
    down = xrange(last, first-1, -1)
    return cycle(chain(up, down))

turns = cycle_up_and_down(1, 4)
print [next(turns) for n in xrange(10)]  # [1, 2, 3, 4, 4, 3, 2, 1, 1, 2]

Here is a list of numbers that counts up, then down:
>>> [ -abs(5-i)+5 for i in range(0,10) ]
[0, 1, 2, 3, 4, 5, 4, 3, 2, 1]
Written out:
count_up_to = 5
for i in range(0, count_up_to*2):
    the_number_you_care_about = -abs(count_up_to - i) + count_up_to
    # do stuff with the_number_you_care_about
Easier to read:
>>> list( range(0,5) ) + list( range( 5, 0, -1 ) )
[0, 1, 2, 3, 4, 5, 4, 3, 2, 1]
Written out:
count_up_to = 5
for i in list(range(0, count_up_to)) + list(range(count_up_to, 0, -1)):
    # i is the number you care about
    pass
Another way:
from itertools import chain

for i in chain(range(0, 5), range(5, 0, -1)):
    # i is the number you care about
    pass

Related

Could someone explain sum(x[i-n:i+n+1]) in detail? Where does the slice i-n:i+n+1 start and end?

I just don't want to blindly copy the code.
The function should create and return a new list where each element is the average of the 2n+1 values of x centred on that position (a moving average):
def smooth_a(x, n):
    x = [x[0]]*n + x + [x[-1]]*n  # pad x with n copies of its first and last values
    res = []                      # empty result list
    for i in range(n, len(x)-n):
        res.append(sum(x[i-n:i+n+1]) / (2*n+1))
    return res

x = [1, 2, 6, 4, 5, 0, 1, 2]  # definition of x
print('smooth_a(x, 1): ', smooth_a(x, 1))  # prints the smoothed list with n=1
Let's say x = [1, 2, 6, 4, 5, 0, 1, 2] and n = 1
Then, if you take i = 3 for example:
sum(x[i-n:i+n+1]) = sum(x[2:5])
                  = sum([6, 4, 5])   <- slices x from index 2 to 4, both inclusive
                  = 15
For your specific problem, be careful with the lower slice index: if it is negative, Python counts from the end of the list and you can easily end up with an empty slice, so I would write it as sum(x[max(i-n, 0):i+n+1]).
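To see why that matters, a small demonstration (values chosen for illustration):

x = [1, 2, 6, 4, 5, 0, 1, 2]
i, n = 0, 2
print(x[i - n:i + n + 1])          # x[-2:3] -> [], the negative start counts from the end
print(x[max(i - n, 0):i + n + 1])  # x[0:3]  -> [1, 2, 6]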

how to make sure that two numbers next to each other in a list are different

I have a simple code that generates a list of random numbers.
x = [random.randrange(0,11) for i in range(10)]
The problem I'm having is that, since it's random, it sometimes produces duplicate numbers right next to each other. How do I change the code so that it never happens? I'm looking for something like this:
[1, 7, 2, 8, 7, 2, 8, 2, 6, 5]
So that every time I run the code, all the numbers that are next to each other are different.
import random

x = []
while len(x) < 10:
    r = random.randrange(0, 11)
    if not x or x[-1] != r:
        x.append(r)
x[-1] contains the last inserted element, which we check not to be the same as the new random number. With not x we check that the list is not empty, as it would otherwise raise an IndexError during the first iteration of the loop.
Here's an approach that doesn't rely on retrying:
>>> import random
>>> x = [random.choice(range(12))]
>>> for _ in range(9):
... x.append(random.choice([*range(x[-1]), *range(x[-1]+1, 12)]))
...
>>> x
[6, 2, 5, 8, 1, 8, 0, 4, 6, 0]
The idea is to choose each new number by picking from a list that excludes the previously picked number.
Note that having to re-generate a new list to pick from each time keeps this from actually being an efficiency improvement. If you were generating a very long list from a relatively short range, though, it might be worthwhile to generate different pools of numbers up front so that you could then select from the appropriate one in constant time:
>>> pool = [[*range(i), *range(i+1, 3)] for i in range(3)]
>>> x = [random.choice(random.choice(pool))]
>>> for _ in range(10000):
... x.append(random.choice(pool[x[-1]]))
...
>>> x
[0, 2, 0, 2, 0, 2, 1, 0, 1, 2, 0, 1, 2, 1, 0, ...]
An O(n) solution: add a value drawn randomly from [1, stop) to the last element, modulo stop.
import random
x = [random.randrange(0,11)]
x.extend((x[-1]+random.randrange(1,11)) % 11 for i in range(9))
x
Output
[0, 10, 4, 5, 10, 1, 4, 8, 0, 9]
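The reason no two neighbours can ever match: the step added is drawn from [1, 11), so (last + step) % 11 can never map a value back onto itself. A quick check of that claim:

print(all((last + step) % 11 != last
          for last in range(11) for step in range(1, 11)))  # True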
from random import randrange
from itertools import islice, groupby
# Make an infinite amount of randrange's results available
pool = iter(lambda: randrange(0, 11), None)
# Use groupby to squash consecutive values into one and islice to at most 10 in total
result = [v for v, _ in islice(groupby(pool), 10)]
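For reference, the groupby behaviour being relied on here is that runs of equal consecutive values collapse into a single group, so repeated draws are squashed before islice takes the first 10; a quick illustration:

from itertools import groupby
print([key for key, _ in groupby([1, 1, 2, 2, 2, 3, 1])])  # [1, 2, 3, 1]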
A function solution that doesn't iterate over the whole list to check for repeats; it just checks each addition against the last number in the list:
import random

def get_random_list_without_neighbors(lower_limit, upper_limit, length):
    res = []
    # add the first number
    res.append(random.randrange(lower_limit, upper_limit))
    while len(res) < length:
        x = random.randrange(lower_limit, upper_limit)
        # check that the new number x doesn't match the last number in the list
        if x != res[-1]:
            res.append(x)
    return res
>>> print(get_random_list_without_neighbors(0, 11, 10))
[10, 1, 2, 3, 1, 8, 6, 5, 6, 2]
import random

def random_sequence_without_same_neighbours(n, min, max):
    x = [random.randrange(min, max + 1)]
    uniq_value_count = max - min + 1
    next_choises_count = uniq_value_count - 1
    for i in range(n - 1):
        circular_shift = random.randrange(0, next_choises_count)
        # work relative to min so the wrap-around also holds when min != 0
        x.append(min + ((x[-1] - min) + circular_shift + 1) % uniq_value_count)
    return x

random_sequence_without_same_neighbours(n=10, min=0, max=10)
It's not too pythonic, but you can do something like this:
import random

def random_numbers_generator(n):
    """Generate a list of random numbers without two equal numbers in a row."""
    result = []
    for _ in range(n):
        number = random.randint(1, n)
        if result and number == result[-1]:
            continue  # note: the skipped draw is not retried, so the list may end up shorter than n
        result.append(number)
    return result

print(random_numbers_generator(10))
Result:
[3, 6, 2, 4, 2, 6, 2, 1, 4, 7]

Divide the list into three lists such that their sums are close to each other

Let's say that I have an array of numbers S = [6, 2, 1, 7, 4, 3, 9, 5, 3, 1]. I want to divide it into three arrays. The order of the numbers and the number of items in those arrays does not matter.
Let's say A1, A2, and A3 are the sub arrays. I want to minimize the function
f(x) = ( SUM(A1) - SUM(S) / 3 )^2 / 3 +
( SUM(A2) - SUM(S) / 3 )^2 / 3 +
( SUM(A3) - SUM(S) / 3 )^2 / 3
I don't need an optimal solution; I just need a solution that is good enough.
I don't want an algorithm that is too slow. I can trade some speed for a better result, but I cannot trade too much.
The length of S is around 10 to 30.
Why
Why do I need to solve this problem? I want to arrange boxes into three columns such that the total heights of the columns are not too different from each other.
What have I tried
My first instinct was to use a greedy approach. The result is not that bad, but it does not ensure an optimal solution. Is there a better way?
s = [6, 2, 1, 7, 4, 3, 9, 5, 3, 1]
s = sorted(s, reverse=True)
a = [[], [], []]
sum_a = [0, 0, 0]
for x in s:
    i = sum_a.index(min(sum_a))
    sum_a[i] += x
    a[i].append(x)
print(a)
As you said you don't mind a non-optimal solution, I thought I would re-use your initial function and add a way to find a good starting arrangement for your initial list s.
Your initial function:
def pigeon_hole(s):
    a = [[], [], []]
    sum_a = [0, 0, 0]
    for x in s:
        i = sum_a.index(min(sum_a))
        sum_a[i] += x
        a[i].append(x)
    return map(sum, a)
This is a way to find a sensible initial ordering for your list, it works by creating rotations of your list in sorted and reverse sorted order. The best rotation is found by minimizing the standard deviation, once the list has been pigeon holed:
import random
import numpy as np

def rotate(l):
    l = sorted(l)
    lr = l[::-1]
    rotation = [np.roll(l, i) for i in range(len(l))] + [np.roll(lr, i) for i in range(len(l))]
    blocks = [pigeon_hole(i) for i in rotation]
    return rotation[np.argmin(np.std(blocks, axis=1))]  # the best rotation

# Testing with some random numbers; these are the sums of the three sub-lists
print pigeon_hole(rotate([random.randint(0, 20) for i in range(20)]))
# [64, 63, 63]
Although this could be optimized further, it is quite quick, taking 0.0013s for 20 numbers. Doing a quick comparison with @Mo Tao's answer, using a = rotate(range(1, 30)):
# This method
a = rotate(range(1, 30))
# [[29, 24, 23, 18, 17, 12, 11, 6, 5], [28, 25, 22, 19, 16, 13, 10, 7, 4, 1], [27, 26, 21, 20, 15, 14, 9, 8, 3, 2]]
map(sum, a)
# Sums to [145, 145, 145] in 0.002s

# Mo Tao's method
# [[25, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1], [29, 26, 20, 19, 18, 17, 16], [28, 27, 24, 23, 22, 21]]
# Sums to [145, 145, 145] in 1.095s
This method also seems to find the optimal solution in many cases, although this probably won't hold for all cases. Testing this implementation 500 times using a list of 30 numbers against Mo Tao's answer, and comparing whether the sub-lists sum to the same quantity:
c = 0
for i in range(500):
    r = [random.randint(1, 10) for j in range(30)]
    res = pigeon_hole(rotate(r))
    d, e = sorted(res), sorted(tao(r))  # comparing this to the optimal solution by Mo Tao
    if all(k == kk for k, kk in zip(d, e)):
        c += 1
print c  # -> 500 (they do)
I thought I would provide an update with a more optimized version of my function here:
def rotate2(l):
    # Calculate an acceptable minimum stdev of the pigeon-holed list
    if sum(l) % 3 == 0:
        std = 0
    else:
        std = np.std([0, 0, 1])
    l = sorted(l, reverse=True)
    best_rotation = None
    best_std = 100
    for i in range(len(l)):
        rotation = np.roll(l, i)
        sd = np.std(pigeon_hole(rotation))
        if sd == std:
            return rotation  # a minimal-stdev rotation was found
        elif sd < best_std:
            best_std = sd
            best_rotation = rotation
    return best_rotation
The main change is that the search for a good ordering stops once a suitable rotation has been found. Also, only the reverse-sorted list is searched, which doesn't appear to alter the result. Timing this with
print timeit.timeit("rotate2([random.randint(1, 10) for i in range(30)])", "from __main__ import rotate2, random", number=1000) / 1000.
results in a large speed-up. On my current computer rotate takes about 1.84ms and rotate2 takes about 0.13ms, so about a 14x speed-up. For comparison, גלעד ברקן's implementation took about 0.99ms on my machine.
As I mentioned in the comment on the question, this is the straightforward dynamic programming method. It takes less than 1 second for s = range(1, 30) and gives the optimal solution.
I think the code is self-explanatory if you know memoization.
s = range(1, 30)
# s = [6, 2, 1, 7, 4, 3, 9, 5, 3, 1]
n = len(s)

memory = {}
best_f = pow(sum(s), 3)
best_state = None

def search(state, pre_state):
    global memory, best_f, best_state
    s1, s2, s3, i = state
    f = s1 * s1 + s2 * s2 + s3 * s3
    if state in memory or f >= best_f:
        return
    memory[state] = pre_state
    if i == n:
        best_f = f
        best_state = state
    else:
        search((s1 + s[i], s2, s3, i + 1), state)
        search((s1, s2 + s[i], s3, i + 1), state)
        search((s1, s2, s3 + s[i], i + 1), state)

search((0, 0, 0, 0), None)

a = [[], [], []]
state = best_state
while state[3] > 0:
    pre_state = memory[state]
    for j in range(3):
        if state[j] != pre_state[j]:
            a[j].append(s[pre_state[3]])
    state = pre_state

print a
print best_f, best_state, map(sum, a)
We can explore the stability of the solution you found with respect to moving and swapping elements between the found lists. My code is below. If a change improves the target function, we keep the new lists and keep going, hoping to improve the function again with another change. As the starting point we can take your solution. The final result will be something like a local minimum.
from copy import deepcopy

s = [6, 2, 1, 7, 4, 3, 9, 5, 3, 1]
s = sorted(s, reverse=True)
a = [[], [], []]
sum_a = [0, 0, 0]
for x in s:
    i = sum_a.index(min(sum_a))
    sum_a[i] += x
    a[i].append(x)

def f(a):
    return ((sum(a[0]) - sum(s) / 3.0)**2 + (sum(a[1]) - sum(s) / 3.0)**2 + (sum(a[2]) - sum(s) / 3.0)**2) / 3

fa = f(a)
while True:
    modified = False

    # placing
    for i_from, i_to in [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]:
        for j in range(len(a[i_from])):
            a_new = deepcopy(a)
            a_new[i_to].append(a_new[i_from][j])
            del a_new[i_from][j]
            fa_new = f(a_new)
            if fa_new < fa:
                a = a_new
                fa = fa_new
                modified = True
                break
        if modified:
            break

    # replacing
    for i_from, i_to in [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]:
        for j_from in range(len(a[i_from])):
            for j_to in range(len(a[i_to])):
                a_new = deepcopy(a)
                a_new[i_to].append(a_new[i_from][j_from])
                a_new[i_from].append(a_new[i_to][j_to])
                del a_new[i_from][j_from]
                del a_new[i_to][j_to]
                fa_new = f(a_new)
                if fa_new < fa:
                    a = a_new
                    fa = fa_new
                    modified = True
                    break
            if modified:
                break
        if modified:
            break

    if not modified:
        break

print(a, f(a))  # [[9, 3, 1, 1], [7, 4, 3], [6, 5, 2]] 0.2222222222222222
It's interesting that this approach works well even if we start with arbitrary a:
from copy import deepcopy

s = [6, 2, 1, 7, 4, 3, 9, 5, 3, 1]

def f(a):
    return ((sum(a[0]) - sum(s) / 3.0)**2 + (sum(a[1]) - sum(s) / 3.0)**2 + (sum(a[2]) - sum(s) / 3.0)**2) / 3

a = [s, [], []]
fa = f(a)
while True:
    modified = False

    # placing
    for i_from, i_to in [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]:
        for j in range(len(a[i_from])):
            a_new = deepcopy(a)
            a_new[i_to].append(a_new[i_from][j])
            del a_new[i_from][j]
            fa_new = f(a_new)
            if fa_new < fa:
                a = a_new
                fa = fa_new
                modified = True
                break
        if modified:
            break

    # replacing
    for i_from, i_to in [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]:
        for j_from in range(len(a[i_from])):
            for j_to in range(len(a[i_to])):
                a_new = deepcopy(a)
                a_new[i_to].append(a_new[i_from][j_from])
                a_new[i_from].append(a_new[i_to][j_to])
                del a_new[i_from][j_from]
                del a_new[i_to][j_to]
                fa_new = f(a_new)
                if fa_new < fa:
                    a = a_new
                    fa = fa_new
                    modified = True
                    break
            if modified:
                break
        if modified:
            break

    if not modified:
        break

print(a, f(a))  # [[3, 9, 2], [6, 7], [4, 3, 1, 1, 5]] 0.2222222222222222
It provides a different result but the same value of the function.
I would have to say that your greedy function does produce good results, but it tends to become very slow if the input size is large, say more than 100.
But you've said that your input size is fixed in the range 10 to 30, so the greedy solution is actually quite good. Instead of being greedy right from the beginning, I propose being a bit lazy at first and greedy at the end.
Here is an altered function, lazy:
def lazy(s):
    k = (len(s)//3 - 2) * 3  # slice limit
    s.sort(reverse=True)
    # Perform limited extended slicing
    a = [s[1:k:3], s[2:k:3], s[:k:3]]
    sum_a = list(map(sum, a))
    for x in s[k:]:
        i = sum_a.index(min(sum_a))
        sum_a[i] += x
        a[i].append(x)
    return a
What it does is first sort the input in descending order and deal the items into the three sub-lists one by one until about 6 items are left. (You can change this limit and test, but for sizes 10-30 I think this is the best.)
When that is done, it simply continues with the greedy approach. This method takes much less time and is more accurate than the greedy solution on average.
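A quick usage sketch with the list from the question (my own trace, not part of the original answer); note that lazy sorts its argument in place, so pass a copy if you still need the original order:

s = [6, 2, 1, 7, 4, 3, 9, 5, 3, 1]
parts = lazy(list(s))
print(parts, list(map(sum, parts)))  # the sums come out as [14, 14, 13] if I traced it correctly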
Here is a line plot of size versus time, and one of size versus accuracy (the Plotly HTML files are linked below).
Accuracy here is the standard deviation of the final sub-list sums around one third of the original list's total, because you want the columns to stack up to almost the same height (not the height of the mean of the original list).
Also, the item values range between 3 and 15 so that the total is around 100-150, as you mentioned.
These are the test functions -
import math
import time
import plotly
import plotly.graph_objs as go
# r (a random-integer helper), lazy and pigeon (the plain greedy function) are defined in
# the complete file referenced below

def test_accuracy():
    rsd = lambda s: round(math.sqrt(sum([(sum(s)//3 - y)**2 for y in s]) / 3), 4)
    sm = lambda s: list(map(sum, s))
    N = [i for i in range(10, 30)]
    ST = []
    MT = []
    for n in N:
        case = [r(3, 15) for x in range(n)]
        ST.append(rsd(sm(lazy(case))))
        MT.append(rsd(sm(pigeon(case))))
    strace = go.Scatter(x=N, y=ST, name='Lazy pigeon')
    mtrace = go.Scatter(x=N, y=MT, name='Pigeon')
    data = [strace, mtrace]
    layout = go.Layout(
        title='Uniform distribution in 3 sublists',
        xaxis=dict(title='List size'),
        yaxis=dict(title='Accuracy - Standard deviation'))
    fig = go.Figure(data=data, layout=layout)
    plotly.offline.plot(fig, filename='N vs A2.html')

def test_timings():
    N = [i for i in range(10, 30)]
    ST = []
    MT = []
    for n in N:
        case = [r(3, 15) for x in range(n)]
        start = time.clock()
        lazy(case)
        ST.append(time.clock() - start)
        start = time.clock()
        pigeon(case)
        MT.append(time.clock() - start)
    strace = go.Scatter(x=N, y=ST, name='Lazy pigeon')
    mtrace = go.Scatter(x=N, y=MT, name='Pigeon')
    data = [strace, mtrace]
    layout = go.Layout(
        title='Uniform distribution in 3 sublists',
        xaxis=dict(title='List size'),
        yaxis=dict(title='Time (seconds)'))
    fig = go.Figure(data=data, layout=layout)
    plotly.offline.plot(fig, filename='N vs T2.html')
Here is the complete file.
Edit -
I tested kezzos's answer for accuracy and it performed really well. The deviation stayed below 0.8 the whole time.
Average standard deviation in 100 runs.
Lazy Pigeon Pigeon Rotation
1.10668795 1.1573573 0.54776425
In terms of speed, the rotation function is orders of magnitude slower than the other two, but around 10^-3 seconds is fine unless you need to run it repeatedly.
Lazy Pigeon Pigeon Rotation
5.384013e-05 5.930269e-05 0.004980
Here is a bar chart comparing the accuracy of all three functions.
All in all, kezzos's solution is the best if you are fine with its speed.
Plotly HTML files: versus time, versus accuracy, and the bar chart.
Here's my nutty implementation of Korf's [1] Sequential Number Partitioning (SNP), but it only uses Karmarkar–Karp rather than Complete Karmarkar–Karp for the two-way partition (I've included an unused, somewhat unsatisfying version of CKK - perhaps someone has a suggestion to make it more efficient?).
On the first subset, it places lower and upper bounds. See the referenced article. I'm sure more efficient implementations can be made. Edit MAX_ITERATIONS for better results versus longer wait :)
By the way, the function, KK3 (extension of Karmarkar–Karp to three-way partition, used to compute the first lower bound), seems pretty good by itself.
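As a standalone illustration of the differencing idea that KK and KK3 build on (my own sketch, not part of the code below), plain two-way Karmarkar–Karp repeatedly replaces the two largest numbers with their difference, which amounts to committing them to different subsets; the last remaining value is the subset-sum difference the heuristic settles on:

import heapq

def kk_two_way(nums):
    heap = [-x for x in nums]  # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))  # place a and b in different subsets
    return -heap[0]

print(kk_two_way([6, 2, 1, 7, 4, 3, 9, 5, 3, 1]))  # -> 1 for the question's list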
from random import randint
from collections import Counter
from bisect import insort
from time import time

def KK3(s):
    s = list(map(lambda x: (x, 0, 0, [], [], [x]), sorted(s)))
    while len(s) > 1:
        large = s.pop()
        small = s.pop()
        combined = sorted([large[0] + small[2], large[1] + small[1],
                           large[2] + small[0]], reverse=True)
        combined = list(map(lambda x: x - combined[2], combined))
        combined = combined + sorted((large[3] + small[5], large[4] + small[4],
                                      large[5] + small[3]), key=sum)
        insort(s, tuple(combined))
    return s

#s = [6, 2, 1, 7, 4, 3, 9, 5, 3, 1]
s = [randint(0, 100) for r in range(0, 30)]

# global variables
s = sorted(s, reverse=True)
sum_s = sum(s)
upper_bound = sum_s // 3
lower_bound = sum(KK3(s)[0][3])
best = (sum_s, ([], [], []))
iterations = 0
MAX_ITERATIONS = 10000

def partition(i, accum):
    global lower_bound, best, iterations
    sum_accum = sum(accum)
    if sum_accum > upper_bound or iterations > MAX_ITERATIONS:
        return
    iterations = iterations + 1
    if sum_accum >= lower_bound:
        rest = KK(diff(s, accum))[0]
        new_diff = sum(rest[1]) - sum_accum
        if new_diff < best[0]:
            best = (new_diff, (accum, rest[1], rest[2]))
            lower_bound = (sum_s - 2 * new_diff) // 3
            print("lower_bound: " + str(lower_bound))
    if not best[0] in [0, 1] and i < len(s) - 1 and sum(accum) + sum(s[i + 1:]) > lower_bound:
        _accum = accum[:]
        partition(i + 1, _accum + [s[i]])
        partition(i + 1, accum)

def diff(l1, l2):
    return list((Counter(l1) - Counter(l2)).elements())

def KK(s):
    s = list(map(lambda x: (x, [x], []), sorted(s)))
    while len(s) > 1:
        large = s.pop()
        small = s.pop()
        insort(s, (large[0] - small[0], large[1] + small[2], large[2] + small[1]))
    return s

print(s)
start_time = time()
partition(0, [])
print(best)
print("iterations: " + str(iterations))
print("--- %s seconds ---" % (time() - start_time))
[1] Richard E. Korf, Multi-Way Number Partitioning, Computer Science Department, University of California, Los Angeles; aaai.org/ocs/index.php/IJCAI/IJCAI-09/paper/viewFile/625/705

Check if values in list exceed threshold a certain amount of times and return index of first exceedance

I am searching for a clean and pythonic way of checking whether the contents of a list exceed a given number (first threshold) at least a certain number of times (second threshold). If both conditions are true, I want to return the index of the first value which exceeds the first threshold.
Example:
# Set first and second threshold
thr1 = 4
thr2 = 5
# Example 1: Both thresholds exceeded, looking for index (3)
list1 = [1, 1, 1, 5, 1, 6, 7, 3, 6, 8]
# Example 2: Only threshold 1 is exceeded, no index return needed
list2 = [1, 1, 6, 1, 1, 1, 2, 1, 1, 1]
I don't know if it's considered pythonic to abuse the fact that booleans are ints, but I like doing it like this:
def check(l, thr1, thr2):
    c = [n > thr1 for n in l]
    if sum(c) >= thr2:
        return c.index(1)
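With the example data from the question this behaves as expected (it returns None implicitly when the second threshold is not reached):

thr1, thr2 = 4, 5
list1 = [1, 1, 1, 5, 1, 6, 7, 3, 6, 8]
list2 = [1, 1, 6, 1, 1, 1, 2, 1, 1, 1]
print(check(list1, thr1, thr2))  # -> 3
print(check(list2, thr1, thr2))  # -> None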
Try this:
def check_list(testlist):
    overages = [x for x in testlist if x > thr1]
    if len(overages) >= thr2:
        return testlist.index(overages[0])
    # This return is not needed. Removing it will not change
    # the outcome of the function.
    return None
This uses the fact that you can use if statements in list comprehensions to ignore non-important values.
As mentioned by Chris_Rands in the comments, the return None is unnecessary. Removing it will not change the result of the function.
If you are looking for a one-liner (or almost)
a = filter(lambda z: z is not None, map(lambda (i, elem): i if elem >= thr1 else None, enumerate(list1)))
print a[0] if len(a) >= thr2 else False
A naive and straightforward approach would be to iterate over the list counting the number of items greater than the first threshold and returning the index of the first match if the count exceeds the second threshold:
def answer(l, thr1, thr2):
    count = 0
    first_index = None
    for index, item in enumerate(l):
        if item > thr1:
            count += 1
            if first_index is None:  # "not first_index" would misfire when the first match is at index 0
                first_index = index
            if count >= thr2:  # TODO: check if ">" is required instead
                return first_index
thr1 = 4
thr2 = 5
list1 = [1, 1, 1, 5, 1, 6, 7, 3, 6, 8]
list2 = [1, 1, 6, 1, 1, 1, 2, 1, 1, 1]
print(answer(list1, thr1, thr2)) # prints 3
print(answer(list2, thr1, thr2)) # prints None
This is probably not quite pythonic, but this solution has a couple of advantages: we keep only the index of the first match and we exit the loop early once we hit the second threshold.
In other words, we have O(k) in the best case and O(n) in the worst case, where k is the number of items before reaching the second threshold; n is the total number of items in the input list.
I don't know if I'd call it clean or pythonic, but this should work
def get_index(list1, thr1, thr2):
    cnt = 0
    first_element = 0
    for i in list1:
        if i > thr1:
            cnt += 1
            if first_element == 0:
                first_element = i
    if cnt > thr2:
        return list1.index(first_element)
    else:
        return "criteria not met"
thr1 = 4
thr2 = 5
list1 = [1, 1, 1, 5, 1, 6, 7, 3, 6, 8]
list2 = [1, 1, 6, 1, 1, 1, 2, 1, 1, 1]
def func(lst):
    res = [i for i, j in enumerate(lst) if j > thr1]
    return len(res) >= thr2 and res[0]
Output:
func(list1)
3
func(list2)
False

Python - iterating beginning with the middle of the list and then checking either side

Really not sure where this fits. Say, I have a list:
>>>a = [1, 2, 3, 4, 5, 6, 7]
How can I iterate it in such a way that it will check 4 first, then 5, then 3, then 6, and then 2 (and so on for bigger lists)? I have only been able to work out the middle, which is
>>> middle = [len(a)/2 if len(a) % 2 == 0 else ((len(a)+1)/2)]
I'm really not sure how to apply this, nor am I sure that my way of working out the middle is the best way. I've thought of grabbing two indexes and after each iteration, adding 1 and subtracting 1 from each respective index but have no idea how to make a for loop abide by these rules.
With regards as to why I need this; it's for analysing a valid play in a card game and will check from the middle card of a given hand up to each end until a valid card can be played.
You can just keep removing from the middle of list:
lst = range(1, 8)
while lst:
    print lst.pop(len(lst)/2)
This is not the best solution performance-wise (removing an item from a list is expensive), but it is simple - good enough for a simple game.
EDIT:
More performance stable solution would be a generator, that calculates element position:
def iter_from_middle(lst):
    try:
        middle = len(lst)/2
        yield lst[middle]
        for shift in range(1, middle+1):
            # order is important!
            yield lst[middle - shift]
            yield lst[middle + shift]
    except IndexError:  # occurs on lst[len(lst)] or for an empty list
        raise StopIteration
To begin with, here is a very useful general purpose utility to interleave two sequences:
import itertools

def imerge(a, b):
    for i, j in itertools.izip_longest(a, b):
        yield i
        if j is not None:
            yield j
With that, you just need to imerge a[len(a) / 2:] with reversed(a[:len(a) / 2]).
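A quick check of that recipe on the question's list (my own usage sketch, Python 2 since izip_longest is used); note that this particular interleaving visits the left neighbour before the right one at each step:

a = [1, 2, 3, 4, 5, 6, 7]
print(list(imerge(a[len(a) // 2:], reversed(a[:len(a) // 2]))))
# [4, 3, 5, 2, 6, 1, 7]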
You could also play index games, for example:
>>> a = [1, 2, 3, 4, 5, 6, 7]
>>> [a[(len(a) + (~i, i)[i%2]) // 2] for i in range(len(a))]
[4, 5, 3, 6, 2, 7, 1]
>>> a = [1, 2, 3, 4, 5, 6, 7, 8]
>>> [a[(len(a) + (~i, i)[i%2]) // 2] for i in range(len(a))]
[4, 5, 3, 6, 2, 7, 1, 8]
Here's a generator that yields alternating indexes for any given provided length. It could probably be improved/shorter, but it works.
def backNforth(length):
    if length == 0:
        return
    else:
        middle = length//2
        yield middle
        for ind in range(1, middle + 1):
            if length > (2 * ind - 1):
                yield middle - ind
            if length > (2 * ind):
                yield middle + ind

# for testing:
if __name__ == '__main__':
    r = range(9)
    for _ in backNforth(len(r)):
        print(r[_])
Using that, you can just do this to produce a list of items in the order you want:
a = [1, 2, 3, 4, 5, 6, 7]
a_prime = [a[_] for _ in backNforth(len(a))]
In addition to the middle elements, I needed their index as well. I found Wasowski's answer very helpful, and modified it:
def iter_from_middle(lst):
    index = len(lst)//2
    for i in range(len(lst)):
        index = index + i*(-1)**i
        yield index, lst[index]
>>> my_list = [10, 11, 12, 13, 14, 15]
>>> [(index, item) for index, item in iter_from_middle(my_list)]
[(3, 13), (2, 12), (4, 14), (1, 11), (5, 15), (0, 10)]
