I am writing a program that has to work through roughly 1000 candidates and find the best score. I need to use multiprocessing to work through the list because this will be done roughly 60000 times. How would I use multiprocessing in this situation? Say the score is calculated like this:
def get_score(a, b):
    return (a * b) / (a + b)
I know a in every case, but it changes every time I go through the list of candidates, because the best candidate gets appended to the list. I want it to iterate through a list of candidates and then find the best score. A non-multiprocessing example would look like this:
s = [random.randint(0, 100)]
candidates = [random.randint(0, 100) for i in range(1000)]
for i in range(60000):
    best_score = 0
    best_candidate = candidates[0]
    for j in candidates:
        if get_score(s[-1], j) > best_score:
            best_candidate = j
            best_score = get_score(s[-1], j)
    s.append(best_candidate)
I know that I could create a function, but I feel like there is an easier way to do this. Sorry for the beginner question. :/
Your code has some inconsistencies, like not updating best_score and still comparing against a 0-valued best score.
Your nested-loop design makes it hard to parallelize the solution, and you also didn't provide some details, like whether order matters.
I'm giving a dummy multiprocessing-based solution, which splits the 60000-iteration loop across n CPUs in parallel and writes the partial results to numpy arrays. However, it's up to you how you'll merge the results.
import random
import numpy as np
import multiprocessing as mp

s = [random.randint(0, 100)]
candidates = [random.randint(0, 100) for i in range(1000)]
n_cpu = mp.cpu_count()

def get_score(a, b):
    return (a * b) / (a + b)

def partial_gen(num_segment):
    part_arr = []
    for i in range(60000 // n_cpu):  # breaking the loop into n_cpu segments
        best_score = 0
        best_candidate = candidates[0]
        for j in candidates:
            new_score = get_score(s[-1], j)
            if new_score > best_score:
                best_candidate = j
                best_score = new_score  # are you sure? you don't wanna update this?
        part_arr.append(best_candidate)
    part_arr = np.array(part_arr)
    np.save(f'{num_segment}.npy', part_arr)

p = mp.Pool(n_cpu)
p.map(partial_gen, range(n_cpu))
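If you later want to merge the per-process results, a minimal sketch (assuming the n_cpu segment files written by partial_gen above) could be:

import numpy as np

# Load each segment saved by partial_gen and stitch them back into one array.
merged = np.concatenate([np.load(f'{k}.npy') for k in range(n_cpu)])

Note that in this dummy version every worker keeps scoring against the same initial s[-1] (nothing is appended to s inside partial_gen), so whether this decomposition is acceptable depends on whether order matters in your real problem.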
One easy way to speed things up would be to use vectorization (as a first optimization step, rather than multiprocessing). You can achieve this by using numpy ndarrays.
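For example, here is a rough sketch of a vectorized inner loop (reusing get_score's formula and the candidates list from the question; the names below are just illustrative):

import numpy as np

candidates_arr = np.array(candidates, dtype=float)

def best_candidate_vectorized(a):
    # Score every candidate at once: (a*b)/(a+b) for each candidate b.
    scores = (a * candidates_arr) / (a + candidates_arr)
    return candidates_arr[np.argmax(scores)]

# inside the 60000-iteration loop:
# s.append(best_candidate_vectorized(s[-1]))

Each outer iteration then does a single vectorized pass over the 1000 candidates instead of a Python-level loop, which for work this small is usually a bigger win than multiprocessing.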
I am trying to solve this math problem in python, and I'm not sure what it is called:
The answer X is always 100
Given a list of 5 integers, their sum would equal X
Each integer has to be between 1 and 25
The integers can appear one or more times in the list
I want to find all the possible unique lists of 5 integers that match.
These would match:
20,20,20,20,20
25,25,25,20,5
10,25,19,21,25
along with many more.
I looked at itertools.permutations, but I don't think that handles duplicate integers in the list. I'm thinking there must be a standard math algorithm for this, but my search queries must be poor.
The only other thing to mention is that the list size could change from 5 integers to some other length (6, 24, etc.), if that matters.
This is a constraint satisfaction problem. These can often be solved recursively: you fix one part of the solution and then solve the remaining subproblem. In Python, we can implement this approach with a recursive function:
def csp_solutions(target_sum, n, i_min=1, i_max=25):
    domain = range(i_min, i_max + 1)
    if n == 1:
        if target_sum in domain:
            return [[target_sum]]
        else:
            return []
    solutions = []
    for i in domain:
        # Check if a solution is still possible when i is picked:
        if (n - 1) * i_min <= target_sum - i <= (n - 1) * i_max:
            # Construct solutions recursively:
            solutions.extend([[i] + sol
                              for sol in csp_solutions(target_sum - i, n - 1)])
    return solutions
all_solutions = csp_solutions(100, 5)
This yields 23746 solutions, in agreement with the answer by Alex Reynolds.
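As a quick illustration of the output format (and of the fact that ordered arrangements are counted separately), a smaller target is easy to inspect by hand:

print(csp_solutions(5, 2))
# [[1, 4], [2, 3], [3, 2], [4, 1]]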
Another approach with Numpy:
#!/usr/bin/env python
import numpy as np
start = 1
end = 25
entries = 5
total = 100
a = np.arange(start, end + 1)
c = np.array(np.meshgrid(a, a, a, a, a)).T.reshape(-1, entries)
assert(len(c) == pow(end, entries))
s = c.sum(axis=1)
#
# filter all combinations for those that meet sum criterion
#
valid_combinations = c[np.where(s == total)]
print(len(valid_combinations)) # 23746
#
# filter those combinations for unique permutations
#
unique_permutations = set(tuple(sorted(x)) for x in valid_combinations)
print(len(unique_permutations)) # 376
You want combinations_with_replacement from the itertools library. Here is what the code would look like:
from itertools import combinations_with_replacement

values = [i for i in range(1, 26)]
candidates = []
for tuple5 in combinations_with_replacement(values, 5):
    if sum(tuple5) == 100:
        candidates.append(tuple5)
For me, on this problem, I get 376 candidates. As mentioned in the comments above, if these are counted once for each arrangement of the 5-tuple, then you'd want to look at all permutations of each candidate, which may not all be distinct. For example, (20,20,20,20,20) is the same regardless of how you arrange the indices. However, (21,20,20,20,19) is not; this one has some distinct arrangements.
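If you do want the ordered count as well, one way (a quick sketch reusing the candidates list built above) is to count the distinct permutations of each combination:

from itertools import permutations

total_ordered = sum(len(set(permutations(t))) for t in candidates)
print(total_ordered)  # should agree with the ordered-solution count (23746) from the other answers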
I think this could be what you are searching for: given a target number SUM, a lower threshold L, an upper threshold R, and a size K, find all the possible lists of K elements between L and R whose sum is SUM. There isn't a specific name for this problem though, as far as I was able to find.
I have looked on several websites, in books, and in the documentation, and I can't figure out what I am doing wrong. I try to ask for help as a last resort so that I can learn on my own, but I have spent far too long trying to figure this out, and I am sure it is something really simple that I am doing wrong, but I am learning. The code produces a single different result every time it is run, followed by this error:
26.8
Traceback (most recent call last):
File "main.py", line 7, in
tot = sum(rand)/len(rand)
TypeError: 'float' object is not iterable
import random

for x in range(10000):
    rand = random.uniform(10, 100)
    print(round(rand, 1))

    tot = sum(rand)/len(rand)
    print(round(tot, 1))
You're not actually generating a list, you're generating individual values.
Do you really want to print out 10000 values along the way to your final result?
If the answer is "no!", then your code can be reduced to:
import random
N = 10000
print(round(sum(random.uniform(10, 100) for _ in range(N)) / N, 1))
or, if you prefer to break it out a little bit more for readability:
import random
N = 10000
total = sum(random.uniform(10, 100) for _ in range(N))
average = total / N
print(round(average, 1))
If this is beyond the scope of what you've learned, you can create total outside the loop initialized to zero, update it with each new value as you iterate through the loop, and then calculate the final answer:
import random
N = 10000
total = 0.0
for _ in range(N):  # use '_' instead of x, since x was unused in your prog
    total += random.uniform(10, 100)
average = total / N
print(round(average, 1))
This avoids wasting storage for a list of 10000 values and avoids the append() you're not yet familiar with. Of course, if you need the 10000 values later for some other purpose, you'll need to tuck them away in a list:
import random
N = 10000
l = [random.uniform(10, 100) for _ in range(N)]
total = sum(l)
print(round(total / N, 1))
Addendum
Just for jollies, you can also do this recursively:
import random

def sum_of_rands(n):
    if n > 1:
        half_n = n // 2
        return sum_of_rands(half_n) + sum_of_rands(n - half_n)
    elif n == 1:
        return random.uniform(10, 100)

N = 10000
print(round(sum_of_rands(N) / N, 1))
print(sum_of_rands(0)) # returns None because nothing is being summed
Splitting the problem in half (on average) in each recursive call keeps the stack to O(log N).
I'd actually advise you to stick with list comprehension or looping, but wanted to show you there are lots of different ways to get to the same result.
To the sum function you must pass an iterable object, but you're passing a float object.
To avoid this error you should put the last two lines outside the for loop and append rand to a list. I don't know if that's what you want to do, but it shows you how to use sum:
import random

l = []
for x in range(10000):
    rand = random.uniform(10, 100)
    l.append(rand)
    print(round(rand, 1))

tot = sum(l)/len(l)
print(round(tot, 1))
Write a program to simulate tossing a fair coin 100 times and count the number of heads. Repeat this simulation 10**5 times to obtain a distribution of the head count.
I wrote the code below to count the number of heads in 100 tosses, and the outer loop should repeat this 100K times to obtain the distribution of head counts:
import random

def coinToss():
    return random.randint(0, 1)

recordList = []
for j in range(10**5):
    for i in range(100):
        flip = coinToss()
        if (flip == 0):
            recordList.append(0)
    print(str(recordList.count(0)))
but each time I run my program, instead of getting a list of 100K head counts, I get numbers that just keep getting higher. Can anyone tell me what I am doing wrong?
42
89
136
....
392
442
491
Here's a version with numpy that lets you produce random numbers more elegantly, since you can also specify a size argument.
import numpy as np

n_sim = 10
n_flip = 100
sims = np.empty(n_sim)
for j in range(n_sim):
    flips = np.random.randint(0, 2, n_flip)
    sims[j] = np.sum(flips)
Since the original problem asks for a distribution of head counts, you need to keep track of two lists: one for the number of heads per 100-toss trial, and one for the number of heads in the current 100-toss trial.
import random

def coinToss():
    return random.randint(0, 1)

experiments = []  # Number of heads per 100-toss experiment
for j in range(10**5):
    cnt = []  # Number of heads in current 100-toss experiment
    for i in range(100):
        flip = coinToss()
        if (flip == 0):
            cnt.append(0)
    experiments.append(cnt.count(0))
    print(str(cnt.count(0)))
However, I would strongly suggest doing this in something like numpy, which will greatly improve performance. You can do this in one line with numpy:
import numpy as np
experiments = np.random.binomial(n=100, p=0.5, size=10**5)
You can then analyze/plot the distribution of head counts with whatever tools you want (e.g. numpy, matplotlib).
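For instance, a quick histogram sketch (assuming matplotlib is available) could look like:

import matplotlib.pyplot as plt

# experiments is the array of head counts produced by np.random.binomial above
plt.hist(experiments, bins=50)  # head counts cluster around 50
plt.xlabel('Heads per 100 tosses')
plt.ylabel('Frequency')
plt.show()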
You might notice that your number of heads is ~50 more each time. This is because you don't reset the record counter to [] each time you loop. If you add "recordList = []" straight after your print statement and with the same indentation, it will basically fix your code.
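In other words, your posted code with just that one reset added (a sketch keeping your variable names) would look roughly like this:

import random

def coinToss():
    return random.randint(0, 1)

recordList = []
for j in range(10**5):
    for i in range(100):
        flip = coinToss()
        if (flip == 0):
            recordList.append(0)
    print(str(recordList.count(0)))
    recordList = []  # reset the record for the next 100-toss experiment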
Another nifty way to do this would be to wrap the 100 coin flips experiment in a function and then call the function 10**5 times. You could also use list comprehension to make everything nice and concise:
import random

def hundred_flips():
    result = sum([random.randint(0, 1) for i in range(100)])
    return result

all_results = [hundred_flips() for i in range(10**5)]
You can simulate a matrix with all your coin flips and then do your calculations on the matrix.
from numpy import mean, std
from numpy.random import rand

N_flip = int(1e2)    # 100 tosses per trial
N_trials = int(1e5)  # 10**5 trials
coin_flips = rand(N_flip, N_trials) > 0.5
p = mean(coin_flips, axis=0)  # Vector of length N_trials with estimated head probabilities
print('Mean: %3.2f%%, Std: %3.2f%%' % (mean(p)*100, std(p)*100))
I am just getting started with competitive programming, and after writing my solution to a certain problem I got a "runtime exceeded" error. The quantity to maximize is:
max(|a[i] - a[j]| + |i - j|)
where a is a list of elements and i, j are indices. I need to get the max() of the above expression.
Here is a short but complete code snippet.
t = int(input())  # Number of test cases
for i in range(t):
    n = int(input())  # size of list
    a = list(map(int, str(input()).split()))  # getting space separated input
    res = []
    for s in range(n):  # These two loops are increasing the run-time
        for d in range(n):
            res.append(abs(a[s] - a[d]) + abs(s - d))
    print(max(res))
Input file: this link may expire (hope it works).
1<=t<=100
1<=n<=10^5
0<=a[i]<=10^5
The run-time on the leaderboard for C is 5 sec and for Python is 35 sec, while this code takes 80 sec.
It is an online judge, so it is machine-independent. numpy is not available.
Please keep it simple i am new to python.
Thanks for reading.
For a given j<=i, |a[i]-a[j]|+|i-j| = max(a[i]-a[j]+i-j, a[j]-a[i]+i-j).
Thus for a given i, the value of j<=i that maximizes |a[i]-a[j]|+|i-j| is either the j that maximizes a[j]-j or the j that minimizes a[j]+j.
Both these values can be computed as you run along the array, giving a simple O(n) algorithm:
def maxdiff(xs):
    mp = mn = xs[0]
    best = 0
    for i, x in enumerate(xs):
        mp = max(mp, x - i)
        mn = min(mn, x + i)
        best = max(best, x + i - mn, -x + i + mp)
    return best
And here's some simple testing against a naive but obviously correct algorithm:
def maxdiff_naive(xs):
    best = 0
    for i in xrange(len(xs)):
        for j in xrange(i+1):
            best = max(best, abs(xs[i]-xs[j]) + abs(i-j))
    return best

import random
for _ in xrange(500):
    r = [random.randrange(1000) for _ in xrange(50)]
    md1 = maxdiff(r)
    md2 = maxdiff_naive(r)
    if md1 != md2:
        print "%d != %d\n%s" % (md1, md2, r)
        exit()
It takes a fraction of a second to run maxdiff on an array of size 10^5, which is significantly better than your reported leaderboard scores.
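If you want to verify that claim yourself, a rough timing sketch (in the same Python 2 style as the snippets above) might be:

import random, time

xs = [random.randrange(10**5) for _ in xrange(10**5)]
t0 = time.time()
print maxdiff(xs)
print "maxdiff took %.3f seconds" % (time.time() - t0)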
"Competitive programming" is not about saving a few milliseconds by using a different kind of loop; it's about being smart about how you approach a problem, and then implementing the solution efficiently.
Still, one thing that jumps out is that you are wasting time building a list only to scan it to find the max. Your double loop can be transformed to the following (ignoring other possible improvements):
print(max(abs(a[s] - a[d]) + abs(s - d) for s in range(n) for d in range(n)))
But that's small fry. Worry about your algorithm first, and then turn to even obvious time-wasters like this. You can cut the number of comparisons in half, as @Brett showed you, but I would first study the problem and ask myself: do I really need to calculate this quantity n^2 times, or even 0.5*n^2 times? That's how you get the times down, not by shaving off milliseconds.
Is there a pythonic way to build up a list that contains a running average of some function?
After reading a fun little piece about Martians, black boxes, and the Cauchy distribution, I thought it would be fun to calculate a running average of the Cauchy distribution myself:
import math
import random

def cauchy(location, scale):
    p = 0.0
    while p == 0.0:
        p = random.random()
    return location + scale*math.tan(math.pi*(p - 0.5))
# is this next block of code a good way to populate running_avg?
sum = 0
count = 0
max = 10
running_avg = []
while count < max:
    num = cauchy(3,1)
    sum += num
    count += 1
    running_avg.append(sum/count)
print running_avg # or do something else with it, besides printing
I think that this approach works, but I'm curious if there might be a more elegant approach to building up that running_avg list than using loops and counters (e.g. list comprehensions).
There are some related questions, but they address more complicated problems (small window size, exponential weighting) or aren't specific to Python:
calculate exponential moving average in python
How to efficiently calculate a running standard deviation?
Calculating the Moving Average of a List
You could write a generator:
def running_average():
    sum = 0
    count = 0
    while True:
        sum += cauchy(3,1)
        count += 1
        yield sum/count
Or, given a generator for Cauchy numbers and a utility function for a running sum generator, you can have a neat generator expression:
# Cauchy numbers generator
def cauchy_numbers():
    while True:
        yield cauchy(3,1)

# running sum utility function
def running_sum(iterable):
    sum = 0
    for x in iterable:
        sum += x
        yield sum

# Running averages generator expression (** the neat part **)
running_avgs = (sum/(i+1) for (i,sum) in enumerate(running_sum(cauchy_numbers())))

# goes on forever
for avg in running_avgs:
    print avg

# alternatively, take just the first 10
import itertools
for avg in itertools.islice(running_avgs, 10):
    print avg
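As an aside (not part of the original answer), on Python 3.2+ the running_sum helper above is essentially what itertools.accumulate provides, so the same idea can be written as:

from itertools import accumulate, islice

running_avgs = (s / (i + 1) for i, s in enumerate(accumulate(cauchy_numbers())))
for avg in islice(running_avgs, 10):
    print(avg)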
You could use coroutines. They are similar to generators, but allow you to send in values. Coroutines were added in Python 2.5, so this won't work in versions before that.
def running_average():
    sum = 0.0
    count = 0
    value = yield(float('nan'))
    while True:
        sum += value
        count += 1
        value = yield(sum/count)

ravg = running_average()
next(ravg)  # advance the coroutine to the first yield
for i in xrange(10):
    avg = ravg.send(cauchy(3,1))
    print 'Running average: %.6f' % (avg,)
As a list comprehension:
ravg = running_average()
next(ravg)
ravg_list = [ravg.send(cauchy(3,1)) for i in xrange(10)]
Edits:
Using the next() function instead of the it.next() method. This is so it also will work with Python 3. The next() function has also been back-ported to Python 2.6+.
In Python 2.5, you can either replace the calls with it.next(), or define a next function yourself.
(Thanks Adam Parkin)
I've got two possible solutions here for you. Both are just generic running average functions that work on any list of numbers. (could be made to work with any iterable)
Generator based:
nums = [cauchy(3,1) for x in xrange(10)]

def running_avg(numbers):
    for count in xrange(1, len(numbers)+1):
        yield sum(numbers[:count])/count

print list(running_avg(nums))
List Comprehension based (really the same code as the earlier):
nums = [cauchy(3,1) for x in xrange(10)]
print [sum(nums[:count])/count for count in xrange(1, len(nums)+1)]
Generator-compatible Generator based:
Edit: This one I just tested to see if I could make my solution compatible with generators easily and what its performance would be. This is what I came up with.
def running_avg(numbers):
    sum = 0
    for count, number in enumerate(numbers):
        sum += number
        yield sum/(count+1)
See the performance stats below, well worth it.
Performance characteristics:
Edit: I also decided to test Orip's interesting use of multiple generators to see the impact on performance.
Using timeit and the following (1,000,000 iterations 3 times):
print "Generator based:", ', '.join(str(x) for x in Timer('list(running_avg(nums))', 'from __main__ import nums, running_avg').repeat())
print "LC based:", ', '.join(str(x) for x in Timer('[sum(nums[:count])/count for count in xrange(1, len(nums)+1)]', 'from __main__ import nums').repeat())
print "Orip's:", ', '.join(str(x) for x in Timer('list(itertools.islice(running_avgs, 10))', 'from __main__ import itertools, running_avgs').repeat())
print "Generator-compatabile Generator based:", ', '.join(str(x) for x in Timer('list(running_avg(nums))', 'from __main__ import nums, running_avg').repeat())
I get the following results:
Generator based: 17.653908968, 17.8027219772, 18.0342400074
LC based: 14.3925321102, 14.4613749981, 14.4277560711
Orip's: 30.8035550117, 30.3142540455, 30.5146529675
Generator-compatible Generator based: 3.55352187157, 3.54164409637, 3.59098005295
See comments for code:
Orip's genEx based: 4.31488609314, 4.29926609993, 4.30518198013
Results are in seconds, and show the new generator-compatible generator method to be consistently the fastest; your results may vary though. I expect the massive difference between my original generator and the new one comes from the original recomputing sum(numbers[:count]) from a slice each time instead of keeping a running total.