Using randint(), how do I give lower values a higher weight (a higher chance of being picked)?
I have the following code:
def grab_int():
    current = randint(0, 10)
    # I want the weight of the numbers to go down the higher they get
    # (so I will get more 0s than 1s, more 1s than 2s, etc.)
Note:
I would like to do this in a more elegant fashion than found in some other answers (such as Python Weighted Random). Is there a way to do this by perhaps importing some kind of weight module?
Specification:
I would like a randint() where the chance of returning 0 is 30%, and the chance then deteriorates linearly down to 1% for 10.
The following method satisfies your requirements. It uses the rejection sampling approach: Generate an integer uniformly at random, and accept it with probability proportional to its weight. If the number isn't accepted, we reject it and try again (see also this answer of mine).
import random

def weighted_random(mn, mx, mnweight, mxweight):
    while True:
        # Get the highest weight.
        highestweight = max(mnweight, mxweight)
        # Generate a uniform random integer in the interval [mn, mx].
        r = random.randint(mn, mx)
        # Calculate the weight for this integer.  This ensures the min's
        # weight is mnweight and the max's weight is mxweight.
        weight = mnweight + (mxweight - mnweight) * ((r - mn) / (mx - mn))
        # Generate a random value between 0 and the highest weight.
        v = random.random() * highestweight
        # Is it less than this weight?
        if v < weight:
            # Yes, so return it.
            return r
        # No, so try again.
(Admittedly, due to the floating-point division, as well as random.random(), which outputs floating-point numbers, the implementation itself is not exactly "elegant", even if calling it is. The implementation could be made exact by using Fraction from the fractions module.)
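As a sketch of one such improvement (my variant, not part of the answer above): when the weights are integers, the acceptance test can be done entirely in integer arithmetic, avoiding random.random() and floating-point division altogether. The trick is to scale every weight by (mx - mn) so the linearly interpolated weight is itself an integer:

```python
import random

def weighted_random_int(mn, mx, mnweight, mxweight):
    # Rejection sampling with integers only: scale every weight by (mx - mn)
    # so that the linearly interpolated weight of r is itself an integer.
    span = mx - mn
    highest = max(mnweight, mxweight) * span
    while True:
        r = random.randint(mn, mx)
        # weight(r) * span, computed exactly as an integer
        w = mnweight * span + (mxweight - mnweight) * (r - mn)
        # Accept r with probability w / highest.
        if random.randint(0, highest - 1) < w:
            return r
```

For mnweight=30 and mxweight=10, the value 0 is accepted three times as often as 10, matching the float version exactly but without any rounding.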
This method can also be implemented using the existing random.choices method in Python as follows. First we calculate the required weights for random.choices, then we pass those weights. However, this approach is not very efficient if the range between the minimum and maximum is very wide.
import random

# Calculate weights for `random.choices`.
def make_weights(mn, mx, mnweight, mxweight):
    r = mx - mn
    return [mnweight + (mxweight - mnweight) * ((i - mn) / r)
            for i in range(mn, mx + 1)]

def weighted_random(mn, mx, mnweight, mxweight):
    weights = make_weights(mn, mx, mnweight, mxweight)
    return random.choices(range(mn, mx + 1), weights=weights)[0]
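When many samples are needed, the weight list (or its cumulative form) can be computed once and reused; random.choices also accepts cum_weights directly, which skips the internal accumulation it would otherwise redo on every call. A sketch (the (i - mn) offset keeps the endpoint weights correct for a nonzero mn):

```python
import random
from itertools import accumulate

def make_weights(mn, mx, mnweight, mxweight):
    r = mx - mn
    return [mnweight + (mxweight - mnweight) * ((i - mn) / r)
            for i in range(mn, mx + 1)]

# Build the cumulative weights once...
population = range(0, 11)
cum = list(accumulate(make_weights(0, 10, 30, 10)))

# ...then reuse them for every draw; cum_weights skips the
# accumulation step random.choices performs internally.
samples = random.choices(population, cum_weights=cum, k=1000)
```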
With the NumPy library this can even be implemented as follows:
import random
import numpy

def weighted_random(mn, mx, mnweight, mxweight):
    return random.choices(range(mn, mx + 1),
                          weights=numpy.linspace(mnweight, mxweight, (mx - mn) + 1))[0]
An example of the weighted_random function follows:
# Generate 100 random integers in the interval [0, 10],
# where 0 is assigned the weight 30 and 10 is assigned the
# weight 10 and numbers in between are assigned
# decreasing weights.
print([weighted_random(0,10,30,10) for i in range(100)])
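To sanity-check whichever implementation you pick, it helps to tally a much larger sample and eyeball the frequencies. A rough sketch using the random.choices variant:

```python
import random
from collections import Counter

def weighted_random(mn, mx, mnweight, mxweight):
    weights = [mnweight + (mxweight - mnweight) * ((i - mn) / (mx - mn))
               for i in range(mn, mx + 1)]
    return random.choices(range(mn, mx + 1), weights=weights)[0]

counts = Counter(weighted_random(0, 10, 30, 10) for _ in range(100_000))
for value in sorted(counts):
    print(value, counts[value])  # counts should fall roughly linearly from 0 to 10
```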
The easiest way I found was to just manually assign weights:

def grab_int():
    global percent
    global percentLeft
    global upby
    # I want the weight of the numbers to go down the higher they get
    # (so I will get more 0s than 1s, more 1s than 2s, etc.)
    current = randint(0, 100)
    if current < 30:
        upby = randint(1, 2)
        #0
    elif current < 40:
        upby = 1
        #1
    elif current < 45:
        upby = 2
        #2
    elif current < 50:
        upby = 3
        #3
    elif current < 55:
        upby = 4
        #4
    elif current < 60:
        upby = 5
        #5
    elif current < 65:
        upby = 6
        #6
    elif current < 70:
        upby = 7
        #7
    elif current < 75:
        upby = 8
        #8
    elif current < 90:
        upby = 9
        #9
    elif current < 95:
        upby = 10
        #10
    else:
        # I'm dumb, so I accidentally only added up to 95%. This just gives 0
        # a 5% higher chance without having to rewrite all the other values.
        upby = 0
        #0
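For reference (my addition, not part of the answer above): the distribution from the question's specification, 30% for 0 falling linearly toward 1% for 10, can be written directly with random.choices. Note that those stated percentages actually sum to well over 100, so random.choices normalizes them; only the 30 : 27.1 : ... : 1 proportions matter:

```python
import random

# Weights fall linearly from 30 (for 0) down to 1 (for 10).
# They sum to more than 100, so random.choices normalizes them;
# only the relative proportions are preserved.
weights = [30 - 2.9 * i for i in range(11)]

def grab_int():
    return random.choices(range(11), weights=weights)[0]
```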
I have tried to solve the Primitive Calculator problem with dynamic and recursive approaches. It works fine for smaller inputs but takes a long time for larger inputs (e.g. 96234).
You are given a primitive calculator that can perform the following three operations with the current number x: multiply x by 2, multiply x by 3, or add 1 to x. Your goal is, given a positive integer n, to find the minimum number of operations needed to obtain the number n starting from the number 1.
import sys

def optimal_sequence(n, memo={}):
    if n in memo:
        return memo[n]
    if n == 1:
        return 0
    c1 = 1 + optimal_sequence(n - 1, memo)
    c2 = float('inf')
    if n % 2 == 0:
        c2 = 1 + optimal_sequence(n // 2, memo)
    c3 = float('inf')
    if n % 3 == 0:
        c3 = 1 + optimal_sequence(n // 3, memo)
    c = min(c1, c2, c3)
    memo[n] = c
    return c

input = sys.stdin.read()
n = int(input)
sequence = optimal_sequence(n)
print(sequence)  # Only printing the optimal number of operations
Can anyone point out what is wrong with the recursive solution, given that it works fine using a for loop?
There are a few things to consider here. The first is that you always check whether you can subtract 1 from n. This is always going to be true until n is 1. Therefore, with a number like 12, you will end up taking 1 away first, then calling the function again with n=11, then n=10, then n=9, etc. Only once you have resolved how many steps the -1 method takes (in this case c1 will be 11) do you then try c2.
So for c2 you then halve 12, then call the function, which will start with the -1 again, so you end up with n=12, n=6, n=5, n=4, etc. Even though you have n in the memo, you still spend a lot of wasted time on function calls.
Instead, you probably want to shrink the problem space as fast as possible. So start with the rule that reduces n the most, i.e. divide by 3; if that doesn't work, divide by 2; and only if neither of the first two worked, subtract 1.
With this method you don't even need to track n, as n will always be getting smaller, so there is no need for a memo dict tracking the results.
from time import time

def optimal_sequence(n):
    if n == 1:
        return 0
    elif n % 3 == 0:
        c = optimal_sequence(n // 3)
    elif n % 2 == 0:
        c = optimal_sequence(n // 2)
    else:
        c = optimal_sequence(n - 1)
    return 1 + c

n = int(input("Enter a value for N: "))
start = time()
sequence = optimal_sequence(n)
end = time()
print(f"{sequence=} took {end - start} seconds")
Also, input is a Python function that reads from the terminal; you don't need to use stdin.
OUTPUT
Enter a value for N: 96234
sequence=15 took 0.0 seconds
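For comparison (my addition, not part of the answer above): the original minimum-operations question can also be solved with a bottom-up table, which avoids the recursion-depth problem of the recursive version and, unlike the greedy divide-first shortcut, is guaranteed to return the true minimum. A sketch:

```python
def optimal_sequence_dp(n):
    # dp[k] = minimum number of operations to reach k starting from 1
    dp = [0] * (n + 1)
    for k in range(2, n + 1):
        best = dp[k - 1]                    # undo "add 1"
        if k % 2 == 0:
            best = min(best, dp[k // 2])    # undo "multiply by 2"
        if k % 3 == 0:
            best = min(best, dp[k // 3])    # undo "multiply by 3"
        dp[k] = best + 1
    return dp[n]

print(optimal_sequence_dp(10))  # → 3  (1 -> 3 -> 9 -> 10)
```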
I'm trying to maximize the Euler totient function in Python, given that it can use large arbitrary-precision numbers. The problem is that the program gets killed after some time, so it never reaches the desired ratio. I have thought of starting from a larger number, but I don't think that's prudent. I'm trying to find a number that, when divided by its totient, gets higher than 10. Essentially, I'm trying to find a sparsely totient number that fits this criterion.
Here's my phi function:
import fractions

def phi(n):
    amount = 0
    for k in range(1, n + 1):
        if fractions.gcd(n, k) == 1:
            amount += 1
    return amount
The most likely candidates for high ratios of N/phi(N) are products of prime numbers. If you're just looking for one number with a ratio > 10, you can generate primes and check only the product of primes, up to the point where you get the desired ratio:
def totientRatio(maxN, ratio=10):
    primes = []
    primeProd = 1
    isPrime = [1] * (maxN + 1)
    p = 2
    while p * p <= maxN:
        if isPrime[p]:
            isPrime[p*p::p] = [0] * len(range(p*p, maxN + 1, p))
            primes.append(p)
            primeProd *= p
            tot = primeProd
            for f in primes:
                tot -= tot // f
            if primeProd / tot >= ratio:
                return primeProd, primeProd / tot, len(primes)
        p += 1 + (p & 1)
output:
totientRatio(10**6)
16516447045902521732188973253623425320896207954043566485360902980990824644545340710198976591011245999110,
10.00371973209101,
55
This gives you the smallest number with that ratio. Multiples of that number will have the same ratio.
n = 16516447045902521732188973253623425320896207954043566485360902980990824644545340710198976591011245999110
n*2/totient(n*2) = 10.00371973209101
n*11*13/totient(n*11*13) = 10.00371973209101
No number will have a higher ratio until you reach the next product of primes (i.e. that number multiplied by the next prime).
n*263/totient(n*263) = 10.041901868473037
Removing a prime from the product affects the ratio by a proportion of (1-1/P).
For example if m = n/109, then m/phi(m) = n/phi(n) * (1-1/109)
(n//109) / totient(n//109) = 9.91194248684247
10.00371973209101 * (1-1/109) = 9.91194248684247
This should allow you to navigate the ratios efficiently and find the numbers that meet your need.
For example, to get a number with a ratio that is >= 10 but closer to 10, you can go to the next prime product(s) and remove one or more of the smaller primes to reduce the ratio. This can be done using combinations (from itertools) and will allow you to find very specific ratios:
m = n*263/241
m/totient(m) = 10.000234225865265
m = n*(263...839) / (7 * 61 * 109 * 137) # 839 is 146th prime
m/totient(m) = 10.000000079805726
I have a partial solution for you, but the results don't look good (this solution may not give you an answer on modern computer hardware; the amount of RAM is the limiting factor currently). I took an answer from this PCG challenge and modified it to spit out the ratios n/phi(n) up to a particular n:
import numba as nb
import numpy as np
import time

n = int(2**31)

@nb.njit("i4[:](i4[:])", locals=dict(
    n=nb.int32, i=nb.int32, j=nb.int32, q=nb.int32, f=nb.int32))
def summarum(phi):
    # calculate phi(i) for i: 1 - n
    # taken from https://codegolf.stackexchange.com/a/26753/42652
    phi[1] = 1
    i = 2
    while i < n:
        if phi[i] == 0:
            phi[i] = i - 1
            j = 2
            while j * i < n:
                if phi[j] != 0:
                    q = j
                    f = i - 1
                    while q % i == 0:
                        f *= i
                        q //= i
                    phi[i * j] = f * phi[q]
                j += 1
        i += 1
    # divide each by n to get the ratio n/phi(n)
    i = 1
    while i < n:  # a jit-compiled while loop is faster than: for i in range(...)
        phi[i] = i // phi[i]
        i += 1
    return phi

if __name__ == "__main__":
    s1 = time.time()
    a = summarum(np.zeros(n, np.int32))
    locations = np.where(a >= 10)
    print(len(locations))
I only have enough RAM on my work computer to test about 0 < n < 10^8, and the largest ratio was about 6. You may or may not have any luck going up to larger n, although 10^8 already took several seconds (not sure what the overhead was; Spyder's been acting strangely lately).
p_55# (the product of the first 55 primes) is a sparsely totient number satisfying the desired condition.
Furthermore, all subsequent primorial numbers are as well, because p_n# / phi(p_n#) is a strictly increasing sequence:
p_1# / phi(p_1#) is 2, which is positive. For n > 1, p_n# / phi(p_n#) is equal to p_{n-1}#·p_n / phi(p_{n-1}#·p_n), which, since p_n and p_{n-1}# are coprime, is equal to (p_{n-1}# / phi(p_{n-1}#)) * (p_n / phi(p_n)). We know p_n > phi(p_n) > 0 for all n, so p_n / phi(p_n) > 1. So we have that the sequence p_n# / phi(p_n#) is strictly increasing.
I do not believe these to be the only sparsely totient numbers satisfying your request, but I don't have an efficient way of generating the others coming to mind. Generating primorials, by comparison, amounts to generating the first n primes and multiplying the list together (whether by using functools.reduce(), math.prod() in 3.8+, or ye old for loop).
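A minimal sketch of that primorial route, using only the standard library (the names are mine, and the trial-division prime generator is only meant for small k):

```python
from math import prod

def first_primes(k):
    """First k primes by simple trial division (fine for small k)."""
    primes = []
    candidate = 2
    while len(primes) < k:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def primorial_ratio(k):
    """p_k# and the ratio p_k# / phi(p_k#).

    Since p_k# is squarefree, phi(p_k#) is the product of (p - 1)
    over the first k primes, so the ratio is prod(p / (p - 1)).
    """
    primes = first_primes(k)
    n = prod(primes)
    totient = prod(p - 1 for p in primes)
    return n, n / totient  # the ratio strictly increases with k
```

With k = 55 the ratio first exceeds 10, matching the totientRatio output quoted in the earlier answer.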
As for the general question of writing a phi(n) function, I would probably first find the prime factors of n, then use Euler's product formula for phi(n). As an aside, make sure NOT to use floating-point division: with integer arithmetic the result is exact. Even finding the prime factors of n by trial division should outperform computing gcd n times, but when working with large n, replacing this with an efficient prime factorization algorithm will pay dividends. Unless you want a good cross to die on, don't write your own: there's one in sympy that I'm aware of, and given the ubiquity of the problem, probably plenty of others around. Time as needed.
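A sketch of that plan with trial-division factoring (integer arithmetic only; for large n you would swap the factor loop for a real factorization routine):

```python
def phi(n):
    """Euler's totient via the product formula phi(n) = n * prod(1 - 1/p).

    Implemented with integer arithmetic: for each distinct prime
    factor p, the update result -= result // p multiplies the running
    result by (1 - 1/p) exactly.
    """
    result = n
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p          # strip the factor p completely
            result -= result // p
        p += 1
    if n > 1:                    # a prime factor larger than sqrt(n) remains
        result -= result // n
    return result

print(phi(10))  # → 4  (1, 3, 7, 9 are coprime to 10)
```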
Speaking of timing, if this is still relevant enough to you (or a future reader) to want to time... definitely throw the previous answer in the mix as well.
This is the code I created to find the largest power-of-2 factor. I do not think it is correct, because I keep getting 2 as the answer. I need some help figuring this out. I am completely new to programming.
MY CODE:
def largestPowerOfTwoThatIsAFactorOf(num):
    factor = 2
    while not (num > 0):
        factor = factor + 1
    return factor

print(largestPowerOfTwoThatIsAFactorOf(4))
print(largestPowerOfTwoThatIsAFactorOf(15))
print(largestPowerOfTwoThatIsAFactorOf(120))
# For any odd integer, the largest power of 2 that's a factor is 1.
This is an interesting and useful function, fit for dealing with the FFT in signal processing and analysis, since the FFT operates on a square matrix whose dimensions are a power of two. Understanding that a power of two is the result of two raised to some power, and that there are infinitely many powers of two greater than n, a better name for the function would be "minimum power of two greater than n"; we just use it to size a collection of signal data before submitting it to an FFT filter. Two options follow, named minpowof2 and maxpowof2:
def minpowof2(n):
    '''Minimum power of two greater than n.'''
    f = 1
    if n < 2: return 'try n greater than or equal to 2'
    while n > 2:
        n /= 2
        f += 1
    return 2**f

def maxpowof2(n):
    '''Maximum power of two lower than n.'''
    return int(minpowof2(n) / 2)
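Side note (my addition, not part of the answer above): for integers n >= 2 these give the same results via Python's int.bit_length, with no division loop and no floating point:

```python
def next_pow2(n):
    """Smallest power of two >= n, for n >= 1."""
    return 1 << (n - 1).bit_length()

def prev_pow2(n):
    """Largest power of two <= n, for n >= 1."""
    return 1 << (n.bit_length() - 1)
```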
def largestPowerOfTwoThatIsAFactorOf(num):
    if num % 2 != 0: return 1
    factor = 0
    while num % 2 == 0:
        num /= 2
        factor += 1
    return 2 ** factor
    # or return factor, as per your requirement
You need to update num inside the loop. Also, you can optimize the code a little by checking in the first statement whether the input is odd.
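Another sketch worth knowing (my addition, assuming num is a positive integer): in two's-complement arithmetic, num & -num isolates the lowest set bit of num, which is exactly the largest power of two dividing it, with no loop at all:

```python
def largest_power_of_two_factor(num):
    # num & -num keeps only the lowest set bit of num,
    # i.e. the largest power of two that divides num.
    return num & -num

print(largest_power_of_two_factor(4))    # → 4
print(largest_power_of_two_factor(15))   # → 1
print(largest_power_of_two_factor(120))  # → 8
```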
I want to calculate the average over 1000 random walks to get a good estimate. My code for one random walk is:
import math
import random
from matplotlib import pyplot

position = 0
walk = [position]
steps = 10
for i in xrange(steps):
    step = 1 if random.randint(0, 1) else -1
    position += step
    walk.append(position)
print(walk)
pyplot.hist(walk)
pyplot.show()
So, what is the best way to make Python repeat this many times and calculate the average of these random walks?
Thanks
It will be easier if you break it down into smaller functions, for example by making the main part of your code a function:
def makewalk(steps):
    position = 0
    walk = [position]
    for i in xrange(steps):
        step = 1 if random.randint(0, 1) else -1
        position += step
        walk.append(position)
    return walk  # instead of simply printing it
Alternatively, you could use built-in functions to reduce it to a few lines:
import numpy

def makewalk(N):
    # An array of length N with random integers between 0 (inclusive)
    # and 2 (exclusive); multiplying by two and subtracting 1 maps
    # 0 and 1 to -1 and 1 respectively.
    steps = numpy.random.randint(0, 2, N) * 2 - 1
    walk = numpy.cumsum(steps)  # what it says, a cumulative sum
    return walk
Now just loop over it 1000 times
from matplotlib import pyplot

steps = 10000
numwalks = 1000
walks = [makewalk(steps) for i in xrange(numwalks)]
There are your walks; do whatever you like with them. Since the walks are NumPy arrays, you can easily compute the element-wise sum without loops:
averagewalk = numpy.sum(walks, 0) * 1.0 / numwalks  # sums along the 0th axis; an array of length `steps`
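With the walks stacked into a 2-D array, the usual statistics fall out of the axis argument; for an unbiased walk the mean position stays near 0 while the spread grows like the square root of the number of steps. A sketch (Python 3 / range, otherwise the same makewalk as above):

```python
import numpy as np

def makewalk(N):
    steps = np.random.randint(0, 2, N) * 2 - 1  # each step is -1 or +1
    return np.cumsum(steps)

numwalks, steps = 1000, 10000
walks = np.array([makewalk(steps) for _ in range(numwalks)])

mean_walk = walks.mean(axis=0)  # average position at each time step (close to 0)
std_walk = walks.std(axis=0)    # spread at each step, grows roughly like sqrt(t)
print(mean_walk[-1], std_walk[-1])
```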
I've been attempting to use Python to create a script that lets me generate large numbers of points for use in the Monte Carlo method to calculate an estimate to Pi. The script I have so far is this:
import math
import random

random.seed()
n = 10000
for i in range(n):
    x = random.random()
    y = random.random()
    z = (x, y)
    if x**2 + y**2 <= 1:
        print z
    else:
        del z
So far, I am able to generate all of the points I need, but what I would like to get is the number of points that are produced when running the script for use in a later calculation. I'm not looking for incredibly precise results, just a good enough estimate. Any suggestions would be greatly appreciated.
If you're doing any kind of heavy-duty numerical calculation, consider learning numpy. Your problem is essentially a one-liner with a numpy setup:
import numpy as np
N = 10000
pts = np.random.random((N,2))
# Select the points according to your condition
idx = (pts**2).sum(axis=1) < 1.0
print pts[idx], idx.sum()
Giving:
[[ 0.61255615 0.44319463]
[ 0.48214768 0.69960483]
[ 0.04735956 0.18509277]
...,
[ 0.37543094 0.2858077 ]
[ 0.43304577 0.45903071]
[ 0.30838206 0.45977162]], 7854
The last number is the count of events that passed the condition, i.e. the number of points whose radius is less than one.
Not sure if this is what you're looking for, but you can run enumerate on range and get the position in your iteration:
In [1]: for index, i in enumerate(xrange(10, 15)):
...: print index + 1, i
...:
...:
1 10
2 11
3 12
4 13
5 14
In this case, index + 1 would represent the current point being created (index itself would be the total number of points created at the beginning of a given iteration). Also, if you are using Python 2.x, xrange is generally better for these sorts of iterations as it does not load the entire list into memory but rather accesses it on an as-needed basis.
Just add a hits variable before the loop, initialize it to 0, and increment it by one inside your if statement.
Finally, you can estimate pi using hits and n: the fraction hits/n approximates the area of the quarter circle, pi/4.
import math
import random

random.seed()
n = 10000
hits = 0  # initialize hits with 0
for i in range(n):
    x = random.random()
    y = random.random()
    z = (x, y)
    if x**2 + y**2 <= 1:
        hits += 1
    else:
        del z
# use hits and n to compute PI: hits/n estimates pi/4
print 4.0 * hits / n