How can I find the maximum value of the following equation, Fp = 1000 + 9*(x**2) - 183*x, for values of x in the range 1-10, using Python? This is what I have tried already:
L = range(1, 11)
for x in L:
    Fp = (1000 + 9*(x**2) - (183)*x)
    Td = 20 - 0.12*(x**2) + (4.2)*x
    print(Fp, Td)
print(max(Fp))
Assuming that you have the set of natural numbers in mind: since you only have a small range of numbers to check (10 of them), the first approach is to evaluate the equation for every number, save the results in a list, and take the maximum of that list. Take a look at the code below.
max_list = []
for x in range(1, 11):
    Fp = (1000 + 9*(x**2) - (183)*x)
    max_list.append(Fp)
print(max(max_list))
Another, more elegant approach is to analyze the equation. Since your Fp equation is a quadratic with a positive coefficient on the second-power term (a parabola opening upward), its maximum over an interval is attained at one of the endpoints, so you only need to check the first and last values of the range.
value_range = (1,10)
Fp_first = 1000 + 9*(value_range[0]**2) - (183)*value_range[0]
Fp_last = 1000 + 9*(value_range[1]**2) - (183)*value_range[1]
max_val = max(Fp_first , Fp_last)
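For the coefficients in the question this works out to Fp(1) = 826 and Fp(10) = 70, so the maximum over the range is 826, attained at x = 1. A quick check, simply printing the values computed in the snippet above:
print(Fp_first, Fp_last)  # 826 70
print(max_val)            # 826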
You could do it like this:-
def func(x):
    return 1_000 + 9 * x * x - 183 * x

print(max([func(x) for x in range(1, 11)]))
The problem with your code is that you're taking the max of a scalar rather than of the values of Fp for each of the values of x.
For a small range of integer values of x, you can iterate over them as you do. And, if you only need the max value,
L = range(1, 11)
highest_Fp = 0  # lower than any Fp value in this range (the minimum is Fp(10) = 70)
for x in L:
    Fp = (1000 + 9*(x**2) - (183)*x)
    Td = 20 - 0.12*(x**2) + (4.2)*x
    print(Fp, Td)
    if Fp > highest_Fp:
        highest_Fp = Fp
print(highest_Fp)
n = 5
cube = n**3

def get_sum(n):
    a1 = n * (n - 1) + 1
    for i in range(a1, cube, 2):
        print(i, end='+')

print(f'{get_sum(n)}')
print(cube)
I have output:
21+23+25+27+29+31+33+35+37+39+41+43+45+47+49+51+53+55+57+59+61+63+65+67+69+71+73+75+77+79+81+83+85+87+89+91+93+95+97+99+101+103+105+107+109+111+113+115+117+119+121+123+None
125
How can I stop the range at 29 so that the sum of these numbers equals the cube in Python?
For example, 21+23+25+27+29 = 5^3.
First, there is no need to write print(f'{get_sum(n)}'): your function doesn't return anything except None (which you can see in your output), so calling get_sum(n) on its own is enough.
Since you always need exactly n terms, you can simplify your condition; in my solution I used a while loop with a running-total variable that keeps track of the current sum.
You can apply the same logic with a for loop, of course; this is just my implementation.
def get_sum(n):
    a1 = n * (n - 1) + 1
    total = a1
    while total < cube:
        print(a1, end='+')
        a1 += 2
        total += a1
    print(a1, end='=')

n = 5
cube = n**3
get_sum(n)
print(cube)
output:
21+23+25+27+29=125
Inefficient approach:
Keep a variable that tracks the current sum to check if we need to break the loop or not (as mentioned in the other answers).
Efficient Approach:
n^3 can be expressed as a sum of n odd integers, which are symmetric about n^2. Examples:
3^3 = 7+9+11 (symmetric about 9)
4^3 = 13+15+17+19 (symmetric about 16)
5^3 = 21+23+25+27+29 (symmetric about 25)
Use this observation to get a simpler algorithm; a minimal sketch follows below.
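To make that concrete, here is a small sketch of the symmetric-about-n^2 idea (the name cube_as_odd_sum is just illustrative, not from the question):
def cube_as_odd_sum(n):
    # n**3 equals the sum of the n consecutive odd numbers centred on n**2:
    # n**2 - (n - 1), n**2 - (n - 3), ..., n**2 + (n - 1)
    return list(range(n * n - (n - 1), n * n + n, 2))

n = 5
terms = cube_as_odd_sum(n)
print('+'.join(map(str, terms)), '=', sum(terms))  # 21+23+25+27+29 = 125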
I need to generate an array of random numbers whose sum is 0. If the required sum were 1, I could just divide each value by their total; however, this approach is not applicable when the sum is 0.
Maybe I could compute the opposite of each value I sample, so I would always have pairs of numbers whose sum is 0. However, this approach reduces the "randomness" I would like to have in my random array.
Are there better approaches?
Edit: the array length can vary (from 3 to a few hundred), but it has to be fixed before sampling.
There is the Dirichlet-Rescale (DRS) algorithm, which generates random vectors that sum to a given value. As its documentation says, it has the feature that
"the vectors are uniformly distributed over the valid region of the domain of all possible vectors, bounded by the constraints."
There is also a Python library for it.
You could use sklearn's StandardScaler. It scales your data to have a variance of 1 and a mean of 0, and a mean of 0 is equivalent to a sum of 0.
from sklearn.preprocessing import StandardScaler
import numpy as np

rand_numbers = StandardScaler().fit_transform(np.random.rand(100, 1))
If you don't want to use sklearn you can standardize by hand, the formula is pretty simple:
rand_numbers = np.random.rand(1000, 1)
rand_numbers = (rand_numbers - np.mean(rand_numbers)) / np.std(rand_numbers)
The problem here is the variance of 1, which causes numbers greater than 1 or smaller than -1. Therefore you divide the array by its maximum absolute value.
rand_numbers = rand_numbers*(1/max(abs(rand_numbers)))
Now you have an array with values between -1 and 1 with a sum really close to zero.
print(sum(rand_numbers))
print(min(rand_numbers))
print(max(rand_numbers))
Output:
[-1.51822999e-14]
[-0.99356294]
[1.]
What you will have with this solution is that there is always exactly one 1 or one -1 in your data. If you want to avoid this, you could add a positive random factor to the divisor: rand_numbers*(1/(max(abs(rand_numbers))+randomfactor))
Edit
As @KarlKnechtel mentioned, the division by the standard deviation is redundant with the division by the max absolute value.
The above can be simply done by:
rand_numbers = np.random.rand(100000, 1)
rand_numbers = rand_numbers - np.mean(rand_numbers)
rand_numbers = rand_numbers / max(abs(rand_numbers))
I would try the following solution:
import random

def draw_randoms_while_sum_not_zero(eps):
    r = random.uniform(-1, 1)
    total = r
    yield r
    while abs(total) > eps:
        if total > 0:
            r = random.uniform(-1, 0)
        else:
            r = random.uniform(0, 1)
        total += r
        yield r
As floating-point numbers are not perfectly accurate, you can never be sure that the numbers you draw will sum to exactly 0. You need to decide what margin is acceptable and call the above generator.
It will yield (lazily return) random numbers for as long as they don't sum to 0 ± eps.
epss = [0.1, 0.01, 0.001, 0.0001, 0.00001]
for eps in epss:
    lengths = []
    for _ in range(100):
        lengths.append(len(list(draw_randoms_while_sum_not_zero(eps))))
    print(f'{eps}: min={min(lengths)}, max={max(lengths)}, avg={sum(lengths)/len(lengths)}')
Results:
0.1: min=1, max=24, avg=6.1
0.01: min=1, max=174, avg=49.27
0.001: min=4, max=2837, avg=421.41
0.0001: min=5, max=21830, avg=4486.51
1e-05: min=183, max=226286, avg=48754.42
Since you are fine with the approach of generating lots of numbers and dividing by the sum, why not generate n/2 positive numbers and divide them by their sum, then generate n/2 negative numbers and divide them by the absolute value of their sum? That way one half sums to +1 and the other to -1, for a total of 0.
Want a random mix of positives and negatives? Generate that mix randomly first, then continue.
One way to generate such a list is by including the opposite of each number.
If that is not a desirable property, you can introduce some extra randomness by adding/subtracting the same random value to/from different opposite pairs, e.g.:
import random

def exact_sum_uniform_random(num, min_val=-1.0, max_val=1.0, epsilon=0.1):
    items = [random.uniform(min_val, max_val) for _ in range(num // 2)]
    opposites = [-x for x in items]
    if num % 2 != 0:
        items.append(0.0)
    for i in range(len(items)):
        diff = random.random() * epsilon
        if items[i] + diff <= max_val \
                and any(opposite - diff >= min_val for opposite in opposites):
            items[i] += diff
            modified = False
            while not modified:
                j = random.randint(0, num // 2 - 1)
                if opposites[j] - diff >= min_val:
                    opposites[j] -= diff
                    modified = True
    result = items + opposites
    random.shuffle(result)
    return result
random.seed(0)
x = exact_sum_uniform_random(3)
print(x, sum(x))
# [0.7646391433441265, -0.7686875811622043, 0.004048437818077755] 2.2551405187698492e-17
EDIT
If the upper and lower limits are not strict, a simple way to construct a zero sum sequence is to sum-normalize two separate sequences to 1 and -1 and join them together:
def norm(items, scale):
    return [item / scale for item in items]

def zero_sum_uniform_random(num, min_val=-1.0, max_val=1.0):
    a = [random.uniform(min_val, max_val) for _ in range(num // 2)]
    a = norm(a, sum(a))
    b = [random.uniform(min_val, max_val) for _ in range(num - len(a))]
    b = norm(b, -sum(b))
    result = a + b
    random.shuffle(result)
    return result
random.seed(0)
n = 3
x = zero_sum_uniform_random(n)
print(x, sum(x))
# [1.0, 2.2578843364303585, -3.2578843364303585] 0.0
Note that, in general, neither approach produces values with a uniform distribution.
I have a math function whose output is defined by two variables, x and y.
The function is e^(x^3 + y^2).
I want to calculate every possible integer combination between 1 and some defined integer for x and y, and place the results in an array so that each output is aligned with the corresponding x-value and y-value index. So, something like:
given:
x = 3
y = 5
output would be an array like this:
f(1,1) f(1,2) f(1,3)
f(2,1) f(2,2) f(2,3)
f(3,1) f(3,2) f(3,3)
f(4,1) f(4,2) f(4,3)
f(5,1) f(5,2) f(5,3)
I feel like this is an easy problem to tackle but I have limited knowledge. The code that follows is the best description.
import math
import numpy as np

equation = math.exp(x**3 + y**2)

# start at 1, not zero
i = 1
j = 1

# i want an array output
output = []

# function
def shape_f(i, j):
    shape = []
    output.append(shape)
    while i < x + 1:
        while j < y + 1:
            return math.exp(i**3 + j**2)
            # increase counter
            i = i + 1
            j = j + 1

print output
I've gotten a blank array recently, but I have also gotten one value (an int instead of an array).
I am not sure if you have an indentation error, but it looks like you never do anything with the output of the function shape_f. You should define your equation as a function, rather than assigning an expression. Then you can make a function that populates a list of lists as you describe.
import math

def equation(x, y):
    return math.exp(x**3 + y**2)

def make_matrix(x_max, y_max, x_min=1, y_min=1):
    out = []
    for i in range(x_min, x_max + 1):
        row = []
        for j in range(y_min, y_max + 1):
            row.append(equation(i, j))
        out.append(row)
    return out
matrix = make_matrix(3, 3)
matrix
# returns:
[[7.38905609893065, 148.4131591025766, 22026.465794806718],
[8103.083927575384, 162754.79141900392, 24154952.7535753],
[1446257064291.475, 29048849665247.426, 4311231547115195.0]]
We can do this very simply with numpy.
First, we use np.arange to generate a range of values from 0 (to simplify indexing) to a maximum value for both x and y. We can perform exponentiation, in a vectorised manner, to get the values of x^3 and y^2.
Next, we can apply np.add.outer to x^3 and y^2 to get every possible pairwise sum. The final step is taking the natural exponential of the result:
import numpy as np

x_max = 3
y_max = 5

x = np.arange(x_max + 1) ** 3
y = np.arange(y_max + 1) ** 2
result = np.e ** np.add.outer(x, y)

print(result[2, 3])  # e^(2 ** 3 + 3 ** 2)
Output:
24154952.753575277
A trivial solution would be to use the broadcasting feature of numpy with the exp function:
import numpy as np

x = 3
y = 5

i = np.arange(y).reshape(-1, 1) + 1  # column of y values, shape (y, 1)
j = np.arange(x).reshape(1, -1) + 1  # row of x values, shape (1, x)
result = np.exp(j**3 + i**2)
The reshape operations make i into a column with y elements and j into a row with x elements. Exponentiation does not change those shapes. Broadcasting happens when you add the two arrays together. The unit dimensions in one array get expanded to the corresponding dimension in the other. The result is a y-by-x matrix.
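As a quick illustrative check of the shapes involved (assuming x = 3 and y = 5 as above), the column and row arrays broadcast to a y-by-x result, and each entry matches the scalar formula:
print(i.shape, j.shape, result.shape)  # (5, 1) (1, 3) (5, 3)
print(result[2, 1])                    # e^(2**3 + 3**2), about 24154952.75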
I'm being asked to add the first 100 terms of the sequence (1 + 1/2 + 1/4 + 1/8 + ...).
What I've been trying is something like:
for x in range(101):
    n = ((1)/(2**x))
    sum(n)
This gives me an error; I guess you can't put ranges to a power.
print(n)
will give me all the individual values, but I need them summed together.
Is anyone able to give me a hand?
I'm using qtconsole, if that's of any relevance; I'm quite new to this, if you haven't already guessed.
You keep only one value at a time. If you want the sum, you need to aggregate the results; for that you need an accumulator with an initial value, to which you add the current term on each iteration:
n = 0  # initial value
for x in range(100):
    n += 1 / 2**x  # add current term
print(n)
Hmm, there is actually a closed-form formula for the sum of a geometric series: S_n = a * (1 - r**n) / (1 - r).
In your question, a is 1, r is 0.5 and n is 100.
So we can do:
a = 1
r = 0.5
n = 100
print(a * (1 - r ** n) / (1 - r))
It is important to initialize sum_n to zero. With each iteration, you add (1/2**x) from your sequence/series to sum_n until you reach n_range.
n_range = 101
sum_n = 0  # initialize sum_n to zero
for x in range(n_range):
    sum_n += (1/(2**x))
print(sum_n)
You are getting an error because sum takes an iterable and you are passing it a float:
sum(iterable[, start])
To solve your problem, as others have suggested, you need to init an accumulator and add your power on every iteration.
If you absolutely must use the sum function:
>>> import math
>>> sum(map(lambda x:math.pow(2,-x),range(100)))
2.0
I'm having a bit of trouble controlling the results from a data-generating algorithm I am working on. Basically, it takes values from a list and then lists all the different combinations that reach a specific sum. So far the code works fine (I haven't tested scaling it with many variables yet), but I need to allow negative numbers to be included in the list.
The way I think I can solve this problem is to put a collar on the possible results to prevent an infinite number of solutions (if apples is 2 and oranges is -1, then for any sum there are infinitely many solutions, but if I put a limit on either one it cannot go on forever).
So here's super basic code that computes the max value for each weight:
import math

data = [-2, 10, 5, 50, 20, 25, 40]
target_sum = 100
max_percent = .8  # no value can exceed 80% of total (this is to prevent infinite solutions)

for node in data:
    max_value = abs(math.floor((target_sum * max_percent) / node))
    print node, "'s max value is ", max_value
Here's the code that generates the results (the first part builds a table of which sums are reachable, and the second function composes the actual results; details/pseudocode of the algorithm are here: Can brute force algorithms scale?):
from collections import defaultdict

data = [-2, 10, 5, 50, 20, 25, 40]
target_sum = 100

# T[x, i] is True if 'x' can be solved
# by a linear combination of data[:i+1]
T = defaultdict(bool)  # all values are False by default
T[0, 0] = True  # base case
for i, x in enumerate(data):  # i is index, x is data[i]
    for s in range(target_sum + 1):  # set the range one higher than sum to include sum itself
        for c in range(s / x + 1):
            if T[s - c * x, i]:
                T[s, i + 1] = True

coeff = [0] * len(data)

def RecursivelyListAllThatWork(k, sum):  # using last k variables, make sum
    # Base case: if we've assigned all the variables correctly, list this solution.
    if k == 0:
        # print what we have so far
        print(' + '.join("%2s*%s" % t for t in zip(coeff, data)))
        return
    x_k = data[k - 1]
    # Recursive step: try all coefficients, but only if they work.
    for c in range(sum // x_k + 1):
        if T[sum - c * x_k, k - 1]:
            # mark the coefficient of x_k to be c
            coeff[k - 1] = c
            RecursivelyListAllThatWork(k - 1, sum - c * x_k)
            # unmark the coefficient of x_k
            coeff[k - 1] = 0

RecursivelyListAllThatWork(len(data), target_sum)
My problem is that I don't know where/how to integrate my limiting code into the main code in order to restrict the results and allow for negative numbers. When I add a negative number to the list, it is displayed but not included in the output. I think this is because it is not being added to the table (the first part), and I'm not sure how to add it while still keeping the program's structure so I can scale it with more variables.
Thanks in advance, and if anything is unclear please let me know.
Edit: a bit unrelated (if it detracts from the question just ignore it), but since you're looking at the code already: is there a way I can utilize both CPUs on my machine with this code? Right now it only uses one CPU. I know the technical methods of parallel computing in Python, but I'm not sure how to logically parallelize this algorithm.
You can restrict results by changing both loops over c from
for c in range(s / x + 1):
to
max_value = int(abs((target_sum * max_percent)/x))
for c in range(max_value + 1):
This will ensure that any coefficient in the final answer will be an integer in the range 0 to max_value inclusive.
A simple way of adding negative values is to change the loop over s from
for s in range(target_sum + 1):
to
R=200 # Maximum size of any partial sum
for s in range(-R,R+1):
Note that if you do it this way then your solution will have an additional constraint.
The new constraint is that the absolute value of every partial weighted sum must be <=R.
(You can make R large to avoid this constraint reducing the number of solutions, but this will slow down execution.)
The complete code looks like:
from collections import defaultdict

data = [-2, 10, 5, 50, 20, 25, 40]
target_sum = 100

# T[x, i] is True if 'x' can be solved
# by a linear combination of data[:i+1]
T = defaultdict(bool)  # all values are False by default
T[0, 0] = True  # base case

R = 200  # maximum size of any partial sum
max_percent = 0.8  # maximum weight of any term

for i, x in enumerate(data):  # i is index, x is data[i]
    for s in range(-R, R + 1):
        max_value = int(abs((target_sum * max_percent) / x))
        for c in range(max_value + 1):
            if T[s - c * x, i]:
                T[s, i + 1] = True

coeff = [0] * len(data)

def RecursivelyListAllThatWork(k, sum):  # using last k variables, make sum
    # Base case: if we've assigned all the variables correctly, list this solution.
    if k == 0:
        # print what we have so far
        print(' + '.join("%2s*%s" % t for t in zip(coeff, data)))
        return
    x_k = data[k - 1]
    # Recursive step: try all coefficients, but only if they work.
    max_value = int(abs((target_sum * max_percent) / x_k))
    for c in range(max_value + 1):
        if T[sum - c * x_k, k - 1]:
            # mark the coefficient of x_k to be c
            coeff[k - 1] = c
            RecursivelyListAllThatWork(k - 1, sum - c * x_k)
            # unmark the coefficient of x_k
            coeff[k - 1] = 0

RecursivelyListAllThatWork(len(data), target_sum)