I'm very new to Python, so please forgive my ignorance. I'm trying to calculate the total number of energy units in a system. For example, the Omega here will output both (0,0,0,1) and (2,2,2,1) along with a whole lot of other tuples. I want to extract from Omega how many tuples have a total value of 1 (like the first example) and how many have a total value of 7 (like the second example). How do I achieve this?
import numpy as np
import matplotlib.pyplot as plt
from itertools import product
N = 4  # The number of oscillators
q = range(3)  # Range of possible energy units per oscillator
Omega = product(q, repeat = N)
print(list(product(q, repeat = N)))
Try this:
l = list(product(q, repeat=N))
l1 = [i for i in l if sum(i) == 1]
l2 = [i for i in l if sum(i) == 7]
print(l1, l2)
I believe you can use sum() on tuples as well as lists of integers/numbers.
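If you only need the counts rather than the tuples themselves, a collections.Counter keyed on each tuple's sum gives every total in one pass. A minimal sketch (my addition, using the same q and N as the question):
from collections import Counter
from itertools import product

q = range(3)
N = 4
# Tally how many tuples produce each total energy
counts = Counter(sum(t) for t in product(q, repeat=N))
print(counts[1], counts[7])  # number of tuples summing to 1 and to 7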
Now, you say Omega is a list of tuples, is that correct? Something like
Omega = [(0,0,0,1), (2,2,2,1), ...]
In that case I think you can do
sums_to_1 = [int_tuple for int_tuple in Omega if sum(int_tuple) == 1]
If you want a default value for the tuples that don't sum to one, you can move the conditional to the front of the list comprehension:
sums_to_1 = [int_tuple if sum(int_tuple) == 1 else 'SomeDefaultValue' for int_tuple in Omega]
I have two ordered lists of consecutive integers m = 0, 1, ..., M and n = 0, 1, 2, ..., N. Each value of m has a probability pm, and each value of n has a probability pn. I am trying to find the ordered list of unique values r = m/n and their probabilities pr. I am aware that r is infinite if n = 0 and can even be undefined if m = n = 0.
In practice, I would like to run this for M and N each of the order of 2E4, meaning up to 4E8 values of r - which would mean about 3 GB of floats (assuming 8 bytes per float).
For this calculation, I have written the python code below.
The idea is to iterate over m and n, and for each new m/n, insert it in the right place with its probability if it isn't there yet, otherwise add its probability to the existing number. My assumption is that it is easier to sort things on the way instead of waiting until the end.
The cases related to 0 are added at the end of the loop.
I am using the Fraction class since we are dealing with fractions.
The code also tracks the multiplicity of each unique value of m/n.
I have tested up to M=N=100, and things are quite slow. Are there better approaches to the question, or more efficient ways to tackle the code?
Timing:
M=N=30: 1 s
M=N=50: 6 s
M=N=80: 30 s
M=N=100: 82 s
import numpy as np
from fractions import Fraction
import time  # For timing
start_time = time.time() # Timing
M, N = 6, 4
mList, nList = np.arange(1, M+1), np.arange(1, N+1) # From 1 to M inclusive, deal with 0 later
mProbList, nProbList = [1/(M+1)]*(M), [1/(N+1)]*(N) # Probabilities, here assumed equal (not general case)
# Deal with mn=0 later
pmZero, pnZero = 1/(M+1), 1/(N+1) # P(m=0) and P(n=0)
pNaN = pmZero * pnZero # P(0/0) = P(m=0)P(n=0)
pZero = pmZero * (1 - pnZero) # P(0) = P(m=0)P(n!=0)
pInf = pnZero * (1 - pmZero) # P(inf) = P(m!=0)P(n=0)
# Main list of r=m/n, P(r) and mult(r)
# Start with first line, m=1
rList = [Fraction(mList[0], n) for n in nList[::-1]] # Smallest first
rProbList = [mProbList[0] * nP for nP in nProbList[::-1]] # Start with first line
rMultList = [1] * len(rList) # Multiplicity of each element
# Main loop
for m, mP in zip(mList[1:], mProbList[1:]):
    for n, nP in zip(nList[::-1], nProbList[::-1]):  # Pick an n value
        r, rP, rMult = Fraction(m, n), mP*nP, 1
        for i in range(len(rList)-1):  # See where it fits in the existing list
            if r < rList[i]:
                rList.insert(i, r)
                rProbList.insert(i, rP)
                rMultList.insert(i, 1)
                break
            elif r == rList[i]:
                rProbList[i] += rP
                rMultList[i] += 1
                break
            elif r < rList[i+1]:
                rList.insert(i+1, r)
                rProbList.insert(i+1, rP)
                rMultList.insert(i+1, 1)
                break
            elif r == rList[i+1]:
                rProbList[i+1] += rP
                rMultList[i+1] += 1
                break
            if r > rList[-1]:
                rList.append(r)
                rProbList.append(rP)
                rMultList.append(1)
                break
# Deal with 0
rList.insert(0, Fraction(0, 1))
rProbList.insert(0, pZero)
rMultList.insert(0, N)
# Deal with infty
rList.append(np.inf)
rProbList.append(pInf)
rMultList.append(M)
# Deal with undefined case
rList.append(np.nan)
rProbList.append(pNaN)
rMultList.append(1)
print(".... done in %s seconds." % round(time.time() - start_time, 2))
print("************** Final list\nr", 'Prob', 'Mult')
for r, rP, rM in zip(rList, rProbList, rMultList): print(r, rP, rM)
print("************** Checks")
print("mList", mList, 'nList', nList)
print("Sum of proba = ", np.sum(rProbList))
print("Sum of multi = ", np.sum(rMultList), "\t(M+1)*(N+1) = ", (M+1)*(N+1))
Based on the suggestion of @Prune, and on this thread about merging lists of tuples, I have modified the code as below. It is a lot easier to read, and runs about an order of magnitude faster for N = M = 80 (I have omitted dealing with 0 - it would be done the same way as in the original post). I assume the merge and the conversion back to lists could be tweaked further yet.
# Do calculations
data = [(Fraction(m, n), mProb(m) * nProb(n)) for n in range(1, N+1) for m in range(1, M+1)]
data.sort()
# Merge duplicates using a dictionary
d = {}
for r, p in data:
    if r not in d:
        d[r] = [0, 0]
    d[r][0] += p
    d[r][1] += 1
# Convert back to lists
rList, rProbList, rMultList = [], [], []
for k in d:
    rList.append(k)
    rProbList.append(d[k][0])
    rMultList.append(d[k][1])
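As an aside (not in the original post), since data is already sorted, the same merge can be done with itertools.groupby instead of a dictionary; a sketch under the same assumptions:
from itertools import groupby

# Merge duplicates: consecutive equal ratios form one group in the sorted data
rList, rProbList, rMultList = [], [], []
for r, group in groupby(data, key=lambda pair: pair[0]):
    probs = [p for _, p in group]
    rList.append(r)
    rProbList.append(sum(probs))
    rMultList.append(len(probs))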
I expect that "things are quite slow" because you've chosen a known inefficient sort. A single list insertion is O(K) (later list elements have to be bumped over, and there is added storage allocation on a regular basis). Thus a full-list insertion sort is O(K^2). For your notation, that is O((M*N)^2).
If you want any sort of reasonable performance, research and use the best-know methods. The most straightforward way to do this is to make your non-exception results as a simple list comprehension, and use the built-in sort for your penultimate list. Simply append your n=0 cases, and you're done in O(K log K) time.
I the expression below, I've assumed functions for m and n probabilities.
This is a notational convenience; you know how to directly compute them, and can substitute those expressions if you wish.
data = [(mProb(m) * nProb(n), Fraction(m, n))
        for n in range(1, N+1)
        for m in range(0, M+1)]
data.sort()
data.extend([])  # generate your "zero" cases here
I am working in SageMath (which is Python-based); I am quite new to programming and have the following question. In my computations I have a quadratic form x^T A x = b, where the matrix A is already defined as a symmetric matrix, and x is defined as
import itertools
X = itertools.product([0,1], repeat = n)
for x in X:
    x = vector(x)
    print(x)
as all combinations of [0,1] repeated n times. I obtained a set of values for b in the following way:
import itertools
X = itertools.product([0,1], repeat = n)
results = []
for x in X:
    x = vector(x)
    x = x.row()
    v = x.transpose()
    b = x * A * v
    results.append(b[0, 0])
And then I defined:
U = set(results)
U1 = sorted(U)
A = []
for i in U1:
    U2 = round(i, 2)
    A.append(U2)
So I have a sorted list from which to take a few minimal values of my results. I need to extract the minimal values from the set and identify which particular x corresponds to each value of b. I heard that I can use a dictionary and define preimages in it, but I am really struggling to define my dictionary as {key: value}. Could someone please help me solve the problem, or give me an idea of what direction I should think in? Thank you.
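One way to structure this - a sketch in plain Python/NumPy rather than Sage, with a random symmetric matrix standing in for the real A - is to key a dictionary on b and collect every x that produces it:
import itertools
import numpy as np

n = 4
A = np.random.rand(n, n)
A = (A + A.T) / 2  # hypothetical symmetric matrix standing in for the real A

preimages = {}  # maps each value b to the list of vectors x with x^T A x = b
for x in itertools.product([0, 1], repeat=n):
    xv = np.array(x)
    b = round(float(xv @ A @ xv), 2)
    preimages.setdefault(b, []).append(x)

# The smallest values of b and the x's that produce them:
for b in sorted(preimages)[:3]:
    print(b, preimages[b])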
I am trying to find the standard deviation of a sequence of numbers extracted from the combinations of 30 dice that sum to 120. I am very new to Python; this code makes the console freeze because the number of combinations is enormous, and I am not sure how to fit them into a smaller, more efficient function. What I did is:
found all possible combinations of 30 dice;
filtered combinations that sum up to 120;
multiplied together the items of each combination in the result list;
tried extracting standard deviation.
Here is the code:
import itertools
import numpy
dice = [1,2,3,4,5,6]
subset = itertools.product(dice, repeat = 30)
result = []
for x in subset:
    if sum(x) == 120:
        result.append(x)
my_result = numpy.product(result, axis = 1).tolist()
std = numpy.std(my_result)
print(std)
Note that D(X) = E(X^2) - E(X)^2, so you can solve this problem analytically with the following recurrences, where h[i][N] is the number of ordered ways i dice can sum to N, f[i][N] is the sum over those ways of the product of the faces, and g[i][N] is the sum of the squared products:
f[i][N] = sum(k * f[i-1][N-k])    (1 <= k <= 6)
g[i][N] = sum(k^2 * g[i-1][N-k])  (1 <= k <= 6)
h[i][N] = sum(h[i-1][N-k])        (1 <= k <= 6)
f[1][k] = k      (1 <= k <= 6)
g[1][k] = k^2    (1 <= k <= 6)
h[1][k] = 1      (1 <= k <= 6)
Sample implementation:
import numpy as np
Nmax = 120
nmax = 30
min_value = 1
max_value = 6
f = np.zeros((nmax+1, Nmax+1), dtype ='object')
g = np.zeros((nmax+1, Nmax+1), dtype ='object') # the intermediate results will be really huge, to keep them accurate we have to utilize python big-int
h = np.zeros((nmax+1, Nmax+1), dtype ='object')
for i in range(min_value, max_value+1):
    f[1][i] = i
    g[1][i] = i**2
    h[1][i] = 1
for i in range(2, nmax+1):
    for N in range(1, Nmax+1):
        f[i][N] = 0
        g[i][N] = 0
        h[i][N] = 0
        for k in range(min_value, max_value+1):
            f[i][N] += k*f[i-1][N-k]
            g[i][N] += (k**2)*g[i-1][N-k]
            h[i][N] += h[i-1][N-k]
result = np.sqrt(float(g[nmax][Nmax]) / h[nmax][Nmax] - (float(f[nmax][Nmax]) / h[nmax][Nmax]) ** 2)
# result = 32128174994365296.0
You ask for a result over an unfiltered space of 6^30 ≈ 2*10^23 combinations, impossible to handle as such.
There are two possibilities that can be combined:
1. Pre-treat the problem with more thinking, e.g. work out how to sample only those combinations with sum 120.
2. Do a Monte Carlo simulation instead, i.e. don't enumerate all combinations, but draw a random few thousand to obtain a representative sample and determine the std sufficiently accurately.
Now, I only apply (2), giving the brute force code:
import random

N = 30      # number of dice
M = 100000  # number of samples
S = 120     # required sum
result = [[random.randint(1, 6) for _ in range(N)] for _ in range(M)]
result = [s for s in result if sum(s) == S]
Now, that result should be comparable to your result before using numpy.product ... that part I couldn't follow, though...
OK, if you are after the standard deviation of the product of the 30 dice, that is what your code does. Then I need 1,000,000 samples to get roughly reproducible values for the std (1 digit) - that takes my PC about 20 seconds, still considerably less than 1 million years :-D.
Is a number like 3.22*10^16 what you are looking for?
Edit after comments:
Well, sampling the frequency of each face instead gives only 6 independent variables - actually even 4, by substituting in the constraints (sum = 120, total number of dice = 30). My current code looks like this:
import itertools
import numpy

def p2(b, s):
    return 2**b * 3**s[0] * 4**s[1] * 5**s[2] * 6**s[3]

hits = range(31)
subset = itertools.product(hits, repeat=4)  # only the 3, 4, 5, 6 frequencies
product = []
permutations = []
for s in subset:
    b = 90 - (2*s[0] + 3*s[1] + 4*s[2] + 5*s[3])  # frequency of face 2
    a = 30 - (b + sum(s))                         # frequency of face 1
    if 0 <= b <= 30 and 0 <= a <= 30:
        product.append(p2(b, s))
        permutations.append(1)  # TODO: Replace 1 with the number of possible permutations
print(numpy.std(product))  # TODO: calculate std manually, considering permutations
This computes in about 1 second, but the confusing part is that I get as a result 1.28737023733e+17. Either my previous approaches or this one has a bug - or both.
Sorry - not that easy: the samples do not all have the same probability - that is the problem here. Each frequency combination corresponds to a different number of dice orderings, which gives its weight, and that has to be taken into account before taking the standard deviation. I have drafted that in the code above.
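To make that draft concrete, here is a sketch (my addition, not the original answer's code) that fills in both TODOs by weighting each frequency combination with its multinomial coefficient, i.e. the number of dice orderings it represents:
import itertools
from math import factorial, sqrt

def multinomial(counts):
    # Number of distinct dice orderings with these face frequencies
    total = factorial(sum(counts))
    for c in counts:
        total //= factorial(c)
    return total

products, weights = [], []
for s in itertools.product(range(31), repeat=4):  # frequencies of faces 3..6
    b = 90 - (2*s[0] + 3*s[1] + 4*s[2] + 5*s[3])  # frequency of face 2 (from sum = 120)
    a = 30 - (b + sum(s))                         # frequency of face 1 (from count = 30)
    if 0 <= b <= 30 and 0 <= a <= 30:
        products.append(2**b * 3**s[0] * 4**s[1] * 5**s[2] * 6**s[3])
        weights.append(multinomial((a, b) + s))

total = sum(weights)
mean = sum(p * w for p, w in zip(products, weights)) / total
mean_sq = sum(p * p * w for p, w in zip(products, weights)) / total
print(sqrt(mean_sq - mean * mean))
If the weighting is right, the printed value should agree with the exact recurrence result above (about 3.21e16) and with the Monte Carlo estimate of 3.22e16.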
How to create N "random" strings of length K using the probability table? K would be some even number.
prob_table = {'aa': 0.2, 'ab': 0.3, 'ac': 0.5}
Say K = 6; then there would be a higher probability of 'acacab' than of 'aaaaaa'.
This is a sub-problem of a larger task in which I am generating synthetic sequences based on a probability table. I am not sure how to use the probability table to generate "random" strings.
What I have so far:
def seq_prob(fprob_table, K=6, N=10):
    # fprob_table is the probability dictionary that you input
    # K is the length of the sequence
    # N is the number of sequences
    seq_list = []
    # possibly using itertools or random to generate the semi-"random" strings based on the probabilities
    return seq_list
There are some good approaches to making weighted random choices described at the end of the documentation for the builtin random module:
A common task is to make a random.choice() with weighted probabilities.
If the weights are small integer ratios, a simple technique is to build a sample population with repeats:
>>> weighted_choices = [('Red', 3), ('Blue', 2), ('Yellow', 1), ('Green', 4)]
>>> population = [val for val, cnt in weighted_choices for i in range(cnt)]
>>> random.choice(population)
'Green'
A more general approach is to arrange the weights in a cumulative distribution with itertools.accumulate(), and then locate the random value with bisect.bisect():
>>> choices, weights = zip(*weighted_choices)
>>> cumdist = list(itertools.accumulate(weights))
>>> x = random.random() * cumdist[-1]
>>> choices[bisect.bisect(cumdist, x)]
'Blue'
To adapt that latter approach to your specific problem, I'd do:
import random
import itertools
import bisect
def seq_prob(fprob_table, K=6, N=10):
    choices, weights = zip(*fprob_table.items())
    cumdist = list(itertools.accumulate(weights))
    results = []
    for _ in range(N):
        s = ""
        while len(s) < K:
            x = random.random() * cumdist[-1]
            s += choices[bisect.bisect(cumdist, x)]
        results.append(s)
    return results
This assumes that the key strings in your probability table are all the same length. If they have several different lengths, this code will sometimes (perhaps most of the time!) give answers that are longer than K characters. It also assumes that K is an exact multiple of the key length, though it will still work if that's not true (it will just give result strings that are all longer than K characters, since there's no way to hit K exactly).
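As an aside (not part of the original answer), on Python 3.6+ the cumulative-distribution bookkeeping can be delegated to random.choices, which accepts weights directly. A minimal sketch under the same equal-key-length assumption:
import random

def seq_prob(fprob_table, K=6, N=10):
    pairs = list(fprob_table)             # the key strings, e.g. 'aa', 'ab', 'ac'
    weights = list(fprob_table.values())
    step = len(pairs[0])                  # assumes all keys have the same length
    return ["".join(random.choices(pairs, weights=weights, k=K // step))
            for _ in range(N)]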
You could use random.random:
from random import random
def seq_prob(fprob_table, K=6, N=10):
    # fprob_table is the probability dictionary that you input
    # K is the length of the sequence
    # N is the number of sequences
    seq_list = []
    s = ""
    while len(seq_list) < N:
        for k, v in fprob_table.items():
            if len(s) == K:
                seq_list.append(s)
                s = ""
                break
            rn = random()
            if rn <= v:
                s += k
    return seq_list
This can no doubt be improved upon, but random.random is useful when dealing with probabilities.
I'm sure there is a cleaner/better way, but here is one easy way to do this.
Here we fill pick_list with 100 character-pair entries, the number of copies of each key determined by its probability. In this case, there are 20 'aa', 30 'ab' and 50 'ac' entries in pick_list. Then random.choice(pick_list) uniformly pulls a random entry from the list.
import random
prob_table = {'aa': 0.2, 'ab': 0.3, 'ac': 0.5}
def seq_prob(fprob_table, K=6, N=10):
    # fprob_table is the probability dictionary that you input
    # fill pick_list with copies of each key in proportion to its probability
    pick_list = []
    for key, prob in fprob_table.items():
        pick_list.extend([key] * int(prob * 100))
    # K is the length of the sequence
    # N is the number of sequences
    seq_list = []
    for i in range(N):
        sub_seq = "".join(random.choice(pick_list) for _ in range(int(K/2)))
        seq_list.append(sub_seq)
    return seq_list
With results:
seq_prob(prob_table)
['ababac',
'aaacab',
'aaaaac',
'acacac',
'abacac',
'acaaac',
'abaaab',
'abaaab',
'aaabaa',
'aaabaa']
If your tables or sequences are large, using numpy may be helpful as it will probably be significantly faster. Also, numpy is built for this sort of problem, and the approach is easy to understand and just 3 or 4 lines.
The idea would be to convert the probabilities into cumulative probabilities, ie, mapping (.2, .5, .3) to (.2, .7, 1.), and then random numbers generated along the flat distribution from 0 to 1 will fall within the bins of the cumulative sum with a frequency corresponding to the weights. Numpy's searchsorted can be used to quickly find which bin the random values lie within. That is,
import numpy as np
prob_table = {'aa': 0.2, 'ab': 0.3, 'ac': 0.5}
N = 10
k = 3  # number of pairs per string (not number of characters)
rvals = np.random.random((N, k))  # generate a bunch of random values
string_indices = np.searchsorted(np.cumsum(list(prob_table.values())), rvals)  # weighted indices
x = np.array(list(prob_table.keys()))[string_indices]  # get the strings associated with the indices
y = ["".join(x[i, :]) for i in range(x.shape[0])]  # convert this to a list of strings
# y = ['acabab', 'acacab', 'acabac', 'aaacaa', 'acabac', 'acacab', 'acabaa', 'aaabab', 'abacac', 'aaabab']
Here I used k as the number of pairs per string rather than K as the number of characters, since the problem statement is ambiguous about strings versus characters.
I would like to solve a minimum set cover problem of the following sort. All the lists contain only 1s and 0s.
I say that a list A covers a list B if you can make B from A by inserting exactly x symbols.
Consider all 2^n lists of 1s and 0s of length n and set x = n/3. I would like to compute a minimal set of lists of length 2n/3 that covers them all.
Here is a naive approach I have started on. For every possible list of length 2n/3, I create the set of all lists that can be made from it, using this function (written by DSM).
from itertools import product, combinations
def all_fill(source, num):
    output_len = len(source) + num
    for where in combinations(range(output_len), len(source)):
        # start with every possibility
        poss = [[0, 1]] * output_len
        # impose the source list
        for w, s in zip(where, source):
            poss[w] = [s]
        # yield every remaining possibility
        for tup in product(*poss):
            yield tup
I then create the set of sets as follows using n = 6 as an example.
n = 6
shortn = 2*n // 3
x = n // 3
coversets = set()
for seq in product([0, 1], repeat=shortn):
    coversets.add(frozenset(all_fill(seq, x)))
I would like to find a minimal set of sets from coversets whose union is allset = set(product([0,1], repeat=n)).
In this case, set(all_fill([1,1,1,1],2)), set(all_fill([0,0,0,0],2)), set(all_fill([1,1,0,0],2)), set(all_fill([0,0,1,1],2)) will do.
My aim is to solve the problem for n = 12. I am happy to use external libraries if that will help and I expect the time to be exponential in n in the worst case.
I've written a small program that writes an integer program to be solved by CPLEX or another MIP solver. Below it is the solution it finds for n = 12.
from collections import defaultdict
from itertools import product, combinations

def all_fill(source, num):
    output_len = len(source) + num
    for where in combinations(range(output_len), len(source)):
        poss = [[0, 1]] * output_len
        for w, s in zip(where, source):
            poss[w] = [s]
        for tup in product(*poss):
            yield tup

def variable_name(seq):
    return 'x' + ''.join(str(s) for s in seq)

n = 12
shortn = 2 * n // 3
x = n // 3
all_seqs = list(product([0, 1], repeat=shortn))
hit_sets = defaultdict(set)
for seq in all_seqs:
    for fill in all_fill(seq, x):
        hit_sets[fill].add(seq)
print('Minimize')
print(' + '.join(variable_name(seq) for seq in all_seqs))
print('Subject To')
for fill, seqs in hit_sets.items():
    print(' + '.join(variable_name(seq) for seq in seqs), '>=', 1)
print('Binary')
for seq in all_seqs:
    print(variable_name(seq))
print('End')
MIP - Integer optimal solution: Objective = 1.0000000000e+01
Solution time = 7.66 sec. Iterations = 47411 Nodes = 337
CPLEX> Incumbent solution
Variable Name Solution Value
x00000000 1.000000
x00000111 1.000000
x00011110 1.000000
x00111011 1.000000
x10110001 1.000000
x11000100 1.000000
x11001110 1.000000
x11100001 1.000000
x11111000 1.000000
x11111111 1.000000
All other variables matching '*' are 0.
CPLEX>
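If CPLEX is not available, the same model can be built and solved in-process with an open-source solver. Here is a sketch (my addition, not part of the original answer) using PuLP with its bundled CBC backend, reusing all_seqs, hit_sets and variable_name from the program above; CBC may well be slower than CPLEX on this model:
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

prob = LpProblem("min_set_cover", LpMinimize)
# One binary variable per candidate list of length 2n/3
xvars = {seq: LpVariable(variable_name(seq), cat="Binary") for seq in all_seqs}
prob += lpSum(xvars.values())  # objective: minimize the number of chosen lists
for fill, seqs in hit_sets.items():  # every length-n list must be covered at least once
    prob += lpSum(xvars[seq] for seq in seqs) >= 1
prob.solve()
chosen = [seq for seq, v in xvars.items() if v.value() == 1]
print(len(chosen))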