I am trying to simulate a grid of spins in python that can change their orientation (represented by the sign):
>>> import numpy as np
>>> spin_values = np.random.choice([-1, 1], (2, 2))
>>> spin_values
array([[-1,  1],
       [ 1,  1]])
I then throw two sets of random indices of that grid for spins that have a certain probability to switch their orientation, let's say:
>>> i = np.array([1, 1])
>>> j = np.array([0, 0])
>>> switches = np.array([-1, -1])
i and j here contain the indices that might change, and switches states whether each spin does switch (-1) or keeps its orientation (1). My idea for calculating the new orientations was:
>>> spin_values[i, j] *= switches
When a spin orientation only changes once this works fine. However, when it is supposed to change twice (as with the example values) it only changes once, therefore giving me a wrong result.
>>> spin_values
array([[-1,  1],
       [-1,  1]])
How could I get the right results while having a short run time (this has to be done many times on a bigger grid)?
I would use numpy.unique to get the count of each unique pair of indices that actually switches (i.e. where switches == -1), then multiply by (-1) ** count:
sw = switches == -1
idx, cnt = np.unique(np.vstack([i[sw], j[sw]]), axis=1, return_counts=True)
spin_values[tuple(idx)] *= (-1) ** cnt
Updated spin_values:
array([[-1,  1],
       [ 1,  1]])
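An alternative worth noting (my addition, not part of the original answer): np.multiply.at performs an unbuffered in-place multiply, so repeated index pairs are applied once each and the values in switches (including any 1s) are used directly:
import numpy as np
spin_values = np.array([[-1, 1], [1, 1]])
i = np.array([1, 1])
j = np.array([0, 0])
switches = np.array([-1, -1])
# unbuffered: a spin listed twice is flipped twice
np.multiply.at(spin_values, (i, j), switches)
# spin_values is again array([[-1, 1], [1, 1]])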
Let's assume I have 100 different kinds of items; each item has a name and a physical weight.
I know the names of all 100 items but the weight of only 80 of them.
When I ship items, I pack them in groups of 10 and sum the weight of these items.
Because some items are missing their weight, this gives an inaccurate sum when I'm about to ship.
I have different shipments with missing weights:
Shipment 1

Item Name   Item Weight
Item 2      10
Item 27     20
Item 42     20
Item 71     -
Item 77     -

Total weight: 75

Shipment 2

Item Name   Item Weight
Item 2      10
Item 27     20
Item 42     20
Item 71     -
Item 92     -

Total weight: 90

Shipment 3

Item Name   Item Weight
Item 2      10
Item 27     20
Item 42     20
Item 55     35
Item 77     -

Total weight: 100
Since some of the shipments share the same items with missing weights, and I have each shipment's total weight, is there a way with machine learning to determine the weight of these items without unpacking the entire shipment?
Or would it just be, in this case, a 100x3 matrix with a lot of empty values?
At this point I'm not really sure if I should use some type of regression to solve this, or if it's just a matrix that would expand a lot if I had n more items to ship.
I also wondered if this was some type of knapsack problem, but I hope someone can guide me in the right direction.
Forget about machine learning. This is a simple system of linear equations.
w_71 + w_77 = 25
w_71 + w_92 = 40
w_77 = 15
You can solve it with sympy.solvers.solveset.linsolve, scipy.optimize.linprog, scipy.linalg.lstsq, or numpy.linalg.lstsq.
sympy.linsolve is maybe the easiest to understand if you are not familiar with matrices; however, if the system is underdetermined, then instead of returning a particular solution to the system, sympy.linsolve will return the general solution in parametric form.
scipy.lstsq and numpy.lstsq expect the problem to be given in matrix form. If there is more than one possible solution, they will return the most "average" one (the minimum-norm solution). However, they cannot take any positivity constraint into account: they might return a solution where one of the variables is negative. You can sometimes work around this by adding an equation to the system that manually forces a variable to be positive, then solving again.
scipy.linprog expects the problem to be given in matrix form; it also expects you to specify a linear objective function, to choose which particular solution is "best" in case there is more than one possible solution. linprog also considers that all variables are nonnegative by default, or allows you to specify explicit bounds for the variables yourself. It also allows you to add inequality constraints, in addition to the equations, if you wish to.
Using sympy.solvers.solveset.linsolve
from sympy.solvers.solveset import linsolve
from sympy import symbols
w71, w77, w92 = symbols('w71 w77 w92')
eqs = [w71+w77-25, w71+w92-40, w77-15]
solution = linsolve(eqs, [w71, w77, w92])
# solution = {(10, 15, 30)}
In your example, there is only one possible solution, so linsolve returned that solution: w71 = 10, w77 = 15, w92 = 30.
However, in case there is more than one possible solution, linsolve will return a parametric form for the general solution:
x,y,z = symbols('x y z')
eqs = [x+y-10, y+z-20]
solution = linsolve(eqs, [x, y, z])
# solution = {(z - 10, 20 - z, z)}
Here there is an infinity of possible solutions. linsolve is telling us that we can pick any value for z, and then we'll get the corresponding x and y as x = z - 10 and y = 20 - z.
Using numpy.linalg.lstsq
lstsq expects the system of equations to be given in matrix form. If there is more than one possible solution, then it will return the most "average" solution. For instance, if the system of equation is simply x + y = 10, then lstsq will return the particular solution x = 5, y = 5 and will ignore more "extreme" solutions such as x = 10, y = 0.
from numpy.linalg import lstsq
# w_71 + w_77 = 25
# w_71 + w_92 = 40
# w_77 = 15
A = [[1, 1, 0], [1, 0, 1], [0, 1, 0]]
b = [25, 40, 15]
solution = lstsq(A, b, rcond=None)
solution[0]
# array([10., 15., 30.])
Here lstsq found the unique solution, w71 = 10, w77=15, w92 = 30.
# x + y = 10
# y + z = 20
A = [[1, 1, 0], [0, 1, 1]]
b = [10, 20]
solution = lstsq(A, b, rcond=None)
solution[0]
# array([-3.55271368e-15, 1.00000000e+01, 1.00000000e+01])
Here lstsq had to choose a particular solution, and chose the one it considered most "average", x = 0, y = 10, z = 10. You might want to round the solution to integers.
One drawback of lstsq is that it doesn't take into account your non-negativity constraint. That is, it might return a solution where one of the variables is negative:
# x + y = 2
# y + z = 20
A = [[1, 1, 0], [0, 1, 1]]
b = [2, 20]
solution = lstsq(A, b, rcond=None)
solution[0]
# array([-5.33333333, 7.33333333, 12.66666667])
See how lstsq ignored the possible positive solution x = 1, y = 1, z = 18 and instead returned the solution it considered most "average", x = -5.33, y = 7.33, z = 12.67.
One way to fix this is to add an equation yourself to force the offending variable to be positive. For instance, here we noticed that lstsq wanted x to be negative, so we can manually force x to be equal to 1 instead, and solve again:
# x + y = 2
# y + z = 20
# x = 1
A = [[1, 1, 0], [0, 1, 1], [1, 0, 0]]
b = [2, 20, 1]
solution = lstsq(A, b, rcond=None)
solution[0]
# array([ 1., 1., 19.])
Now that we manually forced x to be 1, lstsq found solution x=1, y=1, z=19 which we're more happy with.
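Another option worth knowing (my addition, not part of the original answer): scipy.optimize.nnls solves the least-squares problem with the nonnegativity constraint built in, so no manual extra equation is needed. A minimal sketch on the same system:
import numpy as np
from scipy.optimize import nnls
# x + y = 2
# y + z = 20
A = np.array([[1, 1, 0], [0, 1, 1]], dtype=float)
b = np.array([2, 20], dtype=float)
x, residual = nnls(A, b)
# x is elementwise nonnegative by construction; a residual of 0 means
# the equations are satisfied exactly (e.g. x = [0., 2., 18.])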
Using scipy.optimize.linprog
The particularity of linprog is that it expects you to specify the "objective" used to choose a particular solution, in case there is more than one possible solution.
Also, linprog allows you to specify bounds for the variables. The default is that all variables are nonnegative, which is what you want.
from scipy.optimize import linprog
# w_71 + w_77 = 25
# w_71 + w_92 = 40
# w_77 = 15
A = [[1, 1, 0], [1, 0, 1], [0, 1, 0]]
b = [25, 40, 15]
c = [1, 1, 1] # coefficients for objective: minimise w71 + w77 + w92.
solution = linprog(c, A_eq = A, b_eq = b)
solution.x
# array([10., 15., 30.])
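As an aside (my addition, not part of the original answer): the matrix form scales mechanically to more shipments and items, so the worry about the matrix "expanding" is mostly bookkeeping. A hypothetical sketch, assuming each shipment is given as a dict mapping item name to its known weight (None when unknown) plus a measured total:
import numpy as np
from numpy.linalg import lstsq

# hypothetical input format, mirroring the three shipments above
shipments = [
    ({"Item 2": 10, "Item 27": 20, "Item 42": 20, "Item 71": None, "Item 77": None}, 75),
    ({"Item 2": 10, "Item 27": 20, "Item 42": 20, "Item 71": None, "Item 92": None}, 90),
    ({"Item 2": 10, "Item 27": 20, "Item 42": 20, "Item 55": 35, "Item 77": None}, 100),
]

# one row per shipment, one column per item of unknown weight
unknowns = sorted({name for contents, _ in shipments
                   for name, w in contents.items() if w is None})
col = {name: k for k, name in enumerate(unknowns)}

A = np.zeros((len(shipments), len(unknowns)))
b = np.zeros(len(shipments))
for r, (contents, total) in enumerate(shipments):
    # move the known weights to the right-hand side
    b[r] = total - sum(w for w in contents.values() if w is not None)
    for name, w in contents.items():
        if w is None:
            A[r, col[name]] = 1

x = lstsq(A, b, rcond=None)[0]
# unknowns == ['Item 71', 'Item 77', 'Item 92'], x == array([10., 15., 30.])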
I've been working on a Google foobar problem for a couple of days and have all but one test passing, and I'm pretty stuck at this point. Let me know if you have any ideas! I'm using a method described here, and I have a working example up on repl.it here. Here's the problem spec:
Doomsday Fuel
Making fuel for the LAMBCHOP's reactor core is a tricky process because of the exotic matter involved. It starts as raw ore, then during processing, begins randomly changing between forms, eventually reaching a stable form. There may be multiple stable forms that a sample could ultimately reach, not all of which are useful as fuel.
Commander Lambda has tasked you to help the scientists increase fuel creation efficiency by predicting the end state of a given ore sample. You have carefully studied the different structures that the ore can take and which transitions it undergoes. It appears that, while random, the probability of each structure transforming is fixed. That is, each time the ore is in 1 state, it has the same probabilities of entering the next state (which might be the same state). You have recorded the observed transitions in a matrix. The others in the lab have hypothesized more exotic forms that the ore can become, but you haven't seen all of them.
Write a function answer(m) that takes an array of array of nonnegative ints representing how many times that state has gone to the next state and return an array of ints for each terminal state giving the exact probabilities of each terminal state, represented as the numerator for each state, then the denominator for all of them at the end and in simplest form. The matrix is at most 10 by 10. It is guaranteed that no matter which state the ore is in, there is a path from that state to a terminal state. That is, the processing will always eventually end in a stable state. The ore starts in state 0. The denominator will fit within a signed 32-bit integer during the calculation, as long as the fraction is simplified regularly.
For example, consider the matrix m:
[
[0,1,0,0,0,1], # s0, the initial state, goes to s1 and s5 with equal probability
[4,0,0,3,2,0], # s1 can become s0, s3, or s4, but with different probabilities
[0,0,0,0,0,0], # s2 is terminal, and unreachable (never observed in practice)
[0,0,0,0,0,0], # s3 is terminal
[0,0,0,0,0,0], # s4 is terminal
[0,0,0,0,0,0], # s5 is terminal
]
So, we can consider different paths to terminal states, such as:
s0 -> s1 -> s3
s0 -> s1 -> s0 -> s1 -> s0 -> s1 -> s4
s0 -> s1 -> s0 -> s5
Tracing the probabilities of each, we find that
s2 has probability 0
s3 has probability 3/14
s4 has probability 1/7
s5 has probability 9/14
So, putting that together, and making a common denominator, gives an answer in the form of
[s2.numerator, s3.numerator, s4.numerator, s5.numerator, denominator] which is
[0, 3, 2, 9, 14].
Test cases
Inputs:
(int) m = [[0, 2, 1, 0, 0], [0, 0, 0, 3, 4], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
Output:
(int list) [7, 6, 8, 21]
Inputs:
(int) m = [[0, 1, 0, 0, 0, 1], [4, 0, 0, 3, 2, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]
Output:
(int list) [0, 3, 2, 9, 14]
Here's my code so far.
from __future__ import division
from itertools import compress
from itertools import starmap
from operator import mul
import fractions
def convertMatrix(transMatrix):
    probMatrix = []
    for i in range(len(transMatrix)):
        row = transMatrix[i]
        newRow = []
        rowSum = sum(transMatrix[i])
        if all([v == 0 for v in transMatrix[i]]):
            for j in transMatrix[i]:
                newRow.append(0)
            newRow[i] = 1
            probMatrix.append(newRow)
        else:
            for j in transMatrix[i]:
                if j == 0:
                    newRow.append(0)
                else:
                    newRow.append(j/rowSum)
            probMatrix.append(newRow)
    return probMatrix

def answer(m):
    # convert matrix numbers into probabilities
    probMatrix = convertMatrix(m)

    # find terminal states
    terminalStateFilter = []
    for row in range(len(m)):
        if all(x == 0 for x in m[row]):
            terminalStateFilter.append(True)
        else:
            terminalStateFilter.append(False)

    # multiply matrix by probability vector
    oldFirstRow = probMatrix[0]
    probVector = None
    for i in range(3000):
        probVector = [sum(starmap(mul, zip(oldFirstRow, col))) for col in zip(*probMatrix)]
        oldFirstRow = probVector

    # generate numerators
    numerators = []
    for i in probVector:
        numerator = fractions.Fraction(i).limit_denominator().numerator
        numerators.append(numerator)

    # generate denominators
    denominators = []
    for i in probVector:
        denominator = fractions.Fraction(i).limit_denominator().denominator
        denominators.append(denominator)

    # calculate factors to multiply numerators by
    factors = [max(denominators)/x for x in denominators]
    # multiply numerators by factors
    numeratorsTimesFactors = [a*b for a,b in zip(numerators, factors)]
    # filter numerators by terminal state booleans
    terminalStateNumerators = list(compress(numeratorsTimesFactors, terminalStateFilter))

    # append numerators and denominator to answer
    answerlist = []
    for i in terminalStateNumerators:
        answerlist.append(i)
    answerlist.append(max(denominators))
    return list(map(int, answerlist))
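For what it's worth (my addition, not part of the original post): the spec's worked example can be checked exactly with the standard absorbing-Markov-chain decomposition, where Q holds the transitions among the non-terminal states s0 and s1, R the transitions from those to the terminal states s2..s5, and B = (I - Q)^-1 * R gives the absorption probabilities. A sketch using sympy's rational arithmetic:
from sympy import Matrix, Rational, eye

# rows normalized from m: s0 -> s1, s5 with prob 1/2 each;
# s1 -> s0, s3, s4 with probs 4/9, 1/3, 2/9
Q = Matrix([[0, Rational(1, 2)],
            [Rational(4, 9), 0]])
R = Matrix([[0, 0, 0, Rational(1, 2)],
            [0, Rational(1, 3), Rational(2, 9), 0]])

B = (eye(2) - Q).inv() * R
print(B.row(0))  # Matrix([[0, 3/14, 1/7, 9/14]]) -> [0, 3, 2, 9, 14]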
I'm working on a project involving binary patterns (here np.arrays of 0 and 1).
I'd like to modify a random subset of these and return several altered versions of the pattern where a given fraction of the values has been changed (like mapping a function to a random subset of an array of fixed size).
For example: take the pattern [0 0 1 0 1] and rate 0.2, return [[0 1 1 0 1] [1 0 1 0 1]]
It seems possible by using auxiliary arrays and iterating with a condition, but is there a "clean" way to do that?
Thanks in advance!
The map function works on boolean arrays too. You could add the subsample logic to your function, like so:
import numpy as np
rate = 0.2
f = lambda x: np.random.choice((True, x),1,p=[rate,1-rate])[0]
a = np.array([0,0,1,0,1], dtype='bool')
map(f, a)
# This will output array a with on average 20% of the elements changed to "1"
# it can be slightly more or less than 20%, by chance.
Or you could rewrite a map function, like so:
import numpy as np
def map_bitarray(f, b, rate):
    '''
    maps function f on a random subset of b
    :param f: the function, should take a binary array of size <= len(b)
    :param b: the binary array
    :param rate: the fraction of elements that will be replaced
    :return: the modified binary array
    '''
    c = np.copy(b)
    num_elem = len(c)
    # size must be an int; replace=False picks distinct positions
    idx = np.random.choice(range(num_elem), int(num_elem * rate), replace=False)
    c[idx] = f(c[idx])
    return c
f = lambda x: True
b = np.array([0,0,1,0,1], dtype='bool')
map_bitarray(f, b, 0.2)
# This will output array b with exactly 20% of the elements changed to "1"
rate=0.2
repeats=5
seed=[0,0,1,0,1]
realizations=np.tile(seed,[repeats,1]) ^ np.random.binomial(1,rate,[repeats,len(seed)])
Use np.tile() to generate a matrix from the seed row.
np.random.binomial() to generate a binomial mask matrix with your requested rate.
Apply the mask with the xor binary operator ^
EDIT:
Based on @Jared Goguen's comments, if you want to change exactly 20% of the bits, you can build the mask by choosing the elements to change at random:
seed=[1,0,1,0,1]
rate=0.2
repeats=10
mask_list = []
for _ in xrange(repeats):
    y = np.zeros(len(seed), np.int32)
    # size must be an int; replace=False gives exactly rate*len(seed) distinct bits
    y[np.random.choice(len(seed), int(rate * len(seed)), replace=False)] = 1
    mask_list.append(y)
mask = np.vstack(mask_list)
realizations = np.tile(seed, [repeats, 1]) ^ mask
So, there's already an answer that provides sequences where each element has a random transition probability. However, it seems like you might want an exact fraction of the elements to change instead. For example, [1, 0, 0, 1, 0] can change to [1, 1, 0, 1, 0] or [0, 0, 0, 1, 0], but not [1, 1, 1, 1, 0].
The premise, based off of xvan's answer, uses the bit-wise xor operator ^. When a bit is xor'd with 0, its value will not change. When a bit is xor'd with 1, it will flip. From your question, it seems like you want to change len(seq)*rate bits in the sequence. First, create a mask that contains len(seq)*rate ones. To get an altered sequence, xor the original sequence with a shuffled version of mask.
Here's a simple, inefficient implementation:
import numpy as np
def edit_sequence(seq, rate, count):
    length = len(seq)
    change = int(length * rate)
    mask = [0]*(length - change) + [1]*change
    return [seq ^ np.random.permutation(mask) for _ in range(count)]
rate = 0.2
seq = np.array([0, 0, 1, 0, 1])
print edit_sequence(seq, rate, 5)
# [0, 0, 1, 0, 0]
# [0, 1, 1, 0, 1]
# [1, 0, 1, 0, 1]
# [0, 1, 1, 0, 1]
# [0, 0, 0, 0, 1]
I don't really know much about NumPy, so maybe someone with more experience can make this efficient, but the approach seems solid.
Edit: Here's a version that benchmarks about 30% faster:
def edit_sequence(seq, rate, count):
    mask = np.zeros(len(seq), dtype=int)
    mask[:int(len(seq) * rate)] = 1   # slice index must be an int
    output = []
    for _ in range(count):
        np.random.shuffle(mask)
        output.append(seq ^ mask)
    return output
It appears that this updated version scales very well with the size of seq and the value of count. Using dtype=bool in seq and mask yields another 50% improvement in the timing.
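For reference (my addition, not in the original answer), the dtype=bool variant mentioned above would look like this; seq must then also be a boolean array so that the xor stays boolean:
def edit_sequence_bool(seq, rate, count):
    mask = np.zeros(len(seq), dtype=bool)
    mask[:int(len(seq) * rate)] = True   # exactly rate*len(seq) flips per output
    output = []
    for _ in range(count):
        np.random.shuffle(mask)
        output.append(seq ^ mask)
    return output

seq = np.array([0, 0, 1, 0, 1], dtype=bool)
print edit_sequence_bool(seq, 0.2, 5)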
I have an object which is described by two quantities, A and B (in real case they can be more than two). Objects are correlated depending on the value of A and B. In particular I know the correlation matrix for A and for B. Just as example:
a = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]])
b = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
na = a.shape[0]
nb = b.shape[0]
Correlation for A:
So if one element has A == 0.5 and the other has A == 1.5, they are fully correlated (red). If instead one element has A == 0.5 and the second has A == 3.5, they are uncorrelated (blue).
Similarly for B:
Now I want to multiply the two correlation matrices, but I want the final matrix to have two axes, where each new axis is a folded version of the original axes:
def get_folded_bin(ia, ib):
    return ia * nb + ib
Here is what I am doing:
result = np.swapaxes(np.tensordot(a, b, axes=0), 1, 2).reshape(na* nb, na * nb)
and in particular this must hold:
for ia1 in xrange(na):
    for ia2 in xrange(na):
        for ib1 in xrange(nb):
            for ib2 in xrange(nb):
                assert(a[ia1, ia2] * b[ib1, ib2] ==
                       result[get_folded_bin(ia1, ib1), get_folded_bin(ia2, ib2)])
Actually, my problem is to do this with more quantities (A, B, C, ...) in a general way. Maybe there is also a simpler function within numpy to do that.
np.einsum lets you simplify the tensordot expression a bit:
result = np.einsum('ij,kl->ikjl',a,b).reshape(-1, na * nb)
I don't think there's a way of eliminating the reshape.
It may also be easier to generalize to more arrays, though I wouldn't get carried away with too many iteration variables in one einsum expression.
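For instance (my addition, sketching the generalization the answer alludes to), with a third square correlation matrix c of size nc, the same index pattern extends to:
nc = c.shape[0]
result = np.einsum('ij,kl,mn->ikmjln', a, b, c).reshape(na * nb * nc, -1)
The rule is: list all the "row" indices first and all the "column" indices second, in the same quantity order, then reshape.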
I think I have finally found a solution:
np.kron(a,b)
and then I can compose with
np.kron(np.kron(a,b), c)
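A quick sanity check (my addition): for the a and b above, np.kron reproduces exactly the folded matrix built earlier with tensordot/einsum:
import numpy as np
assert np.array_equal(np.kron(a, b),
                      np.einsum('ij,kl->ikjl', a, b).reshape(na * nb, na * nb))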