In a recent homework, I was asked to perform a Bayesian fit over a set of data t and e using a Metropolis algorithm. The relationship between t and e is given by:
e(t) = e_0*cos(w*t)
w = 2 * pi
The Metropolis algorithm is (it works fine with other fits):
import numpy as np

def metropolis(logP, args, v0, Nsteps, stepSize):
    vCur = v0
    logPcur = logP(vCur, *args)
    v = []
    Nattempts = 0

    for i in range(Nsteps):
        while True:
            # Propose step:
            vNext = vCur + stepSize*np.random.randn(*vCur.shape)
            logPnext = logP(vNext, *args)
            Nattempts += 1

            # Accept/reject step:
            Pratio = (1. if logPnext > logPcur else np.exp(logPnext - logPcur))
            if np.random.rand() < Pratio:
                vCur = vNext
                logPcur = logPnext
                v.append(vCur)
                break

    acceptRatio = Nsteps*(1./Nattempts)
    return np.array(v), acceptRatio
I then tried to Bayesian-fit the cosine wave using the Metropolis algorithm above:
import pandas as pd
import seaborn as sns

e_0 = -0.00155

def strain_t(e_0, t):
    return e_0*np.cos(2*np.pi*t)

data = pd.read_csv('stressStrain.csv')
t = np.array(data['t'])
e = strain_t(e_0, t)

def logfitstrain_t(params, t, e):
    e_0 = params[0]
    sigmaR = params[1]
    strainModel = strain_t(e_0, t)
    return np.sum(-0.5*((e - strainModel)/sigmaR)**2 - np.log(sigmaR))

params0 = np.array([-0.00155, np.std(t)])
params, accRatio = metropolis(logfitstrain_t, (t, e), params0, 1000, 0.042)
print('Acceptance ratio:', accRatio)

e0 = np.mean(params[0])
print('e0=', e0)
e_t = e0*np.cos(2*np.pi*t)
sns.jointplot(t, e_t, kind='hex', color='purple')
The data in the .csv looks like this (sample not reproduced here).
There isn't any error message after I hit run, but it takes forever for Python to give me an output. What did I do wrong here?
Why it might "take forever"
Your algorithm is designed to run until it accepts a given number of proposals (1000 in the example). Thus, if it's running for a long time, you're likely rejecting a bunch of proposals. This can happen when the step size is too large, leading new proposals to end up in distant, low probability regions of the likelihood space. Try reducing your step size. This may require you to also increase the number of samples to ensure the posterior space becomes adequately explored.
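For example, you could re-run the call from the question with a smaller step size and more steps and check the reported acceptance ratio directly. The numbers below are purely illustrative, not tuned values:

params, accRatio = metropolis(logfitstrain_t, (t, e), params0, 5000, 0.0005)
print('Acceptance ratio:', accRatio)   # a common rule of thumb is to aim for roughly 0.2-0.5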
A more serious concern
Because you only append accepted proposals to the chain v, you haven't actually implemented the Metropolis algorithm; instead you obtain a biased set of samples that will tend to overrepresent less likely regions of the posterior space. A true Metropolis implementation re-appends the current state whenever a proposal is rejected. You can still enforce a minimum number of accepted proposals, but you really must append something on every iteration, as sketched below.
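Here is a minimal sketch of what that change could look like, keeping the names from the question and assuming numpy is imported as np as above (an illustration, not a drop-in replacement for your whole script):

def metropolis(logP, args, v0, Nsteps, stepSize):
    vCur = v0
    logPcur = logP(vCur, *args)
    v = []
    Naccept = 0
    for i in range(Nsteps):
        # Propose step:
        vNext = vCur + stepSize*np.random.randn(*vCur.shape)
        logPnext = logP(vNext, *args)
        # Accept/reject step:
        Pratio = 1. if logPnext > logPcur else np.exp(logPnext - logPcur)
        if np.random.rand() < Pratio:
            vCur = vNext
            logPcur = logPnext
            Naccept += 1
        # Append the current state on every iteration, accepted or not.
        v.append(vCur)
    acceptRatio = Naccept*(1./Nsteps)
    return np.array(v), acceptRatio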
Related
I want to implement the following algorithm, taken from this book, section 13.6:
I don't understand how to implement the update rule in pytorch (the rule for w is quite similar to that of theta).
As far as I know, torch requires a loss for loss.backward().
This form does not seem to apply for the quoted algorithm.
I'm still certain there is a correct way of implementing such update rules in pytorch.
Would greatly appreciate a code snippet of how the w weights should be updated, given that V(s,w) is the output of the neural net, parameterized by w.
EDIT: Chris Holland suggested a way to implement this, and I implemented it. It does not converge on CartPole, and I wonder if I did something wrong.
The critic does converge on the solution of gamma*f(n) = f(n) - 1, i.e. f = 1/(1 - gamma), which is the sum of the geometric series 1 + gamma + gamma^2 + ...
This means gamma=1 diverges, gamma=0.99 converges on 100, gamma=0.5 converges on 2, and so on, regardless of the actor or policy.
The code:
def _update_grads_with_eligibility(self, is_critic, delta, discount, ep_t):
    gamma = self.args.gamma
    if is_critic:
        params = list(self.critic_nn.parameters())
        lamb = self.critic_lambda
        eligibilities = self.critic_eligibilities
    else:
        params = list(self.actor_nn.parameters())
        lamb = self.actor_lambda
        eligibilities = self.actor_eligibilities

    is_episode_just_started = (ep_t == 0)
    if is_episode_just_started:
        eligibilities.clear()
        for i, p in enumerate(params):
            if not p.requires_grad:
                continue
            eligibilities.append(torch.zeros_like(p.grad, requires_grad=False))

    # eligibility traces
    for i, p in enumerate(params):
        if not p.requires_grad:
            continue
        eligibilities[i][:] = (gamma * lamb * eligibilities[i]) + (discount * p.grad)
        p.grad[:] = delta.squeeze() * eligibilities[i]
and
expected_reward_from_t = self.critic_nn(s_t)
probs_t = self.actor_nn(s_t)
expected_reward_from_t1 = torch.tensor([[0]], dtype=torch.float)
if s_t1 is not None:  # s_t is not a terminal state, s_t1 exists.
    expected_reward_from_t1 = self.critic_nn(s_t1)

delta = r_t + gamma * expected_reward_from_t1.data - expected_reward_from_t.data

negative_expected_reward_from_t = -expected_reward_from_t

self.critic_optimizer.zero_grad()
negative_expected_reward_from_t.backward()
self._update_grads_with_eligibility(is_critic=True,
                                    delta=delta,
                                    discount=discount,
                                    ep_t=ep_t)
self.critic_optimizer.step()
EDIT 2:
Chris Holland's solution works. The problem originated from a bug in my code that caused the lines

if s_t1 is not None:
    expected_reward_from_t1 = self.critic_nn(s_t1)

to always be executed, so expected_reward_from_t1 was never zero and there was no stopping condition for the Bellman equation recursion.
With no reward engineering, gamma=1, lambda=0.6, and a single hidden layer of size 128 for both actor and critic, this converged on a rather stable optimal policy within 500 episodes.
Even faster with gamma=0.99, as the graph shows (best discounted episode reward is about 86.6).
BIG thank you to @Chris Holland, who "gave this a try".
I am gonna give this a try.
.backward() does not need a loss function, it just needs a differentiable scalar output. It computes the gradient of that scalar with respect to the model parameters. Let's just look at the first case, the update for the value function.
We have one gradient appearing for v; we can obtain this gradient by
v = model(s)
v.backward()
This gives us the gradient of v, stored in p.grad with the same dimensions as your model parameters. Assuming we have already calculated the other quantities in the update, we can compute the actual optimizer update:
for i, p in enumerate(model.parameters()):
    z_theta[i][:] = gamma * lamda * z_theta[i] + l * p.grad
    p.grad[:] = alpha * delta * z_theta[i]
We can then use opt.step() to update the model parameters with the adjusted gradient.
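Putting those pieces together, one full value-function update could look like the following sketch. The names model, opt, s, z_theta, alpha, gamma, lamda, l and delta are the placeholders used above, and the negation mirrors the question's use of the negative critic output, since opt.step() subtracts p.grad:

v = model(s)                      # V(s, w): scalar output of the critic
opt.zero_grad()
(-v).backward()                   # fills p.grad with -dV/dw
for i, p in enumerate(model.parameters()):
    z_theta[i][:] = gamma * lamda * z_theta[i] + l * p.grad   # decay and accumulate the trace
    p.grad[:] = alpha * delta * z_theta[i]                    # replace the gradient with the trace-weighted update
opt.step()                        # applies w <- w - lr * p.grad

Note that alpha and the optimizer's learning rate play the same role here, so in practice you would keep only one of them (e.g. set lr=1 or drop alpha).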
I used this k-means++ Python code for initializing k centers, but it is very slow for large data, for example 400,000 points in 2 dimensions:
class KPlusPlus(KMeans):
    def _dist_from_centers(self):
        cent = self.mu
        X = self.X
        D2 = np.array([min([np.linalg.norm(x-c)**2 for c in cent]) for x in X])
        self.D2 = D2

    def _choose_next_center(self):
        self.probs = self.D2/self.D2.sum()
        self.cumprobs = self.probs.cumsum()
        r = random.random()
        ind = np.where(self.cumprobs >= r)[0][0]
        return(self.X[ind])

    def init_centers(self):
        self.mu = random.sample(self.X, 1)
        while len(self.mu) < self.K:
            self._dist_from_centers()
            self.mu.append(self._choose_next_center())

    def plot_init_centers(self):
        X = self.X
        fig = plt.figure(figsize=(5,5))
        plt.xlim(-1,1)
        plt.ylim(-1,1)
        plt.plot(zip(*X)[0], zip(*X)[1], '.', alpha=0.5)
        plt.plot(zip(*self.mu)[0], zip(*self.mu)[1], 'ro')
        plt.savefig('kpp_init_N%s_K%s.png' % (str(self.N), str(self.K)),
                    bbox_inches='tight', dpi=200)
Is there a way to speed up k-means++?
Initial seeding has a large impact on k-means execution time. In this post you can find some strategies to speed it up.
Perhaps you could consider using Siddhesh Khandelwal's k-means variant, which was published in the Proceedings of the European Conference on Information Retrieval (ECIR 2017).
Siddhesh provided a Python implementation on GitHub, accompanied by some other earlier heuristic algorithms.
K-means++ initialization takes O(n*k) to run. This is reasonably fast for small k and large n, but if you choose k too large, it will take some time. It is about as expensive as one iteration of the (slow) Lloyd variant, so it will usually pay off to use kmeans++.
Your implementation is worse, at least O(n*k²) because it performs unnecessary recomputations. And it probably always chooses the same point as next center.
Note that you also only have the initialization, not the actual kmeans yet.
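For reference, here is a minimal NumPy sketch of the incremental O(n*k) seeding this answer alludes to: it keeps the running minimum squared distance to the chosen centers, so each new center costs one O(n) distance computation instead of recomputing distances to all previous centers. The function and argument names are made up for illustration:

import numpy as np

def kpp_init(X, k, rng=np.random.default_rng()):
    """k-means++ seeding in O(n*k): X is (n, d), returns (k, d) centers."""
    n = X.shape[0]
    centers = [X[rng.integers(n)]]                  # first center: uniform at random
    d2 = np.sum((X - centers[0])**2, axis=1)        # squared distance to nearest chosen center
    for _ in range(1, k):
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(n, p=probs)])   # sample proportionally to D^2
        d2 = np.minimum(d2, np.sum((X - centers[-1])**2, axis=1))  # update running minimum
    return np.array(centers)

Each round touches every point exactly once, so for 400,000 2-d points and a moderate k this should finish in seconds.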
I haven't run any experiment yet, but Scalable K-Means++ seems rather good for very large data sets (perhaps for those even larger than what you describe).
You can find the paper here and another post explaining it here.
Unfortunately, I haven't seen any code around I'd trust...
I am implementing some basic linear equation solvers in Python.
I have currently implemented forward and backward substitution for triangular systems of equations (so very straightforward to solve!), but the precision of the solutions becomes very poor even with systems of about 50 equations (50x50 coefficient matrix).
The following code performs the forward/backward substitution:
FORWARD_SUBSTITUTION = 1
BACKWARD_SUBSTITUTION = 2

def solve_triang_subst(A: np.ndarray, b: np.ndarray,
                       substitution=FORWARD_SUBSTITUTION) -> np.ndarray:
    """Solves a triangular system via forward or backward substitution.

    A must be triangular. FORWARD_SUBSTITUTION means A should be
    lower-triangular, BACKWARD_SUBSTITUTION means A should be upper-triangular.
    """
    rows = len(A)
    x = np.zeros(rows, dtype=A.dtype)
    row_sequence = reversed(range(rows)) if substitution == BACKWARD_SUBSTITUTION else range(rows)

    for row in row_sequence:
        delta = b[row] - np.dot(A[row], x)
        cur_x = delta / A[row][row]
        x[row] = cur_x

    return x
I am using numpy and 64-bit floats.
Simple Testing Tool
I have set up a simple test suite which generates coefficient matrices and x vectors, computes b, and then uses forward or backward substitution to recover the x, comparing it to its known value for validity.
The following code performs these checks:
import numpy as np
import scipy.linalg as sp_la

RANDOM_SEED = 1984
np.random.seed(RANDOM_SEED)

def check(sol: np.ndarray, x_gt: np.ndarray, description: str) -> None:
    if not np.allclose(sol, x_gt, rtol=0.1):
        print("Found inaccurate solution:")
        print(sol)
        print("Ground truth (not achieved...):")
        print(x_gt)
        raise ValueError("{} did not work!".format(description))

def fuzz_test_solving():
    N_ITERATIONS = 100
    refine_result = True
    for mode in [FORWARD_SUBSTITUTION, BACKWARD_SUBSTITUTION]:
        print("Starting mode {}".format(mode))
        for iteration in range(N_ITERATIONS):
            N = np.random.randint(3, 50)
            A = np.random.uniform(0.0, 1.0, [N, N]).astype(np.float64)
            if mode == BACKWARD_SUBSTITUTION:
                A = np.triu(A)
            elif mode == FORWARD_SUBSTITUTION:
                A = np.tril(A)
            else:
                raise ValueError()

            x_gt = np.random.uniform(0.0, 1.0, N).astype(np.float64)
            b = np.dot(A, x_gt)

            x_est = solve_triang_subst(A, b, substitution=mode,
                                       refine_result=refine_result)
            # TODO report error and count, don't throw!
            # Keep track of error norm!!
            check(x_est, x_gt,
                  "Mode {} custom triang iteration {}".format(mode, iteration))

if __name__ == '__main__':
    fuzz_test_solving()
Note that the maximum size of a test matrix is 49x49. Even in this case, the system cannot always compute decent solutions, and fails by more than a margin of 0.1. Here's an example of such a failure (this is doing backward substitution, so the biggest error is in the 0th coefficient; all the test data are sampled uniformly from [0, 1[):
Solution found with Mode 2 custom triang iteration 24:
[ 0.27876067 0.55200497 0.49499509 0.3259397 0.62420183 0.47041149
0.63557676 0.41155446 0.47191956 0.74385864 0.03002819 0.4700286
0.37989592 0.56527691 0.15072607 0.05659282 0.52587574 0.82252197
0.65662833 0.50250729 0.74139748 0.10852731 0.27864265 0.42981232
0.16327331 0.74097937 0.24411709 0.96934199 0.890266 0.9183985
0.14842446 0.51806495 0.36966843 0.18227989 0.85399593 0.89615663
0.39819336 0.90445931 0.21430972 0.61212349 0.85205597 0.66758689
0.1793689 0.38067267 0.39104614 0.6765885 0.4118123 ]
Ground truth (not achieved...)
[ 0.20881608 0.71009766 0.44735271 0.31169033 0.63982328 0.49075813
0.59669585 0.43844108 0.47764942 0.72222069 0.03497499 0.4707452
0.37679884 0.56439738 0.15120397 0.05635977 0.52616387 0.82230625
0.65670245 0.50251426 0.74139956 0.10845974 0.27864289 0.42981226
0.1632732 0.74097939 0.24411707 0.96934199 0.89026601 0.91839849
0.14842446 0.51806495 0.36966843 0.18227989 0.85399593 0.89615663
0.39819336 0.90445931 0.21430972 0.61212349 0.85205597 0.66758689
0.1793689 0.38067267 0.39104614 0.6765885 0.4118123 ]
I have also implemented the iterative refinement method described in Section 2.5 of [0], and while it did help a little, the results are still poor for larger matrices.
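For context, a single refinement round can be sketched as follows, reusing the names from the code above (a generic illustration of the idea from [0], not my exact implementation):

# One round of iterative refinement for A x = b:
# compute the residual, solve the correction system with the same
# triangular solver, and update the estimate.
r = b - np.dot(A, x_est)                            # residual
d = solve_triang_subst(A, r, substitution=mode)     # correction
x_est = x_est + d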
MATLAB Sanity Check
I also did this experiment in MATLAB, and even there, once there are more than 100 equations, the estimation error shoots up exponentially.
Here is the MATLAB code I used for this experiment:
err_norms = [];
range = 1:3:120;
for size=range
    A = rand(size, size);
    A = tril(A);
    x_gt = rand(size, 1);
    b = A * x_gt;
    x_sol = A\b;
    err_norms = [err_norms, norm(x_gt - x_sol)];
end
plot(range, err_norms);
set(gca, 'YScale', 'log')
And here is the resulting plot: the error norm versus system size, on a logarithmic y-axis (image not reproduced here).
Main Question
My question is: Is this normal behavior, seeing as there is essentially no structure in the problem, given that I randomly generate the A matrix and x?
What about solving linear systems of 100s of equations for various practical applications? Are these limitations simply an accepted fact, and e.g., optimization algorithms are just naturally robust to these issues? Or am I missing some important facets of this problem?
[0]: Press, William H. Numerical Recipes 3rd Edition: The Art of Scientific Computing. Cambridge University Press, 2007.
There are no such limitations. This is a very fruitful exercise that we all come to realize at some point: writing linear solvers is not that easy, and that's why LAPACK, or its cousins in other languages, is almost always used with full confidence.
You are being hit by almost-singular matrices, and because you are using MATLAB's backslash you don't see that MATLAB is switching to least-squares solutions behind the scenes when near-singularity is encountered. If you change A\b to linsolve(A,b), thereby restricting the solver to square systems, you will probably see lots of warnings on your console.
I didn't test this because I don't have a license anymore, but writing blindly, the following should show you the condition numbers of the matrices at each step:
err_norms = [];
range = 1:3:120;
for i=1:40
    size = range(i);
    A = rand(size, size);
    A = tril(A);
    x_gt = rand(size, 1);
    b = A * x_gt;
    x_sol = linsolve(A,b);
    err_norms = [err_norms, norm(x_gt - x_sol)];
    zzz(i) = rcond(A);
end
semilogy(range, err_norms);
figure, semilogy(range, zzz);
Note that because you are drawing numbers from a uniform distribution, it becomes more and more likely that you hit ill-conditioned matrices (with respect to inversion) as the size grows, since the rows become ever more likely to be nearly rank deficient. That's why the error keeps growing. Sprinkle in the identity matrix times some scalar and all errors should come back down to around eps*n levels; a quick check of this is sketched below.
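As a quick NumPy illustration of that point (the shift of 10 is arbitrary):

import numpy as np

n = 100
L = np.tril(np.random.uniform(0.0, 1.0, (n, n)))
print(np.linalg.cond(L))                    # typically astronomically large for uniform [0, 1) entries
print(np.linalg.cond(L + 10*np.eye(n)))     # a dominant diagonal keeps the conditioning tame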
But best, leave this to expert implementations that have been tested for decades. It is really not that trivial to write any of these. You can also read the Fortran codes; for example, dtrsm solves the triangular system.
On the Python side, you can use scipy.linalg.solve_triangular which uses ?trtrs routines from LAPACK.
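A minimal usage sketch with the sp_la alias from the test code above (here for a lower-triangular A, i.e. the forward-substitution case):

x = sp_la.solve_triangular(A, b, lower=True)   # lower=False (the default) for upper-triangular / backward substitution
assert np.allclose(np.dot(A, x), b)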
So I've got two implementations of linear regression using gradient descent, one in TensorFlow and one in NumPy. I'm finding the NumPy one is about 3x faster than the TensorFlow one. Here's my code -
Tensorflow:
class network_cluster(object):
    def __init__(self, data_frame, feature_cols, label_cols):
        self.init_data(data_frame, feature_cols, label_cols)
        self.init_tensors()

    def init_data(self, data_frame, feature_cols, label_cols):
        self.data_frame = data_frame
        self.feature_cols = feature_cols
        self.label_cols = label_cols

    def init_tensors(self):
        self.features = tf.placeholder(tf.float32)
        self.labels = tf.placeholder(tf.float32)
        self.weights = tf.Variable(tf.random_normal((len(self.feature_cols), len(self.label_cols))))
        self.const = tf.Variable(tf.random_normal((len(self.label_cols),)))

    def linear_combiner(self):
        return tf.add(tf.matmul(self.features, self.weights), self.const)

    def predict(self):
        return self.linear_combiner()

    def error(self):
        return tf.reduce_mean(tf.pow(self.labels - self.predict(), 2), axis=0)

    def learn_model(self, epocs=100):
        optimizer = tf.train.AdadeltaOptimizer(1).minimize(self.error())
        error_rcd = []
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            for epoc in range(epocs):
                _, error = sess.run([optimizer, self.error()], feed_dict={
                    self.features: self.data_frame[self.feature_cols],
                    self.labels: self.data_frame[self.label_cols]
                })
                error_rcd.append(error[0])
        return error_rcd

    def get_coefs(self):
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            coefs = sess.run([self.weights, self.const])
        return coefs

test_cluster = network_cluster(dataset, ['ship_jumps', 'npc_kills', 'ship_kills', 'pod_kills'], ['hour_of_week'])
%timeit test_cluster.learn_model(epocs = 100)
And numpy:
def grad_descent(dataset, features, predictor, max_iters=10000):

    def initialize_model(dataset, features, predictor):
        constant_array = np.ones(shape=(len(dataset), 1))
        features_array = dataset.loc[:, features].values
        features_array = np.append(constant_array, features_array, axis=1)
        predict_array = dataset.loc[:, predictor].values
        betas = np.zeros(shape=(len(features) + 1, len(predictor)))
        return (features_array, predict_array, betas)

    def calc_gradient(features_array, predict_array, betas):
        prediction = np.dot(features_array, betas)
        predict_error = predict_array - prediction
        gradient = -2 * np.dot(features_array.transpose(), predict_error)
        gradient_two = 2 * np.expand_dims(np.sum(features_array ** 2, axis=0), axis=1)
        return (gradient, gradient_two)

    def update_betas(gradient, gradient_two, betas):
        new_betas = betas - ((gradient / gradient_two) / len(betas))
        return new_betas

    def model_error(features_array, predict_array, betas):
        prediction = np.dot(features_array, betas)
        predict_error = predict_array - prediction
        model_error = np.sqrt(np.mean(predict_error ** 2))
        return model_error

    features_array, predict_array, betas = initialize_model(dataset, features, predictor)
    prior_error = np.inf
    for iter_count in range(max_iters):
        gradient, gradient_two = calc_gradient(features_array, predict_array, betas)
        betas = update_betas(gradient, gradient_two, betas)
        curr_error = model_error(features_array, predict_array, betas)
        if curr_error == prior_error:
            break
        prior_error = curr_error
    return (betas, iter_count, curr_error)

%timeit grad_descent(dataset, ['ship_jumps', 'npc_kills', 'ship_kills', 'pod_kills'], ['hour_of_week'], max_iters = 100)
I'm testing using the Spyder IDE, and I do have an Nvidia GPU (960). The Tensorflow code clocks in at ~20 seconds, with the Numpy code at about 7 seconds on the same dataset. The dataset is almost 1 million rows.
I would have expected Tensorflow to beat out Numpy handily here, but that's not the case. Granted I am new to using Tensorflow, and the Numpy implementation doesn't use a class, but still, 3x better with Numpy?!
Hoping for some thoughts/ideas on what I'm doing wrong here.
Without looking at your code in detail (not that much experience with TF):
This comparison is flawed!
Yaroslav's comment is of course true: GPU-computing has some overhead (at least data-preparation; not sure what kind of compiling is clocked here)
It seems you are comparing pure GD to Adadelta in full-batch mode:
Adadelta of course implies some overhead (there are more operations than calculating the gradient and updating the current iterate), as it is one of the common adaptive-step-size methods, which come with a price!
The idea is to invest some additional operations per iteration in order to:
- reduce the number of iterations needed for a given learning rate
- (and, what matters even more for most people: achieve good convergence while using the default learning rates)
It seems you are just running 100 epochs of each and clocking that.
That's not meaningful!
It's very possible that the resulting objectives are very different:
- if the number of iterations is not enough,
- or the initial learning rate is badly chosen,
- or, equivalently, the missing early stopping means a possibly better algorithm with proven convergence (according to some criterion) wastes additional time doing all iterations until 100 is reached!
(Adadelta was probably designed for the SGD setting, not full-batch GD.)
It's very hard to compare such different algorithms, especially when using just one task/dataset.
Even if you introduce early stopping, you will observe random-seed-dependent, nondeterministic performance that is hard to interpret. A fairer setup is to stop both implementations on an objective-based criterion and compare from there, as sketched below.
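A minimal, framework-agnostic sketch of such a stopping criterion (step_fn, tol and max_iters are placeholders; the point is to compare wall time at a matched solution quality rather than at a fixed epoch count):

import time

def run_until_converged(step_fn, tol=1e-8, max_iters=100000):
    """step_fn() performs one update and returns the current objective value."""
    start = time.time()
    prev = float('inf')
    for i in range(max_iters):
        cur = step_fn()
        if abs(prev - cur) < tol:
            break
        prev = cur
    return cur, i + 1, time.time() - start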
You are basically measuring iteration time, but that is not a good measure. Compare first-order methods (gradient-based: SGD, GD, ...) with second-order methods (Hessian-based: Newton): the latter is very slow per iteration but usually exhibits quadratic convergence behaviour, resulting in far fewer iterations! In NN applications the more relevant example is L-BFGS vs. SGD/... (although I don't know whether L-BFGS is available in TF; torch supports it). L-BFGS is known to achieve fast local convergence, which is again hard to interpret in real-world tasks (especially since the size of its limited-memory approximation of the inverse Hessian is a parameter of L-BFGS). The same comparison can be made in linear programming, where the simplex method has fast iterations while interior-point methods (basically Newton-based, though handling the constraints requires some additional ideas) are much slower per iteration, despite being faster to reach convergence in many cases.
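For reference, this is roughly what the torch side of that comparison looks like: torch.optim.LBFGS exists and, unlike plain SGD, needs a closure that re-evaluates the loss on every step (model, loss_fn, x and y below are placeholders, not code from the question):

import torch

opt = torch.optim.LBFGS(model.parameters(), history_size=10)

def closure():
    opt.zero_grad()
    loss = loss_fn(model(x), y)   # re-evaluate the objective
    loss.backward()
    return loss

opt.step(closure)                 # one L-BFGS step may evaluate the closure several times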
What I ignored here: nearly all theoretical results regarding convergence and the like are limited to convex and smooth functions. NNs are typically non-convex, which makes evaluating these performance measures even harder. But your problem here is of course convex.
I also have to admit that my answer only scratches the surface of this complex problem, even though unconstrained smooth convex optimization is one of the easier tasks in numerical optimization (compared to constrained, nonsmooth, nonconvex optimization).
For a general introduction to numerical optimization, which also says a lot about first-order vs. second-order methods (and the many methods in between), I recommend Numerical Optimization by Nocedal and Wright, which can be found on the web.
I have 3 questions regarding subclassing of scipy.stats.rv_continuous.
My goal is to code a statistical mixture model of a truncated normal distribution, a truncated exponential distribution and 2 uniform distributions.
1) Why is drawing random variates via mm_model.rvs(size=1000) so extremely slow? I read something about performance issues in the documentation, but I was really surprised.
2) Why do I get this IntegrationWarning after drawing random variates via mm_model.rvs(size=1000)?
IntegrationWarning: The maximum number of subdivisions (50) has been achieved.
If increasing the limit yields no improvement it is advised to analyze
the integrand in order to determine the difficulties. If the position of a
local difficulty can be determined (singularity, discontinuity) one will
probably gain from splitting up the interval and calling the integrator
on the subranges. Perhaps a special-purpose integrator should be used.
warnings.warn(msg, IntegrationWarning)
3) I read in the documentation that I can pass parameters to the pdf via the "shapes" parameter. I tried to adjust my pdf and set the shapes parameter, but it did not work. Could someone explain it?
Thanks for any help.
def trunc_norm(z, low_bound, up_bound, mu, sigma):
    a = (low_bound - mu) / sigma
    b = (up_bound - mu) / sigma
    return stats.truncnorm.pdf(z, a, b, loc=mu, scale=sigma)

def trunc_exp(z, up_bound, lam):
    return stats.truncexpon.pdf(z, b=up_bound*lam, scale=1/lam)

def uniform(z, a, b):
    return stats.uniform.pdf(z, loc=a, scale=(b-a))

class Measure_mixture_model(stats.rv_continuous):
    def _pdf(self, z):
        z_true = 8
        z_min = 0
        z_max = 10
        p_hit = 0.7
        p_short = 0.1
        p_max = 0.1
        p_rand = 0.1
        sig_hit = 1
        lambda_short = 0.5

        sum_p = p_hit + p_short + p_max + p_rand
        if sum_p < 0.99 or 1.01 < sum_p:
            misc.eprint("apriori probabilities p_hit, p_short, p_max, p_rand have to be 1!")
            return None

        pdf_hit = trunc_norm(z, z_min, z_max, z_true, sig_hit)
        pdf_short = trunc_exp(z, z_true, lambda_short)
        pdf_max = uniform(z, 0.99*z_max, z_max)
        pdf_rand = uniform(z, z_min, z_max)

        pdf = p_hit * pdf_hit + p_short * pdf_short + p_max * pdf_max + p_rand * pdf_rand
        return pdf

#mm_model = Measure_mixture_model(shapes='z_true,z_min,z_max,p_hit,p_short,p_max,p_rand,sig_hit,lambda_short')
mm_model = Measure_mixture_model()

z = np.linspace(-1, 11, 1000)
samples = mm_model.pdf(z)
plt.plot(z, samples)
plt.show()

rand_samples = mm_model.rvs(size=1000)
bins = np.linspace(-1, 11, 100)
plt.hist(rand_samples, bins)
plt.title("measure mixture model")
plt.xlabel("z: measurement")
plt.ylabel("p: relative frequency")
plt.show()
(1) and (2) are probably related. You are asking scipy to generate random samples based only on the density you provide.
I don't really know what scipy does internally, but I suspect it integrates the density ("pdf") to get the cumulative distribution function ("cdf") and then inverts it to map uniform samples to your distribution. This is numerically expensive and, as you experienced, error-prone.
To speed things up you can help scipy by implementing _rvs directly: just draw a uniform random number to decide which sub-model of your mixture to select, and then invoke the rvs of the selected sub-model. Do the same for other methods you may require.
Here are some tips on how to implement a vectorised rvs:
To batch-select sub-models: since your number of sub-models is small, np.digitize should be good enough. If possible, use rv_frozen instances for the sub-models; they are very convenient, but I seem to remember that you can't pass all the optional parameters to them, so you may have to handle those separately.
self._bins = np.cumsum([p_hit, p_short, p_max])
self._bins /= self._bins[-1] + p_rand

submodel = np.digitize(uniform.rvs(size=size), self._bins)

result = np.empty(size)
for j, frozen in enumerate((frz_trunc_norm, frz_trunc_exp, frz_unif_1, frz_unif_2)):
    inds = np.where(submodel == j)[0]   # np.where returns a tuple; take the index array
    result[inds] = frozen.rvs(size=len(inds))
return result
Re (3) Here is what the scipy docs have to say.
A note on shapes: subclasses need not specify them explicitly. In this case, shapes will be automatically deduced from the signatures of the overridden methods (pdf, cdf etc). If, for some reason, you prefer to avoid relying on introspection, you can specify shapes explicitly as an argument to the instance constructor.
So the normal way would be to just put some arguments in your methods.
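For example, a rough sketch of that approach for two of the parameters (untested; z_true and sig_hit are chosen just for illustration, the _pdf body is simplified to a single component, and the trunc_norm helper from the question is reused):

import numpy as np
from scipy import stats

class Measure_mixture_model(stats.rv_continuous):
    # The extra arguments of _pdf become the distribution's shape parameters.
    def _pdf(self, z, z_true, sig_hit):
        # Simplified body for illustration; the full mixture would go here.
        return trunc_norm(z, 0, 10, z_true, sig_hit)

mm_model = Measure_mixture_model(a=0, b=10)        # optionally fix the support
densities = mm_model.pdf(np.linspace(0, 10, 50), z_true=8, sig_hit=1)
samples = mm_model.rvs(8, 1, size=100)             # shape parameters first, then size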