I am learning Bayesian inference from the book Bayesian Analysis with Python. However, when using plot_ppc, I got an AttributeError and the warning
/usr/local/Caskroom/miniconda/base/envs/kaggle/lib/python3.9/site-packages/pymc3/sampling.py:1689: UserWarning: samples parameter is smaller than nchains times ndraws, some draws and/or chains may not be represented in the returned posterior predictive sample
warnings.warn(
The model is
shift = pd.read_csv('../data/chemical_shifts.csv')
with pm.Model() as model_g:
    μ = pm.Uniform('μ', lower=40, upper=70)
    σ = pm.HalfNormal('σ', sd=10)
    y = pm.Normal('y', mu=μ, sd=σ, observed=shift)
    trace_g = pm.sample(1000, return_inferencedata=True)
If I used the following codes
with model_g:
    y_pred_g = pm.sample_posterior_predictive(trace_g, 100, random_seed=123)
    data_ppc = az.from_pymc3(trace_g.posterior, posterior_predictive=y_pred_g)  # 'Dataset' object has no attribute 'report'
I got 'Dataset' object has no attribute 'report'.
If I used the following codes
with model_g:
    y_pred_g = pm.sample_posterior_predictive(trace_g, 100, random_seed=123)
    data_ppc = az.from_pymc3(trace_g, posterior_predictive=y_pred_g)  # AttributeError: 'InferenceData' object has no attribute 'report'
I got AttributeError: 'InferenceData' object has no attribute 'report'.
ArviZ version: 0.11.2
PyMC3 Version: 3.11.2
Aesara/Theano Version: 1.1.2
Python Version: 3.9.6
Operating system: MacOS Big Sur
How did you install PyMC3: conda
You are passing return_inferencedata=True to pm.sample(), which according to the PyMC3 documentation will return an InferenceData object rather than a MultiTrace object:
return_inferencedata : bool, default=False
Whether to return the trace as an arviz.InferenceData (True) object or a MultiTrace (False). Defaults to False, but we'll switch to True in an upcoming release.
The from_pymc3 function, however, expects a MultiTrace object.
The good news is that from_pymc3 returns an InferenceData object, so you can solve this in one of two ways:
The easiest solution is to simply remove the from_pymc3 calls, since it returns InferenceData, which you already have due to return_inferencedata=True.
Set return_inferencedata=False (you can also remove that argument, but the documentation states that in the future it will default to True, so to be future proof it's best to explicitly set it to False). This will return a MultiTrace which can be passed to from_pymc3.
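To make the distinction concrete, here is a pure-Python sketch of the dispatch rule the two options boil down to. The classes below are stand-ins for pymc3's MultiTrace and arviz's InferenceData, and the conversion is a placeholder for az.from_pymc3 — this is an illustration of the logic, not the libraries' actual code:

```python
# Stand-ins for pymc3.backends.base.MultiTrace and arviz.InferenceData,
# used only to illustrate the dispatch rule described above.
class MultiTrace:
    pass

class InferenceData:
    pass

def as_inference_data(trace):
    """Convert to InferenceData only when given a MultiTrace."""
    if isinstance(trace, InferenceData):
        # return_inferencedata=True already gave us InferenceData;
        # passing it to from_pymc3 again is what triggers the error.
        return trace
    if isinstance(trace, MultiTrace):
        # placeholder for az.from_pymc3(trace, posterior_predictive=...)
        return InferenceData()
    raise TypeError(f"unexpected trace type: {type(trace).__name__}")
```

In other words: with return_inferencedata=True you already hold the final object and should skip from_pymc3 entirely; with False you hold a MultiTrace and the conversion is required.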
Related
I am using pycaret on my new laptop, and after a gap of 6 months, so I am not sure whether this problem is due to some issue with my laptop or due to changes in the pycaret package itself. Earlier I simply used to create an experiment using pycaret's setup method and it used to work. But now it keeps raising one error after another. For example, I used the 2 lines below to set up the experiment.
from pycaret.classification import *
exp = setup(data=df.drop(['id'], axis=1), target='cancer', session_id=123)
But this gave error:-
ValueError: Setting a random_state has no effect since shuffle is False. You should leave random_state to its default (None), or set shuffle=True.
Then I changed my second line as below:
exp = setup(data=df.drop(['id'], axis=1), target='cancer', session_id=123, fold_shuffle=True, imputation_type='iterative')
Then it returned a new error-
AttributeError: 'Make_Time_Features' object has no attribute 'list_of_features'
I remember that earlier I never had to pass these arguments to the setup method. It looks like even the default argument values in pycaret's setup method are not working. Can anyone suggest how to troubleshoot this?
I'm following the torchtext transformer tutorial published for PyTorch 1.9. However, because I'm working on a Tegra TX2, I am stuck using torchtext 0.6.0, not 0.10.0 (which is what I assume the tutorial uses).
Following the tutorial, the following throws an error:
data = [torch.tensor(vocab(tokenizer(item)), dtype=torch.long) for item in raw_text_iter]
return torch.cat(tuple(filter(lambda t: t.numel() > 0, data)))
The error is:
TypeError: 'Vocab' object is not callable
I understand what the error means; what I don't know is what the expected return from Vocab is in this case.
Looking at the documentation for TorchText 0.6.0 I see that it has:
stoi
itos
freqs
vectors
Is the example expecting the vectors from Vocab?
EDIT:
I looked up the 0.10.0 documentation and it doesn't have a __call__.
Looking at the source for the implementation of Vocab in 0.10.0, apparently it is a subclass of torch.nn.Module, which means it inherits __call__ from there (calling it is roughly equivalent to calling its forward() method, but with some additional machinery for implementing hooks and such).
We can also see that it wraps some underlying VocabPyBind object (equivalent to the Vocab class in older versions), and its forward() method just calls its lookup_indices method.
So in short, it seems the equivalent in older versions of the library would be to call vocab.lookup_indices(tokenizer(item)).
Update: Apparently in 0.6.0 the Vocab class does not even have a lookup_indices method, but reading the source, that is just equivalent to:
[vocab[token] for token in tokenizer(item)]
If you're ever able to upgrade, for the sake of forward-compatibility you could write a wrapper like:
from torchtext.vocab import Vocab as _Vocab
class Vocab(_Vocab):
    def lookup_indices(self, tokens):
        return [self[token] for token in tokens]

    def __call__(self, tokens):
        return self.lookup_indices(tokens)
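To see the equivalence without installing torchtext, here is a minimal stand-in for the legacy Vocab — just a stoi mapping with __getitem__ and an unknown-token fallback. The class and the vocabulary data are illustrative, not torchtext's actual implementation:

```python
class TinyVocab:
    """Minimal stand-in for torchtext 0.6.0's Vocab: a stoi mapping
    plus __getitem__ that falls back to the <unk> index."""
    def __init__(self, stoi, unk_index=0):
        self.stoi = stoi
        self.unk_index = unk_index

    def __getitem__(self, token):
        return self.stoi.get(token, self.unk_index)

    def lookup_indices(self, tokens):
        # exactly the list comprehension from the answer above
        return [self[token] for token in tokens]

vocab = TinyVocab({'<unk>': 0, 'hello': 1, 'world': 2})
tokens = 'hello world xyzzy'.split()
indices = vocab.lookup_indices(tokens)  # unknown 'xyzzy' maps to 0
```

The point is that lookup_indices, forward(), and the bare list comprehension all produce the same list of integer indices.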
I have imported the measure module from the skimage package. I want to execute the measure function marching cubes. Here is the function call I make:
from skimage import measure
stuff = measure.marching_cubes(volume=p,
                               level=threshold, step_size=1,
                               allow_degenerate=True)
This function call throws a traceback error saying TypeError: marching_cubes() got an unexpected keyword argument 'step_size'. However, measure.marching_cubes() function does accept a step_size argument (see docs).
If I comment out the step_size and allow_degenerate arguments (they have default values), then the call works, but marching_cubes only returns 2 values (verts and faces) when I expect it to return 4 values (verts, faces, normals, and values).
What am I doing wrong, and what should I do to get the expected behavior from measure.marching_cubes()?
I suspect there is a version problem with the skimage library you are using.
The documentation link you provided is for skimage version 0.14, in which the step_size parameter is present.
However, in skimage version 0.12 the measure.marching_cubes() function also exists, but it takes only four parameters, none of which is step_size. I suspect you might be using version 0.12.
You also stated that marching_cubes returns only 2 values (verts and faces) when you expected 4 (verts, faces, normals, and values). In version 0.12 the function returns only two values, so I strongly suspect you are using an older version of skimage.
I was able to figure this out by looking at the documentation for version 0.12.
Solution:
Try upgrading the skimage library to the latest version (currently v0.14) and hopefully it should work.
Cheers!!!
In newer versions of skimage (it gets updated when you update scikit-image), the step_size parameter is present, but the old method has been replaced by two methods: marching_cubes_lewiner and marching_cubes_classic. The marching_cubes_lewiner method takes the step_size parameter. Please try it after updating.
I've just installed the latest version of Anaconda.
I am having a basic problem with Bokeh, from this example.
from bokeh.plotting import *
f = figure()
f.line(x, y)
AttributeError: 'NoneType' object has no attribute 'line'
I can plot by calling line(x, y) directly, but it looks like the method above would provide more flexibility if it worked.
The example (and even the user guide) contradict the documentation for bokeh.plotting.figure(), which explicitly says it returns None, which explains the error you observe.
Using line() directly therefore seems to be the way to go.
However, this only holds for Bokeh versions before 0.7: version 0.7 deprecated implicit plotting, so figure().line() should work with Bokeh 0.7+. The documentation for figure() has apparently not yet been updated.
I have the following code (based on the samples here), but it is not working:
[...]
def my_analyzer(s):
    return s.split()

my_vectorizer = CountVectorizer(analyzer=my_analyzer)
X_train = my_vectorizer.fit_transform(traindata)
ch2 = SelectKBest(chi2, k=1)
X_train = ch2.fit_transform(X_train, Y_train)
[...]
The following error is given when calling fit_transform:
AttributeError: 'function' object has no attribute 'analyze'
According to the documentation, CountVectorizer should be created like this: vectorizer = CountVectorizer(tokenizer=my_tokenizer). However, if I do that, I get the following error: "got an unexpected keyword argument 'tokenizer'".
My actual scikit-learn version is 0.10.
You're looking at the documentation for 0.11 (to be released soon), where the vectorizer has been overhauled. Check the documentation for 0.10, where there is no tokenizer argument and the analyzer should be an object implementing an analyze method:
class MyAnalyzer(object):
    @staticmethod
    def analyze(s):
        return s.split()

v = CountVectorizer(analyzer=MyAnalyzer())
http://scikit-learn.org/dev is the documentation for the upcoming release (which may change at any time), while http://scikit-learn.org/stable has the documentation for the current stable version.
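A version-tolerant variant of the analyzer class above (a sketch, not taken from the scikit-learn docs) is an object that both exposes the old 0.10-style analyze method and is itself callable, since newer CountVectorizer versions call a custom analyzer directly:

```python
class MyAnalyzer(object):
    """Whitespace analyzer usable on either side of the 0.10/0.11
    CountVectorizer API change."""
    @staticmethod
    def analyze(s):
        # old scikit-learn 0.10 API: object with an analyze() method
        return s.split()

    def __call__(self, s):
        # newer scikit-learn calls the analyzer object directly
        return self.analyze(s)
```

With this, `CountVectorizer(analyzer=MyAnalyzer())` should behave the same whether the library looks for .analyze() or invokes the object as a callable.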