I have been dealing with an error while trying to learn Google's "temporal fusion transformer" algorithm in Anaconda Spyder 5.1.5.
It is very important for me to solve this error, so any help would be greatly appreciated.
The example I am using is at the link below:
https://pytorch-forecasting.readthedocs.io/en/latest/tutorials/stallion.html
When I run the code below from that example, I get an error:
study = optimize_hyperparameters(
    train_dataloader,
    val_dataloader,
    model_path="optuna_test",
    n_trials=200,
    max_epochs=50,
    gradient_clip_val_range=(0.01, 1.0),
    hidden_size_range=(8, 128),
    hidden_continuous_size_range=(8, 128),
    attention_head_size_range=(1, 4),
    learning_rate_range=(0.001, 0.1),
    dropout_range=(0.1, 0.3),
    trainer_kwargs=dict(limit_train_batches=30),
    reduce_on_plateau_patience=4,
    use_learning_rate_finder=False,  # use Optuna to find the ideal learning rate, or use the built-in learning rate finder
)
Here is the error:
A new study created in memory with name: no-name-fe7e21ce-3034-4679-b60a-ee4d5c9a4db5
[W 2022-10-21 19:36:49,382] Trial 0 failed because of the following error: TypeError("__init__() got an unexpected keyword argument 'weights_summary'")
Traceback (most recent call last):
File "C:\Users\omer\anaconda3\lib\site-packages\optuna\study\_optimize.py", line 196, in _run_trial
value_or_values = func(trial)
File "C:\Users\omer\anaconda3\lib\site-packages\pytorch_forecasting\models\temporal_fusion_transformer\tuning.py", line 150, in objective
trainer = pl.Trainer(
File "C:\Users\omer\anaconda3\lib\site-packages\pytorch_lightning\utilities\argparse.py", line 345, in insert_env_defaults
return fn(self, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'weights_summary'
Traceback (most recent call last):
Input In [3] in <cell line: 1>
study = optimize_hyperparameters(
File ~\anaconda3\lib\site-packages\pytorch_forecasting\models\temporal_fusion_transformer\tuning.py:217 in optimize_hyperparameters
study.optimize(objective, n_trials=n_trials, timeout=timeout)
File ~\anaconda3\lib\site-packages\optuna\study\study.py:419 in optimize
_optimize(
File ~\anaconda3\lib\site-packages\optuna\study\_optimize.py:66 in _optimize
_optimize_sequential(
File ~\anaconda3\lib\site-packages\optuna\study\_optimize.py:160 in _optimize_sequential
frozen_trial = _run_trial(study, func, catch)
File ~\anaconda3\lib\site-packages\optuna\study\_optimize.py:234 in _run_trial
raise func_err
File ~\anaconda3\lib\site-packages\optuna\study\_optimize.py:196 in _run_trial
value_or_values = func(trial)
File ~\anaconda3\lib\site-packages\pytorch_forecasting\models\temporal_fusion_transformer\tuning.py:150 in objective
trainer = pl.Trainer(
File ~\anaconda3\lib\site-packages\pytorch_lightning\utilities\argparse.py:345 in insert_env_defaults
return fn(self, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'weights_summary'
What is the problem with the code? Can anyone help me, please?
I had the same problem as you.
I suggest you look for the weights_summary variable in your code.
I use a .yaml file and set the parameters of pytorch_lightning.Trainer automatically using Hydra; I also use strategy=DDPStrategy(find~).
I just realized there was weights_summary in the .yaml file; the structure was
trainer:
  _target_: ~~
  ~~:
    weights_summary: "top"
I removed weights_summary from it, and the problem was solved.
The weights_summary argument was removed starting from PyTorch Lightning version 1.7.0; see the pull request here. As an alternative, use the parameter enable_model_summary, as described in the docs here.
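A minimal sketch of the replacement, assuming pytorch_lightning >= 1.7 (the old call is shown only as a comment; enable_model_summary is the documented substitute):
import pytorch_lightning as pl

# Pre-1.7 (now raises TypeError): pl.Trainer(weights_summary="top")
trainer = pl.Trainer(enable_model_summary=True)   # replaces weights_summary="top"
quiet = pl.Trainer(enable_model_summary=False)    # replaces weights_summary=None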
I think you installed pytorch-forecasting from conda-forge. The current version there is v0.10.2, while it is v0.10.3 on pip (see https://github.com/jdb78/pytorch-forecasting). They solved this issue in v0.10.3, so you can either reinstall it with pip or downgrade pytorch-lightning, like:
conda remove pytorch-lightning
conda install pytorch-lightning=1.6.4 -c conda-forge
conda remove pytorch-forecasting
conda install pytorch-forecasting -c conda-forge
As a temporary measure, there is a way to directly modify the installed library file.
In my case, line 147 of the file /opt/conda/lib/python3.7/site-packages/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py,
weights_summary=[None, "top"][optuna_verbose < optuna.logging.INFO],
was modified as follows:
enable_model_summary=[None, "top"][optuna_verbose < optuna.logging.INFO],
So it seems there is an incompatibility with the pytorch_lightning version you are using; your version is probably too new.
I'm using pytorch_lightning v1.5 and pytorch_forecasting 0.10.2, and it works.
I've been trying to run through this tutorial (https://bedapub.github.io/besca/tutorials/scRNAseq_tutorial.html) for the past day and constantly get an error after running this portion:
bc.pl.kp_genes(adata, min_genes=min_genes, ax=ax1)
The error is the following:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/miniconda3/lib/python3.9/site-packages/besca/pl/_filter_threshold_plots.py", line 57, in kp_genes
ax.set_yscale("log", basey=10)
File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axes/_base.py", line 4108, in set_yscale
ax.yaxis._set_scale(value, **kwargs)
File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axis.py", line 761, in _set_scale
self._scale = mscale.scale_factory(value, self, **kwargs)
File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/scale.py", line 597, in scale_factory
return scale_cls(axis, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'basey'
Anyone have any thoughts? I've uninstalled and reinstalled matplotlib to make sure it's updated, but that doesn't seem to have done anything either.
Would appreciate any help! And thank you in advance; I'm a beginner!
It seems that ax.set_yscale("log", basey=10) does not recognise the keyword argument basey. This keyword was replaced in recent matplotlib releases; if you install an older version, it should work:
pip install matplotlib==3.3.4
So why is this happening in the first place? The package you are using does not pin down specific dependency versions, so it installs the most recent versions of its dependencies. If there are any API changes in those newer versions, the code breaks; it's good practice to pin down the dependency versions of a project.
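For example, a minimal requirements.txt sketch (the matplotlib pin matches the version suggested above; pin your other dependencies the same way):
# requirements.txt -- pin exact, known-good versions
matplotlib==3.3.4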
I had a similar problem when I tried to scale the y-axis of my plot logarithmically with base 2. I had success when I used base=2 instead of basey=2:
plt.yscale("log", base=2)
This should also work with the latest version of matplotlib.
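For instance, a minimal runnable sketch (assuming matplotlib >= 3.3, where base replaced basey; the plotted data is illustrative):
import matplotlib.pyplot as plt

xs = list(range(1, 11))
ys = [2 ** x for x in xs]
plt.plot(xs, ys)
plt.yscale("log", base=2)  # 'basey=2' was the old spelling
plt.show()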
I looked for posts with a similar issue ("wrong" keyword arguments on __init__) on GitHub and SO, and it seems like you might need to update your matplotlib:
sudo pip install --upgrade matplotlib  # for Linux
pip install matplotlib --upgrade  # for Windows
I think it is because of version issues.
In newer versions of matplotlib, keywords like "basey" or "subsy" have been renamed. More details can be found in
matplotlib.scale.LogScale and yscale
class matplotlib.scale.LogScale(axis, *, base=10, subs=None, nonpositive='clip')
Bases: ScaleBase
A standard logarithmic scale. Care is taken to only plot positive values.
Parameters:
axis : Axis
    The axis for the scale.
base : float, default: 10
    The base of the logarithm.
nonpositive : {'clip', 'mask'}, default: 'clip'
    Determines the behavior for non-positive values. They can either be masked as invalid, or clipped to a very small positive number.
subs : sequence of int, default: None
    Where to place the subticks between each major tick. For example, in a log10 scale, [2, 3, 4, 5, 6, 7, 8, 9] will place 8 logarithmically spaced minor ticks between each major tick.
I have a simple graph and need to draw it on my screen; here is my code:
import matplotlib.pyplot as plt
import networkx as nx

def gera_grafo(matriz):
    grafo = nx.to_networkx_graph(matriz, create_using=nx.Graph)
    nx.draw(grafo)
    plt.show()
    return grafo
where matriz is an adjacency list containing the weights of the connections. The code was working just fine, but I had to create a new Python virtualenv, and since then, even though all the required libraries are correctly installed, it throws an error on the nx.draw() call. The error I get is:
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 396, in _random_state
random_state_arg = args[random_state_index]
IndexError: tuple index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 58, in <module>
grafo = gera_grafo(matriz)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/criacao_do_grafo.py", line 39, in gera_grafo
nx.draw(grafo)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 123, in draw
draw_networkx(G, pos=pos, ax=ax, **kwds)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py", line 333, in draw_networkx
pos = nx.drawing.spring_layout(G) # default to spring layout
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/decorator.py", line 214, in fun
return caller(func, *(extras + args), **kw)
File "/run/media/luisola/A216C03316C009ED/Users/Luis/Documents/Iniciação Científica/inicia-o-cient-fica/venv/lib/python3.9/site-packages/networkx/utils/decorators.py", line 400, in _random_state
raise nx.NetworkXError("random_state_index is incorrect") from e
networkx.exception.NetworkXError: random_state_index is incorrect
Is this an error in my code? If so, what can I do? Thanks in advance.
As stated in this question: networkx shows random_state_index is incorrect
There was a problem with decorator==5.0.0. As discussed in the related issue on GitHub (https://github.com/networkx/networkx/issues/4718), a fixed decorator>=5.0.X should be available soon. So either wait a little bit to upgrade, or downgrade to an older version as suggested in the SO question above.
Edit: decorator==5.0.5 or >=5.0.7 fixes the error.
As discussed in the issue linked above, decorator has now been updated and fixed.
I was using decorator==5.0.6 and got the same error.
However, upgrading to 5.0.7 solved my problem.
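A quick way to confirm which versions you actually have (a minimal sketch):
import decorator
import networkx

print(decorator.__version__)  # want 5.0.5 or >= 5.0.7
print(networkx.__version__)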
If you are using macOS Big Sur, networkx won't work the way you want it to. I needed to open my project in Ubuntu instead.
It worked for me after downgrading networkx and decorator:
networkx 2.3 pypi_0 pypi
decorator 4.3.0 pypi_0 pypi
OS: Mac OS BigSur
Since I am using Windows 10, the suggestion given by Sparky05 worked for me: update the decorator module with pip install decorator==5.0.7.
You can also update the networkx library. I will share the link for updating it later; note that this will not work if you are using the Spyder IDE, but it will work in VSCode.
I am trying to generate the YOLOv2 model file yolo.h5 so that I can load this pre-trained model. I am trying to port the Andrew Ng Coursera YOLO assignment (which runs in TensorFlow 1.x) to TensorFlow 2.3.
I was able to port it cleanly thanks to the TensorFlow upgrade utility (https://www.tensorflow.org/guide/upgrade). But little did I realize that I cannot download the yolo.h5 file (either it gets corrupted or the download times out), and therefore I thought I should build one, following the instructions from https://github.com/JudasDie/deeplearning.ai/issues/2.
It looked pretty straightforward: I cloned the YAD2K repo and downloaded both yolo.weights and yolo.cfg.
I ran the following command as per the instructions:
python yad2k.py yolo.cfg yolo.weights model_data/yolo.h5
But I got the following error:
Traceback (most recent call last):
_main(parser.parse_args())
File "yad2k.py", line 233, in _main
Lambda(
File "/home/sunny/miniconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 925, in __call__
return self._functional_construction_call(inputs, args, kwargs,
File "/home/sunny/miniconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1117, in _functional_construction_call
outputs = call_fn(cast_inputs, *args, **kwargs)
File "/home/sunny/miniconda3/lib/python3.8/site-packages/tensorflow/python/keras/layers/core.py", line 903, in call
result = self.function(inputs, **kwargs)
File "/home/sunny/YAD2K/yad2k/models/keras_yolo.py", line 32, in space_to_depth_x2
return tf.space_to_depth(x, block_size=2)
AttributeError: module 'tensorflow' has no attribute 'space_to_depth'
From all the discussions I figured out that the above needs to run in TensorFlow 1.x. However, that puts me back where I started, which is running it in TensorFlow 1.x; I would love to stick with TensorFlow 2.3.
I am wondering if someone can guide me here. Frankly, to get going all I need is a model h5 file, but I thought generating one would be better learning than just downloading one.
The above problem goes away when you upgrade all of your code under the YAD2K repo (particularly yad2k.py and the Python files under the models folder) to TensorFlow 2.x. The beautiful upgrade utility provided by TensorFlow does the magic for you by replacing the original call with the compatible tf.compat.v1.space_to_depth(input=x, block_size=...).
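For reference, a minimal sketch of the upgraded call (assuming TensorFlow 2.x; the tensor shape is illustrative):
import tensorflow as tf

x = tf.random.normal((1, 4, 4, 8))  # NHWC input

# TF 1.x call that fails on 2.x: tf.space_to_depth(x, block_size=2)
y = tf.compat.v1.space_to_depth(input=x, block_size=2)  # what the upgrade tool emits
z = tf.nn.space_to_depth(x, block_size=2)               # native TF 2.x equivalent
print(y.shape, z.shape)  # both (1, 2, 2, 32)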
Therefore, for those who are planning to do the hard job of downgrading their TensorFlow and Keras, I would recommend trying the TensorFlow upgrade utility instead; it saves a lot of time.
This takes care of my model h5 file creation. My bad: I didn't think about it when I asked the question.
I am running this extremely simple PyTorch example NN from the documentation as is, with nothing at all changed.
I get this error:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/opt/conda/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/container.py", line 67, in forward
input = module(input)
File "/opt/conda/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 55, in forward
return F.linear(input, self.weight, self.bias)
File "/opt/conda/envs/fastai/lib/python3.6/site-packages/torch/nn/functional.py", line 835, in linear
return torch.addmm(bias, input, weight.t())
RuntimeError: addmm(): argument 'mat1' (position 1) must be Variable, not torch.FloatTensor
Apparently during the matrix multiplication, there is some data type error.
Why would the matrices I'm trying to multiply need to be Variable anyway?
I can do
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out))
but get
AttributeError: 'Variable' object has no attribute 'item'
so that didn't help.
I am running PyTorch version 0.3.1.post2.
I think I just found the answer to my own question, so I'll leave this here in case anyone else comes across it:
**NOTE:** These examples have been updated for PyTorch 0.4, which made several major changes to the core PyTorch API. Most notably, prior to 0.4 Tensors had to be wrapped in Variable objects to use autograd; this functionality has now been added directly to Tensors, and Variables are now deprecated.
So this means I'm running an old version of PyTorch.
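For anyone on >= 0.4, the failing pattern from the question looks like this without Variable (a minimal sketch; the sizes are illustrative):
import torch

N, D_in, D_out = 64, 1000, 10
x = torch.randn(N, D_in)   # no Variable wrapper needed in >= 0.4
y = torch.randn(N, D_out)

model = torch.nn.Linear(D_in, D_out)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
print(loss.item())  # .item() works on 0-dim tensors in >= 0.4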
I had the same issue, so I will just add the update commands in case you don't want to search for them:
Optional:
conda list | grep pytorch
conda upgrade conda
The following update didn't update anything, although it should have, and I think it's the right way to update (you may try it first):
conda update pytorch torchvision
What did help was to specify the version explicitly:
conda install pytorch=0.4.0 -c pytorch
I read the how-to documentation to install Trigger, but when I test it in a Python environment, I get the error below:
>>> from trigger.netdevices import NetDevices
>>> nd = NetDevices()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/__init__.py", line 913, in __init__
with_acls=with_acls)
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/__init__.py", line 767, in __init__
production_only=production_only, with_acls=with_acls)
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/__init__.py", line 83, in _populate
# device_data = _munge_source_data(data_source=data_source)
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/__init__.py", line 73, in _munge_source_data
# return loader.load_metadata(path, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/loader.py", line 163, in load_metadata
raise RuntimeError('No data loaders succeeded. Tried: %r' % tried)
RuntimeError: No data loaders succeeded. Tried: [<trigger.netdevices.loaders.filesystem.XMLLoader object at 0x7f550a1ed350>, <trigger.netdevices.loaders.filesystem.JSONLoader object at 0x7f550a1ed210>, <trigger.netdevices.loaders.filesystem.SQLiteLoader object at 0x7f550a1ed250>, <trigger.netdevices.loaders.filesystem.CSVLoader object at 0x7f550a1ed290>, <trigger.netdevices.loaders.filesystem.RancidLoader object at 0x7f550a1ed550>]
Does anyone have an idea how to fix it?
The NetDevices constructor is apparently trying to find a "metadata source" that isn't there.
First, you need to define the metadata. Second, your code should handle the exception for when none is found.
I'm the lead developer of Trigger. Check out the doc Working with NetDevices; it is probably what you were missing. We've done some work recently to improve the quality of the setup/install docs, and I hope that this is clearer now!
If you want to get started super quickly, you can feed Trigger a CSV-formatted NetDevices file, like so:
test1-abc.net.example.com,juniper
test2-abc.net.example.com,cisco
Just put that in a file, e.g. /tmp/netdevices.csv and then set the NETDEVICES_SOURCE environment variable:
export NETDEVICES_SOURCE=/tmp/netdevices.csv
Then fire up Python, continue on with the examples, and you should be good to go!
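After that, a minimal usage sketch (hedged: find() and the vendor attribute are from the Trigger docs, and with_acls=False simply skips ACL support, which this quick-start does not configure):
from trigger.netdevices import NetDevices

nd = NetDevices(with_acls=False)
dev = nd.find('test1-abc.net.example.com')
print(dev.vendor)  # e.g. juniper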
I found that the default of /etc/trigger/netdevices.xml wasn't listed in the setup instructions. They did indicate to copy a file from the Trigger source folder:
cp conf/netdevices.json /etc/trigger/netdevices.json
But I didn't see how to specify this file instead of the default NETDEVICES_SOURCE on the installation page. As soon as I had a file that NETDEVICES_SOURCE pointed to in my /etc/trigger folder, it worked.
I recommend this to get the verification examples working right away with minimal fuss:
cp conf/netdevices.xml /etc/trigger/netdevices.xml
Using Ubuntu 14.04 with Python 2.7.3