Cannot train nlu.yml file with Rasa in Google Colab - python

I'm trying to train an NLU model in Colab, but I get this error. I'm using Rasa 2.6.3:
ValueError: Unknown data format for file /content/drive/MyDrive/my_project/data/nlu/nlu.yml
Here is my code:
# Import modules for training
from rasa_nlu.training_data import load_data
from rasa_nlu.config import RasaNLUModelConfig
from rasa_nlu.model import Trainer
from rasa_nlu import config
# loading the nlu training samples
training_data = load_data("/content/drive/MyDrive/my_project/data/nlu/nlu.yml")
trainer = Trainer(config.load("/content/drive/MyDrive/my_project/config.yml"))
# training the nlu
interpreter = trainer.train(training_data)
model_directory = trainer.persist("/content/drive/MyDrive/my_project/models/", fixed_model_name="current")
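For context, the rasa_nlu imports above come from the legacy standalone 0.x package, which predates the YAML training-data format, so it cannot read a 2.x nlu.yml. Below is a minimal sketch of how NLU training is usually done with the rasa package itself in 2.x; the paths mirror the question, and the train_nlu helper and CLI flags should be double-checked against your installed 2.6.3:
# Minimal Rasa 2.x NLU training sketch (assumes the project layout from the question).
from rasa.train import train_nlu

model_path = train_nlu(
    config="/content/drive/MyDrive/my_project/config.yml",
    nlu_data="/content/drive/MyDrive/my_project/data/nlu/nlu.yml",
    output="/content/drive/MyDrive/my_project/models/",
    fixed_model_name="current",
)
print(model_path)
# Roughly equivalent CLI call from a Colab cell:
# !rasa train nlu --config config.yml --nlu data/nlu/nlu.yml --out models --fixed-model-name current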

Related

How to upgrade a TF1 GAN notebook on Colab to TF2? It doesn't work because Colab no longer supports TF1

I was trying to run this notebook on Colab:
https://colab.research.google.com/github/https-deeplearning-ai/GANs-Public/blob/master/C1W1_(Colab)_Inputs_to_a_pre_trained_GAN.ipynb
but first I got this:
ValueError: Tensorflow 1 is unsupported in Colab.
Then I upgraded it using this script:
import tensorflow as tf
!tf_upgrade_v2 \
--intree stylegan/ \
--inplace
and I commented out these lines:
%tensorflow_version 1.x
tflib.init_tf()
but then I got this error and couldn't solve it:
AttributeError: Can't get attribute 'Network' on <module 'dnnlib.tflib.network' from '/content/stylegan/dnnlib/tflib/network.py'>
Can somebody help?
# Clone the official StyleGAN repository from GitHub
!git clone https://github.com/NVlabs/stylegan.git
%tensorflow_version 1.x
import os
import pickle
import numpy as np
import PIL.Image
import stylegan
from stylegan import config
from stylegan.dnnlib import tflib
from tensorflow.python.util import module_wrapper
module_wrapper._PER_MODULE_WARNING_LIMIT = 0
# Initialize TensorFlow
tflib.init_tf()
# Go into that cloned directory
path = 'stylegan/'
if "stylegan" not in os.getcwd():
os.chdir(path)
# Load pre-trained network
# url = 'https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ' # Downloads the pickled model file: karras2019stylegan-ffhq-1024x1024.pkl
url = 'https://bitbucket.org/ezelikman/gans/downloads/karras2019stylegan-ffhq-1024x1024.pkl'
with stylegan.dnnlib.util.open_url(url, cache_dir=config.cache_dir) as f:
    print(f)
    _G, _D, Gs = pickle.load(f)
# Gs.print_layers() # Print network details
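As a small diagnostic (not a fix), it can help to check whether the class the unpickler is looking for, dnnlib.tflib.network.Network, is still defined after the tf_upgrade_v2 conversion. A sketch, assuming the working directory is the cloned stylegan/ folder so that dnnlib is importable on its own:
# Diagnostic sketch: check that the converted module still defines Network.
import importlib

network_module = importlib.import_module("dnnlib.tflib.network")
print(hasattr(network_module, "Network"))
# An exception on the import, or False here, suggests the converted network.py no
# longer loads cleanly, e.g. because tf_upgrade_v2 left TF1-only calls behind.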

How do I create a configuration and trainer with Rasa 1.x?

I am following an example on DataCamp which uses a deprecated version of rasa_nlu. The sample code from the DataCamp example looks like this:
# Import necessary modules
from rasa_nlu.converters import load_data
from rasa_nlu.config import RasaNLUConfig
from rasa_nlu.model import Trainer
# Create args dictionary
args = {"pipeline": "spacy_sklearn"}
# Create a configuration and trainer
config = RasaNLUConfig(cmdline_args=args)
trainer = Trainer(config)
# Load the training data
training_data = load_data("./training_data.json")
# Create an interpreter by training the model
interpreter = trainer.train(training_data)
# Test the interpreter
print(interpreter.parse("I'm looking for a Mexican restaurant in the North of town"))
This example imports RasaNLUConfig from rasa_nlu.config to create a config and trainer.
My question is: how do I make something like this with the newer Rasa 1.x? The code that I wrote looks like this:
from rasa_nlu.training_data import load_data
#Instead of from rasa_nlu import config the deprecated version used 'from rasa_nlu.config import RasaNLUConfig'
from rasa_nlu import config
from rasa_nlu.model import Trainer
#create args dictionary
args = {"pipeline": "spacy_sklearn"}
#create a configuration and trainer
config= RasaNLUConfig(cmdline_args=args)
trainer = Trainer(config)
#load training data
training_data = load_data('/content/nluintent.md')
#Create an interpreter by training the model
interpreter = trainer.train(training_data)
print(interpreter.parse('Hi, can you help me?'))
How would I be able to train the model using the new version of Rasa?
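For reference, a minimal sketch of the equivalent flow on Rasa 1.x (the rasa package with its rasa.nlu modules). Assumptions: a config.yml that declares the pipeline, and the same training file path as above; the old spacy_sklearn pipeline name was removed, and pretrained_embeddings_spacy is its usual replacement:
# Rasa 1.x sketch: the pipeline lives in a YAML config file instead of a cmdline_args
# dict (assumed config.yml content:  pipeline: "pretrained_embeddings_spacy").
from rasa.nlu.training_data import load_data
from rasa.nlu import config
from rasa.nlu.model import Trainer

trainer = Trainer(config.load("config.yml"))
training_data = load_data("/content/nluintent.md")
interpreter = trainer.train(training_data)
model_directory = trainer.persist("./models", fixed_model_name="current")
print(interpreter.parse("Hi, can you help me?"))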

HuggingFace SciBert AutoModelForMaskedLM cannot be imported

I am trying to use the pretrained SciBERT model (https://huggingface.co/allenai/scibert_scivocab_uncased) from Huggingface to evaluate masked words in scientific/biomedical text for bias using CrowS-Pairs (https://github.com/nyu-mll/crows-pairs/). The CrowS-Pairs code works great with the built-in models like BERT.
I modified the code of metric.py with the goal of allowing the option of using the SciBERT model:
import os
import csv
import json
import math
import torch
import argparse
import difflib
import logging
import numpy as np
import pandas as pd
from transformers import BertTokenizer, BertForMaskedLM
from transformers import AlbertTokenizer, AlbertForMaskedLM
from transformers import RobertaTokenizer, RobertaForMaskedLM
from transformers import AutoTokenizer, AutoModelForMaskedLM
and get the following error:
2021-06-21 17:11:38.626413: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
Traceback (most recent call last):
  File "metric.py", line 15, in <module>
    from transformers import AutoTokenizer, AutoModelForMaskedLM
ImportError: cannot import name 'AutoModelForMaskedLM' from 'transformers' (/usr/local/lib/python3.7/dist-packages/transformers/__init__.py)
Later in the Python file, the AutoTokenizer and AutoModelForMaskedLM are used as:
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModelForMaskedLM.from_pretrained("allenai/scibert_scivocab_uncased")
Libraries
huggingface-hub-0.0.8
sacremoses-0.0.45
tokenizers-0.10.3
transformers-4.7.0
The error occurs with and without GPU support.
Try this:
tokenizer = BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased", do_lower_case=True)
model = BertForMaskedLM.from_pretrained("allenai/scibert_scivocab_uncased")
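If that loads, a quick masked-token sanity check along these lines (a sketch; the example sentence is arbitrary) confirms the model is usable before wiring it back into metric.py:
# Sanity-check sketch: fill a [MASK] token with the loaded SciBERT model.
import torch

text = "The cell membrane is composed of a lipid [MASK]."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
print(tokenizer.decode(logits[0, mask_positions].argmax(dim=-1)))
# Note: AutoModelForMaskedLM does exist in transformers 4.7.0, so if the Auto* import
# still fails, printing transformers.__version__ and transformers.__file__ helps rule
# out a stale or shadowed installation.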

Issue with converting tensorflow model to Intel Movidius graph

Hello, I'm facing a problem when trying to use the Intel Movidius Neural Compute Stick with TensorFlow. I have a Keras model and I convert it to a TensorFlow model. When I convert it to a Movidius graph I get this error:
Traceback (most recent call last):
  File "/usr/local/bin/mvNCCompile", line 118, in <module>
    create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
  File "/usr/local/bin/mvNCCompile", line 104, in create_graph
    net = parse_tensor(args, myriad_config)
  File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 290, in parse_tensor
    if have_first_input(strip_tensor_id(node.outputs[0].name)):
IndexError: list index out of range
Here is my code:
from keras.models import model_from_json
from keras.models import load_model
from keras import backend as K
import tensorflow as tf
import nn
import os
weights_file = "weights.h5"
sess = K.get_session()
K.set_learning_phase(0)
model = nn.alexnet_model() # get keras model
model.load_weights(weights_file)
saver = tf.train.Saver()
saver.save(sess, "./TF_Model/tf_model") # convert keras to tensorflow model
tf_model_path = "./TF_Model/tf_model"
fw = tf.summary.FileWriter('logs', sess.graph)
fw.close()
os.system('mvNCCompile TF_Model/tf_model.meta -in=conv2d_1_input -on=activation_7/Softmax') # get Movidius graph
Python version: 2.7
OS: Ubuntu 16.04
Tensorflow version: 1.12
As far as I know, the NCSDK compiler does not resolve every part of a normal TensorFlow network, so you have to modify the network and re-save it in an NCS-friendly way in order to successfully build a Movidius graph.
For more information about how to modify a TensorFlow network, have a look at the official guidance.
Hope it helps.
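For reference, a sketch of the kind of "NCS-friendly" re-save usually meant here. Assumptions: nn.alexnet_model() rebuilds the same architecture as in the question, and the node-name printout is only there to find the real values to pass as -in= and -on=:
# Inference-only re-save sketch for mvNCCompile (assumptions noted above).
from keras import backend as K
import tensorflow as tf
import nn

K.clear_session()        # start from a fresh graph with no training-only ops
K.set_learning_phase(0)  # must be set before any layer is created
model = nn.alexnet_model()
model.load_weights("weights.h5")

sess = K.get_session()
saver = tf.train.Saver()
saver.save(sess, "./TF_Model/tf_model")

# Print the last few node names to confirm what to pass as -in= / -on=
print([n.name for n in sess.graph.as_graph_def().node][-5:])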

Failed precondition error when using Tensorflow serving to serve pretrained keras xception model

This is the code that I am using to export the Keras model into TensorFlow Serving format. The exported model loads up successfully in TensorFlow Serving (without any warnings or errors). But when I use my client to make a request to the server, I get a FailedPrecondition error:
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
  status = StatusCode.FAILED_PRECONDITION
  details = "Attempting to use uninitialized value block11_sepconv2_bn/moving_mean
import sys
import os
import tensorflow as tf
from keras import backend as K
from keras.models import Model
from keras.models import load_model
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import build_signature_def, predict_signature_def
from tensorflow.contrib.session_bundle import exporter
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
config = tf.ConfigProto( device_count = {'GPU': 2 , 'CPU': 12} )
sess = tf.Session(config=config)
K.set_session(sess)
K._LEARNING_PHASE = tf.constant(0)
K.set_learning_phase(0)
xception = load_model('models/xception/model.h5')
config = xception.get_config()
weights = xception.get_weights()
new_xception = Model.from_config(config)
new_xception.set_weights(weights)
export_path = 'prod_models/2'
builder = saved_model_builder.SavedModelBuilder(export_path)
signature = predict_signature_def(inputs={'images': new_xception.input},
                                  outputs={'scores': new_xception.output})
with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={'predict': signature})
    builder.save()
Package versions
Python 3.6.3
tensorflow-gpu 1.8.0
Keras 2.1.5
CUDA 9.0.176
I tried to replicate your problem with the following model as I do not have access to the model file that you are using:
from keras.applications.xception import Xception
new_xception = Xception()
I can make requests to this model without issues (python 3.6.4, tf 1.8.0, keras 2.2.0). What version of TF-serving are you using?
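For reference, a minimal export sketch along the lines of that reproduction (the export path and signature name mirror the question; treat it as a sketch rather than the exact code used):
# Export sketch: Xception from keras.applications saved for TensorFlow Serving.
from keras.applications.xception import Xception
from keras import backend as K
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def

K.set_learning_phase(0)  # export in inference mode
new_xception = Xception()

builder = saved_model_builder.SavedModelBuilder("prod_models/2")
signature = predict_signature_def(inputs={"images": new_xception.input},
                                  outputs={"scores": new_xception.output})
sess = K.get_session()
builder.add_meta_graph_and_variables(sess=sess,
                                     tags=[tag_constants.SERVING],
                                     signature_def_map={"predict": signature})
builder.save()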
