Modifying your own BERT Model - python

Disclaimer: I am new to the topic, so if there is any misuse of terms or any inaccuracy, please do tell me.
I am trying to build a model (for learning purposes) using BERT. The model's task will be to summarize a text extractively, i.e. classify each sentence (or word) to decide which ones are kept as the summary. All I can find when I google is 'import extractive-summarizer + use = voila!', but what I want to do (if possible; again, correct me if I'm wrong) is more like this: take a 'base' BERT model (or a pretrained one), and then train it, modify some parameters, or add some components to make it do what I want.
I really don't know how to approach this, so before I start researching how, I want to ask here in case I can't figure anything out myself. Here are some questions; of course, if you have a link or a course that would help, I would be grateful for it.
Should I use a pretrained model or not? Where can I find them? How do I determine what kind of pretraining I need? If I have to use a base model, how do I modify it?
How much data do I need, and what kind?
What kind of preprocessing does BERT input require?
If you can answer even one of these, please write it down. Thank you very much.
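For the "take a pretrained base and add some stuff" idea, here is a minimal sketch using the Hugging Face transformers library: a pretrained BERT checkpoint plus a classification head, fine-tuned to label each sentence as keep/drop. The checkpoint name, the toy sentences, and the single training step are illustrative assumptions, not a full recipe:

```python
# Minimal sketch: fine-tune a pretrained BERT to classify each sentence
# as "keep in summary" (1) or "drop" (0).
# Requires: pip install torch transformers
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # binary label per sentence
)

# Toy training pairs; in practice these would come from a summarization
# dataset (e.g. CNN/DailyMail) with sentence-level extraction labels.
sentences = ["The company reported record profits.", "Lunch was served at noon."]
labels = torch.tensor([1, 0])

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**inputs, labels=labels)  # one gradient step as illustration
outputs.loss.backward()
optimizer.step()
```

At inference time you would score every sentence of a document the same way and keep the highest-scoring ones as the extractive summary.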

Related

Dataset for a python application

I am working on an application to predict a disease from its symptoms, and I am having some trouble building a dataset.
If someone has a dataset on this, please upload it to Drive and share the link here.
I also have a question about a good model for this (sklearn only). I am currently using a decision tree classifier for the project. Give suggestions if you have any.
Thank you for reading.
EDIT: Got the solution
You can make your own from this csv template:
Sickness, Symptom1, Symptom2, Symptom3, Symptom4
Covid-19, Cough, Loss of taste, Fever, Chills
Common Cold, Sneezing, Cough, Runny Nose, Headache
Then use pandas' read_csv to read the data. If you need more help, #mention me.
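As a minimal sketch of that read step (the file name symptoms.csv is an assumption):

```python
# Read the CSV template above with pandas.
import pandas as pd

df = pd.read_csv("symptoms.csv")   # columns: Sickness, Symptom1..Symptom4
print(df.head())

X = df.drop(columns=["Sickness"])  # symptom columns as features
y = df["Sickness"]                 # disease name as the target label
```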
I see that you are having trouble finding a dataset. I did a quick search and found this one on Kaggle. It would require preprocessing, since many of the symptom columns contain nulls. You could make each column a specific symptom, with value 1 (or 0) if the symptom is (or isn't) present. The drawback is that the number of 0s would be very high. You can try that and see if it works.
You can also see another implementation with Random Forest at this link, with very different preprocessing. Random Forest is an ensemble extension of the Decision Tree. However, the Decision Tree is more interpretable, if that is what you need; a sketch of both steps follows below.
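A rough sketch of both suggestions, the 1/0 symptom encoding and a Decision Tree vs. Random Forest comparison, assuming a CSV shaped like the template above:

```python
# One-hot encode symptom columns (Symptom1..Symptom4 hold symptom names,
# NaN where absent), then fit both classifiers.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("symptoms.csv")
symptom_cols = [c for c in df.columns if c.startswith("Symptom")]

# One column per distinct symptom, 1 if present for that row, else 0.
# stack() drops NaNs; groupby(level=0) folds the dummies back per row.
X = pd.get_dummies(df[symptom_cols].stack().str.strip()).groupby(level=0).max()
y = df["Sickness"]

for clf in (DecisionTreeClassifier(), RandomForestClassifier()):
    clf.fit(X, y)
    print(type(clf).__name__, clf.score(X, y))  # training accuracy, smoke test only
```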

Tensorflow: What's the best practice to get a section of a manual from a question?

I would like to use TensorFlow to create a smart FAQ. I have seen how to build a chatbot, but my need is to let the user search for help, where the result must be the most probable chapter or section of a manual.
For example the user can ask:
"What are the O.S. supported?"
The reply should be a list of all the sections of the manual that could contain the correct answer.
My text corpus for the training procedure is only the manual itself. I have followed the text classification example, but I don't think it is what I need, because in that case it would only tell whether a given text belongs to one category or another.
What is the best practice to accomplish this task (I use Python)?
Thank you in advance.
An idea could be to build embeddings of your text using BERT or other pretrained models (take a look at transformers), then compare those embeddings with your query (the question), for instance using cosine distance, and take the most similar ones, interpreting them as the sections or chapters containing the answer.
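A minimal sketch of that idea using the sentence-transformers wrapper around BERT-style encoders; the model name and the example sections are assumptions:

```python
# Embed manual sections once, embed the user's query, and rank sections
# by cosine similarity. Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sections = [
    "Chapter 2: Installation. Supported operating systems: Windows, Linux, macOS.",
    "Chapter 5: Troubleshooting network connections.",
]
section_emb = model.encode(sections, convert_to_tensor=True)

query = "What are the O.S. supported?"
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, section_emb)[0]  # cosine similarity per section
for i in scores.argsort(descending=True).tolist():  # most similar first
    print(f"{scores[i]:.3f}  {sections[i]}")
```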

What's the name of this matrix or table?

I hope you are all doing well.
I have a conceptual problem: I don't know the name of this table, and I don't know how to extract it using scikit-learn. Even knowing the correct terminology for the table would help a lot, or if someone can tell me which scikit-learn function to use, that would be awesome.
I have googled it a lot, e.g. using terms like 'aggregated table' and 'classification report', but couldn't find this type of table.
Thanks for your time!
Happy coding!
You can use the eli5 package in Python.
ELI5 is a Python package which helps to debug machine learning classifiers and explain their predictions.
For this specific case, you can use the eli5.show_weights() function on your classifier. Note that it works for classifiers from sklearn and also from sklearn-crfsuite.
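A minimal sketch of the call; show_weights renders HTML in a notebook, so a plain script can use the text formatter instead. The iris data and logistic regression here are assumptions for illustration:

```python
# Inspect classifier weights with eli5. Requires: pip install eli5
import eli5
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# In a Jupyter notebook this renders a weight table inline:
# eli5.show_weights(clf)

# In a plain script, format the same explanation as text:
print(eli5.format_as_text(eli5.explain_weights(clf)))
```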
Sorry for the late reply, but I found out after searching and discussing with my peers: this is a custom matrix used for comparing algorithms on the basis of feature extraction techniques. Thanks #OmG for taking the time to answer this question.

How would I go about image labeling/Classification?

Let's say I have a set of images of passports. I am working on a project where I have to identify the name on each passport and eventually convert that region into text.
For the very first part, labeling (or classification, I think; beginner here) where the name is on each passport, how would I go about that?
What techniques / software can I use to accomplish this?
Any detailed explanation or links would be great. I'm trying to figure out how exactly this is done so I can begin coding.
I know training a model is possibly involved, but I'm just not sure.
I'm using Python if that matters.
Thanks.
There are two routes you can take: one where you have labeled data (or you want to label data yourself), and one where you don't.
Let's start with the latter. Say you have an image of a passport. You want to detect where the text in the image is and what that text says. You can achieve this using a library called pytesseract, an OCR tool that does exactly this for you. It works well because it has been trained on a lot of other images, so it is good at detecting text in any image.
If you have labels, you might be able to improve on the model you could build with pytesseract, but this is a lot harder. If you want to learn it anyway, I would recommend starting with TensorFlow and using "transfer learning" to improve your model.
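A minimal sketch of the pytesseract route (pip install pytesseract pillow, plus the Tesseract binary itself; the image path is an assumption):

```python
# Extract text and word positions from a passport image with pytesseract.
from PIL import Image
import pytesseract

image = Image.open("passport.png")
text = pytesseract.image_to_string(image)  # plain OCR: all text in the image
print(text)

# image_to_data also returns bounding boxes, useful for locating
# where on the passport each word sits:
data = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)
print(data["text"][:10], data["left"][:10], data["top"][:10])
```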

How to create a neural network in machine learning to make predictions?

I'm currently working on a machine learning project and I need to make some forecasts.
I have this data (solar panel output and the kind of weather).
I need to forecast the energy efficiency of the following days.
I searched a little on this; there is some information about neural network libraries like Keras. I installed it, but I just don't know how to make it work in my situation. I am a beginner in machine learning; I have learned a lot about it, but it is mostly theory and little practice, so I'm really lost.
If someone could just tell me how to do it, or give me something like a research trail!
Thanks a lot for the support!
To use Keras/TensorFlow or other libraries, you need to know how to code in Python at least, and you should have an understanding of neural networks. To begin with, you can have a look at KNIME (https://www.knime.org/), which provides similar functionality but requires no coding. This might help you understand what happens when you apply any kind of algorithm. Once you have a fair idea, you might want to try Keras/TensorFlow.
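For when you get to Keras, here is a minimal regression sketch for this kind of forecasting; the features (weather measurements), the synthetic data, and the network size are all assumptions:

```python
# Toy Keras regression: predict panel energy output from weather features.
# Requires: pip install tensorflow
import numpy as np
from tensorflow import keras

# Stand-in data: rows = days; columns = e.g. sunshine hours, temperature,
# cloud cover; target = energy output. Replace with your real measurements.
X = np.random.rand(200, 3).astype("float32")
y = (2.0 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * np.random.rand(200)).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(3,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),  # single regression output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, validation_split=0.2, verbose=0)

print(model.predict(X[:3]))  # predicted output for three days
```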
