Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 6 years ago.
I am working on a project wherein I have to extract the following information from a set of articles (the articles could be on anything):
People: find the names of any people present, like "Barack Obama"
Topic or related tags of the article, like "Parliament", "World Energy"
Company/Organisation: I should be able to obtain the names of any companies or organisations mentioned, like "Apple" or "Google"
Is there an NLP framework/library of this sort available in Python which would help me accomplish this task?
@sel and @3kt gave really good answers. OP, you are looking for entity extraction, commonly referred to as Named Entity Recognition (NER). There exist many APIs to perform this. But the first question you need to ask yourself is:
What is the structure of my DATA? or rather,
Are my sentences good English sentences?
That is, figure out whether the data you are working with is consistently grammatically correct, well capitalized, and well structured. These factors are paramount when it comes to extracting entities. The data I worked with were tweets. ABSOLUTE NIGHTMARE!! I performed a detailed analysis of the performance of various APIs on entity extraction, and I shall share with you what I found.
Here are APIs that perform fabulous entity extraction:
NLTK has a handy reference book which talks in depth about its functions, with multiple examples. NLTK does not perform well on noisy data (tweets) because it has been trained on structured data. NLTK is absolute garbage for badly capitalized words (e.g. DUCK, Verb, CHAIR). Moreover, it is slightly less precise when compared to other APIs. It is great for structured or curated data from news articles and scholarly reports. It is a great learning tool for beginners.
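For reference, a minimal NLTK sketch (the sample sentence is just an illustration, and the downloaded resource names can vary slightly between NLTK versions):

import nltk

# One-time downloads of the models NLTK's default pipeline needs
# (names may differ slightly across NLTK versions).
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
nltk.download('words')

sentence = "Barack Obama met executives from Apple and Google in Washington."

tokens = nltk.word_tokenize(sentence)   # split into words
tagged = nltk.pos_tag(tokens)           # part-of-speech tags
tree = nltk.ne_chunk(tagged)            # named-entity chunking

# Collect (entity text, entity label) pairs, e.g. ('Barack Obama', 'PERSON').
entities = [(' '.join(word for word, tag in subtree.leaves()), subtree.label())
            for subtree in tree.subtrees()
            if subtree.label() in ('PERSON', 'ORGANIZATION', 'GPE')]
print(entities)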
Alchemy is simpler to implement and performs very well in categorizing the named entities. It has great precision when compared to the APIs I have mentioned. However, it has a certain transaction cost: you can only perform 1000 queries in a day! It identifies Twitter handles and can handle awkward capitalization.
IMHO the spaCy API is probably the best. It's open source. It outperforms the Alchemy API but is not quite as precise, and it categorizes entities almost as well as Alchemy.
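A minimal spaCy sketch (the model name below assumes a recent spaCy; older versions load models differently):

import spacy

# Load the small English pipeline; install it first with:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Barack Obama met executives from Apple and Google in Washington.")

# ent.label_ will be PERSON, ORG, GPE, etc.
for ent in doc.ents:
    print(ent.text, ent.label_)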
Choosing an API should be a simple problem for you now that you know how each API is likely to behave on the data you have.
EXTRA -
POLYGLOT is yet another API.
Here is a blog post that walks through entity extraction in NLTK.
There is a beautiful paper by Alan Ritter that might go over your head. But it is the standard for entity extraction (particularly in noisy data) at a professional level. You could refer to it every now and then to understand complex concepts like LDA or SVM for capitalisation.
What you are actually looking for is called, in the literature, 'Named Entity Recognition' or NER.
You might like to take a look at this tutorial:
http://textminingonline.com/how-to-use-stanford-named-entity-recognizer-ner-in-python-nltk-and-other-programming-languages
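To give a rough idea of what that tutorial covers, here is a hedged sketch using NLTK's StanfordNERTagger wrapper (the model and jar paths are placeholders; you need to download the Stanford NER distribution separately):

from nltk.tag import StanfordNERTagger
from nltk.tokenize import word_tokenize

# Placeholder paths: point these at your local Stanford NER download.
st = StanfordNERTagger(
    '/path/to/english.all.3class.distsim.crf.ser.gz',
    '/path/to/stanford-ner.jar',
    encoding='utf-8')

tokens = word_tokenize("Barack Obama works for the government of the United States.")
print(st.tag(tokens))   # roughly: [('Barack', 'PERSON'), ('Obama', 'PERSON'), ...]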
One easy way of partially solving this problem is to use regular expressions to extract words matching the patterns you can find in this paper for extracting people's names. This of course might lead to extracting all the categories you are looking for, i.e. the topics and the company names as well.
There is also an API that you can use that actually gives the results you are looking for, called Alchemy. Unfortunately, no documentation is available to explain the method they use to extract the topics or the people's names.
Hope this helps.
You should take a look at NLTK.
Finding names and companies can be achieved by tagging the recovered text and extracting proper nouns (tagged NNP). Finding the topic is a bit more tricky and may require some machine learning on a given set of articles.
Also, since we're talking about articles, I recommend the newspaper module, which can recover them from their URLs and do some basic NLP operations (summary, keywords).
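A rough sketch combining both suggestions (the URL is a placeholder, and newspaper's nlp() step relies on NLTK data being installed):

import nltk
from newspaper import Article

# Placeholder URL: substitute a real article.
article = Article("https://example.com/some-article")
article.download()
article.parse()

# Proper nouns (NNP/NNPS) are rough candidates for names and organisations.
tokens = nltk.word_tokenize(article.text)
proper_nouns = [word for word, tag in nltk.pos_tag(tokens) if tag in ('NNP', 'NNPS')]

article.nlp()   # newspaper's built-in keyword/summary extraction
print(proper_nouns[:20])
print(article.keywords)
print(article.summary)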
Related
I study literature and am trying to work out how I would go about importing a series of novels from .txt or other formats into Python to play around with different word frequencies, similarities, etc. I hope to try and establish some quantitative ways to define a genre beyond just subject matter.
I particularly want to see if certain word strings, concepts, and locations occur in each of these novels. Something like this: (http://web.uvic.ca/~mvp1922/modmac/). I would then like to focus in on one novel, using the past data as comparison and also analyzing it separately for character movement and interactions with other characters.
I am very sorry if this is vague, unclear, or just a stupid question. I am just starting out.
Welcome to StackOverflow!
This is a really, really big topic. If you're just getting started, I would recommend this book, which walks you through some of the basics of NLP using Python's nltk library. (If you're already experienced with Python and just not NLP, some parts of the book will be a bit elementary.) I've used this book in teaching university-level courses and had a good experience with it.
Once you've got the fundamentals under your belt, it sounds like you basically have a text classification (or possibly clustering) problem. There are a lot of good tutorials out there on this topic, including many that use Python libraries, such as scikit-learn. For more efficient Googling, other topics you'll want to explore are "bag of words" (analysis that ignores sentence structure, most likely the approach you'll start with) and "named-entity recognition" (if you want to identify characters, locations, etc.).
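As a hedged starting point for the word-frequency side of this (the file names are placeholders for your own novels), a simple bag-of-words pass might look like:

from pathlib import Path
from collections import Counter
import re

# Placeholder file names: point these at your own .txt novels.
novel_paths = [Path("novel_one.txt"), Path("novel_two.txt")]

def word_counts(path):
    text = path.read_text(encoding="utf-8", errors="ignore").lower()
    tokens = re.findall(r"[a-z']+", text)   # crude tokenizer; nltk's is better
    return Counter(tokens)

counts = {path.name: word_counts(path) for path in novel_paths}

# Compare how often a few words of interest appear in each novel.
for name, counter in counts.items():
    print(name, counter["city"], counter["train"], counter["telephone"])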
For future questions, the best way to get helpful answers on SO is to post specific examples of code that you're struggling with - this is a good resource on how to do that. Many users will avoid open-ended questions but jump all over puzzles with a clear, specific problem to solve.
Happy learning!
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Update: How would one approach the task of classifying any text on public forums such as games, or on blogs, so that derogatory comments/texts are filtered before being posted?
Original: "
I want to filter out adult content from tweets (or any text for that matter).
For spam detection, we have datasets that check whether a particular text is spam or ham.
For adult content, I found a dataset I want to use (extract below):
arrBad = [
    'acrotomophilia',
    'anal',
    'anilingus',
    'anus',
    # ... etc.
    'zoophilia',
]
Question
How can I use that dataset to filter text instances?
"
I would approach this as a text classification problem, because using blacklists of words typically does not work very well to classify full texts. The main reason why blacklists don't work is that you will have a lot of false positives (one example: your list contains the word 'sexy', which alone isn't enough to flag a document as being for adults). To classify texts this way you need a training set with documents tagged as "adult content" and others as "safe for work". So here is what I would do:
Check whether an existing labelled dataset can be used. You need several thousand documents of each class.
If you don't find any, create one. For instance, you can write a scraper and download Reddit content. Read for instance Text Classification of NSFW Reddit Posts.
Build a text classifier with NLTK. If you don't know how, read: Learning to Classify Text. (A minimal sketch follows below.)
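For illustration, a hedged sketch of the NLTK classifier step (the tiny training lists are stand-ins for a real labelled corpus of thousands of documents):

import nltk
from nltk.tokenize import word_tokenize

nltk.download('punkt')

# Stand-in training data: in practice you need thousands of labelled documents.
train_docs = [
    ("hot singles in your area click now", "adult"),
    ("explicit content not safe for work", "adult"),
    ("the committee approved the new budget", "safe"),
    ("heavy rain expected over the weekend", "safe"),
]

def features(text):
    # Bag-of-words features: {'word': True, ...}
    return {word: True for word in word_tokenize(text.lower())}

train_set = [(features(text), label) for text, label in train_docs]
classifier = nltk.NaiveBayesClassifier.train(train_set)

print(classifier.classify(features("click now for explicit content")))   # likely 'adult'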
This can be treated as a binary text classification problem. You should collect documents that contain adult content as well as documents that do not ('universal'). It may happen that a word/phrase you have included in the list arrBad is present in a 'universal' document, for example 'girl on top' in the sentence 'She wanted to be the first girl on top of Mt. Everest.' You need to get a count vector of the number of times each word/phrase occurs in an 'adult-content' document and in a 'universal' document.
I'd suggest you consider using algorithms like Naive Bayes (which should work fairly well in your case). However, if you want to capture the context in which each phrase is used, you could consider the Support Vector Machines algorithm as well (but that would involve tweaking a lot of complex parameters).
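A hedged scikit-learn sketch of that count-vector approach (the tiny corpus is a placeholder for real labelled data):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder corpus: replace with real 'adult-content' vs 'universal' documents.
texts = [
    "girl on top explicit adult video not safe for work",
    "hot singles in your area tonight",
    "the parliament debated the world energy policy",
    "climbers reached the top of the mountain at dawn",
]
labels = ["adult", "adult", "universal", "universal"]

# Naive Bayes over word and bigram counts; to try an SVM instead,
# import LinearSVC from sklearn.svm and swap it in for MultinomialNB().
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(texts, labels)

# Phrases like 'girl on top' are judged in the context of the whole document.
print(model.predict(["She wanted to be the first girl on top of Mt. Everest."]))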
You may be interested in something like TextRazor. By using their API you should be able to classify the input text.
For example, you can choose to remove all input texts that come back with categories or keywords you don't want.
I think you need to explore filtering algorithms more: study their usage, how multi-pattern searching works, and how you can use some of those algorithms (their implementations are freely available online, so it is not hard to find an existing implementation and customize it for your needs). Some pointers:
Check how the grep family of algorithms works, especially the bitap algorithm and the Wu-Manber implementation for fgrep. Depending upon how accurate you want to be, it may require adding some fuzzy-logic handling (think about why people write fukc instead of fuck, right?).
You may find Bloom filters interesting, since they won't have any false negatives (against your data set); the downside is that they may produce false positives.
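For a feel of the idea, a toy Bloom filter sketch (the size and hash count are arbitrary; a real filter would be sized for the word list and the acceptable false-positive rate):

import hashlib

class BloomFilter:
    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, word):
        # Derive several bit positions from one SHA-256 digest.
        digest = hashlib.sha256(word.encode("utf-8")).digest()
        for i in range(self.num_hashes):
            chunk = int.from_bytes(digest[i * 4:(i + 1) * 4], "big")
            yield chunk % self.size

    def add(self, word):
        for pos in self._positions(word):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, word):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(word))

bad_words = BloomFilter()
for word in ['acrotomophilia', 'anal', 'anilingus', 'anus', 'zoophilia']:
    bad_words.add(word)

print('anus' in bad_words)        # True: no false negatives for words that were added
print('parliament' in bad_words)  # almost certainly False, but false positives are possible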
I am looking for references (tutorials, books, academic literature) concerning structuring unstructured text in a manner similar to the google calendar quick add button.
I understand this may come under the NLP category, but I am interested only in the process of going from something like "Levi jeans size 32 A0b293"
to: Brand: Levi, Size: 32, Category: Jeans, code: A0b293
I imagine it would be some combination of lexical parsing and machine learning techniques.
I am rather language agnostic but if pushed would prefer Python, MATLAB or C++ references.
Thanks
You need to provide more information about the source of the text (the web? user input?), the domain (is it just clothes?), the potential formatting and vocabulary...
Assuming the worst-case scenario, you need to start learning NLP. A very good free book is the documentation of NLTK: http://www.nltk.org/book . It is also a very good introduction to Python, and the software is free (for various usages). Be warned: NLP is hard. It doesn't always work. It is not fun at times. The state of the art is nowhere near where you imagine it is.
Assuming a better scenario (your text is semi-structured) - a good free tool is pyparsing. There is a book, plenty of examples and the resulting code is extremely attractive.
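To give a feel for the semi-structured route, a hedged pyparsing sketch for strings shaped like the example in the question (the brand and category vocabularies are assumptions; a real grammar would need your actual lists):

from pyparsing import CaselessKeyword, Word, alphanums, nums

# Assumed vocabularies: extend these with your real brands and categories.
brand = (CaselessKeyword("Levi") | CaselessKeyword("Wrangler"))("brand")
category = (CaselessKeyword("jeans") | CaselessKeyword("shirt"))("category")
size = CaselessKeyword("size").suppress() + Word(nums)("size")
code = Word(alphanums, exact=6)("code")

pattern = brand + category + size + code

result = pattern.parseString("Levi jeans size 32 A0b293")
print(result.brand, result.category, result.size, result.code)
# Levi jeans 32 A0b293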
I hope this helps...
Possibly look at "Collective Intelligence" by Toby Segaran. I seem to remember it addressing the basics of this in one chapter.
After some research I have found that this problem is commonly referred to as Information Extraction, and I have amassed a few papers and stored them in a Mendeley Collection:
http://www.mendeley.com/research-papers/collections/3237331/Information-Extraction/
Also, as Tai Weiss noted, NLTK for Python is a good starting point, and this chapter of the book looks specifically at information extraction.
If you are only working with cases like the example you cited, you are better off using a manual rule-based approach that is 100% predictable and covers 90% of the cases it might encounter in production.
You could enumerate lists of all possible brands and categories and detect which is which in an input string, because there's usually very little intersection between these two lists.
The other two could easily be detected and extracted using regular expressions. (1-3 digit numbers are always sizes, etc)
Your problem domain doesn't seem big enough to warrant a more heavy duty approach such as statistical learning.
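As a hedged illustration of that rule-based route (the brand and category lists below are assumptions standing in for your real enumerations):

import re

# Assumed enumerations: replace with your real brand and category lists.
BRANDS = {"levi", "wrangler", "diesel"}
CATEGORIES = {"jeans", "shirt", "jacket"}

def parse_listing(text):
    result = {}
    for token in text.split():
        lowered = token.lower()
        if lowered in BRANDS:
            result["Brand"] = token
        elif lowered in CATEGORIES:
            result["Category"] = token
    # Rule: 1-3 digit numbers are treated as sizes.
    size = re.search(r"\b(\d{1,3})\b", text)
    if size:
        result["Size"] = size.group(1)
    # Rule: a 6+ character token mixing letters and digits is treated as a code.
    code = re.search(r"\b(?=\w*\d)(?=\w*[A-Za-z])(\w{6,})\b", text)
    if code:
        result["Code"] = code.group(1)
    return result

print(parse_listing("Levi jeans size 32 A0b293"))
# {'Brand': 'Levi', 'Category': 'jeans', 'Size': '32', 'Code': 'A0b293'}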
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam.
Closed 8 years ago.
I have tried the Orange Framework for Naive Bayesian classification.
The methods are extremely unintuitive, and the documentation is extremely unorganized. Does anyone here have another framework to recommend?
I mostly use naive Bayes for now.
I was thinking of using NLTK's naive Bayes classification, but apparently it can't handle continuous variables.
What are my options?
scikit-learn has an implementation of the Gaussian naive Bayes classifier. In general, the goal of this library is to provide a good trade-off between code that is easy to read and use, and efficiency. Hopefully it should be a good library to learn how the algorithms work.
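A minimal sketch with continuous features (the toy numbers are placeholders):

import numpy as np
from sklearn.naive_bayes import GaussianNB

# Placeholder continuous features (two measurements per sample) and class labels.
X = np.array([[1.2, 0.7], [1.0, 0.9], [3.8, 2.9], [4.1, 3.2]])
y = np.array([0, 0, 1, 1])

clf = GaussianNB()
clf.fit(X, y)

print(clf.predict([[1.1, 0.8], [3.9, 3.0]]))   # expected: [0 1]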
This might be a good place to start. It's the full source code (the text parser, the data storage, and the classifier) for a Python implementation of a naive Bayesian classifier. Although it's complete, it's still small enough to digest in one session. I think the code is reasonably well written and well commented. It is part of the source code for the book Programming Collective Intelligence.
To get the source, click the link, download and unpack the zip, and from the main folder 'PCI_Code' go to the folder 'chapter 6', which has a Python source file, 'docclass.py'. That's the complete source code for a Bayesian spam filter. The training data (emails) are persisted in an SQLite database, which is also included in the same folder ('test.db'). The only external library you need is the Python bindings to SQLite (pysqlite); you also need SQLite itself if you don't already have it installed.
If you're processing natural language, check out the Natural Language Toolkit.
If you're looking for something else, here's a simple search on PyPI.
pebl appears to handle continuous variables.
I found Divmod Reverend to be the simplest and easiest to use Python Bayesian classifier.
I just took Paul Graham's LISP stuff and converted it to Python:
http://www.paulgraham.com/spam.html
There’s also SpamBayes, which I think can be used as a general naive Bayesian classifier, instead of just for spam.
I was wondering how a semantic service like Open Calais figures out the names of companies, or people, tech concepts, keywords, etc. from a piece of text. Is it because they have a large database that they match the text against?
How would a service like Zemanta know what images to suggest to a piece of text for instance?
Michal Finkelstein from OpenCalais here.
First, thanks for your interest. I'll reply here but I also encourage you to read more on OpenCalais forums; there's a lot of information there including - but not limited to:
http://opencalais.com/tagging-information
http://opencalais.com/how-does-calais-learn
Also feel free to follow us on Twitter (@OpenCalais) or to email us at team@opencalais.com.
Now to the answer:
OpenCalais is based on a decade of research and development in the fields of Natural Language Processing and Text Analytics.
We support the full "NLP Stack" (as we like to call it):
From text tokenization, morphological analysis and POS tagging, to shallow parsing and identifying nominal and verbal phrases.
Semantics come into play when we look for Entities (a.k.a. Entity Extraction, Named Entity Recognition). For that purpose we have a sophisticated rule-based system that combines discovery rules as well as lexicons/dictionaries. This combination allows us to identify names of companies/persons/films, etc., even if they don't exist in any available list.
For the most prominent entities (such as people, companies) we also perform anaphora resolution, cross-reference and name canonization/normalization at the article level, so we'll know that 'John Smith' and 'Mr. Smith', for example, are likely referring to the same person.
So the short answer to your question is - no, it's not just about matching against large databases.
Events/Facts are really interesting because they take our discovery rules one level deeper; we find relations between entities and label them with the appropriate type, for example M&As (relations between two or more companies), Employment Changes (relations between companies and people), and so on. Needless to say, Event/Fact extraction is not possible for systems that are based solely on lexicons.
For the most part, our system is tuned to be precision-oriented, but we always try to keep a reasonable balance between accuracy and entirety.
By the way there are some cool new metadata capabilities coming out later this month so stay tuned.
Regards,
Michal
I'm not familiar with the specific services listed, but the field of natural language processing has developed a number of techniques that enable this sort of information extraction from general text. As Sean stated, once you have candidate terms, it's not too difficult to search for those terms with some of the other entities in context and then use the results of that search to determine how confident you are that the term extracted is an actual entity of interest.
OpenNLP is a great project if you'd like to play around with natural language processing. The capabilities you've named would probably be best accomplished with Named Entity Recognizers (NER) (algorithms that locate proper nouns, generally, and sometimes dates as well) and/or Word Sense Disambiguation (WSD) (e.g. the word 'bank' has different meanings depending on its context, and that can be very important when extracting information from text. Given the sentences "the plane banked left", "the snow bank was high", and "they robbed the bank", you can see how disambiguation can play an important part in language understanding).
Techniques generally build on each other, and NER is one of the more complex tasks, so to do NER successfully, you will generally need accurate tokenizers (natural language tokenizers, mind you -- statistical approaches tend to fare the best), string stemmers (algorithms that conflate similar words to common roots: so words like informant and informer are treated equally), sentence detection ('Mr. Jones was tall.' is only one sentence, so you can't just check for punctuation), part-of-speech taggers (POS taggers), and WSD.
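To make those lower stages concrete, a small NLTK sketch (the example sentence is arbitrary, and the downloaded resource names can vary between NLTK versions):

import nltk
from nltk.stem import PorterStemmer

nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')

text = "Mr. Jones was tall. The informant and the informer met him at the bank."

sentences = nltk.sent_tokenize(text)   # sentence detection: 'Mr.' does not end a sentence
stemmer = PorterStemmer()

for sentence in sentences:
    tokens = nltk.word_tokenize(sentence)               # natural-language tokenization
    stems = [stemmer.stem(token) for token in tokens]   # 'informant'/'informer' both become 'inform'
    tagged = nltk.pos_tag(tokens)                       # POS tags feed later stages (WSD, NER)
    print(tagged)
    print(stems)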
There is a python port of (parts of) OpenNLP called NLTK (http://nltk.sourceforge.net) but I don't have much experience with it yet. Most of my work has been with the Java and C# ports, which work well.
All of these algorithms are language-specific, of course, and they can take significant time to run (although, it is generally faster than reading the material you are processing). Since the state-of-the-art is largely based on statistical techniques, there is also a considerable error rate to take into account. Furthermore, because the error rate impacts all the stages, and something like NER requires numerous stages of processing, (tokenize -> sentence detect -> POS tag -> WSD -> NER) the error rates compound.
Open Calais probably uses language parsing technology and language statistics to guess which words or phrases are names, places, companies, etc. Then it is just another step to do some kind of search for those entities and return metadata.
Zemanta probably does something similar, but matches the phrases against metadata attached to images in order to acquire related results.
It certainly isn't easy.