I am using Python and NLTK to analyse some 'goal' titles. Basically, people set goals such as 'work harder', 'watch the sun rise' and so on. What I want to do is see how and where each of the non-stopwords appears in the list of entries and dump the results into a file. I know that I can give 'concordance' a width, and that helps a little (it lets me prune unimportant data out of the output).
Ideally what I want to do is show concordances from comma to comma (each 'goal' entry is separated from the next by a comma). I know I could use a plain search to do this rather than NLTK, but the NLTK package is much more elegant and I would like to use it, if possible.
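A minimal sketch of the concordance-with-width idea mentioned above; goals.txt, the width value and the output file are assumptions, and the punkt/stopwords data may need downloading first. Text.concordance prints to stdout, so the sketch simply captures stdout to dump everything into a file.

import contextlib
import nltk
from nltk.corpus import stopwords
from nltk.text import Text

raw = open("goals.txt", encoding="utf-8").read()
tokens = nltk.word_tokenize(raw)
text = Text(tokens)

# keep only alphabetic, non-stopword tokens as the words to look up
stop = set(stopwords.words("english"))
keywords = sorted({w.lower() for w in tokens if w.isalpha() and w.lower() not in stop})

# concordance() prints to stdout, so redirect it into the output file
with open("concordances.txt", "w", encoding="utf-8") as out, contextlib.redirect_stdout(out):
    for word in keywords:
        text.concordance(word, width=40)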
I am new to NER and Spacy. Trying to figure out what, if any, text cleaning needs to be done. Seems like some examples I've found trim the leading and trailing whitespace and then muck with the start/stop indexes. I saw one example where the guy did a bunch of cleaning and his accuracy was really bad because all the indexes were messed up.
Just to clarify, the dataset was annotated with DataTurks, so you get JSON like this:
"Content": <original text>,
"label": [
    "Skills"
],
"points": [
    {
        "start": 1295,
        "end": 1621,
        "text": "\n• Programming language...
So by "mucking with the indexes", I mean, if you strip off the leading \n, you need to update the start index, so it's still aligned properly.
So that's really the question: if I start removing characters from the beginning, end or middle, I need to apply the same rule to the Content attribute and adjust the start/end indexes to match, no? I'm guessing the answer is an obvious "yes" :), so I was wondering how much cleaning needs to be done.
So you would remove the \ns, bullets, leading / trailing whitespace, but leave standard punctuation like commas, periods, etc?
What about stuff like lowercasing, stop words, lemmatizing, etc?
One concern I'm seeing with a few of the samples I've looked at is that the start/stop indexes do get thrown off by the cleaning they do, because you need to update EVERY annotation as you remove characters to keep them in sync.
I.e.
A 0 -> 100
B 101 -> 150
if I remove a char at position 50, then I need to adjust B to 100 -> 149.
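An illustrative sketch of that bookkeeping (the function and tuple layout here are made up, not from any library): after deleting a character, every offset past the deletion point has to move left by one, or the labels drift out of alignment.

def remove_char(text, pos, annotations):
    """Delete the character at `pos`; annotations are (start, end, label) tuples."""
    new_text = text[:pos] + text[pos + 1:]
    adjusted = []
    for start, end, label in annotations:
        # shift any offset that lies past the deletion point
        new_start = start - 1 if start > pos else start
        new_end = end - 1 if end > pos else end
        adjusted.append((new_start, new_end, label))
    return new_text, adjusted

# With A = (0, 100) and B = (101, 150), deleting position 50 gives B = (100, 149)
# as above; note that A's end also shifts to 99, because the deleted character
# fell inside A.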
First, spaCy does no transformation of the input - it takes it literally as-is and preserves the format. So you don't lose any information when you provide text to spaCy.
That said, input to spaCy with the pretrained pipelines will work best if it is in natural sentences with no weird punctuation, like a newspaper article, because that's what spaCy's training data looks like.
To that end, you should remove meaningless whitespace (like newlines, leading and trailing spaces) and formatting characters (maybe a line of ----?), but that's about all the cleanup you have to do. The spaCy training data won't have bullets, so you might get some weird results around them, but I would leave them in to start. (Also, bullets are obviously printable characters - maybe you mean non-ASCII?)
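A rough sketch of that kind of light cleanup; the patterns and the function name are assumptions, and if you keep character-indexed annotations, any such edit also has to be mirrored in the offsets, as discussed further down.

import re

def light_clean(text):
    text = text.strip()                    # drop leading/trailing whitespace
    text = re.sub(r"-{3,}", " ", text)     # drop formatting lines of ----
    text = re.sub(r"\s*\n\s*", " ", text)  # turn newlines into single spaces
    return re.sub(r" {2,}", " ", text)     # collapse runs of spaces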
I have no idea what you mean by "muck with the indexes", but for some older NLP methods it was common to do more extensive preprocessing, like removing stop words and lowercasing everything. Doing that will make things worse with spaCy because it uses the information you are removing for clues, just like a human reader would.
Note that you can train your own models, in which case they'll learn about the kind of text you show them. In that case you can get rid of preprocessing entirely, though for actually meaningless things like newlines and leading/trailing spaces you might as well remove them anyway.
To address your new info briefly...
Yes, character indexes for NER labels must be updated if you do preprocessing. If they aren't updated they aren't usable.
It looks like you're trying to extract "skills" from resumes, which contain many bullet point lists. The spaCy training data is newspaper articles, which don't contain lists like that, so it's hard to say what the right thing to do is. I don't think the bullets matter much, but you can try removing them or not.
What about stuff like lowercasing, stop words, lemmatizing, etc?
I already addressed this, but do not do this. This was historically common practice for NLP models, but for modern neural models, including spaCy, it is actively unhelpful.
Python question, as it's the only language I know.
I've got many very long text files (8,000+ lines) with the sentences fragmented and split across multiple lines, e.g.:
Research both sides and you should then
formulate your own answer. It's simple,
straightforward advice that is common sense.
And when it comes to vaccinations, climate
change, and the novel coronavirus SARS-CoV-2
etc.
I need to concatenate the fragments into full sentences, breaking them at full stops (periods), question marks, quoted full stops, etc., and write them to a new, cleaned-up text file, but I am unsure of the best way to go about it.
I tried looping through, but the results showed me that this method was not going to work.
I have never coded generators before (not sure if that is what is called for in this instance), as I am an amateur developer and use coding to make my life easier and solve problems.
Any help would be very greatly appreciated.
If you read the file into a variable f, then you can access the text one row at a time (f behaves much like a list of strings). The functions that might be helpful to you are str.join and str.split. join takes a list of strings and joins them with a string in between: 'z'.join(["a", "b", "c"]) will produce "azbzc". split takes a string as a parameter, finds each instance of that string, and splits on it: "azbzc".split('z') will produce ["a", "b", "c"] again. Removing the newline after every line and then joining the lines with something like a space will rebuild the text into a single string; using split on things like question marks etc. will then break it up the way you want.
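A minimal sketch of that join/split idea; the file names and the sentence-ending pattern are assumptions you may need to adjust.

import re

# Read the fragments, dropping blank lines and trailing newlines.
with open("fragments.txt", encoding="utf-8") as f:
    lines = [line.strip() for line in f if line.strip()]

# Rebuild the text as one long string.
text = " ".join(lines)

# Split after ., ! or ? (optionally followed by a closing quote),
# keeping the punctuation with the sentence it ends.
sentences = re.split(r'(?<=[.!?]["\'])\s+|(?<=[.!?])\s+', text)

with open("cleaned.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(sentences))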
I am trying to sift through a Word document and perform some quick grammar checks. I currently have code that splits the document into words using python-docx, and then I run my grammar checks on the words/sentences based on specific criteria. I then paste the corrected grammar back into the document using the .add_run function.
The issue I'm having is that when I paste the corrected grammar back into the document, it doesn't preserve the font style (bold and italics), so the pasted words just show up as plain text when I want to keep the formatting.
My question is, is there any way in python (using python-docx or any other package) to save the font details (mainly bold and italics) for each word so that I can use code to bold or italicize words accordingly?
I've already tried using the .style attribute for paragraphs and runs, but the issue is that the paragraph style doesn't tell me which specific words are bolded or italicized, and I don't want to paste in a whole paragraph, just the wrong words.
I've also tried looking at the .style for each run, but it is unreliable, since runs often split a word in two when it seems they shouldn't ("Mario" might become two different runs, "M" and "ario", even when there isn't any style change).
I've also tried looking at other packages but haven't found anything helpful.
If the input is "Stack Overflow is a question and answer site for professional and enthusiast programmers." (with "question" and "answer" bold and italicized),
the expected output is the same sentence with the words "question" and "answer" bold and italicized;
the actual output is the same sentence with no bold or italics in it whatsoever.
This is a challenging problem in the general case:
Character-formatting can be applied at various levels (directly, style, document-default, etc.) and determining the effective character formatting for a particular run is therefore not a single-step process.
A given word can partly appear in more than one run, and not all parts of the word are guaranteed to be complete runs; for example, a sequence of five r|uns could ab|so|lut|ely be split like this.
So you have your work cut out for you. A general-case solution might have a Word object for each word, with a sequence of (run, offset, length) tuples for each, which, with the proper processing, could give you the information you need. You would probably assume that the formatting of the first character of the word can safely be applied to the whole word. In that case, you could just have a sequence of (word, run) pairs, where the run item is the run in which the first character of the word appears.
As a start I would look at run.bold and run.italic to get the most common formatting and then work your way up from there.
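A rough sketch of that (word, run) idea, under the simplifying assumption that the run containing a word's first character decides the whole word's formatting; the file name is a placeholder.

from docx import Document

doc = Document("input.docx")

for para in doc.paragraphs:
    word_runs = []                        # (word, run in which the word starts)
    current_word, current_run = "", None
    for run in para.runs:
        for ch in run.text:
            if ch.isspace():
                if current_word:
                    word_runs.append((current_word, current_run))
                    current_word, current_run = "", None
            else:
                if not current_word:
                    current_run = run     # remember the run where the word starts
                current_word += ch
    if current_word:
        word_runs.append((current_word, current_run))

    for word, run in word_runs:
        # run.bold / run.italic can be True, False or None (None = inherited
        # from the style), which is part of why the general case is hard.
        print(word, "bold:", run.bold, "italic:", run.italic)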
I'm working on getting Twitter trends using tweepy in Python, and I'm able to fetch the worldwide top 50 trends, so as a sample I'm getting results like these:
#BrazilianFansAreTheBest, #PSYPagtuklas, Federer, Corcuera, Ouvindo ANTI,
艦これ改, 영혼의 나이, #TodoDiaéDiaDe, #TronoChicas, #이사람은_분위기상_군주_장수_책사,
#OTWOLKnowYourLimits, #BaeIn3Words, #NoEntiendoPorque
(Please ignore the non-English words.)
So here I need to parse every hashtag and convert it into proper English words. I also checked how people write hashtags and found the ways below:
#thisisawesome
#ThisIsAwesome
#thisIsAwesome
#ThisIsAWESOME
#ThisISAwesome
#ThisisAwesome123
(sometimes hashtags have numbers as well)
So keeping all of these in mind, I thought that if I'm able to split the string below, then all of the above cases will be covered.
string ="pleaseHelpMeSPLITThisString8989"
Result = please, Help, Me, SPLIT, This, String, 8989
I tried something using re.sub, but it is not giving me the desired results.
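For what it's worth, the specific camel-case example can be split with a pattern like the one below, though this only covers the camel-case and digit cases; all-lowercase hashtags like #thisisawesome still can't be split this way, for the reasons the answer below gives.

import re

string = "pleaseHelpMeSPLITThisString8989"
# uppercase runs (SPLIT), Capitalized words, lowercase words, digit runs
parts = re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]+|[a-z]+|\d+", string)
print(", ".join(parts))   # please, Help, Me, SPLIT, This, String, 8989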
Regex is the wrong tool for the job. You need a clearly-defined pattern in order to write a good regex, and in this case, you don't have one. Given that you can have Capitalized Words, CAPITAL WORDS, lowercase words, and numbers, there's no real way to look at, say, THATSand and differentiate between "THATS and" and "THAT Sand".
A natural-language approach would be a better solution, but again, it's inevitably going to run into the same problem as above: how do you differentiate between two (or more) perfectly valid ways to parse the same input? Now you'd need a trie of common words, built for each language you plan to parse, and you'd still need to worry about properly parsing the nonsensical tags Twitter often comes up with.
The question becomes, why do you need to split the string at all? I would recommend finding a way to omit this requirement, because it's almost certainly going to be easier to change the problem than it is to develop this particular solution.
I have an SPSS dataset with customer comments housed as a string variable. I'm trying to come up with a piece of syntax that will pull out the word either before or after a specific keyword (one example might be "you were out of organic milk"). If you want to know what type of milk they are talking about, you would want to pull out the word directly before "milk" ("organic").
Through a series of string searches/manipulations, I have it looking for the first space before the word and pulling out the characters in between. However, I feel like there should be an easier way if I were solely using Python (split on spaces, identify the position of the keyword, and return the word X places before or after it). However, I don't know how to accomplish this in SPSS using Python. Any ideas for how to approach this problem?
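Setting the SPSS side apart, a minimal sketch of that split-and-index idea in plain Python (the keyword, offset and function name are illustrative); the same logic could then be wrapped into SPSS's Python programmability to run per case.

def word_near(comment, keyword, offset=-1):
    """Return the word `offset` positions away from `keyword` (-1 = before, 1 = after)."""
    words = comment.split()
    for i, w in enumerate(words):
        if keyword.lower() in w.lower():
            j = i + offset
            if 0 <= j < len(words):
                return words[j].strip(".,!?")
    return None

print(word_near("you were out of organic milk", "milk"))   # organic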