Detecting music title inside string - python

I am currently writing my own voice assistant in Python, using NLTK for preprocessing and PyTorch for processing the data. After many hours of searching, I can't find a method to extract the title of a song from other spoken text.
So what I want to achieve is, for example, filtering "Numb" out of "Play numb by Linkin Park". Is this possible with NLP or a neural network, and if so, how?

This is potentially quite a difficult problem to solve generally. As a first pass, you could try imposing some additional assumptions:
The text passed to your "song name extractor" is perfectly translated from speech
The user will follow a set format for requesting the song
If you make these assumptions the problem can be solved using regex, like so:
import re

# your input text
song_request = "Play numb by Linkin Park"

# search the input text for a matching substring
song_search = re.search("(?<=Play ).*(?= by)", song_request)

# if you get a match, extract the song title
if song_search:
    song_title = song_search.group()
else:
    song_title = ""  # just in case your assumption doesn't hold

Related

Create word lists from medical journals

I have been asked to compile a crossword for a surgeon's publication; it comes out quarterly. I need to make it medically oriented, preferably using words from different specialties, e.g. some orthopaedics, some cardiac surgery, some human anatomy, etc.
I can get surgical journals online.
I want to create word lists for each specialty and use them in Crossword Compiler.
I can use journal articles on the web, or downloaded PDFs. I am a surgeon and use pandas for data analysis, but my Python skills are a bit primitive, so I need relatively simple solutions. How can I create the specific word lists for each surgical specialty?
They don't need to be very specific words, so, for example, I thought I could scrape a journal volume for words, compare them to a list of common words, and delete those, leaving me with a technical list. It may require some trial and error. I haven't used Beautiful Soup before but am willing to try it.
Alternatively I could skip the Beautiful Soup step and use EndNote to download a few hundred journal articles and export them to txt.
It's the extraction and list-making that I am mainly struggling to conceptualise.
I created this program that you can use to parse through a .txt file to find the most common words. I also included a block of code that will help you to convert a .pdf file to .txt. Hope my approach to the solution helps, good luck with your crossword for the surgeon's publication!
'''
Find the most common words in a txt file
'''
import collections
# The re module provides regular expression matching operations
import re

'''
Use this if you would like to convert a PDF to a txt file
'''
# import PyPDF2
# pdf_file = open('textFileName.pdf', 'rb')
# pdf_reader = PyPDF2.PdfFileReader(pdf_file)
# with open('textFileName.txt', 'a') as txt_file:
#     for page_number in range(pdf_reader.numPages):
#         page = pdf_reader.getPage(page_number)
#         txt_file.write(page.extractText())

# tokenize the text into lowercase words
words = re.findall(r'\w+', open('textFileName.txt').read().lower())
# count word frequencies and print the ten most common
most_common = collections.Counter(words).most_common(10)
print(most_common)
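To get from "most common words" to a specialty list, you could then subtract a common-word list from the journal vocabulary, as you suggested. A minimal sketch, assuming a hypothetical common_words.txt with one everyday English word per line (any stop-word or frequency list would do):

import collections
import re

# load everyday English words to exclude (hypothetical file, one word per line)
with open('common_words.txt') as f:
    common_words = set(word.strip().lower() for word in f)

# tokenize the journal text into lowercase words
words = re.findall(r'\w+', open('textFileName.txt').read().lower())

# keep words that aren't common English and are long enough for a crossword
technical = [w for w in words if w not in common_words and len(w) >= 4]

# the most frequent survivors are good candidates for a specialty list
for word, count in collections.Counter(technical).most_common(50):
    print(word, count)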

how to remove names of people in corpus using python

I've been searching for this for a long time, and most of the materials I've found were about named entity recognition. I'm running topic modeling, but my data contains too many people's names.
Is there any Python library which contains (English) names of people? Or if not, what would be a good way to remove people's names from each document in a corpus?
Here's a simple example:
texts=['Melissa\'s home was clean and spacious. I would love to visit again soon.','Kevin was nice and Kevin\'s home had a huge parking spaces.']
I would suggest using a tokenizer with some capability to recognize and differentiate proper nouns. spacy is quite versatile and its default tokenizer does a decent job of this.
There are hazards to using a list of names as if they're stop words - let me illustrate:
import spacy
import pandas as pd

nlp = spacy.load("en_core_web_sm")

texts = ["Melissa's home was clean and spacious. I would love to visit again soon.",
         "Kevin was nice and Kevin's home had a huge parking spaces.",
         "Bill sold a work of art to Art and gave him a bill"]

tokenList = []
for i, sentence in enumerate(texts):
    doc = nlp(sentence)
    for token in doc:
        tokenList.append([i, token.text, token.lemma_, token.pos_, token.tag_, token.dep_])

tokenDF = pd.DataFrame(tokenList, columns=["i", "text", "lemma", "POS", "tag", "dep"]).set_index("i")
So the first two sentences are easy, and spaCy identifies the proper nouns with the POS tag "PROPN".
Now, the third sentence has been constructed to show the issue - lots of people have names that are also things. spaCy's default tokenizer isn't perfect, but it does a respectable job with the two sides of the task: don't remove names when they are being used as regular words (e.g. bill of goods, work of art), and do identify them when they are being used as names. (You can see that it messed up one of the references to Art, the person.)
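Given that tagging, one minimal way to strip names is simply to drop every token tagged PROPN. A sketch building on the script above (texts and nlp as defined there), with two caveats: it also drops non-person proper nouns like place names, and the possessive 's survives as a stray token:

# rebuild each text without tokens tagged as proper nouns
cleaned = []
for sentence in texts:
    doc = nlp(sentence)
    cleaned.append("".join(token.text_with_ws for token in doc if token.pos_ != "PROPN"))
print(cleaned)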
Not sure if this solution is efficient and robust but it's simple to understand (to me at the very least):
import re

# load a list of known names (over 18,000) from the file
with open('names.txt', 'r') as f:
    NAMES = set(f.read().splitlines())

# your list of texts
texts = ["Melissa's home was clean and spacious. I would love to visit again soon.",
         "Kevin was nice and Kevin's home had a huge parking spaces."]

# join the texts into one string
texts = ' | '.join(texts)

# find all the words that look like names (capitalised, with an optional possessive 's)
pattern = r"(\b[A-Z][a-z]+('s)?\b)"
found_names = re.findall(pattern, texts)

# strip the possessive 's and remove duplicates
found_names = set([name[0].replace("'s", "") for name in found_names])

# drop the words that look like names but are not included in NAMES
found_names = [name for name in found_names if name in NAMES]

# loop through the found names and remove each one from the texts
for name in found_names:
    texts = re.sub(name + "('s)?", "", texts)  # also removes possessive forms

# split the texts back into a list
texts = texts.split(' | ')
print(texts)
Output:
[' home was clean and spacious. I would love to visit again soon.',
' was nice and home had a huge parking spaces.']
List of the names was obtained here: https://www.usna.edu/Users/cs/roche/courses/s15si335/proj1/files.php%3Ff=names.txt.html
And I completely endorse @James_SO's recommendation to use smarter tools.

Substring with multiple instances of the same character

So I am using a MagTek USB reader that reads card information.
As of right now I can swipe a card and I get a long string of information that goes into a Tkinter Entry textbox, which looks like this:
%B8954756016548963^LAST/FIRST INITIAL^180912345678912345678901234?;8954756016548963=180912345678912345678901234?
All of the data has been randomized, but that's the format.
I've got a tkinter button (it gets the text from the entry box in the format I included above and runs this)
def printCD(self):
    print(self.carddata.get())
    self.card_data_get = self.carddata.get()
    self.creditnumber = self.card_data_get[self.card_data_get.find("B")+1:
                                           self.card_data_get.find("^")]
    print(self.creditnumber)
    print(self.card_data_get.count("^"))
This outputs:
%B8954756016548963^LAST/FIRST INITIAL^180912345678912345678901234?;8954756016548963=180912345678912345678901234?
8954756016548963
This yields no issues, but if I wanted to get the next two variables, firstname and lastname, I would need to reuse self.card_data_get.find("^"), because in the format "^" appears both before LAST and after INITIAL.
So far when I've tried this, I haven't been able to reuse "^".
Any takers on how I can split that string of text up into individual variables:
Card Number
First Name
Last Name
Expiration Date
Regex will work for this. I didn't capture everything because you didn't detail what's what but here's an example of capturing the name:
import re
data = "%B8954756016548963^LAST/FIRST INITIAL^180912345678912345678901234?;8954756016548963=180912345678912345678901234?"
matches = re.search(r"\^(?P<name>.+)\^", data)
print(matches.group('name'))
# LAST/FIRST INITIAL
If you aren't familiar with regex, here's a way of testing pattern matching: https://regex101.com/r/lAARCP/1 and an intro tutorial: https://regexone.com/
But basically, I'm searching for one or more of anything (.+) between two carets (^).
Actually, since you mentioned having first and last separate, you'd use this regex:
\^(?P<last>.+)/(?P<first>.+)\^
This question may also interest you regarding finding something twice: Finding multiple occurrences of a string within a string in Python
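For completeness, a single pattern can cover all four fields you listed. This is only a sketch: the group boundaries are inferred from your sample swipe, and reading the date as YYMM follows the usual track-1 convention, so treat both as assumptions:

import re

data = "%B8954756016548963^LAST/FIRST INITIAL^180912345678912345678901234?;8954756016548963=180912345678912345678901234?"

# card number, last name, first name (plus initial), then a 4-digit expiration
pattern = r"%B(?P<card>\d+)\^(?P<last>[^/]+)/(?P<first>[^^]+)\^(?P<exp>\d{4})"
m = re.search(pattern, data)
if m:
    print("Card no:", m.group("card"))
    print("Last name:", m.group("last"))
    print("First name:", m.group("first"))
    # track-1 dates are typically YYMM, so 1809 reads as 09/18
    print("Expires:", m.group("exp")[2:] + "/" + m.group("exp")[:2])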
If you find regex difficult you can divide the problem into smaller pieces and attack one at a time:
data = '%B8954756016548963^LAST/FIRST INITIAL^180912345678912345678901234?;8954756016548963=180912345678912345678901234?'

pieces = data.split('^')  # divide into pieces, one of which contains the name
for piece in pieces:
    if '/' in piece:
        last, the_rest = piece.split('/')
        first, initial = the_rest.split()
        print('Name:', first, initial, last)
    elif piece.startswith('%B'):
        print('Card no:', piece[2:])
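If you also need the expiration date with this approach, a small extension works (again a sketch, assuming the four digits after the second ^ are YYMM):

data = '%B8954756016548963^LAST/FIRST INITIAL^180912345678912345678901234?;8954756016548963=180912345678912345678901234?'
pieces = data.split('^')

# the piece after the name starts with the expiration date
for piece in pieces:
    if piece[:4].isdigit():
        print('Expires:', piece[2:4] + '/' + piece[:2])
        break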

Using spaCy to replace the "topic" of a sentence

So as a bit of a thought experiment I coded up a function in python that uses spaCy to find the subject of a news article, then replace it with a noun of choice. The problem is, it doesn't exactly work well, and I was hoping it could be improved. I don't exactly understand spaCy that well, and the documentation is a bit hard to understand.
First, the code:
doc = nlp(thetitle)
for text in doc:
    # subject would be
    if text.dep_ == "nsubj":
        subject = text.orth_
    # iobj for indirect object
    if text.dep_ == "iobj":
        indirect_object = text.orth_
    # dobj for direct object
    if text.dep_ == "dobj":
        direct_object = text.orth_
try:
    subject
except NameError:
    if not thetitle:  # if empty title
        thetitle = "cat"
        subject = "cat"
    else:  # if unknown subject
        try:  # do we have a direct object?
            direct_object
        except NameError:
            try:  # do we have an indirect object?
                indirect_object
            except NameError:  # still no??
                subject = random.choice(thetitle.split())
            else:
                subject = indirect_object
        else:
            subject = direct_object
else:
    thecat = "cat"  # do nothing here, everything went okay
newtitle = re.sub(r"\b%s\b" % subject, toreplace, thetitle)
if newtitle == thetitle:  # if no replacement happened due to regex
    newtitle = thetitle.replace(subject, toreplace)
return newtitle
the "cat" lines are filler lines that don't do anything. "thetitle" is a variable for a random news article title I'm pulling in from RSS feeds. "toreplace" is the variable that holds the string to replace whatever the found subject is.
Let's use an example:
"Video Games that Should Be Animated TV Shows - Screen Rant" And here's the displaCy breakdown of that: https://demos.explosion.ai/displacy/?text=Video%20Games%20that%20Should%20Be%20Animated%20TV%20Shows%20-%20Screen%20Rant&model=en&cpu=1&cph=1
The word the code decided to replace ended up being "that", which isn't even a noun in this sentence; it seems to have come from the random-word-choice fallback, since the code couldn't find a subject, indirect object, or direct object. My hope is that it would find something more like "Video Games" in this example.
I should note if I take the last bit out (which appears to be the source for the news article) in displaCy: https://demos.explosion.ai/displacy/?text=Video%20Games%20that%20Should%20Be%20Animated%20TV%20Shows&model=en&cpu=1&cph=1 it seems to think "that" is the subject, which is incorrect.
What is a better way to parse this? Should I look for proper nouns first?
Not directly answering your question, I think the code below is far more readable, because the conditions are explicit and what happens when a condition holds is not buried in an else clause far away. This code also takes care of the cases with multiple objects.
To your problem: any natural language processing tool will have a hard time finding the subject (or maybe rather the topic) of a sentence fragment, because such tools are trained on complete sentences. I'm not even sure such fragments technically have subjects (I'm not an expert, though). You could try to train your own model, but then you would have to provide labeled sentences, and I don't know if such a dataset already exists for sentence fragments.
I am not fully sure what you want to achieve, but the common nouns and pronouns will likely contain the word you want to replace, and the first one appearing is likely the most important.
import spacy
import random
import re
from collections import defaultdict

def replace_subj(sentence, nlp):
    doc = nlp(sentence)
    tokens = defaultdict(list)
    for text in doc:
        tokens[text.dep_].append(text.orth_)
    if not sentence:
        return "cat"
    if "nsubj" in tokens:
        subject = tokens["nsubj"][0]
    elif "dobj" in tokens:
        subject = tokens["dobj"][0]
    elif "iobj" in tokens:
        subject = tokens["iobj"][0]
    else:
        subject = random.choice(sentence.split())
    return re.sub(r"\b{}\b".format(subject), "cat", sentence)

if __name__ == "__main__":
    sentence = """Video Games that Should Be Animated TV Shows - Screen Rant"""
    nlp = spacy.load("en")
    print(replace_subj(sentence, nlp))
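If the dependency labels keep failing on headline fragments, another option is spaCy's built-in noun_chunks, replacing the first noun phrase instead of a grammatical subject. A sketch (the function name is mine; on your example the first chunk would likely be "Video Games"):

import random
import spacy

def replace_first_noun_chunk(sentence, nlp, replacement="cat"):
    # use spaCy's noun-phrase chunker instead of dependency labels
    doc = nlp(sentence)
    chunks = list(doc.noun_chunks)
    if chunks:
        target = chunks[0].text  # first noun phrase in the fragment
    else:
        target = random.choice(sentence.split())  # same fallback as before
    return sentence.replace(target, replacement, 1)

if __name__ == "__main__":
    nlp = spacy.load("en")
    title = "Video Games that Should Be Animated TV Shows - Screen Rant"
    print(replace_first_noun_chunk(title, nlp))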

Search methods and string matching in python

I have a task to search for a group of specific terms (around 138,000 terms) in a table made of 4 columns and 187,000 rows. The column headers are id, title, scientific_title and synonyms, where each column might contain more than one term.
I should end up with a csv table with the id where a term has been found and the term itself. What would be the best and fastest way to do so?
In my script, I tried building phrases by iterating over the words of each row in order and comparing each phrase against the term dictionary.
It looks something like this:
title_prepared = string_preparation(title)
sentence_array = title_prepared.split(" ")
length = len(sentence_array)
for i in range(length):
    for place_length in range(len(sentence_array)):
        last_element = place_length + 1
        phrase = ' '.join(sentence_array[0:last_element])
        if phrase in literalhash:
            final_dict.setdefault(trial_id, [])
            if phrase not in final_dict[trial_id]:
                final_dict[trial_id].append(phrase)
How should I be doing this?
The code on the website you link to is case-sensitive - it will only match when the terms in tumorabs.txt and neocl.xml have exactly the same case. If you can't change your data, then make two changes. After:

for line in text:

add (indented four spaces):

    line = line.lower()

And change:

phrase = ' '.join(sentence_array[0:last_element])

to:

phrase = ' '.join(sentence_array[0:last_element]).lower()
AFAICT this works with the unmodified code from the website when I change the case of some of the data in tumorabs.txt and neocl.xml.
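More generally, the fix is to normalise case on both sides before comparing. Here is a minimal sketch of the whole approach with hypothetical file names (terms.txt holding one term per line, titles.txt holding one title per row):

# build a set of search terms, lowercased once up front
with open('terms.txt') as f:
    terms = set(line.strip().lower() for line in f)

max_len = 6  # longest term in words; keeps the inner loop bounded

results = []
with open('titles.txt') as f:
    for row_id, line in enumerate(f):
        words = line.lower().split()
        # check every contiguous word sequence, up to max_len words, against the term set
        for start in range(len(words)):
            for end in range(start + 1, min(start + max_len, len(words)) + 1):
                phrase = ' '.join(words[start:end])
                if phrase in terms:
                    results.append((row_id, phrase))

for row_id, phrase in results:
    print(row_id, phrase)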
To clarify the problem: we are running a small scientific project where we need to extract all text parts containing particular keywords. We have used the coded dictionary and Python script posted at http://www.julesberman.info/coded.htm, but it seems that something is not working properly.
For example, the script does not recognize the keyword "Heart Disease" in the string "A Multicenter Randomized Trial Evaluating the Efficacy of Sarpogrelate on Ischemic Heart Disease After Drug-eluting Stent Implantation in Patients With Diabetes Mellitus or Renal Impairment".
Thanks for understanding! We are a biologist and a medical doctor, with a little knowledge of Python!
If you need more code I can post it online.
