NLTK Tokenizer encoding issue - python

After tokenizing, my sentence contains many weird characters. How can I remove them?
This is my code:
def summary(filename, method):
    list_names = glob.glob(filename)
    orginal_data = []
    topic_data = []
    print(list_names)
    for file_name in list_names:
        article = []
        article_temp = io.open(file_name, "r", encoding="utf-8-sig").readlines()
        for line in article_temp:
            print(line)
            if line.strip():
                tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
                sentences = tokenizer.tokenize(line)
                print(sentences)
                article = article + sentences
        orginal_data.append(article)
        topic_data.append(preprocess_data(article))
    if method == "orig":
        summary = generate_summary_origin(topic_data, 100, orginal_data)
    elif method == "best-avg":
        summary = generate_summary_best_avg(topic_data, 100, orginal_data)
    else:
        summary = generate_summary_simplified(topic_data, 100, orginal_data)
    return summary
The print(line) call prints a line of the .txt file, and print(sentences) prints the tokenized sentences from that line.
But sometimes the sentences contain weird characters after NLTK's processing.
Assaly, who is a fan of both Pusha T and Drake, said he and his friends
wondered if people in the crowd might boo Pusha T during the show, but
said he never imagined actual violence would take place.
[u'Assaly, who is a fan of both Pusha T and Drake, said he and his
friends wondered if people in\xa0the crowd might boo Pusha\xa0T during
the show, but said he never imagined actual violence would take
place.']
As in the example above: where do the \xa0 characters (in\xa0the, Pusha\xa0T) come from?

The \xa0 is a non-breaking space (U+00A0). It was already present in the source text; NLTK's tokenizer simply preserves it, and the repr of a unicode string displays it as the escape \xa0. You can replace it with a normal space, or normalize it away:
x = u'Assaly, who is a fan of both Pusha T and Drake, said he and his friends wondered if people in\xa0the crowd might boo Pusha\xa0T during the show, but said he never imagined actual violence would take place.'
# method 1: replace the non-breaking space with a plain space
x = x.replace(u'\xa0', u' ')
# method 2: Unicode normalization (NFKD maps \xa0 to a plain space)
import unicodedata
x = unicodedata.normalize('NFKD', x)
print(x)
Output:
Assaly, who is a fan of both Pusha T and Drake, said he and his friends wondered if people in the crowd might boo Pusha T during the show, but said he never imagined actual violence would take place.
Reference: unicodedata.normalize()
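A quick way to confirm what the character actually is, using only the standard library:
import unicodedata
print(unicodedata.name(u'\xa0'))                    # NO-BREAK SPACE
print(unicodedata.normalize('NFKD', u'in\xa0the'))  # in the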

Related

Replacing [[Words]] with other [[Words]] from a reference file in Notepad++ using Javascript

I have a translation file that looks like this:
Apple=Apfel
Apple pie=Apfelkuchen
Banana=Banane
Bananaisland=Bananen Insel
Cherry=Kirsche
Train=Zug
...500+ more lines like that
Now I have a text file I need to work on. Only certain parts of the text need to be replaced, for example:
The [[Apple]] was next to the [[Banana]]. Meanwhile the [[Cherry]] was chilling by the [[Train]].
The [[Apple pie]] tastes great on the [[Bananaisland]].
Result needs to be
The [[Apfel]] was next to the [[Banane]]. Meanwhile the [[Kirsche]] was chilling by the [[Zug]].
The [[Apfelkuchen]] tastes great on the [[Bananen Insel]].
There are way too many instances to copy/paste manually. What is an easy way to search for [[XXX]] and replace it from another file as described?
I tried getting help for this for many hours but to no avail. The closest I have gotten was this script:
import re

separators = "=", "\n"

def custom_split(sepr_list, str_to_split):
    # create regular expression dynamically
    regular_exp = '|'.join(map(re.escape, sepr_list))
    return re.split(regular_exp, str_to_split)

with open('D:/_working/paired-search-replace.txt') as f:
    for l in f:
        s = custom_split(separators, l)
        editor.replace(s[0], s[1])
However, this replaces too much, or not consistently. E.g. [[Apple]] gets correctly replaced by [[Apfel]], but [[File:Apple.png]] gets wrongly replaced by [[File:Apfel.png]], and [[Apple pie]] gets replaced by [[Apfel pie]]. I tried tweaking the regular expression for hours on end, to no avail. Does anyone have any info - in very simple terms please - on how I can fix this/achieve my goal?
This is a little tricky because [ is a meta character in regex.
I'm sure there is a more efficient way to do it but this works:
replaces="""Apple=Apfel
Apple pie=Apfelkuchen
Banana=Banane
Bananaisland=Bananen Insel
Cherry=Kirsche
Train=Zug"""
text = """
The [[Apple]] was next to the [[Banana]]. Meanwhile the [[Cherry]] was chilling by the [[Train]].
The [[Apple pie]] tastes great on the [[Bananaisland]].
"""
if __name__ == '__main__':
    import re
    for replace in replaces.split('\n'):
        english, german = replace.split('=')
        # note: re.escape(english) would be safer if keys could contain regex metacharacters
        text = re.sub(rf'\[\[{english}\]\]', f'[[{german}]]', text)
    print(text)
outputs:
The [[Apfel]] was next to the [[Banane]]. Meanwhile the [[Kirsche]] was chilling by the [[Zug]].
The [[Apfelkuchen]] tastes great on the [[Bananen Insel]].
First, read in the file with translations:
translations = {}
with open('file/with/translations.txt', 'r', encoding='utf-8') as f:
    for line in f:
        items = line.strip().split('=', 1)
        translations[items[0]] = items[1]
I assume the phrases/words are unique in the file.
Then, you need to match all substrings between [[ and ]] and capture the text in between, with a regex like \[\[(.*?)]]. For each match, check whether the group 1 value is a key in the translations dictionary; replace with [[ + dictionary value + ]] if there is such a key, or return the whole match unchanged if there is no such translation:
text = """The [[Apple]] was next to the [[Banana]]. Meanwhile the [[Cherry]] was chilling by the [[Train]].
The [[Apple pie]] tastes great on the [[Bananaisland]]."""
import re
translated_text = re.sub(
    r"\[\[(.*?)]]",
    lambda x: f'[[{translations[x.group(1)]}]]' if x.group(1) in translations else x.group(),
    text,
)
Output:
>>> translated_text
'The [[Apfel]] was next to the [[Banane]]. Meanwhile the [[Kirsche]] was chilling by the [[Zug]]. \nThe [[Apfelkuchen]] tastes great on the [[Bananen Insel]].'
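To run this end to end over files instead of inline strings, the two steps can be combined. A minimal sketch; the file paths here are assumptions:
import re

translations = {}
with open('paired-search-replace.txt', encoding='utf-8') as f:
    for line in f:
        key, _, value = line.rstrip('\n').partition('=')
        if key:
            translations[key] = value

with open('input.txt', encoding='utf-8') as f:
    text = f.read()

translated = re.sub(
    r"\[\[(.*?)]]",
    lambda m: f'[[{translations[m.group(1)]}]]' if m.group(1) in translations else m.group(),
    text,
)

with open('output.txt', 'w', encoding='utf-8') as f:
    f.write(translated)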

Using python to delete unwanted parts of a text file

I have an input file such as
[headline - https://prachatai.com/journal/2020/10/89984]
'ประยุทธ์' ขอบคุณทุกฝ่าย ยืนยันเจ้าหน้าที่ปฏิบัติตามหลักสากลทุกประการ - ด้านตำรวจยืนยันไม่มีการใช้กระสุนยางและแก๊สน้ำตากระชับพื้นที่ผู้ชุมนุม ระบุสารเคมีผสมน้ำไม่มีอันตราย ใช้เพื่อระุบตัวผู้ชุมนุมดำเนินคดีในอนาคต
เมื่อคืนวันที่ 16 ต.ค. 2563 อนุชา บูรพชัยศรี โฆษกประจำสำนักนายกรัฐมนตรี เปิดเผยว่า พล.อ. ประยุทธ์ จันทร์โอชา นายกรัฐมนตรี และรัฐมนตรีว่าการกระทรวงกลาโหม ขอขอบคุณเจ้าหน้าที่ทุกฝ่าย ประชาชนทุกกลุ่ม และผู้ชุมนุมที่ให้ความร่วมมือกับทางเจ้าหน้าที่ของรัฐในการยุติการชุมนุม
[headline - https://prachatai.com/english/about/internship]
Here is some english text
[headline - https://prachatai.com/english/node/8813]
Foreigners attended the protest at Thammasat University to show their support for the people of Thailand and their fight for democracy. The use of social media has greatly contributed to the expansion of foreign participation in protests.
A protester with a Guy Fawkes mask at the 19 Sept protest.
[headline - https://prachatai.com/journal/2020/10/89903]
ต.ค.62-ก.ย.63 แรงงานไทยในต่างประเทศส่งเงินกลับบ้าน 200,254 ล้านบาท
นายสุชาติ ชมกลิ่น รัฐมนตรีว่าการกระทรวงแรงงาน เปิดเผยว่า นับจากช่วงที่ประเทศไทยเข้าสู่สถานการณ์การแพร่ระบาดของโรคโควิด-19 ส่งผลกระทบต่อการจัดส่งแรงงานไทยไปทำงานต่างประเทศในภาพรวม เนื่องจากหลายประเทศที่เป็นเป้าหมายในการเดินทางไปทำงานของแรงงานไทย ชะลอการรับคนต่างชาติเข้าประเทศ
My goal here is to remove every English article. I have multiple large text files, so I want an efficient way to get rid of the English articles and keep everything else.
So an example output would look like.
[headline - https://prachatai.com/journal/2020/10/89984]
'ประยุทธ์' ขอบคุณทุกฝ่าย ยืนยันเจ้าหน้าที่ปฏิบัติตามหลักสากลทุกประการ - ด้านตำรวจยืนยันไม่มีการใช้กระสุนยางและแก๊สน้ำตากระชับพื้นที่ผู้ชุมนุม ระบุสารเคมีผสมน้ำไม่มีอันตราย ใช้เพื่อระุบตัวผู้ชุมนุมดำเนินคดีในอนาคต
เมื่อคืนวันที่ 16 ต.ค. 2563 อนุชา บูรพชัยศรี โฆษกประจำสำนักนายกรัฐมนตรี เปิดเผยว่า พล.อ. ประยุทธ์ จันทร์โอชา นายกรัฐมนตรี และรัฐมนตรีว่าการกระทรวงกลาโหม ขอขอบคุณเจ้าหน้าที่ทุกฝ่าย ประชาชนทุกกลุ่ม และผู้ชุมนุมที่ให้ความร่วมมือกับทางเจ้าหน้าที่ของรัฐในการยุติการชุมนุม
[headline - https://prachatai.com/journal/2020/10/89903]
ต.ค.62-ก.ย.63 แรงงานไทยในต่างประเทศส่งเงินกลับบ้าน 200,254 ล้านบาท
นายสุชาติ ชมกลิ่น รัฐมนตรีว่าการกระทรวงแรงงาน เปิดเผยว่า นับจากช่วงที่ประเทศไทยเข้าสู่สถานการณ์การแพร่ระบาดของโรคโควิด-19 ส่งผลกระทบต่อการจัดส่งแรงงานไทยไปทำงานต่างประเทศในภาพรวม เนื่องจากหลายประเทศที่เป็นเป้าหมายในการเดินทางไปทำงานของแรงงานไทย ชะลอการรับคนต่างชาติเข้าประเทศ
As you can see, all the English articles are under
[headline - https://.../english/...
Each article begins with one of these [headline tags, which contains its URL, and the English articles happen to have english in their URLs.
So now I want to get rid of the English articles. How do I achieve this?
current code
with open('example.txt', 'r') as inputFile:
    data = inputFile.read().splitlines()

Outputtext = ""
for line in data:
    if line.startswith("[headline"):
        if "english" in line:
            pass  # somehow read until the next [headline and do check
        else:
            Outputtext = Outputtext + line + "\n"
You can possibly do this with just Regex. It may need to be tweaked to fit the specific rules for your formatting, though.
import re

all_articles = "..."

# match "[headline...english" and everything after till another "[headline"
english_article_regex = r"\[headline[^\]]*\/english[^\]]*].*?(?=(\[headline|$))"
result = re.sub(english_article_regex, "", all_articles, 0, re.DOTALL)
Here's the live example:
https://regex101.com/r/heKomA/3
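As a sketch of how that pattern might be applied to a whole file (the file names are assumptions):
import re

english_article_regex = r"\[headline[^\]]*\/english[^\]]*].*?(?=(\[headline|$))"

with open('example.txt', 'r', encoding='utf-8') as f:
    all_articles = f.read()

result = re.sub(english_article_regex, "", all_articles, 0, re.DOTALL)

with open('filtered.txt', 'w', encoding='utf-8') as f:
    f.write(result)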
I think if you had put a bit more time into it, you might have solved this problem yourself. When I see your code, I see someone learning programming who is confused about what he needs to do.
You need to think step by step. Here, you have a text composed of articles, and you want to filter out some articles depending on a condition. What's the first thing you need to do?
You first need to know how to recognize an article. Is an article a pack of 3 lines in your file? No, the size changes, so you need another common factor. They all begin with [headline? Alright. Now you need to make "groups" of articles. There are a very large number of ways you could do it, but I just wanted to give you an insight into how you could solve your problem: one step at a time.
Here is a solution to your problem, and it is not the only one, far from it. Given this test file hello.txt:
HELLO
IGNORE
THESE
[headline - https://prachatai.com/journal/2020/10/89984]
NOENGLISHTEXT
MULTIPLE
LINES
TEXT
[headline - https://prachatai.com/english/about/internship]
Here is some english text
[headline - https://prachatai.com/english/node/8813]
Foreigners attended the protest at Thammasat University to show their support for the people of Thailand and their fight for democracy. The use of social media has greatly contributed to the expansion of foreign participation in protests.
A protester with a Guy Fawkes mask at the 19 Sept protest.
[headline - https://prachatai.com/journal/2020/10/89903]
NOENGLISHTEXT SECOND
MULTIPLE
LINES
And here is my solution, in pure Python.
def filter_out_english_block(lines: list) -> str:
    filtered_lines = []
    flag = False
    for line in lines:
        if line.startswith("[headline"):
            if 'english' not in line:
                flag = True
            else:
                flag = False
        if flag:
            filtered_lines.append(line)
    return "".join(filtered_lines)
if __name__ == '__main__':
    with open("hello.txt", "r") as f:
        lines = f.readlines()
    print(lines)
    # ['HELLO\n', 'IGNORE\n', 'THESE\n', '[headline - https://prachatai.com/journal/2020/10/89984]\n', 'NOENGLISHTEXT\n', 'MULTIPLE\n', 'LINES\n', 'TEXT\n', '[headline - https://prachatai.com/english/about/internship]\n', 'Here is some english text\n', '[headline - https://prachatai.com/english/node/8813]\n', 'Foreigners attended the protest at Thammasat University to show their support for the people of Thailand and their fight for democracy. The use of social media has greatly contributed to the expansion of foreign participation in protests.\n', 'A protester with a Guy Fawkes mask at the 19 Sept protest.\n', '[headline - https://prachatai.com/journal/2020/10/89903]\n', 'NOENGLISHTEXT SECOND\n', 'MULTIPLE\n', 'LINES']
    new_text = filter_out_english_block(lines)
    print(new_text)
# [headline - https://prachatai.com/journal/2020/10/89984]
# NOENGLISHTEXT
# MULTIPLE
# LINES
# TEXT
# [headline - https://prachatai.com/journal/2020/10/89903]
# NOENGLISHTEXT SECOND
# MULTIPLE
# LINES
The explanation is:
I first iterate through the file as a list of lines.
I store a line only if I have previously seen a condition that suits me (here, a [headline line that does not contain the string english).
The storing flag defaults to False, so the first lines are ignored until a suitable headline is seen.
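Once the "group by headline" idea clicks, the same filter can also be written by splitting the whole text on headline boundaries. A compact sketch under the same assumptions (splitting on a zero-width pattern requires Python 3.7+):
import re

def filter_out_english_blocks(text: str) -> str:
    # Split into blocks, each starting at a "[headline" line;
    # [1:] drops any preamble before the first headline, matching
    # the flag-based version above.
    blocks = re.split(r'(?=\[headline)', text)[1:]
    # Keep only blocks whose headline line does not mention "english".
    return ''.join(b for b in blocks if 'english' not in b.split('\n', 1)[0])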

How can I get children of an ancestor using spacy dependency tree in python

The code is as follows:
import spacy
from nltk import Tree
en_nlp = spacy.load('en')
parsed = en_nlp(u"Photos under low lighting are poor, both front and back cameras.")
print(u'sentence:{0}'.format(parsed.text))
try2 = []
print(u'parsed_sentence_children::{0}'.format([(x.text,x.pos_,x.dep_,[(x.text,x.dep_) for x in list(x.children)]) for x in parsed]))
print("\n\n")
for x in parsed:
    if x.pos_ == "NOUN" and x.dep_ == "nsubj":
        try2 = [(x.text, x.pos_, x.dep_, [(a.text, a.pos_) for a in x.ancestors])]
        print(u'Noun and noun subject:{0}'.format(try2))
The output to this is:
[(u'Photos', u'NOUN', u'nsubj', [(u'are', u'VERB')])]
Now I wish to print acomp children of:
[(u'are', u'VERB')]
which is the ancestor of:
[(u'Photos', u'NOUN', u'nsubj')]
How can I do this?
You can traverse the tokens:
import spacy
nlp = spacy.load('en')
text = 'Photos under low lighting are poor, both front and back cameras.'
for token in nlp(text):
    if token.dep_ == 'nsubj':  # Or other forms of subjects / objects
        print(token.lemma_ + "'s are:")
        for a in token.ancestors:
            if a.text == 'are':  # Or however you determine your selection
                for atok in a.children:
                    if atok.dep_ == 'acomp':  # Note, you should look for more than just acomp
                        print(atok.text)
Which outputs (in Python 3):
photo's are:
poor
However, take a look at spaCy's page on dependencies. There is a lot to consider. You could also play around with displaCy (this link is also an example of a similar sentence that has different dependencies).
I hope this at least helps to get you pointed in the right direction!
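If the subject's governing verb is all you need, a shorter route is token.head, which points directly at the syntactic parent. A minimal sketch, not from the answer above; the printed output depends on the model:
import spacy

nlp = spacy.load('en')  # 'en_core_web_sm' on newer spaCy versions
doc = nlp('Photos under low lighting are poor, both front and back cameras.')

for token in doc:
    if token.dep_ == 'nsubj':
        head = token.head  # the verb governing the subject, e.g. "are"
        acomps = [child.text for child in head.children if child.dep_ == 'acomp']
        print(token.text, '->', head.text, '->', acomps)
# Expected (model-dependent): Photos -> are -> ['poor']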

how to read ngrams from a file and then match them with tokens

I want to read n-grams which are saved in a file, and then match each word of those n-grams with the individual tokens in my corpus; where they match, replace the tokens with the n-gram. Let's say I have these bigrams:
painful punishment
worldly life
straight path
Last Day
great reward
severe punishment
clear evidence
What I want to do is read the first bigram, split it, and compare its first word, "painful", with the tokens in my corpus. Where it matches a token, move to the next token and compare that with the next word of the bigram; if it is "punishment", then replace the pair with the single token "painful punishment". I don't know how to convert this logic into code. If anyone can help me, I will be really thankful.
Firstly, this is not really a question for Stack Overflow (it sounds like a homework question), and you can easily discover various methods to accomplish this via Google. I will, however, give you one solution as I need to warm up:
# -*- coding: utf-8 -*-
import traceback, sys, re

'''
Open the bigrams file and load into an array.
Assuming bigrams are cleaned (else, you can do this using the method below).
'''
try:
    with open('bigrams.txt') as bigrams_file:
        bigrams = bigrams_file.read().splitlines()
except Exception:
    print('BIGRAMS LOAD ERROR: ' + str(traceback.format_exc()))
    sys.exit(1)

test_input = 'There is clear good evidence a great reward is in store.'

'''
Clean input method.
'''
def clean_input(text_input):
    text_input = text_input.lower()
    text_input = text_input.strip(' \t\n\r')
    alpha_num_underscore_only = re.compile(r'([^\s\w_])+', re.UNICODE)
    text_input = alpha_num_underscore_only.sub(' ', text_input)
    text_input = re.sub(' +', ' ', text_input)
    return text_input.strip()

test_input_words = test_input.split()
test_input_clean = clean_input(test_input)
test_input_clean_words = test_input_clean.split()

'''
Loop through the test_input bigram by bigram.
If we match one, then increment the index to move onto the next bigram.
This is a quick implementation --- you can modify it for efficiency, and for higher-order n-grams.
'''
output_text = []
skip_index = 0
for i in range(len(test_input_clean_words) - 1):
    if i >= skip_index:
        if ' '.join([test_input_clean_words[i], test_input_clean_words[i+1]]) in bigrams:
            print(test_input_clean_words[i], test_input_clean_words[i+1])
            skip_index = i + 2
            output_text.append('TOKEN_' + '_'.join([test_input_words[i], test_input_words[i+1]]).upper())
        else:
            skip_index = i + 1
            output_text.append(test_input_words[i])
output_text.append(test_input_words[len(test_input_clean_words) - 1])
print(' '.join(output_text))
Input:
There is clear good evidence a great reward is in store.
Output:
There is clear good evidence a TOKEN_GREAT_REWARD is in store.
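For what it's worth, NLTK itself ships a multi-word expression tokenizer that does exactly this kind of merging. A minimal sketch; the file name is an assumption and the separator is up to you:
from nltk.tokenize import MWETokenizer

# one bigram per line, e.g. "painful punishment"
with open('bigrams.txt') as f:
    mwes = [tuple(line.split()) for line in f.read().splitlines() if line.strip()]

tokenizer = MWETokenizer(mwes, separator=' ')
tokens = 'there is clear evidence a great reward is in store'.split()
print(tokenizer.tokenize(tokens))
# ['there', 'is', 'clear evidence', 'a', 'great reward', 'is', 'in', 'store']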

Going through a list individually

I have very limited coding background except for some Ruby, so if there's a better way of doing this, please let me know!
Essentially I have a .txt file full of words. I want to import the .txt file and turn it into a list. Then, I want to take the first item in the list, assign it to a variable, and use that variable in an external request that sends off to get the definition of the word. The definition is returned, and tucked into a different .txt file. Once that's done, I want the code to grab the next item in the list and do it all again until the list is exhausted.
Below is my code in progress to give an idea of where I'm at. I'm still trying to figure out how to iterate through the list correctly, and I'm having a hard time interpreting the documentation.
Sorry in advance if this was already asked! I searched, but couldn't find anything that specifically answered my issue.
from __future__ import print_function
import requests
import urllib2, urllib
from bs4 import BeautifulSoup
lines = []
with open('words.txt') as f:
    lines = f.readlines()

for each in lines:
    wordlist = open('test.txt', 'a')
    word = None  # figure out how to get items from list and assign them here
    url = 'http://services.aonaware.com/DictService/Default.aspx?action=define&dict=wn&query=%s' % word
    # print url and make sure it's correct
    html = urllib.urlopen(url).read()
    # print html (deprecated)
    soup = BeautifulSoup(html)
    visible_text = soup.find('pre')(text=True)[0]
    print(visible_text, file=wordlist)
Keep everything in a loop, like this:
with open('test.txt', 'a') as wordlist:
    for word in lines:
        url = 'http://services.aonaware.com/DictService/Default.aspx?action=define&dict=wn&query=%s' % word
        print(url)  # print url and make sure it's correct
        html = urllib.urlopen(url).read()
        soup = BeautifulSoup(html)
        visible_text = soup.find('pre')(text=True)[0]
        wordlist.write("{0}\n".format(visible_text))
Secondly, some suggestions:
f.readlines() won't discard the trailing \n, so I would use f.read().splitlines():
lines = f.read().splitlines()
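A quick illustration of the difference, assuming a words.txt containing cat and dog on separate lines:
>>> open('words.txt').readlines()
['cat\n', 'dog\n']
>>> open('words.txt').read().splitlines()
['cat', 'dog']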
You don't need to initialize the lines list with [], as you are forming the list in one shot and assigning it to lines. You only need to initialize a list when you plan to append() to it. So, the line below isn't needed:
lines = []
You can handle the KeyError as follows (wrapped in a helper function here so that return is valid):
def find_pre_text(soup):  # illustrative helper name
    try:
        value = soup.find('pre', text=True)[0]
        return value
    except KeyError:
        return None
I also show how you can use the Python requests library to retrieve the raw HTML page. This lets us easily check the status code to see whether the retrieval was successful. You can replace the relevant urllib lines with it if you like.
You can install requests in the command line using pip: pip install requests
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import re
import requests
import urllib2, urllib
from bs4 import BeautifulSoup

def get_html_with_urllib(word):
    url = "http://services.aonaware.com/DictService/Default.aspx?action=define&dict=wn&query={word}".format(word=word)
    html = urllib.urlopen(url).read()
    return html

def get_html(word):
    url = "http://services.aonaware.com/DictService/Default.aspx?action=define&dict=wn&query={word}".format(word=word)
    response = requests.get(url)
    # Something bad happened
    if response.status_code != 200:
        return ""
    # Did not get back html
    if not response.headers["Content-Type"].startswith("text/html"):
        return ""
    html = response.content
    return html

def format_definitions(raw_definitions_text):
    # Get individual lines in definitions text
    parts = raw_definitions_text.split('\n')
    # Convert to str, remove extra spaces on the left,
    # and add one space at the end for later joining with the next line
    parts = map(lambda x: str(x).lstrip() + ' ', parts)
    result = []
    current = ""
    for p in parts:
        if re.search(r"\w*[0-9]+:", p):
            # Start of a new definition: some chars followed by <number>:
            # Save previous lines
            result.append(current.replace('\n', ' '))
            # Set start of current line
            current = p
        else:
            # Continue line
            current += p
    result.append(current)
    return '\n'.join(result)

def get_definitions(word):
    # Uncomment this to use requests:
    # html = get_html(word)
    # if not html:  # Could not get definition
    #     return None
    html = get_html_with_urllib(word)
    soup = BeautifulSoup(html, "html.parser")
    # Get block containing definition
    definitions = soup.find("pre").get_text()
    definitions = format_definitions(definitions)
    return definitions

def batch_query(input_filepath):
    with open(input_filepath) as infile:
        for word in infile:
            word = word.strip()  # Remove spaces from both ends
            definitions = get_definitions(word)
            if not definitions:
                print("Could not retrieve definitions for {word}".format(word=word))
                continue
            print("Definition for {word} is: ".format(word=word))
            print(definitions)

def main():
    input_filepath = sys.argv[1]  # Alternatively, change this to a file containing words
    batch_query(input_filepath)

if __name__ == "__main__":
    main()
Output:
Definition for cat is:
cat
n 1: feline mammal usually having thick soft fur and being unable to roar; domestic cats; wildcats [syn: true cat]
2: an informal term for a youth or man; "a nice guy"; "the guy's only doing it for some doll" [syn: guy, hombre, bozo]
3: a spiteful woman gossip; "what a cat she is!"
4: the leaves of the shrub Catha edulis which are chewed like tobacco or used to make tea; has the effect of a euphoric stimulant; "in Yemen kat is used daily by 85% of adults" [syn: kat, khat, qat, quat, Arabian tea, African tea]
5: a whip with nine knotted cords; "British sailors feared the cat" [syn: cat-o'-nine-tails]
6: a large vehicle that is driven by caterpillar tracks; frequently used for moving earth in construction and farm work [syn: Caterpillar]
7: any of several large cats typically able to roar and living in the wild [syn: big cat]
8: a method of examining body organs by scanning them with X rays and using a computer to construct a series of cross-sectional scans along a single axis [syn: computerized tomography, computed tomography, CT, computerized axial tomography, computed axial tomography]
v 1: beat with a cat-o'-nine-tails
2: eject the contents of the stomach through the mouth; "After drinking too much, the students vomited"; "He purged continuously"; "The patient regurgitated the food we gave him last night" [syn: vomit, vomit up, purge, cast, sick, be sick, disgorge, regorge, retch, puke, barf, spew, spue, chuck, upchuck, honk, regurgitate, throw up] [ant: keep down] [also: catting, catted]
Definition for dog is:
dog
n 1: a member of the genus Canis (probably descended from the common wolf) that has been domesticated by man since prehistoric times; occurs in many breeds; "the dog barked all night" [syn: domestic dog, Canis familiaris]
2: a dull unattractive unpleasant girl or woman; "she got a reputation as a frump"; "she's a real dog" [syn: frump]
3: informal term for a man; "you lucky dog"
4: someone who is morally reprehensible; "you dirty dog" [syn: cad, bounder, blackguard, hound, heel]
5: a smooth-textured sausage of minced beef or pork usually smoked; often served on a bread roll [syn: frank, frankfurter, hotdog, hot dog, wiener, wienerwurst, weenie]
6: a hinged catch that fits into a notch of a ratchet to move a wheel forward or prevent it from moving backward [syn: pawl, detent, click]
7: metal supports for logs in a fireplace; "the andirons were too hot to touch" [syn: andiron, firedog, dog-iron] v : go after with the intent to catch; "The policeman chased the mugger down the alley"; "the dog chased the rabbit" [syn: chase, chase after, trail, tail, tag, give chase, go after, track] [also: dogging, dogged]
Definition for car is:
car
n 1: 4-wheeled motor vehicle; usually propelled by an internal combustion engine; "he needs a car to get to work" [syn: auto, automobile, machine, motorcar]
2: a wheeled vehicle adapted to the rails of railroad; "three cars had jumped the rails" [syn: railcar, railway car, railroad car]
3: a conveyance for passengers or freight on a cable railway; "they took a cable car to the top of the mountain" [syn: cable car]
4: car suspended from an airship and carrying personnel and cargo and power plant [syn: gondola]
5: where passengers ride up and down; "the car was on the top floor" [syn: elevator car]
