How to extract a street address from any classified ad? - python

There are a lot of classified ads that appear in non-HTML formats (paper, plain text, handwritten, etc.) and that advertise houses, automobiles, rentals, leases, flats, and so on. A classified ad, say a flat-rental ad, includes features such as SIZE, AREA, LOCALITY, PRICE, CONTACT INFO, etc.
My question is: how do I extract the street address (the address mentioned in the article, i.e. the LOCALITY) where the advertised property is located?
Is there any solution to this problem using NLTK and Python?
Assume that the source article is a plain text file (.txt).

If the source is in .txt format, regular expressions are probably the best solution.
I don't think it is easy (or even possible) to write a regex that covers all arbitrary kinds of ads, but the more examples you have, the better your search will work.
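A rough sketch of that regex approach, assuming the ad either labels the locality explicitly (e.g. "LOCALITY: ...") or contains a street-like span with a house number; both patterns and the ad.txt file name are illustrative and would need tuning against your own examples:

import re

# A rough sketch, not a general solution: assume the ad either labels the
# locality explicitly ("LOCALITY: ...") or contains a street-like span with a
# house number followed by capitalized street words. Both patterns are
# illustrative and will need tuning against real examples.
LOCALITY_RE = re.compile(r"(?:LOCALITY|Location|Address)\s*[:\-]\s*(.+)", re.IGNORECASE)
STREET_RE = re.compile(
    r"\b\d{1,5}\s+(?:[A-Z][a-z]+\s+){1,4}(?:Street|St|Road|Rd|Avenue|Ave|Lane|Ln)\b"
)

def extract_address(ad_text):
    """Return the first labelled locality or street-like span found in the ad."""
    labelled = LOCALITY_RE.search(ad_text)
    if labelled:
        return labelled.group(1).strip()
    street = STREET_RE.search(ad_text)
    return street.group(0) if street else None

with open("ad.txt", encoding="utf-8") as f:  # hypothetical input file
    print(extract_address(f.read()))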

Related

Inconsistent tags between XBRL files from the SEC (EDGAR)

I'm parsing every XBRL file from the SEC through EDGAR in order to retrieve some data (in JSON format, in Python).
I have no problem parsing those files. My problem lies in the structure of the XBRL files provided by the SEC: I noticed that some companies use certain tags and others don't. Some will use "Revenues" while others won't have any tag pertaining to revenues, and I have the same issue with "ShortTermBorrowings"...
Is there a list of XBRL tags from the SEC that are used by all companies?
Thanks
The short answer is "no", there is not a list of required tags for financial reports made to the SEC (other than some "Document and Entity Information" metadata tags).
This reflects the nature of the underlying financial reports, which are governed by the US GAAP ("Generally Accepted Accounting Principles") accounting standard, which does not prescribe specific data points which must be reported, and as a result the XBRL system does not enforce specific required tags.
In both the examples that you've linked to where Revenue is not tagged, this appears to me to be poor tag choice. I think the best that you can do in this case is to infer that if RevenueNotFromContractWithCustomer is not also tagged, then Revenue == RevenueFromContractWithCustomerExcludingAssessedTax. Such inferences can be informed by the relationships in the US GAAP taxonomy. For example, see the definition of Revenue in the US GAAP taxonomy (and in particular, the "Relationships" tab).
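As a rough illustration, here is a minimal sketch of that fallback, assuming you have already flattened a filing's us-gaap facts into a plain dict keyed by tag name (that layout is an assumption about your own parsing, not an SEC structure):

# A minimal sketch of the fallback described above, assuming `facts` maps
# us-gaap tag name -> reported value for a single filing.
def revenues(facts):
    if "Revenues" in facts:
        return facts["Revenues"]
    # Only infer Revenues when no non-contract revenue component was reported
    # alongside the contract-revenue tag.
    if ("RevenueFromContractWithCustomerExcludingAssessedTax" in facts
            and "RevenueNotFromContractWithCustomer" not in facts):
        return facts["RevenueFromContractWithCustomerExcludingAssessedTax"]
    return None  # no safe inference possible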
Indeed, it is the case that filers use inconsistent tagging. This is one of the main challenges for processing XBRL data across filings.
There is a list of tags for use by all companies, in the US GAAP taxonomy namespace, however this alone is not enough to solve the problem, as (i) companies might still use different tags within this taxonomy, and (ii) companies can create new concepts in their own namespace, aka extension concepts, and sometimes do so even when a US GAAP concept would have been applicable.
But there is good news: Charles Hoffman, CPA solved this problem by providing a fundamental accounting concepts taxonomy, together with mappings and rules to make all filings interoperable. I recommend this tutorial as a starting point.
I would not rely solely on any list of tags the SEC or anyone else provides.
I'd also check the source data for the tags actually being used.
I'd also ask:
How can I create a list of all tags used throughout all SEC EDGAR filings, for each "filing type" (10K, 10Q, Form 3, Form 4, Form 5, Form 13F, etc.)?
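One hedged way to start answering that, sketched below, is to pull the SEC's "company facts" JSON for a filer and count which tags it actually reports; repeating this over many CIKs approximates a usage survey, though it does not break tags down by filing type. The CIK (Apple) and the User-Agent contact string are placeholders:

import requests
from collections import Counter

# A rough sketch of surveying the tags a filer actually reports, via the SEC's
# "company facts" JSON endpoint. The CIK below is just an example and the
# User-Agent string is a placeholder -- the SEC asks for real contact info.
HEADERS = {"User-Agent": "your-name your-email@example.com"}
url = "https://data.sec.gov/api/xbrl/companyfacts/CIK0000320193.json"

facts = requests.get(url, headers=HEADERS, timeout=30).json()["facts"]
tag_counts = Counter()
for taxonomy, tags in facts.items():  # e.g. "us-gaap", "dei"
    for tag in tags:
        tag_counts[f"{taxonomy}:{tag}"] += 1

print(tag_counts.most_common(20))
# Aggregating these counters over many CIKs gives a picture of which tags are
# actually in use across filings (though not broken down by filing type).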

Restrict words in Google Books API

Is there a way to restrict certain words from appearing in a title in the Google Books API?
For example, I want to receive data about fantasy books, but I keep getting books such as "Guide to Literary Agents 2017" in my search. I was wondering if I could exclude some words such as "Literary" from my search (or whether there would be a better way to resolve this problem).
Also, this is my API link:
https://www.googleapis.com/books/v1/volumes?q=subject:fantasy+young%20adult&printType=books&langRestrict=en&maxResults=40&key=APIKey
Yes. In the Books API Developers Guide, I found this:
To exclude entries that match a given term, use the form q=-term.
So, in your example you could try something like
https://www.googleapis.com/books/v1/volumes?q=-Literary+subject:fantasy+young%20adult&printType=books&langRestrict=en&maxResults=40
With this query I didn't see the title Guide to Literary Agents 2017 in the results; I also tried excluding a few other keywords, and it does seem to exclude those titles.
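For reference, here is a minimal sketch of the same request built with Python's requests library, assuming the whole search expression (the -Literary exclusion, the subject: filter, and the keywords) goes into the single q parameter; add the key parameter from the question if you need it:

import requests

# A minimal sketch of the query above built with requests. The whole search
# expression lives in the single q parameter; "-term" excludes a term.
params = {
    "q": "-Literary subject:fantasy young adult",
    "printType": "books",
    "langRestrict": "en",
    "maxResults": 40,
}
resp = requests.get("https://www.googleapis.com/books/v1/volumes", params=params)
for item in resp.json().get("items", []):
    print(item["volumeInfo"].get("title"))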

Extraction of features such as skills and responsibilities from job advertisements using python

I am currently working on a project on information extraction from job advertisements. We extracted the email addresses, telephone numbers, and addresses using regex, but we are finding it difficult to extract features such as the job title, the name of the company, skills, and qualifications. Can anyone advise me on how we could extract them?
We found out that custom entities and custom dictionaries can be used as inputs to extract such attributes. When it comes to skills and responsibilities, since they are sentences or paragraphs, we are finding it difficult to extract them.
We have used spaCy so far; is there a better package or methodology that can be used?
You can use NER, i.e. Named Entity Recognition, to extract the different entities.
Refer to this link for more details:
https://towardsdatascience.com/named-entity-recognition-with-nltk-and-spacy-8c4a7d88e7da
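As a rough sketch of combining a pretrained spaCy model (company names come out under the ORG label) with the custom-dictionary idea from the question, an EntityRuler can hold patterns for job titles and skills; the patterns, labels, and sample sentence below are illustrative assumptions:

import spacy

# A rough sketch, not a full solution: combine a pretrained pipeline (company
# names are tagged ORG) with an EntityRuler holding custom dictionary patterns
# for job titles and skills. The patterns, labels and sample text below are
# illustrative assumptions.
nlp = spacy.load("en_core_web_sm")
ruler = nlp.add_pipe("entity_ruler", before="ner")
ruler.add_patterns([
    {"label": "JOB_TITLE", "pattern": [{"LOWER": "data"}, {"LOWER": "engineer"}]},
    {"label": "SKILL", "pattern": [{"LOWER": "python"}]},
    {"label": "SKILL", "pattern": [{"LOWER": "machine"}, {"LOWER": "learning"}]},
])

doc = nlp("Acme Corp is hiring a Data Engineer with Python and machine learning skills.")
for ent in doc.ents:
    print(ent.text, ent.label_)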

Best method to extract information from unstructured text

My aim is to extract information from old scanned reports and store it in a structured database. I have already extracted the text from these reports using Solr.
All of these are scientific reports, and each has a different structure in terms of content, but they all contain similar information. I want to create a structured database from these reports, with fields such as the name of the company involved in the report, the name of the software involved, the name of the location, the date of the experiment, etc. For each of these fields I have some keywords that should be used for the extraction; for example, for the location information: Location, Place of experiment, Place, Facility, etc. What would be the best way to proceed in this direction?
Also, in some of these files there are no sentences to process. The information is given in a form-like structure, for example:
Location: Canada
Date of the experiment: 1985-05-01.
Which techniques would be best for extracting this information? Also, which software or libraries should I use?
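A minimal sketch of the keyword approach described above, applied to the form-like parts: map each target field to its known label variants and capture whatever follows the label on the same line. The label lists are the ones mentioned in the question; everything else is illustrative:

import re

# Map each target field to its known label variants; the lists below are the
# ones from the question and would be extended per field.
FIELD_LABELS = {
    "location": ["Location", "Place of experiment", "Place", "Facility"],
    "date": ["Date of the experiment", "Date"],
}

def extract_fields(text):
    results = {}
    for field, labels in FIELD_LABELS.items():
        pattern = r"(?:%s)\s*[:\-]\s*(.+)" % "|".join(map(re.escape, labels))
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            results[field] = match.group(1).strip().rstrip(".")
    return results

print(extract_fields("Location: Canada\nDate of the experiment: 1985-05-01."))
# {'location': 'Canada', 'date': '1985-05-01'}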

auto-generate e-commerce tags from item description

We are developing an e-commerce portal that enables users to list their items (name, description, tags) on the site.
However, we realized that users do not understand item tags very well: some of them write arbitrary words and others leave the field blank. So we decided to deal with it, and I thought about using an entity extractor to generate tags. First, I tried to pass this listing to Calais:
I'm a Filipino Male looking for Office Assistant job,with knowledge in MS Word,Excel,Power Point & Internet Browsing,i'm a quick learner with clear & polite communicative skills,immense flexibility in terms of work assignments and work hours,and performing my duties with full dedication,integrity and honesty.
and I got these tags: Religion Belief, Positive psychology, Integrity, Evaluation, Behavior, Psychology, Skill.
Then I tried Stanford NER and got: Excel, Power, Point, &, Internet, Browsing.
After that, I stopped trying these solutions, as I thought they would not fit, and started thinking about an e-commerce-related thesaurus that would contain product/brand names and trade-related terms, so I could use it to filter user-generated posts and find the proper tags, but I couldn't find one.
So, 1st question: did I miss something?
2nd question: are there better scenarios for this (i.e. generating the tags)?
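A minimal sketch of the thesaurus idea described above, using spaCy's PhraseMatcher: match listings against a curated list of trade/product terms and use the hits as tags. The term list here is a tiny illustrative stand-in for a real vocabulary:

import spacy
from spacy.matcher import PhraseMatcher

# Match listings against a curated term list and use the hits as tags.
# The term list is a tiny illustrative stand-in for a real vocabulary.
nlp = spacy.load("en_core_web_sm")
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
terms = ["office assistant", "ms word", "excel", "power point", "internet browsing"]
matcher.add("TAG", [nlp.make_doc(t) for t in terms])

listing = ("I'm a Filipino Male looking for Office Assistant job, with knowledge "
           "in MS Word, Excel, Power Point & Internet Browsing.")
doc = nlp(listing)
tags = {doc[start:end].text.lower() for _, start, end in matcher(doc)}
print(tags)  # e.g. {'office assistant', 'ms word', 'excel', 'power point', ...}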
