Python: extract data from a text file using regex

I want to extract the firm name (Samsung India Electronics Pvt. Ltd.) from my text file; it appears on the line after "Firm Name". I have extracted some other fields with the code below, but I am not able to extract the firm name because I am new to Python and regex.
import re

hand = open(r'C:\Users\sachin.s\Downloads\wordFile_Billing_PrintDocument_7528cc93-3644-4e38-a7b3-10f721fa2049.txt')
copy = False
for line in hand:
    line = line.rstrip()
    if re.search('Order Number\S*: [0-9.]+', line):
        print(line)
    if re.search('Invoice No\S*: [0-9.]+', line):
        print(line)
    if re.search('Invoice Date\S*: [0-9.]+', line):
        print(line)
    if re.search('PO No\S*: [0-9.]+', line):
        print(line)
Firm Name: Address:
Samsung India Electronics Pvt. Ltd.
Regd Office: 6th Floor, DLF Centre, Sansad Marg, New Delhi-110001
SAMSUNG INDIA ELECTRONICS PVT LTD, MEDCHAL MANDAL HYDERABAD
RANGA REDDY DISTRICT HYDERABAD TELANGANA 501401
Phone: 1234567
Fax No:
Branch: S5S2 - [SIEL]HYDERABAD
Order Number: 1403543436
Currency: INR
Invoice No: 36S2I0030874
Invoice Date: 15.12.2018
PI No: 5929947652

Use regex:
import re
data = """
Firm Name: Address:
Samsung India Electronics Pvt. Ltd.
Regd Office: 6th Floor, DLF Centre, Sansad Marg, New Delhi-110001
SAMSUNG INDIA ELECTRONICS PVT LTD, MEDCHAL MANDAL HYDERABAD
RANGA REDDY DISTRICT HYDERABAD TELANGANA 501401 Phone: 1234567 Fax No: Branch: S5S2 - [SIEL]HYDERABAD
Order Number: 1403543436
Currency: INR
Invoice No: 36S2I0030874
Invoice Date: 15.12.2018
PI No: 5929947652
"""
result = re.findall('Address:(.*)Regd', data, re.MULTILINE|re.DOTALL)[0].strip()
Output:
Samsung India Electronics Pvt. Ltd.
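If you prefer to stay with the line-by-line loop from the question, you can also set a flag when the "Firm Name" line appears and grab the next line. A minimal sketch on the sample data above:

```python
sample = """Firm Name: Address:
Samsung India Electronics Pvt. Ltd.
Regd Office: 6th Floor, DLF Centre, Sansad Marg, New Delhi-110001
"""

firm_name = None
grab_next = False
for line in sample.splitlines():
    if grab_next:
        # the firm name is on the line right after "Firm Name"
        firm_name = line.strip()
        break
    if line.startswith('Firm Name'):
        grab_next = True

print(firm_name)  # Samsung India Electronics Pvt. Ltd.
```

This avoids depending on the "Regd" marker and slots straight into the existing `for line in hand:` loop.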

Related

Web scraping table filtering results

I'm using Python to Web Scrape a table of data found here. Specifically, I want to pull the business name, url, owners name, street, city, and phone. After being run through Beautiful Soup and split the code to filter appears as:
['\\\', \\\' href="?listingid=9758&profileid=217Y3Q544Y&action=uweb&url=http%3a%2f%2fwww.jpspa.com" target="_BLANK"', "Johnson Price Sprinkle PA', '/a", "', '/b", "', '/td", "', '/tr", "', '/table", "', '/td", "', '/tr", '', 'tr class="GeneralBody"', '', 'td bgcolor="#808080" height="1"', '', 'img border="0" height="1" src="images/dot_clear.gif" width="1"/', "', '/td", "', '/tr", "', '/table", "', '/td", "', '/tr", '', 'tr class="GeneralBody"', '', 'td align="left" valign="top" width="90%"', 'Maria Pilos', "', '', '79 Woodfin Place, Suite 300", "', '', 'Asheville, NC 28801", "', '', '", 'b', "Phone:', '/b", ' **(828) 254-2374**', "', '', '", 'b', "Fax:', '/b", " (828) 252-9994', '\', \'", '\\\', \\\' href="DirectoryEmailForm.aspx?listingid=9758"', "Send Email', '/a", "', '/td", '', 'td align="right" rowspan="3" valign="top" width="10%"', '', 'span style="font-size: 8pt"', '\\\', \\\' href="?, '!--..End Listing--", '', "/td']<
I bolded the values I want to return and I identified their position in the code. To filter them the code is below. Temp_array is the code above to filter, temp_count is the position in the array, and business_listing is the array I'm appending the value to when found. Basically when the temp_count == the position of the value in the array, it appends that value to the array.
temp_count = 0
for i in temp_array:
    if temp_count == 0:
        business_listings.append(i)
        temp_count += 1
    elif temp_count == 2:
        business_listings.append(i)
        temp_count += 1
    elif temp_count == 19:
        business_listings.append(i)
        temp_count += 1
    elif temp_count == 20:
        business_listings.append(i)
        temp_count += 1
    elif temp_count == 23:
        business_listings.append(i)
        temp_count += 1
    elif temp_count == 27:
        business_listings.append(i)
        temp_count += 1
    elif temp_count == 42:
        business_listings.append(i)
        temp_count += 1
    else:
        count += 1
The output is as follows:
['\\\', \\\' href="?listingid=9758&profileid=2B713K5Z48&action=uweb&url=http%3a%2f%2fwww.jpspa.com" target="_BLANK"']
and only filters the first 2 values or won't filter anything.
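As a side note, the index-matching pattern above can be written much more compactly with enumerate and a set of wanted positions, with no manual counter to get out of sync (temp_array here is a stand-in list, not the real scraped data):

```python
# Stand-in for the parsed fragments from the question.
temp_array = ['a', 'b', 'c', 'd', 'e']
wanted = {0, 2, 4}  # positions of the values to keep

# enumerate pairs each item with its index, so the counter bug cannot occur
business_listings = [item for i, item in enumerate(temp_array) if i in wanted]
print(business_listings)  # ['a', 'c', 'e']
```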
This script will print information about various businesses:
import requests
from bs4 import BeautifulSoup

url = 'https://web.ashevillechamber.org/cwt/external/wcpages/wcdirectory/Directory.aspx?CategoryID=1242&Title=Accounting++and++Bookkeeping&AdKeyword=Accounting++and++Bookkeeping'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

for b in soup.select('td[bgcolor="#E6E6E6"] b'):
    business_name = b.text
    business_url = b.a['href'] if b.a else '-'
    owner = b.find_next('td', width="90%").contents[0]
    addr, current = [], owner.find_next(text=True)
    while not current.find_parent('b'):
        addr.append(current.strip())
        current = current.find_next(text=True)
    addr = '\n'.join(addr)
    phone = current.find_next(text=True).strip()
    print('Business Name :', business_name)
    print('Business URL :', business_url)
    print('Owner :', owner)
    print('Phone :', phone)
    print('Address:')
    print(addr)
    print('-' * 80)
Prints:
Business Name : Johnson Price Sprinkle PA
Business URL : ?listingid=9758&profileid=2D7R3B5E4N&action=uweb&url=http%3a%2f%2fwww.jpspa.com
Owner : Maria Pilos
Phone : (828) 254-2374
Address:
79 Woodfin Place, Suite 300
Asheville, NC 28801
--------------------------------------------------------------------------------
Business Name : Leah B. Noel, CPA, PC
Business URL : ?listingid=9656&profileid=549S620J3J&action=uweb&url=http%3a%2f%2fwww.lbnoelcpa.com%2f
Owner : Ms. Leah Noel
Phone : 828-333-4529
Address:
14 S. Pack Square #503
Asheville, NC 28801
--------------------------------------------------------------------------------
Business Name : Worley, Woodbery, & Associates, PA
Business URL : ?listingid=9661&profileid=3L7R304J8X&action=uweb&url=http%3a%2f%2fwww.worleycpa.com%2f
Owner : Mr. David Worley
Phone : (828) 271-7997
Address:
7 Orchard Street, Ste. 202
Asheville, NC 28801
--------------------------------------------------------------------------------
Business Name : Peridot Consulting, Inc.
Business URL : ?listingid=14005&profileid=7L724E5W7E&action=uweb&url=http%3a%2f%2fwww.PeridotConsultingInc.com
Owner : John Michael Kledis
Phone : (828) 242-6971
Address:
PO Box 8904
Asheville, NC 28804
--------------------------------------------------------------------------------
Business Name : DHG
Business URL : ?listingid=9579&profileid=25711D625I&action=uweb&url=http%3a%2f%2fwww.dhgllp.com%2f
Owner : Adrienne Bernardi
Phone : (828) 254-2254
Address:
PO Box 3049
Asheville, NC 28802
--------------------------------------------------------------------------------
Business Name : Gould Killian CPA Group, P.A.
Business URL : ?listingid=9659&profileid=2P7X216Y66&action=uweb&url=http%3a%2f%2fwww.gk-cpa.com
Owner : Ed Towson
Phone : (828) 258-0363
Address:
100 Coxe Avenue
Asheville, NC 28801
--------------------------------------------------------------------------------
Business Name : Michelle Tracz CPA, CFE, PLLC
Business URL : ?listingid=12921&profileid=610C8H3I7N&action=uweb&url=http%3a%2f%2fwww.michelletraczcpa.com
Owner : Michelle Tracz
Phone : (828) 280-2530
Address:
1238 Hendersonville Rd.
Asheville, NC 28803
--------------------------------------------------------------------------------
Business Name : Burleson & Earley, P.A.
Business URL : ?listingid=10436&profileid=57132N5P9C&action=uweb&url=http%3a%2f%2fwww.burlesonearley.com%2f
Owner : Bronwyn Burleson, CPA
Phone : (828) 251-2846
Address:
902 Sand Hill Road
Asheville, NC 28806
--------------------------------------------------------------------------------
Business Name : Carol L. King & Associates, P.A.
Business URL : ?listingid=10439&profileid=2Z8C7I0B4X&action=uweb&url=http%3a%2f%2fwww.clkcpa.com
Owner : Carol King
Phone : (828) 258-2323
Address:
40 North French Broad Avenue
Asheville, NC 28801
--------------------------------------------------------------------------------
Business Name : Goldsmith Molis & Gray
Business URL : ?listingid=12638&profileid=6C8D2C7F55&action=uweb&url=http%3a%2f%2fwww.gmg-cpa.com
Owner : Allen Gray
Phone : (828) 281-3161
Address:
32 Orange St.
Asheville, NC 28801
--------------------------------------------------------------------------------
Business Name : Corliss & Solomon, PLLC
Business URL : ?listingid=12407&profileid=6T7Y798S1R&action=uweb&url=http%3a%2f%2fwww.candspllc.com
Owner : Slater Solomon
Phone : (828) 236-0206
Address:
242 Charlotte St., Suite 1
Asheville, NC 28801
--------------------------------------------------------------------------------
Business Name : Mountain BizWorks
Business URL : ?listingid=12733&profileid=2L9E9G6A1S&action=uweb&url=http%3a%2f%2fwww.mountainbizworks.org
Owner : Matthew Raker
Phone : (828) 253-2834
Address:
153 South Lexington Ave.
Asheville, NC 28801
--------------------------------------------------------------------------------
Business Name : LeBlanc CPA Limited
Business URL : -
Owner : Leslie LeBlanc
Phone : (828) 225-4940
Address:
218 Broadway
Asheville, NC 28801-2347
--------------------------------------------------------------------------------
Business Name : Bolick & Associates, PA, CPA's
Business URL : -
Owner : Alan E Bolick, CPA
Phone : (828) 253-4692
Address:
Central Office Park Suite 104
56 Central Avenue
Asheville, NC 28801
--------------------------------------------------------------------------------
EDIT: To parse URLs:
import requests
from bs4 import BeautifulSoup
from urllib.parse import unquote

url = 'https://web.ashevillechamber.org/cwt/external/wcpages/wcdirectory/Directory.aspx?CategoryID=1242&Title=Accounting++and++Bookkeeping&AdKeyword=Accounting++and++Bookkeeping'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

for b in soup.select('td[bgcolor="#E6E6E6"] b'):
    business_name = b.text
    business_url = b.a['href'] if b.a else '-'
    owner = b.find_next('td', width="90%").contents[0]
    addr, current = [], owner.find_next(text=True)
    while not current.find_parent('b'):
        addr.append(current.strip())
        current = current.find_next(text=True)
    addr = '\n'.join(addr)
    phone = current.find_next(text=True).strip()
    print('Business Name :', business_name)
    print('Business URL :', unquote(business_url).rsplit('=', maxsplit=1)[-1])
    print('Owner :', owner)
    print('Phone :', phone)
    print('Address:')
    print(addr)
    print('-' * 80)
Prints:
Business Name : Johnson Price Sprinkle PA
Business URL : http://www.jpspa.com
Owner : Maria Pilos
Phone : (828) 254-2374
Address:
79 Woodfin Place, Suite 300
Asheville, NC 28801
--------------------------------------------------------------------------------
Business Name : Leah B. Noel, CPA, PC
Business URL : http://www.lbnoelcpa.com/
Owner : Ms. Leah Noel
Phone : 828-333-4529
Address:
14 S. Pack Square #503
Asheville, NC 28801
--------------------------------------------------------------------------------
...and so on.
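As an alternative to the unquote/rsplit trick, the url query parameter can be pulled out with urllib.parse.parse_qs, which handles the percent-decoding itself and does not break if another parameter containing '=' is ever appended. A sketch using one of the hrefs from the output above:

```python
from urllib.parse import urlparse, parse_qs

href = '?listingid=9758&profileid=217Y3Q544Y&action=uweb&url=http%3a%2f%2fwww.jpspa.com'
# parse_qs returns a dict of lists and decodes %3a/%2f automatically
params = parse_qs(urlparse(href).query)
business_url = params['url'][0]
print(business_url)  # http://www.jpspa.com
```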

Parsing through HTML in a dictionary

I'm trying to pull table data from the following website: https://msih.bgu.ac.il/md-program/residency-placements/
While there are no table tags, I found that each segment of the table sits in a div with class accord-con.
I made a dictionary where the keys are the graduation years (i.e. 2019, 2018, etc.) and the values are the HTML from each div with class accord-con.
I'm stuck on how to parse the HTML within the dictionary. My goal is to have separate lists of the specialty, hospital, and location for each year, but I don't know how to move forward.
Below is my working code:
import numpy as np
import bs4 as bs
from bs4 import BeautifulSoup
import urllib.request
import pandas as pd

sauce = urllib.request.urlopen('https://msih.bgu.ac.il/md-program/residency-placements/').read()
soup = bs.BeautifulSoup(sauce, 'lxml')
headers = soup.find_all('div', class_={'accord-head'})
grad_yr_list = []
for header in headers:
    grad_yr_list.append(header.h2.text[-4:])
rez_classes = soup.find_all('div', class_={'accord-con'})
data_dict = dict(zip(grad_yr_list, rez_classes))
Here is a sample of what my dictionary looks like:
{'2019': <div class="accord-con"><h4>Anesthesiology</h4><ul><li>University at Buffalo School of Medicine, Buffalo, NY</li></ul><h4>Emergency Medicine</h4><ul><li>Aventura Hospital, Aventura, Fl</li></ul><h4>Family Medicine</h4><ul><li>Louisiana State University School of Medicine, New Orleans, LA</li><li>UT St Thomas Hospitals, Murfreesboro, TN</li><li>Sea Mar Community Health Center, Seattle, WA</li></ul><h4>Internal Medicine</h4><ul><li>Oregon Health and Science University, Portland, OR</li><li>St Joseph Hospital, Denver, CO </li></ul><h4>Obstetrics-Gynecology</h4><ul><li>Jersey City Medical Center, Jersey City, NJ</li><li>New York Presbyterian Brooklyn Methodist Hospital, Brooklyn, NY</li></ul><h4>Pediatrics</h4><ul><li>St Louis Children’s Hospital, St Louis, MO</li><li>University of Maryland Medical Center, Baltimore, MD</li><li>St Christopher’s Hospital, Philadelphia, PA</li></ul><h4>Surgery</h4><ul><li>Mountain Area Health Education Center, Asheville, NC</li></ul><p></p></div>,
'2018': <div class="accord-con"><h4>Anesthesiology</h4><ul><li>NYU School of Medicine, New York, NY</li></ul><h4>Emergency Medicine</h4><ul><li>Kent Hospital, Warwick, Rhode Island</li><li>University of Connecticut School of Medicine, Farmington, CT</li><li>University of Texas Health Science Center at San Antonio, San Antonio, TX</li><li>Vidant Medical Center East Carolina University, Greenville, NC</li></ul><h4>Family Medicine</h4><ul><li>University of Kansas Medical Center, Wichita, KS</li><li>Ellis Hospital, Schenectady, NY</li><li>Harrison Medical Center, Seattle, WA</li><li>St Francis Hospital, Wilmington, DE </li><li>University of Virginia, Charlottesville, VA</li><li>Valley Medical Center, Renton, WA</li></ul><h4>Internal Medicine</h4><ul><li>Oregon Health and Science University, Portland, OR</li><li>Virginia Commonwealth University Health Systems, Richmond, VA</li><li>University of Chicago Medical Center, Chicago, IL</li></ul><h4>Obstetrics-Gynecology</h4><ul><li>St Francis Hospital, Hartford, CT</li></ul><h4>Pediatrics</h4><ul><li>Case Western University Hospitals Cleveland Medical Center, Cleveland, OH</li><li>Jersey Shore University Medical Center, Neptune City, NJ</li><li>University of Maryland Medical Center, Baltimore, MD</li><li>University of Virginia, Charlottesville, VA</li><li>Vidant Medical Center East Carolina University, Greenville, NC</li></ul><h4>Preliminary Medicine Neurology</h4><ul><li>Howard University Hospital, Washington, DC</li></ul><h4>Preliminary Medicine Radiology</h4><ul><li>Maimonides Medical Center, Bronx, NY</li></ul><h4>Preliminary Medicine Surgery</h4><ul><li>Providence Park Hospital, Southfield, MI</li></ul><h4>Psychiatry</h4><ul><li>University of Maryland Medical Center, Baltimore, MI</li></ul><p></p></div>,
My ultimate goal is to pull this data into a pandas dataframe with the following columns: grad year, specialty, hospital, location
Your code is quite close to the end result. Once you have paired the years with the student placement data, simply apply an extraction function to the latter:
from bs4 import BeautifulSoup as soup
import re
from selenium import webdriver

_d = webdriver.Chrome('/path/to/chromedriver')
_d.get('https://msih.bgu.ac.il/md-program/residency-placements/')
d = soup(_d.page_source, 'html.parser')

def placement(block):
    r = block.find_all(re.compile('ul|h4'))
    return {r[i].text: [b.text for b in r[i+1].find_all('li')] for i in range(0, len(r)-1, 2)}

result = {i.h2.text: placement(i) for i in d.find_all('div', {'class': 'accord-head'})}
print(result['Class of 2019'])
Output:
{'Anesthesiology': ['University at Buffalo School of Medicine, Buffalo, NY'], 'Emergency Medicine': ['Aventura Hospital, Aventura, Fl'], 'Family Medicine': ['Louisiana State University School of Medicine, New Orleans, LA', 'UT St Thomas Hospitals, Murfreesboro, TN', 'Sea Mar Community Health Center, Seattle, WA'], 'Internal Medicine': ['Oregon Health and Science University, Portland, OR', 'St Joseph Hospital, Denver, CO\xa0'], 'Obstetrics-Gynecology': ['Jersey City Medical Center, Jersey City, NJ', 'New York Presbyterian Brooklyn Methodist Hospital, Brooklyn, NY'], 'Pediatrics': ['St Louis Children’s Hospital, St Louis, MO', 'University of Maryland Medical Center, Baltimore, MD', 'St Christopher’s Hospital, Philadelphia, PA'], 'Surgery': ['Mountain Area Health Education Center, Asheville, NC']}
Note: I ended up using selenium because, for me, the HTML response returned by requests.get did not include the rendered student placement data.
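To reach the stated end goal (a DataFrame with grad year, specialty, hospital, and location columns), the nested dict can be flattened into one row per placement. Splitting each entry at the first comma to separate hospital from location is an assumption that happens to hold for this data:

```python
# Small stand-in for the `result` dict built above.
result = {'Class of 2019': {
    'Anesthesiology': ['University at Buffalo School of Medicine, Buffalo, NY'],
    'Surgery': ['Mountain Area Health Education Center, Asheville, NC'],
}}

rows = []
for year, placements in result.items():
    for specialty, hospitals in placements.items():
        for entry in hospitals:
            # split "hospital, city, state" at the first comma only
            hospital, _, location = entry.partition(', ')
            rows.append({'grad_year': year[-4:], 'specialty': specialty,
                         'hospital': hospital, 'location': location})

print(rows[0])
```

The flat `rows` list can then be handed straight to `pandas.DataFrame(rows)`.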
Your dictionary already holds BeautifulSoup elements (bs4.element.Tag), so you don't have to parse them again.
You can use find(), find_all(), etc. on them directly:
for key, value in data_dict.items():
    print(type(value), key, value.find('h4').text)
Result
<class 'bs4.element.Tag'> 2019 Anesthesiology
<class 'bs4.element.Tag'> 2018 Anesthesiology
<class 'bs4.element.Tag'> 2017 Anesthesiology
<class 'bs4.element.Tag'> 2016 Emergency Medicine
<class 'bs4.element.Tag'> 2015 Emergency Medicine
<class 'bs4.element.Tag'> 2014 Anesthesiology
<class 'bs4.element.Tag'> 2013 Anesthesiology
<class 'bs4.element.Tag'> 2012 Emergency Medicine
<class 'bs4.element.Tag'> 2011 Emergency Medicine
<class 'bs4.element.Tag'> 2010 Dermatology
<class 'bs4.element.Tag'> 2009 Emergency Medicine
<class 'bs4.element.Tag'> 2008 Family Medicine
<class 'bs4.element.Tag'> 2007 Anesthesiology
<class 'bs4.element.Tag'> 2006 Triple Board (Pediatrics/Adult Psychiatry/Child Psychiatry)
<class 'bs4.element.Tag'> 2005 Family Medicine
<class 'bs4.element.Tag'> 2004 Anesthesiology
<class 'bs4.element.Tag'> 2003 Emergency Medicine
<class 'bs4.element.Tag'> 2002 Family Medicine
Full code:
import urllib.request
import bs4 as bs

sauce = urllib.request.urlopen('https://msih.bgu.ac.il/md-program/residency-placements/').read()
soup = bs.BeautifulSoup(sauce, 'lxml')
headers = soup.find_all('div', class_={'accord-head'})
grad_yr_list = []
for header in headers:
    grad_yr_list.append(header.h2.text[-4:])
rez_classes = soup.find_all('div', class_={'accord-con'})
data_dict = dict(zip(grad_yr_list, rez_classes))
for key, value in data_dict.items():
    print(type(value), key, value.find('h4').text)
You can go to pandas once you have the soup, then parse out the necessary information:
df = pd.DataFrame(soup)
df['grad_year'] = df[0].map(lambda x: x.text[-4:])
df['specialty'] = df[1].map(lambda x: [i.text for i in x.find_all('h4')])
df['hospital'] = df[1].map(lambda x: [i.text for i in x.find_all('li')])
df['location'] = df[1].map(lambda x: [''.join(i.text.split(',')[1:]) for i in x.find_all('li')])
You will have to do some pandas magic after that
I don't know pandas. The following code can get the data in the table. I don't know if this will meet your needs.
import requests
from simplified_scrapy.simplified_doc import SimplifiedDoc

url = 'https://msih.bgu.ac.il/md-program/residency-placements/'
response = requests.get(url)
doc = SimplifiedDoc(response.text)
divs = doc.getElementsByClass('accord-head')
datas = {}
for div in divs:
    grad_year = div.h2.text[-4:]
    rez_classe = div.getElementByClass('accord-con')
    h4s = rez_classe.h4s  # get the h4 headings
    for h4 in h4s:
        if not h4.next:
            continue
        lis = h4.next.lis
        specialty = h4.text
        hospital = [li.text for li in lis]
        datas[grad_year] = {'specialty': specialty, 'hospital': hospital}
for data in datas:
    print(data, datas[data])

How to extract data from a webpage's body where we have dynamic google ads in between the content using scrapy

I am trying to pick the part of the body from "The Lalit Kala Akademi Scholarship 2017 - 2018 from the..."
to
"Email: lka@lalitkala.gov.in; lalitkala1954@yahoo.in Website: lalitkala.gov.in"
But my output is mostly "\n" and "\t". I guess this is happening because of the ad blocks in between. Any idea how to solve this?
import scrapy

class MySpider(scrapy.Spider):
    name = "test"
    start_urls = [
        'http://www.indiaeducation.net/scholarships/lalit-kala-akademi-scholarship.aspx',
    ]

    def parse(self, response):
        for scholarships in response.xpath('//*[@id="wrapper"]'):
            yield {
                'text': scholarships.xpath('//*[@id="artBody"]/text()').extract(),
            }
Something like this?
>>> u" ".join(line.strip() for line in response.xpath('//div[@id="artBody"]//*[not(self::div)][not(self::script)]/text()').extract())
u'The Lalit Kala Akademi Scholarship 2017 - 2018 from the National Academy of Art, Delhi is awarded to learners who have passion for the visual arts. The scholarship is given by Lalit Kala Akademi, an Indian governmental institution for promotion and innovation of the visual arts. Presently 40 scholarships are offered by the National Academy of Art, however, this number may vary depending upon availability of resources. Scholarship Through this scholarship the artists are given a work space to improve their skills and to develop new ideas within their field of visual art. Visual artists ( in disciplines such as graphic, sculpture, painting, ceramics), art historians and art critics may apply. The scholarship is worth Rs. 10,000 per month for a one-year period. Important dates Application available: available online now at lalitkala.gov.in Last date for the filled application to reach the Akademi at New Delhi: 25 May, 2015 Eligibility criteria The Lalit Kala Akademi , New Delhi is looking for young rising artists in the visual arts, as well as art historians and art critics in the age group of 21-35 years Application procedure Prospective candidates should fill out the application form and include a Rs. 100 entry fee through postal order/ bank draft payable to Secretary, Lalit Kala Akademi drawn on New Delhi branch . A forum of senior artists decides who will be selected for the award. Selection is partly based on a published article or photographs of exhibits. The candidates awarded Lalit Kala Akademy Scholarship or National Academy of Art Scholarship need to work with regional centres of the Akademi listed below: Bhubaneswar Guwahati Kolkata Lucknow Delhi Shimla Contact details Lalit Kala Akademi Rabindra Bhavan, 35, Ferozeshah Road , New Delhi-110001 Telephone: 011 - 23009200 Fax : 011 - 23009292 Email: lka#lalitkala.gov.in; lalitkala1954#yahoo.in Website: lalitkala.gov.in'

Parsing a txt file with custom tags

I'm trying to parse a txt file from EDGAR. Different filing types come in different report formats even though they are all txt files. I have no problem using BeautifulSoup to parse XML reports, but I came across this type of report:
<SEC-DOCUMENT>0001047469-13-001017.txt : 20130214
<SEC-HEADER>0001047469-13-001017.hdr.sgml : 20130214
<ACCEPTANCE-DATETIME>20130214060031
ACCESSION NUMBER: 0001047469-13-001017
CONFORMED SUBMISSION TYPE: 13F-HR
PUBLIC DOCUMENT COUNT: 1
CONFORMED PERIOD OF REPORT: 20121231
FILED AS OF DATE: 20130214
DATE AS OF CHANGE: 20130214
EFFECTIVENESS DATE: 20130214
FILER:
COMPANY DATA:
COMPANY CONFORMED NAME: BILL & MELINDA GATES FOUNDATION TRUST
CENTRAL INDEX KEY: 0001166559
IRS NUMBER: 911663695
STATE OF INCORPORATION: WA
FISCAL YEAR END: 1231
FILING VALUES:
FORM TYPE: 13F-HR
SEC ACT: 1934 Act
SEC FILE NUMBER: 028-10098
FILM NUMBER: 13605999
BUSINESS ADDRESS:
STREET 1: 2365 CARILLON POINT
CITY: KIRKLAND
STATE: WA
ZIP: 98033
BUSINESS PHONE: 4258897900
MAIL ADDRESS:
STREET 1: 2365 CARILLON POINT
CITY: KIRKLAND
STATE: WA
ZIP: 98033
FORMER COMPANY:
FORMER CONFORMED NAME: GATES BILL & MELINDA FOUNDATION
DATE OF NAME CHANGE: 20020205
</SEC-HEADER>
<DOCUMENT>
<TYPE>13F-HR
<SEQUENCE>1
<FILENAME>a2212666z13f-hr.txt
<DESCRIPTION>13F-HR
<TEXT>
<Page>
UNITED STATES
SECURITIES AND EXCHANGE COMMISSION
WASHINGTON, D.C. 20549
FORM 13F
FORM 13F COVER PAGE
Report for the Calendar Year or Quarter Ended: December 31, 2012
-----------------------
Check Here if Amendment / /; Amendment Number:
---------
This Amendment (Check only one.): / / is a restatement.
/ / adds new holdings entries.
Institutional Investment Manager Filing this Report:
Name: Bill & Melinda Gates Foundation Trust
-------------------------------------
Address: 2365 Carillon Point
-------------------------------------
Kirkland, WA 98033
-------------------------------------
Form 13F File Number: 28-10098
---------------------
The institutional investment manager filing this report and the person by whom
it is signed hereby represent that the person signing the report is authorized
to submit it, that all information contained herein is true, correct and
complete, and that it is understood that all required items, statements,
schedules, lists, and tables, are considered integral parts of this form.
Person Signing this Report on Behalf of Reporting Manager:
Name: Michael Larson
-------------------------------
Title: Authorized Agent
-------------------------------
Phone: (425) 889-7900
-------------------------------
Signature, Place, and Date of Signing:
/s/ Michael Larson Kirkland, Washington February 14, 2013
------------------------------- -------------------- -----------------
[Signature] [City, State] [Date]
Report Type (Check only one.):
/X/ 13F HOLDINGS REPORT. (Check here if all holdings of this reporting
manager are reported in this report.)
/ / 13F NOTICE. (Check here if no holdings reported are in this report,
and all holdings are reported by other reporting manager(s).)
/ / 13F COMBINATION REPORT. (Check here if a portion of the holdings for this
reporting manager are reported in this report and a portion are reported by
other reporting manager(s).)
<Page>
FORM 13F SUMMARY PAGE
Report Summary:
Number of Other Included Managers: 0
--------------------
Form 13F Information Table Entry Total: 26
--------------------
Form 13F Information Table Value Total: $ 16,788,719
--------------------
(thousands)
List of Other Included Managers:
Provide a numbered list of the name(s) and Form 13F file number(s) of all
institutional investment managers with respect to which this report is filed,
other than the manager filing this report.
NONE
2
<Page>
FORM 13 INFORMATION TABLE
As of December 31, 2012
<Table>
<Caption>
VOTING AUTHORITY
VALUE SHRS OR SH/ PUT/ INVESTMENT OTHER ----------------------
NAME OF ISSUER TITLE OF CLASS CUSIP (x$1000) PRN AMOUNT PRN CALL DISCRETION MANAGERS SOLE SHARED NONE
---------------------------- ---------------- --------- ---------- ------------ --- ---- ---------- -------- ---------- ------ ----
<S> <C> <C> <C> <C> <C> <C> <C> <C> <C> <C> <C>
AUTOLIV INC COM 052800109 8,329 123,600 SH SOLE 123,600
AUTONATION INC COM 05329W102 75,379 1,898,716 SH SOLE 1,898,716
BERKSHIRE HATHAWAY INC DEL CL B NEW 084670702 7,811,199 87,081,373 SH SOLE 87,081,373
BP PLC SPONSORED ADR 055622104 297,018 7,133,000 SH SOLE 7,133,000
CANADIAN NATL RY CO COM 136375102 779,358 8,563,437 SH SOLE 8,563,437
CATERPILLAR INC DEL COM 149123101 919,168 10,260,857 SH SOLE 10,260,857
COCA COLA CO COM 191216100 1,232,573 34,002,000 SH SOLE 34,002,000
COCA COLA FEMSA SAB DE CV SPON ADR REP L 191241108 926,242 6,214,719 SH SOLE 6,214,719
CROWN CASTLE INTL CORP COM 228227104 384,822 5,332,900 SH SOLE 5,332,900
DIAMOND FOODS INC COM 252603105 6,031 441,163 SH SOLE 441,163
ECOLAB INC COM 278865100 313,946 4,366,425 SH SOLE 4,366,425
EXXON MOBIL CORP COM 30231G102 661,576 7,643,858 SH SOLE 7,643,858
FEDEX CORP COM 31428X106 277,453 3,024,999 SH SOLE 3,024,999
FOMENTO ECONOMICO MEXICANO SPON ADR UNITS 344419106 21,953 218,000 SH SOLE 218,000
GRUPO TELEVISA SA SPON ADR REP ORD 40049J206 448,647 16,879,103 SH SOLE 16,879,103
LIBERTY GLOBAL INC COM SER A 530555101 133,508 2,119,515 SH SOLE 2,119,515
LIBERTY GLOBAL INC COM SER C 530555309 41,507 706,507 SH SOLE 706,507
MCDONALDS CORP COM 580135101 870,853 9,872,500 SH SOLE 9,872,500
ORBOTECH LTD ORD M75253100 6,973 823,300 SH SOLE 823,300
PROCTER & GAMBLE CO COM 742718109 101,835 1,500,000 SH SOLE 1,500,000
REPUBLIC SVCS INC COM 760759100 39,596 1,350,000 SH SOLE 1,350,000
SIGNET JEWELERS LIMITED SHS G81276100 9,993 187,130 SH SOLE 187,130
TOYOTA MOTOR CORP SP ADR REP2COM 892331307 14,295 153,300 SH SOLE 153,300
WAL-MART STORES INC COM 931142103 757,558 11,103,000 SH SOLE 11,103,000
WASTE MGMT INC COM 94106L109 628,700 18,633,672 SH SOLE 18,633,672
WILLIS GROUP HOLDINGS PUBLIC SHS G96666105 20,209 602,700 SH SOLE 602,700
---------- ------------
16,788,719 240,235,774
</Table>
</TEXT>
</DOCUMENT>
</SEC-DOCUMENT>
As you can see, this file is just a plain txt file with custom tags.
My question is: how do I target the text within a specific tag? For example, I only need the text inside the TEXT tag from the above txt file.
You can select the text tag and then work on its content:
soup = BeautifulSoup(open("/yourfile.html"), "html.parser")
text_tags = soup.find('text')
for text in text_tags:
    print text
    # work from here
Note: I used html.parser and it already finds the text tag. You may need to change to an XML parser if that suits your needs better.
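Since TEXT is one of the few tags in these filings that actually has a matching closing tag, a plain regex between <TEXT> and </TEXT> is another option that avoids any parser quirks with the unclosed SGML-style tags. A sketch on a trimmed sample (re.DOTALL lets . span newlines):

```python
import re

raw = """<DOCUMENT>
<TYPE>13F-HR
<TEXT>
FORM 13F COVER PAGE
Report for the Calendar Year or Quarter Ended: December 31, 2012
</TEXT>
</DOCUMENT>"""

# non-greedy (.*?) stops at the first </TEXT> in case of multiple documents
match = re.search(r'<TEXT>(.*?)</TEXT>', raw, re.DOTALL)
body = match.group(1).strip() if match else ''
print(body.splitlines()[0])  # FORM 13F COVER PAGE
```

With multiple <DOCUMENT> sections in one filing, re.findall with the same pattern would return the body of each.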

Python: html table content

I am trying to scrape this website, but I keep getting an error when I try to print out just the content of the table.
soup = BeautifulSoup(urllib2.urlopen('http://clinicaltrials.gov/show/NCT01718158').read())
print soup('table')[6].prettify()
for row in soup('table')[6].findAll('tr'):
    tds = row('td')
    print tds[0].string, tds[1].string
IndexError Traceback (most recent call last)
<ipython-input-70-da84e74ab3b1> in <module>()
1 for row in soup('table')[6].findAll('tr'):
2 tds = row('td')
3 print tds[0].string,tds[1].string
4
IndexError: list index out of range
The table has a header row, with <th> header elements rather than <td> cells. Your code assumes there will always be <td> elements in each row, and that fails for the first row.
You could skip the row with not enough <td> elements:
for row in soup('table')[6].findAll('tr'):
    tds = row('td')
    if len(tds) < 2:
        continue
    print tds[0].string, tds[1].string
at which point you get output:
>>> for row in soup('table')[6].findAll('tr'):
...     tds = row('td')
...     if len(tds) < 2:
...         continue
...     print tds[0].string, tds[1].string
...
Responsible Party: Bristol-Myers Squibb
ClinicalTrials.gov Identifier: None
Other Study ID Numbers: AI452-021, 2011‐005409‐65
Study First Received: October 29, 2012
Last Updated: November 7, 2014
Health Authority: None
The last row contains text interspersed with <br/> elements; you could use the element.strings generator to extract all strings and perhaps join them into newlines; I'd strip each string first though:
>>> for row in soup('table')[6].findAll('tr'):
...     tds = row('td')
...     if len(tds) < 2:
...         continue
...     print tds[0].string, '\n'.join(filter(unicode.strip, tds[1].strings))
...
Responsible Party: Bristol-Myers Squibb
ClinicalTrials.gov Identifier: NCT01718158
History of Changes
Other Study ID Numbers: AI452-021, 2011‐005409‐65
Study First Received: October 29, 2012
Last Updated: November 7, 2014
Health Authority: United States: Institutional Review Board
United States: Food and Drug Administration
Argentina: Administracion Nacional de Medicamentos, Alimentos y Tecnologia Medica
France: Afssaps - Agence française de sécurité sanitaire des produits de santé (Saint-Denis)
Germany: Federal Institute for Drugs and Medical Devices
Germany: Ministry of Health
Israel: Israeli Health Ministry Pharmaceutical Administration
Israel: Ministry of Health
Italy: Ministry of Health
Italy: National Bioethics Committee
Italy: National Institute of Health
Italy: National Monitoring Centre for Clinical Trials - Ministry of Health
Italy: The Italian Medicines Agency
Japan: Pharmaceuticals and Medical Devices Agency
Japan: Ministry of Health, Labor and Welfare
Korea: Food and Drug Administration
Poland: National Institute of Medicines
Poland: Ministry of Health
Poland: Ministry of Science and Higher Education
Poland: Office for Registration of Medicinal Products, Medical Devices and Biocidal Products
Russia: FSI Scientific Center of Expertise of Medical Application
Russia: Ethics Committee
Russia: Ministry of Health of the Russian Federation
Spain: Spanish Agency of Medicines
Taiwan: Department of Health
Taiwan: National Bureau of Controlled Drugs
United Kingdom: Medicines and Healthcare Products Regulatory Agency
