Scrape text from a list of different urls using Python - python

I have a list of different URLs which I would like to scrape the text from using Python. So far I've managed to build a script that returns URLs based on a Google Search with keywords; however, I would now like to scrape the content of these URLs. The problem is that I'm currently scraping the ENTIRE website, including the layout/style info, while I only want to scrape the 'visible text'. Ultimately, my goal is to scrape names from all these URLs and store them in a pandas DataFrame. Perhaps even include how often certain names are mentioned, but that is for later. Below is a rather simple start of my code so far:
from urllib.request import Request, urlopen
from bs4 import BeautifulSoup
import requests
from time import sleep
from random import randint
import spacy
import en_core_web_sm
import pandas as pd

url_list = ["https://www.nhtsa.gov/winter-driving-safety",
            "https://www.safetravelusa.com/",
            "https://www.theatlantic.com/business/archive/2014/01/how-2-inches-of-snow-created-a-traffic-nightmare-in-atlanta/283434/",
            "https://www.wsdot.com/traffic/passes/stevens/"]
df = pd.DataFrame(url_list, columns=['url'])
df_Names = []

# load English language model
nlp = en_core_web_sm.load()

# find named entities in text
def spacy_entity(df):
    df1 = nlp(df)
    df2 = [[w.text, w.label_] for w in df1.ents]
    return df2

for index, url in df.iterrows():
    print(index)
    print(url)
    sleep(randint(2, 5))
    req = Request(url[0], headers={"User-Agent": 'Mozilla/5.0'})
    webpage = urlopen(req).read()
    soup = BeautifulSoup(webpage, 'html5lib').get_text()
    df_Names.append(spacy_entity(soup))

df["Names"] = df_Names

For getting the visible text with BeautifulSoup, there is already this answer:
BeautifulSoup Grab Visible Webpage Text
Once you have your visible text, if you want to extract "names" (I'm assuming by names here you mean "nouns"), you can check the nltk package (or Blob), as in this other answer: Extracting all Nouns from a text file using nltk
Once you apply both, you can ingest your outputs into a pandas DataFrame.
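For instance, here is a minimal sketch of the visible-text filtering idea from the linked answer, adapted to one of the question's URLs; the tag blacklist is an assumption you may need to adjust for your pages:
import requests
from bs4 import BeautifulSoup
from bs4.element import Comment

def visible_text(html):
    soup = BeautifulSoup(html, "html5lib")
    # tags whose text is layout/metadata rather than visible content (assumed list)
    blacklist = ["script", "style", "head", "title", "meta", "[document]"]
    texts = []
    for element in soup.find_all(text=True):
        if element.parent.name in blacklist or isinstance(element, Comment):
            continue  # skip style/script nodes and HTML comments
        texts.append(element.strip())
    return " ".join(t for t in texts if t)

html = requests.get("https://www.nhtsa.gov/winter-driving-safety",
                    headers={"User-Agent": "Mozilla/5.0"}).text
text = visible_text(html)
The resulting string can then be passed to the spaCy/nltk step and appended to the DataFrame as in the question's loop.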
Note: Please notice that extracting the visible text from HTML is still an open problem. These two papers highlight the problem far better than I can, and both use Machine Learning techniques: https://arxiv.org/abs/1801.02607, https://dl.acm.org/doi/abs/10.1145/3366424.3383547.
And their respective GitHub repositories: https://github.com/dalab/web2text, https://github.com/mrjleo/boilernet

Related

How to fetch text data from a website and store it as an Excel file using Python

I want to create a script that fetches all the data from the following website: https://www.bis.doc.gov/dpl/dpl.txt, stores it in an Excel file, and counts the number of records in it, using Python. I've tried to achieve this by implementing the code as:
import requests
import re
from bs4 import BeautifulSoup
URL = "https://www.bis.doc.gov/dpl/dpl.txt"
page = requests.get(URL)
soup = BeautifulSoup(page.text, "lxml")
print(soup)
I've fetched the data but don't know the next step for storing it as an Excel file. Please guide me or share your ideas. Thank you in advance!
You can do it with pandas easily, since the data is tab-separated values.
Note: openpyxl needs to be installed for this to work.
import requests
import io
import pandas as pd
URL = "https://www.bis.doc.gov/dpl/dpl.txt"
page = requests.get(URL)
df = pd.read_csv(io.StringIO(page.text), sep="\t")
df.to_excel(r'i_data.xlsx', index = False)
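The question also asks for the number of records; assuming each row of the parsed file is one record, the count is just the length of the DataFrame:
print("Number of records:", len(df))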

Listing names of Excel files from URL using Python and BeautifulSoup

I am trying to do web scraping using Python and BeautifulSoup, so I am going through tutorials, but I am stuck after a successful requests.get(url).
As soon as I define the elements I want to extract (the names of the Excel files appearing on the website) based on the tag and its class, which contains the string "file-id-..." (... being the id of the file), all I get is an empty list.
My goal is to list all Excel file names from this URL and then open them later using a for loop, in order to extract specific monthly data from the national labour office, which has the same structure throughout the year.
labour_office_web_text = requests.get("url").text
soup = BeautifulSoup(labour_office_web_text, "lxml")
file_names = soup.find_all('a[class*="file-id-"]')
file_names
Any recommendations? Thank you!
To get all .xls links from that page you can use the following example:
import requests
from bs4 import BeautifulSoup

url = "https://www.upsvr.gov.sk/statistiky/nezamestnanost-mesacne-statistiky/2020.html?page_id=971502"
soup = BeautifulSoup(requests.get(url).content, "html.parser")

for link in soup.select('a[href*=".xls"]'):
    print(link["class"], link["href"])
Prints:
['file-id-1059252'] https://www.upsvr.gov.sk/buxus/docs/statistic/mesacne/2020/MS_2012.xlsx
['file-id-1050892'] https://www.upsvr.gov.sk/buxus/docs/statistic/mesacne/2020/MS_2011.xlsx
['file-id-1042979'] https://www.upsvr.gov.sk/buxus/docs/statistic/mesacne/2020/MS_2010.xlsx
['file-id-1034316'] https://www.upsvr.gov.sk/buxus/docs/statistic/mesacne/2020/MS_2009_okresy.xlsx
['file-id-1027296'] https://www.upsvr.gov.sk/buxus/docs/statistic/mesacne/2020/MS_2008_okresy.xlsx
['file-id-1021527'] https://www.upsvr.gov.sk/buxus/docs/statistic/mesacne/2020/MS_2007_okresy.xlsx
['file-id-1015636'] https://www.upsvr.gov.sk/buxus/docs/statistic/mesacne/2020/MS_2006_okresy.xlsx
['file-id-1009682'] https://www.upsvr.gov.sk/buxus/docs/statistic/mesacne/2020/MS_maj2020_okresy.xlsx
['file-id-1002749'] https://www.upsvr.gov.sk/buxus/docs/statistic/mesacne/2020/MS_apr2020_okresy.xlsx
['file-id-995793'] https://www.upsvr.gov.sk/buxus/docs/statistic/mesacne/2020/MS_mar_2020_okresy.xlsx
['file-id-983937'] https://www.upsvr.gov.sk/buxus/docs/statistic/mesacne/2020/MS_2002_okresy.xlsx
['file-id-971509'] https://www.upsvr.gov.sk/buxus/docs/statistic/mesacne/2020/MS_2001.xlsx
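If you then want to open the listed files in a loop, as described in the question, here is a minimal follow-up sketch (assuming the linked .xlsx files are readable by pandas, which needs openpyxl installed):
import requests
import pandas as pd
from bs4 import BeautifulSoup

url = "https://www.upsvr.gov.sk/statistiky/nezamestnanost-mesacne-statistiky/2020.html?page_id=971502"
soup = BeautifulSoup(requests.get(url).content, "html.parser")

monthly_data = {}
for link in soup.select('a[href*=".xls"]'):
    file_name = link["href"].rsplit("/", 1)[-1]             # e.g. MS_2012.xlsx
    monthly_data[file_name] = pd.read_excel(link["href"])   # pandas can read directly from a URL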

Function to web scrape tables from several pages

I am learning Python and I am trying to create a function to web scrape tables of vaccination rates from several different web pages: a GitHub repository for Our World in Data (https://github.com/owid/covid-19-data/tree/master/public/data/vaccinations/country_data and https://ourworldindata.org/about). The code works perfectly when web scraping a single table and saving it into a DataFrame...
import requests
from bs4 import BeautifulSoup
import pandas as pd
url = "https://github.com/owid/covid-19-data/blob/master/public/data/vaccinations/country_data/Bangladesh.csv"
response = requests.get(url)
response
scraping_html_table_BD = BeautifulSoup(response.content, "lxml")
scraping_html_table_BD = scraping_html_table_BD.find_all("table", "js-csv-data csv-data js-file-line-container")
df = pd.read_html(str(scraping_html_table_BD))
BD_df = df[0]
But I have not had much luck when trying to create a function to scrape several pages. I have been following the tutorial on this website, in the section 'Scrape multiple pages with one script', and similar StackOverflow questions, amongst other pages. I tried creating a global variable first, but I end up with errors like "RecursionError: maximum recursion depth exceeded while calling a Python object". This is the best code I have managed, as it doesn't generate an error, but I've not managed to save the output to a global variable. I really appreciate your help.
import pandas as pd
from bs4 import BeautifulSoup
import requests

link_list = ['/Bangladesh.csv',
             '/Nepal.csv',
             '/Mongolia.csv']

def get_info(page_url):
    page = requests.get('https://github.com/owid/covid-19-data/tree/master/public/data/vaccinations/country_data' + page_url)
    scape = BeautifulSoup(page.text, 'html.parser')
    vaccination_rates = scape.find_all("table", "js-csv-data csv-data js-file-line-container")
    result = {}
    df = pd.read_html(str(vaccination_rates))
    vaccination_rates = df[0]
    df = pd.DataFrame(vaccination_rates)
    print(df)
    df.to_csv("testdata.csv", index=False)

for link in link_list:
    get_info(link)
Edit: I can view the data from the final webpage iterated, since it is saved to a csv file, but not the data from the preceding links.
new = pd.read_csv('testdata6.csv')
pd.set_option("display.max_rows", None, "display.max_columns", None)
new
This is because in every iteration your 'testdata.csv' is overwritten with a new one.
So you can do:
df.to_csv(page_url[1:], index=False)
I'm guessing you're overwriting your 'testdata.csv' each time, which is why you can only see the final page. I would either use enumerate to add an identifier for a separate csv each time you scrape a page, e.g.:
for key, link in enumerate(link_list):
    get_info(link, key)

# ... then inside get_info, write a separate file per page:
df.to_csv(f"testdata{key}.csv", index=False)
Or, open this csv as part of your get_info function; the steps for that are available in 'append new row to old csv file python'.
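If the actual goal is one combined DataFrame in a variable rather than separate csv files, here is a minimal sketch of my own (not from either answer), reusing the question's table selector and assuming the GitHub page markup from the question still applies:
import pandas as pd
from bs4 import BeautifulSoup
import requests

base = 'https://github.com/owid/covid-19-data/tree/master/public/data/vaccinations/country_data'
link_list = ['/Bangladesh.csv', '/Nepal.csv', '/Mongolia.csv']

def get_info(page_url):
    page = requests.get(base + page_url)
    soup = BeautifulSoup(page.text, 'html.parser')
    tables = soup.find_all("table", "js-csv-data csv-data js-file-line-container")
    # read_html returns a list of DataFrames; take the first (and only) table
    return pd.read_html(str(tables))[0]

# return the frames instead of writing them inside the function, then combine once
frames = [get_info(link) for link in link_list]
combined = pd.concat(frames, keys=[link.strip('/') for link in link_list])
combined.to_csv("all_countries.csv")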

Unable to parse HTML table with Beautiful Soup

I am very new to using Beautiful Soup and I'm trying to import data from the URL below as a pandas DataFrame.
However, the final result has the correct column names, but no numbers in the rows.
What should I be doing instead?
Here is my code:
from bs4 import BeautifulSoup
import requests
import pandas as pd

def get_tables(html):
    soup = BeautifulSoup(html, 'html.parser')
    table = soup.find_all('table')
    return pd.read_html(str(table))[0]

url = 'https://www.cmegroup.com/trading/interest-rates/stir/eurodollar.html'
html = requests.get(url).content
get_tables(html)
The data you see in the table is loaded from another URL via JavaScript. You can use this example to save the data to csv:
import json
import requests
import pandas as pd
data = requests.get('https://www.cmegroup.com/CmeWS/mvc/Quotes/Future/1/G').json()
# uncomment this to print all data:
# print(json.dumps(data, indent=4))
df = pd.json_normalize(data['quotes'])
df.to_csv('data.csv')
Saves data.csv (screenshot from LibreOffice not shown here).
The website you're trying to scrape renders the table values dynamically, and requests.get will only return the HTML the server sends prior to JavaScript rendering.
You will have to find an alternative way of accessing the data or render the webpage's JS (see this example).
A common way of doing this is to use selenium to automate a browser, which lets you render the JavaScript and get the source code that way.
Here is a quick example:
import time
import pandas as pd
from selenium.webdriver import Chrome
#Request the dynamically loaded page source
c = Chrome(r'/path/to/webdriver.exe')
c.get('https://www.cmegroup.com/trading/interest-rates/stir/eurodollar.html')
#Wait for it to render in browser
time.sleep(5)
html_data = c.page_source
#Load into pd.DataFrame
tables = pd.read_html(html_data)
df = tables[0]
df.columns = df.columns.droplevel() #Convert the MultiIndex to an Index
Note that I didn't use BeautifulSoup; you can pass the HTML directly to pd.read_html. You'll have to do some more cleaning from there, but that's the gist.
Alternatively, you can take a peek at requests-html, which is a library that offers JavaScript rendering and might be able to help, or search for a way to access the data as JSON or .csv from elsewhere and use that, etc.
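For illustration, a rough sketch of the requests-html route (an assumption on my part that the page renders usable tables this way; render() downloads a headless Chromium on first use):
import pandas as pd
from requests_html import HTMLSession

session = HTMLSession()
r = session.get('https://www.cmegroup.com/trading/interest-rates/stir/eurodollar.html')
r.html.render()                      # execute the page's JavaScript
tables = pd.read_html(r.html.html)   # parse the rendered page source
df = tables[0]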

Beautiful Soup. Text extraction into a dataframe

I'm trying to extract information from a single web page that contains multiple similarly structured records. The information is contained within div tags with different classes (I'm interested in username, main text and date). Here is the code I use:
import bs4 as bs
from urllib.request import urlopen
import pandas as pd

href = 'https://example.ru/'
sause = urlopen(href).read()
soup = bs.BeautifulSoup(sause, 'lxml')
user = pd.Series(soup.find_all('div', class_='Username'))
main_text = pd.Series(soup.find_all('div', class_='MainText'))
date = pd.Series(soup.find_all('div', class_='Date'))
result = pd.DataFrame()
result = pd.concat([user, main_text, date], axis=1)
The problem is that I receive the information with all the tags, while I want only the text. Surprisingly, the .text attribute doesn't work with the find_all method, so now I'm completely out of ideas.
Thank you for any help!
A list comprehension is the way to go. To get all the text within MainText, for example, try:
[elem.text for elem in soup.find_all('div', class_='MainText')]
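Applying the same idea to all three classes and building the DataFrame the question is after (reusing soup from the question's code, and assuming the page lists one Username/MainText/Date per record, in the same order):
import pandas as pd

users = [e.text for e in soup.find_all('div', class_='Username')]
texts = [e.text for e in soup.find_all('div', class_='MainText')]
dates = [e.text for e in soup.find_all('div', class_='Date')]

# one row per record; assumes the three lists line up
result = pd.DataFrame({'user': users, 'main_text': texts, 'date': dates})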
