I am brand new to coding, and was given a web scraping tutorial (found here) to help build my skills as I learn. I've already had to make several adjustments to the code in this tutorial, but I digress. I'm scraping http://books.toscrape.com/ and, when I try to export a DataFrame of just the book categories into Excel, I run into a couple of issues. Note that, when exporting to CSV (and then opening the file in Notepad), these issues are not present. I am working in a Jupyter Notebook in Azure Data Studio.
First, the row containing the data appears not to exist even though it is displayed, so I have to tab over to each column to get past the data shown in Excel's default window size.
Second, it only displays the first 9 results (the first being "Books," and the other 8 being the first 8 categories).
Image of desired scrape section
Here is my code:
s = Service('C:/Users/.../.../chromedriver.exe')
browser = webdriver.Chrome(service=s)
url = 'http://books.toscrape.com/'
browser.get(url)
results = []
content = browser.page_source
soup = BeautifulSoup(content)
# changes from the tutorial due to recommendations
# from StackOverflow, based on similar questions
# from error messages popping up when using original
# formatting; tutorial is outdated.
for li in soup.find_all(attrs={'class': 'side_categories'}):
    name = element.find('li')
    if name not in results:
        results.append(name.text)
df = pd.DataFrame({'Categories': results})
df.to_excel('categories.xlsx', index=False)
# per https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.io.formats.style.Styler.to_excel.html
# encoding is Deprecated and apparently wasn't
# needed for any excel writer other than xlwt,
# which is no longer maintained.
Images of results:
View before tabbing further in the columns
End of the displayed results
What can I do to fix this?
Edit: Many apologies, I didn't realize I had copied an older, incorrect version of my code blocks. It should be correct now.
The code in question will not create any DataFrame. However, you should select your elements more specifically, for example with CSS selectors:
for a in soup.select('ul.nav-list a'):
    if a.get_text(strip=True) not in results:
        results.append(a.get_text(strip=True))
Example
import requests
from bs4 import BeautifulSoup
import pandas as pd
results = []
soup = BeautifulSoup(requests.get('http://books.toscrape.com/').content, 'html.parser')
for a in soup.select('ul.nav-list a'):
    if a.get_text(strip=True) not in results:
        results.append(a.get_text(strip=True))
pd.DataFrame({'Categories': results})
Output
            Categories
0                Books
1               Travel
2              Mystery
3   Historical Fiction
4       Sequential Art
5             Classics
6           Philosophy
...
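To actually get this into Excel, the to_excel call from the question should still work once the DataFrame is built this way; a minimal sketch (assuming openpyxl is installed for .xlsx output):

df = pd.DataFrame({'Categories': results})
df.to_excel('categories.xlsx', index=False)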
I am trying to extract a specific table from a PDF; the PDF looks like the image below.
I tried different libraries in Python.
With tabula-py
from tabula import read_pdf
from tabulate import tabulate
df = read_pdf("./tmp/pdf/Food Calories List.pdf")
df
With PyPDF2
import PyPDF2
import pandas as pd

pdf_file = open("./tmp/pdf/Food Calories List.pdf", 'rb')
read_pdf = PyPDF2.PdfFileReader(pdf_file)
number_of_pages = read_pdf.getNumPages()
page = read_pdf.getPage(0)
page_content = page.extractText()
data = page_content
df = pd.DataFrame([x.split(';') for x in data.split('\n')])
aux = page_content
df = pd.DataFrame([x.split(';') for x in aux.split('\n')])
I even tried textract and Beautiful Soup. The issue I am facing is that the output format is a mess. Is there any way to extract this table in a better format?
I suspect the issues stem from the fact that the table has merged cells (on the left), and reading data from a table only works reliably when the rows and cells are consistent, rather than some merged and some not.
I'd skip over the first two columns and then recreate/populate them on the left-hand side once you have the table loaded (as a pandas DataFrame, for example).
Then you have one label per row and can work with the data consistently; otherwise your cells per column will be inconsistently numbered.
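If it helps, a rough sketch of that idea (purely illustrative; it assumes the merged labels come out as blanks/NaN in the first two columns, and forward-filling them is one way to end up with one label per row):

from tabula import read_pdf
import pandas as pd

# Assumes a recent tabula-py, where read_pdf returns a list of DataFrames.
df = read_pdf("./tmp/pdf/Food Calories List.pdf", pages=1)[0]

# Forward-fill the merged label columns so every row carries its own label,
# then stitch them back onto the remaining data columns.
labels = df.iloc[:, :2].ffill()
clean = pd.concat([labels, df.iloc[:, 2:]], axis=1)
print(clean)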
I would look into using tabula templates, which you can dynamically generate based on word locations on the page. This will give tabula more guidance on which area to consider and lead to more accurate extraction. See tabula.read_pdf_with_template as documented here: https://tabula-py.readthedocs.io/en/latest/tabula.html#tabula.io.read_pdf_with_template.
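For reference, a minimal sketch of that call (the template path here is hypothetical; the JSON would come from the Tabula app or be generated dynamically as described):

from tabula import read_pdf_with_template

# Hypothetical template file describing the table area(s) on the page.
dfs = read_pdf_with_template("./tmp/pdf/Food Calories List.pdf",
                             "./tmp/templates/food_calories.tabula-template.json")
print(dfs[0])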
Camelot is another Python library to try. Its advanced settings suggest it can handle merged cells, although this will likely require adjusting certain settings such as copy_text and shift_text.
Note: Camelot can only read text-based tables. If the table is inside an image, it won't be able to extract it.
If the above is not an issue, try the sample code below:
import camelot
tables = camelot.read_pdf('./tmp/pdf/Food Calories List.pdf', pages='1', copy_text=['v'])
print(tables[0].df)
So I developed a script that would pull data from a live-updated site tracking coronavirus data. I set it up to pull data every 30 minutes but recently tested it on updates every 30 seconds.
The idea is that it creates the request to the site, pulls the HTML, creates a list of all of the data I need, then restructures it into a DataFrame (basically the country, the cases, deaths, etc.).
Then it takes each row and appends it to the corresponding one of the 123 Excel files for the various countries. This works well for, I believe, somewhere in the range of 30-50 iterations before it either causes file corruption or weird data entries.
I have my code below. I know it's poorly written (my initial reasoning was I felt confident I could set it up quickly and I wanted to collect data quickly.. unfortunately I overestimated my abilities but now I want to learn what went wrong). Below my code I'll include sample output.
PLEASE note that the 30-second interval is only for quick testing; I don't intend to send requests that frequently for months. I just wanted to see what the issue was. It was originally set to pull every 30 minutes when I detected this issue.
See below for the code:
import schedule
import time

def RecurringProcess2():
    import requests
    from bs4 import BeautifulSoup
    import pandas as pd
    import datetime
    import numpy as np
    from os import listdir
    import os
    try:
        extractTime = datetime.datetime.now()
        extractTime = str(extractTime)
        print("Access Initiated at " + extractTime)
        link = 'https://www.worldometers.info/coronavirus/'
        response = requests.get(link)
        soup = BeautifulSoup(response.text,'html.parser').findAll('td')#[1107].get_text()
        table = pd.DataFrame(columns=['Date and Time','Country','Total Cases','New Cases','Total Deaths','New Deaths','Total Recovered','Active Cases','Serious Critical','Total Cases/1M pop'])
        soupList = []
        for i in range(1107):
            value = soup[i].get_text()
            soupList.insert(i,value)
        table = np.reshape(soupList,(123,-1))
        table = pd.DataFrame(table)
        table.columns=['Country','Total Cases','New Cases (+)','Total Deaths','New Deaths (+)','Total Recovered','Active Cases','Serious Critical','Total Cases/1M pop']
        table['Date & Time'] = extractTime
        # Below code is run once to generate the initial files. That's it.
        # for i in range(122):
        #     fileName = table.iloc[i,0] + '.xlsx'
        #     table.iloc[i:i+1,:].to_excel(fileName)
        FilesDirectory = 'D:\\Professional\\Coronavirus'
        fileType = '.csv'
        filenames = listdir(FilesDirectory)
        DataFiles = [filename for filename in filenames if filename.endswith(fileType)]
        for file in DataFiles:
            countryData = pd.read_csv(file,index_col=0)
            MatchedCountry = table.loc[table['Country'] == str(file)[:-4]]
            if file == ' USA .csv':
                print("Country Data Rows: ",len(countryData))
                if os.stat(file).st_size < 1500:
                    print("File Size under 1500")
            countryData = countryData.append(MatchedCountry)
            countryData.to_csv(FilesDirectory+'\\'+file, index=False)
    except:
        pass
    print("Process Complete!")
    return

schedule.every(30).seconds.do(RecurringProcess2)

while True:
    schedule.run_pending()
    time.sleep(1)
When I check the output after some number of iterations (usually successful for around 30-50), it has either kept only 2 rows and lost all the others, or it keeps appending while deleting a single entry in the row above, with the row two above losing 2 entries, and so on (essentially forming a triangle of sorts).
Above that image would be a few hundred empty rows. Does anyone have an idea of what is going wrong here? I'd consider this a failed attempt but would still like to learn from it. I appreciate any help in advance.
Hi, as per my understanding the webpage only has one table element. My suggestion would be to use pandas' read_html method, as it provides a clean and structured table.
Try the code below; you can modify it to schedule the same:
import requests
import pandas as pd
url = 'https://www.worldometers.info/coronavirus/'
html = requests.get(url).content
df_list = pd.read_html(html)
df = df_list[-1]
print(df)
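If it helps, a minimal sketch of how that could be dropped into the schedule loop from the question (the output file name here is hypothetical):

import schedule
import time
import requests
import pandas as pd

def pull_table():
    html = requests.get('https://www.worldometers.info/coronavirus/').content
    df = pd.read_html(html)[-1]
    df.to_csv('worldometers_latest.csv', index=False)  # hypothetical output file

schedule.every(30).minutes.do(pull_table)
while True:
    schedule.run_pending()
    time.sleep(1)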
Disclaimer: I'm still evaluating this solution. So far it works almost perfectly for 77 rows.
Originally I had set the script up to run for .xlsx files. I converted everything to .csv but retained the index column code:
countryData = pd.read_csv(file,index_col=0)
I started realizing that things were being ordered differently every time the script ran. I have since removed that from the code and so far it works. Almost.
   Unnamed: 0  Unnamed: 0.1
0           7             7
For some reason I have the above output in every file. I don't know why. But it's in the first 2 columns yet it still seems to be reading and writing correctly. Not sure what's going on here.
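If it's useful: those Unnamed: 0 / Unnamed: 0.1 columns are typically what pandas creates when a DataFrame's index has been written out to the file and the file is then read back in as ordinary columns. A sketch of one read/write combination inside the existing loop that avoids them (this is just a guess at the cause, not a confirmed fix):

# Read without keeping stray index columns, and never write the index back out.
countryData = pd.read_csv(file)
countryData = countryData.loc[:, ~countryData.columns.str.startswith('Unnamed')]
countryData.to_csv(FilesDirectory + '\\' + file, index=False)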
I'm trying to code a little "GPS", and I couldn't use the Google API because of the daily restriction.
I decided to use the site "viamichelin", which provides the distance between two addresses. I created a little piece of code to fetch all the URLs I needed, like this:
import pandas
import numpy as np

df = pandas.read_excel('C:\Users\Bibi\Downloads\memoire\memoire.xlsx', sheet_name='Clients')
df2 = pandas.read_excel('C:\Users\Bibi\Downloads\memoire\memoire.xlsx', sheet_name='Agences')
matrix = df.as_matrix(columns=None)
clients = np.squeeze(np.asarray(matrix))
matrix2 = df2.as_matrix(columns=None)
agences = np.squeeze(np.asarray(matrix2))
compteagences = 0
comptetotal = 0
for j in agences:
    compteclients = 0
    for i in clients:
        print agences[compteagences]
        print clients[compteclients]
        url = 'https://fr.viamichelin.be/web/Itineraires?departure='+agences[compteagences]+'&arrival='+clients[compteclients]+'&arrivalId=34MTE1MnJ5ZmQwMDMzb3YxMDU1ZDFvbGNOVEF1TlRVNU5UUT1jTlM0M01qa3lOZz09Y05UQXVOVFl4TlE9PWNOUzQzTXpFNU5nPT1jTlRBdU5UVTVOVFE9Y05TNDNNamt5Tmc9PTBqUnVlIEZvbmQgZGVzIEhhbGxlcw==&index=0&vehicle=0&type=0&distance=km&currency=EUR&highway=false&toll=false&vignette=false&orc=false&crossing=true&caravan=false&shouldUseTraffic=false&withBreaks=false&break_frequency=7200&coffee_duration=1200&lunch_duration=3600&diner_duration=3600&night_duration=32400&car=hatchback&fuel=petrol&fuelCost=1.393&allowance=0&corridor=&departureDate=&arrivalDate=&fuelConsumption='
        print url
        compteclients += 1
        comptetotal += 1
    compteagences += 1
All my data is in Excel, which is why I used the pandas library. I now have all the URLs needed for my project.
However, I would like to extract the number of kilometres, and there's a little problem: the information I need isn't in the source code, so I can't extract it with Python... The site is presented like this:
Michelin
When I click on "Inspect" I can find the information I need (on the left), but I can't find it in the source code (on the right)... Can someone provide me some help?
Itinerary
I have already tried this, without success:
import os
import csv
import requests
from bs4 import BeautifulSoup
requete = requests.get("https://fr.viamichelin.be/web/Itineraires?departure=Rue%20Lebeau%2C%20Liege%2C%20Belgique&departureId=34MTE1Mmc2NzQwMDM0NHoxMDU1ZW44d2NOVEF1TmpNek5ERT1jTlM0MU5qazJPQT09Y05UQXVOak16TkRFPWNOUzQxTnpBM01nPT1jTlRBdU5qTXpOREU9Y05TNDFOekEzTWc9PTBhUnVlIExlYmVhdQ==&arrival=Rue%20Rys%20De%20Mosbeux%2C%20Trooz%2C%20Belgique&arrivalId=34MTE1MnJ5ZmQwMDMzb3YxMDU1ZDFvbGNOVEF1TlRVNU5UUT1jTlM0M01qa3lOZz09Y05UQXVOVFl4TlE9PWNOUzQzTXpFNU5nPT1jTlRBdU5UVTVOVFE9Y05TNDNNamt5Tmc9PTBqUnVlIEZvbmQgZGVzIEhhbGxlcw==&index=0&vehicle=0&type=0&distance=km&currency=EUR&highway=false&toll=false&vignette=false&orc=false&crossing=true&caravan=false&shouldUseTraffic=false&withBreaks=false&break_frequency=7200&coffee_duration=1200&lunch_duration=3600&diner_duration=3600&night_duration=32400&car=hatchback&fuel=petrol&fuelCost=1.393&allowance=0&corridor=&departureDate=&arrivalDate=&fuelConsumption=")
page = requete.content
soup = BeautifulSoup(page, "html.parser")
print soup
Looking at the inspector for the page, the actual routing is done via a JavaScript invocation to this rather long URL.
The data you need seems to be in that response, starting from _scriptLoaded(. (Since it's a JavaScript object literal, you can use Python's built-in JSON library to load the data into a dict.)
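A minimal sketch of that last step, assuming raw already holds the text of that routing response and that the object literal is in fact valid JSON (as the parenthetical above suggests):

import json

# Strip the _scriptLoaded( ... ) wrapper around the payload, then parse it.
start = raw.find('_scriptLoaded(') + len('_scriptLoaded(')
end = raw.rfind(')')
itinerary = json.loads(raw[start:end])  # now a regular Python dict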
I'm not a programmer, but I'm trying to teach myself Python so that I can pull data off various sites for projects that I'm working on. I'm using "Automate the Boring Stuff" and I'm having trouble getting the examples to work with one of the pages I'm trying to pull data from.
I'm using Anaconda as my prompt with Python 3.6.5. Here's what I've done:
Step 1: create the beautiful soup object
import requests, bs4
res = requests.get('https://www.almanac.com/weather/history/zipcode/02111/2017-05-15')
res.raise_for_status()
weatherTest = bs4.BeautifulSoup(res.text)
type(weatherTest)
This works, and returns the result
<class 'bs4.BeautifulSoup'>
I've made the assumption that the "noStarchSoup" that was in the original text (in place of weatherTest here) is a name the author gave to the object that I can rename to something more relevant to me. If that's not accurate, please let me know.
Step 2: pull an element out of the html
Here's where I get stuck. The author had just mentioned how to pull a page down into a file (which I would prefer not to do; I want to use the bs4 object), but then he uses that file as his source for the HTML data. exampleFile was his downloaded file.
import bs4
exampleFile = open('https://www.almanac.com/weather/history/zipcode/02111/2017-05-15')
I've tried using weatherTest in place of exampleFile, I've tried running the whole thing with the original object name (noStarchSoup), I've even tried it with exampleFile, even though I haven't downloaded the file.
What I get is:
OSError: [Errno 22] Invalid argument: 'https://www.almanac.com/weather/history/zipcode/02111/2017-05-15'
The next step is to tell it what element to pull but I'm trying to fix this error first and kind of spinning my wheels here.
Couldn't resist here!
I found this page during my search but this answer didn't quite help... try this code :)
Step 1: download Anaconda 3.0+
Step 2: (function)
# Import Libraries
import logging
import bs4
import requests
import pandas as pd

logger = logging.getLogger(__name__)

def import_high_short_tickers(market_type):
    if market_type == 'NASDAQ':
        page = requests.get('https://www.highshortinterest.com/nasdaq/')
    elif market_type == 'NYSE':
        page = requests.get('https://www.highshortinterest.com/nyse/')
    else:
        logger.error("Invalid market_type: " + market_type)
        return None
    # Parse the HTML Page
    soup = bs4.BeautifulSoup(page.content, 'html.parser')
    # Grab only table elements
    all_soup = soup.find_all('table')
    # Get what you want from table elements!
    for element in all_soup:
        listing = str(element)
        if 'https://finance.yahoo.com/' in listing:
            # Stuff the results in a pandas data frame (if you're not using these you should)
            data = pd.read_html(listing)
            return data
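Called like this, for example (hypothetical usage; it returns whatever pandas parses out of the matching table):

data = import_high_short_tickers('NASDAQ')
print(data)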
Yes, yes, it's very crude, but don't hate!
Cheers!
I am a research analyst trying to collate data and perform analysis. I need data from this page. I need the data from Abrasives to Vanspati Oils (you'll find it on the left side). I always encounter problems like this, and I figured out that Selenium would be able to handle such stuff, but I am stuck on how to download this data into Excel. I need one Excel sheet for each category.
My exact technical question is how to address the problem of downloading the table data. I did a little bit of background research and understood that the data can be extracted if the table has a class name (from here). I see that the table has class="tbldata14 bdrtpg", so I used it in my code.
I got this error
InvalidSelectorException: Message: The given selector tbldata14 bdrtpg
is either invalid or does not result in a WebElement.
How can I download this table data? Point me to any references that I can read and solve this problem.
My code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://www.moneycontrol.com/stocks/marketinfo/netprofit/bse/index.html")
elem=driver.find_element_by_class_name("tbldata14 bdrtpg")
Thanks in advance. Also, please suggest if there is another, simpler way [I tried copy-paste; it is too tedious!].
Fetching the data you're interested in can be achieved as follows:
from selenium import webdriver

url = "http://www.moneycontrol.com/stocks/marketinfo/netprofit/bse/index.html"
# Get table cells where the cell contains an anchor or text
xpath = "//table[@class='tbldata14 bdrtpg']//tr//td[child::a|text()]"

driver = webdriver.Firefox()
driver.get(url)
data = driver.find_elements_by_xpath(xpath)

# Group the output where each row contains 5 elements
rows = [data[x:x+5] for x in xrange(0, len(data), 5)]
for r in rows:
    print "Company {}, Last Price {}, Change {}, % Change {}, Net Profit {}" \
        .format(r[0].text, r[1].text, r[2].text, r[3].text, r[4].text)
Writing the data to an Excel file is explained here:
Python - Write to Excel Spreadsheet
Python, appending printed output to excel file
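Not from those links, but as a minimal sketch of one way to do it with pandas (it builds on the rows list above; assumes openpyxl is installed, and the output file name is hypothetical):

import pandas as pd

# Pull the text out of each WebElement and hand it to a DataFrame.
records = [[cell.text for cell in r] for r in rows]
df = pd.DataFrame(records, columns=['Company', 'Last Price', 'Change', '% Change', 'Net Profit'])
df.to_excel('netprofit_bse.xlsx', index=False)  # hypothetical output file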