extract information inside span tag - python

I am trying to extract the PMC ID inside a "span" tag.
To do so, I used find_element_by_xpath, but I'm facing the following error:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: /div/main/div/details/div/div[2]/details/summary/span[5]
Following is the link:
https://www.ncbi.nlm.nih.gov/pmc/utils/idconv/v1.0/?tool=my_tool&email=my_email#example.com&ids=9811893
Following is my code:
from selenium import webdriver

driver = webdriver.Firefox(executable_path='geckodriver.exe')
driver.implicitly_wait(10)  # lets webdriver wait up to 10 seconds for the website to load
driver.get("https://www.ncbi.nlm.nih.gov/pmc/utils/idconv/v1.0/?tool=my_tool&email=my_email#example.com&ids=9811893")
pmc = driver.find_element_by_xpath('/div/main/div/details/div/div[2]/details/summary/span[5]')
print(pmc.text)  # a WebElement has no get_text(); .text holds the visible text
The output should be:
PMC24938

You can use a CSS attribute selector, then get_attribute to read the attribute value:
from selenium import webdriver
driver = webdriver.Firefox(executable_path='geckodriver.exe')
driver.get("https://www.ncbi.nlm.nih.gov/pmc/utils/idconv/v1.0/?tool=my_tool&email=my_email#example.com&ids=9811893")
pmc = driver.find_element_by_css_selector('[pmcid]')
print(pmc.get_attribute('pmcid'))
Result: PMC24938
Though you don't need selenium for this site. Use the faster requests and bs4:
import requests
from bs4 import BeautifulSoup as bs
r = requests.get('https://www.ncbi.nlm.nih.gov/pmc/utils/idconv/v1.0/?tool=my_tool&email=my_email#example.com&ids=9811893')
soup = bs(r.content, 'lxml')
pmc = soup.select_one('[pmcid]')['pmcid']
print(pmc)
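Since idconv is itself a web service, you can also ask it for JSON and skip markup parsing entirely. A minimal sketch, assuming the service's format=json parameter and a records/pmcid response shape (verify both against the current NCBI docs; the email is a placeholder):
import requests

# Sketch assuming idconv's format=json parameter; the 'records'/'pmcid'
# keys are the response shape I expect, so verify against the live JSON.
params = {
    'tool': 'my_tool',
    'email': 'my_email@example.com',  # placeholder address
    'ids': '9811893',
    'format': 'json',
}
r = requests.get('https://www.ncbi.nlm.nih.gov/pmc/utils/idconv/v1.0/', params=params)
record = r.json()['records'][0]
print(record['pmcid'])  # expected: PMC24938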

Related

BeautifulSoup returning none when I can see that the element exists

I'm trying to scrape the link to the image on this Reddit page for practice, but bs4 seems to return None whenever I use find() to look the element up by class. Any help?
import requests
from bs4 import BeautifulSoup as soup

page = requests.get("https://www.reddit.com/r/wallpaper/comments/qswblq/the_frontier_by_me_5120x2880/")
soup = soup(page.content, "html.parser")  # note: this rebinds the name soup
print(soup.find(class_="ImageBox-image")['src'])
As mentioned in the comments, there is an alternative: you can use selenium.
Unlike requests, it renders the site like a browser and gives you a page_source you can inspect to find your element.
Example:
from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Chrome('YOUR PATH TO CHROMEDRIVER')
driver.get('https://www.reddit.com/r/wallpaper/comments/qswblq/the_frontier_by_me_5120x2880/')
content = driver.page_source
soup = BeautifulSoup(content,'html.parser')
print(soup.find(class_="ImageBox-image")['src'])
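If the src is still missing from page_source, the page may not have finished rendering when you grabbed it. A sketch with an explicit wait (the ImageBox-image class comes from the question and may change with Reddit's markup):
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome('YOUR PATH TO CHROMEDRIVER')
driver.get('https://www.reddit.com/r/wallpaper/comments/qswblq/the_frontier_by_me_5120x2880/')

# Block for up to 10 seconds until the target element exists in the DOM,
# instead of hoping the page has rendered by the time we grab the source.
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CLASS_NAME, 'ImageBox-image'))
)

soup = BeautifulSoup(driver.page_source, 'html.parser')
print(soup.find(class_="ImageBox-image")['src'])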

Python - Item Price Web Scraping for Target

I'm trying to get any item's price from the Target website. I have already done some examples for this site using selenium and the Redsky API, but now I have tried to write the bs4 code below:
import requests
from bs4 import BeautifulSoup
url = "https://www.target.com/p/ruffles-cheddar-38-sour-cream-potato-chips-2-5oz/-/A-14930847#lnk=sametab"
r= requests.get(url)
soup = BeautifulSoup(r.content, "lxml")
price = soup.find("div",class_= "web-migration-tof__PriceFontSize-sc-14z8sos-14 elGGzp")
print(price)
But it returns None.
I also tried soup.find("div", {'class': "web-migration-tof__PriceFontSize-sc-14z8sos-14 elGGzp"}).
What am I missing?
I can accept any selenium code or Redsky API code but my priority is bs4
The page is dynamic. The data is rendered after the initial request is made. You can use selenium to load the page, and once it's rendered, you can pull out the relevant tag. An API, though, is always the preferred way to go if one is available.
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome('C:/chromedriver_win32/chromedriver.exe')
# If you don't want to open a browser, comment out the line above and uncomment below
#options = webdriver.ChromeOptions()
#options.add_argument('headless')
#driver = webdriver.Chrome('C:/chromedriver_win32/chromedriver.exe', options=options)
url = "https://www.target.com/p/ruffles-cheddar-38-sour-cream-potato-chips-2-5oz/-/A-14930847#lnk=sametab"
driver.get(url)
r = driver.page_source
soup = BeautifulSoup(r, "lxml")
price = soup.find("div",class_= "web-migration-tof__PriceFontSize-sc-14z8sos-14 elGGzp")
print(price.text)
Output:
$1.99
You are simply using the wrong locator.
Try this
price_css_locator = 'div[data-test=product-price]'
or in XPath style
price_xpath_locator = '//div[@data-test="product-price"]'
With bs4 it should be something like this:
soup.select_one('div[data-test="product-price"]')
To get the element's text you just need to add .text
price = soup.select_one('div[data-test="product-price"]').text
print(price)
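The same data-test locator also works directly in Selenium, without bs4. A minimal sketch, with the driver path borrowed from the answer above:
from selenium import webdriver

driver = webdriver.Chrome('C:/chromedriver_win32/chromedriver.exe')
driver.implicitly_wait(10)  # give the dynamic content a chance to render
driver.get("https://www.target.com/p/ruffles-cheddar-38-sour-cream-potato-chips-2-5oz/-/A-14930847#lnk=sametab")

# data-test attributes tend to be more stable than generated class names
price = driver.find_element_by_css_selector('div[data-test=product-price]')
print(price.text)  # e.g. $1.99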
Use .text:
price = soup.find("div", class_="web-migration-tof__PriceFontSize-sc-14z8sos-14 elGGzp")
print(price.text)

Getting empty result when searching for <span> with bs4

I want to use bs4 in my Flask app to search for a specific span.
I have never used bs4 before, so I'm a little confused about why I don't get any results for my search.
import requests
from bs4 import BeautifulSoup

url = "https://www.mcfit.com/de/fitnessstudios/studiosuche/studiodetails/studio/berlin-lichtenberg/"
html_content = requests.get(url).text
soup = BeautifulSoup(html_content, "lxml")
spans = soup.find_all('span', {'class': 'sc-fzoXWK hnKkAN'})
print(spans)
The class 'sc-fzoXWK hnKkAN' only contains one span.
When I execute this, I only get an empty list [] as the result.
Those contents are dynamically generated using JavaScript, so fetching the HTML with requests will only retrieve the static contents. You can combine BeautifulSoup with something like Selenium to achieve what you want.
Install selenium:
pip install selenium
Then retrieve the contents using the Firefox engine, or any other engine that supports JavaScript:
from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Firefox()
driver.get('https://www.mcfit.com/de/fitnessstudios/studiosuche/studiodetails/studio/berlin-lichtenberg/')
html_content = driver.page_source
soup = BeautifulSoup(html_content, "lxml")
elems = soup.find_all('span', {'class': 'sc-fzoXWK hnKkAN'})
print(elems)
If you use Firefox, the geckodriver needs to be accessible by your script. You can download it from https://github.com/mozilla/geckodriver/releases and put it somewhere on your PATH (or in C:/Windows if you are on Windows) so it is available from everywhere.
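If you'd rather not edit your PATH, you can also point Selenium at the binary directly; the path below is a placeholder:
from selenium import webdriver

# Placeholder path; point it at wherever you unpacked geckodriver
driver = webdriver.Firefox(executable_path='/path/to/geckodriver')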

Beautifulsoup not returning complete HTML of the page

I have been digging on the site for some time and I'm unable to find a solution to my issue. I'm fairly new to web scraping and am trying to simply extract some links from a web page using Beautiful Soup.
url = "https://www.sofascore.com/pt/futebol/2018-09-18"
page = urlopen(url).read()
soup = BeautifulSoup(page, "lxml")
print(soup)
At the most basic level, all I'm trying to do is access a specific tag within the website. I can work out the rest for myself, but the part I'm struggling with is that the tag I am looking for is not in the output.
For example, using the built-in find() I can grab the following div class tag:
class="l__grid js-page-layout"
However, what I'm actually looking for are the contents of a tag that is embedded at a lower level in the tree.
js-event-list-tournament-events
When I perform the same find operation on the lower-level tag, I get no results.
Using an Azure-based Jupyter Notebook, I have tried a number of the solutions to similar problems on Stack Overflow, with no luck.
Thanks!
Kenny
The page uses JS to load the data dynamically, so you have to use selenium. Check the code below.
Note you have to install selenium and chromedriver (unzip the file and copy it into your Python folder or somewhere on your PATH).
import time
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
url = "https://www.sofascore.com/pt/futebol/2018-09-18"
options = Options()
options.add_argument('--headless')
options.add_argument('--disable-gpu')
driver = webdriver.Chrome(chrome_options=options)
driver.get(url)
time.sleep(3)
page = driver.page_source
driver.quit()
soup = BeautifulSoup(page, 'html.parser')
container = soup.find_all('div', attrs={'class': 'js-event-list-tournament-events'})
print(container)
Or you can use their JSON API:
import requests
url = 'https://www.sofascore.com/football//2018-09-18/json'
r = requests.get(url)
print(r.json())
I had the same problem and the following code worked for me. Chromedriver must be installed!
import time
from bs4 import BeautifulSoup
from selenium import webdriver
chromedriver_path= "/Users/.../chromedriver"
driver = webdriver.Chrome(chromedriver_path)
url = "https://yourURL.com"
driver.get(url)
time.sleep(3) #if you want to wait 3 seconds for the page to load
page_source = driver.page_source
soup = BeautifulSoup(page_source, 'lxml')
You can then use this soup as usual.
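For example, repeating the lookup from the first answer on the rendered source:
container = soup.find_all('div', attrs={'class': 'js-event-list-tournament-events'})
print(container)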

Using Requests and BeautifulSoup - Python returns tag with no text

I'm trying to capture the number of visits on this page, but Python returns the tag with no text.
This is what I've done.
import requests
from bs4 import BeautifulSoup
r = requests.get("http://www.kijiji.ca/v-2-bedroom-apartments-condos/city-of-halifax/clayton-park-west-condo-style-luxury-2-bed-den/1016364514")
soup = BeautifulSoup(r.content, "html.parser")
print(soup.find_all("span", {"class": "ad-visits"}))
The values you are trying to scrape are populated by JavaScript, so BeautifulSoup or requests alone aren't going to work in this case.
You'll need to use something like selenium to get the output.
from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("http://www.kijiji.ca/v-2-bedroom-apartments-condos/city-of-halifax/clayton-park-west-condo-style-luxury-2-bed-den/1016364514")
soup = BeautifulSoup(driver.page_source , 'html.parser')
print(soup.find_all("span", {"class": "ad-visits"}))
Selenium will return the page source as rendered, and you can then use BeautifulSoup to get the value:
[<span class="ad-visits">385</span>]
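To pull out just the number, take .text from the first match, guarding against the span being absent:
visits = soup.find("span", {"class": "ad-visits"})
if visits:  # the span may be missing if the page layout changes
    print(visits.text)  # e.g. 385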
