Unable to print the href? - python

I have been searching across the site in the hope of finding an answer; however, every question I view doesn't have HTML as heavily nested as the page I am trying to scrape.
I am really hoping someone will spot my obvious error. I have the following code, which pulls the category headers but, annoyingly, not the href that goes with each one. When run, the code currently returns None for every href, but I cannot decipher why. I think it may be because I am targeting the wrong element, tag, or class in the HTML, but I cannot work out which one it should be.
from selenium import webdriver
import time
# The website to scrape
url = "https://www.jtinsight.com/JTIRA/JTIRA.aspx#!/full-category-list"
# Creating the WebDriver object using the ChromeDriver
driver = webdriver.Chrome()
# Directing the driver to the defined url
driver.get(url)
# driver.implicitly_wait(5)
time.sleep(1)
# Locate the categories
categories = driver.find_elements_by_xpath('//div[@class="subCatEntry ng-scope"]')
# Print out all categories on current page
num_page_items = len(categories)
print(num_page_items)
for headers in range(num_page_items):
    print(categories[headers].text)
for elem in categories:
    print(elem.get_attribute("a.divLink[href='*']"))
# Clean up (close browser once task is completed)
time.sleep(1)
driver.close()
I would really appreciate it if anyone could point out my error.

Try the below code.
for elem in categories:
    print(elem.find_element_by_css_selector("a.divLink").get_attribute('href'))

You are passing a CSS selector to the get_attribute method. That won't work. You have to provide the attribute name only. If the web element elem has an attribute named href, it will print the value of that attribute.
First, get the anchor (<a>) element. All the subcategory anchors have the class divLink. To get the anchor elements, try this:
categories = driver.find_elements_by_class_name('divLink')
Second, print the attribute value by passing the attribute name to get_attribute. Try this:
print(elem.get_attribute("href"))
This way you'll be able to print all the href values.
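Putting the two corrections together, a minimal end-to-end sketch (same URL and class name as in the question, using the old find_elements_by_* API the question uses):

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.jtinsight.com/JTIRA/JTIRA.aspx#!/full-category-list")
driver.implicitly_wait(5)

# Each subcategory entry is wrapped in an anchor with class "divLink",
# so the href can be read straight off that anchor element.
for link in driver.find_elements_by_class_name("divLink"):
    print(link.get_attribute("href"))

driver.quit()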

Related

Can't find class in JavaScript-rendered page using beautifulsoup, selenium, and python

When parsing the following webpage:
https://resultados.registraduria.gov.co/coalicion/0/colombia
only the div "root" is shown, with no other classes. I'm trying to retrieve the names of the candidates under the "FilaTablaPartidos__NombreCandidatoConsulta-sc-1s2q8ec-8 uMrIO" class but, when I try to use the find_all() method from Beautiful Soup, it returns an empty list.
This is the code I'm currently using:
url = 'https://resultados.registraduria.gov.co/coalicion/0/colombia'
driver = webdriver.Chrome('/usr/local/bin/chromedriver')
driver.get(url)
stages = driver.find_elements_by_class_name(
    'FilaTablaPartidos__NombreCandidatoConsulta-sc-1s2q8ec-8 uMrIO')
soup = BeautifulSoup(driver.page_source, 'html.parser')
nombres_ptags = soup.find_all('p', {'class':
    'FilaTablaPartidos__NombreCandidatoConsulta-sc-1s2q8ec-8 uMrIO'})
nombres = []
for name in nombres_ptags:
    nombres.append(name.text)
driver.close()
I even tried using the driver method find_elements_by_class_name, but it still returns an empty list.
I think the children under the "root" div simply aren't present yet when the HTML is parsed, and that is why it's not finding the class I need.
Any help is appreciated, thank you!
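One common fix, sketched below on the assumption that the page renders its content with JavaScript after load: explicitly wait for the candidate names to appear before grabbing page_source. Note that a class attribute containing a space is really several classes, so find_elements_by_class_name cannot take the whole string; a CSS selector joins the classes with dots instead.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup

selector = 'p.FilaTablaPartidos__NombreCandidatoConsulta-sc-1s2q8ec-8.uMrIO'
driver = webdriver.Chrome('/usr/local/bin/chromedriver')
driver.get('https://resultados.registraduria.gov.co/coalicion/0/colombia')

# Wait up to 15 s for the JavaScript app to render at least one name.
WebDriverWait(driver, 15).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, selector)))

soup = BeautifulSoup(driver.page_source, 'html.parser')
nombres = [p.text for p in soup.select(selector)]
driver.close()
print(nombres)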

Python Search multiple html files in a variable

I have used the Selenium driver to crawl through many site pages. Every time I get a new page, I append its HTML to a variable called "All_APP_Pages", so All_APP_Pages holds the HTML of many pages. I have not posted that code because it is long and not relevant to the issue. Python lists All_APP_Pages as being of type bytes.
from lxml import html
from lxml import etree
import xml.etree.ElementTree as ET
from selenium.webdriver.common.by import By
dom = etree.HTML(All_APP_Pages)
xp = "//tr[.//span[contains(.,'Product Data Solutions (UHC MR)')] and .//td[contains(.,'SQLServer')] and .//td[contains(.,'MR')]]//a"
link = dom.xpath(xp)
print(link)
Once all pages have been scanned I need to get the link from this xpath
"//tr[.//span[contains(.,'Product Data Solutions (ABC MR)')] and .//td[contains(.,'SQLServer')] and .//td[contains(.,'MR')]]//a"
The XPath listed here works. However, it only works with the Selenium driver if the driver is on the page where the link exists. That is why all the pages are in one variable, since I don't know which page the link will be on. The print shows this result:
[<Element a at 0x1c39dea1180>]
How do I get the value from link so I can check whether it is correct?
You need to iterate the list and get the href value of each element:
dom = etree.HTML(All_APP_Pages)
xp = "//tr[.//span[contains(.,'Product Data Solutions (UHC MR)')] and .//td[contains(.,'SQLServer')] and .//td[contains(.,'MR')]]//a"
link = dom.xpath(xp)
hrefs = [l.attrib["href"] for l in link]
print(hrefs)
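If the link text also matters for the correctness check, both it and the attribute can be read off each lxml element; a small extension of the snippet above:

for l in link:
    # .attrib is a dict-like view of the element's attributes;
    # .text_content() gathers the text of the element and its descendants.
    print(l.attrib["href"], l.text_content().strip())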

Web scraping using selenium and beautifulsoup: trouble parsing and selecting a button

I am trying to scrape the website 'https://angel.co/life-sciences'. The site contains more than 8000 entries. From this page I need information like the company name and link, joined date, and followers. Before that, I need to sort the followers column by clicking its header, and then load more entries by clicking the hidden "more" button. That button can be clicked at most 20 times; after that the page doesn't load any more entries. But by sorting first I can at least capture the top-follower entries. I have implemented the click() event, but it's showing an error:
Unable to locate element: {"method":"xpath","selector":"//div[@class="column followers sortable selected"]"} # before the edit this was my problem: I was using the wrong class name
So do I need to give more sleep time here? (I tried that, but got the same error.)
I need to parse all the above information and then visit the individual link of each company to scrape only the content div of that HTML page.
Please suggest a way to do it.
Here is my current code; I have not yet added the HTML-parsing part using BeautifulSoup.
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from time import sleep
from selenium import webdriver
from bs4 import BeautifulSoup
#import urlib2
driver = webdriver.Chrome()
url='https://angel.co/life-sciences'
driver.get(url)
sleep(10)
driver.find_element_by_xpath('//div[@class="column followers sortable"]').click()  # edited
sleep(5)
for i in range(2):
    driver.find_element_by_xpath('//div[@class="more hidden"]').click()
    sleep(8)
sleep(8)
element = driver.find_element_by_id("root").get_attribute('innerHTML')
#driver.execute_script("return document.getElementsByTagName('html')[0].innerHTML")
#WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.CLASS_NAME, 'more hidden')))
'''
results = html.find_elements_by_xpath('//div[@class="name"]')
# wait for the page to load
for result in results:
    startup = result.find_elements_by_xpath('.//a')
    link = startup.get_attribute('href')
    print(link)
'''
page_source = driver.page_source
html = BeautifulSoup(element, 'html.parser')
#for link in html.findAll('a', {'class': 'startup-link'}):
#    print(link)
divs = html.find_all("div", class_=" dts27 frw44 _a _jm")
The above code was working and giving me the HTML source before I added the followers click event.
My final goal is to export five pieces of information for each company (name, link, joined date, number of followers, and the company description, which is obtained after visiting its individual link) into a CSV or XLS file.
Help and comments are appreciated.
This is my first Python and Selenium work, so I am a little confused and need guidance.
Thanks :-)
The click method is intended to emulate a mouse click; it's for use on elements that can be clicked, such as buttons, drop-down lists, check boxes, etc. You have applied this method to a div element, which is not clickable. Elements like div, span, frame, and so on are used to organise HTML and provide decoration, fonts, etc.
To make this code work you will need to identify the elements in the page that are actually clickable.
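As a sketch of that idea (assuming, per the question's later edit, that the column header with class "column followers sortable" is the clickable target), an explicit wait can defer the click until Selenium judges the element clickable:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

# Click the followers column header only once it is actually clickable.
wait = WebDriverWait(driver, 10)
sort_header = wait.until(EC.element_to_be_clickable(
    (By.XPATH, '//div[@class="column followers sortable"]')))
sort_header.click()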
Oops, a typing mistake or some silly mistake on my part: I had the div class name wrong. It is "column followers sortable"; I was using "column followers sortable selected" instead. :-(
Now the above works pretty well, but can anyone guide me through the BeautifulSoup HTML-parsing part?
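A rough sketch of that parsing step, assuming the class names that appear in the question's own snippet ("dts27 frw44 _a _jm" for the entry divs and "startup-link" for the anchors); these look auto-generated and have likely changed since:

import csv
from bs4 import BeautifulSoup

soup = BeautifulSoup(driver.page_source, 'html.parser')
rows = []
for entry in soup.select('div.dts27.frw44._a._jm'):
    link_tag = entry.find('a', {'class': 'startup-link'})
    if link_tag is None:
        continue
    rows.append({'name': link_tag.text.strip(), 'link': link_tag.get('href')})

# Export the collected fields to a CSV file.
with open('startups.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['name', 'link'])
    writer.writeheader()
    writer.writerows(rows)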

Python scrape table

I'm new to programming, so it's very likely my idea of how to do this is totally not the way to do it.
I'm trying to scrape the standings table from this site - http://www.flashscore.com/hockey/finland/liiga/ - and for now it would be fine if I could scrape even one column with team names. I try to find td tags with the class "participant_name col_participant_name col_name", but the code returns empty brackets:
import requests
from bs4 import BeautifulSoup
import lxml
def table(url):
    teams = []
    source = requests.get(url).content
    soup = BeautifulSoup(source, "lxml")
    for td in soup.find_all("td"):
        team = td.find_all("participant_name col_participant_name col_name")
        teams.append(team)
    print(teams)
table("http://www.flashscore.com/hockey/finland/liiga/")
I tried using tr tag to retrieve whole rows, but no success either.
I think the main problem here is that you are trying to scrape dynamically generated content using requests. Note that there's no participant_name col_participant_name col_name text at all in the HTML source of the page, which means it is being generated with JavaScript by the website. For that job you should use something like selenium together with ChromeDriver (or whichever driver you prefer). Below is an example using both of the mentioned tools:
from bs4 import BeautifulSoup
from selenium import webdriver
url = "http://www.flashscore.com/hockey/finland/liiga/"
driver = webdriver.Chrome()
driver.get(url)
source = driver.page_source
soup = BeautifulSoup(source, "lxml")
elements = soup.findAll('td', {'class':"participant_name col_participant_name col_name"})
I think another issue with your code is the way you were trying to access the tags: if you want to match a specific class or any other specific attribute, you can do so by passing a Python dictionary as an argument to the .findAll function.
Now we can use elements to find all the teams' names. Try print(elements[0]) and notice that the team's name is inside an a tag, so we can access it using .a.text, like this:
teams = []
for item in elements:
    team = item.a.text
    print(team)
    teams.append(team)
print(teams)
teams now should be the desired output:
>>> teams
['Assat', 'Hameenlinna', 'IFK Helsinki', 'Ilves', 'Jyvaskyla', 'KalPa', 'Lukko', 'Pelicans', 'SaiPa', 'Tappara', 'TPS Turku', 'Karpat', 'KooKoo', 'Vaasan Sport', 'Jukurit']
teams could also be created using list comprehension:
teams = [item.a.text for item in elements]
Mr Aguiar beat me to it! I will just point out that you can do it all with selenium alone. Of course he is correct in pointing out that this is one of the many sites that loads most of its content dynamically.
You might be interested in observing that I have used an xpath expression. These often make for compact ways of saying what you want. Not too hard to read once you get used to them.
>>> from selenium import webdriver
>>> driver = webdriver.Chrome()
>>> driver.get('http://www.flashscore.com/hockey/finland/liiga/')
>>> items = driver.find_elements_by_xpath('.//span[@class="team_name_span"]/a[text()]')
>>> for item in items:
...     item.text
...
'Assat'
'Hameenlinna'
'IFK Helsinki'
'Ilves'
'Jyvaskyla'
'KalPa'
'Lukko'
'Pelicans'
'SaiPa'
'Tappara'
'TPS Turku'
'Karpat'
'KooKoo'
'Vaasan Sport'
'Jukurit'
You're very close.
Start out being a little less ambitious, and just focus on "participant_name". Take a look at https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all . I think you want something like:
for td in soup.find_all("td", "participant_name"):
Also, you must be seeing different web content than I am. After a wget of your URL, grep doesn't find "participant_name" in the text at all. You'll want to verify that your code is looking for an ID or a class that is actually present in the HTML text.
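Combining that suggestion with the selenium-rendered source from the first answer, a minimal sketch (note that find_all matches a single class even when the tag carries several):

soup = BeautifulSoup(driver.page_source, "lxml")
# "participant_name" is one of the td's classes; bs4 matches it on its own.
teams = [td.a.text for td in soup.find_all("td", "participant_name") if td.a]
print(teams)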
The same can be achieved using a CSS selector, which makes the code more readable and concise:
from selenium import webdriver; driver = webdriver.Chrome()
driver.get('http://www.flashscore.com/hockey/finland/liiga/')
for player_name in driver.find_elements_by_css_selector('.participant_name'):
    print(player_name.text)
driver.quit()

Trying to select element by xpath with Selenium but getting error "Unable to locate element"

I am trying to scrape the list of followings for a given instagram user. This requires using Selenium to navigate to the user's Instagram page and then clicking "following". However, I cannot seem to click the "following" button with Selenium.
driver = webdriver.Chrome()
url = 'https://www.instagram.com/beforeeesunrise/'
driver.get(url)
driver.find_element_by_xpath('//*[#id="react-root"]/section/main/article/header/div[2]/ul/li[3]/a').click()
However, this results in a NoSuchElementException. I copied the XPath from the HTML and tried using the class name, the partial link, and the full link, but cannot seem to get this to work! I've also made sure that the above XPath includes the element with a "click" event listener.
UPDATE: By logging in I was able to get the above information. However(!), now I cannot get the resulting list of "followings". When I click the button with the driver, the HTML does not include the information in the pop-up dialog that you see on Instagram. My goal is to get all of the users that the given username is following.
Make sure you are using the correct XPath.
Use the following link to get well-formed XPaths for accessing web elements, and then try again:
Selenium Command
Hope this helps to solve the problem!
Try a different XPath. I've verified this is unique on the page.
driver.find_element_by_xpath("//a[contains(.,'following')]")
Providing rich facilities for finding elements on a page is not selenium's main goal, so from a web-scraping perspective the better option is to delegate that task to a dedicated tool, like BeautifulSoup. After we find what we're looking for, we can ask selenium to interact with the element.
The bridge between selenium and BeautifulSoup will be the amazing function below, which I found here. It takes a single BeautifulSoup element and generates a unique XPath that we can use in selenium.
import os
import re
from selenium import webdriver
from bs4 import BeautifulSoup as bs
import itertools
def xpath_soup(element):
    """
    Generate xpath of soup element
    :param element: bs4 text or node
    :return: xpath as string
    """
    components = []
    child = element if element.name else element.parent
    for parent in child.parents:
        # type parent: bs4.element.Tag
        previous = itertools.islice(parent.children, 0, parent.contents.index(child))
        xpath_tag = child.name
        xpath_index = sum(1 for i in previous if i.name == xpath_tag) + 1
        components.append(xpath_tag if xpath_index == 1 else '%s[%d]' % (xpath_tag, xpath_index))
        child = parent
    components.reverse()
    return '/%s' % '/'.join(components)
driver = webdriver.Chrome(executable_path=YOUR_CHROMEDRIVER_PATH)
driver.get(url = 'https://www.instagram.com/beforeeesunrise/')
source = driver.page_source
soup = bs(source, 'html.parser')
button = soup.find('button', text=re.compile(r'Follow'))
xpath_for_the_button = xpath_soup(button)
elm = driver.find_element_by_xpath(xpath_for_the_button)
elm.click()
...and it works!
(But you need to write some code to log in with an account.)
