Selenium only finding certain elements in Python

I'm having some trouble finding elements with Selenium in Python. It works fine for every element on all of the other websites I have tested, yet on a game website it can only find certain elements.
Here is the code I'm using:
from selenium import webdriver
import time
driver = webdriver.Chrome("./chromedriver")
driver.get("https://www.jklm.fun")
passSelf = input("Press enter when in game...")
time.sleep(1)
syllable = driver.find_element_by_xpath("/html/body/div[2]/div[2]/div[2]/div[2]/div").text
print(syllable)
Upon running the code, the element /html/body/div[2]/div[2]/div[2]/div[2]/div isn't found. In the image you can see the element it is trying to find:
Element the code is trying to find
However, running the same code with the XPath replaced by something outside of the main game (for example, the room code in the top right), it successfully finds the element:
Output of the code being run on a different element
I've tried using the class name, name, CSS selector and XPath to find the original element, but to no avail. The only things I can think of that might be affecting it are:
The elements are changing periodically (not sure if this affects it)
The elements are in the "Canvas area" and it is somehow blocking them.
I'm not certain whether these things matter, as I'm new to using Selenium; any help is appreciated. The website the game is on is https://www.jklm.fun/ if you want to have a look through the elements.

The element you are trying to access is inside an iframe. Switch to the frame first, like this:
driver.switch_to.frame(driver.find_element_by_xpath("//div[@class='game']/iframe[contains(@src,'jklm.fun')]"))

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time

# driver is the webdriver.Chrome instance created as in the question
driver.get("https://jklm.fun/JXUS")
WebDriverWait(driver, 5).until(EC.visibility_of_element_located((By.XPATH, "//button[@class='styled']"))).click()
time.sleep(10)
driver.switch_to.frame(0)
while True:
    Get_Text = driver.find_element_by_xpath("//div[@class='round']").text
    print(Get_Text)
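On current Selenium 4 releases the find_element_by_* helpers and switch_to_frame no longer exist, so here is a rough equivalent of the same approach with the newer API. This is a sketch only, assuming the locators used above (the game iframe under div.game, the styled button and the round div); it has not been re-verified against the live site:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()  # recent Selenium releases resolve the chromedriver binary automatically
driver.get("https://jklm.fun/JXUS")
# Same wait-and-click as above, written with the Selenium 4 locator style
WebDriverWait(driver, 5).until(
    EC.visibility_of_element_located((By.XPATH, "//button[@class='styled']"))
).click()
# Wait for the game iframe and switch into it in one step instead of sleeping
WebDriverWait(driver, 10).until(
    EC.frame_to_be_available_and_switch_to_it((By.XPATH, "//div[@class='game']/iframe"))
)
while True:
    print(driver.find_element(By.XPATH, "//div[@class='round']").text)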

Related

I'm not sure why this xpath is unable to be located using Python Selenium

I'm trying to scrape every item on a site that's displayed in a grid format with infinite scrolling. However, I'm stuck on even getting the second item using XPath, because it's saying:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//div[@class='el-card advertisement card is-always-shadow'][2]"}
x = 1
while x < 5:
    time.sleep(5)
    # driver and wait (a WebDriverWait) are created earlier in the script
    target = driver.find_element_by_xpath(f"//div[@class='el-card advertisement card is-always-shadow'][{x}]")
    target.click()
    wait.until(EC.visibility_of_element_located((By.ID, "details")))
    print(driver.current_url)
    time.sleep(5)
    driver.back()
    time.sleep(5)
    WebDriverWait(driver, 3).until(EC.title_contains("Just Sold"))
    time.sleep(5)
    x += 1
With my f-string XPath it's able to find the first div with that class and print the URL, but the moment it completes one iteration of the while loop, it fails to find the 2nd div with that class (i.e. when {x} is 2).
I've tried monitoring it with all the time.sleep() calls to see exactly where it was failing, because I thought maybe it was running before the page had loaded and therefore the element couldn't be found, but I gave it ample time to finish loading every page and I still can't find the issue.
This is the structure of the HTML on that page:
There is a class of that name (as well as "el-card__body" which I have also tried using) within each div, one for each item being displayed.
(This is what each div looks like)
Thank you for the help in advance!
(disclaimer: this is for a research paper, I do not plan on selling/benefiting off of the information being collected)
Storing each item in a list using find_elements_by_xpath, then iterating through them did the trick, as suggested by tdelaney.
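A minimal sketch of that list-based approach (driver is the WebDriver instance from the question, and the class-based XPath is the one used above). Rather than iterating over a stored list directly, this re-locates the cards by index on every pass, since elements found before driver.back() typically go stale once the page is re-rendered:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

CARD_XPATH = "//div[@class='el-card advertisement card is-always-shadow']"
for index in range(len(driver.find_elements_by_xpath(CARD_XPATH))):
    card = driver.find_elements_by_xpath(CARD_XPATH)[index]  # fresh lookup each iteration
    card.click()
    WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.ID, "details")))
    print(driver.current_url)
    driver.back()
    WebDriverWait(driver, 10).until(EC.title_contains("Just Sold"))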

Selenium not finding list of sections with classes?

I am attempting to get a list of games on
https://www.xbox.com/en-US/live/gold#gameswithgold
According to Firefox's dev console, it seems that I found the correct class: https://i.imgur.com/M6EpVDg.png
In fact, since there are 3 games, I am supposed to get a list of 3 objects with this code: https://pastebin.com/raw/PEDifvdX (the wait is so Selenium can load the page)
But in fact, Selenium says it does not exist: https://i.imgur.com/DqsIdk9.png
I do not get what I am doing wrong. I even tried CSS selectors like this:
listOfGames = driver.find_element_by_css_selector("section.m-product-placement-item f-size-medium context-game gameDiv")
Still nothing. What am I doing wrong?
You are trying to get three different games, so you need to give a different element path for each one, or you can use a loop like this one:
i = 1
while i < 4:
    link = f"//*[@id='ContentBlockList_11']/div[2]/section[{i}]/a/div/h3"
    listGames = str(driver.find_element_by_xpath(link).text)
    print(listGames)
    i += 1
You can use this kind of loop wherever there is only a slight difference in the XPath, CSS selector or class between elements; it will loop over the web elements one by one and build up the list of games.
Since you are trying to get the name, you need to use .text, which will get you only the name and nothing else.
Another option is a selector that isn't looped over and changed, one that's less dependent on the page structure and a little easier to read:
//a[starts-with(@data-loc-link,'keyLinknowgame')]//h3
Here's sample code:
from selenium import webdriver
from selenium.common.exceptions import StaleElementReferenceException

driver = webdriver.Chrome()
url = "https://www.xbox.com/en-US/live/gold#gameswithgold"
driver.get(url)
driver.implicitly_wait(10)
listOfGames = driver.find_elements_by_xpath("//a[starts-with(@data-loc-link,'keyLinknowgame')]//h3")
for game in listOfGames:
    try:
        print(game.text)
    except StaleElementReferenceException:
        pass
If you're after more than just the title, remove the //h3 selection:
//a[starts-with(@data-loc-link,'keyLinknowgame')]
And add whatever additional XPath you want to narrow things down to the content/elements that you're after.
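For example, to pull a bit more than the heading out of each matched card (a hypothetical sketch: the href attribute and the h3 child are inferred from the selectors above and may need adjusting to the live page):
for card in driver.find_elements_by_xpath("//a[starts-with(@data-loc-link,'keyLinknowgame')]"):
    title = card.find_element_by_xpath(".//h3").text  # game title, as in the answer above
    href = card.get_attribute("href")                 # link target of the card (assumed attribute)
    print(title, href)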

Using Selenium to grab specific information in Python

So I am quite new to using Selenium and thus quite unsure how to do this, or even how to word it, for that matter.
But what I am trying to do is use Selenium to grab the following values and then store them in a list.
Image provided of the inspector window of Firefox, to show what I am trying to grab (Highlighted)
https://i.stack.imgur.com/rHk9R.png
In Selenium, you access elements using functions find_element(s)_by_xxx(), where xxx is for example the tag name, element name or class name (and more). The functions find_element_... return the first element that matches the argument, while find_elements_... return all matching elements.
Selenium has good documentation; in the "Getting Started" section you can find several examples of basic usage.
As to your question, the following code should collect the values you want:
from selenium import webdriver

driver = webdriver.Firefox()  # driver for the browser you use
select_elem = driver.find_element_by_name('ctl00_Content...')  # full name of the element
options = select_elem.find_elements_by_tag_name('option')
values = []
for option in options:
    val = option.get_attribute('value')
    values.append(val)
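If that element is a standard <select>, Selenium's Select helper is a slightly more direct way to read the option values. A sketch under that assumption, reusing the questioner's truncated element-name placeholder:
from selenium.webdriver.support.ui import Select

select_elem = driver.find_element_by_name('ctl00_Content...')  # full name of the element, as above
values = [option.get_attribute('value') for option in Select(select_elem).options]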

Selenium Page Source is Missing Elements

I have a basic Selenium script that makes use of the chromedriver binary. I'm trying to display a page with reCAPTCHA on it, hang until the challenge has been completed, and then store the result in a variable for future use.
The roadblock I'm hitting is that I am unable to find the recaptcha element.
#!/bin/env python2.7
import os
from selenium import webdriver
driverBin = os.path.expanduser("~/Desktop/chromedriver")
driver = webdriver.Chrome(driverBin)
driver.implicitly_wait(5)
driver.get('http://patrickhlauke.github.io/recaptcha/')
Is there anything special needed to be able to see this element?
Also is there a way to grab the token after user solve without refreshing the page?
As it is now, the input with the recaptcha-token id is of type hidden. After the captcha is solved, a second recaptcha-token id is created; this is the value I wish to store in a variable. I was thinking of having a loop that checks the length of the list of elements found with that id and parses once it is greater than 1, but I'm unsure whether the page source updates accordingly.
UPDATE:
With more research, it seems to have to do with the nature of the element, particularly the <input type="hidden"> tag. So, to rephrase my question: how does one extract the value of a hidden element?
The element you are looking for (the input) is in an iframe. You'll need to switch to the iframe before you can locate the element and interact with it.
from selenium import webdriver

driver = webdriver.Chrome()
try:
    driver.implicitly_wait(5)
    driver.get('http://patrickhlauke.github.io/recaptcha/')
    # Find the iframe and switch to it
    iframe_path = '//iframe[@title="recaptcha widget"]'
    iframe = driver.find_element_by_xpath(iframe_path)
    driver.switch_to.frame(iframe)
    # Find the input element
    input_elem = driver.find_element_by_id("recaptcha-token")
    print("Found the input element: ", input_elem)
finally:
    driver.quit()
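To read the hidden input's value rather than just printing the element object, get_attribute works even for type="hidden" fields. A small follow-up for inside the try block above (note the token may be empty until the widget has finished loading or been solved):
token = input_elem.get_attribute("value")  # hidden inputs still expose their value attribute
print("recaptcha-token value:", token)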

How to parse Selenium driver elements?

I'm new to Selenium with Python. I'm trying to scrape some data, but I can't figure out how to parse the output of commands like this:
driver.find_elements_by_css_selector("div.flightbox")
I was trying to google a tutorial, but I've found nothing for Python.
Could you give me a hint?
find_elements_by_css_selector() returns a list of WebElement instances. Each web element has a number of methods and attributes available. For example, to get the inner text of an element, use .text:
for element in driver.find_elements_by_css_selector("div.flightbox"):
    print(element.text)
You can also make a context-specific search to find other elements inside the current element. Taking into account that I know what site you are working with, here is example code to get the departure and arrival times for the first-way flight in a result box:
for result in driver.find_elements_by_css_selector("div.flightbox"):
    departure_time = result.find_element_by_css_selector("div.departure p.p05 strong").text
    arrival_time = result.find_element_by_css_selector("div.arrival p.p05 strong").text
    print([departure_time, arrival_time])
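Besides .text, each WebElement also exposes get_attribute() for data that isn't visible text. For example, assuming each result box contains a link (the a tag and its href are an assumption about the markup, not something verified against the site):
for result in driver.find_elements_by_css_selector("div.flightbox"):
    link = result.find_element_by_css_selector("a")  # first link inside the result box (assumed)
    print(link.get_attribute("href"))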
Make sure you study the Getting Started, Navigating and Locating Elements documentation pages.
