How do I solve this error with Selenium WebDriver - Python

import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
driver.get(updatedCarLinks[0]) # updatedCarLinks holds the links I need to visit; this will go in a for loop later
time.sleep(2)
h1 = driver.find_element_by_xpath('//h1')
print(h1)
I keep getting the following error. I have looked at it in other posts and can't seem to figure out why it is happening.
selenium.common.exceptions.WebDriverException: Message: target frame detached <--- Error

Is your ChromeDriver compatible with your Chrome version? Check that the versions match, or that your Chrome version is supported by the driver.
More information on the exact versions in use would be helpful.
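One way to check is to compare major versions: Chrome and ChromeDriver are compatible when their leading version numbers agree. Selenium reports both through driver.capabilities (for Chrome, "browserVersion" and capabilities["chrome"]["chromedriverVersion"]). A minimal sketch of the comparison, with the live-session part left as comments (the helper names are made up for illustration):

```python
def major_version(version: str) -> int:
    """Return the leading major number of a version string like '100.0.4896.88'."""
    return int(version.split(".")[0])

def versions_match(browser_version: str, driver_version: str) -> bool:
    """Chrome and ChromeDriver are compatible when their major versions agree."""
    return major_version(browser_version) == major_version(driver_version)

# With a live session you could check (not run here):
# caps = webdriver.Chrome().capabilities
# versions_match(caps["browserVersion"],
#                caps["chrome"]["chromedriverVersion"].split(" ")[0])

print(versions_match("100.0.4896.88", "100.0.4896.60"))  # same major version
print(versions_match("101.0.4951.41", "100.0.4896.60"))  # mismatched major version
```

Note that with Selenium 4.6+ the bundled Selenium Manager downloads a matching driver automatically, which avoids this class of mismatch.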

Related

selenium find_element_by_css_selector for the long class name

I have tried this multiple times with other code snippets from the tutorial, which worked fine. However, when I changed just the URL and the corresponding class, it gave me an error saying
selenium.common.exceptions.InvalidSelectorException: Message: invalid selector: An invalid or illegal selector was specified
(Session info: chrome=100.0.4896.88)
Everything worked when I used the tutorial code.
Here is my code (I have solved a few chrome driver problems from the internet)
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
options = webdriver.ChromeOptions()
options.add_experimental_option("excludeSwitches", ["enable-logging"])
driver = webdriver.Chrome(options=options)
driver.get("https://raritysniper.com/nft-drops-calendar")
time.sleep(1)
link = driver.find_element_by_css_selector(".w-full.h-full.align-middle.object-cover.dark:brightness-80.dark:contrast-103.svelte-f3nlpp").get_attribute("alt")
print(link)
I am trying to get the attributes of each project and write them to a CSV.
(Please refer to the screenshot.)
Screenshot of the HTML that I am trying to extract
It would be wonderful if anyone could point out the problem with my code.
Thank you!
The CSS_SELECTOR that you are using,
.w-full.h-full.align-middle.object-cover.dark:brightness-80.dark:contrast-103.svelte-f3nlpp
is invalid: an unescaped : in a CSS selector starts a pseudo-class, so the Tailwind-style class names such as dark:brightness-80 make the whole selector illegal, which is why you get InvalidSelectorException.
Instead, you should use this CSS_SELECTOR:
div.w-full.h-full.align-middle img:not(.placeholder)
In code:
driver.maximize_window()
wait = WebDriverWait(driver, 30)
driver.get("https://raritysniper.com/nft-drops-calendar")
#time.sleep(1)
first_element = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "div.w-full.h-full.align-middle img:not(.placeholder)")))
print(first_element.get_attribute('alt'))
print(first_element.get_attribute('src'))
Import:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
Output:
NEON PLEXUS
https://media.raritysniper.com/featured/neon-plexus_1648840196045_3.webp
Process finished with exit code 0
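Alternatively, if you do want to keep the Tailwind utility classes in the selector, the colons can be escaped with a backslash so they are treated as part of the class name rather than as a pseudo-class. A small sketch (the helper is hypothetical; the class list is taken from the question):

```python
def escape_tailwind_class(cls: str) -> str:
    # An unescaped ':' in a CSS selector starts a pseudo-class (e.g. :hover),
    # which is why '.dark:brightness-80' raises InvalidSelectorException.
    # Escaping the colon with a backslash makes it part of the class name.
    return cls.replace(":", "\\:")

classes = ["w-full", "h-full", "dark:brightness-80", "dark:contrast-103"]
selector = "." + ".".join(escape_tailwind_class(c) for c in classes)
print(selector)  # .w-full.h-full.dark\:brightness-80.dark\:contrast-103
```

The escaped selector could then be passed to find_element as usual; the explicit-wait version above is still the more robust choice, since it does not depend on auto-generated utility classes.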

Unable to fill the form using Selenium: AttributeError: 'WebDriver' object has no attribute 'get_element_by_xpath'

I am trying to fill the Uber form using Selenium.
It looks like this:
I tried to use XPath to fill the form, here is the code:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
options.add_argument("user-data-dir=C:\\Users\\Zedas\\AppData\\Local\\Google\\Chrome\\User Data")
w = webdriver.Chrome(executable_path='chromedriver.exe', chrome_options=options)
w.get("https://m.uber.com/looking")
time.sleep(6)
w.get_element_by_xpath(f'"//*[@id="booking-experience-container"]/div/div[3]/div[2]/div/input"').send_keys("test")
But, this is the error:
w.get_element_by_xpath(f'"//*[@id="booking-experience-container"]/div/div[3]/div[2]/div/input"').send_keys("test")
AttributeError: 'WebDriver' object has no attribute 'get_element_by_xpath'
How can this error be fixed?
There's no method known as get_element_by_xpath in Selenium-Python bindings.
Please use
driver.find_element_by_xpath("xpath here")
Also, since find_element relies only on the implicit wait to locate and interact with web elements, it has often been observed to be an inconsistent way to wait for elements in Selenium automation.
Please use Explicit waits :
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "xpath here"))).send_keys("test")
You will need these imports as well.
Imports :
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
You can read more about waits from here
w.get_element_by_xpath should be w.find_element_by_xpath
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import time
options = Options()
options.add_argument("user-data-dir=C:\\Users\\Zedas\\AppData\\Local\\Google\\Chrome\\User Data")
w = webdriver.Chrome(executable_path='chromedriver.exe', chrome_options=options)
w.get("https://m.uber.com/looking")
time.sleep(6)
w.find_element_by_xpath('//*[@id="booking-experience-container"]/div/div[3]/div[2]/div/input').send_keys("test")
Selenium removed find_element_by_xpath() in version 4.3.0.
Now, it should be
driver.find_element("xpath", '//*[@id="mG61Hd"]/div[1]/input')
https://github.com/SeleniumHQ/selenium/blob/a4995e2c096239b42c373f26498a6c9bb4f2b3e7/py/CHANGES
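The removed find_element_by_* helpers were thin wrappers around find_element(by, value), so porting old code is a mechanical rename. A sketch of the translation (the table and function are hypothetical; the string values mirror the selenium.webdriver.common.by.By constants):

```python
# Hypothetical translation table from the removed find_element_by_* helpers
# to the (by, value) pair that Selenium 4's find_element(by, value) expects.
LEGACY_TO_BY = {
    "find_element_by_xpath": "xpath",                # By.XPATH
    "find_element_by_css_selector": "css selector",  # By.CSS_SELECTOR
    "find_element_by_class_name": "class name",      # By.CLASS_NAME
    "find_element_by_id": "id",                      # By.ID
}

def modernize(legacy_name: str, value: str):
    """Return the (by, value) arguments for driver.find_element()."""
    return LEGACY_TO_BY[legacy_name], value

print(modernize("find_element_by_xpath", '//*[@id="mG61Hd"]/div[1]/input'))
# → ('xpath', '//*[@id="mG61Hd"]/div[1]/input')
```

In real code you would simply write driver.find_element(By.XPATH, ...) directly; the table just makes the correspondence explicit.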
This error message...
w.get_element_by_xpath(f'"//*[@id="booking-experience-container"]/div/div[3]/div[2]/div/input"').send_keys("test")
AttributeError: 'WebDriver' object has no attribute 'get_element_by_xpath'
implies that the WebDriver object has no attribute named get_element_by_xpath().
You have to take care of a couple of things:
If you look at the various WebDriver strategies to locate elements, there is no method get_element_by_xpath(); you probably want find_element_by_xpath() instead.
While invoking send_keys() on an element, an implicit wait is not effective; you need to replace it with an explicit wait, i.e. WebDriverWait.
Solution
So your effective code block will be:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("user-data-dir=C:\\Users\\Zedas\\AppData\\Local\\Google\\Chrome\\User Data")
w = webdriver.Chrome(executable_path='chromedriver.exe', chrome_options=options)
w.get("https://m.uber.com/looking")
WebDriverWait(w, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="booking-experience-container"]/div/div[3]/div[2]/div/input'))).send_keys("test")

Python 2.7 Selenium No Such Element on Website

I'm trying to do some webscraping from a betting website:
As part of the process, I have to click on the different buttons under the "Favourites" section on the left side to select different competitions.
Let's take the ENG Premier League button as an example. I identified the button in the DOM (screenshot omitted).
The XPath is //*[@id="SportMenuF"]/div[3] and the ID is 91.
My code for clicking on the button is as follows:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
chrome_path = "C:\Python27\Scripts\chromedriver_win32\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
driver.get("URL Removed")
content = driver.find_element_by_xpath('//*[@id="SportMenuF"]/div[3]')
content.click()
Unfortunately, I always get this error message when I run the script:
"no such element: Unable to locate element:
{"method":"xpath","selector":"//*[@id="SportMenuF"]/div[3]"}"
I have tried different identifiers such as CSS selector, ID and, as shown in the example above, the XPath. I tried using waits and explicit conditions, too. None of this has worked.
I also attempted scraping some values from the website without any success:
from selenium import webdriver
from selenium.webdriver.common.by import By
chrome_path = "C:\Python27\Scripts\chromedriver_win32\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
driver.get("URL removed")
content = driver.find_elements_by_class_name('price-val')
for entry in content:
print entry.text
Same problem, nothing shows up.
The website embeds an iframe from a different website. Could this be the cause of my problems? I tried scraping directly from the iframe URL too, which didn't work either.
I would appreciate any suggestions.
Sometimes elements are hiding behind an iframe, or they haven't loaded yet.
For the iframe check, try:
driver.switch_to.frame(0)
For the wait check, try:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
element = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.XPATH, '-put the x-path here-')))
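The two checks can be combined into one search order: look in the default content first, then step through each iframe. Below is a runnable sketch of that flow using a stub in place of a real WebDriver (with Selenium the calls would be driver.switch_to.frame(i) and driver.switch_to.default_content(); the stub and helper names are made up for illustration):

```python
class StubDriver:
    """Stand-in for a WebDriver: each frame is a dict mapping xpath -> element."""
    def __init__(self, default_content, frames):
        self.default_content = default_content
        self.frames = frames
        self.current = None  # None means default content

    def switch_to_frame(self, index):       # real API: driver.switch_to.frame(index)
        self.current = index

    def switch_to_default_content(self):    # real API: driver.switch_to.default_content()
        self.current = None

    def find_elements(self, xpath):
        content = self.default_content if self.current is None else self.frames[self.current]
        return [content[xpath]] if xpath in content else []

def find_in_any_frame(driver, xpath, n_frames):
    """Search the default content first, then each iframe in turn."""
    driver.switch_to_default_content()
    found = driver.find_elements(xpath)
    if found:
        return found[0]
    for i in range(n_frames):
        driver.switch_to_frame(i)
        found = driver.find_elements(xpath)
        if found:
            return found[0]  # stay switched into the frame that contains it
        driver.switch_to_default_content()
    return None

driver = StubDriver(default_content={},
                    frames=[{}, {'//*[@id="SportMenuF"]/div[3]': "ENG Premier League"}])
print(find_in_any_frame(driver, '//*[@id="SportMenuF"]/div[3]', n_frames=2))
```

With a real driver you would also wrap each find_elements call in a WebDriverWait, since the frame content may load later than the outer page.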

Why is this code showing a NoSuchElementException error? I checked the Chrome DOM and my XPath is able to find the target tag

Why is this code showing a NoSuchElementException error? I checked the Chrome DOM and my XPath is able to find the target tag.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException
class Firefox():
def test(self):
base_url='https://oakliquorcabinet.com/'
driver = webdriver.Chrome(executable_path=r'C:\Users\Vicky\Downloads\chromedriver')
driver.get(base_url)
search=driver.find_element(By.XPATH,'//div[@class="box-footer"]/button[2]')
search.click()
ff=Firefox()
ff.test()
By default, Selenium waits for the DOM to load and then tries to find the element, but the confirmation pop-up becomes visible only some time after the main page has loaded.
Use an explicit wait to fix this issue.
add these imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions
change line in script:
search = WebDriverWait(driver, 10).until(expected_conditions.presence_of_element_located((By.XPATH, '//div[@class="box-footer"]/button[2]')))

Python expected_conditions don't always work

I'm having an odd problem trying to web scrape some data from ESPN. I have the below code that sometimes works as intended, but sometimes will get hung up trying to log in. It really seems random, and I'm not sure what's going on.
import time
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
driver = webdriver.Chrome(r'Source')
driver.get("http://games.espn.go.com/ffl/signin")
WebDriverWait(driver,1000).until(EC.presence_of_all_elements_located((By.XPATH,"(//iframe)")))
frms = driver.find_elements_by_xpath("(//iframe)")
driver.switch_to_frame(frms[2])
time.sleep(2)
WebDriverWait(driver,10).until(EC.presence_of_element_located((By.XPATH,'(//input)[1]')))
driver.find_element_by_xpath("(//input)[1]").send_keys("Username")
driver.find_element_by_xpath("(//input)[2]").send_keys("password")
driver.find_element_by_xpath("//button").click()
driver.switch_to_default_content()
time.sleep(2)
When I run the code as is, it often times out during the second WebDriverWait, despite the page having fully loaded in Chrome. If I take that line out, I then get an error message that reads:
"selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element:"
Have you checked this XPath? It appears to be incorrect; you can test it in the browser console:
$x('yourxpath')
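Besides $x in the browser console, you can sanity-check an XPath offline against a saved snippet of the page with Python's standard library. The markup below is a made-up stand-in, since the real page is not shown, and note that xml.etree.ElementTree supports only a subset of XPath (attribute tests and position predicates work; functions like contains() do not):

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for the page's markup (the real HTML is not available here).
page = """
<html><body>
  <div id="SportMenuF">
    <div>FRA Ligue 1</div>
    <div>GER Bundesliga</div>
    <div>ENG Premier League</div>
  </div>
</body></html>
"""

root = ET.fromstring(page)
# Same shape as the XPath from the earlier question: //*[@id="SportMenuF"]/div[3]
hits = root.findall(".//div[@id='SportMenuF']/div[3]")
print(hits[0].text)  # → ENG Premier League
```

If the XPath matches nothing against a saved copy of the page, the problem is the selector; if it matches offline but not in Selenium, suspect an iframe or late-loading content instead.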