I have a Python script written by a friend that worked perfectly just a few weeks ago and is now causing problems.
The script takes a Google Images search link from a CSV file and, using Selenium WebDriver (Chrome), returns a link to the first image on the search page.
What could be the reason for this error? (NameError: name 'WebDriverWait' is not defined)
Thanks!
That error typically means you are missing this import:
from selenium.webdriver.support.ui import WebDriverWait
If you also use expected conditions with the wait, you need both imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
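For reference, a minimal usage sketch of the two together (the URL and locator below are placeholders, not from the original script):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
driver.get("https://www.google.com")  # placeholder URL
# wait up to 10 seconds for the first <img> element to appear, then print its source
first_image = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.TAG_NAME, "img")))
print(first_image.get_attribute("src"))
driver.quit()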
I was having trouble with find_elements by XPath in a Selenium script in Python.
I can't figure out the problem: I want to identify all of the elements that satisfy my XPath criteria.
So I compared the count from find_elements(By.XPATH, '//div') with the result of the same XPath in Chrome's Inspect tool.
The tool reports 300+ matches, whereas my program, which I changed to return a count of results, only reports about 60. Any ideas? It looks to be something on the browser/site side rather than in my code.
Thanks
EDIT: added my code:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.select import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import StaleElementReferenceException
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
import login  # local module (not part of Selenium)
print(time.time())
driver = webdriver.Chrome()  # driver setup (omitted from the original snippet)
driver.get("WEBSITE URL")
# wait until the page body is visible before counting elements
WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//body")))
time.sleep(1)
links = driver.find_elements(By.XPATH, '//body')
link_count = len(links)  # number of matching elements
print(link_count)
This gives a result of around 60; however, when '//body' is put into the Inspect Element tool, it returns 300+.
Apologies, but the targeted webpage is confidential, so I can't post it. Any suggestions I can investigate are appreciated.
Does your web page have a scrollbar? If yes, you may need to scroll so that lazily loaded content is added to the DOM. Also try different conditions for WebDriverWait, for example EC.visibility_of_all_elements_located or EC.presence_of_all_elements_located.
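A minimal sketch of the scrolling idea, assuming the page loads more content as you scroll (the URL and XPath are placeholders):
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get("WEBSITE URL")  # placeholder
# scroll to the bottom repeatedly until the page height stops growing,
# so lazily loaded elements are in the DOM before counting them
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # give the page a moment to load more content
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height
print(len(driver.find_elements(By.XPATH, "//div")))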
I have a few scripts in which I need to import the same few modules every time. To avoid repeating the import statements every time I write a new script, I tried to write a function as follows, so that I can import the function instead. Here's how I wrote the code for that:
def mylibs():
    import selenium
    from selenium import webdriver
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.common.keys import Keys
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.common.action_chains import ActionChains
    return

mylibs()
But when I run the next line of the code, which should use the webdriver imported above to launch an instance of the Chrome browser:
browser = webdriver.Chrome(r"c:\users\nila9\drivers\chromedriver.exe")
I get an error like "webdriver not defined", so the browser fails to launch.
I am not able to understand what I am getting wrong here. I also tried it without the return, but got the same result.
If this works, I can then import the module into any other script whenever I need this block of imports.
Any help is appreciated.
Try
driver = webdriver.Chrome(executable_path=r"C:\Users\nila9\drivers\chromedriver.exe")
or
driver = webdriver.Chrome(r"C:\Users\nila9\drivers\chromedriver.exe")
(note the r prefix; without it, the \U in the path is treated as a unicode escape and raises a SyntaxError).
I am trying to scrape car prices from this website:
To get the car prices, you have to fill out a form, and I am choosing from the dropdowns using Selenium.
I am using this code to choose from dropdowns:
# Imports
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# driver setup (not shown in the original snippet)
driver = webdriver.Chrome(ChromeDriverManager().install())
# wait until the year dropdown is clickable, then select 2015
year_dropdown = Select(WebDriverWait(driver, 5)
                       .until(EC.element_to_be_clickable((By.ID, "j_id_3q-carInfoForm-year-selectOneMenu"))))
year_dropdown.select_by_value('2015')
But after I select the year, the page just keeps loading and never stops.
Any suggestions please?
I resolved the issue by using a real ChromeDriver binary. I was using the webdriver-manager package, and when I removed it and downloaded ChromeDriver directly, the issue was gone.
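For reference, a minimal sketch of that setup with a manually downloaded ChromeDriver (the path below is a placeholder):
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
# point Selenium at the locally downloaded ChromeDriver binary (placeholder path)
service = Service(r"C:\tools\chromedriver.exe")
driver = webdriver.Chrome(service=service)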
I am trying to click on the first post after navigating to any Instagram profile. I looked at the XPath of the first post on multiple Instagram users' profiles and they all seem to be the same. Here is an example from Messi's profile.
Here is my attempt using ChromeDriver with Python to click on Messi's first post. I have already navigated to https://www.instagram.com/leomessi/, which is Messi's profile.
first_post_elem_click = driver.find_element_by_xpath('//*[@id="react-root"]/section/main/div/div[4]/article/div[1]/div/div[1]/div[1]/a/div').click()
However, the first post is not being clicked on. Would greatly appreciate any help.
Please check the below solution:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
Browser = webdriver.Chrome(executable_path=r"chromedriver.exe")
Browser.get("https://www.instagram.com/leomessi/")
# wait until the first post tile is clickable, then click it
WebDriverWait(Browser, 20).until(EC.element_to_be_clickable((By.XPATH, "//body//div[contains(@class,'_2z6nI')]//div//div//div[1]//div[1]//a[1]//div[1]//div[2]"))).click()
Instead of using an absolute XPath, you should use a relative XPath.
You can click on the first post using the command below (an explicit wait has been applied as well):
WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.XPATH, "(//div[@class='Nnq7C weEfm']//img)[1]"))).click()
You need to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
I have just checked this in Firefox: $x('//*[@id="react-root"]/section/main/div/descendant::article/descendant::a[1]'). That should give you what you want, I think.
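If that XPath checks out in the console, a minimal sketch of using it from Selenium with an explicit wait (assuming driver is already on the profile page):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# wait for the first post link under the article, then click it
first_post = WebDriverWait(driver, 20).until(
    EC.element_to_be_clickable((By.XPATH, '//*[@id="react-root"]/section/main/div/descendant::article/descendant::a[1]'))
)
first_post.click()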
I'm having an odd problem trying to web scrape some data from ESPN. I have the below code that sometimes works as intended, but sometimes will get hung up trying to log in. It really seems random, and I'm not sure what's going on.
import time
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
driver = webdriver.Chrome(r'Source')
driver.get("http://games.espn.go.com/ffl/signin")
# wait for the iframes on the sign-in page to be present
WebDriverWait(driver, 1000).until(EC.presence_of_all_elements_located((By.XPATH, "(//iframe)")))
frms = driver.find_elements_by_xpath("(//iframe)")
# the login form lives in the third iframe
driver.switch_to_frame(frms[2])
time.sleep(2)
# wait for the username field, then fill in the credentials and submit
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, '(//input)[1]')))
driver.find_element_by_xpath("(//input)[1]").send_keys("Username")
driver.find_element_by_xpath("(//input)[2]").send_keys("password")
driver.find_element_by_xpath("//button").click()
# return to the main document
driver.switch_to_default_content()
time.sleep(2)
When I run the code as is, it often times out during the second WebDriverWait, despite the page having fully loaded in Chrome. If I take that line out, I then get an error message that reads:
"selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element:"
Have you checked this XPath? It may not be correct; you can try it in the browser console:
$x('yourxpath')
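Since the failure is intermittent, it may also be worth letting the wait handle the frame switch itself, so the script only proceeds once the login iframe is actually available. A minimal sketch of that idea, assuming the login form really is inside the third iframe as in the original code:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# wait until the third iframe is available and switch into it in one step
WebDriverWait(driver, 30).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, "(//iframe)[3]")))
# then wait for the username field inside that frame
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "(//input)[1]")))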