I'm making a search on YouTube, then pressing TAB a few times and END a few times via ActionChains. Then I want to grab the results and, if I find what I want, click on it.
driver.get('https://www.youtube.com')
time.sleep(4)
items = driver.find_element(By.XPATH, "//input[@id='search']")
rand = random.choice(query)
items.send_keys(rand)
items.send_keys(Keys.RETURN)
time.sleep(2)
action = ActionChains(driver)
for i in range(5):
    action.send_keys(Keys.TAB)
    action.perform()
    time.sleep(1)
for i in range(3):
    action.send_keys(Keys.END)
    action.perform()
    time.sleep(4)
items = driver.find_elements(By.XPATH, "//div[@id='primary']//a[@id='thumbnail'][@class='yt-simple-endpoint inline-block style-scope ytd-thumbnail'][contains(@href, 'watch?v=')]")
for i in items:
    if ......... :
        i.click()
Usually, after using driver.find_elements(By.XPATH, ...) I can do:
for i in items:
    if ...... :
        i.click()
But now it's not working since I've been using action = ActionChains(driver).
I'm getting this error:
selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable
I found my answer: the problem is YouTube, not Selenium. i.click() works; the problem was my find_elements(By.XPATH, ...) locator. YouTube keeps invisible elements from the home page in the DOM on the results page, which gave me invisible items.
Once I changed my XPath to:
//ytd-search[@class='style-scope ytd-page-manager']//div[@id='primary']//a[@id='thumbnail'][@class='yt-simple-endpoint inline-block style-scope ytd-thumbnail'][contains(@href, 'watch?v=')]
it worked.
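An alternative sketch of the same fix, without tightening the XPath: since the leftover home-page thumbnails are present in the DOM but not rendered, you can filter the matches by visibility instead (the helper name `visible_watch_links` is mine, not from the original code):

```python
def visible_watch_links(elements):
    """Keep only displayed elements whose href points to a watch page.
    Invisible home-page leftovers fail is_displayed() and are dropped."""
    return [el for el in elements
            if el.is_displayed() and 'watch?v=' in (el.get_attribute('href') or '')]
```

Usage would be something like `visible = visible_watch_links(driver.find_elements(By.XPATH, "//a[@id='thumbnail']"))`, after which every element in `visible` should be safe to click.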
I'm stuck with a Selenium scrape using Jupyter. I'm trying to click the "Download page data" button in the bottom right corner of this page: https://polkadot.subscan.io/account?role=all&page=1
Also, the button's label in the HTML is "Download page data".
I've tried copying the xpath and full xpath from the Google Chrome "inspect" tab, but it doesn't work.
Here's the code I used, but feel free to suggest anything else.
# Initiating the webdriver
s = Service('CHROMEDRIVER LOCATION')
op = webdriver.ChromeOptions()
driver = webdriver.Chrome(service=s, options=op)
link = "https://polkadot.subscan.io/account?role=all&page=1"
driver.get(link)
Ingresar = driver.find_element(By.XPATH, "//*[@id='app']/main/div/div/div[5]/div/div[3]/div[1]/div/div")
Here's the error I get:
ElementClickInterceptedException: Message: element click intercepted: Element <div data-v-24af4670="" class="label align-items-center">...</div> is not clickable at point (125, 721). Other element would receive the click: <div data-v-c018c6b4="" class="banner">...</div>
Either fixing my code or suggesting a new one that works with Jupyter and Selenium would be appreciated.
Try this code:
url = "https://polkadot.subscan.io/account?role=all&page=1"
driver.get(url)
driver.find_element(By.XPATH, ".//*[text()='I accept']").click()
time.sleep(5)
download_btn = driver.find_element(By.XPATH, ".//*[text()='Download page data']")
driver.execute_script("arguments[0].scrollIntoView(true);", download_btn)
download_btn.click()
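The scrollIntoView trick above usually clears the ElementClickInterceptedException. When a banner still sits over the target, another option is to dispatch the click via JavaScript, which goes directly to the element and bypasses hit-testing. A minimal sketch (the helper name `js_click` is mine):

```python
def js_click(driver, element):
    """Scroll the element to the viewport centre, then click it via
    JavaScript. A JS click is dispatched directly on the element, so an
    overlay sitting at the same screen point cannot intercept it."""
    driver.execute_script("arguments[0].scrollIntoView({block: 'center'});", element)
    driver.execute_script("arguments[0].click();", element)
```

Note that a JS click skips Selenium's interactability checks, so use it as a fallback rather than a default.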
I'm working on a webscraping script for a school project. I have to scroll down to a button to click, but I can't make it work.
I tried the following code:
driver.get('https://www.goodreads.com/book/show/28187.The_Lightning_Thief')
button = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '/html/body/div[3]/div/div[1]/div/div/button')))
button.click()
time.sleep(5)
element = driver.find_element(By.CLASS_NAME,"Button Button--transparent Button--small")
actions = ActionChains(driver)
actions.move_to_element(element).perform()
button = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '''//*[@id="ReviewsSection"]/div[5]/div/div[4]/a''')))
button.click()
I received the following error:
Message: no such element: Unable to locate element: {"method":"css selector","selector":".Button Button--transparent Button--small"}
I also tried this, but it didn't work either:
driver.execute_script("arguments[0].scrollIntoView();", element)
When you scroll down on this page it loads in more reviews, so I guess the button at the bottom that I want to click doesn't load in. I also tried scrolling down a few times and then scrolling to the element, but that didn't work either.
Can someone please help out?
EDITED:
I also tried this code:
driver = webdriver.Chrome()
driver.get('https://www.goodreads.com/book/show/13335037-divergent')
button = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '/html/body/div[3]/div/div[1]/div/div/button')))
button.click()
time.sleep(3)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(3)
element = driver.find_element(By.CSS_SELECTOR, "div.Divider.Divider--contents.Divider--largeMargin")
driver.execute_script("arguments[0].scrollIntoView();", element)
button = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '''//*[@id="ReviewsSection"]/div[5]/div/div[4]/a''')))
button.click()
This will scroll down to the bottom once and let the remaining reviews load in, then get the element and scroll again. But it's not scrolling down far enough, so the "See all reviews and ratings" button won't be in view and can't be clicked.
Your selector is invalid since you're trying to pass multiple class names to a search by class name. You need to pass a single class only, or use another locator type, e.g.
element = driver.find_element(By.CSS_SELECTOR, ".Button.Button--transparent.Button--small")
P.S. Note that there are 4 elements with the same set of class names ("Button Button--transparent Button--small"), so searching by class names is not a good option in this case. Locating by the link text is more robust:
element = driver.find_element(By.XPATH, '//a[.="See all reviews and ratings"]')
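Since the reviews on this page lazy-load as you scroll, the scrolling itself can also be wrapped in a retry loop: scroll to the bottom, check whether the target has appeared yet, and repeat. A hedged sketch (the helper name and defaults are mine; tune `max_scrolls`/`pause` per page):

```python
import time

def scroll_until_present(driver, by, locator, max_scrolls=10, pause=1.0):
    """Scroll to the bottom repeatedly until a lazily-loaded element
    appears, then return it; return None if it never shows up."""
    for _ in range(max_scrolls):
        found = driver.find_elements(by, locator)
        if found:
            return found[0]
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)
    return None
```

For example, `scroll_until_present(driver, By.XPATH, '//a[.="See all reviews and ratings"]')` would keep scrolling until the link exists, then hand it back for clicking.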
I am making an automated JKLM Bomb Party bot using Selenium in Python (to prank my friends). When it is given a link to a private JKLM room, it goes there and confirms the username, but then gets stuck on the "Join game" button (I get a TimeoutException).
driver = webdriver.Safari()
driver.get(link)
element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "/html/body/div[2]/div[3]/form/div[2]/input")))
element.submit()
element1 = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "//button[@class='styled joinRound']")))
element1.click()
I have tried Absolute XPath:
/html/body/div[2]/div[3]/div[1]/div[1]/button
Relative XPATH:
//button[@class='styled joinRound']
And Class Name:
styled joinRound
Along with Tag Name and CSS selector.
Any help would be greatly appreciated.
HTML I am trying to access and click on:
<button class="styled joinRound" data-text="joinGame">Join game</button>
I believe you may need to switch to the iframe first in Selenium. I had success with this:
import selenium.webdriver
from selenium.webdriver.common.by import By

def main():
    driver = selenium.webdriver.Firefox()
    driver.get('https://jklm.fun/DKCY')
    driver.switch_to.frame(0)
    xpath = '//div[@class="seating"]/div[@class="join"]/button'
    els = driver.find_elements(By.XPATH, xpath)
    if not els:
        print('failed to find element')
        return
    els[0].click()

if __name__ == '__main__':
    main()
See:
https://www.tutorialspoint.com/how-to-handle-frames-in-selenium-with-python
https://selenium-python.readthedocs.io/navigating.html#moving-between-windows-and-frames
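A common pitfall with frames is forgetting to switch back afterwards, which makes every later lookup fail. One way to avoid that is a small helper that wraps the switch-in/click/switch-back dance (a sketch; the helper name is mine):

```python
def click_in_frame(driver, frame_ref, by, locator):
    """Switch into the iframe, click the target, and always switch back,
    so later lookups run against the top document again.
    frame_ref may be an index, a name, or an <iframe> WebElement."""
    driver.switch_to.frame(frame_ref)
    try:
        driver.find_element(by, locator).click()
    finally:
        driver.switch_to.default_content()
```

For the JKLM page above this would be `click_in_frame(driver, 0, By.XPATH, '//div[@class="join"]/button')`.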
I am not sure what is wrong with this. Am I using EC.element_to_be_clickable() right? I am getting the error message: "selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document". I am pretty sure the XPATH is valid and have even tried with another that designates the same element.
My code:
driver.get("https://browzine.com/libraries/1374/subjects")
parent_url = "https://browzine.com/libraries/1374/subjects"
wait = WebDriverWait(driver, 10)
subjects_avail = driver.find_elements(By.XPATH, "//span[@class='subjects-list-subject-name']")
subjects = 0
for sub in subjects_avail:
    WebDriverWait(driver, 5).until(EC.element_to_be_clickable(
        (By.XPATH, "//span[@class='subjects-list-subject-name']")))
    ActionChains(driver).move_to_element(sub).click(sub).perform()
    subjects += 1
    driver.get(parent_url)
Each time you click a sub element the driver navigates to a new page.
Then you open the main page again with
driver.get(parent_url)
But all the web elements in the subjects_avail list became stale, since the driver left the original main page.
To make your code work, you have to get the subjects_avail list again each time you get back to the main page, and then select the correct subject element.
Something like this:
url = "https://browzine.com/libraries/1374/subjects"
subj_list_xpath = "//span[@class='subjects-list-subject-name']"
driver.get(url)
wait = WebDriverWait(driver, 10)
subjects_avail = driver.find_elements(By.XPATH, subj_list_xpath)
for i in range(len(subjects_avail)):
    WebDriverWait(driver, 5).until(EC.visibility_of_element_located((By.XPATH, subj_list_xpath)))
    subjects_avail = driver.find_elements(By.XPATH, subj_list_xpath)
    ActionChains(driver).move_to_element(subjects_avail[i]).click(subjects_avail[i]).perform()
    driver.get(url)
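An alternative sketch that avoids index bookkeeping entirely: record each subject's text once, then re-locate each item by its text after every reload, so a stale reference is never reused. (The function name and the XPath template parameter are mine, and this assumes subject names are unique on the page.)

```python
def click_each_by_text(driver, url, list_xpath, item_xpath_tmpl):
    """Collect the item texts once, then re-find each item by its text
    after every page reload, so no stale reference is ever reused."""
    driver.get(url)
    names = [el.text for el in driver.find_elements("xpath", list_xpath)]
    for name in names:
        driver.get(url)                                    # fresh page, fresh DOM
        driver.find_element("xpath", item_xpath_tmpl.format(name)).click()
    return names
```

For the page above, the template could be `"//span[@class='subjects-list-subject-name'][.='{}']"` (the bare string "xpath" is what `By.XPATH` resolves to).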
I can't click a link. I see this error:
ElementClickInterceptedException: Message: Element is not clickable at point (116,32) because another element obscures it
My code:
URL = "https://lenta.com/goods-actions/weekly-products/"
driver = webdriver.Firefox()
driver.get(URL)
time.sleep(2)
# ans = driver.find_element_by_link_text("Казань") this link works OK
ans = driver.find_element_by_link_text("Санкт-Петербург") # ERROR
ans.click()
time.sleep(5)
print("go next")
driver.get(URL)
Importantly, the code fails only for "Санкт-Петербург".
There are 2 text strings with a value of "Санкт-Петербург" on this page: one is in the overlay, one is in the page header. The script is trying to click the link in the header (but can't, because the overlay has focus).
from selenium import webdriver
URL = "https://lenta.com/goods-actions/weekly-products/"
driver = webdriver.Chrome()
driver.get(URL)
ans = driver.find_element_by_link_text("Санкт-Петербург")
print(ans.get_attribute("class"))
#=> link current-store__link js-pick-city-toggle
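Since find_element_by_link_text returns only the first match (here, the header link sitting under the overlay), one hedged workaround is to fetch all matches and click the first one that is actually displayed (the helper name is mine, and this assumes the interactable copy reports as displayed):

```python
def click_visible_link(driver, text):
    """Among several links sharing the same text (overlay + header),
    click the first one that is actually displayed.
    "link text" is the strategy string By.LINK_TEXT resolves to."""
    for el in driver.find_elements("link text", text):
        if el.is_displayed():
            el.click()
            return True
    return False
```

So `click_visible_link(driver, "Санкт-Петербург")` would skip the obscured header link and click the visible one in the city-picker overlay.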