Selenium to scroll down a div - python

How can I scroll down and up inside a div using Selenium? I have looked everywhere on the internet and only found solutions for whole pages.
element = driver.find_element_by_xpath('//*[@id="root"]/div/main/div/div[2]/div[1]/div[1]/div/div[2]/nav/div[4]/div/div[2]/div/span')
driver.execute_script("arguments[0].scrollIntoView();", element)

The ActionChains class is capable of scrolling; moving to an element scrolls it into view.
This import:
from selenium.webdriver.common.action_chains import ActionChains
This function:
def ScrollIntoView(element):
    actions = ActionChains(driver)
    actions.move_to_element(element).perform()
Assuming your element exists and is ready on the page, you can call it like this:
element = driver.find_element_by_xpath('//*[@id="root"]/div/main/div/div[2]/div[1]/div[1]/div/div[2]/nav/div[4]/div/div[2]/div/span')
ScrollIntoView(element)
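Since the question asks about scrolling up and down inside a div (not just to an element), here is a minimal sketch using JavaScript's scrollTop property; the selector "div.scroll-pane" is a placeholder for your actual scrollable container:
# Scroll a scrollable div down by 500 pixels, then back to the top.
scrollable = driver.find_element_by_css_selector("div.scroll-pane")
driver.execute_script("arguments[0].scrollTop += 500;", scrollable)
driver.execute_script("arguments[0].scrollTop = 0;", scrollable)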

How to scroll element with Selenium?

How do I scroll a certain page element (not the page itself, but the element)? There is a list that is updated dynamically, and to get all of its elements I need to scroll it to the end.
You can use ActionChains (the Python counterpart of Java's org.openqa.selenium.interactions.Actions):
from selenium.webdriver.common.action_chains import ActionChains
element = driver.find_element_by_id("my-id")
actions = ActionChains(driver)
actions.move_to_element(element).perform()
OR
driver.execute_script("arguments[0].scrollIntoView();", element)
If you want to scroll to the end of the page, the easiest way is to select a label and then send:
label.send_keys(Keys.PAGE_DOWN)
OR
label.send_keys(Keys.END)
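For instance, a small sketch that sends paging keys to the page body (assuming the <body> element accepts keyboard input):
from selenium.webdriver.common.keys import Keys

# Send paging keys to the <body> element to scroll the page.
body = driver.find_element_by_tag_name("body")
body.send_keys(Keys.PAGE_DOWN)  # scroll down one screen
body.send_keys(Keys.END)        # jump to the bottom of the page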
Hope this helps.

How to scroll element Selenium Python

How to scroll through an element using Selenium?
It's been a while since I've used Selenium, but something like this should scroll until the desired element is in view, using JavaScript.
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("your-site.com")
# Find element by ID or some other method
element = driver.find_element_by_id("id_of_element")
# Run JavaScript to scroll until the element is in view
driver.execute_script("arguments[0].scrollIntoView(true);", element)
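If you want the element centered in the viewport rather than aligned to the top, scrollIntoView also accepts an options object in modern browsers; a hedged variant:
# Center the element in the viewport instead of aligning it to the top.
driver.execute_script("arguments[0].scrollIntoView({block: 'center'});", element)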

Unable to identify what to 'click' for next page using selenium

I am trying to get search results from Yahoo Search using Python, Selenium, and bs4. I have been able to get the links successfully, but I am not able to click the button at the bottom to go to the next page. I tried one approach, but it couldn't identify the button after the second page.
Here is the link:
https://in.search.yahoo.com/search;_ylt=AwrwSY6ratRgKEcA0Bm6HAx.;_ylc=X1MDMjExNDcyMzAwMgRfcgMyBGZyAwRmcjIDc2ItdG9wLXNlYXJjaARncHJpZANidkhMeWFsMlJuLnZFX1ZVRk15LlBBBG5fcnNsdAMwBG5fc3VnZwMxMARvcmlnaW4DaW4uc2VhcmNoLnlhaG9vLmNvbQRwb3MDMARwcXN0cgMEcHFzdHJsAzAEcXN0cmwDMTQEcXVlcnkDc3RhY2slMjBvdmVyZmxvdwR0X3N0bXADMTYyNDUzMzY3OA--?p=stack+overflow&fr=sfp&iscqry=&fr2=sb-top-search
This is what I'm doing to get data from a page, but I need to put it in a loop that changes pages:
page = BeautifulSoup(driver.page_source, 'lxml')
lnks = page.find('div', {'id': 'web'}).find_all('a', href=True)
for i in lnks:
    print(i['href'])
You don't need to scroll down to the bottom; the next button is accessible without scrolling. Suppose you want to navigate 10 pages. The Python script can look like this:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get('Yahoo Search URL')
# Loop over the pages, waiting for the next button
# to be clickable before clicking it.
for i in range(10):
    WebDriverWait(driver, 5).until(
        EC.element_to_be_clickable((By.XPATH, '//a[@class="next"]')))
    driver.find_element_by_xpath('//a[@class="next"]').click()
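To also collect the links on every page, here is a sketch combining the asker's BeautifulSoup extraction with this loop (assuming the results container keeps the id "web" and the next link keeps the class "next" on every page):
from bs4 import BeautifulSoup
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Scrape each results page, then advance via the "next" link.
for page in range(10):
    soup = BeautifulSoup(driver.page_source, 'lxml')
    for a in soup.find('div', {'id': 'web'}).find_all('a', href=True):
        print(a['href'])
    WebDriverWait(driver, 5).until(
        EC.element_to_be_clickable((By.XPATH, '//a[@class="next"]'))).click()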
The next page button is at the bottom of the page, so you first need to scroll to that element and then click it, like this:
import time
from selenium.webdriver.common.action_chains import ActionChains

actions = ActionChains(driver)
next_page_btn = driver.find_element_by_css_selector("a.next")
actions.move_to_element(next_page_btn).perform()
time.sleep(0.5)
next_page_btn.click()
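As a more robust alternative to the fixed sleep, you could wait explicitly for the button to become clickable (a sketch, assuming the same a.next selector):
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Wait until the "next" link is clickable instead of sleeping a fixed time.
next_page_btn = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "a.next")))
next_page_btn.click()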

Cannot click on an xpath selected object Selenium (Python)

I am trying to click an object that I select with XPath, but there seems to be a problem locating the element. I am trying to click accept on the page's "Terms of Use" button. The code I have written is:
driver.get(link)
accept_button = driver.find_element_by_xpath('//*[@id="terms-ok"]')
accept_button.click()
prov = driver.find_element_by_id("province-region")
prov.click()
Here is the HTML code I have:
And I am getting a "NoSuchElementException". My goal is to click the "Kabul Ediyorum" ("I accept" in Turkish) button at the bottom of the HTML. I have started to think there are restrictions on what we can do on that page. Any ideas?
Not really sure what the issue might be.
But you could try the following:
Try to locate the element by its visible text:
accept_button = driver.find_element_by_xpath("//*[text()='Kabul Ediyorum']").click()
Try with ActionChains
For that you need to import ActionChains
from selenium.webdriver.common.action_chains import ActionChains
accept_button = driver.find_element_by_xpath("//*[text()='Kabul Ediyorum']")
actions = ActionChains(driver)
actions.click(on_element=accept_button).perform()
Also, make sure you have an implicit wait set:
# implicit wait
driver.implicitly_wait(10)
Or an explicit wait (this needs a few extra imports):
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//*[text()='Kabul Ediyorum']"))).click()
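If the button is present but an overlay intercepts the click, a JavaScript click can serve as a last resort (a sketch reusing the same text locator):
# Last resort: click via JavaScript if a normal click is intercepted.
accept_button = driver.find_element_by_xpath("//*[text()='Kabul Ediyorum']")
driver.execute_script("arguments[0].click();", accept_button)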
Hope this helped!

Web Scraping using BeautifulSoup, click on element for hidden tab

I have an issue while trying to capture specific information on a page.
Website: https://www.target.com/p/prairie-farms-vitamin-d-milk-1gal/-/A-47103206#lnk=sametab
On this page, there are hidden tabs named 'Label info', 'Shipping & Returns', and 'Q&A' next to the 'Details' tab under 'About this item' that I want to scrape.
I found that I need to click on these elements before scraping with BeautifulSoup.
Here is my code; let's say I've already got a pid for each link.
url = 'https://www.target.com' + str(pid)
driver.get(url)
driver.implicitly_wait(5)
soup = bs(driver.page_source, "html.parser")
wait = WebDriverWait(driver, 3)
button = soup.find_all('li', attrs={'class': "TabHeader__StyledLI-sc-25s16a-0 jMvtGI"})
index = button.index('tab-ShippingReturns')
print('The index of ShippingReturns is:', index)
if search(button, 'tab-ShippingReturns'):
    button_shipping_returns = button[index].find_element_by_id("tab-ShippingReturns")
    button_shipping_returns.click()
    time.sleep(3)
My code returns
ResultSet object has no attribute 'find_element_by_id'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()?
Can anyone kindly guide me how to resolve this?
It seems the buttons you're trying to interact with have dynamically generated class names with unique values appended at the end, so I would suggest an XPath selector using contains(), like:
driver.find_elements_by_xpath("//a[contains(@class,'TabHeader__Styled')]")
So your code should look like this:
elements = driver.find_elements_by_xpath("//a[contains(@class,'TabHeader__Styled')]")
for el in elements:
    el.click()
Clicking works fine when the button is already in view on the page. If an element is not visible and you need to scroll down to it, you can use ActionChains:
from selenium.webdriver import ActionChains
ActionChains(driver).move_to_element(el).perform()
So the code looks like this; just plug in your own element parsing:
from selenium import webdriver
from selenium.webdriver import ActionChains

driver = webdriver.Chrome()
url = 'https://www.target.com/p/prairie-farms-vitamin-d-milk-1gal/-/A-47103206#lnk=sametab'
driver.get(url)
driver.implicitly_wait(5)
elements = driver.find_elements_by_xpath("//a[contains(@class,'TabHeader__Styled')]")
for el in elements:
    ActionChains(driver).move_to_element(el).perform()
    el.click()
driver.quit()
The following
button = soup.find_all('li', attrs={'class': "TabHeader__StyledLI-sc-25s16a-0 jMvtGI"})
will return a BeautifulSoup ResultSet of Tag objects.
You then try to call a selenium method on that list with:
button_shipping_returns = button[index].find_element_by_id("tab-ShippingReturns")
Instead, you need to call it on a Selenium WebDriver element collection (note that matching both classes requires chaining them in the CSS selector):
driver.find_elements_by_css_selector('.TabHeader__StyledLI-sc-25s16a-0.jMvtGI')[index].find_element_by_id("tab-ShippingReturns")
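Putting this together, a minimal corrected sketch done entirely with Selenium (the class names come from the question and may be regenerated by the site; the id "tab-ShippingReturns" is assumed stable):
import time

# Find the tab header items with Selenium, then click the
# Shipping & Returns tab inside the first matching item.
tabs = driver.find_elements_by_css_selector('.TabHeader__StyledLI-sc-25s16a-0.jMvtGI')
for tab in tabs:
    links = tab.find_elements_by_id("tab-ShippingReturns")
    if links:
        links[0].click()
        time.sleep(3)  # give the tab content a moment to load
        break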
