Determining the end of a web page - python

I am trying to automate scrolling down a web page written in React Native and taking a screenshot of the entire thing. I've solved that by sending PAGE_DOWN via send_keys.
I am trying to find the end of the page so I know when to stop taking screenshots. My problem is that the page is dynamic in length depending on the information displayed. It has collapsible sections that are expanded all at once. To make it more fun, the dev team decided not to add ids or any unique identifiers because "it's written in react".
I've tried the following:
Looking for an element at the bottom of the page: the element is 'visible' regardless of where it is on the page. I've tried different elements with the same result.
Determining the clientHeight, offsetHeight, and scrollHeight via JavaScript: the numbers they return don't change no matter how many times the page has been paged down, so either I'm not using them right or they won't work. I'm at a loss right now.
I'm running python with selenium on a Chrome browser (hoping that the solution can be translated to IE).

You can take the Y coordinate of the vertical scroll bar element each time you perform the scroll down.
While it keeps changing, you have not yet reached the page bottom.
Once the bottom is reached, the previous value will be equal to the current Y coordinate of that element.
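A minimal sketch of that check, assuming the window itself is what scrolls, so window.pageYOffset can stand in for the scroll bar's Y coordinate (the wait and the file names are placeholders):
import time
from selenium.webdriver.common.keys import Keys

body = driver.find_element_by_tag_name('body')
last_offset = -1
page = 0
while True:
    offset = driver.execute_script("return window.pageYOffset;")
    if offset == last_offset:
        break  # position stopped changing: the bottom was reached
    driver.save_screenshot("page_%d.png" % page)
    page += 1
    last_offset = offset
    body.send_keys(Keys.PAGE_DOWN)
    time.sleep(0.5)  # give dynamic content a moment to render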

One way I know of in the Selenium Python bindings:
driver.execute_script("var scrollingElement = (document.scrollingElement || document.body);scrollingElement.scrollTop = scrollingElement.scrollHeight;")
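If the page lazy-loads as it scrolls, a common variant is to repeat that jump until scrollHeight stops growing; a sketch, with an arbitrary one-second wait:
import time

get_height = "var e = (document.scrollingElement || document.body); return e.scrollHeight;"
scroll_down = "var e = (document.scrollingElement || document.body); e.scrollTop = e.scrollHeight;"
last_height = driver.execute_script(get_height)
while True:
    driver.execute_script(scroll_down)
    time.sleep(1)  # wait for any lazy-loaded content to arrive
    new_height = driver.execute_script(get_height)
    if new_height == last_height:
        break  # nothing new loaded: this is the true bottom
    last_height = new_height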

Related

Selenium Grid + Python: clicking on a WebElement results very slow

I have built a bot that plays online roulette with Selenium (Selenium Grid) and Python.
When it comes to clicking on the number I want to bet on, it is extremely slow and does not manage to place the full stake, within the time allowed for betting, across all the numbers that make up my bet.
It seems the slowness may come from the animation the button plays after I click on it.
The code is very simple:
element = WebDriverWait(driver, timeout).until(EC.presence_of_element_located((By.XPATH, path)))  # retrieving the WebElement is fast, no problem here
element.click()  # this is slow
Here you can find:
how it looks now > https://drive.google.com/file/d/1dEuWTtrXHzRfXXVHhUbdNR8XtgMeWdU-/view?usp=sharing
my target > https://drive.google.com/file/d/1NUbr6rpOGjdMuClD5hby91jPVumqwLC5/view?usp=sharing (here I use the pynput library, which is not my target because I want the script to run on a server using Selenium Grid).
Can anyone help?
I'm not actually sure whether it's the same problem or not. In my case, after clicking the submit button on the login form and being redirected to the home page, my script did nothing for around 4 minutes.
I noticed that WebElement.click() only returns once the page stops loading, but some trackers on the site were preventing the page from ever finishing loading, so I added the uBlock extension and got rid of the problem.
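For reference, a sketch of setting that up from the driver side; the .crx path is a hypothetical placeholder, and the eager page-load strategy (Selenium 4+) is an alternative way to stop click() from waiting on stray trackers:
from selenium import webdriver

options = webdriver.ChromeOptions()
# Option 1: load a blocker extension; the path below is a placeholder
options.add_extension("/path/to/ublock.crx")
# Option 2 (Selenium 4+): return once the DOM is interactive instead of
# waiting for every tracker request to finish
options.page_load_strategy = "eager"
driver = webdriver.Chrome(options=options)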

Find all elements on a web page using Selenium and Python

I am trying to go through a webpage with Selenium and create a set of all elements with certain class names, so I have been using:
elements = set(driver.find_elements_by_class_name('class name'))
However, in some cases there are thousands of elements on the page (if I scroll down), and I've noticed that this code only finds the first 18-20 elements (only about 14-16 are visible to me at once). Do I need to scroll, or am I doing something else wrong? Is there any way to instantly get all of the elements I want from the HTML into a list without having to visually see them on the screen?
It depends on your webpage. Just look at the HTML source code (or the network log) before you scroll down. If only the 18-20 elements are there, the page lazy-loads the next items (e.g. Twitter or Instagram): the server only renders the next items once you reach a certain point on the page. Otherwise all thousand items would be loaded at once, which would increase the page size, loading time, and server load.
In that case, you have to scroll down to the end and then get the source code to parse all items.
You could use more advanced methods, like treating each loaded chunk as a page in a pagination scheme (not "go to the next page" but "scroll down"). But I guess you're a beginner, so I would start with simply scrolling down to the end (scroll, wait, scroll, ... until there are no new elements, as in the sketch below), then fetching the HTML and parsing it.
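A sketch of that loop, assuming the element count stops growing once everything has loaded (the two-second wait is a crude placeholder to tune for the site):
import time

seen = 0
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # crude wait; tune for the site's load speed
    elements = driver.find_elements_by_class_name('class name')
    if len(elements) == seen:
        break  # no new elements appeared: stop scrolling
    seen = len(elements)
elements = set(elements)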

Locating Lazy Load Elements While Scrolling in PhantomJS in Python

I'm using Python and WebDriver to scrape data from a page that dynamically loads content as the user scrolls down (lazy loading). I have a total of 30 data elements, but only 15 are displayed without scrolling down.
I am locating my elements and getting their values in the following way, after scrolling to the bottom of the page multiple times until every element has loaded:
# Get All Data Items
all_data = self.driver.find_elements_by_css_selector('div[some-attribute="some-attribute-value"]')
# Iterate Through Each Item, Get Value
data_value_list = []
for d in all_data:
    # Get Value for Each Data item
    data_value = d.find_element_by_css_selector('div[class="target-class"]').get_attribute('target-attribute')
    # Save Data Value to List
    data_value_list.append(data_value)
When I execute the above code using ChromeDriver with the browser window open on my screen, all 30 data values populate my data_value_list. When I execute it using ChromeDriver with the window minimized, data_value_list is only populated with the initial 15 data values.
The same issue occurs with PhantomJS, limiting data_value_list to only the initially visible data values on the page.
Is there a way to load these types of elements while having the browser minimized and, ideally, while utilizing PhantomJS?
NOTE: I'm scrolling down with an action chain, calling .send_keys(Keys.PAGE_DOWN).perform() a calculated number of times (sketched below).
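For context, that paging loop looks roughly like this; num_pages is a placeholder for however the number of scrolls is calculated:
import time
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

num_pages = 10  # hypothetical value
for _ in range(num_pages):
    ActionChains(self.driver).send_keys(Keys.PAGE_DOWN).perform()
    time.sleep(0.3)  # brief pause so lazy content can load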
I had the exact same issue. The solution I found was to execute JavaScript code in the virtual browser to force the element to scroll to the bottom.
Before putting the JavaScript command into Selenium, I recommend opening your page in Firefox and inspecting the elements to find the scrollable container. The element should encompass all of the dynamic rows, but it should not include the scrollbar. Then, after selecting the element with JavaScript, you can scroll it to the bottom by setting its scrollTop attribute to its scrollHeight attribute.
Next, test scrolling the content in the browser. The easiest way to select the element is by ID, if it has one, but other ways will work too. To select an element with the id "scrollableContent" and scroll it to the bottom, execute the following code in your browser's JavaScript console:
e = document.getElementById('scrollableContent'); e.scrollTop = e.scrollHeight;
Of course, this only scrolls to the bottom of the currently loaded content; you will need to repeat it after new content loads if you have to scroll multiple times. Also, I have no general way of finding the exact element; for me it was trial and error.
This is some code I tried out. However, I feel it can be improved, and it should be for applications intended to test code or scrape unpredictable pages. I couldn't figure out how to explicitly wait until more elements were loaded (perhaps: count the elements, scroll to the bottom, then wait for element count + 1 to show up, and exit the loop if it doesn't; a sketch of that idea follows below), so I hardcoded 5 scroll events and used time.sleep. time.sleep is ugly and can lead to issues, partly because it depends on the speed of your machine.
import time

def scrollElementToBottom(driver, element_id):
    time.sleep(.2)
    for i in range(5):
        driver.execute_script("e = document.getElementById('" + element_id + "'); e.scrollTop = e.scrollHeight;")
        time.sleep(.2)
The caveat is that this solution worked with the Firefox driver, but I see no reason why it shouldn't work with your setup.
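A sketch of the count-based explicit wait described above, replacing the hardcoded 5 iterations and time.sleep; the item selector is a placeholder:
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait

def scroll_until_loaded(driver, element_id, item_selector, timeout=5):
    while True:
        count = len(driver.find_elements_by_css_selector(item_selector))
        driver.execute_script(
            "var e = document.getElementById(arguments[0]);"
            "e.scrollTop = e.scrollHeight;", element_id)
        try:
            # Wait until at least one new item appears after the scroll
            WebDriverWait(driver, timeout).until(
                lambda d: len(d.find_elements_by_css_selector(item_selector)) > count)
        except TimeoutException:
            break  # no new items showed up: everything is loaded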

Extracting info from dynamic page element in Python without "clicking" to make visible?

For the life of me I can't think of a better title...
I have a Python WebDriver-based scraper that goes to Google and enters a local search such as chiropractors+new york+ny which, after clicking on More chiropractors+New York+NY, ends up on a results page like this.
The goal of the scraper is to grab the phone number and full address (including suite # etc.) of each of the 20 results on such a page. To do so, I need to have WebDriver click each of the 20 entries to bring up an overlay over the Google Map:
This is mighty slow. Were it not for having to trigger each of these overlays, I could do everything up to that point with the much faster lxml, by going straight to the final URL of the results page and extracting via XPath (a sketch of that idea follows the question). But I appear to be stuck: I can't get the data from the overlay without first clicking the link that brings it up.
Is there a way to get the data out of this page element without having to click the associated links?
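For what the lxml route would look like, here is a minimal sketch, assuming (contrary to what the overlays actually do) the data sat in the static HTML; the URL and the XPath expressions are hypothetical placeholders:
import requests
from lxml import html

# Hypothetical results-page URL; the real one comes from the search above
page = requests.get("https://www.google.com/search?q=chiropractors+new+york+ny")
tree = html.fromstring(page.content)
# Hypothetical XPaths; the real ones depend on the page structure
phones = tree.xpath('//div[@class="result"]//span[@class="phone"]/text()')
addresses = tree.xpath('//div[@class="result"]//span[@class="address"]/text()')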

Scroll to the bottom of a web page

I'm trying to make a little script which looks at the main page of a site and finds ads.
The problem is that some web pages use infinite scroll. If this code were built for one particular web page, I could locate elements and scroll to them.
But I can't figure out how to make Selenium scroll to the very bottom of an arbitrary page:
self.driver.execute_script("window.scrollTo(0, something);")
PS: If the page is very long, break off after several seconds of scrolling.
Do you know how to do that?
Here's another method that I used in Java: get the window size, then scroll by that amount using JavaScript. Here's how to do it in Java (hopefully you can implement the concept in Python too; see the sketch after the code):
double pageHeight = testBase.TestBase.driver.manage().window().getSize().getHeight();
driver.executeScript("window.scrollBy(0," + pageHeight + ")");
If you are implementing infinite scroll, you can put the executeScript() line in a loop. Hope it helps.
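A sketch of the same idea translated to Python, with the time cutoff the question asks for; the 10-second budget is an arbitrary placeholder:
import time

page_height = driver.get_window_size()["height"]
deadline = time.time() + 10  # stop scrolling after ~10 seconds
last_y = -1
while time.time() < deadline:
    driver.execute_script("window.scrollBy(0, arguments[0]);", page_height)
    time.sleep(0.5)  # let the page settle between scrolls
    y = driver.execute_script("return window.pageYOffset;")
    if y == last_y:
        break  # bottom reached before the time budget ran out
    last_y = y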
