How to write a Selenium test function accurately with Python and pytest

I am new to Selenium WebDriver, Python, and pytest. I'd like suggestions on how to write a well-structured, informative test function, and advice on the best way to write a test case. Please show me a professional way to write such a function.

I think you first have to decide whether a single session is controlled by one function or by several, and whether you want multiple browser instances or not.
When it comes to Selenium, you initiate a browser, control it, then close it.
If you fail to do it in that order, you end up with a pile of chrome.exe processes in Task Manager.
A good starter test case is: initiate a browser, open Google, type a word, hit search, and save the contents of the results page to a variable.
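Since the question mentions pytest, here is a minimal sketch of that initiate/control/close pattern as a pytest fixture. It assumes Chrome and a matching chromedriver are available; the search term and assertion are only placeholders.
# A minimal sketch, assuming pytest, Chrome and chromedriver are installed.
# The fixture guarantees the browser is quit even if the test fails,
# which avoids the orphaned chrome.exe processes mentioned above.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def browser():
    driver = webdriver.Chrome()   # initiate the browser
    yield driver                  # hand it to the test
    driver.quit()                 # always close it afterwards

def test_google_search(browser):
    browser.get("https://www.google.com")
    search_box = browser.find_element(By.NAME, "q")   # Google's search field
    search_box.send_keys("selenium")
    search_box.submit()
    page_contents = browser.page_source               # save the page to a variable
    assert "selenium" in page_contents.lower()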
If your intention is web scraping, definitely get yourself a copy of BeautifulSoup (pip install bs4):
from bs4 import BeautifulSoup as bs

Related

Is it possible to use selenium and requests at the same time?

I am thinking of creating a web automation script using Python: it would open the browser with the Selenium WebDriver and click a few buttons, then fill in and submit a form with a requests POST, and then continue using Selenium again. In short, can Selenium and Python requests be used interchangeably?
Of course you can! I use both libraries interchangeably in the same code file; it's very helpful.
For example, first I use the requests library to fetch the webpage; next I use Selenium whenever I have to change a specific parameter on the page (selecting a radio button, entering form credentials, etc.); and then, depending on the complexity of the source code, I either use BeautifulSoup or continue using Selenium.
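As a rough sketch of how the two libraries can hand state back and forth, you can copy Selenium's session cookies into a requests session; the URL and form field names below are made up for illustration.
# Sketch of sharing a logged-in session between Selenium and requests.
# example.com and the form field names are hypothetical.
import requests
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/login")
# ... click buttons / fill in the login form with Selenium here ...

# copy Selenium's cookies into a requests session so the POST is authenticated
session = requests.Session()
for cookie in driver.get_cookies():
    session.cookies.set(cookie["name"], cookie["value"])

# submit a form with requests instead of the browser
response = session.post("https://example.com/form", data={"field": "value"})
print(response.status_code)

# ... and carry on driving the browser with Selenium afterwards
driver.get("https://example.com/next-page")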

Creating a script that takes live data from a website (for now) and displays it

This isn't really a specific question, sorry about that. I'm trying to create a script that takes real-time data from another site (from a table tag, to be exact), turns it into an array, and displays it somewhere. I've created a simple Python script:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import requests
import time
driver = webdriver.Chrome('C:/drivers/chromedriver.exe')
driver.set_page_load_timeout(10)  # expects a number of seconds, not a string
driver.get("link to the site")
driver.find_element_by_id("username-real").send_keys("login")
driver.find_element_by_id("pass-real").send_keys("pwd")
driver.find_element_by_xpath('//input[@class="button-login"]').submit()  # @class, not #class
# here, potentially a loop that refreshes every second:
for elem in driver.find_elements_by_xpath('//*[@class="table-body"]'):
    pass  # do something with each row
As you can see it's pretty simple: open the Chrome WebDriver, log in to the website, and do something with the table. I haven't tried to properly extract the data yet because I don't like this approach.
I was wondering if there's another way to do it without running the WebDriver, some console-like application perhaps? I'm pretty lost as to what I should look into in order to create a script like that. Another programming language? Some kind of framework or method?
If you want to use Selenium you have to use a WebDriver. Think of it as the "connection" between your program and Google Chrome. If you can use Safari, you can use Selenium without installing any WebDriver manually.
If you want other tools, I can recommend BeautifulSoup. It's basically an HTML parser which looks through the HTML code of the web page. With BS you don't have to install any drivers, and you can use it from Python.
Another method I can think of is downloading the HTML text of the page and searching through the file locally, but I wouldn't recommend that.
For web pages, Selenium is really the way to go; I often use it for my own projects.
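If the table is present in the plain HTML response (i.e. not injected by JavaScript only after login), a driver-free version with requests and BeautifulSoup could look roughly like this; the URLs, credentials and class name are placeholders based on the question.
# A minimal sketch without a WebDriver, assuming the site accepts a plain
# form POST for login and the table exists in the returned HTML.
import requests
from bs4 import BeautifulSoup

session = requests.Session()
session.post("https://example.com/login",                    # hypothetical login endpoint
             data={"username": "login", "password": "pwd"})

html = session.get("https://example.com/data").text           # hypothetical page with the table
soup = BeautifulSoup(html, "html.parser")

table = soup.find(class_="table-body")                        # class name from the question
rows = [[cell.get_text(strip=True) for cell in row.find_all("td")]
        for row in table.find_all("tr")]
print(rows)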

Scraping PDFs from a password-protected website

I work in tech support and currently have to keep our product manuals updated by hand, periodically checking whether there is an update and, if there is, replacing the current copy saved on our network.
I was wondering if it would be possible to build a small program to quickly download all the files on a supplier's website and have them automatically downloaded and sorted into the given folders for those products, replacing the current PDFs in each folder. I should also note that the website is password protected and organised into folders.
Is this possible with Python? I figured a small program I could run once a week or so to automatically update our manuals would be super useful (and a learning experience).
Apologies if I haven't explained the requirement well; any questions, let me know.
It's certainly possible. As the other answer suggests, you will want to use libraries like requests (to handle HTTP requests) or Selenium (automated browser activity) to get through the login.
You'll need to sort through the links on a given page, which is ideally done with BeautifulSoup (an HTML parser) but could also be done with Selenium. You'll also want requests for downloading the PDFs, and the os module for sorting the downloads into specific folders and replacing files.
I strongly urge you to think through the steps, but I hope that gives an idea of the libraries you'll need to learn a bit about. The most challenging part will be Selenium, so if you can use requests to do the login, that is much better.
If you've got a basic grasp of Python, the requests, os, and BeautifulSoup libraries are not difficult to pick up.
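As a rough outline of those pieces working together (the URLs, form fields and folder mapping are hypothetical and depend entirely on the supplier's site):
# Sketch only: log in with requests, find PDF links with BeautifulSoup,
# and save them into per-product folders with the os module.
import os
import requests
from bs4 import BeautifulSoup

session = requests.Session()
session.post("https://supplier.example.com/login",              # hypothetical login form
             data={"user": "me", "password": "secret"})

page = session.get("https://supplier.example.com/manuals")       # hypothetical listing page
soup = BeautifulSoup(page.text, "html.parser")

for link in soup.find_all("a", href=True):
    if link["href"].lower().endswith(".pdf"):                    # assumes absolute PDF URLs
        pdf = session.get(link["href"])
        folder = os.path.join("manuals", "some-product")         # decide the folder per product
        os.makedirs(folder, exist_ok=True)
        filename = os.path.join(folder, os.path.basename(link["href"]))
        with open(filename, "wb") as f:                          # overwrites the existing PDF
            f.write(pdf.content)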
You can use Selenium for browser automation. It can enter the password (although "are you a robot" checks might stop you), and then you can download the PDFs simply by setting a default download location and clicking the download button; the browser will save the files to that location.
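For the Selenium route, the default download location can be set through Chrome's preferences when the driver is created; a sketch, with a placeholder path:
# Sketch: point Chrome's downloads at a specific folder so clicked PDFs land there.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("prefs", {
    "download.default_directory": r"C:\manuals\incoming",        # placeholder path
    "plugins.always_open_pdf_externally": True,                  # download PDFs instead of previewing them
})
driver = webdriver.Chrome(options=options)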

Selenium: What functions would fire requests?

I am new to Selenium and web applications. Please bear with me for a second if my question seems way too obvious. Here is my story.
I have written a scraper in Python that uses the Selenium 2.0 WebDriver to crawl AJAX web pages. One of the biggest challenges (and an ethical one) is that I do not want to burn down the website's server, so I need a way to monitor the number of requests my WebDriver fires on each page parsed.
I have done some Google searching. It seems like only Selenium RC provides such functionality; however, I do not want to rewrite my code just for this reason. As a compromise, I decided to limit the rate of method calls that could potentially make the headless browser fire requests at the server.
In the script, I have the following kind of method calls:
driver.find_element_by_XXXX()
driver.execute_script()
webElement.get_attribute()
webElement.text
I use the second function to scroll to the bottom of the window and get the AJAX content, like the following:
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
Based on my intuition, only the second call will trigger requests, since the others seem to parse existing HTML content.
Is my intuition wrong?
Many many thanks
Perhaps I should elaborate. I am automating a crawling process on a website in Python. A substantial amount of work is done, and the script runs without major bugs.
My colleagues, however, reminded me that if, while crawling a page, I make too many requests for the AJAX list within a short time, I may get banned by the server. This is why I started looking for a way to monitor the number of requests I fire from my headless PhantomJS browser from within the script.
Since I cannot find a way to monitor the number of requests in the script, I made the compromise mentioned above.
"Therefore I need a way to monitor the number of requests my webdriver is firing on each page parsed"
As far as I know, the number of requests depends on the web page's design, i.e. the resources the page uses and the requests its JavaScript/AJAX code makes. WebDriver opens a browser and loads the page just like a normal user would.
In Chrome, you can check the requests and responses in the Developer Tools panel; you can refer to this post. The current UI of Developer Tools looks different, but the basic functions are the same. Alternatively, you can use the Firebug plugin in Firefox.
Updated:
Another method to check the requests and responses is by using Wireshark. Please refer to these Wireshark filters.
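If you do want a rough in-script count rather than watching Developer Tools by hand, ChromeDriver can expose its performance log through Selenium. A hedged sketch: the "goog:loggingPrefs" capability name applies to recent ChromeDriver versions and may differ on older ones, and example.com stands in for your target page.
# Sketch: counting network requests via Chrome's performance log.
import json
from selenium import webdriver

options = webdriver.ChromeOptions()
options.set_capability("goog:loggingPrefs", {"performance": "ALL"})
driver = webdriver.Chrome(options=options)

driver.get("https://example.com")          # hypothetical page

entries = driver.get_log("performance")
requests_sent = sum(
    1 for entry in entries
    if json.loads(entry["message"])["message"]["method"] == "Network.requestWillBeSent"
)
print("requests fired while loading the page:", requests_sent)
driver.quit()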

Selenium Webdriver for Python: get page, enter values, click submit, get source

Alright, I'm confused. So I want to scrape a page using Selenium Webdriver and Python. I've recorded a test case in the Selenium IDE. It has stuff like
Command   Target
click     link=14
But I don't see how to run that in Python. The desired end result is that I have the source of the final page.
Is there a run_test_case command? Or do I have to write individual command lines? I'm rather missing the link between the test case and the actual automation. Every site tells me how to load the initial page and how to get stuff from that page, but how do I enter values and click on stuff and get the source?
I've seen:
submitButton=driver.find_element_by_xpath("....")
submitButton.click()
OK. And how do I enter values? And get the source once I've submitted a page? I'm sorry that this is so general, but I really have looked around and haven't found a good tutorial that actually shows how to do what I thought was the whole point of Selenium WebDriver.
I've never used the IDE. I just write my tests or site automation by hand.
from selenium import webdriver
browser = webdriver.Firefox()
browser.get("http://www.google.com")
print(browser.page_source)
You could put that in a script and just run python wd_script.py, or you could open a Python shell and type it in by hand, watching the browser open and get driven line by line. For this to work you will obviously need Firefox installed as well. Not all versions of Firefox work with all versions of Selenium, but the latest versions of each at the time of writing (Firefox 19, Selenium 2.31) do.
An example showing logging into a form might look like this:
username_field = browser.find_element_by_css_selector("input[type=text]")
username_field.send_keys("my_username")
password_field = browser.find_element_by_css_selector("input[type=password]")
password_field.send_keys("sekretz")
browser.find_element_by_css_selector("input[type=submit]").click()
print(browser.page_source)
This kind of thing is much easier to write if you know CSS well. Weird errors can be caused by trying to find elements that are generated by JavaScript: you might be looking for them before they exist, for instance. It's easy enough to tell if this is the case by putting in a time.sleep for a little while and seeing if that fixes the problem. More elegantly, you can abstract some kind of general wait-for-element function, as sketched below.
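That wait-for-element idea is essentially what Selenium's explicit waits already provide; a short sketch, reusing the browser object from the snippet above:
# Sketch: wait for an element instead of sleeping a fixed amount of time.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait up to 10 seconds for the submit button to appear before clicking it
submit = WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "input[type=submit]"))
)
submit.click()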
If you want to run WebDriver sessions as part of a suite of integration tests, I would suggest using Python's unittest to create them. You drive the browser to the site under test and make assertions that the actions you take leave the page in the state you expect. I can share an example of how that might work as well if you are interested.
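For instance, a minimal version with unittest might look like this (the title assertion is just an example):
# Sketch: a WebDriver session wrapped in Python's unittest.
import unittest
from selenium import webdriver

class GoogleSmokeTest(unittest.TestCase):
    def setUp(self):
        self.browser = webdriver.Firefox()

    def tearDown(self):
        self.browser.quit()

    def test_homepage_title(self):
        self.browser.get("http://www.google.com")
        # assert the action left the page in the state we expect
        self.assertIn("Google", self.browser.title)

if __name__ == "__main__":
    unittest.main()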
