Unable to perform click action in selenium python

I'm writing a test script using Selenium in Python. I have a web page containing a tree-view menu, and I want to traverse the menu to reach a desired directory. The relevant HTML for the plus/minus toggles is this:
<a onclick="changeTree('tree', 'close.gif', 'open.gif');">
<img id="someid" src="open.gif" />
</a>
The src attribute of the image can be either open.gif or close.gif.
I can detect whether there is a plus or a minus by simply checking the src attribute of the img tag. I can also easily access the parent a tag using .find_element_by_xpath("..").
The problem is that I can't perform the click action on either the img or the a tag.
I've tried webdriver.Actions(driver).move_to_element(el).click().perform(); but it did not work.
I should mention that there is no problem accessing the elements, since I can print all their attributes; I just can't perform actions on them. Any help?
EDIT 1:
Here's the js code for collapsing and expanding the tree:
function changeTree(tree, image1, image2) {
    if (!isTreeviewLocked(tree)) {
        var image = document.getElementById("treeViewImage" + tree);
        if (image.src.indexOf(image1) != -1) {
            image.src = image2;
        } else {
            image.src = image1;
        }
        if (document.getElementById("treeView" + tree).innerHTML == "") {
            return true;
        } else {
            changeMenu("treeView" + tree);
            return false;
        }
    } else {
        return false;
    }
}
EDIT 2:
I Googled for some hours and found out that there is a known problem with triggering JavaScript events such as click from the webdriver. Additionally, I have a span tag in my web page with an onclick event, and I have the same problem with it.

After some tries like .execute_script("changeTree();"), .submit(), etc., I solved the issue by using the ActionChains class. Now I can click on all elements that have JavaScript onclick events. The code I used is this:
from selenium import webdriver
driver = webdriver.Firefox()
driver.get('someURL')
el = driver.find_element_by_id("someid")
webdriver.ActionChains(driver).move_to_element(el).click(el).perform()
I don't know if this happened only to me, but I found that I had to find the element right before the action command; otherwise the script would not perform the action. I think it is related to stale element references or something like that. Anyway, thanks all for your attention.
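If you run into the same staleness issue, a minimal sketch of one way around it (the URL and the wait condition are placeholders; the idea is just to re-locate the element with an explicit wait right before clicking):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get('someURL')
# Re-locate the element immediately before acting on it, so a re-rendered
# tree node does not leave us holding a stale reference
el = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "someid")))
webdriver.ActionChains(driver).move_to_element(el).click(el).perform()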

Related

How to click on specific text in a paragraph?

I have a paragraph element as follows:
<p>You have logged in successfully. <em>LOGOUT</em></p>
Clicking on "LOGOUT" will initiate a logout procedure (e.g display a confirmation prompt).
How do I simulate this clicking on "LOGOUT" using Selenium WebDriver?
To find and click the "LOGOUT" text with python, you can use the following code:
logout = driver.find_element_by_xpath("//em[text()='LOGOUT']")
logout.click()
This could help:
Execute button Click with Selenium
As a piece of advice: you should first try to analyze the basic components offered by your tool and its interactions with external systems (selection, execution, listening).
Based on the first link offered as a resource, your code should be something like:
package postBlo;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class singleClickButton {
    public singleClickButton() {
        super();
    }

    public static void main(String[] args) throws Exception {
        System.setProperty("webdriver.chrome.driver", "./exefiles/chromedriver.exe");
        WebDriver driver = new ChromeDriver();
        driver.manage().window().maximize();
        driver.get("your-local-site-to-test");
        // Reference an input component and set a value
        driver.findElement(By.name("id-html-tag")).sendKeys("someValue text");
        /* ## Clicking a button
           You can identify the element either way:
           - by an "xpath" expression, which lets you navigate between elements
           - by its id
           Choose one of the two.
           driver.findElement(By.xpath("expression-xpath")).click();
           driver.findElement(By.id("id-element")).click();
        */
        driver.findElement(By.xpath("/html/body/elements-container-button/button")).click();
        driver.findElement(By.id("button-id")).click();
    }
}
As a note, I don't work much with Selenium itself, but the logic is the same.
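Since the question is about Python, a rough equivalent sketch (the driver path, URL, and locators are placeholders) would be:

from selenium import webdriver

driver = webdriver.Chrome('./exefiles/chromedriver')  # path to chromedriver is a placeholder
driver.maximize_window()
driver.get('your-local-site-to-test')
# Fill an input, then click the button either by XPath or by id
driver.find_element_by_name('id-html-tag').send_keys('someValue text')
driver.find_element_by_id('button-id').click()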
Best

Select parent child element when child element matches text using selenium

I have html like so:
<div class="card">
<div>Foo</div>
<a>View Item</a>
</div>
<div class="card">
<div>Bar</div>
<a>View Item</a>
</div>
I want to select the card matching "Bar" and click the "View Item" link. I tried
cards = browser.find_elements_by_class_name('card')
for card in cards:
    if card.find_element_by_partial_link_text('Bar'):
        item_anchor = card.find_element_by_partial_link_text('View Item')
        item_anchor.click()
However I get the error:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"partial link text","selector":"Bar"}
Try using EC (expected conditions) and the XPaths below; the snippets assume the standard wait imports shown next.
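For reference, the imports assumed by these snippets:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC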
Option 1:
Check if the link exists and then click it (you can add attributes to the link in the xpath if you are looking for a specific link).
link = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "//div[@class='card' and div[normalize-space(.)='Bar']]/a")))
if link:
    link.click()
Option 2:
Using a different xpath and len
links = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.XPATH, "//div[@class='card']/div[normalize-space(.)='Bar']/following-sibling::a[normalize-space(.)='View Item']")))
if len(links) > 0:
    links[0].click()
Option 3:
If you are not sure whether there are intermediate levels between the 'Bar' div and the anchor, you can use the below xpath.
links = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.XPATH, "//div[normalize-space(.)='Bar']/ancestor::div[@class='card']//a[normalize-space(.)='View Item']")))
if len(links) > 0:
    links[0].click()
If you want to click on View Item of Bar, you can directly use this xpath :
//div[text()='Bar']/following-sibling::a[text()='View Item']
However, introducing a webdriver wait would be a good idea for stability, as mentioned by @supputuri.
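A minimal sketch combining that xpath with a wait (assuming the WebDriverWait/EC imports shown earlier):

link = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//div[text()='Bar']/following-sibling::a[text()='View Item']")))
link.click()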
There are two ways to handle this situation based on your UI behavior:
1) If the UI is fixed, use this xpath to identify the element and click() to click on it:
//*[@class='card']/div[.='Bar']/following-sibling::a
2) If you are taking data from an external source (like a database or Excel), pass your expected value (like Bar or Foo) as a parameter to the xpath method like below.
Define a class called Element as below:
public class Element {
    private WebElement element;
    private WebDriver driver;
    private String xpath;

    // The constructor wraps the web element into an object and allows use of any of the methods declared below
    public Element(String xpath) {
        this.driver = new ChromeDriver();
        this.xpath = xpath;
        this.element = this.driver.findElement(By.xpath(this.xpath));
    }

    public void click() {
        this.element.click();
    }
}
Create a POM class and write methods like below:
public class PageObjectClass {
    private Element elementToClick(String value) {
        return new Element("//*[@class='card']/div[.='" + value + "']/following-sibling::a");
    }

    public void clickOnViewItemsLink(String value) {
        this.elementToClick(value).click();
    }
}
In this way, you can click on any View Item link just by passing the value as a parameter.
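A rough Python sketch of the same parameterized idea (it assumes a driver already exists; the helper name is just for illustration):

def click_view_item(driver, value):
    # Build the xpath with the expected card text, then click the sibling link
    xpath = "//*[@class='card']/div[.='" + value + "']/following-sibling::a"
    driver.find_element_by_xpath(xpath).click()

click_view_item(driver, "Bar")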
This can be achieved by a single line of code by using the correct xpath.
driver.findElement(By.xpath("//div[text()='Bar']/following-sibling::a[text()='View Item']")).click();

How can I click button or div tag dynamically until it disappear from page using selenium python?

Using this link I want all reviews from that page.
I have used xpaths (given in the sample code) to click "load more" until it disappears from that page, but my solution fails and gives the following errors.
Error- Message: Element is no longer attached to the DOM Stacktrace
or
in _read_status
raise BadStatusLine(line)
httplib.BadStatusLine: ''
Sample Code with xpaths
Either
driver.execute_script('$("div.load-more").click();')
or
xpath_content = '//div[@class = "load-more"]'
driver.find_element_by_xpath(xpath_content).click()
Is there any solution that will not fail in these cases? How can I click on "load more" until it disappears from that page, or is there any other way to get all the reviews from this page?
One more thing: I am using FirePath to generate the review xpath, which is .//*[@id='reviews-container']/div[1]/div[3]/div[1]/div/div/div[3]/div/div[1]/div
Is there a way to build my own xpath instead of using FirePath?
This is a Java solution for your problem; you can use the same logic in Python as well (a Python sketch follows the Java method below).
public static void loadAll(WebDriver driver) {
    while (true) {
        // Using findElements to get a list of elements so that it won't throw an exception if the element is not present
        List<WebElement> elements = driver.findElements(By.xpath("//div[@class='load-more']"));
        // If the size is zero, the "load more" element is not present, so break the loop
        if (elements.isEmpty()) {
            break;
        }
        // Assign the first element to a variable
        WebElement loadEl = elements.get(0);
        // Get the text of the element
        String text = loadEl.getText().toLowerCase();
        // Only click while the text contains "load more"; while loading it shows "..." and can't be clicked
        if (text.contains("load more")) {
            loadEl.click();
        }
        // If the text matches "load more" followed by 1 to 4 (e.g. "Load More 4"), this is the last click, so break the loop
        if (text.matches("load more [1-4]")) {
            break;
        }
    }
    System.out.println("Done");
}
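A rough Python version of the same loop (a sketch; the selector is taken from the question, and the fixed sleep is a simple placeholder for a proper wait):

import time
from selenium.common.exceptions import StaleElementReferenceException

def load_all(driver):
    while True:
        # find_elements returns an empty list instead of raising when nothing matches
        elements = driver.find_elements_by_xpath("//div[@class='load-more']")
        if not elements:
            break
        load_el = elements[0]
        try:
            text = load_el.text.lower()
            if "load more" in text:
                load_el.click()
        except StaleElementReferenceException:
            # The button was re-rendered while we held it; loop around and find it again
            continue
        time.sleep(1)  # crude wait for the next batch of reviews to load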

How to browse a whole website using selenium?

Is it possible to go through all the URIs of a given URL (website) using Selenium?
My aim is to launch the Firefox browser using Selenium with a given URL of my choice (I know how to do that, thanks to this website), and then let Firefox browse all the pages that the URL (website) has. I appreciate any hint/help on how to do this in Python.
You can use a recursive method in a class such as the one given below to do this.
public class RecursiveLinkTest {
    // List to save visited links
    static List<String> linkAlreadyVisited = new ArrayList<String>();
    WebDriver driver;

    public RecursiveLinkTest(WebDriver driver) {
        this.driver = driver;
    }

    public void linkTest() {
        // Loop over all the a elements in the page
        for (WebElement link : driver.findElements(By.tagName("a"))) {
            // Check if the link is displayed and not previously visited
            if (link.isDisplayed()
                    && !linkAlreadyVisited.contains(link.getText())) {
                // Add the link to the list of links already visited
                linkAlreadyVisited.add(link.getText());
                System.out.println(link.getText());
                // Click on the link. This opens a new page
                link.click();
                // Call linkTest recursively on the new page
                new RecursiveLinkTest(driver).linkTest();
            }
        }
        driver.navigate().back();
    }

    public static void main(String[] args) throws InterruptedException {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://newtours.demoaut.com/");
        // Start the recursive link test
        new RecursiveLinkTest(driver).linkTest();
    }
}
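Since the question asks for Python, here is a rough equivalent of the same recursive idea (a sketch, not a tested crawler; it collects hrefs instead of clicking, and the demo URL is the one from the Java example):

from selenium import webdriver

visited = set()

def link_test(driver):
    # Collect hrefs first, because navigating away makes the elements stale
    hrefs = [a.get_attribute("href") for a in driver.find_elements_by_tag_name("a")]
    for href in hrefs:
        if href and href not in visited:
            visited.add(href)
            print(href)
            driver.get(href)
            link_test(driver)
            driver.back()

driver = webdriver.Firefox()
driver.get("http://newtours.demoaut.com/")
link_test(driver)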
Hope this helps you.
As Khyati mentions, it is possible; however, Selenium is not a web crawler or robot. You have to know where/what you are trying to test.
If you really want to go down that path, I would recommend that you hit the page, pull all the elements back, and then loop through and click any element that corresponds to navigation functionality (i.e. "//a", a hyperlink click).
If you go down this path and a page opens another page that links back, you will want to keep a list of all visited URLs and make sure you don't visit a page twice.
This would work, but it requires a bit of logic to make it happen, and you might find yourself in an endless loop if you aren't careful.
I know you asked for a Python example, but I was just in the middle of setting up a simple repo for Protractor tests, and the task you want to accomplish is very easy to do with Protractor (which is just a wrapper around WebDriver).
Here is the code in JavaScript:
describe('stackoverflow scrapping', function () {
    var ptor = protractor.getInstance();

    beforeEach(function () {
        browser.ignoreSynchronization = true;
    });

    afterEach(function () {
    });

    it('should find the number of links in a given url', function () {
        browser.get('http://stackoverflow.com/questions/24257802/how-to-browse-a-whole-website-using-selenium');

        var script = function () {
            var cb = arguments[0];
            var nodes = document.querySelectorAll('a');
            nodes = [].slice.call(nodes).map(function (a) {
                return a.href;
            });
            cb(nodes);
        };

        ptor.executeAsyncScript(script).then(function (res) {
            var visit = function (url) {
                console.log('visiting url', url);
                browser.get(url);
                return ptor.sleep(1000);
            };

            var doVisit = function () {
                var url = res.pop();
                if (url) {
                    visit(url).then(doVisit);
                } else {
                    console.log('done visiting pages');
                }
            };

            doVisit();
        });
    });
});
You can clone the repo from here
Note: I know Protractor is probably not the best tool for this, but it was so simple to do with it that I just gave it a try.
I tested this with Firefox (you can use the firefox-conf branch for it, but it will require that you start webdriver manually) and Chrome. If you're using OS X this should work with no problem (assuming you have nodejs installed).
The Selenium API provides facilities for various operations such as type, click, goto, navigateTo, switching between frames, drag and drop, etc.
What you are aiming to do is, in simple terms, just browsing: clicking and navigating to different URLs within the website, if I understood properly. Yes, you can definitely do it via Selenium WebDriver.
You can also make a property file, for better ease and readability, in which you can pass different properties like URLs, base URI, etc., and do the automation testing via Selenium WebDriver in different browsers.
This is possible. I have implemented this using the Java WebDriver and java.net.URL; it was mainly created to identify broken links.
Once a page is open, get all the elements with an "a" tag using WebDriver and save their "href" values.
Check each link's status using the URL class of Java and push it onto a stack.
Then pop a link from the stack and "get" it using WebDriver. Again collect all the links from that page, removing duplicates that are already on the stack.
Loop this until the stack is empty.
You can adapt it to your requirements, such as levels of traversal, excluding links that are not on the given website's domain, etc.
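A rough Python sketch of the same stack-based traversal (the start URL is a placeholder, and the per-link HTTP status check described above is left out):

from selenium import webdriver

start_url = "http://example.com/"  # placeholder start URL
driver = webdriver.Firefox()

stack = [start_url]
seen = set(stack)

while stack:
    url = stack.pop()
    driver.get(url)
    # Collect hrefs on this page and push the ones we haven't seen yet,
    # restricting to the same site to avoid wandering off-domain
    for a in driver.find_elements_by_tag_name("a"):
        href = a.get_attribute("href")
        if href and href.startswith(start_url) and href not in seen:
            seen.add(href)
            stack.append(href)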
Please comment if you find any difficulty in the implementation.

The html content that I'm trying to scrape only appears to load when I navigate to a certain anchor within the site

I'm trying to scrape a certain value off the following website: https://www.theice.com/productguide/ProductSpec.shtml?specId=6747556#data
Specifically, I'm trying to grab the "last" value from the table at the bottom of the page in the table with class "data default borderless". The issue is that when I search for that object name, nothing appears.
The code I use is as follows:
from bs4 import BeautifulSoup
import urllib2
url = "https://www.theice.com/productguide/ProductSpec.shtml?specId=6747556#data"
page=urllib2.urlopen(url)
soup = BeautifulSoup(page.read())
result = soup.findAll(attrs={"class":"data default borderless"})
print result
One issue I noticed is that when I pull the soup for that URL, it strips off the #data anchor and shows me the HTML for the URL: https://www.theice.com/productguide/ProductSpec.shtml?specId=6747556
It was my understanding that anchors just navigate you around the page and all the HTML should be there regardless, so I'm wondering if this table somehow doesn't load unless you've navigated to the "data" section of the web page.
Does anyone know how to force the table to load before I pull the soup? Is there something else I'm doing wrong that prevents me from seeing the table?
Thanks in advance!
The content is dynamically generated via the JS below:
<script type="text/javascript">
    var app = {};
    app.isOption = false;
    app.urls = {
        'spec':'/productguide/ProductSpec.shtml?details=&specId=6747556',
        'data':'/productguide/ProductSpec.shtml?data=&specId=6747556',
        'confirm':'/reports/dealreports/getSampleConfirm.do?hubId=4080&productId=3418',
        'reports':'/productguide/ProductSpec.shtml?reports=&specId=6747556',
        'expiry':'/productguide/ProductSpec.shtml?expiryDates=&specId=6747556'
    };
    app.Router = Backbone.Router.extend({
        routes:{
            "spec":"spec",
            "data":"data",
            "confirm":"confirm",
            "reports":"reports",
            "expiry":"expiry"
        },
        initialize: function(){
            _.bindAll(this, "spec");
        },
        spec:function () {
            this.navigate("");
            this._loadPage('spec');
        },
        data:function () {
            this._loadPage('data');
        },
        confirm:function () {
            this._loadPage('confirm');
        },
        reports:function () {
            this._loadPage('reports');
        },
        expiry:function () {
            this._loadPage('expiry');
        },
        _loadPage:function (cssClass, cb) {
            $('#right').html('Loading..').load(this._makeUrlUnique(app.urls[cssClass]), cb);
            this._updateNav(cssClass);
        },
        _updateNav:function (cssClass) {
            // the left bar gets hidden on margin rates because the tables get smashed up too much
            // so ensure they're showing for the other links
            $('#left').show();
            $('#right').removeClass('wide');
            // update the subnav css so the arrow points to the right location
            $('#subnav ul li a.' + cssClass).siblings().removeClass('on').end().addClass('on');
        },
        _makeUrlUnique:function (urlString) {
            return urlString + '&_=' + new Date().getTime();
        }
    });
    // init and start the app
    $(function () {
        window.router = new app.Router();
        Backbone.history.start();
    });
</script>
Two things you can do:
1. Figure out the real path and variables it uses to pull the data. See this part: 'data':'/productguide/ProductSpec.shtml?data=&specId=6747556'. It passes the specId to the data URL and gets the content directly; a sketch of this follows below.
2. Use the RSS feed they provide and construct your own table.
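A minimal sketch of option 1, reusing the question's urllib2/BeautifulSoup setup but pointing it at the data endpoint from the JS above (the table class is the one from the question):

from bs4 import BeautifulSoup
import urllib2

# Request the fragment the Backbone router loads into #right, rather than the full page
url = "https://www.theice.com/productguide/ProductSpec.shtml?data=&specId=6747556"
page = urllib2.urlopen(url)
soup = BeautifulSoup(page.read())
result = soup.findAll(attrs={"class": "data default borderless"})
print result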
The table is generated by JavaScript, and you can't get it without actually loading the page in a browser.
You could use Selenium to load the page and then evaluate the JavaScript and HTML. Selenium will bring up a visible browser window, but you can use PhantomJS, which makes the browser headless.
Either way, you will need to load the actual JS in a browser to get the HTML it generates.
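A quick sketch of that approach (it assumes the phantomjs binary is on your PATH; the sleep is a crude stand-in for a proper wait, and the table class is the one from the question):

import time
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.PhantomJS()  # headless; swap in webdriver.Firefox() to watch it run
driver.get("https://www.theice.com/productguide/ProductSpec.shtml?specId=6747556#data")
time.sleep(2)  # give the page's router time to load the #data fragment
# Hand the rendered HTML to BeautifulSoup
soup = BeautifulSoup(driver.page_source)
result = soup.findAll(attrs={"class": "data default borderless"})
print result
driver.quit()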
Take a look at this answer also
Good Luck!
The HTML is generated using JavaScript, so BeautifulSoup won't be able to get the HTML for that table (in fact the whole <div id="right" class="main"> is loaded using JavaScript; judging by the script above, they're using Backbone.js).
You can check this by printing the value of soup.get_text(): the table is not there in the source.
In that case, there is no way for you to access the data unless you do what that script does and request the data from the server yourself.
