Open a new tab in a Selenium-automated Chrome - python

I'm creating an automated online banking balance check for my mother, who isn't very good with computers. I'm stuck at the part where I want to open a PDF file that was already automatically downloaded to the local PC, and open it in the Selenium-automated Chrome. Is there any way to do that? Thank you.
With a normal webbrowser.open, it only opens the regular Chrome, not the Selenium-automated Chrome:
import webbrowser
file_ = 'C:\\Users\\user\\Downloads\\MASTERCARD PLATINUM'+month_year+".pdf"
webbrowser.open_new_tab("https://www.google.com")

To handle a PDF document in Selenium test automation, we can use a Java library called Apache PDFBox:
public void verifyContentInPdf() {
    // specify the URL of the PDF file
    String url = "http://www.pdf995.com/samples/pdf.pdf";
    driver.get(url);
    try {
        String pdfContent = readPdfContent(url);
        Assert.assertTrue(pdfContent.contains("The Pdf995 Suite offers the following features"));
    } catch (MalformedURLException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
For Python you can follow this link
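For the original problem itself, opening the already-downloaded PDF inside the Selenium-driven Chrome rather than a new browser, a minimal sketch is to convert the local Windows path to a file:// URL and pass it to driver.get(). The month_year value below is hypothetical, and an existing webdriver.Chrome session named driver is assumed:

```python
from pathlib import PureWindowsPath

month_year = "JAN2021"  # hypothetical value for illustration
file_ = 'C:\\Users\\user\\Downloads\\MASTERCARD PLATINUM' + month_year + ".pdf"

# Convert the Windows path to a file:// URL that Chrome understands
url = PureWindowsPath(file_).as_uri()
print(url)  # file:///C:/Users/user/Downloads/MASTERCARD%20PLATINUMJAN2021.pdf

# With the existing Selenium session, this opens the PDF in the automated Chrome:
# driver.get(url)
```

Unlike webbrowser.open, driver.get navigates the very browser instance that Selenium controls, so the PDF opens in the automated Chrome's built-in viewer.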


Can I infinitely click with Selenium?

Is there any way I can autoclick (spam) a button on a webpage using selenium? What I tried was while True: driver.find_element_by_id("whatev").click()
Odds are this will work. However, some sites have protection against prolonged auto-clicking. In such a case the site will redirect you to another URL, or new HTML will be loaded with different classes, IDs and other attributes, which will break your code.
Here is what you can do using Java; translate my code to Python:
loadSite();
while (true) {
    try {
        driver.findElement(By.id("whatev")).click();
    } catch (Exception e) {
        loadSite();
        driver.findElement(By.id("whatev")).click();
    }
}
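A direct Python translation of the loop, written as a helper so the retry logic can be exercised without a real browser; the load_site name is hypothetical, standing in for whatever re-opens the page:

```python
def click_forever(click, load_site, max_iterations=None):
    """Repeatedly invoke click(); on failure, reload the site and retry once.

    click and load_site are callables. max_iterations, if given, bounds the
    loop so this sketch can be tested; pass None for the real infinite loop.
    """
    load_site()
    i = 0
    while max_iterations is None or i < max_iterations:
        try:
            click()
        except Exception:
            # the site redirected or replaced the DOM: reload and click again
            load_site()
            click()
        i += 1

# With Selenium this would be used roughly as:
# click_forever(lambda: driver.find_element_by_id("whatev").click(),
#               lambda: driver.get("https://example.com"))
```

Catching the broad Exception mirrors the Java answer; in practice you would narrow it to selenium.common.exceptions such as StaleElementReferenceException.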

How to click on specific text in a paragraph?

I have a paragraph element as follows:
<p>You have logged in successfully. <em>LOGOUT</em></p>
Clicking on "LOGOUT" will initiate a logout procedure (e.g display a confirmation prompt).
How do I simulate this clicking on "LOGOUT" using Selenium WebDriver?
To find and click the "LOGOUT" text with python, you can use the following code:
logout = driver.find_element_by_xpath("//em[text()='LOGOUT']")
logout.click()
This could help:
Execute button Click with Selenium
As a piece of advice: you should first try to analyze the basic components offered by your tool, and its interactions with external systems (selection, execution, listening).
Based on the first link offered as a resource, your code should look something like this:
package postBlo;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class singleClickButton {

    public singleClickButton() {
        super();
    }

    public static void main(String[] args) throws Exception {
        System.setProperty("webdriver.chrome.driver", "./exefiles/chromedriver.exe");
        WebDriver driver = new ChromeDriver();
        driver.manage().window().maximize();
        driver.get("your-local-site-to-test");

        // Reference an input component and set a value
        driver.findElement(By.name("id-html-tag")).sendKeys("someValue text");

        /* ## Executing the button click by id
           You can use either method to identify the element you need:
           - an "xpath" expression, which allows you to navigate between elements
           - the id identifier
           Choose one of the two:
           driver.findElement(By.xpath("expression-xpath")).click();
           driver.findElement(By.id("id-element")).click();
        */
        driver.findElement(By.xpath("/html/body/elements-container-button/button")).click();
        driver.findElement(By.id("button-id")).click();
    }
}
As a note, I'm not closely familiar with Selenium, but the logic is much the same.
Best

I cannot get Chrome to default to saving as a PDF when using Selenium

I'm trying to save some web pages to PDF using Python, Selenium, and Chrome, and I can't get the printer to default to Chrome's built-in "save as PDF" option.
I have found examples of how to do this in various places online, including in questions people have asked on Stack Overflow, but the way they're all implementing it doesn't work, and I'm not sure if something has changed in more recent versions of Chrome or if I'm somehow doing something wrong (for example, here is a page that has these settings: Missing elements when using selenium chrome driver to automatically 'Save as PDF').
I only included the default download location change in this code to verify it's accepting any changes at all - if you download one of the Python installs from that page, it will download to the new location and not to the standard download folder, so Chrome seems to be accepting these changes.
The problem appears to be the option "selectedDestinationID", which doesn't seem to do anything.
from selenium import webdriver
import time
import json

chrome_options = webdriver.ChromeOptions()
app_state = {
    'recentDestinations': [{
        'id': 'Save as PDF',
        'origin': 'local'
    }],
    'selectedDestinationId': 'Save as PDF',
    'version': 2
}
prefs = {
    'printing.print_preview_sticky_settings.appState': json.dumps(app_state),
    'download.default_directory': 'c:\\temp\\seleniumtesting\\'
}
chrome_options.add_experimental_option('prefs', prefs)
driver = webdriver.Chrome(executable_path='C:\\temp\\seleniumtesting\\chromedriver.exe', options=chrome_options)
driver.get('https://www.python.org/downloads/release/python-373/')
time.sleep(25)
driver.close()
After the page launches, hitting ctrl+p brings up the printing page, but it defaults to the default printer. If I bring up the same page in my standard Chrome installation, it defaults to printing to PDF. I want to get to the point where I can add kiosk printing and then call window.print(), but as of now all that does is send it to the actual paper printer.
Thanks for any help anyone can offer. I'm stumped, and at this point it probably would have been faster to just save all of these manually.
It seems that if you have network printers configured, they load after the dialog opens and override your selectedDestinationId.
There is a preference, "printing.default_destination_selection_rules", which seems to resolve this:
prefs = {
    "printing.print_preview_sticky_settings.appState": json.dumps(app_state),
    "download.default_directory": "c:\\temp\\seleniumtesting\\",
    "printing.default_destination_selection_rules": {
        "kind": "local",
        "namePattern": "Save as PDF",
    },
}
https://chromium.googlesource.com/chromium/src/+/master/chrome/common/pref_names.cc#1318
https://www.chromium.org/administrators/policy-list-3#DefaultPrinterSelection
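Putting the pieces together, a sketch of the full configuration: the appState preference from the question plus the destination-selection rule, with the --kiosk-printing flag added so window.print() saves without showing the preview dialog. Two assumptions to note: the savefile.default_directory preference name, and passing the selection rule as a JSON string (Chrome versions differ on whether a dict or a JSON string is expected here):

```python
import json

app_state = {
    'recentDestinations': [{'id': 'Save as PDF', 'origin': 'local'}],
    'selectedDestinationId': 'Save as PDF',
    'version': 2,
}
prefs = {
    'printing.print_preview_sticky_settings.appState': json.dumps(app_state),
    'savefile.default_directory': 'c:\\temp\\seleniumtesting\\',  # assumption: where saved PDFs land
    'printing.default_destination_selection_rules': json.dumps({
        'kind': 'local',
        'namePattern': 'Save as PDF',
    }),
}

# With Selenium (not run here):
# chrome_options = webdriver.ChromeOptions()
# chrome_options.add_experimental_option('prefs', prefs)
# chrome_options.add_argument('--kiosk-printing')  # print silently to the default destination
# driver = webdriver.Chrome(options=chrome_options)
# driver.get('https://example.com')
# driver.execute_script('window.print()')  # saves straight to PDF
```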

phantomjs: is there a console or log to see what is going on internally?

A funny one, for us....
We drive PhantomJS from Python -> Selenium.
Since PhantomJS is non-visual, we have no idea what is going on during a test.
We wonder: Is there :
A log by phantom we can monitor?
A message viewer that can receive messages from phantom?
We are looking for high level info. Like:
<timestamp> GET on /pages/page.html
<timestamp> js function foo called
etc.
You can add the --remote-debugger-port=9000 and --remote-debugger-autorun=true options when starting phantomjs. Then open a browser and navigate to http://localhost:9000; you will see the phantomjs remote debugger console. It's like the Chrome console.
NOTE:
The console can be opened in a browser only if there is a web page opened in phantomjs. The following code can be used to open a page:
var page = require('webpage').create();
var url = 'http://github.com';
page.open(url, function() {
// do something...
});
You can set the service_log_path; it should save the PhantomJS log.
The example here uses the splinter library, although you can do the same with selenium.
browser = Browser('phantomjs', service_log_path='/var/log/ghostdriver.log')

How to browse a whole website using selenium?

Is it possible to go through all the URIs of a given URL (website) using selenium ?
My aim is to launch firefox browser using selenium with a given URL of my choice (I know how to do it thanks to this website), and then let firefox browse all the pages that URL (website) has. I appreciate any hint/help on how to do it in Python.
You can use a recursive method in a class such as the one given below to do this.
public class RecursiveLinkTest {
    // list to save visited links
    static List<String> linkAlreadyVisited = new ArrayList<String>();
    WebDriver driver;

    public RecursiveLinkTest(WebDriver driver) {
        this.driver = driver;
    }

    public void linkTest() {
        // loop over all the <a> elements in the page
        for (WebElement link : driver.findElements(By.tagName("a"))) {
            // check if the link is displayed and not previously visited
            if (link.isDisplayed()
                    && !linkAlreadyVisited.contains(link.getText())) {
                // add link to the list of links already visited
                linkAlreadyVisited.add(link.getText());
                System.out.println(link.getText());
                // click the link; this opens a new page
                link.click();
                // call linkTest recursively on the new page
                new RecursiveLinkTest(driver).linkTest();
            }
        }
        driver.navigate().back();
    }

    public static void main(String[] args) throws InterruptedException {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://newtours.demoaut.com/");
        // start the recursive link test
        new RecursiveLinkTest(driver).linkTest();
    }
}
Hope this helps you.
As Khyati mentions, it is possible; however, Selenium is not a web crawler or robot. You have to know where/what you are trying to test.
If you really want to go down that path, I would recommend that you hit the page, pull all the elements back, and then loop through to click any elements that correspond to navigation functionality (i.e. "//a" or hyperlink clicks).
If you go down this path and there is a page that opens another page which then links back, you will want to keep a list of all visited URLs and make sure you don't visit a page twice.
This would work, but it requires a bit of logic to make it happen, and you might find yourself in an endless loop if you aren't careful.
I know you asked for a Python example, but I was just in the middle of setting up a simple repo for protractor testing, and the task you want to accomplish seems very easy to do with protractor (which is just a wrapper around webdriver).
here is the code in javascript:
describe( 'stackoverflow scrapping', function () {
var ptor = protractor.getInstance();
beforeEach(function () {
browser.ignoreSynchronization = true;
} );
afterEach(function () {
} );
it( 'should find the number of links in a given url', function () {
browser.get( 'http://stackoverflow.com/questions/24257802/how-to-browse-a-whole-website-using-selenium' );
var script = function () {
var cb = arguments[ 0 ];
var nodes = document.querySelectorAll( 'a' );
nodes = [].slice.call( nodes ).map(function ( a ) {
return a.href;
} );
cb( nodes );
};
ptor.executeAsyncScript( script ).then(function ( res ) {
var visit = function ( url ) {
console.log( 'visiting url', url );
browser.get( url );
return ptor.sleep( 1000 );
};
var doVisit = function () {
var url = res.pop();
if ( url ) {
visit( url ).then( doVisit );
} else {
console.log( 'done visiting pages' );
}
};
doVisit();
} );
} );
} );
You can clone the repo from here
Note: I know protractor is probably not the best tool for this, but it was so simple to do with it that I just gave it a try.
I tested this with firefox (you can use the firefox-conf branch for it, but it will require that you fire webdriver manually) and chrome. If you're using osx this should work with no problem (assuming you have nodejs installed)
The Selenium API provides all the facilities via which you can do various operations like type, click, goto, navigateTo, switch between frames, drag and drop, etc.
What you are aiming to do is just browsing in simple terms, clicking and providing different URLs within the website, if I understood properly. Yes, you can definitely do it via Selenium WebDriver.
And you can make a property file, for better ease and readability, in which you can pass different properties like URLs, base URI, etc., and do the automation testing via Selenium WebDriver in different browsers.
This is possible. I have implemented this using the Java WebDriver and the URL class. It was mainly created to identify broken links.
Once a page is open, you can get all elements with an "a" tag using WebDriver and save their "href" values.
Check each link's status using Java's URL class and push it onto a stack.
Then pop a link from the stack and "get" it using WebDriver. Again collect all the links from the page, removing duplicates that are already present on the stack.
Loop until the stack is empty.
You can adapt it to your requirements, such as levels of traversal, or excluding links outside the given website's domain.
Please comment if you have difficulty with the implementation.
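The stack-based crawl described above can be sketched in Python. To keep the sketch runnable without a browser, the page-fetching step is a plain callable that returns a page's outgoing links; with Selenium you would replace it with driver.get(url) followed by collecting the href of every "a" element:

```python
def crawl(start_url, get_links):
    """Visit every page reachable from start_url.

    get_links(url) returns the outgoing links of a page. The visited set
    skips duplicates and already-seen pages, so link cycles cannot cause
    an endless loop. Returns the pages in the order they were visited.
    """
    visited = set()
    stack = [start_url]
    order = []
    while stack:
        url = stack.pop()
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for link in get_links(url):
            if link not in visited:
                stack.append(link)
    return order

# A toy site as a dict, standing in for real pages:
site = {
    "/": ["/a", "/b"],
    "/a": ["/"],        # links back: the visited set prevents an endless loop
    "/b": ["/a", "/c"],
    "/c": [],
}
print(crawl("/", site.__getitem__))  # every page visited exactly once
```

To restrict the crawl to one domain, as the answer above suggests, filter the links inside get_links before returning them.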
