I am having a lot of difficulty understanding a problem I have while automating a page using ChromeDriver. I am on the login page, and here is how the HTML for the page looks:
<frame name="mainFrame" src>
<body>
<table ..>
<tr>
<td ..>
<input type="password" name="ui_pws">
</td>
..
..
..
</frame>
This is the gist; the page of course has multiple tables, divs, etc.
I am trying to enter the password into the input element using the XPath //input[@name="ui_pws"].
But the element was not found.
So I thought it might be because I was in the wrong frame, and I tried:
driver.switch_to_frame('mainFrame')
and it failed with NoSuchFrameException.
So I switched to:
main_frame = driver.find_element_by_xpath('//frame[@name="mainFrame"]')
driver.switch_to_frame(main_frame)
Then, to cross-verify, I got the current frame element using:
current_frame = driver.execute_script("return window.frameElement")
And to my surprise, I got two different elements when I printed them out.
Now I am really confused as to what I should be doing to switch frames or access the password field on the webpage. I have had 4 cups of coffee since this morning and still have a brain freeze.
Can anyone please guide me with this?
You can try this; it is in Java but should be almost identical in Python:
driver.switchTo().defaultContent();
WebElement frameElement = driver.findElement(By.xpath("//frame[@name='mainFrame']"));
driver.switchTo().frame(frameElement);
switchTo().defaultContent() brings focus back to the top-level document, after which we can switch to the desired frame in the window.
driver.switchTo().frame(driver.findElement(By.xpath("//frame[@name='mainFrame']")));
// perform the operations you want on the web elements inside the frame (mainFrame); once you finish, switch back to the default content
driver.switchTo().defaultContent();
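For reference, here is a rough Python translation of the same flow (a sketch only; the frame and field locators are copied from the question above, and "secret" is just a placeholder value):
# Reset focus to the top-level document first, then switch into the frame.
driver.switch_to.default_content()
main_frame = driver.find_element_by_xpath('//frame[@name="mainFrame"]')
driver.switch_to.frame(main_frame)
# Interact with elements inside mainFrame, e.g. the password field.
driver.find_element_by_xpath('//input[@name="ui_pws"]').send_keys("secret")
# When finished, come back to the default content.
driver.switch_to.default_content()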
I've perused SO for quite a while and cannot find the exact or similar solution to my current problem. This is my first post on SO, so I apologize if my formatting is off.
The Problem -
I'm trying to find a button on a webpage to punch me into a timeclock automatically. I am able to sign in and navigate to the correct page (it seems the page is dynamically loaded, as switching between different tabs like "Time Management" or "Pay Period" does not change the URL).
Attempts to solve -
I've tried using direct and indirect XPaths, CSS Selectors, IDs, Classes, Names, and all have failed. Included below are the different code attempts to find the button, and also a snippet of code including the button.
Button - HTML
Full Page HTML Source Code
<td>
<a onclick="return OnEmpPunchClick2(this);" id="btnEMPPUNCH_PUNCH" class="timesheet button icon " href="javascript:__doPostBack('btnEMPPUNCH_PUNCH','')">
<span> Punch</span></a>
<input type="hidden" name="hdfEMPPUNCH_PUNCH" id="hdfEMPPUNCH_PUNCH" value="0">
</td>
Attempts - PYTHON - ALL FAIL TO FIND
#All these return: "Unable to locate element"
self.browser.find_element_by_id("btnEMPPUNCH_PUNCH")
self.browser.find_element_by_xpath("//a[@id='btnEMPPUNCH_PUNCH']")
self.browser.find_element_by_css_selector('#btnEMPPUNCH_PUNCH')
#I attempted a manual wait:
wait=WebDriverWait(self.browser,30)
button = wait.until(expected_conditions.element_to_be_clickable((By.CSS_SELECTOR,'#btnEMPPUNCH_PUNCH')))
#And even manually triggering the script:
self.browser.execute_script("javascript:__doPostBack('btnEMPPUNCH_PUNCH','')")
self.browser.execute_script("__doPostBack('btnEMPPUNCH_PUNCH','')")
#Returns Message: ReferenceError: __doPostBack is not defined
None of these work, and I cannot seem to figure out why that is. Any help will be greatly appreciated!
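One thing the ReferenceError hints at is that the code may be executing against a different document than the one that contains the button, for example if the button lives inside an iframe. A hedged sketch of how to check for that (not a confirmed fix for this particular page):
# List any iframes on the page and look for the button inside each one.
frames = self.browser.find_elements_by_tag_name("iframe")
print("iframes found:", len(frames))
for frame in frames:
    self.browser.switch_to.frame(frame)
    buttons = self.browser.find_elements_by_id("btnEMPPUNCH_PUNCH")
    if buttons:
        buttons[0].click()
        break
    self.browser.switch_to.default_content()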
If you go to the site, you'll notice that there is an age confirmation window which I want to bypass using Scrapy. I couldn't manage that, so I had to move on to Selenium WebDriver, and now I'm using
driver.find_element_by_xpath('xpath').click()
to bypass that age confirmation window. Honestly, I don't want to go with Selenium WebDriver because of how slow it is. Is there any way to bypass that window?
I searched a lot on Stack Overflow and Google but didn't get any answer that resolves my problem. If you have any link or idea for resolving it with Scrapy, that would be appreciated. A single helpful comment will be up-voted!
To expand on Chillie's answer.
The age verification is irrelevant here. The data you are looking for is loaded via an AJAX request:
See the related question Can scrapy be used to scrape dynamic content from websites that are using AJAX? to understand how these requests work.
You need to figure out how the https://ns5bwtai8m-dsn.algolia.net/1/indexes/*/queries?x-algolia-agent=Algolia%20for%20vanilla%20JavaScript%203.19.1&x-algolia-application-id=NS5BWTAI8M&x-algolia-api-key=e676b05f3844d3adf54a29732af6e43c URL works and how you can replicate it in Scrapy.
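To illustrate the idea, here is a rough Scrapy sketch of replicating that request. The URL is copied from above; the index name and query parameters in the payload are assumptions that need to be replaced with the real values captured from the Network tab:
import json
import scrapy

class ProductsSpider(scrapy.Spider):
    name = "products"

    # URL copied from the question above.
    search_url = (
        "https://ns5bwtai8m-dsn.algolia.net/1/indexes/*/queries"
        "?x-algolia-agent=Algolia%20for%20vanilla%20JavaScript%203.19.1"
        "&x-algolia-application-id=NS5BWTAI8M"
        "&x-algolia-api-key=e676b05f3844d3adf54a29732af6e43c"
    )

    def start_requests(self):
        # "products" and the params string are hypothetical -- copy the real
        # payload from the request the page makes when it loads.
        payload = {"requests": [{"indexName": "products",
                                 "params": "query=&hitsPerPage=50"}]}
        yield scrapy.Request(
            self.search_url,
            method="POST",
            body=json.dumps(payload),
            headers={"Content-Type": "application/json"},
            callback=self.parse_hits,
        )

    def parse_hits(self, response):
        data = json.loads(response.text)
        for hit in data["results"][0]["hits"]:
            yield hit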
But the age verification "window" is just a div that gets hidden when you press the button, not a real separate window:
<div class="age-check-modal" id="age-check-modal">
You can use the browser's Network tab in the developer tools to see that no new information is downloaded or sent when you press the button. So everything is already loaded when you request the page. The "popup" is not even a popup, just an element whose display is changed to none when you click the button.
So Scrapy doesn't really care what is meant to be displayed, as long as all the HTML is loaded. If the elements are loaded, they are accessible. Or have you seen some information being unavailable without pressing the button?
You should inspect the HTML more closely to see what each website does; this might make your scraping tasks easier.
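A quick way to check that is from scrapy shell (a sketch; the modal class comes from the markup above, the product container from the edit below):
# In `scrapy shell <page-url>`:
response.css("div.age-check-modal").extract_first()   # the hidden age modal is in the raw HTML
response.css("div.products-list").extract_first()     # check whether the product data is there too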
Edit: After inspecting the original html you can see the following:
<div class="products-list">
<div class="products-container-block">
<div class="products-container">
<div id="hits" class='row'>
</div>
</div>
</div>
</div>
You can also see a lot of JS script tags.
The browser element inspector shows the following (screenshot omitted): the ::before part gives away that this was manipulated by JS, as you cannot do this with simple CSS. See Granitosaurus' answer for details on this.
What this means is that you need to somehow execute the JS code on those pages. So you either need a JavaScript-rendering solution that works with Scrapy, or just use Selenium, as many do and as you already have.
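If you stick with Selenium for those pages, a minimal sketch of that route (the URL and the dismiss-button locator are placeholders; only the age-check-modal id comes from the snippet above):
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://example.com/products")  # placeholder URL
# Dismiss the age-check div; the button locator inside it is a guess.
driver.find_element_by_css_selector("#age-check-modal button").click()
# You may need an explicit wait here until the JS has filled in #hits.
html = driver.page_source
driver.quit()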
I am still new to Python and Selenium. I would like to choose a certain option from a dropdown that is contained in an HTML table. However, I cannot get it to work. What am I doing wrong? Any help is appreciated.
Snippet of HTML-Code:
<table class="StdTableAutoCollapse">
<tr>
<td class="StdTableTD150">
<span id="ctl00_ContentPlaceBody_LbLProd1" class="StdLabel150">Prod1:</span>
</td>
<td class="StdTableTD330">
<select name="ctl00$ContentPlaceBody$DropDownListUnitType" onchange="javascript:setTimeout('__doPostBack(\'ctl00$ContentPlaceBody$DropDownListUnitType\',\'\')', 0)" id="ctl00_ContentPlaceBody_DropDownListUnitType" class="StdDropDownList330" Class="option">
<option selected="selected" value="#">- nothing -</option>
<option value="P">Dummy1</option>
</select>
</td>
</tr>
<tr>
I tried the following to select the value "Dummy1"
Python Code:
dropdown1 = browser.find_element_by_id('ctl00_ContentPlaceBody_DropDownListUnitType')
select = Select(dropdown1)
select.select_by_value("P")
What am I missing or doing wrong? Any help is much appreciated.
EDIT
I get an error on the IPython console in Anaconda with Python 3.6:
NoSuchElementException: Unable to locate element:
[id="ctl00_ContentPlaceBody_DropDownListUnitType"]
EDIT2
I checked whether the problem is due to different iframes, as mentioned in the comments and in other questions here on Stack Overflow. I used the idea from https://developer.mozilla.org/en-US/docs/Tools/Working_with_iframes to check for iframes and tried it with the example of Alibaba's login page, where two different iframes were shown. On the page I am trying to automate with Selenium there is only one iframe.
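A short sketch of doing the same check from Selenium itself (it assumes the single iframe is the one that holds the form):
# List the iframes and, if the <select> lives inside one, switch into it first.
iframes = browser.find_elements_by_tag_name("iframe")
print("number of iframes:", len(iframes))
browser.switch_to.frame(iframes[0])   # assumption: the only iframe holds the form
dropdown1 = browser.find_element_by_id("ctl00_ContentPlaceBody_DropDownListUnitType")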
It seems WebDriver is having difficulty reaching the dropdown directly by its id. You may need to locate the table first and then reach the dropdown from there. Try the following and let me know whether it works.
dropdown1 = browser.find_element_by_xpath("//table[@class='StdTableAutoCollapse']/tr[1]/descendant::select[@id='ctl00_ContentPlaceBody_DropDownListUnitType'][1]")
select = Select(dropdown1)
select.select_by_value("P")
The problem was that I was trying to use Selenium 3.0.2 with Firefox 45. This combination causes issues, and thus I could not select the dropdown values. I downgraded to Selenium 2.5.x and the problem went away. The issue was not that the select was in a table, as I first thought. I hope this helps somebody else in the future. Please see also the following question: Python, Firefox and Selenium 3: selecting value from dropdown does not work with Firefox 45
I have this little website where I want to fill in a form with the requests library. The problem is I can't get to the next page after filling in the form data and hitting the button (Enter does not work).
The important thing is that I can't do it via a clicking bot of some kind; this needs to run without graphics.
info = {'name':'JohnJohn',
'message':'XXX',
'sign':"XXX",
'step':'1'}
The first three entries (name, message, sign) are the text areas, and step is, I think, the button.
r = requests.get(url)
r = requests.post(url, data=info)
print(r.text)
The form data looks like this when I send the request via Chrome manually:
name:JohnJohn
message:XXX
sign:XXX
step:1
The button element looks like this:
<td colspan="2" style="text-align: center;">
<input name="step" type="hidden" value="1">
<button id="button" type="button" onclick="myClick();"
style="background-color: #ef4023; width: 80px; font-face: times; font-size: 14pt;">
Wyślij
</button>
</td>
The next page, if I do this manually, has the same address.
As you can see from the snippet you posted, clicking the button triggers some JavaScript code, namely a function called myClick().
It is not straightforward to click this using Python's requests library. You might have more luck trying to find out what happens inside myClick(). My guess would be that at some point a POST request is made to an HTTP endpoint. If you can figure this out, you can translate it into your Python code.
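For example, once the Network tab shows what myClick() actually posts, a hedged sketch of replaying it with requests might look like this (the payload below just reuses the fields from the question; the real request may go to a different URL or include extra hidden fields):
import requests

url = "https://example.com/form"   # placeholder -- use the endpoint from the Network tab
session = requests.Session()
session.get(url)                   # load the page first to pick up any cookies

payload = {
    "name": "JohnJohn",
    "message": "XXX",
    "sign": "XXX",
    "step": "1",
}
r = session.post(url, data=payload)
print(r.status_code)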
If that does not work, another option would be to use something like Selenium/PhantomJS, which gives you a real, headless, scriptable browser. Using such a tool, you can actually have it fill out forms and click buttons. You can have a look at this SO answer, as it shows you how to use Selenium+PhantomJS from Python.
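A minimal sketch of the headless-browser route with Selenium (headless Chrome here; the field names come from the form data above and the button id from the HTML snippet, but treat them as assumptions):
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")              # no graphics, as required
driver = webdriver.Chrome(options=options)

driver.get("https://example.com/form")          # placeholder URL
driver.find_element_by_name("name").send_keys("JohnJohn")
driver.find_element_by_name("message").send_keys("XXX")
driver.find_element_by_name("sign").send_keys("XXX")
driver.find_element_by_id("button").click()     # <button id="button"> from the snippet
print(driver.current_url)
driver.quit()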
Please make sure not to abuse such methods by spamming forums or [insert illegal or otherwise abusive activity here].
In a situation like this, when you need to forge a scripted button's request, it may be easier not to guess at the JS logic but instead to perform a real click and watch Chrome DevTools' Network tab, which shows you the plain request being made; that request can then be forged easily in Python.
I am trying to test the functionality of a webpage under development using Selenium in Python.
This webpage has several instances where ids/names are repeated.
For example:
<input class="" name="title1" type="text">
This line of code is repeated but linked to different input fields throughout the code.
Therefore, when I try to test the webpage by using:
driver.find_element_by_name("elname").send_keys("BOb")
it seems to find the first instance of that name in the code instead of focusing on the current screen and entering my desired input. This screen is not a window, so I cannot switch to a window.
Is there a way to cause the driver to only focus on the current screen?
Is the "current screen" really a browser alert, or is it simply html that is designed to look like a dialog box in the browser window? It's it's really HTML, then these individual input elements probably have parents with a unique name.
<div id="first_dialog"><input class="" name="title1" type="text"></div>
so you would limit your search by that:
driver.find_element_by_id("first_dialog").find_element_by_name("elname").send_keys("BOb")
If your goal is to interact with the element currently in focus, use driver.switch_to.active_element. This returns the currently active element, which you can then interact with.
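For example (a minimal sketch, assuming the field you want already has focus):
# Send keys to whatever element currently has focus,
# e.g. the first input of the "dialog" that just opened.
active = driver.switch_to.active_element
active.send_keys("BOb")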