can't locate element in iframe selenium - python

I'm trying to switch to a frame on a web page to access a video inside that frame, but an error always occurs saying the element is not found. I've tried many elements; all give the same error.
This is the code I used to switch to the frame and get the video URL:
WebDriverWait(browser,10).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, "//*[@id = 'innerframe']/iframe")))
browser.find_element_by_xpath("//video[@id='mediaplayer']/soucre").text
Here is the HTML of the page:
<div id="innerframe"><iframe src="https://ops.cielo24.com/hitman/work/load_task/554097cdbe314edb9ad5d62edf5396ed/tasks/2547efb19fc5430c9f335fe165a46df3?active_task_uuid=44686eab1ad8448d97e5e74e484575ab" width="100%" height="100%" frameborder="0"></iframe>
The video HTML:
<div id="mediacontent">
<video height="305" width="480" id="mediaplayer"><source src="https://c24cdn.co/restricted/sliced-media/790f319ee29c46e585c5ee585ed31580.mp4?Expires=1584401901&GoogleAccessId=microservice-writer%40coresystem-171219.iam.gserviceaccount.com&Signature=QtSAPQc5GMxPx9qAI8WnCurouFagNgRE2rto1B3af%2BrUhemeqoFnJZWmfQfQ2SGXKAhc5pXL68GhLINlshZ4yGEvy7SDMEr1l44Z%2FA9bFL3Xvlsii9MfZpkXaCeXT%2FKrMZZvH%2BpbiR%2BpgQjgqLysP68fODMsQ3zub9FCx8zD2Yw5bQZg12rzQWdlEcU5VHGktTSDAjpReWHIrmca63X6jQAYru5TQi12sy18UwSlpdrF1qFgXlTOEMKwB2iPHbLRPxxpFF%2FhOkYVrCcIi6OmJOXvy6arBZY9%2FYBP2vjIpDQ3UODyH8uFrEFdWbqVTHAe0G0pKly4NK1K30dKrSGYJw%3D%3D" type="video/mp4"><source src="https://c24cdn.co/restricted/sliced-media/59b2c60d2e764a25bd4a8e2d6f15cb31.webm?Expires=1584401901&GoogleAccessId=microservice-writer%40coresystem-171219.iam.gserviceaccount.com&Signature=JGbxZYS0u2rI2gY%2BjXThKj9KkIMBDfLvW9XEImWdtfzMFNpUBBm33B7wM3XYD01JLKcMD%2BlqfWf%2FqzMFAgW2zQH07NvGKzdkYFIgwxgCUQha8ws%2FLqoJyLMiz8UeXr5Smqqjr%2FiFrLLc6HmCnYfP8g7Y%2BJ%2FJoQuHmVeZjJIKxz957SZEOQ8QIQqtbIusK%2B0uqQzvyyW4vStDF7RvjZwp44b1H0pqzsby2bjCYspacgv9JM712Z72sZdercFFczC5BR%2FxT0jXFxYn6XiRhfE0HO1e24qFiR1A%2B78Ems3A3ZdQylaVDZ4UfVX13iofy2l0LWdXMjEynLxSz7cNPGtDpg%3D%3D" type="video/webm"></video>
</div>

The XPath you are using is incorrect: it has a typo (you have used soucre instead of source), and the structure you have used to get the URL is also incorrect.
So, try the code below; it should work fine.
WebDriverWait(browser, 10).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, "//div[@id='innerframe']/iframe")))
browser.find_element_by_xpath("//div[@id='mediacontent']//video/source").get_attribute("src")
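If you need the URLs of both <source> children (mp4 and webm), here is a minimal sketch along the same lines, assuming the same page structure and that you are still switched into the iframe:
# Sketch: collect the src attribute of every <source> child of the video,
# then switch back out of the iframe when done.
sources = browser.find_elements_by_xpath("//video[@id='mediaplayer']/source")
urls = [s.get_attribute("src") for s in sources]
browser.switch_to.default_content()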

Related

Python & Selenium - Find element by label text

I'm trying to locate and click an element (a checkbox) from a big selection of checkboxes on an HTML site using Python and Selenium WebDriver. The HTML code looks like this:
HTML Code
<div class="checkbox-inline col-md-5 col-lg-3 col-sm-6 m-l-sm rightCheckBox">
<input type="checkbox" checked="checked" class="i-checks" name="PanelsContainer:tabsContentView:5:listTabs:rights-group-container:right-type-view:2:right-view:2:affected-right" disabled="disabled" id="id199"> <label>Delete group</label>
</div>
My problem is that the only unique identifier is:
<label>Delete group</label>
All other elements/IDs/names are used by other checkboxes or change from page to page.
I have tried the following code:
driver.find_element_by_xpath("//label[contains(text(), 'Delete group')]").click()
But I only get an error when using this:
Error: element not interactable
Anyone able to help with this?
Try the XPath below:
//label[contains(text(), 'Delete group')]//ancestor::div//input
Try with JavaScript:
checkBox = driver.find_element_by_xpath("//label[text()='Delete group']//ancestor::div//input")
# Scroll to the checkbox if it's not on screen
driver.execute_script("arguments[0].scrollIntoView();", checkBox)
driver.execute_script("arguments[0].click();", checkBox)
Note: As per the HTML you shared, the checkbox is in a disabled state, so I am not sure the click will trigger any action. However, the above code will click your checkbox.
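If you prefer an explicit wait over a bare find, here is a small sketch combining the label-based XPath with WebDriverWait. It assumes the checkbox may still be disabled, in which case the JavaScript click is the fallback:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Sketch: locate the input via its <label> text, then decide how to click it.
checkBox = WebDriverWait(driver, 10).until(EC.presence_of_element_located(
    (By.XPATH, "//label[text()='Delete group']//ancestor::div//input")))
if checkBox.is_enabled():
    checkBox.click()  # normal click works for an enabled, visible checkbox
else:
    driver.execute_script("arguments[0].click();", checkBox)  # JavaScript click as a fallback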

I am trying to select an image from a box and then click on the aligning button using Selenium Python

Here is the HTML for the image I uploaded:
<body id="tinymce" class="mce-content-body " data-id="textarea-WYSIWYG" contenteditable="true"><p><br data-mce-bogus="1"></p><p><img src="//www.shahidpro.tv/uploads/articles/60011c78.jpg" width="500" height="500" vspace="" hspace="" border="0" alt=""></p></body>
<p><img src="//www.shahidpro.tv/uploads/articles/60011c78.jpg" width="500" height="500" vspace="" hspace="" border="0" alt=""></p>
I tried using
clk = WebDriverWait(browser, 30).until(EC.invisibility_of_element_located((By.XPATH, '/html/body/p[2]/a'))).click()
and
clk = WebDriverWait(browser, 30).until(EC.invisibility_of_element_located((By.XPATH, '//*[@id="tinymce"]/p[2]/a'))).click
and by partial link text using (//www.shahidpro.tv/uploads/articles/), but it doesn't click nor give an error. I am pretty new to Selenium and Python.
.invisibility_of_element_located:
An Expectation for checking that an element is either invisible or not present on the DOM.
locator - used to find the element
The question is: why do you want to wait for the element to disappear and then click it?
Maybe what you need is .visibility_of_element_located, or more precisely .element_to_be_clickable.
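For example, a minimal sketch with .element_to_be_clickable, assuming the <img> inside the editor body is the element you actually want to click and that it is reachable in the current frame (the XPath is a guess based on the HTML you posted):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Sketch: wait until the image is visible and enabled, then click it.
img = WebDriverWait(browser, 30).until(
    EC.element_to_be_clickable((By.XPATH, "//*[@id='tinymce']//img")))
img.click()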

HTML request does not show everything that the HTML in the browser shows

I am trying to obtain the comments of a website using Python and urllib.
I am able to get the HTML; however, I noticed that the comment section of the HTML I got using Python is missing.
Here's what I have using Python:
<div data-bv-product-id="6810124" data-bv-show="reviews" id="BVReviewsContainer">
</div>
(what's in between the div tags is empty)
Whereas this is what it should look like (in the browser):
<div data-bv-product-id="6810124" data-bv-show="reviews" id="BVReviewsContainer">
<div id="BVRRContainer">
<div class="bv-cleanslate bv-cv2-cleanslate"> <div data-bv-v="contentList:1" class="bv-shared bv-core-container-437" data-product-id="6810124">
.
.
.
</div>
</div>
</div>
I am confounded as to why I am not getting the whole thing.
This post explains why scraped HTML isn't always the same as what the browser shows: JavaScript can change the HTML of a website. One instance where I've seen this happen is, I believe, on Archive of Our Own, where the actual body of a work was not available. According to that StackOverflow post, you should use Selenium to scrape it instead, as it essentially simulates what actually happens when a user accesses a page: the user opens a web browser (you can use your preferred web browser, like Chrome), then opens a page, and the page's JavaScript runs (possibly through the onload event).
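Here is a minimal sketch of that approach; the URL is a placeholder, and the container id is taken from your snippet:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Sketch: let the browser run the page's JavaScript, then read the
# reviews container after the widget has filled it in.
driver = webdriver.Chrome()
driver.get("https://example.com/product-page")  # hypothetical URL
WebDriverWait(driver, 15).until(EC.presence_of_element_located(
    (By.CSS_SELECTOR, "#BVReviewsContainer #BVRRContainer")))
html = driver.find_element_by_id("BVReviewsContainer").get_attribute("innerHTML")
driver.quit()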

Python - Selenium - webscrape xmlns table

<html xmlns="http://www.w3.org/1999/xhtml">
<head>_</head>
<body>
<form name="Main Form" method="post" action="HTMLReport.aspx?ReportName=...">
<div id="Whole">
<div id="ReportHolder">
<table xmlns:msxsl="urn:schemas-microsoft-com:xslt" width="100%">
<tbody>
<tr>
<td>_</td>
<td>LIVE</td>
and the data I need is here between <td> </td>
Now my code so far is:
import time
from selenium import webdriver
chromeOps=webdriver.ChromeOptions()
chromeOps.binary_location = "C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe"
chromeOps.add_argument("--enable-internal-flash")
browser = webdriver.Chrome("C:\\Program Files\\Google\\Chrome\\Application\\chromedriver.exe", port=4445, chrome_options=chromeOps)
time.sleep(3)
browser.get('website')
elem=browser.find_element_by_id('MainForm')
el=elem.find_element_by_xpath('//*[@id="ReportHolder"]')
The last two lines of code are really just me testing how far down the path I can go before the XPath breaks down. Trying to XPath to any content beyond this point gives a NoSuchElementException.
Can anyone explain to me how I draw data from within the table please?
My current thinking is that perhaps I have to pass "something" into an XML tree API and access it through that, although I don't know how I would capture it.
If anyone can give me that next step it would be greatly appreciated; I'm feeling a bit like I'm holding a candle in a dark room at the moment.
It's very simple: it's a timing issue.
Solution: place a time.sleep(5) before the XPath request.
browser.get('http://www.mmgt.co.uk/HTMLReport.aspx?ReportName=Fleet%20Day%20Summary%20Report&ReportType=7&CategoryID=4923&Startdate='+strDate+'&email=false')
time.sleep(5)
ex=browser.find_element_by_xpath('//*[@id="ReportHolder"]/table/tbody/tr/td')
The XPath is requesting a reference to dynamic content.
The table is dynamic content, and it takes longer for that content to load than it does for the Python program to reach the line:
ex=browser.find_element_by_xpath('//*[@id="ReportHolder"]/table/tbody/tr')
from its previous line of:
browser.get('http://www.mmgt.co.uk/HTMLReport.aspx?ReportName=Fleet%20Day%20Summary%20Report&ReportType=7&CategoryID=4923&Startdate='+strDate+'&email=false')
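A more robust alternative to a fixed sleep is an explicit wait; here is a sketch, assuming the same ReportHolder id and table structure:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Sketch: wait for the dynamically loaded table cell instead of sleeping
# a fixed number of seconds.
cell = WebDriverWait(browser, 15).until(EC.presence_of_element_located(
    (By.XPATH, '//*[@id="ReportHolder"]/table/tbody/tr/td')))
print(cell.text)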

Selenium: Timing inconsistency with WebDriverWait & click

I have a set of divs to show/hide content in a typical accordion style. The HTML looks like this;
<div class="accordionContainer">
<div class="accordion">
<h3>Click This</h3>
<div class="accordionContent" style="display:none">
</div>
</div>
<div class="accordion">
<h3>Click This</h3>
<div class="accordionContent" style="display:none">
</div>
</div>
</div>
I've then got my Python code to select that first h3 and then open a link that is in accordionContent.
WebDriverWait(ff, 10).until(lambda driver : driver.find_element_by_xpath("id('main_content')/div[3]/div/div/div[1]/h3[1]")).click()
WebDriverWait(ff, 10).until(lambda driver : driver.find_element_by_xpath("id('main_content')/div[3]/div/div/div[1]/div/p/a")).click()
I have run this and seen it work. However, most of the time it fails. The first div gets clicked (I can see a little arrow on it rotate to show the content), but it seems to get clicked twice, as it immediately returns to the default state and I get the error:
[exec] selenium.common.exceptions.ElementNotVisibleException: Message: u'Element is not currently visible and so may not be interacted with'
Oddly, though, when it can be seen to be clicked but does not open, calling the same click() line a second time works.
Can that second XPath be enhanced to check that the accordionContent has been changed to display: block?
This xpath should work:
"//div[#class='accordionContainer']/div[#class='accordion'][1]/div[#class='accordionContent' and contains(#style, 'block')]"
or if the structure is pretty safe, could do:
"//div[#class='accordionContainer']/div[1]/div[contains(#style, 'block')]"
Note: I am assuming it is just a typo in the example that one of the closing tags for the 'accordion' div appears as an opening tag (it is supposed to be a closing tag).
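Putting that together with an explicit wait, here is a sketch that clicks the header and then waits for the content panel to actually be displayed before clicking the link inside it (assuming the class names above and that the link lives inside accordionContent):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Sketch: click the first accordion header, then wait for its content panel
# to switch to display: block before clicking the link inside it.
WebDriverWait(ff, 10).until(EC.element_to_be_clickable(
    (By.XPATH, "//div[@class='accordionContainer']/div[@class='accordion'][1]/h3"))).click()
WebDriverWait(ff, 10).until(EC.visibility_of_element_located(
    (By.XPATH, "//div[@class='accordionContainer']/div[@class='accordion'][1]"
               "/div[@class='accordionContent' and contains(@style, 'block')]//a"))).click()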
