I am trying to scrape data from a certain website. I am using Selenium so that I can log myself in and then start parsing through the data.
I have 3 main errors:
The last page number is not loading properly: here I am getting "1" when it should be "197", and I believe this is happening because of how the website loads.
The 'test' element's XPath is not being found; I commented it out in the last for loop.
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//div[1]/div[@class='col-lg-3 col-sm-3 result-info' and 2]/span[@class='brand-name' and 1]"}
Finally, I am trying to click the last page to test whether that works, but I am getting an error that the element is not visible.
selenium.common.exceptions.ElementNotVisibleException: Message: element not visible
This is my code:
import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
url = "https://marketplace.refersion.com/"
username = "jupoxar#b2bx.net"
password = "testpass123"
driver = webdriver.Chrome("/Users/xxx/Downloads/chromedriver")
if __name__ == "__main__":
    driver.get(url)
    driver.find_element_by_xpath("/html/body/div[@class='wrapper']/div[@class='top-block']/header[@class='header clearfix']/div[@class='login-button']/a[@class='login-link']").click()
    driver.find_element_by_id("email").send_keys(username)  # enters the username in the textbox
    driver.find_element_by_xpath("/html/body/div[@id='app']/div[@class='top-block']/div[@class='row']/div[@class='col-xs-12 col-sm-10 col-sm-offset-1 col-md-8 col-md-offset-2 col-lg-6 col-lg-offset-3 main-section']/div[@class='main-section-content']/div/form[@class='form-horizontal']/div[@class='form-group ']/div[@class='col-xs-12 col-sm-10 col-sm-offset-1 input-group input-group-lg']/input[@id='password']").send_keys(password)  # enters the password in the textbox
    # Find the submit button using class name and click on it.
    driver.find_element_by_class_name("btn-primary").click()
    driver.find_element_by_link_text("Find Offers").click()
    driver.find_element_by_id("sorting-dropdown").click()  # opens the sorting dropdown
    driver.find_element_by_link_text("Newest First").click()
    last_page = driver.find_element_by_class_name("right-center").text
    print(last_page)
    # try:
    #     last_page = WebDriverWait(driver, 3).until(EC.presence_of_element_located((By.CLASS_NAME, 'right-center')))
    #     print("Page is ready!")
    # except TimeoutException:
    #     print("Loading took too much time!")
    for i in range(1, 10):
        # test = driver.find_element_by_xpath("//div[1]/div[@class='col-lg-3 col-sm-3 result-info' and 2]/span[@class='brand-name' and 1]")
        # print(test)
        WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CLASS_NAME, 'hover-link'))).click()
I think this has to do with the way the page is being loaded. My question is: is there any workaround for something like this?
You should have explicit waits in your code to handle the dynamic loading of the pages. Sorting the page by "Newest First" causes it to refresh the results and introduces a spinner to indicate the sorting.
<i class="fa fa-spinner fa-spin" aria-hidden="true" style="font-size: 48px;"></i>
Waiting for the spinner to disappear should give you the correct page count. Something along the following lines:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
.....
# your login code
.....
driver.find_element_by_link_text("Newest First").click()
element = WebDriverWait(driver, 10).until(
    EC.invisibility_of_element_located((By.XPATH, "//i[@class='fa fa-spinner fa-spin']"))
)
last_page = driver.find_element_by_class_name("right-center").text
To find all the brand names listed on the page, you need to find all the span tags with class='brand-name' by calling find_elements_by_xpath() (note the plural: elements, not element).
brand_names_list = driver.find_elements_by_xpath("//span[@class='brand-name']")
for brand_name in brand_names_list:
    print(brand_name.text)
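Putting the pieces together, a minimal sketch of paging through the results could look like the following. The class names ('right-center', 'hover-link', 'brand-name') are taken from your own code, and the assumption that 'right-center' holds a bare page count and that 'hover-link' advances the page is exactly that, an assumption:
wait = WebDriverWait(driver, 10)
# Wait for the sort spinner to disappear before reading the page count.
wait.until(EC.invisibility_of_element_located((By.XPATH, "//i[@class='fa fa-spinner fa-spin']")))
last_page = int(driver.find_element_by_class_name("right-center").text)  # assumes the text is just the count

for page in range(last_page):
    # Collect the brand names currently visible on this page.
    for brand_name in driver.find_elements_by_xpath("//span[@class='brand-name']"):
        print(brand_name.text)
    if page < last_page - 1:
        # Advance to the next page.
        wait.until(EC.element_to_be_clickable((By.CLASS_NAME, "hover-link"))).click()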
Related
I'm trying to click a button by using a CSS selector. I've tried using the input's value, title and onclick attributes, but it's not working. This is the HTML code:
<div id="botaoMarcar"><input type="button" disabled class="botao"
value="Marcar todas" title="Marcar todas" onclick="javascript:marcarDesmarcarTodos(true);"
></div>
My code:
from selenium.webdriver import Chrome
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

driver = Chrome()
url = "https://www3.bcb.gov.br/sgspub/localizarseries/localizarSeries.do?method=prepararTelaLocalizarSeries"
driver.get(url)
try:
    WebDriverWait(driver, 3).until(EC.alert_is_present(),
                                   'Timed out waiting for PA creation ' + 'confirmation popup to appear.')
    alert = driver.switch_to.alert
    alert.accept()
except TimeoutException:
    print("No Alert")
driver.implicitly_wait(5)
driver.maximize_window()
# This part is for inputting the table code that I want to access
id_code = driver.find_element(By.ID, 'txCodigo')
id_code.send_keys(24)
id_code.send_keys(Keys.ENTER)
# Part not working exactly
clic_code = driver.find_element(By.CSS_SELECTOR, 'input[value*="Marcar todas"]')
clic_code.click()
I just went another way and called the function that marks all of them directly; it didn't seem like the element was in an iframe.
time.sleep(5)
driver.execute_script("marcarDesmarcarTodos(true);")
Import:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
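If you want to avoid the fixed time.sleep(5), an explicit wait can serve as the trigger instead; a sketch, where waiting for the button's presence is only used as a signal that the page (and its scripts) have finished loading:
wait = WebDriverWait(driver, 10)
# Wait until the "Marcar todas" button exists before calling its onclick handler directly.
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, 'input[value="Marcar todas"]')))
driver.execute_script("marcarDesmarcarTodos(true);")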
Question.
I'm really new to Python and can't find a way to click this link.
My aim was to click the links one by one, and I got stuck at clicking the first link.
I searched several times and tried even more, but I can't even find what the problem is!
The links lead to a new window (Survey), and the following is the HTML structure.
<div id="bb_deployment6" class="stream_item active_stream_item" role="listitem" x-aria-selected="true" tabindex="0" style="padding-left: 20px;"><span class="stream_datestamp">1 hour</span><div class="stream_context">Survey [Today] Survey A: Click to submit survey </div><div class="stream_details"></div><div class="stream_context_bottom"></div></div>
<div id="bb_deployment5" class="stream_item" role="listitem" x-aria-selected="false" tabindex="-1" style="padding-left: 20px;"><span class="stream_datestamp">2 hour</span><div class="stream_context">Survey [Today] Survey B: Click to submit survey </div><div class="stream_details"></div><div class="stream_context_bottom"></div></div>
Here's what I've tried
First Shot
from selenium import webdriver
browser =webdriver.Chrome("C:\Pii\selenium\chromedriver.exe")
#Open the Site
browser.get("https://that site")
#Find & Click!!
browser.find_element_by_partial_link_text("Survey").click()
The first error code was
: Message: no such element: Unable to locate element: {"method":"partial link text","selector":"Survey"}
Second Shot: OK Maybe the loading time was too short?
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
browser =webdriver.Chrome("C:\Pii\selenium\chromedriver.exe")
browser.get("https://that site")
#Wait & Click
wait = WebDriverWait(browser, 10)
element = wait.until(EC.presence_of_element_located((By.PARTIAL_LINK_TEXT, "Survey")))
browser.find_element_by_partial_link_text("Survey").click()
and now it said
: selenium.common.exceptions.TimeoutException: Message:
Third Shot: Maybe the click part was the problem because of onclick?
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
browser=webdriver.Chrome("C:\Pii\selenium\chromedriver.exe")
browser.get("https://that site")
wait = WebDriverWait(browser, 10)
element = wait.until(EC.presence_of_element_located((By.PARTIAL_LINK_TEXT, "Survey")))
sample = browser.find_element_by_link_text("Survey")
browser.execute_script("arguments[0].click();",sample)
and it said
: selenium.common.exceptions.TimeoutException: Message:
The same message as above
Fourth Shot: Maybe I should use XPATH instead of text?
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
browser=webdriver.Chrome("C:\Pii\selenium\chromedriver.exe")
browser.get("https://that site")
wait = WebDriverWait(browser, 10)
element = wait.until(EC.presence_of_element_located((By.XPATH, '//*[@id="bb_deployment5"]/div[1]/a')))
sample = browser.find_element_by_xpath('//*[@id="bb_deployment5"]/div[1]/a')
browser.execute_script("arguments[0].click();",sample)
and the result was the same
I think I got something totally wrong, but I can't get what that is.
Any answer would be a great help. Thanks
Since the element is located in a different iframe, you should switch focus to that iframe and then search for the element. This is how you do it:
iframe = browser.find_element_by_class_name('cloud-iframe')
browser.switch_to.frame(iframe)
element = wait.until(EC.presence_of_element_located((By.XPATH, '//*[@id="bb_deployment5"]/div[1]/a')))
element.click()
This should work, but I am not 100% sure as I haven't yet seen the website myself.
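If the iframe itself is injected asynchronously, you can also let the wait handle both the availability check and the switch; a sketch, reusing the same (assumed) 'cloud-iframe' class name:
wait = WebDriverWait(browser, 10)
# Wait until the iframe exists, then switch into it in one step.
wait.until(EC.frame_to_be_available_and_switch_to_it((By.CLASS_NAME, 'cloud-iframe')))
wait.until(EC.presence_of_element_located((By.XPATH, '//*[@id="bb_deployment5"]/div[1]/a'))).click()
# Switch back to the top-level document once you are done inside the frame.
browser.switch_to.default_content()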
I want to get the value or the price of a stock from a trading website. The problem is that when I'm using the .get_attribute method like this:
.get_attribute('')
I can't seem to find anything to put in between the quotes that will give me the value of the stock.
Here is an image of the line when using inspect:
<span _ngcontent-c31="" class="price__value" style="" xpath="1"> 187.510 </span>
This is the code that I've been writing for this:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import time
browser = webdriver.Chrome('/Users/ludvighenriksen/downloads/chromedriver')
browser.get('https://www.forex.com/en-uk/account-login/')
username_elem = browser.find_element_by_name('Username')
username_elem.send_keys('kebababdulaziz@gmail.com')
password_elem = browser.find_element_by_name('Password')
password_elem.send_keys('KEbababdulaziz')
password_elem.send_keys(Keys.ENTER)
time.sleep(5)
search_elem = WebDriverWait(browser, 20).until(EC.element_to_be_clickable(
    (By.CSS_SELECTOR, "input.market-search__search-input")))
search_elem.click()
search_elem.send_keys('FB')
search_click_elem = WebDriverWait(browser, 20).until(EC.element_to_be_clickable(
    (By.XPATH, "//app-market-table[@class='search-results-element ng-star-inserted']//div[@class='price--buy clickable-price arrows-flashing']")))
browser.execute_script("arguments[0].click();", search_click_elem)
price_elem = browser.find_element_by_css_selector("div.mercury:nth-child(2) div.mercury__body:nth-child(4) div.mercury__body-content-container app-workspace.ng-star-inserted:nth-child(3) div.panel-container:nth-child(1) app-workspace-panel.active.ng-star-inserted div.workspace-panel-content.workspace-panel-content--no-scroll-vertical.workspace-panel-content--no-scroll-horizontal.workspace-panel-content--auto-size div.workspace-panel-content__component.workspace-panel-content__component--auto-size app-deal-ticket.ng-star-inserted form.ticket-form.ng-untouched.ng-pristine.ng-invalid.ng-star-inserted div.market-prices app-market-prices.main-prices.ng-untouched.ng-pristine.ng-valid div.market-prices div.market-prices__direction label.buy.selected span.price.ng-star-inserted:nth-child(2) > span.price__value")
price_value = price_elem.get_attribute('value')
print(price_value)
The ('value') isn't working, which makes sense I guess, but I think I've tried everything I could think of, and it prints out None.
The login to the website is included if you want to try it out. Thanks in advance.
If you want to access the content of some tag, you could use the .text option.
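For example, with the element you are already locating (a sketch; the shorter span.price__value selector comes from the HTML you posted and may match more than one element on the page):
price_elem = browser.find_element_by_css_selector("span.price__value")
print(price_elem.text)  # should print something like "187.510"

# If .text comes back empty because the element is not considered visible,
# the textContent attribute usually still holds the rendered text.
print(price_elem.get_attribute("textContent"))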
As a beginner with Python I am trying to make a simple automated login project. One more thing I have to do is to mouse-click on the 4th row of an HTML table to show me the proper content. The HTML code of that segment is:
<tr class="tbl_seznam_barva_1" onclick="setTimeout('__doPostBack(\'ctl02$ctl00$BrowseSql1\',\'Select$0\')',470);" onmouseover="radekSeznamuClass=this.className;this.className='RowMouseOver';" onmouseout="this.className=radekSeznamuClass;">
<td>virtuálny terminál</td>
</tr>
How to execute this "onclick" event?
from selenium import webdriver
#...
browser = webdriver.Firefox()
elem = browser.find_element_by_name('txtUsername')
elem.send_keys('myLogin' + Keys.RETURN)
elem = browser.find_element_by_xpath("//tr[4]")
# some code for event execution goes here...
If you want to click() on the element with text as virtuálny terminál you can achieve it with:
browser.find_element_by_xpath("//*[text()='virtuálny terminál']").click()
If you need to click on more elements you can use a for-loop on all the elements.
elements = browser.find_elements_by_xpath("//tr[4]")
for i in elements:
    print(i.text)
Edit:
You can use ActionChains:
from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
browser = webdriver.Firefox()
my_elem = browser.find_element_by_xpath("//tr[4]")
action = ActionChains(browser)
action.move_to_element(my_elem)
# action.move_to_element_with_offset(my_elem, 5, 5)
action.click()
action.perform()
Edit2:
If you can't use chromedriver and nothing else works, you can use execute_script:
element = browser.find_element_by_xpath("//tr[4]")
browser.execute_script("arguments[0].click();", element)
The problem is that one should wait for the webpage to fully load.
After the line elem.send_keys('myLogin' + Keys.RETURN) the webpage needs time to render the content, so a delay should be added:
import time
# ...
elem.send_keys('myLogin' + Keys.RETURN)
time.sleep(1)
elem = browser.find_element_by_xpath("//tr[4]")
elem.click()
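A fixed sleep works, but an explicit wait that targets the row itself is usually more reliable; a sketch of the same idea, using the locators from above:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

elem.send_keys('myLogin' + Keys.RETURN)
# Wait up to 10 seconds for the 4th table row to become clickable, then click it.
row = WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, "//tr[4]")))
row.click()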
I am trying to automate the clicking of "next" on my university's online lecture when the current slide ends as it requires the user to manually press "next" whenever the slide has ended.
Using Selenium with Python, I managed to get to the webpage, log in and navigate to the lecture slides themselves, but I am unable to progress further.
The HTML element is on pastebin; I am trying to locate the element on line 3397.
I am trying to get the elapsedTime / totalTime and
while (currentSlide != totalSlide)
    if elapsedTime == totalTime
        find and click on 'next'
I've tried:
duration = driver.find_element(By.XPATH,"//div[@class='label time'][@style='display: none;]")
duration = driver.find_element(By.XPATH,"//div[@class='.label.time'][@style='display: none;']")
Any help would be appreciated!
EDIT:
Got it working as suggested by Lukas:
import urllib.request

content = urllib.request.urlopen(URL, timeout=10).read().decode("utf-8")
duration = content.split('<div class="label time" style="display: none;">')[1].split("</div>")[0]
Did you try to use a normal parser instead of Selenium?
Check this out:
import urllib.request

content = urllib.request.urlopen(URL, timeout=10).read().decode("utf-8")
duration = content.split('<div class="label time" style="display: none;">')[1].split("</div>")[0]
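If the chained .split() feels fragile, a regular expression over the same response is one alternative (a sketch; the class and style attributes are copied from the snippet above):
import re
import urllib.request

content = urllib.request.urlopen(URL, timeout=10).read().decode("utf-8")
# Grab whatever sits between the hidden "label time" div's opening and closing tags.
match = re.search(r'<div class="label time" style="display: none;">(.*?)</div>', content, re.S)
duration = match.group(1).strip() if match else None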
I would try:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
browser = webdriver.Firefox()
browser.get(URL)
delay = 30 # seconds
WebDriverWait(browser, delay).until(EC.presence_of_element_located((By.CSS_SELECTOR, '.label.time')))
print("Page is ready!")
# duration = browser.find_element_by_class_name("label time").text
# --> "InvalidSelectorException: Message: invalid selector: Compound class names not permitted"
duration = browser.find_element_by_css_selector(".label.time").text
# alternative:
duration = browser.find_element_by_xpath("//*[@class='label time']").text