I'm relatively new to using python & selenium. I'm trying to access NexisUni to automate a loop of searches. But, once I'm in NexisUni, I struggle to locate elements -- I get a "no such element" exception. I want to locate the search bar and input my search terms.
I've read that an iframe might be present and that I need to switch frames. But I don't see any frames! Is there a way to identify frames easily, and could a frame be present without the word "frame" appearing in the HTML? I've also tried letting the page load longer and having the driver wait, to no avail.
The HTML code is below; the grey part is the piece I'd like to select:
HTML Code
The code I'm writing to identify it is:
SearchBar = driver.find_element_by_xpath('/html/body/main/div/div[13]/div[2]/div[1]/header/div[3]/section/span[2]/span/textarea').send_keys('search text')
I've also tried these two options:
find_element_by_class_name, find_element_by_id
WebDriverWait(driver,10).until(EC.presence_of_element_located)
... Any suggestions would be appreciated!
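In case it helps, this is roughly how I've been trying to check whether any frames exist at all (the tag-name search is my own guess, not something taken from NexisUni):

from selenium.webdriver.common.by import By

# Count the frames on the page; a frame can be present even if the word
# "frame" never shows up in the fragment of HTML I copied above.
frames = driver.find_elements(By.TAG_NAME, "iframe") + driver.find_elements(By.TAG_NAME, "frame")
print(len(frames))
for f in frames:
    print(f.get_attribute("id"), f.get_attribute("src"))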
Related
Hi :) This is my first time here and I am new to programming.
I am currently trying to automate some work-steps, using Selenium.
There was no problem so far; I was mainly using the find_element(By.ID, '') function and clicking things.
But now I cannot find any element that comes after the second "html" tag on the site (see screenshot).
I tried to google this "multiple html" problem, but all I found was people saying it is not possible to have multiple html tags. I basically don't know anything about HTML, but this site seems to have more than one; there are actually three. And anything after the first one cannot be found with the find_element function. Please help me with this confusion.
These "multiple html" are due to the i frames in the html code. Each iframe has its own html code. If the selector you are using is meant to find something inside one of these iframes you have to "move" your driver inside the iframe. You can find an example in this other question
I've been researching this for two days now. There seems to be no simple way of doing this. I can find an element on a page by downloading the html with Selenium and passing it to BeautifulSoup, followed by a search via classes and strings. I want to click on this element after finding it, so I want to pass its Xpath to Selenium. I have no minimal working example, only pseudo code for what I'm hoping to do.
Why is there no function/library that lets me search through the HTML of a webpage, find an element, and then request its XPath? I can do this manually by inspecting the webpage and clicking 'Copy XPath'. I can't find any solutions to this on Stack Overflow, so please don't tell me I haven't looked hard enough.
Pseudo-Code:
*parser is BeautifulSoup HTML object*
for box in parser.find_all('span', class_="icon-type-2"):  # find all elements with particular icon
    xpath = box.get_xpath()
I'm willing to change my code entirely, as long as I can locate a particular element and extract its XPath. So any other ideas involving entirely different libraries are welcome.
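One possible route, sketched under the assumption that re-parsing the page source with lxml is acceptable: lxml can report an absolute XPath for any element it parsed, via getroottree().getpath(), and that XPath can then be handed back to Selenium.

from lxml import etree
from selenium.webdriver.common.by import By

html_source = driver.page_source          # the same HTML you were feeding to BeautifulSoup
tree = etree.HTML(html_source)
root = tree.getroottree()

# Find the icon spans on the lxml side, ask lxml for each element's XPath,
# and use that XPath to click the matching element in the browser.
for box in tree.xpath("//span[contains(@class, 'icon-type-2')]"):
    xpath = root.getpath(box)             # e.g. /html/body/div[2]/span[3]
    driver.find_element(By.XPATH, xpath).click()

One caveat: the XPath lxml builds reflects its own parsed tree, which can differ slightly from the live browser DOM, so treat this as a sketch rather than a guaranteed drop-in.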
I'm new to using Selenium, and I am having trouble figuring out how to click through all iterations of a specific element. To clarify, I can't even get it to click through one; it behaves like a dropdown but is defined as an element.
I am trying to scrape FanDuel; when you click on a specific game you are presented with a bunch of main title bets, and in order to get the information I need I have to click the dropdowns. There is also another dropdown labeled "See More", which is a similar problem, but I assume that once this is fixed I will be able to figure that out too.
So far, I have tried to use:
find_element_by_class_name()
find_element_by_css_selector()
I have also used the plural find_elements versions and tried to loop through and click each index of the resulting list, but that did not work.
If there are any ideas, they would be much appreciated.
FYI: I am using Beautiful Soup to scrape the website for the information; I figured Selenium would be helpful for making the information that isn't currently accessible, accessible.
This image shows the dropdowns that I am trying to access, in this case the dropdown 'Win Margin'. The HTML code is shown to the left of it.
This also shows that there are multiple dropdowns, varying in amount based off the game.
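For reference, this is roughly the looping approach I tried; the CSS class below is just a placeholder, not the real FanDuel one:

import time
from selenium.webdriver.common.by import By

# Collect every dropdown header, then click them one by one so the hidden
# bet markets get rendered into the page source.
dropdowns = driver.find_elements(By.CSS_SELECTOR, "div.accordion-header")  # placeholder selector
for i in range(len(dropdowns)):
    # Re-find the elements on each pass, since the DOM can re-render after a click.
    dropdowns = driver.find_elements(By.CSS_SELECTOR, "div.accordion-header")
    dropdowns[i].click()
    time.sleep(1)  # crude pause to let the expanded content load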
You can also try using ActionChains from Selenium:
menu = driver.find_element_by_css_selector(".nav")  # the parent menu element
hidden_submenu = driver.find_element_by_css_selector(".nav #submenu1")  # item revealed on hover
ActionChains(driver).move_to_element(menu).click(hidden_submenu).perform()
Source: here
I am trying to make a basic form filler using Selenium on nike.com. I have completed most of it, but I am having trouble clicking the button to select the gender. I have attempted many examples of find_element_by_xxxxx code, but none of it has worked. Finding elements by id and XPath hasn't come to much either. A typical error I get is "Message: no such element: Unable to locate element". I am very new to coding, so I could easily have made an error, but any ideas on how you would solve it would be much appreciated.
That XPath is very long and you can simplify it.
By the looks of it, I would guess those IDs are changing every time there is a new session.
A more straightforward XPath selector could be...
"//span[text() = 'Male']"
// specifies to search the entire document
span specifies the type of element to search for
text() specifies text that needs to be inside the element
(this will give you the span element but it should still work)
or
"//span[text() = 'Male']/parent::li//input"
(this will give you the actual input button)
Also, like Ollin Boer Bohan suggested, look into using waits before performing actions on your elements.
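Putting that together, a minimal sketch with an explicit wait (the 10-second timeout is arbitrary):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait until the gender option is actually clickable before clicking it.
male_input = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, "//span[text() = 'Male']/parent::li//input"))
)
male_input.click()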
cavan's answer is correct. Also, you can use an XPath like this:
//input[@type='button']/following::span[text()='Male']
Here we use the following axis to locate the Male option; you can do the same for the Female button.
I am attempting to scrape the Census website for ACS data. I have scripted the whole process using Selenium except the very last click. I am using Python. I need to click a download button that is in a window that pops up when the data is zipped and ready, but I can't seem to identify this button. It also seems that the button might change names based on when it was last run, for example yui-gen2, yui-gen3, etc., so I am thinking I might need to account for this somehow, although I normally only see yui-gen2.
Also, the tag seems to be in a "span" which might be adding to my difficulty honing in on the button I need to click.
Please help if you can shed any light on this for me.
code snippet:
#Refine search results to get tables
driver.find_element_by_id("prodautocomplete").send_keys("S0101")
time.sleep(2)
driver.find_element_by_id("prodsubmit").click()
driver.implicitly_wait(100)
time.sleep(2)
driver.find_element_by_id("check_all_btn_above").click()
driver.implicitly_wait(100)
time.sleep(2)
driver.find_element_by_id("dnld_btn_above").click()
driver.implicitly_wait(100)
driver.find_element_by_id("yui-gen0-button").click()
time.sleep(10)
driver.implicitly_wait(100)
driver.find_element_by_id("yui-gen2-button").click()
Instead of using the element id, which as you pointed out varies, you can use XPath as Nogoseke mentioned or CSS Selector. Be careful to not make the XPath/selector too specific or reliant on changing values, in this case the element id. Rather than using the id in XPath, try expressing the XPath in terms of the DOM structure (tags):
//*/div/div/div/span/span/span/button[contains(text(),'Download')]
TIL you can validate your XPath by using the search function, rather than by running it in Selenium. I right-clicked the webpage, "inspect element", ctrl+f, and typed in the above XPath to validate that it is the Download button.
For posterity, if the above XPath is too specific, i.e. it is reliant on too many levels of the DOM structure, you can do something shorter, like
//button[contains(text(),'Download')]
although, this may not be specific enough and may require an additional field, since there may be multiple buttons on the page with the 'Download' text.
Given the HTML you provided, you should be able to use
driver.find_element_by_id("yui-gen2-button")
I know you said you tried it but you didn't say if it works at all or what error message you are getting. If that never works, you likely have an IFRAME that you need to switch to.
If it works sometimes but not consistently due to changing ID, you can use something like
driver.find_element_by_xpath("//button[.='Download']")
In the code inspection view in Chrome you can right-click the item you want to find and copy its XPath. You can then find your element by XPath in Selenium.