I am trying to download a CSV file, but Selenium is not able to find the element.
I am able to fill in the form. The website URL is "https://www.nseindia.com/products/content/equities/equities/eq_security.htm"
To download the CSV I have to click on the element with the text "Download file in csv format", but Selenium cannot find it.
print(browser.find_element_by_xpath('//*[@id="historicalData"]/div[1]/span[2]/a').click())
I have also tried using a CSS selector, link text, and tag name, but I keep getting an error that the element cannot be located.
So what you are going to need here is

browser.find_element_by_xpath("//a[contains(text(), 'Download file in csv format')]").click()

Note the outer double quotes: with single quotes on the outside, the single quotes around the link text terminate the string early.
An absolute XPath that chains many elements is very sensitive to changes in the page structure.
I am pulling review data from a website, and when I scrape with a CSS selector or XPath I only get the text leading up to the "read more..." bit and none after it. Is there a way I can scrape the data without clicking the "read more..." bit?
If not, any ideas on how to go about scraping all the text? Do I need to locate the "read more" element, click it (its position will change based on the length of the preceding text), and then extract using a CSS selector or XPath?
[Image of the "read more..." element](https://i.stack.imgur.com/wdx49.png)
While using Selenium to automate a reverse address search, I am unable to retrieve the information from a card in the DOM. When I copy the specific XPath of the text, not to my surprise it is a text node instead of an element. Any solutions?
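The "Copy XPath" result points at a text node because an XPath ending in `text()` selects a string, not an element, and Selenium's `find_element` only accepts element nodes. The usual fix is to target the enclosing element and read its text instead. A small illustration with lxml; the card markup here is invented:

```python
from lxml import html

# Invented stand-in for the card in the DOM
card = html.fromstring("<div class='card'><span>221B Baker St</span></div>")

# An XPath ending in text() yields plain strings -- nothing to click or query
print(card.xpath("//span/text()"))   # ['221B Baker St']

# Address the enclosing element instead, then read its text content
span = card.xpath("//span")[0]
print(span.text_content())           # 221B Baker St
```

In Selenium terms: drop the trailing `/text()` from the copied XPath, locate the element, and read its `.text` property.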
I've been researching this for two days now, and there seems to be no simple way of doing it. I can find an element on a page by downloading the HTML with Selenium and passing it to BeautifulSoup, then searching by classes and strings. I want to click on this element after finding it, so I want to pass its XPath to Selenium. I have no minimal working example, only pseudo-code for what I'm hoping to do.
Why is there no function/library that lets me search through the HTML of a webpage, find an element, and then request its XPath? I can do this manually by inspecting the page and clicking 'Copy XPath'. I can't find any solutions to this on Stack Overflow, so please don't tell me I haven't looked hard enough.
Pseudo-Code:
*parser is a BeautifulSoup HTML object*
for box in parser.find_all('span', class_="icon-type-2"):  # find all elements with a particular icon
    xpath = box.get_xpath()  # hypothetical method -- this is what I'm looking for
I'm willing to change my code entirely, as long as I can locate a particular element and extract its XPath, so ideas involving entirely different libraries are welcome.
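Since other libraries are welcome: lxml can do both halves of this — find elements by class (like `find_all` in BeautifulSoup) and report each one's absolute XPath via `getpath()`. A sketch, with invented markup standing in for your page:

```python
from lxml import html

# Invented page: two elements with the icon class from the pseudo-code
doc = html.fromstring("""
<html><body>
  <div><span class="icon-type-2">first</span></div>
  <div><span class="icon-type-2">second</span></div>
</body></html>
""")

tree = doc.getroottree()
for box in doc.find_class("icon-type-2"):
    xpath = tree.getpath(box)  # e.g. /html/body/div[1]/span
    print(xpath)
    # hand this path to Selenium:
    # browser.find_element_by_xpath(xpath).click()
```

`getpath()` returns the same kind of absolute, index-qualified path that the browser's "Copy XPath" produces, so it can be passed straight to Selenium.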
I have the following html structure:
I would like to extract the text ("“Business-Thinking”-Fokus im Master-Kurs") from the highlighted span (using Scrapy); however, I have trouble reaching it, as it does not have any specific class or id.
I tried to access it with the following absolute XPath:
sel.xpath('/html/body/div[4]/div[1]/div/div/h1/span/text()').extract()
I don't get any error; however, it returns a blank result, meaning the text is not extracted.
Note: the parent classes are not unique, which is why I'm not using a relative path. As the text varies, I also cannot reach the span by matching the text it contains.
Do you have any suggestion on how I should modify my xPath to extract the text? Thanks!
If you load the page using scrapy shell <url>, it loads without JavaScript.
When you look at the source without JavaScript, the XPath to the span is /html/body/div/div[1]/div/div/h1/span
To load webpages that require JavaScript in Scrapy, use Splash.
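Independent of Splash, a relative XPath keyed on the h1 sidesteps the problem entirely, because the expression no longer depends on how many wrapper divs exist with or without JavaScript. A quick check with lxml — the markup below is an invented stand-in mirroring the structure described; in Scrapy the same expression would be `sel.xpath('//h1/span/text()').extract()`:

```python
from lxml import html

# Invented markup mirroring the nested-div structure from the question
doc = html.fromstring("""
<html><body>
<div><div><div><div>
  <h1><span>\u201cBusiness-Thinking\u201d-Fokus im Master-Kurs</span></h1>
</div></div></div></div>
</body></html>
""")

# Relative XPath keyed on the h1, instead of counting nested divs
print(doc.xpath("//h1/span/text()")[0])
```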
I'm working on a scraper project, and one of the goals is to get every image link from a website's HTML & CSS. I was using BeautifulSoup & TinyCSS for that, but now I'd like to switch everything over to Selenium so I can load the JS.
I can't find in the docs a way to target a CSS property without knowing the tag/id/class. I can get the images from the HTML easily, but I need to target every "background-image" property in the CSS in order to get the URL out of it.
ex: background-image: url("paper.gif");
Is there a way to do it or should I loop into each element and check the corresponding CSS (which would be time-consuming)?
You can grab all the <style> tags and parse them, searching for what you're looking for.
You can also download the CSS files, using their resource URLs, and parse those.
Alternatively, you can write an XPath/CSS rule to find nodes whose inline style contains the property you're looking for.
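For the "parse the style tags / CSS files" route, a simple regex is often enough to pull the url(...) values out of background-image declarations. A pure-Python sketch with an invented CSS sample; in Selenium you could feed it the text of each <style> element, or query a single element directly with `element.value_of_css_property("background-image")`:

```python
import re

# Invented CSS sample -- in practice, the text of a <style> tag or .css file
css = """
.hero { background-image: url("paper.gif"); }
.card { background: #fff url('img/bg.png') no-repeat; }
"""

# Capture the argument of url(...), with or without surrounding quotes
urls = re.findall(r"""url\(\s*['"]?([^'")]+)['"]?\s*\)""", css)
print(urls)  # ['paper.gif', 'img/bg.png']
```

This avoids looping over every element in the DOM; you only scan the stylesheets themselves.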