Python Selenium XPath using contains and not contains

I am trying to get links whose title contains a certain word and at the same time does not contain certain other words. I use the following code, but it says it is not a valid XPath expression.
Please find my code here:
Any help will be highly appreciated!
driver.get("http://www.csisc.cn/zbscbzw/isinbm/index_list_code.shtml")
while True:
links = [link.get_attribute('href') for link in driver.find_elements_by_xpath("//a[(contains(#title,'公司债券')and not(contains(#title,'短期'))]")]
for link in links:
driver.get(link)
#dosth

There is an extra bracket in your XPath. Use
links = [link.get_attribute('href') for link in driver.find_elements_by_xpath("//a[contains(@title,'公司债券') and not(contains(@title,'短期'))]")]
instead
You can use Chrome developer tools first to validate your XPaths.
PS: I changed the XPath here a bit to be able to find some elements on my page.

There should be a space before and. There is also an extra leading bracket in your XPath. Try:
"//a[contains(#title,'公司债券') and not(contains(#title,'短期'))]"


Selenium - how to make find_elements readable

Basic concept I know:
find_element finds a single element. We can use .text or get_attribute('href') to make the element readable. Since find_elements returns a list, we can't call .text or get_attribute('href') on it directly, otherwise it shows no such attribute.
To scrape readable information from find_elements, we can use a for loop:
vegetables_search = driver.find_elements(By.CLASS_NAME, "product-brief-wrapper")
for i in vegetables_search:
    print(i.text)
Here is my problem: when I use find_element, it shows the same result every time. I searched for this on the internet, and the answer said it's because find_element only returns a single result. Here is my code, which is meant to grab the different URLs:
links.append(driver.find_element(By.XPATH, ".//a[@rel='noopener']").get_attribute('href'))
But I don't know how to combine the results in pandas. If I run this code, the links variable prints the same URL throughout the CSV file...
vegetables_search = driver.find_elements(By.CLASS_NAME, "product-brief-wrapper")
Product_name = []
links = []
for search in vegetables_search:
    Product_name.append(search.find_element(By.TAG_NAME, "h4").text)
    links.append(driver.find_element(By.XPATH, ".//a[@rel='noopener']").get_attribute('href'))
# use pandas to export the information
df = pd.DataFrame({'Product': Product_name, 'Link': links})
df.to_csv('name.csv', index=False)
print(df)
To be sure, if I use the loop on its own, it shows different links. (That means my XPath is correct, right!?)
product_link = driver.find_elements(By.XPATH, "//a[@rel='noopener']")
for i in product_link:
    print(i.get_attribute('href'))
My questions:
Besides using a for loop, how can I make find_elements readable, just like find_element(By.attribute, 'content').text?
How can I take my code a step further? I cannot print out different URLs.
Thanks so much. ORZ
This is the HTML code inspected from the website (not reproduced here).
This line:
links.append(driver.find_element(By.XPATH, ".//a[@rel='noopener']").get_attribute('href'))
should be changed to:
links.append(search.find_element(By.XPATH, ".//a[@rel='noopener']").get_attribute('href'))
driver.find_element(By.XPATH, ".//a[@rel='noopener']").get_attribute('href') will always find the first element in the DOM matching the .//a[@rel='noopener'] XPath locator, while you want to find the match inside another element.
To do so, you need to replace the WebDriver driver object with the WebElement search object you want to search inside, as shown above.
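Putting it together, a minimal sketch of the corrected export, assuming pandas is imported as pd, the usual Selenium imports are in place, and the driver is already on the product listing page:
vegetables_search = driver.find_elements(By.CLASS_NAME, "product-brief-wrapper")
Product_name = []
links = []
for search in vegetables_search:
    # both lookups are scoped to the current card via 'search', not 'driver'
    Product_name.append(search.find_element(By.TAG_NAME, "h4").text)
    links.append(search.find_element(By.XPATH, ".//a[@rel='noopener']").get_attribute('href'))
df = pd.DataFrame({'Product': Product_name, 'Link': links})
df.to_csv('name.csv', index=False)
As for reading find_elements results without an explicit loop, a list comprehension such as [e.text for e in vegetables_search] is the usual idiom; a list of elements has no .text of its own.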

Getting href with Selenium and Python

I am trying to get the href with Selenium and Python.
This is my page:
Some class information changes depending on the element. So basically I am trying to get all the href values for <a id="job____ .....
links.append(job.find_element_by_xpath('//a[@aria-live="polite"]//span').get_attribute(name="href"))
I tried a couple of things but can't figure out how. How can I get all the href values from the screenshot above?
Try this, but be careful: your XPath
"//a[@aria-live="polite"]//span"
will get a span, and I don't see any span with an href in your HTML. Maybe this XPath solves it:
//a[./span[@aria-live="polite"]]
links.append(job.find_element_by_xpath('//a[./span[@aria-live="polite"]]').get_attribute("href"))
But it won't get all the URLs. For that, use find_elements (which returns a list) and extend your URL list with a list comprehension:
links.extend([x.get_attribute("href") for x in job.find_elements_by_xpath('//a[./span[@aria-live="polite"]]')])
Edit 1, another XPath solution:
links.extend(["website_base_url"+x.get_attribute("href") for x in job.find_elements_by_xpath('//a[contains(@id, "job_")]')])
list_of_elements_with_href = wd.find_elements_by_xpath("//a[contains(@href,'')]")
for el_with_href in list_of_elements_with_href:
    links.append(el_with_href.get_attribute("href"))
or if you need to be more specific:
list_of_elements_with_href = wd.find_elements_by_xpath("//a[contains(@href,'') and contains(@id,'job_')]")
Based on your description and the attached image, I think you have got the wrong XPath. Try the following code.
find_links = driver.find_elements_by_xpath("//a[starts-with(@id,'job_')]")
links = []
for link in find_links:
    links.append(link.get_attribute("href"))
Please note it is elements in find_elements_by_xpath, not element.
I am unable to test this solution as you have not provided the website.
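For reference, a self-contained sketch of that approach (the job_ id prefix comes from the question's screenshot; the URL is a placeholder, since the real site was not shared):
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/jobs")  # placeholder URL; the OP did not provide the site

# collect the href of every anchor whose id starts with 'job_'
find_links = driver.find_elements_by_xpath("//a[starts-with(@id,'job_')]")
links = [link.get_attribute("href") for link in find_links]
print(links)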

Extract a hyperlink from a website - Selenium

I have been attempting to solve this issue for some time and tried multiple solutions posted on here prior to opening this question.
I am currently attempting to run a scraper with the following code:
from pathlib import Path
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
website = 'https://www.abitareco.it/nuove-costruzioni-milano.html'
path = Path().joinpath('util', 'chromedriver')
driver = webdriver.Chrome(path)
driver.get(website)
main = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.NAME, "p1")))
My target hyperlink has the word scheda in it:
i = driver.find_element_by_xpath('.//a[contains(@href, "scheda")]')
i.text
My first issue is that find_element_by_xpath only outputs a single hyperlink, and my second issue is that it is not extracting anything so far.
I'd appreciate any help and/or guidance.
You need to use find_elements instead:
for name in driver.find_elements(By.XPATH, ".//a[contains(@href, 'scheda')]"):
    print(name.text)
Note that find_elements will return a list of web elements, whereas find_element returns a single web element.
If you are specifically looking for the href attribute, then you can try the code below:
for name in driver.find_elements(By.XPATH, ".//a[contains(@href, 'scheda')]"):
    print(name.get_attribute('href'))
There are 2 issues, looking at the website:
You want to find all elements, not just one, so you need to use find_elements, not find_element
The anchors actually don't have any text in them, so .text won't return anything.
Assuming what you want is to scrape the URLs of all these links, you can use .get_attribute('href') instead of .text, like so:
url_list = driver.find_elements(By.XPATH, './/a[contains(@href, "scheda")]')
for i in url_list:
    print(i.get_attribute('href'))
It will collect all web elements that match your criteria and store them in a list. I just used print as an example, but obviously you may want to do more than just print the links.

How to get href links from a list using Selenium and Python

Here is the code of the web page.
The XPath of the search-results-list container grid is
//*[@id="product_type_products_list"]/div/div[2]/div
and the XPath of a result is
//*[@id="product_type_products_list"]/div/div[2]/div/div[1]
I have tried using:
elems = driver.find_elements_by_xpath('//*[@id="product_type_products_list"]/div/div[2]/div')
url = driver.find_element_by_link_text(elems[0].text).get_attribute("href")
print(url)
This gives the link to the beginning of the page instead.
Thank you for your consideration.
The code you've provided doesn't look like valid HTML to me; however, you can try the following XPath expression:
//div[@class='result']/descendant::a
More information:
XPath Tutorial
XPath Axes
XPath Functions and Operators
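A short sketch of how that expression could be plugged in, assuming the result rows really carry class="result" as described:
# every anchor anywhere under a div with class 'result'
elems = driver.find_elements_by_xpath("//div[@class='result']/descendant::a")
urls = [e.get_attribute("href") for e in elems]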
Try narrowing it down to the <a> tag by extending the XPath like so:
elems = driver.find_elements_by_xpath('.//*[@id="product_type_products_list"]/div/div[2]/div/div[1]/a')
Then just retrieve the href attribute like you did earlier, but using the same element:
url = elems[0].get_attribute("href")
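And if every result link is wanted rather than just the first, the same idea extends to the whole list:
urls = [e.get_attribute("href") for e in elems]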

Find elements by XPath with fallback

for products in self.br.find_elements_by_xpath("//*[@class='image']/a"):
    self.urls.append(products.get_attribute("href"))
This code finds all href links by class.
My problem is that the webpage's source changes: sometimes it is //*[@class='image']/a, but sometimes //*[@class='newPrice']/a. How can I change the for loop to use the other expression if the first XPath option finds nothing?
Store the output in a variable first:
links = self.br.find_elements_by_xpath("//*[@class='image']/a")
if not links:
    links = self.br.find_elements_by_xpath("//*[@class='newPrice']/a")
for products in links:
    self.urls.append(products.get_attribute("href"))
Not equivalent to a fallback, but you could use the XPath union operator | to match either:
for products in self.br.find_elements_by_xpath(
        "//*[@class='image']/a | //*[@class='newPrice']/a"):
    self.urls.append(products.get_attribute("href"))
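If more locators might appear later, the true fallback can be generalized into a small helper; a sketch, using the same legacy find_elements_by_xpath API as the rest of this question:
def find_with_fallback(browser, *xpaths):
    # return the matches for the first XPath that finds anything
    for xp in xpaths:
        elems = browser.find_elements_by_xpath(xp)
        if elems:
            return elems
    return []

for products in find_with_fallback(self.br, "//*[@class='image']/a", "//*[@class='newPrice']/a"):
    self.urls.append(products.get_attribute("href"))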
