How to extract website from Google Maps search - Python

I need exact company information such as name, website URL, etc. after a Google Maps search. Everything is OK, but at the last step I cannot locate the website element. The website opens after you click the image,
but I cannot find where the URL element is located.
Can anyone help me locate the website URL element? I am using Python Selenium; only the last step is missing. You can check the finished code below:
from selenium import webdriver
from time import sleep
driver = webdriver.Firefox()
driver.get('https://www.google.com/maps/@35.780287,104.1374349,4z')
sleep(8)
place = driver.find_element_by_class_name('tactile-searchbox-input')
place.send_keys('oil and gas solutions+UAE')
sleep(8)
submit = driver.find_element_by_id('searchbox-searchbutton')
submit.click()

Please find the working code below. Instead of sleep you should use WebDriverWait; always use explicit waits, it's good practice.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
driver = webdriver.Chrome()
driver.get('https://www.google.co.in/maps')
SearchTextbox = driver.find_element_by_id("searchboxinput")
SearchTextbox.send_keys("cafe coffee day")
SearchTextbox.send_keys(Keys.ENTER)
All_SearchResults = WebDriverWait(driver, 20).until(
    EC.presence_of_all_elements_located((By.XPATH, '//div[@data-value="Website"]/button')))
for CCD in All_SearchResults:
    print(CCD.get_attribute("aria-label"))
print("end...")
Do let me know if this is what you are looking for. And if it is then please mark it as answer.
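If you need the actual website URL rather than the button's accessible label, here is a hedged sketch. It assumes the "Website" control in each result is rendered as an anchor that carries the destination in its href attribute; that is an assumption about Google Maps markup (which changes often), not something confirmed above:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
driver = webdriver.Chrome()
driver.get('https://www.google.co.in/maps')
search_box = driver.find_element_by_id("searchboxinput")
search_box.send_keys("oil and gas solutions+UAE")
search_box.send_keys(Keys.ENTER)
# Assumption: the "Website" control is an <a data-value="Website" href="..."> anchor.
website_links = WebDriverWait(driver, 20).until(
    EC.presence_of_all_elements_located((By.XPATH, '//a[@data-value="Website"]')))
for link in website_links:
    print(link.get_attribute("href"))  # the company's website URL, if present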

Related

Python find Element mismatch

I was having trouble with find_elements by XPath in a Selenium script in Python.
I couldn't understand the problem; I want to identify all of the elements that satisfy my XPath criteria.
So I simply compared the result of find_elements(By.XPATH, '//div') to the result of the same query in Chrome's Inspect tool.
The tool reports 300+ results, whereas my program, which I got to return a count of results, only reports about 60. Any ideas? It looks like something on the browser/site side rather than in my code.
Thanks
EDIT: added my code:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.select import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import StaleElementReferenceException
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
import login
driver = webdriver.Chrome()  # driver setup was not shown in the original post; assumed here so the snippet runs
print(time.time())
driver.get("WEBSITE URL")
WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//body")))
time.sleep(1)
links = driver.find_elements(By.XPATH, '//body')
linklist = len(links)
print(linklist)
This provides a result of around 60; however, when '//body' is put into the Inspect tool, it returns 300+.
Apologies, but the targeted webpage is confidential so I can't post it. Any suggestions are appreciated so I can investigate.
Does your web page have a scroll? If yes, you may need to deal with the scroll bar. Also try different conditions for WebDriverWait, for example EC.visibility_of_all_elements_located or EC.presence_of_all_elements_located.
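If the page lazy-loads content on scroll, a minimal sketch like the one below keeps scrolling until the element count stops growing and then prints the count. The URL and the '//div' locator are placeholders, not taken from the question:
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL
# Wait until at least one matching element is present before counting.
WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.XPATH, "//div")))
# Lazy-loaded pages often append more elements on each scroll,
# so keep scrolling until the count stops changing.
previous_count = -1
while True:
    elements = driver.find_elements(By.XPATH, "//div")
    if len(elements) == previous_count:
        break
    previous_count = len(elements)
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(1)  # give the page a moment to load newly appended content
print(previous_count)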

Use Python Selenium to extract span text

Hi, I'm new to Selenium and web scraping and I need some help.
I am trying to scrape one site and I don't know how to get the text of a span by its class.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
PATCH = "/Users/bobo/Downloads/chromedriver"
driver = webdriver.Chrome(PATCH)
driver.get("https://neonet.pl")
print(driver.title)
search = driver.find_element_by_class_name("inputCss-input__label-263")
search.send_keys(Keys.RETURN)
time.sleep(5)
I am trying to extract this span:
<span class="inputCss-input__label-263">Szukaj produktu</span>
I can see that you are trying to search for something in the search bar.
First, I recommend using an XPath instead of the class name. Here is a simple technique to get the XPath of any element on a webpage:
right-click and choose Inspect Element, select the element picker (the mouse-in-a-box icon in the upper left), click the element on the webpage so the corresponding HTML is highlighted, then right-click the highlighted HTML, open the copy options, and choose XPath.
Here is a code example that searches for an element on the webpage. I also included WebDriverWait, because sometimes the code runs too fast and can't find the next element, so the wait makes the code pause until the element is visible:
from selenium import webdriver
from selenium.webdriver.common.by import By
from time import sleep
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome(executable_path="/Users/bobo/Downloads/chromedriver")
driver.get("https://neonet.pl")  # loading page
wait = WebDriverWait(driver, 20)  # defining webdriver wait
search_word = 'iphone\n'  # \n is going to act as an enter key
wait.until(EC.visibility_of_element_located((By.XPATH, '//*[@id="root"]/main/div[1]/div[4]/div/div/div[2]/div/button[1]'))).click()  # clicking on cookies popup
wait.until(EC.visibility_of_element_located((By.XPATH, '//*[@id="root"]/main/header/div[2]/button'))).click()  # clicking on search button
wait.until(EC.visibility_of_element_located((By.XPATH, '//*[@id="root"]/aside[2]/section/form/label/input'))).send_keys(search_word)  # typing into the search input
print('done!')
sleep(10)
Hope this helped you!
wait = WebDriverWait(driver, 10)
driver.get('https://neonet.pl')
elem = wait.until(EC.visibility_of_element_located((By.XPATH, "//span[contains(@class,'inputCss-input')]"))).text
print(elem)
To output the text of the span you use .text on the Selenium WebElement.
Imports:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
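For completeness, a minimal self-contained version of the snippet above; the chromedriver path and driver setup are assumptions that mirror the question, not part of the original answer:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome("/Users/bobo/Downloads/chromedriver")  # adjust the chromedriver path for your machine
wait = WebDriverWait(driver, 10)
driver.get('https://neonet.pl')
# Wait for the span to be visible, then read its text content.
elem = wait.until(EC.visibility_of_element_located((By.XPATH, "//span[contains(@class,'inputCss-input')]"))).text
print(elem)  # expected output: Szukaj produktu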

Python Selenium - send_keys not sending any information to element

I am trying out Selenium for the first time so I apologize if there is an obvious mistake or problem with my code.
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('https://youtube.com')
searchBox = driver.find_element_by_id('search')
searchBox.send_keys('Programming')
searchButton = driver.find_element_by_id('search-icon-legacy')
searchButton.click()
So I tried this and it loads the page fine, but it does not input any characters into the searchBox (I quadruple-checked that the id was correct - copied it directly from the inspector).
NOTE:
My internet is really REALLY slow and it takes YouTube approx. 20 seconds to fully load, so I thought that was the issue and tried:
...
driver.get('https://youtube.com')
driver.implicitly_wait(30)
searchBox = driver.find_element_by_id('search')
...
But this did not work either.
I used XPath instead of finding the element by ID at the start, and that did not work.
I checked and copied the XPaths and IDs directly from the inspector, and nothing so far has inputted anything into the textbox.
What could be the problem? (1)
And does the webdriver wait for the page to load/find the element before doing anything after being initialized with driver.get('websiteAddress')? (2)
NOTE: I double-checked that I was selecting the right element as well.
To send keys to the input tag with id="search", use WebDriverWait after driver.get so the page loads correctly and the element becomes usable.
driver.get('https://youtube.com')
WebDriverWait(driver, 30).until(EC.element_to_be_clickable((By.XPATH, "//input[@id='search']"))).send_keys("Programming")
Imports:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
If you don't know the waiting time:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
delay = 40
WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.XPATH, "//form[@role='form']//input[@id='username']")))
Then it just waits on the element for as long as delay allows, but will continue as soon as the element is found; that is the best way to wait on slow connections.
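If the element may never appear, you can also catch the timeout explicitly. A small sketch, reusing the driver from the snippet above; the By.ID locator is a placeholder, not one from the question:
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
delay = 40
try:
    # Returns as soon as the element is present, or raises after `delay` seconds.
    element = WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.ID, "username")))  # placeholder locator
except TimeoutException:
    print("Element did not appear within", delay, "seconds")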
To locate elements more easily you can use ChroPath, an extension for Google Chrome / Edge that shows the path of an element as a cssSelector, Abs XPath, Rel XPath, and the tag name, so when your code is not working you can try these other ways. This extension helps me a lot.
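For illustration, the same element can usually be reached with any of these locator strategies; the page and selectors below are made-up examples, not taken from the question:
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder page
# All four lines target the same hypothetical <input id="search"> element.
driver.find_element(By.CSS_SELECTOR, "input#search")           # CSS selector
driver.find_element(By.XPATH, "/html/body/div[1]/form/input")  # absolute XPath (brittle)
driver.find_element(By.XPATH, "//input[@id='search']")         # relative XPath
driver.find_element(By.TAG_NAME, "input")                      # tag name (first match only)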

Python 2.7 Selenium No Such Element on Website

I'm trying to do some webscraping from a betting website:
As part of the process, I have to click on the different buttons under the "Favourites" section on the left side to select different competitions.
Let's take the ENG Premier League button as an example. I identified the button in the page's HTML (screenshot omitted).
The XPath is //*[@id="SportMenuF"]/div[3] and the ID is 91.
My code for clicking on the button is as follows:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
chrome_path = r"C:\Python27\Scripts\chromedriver_win32\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
driver.get("URL Removed")
content = driver.find_element_by_xpath('//*[@id="SportMenuF"]/div[3]')
content.click()
Unfortunately, I always get this error message when I run the script:
"no such element: Unable to locate element:
{"method":"xpath","selector":"//*[#id="SportMenuF"]/div[3]"}"
I have tried different identifiers such as CSS selector, ID and, as shown in the example above, the XPath. I tried using waits and explicit conditions, too. None of this has worked.
I also attempted scraping some values from the website, without any success:
from selenium import webdriver
from selenium.webdriver.common.by import By
chrome_path = r"C:\Python27\Scripts\chromedriver_win32\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
driver.get("URL removed")
content = driver.find_elements_by_class_name('price-val')
for entry in content:
    print entry.text
Same problem, nothing shows up.
The website embeds an iframe from a different website. Could this be the cause of my problems? I tried scraping directly from the iframe URL, too, which didn't work either.
I would appreciate any suggestions.
Sometimes elements are either hiding behind an iframe, or they haven't loaded yet
For the iframe check, try:
driver.switch_to.frame(0)
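If the target element lives inside a specific iframe, a slightly fuller sketch is to switch into that frame by element, interact, and switch back; the tag-name locator here is a placeholder, not the site's real markup:
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder; the real URL was removed in the question
# Switch into the iframe that contains the target element.
iframe = driver.find_element(By.TAG_NAME, "iframe")
driver.switch_to.frame(iframe)
# ... locate and interact with elements inside the iframe here ...
# Return to the top-level document when done.
driver.switch_to.default_content()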
For the wait check, try:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, '-put the x-path here-')))

Unable to submit keys using selenium with python

Following is the code which I'm trying to run:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import os
import time
# Create a new Firefox session
browser = webdriver.Firefox()
browser.maximize_window()
# Navigate to the app's homepage
browser.get('http://demo.magentocommerce.com/')
# Get the search box, clear it and enter details
browser.find_element_by_css_selector("a[href='/search']").click()
search = browser.find_element_by_class_name('search-input')
search.click()
time.sleep(5)
search.click()
search.send_keys('phones' + Keys.RETURN)
However, I'm unable to submit "phones" using send_keys.
Am I going wrong somewhere?
Secondly, is it possible to always use XPath to locate an element and not rely on id/class/CSS selectors, etc.?
The input element you are interested in has the search-query class name. To make it work without using hardcoded time.sleep() delays, use an Explicit Wait and wait for the search input element to be visible before sending keys to it. Working code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
browser = webdriver.Firefox()
browser.maximize_window()
wait = WebDriverWait(browser, 10)
browser.get('http://demo.magentocommerce.com/')
browser.find_element_by_css_selector("a[href='/search']").click()
search = wait.until(EC.visibility_of_element_located((By.CLASS_NAME, "search-query")))
search.send_keys("phones" + Keys.RETURN)
