I'm trying to figure out why the following script works when I launch it as the pi user on my Raspberry Pi but not as the root user.
Goal: it should open Chromium full screen and log into the website.
As root, it opens the web client but doesn't display anything: the screen is white, then it shows a data; page followed by a Privacy error page.
#!/home/pi/Documents/raspberry_screen_chrome_script/selenium/bin/python3
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager
username_django = 'username'
password_django = 'password'
chrome_options = Options()
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument("--remote-debugging-port=9222")
chrome_options.add_experimental_option("useAutomationExtension", False)
chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])
chrome_options.add_argument("--start-fullscreen")
chrome_options.add_argument("--kiosk")
driver = webdriver.Chrome('/usr/lib/chromium-browser/chromedriver', options=chrome_options)
driver.get("my_url")
driver.find_element_by_id('id_username').send_keys(username_django)
driver.find_element_by_id('password-input').send_keys(password_django)
driver.find_element_by_id('submit-login').click()
What I used:
selenium==3.141.0
webdriver-manager==3.4.2
Chromium 88.0.4324.187, built on Raspbian, running on Raspbian 10
ChromeDriver 88.0.4324.187
Output when launching it with root user
root@raspberrypi:/home/pi/Documents/raspberry_screen_chrome_script# ./interface_login_local.py
Traceback (most recent call last):
File "./interface_login_local.py", line 21, in <module>
driver.find_element_by_id('id_username').send_keys(username_django)
File "/home/pi/Documents/raspberry_screen_chrome_script/selenium/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 360, in find_element_by_id
return self.find_element(by=By.ID, value=id_)
File "/home/pi/Documents/raspberry_screen_chrome_script/selenium/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 978, in find_element
'value': value})['value']
File "/home/pi/Documents/raspberry_screen_chrome_script/selenium/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/home/pi/Documents/raspberry_screen_chrome_script/selenium/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="id_username"]"}
(Session info: chrome=88.0.4324.187)
Edit (Added Gifs to explain behaviour):
Behaviour as Pi User
Behaviour as Root User (the white blocks appear when I try to right-click)
Since you do not share the actual URL you are opening with this code, we can only guess at the issue.
It could be one of the following:
You need to add a wait / delay before accessing the username element, so the page finishes loading before you access the element.
The element locator may be wrong.
The element may be inside an iframe.
Related
I am just getting started with Selenium and I am having some issues trying to locate different elements on a particular website. The website I want to crawl is https://connect.garmin.com/signin since they unfortunately do not provide API access to individuals.
In any case, the element I am trying to "find" is the login box. I am having a lot of trouble with that, and I do not know what I am doing wrong at this point.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("https://connect.garmin.com/signin")
element = driver.find_element("name", "username")
element.clear()
element.send_keys("testing@gmail.com")
This is my current code.
Traceback (most recent call last):
File "/Users/will/PycharmProjects/GarminCrawlerTwitter/Crawler.py", line 7, in <module>
element = driver.find_element("name", "username")
File "/Users/will/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 855, in find_element
return self.execute(Command.FIND_ELEMENT, {
File "/Users/will/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 428, in execute
self.error_handler.check_response(response)
File "/Users/will/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 243, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: [name="username"]
Stacktrace:
WebDriverError#chrome://remote/content/shared/webdriver/Errors.jsm:188:5
NoSuchElementError#chrome://remote/content/shared/webdriver/Errors.jsm:400:5
element.find/</<#chrome://remote/content/marionette/element.js:292:16
And that is the error message. It seems to run into issues trying to find the element "username". I tried using the XPaths //*[@id="username"] and //*[@name="username"], but with the same results.
I have also used driver.implicitly_wait(3), thinking the site might not have been fully loaded before I tried to locate the username element, but that does not seem to be the cause either.
@Zeno, the login pop-up is inside an iframe, so you first need to switch to that frame. The following should work.
driver.get("https://connect.garmin.com/signin")
iframe = driver.find_element(By.ID, "gauth-widget-frame-gauth-widget")
driver.switch_to.frame(iframe)
element = driver.find_element("name", "username")
element.clear()
element.send_keys("testing@gmail.com")
You will need the following import, since I have used By:
from selenium.webdriver.common.by import By
I am trying to use selenium with chrome driver to connect to a website. But it couldn't be reached.
Here is my code:
from selenium import webdriver
from selenium.webdriver.common.by import By
CHROME_EXECUTABLE_PATH = "C://Program Files (x86)//Chrome Driver//chromedriver.exe"
CHROME_OPTIONS = webdriver.ChromeOptions()
CHROME_OPTIONS.add_argument("--disable-notifications")
BASE_URL = "https://www.nordstrom.com/"
driver = webdriver.Chrome(executable_path=CHROME_EXECUTABLE_PATH, options=CHROME_OPTIONS)
# locators
search_button_locator = "//a[@id='controls-keyword-search-popover']"
search_box_locator = "//*[@id='keyword-search-input']"
driver.get(BASE_URL)
driver.find_element(By.XPATH, search_button_locator)
driver.find_element(By.XPATH, search_box_locator).send_keys("Fave Slipper")
This code gives me an error:
E:\Python\Nordstrom.com\venv\Scripts\python.exe E:/Python/Nordstrom.com/pages/simple.py
Traceback (most recent call last):
File "E:\Python\Nordstrom.com\pages\simple.py", line 14, in <module>
driver.find_element(By.XPATH, search_button_locator)
File "E:\Python\Nordstrom.com\venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 976, in find_element
return self.execute(Command.FIND_ELEMENT, {
File "E:\Python\Nordstrom.com\venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "E:\Python\Nordstrom.com\venv\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//a[@id='controls-keyword-search-popover']"}
(Session info: chrome=94.0.4606.61)
Process finished with exit code 1
The page looks like this:
But the expected page should look like this:
How to access this website?
The error says it was unable to locate the XPath element, which is why it failed.
The main causes for this are usually:
the XPath is wrong
the element has not loaded yet on the page
the site has detected your scraping attempt and blocked you
In this case it's a combination of the 2nd and 3rd options. Whenever you use a webdriver, it exposes JavaScript hooks that websites can detect. To hide your activity, you should learn more about how device fingerprinting works, and either customize your script to hide itself or use a pre-made solution for it (such as PhantomJS).
Most likely you should also look into hiding your IP by using a proxy.
There may be a problem with your 'BASE_URL', so try another browser to debug the issue, and also use an explicit wait before clicking or locating any element.
I'm trying to scrape Amazon product prices, and I want to scrape the price text without opening a Chrome browser window. I searched for this on the Internet but it didn't help me.
This is my code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
# Driver and link
driver = webdriver.Chrome('C:/Users/musta/Desktop/chromedriver.exe')
driver.get("https://www.amazon.com/dp/b07h9fldcd")
getText = driver.find_element_by_class_name("a-section a-spacing-micro").get_attribute("textContent")
print(getText)
driver.close()
But this didn't work. It keeps giving me this error message:
Traceback (most recent call last):
File "C:\Users\musta\Desktop\asd.py", line 8, in <module>
getText = driver.find_element_by_class_name("a-section a-spacing-micro").get_attribute("textContent")
File "C:\Users\musta\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 564, in find_element_by_class_name
return self.find_element(by=By.CLASS_NAME, value=name)
File "C:\Users\musta\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 978, in find_element
'value': value})['value']
File "C:\Users\musta\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\musta\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".a-section a-spacing-micro"}
(Session info: chrome=91.0.4472.114)
What should I do? I'm stuck here. I want to scrape the price from the given div without opening a Chrome browser window. Hope you understand what I mean.
You need a headless browser, I think:
options = webdriver.ChromeOptions()
options.add_argument("--headless")
options.add_argument("start-maximized")
driver = webdriver.Chrome(r'C:/Users/musta/Desktop/chromedriver.exe', options = options)
driver.get("https://www.amazon.com/dp/b07h9fldcd")
And regarding your error: in Selenium, class names do not work with spaces.
so instead of this,
.a-section a-spacing-micro
do this with css :
.a-section.a-spacing-micro
so the code should look like this:
getText = driver.find_element_by_css_selector(".a-section.a-spacing-micro").get_attribute("innerHTML")
print(getText.strip())
If it is just the price you want to grab, try this CSS:
span#price_inside_buybox
in code it would look like this:
getText = driver.find_element_by_css_selector("span#price_inside_buybox").text
print(getText.strip())
There are two parts to your question:
You can open Chrome in headless mode so it won't render a window in the frontend.
Inspect the element and confirm the id/class name you pass to the locator before calling get_attribute.
I am working on a project using Selenium WebDriver which requires me to locate an element and click on it. The program starts by entering a website, clicking the search bar, typing in a pre-entered string and pressing Enter; up to this point everything works. Next I want it to find the first result of the search and click on it. This is the part I am having trouble with: I have successfully located all elements up to this point, but I can't locate this one, as an error pops up. Here is my code:
from selenium import webdriver
import time
trackname = input("Track Name: ")
driver = webdriver.Chrome('D:\WebDrivers\chromedriver.exe')
driver.get('https://music.apple.com/us/artist/search/166949667')
time.sleep(2)
searchbox = driver.find_element_by_tag_name('input')
searchbox.send_keys(trackname)
from keyboard import press
press('enter')
time.sleep(2)
result = driver.find_element_by_id('search-list-lockup__description')
result.click()
I have tried locating the element in other ways but it won't work. I am guessing that after searching I have to tell it to search on the results page, but I am not sure. Here is the error:
Traceback (most recent call last):
File "D:\Python Projekti\iTunesDataFiller\iTunesDataFiller.py", line 18, in <module>
result = driver.find_element_by_id('search-list-lockup__description')
File "D:\Python App\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 360, in find_element_by_id
return self.find_element(by=By.ID, value=id_)
File "D:\Python App\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 976, in find_element
return self.execute(Command.FIND_ELEMENT, {
File "D:\Python App\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "D:\Python App\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="search-list-lockup__description"]"}
(Session info: chrome=89.0.4389.114)
Process finished with exit code 1
What do I do?
This could be due to the element you want to access not being available yet.
For example, when you load the page, the element may not yet be visible to Selenium, so you're effectively trying to click an invisible element.
I suggest using this template to make sure elements are loaded before accessing them.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException
driver = webdriver.Chrome(executable_path='path')

# Define the reusable waits after the driver exists
waitshort = WebDriverWait(driver, .5)
wait = WebDriverWait(driver, 20)
waitLonger = WebDriverWait(driver, 100)
visible = EC.visibility_of_element_located

driver.get('link')
element_you_want_to_access = wait.until(visible((By.XPATH, 'xpath')))
I have written a program using Selenium in Python Flask which does web scraping of a product. My intention is to first enter the product name (this is done programmatically), and then, after the product is displayed, display its price in the terminal. My issue, however, is that it doesn't scrape the website and it throws a Selenium NoSuchElementException. Here is my code.
def scrape_johnLewis(product_name):
website_address = 'https://www.johnlewis.com/'
options = webdriver.ChromeOptions()
options.add_argument('start-maximized')
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
browser = webdriver.Chrome(ChromeDriverManager().install(), options=options)
browser.get(website_address)
time.sleep(10)
browser.implicitly_wait(20)
browser.find_element_by_css_selector('button.c-button-yMKB7 c-button--primary-39fbj c-button--inverted-UZv88 c-button--primary-3tLoH').click()
browser.find_element_by_id('mobileSearch').send_keys(product_name)
browser.find_element_by_css_selector('div.input-with-submit.module-inputWrapper--63f9e > button.button.module-c-button--fe2f1').click()
time.sleep(5)
# browser.find_elements_by_class_name('button.module-c-button--fe2f1')[0].submit()
product_price_raw_list = browser.find_elements_by_xpath('//div[#class="info-section_c-product-card__section__2D2D- price_c-product-card__price__3NI9k"]/span')
product_price_list = [elem.text for elem in product_price_raw_list]
print(product_price_list)
if __name__ == "__main__":
scrape_johnLewis('Canon EOS 90D Digital SLR Body')
The error that I am getting occurs on this line: browser.find_element_by_css_selector('button.c-button-yMKB7 c-button--primary-39fbj c-button--inverted-UZv88 c-button--primary-3tLoH').click()
And here is the stack trace:
Traceback (most recent call last):
File "scrapejohnLewis.py", line 32, in <module>
scrape_johnLewis('Canon EOS 90D Digital SLR Body')
File "scrapejohnLewis.py", line 20, in scrape_johnLewis
browser.find_element_by_css_selector('button.c-button-yMKB7 c-button--primary-39fbj c-button--inverted-UZv88 c-button--primary-3tLoH').click()
File "/home/mayureshk/.local/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 598, in find_element_by_css_selector
return self.find_element(by=By.CSS_SELECTOR, value=css_selector)
File "/home/mayureshk/.local/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 978, in find_element
'value': value})['value']
File "/home/mayureshk/.local/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/home/mayureshk/.local/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"button.c-button-yMKB7 c-button--primary-39fbj c-button--inverted-UZv88 c-button--primary-3tLoH"}
(Session info: chrome=74.0.3729.108)
(Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729#{#29}),platform=Linux 5.0.0-1034-oem-osp1 x86_64)
I have tried replacing it with find_element_by_tag_name, but that didn't work either.
By inspecting the website I found the exact element, but to my surprise the error says there is no such element. What exactly could be the cause? Please help.
The cookie prompt was not handled properly: it never went away, so it overlays the content beneath it and stops Selenium from finding the element. I've also made some tweaks to the code.
from selenium.webdriver.common.keys import Keys
def scrape_johnLewis(product_name):
website_address = 'https://www.johnlewis.com/'
options = webdriver.ChromeOptions()
options.add_argument('start-maximized')
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
browser = webdriver.Chrome(ChromeDriverManager(log_level='0').install(), options=options)
browser.get(website_address)
time.sleep(3)
# browser.implicitly_wait(20)
browser.find_element_by_xpath('//*[#id="pecr-cookie-banner-wrapper"]/div/div[1]/div/div[2]/button[1]').click()
browser.find_element_by_id('desktopSearch').send_keys(product_name + Keys.ENTER)
time.sleep(5)
product_price_raw_list = browser.find_elements_by_xpath('//div[#class="info-section_c-product-card__section__2D2D- price_c-product-card__price__3NI9k"]/span')
product_price_list = [elem.text for elem in product_price_raw_list]
print(product_price_list)
# output
['£1,249.00', '£1,629.99', '£1,349.99']
Also, use css_selector only as a last resort when you cannot find better element locators like id, XPath, or tag_name.
Once, I was getting the same error. You will wonder how it was fixed.
I had accidentally left the Selenium WebDriver window open, copied the XPath from there, and my code worked.
I then found that the XPath of a particular element in the WebDriver window can differ from the XPath of the same element in a normal browser window:
a single extra tag was present in the path in the WebDriver window but not in the normal browser window.
Maybe you have the same issue on your side.