I am trying to scrape multiple web pages from around the world. So, I want to translate each website using the Google Translate extension and then scrape the page using Selenium.
I did some research and figured out how to add an extension while running Selenium:
1) Download the Google Translate extension
2) Create a .crx file
3) Add the extension to Selenium
but I have no idea how to automatically execute the extension (by default, it does nothing):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
option = webdriver.ChromeOptions()
option.add_extension('./translate.crx')
driver = webdriver.Chrome(executable_path="./chromedriver", chrome_options=option)
driver.get("https://www.naver.com")
WebDriverWait(driver, 3).until(EC.presence_of_element_located((By.TAG_NAME, "body")))
''' #### Here I want something like####
driver.execute_extension("translate this page")
'''
print(driver.find_element(By.TAG_NAME, "body").text)
driver.quit()
Also, I found that the extension doesn't translate the original HTML, so I might have to use a different method for crawling (maybe sending Ctrl-A, Ctrl-C, Ctrl-V instead of using find_element_by_tag_name("body")).
Could you give me any pointers on this?
Thanks in advance.
Regarding driver.execute_extension: there is no such method in Selenium. It seems to me that if you can open the extension with Selenium (see an example in C#), you could then have Selenium click on the TRANSLATE THIS PAGE link.
Shortcut
Use Google Translate API.
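To illustrate that shortcut, here is a minimal sketch that scrapes the page text with Selenium and sends it through the Cloud Translation client library; the google-cloud-translate package, the credentials setup, and the English target language are assumptions for the example rather than details from the question:
# Sketch: scrape the page body with Selenium, then translate the text with
# the Cloud Translation API instead of relying on a browser extension.
# Assumes `pip install google-cloud-translate` and that Google Cloud
# credentials are already configured in the environment.
from selenium import webdriver
from selenium.webdriver.common.by import By
from google.cloud import translate_v2 as translate

driver = webdriver.Chrome()
driver.get("https://www.naver.com")
body_text = driver.find_element(By.TAG_NAME, "body").text
driver.quit()

client = translate.Client()
# The source language is auto-detected; translate into English.
result = client.translate(body_text, target_language="en")
print(result["translatedText"])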
I am trying to scrape from Google search results the blue highlighted portion as shown below:
When I use inspect element, it shows span class="YhemCb". I have tried various soup.find and soup.find_all commands, but everything I have tried has produced no output so far. What command should I use to scrape this part?
Google uses JavaScript to render most of its web elements, so using something like requests and BeautifulSoup is unfortunately not enough.
Instead, use Selenium! It essentially allows you to control a browser using code.
First, you will need to navigate to the Google page you wish to scrape:
google_search = 'https://www.google.com/search?q=courtyard+by+marriott+fayetteville+fort+bragg'
driver.get(google_search)
Then, you have to wait until the review page loads in the browser.
This is done using WebDriverWait: you have to specify an element that needs to appear on the page. The [data-attrid="kc:/local:one line summary"] span CSS selector allows me to select the review info about the hotel.
timeout = 10
expectation = EC.presence_of_element_located((By.CSS_SELECTOR, '[data-attrid="kc:/local:one line summary"] span'))
review_element = WebDriverWait(driver, timeout).until(expectation)
And finally, print the rating
print(review_element.get_attribute('innerHTML'))
Here's the full code in case you want to play around with it
import chromedriver_autoinstaller
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
# setup selenium (I am using chrome here, so chrome has to be installed on your system)
chromedriver_autoinstaller.install()
options = Options()
options.add_argument("--headless")  # run Chrome without opening a window
driver = webdriver.Chrome(options=options)
# navigate to google
google_search = 'https://www.google.com/search?q=courtyard+by+marriott+fayetteville+fort+bragg'
driver.get(google_search)
# wait until the page loads
timeout = 10
expectation = EC.presence_of_element_located((By.CSS_SELECTOR, '[data-attrid="kc:/local:one line summary"] span'))
review_element = WebDriverWait(driver, timeout).until(expectation)
# print the rating
print(review_element.get_attribute('innerHTML'))
Note: Google is notoriously defensive against anyone trying to scrape it. On the first few attempts you might be successful, but eventually you will have to deal with Google's CAPTCHA.
To work around that, I would suggest using a dedicated search engine scraper, and something like its quickstart guide to get you started!
Disclaimer: I work at Oxylabs.io
I am trying to have Selenium download the URLs of a webpage as PDFs on Safari. So far, I have been able to open the URL, but I can't get Safari to download it. All the solutions I found so far were either for another browser or didn't work. Ideally, I would like it to download all links of one page and then move on to the next page.
At first I thought that clicking on each hyperlink and then downloading it was the way to go. But that would require switching windows each time, so then I tried to find a way to download it without having to click on it, but nothing worked.
I am quite new at programming so I am sure that I am missing something.
import selenium
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pdfkit
browser = webdriver.Safari()
browser.get(a_base_url)
username_field = browser.find_element_by_name("tb_LoginName")
password_field = browser.find_element_by_name("tb_Password")
submit = browser.find_element_by_id("btn_Login")
username_field.send_keys(username)  # the actual credential strings are defined elsewhere
password_field.send_keys(password)
submit.click()
WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH, '//*[@id="maincolumn"]/div/table/tbody/tr[2]/td[9]/a[2]'))).click()
browser.switch_to.window(browser.window_handles[0])
url=browser.current_url
I would go for the following approach:
1) Get the href attribute of the link you want to download via the WebElement.get_attribute() function
2) Use the urllib or requests library to retrieve the URL from step 1 without using the browser
3) Most probably you will also need to get the browser cookies via the WebDriver.get_cookies() function and add them to the Cookie header of your download request
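A rough sketch of that approach, assuming browser is the already-logged-in Safari driver from the code above and that the links of interest end in .pdf (both are assumptions for illustration):
import requests
# Assumption: 'browser' is the Safari driver from the question, already logged
# in and sitting on the page whose links we want to download.
# 1) collect the href of every link that looks like a PDF
pdf_links = []
for link in browser.find_elements_by_tag_name("a"):
    href = link.get_attribute("href")
    if href and href.lower().endswith(".pdf"):
        pdf_links.append(href)
# 2) and 3) reuse the browser's session cookies so the downloads are authenticated
session = requests.Session()
for cookie in browser.get_cookies():
    session.cookies.set(cookie["name"], cookie["value"])
for url in pdf_links:
    response = session.get(url)
    file_name = url.split("/")[-1] or "download.pdf"  # naive file name, adjust as needed
    with open(file_name, "wb") as f:
        f.write(response.content)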
I am trying to automate logging into a website (http://www.phptravels.net/) using Selenium - Python on Chrome. This is an open website used for automation tutorials.
I am trying to click an element to open a drop-down (My Account button at the top navbar) which will then give me the option to login and redirect me to the login page.
The HTML is nested with many div and ul/li tags.
I have tried various lines of code but haven't been able to make much progress.
driver.find_element_by_id('li_myaccount').click()
driver.find_element_by_link_text(' Login').click()
driver.find_element_by_xpath("//*[@id='li_myaccount']/ul/li[1]/a").click()
These are some of the examples that I tried out. All of them failed with the error "element not visible".
How do I find those elements? Even the XPath locator throws this error.
I have not added any explicit waits in my code.
Any ideas on how to proceed?
Hope this code will help:
from selenium import webdriver
from selenium.webdriver.common.by import By
url="http://www.phptravels.net/"
d=webdriver.Chrome()
d.maximize_window()
d.get(url)
d.find_element(By.LINK_TEXT,'MY ACCOUNT').click()
d.find_element(By.LINK_TEXT,'Login').click()
d.find_element(By.NAME,"username").send_keys("Test")
d.find_element(By.NAME,"password").send_keys("Test")
d.find_element(By.XPATH,"//button[text()='Login']").click()
Use the best available locator on your HTML page, so that you do not need to build an XPath or CSS selector for simple operations.
You may be having issues with the page not being loaded when you try to find the element of interest. You should use the WebDriverWait class to wait until a given element is present on the page.
Adapted from the docs:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Set up your driver here....
try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, 'li_myaccount'))
    )
    element.click()
except:
    # Handle any exceptions here
    pass
finally:
    driver.quit()
I need to scrape this page (which has a form): http://kllads.kar.nic.in/MLAWise_reports.aspx, with Python preferably (if not Python, then JavaScript). I was looking at libraries like RoboBrowser (which is basically Mechanize + BeautifulSoup) and (maybe) Selenium but I'm not quite sure on how to go about it. From inspecting the element, it seems to be a WebForm that I need to fill in. After filling that in, the webpage generates some data that I need to store. How should I do this?
You can interact with the JavaScript web forms relatively easily in Selenium. You may need to install a webdriver first, but besides that all you need to do is find the form using its XPath and then have Selenium select an option from the drop-down menu using the option's XPath. For the web page provided, that would look something like this:
#import functions from selenium module
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# open chrome browser using webdriver
path_to_chromedriver = '/Users/Michael/Downloads/chromedriver'
browser = webdriver.Chrome(executable_path=path_to_chromedriver)
# open web page using browser
browser.get('http://kllads.kar.nic.in/MLAWise_reports.aspx')
# wait for page to load then find 'Constituency Name' dropdown and select 'Aland (46)'
const_name = WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="ddlconstname"]')))
browser.find_element_by_xpath('//*[@id="ddlconstname"]/option[2]').click()
# wait for the page to load then find 'Select Status' dropdown and select 'OnGoing'
sel_status = WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="ddlstatus1"]')))
browser.find_element_by_xpath('//*[@id="ddlstatus1"]/option[2]').click()
# wait for browser to load then click 'Generate Report'
gen_report = WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="BtnReport"]')))
browser.find_element_by_xpath('//*[@id="BtnReport"]').click()
Between each interaction, you are just giving the browser some time to load before attempting to click the next element. Once all the forms are filled out, the page will display the data based on the options selected, and you should be able to scrape the table data. I had a few issues when attempting to load data for the first Constituency Name option, but the others seemed to work fine.
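As a rough sketch of that scraping step (the generic 'table tr' selector is an assumption, since the exact markup of the generated report isn't shown here):
# Read every row of the generated report table and print its cell text.
# The generic 'table tr' selector is an assumption; narrow it down once you
# know the id or class of the report table on the page.
rows = browser.find_elements_by_css_selector('table tr')
for row in rows:
    cells = row.find_elements_by_tag_name('td')
    values = [cell.text.strip() for cell in cells]
    if values:  # skip rows without td cells, e.g. header rows
        print(values)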
You should also be able to loop through all the dropdown options available under each web form to display all the data.
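For instance, a minimal sketch of that loop using Selenium's Select helper (the dropdown is re-located on every pass because generating a report reloads the page, and the assumption that the first option is a placeholder may need adjusting):
from selenium.webdriver.support.ui import Select
# Count the options once, then generate a report for each constituency.
num_options = len(Select(browser.find_element_by_xpath('//*[@id="ddlconstname"]')).options)
for index in range(1, num_options):  # index 0 is assumed to be a placeholder entry
    constituency = Select(browser.find_element_by_xpath('//*[@id="ddlconstname"]'))
    constituency.select_by_index(index)
    status = Select(browser.find_element_by_xpath('//*[@id="ddlstatus1"]'))
    status.select_by_index(1)  # 'OnGoing', as in the example above
    browser.find_element_by_xpath('//*[@id="BtnReport"]').click()
    # wait for the report to render, then scrape the table as sketched earlier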
Hope that helps!