Could you please let me know how to improve the following script so that it actually clicks the export button?
The script navigates to the report's page but never clicks the export button:
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument("user-data-dir=<Path to Chrome profile>")  # path to your Chrome profile
url = '<URL of the report>'
driver = webdriver.Chrome(executable_path="C:/tools/selenium/chromedriver.exe", chrome_options=options)
driver.get(url)
exportButton = driver.find_element_by_xpath('//*[@id="js_2o"]')
exportButton.click()
How would you make the script actually click on the export button?
I would appreciate your help.
Thank you!
Try with XPath, for example:
driver.find_element_by_xpath('//button[@id="export_button"]').click()
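If the export button is only rendered after the page finishes loading, an explicit wait before the click may be what is missing; a minimal sketch, reusing the placeholder XPath above (swap in your real locator):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for the button to become clickable, then click it.
wait = WebDriverWait(driver, 10)
export_button = wait.until(
    EC.element_to_be_clickable((By.XPATH, '//button[@id="export_button"]'))
)
export_button.click()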
Selenium isn't designed for this. Do you actually care about using Selenium and the browser, or do you just want the file? If the latter, use requests. You can use the browser's network inspector and right-click -> "Copy as cURL" on the request to get all the headers and cookies you need.
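A rough sketch of that approach, assuming you have copied the relevant headers and cookies from the "Copy as cURL" output (all values and the URL below are placeholders):
import requests

# Headers and cookies copied from the browser's network inspector - placeholders.
headers = {"User-Agent": "Mozilla/5.0"}
cookies = {"session_id": "<value from your browser>"}

url = '<URL of the export/download request>'
response = requests.get(url, headers=headers, cookies=cookies)
response.raise_for_status()

# Save the exported file to disk.
with open("report_export.csv", "wb") as f:
    f.write(response.content)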
Related
I am trying to figure out how to activate/click on a feature using Python, i.e. it goes to a page and clicks on a certain button. How can I do this? Are there any modules that may help?
Try using the selenium package in Python.
Once you pip install selenium and download chromedriver, you should be able to use something like this -
from selenium import webdriver
url = "your_url"
chrome_options = webdriver.ChromeOptions()
driver = webdriver.Chrome("/path/to/chromedriver", chrome_options=chrome_options)
driver.delete_all_cookies()
driver.get(url)
And after your page opens, you'll first have to find the element using inspect and then based on its name/id/class/etc, you can click on it using -
driver.find_element_by_name('<element_name>').click()
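Note that the find_element_by_* helpers are deprecated in Selenium 4; an equivalent sketch with the By-style locators (the locator values are placeholders):
from selenium.webdriver.common.by import By

# Locate the element by name, id, CSS selector, etc., then click it.
driver.find_element(By.NAME, '<element_name>').click()
# driver.find_element(By.ID, '<element_id>').click()
# driver.find_element(By.CSS_SELECTOR, '<css selector>').click()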
So I am creating an application where you can download files through a link. The webpage contains a download button that needs to be pressed in order to start the download. This is the link you can use for reference: link. My code:
import requests

link = input("enter link: ")
r = requests.get(link, allow_redirects=True)
How can I make requests, or any other library, click on the download button and save the file?
Using selenium:
INSTALLATION
Skip this if you already have selenium installed.
To install Selenium, type the following in your terminal: pip3 install selenium
Now you need a webdriver for Selenium. If you want to use Chrome, first type "chrome://version/" in your browser to find the version you are using, then go to this link and download the appropriate webdriver for your browser. If you are using a different browser, Firefox for example, just search for "selenium [your browser] webdriver".
Installation docs
CODE
Now for the code (the following code is for Chrome; you would only need to change the driver = webdriver.Chrome(PATH) line if you are using a different webdriver):
from selenium import webdriver #importing webdriver
PATH = "C:/path/to/chromedriver" #webdriver location
driver = webdriver.Chrome(PATH)
link = input("Enter link: ")
driver.get(link) #going to URL
driver.find_element_by_xpath("/html/body/div/main/div[3]/div/div/div/div/div[2]/div[2]/span/button")\
.click() #clicking on the button
I used the full XPath to locate the button, but you can use other locator strategies, as shown in the documentation.
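If the click starts a file download, you may also want Chrome to save it to a known folder without prompting; a minimal sketch, assuming the usual Chrome download preferences and reusing the PATH variable from the snippet above (the download directory is a placeholder):
from selenium import webdriver

options = webdriver.ChromeOptions()
# Save downloads to a fixed folder without asking where to put them - placeholder path.
options.add_experimental_option("prefs", {
    "download.default_directory": "C:/path/to/downloads",
    "download.prompt_for_download": False,
})
driver = webdriver.Chrome(PATH, options=options)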
I'm not sure if this is possible, but basically I'm trying to open a tab in Chrome to show the outcome of my GET and POST requests. For instance, let's say I'm using Python requests to POST login data to a site. I would then want to open a Chrome tab after that POST request to see if I did it correctly, and also to continue on that webpage from there in Chrome itself. I know I can use response.text to check if my POST request succeeded, but is there a way I can physically open a Chrome tab to see the result itself? I'm guessing there would be a need to export some sort of cookies as well? Any help or insights would be appreciated.
When using Selenium, you need to remove the headless option for the browser:
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
#chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
wd = webdriver.Chrome('<PATH_TO_CHROMEDRIVER>', options=chrome_options)
# load page via selenium
wd.get("<YOUR_TARGET_URL>")
# don't close browser
# wd.quit()
Also remove the call that closes the browser at the end of the code.
After that, the browser will remain open and you will be able to continue working in it manually.
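If the login itself was done with requests first, the session cookies can be copied into the Selenium browser so you continue as the logged-in user; a rough sketch, reusing the chrome_options from above and assuming the cookie domain matches the page you open (URLs and form fields are placeholders):
import requests
from selenium import webdriver

session = requests.Session()
session.post("<LOGIN_URL>", data={"username": "<name>", "password": "<pass>"})

wd = webdriver.Chrome('<PATH_TO_CHROMEDRIVER>', options=chrome_options)
# Cookies can only be added for the domain currently loaded, so open the
# site first, copy the cookies over, then reload.
wd.get("<YOUR_TARGET_URL>")
for cookie in session.cookies:
    wd.add_cookie({"name": cookie.name, "value": cookie.value})
wd.get("<YOUR_TARGET_URL>")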
The aim of this is to open a browser window and save the site as PDF.
I'm writing Python code that:
1) Opens a web page
2) Does a control-p to bring up the print dialog box
NOTE: I will have pre-configured the browser to save as PDF instead of defaulting to printing to a printer
3) Does "return"
4) Enters the file name
5) Does "return" again
NOTE: In my full code, I'll be doing these steps hundreds of times
I'm having a problem early on with Ctrl+P. As a test, I'm able to send dummy text to Google's search box, but I can't seem to send a Ctrl+P (no error messages). I'm using Google as an easy example, but my final code will use various other sites.
Obviously I'm missing something but just can't figure out what.
I tried an alternate method of using javascript instead of ActionChains:
driver.execute_script('window.print();')
This worked in getting the print dialog, but I wasn't able to feed anything else into that dialog box (like the file name and location for the PDF).
I tried pdfkit to convert the web page into a PDF. It worked on some sites, but it crashed often (depending on what the site returned), the page was sometimes poorly formatted, and some sites (Pinterest) just didn't render at all. For this reason, I changed approach and decided to use Selenium and Chrome so that the PDF renders just like the page looks in the browser.
I thought about using "element.send_keys("some text")" instead of ActionChains, but since I'm going across multiple different web sites, I don't necessarily know what element to look for.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
import time
DRIVER = 'chromedriver'
driver = webdriver.Chrome(DRIVER)
URL = "http://www.google.com"
driver.get(URL)
actions = ActionChains(driver)
time.sleep(5) #Give the page time to load
actions.key_down(Keys.CONTROL)
actions.send_keys('p')
actions.key_up(Keys.CONTROL)
actions.perform()
time.sleep(5) #Sleep to see if the print dialog came up
driver.quit()
You can use AutoIt to achieve this.
First run pip install -U pyautoit
from selenium import webdriver
import autoit
import time
DRIVER = 'chromedriver'
driver = webdriver.Chrome(DRIVER)
driver.get('http://google.com')
driver.maximize_window()
time.sleep(10)
autoit.send("^p")
time.sleep(10) # Pause to allow you to inspect the browser.
driver.quit()
Please let me know if it's working.
Try this:
webdriver.ActionChains(driver).key_down(Keys.CONTROL).send_keys('P').key_up(Keys.CONTROL).perform()
Check this out (Java, using the java.awt.Robot class):
robot.keyPress(KeyEvent.VK_CONTROL)
robot.keyPress(KeyEvent.VK_P)
// CTRL+P is now pressed
robot.keyRelease(KeyEvent.VK_P)
robot.keyRelease(KeyEvent.VK_CONTROL)
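As an alternative to driving the print dialog at all, Chromium can render the current page to a PDF through the DevTools protocol; a sketch, assuming your Selenium version exposes execute_cdp_cmd for Chrome and that your Chrome/chromedriver supports Page.printToPDF (older Chrome versions may only support it in headless mode):
import base64
from selenium import webdriver

driver = webdriver.Chrome('chromedriver')
driver.get("http://www.google.com")

# Ask Chrome to render the current page as a base64-encoded PDF.
result = driver.execute_cdp_cmd("Page.printToPDF", {"printBackground": True})
with open("page.pdf", "wb") as f:
    f.write(base64.b64decode(result["data"]))

driver.quit()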
I would like to log into a website and download a file. I'm using Selenium and chromedriver. I would like to know if there is a better way. It currently opens up a Chrome browser window and sends the info. I don't want to see the browser window open up and the data being sent; I just want to send it and get the data back into a variable.
from selenium import webdriver

driver = webdriver.Chrome()

def site_login(URL, ID_username, ID_password, ID_submit, name, pas):
    driver.get(URL)
    driver.find_element_by_id(ID_username).send_keys(name)
    driver.find_element_by_id(ID_password).send_keys(pas)
    driver.find_element_by_id(ID_submit).click()

URL = "https://www.mywebsite.com/login"
ID_username = "name"
ID_password = "password"
ID_submit = "submit"
name = "myemail@mail.com"
pas = "mypassword"

resp = site_login(URL, ID_username, ID_password, ID_submit, name, pas)
You can run Chrome in headless mode, in which case the Chrome UI won't show up while it still performs the task you were doing. Here is an article I found on this: https://intoli.com/blog/running-selenium-with-headless-chrome/. Hope this helps.
First option: if you are able to change the driver, you can use PhantomJS as the driver. It is a headless browser and you can use it with Selenium.
Second option: if the site is not dynamic (a dynamic site is commonly called a SPA), or you are able to trace the requests it makes (which can be done in Chrome dev tools), you can use requests directly, with the help of BeautifulSoup if you need to extract some data from the page.
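A small sketch of that second option, assuming the login form posts plain fields (the URLs and field names are placeholders based on the question):
import requests
from bs4 import BeautifulSoup

session = requests.Session()
# Post the login form directly - the field names depend on the site.
session.post("https://www.mywebsite.com/login",
             data={"name": "myemail@mail.com", "password": "mypassword"})

# Fetch a page with the logged-in session and parse it.
resp = session.get("https://www.mywebsite.com/account")
soup = BeautifulSoup(resp.text, "html.parser")
print(soup.title)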
Just add these two lines and pass the options to the driver:
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_argument("--headless")
driver = webdriver.Chrome(options=chrome_options)
This should make Chrome run in the background.