I am using Python/Selenium to submit genetic sequences to an online database, and want to save the full page of results I get back. Below is the code that gets me to the results I want:
import time
from selenium import webdriver
URL = 'https://blast.ncbi.nlm.nih.gov/Blast.cgi?PROGRAM=blastx&PAGE_TYPE=BlastSearch&LINK_LOC=blasthome'
SEQUENCE = 'CCTAAACTATAGAAGGACAGCTCAAACACAAAGTTACCTAAACTATAGAAGGACAGCTCAAACACAAAGTTACCTAAACTATAGAAGGACAGCTCAAACACAAAGTTACCTAAACTATAGAAGGACAGCTCAAACACAAAGTTACCTAAACTATAGAAGGACA' #'GAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGA'
CHROME_WEBDRIVER_LOCATION = '/home/max/Downloads/chromedriver' # update this for your machine
# open page with selenium
# (first need to download Chrome webdriver, or a firefox webdriver, etc)
driver = webdriver.Chrome(executable_path=CHROME_WEBDRIVER_LOCATION)
driver.get(URL)
time.sleep(5)
# enter sequence into the query field and hit 'blast' button to search
seq_query_field = driver.find_element_by_id("seq")
seq_query_field.send_keys(SEQUENCE)
blast_button = driver.find_element_by_id("b1")
blast_button.click()
time.sleep(60)
At that point I have a page that I can manually click "save as," and get a local file (with a corresponding folder of image/js assets) that lets me view the whole returned page locally (minus content which is generated dynamically from scrolling down the page, which is fine). I assumed there would be a simple way to mimic this 'save as' function in python/selenium but haven't found one. The code to save the page below just saves html, and does not leave me with a local file that looks like it does in the web browser, with images, etc.
content = driver.page_source
with open('webpage.html', 'w') as f:
    f.write(content)
I've also found this question/answer on SO, but the accepted answer just brings up the 'save as' box and does not provide a way to click it (as two commenters point out).
Is there a simple way to 'save [full page] as' using python? Ideally I'd prefer an answer using selenium since selenium makes the crawling part so straightforward, but I'm open to using another library if there's a better tool for this job. Or maybe I just need to specify all of the images/tables I want to download in code, and there is no shortcut to emulating the right-click 'save as' functionality?
UPDATE - Follow up question for James' answer
So I ran James' code to generate a page.html (and associated files) and compared it to the html file I got from manually clicking save-as. The page.html saved via James' script is great and has everything I need, but when opened in a browser it also shows a lot of extra formatting text that's hidden in the manually saved page. See attached screenshot (manually saved page on the left, script-saved page with extra formatting text shown on the right).
This is especially surprising to me because the raw html of the page saved by James' script seems to indicate those fields should still be hidden. See e.g. the html below, which appears the same in both files, but the text at issue only appears in the browser-rendered page on the one saved by James' script:
<p class="helpbox ui-ncbitoggler-slave ui-ncbitoggler" id="hlp1" aria-hidden="true">
These options control formatting of alignments in results pages. The
default is HTML, but other formats (including plain text) are available.
PSSM and PssmWithParameters are representations of Position Specific Scoring Matrices and are only available for PSI-BLAST.
The Advanced view option allows the database descriptions to be sorted by various indices in a table.
</p>
Any idea why this is happening?
As you noted, Selenium cannot interact with the browser's context menu to use Save as..., so instead you could use an external automation library such as pyautogui to send the keyboard shortcut.
pyautogui.hotkey('ctrl', 's')
time.sleep(1)
pyautogui.typewrite(SEQUENCE + '.html')
pyautogui.hotkey('enter')
This code opens the Save as... window through its keyboard shortcut CTRL+S and then saves the webpage and its assets into the default downloads location by pressing enter. This code also names the file as the sequence in order to give it a unique name, though you could change this for your use case. If needed, you could additionally change the download location through some extra work with the tab and arrow keys.
Tested on Ubuntu 18.10; depending on your OS you may need to modify the key combination sent.
Full code, in which I also added conditional waits to improve speed:
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.expected_conditions import visibility_of_element_located
from selenium.webdriver.support.ui import WebDriverWait
import pyautogui
URL = 'https://blast.ncbi.nlm.nih.gov/Blast.cgi?PROGRAM=blastx&PAGE_TYPE=BlastSearch&LINK_LOC=blasthome'
SEQUENCE = 'CCTAAACTATAGAAGGACAGCTCAAACACAAAGTTACCTAAACTATAGAAGGACAGCTCAAACACAAAGTTACCTAAACTATAGAAGGACAGCTCAAACACAAAGTTACCTAAACTATAGAAGGACAGCTCAAACACAAAGTTACCTAAACTATAGAAGGACA' #'GAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGAGAAGA'
# open page with selenium
# (first need to download Chrome webdriver, or a firefox webdriver, etc)
driver = webdriver.Chrome()
driver.get(URL)
# enter sequence into the query field and hit 'blast' button to search
seq_query_field = driver.find_element_by_id("seq")
seq_query_field.send_keys(SEQUENCE)
blast_button = driver.find_element_by_id("b1")
blast_button.click()
# wait until results are loaded
WebDriverWait(driver, 60).until(visibility_of_element_located((By.ID, 'grView')))
# open 'Save as...' to save html and assets
pyautogui.hotkey('ctrl', 's')
time.sleep(1)
pyautogui.typewrite(SEQUENCE + '.html')
pyautogui.hotkey('enter')
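Note that the CTRL+S shortcut above assumes Windows or Linux; on macOS the save shortcut is Cmd+S. A minimal sketch of picking the right combination (using pyautogui's key names) could look like this:
import sys
import pyautogui

def save_page_shortcut():
    # Send the platform-appropriate 'Save page as...' shortcut.
    if sys.platform == 'darwin':   # macOS uses Cmd+S
        pyautogui.hotkey('command', 's')
    else:                          # Windows/Linux use Ctrl+S
        pyautogui.hotkey('ctrl', 's')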
This is not a perfect solution, but it will get you most of what you need. You can replicate the behavior of "save as full web page (complete)" by parsing the html and downloading any loaded files (images, css, js, etc.) to their same relative path.
Most of the javascript won't work due to cross origin request blocking. But the content will look (mostly) the same.
This uses requests to save the loaded files, lxml to parse the html, and os for the path legwork.
from selenium import webdriver
import chromedriver_binary
from lxml import html
import requests
import os
driver = webdriver.Chrome()
URL = 'https://blast.ncbi.nlm.nih.gov/Blast.cgi?PROGRAM=blastx&PAGE_TYPE=BlastSearch&LINK_LOC=blasthome'
SEQUENCE = 'CCTAAACTATAGAAGGACAGCTCAAACACAAAGTTACCTAAACTATAGAAGGACAGCTCAAACACAAAGTTACCTAAACTATAGAAGGACAGCTCAAACACAAAGTTACCTAAACTATAGAAGGACAGCTCAAACACAAAGTTACCTAAACTATAGAAGGACA'
base = 'https://blast.ncbi.nlm.nih.gov/'
driver.get(URL)
seq_query_field = driver.find_element_by_id("seq")
seq_query_field.send_keys(SEQUENCE)
blast_button = driver.find_element_by_id("b1")
blast_button.click()
content = driver.page_source
# write the page content
os.mkdir('page')
with open('page/page.html', 'w') as fp:
    fp.write(content)
# download the referenced files to the same path as in the html
sess = requests.Session()
sess.get(base) # sets cookies
# parse html
h = html.fromstring(content)
# get css/js files loaded in the head
for hr in h.xpath('head//@href'):
    if not hr.startswith('http'):
        local_path = 'page/' + hr
        hr = base + hr
        res = sess.get(hr)
        if not os.path.exists(os.path.dirname(local_path)):
            os.makedirs(os.path.dirname(local_path))
        with open(local_path, 'wb') as fp:
            fp.write(res.content)
# get image/js files from the body. skip anything loaded from outside sources
for src in h.xpath('//@src'):
    if not src or src.startswith('http'):
        continue
    local_path = 'page/' + src
    print(local_path)
    src = base + src
    res = sess.get(src)
    if not os.path.exists(os.path.dirname(local_path)):
        os.makedirs(os.path.dirname(local_path))
    with open(local_path, 'wb') as fp:
        fp.write(res.content)
You should have a folder called page with a file called page.html in it with the content you are after.
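Regarding the follow-up about hidden helper text showing up in the saved copy: those elements are normally collapsed by the page's own scripts and stylesheets, which don't all run in the offline copy (see the cross-origin note above). As a rough workaround only, you could append a CSS rule that hides anything marked aria-hidden="true" before writing page.html:
# Rough patch (not a real fix for the missing scripts): hide elements the
# live page keeps collapsed via its own JS/CSS.
HIDE_RULE = '<style>[aria-hidden="true"] { display: none; }</style>'
content = content.replace('</head>', HIDE_RULE + '</head>', 1)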
Inspired by FThompson's answer above, I came up with the following tool that can download full/complete html for a given page url (see: https://github.com/markfront/SinglePageFullHtml)
UPDATE - following up on Max's suggestion, below are the steps to use the tool:
Clone the project, then run maven to build:
$> git clone https://github.com/markfront/SinglePageFullHtml.git
$> cd ~/git/SinglePageFullHtml
$> mvn clean compile package
Find the generated jar file in target folder: SinglePageFullHtml-1.0-SNAPSHOT-jar-with-dependencies.jar
Run the jar from the command line like:
$> java -jar target/SinglePageFullHtml-1.0-SNAPSHOT-jar-with-dependencies.jar <page_url>
The result file name will have the prefix "FP", followed by the hashcode of the page url, with the file extension ".html". It will be saved in the temp folder, e.g. "/tmp" (which you can get via System.getProperty("java.io.tmpdir") in Java); if it's not there, try your home directory (System.getProperty("user.home")).
The result file will be a big fat self-contained html file that includes everything (css, javascript, images, etc.) referred to by the original html source.
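If you want to call the tool from the same Python script that does the Selenium crawl, a minimal sketch like this would work (the jar path is whatever your Maven build produced, and the URL is just an example):
import subprocess

JAR = 'target/SinglePageFullHtml-1.0-SNAPSHOT-jar-with-dependencies.jar'  # adjust to your build output
page_url = 'https://blast.ncbi.nlm.nih.gov/Blast.cgi'  # example URL

subprocess.run(['java', '-jar', JAR, page_url], check=True)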
I'd advise you to try SikuliX, which is an image-based automation tool for operating any widget within the PC OS. It supports Python grammar, runs from the command line, and may be the simplest way to solve your problem.
All you need to do is give it a screenshot of the target, then call the SikuliX script from your Python automation script (with os.system("xxxx") or subprocess, etc.).
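Purely as an illustration, the hand-off could look like the sketch below; the jar name, flags, and script path are assumptions that depend on your SikuliX version and setup:
import subprocess

# Hypothetical paths: adjust to wherever SikuliX and your .sikuli script live.
subprocess.run(
    ['java', '-jar', '/path/to/sikulixide.jar', '-r', '/path/to/save_page.sikuli'],
    check=True,
)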
Related
You're probably aware of Firefox's built-in PDF viewer. I'm trying to get the HTML code of the page displaying the file, since it would give me more information than the PyPDF library in Python.
Obviously requests does not work because it's not a real link, so I thought about using selenium (maybe also in headless mode) with the webdriver.page_source attribute:
from selenium.webdriver.firefox.service import Service
from selenium.webdriver.firefox.options import Options
from selenium import webdriver
import os
service = Service(os.path.abspath('Files/geckodriver'))
options = Options()
# options.headless = True
driver = webdriver.Firefox(service = service, options = options)
driver.get(f'file://{os.path.abspath("Files/sample.pdf")}')
html = driver.page_source
print(html)
This does not give me the complete source code, though, just the title and number of every page and the reference to the PDF file. For example, here's page number 2:
<div class="thumbnail" data-page-number="2"><div class="thumbnailSelectionRing" style="width: 100px; height: 132px;"></div></div>.
I get the same thing for every page, without the content of any paragraph (note that I've cut out all the code of the toolbar on the left side just for the explanation's sake), whereas in the Firefox viewer I was able to see all the content.
So I'm wondering whether some other method exists or whether I have to fix something in my code.
Firefox is completely open source, so you can browse its source code to get access to that part of it. If digging into the browser's source code is not what you want, then please be clearer in your question.
I am scraping pages like this one:
site to scrape
I am using Python with Selenium and connecting through ProxyCrawler. One of the things I need to do is follow all the links that say For details, click here and grab the text there. The links look like this:
<a href='javascript:void(0)' onclick=javascript:submitLink('TIDFT/AE/VI/IS/ID100201','KQ','KQ')>For details, click here</a>
As you can see, each link's URL gets constructed by a function called submitLink. The function is not defined in the page source; rather it is called from an external .js file referenced in the head. I tried injecting the file into the DOM to make the function run but failed so far. For more details, see my question here.
So I'm trying instead to click each link to make the script run. However, this doesn't work with ProxyCrawler. If I connect directly, the links work fine but obviously that exposes my scraper.
Here is the minimum workable code:
from selenium import webdriver
from urllib import parse
apikey = MY_KEY
scrapeurl = 'https://www.timaticweb.com/cgi-bin/tim_website_client.cgi?SpecData=1&VISA=&page=both&NA=' + \
'ZW' + '&DE=' + 'AE' + '&user=KQ&subuser=KQ'
selenurl = 'https://api.proxycrawl.com/?token=' + apikey + '&url=' + parse.quote(scrapeurl)
DRIVER_PATH = '/Applications/chromedriver'
driver = webdriver.Chrome(executable_path = DRIVER_PATH)
driver.get(selenurl)
#driver.get(scrapeurl)
link = driver.find_element_by_xpath(".//a[contains(@onclick, 'submitLink')]")
link.click()
The above works if I use scrapeurl. It doesn't work with selenurl. Is there a way to use ProxyCrawler and still be able to click on those links?
I am taking a trial website case to learn to upload files using Python Selenium where the upload window is not part of the HTML; the upload window is a system-level dialog. This is already solved using Java (stackoverflow link(s) below). If this is not possible via Python, then I intend to shift to Java for this task.
BUT,
Dear all my fellow Python lovers, why shouldn't it be possible using Python webdriver-Selenium? Hence this quest.
Solved in Java for URL: http://www.zamzar.com/
Solution (& Java code) on stackoverflow: How to handle windows file upload using Selenium WebDriver?
This is my Python code that should be self explanatory, inclusive of chrome webdriver download links.
Task (uploading file) I am trying in brief:
Website: https://www.wordtopdf.com/
Note_1: I don't need this tool for any work as there are far better packages to do this word to pdf conversion. Instead, this is just for learning & polishing Python Selenium code/application.
Note_2: You will have to painstakingly enter 2 paths into my code below after downloading and unzipping the chrome driver (link below in comments). The 2 paths are: [a] Path of a(/any) word file & [b] path of the unzipped chrome driver.
My Code:
import time
import numpy as np
from selenium import webdriver
UNZIPPED_DRIVER_PATH = 'C:/Users/....' # You need to specify this on your computer
driver = webdriver.Chrome(executable_path = UNZIPPED_DRIVER_PATH)
# Driver download links below (check which version of chrome you are using if you don't know it beforehand):
# Chrome Driver 74 Download: https://chromedriver.storage.googleapis.com/index.html?path=74.0.3729.6/
# Chrome Driver 73 Download: https://chromedriver.storage.googleapis.com/index.html?path=73.0.3683.68/
New_Trial_URL = 'https://www.wordtopdf.com/'
driver.get(New_Trial_URL)
time.sleep(np.random.uniform(4.5, 5.5, size = 1)) # Time to load the page in peace
Find_upload = driver.find_element_by_xpath('//*[@id="file-uploader"]')
WORD_FILE_PATH = 'C:/Users/..../some_word_file.docx' # You need to specify this on your computer
Find_upload.send_keys(WORD_FILE_PATH) # Not working, no action happens here
Based on something very similar in Java (How to handle windows file upload using Selenium WebDriver?), this should work like a charm. But voila... total failure, and thus a chance to learn something new.
I have also tried:
Click_Alert = Find_upload.click()
Click_Alert(driver).send_keys(WORD_FILE_PATH)
Did not work. 'Alert' should be an inbuilt function as per these 2 links (https://seleniumhq.github.io/selenium/docs/api/py/webdriver/selenium.webdriver.common.alert.html & Selenium-Python: interact with system modal dialogs).
But the 'Alert' function in the above link doesn't seem to exist in my Python setup even after executing
from selenium import webdriver
#All the readers, hope this doesn't take much of your time and we all get to learn something out of this.
Cheers
You get ('//*[@id="file-uploader"]'), which is an <a> tag, but there is a hidden <input type="file"> (behind the <a>) which you have to use instead.
import selenium.webdriver
your_file = "/home/you/file.doc"
your_email = "you#example.com"
url = 'https://www.wordtopdf.com/'
driver = selenium.webdriver.Firefox()
driver.get(url)
file_input = driver.find_element_by_xpath('//input[@type="file"]')
file_input.send_keys(your_file)
email_input = driver.find_element_by_xpath('//input[@name="email"]')
email_input.send_keys(your_email)
driver.find_element_by_id('convert_now').click()
Tested with Firefox 66 / Linux Mint 19.1 / Python 3.7 / Selenium 3.141.0
EDIT: The same method for uploading on zamzar.com
A situation I saw for the first time (so it took me longer to create a solution): the page has an <input type="file"> hidden under the button, but it doesn't use it to upload the file. It dynamically creates a second <input type="file"> which it uses to upload the file (or maybe even many files - I didn't test it).
import selenium.webdriver
from selenium.webdriver.support.ui import Select
import time
your_file = "/home/furas/Obrazy/37884728_1975437959135477_1313839270464585728_n.jpg"
#your_file = "/home/you/file.jpg"
output_format = 'png'
url = 'https://www.zamzar.com/'
driver = selenium.webdriver.Firefox()
driver.get(url)
#--- file ---
# it has to wait because the page has to create the second input[@type="file"]
file_input = driver.find_elements_by_xpath('//input[@type="file"]')
while len(file_input) < 2:
    print('len(file_input):', len(file_input))
    time.sleep(0.5)
    file_input = driver.find_elements_by_xpath('//input[@type="file"]')
file_input[1].send_keys(your_file)
#--- format ---
select_input = driver.find_element_by_id('convert-format')
select = Select(select_input)
select.select_by_visible_text(output_format)
#--- convert ---
driver.find_element_by_id('convert-button').click()
#--- download ---
time.sleep(5)
driver.find_elements_by_xpath('//td[@class="status last"]/a')[0].click()
I am currently trying to figure out how I can take a list of links and make Python run through all of them and save them as pdf. (I'm not a Python expert.)
I found a Python package called "pdfkit" which is quite good, but how do I set it up so that it follows my url-list and saves each pdf under a different name?
import pdfkit
config = pdfkit.configuration(wkhtmltopdf="C:\\Program Files (x86)\\wkhtmltopdf\\bin\\wkhtmltopdf.exe")
pdfkit.from_url('http://google.com', 'MyPDF.pdf', configuration=config)
This is my current code. Let's say I have a list of 10 webpages that I want to save as 10 different pdf files; how do I make a setup that would allow me to do so?
Another issue is that I need to log in to the page in order to scrape the information from the links; how would you implement that?
Best Regards,
Answer for the first question:
import pdfkit
config = pdfkit.configuration(wkhtmltopdf="C:\\Program Files (x86)\\wkhtmltopdf\\bin\\wkhtmltopdf.exe")
url_list = [
    ['http://google.com', 'google.com.pdf'],
    ['http://facebook.com', 'facebook.com.pdf'],
    ['http://yahoo.com', 'yahoo.com.pdf'],
]
for k, v in url_list:
    pdfkit.from_url(k, v, configuration=config)
For the second question, you can use the requests module's session feature to log in first and then pass the cookies to pdfkit to download the page. See Create PDF of a https webpage which requires login using pdfkit
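As a rough sketch of that approach (the login URL, form field names, and file names below are placeholders, not taken from any real site), you could log in with requests and hand the session cookies to wkhtmltopdf through pdfkit's options:
import pdfkit
import requests

config = pdfkit.configuration(wkhtmltopdf="C:\\Program Files (x86)\\wkhtmltopdf\\bin\\wkhtmltopdf.exe")

session = requests.Session()
# Placeholder login request; use your site's real login URL and form fields.
session.post('https://example.com/login', data={'usr': 'username', 'pwd': 'password'})

# Pass every cookie from the authenticated session to wkhtmltopdf.
options = {'cookie': [(name, value) for name, value in session.cookies.items()]}
pdfkit.from_url('https://example.com/protected/page', 'page.pdf',
                configuration=config, options=options)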
import selenium.webdriver
import pdfkit
import time
config = pdfkit.configuration(wkhtmltopdf="C:\\Program Files (x86)\\wkhtmltopdf\\bin\\wkhtmltopdf.exe")
driver = selenium.webdriver.Chrome()
driver.get('https://www.linkedin.com/')
time.sleep(1)
driver.find_element_by_id('login-email').send_keys('username')
driver.find_element_by_id('login-password').send_keys('password')
driver.find_element_by_id('login-submit').click()
time.sleep(2)
driver.save_screenshot('output.png') # only visible part
print(driver.page_source)
pdfkit.from_string(driver.page_source, 'file.pdf')
These are the steps I need to automatize:
1) Log in
2) Select an option from a drop down menu (To acces a list of products)
3) search something on the search field (The product we are looking for)
4) click a link (To open up the product's options)
5) click another link(To compile all the .pdf files relevant to said product in a bigger .pdf)
6) wait for a .pdf to load and then download it.(Save the .pdf on my machine with the name of the product as the file name)
I want to know if this is possible. If it is, where can I find how to do it?
Is it pivotal that there is actual clicking involved? If you're just looking to download PDFs then I suggest you use the Requests library. You might also want to consider using Scrapy.
In terms of searching on the site, you may want to use Fiddler to capture the HTTP POST request and then replicate that in Python.
Here is some code that might be useful as a starting place - these functions would log in to a server and download a target file.
import requests

# main_headers and proxies are assumed to be defined elsewhere in your script.

def login():
    login_url = 'http://www.example.com'
    payload = 'usr=username&pwd=password'
    connection = requests.Session()
    post_login = connection.post(data=payload,
                                 url=login_url,
                                 headers=main_headers,
                                 proxies=proxies,
                                 allow_redirects=True)
    return connection

def download(connection):
    directory = "C:\\example\\"
    url = "http://example.com/download.pdf"
    filename = directory + '\\' + url[url.rfind("/")+1:]
    r = connection.get(url=url,
                       headers=main_headers,
                       proxies=proxies)
    file_size = int(r.headers["Content-Length"])
    block_size = 1024
    mode = 'wb'
    print("\tDownloading: %s [%sKB]" % (filename, int(file_size/1024)))
    if r.status_code == 200:
        with open(filename, mode) as f:
            for chunk in r.iter_content(block_size):
                f.write(chunk)
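Wired together, with login() returning the session so download() can reuse the authenticated cookies, the flow is simply:
session = login()
download(session)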
For static sites you can use the mechanize module, available from PyPI; it does everything you want, except that it does not run JavaScript and thus does not work on dynamic websites. Also, it is strictly Python 2 only.
easy_install mechanize
For something way more complicated you might have to use the Python bindings for Selenium (install instructions) to control an external browser, or use spynner, which embeds a web browser. However, these two are far more difficult to set up.
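As a rough illustration of the mechanize route (Python 2; the URL and form field names are made up), logging in and fetching a file looks roughly like this:
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)   # some sites disallow robots in robots.txt
br.open('http://www.example.com/login')

br.select_form(nr=0)          # assume the login form is the first form on the page
br['usr'] = 'username'        # hypothetical field names
br['pwd'] = 'password'
br.submit()

# Download a file over the now-authenticated session.
response = br.open('http://example.com/download.pdf')
with open('download.pdf', 'wb') as f:
    f.write(response.read())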
Sure, just use selenium webdriver
from selenium import webdriver
browser = webdriver.Chrome()
browser.get('http://your-website.com')
search_box = browser.find_element_by_css_selector('input[id=search]')
search_box.send_keys('my search term')
browser.find_element_by_css_selector('input[type=submit]').click()
That would get you through the visit-page, enter-search-term, click-on-search stage of your problem. Read through the API for the rest.
Mechanize has problems at the moment because so much of a web page is generated via JavaScript, and if it is not rendering that, you can't do much with the page.
It helps if you understand CSS selectors; otherwise you can find elements by id, XPath, or other locators...
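For instance, the same element could be located in several ways (the id, name, and XPath here are hypothetical):
# Equivalent lookups with different locator strategies (hypothetical id/name/XPath).
search_box = browser.find_element_by_id('search')
search_box = browser.find_element_by_name('q')
search_box = browser.find_element_by_xpath('//input[@id="search"]')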