Getting all visible text from a webpage using Selenium - python

I've been googling this all day without finding the answer, so apologies in advance if this is already answered.
I'm trying to get all visible text from a large number of different websites. The reason is that I want to process the text to eventually categorize the websites.
After a couple of days of research, I decided that Selenium was my best chance. I've found a way to grab all the text with Selenium; unfortunately, the same text is grabbed multiple times:
from selenium import webdriver
import codecs

filen = codecs.open('output.txt', encoding='utf-8', mode='w+')
driver = webdriver.Firefox()
driver.get("http://www.examplepage.com")
allelements = driver.find_elements_by_xpath("//*")
ferdigtxt = []
for i in allelements:
    if i.text in ferdigtxt:
        pass
    else:
        ferdigtxt.append(i.text)
        filen.writelines(i.text)
filen.close()
driver.quit()
The if condition inside the for loop is an attempt at eliminating the problem of fetching the same text multiple times; however, it only works as planned on some webpages. (It also makes the script A LOT slower.)
I'm guessing the reason for my problem is that - when asking for the inner text of an element - I also get the inner text of the elements nested inside the element in question.
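To illustrate (a minimal sketch with made-up markup): given <div><p>hello</p></div>, both elements report the same text, which is why iterating over //* collects duplicates:
div = driver.find_element_by_xpath("//div")
par = driver.find_element_by_xpath("//div/p")
print(div.text)  # "hello" - a parent's .text includes its children's text
print(par.text)  # "hello" again - hence the duplicates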
Is there any way around this? Is there some sort of master element I grab the inner text of? Or a completely different way that would enable me to reach my goal? Any help would be greatly appreciated as I'm out of ideas for this one.
Edit: the reason I used Selenium and not Mechanize and Beautiful Soup is because I wanted JavaScript-rendered text

Using lxml, you might try something like this:
import contextlib
import selenium.webdriver as webdriver
import lxml.html as LH
import lxml.html.clean as clean

url = "http://www.yahoo.com"
ignore_tags = ('script', 'noscript', 'style')
with contextlib.closing(webdriver.Firefox()) as browser:
    browser.get(url)  # Load page
    content = browser.page_source
    cleaner = clean.Cleaner()
    content = cleaner.clean_html(content)
    with open('/tmp/source.html', 'w') as f:
        f.write(content.encode('utf-8'))
    doc = LH.fromstring(content)
    with open('/tmp/result.txt', 'w') as f:
        for elt in doc.iterdescendants():
            if elt.tag in ignore_tags:
                continue
            text = elt.text or ''
            tail = elt.tail or ''
            words = ' '.join((text, tail)).strip()
            if words:
                words = words.encode('utf-8')
                f.write(words + '\n')
This seems to get almost all of the text on www.yahoo.com, except for text in images and some text that changes with time (done with javascript and refresh perhaps).
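Note the snippet above is written for Python 2: it writes UTF-8-encoded byte strings to files opened in text mode. On Python 3 you would open the output file with an explicit encoding and drop the .encode calls; roughly (a sketch, with doc and ignore_tags as above):
with open('/tmp/result.txt', 'w', encoding='utf-8') as f:
    for elt in doc.iterdescendants():
        if elt.tag in ignore_tags:
            continue
        words = ' '.join((elt.text or '', elt.tail or '')).strip()
        if words:
            f.write(words + '\n')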

Here's a variation on @unutbu's answer:
#!/usr/bin/env python
import sys
from contextlib import closing

import lxml.html as html  # pip install 'lxml>=2.3.1'
from lxml.html.clean import Cleaner
from selenium.webdriver import Firefox  # pip install selenium
from werkzeug.contrib.cache import FileSystemCache  # pip install werkzeug

cache = FileSystemCache('.cachedir', threshold=100000)

url = sys.argv[1] if len(sys.argv) > 1 else "https://stackoverflow.com/q/7947579"

# get page
page_source = cache.get(url)
if page_source is None:
    # use firefox to get page with javascript generated content
    with closing(Firefox()) as browser:
        browser.get(url)
        page_source = browser.page_source
    cache.set(url, page_source, timeout=60*60*24*7)  # week in seconds

# extract text
root = html.document_fromstring(page_source)
# remove flash, images, <script>, <style>, etc.
Cleaner(kill_tags=['noscript'], style=True)(root)  # lxml >= 2.3.1
print root.text_content()  # extract text
I've separated your task into two:
get page (including elements generated by javascript)
extract text
The two steps are connected only through the cache: you can fetch pages in one process and extract the text in another, or defer extraction and do it later with a different algorithm.
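For example, an extract-only run (a sketch that reuses the same .cachedir; it never starts Firefox as long as the page is already cached):
from werkzeug.contrib.cache import FileSystemCache
import lxml.html as html
from lxml.html.clean import Cleaner

cache = FileSystemCache('.cachedir', threshold=100000)
page_source = cache.get("https://stackoverflow.com/q/7947579")
if page_source is not None:  # still in the cache
    root = html.document_fromstring(page_source)
    Cleaner(kill_tags=['noscript'], style=True)(root)
    print root.text_content()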

Related

For Loops while using selenium for webscraping Python

I am attempting to web-scrape info off of the following website: https://www.axial.net/forum/companies/united-states-family-offices/
I am trying to scrape the description for each family office, so "https://www.axial.net/forum/companies/united-states-family-offices/" + insert_company_name are the pages I need to scrape.
So I wrote the following code to test the program for just one page:
from bs4 import BeautifulSoup as soup
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome('insert_path_here/chromedriver')
driver.get("https://network.axial.net/company/ansaco-llp")
page_source = driver.page_source
soup2 = soup(page_source,"html.parser")
soup2.findAll('axl-teaser-description')[0].text
This works for the single page, as long as the description doesn't have a "show full description" drop down button. I will save that for another question.
I wrote the following loop:
# Note: lst2 has all the names of the companies. I made sure they match the webpage.
lst3 = []
for key in lst2[1:]:
    driver.get("https://network.axial.net/company/" + key.lower())
    page_source = driver.page_source
    for handle in driver.window_handles:
        driver.switch_to.window(handle)
    word_soup = soup(page_source, "html.parser")
    if word_soup.findAll('axl-teaser-description') == []:
        lst3.append('null')
    else:
        c = word_soup.findAll('axl-teaser-description')[0].text
        lst3.append(c)
print(lst3)
When I run the loop, all of the values come out as "null", even the ones without "click for full description" buttons.
I edited the loop to instead print out word_soup, and the page is different than if I had run it without a loop and does not contain the description text.
I don't understand why a loop would cause that but apparently it does. Does anyone know how to fix this problem?
Found the solution: pause the program for 3 seconds after driver.get:
import time

lst3 = []
for key in lst2[1:]:
    driver.get("https://network.axial.net/company/" + key.lower())
    time.sleep(3)
    page_source = driver.page_source
    word_soup = soup(page_source, "html.parser")
    if word_soup.findAll('axl-teaser-description') == []:
        lst3.append('null')
    else:
        c = word_soup.findAll('axl-teaser-description')[0].text
        lst3.append(c)
print(lst3)
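A fixed sleep wastes three seconds on fast pages and can still be too short on slow ones. An explicit wait is more robust; here is a sketch, assuming the axl-teaser-description element is what signals that the page has rendered:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

for key in lst2[1:]:
    driver.get("https://network.axial.net/company/" + key.lower())
    try:
        # wait up to 10 seconds for the description element to appear
        WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.TAG_NAME, "axl-teaser-description")))
    except TimeoutException:
        pass  # no description on this page; the 'null' branch handles it
    page_source = driver.page_source
    # ... parse with BeautifulSoup as above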
I see that the page uses JavaScript to generate the text, meaning it doesn't show up in the initial page source, which is weird but OK. I don't quite understand why you're iterating through and switching to all the window handles you have open, but you definitely won't find the description in the page source / BeautifulSoup until that JavaScript has run.
Honestly, I'd personally look for a better website if you can; otherwise you'll have to try it with Selenium, which is inefficient and horrible.

I am having a hard time creating a program that finds tor nodes

I am trying to create a web scraping program that goes to a specific website, collects the tor nodes, and then compares them to a list that I have. If an IP address matches, it's a tor node; if not, it isn't.
I am having a hard time getting the "text" from the inspect element of the website: [Inspect element of website][1]
[1]: https://i.stack.imgur.com/16zWw.png
Any help is appreciated, I'm stuck right now and don't know how to get the "text" from the first picture to show up on my program. Thanks in advance.
Here is the code to my program so far:
import requests
from bs4 import BeautifulSoup
page = requests.get('https://www.dan.me.uk/tornodes')
soup = BeautifulSoup(page.content, 'html.parser')
search = soup.find(id='content')
#137.74.19.201 is practice tor node
items = search.find_all(class_='article box')
Why bother with BeautifulSoup?! The site clearly states that there are markers in the page; just take the whole page as a string, split on those markers, and go from there, for example:
import requests

page = requests.get('https://www.dan.me.uk/tornodes')
content = page.text  # the decoded source of the page

if "<!--__BEGIN_TOR_NODE_LIST__-->" not in content:
    print("list not ready")
else:
    list_text = content.split("<!--__BEGIN_TOR_NODE_LIST__-->")[1]  # take everything after this
    list_text = list_text.split("<!--__END_TOR_NODE_LIST__-->")[0]  # take everything before this
    for line in list_text.split("<br>"):
        line_ip = line.strip().split("|")[0]
        # now do what you want with it
        if line_ip in my_known_ip_list:
            print("This is good %s" % line_ip)

import urllib.request  # the lib that handles the url stuff

target_url = 'https://www.dan.me.uk/torlist/'
my_ips = ['1.161.11.204', '1.161.11.205']
confirmed_ips = []
for line in urllib.request.urlopen(target_url):
    ip = line.decode().strip()  # urlopen yields bytes like b'1.2.3.4\n'
    if ip in my_ips:
        print(ip)
        confirmed_ips.append(ip)
print(confirmed_ips)
# ATTENTION:
# Umm... You can only fetch the data every 30 minutes - sorry. It's pointless any faster as I only update every 30 minutes anyway.
# If you keep trying to download this list too often, you may get blocked from accessing it completely.
# (this is due to some people trying to download this list every minute!)
Because of this 30-minute limitation (otherwise you will receive an ERROR 403), you can read the lines and save them to a file, then compare your list with the downloaded list.
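A minimal sketch of that save-and-compare approach (the file name is just a placeholder):
import urllib.request

# fetch once, within the 30-minute limit, and cache to disk
with open('torlist.txt', 'wb') as f:
    f.write(urllib.request.urlopen('https://www.dan.me.uk/torlist/').read())

# compare offline as often as you like
with open('torlist.txt') as f:
    tor_ips = {line.strip() for line in f}
confirmed_ips = [ip for ip in my_ips if ip in tor_ips]
print(confirmed_ips)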

How to send key with BeautifulSoup

I created a script with Selenium to get information for a given GST number. I completed that program and it's giving me the required details on output without any problem.
Now I do not want it to interact with the Chrome browser anymore, so I'm trying to do this with BeautifulSoup.
BeautifulSoup is new to me, so I don't have much idea how to find elements; I've searched a lot about how to send keys with BeautifulSoup but I'm not getting it.
Now my script is stuck here:
from bs4 import BeautifulSoup
import requests
import urllib.request as urllib2
quote_page = 'https://my.gstzen.in/p/search-taxpayer'
page = urllib2.urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')
Even if I manage to find the GST input element, I'm wondering how I send keys to it, like the 15-digit GST number, followed by pressing Enter or clicking on "Search GST Details".
If possible, let me know the solution so I can start my research on it.
Actually, I need to complete this tonight.
Also, here is my script, which does the same thing with Selenium easily. I want to do the same thing with BeautifulSoup because I do not want Chrome to run every time I check a GST number, and BeautifulSoup seems interesting.
import selenium
import time
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.keys import Keys
import csv
import requests
#import pyvirtualdisplay
#from pyvirtualdisplay import display
#display = Display(visible=0, size=(800, 600))
#display.start()

browser = webdriver.Chrome('E:\\Chrome Driver\\chromedriver_win32\\chromedriver.exe')
browser.set_window_position(-10000, 0)
browser.get('https://my.gstzen.in/p/search-taxpayer/')
with open('product.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    next(csv_reader)
    for row in csv_reader:
        name, phone = row
        time.sleep(1)
        gst = browser.find_element_by_name('gstin')
        gst.click()
        gst.send_keys(name)
        time.sleep(1)
        Details = browser.find_element_by_xpath("//*[contains(text(), ' Search GSTIN Details')]")
        Details.click()
        info = browser.find_element_by_class_name('col-sm-4')
        print(info.text)
        info2 = browser.find_element_by_xpath('/html/body/div[4]/div/div/div[1]/div[2]/div[2]/div[1]/div[2]')
        print(info2.text)
input('Press Enter to quit')
browser.quit()
BeautifulSoup is a library for parsing and formatting, not interacting with web pages. For the latter, if that page requires JavaScript to work, you're stuck using a headless browser.
If it doesn't, you have at least two options:
Watch the Network tab in your browser's developer tools and see if you can recreate the request for the page you want using requests or urllib2
Use mechanize, which is built specifically to work with forms on sites that don't depend on JavaScript
mechanize is a little more work if there's no CSRF token or similar mechanism (though, again, it'll fail if JavaScript is required) and a little less work if there is.
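For the first option, a minimal sketch with requests (the field name 'gstin' is taken from the Selenium script's find_element_by_name('gstin'); the exact endpoint, any hidden form fields, and a CSRF token, if present, must be confirmed in the Network tab):
import requests
from bs4 import BeautifulSoup

# '22AAAAA0000A1Z5' is a placeholder GSTIN; 'gstin' comes from the Selenium version
resp = requests.post('https://my.gstzen.in/p/search-taxpayer/',
                     data={'gstin': '22AAAAA0000A1Z5'})
soup = BeautifulSoup(resp.text, 'html.parser')
print(soup.find(class_='col-sm-4'))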

Advice on how to scrape data from this website

I would like some advice on how to scrape data from this website.
I started with selenium, but got stuck at the beginning because, for example, I have no idea how to set the dates.
My code until now:
from bs4 import BeautifulSoup as soup
from openpyxl import load_workbook
from openpyxl.styles import PatternFill, Font
from selenium import webdriver
from selenium.webdriver.common.by import By
import datetime
import os
import time
import re
day = datetime.date.today().day
month = datetime.date.today().month
year = datetime.date.today().year
my_url = 'https://www.eex-transparency.com/homepage/power/germany/production/availability/non-usability-by-unit/non-usability-history'
cookieValue = '12-c12-cached|from:' +str(year)+ '-' +str(month)+ '-' +str(day-5)+ ','+'to:' +str(year)+ '-' +str(month)+ '-' + str(day) +',dateType:1,company:PreussenElektra,fuel:uranium,canceled:0,durationComparator:ge,durationValue:5,durationUnit:day'
#saving url
browser = webdriver.Chrome(executable_path=r"C:\Program Files (x86)\Google\Chrome\Application\chromedriver.exe")
my_url = 'https://www.eex-transparency.com/homepage/power/germany/production/availability/non-usability-by-unit'
browser.add_cookie({'name': 'tem', 'value': cookieValue})
browser.get(my_url)
my_url = 'https://www.eex-transparency.com/homepage/power/germany/production/availability/non-usability-by-unit/non-usability-history'
browser.get(my_url)
Obviously I am not asking for code, just some suggestions on how to continue with Selenium (how to set dates and other data), or any ideas on how to scrape this website.
Thanks in advance.
EDIT: I am trying to follow the cookie approach; the code above is my updated version. I read that the cookie needs to be created before loading the page, and that is what I did. Any idea why it is not working?
The best approach for you will be changing cookies, because all the filter data is saved in a cookie.
Check the cookies in Chrome (F12 -> Application -> Cookies) and play with the filters. If you change a cookie in the developer tools, you have to refresh the website :)
Check this post on how to change cookies in selenium python.
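A minimal sketch of that flow, reusing the tem cookie from the question (note that Selenium only accepts add_cookie for the domain that is currently loaded, so the site has to be opened once first):
browser.get('https://www.eex-transparency.com/')  # must be on the domain before add_cookie
browser.add_cookie({'name': 'tem', 'value': cookieValue})
browser.get(my_url)  # reload so the site picks up the filter cookie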
To get values from the website you have to use the classic way like you did here, but you will have to use classes:
radio = browser.find_elements_by_class_name('aaaaaa')
You can always use XPath to search for elements (Chrome will generate them for you).
Is there any particular reason why you have decided to use Selenium over other web scraping tools (Scrapy, urllib, etc.)? I personally have not used Selenium, but I have used some of the other tools. Below is an example of a script that just pulls all the HTML from a page.
import urllib2
from bs4 import BeautifulSoup as soup

link = "https://ubuntu.com"
page = urllib2.urlopen(link)
data = soup(page, 'html.parser')
print(data)
This is just a short script to pull all the HTML off a page. I believe BeautifulSoup has additional tools for inputting data into fields, but the exact method slips my mind right now; if I can find my notes on it I will edit this post. I remember it being very straightforward, though.
Best of luck!
Edit: here's a discussion web scraping tools from reddit a while back that I had saved https://www.reddit.com/r/Python/comments/1qnbq3/webscraping_selenium_vs_conventional_tools/

Scrape with BeautifulSoup from site that uses AJAX pagination using Python

I'm fairly new to coding and Python, so I apologize if this is a silly question. I'd like a script that goes through all 19,000 search result pages and scrapes each page for all of the URLs. I've got all of the scraping working, but I can't figure out how to deal with the fact that the page uses AJAX to paginate. Usually I'd just make a loop with the URL to capture each search result, but that's not possible here. Here's the page: http://www.heritage.org/research/all-research.aspx?nomobile&categories=report
This is the script I have so far:
import io
import urllib2
from bs4 import BeautifulSoup

with io.open('heritageURLs.txt', 'a', encoding='utf8') as logfile:
    page = urllib2.urlopen("http://www.heritage.org/research/all-research.aspx?nomobile&categories=report")
    soup = BeautifulSoup(page)
    snippet = soup.find_all('a', attrs={'item-title'})
    for a in snippet:
        logfile.write("http://www.heritage.org" + a.get('href') + "\n")
print "Done collecting urls"
Obviously, it scrapes the first page of results and nothing more.
And I have looked at a few related questions but none seem to use Python or at least not in a way that I can understand. Thank you in advance for your help.
For the sake of completeness: while you may try to inspect the POST request and find a way to access the next page directly, as I suggested in my comment, if Selenium is an acceptable alternative, it makes it quite easy to achieve what you want.
Here is a simple solution using Selenium for your question:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from time import sleep

# uncomment if using Firefox web browser
driver = webdriver.Firefox()
# uncomment if using PhantomJS
#driver = webdriver.PhantomJS()

url = 'http://www.heritage.org/research/all-research.aspx?nomobile&categories=report'
driver.get(url)

# set initial page count
pages = 1
with open('heritageURLs.txt', 'w') as f:
    while True:
        try:
            # sleep here to allow time for page load
            sleep(5)
            # grab the Next button if it exists (find_elements returns an
            # empty list instead of raising when there is no match)
            btn_next = driver.find_elements_by_class_name('next')
            # find all item-title a href and write to file
            links = driver.find_elements_by_class_name('item-title')
            print "Page: {} -- {} urls to write...".format(pages, len(links))
            for link in links:
                f.write(link.get_attribute('href') + '\n')
            # Exit if no more Next button is found, i.e. last page
            if not btn_next:
                print "crawling completed."
                break
            # otherwise click the Next button and repeat crawling the urls
            pages += 1
            btn_next[0].send_keys(Keys.RETURN)
        # you should specify the exception here
        except:
            print "Error found, crawling stopped"
            break
Hope this helps.
