simple web scraper very slow - python

I'm fairly new to Python and web scraping in general. The code below works, but it seems awfully slow for the amount of information it's actually going through. Is there any way to easily cut down on execution time? I'm not sure, but it does seem like I've made this more complicated than it actually needed to be; any help would be appreciated.
Currently the code starts at the sitemap index, then iterates through a list of additional sitemaps. Within each of those sitemaps it pulls the information needed to construct a URL for a page's JSON data. From the JSON data I pull an XML link that I search for a string; if the string is found, the page is appended to a text file.
import io
import requests
from bs4 import BeautifulSoup

# global variables
start = 'https://www.govinfo.gov/wssearch/getContentDetail?packageId='
dash = '-'
urlSitemap = "https://www.govinfo.gov/sitemap/PLAW_sitemap_index.xml"

old_xml = requests.get(urlSitemap)
print(old_xml)
new_xml = io.BytesIO(old_xml.content).read()
final_xml = BeautifulSoup(new_xml)
linkToBeFound = final_xml.findAll('loc')
for loc in linkToBeFound:
    urlPLmap = loc.text
    old_xmlPLmap = requests.get(urlPLmap)
    print(old_xmlPLmap)
    new_xmlPLmap = io.BytesIO(old_xmlPLmap.content).read()
    final_xmlPLmap = BeautifulSoup(new_xmlPLmap)
    linkToBeFound2 = final_xmlPLmap.findAll('loc')
    for pls in linkToBeFound2:
        argh = pls.text.find('PLAW')
        theWanted = pls.text[argh:]
        thisShallWork = eval(requests.get(start + theWanted).text)
        print(requests.get(start + theWanted))
        dict1 = thisShallWork['download']
        finaldict = dict1['modslink'][2:]
        print(finaldict)
        url2 = 'https://' + finaldict
        try:
            old_xml4 = requests.get(url2)
            print(old_xml4)
            new_xml4 = io.BytesIO(old_xml4.content).read()
            final_xml4 = BeautifulSoup(new_xml4)
            references = final_xml4.findAll('identifier', {'type': 'Statute citation'})
            for sec in references:
                if sec.text == "106 Stat. 4845":
                    print(dash * 20)
                    print(sec.text)
                    print(dash * 20)
                    sec313 = open('sec313info.txt', 'a')
                    sec313.write("\n")
                    sec313.write(pls.text + '\n')
                    sec313.close()
        except:
            print('error at: ' + url2)

No idea why I spent so long on this, but I did. Your code was really hard to look through, so I started with that: I broke it up into two parts, getting the links from the sitemaps and then everything else, and pulled a few bits out into separate functions.
This checks about 2 URLs per second on my machine, which seems about right.
How this is better (you can argue with me about this part):
You don't have to reopen and close the output file after each write.
A fair bit of unneeded code is removed.
Your variables have better names (this does not improve speed in any way, but please do it, especially if you are asking for help).
Really the main thing: once you break it all up, it becomes fairly clear that what's slowing you down is waiting on the requests, which is pretty standard for web scraping. You can look into multithreading to avoid the wait (a rough threaded sketch follows the rewritten code below). Once you get into multithreading, the benefit of breaking up your code will likely become much more evident.
import requests
from bs4 import BeautifulSoup

# returns sitemap links
def get_links(s):
    old_xml = requests.get(s)
    new_xml = old_xml.text
    final_xml = BeautifulSoup(new_xml, "lxml")
    return final_xml.findAll('loc')

# gets the final url from your middle url and looks through it for the thing you are looking for
def scrapey(link):
    link_id = link[link.find("PLAW"):]
    r = requests.get('https://www.govinfo.gov/wssearch/getContentDetail?packageId={}'.format(link_id))
    print(r.url)
    try:
        r = requests.get("https://{}".format(r.json()["download"]["modslink"][2:]))
        print(r.url)
        soup = BeautifulSoup(r.text, "lxml")
        references = soup.findAll('identifier', {'type': 'Statute citation'})
        for ref in references:
            if ref.text == "106 Stat. 4845":
                return r.url
        return False
    except:
        print("bah" + r.url)
        return False

sitemap_links_el = get_links("https://www.govinfo.gov/sitemap/PLAW_sitemap_index.xml")
sitemap_links = map(lambda x: x.text, sitemap_links_el)
nlinks_el = map(get_links, sitemap_links)
links = [num.text for elem in nlinks_el for num in elem]

with open("output.txt", "a") as f:
    for link in links:
        url = scrapey(link)
        if url is False:
            print("no find")
        else:
            print("found on: {}".format(url))
            f.write("{}\n".format(url))

Related

How to put a condition that only runs when there is any change in API?

Hello Community Members,
I am very new to Python and to programming in general. I am currently working with a news API and displaying the news it returns. I want this program to check for updates and show them whenever the API has anything new. Please help me figure out what I can do to complete this.
CODE:
import requests

url = 'https://cryptopanic.com/api/v1/posts/?auth_token=<my token>&filter=hot'
html_link = requests.get(url)
datatype = html_link.json()
news_info = datatype['results']
latest_news = news_info[0]['title']
source = news_info[0]['source']['title']
print(latest_news)
I want this latest_news variable, which stores the headline, to print whenever there is a new item in the list. I have tried a comparison approach but haven't found anything that works so far.
Does this fit your criteria? You have to run it every 5 minutes, or whenever you want, and you will get the latest titles.
import requests, json

old_news_info = {"news": []}
try:
    old_news_info = json.load(open("old_news_info.json", "r"))
except:
    pass

url = 'https://cryptopanic.com/api/v1/posts/?auth_token=<token>&filter=hot'

print("waiting for response")
html_link = requests.get(url)
datatype = html_link.json()

if datatype != {'status': 'Incomplete', 'info': 'Token not found'}:
    news_info = datatype['results']
    if not news_info[0] in old_news_info["news"]:
        for news in news_info:
            if news in old_news_info["news"]:
                break
            else:
                old_news_info["news"].append(news)
                print(news["source"]['title'])
        json.dump(old_news_info, open("old_news_info.json", "w"), indent = 4)
else:
    print("Token not found")

Analysing YouTube comments using Python -- parameter has disabled comments

I'm trying to get into text analysis using YouTube comments. I've been using the code from the following website to scrape YouTube:
https://www.pingshiuanchua.com/blog/post/using-youtube-api-to-analyse-youtube-comments-on-python
The script starts working, but one section of the code raises an error if a video has comments disabled. I can't find a way to check whether comments are disabled (or whether any comments exist at all), skip that video if there is nothing to scrape, and continue on to the next video.
The code chunk in question creating the error is:
# =============================================================================
# Get Comments of Top Videos
# =============================================================================
video_id_pop = []
channel_pop = []
video_title_pop = []
video_desc_pop = []
comments_pop = []
comment_id_pop = []
reply_count_pop = []
like_count_pop = []

from tqdm import tqdm

for i, video in enumerate(tqdm(video_id, ncols = 100)):
    response = service.commentThreads().list(
        part = 'snippet',
        videoId = video,
        maxResults = 100, # Only take top 100 comments...
        order = 'relevance', #... ranked on relevance
        textFormat = 'plainText',
    ).execute()

    comments_temp = []
    comment_id_temp = []
    reply_count_temp = []
    like_count_temp = []
    for item in response['items']:
        comments_temp.append(item['snippet']['topLevelComment']['snippet']['textDisplay'])
        comment_id_temp.append(item['snippet']['topLevelComment']['id'])
        reply_count_temp.append(item['snippet']['totalReplyCount'])
        like_count_temp.append(item['snippet']['topLevelComment']['snippet']['likeCount'])

    comments_pop.extend(comments_temp)
    comment_id_pop.extend(comment_id_temp)
    reply_count_pop.extend(reply_count_temp)
    like_count_pop.extend(like_count_temp)

    video_id_pop.extend([video_id[i]]*len(comments_temp))
    channel_pop.extend([channel[i]]*len(comments_temp))
    video_title_pop.extend([video_title[i]]*len(comments_temp))
    video_desc_pop.extend([video_desc[i]]*len(comments_temp))

query_pop = [query] * len(video_id_pop)
Edited to add:
The person who created the code left a note about fixing the error, saying:
"You can wrap the query part of the code in a try...except statement, where if the try statement (the query part) failed, you can push an except of blank response or "error" string into the list."
I have no idea how to carry this out, if it makes sense to anyone else...
Note: this is not necessarily "good" coding style, but it's the sort of thing I would do if I ran into this problem when I was writing a script for my own short-term, personal use.
Python (and many other languages) has a way to catch exceptions and handle them without crashing. Used properly, this can be a very nice way to handle bad data.
https://docs.python.org/3.8/tutorial/errors.html is a good overview of exceptions. In general, the format they take is something like
try:
    code_that_can_error()
except ExceptionThatWillBeThrown as ex:
    handle_exception()
    print(ex)  # ex is an object that has information about what went wrong
finally:
    clean_up()
(finally is particularly useful if you have something you need to call close on, like a file. If an exception is thrown you might never reach the close call, but a finally block is guaranteed to run regardless.)
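A tiny self-contained illustration of that point (the file name and the deliberately failing parse are made up for this example):

f = open("results.txt", "a")
try:
    value = int("not a number")  # raises ValueError
    f.write(str(value))
except ValueError as ex:
    print(ex)
finally:
    f.close()  # runs even though the try block raised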
In your case, all we need is to ignore the error and move on to the next video.
for i, video in enumerate(tqdm(video_id, ncols = 100)):
    try:
        response = service.commentThreads().list(
            part = 'snippet',
            videoId = video,
            maxResults = 100, # Only take top 100 comments...
            order = 'relevance', #... ranked on relevance
            textFormat = 'plainText',
        ).execute()

        comments_temp = []
        [...]
        video_desc_pop.extend([video_desc[i]]*len(comments_temp))
    except:
        # Something threw an error. Skip that video and move on
        print(f"{video} has comments disabled, or something else went wrong")

query_pop = [query] * len(video_id_pop)
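If you want to be a bit narrower than a bare except, the google-api-python-client library raises googleapiclient.errors.HttpError for API-level failures such as disabled comments, so you could catch just that. This is a sketch reusing the variables from the script above (service, video_id, tqdm), and it assumes that client library is what built service:

from googleapiclient.errors import HttpError

for i, video in enumerate(tqdm(video_id, ncols=100)):
    try:
        response = service.commentThreads().list(
            part='snippet',
            videoId=video,
            maxResults=100,
            order='relevance',
            textFormat='plainText',
        ).execute()
        # ... same list-building code as above ...
    except HttpError as ex:
        # The API refused the request (e.g. comments disabled); skip this video.
        print(f"Skipping {video}: {ex}")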

How to write correctly a program which extract all links from a web page?

This is part of the Udacity course WEB SEARCH ENGINE. The goal of this quiz is to write a program which extracts all links from a web page. The program must output only the links, but in my case it returns all the links and then "None" twice. I know the error is in the second part of the program, after the "while" and after the "else", but I don't know what I should write there.
def get_next_target(page):
    start_link = page.find('<a href=')
    if start_link == -1:
        return None, 0
    else:
        start_quote = page.find('"', start_link)
        endquo = page.find('"', start_quote + 1)
        url = page[(start_quote + 1):endquo]
        return url, endquo

page = 'i know what you doing summer <a href="Udasity".i know what you doing summer <a href="Georgia" i know what you doing summer '

def ALLlink(page):
    url = 1
    while url != None:
        url, endquo = get_next_target(page)
        if url:
            print url
            page = page[endquo:]
        else:
            pass

print ALLlink(page)
First, you can remove your else statement in your ALLlink() function since it's not doing anything.
Also, when comparing to None, you should use is not instead of !=:
while url != None:       # bad
while url is not None:   # good
That said, I think your error is in your last line:
print ALLlink(page)
You basically have two print statements. The first is inside your function and the second is on the last line of your script. Really, you don't need the last print statement there because you're already printing in your ALLlink() function. So if you change the line to just ALLlink(page), I think it'll work.
If you do want to print there, you could modify your function to store the URLs in an array, and then print that array. Something like this:
def ALLlink(page):
    urls = []
    url = 1
    while url is not None:
        url, endquo = get_next_target(page)
        if url:
            urls.append(url)
            page = page[endquo:]
    return urls

print ALLlink(page)
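With the sample page from the question, that final print outputs ['Udasity', 'Georgia'], with no stray None.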

How to send scraped data through reddit bot

So I've got this bot that I want to use to reply with the box score of the Mets game any time someone says "mets score" on a specific subreddit. This is my first Python project, and I plan on using it on a dummy subreddit I created as a learning tool. I'm having trouble getting the scores I scraped from the website into the bot's reply so they appear in the response to the "mets score" comments. Any suggestions?
import praw
import time
from lxml import html
import requests
from bs4 import BeautifulSoup

r = praw.Reddit(user_agent = 'my_first_bot')
r.login('user_name', 'password')

def scores():
    soup = BeautifulSoup(requests.get("http://scores.nbcsports.com/mlb/scoreboard.asp?day=20160621&meta=true").content, "lxml")
    table = soup.find("a", class_="teamName", text="NY Mets").find_previous("table")
    a, b = [a.text for a in table.find_all("a", class_="teamName")]
    inn, a_score, b_score = ([td.text for td in row.select("td.shsTotD")] for row in table.find_all("tr"))
    print(" ".join(inn))
    print("{}: {}".format(a, " ".join(a_score)))
    print("{}: {}".format(b, " ".join(b_score)))

words_to_match = ['mets score']
cache = []

def run_bot():
    print("Grabbing subreddit...")
    subreddit = r.get_subreddit("random_subreddit")
    print("Grabbing comments...")
    comments = subreddit.get_comments(limit=40)
    for comment in comments:
        print(comment.id)
        comment_text = comment.body.lower()
        isMatch = any(string in comment_text for string in words_to_match)
        if comment.id not in cache and isMatch:
            print("match found!" + comment.id)
            comment.reply('heres the score to last nights mets game...' scores())
            print("reply successful")
            cache.append(comment.id)
    print("loop finished, goodnight")

while True:
    run_bot()
    time.sleep(120)
I think I'll just put you out of your misery ;). There are multiple issues with your code snippet:
comment.reply('heres the score to last nights mets game...' scores())
The .reply() method requires a string or an object that can have a good enough representation as a string. Assuming the method scores() returns a string, you should concatenate the two arguments, like this:
comment.reply('heres the score to last nights mets game...' + scores())
It looks like your knowledge of basic python syntax and constructs is dusty. For a quick refresher see this.
Your method scores() doesn't return anything. It just prints out a bunch of lines (which I assume are for debugging purposes).
def scores():
    soup = BeautifulSoup(requests.get("http://scores.nbcsports.com/mlb/scoreboard.asp?day=20160621&meta=true").content, "lxml")
    .......
    print(" ".join(inn))
    print("{}: {}".format(a, " ".join(a_score)))
    print("{}: {}".format(b, " ".join(b_score)))
Funnily enough, you could use those exact strings for your return value (or maybe something else entirely, as suits your needs), like this:
def scores():
    .......
    inn_string = " ".join(inn)
    a_string = "{}: {}".format(a, " ".join(a_score))
    b_string = "{}: {}".format(b, " ".join(b_score))
    return "\n".join([inn_string, a_string, b_string])
These should get you up and running.
More advice: have you read the Reddit PRAW docs? You should. You should also probably use praw.helpers.comment_stream(). It's simple and easy to use, and it will handle retrieving new comments for you. Currently you try to fetch a maximum of 40 comments every 120 seconds; what happens when there are more relevant comments than that in a 120-second span? You'll end up missing some of the comments you should have replied to. .comment_stream() will take care of rate limiting for you so that your bot can reply to each new comment that needs its attention, at its own pace (a rough sketch is below). Read more about this here.
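For what it's worth, here is a rough sketch of the comment_stream() approach, assuming PRAW 3 (the same version your r.login() call implies) and the scores() function rewritten above so that it returns a string; double-check the helper's exact signature against the PRAW docs for your version:

import praw

r = praw.Reddit(user_agent='my_first_bot')
r.login('user_name', 'password')

words_to_match = ['mets score']
cache = []

# comment_stream() yields new comments as they arrive, so no manual polling loop.
for comment in praw.helpers.comment_stream(r, 'random_subreddit'):
    if comment.id in cache:
        continue
    if any(phrase in comment.body.lower() for phrase in words_to_match):
        # Concatenate the fixed text and the returned score string, as discussed above.
        comment.reply('heres the score to last nights mets game...\n' + scores())
        cache.append(comment.id)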

KeyError and TypeError in my python web scraper

Sorry about this vague and confusing title, but there is really no better way for me to summarize my problem in one sentence.
I was trying to get the student and grade information from a french website. The link is this (http://www.bankexam.fr/resultat/2014/BACCALAUREAT/AMIENS?filiere=BACS)
My code is as follows:
import time
import urllib2
from bs4 import BeautifulSoup

regions = {'R\xc3\xa9sultats Bac Amiens 2014':'/resultat/2014/BACCALAUREAT/AMIENS'}
base_url = 'http://www.bankexam.fr'
tests = {'es':'?filiere=BACES','s':'?filiere=BACS','l':'?filiere=BACL'}

for i in regions:
    for x in tests:
        # create the output file
        output_file = open('/Users/student project/'+ i + '_' + x + '.txt','a')

        time.sleep(2) #compassionate scraping
        section_url = base_url + regions[i] + tests[x] #now goes to the x test page of region i
        request = urllib2.Request(section_url)
        response = urllib2.urlopen(request)
        soup = BeautifulSoup(response,'html.parser')

        content = soup.find('div',id='zone_res')
        for row in content.find_all('tr'):
            if row.td:
                student = row.find_all('td')
                name = student[0].strong.string.encode('utf8').strip()
                try:
                    school = student[1].strong.string.encode('utf8')
                except AttributeError:
                    school = 'NA'
                result = student[2].span.string.encode('utf8')
                output_file.write ('%s|%s|%s\n' % (name,school,result))

        # Find the maximum pages to go through
        if soup.find('div','pagination'):
            import re
            page_info = soup.find('div','pagination')
            pages = []
            for i in page_info.find_all('a',re.compile('elt')):
                try:
                    pages.append(int(i.string.encode('utf8')))
                except ValueError:
                    continue
            max_page = max(pages)

        # Now goes through page 2 to max page
        for i in range(1,max_page):
            page_url = '&p='+str(i)+'#anchor'
            section2_url = section_url+page_url
            request = urllib2.Request(section2_url)
            response = urllib2.urlopen(request)
            soup = BeautifulSoup(response,'html.parser')
            content = soup.find('div',id='zone_res')
            for row in content.find_all('tr'):
                if row.td:
                    student = row.find_all('td')
                    name = student[0].strong.string.encode('utf8').strip()
                    try:
                        school = student[1].strong.string.encode('utf8')
                    except AttributeError:
                        school = 'NA'
                    result = student[2].span.string.encode('utf8')
                    output_file.write ('%s|%s|%s\n' % (name,school,result))
A little more description about the code:
I created the 'regions' dictionary and the 'tests' dictionary because there are 30 other regions I need to collect, and I include just one here as a showcase. I'm only interested in the results of three tests (ES, S, L), hence the 'tests' dictionary.
Two errors keep showing up.
One is
KeyError: 2
and it is linked to line 12,
section_url = base_url + regions[i] + tests[x]
The other is
TypeError: cannot concatenate 'str' and 'int' objects
and it is linked to line 10.
I know there is a lot of information here and I'm probably not listing the most important parts for you to help me, but let me know what I can do to fix this!
Thanks
The issue is that you're using the variable i in more than one place.
Near the top of the file, you do:
for i in regions:
So, in some places i is expected to be a key into the regions dictionary.
The trouble comes when you use it again later. You do so in two places:
for i in page_info.find_all('a',re.compile('elt')):
And:
for i in range(1,max_page):
The second of these is what is causing your exceptions, as the integer values that get assigned to i don't appear in the regions dict (nor can an integer be added to a string).
I suggest renaming some or all of those variables. Give them meaningful names, if possible (i is perhaps acceptable for an "index" variable, but I'd avoid using it for anything else unless you're code golfing).
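For instance, here is a minimal sketch of that renaming, reusing the setup from the question's script (regions, tests, base_url, page_info, and the fetch/parse code are assumed to stay exactly as they are; the new variable names are just illustrative choices):

for region_name in regions:
    for test_name in tests:
        output_file = open('/Users/student project/' + region_name + '_' + test_name + '.txt', 'a')
        section_url = base_url + regions[region_name] + tests[test_name]
        # ... fetch and parse section_url exactly as before ...

        pages = []
        for page_link in page_info.find_all('a', re.compile('elt')):
            try:
                pages.append(int(page_link.string.encode('utf8')))
            except ValueError:
                continue
        max_page = max(pages)

        for page_number in range(1, max_page):
            page_url = '&p=' + str(page_number) + '#anchor'
            # ... fetch and parse section_url + page_url exactly as before ...

Since none of these loop variables is named i anymore, the outer region_name and test_name keys survive the inner loops, and both the KeyError and the TypeError go away.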
