I am using Beautiful Soup to get links from a page.
What I would like it to do is select one of the links at random and continue with the rest of the program. Currently it uses all the links and continues with the rest of the program, but I only want it to choose one link.
The rest of the program will then look at the link and decide whether it is good enough for what I want. If it is not good enough, it will go back, pick another link, and repeat the process.
Any idea how you would get it to do this?
This is my current code for looking up the links:
import requests
import os.path
from bs4 import BeautifulSoup
import urllib.request
import hashlib
import random

max_page = 1
img_limit = 5

# Note: id_save, log_id(), ratio_get() and img_download_link() are defined elsewhere in my program.
def pic_spider(max_pages):
    page = random.randrange(0, max_page)
    pid = page * 40
    pic_good = 1
    while pic_good == 1:
        if page <= max_pages:
            url = 'http://safebooru.org/index.php?page=post&s=list&tags=yuri&pid=' + str(pid)
            source_code = requests.get(url)
            plain_text = source_code.text
            soup = BeautifulSoup(plain_text, "html.parser")
            id_list_location = os.path.join(id_save, "ids.txt")
            first_link = soup.findAll('a', id=True, limit=img_limit)
            for link in first_link:
                href = "http://safebooru.org/" + link.get('href')
                picture_id = link.get('id')
                print("Page number = " + str(page + 1))
                print("pid = " + str(pid))
                print("Id = " + picture_id)
                print(href)
                if picture_id in open(id_list_location).read():
                    print("Already Downloaded or Picture checked to be too long")
                else:
                    log_id(picture_id)
                    if ratio_get(href) >= 1.3:
                        print("Picture too long")
                    else:
                        #img_download_link(href, picture_id)
                        print("Ok download")
I'm not really sure how I would do it, so any ideas would help me out. If you have any questions, feel free to ask!
Am I missing something? Don't you just need to replace this:
first_link = soup.findAll('a', id=True, limit=img_limit)
for link in first_link:
With:
from random import choice
first_link = soup.findAll('a', id=True, limit=img_limit)
link = choice(first_link)
This will select one random item from the list.
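If the randomly chosen link then turns out not to be good enough, you can simply keep drawing from the remaining candidates instead of re-requesting the page. A minimal sketch of that retry loop, reusing the ratio_get check and the names from your own code (so this assumes that helper and soup already exist):

candidates = soup.findAll('a', id=True, limit=img_limit)
random.shuffle(candidates)          # visit the candidate links in random order

chosen = None
for link in candidates:
    href = "http://safebooru.org/" + link.get('href')
    if ratio_get(href) < 1.3:       # your own "good enough" test
        chosen = link
        break                       # stop at the first acceptable link
    print("Picture too long, trying another link")

if chosen is None:
    print("No acceptable link on this page")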
I am trying to create an app that would allow a person to get a list of GitHub repositories relevant to the search keywords they provide. On the result page for the search query the repositories have a special div class, namely:
<div class="f4 text-normal">
</div>
How do I make Beautiful Soup iterate through all of these classes on the page and then through all the <a> tags in search of hrefs?
For now I only know how to get all the hrefs from the <a> tags:
import requests, sys, webbrowser, bs4
#variables
linkList = []
#handle input
print('Your GitHub repository search query:')
userInput = input()
#get the results from the URL
results = requests.get('https://github.com/search?q=' + userInput + '&type=repositories'
+ ' '.join(sys.argv[1:]))
results.raise_for_status()
soup = bs4.BeautifulSoup(results.text, 'html.parser')
#find all the viable URLs
data = soup.find_all('a')
for aHref in data:
    if "href" in str(aHref):
        linkList.append(aHref)

print(linkList)
Note: your selection is not that specific; it will also pick up other, unexpected links.
Select your elements more specifically and get the href attribute with a list comprehension. Access a tag's attributes by treating it like a dictionary: aHref['href']
['https://github.com/'+a['href'] for a in soup.select('.repo-list-item .f4 a[href]')]
Example
import requests, sys, webbrowser, bs4
print('Your GitHub repository search query:')
userInput = input()
results = requests.get('https://github.com/search?q=' + userInput + '&type=repositories'
+ ' '.join(sys.argv[1:]))
results.raise_for_status()
soup = bs4.BeautifulSoup(results.text, 'html.parser')
linkList = ['https://github.com/'+a['href'] for a in soup.select('.repo-list-item .f4 a[href]')]
Output
['https://github.com//TheAlgorithms/Python',
'https://github.com//geekcomputers/Python',
'https://github.com//walter201230/Python',
'https://github.com//injetlee/Python',
'https://github.com//kubernetes-client/python',
'https://github.com//Show-Me-the-Code/python',
'https://github.com//xxg1413/python',
'https://github.com//jakevdp/PythonDataScienceHandbook',
'https://github.com//joeyajames/Python',
'https://github.com//docker-library/python']
This should get you pretty close.
import requests, sys, webbrowser, bs4
#variables
linkList = []
#handle input
print('Your GitHub repository search query:')
userInput = input()
#get the results from the URL
results = requests.get('https://github.com/search?q=' + userInput + '&type=repositories'
+ ' '.join(sys.argv[1:]))
results.raise_for_status()
soup = bs4.BeautifulSoup(results.text, 'html.parser')
divs = soup.find_all('div', {'class': 'f4 text-normal'})
for div in divs:
a_tags = div.find_all('a')
for a_tag in a_tags:
try:
linkList.append(a_tag['href'])
except:
continue
# Test
for link in linkList:
print(link)
I'm trying to write a scraper that randomly chooses a wiki article link from a page, goes there, grabs another, and loops. I want to exclude links with "Category:", "File:", or "List" in the href. I'm pretty sure the links I want are all inside of p tags, but when I include "p" in find_all, I get an "int object is not subscriptable" error.
The code below returns wiki pages but does not exclude the things I want to filter out.
This is a learning journey for me. All help is appreciated.
import requests
from bs4 import BeautifulSoup
import random
import time

def scrapeWikiArticle(url):
    response = requests.get(
        url=url,
    )

    soup = BeautifulSoup(response.content, 'html.parser')

    title = soup.find(id="firstHeading")
    print(title.text)
    print(url)

    allLinks = soup.find(id="bodyContent").find_all("a")
    random.shuffle(allLinks)
    linkToScrape = 0

    for link in allLinks:
        # Here I am trying to select hrefs with /wiki/ in them and exclude hrefs with "Category:" etc.
        # It does select for wikis but does not exclude anything.
        if link['href'].find("/wiki/") == -1:
            if link['href'].find("Category:") == 1:
                if link['href'].find("File:") == 1:
                    if link['href'].find("List") == 1:
                        continue

        # Use this link to scrape
        linkToScrape = link
        articleTitles = open("savedArticles.txt", "a+")
        articleTitles.write(title.text + ", ")
        articleTitles.close()
        time.sleep(6)
        break

    scrapeWikiArticle("https://en.wikipedia.org" + linkToScrape['href'])

scrapeWikiArticle("https://en.wikipedia.org/wiki/Anarchism")
You need to modify the for loop. .attrs is used to access the attributes of any tag. If you want to exclude links whose href value contains a particular keyword, use a != -1 comparison.
Modified code:
import requests
from bs4 import BeautifulSoup
import random
import time

def scrapeWikiArticle(url):
    response = requests.get(
        url=url,
    )

    soup = BeautifulSoup(response.content, 'html.parser')
    title = soup.find(id="firstHeading")

    allLinks = soup.find(id="bodyContent").find_all("a")
    random.shuffle(allLinks)
    linkToScrape = 0

    for link in allLinks:
        if("href" in link.attrs):
            if link.attrs['href'].find("/wiki/") == -1 or link.attrs['href'].find("Category:") != -1 or link.attrs['href'].find("File:") != -1 or link.attrs['href'].find("List") != -1:
                continue

            linkToScrape = link
            articleTitles = open("savedArticles.txt", "a+")
            articleTitles.write(title.text + ", ")
            articleTitles.close()
            time.sleep(6)
            break

    if(linkToScrape):
        scrapeWikiArticle("https://en.wikipedia.org" + linkToScrape.attrs['href'])

scrapeWikiArticle("https://en.wikipedia.org/wiki/Anarchism")
This section seems problematic.
if link['href'].find("/wiki/") == -1:
if link['href'].find("Category:") == 1:
if link['href'].find("File:") == 1:
if link['href'].find("List") == 1:
continue
find returns the index of the substring you are looking for, or -1 if it is not found; you are also using it incorrectly.
So: if "/wiki/" is not found, or "Category:", "File:", etc. appear in the href, then continue.
if link['href'].find("/wiki/") == -1 or \
link['href'].find("Category:") != -1 or \
link['href'].find("File:") != -1 or \
link['href'].find("List")!= -1 :
print("skipped " + link["href"])
continue
Saint Petersburg
https://en.wikipedia.org/wiki/St._Petersburg
National Diet Library
https://en.wikipedia.org/wiki/NDL_(identifier)
Template talk:Authority control files
https://en.wikipedia.org/wiki/Template_talk:Authority_control_files
skipped #searchInput
skipped /w/index.php?title=Template_talk:Authority_control_files&action=edit&section=1
User: Tom.Reding
https://en.wikipedia.org/wiki/User:Tom.Reding
skipped http://toolserver.org/~dispenser/view/Main_Page
Iapetus (moon)
https://en.wikipedia.org/wiki/Iapetus_(moon)
87 Sylvia
https://en.wikipedia.org/wiki/87_Sylvia
skipped /wiki/List_of_adjectivals_and_demonyms_of_astronomical_bodies
Asteroid belt
https://en.wikipedia.org/wiki/Main_asteroid_belt
Detached object
https://en.wikipedia.org/wiki/Detached_object
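Equivalently, the chain of find() calls can be collapsed with any(), which stays readable as the list of excluded keywords grows. A small sketch of the same filter, meant to sit inside the for link in allLinks loop:

excluded = ("Category:", "File:", "List")
href = link["href"]
if "/wiki/" not in href or any(keyword in href for keyword in excluded):
    print("skipped " + href)
    continue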
Use :not() to handle the list of exclusions within href, alongside the * (contains) operator. This will filter out hrefs containing the specified substrings. Precede this with an attribute = value selector that requires the href to contain /wiki/. I have specified a case-insensitive match via i for the first two exclusions, which can be removed:
import requests
from bs4 import BeautifulSoup as bs
r = requests.get('https://en.wikipedia.org/wiki/2018_FIFA_World_Cup#Prize_money')
soup = bs(r.content, 'lxml') # 'html.parser'
links = [i['href'] for i in soup.select('#bodyContent a[href*="/wiki/"]:not([href*="Category:" i], [href*="File:" i], [href*="List"])')]
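From there, picking the next article to visit is just a random choice from the filtered list. A sketch, reusing the scrapeWikiArticle function from the question:

import random

if links:
    next_href = random.choice(links)
    scrapeWikiArticle("https://en.wikipedia.org" + next_href)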
I am scraping all the words from the Merriam-Webster website.
I want to scrape all pages from a to z, and all the pages within them, and save the words to a text file. The problem I'm having is that I only get the first result of the table instead of all of them. I know this is a large amount of text (around 500k), but I'm doing it to educate myself.
CODE:
import requests
from bs4 import BeautifulSoup as bs

URL = 'https://www.merriam-webster.com/browse/dictionary/a/'
page = 1

# for page in range(1, 75):
req = requests.get(URL + str(page))
soup = bs(req.text, 'html.parser')
containers = soup.find('div', attrs={'class': 'entries'})
table = containers.find_all('ul')

for entries in table:
    links = entries.find_all('a')
    name = links[0].text
    print(name)
Now, what I want is to get all the entries from this table, but instead I only get the first entry.
I'm kind of stuck here, so any help would be appreciated.
Thanks
https://www.merriam-webster.com/browse/medical/a-z
https://www.merriam-webster.com/browse/legal/a-z
https://www.merriam-webster.com/browse/dictionary/a-z
https://www.merriam-webster.com/browse/thesaurus/a-z
To get all entries, you can use this example:
import requests
from bs4 import BeautifulSoup
url = 'https://www.merriam-webster.com/browse/dictionary/a/'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')
for a in soup.select('.entries a'):
    print('{:<30} {}'.format(a.text, 'https://www.merriam-webster.com' + a['href']))
Prints:
(a) heaven on earth https://www.merriam-webster.com/dictionary/%28a%29%20heaven%20on%20earth
(a) method in/to one's madness https://www.merriam-webster.com/dictionary/%28a%29%20method%20in%2Fto%20one%27s%20madness
(a) penny for your thoughts https://www.merriam-webster.com/dictionary/%28a%29%20penny%20for%20your%20thoughts
(a) quarter after https://www.merriam-webster.com/dictionary/%28a%29%20quarter%20after
(a) quarter of https://www.merriam-webster.com/dictionary/%28a%29%20quarter%20of
(a) quarter past https://www.merriam-webster.com/dictionary/%28a%29%20quarter%20past
(a) quarter to https://www.merriam-webster.com/dictionary/%28a%29%20quarter%20to
(all) by one's lonesome https://www.merriam-webster.com/dictionary/%28all%29%20by%20one%27s%20lonesome
(all) choked up https://www.merriam-webster.com/dictionary/%28all%29%20choked%20up
(all) for the best https://www.merriam-webster.com/dictionary/%28all%29%20for%20the%20best
(all) in good time https://www.merriam-webster.com/dictionary/%28all%29%20in%20good%20time
...and so on.
To scrape multiple pages:
url = 'https://www.merriam-webster.com/browse/dictionary/a/{}'
for page in range(1, 76):
    soup = BeautifulSoup(requests.get(url.format(page)).content, 'html.parser')

    for a in soup.select('.entries a'):
        print('{:<30} {}'.format(a.text, 'https://www.merriam-webster.com' + a['href']))
EDIT: To get all pages from A to Z:
import requests
from bs4 import BeautifulSoup
url = 'https://www.merriam-webster.com/browse/dictionary/{}/{}'
for char in range(ord('a'), ord('z')+1):
    page = 1
    while True:
        soup = BeautifulSoup(requests.get(url.format(chr(char), page)).content, 'html.parser')

        for a in soup.select('.entries a'):
            print('{:<30} {}'.format(a.text, 'https://www.merriam-webster.com' + a['href']))

        last_page = soup.select_one('[aria-label="Last"]')['data-page']
        if last_page == '':
            break

        page += 1
EDIT 2: To save to file:
import requests
from bs4 import BeautifulSoup
url = 'https://www.merriam-webster.com/browse/dictionary/{}/{}'
with open('data.txt', 'w') as f_out:
    for char in range(ord('a'), ord('z')+1):
        page = 1
        while True:
            soup = BeautifulSoup(requests.get(url.format(chr(char), page)).content, 'html.parser')

            for a in soup.select('.entries a'):
                print('{:<30} {}'.format(a.text, 'https://www.merriam-webster.com' + a['href']))
                print('{}\t{}'.format(a.text, 'https://www.merriam-webster.com' + a['href']), file=f_out)

            last_page = soup.select_one('[aria-label="Last"]')['data-page']
            if last_page == '':
                break

            page += 1
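One caveat with the pagination check above: select_one returns None when nothing matches [aria-label="Last"], so indexing it with ['data-page'] would raise a TypeError if the markup changes or a letter only has one page. A small defensive variant of that check (same selector, just guarded):

last = soup.select_one('[aria-label="Last"]')
if last is None or last.get('data-page', '') == '':
    break
page += 1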
I think you need another loop:
for entries in table:
    links = entries.find_all('a')
    for name in links:
        print(name.text)
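The question also links the medical, legal and thesaurus browse sections. Assuming those pages use the same .entries markup and the same /browse/{section}/{letter}/{page} URL pattern as the dictionary section (not verified here), the outer loop extends naturally. A sketch:

import requests
from bs4 import BeautifulSoup

sections = ['dictionary', 'medical', 'legal', 'thesaurus']
url = 'https://www.merriam-webster.com/browse/{}/{}/{}'

for section in sections:
    for char in range(ord('a'), ord('z') + 1):
        page = 1
        while True:
            soup = BeautifulSoup(requests.get(url.format(section, chr(char), page)).content, 'html.parser')
            for a in soup.select('.entries a'):
                print('{:<30} {}'.format(a.text, 'https://www.merriam-webster.com' + a['href']))
            # stop when there is no "Last" pagination control or it is empty
            last = soup.select_one('[aria-label="Last"]')
            if last is None or last.get('data-page', '') == '':
                break
            page += 1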
I'm new to Python and web scraping.
I wrote some code using requests and BeautifulSoup. One piece is for scraping prices, names, and links, which works fine and is as follows:
from bs4 import BeautifulSoup
import requests
urls = "https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html#/pagesize-24/order-new/stock-1/page-1"
source = requests.get(urls).text
soup = BeautifulSoup(source, 'lxml')
for figcaption in soup.find_all('figcaption'):
    price = figcaption.div.text
    name = figcaption.find('a', class_='title').text
    link = figcaption.find('a', class_='title')['href']

    print(price)
    print(name)
    print(link)
and another for building the other URLs that I need the information scraped from, which also prints the correct URLs when I use print():
x = 0
counter = 1

for x in range(0, 70):
    urls = "https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html#/pagesize-24/order-new/stock-1/page-" + str(counter)
    counter += 1
    x += 1
    print(urls)
But when I try to combine these two, in order to scrape a page, then change the URL to the next one and scrape it, it just gives the information scraped from the first page 70 times. Please guide me through this. The whole code is as follows:
from bs4 import BeautifulSoup
import requests

x = 0
counter = 1

for x in range(0, 70):
    urls = "https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html#/pagesize-24/order-new/stock-1/page-" + str(counter)
    source = requests.get(urls).text
    soup = BeautifulSoup(source, 'lxml')
    counter += 1
    x += 1
    print(urls)

    for figcaption in soup.find_all('figcaption'):
        price = figcaption.div.text
        name = figcaption.find('a', class_='title').text
        link = figcaption.find('a', class_='title')['href']

        print(price)
        print()
        print(name)
        print()
        print(link)
Your x = 0 and then incrementing it by 1 is redundant and not needed, as you already have it iterating through range(0, 70). I'm also not sure why you have a counter, as you don't need that either. Here's how you would do it below.
HOWEVER, I believe the issue is not with the iteration or looping, but with the URL itself. If you manually go to the two pages listed below, the content doesn't change:
https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html#/pagesize-24/order-new/stock-1/page-1
and then
https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html#/pagesize-24/order-new/stock-1/page-2
Since the site is dynamic, you'll need to find a different way to iterate from page to page, or figure out what the exact URL is. So try:
from bs4 import BeautifulSoup
import requests

for x in range(0, 70):
    try:
        urls = 'https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html&pagesize[]=24&order[]=new&stock[]=1&page[]=' + str(x+1) + '&ajax=ok?_=1561559181560'
        source = requests.get(urls).text
        soup = BeautifulSoup(source, 'lxml')
        print('Page: %s' % (x+1))

        for figcaption in soup.find_all('figcaption'):
            price = figcaption.find('span', {'class': 'new_price'}).text.strip()
            name = figcaption.find('a', class_='title').text
            link = figcaption.find('a', class_='title')['href']

            print('%s\n%s\n%s' % (price, name, link))
    except:
        break
You can find that link by going to the website and looking at the dev tools (Ctrl+Shift+I, or right-click and 'Inspect') -> Network -> XHR.
When I did that and then physically clicked to the next page, I could see how the data was rendered and found the request URL it referenced.
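As a side note, the same request can be built with the params argument of requests.get instead of concatenating the query string by hand, which keeps the page number easier to change. A sketch using the parameter names seen in the XHR URL above (I have not verified which of them the server actually requires):

import requests
from bs4 import BeautifulSoup

base = 'https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html'

for page in range(1, 71):
    # query parameters taken from the XHR request observed in dev tools
    params = {
        'pagesize[]': 24,
        'order[]': 'new',
        'stock[]': 1,
        'page[]': page,
        'ajax': 'ok',
    }
    source = requests.get(base, params=params).text
    soup = BeautifulSoup(source, 'lxml')
    print('Page: %s' % page)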
I am trying to create a web crawler that parses all the HTML on the page, grabs a specified (via raw_input) link, follows that link, and then repeats this process a specified number of times (once again via raw_input). I am able to grab the first link and successfully print it. However, I am having problems "looping" the whole process, and usually grab the wrong link. This is the first link:
https://pr4e.dr-chuck.com/tsugi/mod/python-data/data/known_by_Fikret.html
(Full disclosure: this question pertains to an assignment for a Coursera course.)
Here's my code:
import urllib
from BeautifulSoup import *

url = raw_input('Enter - ')
rpt = raw_input('Enter Position')
rpt = int(rpt)
cnt = raw_input('Enter Count')
cnt = int(cnt)
count = 0
counts = 0
tags = list()
soup = None

while x == 0:
    html = urllib.urlopen(url).read()
    soup = BeautifulSoup(html)
    # Retrieve all of the anchor tags
    tags = soup.findAll('a')
    for tag in tags:
        url = tag.get('href')
        count = count + 1
        if count == rpt:
            break
    counts = counts + 1
    if counts == cnt:
        x == 1
    else:
        continue

print url
Based on DJanssens' response, I found the solution:
url = tags[position-1].get('href')
did the trick for me!
Thanks for the assistance!
I also worked on that course, and with help from a friend, I got this worked out:
import urllib
from bs4 import BeautifulSoup

url = "http://python-data.dr-chuck.net/known_by_Happy.html"
rpt = 7
position = 18
count = 0
counts = 0
tags = list()
soup = None
x = 0

while x == 0:
    html = urllib.urlopen(url).read()
    soup = BeautifulSoup(html, "html.parser")
    tags = soup.findAll('a')
    url = tags[position-1].get('href')
    count = count + 1
    if count == rpt:
        break

print url
I believe this is what you are looking for:
import urllib
from bs4 import *

url = raw_input('Enter - ')
position = int(raw_input('Enter Position'))
count = int(raw_input('Enter Count'))

# perform the loop "count" times.
for _ in xrange(0, count):
    html = urllib.urlopen(url).read()
    soup = BeautifulSoup(html)
    tags = soup.findAll('a')
    # if the link does not exist at that position, show an error.
    if not tags[position-1]:
        print "A link does not exist at that position."
    # if the link at that position exists, overwrite url so the next search will use it.
    url = tags[position-1].get('href')

print url
The code will now loop the number of times specified in the input; each time it takes the href at the given position and replaces the url with it, so each iteration looks one level further into the link structure.
I advise you to use full names for variables, which are a lot easier to understand. In addition, you could cast them and read them in a single line, which makes the beginning of your program easier to follow.
Here is my 2-cents:
import urllib
#import ssl
from bs4 import BeautifulSoup

#'http://py4e-data.dr-chuck.net/known_by_Fikret.html'
url = raw_input('Enter URL : ')
position = int(raw_input('Enter position : '))
count = int(raw_input('Enter count : '))

print('Retrieving: ' + url)
soup = BeautifulSoup(urllib.urlopen(url).read())

for x in range(1, count + 1):
    link = list()
    for tag in soup('a'):
        link.append(tag.get('href', None))
    print('Retrieving: ' + link[position - 1])
    soup = BeautifulSoup(urllib.urlopen(link[position - 1]).read())
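For anyone reading this with Python 3, here is a minimal sketch of the same follow-the-Nth-link loop, assuming requests and bs4 are installed (this is not part of the original answers):

import requests
from bs4 import BeautifulSoup

url = input('Enter URL: ')
position = int(input('Enter position: '))
count = int(input('Enter count: '))

for _ in range(count):
    soup = BeautifulSoup(requests.get(url).text, 'html.parser')
    tags = soup.find_all('a')             # all anchor tags on the current page
    url = tags[position - 1].get('href')  # follow the link at the requested position
    print('Retrieving:', url)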