Create a script to catch links on a webpage with Python 3

I have to catch all the links to the topics on this page: https://www.inforge.net/xi/forums/liste-proxy.1118/
I've tried this script:
import urllib.request
from bs4 import BeautifulSoup

page = urllib.request.urlopen("https://www.inforge.net/xi/forums/liste-proxy.1118/")
soup = BeautifulSoup(page, "lxml")
for link in soup.find_all('a'):
    print(link.get('href'))
But it prints all the links on the page, not just the topic links I'm after. Could you suggest the fastest way to do it? I'm still a newbie, and I've only recently started learning Python.

You can use BeautifulSoup to parse the HTML (note that urllib2 is Python 2 only; on Python 3 the equivalent is urllib.request):
from bs4 import BeautifulSoup
from urllib.request import urlopen

url = 'https://www.inforge.net/xi/forums/liste-proxy.1118/'
soup = BeautifulSoup(urlopen(url), 'lxml')
Then find the topic links with:
soup.find_all('a', {'class': 'PreviewTooltip'})
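Put together, a minimal end-to-end sketch (assuming the topic anchors on that forum still carry the PreviewTooltip class, which is worth re-checking in the page source):
import urllib.request
from bs4 import BeautifulSoup

url = 'https://www.inforge.net/xi/forums/liste-proxy.1118/'
soup = BeautifulSoup(urllib.request.urlopen(url), 'lxml')

# Only the topic anchors carry the PreviewTooltip class, so filtering
# on it skips the navigation, footer, and other non-topic links.
for link in soup.find_all('a', {'class': 'PreviewTooltip'}):
    print(link.get('href'))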

Related

Web scraping IMDB with Python's Beautiful Soup

I am trying to parse this page "https://www.imdb.com/title/tt0068112/?ref_=fn_al_tt_1", but I can't find the href that I need (href="/title/tt0068112/episodes?ref_=tt_eps_sm").
I tried this code:
import requests
from bs4 import BeautifulSoup

url = "https://www.imdb.com/title/tt0068112/?ref_=fn_al_tt_1"
page = requests.get(url)
soup = BeautifulSoup(page.content, "html.parser")
for a in soup.find_all('a'):
    print(a.get('href'))
What's wrong with this? I also tried to check "manually" with print(soup.prettify()), but it seems that the link is hidden or something like that.
You can get the page HTML with requests; the href is in there, no need for special APIs. I tried this and it worked:
import requests
from bs4 import BeautifulSoup

page = requests.get("https://www.imdb.com/title/tt0068112/?ref_=fn_al_tt_1")
soup = BeautifulSoup(page.content, "html.parser")

scooby_link = ""
for item in soup.find_all("a", href="/title/tt0068112/episodes?ref_=tt_eps_sm"):
    print(item["href"])
    scooby_link = "https://www.imdb.com" + item["href"]
print(scooby_link)
I'm assuming you also wanted to save the link to a variable for further scraping, so I did that as well. 🙂
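If you do want to continue from that variable, here is a minimal follow-up sketch (the User-Agent header is an assumption on my part; IMDb sometimes rejects the default requests one):
import requests
from bs4 import BeautifulSoup

scooby_link = "https://www.imdb.com/title/tt0068112/episodes?ref_=tt_eps_sm"

# Fetch the episodes page the saved link points at and confirm it parsed;
# soup.title prints None rather than crashing if the request was blocked.
page = requests.get(scooby_link, headers={"User-Agent": "Mozilla/5.0"})
soup = BeautifulSoup(page.content, "html.parser")
print(soup.title)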
To get the link containing Episodes you can use the following example:
import requests
from bs4 import BeautifulSoup
url = "https://www.imdb.com/title/tt0068112/?ref_=fn_al_tt_1"
soup = BeautifulSoup(requests.get(url).content, "html.parser")
print(soup.select_one("a:-soup-contains(Episodes)")["href"])
Prints:
/title/tt0068112/episodes?ref_=tt_eps_sm

Web Scraping with Beautiful Soup Python - Seesaw - The output has no length error

I am trying to get the comments from a website called Seesaw but the output has no length. What am I doing wrong?
import requests
from bs4 import BeautifulSoup

html_text = requests.get("https://app.seesaw.me/#/activities/class/class.93a29acf-0eef-4d4e-9d56-9648d2623171").text
soup = BeautifulSoup(html_text, "lxml")
comments = soup.find_all("span", class_="ng-binding")
print(comments)
That's because there is no span element with class ng-binding in the page source (these elements are added later via JavaScript):
import requests
html_text = requests.get("https://app.seesaw.me/#/activities/class/class.93a29acf-0eef-4d4e-9d56-9648d2623171").text
print(f'{"ng-binding" in html_text=}')
So the output is:
"ng-binding" in html_text=False
You can also check this using the "View Page Source" function in your browser. You can try Selenium to automate interaction with the site; a minimal sketch follows.
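A sketch assuming Chrome with selenium 4 (note: the Seesaw page likely requires logging in before any comments render, which this does not handle):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://app.seesaw.me/#/activities/class/class.93a29acf-0eef-4d4e-9d56-9648d2623171")

# Wait until the JavaScript has actually rendered at least one
# ng-binding span before reading anything out of the DOM.
WebDriverWait(driver, 15).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "span.ng-binding"))
)

for span in driver.find_elements(By.CSS_SELECTOR, "span.ng-binding"):
    print(span.text)

driver.quit()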

Neither Selenium nor Beautiful Soup showing full HTML source?

I tried using Beautiful Soup to parse a website; however, when I printed page_soup I would only get a portion of the HTML: the beginning portion of the code, which has the info I need, was omitted. No one answered my question. After doing some research I tried using Selenium to access the full HTML; however, I got the same results. Below are both of my attempts with Selenium and Beautiful Soup. When I try to print the HTML it starts in the middle of the source code, skipping the doctype, lang, etc. initial statements.
from selenium import webdriver
from bs4 import BeautifulSoup

browser = webdriver.Chrome(executable_path="/usr/local/bin/chromedriver")
browser.get('https://coronavirusbellcurve.com/')
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
print(soup)

from urllib.request import Request, urlopen
from bs4 import BeautifulSoup as soup

# request object for the same page as above
pageRequest = Request('https://coronavirusbellcurve.com/')
htmlPage = urlopen(pageRequest).read()
page_soup = soup(htmlPage, 'html.parser')
print(page_soup)
The requests module seems to be returning the numbers in the first table on the page, assuming you are referring to US totals:
import requests
r = requests.get('https://coronavirusbellcurve.com/').content
print(r)
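If the numbers really are in the static HTML, a follow-up sketch that pulls the first table apart instead of printing raw bytes (assuming the US totals live in the first <table> element, which is worth verifying):
import requests
from bs4 import BeautifulSoup

html = requests.get('https://coronavirusbellcurve.com/').content
soup = BeautifulSoup(html, 'html.parser')

# Take the first table on the page and print it row by row.
table = soup.find('table')
for row in table.find_all('tr'):
    cells = [cell.get_text(strip=True) for cell in row.find_all(['th', 'td'])]
    print(cells)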

Unable to extract download link using BeautifulSoup

I am trying to fetch the CSV download link from this page: https://patents.google.com/?assignee=intel
This is my code:
import requests
from bs4 import BeautifulSoup
page = requests.get("https://patents.google.com/?assignee=intel")
soup = BeautifulSoup(page.content, 'html.parser')
soup.find_all('a', class_='style-scope search-results')
soup.find_all('a', class_='style-scope')
But the last two lines return an empty list. What am I missing here?
Even this is not returning anything:
soup.find(id="resultsLayout")
That's because the elements are being generated by JavaScript. You can use Selenium to get the whole page source.
Here's an edited version of your code using Selenium:
from bs4 import BeautifulSoup
from selenium import webdriver

browser = webdriver.Chrome()
browser.get('https://patents.google.com/?assignee=intel')
page = browser.page_source
browser.quit()

soup = BeautifulSoup(page, 'html.parser')
print(soup.find_all('a', class_='style-scope search-results'))
print(soup.find_all('a', class_='style-scope'))
Let me know if you need clarifications. Thanks!
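One caveat: page_source can be captured before the JavaScript finishes rendering, so an explicit wait makes the snapshot more reliable. A sketch, assuming the resultsLayout id from the question still exists on the page:
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Chrome()
browser.get('https://patents.google.com/?assignee=intel')

# Block until the results container exists in the DOM, then snapshot.
WebDriverWait(browser, 15).until(
    EC.presence_of_element_located((By.ID, 'resultsLayout'))
)
page = browser.page_source
browser.quit()

soup = BeautifulSoup(page, 'html.parser')
print(len(soup.find_all('a', class_='style-scope')))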

Beautiful Soup HREF not found when capitalized

Hi, I have something along the lines of:
from BeautifulSoup import BeautifulSoup as bs
import urllib2

url = 'http://www.blah.com'
soup = bs(urllib2.urlopen(url))
for link in soup.findAll('a', href=True):
    print link
So the problem is that the website uses both href and HREF (capitalized) for its links, and this script only pulls the lowercase href. How would I modify the code to also get the links with HREF?
Thanks
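One hedged way to approach it, assuming a move to Python 3 and bs4 (the modern beautifulsoup4 package): html.parser normally lowercases attribute names while parsing, so href=True may already match HREF= in the source; the function filter below is a fallback that matches the attribute name in any casing.
import urllib.request
from bs4 import BeautifulSoup

url = 'http://www.blah.com'
soup = BeautifulSoup(urllib.request.urlopen(url), 'html.parser')

# html.parser lowercases attribute names, so HREF= in the source is
# usually reachable as href already; this filter also matches any
# casing explicitly, in case another parser preserves it.
def has_any_case_href(tag):
    return tag.name == 'a' and any(attr.lower() == 'href' for attr in tag.attrs)

for link in soup.find_all(has_any_case_href):
    print(link)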
