I'm trying to parse the content of a box-like container located at the very bottom of this website, but I can't find it anywhere in the page source. I've tried to write a script to reach it anyway:
import requests
from bs4 import BeautifulSoup
url = 'https://www.proxy-list.download/HTTPS'
r = requests.get(url)
soup = BeautifulSoup(r.text,'lxml')
item = soup.select_one("a#btn3").text
print(item)
Output I'm getting:
Copy to clipboard
I'm after this:
104.248.115.236:80
104.248.53.46:3128
104.236.248.219:3128
104.248.115.236:3128
104.248.115.236:8080
104.248.184.16:8080
This is how that content is visible on the page (screenshot of the rendered proxy list omitted).
Try this endpoint, https://www.proxy-list.download/api/v0/get?l=en&t=https (which you can find using the browser's dev tools, under the Network tab), to get them all, as shown below:
import requests
url = 'https://www.proxy-list.download/api/v0/get?l=en&t=https'
r = requests.get(url)
# The endpoint returns JSON; the proxy entries live in the 'LISTA' array
for item in r.json()[0]['LISTA']:
    proxy = f"{item['IP']}:{item['PORT']}"
    print(proxy)
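If the goal is to then route traffic through one of the scraped proxies, here is a minimal sketch; the specific proxy address and the httpbin test URL are illustrative assumptions, not part of the original answer:
import requests
# Hypothetical example: send a request through one of the scraped proxies
proxy = "104.248.115.236:3128"  # one of the entries printed above
proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
r = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(r.json())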
Below is the code I am using:
import requests
from bs4 import BeautifulSoup
# f is an iterable of the page URLs being scraped
for link in f:
    r = requests.get(link, verify=True)
    soup = BeautifulSoup(r.content, 'html5lib')
    table = soup.find('div', attrs={'class': 'right'})
    print(table.div.a)
    download = table.div.a['href']
Here, instead of getting the download link, I am getting a "#".
Link I am scraping: https://www54.zippyshare.com/v/2Mu2T2KI/file.html
Desired output :
/d/2qeYvgEb/29682/Horizon%20-%20Zero%20Dawn%20CE%20--%20fitgirl-repacks.site%20--.part01.rar
Output I'm getting: #
The problem is that the link doesn't exist in the original HTML; it is filled in by a JavaScript script. To be able to find the link you need to let the JS run, and I don't think that is possible using bs4.
Another way to do it is using requests-html:
from requests_html import HTMLSession
session = HTMLSession()
root = 'https://www54.zippyshare.com'
link = 'https://www54.zippyshare.com/v/2Mu2T2KI/file.html'
r = session.get(link)
r.html.render()  # this will load and run the JavaScript
download_link = root + r.html.find('.right', first=True).find('a', first=True).attrs['href']
print(download_link)
Note that I used the CSS selector .right; this is the same as class: right, though the latter syntax is not supported by requests-html. The first=True argument does the same job as using find instead of find_all in bs4.
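For comparison, here is what the same lookup would look like in bs4, run against the already-rendered HTML from the session above; against the raw, unrendered page it would only ever see the placeholder "#":
from bs4 import BeautifulSoup
# Parse the post-render HTML produced by requests-html above
soup = BeautifulSoup(r.html.html, 'lxml')
a_tag = soup.select_one('.right a')  # same CSS selector as above
print(root + a_tag['href'])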
Here, this will show all of the links on that page:
import requests
from bs4 import BeautifulSoup
url = "https://www54.zippyshare.com/v/2Mu2T2KI/file.html"
r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
# print(soup.prettify())
# Print the href of every <a> tag, skipping the "#" placeholders
for data in soup.find_all("a"):
    dx = data.get('href')
    if dx != "#":
        print(dx)
Hope it will help you
I'm trying to scrape the number of visitors to my local climbing centre.
import requests
from bs4 import BeautifulSoup
page = requests.get("https://portal.rockgympro.com/portal/public/c3b9019203e4bc4404983507dbdf2359/occupancy?&iframeid=occupancyCounter&fId=1644")
soup = BeautifulSoup(page.content, 'html.parser')
results = soup.find('span', id="count")
print(results)
It's printing this:
<span id="count" style="display:inline"></span>
That's nice, but the number 19 is missing... What am I doing wrong?
It's there in JSON format inside one of the <script> tags of the HTML. You just need to pull it out.
import requests
import json
from bs4 import BeautifulSoup
url = 'https://portal.rockgympro.com/portal/public/c3b9019203e4bc4404983507dbdf2359/occupancy?&iframeid=occupancyCounter&fId=1644'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
# The occupancy data is assigned to a JS variable in the third <script> tag;
# cut out the object literal and normalise the quotes
scriptStr = str(soup.find_all('script')[2]).split('var data = ')[-1].split(';')[0].replace("'", '"')
# Drop the trailing comma after the last entry so it parses as valid JSON
last_char_index = scriptStr.rfind(",")
scriptStr = scriptStr[:last_char_index] + '}'
# Collapse the leftover whitespace from the page source
scriptStr = ' '.join(scriptStr.split())
jsonData = json.loads(scriptStr)
count = jsonData['REA']['count']
capacity = jsonData['REA']['capacity']
lastUpdate = jsonData['REA']['lastUpdate']
print(f'{count} of {capacity} Climbers\n{lastUpdate}')
Output:
58 of 220 Climbers
Last updated: now (5:20 PM)
You're not doing anything wrong; the issue is that the website populates the <span> element using JavaScript, which runs after your request is made.
Unfortunately, the requests library cannot run JavaScript since it is a pure HTTP tool. I would recommend checking out something like Selenium which is more robust and can wait for the JavaScript to load before scraping the HTML.
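As a rough sketch of that approach, using the current Selenium 4 API and assuming a Chrome/chromedriver installation that Selenium can locate on its own, you could wait for the counter to be filled in before reading it:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

url = ('https://portal.rockgympro.com/portal/public/'
       'c3b9019203e4bc4404983507dbdf2359/occupancy'
       '?&iframeid=occupancyCounter&fId=1644')
driver = webdriver.Chrome()
try:
    driver.get(url)
    # Wait up to 10 seconds for the JS to write text into the counter span
    WebDriverWait(driver, 10).until(
        lambda d: d.find_element(By.ID, 'count').text != '')
    print(driver.find_element(By.ID, 'count').text)
finally:
    driver.quit()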
You can try the requests_html module to get dynamic values that are calculated by JavaScript. I tried the logic below and it worked for me on your site.
from bs4 import BeautifulSoup
from requests_html import HTMLSession
url = "Your Site Link"
# Create an HTML session object
session = HTMLSession()
# Use the object above to connect to the needed webpage
resp = session.get(url)
# Run the JavaScript code on the webpage
resp.html.render(sleep=10)
soup = BeautifulSoup(resp.html.html, 'lxml')
results = soup.find('span', id="count")
print(results)
This prints the result that your site calculates via JavaScript.
In the dev tools, under one of the <script> tags, you can see that many of those figures are generated after the page load by the JavaScript function showGym(). In order to allow those figures to be generated, you could use a browser-driver tool like webbot or Selenium, which can wait on a page long enough for the JavaScript to execute and populate those fields. It might be possible to have requests do that, but I don't know; I've only used webbot when running into problems like these, as it's very easy to use.
I'm trying to scrape patentsview.org, but I'm having an issue: when I try to scrape this page, it doesn't work well. The site uses JavaScript to get data from its database. I tried to get the data using the requests-html package, but I didn't quite understand it.
Here's what I tried:
# Imports
from bs4 import BeautifulSoup
from requests_html import HTMLSession
session = HTMLSession()
# Send the request
r = session.get('https://datatool.patentsview.org/#search/assignee&asn=1|Samsung')
r.html.render()
# Set BS and print
soup = BeautifulSoup(r.html.html, "lxml")
tags = soup.find_all("div", class_='summary')
print(tags)
This code gives me this result:
# Result
[<div class="summary"></div>]
But I want the populated content of that div (screenshot omitted). That is the right div, but I can't see its content with my code. How can I get the div's content? I hope you understand what I mean.
Use the browser dev tools (Chrome: F12 - Network - XHR) and look for the HTTP GET that returns the data (as JSON) you are looking for:
HTTP GET https://webapi.patentsview.org/api/assignees/query?q={%22_and%22:[{%22_or%22:[{%22_and%22:[{%22_contains%22:{%22assignee_first_name%22:%22Samsung%22}}]},{%22_and%22:[{%22_contains%22:{%22assignee_last_name%22:%22Samsung%22}}]},{%22_and%22:[{%22_contains%22:{%22assignee_organization%22:%22Samsung%22}}]}]}]}&f=[%22assignee_id%22,%22assignee_first_name%22,%22assignee_last_name%22,%22assignee_organization%22,%22assignee_lastknown_country%22,%22assignee_lastknown_state%22,%22assignee_lastknown_city%22,%22assignee_lastknown_location_id%22,%22assignee_total_num_patents%22,%22assignee_first_seen_date%22,%22assignee_last_seen_date%22,%22patent_id%22]&o={%22per_page%22:50,%22matched_subentities_only%22:true,%22sort_by_subentity_counts%22:%22patent_id%22,%22page%22:1}&s=[{%22patent_id%22:%22desc%22},{%22assignee_total_num_patents%22:%22desc%22},{%22assignee_organization%22:%22asc%22},{%22assignee_last_name%22:%22asc%22},{%22assignee_first_name%22:%22asc%22}]
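As a sketch, assuming the endpoint still accepts these parameters, you can issue that GET directly with requests. The pared-down q and f values below are simplified from the full URL above, and the 'assignees' key in the response is an assumption about the API's response shape:
import requests

url = 'https://webapi.patentsview.org/api/assignees/query'
params = {
    # Pared-down query: organisations containing "Samsung"
    'q': '{"_contains":{"assignee_organization":"Samsung"}}',
    # Fields to return, taken from the f=[...] list in the URL above
    'f': '["assignee_id","assignee_organization","assignee_total_num_patents"]',
}
r = requests.get(url, params=params)
for assignee in r.json().get('assignees', []):
    print(assignee['assignee_organization'], assignee['assignee_total_num_patents'])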
I'm trying to run some statistical analysis on topic-based multireddits. Rather than collecting each individual subreddit by hand, I have found websites that collect these subreddits (Example, Example 2).
These sites unfortunately do not have the ability to download the list of subreddits into plaintext that can be used in a dictionary. Is there a specific method I could use to scrape these sites to only receive back the URL of each attached hyperlink on the webpage?
Thanks!
Edit: Here's my current code, which runs, but returns every URL.
import requests
from bs4 import BeautifulSoup
import pandas as pd
url = "https://snoopsnoo.com/subreddits/travel/"
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
links = []
for link in soup.find_all('a'):
    reddit = link.get('href')
    links.append(reddit)
df = pd.DataFrame(links, columns=['string_values'])
df.to_csv('travel.csv')
Yes, there is such a method. If you are using Python, a widely used library is BeautifulSoup. It parses the HTML directly, so no webdriver or web browser running in the background is necessary, as with Selenium. You can install it with pip install bs4.
For your first example site:
import urllib.request
from bs4 import BeautifulSoup
# Load the url
url = "https://snoopsnoo.com/subreddits/travel/"
html = urllib.request.urlopen(url).read()
# Create the parser object
soup = BeautifulSoup(html, 'html.parser')
# Find all panel headings
panels = soup.find_all(class_="panel-heading big")
# Find the <a>-elements and extract the link
links = [elem.find('a')['href'] for elem in panels]
print(links)
Here I checked the contents of the page to locate the panel elements by class, then extracted the <a>-elements and their href attributes.
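Equivalently, a single CSS selector does the same job in one step (assuming both class names sit on the same panel element):
# Same extraction as above, using one CSS selector
links = [a['href'] for a in soup.select('.panel-heading.big a')]
print(links)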
This code will grab all of the titles.
from selenium import webdriver
firefox_options = webdriver.FirefoxOptions()
# firefox_options.add_argument('--headless')
driver = webdriver.Firefox(executable_path='geckodriver.exe', firefox_options=firefox_options)
driver.get("https://snoopsnoo.com/subreddits/travel/")
for i in range(3):
    wds = driver.find_elements_by_class_name('title')
    for wd in wds:
        print(wd.text)
    driver.find_element_by_xpath('/html/body/div/div[2]/div[1]/ul/li/a').click()
    print('next page')
driver.close()
Change the 3 in for i in range(3): to however many pages you want. Uncomment firefox_options.add_argument('--headless') to use headless mode.
Good afternoon! How do I make BeautifulSoup grab only what is between multiple sets of "[:" and ":]"? So far I have the entire page in my soup, but it does not have tags, sadly.
What it looks like so far (screenshot omitted).
I have tried a couple of things so far:
soup.findAll(text="[")
keys = soup.find("span", attrs = {"class": "objectBox objectBox-string"})
import bs4 as bs
import urllib.request
source = urllib.request.urlopen("https://login.microsoftonline.com/common/discovery/keys").read()
soup = bs.BeautifulSoup(source,'lxml')
# ---------------------------------------------
# prior script that I was playing with trying to tackle this issue
import requests
from bs4 import BeautifulSoup
# Set URL to scrape new certs from
newcerts = "https://login.microsoftonline.com/common/discovery/keys"
# Connect to the URL
response = requests.get(newcerts)
# Parse HTML and save to BeautifulSoup Object
soup = BeautifulSoup(response.text, "html.parser")
keys = soup.find("span", attrs = {"class": "objectBox objectBox-string"})
End goal is to retrieve the public PKI keys from Azure's website at https://login.microsoftonline.com/common/discovery/keys
Not sure if this is what you meant to grab. Try the script below:
import json
import requests
url = 'https://login.microsoftonline.com/common/discovery/keys'
res = requests.get(url)
jsonobject = json.loads(res.content)
# Each key's 'x5c' entry holds its X.509 certificate chain
for item in jsonobject['keys']:
    print(item['x5c'])
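Since the end goal is the public PKI keys: each x5c entry is a base64-encoded DER certificate (per RFC 7517), so a small sketch of wrapping one in PEM headers, which most TLS/JWT libraries can consume, might look like this:
import textwrap

def x5c_to_pem(x5c_entry: str) -> str:
    # Wrap the base64 DER blob at 64 characters and add PEM armor
    body = '\n'.join(textwrap.wrap(x5c_entry, 64))
    return f'-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----\n'

# Usage with the loop above: item['x5c'] is a list, and its first
# entry is the key's own certificate
# print(x5c_to_pem(item['x5c'][0]))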