Scraping Pantip Forum using BeautifulSoup - Python

I'm trying to scrape some forum posts from http://pantip.com/tag/Isuzu
One such page is http://pantip.com/topic/35647305
I want to get each post's text, along with its author and timestamp, into a CSV file.
I'm using Beautiful Soup, but admittedly I'm a complete beginner at Python and web scraping. The code I have right now gets the required fields, but only for the first post. I need the information for all the posts in the thread. I tried soup.find_all() and soup.select(), but I'm not getting the desired results.
Here's the code I'm using:
from bs4 import BeautifulSoup
import urllib2
print "Reading URL..."
url = urllib2.urlopen("http://pantip.com/topic/35647305")
content = url.read()
soup = BeautifulSoup(content, "html.parser")
print "Finding desired HTML..."
table = soup.select("abbr.timeago")
print "\nScraped HTML is:"
print table
text = BeautifulSoup(str(table).strip(),"html.parser").get_text().encode("utf-8").replace("\n", "")
print "\nScraped text is:\n" + text
Any clues as to what I'm doing wrong would be deeply appreciated. Also, any suggestions as to how this could be done in a better, cleaner way are welcome.
As mentioned, I'm a beginner, so please don't mind any stupid mistakes. :-)
Thanks!

The comments are rendered via an Ajax request, so you can call that endpoint directly:
import requests

params = {"tid": "35647305",  # the topic id from the URL
          "type": "3"}

with requests.Session() as s:
    s.headers.update({"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36",
                      "X-Requested-With": "XMLHttpRequest"})
    r = s.get("http://pantip.com/forum/topic/render_comments", params=params)
    data = r.json()  # data["comments"] contains what you want
That will give you all the data, so all you need to do is extract the tid from each topic URL and update it in the params dict.
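To go all the way to the CSV the question asks for, here is a minimal sketch built on the endpoint above. The field names inside each comment dict are not documented here, so treat them as unknowns: print one comment first and pick out the message, author, and timestamp keys you actually see.

import csv
import requests

# Hypothetical extension: dump the comments of several topics to one CSV.
topic_ids = ["35647305"]  # one tid per topic URL

with open("pantip_posts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["tid", "comment"])
    with requests.Session() as s:
        s.headers.update({"User-Agent": "Mozilla/5.0",
                          "X-Requested-With": "XMLHttpRequest"})
        for tid in topic_ids:
            r = s.get("http://pantip.com/forum/topic/render_comments",
                      params={"tid": tid, "type": "3"})
            for comment in r.json().get("comments", []):
                # comment is a dict; inspect it once to choose the real
                # message/author/timestamp fields instead of dumping it whole
                writer.writerow([tid, comment])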

Python, Scraping BS4

There are a lot of posts about this subject, but I still haven't managed to achieve what I want, so here is my problem:
I am trying to extract a stock price from this site:
https://bors.e24.no/#!/instrument/NHY.OSE
I would like to extract the price, 57,12, from the HTML shown by "inspect element":
<div class="number LAST" data-reactid=".g.1.2.0">
57,12</div>
Here is the code I tried, which raises AttributeError: 'NoneType' object has no attribute 'text'.
I also tried removing .text from the PRICE line, and the result is 'Price is: None'.
from bs4 import BeautifulSoup
import requests
url = 'https://bors.e24.no/#!/instrument/NHY.OSE'
page = requests.get(url)
soup = BeautifulSoup(page.text, 'lxml')
PRICE = soup.find('div', class_="number LAST").text
print('Price is:', PRICE)
Try this:
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36',
}
api_url = "https://bors.e24.no/server/components?columns=ITEM, LAST, BID, ASK, CHANGE, CHANGE_PCT, TURNOVER, LONG_NAME&itemSector=NHY.OSE&type=table"
data = requests.get(api_url, headers=headers).json()
print(data["rows"][0]["values"]["LAST"])
Output:
56.92
This happens because
requests.get(url)
will not fetch everything on the page, including the price you are looking for: the page loads a skeleton first and only then fetches more data with JavaScript. Because of that, trying to select the div with class "number LAST"
PRICE = soup.find('div', class_="number LAST").text
throws an error, because that element doesn't exist yet.
There are a few ways to fix this problem:
You can use a library like Selenium, which is often recommended for scraping more dynamic pages that rely on JavaScript and API calls to load content.
You can open your browser's developer tools and inspect the Network tab, where you might find the request that fetches the price you are trying to scrape.
I believe that in your case, after taking a look at the Network tab myself, the right URL to request could be 'https://bors.e24.no/server/components?columns=TIME,+PRICE,+VOLUME,+BUYER,+SELLER,+ID&filter=ITEM%3D%3DsNHY&limit=5&source=feed.ose.trades.EQUITIES%2BPCC&type=history', which seems to return a dictionary with the price you are looking for.
import requests
url = 'https://bors.e24.no/server/components?columns=TIME,+PRICE,+VOLUME,+BUYER,+SELLER,+ID&filter=ITEM%3D%3DsNHY&limit=5&source=feed.ose.trades.EQUITIES%2BPCC&type=history'
page = requests.get(url)
print(page.json()["rows"][0]["values"]["PRICE"])
If you are looking to scrape various tickers, you will need a way to build that URL dynamically for each one, which I guess means changing "NHY" and "ose" to match whichever stock you are after; a rough sketch follows.
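Here is a hypothetical helper built on that guess. The URL pattern (the ticker embedded as sNHY in the filter, the exchange name inside the feed source) is an assumption extrapolated from the single example above, not a documented API, so verify it against the Network tab for another instrument first.

import requests

def last_trade_price(ticker, exchange="ose"):
    # Assumed pattern: the ticker appears as "s<TICKER>" in the filter and
    # the exchange name appears inside the feed source string.
    url = ("https://bors.e24.no/server/components"
           "?columns=TIME,+PRICE,+VOLUME,+BUYER,+SELLER,+ID"
           "&filter=ITEM%3D%3Ds" + ticker +
           "&limit=5"
           "&source=feed." + exchange + ".trades.EQUITIES%2BPCC"
           "&type=history")
    rows = requests.get(url).json()["rows"]
    return rows[0]["values"]["PRICE"]  # most recent trade first, presumably

print(last_trade_price("NHY"))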

Scraping HREF Links contained within a Table

I've been bouncing around a ton of similar questions, but nothing seems to fix the issue. I set this up (with help) to scrape the HREF tags from a different URL.
I'm now trying to take the HREF links in the "Result" column from the URL in the code below.
The script doesn't seem to work like it did for other sites.
The table is an HTML element, but no matter how I tweak my script, I can't retrieve anything except a blank result.
Could someone explain why this is the case? I've watched many YouTube videos trying to understand, but it just doesn't make sense to me.
import requests
from bs4 import BeautifulSoup

profiles = []
urls = [
    'https://stats.ncaa.org/player/game_by_game?game_sport_year_ctl_id=15881&id=15881&org_id=6&stats_player_seq=-100'
]

for url in urls:
    req = requests.get(url)
    soup = BeautifulSoup(req.text, 'html.parser')
    for profile in soup.find_all('a'):
        profile = profile.get('href')
        profiles.append(profile)

print(profiles)
The following code works:
import requests
from bs4 import BeautifulSoup
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.60 Safari/537.17'}
r = requests.get('https://stats.ncaa.org/player/game_by_game?game_sport_year_ctl_id=15881&id=15881&org_id=6&stats_player_seq=-100', headers=headers)
soup = BeautifulSoup(r.text, 'html.parser')
for x in soup.select('a'):
    print(x.get('href'))
The main issue in this case is that you don't send a user agent; some sites, regardless of whether it is a good idea, use this to decide that you are a bot and serve no content, or only some of it.
So the minimum is to provide that information when making your request:
req = requests.get(url,headers={'User-Agent': 'Mozilla/5.0'})
Also take a closer look at your selection. Assuming you want only the team links, you should narrow it down; I used CSS selectors:
for profile in soup.select('table a[href^="/team/"]'):
You also need to concatenate the base URL onto the extracted values:
profile = 'https://stats.ncaa.org' + profile.get('href')
Example
from bs4 import BeautifulSoup
import requests

profiles = []
urls = ['https://stats.ncaa.org/player/game_by_game?game_sport_year_ctl_id=15881&id=15881&org_id=6&stats_player_seq=-100']

for url in urls:
    req = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
    soup = BeautifulSoup(req.text, 'html.parser')
    for profile in soup.select('table a[href^="/team/"]'):
        profile = 'https://stats.ncaa.org' + profile.get('href')
        profiles.append(profile)

print(profiles)

webscraping python not showing all tags

I'm new to web scraping. I was trying to make a script that gets data from a balance sheet (here is the site: https://www.sec.gov/ix?doc=/Archives/edgar/data/320193/000032019320000010/a10-qq1202012282019.htm). The problem is getting the data: when I look at the source code in my browser, I'm able to find the tag and the correct value. But once I write a script with bs4, I don't get anything back.
I'm trying to get information from the balance sheet: Products, Services, Cost of sales... and the data contained in table 1. (I'm sorry, but I can't post the image. Anyway, it's the first table you see when scrolling down.)
Here's my code.
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen

url = "https://www.sec.gov/ix?doc=/Archives/edgar/data/320193/000032019320000010/a10-qq1202012282019.htm"
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
read_data = urlopen(req).read()
soup_data = BeautifulSoup(read_data, "lxml")
names = soup_data.find_all("td")

for name in names:
    print(name)
Thanks for your time.
Try this URL instead: the /ix?doc= prefix is SEC's interactive viewer, which loads the document with JavaScript, so request the document path directly. Also include the headers to get the data.
import requests
from bs4 import BeautifulSoup
url = "https://www.sec.gov/Archives/edgar/data/320193/000032019320000010/a10-qq1202012282019.htm"
headers = {"User-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36"}
req = requests.get(url, headers=headers)
soup_data = BeautifulSoup(req.text, "lxml")
You will be able to find the data you need.
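From there, a minimal sketch of pulling the table cells out as text, continuing from soup_data above. Which table index holds the statement you want is a guess on my part, so dump a few tables and check:

# Continue from soup_data above: print each row of the first <table> as text.
tables = soup_data.find_all("table")
for row in tables[0].find_all("tr"):  # tables[0] is a guess; try other indexes
    cells = [td.get_text(strip=True) for td in row.find_all("td")]
    if any(cells):  # skip purely-layout rows
        print(cells)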

Web scraping (Protein Data Bank) highly nested tags using beautifulsoup and Python3

I am trying to create a CSV file of all protein names, their PDB (Protein Data Bank) IDs, and the experiment method, based on an advanced search query on RCSB. There are 444 search results and I wanted to create a neat CSV file. Here is the link to the search.
I have written the following script to extract information about the first search result, but the output is "None".
import requests
from bs4 import BeautifulSoup
source = requests.get(url) # url is same as mentioned above
soup = BeautifulSoup(source.text, 'lxml')
item1 = soup.find('div', class_='row results-item')
The HTML code of the page seems to be highly nested and confusing.
TL;DR
I'm trying to get the following into a CSV, but the HTML is highly nested :(
1) PDB ID (4 digit alphanumeric code)
2) Protein complex name (Ex : The Fk1 domain of FKBP51....)
3) Method (X-ray diffraction, NMR etc)
Any help or advice will be highly appreciated!
Thank you in advance :)
Actually, you can't scrape this kind of website with BeautifulSoup alone; the page renders its data client-side rather than in the initial HTML. However, I have come up with a solution: fetch the data as JSON from the site's search endpoint.
import requests

headers = {
    "User-Agent": ("Mozilla/5.0 (Linux; Android 5.0; SM-G900P Build/LRX21T) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/80.0.3987.162 Mobile Safari/537.36")
}

payload = {
    "query": {"type": "group", "logical_operator": "and", "nodes": [
        {"type": "group", "logical_operator": "and", "nodes": [
            {"type": "group", "logical_operator": "and", "nodes": [
                {"type": "terminal", "service": "text",
                 "parameters": {"negation": False, "value": "plasmodium falciparum"},
                 "node_id": 0},
                {"type": "group", "logical_operator": "and", "nodes": [
                    {"type": "terminal", "service": "text",
                     "parameters": {"operator": "exact_match", "negation": False,
                                    "value": "Homosapiens",
                                    "attribute": "rcsb_entity_source_organism.ncbi_scientific_name"},
                     "node_id": 1}]}]}],
         "label": "text"}],
        "label": "query-builder"},
    "return_type": "entry",
    "request_options": {"scoring_strategy": "combined",
                        "sort": [{"sort_by": "score", "direction": "desc"}],
                        "pager": {"start": 0, "rows": 100}},
    "request_info": {"src": "ui", "query_id": "6878ab86935e083352a6914232c8b2e5"}
}

# pass the payload as the json= argument
response = requests.post('https://www.rcsb.org/search/data', headers=headers,
                         json=payload)
print(response.json())
You can also play with the payload values to tweak the query. Hope this helps!
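To land the results in the CSV the question asks for, here is a hedged sketch. Two assumptions are baked in: that the search response carries a result_set list whose items have an identifier field (the 4-character PDB ID), and that https://data.rcsb.org/rest/v1/core/entry/<id> returns the entry's title and experimental method. Print one response of each kind to confirm before relying on it.

import csv
import requests

# Assumed response shape: {"result_set": [{"identifier": "XXXX", ...}, ...]}
pdb_ids = [hit["identifier"] for hit in response.json().get("result_set", [])]

with open("pdb_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["pdb_id", "name", "method"])
    for pdb_id in pdb_ids:
        # Assumed per-entry endpoint with struct.title and exptl[].method
        entry = requests.get("https://data.rcsb.org/rest/v1/core/entry/" + pdb_id).json()
        name = entry.get("struct", {}).get("title", "")
        method = entry.get("exptl", [{}])[0].get("method", "")
        writer.writerow([pdb_id, name, method])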

How do I scrape information from a frame using Python and Beautiful Soup

This is my first attempt at web scraping. I am trying to use Beautiful Soup to scrape phone numbers from Raymond James' website. An example would be http://www.raymondjames.com/office_locator_display.asp?addressline=90210
Whenever I use BeautifulSoup, I am unable to find the appropriate information in the HTML.
import urllib2
from bs4 import BeautifulSoup

url = 'http://www.raymondjames.com/office_locator_display.asp?addressline=90210'
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36')]
page_to_scrape = opener.open(url).read()
soup = BeautifulSoup(page_to_scrape.decode('utf-8', 'ignore'), 'html.parser')
The output does not contain the information I need. It seems the URL I provide does not point to the location frame.
I don't do a lot of work with web data in Python, so I'm not sure how to direct Beautiful Soup into the 'frame' to get the contact information.
As Martijn said, dig into the network requests and the source data is there. In this case it's an XML response to a GET request made from the iframe. Armed with that URL, the solution is pretty simple:
import urllib2
from bs4 import BeautifulSoup
soup = BeautifulSoup(urllib2.urlopen('http://hosted.where2getit.com/raymondjames/ajax?&xml_request=%3Crequest%3E%3Cappkey%3E7BD67064-FC36-11E0-B80D-3AEEDDB2B31E%3C%2Fappkey%3E%3Cformdata+id%3D%22locatorsearch%22%3E%3Cdataview%3Estore_default%3C%2Fdataview%3E%3Climit%3E30%3C%2Flimit%3E%3Cgeolocs%3E%3Cgeoloc%3E%3Caddressline%3E90210%3C%2Faddressline%3E%3Clongitude%3E%3C%2Flongitude%3E%3Clatitude%3E%3C%2Flatitude%3E%3Ccountry%3E%3C%2Fcountry%3E%3C%2Fgeoloc%3E%3C%2Fgeolocs%3E%3Csearchradius%3E25%7C50%7C100%3C%2Fsearchradius%3E%3C%2Fformdata%3E%3C%2Frequest%3E'), 'lxml')
# parse the points of interest into a list
pois = soup.find_all('poi')
# now have your way with them!
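For example, a short hypothetical continuation: the child tag names inside each <poi> element (like <phone>) are assumptions about the where2getit XML, so print one poi first and adjust to the layout you actually see.

# Inspect one <poi> to learn its child tags, then pull the phone numbers.
print(pois[0].prettify())
for poi in pois:
    phone = poi.find('phone')  # assumed tag name; adjust after inspecting
    if phone is not None:
        print(phone.get_text(strip=True))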
