How to select element by class and get text? - python

I have been trying to scrape addresses on this page: https://www.worldometers.info/coronavirus/#news
How can I get the values under class="sorting_1"? It is difficult for me; I'm completely new to BeautifulSoup.

Using find:
[t.get_text(strip=True) for t in soup.find_all(attrs={'class': 'sorting_1'})]
or if you're sure that's the only class for the tags you want:
[t.get_text(strip=True) for t in soup.find_all(class_='sorting_1')]
or using select:
[t.get_text(strip=True) for t in soup.select('.sorting_1')]
Any of the above should work. And if you're going to be working with BeautifulSoup, you should really familiarize yourself with the documentation and/or go through at least one tutorial.
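If you want a runnable end-to-end sketch (assuming the class is present in the HTML that requests receives; if the site adds it client-side with JavaScript, it won't be):
import requests
from bs4 import BeautifulSoup

# Fetch the page and apply the selector from above.
url = "https://www.worldometers.info/coronavirus/#news"
soup = BeautifulSoup(requests.get(url).text, "html.parser")
print([t.get_text(strip=True) for t in soup.select(".sorting_1")])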

Related

How to webscrape the correct element from a stat tracking website (cod.tracker.gg) using Python

On this specific page (or any 'matches' page) there are names you can select to view individual statistics for a match. How do I grab the 'kills' stat, for example, using web scraping?
In most of the tutorials I've used, web scraping seems simple. However, when inspecting this site, specifically the 'kills' item, you see something like
<span data-v-71c3e2a1 title="Kills" class="name">
Question 1.) What is the 'data-v-71c3e2a1'? I've never seen anything like this in my HTML, CSS, or web scraping tutorials. It appears in different variations all over the site.
Question 2.) More importantly, how do I grab the number of kills in this section? I've tried using Scrapy and grabbing by XPath:
scrapy shell https://cod.tracker.gg/warzone/match/1424533688251708994?handle=PatrickPM
response.xpath("//*[@id="app"]/div[3]/div[2]/div/main/div[3]/div[2]/div[2]/div[6]/div[2]/div[3]/div[2]/div[1]/div/div[1]/span[2]").get()
but this raises a syntax error
response.xpath("//*[@id="app"]
SyntaxError: invalid syntax
Grabbing by response.css("").get() is also difficult. Should I be using Selenium? Or just regular requests/bs4? Nothing I do can grab it.
Thank you.
Does this return the data you need?
import requests
endpoint = "https://api.tracker.gg/api/v1/warzone/matches/1424533688251708994"
r = requests.get(endpoint, params={"handle": "PatrickPM"})
data = r.json()["data"]
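From there, the kills stat should be somewhere inside data; the exact layout isn't documented here, so a quick way to explore it is:
import json
print(json.dumps(data, indent=2)[:2000])  # peek at the structure to locate the "kills" field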
In any case, I suggest using an API if there's one available. It's much easier than using BeautifulSoup or Selenium.

Scrapy Python Web Scraping - creating XPath

I am trying to create a "universal" XPath, so that when I run the spider, it will be able to download the hotel name for each hotel on the list.
This is the XPath that I need to convert:
//*[@id="offerPage"]/div[3]/div[1]/div[1]/div/div/div/div/div[2]/div/div[1]/h3/a
Can anyone point me in the right direction?
This is an example of how they did it in the Scrapy docs:
https://github.com/scrapy/quotesbot/blob/master/quotesbot/spiders/toscrape-xpath.py
For the text, they have:
'text': quote.xpath('./span[@class="text"]/text()').extract_first(),
When you open "http://quotes.toscrape.com/" and copy the XPath for a text element, you will get:
/html/body/div/div[2]/div[1]/div[1]/span[1]
When you look at the HTML that you are scraping, just using "copy XPath" from the browser's source viewer is not enough.
You need to look at the attributes that the html tags have.
Of course, using just tag types as an XPath can work, but what if not every page you are going to scrape follows that pattern?
The Scrapy example you are using relies on the span's class attribute to precisely point to the target tag.
I suggest reading a bit more about XPath (for example here) to understand how flexible your search patterns can be.
If you want to go even broader, reading about DOM structure will also be useful. Let us know if you need more pointers.
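For example, a hedged sketch of an attribute-anchored XPath (the "hotel-name" class is a placeholder; inspect the page to find the attribute the hotel headings actually carry):
# Anchor on an attribute instead of the browser's positional path.
for hotel in response.xpath('//h3[contains(@class, "hotel-name")]/a'):
    yield {'name': hotel.xpath('./text()').extract_first()}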

Finding specific names on a website using Python

I need to make an app that uses Python to search for specific names on a website. For instance, I have to check whether the string "Robert Paulson" is being used on a website. If it is, the app returns True; otherwise, False. Also, is there any library that can help me make that?
Since you have not attempted to make your application first, then I am not going to post code for you. I will however, suggest using:
urllib2:
A robust module for interacting with web pages, i.e., pulling back the HTML of a page.
BeautifulSoup (from bs4 import BeautifulSoup):
An awesome module to "regex" html to find what it is that you're looking for.
Good luck my friend!
You could do something similar to this other answer. You will just need the regex to find your string.
I have also used the Selenium WebDriver to solve some more complex website searches, although I think the link I provided would solve your problem more simply.
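To illustrate the simple approach (a minimal Python 3 sketch; urllib.request is the modern counterpart of urllib2, and the URL is a placeholder):
import urllib.request
from bs4 import BeautifulSoup

def name_on_page(url, name):
    # Pull back the page's HTML and reduce it to its visible text.
    html = urllib.request.urlopen(url).read()
    text = BeautifulSoup(html, "html.parser").get_text()
    return name in text

print(name_on_page("https://example.com", "Robert Paulson"))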

How to extract all the url's from a website?

I am writing a program in Python to extract all the URLs from a given website: all the URLs from the site, not just from a single page.
As I suppose I am not the first one who wants to do this, I was wondering if there is a ready-made solution or if I have to write the code myself.
It's not gonna be easy, but a decent starting point would be to look into these two libraries:
urllib
BeautifulSoup
I didn't see any ready-made scripts that do this in a quick Google search.
Using the scrapy framework makes this almost trivial.
The time-consuming part would be learning how to use Scrapy. Their tutorials are great, though, and shouldn't take you that long.
http://doc.scrapy.org/en/latest/intro/tutorial.html
Creating a solution that others can use is one of the joys of being part of a programming community. If a scraper doesn't exist, you can create one that everyone can use to get all links from a site!
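As a rough sketch of what such a spider could look like (the domain is a placeholder; Scrapy's built-in duplicate filter keeps it from revisiting pages):
import scrapy

class LinkSpider(scrapy.Spider):
    # Crawl a whole site and yield every URL found, staying on one domain.
    name = "links"
    allowed_domains = ["example.com"]
    start_urls = ["https://example.com/"]

    def parse(self, response):
        for href in response.css("a::attr(href)").extract():
            url = response.urljoin(href)
            yield {"url": url}
            yield scrapy.Request(url, callback=self.parse)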
The given answers are what I would have suggested (+1).
But if you really want to do something quick and simple, and you're on a *NIX platform, try this:
lynx -dump YOUR_URL | grep http
Where YOUR_URL is the URL that you want to check. This should get you all the links you want (except for relative links that aren't written out in full).
You first have to download the page's HTML content using a package like urllib or requests.
After that, you can use Beautiful Soup to extract the URLs. In fact, their tutorial shows how to extract all links enclosed in <a> elements as a specific example:
for link in soup.find_all('a'):
print(link.get('href'))
# http://example.com/elsie
# http://example.com/lacie
# http://example.com/tillie
If you also want to find links not enclosed in <a> elements, you may have to write something more complex on your own.
EDIT: I also just came across two Scrapy link extractor classes that were created specifically for this task:
http://doc.scrapy.org/en/latest/topics/link-extractors.html
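A minimal sketch of using a link extractor inside a spider callback (the import path is the one current Scrapy versions use):
from scrapy.linkextractors import LinkExtractor

def parse(self, response):
    # Extract every link the page exposes, as absolute URLs.
    for link in LinkExtractor().extract_links(response):
        yield {"url": link.url}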

A simple spider question

I am a newbie trying to achieve this simple task using Scrapy, with no luck so far. I am asking for your advice on how to do this with Scrapy or with any other tool (with Python). Thank you.
I want to
start from a page that lists bios of attorneys whose last names start with A: initial_url = www.example.com/Attorneys/List.aspx?LastName=A
from the LastName=A page, extract links to the actual bios: /BioLinks/
visit each of the /BioLinks/ to extract the school info for each attorney.
I am able to extract the /BioLinks/ and School information but I am unable to go from the initial url to the bio pages.
If you think this is the wrong way to go about this, then, how would you achieve this goal?
Many thanks.
Not sure I fully understand what you're asking, but maybe you need to get the absolute URL to each bio and retrieve the source code for that page:
import urllib2
bio_page = urllib2.urlopen(bio_url).read()
Then use regular expressions or other parsing to get the attorney's law school.
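If you stay with Scrapy instead, the whole two-step crawl could look roughly like this (a hedged sketch; the selectors and the .school class are placeholders to adapt to the site's actual markup):
import scrapy

class BioSpider(scrapy.Spider):
    name = "bios"
    start_urls = ["http://www.example.com/Attorneys/List.aspx?LastName=A"]

    def parse(self, response):
        # Step 1: follow each /BioLinks/ href found on the listing page.
        for href in response.xpath('//a[contains(@href, "BioLinks")]/@href').extract():
            yield response.follow(href, callback=self.parse_bio)

    def parse_bio(self, response):
        # Step 2: pull the school info from the individual bio page.
        yield {"school": response.css(".school::text").extract_first()}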
