<div id="wordlist" style="display:none;">leave|river|what|try|just|because|cut|now|made|
I just want to get the words. Here's the code I used, but it just gives me wordlist as the output:
import requests
from bs4 import BeautifulSoup
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
text = soup.find("div", attrs={'id': 'wordlist'}).get('id')
print(text)
The code below should output just the text.
text = soup.find("div", attrs={'id': 'wordlist'})
print(text.text)
The problem is your use of get('id'), which returns the value of the id attribute ("wordlist"). Instead, just print the text from the soup.find result:
result = soup.find("div", attrs={'id': 'wordlist'})
print(result.text)
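Since the div's text is a pipe-separated word list, a minimal follow-up sketch splits it into individual words (the html_doc below is a shortened stand-in for the page's HTML, since the original URL isn't shown):
from bs4 import BeautifulSoup

# Shortened stand-in for the real page's HTML (normally requests.get(url).text)
html_doc = '<div id="wordlist" style="display:none;">leave|river|what|try|just|because|cut|now|made</div>'
soup = BeautifulSoup(html_doc, 'html.parser')
wordlist_div = soup.find("div", attrs={'id': 'wordlist'})

# .text gives the raw pipe-separated string; split('|') turns it into a list of words
words = wordlist_div.text.split('|')
print(words)  # ['leave', 'river', 'what', 'try', 'just', 'because', 'cut', 'now', 'made']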
import requests
from bs4 import BeautifulSoup
url = 'https://www.officialcharts.com/charts/singles-chart'
reqs = requests.get(url)
soup = BeautifulSoup(reqs.text, 'html.parser')
urls = []
for link in soup.find_all('a'):
    print(link.get('href'))
def chart_spider(max_pages):
    page = 1
    while page >= max_pages:
        url = "https://www.officialcharts.com/charts/singles-chart"
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, 'html.parser')
        for link in soup.findAll('a', {"class": "title"}):
            href = "BAD HABITS" + link.title(href)
            print(href)
        page += 1

chart_spider(1)
I'm wondering how to make this print just the titles of the songs instead of the entire page. For now, I want it to go through the top 100 chart and print all the titles. Thanks.
Here is a possible solution, which modifies your code as little as possible:
#!/usr/bin/env python3
import requests
from bs4 import BeautifulSoup
URL = 'https://www.officialcharts.com/charts/singles-chart'
def chart_spider():
    source_code = requests.get(URL)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, 'html.parser')
    for title in soup.find_all('div', {"class": "title"}):
        print(title.contents[1].string)

chart_spider()
The result is a list of all the titles found in the page, one per line.
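If contents[1] ever feels brittle (it depends on which child node sits inside each title <div>), calling title.get_text(strip=True) should return the same title text regardless of the exact nesting.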
If all you want is the titles of each song in the top 100, this code:
import requests
from bs4 import BeautifulSoup
url='https://www.officialcharts.com/charts/singles-chart/'
req = requests.get(url)
soup = BeautifulSoup(req.content, 'html.parser')
titles = [i.text.replace('\n', '') for i in soup.find_all('div', class_="title")]
does what you are looking for.
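A quick way to inspect the result, assuming the request succeeded, is to print the collected list one title per line:
for title in titles:
    print(title)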
You can do it like this.
The song title is present inside a <div> tag whose class name is title.
Select all those <div> tags with .find_all(). This gives you a list of all matching <div> tags.
Iterate over the list and print the text of each div.
from bs4 import BeautifulSoup
import requests
url = 'https://www.officialcharts.com/charts/singles-chart/'
r = requests.get(url)
soup = BeautifulSoup(r.text, 'lxml')
d = soup.find_all('div', class_='title')
for i in d:
    print(i.text.strip())
Sample Output:
BAD HABITS
STAY
REMEMBER
BLACK MAGIC
VISITING HOURS
HAPPIER THAN EVER
INDUSTRY BABY
WASTED
.
.
.
I have this code
import requests
from bs4 import BeautifulSoup
result = requests.get("http://www.cvbankas.lt/")
src = result.content
soup = BeautifulSoup(src, 'lxml')
urls = []
for article_tag in soup.find_all("article"):
    a_tag = article_tag.find('a')
    urls.append(a_tag.attrs['href'])
    div_tag = article_tag.find('span')
    urls.append(div_tag.attrs['class'])
print(urls)
Can anyone explain to me how to get the data marked in red (the salary amounts)?
You can get the <span> with the class "salary_amount":
salary_object = article_tag.find("span", class_="salary_amount")
and then extract the text with the .text attribute of the returned object.
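Put together with your existing loop, a minimal sketch might look like this (the salary_amount class name comes from the answer above; adjust it if the site's markup differs):
import requests
from bs4 import BeautifulSoup

result = requests.get("http://www.cvbankas.lt/")
soup = BeautifulSoup(result.content, 'lxml')

salaries = []
for article_tag in soup.find_all("article"):
    # some articles may not contain a salary span, so guard against None
    salary_object = article_tag.find("span", class_="salary_amount")
    if salary_object is not None:
        salaries.append(salary_object.text.strip())
print(salaries)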
How can I extract a specific portion of an HTML file, for example https://patents.google.com/patent/EP1208209A1/en?oq=medicinal+chemistry?
So far I have used BeautifulSoup to get the text version of the HTML without all the tags, but I would like my code to read only, say, the claims section of the above-mentioned file.
Here you go, mate. I found out that on this site the claims section is an HTML element with its own identifying attribute, which makes things easier. I just collected the section and converted it to a string so you can play with it.
import requests
from bs4 import BeautifulSoup
page = requests.get("https://patents.google.com/patent/EP1208209A1/en?oq=medicinal+chemistry")
soup = BeautifulSoup(page.content, 'html.parser')
claim_sect = soup.find_all('section', attrs={"itemprop":"claims"})
print('This is the raw content: \n')
print(str(claim_sect))
print('This is the variable type: \n')
print(str(type(claim_sect)))
str_sect = str(claim_sect[0])  # convert the first matched section to a string
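If you only want the readable text of the claims without the markup, calling get_text() on the first matched section should do it, for example:
claims_text = claim_sect[0].get_text(separator='\n', strip=True)
print(claims_text)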
As far as I see, there are two divs with the class="flex flex-width style-scope patent-result".
soup = BeautifulSoup(sdata, 'html.parser')  # sdata holds the page's HTML
mydivs = soup.findAll("div", {"class": "flex flex-width style-scope patent-result"})
div_with_claims = mydivs[1]
filename = 'C:/Users/xyz/.ipynb_checkpoints/EP1208209A1.html'
with open(filename, 'r', encoding='utf-8') as html_file:
    source_code = html_file.read()
#print(source_code)
soup = BeautifulSoup(source_code, 'html.parser')
print(soup.get_text())
#mydivs = soup.findAll("div", {"class": "flex flex-width style-scope patent-result"})
#div_with_claims = mydivs[1]
#print(div_with_claims)
I would like to get Orange and 1p from the sample HTML below.
I can get the combined string Orange1p, but I cannot get the two values separately.
Is there any way to get the two values separately?
Sample HTML:
<td class="info">Orange<br/>1p</td>
Code currently used:
soup = BeautifulSoup(html_doc, 'html.parser')
data = soup.find("td", {"class": "info"}) # with current output of `Orange1p`
from bs4 import BeautifulSoup
html_doc = """<td class="info">Orange<br/>1p</td>"""
soup = BeautifulSoup(html_doc, 'html.parser')
print(list(soup.find("td", {'class': 'info'}).strings))
Output:
['Orange', '1p']
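If you prefer the values with surrounding whitespace already trimmed, .stripped_strings works the same way; a small sketch, assuming the cell always holds exactly two values:
name, price = soup.find("td", {'class': 'info'}).stripped_strings
print(name)   # Orange
print(price)  # 1p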
from bs4 import BeautifulSoup
import requests
def imdb_spider():
    url = 'http://www.imdb.com/chart/top'
    source_code = requests.get(url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text)
    for link in soup.findAll('a', {'class': 'secondaryInfo'}):
        href = link.get('href')
        print(href)

imdb_spider()
I'm trying to get the links of all the top-rated movies from IMDb. I'm using PyCharm. The code runs for more than 30 minutes, but I'm not getting any output in my console.
You're correct that there's an element with class secondaryInfo for every movie title, but that's not the <a> element. If you want to find the link, you have to use a different selector; for example, the following selector will do the trick instead of soup.findAll():
soup.select('td.titleColumn a')
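A minimal sketch of that approach, reusing the URL from your code (IMDb's markup can change over time, so treat the selector as an assumption):
import requests
from bs4 import BeautifulSoup

def imdb_spider():
    url = 'http://www.imdb.com/chart/top'
    plain_text = requests.get(url).text
    soup = BeautifulSoup(plain_text, 'html.parser')
    # each chart row keeps its link inside <td class="titleColumn"><a href=...>
    for link in soup.select('td.titleColumn a'):
        print(link.get('href'))

imdb_spider()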
The problem is that {'class': 'secondaryInfo'} matches a <span> element, not the <a> link.
So try this:
from bs4 import BeautifulSoup
import requests
def imdb_spider():
    url = 'http://www.imdb.com/chart/top'
    source_code = requests.get(url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, "lxml")
    for td in soup.findAll('td', {'class': 'titleColumn'}):
        href = td.find('a').get('href')
        print(href)

imdb_spider()