Extract part of text with BeautifulSoup - Python

How can I extract the text after the <br/> tag?
I only want that text, not whatever is inside the <strong> tag.
<p><strong>A title</strong><br/>
Text I want which also
includes linebreaks.</p>
I have tried code such as
text_content = paragraph.get_text(separator='strong/').strip()
But this will also include the text in the "strong" tag.
The "paragraph" variable is a bs4.element.Tag if that was not clear.
Any help appreciated!

If you have the <p> tag, then find the <br> within that and use .next_siblings
import bs4
html = '''<p><strong>A title</strong><br/>
Text I want which also
includes linebreaks.</p>'''
soup = bs4.BeautifulSoup(html, 'html.parser')
paragraph = soup.find('p')
text_wanted = ''.join(paragraph.find('br').next_siblings)
print(text_wanted)
Output:
Text I want which also
includes linebreaks.
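In this example everything after the <br/> is a single text node, so the plain join is enough. If the paragraph contained more tags after the <br/>, a hedged variant (reusing the paragraph variable from the answer above) could pull text out of those tags as well:
text_wanted = ''.join(
    # text nodes are kept as-is, nested tags contribute their text
    s if isinstance(s, str) else s.get_text()
    for s in paragraph.find('br').next_siblings
)
print(text_wanted)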

Find the <br> tag and use .next_element:
from bs4 import BeautifulSoup
data='''<p><strong>A title</strong><br/>
Text I want which also
includes linebreaks.</p>'''
soup = BeautifulSoup(data, 'html.parser')
item = soup.find('p').find('br').next_element
print(item)

Related

Putting Links in Parenthesis with BeautifulSoup

BeautifulSoup's get_text() function only returns the textual information of an HTML page. However, I want my program to return the href link of an <a> tag in parentheses directly after the actual text.
In other words, using get_text() will just return "17.602" on the following HTML:
<a class="xref fm:ParaNumOnly" href="17.602.html#FAR_17_602">17.602</a>
However, I want my program to return "17.602 (17.602.html#FAR_17_602)". How would I go about doing this?
EDIT: What if you need to print text from other tags, such as:
<p> Sample text.
<a class="xref fm:ParaNumOnly" href="17.602.html#FAR_17_602">17.602</a>
Sample closing text.
</p>
In other words, how would you compose a program that would print
Sample text. 17.602 (17.602.html#FAR_17_602) Sample closing text.
You can format the output using f-strings.
Access the tag's text using .text, and then access the href attribute.
from bs4 import BeautifulSoup
html = """
<a class="xref fm:ParaNumOnly" href="17.602.html#FAR_17_602">17.602</a>
"""
soup = BeautifulSoup(html, "html.parser")
a_tag = soup.find("a")
print(f"{a_tag.text} ({a_tag['href']})")
Output:
17.602 (17.602.html#FAR_17_602)
Edit: You can use .next_sibling and .previous_sibling
print(f"{a_tag.previous_sibling.strip()} {a_tag.text} ({a_tag['href']}) {a_tag.next_sibling.strip()}")
Output:
Sample text. 17.602 (17.602.html#FAR_17_602) Sample closing text.
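A rough, more general sketch that does not assume the <a> has exactly one text sibling on each side: walk every direct child of the <p>, append the href in parentheses after each anchor, and join the pieces with single spaces (the joining and whitespace handling here are assumptions, not part of the original question):
from bs4 import BeautifulSoup
html = """
<p> Sample text.
<a class="xref fm:ParaNumOnly" href="17.602.html#FAR_17_602">17.602</a>
Sample closing text.
</p>
"""
soup = BeautifulSoup(html, "html.parser")
parts = []
for child in soup.find("p").children:
    if getattr(child, "name", None) == "a":
        # anchor: keep its text and add the href in parentheses
        parts.append(f"{child.get_text(strip=True)} ({child['href']})")
    else:
        # plain text node: strip surrounding whitespace, skip empties
        text = str(child).strip()
        if text:
            parts.append(text)
print(" ".join(parts))
Output:
Sample text. 17.602 (17.602.html#FAR_17_602) Sample closing text.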

Get text from inside element without its children

I'm scraping a webpage with several p elements and I want to get the text inside them without including their children.
The page is structured like this:
<p class="default">
<div>I don't want this text</div>
I want this text
</p>
When I use
parent.find_all("p", {"class": "default").get_text() this is the result I get:
I don't want this text
I want this text
I'm using BeautifulSoup 4 with Python 3
Edit: When I use
parent.find_all("p", {"class": "public item-cost"}, text=True, recursive=False)
It returns an empty list
You can use .find_next_sibling() with the text=True parameter:
from bs4 import BeautifulSoup
html_doc = """
<p class="default">
<div>I don't want this text</div>
I want this text
</p>
"""
soup = BeautifulSoup(html_doc, "html.parser")
print(soup.select_one(".default > div").find_next_sibling(text=True))
Prints:
I want this text
Or using .contents:
print(soup.find("p", class_="default").contents[-1])
EDIT: To strip the string:
print(soup.find("p", class_="default").contents[-1].strip())
You can use XPath instead, which is a bit more complex but provides much more powerful querying. Note that BeautifulSoup itself does not support XPath, so this goes through lxml.
Something like this will work for you:
tree.xpath('//p[contains(@class, "default")]/text()[normalize-space()]')
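A minimal runnable sketch of that XPath idea, parsing the snippet as XML for simplicity (a real page would normally go through lxml.html instead):
from lxml import etree
snippet = """
<p class="default">
<div>I don't want this text</div>
I want this text
</p>
"""
tree = etree.fromstring(snippet)
# direct text() children of the matching <p>, whitespace-only nodes filtered out
print(tree.xpath('//p[contains(@class, "default")]/text()[normalize-space()]'))
Prints:
['\nI want this text\n']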

Python BeautifulSoup: How to get text from self-closing tags

I'm trying to parse the contents of an Evernote checklist using BeautifulSoup. But when I call the HTML parser on the contents, it keeps converting the self-closing tags (en-todo) into empty elements, so when I try to get the text of the en-todo tags, it comes back blank.
note_body = '<en-todo checked="true" />window caulk<en-todo />cake pan<en-todo />cake mix<en-todo />salad mix<en-todo checked="true"/>painters tape<br />'
import re
from bs4 import BeautifulSoup
soup = BeautifulSoup(note_body, 'html.parser')
checklist_items = soup.find_all('en-todo')
print checklist_items
The above code returns just the tags, without any of the text.
[<en-todo checked="true"></en-todo>, <en-todo></en-todo>, <en-todo></en-todo>, <en-todo></en-todo>, <en-todo checked="true"></en-todo>]
You need to get the text nodes that aren't enclosed in any tag.
Use tag.next_sibling:
>>> [each.next_sibling for each in checklist_items]
[u'window caulk', u'cake pan', u'cake mix', u'salad mix', u'painters tape']
This works for your string:
from bs4 import BeautifulSoup
soup = BeautifulSoup(note_body, 'html.parser')
checklist_items = soup.find_all('en-todo')
for item in checklist_items:
    print(item.next_sibling)  # the text sits after the empty <en-todo> element, not inside it
Try .next.
Example:
Data: <br /> My information
item.next
Gives: My information
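If the goal is the full checklist state rather than the labels alone, a hedged sketch can pair each <en-todo> with the text node that follows it and with its checked attribute (html.parser turns the self-closing tags into empty elements, as shown in the question):
from bs4 import BeautifulSoup
note_body = '<en-todo checked="true" />window caulk<en-todo />cake pan<en-todo />cake mix<en-todo />salad mix<en-todo checked="true"/>painters tape<br />'
soup = BeautifulSoup(note_body, 'html.parser')
checklist = {
    # the item text sits right after each empty <en-todo> element
    item.next_sibling.strip(): item.get('checked') == 'true'
    for item in soup.find_all('en-todo')
}
print(checklist)
Output:
{'window caulk': True, 'cake pan': False, 'cake mix': False, 'salad mix': False, 'painters tape': True}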

Anchors (URL) instead of text (<p>URL</p>)

Trying to achieve the following logic:
If a URL in the text is surrounded by paragraph tags (example: <p>URL</p>), replace it in place with a link instead: <a href="URL">Click Here</a>
The original file is a database dump (sql, UTF-8). Some URLs already exist in the desired format. I need to fix the missing links.
I am working on a script which uses BeautifulSoup. If other solutions make more sense (regex, etc.), I am open to suggestions.
You can search for all p elements that have text starting with http. Then, replace each with a link:
for elm in soup.find_all("p", text=lambda text: text and text.startswith("http")):
    elm.replace_with(soup.new_tag("a", href=elm.get_text()))
Example working code:
from bs4 import BeautifulSoup
data = """
<div>
<p>http://google.com</p>
<p>https://stackoverflow.com</p>
</div>
"""
soup = BeautifulSoup(data, "html.parser")
for elm in soup.find_all("p", text=lambda text: text and text.startswith("http")):
    elm.replace_with(soup.new_tag("a", href=elm.get_text()))
print(soup.prettify())
Prints:
<div>
 <a href="http://google.com">
 </a>
 <a href="https://stackoverflow.com">
 </a>
</div>
I can imagine this approach breaking, but it should be a good start for you.
If you additionally want to add texts to your links, set the .string property:
soup = BeautifulSoup(data, "html.parser")
for elm in soup.find_all("p", text=lambda text: text and text.startswith("http")):
    a = soup.new_tag("a", href=elm.get_text())
    a.string = "link"
    elm.replace_with(a)
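If you want the URL itself as the visible link text rather than a fixed label, the same loop works with the URL assigned to .string; a self-contained sketch on the same data:
from bs4 import BeautifulSoup
data = """
<div>
<p>http://google.com</p>
<p>https://stackoverflow.com</p>
</div>
"""
soup = BeautifulSoup(data, "html.parser")
for elm in soup.find_all("p", text=lambda text: text and text.startswith("http")):
    url = elm.get_text()
    a = soup.new_tag("a", href=url)
    a.string = url  # show the URL itself as the link text
    elm.replace_with(a)
print(soup.prettify())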

Beautiful Soup - Cannot find the tags

The page is: http://item.taobao.com/item.htm?id=13015989524
You can see its source code.
In that source code, the following markup exists:
<a href="http://item.taobao.com/item.htm?id=13015989524" target="_blank">
But when I use BeautifulSoup to read the source code and execute the following
soup.findAll('a', href="http://item.taobao.com/item.htm?id=13015989524")
It returns [], an empty list. Why does it return nothing?
As far as I can see, the <a> tag you are trying to find is inside a <textarea> tag. BS does not parse the contents of <textarea> as HTML, and rightly so since <textarea> should not contain HTML. In short, that page is doing something sketchy.
If you really need to get that, you might "cheat" and parse the contents of <textarea> again and search within them:
import urllib
from BeautifulSoup import BeautifulSoup as BS
soup = BS(urllib.urlopen("http://item.taobao.com/item.htm?id=13015989524"))
a = []
for textarea in soup.findAll("textarea"):
    textsoup = BS(textarea.text)  # parse the contents as html
    a.extend(textsoup.findAll("a", attrs={"href": "http://item.taobao.com/item.htm?id=13015989524"}))
for tag in a:
    print tag
# outputs
# <a href="http://item.taobao.com/item.htm?id=13015989524" target="_blank"><img ...
# <a href="http://item.taobao.com/item.htm?id=13015989524" title="901 ...
Use a dictionary to store the attribute:
soup.findAll('a', {
    'href': "http://item.taobao.com/item.htm?id=13015989524"
})
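For reference, a Python 3 / bs4 sketch of the same textarea re-parsing idea (the page itself may well have changed or disappeared since the question was asked, so treat this as illustrative only):
import urllib.request
from bs4 import BeautifulSoup

url = "http://item.taobao.com/item.htm?id=13015989524"
soup = BeautifulSoup(urllib.request.urlopen(url), "html.parser")
links = []
for textarea in soup.find_all("textarea"):
    inner = BeautifulSoup(textarea.get_text(), "html.parser")  # re-parse the contents as HTML
    links.extend(inner.find_all("a", href=url))
for tag in links:
    print(tag)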
