Anchors (URL) instead of text (<p>URL</p>) - python

Trying to achieve the following logic:
If a URL in the text is surrounded by paragraph tags (example: <p>URL</p>), replace it in place so it becomes a link instead: <a href="URL">Click Here</a>
The original file is a database dump (SQL, UTF-8). Some URLs already exist in the desired format; I need to fix the missing links.
I am working on a script that uses BeautifulSoup. If other solutions make more sense (regex, etc.), I am open to suggestions.

You can search for all p elements whose text starts with http, then replace each one with a link:
for elm in soup.find_all("p", text=lambda text: text and text.startswith("http")):
    elm.replace_with(soup.new_tag("a", href=elm.get_text()))
Example working code:
from bs4 import BeautifulSoup
data = """
<div>
<p>http://google.com</p>
<p>https://stackoverflow.com</p>
</div>
"""
soup = BeautifulSoup(data, "html.parser")
for elm in soup.find_all("p", text=lambda text: text and text.startswith("http")):
    elm.replace_with(soup.new_tag("a", href=elm.get_text()))
print(soup.prettify())
Prints:
<div>
 <a href="http://google.com">
 </a>
 <a href="https://stackoverflow.com">
 </a>
</div>
I can imagine cases where this approach breaks, but it should be a good start for you.
If you additionally want to set the link text, use the .string property:
soup = BeautifulSoup(data, "html.parser")
for elm in soup.find_all("p", text=lambda text: text and text.startswith("http")):
    a = soup.new_tag("a", href=elm.get_text())
    a.string = "link"
    elm.replace_with(a)
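Putting it together for the original goal (paragraph-wrapped URLs become "Click Here" links), a minimal sketch, assuming the dump's HTML fragments parse cleanly with html.parser (note `string=` is the newer name for the `text=` parameter):

```python
from bs4 import BeautifulSoup

def linkify(html):
    """Replace <p>URL</p> paragraphs with <a href="URL">Click Here</a> anchors."""
    soup = BeautifulSoup(html, "html.parser")
    for p in soup.find_all("p", string=lambda t: t and t.startswith("http")):
        a = soup.new_tag("a", href=p.get_text(strip=True))
        a.string = "Click Here"
        p.replace_with(a)
    return str(soup)

print(linkify("<div><p>http://google.com</p></div>"))
# -> <div><a href="http://google.com">Click Here</a></div>
```

Since the dump itself is SQL rather than HTML, you would run each HTML-bearing column value through `linkify` rather than the whole file.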

Related

Putting Links in Parenthesis with BeautifulSoup

BeautifulSoup's get_text() function only returns the textual content of an HTML webpage. However, I want my program to return the href of an <a> tag in parentheses directly after the tag's text.
In other words, using get_text() will just return "17.602" on the following HTML:
<a class="xref fm:ParaNumOnly" href="17.602.html#FAR_17_602">17.602</a>
However, I want my program to return "17.602 (17.602.html#FAR_17_602)". How would I go about doing this?
EDIT: What if you need to print text from other tags, such as:
<p> Sample text.
<a class="xref fm:ParaNumOnly" href="17.602.html#FAR_17_602">17.602</a>
Sample closing text.
</p>
In other words, how would you compose a program that would print
Sample text. 17.602 (17.602.html#FAR_17_602) Sample closing text.
You can format the output using f-strings.
Access the tag's text using .text, and then access the href attribute.
from bs4 import BeautifulSoup
html = """
<a class="xref fm:ParaNumOnly" href="17.602.html#FAR_17_602">17.602</a>
"""
soup = BeautifulSoup(html, "html.parser")
a_tag = soup.find("a")
print(f"{a_tag.text} ({a_tag['href']})")
Output:
17.602 (17.602.html#FAR_17_602)
Edit: You can use .next_sibling and .previous_sibling
print(f"{a_tag.previous_sibling.strip()} {a_tag.text} ({a_tag['href']}) {a_tag.next_sibling.strip()}")
Output:
Sample text. 17.602 (17.602.html#FAR_17_602) Sample closing text.
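If the paragraph can contain an arbitrary mix of text and links, a more general sketch is to walk the <p>'s direct children and format <a> tags as they are encountered (using the sample markup above):

```python
from bs4 import BeautifulSoup, Tag

html = """
<p> Sample text.
<a class="xref fm:ParaNumOnly" href="17.602.html#FAR_17_602">17.602</a>
Sample closing text.
</p>
"""

soup = BeautifulSoup(html, "html.parser")
parts = []
for node in soup.find("p").children:
    if isinstance(node, Tag) and node.name == "a":
        # Format links as "text (href)"
        parts.append(f"{node.get_text()} ({node['href']})")
    else:
        # Plain text node; drop surrounding whitespace
        text = node.strip()
        if text:
            parts.append(text)
print(" ".join(parts))
# -> Sample text. 17.602 (17.602.html#FAR_17_602) Sample closing text.
```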

Get text from inside element without its children

I'm scraping a webpage with several p elements and I want to get the text inside of them without including their children.
The page is structured like this:
<p class="default">
<div>I don't want this text</div>
I want this text
</p>
When I use
parent.find_all("p", {"class": "default"})[0].get_text() this is the result I get:
I don't want this text
I want this text
I'm using BeautifulSoup 4 with Python 3
Edit: When I use
parent.find_all("p", {"class": "public item-cost"}, text=True, recursive=False)
It returns an empty list
You can use .find_next_sibling() with text=True parameter:
from bs4 import BeautifulSoup
html_doc = """
<p class="default">
<div>I don't want this text</div>
I want this text
</p>
"""
soup = BeautifulSoup(html_doc, "html.parser")
print(soup.select_one(".default > div").find_next_sibling(text=True))
Prints:
I want this text
Or using .contents:
print(soup.find("p", class_="default").contents[-1])
EDIT: To strip the string:
print(soup.find("p", class_="default").contents[-1].strip())
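If the direct text could be split across several nodes (e.g. text both before and after the <div>), one alternative sketch is to collect only the tag's immediate strings with find_all(string=True, recursive=False):

```python
from bs4 import BeautifulSoup

html_doc = """
<p class="default">
<div>I don't want this text</div>
I want this text
</p>
"""

soup = BeautifulSoup(html_doc, "html.parser")
p = soup.find("p", class_="default")
# recursive=False restricts the search to direct children of <p>,
# so strings inside the nested <div> are skipped
texts = [t.strip() for t in p.find_all(string=True, recursive=False) if t.strip()]
print(" ".join(texts))
# -> I want this text
```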
You can use XPath, which is a bit more complex but provides much more powerful querying. Note that BeautifulSoup itself does not support XPath; you would use lxml for this.
Something like this will work for you:
doc.xpath('//p[contains(@class, "default")]/text()[normalize-space()]')
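Since BeautifulSoup has no .xpath() method, here is a sketch of that query with lxml.etree. A single slash before text() returns only the <p>'s own text nodes; the double-slash version would also match text inside the nested <div>:

```python
from lxml import etree

doc = etree.fromstring(
    '<p class="default">'
    "<div>I don't want this text</div>"
    "I want this text"
    "</p>"
)
# Direct text() children of <p>; normalize-space() filters
# out whitespace-only nodes
texts = doc.xpath('//p[contains(@class, "default")]/text()[normalize-space()]')
print([t.strip() for t in texts])
# -> ['I want this text']
```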

Extract part of text with Beautifulsoup

How can I extract the text after the <br/> tag?
I only want that text, not whatever is inside the <strong> tag.
<p><strong>A title</strong><br/>
Text I want which also
includes linebreaks.</p>
Have tried code such as
text_content = paragraph.get_text(separator='strong/').strip()
But this will also include the text in the "strong" tag.
The "paragraph" variable is a bs4.element.Tag if that was not clear.
Any help appreciated!
If you have the <p> tag, then find the <br> within that and use .next_siblings
import bs4
html = '''<p><strong>A title</strong><br/>
Text I want which also
includes linebreaks.</p>'''
soup = bs4.BeautifulSoup(html, 'html.parser')
paragraph = soup.find('p')
text_wanted = ''.join(paragraph.find('br').next_siblings)
print (text_wanted)
Output:
Text I want which also
includes linebreaks.
Find the <br> tag and use next_element:
from bs4 import BeautifulSoup
data='''<p><strong>A title</strong><br/>
Text I want which also
includes linebreaks.</p>'''
soup = BeautifulSoup(data, 'html.parser')
item = soup.find('p').find('br').next_element
print(item)
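An alternative sketch: remove the unwanted elements with .decompose() and then take whatever text the paragraph has left. This mutates the tree, so only do it on a soup you no longer need intact:

```python
from bs4 import BeautifulSoup

data = '''<p><strong>A title</strong><br/>
Text I want which also
includes linebreaks.</p>'''

soup = BeautifulSoup(data, 'html.parser')
paragraph = soup.find('p')
paragraph.find('strong').decompose()  # remove the title element
paragraph.find('br').decompose()      # remove the line-break tag
print(paragraph.get_text().strip())
```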

Regex to search specific text structure

I want to find all results of a certain structure in a string, preferably using regex.
To find all urls, one can use
re.findall(r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+', decode)
and it returns
'https://en.wikipedia.org'
I would like a regex string, which finds:
href="/wiki/*anything*"
OP: beginning must be href="/wiki/ middle can be anything and end must be "
st = "since-OP-did-not-provide-a-sample-string-34278234$'blahhh-okay-enough.href='/wiki/anything/everything/nothing'okay-bye"
print(st[st.find('href'):st.rfind("'")+1])
OUTPUT:
href='/wiki/anything/everything/nothing'
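The find/rfind trick above is fragile (it grabs everything between the first href and the last quote in the whole string). For the exact structure asked for, a regex sketch that captures the /wiki/ path between either quote style (the sample string here is made up):

```python
import re

st = ('<a href="/wiki/Python_(programming_language)">Python</a> '
      "<a href='/wiki/anything/everything/nothing'>link</a>")

# Capture everything after href="/wiki/ up to the closing quote,
# accepting either single or double quotes
matches = re.findall(r"""href=['"](/wiki/[^'"]*)['"]""", st)
print(matches)
# -> ['/wiki/Python_(programming_language)', '/wiki/anything/everything/nothing']
```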
EDIT:
I would go with BeautifulSoup if we are to parse probably an html.
from bs4 import BeautifulSoup
text = '''<a href='/wiki/anything/everything/nothing'><img src="/hp_imgjhg/411/1/f_1hj11_100u.jpg" alt="dyufg" />well wait now <a href='/wiki/hello/how-about-now/nothing'>'''
soup = BeautifulSoup(text, features="lxml")
for line in soup.find_all('a'):
    print("href =", line.attrs['href'])
OUTPUT:
href = /wiki/anything/everything/nothing
href = /wiki/hello/how-about-now/nothing

Python + BeautifulSoup: How to get ‘href’ attribute of ‘a’ element?

I have the following:
html = '''<div class=“file-one”>
<a href=“/file-one/additional” class=“file-link">
<h3 class=“file-name”>File One</h3>
</a>
<div class=“location”>
Down
</div>
</div>'''
And would like to get just the text of href which is /file-one/additional. So I did:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')
link_text = “”
for a in soup.find_all(‘a’, href=True, text=True):
    link_text = a[‘href’]
print “Link: “ + link_text
But it just prints a blank, nothing. Just Link:. So I tested it out on another site but with a different HTML, and it worked.
What could I be doing wrong? Or is there a possibility that the site intentionally programmed to not return the href?
Thank you in advance and will be sure to upvote/accept answer!
The 'a' tag in your html does not have any text directly, but it contains a 'h3' tag that has text. This means that text is None, and .find_all() fails to select the tag. Generally do not use the text parameter if a tag contains any other html elements except text content.
You can resolve this issue if you use only the tag's name (and the href keyword argument) to select elements. Then add a condition in the loop to check if they contain text.
soup = BeautifulSoup(html, 'html.parser')
links_with_text = []
for a in soup.find_all('a', href=True):
    if a.text:
        links_with_text.append(a['href'])
Or you could use a list comprehension, if you prefer one-liners.
links_with_text = [a['href'] for a in soup.find_all('a', href=True) if a.text]
Or you could pass a lambda to .find_all().
tags = soup.find_all(lambda tag: tag.name == 'a' and tag.get('href') and tag.text)
If you want to collect all links whether they have text or not, just select all 'a' tags that have a 'href' attribute. Anchor tags usually have links but that's not a requirement, so I think it's best to use the href argument.
Using .find_all().
links = [a['href'] for a in soup.find_all('a', href=True)]
Using .select() with CSS selectors.
links = [a['href'] for a in soup.select('a[href]')]
You can also use attrs to get the href with a regex search (remember to import re):
soup.find('a', href=re.compile(r'[/]([a-z]|[A-Z])\w+')).attrs['href']
First of all, use a different text editor that doesn't use curly quotes.
Second, remove the text=True flag from the soup.find_all
You could solve this with just a couple lines of gazpacho:
from gazpacho import Soup
html = """\
<div class="file-one">
<a href="/file-one/additional" class="file-link">
<h3 class="file-name">File One</h3>
</a>
<div class="location">
Down
</div>
</div>
"""
soup = Soup(html)
soup.find("a", {"class": "file-link"}).attrs['href']
Which would output:
'/file-one/additional'
A bit late to the party but I had the same issue recently scraping some recipes and got mine printing clean by doing this:
from bs4 import BeautifulSoup
import requests
source = requests.get('url for website')
soup = BeautifulSoup(source.text, 'lxml')
for article in soup.find_all('article'):
    link = article.find('a', href=True)['href']
    print(link)
