I'm trying to find a way to search the source code of a web page to see if it contains a keyword. However, no matter what I search for on this page, the only result I get is -1, which I think is telling me I'm doing something wrong; otherwise, it should tell me the position where the word starts. Can someone tell me what I'm doing wrong? Here's the code.
import urllib.request
page = urllib.request.urlopen("http://www.google.com")
print(page.read())
str_page = str(page)
substring = "content"
print(str_page.find("lang"))
import urllib.request

webUrl = urllib.request.urlopen('https://www.youtube.com/user/guru99com')
print("result code: " + str(webUrl.getcode()))
data = webUrl.read()
Source_text = data.decode('utf-8', 'ignore')  # decode the bytes so 'in' works with a str keyword
Keyword = 'your keyword'
if Keyword in Source_text:
    # put whatever you want here, e.g.:
    print(Source_text.find(Keyword))
Please see if the code below helps.
import urllib.request

url = "http://www.google.com"
response = urllib.request.urlopen(url)
html = response.read().decode('utf-8', 'ignore')  # decode the HTML page here
if 'lang' in html:
    print(html.find("lang"))  # gives the position of "lang"
Kindly decode() the HTML page; that should help. In your original code, str(page) stringifies the response object itself rather than its content, which is why find() always returns -1.
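To make the failure mode concrete, here is a minimal sketch of what the original code was actually searching:

import urllib.request

page = urllib.request.urlopen("http://www.google.com")
print(str(page))   # "<http.client.HTTPResponse object at 0x...>" - no page source in this string
html = page.read().decode('utf-8', 'ignore')  # note: read() can only be consumed once per response
print(html.find("lang"))   # position of "lang" in the actual page source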
I am trying to use re to pull a URL out of something I have scraped. I am using the code below to pull out the data, but it seems to come up empty. I am not very familiar with re. Could you show me how to pull out the URL?
match = ["http://www.stats.gov.cn/tjsj/zxfb/201811/t20181105_1631364.html';", "http://www.stats.gov.cn'+urlstr+'"]
url = re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_#.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', match`
#print url just prints both. I only need the match = "http://www.stats.gov.cn/tjsj/zxfb/ANYTHINGHERE/ANYTHINGHERE.html"
print(url)
Expected Output = ["http://www.stats.gov.cn/tjsj/zxfb/201811/t20181105_1631364.html';"]
Okay, I found the solution. The .+ looks for any number of characters between http://www.stats.gov.cn/ and .html. Thanks for your help with this.
match = ["http://www.stats.gov.cn/tjsj/zxfb/201811/t20181105_1631364.html';", "http://www.stats.gov.cn'+urlstr+'"]
url = re.findall('http://www.stats.gov.cn/.+.html', str(match))
print(url)
Expected Output = ["http://www.stats.gov.cn/tjsj/zxfb/201811/t20181105_1631364.html"]
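One caveat worth noting: .+ is greedy, so if a single string contained two URLs ending in .html, the match would run from the first prefix all the way to the last .html. A non-greedy variant (a sketch over the same sample data) stops at the first:

import re

match = ["http://www.stats.gov.cn/tjsj/zxfb/201811/t20181105_1631364.html';", "http://www.stats.gov.cn'+urlstr+'"]
# .+? stops at the first ".html" rather than the last one
url = re.findall(r'http://www\.stats\.gov\.cn/.+?\.html', str(match))
print(url)  # ['http://www.stats.gov.cn/tjsj/zxfb/201811/t20181105_1631364.html']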
I'm trying to figure out how to get Python 3 to display a certain phrase from an HTML document. For example, I'll be using the search engine https://duckduckgo.com .
I'd like the code to do a keyword search for var error=document.getElementById and display what is inside the parentheses; in this case, it would be "error_homepage". Any help would be appreciated.
import urllib.request
u = input ('Please enter URL: ')
x = urllib.request.urlopen(u)
print(x.read())
You can simply read the website of interest, as you suggested, using urllib.request, and use regular expressions to search the retrieved HTML/JS/... code:
import re
import urllib.request

# the URL that data is read from
url = "http://..."

# the regex pattern for extracting element IDs; \s* tolerates optional whitespace around '='
pattern = r"var\s+error\s*=\s*document\.getElementById\(['\"](?P<element_id>[a-zA-Z0-9_-]+)['\"]\);"

# fetch HTML code
with urllib.request.urlopen(url) as f:
    html = f.read().decode("utf8")

# extract element IDs; with a single group, findall returns the captured group for each match
for m in re.findall(pattern, html):
    print(m)
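If you only expect a single occurrence, a small variation (continuing the snippet above, same pattern and html) is to use re.search and read the named group directly:

m = re.search(pattern, html)
if m:
    print(m.group("element_id"))  # e.g. "error_homepage"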
I am a beginner in web crawling, and I have a question regarding crawling multiple URLs.
I am using CNBC in my project. I want to extract news titles and URLs from its home page, and I also want to crawl the contents of the news articles from each URL.
This is what I've got so far:
import requests
from lxml import html
import pandas

url = "http://www.cnbc.com/"
response = requests.get(url)
doc = html.fromstring(response.text)
headlineNode = doc.xpath('//div[@class="headline"]')
len(headlineNode)
result_list = []

for node in headlineNode:
    url_node = node.xpath('./a/@href')
    title = node.xpath('./a/text()')
    soup = BeautifulSoup(url_node.content)
    text = [''.join(s.findAll(text=True)) for s in soup.findAll("div", {"class": "group"})]
    if (url_node and title and text):
        result_list.append({'URL': url + url_node[0].strip(),
                            'TITLE': title[0].strip(),
                            'TEXT': text[0].strip()})

print(result_list)
len(result_list)
I keep on getting an error saying that 'list' object has no attribute 'content'. I want to create a dictionary that contains the title, the URL, and the article content for each headline. Is there an easier way to approach this?
Great start on the script. However, soup = BeautifulSoup(url_node.content) is wrong: url_node is a list. You need to form the full news URL, use requests to get its HTML, and then pass that to BeautifulSoup.
Apart from that, there are a few things I would look at:
I see import issues: BeautifulSoup is not imported.
Add from bs4 import BeautifulSoup to the top. Are you using pandas? If not, remove the import.
Some of the news divs on CNBC with the big banner picture will yield a zero-length list when you query url_node = node.xpath('./a/@href'). You need to find the appropriate logic and selectors to get those news URLs as well. I will leave that up to you.
Check this out:
import requests
from lxml import html
from bs4 import BeautifulSoup

# Note trailing slash removed so url + url_node[0] concatenates cleanly
url = "http://www.cnbc.com"
response = requests.get(url)
doc = html.fromstring(response.text)
headlineNode = doc.xpath('//div[@class="headline"]')
print(len(headlineNode))
result_list = []

for node in headlineNode:
    url_node = node.xpath('./a/@href')
    title = node.xpath('./a/text()')
    # Figure out logic to get that pic banner news URL
    if len(url_node) == 0:
        continue
    news_html = requests.get(url + url_node[0])
    soup = BeautifulSoup(news_html.content, "html.parser")
    text = [''.join(s.findAll(text=True)) for s in soup.findAll("div", {"class": "group"})]
    if (url_node and title and text):
        result_list.append({'URL': url + url_node[0].strip(),
                            'TITLE': title[0].strip(),
                            'TEXT': text[0].strip()})

print(result_list)
len(result_list)
Bonus debugging tip:
Fire up an ipython3 shell and do %run -d yourfile.py. Look up ipdb and its debugging commands. It's quite helpful for checking what your variables hold and whether you're calling the right methods.
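Alternatively, assuming ipdb is installed (pip install ipdb), you can set a breakpoint directly in the script:

import ipdb

url_node = ['/some/path']  # hypothetical value, just to have something to inspect
ipdb.set_trace()  # execution pauses here; inspect variables, then 'c' to continue
print(url_node)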
Good luck.
I am trying to extract the table "Pharmacology-and-Biochemistry" from the URL https://pubchem.ncbi.nlm.nih.gov/compound/23677941#section=Pharmacology-and-Biochemistry. I have written this code:
from lxml import etree
import urllib.request as ur

url = "https://pubchem.ncbi.nlm.nih.gov/compound/23677941#section=Chemical-and-Physical-Properties"
web = ur.urlopen(url)
s = web.read()
html = etree.HTML(s)
print(html)
nodes = html.xpath('//li[@id="Pharmacology-and-Biochemistry"]/descendant::*')
print(nodes)
but the script is not getting the node specified in the XPath, and the output is an empty list:
[]
I tried several other XPaths, but nothing worked.
Please help me!
I think the problem is that the table you are searching for doesn't exist in the HTML this URL returns; the section is most likely rendered dynamically by JavaScript, so it never appears in the raw page source.
Try to run this:
from urllib.request import urlopen

text = urlopen('https://pubchem.ncbi.nlm.nih.gov/compound/23677941#section=Pharmacology-and-Biochemistry').read().decode('utf-8')
print('Pharmacology-and-Biochemistry' in text)
The result is:
False
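Since the section text is not in the raw HTML, one workaround is to fetch the record data directly as JSON. This is a sketch assuming PubChem's PUG View REST API and its section layout (Record -> Section -> TOCHeading), which you should verify against the actual response:

import json
import urllib.request

# PUG View endpoint for the full compound record (assumed API shape)
url = "https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/23677941/JSON"
with urllib.request.urlopen(url) as f:
    record = json.load(f)

def find_section(sections, heading):
    # recursively search the section tree for a given TOCHeading
    for sec in sections:
        if sec.get("TOCHeading") == heading:
            return sec
        found = find_section(sec.get("Section", []), heading)
        if found:
            return found
    return None

section = find_section(record["Record"]["Section"], "Pharmacology and Biochemistry")
print(section is not None)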
I'm trying to open a webpage and return all the links as a dictionary that would look like this.
{"http://my.computer.com/some/file.html" : "link text"}
So the link would be after the href= and the text would be between the > and the </a>
I'm using https://www.yahoo.com/ as my test website
I keep getting this error:
'href=' in line:
TypeError: a bytes-like object is required, not 'str'
Here's my code:
import urllib.request

def urlDict(myUrl):
    url = myUrl
    page = urllib.request.urlopen(url)
    pageText = page.readlines()
    urlList = {}
    for line in pageText:
        if '<a href=' in line:
            try:
                url = line.split('<a href="')[-1].split('">')[0]
                txt = line.split('<a href="')[-1].split('">')[-1].split('</a>')[0]
                urlList[url] = txt
            except:
                pass
    return urlList
What am I doing wrong? I've looked around, and people have mostly suggested this BeautifulSoup parser thing. I'd use it, but I don't think that would fly with my teacher.
The issue is that you're attempting to compare a byte string to a regular string. If you add print(line) as the first command in your for loop, you'll see that it prints a line of HTML, but with a b' at the beginning, indicating it's a bytes object rather than a decoded string. This makes things difficult. The proper way to use urllib here is the following:
def url_dict(myUrl):
    with urllib.request.urlopen(myUrl) as f:
        s = f.read().decode('utf-8')
This will have the s variable hold the entire text of the page. You can then use a regular expression to parse out the links and the link target. Here is an example which will pull the link targets without the HTML.
import urllib.request
import re

def url_dict():
    # url = myUrl
    with urllib.request.urlopen('http://www.yahoo.com') as f:
        s = f.read().decode('utf-8')
    r = re.compile('(?<=href=").*?(?=")')
    print(r.findall(s))

url_dict()
Using regex to get both the link text and the link target into a dictionary is outside the scope of where you are in your class, so I would absolutely not recommend submitting it for the assignment, although I would recommend learning it for later use.
You'll want to use BeautifulSoup as suggested, as it makes this entire thing extremely easy. There is an example in the docs that you can cut and paste to extract the URLs.
For what it's worth, here is a BeautifulSoup and requests approach.
Feel free to replace requests with urllib, but BeautifulSoup doesn't really have a nice replacement.
import requests
from bs4 import BeautifulSoup
def get_links(url):
    page = requests.get(url)
    soup = BeautifulSoup(page.text, "html.parser")
    # href=True skips anchor tags without an href attribute, avoiding a KeyError
    return {a_tag['href']: a_tag.text for a_tag in soup.find_all('a', href=True)}
for link, text in get_links('https://www.yahoo.com/').items():
    print(text.strip(), link)
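One design note: because the result is a dictionary keyed by href, links that appear more than once on the page collapse to a single entry, with the text of the last occurrence winning.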