I'm using pywikibot-core; before that I used another Python MediaWiki API wrapper, Wikipedia.py (which has a .HTML() method). I switched to pywikibot-core because I think it has many more features, but I can't find a similar method.
(beware: I'm not very skilled).
I'll post user283120's second answer here, which is more precise than the first one:
Pywikibot core doesn't support any direct (HTML) way to interact with the wiki, so you should use the API.
If you need to, you can do it easily by using urllib2.
This is an example I used to get the HTML of a wiki page on Commons:
import urllib2
...
url = "https://commons.wikimedia.org/wiki/" + page.title().replace(" ","_")
html = urllib2.urlopen(url).read().decode('utf-8')
"[saveHTML.py] downloads the HTML-pages of articles and images and saves the interesting parts, i.e. the article-text and the footer to a file"
source: https://git.wikimedia.org/blob/pywikibot%2Fcompat.git/HEAD/saveHTML.py
IIRC you want the HTML of entire pages, so you need something that uses api.php?action=parse. In Python I'd often just use wikitools for such a thing; I don't know about PWB or the other requirements you have.
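For example, here is a minimal sketch (not from the original answer) of calling action=parse with the requests library; the endpoint and page title are only illustrative:
import requests

# hypothetical example: fetch the rendered HTML of one page via the MediaWiki API
params = {
    "action": "parse",
    "page": "Douglas Adams",   # any page title
    "prop": "text",
    "format": "json",
}
resp = requests.get("https://en.wikipedia.org/w/api.php", params=params).json()
html = resp["parse"]["text"]["*"]   # the rendered HTML body of the page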
In general you should use pywikibot instead of wikipedia (e.g. instead of "import wikipedia" you should use "import pywikibot"), and if you are looking for methods and classes that were in wikipedia.py, they are now separated and can be found in the pywikibot folder (mainly in page.py and site.py).
If you want to run the scripts you wrote for compat, you can use a script in pywikibot-core named compat2core.py (in the scripts folder), and there is detailed help about the conversion in README-conversion.txt; read it carefully.
The MediaWiki API has a parse action which allows you to get the HTML snippet for wiki markup as returned by the MediaWiki markup parser.
For the pywikibot library there is already a function implemented, which you can use like this:
def getHtml(self, pageTitle):
    '''
    get the HTML code for the given page Title

    Args:
        pageTitle(str): the title of the page to retrieve

    Returns:
        str: the rendered HTML code for the page
    '''
    page = self.getPage(pageTitle)
    html = page._get_parsed_page()
    return html
When using the mwclient Python library there is a generic api method, see:
https://github.com/mwclient/mwclient/blob/master/mwclient/client.py
which can be used to retrieve the HTML code like this:
def getHtml(self, pageTitle):
    '''
    get the HTML code for the given page Title

    Args:
        pageTitle(str): the title of the page to retrieve
    '''
    api = self.getSite().api("parse", page=pageTitle)
    if "parse" not in api:
        raise Exception("could not retrieve html for page %s" % pageTitle)
    html = api["parse"]["text"]["*"]
    return html
As shown above, this gives a duck-typed interface, which is implemented in the py-3rdparty-mediawiki library, for which I am a committer. This was resolved by closing issue 38 - add HTML page retrieval.
With Pywikibot you may use http.request() to get the html content:
import pywikibot
from pywikibot.comms import http
site = pywikibot.Site('wikipedia:en')
page = pywikibot.Page(site, 'Elvis Presley')
path = '{}/index.php?title={}'.format(site.scriptpath(), page.title(as_url=True))
r = http.request(site, path)
print(r[94:135])
This should give the HTML content:
'<title>Elvis Presley – Wikipedia</title>\n'
With Pywikibot 6.0, http.request() returns a requests.Response object rather than plain text. In this case you must use the text attribute:
print(r.text[94:135])
to get the same result.
Hello everyone,
I am trying to write a program in Python to automatically check a website's language. My script looks at the HTML header, identifies where the string 'lang' appears, and prints the corresponding language. I use the 'requests' module.
import requests

request = requests.get('https://en.wikipedia.org/wiki/Main_Page')
splitted_text = request.text.split()
matching = [s for s in splitted_text if "lang=" in s]
language_website = matching[0].split('=')[1]
print(language_website[1:3])
>>> en
I have tested it over several websites, and it works (given the language is correctly configured in the HTML in the first place, which is likely for the websites I consider in my research).
My question is: is there a more straightforward / consistent / systematic way to achieve the same thing? How would one look at the HTML using Python and return the language the website is written in? Is there a quicker way, using lxml for instance, that does not involve parsing strings like I do?
I know the question of how to find a website's language has been asked before, and the method of using the HTML header to retrieve the language was mentioned, but it was not developed and no code was suggested, so I think this post is reasonably different.
Thank you so very much! Have a wonderful day,
Berti
You can try this:
import requests
request = requests.head('https://en.wikipedia.org/wiki/Main_Page')
print(request.headers["Content-language"])
If you are interested in getting the data from the page source, this might help.
import requests
import lxml.html

request = requests.get('https://en.wikipedia.org/wiki/Main_Page')
root = lxml.html.fromstring(request.text)
language_construct = root.xpath("//html/@lang")  # this xpath is reliable (long-term), since this is a standard construct
language = "Not found in page source"
if language_construct:
    language = language_construct[0]
print(language)
Note: This approach will not give a result for all webpages, only those that declare an HTML language code (the lang attribute).
Refer to https://www.w3schools.com/tags/ref_language_codes.asp for more.
Combining the above responses
import requests
request = requests.head('https://en.wikipedia.org/wiki/Main_Page')
print(request.headers.get("Content-language", "Not found in page source"))
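Note that the head() call above only consults the response headers. A fuller combination (a sketch, not taken verbatim from the answers) would fall back to the lang attribute in the page source when the header is missing:
import requests
import lxml.html

url = 'https://en.wikipedia.org/wiki/Main_Page'
# try the Content-Language header first
language = requests.head(url).headers.get("Content-Language")
if not language:
    # fall back to the <html lang="..."> attribute in the page source
    root = lxml.html.fromstring(requests.get(url).text)
    lang_attr = root.xpath("//html/@lang")
    language = lang_attr[0] if lang_attr else "Not found"
print(language)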
https://next.newsimpact.com/NewsWidget/Live
I am trying to code a Python script that will grab a value from an HTML table at the link above. The link above is the site that I am trying to grab from, and this is the code I have written. I think my XPath is possibly incorrect, because it has been doing fine on other elements, but the path I'm using is not returning/printing anything.
from lxml import html
import requests
page = requests.get('https://next.newsimpact.com/NewsWidget/Live')
tree = html.fromstring(page.content)
#This will create a list of buyers:
value = tree.xpath('//*[@id="table9521"]/tr[1]/td[4]/text()')
print('Value: ', value)
What is strange is that when I view the page source, I can't find the table I am trying to pull from.
Thank you for your help!
The required data is absent from the initial page source - it comes from an XHR request. You can get it as below:
import requests
response = requests.get('https://next.newsimpact.com/NewsWidget/GetNextEvents?offset=-120').json()
first_previous = response['Items'][0]['Previous'] # Current output - "2.632"
second_previous = response['Items'][1]['Previous'] # Currently - "0.2"
first_forecast = response['Items'][0]['Forecast'] # ""
second_forecast = response['Items'][1]['Forecast'] # "0.3"
You can parse the response as a simple Python dict and get all the required data.
Your problem is simple: requests doesn't handle JavaScript at all. The values are JS-generated!
If you really need to run this XPath, you need to use a module capable of understanding JS, like spynner.
You can test whether you need JS by first using curl, or by disabling JS in your browser. With Firefox: type about:config in the navigation bar, search for javascript.enabled, then double-click it to toggle between true and false.
In Chrome, open the dev tools; the option is in there somewhere.
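As a quick programmatic version of that test, here is a sketch (not from the answer) that confirms the table id is absent from the static HTML returned without JavaScript:
import requests

page = requests.get('https://next.newsimpact.com/NewsWidget/Live')
# if the table is generated by JS after page load, its id will not be in the raw HTML
print('table9521' in page.text)   # expected: False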
Check https://github.com/makinacorpus/spynner
Another (possible) problem: use tree = html.fromstring(page.text), not tree = html.fromstring(page.content).
I implemented the example from: Mediawiki and Python
I read Get wikipedia abstract using python and How to parse/extract data from a mediawiki marked-up article via python and several others.
I am trying to get a dump of some Wikipedia stubs associated with a category and insert them into an internal Semantic MediaWiki site. For the purpose of this example I am using the "Somali_Region" category. The script uses the MediaWiki API to obtain the data, then parses it, removing all template information, which is desirable.
from wikitools import wiki
from wikitools import category
import mwparserfromhell
wikisite = "http://en.wikipedia.org/w/api.php"
parse_category = "Somali_Region"
wikiObject = wiki.Wiki(wikisite)
wikiCategory = category.Category(wikiObject, parse_category)
articles = wikiCategory.getAllMembersGen(namespaces=[0])
for article in articles:
    wikiraw = article.getWikiText()
    parsedWikiText = mwparserfromhell.parse(wikiraw)
    for template in parsedWikiText.filter_templates():
        parsedWikiText.remove(template)
    print parsedWikiText
The internal Semantic MediaWiki site fails if I try to do a dump from Wikipedia and do an insert, so that is not an option. Is it possible to use the API to insert data into the Semantic MediaWiki site? I read the MediaWiki API edit page, but I could not find a Python example.
If I understand correctly, you want to take your parsedWikiText and save it into a private wiki.
Here's what I have for doing that kind of thing (you'll need to store USERNAME and PASSWORD somewhere; I use a config file, but there are more secure ways). I'll pick up from right before your for loop...
# Set up and authenticate into the target wiki if you need to.
from wikitools import wiki, page
target_wiki = wiki.Wiki('http://wiki.example.com/w/api.php')
target_wiki.login(USERNAME, PASSWORD)

for article in articles:
    wikiraw = article.getWikiText()
    parsedWikiText = mwparserfromhell.parse(wikiraw)
    for template in parsedWikiText.filter_templates():
        parsedWikiText.remove(template)

    # Use the API's edit function to save the new content.
    target_title = article.title
    target_page = page.Page(target_wiki, target_title)
    result = target_page.edit(text=parsedWikiText, summary="Imported text")

    # Check to see if it worked.
    if result['edit']['result'] == 'Success':
        print 'Saved', target_title
    else:
        print 'Save failed', target_title
I'm assuming here you want to save parsedWikiText into a new page. If there's already something on the page in your wiki, you'll have to read it first with target_page.getWikiText() and then mix the new text in somehow. I've also assumed the article will have the same name it had in Wikipedia; if not then change target_title.
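For the "mix the new text in" case, here is a small sketch (assuming the same target_page and parsedWikiText objects from the loop above) that appends the imported text below whatever is already on the page:
# read whatever is already on the target page
existing = target_page.getWikiText()
# append the imported text below the existing content
combined = existing + "\n\n" + unicode(parsedWikiText)
result = target_page.edit(text=combined, summary="Appended imported text")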
I've been trying to write a program for the last several hours that does what I thought would be an incredibly simple task:
Program asks for user input (let's say they type 'happiness')
Program queries the website thinkexist using this format ("http://thinkexist.com/search/searchQuotation.asp?search=USERINPUT")
Program returns first quote from the website.
I've tried using XPath with lxml, but I have no experience and every single construction comes back with a blank array.
The actual meat of the quote appears to be contained in the class "sqq."
If I navigate the site via Firebug and click the DOM tab, it appears the quote is in a textNode attribute, "wholeText" or "textContent" -- but I don't know how to use that knowledge programmatically.
Any ideas?
import lxml.html
import urllib
site = 'http://thinkexist.com/search/searchquotation.asp'
userInput = raw_input('Search for: ').strip()
url = site + '?' + urllib.urlencode({'search':userInput})
root = lxml.html.parse(url).getroot()
quotes = root.xpath('//a[@class="sqq"]')
print quotes[0].text_content()
... and if you enter 'Shakespeare', it returns
In real life, unlike in Shakespeare, the sweetness
of the rose depends upon the name it bears. Things
are not only what they are. They are, in very important
respects, what they seem to be.
If it's not necessary for you to implement this via XPath, you may use the BeautifulSoup library like this (let the myXml variable contain the page HTML source):
from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup(myXml)
for a in soup.findAll('a', {'class': 'sqq'}):
    # this is your quote
    print a.contents
Anyway, read the BS documentation, it may be very useful for some scraping needs that don't require the power of XPath.
You could open the html source to find out the exact class you are looking for. For example, to grab the first StackOverflow username encountered on the page you could do:
#!/usr/bin/env python
from lxml import html
url = 'http://stackoverflow.com/questions/4710307'
tree = html.parse(url)
path = '//div[@class="user-details"]/a[@href]'
print tree.findtext(path)
# -> Parseltongue
# OR to print text including the text in children
a = tree.find(path)
print a.text_content()
# -> Parseltongue
I have downloaded the web page into an HTML file. I am wondering what the simplest way is to get the content of that page. By content, I mean the strings that a browser would display.
To be clear:
Input:
<html><head><title>Page title</title></head>
<body><p id="firstpara" align="center">This is paragraph <b>one</b>.
<p id="secondpara" align="blah">This is paragraph <b>two</b>.
</html>
Output:
Page title This is paragraph one. This is paragraph two.
Putting it together:
from BeautifulSoup import BeautifulSoup
import re

def removeHtmlTags(page):
    p = re.compile(r'''<(?:"[^"]*"['"]*|'[^']*'['"]*|[^'">])+>''')
    return p.sub('', page)

def removeHtmlTags2(page):
    soup = BeautifulSoup(page)
    return ''.join(soup.findAll(text=True))
Related
Python HTML removal
Extracting text from HTML file using Python
What is a light python library that can eliminate HTML tags? (and only text)
Remove HTML tags in AppEngine Python Env (equivalent to Ruby’s Sanitize)
RegEx match open tags except XHTML self-contained tags (famous don't use regex to parse html rant)
Parse the HTML with Beautiful Soup.
To get all the text, without the tags, try:
''.join(soup.findAll(text=True))
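A minimal, self-contained sketch of that suggestion (BeautifulSoup 3, matching the other Python 2 snippets in this thread); the sample HTML is the one from the question:
from BeautifulSoup import BeautifulSoup

page = '''<html><head><title>Page title</title></head>
<body><p id="firstpara" align="center">This is paragraph <b>one</b>.
<p id="secondpara" align="blah">This is paragraph <b>two</b>.
</html>'''

soup = BeautifulSoup(page)
text = ''.join(soup.findAll(text=True))   # all text nodes, concatenated
print ' '.join(text.split())              # collapse whitespace for readability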
Personally, I use lxml because it's a swiss-army knife...
from lxml import html
print html.parse('http://someurl.at.domain').xpath('//body')[0].text_content()
This tells lxml to retrieve the page, locate the <body> tag then extract and print all the text.
I do a lot of page parsing and a regex is the wrong solution most of the time, unless it's a one-time-only need. If the author of the page changes their HTML you run a good risk of your regex breaking. A parser is a lot more likely to continue working.
The big problem with a parser is learning how to access the sections of the document you are after, but there are a lot of XPath tools you can use inside your browser that simplify the task.
You want to look at Extracting data from HTML documents - Dive Into Python, because it does (almost) exactly what you want.
The best modules for this task are lxml or html5lib; Beautiful Soup is, IMHO, not worth using anymore. And for recursive models, regular expressions are definitely the wrong method.
If I am understanding your question correctly, this can simply be done by using the urlopen function of urllib. Just have a look at this function to open a URL and read the response, which will be the HTML code of that page.
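A minimal sketch of that suggestion (Python 2, like the rest of this thread); note it returns the raw HTML markup, not only the visible text:
import urllib

html_source = urllib.urlopen('http://example.com').read()
print html_source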
The quickest way to get a usable sample of what a browser would display is to remove any tags from the HTML and print the rest. This can, for example, be done using Python's re.
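For instance, a sketch of such a crude tag stripper (fine for a quick sample, but a real parser is safer, as other answers note):
import re

page = '<body><p>This is paragraph <b>one</b>.</p></body>'
text = re.sub(r'<[^>]+>', ' ', page)   # drop anything that looks like a tag
print ' '.join(text.split())           # collapse whitespace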