I want to find topics related to a given topic, and also the degree of relationship between multiple topics. For this, I tried to extract the wiki page of the topic and build a taxonomy using the categories of the topic (given at the bottom of the page). I want to use the Python API for Wikipedia for this (https://wikipedia.readthedocs.org/en/latest/code.html#api). But when I extract categories, it also returns the hidden categories that are normally not visible on the wiki page.
import wikipedia
import requests
import pprint
from bs4 import BeautifulSoup
wikipedia.set_lang("en")
query = raw_input()
WikiPage = wikipedia.page(title=query, auto_suggest=True)
cat = WikiPage.categories
for i in cat:
    print i
I know the other option is to use a scraper. But I want to use the API to do this.
You can definitely use the API for this. Just append &clshow=!hidden to your category query, like this:
http://en.wikipedia.org/w/api.php?action=query&titles=Stack%20Overflow&prop=categories&clshow=!hidden
(I'm assuming English Wikipedia here, but the API is the same everywhere.)
Also, just to be clear: there is no such thing as a “Python API” to Wikipedia, just the MediaWiki API, which you can call from any programming language. In your example code you are using a Python library (one of many) to access the Wikipedia API. This library does not seem to have an option for excluding hidden categories. For a list of other, perhaps more flexible, Python libraries, see http://www.mediawiki.org/wiki/API:Client_code#Python. Personally I quite like wikitools for simple tasks like yours. It would then look something like this:
from wikitools.wiki import Wiki
from wikitools.api import APIRequest
site = Wiki("http://fa.wikipedia.org/w/api.php")
site.login("username", "password")
params = {
    "action": "query",
    "titles": "سرریز_پشته",
    "prop": "categories",
    "clshow": "!hidden",
}
request = APIRequest(site, params)
result = request.query()
print result
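If you'd rather not add another library, the same query can also be made with plain requests against the action API. This is a minimal sketch under that assumption (standard query/categories parameters, continuation handling omitted):
import requests

params = {
    "action": "query",
    "titles": "Stack Overflow",
    "prop": "categories",
    "clshow": "!hidden",   # exclude hidden categories
    "cllimit": "max",
    "format": "json",
}
response = requests.get("https://en.wikipedia.org/w/api.php", params=params).json()
for page_data in response["query"]["pages"].values():
    for cat in page_data.get("categories", []):
        print(cat["title"])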
Related
Hello everyone,
I am trying to write a program in Python to automatically check a website's language. My script looks at the HTML header, identifies where the string 'lang' appears, and prints the corresponding language. I use the 'requests' module.
import requests

request = requests.get('https://en.wikipedia.org/wiki/Main_Page')
splitted_text = request.text.split()
matching = [s for s in splitted_text if "lang=" in s]
language_website = matching[0].split('=')[1]
print(language_website[1:3])
>>> en
I have tested it over several websites, and it works (given the language is correctly configured in the HTML in the first place, which is likely for the websites I consider in my research).
My question is: is there a more straightforward / consistent / systematic way to achieve the same thing? How would one look at the HTML using Python and return the language the website is written in? Is there a quicker way using lxml, for instance (one that does not involve parsing strings like I do)?
I know the question of how to find a website language has been asked before, and the method using the HTML header to retrieve the language was mentioned, but it was not developed and no code was suggested, so I think this post is reasonably different.
Thank you so very much! Have a wonderful day,
Berti
You can try this:
import requests
request = requests.head('https://en.wikipedia.org/wiki/Main_Page')
print(request.headers["Content-language"])
If you are interested in getting the data from the page source, this might help:
import requests
import lxml.html

request = requests.get('https://en.wikipedia.org/wiki/Main_Page')
root = lxml.html.fromstring(request.text)
language_construct = root.xpath("//html/@lang")  # this xpath is reliable in the long term, since it is a standard construct
language = "Not found in page source"
if language_construct:
    language = language_construct[0]
print(language)
Note: This approach will not give a result for all webpages, only those that declare a language code in their HTML.
Refer to https://www.w3schools.com/tags/ref_language_codes.asp for more.
Combining the above responses:
import requests
request = requests.head('https://en.wikipedia.org/wiki/Main_Page')
print(request.headers.get("Content-language", "Not found in page source"))
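If you want a single helper that tries the Content-Language header first and falls back to the <html lang> attribute, a minimal sketch could look like this (the function name is mine; it assumes requests and lxml are installed):
import requests
import lxml.html

def detect_language(url):
    """Return the declared language of a page, or None if neither the
    Content-Language header nor the <html lang> attribute is present."""
    response = requests.get(url)
    # 1) Prefer the HTTP header, if the server sends it.
    header_lang = response.headers.get("Content-Language")
    if header_lang:
        return header_lang
    # 2) Fall back to the lang attribute on the <html> element.
    root = lxml.html.fromstring(response.text)
    return root.get("lang")

print(detect_language('https://en.wikipedia.org/wiki/Main_Page'))  # e.g. 'en'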
I'm using pywikibot-core, and before that I used another Python MediaWiki API wrapper, Wikipedia.py (which has a .HTML method). I switched to pywikibot-core because I think it has many more features, but I can't find a similar method.
(beware: I'm not very skilled).
I'll post user283120's second answer here, which is more precise than the first one:
Pywikibot core doesn't support any direct (HTML) way to interact with the wiki, so you should use the API.
If you need to, you can do it easily by using urllib2.
This is an example I used to get the HTML of a wiki page on Commons:
import urllib2
...
url = "https://commons.wikimedia.org/wiki/" + page.title().replace(" ","_")
html = urllib2.urlopen(url).read().decode('utf-8')
"[saveHTML.py] downloads the HTML-pages of articles and images and saves the interesting parts, i.e. the article-text and the footer to a file"
source: https://git.wikimedia.org/blob/pywikibot%2Fcompat.git/HEAD/saveHTML.py
IIRC you want the HTML of the entire pages, so you need something that uses api.php?action=parse. In Python I'd often just use wikitools for such a thing; I don't know about PWB or the other requirements you have.
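For reference, a wikitools call to action=parse might look roughly like this (an untested sketch; the page title is a placeholder and the parameters are the standard parse-action ones):
from wikitools.wiki import Wiki
from wikitools.api import APIRequest

site = Wiki("https://commons.wikimedia.org/w/api.php")
params = {
    "action": "parse",
    "page": "Some page title",  # hypothetical title
    "prop": "text",
}
result = APIRequest(site, params).query()
html = result["parse"]["text"]["*"]  # rendered HTML of the page body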
In general you should use pywikibot instead of wikipedia (e.g. instead of "import wikipedia" you should use "import pywikibot"), and if you are looking for methods and classes that used to be in wikipedia.py, they are now separated and can be found in the pywikibot folder (mainly in page.py and site.py).
If you want to run scripts that you wrote for compat, you can use a script in pywikibot-core named compat2core.py (in the scripts folder), and there is detailed help about the conversion in README-conversion.txt, so read it carefully.
The MediaWiki API has a parse action which allows you to get the HTML snippet for the wiki markup as returned by the MediaWiki markup parser.
For the pywikibot library there is already a function implemented which you can use like this:
def getHtml(self, pageTitle):
    '''
    get the HTML code for the given page title

    Args:
        pageTitle(str): the title of the page to retrieve

    Returns:
        str: the rendered HTML code for the page
    '''
    page = self.getPage(pageTitle)
    html = page._get_parsed_page()
    return html
When using the mwclient Python library there is a generic api method (see https://github.com/mwclient/mwclient/blob/master/mwclient/client.py) which can be used to retrieve the HTML code like this:
def getHtml(self, pageTitle):
    '''
    get the HTML code for the given page title

    Args:
        pageTitle(str): the title of the page to retrieve
    '''
    api = self.getSite().api("parse", page=pageTitle)
    if "parse" not in api:
        raise Exception("could not retrieve html for page %s" % pageTitle)
    html = api["parse"]["text"]["*"]
    return html
As shown above, this gives a duck-typed interface, which is implemented in the py-3rdparty-mediawiki library, for which I am a committer. This was resolved by closing issue 38 - add html page retrieval.
With Pywikibot you may use http.request() to get the HTML content:
import pywikibot
from pywikibot.comms import http

site = pywikibot.Site('wikipedia:en')
page = pywikibot.Page(site, 'Elvis Presley')
path = '{}/index.php?title={}'.format(site.scriptpath(), page.title(as_url=True))
r = http.request(site, path)
print(r[94:135])
This should give the HTML content:
'<title>Elvis Presley – Wikipedia</title>\n'
With Pywikibot 6.0, http.request() gives a requests.Response object rather than plain text. In this case you must use the text attribute:
print(r.text[94:135])
to get the same result.
I implemented the example from: Mediawiki and Python
I read Get wikipedia abstract using python and How to parse/extract data from a mediawiki marked-up article via python and several others.
I am trying to get a dump of some Wikipedia stubs associated with a category and insert them into an internal Semantic MediaWiki site. For the purpose of this example I am using the "Somali_Region" category. The script uses the MediaWiki API to obtain data, then it parses the data, removing all template information, which is desirable.
from wikitools import wiki
from wikitools import category
import mwparserfromhell
wikisite = "http://en.wikipedia.org/w/api.php"
parse_category = "Somali_Region"
wikiObject = wiki.Wiki(wikisite)
wikiCategory = category.Category(wikiObject, parse_category)
articles = wikiCategory.getAllMembersGen(namespaces=[0])
for article in articles:
    wikiraw = article.getWikiText()
    parsedWikiText = mwparserfromhell.parse(wikiraw)
    for template in parsedWikiText.filter_templates():
        parsedWikiText.remove(template)
    print parsedWikiText
The internal Semantic MediaWiki site fails if I try to do a dump from Wikipedia and do an insert, so that is not an option. Is it possible to use the API to insert data into the Semantic MediaWiki site? I read the MediaWiki API edit page, but I could not find a Python example.
If I understand correctly, you want to take your parsedWikiText and save it into a private wiki.
Here's what I have for doing that kind of thing (you'll need to store USERNAME and PASSWORD somewhere; I use a config file, but there are more secure ways). I'll pick up from right before your for loop...
# Set up and authenticate into the target wiki if you need to.
from wikitools import wiki, page
target_wiki = wiki.Wiki('http://wiki.example.com/w/api.php')
target_wiki.login(USERNAME, PASSWORD)

for article in articles:
    wikiraw = article.getWikiText()
    parsedWikiText = mwparserfromhell.parse(wikiraw)
    for template in parsedWikiText.filter_templates():
        parsedWikiText.remove(template)

    # Use the API's edit function to save the new content.
    target_title = article.title
    target_page = page.Page(target_wiki, target_title)
    result = target_page.edit(text=parsedWikiText, summary="Imported text")

    # Check to see if it worked.
    if result['edit']['result'] == 'Success':
        print 'Saved', target_title
    else:
        print 'Save failed', target_title
I'm assuming here you want to save parsedWikiText into a new page. If there's already something on the page in your wiki, you'll have to read it first with target_page.getWikiText() and then mix the new text in somehow. I've also assumed the article will have the same name it had in Wikipedia; if not then change target_title.
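A minimal sketch of that read-then-merge step, assuming the target page already exists and you simply want to append the imported text below the current content, might be:
# Hypothetical merge step: keep the existing content and append the imported text.
existing_text = target_page.getWikiText()
combined_text = existing_text + "\n\n" + unicode(parsedWikiText)  # Python 2, matching the code above
result = target_page.edit(text=combined_text, summary="Appended imported text")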
Edit: I now realize the API is simply inadequate and is not even working.
I would like to redirect my question: I want to be able to auto-magically search DuckDuckGo using their "I'm feeling ducky" feature, so that I can search for "stackoverflow", for instance, and get the main page ("https://stackoverflow.com/") as my result.
I am using the duckduckgo API (linked here).
And I found that when using:
r = duckduckgo.query("example")
The results do not reflect a manual search, namely:
for result in r.results:
    print result
Results in:
>>>
>>>
Nothing.
And looking up an index in results raises an out-of-bounds error, since it is empty.
How am I supposed to get results for my search?
It seems the API (according to its documented examples) is supposed to answer questions and give a sort of "I'm feeling ducky" result in the form of r.answer.text.
But the website is made in such a way that I cannot search it and parse the results using normal methods.
I would like to know how I am supposed to parse search results with this API or any other method from this site.
Thank you.
If you visit the DuckDuckGo API page, you will find some notes about using the API. The first note says clearly that:
As this is a Zero-click Info API, most deep queries (non topic names)
will be blank.
And here's the list of those fields:
Abstract: ""
AbstractText: ""
AbstractSource: ""
AbstractURL: ""
Image: ""
Heading: ""
Answer: ""
Redirect: ""
AnswerType: ""
Definition: ""
DefinitionSource: ""
DefinitionURL: ""
RelatedTopics: [ ]
Results: [ ]
Type: ""
It might be a pity, but their API simply withholds a bunch of results and does not give them to you, possibly to work faster; it seems nothing can be done about it except using DuckDuckGo.com itself.
So, obviously, in that case the API is not the way to go.
As for me, I see only one way left: retrieving the raw HTML from duckduckgo.com and parsing it using, e.g., html5lib (it is worth mentioning that their HTML is well structured).
It is also worth mentioning that parsing HTML pages is not the most reliable way to scrape data, because the HTML structure can change, while an API usually stays stable until changes are publicly announced.
Here's an example of how such parsing can be achieved with BeautifulSoup:
from BeautifulSoup import BeautifulSoup
import urllib
import re
site = urllib.urlopen('http://duckduckgo.com/?q=example')
data = site.read()
parsed = BeautifulSoup(data)
topics = parsed.findAll('div', {'id': 'zero_click_topics'})[0]
results = topics.findAll('div', {'class': re.compile('results_*')})
print results[0].text
This script prints:
u'Eixample, an inner suburb of Barcelona with distinctive architecture'
The problem with querying the main page directly is that it uses JavaScript to produce the actual results (not the related topics), so you can use the HTML-only version to get results. The HTML version has a different link:
http://duckduckgo.com/?q=example # JavaScript version
http://duckduckgo.com/html/?q=example # HTML-only version
Let's see what we can get:
site = urllib.urlopen('http://duckduckgo.com/html/?q=example')
data = site.read()
parsed = BeautifulSoup(data)
first_link = parsed.findAll('div', {'class': re.compile('links_main*')})[0].a['href']
The result stored in the first_link variable is a link to the first result (not a related search) that the search engine outputs:
http://www.iana.org/domains/example
To get all the links, you can iterate over the found tags (other data besides links can be retrieved in a similar way):
for i in parsed.findAll('div', {'class': re.compile('links_main*')}):
    print i.a['href']
http://www.iana.org/domains/example
https://twitter.com/example
https://www.facebook.com/leadingbyexample
http://www.trythisforexample.com/
http://www.myspace.com/leadingbyexample?_escaped_fragment_=
https://www.youtube.com/watch?v=CLXt3yh2g0s
https://en.wikipedia.org/wiki/Example_(musician)
http://www.merriam-webster.com/dictionary/example
...
Note that the HTML-only version contains only results; for related searches you must use the JavaScript version (without the html part in the URL).
After already getting an answer to my question, which I accepted and awarded a bounty for, I found a different solution, which I would like to add here for completeness. A big thank you to all those who helped me reach this solution. Even though this isn't the solution I asked for, it may help someone in the future.
Found after a long and hard conversation on this site and with some support mails: https://duck.co/topic/strange-problem-when-searching-intel-with-my-script
And here is the solution code (from an answer in the thread posted above):
>>> import duckduckgo
>>> print duckduckgo.query('! Example').redirect.url
http://www.iana.org/domains/example
Try:
for result in r.results:
    print result.text
If it suits your application, you might also try the related searches:
r = duckduckgo.query("example")
for i in r.related_searches:
    if i.text:
        print i.text
This yields:
Eixample, an inner suburb of Barcelona with distinctive architecture
Example (musician), a British musician
example.com, example.net, example.org, example.edu and .example, domain names reserved for use in documentation as examples
HMS Example (P165), an Archer-class patrol and training vessel of the British Royal Navy
The Example, a 1634 play by James Shirley
The Example (comics), a 2009 graphic novel by Tom Taylor and Colin Wilson
For Python 3 users, a transcription of @Rostyslav Dzinko's code:
import re
import urllib.parse
import urllib.request
import pandas as pd
from bs4 import BeautifulSoup

query = "your query"
site = urllib.request.urlopen("http://duckduckgo.com/html/?q=" + urllib.parse.quote(query))
data = site.read()
soup = BeautifulSoup(data, "html.parser")
my_list = soup.find("div", {"id": "links"}).find_all("div", {'class': re.compile('.*web-result*.')})[0:15]
(result__snippet, result_url) = ([] for i in range(2))
for i in my_list:
    try:
        result__snippet.append(i.find("a", {"class": "result__snippet"}).get_text().strip("\n").strip())
    except:
        result__snippet.append(None)
    try:
        result_url.append(i.find("a", {"class": "result__url"}).get_text().strip("\n").strip())
    except:
        result_url.append(None)
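Since pandas is imported above but never used, the two lists are presumably meant to end up in a DataFrame; a minimal follow-up under that assumption (column names are mine):
# Assemble the scraped snippets and URLs into a DataFrame.
df = pd.DataFrame({"snippet": result__snippet, "url": result_url})
print(df.head())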
I'd like to grab all the index words and their definitions from here. Is it possible to scrape web content with Python?
Firebug exploration shows that the following URL returns my desired content, including both the index and its definition, for 'a':
http://pali.hum.ku.dk/cgi-bin/cpd/pali?acti=xart&arid=14179&sphra=undefined
What modules are used? Is there any tutorial available?
I do not know how many words are indexed in the dictionary. I'm an absolute beginner in programming.
You should use urllib2 for getting the URL contents and BeautifulSoup for parsing the HTML/XML.
Example - retrieving all questions from the StackOverflow.com main page:
import urllib2
from BeautifulSoup import BeautifulSoup
page = urllib2.urlopen("http://stackoverflow.com")
soup = BeautifulSoup(page)
for incident in soup('h3'):
    print [i.decode('utf8') for i in incident.contents]
    print
This code sample was adapted from the BeautifulSoup documentation.
You can get data from the web using the built-in urllib or urllib2, but the parsing itself is the most important part. May I suggest the wonderful BeautifulSoup? It can deal with just about anything.
http://www.crummy.com/software/BeautifulSoup/
The documentation is built like a tutorial. Sorta:
http://www.crummy.com/software/BeautifulSoup/documentation.html
In your case, you probably need to use wildcards to see all entries in the dictionary. You can do something like this:
import urllib2

def getArticles(query, start_index, count):
    xml = urllib2.urlopen('http://pali.hum.ku.dk/cgi-bin/cpd/pali?' +
                          'acti=xsea&tsearch=%s&rfield=entr&recf=%d&recc=%d' %
                          (query, start_index, count))
    # TODO:
    # parse the xml here (using BeautifulSoup or an xml parser like Python's
    # own xml.etree). We should at least have the name and ID for each article:
    # article = (article_name, article_id)
    return article_names  # a list of (article_name, article_id) tuples parsed from the XML

def getArticleContent(article_id):
    xml = urllib2.urlopen('http://pali.hum.ku.dk/cgi-bin/cpd/pali?' +
                          'acti=xart&arid=%d&sphra=undefined' % article_id)
    # TODO: parse the xml
    return parsed_article
Now you can loop over things. For instance, to get all articles starting in 'ana', use the wildcard 'ana*', and loop until you get no results:
query = 'ana*'
article_dict = {}
i = 0
while True:
    new_articles = getArticles(query, i, 100)
    if len(new_articles) == 0:
        break
    i += 100
    for article_name, article_id in new_articles:
        article_dict[article_name] = getArticleContent(article_id)
Once done, you'll have a dictionary of the content of all articles, referenced by names. I omitted the parsing itself, but it's quite simple in this case, since everything is XML. You might not even need to use BeautifulSoup (even though it's still handy and easy to use for XML).
A word of warning though:
You should check the site's usage policy (and maybe robots.txt) before trying to heavily scrape articles. If you're just getting a few articles for yourself they may not care (the dictionary copyright owner, if it's not in the public domain, may care though), but if you're going to scrape the entire dictionary, this is going to be some heavy usage.
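If you want to check robots.txt programmatically before crawling, the standard library's robotparser module (Python 2 naming, to match the code above) can do it; a minimal sketch with an illustrative robots.txt URL:
import robotparser

rp = robotparser.RobotFileParser()
rp.set_url('http://pali.hum.ku.dk/robots.txt')  # illustrative; check the real site
rp.read()
# True if the rules allow a generic crawler to fetch this path
print(rp.can_fetch('*', 'http://pali.hum.ku.dk/cgi-bin/cpd/pali'))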