scikit-learn intersphinx link / inventory object? - python

Has anyone successfully linked to scikit-learn with intersphinx? It's a Sphinx project, and it looks like it's hosted through GitHub Pages:
https://github.com/scikit-learn/scikit-learn.github.io
So far I haven't been able to generate working links in my Sphinx project that land on scikit-learn pages. I'm currently using
'sklearn': ('http://scikit-learn.org/stable' None)
in my intersphinx mapping. Any help would be great, thanks.

It seems from this issue linking this PR that you should use:
'sklearn': ('http://scikit-learn.org/stable',
(None, './_intersphinx/sklearn-objects.inv'))
Note: not tested, but I am interested in the result, so please let me know if it works.
EDIT:
It seems that sklearn-objects.inv is available from the scikit-image repo for local intersphinx settings.
That's probably not the best solution, but maybe it can help as a start.
I assume you have already tried linking directly to scikit-learn's documentation page, or maybe to the project's API page (but I ask anyway, just in case...).
I am not sure which page would be the appropriate one based on what is indicated in the Sphinx docs:
A dictionary mapping unique identifiers to a tuple (target,
inventory). Each target is the base URI of a foreign Sphinx
documentation set and can be a local path or an HTTP URI. The
inventory indicates where the inventory file can be found: it can be
None (at the same location as the base URI) or another local or HTTP
URI.
Otherwise, there is also sphobjinv, which could help to build a custom intersphinx objects.inv, but I haven't had time to test it yet.

You have a missing comma in your intersphinx mapping:
'sklearn': ('http://scikit-learn.org/stable' None)
should be:
'sklearn': ('http://scikit-learn.org/stable', None),
I use trailing commas for my dict entries, but they are not required.
With that correction, I was able to use the entry that @mzjn provided in their comment to generate a link to scikit-learn's docs.
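Putting the pieces together, here is a minimal conf.py sketch. The commented-out local-inventory fallback comes from the earlier answer and is untested; treat it as an assumption.

```python
# conf.py (sketch, not the asker's actual config)
extensions = ["sphinx.ext.intersphinx"]

intersphinx_mapping = {
    # None means: fetch objects.inv from the same location as the base URI
    "sklearn": ("http://scikit-learn.org/stable", None),
    # Untested fallback using a locally downloaded inventory, per the answer above:
    # "sklearn": ("http://scikit-learn.org/stable",
    #             (None, "./_intersphinx/sklearn-objects.inv")),
}
```

With this in place, a role such as :class:`sklearn.linear_model.LogisticRegression` should resolve to the hosted scikit-learn docs.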

Related

Finding all views/namespace urls in a Zope3 system?

I have a Zope3 system with some custom code for a paperless office system. The developers have since folded up shop and I need to make some changes to what I believe are the contents of some ExtJS.ComboBox pulldowns.
Once upon a time the developers told me to hit a certain URL on the server to refresh the combo box data, but I've got no documentation what the URL was anywhere.
Is there a way to query the ZODB or Zope to spit out a list of known URLs in the system? I want to say it was localhost:8080/++etc++site/@@something ... it was a very odd-looking URL, from my recollection.
The usual URLs such as http://localhost:80(80)/Control_Panel or /@@manage don't get me to what some of the ZMI documentation suggests I should get. I am unsure as to whether this was intentional or not.
There is an add-on called collective.z3cinspector.
You might use version 1.1 for Python 2.7 / Plone < 5.2, or version 1.2.1 for Python 3 compatibility.
Navigate to http://localhost/@@inspect to inspect all registered adapters and utilities (singleton adapters).
In the case of /++etc++site/@@something, you can search for an adapter named etc. You will see something similar to the image.

Python code to connect to twitter search API

I'm new to Python and am very confused about how to begin this assignment:
Write Python code to connect to Twitter search API at:
https://api.twitter.com/1.1/search/tweets.json
with at least the following parameters:
q for a search topic of your interest;
count to 100 for 100 records
to retrieve twitter data and assign the data to a variable.
I already created the app to access Twitter's API. Thanks.
I had this same issue. According to Twitter, you must register a developer account to use even the basic search features on public tweets, and there is no way around it. I followed the guide for the complete setup here and can confirm it works.
Edit: the article is for PHP, but porting it to Python shouldn't be very difficult; all the methods you need are outlined in the article.
Just as a hint, you might want to use tweepy for this. It has very good documentation, and it's much easier than trying to reinvent the wheel.
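If you would rather avoid a third-party library, here is a standard-library sketch of the same request. It assumes app-only authentication with a bearer token, and the helper names are made up for illustration; this is not the tweepy API.

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"

def build_search_request(bearer_token, topic, count=100):
    """Build the GET request: q is the search topic, count caps records at 100."""
    params = urlencode({"q": topic, "count": count})
    return Request(
        f"{SEARCH_URL}?{params}",
        headers={"Authorization": f"Bearer {bearer_token}"},
    )

def search_tweets(bearer_token, topic, count=100):
    # Performs the request and parses the JSON response body
    with urlopen(build_search_request(bearer_token, topic, count)) as resp:
        return json.load(resp)

# Usage (requires a valid token and network access):
# data = search_tweets("YOUR_BEARER_TOKEN", "python", count=100)
```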

Splunk Custom Module

I am new to Splunk, as well as to Python, and have started working on a Splunk custom module, taking my reference from the custom module documentation on the Splunk site. When I create the same file structure using Visual Studio 2017 -> Python 3, it gives me these errors:
import controllers.module not found
import splunk not found
import splunk.search not found
import splunk.util not found
import splunk.entity not found
import json from splunk.appserver.mrsparkle.lib not found
import lib.util as util not found
Note: I have already installed the Splunk SDK using "pip install splunk-sdk", but I still can't find any of these packages in the project.
Can anyone please guide me on how to resolve the above custom module package errors?
If there are any ready-made samples available, please suggest a link.
Thanks in advance.
You may be wasting your time. Splunk Modules have been deprecated for over a year and may become unsupported at any time.
The packages you are looking for should be part of Splunk. Have you installed it?
I believe Splunk does not support Python 3. Try 2.7.
Have you tried taking a look at the simple_xml_examples app for its sourceviewer implementation? I think it is a good approach for adding custom HTML + CSS + third-party JS.
Furthermore, it can be implemented in a custom dashboard.js.
edit
Download the Simple XML Examples app; in every example dashboard you can see that it has a custom component called sourceviewer, which is injected after load time on every dashboard page.
For that, a custom dashboard.js is created and placed in APP/appserver/static, which overrides the original.
You can then insert the new components into every dashboard page without affecting the dashboard's XML or its functionality, such as the PDF generator.
I used this approach to implement a custom nav as a sidebar.

How to store wiki sites (vcs)

As a personal project, I am trying to write a wiki with the help of Django. I'm a beginner when it comes to web development. I am at the (early) point where I need to decide how to store the wiki sites. I have three approaches in mind and would like to hear your suggestions.
Flat files
I considered a flat-file approach with a version control system like git or mercurial. Firstly, I would have some example wikis to look at, like http://hatta.sheep.art.pl/. Secondly, the VCS would probably deal with editing conflicts and keeping the edit history, so I would not have to reinvent the wheel. And thirdly, I could probably easily clone the wiki repository, so I (or, for that matter, others) could have an offline copy of the wiki.
On the other hand, as far as I know, I cannot use Django models with flat files. Then, if I wanted to add fields to a wiki site, like a category, I would need to somehow keep a reference to that flat file in order to associate the fields in the database with the file. Besides, I don't know if it is a good idea to have all the wiki sites in one repository; I imagine it is more natural to have something like a repository per wiki site or file. Last but not least, I'm not sure, but I think using flat files would limit my deployment options, because some web hosts don't allow creating files (I'm thinking, for example, of Google App Engine).
Storing in a database
By storing the wiki sites in the database, I can utilize Django models and associate arbitrary fields with a wiki site. I would probably also have an easier life deploying the wiki. But I would not get VCS features like history and conflict resolution per se. I searched for Django extensions to help me and found django-reversion. However, I do not fully understand whether it fits my needs: does it track model changes, for example if I change the Django model file, or does it track the content of the model instances (which would fit my needs)? Plus, I do not see whether django-reversion would help me with edit conflicts.
Storing a vcs repository in a database field
This would be my ideal solution. It would combine the advantages of both previous approaches without the disadvantages. That is, I would have VCS features, but I would save the wiki sites in a database. The problem is: I have no idea how feasible that is. I just imagine saving a wiki site/source together with a git/mercurial repository in a database field. Yet I somehow doubt database fields work like that.
So, I'm open for any other approaches but this is what I came up with. Also, if you're interested, you can find the crappy early test I'm working on here http://github.com/eugenkiss/instantwiki-test
In none of your choices have you considered whether you wish to be able to search your wiki. If this is a consideration, having the 'live' copy of each page in a database with full text search would be hugely beneficial. For this reason, I would personally go with storing the pages in a database every time - otherwise you'll have to create your own index somewhere.
As far as version logging goes, you only need to store the live copy in an indexable format. You could automatically create a history item within your 'page' model when a changed page is written back to the database. You can cut down on the storage overhead of earlier page revisions by compressing the data, should this become necessary.
If you're expecting a massive amount of change logging, you might want to read this answer here:
How does one store history of edits effectively?
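The live-copy-plus-compressed-history scheme described above could be sketched in plain Python like this. PageHistory is a made-up name for illustration; in Django this would be a model for the live page plus a related revisions table.

```python
import zlib

class PageHistory:
    """Keep the live copy searchable as plain text; compress earlier revisions."""

    def __init__(self, content=""):
        self.live = content
        self._revisions = []  # compressed prior revisions, oldest first

    def save(self, new_content):
        # Push the outgoing live copy into compressed history, then replace it
        self._revisions.append(zlib.compress(self.live.encode("utf-8")))
        self.live = new_content

    def revision(self, index):
        # Decompress an earlier revision on demand
        return zlib.decompress(self._revisions[index]).decode("utf-8")
```

Only `live` needs to be full-text indexed; the compressed blobs are touched only when someone views the history.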
Creating a wiki is fun and rewarding, but there are a lot of prebuilt wiki software packages already. I suggest Wikipedia's List of wiki software. In particular, MoinMoin and Trac are good. Finally, John Sutherland has made a wiki using Django.

What's the search engine used in the new Python documentation?

Is it built into Sphinx?
It looks like Sphinx contains its own search engine for the English language. See http://sphinx.pocoo.org/_static/searchtools.js and searchindex.js/.json (the Sphinx docs index is 36 KB, the Python docs index 857 KB, and the Grok docs index 37 KB).
The index is precomputed when the docs are generated.
When you search, a static page is loaded, and then _static/searchtools.js extracts the search terms from the query string, normalizes them (case, stemming, etc.) and looks them up in searchindex.js once it has loaded.
The first search attempt takes a rather long time; consecutive ones are much faster, as the index is cached in your browser.
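The scheme described above (precompute an inverted index at build time, intersect term sets at query time) can be sketched in a few lines of Python. This toy version skips the stemming and relevance scoring that Sphinx's searchtools.js actually performs.

```python
def build_index(pages):
    """Build an inverted index: term -> set of page names (done at build time)."""
    index = {}
    for name, text in pages.items():
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(name)
    return index

def search(index, query):
    """Answer a query by intersecting the page sets of each term."""
    results = [index.get(term.lower(), set()) for term in query.split()]
    return set.intersection(*results) if results else set()

index = build_index({
    "intro": "python docs search tutorial",
    "api": "python reference module index",
})
```

In Sphinx the equivalent of `index` is serialized into searchindex.js, so the browser only pays the lookup cost, not the indexing cost.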
The Sphinx search engine is built in JavaScript. It uses jQuery and a (sometimes very big) JavaScript file containing the search terms.
Yes. Sphinx itself is not built into Python, however; the search widget is part of Sphinx. What did you mean by "built-in"?
On the page itself: http://docs.python.org/about.html
http://sphinx.pocoo.org/
