What would be the easiest way to use MediaWiki cookies in some Python CGI scripts (on the same domain, of course) for authentication (including MediaWiki's OpenID, especially)?
Access from Python to the MediaWiki database is possible, too.
A very easy way to use cookies with MediaWiki is as follows:
from cookielib import CookieJar
import urllib2
import urllib
import json
cj = CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
Now, requests can be made using opener. For example:
login_data = {
    'action': 'login',
    'lgname': 'Example',
    'lgpassword': 'Foobar',
    'format': 'json'
}
data = urllib.urlencode(login_data)
request = opener.open('http://en.wikipedia.org/w/api.php', data)
content = json.load(request)
# the token from the first response must be sent back as 'lgtoken'
login_data['lgtoken'] = content['login']['token']
data_2 = urllib.urlencode(login_data)
request_2 = opener.open('http://en.wikipedia.org/w/api.php', data_2)
content_2 = json.load(request_2)
print content_2['login']['result']
In the example above, if the CookieJar were not created, the login would not fully work; the API would keep asking for another token. That said, it is recommended to use an existing MediaWiki wrapper such as pywikipedia, mwhair, pytybot, simplemediawiki or wikitools; there are hundreds of other MediaWiki wrappers in Python.
You could connect to and modify the SQL database directly, without HTTP and cookies, using the MySQLdb module, but that is often the wrong way to do MediaWiki maintenance. Read-only access, though, should not be a problem.
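For read-only queries, a minimal sketch (assuming the MySQLdb module and hypothetical connection details; the table and column names follow the standard MediaWiki schema):
import MySQLdb
# hypothetical host, credentials and database name
db = MySQLdb.connect(host='localhost', user='wikiuser',
                     passwd='secret', db='wikidb')
cur = db.cursor()
# 'user' and 'user_name' are part of the standard MediaWiki schema
cur.execute("SELECT user_name FROM user LIMIT 5")
for (s_name,) in cur.fetchall():
    print s_name
db.close()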
The best way to access MediaWiki from a script is through its api.php.
The best-known Python-based MediaWiki API bot is Pywikibot (formerly Pywikipediabot).
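A minimal Pywikibot sketch (assuming Pywikibot is installed and a user-config.py has been generated; the page title is just an example):
import pywikibot
site = pywikibot.Site('en', 'wikipedia')  # language code, family
site.login()  # uses the credentials from user-config.py
page = pywikibot.Page(site, 'Wikipedia:Sandbox')
print(page.text[:200])  # first 200 characters of the wikitext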
The easiest way to save cookies in Python might be to use the http.cookiejar module.
Its documentation contains some simple examples.
Here is working example code extracted from my own MediaWiki bot:
#!/usr/bin/python3
import http.cookiejar
import urllib.request
import urllib.parse
import json
s_login_name = 'example'
s_login_password = 'secret'
s_api_url = 'http://en.wikipedia.org/w/api.php'
s_user_agent = 'StackOverflowExample/0.0.1.2012.09.26.1'
def api_request(d_post_params):
    # POST the given parameters to api.php and return (HTTP code, decoded JSON)
    d_post_params['format'] = 'json'
    r_post_params = urllib.parse.urlencode(d_post_params).encode('utf-8')
    o_url_request = urllib.request.Request(s_api_url, r_post_params)
    o_url_request.add_header('User-Agent', s_user_agent)
    o_http_response = o_url_opener.open(o_url_request)
    s_reply = o_http_response.read().decode('utf-8')
    d_reply = json.loads(s_reply)
    return (o_http_response.code, d_reply)
# build an opener that keeps session cookies across requests
o_cookie_jar = http.cookiejar.CookieJar()
o_http_cookie_processor = urllib.request.HTTPCookieProcessor(o_cookie_jar)
o_url_opener = urllib.request.build_opener(o_http_cookie_processor)
# first request: fetch a login token
d_post_params = {'action': 'login', 'lgname': s_login_name}
i_code, d_reply = api_request(d_post_params)
print('http code: %d' % (i_code))
print('api reply: %s' % (d_reply))
s_login_token = d_reply['login']['token']
# second request: send the password together with the token
d_post_params = {
    'action': 'login',
    'lgname': s_login_name,
    'lgpassword': s_login_password,
    'lgtoken': s_login_token
}
i_code, d_reply = api_request(d_post_params)
print('http code: %d' % (i_code))
print('api reply: %s' % (d_reply))
Classes, error handling and sub-functions have been removed for readability.
The cookies saved in o_url_opener can also be used for requests to index.php.
You could also log in via index.php (faking a browser request), but that would require parsing HTML output.
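For instance, a minimal sketch (hypothetical target page) that reuses the cookies in o_url_opener for an index.php request:
# reuse the cookie-carrying opener from above; the page title is hypothetical
s_index_url = 'http://en.wikipedia.org/w/index.php?title=Special:Watchlist'
o_url_request = urllib.request.Request(s_index_url)
o_url_request.add_header('User-Agent', s_user_agent)
s_html = o_url_opener.open(o_url_request).read().decode('utf-8')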
Variable name legend:
# Unicode string
s_* = 'a'
# Bytes (raw string)
r_* = b'a'
# Dictionary
d_* = {'a':1}
# Integer number
i_* = 4711
# Other objects
o_* = SomeClass()
I tried to call the AppDynamics API using Python requests but am facing an issue.
I wrote sample code using the Python client as follows...
from appd.request import AppDynamicsClient
c = AppDynamicsClient('URL','group','appd#123')
for app in c.get_applications():
    print app.id, app.name
It works fine.
But if I make a simple call like the following:
import requests
usr = <uid>
pwd = <pwd>
url = 'http://10.201.51.40:8090/controller/rest/applications?output=JSON'
response = requests.get(url, auth=(usr, pwd))
print 'response', response
I get the following response:
response <Response [401]>
Am I doing anything wrong here?
A couple of things:
I think the general URL format for AppDynamics applications is (notice the '#'):
url = 'http://10.201.51.40:8090/controller/#/rest/applications?output=JSON'
Also, I think the requests.get method needs an additional parameter for the account. For instance, my auth format looks like:
auth = (_username + '@' + _account, _password)
I am able to get a correct response code back with this config. Let me know if this works for you.
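Putting both together, a sketch of the requests call with the account appended (the account name is a placeholder):
import requests
# '<account>' is a placeholder for your AppDynamics account name
auth = (usr + '@' + '<account>', pwd)
url = 'http://10.201.51.40:8090/controller/rest/applications?output=JSON'
response = requests.get(url, auth=auth)
print 'response', response.status_code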
You could also use native Python code for more control. For example:
import urllib2
import base64
# if you are behind a proxy; otherwise comment out these three lines
proxy = urllib2.ProxyHandler({'https': 'proxy:port'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
username = "YOUR APPD REST API USER NAME"
password = "YOUR APPD REST API PASSWORD"
#Enter your request
request = urllib2.Request("https://yourappdendpoint/controller/rest/applications/141/events?time-range-type=BEFORE_NOW&duration-in-mins=5&event-types=ERROR,APPLICATION_ERROR,DIAGNOSTIC_SESSION&severities=ERROR")
base64string = base64.encodestring('%s:%s' % (username, password)).replace('\n', '')
request.add_header("Authorization", "Basic %s" % base64string)
response = urllib2.urlopen(request)
html = response.read()
This will get you the response, and you can parse the XML as needed.
If you prefer JSON, simply specify it in the request.
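For example, appending the same output parameter used elsewhere in this thread to the query string:
# identical request, but asking the controller for JSON instead of XML
request = urllib2.Request("https://yourappdendpoint/controller/rest/applications/141/events?time-range-type=BEFORE_NOW&duration-in-mins=5&event-types=ERROR,APPLICATION_ERROR,DIAGNOSTIC_SESSION&severities=ERROR&output=JSON")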
We have Jira authenticated to Zapier, but Zapier does not have tagging functionality, so I hacked up the simple Python module below. However, it doesn't seem that I can reuse the already-authenticated Jira account. Is there a way to hide the password somehow so it's not just clear text?
# jira_label and jira_url come from upstream zaps
# declaring output hash with defaults set
output = {'jira_label': jira_label}
### Python code
import requests
user = 'my_personal_user'
dpass = 'xxx'  # <--- gotta do something about it
url1 = jira_url
pdata = '{"fields": {"labels": ["' + jira_label + '"]}}'
header1 = {'Content-Type': 'application/json'}
r = requests.put(url1, auth=(user, dpass), data=pdata, headers=header1)
You can use Base64 encoding:
>>> import base64
>>> print base64.b64encode("mypassword")
bXlwYXNzd29yZA==
>>> print base64.b64decode("bXlwYXNzd29yZA==")
mypassword
With this your request will look like this:
r = requests.put(url1, auth=(user, base64.b64decode("bXlwYXNzd29yZA==")), data=pdata, headers=header1)
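Keep in mind that Base64 only obscures the password rather than encrypting it. A sketch that at least keeps it out of the source, reading the encoded value from a hypothetical environment variable JIRA_PASS_B64:
import os
import base64
# JIRA_PASS_B64 is a hypothetical environment variable set outside the script
dpass = base64.b64decode(os.environ['JIRA_PASS_B64'])
r = requests.put(url1, auth=(user, dpass), data=pdata, headers=header1)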
I want to pull a list of users in the jira-users group. As I understand it, this can be done with Python using restkit.
Does anyone have any examples or links that give an example of this?
Thanks.
If somebody still needs a solution, you can install the JIRA REST API lib https://pypi.python.org/pypi/jira/.
Just a simple example for your question:
from jira.client import JIRA
jira_server = "http://yourjiraserver.com"
jira_user = "login"
jira_password = "pass"
jira_options = {'server': jira_server}
jira = JIRA(options=jira_options, basic_auth=(jira_user, jira_password))
group = jira.group_members("jira-users")
for user in group:
    print user
Jira has a REST API for external queries; it uses HTTP for requests and responses, and the response content is formed as JSON. So you can use Python's urllib and json packages to run requests and then parse the results.
This is Atlassian's document for the Jira REST API: http://docs.atlassian.com/jira/REST/latest/ and for example check the users API: http://docs.atlassian.com/jira/REST/latest/#id120322
Note that you should authenticate before sending your request; you can find the necessary information in the document.
import urllib2, base64
import ssl
import json
import getpass
UserName = raw_input("Enter UserName: ")
pswd = getpass.getpass('Password:')
# Total number of users or licenses used in JIRA; the REST API returns members in pages of 50
ListStartAt = [0, 50, 100, 150, 200, 250, 300]
counter = 0
for i in ListStartAt:
    request = urllib2.Request("https://jiraserver.com/rest/api/2/group/member?groupname=GROUPNAME&startAt=%s" % i)
    base64string = base64.encodestring('%s:%s' % (UserName, pswd)).replace('\n', '')
    request.add_header("Authorization", "Basic %s" % base64string)
    gcontext = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
    result = urllib2.urlopen(request, context=gcontext)
    JsonGroupdata = result.read()
    jsonToPython = json.loads(JsonGroupdata)
    try:
        # a page holds at most 50 members; use j so the outer loop variable is not shadowed
        for j in range(0, 50):
            print jsonToPython["values"][j]["key"]
            counter = counter + 1
    except Exception:
        pass
print counter
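The same call is a little shorter with the requests library (a sketch, keeping the hypothetical server and group name from above):
import requests
# one page of group members, using HTTP basic auth
resp = requests.get("https://jiraserver.com/rest/api/2/group/member",
                    params={"groupname": "GROUPNAME", "startAt": 0},
                    auth=(UserName, pswd))
for member in resp.json().get("values", []):
    print member["key"]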
I'm trying to send a SOAP request using SOAPpy as the client. I've found some documentation stating how to add a cookie by extending SOAPpy.HTTPTransport, but I can't seem to get it to work.
I tried to use the example here,
but the server I'm trying to send the request to started throwing 415 errors, so I'm trying to accomplish this without using ClientCookie, or by figuring out why the server throws 415s when I do use it. I suspect it might be because ClientCookie uses urllib2 and HTTP/1.1, whereas SOAPpy uses urllib and HTTP/1.0.
Does someone know how to make ClientCookie use HTTP/1.0, if that is even the problem, or a way to add a cookie to the SOAPpy headers without using ClientCookie? I tried this code with other services; it only seems to throw errors when sending requests to Microsoft servers.
I'm still finding my footing with Python, so it could just be me doing something dumb.
from SOAPpy import WSDL, HTTPTransport, Config, SOAPAddress, Types
import ClientCookie
Config.cookieJar = ClientCookie.MozillaCookieJar()
class CookieTransport(HTTPTransport):
    def call(self, addr, data, namespace, soapaction=None, encoding=None,
             http_proxy=None, config=Config):
        if not isinstance(addr, SOAPAddress):
            addr = SOAPAddress(addr, config)
        cookie_cutter = ClientCookie.HTTPCookieProcessor(config.cookieJar)
        hh = ClientCookie.HTTPHandler()
        hh.set_http_debuglevel(1)
        # TODO proxy support
        opener = ClientCookie.build_opener(cookie_cutter, hh)
        t = 'text/xml'
        if encoding != None:
            t += '; charset="%s"' % encoding
        opener.addheaders = [("Content-Type", t),
                             ("Cookie", "Username=foobar"),  # ClientCookie should handle
                             ("SOAPAction", "%s" % (soapaction))]
        response = opener.open(addr.proto + "://" + addr.host + addr.path, data)
        data = response.read()
        # get the new namespace
        if namespace is None:
            new_ns = None
        else:
            new_ns = self.getNS(namespace, data)
        print '\n' * 4, '-' * 50
        # return response payload
        return data, new_ns
url = 'http://www.authorstream.com/Services/Test.asmx?WSDL'
proxy = WSDL.Proxy(url, transport=CookieTransport)
print proxy.GetList()
Error 415 is caused by an incorrect Content-Type header.
Install HttpFox for Firefox, or another tool (Wireshark, Charles or Fiddler), to track which headers you are sending. Then try Content-Type: application/xml.
...
t = 'application/xml'
if encoding != None:
    t += '; charset="%s"' % encoding
...
If you are trying to send a file to the web server, use Content-Type: application/x-www-form-urlencoded.
See also: "A nice hack to use cookies with SOAPpy calls" and "Using Cookies with SOAPpy calls".
What's the best way to specify a proxy with username and password for an http connection in python?
This works for me:
import urllib2
proxy = urllib2.ProxyHandler({'http': 'http://username:password@proxyurl:proxyport'})
auth = urllib2.HTTPBasicAuthHandler()
opener = urllib2.build_opener(proxy, auth, urllib2.HTTPHandler)
urllib2.install_opener(opener)
conn = urllib2.urlopen('http://python.org')
return_str = conn.read()
Use this:
import requests
proxies = {"http": "http://username:password@proxy_ip:proxy_port"}
r = requests.get("http://www.example.com/", proxies=proxies)
print(r.content)
I think it's much simpler than using urllib. I don't understand why people love using urllib so much.
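Note that the proxies dict above only covers http:// URLs; if you also fetch https:// URLs, add an "https" entry as well (same placeholder credentials):
proxies = {
    "http": "http://username:password@proxy_ip:proxy_port",
    "https": "http://username:password@proxy_ip:proxy_port",
}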
Set an environment variable named http_proxy like this: http://username:password@proxy_url:port
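urllib2 picks that variable up automatically; a minimal sketch (placeholder credentials) setting it from within Python:
import os
import urllib2
# placeholder credentials; normally you would export this in the shell instead
os.environ['http_proxy'] = 'http://username:password@proxy_url:port'
print urllib2.urlopen('http://www.example.com/').read()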
The best way of going through a proxy that requires authentication is using urllib2 to build a custom url opener, then using that to make all the requests you want to go through the proxy. Note in particular, you probably don't want to embed the proxy password in the url or the python source code (unless it's just a quick hack).
import urllib2
def get_proxy_opener(proxyurl, proxyuser, proxypass, proxyscheme="http"):
    password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, proxyurl, proxyuser, proxypass)
    proxy_handler = urllib2.ProxyHandler({proxyscheme: proxyurl})
    proxy_auth_handler = urllib2.ProxyBasicAuthHandler(password_mgr)
    return urllib2.build_opener(proxy_handler, proxy_auth_handler)
if __name__ == "__main__":
    import sys
    if len(sys.argv) > 4:
        url_opener = get_proxy_opener(*sys.argv[1:4])
        for url in sys.argv[4:]:
            print url_opener.open(url).headers
    else:
        print "Usage:", sys.argv[0], "proxy user pass fetchurls..."
In a more complex program, you can separate these components out as appropriate (for instance, using only one password manager for the lifetime of the application). The Python documentation has more examples of how to do complex things with urllib2 that you might also find useful.
Or if you want to install it, so that it is always used with urllib2.urlopen (so you don't need to keep a reference to the opener around):
import urllib2
url = 'www.proxyurl.com'
username = 'user'
password = 'pass'
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
# None, with the "WithDefaultRealm" password manager means
# that the user/pass will be used for any realm (where
# there isn't a more specific match).
password_mgr.add_password(None, url, username, password)
auth_handler = urllib2.HTTPBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
print urllib2.urlopen("http://www.example.com/folder/page.html").read()
Here is a method using urllib:
import urllib.request
# set up authentication info
authinfo = urllib.request.HTTPBasicAuthHandler()
proxy_support = urllib.request.ProxyHandler({"http" : "http://ahad-haam:3128"})
# build a new opener that adds authentication and caching FTP handlers
opener = urllib.request.build_opener(proxy_support, authinfo,
                                     urllib.request.CacheFTPHandler)
# install it
urllib.request.install_opener(opener)
f = urllib.request.urlopen('http://www.python.org/')
"""