HTTPError using mwclient with local MediaWiki - python

I'm trying to create a page using mwclient with a local MediaWiki.
With wikipedia.org everything works fine.
With my local MediaWiki I enter these commands:
import mwclient
site = mwclient.Site("192.168.1.143")
The result is the following error:
File "/Library/Python/2.7/site-packages/mwclient/http.py", line 152, in request
raise errors.HTTPStatusError, (res.status, res)
mwclient.errors.HTTPStatusError: (404, <httplib.HTTPResponse instance at 0x104368488>)
If I type the IP or the hostname into my browser, it works; the same goes for the ping command.
I also tried urllib with:
a=urllib.urlopen('http://www.google.com/asdfsf')
a.getcode()
and got a 200 OK code.
What's the problem here? Any ideas?

The problem is that mwclient expects the api.php (which is what it uses to access the wiki) to be located at /w/, which is the location used for Wikimedia wikis, instead of directly under /, which is the default.
Per the documentation for Site, you need to use the path parameter for that:
site = mwclient.Site('192.168.1.143', path='/')
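Once the path is set, creating a page works the same way as against wikipedia.org. A minimal sketch, assuming a local account with edit rights; the credentials and page title are placeholders, and the exact Page method names can vary between mwclient versions:
import mwclient

site = mwclient.Site('192.168.1.143', path='/')  # api.php lives directly under /
site.login('username', 'password')               # only needed if the wiki requires login to edit

page = site.pages['Sandbox']
page.save('Hello from mwclient!', summary='test edit')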

Related

When saving FogBugz attachment, server always returns empty response (with some headers)

I'm trying to get a case attachment and save it to a local folder. I have a problem using the attachment URL to download it: each time the server returns an empty result with status code 200.
This is a sample URL I use (host and token changed):
https://example.fogbugz.com/default.asp?pg=pgDownload&pgType=pgFile&ixBugEvent=385319&ixAttachment=56220&sFileName=Log.7z&sTicket=&sToken=1234567890627ama72kaors2grlgsk
I have tried using token instead of sToken, but there is no difference. If I copy the above URL into Chrome it won't work either, but if I log in to FogBugz (Manuscript) and then try the URL again, it works. So I suppose there are some security issues here.
By the way, I use the Python FogBugz API for this and save the file using urllib: urllib.request.urlretrieve(url, "fb/" + file_name)
The solution I have found is to use cookies from the web browser where I have previously logged in to the FogBugz account I use. So it looks like a security issue.
For that I used pycookiecheat (for Windows, see my fork: https://github.com/luskan/pycookiecheat). For the full code see here: https://gist.github.com/luskan/66ffb8f82afb96d29d3f56a730340adc
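In outline, the approach looks like this; a minimal sketch, assuming you are already logged in to FogBugz in Chrome, and using requests instead of urlretrieve (the URL is the sample one from the question):
import requests
from pycookiecheat import chrome_cookies

url = ('https://example.fogbugz.com/default.asp?pg=pgDownload&pgType=pgFile'
       '&ixBugEvent=385319&ixAttachment=56220&sFileName=Log.7z'
       '&sTicket=&sToken=1234567890627ama72kaors2grlgsk')

# Reuse the browser session's cookies so the download request is authenticated.
cookies = chrome_cookies(url)
resp = requests.get(url, cookies=cookies)
with open('fb/Log.7z', 'wb') as f:
    f.write(resp.content)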

Unable to authenticate CKAN 2.7.2 using OAuth2 over HTTP

I am using CKAN 2.7.2.
I have added the following configuration to my CKAN development.ini file:
ckan.oauth2.authorization_endpoint = https://account.lab.fiware.org/oauth2/authorize
ckan.oauth2.token_endpoint = https://account.lab.fiware.org/oauth2/token
ckan.oauth2.profile_api_url = https://account.lab.fiware.org/user
ckan.oauth2.client_id = xyz
ckan.oauth2.client_secret = xyz
ckan.oauth2.profile_api_user_field = abc
ckan.oauth2.profile_api_mail_field = abc#gmail.com
I have also exported the following while running CKAN using paster serve:
export OAUTHLIB_INSECURE_TRANSPORT=True
I have also registered an application in FIWARE Lab with a callback URL pointing to where the CKAN instance is running (i.e. a private IP of the 172.30.66.XX kind, on port 5000).
When I click on Login, I get redirected to the FIWARE Lab login page, and after logging in I get the following error:
{"state": "eyJjYW1lX2Zyb20iOiAiL2Rhc2hib2FyZCJ9", "error": "mismatching_redirect_uri"} (HTTP 400)
If anyone could help me with this, it would be greatly appreciated.
That error means that the redirect URL attached by CKAN is not the same as the callback URL you registered for the application in the IdM.
Ensure that the callback URL you have included in the IDM is:
http://YOUR_CKAN_INSTANCE/oauth2/callback
The URL must match exactly (so no trailing slash).
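For example, with the setup described in the question, the application registered in the IdM would need to list exactly http://172.30.66.XX:5000/oauth2/callback as its callback URL (assuming the instance is reached over plain HTTP on port 5000, as implied by the OAUTHLIB_INSECURE_TRANSPORT setting).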

Change URL to another URL using mitmproxy

I am trying to redirect one page to another using mitmproxy and Python. I can run my inline script together with mitmproxy without issues, but I am stuck when it comes to changing the URL to another URL; for example, if I went to google.com it would redirect to stackoverflow.com.
def response(context, flow):
    print("DEBUG")
    if flow.request.url.startswith("http://google.com/"):
        print("It does contain it")
        flow.request.url = "http://stackoverflow/"
This should in theory work. I see http://google.com/ in the GUI of mitmproxy (as GET) but the print("It does contain it") never gets fired.
When I try to just put flow.request.url = "http://stackoverflow.com" right under the print("DEBUG"), it won't work either.
What am I doing wrong? I have also tried if "google.com" in flow.request.url to check if the URL contains google.com but that won't work either.
Thanks
The following mitmproxy script will:
Redirect requests from mydomain.com to newsite.mydomain.com
Change the request path (expected to be something like /getjson?...) to a new one (/getxml)
Change the destination host scheme
Change the destination server port
Overwrite the request header Host to pretend to be the original host
import mitmproxy
from mitmproxy.models import HTTPResponse
from netlib.http import Headers

def request(flow):
    if flow.request.pretty_host.endswith("mydomain.com"):
        mitmproxy.ctx.log(flow.request.path)
        # Extract the method name from the path, e.g. /a/b/getjson?x=1 -> getjson
        method = flow.request.path.split('/')[3].split('?')[0]
        # Silently reroute the request to the new host, port and scheme
        flow.request.host = "newsite.mydomain.com"
        flow.request.port = 8181
        flow.request.scheme = 'http'
        if method == 'getjson':
            flow.request.path = flow.request.path.replace(method, "getxml")
        # Overwrite the Host header sent to the upstream server
        flow.request.headers["Host"] = "newsite.mydomain.com"
You can set the .url attribute, which will update the underlying attributes. Looking at your code, your problem is that you change the URL in the response hook, after the request has already been made. You need to change the URL in the request hook, so that the change is applied before the resource is requested from the upstream server.
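A minimal sketch of the script from the question moved to the request hook, keeping the old inline-script signature (context, flow) that the question uses:
def request(context, flow):
    if flow.request.url.startswith("http://google.com/"):
        # The rewrite now happens before the request is forwarded upstream.
        flow.request.url = "http://stackoverflow.com/"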
Setting the url attribute will not help you, as it is merely constructed from underlying data. [EDIT: I was wrong, see Maximilian’s answer. The rest of my answer should still work, though.]
Depending on what exactly you want to accomplish, there are two options.
(1) You can send an actual HTTP redirection response to the client. Assuming that the client understands HTTP redirections, it will submit a new request to the URL you give it.
from mitmproxy.models import HTTPResponse
from netlib.http import Headers

def request(context, flow):
    if flow.request.host == 'google.com':
        flow.reply(HTTPResponse('HTTP/1.1', 302, 'Found',
                                Headers(Location='http://stackoverflow.com/',
                                        Content_Length='0'),
                                b''))
(2) You can silently route the same request to a different host. The client will not see this, it will assume that it’s still talking to google.com.
def request(context, flow):
    if flow.request.url == 'http://google.com/accounts/':
        flow.request.host = 'stackoverflow.com'
        flow.request.path = '/users/'
These snippets were adapted from an example found in mitmproxy’s own GitHub repo. There are many more examples there.
For some reason, I can’t seem to make these snippets work for Firefox when used with TLS (https://), but maybe you don’t need that.

ca_certs_locater/__init__.py import error

I was trying to get authentication for my API; however, it always shows the import error below. I run:
public_key = raw_input('...')
secret_key = raw_input('...')
client = upwork.Client(public_key, secret_key)
It is supposed to display a URL; however, it shows:
" File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/upwork/client.py", line 118, in __init__
ca_certs=ca_certs_locater.get(),
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ca_certs_locater/__init__.py", line 36, in get
raise ImportError()"
I don't know what I should do about ca_certs_locater.
Before instantiating the upwork client, modify the module's LINUX_PATH constant.
import upwork
# Set the certificate path within the module
upwork.ca_certs_locater.LINUX_PATH = '/path/to/my/cert.crt'
...
client = upwork.Client(public_key, secret_key, **credentials)
...
I had this same problem. The solution is indeed, as the comment suggests, to do what "Python - SSL Issue with Oauth2" says, in combination with following the "SSL Certificate Note" on https://pypi.python.org/pypi/python-upwork. I did the following:
Read https://github.com/upwork/python-upwork/issues/9
Downloaded cacert.pem from Python - SSL Issue with Oauth2
Set the HTTPLIB_CA_CERTS_PATH environment variable to /path/to/cacert.pem
Then, the import error went away. My use case was the upwork API and yours may be different but the solution is the same either way.
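If you prefer to set the variable from Python rather than in the shell, here is a minimal sketch, assuming ca_certs_locater picks the variable up when upwork is imported (the path is a placeholder for wherever you saved cacert.pem):
import os

os.environ['HTTPLIB_CA_CERTS_PATH'] = '/path/to/cacert.pem'

import upwork  # imported after the variable is set so the locater can see it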

pysimplesoap web service return connection refused

I've created some web services using pysimplesoap, following this documentation:
https://code.google.com/p/pysimplesoap/wiki/SoapServer
When I tested it, I called it like this:
from SOAPpy import SOAPProxy
from SOAPpy import Types
namespace = "http://localhost:8008"
url = "http://localhost:8008"
proxy = SOAPProxy(url, namespace)
response = proxy.dummy(times=5, name="test")
print response
And it worked for all of my web services, but when I try to call them using a library which needs the WSDL to be specified, it returns "Could not connect to host".
To solve my problem, I used the ".wsdl()" method to generate the correct WSDL and saved it to a file. The WSDL generated by default wasn't correct: it was missing variable types and the correct server address.
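A minimal sketch of dumping the generated WSDL to a file, assuming a dispatcher built as in the pysimplesoap SoapServer wiki example (the dispatcher arguments and the filename are illustrative):
from pysimplesoap.server import SoapDispatcher

dispatcher = SoapDispatcher(
    'my_dispatcher',
    location="http://localhost:8008/",
    action="http://localhost:8008/",
    namespace="http://localhost:8008/",
    trace=True)

# Dump the generated WSDL so it can be corrected (types, server address)
# and handed to WSDL-based clients instead of the auto-generated one.
with open('service.wsdl', 'w') as f:
    f.write(dispatcher.wsdl())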
The server name localhost is only meaningful on your own computer; from outside, other computers won't be able to resolve it.
1) find out your external IP, with http://www.whatismyip.com/ or another service. Note that IPs change over time.
2) plug the IP in to http://www.soapclient.com/soaptest.html
If your local service answers requests to that IP as well as to localhost, you're done!
