Python MongoLab REST API

I am trying to access the MongoLab REST API through Python. Is the correct way to do this via Python's urllib2? I have tried the following:
import urllib2
p = urllib2.urlopen("https://api.mongolab.com/api/1/databases/mydb/collections/mycollection?apiKey=XXXXXXXXXXXXXXXX")
But this gives me an error:
urllib2.URLError: <urlopen error unknown url type: https>
What is the correct way of doing this? After connecting, how do I go on to POST a document to my collection? If someone could post up a code example, I would be very grateful. Thanks all for the help!
EDIT:
I've recompiled Python with SSL support. How do I POST a document to a collection using the MongoLab REST API? Here is the code I have at the moment:
import urllib
import urllib2
url = "https://api.mongolab.com/api/1/databases/mydb/collections/mycollection?apiKey=XXXXXXXXXXXXXXXX"
data = {"x" : "1"}
request = urllib2.Request(url, data)
p = urllib2.urlopen(request)
Now, when I run this, I get the error
urllib2.HTTPError: HTTP Error 415: Unsupported Media Type
How do I insert documents using HTTP POST? Thanks!

That error is raised if your version of Python does not include SSL support. What version are you using? Did you compile it yourself?
That said, once you have a version that includes SSL, using requests is a lot easier than urllib2, especially when POSTing data.
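For example, the 415 Unsupported Media Type error above happens because the REST API expects a JSON body with a Content-Type of application/json, while urllib2 sends form-encoded data by default. A minimal sketch with requests (the database, collection, and API key are placeholders from the question):

import json
import requests

# Placeholders: substitute your own database, collection, and API key.
url = ("https://api.mongolab.com/api/1/databases/mydb/"
       "collections/mycollection?apiKey=XXXXXXXXXXXXXXXX")
document = {"x": 1}

# POST the document as JSON; without the Content-Type header the
# server rejects the request with 415 Unsupported Media Type.
resp = requests.post(url,
                     data=json.dumps(document),
                     headers={"Content-Type": "application/json"})
print(resp.status_code)
print(resp.json())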

Related

Getting a 401 response while using Requests package

I am trying to access a server over my internal network under https://prodserver.de/info.
The structure of my code is as below:
import requests
from requests.auth import HTTPBasicAuth
username = 'User'
password = 'Hello#123'
resp = requests.get('https://prodserver.de/info/', auth=HTTPBasicAuth(username,password))
print(resp.status_code)
While trying to access this server via browser, it works perfectly fine.
What am I doing wrong?
By default, the requests library verifies the SSL certificate for HTTPS requests and raises an SSLError if verification fails. You can check whether this is the issue by disabling certificate verification with verify=False:
import requests
from requests.auth import HTTPBasicAuth
username = 'User'
password = 'Hello#123'
resp = requests.get('https://prodserver.de/info/', auth=HTTPBasicAuth(username,password), verify=False)
print(resp.status_code)
Try using requests' generic auth, like this:
resp = requests.get('https://prodserver.de/info/', auth=(username, password))
What am I doing wrong?
I cannot be sure without investigating your server, but I suggest checking the assumption you have made that the server uses Basic authentication. There are various authentication schemes; it is also possible that your server uses a cookie-based solution rather than a header-based one.
While trying to access this server via browser, it works perfectly fine.
You might then use the browser's developer tools to see what is actually sent with the request that succeeds.
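If it does turn out to be cookie-based, a minimal sketch with a requests session (the login endpoint and form field names here are hypothetical; adjust them to what the developer tools show for the successful browser request):

import requests

# Hypothetical login endpoint and form fields; replace with what the
# browser's developer tools show for the real login request.
session = requests.Session()
session.post('https://prodserver.de/login',
             data={'username': 'User', 'password': 'Hello#123'})

# The session keeps any cookies set by the login response, so this
# request is sent authenticated, like the browser's follow-up requests.
resp = session.get('https://prodserver.de/info/')
print(resp.status_code)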

How to read JSON from URL in Python?

I am trying to use Python to get a JSON file from the Web. If I open the URL in my browser (Mozilla or Chromium) I do see the JSON. But when I do the following with Python:
import json
import urllib2

response = urllib2.urlopen(url)
data = json.loads(response.read())
I get an error message that tells me the following (translated into English): Errno 10060, a connection attempt failed because the server did not respond after a period of time, or the established connection failed because the host did not respond.
ADDED
It looks like many people have faced the described problem. There are also some answers to similar (or the same) questions. For example, here we can see the following solution:
import requests
r = requests.get("http://www.google.com", proxies={"http": "http://61.233.25.166:80"})
print(r.text)
It is already a step forward for me (I think it is very likely that the proxy is the reason for the problem). However, I still have not gotten it to work, since I do not know the URL of my proxy, and I will probably need a user name and password. How can I find them? And how is it that my browsers have them but I do not?
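One quick check: Python 2's urllib can report the proxy settings it sees in the environment or system configuration (which is typically where browsers pick theirs up); a sketch:

import urllib

# Prints the proxy settings Python can see, e.g.
# {'http': 'http://proxy.example:8080'}; an empty dict means none found.
print(urllib.getproxies())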
ADDED 2
I think I am now one step further. I have used this site to find out what my proxy is: http://www.whatismyproxy.com/
Then I have used the following code:
proxies = {'http':'my_proxy.blabla.com/'}
r = requests.get(url, proxies = proxies)
print r
As a result I get
<Response [404]>
That does not look so good, but at least I think my proxy address is correct, because when I randomly change it I get a different error:
Cannot connect to proxy
So, I can connect to the proxy, but something is not found.
I think something might be going wrong when you try to get the JSON from the online source (URL). Just to make things clear, here is a small code snippet:
#!/usr/bin/env python
try:
    # For Python 3+
    from urllib.request import urlopen
except ImportError:
    # For Python 2
    from urllib2 import urlopen

import json

def get_jsonparsed_data(url):
    response = urlopen(url)
    data = str(response.read())
    return json.loads(data)
If you still get a connection error, you can try a couple of steps:
Try to urlopen() a random site from the interpreter (interactive mode). If you are able to grab the source code, you're good; if not, check your internet connection or try the requests module.
Check whether the JSON at the URL is syntactically valid.
Try the simplejson module, as in the sketch below.
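A common pattern for that last step, falling back to the standard library when simplejson is not installed:

# Prefer simplejson when available, otherwise fall back to the
# built-in json module; both expose the same loads()/dumps() API.
try:
    import simplejson as json
except ImportError:
    import json

data = json.loads('{"x": 1}')
print(data)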
Edit 1:
If you want to access websites through a system-wide proxy, you will have to use a proxy handler that connects to that proxy via the loopback address (localhost). A code sample is shown below.
import urllib2

proxy = urllib2.ProxyHandler({
    'http': '127.0.0.1',
    'https': '127.0.0.1'
})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)

# this way you can send both http and https requests using proxies
urllib2.urlopen('http://www.google.com')
urllib2.urlopen('https://www.google.com')
I have not worked a lot with ProxyHandler; I just know the theory and the code. I am sure there are better ways to access websites through proxies, ones which do not involve installing the opener every time you run the program, but hopefully this will point you in the right direction.
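For instance, a minimal sketch that uses the opener directly instead of installing it globally (the proxy address is a placeholder):

import urllib2

# Build the opener once and call its open() method per request,
# instead of install_opener() changing the global urlopen().
proxy = urllib2.ProxyHandler({'http': 'http://127.0.0.1:8080',
                              'https': 'http://127.0.0.1:8080'})
opener = urllib2.build_opener(proxy)
response = opener.open('http://www.google.com')
print(response.getcode())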

TestLink xmlrpc API (via Python) 404 not found

I'm trying to connect to TestLink via the xmlrpc API. I've set the following in TestLink's config.inc.php:
$tlCfg->api->enabled = TRUE;
$tlCfg->exec_cfg->enabled_test_automation = ENABLED;
and restarted the Apache server. I tried to connect to the TestLink server via the Python package TestLink-API-Python-client (https://github.com/orenault/TestLink-API-Python-client):
from testlink import TestlinkAPIClient, TestLinkHelper
import sys

URL = 'http://MYSERVER/testlink/lib/api/xmlrpc.php'
DEVKEY = 'MYKEY'

tl_helper = TestLinkHelper()
myTestLink = tl_helper.connect(TestlinkAPIClient)
myTestLink.__init__(URL, DEVKEY)
myTestLink.checkDevKey()
And then I receive a TLConnectionError stating my URL, and 404 Not Found...
Does anyone have any idea?
Thanks.
I didn't solve it.
I reverted to working on the TestLink DB directly. I'm sure it's more fragile than using the API, but it works...
If you are still looking for help, this code worked for me:
set TESTLINK_API_PYTHON_SERVER_URL=http://[YOURSERVER]/testlink/lib/api/xmlrpc/v1/xmlrpc.php
set TESTLINK_API_PYTHON_DEVKEY=[Users devKey generated by TestLink]
python
import testlink
tls = testlink.TestLinkHelper().connect(testlink.TestlinkAPIClient)
tls.countProjects()
Check out the TestLink API documentation to learn more.
At a glance, your XML-RPC URL seems wrong. It should be:
http://YOURSERVER/testlink/lib/api/xmlrpc/v1/xmlrpc.php
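A quick check with the corrected endpoint; a sketch assuming the client accepts the server URL and dev key directly, as the question's code suggests (server name and dev key are placeholders):

from testlink import TestlinkAPIClient

# The /v1/ path segment is the piece missing from the original URL.
URL = 'http://MYSERVER/testlink/lib/api/xmlrpc/v1/xmlrpc.php'
DEVKEY = 'MYKEY'

tls = TestlinkAPIClient(URL, DEVKEY)
print(tls.checkDevKey())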

SUDS python connection

I'm trying to build a client for a web service in Python with suds. I used the tutorial on this site: http://www.jansipke.nl/python-soap-client-with-suds. It works with my own web service and WSDL, but not with the WSDL file I was given. The WSDL file works in soapUI; I can send requests and get answers. So the problem, I think, is in how suds parses the WSDL file. I get the following error:
urllib2.URLError: <urlopen error [Errno -2] Name or service not known>
Any ideas how to fix that? If you need more information please ask. Thank you!
The error you have given us seems to imply that the URL you are using to access the WSDL is not correct. Could you show us a bit more of your code? For example, the client instantiation and the URL to the WSDL. This might allow others to actually help you.
Olly
# SUDS is a lightweight SOAP client, primarily built for Python 2.6/7.
# It does not work properly with other versions; there is no support for 3.x.
# Tested with Python 2.7.12.
from suds.client import Client
from suds.sax.text import Raw

# Use your tested URL in the same format, ending with '?wsdl'; check it once in SOAP-UI. Below is a dummy.
# Make sure to use the same method name as in 'client.service.MethodName' below.
url = 'http://localhost:8080/your/path/MethodName?wsdl'

# Use your request XML, in the format xml = Raw('xml_text'). Below is a (truncated) dummy.
xml = Raw('<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:diag=" </soapenv:Body></soapenv:Envelope>')

def GetResCode(url, xml):
    client = Client(url)
    xml_response = client.service.MethodName(__inject={'msg': xml})
    return xml_response

print(GetResCode(url, xml))

Does urllib2 in Python 2.6.1 support proxy via https

Does urllib2 in Python 2.6.1 support proxy via https?
I've found the following at http://www.voidspace.org.uk/python/articles/urllib2.shtml:
NOTE: Currently urllib2 does not support fetching of https locations through a proxy. This can be a problem.
I'm trying to automate logging in to a web site and downloading a document; I have a valid username and password.
import urllib2

proxy_info = {
    'host': "axxx",  # commented out the real data
    'port': "1234"   # commented out the real data
}
proxy_handler = urllib2.ProxyHandler(
    {"http": "http://%(host)s:%(port)s" % proxy_info})
opener = urllib2.build_opener(proxy_handler,
                              urllib2.HTTPHandler(debuglevel=1),
                              urllib2.HTTPCookieProcessor())
urllib2.install_opener(opener)

fullurl = 'https://correct.url.to.login.page.com/user=a&pswd=b'  # example
req1 = urllib2.Request(url=fullurl, headers=headers)
response = urllib2.urlopen(req1)
I've had this working for similar pages, but not over HTTPS, and I suspect it does not go through the proxy; it just gets stuck in the same way as when I did not specify a proxy at all. I need to go out through the proxy.
I need to authenticate, but not using basic authentication. Will urllib2 figure out the authentication when going to the HTTPS site (I supply the username/password to the site via the URL)?
EDIT:
Nope, I tested with
proxies = {
    "http": "http://%(host)s:%(port)s" % proxy_info,
    "https": "https://%(host)s:%(port)s" % proxy_info
}
proxy_handler = urllib2.ProxyHandler(proxies)
And I get the error:
urllib2.URLError: <urlopen error [Errno 8] _ssl.c:480: EOF occurred in violation of protocol>
Fixed in Python 2.6.3 and several other branches:
http://bugs.python.org/issue1424152
http://www.python.org/download/releases/2.6.3/NEWS.txt
Issue #1424152: Fix for httplib, urllib2 to support SSL while working through proxy. Original patch by Christopher Li, changes made by Senthil Kumaran.
I'm not sure Michael Foord's article, which you quote, has been updated for Python 2.6.1; why not give it a try? Instead of telling ProxyHandler that the proxy is only good for http, as you're doing now, register it for https too (of course you should format it into a variable just once before you call ProxyHandler, and reuse that variable in the dict). That may or may not work, but you're not even trying, and that's sure not to work!
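A sketch of that suggestion, reusing the proxy_info dict from the question:

import urllib2

# Build the proxy URL once and register it for both schemes.
proxy_url = "http://%(host)s:%(port)s" % proxy_info
proxy_handler = urllib2.ProxyHandler({"http": proxy_url,
                                      "https": proxy_url})
opener = urllib2.build_opener(proxy_handler)
urllib2.install_opener(opener)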
In case anyone else has this issue in the future, I'd like to point out that urllib2 does support HTTPS proxying now. Make sure the proxy supports it too, or you risk running into a bug that puts the Python library into an infinite loop (this happened to me).
See the unit test in the Python source that tests HTTPS proxying support for further information:
http://svn.python.org/view/python/branches/release26-maint/Lib/test/test_urllib2.py?r1=74203&r2=74202&pathrev=74203
