I'm trying to connect to TestLink via the xmlrpc API. I've set the following in TestLink's config.inc.php:
$tlCfg->api->enabled = TRUE;
$tlCfg->exec_cfg->enabled_test_automation = ENABLED;
and restarted the Apache server. I then tried to connect to the TestLink server via the Python package TestLink-API-Python-client (https://github.com/orenault/TestLink-API-Python-client):
from testlink import TestlinkAPIClient, TestLinkHelper
import sys
URL = 'http://MYSERVER/testlink/lib/api/xmlrpc.php'
DEVKEY = 'MYKEY'
tl_helper = TestLinkHelper()
myTestLink = tl_helper.connect(TestlinkAPIClient)
myTestLink.__init__(URL, DEVKEY)
myTestLink.checkDevKey()
I then receive a TLConnectionError that reports my URL and "404 Not Found"...
Does anyone have any idea?
Thanks.
I didn't solve it.
I reverted to working on the TestLink DB directly. I'm sure it's more fragile than using the API, but it works...
If you are still looking for help, this code worked for me:
set TESTLINK_API_PYTHON_SERVER_URL=http://[YOURSERVER]/testlink/lib/api/xmlrpc/v1/xmlrpc.php
set TESTLINK_API_PYTHON_DEVKEY=[Users devKey generated by TestLink]
python
import testlink
tls = testlink.TestLinkHelper().connect(testlink.TestlinkAPIClient)
tls.countProjects()
Check out the TestLink API documentation to learn more.
At a glance, your XML-RPC URL seems wrong. It should be:
http://YOURSERVER/testlink/lib/api/xmlrpc/v1/xmlrpc.php
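For what it's worth, with the corrected v1 endpoint the client can also be constructed directly, without the environment variables. This is only a sketch, assuming the TestlinkAPIClient(url, devkey) constructor of that package; the URL and key below are placeholders:
import testlink

URL = 'http://YOURSERVER/testlink/lib/api/xmlrpc/v1/xmlrpc.php'  # corrected v1 endpoint
DEVKEY = 'YOURDEVKEY'                                            # personal API key generated in TestLink

tls = testlink.TestlinkAPIClient(URL, DEVKEY)
print(tls.checkDevKey())  # True means the URL and dev key are accepted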
I am trying to connect to Jira Cloud using the REST API. This is my Python code for it:
pip install jira
from jira import JIRA
jiraOptions = {'server': "https:url"}
user = 'emailid'
apikey = 'api token'
server = 'https:url'
options = {
    'server': server
}
After this, when I execute the following line, I get a connection error:
jira = JIRA(basic_auth=(user,apikey))
ConnectionRefusedError
I am not sure what has gone wrong. I tried finding the right syntax, but I don't see what the problem is. Can anyone please help?
Hey @Bobby, I ran into some similar problems. I created this repo that uses straight Python and the API to get most things done: https://github.com/dren79/JiraScripting_public
A lot of the basic functionality is in the helpers.py file; let me know if it helps.
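For reference, a minimal basic-auth sketch with the jira package is below. The ConnectionRefusedError in the question is likely because no server is passed to JIRA(), so the client falls back to a default local URL; the server address and credentials here are placeholders:
from jira import JIRA

server = 'https://your-domain.atlassian.net'  # your Jira Cloud base URL (placeholder)
user = 'you@example.com'                      # Atlassian account email
apikey = 'your-api-token'                     # API token, not the account password

# Pass the server explicitly together with basic_auth
jira = JIRA(server=server, basic_auth=(user, apikey))

# Quick sanity check: list the projects the token can see
for project in jira.projects():
    print(project.key, project.name)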
My goal is to authenticate my client, which uses the requests library (2.11.1) on Python 3.5.2, through NTLM with SSPI, so that the user does not have to manually enter her domain credentials (the ones used to log in to the PC).
I have found the following possibilities, but none work for me:
HttpNtlmSspiAuth raises an exception inside requests:
import requests
from requests_ntlm import HttpNtlmAuth, HttpNtlmSspiAuth
requests.get(site_url, auth=HttpNtlmSspiAuth())
requests-sspi-ntlm always gets a 401:
import requests
from requests_sspi_ntlm import HttpNtlmAuth
session = requests.Session()
session.auth = HttpNtlmAuth()
session.get("http://ntlm_protected_site.com")
And requests-negotiate-sspi also triggers an exception in requests:
import requests
from requests_negotiate_sspi import HttpNegotiateAuth
r = requests.get('https://iis.contoso.com', auth=HttpNegotiateAuth())
Am I doing something wrong?
The package requests-negotiate-sspi works for me.
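A minimal session-based sketch of that approach, assuming the logged-in Windows account has access to the site (the URL is a placeholder):
import requests
from requests_negotiate_sspi import HttpNegotiateAuth

session = requests.Session()
session.auth = HttpNegotiateAuth()  # uses the current Windows login via SSPI, no credentials in code
response = session.get('https://iis.contoso.com')
print(response.status_code)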
I probably had the same issue as the OP, but I was too lazy to try the OP's solution and integrate that code into mine, and Google helped me out. In case anyone encounters the same exception raised from sspi.py, ValueError: year 30828 is out of range, it's a known issue with requests-negotiate-sspi on Python 3.6. See here: Github-Issue
I solved this by creating a new conda environment with Python 3.4, then reinstalling some dependencies as well as requests-negotiate-sspi, and everything worked.
Same issue here, but it was solved when I realized I was using an admin account that doesn't have authorization for that resource URI.
I have my web app API running.
If I go to http://127.0.0.1:5000/ via any browser I get the right response.
If I use the Advanced REST Client Chrome app and send a GET request to my app at that address I get the right response.
However this gives me a 503:
import requests
response = requests.get('http://127.0.0.1:5000/')
I read a suggestion to try this:
s = requests.Session()
response = s.get('http://127.0.0.1:5000/')
But I still get a 503 response.
Other things I've tried: not prefixing with http://, not using a port in the URL, running on a different port, trying a different API call like POST, etc.
Thanks.
Is http://127.0.0.1:5000/ your localhost? If so, try 'http://localhost:5000' instead
Just in case someone is struggling with this as well, what finally worked was running the application on my local network IP.
I.e., I just opened up the web app and changed the app.run(debug=True) line to app.run(host="my.ip.address", debug=True).
I'm guessing the requests library perhaps was trying to protect me from a localhost attack? Or our corporate proxy or firewall was preventing communication from unknown apps to the 127 address. I had set NO_PROXY to include the 127.0.0.1 address, so I don't think that was the problem. In the end I'm not really sure why it is working now, but I'm glad that it is.
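If a corporate proxy is a suspect, one way to rule it out from the requests side is to stop the library from picking up proxy settings from the environment at all. A small sketch against the local dev server from the question:
import requests

session = requests.Session()
session.trust_env = False  # ignore HTTP_PROXY/HTTPS_PROXY/NO_PROXY environment settings
response = session.get('http://127.0.0.1:5000/')
print(response.status_code)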
From Python, I would like to retrieve content from a web site via HTTPS with basic authentication. I need the content on disk. I am on an intranet, trusting the HTTPS server. Platform is Python 2.6.2 on Windows.
I have been playing around with urllib2; however, I have not succeeded so far.
I have a solution running, calling wget via os.system():
wget_cmd = r'\path\to\wget.exe -q -e "https_proxy = http://fqdn.to.proxy:port" --no-check-certificate --http-user="username" --http-password="password" -O path\to\output https://fqdn.to.site/content'
I would like to get rid of the os.system(). Is that possible in Python?
Proxy support combined with HTTPS wasn't working in urllib2 for a long time. It will be fixed in the next release of Python 2.6 (v2.6.3).
In the meantime you can reimplement the correct support; that's what we did for Mercurial: http://hg.intevation.org/mercurial/crew/rev/59acb9c7d90f
Try this (notice that you'll have to fill in the realm of your server also):
import urllib2

# Basic authentication for the target site (fill in the realm of your server)
authinfo = urllib2.HTTPBasicAuthHandler()
authinfo.add_password(realm='Fill In Realm Here',
                      uri='https://fqdn.to.site/content',
                      user='username',
                      passwd='password')

# Send HTTPS requests through the proxy
proxy_support = urllib2.ProxyHandler({"https": "http://fqdn.to.proxy:port"})

opener = urllib2.build_opener(proxy_support, authinfo)
fp = opener.open("https://fqdn.to.site/content")
open(r"path\to\output", "wb").write(fp.read())
fp.close()
You could try this too:
http://code.google.com/p/python-httpclient/
(It also supports the verification of the server certificate.)
Does urllib2 in Python 2.6.1 support proxy via https?
I've found the following at http://www.voidspace.org.uk/python/articles/urllib2.shtml:
NOTE: Currently urllib2 does not support fetching of https locations through a proxy. This can be a problem.
I'm trying to automate logging in to a web site and downloading a document; I have a valid username/password.
proxy_info = {
    'host': "axxx",  # commented out the real data
    'port': "1234"   # commented out the real data
}
proxy_handler = urllib2.ProxyHandler(
    {"http": "http://%(host)s:%(port)s" % proxy_info})
opener = urllib2.build_opener(proxy_handler,
                              urllib2.HTTPHandler(debuglevel=1),
                              urllib2.HTTPCookieProcessor())
urllib2.install_opener(opener)
fullurl = 'https://correct.url.to.login.page.com/user=a&pswd=b' # example
req1 = urllib2.Request(url=fullurl, headers=headers)
response = urllib2.urlopen(req1)
I've had this working for similar pages, but not over HTTPS, and I suspect it does not get through the proxy: it just gets stuck in the same way as when I did not specify a proxy. I need to go out through the proxy.
I need to authenticate, but not with basic authentication. Will urllib2 figure out the authentication when going to the HTTPS site (I supply the username/password to the site via the URL)?
EDIT:
Nope, I tested with
proxies = {
    "http": "http://%(host)s:%(port)s" % proxy_info,
    "https": "https://%(host)s:%(port)s" % proxy_info
}
proxy_handler = urllib2.ProxyHandler(proxies)
And I get the error:
urllib2.URLError: urlopen error [Errno 8] _ssl.c:480: EOF occurred in violation of protocol
Fixed in Python 2.6.3 and several other branches:
http://bugs.python.org/issue1424152
http://www.python.org/download/releases/2.6.3/NEWS.txt
Issue #1424152: Fix for httplib, urllib2 to support SSL while working through proxy. Original patch by Christopher Li, changes made by Senthil Kumaran.
I'm not sure Michael Foord's article, which you quote, is up to date for Python 2.6.1 -- why not give it a try? Instead of telling ProxyHandler that the proxy is only good for http, as you're doing now, register it for https too (of course, you should format it into a variable just once before you call ProxyHandler and then reuse that variable for both keys in the dict). That may or may not work, but as it stands you're not even trying, and that's sure not to work!-)
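Concretely, that suggestion would look something like this (the proxy_info values are placeholders, as in the question):
import urllib2

proxy_info = {'host': "proxy.host", 'port': "1234"}           # placeholders, as in the question
proxy_url = "http://%(host)s:%(port)s" % proxy_info           # build the proxy URL once
proxy_handler = urllib2.ProxyHandler({"http": proxy_url,      # register the proxy for http...
                                      "https": proxy_url})    # ...and for https as well
opener = urllib2.build_opener(proxy_handler,
                              urllib2.HTTPHandler(debuglevel=1),
                              urllib2.HTTPCookieProcessor())
urllib2.install_opener(opener)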
In case anyone else has this issue in the future, I'd like to point out that urllib2 does support HTTPS proxying now. Make sure the proxy supports it too, or you risk running into a bug that puts the Python library into an infinite loop (this happened to me).
For further information, see the unit test in the Python source that exercises HTTPS proxying support:
http://svn.python.org/view/python/branches/release26-maint/Lib/test/test_urllib2.py?r1=74203&r2=74202&pathrev=74203