How to connect over HTTPS to use an API in Python? - python

This is the requirement from the API's documentation:
An HTTPS connection is needed to use the API. This means that you will require a secure SSL/TLS connection to be able to communicate with our API Server.
This is the curl command for getting the clients in their documentation:
curl -i -X GET -H "X-KEYALI-API:{API_KEY}" -u {API_USERNAME}:{API_PASSWORD} https://aliphia.com/v1/api_public/clients/
So I need to implement the same thing in Python:

import requests

headers = {
    'X-KEYALI-API': '{API_KEY}',
}
response = requests.get(
    'https://aliphia.com/v1/api_public/clients/',
    headers=headers,
    auth=('{API_USERNAME}', '{API_PASSWORD}'),
)
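For what it's worth, requests performs the SSL/TLS handshake and verifies the server certificate automatically for https:// URLs, so the snippet above should already satisfy the documented requirement. A quick sanity check of the result (assuming the endpoint returns JSON) could look like this:

# 200 means the TLS connection, the API key header, and the basic auth were all accepted
print(response.status_code)
# assuming the endpoint returns a JSON body, as the documentation implies
print(response.json())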

Related

Access an ADFS/OIDC-protected web API with curl or Python

I would like to access a web API from a script (bash or Python); the API is protected by mod_openidc/Apache2 and a self-hosted ADFS.
For authentication, a certificate from a smartcard or a locally stored certificate is required.
I have already tried several approaches with Python and curl, but none of them comes close to working.
Python approach:
from oauthlib.oauth2 import BackendApplicationClient
from requests_oauthlib import OAuth2Session

client_id = "abcdef-abcd-abcd-abcd-abcdefghijk"
client = BackendApplicationClient(client_id=client_id)
#client = BackendApplicationClient()
oauth = OAuth2Session(client=client)
protected_url = "https://protectedurl/page/"
oauth.fetch_token(token_url='https://sts.myserver.net/adfs/oauth2/token/',
                  include_client_id=True,
                  cert=('/home/user/cert.pem', '/home/user/server.key'))
which leads to: "oauthlib.oauth2.rfc6749.errors.InvalidClientError: (invalid_client) MSIS9627: Received invalid OAuth client credentials request. Client credentials are missing or found empty"
curl:
curl --cert /home/user/cert.pem --key /home/user/server.key \
  "https://sts.example.net/adfs/oauth2/authorize/?response_type=code&scope=openid%20email%20profile%20allatclaims&client_id=XXX&state=XXX&redirect_uri=https%3A%2F%2Fexample.net%2Fpage%2Fredirect_uri&nonce=XXX"
which returns the STS page as HTML.
So I think I don't just have some small bug, but rather a wrong approach.
Since it works in a browser, I don't suspect an issue on the server side.
Any approaches and examples are warmly welcome.
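One thing worth checking (a sketch only, not a confirmed fix): the MSIS9627 error says ADFS received no client credentials at all. The token endpoint expects either a client secret or a client assertion in the token request; the certificate passed via cert= is used for TLS, and ADFS only treats it as a client credential if the application registration is configured for certificate authentication. If the ADFS registration has (or can be given) a shared secret, a client-credentials call would look roughly like this, where the secret value is an assumption:

from oauthlib.oauth2 import BackendApplicationClient
from requests_oauthlib import OAuth2Session

client_id = "abcdef-abcd-abcd-abcd-abcdefghijk"
client_secret = "REPLACE_ME"  # assumption: the ADFS app registration has a shared secret

oauth = OAuth2Session(client=BackendApplicationClient(client_id=client_id))
token = oauth.fetch_token(
    token_url='https://sts.myserver.net/adfs/oauth2/token/',
    client_id=client_id,
    client_secret=client_secret,
)

# whether the page behind mod_openidc accepts a bearer token depends on the
# Apache configuration, so this last step is also an assumption
r = oauth.get('https://protectedurl/page/')
print(r.status_code)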

503 Response when trying to use Python requests on a local website

I'm trying to scrape my own site from my local server, but when I use Python requests on it, I get a 503 response. Other ordinary sites on the web work. Any reason/solution for this?
import requests
url = 'http://127.0.0.1:8080/full_report/a1uE0000002vu2jIAA/'
r = requests.get(url)
print r
prints out
<Response [503]>
After further investigation, I've found a similar problem to mine.
Python requests 503 erros when trying to access localhost:8000
However, I don't think he's solved it yet. I can access the local website via the web browser, but I can't access it using the requests.get function. I'm also using Django to host the server:
python manage.py runserver 8080
When I use:
curl -vvv http://127.0.0.1:8080
* Rebuilt URL to: http://127.0.0.1:8080/
* Trying 10.37.135.39...
* Connected to proxy.kdc.[company-name].com (10.37.135.39) port 8099 (#0)
* Proxy auth using Basic with user '[company-id]'
> GET http://127.0.0.1:8080/ HTTP/1.1
> Host: 127.0.0.1:8080
> Proxy-Authorization: Basic Y2FhNTc2OnJ2YTkxQ29kZQ==
> User-Agent: curl/7.49.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: BlueCoat-Security-Appliance
< Location:http://10.118.216.201
< Connection: Close
<
<HTML>
<HEAD><TITLE>Redirection</TITLE></HEAD>
<BODY><H1>Redirect</H1></BODY>
* Closing connection 0
I cannot request a local URL using Python requests because the company's network software won't allow it. This is a dead end, and other avenues must be pursued.
EDIT: Working Solution
>>> import requests
>>> session = requests.Session()
>>> session.trust_env = False
>>> r = session.get("http://127.0.0.1:8080")
>>> r
<Response [200]>
Maybe you should disable your proxies in your requests.
import requests

proxies = {
    "http": None,
    "https": None,
}
requests.get("http://127.0.0.1:8080/myfunction", proxies=proxies)
ref:
https://stackoverflow.com/a/35470245/8011839
https://2.python-requests.org//en/master/user/advanced/#proxies
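A related option (a sketch, assuming requests is picking the proxy up from the environment, as the curl output above suggests) is to exempt loopback addresses via the NO_PROXY environment variable before making the request:

import os
import requests

# requests consults NO_PROXY / no_proxy when deciding whether to route a
# request through the proxies it found in the environment
os.environ['NO_PROXY'] = '127.0.0.1,localhost'

r = requests.get('http://127.0.0.1:8080/full_report/a1uE0000002vu2jIAA/')
print(r.status_code)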
HTTP Error 503 means:
The Web server (running the Web site) is currently unable to handle the HTTP request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay. Some servers in this state may also simply refuse the socket connection, in which case a different error may be generated because the socket creation timed out.
You may do the following things:
Check that you are able to open the URL in the browser.
If the URL opens in the browser, check the domain in your code; it might be incorrect.
If it does not open in the browser either, your site may be overloaded or the server may be out of resources to handle the request.
The most common cause of a 503 error is that a proxy host of some form is unable to communicate with the back end. For example, if you have Varnish trying to handle a request but Apache is down.
In your case, you have Django running on port 8080. (That's what the 8080 means). When you try to get content from 127.0.0.1, though, you're going to the default HTTP port (80). This means that your default server (Apache maybe? NginX?) is trying to find a host to serve 127.0.0.1 and can't find one.
You have two choices. Either you can update your server's configuration, or you can include the port in the URL.
url = 'http://127.0.0.1:8080/full_report/a1uE0000002vu2jIAA/'

Run a curl TLSv1.2 HTTP GET request in Python?

I have the following command that I run using curl on Linux:
curl --tlsv1.2 --cert ~/aws-iot/certs/certificate.pem.crt --key ~/aws-iot/certs/private.pem.key --cacert ~/aws-iot/certs/root-CA.crt -X GET https://data.iot.us-east-1.amazonaws.com:8443/things/pi_3/shadow
This command returns the JSON text that I want. However, I want to be able to run the above command in Python 3, and I do not know which library to use to get the same JSON response.
P.S. I replace "data" with my account number in AWS to get the JSON.
After playing around with it on my own, I was able to do it successfully in Python using the requests library:
import requests

s = requests.Session()
# cert takes the (client certificate, private key) pair;
# the CA bundle belongs in verify, not in the cert tuple
r = s.get('https://data.iot.us-east-1.amazonaws.com:8443/things/pi_3/shadow',
          cert=('/home/pi/aws-iot/certs/certificate.pem.crt',
                '/home/pi/aws-iot/certs/private.pem.key'),
          verify='/home/pi/aws-iot/certs/root-CA.crt')
print(r.text)
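Note that requests (via urllib3/OpenSSL) negotiates the highest TLS version both sides support, so there is usually no need for an explicit equivalent of --tlsv1.2. If you really want to pin the session to TLS 1.2, a sketch using a custom transport adapter (an optional addition, not something the source says the AWS IoT endpoint requires) would look like this:

import ssl
import requests
from requests.adapters import HTTPAdapter
from urllib3.poolmanager import PoolManager

class TLSv12Adapter(HTTPAdapter):
    # transport adapter that forces TLS 1.2 for every connection in the pool
    def init_poolmanager(self, connections, maxsize, block=False, **kwargs):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
        self.poolmanager = PoolManager(num_pools=connections, maxsize=maxsize,
                                       block=block, ssl_context=ctx)

s = requests.Session()
s.mount('https://', TLSv12Adapter())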

WSO2 API POST Python Web Service - Empty or No Payload to server

I am not able to send a request payload to my POST service from WSO2.
In the REST console, my service is working.
From the WSO2 server I am able to curl my server and get a successful response.
Here is my API configuration.
Payload to send:
{"query":"Hi I am a POST query parameter"}
My server is receiving {} as the request payload. It expects a raw JSON body (as above) in the payload. I have tried all combinations for Parameter Type, but I am still not able to send the payload to my server from WSO2.
How can I do this?
EDIT 1
I have tried all possible ways of sending the data, including the following.
Am I doing something wrong here?
From both attempts I get an error that my payload is empty or incorrect.
EDIT 2
I am able to connect to Java-based services but not to Python-based services.
Do I need any special settings on my Python server?
Enable wire logs and check the following:
whether the payload is coming into the API Manager (Swagger -> AM)
whether the payload is going out from the API Manager (AM -> backend)
Also check the request headers coming in and going out, and compare them with those from the successful curl request.
I am using Flask, and I am afraid Flask cannot deal with this issue currently.
I could reproduce the issue: the message is sent to the back end correctly, but Python does not handle it until the request times out.
Python Flask cannot receive post request from WSO2
The workaround may be to use Java, or a Python GET method.
I solved this problem by using Apache to proxy the request.
I think this is related to WSGI.
Processing chunked encoded HTTP POST requests in python (or generic CGI under apache)
ProxyPass / http://localhost:8001/
ProxyPassReverse / http://localhost:8001/
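For diagnosing the empty-payload symptom on the Python side, a minimal Flask endpoint (the /query route is an assumption for illustration; the port matches the ProxyPass target above) that logs exactly what arrives can help separate a gateway problem from a parsing problem:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/query', methods=['POST'])
def query():
    # log the raw bytes first, so a chunked or empty body is visible as-is
    raw = request.get_data()
    app.logger.info('raw body: %r, headers: %s', raw, dict(request.headers))
    # force=True parses the body as JSON even if the gateway sends an
    # unexpected Content-Type; silent=True returns None instead of raising
    payload = request.get_json(force=True, silent=True) or {}
    return jsonify(received=payload)

if __name__ == '__main__':
    app.run(port=8001)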

Python HTTPS client with basic authentication via proxy

From Python, I would like to retrieve content from a web site via HTTPS with basic authentication. I need the content on disk. I am on an intranet, trusting the HTTPS server. Platform is Python 2.6.2 on Windows.
I have been playing around with urllib2, but have not succeeded so far.
I have a solution running, calling wget via os.system():
wget_cmd = r'\path\to\wget.exe -q -e "https_proxy = http://fqdn.to.proxy:port" --no-check-certificate --http-user="username" --http-password="password" -O path\to\output https://fqdn.to.site/content'
I would like to get rid of the os.system(). Is that possible in Python?
Proxy support over HTTPS did not work in urllib2 for a long time. It will be fixed in the next released version of Python 2.6 (v2.6.3).
In the meantime you can reimplement the correct support, that's what we did for mercurial: http://hg.intevation.org/mercurial/crew/rev/59acb9c7d90f
Try this (notice that you'll have to fill in the realm of your server also):
import urllib2

authinfo = urllib2.HTTPBasicAuthHandler()
authinfo.add_password(realm='Fill In Realm Here',
                      uri='https://fqdn.to.site/content',
                      user='username',
                      passwd='password')
proxy_support = urllib2.ProxyHandler({"https": "http://fqdn.to.proxy:port"})
opener = urllib2.build_opener(proxy_support, authinfo)
fp = opener.open("https://fqdn.to.site/content")
open(r"path\to\output", "wb").write(fp.read())
You could try this too:
http://code.google.com/p/python-httpclient/
(It also supports the verification of the server certificate.)
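If moving beyond Python 2.6 is an option, the requests library makes the same call considerably shorter; a sketch that mirrors the wget command above (the proxy URL, credentials, and paths are the placeholders from the question):

import requests

proxies = {'https': 'http://fqdn.to.proxy:port'}

r = requests.get('https://fqdn.to.site/content',
                 auth=('username', 'password'),
                 proxies=proxies,
                 verify=False)  # mirrors wget --no-check-certificate
with open(r'path\to\output', 'wb') as f:
    f.write(r.content)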
