I'm new to ALM. I just read some guides on the REST API and tried to follow them, but I've run into a problem: my last request returns a 401 (User not authenticated). What am I doing wrong?
import requests
from requests.auth import HTTPBasicAuth
url = "https://almalmqc1250saastrial.saas.hpe.com"
login = "+++++++"
password = "+++++"
cookies = dict()
headers = {}
r = requests.get(url + "/qcbin/rest/is-authenticated")
print(r.status_code, r.headers.get('WWW-Authenticate'))
r = requests.get(url + "/qcbin/authentication-point/authentication",
auth=HTTPBasicAuth(login, password), headers=headers)
print(r.status_code, r.headers)
cookie = r.headers.get('Set-Cookie')
LWSSO_COOKIE_KEY = cookie[cookie.index("=") + 1: cookie.index(";")]
cookies['LWSSO_COOKIE_KEY'] = LWSSO_COOKIE_KEY
print(cookies)
r = requests.post(url + "/qcbin/rest/site-session", cookies=cookies)
print(r.status_code, r.headers)
The solution was found. The problem was an incorrect URL. To authenticate you need this URL:
url_log = "https://login.software.microfocus.com/msg/actions/doLogin.action"
And you need these headers:
self.__headers = {
"Content-Type": "application/x-www-form-urlencoded",
'Host': 'login.software.microfocus.com'
}
The POST request to authenticate then looks like this:
r = self.__session.post(self.url_log, data=self.input_auth, headers=self.__headers)
Where data is:
self.input_auth = 'username=' + login + '&' + 'password=' + password
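Putting the pieces together, here is a rough end-to-end sketch (assuming a plain requests.Session and the URLs above; login and password are placeholders, and the trailing ALM calls mirror the ones from the question):

import requests

url = "https://almalmqc1250saastrial.saas.hpe.com"
url_log = "https://login.software.microfocus.com/msg/actions/doLogin.action"
login = "your_login"        # placeholder
password = "your_password"  # placeholder

session = requests.Session()
headers = {
    "Content-Type": "application/x-www-form-urlencoded",
    "Host": "login.software.microfocus.com",
}
input_auth = 'username=' + login + '&' + 'password=' + password

# Log in; the session keeps the SSO cookie for the later ALM calls.
r = session.post(url_log, data=input_auth, headers=headers)
print(r.status_code)

# Open a site session and check that we are authenticated.
r = session.post(url + "/qcbin/rest/site-session")
print(r.status_code)
r = session.get(url + "/qcbin/rest/is-authenticated")
print(r.status_code)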
Basically, I am trying to read organization ids from a spreadsheet and pass them to the destroy_many endpoint in payloads of 100, to delete the organizations.
import json
import xlrd
import requests
session = requests.Session()
session.headers = {'Content-Type': 'application/json'}
session.auth = 'my email', 'password'
url = 'https://domain.zendesk.com/api/v2/organizations/destroy_many.json'
payloads = []
organization_ids = []
book = xlrd.open_workbook('orgs_list_destroy.xls')
sheet = book.sheet_by_name('Sheet1')
for row in range(1, sheet.nrows):
    if sheet.row_values(row)[2]:
        # Collect ids and flush a payload every 100 of them.
        organization_ids.append(int(sheet.row_values(row)[2]))
        if len(organization_ids) == 100:
            payloads.append(json.dumps({'ids': organization_ids}))
            organization_ids = []
if organization_ids:
    payloads.append(json.dumps({'ids': organization_ids}))
for payload in payloads:
response = session.delete(url, data=payload)
if response.status_code != 200:
print('Import failed with status {}'.format(response.status_code))
exit()
print('Successfully imported a batch of organizations')
Try placing it outside the for loop, where you're defining your request headers:
url = 'https://{{YOURDOMAIN}}.zendesk.com/api/v2/organizations/destroy_many.json'
user = 'YOUR_EMAIL@DOMAIN.com' + '/token'
pwd = '{{YOUR_TOKEN}}'
headers = {'Content-Type': 'application/json'}
response = requests.delete(url, auth=(user, pwd), headers=headers)
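For reference, a rough sketch combining the batching with token-based auth (hedged: it assumes destroy_many accepts a comma-separated ids query parameter, so check the Zendesk docs for your API version; the domain, email, token and ids are placeholders):

import requests

url = 'https://YOURDOMAIN.zendesk.com/api/v2/organizations/destroy_many.json'
auth = ('YOUR_EMAIL@DOMAIN.com/token', 'YOUR_TOKEN')  # API token auth
headers = {'Content-Type': 'application/json'}

org_ids = [101, 102, 103]  # hypothetical ids read from the spreadsheet
for start in range(0, len(org_ids), 100):
    batch = org_ids[start:start + 100]
    # Pass the batch as a comma-separated ids parameter (assumption, see above).
    params = {'ids': ','.join(str(i) for i in batch)}
    response = requests.delete(url, auth=auth, headers=headers, params=params)
    print(response.status_code)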
I have a problem with a Computer Vision resource on Azure. This code is based on the documentation example and it worked before (https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts/python-disk).
Suddenly I started getting a 400 error:
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://nameofmyresource.cognitiveservices.azure.com/vision/v2.0/analyze?visualFeatures=Objects%2CTags
My piece of code:
for img_path in img_path_list:
image_data = open(img_path, "rb").read()
print(image_data)
headers = {'Ocp-Apim-Subscription-Key': subscription_key,
'Content-Type': 'application/octet-stream'}
params = {'visualFeatures': 'Objects,Tags'}
response = requests.post(
analyze_url, headers=headers, params=params, data=image_data)
response.raise_for_status()
analysis = response.json()
I've printed image_data (it seems okay) and created a new resource - nothing helped. Any thoughts?
It seems the URL you are generating is wrong. Can you try the following code?
import requests

apikey = "e720e03190c41148ec555889daf2f64"
assert apikey
api_url = "https://southeastasia.api.cognitive.microsoft.com/vision/v2.0/"
analyse_api = api_url + "analyze"

def analyse_image(img):
    # img is the raw image bytes, e.g. open(path, "rb").read()
    image_data = img
    headers = {"Ocp-Apim-Subscription-Key": apikey,
               'Content-Type': 'application/octet-stream'}
    params = {'visualFeatures': 'Categories,Description,Color,Objects,Faces'}
    response = requests.post(
        analyse_api, headers=headers, params=params, data=image_data)
    response.raise_for_status()
    analysis = response.json()
    # image_caption = analysis["description"]["captions"][0]["text"].capitalize()
    people = 0
    for i in analysis['objects']:
        if i['object'] == 'person':
            people += 1
    describepeople = []
    for i in analysis['faces']:
        describepeople.append(i['gender'] + ' ' + str(i['age']))
    tags = analysis['description']['tags']
    return [people, describepeople, tags]
Something was wrong with that particular photo - the next photo was fine.
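If a single bad image is what triggers the 400, a small sketch like the following logs the error body for that photo and moves on, instead of letting raise_for_status() abort the whole loop (the endpoint, key and paths are placeholders; the parameter names match the question):

import requests

analyze_url = "https://nameofmyresource.cognitiveservices.azure.com/vision/v2.0/analyze"
headers = {'Ocp-Apim-Subscription-Key': 'YOUR_SUBSCRIPTION_KEY',
           'Content-Type': 'application/octet-stream'}
params = {'visualFeatures': 'Objects,Tags'}
img_path_list = ['photo1.jpg', 'photo2.jpg']  # placeholder paths

for img_path in img_path_list:
    with open(img_path, "rb") as f:
        image_data = f.read()
    response = requests.post(analyze_url, headers=headers, params=params, data=image_data)
    if not response.ok:
        # The error body usually says why the request was rejected,
        # for example an unsupported or corrupt image.
        print(img_path, response.status_code, response.text)
        continue
    analysis = response.json()
    print(img_path, 'analyzed successfully')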
I have a little problem with authentication. I am writing a script that gets a login and password from the user (keyboard input) and then fetches some data from a website (http, not https), but every time I run the script the response is 401. I read some similar posts here and tried these solutions:
Solution 1
from http.client import HTTPConnection
from base64 import b64encode

c = HTTPConnection("somewebsite")
userAndPass = b64encode(b"username:password").decode("ascii")
headers = {'Authorization': 'Basic %s' % userAndPass}
c.request('GET', '/', headers=headers)
res = c.getresponse()
data = res.read()
Solution 2
import requests

with requests.Session() as c:
    url = 'somewebsite'
    USERNAME = 'username'
    PASSWORD = 'password'
    c.get(url)
    login_data = dict(username=USERNAME, password=PASSWORD)
    c.post(url, data=login_data)
    page = c.get('somewebsite', headers={"Referer": "somewebsite"})
    print(page)
Solution 3
import urllib.parse
import urllib.request

www = 'somewebsite'
value = {'filter': 'somefilter'}
data = urllib.parse.urlencode(value)
data = data.encode('utf-8')
req = urllib.request.Request(www, data)
resp = urllib.request.urlopen(req)
respData = resp.read()
print(respData)
x = urllib.request.urlopen(www, "username", "password")
print(x.read())
I don't know how to solve this problem. Can somebody give me a link or a tip?
Have you tried the Basic Authentication example from requests?
>>> import requests
>>> from requests.auth import HTTPBasicAuth
>>> requests.get('https://api.github.com/user', auth=HTTPBasicAuth('user', 'pass'))
<Response [200]>
Can I know what type of authentication the website uses?
This is an official Basic Auth example (http://docs.python-requests.org/en/master/user/advanced/#http-verbs):
import requests
from requests.auth import HTTPBasicAuth

auth = HTTPBasicAuth('fake@example.com', 'not_a_real_password')
r = requests.post(url=url, data=body, auth=auth)  # url and body defined elsewhere
print(r.status_code)
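To find out which scheme the site expects, one option is to make an unauthenticated request and inspect the WWW-Authenticate header on the 401 response; a small sketch (the URL is a placeholder):

import requests

r = requests.get('http://somewebsite/protected')  # placeholder URL
if r.status_code == 401:
    # Typically something like 'Basic realm="..."' or 'Digest realm="...", nonce="..."'
    print(r.headers.get('WWW-Authenticate'))

If it reports Digest rather than Basic, requests also provides HTTPDigestAuth in the same requests.auth module.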
To use an API that requires authentication, we need a token or app id that grants access to our request. Below is an example of how to build the URL and get the response:
import requests
city = input()
api_call = "http://api.openweathermap.org/data/2.5/weather?"
app_id = "892d5406f4811786e2b80a823c78f466"
req_url = api_call + "q=" + city + "&appid=" + app_id
response = requests.get(req_url)
data = response.json()
if data["cod"] == 200:
    hum = data["main"]["humidity"]
    print("Humidity is %d" % hum)
else:
    print("Error occurred:", data["cod"], data["message"])
I am able to use the code below to do a GET request against the Concourse API and fetch pipeline build details.
However, the POST request to trigger a pipeline build does not work, and no error is reported.
Here is the code:
import json
import requests

url = "http://192.168.100.4:8080/api/v1/teams/main/"
r = requests.get(url + 'auth/token')
json_data = json.loads(r.text)
cookie = {'ATC-Authorization': 'Bearer ' + json_data["value"]}
r = requests.post(url + 'pipelines/pipe-name/jobs/job-name/builds',
                  cookies=cookie)
print(r.text)
print(r.content)
r = requests.get(url + 'pipelines/pipe-name/jobs/job-name/builds/17', cookies=cookie)
print(r.text)
You may use a Session:
[...] The Session object allows you to persist certain parameters across requests. It also persists cookies across all requests made from the Session instance [...]
import json
import requests

url = "http://192.168.100.4:8080/api/v1/teams/main/"
req_sessions = requests.Session()  # load session instance
r = req_sessions.get(url + 'auth/token')
json_data = json.loads(r.text)
cookie = {'ATC-Authorization': 'Bearer ' + json_data["value"]}
r = req_sessions.post(url + 'pipelines/pipe-name/jobs/job-name/builds', cookies=cookie)
print(r.text)
print(r.content)
r = req_sessions.get(url + 'pipelines/pipe-name/jobs/job-name/builds/17')
print(r.text)
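Note that cookies passed via the cookies= argument apply only to that single request; if you want the token attached to every call without repeating it, one option (a sketch using the same endpoints as above) is to store it in the session's cookie jar:

import json
import requests

url = "http://192.168.100.4:8080/api/v1/teams/main/"
session = requests.Session()

token = json.loads(session.get(url + 'auth/token').text)["value"]
# Persist the auth cookie on the session so every later request carries it.
session.cookies.set('ATC-Authorization', 'Bearer ' + token)

r = session.post(url + 'pipelines/pipe-name/jobs/job-name/builds')
print(r.status_code, r.text)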
I have a couple of web service calls using the requests package in Python. One is a plain form POST and WORKS:
r = requests.post('http://localhost:5000/coordinator/finished-crawl', \
data = {'competitorId':value})
And the other uses JSON and does not work:
service_url = 'http://localhost:5000/coordinator/save-page'
data = {'Url': url, 'CompetitorId': competitorID, \
'Fetched': self.generateTimestamp(), 'Html': html}
headers = {'Content-type': 'application/json'}
r = requests.post(service_url, data=json.dumps(data), headers=headers)
Now, if I include the headers as above, I get a 404, but if I do not include them, as in
r = requests.post(service_url, data=json.dumps(data))
I get a 415. I have looked at other posts on Stack Overflow and, from what I can tell, the call is correct. I have also tested the web service via Postman and it works. Can someone tell me what is wrong or point me in the right direction?
THE FULL METHOD
def saveContent(self, url, competitorID, html):
temp = self.cleanseHtml(html)
service_url = 'http://localhost:5000/coordinator/save-page'
data = {'Url': url, 'CompetitorId': competitorID, \
'Fetched': self.generateTimestamp(), \
'Html': temp}
headers = {'Content-type': 'application/json'}
r = requests.post(service_url, json=json.dumps(data), headers=headers)
r = requests.post(service_url, json=json.dumps(data))
And cleanseHTML:
def cleanseHtml(self, html):
return html.replace("\\", "\\\\")\
.replace("\"", "\\\"")\
.replace("\n", "")\
.replace("\r", "")
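For comparison, here is a small sketch of the two idiomatic ways to send JSON with requests (placeholder payload values; it only illustrates the library usage, not a fix for the 404). Passing a dict via json= lets requests serialize it and set the Content-Type header itself, whereas passing an already-dumped string through json= would encode it a second time:

import json
import requests

service_url = 'http://localhost:5000/coordinator/save-page'
# Placeholder payload; in the method above these values come from the caller.
data = {'Url': 'http://example.com', 'CompetitorId': 42,
        'Fetched': '2019-01-01T00:00:00Z', 'Html': '<html></html>'}

# Option 1: serialize the dict yourself and set the header explicitly.
r = requests.post(service_url, data=json.dumps(data),
                  headers={'Content-Type': 'application/json'})

# Option 2: pass the dict via json= and let requests serialize it and
# set Content-Type: application/json for you.
r = requests.post(service_url, json=data)
print(r.status_code)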