I'm trying to pull a complete user list from our private GitLab server instance and put it into an S3 bucket to reference whenever we need it. Eventually I will have some form of Lambda/CloudFormation setup deleting it and running it again every week to keep it up to date. I'm not great with Python, and this is what I have so far:
import json
import boto3
import re
import os
import sys
import botocore
import urllib3
from pprint import pprint
sess = boto3.Session(profile_name="sso-profile-here")
s3_client = sess.client("s3")
bucket_name = "user-statistics"
http = urllib3.PoolManager()
baseuri = "https://git.tools.dev.mycompany.net/api/v4/"
access_token = "access-token-code"
def get_gitlab_users(access_token=access_token, baseuri=baseuri):
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer {}".format(access_token),
    }
    url = "{}/users/?per_page=100&active=true&without_project_bots=true&next_page=x-next-page".format(
        baseuri
    )
    req = http.request(method="GET", url=url, headers=headers)
    result = json.loads(req.data)
    s3_client.put_object(
        Bucket=bucket_name, Key="get_users_gitlab.json", Body=json.dumps(result)
    )

if __name__ == "__main__":
    get_gitlab_users(access_token=access_token, baseuri=baseuri)
What I would like to be able to do is pull all the users from every page, and also store the result a bit more neatly in the S3 bucket. When I download it from the bucket the format is really unreadable, and I'm not sure how to improve it. Can anyone suggest anything I can do?
Please also ignore the fact that my access token is directly in the code here; it's for testing at this stage, and I will make sure it's not stored directly in code.
Thanks in advance for any suggestions.
You can try to use the python-gitlab package instead of raw HTTP requests. It makes it a lot easier to get user info:
import gitlab
baseuri = "https://git.tools.dev.mycompany.net"
access_token = "access-token-code"
gl = gitlab.Gitlab(baseuri, private_token=access_token)
users = [user.asdict() for user in gl.users.list(all=True)]  # all=True retrieves every page
users
# [{'id': 1,
# 'username': 'username1',
# 'name': 'name1',
# 'state': 'active',
# 'avatar_url': 'https://avatar.com/1',
# 'web_url': 'https://git.tools.dev.mycompany.net/username1'},
# ...]
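To tie that back to the original goal, here is a rough sketch (untested, reusing the bucket name and SSO profile from the question) of fetching every page of users and uploading a readable, pretty-printed JSON file to S3; indent=2 in json.dumps is what makes the downloaded file human-readable:
import json
import boto3
import gitlab

baseuri = "https://git.tools.dev.mycompany.net"
access_token = "access-token-code"
bucket_name = "user-statistics"

gl = gitlab.Gitlab(baseuri, private_token=access_token)

# all=True follows the pagination headers for you; the same filters from the
# question (active, without_project_bots) can be passed as keyword arguments
users = [user.asdict() for user in gl.users.list(all=True)]

sess = boto3.Session(profile_name="sso-profile-here")
s3_client = sess.client("s3")

# indent=2 pretty-prints the JSON so the file downloaded from S3 is readable
s3_client.put_object(
    Bucket=bucket_name,
    Key="get_users_gitlab.json",
    Body=json.dumps(users, indent=2),
)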
I'm trying to use the sandbox from https://fhir.epic.com/ for Backend Services.
I am following this tutorial: https://fhir.epic.com/Documentation?docId=oauth2&section=BackendOAuth2Guide
I already registered a new app,
created a JWT (using SSL keys), and
tested the JWT on https://jwt.io/: it works fine!
But I cannot POST the JWT to the endpoint to obtain the access token. I should send a POST request to this URL: https://fhir.epic.com/interconnect-fhir-oauth/oauth2/token.
I'm using Python and this is my code so far:
import json
import requests
from datetime import datetime, timedelta, timezone
from requests.structures import CaseInsensitiveDict
from jwt import (
JWT,
jwk_from_dict,
jwk_from_pem,
)
from jwt.utils import get_int_from_datetime
def main():
    instance = JWT()

    message = {
        # Client ID for non-production
        'iss': '990573e-13e3-143b-8b03-4fbb577b660',
        'sub': '990573e-13e3-143b-8b03-4fbb577b660',
        'aud': 'https://fhir.epic.com/interconnect-fhir-oauth/oauth2/token',
        'jti': 'f9eaafba-2e49-11ea-8880-5ce0c5aee679',
        'iat': get_int_from_datetime(datetime.now(timezone.utc)),
        'exp': get_int_from_datetime(datetime.now(timezone.utc) + timedelta(hours=1)),
    }

    # Load an RSA key from a PEM file.
    with open('/home/user/ssl/privatekey.pem', 'rb') as fh:
        signing_key = jwk_from_pem(fh.read())

    compact_jws = instance.encode(message, signing_key, alg='RS384')
    print(compact_jws)

    headers = CaseInsensitiveDict()
    headers['Content-Type'] = 'application/x-www-form-urlencoded'

    data = {
        'grant_type': 'client_credentials',
        'client_assertion_type': 'urn:ietf:params:oauth:client-assertion-type:jwt-bearer',
        'client_assertion': compact_jws
    }

    x = requests.post('https://fhir.epic.com/interconnect-fhir-oauth/oauth2/token', headers=headers, data=data)
    print(x.text)

if __name__ == '__main__':
    main()
But I always get a 400 error:
{
  "error": "invalid_client",
  "error_description": null
}
Is the URL correct? How can I get an Access Token to play with the Sandbox?
'exp': get_int_from_datetime(datetime.now(timezone.utc) + timedelta(hours=1)),
At first glance, this appears to be your issue. Epic requires that exp be no more than 5 minutes in the future.
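For example, the claim could be tightened like this (a sketch based on the question's code; four minutes is just an arbitrary value comfortably under Epic's five-minute limit):
'exp': get_int_from_datetime(datetime.now(timezone.utc) + timedelta(minutes=4)),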
A couple of pieces of advice beyond that:
Use a library available from jwt.io
Jwt.io also has a debugger where you can paste in your JWT to verify it is valid
I am learning how to use the Kucoin API and am having trouble authenticating myself to the API server.
I am trying to load all of the active orders, but I keep getting a 401 error.
The Kucoin API documentation states that I need to add this:
{
  "KC-API-KEY": "59c5ecfe18497f5394ded813",
  "KC-API-NONCE": 1506219855000,  // Client timestamp (exact to milliseconds); calibrate your time before using it, as the server does not accept calls with a time difference of more than 3 seconds
  "KC-API-SIGNATURE": "fd83147802c361575bbe72fef32ba90dcb364d388d05cb909c1a6e832f6ca3ac"  // signature after client encryption
}
as parameters in the request headers. I am unsure what this means. Any help will be appreciated.
Creating the header can be a little tricky.
For the nonce value, or any millisecond timestamp value, I've found the best way to generate it is like this:
import time
int(time.time() * 1000)
The signature requires you to order the parameters alphabetically in query-string format, combine that with the path and the nonce, and then hash the string using HMAC-SHA256 with your secret key.
If you'd like to implement it yourself, feel free to copy the code from here; it's split over a few functions and should be quite readable: https://github.com/sammchardy/python-kucoin/blob/0ece729c406056a428a57853345c9931d449be02/kucoin/client.py#L117
Or alternatively you may be best off just using that library. (Note: I'm the author and maintainer of python-kucoin)
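Condensed from that linked client code, the signing step for the old v1 endpoints looks roughly like this (a sketch only; endpoint, nonce and params are placeholders, and the library remains the authoritative reference):
import base64
import hashlib
import hmac

def sign_v1(api_secret, endpoint, nonce, params):
    # order the query parameters alphabetically, e.g. "limit=20&symbol=KCS-BTC"
    query_string = '&'.join('{}={}'.format(k, params[k]) for k in sorted(params))
    # combine path, nonce and query string, then HMAC-SHA256 with the secret key
    sig_str = '{}/{}/{}'.format(endpoint, nonce, query_string).encode('utf-8')
    return hmac.new(api_secret.encode('utf-8'),
                    base64.b64encode(sig_str),
                    hashlib.sha256).hexdigest()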
Here is my working code in Python 3:
import requests
import json
import hmac
import hashlib
import base64
from urllib.parse import urlencode
import time
api_key = 'xxxxx'
api_secret = 'xx-xxx-xx'
api_passphrase = 'xxx' #note that this is *not* trading password
base_uri = 'https://api.kucoin.com'
def get_headers(method, endpoint):
    now = int(time.time() * 1000)
    str_to_sign = str(now) + method + endpoint
    signature = base64.b64encode(hmac.new(api_secret.encode(), str_to_sign.encode(), hashlib.sha256).digest()).decode()
    passphrase = base64.b64encode(hmac.new(api_secret.encode(), api_passphrase.encode(), hashlib.sha256).digest()).decode()
    return {'KC-API-KEY': api_key,
            'KC-API-KEY-VERSION': '2',
            'KC-API-PASSPHRASE': passphrase,
            'KC-API-SIGN': signature,
            'KC-API-TIMESTAMP': str(now)
            }
#List Accounts
method = 'GET'
endpoint = '/api/v1/accounts'
response = requests.request(method, base_uri+endpoint, headers=get_headers(method,endpoint))
print(response.status_code)
print(response.json())
Output
200
{'code': '200000', 'data': [{'available': blah,blah,blah }]}
I have only managed to use the Emotion API subscription key for pictures, never for videos. It makes no difference whether I use the API Testing Console or call the Emotion API from Python 2.7. In both cases I get a response status 202 Accepted; however, when opening the Operation-Location it says
{ "error": { "code": "Unauthorized", "message": "Access denied due to
invalid subscription key. Make sure you are subscribed to an API you are
trying to call and provide the right key." } }
On the Emotion API explanatory page it says that Response 202 means that
The service has accepted the request and will start the process later.
In the response, there is a "Operation-Location" header. Client side should further query the operation status from the URL specified in this header.
Then there is Response 401, which is exactly what opening my Operation-Location returns. I do not understand why I'm getting a response 202 that then behaves like a response 401.
I have tried to call the API with Python using at least three code versions that I found on the Internet, which all amount to the same thing. I found the code here: Microsoft Emotion API for Python - upload video from memory
import httplib
import urllib
import base64
import json
import pandas as pd
import numpy as np
import requests
_url = 'https://api.projectoxford.ai/emotion/v1.0/recognizeInVideo'
_key = '**********************'
_maxNumRetries = 10
paramsPost = urllib.urlencode({'outputStyle': 'perFrame',
                               'file': 'C:/path/to/file/file.mp4'})
headersPost = dict()
headersPost['Ocp-Apim-Subscription-Key'] = _key
headersPost['content-type'] = 'application/octet-stream'
jsonGet = {}
headersGet = dict()
headersGet['Ocp-Apim-Subscription-Key'] = _key
paramsGet = urllib.urlencode({})
responsePost = requests.request('post', _url + "?" + paramsPost,
                                data=open('C:/path/to/file/file.mp4', 'rb').read(),
                                headers=headersPost)
print responsePost.status_code
videoIDLocation = responsePost.headers['Operation-Location']
print videoIDLocation
Note that changing _url = 'https://api.projectoxford.ai/emotion/v1.0/recognizeInVideo' to _url =
'https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognizeInVideo' doesn't help.
However, afterwards I wait and run this every half hour:
getResponse = requests.request('get', videoIDLocation, json=jsonGet,
                               data=None, headers=headersGet, params=paramsGet)
print json.loads(getResponse.text)['status']
The outcome has been 'Running' for hours and my video is only about half an hour long.
Here is what my Testing Console looks like: [screenshot: Testing Console for Emotion API, Emotion Recognition in Video]
Here I used another video that is about 5 minutes long and available on the internet. I found the video in a different usage example, https://benheubl.github.io/data%20analysis/fr/, which uses very similar code. Again I get a response status 202 Accepted, and when opening the Operation-Location the subscription key is reported as invalid.
Here is the code:
import httplib
import urllib
import base64
import json
import pandas as pd
import numpy as np
import requests
# you have to sign up for an API key, which has some allowances. Check the
# API documentation for further details:
_url = 'https://api.projectoxford.ai/emotion/v1.0/recognizeinvideo'
_key = '*********************'  # here you have to paste your primary key
_maxNumRetries = 10
# URL direction: I hosted this on my domain
urlVideo = 'http://datacandy.co.uk/blog2.mp4'
# Computer Vision parameters
paramsPost = { 'outputStyle' : 'perFrame'}
headersPost = dict()
headersPost['Ocp-Apim-Subscription-Key'] = _key
headersPost['Content-Type'] = 'application/json'
jsonPost = { 'url': urlVideo }
responsePost = requests.request('post', _url, json=jsonPost, data=None,
                                headers=headersPost, params=paramsPost)
if responsePost.status_code == 202:  # everything went well!
    videoIDLocation = responsePost.headers['Operation-Location']
    print videoIDLocation
There are further examples on the internet and they all seem to work for their authors, but replicating any of them never worked for me. Does anyone have any idea what could be wrong?
The video feature of the Emotion API retires October 30th, so you may want to switch your procedure to screenshots anyway.
But to your question: the API returns a URL where your results are accessible. You cannot open this URL in your browser; that gives you the "invalid key" notice. Instead, you need to call that URL from Python again, including your key.
I will post my code for getting the score. I am using Python 3, so some adjustments might be necessary. The only "tricky" point is getting the operation ID, which is just the ID at the end of the Operation-Location URL (= location in my case) that leads to your request. The rest of the parameters, like the subscription key, are as before.
import http.client

# params and headers are defined as before (subscription key etc.)

# extract operation ID from the location string
OID = location[67:]
bod = ""

try:
    conn = http.client.HTTPSConnection('westus.api.cognitive.microsoft.com')
    conn.request("GET", "/emotion/v1.0/operations/" + OID + "?%s" % params, bod, headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print("[Errno {0}] {1}".format(e.errno, e.strerror))
Did you verify your API call is working using curl? Always prototype calls using curl first. If it works in curl but not in Python, use Fiddler to observe the API request and response.
I also found an answer at the following link, where all the steps are explained:
https://gigaom.com/2017/04/10/discover-your-customers-deepest-feelings-using-microsoft-facial-recognition/
I'm trying to post the data from a DataFrame to the Watson Personality Insights API, using Object Storage in IBM Data Science Experience.
I've loaded the txt file into Object Storage and created a DataFrame; that works fine. I don't understand how to post the data in the DataFrame to the API, and the provided documentation does not point me in the right direction.
This is what I've done:
from io import StringIO
import requests
import json
import pandas as pd
def get_object_storage_file_with_credentials(container, filename):
    """This function returns a StringIO object containing
    the file content from Bluemix Object Storage."""
    url1 = ''.join(['https://identity.open.softlayer.com', '/v3/auth/tokens'])
    data = {
        'auth': {
            'identity': {
                'methods': ['password'],
                'password': {
                    'user': {
                        'name': 'UID UID UID',
                        'domain': {
                            'id': 'ID ID ID'
                        },
                        'password': 'PASS PASS'
                    }
                }
            }
        }
    }
    headers1 = {'Content-Type': 'application/json'}
    resp1 = requests.post(url=url1, data=json.dumps(data), headers=headers1)
    resp1_body = resp1.json()
    for e1 in resp1_body['token']['catalog']:
        if e1['type'] == 'object-store':
            for e2 in e1['endpoints']:
                if e2['interface'] == 'public' and e2['region'] == 'dallas':
                    url2 = ''.join([e2['url'], '/', container, '/', filename])
    s_subject_token = resp1.headers['x-subject-token']
    headers2 = {'X-Auth-Token': s_subject_token, 'accept': 'application/json'}
    resp2 = requests.get(url=url2, headers=headers2)
    return StringIO(resp2.text)
PI_text = get_object_storage_file_with_credentials('MyDSXProjects', 'myPI.txt')
Next I want to post the DataFrame content to the API. I would like to know how; I hope someone can provide a tip. My Python knowledge is lacking here.
According to the Watson Personality Insights API reference, you can provide text, HTML or JSON input. Your dataset is available as a pandas DataFrame (for example after reading the returned StringIO with pd.read_csv). Try converting the relevant column in the DataFrame to plain text. For example:
pi_api_text = PI_text['<TEXT_COLUMN>'].str.cat(sep='. ').encode('ascii', 'ignore')
Make sure you have the Python package installed:
pip install --upgrade watson-developer-cloud
Once you have the relevant data in text format, make a call to the Watson Personality Insights API. For example:
from watson_developer_cloud import PersonalityInsightsV3

personality_insights = PersonalityInsightsV3(
    version='xxxxxxxxx',
    username='xxxxxxxxxx',
    password='xxxxxxxxxx')

profile = personality_insights.profile(
    pi_api_text, content_type='text/plain',
    raw_scores=True, consumption_preferences=True)
The response will be a JSON object containing the personality traits, which you can transform back into a pandas DataFrame.
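For instance, something along these lines could turn the Big Five traits into a DataFrame (a sketch; it assumes the v3 profile JSON exposes a 'personality' list with 'name', 'percentile' and 'raw_score' fields, which is worth double-checking against the API reference):
import pandas as pd

# flatten the top-level Big Five traits into a table of name / percentile / raw_score
traits = pd.DataFrame(profile['personality'])[['name', 'percentile', 'raw_score']]
print(traits)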
I'm trying to upload a file to a Folder object in Salesforce using Python and simple_salesforce. The file is uploaded but is empty. Can anyone tell me why, and how to fix the problem? Thanks.
import base64
import json
import requests
from simple_salesforce import Salesforce

userName = 'username'
passWord = 'password'
securitytoken = 'securitytoken'

sf = Salesforce(username=userName, password=passWord, security_token=securitytoken, sandbox=True)
sessionId = sf.session_id
body = ""
with open("Info.txt", "r") as f:
body = base64.b64encode(f.read())
response = requests.post(
    'https://cs17.my.salesforce.com/services/data/v23.0/sobjects/Document/',
    headers={'Content-type': 'application/json', 'Authorization': 'Bearer %s' % sessionId},
    data=json.dumps({
        'Description': 'Information',
        'Keywords': 'Information',
        'FolderId': '00lg0000000MQykAAG',
        'Name': 'Info',
        'Type': 'txt'
    })
)
print response.text
Based on the code above, your request is incomplete. You create the body value as the base64 encoding of the file you wish to upload, but it isn't included in your request data under a 'Body' key.
Depending on your version of json, you may experience some problems getting the output to play nice with Salesforce. I switched from json to simplejson and that worked better for me.
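A minimal sketch of that fix, reusing the names from the question (the Document object's Body field carries the base64-encoded content):
with open("Info.txt", "rb") as f:
    body = base64.b64encode(f.read())

response = requests.post(
    'https://cs17.my.salesforce.com/services/data/v23.0/sobjects/Document/',
    headers={'Content-type': 'application/json',
             'Authorization': 'Bearer %s' % sessionId},
    data=json.dumps({
        'Description': 'Information',
        'Keywords': 'Information',
        'FolderId': '00lg0000000MQykAAG',
        'Name': 'Info',
        'Type': 'txt',
        'Body': body  # the base64-encoded file content that was missing before
    })
)
print response.text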