File uploaded to Salesforce empty using python and simple_salesforce

I'm trying to upload a file to the Folder object in Salesforce using Python and simple_salesforce. The file is uploaded, but it is empty. Can anyone tell me why, and how to fix the problem? Thanks.
import base64
import json
import requests
from simple_salesforce import Salesforce

userName = 'username'
passWord = 'password'
securitytoken = 'securitytoken'

sf = Salesforce(username=userName, password=passWord, security_token=securitytoken, sandbox=True)
sessionId = sf.session_id

body = ""
with open("Info.txt", "r") as f:
    body = base64.b64encode(f.read())

response = requests.post('https://cs17.my.salesforce.com/services/data/v23.0/sobjects/Document/',
    headers={'Content-type': 'application/json', 'Authorization': 'Bearer %s' % sessionId},
    data=json.dumps({
        'Description': 'Information',
        'Keywords': 'Information',
        'FolderId': '00lg0000000MQykAAG',
        'Name': 'Info',
        'Type': 'txt'
    })
)
print response.text

Based on the code above, your request is incomplete. You build body as the base64 encoding of the file you wish to upload, but it is never included in your request data under a 'Body' key, so Salesforce creates the Document with no content.
Depending on your version of json, you may have some problems getting the output to play nicely with Salesforce. I switched from json to simplejson and that worked better for me.
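For reference, a minimal sketch of the same request with the file content included (endpoint, FolderId and other values are the ones from the question; sessionId comes from simple_salesforce exactly as above):
import base64
import json
import requests

# sessionId = sf.session_id, obtained with simple_salesforce as in the question
with open("Info.txt", "rb") as f:
    body = base64.b64encode(f.read()).decode("ascii")

response = requests.post(
    'https://cs17.my.salesforce.com/services/data/v23.0/sobjects/Document/',
    headers={'Content-type': 'application/json',
             'Authorization': 'Bearer %s' % sessionId},
    data=json.dumps({
        'Description': 'Information',
        'Keywords': 'Information',
        'FolderId': '00lg0000000MQykAAG',
        'Name': 'Info',
        'Type': 'txt',
        'Body': body,  # the base64-encoded file content that was missing
    })
)
print(response.text)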

Related

Gitlab User list API with Python and Amazon S3

I'm trying to pull a complete user list from our private GitLab server instance and put it into an S3 bucket that we can reference whenever we need. Eventually I will have some form of Lambda/cfn deleting it and running it again every week to keep it up to date. I'm not so great with Python, and this is what I have so far:
import json
import boto3
import re
import os
import sys
import botocore
import urllib3
from pprint import pprint

sess = boto3.Session(profile_name="sso-profile-here")
s3_client = sess.client("s3")
bucket_name = "user-statistics"
http = urllib3.PoolManager()
baseuri = "https://git.tools.dev.mycompany.net/api/v4/"
access_token = "access-token-code"

def get_gitlab_users(access_token=access_token, baseuri=baseuri):
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer {}".format(access_token),
    }
    url = "{}/users/?per_page=100&active=true&without_project_bots=true&next_page=x-next-page".format(
        baseuri
    )
    req = http.request(method="GET", url=url, headers=headers)
    result = json.loads(req.data)
    s3_client.put_object(
        Bucket=bucket_name, Key="get_users_gitlab.json", Body=json.dumps(result)
    )

if __name__ == "__main__":
    get_gitlab_users(access_token=access_token, baseuri=baseuri)
What I would like to be able to do is pull the users from every page, and also store the data a bit more neatly in the S3 bucket. When I download the object from the bucket, the format is really unreadable and I'm not sure how to improve it. Can anyone suggest anything I can do?
Please also ignore the fact that my access token is directly in the code here; it's only for testing at this stage and I will make sure it's not stored in code later.
Thanks in advance for any suggestions.
You can try the python-gitlab package instead of making the HTTP requests yourself. It makes it a lot easier to get user info:
import gitlab
baseuri = "https://git.tools.dev.mycompany.net"
access_token = "access-token-code"
gl = gitlab.Gitlab(baseuri, private_token=access_token)
users = [user.asdict() for user in gl.users.list()]
users
# [{'id': 1,
# 'username': 'username1',
# 'name': 'name1',
# 'state': 'active',
# 'avatar_url': 'https://avatar.com/1',
# 'web_url': 'https://git.tools.dev.mycompany.net/username1'},
# ...]
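If pagination and readability are the main concerns, a minimal sketch along these lines should cover both. It assumes a reasonably recent python-gitlab (for get_all=True and asdict()) and reuses the profile, bucket and key names from the question:
import json

import boto3
import gitlab

baseuri = "https://git.tools.dev.mycompany.net"
access_token = "access-token-code"
bucket_name = "user-statistics"

gl = gitlab.Gitlab(baseuri, private_token=access_token)

# get_all=True tells python-gitlab to follow the pagination links, so this
# returns every active user rather than just the first page of results.
users = [user.asdict() for user in gl.users.list(active=True, get_all=True)]

# indent=2 pretty-prints the JSON so the object is readable when it is
# downloaded from the bucket.
s3_client = boto3.Session(profile_name="sso-profile-here").client("s3")
s3_client.put_object(
    Bucket=bucket_name,
    Key="get_users_gitlab.json",
    Body=json.dumps(users, indent=2),
)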

posting text from DataFrame to IBM PersonalityInsights API

I'm trying to post data from a DataFrame to the Watson Personality Insights API, using Object Storage in IBM Data Science Experience.
I've loaded the txt file into Object Storage and created a DataFrame, which works fine, but I don't understand how to post the data in the DataFrame to the API. The provided documentation does not point me in the right direction.
This is what I've done:
from io import StringIO
import requests
import json
import pandas as pd

def get_object_storage_file_with_credentials(container, filename):
    """This function returns a StringIO object containing
    the file content from Bluemix Object Storage."""
    url1 = ''.join(['https://identity.open.softlayer.com', '/v3/auth/tokens'])
    data = {
        'auth': {
            'identity': {
                'methods': ['password'],
                'password': {
                    'user': {
                        'name': 'UID UID UID',
                        'domain': {
                            'id': 'ID ID ID'
                        },
                        'password': 'PASS PASS'
                    }
                }
            }
        }
    }
    headers1 = {'Content-Type': 'application/json'}
    resp1 = requests.post(url=url1, data=json.dumps(data), headers=headers1)
    resp1_body = resp1.json()
    for e1 in resp1_body['token']['catalog']:
        if e1['type'] == 'object-store':
            for e2 in e1['endpoints']:
                if e2['interface'] == 'public' and e2['region'] == 'dallas':
                    url2 = ''.join([e2['url'], '/', container, '/', filename])
                    s_subject_token = resp1.headers['x-subject-token']
                    headers2 = {'X-Auth-Token': s_subject_token, 'accept': 'application/json'}
                    resp2 = requests.get(url=url2, headers=headers2)
                    return StringIO(resp2.text)

PI_text = get_object_storage_file_with_credentials('MyDSXProjects', 'myPI.txt')
Next I want to post the DataFrame content to the API.
I would like to know how; I hope someone can provide a tip.
My Python knowledge is lacking here.
According to the Watson Personality Insights API reference, you can provide text, HTML, or JSON input. Your dataset is available as a pandas DataFrame, so try converting the relevant column of the DataFrame to plain text, for example:
pi_api_text = PI_text['<TEXT_COLUMN>'].str.cat(sep='. ').encode('ascii', 'ignore')
Make sure you have the Python package installed:
pip install --upgrade watson-developer-cloud
Once you have the relevant data in text format, make a call to the Watson Personality Insights API, for example:
from watson_developer_cloud import PersonalityInsightsV3

personality_insights = PersonalityInsightsV3(
    version='xxxxxxxxx',
    username='xxxxxxxxxx',
    password='xxxxxxxxxx')

profile = personality_insights.profile(
    pi_api_text, content_type='text/plain',
    raw_scores=True, consumption_preferences=True)
The response will be a JSON object containing the personality traits, which you can transform back into a pandas DataFrame.
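As an illustrative sketch only (the field names 'personality', 'name' and 'percentile' follow the v3 response format; verify them against the API reference for your service version):
import pandas as pd

# 'profile' is the dict returned by personality_insights.profile(...) above.
# Each entry in 'personality' is one Big Five trait with its scores.
traits_df = pd.DataFrame(profile['personality'])
print(traits_df[['name', 'percentile']])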

Authorization header in Requests package in python

I am trying to write a Python script that uses the requests package to call an online MongoDB query service API hosted within the organization. The API expects the authorization header in the format 'websitename/username:Password', using basic authentication with base64 encoding. I tried to create the GET request using the requests package with the authorization header in the following format:
import requests

headers = {'Authorization': 'Basic %s' % 'Base64encoded websitename/username:Password string here'}
content_res = requests.get(get_url, headers=headers).json()
But I am getting a parse error for the string, because I think the header string that the requests package expects is of the form 'username:password', not the desired format, i.e. 'websitename/username:password'.
Is there a way to use the base64-encoded string in the format the service is expecting, i.e. 'websitename/username:password', with the requests package?
Any help is highly appreciated.
Thanks.
It sounds to me like you are getting an HTTP response error because the authorization header value you are passing is not actually base64 encoded. To correct this you can simply encode the string using Python's base64 module:
Python 2.7 https://docs.python.org/2/library/base64.html
Python 3.5 https://docs.python.org/3.5/library/base64.html
An example would be something like this:
# Python 2
import base64
import requests

website = 'example.com'
username = 'root'
password = '1234'

auth_str = '%s/%s:%s' % (website, username, password)
b64_auth_str = base64.b64encode(auth_str)

headers = {'Authorization': 'Basic %s' % b64_auth_str}
content_res = requests.get(get_url, headers=headers).json()

# Python 3
import base64
import requests

website = 'example.com'
username = 'root'
password = '1234'

auth_str = '%s/%s:%s' % (website, username, password)
# b64encode works on bytes and returns bytes, so encode the input first and
# decode the result to get a plain string for the header value.
b64_auth_str = base64.b64encode(auth_str.encode('ascii')).decode('ascii')

headers = {'Authorization': 'Basic %s' % b64_auth_str}
content_res = requests.get(get_url, headers=headers).json()
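As a side note (a sketch, not part of the original answer): requests can also build the header for you, because it simply joins the username and password you pass with a colon and base64-encodes the result, so 'websitename/username' can be passed as the username:
import requests
from requests.auth import HTTPBasicAuth

website = 'example.com'   # illustrative values, as in the example above
username = 'root'
password = '1234'

# get_url is the API endpoint variable from your script.
# requests base64-encodes "example.com/root:1234" and sets the
# Authorization header itself.
content_res = requests.get(get_url,
                           auth=HTTPBasicAuth('%s/%s' % (website, username), password)).json()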

Project Oxford Speaker Recognition- Invalid Audio Format

I have been trying a lot to use the Project Oxford Speaker Recognition API
(https://dev.projectoxford.ai/docs/services/563309b6778daf02acc0a508/operations/5645c3271984551c84ec6797).
I have been successfully able to record the sound from my microphone and convert it to the required WAV format (PCM, 16-bit, 16 kHz, mono).
The problem is that when I try to post this file as a binary stream to the API, it returns an 'Invalid Audio Format' error message.
The same file is accepted by the demo on the website(https://www.projectoxford.ai/demo/SPID).
I am using python 2.7 with this code.
import httplib
import urllib
import base64
import json
import codecs

headers = {
    # Request headers
    'Content-Type': 'application/octet-stream',
    'Ocp-Apim-Subscription-Key': '{KEY}',
}

params = urllib.urlencode({
})

def enroll(audioId):
    conn = httplib.HTTPSConnection('api.projectoxford.ai')
    file = open('test.wav', 'rb')
    body = file.read()
    conn.request("POST", "/spid/v1.0/verificationProfiles/" + audioId + "/enroll?%s" % params, str(body), headers)
    response = conn.getresponse()
    data = response.read()
    print data
    conn.close()
    return data
And this is the response that I am getting:
{
  "error": {
    "code": "BadRequest",
    "message": "Invalid Audio Format"
  }
}
Please, can anyone guide me as to what I am missing? I have verified all the properties of the audio file against the requirements of the API, but with no luck.
All answers and comments are appreciated.
I sent this file to Project Oxford with my test program, which is written in Ruby, and it works properly, so I think the issue might be in the other parameters you are sending. Try changing your 'Content-Type' header to 'audio/wav; samplerate=1600'; this is the header that I used. I also send a 'Content-Length' header with the size of the file. I'm not sure whether 'Content-Length' is required, but it is good practice to include it.
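A minimal sketch of that suggestion applied to the code above (the samplerate value is the one quoted in this answer; double-check it against the actual sample rate of your recording and the API docs):
import httplib
import urllib

params = urllib.urlencode({})

def enroll(audioId):
    conn = httplib.HTTPSConnection('api.projectoxford.ai')
    with open('test.wav', 'rb') as f:
        body = f.read()
    headers = {
        # Header values suggested in the answer above.
        'Content-Type': 'audio/wav; samplerate=1600',
        'Content-Length': str(len(body)),
        'Ocp-Apim-Subscription-Key': '{KEY}',
    }
    conn.request("POST", "/spid/v1.0/verificationProfiles/" + audioId + "/enroll?%s" % params, body, headers)
    response = conn.getresponse()
    data = response.read()
    conn.close()
    return data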

parsing JSON formatted data in python

Hey, I am trying to import data that is already formatted as JSON. I want to read it into Python so I can use it in an HTTP POST request. I have tried saving it as .json and .txt and using json.dumps on both files, but I'm still sending it in the wrong format; the code is below. I am guessing it is being read in the wrong format, since the response to the POST is an error. However, when I use Postman for the same job, there is no error.
import json
import requests

workingFile = 'D:\\test.json'
file = open(workingFile, 'r')
read = [file.read()]
data = json.dumps(read)

url = 'http://webaddress'
username = 'username'
password = 'password'
requestpost = requests.post(url, data, auth=(username, password))
The corrected version loads the JSON from the file with json.load and lets requests handle the encoding:
import json
import requests

workingFile = 'D:\\test.json'
with open(workingFile, 'r') as fh:
    data = json.load(fh)

url = 'http://webaddress'
username = 'username'
password = 'password'
requestpost = requests.post(url, json=data, auth=(username, password))
By specifying json=data, requests encodes the payload as JSON instead of form data.
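To see why the original version confused the server, compare the two payloads (a small illustrative snippet, not from the thread):
import json

raw = '{"name": "example"}'   # what file.read() returns: JSON as a plain string

# Wrapping the text in a list and dumping it again produces a JSON array
# containing one escaped string, which is not the object the server expects:
print(json.dumps([raw]))      # ["{\"name\": \"example\"}"]

# json.load / json.loads parses the text into a Python object, which
# requests then serializes correctly when you pass json=data:
print(json.loads(raw))        # {'name': 'example'}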
To read json data from file
Parsing values from a JSON file using Python?
To read json data from string
Convert string to JSON using Python
