Is there a library or package which we can use with python to connect to salesforce and get data?
I use beatbox
Here's an example that queries for a Lead by email address:
import beatbox

sf_username = "Username"
sf_password = "password"
sf_api_token = "api token"

def get_lead_records_by_email(email):
    sf_client = beatbox.PythonClient()
    password = str("%s%s" % (sf_password, sf_api_token))
    sf_client.login(sf_username, password)
    lead_qry = "SELECT id, Email, FirstName, LastName, OwnerId FROM Lead WHERE Email = '%s'" % (email)
    records = sf_client.query(lead_qry)
    return records
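A minimal usage sketch (the email address is a placeholder):
records = get_lead_records_by_email("someone@example.com")
for record in records:
    print(record)  # each record carries the queried Lead fields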
To get other data, look at the Salesforce API docs.
You can view other beatbox examples here.
There's also a package called simple_salesforce.
You can install it with:
$ pip install simple_salesforce
You can gain access to your Salesforce account with the following:
from simple_salesforce import Salesforce
sf = Salesforce(username='youremail@abc.com', password='password', security_token='token')
The README is helpful with regard to the details.
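For example, a SOQL query roughly equivalent to the beatbox Lead lookup above can be run like this (a minimal sketch; the email address is a placeholder):
from simple_salesforce import Salesforce

sf = Salesforce(username='youremail@abc.com', password='password', security_token='token')
# Query Leads by email address using SOQL
result = sf.query("SELECT Id, Email, FirstName, LastName, OwnerId FROM Lead WHERE Email = 'someone@example.com'")
for record in result['records']:
    print(record['FirstName'], record['LastName'])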
This one is the best in my experience:
http://code.google.com/p/salesforce-python-toolkit/
Here is ready-to-use code to get anyone started with fetching reports from SFDC.
import pandas as pd
import numpy as np
from pandas import DataFrame, Series
from simple_salesforce import Salesforce  # import simple_salesforce

sf = Salesforce(username='youremail@domain.com', password='enter_password', security_token='Salesforce_token')
The Salesforce security token is emailed to you every time you change your password.
import requests  # import requests
from io import StringIO  # to read web data

session = requests.Session()  # start a session

report_id = 'xxxxxxxxxxxx'  # your Salesforce report id
error_report_defined = session.get(
    "https://na4.salesforce.com/{}?export=1&enc=UTF-8&xf=csv".format(report_id),
    headers=sf.headers, cookies={'sid': sf.session_id})
df_sfdc_error_report_defined = pd.read_csv(StringIO(error_report_defined.text))
df_sfdc_error_report_defined.to_csv('defined.csv', encoding='utf-8')
error_report = pd.read_csv('defined.csv')  # your report is saved in CSV format
print(error_report)
Although this is not Python-specific, I came across a cool tool for the command line; you could shell out to it as an option (see the sketch after the command list below).
https://force-cli.heroku.com/
Usage: force <command> [<args>]
Available commands:
login force login [-i=<instance>] [<-u=username> <-p=password>]
logout Log out from force.com
logins List force.com logins used
active Show or set the active force.com account
whoami Show information about the active account
describe Describe the object or list of available objects
sobject Manage standard & custom objects
bigobject Manage big objects
field Manage sobject fields
record Create, modify, or view records
bulk Load csv file use Bulk API
fetch Export specified artifact(s) to a local directory
import Import metadata from a local directory
export Export metadata to a local directory
query Execute a SOQL statement
apex Execute anonymous Apex code
trace Manage trace flags
log Fetch debug logs
eventlogfile List and fetch event log file
oauth Manage ConnectedApp credentials
test Run apex tests
security Displays the OLS and FLS for a give SObject
version Display current version
update Update to the latest version
push Deploy artifact from a local directory
aura force aura push -resourcepath=<filepath>
password See password status or reset password
notify Should notifications be used
limits Display current limits
help Show this help
datapipe Manage DataPipes
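For instance, once you have authenticated with force login, you can shell out to the CLI from Python. This is a minimal sketch; it assumes the query subcommand accepts a SOQL string as its argument, as the help above suggests:
import subprocess

# Run a SOQL query through the force CLI and capture its output
result = subprocess.run(
    ["force", "query", "SELECT Id, Email FROM Lead LIMIT 10"],
    capture_output=True, text=True, check=True)
print(result.stdout)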
I have read some official and unofficial documentation on the internet, and I am currently unable to import BigQuery logs of resource type "bigquery_resource" (to capture all the insert, update, merge, ... operations on my GCP project) from a GCP project where I am owner, using Python on my local machine.
Mandatory prerequisites:
Only use scripts to read and catch the logs with a filter, without creating Cloud Functions, data in buckets, manual actions from a user on the GCP project, etc.
Use a service account in the process.
Import the BigQuery logs from GCP to my local machine when I execute my Python script.
Here is the code where I try to get the logs:
import google.protobuf
from google.cloud.bigquery_logging_v1 import AuditData
import google.cloud.logging
from datetime import datetime, timedelta, timezone
import os

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "C:\\mypath\\credentials.json"
project_id = os.environ["GOOGLE_CLOUD_PROJECT"] = "project1"

yesterday = datetime.now(timezone.utc) - timedelta(days=2)
time_format = "%Y-%m-%dT%H:%M:%S.%f%z"

filter_str = (
    f'logName="projects/{project_id}/logs/cloudaudit.googleapis.com%2Factivity"'
    f' AND resource.type="bigquery_resource"'
    f' AND timestamp>="{yesterday.strftime(time_format)}"'
)

client = google.cloud.logging.Client(project="project1")

for entry in client.list_entries(filter_=filter_str):
    decoded_entry = entry.to_api_repr()
    #print(decoded_entry)
    print(entry)  # the same output as print(decoded_entry)

open("C:\\mypath\\logs.txt", "w").close()
with open("C:\\mypath\\logs.txt", "w") as f:
    for entry in client.list_entries(filter_=filter_str):
        f.write(entry)
Unfortunately, it doesn't work (and my code is messy): I get a ProtobufEntry in the entry variable, and I don't know how to get my data from my GCP project in a proper way.
Any help is welcome! (Please don't answer with a deprecated answer from ChatGPT.)
Here is how I export my logs without creating a bucket, sink, Pub/Sub topic, Cloud Function, BigQuery table, etc.
=> Only one service account with rights on my project and one .py script on my local machine, with an option added to the Python script to scan only BigQuery resources during the last hour.
I add the path of gcloud because I had some problems with the PATH environment variable on my local machine when using Popen; you may not need to do this.
from subprocess import Popen, PIPE
import json
from google.cloud.bigquery_logging_v1 import AuditData
import google.cloud.logging
from datetime import datetime, timedelta, timezone
import os

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "C:\\Users\\USERAAAA\\Documents\\Python Scripts\\credentials.json"

gcloud_path = "C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\gcloud.cmd"
process = Popen([gcloud_path, "logging", "read",
                 "resource.type=bigquery_resource AND logName=projects/PROJECTGCP1/logs/cloudaudit.googleapis.com%2Fdata_access",
                 "--freshness=1h"], stdout=PIPE, stderr=PIPE)
stdout, stderr = process.communicate()
output_str = stdout.decode()

# write the output string to a file
with open("C:\\Users\\USERAAAA\\Documents\\Python_Scripts\\testes.txt", "w") as f:
    f.write(output_str)
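The script above imports json but writes raw text. If you want structured entries instead, gcloud can emit JSON through its global --format flag, which you can then parse; a sketch using the same path and filter as above:
import json
from subprocess import Popen, PIPE

gcloud_path = "C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\gcloud.cmd"  # same path as above
process = Popen([gcloud_path, "logging", "read",
                 "resource.type=bigquery_resource AND logName=projects/PROJECTGCP1/logs/cloudaudit.googleapis.com%2Fdata_access",
                 "--freshness=1h", "--format=json"], stdout=PIPE, stderr=PIPE)
stdout, stderr = process.communicate()
entries = json.loads(stdout.decode())  # gcloud returns a JSON array of log entries
print(len(entries), "entries fetched")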
One way to achieve this is as follows:
Create a dedicated logging sink for BigQuery logs:
gcloud logging sinks create my-example-sink bigquery.googleapis.com/projects/my-project-id/datasets/auditlog_dataset \
  --log-filter='protoPayload.metadata."@type"="type.googleapis.com/google.cloud.audit.BigQueryAuditMetadata"'
The above command creates a logging sink that routes to a dataset named auditlog_dataset and only includes BigQueryAuditMetadata messages. Refer to BigQueryAuditMetadata for all the events captured as part of GCP audit data.
Create a service account and give it access to the dataset created above.
For creating the service account refer here, and for granting access to the dataset refer here.
Use this service account to authenticate from your local environment and query the dataset using the BigQuery Python client to get the filtered BigQuery audit data (a sketch with explicit service-account credentials follows the snippet below).
from google.cloud import bigquery

client = bigquery.Client()

# Select rows from the log dataset
QUERY = (
    'SELECT name FROM `MYPROJECTID.MYDATASETID.cloudaudit_googleapis_com_activity` '
    'LIMIT 100')
query_job = client.query(QUERY)  # API request
rows = query_job.result()  # waits for the query to finish

for row in rows:
    print(row.name)
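If you authenticate from your local environment with the service account key file rather than application default credentials, you can pass the credentials to the client explicitly. This is a sketch; the key path, project id, and dataset/table names are placeholders:
from google.cloud import bigquery
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file("C:\\mypath\\credentials.json")
client = bigquery.Client(project="MYPROJECTID", credentials=credentials)

# Query the exported audit log table created by the sink
query = (
    'SELECT timestamp '
    'FROM `MYPROJECTID.MYDATASETID.cloudaudit_googleapis_com_activity` '
    'ORDER BY timestamp DESC LIMIT 100')
for row in client.query(query).result():
    print(row)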
Also, you can query the audit tables from the console directly.
Reference: BigQuery audit logging.
Another option is to use a Python script to query log events, and one more option is to use Cloud Pub/Sub to route logs to external (outside of GCP) clients.
I mostly prefer to keep the filtered logs in a dedicated Log Analytics bucket, query them as needed, and create custom log-based metrics using Cloud Monitoring. Moving logs out of GCP may incur network egress charges; refer to the documentation if you are querying a large volume of data.
I am trying to download a report I created on Salesforce using the simple_salesforce package in Python.
Below is the sample code:
from simple_salesforce import Salesforce
import requests
import pandas as pd
from io import StringIO
sf = Salesforce(username='myusername',
                password='mypassword',
                security_token='mytoken',
                version='46.0')
report_id = 'myreportid'
sf.restful('analytics/reports/{}'.format(report_id))
However, this chunk of code yields the following error:
SalesforceExpiredSession: Expired session for https://company_name.my.salesforce.com/services/data/v46.0/analytics/reports/myreporid. Response content: [{'message': 'This session is not valid for use with the REST API', 'errorCode': 'INVALID_SESSION_ID'}]
(continuing from comments)
My bad, typo. Does your Profile have "API Enabled" checkbox ticked?
And you said you can see a successful login in the login history, and an active session?
What happens when you try to do the same thing manually with Workbench? Log in, then in the top menu Utilities -> REST Explorer should let you run your report.
Maybe simple_salesforce is creating a SOAP session id which, for whatever reason, is incompatible with the REST API (to be fair, I thought they were pretty interchangeable; maybe your company disabled the REST API, I heard it's possible to request that from SF support...).
If Workbench works, you may have to log in with simple_salesforce a different way, by creating a "connected app" and reading up on https://help.salesforce.com/s/articleView?id=remoteaccess_oauth_username_password_flow.htm&type=5&language=en_US for example. A rough sketch of that flow is below.
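If you go down the connected-app route, a rough sketch of the username-password OAuth flow with simple_salesforce looks like this. All values are placeholders, the consumer key/secret come from your connected app, and whether this flow is allowed at all depends on your org's settings:
import requests
from simple_salesforce import Salesforce

# Exchange credentials for an access token via the OAuth username-password flow
resp = requests.post(
    "https://login.salesforce.com/services/oauth2/token",
    data={"grant_type": "password",
          "client_id": "YOUR_CONNECTED_APP_CONSUMER_KEY",
          "client_secret": "YOUR_CONNECTED_APP_CONSUMER_SECRET",
          "username": "myusername",
          "password": "mypassword" + "mytoken"})  # password + security token
resp.raise_for_status()
auth = resp.json()

# Reuse the OAuth access token as the session id for simple_salesforce
sf = Salesforce(instance_url=auth["instance_url"], session_id=auth["access_token"])
report = sf.restful("analytics/reports/{}".format("myreportid"))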
I'm writing a program to find changesets based on created date using Microsoft TFS Python Library (TFS API Python client).
I read through the documentation and found that the get_changesets() method can be used for this. But there are no arguments that can help filter out the changesets based on date.
On further reading, I found that get_tfs_resource() could be used, but being new to using APIs, I cannot figure out how to set payload for the method call, that would help me to filter out the changesets using date.
Can someone help me out with the correct methods to be used, or the payload that can be sent as specified?
You can use the TFS REST API to get changesets, like this:
https://{instance}/{collection}/{project}/_apis/tfvc/changesets?searchCriteria.fromDate=2020-03-11&searchCriteria.toDate=2020-03-12&api-version=4.1
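If you prefer to call that REST endpoint directly from Python instead of going through a client library, a minimal sketch with requests and a personal access token looks like this (instance, collection, project, and PAT are placeholders; Azure DevOps/TFS accepts a PAT via basic auth with an empty username):
import requests

url = ("https://YOURINSTANCE/YOURCOLLECTION/YOURPROJECT/_apis/tfvc/changesets"
       "?searchCriteria.fromDate=2020-03-11&searchCriteria.toDate=2020-03-12&api-version=4.1")
response = requests.get(url, auth=("", "YOURPAT"))
response.raise_for_status()
for changeset in response.json()["value"]:
    print(changeset["changesetId"], changeset["createdDate"])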
Alternatively, you may check the code below, which uses the Azure DevOps Python API:
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
import pprint
# Fill in with your personal access token and org URL
personal_access_token = 'YOURPAT'
organization_url = 'https://dev.azure.com/YOURORG'
# Create a connection to the org
credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)
tfvc_client = connection.clients.get_tfvc_client()
project_name = "myprojname"
search_criteria = type('', (object,), {"item_path": '$/myprojname/Trunk/Main', "fromDate": '01-01-2019', "toDate": '11-13-2019-2:00PM'})
changeset_info = tfvc_client.get_changesets(project=project_name, search_criteria=search_criteria)
References:
https://github.com/microsoft/azure-devops-python-api
https://github.com/microsoft/azure-devops-python-api/blob/dev/azure-devops/azure/devops/v5_1/tfvc/tfvc_client.py
How to access azure Dev Ops data such as Changeset between dates using python?
Authenticating requests, especially with Google's APIs, is so incredibly confusing!
I'd like to make authorized HTTP POST requests through python in order to query data from the datastore. I've got a service account and p12 file all ready to go. I've looked at the examples, but it seems no matter which one I try, I'm always unauthorized to make requests.
Everything works fine from the browser, so I know my permissions are all in order. So I suppose my question is, how do I authenticate, and request data securely from the Datastore API through python?
I am so lost...
You probably should not be using raw POST requests to use Datastore; instead, use the gcloud library to do the heavy lifting for you.
I would also recommend the Python getting started page, as it has some good tutorials.
Finally, I recorded a podcast where I go over the basics of using Datastore with Python, check it out!
Here is the code, and here is an example:
#Import libraries
from gcloud import datastore
import os
#The next few lines will set up your environment variables
#Replace "YOUR_RPOEJCT_ID_HERE" with the correct value in code.py
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "key.json"
projectID = "YOUR_RPOEJCT_ID_HERE"
os.environ["GCLOUD_TESTS_PROJECT_ID"] = projectID
os.environ["GCLOUD_TESTS_DATASET_ID"] = projectID
datastore.set_default_dataset_id(projectID)
#Let us build a message board / news website
#First, create a fake email for our fake user
email = "me#fake.com"
#Now, create a 'key' for that user using the email
user_key = datastore.Key('User', email)
#Now create a entity using that key
new_user = datastore.Entity( key=user_key )
#Add some fields to the entity
new_user["name"] = unicode("Iam Fake")
new_user["email"] = unicode(email)
#Push entity to the Cloud Datastore
datastore.put( new_user )
#Get the user from datastore and print
print( datastore.get(user_key) )
This code is licensed under Apache v2
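Note that the gcloud package used above has since been split up and renamed; with the current google-cloud-datastore client the same flow looks roughly like this (a sketch; the project id and key path are placeholders):
import os
from google.cloud import datastore

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "key.json"
client = datastore.Client(project="YOUR_PROJECT_ID_HERE")

# Build a key and an entity for a fake user, then save it and read it back
user_key = client.key("User", "me@fake.com")
new_user = datastore.Entity(key=user_key)
new_user.update({"name": "Iam Fake", "email": "me@fake.com"})
client.put(new_user)
print(client.get(user_key))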
I'm new to Python, new to the jira-python library, and new to network programming, though I do have quite a bit of experience with application and integration programming and database queries (though it's been a while).
Using Python 2.7 and requests 1.0.3
I'm trying to use this library - http://jira-python.readthedocs.org/en/latest/ - to query Jira 5.1 using Python. I successfully connected using an unauthenticated query, though I had to make a change to a line in client.py.
I changed
self._session = requests.session(verify=verify, hooks={'args': self._add_content_type})
to
self._session = requests.session()
I didn't know what I was doing exactly but before the change I got an error and after the change I got a successful list of project names returned.
Then I tried basic authentication so I could take advantage of my Jira permissions and do reporting. That failed initially too, and I made the same change to def _create_http_basic_session in client.py, but the problem is not solved. Now I get a different error:
HTTP Status 415 - Unsupported Media Type
type Status report
message Unsupported Media Type
description The server refused this request because the request entity is in a format not supported by the requested resource for the requested method (Unsupported Media Type).
So then I decided to do a super simple test just using the requests module, which I believe is being used by the jira-python module and this code seemed to log me in. I got a good response:
import requests
r = requests.get(the_url, auth=(my_username, password))
print r.text
Any suggestions?
Here's how I use the jira module with authentication in a Python script:
from jira.client import JIRA
import logging

# Defines a function for connecting to Jira
def connect_jira(log, jira_server, jira_user, jira_password):
    '''
    Connect to JIRA. Return None on error
    '''
    try:
        log.info("Connecting to JIRA: %s" % jira_server)
        jira_options = {'server': jira_server}
        jira = JIRA(options=jira_options, basic_auth=(jira_user, jira_password))
        # ^--- Note the tuple
        return jira
    except Exception as e:
        log.error("Failed to connect to JIRA: %s" % e)
        return None

# create logger
log = logging.getLogger(__name__)

# NOTE: You put your login details in the function call connect_jira(..) below!

# create a connection object, jc
jc = connect_jira(log, "https://myjira.mydom.com", "myusername", "mypassword")

# print names of all projects
projects = jc.projects()
for v in projects:
    print(v)
The Python script below connects to Jira, does basic authentication, and lists all projects.
from jira.client import JIRA
options = {'server': 'Jira-URL'}
jira = JIRA(options, basic_auth=('username', 'password'))
projects = jira.projects()
for v in projects:
    print(v)
It prints a list of all the projects available within your instance of Jira.
Problem:
As of June 2019, Atlassian Cloud users who are using a REST endpoint in Jira or Confluence Cloud with basic or cookie-based authentication will need to update their app or integration processes to use an API token, OAuth, or Atlassian Connect.
After June 5th, 2019 attempts to authenticate via basic auth with an Atlassian account password will return an invalid credentials error.
Reference: Deprecation of basic authentication with passwords for Jira and Confluence APIs
Solution to the Above-mentioned Problem:
You can use an API token to authenticate a script or other process with an Atlassian cloud product. You generate the token from your Atlassian account, then copy and paste it to the script.
If you use two-step verification to authenticate, your script will need to use a REST API token to authenticate.
Steps to Create an API Token from your Atlassian Account:
Log in to https://id.atlassian.com/manage/api-tokens
Click Create API token.
From the dialog that appears, enter a memorable and concise Label for your token and click Create.
Click Copy to clipboard, then paste the token to your script.
Reference: API tokens
Python 3.8 Code Reference
from jira.client import JIRA
jira_client = JIRA(options={'server': JIRA_URL}, basic_auth=(JIRA_USERNAME, JIRA_TOKEN))
issue = jira_client.issue('PLAT-8742')
print(issue.fields.summary)
Don't change the library; instead, put your credentials inside the ~/.netrc file.
If you put them there you will also be able to test your calls using curl or wget.
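For reference, a ~/.netrc entry for this usually looks like the following (hostname and credentials are placeholders); the requests session underneath the library picks it up automatically when no explicit auth is passed:
machine jira.mycompany.com
login myusername
password mypassword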
I am not sure anymore about compatibility with Jira 5.x; only 7.x and 6.4 are currently tested. If you set up an instance for testing, I could modify the integration tests to run against it, too.
My lucky guess is that you broke it with that change.
As of 2019 Atlassian has deprecated authorizing with passwords.
You can easily replace the password with an API Token created here.
Here's a minimalistic example:
pip install jira
from jira import JIRA
jira = JIRA("YOUR-JIRA-URL", basic_auth=("YOUR-EMAIL", "YOUR-API-TOKEN"))
issue = jira.issue("YOUR-ISSUE-KEY (e.g. ABC-13)")
print(issue.fields.summary)
I recommend storing your API Token as an environment variable and accessing it with os.environ[key].
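A minimal sketch of that (the variable name JIRA_API_TOKEN is arbitrary):
import os
from jira import JIRA

# Read the API token from an environment variable instead of hard-coding it
jira = JIRA("YOUR-JIRA-URL", basic_auth=("YOUR-EMAIL", os.environ["JIRA_API_TOKEN"]))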