Invalid API key/application pair–Clarifai - python

I was reading the documentation and wanted to try out the API in PyCharm. I copied the example code, but it tells me I have an "Invalid API key/application pair". I copied my API key straight from the app I made on https://portal.clarifai.com/ and pasted it in.
My code is literally an exact copy of the example; the only change is the API key, which I copied straight from my app. I'm running it in PyCharm.
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_pb2, status_code_pb2

# Construct one of the channels you want to use
channel = ClarifaiChannel.get_json_channel()
channel = ClarifaiChannel.get_insecure_grpc_channel()
# Note: You can also use a secure (encrypted) ClarifaiChannel.get_grpc_channel(), however
# it is currently not possible to use it with the latest gRPC version

stub = service_pb2_grpc.V2Stub(channel)

# This will be used by every Clarifai endpoint call.
metadata = (('authorization', 'Key {9f3d8b8ea01245e6b61c2a1311622db1}'),)

# Insert here the initialization code as outlined on this page:
# https://docs.clarifai.com/api-guide/api-overview/api-clients#client-installation-instructions

post_inputs_response = stub.PostInputs(
    service_pb2.PostInputsRequest(
        inputs=[
            resources_pb2.Input(
                data=resources_pb2.Data(
                    image=resources_pb2.Image(
                        url="https://samples.clarifai.com/metro-north.jpg",
                        allow_duplicate_url=True
                    )
                )
            )
        ]
    ),
    metadata=metadata
)

if post_inputs_response.status.code != status_code_pb2.SUCCESS:
    raise Exception("Post inputs failed, status: " + post_inputs_response.status.description)

Philip, your metadata declaration is not correct: you don't wrap your API key in {}.
# This will be used by every Clarifai endpoint call.
metadata = (('authorization', 'Key {9f3d8b8ea01245e6b61c2a1311622db1}'),)
Remove the braces around the key and adjust it to this:
metadata = (('authorization', 'Key 9f3d8b8ea01245e6b61c2a1311622db1'),)

If that doesn't resolve it, please try generating a new API key and using that instead.
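As a side note, one way to avoid copy/paste mistakes like this is to read the key from an environment variable instead of pasting it into the source. A minimal sketch, assuming you export a variable named CLARIFAI_API_KEY yourself (the variable name is just an illustration, not something the Clarifai docs require):
import os

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import service_pb2_grpc

# Read the key from the environment; run `export CLARIFAI_API_KEY=...` beforehand.
api_key = os.environ["CLARIFAI_API_KEY"].strip()

channel = ClarifaiChannel.get_insecure_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

# No braces around the key, just "Key <value>".
metadata = (('authorization', 'Key ' + api_key),)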

Related

Pyetrade / Etrade API option-chains function only returns options for Apple?

I'm trying to get some option chains using the pyetrade package. I'm working in Sandbox mode for a newly made Etrade account.
When I execute the following code, it runs fine, but the returned information is incorrect: I keep getting options for Apple from 2012 to 2015 instead of current Exxon-Mobil options (the symbol I'm passing in). The same thing happens if I ask for Google, Facebook, or Netflix; I just keep getting outdated Apple options.
I'm not sure where I messed up, or if this is just something that's part of sandbox mode, so that's why I asked for help. Thank you!
(Note: Some of the code is sourced from: https://github.com/1rocketdude/pyetrade_option_chains/blob/master/etrade_option_chains.py)
The following is the function to get the option chain from the API:
def getOption(thisSymbol):
    # Renew session / or start session
    try:
        authManager.renew_access_token()
    except:
        authenticate()  # this works fine
    # make a market object to pull what you need from
    market = pyetrade.ETradeMarket(
        consumer_key,
        consumer_secret,
        tokens['oauth_token'],
        tokens['oauth_token_secret'],
        dev=True
    )
    try:
        # fetch the available expiration dates for the symbol
        q = market.get_option_expire_date(thisSymbol, resp_format='xml')
        # just formats the dates to be more comprehensible:
        expiration_dates = option_expire_dates_from_xml(q)
    except Exception:
        raise
    rtn = []
    for this_expiry_date in expiration_dates:
        q = market.get_option_chains(thisSymbol, this_expiry_date)
        chains = q['OptionChainResponse']['OptionPair']
        rtn.append(chains)
    print()
    return rtn

ret = getOption("XOM")
print(ret[0])
The API provider is explicit on this:
Note: E*TRADE's sandbox doesn't actually produce correct option chains, so this will return an error.
The sandbox is still useful for debugging, e.g. the OAuth flow, but hardly anyone could get real option-chain data out of the sandboxed code.
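If you have production keys, the usual way around this is to point pyetrade at the live API instead of the sandbox. A hedged sketch, reusing the names from the question (consumer_key, consumer_secret, and tokens are assumed to come from your own OAuth setup, and the dev flag is assumed to toggle sandbox vs. production as the pyetrade docs describe):
import pyetrade

# dev=False targets the production E*TRADE API rather than the sandbox,
# so option chains should correspond to the symbol you actually request.
market = pyetrade.ETradeMarket(
    consumer_key,
    consumer_secret,
    tokens['oauth_token'],
    tokens['oauth_token_secret'],
    dev=False
)

# Same call pattern as in the question: pass an expiry date obtained
# from market.get_option_expire_date().
q = market.get_option_chains("XOM", this_expiry_date)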

Create or Replace AWS Glue Crawler

Using boto3:
Is it possible to check if AWS Glue Crawler already exists and create it if it doesn't?
If it already exists I need to update it.
What would the crawler creation script look like?
Would this be similar to CREATE OR REPLACE TABLE in an RDBMS...
Has anyone done this or has recommendations?
Thank you :)
Michael
As far as I know, there is no single API call for this. We manually list the crawlers using list_crawlers and iterate through the list to decide whether to add or update each crawler (update_crawler).
Check out the API docs:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/glue.html
Yes, you can do all of that using boto3; however, there is no single function that does it all at once. Instead, you would have to make a series of the following API calls:
list_crawlers
get_crawler
update_crawler
create_crawler
Each of these functions returns a response that you need to parse/verify/check manually.
AWS is pretty good with their documentation, so definitely check it out. It might seem overwhelming, but at the beginning you might find it easiest to copy and paste the request syntax they provide in the docs and then strip out the unnecessary parts. boto3 itself isn't very helpful for autocompletion/suggestions, but there is a project that can help with that: mypy_boto3_builder, and its predecessors mypy_boto3 and boto3_type_annotations.
If something goes wrong, e.g. you haven't specified some parameters correctly, the error responses are pretty good and helpful.
Here is an example of how you can list all existing crawlers:
import boto3
from pprint import pprint

client = boto3.client('glue')

response = client.list_crawlers()
available_crawlers = response["CrawlerNames"]

for crawler_name in available_crawlers:
    response = client.get_crawler(Name=crawler_name)
    pprint(response)
Assuming that in IAM you have AWSGlueServiceRoleDefault with all required permissions for the Glue crawler, here is how you can create one:
response = client.create_crawler(
    Name='my-crawler-via-api',
    Role='AWSGlueServiceRoleDefault',
    Description='Crawler generated with Python API',  # optional
    Targets={
        'S3Targets': [
            {
                'Path': 's3://some/path/in/s3/bucket',
            },
        ],
    },
)
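For completeness, here is a hedged sketch of a create-or-replace helper that combines these calls: fetch the crawler first, update it if it exists, otherwise create it. The crawler, role, database, and path names below are placeholders:
import boto3

client = boto3.client('glue')

def create_or_replace_crawler(name, role, database, s3_path):
    """Update the crawler if it already exists, otherwise create it."""
    targets = {'S3Targets': [{'Path': s3_path}]}
    try:
        client.get_crawler(Name=name)  # raises EntityNotFoundException if the crawler is missing
        client.update_crawler(Name=name, Role=role, DatabaseName=database, Targets=targets)
    except client.exceptions.EntityNotFoundException:
        client.create_crawler(Name=name, Role=role, DatabaseName=database, Targets=targets)

create_or_replace_crawler(
    'my-crawler-via-api',
    'AWSGlueServiceRoleDefault',
    'my_database',
    's3://some/path/in/s3/bucket',
)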
I ended up using standard Python exception handling:
# Instantiate the glue client.
glue_client = boto3.client(
    'glue',
    region_name='us-east-1'
)

# Attempt to create and start a glue crawler on PSV table or update and start it if it already exists.
try:
    glue_client.create_crawler(
        Name='crawler name',
        Role='role to be used by glue to create the crawler',
        DatabaseName='database where the crawler should create the table',
        Targets={
            'S3Targets': [
                {
                    'Path': 'full s3 path to the directory that crawler should process'
                }
            ]
        }
    )
    glue_client.start_crawler(
        Name='crawler name'
    )
except:
    glue_client.update_crawler(
        Name='crawler name',
        Role='role to be used by glue to create the crawler',
        DatabaseName='database where the crawler should create the table',
        Targets={
            'S3Targets': [
                {
                    'Path': 'full s3 path to the directory that crawler should process'
                }
            ]
        }
    )
    glue_client.start_crawler(
        Name='crawler name'
    )
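One refinement worth considering (a hedged suggestion, not part of the original answer): catch the specific AlreadyExistsException instead of a bare except, so that unrelated failures (bad role, bad path, throttling) don't silently fall through to update_crawler:
crawler_kwargs = dict(
    Name='crawler name',
    Role='role to be used by glue to create the crawler',
    DatabaseName='database where the crawler should create the table',
    Targets={'S3Targets': [{'Path': 'full s3 path to the directory that crawler should process'}]},
)

try:
    glue_client.create_crawler(**crawler_kwargs)
except glue_client.exceptions.AlreadyExistsException:
    # Only fall back to update when the crawler genuinely already exists.
    glue_client.update_crawler(**crawler_kwargs)

glue_client.start_crawler(Name='crawler name')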

Get elasticloadbalancers names with boto3

When I try to print the load balancers from AWS I get a huge dictionary with a lot of keys, but when I try to print only the 'LoadBalancerName' value I get None. I want to print all the load balancer names in our environment. How can I do it? Thanks!
What I tried:
import boto3
client = boto3.client('elbv2')
elb = client.describe_load_balancers()
Name = elb.get('LoadBalancerName')
print(Name)
The way you were handling the response object was incorrect, and you'll need to put it in a loop if you want all the names and not just one. What you'll need is this:
import boto3

client = boto3.client('elbv2')
elb = client.describe_load_balancers()

for i in elb['LoadBalancers']:
    print(i['LoadBalancerArn'])
    print(i['LoadBalancerName'])
However, if you're still getting None as a value, it's worth double-checking which region the load balancers are in, as well as whether you need to pass in a specific profile.
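If you have many load balancers, describe_load_balancers also returns results in pages, so a paginator is worth using. A hedged sketch; the profile and region names here are only examples:
import boto3

# Use an explicit session if the load balancers live in a different
# region or under a different named profile (values below are examples).
session = boto3.Session(profile_name='default', region_name='us-east-1')
client = session.client('elbv2')

# Paginate so more than one page of load balancers is covered.
paginator = client.get_paginator('describe_load_balancers')
for page in paginator.paginate():
    for lb in page['LoadBalancers']:
        print(lb['LoadBalancerName'])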

Getting issue comments JIRA python

I am trying to get all comments of issues created in JIRA of a certain search query. My query is fairly simple:
import jira
from jira.client import JIRA
def fetch_tickets_open_yesterday(jira_object):
    # JIRA query to fetch the issues
    open_issues = jira_object.search_issues('project = Support AND issuetype = Incident AND \
        (status = "Open" OR status = "Resolved" OR status = "Waiting For Customer")', maxResults=100, expand='changelog')
    # returns all open issues
    return open_issues
However, if I try to access the comments of tickets created using the following notation, I get a key error.
for issue in issues:
    print issue.raw['fields']['comment']
If I try to get comments of a single issue like below, I can access the comments:
single_issue = jira_object.issue('SUP-136834')
single_issue.raw['fields']['comment']
How do I access these comments through search_issues() function?
The comment field is not returned by the search_issues method by default; you have to explicitly state the fields that should be included by setting the corresponding parameter.
Just include the 'fields' and 'json_result' parameters in the search_issues method and set them like this:
open_issues = jira_object.search_issues('project = Support AND issuetype = Incident AND \
    (status = "Open" OR status = "Resolved" OR status = "Waiting For Customer")', maxResults=100, expand='changelog', fields='comment', json_result='True')
Now you can access the comments without getting a KeyError:
comm=([issue.raw['fields']['comment']['comments'] for issue in open_issues])
I struggled with the same issue. Assuming "issue" is an object of type Issue, and "jira" is an object of type JIRA, according to http://jira.readthedocs.org/en/latest/#issues
issue.fields.comment.comments
should work, but the fields object does not have any key "comment".
The other option mentioned there works for me:
jira.comments(issue)
So, for it to work you use the issues from your search result and call jira.comments. E.g.
issues = jira.search_issues(query)
comments = jira.comments(issues[index])
(My version of the library is 1.0.3, python 2.7.10)
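Putting those pieces together, here is a hedged sketch that prints the comments for every issue in a search result. The server URL, credentials, and JQL query are placeholders for your own values:
from jira import JIRA

jira = JIRA('https://jira.example.com', basic_auth=('user', 'password'))  # connection details are placeholders
query = 'project = Support AND issuetype = Incident AND status = "Open"'

for issue in jira.search_issues(query):
    # jira.comments() fetches the comments for an issue even when the
    # 'comment' field was not included in the search results.
    for comment in jira.comments(issue):
        print(issue.key, comment.body)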
from jira import JIRA
Jira = JIRA('https://jira.atlassian.com')
issue_num = "ISSUE-123"
issue = Jira.issue(issue_num)
comments = issue.fields.comment.comments
for comment in comments:
    print("Comment text : ", comment.body)
    print("Comment author : ", comment.author.displayName)
    print("Comment time : ", comment.created)

Downloading via boto

I am using the boto client to download and upload my files to S3 and do a whole bunch of other things, like copying a key from one folder to another. The problem arises when I try to copy a key whose size is 0 bytes. The code that I use to copy is below:
# Get the connection to the bucket
conn = boto.connect_s3(AWS_KEY, SECRET_KEY)
bucket = conn.get_bucket('mybucket')
# bucket.name is the name of my bucket
# candidate is the source key
destination_key = "destination/path/on/s3"
candidate = "the/file/to/copy"
# now copy the key
bucket.copy_key(destination_key, bucket.name, candidate) # --> This throws an exception
# just in case, see if the key ended up in the destination.
copied_key = bucket.lookup(destination_key)
The exception that I get is:
S3ResponseError: 404 Not Found
<Error><Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>the/file/to/copy</Key><RequestId>ABC123</RequestId><HostId>XYZ123</HostId>
</Error>
I have verified that the key in fact exists by logging into the AWS console and navigating to the source key location: the key is there, and the console shows that its size is 0 (there are cases in my application where I end up with empty files, but I need them on S3).
So uploading works fine; boto uploads the key without any issue, but when I attempt to copy it I get the error that the key does not exist.
Is there any other logic I should be using to copy such keys? Any help in this regard would be appreciated.
Make sure you include the bucket of the source key. It should be something like bucket/path/to/file/to/copy.
Try this:
from boto.s3.key import Key

download_path = '/tmp/dest_test.jpg'

# 'bucket' is the Bucket object obtained earlier via conn.get_bucket()
bucket_key = Key(bucket)
bucket_key.key = file_key  # e.g. images/source_test.jpg
bucket_key.get_contents_to_filename(download_path)
