Unable to implement ICodePipelineActionFactory in Python - python

I am trying to create an arbitrary CodePipeline action as part of a CDK pipeline implemented in Python. Specifically in this case it's a step function invocation, but I would like to call other types as well. No matter what I do, I keep getting the same error saying it can't find the add_action attribute on the stage object.
jsii.errors.JSIIError: '' object has no attribute 'add_action'
I have tried different variations of the method name, inspected the object with dir() (stage is a very opaque InterfaceDynamicProxy object), and read the jsii documentation to see whether there is a way to list available attributes, but got nowhere.
Does anyone have a working example of jsii interface implementation in Python? Or can you tell what's wrong with the code below?
I am using CDK 1.118.0 with Python 3.9.6 and node.js 16.6.2 on Mac OS X.
from aws_cdk import core, pipelines, aws_codepipeline_actions, aws_codepipeline, aws_stepfunctions
import jsii

@jsii.implements(pipelines.ICodePipelineActionFactory)
class SomeStep(pipelines.Step):
    def __init__(self, id_):
        super().__init__(id_)

    @jsii.member(jsii_name="produceAction")
    def produce_action(
        self, stage: aws_codepipeline.IStage,
        options: pipelines.ProduceActionOptions,
        # TODO why are these not passed?
        # *,
        # action_name, artifacts, pipeline, run_order, scope,
        # before_self_mutation=None,
        # code_build_defaults=None,
        # fallback_artifact=None
    ) -> pipelines.CodePipelineActionFactoryResult:
        stage.add_action(
            aws_codepipeline_actions.StepFunctionInvokeAction(
                state_machine=aws_stepfunctions.StateMachine.from_state_machine_arn("..."),
                action_name="foo",
                state_machine_input=aws_codepipeline_actions.StateMachineInput.literal({"foo": "bar"}),
                run_order=options["run_order"],
            )
        )
        return pipelines.CodePipelineActionFactoryResult(run_orders_consumed=1)
app = core.App()
stage = core.Stage(app, "stage")
stack = core.Stack(stage, "stack")
pipeline_stack = core.Stack(app, "pipeline-stack")
pipeline = pipelines.CodePipeline(
    pipeline_stack,
    "pipeline",
    synth=pipelines.ShellStep("synth", input=pipelines.CodePipelineSource.git_hub("foo/bar", "main"), commands=["cdk synth"])
)
pipeline.add_wave("wave").add_stage(stage, pre=[SomeStep("some")])
app.synth()
The complete error:
jsii.errors.JavaScriptError:
Error: '' object has no attribute 'add_action'
at KernelHost.completeCallback (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/tmpwwmvzicu/lib/program.js:9462:35)
at KernelHost.callbackHandler (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/tmpwwmvzicu/lib/program.js:9453:41)
at Step.value (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/tmpwwmvzicu/lib/program.js:8323:49)
at CodePipeline.pipelineStagesAndActionsFromGraph (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/jsii-kernel-x3iY7A/node_modules/@aws-cdk/pipelines/lib/codepipeline/codepipeline.js:154:48)
at CodePipeline.doBuildPipeline (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/jsii-kernel-x3iY7A/node_modules/@aws-cdk/pipelines/lib/codepipeline/codepipeline.js:116:14)
at CodePipeline.buildPipeline (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/jsii-kernel-x3iY7A/node_modules/@aws-cdk/pipelines/lib/main/pipeline-base.js:93:14)
at CodePipeline.buildJustInTime (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/jsii-kernel-x3iY7A/node_modules/@aws-cdk/pipelines/lib/main/pipeline-base.js:101:18)
at Object.visit (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/jsii-kernel-x3iY7A/node_modules/@aws-cdk/pipelines/lib/main/pipeline-base.js:42:57)
at recurse (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/jsii-kernel-x3iY7A/node_modules/@aws-cdk/core/lib/private/synthesis.js:86:20)
at recurse (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/jsii-kernel-x3iY7A/node_modules/@aws-cdk/core/lib/private/synthesis.js:98:17)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "~/dev/cdk-playground/app.py", line 55, in <module>
app.synth()
File ".../lib/python3.9/site-packages/aws_cdk/core/__init__.py", line 16432, in synth
return typing.cast(aws_cdk.cx_api.CloudAssembly, jsii.invoke(self, "synth", [options]))
File ".../lib/python3.9/site-packages/jsii/_kernel/__init__.py", line 128, in wrapped
return _recursize_dereference(kernel, fn(kernel, *args, **kwargs))
File ".../lib/python3.9/site-packages/jsii/_kernel/__init__.py", line 348, in invoke
return _callback_till_result(self, response, InvokeResponse)
File ".../lib/python3.9/site-packages/jsii/_kernel/__init__.py", line 216, in _callback_till_result
response = kernel.sync_complete(
File ".../lib/python3.9/site-packages/jsii/_kernel/__init__.py", line 386, in sync_complete
return self.provider.sync_complete(
File ".../lib/python3.9/site-packages/jsii/_kernel/providers/process.py", line 382, in sync_complete
resp = self._process.send(_CompleteRequest(complete=request), response_type)
File ".../lib/python3.9/site-packages/jsii/_kernel/providers/process.py", line 326, in send
raise JSIIError(resp.error) from JavaScriptError(resp.stack)
jsii.errors.JSIIError: '' object has no attribute 'add_action'
Subprocess exited with error 1

AWS has resolved the issue in jsii.
https://github.com/aws/jsii/issues/2963
It should be available with CDK 1.121.0.
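For anyone looking for a working example of a jsii interface implementation in Python once that fix is in place: the structure from the question is essentially correct. Below is a minimal sketch based on the code above (the state machine ARN and the construct id are placeholders, and the ProduceActionOptions fields are read as snake_case attributes, which is how jsii structs are exposed in Python):

    import jsii
    from aws_cdk import pipelines, aws_codepipeline_actions, aws_stepfunctions

    @jsii.implements(pipelines.ICodePipelineActionFactory)
    class StepFunctionInvokeStep(pipelines.Step):
        def __init__(self, id_, state_machine_arn):
            super().__init__(id_)
            self._state_machine_arn = state_machine_arn

        @jsii.member(jsii_name="produceAction")
        def produce_action(self, stage, options):
            # options.scope is the construct scope the pipeline hands us;
            # use it to import the existing state machine by ARN.
            state_machine = aws_stepfunctions.StateMachine.from_state_machine_arn(
                options.scope, "InvokedStateMachine", self._state_machine_arn
            )
            stage.add_action(
                aws_codepipeline_actions.StepFunctionInvokeAction(
                    action_name=options.action_name,
                    state_machine=state_machine,
                    state_machine_input=aws_codepipeline_actions.StateMachineInput.literal({"foo": "bar"}),
                    run_order=options.run_order,
                )
            )
            return pipelines.CodePipelineActionFactoryResult(run_orders_consumed=1)

It plugs into the pipeline the same way as in the question, e.g. pipeline.add_wave("wave").add_stage(stage, pre=[StepFunctionInvokeStep("some", "arn:aws:states:...")]).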

Related

Getting KeyError: 'Endpoint' error in Python when calling Custom Vision API

from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.customvision import prediction
from PIL import Image
endpoint = "https://southcentralus.api.cognitive.microsoft.com/"
project_id = "projectidhere"
prediction_key = "predictionkeyhere"
predict = CustomVisionPredictionClient(prediction_key, endpoint)
with open("c:/users/paul.barbin/pycharmprojects/hw3/TallowTest1.jpg", mode="rb") as image_data:
    tallowresult = predict.detect_image(project_id, "test1", image_data)
Python 3.7, and I'm using Azure Custom Vision 3.1 (azure-cognitiveservices-vision-customvision 3.1.0).
Note that I've seen the same question on SO but no real solution. The posted answer on the other question says to use the REST API instead.
I believe the error is in the endpoint (as stated in the error), and I've tried a few variants: with and without the trailing slash, with and without an environment variable, and with various strings appended to the endpoint, but I keep getting the same message. Any help is appreciated.
Full error here:
Traceback (most recent call last):
File "GetError.py", line 15, in <module>
tallowresult = predict.detect_image(project_id, "test1", image_data)
File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\azure\cognitiveservices\vision\customvision\prediction\operations\_custom_vision_
prediction_client_operations.py", line 354, in detect_image
request = self._client.post(url, query_parameters, header_parameters, form_content=form_data_content)
File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\msrest\service_client.py", line 193, in post
request = self._request('POST', url, params, headers, content, form_content)
File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\msrest\service_client.py", line 108, in _request
request = ClientRequest(method, self.format_url(url))
File "C:\Users\paul.barbin\PycharmProjects\hw3\.venv\lib\site-packages\msrest\service_client.py", line 155, in format_url
base = self.config.base_url.format(**kwargs).rstrip('/')
KeyError: 'Endpoint'
CustomVisionPredictionClient takes two required positional parameters: endpoint and credentials. The endpoint needs to be passed in before the credentials; try swapping the order:
predict = CustomVisionPredictionClient(endpoint, prediction_key)
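Applied to the snippet from the question, the corrected construction looks roughly like this (a sketch that keeps the original variable names and only swaps the argument order, as described above; note that depending on the exact 3.1.x SDK build, the key may additionally need to be wrapped in an msrest.authentication.ApiKeyCredentials object rather than passed as a bare string):

    from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

    endpoint = "https://southcentralus.api.cognitive.microsoft.com/"
    project_id = "projectidhere"
    prediction_key = "predictionkeyhere"

    # endpoint first, credentials second
    predict = CustomVisionPredictionClient(endpoint, prediction_key)

    with open("TallowTest1.jpg", mode="rb") as image_data:
        tallowresult = predict.detect_image(project_id, "test1", image_data)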

coreapi : can not get schema

python 3.7
I have a python app for which I run tests:
$ python -m unittest
package code goes like this:
import coreapi
from coreapi import codecs

class myClient():
    myApiUrl = None
    client = None

    def __init__(self, myApiUrl, authenticationToken):
        self.myApiUrl = myApiUrl
        auth = coreapi.auth.TokenAuthentication(
            scheme='Token',
            token=authenticationToken
        )
        decoders = [
            codecs.CoreJSONCodec(),
            codecs.JSONCodec()
        ]
        self.client = coreapi.Client(auth=auth, decoders=decoders)

    def getSomething(self):
        # ....at this point self.client.decoders are present....
        schema = self.client.get(self.myApiUrl)
        # .......blah-blah....
This test run ends with this error:
ERROR: test_doFirstTest (myclient.tests.SomeTestClass)
Traceback (most recent call last):
  File "/usr/src/app/myclient/tests.py", line 82, in test_doFirstTest
    output = client.test_doFirstTest()
  File "/usr/src/app/myclient/myClient.py", line 42, in getSomething
    schema = self.client.get(self.myApiUrl)
  File "/opt/conda/lib/python3.7/site-packages/coreapi/client.py", line 136, in get
    return transport.transition(link, decoders, force_codec=force_codec)
  File "/opt/conda/lib/python3.7/site-packages/coreapi/transports/http.py", line 380, in transition
    result = _decode_result(response, decoders, force_codec)
  File "/opt/conda/lib/python3.7/site-packages/coreapi/transports/http.py", line 284, in _decode_result
    codec = utils.negotiate_decoder(decoders, content_type)
  File "/opt/conda/lib/python3.7/site-packages/coreapi/utils.py", line 207, in negotiate_decoder
    raise exceptions.NoCodecAvailable(msg)
coreapi.exceptions.NoCodecAvailable: Unsupported media in Content-Type header 'text/html'
I realise it's telling me that it received text/html instead of JSON (maybe an empty string?), but why? I am not making any request yet; I am doing a preparation step of getting the schema object.
And this is not a connectivity issue; when it cannot connect at all, it gives a different error.
Thanks
OK, this did not have anything to do with coreapi, codecs, unittest, requests, or anything else I could possibly think of. The reason for the error was that one of the Docker containers involved was exiting silently after starting, because another container it depends on had not been started. The manifestation just happened to be non-decipherable.

Using Python to Manage AWS

I’m trying to use Python to create EC2 instances but I keep getting these errors.
Here is my code:
#!/usr/bin/env python
import boto3
ec2 = boto3.resource('ec2')
instance = ec2.create_instances(
    ImageId='ami-0922553b7b0369273',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro')
print instance[0].id
Here are the errors I'm getting
Traceback (most recent call last):
File "./createinstance.py", line 8, in <module>
InstanceType='t2.micro')
File "/usr/lib/python2.7/site-packages/boto3/resources/factory.py", line 520, in do_action
response = action(self, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/boto3/resources/action.py", line 83, in __call__
response = getattr(parent.meta.client, operation_name)(**params)
File "/usr/lib/python2.7/site-packages/botocore/client.py", line 320, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/lib/python2.7/site-packages/botocore/client.py", line 623, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidAMIID.NotFound) when calling the RunInstances operation: The image id '[ami-0922553b7b0369273]' does not exist
I also get an error when trying to create a key pair
Here's my code for creating the keypair
import boto3
ec2 = boto3.resource('ec2')
# create a file to store the key locally
outfile = open('ec2-keypair.pem','w')
# call the boto ec2 function to create a key pair
key_pair = ec2.create_key_pair(KeyName='ec2-keypair')
# capture the key and store it in a file
KeyPairOut = str(key_pair.key_material)
print(KeyPairOut)
outfile.write(KeyPairOut)
response = ec2.instance-describe()
print response
Here are the error messages
./createkey.py: line 1: import: command not found
./createkey.py: line 2: syntax error near unexpected token `('
./createkey.py: line 2: `ec2 = boto3.resource('ec2')'
What am I missing?
For your first script, one of two possibilities could be occurring:
1. The AMI you are referencing by the ID is not available because the key is incorrect or the AMI doesn't exist
2. The AMI is unavailable in the region your machine is set up for
You most likely are running your script from a machine that is not configured for the correct region. If you are running your script locally or on a server that does not have roles configured, and you are using the aws-cli, you can run the aws configure command to set your access keys and region appropriately. If you are running your instance on a server with roles configured, your server needs to run in the correct region, and your roles need to allow access to EC2 AMIs.
For your second question (which in the future should probably be posted separately), the syntax error is a side effect of not following the same format you used in your first script: your Python script is most likely not actually being interpreted as a Python script. Add the shebang at the top of the file and remove the spacing preceding your import boto3 statement.
#!/usr/bin/env python
import boto3

ec2 = boto3.resource('ec2')

# create a file to store the key locally
outfile = open('ec2-keypair.pem', 'w')

# call the boto ec2 function to create a key pair
key_pair = ec2.create_key_pair(KeyName='ec2-keypair')

# capture the key and store it in a file
KeyPairOut = str(key_pair.key_material)
print(KeyPairOut)
outfile.write(KeyPairOut)

# describe instances with the low-level client
# (instance-describe is not a valid attribute on the resource object)
response = boto3.client('ec2').describe_instances()
print(response)
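On the region point from the first part of the answer, the region can also be pinned in the script itself instead of relying on aws configure. A sketch (us-east-1 is only an example; use whichever region actually hosts the AMI):

    import boto3

    # create the resource against an explicit region so the AMI lookup
    # happens where the image exists
    ec2 = boto3.resource('ec2', region_name='us-east-1')

    instances = ec2.create_instances(
        ImageId='ami-0922553b7b0369273',
        MinCount=1,
        MaxCount=1,
        InstanceType='t2.micro')
    print(instances[0].id)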

Python telegram bot's `get_chat_members_count` & avoiding flood limits or how to use wrappers and decorators

I'm checking a list of around 3000 Telegram chats to retrieve the number of chat members in each chat using the get_chat_members_count method.
At some point I hit a flood limit and get temporarily banned by the Telegram Bot API.
Traceback (most recent call last):
File "C:\Users\alexa\Desktop\ico_icobench_2.py", line 194, in <module>
ico_tel_memb = bot.get_chat_members_count('@' + ico_tel_trim, timeout=60)
File "C:\Python36\lib\site-packages\telegram\bot.py", line 60, in decorator
result = func(self, *args, **kwargs)
File "C:\Python36\lib\site-packages\telegram\bot.py", line 2006, in get_chat_members_count
result = self._request.post(url, data, timeout=timeout)
File "C:\Python36\lib\site-packages\telegram\utils\request.py", line 278, in post
**urlopen_kwargs)
File "C:\Python36\lib\site-packages\telegram\utils\request.py", line 208, in _request_wrapper
message = self._parse(resp.data)
File "C:\Python36\lib\site-packages\telegram\utils\request.py", line 168, in _parse
raise RetryAfter(retry_after)
telegram.error.RetryAfter: Flood control exceeded. Retry in 85988 seconds
The python-telegram-bot wiki gives a detailed explanation and example on how to avoid flood limits here.
However, I'm struggling to implement their solution and I hope someone here has more knowledge of this than myself.
I have literally copied and pasted their example and can't get it to work, no doubt because I'm new to Python. I'm guessing I'm missing some definitions, but I'm not sure which. Here is the code below, and after that the first error I'm receiving. Obviously the TOKEN needs to be replaced with your token.
import telegram.bot
from telegram.ext import messagequeue as mq

class MQBot(telegram.bot.Bot):
    '''A subclass of Bot which delegates send method handling to MQ'''
    def __init__(self, *args, is_queued_def=True, mqueue=None, **kwargs):
        super(MQBot, self).__init__(*args, **kwargs)
        # below 2 attributes should be provided for decorator usage
        self._is_messages_queued_default = is_queued_def
        self._msg_queue = mqueue or mq.MessageQueue()

    def __del__(self):
        try:
            self._msg_queue.stop()
        except:
            pass
        super(MQBot, self).__del__()

    @mq.queuedmessage
    def send_message(self, *args, **kwargs):
        '''Wrapped method would accept new `queued` and `isgroup`
        OPTIONAL arguments'''
        return super(MQBot, self).send_message(*args, **kwargs)

if __name__ == '__main__':
    from telegram.ext import MessageHandler, Filters
    import os

    token = os.environ.get('TOKEN')
    # for test purposes limit global throughput to 3 messages per 3 seconds
    q = mq.MessageQueue(all_burst_limit=3, all_time_limit_ms=3000)
    testbot = MQBot(token, mqueue=q)
    upd = telegram.ext.updater.Updater(bot=testbot)

    def reply(bot, update):
        # tries to echo 10 msgs at once
        chatid = update.message.chat_id
        msgt = update.message.text
        print(msgt, chatid)
        for ix in range(10):
            bot.send_message(chat_id=chatid, text='%s) %s' % (ix + 1, msgt))

    hdl = MessageHandler(Filters.text, reply)
    upd.dispatcher.add_handler(hdl)
    upd.start_polling()
The first error I get is:
Traceback (most recent call last):
File "C:\Users\alexa\Desktop\z test.py", line 34, in <module>
testbot = MQBot(token, mqueue=q)
File "C:\Users\alexa\Desktop\z test.py", line 9, in __init__
super(MQBot, self).__init__(*args, **kwargs)
File "C:\Python36\lib\site-packages\telegram\bot.py", line 108, in __init__
self.token = self._validate_token(token)
File "C:\Python36\lib\site-packages\telegram\bot.py", line 129, in _validate_token
if any(x.isspace() for x in token):
TypeError: 'NoneType' object is not iterable
The second issue I have is how to use wrappers and decorators with get_chat_members_count.
The code I have added to the example is:
@mq.queuedmessage
def get_chat_members_count(self, *args, **kwargs):
    return super(MQBot, self).get_chat_members_count(*args, **kwargs)
But nothing happens and I don't get my count of chat members. I'm also not saying which chat I need to count, so it's not surprising I'm getting nothing back, but where am I supposed to put the Telegram chat id?
You are getting this error because MQBot receives an empty token. For some reason, it does not raise a descriptive exception but instead crashes unexpectedly.
So why is token empty? It seems that you are using os.environ.get incorrectly. os.environ is a dictionary, and its get method allows one to access the dict's contents safely. According to the docs:
get(key[, default])
Return the value for key if key is in the dictionary, else default. If default is not given, it defaults to None, so that this method never raises a KeyError.
According to your question, in this part, token = os.environ.get('TOKEN'), you pass the token itself as the key. Instead, you should have passed the name of the environment variable which contains your token.
You can fix this either by rewriting that part to hard-code the token, e.g. token = '<your actual token>', or by setting the environment variable correctly and accessing it via os.environ.get with the correct variable name.
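For completeness, a minimal sketch of both options (the environment variable name and the placeholder token below are assumptions, not values from the question):

    import os

    # Option 1: hard-code the bot token directly (fine for quick local testing only)
    token = '123456789:REPLACE_WITH_YOUR_REAL_BOT_TOKEN'

    # Option 2: export TOKEN=<your real token> in the shell first, then pass the
    # variable *name* to os.environ.get
    token = os.environ.get('TOKEN')
    if token is None:
        raise RuntimeError('The TOKEN environment variable is not set')

Either way, the token is then passed into the MQBot constructor exactly as in the example above (testbot = MQBot(token, mqueue=q)).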

Upload file to GCS from Appengine (Endpoints). AttributeTypeError:

I'm trying to upload a file to GCS from App Engine Endpoints, using Python. When the upload finishes, it shows the error "AttributeError: 'str' object has no attribute 'ToMessage'".
If I go to the GCS file browser, I see the recently uploaded filename, but its size is 0K.
This is my model:
class File(EndpointsModel):
    _message_fields_schema = ('blob', 'url')
    blob = ndb.BlobKeyProperty()  # stored in GCS
    url = ndb.StringProperty()
    enable = ndb.BooleanProperty(default=True)

def create_file(filename):
    file_info = blobstore.FileInfo(filename)
    filename = '/gs' + str(file_info.filename.blob)
    gcs.open(secrets.BUCKET_NAME + '/' + filename, 'w').close()
    return blobstore.create_gs_key(filename)
So, what do I need to do to correctly upload a file to GCS from App Engine Endpoints?
Traceback:
ERROR 2014-11-25 20:35:22,654 service.py:191] Encountered unexpected error from ProtoRPC method implementation: AttributeError ('str' object has no attribute 'ToMessage')
Traceback (most recent call last):
File "/home/alpocr/workspace/google_appengine/lib/protorpc-1.0/protorpc/wsgi/service.py", line 181, in protorpc_service_app
response = method(instance, request)
File "/home/alpocr/workspace/google_appengine/lib/endpoints-1.0/endpoints/api_config.py", line 1332, in invoke_remote
return remote_method(service_instance, request)
File "/home/alpocr/workspace/google_appengine/lib/protorpc-1.0/protorpc/remote.py", line 412, in invoke_remote_method
response = method(service_instance, request)
File "/home/alpocr/workspace/mall4g-backend/libs/endpoints_proto_datastore/ndb/model.py", line 1429, in EntityToRequestMethod
response = response.ToMessage(fields=response_fields)
AttributeError: 'str' object has no attribute 'ToMessage'
It sounds like you have defined the return type correctly for your endpoints method, and it's expecting to turn the result into a Message object, but the endpoints method code is actually returning a string. Can you post the endpoints method that is called when this error occurs?
Either that, or the endpoints proto model is acting weird when you (somewhere in your code) assign a string value to one of its properties. When it tries to convert it to a Message (and thus recursively turn its properties into Messages), it finds the string and bugs out. It's hard to tell without seeing the affected endpoint method's code.
UPDATE: Also, checking the source of endpoints_proto_datastore, we see the following comment above the line that fails:
# If developers using a custom request message class with
# response_fields to create a response message class for them, it is
# up to them to return an instance of the current EndpointsModel
# class. If not, their API users will receive a 503 from an uncaught
# exception.
Could this apply to you?
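To illustrate the first point about the return value, here is a purely hypothetical sketch (the actual endpoints method was not posted, so the API class, method name, and path below are made up); the decorated method has to return the EndpointsModel instance itself so the framework can call ToMessage() on it:

    import endpoints
    from protorpc import remote
    # File is the EndpointsModel subclass defined in the question

    @endpoints.api(name='files', version='v1')
    class FileApi(remote.Service):

        @File.method(path='file', http_method='POST', name='file.insert')
        def insert_file(self, my_file):
            # ... create the GCS object / blob key here ...
            my_file.put()
            # Returning a plain string here (e.g. the result of
            # blobstore.create_gs_key) is what produces
            # "'str' object has no attribute 'ToMessage'".
            return my_file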
