I have a Pulumi script that deploys a few services. Everything is fine except I don't understand how to apply a runtime to a Function App.
api_function_app = azure.appservice.FunctionApp(
    "ApiFunctionApp",
    location=resource_group.location,
    enabled=True,
    enable_builtin_logging=True,
    resource_group_name=resource_group.name,
    app_service_plan_id=functionStoragePlan.id,
    storage_account_name=functionStorageAccount.name,
    storage_account_access_key=functionStorageAccount.primary_access_key,
    os_type="linux",
    version="~3",
)
Has anyone seen any example of this?
I believe you need to set a value for the FUNCTIONS_WORKER_RUNTIME application setting:
...
app_settings={
    "FUNCTIONS_WORKER_RUNTIME": "python"
},
...
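For example, merged into the FunctionApp from the question it might look like the sketch below (assuming the function code is written in Python; use the value matching your runtime otherwise):

api_function_app = azure.appservice.FunctionApp(
    "ApiFunctionApp",
    location=resource_group.location,
    enabled=True,
    enable_builtin_logging=True,
    resource_group_name=resource_group.name,
    app_service_plan_id=functionStoragePlan.id,
    storage_account_name=functionStorageAccount.name,
    storage_account_access_key=functionStorageAccount.primary_access_key,
    os_type="linux",
    version="~3",
    # Tells the Functions host which language worker to load.
    app_settings={
        "FUNCTIONS_WORKER_RUNTIME": "python",
    },
)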
I have 3 operators imported from airflow.providers.google.cloud.operators.dataproc:
DataprocCreateBatchOperator
DataprocDeleteBatchOperator
DataprocGetBatchOperator
I need the same kind of operators for Azure.
Can someone please look into this, or do I have to create a new operator?
I believe the apache-airflow-providers-microsoft-azure provider package equivalent for Dataproc operators would be Azure Synapse Operators.
Specifically, the AzureSynapseRunSparkBatchOperator allows users to "execute a spark application within Synapse Analytics".
If you're running Spark jobs on Azure Databricks, there are also several Databricks Operators that might be able to help.
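For the Synapse route, a rough, untested sketch of submitting a Spark batch with that operator is below. The connection ID, Spark pool name, storage paths, and the SparkBatchJobOptions field names are assumptions on my part, so verify them against the versions of apache-airflow-providers-microsoft-azure and azure-synapse-spark you have installed:

from airflow.providers.microsoft.azure.operators.synapse import AzureSynapseRunSparkBatchOperator
from azure.synapse.spark.models import SparkBatchJobOptions

run_spark_batch = AzureSynapseRunSparkBatchOperator(
    task_id="RUN_SPARK_BATCH",
    azure_synapse_conn_id="your_synapse_conn_id",   # hypothetical connection ID
    spark_pool="your_spark_pool",                   # name of the Synapse Spark pool (assumption)
    payload=SparkBatchJobOptions(
        name="my-spark-job",
        # Roughly equivalent to pyspark_batch.main_python_file_uri in Dataproc.
        file="abfss://artifacts@yourstorage.dfs.core.windows.net/spark-jobs/main.py",
        arguments=["--env", "dev"],                 # roughly equivalent to pyspark_batch.args
        py_files=["abfss://artifacts@yourstorage.dfs.core.windows.net/spark-jobs/jobs.zip"],
        # Synapse generally also expects driver/executor sizing; values here are illustrative.
        driver_memory="4g",
        driver_cores=2,
        executor_memory="4g",
        executor_cores=2,
        executor_count=2,
    ),
)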
Here's an example PythonOperator (via Taskflow API) that uses the AzureSynapseHook. Note that I didn't test this, and I'm just using this as a demonstration of what it might look like:
from airflow.decorators import task
from airflow.providers.microsoft.azure.hooks.synapse import AzureSynapseHook

@task()
def cancel_spark_job(job_id: str):
    hook = AzureSynapseHook(azure_synapse_conn_id="your_conn_id")
    # Wait for the job to reach a terminal status, then cancel it.
    if hook.wait_for_job_run_status(job_id, expected_statuses=("error", "dead", "killed")):
        hook.cancel_job_run(job_id)
This task will wait for the Spark job to enter a status of "error", "dead", or "killed", or to time out. If the Spark job enters one of the statuses mentioned above, it will cancel the job. Again, this is just a demonstration of how to use the AzureSynapseHook within a PythonOperator, and I'm not sure if it would work or if it even makes sense to implement it this way.
@Mazlum Tosun
For GCP, in my code DataprocCreateBatchOperator is used like this:
create_batch = DataprocCreateBatchOperator(
    task_id="CREATE_BATCH",
    batch={
        "pyspark_batch": {
            "main_python_file_uri": f"gs://{ARTIFACT_BUCKET}/spark-jobs/main.py",
            "args": app_args,
            "python_file_uris": [
                f"gs://{ARTIFACT_BUCKET}/spark-jobs/jobs.zip",
                f"gs://{ARTIFACT_BUCKET}/spark-jobs/libs.zip"
            ],
            "jar_file_uris": test_jars,
            "file_uris": [
                f"gs://{ARTIFACT_BUCKET}/config/params.yaml"
            ]
        },
        "environment_config": {
            "peripherals_config": {
                "spark_history_server_config": {}
            }
        }
    },
    region=REGION,
    batch_id=batch_id_str,
)
I am facing a challenge: if one of the given EC2 instance IDs is wrong, the test run under Lambda fails with an error saying (i-9876277sgshj) is not in a running state or does not exist. If one EC2 instance ID was wrong, why didn't it register the correct EC2 instance (i-26377gdhdhj)? Please help: if any EC2 instance ID is wrong, it should be skipped and the correct EC2 instance (i-26377gdhdhj) registered anyway. Also, how can I get the result of the script when it has executed?
Here is the Lambda function Python code:
import boto3

# Client for Elastic Load Balancing v2 (target groups).
clients = boto3.client('elbv2')

response_tg = clients.register_targets(
    TargetGroupArn='arn:aws:elasticloadbalancing:us-east-1:123456789123:targetgroup/target-demo/c64e6bfc00b4658f',
    Targets=[
        {
            'Id': 'i-26377gdhdhj',
        },
        {
            'Id': 'i-9876277sgshj',
        }
    ]
)
One solution might be to add a feature to your Python script which verifies/gets the EC2 instance IDs on the fly. For this, first assign a unique tag to all EC2 instances which you are planning to register in the TG. Then use the EC2 list & describe functions of boto3 to get all instance IDs, and then register them into the TG.
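A rough sketch of that approach is below (untested; the tag key/value, the target group ARN, and the handler name are placeholders you would replace with your own):

import boto3

ec2_client = boto3.client('ec2')
elb_client = boto3.client('elbv2')

TARGET_GROUP_ARN = 'arn:aws:elasticloadbalancing:us-east-1:123456789123:targetgroup/target-demo/c64e6bfc00b4658f'

def lambda_handler(event, context):
    # Look up only running instances carrying the agreed-upon tag,
    # so mistyped or terminated instance IDs never reach register_targets.
    reservations = ec2_client.describe_instances(
        Filters=[
            {'Name': 'tag:TargetGroup', 'Values': ['target-demo']},   # placeholder tag
            {'Name': 'instance-state-name', 'Values': ['running']},
        ]
    )['Reservations']

    instance_ids = [
        instance['InstanceId']
        for reservation in reservations
        for instance in reservation['Instances']
    ]

    if not instance_ids:
        return {'registered': []}

    elb_client.register_targets(
        TargetGroupArn=TARGET_GROUP_ARN,
        Targets=[{'Id': instance_id} for instance_id in instance_ids],
    )

    # Returning the IDs makes the result visible in the Lambda invocation output.
    return {'registered': instance_ids}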
I am trying to accomplish the following:
If you are using the Fargate launch type for your tasks, all you need to do to turn on the awslogs log driver is add the required logConfiguration parameters to your task definition.
I am using CDK to generate the FargateTaskDefn
task_definition = _ecs.FargateTaskDefinition(
    self,
    "TaskDefinition",
    cpu=2048,
    memory_limit_mib=4096,
    execution_role=ecs_role,
    task_role=ecs_role,
)

task_definition.add_container(
    "getFileTask",
    memory_limit_mib=4096,
    cpu=2048,
    image=_ecs.ContainerImage.from_asset(directory="assets", file="Dockerfile-ecs-file-download"),
)
I looked up the documentation and did not find any attribute called logConfiguration.
What am I missing?
I am not able to send the logs from the container running on ECS/Fargate to CloudWatch, and what is needed is to enable this logConfiguration option in the task definition.
Thank you for your help.
Regards
Finally figured out that the logging option in add_container is the one.
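For reference, a sketch of what that looks like in the add_container call (the stream prefix is just a placeholder; if you don't pass a log group, the CDK aws_logs driver creates one for you):

task_definition.add_container(
    "getFileTask",
    memory_limit_mib=4096,
    cpu=2048,
    image=_ecs.ContainerImage.from_asset(directory="assets", file="Dockerfile-ecs-file-download"),
    # Renders the awslogs logConfiguration block in the generated task definition.
    logging=_ecs.LogDrivers.aws_logs(stream_prefix="getFileTask"),
)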
I'm starting to work with Pulumi for the IaC design of my project, but I'm having a hard time understanding how to bind my existing code to the use of Pulumi.
For example, suppose I've made a lambda function in python with the following content:
# test_lambda.py
import boto3
import json

sqs_client = boto3.client('sqs')
ssm_client = boto3.client('ssm')

def get_auth_token():
    response = ssm_client.get_parameters(
        Names=[
            'lambda_auth_token',
        ],
        WithDecryption=False
    )
    return response["Parameters"][0]["Value"]

def handler(event, _):
    body = json.loads(event['body'])
    if body['auth_token'] == get_auth_token():
        sqs_client.send_message(
            QueueUrl='my-queue',
            MessageBody='validated auth code',
            MessageDeduplicationId='akjseh3278y7iuad'
        )
        return {'statusCode': 200}
    else:
        return {'statusCode': 403}
How do I reference this whole file containing the lambda function in a Pulumi project, so that I can then use this lambda integrated with the SNS service?
And also, since I'm using Pulumi for my architecture, boto3 seems unneeded; could I just replace it with the Pulumi AWS library? Would the Python interpreter then just use Pulumi as a common interface library for my AWS resources (like boto3)?
This last question might seem odd, but so far I've only seen Pulumi used as the stack and architecture "builder" when running pulumi up.
You could try importing the lambda into your Pulumi stack like below:
aws.lambda_.Function("sqs_lambda", opts=ResourceOptions(import=[
"lambda_id",
]))
Running pulumi up after this will mean the lambda is managed by Pulumi from there on. You should also be cautious of pulumi destroy, as that would mean the imported resources are also deleted.
You can read more in the Pulumi "Importing resources" documentation.
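If, instead of importing an existing function, you want Pulumi itself to package and deploy test_lambda.py (i.e. "reference this whole file" from the Pulumi project), a rough sketch is below. The directory layout, role policy, and runtime version are assumptions:

import pulumi
import pulumi_aws as aws

# Execution role for the function (policy attachments omitted for brevity).
lambda_role = aws.iam.Role(
    "sqs-lambda-role",
    assume_role_policy="""{
        "Version": "2012-10-17",
        "Statement": [{
            "Action": "sts:AssumeRole",
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"}
        }]
    }""",
)

sqs_lambda = aws.lambda_.Function(
    "sqs_lambda",
    runtime="python3.9",
    handler="test_lambda.handler",          # <module name>.<function name>
    role=lambda_role.arn,
    # Package the directory that contains test_lambda.py (assumed to be ./lambda).
    code=pulumi.FileArchive("./lambda"),
)

Note that this doesn't make boto3 unnecessary: Pulumi provisions the resources at deploy time, while the handler still uses boto3 at runtime to talk to SQS and SSM.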
I am trying to create an AWS Cognito user pool using the AWS CDK.
Below is my code:
user_pool = _cognito.UserPool(
    stack,
    id="user-pool-id",
    user_pool_name="temp-user-pool",
    self_sign_up_enabled=True,
    sign_in_aliases={
        "username": False,
        "email": True
    },
    required_attributes={
        "email": True
    }
)
I want to set "Attributes" section in User pool for email .
But above code gives me this exception -
Invalid AttributeDataType input, consider using the provided AttributeDataType enum. (Service: AWSCognitoIdentityProviderService; Status Code: 400; Error Code: InvalidParameterException; Request ID:
I have tried many scenarios but it didn't work. Am I missing something here? Any help would be appreciated. Thanks!
I was referring to these AWS docs to create the user pool: https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_cognito/UserPool.html and https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_cognito/RequiredAttributes.html#aws_cdk.aws_cognito.RequiredAttributes
According to a comment on this GitHub issue, this error is thrown when an attempt is made to modify the required attributes for a UserPool. This leaves you two options:
Update the code such that existing attributes are not modified.
Remove the UserPool and create a new one. E.g. cdk destroy followed by cdk deploy will recreate your whole stack (this is probably not what you want if your stack is in production).
https://github.com/terraform-providers/terraform-provider-aws/issues/3891
Found a way to get around it in production as well, where you don't need to recreate the user pool.