I'm using AWS Lambda with Python and I'm getting strange garbage in my logs when errors occur. Does anybody know what this is?
My stack is AWS Lambda with the Zappa framework (https://github.com/Miserlou/Zappa), Flask, and Python 2.7.
<trimmed>
dklIL2ZsYXNrL2ZsYXNrL2FwcC5weSIsIGxpbmUgMTQ3NSwgaW4gZnVsbF9kaXNwYXRjaF9yZXF1ZXN0
PGJyIC8+ICBGaWxlICIvcHJpdmF0ZS92YXIvZm9sZGVycy8xai81emN4anpreDI5Yjh0ZmdyaDJ5OTB3
dDgwMDAwZ24vVC9waXAtYnVpbGQtdjZYdklIL2ZsYXNrL2ZsYXNrL2FwcC5weSIsIGxpbmUgMTQ2MSwg
aW4gZGlzcGF0Y2hfcmVxdWVzdDxiciAvPiAgRmlsZSAiL3ByaXZhdGUvdmFyL2ZvbGRlcnMvMWovNXpj
eGp6a3gyOWI4dGZncmgyeTkwd3Q4MDAwMGduL1QvcGlwLWJ1aWxkLWp2RVlXSS9mbGFzay1yZXN0ZnVs
L2ZsYXNrX3Jlc3RmdWwvX19pbml0X18ucHkiLCBsaW5lIDQ3NywgaW4gd3JhcHBlcjxiciAvPiAgRmls
ZSAiL3ByaXZhdGUvdmFyL2ZvbGRlcnMvMWovNXpjeGp6a3gyOWI4dGZncmgyeTkwd3Q4MDAwMGduL1Qv
cGlwLWJ1aWxkLXY2WHZJSC9mbGFzay9mbGFzay92aWV3cy5weSIsIGxpbmUgODQsIGluIHZpZXc8YnIg
Lz4gIEZpbGUgIi9wcml2YXRlL3Zhci9mb2xkZXJzLzFqLzV6Y3hqemt4MjliOHRmZ3JoMnk5MHd0ODAw
MDBnbi9UL3BpcC1idWlsZC1qdkVZV0kvZmxhc2stcmVzdGZ1bC9mbGFza19yZXN0ZnVsL19faW5pdF9f
LnB5IiwgbGluZSA1ODcsIGluIGRpc3BhdGNoX3JlcXVlc3Q8YnIgLz4gIEZpbGUgIi9Vc2Vycy9kYXZl
bWFuL0ludGVybmFsL1NjcmF0Y2gvV2ViRGV2L2F3cy1sYW1iZGEvemFwcGEvY29udHJvbGxlcnMvZXJy
b3JfY29udHJvbGxlci5weSIsIGxpbmUgMTUsIGluIGdldDxiciAvPlRlc3RFcnJvcjogJ1RoaXMgaXMg
YSB0ZXN0IGVycm9yJzxiciAvPjwvcHJlPg==:
Exception Traceback (most recent call last):
File "/var/task/handler.py", line 161, in lambda_handler return LambdaHandler.lambda_handler(event, context)
File "/var/task/handler.py", line 55, in lambda_handler return cls().handler(event, context)
File "/var/task/handler.py", line 155, in handler raise Exception(exception)
Exception: PCFET0NUWVBFIGh0bWw+NTAwLiBGcm9tIFphcHBhOiA8cHJlPidUaGlzIGlzIGEgdGVzd
CBlcnJvcic8L3ByZT48YnIgLz48cHJlPlRyYWNlYmFjayAobW9zdCByZWNlbnQgY2FsbCBsYXN0KTo8Y
nIgLz4gIEZpbGUgIi92YXIvdGFzay9oYW5kbGVyLnB5IiwgbGluZSA5NiwgaW4gaGFuZGxlcjxiciAvP
iAgICByZXNwb25zZSA9IFJlc3BvbnNlLmZyb21fYXBwKGFwcCwgZW52aXJvbik8YnIgLz4gIEZpbGUgI
i9wcml2YXRlL3Zhci9mb2xkZXJzLzFqLzV6Y3hqemt4MjliOHRmZ3JoMnk5MHd0ODAwMDBnbi9UL3Bpc
C1idWlsZC12Nlh2SUgvV2Vya3pldWcvd2Vya3pldWcvd3JhcHBlcnMucHkiLCBsaW5lIDg2NSwgaW4gZ
nJvbV9hcHA8YnIgLz4gIEZpbGUgIi9wcml2YXRlL3Zhci9mb2xkZXJzLzFqLzV6Y3hqemt4MjliOHRmZ
3JoMnk5MHd0ODAwMDBnbi9UL3BpcC1idWlsZC12Nlh2SUgvV2Vya3pldWcvd2Vya3pldWcvdGVzdC5we
SIsIGxpbmUgODcxLCBpbiBydW5fd3NnaV9hcHA8YnIgLz4gIEZpbGUgIi9wcml2YXRlL3Zhci9mb2xkZ
XJzLzFqLzV6Y3hqemt4MjliOHRmZ3JoMnk5MHd0ODAwMDBnbi9UL3BpcC1idWlsZC12Nlh2SUgvemFwc
GEvemFwcGEvbWlkZGxld2FyZS5weSIsIGxpbmUgNzgsIGluIF9fY2FsbF9fPGJyIC8+ICBGaWxlICIvc
<trimmed>
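A note on the blob itself: it looks like standard Base64, so it can be decoded to inspect the underlying text. A minimal sketch, using only the first 20 characters of the Exception payload above (the full string decodes the same way):

```python
import base64

# First 20 characters of the Exception payload from the log above;
# the full blob is much longer but decodes identically.
blob = "PCFET0NUWVBFIGh0bWw+"

decoded = base64.b64decode(blob).decode("utf-8")
print(decoded)  # → <!DOCTYPE html>
```

Decoding the full payload reveals an HTML-formatted Flask traceback, which matches the "From Zappa" fragments visible in the decoded text.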
When a user performs an unauthorized request, AWS returns a Client.UnauthorizedOperation response with encoded details of the attempted operation and the reasons for the failure. Only permitted users can decipher the content, using either the command line or the API, and decoding requires the additional sts:DecodeAuthorizationMessage permission for that IAM user.
For the CLI:
Use 'aws sts decode-authorization-message --encoded-message <value>'. For more information, see http://docs.aws.amazon.com/cli/latest/reference/sts/decode-authorization-message.html
For the API:
http://docs.aws.amazon.com/STS/latest/APIReference/API_DecodeAuthorizationMessage.html
I'm trying to set up an Airflow DAG that is able to send emails through the EmailOperator in Composer 2, Airflow 2.3.4. I've followed this guide. I tried running the example DAG that is provided in the guide, but I get an HTTP 400 error.
The log looks like this:
[2023-01-20, 10:46:45 UTC] {taskinstance.py:1904} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/email.py", line 75, in execute
send_email(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/email.py", line 58, in send_email
return backend(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/sendgrid/utils/emailer.py", line 123, in send_email
_post_sendgrid_mail(mail.get(), conn_id)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/sendgrid/utils/emailer.py", line 142, in _post_sendgrid_mail
response = sendgrid_client.client.mail.send.post(request_body=mail_data)
File "/opt/python3.8/lib/python3.8/site-packages/python_http_client/client.py", line 277, in http_request
self._make_request(opener, request, timeout=timeout)
File "/opt/python3.8/lib/python3.8/site-packages/python_http_client/client.py", line 184, in _make_request
raise exc
python_http_client.exceptions.BadRequestsError: HTTP Error 400: Bad Request
I've looked at similar threads on Stack Overflow, but none of those suggestions worked for me.
I have set up and verified the 'from' email address in SendGrid, and it uses a full email address including the domain.
I also set this email address up in Secret Manager (as well as the API key).
I haven't changed the test DAG from the guide, except for the 'to' address.
In another DAG I've tried enabling 'email_on_retry' and that also didn't trigger any mail.
I'm at a loss here, can someone provide me with suggestions on things to try?
I have a process where I need to list failed workflows in Google Cloud Platform so they can be highlighted and fixed. I have managed to do this quite simply by writing a gcloud command and calling it in a shell script, but I need to port this to Python.
I have the following shell command as an example where I am able to filter on a specific workflow to pull back any failures using the --filter flag.
gcloud workflows executions list --project test-project "projects/test-project/locations/europe-west4/workflows/test-workflow" --location europe-west4 --filter STATE:FAILED
According to the documentation, filtering is not possible on the executions list but only on the workflows list, which is fine. You can see in the code snippet below that I am trying to filter on STATE:FAILED, as in the gcloud command.
The example below doesn't work, and there are no examples in the Google Cloud documentation. I have checked the following page:
https://cloud.google.com/python/docs/reference/workflows/latest/google.cloud.workflows_v1.types.ListWorkflowsRequest
from google.cloud import workflows_v1
from google.cloud.workflows import executions_v1

# Create clients
workflow_client = workflows_v1.WorkflowsClient()
execution_client = executions_v1.ExecutionsClient()

project = "test-project"
location = "europe-west4"

# Initialize request argument(s)
request = workflows_v1.ListWorkflowsRequest(
    parent=f"projects/{project}/locations/{location}",
    filter="STATE:FAILED"
)

# Make the request
workflow_page_result = workflow_client.list_workflows(request=request)

# Handle the response
with open("./workflows.txt", "w") as workflow_file:
    for workflow_response in workflow_page_result:
        name = workflow_response.name
        request = executions_v1.ListExecutionsRequest(
            parent=name,
        )
        execution_page_result = execution_client.list_executions(request=request)
        # Handle the response
        for execution_response in execution_page_result:
            print(execution_response)
        workflow_file.write(name)
What is the correct syntax for filtering on the failed state within the Python code? Where would I look to find this information in the Google documentation?
I get the following error message:
/Users/richard.drury/code/gcp/git/service-level-monitoring/venv/bin/python /Users/richard.drury/code/gcp/git/service-level-monitoring/daily_checks/sensor_checks/workflows.py
Traceback (most recent call last):
File "/Users/richard.drury/code/gcp/git/service-level-monitoring/venv/lib/python3.10/site-packages/google/api_core/grpc_helpers.py", line 50, in error_remapped_callable
return callable_(*args, **kwargs)
File "/Users/richard.drury/code/gcp/git/service-level-monitoring/venv/lib/python3.10/site-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/Users/richard.drury/code/gcp/git/service-level-monitoring/venv/lib/python3.10/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "The request was invalid: invalid list filter: Field 'STATE' not found in 'resource'."
debug_error_string = "{"created":"#1670416298.880340000","description":"Error received from peer ipv4:216.58.213.10:443","file":"src/core/lib/surface/call.cc","file_line":967,"grpc_message":"The request was invalid: invalid list filter: Field 'STATE' not found in 'resource'.","grpc_status":3}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/richard.drury/code/gcp/git/service-level-monitoring/daily_checks/sensor_checks/workflows.py", line 15, in <module>
workflow_page_result = workflow_client.list_workflows(request=request)
File "/Users/richard.drury/code/gcp/git/service-level-monitoring/venv/lib/python3.10/site-packages/google/cloud/workflows_v1/services/workflows/client.py", line 537, in list_workflows
response = rpc(
File "/Users/richard.drury/code/gcp/git/service-level-monitoring/venv/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py", line 154, in __call__
return wrapped_func(*args, **kwargs)
File "/Users/richard.drury/code/gcp/git/service-level-monitoring/venv/lib/python3.10/site-packages/google/api_core/grpc_helpers.py", line 52, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.InvalidArgument: 400 The request was invalid: invalid list filter: Field 'STATE' not found in 'resource'. [field_violations {
field: "filter"
description: "invalid list filter: Field \'STATE\' not found in \'resource\'."
}
]
Process finished with exit code 1
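If a server-side filter turns out not to be supported for this resource, one fallback is to filter client-side on each execution's state. The sketch below uses stand-in objects rather than the real client (which needs credentials); the assumption, to be checked against the executions_v1 reference, is that each returned execution exposes a comparable .state attribute backed by an enum with a FAILED value:

```python
from enum import Enum

# Stand-ins for executions_v1.Execution and its State enum; the real
# class and member names are assumptions based on the client reference.
class State(Enum):
    ACTIVE = 1
    SUCCEEDED = 2
    FAILED = 3

class Execution:
    def __init__(self, name, state):
        self.name = name
        self.state = state

# Simulated page of results from execution_client.list_executions(...)
executions = [
    Execution("exec-1", State.SUCCEEDED),
    Execution("exec-2", State.FAILED),
]

# Keep only the failed executions, as the STATE:FAILED filter intended.
failed = [e.name for e in executions if e.state == State.FAILED]
print(failed)  # → ['exec-2']
```

With the real client, the same comprehension would run over execution_page_result instead of the stub list.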
I know just enough DevOps to be dangerous. I've successfully deployed a VERY simple Python Flask app to App Engine that basically publishes received POST data as a message to Pub/Sub. It is almost identical to Google's sample code for doing so. The only difference is that it uses a service account, pushed with the app repository, to access Pub/Sub in order to circumvent this issue.
It works very well so far, but I've started seeing a very small number of errors around starting a new thread in threading.py:
1)
Traceback (most recent call last):
File "src/python/grpcio/grpc/_cython/_cygrpc/credentials.pyx.pxi", line 33, in grpc._cython.cygrpc._spawn_callback_async
File "src/python/grpcio/grpc/_cython/_cygrpc/credentials.pyx.pxi", line 24, in grpc._cython.cygrpc._spawn_callback_in_thread
File "/usr/lib/python2.7/threading.py", line 736, in start
_start_new_thread(self.__bootstrap, ())
thread.error: can't start new thread
2)
Traceback (most recent call last):
File "src/python/grpcio/grpc/_cython/_cygrpc/credentials.pyx.pxi", line 33, in grpc._cython.cygrpc._spawn_callback_async
File "src/python/grpcio/grpc/_cython/_cygrpc/credentials.pyx.pxi", line 24, in grpc._cython.cygrpc._spawn_callback_in_thread
3)
Traceback (most recent call last):
File "src/python/grpcio/grpc/_cython/_cygrpc/credentials.pyx.pxi", line 33, in grpc._cython.cygrpc._spawn_callback_async
File "src/python/grpcio/grpc/_cython/_cygrpc/credentials.pyx.pxi", line 33, in grpc._cython.cygrpc._spawn_callback_async
File "src/python/grpcio/grpc/_cython/_cygrpc/credentials.pyx.pxi", line 24, in grpc._cython.cygrpc._spawn_callback_in_thread
File "src/python/grpcio/grpc/_cython/_cygrpc/credentials.pyx.pxi", line 24, in grpc._cython.cygrpc._spawn_callback_in_thread
File "/usr/lib/python2.7/threading.py", line 736, in start
File "/usr/lib/python2.7/threading.py", line 736, in start
I have 2 questions, in order of importance:
This is an app that basically needs 100% uptime in order not to lose data (I'm not confident the clients attempt retries if there is an error on my server side). Are these errors internal to how App Engine manages my app's resources, rather than errors handling actual requests? How can I determine whether I ever responded with an HTTP error or failed to handle a request? I don't see any errors in my nginx logs; is that the place to look to see if anything failed?
Is there a way I can fix this error?
https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/pubsub/google/cloud/pubsub_v1/publisher/client.py#L143
It looks like publisher.publish(topic_path, data=data) is an asynchronous operation, returning a concurrent.futures.Future object.
Have you tried calling the Future's result()? https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Future.result
This will block until the future object succeeds, fails, or times out.
You could then forward that result as your HTTP response.
Hopefully, the result object will give you more information about the error.
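To see the pattern in isolation, here is the same block-on-result flow using the standard library's own Future as a stand-in for the one publisher.publish() returns (publish_stub is a hypothetical placeholder for the publish call):

```python
from concurrent.futures import ThreadPoolExecutor

def publish_stub(data):
    # Hypothetical stand-in for publisher.publish(topic_path, data=data).
    return "message-id-for-%s" % data

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(publish_stub, "payload")
    # result() blocks until the call completes; it re-raises any exception
    # the call raised, which is where the extra error detail would surface.
    message_id = future.result(timeout=10)

print(message_id)  # → message-id-for-payload
```

Wrapping the result() call in a try/except around the request handler would let you return an explicit HTTP error instead of losing the failure.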
I ended up changing the methodology a bit. Instead of posting a Pub/Sub message and then having Dataflow ingest through GCS to BigQuery, I decided to stream directly into BigQuery using the BigQuery Python client. I updated the dependencies for the Python Flask app to:
Flask==1.0.2
google-cloud-pubsub==0.39.1
gunicorn==19.9.0
google-cloud-bigquery==1.11.2
and I am no longer seeing any of those exceptions. It's worth noting that I'm still using a service account .json credentials file in the same directory as the app source, and I'm creating the BigQuery client with
bq_client = bigquery.Client.from_service_account_json(BQ_SVC_ACCT_FILE).
For anyone else with similar issues, I'd recommend updating your dependencies (especially any Google Cloud client libraries) and creating the client you need from a local service account credentials file. I attempted to use the inherited Compute Engine environment credentials (basically the default project Compute Engine service account), but that was less stable than pushing up an actual credentials file and using it locally. However, assess your own security needs before doing the same.
Generating an access token for the Python Instagram API requires running this file and then entering a Client ID, Client Secret, Redirect URI, and Scope. The console then outputs a URL to follow to authorize the app and asks for the code generated afterwards. Theoretically after this process it should return an access token.
Instead, it's throwing an error:
Traceback (most recent call last):
File "get_access_token.py", line 40, in <module>
access_token = api.exchange_code_for_access_token(code)
File "C:\Users\Daniel Leybzon\Anaconda2\lib\site-packages\instagram\oauth2.py", line 48, in exchange_code_for_access_token
return req.exchange_for_access_token(code=code)
File "C:\Users\Daniel Leybzon\Anaconda2\lib\site-packages\instagram\oauth2.py", line 115, in exchange_for_access_token
raise OAuth2AuthExchangeError(parsed_content.get("error_message", ""))
instagram.oauth2.OAuth2AuthExchangeError: You must provide a client_id
Screenshot provided for context:
I am having trouble upgrading my session token in Google App Engine when my user is not logged into my application via the Google Accounts Users API. If the user is currently logged in, it functions perfectly.
If not, I get this error:
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 511, in __call__
handler.get(*groups)
File "/base/data/home/apps/5th-anniversary/1.341853888797531127/main.py", line 78, in get
u.upgradeToken(self)
File "/base/data/home/apps/5th-anniversary/1.341853888797531127/upload.py", line 47, in upgradeToken
client.UpgradeToSessionToken()
File "/base/data/home/apps/5th-anniversary/1.341853888797531127/gdata/service.py", line 903, in UpgradeToSessionToken
raise NonAuthSubToken
NonAuthSubToken
What are my best options here? I do not want the user to have to log into the Google Accounts API and then the YouTube site to upload a video.
Here is my method for upgrading the token:
def upgradeToken(data, self):
    get = self.request.GET
    authsub_token = get['token']
    gdata.alt.appengine.run_on_appengine(client)
    client.SetAuthSubToken(authsub_token)
    client.UpgradeToSessionToken()
client is simply client = gdata.youtube.service.YouTubeService()
I'm pretty sure I'm missing something on the authentication side, but I can't seem to see what. Thanks!
I solved this by using:
client.UpgradeToSessionToken(gdata.auth.extract_auth_sub_token_from_url(self.request.url))
but this raised another issue when building the upload form with
GetFormUploadToken
I receive:
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 513, in __call__
handler.post(*groups)
File "/base/data/home/apps/5th-anniversary/1.341859541699944556/upload.py", line 106, in post
form = u.getUploadForm(self,title,description,keywords)
File "/base/data/home/apps/5th-anniversary/1.341859541699944556/upload.py", line 65, in getUploadForm
response = client.GetFormUploadToken(video_entry,'http://gdata.youtube.com/action/GetUploadToken')
File "/base/data/home/apps/5th-anniversary/1.341859541699944556/gdata/youtube/service.py", line 716, in GetFormUploadToken
raise YouTubeError(e.args[0])
YouTubeError: {'status': 401L, 'body': '<HTML>\n<HEAD>\n<TITLE>User authentication required.</TITLE>\n</HEAD>\n<BODY BGCOLOR="#FFFFFF" TEXT="#000000">\n<H1>User authentication required.</H1>\n<H2>Error 401</H2>\n</BODY>\n</HTML>\n', 'reason': ''}
Try this:
new_token = client.UpgradeToOAuthAccessToken(
    gdata.auth.extract_auth_sub_token_from_url(self.request.url))
client.SetOAuthToken(new_token)
client.GetFormUploadToken(my_video_entry)