SendGrid HTTP 400 error using Cloud Composer - Python

I'm trying to set up an Airflow DAG that can send emails through the EmailOperator in Composer 2 (Airflow 2.3.4). I've followed this guide. I tried running the example DAG provided in the guide, but I get an HTTP 400 error.
The log looks like this:
[2023-01-20, 10:46:45 UTC] {taskinstance.py:1904} ERROR - Task failed with exception
Traceback (most recent call last):
  File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/email.py", line 75, in execute
    send_email(
  File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/email.py", line 58, in send_email
    return backend(
  File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/sendgrid/utils/emailer.py", line 123, in send_email
    _post_sendgrid_mail(mail.get(), conn_id)
  File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/sendgrid/utils/emailer.py", line 142, in _post_sendgrid_mail
    response = sendgrid_client.client.mail.send.post(request_body=mail_data)
  File "/opt/python3.8/lib/python3.8/site-packages/python_http_client/client.py", line 277, in http_request
    self._make_request(opener, request, timeout=timeout)
  File "/opt/python3.8/lib/python3.8/site-packages/python_http_client/client.py", line 184, in _make_request
    raise exc
python_http_client.exceptions.BadRequestsError: HTTP Error 400: Bad Request
I've looked at similar threads on Stack Overflow, but none of those suggestions worked for me.
I have set up and verified the 'from' email address in SendGrid, and it is a full email address including the domain.
I also set this email address up in Secret Manager (as well as the API key).
I haven't changed the test DAG from the guide, except for the 'to' address.
In another DAG I've tried enabling 'email_on_retry', and that also didn't trigger any email.
I'm at a loss here; can someone suggest things to try?
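
For reference, a minimal sketch of the kind of test DAG the guide describes (the addresses and IDs below are placeholders, not the guide's actual values):

from datetime import datetime

from airflow import DAG
from airflow.operators.email import EmailOperator

with DAG(
    dag_id="sendgrid_email_test",
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    # Relies on the SendGrid email backend being configured per the guide
    # (email backend override plus API key and from-address in Secret Manager).
    send_test_email = EmailOperator(
        task_id="send_test_email",
        to="recipient@example.com",  # the only field changed from the guide's example
        subject="SendGrid test",
        html_content="Test email sent through SendGrid from Cloud Composer.",
    )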

Related

Catching Firebase 504 gateway timeout

I'm building a simple IoT device (with a Raspberry Pi Zero) that pulls data from Firebase Realtime Database every second and checks for updates.
However, after a certain time (I'm not sure exactly how long, but somewhere between 1 and 3 hours), the program exits with a 504 Server Error: Gateway Time-out message.
I couldn't understand exactly why this is happening. I tried to recreate the error by disconnecting the Pi from the internet, but I did not get this message. Instead, the program simply paused on a ref.get() line and automatically resumed running once the connection was back.
This device is meant to be always on, so ideally, if I get some kind of error, I would like to restart the program, reinitiate the connection, or reboot the Pi. Is there a way to achieve something like this?
It seems like the message is actually generated by the firebase_admin package.
Here is the error message:
Traceback (most recent call last):
  File "/home/pi/.local/lib/python3.7/site-packages/firebase_admin/db.py", line 944, in request
    return super(_Client, self).request(method, url, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/firebase_admin/_http_client.py", line 105, in request
    resp.raise_for_status()
  File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://someFirebaseProject.firebaseio.com/someRef/subSomeRef/payload.json

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/pi/Desktop/project/main.py", line 94, in <module>
    lastUpdate = ref.get()['lastUpdate']
  File "/home/pi/.local/lib/python3.7/site-packages/firebase_admin/db.py", line 223, in get
    return self._client.body('get', self._add_suffix(), params=params)
  File "/home/pi/.local/lib/python3.7/site-packages/firebase_admin/_http_client.py", line 117, in body
    resp = self.request(method, url, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/firebase_admin/db.py", line 946, in request
    raise _Client.handle_rtdb_error(error)
firebase_admin.exceptions.UnknownError: Internal server error.
To reboot the whole Raspberry Pi, you can just run a shell command:
import os
os.system("sudo reboot")
I've had this problem too and usually feel safer with that, but there are obvious downsides. I'd try resetting the Wi-Fi connection or the network interface in a similar way.
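
As a sketch of the catch-and-recover idea (assuming the same firebase_admin setup as in the question, with ref being the database reference; the function name and retry counts are made up for illustration):

import os
import time

from firebase_admin import exceptions as firebase_exceptions

def get_last_update(ref, retries=3, delay_seconds=5):
    # Retry transient server errors a few times before giving up.
    for _ in range(retries):
        try:
            return ref.get()['lastUpdate']
        except firebase_exceptions.FirebaseError:
            time.sleep(delay_seconds)
    # Last resort: reboot the Pi, as suggested above.
    os.system("sudo reboot")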

StatusCode.PERMISSION_DENIED error while publishing message to Google PubSub

I am trying to publish messages to Google PubSub in Python.
Here is the code that I tried:
from google.cloud import pubsub
ps = pubsub.Client()
topic = ps.topic("topic_name")
topic.publish("Message to topic")
I am getting the following error:
File "/usr/local/lib/python2.7/dist-packages/google/cloud/iterator.py", line 218, in _items_iter
for page in self._page_iter(increment=False):
File "/usr/local/lib/python2.7/dist-packages/google/cloud/iterator.py", line 247, in _page_iter
page = self._next_page()
File "/usr/local/lib/python2.7/dist-packages/google/cloud/iterator.py", line 445, in _next_page
items = six.next(self._gax_page_iter)
File "/usr/local/lib/python2.7/dist-packages/google/gax/__init__.py", line 455, in next
return self.__next__()
File "/usr/local/lib/python2.7/dist-packages/google/gax/__init__.py", line 465, in __next__
response = self._func(self._request, **self._kwargs)
File "/usr/local/lib/python2.7/dist-packages/google/gax/api_callable.py", line 376, in inner
return a_func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/google/gax/retry.py", line 127, in inner
' classified as transient', exception)
google.gax.errors.RetryError: GaxError(Exception occurred in retry method that was not classified as transient, caused by <_Rendezvous of RPC that terminated with (StatusCode.PERMISSION_DENIED, User not authorized to perform this action.)>)
I've downloaded service-account.json, and its path is set in GOOGLE_APPLICATION_CREDENTIALS.
I've also tried installing gcloud and executing gcloud auth application-default login.
Please note that I am able to publish messages using the gcloud command and in Java:
$ gcloud beta pubsub topics publish sample "hello"
messageIds: '127284267552464'
Java code:
TopicName topicName = TopicName.create(SRC_PROJECT, SRC_TOPIC);
Publisher publisher = Publisher.defaultBuilder(topicName).build();
ByteString data1 = ByteString.copyFromUtf8("hello");
PubsubMessage pubsubMessage1 = PubsubMessage.newBuilder().setData(data1).build();
publisher.publish(pubsubMessage1);
What is missing in the Python code?
I followed the steps described here.
This turned out to be an issue with my setup: my service-account.json did not have enough permissions to publish messages to PubSub.
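
As a quick sanity check (a sketch, assuming GOOGLE_APPLICATION_CREDENTIALS is set as described), you can print which service account the client will authenticate as, and then grant that account the Pub/Sub publisher role (roles/pubsub.publisher) in IAM:

import json
import os

# Print the service account email from the key file; this is the identity
# that needs the Pub/Sub publisher role.
with open(os.environ['GOOGLE_APPLICATION_CREDENTIALS']) as key_file:
    print(json.load(key_file)['client_email'])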

Instagram API access token generation throws "You must provide a client_id" error even though I provided one

Generating an access token for the Python Instagram API requires running this file and then entering a Client ID, Client Secret, Redirect URI, and Scope. The console then outputs a URL to follow to authorize the app and asks for the code generated afterwards. Theoretically, after this process it should return an access token.
Instead, it throws an error:
Traceback (most recent call last):
  File "get_access_token.py", line 40, in <module>
    access_token = api.exchange_code_for_access_token(code)
  File "C:\Users\Daniel Leybzon\Anaconda2\lib\site-packages\instagram\oauth2.py", line 48, in exchange_code_for_access_token
    return req.exchange_for_access_token(code=code)
  File "C:\Users\Daniel Leybzon\Anaconda2\lib\site-packages\instagram\oauth2.py", line 115, in exchange_for_access_token
    raise OAuth2AuthExchangeError(parsed_content.get("error_message", ""))
instagram.oauth2.OAuth2AuthExchangeError: You must provide a client_id
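For context, the flow in question boils down to something like this sketch (using the python-instagram client; all credential values are placeholders):

from instagram.client import InstagramAPI

# The client must be constructed with real app credentials; an empty or
# missing client_id is one way to hit the "You must provide a client_id" error.
api = InstagramAPI(
    client_id="CLIENT_ID",
    client_secret="CLIENT_SECRET",
    redirect_uri="http://localhost:8515/oauth_callback",
)
print(api.get_authorize_url(scope=["basic"]))  # open this URL and authorize the app
code = raw_input("Paste the code from the redirect URL: ")  # Python 2, as in the question
access_token, user_info = api.exchange_code_for_access_token(code)
print(access_token)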

Understanding AWS Lambda CloudWatch logs

I'm using AWS Lambda with Python, and I'm getting strange garbage in my logs when errors occur. Does anybody know what this is?
My stack is AWS Lambda with the Zappa framework (https://github.com/Miserlou/Zappa), Flask, and Python 2.7.
<trimmed>
dklIL2ZsYXNrL2ZsYXNrL2FwcC5weSIsIGxpbmUgMTQ3NSwgaW4gZnVsbF9kaXNwYXRjaF9yZXF1ZXN0
PGJyIC8+ICBGaWxlICIvcHJpdmF0ZS92YXIvZm9sZGVycy8xai81emN4anpreDI5Yjh0ZmdyaDJ5OTB3
dDgwMDAwZ24vVC9waXAtYnVpbGQtdjZYdklIL2ZsYXNrL2ZsYXNrL2FwcC5weSIsIGxpbmUgMTQ2MSwg
aW4gZGlzcGF0Y2hfcmVxdWVzdDxiciAvPiAgRmlsZSAiL3ByaXZhdGUvdmFyL2ZvbGRlcnMvMWovNXpj
eGp6a3gyOWI4dGZncmgyeTkwd3Q4MDAwMGduL1QvcGlwLWJ1aWxkLWp2RVlXSS9mbGFzay1yZXN0ZnVs
L2ZsYXNrX3Jlc3RmdWwvX19pbml0X18ucHkiLCBsaW5lIDQ3NywgaW4gd3JhcHBlcjxiciAvPiAgRmls
ZSAiL3ByaXZhdGUvdmFyL2ZvbGRlcnMvMWovNXpjeGp6a3gyOWI4dGZncmgyeTkwd3Q4MDAwMGduL1Qv
cGlwLWJ1aWxkLXY2WHZJSC9mbGFzay9mbGFzay92aWV3cy5weSIsIGxpbmUgODQsIGluIHZpZXc8YnIg
Lz4gIEZpbGUgIi9wcml2YXRlL3Zhci9mb2xkZXJzLzFqLzV6Y3hqemt4MjliOHRmZ3JoMnk5MHd0ODAw
MDBnbi9UL3BpcC1idWlsZC1qdkVZV0kvZmxhc2stcmVzdGZ1bC9mbGFza19yZXN0ZnVsL19faW5pdF9f
LnB5IiwgbGluZSA1ODcsIGluIGRpc3BhdGNoX3JlcXVlc3Q8YnIgLz4gIEZpbGUgIi9Vc2Vycy9kYXZl
bWFuL0ludGVybmFsL1NjcmF0Y2gvV2ViRGV2L2F3cy1sYW1iZGEvemFwcGEvY29udHJvbGxlcnMvZXJy
b3JfY29udHJvbGxlci5weSIsIGxpbmUgMTUsIGluIGdldDxiciAvPlRlc3RFcnJvcjogJ1RoaXMgaXMg
YSB0ZXN0IGVycm9yJzxiciAvPjwvcHJlPg==:
Exception Traceback (most recent call last):
  File "/var/task/handler.py", line 161, in lambda_handler
    return LambdaHandler.lambda_handler(event, context)
  File "/var/task/handler.py", line 55, in lambda_handler
    return cls().handler(event, context)
  File "/var/task/handler.py", line 155, in handler
    raise Exception(exception)
Exception: PCFET0NUWVBFIGh0bWw+NTAwLiBGcm9tIFphcHBhOiA8cHJlPidUaGlzIGlzIGEgdGVzd
CBlcnJvcic8L3ByZT48YnIgLz48cHJlPlRyYWNlYmFjayAobW9zdCByZWNlbnQgY2FsbCBsYXN0KTo8Y
nIgLz4gIEZpbGUgIi92YXIvdGFzay9oYW5kbGVyLnB5IiwgbGluZSA5NiwgaW4gaGFuZGxlcjxiciAvP
iAgICByZXNwb25zZSA9IFJlc3BvbnNlLmZyb21fYXBwKGFwcCwgZW52aXJvbik8YnIgLz4gIEZpbGUgI
i9wcml2YXRlL3Zhci9mb2xkZXJzLzFqLzV6Y3hqemt4MjliOHRmZ3JoMnk5MHd0ODAwMDBnbi9UL3Bpc
C1idWlsZC12Nlh2SUgvV2Vya3pldWcvd2Vya3pldWcvd3JhcHBlcnMucHkiLCBsaW5lIDg2NSwgaW4gZ
nJvbV9hcHA8YnIgLz4gIEZpbGUgIi9wcml2YXRlL3Zhci9mb2xkZXJzLzFqLzV6Y3hqemt4MjliOHRmZ
3JoMnk5MHd0ODAwMDBnbi9UL3BpcC1idWlsZC12Nlh2SUgvV2Vya3pldWcvd2Vya3pldWcvdGVzdC5we
SIsIGxpbmUgODcxLCBpbiBydW5fd3NnaV9hcHA8YnIgLz4gIEZpbGUgIi9wcml2YXRlL3Zhci9mb2xkZ
XJzLzFqLzV6Y3hqemt4MjliOHRmZ3JoMnk5MHd0ODAwMDBnbi9UL3BpcC1idWlsZC12Nlh2SUgvemFwc
GEvemFwcGEvbWlkZGxld2FyZS5weSIsIGxpbmUgNzgsIGluIF9fY2FsbF9fPGJyIC8+ICBGaWxlICIvc
<trimmed>
When an unauthorized request is performed by the user, AWS returns a Client.UnauthorizedOperation response with encoded details of the requested operation and the reasons for failure. Only permitted users can decipher the content, either using the command line or the API, and decoding requires additional rights for that IAM user.
For the CLI:
Use aws sts decode-authorization-message --encoded-message <value>. For more information, see http://docs.aws.amazon.com/cli/latest/reference/sts/decode-authorization-message.html
For the API:
http://docs.aws.amazon.com/STS/latest/APIReference/API_DecodeAuthorizationMessage.html
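
In Python, the API route above looks roughly like this (a sketch; the caller needs the sts:DecodeAuthorizationMessage permission, and encoded_message stands in for the blob from the logs):

import boto3

sts = boto3.client('sts')
encoded_message = '...'  # paste the encoded blob here
response = sts.decode_authorization_message(EncodedMessage=encoded_message)
print(response['DecodedMessage'])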

gdata internal error

Since yesterday, a previously working Python gdata program has stopped working, after I changed the IP address it uses.
I receive the following stack trace:
Traceback (most recent call last):
  File "C:\prod\googleSite\googleSite2.py", line 23, in <module>
    feed = client.GetContentFeed()
  File "C:\Python27\lib\site-packages\gdata\sites\client.py", line 155, in get_content_feed
    auth_token=auth_token, **kwargs)
  File "C:\Python27\lib\site-packages\gdata\client.py", line 635, in get_feed
    **kwargs)
  File "C:\Python27\lib\site-packages\gdata\client.py", line 320, in request
    RequestError)
gdata.client.RequestError: Server responded with: 500, Internal Error
The code is as follows:
import gdata.sites.client
import gdata.sites.data

client = gdata.sites.client.SitesClient(source='xxx', site='yyy')
client.ssl = True  # Force API requests through HTTPS
client.ClientLogin('user@googlemail.com', 'password', client.source)
feed = client.GetContentFeed()
Update:
The issue fixes itself after an hour. Is there any kind of commit or logout call that would avoid this?
Since you're not passing anything to GetContentFeed, it's using CONTENT_FEED_TEMPLATE % (self.domain, self.site) as the URI. I'm not sure whether the IP change had an impact on what the self.domain/self.site values should be, but it might be worth checking those.
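
As a quick check along those lines (a sketch; the source, site, and login values are placeholders from the question):

import gdata.sites.client

client = gdata.sites.client.SitesClient(source='xxx', site='yyy')
client.ssl = True
client.ClientLogin('user@googlemail.com', 'password', client.source)

# Print the URI the client builds from client.domain and client.site,
# then request the feed with that URI passed explicitly.
print(client.MakeContentFeedUri())
feed = client.GetContentFeed(uri=client.MakeContentFeedUri())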
