Django - how to remove INFO logs of "channels" package - python

I recently started using the channels package in Django (versions: channels==3.0.4 and channels-redis==3.3.1).
The application is sending a massive amount of unwanted logs for each request I make to Django. Example log:
{"time": "2023-01-11 16:12:09 UTC", "msg": "HTTP %(method)s %(path)s %(status)s [%(time_taken).2f, %(client)s]", "logger": "edison", "level": "INFO", "log_trace": "/Volumes/dev/venv3.8/lib/python3.8/site-packages/channels/management/commands/runserver.py"}
The log looks exactly the same no matter what request I send.
I tried setting the channels logging level to ERROR with logging.getLogger('channels').setLevel(logging.ERROR), just as I do with other packages, but it doesn't help.
Any ideas what I need to do to remove these logs?
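One thing worth checking (an assumption based on the traceback path in the log, which points at channels' own runserver.py): the per-request lines don't go through a logger named "channels" at all. In channels 3.x that module logs through a logger named django.channels.server, so raising that logger's level is a plausible fix, sketched below; verify the logger name against your installed version.

```python
import logging

# Assumption: the request lines come from the logger used by
# channels/management/commands/runserver.py, which in channels 3.x is
# named "django.channels.server" rather than "channels". Raising its
# level should silence the INFO request lines:
logging.getLogger("django.channels.server").setLevel(logging.ERROR)

# Equivalently, via Django's LOGGING setting in settings.py:
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "loggers": {
        "django.channels.server": {
            "level": "ERROR",
        },
    },
}
```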

Related

HTTP 404 error hooking up Google's Cloud Run to Firebase Hosting

I was following this guide https://medium.com/firebase-developers/hosting-flask-servers-on-firebase-from-scratch-c97cfb204579 and I'm stuck on Firebase's ./node_modules/.bin/firebase serve command.
I was able to deploy the project using Cloud Run and got a working service URL for the site, but when I try to serve it locally, visiting localhost:5000 produces this error on the static page:
A problem occurred while trying to handle a proxied rewrite: error looking up URL for Cloud Run service: FirebaseError: HTTP Error: 404, Resource 'flask-fire' of kind 'SERVICE' in region 'us-central1' in project 'cjcflaskapp10192021' does not exist.
Here is my firebase.json file.
{
  "hosting": {
    "public": "static",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ],
    "rewrites": [{
      "source": "**",
      "run": {
        "serviceId": "flask-fire"
      }
    }]
  }
}
From the article comments I discovered several users who experienced the same error; one comment in particular refers to a region mismatch between Firebase Hosting and the Cloud Run service:
Looks like the region selection is an important step. Because in the first step, I deployed the server side code ( app.py ) in region us-west-1 and it worked fine. But after setting up the firebase deployment setup, it strangely looked up for my URL in region us-central1.
Error Log :
A problem occurred while trying to handle a proxied rewrite: error looking up URL for Cloud Run service: FirebaseError: HTTP Error: 404, Resource 'flask-simple-pwa' of kind 'SERVICE' in region 'us-central1' in project 'flask-simple-pwa' does not exist.
When I checked the Firebase documentation, the location us-west1 is nowhere to be seen.
It could be that this has changed since the time you wrote the article. Is this something that you can verify?
There is official documentation on how to serve dynamic content from Cloud Run with Firebase Hosting, which is more complete; it explains that a region should be set in firebase.json matching the region of your Cloud Run deployment. If no region is set, Firebase Hosting defaults to us-central1.
"hosting": {
// ...
// Add the "rewrites" attribute within "hosting"
"rewrites": [{
"source": "/helloworld",
"run": {
"serviceId": "helloworld", // "service name" (from when you deployed the container image)
"region": "us-central1" // optional (if omitted, default is us-central1)
}
}]
}
If this does not match your Cloud Run deployment, it can cause this error. For a list of supported Firebase locations, you can refer to this page.
The above answer from ErnestoC is correct.
I had the same issue and figured out I had to choose, e.g., europe-west3 as the region for my Flask API.
Thank you!

AWS SNS push notification request returns "DeviceTokenNotForTopic"

The iOS app requests a token and sends it to the Python API. I add it to AWS as a device token and subscribe it to the topic.
Then, when trying to send a push notification, I receive a "DeviceTokenNotForTopic" error like this:
{
  "notification": {
    "messageMD5Sum": "71f457fe91ebc62efdce2acc25406ec8",
    "messageId": "6124ef9c-860d-561a-94fa-b98e2392fd2a",
    "topicArn": "arn:aws:sns:us-west-2:XXXXXXXXXXXX:all",
    "timestamp": "2019-10-05 14:06:23.427"
  },
  "delivery": {
    "deliveryId": "................",
    "destination": "...............",
    "providerResponse": "{\"reason\":\"DeviceTokenNotForTopic\"}",
    "dwellTimeMs": 145,
    "attempts": 1,
    "token": "............",
    "statusCode": 400
  },
  "status": "FAILURE"
}
The "DeviceTokenNotForTopic" error is usually returned to SNS by APNS (Apple Push Notification Service).
If you look through the APNS docs, you'll see that this error mainly occurs when "The device token does not match the specified topic", where the "topic" refers to the bundle ID of the application. This means either:
a) The tokens were not generated for that particular bundle ID.
b) The bundle ID in the certificate used to authenticate with APNS does not match the device token's registered app. In that case the endpoint on SNS becomes disabled, because SNS is essentially saying "unless you change the device token, we won't be able to reach this endpoint".
Here are some tips in case anyone else runs into this issue:
Make sure the token is registered to the correct platform application if you have multiple iOS apps, and confirm that the certificate is the correct one for that application environment.
If the iOS app was recently moved from sandbox to production, the certificates need to change as well (and vice versa).
Hope this helps.
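To make the sandbox/production distinction above concrete, here is a small hypothetical helper (the function, account number, and app name are made up for illustration; the ARN shape follows SNS's platform application naming):

```python
# Hypothetical helper illustrating the sandbox vs. production point:
# SNS keeps separate platform applications for APNS (production
# certificates) and APNS_SANDBOX (development certificates), and a
# device token must be registered against the one matching the build
# of the app that generated it.
def platform_application_arn(region, account_id, app_name, production):
    platform = "APNS" if production else "APNS_SANDBOX"
    return "arn:aws:sns:%s:%s:app/%s/%s" % (region, account_id, platform, app_name)

# A debug build's token belongs with the sandbox application:
dev_arn = platform_application_arn("us-west-2", "123456789012", "my-ios-app", production=False)
```

Registering a production token against the sandbox application (or vice versa) is exactly the mismatch that leaves the SNS endpoint disabled.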

How to check if message failed in django-channels

I am using channels in my project, which makes it very easy for Django to set up and use websockets.
In my application, every user that logs in opens a Group to which we can send information when there is any activity.
def ws_connect(message):
    Group("%s" % message.user.id).add(message.reply_channel)
So whenever I want to send a message to that user, I use:
Group('%s' % user.id).send({
    'text': json.dumps({
        'message': 'Some message'
    })
})
But this fails silently if there is any error.
So the question is: is there a way to check whether it failed, or whether the Group exists (is live or listening) before I send data, so I can handle it better?
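As far as I know, Group.send() in channels is fire-and-forget, so there is no built-in delivery confirmation; a common workaround is an application-level acknowledgment, where each message carries an id that the client echoes back over the websocket. A minimal sketch (all names here are made up for illustration, not part of the channels API):

```python
import json
import uuid

# Sketch of an application-level ack: tag each outgoing message with an
# id, remember it, and have the client echo the id back so the server
# can mark the message delivered (and retry or log the rest).
pending = {}  # message_id -> payload still awaiting an ack

def send_with_ack(group_send, payload):
    """group_send would be Group('%s' % user.id).send in the question."""
    message_id = str(uuid.uuid4())
    pending[message_id] = payload
    group_send({'text': json.dumps({'id': message_id, 'message': payload})})
    return message_id

def on_ack(message_id):
    """Call this from ws_receive when the client echoes the id back."""
    return pending.pop(message_id, None)
```

Anything still sitting in pending after a timeout was, by this scheme's definition, not delivered.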

Setting the SendAs via python gmail api returns "Custom display name disallowed"

I can't find any results when searching Google for this response.
I'm using the current Google Python API Client to make requests against the Gmail API. I can successfully insert a label, and I can successfully retrieve a user's SendAs settings, but I cannot update, patch, or create a SendAs without receiving this error.
Here's a brief snippet of my code:
sendAsResource = {
    "sendAsEmail": "existingalias@test.com",
    "isDefault": True,
    "replyToAddress": "existingalias@test.com",
    "displayName": "Test Sendas",
    "isPrimary": False,
    "treatAsAlias": False
}
self.service.users().settings().sendAs().create(userId="me", body=sendAsResource).execute()
The response I get is:
<HttpError 400 when requesting https://www.googleapis.com/gmail/v1/users/me/settings/sendAs?alt=json returned "Custom display name disallowed">
I've tried userId="me" as well as the user I'm authenticated as; both result in this error. I am using a service account with domain-wide delegation. Since adding a label works fine, I'm confused why this doesn't.
All pip modules are up to date as of this morning (google-api-python-client==1.5.3)
Edit: After hours of testing, I decided to try another user and it worked fine. There is something unique about my initial test account.
This was a bug in the Gmail API. It is fixed now.

How to set up celery and django with Amazon SQS

I'm trying to convert videos uploaded by users in a Django app. Problem is, it eats up resources and the site becomes unavailable during this time, so a new instance has to be spun up to scale it (I'm using Elastic Beanstalk). I did some research and decided to use SQS with a worker environment.
I set up celery and added the necessary configs to the settings.py file:
BROKER_TRANSPORT = 'sqs'
BROKER_TRANSPORT_OPTIONS = {
    'region': 'us-east-1',
    'polling_interval': 3,
    'visibility_timeout': 3600,
}
BROKER_USER = AWS_ACCESS_KEY_ID
BROKER_PASSWORD = AWS_SECRET_ACCESS_KEY
CELERY_DEFAULT_QUEUE = 'celery-convert-video'
CELERY_QUEUES = {
    CELERY_DEFAULT_QUEUE: {
        'exchange': CELERY_DEFAULT_QUEUE,
        'binding_key': CELERY_DEFAULT_QUEUE,
    }
}
I set up the POST url to /celery-convert-video/
I also do:
video = TestVideo.objects.create(uploaded_video=videoFile)
then
ConvertVideo.delay(video_id=video.id)
to send the task to SQS. It grabs the uploaded file using a URL and converts it. Locally it works, but the problem comes in the cloud.
I seem to be having problems setting up the worker environment, because its health always ends up "Severe", and every time I check the cause, it says 100% of requests return a 4xx error. Logs say it's a 403.
The tasks show up fine in SQS (but are indecipherable, just random letters; I'm guessing they're encoded?) and go "in flight", so I'm assuming the problem is the worker. I have no idea how to set it up properly.
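On the "random letters" point: that is almost certainly just encoding, not corruption. Celery's SQS transport serializes the task message to JSON and base64-encodes the body before putting it on the queue, which is what the SQS console displays. A round-trip with a made-up payload (the task name and fields below are only illustrative):

```python
import base64
import json

# Celery hands SQS the task message serialized to JSON and then
# base64-encoded, so the queue console shows what looks like random
# letters. Round-tripping a made-up payload to demonstrate:
payload = {"task": "ConvertVideo", "id": "abc-123", "kwargs": {"video_id": 42}}
encoded = base64.b64encode(json.dumps(payload).encode()).decode()  # what SQS shows
decoded = json.loads(base64.b64decode(encoded))
```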
So a few questions:
Do I have to use the same deployment files for the worker and the main environment?
Do I have to edit the deployment files and add a /celery-convert-video/ view in my worker environment and then send it to the equivalent ConvertVideo function in the worker?
How do I connect the worker to the RDS database used by the main environment?
How do I get rid of the 403 errors and get the worker health back to green?
If anyone has a step by step tutorial or something that they can point me to that would be a big help!
P.S. I'm not an expert in cloud computing; it's actually my first stab at it, so forgive my ignorance.
