How to use GAE deferred functionality? - python

I do the following:
import logging
import random

from google.appengine.ext import deferred, webapp


def send_message(client_id, message):
    logging.info("sending message...")


class MyHandler(webapp.RequestHandler):
    def get(self, field_name):
        ...
        scUpdate = {
            'val': value,
            'name': field_name_converted
        }
        message = simplejson.dumps(scUpdate)
        deferred.defer(send_message, client_id, message,
                       _countdown=random.randrange(0, 5, 1))
and getting
PermanentTaskFailure: 'module' object has no attribute 'send_message'
What is wrong here?
Update: it looks like the problem is the same as the one described in PermanentTaskFailure: 'module' object has no attribute 'Migrate', but I don't understand how to fix it.

See https://developers.google.com/appengine/articles/deferred:
Limitations of the deferred library
You can't call a method in the request handler module.
The function that is called via deferred.defer must not live in the same module as the request handler in which deferred.defer is called.
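One way to satisfy that limitation is to move send_message into its own module and import it from the handler module, so the deferred task can resolve the function by module path. A minimal sketch, assuming a module name of tasks.py (the name is only an example):

# tasks.py (example module name): holds the deferred function only
import logging

def send_message(client_id, message):
    logging.info("sending message...")

# handler module: import the function and defer it by reference
import random

from google.appengine.ext import deferred, webapp
import tasks

class MyHandler(webapp.RequestHandler):
    def get(self, field_name):
        ...
        deferred.defer(tasks.send_message, client_id, message,
                       _countdown=random.randrange(0, 5, 1))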

AttributeError: '_thread._local' object has no attribute 'token'

There are already questions that address this problem, e.g.
Python: 'thread._local object has no attribute 'todo'
but the solutions don't seem to apply to my problem. I make sure to access threading.local() in the same thread that sets the value. I'm trying to use this feature in conjunction with a socket server. This is my code:
class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        token = str(uuid4())
        global_data = threading.local()
        global_data.token = token
        logger.info(f"threading.local().token: {threading.local().token}")  # This line raises the error
The server code I'm using:
class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

def run_server():
    server = ThreadedTCPServer(
        (definitions.HOST, definitions.PORT), ThreadedTCPRequestHandler
    )
    with server:
        server.serve_forever()
Your code does this:
1. Create a brand new threading.local
2. Store a reference to it in the variable global_data
3. Give it a token
4. Create a brand new threading.local
5. Print its token
Step 5 throws an exception because the new threading.local created in step 4 does not have a token; it is not the same threading.local you created in step 1.
Perhaps you meant {global_data.token}?
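In other words, read the token back from the object already stored in global_data instead of constructing another threading.local():

logger.info(f"global_data.token: {global_data.token}")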

Django Testing: AttributeError: 'Client' object has no attribute 'get'

I am new to the Django framework and I am trying to write some tests for the apps in my project. Currently I have two apps, hoardings and clients, which both have the same basic CRUD features. For testing purposes I have created a tests directory that looks like this:
clients
- tests
  - __init__.py
  - test_views.py
That's how I am maintaining my tests for both apps. My test_views.py has the following code:
from django.test import TestCase
from django.urls import reverse

from hoardings.models import State, City
from clients.models import Client


class ClientManagementTest(TestCase):
    def setUp(self):
        self.state = State.objects.create(desc='West Bengal')
        self.city = City.objects.create(state=self.state, desc='Kolkata')
        self.client = Client()

    def test_client_creation_form_can_be_rendered(self):
        response = self.client.get(reverse('clients:create'))
        # Check that the response is 200 OK.
        self.assertEqual(response.status_code, 200)
        # check if csrf token is present
        self.assertContains(response, 'csrfmiddlewaretoken')
        # Check that the response contains a form.
        self.assertContains(response, '<form')
        # assert the context values
        self.assertIn('url', response.context)
        self.assertIn('heading', response.context)
        self.assertIn('states', response.context)
        self.assertIn('client_types', response.context)
As you can see, in the setUp method I am creating an object of Client which is used to send the request. But every time I run the tests I get the following error:
ERROR: test_client_creation_form_can_be_rendered (tests.test_views.ClientManagementTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/ropali/Development/PythonWorkspace/hms_venv/hms/clients/tests/test_views.py", line 19, in test_client_creation_form_can_be_rendered
    response = self.client.get(reverse('clients:create'))
AttributeError: 'Client' object has no attribute 'get'
As per my understanding this means that the client object is not being created, so it cannot find the get attribute, and I get a similar error for the POST request as well.
But one thing is bugging me: I have a similar test setup for the hoardings app and it runs perfectly fine.
Can anyone please help me understand what I am doing wrong here? Let me know if you need any other details.
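For what it's worth, the likely cause (my reading of the traceback, not something stated elsewhere in the thread) is a name collision: django.test.TestCase already provides Django's test client as self.client, and the line self.client = Client() in setUp replaces it with an instance of the clients.models.Client model, which has no get method. The hoardings tests work because they never overwrite self.client. A minimal sketch of the fix is to store the model instance under a different attribute name:

from django.test import TestCase
from django.urls import reverse

from hoardings.models import State, City
from clients.models import Client


class ClientManagementTest(TestCase):
    def setUp(self):
        self.state = State.objects.create(desc='West Bengal')
        self.city = City.objects.create(state=self.state, desc='Kolkata')
        # Any name other than `client` keeps Django's built-in test client
        # (TestCase.client) from being shadowed by the model instance.
        self.client_record = Client()

    def test_client_creation_form_can_be_rendered(self):
        # self.client is Django's test client again, so .get() exists.
        response = self.client.get(reverse('clients:create'))
        self.assertEqual(response.status_code, 200)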

Can't set SameSite attribute of cookie in Tornado

I'm trying to set a cookie with the SameSite header in a Tornado handler. I already looked at this answer and used the following monkeypatch:
from http.cookies import Morsel
Morsel._reserved["samesite"] = "SameSite"
Then, in a different file which imports the monkeypatch above, I'm trying to do the following in a handler class that extends RequestHandler:
from tornado.web import RequestHandler

class UserHandler(RequestHandler):
    async def login(self):
        # Application logic....
        self.set_secure_cookie("session_id", session_key, samesite: "None")
However, for some reason this doesn't work, and instead I'm getting an "invalid syntax" error.
Note that I'm using Python 3.7.4 and tornado v6.0.3.
samesite: "None" is not the way to pass keyword arguments to functions. You should use =
self.set_secure_cookie("session_id", session_key, samesite="None")
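Putting the monkeypatch and the corrected keyword argument together, a minimal sketch of the handler might look like this (session_key still comes from your own application logic):

from http.cookies import Morsel
from tornado.web import RequestHandler

# Monkeypatch from the question so http.cookies accepts the SameSite attribute.
Morsel._reserved["samesite"] = "SameSite"

class UserHandler(RequestHandler):
    async def login(self):
        # Application logic....
        self.set_secure_cookie("session_id", session_key, samesite="None")

set_secure_cookie forwards extra keyword arguments through set_cookie to the underlying Morsel, which is how the patched "samesite" key gets picked up.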

How to mock a pika connection for a different module?

I have a class that imports the following module:
import pika
import pickle
from apscheduler.schedulers.background import BackgroundScheduler
import time
import logging


class RabbitMQ():
    def __init__(self):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
        self.channel = self.connection.channel()
        self.sched = BackgroundScheduler()
        self.sched.add_job(self.keep_connection_alive, id='clean_old_data',
                           trigger='cron', hour='*', minute='*', second='*/50')
        self.sched.start()

    def publish_message(self, message, path="path"):
        message["path"] = path
        logging.info(message)
        message = pickle.dumps(message)
        self.channel.basic_publish(exchange="", routing_key="server", body=message)

    def keep_connection_alive(self):
        self.connection.process_data_events()


rabbitMQ = RabbitMQ()


def publish_message(message, path="path"):
    rabbitMQ.publish_message(message, path=path)
My class.py:
import RabbitMQ as rq


class MyClass():
    ...
When writing unit tests for MyClass I can't mock the connection for this part of the code; it keeps throwing exceptions and will not work at all:
pika.exceptions.ConnectionClosed: Connection to 127.0.0.1:5672 failed: [Errno 111] Connection refused
I tried a couple of approaches to mock this connection, but none of them seem to work. I was wondering what I can do to support this sort of test. Should I mock the entire RabbitMQ module, or maybe only the connection?
Like the commenter above mentions, the issue is the global creation of your RabbitMQ instance.
My knee-jerk reaction is to say "just get rid of that, and your module-level publish_message". If you can do that, go for that solution. You have a publish_message on your RabbitMQ class that accepts the same args; any caller would then be expected to create an instance of your RabbitMQ class.
If you don't want to or can't do that for whatever reason, you could move that object instantiation into your module-level publish_message, like this:
def publish_message(message, path="path"):
    rabbitMQ = RabbitMQ()
    rabbitMQ.publish_message(message, path=path)
This will create a new connection every time you call it though. Maybe that's ok...but maybe it's not. So to avoid creating duplicate connections, you'd want to introduce something like a singleton pattern:
class RabbitMQ():
    __instance = None
    ...

    @classmethod
    def get_instance(cls):
        if cls.__instance is None:
            cls.__instance = RabbitMQ()
        return cls.__instance


def publish_message(message, path="path"):
    RabbitMQ.get_instance().publish_message(message, path=path)
Ideally though, you'd want to avoid the singleton pattern entirely. Whatever caller should store a single instance of your RabbitMQ object and call publish_message on it directly.
So the TLDR/ideal solution IMO: Just get rid of those last 3 lines. The caller should create a RabbitMQ object.
EDIT: Oh, and as for why it's happening: when you import that module, rabbitMQ = RabbitMQ() is evaluated immediately. Your attempt to mock it happens after that line has already run and failed to connect.
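If you do keep the module-level instance, here is a rough sketch of one way to test around it with unittest.mock, assuming the module shown above is saved as RabbitMQ.py and has not been imported before the patch is applied (RabbitMQ.py looks up pika.BlockingConnection at call time, so patching it on the pika module is enough):

import unittest
from unittest import mock


class PublishMessageTest(unittest.TestCase):
    @mock.patch("pika.BlockingConnection")
    def test_publish_message(self, mock_blocking_connection):
        # The first import runs the module-level RabbitMQ(), which now
        # receives the mocked connection instead of dialing localhost:5672.
        import RabbitMQ as rq

        rq.publish_message({"some": "payload"})

        mock_channel = mock_blocking_connection.return_value.channel.return_value
        mock_channel.basic_publish.assert_called_once()

Note that the BackgroundScheduler inside RabbitMQ still starts a real thread; patch or shut it down as well if that interferes with your test run.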

Accessing the API logger in Flask-Restful's resources

I'm currently using flask-restful (http://flask-restful.readthedocs.io/en/0.3.5/index.html) to deploy resources as endpoints, and I'm wondering if there's a way to access the API logger from within the resource classes. I've skimmed through the docs and couldn't find the appropriate answer.
Basically I want to do this:
from flask_restful import Resource

class SomeEndpoint(Resource):
    def get(self):
        try:
            ...  # something throws an exception
        except SomeException as se:
            ...  # send custom message to API logger <----- Here!
        return response
What I thought of doing was passing the logger from the API through the constructor of the Resource, like this:
App = Flask(__name__)
api = Api(App)
api.add_resource(SomeEndpoint, '/', resource_class_kwargs={'logger': App.logger})
Is this the most appropriate way to access the logger inside flask-restful resource endpoints?
Thanks a lot
I know the answer has been chosen already, but there is a slightly different approach that also works.
First, import
from flask import current_app as app
in the resource file, and when calling the logger, do:
app.logger.info("This is an info message")
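For completeness, a minimal sketch of that approach inside a resource (do_something and SomeException are placeholders for your own application logic, not part of flask-restful):

from flask import current_app
from flask_restful import Resource

class SomeEndpoint(Resource):
    def get(self):
        try:
            result = do_something()  # placeholder for your application logic
        except SomeException as se:
            # current_app resolves to the Flask app handling this request,
            # so this is the same logger as App.logger configured at startup.
            current_app.logger.error("SomeEndpoint failed: %s", se)
            return {"error": str(se)}, 500
        return result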
You need to define a constructor for your Resource. Here is an example:
import logging

class SomeEndpoint(Resource):
    def __init__(self, **kwargs):
        self.logger = kwargs.get('logger')

    def get(self):
        # self.logger - 'logger' from resource_class_kwargs
        return self.logger.name


api.add_resource(SomeEndpoint, '/', resource_class_kwargs={
    # any logger here...
    'logger': logging.getLogger('my_custom_logger')
})
Open your endpoint. You will see my_custom_logger.
Hope this helps.
Setting debug=False temporarily fixes the problem.
But I don't really know what the issue is when debug is set to True.
