Python3 Flask - missing 1 required positional argument: 'self'

I have some very simple Python code to access Amazon Simple Queue Service, but I get:
builtins.TypeError
TypeError: get_queue() missing 1 required positional argument: 'self'
My code:
import boto.sqs
from flask import Flask

app = Flask(__name__)

class CloudQueue(object):
    conn = boto.sqs.connect_to_region("eu-west-1",
                                      aws_access_key_id="abc",
                                      aws_secret_access_key="abc")

    @app.route('/get/<name>')
    def get_queue(self, name):
        if(name != None):
            queue = self.conn.get_queue(str(name))  # <--------- HERE
        return conn.get_all_queues()

if __name__ == "__main__":
    cq = CloudQueue()
    app.debug = True
    app.run()

You cannot register methods as routes; at the time the decorator runs, the class is still being defined, and all you registered is the plain function object. Since it is not bound to an instance, there is no self to pass in.
Do not use a class here; create the connection anew for each request:
@app.route('/get/<name>')
def get_queue(name):
    conn = boto.sqs.connect_to_region("eu-west-1",
                                      aws_access_key_id="abc",
                                      aws_secret_access_key="abc")
    queue = conn.get_queue(name)
    return 'some response string'
You could make the connection a global, but then you need to make sure you create it on the first request, so it still works when a WSGI server uses child processes to handle requests:
@app.before_first_request
def connect_to_boto():
    global conn
    conn = boto.sqs.connect_to_region("eu-west-1",
                                      aws_access_key_id="abc",
                                      aws_secret_access_key="abc")

@app.route('/get/<name>')
def get_queue(name):
    queue = conn.get_queue(name)
    return 'some response string'
Use this only if you are sure that boto connection objects are thread-safe.
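The timing problem is easy to reproduce without Flask. In the sketch below, the hypothetical `route` decorator stands in for `app.route`: it stores whatever object it is handed, which inside a class body is the plain function, and Flask passes URL parameters as keyword arguments, so nothing fills the `self` slot:

```python
routes = {}

def route(path):
    # Stand-in for app.route: records the decorated object in a URL map.
    def register(func):
        routes[path] = func
        return func
    return register

class CloudQueue(object):
    @route('/get/<name>')
    def get_queue(self, name):   # registered as a plain function, no instance
        return name

# URL parameters arrive as keyword arguments, so 'self' is never supplied:
try:
    routes['/get/<name>'](name='myqueue')
except TypeError as e:
    error = str(e)
print(error)   # ... missing 1 required positional argument: 'self'
```

This reproduces exactly the TypeError from the question: the registered function still expects `self`, but the dispatcher has no instance to provide.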

AttributeError: '_thread._local' object has no attribute 'token'

There are already questions that address this problem, e.g.
Python: 'thread._local object has no attribute 'todo'
But the solutions don't seem to apply to my problem. I make sure to access threading.local() in the same thread that sets the value. I'm trying to use this feature in conjunction with a socket server. This is my code:
class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        token = str(uuid4())
        global_data = threading.local()
        global_data.token = token
        logger.info(f"threading.local().token: {threading.local().token}")  # This line raises the error
The server code I'm using:
class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

def run_server():
    server = ThreadedTCPServer(
        (definitions.HOST, definitions.PORT), ThreadedTCPRequestHandler
    )
    with server:
        server.serve_forever()
Your code does this:
1. Create a brand-new threading.local.
2. Store a reference to it in the variable global_data.
3. Give it a token.
4. Create another brand-new threading.local.
5. Print its token.
Step 5 throws an exception because the new threading.local you created in step 4 does not have a token; it is not the same threading.local you created in step 1.
Perhaps you meant {global_data.token}?
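A minimal corrected sketch of the same idea (plain threads instead of a socket server): keep one module-level threading.local and let every thread write to, and read back from, its own slot of that single object:

```python
import threading
from uuid import uuid4

global_data = threading.local()   # create the local object ONCE, at module level

def handle():
    # Each thread writes to its own slot of the shared threading.local...
    global_data.token = str(uuid4())
    # ...and reads back from the SAME object, in the same thread: no error.
    return global_data.token

tokens = []

def worker():
    tokens.append(handle())

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(set(tokens)))   # 3: each thread saw its own distinct token
```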

How to mock a pika connection for a different module?

I have a class that imports the following module:
import pika
import pickle
from apscheduler.schedulers.background import BackgroundScheduler
import time
import logging

class RabbitMQ():
    def __init__(self):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
        self.channel = self.connection.channel()
        self.sched = BackgroundScheduler()
        self.sched.add_job(self.keep_connection_alive, id='clean_old_data', trigger='cron', hour='*', minute='*', second='*/50')
        self.sched.start()

    def publish_message(self, message, path="path"):
        message["path"] = path
        logging.info(message)
        message = pickle.dumps(message)
        self.channel.basic_publish(exchange="", routing_key="server", body=message)

    def keep_connection_alive(self):
        self.connection.process_data_events()

rabbitMQ = RabbitMQ()

def publish_message(message, path="path"):
    rabbitMQ.publish_message(message, path=path)
My class.py:
import RabbitMQ as rq

class MyClass():
    ...
When generating unit tests for MyClass, I can't mock the connection for this part of the code; it keeps throwing exceptions and will not work at all:
pika.exceptions.ConnectionClosed: Connection to 127.0.0.1:5672 failed: [Errno 111] Connection refused
I tried a couple of approaches to mock this connection, but none of them seem to work. What can I do to support this sort of test? Mock the entire RabbitMQ module? Or maybe mock only the connection?
As the commenter above mentions, the issue is the module-level creation of your RabbitMQ instance.
My knee-jerk reaction is to say "just get rid of that, and your module-level publish_message". If you can do that, go for that solution. You have a publish_message on your RabbitMQ class that accepts the same args; any caller would then be expected to create an instance of your RabbitMQ class.
If you don't want to or can't do that for whatever reason, you should move that object instantiation into your module-level publish_message, like this:
def publish_message(message, path="path"):
    rabbitMQ = RabbitMQ()
    rabbitMQ.publish_message(message, path=path)
This will create a new connection every time you call it though. Maybe that's ok...but maybe it's not. So to avoid creating duplicate connections, you'd want to introduce something like a singleton pattern:
class RabbitMQ():
    __instance = None
    ...

    @classmethod
    def get_instance(cls):
        if cls.__instance is None:
            cls.__instance = RabbitMQ()
        return cls.__instance

def publish_message(message, path="path"):
    RabbitMQ.get_instance().publish_message(message, path=path)
Ideally though, you'd want to avoid the singleton pattern entirely. Whatever caller should store a single instance of your RabbitMQ object and call publish_message on it directly.
So the TLDR/ideal solution IMO: Just get rid of those last 3 lines. The caller should create a RabbitMQ object.
EDIT: Oh, and why it's happening: when you import that module, rabbitMQ = RabbitMQ() is evaluated. Your attempt to mock it happens after that line has already run and failed to connect.
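A sketch of the patch-before-import idea: here a stand-in `pika` module is injected via `unittest.mock` so the example runs without a broker; in a real test suite you would patch the installed pika the same way, crucially before the first import of the module that runs `rabbitMQ = RabbitMQ()`:

```python
import sys
import types
from unittest import mock

# Install the stub BEFORE anything executes `rabbitMQ = RabbitMQ()`.
fake_pika = types.ModuleType("pika")
fake_pika.ConnectionParameters = mock.MagicMock()
fake_pika.BlockingConnection = mock.MagicMock()
sys.modules["pika"] = fake_pika

import pika  # resolves to the stub above, so no TCP connection is attempted

conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = conn.channel()
channel.basic_publish(exchange="", routing_key="server", body=b"payload")

# The mock recorded the call instead of talking to 127.0.0.1:5672.
channel.basic_publish.assert_called_once_with(
    exchange="", routing_key="server", body=b"payload")
print("publish was mocked")
```

The ordering is the whole trick: once the import-time `RabbitMQ()` call has run against the real pika and raised ConnectionClosed, no later patch can undo it.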

Initialize a python class only once on webpy

I am using web.py to host a simple web service. The web service runs an analytics application in the backend (inside ClassA). During the initialization of web.py, I'd like to pre-load all data into memory (i.e. call a = ClassA() only once when the web server is started); when a user sends a web request, the server should just respond with the pre-calculated result (i.e. return a.do_something()).
The code below seems to run __init__ of class 'add' every time an HTTP POST request is received. This is a waste of time because the initialization stage takes pretty long. Is it possible to initialize ClassA only once?
import web
from aclass import ClassA

urls = (
    '/add', 'add'
)

class add:
    def __init__(self):
        a = ClassA()

    def POST(self):
        return a.do_something()

if __name__ == "__main__":
    app = web.application(urls, globals())
    app.run()
Try:
class add:
    a = ClassA()

    def POST(self):
        return add.a.do_something()
This makes a a class attribute instead of an instance attribute, i.e. it is initialized only once, when the class body is executed.
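The class-attribute behaviour can be verified with a stub in place of ClassA (the init counter below is purely illustrative): the attribute is evaluated once when the class body runs, no matter how many handler instances web.py creates per request:

```python
class ClassA:
    init_count = 0   # stub standing in for the expensive analytics class

    def __init__(self):
        ClassA.init_count += 1   # pretend this is the slow pre-loading step

    def do_something(self):
        return "result"

class add:
    a = ClassA()   # evaluated once, when the class body executes

    def POST(self):
        return add.a.do_something()

# web.py builds a fresh handler per request; the class attribute is shared:
for _ in range(5):
    add().POST()

print(ClassA.init_count)   # 1: five requests, one initialization
```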

Why is the perspective argument in a pb.Viewable passed as None?

I am trying to find out how a server can know which client is making remote requests in Twisted's Perspective Broker. I think I'm supposed to use twisted.spread.pb.Viewable for this, but when I try it, the perspective argument in the Viewable's view_* methods is None.
I run this server
import twisted.spread.pb as pb
import twisted.internet.reactor as reactor

class Server(pb.Root):
    def __init__(self):
        self.v = MyViewable()

    def remote_getViewable(self):
        return self.v

class MyViewable(pb.Viewable):
    def view_foo(self, perspective):
        print("Perspective %s" % perspective)

if __name__ == "__main__":
    reactor.listenTCP(54321, pb.PBServerFactory(Server()))
    print("Starting reactor")
    reactor.run()
and this client
import twisted.spread.pb as pb
import twisted.internet.reactor as reactor
from twisted.internet.defer import inlineCallbacks

@inlineCallbacks
def gotRoot(root):
    v1 = yield root.callRemote("getViewable")
    v2 = yield root.callRemote("getViewable")
    print(v1)
    print(v2)
    yield v1.callRemote("foo")
    yield v2.callRemote("foo")

factory = pb.PBClientFactory()
reactor.connectTCP("localhost", 54321, factory)
d = factory.getRootObject()
d.addCallback(gotRoot)
reactor.run()
The output from the server is
Starting reactor
Perspective None
Perspective None
Why are the perspective arguments None?
Through experimentation I believe I have determined the answer.
In order for remote invocations of a view_* method on a pb.Viewable to properly receive the perspective argument, the reference to that Viewable held by the client must have been obtained as the return value from a perspective_* method called on an instance of pb.Avatar (or subclass). The perspective argument passed into the view_* methods then corresponds to the Avatar that originally gave the client the reference to the Viewable.
The example code in the original posting doesn't work properly because the remote references to the Viewable are passed to the client from a pb.Root, not as return values from a perspective_* method on a pb.Avatar.
I note here that while this information is implied by the way the examples in the official documents are written, it does not seem to be explicitly stated there.
EDIT: I've figured out the right way to do this. One of the arguments to the Realm's requestAvatar method is the user's mind. All you have to do is set mind.perspective to the new Avatar instance, and all subsequent remote calls work how you'd expect. For example:
class SimpleRealm:
    implements(IRealm)

    def requestAvatar(self, avatarId, mind, *interfaces):
        avatar = MyAvatarSubclass()
        mind.perspective = avatar
        return pb.IPerspective, avatar, avatar.logout
OLD EDIT: A (crummy) way to make this work is to explicitly construct a pb.ViewPoint and pass that as an argument to the remote client. For example, if p is an instance of an Avatar subclass and v is a Viewable on the server side, we can do this on the server:
referenceToClient.callRemote("take", ViewPoint(p, v))
where on the client side we have something like:
def remote_take(self, objToReceive):
    self.myView = objToReceive
Subsequent invocations of self.myView.callRemote(...) by the client will work properly.

Tornado memory leak on dropped connections

I've got a setup where Tornado is used as a kind of pass-through for workers. A request is received by Tornado, which forwards it to N workers, aggregates the results, and sends them back to the client. This works fine, except when a timeout occurs for some reason; then I get a memory leak.
My setup is similar to this pseudocode:
workers = ["http://worker1.example.com:1234/",
           "http://worker2.example.com:1234/",
           "http://worker3.example.com:1234/", ...]

class MyHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def post(self):
        responses = []

        def __callback(response):
            responses.append(response)
            if len(responses) == len(workers):
                self._finish_req(responses)

        for url in workers:
            async_client = tornado.httpclient.AsyncHTTPClient()
            request = tornado.httpclient.HTTPRequest(url, method=self.request.method, body=self.request.body)
            async_client.fetch(request, __callback)

    def _finish_req(self, responses):
        good_responses = [r for r in responses if not r.error]
        if not good_responses:
            raise tornado.web.HTTPError(500, "\n".join(str(r.error) for r in responses))
        results = aggregate_results(good_responses)
        self.set_header("Content-Type", "application/json")
        self.write(json.dumps(results))
        self.finish()

application = tornado.web.Application([
    (r"/", MyHandler),
])

if __name__ == "__main__":
    ## .. some locking code
    application.listen()
    tornado.ioloop.IOLoop.instance().start()
What am I doing wrong? Where does the memory leak come from?
I don't know the source of the problem, and it seems the gc should be able to take care of it, but there are two things you can try.
The first method is to simplify some of the references (it looks like there may still be references to responses when the RequestHandler completes):
class MyHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def post(self):
        self.responses = []
        for url in workers:
            async_client = tornado.httpclient.AsyncHTTPClient()
            request = tornado.httpclient.HTTPRequest(url, method=self.request.method, body=self.request.body)
            async_client.fetch(request, self._handle_worker_response)

    def _handle_worker_response(self, response):
        self.responses.append(response)
        if len(self.responses) == len(workers):
            self._finish_req()

    def _finish_req(self):
        ....
If that doesn't work, you can always invoke garbage collection manually:
import gc

class MyHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def post(self):
        ....

    def _finish_req(self):
        ....

    def on_connection_close(self):
        gc.collect()
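The lingering-reference mechanism can be sketched in plain Python, without Tornado: a callback closure that captures self keeps the handler alive for as long as something holds the callback (here, a hypothetical pending list stands in for the HTTP client's internal callback queue):

```python
import gc
import weakref

class Handler:
    def post(self):
        responses = []

        def callback(r):
            responses.append(r)      # closure captures responses...
            self.finish(responses)   # ...and self

        return callback

    def finish(self, responses):
        pass

pending = []          # stands in for the HTTP client holding callbacks
h = Handler()
pending.append(h.post())
ref = weakref.ref(h)

del h
gc.collect()
print(ref() is None)   # False: the stored callback still pins the handler

pending.clear()        # dropping the callback (e.g. timeout cleanup)...
gc.collect()
print(ref() is None)   # True: ...lets the handler be collected
```

This is why moving the responses list onto self, as in the first suggestion, helps: the handler then holds its own state instead of a closure holding the handler.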
The code looks good. The leak is probably inside Tornado.
I only stumbled over this line:
async_client = tornado.httpclient.AsyncHTTPClient()
Are you aware of the instantiation magic in this constructor?
From the docs:
"""
The constructor for this class is magic in several respects: It actually
creates an instance of an implementation-specific subclass, and instances
are reused as a kind of pseudo-singleton (one per IOLoop). The keyword
argument force_instance=True can be used to suppress this singleton
behavior. Constructor arguments other than io_loop and force_instance
are deprecated. The implementation subclass as well as arguments to
its constructor can be set with the static method configure()
"""
So actually, you don't need to do this inside the loop. (On the other hand, it should not do any harm.) But which implementation are you using: CurlAsyncHTTPClient or SimpleAsyncHTTPClient?
If it is SimpleAsyncHTTPClient, be aware of this comment in the code:
"""
This class has not been tested extensively in production and
should be considered somewhat experimental as of the release of
tornado 1.2.
"""
You can try switching to CurlAsyncHTTPClient, or follow Nikolay Fominyh's suggestion and trace the calls to __callback().
