I have a class that imports the following module:
import pika
import pickle
from apscheduler.schedulers.background import BackgroundScheduler
import time
import logging
class RabbitMQ():
    def __init__(self):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
        self.channel = self.connection.channel()
        self.sched = BackgroundScheduler()
        self.sched.add_job(self.keep_connection_alive, id='clean_old_data', trigger='cron', hour='*', minute='*', second='*/50')
        self.sched.start()

    def publish_message(self, message, path="path"):
        message["path"] = path
        logging.info(message)
        message = pickle.dumps(message)
        self.channel.basic_publish(exchange="", routing_key="server", body=message)

    def keep_connection_alive(self):
        self.connection.process_data_events()

rabbitMQ = RabbitMQ()

def publish_message(message, path="path"):
    rabbitMQ.publish_message(message, path=path)
My class.py:
import RabbitMQ as rq

class MyClass():
    ...
When generating unit tests for MyClass, I can't mock the connection for this part of the code; it keeps throwing exceptions and won't work at all:
pika.exceptions.ConnectionClosed: Connection to 127.0.0.1:5672 failed: [Errno 111] Connection refused
I tried a couple of approaches to mock this connection, but none of them seem to work. I was wondering what I can do to support this sort of test. Should I mock the entire RabbitMQ module, or maybe only the connection?
Like the commenter above mentions, the issue is the global creation of your RabbitMQ instance.
My knee-jerk reaction is to say "just get rid of that, and your module-level publish_message". If you can do that, go for that solution. You have a publish_message on your RabbitMQ class that accepts the same args; any caller would then be expected to create an instance of your RabbitMQ class.
If you don't want to or can't do that for whatever reason, you should just move that object instantiation into your module-level publish_message, like this:
def publish_message(message, path="path"):
    rabbitMQ = RabbitMQ()
    rabbitMQ.publish_message(message, path=path)
This will create a new connection every time you call it, though. Maybe that's OK... but maybe it's not. To avoid creating duplicate connections, you'd want to introduce something like a singleton pattern:
class RabbitMQ():
    __instance = None
    ...

    @classmethod
    def get_instance(cls):
        if cls.__instance is None:
            cls.__instance = RabbitMQ()
        return cls.__instance

def publish_message(message, path="path"):
    RabbitMQ.get_instance().publish_message(message, path=path)
Ideally though, you'd want to avoid the singleton pattern entirely. Whichever caller uses it should store a single instance of your RabbitMQ object and call publish_message on it directly.
So the TLDR/ideal solution IMO: Just get rid of those last 3 lines. The caller should create a RabbitMQ object.
EDIT: Oh, and as for why it's happening: when you import that module, this line is evaluated: rabbitMQ = RabbitMQ(). Your attempt to mock happens after that has already been evaluated and failed to connect.
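To make that concrete, here is a minimal sketch of what a test could look like once the module-level rabbitMQ = RabbitMQ() line is gone. The test module name and assertions are hypothetical, and on Python 2 you would use the mock backport instead of unittest.mock:

# test_my_class.py -- hypothetical test module, assuming the
# module-level `rabbitMQ = RabbitMQ()` line has been removed
from unittest import TestCase
from unittest.mock import patch

import RabbitMQ as rq

class TestPublish(TestCase):
    # Patch pika's BlockingConnection as seen from the RabbitMQ module,
    # so no real connection to localhost:5672 is ever attempted.
    # (The real BackgroundScheduler still starts; that's fine for a quick test.)
    @patch('RabbitMQ.pika.BlockingConnection', autospec=True)
    def test_publish_message(self, mock_conn):
        mq = rq.RabbitMQ()
        mq.publish_message({"key": "value"}, path="some/path")
        # mq.channel is a mock derived from the patched connection
        self.assertEqual(mq.channel.basic_publish.call_count, 1)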
I am writing a Python TCP proxy: whenever a connection gets established from the client, the proxy establishes a connection to the server and transparently forwards both streams. Additionally, when a packet being forwarded meets certain conditions, I want to have it parsed and sent to another server.
These are the contents of my unit test:
class TestParsing(TestCase):
    def setUp(self) -> None:
        self.patcher = patch('EnergyDataClient.EnergyDataClient', autospec=True)
        self.EC_mock = self.patcher.start()
        EnergyAgent.EC = self.EC_mock()
        EnergyAgent.GP = MyParser.MyParser()
        self.server = multiprocessing.Process(target=tcp_server, args=(1235,))
        self.gp = multiprocessing.Process(target=EnergyAgentRunner, args=(1234, 1235))
        self.server.start()
        self.gp.start()

    def tearDown(self) -> None:
        self.patcher.stop()
        self.server.terminate()
        self.gp.terminate()
        while self.server.is_alive() or self.gp.is_alive():
            sleep(0.1)

    def test_parsemessage(self):
        # start the client process, and wait until done
        result = tcp_client(1234, correct_packets['DATA04']['request'])
        self.assertEqual(correct_packets['DATA04']['request'], result)
        EnergyAgent.EC.post.assert_called_once()
I want to validate that the post method on the EC object is called with the contents I expect to have intercepted... but, as that object lives in another process, mocking doesn't seem to help. What am I doing wrong?
I figured out what is happening here. When calling multiprocessing.Process, Python spawns a new process using fork(), which produces a copy of the memory pages in the child process. That is why patching EnergyAgent.GP works (that object is only read from that point on, and we don't need it back in the main process) while patching EnergyAgent.EC does not: the mocked object gets successfully updated in the child process, but the parent never sees that change.
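A tiny self-contained sketch of that behaviour (not the asker's code): mutations made in the forked child stay in the child, and anything the parent needs back must travel over an explicit channel such as a multiprocessing.Queue.

import multiprocessing

calls = []  # module-level state, copied into the child on fork()

def child(q):
    calls.append("called in child")  # mutates only the child's copy
    q.put("called in child")         # explicitly ships the data back

if __name__ == '__main__':
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=child, args=(q,))
    p.start()
    p.join()
    print(calls)    # [] -- the parent's list was never touched
    print(q.get())  # 'called in child' -- received via the queue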
I have a Python pexpect script that logs in to servers frequently, and each time it tries to log in, I get a DUO push, which is very unreliable (not sure if it's the app on Android or the DUO system itself, but that is not relevant to my question). I'm trying to avoid DUO pushes by re-using sessions from a queue.
I have a class called Session to open/close sessions. I also have a global Queue defined. Whenever I'm done using a session, instead of closing the pexpect handle, I do q.put(self), where self contains the active pexpect session. The next time I need to log in, I first check whether there is an item in the Queue. If there is, I would like to do self = q.get(), overwriting my "self" with the object from the Queue. Here is example code of what I'm trying to accomplish:
from globals import q

class Session:
    def __init__(self, ip):
        self.user = flask_login.current_user.saneid
        self.passwd = flask_login.current_user.sanepw  # "pass" is a reserved word in Python
        self.ip = ip
        self.handle = None

    def __enter__(self):
        if not q.empty():
            self = q.get()
        else:
            # log in to node
            self.handle = pexpect.spawn('ssh user@node')
            ...
        return self.handle

    def __exit__(self, *args):
        q.put(self)
Is this good practice? Is there a better way?
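One thing to watch out for: rebinding self inside __enter__ only rebinds a local name. The with statement still calls __exit__ on the original instance, so the session pulled from the queue would be lost on exit. A sketch of an alternative that side-steps this, using a small pool plus contextlib (SessionPool and the ssh target are hypothetical names, not the asker's real code):

import queue
from contextlib import contextmanager

import pexpect

class SessionPool:
    """Hands out live pexpect sessions, spawning one only when the pool is empty."""
    def __init__(self):
        self._q = queue.Queue()

    def acquire(self):
        try:
            return self._q.get_nowait()  # reuse a session: no new DUO push
        except queue.Empty:
            return pexpect.spawn('ssh user@node')  # placeholder login from the question

    def release(self, handle):
        self._q.put(handle)

pool = SessionPool()

@contextmanager
def session():
    handle = pool.acquire()
    try:
        yield handle
    finally:
        pool.release(handle)  # always returns the exact handle that was used

Usage then stays close to the original intent: with session() as handle: handle.sendline(...).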
I have a question that could well belong to Twisted or could be directly related to Python.
My problem, like the other one, is related to the disconnection process in Twisted.
As I read on this site, if I want to disconnect cleanly I have to perform the following steps:
The server must stop listening.
The client connection must disconnect.
The server connection must disconnect.
According to what I read on the previous page, to accomplish the first step I would have to run the stopListening method.
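As a point of reference, a minimal sketch of that first step (the trivial factory is a placeholder): listenTCP/listenSSL return an object implementing IListeningPort, and stopListening returns a Deferred that fires once the port has actually stopped.

from twisted.internet import reactor
from twisted.internet.protocol import Protocol, ServerFactory
from twisted.python import log

factory = ServerFactory()
factory.protocol = Protocol

port = reactor.listenTCP(1234, factory)  # returns an IListeningPort

# stopListening returns a Deferred that fires once the socket is closed
d = port.stopListening()
d.addCallback(lambda _: log.msg("server stopped listening"))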
In the example mentioned on that page, all actions are performed in the same script, making it easy to access the different variables and methods.
In my case, the server and the client live in different files and different locations.
I have a function that creates a server and assigns it a protocol, and I want, from the client protocol in another file, to make an AMP call to a method that stops the connector.
The AMP call invokes the SendMsg command.
class TESTServer(protocol.Protocol):
    factory = None
    sUsername = ""
    credProto = None
    bGSuser = None
    slot = None

    """
    Here was uninteresting code.
    """

        # upwards=self.bGSuser, forwarded=True, tx_timestamp=iTimestamp,\
        # message=sMsg)
        log.msg("self.connector")
        log.msg(self.connector)
        return {'bResult': True}
    SendMsg.responder(vSendMsg)

    def _testfunction(self):
        logger = logging.getLogger('server')
        log.startLogging(sys.stdout)
        pf = CredAMPServerFactory()
        sslContext = ssl.DefaultOpenSSLContextFactory('key/server.pem',
                                                      'key/public.pem',)
        self.connector = reactor.listenSSL(1234, pf, contextFactory=sslContext)
        log.msg('Server running...')
        reactor.run()

if __name__ == '__main__':
    TESTServer()._testfunction()
The CredAMPServerFactory class assigns the corresponding protocol.
class CredAMPServerFactory(ServerFactory):
    """
    Server factory useful for creating L{CredReceiver} and
    L{SATNETServer} instances.

    This factory takes care of associating a L{Portal} with the
    L{CredReceiver} instances it creates. If the login is successfully
    achieved, a L{SATNETServer} instance is also created.
    """
    protocol = CredReceiver
In the "CredReceiver" class I have a call that assigns the protocol to the TestServer class. I do this to make calls using the AMP method "Responder".
self.protocol = SATNETServer
My problem is that when I make the call, the program responds with an error indicating that the CredReceiver object has no connector attribute:
File "/home/sgongar/Dev/protocol/server_amp.py", line 248, in vSendMsg
log.msg(self.connector)
exceptions.AttributeError: 'CredReceiver' object has no attribute 'connector'
How could I do this? Does anyone know of a similar example I could take note of?
Thank you.
EDIT.
Server side:

server_amp.py: starts a reactor, reactor.listenSSL(1234, pf, contextFactory=sslContext,), from within the SATNETServer class, and assigns the factory, pf, to the CredAMPServerFactory class, which belongs to the module server.py, also from within the SATNETServer class.

server.py: within the CredAMPServerFactory class, assigns the CredReceiver class to protocol. Once the connection is established, the SATNETServer class is assigned to the protocol.

Client side:

client_amp: makes a call to the SendMsg method belonging to the SATNETServer class.
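For what it's worth, a hedged sketch of the usual way around this AttributeError (class names here are illustrative, not the asker's real ones): protocol instances have no connector attribute, but every protocol built by a factory gets a factory reference, so the listening port can be stored on the factory after listenSSL returns and reached from the protocol.

from twisted.internet import reactor
from twisted.internet.protocol import Protocol, ServerFactory

class Receiver(Protocol):
    def stop_server(self):
        # reach the listening port through the factory reference that
        # Factory.buildProtocol sets on every protocol instance
        return self.factory.port.stopListening()

class ReceiverFactory(ServerFactory):
    protocol = Receiver
    port = None  # filled in once the reactor starts listening

factory = ReceiverFactory()
factory.port = reactor.listenTCP(1234, factory)  # listenSSL works the same way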
I have a django project that uses celery for async task processing. I am using python 2.7.
I have a class in a module client.py in my django project:
# client.py
class Client:
    def __init__(self):
        # code for opening a persistent connection and saving the
        # connection client in an instance variable
        ...
        self.client = <connection client>

    def get_connection_client(self):
        return self.client

    def send_message(self, message):
        # --- Not the exact code, but this is the method I need access
        # to, and for that I need access to the client variable ---
        self.client.send(message)

    # Other functions that use the above method to send messages
    ...
This class needs to be instantiated only once to create one persistent connection to a remote server.
I run a script connection.py that runs indefinitely:
# connection.py
from client import Client

if __name__ == '__main__':
    clientobj = Client()
    client = clientobj.get_connection_client()

    # Blocking process
    while True:
        # waits for a message from the remote server
        ...
I need to access the variable client from another module tasks.py (needed for celery).
# tasks.py
...
from client import Client

@app.task
def function():
    # Need access to the client variable
    # <??? How do I get access to the client variable for the
    # already established connection ???>
    message = "Message to send to the server using the established connection"
    client.send_message(message)
All three Python modules are on the same machine. connection.py is executed as a standalone script and runs first. The method function() in tasks.py is called multiple times across other modules of the project whenever required, so I can't instantiate the Client class inside this method. Global variables don't work.
In Java, we can create a global static variable and access it throughout the project. How do we do this in Python?
Approaches I can think of, but am not sure whether they can be done in Python:
Save this variable in a common file such that it is accessible from other modules in my project?
Save this client as a setting in either django or celery and access this setting in the required module?
Based on suggestions by sebastian, another way is to share variables between running processes. That is essentially what I want to do. How do I do this in Python?
For those interested to know why this is required, please see this question. It explains the complete system design and the various components involved.
I am open to suggestions that needs a change in the code structure as well.
multiprocessing provides all the tools you need to do this.
connection.py
from multiprocessing.managers import BaseManager
from client import Client

client = Client()

class ClientManager(BaseManager): pass

ClientManager.register('get_client', callable=lambda: client)
manager = ClientManager(address=('', 50000), authkey='abracadabra')
server = manager.get_server()
server.serve_forever()
tasks.py
from multiprocessing.managers import BaseManager

class ClientManager(BaseManager): pass

ClientManager.register('get_client')
manager = ClientManager(address=('localhost', 50000), authkey='abracadabra')
manager.connect()
client = manager.get_client()

@app.task
def function():
    message = "Message to send to the server using the established connection"
    client.send_message(message)
I don't have experience working with django, but if they are executed from the same script you could make the Client a singleton, or maybe declare the Client in __init__.py and then import it wherever you need it.
If you go for the singleton, you can write a decorator for that:
def singleton(cls):
    instances = {}
    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return get_instance
Then you would define:
# client.py
@singleton
class Client:
    def __init__(self):
        # code for opening a persistent connection and saving the
        # connection client in an instance variable
        ...
        self.client = <connection client>

    def get_connection_client(self):
        return self.client
That's all I can suggest with the little description you have given. Maybe try to explain a little better how everything is run, or which parts are involved.
Python has class attributes (attributes that are shared among instances) and class methods (methods that act on the class itself). Both are readable on both the class and its instances.
# client.py
class Client(object):
    _client = None

    @classmethod
    def connect(cls):
        # don't do anything if already connected
        if cls._client is not None:
            return
        # code for opening a persistent connection and saving the
        # connection client in a class attribute
        ...
        cls._client = <connection client>

    @classmethod
    def get_connection_client(cls):
        return cls._client

    def __init__(self):
        # make sure we try to have a connection on initialisation
        self.connect()
Now I'm not sure this is the best solution to your problem.
If connection.py is importing tasks.py, you can do it in your tasks.py:

import __main__  # this refers to connection.py, the script being executed
main_globals = __main__.__dict__  # the same dict you get from globals() inside connection.py
client = main_globals["client"]  # the very same client object as in connection.py
BaseManager is also an answer, but it uses socket networking on localhost, which is not a good way of accessing a variable if you are not already using multiprocessing. I mean: if you need multiprocessing, you should use BaseManager; but if you don't need multiprocessing, pulling it in just for this is not a good option. My code simply takes a reference to the client variable in connection.py from the interpreter.
Also, if you do want to use multiprocessing, my code won't work, because the interpreters in different processes are different.
If you go the route of saving it to a file instead, use pickle when reading it back.
I've just gotten started using pika (v0.9.4) with Tornado (through the pika.adapters.tornado_connection.TornadoConnection adapter), and I was wondering what the appropriate way is to catch errors when using, say, queue_delete, for when the queue you're trying to delete doesn't exist. RabbitMQ raises AMQPError, but I am not sure how this can be handled in an async way.
Does anyone have any insights on this?
Disclaimer: I'm the author of stormed-amqp
I would suggest trying with stormed-amqp
import logging
logging.basicConfig()

from tornado.ioloop import IOLoop
from stormed import Connection

def on_connect():
    ch = conn.channel()
    ch.queue_declare(queue='hello', durable=False)
    # redeclaring the same queue with different durability triggers the error
    ch.queue_declare(queue='hello', durable=True)

def on_error(e):
    print "Got Connection error", e.reply_text, e.reply_code
    io_loop.stop()

conn = Connection(host='localhost')
conn.on_error = on_error
conn.connect(on_connect)

io_loop = IOLoop.instance()
io_loop.start()
Try to avoid the error: if you declare the queue and it doesn't exist, it will be created, and you can then delete it immediately.
Or, if you will use that queue again in the next week or so, i.e. it is not single-use, then just leave it around and handle deletion as a system admin activity that cleans up long-term idle queues.
Or just declare your queues with the auto-delete attribute and they will go away when you disconnect.
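For illustration, a minimal sketch of that last option using pika's blocking API (the queue name is made up, and the 0.9.x async adapters take an extra callback argument):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# auto_delete=True: the broker drops the queue once its last consumer goes
# away, so no explicit queue_delete (and no missing-queue error) is needed
channel.queue_declare(queue='transient_queue', auto_delete=True)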