CLIENT:
#!/usr/bin/env python
from twisted.internet import reactor, protocol

class EchoClient(protocol.Protocol):
    def __init__(self, arg):
        self.arg = arg

    def connectionMade(self):
        self.transport.write("hello, world!")

    def dataReceived(self, data):
        print "Server said:", data
        self.transport.loseConnection()

    def connectionLost(self, reason):
        print "connection lost"

class EchoFactory(protocol.ClientFactory):
    protocol = EchoClient

    def buildProtocol(self, address):
        proto = protocol.ClientFactory.buildProtocol(self, address, 12)
        self.connectedProtocol = proto
        return proto

    def clientConnectionFailed(self, connector, reason):
        print "Connection failed - goodbye!"
        reactor.stop()

    def clientConnectionLost(self, connector, reason):
        print "Connection lost - goodbye!"
        reactor.stop()

def main():
    f = EchoFactory()
    reactor.connectTCP("localhost", 8000, f)
    reactor.run()

if __name__ == '__main__':
    main()
SERVER:
#!/usr/bin/env python
from twisted.internet import reactor, protocol
from twisted.application import service, internet

class Echo(protocol.Protocol):
    def dataReceived(self, data):
        self.transport.write(data)

def main():
    factory = protocol.ServerFactory()
    factory.protocol = Echo
    reactor.listenTCP(8000, factory)
    reactor.run()

if __name__ == '__main__':
    main()
ERROR:
exceptions.TypeError: buildProtocol() takes exactly 2 arguments (3 given)
QUESTION:
How can I get the EchoClient class in the CLIENT to accept parameters and assign instance variables (such as arg in the EchoClient constructor above)? As noted below, it was previously suggested that I override the buildProtocol function, but my attempt at doing so has led me to the above error. I am not really sure where to go from here. I suppose my question can be generalized to: how can I add instance variables to a protocol?
you wrote:
def buildProtocol(self, address):
    proto = protocol.ClientFactory.buildProtocol(self, address, 12)
That is, you are overriding ClientFactory.buildProtocol and calling the parent class's method with a different signature than it knows how to handle.
Passing data from the factory to the protocol is only a little tricky. You can give the factory any __init__ you want, but Twisted creates the IProtocol instances itself. Fortunately, most factories assign themselves to the protocol's factory attribute once it's ready to go:
class MyClientProtocol(protocol.Protocol):
    def connectionMade(self):
        # use self.factory here:
        self.transport.write(self.factory.arg)

class MyClientFactory(protocol.ClientFactory):
    protocol = MyClientProtocol

    def __init__(self, arg):
        self.arg = arg
In fact, the whole Protocol/Factory arrangement exists to support this kind of use. But be mindful: many Protocol instances will share a single instance of their factory, so use the factory for configuration and manage per-connection state in the protocol.
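Applied to the EchoClient and EchoFactory from the question, that pattern might look roughly like the following sketch (the value passed to the factory stands in for arg; the protocol reads it through self.factory, so no custom buildProtocol is needed):

from twisted.internet import reactor, protocol

class EchoClient(protocol.Protocol):
    def connectionMade(self):
        # ClientFactory has already attached itself as self.factory.
        self.transport.write(self.factory.arg)

class EchoFactory(protocol.ClientFactory):
    protocol = EchoClient

    def __init__(self, arg):
        # Configuration lives on the factory; per-connection state stays on the protocol.
        self.arg = arg

reactor.connectTCP("localhost", 8000, EchoFactory(b"hello, world!"))
reactor.run()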
It's certainly possible that the standard family of Protocol/Factory implementations doesn't suit your needs, and that's also reasonable, so long as you fully implement the IProtocol and IProtocolFactory interfaces. The base classes exist because they handle most cases for you, not because they are the only possible implementation.
It's not clear from your question what exactly you tried and what exactly the error was, but in any case you have to do two things:
Make EchoClient's constructor take whatever arguments you need it to take and initialise whatever field you need it to initialise.
Override the buildProtocol method in your factory to supply those arguments to your protocol (see the sketch below).
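A minimal sketch of that second step, assuming EchoClient keeps the __init__(self, arg) constructor from the question (the factory constructs the protocol directly instead of calling the parent buildProtocol with extra arguments, which is what caused the TypeError):

class EchoFactory(protocol.ClientFactory):
    def __init__(self, arg):
        self.arg = arg

    def buildProtocol(self, address):
        # Construct the protocol ourselves so we control its constructor arguments.
        proto = EchoClient(self.arg)
        proto.factory = self
        return proto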
Related
I would like to use Twisted as a client/server manager that is part of regular Python objects.
The solution I am trying to implement is to isolate Twisted in its own process using multiprocessing.Process, and communicate with this process through multiprocessing.Pipe.
I have coded the client/server logic with Twisted already, but now I am stuck at interfacing the multiprocessing.Pipe communication with the reactor.
I am a beginner with Twisted so I may be missing something obvious, but from what I understand about how reactors work, I guess the reactor is somehow supposed to poll from my multiprocessing.Pipe along with the sockets that it already seems to handle nicely. So my question is, how can I make the reactor listen to my multiprocessing.Pipe on top of what it is already doing please?
Thus far my code looks something like this:
class ServerProtocol(Protocol):
    def __init__(self, server):
        self._server = server

    def connectionMade(self):
        pass

    def connectionLost(self, reason):
        pass

    def dataReceived(self, data):
        pass

class ServerProtocolFactory(Factory):
    protocol = ServerProtocol

    def __init__(self, server):
        self.server = server

    def buildProtocol(self, addr):
        return ServerProtocol(self.server)

class Server:
    def __init__(self):
        pass

    def run(self, pipe):
        """
        This is called in its own process
        """
        from twisted.internet import reactor
        endpoint = TCP4ServerEndpoint(reactor, self._port)
        endpoint.listen(ServerProtocolFactory(self))
        reactor.run()  # main Twisted reactor loop

class MyObject:
    def __init__(self):
        self._pipe = Pipe()
        self._server = Server()
        self._p = Process(target=self._server.run, args=(self._pipe, ))
        self._p.start()

    def stop(self):
        # I want to send some stop command through the Pipe here
        self._p.join()

if __name__ == "__main__":
    obj = MyObject()
    # do stuff here
    obj.stop()
I don't know if Twisted will work when run this way (i.e., as the target of a multiprocessing.Process). Let's assume it will, though.
multiprocessing.Pipe is documented as returning a two-tuple of multiprocessing.Connection objects. multiprocessing.Connection is documented as having a fileno method returning a file descriptor (or handle) used by the Connection.
If it is a file descriptor then there is probably a very easy path to integrating it with a Twisted reactor. Most Twisted reactors implement IReactorFDSet which has an addReader method which accepts an IReadDescriptor value.
Connection is not quite an IReadDescriptor but it is easily adapted to be one:
from attrs import define
from multiprocessing.connection import Connection
from twisted.python.failure import Failure

@define
class ConnectionToDescriptor:
    _conn: Connection

    def fileno(self) -> int:
        return self._conn.fileno()

    def doRead(self) -> None:
        some_data = self._conn.recv()
        # Process some_data how you like

    def connectionLost(self, reason: Failure) -> None:
        self._conn.close()
If you wrap this around your read Connection and then pass the result to reactor.addReader the reactor will use fileno to figure out what to monitor for readiness and call doRead when there is something to read.
You could apply similar treatment to the write end of the pipe if you also want reactor-friendly support for sending bytes back to the parent process.
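For example, a hedged sketch of wiring this up (it assumes the reactor in use implements IReactorFDSet, and it creates a fresh Pipe only to keep the snippet self-contained; in the question's setup read_conn would be the Connection handed to the child process):

from multiprocessing import Pipe
from twisted.internet import reactor

read_conn, write_conn = Pipe(duplex=False)

# Wrap the reading end and let the reactor monitor it alongside its sockets.
reactor.addReader(ConnectionToDescriptor(read_conn))
reactor.run()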
I'm attempting to do the following:
connect as client to an existing websocket
process the streaming data received from this socket, and publish it on another websocket
I'm using Twisted and Autobahn to do so. I have managed to get the two parts working separately, by deriving a WebSocketClientProtocol for the client and an ApplicationSession for the second part. The two run with the same reactor.
I am not sure however as to how to make them communicate. I would like to send a message on my server when the client receives a message, but I don't know how to get the running instance of the WebSocketClientProtocol...
Perhaps this isn't the right approach to do this either. What's the right way to do this?
I've been trying to solve a similar problem recently; here's what worked:
f = XLeagueBotFactory()
app = Application(f)
reactor.connectTCP("irc.gamesurge.net", 6667, f)
reactor.listenTCP(port, app, interface=host)
^ This is in if __name__ == "__main__":
class Application(web.Application):
    def __init__(self, botfactory):
        self.botfactory = botfactory
That stores the factory on the application instance; in my case I then used it from another handler serving an HTTP POST request (using cyclone):
class requestvouch(web.RequestHandler):
    def __init__(self, application, request, **kwargs):
        super(requestvouch, self).__init__(application, request, **kwargs)
        self.botfactory = application.botfactory

    def msg(self, channel, msg):
        bot = self.botfactory.getProtocolByName("XLeagueBot")
        # sendmsg() runs the message through encoding and logging, then hands it
        # to bot.msg(), which posts it to IRC (the endpoint in my case)
        sendmsg(bot, channel, msg)

    def post(self):
        msg = "What I'm sending to the protocol of the other thing"
        self.msg("#xleague", msg)
Now the important part comes in the factory:
class XLeagueBotFactory(protocol.ClientFactory):
    protocol = XLeagueBot

    def __init__(self):
        self.protocols = {}

    def getProtocolByName(self, name):
        return self.protocols.get(name)

    def registerProtocol(self, protocol):
        self.protocols[protocol.nickname] = protocol

    def unregisterProtocol(self, protocol):
        del self.protocols[protocol.nickname]
Finally, in my client class:
class XLeagueBot(irc.IRCClient):
    nickname = "XLeagueBot"

    def connectionMade(self):
        irc.IRCClient.connectionMade(self)
        self.factory.registerProtocol(self)

    def connectionLost(self, reason):
        self.factory.unregisterProtocol(self)
        irc.IRCClient.connectionLost(self, reason)
I'm not entirely sure that this code is perfect or bug-free, but it should more or less show you how to get at a particular protocol instance. The problem, as far as I know, is that the protocol instance is created inside its factory and a reference to it is never passed anywhere else.
The Python documentation includes an example of creating an HTTP server:
def run(server_class=HTTPServer, handler_class=BaseHTTPRequestHandler):
    server_address = ('', 8000)
    httpd = server_class(server_address, handler_class)
    httpd.serve_forever()
A RequestHandler class is provided to the Server, which then takes care of instantiating the handler automatically.
Let's say I want to pass in custom parameters to the request handler when it's created. How can and should I do that?
More specifically, I want to pass in parameters from the command line, and having to access sys.argv inside the request handler class seems unnecessarily clunky.
It seems like this should be possible by overriding parts of the Server class, but I feel like I'm overlooking a simpler and better solution.
I solved this in my code using "partial application".
Example is written using Python 3, but partial application works the same way in Python 2:
import sys
from functools import partial
from http.server import HTTPServer, BaseHTTPRequestHandler

class ExampleHandler(BaseHTTPRequestHandler):
    def __init__(self, foo, bar, qux, *args, **kwargs):
        self.foo = foo
        self.bar = bar
        self.qux = qux
        # BaseHTTPRequestHandler calls do_GET **inside** __init__ !!!
        # So we have to call super().__init__ after setting attributes.
        super().__init__(*args, **kwargs)

    def do_HEAD(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/plain')
        self.end_headers()

    def do_GET(self):
        self.do_HEAD()
        self.wfile.write('{!r} {!r} {!r}\n'
                         .format(self.foo, self.bar, self.qux)
                         .encode('utf8'))

# We "partially apply" the first three arguments to the ExampleHandler
handler = partial(ExampleHandler, sys.argv[1], sys.argv[2], sys.argv[3])
# ... then pass it to HTTPServer as normal:
server = HTTPServer(('', 8000), handler)
server.serve_forever()
This is very similar to a class factory, but in my opinion it has a couple of subtle advantages:
partial objects are much easier to introspect for what's inside them than nested classes defined and returned by factory functions.
partial objects can be serialized with pickle in modern Python, whereas nested class definitions inside factory functions cannot (at least not without going out of your way to code a __reduce__ method on the class to make it possible).
In my limited experience explicit "pre-attaching" of arguments with partial to an otherwise Pythonic and normal class definition is easier (less cognitive load) to read, understand, and verify for correctness than a nested class definition with the parameters of the wrapping function buried somewhere inside it.
The only real disadvantage is that many people are unfamiliar with partial - but in my experience it is better for everyone to become familiar with partial anyway, because partial has a way of popping up as an easy and composable solution in many places, sometimes unexpectedly, like here.
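To illustrate the introspection point, a small sketch (functools.partial exposes the wrapped callable and the pre-applied arguments as attributes):

from functools import partial

def greet(greeting, name):
    return '{} {}'.format(greeting, name)

hello = partial(greet, 'Hello')

# The original callable and the pre-applied arguments stay visible.
print(hello.func)      # <function greet ...>
print(hello.args)      # ('Hello',)
print(hello.keywords)  # {}
print(hello('world'))  # Hello world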
Use a class factory:
def MakeHandlerClassFromArgv(init_args):
    class CustomHandler(BaseHTTPRequestHandler):
        def __init__(self, *args, **kwargs):
            super(CustomHandler, self).__init__(*args, **kwargs)
            do_stuff_with(self, init_args)

    return CustomHandler

if __name__ == "__main__":
    server_address = ('', 8000)
    HandlerClass = MakeHandlerClassFromArgv(sys.argv)
    httpd = HTTPServer(server_address, HandlerClass)
    httpd.serve_forever()
At the time of this writing, all the answers here essentially stick to the (very awkward) intention of the socketserver module's author that the handler passed in be a class (i.e. a constructor). Really, the only thing required of the handler is that it's callable, so we can work around the socketserver API by making instances of our handler class callable and having them run the superclass's __init__ code when called. In Python 3:
import http.server

class MyHandler(http.server.BaseHTTPRequestHandler):
    def __init__(self, message):
        self.message = message

    def __call__(self, *args, **kwargs):
        """Handle a request."""
        super().__init__(*args, **kwargs)

    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(self.message.encode("utf-8"))
This keeps the superclass "constructor" call out of __init__ which eliminates the possibility of dispatching a request (from the superclass's constructor) before the subclass's constructor is finished. Note that the __init__ override must be present to divert execution even if it's not needed for initialization; an empty implementation using pass would work.
With this design the weird interface is hidden and using the API looks more natural:
handler = MyHandler("Hello world")
server = http.server.HTTPServer(("localhost", 8000), handler)
server.serve_forever()
I would just comment on Thomas Orozco's answer, but since I can't...
Perhaps this will help others who also run into this problem. Before Python 3, Python had "old-style" classes, and BaseHTTPRequestHandler seems to be one of them. So, the factory should look like
def MakeHandlerClassFromArgv(init_args):
    class CustomHandler(BaseHTTPRequestHandler, object):
        def __init__(self, *args, **kwargs):
            do_stuff_with(self, init_args)
            super(CustomHandler, self).__init__(*args, **kwargs)

    return CustomHandler
to avoid errors like TypeError: must be type, not classobj.
Why not just subclass the RequestHandler?
class RequestHandler(BaseHTTPRequestHandler):
    a_variable = None

class Server(HTTPServer):
    def serve_forever(self, variable):
        self.RequestHandlerClass.a_variable = variable
        HTTPServer.serve_forever(self)

def run(server_class=Server, handler_class=RequestHandler):
    server_address = ('', 8000)
    httpd = server_class(server_address, handler_class)
    variable = sys.argv
    httpd.serve_forever(variable)
Subclassing HTTPServer is another option. Variables on the server are accessible in the request handler's methods via self.server.context. It basically works like this:
class MyHTTPServer(HTTPServer):
    def __init__(self, *args, **kwargs):
        HTTPServer.__init__(self, *args, **kwargs)
        self.context = SomeContextObject()

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        context = self.server.context
        ...

# Drawback: notice that you cannot actually pass the context parameter during
# construction here, but you can set it up within the __init__ of MyHTTPServer
server = MyHTTPServer(('', port), MyHandler)
server.serve_forever()
If you do not need instance properties, but only class properties, you could use this approach:
def run(server_class=HTTPServer, handler_class=BaseHTTPRequestHandler):
    server_address = ('', 8000)
    httpd = server_class(server_address, handler_class)
    httpd.RequestHandlerClass.my_custom_variable = "hello!"
    httpd.serve_forever()
or maybe you could:
def run(server_class=HTTPServer, handler_class=BaseHTTPRequestHandler):
    server_address = ('', 8000)
    httpd = server_class(server_address, handler_class)
    httpd.my_custom_variable = "hello!"
    httpd.serve_forever()
and retrieve in your RequestHandler with:
self.server.my_custom_variable
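For instance, a minimal sketch of a handler reading that value (the handler name and the response body are just illustrative):

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server instance that accepted the request is available as self.server.
        value = self.server.my_custom_variable
        self.send_response(200)
        self.end_headers()
        self.wfile.write(value.encode('utf-8'))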
Using a lambda is a pretty simple way to create a new function that takes the request handler arguments and constructs your custom handler from them.
Here I want to pass a variable that will be used in do_POST(), and to set the directory used by SimpleHTTPRequestHandler, so the setup call is
HTTPServer(('', 8001), lambda *_: _RequestHandler("[1, 2]", *_, directory=sys.path[0]))
Full program:
from http.server import HTTPServer, SimpleHTTPRequestHandler
import sys

class _RequestHandler(SimpleHTTPRequestHandler):
    def __init__(self, x, *args, **kwargs):
        self.x = x  # NEEDS TO HAPPEN BEFORE super().__init__()
        super().__init__(*args, **kwargs)

    def _set_headers(self):
        self.send_response(200)
        self.send_header('Content-type', 'application/json')
        self.end_headers()

    def do_POST(self):
        print("POST")
        length = int(self.headers.get('content-length'))
        message = self.rfile.read(length).decode('utf-8')
        print(message)
        self._set_headers()
        self.wfile.write(self.x.encode('utf-8'))

def run_server():
    server_address = ('', 8001)
    httpd = HTTPServer(server_address,
                       lambda *_: _RequestHandler("[1, 2]", *_, directory=sys.path[0]))
    print('serving http://localhost:8001')
    httpd.serve_forever()

if __name__ == '__main__':
    run_server()
Never do it with a global. Use the factory described in other answers.
CONFIG = None

class MyHandler(BaseHTTPRequestHandler):
    def __init__(self, ...
        self.config = CONFIG  # CONFIG is now 'stuff'

if __name__ == "__main__":
    global CONFIG
    CONFIG = 'stuff'
    server_address = ('', 8000)
    httpd = HTTPServer(server_address, MyHandler)
    httpd.serve_forever()
(except maybe in the privacy of your own home)
What I need is a sort of man-in-the-middle implementation: I need a server who receives connections from clients (binary data with different lengths) and forwards the stream to a server it connects to (acting as a client), and then sends the data back from the server it is connected to, to the clients.
It actually works by standing between the clients and the server, passing along the data they exchange (which is a stream, so it continuously gets data from one side and sends it to the other).
The server is static, so it is always the same, and its address can even be hardcoded; however when a client drops the connection, this server must also drop the connection to the "real" server.
I've been looking around, but couldn't find a solution or an example for such a simple problem.
The code I've written actually works, but I have not yet figured out how to put a reference into the server part that says "this is your assigned client", or into the client that says "this is your server". Here's my code:
#!/usr/bin/env python
from twisted.internet import protocol, reactor
from twisted.protocols import basic

client = None
server = None

class ServerProtocol(protocol.Protocol):
    def connectionMade(self):
        global server
        factory = protocol.ClientFactory()
        factory.protocol = ClientProtocol
        server = self
        reactor.connectTCP('localhost', 1324, factory)

    def dataReceived(self, data):
        global client
        client.transport.write(data)

class ClientProtocol(protocol.Protocol):
    def connectionMade(self):
        global client
        # Here's the instance of the client
        client = self

    def dataReceived(self, data):
        global server
        server.transport.write(data)

def main():
    import sys
    from twisted.python import log
    log.startLogging(sys.stdout)

    factory = protocol.ServerFactory()
    factory.protocol = ServerProtocol
    # Here's the instance of the server
    server = ServerProtocol

    reactor.listenTCP(2593, factory)
    reactor.run()

if __name__ == '__main__':
    main()
Now, the point is that these instances shouldn't be kept in global objects, but inside the two classes: how?
I've managed to solve the issue by myself and, for future reference (or to help anybody else who has this problem), here's the code I used to solve it.
I think both my solution and the one kindly given by jedwards work; now I just have to study his a little more to be sure that what I've done is correct: this is my first application using the Twisted framework, and studying somebody else's solution is the way to learn something new! :)
#!/usr/bin/env python
from twisted.internet import protocol, reactor
from twisted.protocols import basic

class ServerProtocol(protocol.Protocol):
    def __init__(self):
        self.buffer = None
        self.client = None

    def connectionMade(self):
        factory = protocol.ClientFactory()
        factory.protocol = ClientProtocol
        factory.server = self
        reactor.connectTCP('gameserver16.gamesnet.it', 2593, factory)

    def dataReceived(self, data):
        if (self.client != None):
            self.client.write(data)
        else:
            self.buffer = data

    def write(self, data):
        self.transport.write(data)
        print 'Server: ' + data.encode('hex')

class ClientProtocol(protocol.Protocol):
    def connectionMade(self):
        self.factory.server.client = self
        self.write(self.factory.server.buffer)
        self.factory.server.buffer = ''

    def dataReceived(self, data):
        self.factory.server.write(data)

    def write(self, data):
        self.transport.write(data)
        print 'Client: ' + data.encode('hex')

def main():
    import sys
    from twisted.python import log
    log.startLogging(sys.stdout)

    factory = protocol.ServerFactory()
    factory.protocol = ServerProtocol

    reactor.listenTCP(2593, factory)
    reactor.run()

if __name__ == '__main__':
    main()
Consider this approach:
#!/usr/bin/env python
import sys

from twisted.internet import reactor
from twisted.internet.protocol import ServerFactory, ClientFactory, Protocol
from twisted.protocols import basic
from twisted.python import log

LISTEN_PORT = 2593
SERVER_PORT = 1234

class ServerProtocol(Protocol):
    def connectionMade(self):
        reactor.connectTCP('localhost', SERVER_PORT, MyClientFactory(self))

    def dataReceived(self, data):
        self.clientProtocol.transport.write(data)

class ClientProtocol(Protocol):
    def connectionMade(self):
        # Pass ServerProtocol a ref. to ClientProtocol
        self.serverProtocol.clientProtocol = self

    def dataReceived(self, data):
        self.serverProtocol.transport.write(data)

class MyServerFactory(ServerFactory):
    protocol = ServerProtocol

    def buildProtocol(self, addr):
        # Create ServerProtocol
        p = ServerFactory.buildProtocol(self, addr)
        return p

class MyClientFactory(ClientFactory):
    protocol = ClientProtocol

    def __init__(self, serverProtocol_):
        self.serverProtocol = serverProtocol_

    def buildProtocol(self, addr):
        # Create ClientProtocol
        p = ClientFactory.buildProtocol(self, addr)
        # Pass ClientProtocol a ref. to ServerProtocol
        p.serverProtocol = self.serverProtocol
        return p

def main():
    log.startLogging(sys.stdout)

    reactor.listenTCP(LISTEN_PORT, MyServerFactory())
    reactor.run()

if __name__ == '__main__':
    main()
The ServerProtocol instance passes a reference to itself to the MyClientFactory constructor, which then tells the ClientProtocol what ServerProtocol instance it's associated with.
Similarly, when the ClientProtocol connection is established, it uses its reference to the ServerProtocol to tell the ServerProtocol what ClientProtocol to use.
Note: There's no error checking in this code, so you may encounter errors regarding NoneType if things go wrong (for example, if the real server isn't listening).
The important lines are:
reactor.connectTCP('localhost', SERVER_PORT, MyClientFactory(self))
#...
def __init__(self, serverProtocol_):
    self.serverProtocol = serverProtocol_
Here, you pass a reference to the ServerProtocol to the MyClientFactory constructor. It stores this reference in a member variable. You do this so that when the client factory creates a ClientProtocol, it can pass the reference on:
# Pass ClientProtocol a ref. to ServerProtocol
p.serverProtocol = self.serverProtocol
Then, once the connection is made from your script to the real server, the reverse happens. The ClientProtocol gives the ServerProtocol a reference to itself:
# Pass ServerProtocol a ref. to ClientProtocol
self.serverProtocol.clientProtocol = self
Finally, both protocols use the stored references of each other to send data when it is received:
def dataReceived(self, data):
    self.clientProtocol.transport.write(data)

# ...

def dataReceived(self, data):
    self.serverProtocol.transport.write(data)
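As noted above, there is no error checking; one hedged way to guard against the real server not being connected yet (not part of the original answer, and similar in spirit to the buffering in the asker's own solution) is roughly:

class ServerProtocol(Protocol):
    clientProtocol = None  # set by ClientProtocol.connectionMade once connected

    def connectionMade(self):
        reactor.connectTCP('localhost', SERVER_PORT, MyClientFactory(self))

    def dataReceived(self, data):
        if self.clientProtocol is not None:
            self.clientProtocol.transport.write(data)
        # else: the outbound connection isn't up yet; buffer or drop the data here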
I have a server application written in Python using Twisted, and I'd like to know how to kill instances of my protocol (bottalk). Every time I get a new client connection, I see the instance in memory (print Factory.clients), but let's say I want to kill one of those instances from the server side (drop a specific client connection). Is this possible? I've tried looking for a phrase using lineReceived and, if it matches, calling self.transport.loseConnection(), but that doesn't seem to reference the instance anymore or something.
class bottalk(LineReceiver):
    from os import linesep as delimiter

    def connectionMade(self):
        Factory.clients.append(self)
        print Factory.clients

    def lineReceived(self, line):
        for bots in Factory.clients[1:]:
            bots.message(line)
        if line == "killme":
            self.transport.loseConnection()

    def message(self, message):
        self.transport.write(message + '\n')

class botfactory(Factory):
    def buildProtocol(self, addr):
        return bottalk()

Factory.clients = []

stdio.StandardIO(bottalk())
reactor.listenTCP(8123, botfactory())
reactor.run()
You closed the TCP connection by calling loseConnection. But there's no code anywhere in your application that removes items from the clients list on the factory.
Try adding this to your protocol:
def connectionLost(self, reason):
    Factory.clients.remove(self)
This will remove the protocol instance from the clients list when the protocol's connection is lost.
Also, you should consider not using the global Factory.clients to implement this functionality. It's bad for all the usual reasons globals are bad. Instead, give each protocol instance a reference to its factory and use that:
class botfactory(Factory):
    def buildProtocol(self, addr):
        protocol = bottalk()
        protocol.factory = self
        return protocol

factory = botfactory()
factory.clients = []

StandardIO(factory.buildProtocol(None))
reactor.listenTCP(8123, factory)
Now each bottalk instance can use self.factory.clients instead of Factory.clients.
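A rough sketch of what the protocol might look like with that change (it simply swaps the Factory.clients class attribute for the per-instance factory reference):

class bottalk(LineReceiver):
    from os import linesep as delimiter

    def connectionMade(self):
        # The factory reference was attached in botfactory.buildProtocol.
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        self.factory.clients.remove(self)

    def lineReceived(self, line):
        for bots in self.factory.clients[1:]:
            bots.message(line)
        if line == "killme":
            self.transport.loseConnection()

    def message(self, message):
        self.transport.write(message + '\n')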