Python Twisted best way to signal events to a proxy

I will be hosting a service that will act somewhat like a proxy for something I am a client to.
So I want my ProxyService (a twisted.protocol server) to take lots of actors (clients). On the server side of things, I need a global connection (only one connection to it, shared by all clients) to an ExistingService (code I didn't write, and that I'm a client to).
When the ExistingService says something interesting, I need to broadcast it to all actors.
When an actor says something to my ProxyService, I need to check if it looks good to me. If it does, I need to inform the ExistingService.
I think I know how to solve this using global variables, but I'm wondering if there's a better way to push the messages around.

You have the basic design well established.
It's a basic "man in the middle" approach.
There are many ways to implement it, but this should get you started:
from twisted.internet import endpoints, protocol, reactor

class ProxyClient(protocol.Protocol):
    def connectionMade(self):
        print('[x] proxy connection made to server')
        self.factory.proxy_proto = self

    def connectionLost(self, reason):
        print('[ ] proxy connection to server lost: {0}'.format(reason))
        self.factory.proxy_proto = None

    def dataReceived(self, data):
        print('==> received {0} from server'.format(data))
        print('<== transmitting data to all actors')
        for actor in self.factory.actors:
            actor.transport.write(data)

class Actor(protocol.Protocol):
    def connectionMade(self):
        print('[x] actor connection established')
        self.factory.actors.add(self)

    def connectionLost(self, reason):
        print('[ ] actor disconnected: {0}'.format(reason))
        self.factory.actors.remove(self)

    def dataReceived(self, data):
        print('==> received {0} from actor'.format(data))
        proxy_connection = self.factory.proxy_factory.proxy_proto
        if proxy_connection is not None:
            print('<== transmitting data to server through the proxy')
            proxy_connection.transport.write(data)
        else:
            print('[ ] proxy connection to server has not been established')

def setup_servers():
    # client connection to the ExistingService
    PROXY_HOST = '127.0.0.1'
    PROXY_PORT = 9000
    proxy_factory = protocol.ClientFactory()
    proxy_factory.protocol = ProxyClient
    proxy_factory.proxy_proto = None
    proxy_factory.actors = set()
    proxy_client = endpoints.TCP4ClientEndpoint(reactor, port=PROXY_PORT, host=PROXY_HOST)
    proxy_client.connect(proxy_factory)

    # server that the actors connect to
    ACTOR_HOST = '127.0.0.1'
    ACTOR_PORT = 8000
    actor_factory = protocol.Factory()
    actor_factory.protocol = Actor
    actor_factory.proxy_factory = proxy_factory
    actor_factory.actors = proxy_factory.actors  # shared set of actor connections
    actor_server = endpoints.TCP4ServerEndpoint(reactor, port=ACTOR_PORT, interface=ACTOR_HOST)
    actor_server.listen(actor_factory)

def main():
    setup_servers()
    reactor.run()

main()
The core logic that lets data received from the server be proxied to the actors is the shared set: proxy_factory.actors = set() and actor_factory.actors = proxy_factory.actors.
Both factories hold a reference to the same mutable set, so each side always sees the current collection of actor connections without any module-level globals; this example simply gives each connection's factory object context into the other.
When an actor connects to the server, its Actor protocol is added to the set, and when data arrives from the server, every protocol in the set receives a copy.
See the respective dataReceived() methods of each protocol object for how that works.
The example above doesn't use global variables at all, but that's not to say that you couldn't use them.
See how far you can get using this method of passing around variables that give context into other objects.
Also, certain situations weren't explicitly handled, such as caching received data in the event the server or actors haven't connected yet.
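For example, here is a minimal sketch of one way to buffer actor data while the upstream connection is down (the pending list and the Buffering* subclass names are my own additions, not part of the code above):
class BufferingProxyClient(ProxyClient):
    def connectionMade(self):
        ProxyClient.connectionMade(self)
        # flush anything the actors sent while we were disconnected
        for data in self.factory.pending:
            self.transport.write(data)
        del self.factory.pending[:]

class BufferingActor(Actor):
    def dataReceived(self, data):
        proxy_connection = self.factory.proxy_factory.proxy_proto
        if proxy_connection is not None:
            proxy_connection.transport.write(data)
        else:
            # queue the data until the proxy connection (re)appears
            self.factory.proxy_factory.pending.append(data)
You would also initialize proxy_factory.pending = [] in setup_servers() and point the two factories at the buffering subclasses.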
Hopefully there's enough information here for you to determine the best course of action based on your needs.
There's some room for streamlining the syntax to make it shorter as well.
As a side note, an alternative to global variables is picobox. It's a dependency injection library, but I've found that it satisfies most of my needs when I require parameters from external sources.
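A minimal sketch of how that can look, assuming picobox's Box/push API (the reactor_conf key is made up for illustration):
import picobox

box = picobox.Box()
box.put('reactor_conf', {'host': '127.0.0.1', 'port': 9000})

@picobox.pass_('reactor_conf')
def setup_servers(reactor_conf):
    # reactor_conf is injected from the box instead of read from a global
    print(reactor_conf['host'], reactor_conf['port'])

with picobox.push(box):
    setup_servers()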


How can I implement port forwarding in a Paramiko server?

A "direct-tcpip" request (commonly known as port-forwarding) occurs when you run SSH as ssh user#host -L <local port>:<remote host>:<remote port> and then try to connect over the local port.
I'm trying to implement direct-tcpip on a custom SSH server, and Paramiko offers the check_channel_direct_tcpip_request in the ServerInterface class in order to check if the "direct-tcpip" request should be allowed, which can be implemented as follows:
class Server(paramiko.ServerInterface):
# ...
def check_channel_direct_tcpip_request(self, chanid, origin, destination):
return paramiko.OPEN_SUCCEEDED
However, when I use the aforementioned SSH command, and connect over the local port, nothing happens, probably because I need to implement the connection handling myself.
Reading the documentation, it also appears that the channel is only opened after OPEN_SUCCEEDED has been returned.
How can I handle the direct-tcpip request after returning OPEN_SUCCEEDED for the request?
You indeed do need to set up your own connection handler. This is a lengthy answer explaining the steps I took; some of it you will not need if your server code already works. The whole running server example is here: https://controlc.com/25439153
I used the Paramiko example server code from https://github.com/paramiko/paramiko/blob/master/demos/demo_server.py as a basis and implanted some socket code on top of it. This does not have any error handling, thread-related niceties or anything else "proper" for that matter, but it allows you to use the port forwarder.
It also has a lot of things you do not need, as I did not want to start tidying up dummy example code. Apologies for that.
To start with, we need the forwarder tools. This creates a thread to run the "tunnel" forwarder. It also answers your question about where you get your channel: you accept() it from the transport, but you need to do that in the forwarder thread. As you stated in your OP, the channel is not yet available in the check_channel_direct_tcpip_request() function, but it will eventually become available to the thread.
import select
import socket
import threading

def tunnel(sock, chan, chunk_size=1024):
    # shuttle bytes between the raw socket and the SSH channel
    while True:
        r, w, x = select.select([sock, chan], [], [])
        if sock in r:
            data = sock.recv(chunk_size)
            if len(data) == 0:
                break
            chan.send(data)
        if chan in r:
            data = chan.recv(chunk_size)
            if len(data) == 0:
                break
            sock.send(data)
    chan.close()
    sock.close()

class ForwardClient(threading.Thread):
    daemon = True

    def __init__(self, address, transport, chanid):
        threading.Thread.__init__(self)
        self.socket = socket.create_connection(address)
        self.transport = transport
        self.chanid = chanid

    def run(self):
        # wait until the channel matching our channel id is opened
        while True:
            chan = self.transport.accept(10)
            if chan is None:
                continue
            print("Got new channel (id: %i)." % chan.get_id())
            if chan.get_id() == self.chanid:
                break
        peer = self.socket.getpeername()
        try:
            tunnel(self.socket, chan)
        except Exception:
            pass
Back to the example server code. Your server class needs to take the transport as a parameter, unlike in the example code:
class Server(paramiko.ServerInterface):
    # 'data' is the output of base64.b64encode(key)
    # (using the "user_rsa_key" files)
    data = (
        b"AAAAB3NzaC1yc2EAAAABIwAAAIEAyO4it3fHlmGZWJaGrfeHOVY7RWO3P9M7hp"
        b"fAu7jJ2d7eothvfeuoRFtJwhUmZDluRdFyhFY/hFAh76PJKGAusIqIQKlkJxMC"
        b"KDqIexkgHAfID/6mqvmnSJf0b5W8v5h2pI/stOSwTQ+pxVhwJ9ctYDhRSlF0iT"
        b"UWT10hcuO4Ks8="
    )
    good_pub_key = paramiko.RSAKey(data=decodebytes(data))

    def __init__(self, transport):
        self.transport = transport
        self.event = threading.Event()
Then you will override the relevant method and create the forwarder there:
def check_channel_direct_tcpip_request(self, chanid, origin, destination):
    print(chanid, origin, destination)
    f = ForwardClient(destination, self.transport, chanid)
    f.start()
    return paramiko.OPEN_SUCCEEDED
You need to add transport parameter to the creation of the server class:
t.add_server_key(host_key)
server = Server(t)
This example server requires an RSA private key named test_rsa.key in the working directory. Create any RSA key there; you do not actually need it, but I did not bother to strip its use out of the code.
You can then run your server (runs on port 2200) and issue
ssh -p 2200 -L 2300:www.google.com:80 robey@localhost
(password is foo)
Now when you try
telnet localhost 2300
and type something there, you will get a response from Google.

python3.5: asyncio, How to wait for "transport.write(data)" to finish or to return an error?

I'm writing a TCP client in Python 3.5 using asyncio.
After reading How to detect write failure in asyncio?, which talks about the high-level streaming API, I've tried to implement this using the low-level protocol API.
import asyncio

class _ClientProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport

class Client:
    def __init__(self, loop=None):
        self.protocol = _ClientProtocol()
        if loop is None:
            loop = asyncio.get_event_loop()
        self.loop = loop
        loop.run_until_complete(self._connect())

    async def _connect(self):
        await self.loop.create_connection(
            lambda: self.protocol,
            '127.0.0.1',
            8080,
        )
        # based on https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-asyncawait-world/#bug-3-closing-time
        self.protocol.transport.set_write_buffer_limits(0)

    def write(self, data):
        self.protocol.transport.write(data)

    def wait_all_data_have_been_written_or_throw(self):
        pass

client = Client()
client.write(b"some bytes")
client.wait_all_data_have_been_written_or_throw()
As per the Python documentation, I know write() is non-blocking, and I would like wait_all_data_have_been_written_or_throw() to tell me whether all the data has been written, or whether something bad happened in the middle (like a lost connection, but I assume there are many more things that can go wrong, and that the underlying socket already raises exceptions for them?).
Does the standard library provide a way to do so?
The question is mainly about TCP socket functionality, not the asyncio implementation itself.
Let's look at the following code:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
s.send(b'data')
A successful send() call means only that the data was transferred into the kernel-space buffer for the socket, nothing more.
The data has not been sent over the wire, has not been received by the peer and, obviously, has not been processed by the receiver.
The actual sending is performed asynchronously by the operating system kernel; user code has no control over it.
That's why wait_all_data_have_been_written_or_throw() doesn't make much sense: a write that raises no error doesn't mean the peer received the data, only that it was successfully moved from the user-space buffer to the kernel-space one.
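That said, the protocol API does report transport errors through connection_lost(). A minimal sketch of using it to learn that writes can no longer succeed (the lost future is my own addition, not part of the question's code):
import asyncio

class _ClientProtocol(asyncio.Protocol):
    def __init__(self, loop):
        # resolves when the connection dies; carries the exception, if any
        self.lost = loop.create_future()

    def connection_made(self, transport):
        self.transport = transport

    def connection_lost(self, exc):
        # exc is None on a clean close, an exception object on error
        if not self.lost.done():
            self.lost.set_result(exc)
Even then, a write that provokes no error only means the bytes left your process; confirmation that the peer received and processed them has to come from your application-level protocol.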

Can't receive data from socket

I'm making a client-server program, and there is a problem with the client part.
The problem is that it receives data forever. I've tested this particular class, listed below, in a Python interpreter. I successfully (maybe not) connected to Google, but then the program stopped in recvData() at data = self.socket.recv(1024).
import socket

class client():
    def __init__(self, host, port):
        self.host = host
        self.port = port
        self.socket = self.connect()
        self.command = commands()  # defined elsewhere

    def connect(self):
        '''
        Connect to a remote host.
        '''
        try:
            return socket.create_connection((self.host, self.port))
        except socket.error:
            print(":: Failed to connect to a remote port : ")

    def sendCommand(self, comm):
        '''
        Send command to remote host
        Returns server output
        '''
        comman = comm.encode()
        # for case in switch(comman):
        #     if case(self.command.RETRV_FILES_LIST.encode()):
        #         self.socket.send(b'1')
        #         return self.recvData()
        #     if case():
        #         print(":: Got wrong command")
        if comman == b'1':
            self.socket.send(b'1')
            return self.recvData()

    def recvData(self):
        '''
        Receives all the data
        '''
        i = 0
        total_data = []
        while True:
            data = self.socket.recv(1024)
            if not data:
                break
            total_data.append(data)
            i += 1
            if i > 9:
                break
        return total_data
About the commented-out part:
I thought the problem was in the switch/case implementation, so I used a plain if statement instead. But that wasn't it.
Your problem is that self.socket.recv(1024) only returns an empty string once the socket has been shut down on the server side and all data has been received. The way you coded your client, it has no idea when the full message has been received, so it waits for more. How you deal with the problem depends very much on the protocol used by the server.
Consider a web server. It sends a line-delimited header including a content-length parameter telling the client exactly how many bytes it should read. The client scans for newlines until the header is complete and then uses that value to do recv(exact_size) (or read in chunks if the body is large), so that the recv won't block after the last byte comes in.
Even then, there are decisions to make. The client knows how large the web page is, but may want to pass partial data back to the caller so it can start painting the page before all the data has arrived. Of course, the caller needs to know that this is what happens; there is a protocol, or set of rules, for the API itself.
You need to define how the client knows a message is complete and what exactly it passes back to its caller. A great way to deal with the problem is to let some other protocol such as zeromq (http://zeromq.org/) do the work for you. A simple Python client/server can be implemented with xmlrpc. And there are many other ways.
You said you are implementing a client/server program, then you mentioned "connected to Google" and telnet... These are all very different things, and a single client strategy won't work with all of them.
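As an illustration, here is a minimal sketch of one common approach, a length-prefixed framing scheme (this assumes you control the server's protocol; the helper names are made up):
import struct

def recv_exact(sock, n):
    # keep calling recv() until exactly n bytes have arrived
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('socket closed before the full message arrived')
        buf += chunk
    return buf

def recv_message(sock):
    # 4-byte big-endian length prefix, then the payload itself
    (length,) = struct.unpack('!I', recv_exact(sock, 4))
    return recv_exact(sock, length)
The sender writes struct.pack('!I', len(payload)) + payload, so the receiver always knows exactly when a message is complete.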

Implementing Referenceable objects client-side with Twisted Perspective Broker

I am trying to implement a simple server reply in Perspective Broker.
Possible implementation (please suggest others if possible):
The client requests that the server execute a server method; the server executes it and then replies (by executing a client method whose sole purpose is to print a message):
[Client-side]:
class ClientPrint(pb.Referenceable):
    def remote_clientprint(self, message):
        print "Printing the message from the server: ", message
[Server-side]:
class RootServerObject(pb.Root):
    def remote_OneFunc(self, ...):
        ...
        print "Now sending the reply..."
        # *get ClientPrint object?*
        clientprintobj.callRemote("clientprint", "this is the reply!")
How can I implement the grabbing of client-side objects? Is there a better way to implement server replies than grabbing a client-side object and calling a print-only client method?
Here is the full code where I am trying to implement the replies:
[Client-side]:
from twisted.internet import reactor
from twisted.spread import pb
class Client():
def __init__(self, addr, port, spec):
self.addr = None
self.port = None
self.SomeData = None
def connect(self, addr, port):
factory = pb.PBClientFactory()
reactor.connectTCP(addr, port, factory)
def1 = factory.getRootObject()
def1.addCallbacks(self.got_obj, self.err_obj)
def got_obj(self, rootsrvobj):
print "Got root server obj:", rootsrvobj
self.server = rootsrvobj
def2 = self.server.callRemote("SomeFunc", SomeData)
def err_obj(self, reason):
print "Error getting root server obj:", reason
self.quit()
def cmdsub(addr, port, SomeData):
c = Client(addr, port, SomeData)
c.connect(addr, port)
[Server-side]:
class RootServerObject(pb.Root):
    def __init__(self):
        self.DataOut = None

    def remote_SomeFunc(self, SomeData):
        self.DataOut = hash(SomeData)
        print "Now sending reply..."
        # *implement a reply?*
Perhaps there are some more advanced Twisted (or Twisted PB) features that will make this simpler.
Documentation: https://twistedmatrix.com/documents/12.3.0/core/howto/pb-usage.html#auto3
Thanks.
The simplest way to do this is to take the client-side object that the server needs to use and pass it to the server. Almost any solution I can think of has this at its core.
Change your client's got_obj method to be something more like this:
def got_obj(self, rootsrvobj):
    print "Got root server obj:", rootsrvobj
    self.server = rootsrvobj
    def2 = self.server.callRemote("SomeFunc", self, self.SomeData)
And change the implementation of remote_SomeFunc to be something more like this:
def remote_SomeFunc(self, client, SomeData):
    self.DataOut = hash(SomeData)
    print "Now sending reply..."
    client.callRemote("clientprint", "Here is your reply")
You might want to investigate Twisted Cred as a more structured way to manage references to your client object - but cred is just building on this exact feature of Perspective Broker to provide its more abstract, more featureful interface.
However, notice that I said "almost" above...
Keep in mind that Twisted's implementation of Perspective Broker has well-integrated support for Deferreds. If a remote_ method returns a Deferred, no response is sent to the method call until the Deferred fires, and the result is then sent as the result of the method call. You might consider putting the logic of remote_clientprint into a callback on the Deferred returned by self.server.callRemote("SomeFunc", SomeData) and making the server's remote_SomeFunc return the reply, either synchronously or asynchronously (as a Deferred).
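A minimal sketch of that Deferred-friendly shape, reusing the names from the question (got_obj and print_reply would be methods on the question's Client class; print_reply is my own illustrative name):
[Server-side]:
class RootServerObject(pb.Root):
    def remote_SomeFunc(self, SomeData):
        # returning a value (or a Deferred) makes PB send the eventual
        # result back as the result of the client's callRemote()
        return hash(SomeData)
[Client-side]:
    def got_obj(self, rootsrvobj):
        self.server = rootsrvobj
        def2 = self.server.callRemote("SomeFunc", self.SomeData)
        def2.addCallback(self.print_reply)

    def print_reply(self, reply):
        print "Reply from the server: ", reply
This removes the need for a print-only remote method on the client entirely.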

Run Socket Script on Multiple Ports

What I want to do is run the following script on every port from 1025 up. I am making a Blackjack iPhone app that interacts with this script for online gaming. Right now I would have to start the script on each port manually, changing the listening port each time. How can I make it so that there is a new table on every port? Each table has an ID the app will check to see how many players there are and who is at the table.
The socket sets the ID for the Table class, but I need to be on multiple ports to be able to keep each table going, saving every player's moves and so on.
Bottom line: how can I make this script run on every port, how can I have it change the listening port by itself, and how can I have Python create all the Tables by itself on each port?
from twisted.internet.protocol import Factory, Protocol
from twisted.internet import reactor

class Table:
    def __init__(self, id):
        self.id = id
        self.players = []
        self.positions = {'1': '', '2': '', '3': '', '4': ''}

    def sit(self, player_id, position):
        self.positions[position] = player_id

# --------------------------------------------- #
# --------------------------------------------- #

class Socket(Protocol):
    def connectionMade(self):
        #self.transport.write("""connected""")
        self.factory.clients.append(self)
        print "Clients are ", self.factory.clients

    def connectionLost(self, reason):
        self.factory.clients.remove(self)

    def dataReceived(self, data):
        #print "data is ", data
        a = data.split(':')
        if len(a) > 1:
            command = a[0]
            content = a[1]
            msg = ""
            print msg
            for c in self.factory.clients:
                c.message(msg)

    def message(self, message):
        self.transport.write(message)

factory = Factory()
factory.protocol = Socket
factory.clients = []
reactor.listenTCP(1025, factory)
print "Blackjack server started"
reactor.run()
Answering your question
You ask:
How can I make this script run on every port?
How can I change the listening port by itself?
How can I have python make all Tables by itself on each port?
I think the answer here is to simply use a loop to bind the factory to as many ports as you want.
However, since you're storing a list of clients in your factory, you'll need to create a new factory for each port as well. So, something like:
factories = []
for i in range(0, NUM_TABLES):
    factory = Factory()
    factory.protocol = Socket  # the class itself, not an instance
    factory.clients = []
    factories.append(factory)
    reactor.listenTCP(1025 + i, factory)
reactor.run()
You're using classes, so each factory keeps its own list of clients, and each connection gets its own Socket instance to manage it. You don't show how Table instances are instantiated, but as long as each Socket or Factory instance creates and maintains a reference to its Table, this should allow you to have multiple connections, each with its own state.
By keeping a list of all factories, you can iterate over them to make a list of running games, etc.
Considering a different architecture
While the above might work, it's not how client-server systems are typically architected.
In particular, your system here requires your client to know what port to use. That may work ad hoc when you're all in your living room together, but it's tedious and won't scale in general.
What you want is something, like a webserver, that listens on one port to establish the connection and then tells the client: "hey, your table id is 25, use that whenever you want to talk". In addition, this means making a list of tables available to the client so they can pick one. And you can get fancier from there: a special expiring cookie given to the client so that it doesn't accidentally hack/disturb a game that it's no longer part of, etc.
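A minimal sketch of that single-port design, reusing the question's Table class (the line-based JOIN command and the tables registry are hypothetical, just to show the shape):
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver
from twisted.internet import reactor

class TableProtocol(LineReceiver):
    def lineReceived(self, line):
        parts = line.split(':')
        if parts[0] == 'JOIN' and len(parts) > 1:
            # look the table up (or create it) by ID instead of by port
            table_id = parts[1]
            tables = self.factory.tables
            if table_id not in tables:
                tables[table_id] = Table(table_id)
            self.table = tables[table_id]
            self.sendLine('JOINED:%s' % table_id)

factory = Factory()
factory.protocol = TableProtocol
factory.tables = {}  # one shared registry, one listening port
reactor.listenTCP(1025, factory)
reactor.run()
Every client connects to the same port, and the table ID travels inside the protocol instead of in the port number.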
