Why does gevent.socket break multiprocessing.connection's auth - python

I have an application that uses both grequests and multiprocessing.managers for a combination of IPC and asynchronous RESTful communication over HTTP.
It seems that grequests, by calling gevent.monkey's patch_all() method, breaks the multiprocessing.connection module used by the multiprocessing.managers.SyncManager class and its derivatives.
This is apparently not an isolated issue: it affects any use case that relies on multiprocessing.connection, such as multiprocessing.pool.
Drilling down into the code in gevent/monkey.py, I found that swapping the stdlib socket module for gevent.socket is what causes the breakage.
The offending line is at line 115 of gevent/monkey.py, in the patch_socket() function:
def patch_socket(dns=True, aggressive=True):
    """Replace the standard socket object with gevent's cooperative sockets.
    ...
    _socket.socket = socket.socket  # This line breaks multiprocessing.connection!
    ...
My question, then, is: why does this swap break multiprocessing.connection, and what advantages does gevent.socket offer over the stdlib's socket module? In other words, what performance loss, if any, will I incur by not patching the socket module?
Traceback
Traceback (most recent call last):
  File "clientWithGeventMonkeyPatch.py", line 49, in <module>
    client = GetClient(host, port, authkey)
  File "clientWithGeventMonkeyPatch.py", line 39, in GetClient
    client.connect()
  File "/usr/lib/python2.7/multiprocessing/managers.py", line 500, in connect
    conn = Client(self._address, authkey=self._authkey)
  File "/usr/lib/python2.7/multiprocessing/connection.py", line 175, in Client
    answer_challenge(c, authkey)
  File "/usr/lib/python2.7/multiprocessing/connection.py", line 414, in answer_challenge
    response = connection.recv_bytes(256)         # reject large message
IOError: [Errno 11] Resource temporarily unavailable
Code to reproduce the error
(on Ubuntu Server 11.10, Python 2.7.3, with gevent, greenlet, and grequests installed)
manager.py
## manager.py
import multiprocessing
import multiprocessing.managers
import datetime

class LocalManager(multiprocessing.managers.SyncManager):
    def __init__(self, *args, **kwargs):
        multiprocessing.managers.SyncManager.__init__(self, *args, **kwargs)
        self.__type__ = 'LocalManager'

def GetManager(host, port, authkey):
    def getdatetime():
        return '{}'.format(datetime.datetime.now())
    LocalManager.register('getdatetime', callable = getdatetime)
    manager = LocalManager(address = (host, port), authkey = authkey)
    manager.start()
    return manager

if __name__ == '__main__':
    # define our manager connection parameters
    port = 55555
    host = 'localhost'
    authkey = 'auth1234'
    # start a manager
    man = GetManager(host, port, authkey)
    # wait for user input to shut down
    raw_input('return to shutdown')
    man.shutdown()
client.py
## client.py -- this one works
import time
import multiprocessing.managers

class RemoteClient(multiprocessing.managers.SyncManager):
    def __init__(self, *args, **kwargs):
        multiprocessing.managers.SyncManager.__init__(self, *args, **kwargs)
        self.__type__ = 'RemoteClient'

def GetClient(host, port, authkey):
    RemoteClient.register('getdatetime')
    client = RemoteClient(address = (host, port), authkey = authkey)
    client.connect()
    return client

if __name__ == '__main__':
    # define our client connection parameters
    port = 55555
    host = 'localhost'
    authkey = 'auth1234'
    # connect to the manager
    client = GetClient(host, port, authkey)
    print 'connected', client
    print 'client.getdatetime()', client.getdatetime()
    # wait a couple of seconds, then do it again
    time.sleep(2)
    print 'client.getdatetime()', client.getdatetime()
    # exit...
clientWithGeventMonkeyPatch.py
## clientWithGeventMonkeyPatch.py -- breaks, depending on patch_all() parameters
import time
import multiprocessing.managers

# this part is copied from grequests
# bear in mind that it doesn't actually do anything in this module.
try:
    import gevent
    from gevent import monkey as curious_george
    from gevent.pool import Pool
except ImportError:
    raise RuntimeError('Gevent is required for grequests.')

# this line causes breakage of the multiprocessing.manager connection auth method:
# Monkey-patch.
# patch_all() parameters with default values: socket=True, dns=True, time=True, select=True, thread=True, os=True, ssl=True, aggressive=True
curious_george.patch_all(thread=False, select=False) # breaks
#~ curious_george.patch_all(thread=False, select=False, socket = False) # works!
#~ curious_george.patch_all(thread=False, select=False, socket = True, aggressive = True, dns = True) # same as (thread=False, select=False); breaks
#~ curious_george.patch_all(thread=False, select=False, socket = True, aggressive = True, dns = False) # breaks
#~ curious_george.patch_all(thread=False, select=False, socket = True, aggressive = False, dns = True) # breaks
#~ curious_george.patch_all(thread=False, select=False, socket = True, aggressive = False, dns = False) # breaks

class RemoteClient(multiprocessing.managers.SyncManager):
    def __init__(self, *args, **kwargs):
        multiprocessing.managers.SyncManager.__init__(self, *args, **kwargs)
        self.__type__ = 'RemoteClient'

def GetClient(host, port, authkey):
    RemoteClient.register('getdatetime')
    client = RemoteClient(address = (host, port), authkey = authkey)
    client.connect()
    return client

if __name__ == '__main__':
    # define our client connection parameters
    port = 55555
    host = 'localhost'
    authkey = 'auth1234'
    # connect to the manager
    client = GetClient(host, port, authkey)
    print 'connected', client
    print 'client.getdatetime()', client.getdatetime()
    # wait a couple of seconds, then do it again
    time.sleep(2)
    print 'client.getdatetime()', client.getdatetime()
    # exit...

If you don't patch the socket module, gevent loses its ability to yield to other greenlets instead of blocking on network operations, and with it most of the benefit of using gevent in the first place.
gevent and multiprocessing aren't really designed to play nicely with one another: gevent mostly assumes that you're doing your network connections through it, and not bypassing the highest-level Python socket interfaces (which multiprocessing does).
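If you do need the manager's authenticated connection to keep working alongside gevent, the practical workaround (at the cost described above) is to exclude the socket module from the patch, exactly what the commented-out socket = False line in the question demonstrates. A minimal sketch of that approach (ports and names reused from the question; newer gevent releases also offer gevent.monkey.get_original to fetch the unpatched implementation if you need both behaviours side by side):
# Sketch: patch everything except socket so multiprocessing.connection keeps
# its blocking stdlib sockets, and gevent handles the rest.
from gevent import monkey
monkey.patch_all(thread=False, select=False, socket=False)

import multiprocessing.managers

class RemoteClient(multiprocessing.managers.SyncManager):
    pass

RemoteClient.register('getdatetime')
client = RemoteClient(address=('localhost', 55555), authkey='auth1234')
client.connect()   # the auth challenge now runs over a normal blocking socket
print 'client.getdatetime()', client.getdatetime()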

Related

Twisted Producer was never unregistered

I am writing a custom SSL proxy with Twisted. I keep running into an issue that happens every so often and I can't figure out what the problem is.
When I try to connect the client transport to the server's transport through the registerProducer function, exactly as twisted.protocols.portforward does, I keep getting this error:
  File "/opt/Memory/Mobile/Proxy/forwarder.py", line 40, in connectionMade
    self.peer.transport.registerProducer(self.transport, True)
  File "/usr/lib/python3.9/site-packages/twisted/protocols/tls.py", line 602, in registerProducer
    self.transport.registerProducer(producer, True)
  File "/usr/lib/python3.9/site-packages/twisted/internet/_newtls.py", line 233, in registerProducer
    FileDescriptor.registerProducer(self, producer, streaming)
  File "/usr/lib/python3.9/site-packages/twisted/internet/abstract.py", line 104, in registerProducer
    raise RuntimeError(
builtins.RuntimeError: Cannot register producer <twisted.protocols.tls._ProducerMembrane object at 0x7fb5799f0910>, because producer <twisted.protocols.tls._ProducerMembrane object at 0x7fb579b474c0> was never unregistered.
Here are my classes inherited from Twisted:
from twisted.internet import reactor
from twisted.internet import ssl
from twisted.protocols import portforward
from twisted.internet import protocol
from twisted.python import log
import sys
##SSLProxy base class that will be inherited
class SSLProxy(protocol.Protocol):
noisy = True
peer = None
def setPeer(self, peer):
#log.msg("SSLProxy.setPeer")
self.peer = peer
def connectionLost(self, reason):
#log.msg("SSLProxy.connectionLost")
if self.peer is not None:
self.peer.transport.loseConnection()
self.peer = None
elif self.noisy:
log.msg("Unable to connect to peer: {}".format(reason))
def dataReceived(self, data):
#log.msg("SSLProxy.dataReceived")
if self.peer is not None:
self.peer.transport.write(data)
## Forward data from Proxy to Remote Server
class SSLProxyClient(SSLProxy):
def connectionMade(self):
#log.msg("SSLProxyClient.connectionMade")
self.peer.setPeer(self)
self.transport.registerProducer(self.peer.transport, True)
self.peer.transport.registerProducer(self.transport, True)
# We're connected, everybody can read to their hearts content.
self.peer.transport.resumeProducing()
class SSLProxyClientFactory(protocol.ClientFactory):
protocol = SSLProxyClient
def setServer(self, server):
#log.msg("SSLProxyClientFactory.setServer")
self.server = server
def buildProtocol(self, *args, **kw):
#log.msg("SSLProxyClientFactory.buildProtocol")
prot = protocol.ClientFactory.buildProtocol(self, *args, **kw)
prot.setPeer(self.server)
return prot
def clientConnectionFailed(self, connector, reason):
#log.msg("SSLProxyClientFactory.clientConnectionFailed")
self.server.transport.loseConnection()
class SSLProxyServer(SSLProxy):
clientProtocolFactory = SSLProxyClientFactory
reactor = None
def connectionMade(self):
log.msg("SSLProxyServer.connectionMade")
#Get Current SSL Context
ssl_context = self.transport._tlsConnection.get_context()
#Hack to get SNI to do two functions in different classes
ssl_context._finishSNI = self.SNICallback
def SNICallback(self, connection):
#log.msg("SSLProxyServer.SNICallback: {}".format(connection))
#print(connection.get_context().new_host)
self.transport.pauseProducing()
#self.transport.transport.pauseProducing()
#print(dir())
self.dst_host, self.dst_port = connection.get_context().new_host
#Setup Clients
self.client = self.clientProtocolFactory()
self.client.setServer(self)
#Start stuff
log.msg('Redirecting to {}:{}'.format(self.dst_host, self.dst_port))
if self.reactor is None:
self.reactor = reactor
log.msg("Making Connection to Dest Server: {}:{}".format(self.dst_host, self.dst_port))
self.reactor.connectSSL(self.dst_host, self.dst_port, self.client, ssl.ClientContextFactory())
#self.transport.resumeProducing()
#Client -> Proxy
def dataReceived(self, data):
log.msg("SSLProxyServer.dataReceived: {}".format(data))
#Call Inherited Function
super().dataReceived(data)
class SSLProxyFactory(protocol.Factory):
"""Factory for port forwarder."""
protocol = SSLProxyServer
def __init__(self):
super().__init__()
#log.msg("SSLProxyFactory.__init__")
def sslToSSL(localport, remotehost, remoteport, serverContextFactory):
log.msg("SSL on localhost:{} forwarding to SSL {}:{}".format(localport, remotehost, remoteport))
return reactor.listenSSL(localport, SSLProxyFactory(), serverContextFactory)
Any guidance would be appreciated.
I found that if you just unregister the current producer, you can register the new one.
## Forward data from Proxy to Remote Server
class SSLProxyClient(SSLProxy):
    def connectionMade(self):
        #log.msg("SSLProxyClient.connectionMade")
        self.peer.setPeer(self)
        self.transport.registerProducer(self.peer.transport, True)
        self.peer.transport.unregisterProducer()
        self.peer.transport.registerProducer(self.transport, True)
        # We're connected, everybody can read to their hearts content.
        self.peer.transport.resumeProducing()

Create linux daemon which uses Multiprocessing and Multiprocessing.Queues

I have a task to listen for UDP datagrams, decode them (the datagrams carry binary information), put the decoded information into a dictionary, dump the dictionary to a JSON string, and then send the JSON string to a remote server (ActiveMQ).
Both decoding and sending to the remote server can be time consuming. To make the program more scalable we create two processes (multiprocessing.Process):
Listener (listens for datagrams, analyzes them, creates JSON and puts it into a multiprocessing.Queue)
Sender (constantly tries to get JSON strings from the queue into an array; if the length of the array goes above a threshold, it sends all collected strings to the remote server)
Now I need to turn this into a proper Linux daemon (which can be started, stopped, and restarted via the service command).
The question: how do I make a daemon from a Python multiprocessing program? I have found no guide about this. Does anybody know how to do this, or have a working example?
The following is my attempt to achieve this.
I found a small example of a Python daemon: http://www.gavinj.net/2012/06/building-python-daemon-process.html
so I rewrote my code (sorry for the big code dump):
import socket
import time
import os
from select import select
import multiprocessing
from multiprocessing import Process, Queue, Value
import stomp
import json
import logging
logger = logging.getLogger("DaemonLog")
logger.setLevel(logging.INFO)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler = logging.FileHandler("/var/log/testdaemon/testdaemon.log")
handler.setFormatter(formatter)
logger.addHandler(handler)
log = logger
#Config listner
domain = 'example.host.ru'
port = int(9930)
#Config remote queue access
queue_cfg = {
'host': 'queue.test.ru',
'port': 61113,
'user': 'user',
'password': 'pass',
'queue': '/topic/test.queue'
}
class UDPListener():
def __init__(self, domain, port, queue_cfg):
# If I initialize socket during init I see strange error:
# on the line: data, addr = sock_inst.recvfrom(int(10000))
# error: [Errno 88] Socket operation on non-socket
# So I put initialization to runListner function
#self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#self.sock.bind((domain, port))
self.domain = domain
self.port = port
self.remote_queue_cfg = queue_cfg
self.queue = Queue()
self.isWorking = Value('b', True)
self.decoder = Decoder()
self.reactor = ParallelQueueReactor(self.queue)
self.stdin_path = '/dev/null'
self.stdout_path = '/dev/tty'
self.stderr_path = '/dev/tty'
self.pidfile_path = '/var/run/testdaemon/testdaemon.pid'
self.pidfile_timeout = 5
def __assignData(self, addr, data):
receive_time = time.time()
messages = self.decoder.decode(receive_time, addr, data)
for msg in messages:
self.reactor.addMessage(msg)
def runListner(self):
self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
self.sock.bind((domain, port))
while self.isWorking.value:
inputready, outputready, exceptready = select([self.sock], [], [])
for sock_inst in inputready:
if sock_inst == self.sock:
data, addr = sock_inst.recvfrom(int(10000))
if data:
self.__assignData(addr[0], data)
self.sock.close()
def runQueueDispatcher(self):
while self.isWorking.value:
connected = False
while not connected:
try:
conn = stomp.Connection(host_and_ports=[(self.remote_queue_cfg['host'], self.remote_queue_cfg['port'])])
conn.start()
conn.connect(self.remote_queue_cfg['user'], self.remote_queue_cfg['password'], wait=True)
connected = True
except socket.error:
log.error('Could not connect to activemq server.')
time.sleep(20)
if connected == True:
while self.isWorking.value:
msg = None
if not self.queue.empty():
#Now error appear hear even when not self.queue.empty()
msg = self.queue.get()
else:
time.sleep(1)
if msg is not None:
try:
data = json.dumps(msg)
conn.send(body=data, destination=self.remote_queue_cfg['queue'])
count += 1
except:
log.error('Failed to send message to queue.')
time.sleep(1)
def stop(self):
self.isWorking.value = False
def run(self):
log.error('StartProcesses')
dispatcher_process = Process(target=self.runQueueDispatcher, name='Dispatcher')
listner_process = Process(target=self.runListner, name='Listner')
dispatcher_process.start()
listner_process.start()
dispatcher_process.join()
listner_process.join()
log.info('Finished')
#------------------------------------------------------------------
def main():
from daemon import runner
app = UDPListener(domain, port, queue_cfg)
daemon_runner = runner.DaemonRunner(app)
daemon_runner.daemon_context.files_preserve=[handler.stream]
daemon_runner.do_action()
if __name__ == "__main__":
main()
Now I see an error on msg = self.queue.get():
Traceback (most recent call last):
  File "/usr/lib64/python2.6/multiprocessing/process.py", line 232, in _bootstrap
    self.run()
  File "/usr/lib64/python2.6/multiprocessing/process.py", line 88, in run
    self._target(*self._args, **self._kwargs)
  File "/root/ipelevan/dream/src/parallel_main.py", line 116, in runQueueDispatcher
    msg = self.queue.get()
  File "/usr/lib64/python2.6/multiprocessing/queues.py", line 91, in get
    res = self._recv()
EOFError
I did not see these errors when running UDPListener.run() manually. But with the daemon runner it looks like new instances of UDPListener are created underneath, so different processes end up with different Queues (and a different self.socket too, when it is initialized in __init__).
First of all: it was a bad idea to keep references to the shared objects (Queue, Value) as class members for the processes to use. It worked, somehow, without daemonization, but when the same code was run in a DaemonContext the os.fork() happened and messed up the references to those objects. I am not quite sure the multiprocessing module was designed to work 100% correctly inside a method of an object.
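One way to follow that first point is to create the shared objects only after the daemon has forked, inside run(), and hand them to the worker processes explicitly. A hedged sketch (it reuses the class and process names from the code above, and assumes the worker methods are changed to take the queue and flag as parameters instead of reading them from self):
from multiprocessing import Process, Queue, Value

class UDPListener(object):
    # __init__ keeps only plain configuration (host, port, pidfile paths);
    # no Queue or Value is created before daemonization.

    def runQueueDispatcher(self, queue, is_working):
        # ... same sending loop as above, reading from the passed-in queue ...
        pass

    def runListner(self, queue, is_working):
        # ... same select()/recvfrom() loop as above, writing to queue ...
        pass

    def run(self):
        queue = Queue()                 # created post-fork, inside the daemon
        is_working = Value('b', True)
        dispatcher = Process(target=self.runQueueDispatcher,
                             args=(queue, is_working), name='Dispatcher')
        listener = Process(target=self.runListner,
                           args=(queue, is_working), name='Listner')
        dispatcher.start()
        listener.start()
        dispatcher.join()
        listener.join()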
Second: DaemonContext helps to detach the process from the shell, redirect streams, and do several other daemon-related things, but I have not found any good way to check whether such a daemon is already running. So I just used:
if os.path.isfile(pidfile_path):
    print 'pidfile %s exists. Already running?' % pidfile_path
    sys.exit(1)

How do I run pyzmq and a webserver in one ioloop?

I want to write a single-threaded program that hosts a webserver using Tornado and also receives messages on a ZMQ socket (using the PyZMQ Tornado event loop: http://learning-0mq-with-pyzmq.readthedocs.org/en/latest/pyzmq/multisocket/tornadoeventloop.html), but I'm not sure how to structure it. Should I be using
from zmq.eventloop import ioloop
or
from tornado.ioloop import IOLoop
or both?
Before any Tornado imports you need to import zmq.eventloop.ioloop and call the zmq.eventloop.ioloop.install() function. Then you can import the Tornado ioloop and use it.
See:
http://zeromq.github.io/pyzmq/eventloop.html
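In other words, the install step has to run before Tornado creates its IOLoop. A minimal sketch of the ordering (handler and stream setup omitted):
from zmq.eventloop import ioloop
ioloop.install()            # must run before any Tornado IOLoop is created

import tornado.web
from tornado import httpserver
from zmq.eventloop.zmqstream import ZMQStream

# ... build your ZMQStream subscriptions, handlers and HTTPServer here ...

ioloop.IOLoop.instance().start()   # one shared loop drives both zmq and HTTP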
Here is an example with Tornado HTTP server with zeroMQ PUB SUB sockets.
#!/usr/bin/env python
import json
import tornado
import tornado.web
import zmq
from tornado import httpserver
from zmq.eventloop import ioloop
from zmq.eventloop.zmqstream import ZMQStream
ioloop.install()
tornado.ioloop = ioloop
import sys
def ping_remote():
"""callback to keep the connection with remote server alive while we wait
Network routers between raspberry pie and cloud server will close the socket
if there is no data exchanged for long time.
"""
pub_inst.send_json_data(msg="Ping", req_id="##")
sys.stdout.write('.')
sys.stdout.flush()
pending_requests = {}
class ZMQSub(object):
def __init__(self, callback):
self.callback = callback
context = zmq.Context()
socket = context.socket(zmq.SUB)
# socket.connect('tcp://127.0.0.1:5559')
socket.bind('tcp://*:8081')
self.stream = ZMQStream(socket)
self.stream.on_recv(self.callback)
socket.setsockopt(zmq.SUBSCRIBE, "")
def shutdown_zmq_sub(self):
self.stream.close()
class ZMQPub(object):
def __init__(self):
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind('tcp://*:8082')
self.publish_stream = ZMQStream(socket)
def send_json_data(self, msg, req_id):
topic = str(req_id)
self.publish_stream.send_multipart([topic, msg])
def shutdown_zmq_sub(self):
self.publish_stream.close()
def SensorCb(msg):
# decode message from raspberry pie and the channel ID.
key, msg = (i for i in msg)
if not key == "##":
msg = json.loads(msg)
if key in pending_requests.keys():
req_inst = pending_requests[key]
req_inst.write(msg)
req_inst.finish()
del pending_requests[key]
else:
print "no such request"
print pending_requests
else:
print "received ping"
class Handler(tornado.web.RequestHandler):
def __init__(self, *args, **kwargs):
super(Handler, self).__init__(*args, **kwargs)
# get the unique req id
self.req_id = str(self.application.req_id) + "#"
self.application.req_id += 1
# set headers
self.set_header("Access-Control-Allow-Origin", "*")
self.set_header("Access-Control-Allow-Headers", "x-requested-with")
self.set_header('Access-Control-Allow-Methods', 'POST, GET, OPTIONS, PUT')
@tornado.web.asynchronous
def get(self):
print self.request
if self.req_id not in pending_requests.keys():
pending_requests[self.req_id] = self
else:
print "WTF"
pub_inst.send_json_data(msg=json.dumps({"op": "ServiceCall"}), req_id=self.req_id)
if __name__ == "__main__":
pub_inst = ZMQPub()
sub_inst = ZMQSub(callback=SensorCb)
application = tornado.web.Application(
[(r'/get_sensor_data', Handler), (r'/(.*)')])
application.req_id = 0
server = httpserver.HTTPServer(application, )
port = 8080
server.listen(port)
print "Sensor server ready on port: ", port
ping = ioloop.PeriodicCallback(ping_remote, 3000)
ping.start()
tornado.ioloop.IOLoop.instance().start()

issues with socket programming - python

I am doing a client-server project for my college project;
we have to allocate the login to the client.
The client system will request its status every 2 seconds (to check whether the client is locked or unlocked), and the server will accept the client's request and reply with the client's status.
But the problem is that the server thread is not responding to the client's requests.
CLIENT THREAD:
def checkPort():
    while True:
        try:
            s = socket.socket()
            s.connect((host, port))
            s.send('pc1')            # send PC name to the server
            status = s.recv(1024)    # receive the status from the server
            if status == "unlock":
                disableIntrrupts()   # enable all the functions of system
            else:
                enableInterrupts()   # enable all the functions of system
            time.sleep(5)
            s.close()
        except Exception:
            pass
SERVER THREAD:
def check_port():
    while True:
        try:
            print "hello loop is repeating"
            conn, addr = s.accept()
            data = conn.recv(1024)
            if exit_on_click == 1:
                break
            if (any(sublist[0] == data for sublist in available_sys)):
                print "locked"
                conn.send("lock")
            elif (any(sublist[0] == data for sublist in occupied_sys)):
                conn.send("unlock")
                print "unlocked"
            else:
                print "added to gui for first time"
                available_sys.append([data,addr[0],nameText,usnText,branchText])
                availSysList.insert('end',data)
        except Exception:
            pass
But my problem is that the server thread does not execute more than twice,
so it's unable to accept a client request more than once.
Can't we handle multiple client sockets using a single server socket?
How do I handle multiple client requests on the server?
Thanks for any help!
It's because your server will block waiting for a new connection on this line:
conn, addr = s.accept()
This is because calls like .accept() and .recv() are blocking calls that hold up the process.
You need to consider an alternative design, in which you either:
Have one process per connection (this idea is stupid)
Have one thread per connection (this idea is less stupid than the first but still mostly foolish)
Have a non-blocking design that allows multiple clients and read/write without blocking execution (see the sketch just below)
To achieve the first, look at multiprocessing; the second is threading; the third is slightly more complicated to get your head around but will yield the best results. The go-to library for event-driven code in Python is twisted, but there are others like
gevent
tulip
tornado
and so many more that I haven't listed here.
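For completeness, here is a minimal select()-based sketch of the third option (the port and the 'unlock' reply are placeholders; the real lookup against available_sys/occupied_sys would go where the last comment is):
import select, socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('', 9999))
server.listen(5)
sockets = [server]

while True:
    readable, _, _ = select.select(sockets, [], [])
    for sock in readable:
        if sock is server:
            conn, addr = server.accept()       # new client, start tracking it
            sockets.append(conn)
        else:
            data = sock.recv(1024)
            if not data:                       # client went away
                sockets.remove(sock)
                sock.close()
            else:
                sock.send('unlock')            # decide lock/unlock from data here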
Here's a full example of implementing a threaded server. It's fully functional and comes with the benefit of using SSL as well. Further, I use threaded event objects to signal another class object after storing my received data in a database.
Please note: _sni and _cams_db are additional modules purely of my own. If you want to see the _sni module (it provides SNI support for pyOpenSSL), let me know.
What follows is a snippet from camsbot.py; there's a whole lot more that far exceeds the scope of this question. What I've built is a centralized message relay system. It listens on tcp/2345 and accepts SSL connections. Each connection passes messages into the system. Short-lived connections connect, pass a message, and disconnect. Long-lived connections pass numerous messages after connecting. Messages are stored in a database, and a threading.Event() object (attached to the DB class) is set to tell the bot to poll the database for new messages and relay them.
The example below shows
how to set up a threaded TCP server
how to pass information from the listener to the accept handler, such as config data
In addition, this example also shows
how to employ an SSL socket
how to do some basic certificate validations
how to cleanly wrap and unwrap SSL from a TCP socket
how to use poll() on the socket instead of select()
db.pending is a threading.Event() object in _cams_db.py.
In the main process we start another thread that waits on the pending object with db.pending.wait(). This makes that thread wait until another thread calls db.pending.set(). Once it is set, our waiting thread immediately wakes up and continues to work. When our waiting thread is done, it calls db.pending.clear() and goes back to the beginning of the loop, waiting again with db.pending.wait():
while True:
    db.pending.wait()
    # after waking up, do code. for example, we wait for incoming messages to
    # be stored in the database. the threaded server will call db.pending.set()
    # which will wake us up. we'll poll the DB for new messages, relay them, clear
    # our event flag and go back to waiting.
    # ...
    db.pending.clear()
snippet from camsbot.py:
import sys, os, sys, time, datetime, threading, select, logging, logging.handlers
import configparser, traceback, re, socket, hashlib

# local .py
sys.path.append('/var/vse/python')
import _util, _webby, _sni, _cams_db, _cams_threaded_server, _cams_bot

# ...

def start_courier(config):
    # default values
    host = '::'
    port = 2345
    configp = config['configp']
    host = configp.get('main', 'relay msp hostport')
    # require ipv6 addresses be specified in [xx:xx:xx] notation, therefore
    # it is safe to look for :nnnn at the end
    if ':' in host and not host.endswith(']'):
        port = host.split(':')[-1]
        try:
            port = int(port, 10)
        except:
            port = 2345
        host = host.split(':')[:-1][0]
    server = _cams_threaded_server.ThreadedTCPServer((host, port), _cams_threaded_server.ThreadedTCPRequestHandler, config)
    t = threading.Thread(target=server.serve_forever, name='courier')
    t.start()
_cams_threaded_server.py:
import socket, socketserver, select, datetime, time, threading
import sys, struct
from OpenSSL.SSL import SSLv23_METHOD, SSLv3_METHOD, TLSv1_METHOD, OP_NO_SSLv2
from OpenSSL.SSL import VERIFY_NONE, VERIFY_PEER, VERIFY_FAIL_IF_NO_PEER_CERT, Context, Connection
from OpenSSL.SSL import FILETYPE_PEM
from OpenSSL.SSL import WantWriteError, WantReadError, WantX509LookupError, ZeroReturnError, SysCallError
from OpenSSL.crypto import load_certificate
from OpenSSL import SSL
# see note at beginning of answer
import _sni, _cams_db
class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
def __init__(self, server_address, HandlerClass, config):
socketserver.BaseServer.__init__(self, server_address, HandlerClass)
self.address_family = socket.AF_INET6
self.connected = []
self.logger = config['logger']
self.config = config
self.socket = socket.socket(self.address_family, self.socket_type)
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sc = Context(TLSv1_METHOD)
sc.set_verify(VERIFY_PEER|VERIFY_FAIL_IF_NO_PEER_CERT, _sni.verify_cb)
sc.set_tlsext_servername_callback(_sni.pick_certificate)
self.sc = sc
self.server_bind()
self.server_activate()
class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
def handle(self):
config = self.server.config
logger = self.server.logger
connected = self.server.connected
sc = self.server.sc
try:
self.peer_hostname = socket.gethostbyaddr(socket.gethostbyname(self.request.getpeername()[0]))[0]
except:
self.peer_hostname = '!'+self.request.getpeername()[0]
logger.info('peer: {}'.format(self.peer_hostname))
ssl_s = Connection(sc, self.request)
ssl_s.set_accept_state()
try:
ssl_s.do_handshake()
except:
t,v,tb = sys.exc_info()
logger.warn('handshake failed {}'.format(v))
ssl_s.setblocking(True)
self.ssl_s = ssl_s
try:
peercert = ssl_s.get_peer_certificate()
except:
peercert = False
t,v,tb = sys.exc_info()
logger.warn('SSL get peer cert failed: {}'.format(v))
if not peercert:
logger.warn('No peer certificate')
else:
acl = config['configp']['main'].get('client cn acl', '').split(' ')
cert_subject = peercert.get_subject().CN
logger.info('Looking for {} in acl: {}'.format(cert_subject,acl))
if cert_subject in acl:
logger.info('{} is permitted'.format(cert_subject))
else:
logger.warn('''client CN not approved''')
# it's ok to block here, every socket has its own thread
ssl_s.setblocking(True)
self.db = config['db']
msgcount = 0
p = select.poll()
# don't want writable, just readable
p.register(self.request, select.POLLIN|select.POLLPRI|select.POLLERR|select.POLLHUP|select.POLLNVAL)
peername = ssl_s.getpeername()
x = peername[0]
if x.startswith('::ffff:'):
x = x[7:]
peer_ip = x
try:
host = socket.gethostbyaddr(x)[0]
except:
host = peer_ip
logger.info('{}/{}:{} connected'.format(host, peer_ip, peername[1]))
connected.append( [host, peername[1]] )
if peercert:
threading.current_thread().setName('{}/port={}/CN={}'.format(host, peername[1], peercert.get_subject().CN))
else:
threading.current_thread().setName('{}/port={}'.format(host, peername[1]))
sockclosed = False
while not sockclosed:
keepreading = True
#logger.debug('starting 30 second timeout for poll')
pe = p.poll(30.0)
if not pe:
# empty list means poll timeout
# for SSL sockets it means WTF. we get an EAGAIN like return even if the socket is blocking
continue
logger.debug('poll indicates: {}'.format(pe))
#define SSL_NOTHING 1
#define SSL_WRITING 2
#define SSL_READING 3
#define SSL_X509_LOOKUP 4
while keepreading and not sockclosed:
data,sockclosed,keepreading = self._read_ssl_data(2, head=True)
if sockclosed or not keepreading:
time.sleep(5)
continue
plen = struct.unpack('H', data)[0]
data,sockclosed,keepreading = self._read_ssl_data(plen)
if sockclosed or not keepreading:
time.sleep(5)
continue
# send thank you, ignore any errors since we appear to have gotten
# the message
try:
self.ssl_s.sendall(b'ty')
except:
pass
# extract the timestamp
message_ts = data[0:8]
msgtype = chr(data[8])
message = data[9:].decode()
message_ts = struct.unpack('d', message_ts)[0]
message_ts = datetime.datetime.utcfromtimestamp(message_ts).replace(tzinfo=datetime.timezone.utc)
self.db.enqueue(config['group'], peer_ip, msgtype, message, message_ts)
self.db.pending.set()
# we're recommended to use the return socket object for any future operations rather than the original
try:
s = ssl_s.unwrap()
s.close()
except:
pass
connected.remove( [host, peername[1]] )
t_name = threading.current_thread().getName()
logger.debug('disconnect: {}'.format(t_name))
def _read_ssl_data(self, wantsize=16384, head=False):
_w = ['WANT_NOTHING','WANT_READ','WANT_WRITE','WANT_X509_LOOKUP']
logger = self.server.logger
data = b''
sockclosed = False
keepreading = True
while len(data) < wantsize and keepreading and not sockclosed:
rlen = wantsize - len(data)
try:
w,wr = self.ssl_s.want(),self.ssl_s.want_read()
#logger.debug(' want({}) want_read({})'.format(_w[w],wr))
x = self.ssl_s.recv(rlen)
#logger.debug(' recv(): {}'.format(x))
if not ( x or len(x) ):
raise ZeroReturnError
data += x
if not (len(x) == len(data) == wantsize):
logger.info(' read={}, len(data)={}, plen={}'.format(len(x),len(data),wantsize))
except WantReadError:
# poll(), when ready, read more
keepreading = False
logger.info(' got WantReadError')
continue
except WantWriteError:
# poll(), when ready, write more
keepreading = False
logger.info(' got WantWriteError')
continue
except ZeroReturnError:
# socket got closed, a '0' bytes read also means the same thing
keepreading = False
sockclosed = True
logger.info(' ZRE, socket closed normally')
continue
except SysCallError:
keepreading = False
sockclosed = True
t,v,tb = sys.exc_info()
if v.args[0] == -1: # normal EOF
logger.info(' EOF found, keepreading=False')
else:
logger.info('{} terminated session abruptly while reading plen'.format(self.peer_hostname))
logger.info('t: {}'.format(t))
logger.info('v: {}'.format(v))
continue
except:
t,v,tb = sys.exc_info()
logger.warning(' fucked? {}'.format(v))
raise
if not head and not len(data) == wantsize:
logger.warn(' short read {} of {}'.format(len(data), wantsize))
return data,sockclosed,keepreading
Let's start with a bare-bones threaded TCP server.
class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    def __init__(self, server_address, HandlerClass):
        socketserver.BaseServer.__init__(self, server_address, HandlerClass)
        self.address_family = socket.AF_INET
        self.socket = socket.socket(self.address_family, self.socket_type)
        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.server_bind()
        self.server_activate()

class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # self.request is your accepted socket; do all your .recv() and .send() on it
        s = self.request
        request = s.recv(1024)
        # decide locked or unlocked. this example arbitrarily writes back 'locked'
        s.send('locked')
        # we're done, close the socket and exit with a default return of None
        s.close()
OK, start your threaded server with this in your main() function:
server = ThreadedTCPServer(('127.0.0.1', 1234), ThreadedTCPRequestHandler)
t = threading.Thread(target=server.serve_forever, name='optional_name')
t.start()
Now you can let the threading module handle the semantics of concurrency and not worry about it.
You might want to take a look at 0MQ and concurrent.futures. 0MQ has a Tornado event loop in the library, and it reduces the complexity of socket programming. concurrent.futures is a high-level interface over threading or multiprocessing.
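As a hypothetical illustration of the concurrent.futures idea (the handler, port, and reply are made up; it needs the futures backport on Python 2, or Python 3.2+ out of the box):
import socket
from concurrent.futures import ThreadPoolExecutor

def handle(conn, addr):
    data = conn.recv(1024)
    conn.send('unlock')            # decide lock/unlock from data here
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('', 9999))
server.listen(5)
with ThreadPoolExecutor(max_workers=10) as pool:
    while True:
        conn, addr = server.accept()
        pool.submit(handle, conn, addr)    # each client runs on a pool thread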
You can see different concurrent server approaches at
https://bitbucket.org/arco_group/upper/src
These will help you choose the approach that suits you best.
Cheers

Python + Twisted + FtpClient + SOCKS

I just started using Twisted. I want to connect to an FTP server and perform some basic operations (using threading if possible). I am using this example,
which does the job quite well. The question is how to add SOCKS4/5 proxy usage to the code. Can somebody please provide a working example? I have tried this link too,
but:
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
An example of using the FTP client
"""
# Twisted imports
from twisted.protocols.ftp import FTPClient, FTPFileListProtocol
from twisted.internet.protocol import Protocol, ClientCreator
from twisted.python import usage
from twisted.internet import reactor, endpoints
# Socks support test
from socksclient import SOCKSv4ClientProtocol, SOCKSWrapper
from twisted.web import client
# Standard library imports
import string
import sys
try:
from cStringIO import StringIO
except ImportError:
from StringIO import StringIO
class BufferingProtocol(Protocol):
"""Simple utility class that holds all data written to it in a buffer."""
def __init__(self):
self.buffer = StringIO()
def dataReceived(self, data):
self.buffer.write(data)
# Define some callbacks
def success(response):
print 'Success! Got response:'
print '---'
if response is None:
print None
else:
print string.join(response, '\n')
print '---'
def fail(error):
print 'Failed. Error was:'
print error
def showFiles(result, fileListProtocol):
print 'Processed file listing:'
for file in fileListProtocol.files:
print ' %s: %d bytes, %s' \
% (file['filename'], file['size'], file['date'])
print 'Total: %d files' % (len(fileListProtocol.files))
def showBuffer(result, bufferProtocol):
print 'Got data:'
print bufferProtocol.buffer.getvalue()
class Options(usage.Options):
optParameters = [['host', 'h', 'example.com'],
['port', 'p', 21],
['username', 'u', 'webmaster'],
['password', None, 'justapass'],
['passive', None, 0],
['debug', 'd', 1],
]
# Socks support
def wrappercb(proxy):
print "connected to proxy", proxy
pass
def run():
def sockswrapper(proxy, url):
dest = client._parse(url) # scheme, host, port, path
endpoint = endpoints.TCP4ClientEndpoint(reactor, dest[1], dest[2])
return SOCKSWrapper(reactor, proxy[1], proxy[2], endpoint)
# Get config
config = Options()
config.parseOptions()
config.opts['port'] = int(config.opts['port'])
config.opts['passive'] = int(config.opts['passive'])
config.opts['debug'] = int(config.opts['debug'])
# Create the client
FTPClient.debug = config.opts['debug']
creator = ClientCreator(reactor, FTPClient, config.opts['username'],
config.opts['password'], passive=config.opts['passive'])
#creator.connectTCP(config.opts['host'], config.opts['port']).addCallback(connectionMade).addErrback(connectionFailed)
# Socks support
proxy = (None, '1.1.1.1', 1111, True, None, None)
sw = sockswrapper(proxy, "ftp://example.com")
d = sw.connect(creator)
d.addCallback(wrappercb)
reactor.run()
def connectionFailed(f):
print "Connection Failed:", f
reactor.stop()
def connectionMade(ftpClient):
# Get the current working directory
ftpClient.pwd().addCallbacks(success, fail)
# Get a detailed listing of the current directory
fileList = FTPFileListProtocol()
d = ftpClient.list('.', fileList)
d.addCallbacks(showFiles, fail, callbackArgs=(fileList,))
# Change to the parent directory
ftpClient.cdup().addCallbacks(success, fail)
# Create a buffer
proto = BufferingProtocol()
# Get short listing of current directory, and quit when done
d = ftpClient.nlst('.', proto)
d.addCallbacks(showBuffer, fail, callbackArgs=(proto,))
d.addCallback(lambda result: reactor.stop())
# this only runs if the module was *not* imported
if __name__ == '__main__':
run()
I know the code is wrong; I need a solution.
Okay, so here's a solution (gist) that uses Python's built-in ftplib, as well as the open-source SocksiPy module.
It doesn't use Twisted, and it doesn't explicitly use threads, but using and communicating between threads is easily done with threading.Thread and Queue.Queue from Python's standard library.
Basically, we need to subclass ftplib.FTP to substitute our own create_connection method and add proxy-configuration semantics.
The "main" logic just configures an FTP client that connects via a localhost SOCKS proxy, such as one created by ssh -D localhost:1080 socksproxy.example.com, and downloads a source snapshot of GNU autoconf to the local disk.
import ftplib
import socket
import socks # socksipy (https://github.com/mikedougherty/SocksiPy)
class FTP(ftplib.FTP):
def __init__(self, host='', user='', passwd='', acct='',
timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
proxyconfig=None):
"""Like ftplib.FTP constructor, but with an added `proxyconfig` kwarg
`proxyconfig` should be a dictionary that may contain the following
keys:
proxytype - The type of the proxy to be used. Three types
are supported: PROXY_TYPE_SOCKS4 (including socks4a),
PROXY_TYPE_SOCKS5 and PROXY_TYPE_HTTP
addr - The address of the server (IP or DNS).
port - The port of the server. Defaults to 1080 for SOCKS
servers and 8080 for HTTP proxy servers.
rdns - Should DNS queries be performed on the remote side
(rather than the local side). The default is True.
Note: This has no effect with SOCKS4 servers.
username - Username to authenticate with to the server.
The default is no authentication.
password - Password to authenticate with to the server.
Only relevant when username is also provided.
"""
self.proxyconfig = proxyconfig or {}
ftplib.FTP.__init__(self, host, user, passwd, acct, timeout)
def connect(self, host='', port=0, timeout=-999):
'''Connect to host. Arguments are:
- host: hostname to connect to (string, default previous host)
- port: port to connect to (integer, default previous port)
'''
if host != '':
self.host = host
if port > 0:
self.port = port
if timeout != -999:
self.timeout = timeout
self.sock = self.create_connection(self.host, self.port)
self.af = self.sock.family
self.file = self.sock.makefile('rb')
self.welcome = self.getresp()
return self.welcome
def create_connection(self, host=None, port=None):
host, port = host or self.host, port or self.port
if self.proxyconfig:
phost, pport = self.proxyconfig['addr'], self.proxyconfig['port']
err = None
for res in socket.getaddrinfo(phost, pport, 0, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socks.socksocket(af, socktype, proto)
sock.setproxy(**self.proxyconfig)
if self.timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(self.timeout)
sock.connect((host, port))
return sock
except socket.error as _:
err = _
if sock is not None:
sock.close()
if err is not None:
raise err
else:
raise socket.error("getaddrinfo returns an empty list")
else:
sock = socket.create_connection((host, port), self.timeout)
return sock
def ntransfercmd(self, cmd, rest=None):
size = None
if self.passiveserver:
host, port = self.makepasv()
conn = self.create_connection(host, port)
try:
if rest is not None:
self.sendcmd("REST %s" % rest)
resp = self.sendcmd(cmd)
# Some servers apparently send a 200 reply to
# a LIST or STOR command, before the 150 reply
# (and way before the 226 reply). This seems to
# be in violation of the protocol (which only allows
# 1xx or error messages for LIST), so we just discard
# this response.
if resp[0] == '2':
resp = self.getresp()
if resp[0] != '1':
raise ftplib.error_reply, resp
except:
conn.close()
raise
else:
raise Exception("Active transfers not supported")
if resp[:3] == '150':
# this is conditional in case we received a 125
size = ftplib.parse150(resp)
return conn, size
if __name__ == '__main__':
ftp = FTP(host='ftp.gnu.org', user='anonymous', passwd='guest',
proxyconfig=dict(proxytype=socks.PROXY_TYPE_SOCKS5, rdns=False,
addr='localhost', port=1080))
with open('autoconf-2.69.tar.xz', mode='w') as f:
ftp.retrbinary("RETR /gnu/autoconf/autoconf-2.69.tar.xz", f.write)
To elaborate why I asked some of my original questions:
1) Do you need to support active transfers or will PASV transfers be sufficient?
Active transfers are much harder to do via a socks proxy because they require the use of the PORT command. With the PORT command, your ftp client tells the FTP server to connect to you on a specific port (e.g., on your PC) in order to send the data. This is likely to not work for users behind a firewall or NAT/router. If your SOCKS proxy server is not behind a firewall, or has a public IP, it is possible to support active transfers, but it is complicated: It requires your SOCKS server (ssh -D does support this) and client library (socksipy does not) to support remote port binding. It also requires the appropriate hooks in the application (my example throws an exception if passiveserver = False) to do a remote BIND instead of a local one.
2) Does it have to use twisted?
Twisted is great, but I'm not the best at it, and I haven't found a really great SOCKS client implementation. Ideally there would be a library out there that allowed you to define and/or chain proxies together, returning an object that implements the IReactorTCP interface, but I have not found anything like that yet.
3) Is your socks proxy behind a VIP or just a single host directly connected to the Internet?
This matters because of the way PASV transfer security works. In a PASV transfer, the client asks the server to provide a port to connect in order to start a data transfer. When the server accepts a connection on that port, it SHOULD verify the client is connected from the same source IP as the connection that requested the transfer. If your SOCKS server is behind a VIP, it is less likely that the outbound IP of the connection made for the PASV transfers will match the outbound IP of the primary communication connection.
