How to close requests.Session()? - python

I am trying to close a requests.Session() but it's not getting closed.
s = requests.Session()
s.verify = 'cert.pem'
res1 = s.get("https://<ip>:<port>/<route>")
s.close()  # Not working
res2 = s.get("https://<ip>:<port>/<route>")  # this still works, which means s.close() didn't do what I expected
How do I close the session? s.close() doesn't throw any error either, so it's valid syntax, but I don't understand what exactly it is doing.

In requests' source code, Session.close only closes all of the underlying Adapters, and closing an Adapter in turn clears its underlying PoolManager. All of the established connections inside that PoolManager are then closed. But a PoolManager will create a fresh connection whenever there is no usable one.
Critical code:
# requests.Session
def close(self):
    """Closes all adapters and as such the session"""
    for v in self.adapters.values():
        v.close()

# requests.adapters.HTTPAdapter
def close(self):
    """Disposes of any internal state.

    Currently, this closes the PoolManager and any active ProxyManager,
    which closes any pooled connections.
    """
    self.poolmanager.clear()
    for proxy in self.proxy_manager.values():
        proxy.clear()
# urllib3.poolmanager.PoolManager
def connection_from_pool_key(self, pool_key, request_context=None):
    """
    Get a :class:`ConnectionPool` based on the provided pool key.

    ``pool_key`` should be a namedtuple that only contains immutable
    objects. At a minimum it must have the ``scheme``, ``host``, and
    ``port`` fields.
    """
    with self.pools.lock:
        # If the scheme, host, or port doesn't match existing open
        # connections, open a new ConnectionPool.
        pool = self.pools.get(pool_key)
        if pool:
            return pool

        # Make a fresh ConnectionPool of the desired type
        scheme = request_context['scheme']
        host = request_context['host']
        port = request_context['port']
        pool = self._new_pool(scheme, host, port, request_context=request_context)
        self.pools[pool_key] = pool

    return pool
So if I understand its structure correctly, when you close a Session you end up with almost the same thing as a brand-new Session assigned to the old name: the pooled connections are closed, but nothing stops the pool from opening new ones. That is why you can still use it to send requests.
Or if I've misunderstood anything, feel free to correct me :D
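To make that concrete, here is a minimal sketch (the URL is a placeholder): if you want deterministic cleanup, use the Session as a context manager and simply don't reuse it afterwards.

import requests

# A Session can be used as a context manager; close() is called on exit.
with requests.Session() as s:
    s.verify = 'cert.pem'
    res1 = s.get('https://example.com/route')

# The session object still exists here. Calling s.get() again would not
# fail; it would transparently open a fresh connection pool, because
# close() empties the pools without invalidating the Session itself.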

Related

How can I implement port forwarding in a Paramiko server?

A "direct-tcpip" request (commonly known as port-forwarding) occurs when you run SSH as ssh user#host -L <local port>:<remote host>:<remote port> and then try to connect over the local port.
I'm trying to implement direct-tcpip on a custom SSH server, and Paramiko offers the check_channel_direct_tcpip_request in the ServerInterface class in order to check if the "direct-tcpip" request should be allowed, which can be implemented as follows:
class Server(paramiko.ServerInterface):
    # ...
    def check_channel_direct_tcpip_request(self, chanid, origin, destination):
        return paramiko.OPEN_SUCCEEDED
However, when I use the aforementioned SSH command, and connect over the local port, nothing happens, probably because I need to implement the connection handling myself.
Reading the documentation, it also appears that the channel is only opened after OPEN_SUCCEEDED has been returned.
How can I handle the direct-tcpip request after returning OPEN_SUCCEEDED for the request?
You do indeed need to set up your own connection handler. This is a lengthy answer explaining the steps I took - some of it you will not need if your server code already works. The running server example in its entirety is here: https://controlc.com/25439153
I used the Paramiko example server code from https://github.com/paramiko/paramiko/blob/master/demos/demo_server.py as a basis and grafted some socket code onto it. It has no error handling, no thread-related niceties, or anything else "proper" for that matter, but it lets you use the port forwarder.
It also contains a lot of things you do not need, as I did not want to start tidying up a dummy example. Apologies for that.
To start with, we need the forwarder tools. The code below creates a thread to run the "tunnel" forwarder. This also answers your question about where you get your channel: you accept() it from the transport, but you need to do that in the forwarder thread. As you stated in your OP, it is not there yet in the check_channel_direct_tcpip_request() function, but it will eventually become available to the thread.
def tunnel(sock, chan, chunk_size=1024):
    while True:
        r, w, x = select.select([sock, chan], [], [])
        if sock in r:
            data = sock.recv(chunk_size)
            if len(data) == 0:
                break
            chan.send(data)
        if chan in r:
            data = chan.recv(chunk_size)
            if len(data) == 0:
                break
            sock.send(data)
    chan.close()
    sock.close()

class ForwardClient(threading.Thread):
    daemon = True

    def __init__(self, address, transport, chanid):
        threading.Thread.__init__(self)
        self.socket = socket.create_connection(address)
        self.transport = transport
        self.chanid = chanid

    def run(self):
        while True:
            chan = self.transport.accept(10)
            if chan is None:
                continue
            print("Got new channel (id: %i)." % chan.get_id())
            if chan.get_id() == self.chanid:
                break
        peer = self.socket.getpeername()
        try:
            tunnel(self.socket, chan)
        except Exception:
            pass
Back to the example server code. Unlike in the demo, your server class needs to take the transport as a constructor parameter:
class Server(paramiko.ServerInterface):
    # 'data' is the output of base64.b64encode(key)
    # (using the "user_rsa_key" files)
    data = (
        b"AAAAB3NzaC1yc2EAAAABIwAAAIEAyO4it3fHlmGZWJaGrfeHOVY7RWO3P9M7hp"
        b"fAu7jJ2d7eothvfeuoRFtJwhUmZDluRdFyhFY/hFAh76PJKGAusIqIQKlkJxMC"
        b"KDqIexkgHAfID/6mqvmnSJf0b5W8v5h2pI/stOSwTQ+pxVhwJ9ctYDhRSlF0iT"
        b"UWT10hcuO4Ks8="
    )
    good_pub_key = paramiko.RSAKey(data=decodebytes(data))

    def __init__(self, transport):
        self.transport = transport
        self.event = threading.Event()
Then you will override the relevant method and create the forwarder there:
def check_channel_direct_tcpip_request(self, chanid, origin, destination):
    print(chanid, origin, destination)
    f = ForwardClient(destination, self.transport, chanid)
    f.start()
    return paramiko.OPEN_SUCCEEDED
You need to pass the transport when creating the server object:
t.add_server_key(host_key)
server = Server(t)
This example server requires an RSA private key named test_rsa.key in the working directory. Any RSA key will do; the forwarding itself does not need it, but I did not bother to strip its use out of the code.
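If you don't have a key handy, one quick way to produce a throwaway one (a sketch using Paramiko's own key generation) is:

import paramiko

# Generate a throwaway 2048-bit RSA key and write it where the demo
# server expects to find its host key.
paramiko.RSAKey.generate(2048).write_private_key_file('test_rsa.key')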
You can then run your server (it listens on port 2200) and issue
ssh -p 2200 -L 2300:www.google.com:80 robey@localhost
(the password is foo)
Now when you try
telnet localhost 2300
and type something there, you will get a response from Google.

How to get IP address and port of newly accepted connection in Python asyncio server?

I'm using the asyncio library in Python 3.8
https://docs.python.org/3/library/asyncio.html
I am creating a server, and in the "newly accepted connection" callback function, I want to find out the remote IP address and port of the new client.
The arguments to the callback function are one instance each of StreamReader and StreamWriter used to read and write from the client. Is there a straightforward way to find the IP address and port of the streams? Note that I want to do this for both SSL and non-SSL connections.
Here I create the server:
async def create_server(self, new_client_cb, host, port):
    srvsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srvsocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srvsocket.bind((host, port))
    srvsocket.listen(5)
    return await asyncio.start_server(new_client_cb, sock=srvsocket, start_serving=False)
I pass in the callback function, which adheres to the documentation and accepts an instance of a StreamReader and a StreamWriter.
Here is said callback function. It's part of a class, hence the leading self argument.
async def _new_client(self, client_r, client_w):
    try:
        self.logger.debug("New client on incoming proxy")
        dests_r = {}
        dests_w = {}
        for addr in self.config['addrlist']:
            host, port = addr.split(':')
            host = socket.gethostbyname(host)
            self.logger.debug(f"Connecting to {addr}...")
            r, w = await self.protocol.open_connection(host, port)
            self.logger.debug(f"Connected to {addr}")
            dests_r[addr] = r
            dests_w[addr] = w
        done, pending = await asyncio.wait(
            [self._tunnel(list(dests_r.values()), [client_w]),
             self._tunnel([client_r], list(dests_w.values()))],
            return_when=asyncio.FIRST_EXCEPTION
        )
        for result in done:
            if result.exception():
                raise result.exception()
    except Exception as e:
        self.logger.error(f"Caught exception: {str(e)}")
        traceback.print_exc()
There's a lot going on in that function related to other aspects of my application.
I think my question ultimately boils down to: how do I find out the remote address and port associated with the new client, given these inputs, the StreamReader and StreamWriter? I'm looking into asyncio's Transport classes: https://docs.python.org/3/library/asyncio-protocol.html
but perhaps others can point me in the right direction.
Regarding asyncio's Transport classes, I can see that they allow you to query "extra" information via the get_extra_info(str) function, e.g.:
client_r._transport.get_extra_info('socket')
Okay, this works for non-encrypted (non-SSL) traffic. But I can't query the socket on an encrypted transport. I can only get the SSL object:
https://docs.python.org/3/library/ssl.html#ssl.SSLObject
This object provides an attribute "server_hostname" which will give me the hostname/IP that was used to connect, so at this point I just need the port.
OK, I was able to figure it out eventually.
I really just needed to pass a different key to get_extra_info: both SSL and non-SSL transports support the "peername" key.
So I modified my code to the following:
client_r._transport.get_extra_info('peername')
client_w._transport.get_extra_info('peername')
A separate issue I ran into is that I was querying the "peername" key after the stream had been closed, so I was getting None back.
More information on get_extra_info can be found in the asyncio documentation.
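For reference, here is a minimal self-contained sketch of the pattern (the host and port below are arbitrary). Note that StreamWriter exposes get_extra_info() publicly, so reaching into the private _transport attribute is not strictly necessary:

import asyncio

async def handle_client(reader, writer):
    # 'peername' works for both SSL and non-SSL transports, as long as
    # the stream has not been closed yet. For IPv4 it is a (host, port)
    # tuple; IPv6 adds flowinfo and scopeid, hence the slice.
    host, port = writer.get_extra_info('peername')[:2]
    print(f"New client from {host}:{port}")
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_client, '127.0.0.1', 8888)
    async with server:
        await server.serve_forever()

asyncio.run(main())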

Python Twisted best way to signal events to a proxy

I will be hosting a service that will be acting somewhat like a proxy for something I am a client to.
So I want my ProxyService (a twisted.protocol server) to take lots of actors (clients). On the server side of things, I need a single global connection (only one connection for all clients) to an ExistingService (code I didn't write and that I'm a client of).
When the ExistingService says something interesting, I need to broadcast it to all actors.
When an actor says something to my ProxyService, I need to check if it looks good to me. If it does, I need to inform the ExistingService.
I think I know how to solve this using global variables, but I'm wondering whether there is a better way to push the messages around.
You have the basic design well established.
It's a basic "man in the middle" approach.
There are many ways to implement it, but this should get you started:
from twisted.internet import endpoints, protocol, reactor

class ProxyClient(protocol.Protocol):
    def connectionMade(self):
        print('[x] proxy connection made to server')
        self.factory.proxy_proto = self

    def connectionLost(self, reason):
        print('[ ] proxy connection to server lost: {0}'.format(reason))
        self.factory.proxy_proto = None

    def dataReceived(self, data):
        print('==> received {0} from server'.format(data))
        print('<== transmitting data to all actors')
        for actor in self.factory.actors:
            actor.transport.write(data)

class Actor(protocol.Protocol):
    def connectionMade(self):
        print('[x] actor connection established')
        self.factory.actors.add(self)

    def connectionLost(self, reason):
        print('[ ] actor disconnected: {0}'.format(reason))
        self.factory.actors.remove(self)

    def dataReceived(self, data):
        print('==> received {0} from actor'.format(data))
        proxy_connection = self.factory.proxy_factory.proxy_proto
        if proxy_connection is not None:
            print('<== transmitting data to server through the proxy')
            proxy_connection.transport.write(data)
        else:
            print('[ ] proxy connection to server has not been established')

def setup_servers():
    PROXY_HOST = '127.0.0.1'
    PROXY_PORT = 9000
    proxy_factory = protocol.ClientFactory()
    proxy_factory.protocol = ProxyClient
    proxy_factory.proxy_proto = None
    proxy_factory.actors = set()
    proxy_client = endpoints.TCP4ClientEndpoint(reactor, port=PROXY_PORT, host=PROXY_HOST)
    proxy_client.connect(proxy_factory)

    ACTOR_HOST = '127.0.0.1'
    ACTOR_PORT = 8000
    actor_factory = protocol.Factory()
    actor_factory.protocol = Actor
    actor_factory.proxy_factory = proxy_factory
    actor_factory.actors = proxy_factory.actors
    actor_server = endpoints.TCP4ServerEndpoint(reactor, port=ACTOR_PORT, interface=ACTOR_HOST)
    actor_server.listen(actor_factory)

def main():
    setup_servers()
    reactor.run()

main()
The core logic that allows the data received from the server to be proxied to the actors is proxy_factory.actors = set() combined with actor_factory.actors = proxy_factory.actors.
Because both factories reference the same set object, it acts as shared state, and this example makes that context available on each connection's factory.
When an actor connects to the server, its Actor protocol is added to the set, and when data is received, each protocol in the set gets a copy of it.
See the respective dataReceived() methods of each protocol object for how that works.
The example above doesn't use global variables at all, but that's not to say that you couldn't use them.
See how far you can get using this method of passing around variables that give context into other objects.
Also, certain situations aren't explicitly handled, such as caching received data in the event the server or actors haven't connected yet; one possible approach is sketched below.
Hopefully there's enough information here for you to determine the best course of action based on your needs.
There's some room for streamlining the syntax to make it shorter as well.
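On the caching point, here is a hedged sketch of one way to buffer actor data while the proxy connection is down and flush it on (re)connect. BufferingProxyClient and the pending deque are hypothetical additions, not part of the example above:

from collections import deque

class BufferingProxyClient(ProxyClient):
    def connectionMade(self):
        ProxyClient.connectionMade(self)
        # Flush anything actors sent while the server link was down.
        while self.factory.pending:
            self.transport.write(self.factory.pending.popleft())

# In setup_servers() you would add:
#     proxy_factory.pending = deque()
#     proxy_factory.protocol = BufferingProxyClient
# and in Actor.dataReceived(), when proxy_connection is None:
#     self.factory.proxy_factory.pending.append(data)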
As a side note, an alternative to global variables is picobox. It's a dependency-injection library, but I've found that it satisfies most of my needs when I require parameters from external sources.

Twisted - How can I tell the reactor to dispose a Protocol object after using adoptStreamConnection in a subprocess?

I'm trying to pass a TCP connection to a Twisted subprocess with adoptStreamConnection, but I can't figure out how to get the Protocol disposed in the main process after doing that.
My desired flow looks like this:
Finish writing any data the Protocol transport has waiting
When we know the write buffer is empty send the AMP message to transfer the socket to the subprocess
Dispose the Protocol instance in the main process
I tried doing nothing, loseConnection, abortConnection, and monkey patching _closeSocket out and using loseConnection. See the code here:
import weakref
from twisted.internet import reactor
from twisted.internet.endpoints import TCP4ServerEndpoint
from twisted.python.sendmsg import getsockfam
from twisted.internet.protocol import Factory, Protocol
import twisted.internet.abstract

class EchoProtocol(Protocol):
    def dataReceived(self, data):
        self.transport.write(data)

class EchoFactory(Factory):
    protocol = EchoProtocol

class TransferProtocol(Protocol):
    def dataReceived(self, data):
        self.transport.write('main process still listening!: %s' % (data))

    def connectionMade(self):
        self.transport.write('this message should make it to the subprocess\n')

        # attempt 1: do nothing
        # everything works fine in the adopt (including receiving the written
        # message), but the old protocol still exists (though isn't doing anything)

        # attempt 2: try calling loseConnection
        # we lose the connection before the adopt opens the socket (presumably a
        # TCP disconnect message was sent)
        #
        # self.transport.loseConnection()

        # attempt 3: try calling abortConnection
        # result is the same as loseConnection
        #
        # self.transport.abortConnection()

        # attempt 4: try monkey patching the socket close out and calling loseConnection
        # result: same as doing nothing -- the adopt works (including receiving the
        # written message), the old protocol still exists
        #
        # def ignored(*args, **kwargs):
        #     print 'ignored :D'
        #
        # self.transport._closeSocket = ignored
        # self.transport.loseConnection()

        reactor.callLater(0, adopt, self.transport.fileno())

class ServerFactory(Factory):
    def buildProtocol(self, addr):
        p = TransferProtocol()
        self.ref = weakref.ref(p)
        return p

f = ServerFactory()

def adopt(fileno):
    print "does old protocol still exist?: %r" % (f.ref())
    reactor.adoptStreamConnection(fileno, getsockfam(fileno), EchoFactory())

port = 1337
endpoint = TCP4ServerEndpoint(reactor, port)
d = endpoint.listen(f)
reactor.run()
In all cases the Protocol object still exists in the main process after the socket has been transferred. How can I clean this up?
Thanks in advance.
Neither loseConnection nor abortConnection tells the reactor to "forget" about a connection; they close the connection, which is very different: they tell the peer that the connection has gone away.
You want to call self.transport.stopReading() and self.transport.stopWriting() to remove the references to it from the reactor.
Also, it's not valid to use a weakref to test for the remaining existence of an object unless you call gc.collect() first.
As far as making sure that all the data has been sent: the only reliable way to do that is to have an application-level acknowledgement of the data that you've sent. This is why protocols that need a handshake that involves changing protocols - say, for example, STARTTLS - have a specific handshake where the initiator says "I'm going to switch" (and then stops sending), then the peer says "OK, you can switch now". Another way to handle that in this case would be to hand the data you'd like to write to the subprocess via some other channel, instead of passing it to transport.write.
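Applied to the question's code, the handoff could look something like the sketch below. It is based on the advice above, not a complete solution (it still doesn't address the application-level acknowledgement question), and handOff is a hypothetical helper name:

def connectionMade(self):
    self.transport.write('this message should make it to the subprocess\n')
    reactor.callLater(0, self.handOff)

def handOff(self):
    # Detach the file descriptor from the reactor without sending a
    # TCP FIN or RST to the peer, then let the new factory adopt it.
    self.transport.stopReading()
    self.transport.stopWriting()
    adopt(self.transport.fileno())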

How can I ensure closing all connection in loadbalancer when something fails or hangs?

I'm trying to write a simple load balancer. It works OK until one of the servers (BalanceServer) doesn't close its connection; then...
The client (ReverseProxy) disconnects, but the connection with the BalanceServer stays open.
I tried adding a callback (#3) in ReverseProxy.connectionLost to close the connection with one of the servers, just as I close the connection when a server disconnects (clientLoseConnection), but at that point the ServerWriter is None and I cannot terminate it at #1 and #2.
How can I ensure that all connections are closed when either side disconnects? I guess some kind of timeout would also be nice for when both the client and one of the servers hang, but how can I add one so it works on both connections?
from twisted.internet.protocol import Protocol, Factory, ClientCreator
from twisted.internet import reactor, defer
from collections import namedtuple

BalanceServer = namedtuple('BalanceServer', 'host port')
SERVER_LIST = [BalanceServer('127.0.0.1', 8000), BalanceServer('127.0.0.1', 8001)]

def getServer(servers):
    while True:
        for server in servers:
            yield server

# this writes to one of the balance servers and responds to the client with the 'clientWrite' callback
class ServerWriter(Protocol):
    def sendData(self, data):
        self.transport.write(data)

    def dataReceived(self, data):
        self.clientWrite(data)

    def connectionLost(self, reason):
        self.clientLoseConnection()

# callback for reading data from the client to send it to the server and get the response back to the client
def transferData(serverWriter, clientWrite, clientLoseConnection, data):
    if serverWriter:
        serverWriter.clientWrite = clientWrite
        serverWriter.clientLoseConnection = clientLoseConnection
        serverWriter.sendData(data)

def closeConnection(serverWriter):
    if serverWriter:  #1 this is None
        #2 so the connection is not closed and hangs there until the BalanceServer closes it
        serverWriter.transport.loseConnection()

# accepts clients
class ReverseProxy(Protocol):
    def connectionMade(self):
        server = self.factory.getServer()
        self.serverWriter = ClientCreator(reactor, ServerWriter)
        self.client = self.serverWriter.connectTCP(server.host, server.port)

    def dataReceived(self, data):
        self.client.addCallback(transferData, self.transport.write,
                                self.transport.loseConnection, data)

    def connectionLost(self, reason):
        self.client.addCallback(closeConnection)  #3 adding close doesn't work

class ReverseProxyFactory(Factory):
    protocol = ReverseProxy

    def __init__(self, serverGenerator):
        self.getServer = serverGenerator

plainFactory = ReverseProxyFactory(getServer(SERVER_LIST).next)
reactor.listenTCP(7777, plainFactory)
reactor.run()
You may want to look at twisted.protocols.portforward for an example of hooking up two connections and then disconnecting them. Or just use txloadbalancer and don't even write your own code.
However, loseConnection will never forcibly terminate the connection if there is no traffic going over it. So if you don't have an application-level ping or any data moving over your connections, they may still never shut down. This is a long-standing bug in Twisted. Actually, the longest-standing bug. Perhaps you'd like to help work on the fix :).
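For comparison, portforward pairs each accepted connection with an outgoing one and tears both down together when either side drops. A minimal single-backend proxy (ports chosen to match the question) looks like this:

from twisted.internet import reactor
from twisted.protocols import portforward

# Every connection accepted on 7777 is paired with a new connection to
# 127.0.0.1:8000; losing either side closes its peer.
reactor.listenTCP(7777, portforward.ProxyFactory('127.0.0.1', 8000))
reactor.run()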
