I'm making a web server on sockets and I want to support persistent connections.
When I type a request to the server (on localhost) into the browser's address bar, I see "Connection: keep-alive" in the headers, but the browser only displays the data I sent after the connection is closed. I even do a "flush" on the connection (in Python you can create a file object from the connection and call "flush" on it). I guess I don't quite understand how sockets should behave with a persistent connection.
Please help me figure this out, with Python code examples if possible. Sorry for my bad English.
I guess I don't quite understand how sockets should behave with a persistent connection
This seems to be the case. A persistent HTTP connection just means that the server may keep the TCP connection open after sending the HTTP response in order to process another HTTP request, and that the client may send another request on the same TCP connection if the server still has it open. Both server and client may decide not to send or receive another request and to close the connection whenever it is idle (i.e. there is no outstanding HTTP response).
Persistent HTTP connections in no way change the semantics of HTTP from a request-response protocol to "anything sockets can do". This means the way you want to use persistence is wrong.
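One detail that likely explains your symptom: with keep-alive the response must say where the body ends, typically with a Content-Length header (or chunked encoding); otherwise the browser can only detect the end of the body when the connection closes, no matter how much you flush. Here is a minimal sketch (not your code, just an illustration) of a raw-socket server that keeps the connection open and sends Content-Length so the browser can render immediately:

import socket

HOST, PORT = "127.0.0.1", 8080

def handle_client(conn):
    while True:
        request = conn.recv(4096)   # simplification: assume the whole request fits in one read
        if not request:             # client closed its side of the connection
            break
        body = b"Hello over a persistent connection\n"
        response = (b"HTTP/1.1 200 OK\r\n"
                    b"Content-Type: text/plain\r\n"
                    b"Content-Length: " + str(len(body)).encode() + b"\r\n"
                    b"Connection: keep-alive\r\n"
                    b"\r\n" + body)
        conn.sendall(response)      # the socket stays open, waiting for the next request

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(5)
while True:
    conn, addr = server.accept()
    handle_client(conn)
    conn.close()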
Related
Why do we need sockets in Python when we have the requests library?
If we want a socket to connect to another server, then what is the requests library for?
requests is a higher-level API for handling HTTP requests (it uses sockets internally). There are dozens of other network protocols it does not cover. Of course, you could handle HTTP by using sockets directly, but unless you have an extremely good reason to do so, you'd just be reinventing the wheel.
requests is a Python HTTP library, whereas sockets are used for sending or receiving data on a computer network. HTTP is an application-layer protocol that specifies how requests and replies between client and server should be made. In socket programming, you make a connection by specifying a destination IP/port and send your data to the remote host.
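To make the difference concrete, here is a rough sketch of the same GET done both ways; the raw-socket version has to spell out the HTTP protocol that requests handles for you (the URL is just an example):

import socket
import requests

# High level: requests builds the HTTP request and parses the response for you.
r = requests.get("http://example.com/")
print(r.status_code, len(r.text))

# Low level: a socket only moves bytes, so the HTTP request is written by hand.
s = socket.create_connection(("example.com", 80))
s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
raw = b""
while True:
    chunk = s.recv(4096)
    if not chunk:
        break
    raw += chunk
s.close()
print(raw.split(b"\r\n", 1)[0])   # status line, e.g. b'HTTP/1.1 200 OK'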
I'm working on a NetHack clone that is supposed to be played through Telnet, like many NetHack servers. As I've said, this is a clone, so it's being written from scratch, in Python.
I've set up my socket server by reusing code from an SMTP server I wrote a while ago, and all of a sudden my attention jumped to this particular line of code:
s.listen(15)
My server was designed to be able to connect to 15 simultaneous clients just in case the data exchange with any of them took too long, but ideally listen(1) or listen(2) would be enough. But this case is different.
As happens on Alt.org when you telnet into their NetHack servers, people connected to my server should be able to play my roguelike remotely through a single telnet session, so I guess this connection should not be interrupted. Yet, I've read here that
[...] if you are really holding more than 128 queued connect requests you are a) taking too long to process them or b) need a heavy-weight distributed server or c) suffering a DDoS attack.
What is the best practice here? Should I keep every connection open until the connected user disconnects, or is there another way? Should I go for listen(128) (or whatever my system's socket.SOMAXCONN is), or is that bad practice?
The number in listen(number) limits the number of pending connection requests.
A connection request is pending from the moment the OS receives the initial SYN until you call the socket's accept method. So the number does not limit the number of open (established) connections; it limits the number of connections still queued between that initial SYN and your accept() call.
It is a bad idea not to answer an incoming connection, because:
The client will retransmit its SYN until an answering SYN/ACK is received.
The client cannot distinguish between your server being unavailable and its connection simply waiting in the queue.
A better idea is to accept the connection, send the client a message with the rejection reason, and then close the connection.
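A rough sketch of that idea: keep a normal backlog, accept every connection, and if the server is already at its player limit, send a short rejection message and close, rather than leaving the client queued. MAX_PLAYERS, the port, and the message text are placeholders for this sketch:

import socket
import threading

MAX_PLAYERS = 15          # placeholder capacity limit
active = []               # sockets of connected players
lock = threading.Lock()

def serve_player(conn):
    try:
        conn.sendall(b"Welcome to the dungeon!\r\n")
        while conn.recv(1024):        # the actual game loop would go here
            pass
    finally:
        with lock:
            active.remove(conn)
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 2323))               # placeholder port (23 would need privileges)
server.listen(socket.SOMAXCONN)       # backlog only limits *pending* connects
while True:
    conn, addr = server.accept()
    with lock:
        if len(active) >= MAX_PLAYERS:
            conn.sendall(b"Server full, try again later.\r\n")
            conn.close()
            continue
        active.append(conn)
    threading.Thread(target=serve_player, args=(conn,), daemon=True).start()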
What's the easiest way to establish an emulated TCP connection over HTTP with Python 2.7.x?
Server: a Python program on pythonanywhere (or some analogue) free hosting that doesn't provide a dedicated IP. Client: a Python program on a Windows PC.
The connection is established via multiprocessing.BaseManager and works fine when testing both server and client on the same machine.
Is there a way to make this work over HTTP with minimal additions to the code?
P.S. I need this for a grid computing project.
P.P.S. I'm new to python & network & web programming, started studying it several days ago.
Found this: http://code.activestate.com/recipes/577643-transparent-http-tunnel-for-python-sockets-to-be-u/. It appears to be exactly what I need, though I don't understand how to invoke setup_http_proxy() on the server/client side. I tried setup_http_proxy("my.proxy", 8080) on both sides, but it didn't work.
Also found this: http://docs.python.org/2/library/httplib.html. What does the HTTPConnection.set_tunnel method actually do? Can I use it to solve the problem in question?
Usage on the client:
setup_http_proxy("THE_ADRESS", THE_PORT_NUMBER) # address of the Proxy, port the Proxy is listening on
The code wraps sockets so that they perform an initial HTTP CONNECT request to the proxy you configure, getting an HTTP proxy to carry the TCP connection for you. For that, though, you'll need a compliant proxy (most won't allow you to open arbitrary TCP connections unless they're for HTTPS).
HTTPConnection.set_tunnel basically does the same thing.
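Roughly, set_tunnel makes the connection object connect to the proxy first, issue the CONNECT request, and then talk to the real host through that tunnel. A minimal sketch with Python 2.7's httplib, which the question links to (http.client in Python 3); the proxy host and port are placeholders:

import httplib   # http.client in Python 3

conn = httplib.HTTPSConnection("proxy.example.com", 8080)   # placeholder proxy address
conn.set_tunnel("www.example.com", 443)   # the real destination behind the proxy
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)
conn.close()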
For your use case, a program running on free hosting, this just won't work. Your free host will probably only allow you to handle HTTP requests, not run long-lived processes that listen for TCP connections (which the code assumes).
You should rethink your need to tunnel and organize your communication around posting data (and polling for messages from the server, unless they're answers to what you post). Or you could purchase VPS hosting, which would give you more control over what you can run remotely.
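A rough sketch of that post-and-poll pattern on the client side, using urllib2 since you're on Python 2.7; the base URL, the /submit and /poll endpoints, and the payload format are all hypothetical and would have to be implemented by the web app on the host:

import json
import time
import urllib2   # urllib.request in Python 3

BASE = "http://yourapp.pythonanywhere.com"   # placeholder URL

def submit(result):
    # Post a piece of computed work to the (hypothetical) /submit endpoint.
    req = urllib2.Request(BASE + "/submit",
                          data=json.dumps(result),
                          headers={"Content-Type": "application/json"})
    return urllib2.urlopen(req).read()

def poll_for_work():
    # Repeatedly ask the (hypothetical) /poll endpoint for new work units.
    while True:
        work = json.loads(urllib2.urlopen(BASE + "/poll").read())
        if work:
            return work
        time.sleep(10)   # back off between polls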
I'm running a Twisted server with the LineReceiver protocol. Sometimes clients will disconnect silently, so Twisted keeps the connection open. And because the server doesn't send anything unless requested of it, there's never a TCP timeout. In other words, some connections are never closed server-side.
How can I have Twisted close a connection that's been inactive for a few hours?
You can schedule timed events using reactor.callLater. Based on this, there's a helper for adding timeouts to protocols, twisted.protocols.policies.TimeoutMixin.
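A sketch of the TimeoutMixin approach combined with LineReceiver; the timeout value, port, and echo behaviour are just examples:

from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver
from twisted.protocols.policies import TimeoutMixin

IDLE_TIMEOUT = 3 * 60 * 60   # seconds of allowed inactivity (example value)

class MyProtocol(LineReceiver, TimeoutMixin):
    def connectionMade(self):
        self.setTimeout(IDLE_TIMEOUT)      # start the idle timer

    def lineReceived(self, line):
        self.resetTimeout()                # any activity restarts the timer
        self.sendLine(b"echo: " + line)

    def timeoutConnection(self):
        self.transport.loseConnection()    # called when the timer expires

    def connectionLost(self, reason):
        self.setTimeout(None)              # cancel the timer when the connection goes away

factory = Factory()
factory.protocol = MyProtocol
reactor.listenTCP(4321, factory)
reactor.run()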
Another approach is to use TCP keep-alives, which you can enable using the transport's setTcpKeepAlive method.
And another approach is to use application-level keep-alives: essentially, send a "noop" once in a while. It doesn't need a response. If the connection has been lost, the extra data in the send buffer will cause the TCP stack to eventually notice.
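A sketch of that application-level keep-alive using twisted.internet.task.LoopingCall; the interval and the "NOOP" line are placeholders:

from twisted.internet.task import LoopingCall
from twisted.protocols.basic import LineReceiver

class KeepAliveProtocol(LineReceiver):
    def connectionMade(self):
        # Periodically write a no-op line; if the peer is gone, the growing
        # send buffer eventually makes the TCP stack report the broken link.
        self._keepalive = LoopingCall(lambda: self.sendLine(b"NOOP"))
        self._keepalive.start(300, now=False)   # every 5 minutes (example)

    def connectionLost(self, reason):
        if self._keepalive.running:
            self._keepalive.stop()

    def lineReceived(self, line):
        pass   # ignore (or handle) incoming lines, including the peer's NOOPs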
See also the FAQ entry.
I'm a little stumped: I have a simple messenger client program (pure Python, sockets), and I want to add proxy support (HTTP/S, SOCKS), but I'm a little confused about how to go about it. I assume that the connection at the socket level will be made to the proxy server, at which point the headers should contain a CONNECT plus the destination IP (of the chat server), and authentication if the proxy requires it; however, the rest is a little beyond me. How is the subsequent connection handled, specifically the reading/writing, etc.?
Are there any guides on implementing proxy support in socket-based (TCP) programming in Python?
Thank you
Maybe use something like SocksiPy, which handles all the protocol details for you and lets you connect through a SOCKS proxy just as you would without one?
It is pretty simple: after you send the HTTP request CONNECT example.com:1234 HTTP/1.0\r\nHost: example.com:1234\r\n<additional headers incl. authentication>\r\n\r\n, the proxy responds with HTTP/1.0 200 Connection established\r\n\r\n, and after that blank line ending the headers you can communicate just as you would with example.com port 1234 without the proxy (as I understand it, you already have the client-server communication part done).
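A sketch of that exchange with a plain socket; the proxy address is a placeholder, and a real client should parse the status line more carefully and add a Proxy-Authorization header if the proxy requires authentication:

import socket

PROXY = ("proxy.example.com", 8080)     # placeholder proxy address

s = socket.create_connection(PROXY)
s.sendall(b"CONNECT example.com:1234 HTTP/1.0\r\n"
          b"Host: example.com:1234\r\n"
          b"\r\n")

# Read until the blank line that ends the proxy's response headers.
reply = b""
while b"\r\n\r\n" not in reply:
    chunk = s.recv(4096)
    if not chunk:
        raise IOError("proxy closed the connection")
    reply += chunk

status_line = reply.split(b"\r\n", 1)[0]
if b" 200 " not in status_line:
    raise IOError("proxy refused CONNECT: " + repr(status_line))

# From here on, s behaves like a direct connection to example.com:1234.
s.sendall(b"hello chat server\r\n")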
Have a look at stunnel.
Stunnel can allow you to secure non-SSL aware daemons and protocols (like POP, IMAP, LDAP, etc.) by having Stunnel provide the encryption, requiring no changes to the daemon's code.