I run my script on computer "A". Then I connect to computer "A" from computer "B" through my script. I send my message to computer "A" and my script runs it with an exec() instruction.
I want to see the result of executing my message on computer "A", through a socket on computer "B".
I tried setting sys.stdout = socket_response but got an error: "socket object has no attribute 'write'".
So, how can I redirect standard output (for print or exec()) from computer "A" to computer "B" through the socket connection?
It would be some kind of 'Python interpreter' inside my script.
Sorry, I can't answer my own question without reputation.
Thanks to all!
I used the simple approach that @Torxed advised. Here's my example code (it's simplified, not my real script):
# -*- coding: utf-8 -*-
import socket
import sys

class stdout_():
    def __init__(self, sock_resp):
        self.sock_resp = sock_resp
    def write(self, mes):
        self.sock_resp.send(mes)

MY_IP = 'localhost'
MY_PORT = 31337

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print("Start server")
old_out = sys.stdout
srv.bind((MY_IP, MY_PORT))
srv.listen(0)
sock_resp, addr_resp = srv.accept()

new_out = stdout_(sock_resp)
sys.stdout = new_out
#sys.stdout = sock_resp  ### socket object has no attribute 'write'

while 1:
    try:
        a = sock_resp.recv(1024)
        exec(a)
    except socket.timeout:
        #print('server timeout!!' + '\n')
        continue
I connected to the script with PuTTY, sent "print 'abc'", and received the answer 'abc'.
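For reference, here is a minimal client sketch (my own addition, not from the original post) that does the same thing as the PuTTY session, assuming the server above is listening on localhost:31337:

import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('localhost', 31337))
client.send(b"print 'abc'")       # the server exec()s whatever it receives
print(client.recv(1024))          # whatever the server print()s comes back here
client.close()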
There is the makefile() method on Python's socket objects:
socket.makefile(mode='r', buffering=None, *, encoding=None,
errors=None, newline=None)
Return a file object associated with the socket. The exact returned
type depends on the arguments given to makefile(). These arguments are
interpreted the same way as by the built-in open() function.
Closing the file object won’t close the socket unless there are no
remaining references to the socket. The socket must be in blocking
mode; it can have a timeout, but the file object’s internal buffer may
end up in an inconsistent state if a timeout occurs.
You can read how to use it in Mark Lutz's book (chapter 12, "Making Sockets Look Like Files and Streams").
An example from the book (the idea is simple: make a file object from a socket with socket.makefile and link sys.stdout with it):
def redirectOut(port=port, host=host):
    """
    connect caller's standard output stream to a socket for GUI to listen
    start caller after listener started, else connect fails before accept
    """
    sock = socket(AF_INET, SOCK_STREAM)
    sock.connect((host, port))        # caller operates in client mode
    file = sock.makefile('w')         # file interface: text, buffered
    sys.stdout = file                 # make prints go to sock.send
    return sock                       # if caller needs to access it raw
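The book excerpt only shows the redirecting (sending) side. A minimal listener-side sketch (my own addition; the host and port values are hypothetical stand-ins for the book's module-level defaults) could look like this:

from socket import socket, AF_INET, SOCK_STREAM

host, port = 'localhost', 50008        # hypothetical values

listener = socket(AF_INET, SOCK_STREAM)
listener.bind((host, port))
listener.listen(1)
conn, addr = listener.accept()
conn_file = conn.makefile('r')         # wrap the accepted socket as a readable file
for line in conn_file:                 # each redirected print arrives as a line of text
    print('from client:', line.rstrip())

Keep in mind that the file returned by makefile('w') on the sending side is buffered, so the redirected prints may not show up on the listener until the stream is flushed or closed.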
Server side:
from subprocess import Popen, STDOUT, PIPE
from socket import socket
from time import sleep

server_sock = socket()
server_sock.bind(('', 8000))
server_sock.listen(4)

def close_process(p):
    p.stdin.close()
    p.stdout.close()

while 1:
    try:
        client, client_address = server_sock.accept()
        data = client.recv(8192)
    except:
        break

    # First, we open a handle to the external command to be run.
    process = Popen(data.decode('utf-8'), shell=True, stdout=PIPE, stdin=PIPE, stderr=STDOUT)

    # Wait for the command to finish
    # (.poll() will return the exit code, None if it's still running).
    while process.poll() is None:
        sleep(0.025)

    # Then we send whatever output the command gave us back via the socket.
    # Python 3: sockets only accept bytes, and process.stdout.read() already
    # returns bytes here, so no conversion is needed (unlike in Py2.X).
    try:
        client.send(process.stdout.read())
    except:
        pass

    # And finally, close the stdout/stdin of the process, otherwise you'll
    # end up with "too many file handles opened" in your OS.
    close_process(process)
    client.close()

server_sock.close()
This assumes Python 3.
If no one else has a better way of redirecting output from a process to a socket, this is a solution you could work with.
Note: Based on the answer below, I think that I have not properly communicated this question. I am currently re-writing it with code to be more clear.
I'm writing a Python server which accepts connections from multiple clients and stores them.
If I print the properly connected socket which is used to speak with one of the connected clients, I'd get something like the following as output:
<socket.socket fd=4, family=AddressFamily.AF_INET, type=2049, proto=0, laddr=('3.3.3.3', 1234), raddr=('4.4.4.4', 63402)>
where for the purposes of privacy I've replaced my server's IP with 3.3.3.3 and the client's IP with 4.4.4.4. What I was really hoping would work, would be to save the information to a file in the format:
4 2049
and then when the child process boots, it would pass this information to a socket constructor using:
recovered_client = socket(AF_INET, 2049, 0, 4)
But this does not work. When I apply this process and print the recovered client, I see the following:
<socket.socket fd=4, family=AddressFamily.AF_INET, type=2049, proto=0>
It seems that the fields laddr and raddr from the original connection are not recovered by passing the file descriptor to the constructor.
I tried manually repairing this by adding the host and port from laddr and raddr to the file also, and then connecting with the command:
recovered_client.connect(('4.4.4.4', 63402))
But this yields the error:
OSError: [Errno 88] Socket operation on non-socket
As an experiment, I left the connection open in the parent process, then had the child process accept a new fresh connection and print it, and what I got was:
<socket.socket fd=4, family=AddressFamily.AF_INET, type=2049, proto=0, laddr=('3.3.3.3', 1234), raddr=('75.159.78.189', 49709)>
In other words, a new connection has been made, with the same value for fd, with a different client port. The original connection was never closed, but rather it hung indefinitely because, as intended, the parent process froze when it called the child process.
So this means I have two different active connections (although one was frozen), whose sockets have the same file descriptors. Does this mean that the value assigned to the field fd for a socket is relative to the process that created it?
If so, my approach is clearly hopeless. How can I pass my client's connection created in my parent process to its child process?
If so, my approach is clearly hopeless. How can I pass my client's connection created in my parent process to its child process?
A child inherits all open file descriptors from its parent. There's no need to "pass" anything. Consider the following code:
#!/usr/bin/python

import os
import socket

s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('localhost', 2049))
s.listen(5)

def child_process(fd, addr):
    while True:
        data = fd.recv(10)
        if len(data) == 0:
            break
        print('read:', data)
    print('client {} has disconnected'.format(addr))

def main():
    while True:
        c_fd, c_addr = s.accept()
        print('new connection from', c_addr)
        pid = os.fork()
        if pid > 0:
            # This is the parent process
            c_fd.close()
        else:
            # This is the child process
            child_process(c_fd, c_addr)
            return

try:
    main()
finally:
    s.close()
Every new connection is handled by a child process. File descriptors that were open in the parent (such as the client socket returned by the accept call) are already available in the child. We just need to make sure we close the client socket in the parent, since it's already been inherited by the child.
The story is mostly the same if you're spawning subprocesses using the subprocess module, because subprocess is just calling fork() and exec() under the hood. This is why I said "subprocess" and "child process" are synonyms.
There's a catch, though. Two of them, in fact:
By default, subprocess will close all open file descriptors before spawning a child process. Fortunately, there is a close_fds keyword argument to disable that behavior.
Unfortunately, even if we disable the close_fds behavior in subprocess, the file descriptors returned by accept have the CLOSE_ON_EXEC flag set, which means they get closed by the kernel when a process calls exec.
But no worries, we can work around this by clearing the CLOSE_ON_EXEC flag like this:
c_fd, c_addr = s.accept()
flags = fcntl.fcntl(c_fd, fcntl.F_GETFD, 0)
fcntl.fcntl(c_fd, fcntl.F_SETFD, flags & ~fcntl.FD_CLOEXEC)
After that, the socket will be inherited by processes spawned using subprocess.call and friends. For example, if we rewrite our parent code like this:
#!/usr/bin/python

import fcntl
import socket
import subprocess

s = socket.socket(socket.AF_INET,
                  socket.SOCK_STREAM | socket.SOCK_CLOEXEC)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('localhost', 2049))
s.listen(5)

def main():
    while True:
        c_fd, c_addr = s.accept()
        flags = fcntl.fcntl(c_fd, fcntl.F_GETFD, 0)
        fcntl.fcntl(c_fd, fcntl.F_SETFD, flags & ~fcntl.FD_CLOEXEC)
        print('new connection from', c_addr)

        # Here we call the child command, passing the
        # integer file descriptor as the first argument.
        subprocess.check_call(['python', 'socketchild.py',
                               '{}'.format(c_fd.fileno()), c_addr[0]],
                              close_fds=False)
        c_fd.close()

try:
    main()
finally:
    s.close()
We can then write child code that uses the socket.fromfd() function to convert that integer file descriptor back into a socket:
#!/usr/bin/python

import socket
import sys

def child_process(fd, addr):
    while True:
        data = fd.recv(10)
        if len(data) == 0:
            break
        print('read:', data)
    print('client {} has disconnected'.format(addr))

def main():
    fdno = int(sys.argv[1])
    print('got fd:', fdno)
    addr = sys.argv[2]
    fd = socket.fromfd(fdno, socket.AF_INET, socket.SOCK_STREAM)
    child_process(fd, addr)

if __name__ == '__main__':
    main()
Python code snippet
sock = socket.socket(socket.AF_INET,
socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
Setting SO_REUSEADDR is the key point to note here.
You can then use sock.fileno() to pass the descriptor on to the client.
The client can reconstruct a socket from it with:

s_client = socket.fromfd(<file descriptor from parent>, socket.AF_INET,
                         socket.SOCK_STREAM)
I have a (very) simple web server I wrote in C and I want to test it. I wrote it so it takes data on stdin and sends out on stdout. How would I connect the input/output of a socket (created with socket.accept()) to the input/output of a process created with subprocess.Popen?
Sounds simple, right? Here's the killer: I'm running Windows.
Can anyone help?
Here's what I've tried:
Passing the client object itself as stdin/out to subprocess.Popen. (It never hurts to try.)
Passing socket.makefile() results as stdin/out to subprocess.Popen.
Passing the socket's file number to os.fdopen().
Also, in case the question was unclear, here's a slimmed-down version of my code:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('', PORT))
sock.listen(5)
cli, addr = sock.accept()
p = subprocess.Popen([PROG])
#I want to connect 'p' to the 'cli' socket so whatever it sends on stdout
#goes to the client and whatever the client sends goes to its stdin.
#I've tried:
p = subprocess.Popen([PROG], stdin = cli.makefile("r"), stdout = cli.makefile("w"))
p = subprocess.Popen([PROG], stdin = cli, stdout = cli)
p = subprocess.Popen([PROG], stdin = os.fdopen(cli.fileno(), "r"), stdout = os.fdopen(cli.fileno(), "w"))
#but all of them give me either "Bad file descriptor" or "The handle is invalid".
I had the same issue and tried the same way of binding the socket, also on Windows. The solution I came up with was to share the socket with the subprocess and bind it to stdin and stdout there. My solution is entirely in Python, but I guess it is easily adaptable.
import socket, subprocess
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('', PORT))
sock.listen(5)
cli, addr = sock.accept()
process = subprocess.Popen([PROG], stdin=subprocess.PIPE)
process.stdin.write(cli.share(process.pid))
process.stdin.flush()
# you can now use `cli` as client normally
And in the other process:
import sys, os, socket
sock = socket.fromshare(os.read(sys.stdin.fileno(), 372))
sys.stdin = sock.makefile("r")
sys.stdout = sock.makefile("w")
# stdin and stdout now write to `sock`
The 372 is the length of the data returned by a measured socket.share() call. I don't know if this is constant, but it worked for me. This is only possible on Windows, as the share() method is only available on that OS.
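If the 372 turns out not to be constant, one possible workaround (my own sketch, untested) is to length-prefix the share() blob, reusing the cli and process objects from the snippet above:

import struct

# Parent side: prefix the share() blob with a 4-byte length.
shared = cli.share(process.pid)
process.stdin.write(struct.pack('!I', len(shared)) + shared)
process.stdin.flush()

And in the child:

import socket, struct, sys

# Child side: read the 4-byte length, then exactly that many bytes of the blob.
(length,) = struct.unpack('!I', sys.stdin.buffer.read(4))
sock = socket.fromshare(sys.stdin.buffer.read(length))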
I have a basic client-server script in Python using sockets. The server binds to a specific port and waits for a client connection. When a client connects, they are presented with a raw_input prompt that sends the entered commands to a subprocess on the server and pipes the output back to the client.
Sometimes when I execute commands from the client, the output will hang and not present me with the raw_input prompt until I press the [enter] key.
At first I thought this might have been a buffer problem but it happens when I use commands with a small output, like 'clear' or 'ls', etc.
The client code:
import os, sys
import socket
from base64 import *
import time

try:
    HOST = sys.argv[1]
    PORT = int(sys.argv[2])
except IndexError:
    print("You must specify a host IP address and port number!")
    print("usage: ./handler_client.py 192.168.1.4 4444")
    sys.exit()

socksize = 4096

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

try:
    server.connect((HOST, PORT))
    print("[+] Connection established!")
    print("[+] Type ':help' to view commands.")
except:
    print("[!] Connection error!")
    sys.exit(2)

while True:
    data = server.recv(socksize)
    cmd = raw_input(data)
    server.sendall(str(cmd))

server.close()
Server code:
import os, sys
import socket
import time
from subprocess import Popen, PIPE, STDOUT, call

HOST = ''
PORT = 4444
socksize = 4096
activePID = []

conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
conn.bind((HOST, PORT))
conn.listen(5)
print("Listening on TCP port %s" % PORT)

def reaper():
    while activePID:
        pid, stat = os.waitpid(0, os.WNOHANG)
        if not pid:
            break
        activePID.remove(pid)

def handler(connection):
    time.sleep(3)
    while True:
        cmd = connection.recv(socksize)
        proc = Popen(cmd,
                     shell=True,
                     stdout=PIPE,
                     stderr=PIPE,
                     stdin=PIPE,
                     )
        stdout, stderr = proc.communicate()
        if cmd == ":killme":
            connection.close()
            sys.exit(0)
        elif proc:
            connection.send(stdout)
            connection.send("\nshell => ")
    connection.close()
    os._exit(0)

def accept():
    while 1:
        global connection
        connection, address = conn.accept()
        print "[!] New connection!"
        connection.send("\nshell => ")
        reaper()
        childPid = os.fork()  # forks the incoming connection and sends to conn handler
        if childPid == 0:
            handler(connection)
        else:
            activePID.append(childPid)

accept()
The problem I see is that the final loop in the client only does one server.recv(socksize), and then it calls raw_input(). If that recv() call does not obtain all of the data sent by the server in that single call, then it also won't collect the prompt that follows the command output and therefore won't show that next prompt. The uncollected input will sit in the socket until you enter the next command, and then it will be collected and shown. (In principle it could take many recv() calls to drain the socket and get to the appended prompt, not just two calls.)
If this is what's happening then you would hit the problem if the command sent back more than one buffer's worth (4KB) of data, or if it generated output in small chunks spaced out in time so that the server side could spread that data over multiple sends that are not coalesced quickly enough for the client to collect them all in a single recv().
To fix this, you need to have the client do as many recv() calls as it takes to completely drain the socket. So you need to come up with a way for the client to know that the socket has been drained of everything that the server is going to send in this interaction.
The easiest way to do this is to have the server add boundary markers into the data stream and then have the client inspect those markers to discover when the final data from the current interaction has been collected. There are various ways to do this, but I'd probably have the server insert a "this is the length of the following chunk of data" marker ahead of every chunk it sends, and send a marker with a length of zero after the final chunk.
The client-side main loop then becomes:
forever:
    read a marker;
    if the length carried in the marker is zero then
        break;
    else
        read exactly that many bytes;
Note that the client must be sure to recv() the complete marker before it acts on it; stuff can come out of a stream socket in lumps of any size, completely unrelated to the size of the writes that sent that stuff into the socket at the sender's side.
You get to decide whether to send the marker as variable-length text (with a distinctive delimiter) or as fixed-length binary (in which case you have to worry about endian issues if the client and server can be on different systems). You also get to decide whether the client should show each chunk as it arrives (obviously you can't use raw_input() to do that) or whether it should collect all of the chunks and show the whole thing in one blast after the final chunk has been collected.
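For concreteness, here is a minimal sketch of that marker scheme (my own code, not from the original posts), using a fixed 4-byte binary length prefix:

import struct

def send_chunk(sock, data):
    # Prefix every chunk with its length; a zero-length chunk means "no more data".
    sock.sendall(struct.pack('!I', len(data)) + data)

def recv_exactly(sock, n):
    # Keep calling recv() until exactly n bytes have been collected.
    buf = b''
    while len(buf) < n:
        piece = sock.recv(n - len(buf))
        if not piece:
            raise EOFError('socket closed in the middle of a message')
        buf += piece
    return buf

def recv_reply(sock):
    # Collect chunks until the zero-length end marker arrives.
    chunks = []
    while True:
        (length,) = struct.unpack('!I', recv_exactly(sock, 4))
        if length == 0:
            return b''.join(chunks)
        chunks.append(recv_exactly(sock, length))

On the server side, each piece of command output would go out through send_chunk(), followed by send_chunk(connection, b'') as the end marker; the client would then call recv_reply() instead of a single recv() before showing the next prompt.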
Let's consider this code in Python:
import socket
import threading
import sys
import select

class UDPServer:
    def __init__(self):
        self.s = None
        self.t = None

    def start(self, port=8888):
        if not self.s:
            self.s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.s.bind(("", port))
            self.t = threading.Thread(target=self.run)
            self.t.start()

    def stop(self):
        if self.s:
            self.s.close()
            self.t.join()
            self.t = None

    def run(self):
        while True:
            try:
                # receive data
                data, addr = self.s.recvfrom(1024)
                self.onPacket(addr, data)
            except:
                break
        self.s = None

    def onPacket(self, addr, data):
        print addr, data

us = UDPServer()

while True:
    sys.stdout.write("UDP server> ")
    cmd = sys.stdin.readline()
    if cmd == "start\n":
        print "starting server..."
        us.start(8888)
        print "done"
    elif cmd == "stop\n":
        print "stopping server..."
        us.stop()
        print "done"
    elif cmd == "quit\n":
        print "Quitting ..."
        us.stop()
        break

print "bye bye"
It runs an interactive shell with which I can start and stop a UDP server.
The server is implemented through a class which launches a thread containing an infinite recv/onPacket-callback loop inside a try/except block, which should detect the error and then exit the loop.
What I expect is that when I type "stop" in the shell, the socket is closed and an exception is raised by the recvfrom function because of the invalidated file descriptor.
Instead, it seems that recvfrom still blocks the thread waiting for data even after the close call.
Why this strange behavior?
I've always used this pattern to implement a UDP server in C++ and Java, and it always worked.
I've also tried a "select", passing a list with the socket as the xread argument, in order to get a file-descriptor-disruption event from select instead of from recvfrom, but select seems to be "insensitive" to the close too.
I need a single piece of code which maintains the same behavior on Linux and Windows with Python 2.5-2.6.
Thanks.
The usual solution is to have a pipe tell the worker thread when to die.
Create a pipe using os.pipe. This gives you a pipe with both the reading and writing ends in the same program. It returns raw file descriptors, which you can use as-is (os.read and os.write) or turn into Python file objects using os.fdopen.
The worker thread waits on both the network socket and the read end of the pipe using select.select. When the pipe becomes readable, the worker thread cleans up and exits. Don't read the data, ignore it: its arrival is the message.
When the master thread wants to kill the worker, it writes a byte (any value) to the write end of the pipe. The master thread then joins the worker thread, then closes the pipe (remember to close both ends).
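A minimal sketch of that pattern (my own, untested, and assuming a POSIX platform where select() accepts pipe file descriptors):

import os
import select
import socket
import threading

class StoppableUDPServer:
    def __init__(self, port=8888):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(("", port))
        self.stop_r, self.stop_w = os.pipe()           # the "please die" pipe
        self.thread = threading.Thread(target=self.run)
        self.thread.start()

    def run(self):
        while True:
            # Wait on both the UDP socket and the read end of the pipe.
            readable, _, _ = select.select([self.sock, self.stop_r], [], [])
            if self.stop_r in readable:
                break                                  # the byte's arrival is the message
            data, addr = self.sock.recvfrom(1024)
            print(addr, data)

    def stop(self):
        os.write(self.stop_w, b'x')                    # tell the worker to exit
        self.thread.join()                             # then join it
        os.close(self.stop_r)                          # close both ends of the pipe
        os.close(self.stop_w)
        self.sock.close()                              # only now close the socket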
P.S. Closing an in-use socket is a bad idea in a multi-threaded program. The Linux close(2) manpage says:
It is probably unwise to close file descriptors while they may be in use by system calls in other threads in the same process. Since a file descriptor may be re-used, there are some obscure race conditions that may cause unintended side effects.
So it's lucky your first approach did not work!
This is not Java. Good hints:
Don't use threads. Use asynchronous IO.
Use a higher level networking framework
Here's an example using twisted:
from twisted.internet.protocol import DatagramProtocol
from twisted.internet import reactor, stdio
from twisted.protocols.basic import LineReceiver

class UDPLogger(DatagramProtocol):
    def datagramReceived(self, data, (host, port)):
        print "received %r from %s:%d" % (data, host, port)

class ConsoleCommands(LineReceiver):
    delimiter = '\n'
    prompt_string = 'myserver> '

    def connectionMade(self):
        self.sendLine('My Server Admin Console!')
        self.transport.write(self.prompt_string)

    def lineReceived(self, line):
        line = line.strip()
        if line:
            if line == 'quit':
                reactor.stop()
            elif line == 'start':
                reactor.listenUDP(8888, UDPLogger())
                self.sendLine('listening on udp 8888')
            else:
                self.sendLine('Unknown command: %r' % (line,))
        self.transport.write(self.prompt_string)

stdio.StandardIO(ConsoleCommands())
reactor.run()
Example session:
My Server Admin Console!
myserver> foo
Unknown command: 'foo'
myserver> start
listening on udp 8888
myserver> quit
I want my python application to be able to tell when the socket on the other side has been dropped. Is there a method for this?
Short answer:
use a non-blocking recv(), or a blocking recv() / select() with a very
short timeout.
Long answer:
The way to handle socket connections is to read or write as you need to, and be prepared to handle connection errors.
TCP distinguishes between 3 forms of "dropping" a connection: timeout, reset, close.
Of these, the timeout can not really be detected, TCP might only tell you the time has not expired yet. But even if it told you that, the time might still expire right after.
Also remember that using shutdown() either you or your peer (the other end of the connection) may close only the incoming byte stream, and keep the outgoing byte stream running, or close the outgoing stream and keep the incoming one running.
So strictly speaking, you want to check if the read stream is closed, or if the write stream is closed, or if both are closed.
Even if the connection was "dropped", you should still be able to read any data that is still in the network buffer. Only after the buffer is empty will you receive a disconnect from recv().
Checking if the connection was dropped is like asking "what will I receive after reading all data that is currently buffered ?" To find that out, you just have to read all data that is currently bufferred.
I can see how "reading all buffered data" to get to the end of it might be a problem for some people who still think of recv() as a blocking function. With a blocking recv(), "checking" for a read when the buffer is already empty will block, which defeats the purpose of "checking".
In my opinion any function that is documented to potentially block the entire process indefinitely is a design flaw, but I guess it is still there for historical reasons, from when using a socket just like a regular file descriptor was a cool idea.
What you can do is:
set the socket to non-blocking mode, but then you get a system-dependent error to indicate the receive buffer is empty, or the send buffer is full
stick to blocking mode but set a very short socket timeout. This will allow you to "ping" or "check" the socket with recv(), pretty much what you want to do
use select() call or asyncore module with a very short timeout. Error reporting is still system-specific.
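As an illustration of the second option (my own rough sketch, not authoritative), using a blocking socket with a very short timeout:

import socket

def connection_looks_closed(sock, timeout=0.01):
    # "Ping" the socket with a very short timeout. Note that this consumes any
    # data that happens to arrive, so real code should buffer it instead.
    sock.settimeout(timeout)
    try:
        data = sock.recv(4096)
    except socket.timeout:
        return False      # nothing to read right now; the connection looks alive
    if data:
        return False      # the peer is still sending; handle/buffer the data
    return True           # recv() returned b'': the peer closed its sending side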
For the write part of the problem, keeping the read buffers empty pretty much covers it. You will discover a connection "dropped" after a non-blocking read attempt, and you may choose to stop sending anything after a read returns a closed channel.
I guess the only way to be sure your sent data has reached the other end (and is not still in the send buffer) is either:
receive a proper response on the same socket for the exact message that you sent. Basically you are using the higher level protocol to provide confirmation.
perform a successful shutdown() and close() on the socket
The Python socket HOWTO says send() will return 0 bytes written if the channel is closed. You may use a non-blocking or a timeout socket.send(), and if it returns 0 you can no longer send data on that socket. But if it returns non-zero, you have already sent something; good luck with that :)
Also, I have not considered OOB (out-of-band) socket data here as a means to approach your problem, but I think OOB was not what you meant.
It depends on what you mean by "dropped". For TCP sockets, if the other end closes the connection either through
close() or the process terminating, you'll find out by reading an end of file, or getting a read error, usually with errno set to whatever 'connection reset by peer' is on your operating system. In Python, you'll read a zero-length string, or a socket.error will be thrown when you try to read from or write to the socket.
From the link Jweede posted:
exception socket.timeout:
This exception is raised when a timeout occurs on a socket
which has had timeouts enabled via a prior call to settimeout().
The accompanying value is a string whose value is currently
always “timed out”.
Here are the demo server and client programs for the socket module from the python docs
# Echo server program
import socket
HOST = '' # Symbolic name meaning all available interfaces
PORT = 50007 # Arbitrary non-privileged port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
conn, addr = s.accept()
print 'Connected by', addr
while 1:
    data = conn.recv(1024)
    if not data: break
    conn.send(data)
conn.close()
And the client:
# Echo client program
import socket
HOST = 'daring.cwi.nl' # The remote host
PORT = 50007 # The same port as used by the server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('Hello, world')
data = s.recv(1024)
s.close()
print 'Received', repr(data)
On the docs example page I pulled these from, there are more complex examples that employ this idea, but here is the simple answer:
Assuming you're writing the client program, just put all of the code that uses the socket while it is at risk of being dropped inside a try block...
try:
    s.connect((HOST, PORT))
    s.send("Hello, World!")
    ...
except socket.timeout:
    # whatever you need to do when the connection is dropped
If I'm not mistaken this is usually handled via a timeout.
I translated the code sample in this blog post into Python: How to detect when the client closes the connection?, and it works well for me:
from ctypes import (
    CDLL, c_int, POINTER, Structure, c_void_p, c_size_t,
    c_short, c_ssize_t, c_char, ARRAY
)

__all__ = 'is_remote_alive',

class pollfd(Structure):
    _fields_ = (
        ('fd', c_int),
        ('events', c_short),
        ('revents', c_short),
    )

MSG_DONTWAIT = 0x40
MSG_PEEK = 0x02

EPOLLIN = 0x001
EPOLLPRI = 0x002
EPOLLRDNORM = 0x040

libc = CDLL('libc.so.6')

recv = libc.recv
recv.restype = c_ssize_t
recv.argtypes = c_int, c_void_p, c_size_t, c_int

poll = libc.poll
poll.restype = c_int
poll.argtypes = POINTER(pollfd), c_int, c_int

class IsRemoteAlive:  # not needed, only for debugging
    def __init__(self, alive, msg):
        self.alive = alive
        self.msg = msg

    def __str__(self):
        return self.msg

    def __repr__(self):
        return 'IsRemoteAlive(%r,%r)' % (self.alive, self.msg)

    def __bool__(self):
        return self.alive

def is_remote_alive(fd):
    fileno = getattr(fd, 'fileno', None)
    if fileno is not None:
        if hasattr(fileno, '__call__'):
            fd = fileno()
        else:
            fd = fileno

    p = pollfd(fd=fd, events=EPOLLIN | EPOLLPRI | EPOLLRDNORM, revents=0)
    result = poll(p, 1, 0)
    if not result:
        return IsRemoteAlive(True, 'empty')

    buf = ARRAY(c_char, 1)()
    result = recv(fd, buf, len(buf), MSG_DONTWAIT | MSG_PEEK)
    if result > 0:
        return IsRemoteAlive(True, 'readable')
    elif result == 0:
        return IsRemoteAlive(False, 'closed')
    else:
        return IsRemoteAlive(False, 'errored')
Trying to improve on @kay's response, I made a more Pythonic version.
(Note that it has not yet been tested in a "real-life" environment, and only on Linux.)
This detects if the remote side closed the connection, without actually consuming the data:
import socket
import errno


def remote_connection_closed(sock: socket.socket) -> bool:
    """
    Returns True if the remote side did close the connection
    """
    try:
        buf = sock.recv(1, socket.MSG_PEEK | socket.MSG_DONTWAIT)
        if buf == b'':
            return True
    except BlockingIOError as exc:
        if exc.errno != errno.EAGAIN:
            # Raise on unknown exception
            raise
    return False
Here is a simple example from an asyncio echo server:
import asyncio


async def handle_echo(reader, writer):
    addr = writer.get_extra_info('peername')
    sock = writer.get_extra_info('socket')
    print(f'New client: {addr!r}')

    # Initial message from the client
    data = await reader.read(100)
    message = data.decode()
    print(f"Received {message!r} from {addr!r}")

    # Simulate a long async process
    for _ in range(10):
        if remote_connection_closed(sock):
            print('Remote side closed early')
            return
        await asyncio.sleep(1)

    # Write the initial message back
    print(f"Send: {message!r}")
    writer.write(data)
    await writer.drain()

    writer.close()


async def main():
    server = await asyncio.start_server(
        handle_echo, '127.0.0.1', 8888)

    addrs = ', '.join(str(sock.getsockname()) for sock in server.sockets)
    print(f'Serving on {addrs}')

    async with server:
        await server.serve_forever()


if __name__ == '__main__':
    asyncio.run(main())