don't want to wait for socket.accept() each loop iteration - python

I'm using the line "conn, addr = httpSocket.accept()", but I don't want to wait for it every iteration of my loop because there won't always be someone trying to connect. Is there a way to check if anyone is trying to connect, and move on if there isn't?
I have looked at using asyncio (I can't use threads because this is MicroPython on an ESP8266, and threading is not supported), but my accept() call is not awaitable.
with open('page.html', 'r') as file:
    html = file.read()

while True:
    conn, addr = httpSocket.accept()
    print('Got a connection from %s' % str(addr))
    conn.send('HTTP/1.1 200 OK\n')
    conn.send('Content-Type: text/html\n')
    conn.sendall(html)
    conn.close()

If threads aren't an option, you can always use the select module.
With select you basically split your sockets into 3 categories:
Sockets that you want to read data from (including new connections).
Sockets that you want to send data to.
Exceptional sockets (usually for error checking).
With each iteration, select returns lists of sockets in these categories, so you know how to handle each one instead of waiting for a new connection each time; a small sketch follows the link below.
You can see an example here:
https://steelkiwi.com/blog/working-tcp-sockets/
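A minimal sketch, assuming httpSocket is the listening socket from the question and html is loaded as above; passing a timeout of 0 makes select a pure poll, so accept() is only called when a connection is already pending (on MicroPython the module may be exposed as uselect):

import select

while True:
    # timeout=0: return immediately instead of blocking
    readable, _, _ = select.select([httpSocket], [], [], 0)
    if readable:
        conn, addr = httpSocket.accept()   # guaranteed not to block now
        print('Got a connection from %s' % str(addr))
        conn.send('HTTP/1.1 200 OK\n')
        conn.send('Content-Type: text/html\n')
        conn.sendall(html)
        conn.close()
    # ...do the rest of the loop's work here...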

Related

python socket programming for transferring a photo

I'm new to socket programming in Python. Here is an example of opening a TCP socket in a Mininet host and sending a photo from one host to another. I adapted code I had previously used to send a simple message to another host (writing the received data to a text file) to meet my requirements. When I run this revised code there is no error and the transfer seems to work, but I am not sure whether this is a correct way to do the transmission. Since I'm running both hosts on the same machine, I thought that might influence the result. Could you check whether this is a correct way to transfer, or whether I should add or remove something?
mininetSocketTest.py
#!/usr/bin/python

from mininet.topo import Topo, SingleSwitchTopo
from mininet.net import Mininet
from mininet.log import lg, info
from mininet.cli import CLI

def main():
    lg.setLogLevel('info')
    net = Mininet(SingleSwitchTopo(k=2))
    net.start()
    h1 = net.get('h1')
    p1 = h1.popen('python myClient2.py')
    h2 = net.get('h2')
    h2.cmd('python myServer2.py')
    CLI( net )
    #p1.terminate()
    net.stop()

if __name__ == '__main__':
    main()
myServer2.py
import socket
import sys

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('10.0.0.1', 12345))
buf = 1024
f = open("2.jpg", 'wb')
s.listen(1)
conn, addr = s.accept()
while 1:
    data = conn.recv(buf)
    print(data[:10])
    #print "PACKAGE RECEIVED..."
    f.write(data)
    if not data: break
    #conn.send(data)
conn.close()
s.close()
myClient2.py:
import socket
import sys

f = open("1.jpg", "rb")
print sys.getsizeof(f)
buf = 1024
data = f.read(buf)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('10.0.0.1', 12345))
while (data):
    if(s.sendall(data)):
        #print "sending ..."
        data = f.read(buf)
        print(f.tell(), data[:10])
    else:
        s.close()
s.close()
This loop in client2 is wrong:
while (data):
    if(s.send(data)):
        print "sending ..."
        data = f.read(buf)
As the send docs say:
Returns the number of bytes sent. Applications are responsible for checking that all data has been sent; if only some of the data was transmitted, the application needs to attempt delivery of the remaining data. For further information on this topic, consult the Socket Programming HOWTO.
You're not even attempting to do this. So, while it probably works on localhost, on a lightly-loaded machine, with smallish files, it's going to break as soon as you try to use it for real.
As the help says, you need to do something to deliver the rest of the buffer. Since there's probably no good reason you can't just block until it's all sent, the simplest thing to do is to call sendall:
Unlike send(), this method continues to send data from bytes until either all data has been sent or an error occurs. None is returned on success. On error, an exception is raised…
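For example, a minimal corrected sketch of the client loop, reusing f, buf, and s from the question; sendall() either sends everything or raises an exception, so there is no partial-send bookkeeping left to do:

data = f.read(buf)
while data:
    s.sendall(data)    # returns None on success, raises on error
    data = f.read(buf)
s.close()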
And this brings up the next problem: You're not doing any exception handling anywhere. Maybe that's OK, but usually it isn't. For example, if one of your sockets goes down, but the other one is still up, do you want to abort the whole program and hard-drop your connection, or do you maybe want to finish sending whatever you have first?
You should probably at least use a with clause or a finally, to make sure you close your sockets cleanly, so the other side will get a nice EOF instead of an exception.
Also, your server code just serves a single client and then quits. Is that actually what you wanted? Usually, even if you don't need concurrent clients, you at least want to loop around accepting and servicing them one by one.
Finally, a server almost always wants to do this:
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
Without this, if you try to run the server again within a few seconds after it finished (a platform-specific number of seconds, which may even depend on whether it finished with an exception instead of a clean shutdown), the bind will fail, in the same way as if you tried to bind a socket that's actually in use by another program.
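Putting these suggestions together, a minimal sketch of the server side (written for Python 3, where sockets support with; the 1024-byte buffer matches the question):

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('10.0.0.1', 12345))
    s.listen(1)
    while True:                          # service clients one by one
        conn, addr = s.accept()
        with conn, open("2.jpg", "wb") as f:
            while True:
                data = conn.recv(1024)
                if not data:             # empty read: client finished
                    break
                f.write(data)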
First of all, you should use TCP and not UDP. TCP will ensure that your client/server receives the whole photo intact. UDP is used more for content streaming, which is absolutely not your use case.

Python socket won't timeout

I'm having an issue with Python's socket module that I haven't been able to find anywhere else.
I'm building a simple TCP chat client, and while it successfully connects to the server initially, the script hangs endlessly on sock.recv() despite the fact that I explicitly set a timeout length.
I've tried using different timeout values and including setblocking(False) but no matter what I do it keeps acting like the socket is in blocking mode.
Here are the relevant parts of my code:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(1)

def listen_to_server():
    global connected
    while connected:
        ready_to_read, ready_to_write, in_error = select.select([sock], [], [])
        if ready_to_read:
            try:
                data = sock.recv(1024)
            except socket.timeout:
                print('TIMEOUT')
            if not data:
                append_to_log('Disconnected from server.\n')
                connected = False
            else:
                append_to_log(str(data))
Any suggestions would be helpful, I'm at a total loss here.
You've mixed two things: the socket timeout and select.
When you set a socket timeout, you are telling that socket: if I try some operation (e.g. recv()) and it doesn't finish within my limit, raise a timeout exception.
select takes file descriptors (on Windows, only sockets) and starts checking whether the rlist (the first parameter) contains any socket ready to read (i.e. some data has arrived). If any data arrives, the program continues.
Now your code does this:
Sets a timeout for socket operations.
Calls select, which starts waiting for data (if the server never sends any, it never arrives).
And that's it. You are stuck at the select.
You should just call recv() without select; then your timeout will be applied (see the sketch below).
If you need to manage multiple sockets at once, then you have to use select and set its 4th parameter, the timeout.
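A minimal sketch of that recv-only approach, reusing sock, connected, and append_to_log from the question:

def listen_to_server():
    global connected
    while connected:
        try:
            data = sock.recv(1024)     # honors the 1-second settimeout
        except socket.timeout:
            continue                   # nothing arrived yet; keep looping
        if not data:
            append_to_log('Disconnected from server.\n')
            connected = False
        else:
            append_to_log(str(data))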

Socket refresh python

I coded a basic socket system with select. I want to get the list of connected clients instantly.
When the select timeout has passed and several clients connect afterwards, it's a drama..
Example of the problem:
I have 3 clients: one connects before the timeout and the other 2 connect after it, so my list should be refreshed to take the two later clients into account.
1st result: I display my "list" variable and I see the first socket, which connected before the timeout, plus one of the sockets that connected after the timeout. Total: 2 of 3 clients.
2nd result: I re-display my "list" variable, and now all three clients are there...
But I want the complete list without having to re-display it once per client; you can imagine that with 10 clients I would have to display my list 10 times.
So I thought of using the asyncore module, which is more fluid. What do you think? Do you have an (easier) solution for me? Should I use multi-threading, or stay with the asyncore or select module?
EDIT: source code:
import socket, select

hote = ''
port = 81

mainConnection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mainConnection.bind((hote, port))
mainConnection.listen(5)
print("Listen to {}".format(port))

client_online = []

while True:
    connection_access, wlist, xlist = select.select([mainConnection], [], [], 10)
    for connexion in connection_access:
        connection_client, infos_connexion = connexion.accept()
        client_online.append(connection_client)

    refresh = input(">>> ")
    while True:
        try:
            refresh = int(refresh)
        except ValueError:
            print("Not allowed")
            refresh = int(refresh)
        else:
            break

    if refresh == 1:
        print("List client : {}".format(client_online))
There are three major problems with your code:
You call input in your loop. This function blocks until ENTER is pressed.
If a non-integer is input from the console, you get an exception. You handle that exception, but you handle it wrongly: instead of asking for input again, you simply retry the same operation that caused the exception.
You only check for incoming connections in your select call. You never check whether any of the connected sockets have sent anything.
The major problem here for you is the call to input, as it completely stops your program until console input is entered.
Your post is very unclear, but I can tell you that the problem is that you don't understand how to use select.
The code you posted only calls select one time. The program gets to the select() call and waits for mainConnection to be readable (or for the timeout). If mainConnection becomes readable before the timeout, select() returns with one readable file descriptor which you then process in your for loop. But that's it. select is never called again and so your program never checks for any more incoming connections.
In almost every application select should be in a loop. Each time through the loop the program waits in the select() call until one or more sockets is ready for reading or writing. When that happens, select gives you the file descriptors that are ready and it's your job to have other code actually do something. For example, if select returns a socket's file descriptor as readable it's your job to call .recv() on that socket.
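For example, a minimal sketch of such a loop, reusing mainConnection and client_online from the question (the 1024-byte buffer size is an arbitrary choice):

while True:
    # check the listening socket AND every connected client, every iteration
    readable, _, _ = select.select([mainConnection] + client_online, [], [], 10)
    for sock in readable:
        if sock is mainConnection:            # a new client is connecting
            conn, infos = mainConnection.accept()
            client_online.append(conn)
            print("List client : {}".format(client_online))
        else:                                 # an existing client sent data
            data = sock.recv(1024)
            if not data:                      # empty read: client disconnected
                client_online.remove(sock)
                sock.close()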
You can certainly use asyncore. In fact, I think you should study the source code for asyncore to learn how to properly use select.

Python: Non-blocking socket or Asynchronous I/O

I am new to Python and currently have to write a Python socket script that communicates with a device over TCP/IP (a weather station).
The device acts as the server side (listening on IP:PORT, accepting the connection, receiving requests, transferring data).
I only need to send one message, receive the answer, and then peacefully and nicely shut down and close the socket.
try:
    comSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
except socket.error, msg:
    sys.stderr.write("[ERROR] %s\n" % msg[1])
    sys.exit(1)

try:
    comSocket.connect((''))
except socket.error, msg:
    sys.stderr.write("[ERROR] %s\n" % msg[1])
    sys.exit(2)

comSocket.send('\r')
comSocket.recv(128)

comSocket.send('\r')
comSocket.recv(128)

comSocket.send('\r\r')
comSocket.recv(128)

comSocket.send('1I\r\r3I\r\r4I\r\r13I\r\r5I\r\r8I\r\r7I\r\r9I\r\r')
rawData = comSocket.recv(512)

comSocket.shutdown(1)
comSocket.close()
The problem I'm having is:
The communication channel is unreliable and the device is slow, so sometimes the device responds with a message of length 0 (just an ACK), and then my code freezes and waits for a response forever.
This snippet contains the portion that involves the socket; the whole script will be run under cron, so freezing is not desirable behavior.
My question is:
What would be the best way in Python to handle that behavior, so that the code doesn't freeze and wait forever, but instead attempts to move on to the next send (or similar)?
You can try a timeout approach, like Russel's code, or you can use a non-blocking socket, as shown in the code below. It will never block at socket.recv, and you can use it inside a loop to retry as many times as you want, so your program will not hang. You can test whether data is available and, if not, do other things and try again later.
comSocket.setblocking(0)   # on the socket object, not the socket module
while retry_condition:
    try:
        data = comSocket.recv(512)
    except socket.error:
        pass               # no data yet; do other work and retry later
I'd recommend eventlet and green threads for this.
Twisted is a good library, but it has a steep learning curve for such a simple use case.
Check out some examples here.
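As a rough illustration (not tested against a real device), a green-thread version using eventlet's green socket and a timeout guard; the address, port, and 5-second limit here are hypothetical:

import eventlet
from eventlet.green import socket   # drop-in socket that yields instead of blocking

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('192.168.1.50', 2000))   # hypothetical device address/port
sock.send('\r')
with eventlet.Timeout(5, False):       # False: give up silently after 5 seconds
    rawData = sock.recv(512)
sock.close()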
Try putting a timeout on the socket before receiving:
comSocket.settimeout(5.0)

try:
    rawData = comSocket.recv(512)
except socket.timeout:
    print "No response from server"

non-blocking read/log from an http stream

I have a client that connects to an HTTP stream and logs the text data it consumes.
I send the streaming server an HTTP GET request... The server replies and continuously publishes data... It will either publish text or send a ping (text) message regularly... and will never close the connection.
I need to read and log the data it consumes in a non-blocking manner.
I am doing something like this:
import urllib2

req = urllib2.urlopen(url)
for dat in req:
    with open('out.txt', 'a') as f:
        f.write(dat)
My questions are:
will this ever block when the stream is continuous?
how much data is read in each chunk and can it be specified/tuned?
is this the best way to read/log an http stream?
Hey, that's three questions in one! ;-)
It could block sometimes - even if your server is generating data quite quickly, network bottlenecks could in theory cause your reads to block.
Reading the URL data using "for dat in req" will mean reading a line at a time - not really useful if you're reading binary data such as an image. You get better control if you use
chunk = req.read(size)
which can of course block.
Whether it's the best way depends on specifics not available in your question. For example, if you need to run with no blocking calls whatever, you'll need to consider a framework like Twisted. If you don't want blocking to hold you up and don't want to use Twisted (which is a whole new paradigm compared to the blocking way of doing things), then you can spin up a thread to do the reading and writing to file, while your main thread goes on its merry way:
import threading

def func(req):
    # code the read from URL stream and write to file here
    ...

t = threading.Thread(target=func, args=(req,))  # pass the stream to the worker
t.start()  # will execute func in a separate thread
...
t.join()   # will wait for spawned thread to die
Obviously, I've omitted error checking/exception handling etc. but hopefully it's enough to give you the picture.
You're using too high-level an interface to have good control over such issues as blocking and buffering block sizes. If you're not willing to go all the way to an async interface (in which case twisted, already suggested, is hard to beat!), why not httplib, which is after all in the standard library? An HTTPResponse instance's .read(amount) method is more likely to block for no longer than needed to read amount bytes than the similar method on the object returned by urlopen (although admittedly there are no documented specs about that in either module, hmmm...).
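For illustration, a minimal httplib sketch along those lines (Python 2 standard library; the host, path, and chunk size here are hypothetical):

import httplib

conn = httplib.HTTPConnection('stream.example.com')
conn.request('GET', '/feed')
resp = conn.getresponse()          # an HTTPResponse instance

with open('out.txt', 'a') as f:
    while True:
        chunk = resp.read(1024)    # read at most 1024 bytes per call
        if not chunk:              # empty string means the stream ended
            break
        f.write(chunk)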
Another option is to use the socket module directly. Establish a connection, send the HTTP request, set the socket to non-blocking mode, and then read the data with socket.recv() handling 'Resource temporarily unavailable' exceptions (which means that there is nothing to read). A very rough example is this:
import socket, time

BUFSIZE = 1024

s = socket.socket()
s.connect(('localhost', 1234))
s.send('GET /path HTTP/1.0\n\n')
s.setblocking(False)

running = True
while running:
    try:
        print "Attempting to read from socket..."
        while True:
            data = s.recv(BUFSIZE)
            if len(data) == 0:    # remote end closed
                print "Remote end closed"
                running = False
                break
            print "Received %d bytes: %r" % (len(data), data)
    except socket.error, e:
        if e[0] != 11:            # Resource temporarily unavailable
            print e
            raise
    # perform other program tasks
    print "Sleeping..."
    time.sleep(1)
However, urllib.urlopen() has some benefits if the web server redirects, if you need URL-based basic authentication, etc. You could also make use of the select module, which will tell you when there is data to read.
Yes, when you catch up with the server it will block until the server produces more data.
Each dat will be one line, including the newline on the end.
twisted is a good option.
I would swap the with and for around in your example; do you really want to open and close the file for every line that arrives?
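That swap would look like this, reusing url from the question:

import urllib2

req = urllib2.urlopen(url)
with open('out.txt', 'a') as f:    # open the log file once
    for dat in req:                # then iterate over the stream
        f.write(dat)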
