ZMQError: Address already in use while using sockets in loops - python

I am trying to "simulate message passing between 6 nodes in a distributed environment" using Zero MQ in Python, in particular with the classic client/server architecture with REQ and REP. My idea is, while using a TCP/IP connection between these nodes, in the first iteration node-1 must be the server and the clients are the other nodes. In the next one, node-2 will be server and the rest (including node-1) should be clients and so on. At every iteration, server tells that it has established itself and clients send requests to the server to which an acknowledgement is sent back. Once the ACK has been received, the clients send their "MESSAGE" to the server (which is of course viewed as output) and we move to the next iteration.
The problem is that I am hitting the well-known ZMQError: Address already in use.
I'm not sure if it's due to the socket binding. I have added socket.close() and context.term() to both the client and server functions, but in vain.
When I do manage to run the code, the VM goes into a deadlock and I'm unable to recover unless I perform a hard reboot. Here is a snippet of my code:
#staticmethod
def server(node_id):
    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind("tcp://*:%s" % port_s)
    print "Running server node %s on port: %s and value of server = __temp__" % (node_id, port_s)
    message = socket.recv()
    print "Received request : %s from c_node %s and value (temp): __value__" % (message, c_node)
    socket.send("Acknowledged - from %s" % port_s)
    time.sleep(1)
    socket.close()
    context.term()
#staticmethod
def client(c_node):
    context = zmq.Context()
    # print "Server node __num__ with port %s" % port_s
    socket = context.socket(zmq.REQ)
    #for port in ports:
    socket.connect("tcp://localhost:%s" % port_c)
    #for request in range(20):
    print "c_node %s Sending request to server node __num__" % c_node
    socket.send("Hello")
    message = socket.recv()
    print "Received ack from server %s and message %s" % (node_id, message)
    time.sleep(1)
    socket.close()
    context.term()
def node(self, node_id):
    #global node_id
    # global key
    # ser_p = Process(target=self.server, args=(node_id,))
    print 'Memory content of node %d\n' % node_id
    for key in nodes_memory[node_id]:
        print 'Neighbor={%s}, Temp={%s}\n' % (key, nodes_memory[node_id][key])
    #return key
    global c_node
    #key1 = key
    # cli_p = Process(target=self.client, args=(c_node,))
    with open("Book5.csv","r+b") as input:
        has_header = csv.Sniffer().has_header(input.read(1024))
        input.seek(0)  # rewind
        incsv = csv.reader(input)
        if has_header:
            next(incsv)  # skip header
        csv_dict = csv.DictReader(input, skipinitialspace=True, delimiter=",")
        node_id = 0
        for row in csv_dict:
            for i in row:
                #print(row[i])
                if type(row[i]) is str:
                    g.add_edge(node_id, int(i), conn_prob=(float(row[i])))
            max_wg_ngs = sorted(g[node_id].items(), key=lambda e: e[1]["conn_prob"], reverse=True)[:2]
            #maxim = max_wg_ngs.values.tolist()
            #sarr = [str(a) for a in max_wg_ngs]
            print "\nNeighbours of Node %d are:" % node_id
            #print(max_wg_ngs)
            ser_p = multiprocessing.Process(target=self.server, args=(node_id,))
            ser_p.start()
            for c_node, data in max_wg_ngs:
                for key in nodes_memory[node_id]:  #print ''.join(str(item))[1:-1]
                    #if type(key1) == node_id:
                    cli_p = multiprocessing.Process(target=self.client, args=(c_node,))
                    cli_p.start()
                    print('Node {a} with Connection Rate = {w}'.format(a=c_node, w=data['conn_prob']))
                    print('Temperature of Node {a} = {b}'.format(a=c_node, b=nodes_memory[node_id][key]))
            node_id += 1
    pos = nx.spring_layout(g, scale=100.)
    nx.draw_networkx_nodes(g, pos)
    nx.draw_networkx_edges(g, pos)
    nx.draw_networkx_labels(g, pos)
    #plt.axis('off')
    #plt.show()
The "message" is "temperature" (file not shown in the code snippet, but not needed at the moment) and for reference the values of Book5.csv are -
0,1,2,3,4,5
0,0.257905291,0.775104118,0.239086843,0.002313744,0.416936603
0.346100279,0,0.438892758,0.598885794,0.002263231,0.406685237
0.753358102,0.222349243,0,0.407830809,0.001714776,0.507573592
0.185342928,0.571302688,0.51784403,0,0.003231018,0.295197533
0,0,0,0,0,0
0.478164621,0.418192795,0.646810223,0.410746629,0.002414973,0
ser_p and cli_p are the Process objects for the server and client functions, which are started in the node function: ser_p is started inside the for row in csv_dict loop, and cli_p further inside the for c_node, data in max_wg_ngs loop. I'm also using the NetworkX Python library here (only to find the 2 nearest neighbours among the clients, using the connection probability values from Book5.csv).
Does anyone know where I might be going wrong? Why does it report that the address is already in use even though the socket is closed at every iteration?
Thanks a lot in advance :) (Using an Ubuntu 14.04 32-bit VM)

StackOverflow has agreed on an MCVE-based style of asking questions:
Would you mind reaching that level and completing the missing parts of the MCVE? Neither port_c nor port_s is lexically correct (neither is ever defined anywhere).
If the code presented here works with a file, always kindly prepare such a minimum version of the file as will ensure the MCVE code works as you expect it to work with the file. Statements like "file not shown in the code snippet, but not needed at the moment" are not compatible with the StackOverflow MCVE rules.
Next, analyse the logic.
If multiple processes try to .bind() onto the same port# set via port_s, they simply will (and have to) collide and fall into an exception with ZMQError: Address already in use. First reboot the O/S, next pre-scan the already-used IP:port#-s, then set up a non-colliding server-side .bind(). (There could still be a hanging .context() with non-terminated .socket() instance(s), typically left over from manual prototyping or from unhandled exceptions, that keeps the IP:port# occupied without ever freeing it.) So the reboot + port scan is the way ahead.
Use any deterministic, principally non-colliding server-to-<transport-class://address:port> mapping (.bind()-s using wildcards, locking onto all IP-addresses, are a bit of a dangerous habit) and your code will work smoothly.
Always use <socket>.setsockopt( zmq.LINGER, 0 ) so as to prevent infinite deadlocks.
Always use formal try: {...} except: {...} finally: {...} expressions, so as to avoid any unhandled exception orphaning a .context() instance outside of your control, perhaps without a graceful .term()-ination and resource release (even if the newer API tells you that this is not necessary, it is professional to handle these situations explicitly and remain in control, so no exceptions, no excuses).
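For illustration only, a minimal sketch of these three points put together, under the assumption that each node_id can be mapped onto its own port (the BASE_PORT constant and the node_id-to-port arithmetic are placeholders of mine, not taken from the question):

import zmq

BASE_PORT = 5550                                      # assumption: any free base port

def server(node_id):
    port_s = BASE_PORT + node_id                      # deterministic, non-colliding port per node
    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.setsockopt(zmq.LINGER, 0)                  # never hang forever on close()
    try:
        socket.bind("tcp://127.0.0.1:%s" % port_s)    # no wildcard bind
        message = socket.recv()
        socket.send(("Acknowledged - from %s" % port_s).encode())
    finally:
        socket.close()                                # release the port even on exceptions
        context.term()

def client(c_node, server_id):
    port_s = BASE_PORT + server_id                    # connect to the server's deterministic port
    context = zmq.Context()
    socket = context.socket(zmq.REQ)
    socket.setsockopt(zmq.LINGER, 0)
    try:
        socket.connect("tcp://127.0.0.1:%s" % port_s)
        socket.send(b"Hello")
        reply = socket.recv()
    finally:
        socket.close()
        context.term()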

Related

One peer object handle multiple clients

I have a server-client code using TCP and Twisted. I want the first peer object that is created (by order of the first connected client) to serve (send messages) future upcoming clients as well. So I save the first peer (global list) and I use it for all upcoming connections but it only serves the first client (that it's connected to) while ignoring the others.
How can I make the peer to serve all connected clients simultaneously? (I'll test it for no more than 3 clients).
def connectionMade(self):
    global connectedList
    if self.pt == 'client':
        self.connected = True
    else:
        print "Connected from", self.transport.client
        try:
            self.transport.write('<connection up>')
        except Exception, e:
            print e.args[0]
        self.ts = time.time()
        reactor.callLater(5, self.sendUpdate)
        connectedList.append(self.transport)  # add peer object

def sendUpdate(self):
    global updateCounter, connectedList
    print "Sending update"
    try:
        updateCounter += 1
        print(connectedList[0])
        # Send updates through first connected peer
        connectedList[0].write('<update ' + str(updateCounter) + '>')
    except Exception, ex1:
        print "Exception trying to send: ", ex1.args[0]
    if self.connected == True:
        reactor.callLater(5, self.sendUpdate)
to serve (send messages) future upcoming clients as well
This sentence is difficult to understand. My interpretation is that you want sendUpdate to send messages to all of the clients except the first (ordered by when they connected).
but it only serves the first client
This is similarly difficult. My interpretation is that you observe a behavior in which only the first client (ordered by when they connected) receives any messages from the server.
Here is your code for sending messages to clients:
connectedList[0].write('<update ' + str(updateCounter) + '>')
Notice that this code always sends a message to connectedList[0]. That is, it only sends a message to one client - regardless of how many there are - and it always selects the first client in connectedList (which corresponds to the first client to connect to the server).
You may want something more like this:
for c in connectedList[1:]:
    c.write('<update ' + str(updateCounter) + '>')
Notice how this sends a message to more than one client.
Also, unrelated to your question, you should eliminate your use of globals and you should avoid using a bare ITransport as your protocol interface.
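For instance, a minimal sketch of tracking connected clients on a Factory instead of in a global list might look like this (the UpdateProtocol / UpdateFactory names are illustrative, not part of your code):

from twisted.internet.protocol import Factory, Protocol

class UpdateProtocol(Protocol):
    def connectionMade(self):
        # the default buildProtocol() sets self.factory, so no globals are needed
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        self.factory.clients.remove(self)

class UpdateFactory(Factory):
    protocol = UpdateProtocol

    def __init__(self):
        self.clients = []

    def broadcast(self, message):
        # send a message through every currently connected protocol
        for client in self.clients:
            client.transport.write(message)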

ZeroMQ Pub/Sub action last element in queue an other elements

I started using ZeroMQ with Python following the Publisher/Subscriber reference. However, I can't find any documentation about how to treat the messages in the queue. I want to treat the last received message differently from the rest of the elements of the queue.
Example
publisher.py
import zmq
import random
import time

port = "5556"
topic = "1"
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:%s" % port)
while True:
    messagedata = random.randrange(1, 215)
    print "%s %d" % (topic, messagedata)
    socket.send("%s %d" % (topic, messagedata))
    time.sleep(.2)
subscriber.py
import zmq

port = "5556"
topic = "1"
context = zmq.Context()
socket = context.socket(zmq.SUB)
print "Connecting..."
socket.connect("tcp://localhost:%s" % port)
socket.setsockopt(zmq.SUBSCRIBE, topic)
while True:
    if isLastMessage():       # probably based on socket.recv()
        analysis_function()   # time consuming function
    else:
        simple_function()     # something simple like print and save in memory
I just want to know how to create the isLastMessage() function described in the subscriber.py file. If there's something directly in zeromq or a workaround.
Welcome to the world of non-blocking messaging / signalling
This is a cardinal feature of any serious distributed-system design.
If by a "last" message you mean "there is not another one waiting in the pipe", then a Poller() instance may help your main event-loop: it lets you control how long to "wait" a bit before considering the pipe "empty", without devastating your IO-resources with zero-wait spinning loops.
Explicit signalling is always better ( if you can design the remote end behaviour )
There is zero knowledge on the receiver-side about what the context of the "last" received message is (explicit signalling is advised, broadcast rather from the message sender-side). However, there is a reversed feature to this: an option that instructs ZeroMQ archetypes to "internally" throw away all messages that are not the "last" one, thus reducing the receiver-side processing to just the "last" message available.
aQuoteStreamMESSAGE.setsockopt( zmq.CONFLATE, 1 )
If you would like to read more on ZeroMQ patterns and anti-patterns, do not miss Pieter HINTJENS' fabulous book "Code Connected, Volume 1" (also available as a pdf), and you may also like a broader view on distributed computing using a principally non-blocking ZeroMQ approach.
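Applied to the subscriber above, the conflating variant might look roughly like this (note the option has to be set before .connect(), and it is documented not to work with multipart messages):

socket = context.socket(zmq.SUB)
socket.setsockopt(zmq.CONFLATE, 1)            # keep only the most recent inbound message
socket.connect("tcp://localhost:%s" % port)
socket.setsockopt(zmq.SUBSCRIBE, topic)

while True:
    last_msg = socket.recv()                  # always the freshest message available
    analysis_function()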
If isLastMessage() is meant to identify the last message within the stream of messages produced by publisher.py, then this is impossible, since there is no last message: publisher.py produces an infinite number of messages!
However, if publisher.py knows its last "real" message, i.e. there is no while True:, it could send an "I am done" message afterwards. Identifying that in subscriber.py is trivial.
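A sketch of that idea, assuming the publisher eventually finishes (the "DONE" sentinel string is an arbitrary choice of mine):

# publisher.py -- after the last real message
socket.send("%s DONE" % topic)            # explicit "I am done" sentinel

# subscriber.py
while True:
    message = socket.recv()
    if message.endswith("DONE"):          # sentinel received: the previous one was the last
        analysis_function()
        break
    simple_function()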
Sorry, I will keep the question for reference. I just found the answer: the documentation describes a NOBLOCK flag that you can pass to the receiver, and with it the recv command doesn't block. A simple workaround, extracted from part of an answer, is the following:
while True:
    try:
        # check for a message, this will not block
        message = socket.recv(flags=zmq.NOBLOCK)
        # a message has been received
        print "Message received:", message
    except zmq.Again as e:
        print "No message received yet"
As for the real implementation: you cannot be sure a message is the last one until a recv call with the NOBLOCK flag fails and you have entered the exception block. That translates to something like the following:
msg = subscribe(in_socket)
is_last = False
while True:
    if is_last:
        msg = subscribe(in_socket)
        is_last = False
    else:
        try:
            old_msg = msg
            msg = subscribe(in_socket, flags=zmq.NOBLOCK)
            # if a new message was received, then process the old message
            process_not_last(old_msg)
        except zmq.Again as e:
            process_last(msg)
            is_last = True  # it is probably the last message

Can't receive data from socket

I'm making a client-server program, and there is a problem with the client part.
The problem is that receiving data blocks forever. I've tested this particular class, listed below, in a Python interpreter. I successfully (maybe not) connected to Google, but then the program stopped inside recvData() at data = self.socket.recv(1024).
class client():
    def __init__(self, host, port):
        self.host = host
        self.port = port
        self.socket = self.connect()
        self.command = commands()

    def connect(self):
        '''
        Connect to a remote host.
        '''
        try:
            import socket
            return socket.create_connection((self.host, self.port))
        except socket.error:
            print(":: Failed to connect to a remote port : ")

    def sendCommand(self, comm):
        '''
        Send command to remote host
        Returns server output
        '''
        comman = comm.encode()
        # for case in switch(comman):
        #     if case(self.command.RETRV_FILES_LIST.encode()):
        #         self.socket.send(b'1')
        #         return self.recvData()
        #     if case():
        #         print(":: Got wrong command")
        if (comman == b'1'):
            self.socket.send(b'1')
            return self.recvData()

    def recvData(self):
        '''
        Receives all the data
        '''
        i = 0
        total_data = []
        while(True):
            data = self.socket.recv(1024)
            if not data: break
            total_data.append(data)
            i += 1
            if i > 9:
                break
        return total_data
About the commented-out part: I thought the problem was in the Case implementation, so I used a plain if-then statement instead. But that's not it.
Your problem is that self.socket.recv(1024) only returns an empty string when the socket has been shut down on the server side and all data has been received. The way you coded your client, it has no idea that the full message has been received and waits for more. How you deal with the problem depends very much on the protocol used by the server.
Consider a web server. It sends a line-delimited header including a content-length parameter telling the client exactly how many bytes it should read. The client scans for newlines until the header is complete and then uses that value to do recv(exact_size) (if large, it can read chunks instead) so that the recv won't block when the last byte comes in.
Even then, there are decisions to make. The client knows how large the web page is, but it may want to hand partial data back to the caller so it can start painting the page before all the data has been received. Of course, the caller needs to know that this is what happens; there is a protocol or set of rules for the API itself.
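For example, a sketch of the "read exactly N bytes" idea (recv_exact is an illustrative helper name, not a standard library function):

def recv_exact(sock, size):
    '''Read exactly `size` bytes from a connected socket, or raise on early EOF.'''
    chunks = []
    remaining = size
    while remaining > 0:
        chunk = sock.recv(min(remaining, 4096))
        if not chunk:  # peer closed before sending everything
            raise EOFError("connection closed with %d bytes missing" % remaining)
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)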
You need to define how the client knows a message is complete and what exactly it passes back to its caller. A great way to deal with the problem is to let some other protocol such as zeromq (http://zeromq.org/) do the work for you. A simple Python client / server can also be implemented with xmlrpc. And there are many other ways.
You said you are implementing a client/server program, but then you mentioned "connecting to Google" and telnet... These are all very different things, and a single client strategy won't work with all of them.

Drop Incoming 'packets' for Datagram Socket

This question is really focused on my problem and is not related to any of the other questions I could find on this topic.
PSA: When I say "packet" I mean a full string received in a single socket.recv(maxsize)
I developed similar code for the same result in Java (my preferred language) and it works fine; now I have to do it in Python.
I have two processes that run in parallel:
1. A normal client socket connected to a specific IP.
2. A "client" datagram socket bound to "ALL" IPs.
The normal socket is working correctly, as I expect, while the datagram socket is not.
I continuously receive packets from a server (not mine and not open source) at a rate of more than 5 per second, but I want to process only one of them every 3 seconds. In Java I just added a "sleep" and it was fine: I was getting only the last live packet. In Python, with time.sleep(3), the packets are queued (I don't know how or where) and not dropped.
I HAVE to drop them because they are not needed, and I have to do an HTTP call between one and the other, so I can't fire an HTTP POST for every set of data received at that rate!
here it is my "code" for the listening socket, some comments are for private code:
def listenPositions():
    lsSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    lsSocket.bind(("0.0.0.0", 8787))
    lsSocket.setblocking(0)
    try:
        while True:
            ready = select.select([lsSocket], [], [], 1)
            if ready[0]:
                lsSocket.settimeout(1)
                recvData = lsSocket.recv(16384)
                if len(recvData) != 0:
                    recv = recvData[0:len(recvData)].decode("utf-8")
                    #print("LS: Received: " + recv)
                    strings = filter(None, str(recv).split('\n'))
                    print("Strings count=" + str(len(strings)) + ": " + str(strings))
                    for item in strings:
                        #parse the received strings as json and get the items
                        jsonPosition = json.loads(item)
                        strId = jsonPosition["id"]
                        coordinates = jsonPosition.get("coordinates")
                        if coordinates is None:
                            continue
                        print("coordinates not null:" + str(coordinates))
                        #DO THE HTTP POST REQUEST
                        time.sleep(3)  #Pause the system for X seconds, but other packets are queued!
                else:
                    print("LS: Received empty")
            else:
                print("LS: No data, timeout")
    except Exception as e:
        print(e)
        #handle exceptions...
        print("Exception, close everything")
When you have an open socket, all correctly addressed packets should be delivered to the application. We want to have our network connections as reliable as possible, don't we? Dropping a packet is an action of last resort.
If you want to get a packet only from time to time, you could create a listening socket, get a packet and close the socket.
However, there is nothing easier than ignoring a packet: just skip its processing and move on. The code below is incomplete, but hopefully it expresses what I mean.
TIMEOUT = 1.0
INT = 3.0  # interval in seconds

# create udp_socket
last = time.time() - INT
udp_socket.settimeout(TIMEOUT)
while True:
    try:
        packet = udp_socket.recv(MAXSIZE)
    except socket.timeout:
        # handle recv timeout
        continue  # or break, or return
    except OSError:
        # handle recv error (Python 3.3+)
        break  # or continue, or return
    now = time.time()
    if now - last >= INT:
        # process the packet
        last = now
Please note that the select is not needed if you read only from one source.

Python: Won't Connect to Server

I'm not able to connect to the server: it prints out
"Connecting to ports..." and then just says "Sockets timed out."
My program is due tomorrow and it'd be nice to have this actually work.
EDITED CODE: Now only "Connecting to ports..." is printed,
nothing else.
import socket, string, time, random, re, urllib2, cookielib, smtplib, os

class Pibot: #main class
    def __init__(self): #basic information to allow for the rest of the program to work.
        self.server = 'irc.evilzone.org'
        self.port = 6667
        self.botname = 'pibot'
        self.chan = 'test'
        self.owner = 'Josh.H'
        self.nick = "bawt"
        self.irc = None
        self.data = ''

    def iConnect(self): #trys to connect to the server and allows the user to see if it failed to connect.
        print ("Connecting to ports...")
        print self.data
        time.sleep(3)
        try:
            self.irc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.irc.connect((self.server, self.port))
        except (socket.error, socket.herror, socket.gaierror):
            print "Failed to connect to Ports"

    def iStart(self):
        #Not guaranteed to send all your data, isn't checking the return values
        #however this function iStart is used to send the NICK of the bot and the USER to the server through particle data
        #it then auto joins the channel
        #in future development I'd like to get acquainted with Twisted or IRCutils as they allow it to be quite powerful and less buggy
        self.irc.send('NICK %s\r\n' % self.nick)
        self.irc.send("USER %s %s bla :%s\r\n" % ("Ohlook", 'itsnotmy', 'Realname'))
        time.sleep(4)
        self.irc.send("JOIN #%s\r\n" % self.chan)
        self.data = self.irc.recv( 4096 )

    def MainLoop(self, iParse = 0): #MainLoop is used to make the commands executable ie !google !say etc;
        try:
            while True:
                # This method sends a ping to the server and if it pings it will send a pong back
                #in other clients they keep receiving till they have a complete line however mine does not as of right now
                #The PING command is used to test the presence of an active client or
                #server at the other end of the connection. Servers send a PING
                #message at regular intervals if no other activity detected coming
                #from a connection. If a connection fails to respond to a PING
                #message within a set amount of time, that connection is closed. A
                #PING message MAY be sent even if the connection is active.
                #PONG message is a reply to PING message. If parameter <server2> is
                #given, this message will be forwarded to given target. The <server>
                #parameter is the name of the entity who has responded to PING message
                #and generated this message.
                self.data = self.irc.recv( 4096 )
                if self.data.find ( 'PING' ) != -1:
                    self.irc.send(( "PONG %s \r\n" ) % (self.recv.split() [ 1 ])) #Possible overflow problem
                if self.data.find( "!google" ) != -1:
                    #googles the search term and displays the first 5 results
                    #format = !google: <Search Term>
                    #One thing that I noticed is that it will print on a separate line without the header
                    #In the next Update I would have fixed this.
                    fin = data.split(':')
                    if not fin:
                        irc.send("PRIVMSG #%s :syntax'^google :search term\r\n'" % chan)
                    else:
                        #In the next version to avoid overflow I will create another if statement and edit the search code
                        #However I am using what xgoogle has recommended.
                        fin = fin[3].strip()
                        gs = GoogleSearch(fin)
                        gs.results_per_page = 5
                        results = gs.get_results()
                        for result in results:
                            irc.send("PRIVMSG #%s :%s\r\n" % (chan, result.url.encode("utf8")))
                ###############################################################################################################################
                # No exception checking here, these functions can and will fail in time and in later versions will need to be edited.
                # If hellboundhackers changes, this code may break
                # This function takes a quote from the header of hellboundhackers
                # it first looks at the header of the User agent then the header of the website (HBH) and reads it then prints
                # the quote when QUOTEM is recognized in the irc, closes the connection to the website and deletes the cookie
                ###############################################################################################################################
                if "QUOTEM" in self.data:
                    #Pulls a quote from HBH
                    cj = cookielib.CookieJar()
                    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
                    opener.addheaders.append(('User-agent', 'Mozilla/4.0'))
                    opener.addheaders.append( ('Referer', 'http://www.hellboundhackers.org/index.php') )
                    resp = opener.open('http://www.hellboundhackers.org/index.php')
                    r = resp.read()
                    resp.close()
                    del cj, opener
                    da = re.findall("Enter; width:70%;'>(.*)", r)
                    self.irc.send("PRIVMSG #%s :%s\r\n" % (chan, da[0])) # Note Possible overflow
                if "!whoareyou" in self.data:
                    #bot info allows users on IRC to see which commands are currently working
                    self.irc.send("PRIVMSG #%s :I am %s, I was created By:%s \r\n" % (self.chan, self.nick, self.owner))
                    self.irc.send("PRIVMSG #%s :I was written in Python 27, and edited with IDLE\r\n" % self.chan)
                    self.irc.send("PRIVMSG #%s :The Classes used are socket, string, time, re, urllib2, cookielib\r\n" % self.chan)
                    self.irc.send("PRIVMSG #%s :As well as some functions from various other sources(supybot,twisted,xgoogle)\r\n" % self.chan)
                    self.irc.send("PRIVMSG #%s :type ^commands for a list of things I can do\r\n" % self.chan)
        except (socket.error, socket.timeout):
            print "Sockets timed out."

bot = Pibot()
bot.iConnect()
bot.MainLoop()
Side note: no errors are printed.
Help is greatly appreciated. Also, I am just learning, so don't flame me. :(
EDIT2: I have fixed most of the problems and am now getting this error:
Traceback (most recent call last):
  File "L:\txtbot.py", line 119, in <module>
    bot.MainLoop()
  File "L:\txtbot.py", line 64, in MainLoop
    self.irc.send(( "PONG %s \r\n" ) % (self.recv.split() [ 1 ])) #Possible overflow problem
AttributeError: Pibot instance has no attribute 'recv'
It seems you're never passing the connection information to the socket:
self.irc = socket.socket()
I think it should be something like this:
self.irc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.irc.connect((self.server, self.port))
In iConnect you're just creating a socket, not connecting it to the server. You need to use socket.create_connection.
Also, lumping together socket.error and socket.timeout is not a good idea, as it might be misleading when debugging. Similarly, you should print the actual error, not just a generic message; it will help you figure out what's wrong.
You don't call iStart anywhere. If I remember my IRC correctly, you need to send your nick information before it will send you any data back.
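A sketch of the points above put together (purely illustrative, based on the class as posted):

def iConnect(self):
    print "Connecting to %s:%s ..." % (self.server, self.port)
    try:
        # create_connection() both creates the socket and connects it
        self.irc = socket.create_connection((self.server, self.port))
    except socket.error as e:
        # print the real error instead of a generic message
        print "Failed to connect:", e
        raise

bot = Pibot()
bot.iConnect()
bot.iStart()    # send NICK / USER / JOIN before entering the main loop
bot.MainLoop()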
