I have a server running on a desktop machine with IP address 192.168.1.11, and the client code runs on another machine that reaches it through an OpenVPN Connect tunnel. When I run the code below, the client sends the request but the server never receives it.
Server.py:
import zmq
context=zmq.Context()
socket=context.socket(zmq.REP)
socket.bind("tcp://*:8080")
while True:
message=socket.recv_pyobj()
print("%s:%s" %(message.get(1)[0],message.get(1)[1]))
socket.send_pyobj({1:[message.get(1)[0],message.get(1)[1]]})
Client.py:
import zmq
context=zmq.Context()
socket=context.socket(zmq.REQ)
socket.connect("tcp://192.168.1.11:8080")
name="Test"
while True:
message=input("Test Message")
socket.send_pyobj(({1:[name,message]}))
Thanks, any help is highly appreciated.
Q : "Issue in connecting client socket with server socket"
Step 0 : prove that OSI/ISO Layer-3 visibility has been achieved: traceroute <targetIP>
Step 1 : having achieved a visible route to <targetIP>, repair the code to meet the documented REQ/REP properties
Step 2 : having achieved a visible route to <targetIP> and REQ/REP conformance, improve the robustness of the code
context = zmq.Context()
socket = context.socket( zmq.REP )
socket.bind( "tcp://*:8080" )
#---------------------------------------------- # ROBUSTNESS CONFIGs
socket.setsockopt( zmq.LINGER, 0 ) # .set explicitly
socket.setsockopt( zmq.MAXMSGSIZE, ... ) # .set safety ceiling
socket.setsockopt( ..., ... ) # .set ...
#---------------------------------------------- # ROBUSTNESS CONFIGs
while True:
message = socket.recv_pyobj() # .recv() a request from REQ-side
print( "%s:%s" % ( message.get(1)[0], # shall improve robustness
message.get(1)[1] # for cases other than this
)
)
socket.send_pyobj( { 1: [ message.get(1)[0], # REP must "answer" to REQ
message.get(1)[1]
]
}
)
TARGET_IP = "<targetIP>" # <targetIP> from Step 0
PORT_NUMBER = 8080
socket = context.socket( zmq.REQ )
socket.connect( "tcp://{0:}:{1:}".format( TARGET_IP, PORT_NUMBER ) )
#---------------------------------------------- # ROBUSTNESS CONFIGs
socket.setsockopt( zmq.LINGER, 0 ) # .set explicitly
socket.setsockopt( zmq.MAXMSGSIZE, ... ) # .set safety ceiling
socket.setsockopt( ..., ... ) # .set ...
#---------------------------------------------- # ROBUSTNESS CONFIGs
name = "Test"
while True:
message = input( "Test Message" )
socket.send_pyobj( ( { 1: [ name, # REQ-side sends a request
message # here
] # bearing a tuple
} # with a dict
) # having a list
) # for a single key
#------------------------------------------ # REQ-side now MUST also .recv()
_ = socket.recv() # before it can .send() again
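For completeness, a minimal client sketch that respects the strict send/recv alternation might look like this (a sketch only, assuming the repaired server above is reachable at <targetIP>:8080 and a Python 3 interpreter; exception handling is kept minimal):

import zmq

TARGET_IP   = "<targetIP>"                                # <targetIP> from Step 0
PORT_NUMBER = 8080

context = zmq.Context()
socket  = context.socket( zmq.REQ )
socket.setsockopt( zmq.LINGER, 0 )                        # don't hang on close
socket.connect( "tcp://{0:}:{1:}".format( TARGET_IP, PORT_NUMBER ) )

name = "Test"
try:
    while True:
        message = input( "Test Message: " )
        socket.send_pyobj( { 1: [ name, message ] } )     # REQ-side .send()s a request ...
        reply   = socket.recv_pyobj()                     # ... and MUST .recv() the REP-side answer
        print( "Server answered: {0:}".format( reply ) )  # before the next .send() is legal
finally:
    socket.close()
    context.term()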
Related
I have created a simple client/server using pyzmq.
One thing I am not sure about: the .recv() does not receive the message even though it has been sent from the server. It just ignores it and throws an error, which I find strange.
Client.py
import zmq

context = zmq.Context()
try:
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:2222")
print("Sending request")
socket.send(b"send the message")
message = socket.recv(flags=zmq.NOBLOCK)
print("Received reply %s " % message)
except Exception as e:
print(str(e))
Server.py
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:2222")
while True:
message = socket.recv()
socket.send(b"Ack")
I think the client should receive the Ack and print it instead of throwing the exception.
The document says,
With flags=NOBLOCK, this raises ZMQError if no messages have arrived
Clearly the server is responding with "Ack" as soon as it receives the message.
The Error message is,
Resource temporarily unavailable
Remember that in concurrent environments there are no guarantees about the order of execution of independent processes. Even though you are responding immediately to the message in the server.py, the response may not get to the receiving socket before you call socket.recv. When you call socket.send the message needs to go over the network to your server, the server needs to create the message and respond and then the message needs to go back over the network to your client code. The time to send the messages over the network will be quite long and you are calling socket.recv immediately after socket.send.
So in fact when you call message = socket.recv(flags=zmq.NOBLOCK) the client socket will not have received the Ack from the server yet, and since you are using NOBLOCK an error is thrown since no messages have been received on the socket.
NOBLOCK is likely not appropriate in this scenario. You can experiment with this by adding a sleep call in between send and recv to show that the time delay of waiting for the response from the server is indeed the issue but that's not a good solution for your client code long-term.
If you want to exit after waiting for a certain amount of time you should use socket.poll instead.
event = socket.poll(timeout=3000) # wait 3 seconds
if event == 0:
# timeout reached before any events were queued
pass
else:
# events queued within our time limit
msg = socket.recv()
Pyzmq Doc for socket.poll()
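Putting the pieces together, a minimal client sketch based on socket.poll() could look like this (assuming the server from the question is still listening on port 2222):

import zmq

context = zmq.Context()
socket = context.socket( zmq.REQ )
socket.setsockopt( zmq.LINGER, 0 )        # don't hang on close if the server never answered
socket.connect( "tcp://localhost:2222" )

print( "Sending request" )
socket.send( b"send the message" )

event = socket.poll( timeout=3000 )       # wait up to 3 seconds for the reply to arrive
if event == 0:
    print( "No reply within 3 seconds, giving up" )
else:
    message = socket.recv()               # the reply is already queued, so this returns immediately
    print( "Received reply %s " % message )

socket.close()
context.term()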
Q : say the server is not up; in that case the recv() in the client will be blocked forever, which I don't want.
ZeroMQ is a fabulous framework for doing smart signalling/messaging in distributed systems.
Let's sketch a demo of a principally non-blocking modus operandi, with some inspiration on how the resources ought to be both acquired and also gracefully released before process termination.
A bit of reading about the main conceptual differences in the ZeroMQ hierarchy, doable in less than five seconds, will also help.
Server.py
import time
import zmq

aContext = zmq.Context()
aLightHouse = aContext.socket( zmq.PUB )
aRepSocket = aContext.socket( zmq.REP )
aRepSocket.setsockopt( zmq.LINGER, 0 )
aRepSocket.setsockopt( zmq.COMPLETE, 1 )
aRepSocket.bind( "tcp://*:2222" )
aLightHouse.bind( "tcp://*:3333" )
aLightHouse.setsockopt( zmq.LINGER, 0 )
aLightHouse.setsockopt( zmq.CONFLATE, 1 )
aLightHouse_counter = 0
#------------------------------------------------------------
print( "INF: Server InS: ZeroMQ({0:}) going RTO:".format( zmq.zmq_version() ) )
#------------------------------------------------------------
while True:
try:
aLightHouse_counter += 1
aLightHouse.send( "INF: server-RTO blink {0:}".format( repr( aLightHouse_counter ) ),
zmq.NOBLOCK
)
if ( 0 < aRepSocket.poll( 0, zmq.POLLIN ) ):
try:
message = aRepSocket.recv( zmq.NOBLOCK ); print( "INF: .recv()ed {0:}".format( message ) )
pass; aRepSocket.send( b"Ack", zmq.NOBLOCK ); print( "INF: .sent() ACK" )
except:
# handle EXC: based on ...
print( "EXC: reported as Errno == {0:}".format( zmq.zmq_errno() ) )
else: pass # NOP / Sleep / do other system work-units to get processed during the infinite-loop
except:
# handle EXC:
print( "EXC: will break ... and terminate OoS ..." )
break
#------------------------------------------------------------
print( "INF: will soft-SIG Server going-OoS..." )
aLightHouse.send( "INF: server goes OoS ... " )
#------------------------------------------------------------
print( "INF: will .close() and .term() resources on clean & graceful exit..." )
time.sleep( 0.987654321 )
aRepSocket.unbind( "tcp://*:2222" )
aRepSocket.close()
aLightHouse.unbind( "tcp://*:3333" )
aLightHouse.close()
aContext.term()
#------------------------------------------------------------
print( "INF: over and out" )
Client.py
import zmq

try:
aContext = zmq.Context()
aReqSocket = aContext.socket( zmq.REQ )
aBeeper = aContext.socket( zmq.SUB )
aReqSocket.setsockopt( zmq.LINGER, 0 )
aReqSocket.setsockopt( zmq.COMPLETE, 1 )
aReqSocket.connect( "tcp://localhost:2222" )
aBeeper.connect( "tcp://localhost:3333" )
aBeeper.setsockopt( zmq.SUBSCRIBE, "" )
aBeeper.setsockopt( zmq.CONFLATE, 1 )
#------------------------------------------------------------
print( "INF: Client InS: ZeroMQ({0:}) going RTO.".format( zmq.zmq_version() ) )
#------------------------------------------------------------
try:
while True:
if ( 0 == aBeeper.poll( 1234 ) ):
print( "INF: Server OoS or no beep visible within a LoS for the last 1234 [ms] ... " )
else:
print( "INF: Server InS-beep[{0:}]".format( aBeeper.recv( zmq.NOBLOCK ) ) )
try:
print( "INF: Going to sending a request" )
aReqSocket.send( b"send the message", zmq.NOBLOCK )
print( "INF: Sent. Going to poll for a response to arrive..." )
while ( 0 == aReqSocket.poll( 123, zmq.POLLIN ) ):
print( "INF: .poll( 123 ) = 0, will wait longer ... " )
message = aReqSocket.recv( flags = zmq.NOBLOCK )
print( "INF: Received a reply %s " % message )
except Exception as e:
print( "EXC: {0:}".format( str( e ) ) )
print( "INF: ZeroMQ Errno == {0:}".format( zmq.zmq_errno() ) )
print( "INF: will break and terminate" )
break
except Exception as e:
print( "EXC: {0:}".format( str( e ) ) )
finally:
#------------------------------------------------------------
print( "INF: will .close() and .term() resources on clean & graceful exit..." )
aBeeper.close()
aReqSocket.close()
aContext.term()
#------------------------------------------------------------
print( "INF: over and out" )
You are using non-blocking mode, which means it raises an error to inform you that there is nothing to receive yet and you should try again later; if you use blocking mode instead, it blocks until the peer's reply arrives.
This answer is from here.
Basically, if you remove flags=zmq.NOBLOCK it will work.
Update
If you want to use non-blocking mode, you should have a look at this.
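For reference, a minimal sketch of what the non-blocking variant could look like: pyzmq raises zmq.Again (the "Resource temporarily unavailable" error) while nothing has arrived yet, so you catch it and retry later instead of treating it as a failure.

import time
import zmq

context = zmq.Context()
socket = context.socket( zmq.REQ )
socket.connect( "tcp://localhost:2222" )

socket.send( b"send the message" )
while True:
    try:
        message = socket.recv( flags=zmq.NOBLOCK )   # returns at once if nothing is queued yet
        print( "Received reply %s " % message )
        break
    except zmq.Again:                                # EAGAIN: "Resource temporarily unavailable"
        time.sleep( 0.1 )                            # do other useful work here instead of sleeping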
I am using mininet to simulate a linear topology, 3 switches connected to each other and one host connected to each switch. Looks like this.
I used the mobility script provided on github to perform mobility within mininet. The script uses the default controller of Mininet.
Now I would like to use a remote OpenDaylight controller to perform the mobility, or more specifically to move h1 from s1 to s2 and then back to s1.
My code is a modified version of the original mobility script. Changes are in the mobilityTest() method and the CLI was added for debugging.
#!/usr/bin/python
"""
Simple example of Mobility with Mininet
(aka enough rope to hang yourself.)
We move a host from s1 to s2, s2 to s3, and then back to s1.
Gotchas:
The reference controller doesn't support mobility, so we need to
manually flush the switch flow tables!
Good luck!
to-do:
- think about wifi/hub behavior
- think about clearing last hop - why doesn't that work?
"""
from mininet.node import RemoteController
from mininet.net import Mininet
from mininet.node import OVSSwitch
from mininet.topo import LinearTopo
from mininet.log import info, output, warn, setLogLevel
from mininet.examples.clustercli import CLI
from random import randint
import time
class MobilitySwitch( OVSSwitch ):
"Switch that can reattach and rename interfaces"
def delIntf( self, intf ):
"Remove (and detach) an interface"
port = self.ports[ intf ]
del self.ports[ intf ]
del self.intfs[ port ]
del self.nameToIntf[ intf.name ]
def addIntf( self, intf, rename=False, **kwargs ):
"Add (and reparent) an interface"
OVSSwitch.addIntf( self, intf, **kwargs )
intf.node = self
if rename:
self.renameIntf( intf )
def attach( self, intf ):
"Attach an interface and set its port"
port = self.ports[ intf ]
if port:
if self.isOldOVS():
self.cmd( 'ovs-vsctl add-port', self, intf )
else:
self.cmd( 'ovs-vsctl add-port', self, intf,
'-- set Interface', intf,
'ofport_request=%s' % port )
self.validatePort( intf )
def validatePort( self, intf ):
"Validate intf's OF port number"
ofport = int( self.cmd( 'ovs-vsctl get Interface', intf,
'ofport' ) )
if ofport != self.ports[ intf ]:
warn( 'WARNING: ofport for', intf, 'is actually', ofport,
'\n' )
def renameIntf( self, intf, newname='' ):
"Rename an interface (to its canonical name)"
intf.ifconfig( 'down' )
if not newname:
newname = '%s-eth%d' % ( self.name, self.ports[ intf ] )
intf.cmd( 'ip link set', intf, 'name', newname )
del self.nameToIntf[ intf.name ]
intf.name = newname
self.nameToIntf[ intf.name ] = intf
intf.ifconfig( 'up' )
def moveIntf( self, intf, switch, port=None, rename=True ):
"Move one of our interfaces to another switch"
self.detach( intf )
self.delIntf( intf )
switch.addIntf( intf, port=port, rename=rename )
switch.attach( intf )
def printConnections( switches ):
"Compactly print connected nodes to each switch"
for sw in switches:
output( '%s: ' % sw )
for intf in sw.intfList():
link = intf.link
if link:
intf1, intf2 = link.intf1, link.intf2
remote = intf1 if intf1.node != sw else intf2
output( '%s(%s) ' % ( remote.node, sw.ports[ intf ] ) )
output( '\n' )
def moveHost( host, oldSwitch, newSwitch, newPort=None ):
"Move a host from old switch to new switch"
hintf, sintf = host.connectionsTo( oldSwitch )[ 0 ]
oldSwitch.moveIntf( sintf, newSwitch, port=newPort )
return hintf, sintf
def mobilityTest():
"A simple test of mobility"
info( '* Simple mobility test\n' )
net = Mininet( topo=LinearTopo( 3 ), controller=lambda a: RemoteController( a, ip='127.0.0.1', port=6653), switch=MobilitySwitch )
info( '* Starting network:\n' )
net.start()
printConnections( net.switches )
info( '* Testing network\n' )
time.sleep( 3 )
net.pingAll()
info( '* Identifying switch interface for h1\n' )
h1, old = net.get( 'h1', 's1' )
CLI( net )
for s in 2, 3, 1:
new = net[ 's%d' % s ]
port = randint( 10, 20 )
info( '* Moving', h1, 'from', old, 'to', new, 'port', port, '\n' )
hintf, sintf = moveHost( h1, old, new, newPort=port )
info( '*', hintf, 'is now connected to', sintf, '\n' )
info( '* Clearing out old flows\n' )
for sw in net.switches:
sw.dpctl( 'del-flows' )
info( '* Add flow rule\n' )
CLI( net )
info( '* New network:\n' )
printConnections( net.switches )
info( '* Testing connectivity:\n' )
net.pingAll()
old = new
net.stop()
if __name__ == '__main__':
setLogLevel( 'info' )
mobilityTest()
When running the script, the linear topology is set up successfully, which I can see in the DLUX GUI of the ODL controller. I can also ping between all hosts.
The problem occurs when mobility is performed and h1 is connected to s2.
Pinging does not work anymore.
Populating the flow-tables with a flow-rule to the controller did not solve the problem.
mininet> dpctl add-flow "table=0, priority=100,dl_type=0x88cc actions=CONTROLLER:65535"
*** s1 ------------------------------------------------------------------------
*** s2 ------------------------------------------------------------------------
*** s3 ------------------------------------------------------------------------
mininet> dpctl dump-flows
*** s1 ------------------------------------------------------------------------
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=7.914s, table=0, n_packets=0, n_bytes=0, idle_age=7, priority=100,dl_type=0x88cc actions=CONTROLLER:65535
*** s2 ------------------------------------------------------------------------
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=7.916s, table=0, n_packets=1, n_bytes=85, idle_age=2, priority=100,dl_type=0x88cc actions=CONTROLLER:65535
*** s3 ------------------------------------------------------------------------
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=7.917s, table=0, n_packets=1, n_bytes=85, idle_age=2, priority=100,dl_type=0x88cc actions=CONTROLLER:65535
mininet> pingall
*** Ping: testing ping reachability
h1 -> X X
h2 -> X X
h3 -> X X
*** Results: 100% dropped (0/6 received)
Then I monitored all interfaces with tcpdump while trying to ping from h1 to h2, which are now both connected to s2. h1 sends out an ARP request, which is received by switch s2, but s2 does not forward the ARP request to the other switches and hosts.
I would like to know why my ping does not work anymore after performing mobility.
Any help is highly appreciated.
Thanks
Well, the bot I made in Python is connecting to random servers and I can't figure out why. So far this code is a mishmash of other projects, so maybe I'm overlooking something. Basically I want it to connect to irc.rizon.net, join #brook_nise and then lurk there.
I connect to these servers when I run my script:
irc.rizon.io
irc.sxci.net
irc.broke-it.com
irc.rizon.sexy
import socket
network = 'irc.rizon.net'
network = network.encode(encoding='UTF-8',errors='strict')
port = 6667
irc = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
USR = "USER boxxxy boxxxy boxxxy :boxxxy\r\n"
PAS = '/msg NickServ IDENTIFY pass\r\n'
JOI = 'JOIN #brook_nise\r\n'
pi = 'PING'
po = 'PONG'
PING = pi.encode(encoding='UTF-8',errors='strict')
PONG = po.encode(encoding='UTF-8',errors='strict')
USER = USR.encode(encoding='UTF-8',errors='strict')
PASS = PAS.encode(encoding='UTF-8',errors='strict')
JOIN = JOI.encode(encoding='UTF-8',errors='strict')
irc.connect ( ( network, port ) )
print (irc.recv ( 4096 ))
irc.send (USER)
irc.send (PASS)
irc.send (JOIN)
while True:
data = irc.recv ( 4096 )
if data.find ( PING ) != -1:
irc.send ( PONG + b' ' + data.split() [ 1 ] + b'\r\n' )
print (data)
This happens because irc.rizon.net is a geobalanced DNS record. It checks where your bot is coming from and then automatically assigns it a server to connect to.
Basically there is no single server called 'irc.rizon.net'; if you always want the same one (you don't), then just specify one of the servers you have listed.
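You can see this for yourself by resolving the name a few times; a small sketch (the addresses returned will differ depending on where and when you run it):

import socket

# irc.rizon.net is a round-robin / geo-balanced record, so repeated
# lookups can hand out different members of the server pool
for _ in range( 3 ):
    addrs = sorted( { info[4][0] for info in
                      socket.getaddrinfo( 'irc.rizon.net', 6667, proto=socket.IPPROTO_TCP ) } )
    print( addrs )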
I am trying to get an acknowledgement from my subscriber back to my publisher using ZeroMQ in Python.
I tried a few code examples, using a zmq.PUSH and zmq.PULL code sequence, but to no avail.
My code as it stands:
PUB_SERVER.PY
import zmq
import random
import sys
import time
port = "5556"
if len( sys.argv ) > 1:
port = sys.argv[1]
int( port )
context = zmq.Context()
socket = context.socket( zmq.PUB )
socket.bind( "tcp://*:%s" % port )
while True:
# topic = random.randrange( 9999, 10005 )
topic = 10000
messagedata = random.randrange( 1, 215 ) - 80
print "%d %d" % ( topic, messagedata )
socket.send( "%d %d" % ( topic, messagedata ) )
time.sleep( 1 )
SUB_CLIENT.PY
import sys
import zmq
port = "5556"
if len( sys.argv ) > 1:
port = sys.argv[1]
int( port )
if len( sys.argv ) > 2:
port1 = sys.argv[2]
int( port1 )
# Socket to talk to server
context = zmq.Context()
socket = context.socket( zmq.SUB )
print "Collecting updates from weather server..."
socket.connect( "tcp://192.168.0.21:%s" % port )
if len( sys.argv ) > 2:
socket.connect( "tcp://192.168.0.21:%s" % port1 )
# Subscribe to zipcode, default is NYC, 10001
topicfilter = "10000"
socket.setsockopt( zmq.SUBSCRIBE, topicfilter )
# Process 5 updates
total_value = 0
for update_nbr in range( 5 ):
string = socket.recv()
topic, messagedata = string.split()
total_value += int( messagedata )
print topic, messagedata
print "Average messagedata value for topic '%s' was %dF" % ( topicfilter, total_value / update_nbr )
That code gives me the server's output in one SSH window (on a Parallella) and the client's received, filtered messages in another SSH window (on a Raspberry Pi), which is working great.
Where I am lost is, once the client has gotten a filtered message from the server, how would it acknowledge that filtered message being received, and then have the server log those acknowledged messages?
Eventually, I'd want to do some intelligent decision making of sending a file to the subscriber who acknowledges.
How to acknowledge?
One may create a parallel messaging structure for soft-signalling for that purpose.
Extend PUB_SERVER.PY with a .SUB Rx access point:
anAckRxSOCKET = context.socket( zmq.SUB ) # create SUB side
anAckRxSOCKET.bind( "tcp://*:%s" % aServerAckPORT )    # .bind()
anAckRxSOCKET.setsockopt( zmq.SUBSCRIBE, "" ) # SUB to *-anything
# ...
anAckRxSTRING = anAckRxSOCKET.recv() # .recv()
Extend SUB_CLIENT.PY with a .PUB Tx socket to the Server side access point:
anAckTxSOCKET = context.socket( zmq.PUB ) # create PUB side(s)
anAckTxSOCKET.connect( "tcp://192.168.0.21:%s" % aServerAckPORT )
and
send ACK(s) with "a-proxy-ID" for any server-side processing you may want or need
anAckTxSOCKET.send( topicfilter ) # ACK with an "identity"-proxy
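On the publisher side, the ACK receiver can then be polled non-blockingly inside the existing publishing loop, so publishing never stalls waiting for subscribers. A sketch, matching the Python 2 style of the code above (the aServerAckPORT value and the ackLog list are illustrative, not part of the original code):

# extends PUB_SERVER.PY above -- assumes its imports, context and publishing socket
aServerAckPORT = "5557"                                 # hypothetical port for the ACK channel

anAckRxSOCKET = context.socket( zmq.SUB )               # SUB-side Rx access point, as sketched above
anAckRxSOCKET.bind( "tcp://*:%s" % aServerAckPORT )
anAckRxSOCKET.setsockopt( zmq.SUBSCRIBE, "" )           # SUB to *-anything

ackLog = []                                             # server-side log of acknowledged messages
while True:
    # ... publish exactly as before:
    # socket.send( "%d %d" % ( topic, messagedata ) )
    while anAckRxSOCKET.poll( 0 ):                      # drain any queued ACKs, never block
        ackLog.append( anAckRxSOCKET.recv() )           # each ACK carries the subscriber's topicfilter
    time.sleep( 1 )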
I am trying to write a client/server application in Python, but I have run into a problem: on the client side I'm not getting all of the sent data. First I tried to send the numbers 1 to 10 and I received 1, 2, 5, 6, 10, so a lot of numbers are missing.
Server side:
def __init__( self ):
super( MCCommunication, self ).__init__()
HOST, PORT = socket.gethostbyname( socket.gethostname() ), 31000
self.server = SocketServer.ThreadingTCPServer( ( HOST, PORT ), MCRequestHandler )
ip, port = self.server.server_address
# Start a thread with the server
# Future task: Make the server a QT-Thread...
self.server_thread = threading.Thread( target = self.server.serve_forever )
# Exit the server thread when the main thread terminates
self.server_thread.setDaemon( True )
self.textUpdated.emit( 'Server Started!' )
print( 'Server Started!' )
self.server_thread.start()
def handle( self ):
#self.request.setblocking( 0 )
i = 10;
while True:
if( self.clientname == 'MasterClient' ):
try:
#ans = self.request.recv( 4096 )
#print( 'after recv' )
""" Sendign data, testing purpose """
while i:
mess = str( i );
postbox['MasterClient'].put( self.creatMessage( 0, 0 , mess ) )
i = i - 1
while( postbox['MasterClient'].empty() != True ):
sendData = postbox['MasterClient'].get_nowait()
a = self.request.send( sendData )
print( a );
#dic = self.getMessage( sendData )
#print 'Sent:%s\n' % str( dic )
except:
mess = str( sys.exc_info()[0] )
postbox['MasterClient'].put( self.creatMessage( 1, 0 , mess ) )
pass
def creatMessage( self, type1 = 0, type2 = 0, message = ' ', extra = 0 ):
return pickle.dumps( {"type1":type1, "type2":type2, "message":message, "extra":extra} );
Where the postbox['MasterClient'] is a Queue with the serialized message.
And this is the client:
def run( self ):
sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
addr = ( self.ip, self.port )
#print addr
sock.connect( addr )
#sock.setblocking( 0 )
while True:
try:
ans = sock.recv( 4096 )
dic = self.getMessage( ans )
self.recvMessageHandler( dic )
print 'Received:%s\n' % str( dic )
except:
pass
The server may have sent multiple messages by the time the client attempts to read them, and if they fit within the same 4k buffer, a single recv() call will return all of them at once.
You don't show the getMessage code, but I'd guess you're doing something like pickle.loads(msg). That will only give you the first message and discard the rest of the string, hence the dropped messages. You'll also hit another issue if more than 4096 bytes are buffered by the time you read: you could end up with a fragment of a message and thus an unpickling error.
You'll need to break up the data you get back into separate messages, or better, just treat the socket as a stream and let pickle.load pull a single message from it.
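A sketch of that stream-based approach on the client side (assuming the server keeps writing one pickled dict per message onto the connection, as creatMessage does above; ip and port stand for whatever address the original client connected to):

import pickle
import socket

sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
sock.connect( ( ip, port ) )            # same address the original client used
stream = sock.makefile( 'rb' )          # treat the connection as one continuous byte stream

while True:
    try:
        # pickle.load() reads exactly one pickled message from the stream,
        # no matter how the bytes were split or merged into TCP segments
        dic = pickle.load( stream )
    except EOFError:                    # server closed the connection
        break
    print( 'Received: %s' % dic )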