I want to make a simple connection between a Python program and a Ruby program using ZeroMQ. I am trying to use a PAIR connection, but I have not been able to get it working.
This is my code in Python (the server):
import zmq
import time

port = "5553"
context = zmq.Context()
socket = context.socket(zmq.PAIR)
socket.bind("tcp://*:%s" % port)

while True:
    socket.send(b"Server message to client3")
    print("Enviado mensaje")
    time.sleep(1)
It does not display anything until I connect a client.
This is the code in Ruby (the client):
require 'ffi-rzmq'

context = ZMQ::Context.new
subscriber = context.socket ZMQ::PAIR
subscriber.connect "tcp://localhost:5553"

loop do
  address = ''
  subscriber.recv_string address
  puts "[#{address}]"
end
The Ruby script just freezes and does not print anything, while the Python script starts printing Enviado mensaje.
BTW: I am using Python 3.6.9 and Ruby 2.6.5.
What is the correct way to connect a zmq PAIR between Ruby and Python?
Welcome to the Zen of Zero!
In case one has never worked with ZeroMQ, one may enjoy first looking at "ZeroMQ Principles in less than Five Seconds" before diving into further details.
Q : It does not display anything until I connect a client.
Sure, it does not. Your code imperatively asked to block until a PAIR/PAIR delivery channel happened to become able to deliver a message. As the v4.2+ API defines, the .send()-method will block for the whole duration of a "mute state":
When a ZMQ_PAIR socket enters the mute state due to having reached the high water mark for the connected peer, or if no peer is connected, then any zmq_send(3) operations on the socket shall block until the peer becomes available for sending; messages are not discarded.
You may try the non-blocking mode of sending (avoiding blocking is always a sign of good engineering practice, the more so in distributed-computing), and better also include <aSocket>.close() and <aContext>.term() as a rule of thumb (best with an explicit .setsockopt( zmq.LINGER, 0 )) for avoiding hang-ups, and as a good engineering practice to explicitly close resources and release them back to the system:
socket.send( "Server message #[_{0:_>10d}_] to client3".format( i ).encode(), zmq.NOBLOCK )
Last but not least :
Q : What is the correct way to connect a zmq PAIR between Ruby and Python?
as the API documentation explains:
ZMQ_PAIR sockets are designed for inter-thread communication across the zmq_inproc(7) transport and do not implement functionality such as auto-reconnection.
there is no best way to do this, as Python / Ruby are not a case of inter-thread communication. ZeroMQ has, since v2.1+, explicitly warned that the PAIR/PAIR archetype is experimental and ought to be used only bearing that in mind.
One may always substitute each such use-case with a tandem of PUSH/PULL-simplex channels, providing the same comfort with a pair of .send()-only + .recv()-only channels.
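A minimal sketch of such a tandem, shown here for the Python side only (ports 5553/5554 are illustrative; the Ruby peer would mirror this with a ZMQ::PULL connected to tcp://localhost:5553 and a ZMQ::PUSH connected to tcp://localhost:5554):

import zmq

context = zmq.Context()

tx = context.socket(zmq.PUSH)           # outbound simplex channel: .send()-only
tx.bind("tcp://*:5553")

rx = context.socket(zmq.PULL)           # inbound simplex channel: .recv()-only
rx.bind("tcp://*:5554")

tx.send(b"Server message to client3")   # same comfort as PAIR's .send()
reply = rx.recv()                       # same comfort as PAIR's .recv()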
Related
I'm building photovoltaic motorized solar trackers. They're controlled by Raspberry Pis running a Python script. The RPis are connected to my public OpenVPN server for remote control and continuous software development. That's working fine. Recently a passionate customer asked me for some sort of telemetry data for his tracker: let's say its current orientation, measured wind speed, etc. Being new to Python, I'm really struggling with this part.
I've decided to use the socket approach from guides like this. The Python script listens on a socket, and my OpenVPN server, which is also a web server, connects to it using PHP's fsockopen. Python sends telemetry data, and PHP makes it user-friendly and displays it on the web. Everything so far works; however, I don't know how to design my Python script around it.
The problem is that my script has to run continuously, and socket.accept() halts its execution, waiting for a connection. I didn't find any obvious solution on the web. Would multi-threading work for this? It sounds a bit like overkill.
Is there a way to run socket listening asynchronously? Like, for example, pigpio callback's which I'm using abundantly?
Or alternatively, is there a better way to accomplish my goal?
I tried remote-accessing a status file that my script maintains, but that proved to be extremely involved to set up and prone to errors while the file was being written.
I also tried running a second script. The problem is, then I have no access to the relevant data, or I need to read the aforementioned status file, which leads to the same problems as above.
Relevant bit of code is literally only this:
# Main loop
try:
    while True:
        # Telemetry
        conn, addr = S.accept()
        conn.send(data.encode())
        conn.close()
Best regards.
For a simple case like this I would probably just wrap the socket code into a separate thread.
With multithreading in Python, the Global Interpreter Lock (GIL) means that only one thread executes at a time, so you don't really need to add any further locks to the data if you're just reading the values and don't care if it's also being updated at the same time.
Your code would essentially read something like:
from threading import Thread

def handle_telemetry_requests():
    # Main loop
    try:
        while True:
            # Telemetry
            conn, addr = S.accept()
            conn.send(data.encode())
            conn.close()
    except:
        # Error handling here (this will cause the thread to exit if any error occurs)
        pass

socket_thread = Thread(target=handle_telemetry_requests)
socket_thread.daemon = True
socket_thread.start()
Setting the daemon flag means that when the main application ends, the thread will also be terminated.
Python does provide the asyncio module, which may provide the callbacks you're looking for (though I don't have any experience with this).
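For completeness, a minimal sketch of the asyncio route (the port number is illustrative, and data is assumed to be the telemetry variable from your question):

import asyncio

async def handle_telemetry(reader, writer):
    # mirrors the accept / send / close loop from the threaded version
    writer.write(data.encode())
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle_telemetry, host='0.0.0.0', port=5000)
    async with server:
        await server.serve_forever()

asyncio.run(main())

Note that your main control loop would then also have to become a coroutine (or run in an executor), so this is a bigger restructuring than the thread wrapper above.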
Other options are to run a Flask server in the Python apps, which will handle the sockets for you, so you can just code the endpoints that serve the data. Or think about using an MQTT broker: the current data can be written to that, and other apps can subscribe to updates.
I am currently trying to implement simple communication between an I/O module as a CANopen slave and my computer (a Python script). The I/O module is connected to my computer with a PEAK USB-CAN adapter.
My goal is to read or write the inputs/outputs. Is this even possible with this hardware, since I don't have a real "master" from that point of view?
Unfortunately, I don't know what else I have to do to be able to communicate correctly with my I/O module.
import canopen
import time

network = canopen.Network()
network.connect(bustype='pcan', channel='PCAN_USBBUS1', bitrate=500000)

# add node with its DCF file
IO_module = network.add_node(1, 'path to my DIO.DCF')
network.add_node(IO_module)  # (redundant: node 1 was already added above)

IO_module.nmt.state = 'RESET COMMUNICATION'  # 000h 82 01
print(IO_module.nmt.state)
time.sleep(5)
IO_module.nmt.state = 'OPERATIONAL'
print(IO_module.nmt.state)

for node_id in network:
    print(network[node_id])

IO_module.load_configuration()
I see some kind of communication in my console, with timeout errors:
INITIALISING
OPERATIONAL
<canopen.node.remote.RemoteNode object at 0x000002A023493A30>
Transfer aborted by client with code 0x05040000
No SDO response received
Transfer aborted by client with code 0x05040000
No SDO response received
Any advice? I can't get any further with the documentation alone:
https://canopen.readthedocs.io/en/latest/
Thank you.
The good news is, you probably have all the required hardware. You are doing the "master" part from Python, that's fine. (The CAN bus isn't really master/slave, just broadcast. CANopen can be master/slave sometimes, but it's still all broadcast messages among equals on the same bus.)
You haven't provided information about your device, but I would start checking at a lower level.
Do you even have the CAN-Bus wired up correctly, and if so, what did you do to verify it? (Most common mistake: CAN-Bus not terminated with two 120ohm resistors. Though you usually can get away with just one instead of two.) And have you verified that you are using the correct baud rate?
The library docs' example suggests waiting for the heartbeat with node.nmt.wait_for_heartbeat(). Why are you using a sleep instead? If there is no heartbeat, you don't need to continue. (Unless the device docs say that it doesn't implement the NMT heartbeat, which would be unusual.)
I certainly wouldn't try to go ahead with SDOs if you cannot confirm an NMT heartbeat. Also, some devices don't implement SDOs but only PDOs.
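As a minimal sketch of that check, using the library's documented NMT API instead of the blind sleep (the 5-second timeout is an arbitrary choice):

# blocks until a heartbeat proves the node is alive;
# raises an error if nothing arrives within the timeout
IO_module.nmt.wait_for_heartbeat(timeout=5)
print(IO_module.nmt.state)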
Try sniffing the CAN bus at a lower level (i.e. not PDOs/SDOs, but just print the raw messages received, either from Python or with a separate application, e.g. candump on Linux). Try getting statistics of the CAN "network" interface (on Linux, e.g. ifconfig). If everything is okay, the adapter should be in state "ERROR-ACTIVE", and you should see the frame counter increase for frames you've sent via Python.
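A minimal sketch of such low-level sniffing from Python, using the python-can library that the canopen package builds on (same adapter settings as in your script):

import can

# bypass CANopen entirely and dump raw frames, to verify wiring / bitrate first
bus = can.interface.Bus(bustype='pcan', channel='PCAN_USBBUS1', bitrate=500000)
msg = bus.recv(timeout=5.0)   # returns None if nothing arrives within 5 s
print(msg if msg is not None else "No traffic seen - check wiring, termination, bitrate")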
Pseudo-code to better explain question:
#!/usr/bin/env python2.7
import pycurl, threading

def threaded_work():
    conn = pycurl.Curl()
    conn.setopt(pycurl.TIMEOUT, 10)
    # Make a request to host #1 just to open the connection to it.
    conn.setopt(pycurl.URL, 'https://host1.example.com/')
    conn.perform_rs()
    while not condition_that_may_take_very_long:
        conn.setopt(pycurl.URL, 'https://host2.example.com/')
        print 'Response from host #2: ' + conn.perform_rs()
    # Now, after what may be a very long time, we must request host #1 again
    # with a (hopefully) already established connection.
    conn.setopt(pycurl.URL, 'https://host1.example.com/')
    print 'Response from host #1, hopefully with an already established connection from above: ' + conn.perform_rs()
    conn.close()

for _ in xrange(30):
    # Multiple threads must work with host #1 and host #2 individually.
    threading.Thread(target = threaded_work).start()
I am omitting extra, unnecessary details for brevity so that the main problem has focus.
As you can see, I have multiple threads that must work with two different hosts, host #1 and host #2. Mostly, the threads will be working with host #2 until a certain condition is met. That condition may take hours or even longer to be met, and will be met at different times in different threads. Once the condition (condition_that_may_take_very_long in the example) is met, I would like host #1 to be requested as fast as possible with the connection that I have already established at the start of the threaded_work method. Is there an efficient way to accomplish this (I am open to the suggestion of using two PycURL handles, too)?
Pycurl uses libcurl. libcurl keeps connections alive by default after use, so as long as you keep the handle alive and use that for the subsequent transfer, it will keep the connection alive and ready for reuse.
However, due to modern networks and network equipment (NATs, firewalls, web servers), connections without traffic are often killed off relatively soon, so having an idle connection and expecting it to actually work after "hours" is a very slim chance and a rare occurrence. Typically, libcurl will then discover that the connection has been killed in the meantime and create a new one to use at the next request.
Additionally, and in line with what I've described above, since libcurl 7.65.0 it defaults to no longer reusing connections that are older than 118 seconds. This is changeable with the CURLOPT_MAXAGE_CONN option. The reason is that such connections barely ever work, so avoiding having to keep them around, detect them to be dead, and reissue the request is an optimization.
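If you do want to allow reuse of older connections, a guarded sketch (the MAXAGE_CONN constant is only present in pycurl builds against libcurl >= 7.65.0, hence the hasattr check):

import pycurl

conn = pycurl.Curl()
# raise the reuse-age cap from the 118 s default to one day
if hasattr(pycurl, 'MAXAGE_CONN'):
    conn.setopt(pycurl.MAXAGE_CONN, 24 * 3600)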
I have a simple PUSH/PULL ZeroMQ code in Python. It looks like below.
from multiprocessing import Process
import zmq

def zmqtest(self):
    print('zmq')
    Process(target=start_consumer, args=('1', 9999)).start()
    Process(target=start_consumer, args=('2', 9999)).start()
    ctx = zmq.Context()
    socket = ctx.socket(zmq.PUSH)
    socket.bind('tcp://127.0.0.1:9999')
    # sleep(.5) # I have to wait here...
    for i in range(5):
        socket.send_unicode('{}'.format(i))
The problem is I have to wait more than .5 second before sending messages; otherwise only one consumer process receives a message. If I wait more than .5 second, everything looks fine.
I guess it takes a while for the socket binding to settle down, and that it happens asynchronously.
I wonder if there's a more reliable way to know when the socket is ready.
Sure it takes a while. Sure it is done async.
Let's first clarify the terminology a bit.
ZeroMQ is a great framework. Each distributed-system's client willing to use it (except when using just the inproc:// transport class) first instantiates an async data-pumping engine: the Context() instance(s), as needed.
Each Scalable Formal Communication Pattern { PUSH | PULL | ... | XSUB | SUB | PAIR } does not create a socket, but rather instantiates an access-point that may later .connect() or .bind() to some counterparty (another access-point, of a suitable type, in some Context() instance, be it local or not (again, the local-inproc://-only infrastructures being the known exception to this rule)).
In this sense, an answer to a question "When the socket is ready?" requires an end-to-end investigation "across" the distributed-system, handling all the elements, that participate on the socket-alike behaviour's implementation.
Testing a "local"-end access-point RTO-state:
For this, your agent may self-connect a receiving access-point (working as a PULL archetype), so as to "sniff" when the local-end Context() instance has reached an RTO-state and the .bind()-created O/S L3+ interface starts distributing the intended agent's PUSH-ed messages.
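A minimal sketch of that self-connect "sniff" (port matches the question; note this is only conclusive before any remote PULL-er connects, since PUSH round-robins among all connected peers):

import zmq

ctx = zmq.Context()
push = ctx.socket(zmq.PUSH)
push.bind('tcp://127.0.0.1:9999')

probe = ctx.socket(zmq.PULL)            # local, self-connected "sniffer"
probe.connect('tcp://127.0.0.1:9999')
push.send(b'probe')
probe.recv()                            # returns once the local access-point is RTO
probe.disconnect('tcp://127.0.0.1:9999')
probe.close()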
Testing a "remote"-agent's RTO-state:
This part can have an indirect or an explicit testing. An indirect way may use a message-embedded index. That can contain a rising number (an ordinal), which bears weak information about ordering. Given the PUSH-side message-routing strategy is Round-robin, the local agent can be sure that, until its local PULL-access-point has received all messages indicating a contiguous sequence of ordinals, there is no other "remote"-PULL-ing agent in an RTO-state. Once the "local" PULL-access-point receives a "gap" in the stream of ordinals, that means (sure, only in the case all the PUSH's .setsockopt()-s were set up properly) there is another, non-local, PULL-ing agent in an RTO-state; see the sketch below.
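Continuing the sketch above, the ordinal test could look like this (message bodies and the gap test are illustrative only):

# PUSH-side: embed a rising ordinal in every message
for i in range(1000):
    push.send_string(str(i))

# local PULL-side: a contiguous sequence means no other PULL-er is RTO yet;
# the first "gap" betrays a remote agent consuming its Round-robin share
expected = 0
while True:
    n = int(probe.recv_string())
    if n != expected:
        print('gap at {0}: a remote PULL-agent is in an RTO-state'.format(n))
        break
    expected = n + 1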
Is this useful?
Maybe yes, maybe not. The point was to better understand the new challenges that any distributed-system has to somehow cope with.
The nature of multi-stage message queuing, multi-layered implementation ( local-PUSH-agent's-code, local Context()-thread(s), local-O/S, local-kernel, LAN/WAN, remote-kernel, remote-O/S, remote Context()-thread(s), remote-PULL-agent's-code to name just a few ) and multi-agent behaviour simply introduce many places, where an operation may gain latency / block / deadlock / fail in some other manner.
Yes, a walk on a wild-side.
Nevertheless, one may opt to use a much richer, explicit signalling (besides the initially thought-of raw-data transport) and help solve the context-specific, signalling-RTO-aware behaviour inside the multi-agent worlds, which may better reflect the actual situations and survive also the other issues that start to appear in the non-monolithic worlds of distributed-systems.
Explicit signalling is one way to cope with.
Fine-tune the ZeroMQ infrastructure. Forget using defaults. Always!
Recent API versions started to add more options to fine-tune the ZeroMQ behaviour for particular use-cases. Be sure to carefully read all details available to set up the Context()-instance and to tweak the socket-instance access-point behaviour, so that it best matches your distributed-system's signalling + transport needs:
.setsockopt( ZMQ_LINGER, 0 ) # always, indeed ALWAYS
.setsockopt( ZMQ_SNDBUF, .. ) # always, additional O/S + kernel rules apply ( read more about proper sizing )
.setsockopt( ZMQ_SNDHWM, .. ) # always, problem-specific data-engineered sizing
.setsockopt( ZMQ_TOS, .. ) # always, indeed ALWAYS for critical systems
.setsockopt( ZMQ_IMMEDIATE, .. ) # prevents "losing" messages pumped into incomplete connections
and many more. Without these, design would remain nailed into a coffin in the real-world transaction's jungle.
I have a small piece of software with a separate thread that waits for ZeroMQ messages. I am using the PUB/SUB communication pattern of ZeroMQ.
Currently I abort that thread by setting the variable cont_loop to False.
But I discovered that, when no messages arrive at the ZeroMQ subscriber, I cannot exit the thread (without taking down the whole program).
def __init__(self):
    Thread.__init__(self)
    self.cont_loop = True

def abort(self):
    self.cont_loop = False

def run(self):
    zmq_context = zmq.Context()
    zmq_socket = zmq_context.socket(zmq.SUB)
    zmq_socket.bind("tcp://*:%s" % 5556)
    zmq_socket.setsockopt(zmq.SUBSCRIBE, "")
    while self.cont_loop:
        data = zmq_socket.recv()
        print "Message: " + data
    zmq_socket.close()
    zmq_context.term()
    print "exit"
I tried moving socket.close() and context.term() to the abort method, so that it shuts down the subscriber, but this killed the whole program.
What is the correct way to shut down the above program?
Q: What is the correct way to ... ?
A: There are many ways to achieve the set goal. Let me pick just one, as a mock-up example of how to handle distributed process-to-process messaging.
First, assume there are several priorities in a typical software design task. Some are higher, some lower, some even so low that one can defer the execution of those low-priority sub-tasks, so that more time remains in the scheduler to execute the sub-tasks that cannot handle waiting.
This said, let's view your code. The SUB-side instruction .recv(), as it was being used, causes two things. One is visible: it performs a RECEIVE operation on a ZeroMQ socket with SUB-behaviour. The second, less visible one is that it remains hanging until it gets something "compatible" with the current state of the SUB-behaviour (more on setting this later).
This means it also BLOCKS all the time from such a .recv() method call UNTIL some unknown, locally uncontrollable coincidence of states/events makes it deliver a ZeroMQ message, with its content being "compatible" with the locally pre-set state of this (still blocking) SUB-behaviour instance.
That may take ages.
This is exactly why .recv() is rather used inside a control-loop, where external handling gets both the chance and the responsibility to do what you want (including abort-related operations and a fair / graceful termination with proper resources' release(s)).
The receive process becomes .recv( flags = zmq.NOBLOCK ) inside a try: except: episode. That way your local process does not lose control over the stream-of-events (incl. the NOP being one such).
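Applied to the code above, the run() method might become something like this sketch (Python 3 syntax; the 10 ms sleep is an arbitrary idle back-off):

# assumes: import time, zmq at module level
def run(self):
    zmq_context = zmq.Context()
    zmq_socket = zmq_context.socket(zmq.SUB)
    zmq_socket.setsockopt(zmq.LINGER, 0)       # part of a graceful termination
    zmq_socket.setsockopt(zmq.SUBSCRIBE, b"")
    zmq_socket.bind("tcp://*:5556")
    while self.cont_loop:
        try:
            data = zmq_socket.recv(flags=zmq.NOBLOCK)
            print("Message: " + data.decode())
        except zmq.Again:
            time.sleep(0.01)                   # nothing yet; loop re-checks cont_loop
    zmq_socket.close()
    zmq_context.term()
    print("exit")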
The best next step?
Take your time and get through the great book of gems "Code Connected, Volume 1", which Pieter HINTJENS, co-father of ZeroMQ, has published (also as a PDF).
Many of the thoughts and errors-to-avoid that he has shared with us are indeed worth your time.
Enjoy the powers of ZeroMQ. It's very powerful & worth getting mastered top-down.