I'm trying to get an Arduino board to communicate with a BeagleBone (BB) White running Ubuntu over UART. I have read that the BB's UART driver is already interrupt driven.
I want to store all incoming data in a buffer which I can read when required, similar to the way it's done on microcontrollers. But I'm trying to avoid kernel programming, so I can't use the driver's data structures. I'm looking for a complete user-space solution.
I'm planning to use two Python processes: one to write all incoming data to a shared list, and the other to read it as required, so that the read is non-blocking.
I have two questions:
Is this the right approach? If yes, please suggest a simple interprocess communication method that will suffice.
What is the right way to implement this?
Note: I'm using the PyBBIO library, which reads and writes directly to the /dev/mem special file.
You might want to use pyserial, which uses the kernel interfaces (I don't know what PyBBIO does). It provides automatic input buffering, so you don't need an extra process. If you do want more processes, use multiprocessing. A simpler alternative is threading, which saves you the communication part. For multiprocessing with network support, use IPython's cluster facilities.
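For illustration, a minimal pyserial sketch; the device path /dev/ttyO1 and the baud rate are assumptions, not something from your setup:

import serial

# timeout=0 puts the port in non-blocking mode: read() returns whatever
# the kernel driver has buffered, or b'' if nothing has arrived yet.
ser = serial.Serial('/dev/ttyO1', baudrate=9600, timeout=0)

def poll_uart():
    data = ser.read(4096)  # up to 4096 buffered bytes, b'' if none
    if data:
        print(data)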
I have 3 Raspberry Pis, all on the same LAN, doing stuff that is monitored by Python, and I want them to talk to each other and to my PC. Sockets seem like the way to go, but the examples are so simplistic. Here's the issue I am stuck on: the listen and receive calls are all blocking, unless you set a timeout, in which case they still block, just for less time.
So, if I set up a round-robin, then each Pi will only be listened to (or received on) for 1/3 of the time, or less if there is stuff to transmit as well.
What I'd like to understand better is what happens to the data (or connection requests) when I am not listening/receiving: are these buffered by the OS, or lost? And what happens to the socket when there is no method called on it: is it happy to be ignored for a while, or will the socket itself be dumped by the OS?
I am starting to split these into separate processes now, which is getting messy and seems inefficient, but I can't think of another way except to run this as 3 (currently), maybe 6 (transmit/receive), or even 9 (listen/transmit/receive) separate processes?
Sorry I don't have a code example, but it is already way too big, and it doesn't work. Plus, a lot of the issue seems to me to be in the murky part of sockets, the part between the socket and the OS. I feel I need to understand this better to get to the right architecture for my bit of code before I really start debugging the various exceptions and communication failures.
You can handle multiple sockets in a single process using I/O multiplexing. This is usually done using calls such as epoll(), poll(), or select(). These calls monitor multiple sockets and return when one or more sockets have data available for reading, or are ready to accept data for writing. In many cases this is more convenient than using multiple processes and/or threads.
These are pretty low-level OS calls. Python exposes them through the select and selectors modules; the higher-level selectors interface may be easier to use, though I haven't tried it myself.
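As a minimal sketch of that approach, using Python's selectors module (the port number is an assumption):

import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    conn, addr = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn):
    data = conn.recv(4096)
    if data:
        print(conn.getpeername(), data)
    else:                      # empty read means the peer closed
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(('', 5000))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    # Blocks until at least one registered socket is ready, then
    # dispatches to the callback stored as the registration data.
    for key, _ in sel.select():
        key.data(key.fileobj)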
First question, so please be gentle.
I am using Python.
When creating a named pipe to a C++ Windows program with
PIPE = open(r'\\.\pipe\NamedPipe','rb+',0)
as a global, I can read from and write to the pipe.
def pipe_writer():
    PIPE.write(some_stuff)

def pipe_reader():
    data = struct.unpack("byte-type", PIPE.read(number_of_bytes))

pipe_writer()
pipe_reader()
This is fine to collect data from the pipe and process the complete data with several functions, one function after the other.
Unfortunately, I have to process the data bit by bit as I pull it from the pipe, with several functions in a serialized manner.
I thought that queueing the data would do the job, so I use the multiprocessing module.
When I try to multiprocess, I am able to create the pipe and send data once, when opening it after:
if __name__ == '__main__':
    PIPE = open(r'\\.\pipe\NamedPipe','rb+',0)
    PIPE.write(some_stuff)
When I then try to .start() the functions as processes and read from the pipe, I get an error that the pipe doesn't exist or is open in the wrong mode, which can't really be right, as it works just fine when reading/writing to it without using Process() on the functions, AND I can write to it... even if it's only once.
Any suggestions? Also, I think I kinda need to use multiprocessing, as threading probably doesn't work because of the GIL slowing stuff down.
If you're in control of the C++ source code too, you can save yourself a lot of code and hassle by moving on to using ZeroMQ or Nanomsg instead of the pipe, and Google Protocol Buffers instead of interpreting a byte stream yourself.
ZeroMQ and Nanomsg are like networks/pipes/IPC on steroids, and are much easier to use than raw pipes, sockets, etc. You have less source code and more functionality: win-win.
Google's Protocol Buffers allow you to define data structures (messages) in a language-neutral way, and then auto-generate source code in C++, Python, Java, or whatever. This source code defines structs, classes, etc. that represent the messages and also converts them to a standard binary format. That binary data is what you'll send via ZeroMQ. Again, less source code for you to write, more functionality.
This is ideal for getting C++ classes into Python and vice versa.
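For a feel of how little code ZeroMQ needs, here is a minimal pyzmq request/reply sketch; the address and payloads are placeholders, and the C++ side would use ZeroMQ's C++ bindings the same way:

import zmq

context = zmq.Context()

rep = context.socket(zmq.REP)          # server side (e.g. the C++ program)
rep.bind("tcp://127.0.0.1:5555")

req = context.socket(zmq.REQ)          # client side
req.connect("tcp://127.0.0.1:5555")

# ZeroMQ delivers whole messages, so there is no manual framing of the
# byte stream as there is with a raw pipe.
req.send(b"some_stuff")
print(rep.recv())
rep.send(b"ack")
print(req.recv())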
A nanomsg Python wrapper is also available on GitHub as Nanomsg Python.
You can see examples at Examples. I guess this wrapper will serve your purpose. It's always better to use this in place of raw pipes. It supports IPC (between processes) and TCP communication patterns.
Moreover, it is cross-platform and its core implementation is in C, so communication between Python and C processes should also be possible.
I need to read and plot data in real time from multiple Android phones simultaneously. I'm trying to build a server (in Python) that each phone can connect to simultaneously, which will receive the data streams from each phone and plot them in real time using matplotlib. I'm not very experienced in socket programming, although I know the basics (single-request servers and such). How should I go about doing this? I looked at asyncore, SocketServer, and other modules, but I'm not sure I grasp how to allow multiple long-standing connections.
I was thinking I should create a new thread for each phone (although I'm not sure if it's safe to pass a socket to a new thread), but I also want to be able to plot using subplots (e.g., 4 plots side by side), although this is not that important.
I just need a pointer in the right direction. Small code samples appreciated to illustrate the concept.
Due to Python's implementation of threading (the GIL), using threads might lead to degraded performance, depending on what your threads do.
I'd suggest using a framework for building an asynchronous server. One such framework is gevent. Using an asynchronous event loop you can do calculations while other "threads" (in the case of gevent, greenlets) are waiting for I/O, and thus get better performance. The model is also ideal for long-lasting idle connections.
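A minimal sketch with gevent's StreamServer, assuming each phone sends newline-terminated readings (the port and the line-based protocol are assumptions):

from gevent.server import StreamServer

def handle(sock, address):
    # Each phone's connection runs in its own greenlet; blocking reads
    # suspend only this greenlet, not the whole server.
    f = sock.makefile(mode='rb')
    for line in f:
        print(address, line.strip())   # hand off to plotting here

server = StreamServer(('0.0.0.0', 6000), handle)
server.serve_forever()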
I'm currently in the process of programming a server which can let clients interact with a piece of hardware. For the interested readers it's a device which monitors the wavelength of a set of lasers concurrently (and controls the lasers). The server should be able to broadcast the wavelengths (a list of floats) on a regular basis and let the clients change the settings of the device through dll calls.
My initial idea was to write a custom protocol to handle the communication, but after thinking about how to handle TCP fragmentation and data encoding I bumped into Twisted, and it looks like most of the work is already done if I use Perspective Broker to share the data and call server methods directly from the clients. This solution might be a bit overkill, but to me it appeared obvious. What do you think?
My main concern arose when I thought about the clients. Basically I need two types of clients: one which just displays the wavelengths (this should be straightforward) and a second which can change the device settings and get feedback when they're changed. My idea was to create a single client capable of both, but thinking about combining it with our previous system got me thinking... The second client should be controlled from an already rather complex Python framework which controls a lot of independent hardware with relatively strict timing requirements, and the settings of the wavelength meter should then be set from within this sequential code. Now the thing is, how do I mix this with the Twisted client? As I understand it, Twisted is not thread-safe, so I can't simply spawn a new thread running the reactor and then interact with it from my main thread, can I?
Any suggestions for writing this server/client framework through different means than Twisted are very welcome!
Thanks
You can start the reactor in a dedicated thread, and then issue calls to it with blockingCallFromThread from your existing "sequential" code.
Also, I'd recommend AMP for the protocol rather than PB, since AMP is more amenable to heterogeneous environments (see amp-protocol.net for independent protocol information), and it sounds like you have a substantial amount of other technology you might want to integrate with this system.
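A rough sketch of the reactor-in-a-thread pattern; get_wavelengths() here is a hypothetical stand-in for whatever AMP call you end up making:

import threading
from twisted.internet import reactor
from twisted.internet.threads import blockingCallFromThread

# Run the reactor in a dedicated thread; installSignalHandlers=False
# because it is not the main thread.
threading.Thread(
    target=reactor.run,
    kwargs={'installSignalHandlers': False},
    daemon=True,
).start()

def get_wavelengths():
    # Would return a Deferred from an AMP callRemote(...) here.
    ...

# From the sequential framework: blocks until the result is available.
result = blockingCallFromThread(reactor, get_wavelengths)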
Have you tried zeromq?
It's a library that simplifies working with sockets. It can operate over TCP and implements several topologies, such as publisher/subscriber (for broadcasting data, such as your laser readings) and request/response (which you can use for your control scheme).
There are bindings for several languages and the site is full of examples. Also, it's amazingly fast.
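For the broadcasting side, a minimal pyzmq publisher sketch; the port and read_wavelengths() are assumptions for illustration:

import time
import zmq

context = zmq.Context()
pub = context.socket(zmq.PUB)
pub.bind("tcp://*:5556")

def read_wavelengths():
    # Placeholder for the dll call returning a list of floats.
    return [780.24, 794.98]

while True:
    # Subscribers that subscribe to the b"wavelengths" topic receive this.
    pub.send_multipart([b"wavelengths", repr(read_wavelengths()).encode()])
    time.sleep(1.0)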
Good stuff.
Good afternoon,
I would like some suggestions about the best way to monitor events over the serial port.
I'm using PySerial to write commands over the serial port to some devices, and
I would like to receive feedback about the status of these devices.
Which is the best way: 1) fill a pipe and read from it, 2) a new thread delegated to reading only, or something else?
Can I also ask for a simple code to implement the solution?
For general tips on working with pyserial, look at the search S.Lott suggested in the comment.
Regarding the best strategy to implement your application - it all depends on how your protocols are defined. Do the devices immediately respond to queries? Or do they continually send data that must be monitored? This is important to define, as it certainly affects the way you'll want to handle the communication.
Generally, I've found it simple and stable to have a separate thread reading everything from the serial port and just pumping the data into a Queue. The main application logic then can query this queue whenever it needs to and read the data.
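That pattern looks roughly like this (the device path and baud rate are assumptions):

import queue
import threading
import serial

rx_queue = queue.Queue()

def reader():
    ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)
    while True:
        data = ser.read(ser.in_waiting or 1)  # wait for at least 1 byte
        if data:
            rx_queue.put(data)

threading.Thread(target=reader, daemon=True).start()

# The main application logic queries the queue whenever it needs to:
try:
    chunk = rx_queue.get_nowait()
except queue.Empty:
    chunk = None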
The strategy chosen is to use Python multiprocessing and a queue.
see:
http://www.ibm.com/developerworks/aix/library/au-threadingpython/index.html
and
http://www.ibm.com/developerworks/aix/library/au-multiprocessing/index.html?ca=dgr-lnxw9dPython-Multi&S_TACT=105AGX59&S_CMP=grsitelnxw9d
for reference
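The same reader-plus-queue pattern adapted to multiprocessing might look like this (serial parameters are again assumptions):

import multiprocessing as mp
from queue import Empty
import serial

def reader(q):
    ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)
    while True:
        data = ser.read(ser.in_waiting or 1)
        if data:
            q.put(data)

if __name__ == '__main__':
    q = mp.Queue()
    mp.Process(target=reader, args=(q,), daemon=True).start()
    while True:
        try:
            print(q.get(timeout=0.1))  # near-non-blocking read
        except Empty:
            pass                       # no data yet; do other work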