GNU Radio: raw data from uhd_fft.py (Python)

I would like to do spectrum sensing with GNU Radio. Is there a good way to get the raw output from uhd_fft.py (the value for each frequency)? I would like to do this programmatically (with code), rather than through a GUI.
I have tried doing spectrum sensing with usrp_spectrum_sense.py, and this script has questionable accuracy and seems to be much slower than uhd_fft.py.
Thanks!

You should really direct your question to the GNURadio mailing list. This is a very application-specific question, which isn't necessarily appropriate for SO.
https://lists.gnu.org/mailman/listinfo/discuss-gnuradio
To answer your question a bit, uhd_fft.py is just a Python program that is doing a transform on your data. You can do the same thing in C++ with GNURadio. Just edit the Python code to dump the bin data instead of plotting it and you should get what you want.
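As a minimal sketch of what "dumping the bin data" amounts to, assuming you have captured complex samples somehow (e.g. via a GNU Radio file_sink; sample_rate, center_freq, and the file name are placeholders):

import numpy as np

# Placeholders -- substitute your own capture parameters
sample_rate = 1e6                                          # Hz
center_freq = 900e6                                        # Hz
samples = np.fromfile("capture.dat", dtype=np.complex64)   # e.g. a file_sink dump

fft_size = 1024
windowed = samples[:fft_size] * np.hanning(fft_size)       # window to reduce spectral leakage
spectrum = np.fft.fftshift(np.fft.fft(windowed))
power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)         # power per frequency bin, in dB
freqs = center_freq + np.fft.fftshift(np.fft.fftfreq(fft_size, d=1.0 / sample_rate))

for f, p in zip(freqs, power_db):
    print(f, p)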

Related

Simple Network Graph Plotting in Python?

I am working on some algorithms that create network graphs, and I am finding it really hard to debug the output. The code is written in Python, and I am looking for the simplest way to view the resulting network.
Every node has a reference to its parent elements, but a helper function could be written to format the network in any other way.
What is the simplest way to display a network graph from Python? Even if it's not fully written in Python, i.e. it uses some other programs available on Linux, that would be fine.
It sounds like you want something to help debugging the network you are constructing. For this you might want to consider implementing a function that converts your network to DOT, a graph description language, which can then be rendered to a graph visualization using a number of tools, such as GraphViz. You can then log the output from this function to help debug.
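As a sketch, assuming each node object exposes a name and a list of parent references (hypothetical attributes; adapt to however your nodes actually store their parents):

def network_to_dot(nodes, name="debug_graph"):
    # Emit one DOT edge per parent -> child relationship
    lines = ["digraph %s {" % name]
    for node in nodes:
        for parent in node.parents:
            lines.append('  "%s" -> "%s";' % (parent.name, node.name))
    lines.append("}")
    return "\n".join(lines)

Log the returned string to a .dot file, then render it with GraphViz, e.g. dot -Tpng debug.dot -o debug.png.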
Have you tried Netwulf? It takes a networkx.Graph object as input and launches an interactive d3-powered visualization in a separate browser window. The resulting image (and data) can then be posted back to Python for further processing.
Disclaimer: I'm a co-author of Netwulf.
Think about using existing graph libraries for your problem domain, e.g. NetworkX. Drawing can be done from there with matplotlib or pygraphviz.
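For example, a minimal sketch with a toy graph:

import networkx as nx
import matplotlib.pyplot as plt

G = nx.DiGraph()
G.add_edges_from([("parent", "child1"), ("parent", "child2"), ("child1", "grandchild")])
nx.draw(G, with_labels=True, node_color="lightblue")   # quick matplotlib rendering
plt.show()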
For bigger projects, you might also want to check out a graph database like Neo4j, which has its own query language (Cypher) and a toolkit for working with Python.
GraphML is also a good interchange format; it can be useful with drawing tools like yEd in case you have small graphs and need some manual finishing.

Program a Butterworth filter using numpy (not scipy!) on a BeagleBone Black

I am a new user of Python and an amateur programmer in general - I am hoping to be able to filter a signal using just the numpy library. It will be programmed onto a BeagleBone Black running Angstrom Linux, so the latest numpy it will update to is 1.4, and scipy will not work on the board, either due to rumored data limitations (I am not actually sure how to check) or just because that version of numpy is too early.
So the first solution is to get a new operating system but I would not know where to start; I am more comfortable in the realm of putting equations into a program.
I was hoping to use the filtfilt function but maybe it would be best to start with lfilter. This site seemed helpful for implementing it but it is a bit beyond me:
http://docs.scipy.org/doc/scipy-0.13.0/reference/generated/scipy.signal.lfilter.html
I am capable of getting the filter coefficients in MATLAB and then transferring them to the BeagleBone. The x is just the array containing my signal, which I can upload.
The second question is a bit of a jump - is there a way to perform a z-transform in just numpy, not scipy? Also, given all of the secrecy around the filter algorithm in MATLAB, I do not have faith in working that out myself, but is there some sort of mathematical description of the algorithm, or better yet code, showing how I might accomplish this?
Thanks for your patience in reading through this and the response. Please do not use complicated language in the response!
-Rob
For the filter design functions, you can copy the code from scipy.signal.filter_design.py; it is almost pure Python code.
But to run lfilter for an IIR filter, you need a for loop over every sample in the data array. Since for loops in Python are slow, I think you need to implement it in C and call it through ctypes. Do you have a C compiler on the target machine?
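For reference, that per-sample loop looks roughly like this in pure Python/numpy (a direct form II transposed sketch, the same structure scipy.signal.lfilter documents; it depends only on numpy but will be slow for long signals, which is why C is suggested above):

import numpy as np

def lfilter_numpy(b, a, x):
    # Normalize so a[0] == 1, then pad b and a to a common length
    b = np.asarray(b, dtype=float) / a[0]
    a = np.asarray(a, dtype=float) / a[0]
    n = max(len(a), len(b))
    b = np.concatenate([b, np.zeros(n - len(b))])
    a = np.concatenate([a, np.zeros(n - len(a))])
    z = np.zeros(n - 1)               # filter state (assumes order >= 1)
    y = np.zeros(len(x))
    for i, xi in enumerate(x):        # the unavoidable per-sample loop
        y[i] = b[0] * xi + z[0]
        for k in range(n - 2):
            z[k] = b[k + 1] * xi + z[k + 1] - a[k + 1] * y[i]
        z[n - 2] = b[n - 1] * xi - a[n - 1] * y[i]
    return y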
If you can design your filter as a FIR filter, then you can use numpy.convolve(b, x).
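For instance, with FIR taps b designed in MATLAB (e.g. with fir1) and your signal x (file names here are hypothetical):

import numpy as np

b = np.loadtxt("fir_coeffs.txt")   # MATLAB-designed taps, transferred to the board
x = np.loadtxt("signal.txt")       # the signal array mentioned above
y = np.convolve(x, b)[:len(x)]     # same output as lfilter(b, [1.0], x)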

Linked brushing possible in R

I saw this just yesterday but it's for matplotlib which, as far as I know, is Python only. This is functionality that would be stupendously useful for my work.
Is anything similar available for R? I've looked around and the closest I've seen mentioned is iPlots/Acinonyx, but the websites for those are a few years out of date. Do those packages work reasonably well? I've not seen any examples of their use.
Alternatively, does mpld3/matplotlib/Python play well with R? By that I mean, could I load my dataframes in R, use mpld3/matplotlib/Python to explore my data, then make final/pretty plots in R?
Full disclosure: I'm a newbie (R is the first programming language that I've really tried to learn since QBASIC as a child...).
While R doesn't seem to have anything quite like this yet, I want to note that mpld3 now has a well-defined JSON layout for figure representations, in some ways similar to Vega (but at a much lower level). I'm not an R/ggplot user, but it seems like the ggvis ggplot-to-vega approach could be rather easily adapted to convert from ggplot to mpld3.
I've forgotten how to do linked plots with brushing in R, but I know the capability is there. I use GGobi for that, however - http://ggobi.org/. It's designed for exploratory data analysis using visualizations, and there are R packages to communicate with it and script it.
There's a pretty good book on GGobi - Interactive and Dynamic Graphics for Data Analysis: With R and GGobi.
The R package ggvis will have similar functionality. It is still in relatively early development, as version 0.1 was just tagged a few days ago. (Although that's also true of mpld3).
To answer your second question, yes they work reasonably well together. The easiest way to do what you suggested would use the R magic function in the IPython notebook.
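A sketch of that workflow, assuming the rpy2 package is installed to provide the R magic (older IPython versions shipped this as rmagic):

# in an IPython notebook cell
%load_ext rpy2.ipython

import pandas as pd
df = pd.DataFrame({"x": [1, 2, 3], "y": [4, 5, 6]})   # toy data built in Python

%R -i df plot(df$x, df$y)   # push df into R and work on it with R functions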
The package JGR provides a Java interface for R. From there, you can call the library iplots. In your R terminal, type
install.packages("JGR");
library(JGR);
JGR()
This will open a new window that you can use just like the standard R terminal.
You should now be able to brush using iplots:
X = matrix(rnorm(900), ncol = 3);
iplot(X[,1], X[,2]);
iplot(X[,1], X[,3]);
ihist(X[,1])
Also take a look at http://cranvas.org/ - it might be somewhat hard to install (especially for a newbie) but it's well worth the effort.

Octave/MATLAB vs. Python On An Embedded Computer

I want to perform image processing on a low-end (Atom processor) embedded computer or microcontroller that is running Linux.
I'm trying to decide whether I should write my image processing code in Octave or Python. I feel comfortable in both languages, but is there any reason why I should use one over the other? Are there huge performance differences? I feel as though Octave's syntax may more closely match the domain of image processing than Python's.
Thanks for your input.
Edit: The motivation for this question comes from the fact that I design in Octave and get a working algorithm and then port the algorithm to C++. I am trying to avoid this double work and go from design to deployment easily.
I am a bit surprised that you don't stick to C/C++ - many convenient image processing libraries exist. Even though I have about 20 years of experience with C, 8 years of experience with MATLAB, and only 1 year of experience with Python, I would choose Python together with OpenCV, an extremely optimized computer vision library that supports the Intel Performance Primitives. Once you have a working Python solution, it is easy to translate it to C or C++ to get additional performance or reduce power consumption. I would start with Python and numpy, using matplotlib for display/prototyping, optimize using OpenCV from within Python, and finally move to C++ and test it against the Python reference implementation.
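A sketch of that prototyping flow, assuming OpenCV's Python bindings are installed (the file names are placeholders):

import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder input image
edges = cv2.Canny(img, 100, 200)                      # runs in optimized C++ under the hood
cv2.imwrite("edges.png", edges)

Once the algorithm works here, the OpenCV calls map almost one-to-one onto the C++ API.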
MATLAB has a code generation feature which could potentially help with your workflow. Have a look at this example. My understanding is that the Atom is x86 architecture, so the generated code ought to work on it too. You could consider getting a Trial version and giving the above example a spin on your specific target to evaluate performance and inspect the generated C code.

DFT in Python taking significantly longer than C

I'm currently working on translating some C code to Python. This code is used to help identify errors arising from the CLEAN algorithm used in radio astronomy. For this analysis, the values of the Fourier transforms of the intensity map, Q Stokes map, and U Stokes map must be found at specific pixel values (given by ANT_pix). These maps are just 257*257 arrays.
The code below takes a few seconds to run in C but takes hours to run in Python. I'm pretty sure it is terribly unoptimized, as my knowledge of Python is quite poor.
Thanks for any help you can give.
Update: My question is whether there is a better way to implement the loops in Python that will speed things up. I've read quite a few answers here to other Python questions recommending that nested for loops be avoided where possible, and I'm just wondering if anyone knows a good way of implementing something like the Python code below without the loops, or with better-optimised loops. I realise this may be a tall order though!
I've been using the FFT up till now, but my supervisor wants to see what sort of difference the DFT will make. This is because the antenna positions will not, in general, fall on exact pixel values, and using the FFT requires rounding to the closest pixel value.
I'm using Python because CASA, the program used to reduce radio astronomy datasets, is written in Python, and implementing Python scripts in it is far, far easier than C.
Original Code
import math
import numpy

def DFT_Vis(ANT_Pix, IMap, QMap, UMap, NMap, Nvis):
    UV = numpy.zeros([Nvis, 6])
    Offset = (NMap + 1) / 2
    ANT = ANT_Pix + Offset
    RL = QMap + 1j * UMap
    LR = QMap - 1j * UMap
    # Precompute the twiddle factors e^(-2*pi*i*z/NMap)
    Factor = [math.e ** (-2j * math.pi * z / NMap) for z in range(NMap)]
    for i in range(Nvis):
        X = ANT[i, 0]
        Y = ANT[i, 1]
        SumI = 0
        SumRL = 0
        SumLR = 0
        for l in range(NMap):
            for k in range(NMap):
                Temp = Factor[int((X * l) % NMap)] * Factor[int((Y * k) % NMap)]
                SumI += IMap[l, k] * Temp
                SumRL += RL[l, k] * Temp
                SumLR += LR[l, k] * Temp  # was IMap[l, k]; LR was otherwise unused
        UV[i, 0] = SumI.real
        UV[i, 1] = SumI.imag
        UV[i, 2] = SumRL.real
        UV[i, 3] = SumRL.imag
        UV[i, 4] = SumLR.real
        UV[i, 5] = SumLR.imag
    return UV
You should probably use numpy's fourier transform code, rather than writing your own: http://docs.scipy.org/doc/numpy/reference/routines.fft.html
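For example, the whole 2-D transform of one of the 257*257 maps is a single vectorized call (a sketch; IMap stands in for any of the maps):

import numpy as np

IMap = np.random.rand(257, 257)   # stand-in for one of the 257x257 maps
F = np.fft.fft2(IMap)             # full 2-D DFT, computed with the FFT
value = F[10, 20]                 # transform value at integer pixel (10, 20)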
If you are interested in boosting the performance of your script, Cython could be an option.
I am not an expert on the FFT, but my understanding is that the FFT is simply a fast way to compute the DFT. So to me your question sounds like you are trying to write a bubble sort algorithm to see if it gives a better answer than quicksort. They are both sorting algorithms that would give the same result!
So I am questioning your basic premise. I am wondering if you can just change your rounding on your data and get the same result from the SciPy FFT code.
Also, according to my DSP textbook, the FFT can produce a more accurate answer than computing the DFT the long way, simply because floating point operations are inexact, and the FFT invokes fewer floating point operations along the way to finding the correct answer.
If you have some working C code that does the calculation you want, you could always wrap the C code to let you call it from Python. Discussion here: Wrapping a C library in Python: C, Cython or ctypes?
To answer your actual question: as @ZoZo123 noted, it would be a big win to change from range() to xrange(). With range(), Python has to build a list of numbers, and then destroy the list when done; with xrange() Python just makes an iterator that yields up the numbers one at a time. (But note that in Python 3.x, range() makes an iterator and there is no xrange().)
Also, if this code does not have to integrate with the rest of your code, you might try running this code under PyPy. This is exactly the sort of code that PyPy can best optimize. The problem with PyPy is that currently your project must be "pure" Python, and it looks like you are using NumPy. (There are projects to get NumPy and PyPy to work together, but that's not done yet.) http://pypy.org/
If this code does need to integrate with the rest of your code, then I think you need to look at Cython (as noted by @Krzysztof Rosiński).
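If the DFT at non-integer positions really is required, the two inner loops can also be collapsed into NumPy matrix products (a rough sketch; ANT, IMap, RL, LR, NMap, Nvis are as in the code above):

import numpy as np

def dft_vis_vectorized(ANT, IMap, RL, LR, NMap, Nvis):
    # Sum_{l,k} Map[l,k] * F[(X*l) % N] * F[(Y*k) % N]  ==  fx @ Map @ fy
    F = np.exp(-2j * np.pi * np.arange(NMap) / NMap)
    idx = np.arange(NMap)
    UV = np.zeros([Nvis, 6])
    for i in range(Nvis):
        X, Y = ANT[i, 0], ANT[i, 1]
        fx = F[((X * idx) % NMap).astype(int)]   # row phase factors
        fy = F[((Y * idx) % NMap).astype(int)]   # column phase factors
        SumI = fx.dot(IMap).dot(fy)
        SumRL = fx.dot(RL).dot(fy)
        SumLR = fx.dot(LR).dot(fy)
        UV[i] = [SumI.real, SumI.imag, SumRL.real, SumRL.imag, SumLR.real, SumLR.imag]
    return UV

This keeps only the loop over antennas and pushes the 257*257 work into fast array operations.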
