I'm writing a program that converts media files to MP3 with GStreamer. It works, but I would also like to know the duration of the audio stream. Here is the simplified code:
import logging
import pygst
pygst.require('0.10')
import gst
# this is very important, without this, callbacks from the gstreamer thread
# will mess our program up
import gobject
gobject.threads_init()
def on_new_buffer(appsink):
buf = appsink.emit('pull-buffer')
print 'new buffer', len(buf)
def on_new_preroll(appsink):
buf = appsink.emit('pull-preroll')
print 'new preroll', len(buf)
def on_pad_added(decoder, pad):
print 'Pad added'
decoder.link(converter)
pipeline.set_state(gst.STATE_PLAYING)
def on_msg(msg):
if msg.type == gst.MESSAGE_ERROR:
error, debug = msg.parse_error()
print error, debug
elif msg.type == gst.MESSAGE_EOS:
duration = pipeline.query_duration(gst.FORMAT_TIME)
print 'Duration', duration
pipeline = gst.Pipeline('pipeline')
appsrc = gst.element_factory_make('appsrc', 'src')
decoder = gst.element_factory_make('decodebin2', 'decoder')
converter = gst.element_factory_make('audioconvert', 'converter')
lame = gst.element_factory_make('lamemp3enc', 'lame')
appsink = gst.element_factory_make('appsink', 'sink')
pipeline.add(appsrc, decoder, lame, converter, appsink)
gst.element_link_many(appsrc, decoder)
gst.element_link_many(converter, lame, appsink)
# -- setup appsrc --
# -- setup decoder --
decoder.connect('pad-added', on_pad_added)
# -- setup mp3 encoder --
lame.set_property('bitrate', 128)
# -- setup appsink --
# this makes appsink emit signals
appsink.set_property('emit-signals', True)
# turns off sync to make decoding as fast as possible
appsink.set_property('sync', False)
appsink.connect('new-buffer', on_new_buffer)
appsink.connect('new-preroll', on_new_preroll)
pipeline.set_state(gst.STATE_PAUSED)
data = open(r'D:\Musics\Fiona Fung - Proud Of You.mp3', 'rb').read()
buf = gst.Buffer(data)
appsrc.emit('push-buffer', buf)
appsrc.emit('end-of-stream')
bus = pipeline.get_bus()
while True:
msg = bus.poll(gst.MESSAGE_ANY, -1)
on_msg(msg)
I didn't use filesrc as the source; I used appsrc instead, because I would like to read streaming data from the Internet rather than from a file. Strangely, as a result, the output duration is -1:
....
new buffer 315
new buffer 320
new buffer 335
new buffer 553
Duration (-1L, <enum GST_FORMAT_TIME of type GstFormat>)
If I switch from appsrc to filesrc, then the duration is correct:
import logging
import pygst
pygst.require('0.10')
import gst
# this is very important, without this, callbacks from the gstreamer thread
# will mess our program up
import gobject
gobject.threads_init()
def on_new_buffer(appsink):
buf = appsink.emit('pull-buffer')
print 'new buffer', len(buf)
def on_new_preroll(appsink):
buf = appsink.emit('pull-preroll')
print 'new preroll', len(buf)
def on_pad_added(decoder, pad):
print 'Pad added'
decoder.link(converter)
pipeline.set_state(gst.STATE_PLAYING)
def on_msg(msg):
if msg.type == gst.MESSAGE_ERROR:
error, debug = msg.parse_error()
print error, debug
elif msg.type == gst.MESSAGE_EOS:
duration = pipeline.query_duration(gst.FORMAT_TIME)
print 'Duration', duration
pipeline = gst.Pipeline('pipeline')
filesrc = gst.element_factory_make('filesrc', 'src')
decoder = gst.element_factory_make('decodebin2', 'decoder')
converter = gst.element_factory_make('audioconvert', 'converter')
lame = gst.element_factory_make('lamemp3enc', 'lame')
appsink = gst.element_factory_make('appsink', 'sink')
pipeline.add(filesrc, decoder, lame, converter, appsink)
gst.element_link_many(filesrc, decoder)
gst.element_link_many(converter, lame, appsink)
# -- setup filesrc --
filesrc.set_property('location', r'D:\Musics\Fiona Fung - Proud Of You.mp3')
# -- setup decoder --
decoder.connect('pad-added', on_pad_added)
# -- setup mp3 encoder --
lame.set_property('bitrate', 128)
# -- setup appsink --
# this makes appsink emit signals
appsink.set_property('emit-signals', True)
# turns off sync to make decoding as fast as possible
appsink.set_property('sync', False)
appsink.connect('new-buffer', on_new_buffer)
appsink.connect('new-preroll', on_new_preroll)
pipeline.set_state(gst.STATE_PAUSED)
bus = pipeline.get_bus()
while True:
msg = bus.poll(gst.MESSAGE_ANY, -1)
on_msg(msg)
As you can see, the result is correct now.
new buffer 322
new buffer 323
new buffer 315
new buffer 320
new buffer 549
Duration (189459000000L, <enum GST_FORMAT_TIME of type GstFormat>)
So, my question is: how do I get the correct duration of audio stream data fed in through appsrc?
Thanks.
Unfortunately, with appsrc it is not possible to get the exact duration of the stream. For some formats with a fixed bitrate you can estimate it from the stream length in bytes, but formats that use variable bitrates will report an unknown duration.
That is because appsrc works on incoming buffers (in either the push or the pull model): it receives a chunk of data, consumes it, and then requests or is supplied the next chunk, which makes estimating the media duration almost impossible. Also, in the push model it is not possible to seek within the media.
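If you do know the total size of the stream in bytes (for example from an HTTP Content-Length header) and the material has a constant bitrate, you can estimate the duration yourself. A minimal sketch (estimate_duration_seconds is a hypothetical helper, not part of the GStreamer API):

def estimate_duration_seconds(total_bytes, bitrate_bps):
    # duration = size in bits / bits per second; only meaningful for CBR streams
    return total_bytes * 8.0 / bitrate_bps

# e.g. a 3,031,040-byte stream at 128 kbit/s comes out to roughly 189.4 s:
# estimate_duration_seconds(3031040, 128000)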
Related
The following code takes a list of URLs as input and creates a GStreamer pipeline. Specifically, for each URL a uridecodebin element is initialized and attached to a queue element. Each queue has the following properties: leaky=2, max-size-buffers=1 and flush-on-eos=1. When I start the pipeline, I can see from nvidia-smi dmon that some video is decoded (NVDEC is used). After a second, everything stops.
I would expect the decoding to keep going and to push frames into the queues, with each queue dropping its old frame every time it receives a new one. Am I wrong?
Code to initialize a Bin to decode the video (source_bin.py). You probably don't need to read it, but here it is:
import sys
from gi.repository import Gst
from pipeline.utils.pipeline import create_gst_elemement
class SourceBin:
@classmethod
def create_source_bin(cls, index: int, uri: str):
# Create a source GstBin to abstract this bin's content from the rest of the pipeline
bin_name = "source-bin-%02d" % index
nbin = Gst.Bin.new(bin_name)
if not nbin:
sys.stderr.write(" Unable to create source bin \n")
# Source element for reading from the uri.
# We will use decodebin and let it figure out the container format of the
# stream and the codec and plug the appropriate demux and decode plugins.
uri_decode_bin = create_gst_elemement("uridecodebin", f"uridecodebin_{index}")
# We set the input uri to the source element
uri_decode_bin.set_property("uri", uri)
# Connect to the "pad-added" signal of the decodebin which generates a
# callback once a new pad for raw data has been created by the decodebin
uri_decode_bin.connect("pad-added", cls.callback_newpad, nbin)
uri_decode_bin.connect("pad-removed", cls.callback_pad_removed, nbin)
uri_decode_bin.connect("no-more-pads", cls.callback_no_more_pads, nbin)
uri_decode_bin.connect("child-added", cls.decodebin_child_added, nbin)
# We need to create a ghost pad for the source bin which will act as a proxy
# for the video decoder src pad. The ghost pad will not have a target right
# now. Once the decode bin creates the video decoder and generates the
# cb_newpad callback, we will set the ghost pad target to the video decoder
# src pad.
Gst.Bin.add(nbin, uri_decode_bin)
bin_pad = nbin.add_pad(
Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC)
)
if not bin_pad:
sys.stderr.write(" Failed to add ghost pad in source bin \n")
return None
# Connect bus
return nbin
@classmethod
def callback_newpad(cls, uridecodebin, uridecodebin_new_src_pad, data):
print(f"SourceBin: added pad {uridecodebin_new_src_pad.name} to {uridecodebin.name}")
caps = uridecodebin_new_src_pad.get_current_caps()
gststruct = caps.get_structure(0)
gstname = gststruct.get_name()
source_bin = data
features = caps.get_features(0)
# Need to check if the pad created by the decodebin is for video and not audio.
if gstname.find("video") != -1:
# Link the decodebin pad only if decodebin has picked nvidia decoder plugin nvdec_*.
# We do this by checking if the pad caps contain NVMM memory features.
if features.contains("memory:NVMM"):
# Get the source bin ghost pad
bin_ghost_pad = source_bin.get_static_pad("src")
if not bin_ghost_pad.set_target(uridecodebin_new_src_pad):
sys.stderr.write(
"Failed to link decoder src pad to source bin ghost pad\n"
)
else:
sys.stderr.write(" Error: Decodebin did not pick nvidia decoder plugin.\n")
@classmethod
def decodebin_child_added(cls, child_proxy, Object, name, user_data):
if name.find("decodebin") != -1:
Object.connect("child-added", cls.decodebin_child_added, user_data)
@classmethod
def callback_pad_removed(cls, uridecodebin, uridecodebin_removed_src_pad, data):
print(f"SourceBin: Removed pad {uridecodebin_removed_src_pad.name} from {uridecodebin.name}")
@classmethod
def callback_no_more_pads(cls, uridecodebin, data):
print(f"SourceBin: No more pads for {uridecodebin.name}")
Pipeline:
import sys
sys.path.append("../")
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import GObject, Gst
from source_bin import SourceBin
def create_gst_elemement(factory_name, instance_name):
element = Gst.ElementFactory.make(factory_name, instance_name)
if not element:
sys.stderr.write(f" Unable to create {factory_name} {instance_name} \n")
return element
urls = [
"my rtsp url..."
]
GObject.threads_init()
Gst.init(None)
pipeline = Gst.Pipeline()
source_bins = [
SourceBin.create_source_bin(i, url)
for i, url in enumerate(urls)
]
frames_queues = list()
for i in range(len(source_bins)):
frames_queue = create_gst_elemement("queue", f"frame_queue_{i}")
frames_queue.set_property("leaky", 2)
frames_queue.set_property("max-size-buffers", 1)
frames_queue.set_property("flush-on-eos", 1)
frames_queues.append(frames_queue)
for source_bin, frames_queue in zip(source_bins, frames_queues):
pipeline.add(source_bin)
pipeline.add(frames_queue)
source_bin.link(frames_queue)
# loop = GObject.MainLoop()
# bus = pipeline.get_bus()
# bus.add_signal_watch()
# bus.connect("message", bus_call, loop)
pipeline.set_state(Gst.State.PLAYING)
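For reference, here is a runnable sketch of the bus handling that is commented out above, keeping the process alive with a GLib main loop (bus_call here is a minimal hypothetical callback, not the original one):

from gi.repository import GLib

def bus_call(bus, message, loop):
    # minimal sketch: stop the loop on EOS or on an error
    if message.type == Gst.MessageType.EOS:
        loop.quit()
    elif message.type == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        sys.stderr.write(f"Error: {err} {debug}\n")
        loop.quit()
    return True

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)
loop.run()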
For your convenience, you can find a graph of the pipeline here: https://drive.google.com/file/d/1Vu8DR1Puam14k-fUKVBW2bG1gN4AgSPm/view?usp=sharing . Everything you see to the left of a queue is the bin that decodes the video. Each row represents a stream and is identical to the others.
I have an FPGA that streams data on the USB bus through an FT2232H and I have observed that about 10% of the data has to be thrown away because some bytes in the frame are missing. Here are the technical details:
The FPGA is an Artix-7. A batch of 4002 bytes is ready every 9 ms, which works out to 444,667 bytes/s of data.
My laptop runs Python 3.7 (from Anaconda) on Ubuntu 18.04 LTS.
The FPGA/FT2232H is opened via the following initialization lines:
SYNCFF = 0x40
SIO_RTS_CTS_HS = (0x1 << 8)
self.device = pylibftdi.Device(mode='t', interface_select=pylibftdi.INTERFACE_A, encoding='latin1')
self.device.ftdi_fn.ftdi_set_bitmode(0xff, SYNCFF)
self.device.ftdi_fn.ftdi_read_data_set_chunksize(0x10000)
self.device.ftdi_fn.ftdi_write_data_set_chunksize(0x10000)
self.device.ftdi_fn.ftdi_setflowctrl(SIO_RTS_CTS_HS)
self.device.flush()
Then the data is read via this simple line:
raw_usb_data = my_fpga.device.read(0x10000)
I have observed the following:
I always get 0x10000 bytes of data per read, which is what I expect.
Reading 2**16 = 65,536 bytes at once using device.read should take 147.4 ms, given that a batch is ready every 9 ms (see the quick check below). But timing that line gives a mean of 143 ms with a standard deviation of 6.6 ms.
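The expected read time is just arithmetic on the batch rate; a quick check:

batch_bytes, batch_period_s = 4002, 0.009   # one 4002-byte batch every 9 ms
rate = batch_bytes / batch_period_s         # ~444,667 bytes/s
print(2**16 / rate)                         # ~0.1474 s, i.e. ~147.4 ms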
My first guess is that there is either no buffer or only a tiny buffer somewhere, and that data is lost because the OS (a priority issue?) or Python (garbage collection?) does something else for too long at some point.
How can I reduce the number of bytes lost while reading from the device?
The FT2232H has internal FIFO buffers with a capacity of ~4 kbit. Chances are that you are limited by them. I am not sure how pylibftdi deals with them, but an alternative approach might work if you can use the VCP driver, which lets you address the FT2232H as a standard COM port, e.g. via pyserial.
Some excerpts from one of my projects, which actually works at baud rates >12 Mbps (UART is limited to 12 Mbps, but e.g. fast opto mode can reach ~25 Mbps):
import time  # needed for time.sleep below
import traceback
import serial
import serial.tools.list_ports
import multiprocessing
import multiprocessing.connection
def IO_proc(cntr_pipe, data_pipe):
try:
search_str="USB VID:PID=0403:6010 SER="
ports = [x.device for x in serial.tools.list_ports.comports() if search_str in x.hwid]
baud_rate = 12000000 #only matters for uart and not for fast opto or fifo mode
ser = serial.Serial(ports[0], baud_rate)  # use the first matching port
while not cntr_pipe.closed:
time.sleep(0)
in_data = ser.read(ser.inWaiting())
# ...do some pattern matching, packet identification etc...
data_pipe.send_bytes(in_data)
except EOFError:
ret_code = 2
except Exception as e:
cntr_pipe.send(traceback.format_exc())
cntr_pipe.close()
ret_code = 4
finally:
cntr_pipe.close()
ser.close()
multiprocessing.connection.BUFSIZE = 2 ** 20 #only required for windows
child_cntr, parent_cntr = multiprocessing.Pipe()
child_data, parent_data = multiprocessing.Pipe()
process = multiprocessing.Process(target=IO_proc, args=(child_cntr, child_data))
process.start()  # the worker must be started explicitly
# called frequently:
def update():
if child_cntr.poll():
raise Exception("error",child_cntr.recv())
buf = bytes()
while parent_data.poll():
buf += parent_data.recv_bytes()
# ...do something fancy with buf...
I tried to copy and paste a minimal example. It is untested, so please forgive me if it does not work out of the box. To get this working, you actually need to make sure that the VCP driver, and not the D2XX driver, is loaded.
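As a quick sanity check (a sketch, assuming the FT2232H enumerates with its default VID:PID of 0403:6010), the device only shows up in pyserial's port list when the VCP driver is bound:

import serial.tools.list_ports

# With the VCP driver (ftdi_sio on Linux) bound, the FT2232H appears as a
# serial port; if D2XX has claimed the device instead, this list stays empty.
for p in serial.tools.list_ports.comports():
    if "0403:6010" in p.hwid:
        print(p.device, p.hwid)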
P.S.: Actually, while scanning through my files, I realized that the pylibftdi way should work as well, since I use a "decorator" class in case the D2XX driver is loaded:
try:
    import pylibftdi
except ImportError:
    pylibftdi = None
class pylibftdi_device:
def __init__(self,speed):
self.dev = pylibftdi.Device(interface_select=2)
self.dev.baudrate = speed
self.buf = b''
def write(self, data):
self.dev.write(data)
def read(self, bytecount):
while bytecount > len(self.buf):
self._read()
ret = self.buf[:bytecount]
self.buf = self.buf[bytecount:]
return ret
def flushInput(self):
self.dev.flush_input()#FT_PURGE_RX
self.buf = b''
def _read(self):
self.buf += self.dev.read(2048)
@property
def in_waiting(self):
self._read()
return len(self.buf)
def close(self):
self.dev.close()
import logging
module_logger = logging.getLogger(__name__)  # module-level logger used below
def find_device_UART(baudrate=12000000, index=1, search_string="USB VID:PID=0403:6010 SER="):
if pylibftdi:
return pylibftdi_device(baudrate),"pylibftdi_device"
try:
ports = [x.device for x in serial.tools.list_ports.comports() if search_string in x.hwid]
module_logger.info(str(ports))
if len(ports) == 0:
return None,"no device found"
else:
ser = serial.Serial(ports[index],baudrate)
return ser,"found device %s %d"%(ser.name,ser.baudrate)
except serial.SerialException as e:
return None,"error during device detection - \n"+str(e)
So the main difference from your example is that the receive buffer is read much more frequently, and the data is accumulated in a buffer which is searched for packets later on. Maybe this is all complete overkill for your application, and you just need to make smaller read calls to ensure the buffers never overflow; a sketch of that follows.
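In its simplest form that could look like the sketch below (my_fpga is your device object, assumed here to be opened in binary mode ('b') so that read() returns bytes; the frame parsing itself is left out):

buf = bytearray()
while True:
    chunk = my_fpga.device.read(4096)  # small, frequent reads keep the FIFO drained
    if chunk:
        buf.extend(chunk)
    # ...scan buf for complete 4002-byte frames and consume them here...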
I use GStreamer and python-gi to get encoded video stream data. My launch line is gst-launch-1.0 v4l2src device=/dev/video0 ! x264enc bitrate=1000 ! h264parse ! flvmux ! appsink.
Now I code it in Python like this:
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstApp', '1.0')
from gi.repository import GObject, Gst, GstApp
GObject.threads_init()
Gst.init(None)
class Example:
def __init__(self):
self.mainloop = GObject.MainLoop()
self.pipeline = Gst.Pipeline()
self.bus = self.pipeline.get_bus()
self.bus.add_signal_watch()
self.bus.connect('message::eos', self.on_eos)
self.bus.connect('message::error', self.on_error)
# Create elements
self.src = Gst.ElementFactory.make('v4l2src', None)
self.encoder = Gst.ElementFactory.make('x264enc', None)
self.parse = Gst.ElementFactory.make('h264parse', None)
self.mux = Gst.ElementFactory.make('flvmux', None)
self.sink = Gst.ElementFactory.make('appsink', None)
# Add elements to pipeline
self.pipeline.add(self.src)
self.pipeline.add(self.encoder)
self.pipeline.add(self.parse)
self.pipeline.add(self.mux)
self.pipeline.add(self.sink)
# Set properties
self.src.set_property('device', "/dev/video0")
# Link elements
self.src.link(self.encoder)
self.encoder.link(self.parse)
self.parse.link(self.mux)
self.mux.link(self.sink)
def run(self):
self.pipeline.set_state(Gst.State.PLAYING)
# self.mainloop.run()
appsink_sample = GstApp.AppSink.pull_sample(self.sink)
while True:
buff = appsink_sample.get_buffer()
size, offset, maxsize = buff.get_sizes()
frame_data = buff.extract_dup(offset, size)
print(frame_data)
def kill(self):
self.pipeline.set_state(Gst.State.NULL)
self.mainloop.quit()
def on_eos(self, bus, msg):
print('on_eos()')
self.kill()
def on_error(self, bus, msg):
print('on_error():', msg.parse_error())
self.kill()
example = Example()
example.run()
But I get the same data every time, just "FLV\0x01\0x01".
I then used the C language to code the same function, but I got the same result. Why? Could anyone help me?
print prints strings, I assume? The buffer contains binary data that merely starts with something resembling a string: probably FLV\0x01\0x01\0x00.. followed by more binary data. String functions treat 0x00 as the end marker of a string and stop printing there (since print does not take a size argument, that is the convention for where your data ends). The size should change from buffer to buffer, though, unless all chunks happen to be the same size. You need a different function to print your binary data, although I'm not sure that is really what you want. Maybe you want to write this data to a file instead, as sketched below?
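For instance, the print in your loop could be replaced with a binary file write (a sketch; capture.flv is an arbitrary output name):

# open once, in binary mode, before the loop:
out_file = open('capture.flv', 'wb')
# inside the loop, instead of print(frame_data):
out_file.write(frame_data)  # writes the raw bytes, embedded 0x00s included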
The problem is in how GStreamer delivers the data. You should use the appsink signal mechanism to get the stream data, as in the sketch below.
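For example (a sketch using the standard GStreamer 1.0 appsink signals, applied to the Example class from the question):

# in __init__, enable the appsink signals and register a callback:
self.sink.set_property('emit-signals', True)
self.sink.connect('new-sample', self.on_new_sample)

# callback, invoked from the streaming thread for every new sample:
def on_new_sample(self, sink):
    sample = sink.emit('pull-sample')
    buff = sample.get_buffer()
    size, offset, maxsize = buff.get_sizes()
    data = buff.extract_dup(offset, size)
    # ...handle the encoded bytes here...
    return Gst.FlowReturn.OK

run() should then start the pipeline and block in self.mainloop.run() instead of calling pull_sample once and looping over the same sample.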
I'd like to create thumbnails for MPEG-4 AVC videos using Gstreamer and Python. Essentially:
Open the video file
Seek to a certain point in time (e.g. 5 seconds)
Grab the frame at that time
Save the frame to disc as a .jpg file
I've been looking at this other similar question, but I cannot quite figure out how to do the seek and frame capture automatically without user input.
So in summary, how can I capture a video thumbnail with Gstreamer and Python as per the steps above?
To elaborate on ensonic's answer, here's an example:
import os
import sys
import gst
def get_frame(path, offset=5, caps=gst.Caps('image/png')):
pipeline = gst.parse_launch('playbin2')
pipeline.props.uri = 'file://' + os.path.abspath(path)
pipeline.props.audio_sink = gst.element_factory_make('fakesink')
pipeline.props.video_sink = gst.element_factory_make('fakesink')
pipeline.set_state(gst.STATE_PAUSED)
# Wait for state change to finish.
pipeline.get_state()
assert pipeline.seek_simple(
gst.FORMAT_TIME, gst.SEEK_FLAG_FLUSH, offset * gst.SECOND)
# Wait for seek to finish.
pipeline.get_state()
buffer = pipeline.emit('convert-frame', caps)
pipeline.set_state(gst.STATE_NULL)
return buffer
def main():
buf = get_frame(sys.argv[1])
with file('frame.png', 'wb') as fh:
fh.write(str(buf))
if __name__ == '__main__':
main()
This generates a PNG image. You can get raw image data using gst.Caps("video/x-raw-rgb,bpp=24,depth=24") or something like that.
Note that in GStreamer 1.0 (as opposed to 0.10), playbin2 has been renamed to playbin and the convert-frame signal is named convert-sample.
The mechanics of seeking are explained in this chapter of the GStreamer Application Development Manual. The 0.10 playbin2 documentation no longer seems to be online, but the documentation for 1.0 is here.
An example in Vala, with GStreamer 1.0:
var playbin = Gst.ElementFactory.make ("playbin", null);
playbin.set ("uri", "file:///path/to/file");
// some code here.
var caps = Gst.Caps.from_string("image/png");
Gst.Sample sample;
Signal.emit_by_name(playbin, "convert-sample", caps, out sample);
if(sample == null)
return;
var sample_caps = sample.get_caps ();
if(sample_caps == null)
return;
unowned Gst.Structure structure = sample_caps.get_structure(0);
int width = (int)structure.get_value ("width");
int height = (int)structure.get_value ("height");
var memory = sample.get_buffer().get_memory (0);
Gst.MapInfo info;
memory.map (out info, Gst.MapFlags.READ);
uint8[] data = info.data;
It's an old question, but I still haven't found this documented anywhere.
I found that the following works on a playing video with GStreamer 1.0:
import gi
import time
gi.require_version('Gst', '1.0')
from gi.repository import Gst
def get_frame():
caps = Gst.Caps('image/png')
pipeline = Gst.ElementFactory.make("playbin", "playbin")
pipeline.set_property('uri','file:///home/rolf/GWPE.mp4')
pipeline.set_state(Gst.State.PLAYING)
#Allow time for it to start
time.sleep(0.5)
# jump 30 seconds
seek_time = 30 * Gst.SECOND
pipeline.seek(1.0, Gst.Format.TIME,(Gst.SeekFlags.FLUSH | Gst.SeekFlags.ACCURATE),Gst.SeekType.SET, seek_time , Gst.SeekType.NONE, -1)
#Allow video to run to prove it's working, then take snapshot
time.sleep(1)
sample = pipeline.emit('convert-sample', caps)
buff = sample.get_buffer()
result, mapinfo = buff.map(Gst.MapFlags.READ)
if result:
    data = bytes(mapinfo.data)  # copy the PNG bytes out before unmapping
    buff.unmap(mapinfo)
    pipeline.set_state(Gst.State.NULL)
    return data
else:
return
if __name__ == '__main__':
Gst.init(None)
image = get_frame()
with open('frame.png', 'wb') as snapshot:
snapshot.write(image)
The code should run under both Python 2 and Python 3; I hope it helps someone.
Use playbin2. Set the uri to the media file, use gst_element_seek_simple() to seek to the desired time position, and then use g_signal_emit() to invoke the "convert-frame" action signal.
I'm trying to transmit TCP/IP over a radio that is connected to my computer (specifically, the USRP). Right now, it's done very simply using Tun/Tap to set up a new network interface. Here's the code:
from gnuradio import gr, gru, modulation_utils
from gnuradio import usrp
from gnuradio import eng_notation
from gnuradio.eng_option import eng_option
from optparse import OptionParser
import random
import time
import struct
import sys
import os
# from current dir
from transmit_path import transmit_path
from receive_path import receive_path
import fusb_options
#print os.getpid()
#raw_input('Attach and press enter')
# Linux specific...
# TUNSETIFF ifr flags from <linux/if_tun.h>
IFF_TUN = 0x0001 # tunnel IP packets
IFF_TAP = 0x0002 # tunnel ethernet frames
IFF_NO_PI = 0x1000 # don't pass extra packet info
IFF_ONE_QUEUE = 0x2000 # beats me ;)
def open_tun_interface(tun_device_filename):
from fcntl import ioctl
mode = IFF_TAP | IFF_NO_PI
TUNSETIFF = 0x400454ca
tun = os.open(tun_device_filename, os.O_RDWR)
ifs = ioctl(tun, TUNSETIFF, struct.pack("16sH", "gr%d", mode))
ifname = ifs[:16].strip("\x00")
return (tun, ifname)
# /////////////////////////////////////////////////////////////////////////////
# the flow graph
# /////////////////////////////////////////////////////////////////////////////
class my_top_block(gr.top_block):
def __init__(self, mod_class, demod_class,
rx_callback, options):
gr.top_block.__init__(self)
self.txpath = transmit_path(mod_class, options)
self.rxpath = receive_path(demod_class, rx_callback, options)
self.connect(self.txpath)
self.connect(self.rxpath)
def send_pkt(self, payload='', eof=False):
return self.txpath.send_pkt(payload, eof)
def carrier_sensed(self):
"""
Return True if the receive path thinks there's carrier
"""
return self.rxpath.carrier_sensed()
# /////////////////////////////////////////////////////////////////////////////
# Carrier Sense MAC
# /////////////////////////////////////////////////////////////////////////////
class cs_mac(object):
"""
Prototype carrier sense MAC
Reads packets from the TUN/TAP interface, and sends them to the PHY.
Receives packets from the PHY via phy_rx_callback, and sends them
into the TUN/TAP interface.
Of course, we're not restricted to getting packets via TUN/TAP, this
is just an example.
"""
def __init__(self, tun_fd, verbose=False):
self.tun_fd = tun_fd # file descriptor for TUN/TAP interface
self.verbose = verbose
self.tb = None # top block (access to PHY)
def set_top_block(self, tb):
self.tb = tb
def phy_rx_callback(self, ok, payload):
"""
Invoked by thread associated with PHY to pass received packet up.
@param ok: bool indicating whether payload CRC was OK
@param payload: contents of the packet (string)
"""
if self.verbose:
print "Rx: ok = %r len(payload) = %4d" % (ok, len(payload))
if ok:
os.write(self.tun_fd, payload)
def main_loop(self):
"""
Main loop for MAC.
Only returns if we get an error reading from TUN.
FIXME: may want to check for EINTR and EAGAIN and reissue read
"""
min_delay = 0.001 # seconds
while 1:
payload = os.read(self.tun_fd, 10*1024)
if not payload:
self.tb.send_pkt(eof=True)
break
if self.verbose:
print "Tx: len(payload) = %4d" % (len(payload),)
delay = min_delay
while self.tb.carrier_sensed():
sys.stderr.write('B')
time.sleep(delay)
if delay < 0.050:
delay = delay * 2 # exponential back-off
self.tb.send_pkt(payload)
# /////////////////////////////////////////////////////////////////////////////
# main
# /////////////////////////////////////////////////////////////////////////////
def main():
mods = modulation_utils.type_1_mods()
demods = modulation_utils.type_1_demods()
parser = OptionParser (option_class=eng_option, conflict_handler="resolve")
expert_grp = parser.add_option_group("Expert")
parser.add_option("-m", "--modulation", type="choice", choices=mods.keys(),
default='gmsk',
help="Select modulation from: %s [default=%%default]"
% (', '.join(mods.keys()),))
parser.add_option("-v","--verbose", action="store_true", default=False)
expert_grp.add_option("-c", "--carrier-threshold", type="eng_float", default=30,
help="set carrier detect threshold (dB) [default=%default]")
expert_grp.add_option("","--tun-device-filename", default="/dev/net/tun",
help="path to tun device file [default=%default]")
transmit_path.add_options(parser, expert_grp)
receive_path.add_options(parser, expert_grp)
for mod in mods.values():
mod.add_options(expert_grp)
for demod in demods.values():
demod.add_options(expert_grp)
fusb_options.add_options(expert_grp)
(options, args) = parser.parse_args ()
if len(args) != 0:
parser.print_help(sys.stderr)
sys.exit(1)
if options.rx_freq is None or options.tx_freq is None:
sys.stderr.write("You must specify -f FREQ or --freq FREQ\n")
parser.print_help(sys.stderr)
sys.exit(1)
# open the TUN/TAP interface
(tun_fd, tun_ifname) = open_tun_interface(options.tun_device_filename)
# Attempt to enable realtime scheduling
r = gr.enable_realtime_scheduling()
if r == gr.RT_OK:
realtime = True
else:
realtime = False
print "Note: failed to enable realtime scheduling"
# If the user hasn't set the fusb_* parameters on the command line,
# pick some values that will reduce latency.
if options.fusb_block_size == 0 and options.fusb_nblocks == 0:
if realtime: # be more aggressive
options.fusb_block_size = gr.prefs().get_long('fusb', 'rt_block_size', 1024)
options.fusb_nblocks = gr.prefs().get_long('fusb', 'rt_nblocks', 16)
else:
options.fusb_block_size = gr.prefs().get_long('fusb', 'block_size', 4096)
options.fusb_nblocks = gr.prefs().get_long('fusb', 'nblocks', 16)
#print "fusb_block_size =", options.fusb_block_size
#print "fusb_nblocks =", options.fusb_nblocks
# instantiate the MAC
mac = cs_mac(tun_fd, verbose=True)
# build the graph (PHY)
tb = my_top_block(mods[options.modulation],
demods[options.modulation],
mac.phy_rx_callback,
options)
mac.set_top_block(tb) # give the MAC a handle for the PHY
if tb.txpath.bitrate() != tb.rxpath.bitrate():
print "WARNING: Transmit bitrate = %sb/sec, Receive bitrate = %sb/sec" % (
eng_notation.num_to_str(tb.txpath.bitrate()),
eng_notation.num_to_str(tb.rxpath.bitrate()))
print "modulation: %s" % (options.modulation,)
print "freq: %s" % (eng_notation.num_to_str(options.tx_freq))
print "bitrate: %sb/sec" % (eng_notation.num_to_str(tb.txpath.bitrate()),)
print "samples/symbol: %3d" % (tb.txpath.samples_per_symbol(),)
#print "interp: %3d" % (tb.txpath.interp(),)
#print "decim: %3d" % (tb.rxpath.decim(),)
tb.rxpath.set_carrier_threshold(options.carrier_threshold)
print "Carrier sense threshold:", options.carrier_threshold, "dB"
print
print "Allocated virtual ethernet interface: %s" % (tun_ifname,)
print "You must now use ifconfig to set its IP address. E.g.,"
print
print " $ sudo ifconfig %s 192.168.200.1" % (tun_ifname,)
print
print "Be sure to use a different address in the same subnet for each machine."
print
tb.start() # Start executing the flow graph (runs in separate threads)
mac.main_loop() # don't expect this to return...
tb.stop() # but if it does, tell flow graph to stop.
tb.wait() # wait for it to finish
if __name__ == '__main__':
try:
main()
except KeyboardInterrupt:
pass
(Anyone familiar with GNU Radio will recognize this as tunnel.py)
My question is: is there a better way to move packets to and from the kernel than TUN/TAP? I've been looking at ipip, or maybe using sockets, but I'm pretty sure those won't be very fast. Speed is what I'm most concerned with.
Remember that tunnel.py is a really, really rough example and hasn't been updated in a while. It's not really meant to be a basis for other code, so be careful about how much you rely on it.
Also, remember that TCP over unreliable radio links has significant issues:
http://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_over_wireless_networks