GStreamer gst.LinkError problem in Python

I am writing a GStreamer application in Python, and I get a LinkError with the following code:
import pygst
pygst.require('0.10')
import gst
import pygtk
pygtk.require('2.0')
import gtk

# This is very important: without it, callbacks from the GStreamer thread
# will mess up our program.
gtk.gdk.threads_init()

def main():
    pipeline = gst.Pipeline('pipeline')
    filesrc = gst.element_factory_make("filesrc", "filesrc")
    filesrc.set_property('location', 'C:/a.mp3')
    decode = gst.element_factory_make("decodebin", "decode")
    convert = gst.element_factory_make('audioconvert', 'convert')
    sink = gst.element_factory_make("autoaudiosink", "sink")
    pipeline.add(filesrc, decode, convert, sink)
    gst.element_link_many(filesrc, decode, convert, sink)
    pipeline.set_state(gst.STATE_PLAYING)
    gtk.main()

main()
And the error:
ImportError: could not import gio
Traceback (most recent call last):
File "H:\workspace\ggg\src\test2.py", line 37, in <module>
main()
File "H:\workspace\ggg\src\test2.py", line 31, in main
gst.element_link_many(filesrc, decode, convert, sink)
gst.LinkError: failed to link decode with convert
It is very strange: the same pipeline works when built with parse_launch. Here is the code:
import pygst
pygst.require('0.10')
import gst
import pygtk
pygtk.require('2.0')
import gtk

# This is very important: without it, callbacks from the GStreamer thread
# will mess up our program.
gtk.gdk.threads_init()

def main():
    player = gst.parse_launch('filesrc location=C:/a.mp3 ! decodebin ! audioconvert ! autoaudiosink')
    player.set_state(gst.STATE_PLAYING)
    gtk.main()

main()
So here is the question: why does the manually built pipeline fail while the parsed one succeeds? What is wrong, and how can I fix it?
Thanks.

Your problem is here:

gst.element_link_many(filesrc, decode, convert, sink)

The reason is that not all elements have simple, static inputs and outputs. At this point in your program, your decodebin does not have any source pads (that is, no outputs).
A pad is like a nipple: it's an input or output of an element. Pads can appear, disappear, or just sit there. There are three classes of pads: static pads (the easiest, and what you would expect), request pads (which appear only when you ask for them) and sometimes pads (which appear only when the element decides to create them). The outputs of decodebin are sometimes pads.
If you inspect the output of gst-inspect decodebin, you can see this for yourself:
Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      ANY

  SRC template: 'src%d'
    Availability: Sometimes
    Capabilities:
      ANY
At the point where you call gst.element_link_many, you can't link decode to anything, because it doesn't have any source pads to link with. Source pads on a decodebin appear only as the input stream is decoded, and this doesn't happen instantaneously. Any number of source pads may appear (e.g. one for an audio stream, two for a video stream with audio, none for an un-decodable stream).
You need to wait until the pads are created, and then link them. decodebin emits a signal, "new-decoded-pad", to tell you when this happens (this is also documented in gst-inspect decodebin). You must connect a callback function to this signal and link your decode and audioconvert elements in the callback. Here is your corrected code:
#!/usr/bin/python
import pygst
pygst.require('0.10')
import gst
import pygtk
pygtk.require('2.0')
import gtk

# This is very important: without it, callbacks from the GStreamer thread
# will mess up our program.
gtk.gdk.threads_init()

def on_new_decoded_pad(dbin, pad, islast):
    # Called once decodebin has created a source pad for the decoded stream.
    decode = pad.get_parent()
    pipeline = decode.get_parent()
    convert = pipeline.get_by_name('convert')
    decode.link(convert)
    pipeline.set_state(gst.STATE_PLAYING)
    print "linked!"

def main():
    pipeline = gst.Pipeline('pipeline')
    filesrc = gst.element_factory_make("filesrc", "filesrc")
    filesrc.set_property('location', 'C:/a.mp3')
    decode = gst.element_factory_make("decodebin", "decode")
    convert = gst.element_factory_make('audioconvert', 'convert')
    sink = gst.element_factory_make("autoaudiosink", "sink")
    pipeline.add(filesrc, decode, convert, sink)
    gst.element_link_many(filesrc, decode)
    gst.element_link_many(convert, sink)
    # Link decode to convert only once decodebin has created its source pad.
    decode.connect("new-decoded-pad", on_new_decoded_pad)
    pipeline.set_state(gst.STATE_PAUSED)
    gtk.main()

main()
gst.parse_launch works because it takes care of all these niggly details for you. There is also the high-level element playbin, which automatically creates and links a decodebin internally, as in the sketch below.
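For example, a minimal playbin sketch in the same 0.10 API (not from the original answer; it reuses the asker's example file path):

import pygst
pygst.require('0.10')
import gst
import pygtk
pygtk.require('2.0')
import gtk

gtk.gdk.threads_init()

def main():
    # playbin builds the filesrc/decodebin/convert/sink chain internally
    # and handles the dynamic pad linking itself.
    player = gst.element_factory_make("playbin", "player")
    player.set_property("uri", "file:///C:/a.mp3")
    player.set_state(gst.STATE_PLAYING)
    gtk.main()

main()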

Related

GStreamer, Tee in RTSP server

I am new to GStreamer and am trying to create an RTSP server that consumes a source once and serves it to every output stream; the code follows. In the factory's do_create_element(), if I return source_pipeline directly, I can connect to the server and consume the stream. However, as expected, if I connect multiple clients the server crashes with an error saying the src pad is already connected. My idea was to use a tee element, add a queue after it, and wrap the queue in a pipeline (since I need a GstBin to return from the factory). However, this does not work. When the first client connects, nothing happens server-side (no crash; the function is called and returns successfully), but on the client side no stream is read, and I get an 'SDP contains no streams' error (see the console output after the code).
I tried linking the tee to the queue manually (the line is left commented out) instead of relying on the pipeline, but nothing changes. I suspect the queue needs something downstream to consume it, but I'm not sure what to use (or whether that is even the actual issue).
Here is a minimal example:
import gi

gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
# noinspection PyUnresolvedReferences
from gi.repository import Gst, GstRtspServer, GObject


def create_source():
    s_src = "videotestsrc ! video/x-raw,rate=30,width=320,height=240,format=I420"
    s_h264 = "x264enc tune=zerolatency"
    pipeline_str = "( {s_src} ! queue max-size-buffers=1000 name=q_enc ! {s_h264} ! rtph264pay name=pay0 pt=96 )".format(
        **locals()
    )
    return Gst.parse_launch(pipeline_str)


def _main():
    GObject.threads_init()
    Gst.init(None)

    source = create_source()
    tee = Gst.ElementFactory.make("tee", "tee")

    source_pipeline = Gst.Pipeline()
    source_pipeline.add(source)
    source_pipeline.add(tee)
    source_pipeline.set_state(Gst.State.PLAYING)

    class Factory(GstRtspServer.RTSPMediaFactoryURI):
        def __init__(self):
            super(Factory, self).__init__()
            self._n = 0

        def do_create_element(self, url):  # -> GstBin (Pipeline)
            print("in [Factory] do_create_element")
            source_pipeline.set_state(Gst.State.PAUSED)

            stream_pipeline = Gst.Pipeline()
            # Create a new queue to buffer the tee's output
            q = Gst.ElementFactory.make("queue")
            stream_pipeline.add(q)

            source_pipeline.add(tee)
            source_pipeline.add(stream_pipeline)
            # tee.link(q)

            stream_pipeline.set_state(Gst.State.PLAYING)
            source_pipeline.set_state(Gst.State.PLAYING)
            return stream_pipeline

    factory = Factory()
    factory.set_shared(True)

    server = GstRtspServer.RTSPServer()
    server.get_mount_points().add_factory("/endpoint", factory)
    server.attach(None)

    loop = GObject.MainLoop()
    loop.run()


if __name__ == "__main__":
    _main()
Here is the shell output when connecting:
> gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/endpoint ! rtph264depay ! h264parse ! mp4mux ! filesink location=file.mp4
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Pipeline is PREROLLED ...
Prerolled, waiting for progress to finish...
Progress: (connect) Connecting to rtsp://127.0.0.1:8554/endpoint
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0: Could not get/set settings from/on resource.
Additional debug info:
../gst/rtsp/gstrtspsrc.c(7637): gst_rtspsrc_setup_streams_start (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
SDP contains no streams
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
Thanks in advance!
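For reference, a minimal sketch of the approach GstRtspServer itself provides for sharing one live source between clients: let a plain RTSPMediaFactory build the whole encode/pay pipeline from a launch description and mark it shared, instead of hand-wiring a tee across separate pipelines. This is an untested outline under those assumptions, not a verified fix for the code above.

import gi

gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)

# The factory builds one media pipeline from the launch string; set_shared(True)
# lets every connecting client reuse that single media instead of spawning a
# second source that fights over the pads.
factory = GstRtspServer.RTSPMediaFactory()
factory.set_launch(
    "( videotestsrc is-live=true ! video/x-raw,width=320,height=240,framerate=30/1 "
    "! x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )"
)
factory.set_shared(True)

server = GstRtspServer.RTSPServer()
server.get_mount_points().add_factory("/endpoint", factory)
server.attach(None)

GLib.MainLoop().run()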

Problem with GStreamer and WPEWebKit in Python3

I want to create a program in Python that lets me edit moving text shown on top of a video. I want to do that with a web-page overlay (WPEWebKit).
I have found some code to start a pipeline with wpe and gst-launch-1.0. The problem is that when I run gst-launch-1.0 -v wpesrc location="https://gstreamer.freedesktop.org" ! queue ! glimagesink from https://gstreamer.freedesktop.org/documentation/wpe/wpesrc.html, it says WARNING: erroneous pipeline: no element "wpesrc". I have checked, and gstreamer1.0-plugins-bad is installed.
The other challenge is that I want to do this in a Python application. Here's my code so far:
import threading
import time

import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init()

main_loop = GLib.MainLoop()
thread = threading.Thread(target=main_loop.run)
thread.start()

pipeline = Gst.parse_launch("videotestsrc ! decodebin ! videoconvert ! autovideosink")
pipeline.set_state(Gst.State.PLAYING)

try:
    while True:
        time.sleep(0.1)
except KeyboardInterrupt:
    pass

pipeline.set_state(Gst.State.NULL)
main_loop.quit()
I will probably rewrite this later, because I want it to live in a GTK application.
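As a first debugging step in Python, one could ask the GStreamer registry directly whether wpesrc is visible at all. A hedged sketch (not from the original post), using only the standard Gst.ElementFactory.find lookup:

import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Look the element up in the registry instead of failing at parse time.
if Gst.ElementFactory.find("wpesrc") is None:
    print("wpesrc is not in the registry: the wpe plugin (from gst-plugins-bad, "
          "built with WPE support) is not available to this GStreamer installation.")
else:
    pipeline = Gst.parse_launch(
        'wpesrc location="https://gstreamer.freedesktop.org" ! queue ! glimagesink'
    )
    pipeline.set_state(Gst.State.PLAYING)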

How to change a file source while playing? - GStreamer

I am trying to create a program that produces an HLS stream of two images, switching between them on keyboard input. I have this sample code:
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib
from threading import Thread

Gst.init(None)

main_loop = GLib.MainLoop()
thread = Thread(target=main_loop.run)
thread.start()

pipeline = Gst.Pipeline()

source = Gst.ElementFactory.make("filesrc", "source")
source.set_property("location", "image1.jpeg")
decoder = Gst.ElementFactory.make("jpegdec", "decoder")
freeze = Gst.ElementFactory.make("imagefreeze", "freeze")
convert = Gst.ElementFactory.make("videoconvert", "convert")
encoder = Gst.ElementFactory.make("x264enc", "encoder")
encoder.set_property("speed-preset", "ultrafast")
encoder.set_property("tune", "zerolatency")
muxer = Gst.ElementFactory.make("mpegtsmux", "muxer")
queue = Gst.ElementFactory.make("queue", "queue")
hls = Gst.ElementFactory.make("hlssink", "hls")
hls.set_property("max-files", 0)
hls.set_property("playlist-length", 0)
hls.set_property("target-duration", 1)

pipeline.add(source, decoder, freeze, convert, encoder, muxer, queue, hls)
source.link(decoder)
decoder.link(freeze)
freeze.link(convert)
convert.link(encoder)
encoder.link(muxer)
muxer.link(queue)
queue.link(hls)

pipeline.set_state(Gst.State.PLAYING)

input()
source.set_property("location", "image2.jpeg")
input()

pipeline.set_state(Gst.State.NULL)
main_loop.quit()
The problem is that when I run it and then hit Enter, playlist.m3u8 resets, which I don't want. This happens because of the state change to READY; when I remove that state change it works, but I get a warning. Is there an approach that is safe and doesn't produce a warning? Is there a way to change the source while playing?
I did find gst-interpipe, but I couldn't get it to work and have filed an appropriate bug report there.
Any help appreciated.
Thanks.
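One direction that avoids touching filesrc at runtime (a hedged, untested sketch, not from the original post): decode both images up front and switch between them with an input-selector, whose active-pad property can be changed while the pipeline stays in PLAYING. input-selector, its sink_%u request pads, and the active-pad property are standard GStreamer; the rest of the wiring below is an assumed outline.

import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib
from threading import Thread

Gst.init(None)

main_loop = GLib.MainLoop()
Thread(target=main_loop.run, daemon=True).start()

# Both image branches are built once; switching never changes element state,
# so hlssink keeps appending to the same playlist.
pipeline = Gst.parse_launch(
    "input-selector name=selector ! x264enc speed-preset=ultrafast tune=zerolatency ! "
    "mpegtsmux ! hlssink max-files=0 playlist-length=0 target-duration=1 "
    "filesrc location=image1.jpeg ! jpegdec ! imagefreeze ! videoconvert ! selector.sink_0 "
    "filesrc location=image2.jpeg ! jpegdec ! imagefreeze ! videoconvert ! selector.sink_1"
)
selector = pipeline.get_by_name("selector")
pipeline.set_state(Gst.State.PLAYING)

input()  # press Enter to switch to the second image
# get_static_pad here just looks up the already-requested sink_1 pad by name.
selector.set_property("active-pad", selector.get_static_pad("sink_1"))

input()
pipeline.set_state(Gst.State.NULL)
main_loop.quit()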

Convert a GStreamer pipeline to Python code

I'm trying to convert a GStreamer pipeline to Python code using the gi library.
This is the pipeline, which runs successfully in a terminal:
gst-launch-1.0 rtspsrc location="rtsp://admin:123456#192.168.0.150:554/H264?ch=1&subtype=0&proto=Onvif" latency=300 ! rtph264depay ! h264parse ! nvv4l2decoder drop-frame-interval=1 ! nvvideoconvert ! video/x-raw,width=1920,height=1080,formate=I420 ! queue ! nveglglessink window-x=0 window-y=0 window-width=1080 window-height=720
But when I run the same pipeline from Python code, there is no output window displaying the RTSP stream and also no error on the terminal. The terminal simply hangs until I press Ctrl+C.
This is the code that I'm using to run the GStreamer pipeline:
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst, GObject


def main(device):
    GObject.threads_init()
    Gst.init(None)

    pipeline = Gst.Pipeline()

    source = Gst.ElementFactory.make("rtspsrc", "video-source")
    source.set_property("location", device)
    source.set_property("latency", 300)
    pipeline.add(source)

    depay = Gst.ElementFactory.make("rtph264depay", "depay")
    pipeline.add(depay)
    source.link(depay)

    parse = Gst.ElementFactory.make("h264parse", "parse")
    pipeline.add(parse)
    depay.link(parse)

    decoder = Gst.ElementFactory.make("nvv4l2decoder", "decoder")
    decoder.set_property("drop-frame-interval", 2)
    pipeline.add(decoder)
    parse.link(decoder)

    convert = Gst.ElementFactory.make("nvvideoconvert", "convert")
    pipeline.add(convert)
    decoder.link(convert)

    caps = Gst.Caps.from_string("video/x-raw,width=1920,height=1080,formate=I420")
    filter = Gst.ElementFactory.make("capsfilter", "filter")
    filter.set_property("caps", caps)
    pipeline.add(filter)
    convert.link(filter)

    queue = Gst.ElementFactory.make("queue", "queue")
    pipeline.add(queue)
    filter.link(queue)

    sink = Gst.ElementFactory.make("nveglglessink", "video-sink")
    sink.set_property("window-x", 0)
    sink.set_property("window-y", 0)
    sink.set_property("window-width", 1280)
    sink.set_property("window-height", 720)
    pipeline.add(sink)
    queue.link(sink)

    loop = GObject.MainLoop()
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass

    pipeline.set_state(Gst.State.NULL)


if __name__ == "__main__":
    main("rtsp://admin:123456#192.168.0.150:554/H264?ch=1&subtype=0&proto=Onvif")
Does anyone know what the mistake is? Thank you!

The reason it doesn't work is that rtspsrc's source pads are so-called "sometimes pads". The link here explains it quite well, but basically you cannot know up front how many pads will become available on the rtspsrc, since this depends on the SDP provided by the RTSP server.
As such, you should listen to the "pad-added" signal of the rtspsrc, and in the callback link the rest of your pipeline to the source pad that just showed up.
So summarised:
def main(device):
    GObject.threads_init()
    Gst.init(None)

    pipeline = Gst.Pipeline()

    source = Gst.ElementFactory.make("rtspsrc", "video-source")
    source.set_property("location", device)
    source.set_property("latency", 300)
    # Pass the pipeline along as user data so the callback can add elements to it.
    source.connect("pad-added", on_rtspsrc_pad_added, pipeline)
    pipeline.add(source)

    # We will add/link the rest of the pipeline later, in the callback.

    loop = GObject.MainLoop()
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass

    pipeline.set_state(Gst.State.NULL)


def on_rtspsrc_pad_added(rtspsrc, pad, pipeline):
    # Create the rest of your pipeline here and link it,
    # now that rtspsrc has exposed a source pad.
    depay = Gst.ElementFactory.make("rtph264depay", "depay")
    pipeline.add(depay)
    rtspsrc.link(depay)
    # and so on ....
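To make the "and so on" concrete, here is a hedged sketch (not part of the original answer) of one way to finish the callback: ignore non-RTP pads, build the rest of the asker's chain as a bin from a description string, add it to the running pipeline, sync its state, and link the newly exposed rtspsrc pad to the bin's ghost sink pad. It assumes the same NVIDIA elements (nvv4l2decoder, nvvideoconvert, nveglglessink) are available.

def on_rtspsrc_pad_added(rtspsrc, pad, pipeline):
    # rtspsrc can expose non-RTP pads too; only handle RTP media here.
    caps = pad.get_current_caps()
    if caps is not None and not caps.to_string().startswith("application/x-rtp"):
        return

    # ghost_unlinked_pads=True gives the bin a "sink" ghost pad to link to.
    rest = Gst.parse_bin_from_description(
        "rtph264depay ! h264parse ! nvv4l2decoder drop-frame-interval=1 ! "
        "nvvideoconvert ! video/x-raw,width=1920,height=1080 ! queue ! "
        "nveglglessink window-x=0 window-y=0 window-width=1280 window-height=720",
        True,
    )
    pipeline.add(rest)
    rest.sync_state_with_parent()  # the pipeline is already PLAYING at this point
    pad.link(rest.get_static_pad("sink"))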

Using threads in PyGObject with GTK3 in Python 3

So I am trying to use threads to run a blocking operation in a Python 3 based application.
#!/usr/bin/env python3
import gi, os, threading, Skype4Py

gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, GLib, GObject

skype = Skype4Py.Skype()


def ConnectSkype():
    skype.Attach()


class Contacts_Listbox_Row(Gtk.ListBoxRow):
    def __init__(self, name):
        # super is not a good idea, needs replacement.
        super(Gtk.ListBoxRow, self).__init__()
        self.names = name
        self.add(Gtk.Label(label=name))


class MainInterfaceWindow(Gtk.Window):
    """The Main User UI"""
    def __init__(self):
        Gtk.Window.__init__(self, title="Python-GTK-Frontend")

        # Set up Grid object
        main_grid = Gtk.Grid()
        self.add(main_grid)

        # Create a listbox which will contain selectable contacts
        contacts_listbox = Gtk.ListBox()
        for handle, name in self.GetContactTuples():
            GLib.idle_add(contacts_listbox.add, Contacts_Listbox_Row(name))
        GLib.idle_add(main_grid.add, contacts_listbox)

        # Test label for debug
        label = Gtk.Label()
        label.set_text("Test")
        GLib.idle_add(main_grid.attach_next_to, label, contacts_listbox, Gtk.PositionType.TOP, 2, 1)

    def GetContactTuples(self):
        """
        Returns a list of tuples in the form: (username, display name).
        Return -1 if failure.
        """
        print([(user.Handle, user.FullName) for user in skype.Friends])  # debug
        return [(user.Handle, user.FullName) for user in skype.Friends]


if __name__ == '__main__':
    threads = []
    thread = threading.Thread(target=ConnectSkype)  # potentially blocking operation
    thread.start()
    threads.append(thread)

    main_window = MainInterfaceWindow()
    main_window.connect("delete-event", Gtk.main_quit)
    main_window.show_all()
    print('Calling Gtk.main')
    Gtk.main()
The basic idea is that this simple program should fetch a list of contacts from the Skype API and build a list of tuples. The GetContactTuples function does what it is designed to do; the print call I placed verifies that. However, the program hangs indefinitely and never renders an interface. Sometimes it yields seemingly random errors involving threads and/or resource availability. One such error is
(example.py:31248): Gdk-WARNING **: example.py: Fatal IO error 11 (Resource temporarily unavailable) on X server :1.
I know it is related to the use of threads, but based on the documentation here, it seems like just wrapping interface updates in GLib.idle_add calls should be sufficient. So the questions are: why does this not work, and how can I correct the above sample?
UPDATE:
If GLib.idle_add is applied to every line that interacts with GTK where it can be, I get a different error.
[xcb] Unknown request in queue while dequeuing
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
python: xcb_io.c:179: dequeue_pending_request: Assertion '!xcb_xlib_unknown_req_in_deq' failed.
Aborted (core dumped)

Depending on your library version (this is no longer necessary as of GObject 3.10.2), you might actually need to explicitly initialize threading support using GObject.threads_init(), as below:
if __name__ == '__main__':
    # Initialize GLib/GObject threading support before any threads are started.
    GObject.threads_init()

    threads = []
    thread = threading.Thread(target=ConnectSkype)  # potentially blocking operation
    thread.start()
    threads.append(thread)

    main_window = MainInterfaceWindow()
    main_window.connect("delete-event", Gtk.main_quit)
    main_window.show_all()
    print('Calling Gtk.main')
    Gtk.main()
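Beyond that, the pattern generally recommended for PyGObject is to build and touch all GTK widgets only on the main thread, run the blocking work in a background thread, and hand results back with GLib.idle_add. A stripped-down hedged sketch of that structure (the fetch_contacts function and its fake data are placeholders, not part of the original code):

#!/usr/bin/env python3
import threading

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, GLib


def fetch_contacts():
    # Placeholder for the blocking Skype call; runs in a worker thread.
    return [("echo123", "Echo / Sound Test Service")]


class MainInterfaceWindow(Gtk.Window):
    def __init__(self):
        Gtk.Window.__init__(self, title="Python-GTK-Frontend")
        self.listbox = Gtk.ListBox()
        self.add(self.listbox)

    def populate(self, contacts):
        # Runs on the main thread via GLib.idle_add.
        for handle, name in contacts:
            row = Gtk.ListBoxRow()
            row.add(Gtk.Label(label=name))
            self.listbox.add(row)
        self.listbox.show_all()
        return False  # remove the idle handler


def worker(window):
    contacts = fetch_contacts()               # blocking work, off the main thread
    GLib.idle_add(window.populate, contacts)  # marshal the result back to GTK


if __name__ == '__main__':
    win = MainInterfaceWindow()
    win.connect("delete-event", Gtk.main_quit)
    win.show_all()
    threading.Thread(target=worker, args=(win,), daemon=True).start()
    Gtk.main()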
