I was trying to test a stream in a Docker instance, using a fairly common workflow:
docker pull ubuntu
docker run -it ubuntu /bin/sh
apt-get update && apt-get install -y python python3.6 vlc curl
curl https://bootstrap.pypa.io/get-pip.py > get-pip.py
python get-pip.py
pip install streamlink
useradd vlcuser
su vlcuser
pip install vlc
streamlink https://www.myurl worst
It then prints something like:
$ streamlink https://www.myurl worst
[cli][info] Found matching plugin twitch for URL https://www.myurl
[cli][info] Available streams: audio_only, 160p (worst), 360p, 480p, 720p (best)
[cli][info] Opening stream: 160p (hls)
[cli][info] Starting player: /usr/bin/vlc
[cli][info] Player closed
[cli][info] Stream ended
[cli][info] Closing currently open stream...
But I can't figure out why the player immediately closes. Is there a way to keep it open?
I was originally having issues with VLC, but running it as non-root got me to this point. I'm just not sure why the stream fails to stay open. As of right now, I am not authenticated for Twitch etc. I was trying to set it up to be user-agnostic, since it is just a public stream I wanted to look at.
It seems like the trick is to not use VLC at all.
Streamlink has a parameter, --player-external-http, which won't open the player but instead serves the stream over a local HTTP endpoint that you can forward.
This keeps the stream open, and VLC never closes because it is never launched. I'm not sure whether it has the same effect as actually running VLC; I figure syncing onto a stream would still count as a view.
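For reference, a minimal sketch of that invocation (the Twitch URL is a placeholder and port 8080 is an arbitrary choice; streamlink prints the actual local URLs to connect to):
# serve the stream over HTTP instead of launching a player
streamlink --player-external-http --player-external-http-port 8080 https://www.twitch.tv/somechannel worst
# then point any player at the served stream, e.g.
vlc http://127.0.0.1:8080/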
On Linux, you can start a download remotely and watch it at
http://localhost:9091/transmission/web/
using the command:
transmission-remote 9091 -n user:password -a {magnet}
I want to do this on Windows as well.
I am using Python, and I have now installed Transmission and the daemon for Windows and confirmed that it works in the web UI.
But I don't know how to send a magnet.
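One approach that should work the same on Windows and Linux is to talk to Transmission's RPC endpoint directly from Python. A minimal sketch, assuming the daemon listens on localhost:9091 with user/password auth (the magnet string is a placeholder):
import requests

URL = "http://localhost:9091/transmission/rpc"
AUTH = ("user", "password")  # whatever you configured for remote access
magnet = "magnet:?xt=..."    # placeholder magnet link

# Transmission requires a session id header; the first request is
# rejected with 409 and returns the id to use.
resp = requests.post(URL, auth=AUTH)
session_id = resp.headers["X-Transmission-Session-Id"]

payload = {"method": "torrent-add", "arguments": {"filename": magnet}}
resp = requests.post(URL, auth=AUTH, json=payload,
                     headers={"X-Transmission-Session-Id": session_id})
print(resp.json())  # expect "result": "success" if the magnet was added
This is the same RPC call that transmission-remote -a performs under the hood, so it avoids depending on the CLI binary being on the Windows PATH.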
I have built a simple docker image and am trying to figure out why PyAudio will not output any sound.
speaker-test outputs pink noise to the headphone jack.
aplay sound.wav also works
python3 play_wave.py sound.wav hangs and doesn't output any sound.
play_wave.py is an example/test program included with the pyaudio package.
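For reference, the heart of that test program is roughly the following (a paraphrased sketch of blocking wav playback with pyaudio, not the verbatim bundled file):
import sys
import wave
import pyaudio

CHUNK = 1024

wf = wave.open(sys.argv[1], 'rb')
p = pyaudio.PyAudio()
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                channels=wf.getnchannels(),
                rate=wf.getframerate(),
                output=True)

data = wf.readframes(CHUNK)
while data:
    stream.write(data)  # blocking playback; this is where it hangs for me
    data = wf.readframes(CHUNK)

stream.stop_stream()
stream.close()
p.terminate()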
I setup this test repository so you can witness the exact behavior: https://github.com/PaulWieland/pyaudio_test
git clone https://github.com/PaulWieland/pyaudio_test.git
cd pyaudio_test
docker build -t paulwieland/pyaudio_test .
docker run -it --rm --device /dev/snd paulwieland/pyaudio_test /bin/sh
Once inside the container, run aplay Front_Center.wav - the audio is played through the Raspberry Pi's headphone jack.
Now run python3 play.py Front_Center.wav
In my case the script hangs and never finishes. I may get a blip of audio after a few minutes but it will not play the sound correctly.
EDIT:
This issue is some sort of compatibility problem with PortAudio running on a Raspberry Pi 4 using the latest Raspbian OS.
I'm now convinced it has nothing to do with Docker or Python, because I cannot get a simple C program which plays a wav using portaudio to work either.
I made a bit of progress today, so here's my stab at a helpful answer. Audio on Linux can be a pain, but I found what seems like a promising clue while playing with my Pi 3 (+ Raspbian Stretch) today.
Like I said in my comment a few days ago, on my Pi 3 things sounded bad both in the host and in the container when I played sound with pyaudio, but sounded good in both when I played with aplay. I installed a pulseaudio server on the host (packaged by default in most non-Raspbian Debians), and pyaudio started sounding comparably good to aplay there.
I then tried installing pulseaudio in the container as well. The installation succeeded and I got the daemon up and running, but it complained about not being able to connect to dbus, and once it was running, aplay played the sound while pyaudio did not. Next I tried running pulseaudio with the --system flag in the container (because the container user is root, and the daemon said root should only run pulseaudio with that flag); the sound came out again, but it sounded bad in the same way it used to.
Still, I would give it a shot to get your container talking to a pulseaudio server - that feels like a good move to me.
You have two options there: either get a pulseaudio server running in the guest, or run one on the host as normal and permit the container to talk to it, and presumably to dbus as well (sorry, I don't know exactly how to do that). I do know for sure that when pulseaudio was running only on my host, the container couldn't talk to it, because pyaudio printed messages about being unable to connect to the pulse server. The latter option still feels like the better move, because it's easy to get a known-good pulseaudio + pyaudio + dbus setup in the host, so maybe it's just as easy to get pulseaudio + dbus in the host with pyaudio in the container. Worth a shot!
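For the host-pulseaudio route, a commonly cited recipe (untested here, and assuming your desktop user's pulse socket lives at /run/user/1000/pulse/native) looks something like:
docker run -it --rm \
    --device /dev/snd \
    -e PULSE_SERVER=unix:/run/pulse/native \
    -v /run/user/1000/pulse/native:/run/pulse/native \
    -v ~/.config/pulse/cookie:/root/.config/pulse/cookie \
    paulwieland/pyaudio_test /bin/sh
The cookie mount is what lets the container authenticate to the host daemon; depending on your setup, the socket path may differ.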
Another tidbit, for what it's worth: something is not the same between the ALSA configuration in your container and at least my Pi 3 + Raspbian Stretch. The alsa.conf files are not identical, and I think other things differ too. I didn't look too far into it, since I don't have exactly the same problem as you anyway.
Instead of p = pyaudio.PyAudio() do p = pyaudio.init()
I have a Python script that collects data, running inside a virtual environment on a remote Debian-based VPS.
My PC crashed, and I am trying to get back to the live console output of the Python script.
I know the script is still running, because it saves its data into a CSV file, and that CSV is still being written to.
If I activate the virtual environment again, I can rerun the script, but it sounds like I would then have two instances of the same script running...
I am not familiar with virtual environments, and I cannot find the right way to get back to the script without deactivating and reactivating. I am running my script on the cheapest OVH VPS I could buy, because my computer is clearly not reliable enough to run 24/7.
You might use screen to run your script in a separate terminal session. This avoids losing output if the SSH connection gets dropped.
The workflow would be something along the lines of (on your host):
# Install screen
$ sudo apt update
$ sudo apt install screen
# Start a screen session
$ screen
# Run your script
$ python myscript.py
If your SSH connection drops, it's enough to:
# ssh back into the host from your client
# reattach previous screen session
$ screen -r
For advanced use the official docs are quite comprehensive.
Note: More generally, what's explained above is pretty much the basic logic of a terminal multiplexer. You'll be able to achieve the same using tmux.
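For completeness, a rough tmux equivalent of the same workflow (the session name is an arbitrary choice):
# Start a named session and run the script
$ tmux new -s datalogger
$ python myscript.py
# Detach with ctrl+B followed by D; after reconnecting via ssh:
$ tmux attach -t datalogger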
I have written some web services in Python. I want to deploy them on AWS, and I have created the instance.
I tried running it over PuTTY, and it came up fine with the command python Flo.py, which starts the server on 0.0.0.0:8080. But the problem is that when I close the PuTTY window, the server terminates. How can I keep a server running on 8080, just like httpd?
All help is welcome.
I highly recommend you use screen (or tmux). And you may want to use upstart as well.
Screen:
Screen is a full-screen window manager that multiplexes a physical terminal between several processes (typically interactive shells).
tmux and screen do the same thing, which is terminal multiplexing. This gives you a terminal you can attach to and disconnect from, keeping your process running while you're not on the server.
To test it simply install using:
sudo apt-get install screen
Now use the following to open a screen terminal under the name my_screen, running your script as it starts:
screen -dmS my_screen python Flo.py
And attach to it using:
screen -r my_screen
Detach using ctrl+A followed by ctrl+D, and now you can leave the server (screen will keep running with the process in it)
Read more here.
Upstart:
Upstart is an event-based replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.
Upstart is the newer way to start services on Debian as soon as the system boots.
To add an upstart service you need to add a configuration file under /etc/init (open one of the files there and see an example).
These files can be extremely simple so don't be intimidated by what you see there.
You can make a service to run your server / service and send output to a log file which you can then use to keep track of what's happening.
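As a sketch (the file name, script path, and log path are assumptions for illustration), a job file like /etc/init/flo.conf could look like:
description "Flo web service"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
# adjust the path to wherever Flo.py lives; output goes to a log file
exec python /home/ubuntu/Flo.py >> /var/log/flo.log 2>&1
The respawn stanza makes Upstart restart the server if it ever crashes, which is the supervision you don't get from a bare screen session.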
Read more here.
As I am trying to connect the VLC Python bindings with ffmpeg (see Exchange data between ffmpeg and video player), I thought it would be a good idea to have ffmpeg output the RTSP stream to STDOUT, "catch" it with a Python script, and send it over HTTP. So I made a tiny HTTP server using SimpleHTTPServer which takes the STDIN coming from ffmpeg and "outputs" it to the web.
This is the syntax I am using:
ffmpeg.exe -y -i rtsp://fms30.mediadirect.ro/live/utv/utv?tcp -acodec copy -vcodec copy -f flv - | \Python27\python.exe -u stdin2http.py
This seems to work: I can access the stream, but neither the video nor the audio plays. I tried with VLC on Windows, and VLC and MPlayer on Linux, with no success. Simply running
ffmpeg.exe -y -i rtsp://fms30.mediadirect.ro/live/utv/utv?tcp -acodec copy -vcodec copy -f flv - | vlc.exe -
works perfectly. So the problem seems to be in how I write the data from stdin to the web server. What am I doing wrong?
I played around with your stdin2http.py script. First, I was able to stream a media file (H.264 .mp4 file in my case) from a simple web server (webfsd) to VLC via HTTP. Then I tried the same thing using 'stdin2http.py < h264-file.mp4'. It didn't take.
I used the 'ngrep' utility to study the difference in the network conversations between the two servers. I think if you want to make this approach work, you will need to make stdin2http.py smarter and have it handle HTTP content ranges (which might involve buffering some of the stdin data in your script in order to deal with possible jumps in the stream).
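To illustrate the simplest possible relay (no Range support, so some players may still refuse it), here is a minimal sketch in Python 2 to match the \Python27 interpreter from the question - the actual stdin2http.py may differ:
import sys, os
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

# On Windows, stdin defaults to text mode and corrupts binary data,
# so switch it to binary first.
if sys.platform == 'win32':
    import msvcrt
    msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)

class StreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'video/x-flv')  # matches -f flv
        self.end_headers()
        while True:
            chunk = sys.stdin.read(4096)
            if not chunk:
                break
            self.wfile.write(chunk)

HTTPServer(('0.0.0.0', 8080), StreamHandler).serve_forever()
Text-mode stdin is a likely culprit on Windows in particular: without the O_BINARY switch, newline translation mangles the byte stream, which would corrupt the FLV data even if the HTTP side were fine.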