I am new to game development, and I am trying to aim high and make a 3D RPG. I know the road is not going to be easy; that is why I decided to use Ursina and Python to make my game.
However, I want to add a cutscene showing the backstory. I have the video in MP4 format, but I cannot figure out how to play it inside the game with Ursina.
Any help will be much appreciated.
(Side question: do you think Ursina is good for a beginner in 3D game development? If I want to publish my game on my website, wouldn't it be better for me to learn JavaScript? I read about Unity, but it is too big a download for a little side project.)
You can set the video as the texture of any element. You'll want to attach it to the UI via its parent attribute, and you have to load the sound separately from the same file (as described in the Panda3D documentation).
from ursina import *

app = Ursina()

video = 'video.mp4'
# use the video file itself as the texture of a UI quad
video_player = Entity(model='quad', parent=camera.ui, scale=(1.5, 1), texture=video)
# the audio track has to be loaded separately from the same file (Panda3D loader)
video_sound = loader.loadSfx(video)
# keep the video frames in sync with the audio playback
video_player.texture.synchronizeTo(video_sound)
video_sound.play()

app.run()
The above code assumes that the video file is in the same folder as the Python script.
Well, I don't think there is a way to do that. The closest thing you can do is keep a folder with all the frames of your video as .png or .jpg files, add a quad to the world, and change its texture to the next frame every fraction of a second, depending on the framerate. This, however, would make your computer lag. Trust me, I've tried it. It would probably be better to open a separate window with some module that plays .mp4 files.
In other words, there is no feasible way to do that.
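For concreteness, the frame-swapping hack described above would look roughly like this in Ursina (the folder name, frame count, and frame rate are all placeholders), and it will be slow:

from ursina import *

app = Ursina()

# pre-exported frames, e.g. frames/frame_0000.png ... frame_0239.png (placeholders)
frames = [f'frames/frame_{i:04d}.png' for i in range(240)]
screen = Entity(model='quad', parent=camera.ui, scale=(1.5, 1))

index = 0
def update():                         # Ursina calls a global update() every frame
    global index
    index += time.dt * 24             # advance at roughly 24 frames per second
    screen.texture = frames[int(index) % len(frames)]

app.run()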
From Entity Basics in the documentation:
e4 = Entity(model='cube', texture='movie_name.mp4') # set video texture
So far I already have the application buttons etc., and I can open the file location of the music player's track list and select specific tracks.
I can also play a track and pause/stop it.
At the moment I am trying to use aubio, but it's not going well, since I don't know how to use it properly. Tutorials on this are scarce as well.
Basically, my objective is to make a music player that tracks changes in pitch: when the pitch changes, switch the image placed in the Tk window after 1 second, and (I don't know if this can be done) change it to a random image from a specific folder.
I have tried using different sources to help me understand, but it's not working out. I am a slow beginner and need some help to reach my goal.
Sorry, forgot to mention: this is all being done in a Tk window.
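For what it's worth, a minimal sketch of per-hop pitch tracking with aubio's Python API could look like this; the file name, the 50 Hz change threshold, and the swap_image() hook are assumptions, not working code from the question:

# rough sketch only: read a file, estimate pitch per hop, and flag changes
# 'track.wav', the 50 Hz threshold, and swap_image() are placeholders
import aubio

hop_size = 512
src = aubio.source('track.wav', 0, hop_size)       # 0 = keep the file's samplerate
pitch_o = aubio.pitch('yin', 2048, hop_size, src.samplerate)
pitch_o.set_unit('Hz')
pitch_o.set_tolerance(0.8)

last_pitch = 0.0
while True:
    samples, read = src()
    pitch = pitch_o(samples)[0]
    if abs(pitch - last_pitch) > 50:               # crude "pitch changed" test
        # in a Tk app you would schedule the swap, e.g. root.after(1000, swap_image)
        print('pitch change:', pitch)
    last_pitch = pitch
    if read < hop_size:
        break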
Is there any way of embedding a video player into the GUI of a Python program and then displaying video from a PiCamera in it in real time?
Currently, when I preview, it fills the whole screen and makes any GUI unusable.
I would like to overlay information over the video capture.
I'm not too hot on Python, but in Visual Studio, for example, you could use the VLC plug-in as a display component.
There are lots of tutorials about how to host video and stream it to a server, but nothing covers local display.
Cheers.
I'm not sure if this is exactly what you are looking for, but I use RPi-Cam-Web-Interface. If you are not aware of it you may want to take a look: http://elinux.org/RPi-Cam-Web-Interface
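If you do want a purely local display, the picamera library itself can render its preview into a bounded window and draw a simple text overlay, which may be enough on its own. A minimal sketch, with the window coordinates chosen arbitrarily:

from picamera import PiCamera
from time import sleep

camera = PiCamera()
# render the preview into a 640x480 window at (100, 100) instead of fullscreen,
# leaving the rest of the desktop (and your GUI) usable
camera.start_preview(fullscreen=False, window=(100, 100, 640, 480))
camera.annotate_text = 'recording...'   # simple information overlay
sleep(30)
camera.stop_preview()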
I am writing a program for audio-visual experiments, which will present a pre-generated list of audio-only and video-with-audio stimuli to experiment subjects. I have decided to use PyQt and Phonon for this, despite the fact that I'm fairly new to writing Qt-based programs (and GUI programming in general).
The issue I'm having is that, when the previous file played was video (.mov in this case), and the current file is audio-only (.wav file), the image from the last frame of the video file remains on the screen while the audio file is playing. The video image remains until the next .mov file rolls around in the stimulus list.
Is there a way to clear the Phonon screen, in order to show just an empty black screen while the audio-only files are playing? I've done a fair bit of poking around with Google, and though this question has been asked by a number of people on different forums, it seems to have gone unanswered.
Any suggestions will be greatly appreciated!
This seems to be a bug, or missing feature, and it's hard to come up with a good workaround.
One somewhat hacky solution is to force a resize of the video widget:
# remember the current size, collapse the widget to clear the stale frame, then restore it
size = self.video.size()
self.video.resize(0, 0)
self.video.resize(size)
but I wouldn't bet on this working on all platforms.
A more reliable workaround would be to put the video-widget inside a container widget with a black background, and then simply hide/show the video-widget when stopping/starting the media.
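A rough sketch of that container approach (PyQt4/Phonon; the class and method names are illustrative, not taken from the original program):

from PyQt4 import QtGui
from PyQt4.phonon import Phonon

class PlayerWidget(QtGui.QWidget):
    def __init__(self, parent=None):
        super(PlayerWidget, self).__init__(parent)
        # black container that stays visible behind the video widget
        container = QtGui.QWidget(self)
        container.setStyleSheet('background-color: black')
        self.video = Phonon.VideoWidget(container)
        inner = QtGui.QVBoxLayout(container)
        inner.setContentsMargins(0, 0, 0, 0)
        inner.addWidget(self.video)
        outer = QtGui.QVBoxLayout(self)
        outer.addWidget(container)

    def start_audio_only(self):
        self.video.hide()    # leaves just the black container showing

    def start_video(self):
        self.video.show()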
I am using the Pygame module in Python to take pictures with my webcam. The problem is that I would like to export a video file (I don't care what type) to use elsewhere. Since Pygame cannot export video directly, I guess there are two ways to do it:
Somehow stitch the photos Pygame creates into a video (my preferred method).
Use an external library.
I only need 4 frames per second, and I don't care about the picture quality.
How can I make a video with python / pygame?
I have the same problem and am searching for a solution.
I found this.
It seems to work well, though I haven't tried it yet.
Hope this helps.
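For the frame-stitching route the question prefers, OpenCV's VideoWriter can assemble the saved frames into a file. A minimal sketch, assuming the frames were written with pygame.image.save() into a frames/ folder and using the question's 4 fps:

import glob
import cv2

frames = sorted(glob.glob('frames/*.png'))   # images saved via pygame.image.save()
first = cv2.imread(frames[0])
height, width = first.shape[:2]

fourcc = cv2.VideoWriter_fourcc(*'XVID')     # codec choice is an assumption
out = cv2.VideoWriter('output.avi', fourcc, 4.0, (width, height))  # 4 fps
for name in frames:
    out.write(cv2.imread(name))
out.release()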
I am starting out on a project in which I need to build a customized annotation tool for movies and video. Some person (not technically minded) will need to pop open a GUI that I create, open either a video file or a directory of frames that result from chopping up a video file, and then use a window (much like QuickTime or VLC player, etc., i.e. a video window with a simple slider bar allowing the user to move back and forth at will). In this window, the user will be able to click on interesting points, give them semantic labels and meta-data (such as whether or not the point is occluded by something else in the picture), and then basically "press go" and start a tracker. The tracker will follow the points, frame by frame and the user can press the space bar or something to move forward and backward. The idea is to allow the human to intervene any time the tracker gets confused, but hopefully the tracker works well enough that humans don't have to hand-label every frame in an entire multi-thousand frame video sequence.
I am planning to do this all in Python, (a) because it is the language I know best for non-trivial programming, (b) I have easy access to both OpenCV Python (for image processing algorithms) and PyQt which seems to have a powerful enough GUI toolbox for what I want to do and (c) some other aspects of this same project are being developed by other programmers to work in Python and with MySQL databases. Python just seems like the natural choice to streamline it all together.
I am experienced using computer vision algorithms for the tracking, and I am reasonably sure that I can figure out simple PyQt GUI devices for making points clickable, buttons, entering simple text data, etc. However, the part I am having trouble understanding is how to actually construct my own video window with a slider bar that either moves ahead according to a frame's number or else is actually manipulating a video file. Can I leverage other movie players like VLC from within PyQt when programming in Python? Any suggestions or links that describe similar movie/video editing GUIs and how to develop them at home would be greatly appreciated.
Qt (and therefore PyQt) has good multimedia support via the Phonon module. You can easily use that module to build the video window: it provides an easy-to-use video player, and you can query the playing position, seek, and so on.
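As a minimal sketch of such a window (PyQt4; the file name and widget layout are placeholders), Phonon's VideoPlayer can be paired with a SeekSlider for the scrub bar:

import sys
from PyQt4 import QtGui
from PyQt4.phonon import Phonon

app = QtGui.QApplication(sys.argv)
app.setApplicationName('annotation-tool')    # Phonon requires an application name

player = Phonon.VideoPlayer(Phonon.VideoCategory)
seek = Phonon.SeekSlider()
seek.setMediaObject(player.mediaObject())    # slider tracks and controls the position

window = QtGui.QWidget()
layout = QtGui.QVBoxLayout(window)
layout.addWidget(player)
layout.addWidget(seek)
window.show()

player.play(Phonon.MediaSource('movie.mov'))  # placeholder file name
sys.exit(app.exec_())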