I am starting a new project with a friend of mine. We want to design a system that alerts the driver if the car is drifting from its intended path in a dangerous way.
So, in a nutshell, we have to design a real-time algorithm that takes pictures from the camera and processes them. All of this will be done in Python.
I was wondering if anyone has any advice for us, or could point out some things we have to consider.
Cheers !
You can look into these libraries: dlib, PIL (Pillow), OpenCV, and scikit-image. They are all image processing libraries for Python.
Hope it helps.
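If it helps, here is a minimal sketch of the classic OpenCV lane-detection pipeline (grayscale, Canny edges, region of interest, Hough lines). The camera index, thresholds, and region of interest are placeholders you would have to tune, and a real lane-departure warning would still need calibration and a proper decision rule on top of this:

```python
import cv2
import numpy as np

# Placeholder thresholds; tune for your camera and road conditions.
CANNY_LOW, CANNY_HIGH = 50, 150

cap = cv2.VideoCapture(0)  # 0 = default camera; adjust as needed

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, CANNY_LOW, CANNY_HIGH)

    # Keep only the lower half of the image, where lane markings usually are.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    mask[h // 2:, :] = 255
    roi = cv2.bitwise_and(edges, mask)

    # Detect line segments that could be lane markings.
    lines = cv2.HoughLinesP(roi, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

    cv2.imshow("lanes", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```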
I am relatively new to coding and I apologize if my questions are straightforward to you.
I am trying to understand OpenCV code to be able to add my contributions (mainly converting 2D tools to 3D as it would be useful for my machine learning projects and for medical projects). There is also some extra-curiosity since I like to understand how things work.
1) Take the example of the GaussianBlur method. What happens when I call it in Python? Namely, how is the Python code bound to the C++ one? When I browse the repository, there are only C++ files, and I cannot find where the binding is done. When I installed cv2 with pip everything was automatic, but I would like to understand the process.
2) If I want to understand the whole GaussianBlur algorithm: I am also not familiar with browsing C++ code, so how should I proceed to find which files are used (methods and also inherited classes)?
I've found in another answer that https://github.com/opencv/opencv/blob/9c23f2f1a682faa9f0b2c2223a857c7d93ba65a6/modules/imgproc/src/smooth.cpp#L4085 contains the method, but how can I find any method on my own? Why isn't it in the master folder but in the blob folder? And how can I then find the other methods or classes called by this one?
3) This is more of a curiosity question, since I am not familiar with makefiles: when is the binding between Python and C++ done? When I install OpenCV with pip it happens automatically, but I would like to understand the process.
Thanks a lot for your answers! I would appreciate any tutorial; I googled a lot before asking, of course, but could not find anything that helped me on my own.
In C++ you have to download the library and link it during the compilation and linking process (when creating an executable from source code).
The binding to C++ is done through the Python C API (Python.h); using this binding, the cv2 module is generated so that OpenCV's C++ functions can be called from Python.
To understand Gaussian blur and similar operations, study image processing fundamentals.
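For context, from the Python side the call is just a thin wrapper around the compiled C++ function. A minimal usage sketch (the file names and kernel size are only examples):

```python
import cv2

# Read an example image (the path is a placeholder).
img = cv2.imread("input.jpg")

# This Python call dispatches into the compiled C++ cv::GaussianBlur
# through the generated cv2 binding; (5, 5) is the kernel size, and 0
# lets OpenCV derive sigma from the kernel size.
blurred = cv2.GaussianBlur(img, (5, 5), 0)

cv2.imwrite("blurred.jpg", blurred)
```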
The methods of OpenCV are kept in their respective header files, e.g. opencv2/highgui.hpp for GUI functions like imshow. You can include them in C++ with #include <opencv2/highgui.hpp> (the methods are split across different files to avoid pulling in unnecessary ones).
CMake is a build tool with its own scripting language, in which you write a script describing how the executable should be built from the source code.
The starter tutorial is Here
I am working on "Kinect for XBox One" on my HP Laptop with Windows 10 and 64-bit operating system. I have worked on python before and want to work in it only with jupyter notebok or python command line.
The topic of my project is Dynamic Sign Language Recognition and till now I have worked on only static images. I found many tutorials for working with kinect camera but every tutorial has been done with C++, C# or Sketch in Processing 3. I have downloaded Processing 3 and tried some programs in Sketch also by following this link: https://www.youtube.com/watch?v=XKatPT3HlqA
But even after 2 days, I am not able to run a simple program in it and only a black picture is there as an output, kinect is detected though.
I have also tried Pykinect and python example from this link: https://github.com/Kinect/PyKinect2
It was good and I was able to track the skeleton of the body. I want to learn Pykinect and many more such examples but I am not getting any source from where I can learn all these. My aim is to use all the three cues:RGB, Depth, and Skeleton for my work.
Even for dynamic gesture recognition, there are projects in C++ and languages other than python.
If you have any suggestions regarding kinect with python and dynamic gesture recognition, then you are welcome.
After searching for days, I figured out that there are no tutorials on using the Kinect with Python. Those who want to learn Kinect with Python on Windows should go to this link first: https://github.com/Kinect/PyKinect2
Follow the instructions and run the example programs, whether in Visual Studio, the Python command line, or a Jupyter notebook. There are no tutorials documenting the functions of the PyKinect library. The only way to learn it is through one more link:
https://github.com/Microsoft/PTVS
Explore this link, as it has one or two more examples that will help in understanding the functions. I am not done yet, so I will keep updating my answer if I find more sources.
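In case it saves someone time, here is a minimal sketch of how the PyKinect2 runtime is typically used to grab colour and body (skeleton) frames, based on the examples in that repository. It assumes Windows with the Kinect for Windows SDK 2.0 installed; the reshaping and the chosen frame-source flags are assumptions you should check against the samples:

```python
import numpy as np
from pykinect2 import PyKinectV2, PyKinectRuntime

# Ask the runtime for colour and body streams; depth could be added with
# PyKinectV2.FrameSourceTypes_Depth as well.
kinect = PyKinectRuntime.PyKinectRuntime(
    PyKinectV2.FrameSourceTypes_Color | PyKinectV2.FrameSourceTypes_Body)

while True:  # sketch only; add your own exit condition and processing
    if kinect.has_new_color_frame():
        # The colour frame comes back as a flat BGRA array;
        # reshape it to height x width x 4 for further processing.
        frame = kinect.get_last_color_frame()
        h = kinect.color_frame_desc.Height
        w = kinect.color_frame_desc.Width
        color = frame.reshape((h, w, 4)).astype(np.uint8)

    if kinect.has_new_body_frame():
        bodies = kinect.get_last_body_frame()
        for i in range(kinect.max_body_count):
            body = bodies.bodies[i]
            if body.is_tracked:
                joints = body.joints  # skeleton joints for the tracked body
```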
I'm trying to capture my screen using Python because I'll use the frames with OpenCV, but I couldn't find a way to make it work on GNOME, since GNOME uses Wayland and all the libraries I've found only work with X11.
For now I'm not considering changing my desktop environment. I'm searching for a solution to this problem.
Does anyone know a solution?
To be more specific, I'll use the images to train an AI, so I need them continuously.
EDIT:
I've found this, but how can I pass frames to OpenCV in Python instead of saving a video file?
The proper way to do screencasting these days is by using the Screencast portal, which is part of XDG desktop portals and is already supported by GNOME, KDE, wlroots (and more). As an added advantage, this will also work in containerized formats like Flatpaks.
You can find an example of how to do screencasting in Python in this snippet, created by one of the Mutter maintainers. If you look for parse_launch(), you will see a GStreamer pipeline, which you can modify to include the GStreamer OpenCV elements that can do the processing for you.
Note: in your edit, you link to a predecessor of that portal, which is a GNOME-specific, internal API, so I wouldn't rely on it ;-)
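If instead you want the frames in Python as NumPy arrays for OpenCV, an alternative is to end the pipeline in an appsink and pull samples from it. A rough sketch follows; note that videotestsrc is only a stand-in for the PipeWire source you would get from the Screencast portal, and the caps/format handling is simplified:

```python
import numpy as np
import cv2
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# videotestsrc is a placeholder; with the Screencast portal you would use a
# pipewiresrc configured with the node id returned by the portal.
pipeline = Gst.parse_launch(
    "videotestsrc ! videoconvert ! video/x-raw,format=BGR ! "
    "appsink name=sink emit-signals=true max-buffers=1 drop=true")
sink = pipeline.get_by_name("sink")
pipeline.set_state(Gst.State.PLAYING)

try:
    while True:
        sample = sink.emit("pull-sample")
        if sample is None:
            break
        buf = sample.get_buffer()
        caps = sample.get_caps().get_structure(0)
        w, h = caps.get_value("width"), caps.get_value("height")
        # Copy the raw BGR bytes into a NumPy array that OpenCV can use.
        frame = np.frombuffer(buf.extract_dup(0, buf.get_size()),
                              dtype=np.uint8).reshape((h, w, 3))
        cv2.imshow("capture", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    pipeline.set_state(Gst.State.NULL)
```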
Can someone show me an example of how to use cv.calcOpticalFlowPyrLK in Python? I've looked all over the internet and can't quite figure it out.
This webpage from the OpenCV-Python documentation can help with understanding the optical flow algorithm, and it includes source code. The sample can also be found in the OpenCV installation folder: C:\opencv\samples\python2\lk_track.py.
EDIT
This can also help (explanation + source code + demo).
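For a quick start, here is a minimal sketch of the usual pattern (detect corners with goodFeaturesToTrack, then track them frame to frame with calcOpticalFlowPyrLK); the video path and parameter values are only placeholders:

```python
import numpy as np
import cv2

cap = cv2.VideoCapture("video.mp4")  # placeholder path; use 0 for a webcam

ok, old_frame = cap.read()
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)

# Pick some good corners to track in the first frame.
p0 = cv2.goodFeaturesToTrack(old_gray, maxCorners=100,
                             qualityLevel=0.3, minDistance=7, blockSize=7)

lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                           10, 0.03))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track the points from the previous frame into the current one.
    p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None,
                                           **lk_params)
    good_new = p1[st == 1]
    good_old = p0[st == 1]

    # Draw the motion of each successfully tracked point.
    for (x1, y1), (x0, y0) in zip(good_new, good_old):
        cv2.line(frame, (int(x0), int(y0)), (int(x1), int(y1)), (0, 255, 0), 2)

    cv2.imshow("LK optical flow", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

    old_gray = frame_gray
    p0 = good_new.reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()
```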
I'm working on a project to develop a digital image correlation (DIC) tool for measuring surface strains. Does anyone know of any Mac-compatible libraries I can use for the DIC processing? I was thinking this might be something for which a Python library exists, but I have not yet managed to find one.
Have a look at this thread: Image Processing, In Python?
Then there is OpenCV with its Python interface.
I don't know about the Mac part, though...
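As a rough illustration of why OpenCV could be a starting point: the core of DIC is tracking small subsets between a reference and a deformed image, which can be approximated with normalized cross-correlation. A minimal sketch (the image paths, subset location, and size are placeholders; a real DIC tool would add subpixel interpolation and strain computation on top):

```python
import cv2

# Placeholder file names for the reference and deformed images.
ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
deformed = cv2.imread("deformed.png", cv2.IMREAD_GRAYSCALE)

# Take a small subset around a point of interest in the reference image.
y0, x0, half = 200, 200, 15  # placeholder subset centre and half-size
subset = ref[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]

# Find where that subset best matches in the deformed image
# (normalized cross-correlation).
result = cv2.matchTemplate(deformed, subset, cv2.TM_CCOEFF_NORMED)
_, _, _, max_loc = cv2.minMaxLoc(result)

# Integer-pixel displacement of the subset centre.
dx = (max_loc[0] + half) - x0
dy = (max_loc[1] + half) - y0
print("displacement:", dx, dy)
```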