I'm trying to make a program that rotates and crops an image ROI (without losing a single pixel of the frame) based only on what minAreaRect() returns (judging from what it can mark with drawContours).
Now, since I don't understand what the function returns other than the rotation angle, I'm struggling to write that myself. All I found on the internet was a Stack Overflow question with code that wasn't explained very well and didn't really work (at least not with OpenCV 3.6).
Could I get some clues about the return structure of this function, how and where to look such things up, and perhaps a short function that does the rotation and cropping? It looks like a fairly common and simple thing to achieve.
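For reference, minAreaRect() returns a tuple ((center_x, center_y), (width, height), angle), with the angle in degrees (in OpenCV 3.x it lies in [-90, 0), and width/height can come out swapped depending on the rect's orientation). Here is a minimal sketch of the rotate-and-crop idea, assuming the rect lies far enough from the image border that the rotation doesn't clip it (otherwise pad the image first):

```python
import cv2

def crop_min_area_rect(img, contour):
    # minAreaRect returns ((center_x, center_y), (width, height), angle)
    (cx, cy), (w, h), angle = cv2.minAreaRect(contour)

    # Rotate the whole image around the rect's center so the rect becomes axis-aligned
    rows, cols = img.shape[:2]
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    rotated = cv2.warpAffine(img, M, (cols, rows))

    # Crop the now-upright rectangle; getRectSubPix handles the sub-pixel center
    return cv2.getRectSubPix(rotated, (int(round(w)), int(round(h))), (cx, cy))
```

Depending on the angle convention of your OpenCV version, the crop may come out rotated by 90 degrees; the usual workaround is to add 90 to the angle and swap w and h when angle < -45.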
I have an assignment in which, given a tomography image, I have to remove everything except the brain, and also find the left and right hemispheres of the brain by painting or extracting them.
Examples of tomography
tomography1
tomography2
Any ideas??
I will post my reply here even though it is more like a comment (I don't have enough points to comment on posts).
Are you obliged to use Python and OpenCV?
Why don't you use FreeSurfer? I work with MRI images, and it does all the steps you cited automatically using its recon-all function.
https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all
https://andysbrainbook.readthedocs.io/en/latest/FreeSurfer/FS_ShortCourse/FS_03_ReconAll.html
I am required to prevent an extruder from touching the base of a stage. I chose to use OpenCV in Python to detect collisions between the two objects. After researching past posts and reading up on OpenCV's processing tools, I decided to try a few methods. The first method was edge detection, which proved to be lacking for position detection. Next I tried using color to isolate the needle and the stage, placing rectangles around the objects and then preventing the rectangles from touching. This is proving to be a challenge because the background and the needle are pretty much the same color. The last method relies on box-box collision, but my guess is that if I am having issues with the second method, this one will also prove difficult.

I am thinking about trying deep learning with OpenCV, taking a bunch of photos and then training the program, but I am not sure how that would play out since I'll be getting a feed from video. Can anyone give me any tips? Any algorithms that would be helpful here? I can see that the needle is clearly defined by its edges, so how can I use that to my advantage? Any help is appreciated.
Photo of needle and stage:
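As a rough illustration of the color-mask plus bounding-rectangle idea described in the question, here is a minimal sketch; the HSV ranges and the file name are placeholders you would have to replace with your own measurements, and the margin just adds a safety distance before the rectangles actually touch:

```python
import cv2

def bounding_rect(mask):
    # Largest contour's axis-aligned bounding box, or None if nothing was found;
    # the [-2] index works with both the 3-value (OpenCV 3.x) and 2-value (4.x) return
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))  # (x, y, w, h)

def rects_touch(a, b, margin=0):
    # True if the two (x, y, w, h) rectangles overlap or come within `margin` pixels
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return not (ax + aw + margin < bx or bx + bw + margin < ax or
                ay + ah + margin < by or by + bh + margin < ay)

frame = cv2.imread("frame.png")  # placeholder: one frame grabbed from your video feed
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# The HSV ranges below are placeholders -- measure them for your needle and stage
needle_mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))
stage_mask = cv2.inRange(hsv, (100, 80, 50), (130, 255, 255))

needle_rect = bounding_rect(needle_mask)
stage_rect = bounding_rect(stage_mask)
if needle_rect and stage_rect and rects_touch(needle_rect, stage_rect, margin=10):
    print("Too close - stop the extruder")
```

If the needle and background really are the same color, an edge-based mask (for example Canny on a blurred frame followed by dilation) can stand in for needle_mask while the rest of the logic stays the same.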
I am trying to make a DIY virtual reality kit, and I am not proficient in Python. But as far as I can tell, Python is the only way to achieve the following: the program takes each frame from the webcam feed, gets the blue pixels, averages the XY positions of all those pixels, and moves the mouse there. In OpenCV and Python.
I've done a million Google searches and cannot find what I need. I spent around 11 hours searching the OpenCV documentation and found nothing. I just need this program done. I'm running Python 3.7.3 with OpenCV 2.
Any help would be much appreciated. I am fine with people giving me the full code, as long as it is understandable and legible.
Thank you.
*Edit: I am using Windows x86.
Convert your image to HSV. HSV is generally better for detecting areas of a specific color. This SO link deals with the same issue you described and has useful links in it. After you detect the blue pixels with the inRange function, use the mask to get the average of their coordinates (np.argwhere is useful for getting the coordinates; then use np.mean over axis=0).
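A minimal sketch of that approach; the HSV range for blue is only a starting point you'll need to tune, and pyautogui is just one of several libraries that can move the cursor (it is not part of OpenCV):

```python
import cv2
import numpy as np
import pyautogui  # assumption: any mouse-control library would do here

# Rough HSV range for blue; tune it for your lighting and the exact shade you track
LOWER_BLUE = np.array([100, 150, 50])
UPPER_BLUE = np.array([130, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_BLUE, UPPER_BLUE)

    coords = np.argwhere(mask > 0)            # array of (row, col) pairs
    if len(coords) > 0:
        cy, cx = coords.mean(axis=0)          # average position of all blue pixels
        pyautogui.moveTo(int(cx), int(cy))    # map frame coords to screen coords as needed

    cv2.imshow("mask", mask)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```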
I have a problem when handling images taken with cell phones.
Image sample:
So I get ghosting, especially in the question number area.
I think the cause is a little camera shake when pressing the shutter.
Is there any way to remove the ghosting so that the question number area looks clearer?
There is another worse one:
Actually, I found some image denoising functions like cv2.fastNlMeansDenoisingColored(), and it does work well on some images.
Unfortunately, it doesn't work for the two images above.
Env: Python 3.6.5, OpenCV 3.4.0
Thanks.
Wesley
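For reference, fastNlMeansDenoisingColored() is usually called as below; the filter strengths of 10 and the 7/21 window sizes are just the commonly used starting values, and the file names are placeholders. The unsharp-mask step afterwards is only a generic sharpening trick that sometimes makes slightly blurred text more readable; it will not undo real motion ghosting, which generally needs a deblurring approach or simply a steadier shot:

```python
import cv2

img = cv2.imread("exam_sheet.jpg")  # placeholder file name

# Arguments: src, dst, h (luminance strength), hColor, templateWindowSize, searchWindowSize
denoised = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)

# Unsharp mask: blend the image with a blurred copy to emphasize edges (weights are illustrative)
blurred = cv2.GaussianBlur(denoised, (0, 0), 3)
sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)

cv2.imwrite("result.jpg", sharpened)
```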
I need to detect a color change at a certain point (or line), and it must work live (not on footage recorded beforehand). https://youtu.be/wi_dJrCWb54 shows exactly what I want to do. I commented on the video and searched the internet, but no answer has turned up. Can any of you give me an idea of how to do this, or, if you have seen code for a system like this, could you send it to me?
It's probably not worth doing this on a color image, so you can convert to grayscale using cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) and then apply some kind of background subtraction algorithm such as BackgroundSubtractorMOG2.
The logic of counting the cars will be up to you. That would be my approach, since you need it in real time. If the results are not good, you can try other things.
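A minimal sketch of that approach, assuming the video comes from a camera (index 0) or a file path, and that the "line" to watch is just a small region of interest you choose yourself; the ROI coordinates and the 30% threshold are placeholders:

```python
import cv2

cap = cv2.VideoCapture(0)  # or a path to a video file
# Shadows disabled so the foreground mask stays strictly binary
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)

# Region of interest covering the line to watch: (x, y, width, height) - placeholder values
roi = (100, 200, 300, 5)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg_mask = subtractor.apply(gray)

    x, y, w, h = roi
    line_region = fg_mask[y:y + h, x:x + w]

    # If enough foreground pixels appear inside the ROI, something is crossing the line
    if cv2.countNonZero(line_region) > 0.3 * w * h:
        print("Change detected on the line")

    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```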