I wish to use Python to show a cutout in a certain shape of one video overlaid on top of another video. The invisible parts of the overlaid video should be translucent, so the 'background' video can be seen in those parts. The issue here is that the location of the overlay is dynamic, while the shape remains the same. This means I cannot simply preprocess the videos.
I was thinking to take stills from the overlaid video at runtime, cut out the overlay and superimpose it on the background video in the right location. This would have to be done at a high frequency (30 fps+, probably).
As an example:
I would want the red cutout of this image:
http://i.imgur.com/jEQqvR0.jpg
to appear on top of another image
The Python Imaging Library (PIL) seems to be able to crop images easily, but only to rectangles, not to a custom shape. I could probably combine rectangles to approximate a custom shape, but I was hoping there would be an easier way. Maybe I'm overlooking something.
So my question is: what would be the easiest way to do the cut-out? I'm also open to suggestions for other approaches. Ideally, I would use a dynamically positioned, translucent video mask that partially obscures the background video with parts of the overlaid video, but I'm not sure whether that is possible at all.
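For what it's worth, a custom-shaped cut-out may not need rectangles at all: PIL's paste() accepts a greyscale mask as its third argument, so an arbitrary shape can be cut from one frame and composited onto another at a dynamic position. A minimal sketch under that assumption (the function name is made up, and the ellipse stands in for whatever shape is needed; any polygon drawn into the mask works the same way):

```python
from PIL import Image, ImageDraw

def paste_shaped_cutout(background, overlay, position, shape_size):
    # Build a greyscale mask: white (255) pixels are taken from the
    # overlay, black (0) pixels leave the background untouched.
    w, h = shape_size
    mask = Image.new("L", (w, h), 0)
    ImageDraw.Draw(mask).ellipse((0, 0, w - 1, h - 1), fill=255)

    # Crop the matching region from the overlay frame at the dynamic position.
    x, y = position
    patch = overlay.crop((x, y, x + w, y + h))

    # Paste with the mask: only the shape shows; the rest stays background.
    out = background.copy()
    out.paste(patch, (x, y), mask)
    return out
```

Since the shape stays the same and only the position changes, the mask would be built once outside the per-frame loop when running at 30 fps.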
Related
I want to create a mask over the whole human figure and replace it with an image that responds to the moving figure. The attached image will give you a better idea of what I mean.
The image in the background will be still and won't move, but the figure will seem to have punched a hole into the foreground. As the figures move, they will pass over the static background image and reveal its contents. If the figures overlap, as in the first image (see the overlapping leg), the mask should not break and should still display the contents of the background.
Assume we read and load an image with OpenCV from a specific location on our drive, and then read some pixel values and colors; let's assume this is a scanned image.
Usually, if we open a scanned image, we will notice some differences between the printed image (before scanning) and the image as it appears on the display screen.
The question is:
The pixel color values that we get from OpenCV: are they in our display screen's color space, or do we get exactly the same colors as in the scanned image (the printed version)?
I am not sure what you want to achieve, but here is one thing worth mentioning about color profiles.
The most common color profile for cameras, screens, and printers is sRGB, a limited color spectrum that does not cover the whole RGB range (because cheap hardware can't reproduce it anyway).
Some cameras (and probably scanners) allow the use of different color profiles such as AdobeRGB, which increases the color space and "allows" more colors.
The problem is, if you capture (e.g. scan) an image with the AdobeRGB color profile but the system (browser/screen/printer) interprets it as sRGB, you'll probably get washed-out colors simply because of the wrong interpretation (much like you get blue faces if you interpret BGR images as RGB images).
OpenCV and many browsers, printers, etc. always interpret images as sRGB images, according to http://fotovideotec.de/adobe_rgb/
As long as you don't change the image file itself, the pixel values don't change: they are what is stored in the file. Your display or printer only affect how you see the image, and the result often differs because it depends on the technology and the filters applied to the image before it is displayed or printed.
The pixel values are the ones you read in with cv2.imread.
It depends on the flags you set for it. The original image may have a greater bit-depth (depending on your scanner) than the one you loaded.
Also the real file extension is determined from the first bytes of the file, not by the file name extension.
So the values may not match those of the scanned image exactly if the bit depths differ.
Please have a look at the cv2.imread documentation.
I have two images, one image which contains a box and one without. There is a small vertical disparity between the two pictures since the camera was not at the same spot and was translated a bit. I want to cut out the box and replace the hole with the information from the other picture.
I want to achieve something like this (a slide from a computer vision course)
I thought about using the cv2.createBackgroundSubtractorMOG2() method, but it does not seem to work with only 2 pictures.
Simply subtracting one picture from the other does not work either, because of the disparity.
The course suggests using RANSAC to compute the most likely relationship between the two pictures and subtracting the area that changed a lot. But how do I actually fill in the holes?
Many thanks in advance!!
If you plan to use only a pair of images (or only a few images), image-stitching methods are better than background subtraction.
The steps are:
Calculate homography between the two images.
Warp the second image so that it overlaps the first.
Replace the region containing the unwanted object with pixels from the warped image.
This link shows a basic example of image stitching. You will need extra work if both images have humans in different places, but otherwise it should not be hard to tweak this code.
You can try this library for background subtraction issues. https://github.com/andrewssobral/bgslibrary
There are Python wrappers for this tool.
I'm trying to make a GUI in tkinter that uses one image as an overlay on top of another, but when I place the image over the lower one, the transparent area of the image appears as grey.
I've searched for solutions to this, but all the results that I've found have pointed towards using PIL.
Is it possible for me to use transparent, or partially transparent images in the python tkinter module without using PIL?
You could use basic Photoshop tools like the magic wand to remove the background, but keep in mind that some PNG images have a faint background. This is either in the form of a watermark, or the image background was rendered at a lower opacity than the rest of the image. Your GUI may also place a layer above the images by default. Does the grey area appear on each image separately when loaded into the GUI?
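For what it's worth, Tk 8.6 and later can read PNG files with an alpha channel natively, so layering a transparent image over another on a Canvas may work without PIL. A minimal sketch under that assumption (file paths and the helper name are placeholders):

```python
import tkinter as tk

def build_overlay_gui(bg_path, fg_path, offset=(50, 50)):
    # Tk 8.6+ reads PNG alpha natively, so no PIL is needed here.
    root = tk.Tk()
    canvas = tk.Canvas(root, width=400, height=300)
    canvas.pack()
    # Keep references on root so the images are not garbage-collected.
    root.bg_img = tk.PhotoImage(file=bg_path)
    root.fg_img = tk.PhotoImage(file=fg_path)  # transparent areas stay see-through
    canvas.create_image(0, 0, image=root.bg_img, anchor="nw")
    canvas.create_image(*offset, image=root.fg_img, anchor="nw")
    return root

# build_overlay_gui("background.png", "overlay.png").mainloop()
```

If the grey area still appears, it suggests the PNG itself has an opaque background rather than true alpha, in which case the file needs editing first.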
I have a django based website in which I have created profiles of people working in the organisation. Since this is a redesign effort, I used the already existing profile pictures. The size of current profile image style is 170x190px. Since the images already exist and are of different sizes, I want to crop them to the size specified above. But how do I decide from which side I have to crop?
Currently, I have applied a 170x190 style to all the images displayed in profiles, but most of them look distorted because the aspect ratios do not match.
I have tried PIL's thumbnail function, but it does not fit the need.
Please suggest a solution.
Well, you have to resize the pictures, but the aspect ratio has a huge impact on the final result. Since the images have varying ratios, you cannot simply resize them to 170x190px; you have to adjust (not just crop!) the images before resizing to get the best possible output. This can be done in the following ways:
Crop them manually to the desired ratio (17:19). (This takes a while if you have plenty of images.)
Write a script that adds padding to images whose ratio is close to the required one, and marks all images whose ratio is far from the desired one as 'human cropping required', so you can handle those later yourself. (Semi-manual, so it may still be really time-consuming.)
Spend some time writing a face-recognition function, process the images with it to find faces, then crop the faces from the original images, adding padding above and below each face to achieve the desired ratio (17:19). (Recommended.)
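A middle ground between the manual and face-recognition options, assuming a centre-ish crop is acceptable: PIL's ImageOps.fit crops to the target aspect ratio and resizes in one call, and its centering argument can bias the crop toward the top of the picture, where faces usually are. A sketch (the function name and the 0.35 centering value are illustrative choices, not from the question):

```python
from PIL import Image, ImageOps

def fit_profile(img, size=(170, 190)):
    # Crop to the 17:19 aspect ratio, then resize; centering=(0.5, 0.35)
    # biases the crop toward the upper part of the picture.
    return ImageOps.fit(img, size, method=Image.LANCZOS, centering=(0.5, 0.35))
```

Unlike thumbnail(), which only shrinks within a bounding box and keeps the original ratio, ImageOps.fit always returns exactly the requested size.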
Some links which may be useful for you:
Face Recognition With Python, in Under 25 Lines of Code
facereclib module; they can probably help you.
Image Manipulation, The Hitchhiker’s Guide
Good luck!
Use sorl-thumbnail; you won't need to crop every image manually.