Assume we load an image with OpenCV from a specific location on our drive and then read some pixel values and colors, and let's assume this is a scanned image.
Usually, if we open a scanned image, we will notice some differences between the printed image (before scanning) and the image as it appears on the display screen.
The question is:
Are the pixel color values we get from OpenCV in our display screen's color space, or are they exactly the same colors as in the scanned image (printed version)?
I am not sure what you want to do or achieve, but here's one thing to mention about color profiles.
The most common color profile for cameras, screens and printers is sRGB, which is a limited color spectrum that does not cover the whole RGB range (because cheap hardware can't reproduce it anyway).
Some cameras (and probably scanners) allow to use different color profiles like AdobeRGB, which increases the color space and "allows" more colors.
The problem is, if you capture (e.g. scan) an image with an AdobeRGB color profile but the system (browser/screen/printer) interprets it as sRGB, you'll probably get washed-out colors simply because of the wrong interpretation (just as you'll get blue faces in your image if you interpret BGR images as RGB images).
OpenCV and many browsers, printers, etc. always interpret images as sRGB images, according to http://fotovideotec.de/adobe_rgb/
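A quick way to see the BGR-vs-RGB half of that analogy (the pixel values here are made up for illustration):

```python
# OpenCV hands back pixels in B, G, R order.  A reddish "skin tone" pixel:
bgr_pixel = (90, 140, 210)

# A consumer that wrongly assumes R, G, B order puts the large red value
# (210) into the blue channel -- the classic "blue faces" symptom.
misread = dict(zip(('R', 'G', 'B'), bgr_pixel))
print(misread)  # {'R': 90, 'G': 140, 'B': 210}
```

Misinterpreting AdobeRGB as sRGB is the same kind of mistake, just with a subtler visual effect (washed-out rather than swapped colors).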
As long as you don't change the image file itself, the pixel values don't change: they are what is stored in the file. Your display or printer is just the way you choose to view the image, and you often don't get the same thing because it depends on the technology and on the different filters applied to your image before it is displayed or printed.
The pixel values are the ones you read in with imread.
It depends on the flags you set for it. The original image may have a greater bit-depth (depending on your scanner) than the one you loaded.
Also, the actual file format is determined from the first bytes of the file, not by the file name extension.
So it may not be the pixel value of the scanned image if the bit-depths differ.
Please have a look at the imread documentation.
Input image
I need to group the region in green and get its coordinates, like in this output image. How can I do this in Python?
Please see the attached images for better clarity
First, split off the green channel of the image and apply a threshold to obtain a binary image. This binary image contains the objects of the green area. Then dilate the image with a suitable kernel; this makes adjacent objects stick to each other and merge into one big object. Next, use findContours to get the sizes of all objects, keep the biggest object and remove the others; this image is your mask. Now you can reconstruct the original image (green channel only) with this mask and fit a box to the remaining objects.
You can easily find the code for each part.
I wish to use Python to show a cutout in a certain shape of one video overlaid on top of another video. The invisible parts of the overlaid video should be translucent, so the 'background' video can be seen in those parts. The issue here is that the location of the overlay is dynamic, while the shape remains the same. This means I cannot simply preprocess the videos.
I was thinking to take stills from the overlaid video at runtime, cut out the overlay and superimpose it on the background video in the right location. This would have to be done at a high frequency (30 fps+, probably).
As an example:
I would want the red cutout of this image:
http://i.imgur.com/jEQqvR0.jpg
to appear on top of another image
The Python Imaging Library (PIL) seems to be able to crop images easily, but only in rectangles, not in a custom shape. I could probably add rectangles together to create a custom shape, but I was hoping there would be an easier way. Maybe I'm overlooking something.
So my question is: What would be the easiest way to do the cut-out? I'm also open to other suggestions for approaches. Ideally, I would use a dynamically positioned translucent video mask partially obscuring the background video with parts of the overlaid video but I'm not sure if this is at all possible.
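For what it's worth, one way to sketch the per-frame compositing with PIL (Pillow) is Image.composite with a greyscale mask; the sizes, colours, and the ellipse shape below are placeholders:

```python
from PIL import Image, ImageDraw

# Stand-ins for one frame of each video (sizes and colours are made up).
background = Image.new('RGB', (320, 240), (0, 0, 128))
overlay = Image.new('RGB', (320, 240), (200, 50, 50))

# A greyscale mask defines the custom shape: 255 shows the overlay,
# 0 shows the background, and values in between give translucency.
mask = Image.new('L', (320, 240), 0)
ImageDraw.Draw(mask).ellipse((60, 40, 260, 200), fill=255)

# Because the overlay position is dynamic, the shape can be drawn at a
# different offset on a fresh mask for every frame.
frame = Image.composite(overlay, background, mask)
print(frame.getpixel((160, 120)), frame.getpixel((0, 0)))
```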
I have a Django-based website in which I have created profiles of people working in the organisation. Since this is a redesign effort, I used the already existing profile pictures. The current profile image style is 170x190 px. Since the images already exist and are of different sizes, I want to crop them to the size specified above. But how do I decide from which side to crop?
Currently, I have applied a 170x190 style to all the images while displaying them in profiles, but most of them look distorted because the aspect ratios do not match.
I have tried PIL's thumbnail function but it does not fit the need.
Please suggest a solution.
Well, you have to resize the pictures, but the image aspect ratio has a huge impact on the final result. Since the images have varying ratios, you cannot simply resize them to 170x190 px without first adjusting the ratio, so you have to adjust (not crop!) the images before resizing them to get the best possible output. This can be done in the following ways:
Crop them manually to the desired ratio (17:19). (Takes a while if you have plenty of images.)
Create a script that adds padding to images whose ratio is close to the required one; mark all images whose ratio is far from the desired one as 'human cropping required' and deal with their ratio yourself later. (Semi-manual, so it may still be really time-consuming.)
Spend some time and write a face-recognition function, then process the images with it to find faces, crop them from the original image, and add padding at the top and bottom of the face to achieve the desired ratio (17:19). (Recommended.)
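The padding idea from the second option could be sketched roughly like this with PIL; the fill colour and the rounding choices are guesses to adapt, only the 170x190 target comes from the question:

```python
from PIL import Image

TARGET_RATIO = 170 / 190  # width / height of the profile style

def pad_to_ratio(img, ratio=TARGET_RATIO, fill=(255, 255, 255)):
    """Pad with a solid colour until the image reaches the target
    ratio, then resize to 170x190 with no distortion."""
    w, h = img.size
    if w / h < ratio:   # too tall: widen the canvas
        new_w, new_h = int(round(h * ratio)), h
    else:               # too wide: heighten the canvas
        new_w, new_h = w, int(round(w / ratio))
    canvas = Image.new('RGB', (new_w, new_h), fill)
    canvas.paste(img, ((new_w - w) // 2, (new_h - h) // 2))
    return canvas.resize((170, 190))

result = pad_to_ratio(Image.new('RGB', (400, 400), (10, 20, 30)))
print(result.size)  # (170, 190)
```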
Some links which may be useful for you:
Face Recognition With Python, in Under 25 Lines of Code
facereclib module; they are probably able to help you.
Image Manipulation, The Hitchhiker’s Guide
Good luck!
Use sorl-thumbnail; you don't need to crop every image manually.
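For example (assuming sorl-thumbnail is installed and configured; the model field name here is hypothetical), its template tag can crop on the fly:

```html
{% load thumbnail %}
{% thumbnail person.photo "170x190" crop="center" as im %}
    <img src="{{ im.url }}" width="{{ im.width }}" height="{{ im.height }}">
{% endthumbnail %}
```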
My overall goal is to crop several regions from an input mirax (.mrxs) slide image to JPEG output files.
Here is what one of these images looks like:
Note that the darker grey area is part of the image, and the regions I ultimately wish to extract in JPEG format are the 3 black square regions.
Now, for the specifics:
I'm able to extract the color channels from the mirax image into 3 separate TIFF files using vips on the command line:
vips extract_band INPUT.mrxs OUTPUT.tiff[tile,compression=jpeg] C --n 1
Where C corresponds to the channel number (0-2), and each output file is about 250 MB in size.
The next job is to somehow recognize and extract the regions of interest from the images, so I turned to several python imaging libraries, and this is where I encountered difficulties.
When I try to load any of the TIFFs using OpenCV using:
i = cv2.imread('/home/user/input_img.tiff',cv2.IMREAD_ANYDEPTH)
I get the error: (-211) The total matrix size does not fit to "size_t" type in function setSize.
I managed to get a little more traction with Pillow, by doing:
from PIL import Image
tiff = Image.open('/home/user/input_img.tiff')
print len(tiff.tile)
print tiff.tile[0]
print tiff.info
which outputs:
636633
('jpeg', (0, 0, 128, 128), 8, ('L', ''))
{'compression': 'jpeg', 'dpi': (25.4, 25.4)}
However, beyond loading the image, I can't seem to perform any useful operations. For example, tiff.tostring() results in a MemoryError (I do this in an attempt to convert the PIL object to a NumPy array); I'm not sure this operation is even valid given the existence of tiles.
From my limited understanding, these TIFFs store the image data in 'tiles' (of which the above image contains 636633) in a JPEG-compressed format.
It's not clear to me, however, how one would extract these tiles for use as regular JPEG images, or even whether the sequence of steps I outlined above is a potentially useful way of accomplishing the overall goal of extracting the ROIs from the mirax image.
If I'm on the right track, then some guidance would be appreciated, or, if there's another way to accomplish my goal using vips/openslide without python I would be interested in hearing ideas. Additionally, more information about how I could deal with or understand the TIFF files I described would also be helpful.
The ideal situations would include:
1) Some kind of autocropping feature in vips/openslide which can generate JPEGs from either the TIFFs or the original mirax image, along the lines of what the following command does, but without generating tens of thousands of images:
vips dzsave CMU-1.mrxs[autocrop] pyramid
2) Being able to extract tiles from the TIFFs and store the data corresponding to the image region as a numpy array in order to detect the 3 ROIs using OpenCV or another method.
I would use the vips Python binding, it's very like PIL but can handle these huge images. Try something like:
import sys
from gi.repository import Vips
slide = Vips.Image.new_from_file(sys.argv[1])
tile = slide.extract_area(left, top, width, height)
tile.write_to_file(sys.argv[2])
You can also extract areas on the command-line, of course:
$ vips extract_area INPUT.mrxs OUTPUT.tiff left top width height
Though that will be a little slower than a loop in Python. You can use crop as a synonym for extract_area.
openslide attaches a lot of metadata to the image describing the layout and position of the various subimages. Try:
$ vipsheader -a myslide.mrxs
And have a look through the output. You might be able to calculate the position of your subimages from that. I would also ask on the openslide mailing list, they are very expert and very helpful.
One more thing you could try: get a low-res overview, corner-detect on that, then extract the tiles from the high-res image. To get a low-res version of your slide, try:
$ vips copy myslide.mrxs[level=7] overview.tif
Level 7 is downsampled by 2 ** 7, so 128x.
How can I replace one colour with another across multiple images in Python? I have a folder with 400 sprite animations. I would like to replace the block-coloured shadow (111, 79, 51) with one which has alpha transparency. I could easily do the batch conversion using:
imgs = glob.glob(filepath + r'\*.bmp')
However, I don't know how I could change the pixel colours. If it makes any difference, the images are all 96x96 and I don't care how long the process takes. I am using Python 3.2.2 so I can't really use PIL (I think).
BMP is a Windows file format, so you will need PIL or something like it, or you can roll your own reader/writer; the basic modules won't help as far as I'm aware. You can read PPM and GIF using Tk (PhotoImage()), which is part of the standard distribution, and use get() and put() on that image to change pixel values. See references online, because it's not straightforward: the pixels come from get() as 3-tuples of integers, but need to go back into put() as space-separated hex text!
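A minimal sketch of the roll-your-own route, assuming uncompressed 24-bit BMPs with a standard header and rows that need no padding (true for the 96-pixel-wide sprites in the question, since 96 * 3 bytes is already 4-byte aligned). Note that a 24-bit BMP cannot store alpha, so this swaps one solid colour for another; it also sticks to syntax that works on Python 3.2:

```python
import glob
import struct

def swap_color_bmp(path_in, path_out, old_rgb, new_rgb):
    """Replace every pixel of one colour in an uncompressed 24-bit BMP."""
    with open(path_in, 'rb') as f:
        data = bytearray(f.read())
    offset = struct.unpack_from('<I', data, 10)[0]  # start of pixel array
    bpp = struct.unpack_from('<H', data, 28)[0]
    if bpp != 24:
        raise ValueError('only 24-bit BMPs are handled by this sketch')
    old_bgr = bytes(old_rgb[::-1])  # BMP stores pixels as B, G, R
    new_bgr = bytes(new_rgb[::-1])
    for i in range(offset, len(data) - 2, 3):
        if bytes(data[i:i + 3]) == old_bgr:
            data[i:i + 3] = new_bgr
    with open(path_out, 'wb') as f:
        f.write(bytes(data))

# Batch usage over a folder (the path is hypothetical):
# for name in glob.glob(r'C:\sprites\*.bmp'):
#     swap_color_bmp(name, name, (111, 79, 51), (0, 0, 0))
```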
Are your images in indexed mode (8 bits per pixel with a palette), or "truecolor" 32bpp images? If they are in indexed mode, it would not be hard to simply mark the palette entry for that color as transparent across all files.
Otherwise, you will really have to process all the pixel data. It could also be done by writing a Python script for GIMP - but that would require Python 2 nonetheless.