How can I convert the pixels of an image from square to hexagonal? In doing so, I need to extract the RGB values from each hexagonal pixel. Is there any library or function that simplifies this process?
Example : Mona Lisa Hexagonal Pixel Shape
Nothing tried. Thanks
Here's a possible approach, though I am sure if you are able to write code to read, manipulate and use pixels from a file format that hasn't been invented yet, you should be able to create that file yourself ;-)
You could generate a hexagonal grid using ImageMagick, which is installed on most Linux distros and is available for OSX and Windows. Here, I am just doing things at the command-line in the Terminal, but there are Python, Perl, PHP, .Net, C/C++ and other bindings too - so take your pick.
First make a grid of hexagons - you'll have to work out the size you need, mine is arbitrary:
convert -size 512x256 pattern:hexagons hexagons.png
Now, fill in the hexagons, each with a different colour, I am just doing some examples of flood-filling here to give you the idea. Ideally, you would colour the first (top-left) hexagon with colour #000 and the next one across with #001 so that you could iterate through the coordinates of the output image as consecutive colours. Also, depending on your output image size, you may need to use a 32-bit PNG to accommodate the number of hexels (hexagonal pixels).
convert hexagons.png \
-fill red -draw "color 100,100 floodfill" \
-fill blue -draw "color 200,200 floodfill" \
colouredmask.png
Now iterate through all the colours, making every colour except that colour transparent. Note that I have added a black border just so you can see the context on StackOverflow's white background:
convert colouredmask.png -fill none +opaque red onecell.png
Now mask the original image with that mask and get the average colour of that one cell and write it to your yet-to-be-invented file format. Repeat for all cells/colours.
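If it helps, here is a rough Pillow/NumPy sketch of that last step - averaging the original image under each distinct colour of the hexagon mask. The file names and the CSV output are placeholders, it assumes the mask and the original have the same dimensions, and in practice you would also want to skip the colour of the hexagon outlines.
import numpy as np
from PIL import Image

original = np.asarray(Image.open("mona.png").convert("RGB"), dtype=np.float64)
mask = np.asarray(Image.open("colouredmask.png").convert("RGB"))

flat_orig = original.reshape(-1, 3)
flat_mask = mask.reshape(-1, 3)

# Treat every distinct colour in the mask as one hexel
hexel_colours, inverse = np.unique(flat_mask, axis=0, return_inverse=True)
inverse = inverse.ravel()

with open("hexels.csv", "w") as f:
    for i, hexel in enumerate(hexel_colours):
        mean_rgb = flat_orig[inverse == i].mean(axis=0)  # average RGB under this hexel
        f.write("%d,%d,%d,%.1f,%.1f,%.1f\n" % (*hexel, *mean_rgb))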
Note that the basic hexagon pattern is 30x18, so you should size your grid as multiples of that for it to tessellate properly.
Note that if you have lots of these to process, you should consider using something like GNU Parallel to take advantage of multiple cores. So, if you make a script called ProcessOneImage and you have 2,000 images to do, you would use:
parallel ProcessOneImage ::: *.png
and it will keep, say, 8 jobs running all the time if your PC has 8 cores. There are many more options; try man parallel.
Fred has an ImageMagick script on his site that may do what you want: STAINEDGLASS
First of all, I don't think there is a ready-made function to perform the lattice conversion, so you may need to implement the conversion process yourself.
The lattice conversion is a re-sampling process, and it is also an interpolation process. Many algorithms have been developed in the hexagonal image processing literature.
Please see the example below:
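To make the idea concrete, here is a minimal sketch (my own, not from any particular paper) that resamples a square-pixel image onto a pointy-top hexagonal lattice with nearest-neighbour interpolation; the spacing s and the file name are arbitrary assumptions.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("input.png").convert("RGB"))
h, w, _ = img.shape
s = 8                                  # horizontal distance between hexel centres
row_step = s * np.sqrt(3) / 2          # vertical distance between rows of centres

hexels = []                            # (row, col, r, g, b) for each hexel
for j, y in enumerate(np.arange(0, h, row_step)):
    x_offset = 0 if j % 2 == 0 else s / 2        # odd rows are shifted by half a cell
    for i, x in enumerate(np.arange(x_offset, w, s)):
        yi = min(int(round(y)), h - 1)
        xi = min(int(round(x)), w - 1)
        r, g, b = img[yi, xi]                    # nearest-neighbour sample
        hexels.append((j, i, int(r), int(g), int(b)))
Averaging all the square pixels that fall inside each hexagon, instead of taking one sample per centre, would give a smoother result.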
Related
For an analysis, I'd like to take a bunch of TIF images I have and fill in the black background to create a perfect square.
I would like to keep the same general pattern of the image when I fill in the black spots, instead of just filling in the black with random bits of white and blue. My first thought for doing this is to impose some sort of symmetrical "reflection" of the image onto the black portions - the concept is detailed below.
The thing is, I'm not sure how to go about this - my first thought was to convert the image to a NumPy array and copy the individual rows of pixels over for a pseudo-reflection, but that could take a lot of time since I would be accounting for the length of the black portion in each row, and it wouldn't even be the desired result. I was wondering if there was a package or method that did something like this already, perhaps in PIL.
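For what it's worth, the row-copying idea would look something like this untested NumPy sketch (it assumes the black background is exactly (0, 0, 0) and forms a single run at the right-hand end of each row, which is a simplification of my actual images):
import numpy as np
from PIL import Image

img = np.asarray(Image.open("example.tif").convert("RGB")).copy()

for row in img:
    black = np.all(row == 0, axis=1)        # black pixels in this row
    if not black.any() or black.all():
        continue
    first_black = np.argmax(black)          # start of the trailing black run
    n = min(row.shape[0] - first_black, first_black)   # can't mirror more than we have
    # mirror the last n valid pixels into the black region
    row[first_black:first_black + n] = row[first_black - n:first_black][::-1]

Image.fromarray(img).save("filled.tif")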
Any ideas are appreciated, as I am not too familiar with image processing in Python (or in general) - thank you!
EDIT: Here is a Google Drive link for the example image in question.
EDIT 2: Here is a Google Drive link for another example, this time with two "overlapping" black background areas (in other words, the actual data has a corner).
The images that I have give me inconsistent results. My thought process is: my text is always in white font; if I can switch the pixels of my text to black and turn everything else to white or transparent, I will have better success.
My question is, what library or language is best for this? Do I have to turn my white pixel into some unique RGB, turn everything else to white or transparent, then find the unique RGB and make that black? Any help is appreciated.
Yes, if you could make the text pixels black and all the rest of the document white, you would have better success. Although this is not always possible, there are processes that can help.
The median filter (and other low pass filters) can be used to remove noise present in the image.
Erosion can also help to remove things that are not characters, like thin lines and noise.
Aligning the text is also a good idea; OCR accuracy can drop considerably if the text is not aligned. To do this, you could try the Hough transform followed by a rotation: use the Hough transform to find a line in your text, then rotate the image by the same angle as the line.
All the processing steps mentioned can be done with OpenCV or scikit-image.
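As a rough illustration only (kernel sizes, thresholds and the deskew logic are assumptions, not tuned values), the steps could be chained with OpenCV like this:
import cv2
import numpy as np

img = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)

img = cv2.medianBlur(img, 3)                          # median filter to remove noise
_, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
img = cv2.erode(img, np.ones((2, 2), np.uint8))       # erode away specks and thin lines

# Deskew: find a dominant line with the Hough transform, then rotate by its angle
lines = cv2.HoughLines(img, 1, np.pi / 180, 200)
if lines is not None:
    theta = lines[0][0][1]                            # angle of the strongest line's normal
    angle = np.degrees(theta) - 90                    # 0 degrees means the line is horizontal
    h, w = img.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    img = cv2.warpAffine(img, M, (w, h))

cv2.imwrite("preprocessed.png", img)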
It is also worth pointing out that there are many other ways to process text - too many to mention.
What functions should I use (and how should I use them) to crop out the center part of this image? I want to take just the less-dense parts, not the dense borders.
Thanks!
In the end, I want to either count the tiny circles/dots (cells) in the areas or calculate the area of the less-dense parts, outlined in the second image. I've done this before with ImageJ by tracing out the area by hand, but it is a really tedious process with lots of images.
Original
Area traced
I've looked at SciPy so far, but it is big and I don't really know how to approach this. If someone could point me in the right direction, that would be great!
It would take me a bit longer to do in Python, but I tried a few ideas just on the command-line with ImageMagick which is installed on most Linux distros and is available for free for macOS and Windows.
First, I trimmed your image to get rid of extraneous junk:
Then, the steps I did were:
discarded the alpha/transparency channel
converted to greyscale, as there is no useful colour information
normalised to stretch the contrast and bring all pixels into the range 0-255
thresholded at 50% to find the cells
replaced each pixel by the mean of its surrounding 49x49 pixels (a box blur)
thresholded again at 90%
That command looks like this in Terminal/Command Prompt:
convert blobs.png -alpha off -colorspace gray -normalize -threshold 50% -statistic mean 49x49 -threshold 90% result.png
The result is:
If that approach looks promising for your other pictures we can work out a Python version pretty quickly, so let me know.
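As a rough starting point, an OpenCV translation of the same pipeline (thresholds and the 49x49 box size copied from the command above, file names assumed) could look like this:
import cv2

img = cv2.imread("blobs.png", cv2.IMREAD_GRAYSCALE)       # drops alpha, goes grey
img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)   # stretch contrast to 0-255
_, img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)  # roughly the 50% threshold
img = cv2.blur(img, (49, 49))                             # 49x49 box (mean) filter
_, img = cv2.threshold(img, 229, 255, cv2.THRESH_BINARY)  # roughly the 90% threshold
cv2.imwrite("result.png", img)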
Of course, if you know other useful information about your image that could help improve things... maybe you know the density is always higher at the edges, for example.
In case anyone wants to see the intermediate steps, here is the image after grey scaling and normalising:
And here it is after blurring:
I have two images, one image which contains a box and one without. There is a small vertical disparity between the two pictures since the camera was not at the same spot and was translated a bit. I want to cut out the box and replace the hole with the information from the other picture.
I want to achieve something like this (a slide from a computer vision course)
I thought about using the cv2.createBackgroundSubtractorMOG2() method, but it does not seem to work with only 2 pictures.
Simply subtracting one picture from the other does not work either, because of the disparity.
The course suggests using RANSAC to compute the most likely relationship between the two pictures and subtracting the area that changed a lot. But how do I actually fill in the holes?
Many thanks in advance!!
If you plan to use only a pair of images (or only a few images), image stitching methods are better than background subtraction.
The steps are:
Calculate homography between the two images.
Warp the second image so that it overlaps the first.
Replace the region with the human with pixels from the warped image.
This link shows a basic example of image stitching. You will need extra work if both images have humans in different places, but otherwise it should not be hard to tweak this code.
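A hedged OpenCV sketch of those three steps might look like the following; the ORB detector, the file names, and the hand-made mask of the region to replace are all assumptions for illustration:
import cv2
import numpy as np

img1 = cv2.imread("with_box.jpg")      # image containing the box to remove
img2 = cv2.imread("without_box.jpg")   # same scene without the box

# 1. Match features and compute the homography mapping img2 onto img1 (RANSAC)
orb = cv2.ORB_create(5000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 2. Warp the second image into the first image's frame
h, w = img1.shape[:2]
warped = cv2.warpPerspective(img2, H, (w, h))

# 3. Replace the region to remove (here a hand-made binary mask) with warped pixels
mask = cv2.imread("box_mask.png", cv2.IMREAD_GRAYSCALE) > 0
result = img1.copy()
result[mask] = warped[mask]
cv2.imwrite("result.jpg", result)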
You can try this library for background subtraction issues: https://github.com/andrewssobral/bgslibrary
There are Python wrappers for this tool.
How can I replace a colour across multiple images with another in Python? I have a folder with 400 sprite animations. I would like to change the block-coloured shadow (111,79,51) to one which has alpha transparency. I could easily do the batch converting using:
imgs = glob.glob(filepath + "\*.bmp")
however I don't know how I could change the pixel colours. If it makes any difference, the images are all 96x96 and I don't care how long the process takes. I am using Python 3.2.2 so I can't really use PIL (I think).
BMP is a Windows file format, so you will need PIL or something like it, or you can roll your own reader/writer. The basic modules won't help as far as I'm aware. You can read PPM and GIF using Tk (PhotoImage()), which is part of the standard distribution, and use get() and put() on that image to change pixel values. See references online, because it's not straightforward - the pixels come from get() as 3-tuple integers, but need to go back to put() as space-separated hex text!
Are your images in indexed mode (8 bits per pixel with a palette), or "truecolor" 32bpp images? If they are in indexed mode, it would not be hard to simply mark the palette entry for that colour as transparent across all files.
Otherwise, you will really have to process all the pixel data. It could also be done by writing a Python script for GIMP - but that would require Python 2 nonetheless.
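For readers on a current Python where Pillow is available (it was not an option for the original asker on 3.2), a minimal sketch of the "process all pixel data" route could be the following; the folder path is assumed, the shadow colour is taken from the question, and the result is saved as PNG so the alpha channel survives.
import glob
from PIL import Image

for path in glob.glob("sprites/*.bmp"):
    img = Image.open(path).convert("RGBA")
    data = [
        (0, 0, 0, 0) if (r, g, b) == (111, 79, 51) else (r, g, b, a)
        for r, g, b, a in img.getdata()
    ]
    img.putdata(data)                  # write the modified pixels back
    img.save(path[:-4] + ".png")       # PNG keeps the transparency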