I am trying to create an image made up of coloured squares. I only need each square to be one pixel large, as it is just a single block colour. However, when I use this code, the image generated is extremely blurry. Is there any way to make the borders sharp?
from PIL import Image, ImageColor

def fancycolnw2(seq, m):
    data = numbwall(seq, m)  # build the grid of cell values
    for i in range(len(data)):
        for j in range(len(data[i])):
            if data[i][j] == ' ':
                data[i][j] = -1
    im = Image.new('RGBA', (len(data[0]), len(data)))  # one pixel per cell
    for i in range(len(data) - 1):
        for j in range(len(data[i]) - 1):
            if data[i][j] == -1:
                im.putpixel((j, i), ImageColor.getcolor('black', 'RGBA'))
            if data[i][j] == 0:
                im.putpixel((j, i), ImageColor.getcolor('red', 'RGBA'))
            if data[i][j] == 1:
                im.putpixel((j, i), ImageColor.getcolor('blue', 'RGBA'))
            if data[i][j] == 2:
                im.putpixel((j, i), ImageColor.getcolor('grey', 'RGBA'))
    im.show()
    im.save('simplePixel.png')  # or any image format
The result I get looks like this:
(screenshot of the blurry output omitted)
It is the correct image, I just wish the boundaries between pixels were sharp. Any help would be greatly appreciated!
The image is perfectly sharp, but rather small. I suspect that you are "zooming in" to view it clearer, and that whatever program you are zooming with is filtering the image, because with most images this looks better. You need to find a viewing program that uses "nearest neighbour" resampling when zooming in, or generate a larger image to start with, for example by setting a 4-by-4 pixel block rather than individual pixels.
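If you go the second route, the simplest way is to upscale with nearest-neighbour resampling after drawing, which keeps every block edge hard. A minimal sketch (the factor of 4 is arbitrary):

from PIL import Image

im = Image.open('simplePixel.png')
# Upscale 4x with NEAREST so each source pixel becomes a crisp 4x4 block
big = im.resize((im.width * 4, im.height * 4), Image.NEAREST)
big.save('simplePixelBig.png')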
(Also, the code says "# or any image format". Don't use JPEG for this, as the lossy compression will likely wreck your image.)
I am trying to visualize a TIFF image in an IPython notebook using the following code (import statements are omitted for clarity):
from PIL import Image
orig_img_path = os.path.join("path/to/tiff/0.tiff")
img_orig = Image.open(orig_img_path,"r")
plt.imshow(img_orig)
plt.show()
The above snippet just shows me an all-black image. I know that the image pixel values are non-zero, since I can open the original images on my MacBook and they display correctly.
I also double-checked that the pixel values are non-zero by casting the PIL image to an np array and printing it out:
img_arr = np.array(img_orig)
print(img_arr)
which prints small non-zero floating-point values.
I think I know what the issue is: matplotlib's pyplot treats the data like a PNG image, and the pixel values are far too small to be perceptible. I could simply multiply the image by 255, which gives the expected result, although scaling by 255 is not entirely right: the pixel values should be normalized so that the minimum corresponds to 0 and the maximum to 255.
I would like to know if there is a better way of doing this rather than doing it manually every time: is there a native way of displaying TIFFs in a Jupyter/IPython notebook that preserves the good properties of TIFF images, such as floating-point values?
Thanks, and please let me know if anything is unclear.
Edit 1: Link to the original file is here - https://drive.google.com/file/d/1O1-QM6aeU5-QZhT36vOMjDND2vkZNgqB/view?usp=sharing
-- Megh
If you share your original image in .tiff format, the answer can be more precise.
You are reading the image pixels as floats in the range 0-1, but they are then interpreted as uint8 in the range 0-255, which turns nearly all pixel values into 0 or 1: black or almost black.
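To illustrate: if you already have the float array (img_arr from the question above), rescaling before the cast avoids the truncation; a minimal sketch:

import numpy as np

# Float values in 0-1 truncate to 0 or 1 when cast directly to uint8;
# rescale first so the full 0-255 range is used.
img_uint8 = (img_arr * 255).astype(np.uint8)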
You can try the following approach to read your image (supposedly Black and White) and parse it:
import cv2

# IMREAD_UNCHANGED keeps the original bit depth and channels (no 8-bit conversion)
gray = cv2.imread("path/to/tiff/0.tiff", cv2.IMREAD_UNCHANGED)
cv2.namedWindow("MyImage", cv2.WINDOW_NORMAL)
cv2.imshow("MyImage", gray)
cv2.waitKey(0)
What is the range of values that you expect in that image? Did you do any preprocessing to it?
The image you are trying to display in matplotlib contains negative values. Displaying works well for any kind of uint8 data (grayscale and RGB) and for float data in the range 0-1. Your issue can be addressed by shifting the image so its minimum becomes 0 and then dividing by its maximum, effectively normalizing the data to the range 0-1:
[...]
img = np.array(img_orig, dtype=float)
img += abs(np.min(img))  # shift so the minimum becomes 0
img /= np.max(img)       # scale so the maximum becomes 1
plt.imshow(img)
plt.show()
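As an alternative to normalizing by hand, imshow can do the scaling itself through its vmin/vmax parameters; a minimal sketch, assuming the TIFF is single-channel:

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

img = np.array(Image.open("path/to/tiff/0.tiff"), dtype=float)
# imshow maps [vmin, vmax] onto the colormap, so no manual rescaling is needed
plt.imshow(img, cmap="gray", vmin=img.min(), vmax=img.max())
plt.show()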
I'm working on a project where I need to find the RGB values of each pixel in a picture. How could I do this using PIL? I know that Pillow is better, but since I only need to do this one thing I thought I could just use PIL. If this won't work as well please tell me.
from PIL import Image

img = Image.open("filename.png")
pixels = img.load()
# get the B value of the pixel at x=23, y=42
print(pixels[23, 42][2])
The previous answer is a good solution, but here is another way that is just one line:

from scipy import misc
imgData = misc.imread('./image.png')

You can then easily get the colour at every pixel you need. (Note that scipy.misc.imread was removed in newer SciPy versions; imageio.imread is the modern replacement.)
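For example, continuing the snippet above (numpy indexes as [row, column], i.e. [y, x]):

# B value of the pixel at x=23, y=42; channels are ordered R, G, B
print(imgData[42, 23][2])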
Kevin got it pretty much spot on; you can also use getdata() to return a list of tuples.
I may have got this totally wrong, but load() might work better if you need particular pixels, and getdata() if you need all of them. Also, it's a good idea to convert to RGB if it's just a normal image; I've had errors before from not doing that.
image = Image.open('filename').convert('RGB')
width, height = image.size
# Get the pixels as a list of (R, G, B) tuples
pixels = image.getdata()
# If you need a flat list containing all the colour values
flat = [channel for pixel in pixels for channel in pixel]
If you needed to do stuff to the pixels and rebuild the image after, that's where the image size comes in useful.
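For instance, a minimal sketch of that round trip, with inversion as a stand-in for whatever per-pixel work you need:

from PIL import Image

image = Image.open('filename.png').convert('RGB')
width, height = image.size
pixels = list(image.getdata())

# Modify the pixels (here: invert each channel as a placeholder operation),
# then rebuild an image of the same size with putdata()
inverted = [(255 - r, 255 - g, 255 - b) for (r, g, b) in pixels]
rebuilt = Image.new('RGB', (width, height))
rebuilt.putdata(inverted)
rebuilt.save('inverted.png')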
I'm writing code in which one part reads an image from a file and displays it on the screen for the user to interact with. I also need the sharpened image data. I use the following to read the data and display it in pygame:
import pygame
from scipy import misc, ndimage

def image_and_sharpen_array(file_name):
    # read the image data and return it, with the sharpened image
    image = misc.imread(file_name)
    blurred = ndimage.gaussian_filter(image, 3)
    edge = ndimage.gaussian_filter(blurred, 1)
    alpha = 20
    out = blurred + alpha * (blurred - edge)  # unsharp masking
    return image, out

# get image data
scan, sharpen = image_and_sharpen_array('foo.jpg')
w, h, c = scan.shape

# set up pygame
pygame.init()
screen = pygame.display.set_mode((w, h))
pygame.surfarray.blit_array(screen, scan)
pygame.display.update()
The image is displayed on the screen, only rotated and inverted. Is this due to differences between misc.imread and pygame, or to something wrong in my code?
Is there another way to do this? The majority of solutions I have read involve saving the figure and then reading it back with pygame.
I often use the numpy swapaxes() method. In this case we only need to swap the x and y axes (axes 0 and 1) before displaying the array:
return image.swapaxes(0,1),out
I thought technico provided a good solution - just a little lean on info. Assuming get_arr() is a function that returns the pixel array:
pixl_arr = get_arr()
pixl_arr = numpy.swapaxes(pixl_arr, 0, 1)
new_surf = pygame.pixelcopy.make_surface(pixl_arr)
screen.blit(new_surf, (dest_x, dest_y))
Alternatively, if you know that the image will always be of the same dimensions (as in iterating through frames of a video or gif file), it would be more efficient to reuse the same surface:
pixl_arr = get_arr()
pixl_arr = numpy.swapaxes(pixl_arr, 0, 1)
pygame.pixelcopy.array_to_surface(old_surf, pixl_arr)
screen.blit(old_surf, (dest_x, dest_y))
YMMV, but so far this is working well for me.
Every library has its own way of interpreting image arrays. By 'rotated' I suppose you mean transposed; that is how pygame interprets numpy arrays. There are many ways to make it look 'correct', and many ways even to display the array, which give you full control over channel representation and so on. In pygame version 1.9.2, this is the fastest array rendering I could achieve (note that for earlier versions this will not work!).
This function will fill the surface with array:
def put_array(surface, myarr):  # put array into surface
    bv = surface.get_view("0")
    bv.write(myarr.tostring())
If that is not working, use this, should work everywhere:
# put array data into a pygame surface
def put_arr(surface, myarr):
    bv = surface.get_buffer()
    bv.write(myarr.tostring(), 0)
You will probably still not get quite what you want: the image may come out transposed or with swapped colour channels. The idea is to manage your arrays in the form that suits this surface buffer. To find out the correct channel order and axis order, use the OpenCV library (cv2.imread(filename)). OpenCV opens images in BGR order as standard, and it has a lot of conversion functions. If I remember correctly, when writing directly to a surface buffer, BGR is the correct order for a 24-bit surface and BGRA for a 32-bit surface. So you can try to put the image array you get out of the file with this function and blit it to the screen.
There are other ways to draw arrays, e.g. the whole set of helper functions at http://www.pygame.org/docs/ref/surfarray.html. But I would not recommend using them, since surfaces are not meant for direct pixel manipulation and you will probably get lost in the references.
Small tip: for a 'signalling test', use a distinctive, asymmetric test picture; that way you will immediately see if something is wrong. Just load it as an array and try to render it.
My suggestion is to use the pygame.transform module. It has flip and rotate methods, which you can combine to match whatever your transformation is; look up the docs on this.
My recommendation is to render into a new Surface, apply the transformations, and blit the result to the display:
temp_surf = pygame.Surface((w, h))
pygame.surfarray.blit_array(temp_surf, scan)
# transform temp_surf here, e.g. with pygame.transform.flip/rotate
screen.blit(temp_surf, (0, 0))
I have no idea why this is. It is probably something to do with the order in which the axes are transferred from a 2d array to a pygame Surface.
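Continuing the snippet above, a minimal sketch of that transform step, assuming the blit came out transposed (adjust the direction to whatever your output actually looks like):

# A transpose equals a 90-degree clockwise rotation plus a left-right
# mirror; pygame's rotate() treats positive angles as counterclockwise.
temp_surf = pygame.transform.rotate(temp_surf, -90)
temp_surf = pygame.transform.flip(temp_surf, True, False)
screen.blit(temp_surf, (0, 0))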
I am trying to get the Python 2.7 PIL library to work with JPEG images that are only available as a stream coming from an HDD image and are not complete.
I have set the option:
ImageFile.LOAD_TRUNCATED_IMAGES = True
And I load the stream as far as it is available (or better said: as far as I am 100% sure that the data still belongs to the image, not some other file type). I have tested different things, and as far as I can tell (for JPEGs) PIL only accepts the data as a valid JPEG image if it finds the 0xFFDA (Start of Scan) marker. This is a short example of how I load the data:
from PIL import Image, ImageFile
from StringIO import StringIO

ImageFile.LOAD_TRUNCATED_IMAGES = True

with open("/path/to/image.raw", 'rb') as fp:
    fp.seek("""jump to position in image where JPEG starts""")
    data = fp.read("""number of bytes I know that those belong to that jpeg""")

img = Image.open(StringIO(data))  # would throw an exception if the data
                                  # does not contain the 0xFFDA marker
pixel = img.load()  # would throw an exception if LOAD_TRUNCATED_IMAGES were False
width, height = img.size  # note: PIL reports size as (width, height)
for y in range(height):
    for x in range(width):
        print pixel[x, y]
On the very last line I expected (or hoped) to see at least the pixel data that was actually read, but for every pixel it returns (0, 0, 0).
The Question: Is what I am trying here not possible with PIL?
Some weeks ago I tried the same with an image file I truncated myself, simply by cutting data from it with an editor. It worked for the pixel data that was available; as soon as it reached a pixel that I had cut off, the program threw an exception (I will try this again later today to make sure that I am not remembering it wrong).
If somebody is wondering why I am doing this: I need to make sure that the image/picture inside that hdd image is in consecutive blocks/clusters and is not fragmented. To make sure of this I wanted to use pixel matching.
EDIT:
I have tried it again and this is what I have seen.
I opened a truncated image in GIMP and it showed me a few pixel lines in the upper part, but PIL was not able to give me even the RGB values of those pixels. It always returns (0, 0, 0).
I made the file slightly bigger, such that only the lower 4/5 of the image was missing, and that was enough for PIL to show me the RGB values that were available. Everything else was (0, 0, 0).
So I am still not 100% sure whether PIL can show me the RGB values when only a few pixels' worth of data is available.
I would try it with an uncompressed format like TGA. JPEG is a compressed format, so it may not make sense to extract pixels from an incomplete image: JPEG stores the parameters of equations that describe the image, not pixel values. When you query a JPEG for a pixel value, it evaluates those equations at that point and returns the result.
I had the same problem with Pillow==9.2.0. Downgrading to Pillow==8.3.2 made it work.
I don't really know about streaming, but I think you simply cannot access the RGB values the way you do.
Try:
rgb_im = img.convert('RGB')
r, g, b = rgb_im.getpixel((i, j))
Any ideas how to use Python with the PIL module to do a 'shrink select-all', i.e. auto-crop the surrounding whitespace? I know this can be achieved with GIMP, but I'm trying to package my app as small as possible, and a GIMP install is not an option for the end user.
Say you have 2 images, one is 400x500, other is 200x100. They both are white with a 100x100 textblock somewhere within each image's boundaries. What I'm trying to do is automatically strip the whitespace around that text, load that 100x100 image textblock into a variable for further text extraction.
It's obviously not this simple (just running the text extraction on the whole image won't work!), so I just wanted to ask about the basic process. There is not much available on Google about this topic; if solved, perhaps it could help someone else as well.
Thanks for reading!
If you put the image into a numpy array, it's simple to find the edges, which you can then use PIL to crop to. Here I'm assuming that the whitespace is the colour (255, 255, 255); you can adjust to your needs:
from PIL import Image
import numpy as np

im = Image.open("test.png")
pix = np.asarray(im)

pix = pix[:, :, 0:3]             # drop the alpha channel
idx = np.where(pix - 255)[0:2]   # indices of non-white pixels; keep row/col axes only

# Bounding box as (left, upper, right, lower)
box = list(map(min, idx))[::-1] + list(map(max, idx))[::-1]

region = im.crop(box)
region_pix = np.asarray(region)
To show what the results look like, I've left the axis labels on so you can see the size of the box region:
from pylab import *
subplot(121)
imshow(pix)
subplot(122)
imshow(region_pix)
show()
The general algorithm would be to find the colour of the top-left pixel, and then do a spiral scan inwards until you find a pixel not of that colour. That defines one edge of your bounding box; keep scanning until you have found one such pixel for each edge.
http://blog.damiles.com/2008/11/basic-ocr-in-opencv/
might be of some help. You can use the simple bounding-box method described in that tutorial, or @Tyler Eaves's spiral suggestion, which works equally well.
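For completeness, PIL can also compute the crop box directly, without any manual scanning, via ImageChops; a minimal sketch assuming a solid white background:

from PIL import Image, ImageChops

def autocrop(im, bg_color=(255, 255, 255)):
    # Diff against a solid background; getbbox() returns the bounding box
    # of all non-zero (i.e. non-background) pixels, or None if the image
    # is entirely background.
    bg = Image.new(im.mode, im.size, bg_color)
    bbox = ImageChops.difference(im, bg).getbbox()
    return im.crop(bbox) if bbox else im

cropped = autocrop(Image.open("test.png").convert("RGB"))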