I was wondering if this was possible.
I'm currently drafting a simple project that would transform my text files into images by using the values of the characters to determine the RGB values of the output image.
I know it sounds counterintuitive, and no, I don't want to print a string into an image file; I want the text itself to determine the RGB values of each pixel. This is just a rough idea and is far from refined.
I just want a simple program that will work as a proof of concept.
Code so far:
#first contact
from ctypes import sizeof
from PIL import Image
import math as m

def test():
    f = 'qran.txt'
    file = open(f)
    text = file.read()
    file.close() # this is dumb, should just read from file instead of dumping it into a
    text = list(text) # rudimentary fix, turn text into list so we can manage the characters
    size = m.floor(m.sqrt(len(text)//3)) # round the value for a square image
    print(size)
    # for elem in text:
    #     print(ord(elem))
    img = Image.new('RGB', (size, size))
    pixels = img.load() # create the pixel map
    c = 0
    for i in range(img.size[0]):   # for every col:
        for j in range(img.size[1]):   # for every row
            pixels[i, j] = (ord(text[c]), ord(text[c+1]), ord(text[c+2])) # set the colour accordingly
            c += 1
            c += 1
    img.show()
    img.save('qran.png')

test()
As you can see, the idea is working as a rough concept right now. You can copy the Quran in plain text and paste it into the same folder as this simple .py program to see the output.
The image comes out dull: the characters are converted into integers whose values all sit in a fairly narrow range, so most colors come off as light-to-dark gray.
Are there some libraries that could help with exaggerating the values so that the colors come out as more representative? I've thought of multiplying by 10 and truncating the result, of inverting the values, and of applying some filters.
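For instance, a rough sketch of the kind of value stretching I have in mind (this is just my guess at an approach, assuming numpy is available; the helper name is made up):

import numpy as np

# Hypothetical helper: stretch a sequence of character ordinals to span the full 0-255 range
def stretch_values(ordinals):
    arr = np.array(ordinals, dtype=np.float64)
    lo, hi = arr.min(), arr.max()
    if hi == lo:  # avoid dividing by zero on uniform input
        return np.zeros_like(arr, dtype=np.uint8)
    return ((arr - lo) / (hi - lo) * 255).astype(np.uint8)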
I know it's pretty much trial and error by this point (as well as polishing the actual code to provide usable functions that allow tweaking images without editing the function over and over again), but I'd like some outside input from people who have delved into image processing and such in Python.
I apologize in advance if this post was too wordy or contained some unnecessary tidbits; it's my first post in this community.
Just implementing Christoph's idea in the comments:
#!/usr/bin/env python3
from PIL import Image
import math as m
import pathlib
import numpy as np
# Load document as bytes
qran = pathlib.Path('qran.txt').read_bytes()
size = m.floor(m.sqrt(len(qran))) #round the value for a square image
# Make palette image from bytes
img = Image.frombuffer('P', (size,size), qran, "raw", 'P', 0, 1)
# Add random palette of 256 RGB triplets to image
palette = np.random.randint(0,256, 768, np.uint8)
img.putpalette(palette)
img.save('qran.png')
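If the random palette looks too noisy, a variation I'd suggest (my own sketch, not part of Christoph's comment; the output filename is just for illustration) is a smoothly graded palette, so that nearby byte values map to similar colours:

# Build a smooth 256-entry palette instead of a random one: byte value v maps to a simple gradient
palette = np.zeros(768, np.uint8)
for v in range(256):
    palette[3*v:3*v+3] = (v, 255 - v, (v * 2) % 256)  # arbitrary gradient, tweak to taste
img.putpalette(palette)
img.save('qran-graded.png')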
I need to split an RGBA image into an arbitrary number of boxes that are as equally sized as possible
I have attempted to use numpy.array_split, but am unsure of how to do so while preserving the RGBA channels
I have looked at the following questions; none of them detail how to split an image into n boxes, they only cover splitting an image into boxes of a predetermined pixel size, or into some particular shape.
While it seems that it would be some simple math to get the number of boxes from the box size and the image size, I am unsure of how to do so.
How to Split Image Into Multiple Pieces in Python
Cutting one image into multiple images using the Python Image Library
Divide image into rectangles information in Python
While attempting to determine the number of boxes from pixel box size, I used the formula
num_boxes = (img_size[0]*img_size[1])/ (box_size_x * box_size_y)
but that did not result in the image being split up properly
To clarify, I would like to be able to input an image that is a numpy array of size (a,b,4) and a number of boxes and output the images in some form (np array preferred, but whatever works)
I appreciate any help, even if you aren't able to provide the full method, I would appreciate some direction.
I have tried
def split_image(image, n_boxes):
    return numpy.array_split(image, n_boxes)
    # doesn't work with colors
def split_image(image, n_boxes):
    box_size = factor_int(n_boxes)
    M = image.shape[0] // box_size[0]
    N = image.shape[1] // box_size[1]
    return [image[x:x+M, y:y+N] for x in range(0, image.shape[0], M) for y in range(0, image.shape[1], N)]
factor_int returns integer as close to a square as possible from Factor an integer to something as close to a square as possible
I am still not sure if your inputs are actually the image and the dimensions of the boxes or the image and the number of boxes. Nor am I sure if your problem is deciding where to chop the image or knowing how to chop a 4-channel image, but maybe something in here will get you started.
I started with this RGBA image - the circles are transparent, not white:
#!/usr/bin/env python3

from PIL import Image
import numpy as np
import math

# Open image and get dimensions
im = Image.open('start.png').convert('RGBA')

# Make Numpy array from image and get height and width
ni = np.array(im)
h, w = ni.shape[:2]
print(f'Height: {h}, width: {w}')

BOXES = 4
for i in range(BOXES):
    this = ni[:, i*w//BOXES:(i+1)*w//BOXES, :]
    Image.fromarray(this).save(f'box-{i}.png')
You can change BOXES but leaving it at 4 gets you these 4 output images:
[the four output images: vertical quarter-width strips of the original]
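If you want a near-square grid rather than vertical strips, a rough sketch along the lines of your factor_int idea (this assumes factor_int returns a (rows, cols) pair, as described in the question, and reuses ni and Image from above):

def split_into_grid(ni, n_boxes):
    # Split an (h, w, 4) array into a rows x cols grid of as-equal-as-possible sub-arrays
    rows, cols = factor_int(n_boxes)
    h, w = ni.shape[:2]
    return [ni[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols, :]
            for r in range(rows) for c in range(cols)]

for i, box in enumerate(split_into_grid(ni, 6)):
    Image.fromarray(box).save(f'grid-box-{i}.png')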
The input to the program is height lines, each containing width RRGGBB values, where RR/GG/BB is the hexadecimal value of the corresponding channel of an RGB colour.
I need to take the input and convert it to an OpenCV image so that I could interact with it using the OpenCV library. How would I accomplish this?
Example of input:
https://drive.google.com/file/d/1XuKRuAiQLUv4rbVxl2xTgqYr_8JQeu63/view?usp=sharing
The first number is the height, the second is the width, and the rest of the text file is the image itself.
That is a really inefficient way to store an image, and this is a correspondingly inefficient way to unpack it!
#!/usr/bin/env python3

import numpy as np
import re
import cv2

# Read in entire file
with open('in.txt') as f:
    s = f.read()

# Find anything that looks like numbers
l = re.findall(r'[0-9a-f]+', s)

# Determine height and width
height = int(l[0])
width = int(l[1])

# Create numpy array of BGR triplets
im = np.zeros((height, width, 3), dtype=np.uint8)
i = 2
for row in range(height):
    for col in range(width):
        hex = l[i]
        R = int(hex[0:2], 16)
        G = int(hex[2:4], 16)
        B = int(hex[4:6], 16)
        im[row, col] = (B, G, R)
        i = i + 1

# Save to disk
cv2.imwrite('result.png', im)
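For large files, a more vectorized sketch (my own variation, assuming the same layout: two dimensions followed by one RRGGBB token per pixel) avoids the per-pixel Python loop:

#!/usr/bin/env python3

import numpy as np
import cv2

# Split the whole file into whitespace-separated tokens
with open('in.txt') as f:
    tokens = f.read().split()

height, width = int(tokens[0]), int(tokens[1])

# Decode all RRGGBB tokens in one go, then reshape into an RGB image
rgb = np.frombuffer(bytearray.fromhex(''.join(tokens[2:])), dtype=np.uint8).reshape(height, width, 3)

# OpenCV wants BGR ordering
cv2.imwrite('result.png', cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR))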
In case the data file disappears in future, this is how the first few lines look:
1080 1920
232215 18180b 18170b 18180b 18170a 181609 181708 171708 15160c 14170d
15170d 16170d 16160d 16170d 16170d 16170d 15160d 15160d 17170e 17180f
17180f 18180f 191a11 191a12 1c1c0f 1d1d0f 1e1d0f 1f1e10 1e1e10 1f1f12
202013 202113 212214 242413 242413 242413 242412 242410 242611 272610
272612 262712 262710 282811 27290f 2a2b10 2b2c12 2c2d12 2e3012 303210
Keywords: Python, Numpy, OpenCV, parse, hex, hexadecimal, image, image processing, regex
I want to work on an idea using images, but I can't get it to write pixel values properly; it always just ends up grey with some pattern-like artefacts, and no matter what I try, the artefacts change but the image remains grey.
Here's the basic code I have:
from PIL import Image

data = ""
for i in range(128**2):
    data += "(255,0,0),"

im = Image.fromstring("RGB", (128,128), data)
im.save("test.png", "PNG")
There is no information in http://effbot.org/imagingbook/pil-index.htm on how to format data, so I've tried using 0-1, 0-255, 00000000-11111111 (binary), brackets, square brackets, no brackets, an extra value for alpha, changing RGB to RGBA (which turns it light grey but that's it), a comma after, and no comma after, but absolutely nothing has worked.
For the record, I'm not wanting to just store a single colour, I'm just doing this to initially get it working.
The format string should be arranged like:
"RGBRGBRGBRGB..."
Where R is a single character indicating the red value of a particular pixel, and the same for G and B.
"But 255 is three characters long, how can I turn that into a single character?" you ask. Use chr to convert your numbers into byte values.
from PIL import Image

data = ""
for i in range(128**2):
    data += chr(255) + chr(0) + chr(0)

im = Image.fromstring("RGB", (128,128), data)
im.save("test.png", "PNG")
Result: (a solid red 128x128 image)
Alternative solution:
Using fromstring is a rather roundabout way to generate an image if your data isn't already in that format. Instead, consider using Image.load to directly manipulate pixel values. Then you won't have to do any string conversion stuff.
from PIL import Image

im = Image.new("RGB", (128, 128))
pix = im.load()
for x in range(128):
    for y in range(128):
        pix[x,y] = (255,0,0)

im.save("test.png", "PNG")
Is there any way to make an image half transparent?
The pseudocode is something like this:
from PIL import Image
image = Image.open('image.png')
image = alpha(image, 0.5)
I googled it for a couple of hours but I can't find anything useful.
I realize this question is really old, but with the current version of Pillow (v4.2.1) there is a function called putalpha. It seems to work fine for me. I don't know if it will work for every situation where you need to change the alpha, but it does work. It sets the alpha value for every pixel in the image. It seems, though, that you can also use a mask: http://www.leancrew.com/all-this/2013/11/transparency-with-pil/.
Use putalpha like this:
from PIL import Image
img = Image.open(image)
img.putalpha(127) # Half alpha; alpha argument must be an int
img.save(dest)
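A minimal sketch of the mask variant mentioned above (my own example; 'image.png' and the output filename are placeholders, and any greyscale image of the same size could serve as the mask):

from PIL import Image

img = Image.open('image.png').convert('RGBA')
mask = Image.new('L', img.size, 127)  # uniform 50% alpha; any "L" mode image works here
img.putalpha(mask)
img.save('half_transparent.png')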
Could you do something like this?
from PIL import Image

image = Image.open('image.png')      # open image
image = image.convert("RGBA")        # convert to RGBA
r, g, b, a = image.getpixel((x, y))  # get the RGBA value at coordinates x, y
a = a // 2                           # set alpha to half (or you could do a = 50, maybe?)
image.putpixel((x, y), (r, g, b, a)) # put back the modified RGBA values at the same pixel coordinates
Definitely not the most efficient way of doing things but it might work. I wrote the code in browser so it might not be error free but hopefully it can give you an idea.
EDIT: Just noticed how old this question was. Leaving answer anyways for future help. :)
I put together Pecan's answer and cr333's question from this question:
Using PIL to make all white pixels transparent?
... and came up with this:
from PIL import Image

opacity_level = 170 # Opaque is 255, input between 0-255

img = Image.open('img1.png')
img = img.convert("RGBA")
datas = img.getdata()

newData = []
for item in datas:
    newData.append((item[0], item[1], item[2], opacity_level))

img.putdata(newData)
img.save("img2.png", "PNG")
In my case, I have text with black background and wanted only the background semi-transparent, in which case:
from PIL import Image

opacity_level = 170 # Opaque is 255, input between 0-255

img = Image.open('img1.png')
img = img.convert("RGBA")
datas = img.getdata()

newData = []
for item in datas:
    if item[0] == 0 and item[1] == 0 and item[2] == 0:
        newData.append((0, 0, 0, opacity_level))
    else:
        newData.append(item)

img.putdata(newData)
img.save("img2.png", "PNG")
I had an issue, where black boxes were appearing around my image when applying putalpha().
This workaround (applying alpha in a copied layer) solved it for me.
from PIL import Image

with Image.open("file.png") as im:
    im2 = im.copy()
    im2.putalpha(180)
    im.paste(im2, im)
    im.save("file2.png")
Explanation:
Like I said, putalpha modifies all pixels by setting their alpha value, so fully transparent pixels become only partially transparent. The code I posted above first sets (putalpha) all pixels to semi-transparent in a copy, then copies (paste) all pixels to the original image using the original alpha values as a mask. This means that fully transparent pixels in the original image are skipped during the paste.
Credit: https://github.com/nulano # https://github.com/python-pillow/Pillow/issues/4687#issuecomment-643567573
I just did this myself... even though my code may be a little bit weird... but it works fine, so I share it here. Hope it helps somebody. =)
The idea: making a picture transparent means lowering its alpha, which is the 4th element in the tuple.
My frame code:
from PIL import Image

img = Image.open(image)
img = img.convert('RGBA')  # you can make sure your pic is in the right mode by checking img.mode
data = img.getdata()       # you'll get a list of tuples

newData = []
for a in data:
    a = a[:3]       # you'll get your tuple shortened to RGB
    a = a + (100,)  # change the 100 to any transparency number you like between (0, 255)
    newData.append(a)

img.putdata(newData)  # you'll get your new img ready
img.save('filename.filetype')
I didn't find the right command to do this job automatically, so I wrote this myself. Hope it helps again. XD
This method helps reduce the opacity of a logo that has transparency before pasting it over an image:
# pip install Pillow
# PIL.__version__ is 9.3.0
from PIL import Image, ImageEnhance
im = Image.open('logo.png').convert('RGBA')
alpha = im.split()[3]
alpha = ImageEnhance.Brightness(alpha).enhance(.5)
im.putalpha(alpha)
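A possible follow-up (my own sketch; 'background.png' and the paste position are just placeholders) is to paste the faded logo over a background, using the logo itself as the paste mask so its transparent areas stay transparent:

background = Image.open('background.png').convert('RGBA')
background.paste(im, (10, 10), im)  # third argument uses the logo's alpha channel as the mask
background.save('combined.png')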
I have a multiband satellite image stored in the band interleaved pixel (BIP) format along with a separate header file. The header file provides the details such as the number of rows and columns in the image, and the number of bands (can be more than the standard 3).
The image itself is stored like this (assume a 5 band image):
[B1][B2][B3][B4][B5][B1][B2][B3][B4][B5] ... and so on (basically 5 bytes - one for each band - for each pixel starting from the top left corner of the image).
I need to separate out each of these bands as PIL images in Python 3.2 (on Windows 7 64 bit), and currently I think I'm approaching the problem incorrectly. My current code is as follows:
def OpenBIPImage(file, width, height, numberOfBands):
    """
    Opens a raw image file in the BIP format and returns a list
    comprising each band as a separate PIL image.
    """
    bandArrays = []
    with open(file, 'rb') as imageFile:
        data = imageFile.read()
    currentPosition = 0
    for i in range(height * width):
        for j in range(numberOfBands):
            if i == 0:
                bandArrays.append(bytearray(data[currentPosition : currentPosition + 1]))
            else:
                bandArrays[j].extend(data[currentPosition : currentPosition + 1])
            currentPosition += 1
    bands = [Image.frombytes('L', (width, height), bytes(bandArray)) for bandArray in bandArrays]
    return bands
This code takes way too long to open a BIP file, surely there must be a better way to do this. I do have the numpy and scipy libraries as well, but I'm not sure how I can use them, or if they'll even help in any way.
Since the number of bands in the image are also variable, I'm finding it hard to figure out a way to read the file quickly and separate the image into its component bands.
And just for the record, I have tried messing with the list methods in the loops (using slices, not using slices, using only append, using only extend, etc.); it doesn't particularly make a difference, as most of the time is lost simply because of the number of iterations involved: width * height * numberOfBands.
Any suggestions or advice would be really helpful. Thanks.
If you can find a fast function to load the binary data in a big python list (or numpy array), you can de-interleave the data using the slicing notation:
band0 = biglist[::nbands]
band1 = biglist[1::nbands]
....
Does that help?
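A minimal numpy sketch of that idea (assuming the same file, width, height and numberOfBands variables as in the question):

import numpy as np
from PIL import Image

# Read the whole BIP file as one flat uint8 array, then put the bands on the last axis
data = np.fromfile(file, dtype=np.uint8, count=width * height * numberOfBands)
data = data.reshape(height, width, numberOfBands)

# Each band becomes its own greyscale ('L') PIL image
bands = [Image.fromarray(np.ascontiguousarray(data[:, :, b])) for b in range(numberOfBands)]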
Standard PIL
To load an image from a file, use the open function in the Image module.
>>> import Image
>>> im = Image.open("lena.ppm")
If successful, this function returns an Image object. You can now use instance attributes to examine the file contents.
>>> print im.format, im.size, im.mode
PPM (512, 512) RGB
The format attribute identifies the source of an image. If the image was not read from a file, it is set to None. The size attribute is a 2-tuple containing width and height (in pixels). The mode attribute defines the number and names of the bands in the image, and also the pixel type and depth. Common modes are "L" (luminance) for greyscale images, "RGB" for true colour images, and "CMYK" for pre-press images.
The Python Imaging Library also allows you to work with the individual bands of a multi-band image, such as an RGB image. The split method creates a set of new images, each containing one band from the original multi-band image. The merge function takes a mode and a tuple of images, and combines them into a new image. The following sample swaps the three bands of an RGB image:
Splitting and merging bands
r, g, b = im.split()
im = Image.merge("RGB", (b, g, r))
So I think you should simply derive the mode and then split accordingly.
PIL with Spectral Python (SPy python module)
However, as you pointed out in your comments below, you are not dealing with a normal RGB image with 3 bands. So to deal with that, SpectralPython (a pure python module which requires PIL) might just be what you are looking for.
Specifically - http://spectralpython.sourceforge.net/class_func_ref.html#spectral.io.bipfile.BipFile
spectral.io.bipfile.BipFile deals with Image files with Band Interleaved Pixel (BIP) format.
Hope this helps.
I suspect that the repeated calls to extend are not good; it is better to allocate everything first:
def OpenBIPImage(file, width, height, numberOfBands):
    """
    Opens a raw image file in the BIP format and returns a list
    comprising each band as a separate PIL image.
    """
    with open(file, 'rb') as imageFile:
        data = imageFile.read()
    # Preallocate one zero-filled bytearray per band instead of growing them with extend
    bandArrays = []
    for j in range(numberOfBands):
        bandArrays.append(bytearray(b"\0" * (height * width)))
    currentPosition = 0
    for i in range(height * width):
        for j in range(numberOfBands):
            bandArrays[j][i] = data[currentPosition]
            currentPosition += 1
    bands = [Image.frombytes('L', (width, height), bytes(bandArray)) for bandArray in bandArrays]
    return bands
My measurements don't show such a slowdown:
import time

def x():
    height, width, numberOfBands = 1401, 801, 6
    before = time.time()
    for i in range(height * width):
        for j in range(numberOfBands):
            pass
    print(time.time() - before)
>>> x()
0.937999963760376
EDITED