PIL's Image.fromarray() function confused me - python

I want to create a small image with PIL. My idea is to first create an ndarray object with numpy and then transform it into an Image object, but it doesn't work!
small = np.array([[0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 0, 1, 0]])
small = Image.fromarray(small, 'L')
print(small.size)
This code prints (6, 2), so why does it transpose my original input?
What made me even more confused is that when I try to print all the pixels:
for i in range(6):
    for j in range(2):
        print(small.getpixel((i, j)), end='')
it prints out: 0 0 0 0 0 1 0 0 0 0 0 0
I had no idea what happened.

Image.fromarray(small, 'L') expects an 8-bit input. Converting the input array to np.int8 works for me:
small = np.array([[0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 0, 1, 0]], dtype=np.int8)
Just removing the 'L' will also work.
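For reference, a minimal sketch of the full fix (assuming the pixel values are meant to stay in the 0-255 range): build the array as unsigned 8-bit so it matches mode 'L', which is 8-bit grayscale. Note that size is reported as (width, height), so (6, 2) is not a transposition, and getpixel takes (x, y) = (column, row) coordinates.
import numpy as np
from PIL import Image

# uint8 matches the 8-bit grayscale mode 'L'
small = np.array([[0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 0, 1, 0]], dtype=np.uint8)
img = Image.fromarray(small, 'L')
print(img.size)              # (6, 2) -- PIL reports (width, height)
print(img.getpixel((1, 0)))  # 1 -- getpixel takes (x, y) = (column, row)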

Python3 initializing 2D Array ignores index specification range(start,end,step)

I want to initialize a 2D array in Python 3 with indexes NOT starting at zero.
x_length=16
y_length=4
x_start=-4
y_start=-400
# Later passing those variables with:
level_matrix=generate_area(x_length,y_length,x_start,y_start)
Part of the function:
def generate_area(xlen,ylen,xstart,ystart):
    matrix=[[0 for y in range(ystart, ystart+ylen)] for x in range(xstart, xstart+xlen)]
    for index, value in enumerate(matrix):
        print(f"S1_vec('{index}')<= {value}")
    for index, value in enumerate(matrix[0]):
        print(f"S1_vec('{index}')<= {value}")
    for x in range(xstart, xstart+xlen):
        for y in range(ystart, ystart+ylen):
            print("Y:"+str(y))
            print("X:"+str(x))
Output:
S1_vec('0')<= [0, 0, 0, 0]
S1_vec('1')<= [0, 0, 0, 0]
S1_vec('2')<= [0, 0, 0, 0]
S1_vec('3')<= [0, 0, 0, 0]
S1_vec('4')<= [0, 0, 0, 0]
S1_vec('5')<= [0, 0, 0, 0]
S1_vec('6')<= [0, 0, 0, 0]
S1_vec('7')<= [0, 0, 0, 0]
S1_vec('8')<= [0, 0, 0, 0]
S1_vec('9')<= [0, 0, 0, 0]
S1_vec('10')<= [0, 0, 0, 0]
S1_vec('11')<= [0, 0, 0, 0]
S1_vec('12')<= [0, 0, 0, 0]
S1_vec('13')<= [0, 0, 0, 0]
S1_vec('14')<= [0, 0, 0, 0]
S1_vec('15')<= [0, 0, 0, 0]
S1_vec('0')<= 0
S1_vec('1')<= 0
S1_vec('2')<= 0
S1_vec('3')<= 0
Y:-400
X:-4
IndexError: list assignment index out of range
Well, as you can clearly see, there are no negative indexes in the array. This does not work properly with a positive offset either. The loops want to access the offset index values and the script obviously fails, since the indexes only get created from 0 upwards even when I write range(var1, var2). This makes no sense to me, since it should work like range(starting_point, end_point, steps_if_needed). And the copy-pasted for loop syntax is used successfully later in the script in multiple instances.
What causes such weird behavior, and how can I fix it without changing anything else except the initialization of the array in the code? I need the 2D array to work exactly within the specified region. Is there a simple solution?
Edit:
Just to clarify the goal:
I need a 2D array with negative index capabilities. The range is known, but each entry needs to be defined. Append is useless, because it will not add a specific negative index for me.
If, for example, I need to define matrix[-4][-120]="Jeff", this needs to work. I do not even care if the solution looks like it would in Bash, where you have to write matrix["-4,-120"]; I just need a reasonable way to address such entries.
You can use a virtual indexing strategy to achieve this. Here is my approach:
offset = 5
size = 4
a = [x for x in range(offset,offset+size)]
print(a) # [5, 6, 7, 8]
def get_loc(idx):
    return a[idx-offset]

def set_loc(idx, val):
    a[idx-offset] = val
set_loc(6,15)
print(a) # [5, 15, 7, 8]
set_loc(8, 112)
print(a) # [5, 15, 7, 112]
This code was just to illustrate what I mean by a virtual indexing strategy; you can simply use the following to get and set values:
# to get a[n] use a[n-offset]
print(a[8-offset]) # 112
# to set a[n] = x, use a[n-offset] = x
a[7-offset] = 21
print(a) # [5, 15, 21, 112]
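For the 2D case from the question, the same offset idea can be extended. Here is a rough sketch; the helper names set_cell and get_cell are hypothetical, not from the answer above:
x_start, y_start = -4, -400
x_length, y_length = 16, 4

# Plain 0-based storage; the helpers translate "virtual" coordinates.
matrix = [[0 for _ in range(y_length)] for _ in range(x_length)]

def set_cell(x, y, val):
    matrix[x - x_start][y - y_start] = val

def get_cell(x, y):
    return matrix[x - x_start][y - y_start]

set_cell(-4, -398, "Jeff")
print(get_cell(-4, -398))  # Jeff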

Scanning for groups of the same value in numpy array

I have a numpy array where 0 denotes empty space and 1 denotes that a location is filled. I am trying to find a quick method of scanning the numpy array for places where there are multiple zeros adjacent to each other, and returning the location of the central zero.
For example, if I had the following array:
[0 1 0 1]
[0 0 0 1]
[0 1 0 1]
[1 1 1 1]
I want to return the locations for which there is an adjacent zero on either side of a central zero,
e.g.
[1,1]
as this is the centre of 3 zeros, i.e. there is a zero on either side of the zero at this location.
I'm aware that this can be calculated using if statements, but wondered if there was a more pythonic way of doing this.
Any help is greatly appreciated
The desired output here for arbitrary inputs is not exhaustively specified in the question, but here is a possible approach that might be useful for this kind of problem, adapted to the details of the desired output. It uses np.cumsum, np.bincount, np.where, and np.median to find the middle index of groups of consecutive zeros along the rows of a 2D array:
import numpy as np
def find_groups(x, min_size=3, value=0):
    # Compute a sequential label for groups in each row.
    xc = (x != value).cumsum(1)
    # Count the number of occurrences per group in each row.
    counts = np.apply_along_axis(
        lambda x: np.bincount(x, minlength=1 + xc.max()),
        axis=1, arr=xc)
    # Filter by minimum number of occurrences.
    i, j = np.where(counts >= min_size)
    # Compute the median index of each group.
    return [
        (ii, int(np.ceil(np.median(np.where(xc[ii] == jj)[0]))))
        for ii, jj in zip(i, j)
    ]

x = np.array([[0, 1, 0, 1],
              [0, 0, 0, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 1]])
print(find_groups(x))
# [(1, 1)]
It should work properly even for multiple rows with groups of varying sizes, and even multiple groups per row:
x2 = np.array([[0, 1, 0, 1, 1, 1, 1],
               [0, 0, 0, 1, 0, 0, 0],
               [0, 1, 0, 0, 0, 0, 1],
               [0, 0, 0, 0, 0, 0, 0]])
print(find_groups(x2))
# [(1, 1), (1, 5), (2, 3), (3, 3)]
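If the requirement is only the literal case shown in the question (a zero with a zero immediately to its left and right in the same row), a simpler slicing-based sketch would also do. This is an assumption about the requirements, not the approach above:
import numpy as np

def central_zeros(x):
    z = (x == 0)
    # True where a zero has a zero immediately to its left and to its right.
    mid = z[:, 1:-1] & z[:, :-2] & z[:, 2:]
    rows, cols = np.where(mid)
    # Shift column indexes back to the original (unsliced) coordinates.
    return [(int(r), int(c) + 1) for r, c in zip(rows, cols)]

x = np.array([[0, 1, 0, 1],
              [0, 0, 0, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 1]])
print(central_zeros(x))
# [(1, 1)]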

What is the most pythonic way to find all coordinate pairs in a numpy array that match a specific condition?

So given a 2D numpy array consisting of ones and zeros, I want to find every index where the value is one and where at least one of its top, left, right, or bottom neighbours is a zero. For example in this array
0 0 0 0 0
0 0 1 0 0
0 1 1 1 0
0 0 1 0 0
0 0 0 0 0
I only want coordinates for (1,2), (2,1), (2,3) and (3,2) but not for (2,2).
I have created code that works and creates two lists of coordinates, similar to the numpy nonzero method; however, I don't think it's very "pythonic", and I was hoping there was a better and more efficient way to solve this problem. (*Note: this only works on arrays padded by zeros.)
from numpy import nonzero
...
array = ...  # A numpy array consisting of zeros and ones
non_zeros_pairs = nonzero(array)
coordinate_pairs = [[], []]
for x, y in zip(non_zeros_pairs[0], non_zeros_pairs[1]):
    if array[x][y+1]==0 or array[x][y-1]==0 or array[x+1][y]==0 or array[x-1][y]==0:
        coordinate_pairs[0].append(x)
        coordinate_pairs[1].append(y)
...
If there are methods in numpy that can handle this for me, that would be awesome. If this question has already been asked/answered on Stack Overflow before, I will gladly remove it; I just struggled to find anything. Thank you.
Setup
import scipy.signal
import numpy as np
a = np.array([[0, 0, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 0, 0]])
Create a window which matches the four directions from each value, and convolve. Then you can check whether an element is 1 and its convolution is less than 4, since a value == 4 means the cell was surrounded by 1s.
window = np.array([[0, 1, 0],
                   [1, 0, 1],
                   [0, 1, 0]])
m = scipy.signal.convolve2d(a, window, mode='same', fillvalue=1)
v = np.where(a & (m < 4))
list(zip(*v))
[(1, 2), (2, 1), (2, 3), (3, 2)]
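If pulling in scipy is undesirable, the same neighbour test can be sketched with np.pad and boolean slicing, reusing a and the import from the Setup above; padding with 1s mirrors fillvalue=1. This is an alternative sketch, not the answer's method:
p = np.pad(a, 1, constant_values=1)
# A cell qualifies if it is 1 and any of its four neighbours is 0.
neighbour_zero = ((p[:-2, 1:-1] == 0) | (p[2:, 1:-1] == 0) |
                  (p[1:-1, :-2] == 0) | (p[1:-1, 2:] == 0))
print(list(zip(*np.where((a == 1) & neighbour_zero))))
# [(1, 2), (2, 1), (2, 3), (3, 2)]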

Tensorflow: tensor binarization

I want to transform this dataset in such a way that each tensor has a given size n, and the feature at index i of this new tensor is set to 1 if and only if there is an i in the original feature (modulo n).
I hope the following example will make things clearer
Let's suppose I have a dataset like:
t = tf.constant([
    [0, 3, 4],
    [12, 2, 4]])
ds = tf.data.Dataset.from_tensors(t)
I want to get (if n = 9)
t = tf.constant([
    [1, 0, 0, 1, 1, 0, 0, 0, 0],   # indices set to 1 are 0, 3 and 4
    [0, 0, 1, 1, 1, 0, 0, 0, 0]])  # indices set to 1 are 2, 4, and 12 % 9 = 3
I know how to apply the modulo to a tensor, but I can't figure out how to do the rest of the transformation.
Thanks
That is similar to tf.one_hot, only for multiple values at the same time. Here is a way to do that:
import tensorflow as tf

def binarization(t, n):
    # One-hot encoding of each value
    t_1h = tf.one_hot(t % n, n, dtype=tf.bool, on_value=True, off_value=False)
    # Reduce across last dimension of the original tensor
    return tf.cast(tf.reduce_any(t_1h, axis=-2), t.dtype)

# Test
with tf.Graph().as_default(), tf.Session() as sess:
    t = tf.constant([
        [ 0, 3, 4],
        [12, 2, 4]
    ])
    t_m1h = binarization(t, 9)
    print(sess.run(t_m1h))
Output:
[[1 0 0 1 1 0 0 0 0]
[0 0 1 1 1 0 0 0 0]]
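Under TensorFlow 2.x with eager execution (an assumption; the answer above targets the 1.x Graph/Session API), the same function can be called directly:
import tensorflow as tf

def binarization(t, n):
    # One-hot encode each value modulo n, then merge along the value axis.
    t_1h = tf.one_hot(t % n, n, dtype=tf.bool, on_value=True, off_value=False)
    return tf.cast(tf.reduce_any(t_1h, axis=-2), t.dtype)

t = tf.constant([[ 0, 3, 4],
                 [12, 2, 4]])
print(binarization(t, 9).numpy())
# [[1 0 0 1 1 0 0 0 0]
#  [0 0 1 1 1 0 0 0 0]]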

Python — Randomly fill 2D array with set number of 1's

Suppose I have a 2D array (8x8) of 0's. I would like to fill this array with a predetermined number of 1's, but in a random manner. For example, suppose I want to place exactly 16 1's in the grid at random, resulting in something like this:
[[0, 0, 0, 1, 0, 0, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 1, 1, 0, 0],
[0, 1, 0, 0, 0, 1, 0, 0],
[0, 1, 1, 0, 0, 0, 0, 1]]
The resulting placement of the 1's does not matter in the slightest, as long as it is random (or as random as Python will allow).
My code technically works, but I imagine it's horrendously inefficient. All I'm doing is setting the probability of each number becoming a 1 to n/s, where n is the number of desired 1's and s is the size of the grid (i.e. number of elements), and then I check to see if the correct number of 1's was added. Here's the code (Python 2.7):
length = 8
numOnes = 16
while True:
    board = [[(random.random() < float(numOnes)/(length**2))*1 for x in xrange(length)] for x in xrange(length)]
    if sum([subarr.count(1) for subarr in board]) == 16:
        break
print board
While this works, it seems like a roundabout method. Is there a better (i.e. more efficient) way of doing this? I foresee running this code many times (hundreds of thousands if not millions), so speed is a concern.
Either shuffle a list of 16 1s and 48 0s:
board = [1]*16 + 48*[0]
random.shuffle(board)
board = [board[i:i+8] for i in xrange(0, 64, 8)]
or fill the board with 0s and pick a random sample of 16 positions to put 1s in:
board = [[0]*8 for i in xrange(8)]
for pos in random.sample(xrange(64), 16):
    board[pos//8][pos%8] = 1
I made the ones, made the zeros, concatenated them, shuffled them, and reshaped.
import numpy as np
def make_board(shape, ones):
    o = np.ones(ones, dtype=np.int)
    z = np.zeros(np.product(shape) - ones, dtype=np.int)
    board = np.concatenate([o, z])
    np.random.shuffle(board)
    return board.reshape(shape)
make_board((8,8), 16)
Edit.
For what it's worth, user2357112's approach with numpy is fast...
def make_board(shape, ones):
    size = np.product(shape)
    board = np.zeros(size, dtype=np.int)
    i = np.random.choice(np.arange(size), ones, replace=False)
    board[i] = 1
    return board.reshape(shape)
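For completeness, an equivalent numpy one-liner (an alternative sketch, not from the answers above) permutes a flat list of 16 ones and 48 zeros and reshapes it into the board:
import numpy as np

# Shuffle 16 ones and 48 zeros, then arrange as an 8x8 board.
board = np.random.permutation([1] * 16 + [0] * 48).reshape(8, 8)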
