I'm struggling to convert an image of a signal back into a Python list (it was plotted a long time ago and I have lost the data; I only have the images).
I've searched the internet, but I only find answers about converting a 2D image into a 1D array, and what I want is to recover the signal itself.
Long story short:
I have this image of a signal:
and I want to convert this to a Python list of size 65535, so my list should look like this:
list = [0.14, 0.144, 0.12 ...... ]
Thanks!
As a first plan, you could load the image with PIL/Pillow or OpenCV, convert it to greyscale, and resize it to 65536 pixels wide by 100 pixels tall.
You will then have a Numpy array with dimensions (100, 65536), and you can run np.argmin() to find the index (the y-value) of the darkest pixel in each column.
Alternatively, find the indices of all the low-valued pixels in each column and take their median instead of the argmin in the second step.
The code starts off like this:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Load image and convert to greyscale
im = Image.open('signal.png').convert('L')
# Resize to match required output
big = im.resize((65536,100), resample=Image.NEAREST)
# Make Numpy array
na = np.array(big)
# This looks about right, I think
print(np.argmin(na,axis=0))
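The median variant mentioned above could be sketched like this (an illustration on a synthetic array; it assumes a darkness threshold of 128 and a greyscale array `na` like the one produced by the code above):

```python
import numpy as np

# Synthetic stand-in for the resized greyscale image: white background,
# with a fake 3-pixel-thick dark trace around row 40
na = np.full((100, 256), 255, dtype=np.uint8)
na[39:42, :] = 0

# For each column, take the median row index of all "dark" pixels
ys, xs = np.where(na < 128)
signal = np.array([np.median(ys[xs == col]) for col in range(na.shape[1])])
print(signal[:4])  # the median of rows 39, 40, 41 is 40 in every column
```

The median is more robust than argmin when compression noise leaves isolated dark specks away from the trace.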
If you trim the image so that the signal touches the edges all the way around, then the first black pixel on the left comes out as list element 0 and the last pixel on the right as the last element of your list. Note that image row indices count down from the top, so the peak of the signal comes out with a y-value of 0 and the lowest point with a y-value of 99; subtract from 99 if you want the conventional orientation.
Trimming would look like this:
from PIL import Image, ImageOps
import numpy as np
# Load image and convert to greyscale
im = Image.open('signal.png').convert('L')
# Get bounding box
bbox = ImageOps.invert(im).getbbox()
# Trim original image so that signal touches edge on all sides
im = im.crop(bbox)
... continue as before ...
Essentially, you'll have to "scan" the image left to right and identify the correct signal value at each "time step". As the image you presented doesn't have a scale or units, you'll probably want to normalize all signal values to the range 0 to 1, as you've implied in your question; you can rescale the signal later if that's not the right range.
It looks like your images have some anti-aliasing at each step of the signal, which means you won't have columns of all zeros except for one "signal" value. Instead you'll have a cluster of signal values at each time step, some weaker than others, because the image compression has blurred the signal slightly. This shouldn't be a problem, since you'll just take the max at each time step.
Assuming these images are in grayscale (if not, convert to grayscale), you'd want to find the maximum (or minimum, if the signal is drawn in black) color value at each column of pixels in the images (representing timesteps of the signal).
Mark Setchell's suggestion of PIL/Pillow seems like a great first step.
numpy's amax reduces a matrix to its maximum value along an entire axis.
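For instance, with a made-up 3x4 array, the column-wise reductions behave like this:

```python
import numpy as np

na = np.array([[9, 2, 7, 4],
               [1, 8, 3, 6],
               [5, 0, 2, 9]])

col_max = np.amax(na, axis=0)     # maximum of each column
col_dark = np.argmin(na, axis=0)  # row index of each column's minimum

print(col_max)   # → [9 8 7 9]
print(col_dark)  # → [1 2 2 0]
```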
I have an image, and using steganography I want to save data in the border pixels only.
In other words, I want to save data only in the least significant bits (LSB) of the border pixels of an image.
Is there any way to get the border pixels so I can store data (at most 15 characters of text) in them?
Please help me out...
OBTAINING BORDER PIXELS:
Masking operations are one of many ways to obtain the border pixels of an image. The code would be as follows:
import cv2
import numpy as np

a = cv2.imread('cal1.jpg')
bw = 20  # width of border required
mask = np.ones(a.shape[:2], dtype="uint8")
cv2.rectangle(mask, (bw, bw), (a.shape[1]-bw, a.shape[0]-bw), 0, -1)
output = cv2.bitwise_and(a, a, mask=mask)
cv2.imshow('out', output)
cv2.waitKey(5000)
After I get an array of ones with the same dimensions as the input image, I use the cv2.rectangle function to draw a rectangle of zeros. The first argument is the image you want to draw on, the second is the start (x, y) point, and the third is the end (x, y) point. The fourth argument is the colour, and the fifth is the thickness of the rectangle drawn ('-1' fills the rectangle). You can find the documentation for the function here.
Now that we have our mask, you can use the 'cv2.bitwise_and' function (documentation) to perform an AND operation on the pixels. Pixels that are ANDed with '1' pixels in the mask retain their values; pixels that are ANDed with '0' pixels in the mask are set to 0. You will then have output as follows:
The input image was:
You have the border pixels now!
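As a sketch of the embedding step itself (this helper is hypothetical, not part of the answer above): collect the border coordinates in a fixed order, then write the message bits into the least significant bit of one channel. It assumes a numpy BGR image and the same border width `bw` as in the code above.

```python
import numpy as np

def embed_in_border(img, text, bw=20):
    """Write the bits of `text` (ASCII, LSB-first per byte) into the LSB
    of the blue channel of the border pixels, in row-major order."""
    bits = [(byte >> i) & 1 for byte in text.encode('ascii') for i in range(8)]
    h, w = img.shape[:2]
    coords = [(y, x) for y in range(h) for x in range(w)
              if y < bw or y >= h - bw or x < bw or x >= w - bw]
    if len(bits) > len(coords):
        raise ValueError("message too long for this border")
    for bit, (y, x) in zip(bits, coords):
        img[y, x, 0] = (img[y, x, 0] & 0xFE) | bit
    return img

img = np.zeros((100, 100, 3), dtype=np.uint8)
embed_in_border(img, "hello")
```

Extraction walks the same coordinates and reassembles bytes from `img[y, x, 0] & 1`; save the result in a lossless format such as PNG, since JPEG compression destroys LSB data.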
Using LSB planes to store your info is not a good idea. It makes sense when you think about it: a simple lossy compression would affect most of your hidden data, so saving your image as JPEG would lose the hidden info or corrupt it severely. If you still want to try LSB, look into bit-plane slicing, through which you obtain the bit planes (from MSB to LSB) of the image. (image from researchgate.net)
I have done it in Matlab and am not quite sure about doing it in Python. In Matlab, the function bitget(image, 1) returns the LSB of the image. I found a question on bit-plane slicing using Python here. Though unanswered, you might want to look at the posted code.
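In Python/numpy, the equivalent of Matlab's bitget is a simple bit mask; a quick sketch on a made-up array:

```python
import numpy as np

img = np.array([[5, 4],
                [255, 128]], dtype=np.uint8)

lsb = img & 1         # bit plane 1 (the LSB), like bitget(image, 1)
msb = (img >> 7) & 1  # bit plane 8 (the MSB)

print(lsb)  # → [[1 0]
            #    [1 0]]
print(msb)  # → [[0 0]
            #    [1 1]]
```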
To access the border pixels and enter data into them:
The shape of an image is accessed with t = img.shape; it returns a tuple of the number of rows, columns, and channels. The 'component' index selects a colour channel (OpenCV stores pixels as BGR, so the channels are 0, 1, and 2 respectively). int(r[0]) is the first of the values being stored.
import cv2
img = cv2.imread('xyz.png')
t = img.shape
print(t)
component = 2
r = [65, 66, 67, 68]  # example values to store; 'r' was not defined in the original snippet
img.itemset((0, 0, component), int(r[0]))
img.itemset((0, t[1]-1, component), int(r[1]))
img.itemset((t[0]-1, 0, component), int(r[2]))
img.itemset((t[0]-1, t[1]-1, component), int(r[3]))
print(img.item(0, 0, component))
print(img.item(0, t[1]-1, component))
print(img.item(t[0]-1, 0, component))
print(img.item(t[0]-1, t[1]-1, component))
cv2.imwrite('output.png', img)
Let's say that you are given this image
and are instructed to programmatically color only its inside the appropriate color, but the program has to work not only on this shape and other primitives but on any outlined shape, however complex, shaded or not.
This is the problem I am trying to solve, and here's where I'm stuck: it seems like it should be simple to teach a computer to see black lines and color inside them. But searching mostly turns up eigenface-style recognition algorithms, which seem to me to be overfitting and of far greater complexity than is needed for at least the basic form of this problem.
I would like to frame this as a supervised learning classifier problem whose purpose is to feed my model a complete image and have it output smaller numpy arrays consisting of pixels classified as object or background. But to do that I would need training data, which seems to mean hand-labeling every pixel in my training set, which obviously defeats the purpose of the program.
Now that you have the background, here's my question: given this image, is there an efficient way to get two distinct arrays, each consisting of all adjacent pixels that do not contain any solid black (RGB (0,0,0)) pixels?
One set would then be all pixels inside the circle, and the other all pixels outside it.
You can use scipy.ndimage.measurements.label to do all the heavy lifting for you:
import scipy.ndimage
import scipy.misc
data = scipy.misc.imread(...)
assert data.ndim == 2, "Image must be monochromatic"
# finds and number all disjoint white regions of the image
is_white = data > 128
labels, n = scipy.ndimage.measurements.label(is_white)
# get a set of all the region ids which are on the edge - we should not fill these
on_border = set(labels[:,0]) | set(labels[:,-1]) | set(labels[0,:]) | set(labels[-1,:])
for label in range(1, n+1):  # label 0 is all the black pixels
    if label not in on_border:
        # turn every pixel with that label to black
        data[labels == label] = 0
This will fill all closed shapes within the image, treating a shape cut by the edge of the image as not closed.
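A quick check of that approach on a synthetic image (a black square outline on a white background; the enclosed white region gets filled, the outside stays white). Note that modern scipy exposes the same function as scipy.ndimage.label:

```python
import numpy as np
import scipy.ndimage

# 10x10 white image with a black square outline from (2,2) to (7,7)
data = np.full((10, 10), 255, dtype=np.uint8)
data[2, 2:8] = 0
data[7, 2:8] = 0
data[2:8, 2] = 0
data[2:8, 7] = 0

is_white = data > 128
labels, n = scipy.ndimage.label(is_white)
on_border = set(labels[:, 0]) | set(labels[:, -1]) | set(labels[0, :]) | set(labels[-1, :])
for label in range(1, n + 1):
    if label not in on_border:
        data[labels == label] = 0

print(data[4, 4], data[0, 0])  # → 0 255 (inside filled, outside untouched)
```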
I have a Pygame black display on which I will draw a letter with white color, as shown in the image below. The size of the display can be anything above 100x100 pixels.
I know I can use something like this to get the surface 2d array:
miSuface = pygame.display.get_surface()
miCoso = pygame.surfarray.array2d(miSuface)
However, I would like to somehow reduce this array to a 7x5 bit matrix, in which 0 corresponds to a black pixel and 1 to a white pixel. My final intent is to use the matrix to train a neural network and create a simple OCR. Is there any way I can achieve this? Or is there a better approach to get the 7x5 matrix?
I don't know of any way offhand to compress your array2d into either a smaller array or one with 1 bit of color information, but you can do the following:
Iterate through the array. If a pixel's color value is less than or equal to 0x888888, change it to 0x000000; if it's greater, change it to 0xFFFFFF.
Create a new [7][5] array.
Iterate through again, summing the pixel values (black = 0, white = 1) over each of the 35 blocks of the array; the block size depends entirely on the size of your original array2d. If at least half the pixels in a block are white, add a white element to your matrix; otherwise add a black element.
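A sketch of those steps with numpy (it assumes the array has already been thresholded to 0/1 and that its dimensions divide evenly by 5 and 7; note also that pygame's array2d is indexed (x, y), so a transpose may be needed first):

```python
import numpy as np

def shrink_to_bits(binary, rows=5, cols=7):
    """Reduce a thresholded 0/1 array to a rows x cols bit matrix by block averaging."""
    h, w = binary.shape
    bh, bw = h // rows, w // cols
    out = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            block = binary[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
            # White wins if at least half the block's pixels are white
            out[r, c] = 1 if block.mean() >= 0.5 else 0
    return out

binary = np.zeros((10, 14), dtype=int)
binary[4:10, 6:8] = 1  # a white vertical bar in the middle
print(shrink_to_bits(binary))
```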
I'm not explicitly familiar with the call to pygame.surfarray.array2d(). However, since you're going from a binary color layout to a smaller binary color matrix, you can subdivide the original image using your new proportions in order to properly color the resulting square. I'll give an example.
Say your initial image is 14x10 and you wish to have a 7x5 matrix. Your initial image looks like this:
[[0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,1,0,0,0,0,0,0],
[0,0,0,0,0,0,1,1,1,0,0,0,0,0],
[0,0,0,0,0,1,1,0,1,1,0,0,0,0],
[0,0,0,0,1,1,1,1,1,1,1,0,0,0],
[0,0,0,1,1,1,1,1,1,1,1,1,0,0],
[0,0,1,1,0,0,0,0,0,0,0,1,1,0],
[0,1,1,0,0,0,0,0,0,0,0,0,1,1]]
What you need to do is divide x-wise by 7, and y-wise by 5. Since I picked nice numbers, the slices of the large image you'll be looking at will be 2x2. Take the top left 2x2 block, for example:
[[0,0],
[0,0]] -> [0]
This mini-matrix maps to a single pixel of your 7x5 image. Obviously, in this case it will be 0. Let's look at the bottom right:
[[1,0],
[1,1]] -> [1]
This will map to a value of 1 in your 7x5 image. As you can see, the tricky case in this example is when you have equal 1s and 0s. This will not be a huge issue, fortunately, as your initial image is always at least 100x100.
Applying this method to my example, the shrunk 7x5 image looks like this:
[[0,0,0,0,0,0,0],
[0,0,0,0,0,0,0],
[0,0,0,1,1,0,0],
[0,0,1,1,1,1,0],
[0,1,0,0,0,0,1]]
Pseudocode steps:
Find the size of the mini-matrices (divide by 5 and 7). This will work with an image of any size larger than 7x5.
For each mini-matrix, count the black and white spaces (0 and 1).
Decide whether the space in your final 7x5 matrix should be black or white. In my example, the final space is white if (number of white squares >= number of black squares). I'm worried that this will cause problems for you because your pen size is relatively thin compared to the size of your 7x5 divisions. If that's a problem, try something like (number of white squares * 2 >= number of black squares), which effectively weights the white squares more.
I'm happy to elaborate on this pseudocode. Just let me know.
Finally, if you are still having issues, I might try using a size larger than 7x5. It will give you more precision at a cost to your OCR algorithm. Good luck.
I need to search outliers in more or less homogeneous images representing some physical array. The images have a resolution which is much higher than the screen resolution. Thus every pixel on screen originates from a block of image pixels. Is there the possibility to customize the algorithm which calculates the displayed value for such a block? Especially the possibility to either use the lowest or the highest value would be helpful.
Thanks in advance
Scipy provides several such filters. To get a new image (new) whose pixels are the maximum/minimum over a w*w block of an original image (img), you can use:
new = scipy.ndimage.maximum_filter(img, w)
new = scipy.ndimage.minimum_filter(img, w)
scipy.ndimage has several other filters available (the older scipy.ndimage.filters namespace has been removed in recent SciPy versions).
If the standard filters don't fit your requirements, you can roll your own. To get you started here is an example that shows how to get the minimum in each block in the image. This function reduces the size of the full image (img) by a factor of w in each direction. It returns a smaller image (new) in which each pixel is the minimum pixel in a w*w block of pixels from the original image. The function assumes the image is in a numpy array:
import numpy as np

def condense(img, w):
    new = np.zeros((img.shape[0]//w, img.shape[1]//w))
    for i in range(0, img.shape[1]//w):
        col1 = i * w
        new[:, i] = img[:, col1:col1+w].reshape(-1, w*w).min(1)
    return new
If you wanted the maximum, replace min with max.
For the condense function to work well, the size of the full image must be a multiple of w in each direction. The handling of non-square blocks or images that don't divide exactly is left as an exercise for the reader.
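When the image does divide evenly by w in both directions, the whole reduction can also be done with a single reshape instead of the column loop (a sketch equivalent to condense above; use max for the maximum):

```python
import numpy as np

def condense_fast(img, w):
    h, wd = img.shape
    # View the image as (h//w, w, wd//w, w) blocks, then reduce over the block axes
    blocks = img[:h - h % w, :wd - wd % w].reshape(h // w, w, wd // w, w)
    return blocks.min(axis=(1, 3))

img = np.arange(16).reshape(4, 4)
print(condense_fast(img, 2))  # → [[0 2]
                              #    [8 10]]
```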