I have two pictures, one that was the original and another one that I have modified so that it's translated up and left a bit and then rotated 90 degrees (so the shape of the picture is transposed as well).
Now I'd like to determine how many pixels (or any distance unit) the modified picture is translated from the original, as well as the degrees of rotation relative to the original. Phase correlation is supposed to solve this problem by first converting the coordinates to logpolar coordinates, then doing a number of things so that in the end you get a correlation matrix. From that matrix I'm supposed to find the peak and the (x,y) combination will reveal the translation and rotation somehow. This link explains it much better:
Phase correlation
This is the code I have:
import scipy as sp
from scipy import ndimage
from PIL import Image
from math import *
import numpy as np
def logpolar(input,silent=False):
    # This takes a numpy array and returns it in Log-Polar coordinates.
    if not silent: print("Creating log-polar coordinates...")
    # Create a cartesian array which will be used to compute log-polar coordinates.
    coordinates = sp.mgrid[0:max(input.shape)*2,0:360]
    # Compute a normalized logarithmic gradient
    log_r = 10**(coordinates[0,:]/(input.shape[0]*2.)*log10(input.shape[1]))
    # Create a linear gradient going from 0 to 2*Pi
    angle = 2.*pi*(coordinates[1,:]/360.)
    # Using scipy's map_coordinates(), we map the input array on the log-polar
    # coordinate. Do not forget to center the coordinates!
    if not silent: print("Interpolation...")
    lpinput = ndimage.interpolation.map_coordinates(input,
        (log_r*sp.cos(angle)+input.shape[0]/2.,
         log_r*sp.sin(angle)+input.shape[1]/2.),
        order=3,mode='constant')
    # Returning log-normal...
    return lpinput

def load_image( infilename ) :
    img = Image.open( infilename )
    img.load()
    data = np.asarray( img, dtype="int32" )
    return data

def save_image( npdata, outfilename ) :
    img = Image.fromarray( np.asarray( np.clip(npdata,0,255), dtype="uint8"), "L" )
    img.save( outfilename )
image = load_image("C:/images/testing_image1.jpg")
target = load_image("C:/images/testing_otherimage.jpg")
# Conversion to log-polar coordinates
lpimage = logpolar(image)
lptarget = logpolar(target)
# Correlation through FFTs
Fcorr = np.fft.fft(lpimage)*np.fft.fft(lptarget)
correlation = np.fft.ifft(Fcorr)
The problem I have now is that this code gives the following output:
Traceback (most recent call last):
File "./phase.py", line 44, in <module>
lpimage = logpolar(image)
File "./phase.py", line 24, in logpolar
order=3,mode='constant')
File "C:\Python27\lib\site-packages\scipy\ndimage\interpolation.py", line 295, in map_coordinates
raise RuntimeError('invalid shape for coordinate array')
RuntimeError: invalid shape for coordinate array
As I have only a very superficial understanding of what exactly is happening in the whole phase correlation process, I'm unclear on what the problem is. To check whether something was wrong with the input, I added save_image(image,"C:/testing.jpg") right after loading the image to see if there's something wrong with the numpy array made from my images. And sure enough, the images I convert to np arrays cannot be converted back to an image. This is the error I get:
Traceback (most recent call last):
File "./phase.py", line 41, in <module>
save_image(image,"C:/testing.jpg")
File "./phase.py", line 36, in save_image
img = Image.fromarray( np.asarray( np.clip(npdata,0,255), dtype="uint8"), "L" )
File "C:\Python27\lib\site-packages\PIL\Image.py", line 1917, in fromarray
raise ValueError("Too many dimensions.")
ValueError: Too many dimensions.
Taking a peek at the original documentation didn't give me much inspiration on what the problem could be. I don't think the code to convert images to numpy arrays is wrong, as I've tested the type with print type(image) and the results looked legit. Yet I can't convert it back to an image. Any help I can get would be greatly appreciated.
I think the problem is that you are passing a 3D image array (R, G, B, A?) into your function, whereas it only accepts 2D arrays. Try using a single channel to determine the transformation, e.g.
image = load_image("/path/to/image")[:,:,0]
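If you'd rather keep everything two-dimensional from the start, a minimal sketch (assuming PIL is available; load_image_gray is just an illustrative name) is to convert to grayscale at load time, which also keeps save_image() happy since the array stays 2D:

from PIL import Image
import numpy as np

def load_image_gray(infilename):
    # Convert to a single 8-bit channel ("L") so the resulting array is 2D
    img = Image.open(infilename).convert("L")
    return np.asarray(img, dtype="int32")

image = load_image_gray("C:/images/testing_image1.jpg")
target = load_image_gray("C:/images/testing_otherimage.jpg")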
I am trying to use the function zonal_stats from the rasterstats Python package to get the raster statistics from a .tif file for each shape in a .shp file. I managed to do it in QGIS without any problems, but I have to do the same with more than 200 files, which would take a lot of time, so I'm trying the Python way. Both files and replication code are in my Google Drive.
My script is:
import rasterio
import geopandas as gpd
import numpy as np
from rasterio.plot import show
from rasterstats import zonal_stats
from rasterio.transform import Affine
# Import .tif file
raster = rasterio.open(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\Arroz_2019-03.tif')
# Read the raster values
array = raster.read(1)
# Get the affine
affine = raster.transform
# Import shape file
shapefile = gpd.read_file(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\Setores_Censit_SP_WGS84.shp')
# Zonal stats
zs_shapefile = zonal_stats(shapefile, array, affine = affine,
stats=['min', 'max', 'mean', 'median', 'majority'])
I get the following error:
Input In [1] in <cell line: 22>
zs_shapefile = zonal_stats(shapefile, array, affine = affine,
File ~\Anaconda3\lib\site-packages\rasterstats\main.py:32 in zonal_stats
return list(gen_zonal_stats(*args, **kwargs))
File ~\Anaconda3\lib\site-packages\rasterstats\main.py:164 in gen_zonal_stats
rv_array = rasterize_geom(geom, like=fsrc, all_touched=all_touched)
File ~\Anaconda3\lib\site-packages\rasterstats\utils.py:41 in rasterize_geom
rv_array = features.rasterize(
File ~\Anaconda3\lib\site-packages\rasterio\env.py:387 in wrapper
return f(*args, **kwds)
File ~\Anaconda3\lib\site-packages\rasterio\features.py:353 in rasterize
raise ValueError("width and height must be > 0")
I have found this question about the same problem, but I can't make its solution work: I tried to reverse the sign of the items in the Affine of my raster data, but that didn't work either:
''' Trying to use the same solution of question: https://stackoverflow.com/questions/62010050/from-zonal-stats-i-get-this-error-valueerror-width-and-height-must-be-0 '''
old_tif = rasterio.open(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\Arroz_2019-03.tif')
print(old_tif.profile) # copy & paste the output and change signs
new_tif_profile = old_tif.profile
# Affine(0.004611149999999995, 0.0, -46.828504575,
# 0.0, 0.006521380000000008, -24.01169169)
new_tif_profile['transform'] = Affine(0.004611149999999995, 0.0, -46.828504575,
0.0, -0.006521380000000008, 24.01169169)
new_tif_array = old_tif.read(1)
new_tif_array = np.fliplr(np.flip(new_tif_array))
with rasterio.open(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\tentativa.tif', "w", **new_tif_profile) as dest:
dest.write(new_tif_array, indexes=1)
dem = rasterio.open(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\tentativa.tif')
# Read the raster values
array = dem.read(1)
# Get the affine
affine = dem.transform
# Import shape file
shapefile = gpd.read_file(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\Setores_Censit_SP_WGS84.shp')
# Zonal stats
zs_shapefile = zonal_stats(shapefile, array, affine=affine,
stats=['min', 'max', 'mean', 'median', 'majority'])
Done this way, I don't get the "width and height must be > 0" error! But every stat in zs_shapefile is "NoneType", so it doesn't solve my problem.
Does anyone understand why this error happens, and which sign I have to reverse to make it work? Thanks in advance!
I would be careful with overriding the geotransform of your raster like this, unless you are really convinced the original metadata is incorrect. I'm not too familiar with Affine, but it looks like you're now setting the latitude as positive, placing the raster in the northern hemisphere. My guess would be that this lack of intersection between the vector and raster causes the NoneType results.
I'm also not familiar with raster_stats, but I'm guessing it boils down to GDAL & Numpy at the core of it. So something you could try as a test is to add the all_touched=True keyword:
https://pythonhosted.org/rasterstats/manual.html#rasterization-strategy
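For example, a small sketch of the same call with that extra keyword (all_touched is listed as an optional argument in the rasterstats docs):

zs_shapefile = zonal_stats(shapefile, array, affine=affine,
                           stats=['min', 'max', 'mean', 'median', 'majority'],
                           all_touched=True)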
If that works, it might indicate that the rasterization fails because your polygons are so small compared to the pixels that the default rasterization method results in a rasterized polygon of size 0 (in at least one of the dimensions). And that's what the error also hints at (my guess).
Keep in mind that all_touched=True changes the stats you get in result, so I would only do it for testing, or if you're comfortable with this difference.
If you really need a valid value for these (too) small polygons, there are a few workarounds you could try. Something I've done is to simply take the centroid of each of these polygons, and take the value of the pixel that the centroid falls on.
A potential way to identify these polygons would be to use all_touched with the "count" statistic; every polygon with a count of only 1 might be too small to get rasterized correctly. To really find this out you would probably have to do the rasterization yourself using GDAL, given that raster_stats doesn't seem to allow it.
Note that due to the shape of some of the polygons you use, the centroid might fall outside of the polygon. But given how coarse your raster data is, relative to the vector, I don't think it would impact the result all that much.
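A rough sketch of that centroid workaround, reusing the paths from your script (rasterio's sample() returns the band values at each coordinate):

import rasterio
import geopandas as gpd

shapefile = gpd.read_file(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\Setores_Censit_SP_WGS84.shp')
with rasterio.open(r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\Arroz_2019-03.tif') as src:
    # One (x, y) pair per polygon centroid
    coords = [(geom.centroid.x, geom.centroid.y) for geom in shapefile.geometry]
    # sample() yields an array of band values per coordinate; keep band 1
    centroid_values = [vals[0] for vals in src.sample(coords)]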
An alternative is, instead of modifying the vector, to significantly increase the resolution of your raster. You could use gdal_translate to output this to a VRT, with some form of resampling, and avoid having to write this data to disk. Once the resolution is high enough that all polygons rasterize to at least a 1x1 array, it should probably work. But your polygons are tiny compared to the pixels, so it'll be a lot. You could guess it, or analyze the envelopes of all polygons. For example take the smallest edge of the envelope as more or less the resolution that's necessary for a correct rasterization.
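As a rough sketch of that idea using GDAL's Python bindings (the target resolution here is made up; you would derive it from the polygon envelopes):

from osgeo import gdal

# Hypothetical values: resample the raster onto a much finer grid, kept as an in-memory VRT
fine = gdal.Translate('/vsimem/arroz_fine.vrt',
                      r'M:\PUBLIC\Felipe Dias\Pesquisa\Interpolação Espacial\Arroz_2019-03.tif',
                      format='VRT', xRes=0.0001, yRes=0.0001,
                      resampleAlg='bilinear')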
Edit: to clarify the above a bit further.
The default rasterization strategy of GDAL (all_touched=False) is to consider a pixel "within" the polygon if the centroid of the pixel intersects with the polygon.
Using QGIS you can for example convert the pixels to points, and then do a spatial join with your vector. If you remove polygons that can't be joined (there's a checkbox), you'll get a different vector that most likely should work with raster_stats, given your current raster.
You could perhaps use that in the normal way (all_touched=False), and get the stats for the small polygons using all_touched=True.
In the image below, the green polygons are the ones that intersect with the centroid of a pixel, the red ones don't (and those are probably the ones raster_stats "tries" to rasterize to a size 0 array).
I want to create a program that takes a picture of the same area at 2 different times, compares the images, and then creates an entirely new image containing just the difference between the 2 images (what changed). I am using RGB values, and if they are more than 90% different in value I want to add those pixels to the new matrix, which will be mapped.
I am fairly new to the Raspberry Pi and Python, so I ran into an error that I don't understand.
I have tried using both PIL and Numpy, but both methods produce errors that I can't fix.
THIS ISN'T THE ENTIRE CODE, BUT THIS IS THE FUNCTION THAT IS GIVING ME THE ERROR:
from PIL import Image
import numpy as np
import picamera
import time
import RPi.GPIO
from guizero import ...
def processimage():
    before = Image.open('before.jpg')
    after = Image.open('after.jpg')
    beforeRGB = np.array(before)
    afterRGB = np.array(after)
    outputRGB = Image.new('RGB', (800,480))
    x=0
    y=0
    for x in range(800):
        for y in range(480):
            if(((beforeRGB[x,y,0])/afterRGB[x,y,0])<0.9):
                outputRGB[x,y,0] = afterRGB[x,y,0]
            else:
                output[x,y,0] = 255
            if(((beforeRGB[x,y,1])/afterRGB[x,y,1])<0.9):
                outputRGB[x,y,1] = afterRGB[x,y,1]
            else:
                output[x,y,1] = 255
            if(((beforeRGB[x,y,2])/afterRGB[x,y,2])<0.9):
                outputRGB[x,y,2] = afterRGB[x,y,2]
            else:
                output[x,y,2] = 255
            y=y+1
        x=x+1
    Image.fromarray(outputRGB).save('output.jpg')
THIS IS THE ERROR I AM GETTING
Exception in Tkinter callback
Traceback (most recent call last):
File "/usr/lib/python3.5/tkinter/__init__.py", line 1562, in __call__
return self.func(*args)
File "/usr/local/lib/python3.5/dist-packages/guizero/PushButton.py",
line 146, in _command_callback
self._command()
File "/home/pi/ButtonTest/GUI_interface.py", line 70, in mode
lifetime(key)
File "/home/pi/ButtonTest/GUI_interface.py", line 158, in lifetime
processimage()
File "/home/pi/ButtonTest/GUI_interface.py", line 115, in processimage
outputRGB[x,y,0] = afterRGB(x,y,0)
TypeError: 'numpy.ndarray' object is not callable
1) The error message doesn't match the code. The error message is about a different version of the code where () was used accidentally instead of [], see the last code line of the error message.
2) Iterating over individual pixels is very slow when using Python. Please read through a couple of image processing tutorials to get an understanding of vectorization and broadcasting.
For example, the code above could be shortened to something like:
output = np.where(beforeRGB/afterRGB < 0.9, afterRGB, 255)
3) for x in range() already iterates over all x values in the range. There is no need to increment x or y at the end of the loop.
4) The formula chosen for the image difference produces division by zero for pixels where at least one of the channels is 0. Do some research and choose a better metric for image differences.
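For example, one possible alternative (just an illustration, not the only sensible metric) is an absolute per-channel difference, which avoids the division entirely:

import numpy as np
from PIL import Image

before = np.asarray(Image.open('before.jpg'), dtype=np.int16)
after = np.asarray(Image.open('after.jpg'), dtype=np.int16)

diff = np.abs(after - before)              # per-channel absolute difference
changed = diff > 0.9 * 255                 # "more than 90%" of the 0-255 range
output = np.where(changed, after, 255).astype(np.uint8)
Image.fromarray(output).save('output.jpg')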
I am trying to convert an image from cartesian to polar so that I can unravel the image, but I am getting a runtime error. If you are curious how this looks visually, see this example.
Code:
import scipy
import scipy.ndimage
import numpy as np
from math import *
import cv2
def logpolar(input):
    # This takes a numpy array and returns it in Log-Polar coordinates.
    coordinates = np.mgrid[0:max(input.shape[:])*2,0:360] # We create a cartesian array which will be used to compute log-polar coordinates.
    log_r = 10**(coordinates[0,:]/(input.shape[0]*2.)*log10(input.shape[1])) # This contains a normalized logarithmic gradient
    angle = 2.*pi*(coordinates[1,:]/360.) # This is a linear gradient going from 0 to 2*Pi
    # Using scipy's map_coordinates(), we map the input array on the log-polar coordinate. Do not forget to center the coordinates!
    lpinput = scipy.ndimage.interpolation.map_coordinates(input,(log_r*np.cos(angle)+input.shape[0]/2.,log_r*np.sin(angle)+input.shape[1]/2.),order=3,mode='constant')
    # Returning log-normal...
    return lpinput
# Load image
image = cv2.imread("test.jpg")
result = logpolar(image)
Error message in console:
Traceback (most recent call last):
File "test.py", line 23, in <module>
result = logpolar(image)
File "test.py", line 15, in logpolar
lpinput = scipy.ndimage.interpolation.map_coordinates(input,(log_r*np.cos(angle)+input.shape[0]/2.,log_r*np.sin(angle)+input.shape[1]/2.),order=3,mode='constant')
File "/Library/Python/2.7/site-packages/scipy-0.13.0.dev_c31f167_20130415-py2.7-macosx-10.8-intel.egg/scipy/ndimage/interpolation.py", line 295, in map_coordinates
raise RuntimeError('invalid shape for coordinate array')
RuntimeError: invalid shape for coordinate array
My first guess would be that you are passing in a colour image which is 3 dimensional. At first glance I don't think your code could handle that.
My guess was based on the error you pasted, specifically
"invalid shape for coordinate array"
When using higher-dimensional arrays like that, you usually have to pass extra parameters around specifying which axis to repeat the operations over, and even then sometimes it does not work. I didn't see a repeated extra integer at the end of your argument lists, so I figured you weren't trying to handle that case explicitly and might have forgotten to check your array dimensions after reading in the image.
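As a quick check, a minimal sketch that loads the image as a single-channel 2D array before calling logpolar (using OpenCV's grayscale flag):

import cv2

# Read as grayscale so map_coordinates receives a 2D input
image = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)
result = logpolar(image)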
Glad it helped :)
I am a beginner to Python and I am implementing Principal Component Analysis (PCA), but I am having a problem computing the mean.
Here is my code:
import Image
import os
from PIL import Image
from numpy import *
import numpy as np
#import images
dirname = "C:\\Users\\Karim\\Downloads\\att_faces\\New folder"
X = [np.asarray(Image.open(os.path.join(dirname, fn))) for fn in os.listdir(dirname)]
#get number of images and dimensions
path, dirs, files = os.walk(dirname).next()
num_images = len(files)
image_file = "C:\\Users\\Karim\\Downloads\\att_faces\\New folder\\2.pgm"
img = Image.open(image_file)
width, height = img.size
print width
print height
print num_images
M = (X-mean(X.T,axis=1)).T # subtract the mean (along columns)
I get the error:
AttributeError: 'list' object has no attribute 'T'
The problem is X.T in your last line, because X is a Python list, not a numpy.ndarray. It isn't clear what you're trying to do here, but if you want to combine all the image arrays into a single numpy array, you could convert it with X = np.array(X) before the last line.
Also, unless you specifically want to roll your own PCA implementation, you can do this much more easily with numpy by using np.cov (for covariance calculation) and np.linalg.eig (to compute the eigenvalues and eigenvectors of the covariance matrix).
images -= np.mean(images, axis=0)
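Putting those pieces together, a rough sketch (assuming each image is first flattened into one row of a 2D array; note the full pixel-by-pixel covariance matrix gets large for big images):

import numpy as np

X = np.array(X, dtype=float)             # stack the list of image arrays
X = X.reshape(num_images, -1)            # one flattened image per row
X -= np.mean(X, axis=0)                  # subtract the mean image
cov = np.cov(X, rowvar=False)            # covariance between pixel dimensions
eigvals, eigvecs = np.linalg.eig(cov)    # eigenvectors are the principal components

Whether this exact layout matches what you intended with X.T depends on how you want samples vs. features oriented, so treat it as a starting point rather than the definitive implementation.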
I'm trying to make a histogram of some data that is being stored in an ndarray. The histogram is part of a set of analyses which I've made into a class in a Python program. The part of the code that isn't working is below.
def histogram(self, iters):
    samples = T.MCMC(iters) #Returns an [iters,3,4] ndarray
    histAC = plt.figure(self.ip) #plt is matplotlib's pyplot
    self.ip+=1 #defined at the beginning of the class to start at 0
    for l in range(0,4):
        h = histAC.add_subplot(2,(iters+1)/2,l+1)
        for i in range(0,0.5*self.chan_num):
            intAvg = mean(samples[:,i,l])
            print intAvg
            for k in range(0,iters):
                samples[k,i,l]=samples[k,i,l]-intAvg
        print "Samples is ",samples
        h.hist(samples,bins=5000,range=[-6e-9,6e-9],histtype='step')
        h.legend(loc='upper right')
        h.set_title("AC Pulse Integral Histograms: "+str(l))
    figname = 'ACHistograms.png'
    figpath = 'plot'+str(self.ip)
    print "Finished!"
    #plt.savefig(figpath + figname, format = 'png')
This gives me the following error message:
File "johnmcmc.py", line 257, in histogram
h.hist(samples,bins=5000,range=[-6e-9,6e-9],histtype='step') #removed label=apdlabel
File "/x/tsfit/local/lib/python2.6/site-packages/matplotlib/axes.py", line 7238, in hist
ymin = np.amin(m[m!=0]) # filter out the 0 height bins
File "/x/tsfit/local/lib/python2.6/site-packages/numpy/core/fromnumeric.py", line 1829, in amin
return amin(axis, out)
ValueError: zero-size array to ufunc.reduce without identity
The only search results I've found have been multiple copies of the same two conversations. The only thing I learned from them was that Python histograms don't like being fed empty arrays, which is why I added the print statement right above the line that's giving me trouble, to make sure the array isn't empty.
Has anyone else come across this error before?