I have two FITS images at different wavelengths, with different angular resolutions. I want to convolve the higher-resolution image down to match the lower one.
I have tried astropy.convolution.convolve and astropy.convolution.Gaussian2DKernel.
The resolution is 0.184" at 1600 nm and 0.124" at 606 nm. Since Gaussian widths add in quadrature, I think the width of the matching kernel should be sqrt(0.184^2 - 0.124^2) ≈ 0.136". Then I tried the following code:
import os
from astropy.io import fits
from astropy.convolution import Gaussian2DKernel
from astropy.convolution import convolve
kernel = Gaussian2DKernel(x_stddev=0.136)
hdu = fits.open('/Users/lpr/Data/fits/pridata/goodsn_f606/606.fits')[0]
img = hdu.data
astropy_conv = convolve(img,kernel)
hdu.data = astropy_conv
hdu.writeto('/Users/lpr/Data/fits/expdata/CONVOLIMAGE/convolved_606.fits')
print('done')
Of course, that's wrong: the resolution of the higher-resolution image (606 nm) is almost unchanged. Then I realized that I am mixing two different kinds of quantities: one is flux (or electrons/s), the other is a kernel.
Now I don't know how to match the higher-resolution image to the lower one. Thank you for answering my question!
I think the first issue is that the standard deviation of your kernel should be given in pixels, not in arcseconds.
Then you may be interested in two packages that let you compute the matching kernel between two PSFs:
the first one is in photutils: https://photutils.readthedocs.io/en/stable/psf_matching.html
the second one is a dedicated package: https://pypher.readthedocs.io/en/latest/
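For the pixel-unit fix, here is a minimal sketch (the 0.06"/pixel scale below is an assumption for illustration; read the actual scale from the header of your 606 nm image, e.g. from CDELT or the CD matrix):
import numpy as np
from astropy.io import fits
from astropy.convolution import Gaussian2DKernel, convolve
pixel_scale = 0.06  # arcsec/pixel -- assumed value, take it from the FITS header
# Gaussian widths add in quadrature, so the matching kernel FWHM is
fwhm_arcsec = np.sqrt(0.184**2 - 0.124**2)  # ~0.136 arcsec
# convert the FWHM in arcsec to a standard deviation in pixels
sigma_pix = fwhm_arcsec / pixel_scale / (2.0 * np.sqrt(2.0 * np.log(2.0)))
kernel = Gaussian2DKernel(x_stddev=sigma_pix)
hdu = fits.open('606.fits')[0]  # path shortened for the sketch
hdu.data = convolve(hdu.data, kernel)
hdu.writeto('convolved_606.fits', overwrite=True)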
I am having an issue using the morphology functions in OpenCV on xarray data with Python.
I have generated a list of binary xarray DataArrays (I understand that I probably should make these just another dimension of the dataset, but I haven't gotten there yet).
I am trying to run morphological closing on this data. In the code snippet below, I extract the actual data from the DataArray as a numpy array, then try to run the closing on that. My understanding of OpenCV is that when it reads an image in, it essentially translates it into a numpy array anyway, so I thought this might work. I should mention that the data is originally a GeoTIFF, and I am reading it in using rioxarray, if that makes a difference.
Essentially, what seems to be happening is that the data is being shifted by the size of the kernel on each iteration. I have confirmed this by reading the same slice of the array before and after running the closing operation. It also throws an error that "Axis Limits cannot be NaN or Inf". The output numpy array is the same size as the original, but parts of it have been cut off in one corner, and in the other corner a No Data value appears to have been added (the value is -1.798e+308).
I am not sure why it is doing this. When I read an image in using imread, the same process seems to work just fine from what I can tell. I also cannot tell whether the closing operation is even doing what it is supposed to do; at first glance, it only seems to be shifting the data. Code and image below.
import rioxarray as rxr
import xarray as xr
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt  # needed for the plotting at the end
kSize = 15 #Kernel size
iters = 2 #number of iterations
#Binary list is a list of several binary images generated using xarray.where function
binaryCopy = binaryList[0].copy() #Copy one datarray from list of datarrays...prob should just put this all into one xr dataset, but oh well
inAttrs = binaryCopy.attrs.copy() #Copy attributes to read back in at the end
inDims = binaryCopy.dims
inCoords = binaryCopy.coords
kern = cv.getStructuringElement(cv.MORPH_RECT,(kSize,kSize)) #Create square kernel
print(binaryCopy.data.shape) #Print shape of data array (appears to just be a numpy array)
#I had tried this v first, but it did not work, so I tried the steps individually...same issue
#closed = cv.morphologyEx(binaryCopy.data, cv.MORPH_CLOSE, kern)
dilated = cv.dilate(binaryCopy.data, kern, iterations=iters)  # pass iterations by keyword; the third positional argument is dst
closed = cv.erode(dilated, kern, iterations=iters)
newBinaryArray = xr.DataArray(closed,
                              coords=inCoords,
                              dims=inDims,
                              attrs=inAttrs)
plt.rcParams['figure.figsize'] = (15, 8)  # set before creating the figure so it takes effect
fig, ax = plt.subplots(nrows=1, ncols=2, sharey=True)
#Plot the original first
binaryList[0].plot(ax=ax[0])
#Plot the closed data
newBinaryArray.plot(ax=ax[1])
Before (left) and after (right) running morphological closing. Notice the blue bar on the bottom and left of the image on the right. This appears to be a No Data value (-1.798e+308).
OK, so it looks like my issue was the anchor. Setting the argument anchor=(0,0) seems to prevent the shift in the data. (I thought this should be (-1,-1), since other places on the internet seem to indicate that puts the anchor in the middle, but (0,0) appears to work better for me.)
Also, when I added borderType=cv.BORDER_REPLICATE (see here) as an argument to the morphologyEx function (I just did it with that one instead of doing dilation and erosion separately), the extra strip that was inserted when the data shifted was no longer an enormous "No Data" value; it uses the data value at the border instead.
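Putting both fixes together, a minimal sketch of the single morphologyEx call described above (kern and iters as defined in the question):
closed = cv.morphologyEx(binaryCopy.data, cv.MORPH_CLOSE, kern,
                         anchor=(0, 0),                    # prevents the shift
                         iterations=iters,
                         borderType=cv.BORDER_REPLICATE)   # replicates edge values instead of padding with a huge No Data constant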
I would like to voxelise a .stl file and write it into an np.array. The resolution of the voxels should be adjustable.
Here is my code for this:
import open3d as o3d

component_path = r"C:\Users\User\documents\components\Test_1.stl"
mesh = o3d.io.read_triangle_mesh(component_path)
voxel_grid = o3d.geometry.VoxelGrid.create_from_triangle_mesh(mesh, voxel_size = 3)
ply_path = "voxel.ply"
o3d.io.write_voxel_grid(ply_path, voxel_grid, True,True,True)
pcd = o3d.io.read_point_cloud(ply_path)
list_path = "list.xyz"
o3d.io.write_point_cloud(list_path, pcd)
Then I read the coordinate points from the list, write them into a 3D array, and plot them. When plotting, the border is not displayed for certain voxel sizes, as can be seen in the image (although it is present in the original). Is there a solution so that this doesn't happen, regardless of voxel size?
voxelized picture with missing border
In addition, the voxel size changes the maximum dimension, so the component originally has three times the length shown here. How can this be adjusted? (If I just multiply by a factor, the voxels stay small but the distances between them are pulled apart.)
Is there perhaps a more reasonable way to write a voxelisation of a .stl file and put the centers of voxels into an np.array?
If anyone ever has the same problem and is looking for a solution:
This project worked for me: GitHub: stl-to-voxel
The model is then also filled. If the maximum dimension is known, you can determine the exact voxel size from the resolution (roughly voxel_size ≈ max_dimension / resolution, since the resolution is the number of layers along the longest axis).
Here is some code:
import stltovoxel

input_path = r"C:\Users\user\Example.stl"
output_path = r"C:\Users\user\Test.xyz"
resolution = 50  # into how many layers the longest dimension of the model should be divided
stltovoxel.doExport(input_path, output_path, resolution)
Afterwards, you can read the coordinates from the list, write them into an array and process them further (quite normally).
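For instance, a minimal sketch of that read-back step (assuming the .xyz file holds one whitespace-separated "x y z" triple per line, as exported above):
import numpy as np
points = np.loadtxt(r"C:\Users\user\Test.xyz")  # shape (N, 3): one voxel centre per row
print(points.shape)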
I have an image. I want to get the perimeter of every object in my image. For example, in this image, the perimeter of an object is 33 (the number of pixels at its edges).
I have written the following algorithm, but it is very slow.
Does anyone have an idea for increasing the speed of the algorithm?
What I have tried:
def cal_perimeter_object(obj, image):
    peri_ = 0
    for pixel_ in image:
        # pseudocode: test whether this pixel lies on the object's border
        if pixel_is_in_neighbor_of_object(pixel_, obj):
            peri_ += 1
    return peri_
As mentioned in the comment by @Piinthesky, having a boolean (or labelled) image where you know the label of the object you want the contour for is the first step. There are a number of ways of doing this, the simplest of which is thresholding. Once you have your labelled image you can find the perimeter in a number of ways, e.g. the number of pixels along the border. To give you a head start, here is a way to do it on the image you linked. I have used scikit-image, but there are other Python libraries you may use.
# If your python version is not 3.x uncomment line below
#from __future__ import print_function
from skimage.measure import label, regionprops
import skimage.io as io
# read in the image (enter the path where you downloaded it on your computer below)
im = io.imread('/home/kola/Downloads/perimeter.png')
# To simplify things I am only using the first channel and thresholding
# to get a boolean image
bw = im[:,:,0] > 230
# label connected regions so each object gets its own integer id
regions = regionprops(label(bw))
print(regions[0].perimeter)
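If the image contains several objects, each labelled region carries its own measurement, so looping over regions prints every perimeter:
for region in regions:
    print(region.label, region.perimeter)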
I am reading a VTK uniform grid into Python. When I visualize a slice through the data in ParaView, I get the following (correct) image:
Then I visualize the slice via numpy & pylab using the following script:
import vtk
from vtk.util.numpy_support import vtk_to_numpy
import pylab
imr=vtk.vtkXMLImageDataReader()
imr.SetFileName('flow.vti')
imr.Update()
im=imr.GetOutput()
nx,ny,nz=im.GetDimensions()
orig=im.GetOrigin()
extent=im.GetExtent()
spacing=im.GetSpacing()
flowVtk=im.GetPointData().GetArray("|flow|")
flow=vtk_to_numpy(flowVtk).reshape(nx,ny,nz)
# bottom z-slice
flowZ0=flow[:,:,0]
# set extent so that axes units are physical
img=pylab.imshow(flowZ0,extent=[orig[0],orig[0]+extent[1]*spacing[0],orig[1],orig[1]+extent[3]*spacing[1]],cmap=pylab.gray())
img.set_clim(vmin=0,vmax=1000)
pylab.show()
which seems to be out of phase. I tried reordering the dimensions in reshape(...); it did something, but it never showed the data it is actually supposed to show.
Is there something obviously wrong?
EDIT: I also tried reshape((nx,ny,nz),order="F") (Fortran ordering) and now I get a much better image (with the jet colormap for better clarity), which is almost correct, but the data is suspiciously rotated by 90°. Plus, I would like some authoritative explanation of which ordering to use and why (which one is used by VTK internally?).
EDIT2: to get the same view as in ParaView, I had to do pylab.imshow(np.rot90(flowZ0)); not sure why, so the question is still open:
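For reference, a sketch combining the two EDITs, reusing the names from the script above. The premise (consistent with VTK's documented point ordering, but worth verifying on your data) is that vtkImageData stores point data with x varying fastest, i.e. column-major order, so numpy needs order="F"; the rot90 then compensates for imshow drawing axis 0 along the image rows:
import numpy as np
flow = vtk_to_numpy(flowVtk).reshape((nx, ny, nz), order="F")  # x varies fastest in VTK
flowZ0 = np.rot90(flow[:, :, 0])  # match ParaView's display orientation
img = pylab.imshow(flowZ0, extent=[orig[0], orig[0]+extent[1]*spacing[0],
                                   orig[1], orig[1]+extent[3]*spacing[1]],
                   cmap=pylab.gray())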
I have a three dimensional binary image for which I am working on determining the two-point cluster function. The first step to doing this is to define all of the connected regions within the image. I have successfully done so with skimage as:
from skimage.measure import label
from skimage.morphology import remove_small_objects
min = 64
label_file = label(newim_int, return_num=True, connectivity=2)
image_clean = remove_small_objects(label_file[0], min_size=min,connectivity=2, in_place=True)
label_file = label(image_clean, return_num=True, connectivity=2)
(with a bunch of stuff missing between min= and label_file= to read in the file, etc.)
I would now like to know the distribution of sizes of the regions labelled here. Unfortunately, skimage.measure.regionprops tells me it only works for 2D images. Is there another way to do this?
Thanks
regionprops has already started being expanded to include 3D images; at present some of the properties raise a NotImplementedError.
Fortunately, area already works for 3D, so you should be able to use this property.
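A minimal sketch of that size distribution, reusing label_file from the question (for a 3D region, area is simply the voxel count):
from skimage.measure import regionprops
# one entry per connected region: its size in voxels
sizes = [region.area for region in regionprops(label_file[0])]
print(sorted(sizes, reverse=True))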