I am trying to use python svgwrite to make an object scale and rotate at the same time. My attempt so far has been to add two consecutive "animateTransform" elements. However, it seems that only the last one is taken into account, as seen in my example.
import svgwrite
from IPython.display import display, SVG

path = [(100,100),(100,200),(200,200),(200,100)]
image = svgwrite.Drawing('test.svg', size=(300,300))
rectangle = image.add(image.polygon(path, id='polygon', stroke="black", fill="white"))
rectangle.add(image.animateTransform("rotate", "transform", id="polygon", from_="0 150 150", to="360 150 150", dur="4s", begin="0s", repeatCount="indefinite"))
rectangle.add(image.animateTransform("scale", "transform", id="polygon", from_="0", to="1", dur="4s", begin="0s", repeatCount="indefinite"))
image.save()
display(SVG('test.svg'))
Can anyone help?
This may come too late, but what worked for me was adding additive="sum" to both animations. Be aware that the order in which you add the animations affects the end result.
import svgwrite
from IPython.display import display, SVG

path = [(100,100),(100,200),(200,200),(200,100)]
image = svgwrite.Drawing('test.svg', size=(300,300))
rectangle = image.add(image.polygon(path, id='polygon', stroke="black", fill="white"))
# additive="sum" makes the two animations combine instead of overriding each other
rectangle.add(image.animateTransform("scale", "transform", id="polygon", from_="0", to="1", dur="4s", begin="0s", repeatCount="indefinite", additive="sum"))
rectangle.add(image.animateTransform("rotate", "transform", id="polygon", from_="0 150 150", to="360 150 150", dur="4s", begin="0s", additive="sum", repeatCount="indefinite"))
image.save()
display(SVG('test.svg'))
I'm using Vedo in Python to visualize some 3D scans of indoor locations.
I would like to, e.g., add a 'camera' at (0,0,0), look 90 degrees to the left (or wherever), and see the camera's output.
Can this be done with Vedo? If not, is there a different python programming framework where I can open .obj files and add a camera and view through it programmatically?
I usually use this scheme:
...
plt = Plotter(bg='bb', interactive=False)
camera = plt.camera
plt.show(actors, axes=4, viewup='y')
for i in range(360):
    camera.Azimuth(1)  # rotate the camera one degree about its azimuth
    camera.Roll(-1)    # and roll it one degree the other way
    plt.render()
...
plt.interactive().close()
Good Luck
You can plot the same object in an embedded renderer and control its behaviour via a simple callback function:
from vedo import *
settings.immediateRendering = False # can be faster for multi-renderers
# (0,0) is the bottom-left corner of the window, (1,1) the top-right
# the order in the list defines the priority when overlapping
custom_shape = [
    dict(bottomleft=(0.00,0.00), topright=(1.00,1.00), bg='wheat', bg2='w' ),  # ren0
    dict(bottomleft=(0.01,0.01), topright=(0.15,0.30), bg='blue3', bg2='lb'),  # ren1
]
plt = Plotter(shape=custom_shape, size=(1600,800), sharecam=False)
s = ParametricShape(0) # whatever object to be shown
plt.show(s, 'Renderer0', at=0)
plt.show(s, 'Renderer1', at=1)
def update(event):
    cam = plt.renderers[1].GetActiveCamera()  # vtkCamera of renderer1
    cam.Azimuth(1)                            # add one degree in azimuth
plt.addCallback("Interaction", update)
interactive()
Check out a related example here.
Check out the vtkCamera object methods here.
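To address the original goal of placing a camera at (0,0,0) and turning it 90 degrees to the left, here is a minimal, untested sketch of mine using the underlying vtkCamera; it assumes a Plotter named plt is already showing the scene and that "left" means the -x direction:

cam = plt.renderers[0].GetActiveCamera()  # the vtkCamera of the main renderer
cam.SetPosition(0, 0, 0)                  # put the camera at the origin
cam.SetFocalPoint(-1, 0, 0)               # aim it along -x ("left")
cam.SetViewUp(0, 1, 0)                    # keep +y pointing up
plt.render()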
I have some fairly large (~150 MB) 3-channel 3D images I'm trying to process using python-simpleitk. I need to determine whether objects in the red channel overlap with objects in the green channel, and determine their distance from an object in the blue channel.
I haven't found anything about colocalization in the SimpleITK documentation, so I've been trying to do this with numpy by extracting coordinates and counting how many voxels overlap. I haven't yet found any method for edge-to-edge distance measurement anywhere.
However, as expected, the numpy version takes quite a while, and I'd rather use SimpleITK for this (I've also looked into regular ITK, but it causes problems converting to ndarrays).
I'm wondering if anyone has had any luck performing this type of image processing using these tools. Or can recommend improvements.
Here is my code so far.
import numpy as np
import SimpleITK as sitk

class ChannelImage(object):
    def __init__(self, image: np.ndarray, metadata: dict):
        self.object_map = None
        self.image = sitk.GetImageFromArray(image)
        self.metadata = metadata
        self.channel_ID = metadata['Color']
        # threshold hardcoded for now.
        if self.channel_ID == "Blue":
            self.threshold = 20000
        else:
            self.threshold = 10000
        del self.metadata['ID']
        del self.metadata['Color']

    def get_coords(self):
        # label connected regions above the threshold
        cc = sitk.ConnectedComponent(self.image > self.threshold)
        self.object_map = sitk.GetArrayFromImage(cc)
        stats = sitk.LabelIntensityStatisticsImageFilter()
        stats.Execute(cc, self.image)
        labels = stats.GetLabels()
        print(f"Getting coordinates for {self.channel_ID}")
        self.coords = {label: np.where(self.object_map == label) for label in labels}
SimpleITK does not currently have a method to directly get the coordinates of a label. I would recommend opening a feature request for SimpleITK.
In the meantime, you can improve the efficiency of np.where by cropping the ndarray based on the sitk.LabelIntensityStatisticsImageFilter.GetBoundingBox() method, as sketched below.
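A minimal sketch of that idea, assuming the ChannelImage class from the question (note that SimpleITK bounding boxes come back as (x, y, z, size_x, size_y, size_z), while arrays from GetArrayFromImage are indexed (z, y, x)):

# hypothetical replacement for the dict comprehension in get_coords:
# crop each label to its bounding box before calling np.where
coords = {}
for label in labels:
    x0, y0, z0, sx, sy, sz = stats.GetBoundingBox(label)
    sub = self.object_map[z0:z0 + sz, y0:y0 + sy, x0:x0 + sx]
    zz, yy, xx = np.where(sub == label)
    coords[label] = (zz + z0, yy + y0, xx + x0)  # shift back to full-image indices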
I've got a high-resolution healpix map (nside = 4096) that I want to smooth in disks of a given radius, let's say 10 arcmin.
Being very new to healpy, and having read the documentation, I found that one (not so good) way to do this is to perform a "cone search": for each pixel, find the pixels inside the disk around it, average them, and assign this new value to the central pixel. However, this is very time-consuming.
import numpy as np
import healpy as hp

kappa = hp.read_map("zs_1.0334.fits")  # reading my file
NSIDE = 4096
t = 0.00290888  # 10 arcmin in radians
new_array = []
n = len(kappa)
for i in range(n):
    a = hp.query_disc(NSIDE, hp.pix2vec(NSIDE, i), t)
    new_array.append(np.mean(kappa[a]))
I think the healpy.sphtfunc.smoothing function could be of some help, as it states that you can enter any custom beam window function, but I don't understand how this works at all...
Thanks a lot for your help!
As suggested, I can easily make use of the healpy.sphtfunc.smoothing function by specifying a custom (circular) beam window.
To compute the beam window, which was my problem, healpy.sphtfunc.beam2bl is very useful and simple in the case of a top-hat.
The appropriate l_max would roughly be 2*nside, but it can be smaller depending on the specific map. One could, for example, compute the angular power spectrum (the Cls) and check whether it damps away at an l smaller than l_max, which could help gain some more time; a sketch of this check follows.
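A minimal sketch of that check, assuming kappa is the map read in the question; the tolerance below is an arbitrary choice of mine, not a standard value:

import healpy as hp
import numpy as np

NSIDE = 4096
cl = hp.anafast(kappa, lmax=2 * NSIDE)       # angular power spectrum of the map
tol = 1e-12 * cl.max()                       # hypothetical "effectively zero" cutoff
l_max_eff = int(np.max(np.where(cl > tol)))  # largest l with non-negligible power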
Thanks a lot to everyone who helped in the comments section!
Since I spent a certain amount of time trying to figure out how the smoothing function works, here is a bit of code that lets you do a top-hat smoothing.
Cheers,
import healpy as hp
import numpy as np
import matplotlib.pyplot as plt

def top_hat(b, radius):
    return np.where(abs(b) <= radius, 1, 0)

nside = 128
npix = hp.nside2npix(nside)

# create an empty map
tst_map = np.zeros(npix)

# put a source in the middle of the map with value = 100
pix = hp.ang2pix(nside, np.pi/2, 0)
tst_map[pix] = 100

# compute the window function in harmonic (spherical) space which will smooth the map
b = np.linspace(0, np.pi, 10000)
bw = top_hat(b, np.radians(45))  # top-hat function of radius 45°
beam = hp.sphtfunc.beam2bl(bw, b, nside * 3)

# smooth the map
tst_map_smoothed = hp.smoothing(tst_map, beam_window=beam)

hp.mollview(tst_map_smoothed)
plt.show()
I would like to change the window level of my DICOM images from a lung window to a chest window. I know the values needed for the window leveling, but how do I implement it in Python? Alternatively, a detailed description of this process would be highly appreciated.
I have already implemented this in Python. Take a look at the function GetImage in dicomparser module in the dicompyler-core library.
Essentially it follows what kritzel_sw suggests; a sketch of the standard transform is shown below.
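For reference, here is a minimal sketch of the standard linear window/level transform, assuming the pixel data are already in Hounsfield units; hu_image is a placeholder for your HU array, and the center/width values in the comments are commonly quoted presets, not values from the question:

import numpy as np

def apply_window(hu, center, width):
    # clip to [center - width/2, center + width/2], then rescale to 0..255
    lower = center - width / 2.0
    upper = center + width / 2.0
    windowed = np.clip(hu, lower, upper)
    return ((windowed - lower) / (upper - lower) * 255.0).astype(np.uint8)

lung  = apply_window(hu_image, center=-600, width=1500)  # common lung preset
chest = apply_window(hu_image, center=50,   width=350)   # common mediastinal preset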
The following open-source code can implement a bone window.
import numpy

def get_pixels_hu(slices):
    # stack the per-slice pixel arrays into one volume
    image = numpy.stack([s.pixel_array for s in slices])
    image = image.astype(numpy.int16)
    image[image == -2000] = 0  # padding value outside the scan circle
    # convert to Hounsfield units using each slice's rescale slope/intercept
    for slice_number in range(len(slices)):
        intercept = slices[slice_number].RescaleIntercept
        slope = slices[slice_number].RescaleSlope
        if slope != 1:
            image[slice_number] = slope * image[slice_number].astype(numpy.float64)
            image[slice_number] = image[slice_number].astype(numpy.int16)
        image[slice_number] += numpy.int16(intercept)
    return numpy.array(image, dtype=numpy.int16)
I then added the following line
image[slice_number] = image[slice_number]*3.5 + mean2*0.1
after
image[slice_number] += numpy.int16(intercept)
which changes the bone window to a brain-tissue window.
The key is the choice of the parameters 3.5 and 0.1; I found by trial and error that these two values are suitable for a brain-tissue window. Maybe you can adjust them for a chest window.
import scipy as sp
import scipy.misc
import matplotlib.pyplot as plt

lena = sp.misc.lena()
plt.imshow(lena)
What I'd like is then to add a bar indicating distance, i.e., suppose this was an actual image captured with a camera and I knew that each pixel corresponds to 1 cm. I would want to add a bar that is 10 x 100 pixels, with some text above it that says 1 m. Is there a simple way to do this?
Thank you!
In the example below I made a simple solution to your problem. It should not be too hard to extend this to cover a more general case. The hardest thing to get right here is pos_tuple.
Since pos_tuple represents the upper-left corner of the Rectangle, you have to subtract the length of the bar itself and then still leave some padding; otherwise the bar will be plotted at the very edge of the graph and look ugly. So a more general pos_tuple would look something like
pos_tuple = (np.shape(lena)[0] - m2pix(1) - padding_right,
             np.shape(lena)[1] - m2pix(0.1) - padding_bottom)
This whole thing could also be adapted into a neat function add_image_scale that would take in your figure and return a figure with the scale "glued" on. m2pix could also be generalized to receive a scale instead of hardcoding it.
import scipy as sp
import scipy.misc
import numpy as np
import matplotlib.pyplot as plt

lena = sp.misc.lena()

def m2pix(m):  # it takes 100 pixels to make a meter
    return 100 * m

# anchor the bar near the corner, leaving a little padding from the edges
pos_tuple = (np.shape(lena)[0] - 100 - 12, np.shape(lena)[1] - 10 - 2)
rect = plt.Rectangle(pos_tuple, m2pix(1), m2pix(0.1))

plt.imshow(lena)
plt.gca().add_patch(rect)
plt.show()
As far as adding text goes, you can use annotations or text, which are both very easy to use; a small example follows.
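For example, a minimal sketch of labelling the bar, reusing pos_tuple and m2pix from the snippet above (the 5-pixel offset is an arbitrary choice for spacing); place it before plt.show():

x0, y0 = pos_tuple
plt.text(x0 + m2pix(1) / 2.0, y0 - 5, "1 m",
         color="white", ha="center", va="bottom")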