Window-Leveling in Python

I would like to change the window level of my DICOM images from a lung window to a chest window. I know the values needed for the window-leveling, but how do I implement it in Python? Alternatively, a detailed description of this process would be highly appreciated.

I have already implemented this in Python. Take a look at the GetImage function in the dicomparser module of the dicompyler-core library.
Essentially it follows what kritzel_sw suggests.

The following open-source code implements a bone window:
import numpy

def get_pixels_hu(slices):
    image = numpy.stack([s.pixel_array for s in slices])
    image = image.astype(numpy.int16)
    image[image == -2000] = 0  # pixels outside the scan circle
    for slice_number in range(len(slices)):
        intercept = slices[slice_number].RescaleIntercept
        slope = slices[slice_number].RescaleSlope
        if slope != 1:
            image[slice_number] = slope * image[slice_number].astype(numpy.float64)
            image[slice_number] = image[slice_number].astype(numpy.int16)
        image[slice_number] += numpy.int16(intercept)
    return numpy.array(image, dtype=numpy.int16)
And I added the following line
image[slice_number] = image[slice_number]*3.5 + mean2*0.1
after
image[slice_number] += numpy.int16(intercept)
to change the bone window to a brain-tissue window.
The point is the choice of the parameters 3.5 and 0.1. I found these two values suitable for a brain-tissue window by trial and error; maybe you can adjust them for a chest window.
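For a standard window/level transform, a common approach is to clip the Hounsfield values to the window range and linearly rescale to 8-bit grey levels. Here is a minimal sketch (the function name is mine, and the example center/width values are only commonly cited ballpark figures: roughly center -600 / width 1500 for a lung window and center 40 / width 400 for a mediastinal, i.e. soft-tissue chest, window):

import numpy

def apply_window(hu_image, center, width):
    # clip HU values to [center - width/2, center + width/2] ...
    lower = center - width / 2.0
    upper = center + width / 2.0
    windowed = numpy.clip(hu_image, lower, upper)
    # ... and rescale the clipped range linearly to 0-255
    return ((windowed - lower) / (upper - lower) * 255.0).astype(numpy.uint8)

# e.g. a chest (mediastinal) window on the HU volume from get_pixels_hu:
# chest = apply_window(hu_volume, center=40, width=400)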


PySimpleGUI: need help planning an intricate measurements calculator

I'm working on a program that should figure out the dimensions of the individual pieces in kitchen cabinet modules. You only set the height, depth, and width of the material (18 mm), then select modules from a list, and after setting the dimensions of each you should be presented with a list of pieces and their dimensions.
Since all of this is somewhat standardized, the individual pieces' dimensions are figured out by simple math, but each module consists of its own set of operations, which should be run once and display the results in the interface (eventually I'll figure out how to write it to an Excel-compatible format).
As you can see here, it can get complex. I can work it out over time, no problem, but right now I'm not sure PySimpleGUI is what I need.
import PySimpleGUI as sg

layout1 = [[sg.Text('Altura', size=(10,1)), sg.Input('', key='Alt')],            # height
           [sg.Text('Densidad Placa', size=(10,1)), sg.Input('', key='Placa')],  # material's density
           [sg.Text('Profundidad', size=(10,1)), sg.Input('', key='Prof')]]      # depth

# note: key 'Prof2' is reused below, which PySimpleGUI warns about
layout2 = [[sg.Text('Ancho Modulo', size=(10,1)), sg.Input('', key='WM')],       # module's width
           [sg.Text('lateral', size=(10,1)), sg.Text('', key='Lat'), sg.Text('x'), sg.Text('', key='Prof2')],  # side pieces
           [sg.Text('Piso', size=(10,1)), sg.Text('', key='WM2'), sg.Text('x'), sg.Text('', key='Prof2')],    # bottom piece
           [sg.Button('Go')]]

# define layout with tabs
tabgrp = [[sg.TabGroup([[sg.Tab('1', layout1),
                         sg.Tab('2', layout2)]])]]
window = sg.Window("Tabs", tabgrp)

# read values entered by user
while True:
    event, values = window.read()
    if event in (sg.WINDOW_CLOSED, 'Close'):
        break
    elif event == 'Go':
        anc = values['WM']
        altura = values['Alt']
        placa = values['Placa']
        prof = values['Prof']
        try:
            v = int(anc)     # width
            w = int(prof)    # depth
            x = int(altura)  # height
            y = int(placa)   # material's density
            altlat = str(x - y)  # height of side pieces
            prof2 = int(w - y)   # depth of pieces (total depth including the door)
            ancm = int(v)        # width
        except ValueError:
            altlat = "error"
            prof2 = "error"
            ancm = "error"
        window['Lat'].update(value=altlat)
        window['Prof2'].update(value=prof2)
        window['WM2'].update(value=ancm)
        window.refresh()
# access all the values and if selected add them to a string
window.close()
I figured I'd use functions for every set of operations and call them as I need them, but keys can't be reused and every tutorial I've seen points towards them, and other implementations I tried failed. I've been using Python since last night, so I'm not sure how many options I have, nor how limited my options will be with PySimpleGUI's toolset.
I think what you are asking is: how can I make a function that will take the values and run an operation on them? This seems to be more of a general Python question than one about PySimpleGUI, but here is a quick answer.
def calc_side_panel_height(altura, placa):
    x = int(altura)  # height
    y = int(placa)   # material's density
    return x - y     # height of side pieces

try:
    height_of_side = calc_side_panel_height(altura, placa)
    # use the height here
    altlat = str(height_of_side)
except ValueError:
    altlat = "error"
Does that start to make sense? You would call functions and in those functions do the calculations, so you don't have to rewrite the code.
more info: https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Building_blocks/Functions
Let me know if I'm confused!
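Building on that idea, one way to avoid reusing keys is to keep the layout keys fixed and move all the per-module math into plain functions looked up by module name. A minimal sketch, assuming hypothetical module names and formulas (calc_base_module and MODULE_CALCS are mine, not from the question):

def calc_base_module(width, height, depth, thickness):
    # each module type gets one function that returns its piece dimensions
    return {
        'side': (height - thickness, depth - thickness),  # side pieces
        'bottom': (width, depth - thickness),             # bottom piece
    }

# one entry per module type; selecting a module picks the right calculator
MODULE_CALCS = {'base': calc_base_module}

pieces = MODULE_CALCS['base'](600, 720, 560, 18)
print(pieces)  # {'side': (702, 542), 'bottom': (600, 542)}

The GUI event loop then only needs one set of input keys; the module chosen by the user decides which function runs.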

Python svgwrite and simultaneous animationTransforms

I am trying to use Python svgwrite to make an object scale and rotate at the same time. My attempt so far has been to add two consecutive "animateTransform" elements. However, it seems to take only the last one into account, as seen in my example.
import svgwrite
from IPython.display import SVG, display  # needed for display(SVG(...))

path = [(100,100), (100,200), (200,200), (200,100)]
image = svgwrite.Drawing('test.svg', size=(300,300))
rectangle = image.add(image.polygon(path, id='polygon', stroke="black", fill="white"))
rectangle.add(image.animateTransform("rotate", "transform", id="polygon", from_="0 150 150", to="360 150 150", dur="4s", begin="0s", repeatCount="indefinite"))
rectangle.add(image.animateTransform("scale", "transform", id="polygon", from_="0", to="1", dur="4s", begin="0s", repeatCount="indefinite"))
image.save()
display(SVG('test.svg'))
Can anyone help?
This may come too late, but what worked for me is adding additive="sum" to both animations. Be aware that the order in which you add the animations affects the end result.
import svgwrite
from IPython.display import SVG, display  # needed for display(SVG(...))

path = [(100,100), (100,200), (200,200), (200,100)]
image = svgwrite.Drawing('test.svg', size=(300,300))
rectangle = image.add(image.polygon(path, id='polygon', stroke="black", fill="white"))
rectangle.add(image.animateTransform("scale", "transform", id="polygon", from_="0", to="1", dur="4s", begin="0s", repeatCount="indefinite", additive="sum"))
rectangle.add(image.animateTransform("rotate", "transform", id="polygon", from_="0 150 150", to="360 150 150", dur="4s", begin="0s", additive="sum", repeatCount="indefinite"))
image.save()
display(SVG('test.svg'))

Indentation error with print function

Normally I do all my programming in Java, but I have a bit of Python code I'm converting to Java.
What I'm not getting is the indentation style, which is cool because that's the way things are done in Python. What I need to do, though, is just add a couple of print() calls to the code to make sure I'm getting the correct result.
For example
def relImgCoords2ImgPlaneCoords(self, pt, imageWidth, imageHeight):
    ratio = imageWidth / float(imageHeight)
    sw = ratio
    sh = 1
    return [sw * (pt[0] - 0.5), sh * (pt[1] - 0.5)]
It doesn't seem to matter how the print call is indented; it shows an indentation error. What's the trick?
Or:
else:
    if scn.optical_center_type == 'camdata':
        # get the principal point location from camera data
        P = [x for x in activeSpace.clip.tracking.camera.principal]
        # print("camera data optical center", P[:])
        P[0] /= imageWidth
        P[1] /= imageHeight
        # print("normlz. optical center", P[:])
        P = self.relImgCoords2ImgPlaneCoords(P, imageWidth, imageHeight)
    elif scn.optical_center_type == 'compute':
        if len(vpLineSets) < 3:
            self.report({'ERROR'}, "A third grease pencil layer is needed to compute the optical center.")
            return {'CANCELLED'}
        # compute the principal point using a vanishing point from a third gp layer.
        # this computation does not rely on the order of the line sets
        vps = [self.computeIntersectionPointForLineSegments(vpLineSets[i]) for i in range(len(vpLineSets))]
        vps = [self.relImgCoords2ImgPlaneCoords(vps[i], imageWidth, imageHeight) for i in range(len(vps))]
        P = self.computeTriangleOrthocenter(vps)
    else:
        # assume optical center is at the image midpoint
        pass
I want to see what the values of the vps variables are in the elif part of the code block. Where do I put the print call?
There does not seem to be any issue with where you are placing the commented-out print statements.
The most likely cause is that you are mixing tabs with spaces, causing the indentation errors.
Try using the reindent.py script in the Tools/scripts directory of your Python installation.
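A minimal, self-contained demonstration of the symptom (the source string below is illustrative): a line indented with a tab inside a block indented with spaces raises TabError no matter how aligned it looks in the editor.

source = (
    "def f():\n"
    "    x = 1\n"   # body line indented with four spaces
    "\tprint(x)\n"  # next line indented with a tab
)
try:
    compile(source, "<demo>", "exec")
except TabError as exc:
    print("caught:", exc)  # inconsistent use of tabs and spaces in indentation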

How does the Richardson–Lucy algorithm work? Code example?

I am trying to figure out how deconvolution works. I understand the idea behind it, but I want to understand some of the actual algorithms which implement it - algorithms which take as input a blurred image with its point spread function (blur kernel) and produce as output the latent image.
So far I found the Richardson–Lucy algorithm, where the math does not seem to be that difficult; however, I can't figure out how the actual algorithm works. Wikipedia says:
This leads to an equation for $\hat{u}^{(t+1)}$ which can be solved iteratively according to

$$\hat{u}^{(t+1)} = \hat{u}^{(t)} \cdot \left( \frac{d}{\hat{u}^{(t)} \otimes p} \otimes \hat{p} \right)$$

where $d$ is the observed (blurred) image, $p$ the point spread function, and $\hat{p}$ the flipped point spread function.
However, it does not show the actual loop. Can anyone point me to a resource where the actual algorithm is explained? On Google I only managed to find methods which use Richardson–Lucy as one of their steps, but not the actual Richardson–Lucy algorithm itself.
An algorithm in any language or pseudo-code would be nice; if one is available in Python, that would be amazing.
Thanks in advance.
Edit
Essentially what I want to figure out is: given the blurred image (n x m):
x00 x01 x02 x03 .. x0n
x10 x11 x12 x13 .. x1n
...
xm0 xm1 xm2 xm3 .. xmn
and the kernel (ixj) which was used in order to get the blurred image:
p00 p01 p02 .. p0i
p10 p11 p12 .. p1i
...
pj0 pj1 pj2 .. pji
what are the exact steps in the Richardson–Lucy algorithm to recover the original image?
Here is a very simple Matlab implementation:
function result = RL_deconv(image, PSF, iterations)
    % to utilise the conv2 function we must make sure the inputs are double
    image = double(image);
    PSF = double(PSF);
    latent_est = image; % initial estimate, or 0.5*ones(size(image));
    PSF_HAT = PSF(end:-1:1,end:-1:1); % spatially reversed psf
    % iterate towards ML estimate for the latent image
    for i = 1:iterations
        est_conv = conv2(latent_est, PSF, 'same');
        relative_blur = image ./ est_conv;
        error_est = conv2(relative_blur, PSF_HAT, 'same');
        latent_est = latent_est .* error_est;
    end
    result = latent_est;
end

% example usage (as a separate script):
original = im2double(imread('lena256.png'));
figure; imshow(original); title('Original Image')
hsize = [9 9]; sigma = 1;
PSF = fspecial('gaussian', hsize, sigma);
blr = imfilter(original, PSF);
figure; imshow(blr); title('Blurred Image')
res_RL = RL_deconv(blr, PSF, 1000);
figure; imshow(res_RL); title('Recovered Image')
You can also work in the frequency domain instead of the spatial domain as above. In that case the code would be:
function result = RL_deconv(image, PSF, iterations)
    fn = image; % initial estimate at the first iteration
    OTF = psf2otf(PSF, size(image));
    for i = 1:iterations
        ffn = fft2(fn);
        Hfn = OTF .* ffn;
        iHfn = ifft2(Hfn);
        ratio = image ./ iHfn;
        iratio = fft2(ratio);
        % conj(OTF) is the frequency-domain counterpart of the spatially
        % reversed PSF; for a symmetric PSF (e.g. a Gaussian) the OTF is
        % real and the conj() makes no difference
        res = conj(OTF) .* iratio;
        ires = ifft2(res);
        fn = ires .* fn;
    end
    result = abs(fn);
end
The only thing I don't quite understand is how this spatial reversal of the PSF works and what it's for. If anyone could explain that for me, that would be cool! I'm also looking for a simple Matlab R-L implementation for spatially variant PSFs (i.e. spatially nonhomogeneous point spread functions) - if anyone has one, please let me know!
To get rid of the artefacts at the edges you could mirror the input image at the edges and then crop away the mirrored bits afterwards or use Matlab's image = edgetaper(image, PSF) before you call RL_deconv.
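For reference, a small Python/NumPy sketch of that mirror-and-crop idea (the helper name and the pad width derived from the PSF size are assumptions of mine):

import numpy as np

def deconv_with_mirrored_edges(blurred, psf, deconv, pad=None):
    # mirror-pad the image, run any deconvolution routine, then crop,
    # so that ringing artefacts stay in the (discarded) padded border
    if pad is None:
        pad = max(psf.shape)  # assumption: pad by roughly one PSF extent
    padded = np.pad(blurred, pad, mode='reflect')
    restored = deconv(padded, psf)
    return restored[pad:-pad, pad:-pad]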
The native Matlab implementation deconvlucy.m is a bit more complicated btw - the source code of that one can be found here and uses an accelerated version of the basic algorithm.
The equation on Wikipedia gives a function for iteration t+1 in terms of iteration t. You can implement this type of iterative algorithm in the following way:
def iter_step(prev):
    updated_value = <function from Wikipedia>
    return updated_value

def iterate(initial_guess):
    cur = initial_guess
    while True:
        prev, cur = cur, iter_step(cur)
        if difference(prev, cur) <= tolerance:
            break
    return cur
Of course, you will have to implement your own difference function that is correct for whatever type of data you are working with. You also need to handle the case where convergence is never reached (e.g. limit the number of iterations).
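As a concrete sketch of that loop in Python, here is the same multiplicative update the Matlab code above uses, written with scipy's 2-D convolution (the function name, the fixed iteration count, and the eps guard against division by zero are my choices):

import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(blurred, psf, iterations=50, eps=1e-12):
    estimate = np.full(blurred.shape, 0.5)  # flat initial estimate
    psf_hat = psf[::-1, ::-1]               # spatially reversed PSF
    for _ in range(iterations):
        est_conv = convolve2d(estimate, psf, mode='same')
        relative_blur = blurred / (est_conv + eps)  # eps avoids division by zero
        estimate *= convolve2d(relative_blur, psf_hat, mode='same')
    return estimate

scikit-image also ships a ready-made implementation (skimage.restoration.richardson_lucy) if you'd rather not roll your own.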
Here's an open source Python implementation:
http://code.google.com/p/iocbio/wiki/IOCBioMicroscope
If it helps, here is an implementation I wrote that includes some documentation:
https://github.com/bnorthan/projects/blob/master/truenorthJ/ImageJ2Plugins/functions/src/main/java/com/truenorth/functions/fft/filters/RichardsonLucyFilter.java
Richardson–Lucy is a building block for many other deconvolution algorithms. For example, the iocbio example above modifies the algorithm to better deal with noise.
It is a relatively simple algorithm (as these things go) and a starting point for more complicated algorithms, so you can find many different implementations.

Maintaining view/scroll position in QGraphicsView when swapping images

I'm having trouble with zooming TIFF images loaded into a QGraphicsView with QGraphicsPixmapItem.
The problem is more about maintaining image quality along with a zoom speed that doesn't make the application choke. To begin with, I was just replacing the image with a scaled QPixmap - I used Qt.FastTransformation while the user was holding down a horizontal slider, and when the slider was released I replaced the pixmap again with another scaled pixmap using Qt.SmoothTransformation. This gave a nice quality zoomed image, but the zooming was jerky once the image grew larger than its original size; zooming out of the image was fine.
Using QTransform.fromScale() on the QGraphicsView gives much smoother zooming but a lower quality image, even when applying .setRenderHints(QPainter.Antialiasing | QPainter.SmoothPixmapTransform | QPainter.HighQualityAntialiasing) to the QGraphicsView.
My latest approach is to combine the two methods and use a QTransform on the QGraphicsView for the smooth zooming, but when the user releases the slider, replace the image in the QGraphicsView with a scaled pixmap. This works great, but the position in the view is lost - the user zooms into one area, and because the scaled pixmap is larger, the view jumps to another location when the slider is released and the higher-quality scaled pixmap replaces the previous image.
I figured that since the width-to-height ratio is the same in both images, I could take the scrollbar percentages before the image swap and apply the same percentages after the swap, and things should work out fine. This works well mostly, but there are still times when the view 'jumps' after swapping the image.
I'm pretty sure I'm doing something quite wrong here. Does anybody know of a better way to do this, or can anyone spot something in the code below that could cause this jumping?
This is the code to save/restore the scrollbar location. They are methods of a subclassed QGraphicsView:
def store_scrollbar_position(self):
    x_max = self.horizontalScrollBar().maximum()
    if x_max:
        x = self.horizontalScrollBar().sliderPosition()
        self.scroll_x_percentage = x * (100 / float(x_max))
    y_max = self.verticalScrollBar().maximum()
    if y_max:
        y = self.verticalScrollBar().sliderPosition()
        self.scroll_y_percentage = y * (100 / float(y_max))

def restore_scrollbar_position(self):
    x_max = self.horizontalScrollBar().maximum()
    if self.scroll_x_percentage and x_max:
        x = x_max * (float(self.scroll_x_percentage) / 100)
        self.horizontalScrollBar().setSliderPosition(x)
    y_max = self.verticalScrollBar().maximum()
    if self.scroll_y_percentage and y_max:
        y = y_max * (float(self.scroll_y_percentage) / 100)
        self.verticalScrollBar().setSliderPosition(y)
And here is how I'm doing the scaling. self.imageFile is a QPixmap and self.image is my QGraphicsPixmapItem; again, this is part of a subclassed QGraphicsView. The method is attached to the slider movement with the highQuality parameter set to False. It is called again on slider release with highQuality set to True to swap the image.
def setImageScale(self, scale=None, highQuality=True):
    if self.imageFile.isNull():
        return
    if scale is None:
        scale = self.scale
    self.scale = scale
    self.image.setPixmap(self.imageFile)
    self.scene.setSceneRect(self.image.boundingRect())
    self.image.setPos(0, 0)
    if not highQuality:
        self.setTransform(QTransform.fromScale(self.scaleFactor, self.scaleFactor))
        self.store_scrollbar_position()
    else:
        self.image.setPixmap(self.imageFile.scaled(self.scaleFactor * self.imageFile.size(),
                                                   Qt.KeepAspectRatio, Qt.SmoothTransformation))
        self.setTransform(self.transform)
        self.scene.setSceneRect(self.image.boundingRect())
        self.image.setPos(0, 0)
        self.restore_scrollbar_position()
    return
Any help would be appreciated. I'm starting to get quite frustrated with this now.
I found a solution that works better than the code I first posted. It's still not perfect, but is much improved. Just in case anyone else is trying to solve a similar problem...
When setting the low-quality image I call this method, which I added to my QGraphicsView:
def get_scroll_state(self):
    """
    Returns a tuple of scene extents percentages.
    """
    centerPoint = self.mapToScene(self.viewport().width()/2,
                                  self.viewport().height()/2)
    sceneRect = self.sceneRect()
    centerWidth = centerPoint.x() - sceneRect.left()
    centerHeight = centerPoint.y() - sceneRect.top()
    sceneWidth = sceneRect.width()
    sceneHeight = sceneRect.height()
    sceneWidthPercent = centerWidth / sceneWidth if sceneWidth != 0 else 0
    sceneHeightPercent = centerHeight / sceneHeight if sceneHeight != 0 else 0
    return sceneWidthPercent, sceneHeightPercent
The result gets stored in self.scroll_state. When setting the high-quality image I call another method to restore the center position using the percentages from the previous function.
def set_scroll_state(self, scroll_state):
    sceneWidthPercent, sceneHeightPercent = scroll_state
    x = (sceneWidthPercent * self.sceneRect().width() +
         self.sceneRect().left())
    y = (sceneHeightPercent * self.sceneRect().height() +
         self.sceneRect().top())
    self.centerOn(x, y)
This sets the center position to the same location (percentage-wise) as I was at before swapping the image.
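Put together, the swap would look something like this (a sketch only; swap_to_high_quality and scaled_pixmap are illustrative names, while self.image, self.scene, and the two state methods mirror the snippets above):

# inside the subclassed QGraphicsView, when swapping to the high-quality pixmap
def swap_to_high_quality(self, scaled_pixmap):
    state = self.get_scroll_state()      # remember the view center (percentages)
    self.image.setPixmap(scaled_pixmap)  # replace the displayed pixmap
    self.scene.setSceneRect(self.image.boundingRect())
    self.set_scroll_state(state)         # re-center on the same relative point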
