Removing 'noise' or 'holes' from image [closed] - python

So I have a grey-scale image (a 2D matrix with cell values ranging from 0.0 to 1.0) that I am manipulating in Python.
I would expect it to have gradual changes of values, but it comes with some clearly undesired 'artifacts', such as the one marked in red in the picture below (and the others around it).
Is there an already-implemented library (or known algorithm) that programmatically 'fills' them with something like the 'weighted average of the surrounding pixels'?
They can be characterized as 'groups of pixels surrounded by a value gradient of -0.1 or less'.
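
One possible approach is image inpainting: flag the artifact pixels, then let the library fill them in from the surrounding values. Below is a minimal sketch (not from this thread), assuming OpenCV and SciPy are available and that artifacts can be detected as pixels deviating by 0.1 or more from a local median; the random array is a stand-in for the real image:

import numpy as np
import cv2
from scipy import ndimage

img = np.random.rand(64, 64).astype(np.float32)  # stand-in for the real 0.0-1.0 image

# Flag pixels that deviate sharply from a local median estimate;
# the 0.1 threshold mirrors the gradient criterion in the question.
local = ndimage.median_filter(img, size=5)
mask = (np.abs(img - local) >= 0.1).astype(np.uint8)

# cv2.inpaint expects 8-bit input, so scale to [0, 255] and back.
img8 = (img * 255).astype(np.uint8)
filled8 = cv2.inpaint(img8, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
filled = filled8.astype(np.float32) / 255.0

cv2.INPAINT_TELEA fills each flagged pixel from a weighted average of the valid pixels around it, which is close to the 'weighted average of the surrounding pixels' asked for here.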

Interesting problem. I wrote a program that recursively loops through the image and smooths the pixels by averaging them. It looks for absolute differences above a given threshold and, when one is detected, replaces the larger of the two values with their average, rebuilding the matrix. Let me know what you think:
from statistics import mean

myimage = [
    [0, 0, 0, 0, .1],
    [0, .1, 3, 1, .1],
    [1, .1, 4, .2, .1],
    [0, 1, 0, 0, .1],
    [.1, .9, 0, 0, .1],
]

def smooth(matrix, delta):
    noise = 0
    # Shallow copy: the rows are shared with `matrix`, so every update below
    # is immediately visible to later comparisons in the same pass.
    reduction = matrix.copy()
    for i, row in enumerate(matrix):
        for j, pixel in enumerate(row):
            # Compare with the neighbour to the right.
            if j < len(row) - 1:
                if abs(row[j] - row[j + 1]) >= delta:
                    noise = 1
                    av = mean([row[j], row[j + 1]])
                    # Replace the larger of the two values with their average.
                    mv, iv = max((v, k) for k, v in enumerate((row[j], row[j + 1])))
                    if iv == 0:
                        reduction[i][j] = av
                    else:
                        reduction[i][j + 1] = av
            # Compare with the neighbour below.
            if i < len(matrix) - 1:
                if abs(row[j] - matrix[i + 1][j]) >= delta:
                    noise = 1
                    av = mean([row[j], matrix[i + 1][j]])
                    mv, iv = max((v, k) for k, v in enumerate((row[j], matrix[i + 1][j])))
                    if iv == 0:
                        reduction[i][j] = av
                    else:
                        reduction[i + 1][j] = av
    # Repeat until no neighbouring pair differs by delta or more.
    if noise == 1:
        return smooth(reduction, delta)
    else:
        return reduction

x = smooth(myimage, 0.5)
for line in x:
    print(line)
#[0, 0, 0, 0, 0.1]
#[0, 0.1, 0.4, 0.25, 0.1]
#[0.25, 0.1, 0.3625, 0.2, 0.1]
#[0, 0.275, 0, 0, 0.1]
#[0.1, 0.29375, 0, 0, 0.1]

Related

I want to plot a heatmap on a png [closed]

I have a dataset of x and y coordinates of eye-gaze data with fixation durations.
I want to plot a heatmap over a PNG image, with output like the picture in the link below.
How do I plot it in Python?
Let's assume this is the dataset below.
Each entry is x, y, and time, e.g. [900.399, 980.142, 0.78]; the longest times represent high temperature and the shortest times represent low temperature.
x and y are the coordinates of the eye focus on the image, within the image's width and height.
data = [[900.399, 980.142, 0.78], [922.252, 880.885, 0.68], [724.311, 780.543, 0.58],
        [523.195, 582.994, 0.46], [623.431, 680.427, 0.76], [926.363, 881.791, 1.81],
        [722.942, 783.257, 0.75], [223.751, 279.995, 0.16], [723.215, 781.004, 0.64],
        [724.541, 779.889, 0.55]]
Let's also assume the image I want to plot on has width and height [1920, 1080].
Can someone help me design a method in Python to generate the heatmap?
https://i.insider.com/53ce61e16bb3f7dd693ffa82?width=1000&format=jpeg&auto=webp
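
A possible approach (not from the original post): build a duration-weighted 2D histogram of the gaze points, blur it into smooth blobs, and overlay it on the image with transparency. A minimal sketch, assuming NumPy, SciPy, and Matplotlib, with 'background.png' as a hypothetical placeholder for the underlying image:

import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

# Gaze samples: [x, y, fixation duration]
data = [[900.399, 980.142, 0.78], [922.252, 880.885, 0.68],
        [724.311, 780.543, 0.58], [523.195, 582.994, 0.46],
        [623.431, 680.427, 0.76], [926.363, 881.791, 1.81],
        [722.942, 783.257, 0.75], [223.751, 279.995, 0.16],
        [723.215, 781.004, 0.64], [724.541, 779.889, 0.55]]
width, height = 1920, 1080

xs, ys, ts = np.array(data).T

# Duration-weighted 2D histogram: longer fixations contribute more heat.
heat, _, _ = np.histogram2d(ys, xs, bins=(height, width),
                            range=[[0, height], [0, width]], weights=ts)
heat = gaussian_filter(heat, sigma=40)  # spread each fixation into a blob

background = plt.imread('background.png')  # hypothetical path to your PNG
plt.imshow(background, extent=[0, width, height, 0])
plt.imshow(heat, cmap='jet', alpha=0.5, extent=[0, width, height, 0])
plt.axis('off')
plt.show()

The sigma value controls how far each fixation spreads: larger values give a smoother, more diffuse heatmap.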

Removing single pixels from a GeoTIFF binary image [closed]

This is a classified satellite image. Can anybody tell me how to remove or filter out these single pixels? Remember this is in GeoTIFF format. I have already tried erosion and dilation techniques, but with no success.
I saw a similar question on SO but can't find it. There was a quite good answer there that I adapted for myself. So here is a method called particle_filter that should solve your problem:
import cv2
import numpy as np

def particle_filter(image_, power):
    # Label the connected components; the last stats column holds each component's area.
    nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(image_, connectivity=8)
    sizes = stats[1:, -1]  # skip the background component
    nb_components = nb_components - 1
    min_size = power
    img2 = np.zeros(output.shape, dtype=np.uint8)
    for i in range(0, nb_components):
        # Keep only components at least min_size pixels large.
        if sizes[i] >= min_size:
            img_to_compare = threshold_gray_const(output, (i + 1, i + 1))
            img2 = binary_or(img2, img_to_compare)
    img2 = img2.astype(np.uint8)
    return img2

def threshold_gray_const(image_, rang: tuple):
    # Select the pixels whose label falls in the given range.
    return cv2.inRange(image_, rang[0], rang[1])

def binary_or(image_1, image_2):
    return cv2.bitwise_or(image_1, image_2)
All you need to do is call this function with your binary image as the first parameter and the filter power as the second.
A bit of explanation: the method simply iterates over the objects in the image, and if the area of an object is less than power, it is removed.
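A hypothetical usage sketch (the file names and the threshold are placeholders). Note that reading and writing through OpenCV will not preserve the GeoTIFF georeferencing metadata; a library such as rasterio would be needed to carry that across:

binary = cv2.imread('classified.tif', cv2.IMREAD_GRAYSCALE)
cleaned = particle_filter(binary, 10)  # drop every blob smaller than 10 pixels
cv2.imwrite('classified_clean.tif', cleaned)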
I would give the Median Filter (cv2.medianBlur) a try; it should remove single pixels, but might also have other effects. You need to test it with a few different settings and decide whether it gives you an acceptable result.
The kernel size must be odd for the Median Filter, so the median is taken over an odd number of pixels (9 for size 3, 25 for size 5, 49 for size 7, and so on). The Median Filter therefore never introduces a new value, so if you use a binary image as input, you will get a binary image as output.
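A minimal sketch of that suggestion (the file name is a placeholder):

import cv2

binary = cv2.imread('classified.tif', cv2.IMREAD_GRAYSCALE)
cleaned = cv2.medianBlur(binary, 3)  # kernel size must be odd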

Design of a Notch filter in Python [closed]

I'm trying to design an IIR notch filter in Python, using NumPy arrays and the SciPy library, to remove a sine tone from an imported wave file (I'm using the wave module to do so). My file was generated by Adobe Audition: it is a pure sine at 1.2 kHz, sampled at 48, 96, or 192 kHz, in order to have "pseudo-periodic" data for my circular FFT (just ask if I'm not clear enough).
Here is the code I used to implement the coefficients of my filter (I took the coefficients from the article "Second-order IIR Notch Filter Design and implementation of digital signal processing system" by C. M. Wang & W. C. Xiao):
import numpy as np
from scipy import fftpack, signal
from matplotlib import pyplot

f_cut = 1200.0
wn = f_cut / rate
r = 0.99
B, A = np.zeros(3), np.zeros(3)
A[0], A[1], A[2] = 1.0, -2.0 * r * np.cos(2 * np.pi * wn), r * r
B[0], B[1], B[2] = 1.0, -2.0 * np.cos(2 * np.pi * wn), 1.0
filtered = signal.lfilter(B, A, data_flt_R, axis=0)
Where data_flt_R is a NumPy array containing my right channel as float64, and rate is my sampling frequency. I plot the frequency response and the FFT of my data using matplotlib to check that everything is OK.
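For reference, these coefficients implement the standard second-order notch transfer function, where the pole radius r (close to 1) controls how narrow the notch is:

H(z) = \frac{1 - 2\cos(\omega_0)\,z^{-1} + z^{-2}}{1 - 2r\cos(\omega_0)\,z^{-1} + r^2 z^{-2}}, \qquad \omega_0 = 2\pi w_n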
N = len(data_flt_R)
w, h = signal.freqz(B, A, N)
pyplot.subplot(2, 1, 1)
pyplot.semilogx(w * rate / (2 * np.pi), 20 * np.log10(np.absolute(h)))

fft1 = fftpack.fft(data_flt_R, N)
fft_abs1 = np.absolute(fft1)
ref = np.nanmax(fft_abs1)
dB_unfiltered = 20 * np.log10(fft_abs1 / ref)
fft2 = fftpack.fft(filtered, N)
fft_abs2 = np.absolute(fft2)
dB_filtered = 20 * np.log10(fft_abs2 / ref)

freqs = fftpack.fftfreq(N, 1.0 / rate)  # frequency axis for the FFT plots
pyplot.subplot(2, 1, 2)
pyplot.semilogx(freqs, dB_unfiltered, 'r', label='unfiltered')
pyplot.semilogx(freqs, dB_filtered, 'b', label='filtered')
pyplot.grid(True)
pyplot.legend()
pyplot.ylabel('power spectrum (in dB)')
pyplot.xlim(10, rate / 2)
pyplot.xlabel('frequencies (in Hz)')
And here is what I get:
I don't understand the results and values I get before and after my f_cut. Shouldn't I get a plot that looks like the red one but without the main peak? Why do I have a slope at high frequencies? Is this linked to windowing?
Moreover, the result changes if I change my sampling frequency and/or the data length (16, 24, or 32 bits). Can anyone enlighten me?
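
As an aside (not part of the original question): SciPy ships a ready-made designer for exactly this filter, scipy.signal.iirnotch, which avoids hand-computing the coefficients. A minimal sketch with a synthetic stand-in for the wave data:

import numpy as np
from scipy import signal

rate = 48000  # one of the sampling frequencies from the question
t = np.arange(rate) / rate
data_flt_R = np.sin(2 * np.pi * 1200.0 * t)  # stand-in for the imported channel

# Q controls the notch width: higher Q gives a narrower notch.
b, a = signal.iirnotch(1200.0, Q=30, fs=rate)
filtered = signal.lfilter(b, a, data_flt_R)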

Tutorial for scipy.cluster.hierarchy [closed]

I'm trying to understand how to manipulate a hierarchical clustering, but the documentation is too ... technical? ... and I can't understand how it works.
Is there any tutorial that can help me get started, explaining some simple tasks step by step?
Let's say I have the following data set:
a = np.array([[0, 0 ],
[1, 0 ],
[0, 1 ],
[1, 1 ],
[0.5, 0 ],
[0, 0.5],
[0.5, 0.5],
[2, 2 ],
[2, 3 ],
[3, 2 ],
[3, 3 ]])
I can easily do the hierarchical clustering and plot the dendrogram:
z = linkage(a)
d = dendrogram(z)
Now, how can I recover a specific cluster? Let's say the one with elements [0, 1, 2, 4, 5, 6] in the dendrogram?
How can I get back the values of those elements?
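
A minimal sketch of one way to do this (not from the original answers), using fcluster to cut the dendrogram at a chosen distance. With the default single linkage on this data, every pair inside {0, 1, 2, 4, 5, 6} is chained at distance 0.5, so cutting at 0.6 recovers exactly that cluster:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

a = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0], [0, 0.5],
              [0.5, 0.5], [2, 2], [2, 3], [3, 2], [3, 3]])
z = linkage(a)  # single linkage, euclidean metric by default

# Cut the dendrogram at distance 0.6: labels[i] is the flat cluster of a[i].
labels = fcluster(z, t=0.6, criterion='distance')
members = a[labels == labels[0]]  # the points grouped with observation 0
print(members)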
There are three steps in hierarchical agglomerative clustering (HAC):
Quantify Data (metric argument)
Cluster Data (method argument)
Choose the number of clusters
Doing
z = linkage(a)
will accomplish the first two steps. Since you did not specify any parameters, it uses the default values:
metric = 'euclidean'
method = 'single'
So z = linkage(a) will give you a single-linkage hierarchical agglomerative clustering of a. This clustering is a kind of hierarchy of solutions, and from it you get some information about the structure of your data. What you might do now is:
Check which metric is appropriate, e.g. cityblock or chebyshev will quantify your data differently (cityblock, euclidean, and chebyshev correspond to the L1, L2, and L_inf norms)
Check the different properties/behaviours of the methods (e.g. single, complete, and average)
Check how to determine the number of clusters, e.g. by reading the wiki about it
Compute indices on the found solutions (clusterings), such as the silhouette coefficient (this coefficient gives feedback on how well a point/observation fits the cluster it is assigned to). Different indices use different criteria to qualify a clustering.
Here is something to start with:

import numpy as np
import scipy.cluster.hierarchy as hac
import matplotlib.pyplot as plt

a = np.array([[0.1, 2.5],
              [1.5, .4 ],
              [0.3, 1  ],
              [1  , .8 ],
              [0.5, 0  ],
              [0  , 0.5],
              [0.5, 0.5],
              [2.7, 2  ],
              [2.2, 3.1],
              [3  , 2  ],
              [3.2, 1.3]])

fig, axes23 = plt.subplots(2, 3)

for method, axes in zip(['single', 'complete'], axes23):
    z = hac.linkage(a, method=method)

    # Plotting: the scree plot of merge distances and its second
    # derivative, whose maximum suggests a knee point.
    axes[0].plot(range(1, len(z)+1), z[::-1, 2])
    knee = np.diff(z[::-1, 2], 2)
    axes[0].plot(range(2, len(z)), knee)

    num_clust1 = knee.argmax() + 2
    knee[knee.argmax()] = 0
    num_clust2 = knee.argmax() + 2

    axes[0].text(num_clust1, z[::-1, 2][num_clust1-1], 'possible\n<- knee point')

    part1 = hac.fcluster(z, num_clust1, 'maxclust')
    part2 = hac.fcluster(z, num_clust2, 'maxclust')

    clr = ['#2200CC', '#D9007E', '#FF6600', '#FFCC00', '#ACE600', '#0099CC',
           '#8900CC', '#FF0000', '#FF9900', '#FFFF00', '#00CC01', '#0055CC']

    for part, ax in zip([part1, part2], axes[1:]):
        for cluster in set(part):
            ax.scatter(a[part == cluster, 0], a[part == cluster, 1],
                       color=clr[cluster])

    m = '\n(method: {})'.format(method)
    plt.setp(axes[0], title='Screeplot{}'.format(m), xlabel='partition',
             ylabel='{}\ncluster distance'.format(m))
    plt.setp(axes[1], title='{} Clusters'.format(num_clust1))
    plt.setp(axes[2], title='{} Clusters'.format(num_clust2))

plt.tight_layout()
plt.show()
Gives

Neural Network Example Source-code (preferably Python) [closed]

I wonder if anyone has some example code for a neural network in Python. If someone knows of some sort of tutorial with a complete walkthrough, that would be awesome, but just example source would be great as well!
Thanks
Found this interesting discussion on the Ubuntu forums:
http://ubuntuforums.org/showthread.php?t=320257
import time
import random

# Learning rate:
# Lower = slower
# Higher = less precise
rate = .2

# Create random weights
inWeight = [random.uniform(0, 1), random.uniform(0, 1)]

# Start neuron with no stimuli
inNeuron = [0.0, 0.0]

# Learning table (OR gate)
test = [[0.0, 0.0, 0.0]]
test += [[0.0, 1.0, 1.0]]
test += [[1.0, 0.0, 1.0]]
test += [[1.0, 1.0, 1.0]]

# Calculate response from neural input
def outNeuron(midThresh):
    global inNeuron, inWeight
    s = inNeuron[0] * inWeight[0] + inNeuron[1] * inWeight[1]
    if s > midThresh:
        return 1.0
    else:
        return 0.0

# Display results of test
def display(out, real):
    if out == real:
        print(str(out) + " should be " + str(real) + " ***")
    else:
        print(str(out) + " should be " + str(real))

while 1:  # trains indefinitely; stop with Ctrl+C
    # Loop through each lesson in the learning table
    for i in range(len(test)):
        # Stimulate neurons with test input
        inNeuron[0] = test[i][0]
        inNeuron[1] = test[i][1]
        # Adjust the first input weight based on feedback, then display
        out = outNeuron(2)
        inWeight[0] += rate * (test[i][2] - out)
        display(out, test[i][2])
        # Adjust the second input weight based on feedback, then display
        out = outNeuron(2)
        inWeight[1] += rate * (test[i][2] - out)
        display(out, test[i][2])
    # Delay
    time.sleep(1)
EDIT: there is also a framework named Chainer:
https://pypi.python.org/pypi/chainer/1.0.0
You might want to take a look at Monte:
Monte (python) is a Python framework for building gradient based learning machines, like neural networks, conditional random fields, logistic regression, etc. Monte contains modules (that hold parameters, a cost-function and a gradient-function) and trainers (that can adapt a module's parameters by minimizing its cost-function on training data). Modules are usually composed of other modules, which can in turn contain other modules, etc. Gradients of decomposable systems like these can be computed with back-propagation.
Here is a probabilistic neural network tutorial: http://www.youtube.com/watch?v=uAKu4g7lBxU
And my Python implementation:
import math

data = {'o': [(0.2, 0.5), (0.5, 0.7)],
        'x': [(0.8, 0.8), (0.4, 0.5)],
        'i': [(0.8, 0.5), (0.6, 0.3), (0.3, 0.2)]}

class Prob_Neural_Network(object):
    def __init__(self, data):
        self.data = data

    def predict(self, new_point, sigma):
        # Sum a Gaussian kernel over each class's training points and
        # return the (class, score) pair with the highest total activation.
        res_dict = {}
        np = new_point
        for k, v in self.data.items():
            res_dict[k] = sum(self.gaussian_func(np[0], np[1], p[0], p[1], sigma) for p in v)
        return max(res_dict.items(), key=lambda kv: kv[1])

    def gaussian_func(self, x, y, x_0, y_0, sigma):
        return math.e ** (-1 * ((x - x_0) ** 2 + (y - y_0) ** 2) / (2 * (sigma ** 2)))

prob_nn = Prob_Neural_Network(data)
res = prob_nn.predict((0.2, 0.6), 0.1)
Result:
>>> res
('o', 0.6132686067117191)
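
For reference (not stated in the original answer), the score that predict computes for each class c is a sum of isotropic Gaussian kernels centred on that class's training points, and the prediction is the class with the largest score:

f_c(x, y) = \sum_{i=1}^{n_c} \exp\left(-\frac{(x - x_i^{(c)})^2 + (y - y_i^{(c)})^2}{2\sigma^2}\right)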
