Neural Network Example Source-code (preferably Python) [closed]

I wonder if anyone has some example code for a neural network in Python. If someone knows of a tutorial with a complete walkthrough, that would be awesome, but just example source would be great as well!
Thanks

Found this interesting discussion on the Ubuntu forums:
http://ubuntuforums.org/showthread.php?t=320257
import time
import random

# Learning rate:
# Lower = slower
# Higher = less precise
rate = .2

# Create random weights
inWeight = [random.uniform(0, 1), random.uniform(0, 1)]

# Start neuron with no stimuli
inNeuron = [0.0, 0.0]

# Learning table (OR gate)
test = [[0.0, 0.0, 0.0]]
test += [[0.0, 1.0, 1.0]]
test += [[1.0, 0.0, 1.0]]
test += [[1.0, 1.0, 1.0]]

# Calculate response from neural input
def outNeuron(midThresh):
    global inNeuron, inWeight
    s = inNeuron[0] * inWeight[0] + inNeuron[1] * inWeight[1]
    if s > midThresh:
        return 1.0
    else:
        return 0.0

# Display results of test
def display(out, real):
    if out == real:
        print(str(out) + " should be " + str(real) + " ***")
    else:
        print(str(out) + " should be " + str(real))

while 1:
    # Loop through each lesson in the learning table
    for i in range(len(test)):
        # Stimulate neurons with test input
        inNeuron[0] = test[i][0]
        inNeuron[1] = test[i][1]
        # Adjust weight #1 based on feedback, then display
        out = outNeuron(2)
        inWeight[0] += rate * (test[i][2] - out)
        display(out, test[i][2])
        # Adjust weight #2 based on feedback, then display
        out = outNeuron(2)
        inWeight[1] += rate * (test[i][2] - out)
        display(out, test[i][2])
    # Delay
    time.sleep(1)
EDIT: there is also a framework named Chainer:
https://pypi.python.org/pypi/chainer/1.0.0
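As a quick modern complement to the answer above (not part of the original post), the same OR gate can be learned with scikit-learn's Perceptron class, assuming scikit-learn is installed:
import numpy as np
from sklearn.linear_model import Perceptron

# OR-gate truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

clf = Perceptron(max_iter=100, tol=None)
clf.fit(X, y)
print(clf.predict(X))  # expected: [0 1 1 1]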

You might want to take a look at Monte:
Monte (python) is a Python framework for building gradient based learning machines, like neural networks, conditional random fields, logistic regression, etc. Monte contains modules (that hold parameters, a cost-function and a gradient-function) and trainers (that can adapt a module's parameters by minimizing its cost-function on training data). Modules are usually composed of other modules, which can in turn contain other modules, etc. Gradients of decomposable systems like these can be computed with back-propagation.
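To make the module/trainer idea concrete, here is a generic illustration of the concept in plain NumPy. This is not Monte's actual API; the class and method names are invented purely to show how a module (parameters + cost + gradient) and a trainer (gradient descent on that cost) fit together.
import numpy as np

class LogisticModule:
    """Holds parameters, a cost function and a gradient function."""
    def __init__(self, n_features):
        self.w = np.zeros(n_features)

    def cost(self, X, y):
        p = 1.0 / (1.0 + np.exp(-X @ self.w))
        return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    def grad(self, X, y):
        p = 1.0 / (1.0 + np.exp(-X @ self.w))
        return X.T @ (p - y) / len(y)

class GradientTrainer:
    """Adapts a module's parameters by minimizing its cost on training data."""
    def __init__(self, module, lr=0.1):
        self.module, self.lr = module, lr

    def step(self, X, y):
        self.module.w -= self.lr * self.module.grad(X, y)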

Here is a probabilistic neural network tutorial: http://www.youtube.com/watch?v=uAKu4g7lBxU
And my Python implementation:
import math

data = {'o': [(0.2, 0.5), (0.5, 0.7)],
        'x': [(0.8, 0.8), (0.4, 0.5)],
        'i': [(0.8, 0.5), (0.6, 0.3), (0.3, 0.2)]}

class Prob_Neural_Network(object):
    def __init__(self, data):
        self.data = data

    def predict(self, new_point, sigma):
        res_dict = {}
        np = new_point
        for k, v in self.data.items():
            res_dict[k] = sum(self.gaussian_func(np[0], np[1], p[0], p[1], sigma) for p in v)
        return max(res_dict.items(), key=lambda kv: kv[1])

    def gaussian_func(self, x, y, x_0, y_0, sigma):
        return math.e ** (-1 * ((x - x_0) ** 2 + (y - y_0) ** 2) / (2 * (sigma ** 2)))

prob_nn = Prob_Neural_Network(data)
res = prob_nn.predict((0.2, 0.6), 0.1)
Result:
>>> res
('o', 0.6132686067117191)
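The predicted label is the class whose summed Gaussian responses are largest, with sigma controlling how far each stored point's influence reaches. An equivalent vectorized sketch with NumPy (not part of the original answer):
import numpy as np

def pnn_predict(data, new_point, sigma):
    point = np.asarray(new_point)
    scores = {}
    for label, pts in data.items():
        pts = np.asarray(pts)
        sq_dist = ((pts - point) ** 2).sum(axis=1)   # squared distance to each stored point
        scores[label] = np.exp(-sq_dist / (2 * sigma ** 2)).sum()
    return max(scores.items(), key=lambda kv: kv[1])

print(pnn_predict(data, (0.2, 0.6), 0.1))            # should match ('o', 0.613...)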

Related

Removing 'noise' or 'holes' from image [closed]

So I have a grey-scale image (a 2D matrix with cell values ranging from 0.0 to 1.0). I am manipulating it using Python.
I would expect it to have gradual changes of values, but it comes with some clearly undesired 'artifacts', like the one marked in red in the picture below (and the others around it).
Is there an already-implemented library (or known algorithm) that programmatically 'fills' them with something like the 'weighted average of the surrounding pixels'?
They can be characterized as 'groups of pixels surrounded by a value gradient of at least 0.1 in magnitude'.
Interesting problem. I wrote a program that will recursively loop through the image and smooth out the pixels by averaging them. It looks for absolute differences over a certain threshold, and when one is detected it averages the two values and rebuilds the matrix. Let me know what you think:
from statistics import mean

myimage = [
    [0, 0, 0, 0, .1],
    [0, .1, 3, 1, .1],
    [1, .1, 4, .2, .1],
    [0, 1, 0, 0, .1],
    [.1, .9, 0, 0, .1]
]

def smooth(matrix, delta):
    noise = 0
    reduction = matrix.copy()
    for i, row in enumerate(matrix):
        for j, pixel in enumerate(row):
            if j < len(row) - 1:
                if abs(row[j] - row[j + 1]) >= delta:
                    noise = 1
                    av = mean([row[j], row[j + 1]])
                    mv, iv = max((v, i) for i, v in enumerate((row[j], row[j + 1])))
                    if iv == 0:
                        reduction[i][j] = av
                    else:
                        reduction[i][j + 1] = av
            if i < len(matrix) - 1:
                if abs(row[j] - matrix[i + 1][j]) >= delta:
                    noise = 1
                    av = mean([row[j], matrix[i + 1][j]])
                    mv, iv = max((v, i) for i, v in enumerate((row[j], matrix[i + 1][j])))
                    if iv == 0:
                        reduction[i][j] = av
                    else:
                        reduction[i + 1][j] = av
    if noise == 1:
        return smooth(reduction, delta)
    else:
        return reduction

x = smooth(myimage, 0.5)
for line in x:
    print(line)

# [0, 0, 0, 0, 0.1]
# [0, 0.1, 0.4, 0.25, 0.1]
# [0.25, 0.1, 0.3625, 0.2, 0.1]
# [0, 0.275, 0, 0, 0.1]
# [0.1, 0.29375, 0, 0, 0.1]
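For a library-based take on the same idea (replace only the artifact pixels with a weighted average of their surroundings), here is a minimal sketch assuming SciPy is available and that `img` is a 2-D float NumPy array in [0, 1]; the threshold and sigma values are arbitrary choices:
import numpy as np
from scipy import ndimage

def fill_artifacts(img, threshold=0.1, sigma=1.5):
    # Gaussian-weighted average of each pixel's neighbourhood
    local_avg = ndimage.gaussian_filter(img, sigma=sigma)
    # pixels that deviate sharply from their surroundings are treated as artifacts
    artifact_mask = np.abs(img - local_avg) >= threshold
    repaired = img.copy()
    repaired[artifact_mask] = local_avg[artifact_mask]
    return repaired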

Additional audio feature extraction tips

I'm trying to create a speech emotion recognition model using Keras. I've written all of the code and have trained the model. It sits around 50% validation accuracy and is overfitting.
When I use model.predict() with unseen data, it seems to have a hard time distinguishing between 'neutral', 'calm', 'happy' and 'surprised', but it seems able to predict 'angry' correctly in the majority of cases - I assume because there's a clear difference in pitch or something.
I'm thinking it could possibly be that I'm not getting enough features from these emotions, which would help the model distinguish between them.
Currently I am using librosa and converting audio to MFCCs. Is there any other way, even using librosa, to extract features that would help the model better distinguish between 'neutral', 'calm', 'happy', 'surprised', etc.?
Some feature extraction code:
wav_clip, sample_rate = librosa.load(file_path, duration=3, mono=True, sr=None)
mfcc = librosa.feature.mfcc(wav_clip, sample_rate)
Also, this is with 1400 samples.
A few observations for starters:
Likely you have far too few samples to use neural networks efficiently. Use a simple algorithm for starters to understand well how your model is making predictions.
Make sure you have enough (30% or more) samples from different speakers put aside for final testing. You can use this test set only once, so think about building a pipeline to generate train, validation and test sets. Make sure you don't put the same speaker into more than one set (a sketch of such a split follows this list).
The first coefficient from librosa gives you, AFAIK, an offset. I'd recommend plotting how your features correlate with labels and how far they overlap; some can easily be confused, I guess. Find out whether there are any features that would differentiate your classes. Don't do this by running your model; do a visual inspection first.
On to the actual features! You're right to assume pitch should play a vital role. I'd recommend checking out aubio - it has Python bindings.
Yaafe also offers an excellent selection of features.
You might easily end up with 150+ features. You might want to reduce the dimensionality of the problem, perhaps even compress it to 2D and see if you can somehow separate the classes. Here is my own example with Dash.
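Regarding the speaker-independent train/validation/test sets mentioned above, here is a minimal sketch of such a split, assuming a `speakers` array with one speaker ID per sample (the helper name and parameters are just for illustration):
from sklearn.model_selection import GroupShuffleSplit

def speaker_independent_split(features, labels, speakers, test_size=0.3, seed=0):
    # GroupShuffleSplit keeps every sample of a given speaker on one side of the split
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, test_idx = next(splitter.split(features, labels, groups=speakers))
    return train_idx, test_idx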
Last but not least, here is some basic code to extract frequencies from the audio. In this case I am also trying to find the three peak frequencies.
import numpy as np
from scipy import signal

def spectral_statistics(y: np.ndarray, fs: int, lowcut: int = 0) -> dict:
    """
    Compute selected statistical properties of the spectrum
    :param y: 1-d signal
    :param fs: sampling frequency [Hz]
    :param lowcut: lowest frequency [Hz]
    :return: spectral features (dict)
    """
    spec = np.abs(np.fft.rfft(y))
    freq = np.fft.rfftfreq(len(y), d=1 / fs)
    idx = int(lowcut / fs * len(freq) * 2)
    spec = np.abs(spec[idx:])
    freq = freq[idx:]
    amp = spec / spec.sum()
    mean = (freq * amp).sum()
    sd = np.sqrt(np.sum(amp * ((freq - mean) ** 2)))
    amp_cumsum = np.cumsum(amp)
    median = freq[len(amp_cumsum[amp_cumsum <= 0.5]) + 1]
    mode = freq[amp.argmax()]
    Q25 = freq[len(amp_cumsum[amp_cumsum <= 0.25]) + 1]
    Q75 = freq[len(amp_cumsum[amp_cumsum <= 0.75]) + 1]
    IQR = Q75 - Q25
    z = amp - amp.mean()
    w = amp.std()
    skew = ((z ** 3).sum() / (len(spec) - 1)) / w ** 3
    kurt = ((z ** 4).sum() / (len(spec) - 1)) / w ** 4

    top_peaks_ordered_by_power = {'stat_freq_peak_by_power_1': 0, 'stat_freq_peak_by_power_2': 0, 'stat_freq_peak_by_power_3': 0}
    top_peaks_ordered_by_order = {'stat_freq_peak_by_order_1': 0, 'stat_freq_peak_by_order_2': 0, 'stat_freq_peak_by_order_3': 0}
    amp_smooth = signal.medfilt(amp, kernel_size=15)
    peaks, height_d = signal.find_peaks(amp_smooth, distance=100, height=0.002)
    if peaks.size != 0:
        peak_f = freq[peaks]
        for peak, peak_name in zip(peak_f, top_peaks_ordered_by_order.keys()):
            top_peaks_ordered_by_order[peak_name] = peak
        idx_three_top_peaks = height_d['peak_heights'].argsort()[-3:][::-1]
        top_3_freq = peak_f[idx_three_top_peaks]
        for peak, peak_name in zip(top_3_freq, top_peaks_ordered_by_power.keys()):
            top_peaks_ordered_by_power[peak_name] = peak

    specprops = {
        'stat_mean': mean,
        'stat_sd': sd,
        'stat_median': median,
        'stat_mode': mode,
        'stat_Q25': Q25,
        'stat_Q75': Q75,
        'stat_IQR': IQR,
        'stat_skew': skew,
        'stat_kurt': kurt
    }
    specprops.update(top_peaks_ordered_by_power)
    specprops.update(top_peaks_ordered_by_order)
    return specprops
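To the original question about getting more out of librosa than MFCCs alone, here is a hedged sketch that stacks a few additional librosa features (chroma, spectral contrast, zero-crossing rate and a YIN pitch track). The exact selection and the mean-over-time aggregation are assumptions, not a recipe.
import numpy as np
import librosa

def extra_features(wav_clip, sample_rate):
    mfcc = librosa.feature.mfcc(y=wav_clip, sr=sample_rate)
    chroma = librosa.feature.chroma_stft(y=wav_clip, sr=sample_rate)
    contrast = librosa.feature.spectral_contrast(y=wav_clip, sr=sample_rate)
    zcr = librosa.feature.zero_crossing_rate(wav_clip)
    f0 = librosa.yin(wav_clip, fmin=librosa.note_to_hz('C2'),
                     fmax=librosa.note_to_hz('C7'), sr=sample_rate)
    # collapse the time axis so each clip becomes one fixed-length vector
    return np.concatenate([mfcc.mean(axis=1), chroma.mean(axis=1),
                           contrast.mean(axis=1), zcr.mean(axis=1), [f0.mean()]])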

How to properly use theano.scan in the SGD updates of a simple Probabilistic Matrix Factorization algorithm?

I am trying to implement Probabilistic Matrix Factorization with Stochastic Gradient Descent updates, in Theano, without using a for loop.
I have just started learning the basics of Theano; unfortunately in my experiment I get this error:
UnusedInputError: theano.function was asked to create a function computing outputs given certain inputs, but the provided input variable at index 0 is not part of the computational graph needed to compute the outputs: trainM.
The source code is the following:
def create_training_set_matrix(training_set):
    return np.array([
        [_i, _j, _Rij]
        for (_i, _j), _Rij
        in training_set
    ])

def main():
    R = movielens.small()
    U_values = np.random.random((config.K, R.shape[0]))
    V_values = np.random.random((config.K, R.shape[1]))
    U = theano.shared(U_values)
    V = theano.shared(V_values)
    lr = T.dscalar('lr')
    trainM = T.dmatrix('trainM')

    def step(curr):
        i = T.cast(curr[0], 'int32')
        j = T.cast(curr[1], 'int32')
        Rij = curr[2]
        eij = T.dot(U[:, i].T, V[:, j])
        T.inc_subtensor(U[:, i], lr * eij * V[:, j])
        T.inc_subtensor(V[:, j], lr * eij * U[:, i])
        return {}

    values, updates = theano.scan(step, sequences=[trainM])
    scan_fn = function([trainM, lr], values)

    print("training pmf...")
    for training_set in cftools.epochsloop(R, U_values, V_values):
        training_set_matrix = create_training_set_matrix(training_set)
        scan_fn(training_set_matrix, config.lr)
I realize that it's a rather unconventional way to use theano.scan: do you have a suggestion on how I could implement my algorithm better?
The main difficulty lies in the updates: a single update depends on possibly all the previous updates. For this reason I defined the latent matrices U and V as shared variables (I hope I did that correctly).
The version of Theano I am using is: 0.8.0.dev0.dev-8d6800181bedb03a4bced4f456338e5194524317
Any hints and suggestions are highly appreciated. I am happy to provide further details.
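As a hedged sketch of one way the scan step could be restructured (not a verified fix): let the inner function return an updates dictionary for the shared variables and pass the accumulated updates to theano.function, so that trainM actually ends up in the compiled graph. The error term below uses Rij minus the prediction, which is the usual PMF SGD rule; adapt it to your cost function.
def step(curr, lr):
    i = T.cast(curr[0], 'int32')
    j = T.cast(curr[1], 'int32')
    Rij = curr[2]
    eij = Rij - T.dot(U[:, i].T, V[:, j])                 # prediction error
    new_U = T.inc_subtensor(U[:, i], lr * eij * V[:, j])
    new_V = T.inc_subtensor(V[:, j], lr * eij * U[:, i])
    return {U: new_U, V: new_V}                           # a dict is treated by scan as updates

outputs, updates = theano.scan(step, sequences=[trainM], non_sequences=[lr])
train_fn = theano.function([trainM, lr], [], updates=updates)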

Design of a Notch filter in Python [closed]

I'm trying to design an IIR notch filter in Python using a numpy array and the scipy library to remove a sine tone from an imported wave file (I'm using the wave module to do so). My file was generated by Adobe Audition: it is a pure sine at 1.2 kHz, sampled at 48, 96 or 192 kHz, in order to have "pseudo-periodic" data for my circular FFT (just ask if I'm not clear enough).
Here is the code I used to implement the coefficients of my filter (I got the coefficients from the article "Second-order IIR Notch Filter Design and implementation of digital signal processing system" by C. M. Wang & W. C. Xiao):
f_cut = 1200.0
wn = f_cut/rate
r = 0.99
B, A = np.zeros(3), np.zeros(3)
A[0],A[1],A[2] = 1.0, -2.0*r*np.cos(2*np.pi*wn), r*r
B[0],B[1],B[2] = 1.0, -2.0*np.cos(2*np.pi*wn), 1.0
filtered = signal.lfilter(B, A, data_flt_R, axis=0)
Where data_flt_R is a numpy array containing my right channel as float64, and rate is my sampling frequency. I plot the frequency response and the FFT of my data using the matplotlib module to see if everything is OK:
N = len(data_flt_R)
w, h = signal.freqz(B,A, N)
pyplot.subplot(2,1,1)
pyplot.semilogx(w*rate/(2*np.pi), 20*np.log10(np.absolute(h)))
fft1 = fftpack.fft(data_flt_R, N)
fft_abs1 = np.absolute(fft1)
ref = np.nanmax(fft_abs1)
dB_unfiltered = 20*np.log10(fft_abs1/ref)
fft2 = fftpack.fft(filtered, N)
fft_abs2 = np.absolute(fft2)
dB_filtered = 20*np.log10(fft_abs2/ref)
abs = fftpack.fftfreq(N,1.0/rate)
pyplot.subplot(2,1,2)
pyplot.semilogx(abs,dB_unfiltered,'r', label='unfiltered')
pyplot.semilogx(abs,dB_filtered,'b', label='filtered')
pyplot.grid(True)
pyplot.legend()
pyplot.ylabel('power spectrum (in dB)')
pyplot.xlim(10,rate/2)
pyplot.xlabel('frequencies (in Hz)')
And here is what I get:
I don't understand the results and values I get before and after my cutoff frequency. Shouldn't I get a plot which looks like the red one but without the main peak? Why do I have a slope in the high frequencies? Is this linked to windowing?
Moreover, the result changes if I change my sampling frequency and/or the data length (16/24 or 32 bits). Can anyone enlighten me?
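As an aside (not part of the original question), SciPy also ships a ready-made second-order notch design that can serve as a cross-check for the hand-derived coefficients. A sketch, reusing rate and data_flt_R from above, with an arbitrarily chosen quality factor:
from scipy import signal

f_cut = 1200.0   # notch frequency in Hz
Q = 30.0         # quality factor; higher = narrower notch (arbitrary choice here)
# the fs argument needs a reasonably recent SciPy (>= 1.2)
b, a = signal.iirnotch(f_cut, Q, fs=rate)
filtered_ref = signal.lfilter(b, a, data_flt_R, axis=0)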

How to speed up a kernelized perceptron using parallelization? [closed]

I am dealing with a rather huge data set on which I need to do binary classification using a kernelized perceptron. I am using this source code: https://gist.github.com/mblondel/656147 .
There are three things that can be parallelized here: 1) the kernel computation, 2) the update rule, and 3) the projection part. I also did some other speed-ups, like computing only the upper-triangular part of the kernel and then mirroring it into the full symmetric matrix:
K = np.zeros((n_samples, n_samples))
for index in itertools.combinations_with_replacement(range(n_samples), 2):
    K[index] = self.kernel(X[index[0]], X[index[1]], self.gamma)
# make the full KERNEL
K = K + np.triu(K, 1).T
I also parallelized the projection part like this:
def parallel_project(self, X):
    """Function for parallelizing prediction"""
    y_predict = np.zeros(self.nOfWorkers, "object")
    pool = mp.Pool(processes=self.nOfWorkers)
    results = [pool.apply_async(prediction_worker,
                                args=(self.alpha, self.sv_y, self.sv, self.kernel, (parts,)))
               for parts in np.array_split(X, self.nOfWorkers)]
    pool.close()
    pool.join()
    i = 0
    for r in results:
        y_predict[i] = r.get()
        i += 1
    return np.hstack(y_predict)
and the worker:
def prediction_worker(alpha, sv_y, sv, kernel, samples):
    """Worker for parallelizing the prediction part"""
    print("starting:", mp.current_process().name)
    X = samples[0]
    y_predict = np.zeros(len(X))
    for i in range(len(X)):
        s = 0
        for a1, sv_y1, sv1 in zip(alpha, sv_y, sv):
            s += a1 * sv_y1 * kernel(X[i], sv1)
        y_predict[i] = s
    return y_predict.flatten()
But the code is still too slow. Can you give me any hints on parallelization or any other speed-ups?
Remark: please provide a general solution; I am not dealing with custom kernel functions.
Thanks
Here's something that should give you an instant speedup. The kernels in Mathieu's example code take single samples, but then full Gram matrices are computed using them:
K = np.zeros((n_samples, n_samples))
for i in range(n_samples):
    for j in range(n_samples):
        K[i, j] = self.kernel(X[i], X[j])
This is slow, and can be avoided by vectorizing the kernel functions:
def linear_kernel(X, Y):
    return np.dot(X, Y.T)

def polynomial_kernel(X, Y, p=3):
    return (1 + np.dot(X, Y.T)) ** p

# the Gaussian RBF kernel is a bit trickier
Now the Gram matrix can be computed as just
K = kernel(X, X)
The project function should be changed accordingly to speed that up as well.
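Following up on the "Gaussian RBF kernel is a bit trickier" comment above, one vectorized form (a sketch, assuming a gamma hyperparameter that is not part of the original answer) uses SciPy's pairwise distances:
import numpy as np
from scipy.spatial.distance import cdist

def gaussian_kernel(X, Y, gamma=0.1):
    # exp(-gamma * ||x - y||^2) for every pair of rows in X and Y
    return np.exp(-gamma * cdist(X, Y, 'sqeuclidean'))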
