How to create a proper sigmoid curve? - Python

I'm trying to use logistic regression on the popularity of hit songs on Spotify from 2010-2019, based on their duration and durability, with the data read from a .csv file. Since the popularity value of each song is numerical, I have converted each of them to a binary label, 0 or 1: if a song's popularity is 70 or below, I replace its value with 0, and if it is above 70, with 1.
Right now the fitted curve comes out as a straight line rather than an S-shaped sigmoid. In the context of this code, what do I need to add in order to show a proper, solid sigmoid curve in the same graph as the data? It would be deeply appreciated if someone could help me with this final step.
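(As an aside, the two .loc assignments in the code below can be written as a single vectorized line; an equivalent sketch, not a change to the method:

df['popu'] = (df['popu'] > 70).astype(int)  # 1 if popularity > 70, else 0
)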
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv('top10s [SubtitleTools.com] (2).csv')

BPM = np.array(df.bpm)
Energy = np.array(df.nrgy)
Dance = np.array(df.dnce)
dB = np.array(df.dB)
Live = np.array(df.live)
Valence = np.array(df.val)
Acous = np.array(df.acous)
Speech = np.array(df.spch)

# Binarize popularity: 0 for <= 70, 1 for > 70
df.loc[df['popu'] <= 70, 'popu'] = 0
df.loc[df['popu'] > 70, 'popu'] = 1

def Logistic_Regression(X, y, iterations, alpha):
    ones = np.ones((X.shape[0], ))
    X = np.vstack((ones, X))  # prepend a column of ones for the intercept
    X = X.T
    b = np.zeros(X.shape[1])  # b[0] is the intercept, b[1] the slope
    for i in range(iterations):
        z = np.dot(X, b)
        p_hat = sigmoid(z)
        gradient = np.dot(X.T, (y - p_hat))/y.size
        b = b + alpha * gradient  # gradient ascent on the log-likelihood
        if i % 1000 == 0:
            print('LL, i ', log_likelihood(X, y, b), i)
    return b

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def log_likelihood(X, y, b):
    z = np.dot(X, b)
    LL = np.sum(y*z - np.log(1 + np.exp(z)))
    return LL

def LR1():
    Dur = np.array(df.dur)
    Pop = np.array([int(i) for i in df.popu])
    plt.figure(figsize=(10, 8))
    colormap = np.array(['r', 'b'])
    plt.scatter(Dur, Pop, c=colormap[Pop], alpha=.4)
    b = Logistic_Regression(Dur, Pop, iterations=8000, alpha=0.00005)
    print('Done')
    p_hat = sigmoid(np.dot(Dur, b[1]) + b[0])
    idxDur = np.argsort(Dur)
    plt.plot(Dur[idxDur], p_hat[idxDur])
    plt.show()

LR1()
My dataset: [CSV file]
My current graph: [image: the fitted curve appears as a straight line]
What I want to have: [image: an S-shaped sigmoid curve]

At first glance, your Logistic_Regression initialization seems wrong.
I think you packed X as [X, 1] and then try to learn W = [weight, bias], which should start as [1, 0] rather than all zeros.
Note the 1 is the vector [1, 1, 1, ...] with length equal to the feature vector's length.

Try something like this to get a smooth curve (note the intercept must go inside the sigmoid):
x_range = np.linspace(Dur.min(), Dur.max(), 100)
p_hat = sigmoid(np.dot(x_range, b[1]) + b[0])
plt.plot(x_range, p_hat)
plt.show()
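For completeness, a minimal end-to-end sketch of the plotting step, assuming sigmoid, Dur, Pop and the fitted coefficients b from the question's code (b[0] intercept, b[1] slope). If the fitted slope is tiny over the observed duration range, the curve will still look almost straight; plotting over a dense, sorted grid at least removes the plotting artifact:

import numpy as np
import matplotlib.pyplot as plt

x_range = np.linspace(Dur.min(), Dur.max(), 200)  # dense, sorted grid
p_hat = sigmoid(b[0] + b[1] * x_range)            # P(popular | duration)

plt.scatter(Dur, Pop, alpha=0.4, label='data (popularity as 0/1)')
plt.plot(x_range, p_hat, 'k-', label='fitted sigmoid')
plt.xlabel('Duration')
plt.ylabel('P(popularity > 70)')
plt.legend()
plt.show()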

Related

Linear regression: my plot doesn't show the line

I am implementing a linear regression model from scratch, meaning without using the sklearn package.
Everything was working just fine until I tried plotting the result:
my fit line isn't showing.
I looked at a bunch of solutions, but none of them addressed my problem.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')

data = pd.read_csv(r'C:\Salary.csv')
x = data['Salary']
y = data['YearsExperience']

# y = mx + b
m = 0
b = 0
Learning_Rate = .01
epochs = 5000
n = float(x.shape[0])
error = []
for i in range(epochs):
    Y_hat = m*x + b
    # error
    mse = (1/n)*np.sum((y - Y_hat)**2)
    error.append(mse)
    # gradient descent
    db = (-2/n) * np.sum(x*(y - Y_hat))
    dm = (-2/n) * np.sum((y - Y_hat))
    m = m - Learning_Rate * dm
    b = b - Learning_Rate * db

# tracing the x and y line
x_line = np.linspace(0, 15, 100)
y_line = (m*x_line) + b

# plotting the result
plt.figure(figsize=(8, 6))
plt.title('LR result')
plt.plot(x_line, y_line)  # the problem is apparently here, I just don't know what to do
plt.scatter(x, y)
plt.show()
Apart from that, there is no problem with the code.
Your code has multiple problems:
you are plotting the line from 0 to 15, while the data range from about 40000 to 140000. Even if you compute the line correctly, you would be drawing it in a region far away from your data.
in the loop there is a mistake in the computation of dm and db: they are swapped. The corrected expressions are:
dm = (-2/n)*np.sum(x*(y - Y_hat))
db = (-2/n)*np.sum((y - Y_hat))
your x and y data are on very different scales: x is of magnitude ~10⁴, while y is ~10¹. For this reason, m and b will also end up with very different orders of magnitude. This is why you should use two different learning rates for the two quantities you are optimizing: Learning_Rate_m for m and Learning_Rate_b for b (alternatively, standardize x first; see the sketch after the plot below).
finally, the gradient descent method is strongly affected by the initial guess: it may find a local minimum (a fake solution) in place of the global minimum (the true solution). For this reason, you should try different initial guesses for m and b, possibly close to their estimated values:
m = 0
b = -2
Complete Code
import numpy as np
import matplotlib.pyplot as plt

N = 40
np.random.seed(42)
x = np.random.randint(low=38000, high=145000, size=N)
y = (13 - 1)/(140000 - 40000)*(x - 40000) + 1 + 0.5*np.random.randn(N)

# initial guess
m = 0
b = -2
Learning_Rate_m = 1e-10
Learning_Rate_b = 1e-2
epochs = 5000

n = float(x.shape[0])
error = []
for i in range(epochs):
    Y_hat = m*x + b
    mse = 1/n*np.sum((y - Y_hat)**2)
    error.append(mse)
    dm = -2/n*np.sum(x*(y - Y_hat))
    db = -2/n*np.sum((y - Y_hat))
    m = m - Learning_Rate_m*dm
    b = b - Learning_Rate_b*db

x_line = np.linspace(x.min(), x.max(), 100)
y_line = (m*x_line) + b

plt.figure(figsize=(8, 6))
plt.title('LR result')
plt.plot(x_line, y_line, 'red')
plt.scatter(x, y)
plt.show()
Plot
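As an aside (not part of the original answer), an alternative to tuning two separate learning rates is to standardize x so that m and b live on comparable scales; a minimal sketch, assuming the synthetic x and y generated in the complete code above:

x_std = (x - x.mean()) / x.std()   # zero mean, unit variance
m_s, b_s = 0.0, 0.0
lr = 0.05                          # a single learning rate now suffices
n = float(len(x))
for _ in range(5000):
    Y_hat = m_s*x_std + b_s
    dm = -2/n*np.sum(x_std*(y - Y_hat))
    db = -2/n*np.sum(y - Y_hat)
    m_s -= lr*dm
    b_s -= lr*db

# Map the fitted coefficients back to the original x scale:
m = m_s / x.std()
b = b_s - m_s*x.mean()/x.std()
print(m, b)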
The problem is not happening while plotting; the problem is with the parameters passed to plt.plot(x_line, y_line). I tested your code and found that y_line is all NaN values. Double check the calculations (y_line, m, dm).
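A quick way to confirm this kind of failure (a hypothetical check, not from the original answer):

import numpy as np
# With the swapped gradients and a learning rate of .01 on data of
# magnitude ~1e4, m and b likely overflow to inf and then become nan, so
# every y_line value is nan and the line silently disappears from the plot.
print(np.isnan(y_line).any())           # True in the failing case
print(np.isfinite(m), np.isfinite(b))   # False once the updates diverge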

SIR Estimation: Finding the Cause of Error While Estimating Parameters

I'm trying to fit the SIR epidemic spread model to the current new-case data for various countries. In order to do that I used the work here: https://github.com/epimath/param-estimation-SIR . The main idea was to fit the best possible SIR infected curve to the new-case data for a specific country, and to calculate the total predicted case number and the days on which 95% and 98% of the total cases are reached. The problem is that when I select Brazil, Mexico or the United States, the output says the epidemic will never end. I am curious about the reason. Any help on dealing with these non-converging cases would be appreciated.
Please change the selected_location variable from "Spain" to one of those three countries (Brazil, Mexico or United States) to reproduce the result that led me to ask here.
P.S. I know the limitations of this work. For example, the new-case numbers depend on the number of tests, etc. Please ignore those limitations. I'd like to see what is needed to produce a result out of the following code.
Here are some outputs:
Spain (Expected Output Example)
Turkey (Expected Output Example)
France (Expected Output Example)
USA (Unexpected Output Example)
Brazil (Unexpected Output Example)
I suspect that something makes the gamma (recovery rate) parameter too small, which leads to the same number of cases every day, but I couldn't work out what is causing it. (I noticed this by printing and examining the paramests variable.)
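A hypothetical diagnostic for that suspicion, using the paramests array produced by the optimization in the code below (layout assumed to be [beta, gamma, k]):

# beta/gamma is the basic reproduction number R0; if gamma is fitted to
# ~0, the modeled epidemic never burns out, the infected curve plateaus,
# and the 95%/98% cumulative thresholds are never crossed.
beta_est, gamma_est = paramests[0], paramests[1]
print('beta=%g gamma=%g R0=%g' % (beta_est, gamma_est, beta_est / gamma_est))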
You can find my code below.
import scipy.optimize as optimize
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import poisson
from scipy.stats import norm
import json
from scipy.integrate import odeint as ode
import pandas as pd
from datetime import datetime
time_start = datetime.timestamp(datetime.now())
output = {"result": "error"}
error = False
def model(ini, time_step, params):
    Y = np.zeros(3)  # column vector for the state variables
    X = ini
    mu = 0
    beta = params[0]
    gamma = params[1]
    Y[0] = mu - beta * X[0] * X[1] - mu * X[0]  # S
    Y[1] = beta * X[0] * X[1] - gamma * X[1] - mu * X[1]  # I
    Y[2] = gamma * X[1] - mu * X[2]  # R
    return Y

def x0fcn(params, data):
    S0 = 1.0 - (data[0] / params[2])
    I0 = data[0] / params[2]
    R0 = 0.0
    X0 = [S0, I0, R0]
    return X0

def yfcn(res, params):
    return res[:, 1] * params[2]

# cost function for the SIR model for python 2.7
# Marisa Eisenberg (marisae#umich.edu)
# Yu-Han Kao (kaoyh#umich.edu) -7-9-17
def NLL(params, data, times):  # negative log likelihood
    params = np.abs(params)
    data = np.array(data)
    res = ode(model, x0fcn(params, data), times, args=(params,))
    y = yfcn(res, params)
    nll = sum(y) - sum(data * np.log(y))
    # note this is a slightly shortened version--there's an additive constant term missing but it
    # makes calculation faster and won't alter the threshold. Alternatively, can do:
    # nll = -sum(np.log(poisson.pmf(np.round(data),np.round(y)))) # the round is b/c Poisson is for (integer) count data
    # this can also barf if data and y are too far apart because the dpois will be ~0, which makes the log angry
    # ML using normally distributed measurement error (least squares)
    # nll = -sum(np.log(norm.pdf(data,y,0.1*np.mean(data)))) # example WLS assuming sigma = 0.1*mean(data)
    # nll = sum((y - data)**2) # alternatively can do OLS but note this will mess with the thresholds
    #                          # for the profile! This version of OLS is off by a scaling factor from
    #                          # actual LL units.
    return nll
df = pd.read_csv('https://github.com/owid/covid-19-data/raw/master/public/data/owid-covid-data.csv')
selected_location = 'Spain'
selected_df = df[df.location == selected_location].reset_index()
selected_df.date = pd.to_datetime(selected_df.date)
print(selected_df.head())
selected_df.date = pd.to_datetime(selected_df.date)
selected_df = selected_df[['date', 'new_cases']]
print(selected_df)
df = selected_df
# NOTE: the original post does not show how 'params', 'data' and 'times'
# were built; the definitions below are assumed placeholders consistent
# with the functions above (params = [beta, gamma, k]).
data = df['new_cases'].fillna(0).values
times = list(range(len(data)))
params = np.array([0.4, 0.1, 1e7])  # assumed initial guesses for beta, gamma, k
optimizer = optimize.minimize(NLL, params, args=(data, times), method='Nelder-Mead',
                              options={'disp': False, 'return_all': False, 'xatol': 3.1201,
                                       'fatol': 0.0001, 'adaptive': False})
paramests = np.abs(optimizer.x)
iniests = x0fcn(paramests, data)
print('Paramests:')
print(paramests)
times_long = range(0, int(len(times) * 10))
start_day = df['date'][0]
dates_long = []
for i in range(0, int(len(times) * 10)):
    dates_long.append(start_day + (np.timedelta64(1, 'D') * i))
# print(df)
# print(dates_long)
# sys.exit()
#### Re-simulate and plot the model with the final parameter estimates ####
xest = ode(model, iniests, times_long, args=(paramests,))
# print(xest)
est_measure = yfcn(xest, paramests)
# plt.plot(times, data, 'k-o', linewidth=1, label='Data')
json_dict = {}
time_end = datetime.timestamp(datetime.now())
json_dict['duration'] = time_end - time_start
json_df = pd.DataFrame()
json_df['dates'] = dates_long
json_df['new_cases'] = df['new_cases']
json_df['prediction'] = est_measure
json_df = json_df.fillna("")
json_df['cumulative'] = json_df['prediction'].cumsum()
json_df = json_df[json_df['prediction'] >= 1]
if error == True:
    json_dict['result'] = 'error'
    json_dict['message'] = error_message
    json_dict['timestamp'] = datetime.timestamp(datetime.now())
    json_dict['chart_data'] = json_df.drop(columns=['prediction'], axis=1)
else:
    json_dict['result'] = 'success'
    json_dict['day_for_95_percent_predicted_cases'] = \
        json_df[json_df['cumulative'] > (json_df['cumulative'].iloc[-1] * 0.95)]['dates'].reset_index(drop=True)[0]
    json_dict['day_for_98_percent_predicted_cases'] = \
        json_df[json_df['cumulative'] > (json_df['cumulative'].iloc[-1] * 0.98)]['dates'].reset_index(drop=True)[0]
    # json_dict['timestamp'] = str(f"{datetime.now():%Y-%m-%d %H:%M:%S}")
    json_dict['timestamp'] = datetime.timestamp(datetime.now())
    json_dict['chart_data'] = json_df.to_dict()
json_string = json.dumps(json_dict, default=str)
print(json_string)
output = json_string # json string
plt.plot(json_df['dates'], json_df['prediction'], 'r-', linewidth=3, label='Predicted New Cases')
plt.bar(df['date'], data)
plt.axvline(x=json_dict['day_for_95_percent_predicted_cases'], label='(95%) '+str(json_dict['day_for_95_percent_predicted_cases'].date()),color='red')
plt.axvline(x=json_dict['day_for_98_percent_predicted_cases'], label='(98%) '+str(json_dict['day_for_98_percent_predicted_cases'].date()),color='green')
plt.xlabel('Time')
plt.ylabel('Individuals')
plt.legend()
plt.show()

Gradient Descent algorithm for linear regression does not optimize the y-intercept parameter

I'm following Andrew Ng's Coursera course on Machine Learning and I tried to implement the Gradient Descent algorithm in Python. I'm having trouble with the y-intercept parameter because it doesn't seem to move toward the best value. Here's my code:
# IMPORTS
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# Acquiring Data
# Source: https://github.com/mattnedrich/GradientDescentExample
data = pd.read_csv('data.csv')
def cost_function(a, b, x_values, y_values):
    '''
    Calculates the square mean error for a given dataset
    with (x,y) pairs and the model y' = a + bx

    a: y-intercept for the model
    b: slope of the curve
    x_values, y_values: points (x,y) of the dataset
    '''
    data_len = len(x_values)
    total_error = sum([((a + b * x_values[i]) - y_values[i])**2
                       for i in range(data_len)])
    return total_error / (2 * float(data_len))

def a_gradient(a, b, x_values, y_values):
    '''
    Partial derivative of the cost_function with respect to 'a'

    a, b: values for 'a' and 'b'
    x_values, y_values: points (x,y) of the dataset
    '''
    data_len = len(x_values)
    a_gradient = sum([((a + b * x_values[i]) - y_values[i])
                      for i in range(data_len)])
    return a_gradient / float(data_len)

def b_gradient(a, b, x_values, y_values):
    '''
    Partial derivative of the cost_function with respect to 'b'

    a, b: values for 'a' and 'b'
    x_values, y_values: points (x,y) of the dataset
    '''
    data_len = len(x_values)
    b_gradient = sum([(((a + b * x_values[i]) - y_values[i]) * x_values[i])
                      for i in range(data_len)])
    return b_gradient / float(data_len)

def gradient_descent_step(a_current, b_current, x_values, y_values, alpha):
    '''
    Take a step in the direction of the minimum of the cost_function using
    the 'a' and 'b' gradients. Return new values for 'a' and 'b'.

    a_current, b_current: the current values for 'a' and 'b'
    x_values, y_values: points (x,y) of the dataset
    '''
    new_a = a_current - alpha * a_gradient(a_current, b_current, x_values, y_values)
    new_b = b_current - alpha * b_gradient(a_current, b_current, x_values, y_values)
    return (new_a, new_b)

def run_gradient_descent(a, b, x_values, y_values, alpha, precision, plot=False, verbose=False):
    '''
    Runs the gradient_descent_step function and updates (a,b) until
    the value of the cost function varies less than 'precision'.

    a, b: initial values for the point a and b in the cost_function
    x_values, y_values: points (x,y) of the dataset
    alpha: learning rate for the algorithm
    precision: value for the algorithm to stop calculation
    '''
    iterations = 0
    delta_cost = cost_function(a, b, x_values, y_values)
    error_list = [delta_cost]
    iteration_list = [0]
    # The loop runs until delta_cost reaches the precision defined.
    # When the variation in the cost_function is small it means that
    # the function is near its minimum and the parameters 'a' and 'b'
    # are a good guess for modeling the dataset.
    while delta_cost > precision:
        iterations += 1
        iteration_list.append(iterations)
        # Calculates the initial error with current a,b values
        prev_cost = cost_function(a, b, x_values, y_values)
        # Calculates new values for a and b
        a, b = gradient_descent_step(a, b, x_values, y_values, alpha)
        # Updates the value of the error
        actual_cost = cost_function(a, b, x_values, y_values)
        error_list.append(actual_cost)
        # Calculates the difference between previous and actual error values.
        delta_cost = prev_cost - actual_cost
    # Plot the error in each iteration to see how it decreases
    # and some information about our final results
    if plot:
        plt.plot(iteration_list, error_list, '-')
        plt.title('Error Minimization')
        plt.xlabel('Iteration', fontsize=12)
        plt.ylabel('Error', fontsize=12)
        plt.show()
    if verbose:
        print('Iterations = ' + str(iterations))
        print('Cost Function Value = ' + str(cost_function(a, b, x_values, y_values)))
        print('a = ' + str(a) + ' and b = ' + str(b))
    return (actual_cost, a, b)
When I run the algorithm with:
run_gradient_descent(0, 0, data['x'], data['y'], 0.0001, 0.01)
I get (a = 0.0496688656535 and b = 1.47825808018)
But the best value for 'a' is around 7.9 (I tried other resources for linear regression).
Also, if I change the initial guess for the parameter 'a', the algorithm simply tries to adjust the parameter 'b'.
For example, if I set a = 200 and b = 0
run_gradient_descent(200, 0, data['x'], data['y'], 0.0001, 0.01)
I get (a = 199.933763331 and b = -2.44824996193)
I couldn't find anything wrong with the code, and I realized the problem is the initial guess for the a parameter. See my own answer below, where I define a helper function to get a search range for the initial a guess.
Gradient descent does not guarantee finding the global optimum; your chances of reaching it depend on your starting values. To get the true parameter values, I first solved the least squares problem, which does guarantee the global minimum:
data = pd.read_csv('data.csv', header=None)
x, y = data[0], data[1]
from scipy.stats import linregress
linregress(x, y)
This results in the following statistics:
LinregressResult(slope=1.32243102275536, intercept=7.9910209822703848, rvalue=0.77372849988782377, pvalue=3.855655536990139e-21, stderr=0.109377979589804)
Thus b = 1.32243102275536 and a = 7.9910209822703848. Given this, using your code, I solved the problem a couple of times with randomized starting values for a and b:
a,b = np.random.rand()*10,np.random.rand()*10
print("Initial values of parameters: ")
print("a=%f\tb=%f" % (a,b))
run_gradient_descent(a, b,x,y,1e-4,1e-2)
Here is the solution that I got:
Initial values of parameters:
a=6.100305 b=2.606448
Iterations = 21
Cost Function Value = 55.2093808263
a = 6.07601889437 and b = 1.36310312751
Therefore, it seems the reason you cannot get close to the minimum is your choice of initial parameter values. You can see this yourself: if you put the a and b obtained from least squares into your gradient descent algorithm, it will iterate only once and stay where it is, because after the first step the cost barely changes, so delta_cost <= precision and the loop stops there, treating the current point as good enough. If you decrease your precision and run it long enough, you might be able to find the global optimum.
The complete code for my Gradient Descent implementation can be found in my GitHub repository:
Gradient Descent for Linear Regression
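As a side note (a sketch, not part of the original answer), a stopping rule based on the gradient norm rather than on the change in cost is less likely to stop prematurely, because the gradient is only near zero close to a stationary point. This reuses a_gradient and b_gradient from the question's code:

def run_gd_gradnorm(a, b, x_values, y_values, alpha, tol=1e-6, max_iter=100000):
    # Iterate until the gradient itself is small, instead of until the
    # cost stops changing between consecutive steps.
    for _ in range(max_iter):
        ga = a_gradient(a, b, x_values, y_values)
        gb = b_gradient(a, b, x_values, y_values)
        if (ga**2 + gb**2)**0.5 < tol:  # near a stationary point
            break
        a, b = a - alpha*ga, b - alpha*gb
    return a, b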
Thinking about what #relay said, that the Gradient Descent algorithm does not guarantee finding the global minimum, I tried to come up with a helper function to limit guesses for the parameter a to a certain search range, as follows:
def search_range(x, y, plot=False):
    '''
    Given a dataset with points (x, y), searches for a best guess for
    initial values of 'a'.
    '''
    data_length = len(x)              # Total size of the dataset
    q_length = int(data_length / 4)   # Size of a quartile of the dataset
    # Finding the max and min values for y in the first quartile
    min_Q1 = (x[0], y[0])
    max_Q1 = (x[0], y[0])
    for i in range(q_length):
        temp_point = (x[i], y[i])
        if temp_point[1] < min_Q1[1]:
            min_Q1 = temp_point
        if temp_point[1] > max_Q1[1]:
            max_Q1 = temp_point
    # Finding the max and min values for y in the 4th quartile
    min_Q4 = (x[data_length - 1], y[data_length - 1])
    max_Q4 = (x[data_length - 1], y[data_length - 1])
    for i in range(data_length - 1, data_length - q_length, -1):
        temp_point = (x[i], y[i])
        if temp_point[1] < min_Q4[1]:
            min_Q4 = temp_point
        if temp_point[1] > max_Q4[1]:
            max_Q4 = temp_point
    mean_Q4 = (((min_Q4[0] + max_Q4[0]) / 2), ((min_Q4[1] + max_Q4[1]) / 2))
    # Finding max_y and min_y given the points found above.
    # Two lines need to be defined, L1 and L2.
    # L1 will pass through min_Q1 and mean_Q4
    # L2 will pass through max_Q1 and mean_Q4
    # Calculating slopes for L1 and L2 given m = Delta(y) / Delta(x)
    slope_L1 = (min_Q1[1] - mean_Q4[1]) / (min_Q1[0] - mean_Q4[0])
    slope_L2 = (max_Q1[1] - mean_Q4[1]) / (max_Q1[0] - mean_Q4[0])
    # Calculating y-intercepts for L1 and L2 given the line equation y = mx + b.
    # Float numbers are converted to int because they will be used as a range for iteration.
    y_L1 = int(min_Q1[1] - min_Q1[0] * slope_L1)
    y_L2 = int(max_Q1[1] - max_Q1[0] * slope_L2)
    # Plotting L1 and L2
    if plot:
        L1 = [(y_L1 + slope_L1 * xi) for xi in x]
        L2 = [(y_L2 + slope_L2 * xi) for xi in x]
        plt.plot(x, y, '.')
        plt.plot(x, L1, '-', color='r')
        plt.plot(x, L2, '-', color='r')
        plt.title('Scatterplot of Sample Data')
        plt.xlabel('x', fontsize=12)
        plt.ylabel('y', fontsize=12)
        plt.show()
    return y_L1, y_L2
The idea is to run gradient descent with guesses for a within the range given by the search_range() function and keep the minimum possible value of the cost_function(). The new way to run the gradient descent becomes:
def run_search_gradient_descent(x_values, y_values, alpha, precision, verbose=False):
    '''
    Runs the gradient_descent_step function and updates (a,b) until
    the value of the cost function varies less than 'precision'.

    x_values, y_values: points (x,y) of the dataset
    alpha: learning rate for the algorithm
    precision: value for the algorithm to stop calculation
    '''
    from math import inf
    a1, a2 = search_range(x_values, y_values)
    best_guess = [inf, 0, 0]
    for a in range(a1, a2):
        cost, linear_coef, slope = run_gradient_descent(a, 0, x_values, y_values, alpha, precision)
        # Saving the value of the cost_function and the parameters (a,b)
        if cost < best_guess[0]:
            best_guess = [cost, linear_coef, slope]
    if verbose:
        print('Cost Function = ' + str(best_guess[0]))
        print('a = ' + str(best_guess[1]) + ' and b = ' + str(best_guess[2]))
    return (best_guess[0], best_guess[1], best_guess[2])
Running the code
run_search_gradient_descent(data['x'], data['y'], 0.0001, 0.001, verbose=True)
I've got:
Cost Function = 55.1294483959
a = 8.02595996606 and b = 1.3209768383
For comparison, the linear regression from scipy.stats returned
a = 7.99102098227 and b = 1.32243102276

Trying to build neural net for digit recognition in Python. Unable to get theta2 and predictions correct

I am following Andrew Ng's Coursera course on machine learning. I am trying to build a 3-layer neural net for digit recognition in Python (784 input, 25 hidden, 10 output units). However, I am unable to get the predictions (on the training data) correct (accuracy < 5% at 100 iterations, and the accuracy is not increasing with further iterations).
J (the cost function) seems to be going down (see photo 1) and I have done gradient checking (before minimizing) and it seems to match to around 1e-11 (see photo 2).
I have compared theta1 and theta2 after 100 iterations with my working Matlab code (see code snippet 1 for Octave and code snippet 2 for Python). theta1 is reasonably similar, but theta2 is very different -- see code snippet 2. (I know they should differ because of the different optimisation routines. However, firstly, I placed the same initial thetas into both codes. Secondly, my reasoning is that they should start to converge, or at least get close, after 100 iterations.)
The only error I see is:
-c:32: RuntimeWarning: overflow encountered in exp
when running the sigmoid during the optimisation. However, I was told that this is not essential and that it is normal to encounter this warning while optimising. Furthermore, because it is a sigmoid, anytime the input is large it will tend towards 1 anyway.
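For what it's worth, the overflow warning can be avoided with a numerically stable sigmoid (a sketch, not from the original post; scipy also ships one as scipy.special.expit):

import numpy as np

def sigmoid_stable(z):
    # Evaluate exp() only on non-positive arguments so it never overflows;
    # both branches are algebraically equal to 1/(1+exp(-z)).
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    pos = z >= 0
    neg = ~pos
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
    ez = np.exp(z[neg])
    out[neg] = ez / (1.0 + ez)
    return out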
I have also attached my code in snippet 3. I have cut out all the other non-essential bits (like gradient checking) to keep it as short as possible.
I would appreciate any help with this, as I cannot even find where it is going wrong, let alone fix it. Thank you.
Photos:
[image 1: J (cost function) decreasing to 1.8 after 12 iterations]
[image 2: gradient checking before optimizing; the two gradients look very similar]
Code snippet 1 (Octave output):
Initializing Neural Network Parameters ...
initial1
-0.0100100
-0.0771400
-0.1113800
-0.0230100
0.0547800
-0.0505500
-0.0731200
-0.0988700
0.0128000
-0.0855400
-0.1002500
-0.1137200
-0.0669300
-0.0999900
0.0084500
-0.0363200
-0.0588600
-0.0431100
-0.1133700
-0.0326300
0.0282800
0.0052400
-0.1134600
-0.0617700
0.0267600
initial2
0.0273700
0.1026000
-0.0502100
-0.0699100
0.0190600
0.1004000
0.0784600
-0.0075900
-0.0362100
0.0286200
Doing fminunc
Training Neural Network...
Iteration 100 | Cost: 6.219605e-01
theta1
-0.0099719
-0.0768462
-0.1109559
-0.0229224
0.0545714
-0.0503575
-0.0728415
-0.0984935
0.0127513
-0.0852143
-0.0998682
-0.1132869
-0.0666751
-0.0996092
0.0084178
-0.0361817
-0.0586359
-0.0429458
-0.1129383
-0.0325057
0.0281723
0.0052200
-0.1130279
-0.0615348
0.0266581
theta2
1.124918
1.603780
-1.266390
-0.848874
0.037956
-1.360841
2.145562
-1.448657
-1.262285
-1.357635
Code snippet 2 (Python output):
theta1_initial
[-0.01001 -0.07714 -0.11138 -0.02301 0.05478 -0.05055 -0.07312 -0.09887
0.0128 -0.08554 -0.10025 -0.11372 -0.06693 -0.09999 0.00845 -0.03632
-0.05886 -0.04311 -0.11337 -0.03263 0.02828 0.00524 -0.11346 -0.06177
0.02676]
theta2_initial
[ 0.02737 0.1026 -0.05021 -0.06991 0.01906 0.1004 0.07846 -0.00759
-0.03621 0.02862]
Doing fminunc
-c:32: RuntimeWarning: overflow encountered in exp
theta1
[-0.00997202 -0.07680716 -0.11086841 -0.02292044 0.05455335 -0.05034252
-0.07280686 -0.09842603 0.01275117 -0.08516515 -0.0997987 -0.11319546
-0.06664666 -0.09954009 0.00841804 -0.03617494 -0.05861458 -0.04293555
-0.1128474 -0.0325006 0.02816879 0.00522031 -0.1129369 -0.06151103
0.02665508]
theta2
[ 0.27954826 -0.08007496 -0.36449273 -0.22988024 0.06849659 -0.47803973
1.09023041 -0.25570559 -0.24537494 -0.40341995]
Code snippet 3 (Python code):
#-----------------BEGIN HEADERS-----------------
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import csv
import scipy
#-----------------END HEADERS-----------------

#-----------------BEGIN FUNCTION 1-----------------
def randinitialize(L_in, L_out):
    w = np.zeros((L_out, 1 + L_in))
    epsilon_init = 0.12
    w = np.random.rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init
    return w
#-----------------END FUNCTION 1-----------------

#-----------------BEGIN FUNCTION 2-----------------
def sigmoid(lz):
    g = 1.0/(1.0+np.exp(-lz))
    return g
#-----------------END FUNCTION 2-----------------

#-----------------BEGIN FUNCTION 3-----------------
def sigmoidgradient(lz):
    g = np.multiply(sigmoid(lz), (1-sigmoid(lz)))
    return g
#-----------------END FUNCTION 3-----------------

#-----------------BEGIN FUNCTION 4-----------------
def nncostfunction(ltheta_ravel, linput_layer_size, lhidden_layer_size, lnum_labels, lx, ly, llambda_reg):
    ltheta1 = np.array(np.reshape(ltheta_ravel[:lhidden_layer_size * (linput_layer_size + 1)], (lhidden_layer_size, (linput_layer_size + 1))))
    ltheta2 = np.array(np.reshape(ltheta_ravel[lhidden_layer_size * (linput_layer_size + 1):], (lnum_labels, (lhidden_layer_size + 1))))
    ltheta1_grad = np.zeros((np.shape(ltheta1)))
    ltheta2_grad = np.zeros((np.shape(ltheta2)))
    y_matrix = []
    lm = np.shape(lx)[0]
    eye_matrix = np.eye(lnum_labels)
    for i in range(len(ly)):
        y_matrix.append(eye_matrix[int(ly[i])-1, :])  # the minus one as python is zero based
    y_matrix = np.array(y_matrix)
    a1 = np.hstack((np.ones((lm, 1)), lx)).astype(float)
    z2 = sigmoid(ltheta1.dot(a1.T))
    a2 = (np.concatenate((np.ones((np.shape(z2)[1], 1)), z2.T), axis=1)).astype(float)
    a3 = sigmoid(ltheta2.dot(a2.T))
    h = a3
    J_unreg = 0
    J = 0
    J_unreg = (1/float(lm))*np.sum(\
        -np.multiply(y_matrix, np.log(h.T))\
        -np.multiply((1-y_matrix), np.log(1-h.T))\
        , axis=None)
    J = J_unreg + (llambda_reg/(2*float(lm)))*\
        (np.sum(\
        np.multiply(ltheta1[:, 1:], ltheta1[:, 1:])\
        , axis=None)+np.sum(\
        np.multiply(ltheta2[:, 1:], ltheta2[:, 1:])\
        , axis=None))
    delta3 = a3.T - y_matrix
    delta2 = np.multiply((delta3.dot(ltheta2[:, 1:])), (sigmoidgradient(ltheta1.dot(a1.T))).T)
    cdelta2 = ((a2.T).dot(delta3)).T
    cdelta1 = ((a1.T).dot(delta2)).T
    ltheta1_grad = (1/float(lm))*cdelta1
    ltheta2_grad = (1/float(lm))*cdelta2
    theta1_hold = ltheta1
    theta2_hold = ltheta2
    theta1_hold[:, 0] = 0
    theta2_hold[:, 0] = 0
    ltheta1_grad = ltheta1_grad + (llambda_reg/float(lm))*theta1_hold
    ltheta2_grad = ltheta2_grad + (llambda_reg/float(lm))*theta2_hold
    thetagrad_ravel = np.concatenate((np.ravel(ltheta1_grad), np.ravel(ltheta2_grad)))
    return (J, thetagrad_ravel)
#-----------------END FUNCTION 4-----------------

#-----------------BEGIN FUNCTION 5-----------------
def predict(ltheta1, ltheta2, x):
    m, n = np.shape(x)
    p = np.zeros(m)
    h1 = sigmoid((np.hstack((np.ones((m, 1)), x.astype(float)))).dot(ltheta1.T))
    h2 = sigmoid((np.hstack((np.ones((m, 1)), h1))).dot(ltheta2.T))
    for i in range(0, np.shape(h2)[0]):
        p[i] = np.argmax(h2[i, :])
    return p
#-----------------END FUNCTION 5-----------------

## Setup the parameters you will use for this exercise
input_layer_size = 784  # 28x28 Input Images of Digits
hidden_layer_size = 25  # 25 hidden units
num_labels = 10         # 10 labels, from 0 to 9

data = []
# Reading in data, split into X and y, rewrite label 0 to 10 (for easy comparison to the course)
with open('train.csv', 'rb') as csvfile:
    has_header = csv.Sniffer().has_header(csvfile.read(1024))
    csvfile.seek(0)  # rewind
    data_csv = csv.reader(csvfile, delimiter=',')
    if has_header:
        next(data_csv)
    for row in data_csv:
        data.append(row)
data = np.array(data)
x = data[:, 1:]
y = data[:, 0]
y = y.astype(int)
for i in range(len(y)):
    if y[i] == 0:
        y[i] = 10

# Set basic parameters
m, n = np.shape(x)
lambda_reg = 1.0

# Randomly initialize weights for Theta_initial
#theta1_initial = np.genfromtxt('tt1.csv', delimiter=',')
#theta2_initial = np.genfromtxt('tt2.csv', delimiter=',')
theta1_initial = randinitialize(input_layer_size, hidden_layer_size)
theta2_initial = randinitialize(hidden_layer_size, num_labels)
theta_initial_ravel = np.concatenate((np.ravel(theta1_initial), np.ravel(theta2_initial)))

# Doing optimize
fmin = scipy.optimize.minimize(fun=nncostfunction, x0=theta_initial_ravel, args=(input_layer_size, hidden_layer_size, num_labels, x, y, lambda_reg), method='L-BFGS-B', jac=True, options={'maxiter': 10, 'disp': True})
fmin
theta1 = np.array(np.reshape(fmin.x[:hidden_layer_size * (input_layer_size + 1)], (hidden_layer_size, (input_layer_size + 1))))
theta2 = np.array(np.reshape(fmin.x[hidden_layer_size * (input_layer_size + 1):], (num_labels, (hidden_layer_size + 1))))

p = predict(theta1, theta2, x)
for i in range(len(y)):
    if y[i] == 10:
        y[i] = 0
correct = [1 if a == b else 0 for (a, b) in zip(p, y)]
accuracy = (sum(map(int, correct)) / float(len(correct)))
print 'accuracy = {0}%'.format(accuracy * 100)
I think I have fixed the problem: it seems I messed up the index. It should be:
y_matrix.append(eye_matrix[int(ly[i]), :])
instead of:
y_matrix.append(eye_matrix[int(ly[i])-1, :])
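As a side note (an assumed simplification, not from the original answer), once the labels are 0-based the whole loop can be replaced by one vectorized indexing step:

import numpy as np
# np.eye(n)[labels] picks row labels[i] for each sample, which is exactly
# its one-hot encoding.
labels = np.array([2, 0, 1])
y_matrix = np.eye(3)[labels]
print(y_matrix)
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]]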

Separating Gaussian components of a curve using Python

I am trying to deblend the emission lines of a low-resolution spectrum in order to extract their Gaussian components. This plot represents the kind of data I am using:
After searching a bit, the only option I found was the gauest function from the kmpfit package (http://www.astro.rug.nl/software/kapteyn/kmpfittutorial.html#gauest). I have copied their example, but I cannot make it work.
I wonder if anyone could offer me an alternative way to do this, or tell me how to correct my code:
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize

def CurveData():
    x = np.array([3963.67285156, 3964.49560547, 3965.31835938, 3966.14111328, 3966.96362305,
                  3967.78637695, 3968.60913086, 3969.43188477, 3970.25463867, 3971.07714844,
                  3971.89990234, 3972.72265625, 3973.54541016, 3974.36791992, 3975.19067383])
    y = np.array([1.75001533e-16, 2.15520995e-16, 2.85030769e-16, 4.10072843e-16, 7.17558032e-16,
                  1.27759917e-15, 1.57074192e-15, 1.40802933e-15, 1.45038722e-15, 1.55195653e-15,
                  1.09280316e-15, 4.96611341e-16, 2.68777266e-16, 1.87075114e-16, 1.64335999e-16])
    return x, y

def FindMaxima(xval, yval):
    xval = np.asarray(xval)
    yval = np.asarray(yval)
    sort_idx = np.argsort(xval)
    yval = yval[sort_idx]
    gradient = np.diff(yval)
    maxima = np.diff((gradient > 0).view(np.int8))
    ListIndeces = np.concatenate((([0],) if gradient[0] < 0 else ()) + (np.where(maxima == -1)[0] + 1,) + (([len(yval)-1],) if gradient[-1] > 0 else ()))
    X_Maxima, Y_Maxima = [], []
    for index in ListIndeces:
        X_Maxima.append(xval[index])
        Y_Maxima.append(yval[index])
    return X_Maxima, Y_Maxima

def GaussianMixture_Model(p, x, ZeroLevel):
    y = 0.0
    N_Comps = int(len(p) / 3)
    for i in range(N_Comps):
        A, mu, sigma = p[i*3:(i+1)*3]
        y += A * np.exp(-(x-mu)*(x-mu)/(2.0*sigma*sigma))
    Output = y + ZeroLevel
    return Output

def Residuals_GaussianMixture(p, x, y, ZeroLevel):
    return GaussianMixture_Model(p, x, ZeroLevel) - y

Wave, Flux = CurveData()
Wave_Maxima, Flux_Maxima = FindMaxima(Wave, Flux)
EmLines_Number = len(Wave_Maxima)
ContinuumLevel = 1.64191e-16

# Define initial values: amplitude, centre and width for each detected maximum
p_0 = []
for i in range(EmLines_Number):
    p_0.append(Flux_Maxima[i])
    p_0.append(Wave_Maxima[i])
    p_0.append(2.0)

p1, conv = optimize.leastsq(Residuals_GaussianMixture, p_0[:], args=(Wave, Flux, ContinuumLevel))

Fig = plt.figure(figsize=(16, 10))
Axis1 = Fig.add_subplot(111)
Axis1.plot(Wave, Flux, label='Emission line')
Axis1.plot(Wave, GaussianMixture_Model(p1, Wave, ContinuumLevel), 'r', label='Fit with optimize.leastsq')
print(p1)
Axis1.plot(Wave, GaussianMixture_Model([p1[0], p1[1], p1[2]], Wave, ContinuumLevel), 'g:', label='Gaussian components')
Axis1.plot(Wave, GaussianMixture_Model([p1[3], p1[4], p1[5]], Wave, ContinuumLevel), 'g:')
Axis1.set_xlabel(r'Wavelength $(\AA)$')
Axis1.set_ylabel('Flux ' + r'$(erg\,cm^{-2} s^{-1} \AA^{-1})$')
plt.legend()
plt.show()
A typical simplistic way to fit:
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize

def model(p, x):
    A, x1, sig1, B, x2, sig2 = p
    return A*np.exp(-(x-x1)**2/sig1**2) + B*np.exp(-(x-x2)**2/sig2**2)

def res(p, x, y):
    return model(p, x) - y

p0 = [1e-15, 3968, 2, 1e-15, 3972, 2]
p1, conv = optimize.leastsq(res, p0[:], args=(x, y))

plt.plot(x, y, '+')  # data
# fitted function
x_fit = np.arange(3962, 3976, 0.1)
plt.plot(x_fit, model(p1, x_fit), '-')
Where p0 is your initial guess. By the looks of things, you might want to use Lorentzian functions...
If you use full_output=True, you get all kinds of info about the fitting. Also check out curve_fit and the fmin* functions in scipy.optimize. There are plenty of wrappers around these, but often, as here, it's easier to use them directly.
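For instance, a minimal curve_fit version of the same two-component fit (a sketch using the question's CurveData arrays; curve_fit expects the model as f(x, *params)):

import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, A, x1, sig1, B, x2, sig2):
    # Same parametrization as the model above (no 1/2 factor in the exponent).
    return A*np.exp(-(x - x1)**2/sig1**2) + B*np.exp(-(x - x2)**2/sig2**2)

x, y = CurveData()  # data from the question
p0 = [1e-15, 3968, 2, 1e-15, 3972, 2]
popt, pcov = curve_fit(two_gauss, x, y, p0=p0)
perr = np.sqrt(np.diag(pcov))  # 1-sigma parameter uncertainties
print(popt, perr)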
