How to increase accuracy in a Perceptron script using Python

I'm trying to learn the Perceptron, and Python is the easiest language for me right now.
I'm using the training and test data sets from Ivan Nunves' book about AI.
My code is below.
from random import random
import csv

# load training data and prepend the bias input (-1) to every row
omega = []
with open(r'./training_data.csv') as file:
    spamreader = csv.reader(file)
    for row in spamreader:
        row.insert(0, -1)
        omega.append(row)

eta = 0.01  # learning rate
w = [random() for _ in range(len(omega[0]))]
epochs = 100

# perceptron training loop
for _ in range(epochs):
    for k in omega:
        x = [float(xi) for xi in k[:-1]]
        d = int(k[-1])
        u = sum(wi * xi for wi, xi in zip(w, x))
        y = -1 if u < 0 else 1
        if y != d:
            w = [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]

# training accuracy
error = 0
for k in omega:
    x = [float(xi) for xi in k[:-1]]
    d = int(k[-1])
    u = sum(wi * xi for wi, xi in zip(w, x))
    y = -1 if u < 0 else 1
    if y != d:
        error += 1
print(100 - error / len(omega) * 100)

# classify the unlabelled samples (data.csv already contains the -1 bias column)
x = []
with open(r'./data.csv') as file:
    sampreader = csv.reader(file)
    for row in sampreader:
        x.append(row)
for xk in x:
    u = sum(wi * float(xi) for wi, xi in zip(w, xk))
    y = -1 if u < 0 else 1
    print(f'{xk} {y}')
My environment is Google Colab, with the two CSV files mentioned above for the data.
training_data.csv
-0.6508,1.0970,4.2009,-1
-1.4492,8.8960,4.4005,-1
2.0850,6.8760,12.0710,-1
... (30 lines total)
data.csv
-1,-3.665,620,5.9891
-1,-7.842,1.1267,5.5912
-1,3.012,5.611,5.8234
... (10 lines total)
I cannot get an accuracy above 60%. Could someone tell me if I'm doing it right?
Note: this is my first post, so maybe I've made some mistakes asking my question.
I've tried to implement the Perceptron in Python, but the accuracy is not reasonable.
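One quick sanity check: train scikit-learn's Perceptron (preinstalled in Colab) on the same file and compare. If it also plateaus near 60%, the data is probably not linearly separable and more epochs won't help; if it reaches ~100%, the training loop above is the problem. A minimal sketch, assuming training_data.csv has three feature columns and a -1/1 label in the last column:

import csv
from sklearn.linear_model import Perceptron

# load features and labels (no manual bias column; sklearn fits its own intercept)
X, d = [], []
with open('./training_data.csv') as file:
    for row in csv.reader(file):
        X.append([float(v) for v in row[:-1]])
        d.append(int(float(row[-1])))

clf = Perceptron(max_iter=1000, tol=None)
clf.fit(X, d)
print(clf.score(X, d) * 100)  # training accuracy in percent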

Related

Linear regression outputs "inf" value

I'm trying to learn linear regression and gave this problem a try. The adjusted b (bias) and m (linear coefficient) are being output as "inf" or "-inf". What should I do?
Sorry if the problem in the code is obvious, I'm new at this.
from matplotlib import pyplot as plt
import random

x = [1,2,3,3,4,4,3,2,1,2,5,4]
y = [1,2,2,1,3,4,1,1,2,3,4,5]
b = random.random()
m = random.random()
learning_rate = 0.3
iterations = 1000

for i in range(iterations):
    for k in range(len(x)):
        X = m * x[k] + b
        derivative_error = 2 * (X - y[k])
        dX_dm = x[k]
        dX_db = 1
        m += derivative_error * dX_dm * learning_rate
        b += derivative_error * learning_rate
If I understand it right, you are trying to use gradient descent to fit the linear regression model. Here are the problems with your approach:
First:
The derivative step is incorrect. Instead of
X = m * x[k] + b
derivative_error = 2 * (X - y[k])
dX_dm = x[k]
dX_db = 1
m += derivative_error * dX_dm * learning_rate
b += derivative_error * learning_rate
it should take the derivative of the error with respect to m and b, and move against it.
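Spelling that out: for a single sample the squared error and its partial derivatives are

$$E_k = (m x_k + b - y_k)^2, \qquad \frac{\partial E_k}{\partial m} = 2\,(m x_k + b - y_k)\,x_k, \qquad \frac{\partial E_k}{\partial b} = 2\,(m x_k + b - y_k),$$

and gradient descent updates with a minus sign, $m \leftarrow m - \eta\,\partial E/\partial m$ and $b \leftarrow b - \eta\,\partial E/\partial b$. The posted code computes a usable derivative but then applies it with +=, which climbs the error surface instead of descending it.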
Second:
You shouldn't update the parameters every time you see a single data point x[k], as the inner for-loop of your code does:
for k in range(len(x)):
    X = m * x[k] + b
    derivative_error = 2 * (X - y[k])
    dX_dm = x[k]
    dX_db = 1
    m += derivative_error * dX_dm * learning_rate
    b += derivative_error * learning_rate
Instead, accumulate the errors over all of x and average them; use the averaged error to update your m and b.
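In symbols, with $n$ samples the batch update (matching the solution code below) is

$$m \leftarrow m - \eta\,\frac{1}{n}\sum_{k=1}^{n} 2\,(m x_k + b - y_k)\,x_k, \qquad b \leftarrow b - \eta\,\frac{1}{n}\sum_{k=1}^{n} 2\,(m x_k + b - y_k).$$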
Third:
Your learning_rate of 0.3 is perhaps too large: each update 'overshoots' the optimum point, and hence the values of m and b run off to wilder and wilder numbers, all the way to inf.
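A one-dimensional illustration of that overshoot, using the toy loss $E(m) = m^2$ rather than the regression loss: the update $m \leftarrow m - \eta \cdot 2m = (1 - 2\eta)\,m$ multiplies $m$ by $(1 - 2\eta)$ at every step, so as soon as $|1 - 2\eta| > 1$ the iterates grow without bound instead of converging.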
That said, the following is my solution, with an error function to check the total squared error you get at every iteration.
def error(x, y, m, b):
    error = 0
    for k in range(len(x)):
        error = error + ((x[k] * m + b - y[k]) ** 2)
    return error
from matplotlib import pyplot as plt
import random

x = [1,2,3,3,4,4,3,2,1,2,5,4]
y = [1,2,2,1,3,4,1,1,2,3,4,5]
b = random.random()
m = random.random()
learning_rate = 0.01
iterations = 100

for i in range(iterations):
    print(error(x, y, m, b))
    d_m = 0
    d_b = 0
    for k in range(len(x)):
        # Calculate the derivative w.r.t. m and accumulate it
        derivative_error_m = -2 * (y[k] - m*x[k] - b) * x[k]
        d_m = d_m + derivative_error_m
        # Calculate the derivative w.r.t. b and accumulate it
        derivative_error_b = -2 * (y[k] - m*x[k] - b)
        d_b = d_b + derivative_error_b
    # Average the accumulated derivatives of the error
    d_m = d_m / len(x)
    d_b = d_b / len(x)
    # Update parameters in the negative direction of the gradient
    m = m - d_m * learning_rate
    b = b - d_b * learning_rate
After running the code for iterations = 10, you get:
15.443121587504484
14.019097680461613
13.123926121402514
12.561191094860135
12.207425702911078
11.985018705759003
11.8451837105445
11.757253610772613
11.70195107555181
11.66715838203049
where errors are shrinking at every update.
Besides, you should also notice that for a simple model like linear regression there is a nice closed-form solution, which gets you the optimum immediately without iterative methods such as gradient descent.
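For reference, a minimal sketch of that closed-form (ordinary least squares) fit on the same data:

x = [1,2,3,3,4,4,3,2,1,2,5,4]
y = [1,2,2,1,3,4,1,1,2,3,4,5]
n = len(x)
x_mean = sum(x) / n
y_mean = sum(y) / n

# OLS: m = sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))**2)
m = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) \
    / sum((xi - x_mean) ** 2 for xi in x)
# and b = mean(y) - m * mean(x)
b = y_mean - m * x_mean
print(m, b)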

Txt Output is not what I am expecting

The goal of the assignment is to make a heat-flow map of a soil profile at different depths (0, 5, and 10). The following code runs without errors, but the output text is not what is desired.
import math

kappa = 0.10e-6   # m^2/s - thermal diffusivity
C = 0.58e6        # J/m^3/K - heat capacity
delta_z = 0.01    # m - grid spacing
model_depth = 1   # m
delta_t = 60      # s - time step

grid_boundaries = [x * delta_z for x in range(0, int(model_depth/delta_z))]
Flux = [x * 0 for x in range(0, int(model_depth/delta_z))]
grid_centres = []
T = []
Tsfc = []
T5 = []
T10 = []

for i in range(0, len(grid_boundaries) - 1):
    grid_centres.append((grid_boundaries[i] + grid_boundaries[i+1]) / 2)
    T.append(20)

for i in range(1, int(86400/delta_t) * 7):
    Flux[0] = 5 * math.sin(2*math.pi * i/(86400/delta_t))
    for j in range(1, len(grid_centres)):
        Flux[j] = -kappa * C * (T[j] - T[j-1]) / delta_z
    for j in range(0, len(grid_centres)):
        DeltaT = (Flux[j+1] - Flux[j] * delta_t / C / delta_z)
        Tsfc.append(T[0])
        T5.append(T[5])
        T10.append(T[10])
        T[j] = T[j] - DeltaT

with open('model_output.txt', 'w') as f:
    for i in range(1, int(86400/delta_t) * 7):
        print("%f\t%f\t%f\t%f" % (i/86400 * delta_t,
              Tsfc[i-1], T5[i-1], T10[i-1]), file=f)
    f.close
My output is supposed to be 4 columns (Time, Tsfc, T5, and T10); however, the second, third, and fourth columns all come out as 20.000000. Is there something I have missed, or something I can add? Sorry, I am very new to Python and am following a lab outline, so without an error I am unsure where I have gone wrong.
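If it helps, two things stand out, assuming the intended update is the standard explicit scheme DeltaT = (Flux[j+1] - Flux[j]) * delta_t / (C * delta_z). First, operator precedence in the posted line applies delta_t / C / delta_z only to Flux[j], not to the flux difference. Second, the three append calls sit inside the inner j loop, so each time step logs len(grid_centres) copies of the temperatures; the indices 0..10078 the output loop reads then all come from the first hundred-odd time steps, when the profile is still essentially 20 everywhere. A sketch of the corrected inner loop under those assumptions:

for j in range(0, len(grid_centres)):
    DeltaT = (Flux[j+1] - Flux[j]) * delta_t / (C * delta_z)
    T[j] = T[j] - DeltaT
# log once per time step, after the whole profile has been updated
Tsfc.append(T[0])
T5.append(T[5])
T10.append(T[10])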

Why isn't my gradient descent algorithm working?

I made a gradient descent algorithm in Python and it doesn't work. My m and b values keep increasing and never stop, until I get -inf or an "overflow encountered in square" error.
import numpy as np

x = np.array([2,3,4,5])
y = np.array([5,7,9,5])
m = np.random.randn()
b = np.random.randn()
error = 0
lr = 0.0001

for q in range(1000):
    for i in range(len(x)):
        ypred = m*x[i] + b
        error += (ypred - y[i]) ** 2
    m = m - (x * error) * lr
    b = b - (lr * error)
print(b, m)
I expected my algorithm to return the best m and b values for my data (x and y) but it didn't work. What is going wrong?
import numpy as np

x = np.array([2,3,4,5])
y = 0.3*x + 0.6
m = np.random.randn()
b = np.random.randn()
lr = 0.001

for q in range(100000):
    ypred = m*x + b
    error = (1./(2*len(x))) * np.sum(np.square(ypred - y))  # eq 1
    m = m - lr * np.sum((ypred - y)*x)/len(x)  # eq 2 and eq 4
    b = b - lr * np.sum(ypred - y)/len(x)      # eq 3 and eq 5
print(m, b)
Output:
0.30007724168011807 0.5997039817571881
Math behind it: the cost is $E = \frac{1}{2n}\sum_i (\hat y_i - y_i)^2$ (eq 1); its gradients are $\frac{\partial E}{\partial m} = \frac{1}{n}\sum_i (\hat y_i - y_i)\,x_i$ (eq 2) and $\frac{\partial E}{\partial b} = \frac{1}{n}\sum_i (\hat y_i - y_i)$ (eq 3); the updates are $m \leftarrow m - \eta\,\frac{\partial E}{\partial m}$ (eq 4) and $b \leftarrow b - \eta\,\frac{\partial E}{\partial b}$ (eq 5).
Use numpy vectorized operations to avoid loops.
I think you implemented the formula incorrectly:
- sum x times the error over all samples
- divide by the length of x
See the code below:
import numpy as np

x = np.array([2,3,4,5])
y = np.array([5,7,9,11])
m = np.random.randn()
b = np.random.randn()
error = 0
lr = 0.1
print(b, m)

for q in range(1000):
    ypred = []
    for i in range(len(x)):
        temp = m*x[i] + b
        ypred.append(temp)
        error += temp - y[i]
    m = m - np.sum(x * (ypred - y)) * lr / len(x)
    b = b - np.sum(lr * (ypred - y)) / len(x)
print(b, m)
Output:
-1.198074371762264 0.058595039571115955 # initial weights
0.9997389097653074 2.0000681277214487 # Final weights
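Taking the vectorization advice all the way, the inner loop can be dropped too. A minimal sketch of the fully vectorized version of the same update (same data and learning rate as above):

import numpy as np

x = np.array([2,3,4,5])
y = np.array([5,7,9,11])
m = np.random.randn()
b = np.random.randn()
lr = 0.1

for q in range(1000):
    ypred = m * x + b  # predictions for all samples at once
    m = m - np.sum((ypred - y) * x) * lr / len(x)
    b = b - np.sum(ypred - y) * lr / len(x)

print(b, m)  # converges to roughly b = 1, m = 2 for this data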

Scipy Minimization TNC Working, But Not CG

I'm trying to complete week 4 of the Machine Learning course on Coursera. The assignment uses the MNIST data for multi-class classification.
The dimensions are X (5000, 401), y (5000, 1), theta (10, 401), which start off as arrays. X has had a column of 1's inserted as the first feature column.
My cost and gradient functions are below:
import numpy as np

def sigmoid(z):
    g = 1 / (1 + np.exp(-z))
    return g

def lrCostFunction(theta, X, y, my_lambda):
    m = float(len(X))
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    # cost function:
    term1 = np.multiply(-y, np.log(sigmoid(X*theta.T)))
    term2 = np.multiply((1-y), np.log(1-sigmoid(X*theta.T)))
    reg = np.power(theta[:, 1:theta.shape[1]], 2)
    J = np.sum(term1-term2)/m + (my_lambda/(2.0*m) * np.sum(reg))
    return J

def gradient(theta, X, y, my_lambda):
    m = float(len(X))
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    # gradient:
    error = sigmoid(X * theta.T) - y
    g = (X.T * error / m).T + ((my_lambda/m) * theta)
    g[0, 0] = np.sum(np.multiply(error, X[:, 0])) / m
    return g
Here is my one-vs-all classification function with the TNC optimizer:
import scipy.optimize as opt

def oneVsAll(X, y, num_labels, my_lambda):
    m = float(X.shape[0])
    n = float(X.shape[1]) - 1
    all_theta = np.zeros((num_labels, n+1))
    for K in range(1, num_labels + 1):
        theta = np.zeros(n+1)
        y_logical = np.array([1 if j == K else 0 for j in y]).reshape(m, 1)
        opt_theta = opt.minimize(fun=lrCostFunction, x0=theta,
                                 args=(X, y_logical, my_lambda),
                                 method='TNC', jac=gradient).x
        all_theta[K-1, :] = opt_theta
    return all_theta
When I try to run CG, however, it returns this error at line 8: "shapes (1,401) and (1,401) not aligned: 401 (dim 1) != 1 (dim 0)":
def oneVsAll(X, y, num_labels, my_lambda):
    m = float(X.shape[0])
    n = float(X.shape[1]) - 1
    all_theta = np.zeros((num_labels, n+1))
    for K in range(1, num_labels + 1):
        theta = np.zeros(n+1)
        y_logical = np.array([1 if j == K else 0 for j in y]).reshape(m, 1)
        opt_theta = opt.fmin_cg(f=lrCostFunction, x0=theta,
                                fprime=gradient,
                                args=(X, y_logical, my_lambda))
        all_theta[K-1, :] = opt_theta
    return all_theta
I saw elsewhere that CG only likes 1-D vectors for y. If I try to flatten y or reduce its dimension, however, everything else breaks. Is it generally a bad idea to use np.matrix, as opposed to using np.dot with arrays? I like being able to easily transpose with matrices.
Any help would be greatly appreciated.
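One likely culprit (an inference from the error message, not verified against the course data): opt.fmin_cg does its own vector arithmetic on whatever fprime returns and expects a flat ndarray of shape (n,), while gradient returns a (1, 401) np.matrix; opt.minimize with method='TNC' is more forgiving about this. A minimal sketch of the usual workaround, wrapping the existing gradient:

import numpy as np

# hypothetical wrapper: convert the (1, 401) np.matrix gradient into the
# 1-D ndarray that scipy.optimize.fmin_cg expects from fprime
def gradient_flat(theta, X, y, my_lambda):
    g = gradient(theta, X, y, my_lambda)
    return np.asarray(g).ravel()

# then inside oneVsAll:
# opt_theta = opt.fmin_cg(f=lrCostFunction, x0=theta,
#                         fprime=gradient_flat,
#                         args=(X, y_logical, my_lambda))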

Julia Set Python

I'm trying to make a Julia set in Python, but my output becomes NaN at some early point in the process, and I don't know what causes it.
Just for the sake of confession: my programming classes are not good, I don't really know what I am doing; this is mostly what I've learned from Google.
Here's the code:
import matplotlib.pyplot as plt

c = complex(1.5, -0.6)
xli = []
yli = []

while True:
    z = c
    for i in range(1, 101):
        if abs(z) > 2.0:
            break
        z = z*z + c
    if i > 0 and i < 100:
        break

xi = -1.24
xf = 1.4
yi = -2.9
yf = 2.1

# the loop for the julia set
for k in range(1, 51):
    x = xi + k*(xf-xi)/50
    for n in range(51):
        y = yi + n*(yf-yi)/50
        z = z + x + y*1j
        print z
        for i in range(51):
            z = z*z + c    # the error is coming from somewhere around here
            if abs(z) > 2:  # not sure if this is correct
                xli.append(x)
                yli.append(y)

plt.plot(xli, yli, 'bo')
plt.show()
print xli
print yli
Thank you in advance :)
Just for the sake of confession: I know nothing about Julia sets nor matplotlib.
pyplot seems an odd choice due to its low resolution and the fact that colors can't be specified as a vector alongside X & Y. And had it worked as written, 'bo' would have produced just a grid of blue circles.
Your first while True: loop isn't needed as you've picked what you believe to be a viable c.
Here's my rework of your code:
import matplotlib.pyplot as plt

c = complex(1.5, -0.6)

# image size
img_x = 100
img_y = 100

# drawing area
xi = -1.24
xf = 1.4
yi = -2.9
yf = 2.1

iterations = 8  # maximum iterations allowed (maps to 8 shades of gray)

# the loop for the julia set
results = {}  # pyplot speed optimization to plot all same gray at once

for y in range(img_y):
    zy = y * (yf - yi) / (img_y - 1) + yi
    for x in range(img_x):
        zx = x * (xf - xi) / (img_x - 1) + xi
        z = zx + zy * 1j
        for i in range(iterations):
            if abs(z) > 2:
                break
            z = z * z + c
        if i not in results:
            results[i] = [[], []]
        results[i][0].append(x)
        results[i][1].append(y)

for i, (xli, yli) in results.items():
    gray = 1.0 - i / iterations
    plt.plot(xli, yli, '.', color=(gray, gray, gray))

plt.show()
OUTPUT: (image of the rendered Julia set in 8 shades of gray)
