I'm given a problem that explicitly asks me not to use numpy and pandas
Problem: Select an element from the list A at random, with probability proportional to its magnitude. Assume we repeat the experiment 100 times with replacement; in each experiment, print the number selected from A.
Ex 1: A = [0 5 27 6 13 28 100 45 10 79]
Let f(x) denote the number of times x is selected in the 100 experiments. The expected ordering is:
f(100) > f(79) > f(45) > f(28) > f(27) > f(13) > f(10) > f(6) > f(5) > f(0)
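In other words, each element x should be drawn with probability

p(x) = x / sum(A)

i.e., its magnitude normalized by the total.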
Initially, I took the sum of all the elements of list A.
I then divided (in order to normalize) each element of list A by the sum and stored these values in another list (d_dash).
I then created another empty list (d_bar) that holds the cumulative sum of the elements of d_dash.
Finally, I created a variable r, where r = random.uniform(0.0, 1.0), and then, for each index k up to the length of d_dash, compared r to d_dash[k]; if r <= d_dash[k], return A[k].
However, I'm getting the error "list index out of range" near d_dash[j].append((A[j]/sum)); I'm not sure what the issue is, as I did not exceed the index of either d_dash or A.
Also, is my logic correct? A better way to do this would also be appreciated.
Thanks in advance.
import random

A = [0,5,27,6,13,28,100,45,10,79]

def propotional_sampling(A):
    sum=0
    for i in range(len(A)):
        sum = sum + A[i]

    d_dash=[]
    for j in range(len(A)):
        d_dash[j].append((A[j]/sum))

    #cumulative sum
    d_bar =[]
    d_bar[0]= 0
    for k in range(len(A)):
        d_bar[k] = d_bar[k] + d_dash[k]

    r = random.uniform(0.0,1.0)
    number=0
    for p in range(len(d_bar)):
        if(r<=d_bar[p]):
            number=d_bar[p]
    return number

def sampling_based_on_magnitued():
    for i in range(1,100):
        number = propotional_sampling(A)
        print(number)

sampling_based_on_magnitued()
Below is the code to do the same:

import random

A = [0, 5, 27, 6, 13, 28, 100, 45, 10, 79]

#Sum of all the elements in the array
S = sum(A)

#Calculating normalized sum
norm_sum = [ele/S for ele in A]

#Calculating cumulative normalized sum
cum_norm_sum = []
cum_norm_sum.append(norm_sum[0])
for itr in range(1, len(norm_sum), 1):
    cum_norm_sum.append(cum_norm_sum[-1] + norm_sum[itr])

def prop_sampling(cum_norm_sum):
    """
    This function returns an element
    with proportional sampling.
    """
    r = random.random()
    for itr in range(len(cum_norm_sum)):
        if r < cum_norm_sum[itr]:
            return A[itr]

#Sampling 1000 elements from the given list with proportional sampling
sampled_elements = []
for itr in range(1000):
    sampled_elements.append(prop_sampling(cum_norm_sum))
[Image: bar chart of the frequency of each element among the sampled points.]
Clearly, the number of times each element appears is proportional to its magnitude.
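Since the plot itself isn't reproduced here, the same frequencies can be inspected with collections.Counter (a small sketch added for convenience, using sampled_elements from the code above):

from collections import Counter

freq = Counter(sampled_elements)
# elements sorted from most to least frequently drawn
for element, count in freq.most_common():
    print(element, count)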
Cumulative sum can be computed by itertools.accumulate. The loop:

for p in range(len(d_bar)):
    if(r<=d_bar[p]):
        number=d_bar[p]

can be substituted by bisect.bisect():
import random
from itertools import accumulate
from bisect import bisect

A = [0,5,27,6,13,28,100,45,10,79]

def propotional_sampling(A, n=100):
    # calculate cumulative sum from A:
    cum_sum = [*accumulate(A)]
    # cum_sum = [0, 5, 32, 38, 51, 79, 179, 224, 234, 313]

    out = []
    for _ in range(n):
        i = random.random()  # i is in [0.0, 1.0)
        idx = bisect(cum_sum, i*cum_sum[-1])  # get index into list A
        out.append(A[idx])

    return out

print(propotional_sampling(A))
Prints (for example):
[10, 100, 100, 79, 28, 45, 45, 27, 79, 79, 79, 79, 100, 27, 100, 100, 100, 13, 45, 100, 5, 100, 45, 79, 100, 28, 79, 79, 6, 45, 27, 28, 27, 79, 100, 79, 79, 28, 100, 79, 45, 100, 10, 28, 28, 13, 79, 79, 79, 79, 28, 45, 45, 100, 28, 27, 79, 27, 45, 79, 45, 100, 28, 100, 100, 5, 100, 79, 28, 79, 13, 100, 100, 79, 28, 100, 79, 13, 27, 100, 28, 10, 27, 28, 100, 45, 79, 100, 100, 100, 28, 79, 100, 45, 28, 79, 79, 5, 45, 28]
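As an aside (an addition, not part of the original answer): since Python 3.6 the standard library can do this weighted sampling directly via random.choices, which accepts per-element weights:

import random

A = [0, 5, 27, 6, 13, 28, 100, 45, 10, 79]
# draw 100 elements with probability proportional to magnitude
samples = random.choices(A, weights=A, k=100)
print(samples)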
The reason you got the "list index out of range" message is that you created an empty list (d_bar = []) and then started assigning values to it (d_bar[k] = d_bar[k] + d_dash[k]). I recommend using the following structure instead. First, define it this way:

d_bar = [0 for i in range(len(A))]

Also, I believe this code will always return the final cumulative value (1.0), as there is no break in the loop; you can resolve this issue by adding break. Here is an updated version of your code:
import random

A = [0, 5, 27, 6, 13, 28, 100, 45, 10, 79]

def pick_a_number_from_list(A):
    sum=0
    for i in A:
        sum+=i

    A_norm=[]
    for j in A:
        A_norm.append(j/sum)

    A_cum=[0 for i in range(len(A))]
    A_cum[0]=A_norm[0]
    for k in range(len(A_norm)-1):
        A_cum[k+1]=A_cum[k]+A_norm[k+1]

    r = random.uniform(0.0,1.0)
    number=0
    for p in range(len(A_cum)):
        if(r<=A_cum[p]):
            number=A[p]
            break
    return number

def sampling_based_on_magnitued():
    for i in range(100):  # 100 experiments
        number = pick_a_number_from_list(A)
        print(number)

sampling_based_on_magnitued()
In MATLAB, I am able to identify the values in data_b that come closest to the values in data_a, along with the indices indicating where in the matrix they occur, using the following code:
clear all; close all; clc;

data_a = [0; 15; 30; 45; 60; 75; 90];
data_b = randi([0, 90], [180, 101]);

[rows_a,cols_a] = size(data_a);
[rows_b,cols_b] = size(data_b);

val1 = zeros(rows_a,cols_b);
ind1 = zeros(rows_a,cols_b);

for i = 1:cols_b
    for j = 1:rows_a
        [val1(j,i),ind1(j,i)] = min(abs(data_b(:,i) - data_a(j)));
    end
end
Since I would like to phase out MATLAB (I will be out of a license eventually), I decided to try the same in Python, without any luck:
import numpy as np

data_a = np.array([[0],[15],[30],[45],[60],[75],[90]])
data_b = np.random.randint(91, size=(180, 101))

[rows_a,cols_a] = data_a.shape
[rows_b,cols_b] = data_b.shape

val1 = np.zeros((rows_a,cols_b))
ind1 = np.zeros((rows_a,cols_b))

for i in range(cols_b):
    for j in range(rows_a):
        [val1[j][i],ind1[j][i]] = np.amin(np.abs(data_b[:][i] - data_a[j]))
The code also produced an error that made me none the wiser:
TypeError: cannot unpack non-iterable numpy.int32 object
If anyone could find the time to explain why I am an ignorant fool, by indicating what I did wrong and what I could do to fix it, I would be grateful, as this has proven to be a major obstacle to my progress.
Thank you.
I think you are facing two problems:
Incorrect use of slicing for multidimensional arrays: use [i, j] instead of [i][j]
Improper translation of min() from MATLAB to NumPy: you have to use both argmin() and min().
Your fixed code would look like:
import numpy as np

# just to make it reproducible in testing, can be commented for production
np.random.seed(0)

data_a = np.array([[0],[15],[30],[45],[60],[75],[90]])
data_b = np.random.randint(91, size=(180, 101))

[rows_a,cols_a] = data_a.shape
[rows_b,cols_b] = data_b.shape

val1 = np.zeros((rows_a,cols_b), dtype=int)
ind1 = np.zeros((rows_a,cols_b), dtype=int)

for i in range(cols_b):
    for j in range(rows_a):
        diff = np.abs(data_b[:, i] - data_a[j])
        ind1[j, i] = np.argmin(diff)
        val1[j, i] = diff[ind1[j, i]]
However, I would avoid direct looping here and instead make good use of broadcasting:

import numpy as np

# just to make it reproducible in testing, can be commented for production
np.random.seed(0)

data_a = np.arange(0, 90 + 1, 15)
data_b = np.random.randint(90 + 1, size=(180, 101))

# broadcast shape (7, 1, 1) against (1, 180, 101) to get (7, 180, 101)
tmp_arr = np.abs(data_a.reshape(-1, 1, 1) - data_b.reshape(1, 180, -1))
min_idxs = np.argmin(tmp_arr, axis=1)
min_vals = np.min(tmp_arr, axis=1)
del tmp_arr  # free the temporary if you no longer need it
where now ind1 == min_idxs and val1 == min_vals, i.e.:
print(np.all(min_idxs == ind1))
# True
print(np.all(min_vals == val1))
# True
Your error has to do with "[val1[j][i],ind1[j][i]] = (a single number)". You are trying to unpack a single value into two targets, which doesn't work in Python. What about this?
import numpy as np

data_a = np.array([[0],[15],[30],[45],[60],[75],[90]])
data_b = np.random.randint(91, size=(180,101))

[rows_a,cols_a] = data_a.shape
[rows_b,cols_b] = data_b.shape

val1 = np.zeros((rows_a,cols_b))
ind1 = np.zeros((rows_a,cols_b))

for i in range(cols_b):
    for j in range(rows_a):
        # note: data_b[:][i] is equivalent to data_b[i], i.e. row i
        array = np.abs(data_b[:][i] - data_a[j])
        val = np.amin(array)
        val1[j][i] = val
        ind1[j][i] = np.where(val == array)[0][0]
NumPy's amin does not return an index, so you need to recover it using np.where. This example does not store the full index, only the index of the first occurrence in the row. You can then pull the value out, since the row order matches the column order in ind1 and data_b. So, for instance, on the first iteration (j = 0):
In [2]: np.abs(data_b[:][0] - data_a[0])
Out[2]:
array([ 3, 31, 19, 53, 28, 81, 10, 11, 89, 15, 50, 22, 40, 81, 43, 29, 63,
72, 22, 37, 54, 12, 19, 78, 85, 78, 37, 81, 41, 24, 29, 56, 37, 86,
67, 7, 38, 27, 83, 81, 66, 32, 68, 29, 71, 26, 12, 27, 45, 58, 17,
57, 54, 55, 23, 21, 46, 58, 75, 10, 25, 85, 70, 76, 0, 11, 19, 83,
81, 68, 8, 63, 72, 48, 18, 29, 0, 47, 85, 79, 72, 85, 28, 28, 7,
41, 80, 56, 59, 44, 82, 33, 42, 23, 42, 89, 58, 52, 44, 65, 65])
In [3]: np.amin(array)
Out[3]: 0
In [4]: val
Out[4]: 0
In [5]: np.where(val == array)[0][0]
Out[5]: 69
In [6]: data_b[0,69]
Out[6]: 0
Edit: it's an image, so the suggested duplicate (How can I efficiently process a numpy array in blocks similar to Matlab's blkproc (blockproc) function) isn't really working for me.
I have the following MATLAB code:

fun = @(block_struct) ...
    std2(block_struct.data) * ones(size(block_struct.data));
B = blockproc(im2double(Icorrected), [4 4], fun);
I want to remake my code, but this time in Python. I have installed scikit-image and I'm trying to work around it like this:
b = np.std(a, axis = 2)
The problem, of course, is that I'm not applying the std to a number of blocks, as above.
How can I do something like this? Start a loop and call the function for each XxX block? Then it wouldn't keep the size it had.
Is there another, more efficient way?
If there is no overlap in the windows you can reshape the data to suit your needs:
Find the mean of 3x3 windows of a 9x9 array.
>>> import numpy as np
>>> a = np.arange(81).reshape(9, 9)
>>> a
array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8],
[ 9, 10, 11, 12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23, 24, 25, 26],
[27, 28, 29, 30, 31, 32, 33, 34, 35],
[36, 37, 38, 39, 40, 41, 42, 43, 44],
[45, 46, 47, 48, 49, 50, 51, 52, 53],
[54, 55, 56, 57, 58, 59, 60, 61, 62],
[63, 64, 65, 66, 67, 68, 69, 70, 71],
[72, 73, 74, 75, 76, 77, 78, 79, 80]])
Find the new shape
>>> window_size = (3,3)
>>> tuple(np.array(a.shape) // window_size) + window_size
(3, 3, 3, 3)
>>> b = a.reshape(3,3,3,3)
Find the mean along the first and third axes.
>>> b.mean(axis = (1,3))
array([[ 10., 13., 16.],
[ 37., 40., 43.],
[ 64., 67., 70.]])
2x2 windows of a 4x4 array:
>>> a = np.arange(16).reshape((4,4))
>>> window_size = (2,2)
>>> tuple(np.array(a.shape) // window_size) + window_size
(2, 2, 2, 2)
>>> b = a.reshape(2,2,2,2)
>>> b.mean(axis = (1,3))
array([[ 2.5, 4.5],
[ 10.5, 12.5]])
It won't work if the window size doesn't divide into the array size evenly. In that case you need some overlap in the windows, or, if you just want overlap, numpy.lib.stride_tricks.as_strided is the way to go - a generic N-D function can be found at Efficient Overlapping Windows with Numpy.
Another option for 2D arrays is sklearn.feature_extraction.image.extract_patches_2d, and for ndarrays, sklearn.feature_extraction.image.extract_patches. Each manipulates the array's strides to produce the patches/windows.
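As an addition to this answer: newer NumPy versions (1.20+) ship numpy.lib.stride_tricks.sliding_window_view, which covers the overlapping case without manual stride arithmetic. A minimal sketch:

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.arange(81).reshape(9, 9)
windows = sliding_window_view(a, (3, 3))  # all overlapping 3x3 windows, shape (7, 7, 3, 3)
means = windows.mean(axis=(2, 3))         # mean of every window, shape (7, 7)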
I did the following:

from skimage import io
import numpy as np

io.use_plugin('pil', 'imread')
a = io.imread(r'C:\Users\Dimitrios\Desktop\polimesa\arizona.jpg')

B = np.zeros((len(a) // 2 + 1, len(a[0]) // 2 + 1))
x = []
for i in range(0, len(a), 2):
    for j in range(0, len(a[0]), 2):
        x.append(a[i][j])
        if i+1 < len(a):
            x.append(a[i+1][j])
        if j+1 < len(a[0]):
            x.append(a[i][j+1])
        if i+1 < len(a) and j+1 < len(a[0]):
            x.append(a[i+1][j+1])
        B[i//2][j//2] = np.std(x)
        x[:] = []

and I think it's correct: iterating over the image in steps of 2, collecting each neighbouring pixel into a list, and computing the std.
edit: later edited for 4x4 blocks.
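The same per-block std can also be written without explicit loops using the reshape trick from the answer above (a sketch, assuming a is a 2-D grayscale image; any odd trailing row/column is trimmed here rather than handled as a partial block):

import numpy as np

h = a.shape[0] // 2 * 2  # trim to even dimensions
w = a.shape[1] // 2 * 2
blocks = a[:h, :w].reshape(h // 2, 2, w // 2, 2)
B = blocks.std(axis=(1, 3))  # one std per 2x2 block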
We can implement blockproc() in Python the following way:

def blockproc(im, block_sz, func):
    h, w = im.shape
    m, n = block_sz
    for x in range(0, h, m):
        for y in range(0, w, n):
            block = im[x:x+m, y:y+n]
            block[:,:] = func(block)
    return im
Now, let's apply it to implement contrast enhancement with local histogram equalization, using the low-contrast moon image (of size 512x512) as input and choosing 64x64 blocks:
from skimage import data, exposure
img = data.moon()
img = img / img.max()
m, n = 64, 64
img_eq = blockproc(img.copy(), (m, n), exposure.equalize_hist)
Display the input and output images:
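(The images appear in the original post; a minimal matplotlib sketch to display them, assuming matplotlib is installed:)

import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
axes[0].imshow(img, cmap='gray')
axes[0].set_title('input')
axes[1].imshow(img_eq, cmap='gray')
axes[1].set_title('locally equalized')
for ax in axes:
    ax.axis('off')
plt.show()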
Note that the function does in-place modification to the image, hence a copy of the input image is passed instead.
I have a multidimensional array called resultsten, with the following shape:

print(np.shape(resultsten))
(3, 3, 6, 10, 1, 9)

On some occasions, I use a part of this array in a program called cleanup, which then further tears this array apart into x, y, and z arrays:

x,y,z = cleanup(resultsten[0,:,:,:,:,:])

def cleanup(resultsmat):
    x = resultsmat[:,:,:,:,2]
    y = resultsmat[:,:,:,:,1]
    z = resultsmat[:,:,:,:,4]
    return x,y,z
However, it might also occur that I do not want to put the entire matrix resultsten into cleanup, thus:

x,y,z = cleanup(resultsten[0,0,:,:,:,:])

This, of course, gives an error, as the indices given to cleanup do not match the indices it expects.
I was wondering if it is possible to have a variable number of dimensions included in a slice: a command that takes all the entries for every dimension up until the last dimension, where it takes only one index.
I've seen that it is possible to do this for all dimensions except the first, e.g.:
resultsten[1,:,:,:,:,:]
gives the same result as:
resultsten[1,:]
I tried this:
resultsten[:,1]
but it does not give the required result; Python interprets it like this:
resultsten[:,1,:,:,:,:]
MWE:

def cleanup(resultsmat):
    x = resultsmat[:,:,:,0,2]
    y = resultsmat[:,:,:,0,1]
    z = resultsmat[:,:,:,0,4]
    return x,y,z

resultsten = np.arange(3*3*6*10*1*9).reshape(3,3,6,10,1,9)
x0,y0,z0 = cleanup(resultsten[0,:,:,:,:,:]) #works
x0,y0,z0 = cleanup(resultsten[0,0,:,:,:,:]) #does not work
I would use a list of slice objects:

import numpy as np

A = np.arange(2*3*4*5).reshape(2,3,4,5)

# [:] <-> slice(None, None, None)
sliceList = [slice(None, None, None)]*(len(A.shape)-1)
# NumPy requires a tuple (not a list) as a multidimensional index
a,b,c,d,e = [A[tuple(sliceList + [i])] for i in range(A.shape[-1])]
Output:
>>> A[:,:,:,0]
array([[[ 0, 5, 10, 15],
[ 20, 25, 30, 35],
[ 40, 45, 50, 55]],
[[ 60, 65, 70, 75],
[ 80, 85, 90, 95],
[100, 105, 110, 115]]])
>>> a
array([[[ 0, 5, 10, 15],
[ 20, 25, 30, 35],
[ 40, 45, 50, 55]],
[[ 60, 65, 70, 75],
[ 80, 85, 90, 95],
[100, 105, 110, 115]]])
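As an aside (not part of the original answer, but standard NumPy): the Ellipsis object expresses the same "all leading axes, one trailing index" idea more concisely:

first = A[..., 0]  # identical to A[:, :, :, 0]
assert (first == a).all()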
Create a 'list' called my_randoms of 10 random numbers between 0 and 100.
This is what I have so far:

import random

my_randoms = []
for i in range(10):
    my_randoms.append(random.randrange(1, 101, 1))
    print(my_randoms)
Unfortunately Python's output is this:
[34]
[34, 30]
[34, 30, 75]
[34, 30, 75, 27]
[34, 30, 75, 27, 8]
[34, 30, 75, 27, 8, 58]
[34, 30, 75, 27, 8, 58, 10]
[34, 30, 75, 27, 8, 58, 10, 1]
[34, 30, 75, 27, 8, 58, 10, 1, 59]
[34, 30, 75, 27, 8, 58, 10, 1, 59, 25]
It generates the 10 numbers like I ask it to, but it generates it one at a time. What am I doing wrong?
You could use random.sample to generate the list with one call:
import random
my_randoms = random.sample(range(100), 10)
That generates numbers in the (inclusive) range from 0 to 99, drawn without replacement, so the 10 numbers are distinct. If you want 1 to 100, you could use this (thanks to @martineau for pointing out my convoluted solution):
my_randoms = random.sample(range(1, 101), 10)
import random
my_randoms = [random.randrange(1, 101, 1) for _ in range(10)]
Fix the indentation of the print statement:

import random

my_randoms = []
for i in range(10):
    my_randoms.append(random.randrange(1, 101, 1))
print(my_randoms)
This is way late, but in case someone finds this helpful: you could use a list comprehension.

rand = [random.randint(0, 100) for x in range(1, 11)]
print(rand)

Example output:

[97, 44, 30, 10, 82, 12, 20, 36, 94, 75]
Cheers!
Here I use the sample method to generate 10 random numbers in the range 0 to 99.
Note: I'm using Python 3's range function (not xrange).
import random
print(random.sample(range(0, 100), 10))
The output is placed into a list:
[11, 72, 64, 65, 16, 94, 29, 79, 76, 27]
xrange() will not work for 3.x.
numpy.random.randint().tolist() is a great alternative for integers in a specified interval:
#[In]:
import numpy as np
np.random.seed(123) #option for reproducibility
np.random.randint(low=0, high=100, size=10).tolist()
#[Out:]
[66, 92, 98, 17, 83, 57, 86, 97, 96, 47]
You also have np.random.uniform() for floats:
#[In]:
np.random.uniform(low=0, high=100, size=10).tolist()
#[Out]:
[69.64691855978616,
28.613933495037948,
22.68514535642031,
55.13147690828912,
71.94689697855631,
42.3106460124461,
98.07641983846155,
68.48297385848633,
48.09319014843609,
39.211751819415056]
import random

a = []
n = int(input("Enter number of elements:"))
for j in range(n):
    a.append(random.randint(1, 20))
print('Randomised list is: ', a)
Simple solution:

import random

indices = []
for i in range(0, 10):
    n = random.randint(0, 99)
    indices.append(n)
The one random list generator in the random module not mentioned here is random.choices:
my_randoms = random.choices(range(0, 100), k=10)
It's like random.sample but with replacement. The sequence passed doesn't have to be a range; it doesn't even have to be numbers. The following works just as well:
my_randoms = random.choices(['a','b'], k=10)
If we compare the runtimes, among random list generators, random.choices is the fastest no matter the size of the list to be created. However, for larger lists/arrays, numpy options are much faster. So for example, if you're creating a random list/array to assign to a pandas DataFrame column, then using np.random.randint is the fastest option.
[Image: perfplot benchmark comparing the runtimes of the generators.]
Code used to produce the above plot:
import perfplot
import numpy as np
import random

perfplot.show(
    setup=lambda n: n,
    kernels=[
        lambda n: [random.randint(0, n*2) for x in range(n)],
        lambda n: random.sample(range(0, n*2), k=n),
        lambda n: [random.randrange(n*2) for i in range(n)],
        lambda n: random.choices(range(0, n*2), k=n),
        lambda n: np.random.rand(n),
        lambda n: np.random.randint(0, n*2, size=n),
        lambda n: np.random.choice(np.arange(n*2), size=n),
    ],
    labels=['random_randint', 'random_sample', 'random_randrange', 'random_choices',
            'np_random_rand', 'np_random_randint', 'np_random_choice'],
    n_range=[2 ** k for k in range(17)],
    equality_check=None,
    xlabel='~n'
)
from random import randint
my_randoms = [randint(n1, n2) for _ in range(listsize)]  # n1, n2 and listsize are placeholders for your bounds and count
I have a list of numbers in Python, like this:
x = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
What's the best way to find the trend in these numbers? I'm not interested in predicting what the next number will be, I just want to output the trend for many sets of numbers so that I can compare the trends.
Edit: By trend, I mean that I'd like a numerical representation of whether the numbers are increasing or decreasing and at what rate. I'm not massively mathematical, so there's probably a proper name for this!
Edit 2: It looks like what I really want is the coefficient of the linear best fit. What's the best way to get that in Python?
Possibly you mean that you want to plot these numbers on a graph and find a straight line through them that minimizes the overall distance between the line and the numbers? This is called a linear regression:
def linreg(X, Y):
    """
    Return a, b in the solution to y = ax + b such that the root mean square
    distance between the trend line and the original points is minimized.
    """
    N = len(X)
    Sx = Sy = Sxx = Syy = Sxy = 0.0
    for x, y in zip(X, Y):
        Sx = Sx + x
        Sy = Sy + y
        Sxx = Sxx + x*x
        Syy = Syy + y*y
        Sxy = Sxy + x*y
    det = Sxx * N - Sx * Sx
    return (Sxy * N - Sy * Sx)/det, (Sxx * Sy - Sx * Sxy)/det
x = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
a, b = linreg(range(len(x)), x)  # your x, y are switched from standard notation

The trend line is unlikely to pass through your original points, but it will be as close to them as a straight line can get. Using the gradient and intercept values of this trend line (a, b), you can extrapolate the line past the end of the array:

extrapolatedtrendline = [a*index + b for index in range(20)]  # replace 20 with desired trend length
The link provided by Keith, or perhaps the answer from Riaz, might help you get a polynomial fit, but it is always recommended to use libraries if available; for the problem at hand, numpy provides a wonderful polynomial fitting function called polyfit. You can use polyfit to fit the data with an equation of any degree.
Here is an example using numpy to fit the data with a linear equation of the form y = ax + b:
>>> import numpy as np
>>> data = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
>>> x = np.arange(0, len(data))
>>> y = np.array(data)
>>> z = np.polyfit(x, y, 1)
>>> print("{0}x + {1}".format(*z))
4.32527472527x + 17.6

Similarly, a quadratic fit would be:

>>> z = np.polyfit(x, y, 2)
>>> print("{0}x^2 + {1}x + {2}".format(*z))
0.311126373626x^2 + 0.280631868132x + 25.6892857143
Here is one way to get an increasing/decreasing trend:
>>> x = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
>>> trend = [b - a for a, b in zip(x, x[1:])]
>>> trend
[22, -5, 9, -4, 17, -22, 5, 13, -13, 21, 39, -26, 13]
In the resulting list trend, trend[0] can be interpreted as the increase from x[0] to x[1], trend[1] as the increase from x[1] to x[2], etc. Negative values in trend mean that the value in x decreased from one index to the next.
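If a single summary number is needed, the mean of these steps gives the overall rate of change (a small addition to this answer; it equals (x[-1] - x[0]) / (len(x) - 1)):

avg_step = sum(trend) / len(trend)  # positive means increasing overall
print(avg_step)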
You could do a least squares fit of the data.
Using the formula from this page:

y = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
N = len(y)
x = range(N)

B = (sum(x[i] * y[i] for i in range(N)) - 1./N*sum(x)*sum(y)) / (sum(x[i]**2 for i in range(N)) - 1./N*sum(x)**2)
A = 1.*sum(y)/N - B * 1.*sum(x)/N

print("%f + %f * x" % (A, B))

which prints the starting value (intercept) and delta (slope) of the best-fit line.
I agree with Keith; I think you're probably looking for a linear least squares fit (if all you want to know is whether the numbers are generally increasing or decreasing, and at what rate). The slope of the fit will tell you the rate at which they're increasing. If you want a visual representation of a linear least squares fit, try Wolfram Alpha:
http://www.wolframalpha.com/input/?i=linear+fit+%5B12%2C+34%2C+29%2C+38%2C+34%2C+51%2C+29%2C+34%2C+47%2C+34%2C+55%2C+94%2C+68%2C+81%5D
Update: If you want to implement a linear regression in Python, I recommend starting with the explanation at Mathworld:
http://mathworld.wolfram.com/LeastSquaresFitting.html
It's a very straightforward explanation of the algorithm, and it practically writes itself. In particular, you want to pay close attention to equations 16-21, 27, and 28.
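For convenience, the closed-form slope and intercept those equations arrive at are the standard least-squares results:

b = (N*sum(x_i*y_i) - sum(x_i)*sum(y_i)) / (N*sum(x_i^2) - (sum(x_i))^2)
a = (sum(y_i) - b*sum(x_i)) / N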
Try writing the algorithm yourself, and if you have problems, you should open another question.
You can find the OLS coefficients using numpy:

import numpy as np

y = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
x = []
x.append(list(range(len(y))))           # the time variable
x.append([1 for ele in range(len(y))])  # this adds the intercept term

y = np.matrix(y).T
x = np.matrix(x).T
betas = ((x.T*x).I*x.T*y)
Results:
>>> betas
matrix([[  4.32527473],    # coefficient on the time variable
        [ 17.6       ]])   # coefficient on the intercept
Since the coefficient on the trend variable is positive, observations in your variable are increasing over time.
You can simply use the scipy library:

import numpy as np
from scipy.stats import linregress

data = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
x = np.arange(1, len(data)+1)
y = np.array(data)

res = linregress(x, y)
print(f'Equation: {res[0]:.3f} * t + {res[1]:.3f}, R^2: {res[2] ** 2:.2f}')
res
Output:

Equation: 4.325 * t + 13.275, R^2: 0.66
LinregressResult(slope=4.325274725274725, intercept=13.274725274725277, rvalue=0.8096297800892154, pvalue=0.0004497809466484867, stderr=0.9051717124425395, intercept_stderr=7.707259409345618)

(The intercept differs from the 17.6 in other answers only because x starts at 1 here instead of 0; the slope is identical.)
Compute the beta coefficient:

y = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
x = range(1, len(y)+1)

def var(X):
    S = 0.0
    SS = 0.0
    for x in X:
        S += x
        SS += x*x
    xbar = S/float(len(X))
    return (SS - len(X) * xbar * xbar) / (len(X) - 1.0)

def cov(X, Y):
    n = len(X)
    xbar = sum(X) / n
    ybar = sum(Y) / n
    return sum([(x-xbar)*(y-ybar) for x, y in zip(X, Y)])/(n-1)

def beta(x, y):
    return cov(x, y)/var(x)

print(beta(x, y))  # 4.32527472527