random.shuffle erasing items and not shuffling properly - python

I am initializing two multivariate Gaussian distributions like so, and trying to implement a machine learning algorithm to draw a decision boundary between the classes:
import numpy as np
import matplotlib.pyplot as plt
import torch
import random
mu0 = [-2,-2]
mu1 = [2, 2]
cov = np.array([[1, 0],[0, 1]])
X = np.random.randn(10,2)
L = np.linalg.cholesky(cov)
Y0 = mu0 + X @ L.T
Y1 = mu1 + X @ L.T
I have two separated circles and I am trying to stack Y0 and Y1, shuffle them, and then break them into training and testing splits. First I append the class labels to the data, and then stack.
n,m = Y1.shape
class0 = np.zeros((n,1))
class1 = np.ones((n,1))
Y_0 = np.hstack((Y0,class0))
Y_1 = np.hstack((Y1,class1))
data = np.vstack((Y_0,Y_1))
Now when I try to call random.shuffle(data), the zero class takes over and I get a small number of class-one instances.
random.shuffle(data)
Here is my data before shuffling:
print(data)
[[-3.16184428 -1.89491433 0. ]
[ 0.2710061 -1.41000924 0. ]
[-3.50742027 -2.04238337 0. ]
[-1.39966859 -1.57430259 0. ]
[-0.98356629 -3.02299622 0. ]
[-0.49583458 -1.64067853 0. ]
[-2.62577229 -2.32941225 0. ]
[-1.16005269 -2.76429318 0. ]
[-1.88618759 -2.79178253 0. ]
[-1.34790868 -2.10294791 0. ]
[ 0.83815572 2.10508567 1. ]
[ 4.2710061 2.58999076 1. ]
[ 0.49257973 1.95761663 1. ]
[ 2.60033141 2.42569741 1. ]
[ 3.01643371 0.97700378 1. ]
[ 3.50416542 2.35932147 1. ]
[ 1.37422771 1.67058775 1. ]
[ 2.83994731 1.23570682 1. ]
[ 2.11381241 1.20821747 1. ]
[ 2.65209132 1.89705209 1. ]]
and after shuffling:
data
array([[-0.335667 , -0.60826166, 0. ],
[-0.335667 , -0.60826166, 0. ],
[-0.335667 , -0.60826166, 0. ],
[-0.335667 , -0.60826166, 0. ],
[-2.22547604, -1.62833794, 0. ],
[-3.3287687 , -2.37694753, 0. ],
[-3.2915737 , -1.31558952, 0. ],
[-2.23912202, -1.54625136, 0. ],
[-0.335667 , -0.60826166, 0. ],
[-2.23912202, -1.54625136, 0. ],
[-2.11217077, -2.70157476, 0. ],
[-3.25714184, -2.7679462 , 0. ],
[-3.2915737 , -1.31558952, 0. ],
[-2.22547604, -1.62833794, 0. ],
[ 0.73756329, 1.46127708, 1. ],
[ 1.88782923, 1.29842524, 1. ],
[ 1.77452396, 2.37166206, 1. ],
[ 1.77452396, 2.37166206, 1. ],
[ 3.664333 , 3.39173834, 1. ],
[ 3.664333 , 3.39173834, 1. ]])
Why is random.shuffle deleting my data? I just need all twenty rows to be shuffled, but it is repeating lines and I am losing data. I'm not assigning the result of random.shuffle to a variable; I am simply calling random.shuffle(data). Are there any other ways to simply shuffle my data?

Because the swap method used by random.shuffle does not work on an ndarray:
# Python 3.10.7 random.py
class Random(_random.Random):
    ...
    def shuffle(self, x, random=None):
        ...
        if random is None:
            randbelow = self._randbelow
            for i in reversed(range(1, len(x))):
                # pick an element in x[:i+1] with which to exchange x[i]
                j = randbelow(i + 1)
                x[i], x[j] = x[j], x[i]  # <----------------
        ...
    ...
Indexing a multi-dimensional array yields a view instead of a copy, which prevents the swap from working properly. For more information, you can refer to this question.
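A minimal sketch of that failure mode on a toy array (hypothetical data, not the OP's):
import numpy as np
data = np.arange(6).reshape(3, 2)  # [[0, 1], [2, 3], [4, 5]]
# The tuple swap evaluates data[1] and data[0] as *views*. Writing
# data[1] into row 0 also changes what the data[0] view reads, so the
# second assignment copies row 1's (new) values straight back into row 1.
data[0], data[1] = data[1], data[0]
print(data)  # [[2, 3], [2, 3], [4, 5]] -- row 0 is lost, row 1 duplicated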
A better choice is numpy.random.Generator.shuffle:
>>> data
array([[-1.88985877, -2.97312795, 0. ],
[-1.52352452, -2.19633099, 0. ],
[-2.06297352, -1.36627294, 0. ],
[-1.47460488, -2.09410403, 0. ],
[-1.18753167, -1.71069966, 0. ],
[-1.92878766, -1.19545861, 0. ],
[-2.4858627 , -2.66525855, 0. ],
[-2.97169999, -1.46985506, 0. ],
[-2.11395907, -2.19108576, 0. ],
[-2.63976951, -1.66742147, 0. ],
[ 2.11014123, 1.02687205, 1. ],
[ 2.47647548, 1.80366901, 1. ],
[ 1.93702648, 2.63372706, 1. ],
[ 2.52539512, 1.90589597, 1. ],
[ 2.81246833, 2.28930034, 1. ],
[ 2.07121234, 2.80454139, 1. ],
[ 1.5141373 , 1.33474145, 1. ],
[ 1.02830001, 2.53014494, 1. ],
[ 1.88604093, 1.80891424, 1. ],
[ 1.36023049, 2.33257853, 1. ]])
>>> rng = np.random.default_rng()
>>> rng.shuffle(data, axis=0)
>>> data
array([[-1.92878766, -1.19545861, 0. ],
[-2.97169999, -1.46985506, 0. ],
[ 2.07121234, 2.80454139, 1. ],
[ 1.36023049, 2.33257853, 1. ],
[ 1.93702648, 2.63372706, 1. ],
[-2.11395907, -2.19108576, 0. ],
[-2.63976951, -1.66742147, 0. ],
[ 1.02830001, 2.53014494, 1. ],
[ 2.11014123, 1.02687205, 1. ],
[ 1.88604093, 1.80891424, 1. ],
[-1.47460488, -2.09410403, 0. ],
[ 2.52539512, 1.90589597, 1. ],
[-1.18753167, -1.71069966, 0. ],
[-1.88985877, -2.97312795, 0. ],
[ 2.81246833, 2.28930034, 1. ],
[-2.06297352, -1.36627294, 0. ],
[ 1.5141373 , 1.33474145, 1. ],
[-2.4858627 , -2.66525855, 0. ],
[-1.52352452, -2.19633099, 0. ],
[ 2.47647548, 1.80366901, 1. ]])
In this example, numpy.random.shuffle also works, because the OP only requires shuffling along the first axis, but numpy.random.Generator.shuffle is the recommended API for new code and supports shuffling along other axes.
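For reference, a short sketch of the Generator API (the seed is an assumption, added only for reproducibility):
rng = np.random.default_rng(seed=42)
rng.shuffle(data, axis=0)                 # shuffles the rows in place
shuffled = rng.permutation(data, axis=0)  # or return a shuffled copy instead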

Related

How to select elements from subsequent numpy arrays stored in pandas series

I've got a Series of numpy arrays:
import pandas as pd
import numpy as np
s = pd.Series({10: np.array([[0.72260683, 0.27739317, 0. ],
[0.7187053 , 0.2812947 , 0. ],
[0.71435467, 0.28564533, 1. ],
[0.3268072 , 0.6731928 , 0. ],
[0.31941951, 0.68058049, 1. ],
[0.31260015, 0.68739985, 0. ]]),
20: np.array([[0.7022099 , 0.2977901 , 0. ],
[0.6983866 , 0.3016134 , 0. ],
[0.69411673, 0.30588327, 1. ],
[0.33857735, 0.66142265, 0. ],
[0.33244109, 0.66755891, 1. ],
[0.32675582, 0.67324418, 0. ]]),
38: np.array([[0.68811957, 0.31188043, 0. ],
[0.68425783, 0.31574217, 0. ],
[0.67994496, 0.32005504, 1. ],
[0.34872593, 0.65127407, 0. ],
[0.34276171, 0.65723829, 1. ],
[0.33722803, 0.66277197, 0. ]])}
)
and an array of indices np.array([1, 4, 1]) indicating which rows should be filtered from consecutive arrays. The expected output would be like this:
pd.Series({10: np.array([[0.7187053 , 0.2812947 , 0. ]]),
20: np.array([[0.33244109, 0.66755891, 1. ]]),
38: np.array([[0.68425783, 0.31574217, 0. ]])}
)
How can I do that? And how would it differ if I wanted to take out the third element from each resulting array, obtaining the following Series?
pd.Series({10: 0, 20: 1, 38: 0})
If possible, convert the Series of 2d arrays to one 3d array (this requires all the 2d arrays to have the same length):
a = np.array([1, 4, 1])
b = np.array(s.tolist())[np.arange(len(s)), a, 2]
print (b)
[0. 1. 0.]
c = pd.Series(b, index=s.index)
print (c)
10 0.0
20 1.0
38 0.0
dtype: float64
If you want to select whole rows by the array of indices:
b1 = np.array(s.tolist())[np.arange(len(s)), a]
print (b1)
[[0.7187053 0.2812947 0. ]
[0.33244109 0.66755891 1. ]
[0.68425783 0.31574217 0. ]]
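If the 2d arrays do not all have the same length, stacking them into one 3d array is impossible; a plain comprehension over the Series is a reasonable fallback (a sketch, reusing s and a from above):
b1 = np.array([arr[i] for arr, i in zip(s.to_numpy(), a)])      # selected rows
c = pd.Series([arr[i, 2] for arr, i in zip(s.to_numpy(), a)],
              index=s.index)                                    # third element only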

one-hot encoding of some integers in sci-kit library

I'm trying to do one-hot encoding of the first column with the following code:
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(categories='auto', sparse=False)
X[:,0] = enc.fit_transform(X[:,0]).toarray()
But I get an error that says I have to reshape my data. How can I one-hot encode the first column and then add it back to the rest of the data?
Your problem is that you're passing OHE a 1d array. Reshape it as 2d and you're good to go.
Proof
Suppose we happen to have some data that resembles yours:
np.random.seed(42)
X = np.c_[np.random.randint(0,3,10),np.random.randn(10),np.random.randn(10)]
X
array([[ 2. , 1.57921282, -1.01283112],
[ 0. , 0.76743473, 0.31424733],
[ 2. , -0.46947439, -0.90802408],
[ 2. , 0.54256004, -1.4123037 ],
[ 0. , -0.46341769, 1.46564877],
[ 0. , -0.46572975, -0.2257763 ],
[ 2. , 0.24196227, 0.0675282 ],
[ 1. , -1.91328024, -1.42474819],
[ 2. , -1.72491783, -0.54438272],
[ 2. , -0.56228753, 0.11092259]])
Then we can proceed as follows:
from sklearn.preprocessing import OneHotEncoder
oho = OneHotEncoder(sparse=False)
oho_enc = oho.fit_transform(X[:,0].reshape(-1,1)) # <--- you have a problem here
res = np.c_[oho_enc, X[:,1:]]
res
array([[ 0. , 0. , 1. , 1.57921282, -1.01283112],
[ 1. , 0. , 0. , 0.76743473, 0.31424733],
[ 0. , 0. , 1. , -0.46947439, -0.90802408],
[ 0. , 0. , 1. , 0.54256004, -1.4123037 ],
[ 1. , 0. , 0. , -0.46341769, 1.46564877],
[ 1. , 0. , 0. , -0.46572975, -0.2257763 ],
[ 0. , 0. , 1. , 0.24196227, 0.0675282 ],
[ 0. , 1. , 0. , -1.91328024, -1.42474819],
[ 0. , 0. , 1. , -1.72491783, -0.54438272],
[ 0. , 0. , 1. , -0.56228753, 0.11092259]])
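As a usage note, the same encode-then-concatenate step can be written with ColumnTransformer, which applies the encoder to column 0 and passes the remaining columns through in one call; a sketch under the same sklearn-version assumption (sparse=False) as above:
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

ct = ColumnTransformer(
    [("ohe", OneHotEncoder(sparse=False), [0])],
    remainder="passthrough",
)
res = ct.fit_transform(X)  # one-hot columns first, then the untouched columns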

Python: Generate a new array with more columns based on another array

I have this array: [image in original post]
I need to create a new array like this: [image in original post]
I guess I need to use a conditional, but I don't know how to create an array with 7 columns based on the values of a 5-column array.
If anyone could help me, I'd be thankful!
I'm going to assume you want to convert your last column into one-hot encodings and then concatenate them to your original array. You can initialise an array of zeros and then set the appropriate indices to 1. Finally, concatenate the OHE array to your original.
MCVE:
print(arr)
array([[ -9.95, 15.27, 9.08, 1. ],
[ -6.81, 11.87, 8.38, 2. ],
[ -3.02, 11.08, -8.5 , 1. ],
[ -5.73, -2.29, -2.09, 2. ],
[ -7.01, -0.9 , 12.91, 2. ],
[-11.64, -10.3 , 2.09, 2. ],
[ 17.85, 13.7 , 2.14, 0. ],
[ 6.34, -9.49, -8.05, 2. ],
[ 18.62, -9.43, -1.02, 1. ],
[ -2.15, -23.65, -13.03, 1. ]])
c = arr[:, -1].astype(int)
ohe = np.zeros((c.shape[0], c.max() + 1))
ohe[np.arange(c.shape[0]), c] = 1
arr = np.hstack((arr[:, :-1], ohe))
print(arr)
array([[ -9.95, 15.27, 9.08, 0. , 1. , 0. ],
[ -6.81, 11.87, 8.38, 0. , 0. , 1. ],
[ -3.02, 11.08, -8.5 , 0. , 1. , 0. ],
[ -5.73, -2.29, -2.09, 0. , 0. , 1. ],
[ -7.01, -0.9 , 12.91, 0. , 0. , 1. ],
[-11.64, -10.3 , 2.09, 0. , 0. , 1. ],
[ 17.85, 13.7 , 2.14, 1. , 0. , 0. ],
[ 6.34, -9.49, -8.05, 0. , 0. , 1. ],
[ 18.62, -9.43, -1.02, 0. , 1. , 0. ],
[ -2.15, -23.65, -13.03, 0. , 1. , 0. ]])
A one-line version of @COLDSPEED's answer, using the np.eye trick:
np.hstack([arr[:,:-1], np.eye(arr[:,-1].astype(int).max() + 1)[arr[:,-1].astype(int)]])
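Why the np.eye trick works: row c of the identity matrix is exactly the one-hot vector for class c, so fancy-indexing the identity with the label column builds the whole OHE block in one shot. A tiny sketch:
c = np.array([1, 2, 1, 0])
np.eye(c.max() + 1)[c]
# array([[0., 1., 0.],
#        [0., 0., 1.],
#        [0., 1., 0.],
#        [1., 0., 0.]])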

Possible bug of `np.mgrid`?

In the following snippet, 128.99 is included in the output, which by definition it should not be. Is it a bug? Tested with both Python 2 and Python 3.
In [38]: np.mgrid[119.99:128.99, 0:2]
Out[38]:
array([[[ 119.99, 119.99],
[ 120.99, 120.99],
[ 121.99, 121.99],
[ 122.99, 122.99],
[ 123.99, 123.99],
[ 124.99, 124.99],
[ 125.99, 125.99],
[ 126.99, 126.99],
[ 127.99, 127.99],
[ 128.99, 128.99]],
[[ 0. , 1. ],
[ 0. , 1. ],
[ 0. , 1. ],
[ 0. , 1. ],
[ 0. , 1. ],
[ 0. , 1. ],
[ 0. , 1. ],
[ 0. , 1. ],
[ 0. , 1. ],
[ 0. , 1. ]]])
np.mgrid invoked with two slice arguments fills each corresponding range with items from start to stop.
To determine the number of steps, the formula used is (see the source):
math.ceil((key[k].stop - start)/(step*1.0))
where the step is 1 by default. In your situation, stop - start is 9.000000000000014 (a floating-point artifact), so the ceil function rounds up to 10 steps of size 1: 119.99 + 9 = 128.99.
So, no bug here.
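You can reproduce the count directly (using the stop - start value quoted above):
import math
math.ceil((128.99 - 119.99) / 1.0)  # -> 10, because the float difference is 9.000000000000014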
If you rely on the exact number of items in mgrid, use a complex step index:
np.mgrid[119.99:127.99:9j, 0:2]
Beware that the "stop" point is now inclusive, per the docs.
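A quick check of the complex-step form (first axis only, for brevity):
np.mgrid[119.99:127.99:9j]
# 9 evenly spaced points from 119.99 to 127.99 inclusive, step 1.0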

Having trouble determining a 2D feature matrix structure to feed into machine learning algorithm

I am training an emotion recognition system that detects emotions through facial movement. As a result, I have formed a 4-dimensional matrix that I am trying to reduce to 2 dimensions.
Features that make up the 4D matrix:
Number of videos (and each video will be assigned an emotion label)
Number of frames per video
Direction of the facial landmarks per frame
Speed of the facial landmarks per frame
The important features that I am trying to train with:
The left column is the speed (the hypotenuse of each facial landmark's displacement between frames)
The right column is the direction (the arctan of the x and y displacement of the same facial landmark between frames)
The 4D matrix that I am stuck with and trying to reduce to 2D
>> main.shape
(60, 17, 68, 2)
# 60 videos, 17 frames per video, 68 facial landmarks, 2 features (direction and speed)
>> main
array([[[[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 0. ],
...,
[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 0. ]],
[[ 1. , 1. ],
[ 1.41421356, 0.78539816],
[ 1.41421356, 0.78539816],
...,
[ 3. , 1. ],
[ 3. , 1. ],
[ 3. , 1. ]],
[[ 0. , 0. ],
[ -1.41421356, 0.78539816],
[ -1.41421356, 0.78539816],
...,
[ 2. , 1. ],
[ 3. , 1. ],
[ 3. , 1. ]],
...,
[[ 1. , 1. ],
[ 1.41421356, -0.78539816],
[ 1.41421356, -0.78539816],
...,
[ -1.41421356, 0.78539816],
[ 1. , 1. ],
[ -1.41421356, 0.78539816]],
[[ 2.23606798, -0.46364761],
[ 2.82842712, -0.78539816],
[ 2.23606798, -0.46364761],
...,
[ 1. , 0. ],
[ 0. , 0. ],
[ 1. , 1. ]],
[[ -1.41421356, -0.78539816],
[ -2.23606798, -0.46364761],
[ -2.23606798, -0.46364761],
...,
[ 1.41421356, -0.78539816],
[ 1.41421356, -0.78539816],
[ 2.23606798, -1.10714872]]],
[[[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 0. ],
...,
[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 0. ]],
[[ 2. , 1. ],
[ 2.23606798, -1.10714872],
[ 1.41421356, -0.78539816],
...,
[ -2. , -0. ],
[ -1. , -0. ],
[ -1.41421356, -0.78539816]],
[[ 2. , 1. ],
[ -2.23606798, 1.10714872],
[ -1.41421356, 0.78539816],
...,
[ 1. , 1. ],
[ -1. , -0. ],
[ -1. , -0. ]],
...,
[[ -2. , -0. ],
[ -3. , -0. ],
[ -4.12310563, -0.24497866],
...,
[ 0. , 0. ],
[ -1. , -0. ],
[ -2.23606798, 1.10714872]],
[[ -2.23606798, 1.10714872],
[ -1.41421356, 0.78539816],
[ -2.23606798, 1.10714872],
...,
[ -2.23606798, 0.46364761],
[ -1.41421356, 0.78539816],
[ -1.41421356, 0.78539816]],
[[ 2. , 1. ],
[ 1.41421356, 0.78539816],
[ 2.82842712, 0.78539816],
...,
[ 1. , 1. ],
[ 1. , 1. ],
[ -2.23606798, -1.10714872]]],
[[[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 0. ],
...,
[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 0. ]],
[[ 1. , 1. ],
[ 0. , 0. ],
[ 1. , 1. ],
...,
[ -3. , -0. ],
[ -2. , -0. ],
[ 0. , 0. ]],
[[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 0. ],
...,
[ 1.41421356, 0.78539816],
[ 1. , 0. ],
[ 0. , 0. ]],
...,
[[ 1. , 0. ],
[ 1. , 1. ],
[ 0. , 0. ],
...,
[ 2. , 1. ],
[ 3. , 1. ],
[ 3. , 1. ]],
[[ -7.28010989, 1.29249667],
[ -7.28010989, 1.29249667],
[ -8.54400375, 1.21202566],
...,
[-22.02271555, 1.52537305],
[ 22.09072203, -1.48013644],
[ 22.36067977, -1.39094283]],
[[ 1. , 0. ],
[ 1.41421356, -0.78539816],
[ 1. , 0. ],
...,
[ -1.41421356, -0.78539816],
[ 1. , 1. ],
[ 1.41421356, 0.78539816]]],
...,
[[[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 0. ],
...,
[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 0. ]],
[[ 5.38516481, 0.38050638],
[ 5.09901951, 0.19739556],
[ 4.47213595, -0.46364761],
...,
[ -1.41421356, 0.78539816],
[ -2.82842712, 0.78539816],
[ -5. , 0.64350111]],
[[ -6.32455532, 0.32175055],
[ -6.08276253, -0.16514868],
[ -5.65685425, -0.78539816],
...,
[ 3.60555128, 0.98279372],
[ 5. , 0.92729522],
[ 5.65685425, 0.78539816]],
...,
[[ -3.16227766, -0.32175055],
[ -3.60555128, -0.98279372],
[ 5. , 1. ],
...,
[ 12.08304597, 1.14416883],
[ 13.15294644, 1.418147 ],
[ 14.31782106, 1.35970299]],
[[ 3.60555128, -0.5880026 ],
[ 4.47213595, -1.10714872],
[ 6. , 1. ],
...,
[-20.39607805, 1.37340077],
[-21.02379604, 1.52321322],
[-22.09072203, 1.48013644]],
[[ 1. , 1. ],
[ -1.41421356, 0.78539816],
[ 1. , 1. ],
...,
[ 4.12310563, 1.32581766],
[ 4. , 1. ],
[ 4.12310563, 1.32581766]]],
[[[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 0. ],
...,
[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 0. ]],
[[ 0. , 0. ],
[ 1. , 1. ],
[ -2.23606798, 1.10714872],
...,
[ -3.16227766, 0.32175055],
[ 1. , 1. ],
[ 1.41421356, -0.78539816]],
[[ 1. , 1. ],
[ 1. , 1. ],
[ 1. , 1. ],
...,
[ 3. , 1. ],
[ 2. , 1. ],
[ -1.41421356, 0.78539816]],
...,
[[ 5.38516481, -1.19028995],
[ 4.47213595, -1.10714872],
[ 4.12310563, -1.32581766],
...,
[ 2.23606798, -0.46364761],
[ 1. , 1. ],
[ -1. , -0. ]],
[[ -5.38516481, 1.19028995],
[ -4.12310563, 1.32581766],
[ -3.16227766, 1.24904577],
...,
[ 0. , 0. ],
[ 1. , 0. ],
[ 1.41421356, -0.78539816]],
[[ 8.06225775, 1.44644133],
[ -7.07106781, -1.42889927],
[ 6. , 1. ],
...,
[ -3.16227766, -0.32175055],
[ -3.16227766, -0.32175055],
[ -3.16227766, -0.32175055]]],
[[[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 0. ],
...,
[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 0. ]],
[[ -2.23606798, 0.46364761],
[ -1.41421356, 0.78539816],
[ -2.23606798, 0.46364761],
...,
[ 1. , 0. ],
[ 1. , 0. ],
[ 1. , 1. ]],
[[ -2.23606798, -0.46364761],
[ -1.41421356, -0.78539816],
[ 2. , 1. ],
...,
[ 0. , 0. ],
[ 1. , 0. ],
[ 1. , 0. ]],
...,
[[ 1. , 0. ],
[ 1. , 1. ],
[ -2.23606798, -1.10714872],
...,
[ 19.02629759, 1.51821327],
[ 19. , 1. ],
[-19.10497317, -1.46591939]],
[[ 3.60555128, 0.98279372],
[ 3.60555128, 0.5880026 ],
[ 5. , 0.64350111],
...,
[ 7.28010989, -1.29249667],
[ 7.61577311, -1.16590454],
[ 8.06225775, -1.05165021]],
[[ -7.28010989, 1.29249667],
[ -5. , 0.92729522],
[ -5.83095189, 0.5404195 ],
...,
[ 20.09975124, 1.47112767],
[ 21.02379604, 1.52321322],
[-20.22374842, -1.42190638]]]])
The direction and speed features are quite valuable (the most important features), as they represent the movement of each facial landmark per frame, and I am trying to get the machine learning algorithm to train based on that.
I tried to reshape three of the dimensions into one long vector (just mushing speed, direction, and frame all together) to form a 2D matrix, fed it into sklearn's SVM, and it produced a rather low accuracy. I expected this, as I figured there is no way the ML algorithm would recognize the difference between the features inside the giant single vector; it has to assume that everything in the vector is the same feature.
This is the 2D matrix I was forced to make to feed into sklearn's SVM, forcing speed, direction, and frame all into one vector per video, which got low accuracy:
>> main
array([[ 0. , 0. , 0. , ..., -0.78539816,
2.23606798, -1.10714872],
[ 0. , 0. , 0. , ..., 1. ,
-2.23606798, -1.10714872],
[ 0. , 0. , 0. , ..., 1. ,
1.41421356, 0.78539816],
...,
[ 0. , 0. , 0. , ..., 1. ,
4.12310563, 1.32581766],
[ 0. , 0. , 0. , ..., -0.32175055,
-3.16227766, -0.32175055],
[ 0. , 0. , 0. , ..., 1.52321322,
-20.22374842, -1.42190638]])
>> main.shape
(60, 2312)
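For reference, that flattening is a single reshape of the 4D matrix (a sketch, assuming main has shape (60, 17, 68, 2) as above):
main_2d = main.reshape(main.shape[0], -1)  # (60, 17 * 68 * 2) == (60, 2312)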
I want to preserve the speed and direction features, but have to represent them in a 2D matrix that takes into account the frames in the video.
The emotion label will be attached to each of the 17 frames in each video (so basically, the 17-frame video will be labeled with one emotion).
Is there any smart way in reshaping and reducing the 4D matrix that would accomplish this?
So, the way you've framed the question, you will absolutely see poor accuracy, and there's very little you can do to change it. Assigning a single emotion to an entire video (depending on your corpus) is generally inaccurate enough that any machine learning algorithm will have trouble learning the signal you're trying to pull out.
Additionally, you've framed the problem as a time-series problem, which is going to make your life a headache, especially if you're using off-the-shelf sklearn algorithms, which are very poorly suited for this kind of task.
If at all possible, you should instead frame your problem as a computer vision problem. You should attempt to predict on each individual frame, what the emotion content is. If you don't have a dataset with that level of granularity, you just aren't going to see great accuracy.
It's a little bit of a departure from the way in which you asked the question, but as you've framed it the problem is intractable. Here, instead, is the way you should approach it:
Label individual frames with emotional content
Train an image-based algorithm to categorize those tagged frames
Convolutional neural networks will likely give you the best performance for any image-based problem where you have a decently-sized dataset
If that is not an option, you need to develop a 1d feature representation of the image. I would personally suggest using indico's image features API. Once you have this representation, a typical algorithm like an SVM will work great.
If accuracy is not quite to your liking but is getting close, I would recommend using a pre-processing/data-augmentation pipeline like the one detailed here. Granted, that example is for plankton identification, but the basic approach is identical.
If the accuracy still isn't up to snuff and you need to predict on the entire video, you will then want to aggregate your per-frame results to give accurate predictions over the entire video.
One method is to train a convolutional neural network on a graph of the predictions you've made over the video. This is kind of weird, but might work pretty well
A good approach would be to use a Bayesian method: assuming each prediction has a certain level of confidence, combine the prediction distributions over the video (the simplest version is sketched after this list).
The best approach is to treat this as an ensemble learning problem. Luckily, ensemble learning is a very well-studied and understood problem. You can find details of how to combine multiple predictions in this format here.
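A minimal sketch of the simplest such aggregation, averaging per-frame probability distributions over a video (all names and shapes here are hypothetical):
import numpy as np

# frame_probs: (n_frames, n_emotions) class probabilities from any
# frame-level classifier, e.g. the output of clf.predict_proba(...)
frame_probs = np.random.rand(17, 6)
frame_probs /= frame_probs.sum(axis=1, keepdims=True)  # normalize each row

video_pred = frame_probs.mean(axis=0).argmax()  # video-level emotion index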
I hope this has been helpful! Let me know if you have any more questions.
Disclaimer: I am the CEO of indico, so I may be biased in recommending its use.
