Is MATLAB's PIQE function wrong? - python

I'm trying to train a deep learning model for image super-resolution, and I wanted to implement the PIQE score as a loss function. Since I will be training the model with PyTorch, I was trying to write my own Python implementation of the algorithm that computes the PIQE score.
As a first step, I looked at the MATLAB implementation of piqe (the link takes you to the main page but I am looking at the source code) to see how it's done and then adapt it to Python. There is one thing that bothers me, however.
The PIQE score starts off by calculating the Mean-Subtracted Contrast-Normalized (MSCN) coefficients with the following formula, where w(k,l) are normalized Gaussian weights over a 7x7 neighborhood:
MSCN(i,j) = (I(i,j) - mu(i,j)) / (sigma(i,j) + 1)
mu(i,j) = sum over (k,l) of w(k,l) * I(i+k, j+l)
sigma(i,j) = sqrt( sum over (k,l) of w(k,l) * (I(i+k, j+l) - mu(i,j))^2 )
But the MATLAB code at that step looks like this:
mu = imgaussfilt(ipImage,7/6,'FilterSize',7,'Padding','replicate');
sigma = sqrt(abs(imgaussfilt(ipImage.*ipImage,7/6,'FilterSize',7,'Padding','replicate') - mu.*mu));
imnorm = (ipImage-mu)./(sigma+1);
I'm puzzled about the calculation of the local standard deviation, sigma. In the algorithm of the paper, at each pixel, the mean of the 7x7 neighborhood is calculated and then subtracted from each value of that neighborhood. The differences are then squared, multiplied by their corresponding Gaussian weights w(k,l), and summed.
Instead, the MATLAB code multiplies the Gaussian weighting (via imgaussfilt) with the squared pixel values, then subtracts the squared means from that result and takes the absolute value of the difference. Correct me if I'm wrong, but isn't this a case of mistakenly using (a-b)² = a² - b²?
Basically, my question is whether what I said above is true, or whether I have misinterpreted the MATLAB code. Thanks in advance!

I understand why you are confused, but both are right. The code uses the classical identity
Var(X) = E[(X - E[X])^2] = E[X^2] - E[X]^2
Expand (I - mu)^2 into I^2 - 2*mu*I + mu^2 and apply the Gaussian weighting: since the weights sum to one, the cross term contributes -2*mu^2 and the constant term +mu^2, so everything collapses to E[I^2] - mu^2, which is exactly what the MATLAB code computes. The abs() is presumably just a guard against tiny negative values caused by floating-point round-off.
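As a quick sanity check (my own sketch, not part of the MATLAB source), you can evaluate both expressions on a single 7-sample neighborhood with normalized Gaussian weights of width 7/6 and see that they agree to machine precision:
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(7)                                   # one 1-D "neighborhood"
w = np.exp(-0.5 * ((np.arange(7) - 3) / (7 / 6)) ** 2)
w /= w.sum()                                        # Gaussian weights, sum(w) == 1

mu = np.sum(w * x)                                  # weighted local mean
var_definition = np.sum(w * (x - mu) ** 2)          # paper: E[(X - mu)^2]
var_shortcut = np.sum(w * x ** 2) - mu ** 2         # MATLAB: E[X^2] - mu^2

print(np.isclose(var_definition, var_shortcut))     # True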

Related

Stable Softmax function returns wrong output

I implemented the Softmax function and later discovered that it has to be stabilized in order to be numerically stable (duh). And now it is still not stable, because even after subtracting max(x) from my vector, the vector values are still too big (in magnitude) to be used as exponents of e. Here is the picture of the code I used to pinpoint the bug; vector here is a sample output vector from forward propagation:
We can clearly see that the values are too big; instead of probabilities I get these really small numbers, which leads to a small error, which leads to vanishing gradients, and finally makes the network unable to learn.
You are completely right: just translating the mathematical definition of softmax into code can make it unstable, which is why you have to subtract the maximum of x before doing any computation.
Your implementation is correct, and vanishing/exploding gradients are an independent problem that you may encounter depending on what kind of neural network you intend to use.
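For reference, a minimal NumPy sketch of the max-shifted softmax (my own example, not the code from the question's picture):
import numpy as np

def softmax(x):
    # Subtracting the maximum does not change the result mathematically,
    # but it keeps every exponent <= 0 so np.exp cannot overflow.
    shifted = x - np.max(x)
    exps = np.exp(shifted)
    return exps / np.sum(exps)

v = np.array([1000.0, 2000.0, 3000.0])   # values that would overflow unshifted
print(softmax(v))                        # ~[0, 0, 1], sums to 1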

Derivative of neural network with respect to input

I trained a neural network to do a regression on the sine function and would like to compute the first and second derivative with respect to the input.
I tried using the tf.gradients() function like this (neural_net is an instance of tf.keras.Sequential):
prediction = neural_net(x_value)
dx_f = tf.gradients(prediction, x_value)
dx_dx_f = tf.gradients(dx_f, x_value)
x_value is an array that has the length of the test size.
However, this results in the following predictions and derivatives. The prediction of the network (blue curve) basically matches the sine function exactly, but I had to divide the first derivative (orange) by a factor of 10 and the second derivative (green) by a factor of 100 for them to be on the same order of magnitude. After that rescaling the first derivative looks ok, but the second derivative is completely erratic. Since the prediction of the sine function works really well, there is clearly something funny going on here.
One possible explanation for what you observed could be that your learned function is not twice differentiable. It looks as if there are jumps in the 1st derivative around the extrema. If so, the 2nd derivative of the function doesn't really exist, and the plot you get depends highly on how the library handles such places.
Consider the following picture of a non-smooth function that jumps from 0.5 to -0.5 at every x in {1, 2, ...}. Its slope is 1 everywhere except where x is an integer. If you tried to plot its derivative, you would probably see a straight line at y=1, which is easily misinterpreted: someone just looking at that plot could think the function is completely linear and runs from -infinity to +infinity.
If your results are produced by a neural net that uses ReLU, you can try the same with the sigmoid activation function. I suppose you won't see that many spikes then.
I don't think you can calculate second order derivatives using tf.gradients. Take a look at tf.hessians (what you really want is the diagonal of the Hessian matrix), e.g. [1].
An alternative is to use tf.GradientTape: [2].
[1] https://github.com/gknilsen/pyhessian
[2] https://www.tensorflow.org/api_docs/python/tf/GradientTape
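A rough sketch of the tf.GradientTape approach from [2] (my own code, assuming the neural_net and x_value names from the question; nested tapes give the first and second derivatives with respect to the input):
import tensorflow as tf

x = tf.convert_to_tensor(x_value, dtype=tf.float32)   # shape (N, 1)
with tf.GradientTape() as outer_tape:
    outer_tape.watch(x)
    with tf.GradientTape() as inner_tape:
        inner_tape.watch(x)
        prediction = neural_net(x)
    dx_f = inner_tape.gradient(prediction, x)          # d f / d x
dx_dx_f = outer_tape.gradient(dx_f, x)                 # d^2 f / d x^2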
What you learned was the sine function, not its derivative: during training you control the error with a cost function that only takes the function values into account; it does not control the slope at all. You could have learned a very noisy function that nevertheless matches the data points exactly.
If you only use the data points in your cost function, you have no guarantee about the derivative you have learned. However, with some more advanced training techniques you can also learn the derivative: https://arxiv.org/abs/1706.04859
So, in summary, it is not a code issue but a theoretical one.
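A hypothetical sketch of that idea (my own code, not taken from the linked paper or the question): add a derivative term to the loss so the network is fitted to cos(x) as well as sin(x). Nested GradientTapes are used so that the input derivative is itself differentiable with respect to the weights.
import numpy as np
import tensorflow as tf

x = tf.constant(np.linspace(0.0, 2 * np.pi, 256)[:, None], dtype=tf.float32)
y_true, dy_true = tf.sin(x), tf.cos(x)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='tanh'),
    tf.keras.layers.Dense(1),
])
opt = tf.keras.optimizers.Adam(1e-3)

for step in range(2000):
    with tf.GradientTape() as outer_tape:
        with tf.GradientTape() as inner_tape:
            inner_tape.watch(x)
            y_pred = model(x)
        dy_pred = inner_tape.gradient(y_pred, x)
        # penalise errors in both the values and the slope
        loss = tf.reduce_mean((y_pred - y_true) ** 2) \
             + tf.reduce_mean((dy_pred - dy_true) ** 2)
    grads = outer_tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))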

Does gaussian_filter1d not work well in higher orders?

Ok, so what I'm trying to do is build a scale space on a 1D data set that is presumably drawn from a sum-of-Gaussians function. To do this, I have to apply a Gaussian convolution to the data. My end goal is to find the number of Gaussians in the data set from the number of zero crossings in the second-order derivative of the convolved data. The reasoning for this comes from this article.
The problem occurs with SciPy's gaussian_filter1d, which I'm using to do the convolution. I assume that "filter" here just means a convolution with a Gaussian, since there is already a separate fourier_gaussian function. In addition, to avoid approximating the derivative myself, I'm using gaussian_filter1d's own 2nd-order derivative option (order=2) when applying the convolution. The problem appears when I keep lowering the sigma of the Gaussian filter, which you would expect to make it act more like a Dirac delta. That is indeed what happens at small values of sigma for the zero-order filter. Unfortunately, when I apply a 2nd-order derivative Gaussian filter, the data does not have the zero crossings that I expect. In fact, it doesn't have any zero crossings even when there is only one Gaussian in the original data.
Some possible ideas that came to me about what could be the problem are that an actual delta function doesn't have a derivative, and that the derivative of a very small-sigma Gaussian can't approximate the derivative of a delta. But I wanted to hear the community's thoughts on the problem. Thank you for reading this post.
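For reference, a small sketch (my own, not the asker's code) of what I would expect from gaussian_filter1d with order=2 on data containing a single Gaussian bump: the smoothed second derivative should cross zero exactly twice, at roughly plus/minus the combined width.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(-6, 6, 1201)                       # spacing dx = 0.01
data = np.exp(-0.5 * (x / 1.5) ** 2)               # one Gaussian in the data

# order=2 convolves with the second derivative of a Gaussian kernel;
# note that sigma is given in samples, so 50 samples == 0.5 in x-units here
d2 = gaussian_filter1d(data, sigma=50, order=2)

crossings = np.sum(d2[:-1] * d2[1:] < 0)
print(crossings)                                   # expect 2 for a single bump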

Using robust linear methods from python module "statsmodels" with weights?

I have some data, y, with errors, y_err, measured at x. I need to fit a straight line to this, mimicking some code from MATLAB, specifically the fit method with robust set to "on" and with the weights given as 1/yerr. The MATLAB documentation says it uses the bisquare method (also known as the Tukey biweight method). My code so far is:
import statsmodels.api as sm

rlm_model = sm.RLM(y, x, M=sm.robust.norms.TukeyBiweight())
rlm_results = rlm_model.fit()
print(rlm_results.params)
However, I need to find a way of including weights derived from yerr.
I hope people can help; this is the first time I have tried to use the statsmodels module.
In response to the first answer:
I tried:
y=y*(yerr)
x=x*(yerr)
x=sm.add_constant(x, prepend=False)
rlm_model = sm.RLM(y, x, M=sm.robust.norms.TukeyBiweight())
results=rlm_model.fit()
but sadly this doesn't match the MATLAB function.
Weights reflecting heteroscedasticity, that is unequal variance across observations, are not yet supported by statsmodels RLM.
As a workaround, you can divide your y and x by yerr in the call to RLM.
I think, in analogy to weighted least squares, the parameter estimates, their standard errors and other statistics are still correct in this case. But I haven't checked yet.
As a reference:
Carroll, Raymond J., and David Ruppert. "Robust estimation in heteroscedastic linear models." The Annals of Statistics (1982): 429-441.
They also estimate the variance function, but for fixed weights 1/sigma_i the optimization just uses
(y_i - x_i beta) / sigma_i
The weights 1/sigma_i will only be relative weights and will still be multiplied by a robust estimate of the scale of the errors.
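A minimal sketch of that workaround (my own code, assuming the x, y, and yerr arrays from the question): divide the response and every column of the design matrix, including the constant, by yerr before fitting.
import numpy as np
import statsmodels.api as sm

X = sm.add_constant(x, prepend=False)        # design matrix with an intercept
Xw = X / yerr[:, None]                       # scale each column by 1/yerr
yw = y / yerr

rlm_model = sm.RLM(yw, Xw, M=sm.robust.norms.TukeyBiweight())
rlm_results = rlm_model.fit()
print(rlm_results.params)                    # x coefficient first, constant last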

Maximum Likelihood Estimate pseudocode

I need to code a Maximum Likelihood Estimator to estimate the mean and variance of some toy data. I have a vector with 100 samples, created with numpy.random.randn(100). The data should have zero mean and unit variance Gaussian distribution.
I checked Wikipedia and some extra sources, but I am a little bit confused since I don't have a statistics background.
Is there any pseudo code for a maximum likelihood estimator? I get the intuition of MLE but I cannot figure out where to start coding.
Wikipedia says to take the argmax of the log-likelihood. What I understand is: I need to calculate the log-likelihood using different parameters, and then take the parameters that give the maximum probability. What I don't get is: where do I find the parameters in the first place? If I randomly try different means and variances to get a high probability, when should I stop trying?
I just came across this, and I know it's old, but I'm hoping that someone else benefits from it. Although the previous answers give pretty good descriptions of what ML optimization is, no one gave pseudo-code to implement it. Python has a minimizer in SciPy that will do this. Here's pseudo-code for a linear regression.
# import the packages
import numpy as np
from scipy.optimize import minimize
import scipy.stats as stats

# Set up your x values
x = np.linspace(0, 100, num=100)

# Set up your observed y values with a known slope (2.4), intercept (5), and sd (4)
yObs = 5 + 2.4*x + np.random.normal(0, 4, 100)

# Define the likelihood function where params is a list of initial parameter estimates
def regressLL(params):
    # Resave the initial parameter guesses
    b0 = params[0]
    b1 = params[1]
    sd = params[2]

    # Calculate the predicted values from the initial parameter guesses
    yPred = b0 + b1*x

    # Calculate the negative log-likelihood as the negative sum of the log of a normal
    # PDF where the observed values are normally distributed around the mean (yPred)
    # with a standard deviation of sd
    logLik = -np.sum(stats.norm.logpdf(yObs, loc=yPred, scale=sd))

    # Tell the function to return the NLL (this is what will be minimized)
    return logLik

# Make a list of initial parameter guesses (b0, b1, sd)
initParams = [1, 1, 1]

# Run the minimizer
results = minimize(regressLL, initParams, method='nelder-mead')

# Print the results. They should be really close to your actual values
print(results.x)
This works great for me. Granted, this is just the basics. It doesn't profile or give CIs on the parameter estimates, but it's a start. You can also use ML techniques to find estimates for, say, ODEs and other models, as I describe here.
I know this question was old, hopefully you've figured it out since then, but hopefully someone else will benefit.
If you do maximum likelihood calculations, the first step you need to take is the following: assume a distribution that depends on some parameters. Since you generated your data (you even know your parameters), you "tell" your program to assume a Gaussian distribution. However, you don't tell your program your parameters (0 and 1); you leave them unknown a priori and compute them afterwards.
Now, you have your sample vector (let's call it x, its elements are x[0] to x[99]) and you have to process it. To do so, you have to compute the following (f denotes the probability density function of the Gaussian distribution):
f(x[0]) * ... * f(x[99])
As you can see in my given link, f employs two parameters (the Greek letters µ and σ). You now have to calculate the values for µ and σ in such a way that f(x[0]) * ... * f(x[99]) takes the maximum possible value.
When you've done that, µ is your maximum likelihood value for the mean, and σ is the maximum likelihood value for standard deviation.
Note that I don't explicitly tell you how to compute the values for µ and σ, since this is a quite mathematical procedure I don't have at hand (and probably I would not understand it); I just tell you the technique to get the values, which can be applied to any other distributions as well.
Since you want to maximize the original term, you can "simply" maximize the logarithm of the original term - this saves you from dealing with all these products, and transforms the original term into a sum with some summands.
If you really want to calculate it, you can do some simplifications that lead to the following term (hope I didn't mess up anything):
log L(µ, σ) = -(n/2) * log(2 * pi * σ^2) - (1 / (2 * σ^2)) * Σ_i (x[i] - µ)^2, where n is the number of samples (100 here).
Now, you have to find values for µ and σ such that the above beast is maximal. Doing that is a very nontrivial task called nonlinear optimization.
One simplification you could try is the following: Fix one parameter and try to calculate the other. This saves you from dealing with two variables at the same time.
You need a numerical optimisation procedure. Not sure if anything is implemented in Python, but if it is then it'll be in numpy or scipy and friends.
Look for things like 'the Nelder-Mead algorithm', or 'BFGS'. If all else fails, use Rpy and call the R function 'optim()'.
These functions work by searching the function space and trying to work out where the maximum is. Imagine trying to find the top of a hill in fog. You might just try always heading up the steepest way. Or you could send some friends off with radios and GPS units and do a bit of surveying. Either method could lead you to a false summit, so you often need to do this a few times, starting from different points. Otherwise you may think the south summit is the highest when there's a massive north summit overshadowing it.
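For the asker's concrete Gaussian case, a small sketch of that numerical search (my own code, using SciPy's Nelder-Mead via scipy.optimize.minimize rather than Rpy; the search is done over log(sigma) so that sigma stays positive):
import numpy as np
from scipy.optimize import minimize
import scipy.stats as stats

x = np.random.randn(100)                     # the toy data from the question

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                # keep sigma positive during the search
    # negative log-likelihood of the sample under N(mu, sigma^2)
    return -np.sum(stats.norm.logpdf(x, loc=mu, scale=sigma))

res = minimize(neg_log_lik, x0=[0.5, np.log(2.0)], method='nelder-mead')
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)                     # should be close to 0 and 1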
As joran said, the maximum likelihood estimates for the normal distribution can be calculated analytically. The answers are found by finding the partial derivatives of the log-likelihood function with respect to the parameters, setting each to zero, and then solving both equations simultaneously.
In the case of the normal distribution, you would differentiate the log-likelihood with respect to the mean (mu) and then with respect to the variance (sigma^2) to get two equations, each set equal to zero. After solving the equations for mu and sigma^2, you'll get the sample mean and sample variance as your answers.
See the wikipedia page for more details.
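For completeness, the analytic result for the asker's toy data in a couple of lines (my own sketch; note that the ML estimate of the variance divides by n, which is NumPy's default ddof=0):
import numpy as np

x = np.random.randn(100)
mu_hat = x.mean()            # MLE of the mean: the sample mean
var_hat = x.var(ddof=0)      # MLE of the variance: mean of squared deviations
print(mu_hat, var_hat)       # should be close to 0 and 1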
