My data is monthly and contains negative and zero values. I am writing code for the Holt-Winters method and getting this error message:
"ValueError: endog must be strictly positive when usingmultiplicative trend or seasonal components."
I have attached an image of my code; can someone assist with how to resolve this?
This question already has an answer here: Can I convert spectrograms generated with librosa back to audio? (1 answer). Closed 11 months ago.
I know that it is possible to convert audio to a representative image.
Does anyone know if the opposite is possible?
Can we convert the representative image back to audio?
If it's possible, please tell me how.
I looked for ways to do this but did not find any.
Edit: my main goal is to generate new/random music using a DCGAN.
My idea is to take some audio, convert it to an image of the frequency graph, run the DCGAN on it, and then convert the result back to audio.
I don't know which tools to use or exactly how to do this.
If someone can help me, that would be great.
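For the spectrogram-to-audio direction specifically, the standard trick is the Griffin-Lim algorithm, which re-estimates the phase that a magnitude-only spectrogram has thrown away (librosa ships a ready-made version as `librosa.griffinlim`). Below is a hand-rolled sketch using only `scipy.signal.stft`/`istft`, to show the idea; the iteration count and STFT parameters are arbitrary choices:

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, length, n_iter=32, fs=8000, nperseg=256):
    """Recover a waveform from a magnitude-only spectrogram by
    iteratively re-estimating the phase (Griffin-Lim algorithm)."""
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))  # random initial phase
    for _ in range(n_iter):
        _, x = istft(mag * phase, fs=fs, nperseg=nperseg)
        x = x[:length]                       # keep the frame count stable
        _, _, spec = stft(x, fs=fs, nperseg=nperseg)
        phase = np.exp(1j * np.angle(spec))  # keep only the re-estimated phase
    _, x = istft(mag * phase, fs=fs, nperseg=nperseg)
    return x[:length]

# Demo: analyse a sine, discard the phase, reconstruct from magnitude alone
fs = 8000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 440 * t)
_, _, spec = stft(sig, fs=fs, nperseg=256)
audio = griffin_lim(np.abs(spec), length=len(sig), fs=fs)
```

Note this only works when the image really is a linear-magnitude spectrogram; for mel or log-scaled images the scaling has to be inverted first (librosa has helpers for that too).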
there are many ways to do this ... the approach I used is to iterate across each pixel in the input image ... assign each pixel, in order, a unique frequency ... the range of frequencies can be arbitrary, so let's vary it across the human audible range from 200 to 8,000 Hertz ... divide this audio frequency range by the number of pixels, which gives you a frequency increment value ... give the first pixel 200 Hertz, and as you iterate across all pixels give each pixel a frequency by adding this frequency increment to the previous pixel's frequency
while you perform the above iteration across all pixels, determine the light intensity value of the current pixel and use it to derive a value normalized from zero to one, which will be the amplification factor for that pixel's frequency
now you have a new array where each element records a light intensity value and a frequency ... walk across this array and create an oscillator that outputs a sine curve at an amplitude driven by the amplification factor, at the frequency of the current array element ... then combine all such oscillator outputs and normalize them into a single aggregate audio signal
this aggregate synthesized output audio is the time-domain representation of the input image, which is your frequency-domain starting point
the beautiful thing is that this output audio is the inverse Fourier Transform of the image ... anyone fluent in Fourier Transforms will predict what comes next, namely that this audio can then be sent into an FFT call, which will output a new image that, if you implement all this correctly, will match more or less your original input image
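The steps above (pixel position → frequency, light intensity → amplitude, one summed sine oscillator per pixel) can be sketched in a few lines. The answerer worked in Go; this is a Python/numpy equivalent where all names, the sample rate, and the tiny test image are my own illustrative choices:

```python
import numpy as np

def image_to_audio(img, fs=44100, duration=2.0, f_lo=200.0, f_hi=8000.0):
    """Map each pixel to a sine oscillator: position -> frequency,
    light intensity -> amplitude; sum and normalize the result."""
    pixels = img.astype(float).ravel()             # zig-zag scan: row by row
    amps = pixels / max(pixels.max(), 1e-12)       # intensity normalized to [0, 1]
    freqs = np.linspace(f_lo, f_hi, pixels.size)   # one unique frequency per pixel
    t = np.arange(int(fs * duration)) / fs
    audio = np.zeros_like(t)
    for a, f in zip(amps, freqs):
        audio += a * np.sin(2 * np.pi * f * t)     # one oscillator per pixel
    return audio / max(np.abs(audio).max(), 1e-12)  # aggregate, normalized

# Tiny 8x8 grayscale "image"
img = np.arange(64).reshape(8, 8)
audio = image_to_audio(img, duration=0.5)
```

For real image sizes the per-pixel loop gets slow; the same sum can be vectorized as a single matrix product, but the loop keeps the correspondence with the description above obvious.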
I used Go, not Python, but this challenge is language agnostic ... good luck and have fun
there are several refinements to this ... a naive way to parse the input image is to simply zig-zag left to right, top to bottom, which will work; however, if you use a Hilbert Curve to determine which pixel comes next, your output audio will be better suited to human listening, especially when and if you change the resolution of your original input image ... ignore this embellishment until you have the basic version working
far more valuable than the code which implements this is the voyage of discovery endured in writing the code ... here is the video which inspired me to embark on this voyage https://www.youtube.com/watch?v=3s7h2MHQtxc # Hilbert's Curve: Is infinite math useful?
here is a sample input photo
here is the output photo after converting above image into audio and then back into an image
once you get this up and running, and are able to toggle from the frequency domain to the time domain and back again, you are free to choose whether you start from audio or from an image
I am currently trying to work on a raw image and would like to apply as little processing as possible. I am trying to understand what the no_auto_scale parameter of rawpy.Params (used by rawpy.postprocess) is. I don't understand what disabling pixel value scaling does. Could anyone help me, please?
My ultimate goal is to load the Bayer matrix with the colors scaled to balance out the sensitivity of each color sensor. So every pixel in the final image will correspond to a different color depending on where it sits in the Bayer pattern, but they will all be on a similar scale.
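The per-channel balancing described here can be sketched independently of the postprocessing pipeline: given the raw mosaic and a map saying which color each site belongs to (rawpy exposes these as `raw.raw_image_visible` and `raw.raw_colors_visible`, with 0=R, 1=G, 2=B, 3=second G), multiply every site by its channel's gain. The data and multipliers below are synthetic stand-ins; in practice the multipliers might come from `raw.camera_whitebalance`:

```python
import numpy as np

# Synthetic 4x4 RGGB Bayer mosaic and its color-index map
# (stand-ins for raw.raw_image_visible / raw.raw_colors_visible)
bayer = np.array([[100, 200, 100, 200],
                  [200,  50, 200,  50],
                  [100, 200, 100, 200],
                  [200,  50, 200,  50]], dtype=float)
colors = np.tile(np.array([[0, 1],     # 0=R, 1=G
                           [3, 2]]),   # 3=G2, 2=B
                 (2, 2))

# Illustrative per-channel multipliers (R, G1, B, G2)
wb = np.array([2.0, 1.0, 1.5, 1.0])

# Scale every mosaic site by its own channel's gain in one step
balanced = bayer * wb[colors]
```

The fancy-indexing trick `wb[colors]` expands the four gains into a full-resolution gain map, so the whole mosaic is balanced without demosaicing anything.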
I'm trying to extract data from the image below, but I get the output in a very bad format, and I'm having trouble programmatically determining the right column for each value.
My progress so far: I've managed to get all the values correctly; the only problem now is that I can't programmatically determine the correct column for each value. Please guide me on how I can achieve this.
I have an idea to split the image column-wise and then put each part through OCR, but I also don't know how to split the image without breaking the text/words.
Above is the image I'm trying to extract data from, and below is the output.
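One common way to split columns without cutting through words is a vertical projection profile: binarize the image, count ink per pixel column, and cut only where a sufficiently wide run of empty columns appears. A sketch on a synthetic binary image (the `min_gap` threshold is a tuning knob you'd adjust to your table's column spacing):

```python
import numpy as np

def column_spans(binary, min_gap=3):
    """Split a binary image (1 = ink) into column spans by finding
    runs of at least `min_gap` empty pixel columns between them."""
    ink = binary.sum(axis=0) > 0          # which pixel columns contain ink
    spans, start, gap = [], None, 0
    for x, has_ink in enumerate(ink):
        if has_ink:
            if start is None:
                start = x                 # a new text column begins
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:            # wide gap -> close current span
                spans.append((start, x - gap + 1))
                start, gap = None, 0
    if start is not None:                 # close a span that runs to the edge
        spans.append((start, len(ink) - gap))
    return spans

# Two "columns" of ink separated by a 5-pixel gap
img = np.zeros((10, 20), dtype=int)
img[:, 2:7] = 1
img[:, 12:18] = 1
print(column_spans(img))  # [(2, 7), (12, 18)]
```

Each `(start, stop)` pair is a pixel range you can crop and feed to OCR separately, so values can be assigned to columns by which span they fall into.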
I'm working on an ANPR system to convert registration plate images to text output. I previously tried (py)tesseract to do the OCR for me, but it wasn't giving me sufficient results.
As my current training set I'm using this font, since all registration fonts are the same.
In my images, some of the resulting number plates will be at odd angles, so the plate isn't recognised correctly.
So I am asking: is there a way to distort each digit in many different ways, store those distortions in an nparray in a file, and then perform machine learning techniques on it?
Something like this (though the output would be different):
https://archive.ics.uci.edu/ml/datasets/Letter+Recognition
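Generating many distorted copies of each digit and stacking them into one array is straightforward. A minimal sketch using random translations plus noise (a real augmentation pipeline would add rotations and shears too, e.g. via `cv2.getRotationMatrix2D`/`cv2.warpAffine`); the digit image and every name here are illustrative:

```python
import numpy as np

def augment(digit, n=100, max_shift=2, noise=0.05, seed=0):
    """Return an (n, H, W) array of randomly shifted, noisy copies
    of a single digit image (pixel values in [0, 1])."""
    rng = np.random.default_rng(seed)
    out = np.empty((n,) + digit.shape)
    for i in range(n):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(digit, (dy, dx), axis=(0, 1))   # random translation
        out[i] = np.clip(shifted + rng.normal(0, noise, digit.shape), 0, 1)
    return out

# Toy 8x8 "digit": a vertical bar
digit = np.zeros((8, 8))
digit[1:7, 3:5] = 1.0

samples = augment(digit)                   # (100, 8, 8) stack of distortions
X = samples.reshape(len(samples), -1)      # flatten to (100, 64) for a classifier
np.save("digit_samples.npy", X)            # persist as an nparray file
```

The flattened `X` is the same shape of feature matrix the UCI letter-recognition dataset linked above would give you, so it drops straight into scikit-learn-style classifiers.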
Thanks, I used a previous answer to help me get this far with separating the characters and so on:
Recognize the characters of license plate
Thanks, any help would be appreciated.
I am trying to display an LBP image after applying the elbp function to a gray-scale image; the function was exposed manually in the face module for Python.
here's the code:
LBP = cv2.face.elbp(gray_roi, 1, 8)   # extended LBP: radius 1, 8 neighbours
cv2.imshow("Face", LBP)
However, what I got is a pure black window. I also noticed that the cols and rows are always smaller than those of the original image by 2. Here is the error information:
could not broadcast input array from shape (95,95) into shape (97,97)
I noticed that one other person asked the same question but using C++ instead: Unable to display image after applying LBPH (elbp()) operator in opencv
But what I can't understand is what he meant by normalizing the image to fit the screen rendering range.
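"Normalizing to the screen rendering range" just means linearly stretching the matrix so its values span 0-255 (uint8): if the LBP codes are all small, `imshow` renders them as near-black. A sketch of that stretch in plain numpy (OpenCV's `cv2.normalize` with `cv2.NORM_MINMAX` does the same thing):

```python
import numpy as np

def to_display_range(lbp):
    """Linearly stretch an array to uint8 0-255 so cv2.imshow
    renders it visibly instead of near-black."""
    lbp = lbp.astype(float)
    lo, hi = lbp.min(), lbp.max()
    if hi == lo:                       # flat image: avoid divide-by-zero
        return np.zeros(lbp.shape, dtype=np.uint8)
    return ((lbp - lo) / (hi - lo) * 255).astype(np.uint8)

# Example: small values that would look black when shown directly
lbp = np.array([[0, 1], [2, 3]])
vis = to_display_range(lbp)            # -> [[0, 85], [170, 255]]
```

This is also why multiplying by a constant like 64 "worked": it pushed the small codes up into the visible part of the 0-255 range, just less precisely than a min-max stretch.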
here is the matrix output of my image:
print(LBP)
As you can see, the pixel intensity distribution is normal.
here is the actual elbp function!
@barny, thanks for your detailed response. Based on your solution, I multiplied the matrix by a value like 64 and finally got the image to show, but I'm not sure why I have to multiply by a value to get a properly visible image; shouldn't that be done in the original elbp function?
Also, the matrix element values become very large:
here is part of the histogram that I printed out from the shown image (multiplied by 64):
histogram = scipy.stats.itemfreq(LBP1)
print(histogram)   # output begins: [[ 0 1726] ...
If someone can explain to me why I have to multiply by such a big value, that would be great!
PS: this is my first time asking on Stack Overflow; thanks to everyone who tried to help!