I'm trying to detect a specific pattern in real-time data (a time series). For the visualization, I'll show the data in two parts here.
Pattern: the shape I'm searching for in the time series,
DataWindow: the data buffer (window) I slide in real time to keep track of history.
Here is my recorded data (the red boxes show the pattern I want to detect), but it can differ since the data is real-time:
The above data doesn't have a lot of noise (at least for this collection); at this resolution the peaks (I would call them sinusoidal peaks) are distinguishable at first glance. That is why applying a moving-average filter does not help me at all.
The image below shows some samples from the real-time data; in the saved data the plotter interpolates between samples to draw a continuous plot. In general, data samples look like the image below, possibly with higher resolution.
As an initial attempt, I tried Spike Detection in a Time-Series using a moving average, and it did not work as I expected.
I've also tried some solutions from the thread Detecting patterns from two arrays of data in Python, but the results are not good enough to reliably raise a flag on the pattern at run time (there are many false positives).
Also, as you can see from the saved real-time data, the patterns can have a different scale and, most importantly, a different offset. I suspect that is why the solutions above do not give me distinguishable results on my problem.
To give an example to try out, these can be used for the Pattern and DataWindow:
Pattern = [5.9, 5.6, 4.08, 2.57, 2.78, 4.78, 7.3, 7.98, 4.81, 5.57, 4.7]
SampleTarget = [4.74, 4.693, 4.599, 4.444, 3.448, 2.631, 1.845, 2.032, 2.415, 3.714, 5.184, 5.82, 5.61, 4.841, 3.802, 3.11]
SampleTarget2 = [5.898, 5.91, 5.62, 5.25, 4.72, 4.09, 3.445, 2.91, 2.7, 2.44, 2.515, 2.79, 3.25, 3.915, 4.72, 5.65, 6.28, 7.15, 7.81, 8.2, 7.9, 7.71, 7.32, 6.88, 6.44, 6.0, 5.58, 5.185, 4.88, 4.72, 4.69, 4.82]
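For reference, one way to make the comparison scale- and offset-invariant is to z-normalize both the Pattern and each candidate slice of the DataWindow before measuring the distance. Below is a minimal sketch of that idea using the sample arrays above; the distance threshold for actually raising a flag is an assumption that would still have to be tuned.
import numpy as np
Pattern = np.array([5.9, 5.6, 4.08, 2.57, 2.78, 4.78, 7.3, 7.98, 4.81, 5.57, 4.7])
SampleTarget = np.array([4.74, 4.693, 4.599, 4.444, 3.448, 2.631, 1.845, 2.032,
                         2.415, 3.714, 5.184, 5.82, 5.61, 4.841, 3.802, 3.11])
def znorm(a):
    # remove offset (mean) and scale (standard deviation)
    return (a - a.mean()) / a.std()
def match_scores(pattern, window):
    # Euclidean distance between the z-normalized pattern and each
    # z-normalized slice of the window (smaller = better match)
    n = len(pattern)
    p = znorm(pattern)
    return np.array([np.linalg.norm(p - znorm(window[i:i + n]))
                     for i in range(len(window) - n + 1)])
scores = match_scores(Pattern, SampleTarget)
print(scores.argmin(), scores.min())  # best-matching start index and its distance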
I am trying to solve this problem in Python for a PoC.
UPDATE: A dataset is added; it includes the first two red boxes plus a somewhat wider span, as shown in the saved real-time data: dataset
You can compute the gradient of the data and use a threshold to identify the features. Here I use a triple mask to get the down/up/down feature.
I commented the code to show the main steps, so I hope it is easy to follow.
import pandas as pd
import matplotlib.pyplot as plt
# read data
s = pd.read_csv('sin_peaks.txt', header=None)[0]
# 0 5.574537
# 1 5.736071
# 2 5.965132
# 3 6.164344
# 4 6.172413
thresh = 0.5 # threshold of derivative
span = 10 # max span of the feature (in number of points)
# calculate gradient
# if the points are not evenly spaced
# you should also divide by the spacing
s2 = s.diff()
# get points outside of threshold
m1 = s2.lt(-thresh)
m2 = s2.gt(thresh)
# extend masks
m1_fw = m1.where(m1).ffill(limit=span)
m1_bw = m1.where(m1).bfill(limit=span)
m2_fbw = m2.where(m2).ffill(limit=span).bfill(limit=span)
# slice data where all conditions are met
# up peak & down peak in the "span" before and down peak in the "span" after
peaks = s[m1_fw & m1_bw & m2_fbw]
# group peaks
groups = peaks.index.to_series().diff().ne(1).cumsum()
# plot identified features
ax = s.plot(label='data')
s.diff().plot(ax=ax, label='gradient')
ax.legend()
ax.axhline(thresh, ls=':', c='k')
ax.axhline(-thresh, ls=':', c='k')
for _, group in peaks.groupby(groups):
start = group.index[0]
stop = group.index[-1]
ax.axvspan(start, stop, color='k', alpha=0.1)
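Since the data arrives in real time, the same steps can be wrapped in a function and re-run on each DataWindow as it slides. Here is a sketch reusing the logic above; thresh and span are assumptions that will need tuning for the actual window length and sampling rate.
def find_feature_spans(s, thresh=0.5, span=10):
    # return (start, stop) index pairs of detected down/up/down features
    s2 = s.diff()
    m1 = s2.lt(-thresh)
    m2 = s2.gt(thresh)
    m1_fw = m1.where(m1).ffill(limit=span)
    m1_bw = m1.where(m1).bfill(limit=span)
    m2_fbw = m2.where(m2).ffill(limit=span).bfill(limit=span)
    peaks = s[m1_fw & m1_bw & m2_fbw]
    groups = peaks.index.to_series().diff().ne(1).cumsum()
    return [(g.index[0], g.index[-1]) for _, g in peaks.groupby(groups)]
# e.g. on the current contents of the sliding buffer:
# print(find_feature_spans(pd.Series(data_window)))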
I am trying to segment the time-series data as shown in the figure. I have lots of data from the sensors, and any of these series can have a different number of isolated peak regions. In this figure, I have 3 of those. I would like to have a function that takes the time series as input and returns the segmented sections of equal length.
My initial thought was to use a sliding window that calculates the relative change in amplitude. Since the windows containing peaks have relatively large changes, I could define a threshold on the relative change that would let me pick out the windows with isolated peaks. However, this creates a problem when choosing the threshold, as the relative change is very sensitive to the noise in the data.
Any suggestions?
To do this you need to separate the signal from the noise.
Get the mean value of your signal and apply a multiplier that places borders at the top and bottom of the noise (the green dashed lines).
Find peak values below the bottom noise border -> an array with 2 groups of data.
Find peak values above the top noise border -> an array with 2 groups of data.
Get the min index of the first bottom peak and the max index of the first top peak to find the first peak range.
Get the min index of the second top peak and the max index of the second bottom peak to find the second peak range.
Some description is in the code. With this method you can find the other peaks.
One thing you need to input by hand is the x value between the peaks, to tell the program where to split the data into parts.
See the graphic for a summary.
import numpy as np
from matplotlib import pyplot as plt
# create noise data
def function(x, noise):
y = np.sin(7*x+2) + noise
return y
def function2(x, noise):
y = np.sin(6*x+2) + noise
return y
noise = np.random.uniform(low=-0.3, high=0.3, size=(100,))
x_line0 = np.linspace(1.95,2.85,100)
y_line0 = function(x_line0, noise)
x_line = np.linspace(0, 1.95, 100)
x_line2 = np.linspace(2.85, 3.95, 100)
x_pik = np.linspace(3.95, 5, 100)
y_pik = function2(x_pik, noise)
x_line3 = np.linspace(5, 6, 100)
# concatenate noise data
x = np.linspace(0, 6, 500)
y = np.concatenate((noise, y_line0, noise, y_pik, noise), axis=0)
# plot data
noise_band = 1.1
top_noise = y.mean()+noise_band*np.amax(noise)
bottom_noise = y.mean()-noise_band*np.amax(noise)
fig, ax = plt.subplots()
ax.axhline(y=y.mean(), color='red', linestyle='--')
ax.axhline(y=top_noise, linestyle='--', color='green')
ax.axhline(y=bottom_noise, linestyle='--', color='green')
ax.plot(x, y)
# split data into 2 signals
def split(arr, cond):
return [arr[cond], arr[~cond]]
# find bottom noise data indexes
botom_data_indexes = np.argwhere(y < bottom_noise)
# split by visual x value
splitted_bottom_data = split(botom_data_indexes, botom_data_indexes < np.argmax(x > 3))
# find top noise data indexes
top_data_indexes = np.argwhere(y > top_noise)
# split by visual x value
splitted_top_data = split(top_data_indexes, top_data_indexes < np.argmax(x > 3))
# get first signal range
first_signal_start = np.amin(splitted_bottom_data[0])
first_signal_end = np.amax(splitted_top_data[0])
# get x index of first signal
x_first_signal = np.take(x, [first_signal_start, first_signal_end])
ax.axvline(x=x_first_signal[0], color='orange')
ax.axvline(x=x_first_signal[1], color='orange')
# get second signal range
second_signal_start = np.amin(splitted_top_data[1])
second_signal_end = np.amax(splitted_bottom_data[1])
# get x index of second signal
x_second_signal = np.take(x, [second_signal_start, second_signal_end])
ax.axvline(x=x_second_signal[0], color='orange')
ax.axvline(x=x_second_signal[1], color='orange')
plt.show()
Output:
red line = mean value of all data
green line - top and bottom noise borders
orange line - selected peak data
1. It depends on how you want to define a "region"; it looks like you have an intuition rather than a strict definition. If you have a very clear definition of what kind of piece you want to cut out, you can try a method like a "matched filter".
2. You might want to detect peaks in the absolute magnitude. If that does not work, try peaks in the absolute magnitude of the first-order difference, or even the second-order difference.
3. It is hard to work on noisy data like this. My suggestion is to filter the data first and use the filtered signal to pick the sections (which you can then cut from the unfiltered data). Filtering gives you smooth peaks, so the peak positions can be detected by the change of sign of the derivative. For the filtering, try a simple low-pass filter first. If it doesn't work, I also suggest the Hilbert–Huang transform.
*. It looks like you are using MATLAB. The methods mentioned are all available in MATLAB.
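For illustration (in Python, since the rest of this page uses it), a low-pass filter followed by derivative-sign peak picking could look like the sketch below; the signal y, the sampling rate fs, the cutoff frequency, and the height threshold are all placeholders to adapt.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks
fs = 100.0                      # sampling rate (assumed)
y = np.random.randn(1000)       # replace with the real sensor signal
b, a = butter(4, 5.0 / (fs / 2), btype='low')   # 5 Hz low-pass cutoff (placeholder)
y_smooth = filtfilt(b, a, y)    # zero-phase filtering, so peaks are not shifted
peaks, _ = find_peaks(np.abs(y_smooth), height=0.5)  # placeholder amplitude threshold
print(peaks)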
I am plotting a 2D histogram to show, for example, the concentration of lightning strikes (given by their positions registered in longitude and latitude). The number of data points is not too large (53) and the result is too coarse. Here is a picture of the result:
For this reason, I am trying to find a way to weight in data from surrounding bins. For example, there is a bin at longitude = 130 and latitude = 34.395 with 0 lightning strikes registered, but with several around it. I would want this bin to somehow reflect the concentration around it. In other words, I want to smooth the data by having overlapping bins (so that a data point can be counted more than once, by different contiguous bins).
I understand that hist2d has the input option for "weights", but this would only work to make a data point more "important" within its bin.
The simplified code is below and I can clarify anything needed.
import numpy as np
import matplotlib.pyplot as plt
# Here are the data, to experiment if needed
longitude = np.array([119.165, 115.828, 110.354, 117.124, 119.16 , 107.068, 108.628, 126.914, 125.685, 116.608, 122.455, 116.278, 123.43, 128.84, 128.603, 130.192, 124.508, 121.916, 133.245, 125.088, 126.641, 127.224, 113.686, 129.376, 127.312, 121.353, 117.834, 125.219, 138.077, 153.299, 135.66 , 128.391, 118.011, 117.313, 119.986, 118.619, 119.178, 120.295, 121.991, 123.519, 135.948, 132.224, 129.317, 135.334, 132.923, 129.828, 139.006, 140.813, 116.207, 139.254, 120.922, 112.171, 143.508])
latitude = np.array([34.381, 34.351, 34.359, 34.357, 34.364, 34.339, 34.351, 34.38, 34.381, 34.366, 34.373, 34.366, 34.369, 34.387, 34.39 , 34.39 , 34.386, 34.371, 34.394, 34.386, 34.384, 34.387, 34.369, 34.4 , 34.396, 34.37 , 34.374, 34.383, 34.403, 34.429, 34.405, 34.385, 34.367, 34.36 , 34.367, 34.364, 34.363, 34.367, 34.367, 34.369, 34.399, 34.396, 34.382, 34.401, 34.396, 34.392, 34.401, 34.401, 34.362, 34.404, 34.382, 34.346, 34.406])
# Number of bins
Nbins = 15
# Plot histogram of the positions
plt.hist2d(longitude,latitude, bins=Nbins)
plt.plot(longitude,latitude,'o',markersize = 8, color = 'k')
plt.plot(longitude,latitude,'o',markersize = 6, color = 'w')
plt.colorbar()
plt.show()
Perhaps you're getting confused by the concept of a 2D histogram, or of a histogram in general. Besides being a bar plot that groups data into bins, a histogram is also a discretized estimate of a probability function; in your case, the presence probability. For this reason, I would not try to overlap bins.
Moreover, because the histogram is discrete, it will necessarily be coarse. In fact, the resolution of a histogram is an important parameter for the desired visualization.
Going back to your question, if you want to reduce the coarseness, you may simply want to play with Nbins.
Perhaps another graph type would suit your usage better: see this gallery and the 2D density plot with shading.
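As a rough sketch of that last suggestion, a kernel density estimate gives a smooth picture of the concentration in which every point influences its surroundings. This reuses the longitude and latitude arrays from the question; the default KDE bandwidth is an assumption and may need tuning, especially given the very different scales of the two axes.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde
# longitude, latitude: the arrays defined in the question
kde = gaussian_kde(np.vstack([longitude, latitude]))
lon_grid, lat_grid = np.meshgrid(np.linspace(longitude.min(), longitude.max(), 200),
                                 np.linspace(latitude.min(), latitude.max(), 200))
density = kde(np.vstack([lon_grid.ravel(), lat_grid.ravel()])).reshape(lon_grid.shape)
plt.contourf(lon_grid, lat_grid, density, 20)
plt.plot(longitude, latitude, 'o', markersize=4, color='w')
plt.colorbar()
plt.show()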
I have some data and I have plotted magnitude against wavelength (the blue points). I then have some code that reads a model stellar population from a file, and plots this on the same graph (the pink line). In this code, there is a scale that can be adjusted that moves this line up or down on the graph. So far I have been changing the scale so that the line is as close as I can tell by eye to my points, but I would like to write some code that would calculate the value of the scale for which the total distance between my points and the line is a minimum. This is my code so far:
#Import modules
from math import *
import numpy as np
import matplotlib.pyplot as plt
# Specify data
wavelength = np.array([357.389, 445.832, 472.355, 547.783, 620.246, 752.243, 891.252, 2164.089])
magnitude = np.array([24.0394, 23.1925, 23.1642, 22.4794, 21.7496, 20.9047, 20.4671, 19.427])
# Create Graph
#plt.scatter(wavelength, magnitude)
#plt.ylim([25,18])
#plt.xlim([300,2200])
#plt.xlabel('wavelength (nm)')
#plt.ylabel('magnitude')
#plt.title('object 1')
#plt.show()
#plt.close()
#now - here is some code that reads a model stellar population model from a file
lines = open('fig7b.dat').readlines()
wavelengths, luminosities = [],[]
for l in lines:
s = l.split()
wl = s[0]
old = s[-1]
if '#' not in wl:
wavelengths.append(float(wl)) #wavelength in angstroms
luminosities.append(float(old)) #luminosities are in log units!
scale = 3.5
c=3.e8
wavelengths = np.array(wavelengths)
nus = c/(wavelengths*1.e-10)
luminosities = np.array(luminosities) + scale
luminosity_density = np.log10(((10**luminosities)*wavelengths)/nus)
#plt.plot(wavelengths,luminosity_density)
#z = 1.0
#plt.plot(wavelengths*(1+z),luminosity_density,color='r')
#plt.axis([900, 10000, 25,31])
#plt.savefig('sed.png')
#plt.show()
#plt.close()
Mpc_to_cm = 3.086e24 #convert Mpc to cm
z = 0.3448 #our chosen redshift
D_L = 1841.7 * Mpc_to_cm
#remember luminosity_density is logged at the moment
flux_density = (10**luminosity_density) * (1+z) / (4*pi*D_L**2) #units will be erg/s/cm^2/Hz
#now turn that into an AB magnitude - goes back to log
AB_mag = -2.5*np.log10(flux_density) - 48.6
#try plotting your photometry on here and play with z and D_L
plt.plot(wavelengths*(1+z),AB_mag,color='pink')
plt.scatter(wavelength*10., magnitude,color='cornflowerblue')
plt.axis([900, 25000, 30,18])
plt.xlabel('wavelength')
plt.ylabel('magnitude')
plt.title('object 1')
plt.savefig('sed_ab.png')
plt.show()
which gives a graph that looks like this:
Also it would be helpful to print the best scale value.
I'm very new to python and programming in general and the pink line isn't a simple equation (in the file I was given it is made up of a lot of data points) so I have been getting a bit stuck. Apologies if I am not using the correct language to describe my problem, and for the long code - a lot of the comments were previous plots my supervisor has kept from before when I had separate plots. (I am using python 2.7)
A link to fig7b.dat: https://drive.google.com/open?id=0B_tOncLLEAYsbG8wcHJMYVowOXc
First, create a list of points from the curve data so that each point corresponds to a point in the first list (each corresponding pair of points will have the same X coordinate, i.e. the same wavelength).
Then the offset that minimizes the distance between these two sets of points is simply: (sum(points2) - sum(points1)) / len(points1).
Look at the following example:
points1 = [1.1, 1.4, 1.8, 1.9, 2.3, 1.7, 1.9, 2.7]
points2 = [8.4, 3.5, 2.9, 7.6, 0.1, 2.2, 3.3, 4.8]
def min_distance(first,second):
assert len(first) == len(second) # must have same size
result = (sum(second) - sum(first)) / len(first)
return result
print("Adding this value to the first series of points")
print("will provice minimum distance between curves")
print(min_distance(points1,points2))
Running this will print the value 2.25. If you add 2.25 to all values of points1, you will get the minimum possible sum of squared distances between the two sets of points (which is 62.36 in this particular case).
In your problem, points1 will be the magnitude array. points2 will be the points from fig7b.dat corresponding to the wavelengths.
This assumes you want to minimize the sum of squares between the points and the curve. It also assumes the distances are measured vertically (that is why you need to extract the points at the corresponding wavelengths).
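For completeness, the formula follows from setting the derivative of the squared-error sum with respect to the offset c to zero:
\frac{d}{dc}\sum_i \bigl(p^{(2)}_i - p^{(1)}_i - c\bigr)^2 = -2\sum_i \bigl(p^{(2)}_i - p^{(1)}_i - c\bigr) = 0 \;\Rightarrow\; c = \frac{\sum_i p^{(2)}_i - \sum_i p^{(1)}_i}{n},
i.e. the mean of the pointwise differences, which is exactly what min_distance computes.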
If you want to write your own little code without using scipy.optimize, I would recommend:
Use an interpolation of your theoretical spectrum to evaluate the theoretical value at each of your observed wavelengths:
https://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html
e.g.:
from scipy.interpolate import interp1d
f2 = interp1d(wavelengths, luminosities, kind='cubic')
Then you can calculate \chi^{2} for every scale value you want to try and afterwards find the minimum.
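A minimal sketch of that grid search, reusing the names from the question (wavelengths, wavelength, magnitude, z). It assumes the model AB magnitudes have first been computed once with scale = 0 into a hypothetical array AB_mag0, so that each trial scale simply shifts the model by -2.5*scale (which follows from the question's code, where the scale is added to the log luminosities), and that the model wavelength grid covers the observed wavelengths.
import numpy as np
from scipy.interpolate import interp1d
# interpolate the scale=0 model onto the observed wavelengths (nm -> Angstrom)
model = interp1d(wavelengths * (1 + z), AB_mag0, kind='cubic')
model_at_obs = model(wavelength * 10.)
scales = np.arange(2.0, 5.0, 0.01)   # trial range of scale values (placeholder)
chi2 = np.array([np.sum((magnitude - (model_at_obs - 2.5 * s))**2) for s in scales])
best_scale = scales[chi2.argmin()]
print('best scale:', best_scale)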
I am using the Librosa library for pitch and onset detection. Specifically, I am using onset_detect and piptrack.
This is my code:
def detect_pitch(y, sr, onset_offset=5, fmin=75, fmax=1400):
y = highpass_filter(y, sr)
onset_frames = librosa.onset.onset_detect(y=y, sr=sr)
pitches, magnitudes = librosa.piptrack(y=y, sr=sr, fmin=fmin, fmax=fmax)
notes = []
for i in range(0, len(onset_frames)):
onset = onset_frames[i] + onset_offset
index = magnitudes[:, onset].argmax()
pitch = pitches[index, onset]
if (pitch != 0):
notes.append(librosa.hz_to_note(pitch))
return notes
def highpass_filter(y, sr):
filter_stop_freq = 70 # Hz
filter_pass_freq = 100 # Hz
filter_order = 1001
# High-pass filter
nyquist_rate = sr / 2.
desired = (0, 0, 1, 1)
bands = (0, filter_stop_freq, filter_pass_freq, nyquist_rate)
filter_coefs = signal.firls(filter_order, bands, desired, nyq=nyquist_rate)
# Apply high-pass filter
filtered_audio = signal.filtfilt(filter_coefs, [1], y)
return filtered_audio
When running this on guitar audio samples recorded in a studio, therefore samples without noise (like this), I get very good results in both functions. The onset times are correct and the frequencies are almost always correct (with some octave errors sometimes).
However, a big problem arises when I try to record my own guitar sounds with my cheap microphone. I get audio files with noise, such as this. The onset_detect algorithm gets confused and thinks that noise contains onset times. Therefore, I get very bad results. I get many onset times even if my audio file consists of one note.
Here are two waveforms. The first is of a guitar sample of a B3 note recorded in a studio, whereas the second is my recording of an E2 note.
The result of the first is correctly B3 (the one onset time was detected).
The result of the second is an array of 7 elements, which means that 7 onset times were detected, instead of 1! One of those elements is the correct onset time, other elements are just random peaks in the noise part.
Another example is this audio file containing the notes B3, C4, D4, E4:
As you can see, the noise is clear and my high-pass filter has not helped (this is the waveform after applying the filter).
I assume this is a matter of noise, as the difference between those files lies there. If yes, what could I do to reduce it? I have tried using a high-pass filter but there is no change.
I have three observations to share.
First, after a bit of playing around, I've concluded that the onset detection algorithm appears to have been designed to automatically rescale its own operation in order to take into account the local background noise at any given instant. This is likely so that it can detect onset times in pianissimo sections with the same likelihood as in fortissimo sections. It has the unfortunate result that the algorithm tends to trigger on the background noise coming from your cheap microphone; the onset detection algorithm honestly thinks it's simply listening to pianissimo music.
A second observation is that roughly the first ~2200 samples in your recorded example (roughly the first 0.1 seconds) are a bit wonky, in the sense that the noise truly is nearly zero during that short initial interval. Try zooming way into the waveform at the starting point and you'll see what I mean. Unfortunately, the start of the guitar playing follows so quickly after the noise onset (roughly around sample 3000) that the algorithm is unable to resolve the two independently--instead it simply merges the two into a single onset event that begins about 0.1 seconds too early. I therefore cut out roughly the first 2240 samples in order to "normalize" the file (I don't think this is cheating though; it's an edge effect that would likely disappear if you had simply recorded a second or so of initial silence prior to plucking the first string, as one would normally do).
My third observation is that frequency-based filtering only works if the noise and the music are actually in somewhat different frequency bands. That may be true in this case, however I don't think you've demonstrated it yet. Therefore, instead of frequency-based filtering, I elected to try a different approach: thresholding. I used the final 3 seconds of your recording, where there is no guitar playing, in order to estimate the typical background noise level throughout the recording, in units of RMS energy, and then I used that median value to set a minimum energy threshold which was calculated to lie safely above the median. Only onset events returned by the detector occurring at times when the RMS energy is above the threshold are accepted as "valid".
An example script is shown below:
import librosa
import numpy as np
import matplotlib.pyplot as plt
# I played around with this but ultimately kept the default value
hoplen=512
y, sr = librosa.core.load("./Vocaroo_s07Dx8dWGAR0.mp3")
# Note that the first ~2240 samples (0.1 seconds) are anomalously low noise,
# so cut out this section from processing
start = 2240
y = y[start:]
idx = np.arange(len(y))
# Calculate the onset frames in the usual way
onset_frames = librosa.onset.onset_detect(y=y, sr=sr, hop_length=hoplen)
onstm = librosa.frames_to_time(onset_frames, sr=sr, hop_length=hoplen)
# Calculate RMS energy per frame. I shortened the frame length from the
# default value in order to avoid ending up with too much smoothing
rmse = librosa.feature.rmse(y=y, frame_length=512, hop_length=hoplen)[0,]
envtm = librosa.frames_to_time(np.arange(len(rmse)), sr=sr, hop_length=hoplen)
# Use final 3 seconds of recording in order to estimate median noise level
# and typical variation
noiseidx = [envtm > envtm[-1] - 3.0]
noisemedian = np.percentile(rmse[noiseidx], 50)
sigma = np.percentile(rmse[noiseidx], 84.1) - noisemedian
# Set the minimum RMS energy threshold that is needed in order to declare
# an "onset" event to be equal to 5 sigma above the median
threshold = noisemedian + 5*sigma
threshidx = [rmse > threshold]
# Choose the corrected onset times as only those which meet the RMS energy
# minimum threshold requirement
correctedonstm = onstm[[tm in envtm[threshidx] for tm in onstm]]
# Print both in units of actual time (seconds) and sample ID number
print(correctedonstm+start/sr)
print(correctedonstm*sr+start)
fg = plt.figure(figsize=[12, 8])
# Print the waveform together with onset times superimposed in red
ax1 = fg.add_subplot(2,1,1)
ax1.plot(idx+start, y)
for ii in correctedonstm*sr+start:
ax1.axvline(ii, color='r')
ax1.set_ylabel('Amplitude', fontsize=16)
# Print the RMSE together with onset times superimposed in red
ax2 = fg.add_subplot(2,1,2, sharex=ax1)
ax2.plot(envtm*sr+start, rmse)
for ii in correctedonstm*sr+start:
ax2.axvline(ii, color='r')
# Plot threshold value superimposed as a black dotted line
ax2.axhline(threshold, linestyle=':', color='k')
ax2.set_ylabel("RMSE", fontsize=16)
ax2.set_xlabel("Sample Number", fontsize=16)
fg.show()
Printed output looks like:
In [1]: %run rosatest
[ 0.17124717 1.88952381 3.74712018 5.62793651]
[ 3776. 41664. 82624. 124096.]
and the plot that it produces is shown below:
Have you tried normalizing the sound sample before processing?
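For example, a simple peak normalization could be applied right after loading (a one-line sketch, assuming y is the loaded signal):
y = librosa.util.normalize(y)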
Reading the onset_detect documentation, we can see that there are a lot of optional arguments; have you already tried using some of them?
Maybe one of these optional arguments can help you keep only the good onsets (or at least limit the size of the returned onset-time array); see the short sketch after this list:
librosa.util.peak_pick (maybe the best)
backtrack
energy
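For instance, here is a sketch of tightening the peak picking via the keyword arguments that onset_detect forwards to librosa.util.peak_pick; all the numeric values are placeholders to tune on your recordings.
onset_frames = librosa.onset.onset_detect(
    y=y, sr=sr,
    backtrack=True,             # snap each onset back to the preceding energy minimum
    pre_max=20, post_max=20,    # wider windows for the local-maximum test
    pre_avg=100, post_avg=100,  # wider windows for the local-average test
    delta=0.2,                  # require the peak to exceed the local average by this much
    wait=10)                    # minimum number of frames between consecutive onsets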
Please also see an update of your code that uses a pre-computed onset envelope:
def detect_pitch(y, sr, onset_offset=5, fmin=75, fmax=1400):
y = highpass_filter(y, sr)
o_env = librosa.onset.onset_strength(y, sr=sr)
times = librosa.frames_to_time(np.arange(len(o_env)), sr=sr)
onset_frames = librosa.onset.onset_detect(onset_envelope=o_env, sr=sr)
pitches, magnitudes = librosa.piptrack(y=y, sr=sr, fmin=fmin, fmax=fmax)
notes = []
for i in range(0, len(onset_frames)):
onset = onset_frames[i] + onset_offset
index = magnitudes[:, onset].argmax()
pitch = pitches[index, onset]
if (pitch != 0):
notes.append(librosa.hz_to_note(pitch))
return notes
def highpass_filter(y, sr):
filter_stop_freq = 70 # Hz
filter_pass_freq = 100 # Hz
filter_order = 1001
# High-pass filter
nyquist_rate = sr / 2.
desired = (0, 0, 1, 1)
bands = (0, filter_stop_freq, filter_pass_freq, nyquist_rate)
filter_coefs = signal.firls(filter_order, bands, desired, nyq=nyquist_rate)
# Apply high-pass filter
filtered_audio = signal.filtfilt(filter_coefs, [1], y)
return filtered_audio
Does it work better?
I am trying to plot contour lines of pressure level. I am using a netCDF file which contains high-resolution data (the grid spacing ranges from 3 km to 27 km). Due to the high-resolution data set, I get a lot of pressure contours which do not need to be plotted (rather, I don't mind omitting certain contour lines of insignificant values). I have written a plotting script based on the examples given in this link http://matplotlib.org/basemap/users/examples.html.
After plotting, the image looks like this:
In the image I have circled the contours which are small and not required to be plotted. Also, I would like to plot all the contour lines more smoothly, as indicated in the image above. Overall, I would like to get a contour image like this:
Possible solutions I can think of are:
Find out the number of points required for plotting a contour and mask/omit those lines if they consist of few points.
or
Find the area of each contour (as I want to omit only the circled contours) and omit/mask the smaller ones.
or
Reduce the resolution (of the contours only) by increasing the grid spacing to 50 km - 100 km.
I am able to successfully get the points using the SO thread Python: find contour lines from matplotlib.pyplot.contour()
But I am not able to implement any of the solutions suggested above using those points.
Any way to implement the suggested solutions above would be really appreciated.
Edit:
@Andras Deak
I used a print 'diameter is ', diameter line just above the del(level.get_paths()[kp]) line to check whether the code filters out the required diameters. Here are the filtered messages when I set if diameter < 15000:
diameter is 9099.66295612
diameter is 13264.7838257
diameter is 445.574234531
diameter is 1618.74618114
diameter is 1512.58974168
However, the resulting image shows no effect; everything looks the same as the image posted above. I am pretty sure that I saved the figure (after plotting the wind barbs).
Regarding the solution for reducing the resolution, plt.contour(x[::2,::2], y[::2,::2], mslp[::2,::2]) works. I still have to apply some filter to make the curves smooth.
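One filter that could be tried is smoothing the field itself before contouring, for example with a Gaussian filter; a minimal sketch using scipy.ndimage (already imported in the full code below), where sigma is a placeholder to tune:
import scipy.ndimage
mslp_smooth = scipy.ndimage.gaussian_filter(mslp, sigma=3)
P = plt.contour(x, y, mslp_smooth, colors='b', linewidths=2)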
Full working example code for removing lines:
Here is the example code for your review:
#!/usr/bin/env python
from netCDF4 import Dataset
import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt
import numpy as np
import scipy.ndimage
from mpl_toolkits.basemap import interp
from mpl_toolkits.basemap import Basemap
# Set default map
west_lon = 68
east_lon = 93
south_lat = 7
north_lat = 23
nc = Dataset('ncfile.nc')
# Get this variable for later calculation
temps = nc.variables['T2']
time = 0 # We will take only first interval for this example
# Draw basemap
m = Basemap(projection='merc', llcrnrlat=south_lat, urcrnrlat=north_lat,
llcrnrlon=west_lon, urcrnrlon=east_lon, resolution='l')
m.drawcoastlines()
m.drawcountries(linewidth=1.0)
# This sets the standard grid point structure at full resolution
x, y = m(nc.variables['XLONG'][0], nc.variables['XLAT'][0])
# Set figure margins
width = 10
height = 8
plt.figure(figsize=(width, height))
plt.rc("figure.subplot", left=.001)
plt.rc("figure.subplot", right=.999)
plt.rc("figure.subplot", bottom=.001)
plt.rc("figure.subplot", top=.999)
plt.figure(figsize=(width, height), frameon=False)
# Convert Surface Pressure to Mean Sea Level Pressure
stemps = temps[time] + 6.5 * nc.variables['HGT'][time] / 1000.
mslp = nc.variables['PSFC'][time] * np.exp(9.81 / (287.0 * stemps) * nc.variables['HGT'][time]) * 0.01 + (
6.7 * nc.variables['HGT'][time] / 1000)
# Contour only at 2 hpa interval
level = []
for i in range(int(mslp.min()), int(mslp.max()) + 1, 1):
if i % 2 == 0:
if i >= 1006 and i <= 1018:
level.append(i)
# Save mslp values to upload to SO thread
# np.savetxt('mslp.txt', mslp, fmt='%.14f', delimiter=',')
P = plt.contour(x, y, mslp, V=2, colors='b', linewidths=2, levels=level)
# Solution suggested by Andras Deak
for level in P.collections:
for kp,path in enumerate(level.get_paths()):
# include test for "smallness" of your choice here:
# I'm using a simple estimation for the diameter based on the
# x and y diameter...
verts = path.vertices # (N,2)-shape array of contour line coordinates
diameter = np.max(verts.max(axis=0) - verts.min(axis=0))
if diameter < 15000: # threshold to be refined for your actual dimensions!
#print 'diameter is ', diameter
del(level.get_paths()[kp]) # no remove() for Path objects:(
#level.remove() # This does not work. produces ValueError: list.remove(x): x not in list
plt.gcf().canvas.draw()
plt.savefig('dummy', bbox_inches='tight')
plt.close()
After the plot is saved I get the same image
You can see that the lines are not removed yet. Here is the link to mslp array which we are trying to play with http://www.mediafire.com/download/7vi0mxqoe0y6pm9/mslp.txt
If you want x and y data which are being used in the above code, I can upload for your review.
Smooth line
Your code to remove the smaller circles works perfectly. However, the other question I asked in the original post (smooth lines) does not seem to be solved. I have used your code to slice the array to reduce the resolution and contoured it. I used the following code to reduce the array size:
slice = 15
CS = plt.contour(x[::slice,::slice],y[::slice,::slice],mslp[::slice,::slice], colors='b', linewidths=1, levels=levels)
The result is below.
After searching for a few hours I found this SO thread with a similar issue:
Regridding regular netcdf data
But none of the solutions provided there work. The questions similar to mine do not have proper solutions. If this issue is solved, then the code is perfect and complete.
General idea
Your question seems to have 2 very different halves: one about omitting small contours, and another one about smoothing the contour lines. The latter is simpler, since I can't really think of anything else other than decreasing the resolution of your contour() call, just like you said.
As for removing a few contour lines, here's a solution which is based on directly removing contour lines individually. You have to loop over the collections of the object returned by contour(), and for each element check each Path, and delete the ones you don't need. Redrawing the figure's canvas will get rid of the unnecessary lines:
# dummy example based on matplotlib.pyplot.clabel example:
import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
Z2 = mlab.bivariate_normal(X, Y, 1.5, 0.5, 1, 1)
# difference of Gaussians
Z = 10.0 * (Z2 - Z1)
plt.figure()
CS = plt.contour(X, Y, Z)
for level in CS.collections:
for kp,path in reversed(list(enumerate(level.get_paths()))):
# go in reversed order due to deletions!
# include test for "smallness" of your choice here:
# I'm using a simple estimation for the diameter based on the
# x and y diameter...
verts = path.vertices # (N,2)-shape array of contour line coordinates
diameter = np.max(verts.max(axis=0) - verts.min(axis=0))
if diameter<1: # threshold to be refined for your actual dimensions!
del(level.get_paths()[kp]) # no remove() for Path objects:(
# this might be necessary on interactive sessions: redraw figure
plt.gcf().canvas.draw()
Here's the original(left) and the removed version(right) for a diameter threshold of 1 (note the little piece of the 0 level at the top):
Note that the top little line is removed while the huge cyan one in the middle doesn't, even though both correspond to the same collections element i.e. the same contour level. If we didn't want to allow this, we could've called CS.collections[k].remove(), which would probably be a much safer way of doing the same thing (but it wouldn't allow us to differentiate between multiple lines corresponding to the same contour level).
To show that fiddling around with the cut-off diameter works as expected, here's the result for a threshold of 2:
All in all it seems quite reasonable.
Your actual case
Since you've added your actual data, here's the application to your case. Note that you can directly generate the levels in a single line using np, which will almost give you the same result. The exact same can be achieved in 2 lines (generating an arange, then selecting those that fall between p1 and p2). Also, since you're setting levels in the call to contour, I believe the V=2 part of the function call has no effect.
import numpy as np
import matplotlib.pyplot as plt
# insert actual data here...
Z = np.loadtxt('mslp.txt',delimiter=',')
X,Y = np.meshgrid(np.linspace(0,300000,Z.shape[1]),np.linspace(0,200000,Z.shape[0]))
p1,p2 = 1006,1018
# this is almost the same as the original, although it will produce
# [p1, p1+2, ...] instead of `[Z.min()+n, Z.min()+n+2, ...]`
levels = np.arange(np.maximum(Z.min(),p1),np.minimum(Z.max(),p2),2)
#control
plt.figure()
CS = plt.contour(X, Y, Z, colors='b', linewidths=2, levels=levels)
#modified
plt.figure()
CS = plt.contour(X, Y, Z, colors='b', linewidths=2, levels=levels)
for level in CS.collections:
for kp,path in reversed(list(enumerate(level.get_paths()))):
# go in reversed order due to deletions!
# include test for "smallness" of your choice here:
# I'm using a simple estimation for the diameter based on the
# x and y diameter...
verts = path.vertices # (N,2)-shape array of contour line coordinates
diameter = np.max(verts.max(axis=0) - verts.min(axis=0))
if diameter<15000: # threshold to be refined for your actual dimensions!
del(level.get_paths()[kp]) # no remove() for Path objects:(
# this might be necessary on interactive sessions: redraw figure
plt.gcf().canvas.draw()
plt.show()
Results, original(left) vs new(right):
Smoothing by resampling
I've decided to tackle the smoothing problem as well. All I could come up with is downsampling your original data, then upsampling again using griddata (interpolation). The downsampling part could also be done with interpolation, although the small-scale variation in your input data might make this problem ill-posed. So here's the crude version:
import scipy.interpolate as interp #the new one
# assume you have X,Y,Z,levels defined as before
# start resampling stuff
dN = 10 # use every dN'th element of the gridded input data
my_slice = [slice(None,None,dN),slice(None,None,dN)]
# downsampled data
X2,Y2,Z2 = X[my_slice],Y[my_slice],Z[my_slice]
# same as X2 = X[::dN,::dN] etc.
# upsampling with griddata over original mesh
Zsmooth = interp.griddata(np.array([X2.ravel(),Y2.ravel()]).T,Z2.ravel(),(X,Y),method='cubic')
# plot
plt.figure()
CS = plt.contour(X, Y, Zsmooth, colors='b', linewidths=2, levels=levels)
You can freely play around with the grids used for interpolation, in this case I just used the original mesh, as it was at hand. You can also play around with different kinds of interpolation: the default 'linear' one will be faster, but less smooth.
Result after downsampling(left) and upsampling(right):
Of course you should still apply the small-line-removal algorithm after this resampling business, and keep in mind that this heavily distorts your input data (since if it wasn't distorted, then it wouldn't be smooth). Also, note that due to the crude method used in the downsampling step, we introduce some missing values near the top/right edges of the region under consideration. If this is a problem, you should consider doing the downsampling based on griddata as I've noted earlier.
This is a pretty bad solution, but it's the only one that I've come up with. Use the get_contour_verts function in this solution you linked to, possibly with the matplotlib._cntr module so that nothing gets plotted initially. That gives you a list of contour lines, sections, vertices, etc. Then you have to go through that list and pop the contours you don't want. You could do this by calculating a minimum diameter, for example; if the max distance between points is less than some cutoff, throw it out.
That leaves you with a list of LineCollection objects. Now if you make a Figure and Axes instance, you can use Axes.add_collection to add all of the LineCollections in the list.
I checked this out really quick, but it seemed to work. I'll come back with a minimum working example if I get a chance. Hope it helps!
Edit: Here's an MWE of the basic idea. I wasn't familiar with plt._cntr.Cntr, so I ended up using plt.contour to get the initial contour object. As a result, you end up making two figures; you just have to close the first one. You can replace checkDiameter with whatever function works. I think you could turn the line segments into a Polygon and calculate areas, but you'd have to figure that out on your own. Let me know if you run into problems with this code, but it at least works for me.
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
def checkDiameter(seg, tol=.3):
# Function for screening line segments. NB: Not actually a proper diameter.
diam = (seg[:,0].max() - seg[:,0].min(),
seg[:,1].max() - seg[:,1].min())
return not (diam[0] < tol or diam[1] < tol)
# Create testing data
x = np.linspace(-1,1, 21)
xx, yy = np.meshgrid(x,x)
z = np.exp(-(xx**2 + .5*yy**2))
# Original plot with plt.contour
fig0, ax0 = plt.subplots()
# Make sure this contour object actually has a tiny contour to remove
cntrObj = ax0.contour(xx,yy,z, levels=[.2,.4,.6,.8,.9,.95,.99,.999])
# Primary loop: Copy contours into a new LineCollection
lineNew = list()
for lineOriginal in cntrObj.collections:
# Get properties of the original LineCollection
segments = lineOriginal.get_segments()
propDict = lineOriginal.properties()
propDict = {key: value for (key,value) in propDict.items()
if key in ['linewidth','color','linestyle']} # Whatever parameters you want to carry over
# Filter out the lines with small diameters
segments = [seg for seg in segments if checkDiameter(seg)]
# Create new LineCollection out of the OK segments
if len(segments) > 0:
lineNew.append(mpl.collections.LineCollection(segments, **propDict))
# Make new plot with only these line collections; display results
fig1, ax1 = plt.subplots()
ax1.set_xlim(ax0.get_xlim())
ax1.set_ylim(ax0.get_ylim())
for line in lineNew:
ax1.add_collection(line)
plt.show()
FYI: The bit with propDict is just to automate bringing over some of the line properties from the original plot. You can't use the whole dictionary at once, though. First, it contains the old plot's line segments, but you can just swap those for the new ones. But second, it appears to contain a number of parameters that are in conflict with each other: multiple linewidths, facecolors, etc. The {key for key in propDict if I want key} workaround is my way to bypass that, but I'm sure someone else can do it more cleanly.