I have been trying to decompress Bing Maps location/border shapes using Python. My end goal is to have custom regions/borders created from combining multiple counties and cities, and to save the location data to our database for faster and more accurate location-based analysis.
My strategy is as follows, but I'm a little stuck on #2, since I can't seem to accurately decompress the code:
Retrieve County/City borders from Bing Maps GeoData API - They refer to it as "shape"
Decompress their "shape" data to get the latitude and longitude of the border points
Remove the points that have the same lat/lng as other shapes (The goal is to make one large shape of multiple counties, as opposed to 5-6 separate shapes)
Compress the end result and save in the database
The function I am using seems to work for the example of 'vx1vilihnM6hR7mEl2Q' provided in the Point Compression Algorithm documentation. However, when I insert something a little more complex, like Cook County, the formula seems to be off (tested by inserting several of the points into different polygon mapping/drawing applications that also use Bing Maps). It basically creates a line at the south side of Chicago that vigorously goes East and West into Indiana, without much North-South movement. Without knowing what the actual coordinates of any counties are supposed to be, I'm not sure how to figure out where I'm going wrong.
Any help is greatly appreciated, even if it is a suggestion of a different strategy.
Here is the Python code (sorry for the overuse of the Decimal type; my poor attempt to ensure the error wasn't a result of inadvertently losing precision):
import math
from decimal import Decimal

safeCharacters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_-'

def decodeBingBorder(compressedData):
    latLng = []
    pointsArray = []
    point = []
    lastLat = Decimal(0)
    lastLng = Decimal(0)
    # Assigns the number of each character based on its respective index in 'safeCharacters'
    # numbers under 32 indicate it is the last number of the combination for the point, and a new point is begun
    for char in compressedData:
        num = Decimal(safeCharacters.index(char))
        if num < 32:
            point.append(num)
            pointsArray.append(point)
            point = []
        else:
            num -= Decimal(32)
            point.append(num)
    # Loops through each point to determine the lat/lng of each point
    for pnt in pointsArray:
        result = Decimal(0)
        # This reverses step 7 of the Point Compression Algorithm https://msdn.microsoft.com/en-us/library/jj158958.aspx
        for num in reversed(pnt):
            if result == 0:
                result = num
            else:
                result = result * Decimal(32) + num
        # This was pretty much taken from the Decompression Algorithm (not in Python format) at https://msdn.microsoft.com/en-us/library/dn306801.aspx
        # Determine which diagonal it's on
        diag = Decimal(int(round((math.sqrt(8 * result + 5) - 1) / 2)))
        # Subtract the total number of points from lower diagonals, and get the X and Y from what's left over
        latY = Decimal(result - Decimal(diag * (diag + 1) / 2))
        lngX = Decimal(diag - latY)
        # Undo the sign encoding
        if latY % 2 == 1:
            latY = (latY + Decimal(1)) * Decimal(-1)
        if lngX % 2 == 1:
            lngX = (lngX + Decimal(1)) * Decimal(-1)
        latY /= 2
        lngX /= 2
        # Undo the delta encoding
        lat = latY + lastLat
        lng = lngX + lastLng
        lastLat = lat
        lastLng = lng
        # Position the decimal point
        lat /= Decimal(100000)
        lng /= Decimal(100000)
        # Append the point to the latLng list in string format, as opposed to the Decimal format
        latLng.append([str(lat), str(lng)])
    return latLng
The compressed shape data:
1440iqu9vJ957r8pB_825syB6rh_gXh1-ntqB56sk2B2nq07Mwvq5f64r0m0Fni11ooE4kkvxEy4wzMuotr_DvsiqvFozvt-Lw9znxH-r5oxLv9yxCwhh7wKnk4sB8o0Rvv56D8snW5n1jBg50K4kplClkpqBpgl9F4h4X_sjMs85Ls6qQi6qvqBr188mBqk-pqIxxsx5EpsjosI-8hgIoygDigU94l_4C
This is the result:
[['41.46986', '-87.79031'], ['41.47033', '-87.52569'], ['41.469145',
'-87.23372'], ['41.469395', '-87.03741'], ['41.41014', '-86.7114'],
['41.397545', '-86.64553'], ['41.3691', '-86.47018'], ['41.359585',
'-86.41984'], ['41.353585', '-86.9637'], ['41.355725', '-87.43971'],
['41.35561', '-87.52716'], ['41.3555', '-87.55277'], ['41.354625',
'-87.63504'], ['41.355635', '-87.54018'], ['41.360745', '-87.40351'],
['41.362315', '-87.29262'], ['41.36214', '-87.43194'], ['41.360915',
'-87.44473'], ['41.35598', '-87.58256'], ['41.3551', '-87.59025'],
['41.35245', '-87.59828'], ['41.34782', '-87.60784'], ['41.34506',
'-87.61664'], ['41.34267', '-87.6219'], ['41.34232', '-87.62643'],
['41.33809', '-87.63286'], ['41.33646', '-87.63956'], ['41.32985',
'-87.65056'], ['41.33069', '-87.65596'], ['41.32965', '-87.65938'],
['41.33063', '-87.6628'], ['41.32924', '-87.66659'], ['41.32851',
'-87.71306'], ['41.327105', '-87.75963'], ['41.329515', '-87.64388'],
['41.32698', '-87.73614'], ['41.32876', '-87.61933'], ['41.328275',
'-87.6403'], ['41.328765', '-87.63857'], ['41.32866', '-87.63969'],
['41.32862', '-87.70802']]
As mentioned by rbrundritt, storing the data from Bing Maps is against the terms of use. However, there are other sources of this same data available, such as http://nationalmap.gov/boundaries.html
In the interest of solving the problem (and of storing this and other coordinate data more efficiently), the fix was to remove the 'round' function when calculating 'diag'. This is what should replace it:
diag = int((math.sqrt(8 * result + 5) - 1) / 2)
All of the 'Decimal' crap I added is not necessary, so you can remove it if you wish.
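For reference, here is roughly what the whole decoder looks like with that change applied; a condensed sketch of the function above, using plain ints instead of Decimal (variable names differ slightly):

import math

safeCharacters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_-'

def decode_bing_border(compressed):
    points = []
    last_lat = last_lng = 0
    index = 0
    while index < len(compressed):
        # Rebuild one integer from the base-64 "safe" characters; values >= 32
        # mean more characters follow for this number.
        result = 0
        shift = 0
        while True:
            num = safeCharacters.index(compressed[index])
            index += 1
            result |= (num & 31) << shift
            shift += 5
            if num < 32:
                break
        # Undo the diagonal (pairing) encoding; note the int() truncation
        # rather than round(), which is the fix described above.
        diag = int((math.sqrt(8 * result + 5) - 1) / 2)
        lat = result - diag * (diag + 1) // 2
        lng = diag - lat
        # Undo the sign encoding
        if lat % 2 == 1:
            lat = -(lat + 1)
        if lng % 2 == 1:
            lng = -(lng + 1)
        lat //= 2
        lng //= 2
        # Undo the delta encoding, then position the decimal point
        last_lat += lat
        last_lng += lng
        points.append([last_lat / 100000.0, last_lng / 100000.0])
    return points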
You can also do
diag=int(round((sqrt(8 * number + 1)/ 2)-1/2.))
Don't forget to subtract longitude*2 from latitude to get N/E coordinates!
Maybe it will be useful; I found a bug in the code.
The invert pair function should be:
diag = math.floor((math.sqrt(8 * result + 1) - 1) / 2)
After fixing this, your implementation works correctly.
You can't store the boundary data from the Bing Maps GeoData API or any data derived from it in a database. This is against the terms of use of the platform.
I'm trying to reproduce parts of the Higgs discovery in the Higgs --> 4 leptons channel with open data and making use of awkward. I can do it when the leptons are the same (e.g. 4 muons) with zip/unzip, but is there a way to do it in the 2 muon/2 electron channel? I started with the example in the HSF tutorial
https://hsf-training.github.io/hsf-training-scikit-hep-webpage/04-awkward/index.html
So I now have the following. First get the input file
curl http://opendata.cern.ch/record/12361/files/SMHiggsToZZTo4L.root --output SMHiggsToZZTo4L.root
Then I do the following
import numpy as np
import matplotlib.pylab as plt
import uproot
import awkward as ak
# Helper functions
def energy(m, px, py, pz):
    E = np.sqrt((m**2) + (px**2 + py**2 + pz**2))
    return E

def invmass(E, px, py, pz):
    m2 = (E**2) - (px**2 + py**2 + pz**2)
    if m2 < 0:
        m = -np.sqrt(-m2)
    else:
        m = np.sqrt(m2)
    return m

def convert(pt, eta, phi):
    px = pt * np.cos(phi)
    py = pt * np.sin(phi)
    pz = pt * np.sinh(eta)
    return px, py, pz
####
# Read in the file
infile_name = 'SMHiggsToZZTo4L.root'
infile = uproot.open(infile_name)
# Convert to Cartesian
muon_pt = infile['Events/Muon_pt'].array()
muon_eta = infile['Events/Muon_eta'].array()
muon_phi = infile['Events/Muon_phi'].array()
muon_q = infile['Events/Muon_charge'].array()
muon_mass = infile['Events/Muon_mass'].array()
muon_px,muon_py,muon_pz = convert(muon_pt, muon_eta, muon_phi)
muon_energy = energy(muon_mass, muon_px, muon_py, muon_pz)
# Do the magic
nevents = len(infile['Events/Muon_pt'].array())
# nevents = 1000 # For testing
max_entries = nevents
muons = ak.zip({
    "px": muon_px[0:max_entries],
    "py": muon_py[0:max_entries],
    "pz": muon_pz[0:max_entries],
    "e": muon_energy[0:max_entries],
    "q": muon_q[0:max_entries],
})
quads = ak.combinations(muons, 4)
mu1, mu2, mu3, mu4 = ak.unzip(quads)
mass_try = (mu1.e + mu2.e + mu3.e + mu4.e)**2 - ((mu1.px + mu2.px + mu3.px + mu4.px)**2 + (mu1.py + mu2.py + mu3.py + mu4.py)**2 + (mu1.pz + mu2.pz + mu3.pz + mu4.pz)**2)
mass_try = np.sqrt(mass_try)
qtot = mu1.q + mu2.q + mu3.q + mu4.q
plt.hist(ak.flatten(mass_try[qtot==0]), bins=100,range=(0,300));
And the histogram looks good!
So how would I do this for 2-electron + 2-muon combinations? I would guess there's a way to make lepton_xxx arrays? But I'm not sure how to do this elegantly (quickly) such that I could also create a flag to keep track of what the lepton combinations are?
Thanks!
Matt
This could be answered in a variety of ways:
make a union array (mixed data types) of electrons and muons
make an array of electrons and muons that are the same type, but have a flag to indicate flavor (electron vs muon); see the sketch after this list
use ak.combinations with n=2 for the muons, ak.combinations with n=2 again for the electrons, and then combine them with ak.cartesian (and deal with tuples of tuples, rather than one level of tuples, which would mean two calls to ak.unzip)
break the electron and muon collections down into single-charge collections. You'll want exactly 1 positive muon, 1 negative muon, 1 positive electron, and 1 negative electron, so that would be an ak.cartesian of the four collections.
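For reference, the second option could look roughly like this, assuming muons and electrons record arrays like the ones built below (a sketch only, not the approach I take here):

import awkward as ak

# Sketch of option 2: give both collections a "flavor" field, then concatenate
# the per-event lists so all leptons live in one array.
muons_f = ak.with_field(muons, 13, where="flavor")          # 13 = muon
electrons_f = ak.with_field(electrons, 11, where="flavor")  # 11 = electron
leptons = ak.concatenate([muons_f, electrons_f], axis=1)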
I'll go with the last method because I've decided that it's easiest.
Another thing you probably want to know about is the Vector library. After
import vector
vector.register_awkward()
you don't have to do explicit coordinate transformations or mass calculations. I'll be using that. Here's how I read in the data:
infile = uproot.open("/tmp/SMHiggsToZZTo4L.root")
muon_branch_arrays = infile["Events"].arrays(filter_name="Muon_*")
electron_branch_arrays = infile["Events"].arrays(filter_name="Electron_*")
muons = ak.zip({
    "pt": muon_branch_arrays["Muon_pt"],
    "phi": muon_branch_arrays["Muon_phi"],
    "eta": muon_branch_arrays["Muon_eta"],
    "mass": muon_branch_arrays["Muon_mass"],
    "charge": muon_branch_arrays["Muon_charge"],
}, with_name="Momentum4D")

electrons = ak.zip({
    "pt": electron_branch_arrays["Electron_pt"],
    "phi": electron_branch_arrays["Electron_phi"],
    "eta": electron_branch_arrays["Electron_eta"],
    "mass": electron_branch_arrays["Electron_mass"],
    "charge": electron_branch_arrays["Electron_charge"],
}, with_name="Momentum4D")
And this reproduces your plot:
quads = ak.combinations(muons, 4)
quad_charge = quads["0"].charge + quads["1"].charge + quads["2"].charge + quads["3"].charge
mu1, mu2, mu3, mu4 = ak.unzip(quads[quad_charge == 0])
plt.hist(ak.flatten((mu1 + mu2 + mu3 + mu4).mass), bins=100, range=(0, 200));
The quoted number slices (e.g. "0" and "1") are picking tuple fields, rather than array entries; it's a manual ak.unzip. (The fields could have had real names if I had given a fields argument to ak.combinations.)
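For instance, the same combination with named fields would look like this (a small variation on the line above, not what I ran):

quads = ak.combinations(muons, 4, fields=["mu1", "mu2", "mu3", "mu4"])
quad_charge = quads.mu1.charge + quads.mu2.charge + quads.mu3.charge + quads.mu4.charge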
For the two muons, two electrons case, let's make four distinct collections.
muons_plus = muons[muons.charge > 0]
muons_minus = muons[muons.charge < 0]
electrons_plus = electrons[electrons.charge > 0]
electrons_minus = electrons[electrons.charge < 0]
The ak.combinations function (with default axis=1) returns the Cartesian product of each list in the array of lists with itself, excluding pairs of an item with itself, (x, x) (unless you want that, in which case specify replacement=True), and excluding one of each symmetric pair (x, y)/(y, x).
If you want just a plain Cartesian product of lists from one array with lists from another array, that's ak.cartesian. The four collections muons_plus, muons_minus, electrons_plus, electrons_minus are non-overlapping and we want each four-lepton group to have exactly one from each, so that's a plain Cartesian product.
mu1, mu2, e1, e2 = ak.unzip(ak.cartesian([muons_plus, muons_minus, electrons_plus, electrons_minus]))
plt.hist(ak.flatten((mu1 + mu2 + e1 + e2).mass), bins=100, range=(0, 200));
Separating particles by flavor (electron vs muon) but not by charge is an artifact of the typing constraints imposed by C++. Electrons are measured in different ways from muons, so they have different attributes in the dataset. In a statically typed language like C++, we couldn't put them in the same collection because they differ by type (have different attributes). But charges only differ by value (an integer), so there was no reason they had to be in different collections.
But now, the only thing distinguishing a type is (a) what attributes the objects have and (b) what objects with that set of attributes are named. Here, I named them "Momentum4D" because that allowed Vector to recognize them as Lorentz vectors and give them Lorentz vector methods. But they could have had other attributes (e.g. e_over_p for electrons, but not for muons). charge is a non-Momentum4D attribute.
So if we're going to the trouble to put different-flavor leptons in different arrays, why not put different-charge leptons in different arrays, too? Physically, flavor and charge are both particle properties at the same level of distinction, so we probably want to do the analysis that way. No reason to let a C++ type distinction get in the way of the analysis logic!
Also, you probably want
nevents = infile["Events"].num_entries
to get the number of entries from a TTree without reading the array data from it.
I'm trying to resample a set of GRIB2 arrays at 0.25 degree resolution to a coarser 0.5 degree resolution using the xESMF package (xarray's coarsen method does not work here because there is an odd number of coordinates in the latitude).
I have converted the GRIB data to xarray format through the pygrib package, which then subsets out the specific grid I need:
fhr = 96
gridDefs = {
    "0.25": {'url': "https://noaa-gefs-retrospective.s3.amazonaws.com/landsfc.pgrb2.0p25"},
    "0.5": {'url': "https://noaa-gefs-retrospective.s3.amazonaws.com/landsfc.pgrb2.0p50"},
}

fileDefs = {
    "0.25": {'url': "https://noaa-gefs-retrospective.s3.amazonaws.com/GEFSv12/reforecast/2019/2019051900/c00/Days%3A1-10/tmp_pres_2019051900_c00.grib2",
             'localfile': "tmp_pres.grib2"},
    "0.5": {'url': "https://noaa-gefs-retrospective.s3.amazonaws.com/GEFSv12/reforecast/2019/2019051900/c00/Days%3A1-10/tmp_pres_abv700mb_2019051900_c00.grib2",
            'localfile': "tmp_pres_abv_700.grib2"},
}
def grib_to_xs(grib, vName):
    arr = xr.DataArray(grib.values)
    arr = arr.rename({'dim_0': 'lat', 'dim_1': 'lon'})
    xs = arr.to_dataset(name=vName)
    return xs

gribs = {}

for key, item in gridDefs.items():
    if not os.path.exists(item['url'][item['url'].rfind('/')+1:]):
        os.system("wget " + item['url'])
    lsGrib = pygrib.open(item['url'][item['url'].rfind('/')+1:])
    landsea = lsGrib[1].values
    gLats = lsGrib[1]["distinctLatitudes"]
    gLons = lsGrib[1]["distinctLongitudes"]
    gribs["dataset" + key] = xr.Dataset({'lat': gLats, 'lon': gLons})
    lsGrib.close()

for key, item in fileDefs.items():
    if not os.path.exists(item['localfile']):
        os.system("wget " + item['url'])
        os.system("mv " + item['url'][item['url'].rfind('/')+1:] + " " + item['localfile'])

for key, item in fileDefs.items():
    hold = pygrib.open(item['localfile'])
    subsel = hold.select(forecastTime=fhr)
    # Grab the first item
    gribs[key] = grib_to_xs(subsel[1], "TT" + key)
    hold.close()
The above code downloads two constant files (landsfc) at the two grid domains (0.25 and 0.5), then downloads two GRIB files at each of the resolutions as well. I'm trying to resample the 0.25 degree GRIB file (tmp_pres.grib2) to a 0.5 degree domain as such:
regridder = xe.Regridder(ds, gribs['dataset0.5'], 'bilinear')
print(regridder)
ds2 = regridder(ds)
My issue is that I generate two warning messages when trying to use the regridder:
/media/robert/HDD/Anaconda3/envs/wrf-work/lib/python3.8/site-packages/xarray/core/dataarray.py:682: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
return key in self.data
/media/robert/HDD/Anaconda3/envs/wrf-work/lib/python3.8/site-packages/xesmf/backend.py:53: UserWarning: Latitude is outside of [-90, 90]
warnings.warn('Latitude is outside of [-90, 90]')
The output xarray does have the correct coordinates, however the values inside the grid are way off (Outside the maxima/minima of the finer resolution grid), and exhibit these strange banding patterns that make no physical sense.
What I would like to know is, is this the correct process to upscale an array using xESMF, and if not, how would I address the problem?
Any assistance would be appreciated, thanks!
I would recommend first trying conservative instead of bilinear (it's recommended in their documentation), and maybe checking that you're using the parameters correctly, because it seems something is wrong; my first guess would be that something you're doing moves the latitude around for some reason. I'm leaving the docs links here and hope someone knows more.
Regridder docs:
https://xesmf.readthedocs.io/en/latest/user_api.html?highlight=regridder#xesmf.frontend.Regridder.__init__
Upscaling recommendation (search for upscaling, there's also a guide for increasing resolution):
https://xesmf.readthedocs.io/en/latest/notebooks/Compare_algorithms.html?highlight=upscaling
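For what it's worth, the conservative method also needs cell boundary arrays (lat_b / lon_b) on both grids, so the inputs look roughly like this (the grid values below are made up for illustration, not taken from your files):

import numpy as np
import xesmf as xe

# Hypothetical regular 0.25-degree and 0.5-degree global grids; 'conservative'
# requires the *_b cell-boundary arrays in addition to the cell centres.
grid_in = {
    "lat": np.arange(-90 + 0.125, 90, 0.25),
    "lon": np.arange(0.125, 360, 0.25),
    "lat_b": np.arange(-90, 90.001, 0.25),
    "lon_b": np.arange(0, 360.001, 0.25),
}
grid_out = {
    "lat": np.arange(-90 + 0.25, 90, 0.5),
    "lon": np.arange(0.25, 360, 0.5),
    "lat_b": np.arange(-90, 90.001, 0.5),
    "lon_b": np.arange(0, 360.001, 0.5),
}
regridder = xe.Regridder(grid_in, grid_out, "conservative")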
Thanks to the documentation links and recommendations provided by MASACR 99, I was able to do some more digging into the xESMF package and find a working example of resampling from the package author (https://github.com/geoschem/GEOSChem-python-tutorial/blob/main/Chapter03_regridding.ipynb). My issue was solved by two changes:
I changed the method from bilinear to conservative (this also required adding two fields to the input array: the cell boundaries for latitude and longitude).
Instead of directly passing the variable being resampled to the resampler, I instead had to define two fixed grids to create the resampler, then pass individual variables.
To solve the first change, I created a new function to give me the boundary variables:
def get_bounds(arr, gridSize):
    lonMin = np.nanmin(arr["lon"].values)
    latMin = np.nanmin(arr["lat"].values)
    lonMax = np.nanmax(arr["lon"].values)
    latMax = np.nanmax(arr["lat"].values)
    sizeLon = len(arr["lon"])
    sizeLat = len(arr["lat"])
    bounds = {}
    bounds["lon"] = arr["lon"].values
    bounds["lat"] = arr["lat"].values
    bounds["lon_b"] = np.linspace(lonMin-(gridSize/2), lonMax+(gridSize/2), sizeLon+1)
    bounds["lat_b"] = np.linspace(latMin-(gridSize/2), latMax+(gridSize/2), sizeLat+1).clip(-90, 90)
    return bounds
For the second change, I modified the regridder definition and application to use the statically defined grids, then passed the desired variable to resample:
regridder = xe.Regridder(get_bounds(gribs['dataset0.25'], 0.25), get_bounds(gribs['dataset0.5'], 0.5), 'conservative')
print(regridder)
ds2 = regridder(ds)
I have a load of 3 hour MP3 files, and every ~15 minutes a distinct 1 second sound effect is played, which signals the beginning of a new chapter.
Is it possible to identify each time this sound effect is played, so I can note the time offsets?
The sound effect is similar every time, but because it's been encoded in a lossy file format, there will be a small amount of variation.
The time offsets will be stored in the ID3 Chapter Frame MetaData.
Example Source, where the sound effect plays twice.
ffmpeg -ss 0.9 -i source.mp3 -t 0.95 sample1.mp3 -acodec copy -y
Sample 1 (Spectrogram)
ffmpeg -ss 4.5 -i source.mp3 -t 0.95 sample2.mp3 -acodec copy -y
Sample 2 (Spectrogram)
I'm very new to audio processing, but my initial thought was to extract a sample of the 1 second sound effect, then use librosa in python to extract a floating point time series for both files, round the floating point numbers, and try to get a match.
import numpy
import librosa
print("Load files")
source_series, source_rate = librosa.load('source.mp3') # 3 hour file
sample_series, sample_rate = librosa.load('sample.mp3') # 1 second file
print("Round series")
source_series = numpy.around(source_series, decimals=5);
sample_series = numpy.around(sample_series, decimals=5);
print("Process series")
source_start = 0
sample_matching = 0
sample_length = len(sample_series)

for source_id, source_sample in enumerate(source_series):
    if source_sample == sample_series[sample_matching]:
        sample_matching += 1
        if sample_matching >= sample_length:
            print(float(source_start) / source_rate)
            sample_matching = 0
        elif sample_matching == 1:
            source_start = source_id
    else:
        sample_matching = 0
This does not work with the MP3 files above, but did with an MP4 version - where it was able to find the sample I extracted, but it was only that one sample (not all 12).
I should also note this script takes just over 1 minute to process the 3 hour file (which includes 237,426,624 samples). So I can imagine that some kind of averaging on every loop would cause this to take considerably longer.
Trying to directly match waveforms samples in the time domain is not a good idea. The mp3 signal will preserve the perceptual properties but it is quite likely the phases of the frequency components will be shifted so the sample values will not match.
You could try matching the volume envelopes of your effect and your sample.
This is less likely to be affected by the mp3 process.
First, normalise your sample so the embedded effects are the same level as your reference effect. Construct new waveforms from the effect and the sample by using the average of the peak values over time frames that are just short enough to capture the relevant features. Better still, use overlapping frames. Then use cross-correlation in the time domain.
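A rough sketch of that envelope idea, assuming librosa is already being used to load the audio (the frame and hop sizes here are guesses you would need to tune):

import numpy as np
import librosa

# Compare short-time RMS envelopes instead of raw sample values.
source, sr = librosa.load('source.mp3')
effect, _ = librosa.load('sample.mp3', sr=sr)

frame_length, hop_length = 1024, 256  # short enough to keep the effect's shape
source_env = librosa.feature.rms(y=source, frame_length=frame_length, hop_length=hop_length)[0]
effect_env = librosa.feature.rms(y=effect, frame_length=frame_length, hop_length=hop_length)[0]

# Normalise both envelopes, then cross-correlate in the time domain.
source_env = (source_env - source_env.mean()) / source_env.std()
effect_env = (effect_env - effect_env.mean()) / effect_env.std()
corr = np.correlate(source_env, effect_env, mode='valid')

best = corr.argmax()
print('strongest match near %.2f s' % (best * hop_length / sr))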
If this does not work, then you could analyze each frame using an FFT; this gives you a feature vector for each frame. You then try to find matches of the sequence of features in your effect with the sample. Similar to the suggestion from https://stackoverflow.com/users/1967571/jonnor. MFCC is used in speech recognition, but since you are not detecting speech, FFT is probably OK.
I am assuming the effect is playing by itself (no background noise) and is added to the recording electronically (as opposed to being recorded via a microphone). If this is not the case the problem becomes more difficult.
This is an Audio Event Detection problem. If the sound is always the same and there are no other sounds at the same time, it can probably be solved with a Template Matching approach, at least if there are no other sounds with other meanings that sound similar.
The simplest kind of template matching is to compute the cross-correlation between your input signal and the template.
Cut out an example of the sound to detect (using Audacity). Take as much as possible, but avoid the start and end. Store this as a .wav file.
Load the .wav template using librosa.load()
Chop up the input file into a series of overlapping frames. Length should be same as your template. Can be done with librosa.util.frame
Iterate over the frames, and compute cross-correlation between frame and template using numpy.correlate.
High values of cross-correlation indicate a good match. A threshold can be applied in order to decide what is an event or not. And the frame number can be used to calculate the time of the event.
You should probably prepare some shorter test files which have both some examples of the sound to detect as well as other typical sounds.
If the volume of the recordings is inconsistent you'll want to normalize that before running detection.
If cross-correlation in the time-domain does not work, you can compute the melspectrogram or MFCC features and cross-correlate that. If this does not yield OK results either, a machine learning model can be trained using supervised learning, but this requires labeling a bunch of data as event/not-event.
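A minimal sketch of that template-matching recipe (the file names and the threshold are placeholders you would tune on a labelled test file):

import numpy as np
import librosa

template, sr = librosa.load('template.wav', sr=None)  # the cut-out effect
signal, _ = librosa.load('source.mp3', sr=sr)         # the long recording

hop = len(template) // 2
frames = librosa.util.frame(signal, frame_length=len(template), hop_length=hop)

threshold = 0.5  # decide using the test files mentioned above
for i in range(frames.shape[1]):
    frame = frames[:, i]
    # Normalised cross-correlation at zero lag between this frame and the template
    score = np.correlate(frame, template, mode='valid')[0]
    score /= (np.linalg.norm(frame) * np.linalg.norm(template) + 1e-9)
    if score > threshold:
        print('possible event at %.2f s' % (i * hop / sr))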
To follow up on the answers by @jonnor and @paul-john-leonard: they are both correct; by using frames (FFT) I was able to do Audio Event Detection.
I've written up the full source code at:
https://github.com/craigfrancis/audio-detect
Some notes though:
To create the templates, I used ffmpeg:
ffmpeg -ss 13.15 -i source.mp4 -t 0.8 -acodec copy -y templates/01.mp4;
I decided to use librosa.core.stft, but I needed to make my own implementation of this stft function for the 3 hour file I'm analysing, as it's far too big to keep in memory.
When using stft I tried using a hop_length of 64 at first, rather than the default (512), as I assumed that would give me more data to work with... the theory might be true, but 64 was far too detailed, and caused it to fail most of the time.
I still have no idea how to get cross-correlation between frame and template to work (via numpy.correlate)... instead I took the results per frame (the 1025 buckets, not 1024, which I believe relate to the Hz frequencies found) and did a very simple average difference check, then ensured that average was above a certain value (my test case worked at 0.15, the main files I'm using this on required 0.55 - presumably because the main files had been compressed quite a bit more):
hz_score = abs(source[0:1025,x] - template[2][0:1025,y])
hz_score = sum(hz_score)/float(len(hz_score))
When checking these scores, it's really useful to show them on a graph. I often used something like the following:
import matplotlib.pyplot as plt

plt.figure(figsize=(30, 5))
plt.axhline(y=hz_match_required_start, color='y')

debug = []
x = 0
while x < source_length:
    # hz_score for this frame is calculated as shown above
    debug.append(hz_score)
    if x == mark_frame:
        plt.axvline(x=len(debug), ymin=0.1, ymax=1, color='r')
    x += 1

plt.plot(debug)
plt.show()
When you create the template, you need to trim off any leading silence (to avoid bad matching), and an extra ~5 frames (it seems that the compression / re-encoding process alters this)... likewise, remove the last 2 frames (I think the frames include a bit of data from their surroundings, where the last one in particular can be a bit off).
When you start finding a match, you might find it's ok for the first few frames, then it fails... you will probably need to try again a frame or two later. I found it easier having a process that supported multiple templates (slight variations on the sound), and would check their first testable (e.g. 6th) frame and if that matched, put them in a list of potential matches. Then, as it progressed on to the next frames of the source, it could compare it to the next frames of the template, until all frames in the template had been matched (or failed).
This might not be an answer; it's just where I got to before I started researching the answers by @jonnor and @paul-john-leonard.
I was looking at the Spectrograms you can get by using librosa stft and amplitude_to_db, and thinking that if I take the data that goes into the graphs, with a bit of rounding, I could potentially find the 1 second sound effect being played:
https://librosa.github.io/librosa/generated/librosa.display.specshow.html
The code I've written below kind of works, although it:
Returns quite a few false positives, which might be fixed by tweaking the parameters of what is considered a match.
Needs the librosa functions replaced with something that can parse, round, and do the match checks in one pass, as a 3 hour audio file causes Python to run out of memory on a computer with 16GB of RAM after ~30 minutes, before it even gets to the rounding bit.
import sys
import numpy
import librosa
#--------------------------------------------------
if len(sys.argv) == 3:
    source_path = sys.argv[1]
    sample_path = sys.argv[2]
else:
    print('Missing source and sample files as arguments')
    sys.exit()
#--------------------------------------------------
print('Load files')
source_series, source_rate = librosa.load(source_path) # The 3 hour file
sample_series, sample_rate = librosa.load(sample_path) # The 1 second file
source_time_total = float(len(source_series) / source_rate);
#--------------------------------------------------
print('Parse Data')
source_data_raw = librosa.amplitude_to_db(abs(librosa.stft(source_series, hop_length=64)))
sample_data_raw = librosa.amplitude_to_db(abs(librosa.stft(sample_series, hop_length=64)))
sample_height = sample_data_raw.shape[0]
#--------------------------------------------------
print('Round Data') # Also switches X and Y indexes, so X becomes time.
def round_data(raw, height):
    length = raw.shape[1]
    data = []
    range_length = range(1, (length - 1))
    range_height = range(1, (height - 1))
    for x in range_length:
        x_data = []
        for y in range_height:
            # neighbours = []
            # for a in [(x - 1), x, (x + 1)]:
            #     for b in [(y - 1), y, (y + 1)]:
            #         neighbours.append(raw[b][a])
            #
            # neighbours = (sum(neighbours) / len(neighbours))
            #
            # x_data.append(round(((raw[y][x] + raw[y][x] + neighbours) / 3), 2))
            x_data.append(round(raw[y][x], 2))
        data.append(x_data)
    return data
source_data = round_data(source_data_raw, sample_height)
sample_data = round_data(sample_data_raw, sample_height)
#--------------------------------------------------
sample_data = sample_data[50:268] # Temp: Crop the sample_data (318 to 218)
#--------------------------------------------------
source_length = len(source_data)
sample_length = len(sample_data)
sample_height -= 2;
source_timing = float(source_time_total / source_length);
#--------------------------------------------------
print('Process series')
hz_diff_match = 18 # For every comparison, how much of a difference is still considered a match - With the Source, using Sample 2, the maximum diff was 66.06, with an average of ~9.9
hz_match_required_switch = 30 # After matching "start" for X, drop to the lower "end" requirement
hz_match_required_start = 850 # Out of a maximum match value of 1023
hz_match_required_end = 650
hz_match_required = hz_match_required_start
source_start = 0
sample_matched = 0
x = 0;
while x < source_length:
    hz_matched = 0
    for y in range(0, sample_height):
        diff = source_data[x][y] - sample_data[sample_matched][y]
        if diff < 0:
            diff = 0 - diff
        if diff < hz_diff_match:
            hz_matched += 1
    # print(' {} Matches - {} # {}'.format(sample_matched, hz_matched, (x * source_timing)))
    if hz_matched >= hz_match_required:
        sample_matched += 1
        if sample_matched >= sample_length:
            print(' Found # {}'.format(source_start * source_timing))
            sample_matched = 0  # Prep for next match
            hz_match_required = hz_match_required_start
        elif sample_matched == 1:  # First match, record where we started
            source_start = x
        if sample_matched > hz_match_required_switch:
            hz_match_required = hz_match_required_end  # Go to a weaker match requirement
    elif sample_matched > 0:
        # print(' Reset {} / {} # {}'.format(sample_matched, hz_matched, (source_start * source_timing)))
        x = source_start  # Matched something, so try again with x+1
        sample_matched = 0  # Prep for next match
        hz_match_required = hz_match_required_start
    x += 1
#--------------------------------------------------
I'm currently having a little issue with a fits file. The data is in table format, a format I haven't previously used. I'm a python user, and rely heavily on astropy.fits to manipulate fits images. A quick output of the info gives:
No. Name Type Cards Dimensions Format
0 PRIMARY PrimaryHDU 60 ()
1 BinTableHDU 29 3072R x 2C [1024E, 1024E]
The header for the BinTableHDU is as follows:
XTENSION= 'BINTABLE' /Written by IDL: Mon Jun 22 23:28:21 2015
BITPIX = 8 /
NAXIS = 2 /Binary table
NAXIS1 = 8192 /Number of bytes per row
NAXIS2 = 3072 /Number of rows
PCOUNT = 0 /Random parameter count
GCOUNT = 1 /Group count
TFIELDS = 2 /Number of columns
TFORM1 = '1024E ' /Real*4 (floating point)
TFORM2 = '1024E ' /Real*4 (floating point)
TTYPE1 = 'COUNT_RATE' /
TUNIT1 = '1e-6cts/s/arcmin^2' /
TTYPE2 = 'UNCERTAINTY' /
TUNIT2 = '1e-6cts/s/arcmin^2' /
HISTORY g000m90r1b120pm.fits created on 10/08/97. PI channel range: 8: 19
PIXTYPE = 'HEALPIX ' / HEALPIX pixelisation
ORDERING= 'NESTED ' / Pixel ordering scheme, either RING or NESTED
NSIDE = 512 / Healpix resolution parameter
NPIX = 3145728 / Total number of pixels
OBJECT = 'FULLSKY ' / Sky coverage, either FULLSKY or PARTIAL
FIRSTPIX= 0 / First pixel # (0 based)
LASTPIX = 3145727 / Last pixel # (zero based)
INDXSCHM= 'IMPLICIT' / indexing : IMPLICIT or EXPLICIT
GRAIN = 0 / GRAIN = 0: No index,
COMMENT GRAIN =1: 1 pixel index for each pixel,
COMMENT GRAIN >1: 1 pixel index for Grain consecutive pixels
BAD_DATA= -1.63750E+30 / Sentinel value given to bad pixels
COORDSYS= 'G ' / Pixelization coordinate system
COMMENT G = Galactic, E = ecliptic, C = celestial = equatorial
END
I'd like to access the fits image which is stored within the TTYPE labeled 'COUNT_RATE', and then have this in a format with which I can then add to other count-rate arrays with the same dimensions.
I started with my usual procedure for opening a fits file:
hdulist_RASS_SXRB_R1 = fits.open('/Users/.../RASS_SXRB_R1.fits')
hdulist_RASS_SXRB_R1.info()
image_XRAY_SKYVIEW_R1 = hdulist_RASS_SXRB_R1[1].data
image_XRAY_SKYVIEW_R1 = numpy.array(image_XRAY_SKYVIEW_R1)
image_XRAY_SKYVIEW_header_R1 = hdulist_RASS_SXRB_R1[1].header
But this is coming back with IndexError: too many indices for array. I've had a look at accessing table data in the astropy documentation here (Accessing data stored as a table in a multi-extension FITS (MEF) file)
If anyone has a tried and tested method for accessing such images from a fits table I'd be very grateful! Many thanks.
I can't be sure without seeing the full traceback but I think the exception you're getting is from this:
image_XRAY_SKYVIEW_R1 = numpy.array(image_XRAY_SKYVIEW_R1)
There's no reason to manually wrap numpy.array() around the array. It's already a Numpy array. But in this case it's a structured array (see http://docs.scipy.org/doc/numpy/user/basics.rec.html).
@Andromedae93's answer is the right one. But also, for general documentation on this, see: http://docs.astropy.org/en/stable/io/fits/index.html#working-with-table-data
However, the way you're working (which is fine for images) of manually calling fits.open, accessing the .data attribute of the HDU, etc. is fairly low level, and Numpy structured arrays are good at representing tables, but not great for manipulating them.
You're better off generally using Astropy's higher-level Table interface. A FITS table can be read directly into an Astropy Table object with Table.read(): http://docs.astropy.org/en/stable/io/unified.html#fits
The only reason the same thing doesn't exist for FITS images is that there's no generic "Image" class yet.
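For example, something like this should hand you the count-rate values as a regular array (a sketch; the path is shortened to just the file name from your question):

from astropy.table import Table
import numpy as np

# Read the binary table in HDU 1 directly into an astropy Table
t = Table.read('RASS_SXRB_R1.fits', hdu=1)

count_rate = np.asarray(t['COUNT_RATE'])    # shape (3072, 1024), float32
uncertainty = np.asarray(t['UNCERTAINTY'])

# Arrays with the same shape can then simply be added to other count-rate arrays.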
I used astropy.io.fits during my internship in Astrophysics and this is my process to open file .fits and make some operations :
# Opening the .fits file, which is named SMASH.fits
field = fits.open('SMASH.fits')
# Reading the fits data
tbdata = field[1].data
Now, with this kind of method, tbdata is a numpy array and you can do lots of things with it.
For example, if you have data like :
ID, Name, Object
1, HD 1527, Star
2, HD 7836, Star
3, NGC 6739, Galaxy
If you want to print the data of one column:
Data_name = tbdata['Name']
You will get :
HD 1527
HD 7836
NGC 6739
I don't know exactly what you want to do with your data, but I can help you ;)
I give a lot of information on the methods that I used to write my code. If you just want to read my question, skip to the quotes at the end.
I'm working on a project that has a goal of detecting sub populations in a group of patients. I thought this sounded like the perfect opportunity to use association rule mining as I'm currently taking a class on the subject.
There are 42 variables in total. Of those, 20 are continuous and had to be discretized. For each variable, I used the Freedman-Diaconis rule to determine how many categories to divide a group into.
def Freedman_Diaconis(column_values):
    # sort the list first
    column_values[1].sort()
    first_quartile = int(len(column_values[1]) * .25)
    third_quartile = int(len(column_values[1]) * .75)
    fq_value = column_values[1][first_quartile]
    tq_value = column_values[1][third_quartile]
    iqr = tq_value - fq_value
    n_to_pow = len(column_values[1])**(-1/3)
    h = 2 * iqr * n_to_pow
    retval = (column_values[1][-1] - column_values[1][1])/h
    test = int(retval+1)
    return test
From there I used min-max normalization
def min_max_transform(column_of_data, num_bins):
    min_max_normalizer = preprocessing.MinMaxScaler(feature_range=(1, num_bins))
    data_min_max = min_max_normalizer.fit_transform(column_of_data[1])
    data_min_max_ints = take_int(data_min_max)
    return data_min_max_ints
to transform my data, and then I simply took the integer portion to get the final categorization.
def take_int(list_of_float):
    ints = []
    for flt in list_of_float:
        asint = int(flt)
        ints.append(asint)
    return ints
I then also wrote a function that I used to combine this value with the variable name.
def string_transform(prefix, column, index):
    transformed_list = []
    transformed = ""
    if index < 4:
        for entry in column[1]:
            transformed = prefix + str(entry)
            transformed_list.append(transformed)
    else:
        prefix_num = prefix.split('x')
        for entry in column[1]:
            transformed = str(prefix_num[1]) + 'x' + str(entry)
            transformed_list.append(transformed)
    return transformed_list
This was done to differentiate variables that have the same value, but appear in different columns. For example, having a value of 1 for variable x14 means something different from getting a value of 1 in variable x20. The string transform function would create 14x1 and 20x1 for the previously mentioned examples.
After this, I wrote everything to a file in basket format
def create_basket(list_of_lists, headers):
    #for filename in os.listdir("."):
    #    if filename.e
    if not os.path.exists('baskets'):
        os.makedirs('baskets')
    down_length = len(list_of_lists[0])
    with open('baskets/dataset.basket', 'w') as basketfile:
        basket_writer = csv.DictWriter(basketfile, fieldnames=headers)
        for i in range(0, down_length):
            basket_writer.writerow({"trt": list_of_lists[0][i], "y": list_of_lists[1][i], "x1": list_of_lists[2][i],
                                    "x2": list_of_lists[3][i], "x3": list_of_lists[4][i], "x4": list_of_lists[5][i],
                                    "x5": list_of_lists[6][i], "x6": list_of_lists[7][i], "x7": list_of_lists[8][i],
                                    "x8": list_of_lists[9][i], "x9": list_of_lists[10][i], "x10": list_of_lists[11][i],
                                    "x11": list_of_lists[12][i], "x12": list_of_lists[13][i], "x13": list_of_lists[14][i],
                                    "x14": list_of_lists[15][i], "x15": list_of_lists[16][i], "x16": list_of_lists[17][i],
                                    "x17": list_of_lists[18][i], "x18": list_of_lists[19][i], "x19": list_of_lists[20][i],
                                    "x20": list_of_lists[21][i], "x21": list_of_lists[22][i], "x22": list_of_lists[23][i],
                                    "x23": list_of_lists[24][i], "x24": list_of_lists[25][i], "x25": list_of_lists[26][i],
                                    "x26": list_of_lists[27][i], "x27": list_of_lists[28][i], "x28": list_of_lists[29][i],
                                    "x29": list_of_lists[30][i], "x30": list_of_lists[31][i], "x31": list_of_lists[32][i],
                                    "x32": list_of_lists[33][i], "x33": list_of_lists[34][i], "x34": list_of_lists[35][i],
                                    "x35": list_of_lists[36][i], "x36": list_of_lists[37][i], "x37": list_of_lists[38][i],
                                    "x38": list_of_lists[39][i], "x39": list_of_lists[40][i], "x40": list_of_lists[41][i]})
and I used the apriori package in Orange to see if there were any association rules.
rules = Orange.associate.AssociationRulesSparseInducer(patient_basket, support=0.3, confidence=0.3)
print "%4s %4s %s" % ("Supp", "Conf", "Rule")
for r in rules:
    my_rule = str(r)
    split_rule = my_rule.split("->")
    if 'trt' in split_rule[1]:
        print 'treatment rule'
        print "%4.1f %4.1f %s" % (r.support, r.confidence, r)
Using this technique, I found quite a few association rules with my testing data.
THIS IS WHERE I HAVE A PROBLEM
When I read the notes for the training data, there is this note
...That is, the only
reason for the differences among observed responses to the same treatment across patients is
random noise. Hence, there is NO meaningful subgroup for this dataset...
My question is,
why do I get multiple association rules that would imply that there are subgroups, when according to the notes I shouldn't see anything?
I'm getting lift numbers that are above 2, as opposed to the 1 you should expect if everything was random, as the notes state.
Supp Conf Rule
0.3 0.7 6x0 -> trt1
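For reference, the lift I'm referring to is just the rule's confidence divided by the support of the consequent, so a value near 1 is what independence would give. A quick illustration with made-up numbers:

# lift(A -> B) = confidence(A -> B) / support(B)
support_B = 0.35   # fraction of baskets containing trt1 (made-up number)
confidence = 0.7   # from the rule above
lift = confidence / support_B
print(lift)        # ~2 would mean A and B co-occur twice as often as chance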
Even though my code runs, I'm not getting results anywhere close to what should be expected. This leads me to believe that I messed something up, but I'm not sure what it is.
After some research, I realized that my sample size is too small for the number of variables that I have. I would need a way larger sample size in order to really use the method that I was using. In fact, the method that I tried to use was developed with the assumption that it would be run on databases with hundreds of thousands or millions of rows.