Standardizing GPX traces - python

I have two GPX files (from a race I ran twice, obtained via the Strava API) and I would like to be able to compare the effort across both. The sampling frequency is irregular, however (i.e. data is not recorded every second or every meter), so a straightforward comparison is not possible and I need to standardize the data first. Preferably, I would resample the data so that I have a data point every 10 meters, for example.
I'm using Pandas, so I'm currently standardizing a single file by inserting rows every 10 meters and interpolating the heart rate, duration, lat/lng, etc. from the surrounding data points. This works, but doesn't make the data comparable across files, as the recording does not start at exactly the same location.
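A rough sketch of that per-file step (the 10 m grid and the column names for cumulative distance, heart rate, lat/lon and elapsed seconds are assumptions about how I've loaded the GPX points):

import numpy as np
import pandas as pd

def resample_by_distance(df: pd.DataFrame, step: float = 10.0) -> pd.DataFrame:
    """Interpolate each signal onto a common grid of every `step` meters.

    Assumes `df` has a monotonically increasing cumulative 'distance' column
    (in meters) plus 'heartrate', 'lat', 'lon' and 'elapsed_s' columns.
    """
    grid = np.arange(0.0, df["distance"].iloc[-1], step)
    out = pd.DataFrame({"distance": grid})
    for col in ["heartrate", "lat", "lon", "elapsed_s"]:
        # Linear interpolation between the surrounding recorded points.
        out[col] = np.interp(grid, df["distance"], df[col])
    return out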
An alternative is first standardizing the course coordinates using something like geohashing and then trying to map both efforts onto this standardized course. Since coordinates cannot easily be sorted, however, I'm not sure how to do that correctly.
Any pointers are appreciated, thanks!

Related

documenting CSV data using python

Not sure this is the right place to ask, but it has to be a common problem.
I'm collecting AC voltage (real, apparent, etc.) vs. an input parameter to measure the performance of a solar inverter. I read data from a meter, the inverter, and other test equipment and then write the data to a CSV file for later plotting. The details here are not important except to say that part works.
There are various modes available, each with their own values, that I want to record with the data. For example, I might set the foo-parameter to 0.5 and then record the complex voltage output versus the DC input. I need to do this for several values of the foo-parameter, and there are many such parameters to record.
I've modified my plot routine to accept #comment lines, which lets me record the parameters that were selected for any given data set.
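A sketch of what that header looks like in practice (the parameter names, values and file name here are just examples):

import csv

params = {"foo-parameter": 0.5, "mode": "grid-tied"}

with open("run_042.csv", "w", newline="") as f:
    # One "# key = value" comment line per recorded parameter, above the data.
    for key, value in params.items():
        f.write(f"# {key} = {value}\n")
    writer = csv.writer(f)
    writer.writerow(["dc_input_v", "ac_real_v", "ac_apparent_v"])
    writer.writerow([350.0, 229.8, 231.2])

# Reading it back later, e.g. pandas skips the header with:
# pd.read_csv("run_042.csv", comment="#")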
Is there a standard way of doing this, or otherwise documenting data collected like this?

Quickly re-projecting spatial data in python

I have a geodataframe with a local projection (EPSG:2263) that I want to transform to WGS84 (global) in order to add basemap interactivity within Python. However, when transforming this data, the runtime for this block of code is hours long, which is extremely impractical given that I only have ~40,000 polygons that need to be transformed.
The code I am using:
gdf.to_crs(epsg=4326)
Does anyone know of quicker ways to re-project somewhat large datasets in python?
I have done this call before with a smaller dataset to test it out, and indeed it took nearly a minute with only 150 records.
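For reference, this is roughly how I'm timing it (the file name is a placeholder); I print the library versions as well, since older pyproj/geopandas stacks are reportedly much slower at re-projection:

import time
import geopandas as gpd
import pyproj

print(gpd.__version__, pyproj.__version__)   # older pyproj 1.x stacks re-project far more slowly

gdf = gpd.read_file("parcels_2263.geojson")  # placeholder for my EPSG:2263 data
start = time.time()
gdf_wgs84 = gdf.to_crs(epsg=4326)            # to_crs returns a new GeoDataFrame
print(f"{len(gdf_wgs84)} polygons re-projected in {time.time() - start:.2f}s")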

Fastest approach for geopandas (reading and spatialJoin)

I have about a million rows of data with lat and lon attached, and more to come. Even now, reading the data from the SQLite file (I read it with pandas, then create a point for each row) takes a lot of time.
Now I need to perform a spatial join over those points to assign a zip code to each one, and I really want to optimise this process.
So I wonder: is there any relatively easy way to parallelize these computations?
I am assuming you have already implemented GeoPandas and are still finding difficulties?
You can improve this by further hashing your coordinate data, similar to how Google hashes its search data. Some databases already provide support for these types of operations (e.g. MongoDB). Imagine you took the first (left) digit of your coordinates and put each set of corresponding data into a separate SQLite file. Each digit then acts as a hash pointing to the correct file to look in, so your lookup time improves by roughly a factor of 20 (range(-9, 10)), assuming the hash lookup itself takes minimal time in comparison.
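A toy sketch of that bucketing idea (the file naming and table layout are made up):

import sqlite3

def bucket_path(lat: float) -> str:
    """Route a point to one of ~20 files by the leading signed digit of its latitude."""
    leading = int(abs(lat)) // 10              # 0-9 for valid latitudes
    hemisphere = "s" if lat < 0 else "n"
    return f"points_{hemisphere}{leading}.db"

def insert_point(lat: float, lon: float, payload: str) -> None:
    con = sqlite3.connect(bucket_path(lat))    # lookups later only open one small file
    con.execute("CREATE TABLE IF NOT EXISTS pts (lat REAL, lon REAL, payload TEXT)")
    con.execute("INSERT INTO pts VALUES (?, ?, ?)", (lat, lon, payload))
    con.commit()
    con.close()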
As it turned out, the most convenient solution in my case was to use the pandas.read_sql function with a specific chunksize parameter. In that case it returns an iterator of data chunks, which can be fed to mp.Pool().map() along with the job.
In this (my) case, the job consists of 1) reading the geographic boundaries, 2) spatially joining the chunk and 3) writing the chunk to the database.
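A minimal sketch of that pipeline (the file, table and column names, chunk size and join predicate are assumptions; step 3, writing each chunk back to the database, is left as a comment):

import multiprocessing as mp
import sqlite3

import geopandas as gpd
import pandas as pd

# 1) Read the geographic boundaries once (hypothetical zip-code polygons).
zips = gpd.read_file("zipcodes.shp")

def assign_zipcodes(chunk: pd.DataFrame) -> gpd.GeoDataFrame:
    """2) Turn a chunk of rows into points and spatially join them with the zip polygons."""
    points = gpd.GeoDataFrame(
        chunk,
        geometry=gpd.points_from_xy(chunk["lon"], chunk["lat"]),
        crs=zips.crs,
    )
    joined = gpd.sjoin(points, zips, how="left", predicate="within")  # older geopandas uses op=
    # 3) In the real job, write `joined` back to the database here.
    return joined

if __name__ == "__main__":
    con = sqlite3.connect("points.db")
    chunks = pd.read_sql("SELECT lat, lon FROM points", con, chunksize=100_000)
    with mp.Pool() as pool:
        results = pool.map(assign_zipcodes, chunks)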
This method is completely dependent on your spatial scale, but one way you might parallelize your join would be to subdivide your polygons into subpolygons and then offload the work to separate processes on separate cores. This geopandas r-tree tutorial demonstrates that technique, subdividing a large polygon into many small ones and intersecting each with a large set of points. But again, this only works if your spatial scale is appropriate: i.e., a few polygons and a lot of points (such as a few zip code polygons and millions of points in and around them).

HDF5 Links to Events in Dataset

I'm trying to use HDF5 to store time-series EEG data. These files can be quite large and consist of many channels, and I like the features of the HDF5 file format (lazy I/O, dynamic compression, MPI, etc.).
One common thing to do with EEG data is to mark sections of data as 'interesting'. I'm struggling with a good way to store these marks in the file. I see soft/hard links supported for linking the same dataset to other groups, etc -- but I do not see any way to link to sections of the dataset.
For example, let's assume I have a dataset called EEG containing sleep data. Let's say I run an algorithm that takes a while to process the data and generates indices corresponding to periods of REM sleep. What is the best way to store these index ranges in an HDF5 file?
The best I can think of right now is to create a dataset with three columns: the first column is a string containing a label for the event ("REM1"), and the second and third columns contain the start and end index respectively. The only reason I don't like this solution is that HDF5 datasets are pretty set in size: if I decide later that a period of REM sleep was mis-identified and I need to add/remove that event, the dataset size would need to change (and deleting the dataset and recreating it with a new size is suboptimal). Compound this with the fact that I may have MANY events (imagine marking eyeblink events), and this becomes even more of a problem.
I'm more curious to find out if there's functionality in the HDF5 file that I'm just not aware of, because this seems like a pretty common thing that one would want to do.
I think what you want is a Region Reference — essentially, a way to store a reference to a slice of your data. In h5py, you create them with the regionref property and numpy slicing syntax, so if you have a dataset called ds and your start and end indexes of your REM period, you can do:
rem_ref = ds.regionref[start:end]
ds.attrs['REM1'] = rem_ref
ds[ds.attrs['REM1']] # Will be a 1-d set of values
You can store regionrefs pretty naturally — they can be attributes on a dataset, objects in a group, or you can create a regionref-type dataset and store them in there.
In your case, I might create a group ("REM_periods" or something) and store the references in there. Creating a "REM_periods" dataset and storing the regionrefs there is reasonable too, but you run into the whole "datasets don't handle variable lengths very well" thing.
Storing them as attrs on the dataset might be OK, too, but it'd get awkward if you wanted to have more than one event type.
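For example, a sketch of that group-of-references layout (the file name, dataset name and index ranges are made up):

import h5py
import numpy as np

with h5py.File("sleep.h5", "w") as f:
    eeg = f.create_dataset("EEG", data=np.random.randn(500_000))

    # One group per event type keeps different kinds of marks separate.
    rem = f.create_group("REM_periods")
    for label, (start, end) in {"REM1": (10_000, 50_000), "REM2": (120_000, 160_000)}.items():
        # Each attribute holds a region reference into the EEG dataset,
        # just like the ds.attrs assignment above.
        rem.attrs[label] = eeg.regionref[start:end]

    # Dereferencing later: indexing EEG with the stored reference returns that slice.
    rem1_data = eeg[rem.attrs["REM1"]]

Adding or removing an event is then just adding or deleting an attribute, with no dataset resizing involved.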

Saving large Python arrays to disk for re-use later --- hdf5? Some other method?

I'm currently rewriting some python code to make it more efficient and I have a question about saving python arrays so that they can be re-used / manipulated later.
I have a large amount of data, saved in CSV files. Each file contains time-stamped values of the data that I am interested in, and I have reached the point where I have to deal with tens of millions of data points. The data has got so large now that the processing time is excessive and inefficient: the way the current code is written, the entire data set has to be reprocessed every time some new data is added.
What I want to do is this:
Read all of the existing data into Python arrays
Save the variable arrays to some kind of database/file
Then, the next time more data is added, I load my database, append the new data, and resave it. This way only a small amount of data needs to be processed at any one time.
I would like the saved data to be accessible to further python scripts but also to be fairly "human readable" so that it can be handled in programs like OriginPro or perhaps even Excel.
My question is: what's the best format to save the data in? HDF5 seems like it might have all the features I need, but would something like SQLite make more sense?
EDIT: My data is single-dimensional. I essentially have 30 arrays which are (millions, 1) in size. If it weren't for the fact that there are so many points, CSV would be an ideal format! I am unlikely to want to do lookups of single entries; more likely is that I might want to plot small subsets of the data (e.g. the last 100 hours, or the last 1000 hours).
HDF5 is an excellent choice! It has a nice interface, is widely used (in the scientific community at least), and many programs have support for it (MATLAB, for example); there are libraries for C, C++, Fortran, Python, and more. It comes with a complete toolset for displaying the contents of an HDF5 file. If you later want to do complex MPI calculations on your data, HDF5 supports concurrent reads/writes. It's very well suited to handling very large datasets.
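A sketch of the append workflow in h5py (the file/dataset names, chunking and compression settings are assumptions): create each channel once as a resizable 1-D dataset, then grow it whenever new CSV data comes in.

import h5py
import numpy as np

def append_channel(path: str, name: str, new_values) -> None:
    """Append a 1-D block of values to a (possibly new) resizable dataset."""
    new_values = np.asarray(new_values)
    with h5py.File(path, "a") as f:
        if name not in f:
            f.create_dataset(name, data=new_values, maxshape=(None,),
                             chunks=True, compression="gzip")
        else:
            ds = f[name]
            old = ds.shape[0]
            ds.resize(old + len(new_values), axis=0)
            ds[old:] = new_values

# Plotting a subset later only reads that slice from disk, e.g.:
# with h5py.File("data.h5") as f:
#     tail = f["channel_03"][-100_000:]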
Maybe you could use some kind of key-value database like Redis, Berkeley DB, MongoDB, etc. But it would be nice to have some more info about the schema you would be using.
EDITED
If you choose Redis, for example, you can index very long lists:
The max length of a list is 2^32 - 1 elements (4,294,967,295, more than 4 billion elements per list). The main features of Redis Lists from the point of view of time complexity are the support for constant-time insertion and deletion of elements near the head and tail, even with many millions of inserted items. Accessing elements is very fast near the extremes of the list but is slow if you try accessing the middle of a very big list, as it is an O(N) operation.
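A small sketch with the redis-py client (the key name and connection details are assumptions):

import redis

r = redis.Redis(host="localhost", port=6379)

# Append new time-stamped samples to the channel's list as they arrive.
r.rpush("channel:voltage", "2014-01-01T00:00:00,1.23")

# "Plot the last 1000 points" maps onto LRANGE near the tail of the list,
# which stays fast even when the list holds millions of items.
latest = r.lrange("channel:voltage", -1000, -1)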
I would use a single file with fixed record length for this use case. No specialised DB solution (which seems like overkill to me in this case), just plain old struct (see the documentation for the struct module) and read()/write() on a file. If you have just millions of entries, everything should work nicely in a single file of a few dozen or a few hundred MB (which is hardly too large for any file system). You also get random access to subsets in case you need that later.
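A sketch of that fixed-record approach, assuming each record is a timestamp/value pair of doubles (16 bytes per record):

import struct

RECORD = struct.Struct("<dd")   # 8-byte float timestamp + 8-byte float value

def append(path: str, samples) -> None:
    with open(path, "ab") as f:
        for t, v in samples:
            f.write(RECORD.pack(t, v))

def read_last(path: str, n: int):
    """Random access to the tail: seek straight to the last n records."""
    with open(path, "rb") as f:
        f.seek(0, 2)                                  # jump to end of file
        count = f.tell() // RECORD.size
        f.seek(max(count - n, 0) * RECORD.size)
        return [RECORD.unpack(chunk)
                for chunk in iter(lambda: f.read(RECORD.size), b"")]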
