Python plyfile vs pymesh

I need to read, manipulate and write PLY files in Python. PLY is a format for storing 3D objects. Through a simple search I've found two relevant libraries, PyMesh and plyfile. Has anyone had any experience with either of them, and does anyone have any recommendations? plyfile seems to have been dormant for a year now, judging by GitHub.
I know this question invites opinion-based answers, but I don't really know where else to ask.

As of January 2020:
Neither; use open3d. It's the easiest option and reads .ply files directly into numpy.
import numpy as np
import open3d as o3d
# Read .ply file
input_file = "input.ply"
pcd = o3d.io.read_point_cloud(input_file) # Read the point cloud
# Visualize the point cloud within open3d
o3d.visualization.draw_geometries([pcd])
# Convert the open3d point cloud to a numpy array
point_cloud_in_numpy = np.asarray(pcd.points)
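Going the other way also works; a minimal sketch, assuming you have an (N, 3) numpy array of XYZ points to write back out (the array here is random placeholder data):
import numpy as np
import open3d as o3d

points = np.random.rand(100, 3)  # placeholder standing in for your own points
pcd_out = o3d.geometry.PointCloud()
pcd_out.points = o3d.utility.Vector3dVector(points)
o3d.io.write_point_cloud("output.ply", pcd_out)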
References:
http://www.open3d.org/docs/release/tutorial/Basic/visualization.html
http://www.open3d.org/docs/release/tutorial/Basic/working_with_numpy.html

I have successfully used plyfile while working with point clouds.
It's true that the project hasn't seen any activity for a long time, but it serves its purpose.
And parsing a PLY file isn't really the kind of problem that demands a steady stream of new features.
On the other hand, PyMesh offers many other features besides parsing PLY files.
So maybe the question is:
Do you want to just 'read, manipulate and write PLY files' or are you looking for a library that provides more extra features?
What made me choose plyfile was that I could incorporate it into my project by copying a single source file. Also, I wasn't interested in any of the other features that PyMesh offers.
Update
I ended up writing my own functions to read/write PLY files (supporting ASCII and binary) because I found the plyfile source code a little messy.
If anyone is interested, here is a link to the file:
ply reader/writer

I've just updated meshio to support PLY as well, alongside about 20 other formats. Install with
pip install meshio
and use either on the command line
meshio convert in.ply out.vtk
or from within Python like
import meshio
mesh = meshio.read("in.ply")
# mesh.points, mesh.cells, ...
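Writing is symmetric; a minimal round-trip sketch (file names are illustrative):
import meshio

mesh = meshio.read("in.ply")
# ... manipulate mesh.points / mesh.cells here ...
meshio.write("out.ply", mesh)  # the output format is inferred from the extension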

I rolled my own ascii ply writer (because it's so simple, I didn't want to take a dependency). Later, I was lazy and took a dependency on plyfile for loading binary .ply files coming from other places. Nothing has caught on fire yet.
One thing to mention: for better or worse, the .ply format is extensible. We shoehorned custom data into it, and that was easy since we had written our own writer.
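To illustrate how little code an ASCII writer takes, here is a minimal sketch; the confidence property is a hypothetical stand-in for the kind of custom data you can shoehorn in:
def write_ascii_ply(path, points, confidence):
    # points: iterable of (x, y, z) tuples; confidence: one float per point
    with open(path, "w") as f:
        f.write("ply\n")
        f.write("format ascii 1.0\n")
        f.write("element vertex %d\n" % len(points))
        f.write("property float x\n")
        f.write("property float y\n")
        f.write("property float z\n")
        f.write("property float confidence\n")  # non-standard custom property
        f.write("end_header\n")
        for (x, y, z), c in zip(points, confidence):
            f.write("%g %g %g %g\n" % (x, y, z, c))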

Related

How to read .000 file type in python for Machine Learning?

I have to apply machine learning algorithms to a dataset. But the problem I've been facing is loading this dataset. I've tried the nanoscope library but couldn't succeed. How should I proceed?
The dataset can be found here :
https://archive.ics.uci.edu/ml/machine-learning-databases/eeg-mld/
These appear to just be plain text files, so you should be able to read them with open and read (Python docs), or any other appropriate tool for that matter (e.g. pandas.read_csv (link), csv module (link), the list goes on).
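For instance, a minimal pandas sketch, assuming the whitespace-separated layout of the UCI files; the file name and column names are illustrative, not taken from the dataset description:
import pandas as pd

df = pd.read_csv(
    "co2a0000364.rd.000",   # hypothetical file from the archive
    comment="#",            # skip the '#' header lines
    sep=r"\s+",
    names=["trial", "channel", "sample", "voltage"],
)
print(df.head())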

Extracting weights from .caffemodel without caffe installed in Python

Is there a relatively simple way to extract weights in Python from one of the many pretrained models in Caffe Zoo WITHOUT CAFFE (nor pyCaffe)? i.e. parsing .caffemodel to hdf5/numpy or whatever format that can be read by Python?
All the answers I found use C++ code with caffe classes or Pycaffe.
I have looked at pycaffe's code, and it looks like you really need caffe to make sense of the binary. Is that the only solution?
I had to resolve that exact issue just now. Assuming you have a .caffemodel (binary proto format), it turns out to be quite simple.
1. Download the latest caffe.proto
2. Compile it into a Python library:
protoc --python_out=. caffe.proto
3. Import and parse
The sample code below
import caffe_pb2 as cq

# .caffemodel is a binary protobuf, so open the file in binary mode
with open('VGG_ILSVRC_16_layers.caffemodel', 'rb') as f:
    cq2 = cq.NetParameter()
    cq2.ParseFromString(f.read())

print("name 1st layer: " + cq2.layers[0].name)
produces for me:
name 1st layer: conv1_1
Obviously you can extract anything else you want from your object. I just printed the name of my first layer as an example.
Also, your model may be expressing the layers in either the layers array (deprecated) or the layer (no 's') array, but you get the gist.
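Building on that, a minimal sketch for pulling every layer's weights into numpy; it assumes the blobs carry the newer BlobShape message (older models store num/channels/height/width instead):
import numpy as np
import caffe_pb2 as cq

with open('VGG_ILSVRC_16_layers.caffemodel', 'rb') as f:
    net = cq.NetParameter()
    net.ParseFromString(f.read())

layers = net.layer or net.layers      # new-style field, else the deprecated one
for layer in layers:
    for blob in layer.blobs:
        weights = np.array(blob.data, dtype=np.float32)
        if blob.shape.dim:            # reshape when a BlobShape is present
            weights = weights.reshape(list(blob.shape.dim))
        print(layer.name, weights.shape)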
Nowadays, caffe can save the weights in two formats: BINARYPROTO or HDF5. Binary weights files with the extension .caffemodel are in BINARYPROTO format, while those with the extension .caffemodel.h5 are in HDF5 format. Since the HDF5 format was introduced to caffe recently, I expect most models you currently encounter in the "model zoo" to be in the more "traditional" BINARYPROTO format.
If the weights are stored in HDF5 format, you might be able to pick through them using the h5py package.
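For example, a minimal h5py sketch to explore such a file (the file name is illustrative):
import h5py

with h5py.File("model.caffemodel.h5", "r") as f:
    f.visit(print)  # print the path of every group and dataset in the file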
However, the BINARYPROTO format is based on a binary serialization of the Google protocol buffer format defined by caffe.proto. I am no expert in protocol buffers, but I suspect you will have a really hard time deciphering the binary file without explicitly "compiling" the caffe.proto protobuf definition (which is part of the caffe build).
I suppose the easiest way to get at the weights is to install caffe and use its Python/C++ interface. Why don't you just do that?
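For reference, a minimal pycaffe sketch (requires a built caffe install; the file and layer names are illustrative):
import caffe

net = caffe.Net("deploy.prototxt", "VGG_ILSVRC_16_layers.caffemodel", caffe.TEST)
weights = net.params["conv1_1"][0].data  # numpy array of the layer's weights
biases = net.params["conv1_1"][1].data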
As it happens, ethereon made a wonderful library called caffe-tensorflow to convert caffe models to TensorFlow code, but that is not all! It also lets you convert .caffemodel files to .npy files without having to build pycaffe! It checks whether caffe was built, and if not, falls back to a pure Google protobuf implementation.
I don't understand why you want to do this without caffe/pycaffe; perhaps you are tired of deploying caffe on new machines? But since .caffemodel is caffe's own binary data type, using other tools doesn't make life easier.
If you do insist on it, there is another framework, Mocha on Julia, which provides a method for extracting a caffemodel to HDF5. I hope this helps.

Using matlab matrix as input for a python code

Is it possible to use a matrix generated with matlab and saved in a binary file as input to a Python script?
Of course it's possible. Bits are bits; you just need to know how to interpret them :). Fortunately for you, it looks like someone has already done the hard work of figuring out the matlab file format and has written a reader for it ... Have a look at the scipy.io module. Specifically, the loadmat function might be useful.
scipy isn't in the Python standard library, but generally speaking, if you're going to use Python to replicate something done in matlab, you'll probably want it and its sibling package numpy installed.
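A minimal sketch, assuming the matrix was saved with MATLAB's save command as a pre-v7.3 .mat file under a variable name like 'A' (both names are illustrative):
from scipy.io import loadmat

data = loadmat("matrix.mat")  # dict mapping variable names to numpy arrays
A = data["A"]
print(A.shape, A.dtype)
Note that files saved with -v7.3 are actually HDF5 and need h5py rather than loadmat.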

Converting NetCDF to GRIB2

I know there is software like wgrib2 that will convert files in grib and grib2 format to NetCDF files, but I need to go the other way: from NetCDF to grib2, because the local weather offices here can only consume gridded data in grib2 format.
It appears that one solution could be in Python, using the NetCDF4-Python library (or other) to read the NetCDF files and using pygrib to write grib2.
Is there a better way?
After some more research, I ended up using the British Met Office "Iris" package (http://scitools.org.uk/iris/docs/latest/index.html), which can read NetCDF as well as OPeNDAP, GRIB and several other formats, and allows saving as NetCDF or GRIB.
Basically the code looks like:
import iris
cubes = iris.load('input.nc') # each variable in the netcdf file is a cube
iris.save(cubes[0],'output.grib2') # save a specific variable to grib
But if your netcdf file doesn't contain sufficient metadata, you may need to add it, which you can also do with Iris. Here's a full working example:
https://github.com/rsignell-usgs/ipython-notebooks/blob/master/files/Iris_CFSR_wave_wind.ipynb
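For instance, a minimal sketch of patching metadata before saving (GRIB output needs Iris's grib support installed; the standard name and units are illustrative for a wave-height field):
import iris

cubes = iris.load("input.nc")
cube = cubes[0]
cube.standard_name = "sea_surface_wave_significant_height"  # illustrative CF name
cube.units = "m"
iris.save(cube, "output.grib2")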
One can also use the Climate Data Operators (CDO) for the task: https://code.zmaw.de/projects/cdo/wiki
but you need to install the software along with all the additional libraries.
I know CDO is mentioned above, but I thought it would be useful to give the full command:
cdo -f grb2 copy in.nc out.grb
ECMWF has a command line based tool to do just this: https://software.ecmwf.int/wiki/display/GRIB/grib_to_netcdf

How to extend h5py so that I can access data within a hdf5 file?

I have a small Python program which creates an hdf5 file using the h5py module. I want to write a Python module to work on the data from the hdf5 file. How could I do that?
More specifically, I can set the numpy arrays to PyArrayObject and read them using PyArg_ParseTuple. This way, I can read elements from the numpy array when I am writing a python module. How to read hdf5 files so that I can access individual elements?
Update: Thanks for the answers below. I need to read the hdf5 file from C, not from Python; the Python side I already know. For example:
import h5py as t
import numpy as np
f = t.File('/tmp/tmp.h5', 'w')
#this file is 2+GB
ofmat=np.load('offsetmatrix.npy')
f['FileDataset']=ofmat
f.close()
Now I have an hdf5 file called '/tmp/tmp.h5'. What I need to do is read the individual array elements from the hdf5 file using C (and not Python) so that I can do something with those elements. This shows how to extend numpy arrays. How do I extend hdf5?
h5py gives you a direct interface for reading/writing and manipulating data stored in an hdf5 file. Have you looked at the docs?
http://docs.h5py.org/
I advise starting there; the docs have pretty clear examples of simple data access. If there are specific things you are trying to do that aren't covered by h5py's methods, could you please give a more specific description of your desired usage?
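For instance, a minimal sketch of reading back the dataset written in the question:
import h5py

with h5py.File('/tmp/tmp.h5', 'r') as f:
    dset = f['FileDataset']
    print(dset.shape, dset.dtype)
    first_row = dset[0]  # h5py slices read lazily, straight from disk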
If you don't actually need a particular structure of HDF5, but you just need the speed and cross-platform compatibility, I'd recommend taking a look at PyTables. It has the built-in ability to read and write Numpy arrays.
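For comparison, a minimal PyTables sketch (file and node names are illustrative):
import numpy as np
import tables

arr = np.arange(10)
with tables.open_file("demo.h5", "w") as f:
    f.create_array(f.root, "data", arr)
with tables.open_file("demo.h5", "r") as f:
    print(f.root.data[:])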
