Scale an .stl file using a Python script

I have an .stl file and I want to write a Python script to scale it. So for example, if I have a file with a cube of dimensions 2x2x2mm, and I have my scale set to 10, the script should spit out an stl file with a cube of dimensions 20x20x20mm.
I searched and found the numpy-stl library for modifying stl files. However, I can't seem to find any function that scales an .stl. Can someone point me in the right direction as to what to use?

I know this is an old question, but you should be able to directly modify the mesh.x, mesh.y and mesh.z attributes to accomplish this.
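For example, a minimal sketch assuming numpy-stl is installed and the input file is called cube.stl (the file names are hypothetical):

from stl import mesh

scale = 10.0
m = mesh.Mesh.from_file('cube.stl')

# x, y and z are views into the triangle vertex coordinates,
# so scaling them in place scales the whole model
m.x *= scale
m.y *= scale
m.z *= scale

m.save('cube_scaled.stl')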

I also stumbled across this post while trying to get an answer to the same problem. I assume you are using numpy-stl.
My solution was to create a rotation matrix R using the scaling parameter instead of rotation angles, and then use mesh.rotate_using_matrix(R, point=centroid).
Seems to work for me.
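In code, that idea looks roughly like this (a sketch, again assuming numpy-stl; the centroid comes from get_mass_properties() and the file names are hypothetical):

import numpy as np
from stl import mesh

scale = 10.0
m = mesh.Mesh.from_file('cube.stl')

# A diagonal "rotation" matrix that actually applies a uniform scale
R = np.eye(3) * scale

# Scale about the centre of gravity so the model stays in place
volume, centroid, inertia = m.get_mass_properties()
m.rotate_using_matrix(R, point=centroid)

m.save('cube_scaled.stl')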

Related

Using sfepy to solve a simple 2D differential equation

I am trying to learn sfepy. To that effect I want to solve the differential equation
On a triangle domain (2D).
There are 2 things I don't understand from reading the docs.
How do I specify a single triangle mesh? The code seems to assume you already have the mesh file, but it does not provide a link to it, so I don't know how to construct my data.
I am having a hard time learning how to map the equation onto sfepy's syntax, i.e. I don't really know how to specify the problem using the library even after following the tutorial.
I have the solution to your problem 1.
The example script linear_elastic_interactive.py does mention the path to the mesh it uses, on line 35: mesh = Mesh.from_file(data_dir + '/meshes/2d/rectangle_tri.mesh'). The tutorial also mentions that "This file should be run from the top-level SfePy source directory so it can find the mesh file correctly". So if you navigate the source code, you can find the mesh here: link
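As a quick check, a sketch along these lines should load that bundled mesh (assuming a recent SfePy where Mesh lives in sfepy.discrete.fem and data_dir is exposed at the package level):

from sfepy import data_dir
from sfepy.discrete.fem import Mesh

# Load the 2D triangle mesh that ships with the SfePy sources
mesh = Mesh.from_file(data_dir + '/meshes/2d/rectangle_tri.mesh')
print(mesh)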

Python plotting and image viewing library with an equivalent of MATLAB's datatips

Currently, I use MATLAB extensively for analyzing experimental scientific data (mostly time traces and images). However, again and again I keep running into fundamental problems with the MATLAB language, and I would like to make the switch to Python. One feature of MATLAB is holding me back, however: its ability to add datatips to plots and images.
For a line plot the datatip is a window next to one of the data points that shows its coordinates. This is very useful to quickly see where data points are and what their values are. Of course this can also be done by inspecting the vectors that were used to plot the line, but that is slightly more cumbersome and becomes a headache when trying to analyze loads of data. E.g. let's say we quickly want to know for what value of x we get y = 0.6. Moving the datatip around will give a rough estimate very quickly.
For images, the datatip shows the x and y coordinates, but also the greyscale value (called index by MATLAB) and the RGB color. I'm mainly interested in the greyscale value here. Suppose we want to know the coordinates of the bottom tip of the pupil of the cat's eye. A datatip allows you to simply click that point and copy the coordinates (either manually or programmatically). Alternatively, one would have to write some image processing script to find this pixel location. For a one-time analysis that is not worthwhile.
The plotting library for Python that I'm most familiar with, and that is commonly called the most flexible, is matplotlib. An old Stack Overflow question seems to indicate that this can be done using mpldatacursor, and another module seems to be mplcursors. These libraries do not seem to be compatible with Spyder, however, limiting their usability. Also, I imagine many Python programmers would want a feature like datatips, so it seems odd to have to rely on a third-party module.
Now on to the actual question: Is there any module (or simple piece of code that I could put in my personal library) to get the equivalent of MATLAB's datatips in all figures generated by a python script?
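For reference, the mplcursors route mentioned above would look roughly like this minimal sketch (assuming mplcursors is installed and an interactive matplotlib backend):

import matplotlib.pyplot as plt
import mplcursors
import numpy as np

x = np.linspace(0, 10, 500)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x))

# Clicking a point pops up an annotation with its (x, y) coordinates,
# roughly like a MATLAB datatip; pass hover=True to show it on mouse-over instead
mplcursors.cursor()

plt.show()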

Map a texture onto a model in Python

To frame this question: I am a complete beginner in terms of 3d rendering, and I would like to get my feet wet.
My goal is to create a command-line script (ideally in Python) which takes some kind of 3d model file (e.g. a sphere), maps a texture onto it, and outputs the result as an image file. That is, I would like my program to essentially be able to "do" the following:
From my reading, this appears to be something known as "UV mapping", but almost everything I've found on the subject is about how to do this using Blender, which I would prefer to avoid: in a 2D analogy, it seems to me that Blender is like Photoshop, whereas I'm looking for something like ImageMagick. Beyond that, I haven't been able to find much.
The closest I have found is this other Stack Overflow question:
uv mapping works bad on low resolution (warning: lot of images)
But I don't quite understand what's going on there, because it doesn't import a 3d model at all -- it is my [perhaps mistaken?] understanding that EXR is a 2d image format.
Any guidance on how to get started would be greatly appreciated!

Does marching cubes in vtk require specific input?

The marching cubes class does not generate anything after I pass a vtkImageData to it.
I am trying to generate a surface from a 3D numpy array. I used evtk to export the array to a .vti (Structured Points) file, stored as point data. Then I read the file with vtkXMLImageDataReader. After passing the reader's output (a vtkImageData) to the vtkMarchingCubes class, the class returns a vtkPolyData with no points or cells. Since vtkImageData is used in the VTK wiki's example, the MarchingCubes class should have no problem handling it.
I tried to export the ImageData generated by the reader; the output can be visualized in ParaView without any problem.
I am just wondering whether there are any special requirements for marching cubes in VTK to generate the surface. It would be even better if the 3D numpy array could be transformed directly into a data type suitable for MarchingCubes.
Here is my code to read the .vti file and pass it to marching cubes:
import vtk

# Read the image data exported to a .vti file
dataReader = vtk.vtkXMLImageDataReader()
dataReader.SetFileName('model_name.vti')
dataReader.Update()

# Extract the iso-surface for label value 1
contacts = vtk.vtkDiscreteMarchingCubes()
contacts.SetInputConnection(dataReader.GetOutputPort())
contacts.SetValue(0, 1)
contacts.ComputeNormalsOn()
contacts.Update()
result = contacts.GetOutput()
If you wish to do this in the python/numpy world, then I would suggest using the marching cubes implementation from here (or look at isosurface routines in pyqtgraph, mayavi ...). Although there are many tutorials on moving data to and from VTK, if you are already programming with numpy, I wouldn't mix the two methodologies.
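For instance, a purely numpy-side sketch using scikit-image's implementation (which may or may not be the one linked above; the input array and iso-level here are made up):

import numpy as np
from skimage import measure

# volume: a 3D numpy array of scalar values (random data as a stand-in)
volume = np.random.rand(50, 50, 50)

# verts and faces describe the triangulated iso-surface at the chosen level
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)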
Although this question is quite old already, I ran into a very similar problem.
As YC.Chui pointed out in his comment, we need to set the scalar values in the read data. For me this helped:
import vtk

reader = vtk.vtkXMLImageDataReader()
reader.SetFileName('model_name.vti')
reader.Update()
# Mark the first point-data array as the active scalars so that
# downstream filters such as marching cubes can find it
readData = reader.GetOutput()
readData.GetPointData().SetScalars(readData.GetPointData().GetArray(0))
I am quite new to VTK and have absolutely no idea why this solves the problem and whether there is a better way. Feel free to comment and correct me!
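For completeness, the fixed readData can then be handed to the marching cubes filter with SetInputData instead of SetInputConnection (a sketch based on the pipeline from the question):

contacts = vtk.vtkDiscreteMarchingCubes()
contacts.SetInputData(readData)  # image data with the scalars set explicitly
contacts.SetValue(0, 1)
contacts.ComputeNormalsOn()
contacts.Update()
result = contacts.GetOutput()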

.asc viewer using Kivy

I want to develop a 3D file viewer in Kivy and Python that reads and displays .asc mesh files of the format:
x1,y1,z1
x2,y2,z2
........
xi,yi,zi
What I have thought of so far is to use a method similar to Processing's beginShape() to begin drawing a 3D shape, and then use a for-loop to append each point.
I have also found that Kivy example which parses .obj files and then displays them. Do you have any ideas on how I can make a similar ascparser and try to display my files?
Any help is greatly appreciated
Your best strategy at the moment is probably to read the objparser and try to understand what it is doing. The important thing is building a list of points and normals, which are passed to opengl via a Mesh with a custom vertex_format and custom shaders. In principle it wouldn't be very hard to do the same thing for your own filetype just by comparison with the .obj code, though you will need some understanding of what's going on (you can read about opengl and read the kivy source, if you haven't already) to make significant changes.
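As a starting point for the parsing side, a minimal sketch of reading such an .asc file into a flat list of vertex coordinates (comma-separated x,y,z per line, as in the question; the file name is hypothetical):

def load_asc(filename):
    # Read an .asc file with one 'x,y,z' triple per line into a flat vertex list
    vertices = []
    with open(filename) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            x, y, z = (float(value) for value in line.split(','))
            vertices.extend([x, y, z])
    return vertices

points = load_asc('model.asc')
# points can then be fed into a Kivy Mesh (together with normals and indices),
# in the same way the objparser example builds its vertex list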
This is really an advanced topic right now; Kivy has very few pre-built wrappers for 3D OpenGL rendering. The backend is fully capable (which is why the 3D rendering example isn't that complex, for instance), but you probably do need some understanding of what's going on to accomplish things like your own task.
There are also a few other examples of 3d rendering in Kivy, which you might find helpful. nskrypnik has several repositories doing just this (see kivy-trackball, kivy-3dpicking, kivy-rotation3d), and seems to have begun implementing a proper 3d api in the kivy3 repo, though this is not complete and I suggest it as something you can learn about by reading, not something that can necessarily do what you want right now. The other nice example I've seen is a 3d inspector POC by tito, though it's just a proof of concept and not a polished product.
