To frame this question: I am a complete beginner in terms of 3d rendering, and I would like to get my feet wet.
My goal is to create a command-line script (ideally in Python) which takes some kind of 3d model file (e.g. a sphere), maps a texture onto it, and outputs the result as an image file. That is, I would like my program to essentially be able to "do" the following:
From my reading, this appears to be something known as "uv mapping", but almost everything I've found on the subject is about how to do this using Blender, which I would prefer to avoid: in a 2d analogy, it seems to me that Blender is like Photoshop, where I'm looking for something like ImageMagick. Beyond that, I haven't been able to find much.
The closest I have found is this other stackoverflow question:
uv mapping works bad on low resolution (warning: lot of images)
But I don't quite understand what's going on there, because it doesn't import a 3d model at all -- it is my [perhaps mistaken?] understanding that EXR is a 2d image format.
Any guidance on how to get started would be greatly appreciated!
Related
I have some time series plots that I made a long time ago and would like to improve in terms of graphics and style. However, I didn't save the raw data and I cannot recover it.
So I was wondering, is there a way to retrieve the data points from a chart image (e.g. a png file)? That is, I input an image and get a csv/dataframe/array with pairs of x,y coordinates?
To give an idea, this is the kind of image I would like to convert:
I've seen that GRABIT could potentially work, but I'm not familiar with MATLAB. Is there anything Python-based, or possibly some web tools?
Preferences:
work on linux systems (in particular ubuntu)
doesn't require installation
There is a small and simple piece of software developed here in Brazil that can do this kind of thing.
You can download it from the following link:
http://paginapessoal.utfpr.edu.br/lasouza/analise-nao-linear-de-estruturas/Pega%20Ponto%201.0.exe/view
You can load the image, specify the origin and set the x and y labels. Afterwards, you can retrieve the points that you click on the image.
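If you'd rather stay in pure Python on Linux (per the question's preferences), the same click-to-digitize workflow can be sketched with matplotlib's ginput. This is a minimal sketch assuming linear axes with the origin at (0, 0); 'chart.png' and the two reference values are placeholders:

import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np

img = mpimg.imread('chart.png')   # placeholder: your chart image
plt.imshow(img)

# Click three reference points: the origin, a known value on the x axis,
# and a known value on the y axis, so pixels can be mapped to data units.
(ox, oy), (xx, _), (_, yy) = plt.ginput(3, timeout=0)
x_ref, y_ref = 10.0, 1.0          # data values of the two axis clicks (edit these)

# Now click the data points; middle-click to stop.
pts = np.array(plt.ginput(-1, timeout=0))
x = (pts[:, 0] - ox) / (xx - ox) * x_ref
y = (pts[:, 1] - oy) / (yy - oy) * y_ref
np.savetxt('points.csv', np.column_stack([x, y]), delimiter=',', header='x,y')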
I have a Geotiff that I display on a tile map, but it's slightly off to the south. For example, on this screenshot the edge of the image should be where the country border is, but it's a bit to the south:
Here's the relevant part of the code:
import rioxarray
import datashader as ds
import holoviews as hv
hv.extension('bokeh')

tiff_rio_500 = rioxarray.open_rasterio('/content/mw/mw_dist_to_light_at_all_from_light_mask_mw_cut_s3_500.tif')
dataarray_500 = tiff_rio_500[0]
dataarray_500_meters = dataarray_500.copy()
# manually convert the lon/lat coordinates to Web Mercator meters
dataarray_500_meters['x'], dataarray_500_meters['y'] = ds.utils.lnglat_to_meters(dataarray_500.x, dataarray_500.y)
hv_dataset_500_meters = hv.Dataset(dataarray_500_meters, name='nightlights', vdims='cumulative_cost')
hv_tiles_osm_bokeh = hv.element.tiles.OSM().opts(width=1000, height=800)
hv_image_500_meters_bokeh = hv.Image(hv_dataset_500_meters, kdims=['x', 'y'], vdims=['cumulative_cost'], rtol=1).opts(cmap='inferno_r')
hv_combined_osm_500_meters_bokeh = hv_tiles_osm_bokeh * hv_image_500_meters_bokeh
hv_combined_osm_500_meters_bokeh
You can see the live notebook on google colab.
Now this is not the usual "everything is way off" problem that occurs when one doesn't convert the map to Web Mercator. It is almost perfect; it just isn't exact.
The Geotiff is an Earth Engine export. This is how it looked originally in Earth Engine (see live code):
As you can see, the image follows the borders everywhere.
At first I suspected that maybe the export went wrong, or that the Google Maps tileset is somewhat different, but no: if I open the same exported tiff in the QGIS application on my Windows laptop and view it on the same OSM tilemap as in the colab notebook, it looks fine:
Okay, the image does not follow the borders perfectly, but I know why, and that's unrelated (I oversimplified the country border geometry). The point is that it is projected to the correct location. So the tiff contains the correct information and can be displayed at the same location as the borders in the OSM tilemap, yet in my Holoviews-Datashader-Bokeh project it is slightly off.
Any idea why this happens?
I've got the answer on the Holoviz Discourse from one of the developers. Seeing how the recommended function is practically undocumented, I copy it here in case somebody looks for an easy way to load a geotiff and add to a tilemap in Holoviews/Geoviews:
https://discourse.holoviz.org/t/geotiff-overlay-position-is-slightly-off-on-holoviews-bokeh-tilemap/2071
philippjfr
I wouldn’t expect manually transforming the coordinates to work particularly well. While it’s a much heavier weight dependency, for accurate coordinate transforms I’d recommend using GeoViews.
import geoviews as gv  # assuming the conventional alias for GeoViews

img = gv.util.load_tiff('/content/mw/mw_dist_to_light_at_all_from_light_mask_mw_cut_s3_500.tif')
gv.tile_sources.OSM() * img.opts(cmap='inferno_r')
Edit: It's possible one doesn't want to use GeoViews, as it has a pretty heavy dependency chain that requires a lot of patience and luck to set up right. Fortunately, rioxarray (through rasterio) has a tool to reproject: just append .rio.reproject('EPSG:3857') to the first line, and then you don't have to use lnglat_to_meters, which is not intended for this purpose.
So the corrected code becomes:
import rioxarray
import holoviews as hv
hv.extension('bokeh')

# reproject the raster itself to Web Mercator instead of transforming the coordinates
tiff_rio_500 = rioxarray.open_rasterio('/content/mw/mw_dist_to_light_at_all_from_light_mask_mw_cut_s3_500.tif').rio.reproject('EPSG:3857')
hv_dataset_500_meters = hv.Dataset(tiff_rio_500[0], name='nightlights', vdims='cumulative_cost')
hv_tiles_osm_bokeh = hv.element.tiles.OSM().opts(width=1000, height=800)
hv_image_500_meters_bokeh = hv.Image(hv_dataset_500_meters, kdims=['x', 'y'], vdims=['cumulative_cost'], rtol=1).opts(cmap='inferno_r')
hv_combined_osm_500_meters_bokeh = hv_tiles_osm_bokeh * hv_image_500_meters_bokeh
hv_combined_osm_500_meters_bokeh
Now, compared to the GeoViews solution (which supposedly handles everything automatically), this solution has a downside: if you use a hover tooltip to display the values and coordinates under the mouse cursor, the coordinates show up in the newly projected Web Mercator system, in millions of meters, instead of the expected degrees. The solution for that is outside the scope of this answer, but I'm just finishing a detailed step-by-step guide that contains a solution for that too, and I will link it here as soon as it is published. Of course, if you don't use a hover tooltip, the code above will be perfect for you without any more tinkering.
For my school project, I need to find images in a large dataset. I'm working with Python and OpenCV. So far, I've managed to find an exact match of an image in the dataset, but it takes a lot of time even though I only had 20 images for the test code. So I've searched a few pages of Google and tried the code on these pages:
image hashing
building an image hashing search engine
feature matching
Also, I've been thinking of searching through the hashed dataset, saving the paths, and then finding the best feature-matching image among them. But most of the time, the narrowed-down working set is very different from my query image.
The image hashing is really great, and it looks like what I need, but there is a problem: I need to find an exact match, not similar photos. So I'm asking you guys: if you have any suggestions, or a piece of code that might help or improve the reference code I've linked, can you share it with me? I'd be really happy to try or research whatever you send or suggest.
OpenCV is probably the wrong tool for this. The algorithms there are geared towards finding similar matches, not exact ones. The general idea is to use machine learning to teach the code to recognize what a car looks like so it can detect cars in videos, even when the color or form changes (driving in the shadow, different make, etc.).
I've found that two approaches work well when trying to build an image database:
Use a normal hash algorithm like SHA-256 plus maybe some metadata (file or image size) to find matches
Resize the image down to 4x4 or even 2x2. Use the pixel RGB values as "hash".
The first approach reduces the image to a number. You can then put the number in a lookup table. When searching for the image, apply the same hashing algorithm to the image you're looking for and use the new number to look in the table. If it's there, you have a match.
Note: In all cases, hashing can produce the same number for different pictures. So you have to compare all the pixels of two pictures to make sure it's really an exact match. That's why it sometimes helps to add information like the picture size (in pixels, not file size in bytes).
The second approach allows you to find pictures which look very similar to the eye but are in fact slightly different. Imagine cropping off a single pixel column on the left or tilting the image by 0.01°. To you, the images will look the same, but to a computer they will be totally different. The second approach tries to average such small changes out. The cost is that you will get more collisions, especially for B&W pictures.
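To make that concrete, here's a minimal sketch of both approaches using hashlib and Pillow; the file names are placeholders:

import hashlib
from PIL import Image

def file_hash(path):
    # approach 1: reduce the image file to a SHA-256 digest
    with open(path, 'rb') as f:
        return hashlib.sha256(f.read()).hexdigest()

def tiny_hash(path, size=(4, 4)):
    # approach 2: shrink to 4x4 and use the raw RGB values as the key
    img = Image.open(path).convert('RGB').resize(size)
    return img.tobytes()

# build a lookup table, then probe it with the query image
index = {file_hash(p): p for p in ('a.png', 'b.png')}   # placeholder paths
candidate = index.get(file_hash('query.png'))           # placeholder query
if candidate is not None:
    # hashes can collide, so confirm with a full pixel comparison
    exact = (Image.open(candidate).convert('RGB').tobytes()
             == Image.open('query.png').convert('RGB').tobytes())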
Finding exact image matches using hash functions can be done with the undouble library (disclaimer: I am also the author). It works using a multi-step process of pre-processing the images (grayscaling, normalizing, and scaling), computing the image hash, and grouping images based on a threshold value.
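A minimal usage sketch along the lines of the project's README; the directory path is a placeholder and the parameters are illustrative defaults:

from undouble import Undouble

model = Undouble(method='phash', hash_size=8)
model.import_data('path/to/images')   # placeholder directory
model.compute_hash()
model.group(threshold=0)              # threshold 0: only identical hashes group together
model.plot()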
I want to develop a 3D file viewer in kivy and python that reads and displays .asc mesh files of the format:
x1,y1,z1
x2,y2,z2
........
xi,yi,zi
What I have thought of so far is to use a method similar to beginShape() in Processing to begin drawing a 3D shape, and then use a for-loop to append each point in turn.
I have also found this kivy example, which parses .obj files and then displays them. Do you have any ideas on how I can make a similar ascparser and try to display my files?
Any help is greatly appreciated
Your best strategy at the moment is probably to read the objparser and try to understand what it is doing. The important thing is building a list of points and normals, which are passed to opengl via a Mesh with a custom vertex_format and custom shaders. In principle it wouldn't be very hard to do the same thing for your own filetype just by comparison with the .obj code, though you will need some understanding of what's going on (you can read about opengl and read the kivy source, if you haven't already) to make significant changes.
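For the file format in your question, the parsing half is straightforward; here's a minimal sketch (the normals and the Mesh/shader wiring would still need to be done by analogy with the .obj example):

def load_asc(filename):
    """Read an .asc file of comma-separated x,y,z lines into a list of points."""
    points = []
    with open(filename) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            x, y, z = (float(v) for v in line.split(','))
            points.append((x, y, z))
    return points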
This is really an advanced topic right now, Kivy has very few pre-built wrappers to 3d opengl rendering. The backend is fully capable (so the 3d rendering example isn't that complex, for instance), but you probably do need some understanding of what's going on to accomplish things like your own task.
There are also a few other examples of 3d rendering in Kivy, which you might find helpful. nskrypnik has several repositories doing just this (see kivy-trackball, kivy-3dpicking, kivy-rotation3d), and seems to have begun implementing a proper 3d api in the kivy3 repo, though this is not complete and I suggest it as something you can learn about by reading, not something that can necessarily do what you want right now. The other nice example I've seen is a 3d inspector POC by tito, though it's just a proof of concept and not a polished product.
I want to write a tool in Python that will help me create isometric tiles from 3D models. You see, I'm not a very proficient artist, free 3D models are plentiful, and creating something like a table or chair is much easier in 3D than in painting.
This script will load a 3D model in orthographic projection and take pictures from four directions so it can be used in a game. I've tried this in Blender, but the results are inconsistent, very difficult to control, and it takes a very long time to create simple sprites.
Rolling my own script will probably let me do neat things too, especially batch generation, maybe on texture changes, shading, etc. The game itself will probably be made in Python too, so maybe I could generate on the fly. (Edit: and automatically create cut-out see-through walls for when they face the camera.)
Now my question, what Python libraries can do something like this? I've checked both Pyglet and Panda3D, but I haven't even been able to load a model, let alone set it to orthographic projection.
I found this code:
www.pygame.org/wiki/OBJFileLoader
It let me load and display an .obj file of a cube from Blender with ease. It uses PyOpenGL, so it should let me do everything OpenGL can. I never knew OpenGL was so low-level; I didn't realize I'd have to write my own loaders and everything.
Anyway, I'm pretty sure I can modify this to project isometrically, rotate the object, grab shots and combine them into sprites. Thank you, guys!
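For reference, here's a hedged sketch of those two pieces, an isometric-style orthographic projection and grabbing the framebuffer into a sprite image. It assumes a pygame/PyOpenGL context is already set up as in the OBJFileLoader example:

from OpenGL.GL import *
from PIL import Image, ImageOps

def set_isometric_view(width, height, scale=2.0):
    # orthographic projection: no perspective, as needed for isometric tiles
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    aspect = width / float(height)
    glOrtho(-scale * aspect, scale * aspect, -scale, scale, -100.0, 100.0)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    glRotatef(35.264, 1, 0, 0)  # classic isometric tilt
    glRotatef(45, 0, 1, 0)      # rotate in 90-degree steps for the four directions

def grab_sprite(width, height, path):
    # read the rendered frame back and save it; GL rows come back bottom-up
    glPixelStorei(GL_PACK_ALIGNMENT, 1)
    data = glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE)
    img = Image.frombytes('RGBA', (width, height), data)
    ImageOps.flip(img).save(path)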
Since you looked at Panda3D: if you can convert your model to the 'egg' format (which Blender/Maya may be able to do), then you should be able to import it.
https://www.panda3d.org/manual/index.php/Loading_Models
https://www.panda3d.org/manual/index.php/Models_and_Actors
http://www.panda3d.org/manual/index.php/Converting_from_Blender
Note: the source of this was 'python 3d mesh loader' in a popular search engine. This looks viable to me; I now need to try installing it and writing some code...
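For a first experiment, a minimal Panda3D sketch might look like this (untested; the model path and film size are placeholders to adjust):

from direct.showbase.ShowBase import ShowBase
from panda3d.core import OrthographicLens

class Viewer(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        model = self.loader.loadModel('model.egg')  # placeholder path
        model.reparentTo(self.render)
        # swap the default perspective lens for an orthographic one
        lens = OrthographicLens()
        lens.setFilmSize(10, 10)  # size of the visible volume; tune per model
        self.cam.node().setLens(lens)

Viewer().run()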