I'm working on an algorithm to optimize the packing of items in boxes.
I can have up to 20 items that I need to pack into as few boxes as possible (there are 6 possible box sizes), while minimizing the wasted volume within the boxes. I thought of implementing a variation of the 3D bin packing problem (3D BPP) algorithm, which solves part of my problem, but I cannot find any implementation written in Python.
Does anyone have suggestions on the way to go, or Python implementations of 3D BPP that I could use?
Thanks!
Unfortunately, the problem you want to solve here is NP-hard, so there is no known efficient exact solution, but the following papers provide some nice heuristic algorithms which are not difficult to implement.
OPTIMIZING THREE-DIMENSIONAL BIN PACKING THROUGH SIMULATION
Three-dimensional packings with rotations
An Improved Three-Weight Message-Passing Algorithm
The first link posted by Don Thousand, "Optimizing...", has gone dead within three years. Another 3D-packing enthusiast, Enzo Ruiz, has preserved it on his GitHub page:
https://github.com/enzoruiz/3dbinpacking
The link to the paper by Erick Dube is https://github.com/enzoruiz/3dbinpacking/blob/master/erick_dube_507-034.pdf
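For reference, here is a rough usage sketch of the py3dbp package in that repository. The class and method names (Packer, Bin, Item, string()) are taken from its README and may have changed since, so treat this as an illustration rather than a definitive API:

    # Sketch based on the README of github.com/enzoruiz/3dbinpacking (py3dbp).
    # Names and signatures may differ in newer versions of the package.
    from py3dbp import Packer, Bin, Item

    packer = Packer()

    # Candidate box sizes: (name, width, height, depth, max weight).
    packer.add_bin(Bin('small-box', 10, 10, 10, 100))
    packer.add_bin(Bin('medium-box', 20, 15, 10, 100))

    # Items to pack: (name, width, height, depth, weight). Placeholder values.
    packer.add_item(Item('item-1', 5, 4, 2, 1))
    packer.add_item(Item('item-2', 9, 9, 9, 1))

    packer.pack()

    for b in packer.bins:
        print(b.string())
        for item in b.items:
            print('  fitted:', item.string())
        for item in b.unfitted_items:
            print('  unfitted:', item.string())

To cover your "fewest boxes, least wasted volume" objective you would still need an outer loop over box combinations, since the package itself only packs items into the bins you give it.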
I'm trying to build a simulation that will take place in a 1000x1000x1000 space. For each point in space, I need to be able to encode 2 or 3 properties.
I also need to be able to do some basic operations on the space, such as, given a point, find the properties of the 26 adjacent 3D neighbors of the point.
All points will lie on vertices in the 1000x1000x1000 space (i.e. every point is discrete).
I wrote up a version in python using numpy and it is much too slow. I've also looked for libraries that would speed it up but couldn't find anything.
Does anyone know of a python library that would provide useful helper functions and would be appropriate for a simulation of this size?
Using NumPy together with the Numba Python compiler for the more intricate algorithms can take you a long way.
Also, I think you are referring to a "stencil" algorithm, and Numba has specific stencil functionality that could help you.
But start with a smaller grid during development!
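For example, here is a rough sketch using Numba's @stencil decorator to compute the sum over the 26 neighbours of every point, on a reduced grid. The grid size, the random contents, and the plain sum are placeholders for your actual properties and operations, and the loop-based relative indexing follows Numba's documented neighborhood pattern:

    import numpy as np
    from numba import njit, stencil

    # Development-sized grid with one integer property per point (placeholder data).
    N = 100
    grid = np.random.randint(0, 3, size=(N, N, N)).astype(np.int32)

    # The neighborhood must be declared explicitly because the kernel uses
    # loop variables as relative indices.
    @stencil(neighborhood=((-1, 1), (-1, 1), (-1, 1)))
    def neighbour_sum_kernel(a):
        # Sum the 3x3x3 block around the centre, then subtract the centre
        # itself, leaving the sum over the 26 adjacent cells.
        total = 0
        for i in range(-1, 2):
            for j in range(-1, 2):
                for k in range(-1, 2):
                    total += a[i, j, k]
        return total - a[0, 0, 0]

    @njit
    def neighbour_sums(a):
        return neighbour_sum_kernel(a)

    result = neighbour_sums(grid)   # same shape as grid, zeros at the border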
I am plotting some geometry on an image. The problem is that the coordinates come out as floats, so I cannot plot them directly. As a workaround I am using floor() to round each coordinate down to an integer. This works fine in some cases, except that it shifts my image a little bit.
x = 9.7
x' = floor(x) = 9  // the plot is now at 9 and not at 9.7 as desired (this is the 'shift')
But in cases where I need to draw multiple shapes at equal distances, I am facing a problem: the shift caused by floor() is not uniform for each shape.
It is just like quantization noise in digital communication.
Is there any way I can get around this problem ?
This problem is actually quite a common one in computer graphics. Rasterisation is the problem domain you are currently struggling with. You may find Bresenham's line algorithm a good introduction to the topic. Here is a nice interactive site with a few different rasterisation algorithms.
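If it helps to see the idea in code, here is a minimal Python sketch of Bresenham's line algorithm for integer endpoints; it just returns the pixel coordinates the line should touch:

    def bresenham_line(x0, y0, x1, y1):
        """Integer pixel coordinates along the line from (x0, y0) to (x1, y1)."""
        dx = abs(x1 - x0)
        dy = abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx - dy
        points = []
        while True:
            points.append((x0, y0))
            if x0 == x1 and y0 == y1:
                break
            e2 = 2 * err
            if e2 > -dy:   # step in x
                err -= dy
                x0 += sx
            if e2 < dx:    # step in y
                err += dx
                y0 += sy
        return points

    # Example: bresenham_line(0, 0, 6, 4) -> [(0, 0), (1, 1), (2, 1), (3, 2), (4, 3), (5, 3), (6, 4)]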
Hope that helps!
Image mosaics use a set of predefined squared images to build a larger image (example here).
There are a lot of solutions and it's quite trivial to achieve this effect. However, it becomes much harder with the following constraints:
The shape of the original mosaics is abstract. Any convex polygon could do.
Each mosaic can only be used once.
There is no need for the mosaics to be absolutely packed (i.e. occupying 100% of the canvas), but they should be as packed as possible without overlapping.
I'm trying to automate the ancient art of tessellation, specifically the Opus palladianum technique.
My idea is to use simulated annealing or some other heuristic to optimize the position and rotation of each irregular mosaic, swapping two in each iteration, trying to minimize some energy function that reflects the similarity to the target image as well as the "packedness" of the tiles.
I'm trying to achieve this in python, any ideas and help would be greatly appreciated.
I expect you could probably use a GA (genetic algorithm) with a "non-overlapping" constraint to do this job.
Parameters for each individual (each convex polygon) are:
initial position
rotation
(size ?)
And your fitness function will be built to give the best score to an individual whose polygon does not overlap the others (and is packed closely against them).
You may see this video and this one as examples.
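If it helps, here is a very rough sketch of the overlap/coverage part of such a fitness function, using the shapely library as an assumption on my part (the tile coordinates, genes and weights are placeholders, and your image-similarity term would be added on top):

    from shapely.geometry import Polygon, box
    from shapely.affinity import rotate, translate

    def place(tile, x, y, angle):
        """Apply an individual's genes (position + rotation) to a base tile."""
        return translate(rotate(tile, angle, origin='centroid'), xoff=x, yoff=y)

    def fitness(tiles, genes, canvas):
        """Higher is better: reward covered canvas area, punish pairwise overlaps.

        tiles  : list of base shapely Polygons (one per mosaic piece)
        genes  : list of (x, y, angle) tuples, one per tile
        canvas : shapely Polygon for the target canvas
        """
        placed = [place(t, *g) for t, g in zip(tiles, genes)]
        overlap = 0.0
        for i in range(len(placed)):
            for j in range(i + 1, len(placed)):
                overlap += placed[i].intersection(placed[j]).area
        covered = sum(p.intersection(canvas).area for p in placed)
        # Placeholder weight on the overlap penalty; tune for your case and
        # add the image-similarity term here.
        return covered - 10.0 * overlap

    # Toy usage with two triangular tiles on a 100x100 canvas:
    canvas = box(0, 0, 100, 100)
    tiles = [Polygon([(0, 0), (10, 0), (5, 8)]), Polygon([(0, 0), (12, 0), (6, 9)])]
    genes = [(20, 20, 0), (40, 40, 30)]
    print(fitness(tiles, genes, canvas))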
Regards
I've been trying to read through the Stackoverflow questions for generating point clouds (x,y,z) coordinates from a left and right stereo image pair.
I haven't come to any definite solution, and I'm asking the community here for some help.
Problem statement: Given two stereo images, generate a 3D (x, y, z) Cartesian point cloud from them, and do so in a way that could scale to a large set (thousands) of stereo image pairs.
My programming experience is mostly in MATLAB; I've dabbled in Python, and my C++ is limited, but I may be able to work in that as well.
Speed is a factor here, so the idea is to find a quick method of successively going through these pairs and generating the point clouds.
NOTE: I am not asking for the BEST as to avoid comparative solutions, I'm just asking for solutions.
Thank you very much!
Edit: After being recommended the Stereo Block Matching implementation in OpenCV, I may steer away from that method, as there is wide variation in the texture of the topographical pictures I'm dealing with.
What you probably want is called "Stereo Block Matching". If you don't feel like writing it yourself, OpenCV has an implementation ready to go (even a CUDA accelerated version).
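A minimal sketch of that pipeline with OpenCV's Python bindings follows. The file names are placeholders, and you would need the real Q reprojection matrix from your stereo calibration (cv2.stereoRectify) instead of the identity used here:

    import cv2
    import numpy as np

    # Placeholder file names: rectified left/right images from your calibration step.
    left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
    right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

    # Block matcher; numDisparities must be a multiple of 16, blockSize odd.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

    # compute() returns 16-bit fixed-point disparities, hence the division by 16.
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0

    # Q is the 4x4 disparity-to-depth matrix from stereo rectification
    # (identity here only as a placeholder).
    Q = np.eye(4, dtype=np.float32)
    points_3d = cv2.reprojectImageTo3D(disparity, Q)   # H x W x 3 array of (x, y, z)

    # Keep only pixels with a valid disparity.
    mask = disparity > disparity.min()
    point_cloud = points_3d[mask]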
I'm looking for a Python library that would allow me to visualise geometrical data. Nothing extremely complicated, just a bunch of points in the plane or space and a few basic shapes that I would generate.
I had a look at this question and the answers to it, but the focus over there seems to be on computation.
How about Matplotlib/mplot3D?
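For example, a minimal 3D scatter plus a simple line with mplot3d (the points here are just random placeholder data):

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3d projection)

    # Random placeholder points in space.
    pts = np.random.rand(50, 3)

    fig = plt.figure()
    ax = fig.add_subplot(projection='3d')
    ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2])
    ax.plot([0, 1], [0, 1], [0, 1])   # a basic shape: a line segment through space
    plt.show()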