I'm currently working on a small game using pygame.
Right now I render images the standard way, by loading them and then blitting them to my main surface. That works fine when I want to use an image at its own size. However, I'd like to take any NxN image and display it at an MxM resolution. Is there a technique for this that doesn't use surfarray and Numeric? Something that already exists in pygame? If not, do you think it would be expensive to compute?
I'd like to stretch the image, i.e. upscale or downscale it. Sorry I wasn't clearer.
There is no single command to do this. You will first have to resize the image using pygame.transform.scale, then make a rect of the same size, set its position, and finally blit. It would probably be wisest to wrap this in a function.
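A minimal sketch of such a helper (the function name and arguments are my own), assuming the image is already loaded as a Surface; pygame.transform.smoothscale is a drop-in replacement if you want filtered scaling:

    import pygame

    def blit_scaled(dest, image, size, pos=(0, 0)):
        # Resize the source surface to the requested (M, M) size.
        scaled = pygame.transform.scale(image, size)
        # Make a rect of the same size and set its position.
        rect = scaled.get_rect(topleft=pos)
        # Finally, blit the scaled surface to the destination.
        dest.blit(scaled, rect)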
I want to resize my images (to a smaller size):
How can I resize my images properly, without the ugly pixel artifacts, for further CNN processing afterward?
Your problems are due to interpolation artifacts. As you can check in the documentation for cv2.resize, bilinear interpolation is used by default. You should probably go with the documentation's suggestion and try INTER_AREA instead. You may also want to check the other options and see which one suits you best.
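For example, a minimal sketch with a placeholder filename and a placeholder 224x224 CNN input size:

    import cv2

    img = cv2.imread("input.png")
    # cv2.resize defaults to bilinear; INTER_AREA is the documented
    # choice for shrinking, as it averages over source pixels.
    small = cv2.resize(img, (224, 224), interpolation=cv2.INTER_AREA)
    cv2.imwrite("resized.png", small)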
If you really want a crisp picture at a small size, you need to look at vector images (see the Wikipedia article on vector graphics). The OpenCV library doesn't provide a function for converting bitmap images to vector images.
I have two images, one image which contains a box and one without. There is a small vertical disparity between the two pictures since the camera was not at the same spot and was translated a bit. I want to cut out the box and replace the hole with the information from the other picture.
I want to achieve something like this (a slide from a computer vision course)
I thought about using the cv2.createBackgroundSubtractorMOG2() method, but it does not seem to work with only 2 pictures.
Simply subtracting one picture from the other does not work either, because of the disparity.
The course suggests using RANSAC to compute the most likely relationship between the two pictures and subtracting the area that changed a lot. But how do I actually fill in the holes?
Many thanks in advance!!
If you plan to use only a pair of images (or only a few images), image stitching methods are better than background subtraction.
The steps are:
Calculate homography between the two images.
Warp the second image so that it overlaps the first.
Replace the region containing the human with pixels from the warped image.
This link shows a basic example of image stitching. You will need extra work if both images have humans in different places, but otherwise it should not be hard to tweak this code.
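As a rough sketch of those three steps with OpenCV (the filenames, the choice of ORB features, and the precomputed mask of the box region are all my own assumptions, not part of the linked example):

    import cv2
    import numpy as np

    img1 = cv2.imread("with_box.jpg")      # image containing the box
    img2 = cv2.imread("without_box.jpg")   # clean image

    # 1. Match features and estimate the homography; RANSAC rejects
    #    outlier matches, e.g. ones on the object that moved.
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(gray1, None)
    k2, d2 = orb.detectAndCompute(gray2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:50]
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 2. Warp the clean image into the first image's frame.
    h, w = img1.shape[:2]
    warped = cv2.warpPerspective(img2, H, (w, h))

    # 3. Replace the masked region with pixels from the warped image
    #    (a binary mask of the box region is assumed to exist already).
    mask = cv2.imread("box_mask.png", cv2.IMREAD_GRAYSCALE) > 0
    result = img1.copy()
    result[mask] = warped[mask]
    cv2.imwrite("filled.jpg", result)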
You can try this library for background subtraction problems: https://github.com/andrewssobral/bgslibrary. There are Python wrappers for this tool.
Question:
Using graphicsmagick, what is a good way to find the coordinates of a small image inside a bigger image?
Explanation:
To explain further, I have a large screenshot that I am working with, and I would like to find the pixel coordinates of a known icon that is expected to appear somewhere within the screenshot.
Also, if this is not a good library for this purpose, I would love to hear suggestions for alternatives, preferably ones compatible with Python.
Thanks so much!
I use "gm display" to do that.
    gm display &
Click on the image. Select Transform, then Crop. Put the cursor at the top left of the small image. Read the coordinates from the small information window. Select "Dismiss".
Note that this is a manual method, which is OK if you are really "working with the image" on screen. If you are looking for a batch method, it'll be a little more complex.
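For a batch-friendly alternative compatible with Python, OpenCV's template matching (rather than graphicsmagick) is a common choice; a minimal sketch with placeholder filenames:

    import cv2

    screen = cv2.imread("screenshot.png")
    icon = cv2.imread("icon.png")

    # Slide the icon over the screenshot and score every position.
    scores = cv2.matchTemplate(screen, icon, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, top_left = cv2.minMaxLoc(scores)

    print("best match score:", best_score)
    print("top-left corner of the icon:", top_left)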
I have photographic images of galaxies. There is some unwanted data on these images (such as stars or aeroplane streaks) that is masked out. I don't want to just fill the masked areas with some mean value, but to interpolate them according to the surrounding data. How do I do that in Python?
We've tried various functions in the scipy.interpolate package: RectBivariateSpline, interp2d, splrep/splev, map_coordinates, but all of them seem to work by finding new pixels between existing pixels; we were unable to make them fill an arbitrary "hole" in the data.
What you want is called Inpainting.
OpenCV has an inpaint() function that does what you want.
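A minimal sketch of calling it (the filenames and the radius are placeholders; the mask must be a single-channel image whose nonzero pixels mark the areas to fill):

    import cv2

    img = cv2.imread("galaxy.png")
    # Nonzero mask pixels are the ones to be reconstructed.
    mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
    # INPAINT_TELEA propagates surrounding structure into the hole;
    # cv2.INPAINT_NS is the alternative algorithm.
    result = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    cv2.imwrite("inpainted.png", result)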
What you want is not interpolation at all. Interpolation depends on the assumption that the data between known points is roughly smooth and continuous. In any non-trivial image, this will not be the case.
You actually want something like the content-aware fill that is in Photoshop CS5. There is a free alternative available in The GIMP through the GIMP-resynthesize plugin. These filters are extremely advanced and to try to re-implement them is insane. A better choice would be to figure out how to use GIMP-resynthesize in your program instead.
I made my first GIMP Python script, which might help you:
my scripts
It is called a conditional filter, as it is a matrix filter that fills every transparent pixel with the mean value of its four nearest non-transparent neighbours.
Be sure to use an RGBA image whose alpha values are only 0 and 255.
It is rough, simple, slow, and unoptimized, but bug-free.
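For reference, here is a NumPy restatement of the same idea. This is my own sketch, not the GIMP script itself, and it assumes the image has at least one opaque pixel:

    import numpy as np

    def conditional_fill(rgba):
        # Fill fully transparent pixels with the mean colour of their
        # opaque 4-neighbours, sweeping until nothing is left to fill.
        img = rgba.astype(float)
        opaque = img[..., 3] > 0
        h, w = opaque.shape
        while not opaque.all():
            done = opaque.copy()
            for y, x in zip(*np.where(~opaque)):
                neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                vals = [img[j, i, :3] for j, i in neighbours
                        if 0 <= j < h and 0 <= i < w and opaque[j, i]]
                if vals:
                    img[y, x, :3] = np.mean(vals, axis=0)
                    img[y, x, 3] = 255
                    done[y, x] = True
            opaque = done
        return img.astype(np.uint8)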
I want to make sure an image that will be saved stays inside the border of my specific dimensions, and scales down if either of its dimensions exceeds them.
I will be using a gallery on my website built with Django, and its width and height are fixed. If I crop while saving an image to keep the dimensions under control, part of the image is cut off, so it doesn't behave the way I want.
How can this be achieved?
You should really look at http://thumbnail.sorl.net/ (especially http://thumbnail.sorl.net/template.html#options); it will solve this and many other problems that you may encounter later.
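If you'd rather do the scaling yourself, Pillow's Image.thumbnail does exactly what you describe: it shrinks an image in place to fit inside a bounding box, preserves the aspect ratio, and never enlarges. A minimal sketch with placeholder bounds and filenames:

    from PIL import Image

    MAX_W, MAX_H = 800, 600   # your gallery's fixed dimensions

    img = Image.open("upload.jpg")
    # Scales down in place only if the image exceeds the bounds.
    img.thumbnail((MAX_W, MAX_H), Image.LANCZOS)
    img.save("upload_fit.jpg")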