Simulate lens flare and chromatic aberration using Python [closed]

I have a set of images that I need to use for training a network, and I want to simulate a lens flare effect and chromatic aberration on them. I have tried to find a suitable function in OpenCV, scikit-image, and other Python image libraries, but without success. How can I simulate these effects on my images? A rough idea or code would be useful. The images are in JPEG format.

It depends on what kind of lens flare you are trying to achieve. One option: create a hexagon mask and overlay multiple partially transparent instances of it between the start and end point of the flare axis. The hexagons should be at least slightly bigger toward the sun's direction and spaced at roughly equal distances from each other. The user could click the start and end points of that axis on the picture and use the mouse to rotate the axis, zoom in/out, and choose the number of flare elements to add. The overlay step is sketched below.
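A minimal sketch of that overlay idea with OpenCV and NumPy. The axis endpoints are hard-coded here rather than picked with the mouse, and the element sizes, tint, and per-element opacity are arbitrary values to tune:

```python
import cv2
import numpy as np

def add_lens_flare(img, start, end, n_elements=6):
    """Overlay semi-transparent hexagonal flare elements along the axis from
    `start` to `end` (pixel coordinates). Sizes, tint and opacity are
    illustrative guesses, not calibrated values."""
    out = img.copy()
    start = np.asarray(start, dtype=np.float32)
    end = np.asarray(end, dtype=np.float32)
    for i in range(n_elements):
        t = (i + 1) / (n_elements + 1)                  # roughly equal spacing along the axis
        center = (1 - t) * start + t * end
        radius = 20 + 60 * (1 - t)                      # slightly bigger toward the `start` ("sun") end
        angles = np.linspace(0, 2 * np.pi, 7)[:-1]      # 6 hexagon vertices
        pts = np.stack([center[0] + radius * np.cos(angles),
                        center[1] + radius * np.sin(angles)],
                       axis=1).astype(np.int32)
        overlay = np.zeros_like(out)
        cv2.fillConvexPoly(overlay, pts, (200, 240, 255))   # warm tint in BGR
        out = cv2.addWeighted(out, 1.0, overlay, 0.15, 0)   # ~15% opacity per element
    return out

img = cv2.imread("input.jpg")
flared = add_lens_flare(img, start=(100, 80), end=(500, 400))
cv2.imwrite("flared.jpg", flared)
```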

For chromatic aberration, I would split the RGB components, apply slightly different scaling factors to each channel, and merge them back. Depending on whether you want to simulate a flint- or crown-like effect, the factors should increase or decrease across the channels.
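A minimal sketch of that channel-scaling approach with OpenCV; the scale factors below are illustrative only, and reversing their direction flips between the two behaviours:

```python
import cv2
import numpy as np

def chromatic_aberration(img, r_scale=1.004, b_scale=0.996):
    """Scale the red and blue channels slightly differently about the image
    center and merge back; the scale factors are arbitrary example values."""
    h, w = img.shape[:2]
    b, g, r = cv2.split(img)

    def rescale(channel, s):
        # Resize one channel, then crop (if enlarged) or zero-pad (if shrunk)
        # back to the original size, keeping it centered.
        resized = cv2.resize(channel, None, fx=s, fy=s,
                             interpolation=cv2.INTER_LINEAR)
        rh, rw = resized.shape[:2]
        y0, x0 = abs(rh - h) // 2, abs(rw - w) // 2
        if s >= 1.0:
            return resized[y0:y0 + h, x0:x0 + w]
        padded = np.zeros((h, w), dtype=channel.dtype)
        padded[y0:y0 + rh, x0:x0 + rw] = resized
        return padded

    return cv2.merge([rescale(b, b_scale), g, rescale(r, r_scale)])

img = cv2.imread("input.jpg")
cv2.imwrite("aberrated.jpg", chromatic_aberration(img))
```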

Related

Trying to read these plates using OCR but they are blurry. Where do I start? [closed]

Hi, I am a student doing research at my university. This is my first time using computer vision (OpenCV) and I am fairly new to image preprocessing. I have these images of license plates and I would like to use easyOCR/pytesseract to read them. Currently all I have done is convert the images to grayscale and rotate them by a few degrees, but the reading results are very inconsistent. How do I improve that?
I have tried using kernels to sharpen the images, but the results seem fairly inconsistent too.
Here are some of the images, to give you a general idea of what they look like:
I would start with image enhancement. It's hard to tell exactly what is applicable, but here are some possible maneuvers:
As usual, recognition algorithms are not invariant to rotation, and every image seems to be geometrically distorted in a similar way. You can try to normalize the geometry with OpenCV's warpPerspective function and an appropriate transformation matrix; rotation is a subset of the transformations covered by a perspective transform. A sketch is shown below.
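For illustration, a minimal sketch of that normalization; the plate corner coordinates below are made up, and a real pipeline would get them from annotation or a detector:

```python
import cv2
import numpy as np

img = cv2.imread("plate.jpg")

# Hypothetical plate corners (top-left, top-right, bottom-right, bottom-left).
src = np.float32([[42, 55], [310, 40], [325, 130], [50, 150]])

# Target rectangle: a fronto-parallel plate of fixed size.
w, h = 320, 110
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

M = cv2.getPerspectiveTransform(src, dst)
normalized = cv2.warpPerspective(img, M, (w, h))
cv2.imwrite("plate_normalized.jpg", normalized)
```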
You can try advanced deblurring techniques such as a Wiener filter or deep learning. The point spread function seems to differ from image to image, which complicates the recovery.
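A sketch of Wiener deconvolution with scikit-image, assuming a crude uniform point spread function (the real PSF is unknown, which is exactly the hard part):

```python
import numpy as np
from skimage import color, io, restoration

img = color.rgb2gray(io.imread("plate.jpg"))

# A guessed 5x5 uniform PSF; the real blur kernel likely differs per image.
psf = np.ones((5, 5)) / 25

# unsupervised_wiener estimates the regularization itself and returns
# (deconvolved image, diagnostics).
deconvolved, _ = restoration.unsupervised_wiener(img, psf)
io.imsave("plate_deblurred.png",
          (np.clip(deconvolved, 0, 1) * 255).astype(np.uint8))
```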
There is a periodic signal in your images (vertical blue-white-blue stripes). It can possibly be attenuated by doing FFT -> removing the components at that specific wavelength -> inverse FFT, as sketched below.
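A rough sketch of that frequency-domain notch filter with NumPy; the band that is zeroed out is a guess and would have to be read off the magnitude spectrum of the real images:

```python
import numpy as np
import cv2

gray = cv2.imread("plate.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Forward FFT with the zero frequency shifted to the center.
F = np.fft.fftshift(np.fft.fft2(gray))

# Vertical stripes vary along x, so their energy sits at horizontal
# frequencies; suppress a band on both sides of the center column.
h, w = gray.shape
cx = w // 2
mask = np.ones_like(F)
mask[:, cx + 15:cx + 25] = 0   # band position/width: placeholder values
mask[:, cx - 25:cx - 15] = 0

filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
cv2.imwrite("plate_notched.jpg", np.clip(filtered, 0, 255).astype(np.uint8))
```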
Anyway, looking at your images, I am not sure the desired result will be easy to achieve without diving into the OCR pipeline itself.

Detecting the color of a tractor from a perspective view [closed]

I want to detect the color of tractors (in trucks). I trained a model to detect the tractor part as the ROI (region of interest), and I detect the ROI by drawing a rectangle around the tractor.
My question is: how can I detect the color of the cab/tractor? I need the method to be robust to differences in lighting conditions and weather.
An example of the truck can be found here.
The perspective view makes the problem challenging: since the ROI is a rectangle, part of the background gets into the ROI.
Edit: I used k-means with the number of clusters set to 2 on the example image, but the color it outputs suggests that conventional image processing techniques are not robust enough here.
Could you please tell me what method would be more accurate?
You could try color quantization on the rectangle. This reduces the number of colors to a small palette that can still accurately represent the image, and the usual algorithms return the palette sorted by most dominant color.
Hopefully there is more truck than background in the rectangle. You could also crop in by a margin if you think the truck is usually centered in the rectangle.
Here's an example using Python and scikit-learn: https://scikit-learn.org/stable/auto_examples/cluster/plot_color_quantization.html
Use a very small number of colors (4, say) in the expected palette; a sketch is shown below.
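A minimal sketch of that quantization with scikit-learn's KMeans, assuming the detected ROI is already saved to a file; the cluster count and crop margin are arbitrary starting points:

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

roi = cv2.imread("tractor_roi.jpg")

# Crop a margin so less background ends up in the pixel sample.
h, w = roi.shape[:2]
roi = roi[int(0.15 * h):int(0.85 * h), int(0.15 * w):int(0.85 * w)]

pixels = roi.reshape(-1, 3).astype(np.float32)

# Quantize to a small palette; 4 clusters is an arbitrary starting point.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)

# Sort palette colors by how many pixels fall into each cluster.
counts = np.bincount(kmeans.labels_, minlength=4)
palette = kmeans.cluster_centers_[np.argsort(counts)[::-1]].astype(np.uint8)
print("Dominant BGR colors, most frequent first:")
print(palette)
```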

PyOpenGL camera system [closed]

I'm confused about how the PyOpenGL camera works and how to implement it. Am I meant to rotate and move the whole world around the camera, or is there a different way?
I couldn't find anything that could help me, and I don't know how to translate C to Python.
I just need a way to transform the camera that helps me understand how it works.
To say it bluntly: there is no such thing as a "camera" in OpenGL (nor is there in DirectX, Vulkan, or any of the legacy 3D graphics APIs). The effect of a camera is expressed as a set of parameters that contribute to the final placement of geometry inside the viewport volume.
The sooner you understand that all current GPUs do is offer massively accelerated computational resources for setting the values of pixels in a 2D grid, where the regions of pixels changed are mere points, lines, or triangles on a 2D plane onto which they are projected from an arbitrarily dimensioned, abstract space, the better.
You're not even moving the world around the camera. Setting up transformations is really erecting the stage in which "the world" will appear in the first place. Any notion of a "camera" is an abstraction created by a higher-level framework, such as a third-party 3D engine or your own creation.
So instead of thinking in terms of a camera, which constrains your thinking, you should think about it this way:
What kind of transformations do I have to chain up, to give a tuple of numbers called a "position" an actual meaning, by letting this position turn up at a certain place on the visible screen?
You really ought to think that way, because that is what's actually happening.
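To make that concrete, here is a minimal NumPy sketch of the transformation chain such a "camera" abstraction boils down to. The look_at/perspective helpers are written here for illustration (following the gluLookAt/gluPerspective conventions), and the eye/target values are arbitrary examples:

```python
import numpy as np

def look_at(eye, target, up):
    """Build a view matrix: it transforms the whole scene so that `eye` ends
    up at the origin looking down -Z. This matrix effectively *is* the camera."""
    eye, target, up = (np.asarray(v, dtype=np.float32) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)
    s = np.cross(f, up)
    s /= np.linalg.norm(s)
    u = np.cross(s, f)
    view = np.identity(4, dtype=np.float32)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def perspective(fov_y_deg, aspect, near, far):
    """Standard perspective projection matrix (column-vector convention)."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4), dtype=np.float32)
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = (2 * far * near) / (near - far)
    m[3, 2] = -1.0
    return m

# The full chain applied to every vertex in the shader:
#   clip_pos = projection @ view @ model @ vec4(position, 1.0)
view = look_at(eye=(0, 2, 5), target=(0, 0, 0), up=(0, 1, 0))
projection = perspective(60.0, 16 / 9, 0.1, 100.0)
# With PyOpenGL these would typically be uploaded as shader uniforms via
# glUniformMatrix4fv(..., transpose=GL_TRUE, ...), since NumPy arrays are row-major.
```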

Comparing two images - Python, OpenCV [closed]

Which is the best way to compare two images from the same domain but with different content in Python: histograms or image quality functions?
I have two images of different scenes; the contents of the images are different, but both were taken in the morning.
I want to measure how related the two images are to each other. The important metric is to be able to say that both images were taken in the morning, for example, even if their contents differ.
Any idea how to do this?
There is no easy answer to your question. It depends on how you consider images similar or different, and that is a subjective measure that depends entirely on what you want to do with the information.
Anyway, for this kind of problem, OpenCV is your friend. Here are some ideas:
use histograms: cv2.calcHist https://docs.opencv.org/3.1.0/d1/db7/tutorial_py_histogram_begins.html
with histograms you can tell how bluish, greenish, or reddish an image is. You can compare whether two images fall in the same range (bin) of a specific color; this is a very common approach when detecting skin color. A comparison sketch follows after this list.
if a specific object appears in both images, use local features such as SIFT or SURF.
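As a sketch of the histogram idea, here is one way to compare two images by their hue/saturation distributions; the bin counts and the correlation metric are just reasonable defaults:

```python
import cv2

img1 = cv2.imread("scene_morning_1.jpg")
img2 = cv2.imread("scene_morning_2.jpg")

def hs_histogram(img):
    # 2D hue/saturation histogram in HSV space, normalized so image size
    # does not affect the comparison.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

similarity = cv2.compareHist(hs_histogram(img1), hs_histogram(img2),
                             cv2.HISTCMP_CORREL)
print(f"Histogram correlation (1.0 = identical distributions): {similarity:.3f}")
```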

Algorithm for Partial rectangle shape image recognition [closed]

I'm looking for an algorithm (pseudocode is OK, or any readable source code; the implementation will probably be in Python) to recognize a partial beam/perspective rectangle in a low-resolution image and give its angle from horizontal. The algorithm should be fast enough for real-time use at 30 Hz, so preferably something analytic or a convex optimization, and most importantly robust. The beam can be in any orientation and its width can vary, but the angle should be estimable to sub-degree accuracy. Robustness is the priority.
For now let's assume at least some of the beam is visible, but in the application the beam could be partially occluded.
Here is a sample of some example data with a true angle of 78.8253 degrees (it is binary data with noise).
I've tried a Gaussian blur, then a threshold, and then OLS in u-v image coordinates. This is a beginning, but not the way to go: the cut-off corners and edges bias the angle.
Does anyone know of a good, robust, and fast way to do this? Thanks.
Edit 1: Hough transform
Applying a Hough transform after Gaussian blur + threshold and averaging the top 5 peaks in the transform is a big improvement! But it still seems biased by the asymmetry in the cut-off. Are there any candidates that take this cut-away into account? (The Hough-based estimate is sketched below.)
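For reference, a minimal sketch of that Hough-based estimate with OpenCV; the blur, threshold, and accumulator parameters are placeholders to tune for the actual data:

```python
import cv2
import numpy as np

img = cv2.imread("beam.png", cv2.IMREAD_GRAYSCALE)

# Gaussian blur + threshold, as described above; values are placeholders.
blurred = cv2.GaussianBlur(img, (5, 5), 0)
_, binary = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)

# Standard Hough transform; OpenCV returns lines sorted by accumulator votes,
# strongest first, so the top entries correspond to the dominant directions.
lines = cv2.HoughLines(binary, rho=1, theta=np.pi / 360, threshold=30)

if lines is not None:
    # Each entry is (rho, theta), with theta the angle of the line normal;
    # the line itself lies at theta - 90 degrees from the image x-axis.
    # Plain averaging is only safe away from the 0/180 degree wrap-around.
    thetas = lines[:5, 0, 1]
    angle_deg = np.degrees(np.mean(thetas)) - 90.0
    print(f"Estimated beam angle from horizontal: {angle_deg:.2f} deg")
```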
