Prevent image stretch in fullscreen for PsychoPy - python

This is the first time I am using this software to create an experiment.
For my experiment I am presenting two images side by side. Ideally I would like to run the experiment in fullscreen, but when I set fullscr to True the images become stretched. How do I fix their aspect ratio so I can run the program in fullscreen without stretching the images?
I am using a MacBook Pro and the PsychoPy coder.
Here is my current code for the images:
scale = 0.7
faceRGB = visual.ImageStim(win, image='male.jpg',
                           mask=None,
                           pos=(0.0, 0.0),
                           size=(scale, scale))
faceRGBINV = visual.ImageStim(win, image='maleInv.jpg',
                              mask=None,
                              pos=(0.0, 0.0),
                              size=(scale, scale))
Furthermore, in my experiment one of the images is deliberately slightly compressed or stretched, and the participants have to choose the fatter face. This is already set up, and when run in a window the images appear normal; it is only in fullscreen mode that they become stretched to fit the monitor.

By default, PsychoPy uses 'norm' units, i.e. size normalized to the window dimensions. You may have a situation where you (1) change the size of the image and (2) the image just happens to have the correct proportions in the default 800 x 800 pixel window but appears stretched when you go fullscreen because your monitor has a different aspect ratio.
If you don't change the size of the image, PsychoPy maintains the correct aspect ratio, and scaling that size multiplicatively preserves the ratio, so that's an easy solution. E.g. add one line after initializing the ImageStim:
from psychopy import visual

scale = 0.7
win = visual.Window(fullscr=True)
faceRGB = visual.ImageStim(win, 'male.jpg')
faceRGB.size *= scale  # scale the image relative to its initial size
If you want to control size directly and not just proportionally, see this discussion on the users list. I suggested the following solution. Say you want to set the image size so that scale is the maximum length along either the x- or y-axis, scaling the other axis proportionally. Replace the last line above with this:
faceRGB.size *= scale / max(faceRGB.size)
Multiplying maintains the aspect ratio as above, and the right-hand side is the multiplication factor that makes the longest side equal to scale. Change max to min if you want this to apply to the minimum length instead of the maximum.
Note: you do not need to set pos=(0, 0) and mask=None, as these are the default values of those parameters.
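Putting the pieces together for the original two-image setup, here is a minimal sketch (the pos values are just illustrative offsets in 'norm' units):

from psychopy import visual

scale = 0.7
win = visual.Window(fullscr=True)
faceRGB = visual.ImageStim(win, 'male.jpg', pos=(-0.5, 0.0))       # left image
faceRGBINV = visual.ImageStim(win, 'maleInv.jpg', pos=(0.5, 0.0))  # right image
for stim in (faceRGB, faceRGBINV):
    stim.size *= scale / max(stim.size)  # longest side becomes scale, ratio preserved
faceRGB.draw()
faceRGBINV.draw()
win.flip()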

Related

Is it possible to calculate the real-world size of an object in an image without reference objects?

I have a picture of a human eye taken roughly 10 cm away using a mobile phone (no specifications regarding the camera). After some detection and contouring, I got 113 px as the Euclidean distance between the center of the detected iris and the outermost edge of the iris in the image. Dimensions of the image: 483x578 px.
I tried converting the pixels into mm by simply multiplying the number of pixels by the size of a pixel in mm, since 1 px is roughly equal to 0.264 mm. That gives 113 × 0.264 ≈ 29.8 mm, which would only be the proper length if the image were at a 1:1 ratio with the real eye, which is not the case here.
Edit:
Device used: OnePlus 7T
Field of view = 117 degrees
Aperture = f/2.2
Distance photo was taken = 10 cm (approx.)
Question:
Is there a good way to find the real radius of this particular eye with the information I have gathered through processing thus far, without including a reference object in the image?
P.S. The actual HVID of the volunteer's iris is 12.40 mm, measured using a Sirius (a high-end device for measuring iris diameter; I'm trying to reproduce the same measurement using Python and OpenCV).
After months, a ton of research, and lots of trial and error, I was able to come up with a result. This is not the most ideal answer, but it gave me the expected results with decent precision.
Simply put: in order to measure object size/distance from an image, we need multiple parameters. In my case, I was trying to measure the diameter of an iris from a smartphone camera.
To make that possible, we need to know the following details prior to the calculation:
1. The size of the physical sensor (height and width, usually in mm).
(The specs of the camera inside the smartphone can be found on websites on the internet, but you need to know the exact brand and model of the phone.)
Note: You cannot use random values for these, otherwise you will get inaccurate results. Every step/constraint must be considered carefully.
2. The size of the image taken (pixels).
Note: The size of the image can easily be obtained using img.shape, but make sure the image is not cropped. This method relies on the total width/height of the original smartphone image, so any modifications/inconsistencies will produce inaccurate results.
3. The focal length of the physical sensor (mm).
Note: Info regarding the focal length of the sensor can also be found on the internet, and again random values must not be used. Make sure you take images with the autofocus feature disabled so the focal length is preserved. If autofocus is on, the focal length will be constantly changing and the results will be all over the place.
4. The distance at which the image is taken (very important).
Note: As Christoph Rackwitz said in the comments, the distance from which the image is taken must be known and must not be arbitrary; plugging in a guessed number will always result in inaccuracy. Make sure you properly measure the distance from sensor to object using some sort of measuring tool. There are depth-estimation algorithms on the internet, but they are inaccurate in most cases and need to be recalibrated after every single try. They are an option if you don't have a setup for taking consistent photos, but inaccuracies are inevitable, especially with objects like the iris that require medical precision.
Once you have gathered all this "proper" information, the rest is to plug it into two very simple equations derived from similar triangles:
Object height/width on sensor (mm) = sensor height/width (mm) × object height/width (px) / image height/width (px)
Real object height/width (units) = distance to object (units) × object height/width on sensor (mm) / focal length (mm)
In the first equation, you must decide which axis you are measuring along. For instance, if the image is taken in portrait and you are measuring the width of the object in the image, then use the width of the image in pixels and the width of the sensor in mm.
The image height/width in pixels is simply the size of the image.
You must also obtain the object's size in pixels by some means.
If you are taking the image in landscape, make sure you pass the correct width and height.
Equation 2 is pretty simple as well.
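As a minimal sketch of the two equations in Python (all numbers below are placeholders, not real OnePlus 7T specs; look up the actual sensor values for your phone):

def real_object_size_mm(sensor_mm, image_px, object_px, focal_mm, distance_mm):
    # Equation 1: project the object onto the physical sensor
    object_on_sensor_mm = sensor_mm * object_px / image_px
    # Equation 2: similar triangles from the sensor plane to the object plane
    return distance_mm * object_on_sensor_mm / focal_mm

# Hypothetical example: 5.6 mm sensor width, 4000 px wide uncropped image,
# object spanning 226 px, 4.5 mm focal length, shot from 100 mm away.
print(real_object_size_mm(5.6, 4000, 226, 4.5, 100))  # ~7.0 mm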
Things to consider:
No magnification (digital magnification can destroy any depth info)
No autofocus (already explained)
No cropping/editing of the image size/resizing (already explained)
No image skewing (rotating the image can make it unusable)
Do not substitute random values for any of these inputs (golden advice)
Do not tilt the camera while taking images (tilting distorts the image, so the object's height/width will be altered)
Make sure the object and the camera are exactly in line
Don't use the EXIF data of the image (the depth information in EXIF data is absolute garbage, since it is not accurate at all; do not rely on it)
Things I'm still unsure about:
Lens distortion / Manufacturing defects
Effects of field of view
Perspective Foreshortening due to camera tilt
Depth field cameras
DISCLAIMER: There are multiple ways to solve this problem; I chose this method, and I highly recommend you explore more and see what you can come up with. You can basically extend this idea to measure pretty much any object a normal smartphone can photograph.
(Please don't try to measure the size of an amoeba with this. It simply won't work, but you can certainly use some of the advice I have given to your advantage.)
If you have cool ideas or issues with my answer, please feel free to let me know; I would love to have a discussion. Feel free to correct me if I have made any mistakes or misunderstood any of these concepts.
Final note:
No matter how hard you try, you cannot make something like a smartphone work and behave like a camera sensor specifically designed for measurement purposes. A smartphone can never beat those, but we can manipulate the smartphone camera to achieve similar results up to a certain degree. Keep this in mind; I learnt it the hard way.

Why is TKinter restricted to a specific resolution?

I have been using an edited version of the graphics.py library by John Zelle, which is based on Tkinter, for some time now, but this question has always confused me:
Why is the resolution (pixels per unit) of the Tkinter window lower than my computer's?
Now what I mean by this is: if I draw a point on the window with something like the following (an illustrative graphics.py snippet):
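from graphics import GraphWin, Point

win = GraphWin('point test', 400, 400)  # window size is arbitrary here
Point(200, 200).draw(win)               # draw a single point
win.getMouse()                          # wait for a click before closing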
the pixel is VERY clearly visible. However, I know for a fact that my computer can display higher resolutions (obviously). When I use vector-graphics editing software I can't make out individual points on the window, so why can I here? However much I increase the resolution of the window, the pixel size remains the same.
This is very different from vector graphics: there, a curve looks like a curve, not a set of anti-aliased pixels. So why is this? Why can't I draw smaller pixels on a Tkinter window?
The same thing happens with PNG images: why can't I export at a higher resolution so that I don't see any of the aliasing? (Yes, the file size would increase, but why isn't the option even available?) Why is this a limitation here when it clearly is not with vector graphics?
Vector graphics can be zoomed in on infinitely, but that is not my point. My question is: if my computer's screen is capable of displaying far higher resolutions than what I get in Tkinter, why can't I display them?

Anti-aliasing of random dot stereograms

I recently completed some Python (2.7) code for generating random dot stereograms based on this paper. The output is fairly good, though I have noticed that, even with a smooth gradient in the depth map, the output stereogram lacks smooth gradients and instead shows discrete levels of depth. I believe this is due to the DPI chosen when generating the image. While the depth detail can be increased by increasing the DPI, this becomes impractical, as the convergence point becomes more difficult to reach.
Here are two examples. First at 75 DPI and second at 175 DPI. On the 75 DPI image, distinct "triangles" of depth can be seen. In the 175 DPI image, these are less pronounced but the guidance dots at the bottom of the image are further apart, and therefore viewing the 3D image is more difficult.
I'm looking to modify my current code to anti-alias the 3D image in order to smooth out the gradients even at a lower DPI. I have tried applying SSAA to the depth map and pattern, generating the stereogram, and then reducing the image size again with an anti-aliasing filter. However, this seems to confine the stereogram to the left of the image. For example, if I make the image 4 times bigger, the stereogram is limited to the left-hand quarter of the image; the rest is just random noise and cannot be viewed. How would I go about anti-aliasing the image hidden in the stereogram? My code is almost the same as the algorithm described in the paper, so an anti-aliasing approach based on that would be perfect.
The problem I was having, with the stereogram being confined to the left of the image, was caused by not extending the "same" array to match the larger depth map. This caused everything beyond the original length of the depth map to be randomly generated noise.
After solving this, a second problem arose: the 3D image was distorted by the anti-aliasing, causing more gradient issues than it solved. My solution was to increase the DPI setting in the code by the same factor. For example, if I increase the size of the depth map 4x, the stereogram must be generated with a DPI four times greater (300 rather than 75). When scaled down again, this produced excellent results.
This image uses 2x SSAA, making the gradients comparable with the 175 DPI image from the question, but with a much easier convergence point.
This image uses 4x SSAA, and I find the jaggies barely visible at all. The noise here becomes a lot more blurred and the general colour of the image becomes quite grey. I have found this effect can be avoided by pre-generating the noise and scaling it up by the same AA factor, as demonstrated in the next image.
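A minimal sketch of the workflow, assuming a generate_stereogram(depth_image, dpi) function standing in for the paper's algorithm, and using PIL for the scaling:

from PIL import Image

AA = 4         # supersampling factor
BASE_DPI = 75  # DPI of the final, downscaled stereogram

depth = Image.open('depth_map.png')
w, h = depth.size
# Upscale the depth map (and, ideally, a pre-generated noise pattern) by AA
depth_big = depth.resize((w * AA, h * AA), Image.ANTIALIAS)
# Generate at a DPI increased by the same factor (hypothetical function)
stereo_big = generate_stereogram(depth_big, dpi=BASE_DPI * AA)
# Downscale with an anti-aliasing filter back to the original size
stereo = stereo_big.resize((w, h), Image.ANTIALIAS)
stereo.save('stereogram.png')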

How to display a PDF in its true scale with Poppler?

I am getting confused as to how to display a PDF document in its true scale, i.e. scale = 100%.
NB: I am using python-poppler-qt4.
Poppler-qt4 provides a method to get the true size of the PDF in points:
from popplerqt4 import Poppler

document = Poppler.Document.load('mypdf.pdf')
page = document.page(0)
size = page.pageSize()  # returns a QSize object
Then to render the page into a QImage, one should provide the resolution of the graphics device, in dots per inch (DPI):
image = page.renderToImage(72, 72)
Now, since the natural size of the document is given in points (i.e. 72 per inch) and the renderer requires dots per inch, can I just assume that the document is at its natural size when rendered at 72 DPI? Or are dots and points two different measures? If I am wrong, what is the solution?
The points in a PDF file are physical units; you can measure them with a ruler. The dots (pixels) in the image are virtual units, and the connection between them is made through the resolution factor. When you move content from vector space to raster space, you decide the relation between points and pixels (the resolution used for the conversion); it is up to your application to decide what 100% means.
Most applications use the DPI of the screen as the reference for 100% scale. On Windows this usually means 96 DPI: one inch from your PDF file is rendered as 96 pixels on the screen. Adobe Reader lets you set your own resolution for 100% scale; by default it is 110 DPI.
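So, as a minimal sketch: to show the page at 100% on a typical 96 DPI screen, render at the screen's DPI rather than at 72 (the 96 here is an assumption; query the real value, e.g. via QWidget.logicalDpiX()):

from popplerqt4 import Poppler

document = Poppler.Document.load('mypdf.pdf')
page = document.page(0)
screen_dpi = 96  # assumed; use the actual screen DPI reported by your toolkit
image = page.renderToImage(screen_dpi, screen_dpi)
# The raster size is pageSize() scaled by screen_dpi / 72 in each dimension.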

Zooming into a Clutter CairoTexture while re-drawing

I am using python-clutter 1.0
My question in the form of a challenge
Write code to allow zooming in on a CairoTexture actor, by pressing a key, in steps such that at each step the actor can be redrawn (by cairo) so that the image remains high-res but still scales as expected, without resizing the actor.
Think of something like Inkscape and how you can zoom into the vectors: the vectors remain clean at any magnification. Put a path (a bunch of cairo line_to commands, say) onto a CairoTexture actor and then allow the same trick to happen.
More detail
I am aiming at a small SVG editor which uses groups of actors, each actor devoted to one path. I 'zoom' by using SomeGroup.set_depth(z) and then making z bigger/smaller. All fine so far. However, the closer the actor(s) get to the camera, the more the texture is stretched to fit their new apparent size.
I can't seem to find a way to get Clutter to do both of the following:
Leave the actor's actual size static (i.e. what it started as).
Swap out its underlying surface for a larger one (on zooming in) that I can then redraw the path onto (using a cairo matrix to perform the scaling of the context).
If I use set_size or set_surface_size, the actor gets larger, which is not intended. I only want its surface (underlying data) to get larger.
(I'm not sure of the terminology for this; mipmapping, perhaps?)
Put another way: a polygon is getting larger, increase the size of its texture array so that it can map onto the larger polygon.
I have even tried an end-run around Clutter by keeping a second surface (using pycairo) that I re-create at the apparent size of the actor (get_transformed_size), then using Clutter's set_from_rgb_data to point the actor at my second surface, forcing a resize of the surface but not of the actor's dimensions.
The problems with this are that (a) Clutter ignores the new size and only draws into the old width/height, and (b) the RGBA vs ARGB32 mismatch causes a colour meltdown.
I'm open to any alternative ideas, I hope I'm standing in the woods missing all the trees!
Well, despite all my tests and hacks, it was right under my nose all along.
Thanks to Neil on the clutter-project list, here's the scoop:
CT = SomeCairoTextureActor()
# Record the old size, once:
old_width, old_height = CT.get_size()

# Then, inside the zoom loop:
# ... do stuff to the depth of CT (or its parent) ...
# Get the apparent width and height (absolute size in pixels)
appr_w, appr_h = CT.get_transformed_size()
# Make a new surface at the new size
CT.set_surface_size(appr_w, appr_h)
# Crunch the actor back down to the old size,
# leaving the texture surface larger
CT.set_size(old_width, old_height)
# ... and loop back again
"The surface size and the size of the actor don't have to be the same. The surface size is just, by default, the preferred size of the actor. You can override the preferred size by just setting the size on the actor. If the size of the actor is different from the surface size then the texture will be squished to fit in the actor size (which I think is what you want)."
Nice to put this little mystery to bed. Thanks, clutter list!
