Embedding windows in GUI - python

I am trying to build an app that does two things:
Get a live feed from a webcam and display it using OpenCV (I tried an IP camera but gave up; it's still not working).
Plot a chart based on the video input.
The webcam feed is working; I am able to display it using imshow() and namedWindow().
The chart I have made using Tkinter.
I want to put these two outputs into a single frame. Is it possible to do so, and what do I use to embed them in a single window?
Please note I am using Python and developing on Windows.

You can combine two or more output windows into a single output window using the NumPy stack concept.
Reference links:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.hstack.html
http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html#numpy.vstack
Sample code:
import cv2
import numpy as np

# Load the two images to be combined.
img1 = cv2.imread('Bird1.jpg')
img2 = cv2.imread('Bird2.jpg')

# Stack them horizontally (side by side); use np.vstack to stack vertically.
img_stack = np.hstack((img1, img2))

cv2.imshow('Image Stack', img_stack)
cv2.waitKey(0)
cv2.destroyAllWindows()
Note:
You can combine any number of output windows into a single one. To do this, the input images must have the same number of channels, and the dimension along which you stack must match (the same height for np.hstack, the same width for np.vstack).
"Channels" means that if one image is in RGB mode, all images should be in RGB mode; you cannot combine an RGB image and a grayscale image into a single window.
Like images, you can also stack video frames, as in the sketch below.
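For example, a minimal sketch of stacking two live feeds side by side (the second source path is a placeholder; the second frame is resized to the first frame's size so np.hstack works):
import cv2
import numpy as np

cap1 = cv2.VideoCapture(0)             # default webcam
cap2 = cv2.VideoCapture('video.mp4')   # hypothetical second source

while True:
    ok1, frame1 = cap1.read()
    ok2, frame2 = cap2.read()
    if not (ok1 and ok2):
        break
    # Make the second frame the same size as the first so the stack is valid.
    frame2 = cv2.resize(frame2, (frame1.shape[1], frame1.shape[0]))
    cv2.imshow('Video Stack', np.hstack((frame1, frame2)))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap1.release()
cap2.release()
cv2.destroyAllWindows()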


Is there a way to increase the size of the background image in turtle?

I am trying to make a game in Python 3 which requires a background image. I already have the image. The problem is that my image looks small compared to the big screen. I don't want to change the size of the screen, because then I would have to redo all the coordinates of the other objects on the screen. Is there any way I can increase the size of the background image?
P.S. I'm using Python 3 and VS Code.
Thanks in advance!
This is a picture of what the small image looks like.
In order to increase the size of the image, you would need to increase its resolution.
Assuming that you are using Windows, open the PNG with Microsoft Photos and click on the three horizontal dots at the top right.
A dropdown menu will open; press the third option from the top, labeled "Resize".
After this, press "Define custom dimensions" and you can set the dimensions of the image as you like.
Then simply save the resized image and use it in your project.
You can also do this using the cv2 library:
import cv2

image = cv2.imread("image.png")

# A factor of 1.5 enlarges the image by 50% in each dimension.
scale_factor = 1.5
width = int(image.shape[1] * scale_factor)
height = int(image.shape[0] * scale_factor)
dimension = (width, height)

# INTER_AREA is best suited for shrinking; for enlarging,
# INTER_CUBIC or INTER_LINEAR usually gives better results.
resized = cv2.resize(image, dimension, interpolation=cv2.INTER_CUBIC)
print(resized.shape)
cv2.imwrite("output.png", resized)
Or you can use the PIL (Pillow) library as well:
from PIL import Image
image = Image.open("image.png")
# Resize the pixel dimensions (changing dpi metadata alone does not enlarge the image).
scale = 1.5
resized = image.resize((int(image.width * scale), int(image.height * scale)))
resized.save("output.png")
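Either way, once you have the enlarged file, a minimal sketch of using it as the turtle background (assuming the window size from your game is unchanged and the enlarged file is named output.png):
import turtle

screen = turtle.Screen()
screen.setup(width=800, height=600)   # keep whatever window size your game already uses
screen.bgpic("output.png")            # the enlarged image produced above
turtle.done()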

How to save grayscale image in Python?

I am trying to save a grayscale image using matplotlib's savefig(). I find that the PNG file saved by savefig() is a bit different from the image shown while the code runs: the displayed image contains more detail than the saved figure.
How can I save the plot so that all details are kept in the output image?
My code is given below:
import cv2
import matplotlib.pyplot as plt

plt.figure(1)
img_DR = cv2.imread('image.tif', 0)     # load as grayscale
edges_DR = cv2.Canny(img_DR, 20, 40)    # edge map
plt.imshow(edges_DR, cmap='gray')
plt.savefig('DR.png')
plt.show()
The input file ('image.tif') can be found here.
Following is the output image which is generated when the code is running:
Below is the saved image:
Although both images show the same picture, they are slightly different; a close look at the circular periphery of the two images shows the difference.
Save the actual image to file, not the figure. The DPI between the figure and the actual created image from your processing will be different. Since you're using OpenCV, use cv2.imwrite. In your case:
cv2.imwrite('DR.png', edges_DR)
Use the PNG format, as JPEG is lossy and would thus reduce quality in order to keep file sizes small. If accuracy is the key here, use a lossless compression standard; PNG is one example.
If you are somehow opposed to using OpenCV, Matplotlib has an equivalent image writing method called imsave which has the same syntax as cv2.imwrite:
plt.imsave('DR.png', edges_DR, cmap='gray')
Note that I am enforcing the colour map to be grayscale for imsave as it is not automatically inferred like how OpenCV writes images to file.
Since you are using cv2 to load the image, why not also use it to save it?
I think the command you are looking for is:
cv2.imwrite('gray.jpg', gray_image)
Using a DPI that matches the image size seems to make a difference.
The image is width=2240 and height=1488 pixels (img_DR.shape). Using fig.get_size_inches() I see that the figure size in inches is array([7.24, 5.34]). So an appropriate dpi is about 310, since 2240/7.24 = 309.4 and 1488/5.34 = 278.65.
Now I save with plt.savefig('DR.png', dpi=310), and the saved figure keeps the detail of the displayed one.
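Put together, a sketch of that calculation (reusing the variable names from the question rather than hard-coding 310):
import cv2
import matplotlib.pyplot as plt

img_DR = cv2.imread('image.tif', 0)
edges_DR = cv2.Canny(img_DR, 20, 40)

fig = plt.figure(1)
plt.imshow(edges_DR, cmap='gray')

# Choose a DPI so the saved figure has roughly the same pixel width as the array.
height_px, width_px = edges_DR.shape
width_in, height_in = fig.get_size_inches()
plt.savefig('DR.png', dpi=int(round(width_px / width_in)))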
One experiment to do would be to choose a high enough DPI, calculate height and width of figure in inches, for example width_inch = width_pixel/DPI and set figure size using plt.figure(figsize=(width_inch, height_inch)), and see if the displayed image itself would increase/decrease in quality.
Hope this helps.

How can I display an image while my code keeps running in the background?

I am taking part in a project in which we are making a sudoku solver. I want to display the image of the solved sudoku grid on the screen while our drawing table draws the solution on the paper grid.
But I can't find a way to display an image while my code keeps running.
I have looked into, I think, all of the OpenCV and matplotlib.pyplot functions for displaying images, but every time the code stops when the image is displayed and only continues once the image is closed (plt.show() or cv2.waitKey()).
So if anyone has an idea of a way to display an image while the Python code keeps running, I'd be glad to hear it.
Thanks
The PIL/Pillow Image.show() method will leave your image showing on the screen and your code will continue to run.
If you have a black and white image in a Numpy/OpenCV array, you can make it into a PIL Image and display it like this:
from PIL import Image
Image.fromarray(NumpyImg).show()
If your image is colour, you'll need to go from BGR to RGB, either using cv2.cvtColor(...BGR2RGB..) or by reversing your 3rd channel, something like this (untested):
Image.fromarray(NumpyImg[:,:,::-1]).show()
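For completeness, a small sketch of the cvtColor route (the file name is a placeholder; Image.show() opens an external viewer and returns immediately, so the script carries on):
import cv2
from PIL import Image

frame = cv2.imread("frame.jpg")                # BGR array from OpenCV
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # convert to RGB for PIL
Image.fromarray(rgb).show()                    # non-blocking display

# ...the rest of the code keeps running while the image stays on screen...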

How to show details of loaded data in PyCharm? Does PyCharm have a workspace?

I am quite new to Python and PyCharm.
I am using PyCharm for coding now.
In MATLAB, whenever I load an image or some data, I can easily see the image data (i.e. the pixel intensity values), and whenever I load a file I can see its details in my workspace.
Can anyone tell me if there is any way to do such a thing in PyCharm?
To explain more:
When I run something in MATLAB, I have a workspace that shows all my variables, etc.
Now assume this is my Python code:
from PIL import Image
from logistic_sgd import load_data
Img = Image.open("fruits.jpg")
Is there any way that I can see the values of my Img and load_data?
Probably the simplest way to do this within PyCharm would be to use the integrated Python console. You can open it via "Tools" -> "Python Console". Then, if you want to view the pixel data, you could use a technique like this:
from PIL import Image

Img = Image.open("fruits.jpg")
pixels = list(Img.getdata())    # flat list of (R, G, B) tuples
width, height = Img.size
# Reshape the flat list into rows so it can be indexed as pixels[row][col].
pixels = [pixels[i * width:(i + 1) * width] for i in range(height)]
Now you can view the (R, G, B) pixel values in the pixels matrix, for example via pixels[0][0]. I know this doesn't provide as clear a view as the data view you get in a MATLAB workspace, but you can use it to examine the pixel data.
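If NumPy is available, a similar sketch that gives a quick, MATLAB-like overview of the data in the console:
import numpy as np
from PIL import Image

Img = Image.open("fruits.jpg")
arr = np.asarray(Img)        # shape (height, width, 3) for an RGB image
print(arr.shape, arr.dtype)  # quick summary, similar to workspace info in MATLAB
print(arr[0, 0])             # R, G, B values of the top-left pixel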
You can also look into using a Jupyter notebook: load your image there and examine whatever pixel data you are interested in.

Image display and synchronised face capture in Python

I am trying to write a program for image display and face capture. I am somewhat new to Python and OpenCV.
Please find the details below:
I am running Python 2.7.5 (win32) on Windows XP.
The OpenCV (cv2) version is 3.0.0.
For the program:
Images from a predefined folder need to be displayed in random sequence, each for a fixed time of 500 milliseconds.
The gap between images should be covered by a black screen, shown for a random interval of between 1000 and 1500 milliseconds.
The viewer's face needs to be captured via webcam once per displayed image, midway through the display, i.e. at the 250 millisecond mark. The captured faces should be stored in a new folder created each time the program is run.
I have written the code below, but I am not getting the sequence right, nor the synchronised face capture with Haar cascade integration (which is probably required).
I also read somewhere that a 'camera index' could be involved in this, possibly with the value zero assigned to it. What exactly is its role?
Please assist with this. Thanks in advance.
import cv2
from matplotlib import pyplot as plt

# Loads and shows a single image; waitKey(0) blocks until a key is pressed.
img = cv2.imread('C:\\Sourceimagepath.jpg', 1)
cv2.startWindowThread()
cv2.namedWindow("Demo")
cv2.imshow("Demo", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
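For reference, one way the described sequence could be arranged. This is a hedged sketch, not a tested solution: the image folder, output folder name and cascade file path are placeholders, and camera index 0 in cv2.VideoCapture(0) simply selects the default webcam, which is the role of the camera index mentioned above.
import cv2
import glob
import os
import random
import time

image_files = glob.glob('C:\\Images\\*.jpg')           # hypothetical source folder
random.shuffle(image_files)                             # random display order

out_dir = 'faces_' + time.strftime('%Y%m%d_%H%M%S')    # new folder per run
os.makedirs(out_dir)

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cam = cv2.VideoCapture(0)                               # camera index 0 = default webcam

cv2.namedWindow('Demo')
black = None

for i, path in enumerate(image_files):
    img = cv2.imread(path)
    if black is None:
        black = img * 0                                 # black frame of the same size

    cv2.imshow('Demo', img)
    cv2.waitKey(250)                                    # first half of the 500 ms display

    ok, frame = cam.read()                              # capture the viewer at the midpoint
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            cv2.imwrite(os.path.join(out_dir, 'face_%d.png' % i),
                        frame[y:y + h, x:x + w])

    cv2.waitKey(250)                                    # second half of the display

    cv2.imshow('Demo', black)                           # black gap between images
    cv2.waitKey(random.randint(1000, 1500))

cam.release()
cv2.destroyAllWindows()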
