Here are my thoughts:
1) Snapshot a given region of the screen, e.g. (100, 100, 80, 60), and save the result as an image.
2) Process the image with the OpenCV Python interface.
I'm new to Python and wonder if this is a good solution; specifically, I wonder how to take the snapshot with Python.
Thanks,
It's fairly simple using CGRectMake in Apple's CoreGraphics API:
CG.CGRectMake(x, y, w, h)
This lets you define the horizontal and vertical position, plus the width and height, respectively.
Code:
#!/usr/bin/python

import Quartz
import LaunchServices
from Cocoa import NSURL
import Quartz.CoreGraphics as CG

def screenshot(path, dpi, region=None):
    if region is None:
        region = CG.CGRectInfinite

    # Capture the requested region of the screen as a CGImage.
    image = CG.CGWindowListCreateImage(
        region,
        CG.kCGWindowListOptionOnScreenOnly,
        CG.kCGNullWindowID,
        CG.kCGWindowImageDefault)

    # Write the image to disk as a PNG with the requested DPI.
    url = NSURL.fileURLWithPath_(path)
    dest = Quartz.CGImageDestinationCreateWithURL(
        url,
        LaunchServices.kUTTypePNG, 1, None
    )
    prop = {
        Quartz.kCGImagePropertyDPIWidth: dpi,
        Quartz.kCGImagePropertyDPIHeight: dpi,
    }
    Quartz.CGImageDestinationAddImage(dest, image, prop)
    Quartz.CGImageDestinationFinalize(dest)

spec = (0, 0, 800, 600)                   # x, y, w, h
path = '/path/to/screenshot_region.png'   # save path
area = CG.CGRectMake(*spec)               # build the CGRect
screenshot(path, dpi=72, region=area)     # call the function
To use, just call the function:
screenshot(path, dpi=72, region=area)
*dpi sets the image output resolution; leave out the region argument for full-screen capture. As for the OpenCV portion: I don't use it often enough to provide anything at this time.
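That said, once the PNG is on disk, loading it into OpenCV is straightforward. A minimal sketch, assuming the opencv-python package is installed and reusing the path from above (the greyscale and edge-detection steps are just illustrative):

import cv2

img = cv2.imread('/path/to/screenshot_region.png')   # OpenCV loads as BGR
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)         # convert to greyscale
edges = cv2.Canny(gray, 100, 200)                    # e.g. run an edge detector
cv2.imwrite('/path/to/screenshot_edges.png', edges)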
I've created a simple Python application that uses the OpenCV (cv2) computer vision library to recognise a template image on a webpage.
I give the application a template image that it needs to recognise on the source image. In this case, the source image is a screenshot of the website www.google.com and the template image is the Google search button.
Template image
I thought the application worked at first, but it's drawing the rectangle completely in the wrong place on the input (source) image. I've added a picture below of where the application located the template image.
Result
Here's the source code.
Main Application Source
import cv2
import numpy as np
from threading import Event

# Imports the snippet relies on (Selenium, webdriver-manager).
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager

class Automate:
    def __init__(self):
        chrome_options = Options()
        chrome_options.add_argument("kiosk")
        self.driver = webdriver.Chrome(ChromeDriverManager("93.0.4577.63").install(), options=chrome_options)
        #self.driver = webdriver.Chrome(executable_path='./chromedriver', options=chrome_options)
        self.screenShot = None
        self.finalImage = None

    def open_webpage(self, url):
        print(f"Open webpage {url}")
        self.driver.get(url)

    def close_webpage(self):
        Event().wait(5)
        self.driver.close()
        print("Closing webpage")

    def snap_screen(self):
        print("Capturing screen")
        self.screenShot = "screenshot.png"
        self.driver.save_screenshot(self.screenShot)
        print("done.")

    def check_match(self, image, template):
        # Convert both images to greyscale.
        src = cv2.cvtColor(cv2.imread(image), cv2.COLOR_BGR2GRAY)
        temp = cv2.cvtColor(cv2.imread(template), cv2.COLOR_BGR2GRAY)
        cv2.imshow("out", temp)
        cv2.waitKey(0)

        height, width = src.shape
        H, W = temp.shape

        result = cv2.matchTemplate(src, temp, cv2.TM_CCOEFF_NORMED)
        minVal, maxVal, minLoc, maxLoc = cv2.minMaxLoc(result)

        # Best match's top-left corner; the box extends by the template size.
        location = maxLoc
        bottomRight = (location[0] + W, location[1] + H)

        src2 = cv2.imread(image)
        cv2.rectangle(src2, location, bottomRight, (0, 0, 255), 5)
        cv2.imshow("output", src2)
        cv2.waitKey(0)
        cv2.destroyAllWindows()

def main():
    url = "http://www.google.com"
    auto = Automate()
    auto.open_webpage(url)
    auto.snap_screen()
    auto.close_webpage()

    match_image = "images/templates/google-button.png"
    # Match screenshot with template image.
    auto.check_match(
        image=auto.screenShot,
        template=match_image
    )
I'd appreciate any help or advice on how to solve this issue.
Update
Following the advice given by user zteffi, I resized my template image to the correct image dimensions. After doing this, the match template function works as expected.
You want to make sure that your template image is as close as possible to the actual size of the element you want to locate in the base image. In my case, this was around 150 x 150 or 200 x 200, which made the button much easier to find.
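For anyone hitting the same problem, a minimal sketch of that fix, assuming the template file name from above and a hypothetical 150 x 150 target size:

import cv2

# Resize the template so it roughly matches the button's on-screen size
# before running cv2.matchTemplate (the sizes here are placeholders).
template = cv2.imread("images/templates/google-button.png")
template = cv2.resize(template, (150, 150), interpolation=cv2.INTER_AREA)
cv2.imwrite("images/templates/google-button-resized.png", template)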
I was able to get the current image from a Thorlabs uc480 camera using the Instrumental library. My issue is adjusting the parameters for grab_image: I can change cx and left to any value and still get an image, but cy and top only work if cy=600 and top=300. The purpose is to create a GUI so that the user can select values for these parameters to zoom in or out of an image.
Here is my code
import instrumental
from instrumental.drivers.cameras import uc480
import matplotlib.pyplot as plt

paramsets = instrumental.list_instruments()
cammer = instrumental.instrument(paramsets[0])

plt.figure()
framer = cammer.grab_image(timeout='1s', copy=True, n_frames=1, exposure_time='5ms',
                           cx=640, left=10, cy=600, top=300)
plt.pcolormesh(framer)
The above code does not give an image if I choose cy=600 and top=10. Is there a particular set of values that must be used for these parameters? And how can I get an image of the full sensor size?
Thorlabs has a Python programming interface available as a download on their website. It is very well documented, and can be installed locally via pip.
Link:
https://www.thorlabs.com/software_pages/ViewSoftwarePage.cfm?Code=ThorCam
Here is an example of a simple capture algorithm that might help get you started:
import numpy as np

from thorlabs_tsi_sdk.tl_camera import TLCameraSDK
from thorlabs_tsi_sdk.tl_mono_to_color_processor import MonoToColorProcessorSDK
from thorlabs_tsi_sdk.tl_camera_enums import SENSOR_TYPE

# open the TLCameraSDK dll
with TLCameraSDK() as sdk:
    cameras = sdk.discover_available_cameras()
    if len(cameras) == 0:
        print("Error: no cameras detected!")

    with sdk.open_camera(cameras[0]) as camera:
        #camera.disarm()  # ensure any previous session is closed

        # set up the camera for continuous acquisition
        camera.frames_per_trigger_zero_for_unlimited = 0
        camera.image_poll_timeout_ms = 2000  # 2 second timeout
        camera.arm(2)

        # save the image width and height for color processing
        image_width = camera.image_width_pixels
        image_height = camera.image_height_pixels

        # initialize a mono-to-color processor if this is a color camera
        is_color_camera = (camera.camera_sensor_type == SENSOR_TYPE.BAYER)
        mono_to_color_sdk = None
        mono_to_color_processor = None
        if is_color_camera:
            mono_to_color_sdk = MonoToColorProcessorSDK()
            mono_to_color_processor = mono_to_color_sdk.create_mono_to_color_processor(
                camera.camera_sensor_type,
                camera.color_filter_array_phase,
                camera.get_color_correction_matrix(),
                camera.get_default_white_balance_matrix(),
                camera.bit_depth
            )

        # begin acquisition
        camera.issue_software_trigger()

        # get the next frame
        frame = camera.get_pending_frame_or_null()

        # if the frame is null, keep polling until a frame arrives
        # or until max_attempts is reached
        frame_attempts = 0
        max_attempts = 10
        while frame is None:
            frame = camera.get_pending_frame_or_null()
            frame_attempts += 1
            if frame_attempts == max_attempts:
                raise TimeoutError("Timeout was reached while polling for a frame, program will now exit")

        image_data = frame.image_buffer
        if is_color_camera:
            # transform the raw image data into RGB color data
            color_data = mono_to_color_processor.transform_to_24(image_data, image_width, image_height)
            save_data = np.reshape(color_data, (image_height, image_width, 3))

        camera.disarm()
You can also process the image after capture with the PIL library.
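For example, a minimal sketch, assuming save_data holds the reshaped RGB array from the capture above and Pillow is installed:

import numpy as np
from PIL import Image

# save_data is the (image_height, image_width, 3) array built above
img = Image.fromarray(save_data.astype(np.uint8), mode="RGB")
img = img.crop((0, 0, 640, 600))  # e.g. crop a region of interest
img.save("thorlabs_frame.png")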
I'm trying to save image buffers as .pngs with Blender's Python API (but this is mainly just a Python question).
I'm trying to speed things up by making a thread to save the image. The function that creates the thread is called from a draw-handler callback that fires whenever the 3D view is refreshed. Here is the full code (it's a little messy):
import base64, io, os, bgl, gpu, bpy, threading, time, sys
import numpy as np
from gpu_extras.presets import draw_texture_2d
from PIL import Image
import multiprocessing.pool as mpool

finalPath = bpy.context.scene.render.filepath + "hithere.png"
WIDTH = 1920
HEIGHT = 1080
offscreen = gpu.types.GPUOffScreen(WIDTH, HEIGHT)

def draw2():
    global finalPath
    global array
    global WIDTH
    global HEIGHT
    global needsSaving

    context = bpy.context
    scene = context.scene

    view_matrix = scene.camera.matrix_world.inverted()
    projection_matrix = scene.camera.calc_matrix_camera(
        context.depsgraph, x=WIDTH, y=HEIGHT)

    offscreen.draw_view3d(
        scene,
        context.view_layer,
        context.space_data,
        context.region,
        view_matrix,
        projection_matrix)

    bgl.glDisable(bgl.GL_DEPTH_TEST)
    draw_texture_2d(offscreen.color_texture, (0, -125), WIDTH, HEIGHT)

    # read the pixels back from the framebuffer
    buffer = bgl.Buffer(bgl.GL_BYTE, WIDTH * HEIGHT * 4)
    bgl.glReadBuffer(bgl.GL_BACK)
    bgl.glReadPixels(0, -125, WIDTH, HEIGHT, bgl.GL_RGBA, bgl.GL_UNSIGNED_BYTE, buffer)

    # hand the buffer off to a background thread for saving
    needle = threading.Thread(target=saveIt, args=[buffer, finalPath, WIDTH, HEIGHT])
    needle.daemon = True
    needle.start()
    #### thread.start_new_thread(saveIt, (buffer, finalPath, WIDTH, HEIGHT))

def coby(scene):
    frame = scene.frame_current
    folder = scene.render.filepath
    myFormat = "png"  # scene.render.image_settings.renderformat.lower()
    outputPath = os.path.join(folder, "%05d.%s" % (frame, myFormat))
    global finalPath
    finalPath = outputPath

h = bpy.types.SpaceView3D.draw_handler_add(draw2, (), 'WINDOW', 'POST_PIXEL')

bpy.app.handlers.frame_change_pre.clear()
bpy.app.handlers.frame_change_pre.append(coby)

def saveIt(buffer, path, width, height):
    array = np.asarray(buffer, dtype=np.uint8)
    myBytes = array.tobytes()
    im = Image.frombytes("RGBA", (width, height), myBytes)

    rawBytes = io.BytesIO()
    im.save(rawBytes, "PNG")
    rawBytes.seek(0)
    base64Encoded = base64.b64encode(rawBytes.read())
    txt = "data:image/png;base64," + base64Encoded.decode()
    f = open(path, "wb")  # use the path passed to the thread, not the global
    f.write(base64.decodebytes(base64Encoded))
    f.close()
It actually does work, except when I play the timeline in Blender (which calls the frame_change_pre callback and also the 3D view-refresh callback, though I'm not sure in which order). Most of my images are replaced, but some are not, as can be seen in this screenshot:
[![enter image description here][1]][1]
I originally had all blue-plane images; after running the script with the thread, it replaced almost all of them, but some of the blue-plane images still remain (seemingly at random intervals). Everything works fine if I call .join() right after creating the thread, or don't use a thread at all, but the thread seems to be the only way to make it significantly faster.
I've been looking around at how to use threads with queues and pooling (How to use python multiprocessing Pool.map within loop, What happened to thread.start_new_thread in python 3, How can I make a background, non-blocking input loop in python?, Creating Threads in python).
So: why aren't all of the threads finishing?
[1]: https://i.stack.imgur.com/nJCwH.png
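For reference, the queue-plus-single-writer pattern those links discuss looks roughly like this; this is a generic sketch, not tied to Blender's API:

import queue
import threading

save_queue = queue.Queue()

def writer():
    # Single consumer thread: drains the queue and writes each job to disk,
    # so no two writes ever race on the same file.
    while True:
        job = save_queue.get()
        if job is None:  # sentinel: shut down cleanly
            break
        path, data = job
        with open(path, "wb") as f:
            f.write(data)
        save_queue.task_done()

threading.Thread(target=writer, daemon=True).start()

# Producers just enqueue (path, bytes) pairs:
save_queue.put(("/tmp/frame_00001.png", b"...png bytes..."))

save_queue.join()     # wait until everything queued has been written
save_queue.put(None)  # then stop the writer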
I'm starting to use the wxPython GUI toolkit, and for my "Hello World" project I'm trying to create a program that can read any image and show it as an icon in a CheckListCtrl column. I've done the first part (reading the image and drawing it into the CheckListCtrl), but I'm not able to load a PNG image and keep the transparency in that icon.
My code is the following:
'''
17 June 2018
#autor: Daniel Carrasco
'''
import wx
from wx.lib.mixins.listctrl import CheckListCtrlMixin, ListCtrlAutoWidthMixin
import sys
from pathlib import Path

BACKGROUNDCOLOR = (240, 240, 240, 255)

class CheckListCtrl(wx.ListCtrl, CheckListCtrlMixin, ListCtrlAutoWidthMixin):
    def __init__(self, parent):
        wx.ListCtrl.__init__(self, parent, -1, style=wx.LC_REPORT | wx.SUNKEN_BORDER,
                             size=wx.Size(395, 467), pos=wx.Point(10, 20))
        CheckListCtrlMixin.__init__(self)
        ListCtrlAutoWidthMixin.__init__(self)

#====================================================================
class MainFrame(wx.Frame):
    def __init__(self, *args, **kwargs):
        self.dataFolder = {
            "images": Path("images/")
        }

        wx.Frame.__init__(self, *args, **kwargs)

        icon = wx.Icon("icons.ico", wx.BITMAP_TYPE_ICO)
        self.SetIcon(icon)

        self.createWidgets()
        self.createButtons()
        self.Show()

    #----------------------------------------------------------
    def exitGUI(self, event):  # callback
        self.Destroy()

    #----------------------------------------------------------
    def createWidgets(self):
        self.CreateStatusBar()  # wxPython built-in method
        self.createMenu()

        # Create the panel
        boxSizer = wx.BoxSizer()
        panel = wx.Panel(self)
        panel.SetBackgroundColour(BACKGROUNDCOLOR)
        panel.SetSizerAndFit(boxSizer)

        staticBox = wx.StaticBox(panel, -1, "Listado de Saves", size=(415, 500),
                                 pos=wx.Point(5, 0))
        self.statBoxSizerV = wx.StaticBoxSizer(staticBox, wx.VERTICAL)

        # Item list
        self.itemList = CheckListCtrl(staticBox)
        self.itemList.InsertColumn(0, '', width=32)
        self.itemList.InsertColumn(1, 'Icono', width=52)
        self.itemList.InsertColumn(2, 'Título', width=140)

        self.il_Small = self.itemList.GetImageList(wx.IMAGE_LIST_SMALL)
        self.il = wx.ImageList(48, 48, wx.IMAGE_LIST_SMALL)
        self.itemList.SetImageList(self.il, wx.IMAGE_LIST_SMALL)

        image = wx.Image(str(self.dataFolder["images"] / "tick_1.png"), wx.BITMAP_TYPE_ANY)
        self.il.Add(wx.Bitmap(image))
        image = wx.Image(str(self.dataFolder["images"] / 'tick_2.png'), wx.BITMAP_TYPE_ANY)
        self.il.Add(wx.Bitmap(image))

        image = wx.Image(str(self.dataFolder["images"] / 'exit.png'), wx.BITMAP_TYPE_ANY)
        for x in range(0, 4):
            for y in range(0, 4):
                image.SetAlpha(x, y, 0)
        image = image.Scale(40, 40, wx.IMAGE_QUALITY_HIGH)
        image = image.Size(wx.Size(48, 48), wx.Point(4, 4), 255, 255, 255)
        # image.ClearAlpha()
        self.il.Add(wx.Bitmap(image))

        image = wx.Image(str(self.dataFolder["images"] / 'test.png'), wx.BITMAP_TYPE_ANY)
        image = image.Scale(40, 40, wx.IMAGE_QUALITY_HIGH)
        image = image.Size(wx.Size(48, 48), wx.Point(4, 4), 255, 255, 255)
        self.il.Add(image.ConvertToBitmap())

        index = self.itemList.InsertItem(sys.maxsize, "test")
        self.itemList.SetItemColumnImage(0, 1, 3)
        #self.itemList.Append("Prueba")

    #----------------------------------------------------------
    def createButtons(self):
        pass

    #----------------------------------------------------------
    def createMenu(self):
        # File menu
        APP_EXIT = 1

        mArchivo = wx.Menu()
        qmi = wx.MenuItem(mArchivo, APP_EXIT, '&Salir\tCtrl+Q')
        image = wx.Image(str(self.dataFolder["images"] / 'exit.png'), wx.BITMAP_TYPE_PNG)
        image = image.Scale(16, 16, wx.IMAGE_QUALITY_HIGH)
        qmi.SetBitmap(image.ConvertToBitmap())
        mArchivo.Append(qmi)

        self.Bind(wx.EVT_MENU, self.exitGUI, id=APP_EXIT)

        # Menu bar
        menuBar = wx.MenuBar()
        menuBar.Append(mArchivo, "&Archivo")

        # Set the menu bar
        self.SetMenuBar(menuBar)

#======================
# Start GUI
#======================
app = wx.App()
MainFrame(None, style=wx.SYSTEM_MENU | wx.CAPTION | wx.CLOSE_BOX, title="Savegame Linker", size=(485, 587))
app.MainLoop()
This code just reads the image into a wx.Image object, then scales it and adds a border by resizing. The problem is that the PNG transparency is not kept, and only the added border is transparent:
If I remove the image transparency with Photoshop (adding a white background), then the image is shown with the transparency I want:
Is there any way to keep the PNG transparency in the CheckListCtrl, or at least add a white background to the image (which looks like an acceptable alternative)? If possible I want to do it using only wx, because pulling in the Pillow module just to remove transparency doesn't seem like an optimal solution.
Thanks!!
I don't know how you run that code, but try as I might, I have been unable to run it myself, so I cannot be sure of the answer below.
A wx.Image has a range of "image handlers", of which only the BMPHandler is loaded by default. I suspect that you need to load the PNGHandler before attempting SetAlpha, and you should probably check the image with HasAlpha beforehand.
See: https://docs.wxpython.org/wx.Image.html
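For example, a minimal sketch of that check (the file name is a placeholder):

import wx

app = wx.App()
wx.Image.AddHandler(wx.PNGHandler())  # or simply: wx.InitAllImageHandlers()

image = wx.Image("test.png", wx.BITMAP_TYPE_PNG)
if image.HasAlpha():
    image.SetAlpha(0, 0, 0)  # only touch the alpha channel if it exists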
Alpha channel support
Starting from wxWidgets 2.5.0, wx.Image supports alpha channel data: in addition to a byte for the red, green and blue colour components for each pixel, it also stores a byte representing the pixel opacity. An alpha value of 0 corresponds to a transparent pixel (null opacity) while a value of 255 means that the pixel is 100% opaque. The constants IMAGE_ALPHA_TRANSPARENT and IMAGE_ALPHA_OPAQUE can be used to indicate those values in a more readable form. While all images have RGB data, not all images have an alpha channel. Before using wx.Image.GetAlpha you should check if this image contains an alpha channel with wx.Image.HasAlpha. Currently the BMP, PNG, TGA, and TIFF format handlers have full alpha channel support for loading, so if you want to use alpha you have to use one of these formats. If you initialize the image alpha channel yourself using wx.Image.SetAlpha, you should save it in either PNG, TGA, or TIFF format to avoid losing it, as these are the only handlers that currently support saving with alpha.
Available image handlers
The following image handlers are available. BMPHandler is always installed by default. To use other image formats, install the appropriate handler with wx.Image.AddHandler or call wx.InitAllImageHandlers.
BMPHandler: For loading (including alpha support) and saving, always installed.
wx.PNGHandler: For loading and saving. Includes alpha support.
wx.JPEGHandler: For loading and saving.
wx.GIFHandler: For loading and saving (see below).
wx.PCXHandler: For loading and saving (see below).
wx.PNMHandler: For loading and saving (see below).
wx.TIFFHandler: For loading and saving. Includes alpha support.
wx.TGAHandler: For loading and saving. Includes alpha support.
wx.IFFHandler: For loading only.
wx.XPMHandler: For loading and saving.
ICOHandler: For loading and saving.
CURHandler: For loading and saving.
ANIHandler: For loading only.
When saving in PCX format, wx.PCXHandler will count the number of different colours in the image; if there are 256 or less colours, it will save as 8 bit, else it will save as 24 bit. Loading PNMs only works for ASCII or raw RGB images. When saving in PNM format, wx.PNMHandler will always save as raw RGB.
Saving GIFs requires images of maximum 8 bpp (see Quantize), and the alpha channel converted to a mask (see wx.Image.ConvertAlphaToMask). Saving an animated GIF requires images of the same size (see wx.GIFHandler.SaveAnimation).
See also wx.Bitmap, wx.InitAllImageHandlers, PixelData
I think the right way is to convert the transparency into a mask. I had already tested this, but it looks like I had called the function after another one that made it fail.
image = wx.Image(str(self.dataFolder["images"] / 'test.png'), wx.BITMAP_TYPE_ANY)
image = image.Scale(40, 40, wx.IMAGE_QUALITY_HIGH)
image.ConvertAlphaToMask(threshold=50)
image = image.Size(wx.Size(48, 48), wx.Point(4, 4), 255, 255, 255)
self.il.Add(image.ConvertToBitmap())
The last time, I had called the function after image.Size, and then it fails (maybe the Size function removes the transparency), but if it's done before, it works.
Thanks again and greetings!!
EDIT:
A few months later I've continued the project, and I've found another way to make the background transparent: remove the transparency by converting it to solid white:
def remove_transparency(im, bg_colour=(255, 255, 255)):
    # Only process if image has transparency (http://stackoverflow.com/a/1963146)
    if im.mode in ('RGBA', 'LA') or (im.mode == 'P' and 'transparency' in im.info):
        # Need to convert to RGBA if LA format due to a bug in PIL (http://stackoverflow.com/a/1963146)
        alpha = im.convert('RGBA').split()[-1]

        # Create a new background image of our matt color.
        # Must be RGBA because paste requires both images have the same format
        # (http://stackoverflow.com/a/8720632 and http://stackoverflow.com/a/9459208)
        bg = Image.new("RGBA", im.size, bg_colour + (255,))
        bg.paste(im, mask=alpha)
        return bg
    else:
        return im

# Open the image
sbuf = BytesIO(campo[4])
im = Image.open(sbuf)

# Remove transparency (white background will be transparent on ImageList)
im2 = remove_transparency(im).convert("RGB")
im.close()

# Create a wx.Image from the PIL image
width, height = im2.size
image = wx.Image(width, height, im2.tobytes())
image = image.Size(wx.Size(48, 48), wx.Point(2, 2), 255, 255, 255)

# Convert it to a Bitmap and add it to the ImageList
image = image.ConvertToBitmap()
icon_image = self.il.Add(image)
sbuf.close()
I've slightly changed the way I store the images: they now come from a PNG stored in an SQLite BLOB (campo[4]).
Greetings!!
I'm trying to save a captured 640x480 RGB image from NAO's front camera to my computer. I'm using Python and PIL to do so. Unfortunately, the image just won't save on my computer, no matter what image type or path I use for the parameters of the Image.save() method. The image created with PIL contains valid RGB information, though. Here's my code sample from Choregraphe:
import Image

def onInput_onStart(self):
    cam_input = ALProxy("ALVideoDevice")
    nameId = cam_input.subscribeCamera("Test_Cam", 1, 2, 13, 20)
    image = cam_input.getImageRemote(nameId)  # captures an image
    w = image[0]            # get the image width
    h = image[1]            # get the image height
    pixel_array = image[6]  # contains the image data

    result = Image.fromstring("RGB", (w, h), pixel_array)
    # the following line doesn't work
    result.save("C:\\Users\\Claudia\\Desktop\\NAO\\Bilder\\test.png", "PNG")

    cam_input.releaseImage(nameId)
    cam_input.unsubscribe(nameId)
    pass
Thank you so much for your help in advance!
- a frustrated student
In the comment, you say the code is pasted from Choregraphe, so I guess you launch it using Choregraphe.
If so, the code is injected into your robot and then started.
So your image is saved to the NAO's hard drive, and your robot almost certainly doesn't have a folder named "C:\Users\Claudia\Desktop\NAO\Bilder".
So change the path to "/home/nao/test.png", start your code, then log into your NAO using PuTTY, or browse its folders using WinSCP (as it looks like you're using Windows).
You should then see your image file.
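In other words, the only change needed in the Choregraphe box is the save path (assuming the default writable home directory on the robot):

result.save("/home/nao/test.png", "PNG")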
In order for your code to run correctly it needs to be properly indented. Your code should look like this:
import Image

def onInput_onStart(self):
    cam_input = ALProxy("ALVideoDevice")
    nameId = cam_input.subscribeCamera("Test_Cam", 1, 2, 13, 20)
    image = cam_input.getImageRemote(nameId)  # captures an image
    w = image[0]            # get the image width
    h = image[1]            # get the image height
    pixel_array = image[6]  # contains the image data
    ...
Make sure to indent everything that's inside the def onInput_onStart(self): method.
Sorry for the late response, but it may be helpful for someone. You should try it with NAOqi. Here is the documentation for retrieving images:
http://doc.aldebaran.com/2-4/dev/python/examples/vision/get_image.html
The original code was not working for me, so I made some tweaks.
import argparse
import time

import qi
import Image  # PIL's Image module (Python 2); with Pillow use `from PIL import Image`

parser = argparse.ArgumentParser()
parser.add_argument("--ip", type=str, default="nao.local.",
                    help="Robot IP address. On robot or Local Naoqi: use 'nao.local.'.")
parser.add_argument("--port", type=int, default=9559,
                    help="Naoqi port number")
args = parser.parse_args()

session = qi.Session()
try:
    session.connect("tcp://" + args.ip + ":" + str(args.port))
except RuntimeError:
    pass

"""
First get an image, then show it on the screen with PIL.
"""
# Get the service ALVideoDevice.
video_service = session.service("ALVideoDevice")
resolution = 2   # VGA
colorSpace = 11  # RGB

videoClient = video_service.subscribe("python_client", 0, 3, 13, 1)

t0 = time.time()

# Get a camera image.
# image[6] contains the image data passed as an array of ASCII chars.
naoImage = video_service.getImageRemote(videoClient)

t1 = time.time()

# Time the image transfer.
print("acquisition delay ", t1 - t0)

#video_service.unsubscribe(videoClient)

# Now we work with the returned image and save it as a PNG using the
# PIL Image module.

# Get the image size and pixel array.
imageWidth = naoImage[0]
imageHeight = naoImage[1]
array = naoImage[6]
image_string = str(bytearray(array))

# Create a PIL Image from our pixel array.
im = Image.fromstring("RGB", (imageWidth, imageHeight), image_string)

# Save the image.
im.save("C:\\Users\\Lenovo\\Desktop\\PROJEKTI\\python2-connect4\\camImage.png", "PNG")
Be careful to use Python 2.7: the code runs on your computer, not on the NAO robot!
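For reference, a typical invocation might look like this (the script name and IP address are placeholders):

python2 get_image.py --ip 192.168.1.42 --port 9559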