Colour Difference in The Foundry NUKE - python

I am trying to build a painterly node group directly with Python code so I can reuse it in future projects, but I can't get the power part of the colour difference formula below to work in Nuke. (I'm also new to Nuke, so if there is a better way of writing this, or if I'm approaching it completely wrong, please let me know.)
The following formula for colour difference is used to create the difference image:
|(r1, g1, b1) – (r2, g2, b2)| = ((r1 – r2)^2 + (g1 – g2)^2 + (b1 – b2)^2)^(1/2)
nRedShuffle = nuke.nodes.Shuffle()
nRedShuffle['red'].setValue('red')
nRedShuffle['green'].setValue('red')
nRedShuffle['blue'].setValue('red')
nRedShuffle['alpha'].setValue('red')
nGreenShuffle = nuke.nodes.Shuffle()
nGreenShuffle['red'].setValue('green')
nGreenShuffle['green'].setValue('green')
nGreenShuffle['blue'].setValue('green')
nGreenShuffle['alpha'].setValue('green')
#...(so on for the rest of rgba1 and rgba2)
nGreenShuffle2 = nuke.nodes.Shuffle()
nGreenShuffle2['red'].setValue('green')
nGreenShuffle2['green'].setValue('green')
nGreenShuffle2['blue'].setValue('green')
nGreenShuffle2['alpha'].setValue('green')
nBlueShuffle2 = nuke.nodes.Shuffle()
nBlueShuffle2['red'].setValue('blue')
nBlueShuffle2['green'].setValue('blue')
nBlueShuffle2['blue'].setValue('blue')
nBlueShuffle2['alpha'].setValue('blue')
#I am having troubles with the powers below
redDiff = nuke.nodes.Merge2(operation='minus', inputs=[nRedShuffle2, nRedShuffle])
redDiffMuli = nuke.nodes.Merge2(operation='multiply', inputs=[redDiff, redDiff])
greenDiff = nuke.nodes.Merge2(operation='minus', inputs=[nGreenShuffle2, nGreenShuffle])
greenDiffMuli = nuke.nodes.Merge2(operation='multiply', inputs=[greenDiff, greenDiff])
blueDiff = nuke.nodes.Merge2(operation='minus', inputs=[nBlueShuffle2, nBlueShuffle])
blueDiffMuli = nuke.nodes.Merge2(operation='multiply', inputs=[blueDiff, blueDiff])
redGreenAdd = nuke.nodes.Merge2(operation='plus', inputs=[redDiffMuli, greenDiffMuli])
redGreenBlueAdd = nuke.nodes.Merge2(operation='plus', inputs=[redGreenAdd, blueDiffMuli])

Here are at least two ways to implement a colour difference formula for two images: you can use the difference operation in a Merge node, or you can write a formula for each channel inside a MergeExpression node.
The expression for each channel is as simple as this:
abs(Ar-Br)
abs(Ag-Bg)
abs(Ab-Bb)
Python commands
You can use the nuke.nodes.MergeExpression approach:
import nuke
merge = nuke.nodes.MergeExpression(expr0='abs(Ar-Br)',
                                   expr1='abs(Ag-Bg)',
                                   expr2='abs(Ab-Bb)')
or the regular nuke.createNode syntax:
merge = nuke.createNode('MergeExpression')
merge['expr0'].setValue('abs(Ar-Br)')
merge['expr1'].setValue('abs(Ag-Bg)')
merge['expr2'].setValue('abs(Ab-Bb)')
Full code version
import nuke
import nukescripts
red = nuke.createNode("Constant")
red['color'].setValue([1,0,0,1])
merge = nuke.createNode('MergeExpression')
merge['expr0'].setValue('abs(Ar-Br)')
merge['expr1'].setValue('abs(Ag-Bg)')
merge['expr2'].setValue('abs(Ab-Bb)')
yellow = nuke.createNode("Constant")
yellow['color'].setValue([1,1,0,1])
merge.connectInput(0, yellow)
nuke.toNode('MergeExpression1').setSelected(True)
nukescripts.connect_selected_to_viewer(0)
# Auto-alignment in Node Graph
for everyNode in nuke.allNodes():
    everyNode.autoplace()
Consider! The MergeExpression node is much slower than a regular Merge (difference) node.
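If you want the exact Euclidean distance from the question, square root included, it can also be done in a single MergeExpression by writing the whole formula into each channel. This is only a minimal sketch, assuming Nuke's standard expression functions pow() and sqrt() and that the two source images are connected to inputs B and A:
import nuke
# One node computes ((Ar-Br)^2 + (Ag-Bg)^2 + (Ab-Bb)^2)^(1/2) for every pixel
euclid = 'sqrt(pow(Ar-Br,2) + pow(Ag-Bg,2) + pow(Ab-Bb,2))'
dist = nuke.nodes.MergeExpression(expr0=euclid, expr1=euclid, expr2=euclid)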

Related

How to tell if matchTemplate succeeds? [duplicate]

I'm attempting to find an image in another.
im = cv.LoadImage('1.png', cv.CV_LOAD_IMAGE_UNCHANGED)
tmp = cv.LoadImage('e1.png', cv.CV_LOAD_IMAGE_UNCHANGED)
w,h = cv.GetSize(im)
W,H = cv.GetSize(tmp)
width = w-W+1
height = h-H+1
result = cv.CreateImage((width, height), 32, 1)
cv.MatchTemplate(im, tmp, result, cv.CV_TM_SQDIFF)
print result
When I run this, everything executes just fine, no errors get thrown. But I'm unsure what to do from here. The doc says that result stores "A map of comparison results". I tried printing it, but it gives me width, height, and step.
How do I use this information to find whether or not one image is in another/where it is located?
This might work for you! :)
import cv2
import numpy as np

def FindSubImage(im1, im2):
    needle = cv2.imread(im1)    # template to look for
    haystack = cv2.imread(im2)  # image to search in
    # matchTemplate expects the search image first and the template second
    result = cv2.matchTemplate(haystack, needle, cv2.TM_CCOEFF_NORMED)
    # with TM_CCOEFF_NORMED the best match is at the maximum of the result map
    y, x = np.unravel_index(result.argmax(), result.shape)
    return x, y
CCOEFF_NORMED is just one of many comparison methods.
See: http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
for full list.
Not sure if this is the best method, but is fast, and works just fine for me! :)
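For example, with the file names from the question (assuming e1.png is the template being searched for inside 1.png):
x, y = FindSubImage('e1.png', '1.png')
print(x, y)  # top-left corner of the best match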
MatchTemplate returns a similarity map and not a location.
You can then use this map to find a location.
If you are only looking for a single match you could do something like this to get a location:
minVal,maxVal,minLoc,maxLoc = cv.MinMaxLoc(result)
Then minLoc has the location of the best match and minVal describes how well the template fits. You need to come up with a threshold for minVal to determine whether you consider this result a match or not.
If you are looking for more than one match per image you need to use algorithms like non-maximum suppression.
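With the modern cv2 API, a single-match check along those lines could look like the sketch below; the 0.05 threshold is an assumed value you would need to tune for your own images:
import cv2

haystack = cv2.imread('1.png')  # image to search in (from the question)
needle = cv2.imread('e1.png')   # template to look for
result = cv2.matchTemplate(haystack, needle, cv2.TM_SQDIFF_NORMED)
minVal, maxVal, minLoc, maxLoc = cv2.minMaxLoc(result)

# For the SQDIFF methods, lower values mean a better match
THRESHOLD = 0.05  # assumed value; tune it for your use case
if minVal < THRESHOLD:
    print("match found, top-left corner at", minLoc)
else:
    print("no match")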

How to remove the mean of some values in the 3rd dimension of a matrix?

I have been stuck trying to do this with numpy with no luck. I am trying to move from MATLAB to Python, however, the transition hasn't been so easy. Anyway, that doesn't matter.
I am trying to code the Python analog of this simple MATLAB line of code:
A(:,:,condtype==1 & Mat(:,9)==contra(ii)) = A(:,:, condtype ==1 & Mat(:,9)==contra(ii))-mean(A(:,:, condtype ==1 & Mat(:,9)==contra(ii)),3);
Right, so the above convoluted line of code does the following: it indexes a condition covering half of the 3rd dimension of A, removes the mean over those indexes, and simultaneously replaces the values in A with the new mean-removed values.
How would one go about doing this in Python?
I actually figured it out. I was trying to use the Python and operator when I should have been using np.equal. Also, I needed to use keepdims=True for the mean. Here it is for anyone who wants to see:
import numpy as np

def RmContrastMean(targettype, trialsMat, Contrastlvls, dX):
    present = targettype == 1
    absent = targettype == 0
    for i in range(0, Contrastlvls.size):
        CurrentContrast = trialsMat[:, 8] == Contrastlvls[i]
        preIdx = np.equal(present, CurrentContrast)
        absIdx = np.equal(absent, CurrentContrast)
        # mean
        dX[:, :, preIdx] = dX[:, :, preIdx] - np.mean(dX[:, :, preIdx], axis=2, keepdims=True)
        dX[:, :, absIdx] = dX[:, :, absIdx] - np.mean(dX[:, :, absIdx], axis=2, keepdims=True)
        # std
        dX[:, :, preIdx] = dX[:, :, preIdx] / np.std(dX[:, :, preIdx], axis=2, keepdims=True)
        dX[:, :, absIdx] = dX[:, :, absIdx] / np.std(dX[:, :, absIdx], axis=2, keepdims=True)
    return dX
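For reference, the single MATLAB line can also be translated almost verbatim with a boolean mask and np.logical_and. This is a minimal sketch using stand-in arrays, with the names (A, condtype, Mat, contra, ii) taken from the MATLAB snippet:
import numpy as np

# Stand-in data, just to make the sketch runnable
A = np.random.randn(4, 4, 10)                        # conditions along the 3rd axis
condtype = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])  # condition label per slice
Mat = np.zeros((10, 9))
Mat[:, 8] = [1, 1, 0, 0, 1, 1, 0, 0, 1, 1]           # MATLAB column 9 is index 8 here
contra = [1]
ii = 0

# Boolean mask over the 3rd axis: condtype==1 & Mat(:,9)==contra(ii)
idx = np.logical_and(condtype == 1, Mat[:, 8] == contra[ii])

# Subtract the mean over the selected slices; keepdims=True keeps the axis for broadcasting
A[:, :, idx] = A[:, :, idx] - np.mean(A[:, :, idx], axis=2, keepdims=True)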

How to iterate over and download each image in an image collection from the Google Earth Engine python api

I am new to google earth engine and was trying to understand how to use the Google Earth Engine python api. I can create an image collection, but apparently the getdownloadurl() method operates only on individual images. So I am trying to understand how to iterate over and download all of the images in the collection.
Here is my basic code. I broke it out in great detail for some other work I am doing.
import ee
ee.Initialize()
col = ee.ImageCollection('LANDSAT/LC08/C01/T1')
col.filterDate('1/1/2015', '4/30/2015')
pt = ee.Geometry.Point([-2.40986111110000012, 26.76033333330000019])
buff = pt.buffer(300)
region = ee.Feature.bounds(buff)
col.filterBounds(region)
So I pulled the Landsat collection, filtered by date and a buffer geometry. So I should have something like 7-8 images in the collection (with all bands).
However, I could not seem to get iteration to work over the collection.
for example:
for i in col:
print(i)
The error indicates TypeError: 'ImageCollection' object is not iterable
So if the collection is not iterable, how can I access the individual images?
Once I have an image, I should be able to use the usual
path = col[i].getDownloadUrl({
    'scale': 30,
    'crs': 'EPSG:4326',
    'region': region
})
It's a good idea to use ee.batch.Export for this. Also, it's good practice to avoid mixing client and server functions (reference). For that reason, a for-loop can be used, since Export is a client function. Here's a simple example to get you started:
import ee
ee.Initialize()
rectangle = ee.Geometry.Rectangle([-1, -1, 1, 1])
sillyCollection = ee.ImageCollection([ee.Image(1), ee.Image(2), ee.Image(3)])
# This is OK for small collections
collectionList = sillyCollection.toList(sillyCollection.size())
collectionSize = collectionList.size().getInfo()
for i in xrange(collectionSize):
    ee.batch.Export.image.toDrive(
        image = ee.Image(collectionList.get(i)).clip(rectangle),
        fileNamePrefix = 'foo' + str(i + 1),
        dimensions = '128x128').start()
Note that converting a collection to a list in this manner is also dangerous for large collections (reference). However, this is probably the most scalable method if you really need to download.
Here is my solution:
import ee
ee.Initialize()
pt = ee.Geometry.Point([-2.40986111110000012, 26.76033333330000019])
region = pt.buffer(10)
col = ee.ImageCollection('LANDSAT/LC08/C01/T1') \
    .filterDate('2015-01-01', '2015-04-30') \
    .filterBounds(region)
bands = ['B4','B5'] #Change it!
def accumulate(image, img):
    name_image = image.get('system:index')
    image = image.select([0], [name_image])
    cumm = ee.Image(img).addBands(image)
    return cumm

for band in bands:
    col_band = col.map(lambda img: img.select(band)
                       .set('system:time_start', img.get('system:time_start'))
                       .set('system:index', img.get('system:index')))
    # ImageCollection to List
    col_list = col_band.toList(col_band.size())
    # Define the initial value for iterate.
    base = ee.Image(col_list.get(0))
    base_name = base.get('system:index')
    base = base.select([0], [base_name])
    # Eliminate the image 'base'.
    new_col = ee.ImageCollection(col_list.splice(0, 1))
    img_cummulative = ee.Image(new_col.iterate(accumulate, base))
    task = ee.batch.Export.image.toDrive(
        image=img_cummulative.clip(region),
        folder='landsat',
        fileNamePrefix=band,
        scale=30).start()
    print('Export Image ' + band + ' was submitted, please wait ...')
img_cummulative.bandNames().getInfo()
A reproducible example can you found it here: https://colab.research.google.com/drive/1Nv8-l20l82nIQ946WR1iOkr-4b_QhISu
You could possibly use ee.ImageCollection.iterate() with a function that gets the image and adds it to a list.
import ee

def accumulate_images(image, images):
    images.append(image)
    return images

for img in col.iterate(accumulate_images, []):
    url = img.getDownloadURL(dict(scale=30, crs='EPSG:4326', region=region))
Unfortunately I am not able to test this code as I do not have access to the API, but it might help you arrive at a solution.
I had a similar problem and was not able to solve it with the presented solutions, so I put together a sample code for this purpose. It iterates over an image collection on the client side, so it is not affected by the server-side-only limitations of .map() or .iterate().
It is possible to download the code and see its explanation here
It basically transforms the ImageCollection into a list (ic.toList()), then performs a standard loop; each element can be converted back with ee.Image(list.get(i)) and processed one by one, covering all the images in the collection.
In your particular case, to download each image, the function to be called within the loop could be getDownloadURL() or getThumbURL():
var url = imgNew.getDownloadURL({
    region: geometry,
});
var thumbURL = imgNew.getThumbURL({region: geometry, dimensions: 512, format: 'png'});
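A minimal Python sketch of that client-side loop, using the coordinates and filters from the question (depending on the API version, the region passed to getDownloadURL may need to be serialized, e.g. with toGeoJSONString()):
import ee
ee.Initialize()

region = ee.Geometry.Point([-2.40986111110000012, 26.76033333330000019]).buffer(300)
col = ee.ImageCollection('LANDSAT/LC08/C01/T1') \
    .filterDate('2015-01-01', '2015-04-30') \
    .filterBounds(region)

# Client-side loop: convert the collection to a list once, then index into it
col_list = col.toList(col.size())
n = col_list.size().getInfo()
for i in range(n):
    img = ee.Image(col_list.get(i))
    url = img.getDownloadURL({'scale': 30, 'crs': 'EPSG:4326', 'region': region})
    print(url)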

Proper way to use setAttr with channel box selection

please bear with me - I'm new to all this. I tried the searches and have only found bits and pieces to what I'm looking for, but not what I need to connect them.
Basically, I'm trying to create a Python script that allows the user to simply "0" out multiple selected attributes on Maya's Channel box.
So far I have:
import maya.cmds as cmds
selObjs = cmds.ls(sl=1)
selAttrs = cmds.channelBox("mainChannelBox", q=1, sma=1)
print selObjs # returns [u'pCube1']
print selAttrs # returns [u'ty']
If I would like to set the attributes:
cmds.setAttr(selObjs + "." + selAttrs, '0')
of course this is wrong, so how do I properly execute the setAttr command in this scenario? (The intention is to set them even when multiple attributes are selected in the channel box.)
I found that in MEL, it works like this. So really I just need help figuring out how to create the python counterpart of this:
string $object[] = `ls -sl`;
string $attribute[] = `channelBox -q -sma mainChannelBox`;
for ($item in $object)
    for ($attr in $attribute)
        setAttr ($item + "." + $attr) 0;
After that, I need an if statement so that, if the selected attribute is a scale attribute, the value is set to 1 instead - but that is something I'll look into later, though I wouldn't mind advice on it.
Thanks!
So here's what I finally came up with:
import maya.cmds as cmds
selObjs = cmds.ls(sl=1)
selAttrs = cmds.channelBox("mainChannelBox", q=1, sma=1)
scales = ['sy', 'sx', 'sz', 'v']
if not selObjs:
    print "no object and attribute is selected!"
elif not selAttrs:
    print "no attribute is selected!"
else:
    for eachObj in selObjs:
        for eachAttr in selAttrs:
            if any(scaleVizItem in eachAttr for scaleVizItem in scales):
                cmds.setAttr(eachObj + "." + eachAttr, 1)
            else:
                cmds.setAttr(eachObj + "." + eachAttr, 0)
This will reset the basic transformations to their defaults. Including an if for the scale and visibility values.
I managed to come up with this:
import maya.cmds as cmds
selObjs = cmds.ls(sl=1)
selAttrs = cmds.channelBox("mainChannelBox", q=1, sma=1)
for each in selObjs:
    for eachAttr in selAttrs:
        cmds.setAttr(each + "." + eachAttr, 0)
And it zeroes out the selected attributes perfectly.
Now I'm at the stage of figuring out how to get the script to recognize scale attributes and change that value to 1 instead of 0 (I'm stuck on how to extract values from a list at the moment).
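Rather than hard-coding which attributes default to 1, another option is to ask Maya for each attribute's default value and reset to that. A minimal sketch, assuming cmds.attributeQuery's listDefault flag resolves the short attribute names returned by the channel box:
import maya.cmds as cmds

selObjs = cmds.ls(sl=1)
selAttrs = cmds.channelBox("mainChannelBox", q=1, sma=1) or []
for obj in selObjs:
    for attr in selAttrs:
        # listDefault returns the attribute's default values, e.g. [1.0] for scaleX
        default = cmds.attributeQuery(attr, node=obj, listDefault=True)[0]
        cmds.setAttr(obj + "." + attr, default)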

Blender: Walk around sphere

In order to understand blender python game scripting, I currently try to build a scene in which one can walk around a sphere, using the FPSController structure from this link. For gravity and FPSController orientation I tried to construct a python Controller, which currently looks like this:
import bge
from mathutils import Vector

def main():
    print("Started")
    controller = bge.logic.getCurrentController()
    me = controller.owner
    distance, loc, glob = me.getVectTo((0, 0, 0))
    grav = controller.actuators['Gravity']
    strength = me['Gravity']
    force = strength * (distance * distance) * glob
    grav.force = force
    try:
        rot = Vector((0, 0, -1)).rotation_difference(glob).to_matrix()
    except Exception as E:
        print(E)
        rot = (0, 0, 0)
    rotZ = me.orientation
    me.orientation = rot * rotZ
    controller.activate(grav)

main()
This roughly works until any angle goes over 180 degrees, at which point the motion becomes visibly discontinuous. I assume this comes from rotation_difference being discontinuous – the Blender documentation on Math Types & Utilities does not say, and I have not thought enough about quaternion representations yet to see whether a continuous map would be possible – and I guess there is a more elegant way to make the local Z orientation depend continuously on the mouse while the local X and Y orientations depend continuously on some given vector, but how?
The consensus seems to be that you should accomplish such rotations using quaternions.
See this for the api: http://www.blender.org/documentation/249PythonDoc/Mathutils.Quaternion-class.html
See this for an introduction to the maths: http://en.wikipedia.org/wiki/Rotation_formalisms_in_three_dimensions#Quaternions
There is an align function. If the game object is called own, it should be something like own.alignAxisToVect(vector, 2, 1), with 2 being the index of the Z axis (x=0, y=1, z=2) and 1 being the speed of alignment (between 0 and 1).
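In the controller from the question, that call could replace the rotation_difference block. A minimal sketch, assuming the BGE KX_GameObject API and that the object's local +Z should point away from the sphere's centre (the opposite of the global vector returned by getVectTo):
import bge

def orient_to_sphere():
    controller = bge.logic.getCurrentController()
    me = controller.owner
    # getVectTo returns (distance, global vector, local vector) to the target point
    distance, glob, loc = me.getVectTo((0, 0, 0))
    # glob points toward the sphere's centre, so "up" is -glob;
    # axis 2 is local Z, and 1.0 means align fully in one frame
    me.alignAxisToVect(-glob, 2, 1.0)

orient_to_sphere()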
