I can sample HSV space (with s and v fixed) like so:
import numpy as np

hue_gradient = np.linspace(0, 360, 16)  # sample 16 equally spread hues
hsv = np.ones(shape=(1, len(hue_gradient), 3), dtype=float) * 0.75  # saturation and value fixed at 0.75
hsv[:, :, 0] = hue_gradient  # put the hues into the first channel
hsv
array([[[ 0. , 0.75, 0.75],
[ 24. , 0.75, 0.75],
[ 48. , 0.75, 0.75],
[ 72. , 0.75, 0.75],
[ 96. , 0.75, 0.75],
[120. , 0.75, 0.75],
[144. , 0.75, 0.75],
[168. , 0.75, 0.75],
[192. , 0.75, 0.75],
[216. , 0.75, 0.75],
[240. , 0.75, 0.75],
[264. , 0.75, 0.75],
[288. , 0.75, 0.75],
[312. , 0.75, 0.75],
[336. , 0.75, 0.75],
[360. , 0.75, 0.75]]])
However, these colors are not perceptually uniformly spaced.
I can confirm this by computing the deltaE 2000 color difference (delta_e_cie2000) from the colormath package between adjacent colors. Colors 0-15 correspond to the hue angle positions, and some of the adjacent deltaE values fall below the perceptual threshold.
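For reference, a minimal sketch of that check (assuming the colormath package; hue_gradient is the array from above, with s = v = 0.75):

from colormath import color_objects, color_conversions, color_diff

# convert each HSV sample to CIELAB, then print the deltaE 2000
# distance between adjacent hues
labs = [
    color_conversions.convert_color(
        color_objects.HSVColor(h, 0.75, 0.75), color_objects.LabColor
    )
    for h in hue_gradient
]
for i in range(len(labs) - 1):
    print(i, color_diff.delta_e_cie2000(labs[i], labs[i + 1]))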
So, the question is: is it possible to uniformly sample HSV space with s and v fixed? If not, how can I sample the space so that the colors are arranged as neighbors by hue similarity, with s and v varying as little as they have to?
I tried a few things, but in the end this seemed to work. It spaces the hue values uniformly and then nudges them until they are perceptually uniform.
from colormath import color_objects, color_diff, color_conversions

SAT = 1.0
VAL = 1.0
COLOR_COUNT = 16
NUDGE_SIZE = 0.2

def hue_to_lab(hue):
    return color_conversions.convert_color(
        color_objects.HSVColor(hue, SAT, VAL), color_objects.LabColor
    )

def get_equally_spaced(number, iters=100):
    # Create hues with evenly spaced values in hue space
    hues = [360 * x / number for x in range(number)]
    for _ in range(iters):
        # Convert hues to CIELAB colours
        cols = [hue_to_lab(h) for h in hues]
        # Work out the perceptual differences between pairs of adjacent
        # colours
        deltas = [
            color_diff.delta_e_cie2000(cols[i], cols[i - 1]) for i in range(len(cols))
        ]
        # Nudge each hue towards whichever adjacent colour is furthest
        # away perceptually
        nudges = [
            (deltas[(i + 1) % len(deltas)] - deltas[i]) * NUDGE_SIZE
            for i in range(len(deltas))
        ]
        hues = [(h + d) % 360 for (h, d) in zip(hues, nudges)]
    return hues
print(get_equally_spaced(COLOR_COUNT, iters=1000))
NUDGE_SIZE can mess it up if set wrong (changing it to 2 here results in nothing resembling a rainbow), and I think the best value depends on how many iterations you're doing and how many colours you're generating. The delta_e_cie2000 values for adjacent colours (with the settings given) are [16.290288769191324, 16.290288766871242, 16.290288753399196, 16.290288726186013, 16.290288645469946, 16.290288040904777, 16.290288035037598, 16.290288051426675, 16.290288079361915, 16.290288122430887, 16.290288180738187, 16.290288265350803, 16.290288469198916, 16.29028866254433, 16.2902887136652], which are pretty uniform; I think iters=1000 is overkill for this few colours. I'm using plain lists here, but it should translate to NumPy arrays and probably run a bit faster.
The algorithm works like this:
1. Start with a naïve evenly spaced set of hues.
2. Calculate the perceptual differences between adjacent pairs of colours.
3. Move each hue slightly towards whichever of its neighbours is most different from it perceptually. The size of this movement is proportional to NUDGE_SIZE.
4. Repeat steps 2-3 until the hues have been nudged iters times.
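To eyeball the result, here is a hedged sketch (assuming matplotlib; colorsys is from the standard library) that converts the nudged hues back to RGB and draws them as a strip:

import colorsys
import matplotlib.pyplot as plt

hues = get_equally_spaced(COLOR_COUNT, iters=1000)
# hsv_to_rgb expects h in [0, 1]
rgb = [colorsys.hsv_to_rgb(h / 360, SAT, VAL) for h in hues]
plt.imshow([rgb])  # a 1 x COLOR_COUNT x 3 float image
plt.axis('off')
plt.show()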
I have a large list of floats between 0 and 1 as seen below:
l1 = [0,0.2,0.9,0.75,0.3,1,0.43,0.65]
What I would like is to round these floats to the nearest of the numbers:
l2 = [0,0.25,0.5,0.75,1]
Result here would be:
r = [0,0.25,1,0.75,0.5,1,0.5,0.75]
What I mean is: for every item in l1, choose the item from l2 that is closest to it and replace the former with the latter. I could implement this using some for loops and exhaustive checking, but I was hoping for a slicker solution.
Is there a library in Python to achieve this elegantly?
If l2 is always [0, 0.25, 0.5, 0.75, 1] you can use NumPy:

import numpy as np

r = np.round(4 * np.array(l1)) / 4
Output:
array([0. , 0.25, 1. , 0.75, 0.25, 1. , 0.5 , 0.75])
If you want to only round up, use numpy.ceil:
r = np.ceil(4 * np.array(l1)) / 4
Output:
array([0. , 0.25, 1. , 0.75, 0.5 , 1. , 0.5 , 0.75])
Here's one way using round:
l1 = [round(n * 4) / 4 for n in l1]
print(l1)
Output:
[0.0, 0.25, 1.0, 0.75, 0.25, 1.0, 0.5, 0.75]
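For an arbitrary l2, one hedged option (a sketch, not from the answers above) is to pick the nearest value by broadcasting:

import numpy as np

l1 = np.array([0, 0.2, 0.9, 0.75, 0.3, 1, 0.43, 0.65])
l2 = np.array([0, 0.25, 0.5, 0.75, 1])
# for each element of l1, the index of the closest element of l2
idx = np.abs(l1[:, None] - l2[None, :]).argmin(axis=1)
r = l2[idx]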
I want to plot a sphere with latitudes in 3D using mayavi. But I don't want the latitudes in an equidistant angular range; I want an arrangement according to this: https://en.wikipedia.org/wiki/Spherical_segment
This should result in spherical segments which have the same surface area.
So far... Let's consider theta to be the polar angle and phi to be the azimuthal angle. Then I have the following code:
import numpy as np
from mayavi import mlab

## Create a sphere
r = 1.0
pi = np.pi
cos = np.cos
sin = np.sin
arccos = np.arccos
phi, theta = np.mgrid[-0.5*pi:0.5*pi:101j, 0:1*pi:101j]
x = r*sin(phi)*cos(theta)
y = r*sin(phi)*sin(theta)
z = r*cos(phi)

## Basic settings mlab
mlab.figure(1, bgcolor=(1, 1, 1), fgcolor=(0, 0, 0), size=(500, 500))
mlab.clf()
mlab.mesh(x, y, z, color=(0.9, 0., 0.), opacity=0.3)

phi1 = np.linspace(0, 2 * np.pi, 100)
theta1 = arccos(np.linspace(0, 1, 11))
for i in range(len(theta1)):
    x_pol = np.cos(phi1) * np.cos(theta1[i])
    y_pol = np.sin(phi1) * np.cos(theta1[i])
    z_pol = np.ones_like(phi1) * np.sin(theta1[i])
    mlab.plot3d(x_pol, y_pol, z_pol, color=(0, 0, 0), opacity=0.2, tube_radius=None)

mlab.show()
The result is shown in image0 below.
As you can see, the arrangement of the segments is not correctly ordered. So I changed the order in theta1:
theta1 = arccos(np.linspace(1, 0, 11))
The result is shown in image1 below. As you can see, the arrangement of the segments didn't change.
So, why is that? When I arrange the angular spacing from 0...1, this should come up with a different result than a spacing from 1...0. But actually it doesn't?!?
Does anyone have a clue what I did wrong?
Thanks
[image0]
[image1]
The ranges have the same values. The segments are the same, but in reversed order.
See the values of theta:
In [1]: np.flip(np.linspace(0,1,11), 0), np.linspace(1,0,11)
Out[1]:
(array([ 1. , 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0. ]),
array([ 1. , 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0. ]))
Thanks for your reply. I am not sure whether I got your point. In the first case theta looks like this:
In [1]: np.arccos(np.linspace(1,0,11))
Out [1]:
array([0. , 0.45102681, 0.64350111, 0.79539883, 0.92729522,
1.04719755, 1.15927948, 1.26610367, 1.36943841, 1.47062891,
1.57079633])
In the second case it looks like:
In [1]: np.arccos(np.linspace(0,1,11))
Out [1]:
array([1.57079633, 1.47062891, 1.36943841, 1.26610367, 1.15927948,
1.04719755, 0.92729522, 0.79539883, 0.64350111, 0.45102681,
0. ])
So to me, it seems correct.
Ok, sometimes it takes quite a while for me ^^
I figured out what I did wrong. I simply changed
np.arccos(np.linspace(0, 1, 11))
to
np.pi/2 - np.arccos(np.linspace(0, 1, 11))
which produces the correct output. The reason: equal-area segments need equally spaced z values (by the spherical-segment formula, the lateral area of a band depends only on its height), and since the code uses z_pol = sin(theta1), the band angles should be theta1 = arcsin(z), which is exactly pi/2 - arccos(z).
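Equivalently, a small sketch of the same fix:

# equal-area bands need equally spaced z, and z_pol = sin(theta1) above,
# so theta1 = arcsin(z) = pi/2 - arccos(z)
theta1 = np.arcsin(np.linspace(0, 1, 11))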
[correct image]
Well, sometimes you (I) don't see the forest for the trees... ^^
Greetings...
I have a NumPy array, let's say one with 4 rows and 6 (always an even number of) columns:
m = np.round(np.random.rand(4, 6), 2)
array([[ 0.99, 0.48, 0.05, 0.26, 0.92, 0.44],
[ 0.81, 0.54, 0.19, 0.38, 0.5 , 0.02],
[ 0.11, 0.96, 0.04, 0.69, 0.78, 0.31],
[ 0.5 , 0.53, 0.94, 0.77, 0.6 , 0.75]])
I now want to plot graphs according to the column pairs, in this case
Graph 1: x-values=m[:,1] and y-values=m[:,0]
Graph 2: x-values=m[:,3] and y-values=m[:,2]
Graph 3: x-values=m[:,5] and y-values=m[:,4]
The first two columns are basically a pair of values, the next two are another pair of values and the last two also are a pair of values.
All the graphs should be in the same plot!
I need a general solution for plotting multiple graphs like this with an undefined but EVEN number of columns of the array. Something like a loop!
Hope somebody can help me :)
You can loop over the column pairs:

import matplotlib.pyplot as plt

i = 1
while i < len(m[0]):
    x = m[:, i]
    y = m[:, i - 1]
    plt.plot(x, y)
    plt.savefig('placeholderName_%d.png' % i)
    plt.close()
    i = i + 2
Note that I'm starting at 1 and incrementing by two; this conforms to the example you presented.
This gives terrible results with the m array you specified, but if it was just a sample and your data is more realistic, the following should do:
for i in range(m.shape[1] // 2):
    plt.figure()
    # x from the odd column, y from the even one, matching the question
    plt.plot(m[:, 2 * i + 1], m[:, 2 * i])
If you want all the graphs in the same figure, just move the plt.figure() out of the loop:

plt.figure()
for i in range(m.shape[1] // 2):
    plt.plot(m[:, 2 * i + 1], m[:, 2 * i])
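Since all the graphs then share one figure, a hedged touch-up (a sketch, assuming matplotlib) is to label each pair so the graphs can be told apart:

import matplotlib.pyplot as plt

plt.figure()
for i in range(m.shape[1] // 2):
    # label each column pair so the legend identifies the graphs
    plt.plot(m[:, 2 * i + 1], m[:, 2 * i], label='pair %d' % (i + 1))
plt.legend()
plt.show()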
I have an array, something like:
array = np.arange(0,4,1).reshape(2,2)
[[0 1]
 [2 3]]
I want to both upsample this array as well as interpolate the resulting values. I know that a good way to upsample an array is by using:
array = array.repeat(2, axis=0).repeat(2, axis=1)
[[0 0 1 1]
[0 0 1 1]
[2 2 3 3]
[2 2 3 3]]
but I cannot figure out a way to interpolate the values to remove the 'blocky' nature between each 2x2 section of the array.
I want something like this:
[[0 0.4 1 1.1]
[1 0.8 1 2.1]
[2 2.3 3 3.1]
[2.1 2.3 3.1 3.2]]
Something like this (NOTE: these will not be the exact numbers). I understand that it may not be possible to interpolate this particular 2D grid, but using the first grid above, an interpolation should be possible during the upsampling process, as you are increasing the number of pixels and can therefore 'fill in the gaps'.
I am not too fussed about the type of interpolation, provided the final output is a smoothed surface! I have tried to use scipy.interpolate.interp2d but to no avail; I would be grateful if someone could share their wisdom!
You can use SciPy's interp2d for the interpolation; you can find the documentation here.
I've modified the example from the documentation a bit:
import numpy as np
from scipy import interpolate

x = np.array(range(2))
y = np.array(range(2))
a = np.array([[0, 1], [2, 3]])
f = interpolate.interp2d(x, y, a, kind='linear')
xnew = np.linspace(0, 2, 4)
ynew = np.linspace(0, 2, 4)
znew = f(xnew, ynew)
If you print znew it should look like this:
array([[ 0. , 0.66666667, 1. , 1. ],
[ 1.33333333, 2. , 2.33333333, 2.33333333],
[ 2. , 2.66666667, 3. , 3. ],
[ 2. , 2.66666667, 3. , 3. ]])
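Note that xnew and ynew extend past the sampled range (the data sit at 0 and 1), which is why the last row and column just repeat the clamped edge values. Also, interp2d is deprecated in newer SciPy releases; a hedged sketch of one replacement, RegularGridInterpolator, on the same toy data:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

a = np.array([[0, 1], [2, 3]], dtype=float)
interp = RegularGridInterpolator((np.arange(2), np.arange(2)), a)
xnew = np.linspace(0, 1, 4)  # stay inside the sampled range
X, Y = np.meshgrid(xnew, xnew, indexing='ij')
znew = interp(np.stack([X, Y], axis=-1))  # shape (4, 4)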
I would use scipy.misc.imresize:

import numpy as np
import scipy.misc

array = np.arange(0, 4, 1).reshape(2, 2)
out = scipy.misc.imresize(array, 2.0)
The 2.0 indicates that I want the output to be twice the dimensions of the input. You could alternatively supply an int or a tuple to specify a percentage of the original dimensions or just the new dimensions themselves.
This is very easy to use, but there is an extra step because imresize rescales everything so that your max value becomes 255 and your min becomes 0. (And it changes the datatype to np.uint8.) You may need to do something like:
out = out.astype(array.dtype) / 255 * (np.max(array) - np.min(array)) + np.min(array)
Let's look at the output:
>>> out.round(2)
array([[0. , 0.25, 0.75, 1. ],
[0.51, 0.75, 1.26, 1.51],
[1.51, 1.75, 2.26, 2.51],
[2. , 2.25, 2.75, 3. ]])
imresize comes with a deprecation warning and a substitute, though:
DeprecationWarning: imresize is deprecated! imresize is deprecated
in SciPy 1.0.0, and will be removed in 1.2.0. Use
skimage.transform.resize instead.
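A hedged sketch of that substitute (assuming scikit-image); resize works on floats directly, so the 0-255 rescaling step above is not needed:

from skimage.transform import resize

# bi-linear by default; output keeps the input's value range
out = resize(array.astype(float), (4, 4))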
With the resample method in scipy.signal, you can up-sample your 2D array sequentially along one axis and then the other.
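A minimal sketch of that approach (note that resample is Fourier-based, so it assumes periodic data and can ring on small arrays):

from scipy.signal import resample

# up-sample to 4 samples along axis 0, then along axis 1
out = resample(resample(array, 4, axis=0), 4, axis=1)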
Consider two urns, E and U. There are holy grails and crappy grails in each of these. Denote the holy ones with H.
Say we draw out of both urns, xe times out of E and xu times out of U: how many holy grails are we going to find? This is easily solvable for any pair (xe, xu). But I'd like to do this for grids of xe and xu values.
What is the most efficient way to do this in Python using standard packages?
Here is my approach.
import numpy as np
import scipy.stats as stats
binomial = stats.binom.pmf
# define the grids of E, U to search
numberOfE = np.arange(3)
numberOfHolyE = np.arange(3)
numberOfU = np.arange(5)
numberOfHolyU = np.arange(5)
# mesh it
E, U, EH, UH = np.meshgrid(numberOfE, numberOfU, numberOfHolyE, numberOfHolyU, indexing='ij')
# independent draws from both urns. Probabilities are 0.9 and 0.1
drawsE = binomial(EH, E, 0.9)
drawsU = binomial(UH, U, 0.1)
# joint probability of being at a specific grid point
prob = drawsE * drawsU
totalHigh = EH + UH
This is how far I've come:
In [77]: prob[1,1,:]
Out[77]:
array([[ 0.09, 0.01, 0. , 0. , 0. ],
[ 0.81, 0.09, 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. ]])
In [78]: totalHigh[1,1,:]
Out[78]:
array([[0, 1, 2, 3, 4],
[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6]])
I think these matrices mean the following:
Take a look at where totalHigh has the value 1: if I draw one time from each urn, I have a 0.81 probability of drawing one holy grail from E and zero from U, and 0.01 the other way around. That means the total probability of drawing one holy grail, conditional on drawing once from each urn, is 0.82.
Which brings me to my second question: conditional on doing it this way, how do I sum up these probabilities efficiently, conditional on the first two dimensions? I effectively want to transform these 4D matrices into 3D matrices.
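One possible sketch of that reduction (not from the original post): for each (E, U) cell, sum the probabilities that share the same totalHigh value using np.bincount:

nE, nU = prob.shape[:2]
maxHigh = totalHigh.max()
result = np.zeros((nE, nU, maxHigh + 1))
for i in range(nE):
    for j in range(nU):
        # sum probabilities over (EH, UH) cells grouped by total holy count
        result[i, j] = np.bincount(
            totalHigh[i, j].ravel(),
            weights=prob[i, j].ravel(),
            minlength=maxHigh + 1,
        )
# sanity check: result[1, 1, 1] should be 0.82, as computed by hand above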