Python issue with matplotlib Pie Chart Function (erroneous labels)

I am having a strange issue when setting up pie charts using matplotlib. For some reason, it does not seem to be handling my labels argument correctly.
A little background: I am working on a tool that will allow us to create tables that summarize the hits on our web map services. This basically just loops through all the log files and grabs username, site name, and date/time info. I have the first part working nicely and it creates some good summary tables. However, it would be a nice cherry on top to generate pie charts showing who uses each site the most (by username). I have also gotten the pie chart creation to work, where each chart is named after the site.
What is not working correctly is the labels.
I have admittedly not worked with matplotlib very much at all, but according to the code samples I have seen online, my code seems sound. Just to double-check that the number of labels matches the number of slices for the pie, I print the results for each chart, and the input arguments seem to be correct, but maybe I am missing something?
The input table I am using looks good and that is the data being used for the pie chart. Here is the particular function I am working with:
#! /usr/local/bin/python
import arcpy, os
from pylab import *
import numpy as np

def CreatePieChart(table, out, case_field, name_field, data_field):
    # Make new folder
    _dir = os.path.join(out, 'Figures')
    if not os.path.exists(_dir):
        os.makedirs(_dir)
    # Grab unique values
    rows = arcpy.SearchCursor(table)
    cases = sorted(list(set(r.getValue(case_field) for r in rows)))
    del rows
    # vals dictionary
    vals_dict = {}
    # Make table views
    tv = arcpy.MakeTableView_management(table, 'temp_table')
    for case in cases:
        query = ''' {0} = '{1}' '''.format(arcpy.AddFieldDelimiters(tv, case_field), case)
        arcpy.SelectLayerByAttribute_management(tv, 'NEW_SELECTION', query)
        # Get total number for pie slice
        rows = arcpy.SearchCursor(tv)
        vals_dict[case] = [[r.getValue(name_field), r.getValue(data_field)] for r in rows]
        del rows
    # Populate pie chart
    for case, vals in vals_dict.iteritems():
        the_fig = figure(1, figsize=(8,8))
        axes([0.1, 0.1, 0.8, 0.8])
        fig = os.path.join(_dir, '{0}.png'.format(case.replace(' ','_')))
        ax = the_fig.add_subplot(111)
        label = [v[0] for v in vals]
        fracs = [v[1] for v in vals]
        print '%s %s %s' % (case, label, fracs)
        if len(label) == len(fracs):  # to make sure same number of labels as slices
            cmap = plt.cm.prism
            color = cmap(np.linspace(0., 1., len(fracs)))
            pie_wedges = pie(fracs, colors=color, labels=label, pctdistance=0.5, labeldistance=1.05)
            for wedge in pie_wedges[0]:
                wedge.set_edgecolor('white')
            ax.set_title(case)
            savefig(fig)
            print 'Created: %s' % fig
            del the_fig, label, pie_wedges, fracs
    return fig

if __name__ == '__main__':
    table = r'C:\TEMP\Usage_Summary.dbf'
    out = r'C:\TEMP'
    case = 'Site_ID'
    name = 'Username'
    data = 'Count'
    CreatePieChart(table, out, case, name, data)
And here is what is printed out to my Python window (slice count does indeed match the number of labels):
ElkRiver [u'elkriver', u'jasonco', u'jenibr', u'johnsh', u'nickme'] [731, 1, 2, 55, 58]
Created: C:\TEMP\Figures\ElkRiver.png
LongPrairie [u'brianya', u'chuckde', u'johnsh', u'longprairie', u'nickme', u'scottja'] [6, 7, 61, 129, 25, 3]
Created: C:\TEMP\Figures\LongPrairie.png
Madison [u'deanhe', u'johnsh', u'kathrynst', u'madison', u'scottja'] [7, 9, 1, 39, 1]
Created: C:\TEMP\Figures\Madison.png
NorthMankato [u' ', u'adamja', u'brianma', u'johnsh', u'johnvo', u'marksc', u'mattme', u'nickme', u'nmankato', u'scottja'] [20, 13, 65, 64, 8, 2, 4, 63, 64, 1]
Created: C:\TEMP\Figures\NorthMankato.png
Orono [u'arroned', u'davidma', u'dionsc', u'dougkl', u'gis_guest', u'jenibr', u'johnsh', u'kenad', u'kevinfi', u'kylele', u'marksc', u'natest', u'nickme', u'orono', u'samel', u'scottja', u'sethpe', u'sueda'] [2, 11, 1, 3, 5, 1, 40, 6, 1, 1, 5, 1, 37, 819, 8, 5, 2, 2]
Created: C:\TEMP\Figures\Orono.png
BellePlaine [u'billhe', u'billsc', u'bplaine', u'christopherga', u'craigla', u'dennisst', u'elkriver', u'empire', u'gis_guest', u'jasonfe', u'joedu', u'johnsh', u'joshst', u'lancefr', u'nickme', u'richardde', u'scottja', u'teresabu', u'travisje', u'wadena'] [3, 1, 331, 1, 1, 40, 1, 1, 12, 1, 27, 61, 4, 1, 47, 3, 12, 2, 2, 1]
Created: C:\TEMP\Figures\BellePlaine.png
Osseo [u'johnsh', u'karlan', u'kevinbi', u'marcusth', u'nickme', u'osseo', u'scottja'] [22, 2, 23, 11, 66, 54, 3]
Created: C:\TEMP\Figures\Osseo.png
EmpireTownship [u'empire', u'johnsh', u'lancefr', u'lanile', u'marksc', u'nickme', u'richardde', u'scottja', u'travisje'] [96, 10, 1, 14, 2, 224, 1, 1, 3]
Created: C:\TEMP\Figures\EmpireTownship.png
Courtland [u'courtland', u'empire', u'joedu', u'johnsh', u'nickme', u'scottja'] [24, 3, 3, 10, 27, 2]
Created: C:\TEMP\Figures\Courtland.png
LeSueur [u' ', u'johnsh', u'marksc', u'nickme'] [8, 6, 1, 98]
Created: C:\TEMP\Figures\LeSueur.png
Stratford [u'johnsh', u'neilgu', u'scottja', u'stratford'] [9, 3, 2, 47]
Created: C:\TEMP\Figures\Stratford.png
>>>
Something funky is happening behind the scenes, because the charts come out with way more labels than I pass into the pie() function.
I do not yet have a reputation to post pictures, but I have posted a picture on gis stack exchange. Here is the link where the picture can be viewed. The picture is from the "Osseo" chart that is created, and you can see from my print statements that these are the slices and labels:
Osseo [u'johnsh', u'karlan', u'kevinbi', u'marcusth', u'nickme', u'osseo', u'scottja'] [22, 2, 23, 11, 66, 54, 3]
I am not sure why so many extra labels are being created. Am I missing something here?
Here is a clean version of the code so others can test:
#! /usr/local/bin/python
import os
from pylab import *
import numpy as np

_dir = os.path.join(os.getcwd(), 'Figures')
if not os.path.exists(_dir):
    os.makedirs(_dir)
vals_dict = {u'ElkRiver': [[u'elkriver', 731], [u'jasonco', 1], [u'jenibr', 2], [u'johnsh', 55], [u'nickme', 58]],
u'LongPrairie': [[u'brianya', 6], [u'chuckde', 7], [u'johnsh', 61], [u'longprairie', 129], [u'nickme', 25],
[u'scottja', 3]], u'Madison': [[u'deanhe', 7], [u'johnsh', 9], [u'kathrynst', 1], [u'madison', 39],
[u'scottja', 1]], u'NorthMankato': [[u' ', 20], [u'adamja', 13],[u'brianma', 65], [u'johnsh', 64],
[u'johnvo', 8], [u'marksc', 2], [u'mattme', 4], [u'nickme', 63], [u'nmankato', 64], [u'scottja', 1]],
u'Orono': [[u'arroned', 2], [u'davidma', 11], [u'dionsc', 1], [u'dougkl', 3], [u'gis_guest', 5], [u'jenibr', 1],
[u'johnsh', 40], [u'kenad', 6], [u'kevinfi', 1], [u'kylele', 1], [u'marksc', 5], [u'natest', 1], [u'nickme', 37],
[u'orono', 819], [u'samel', 8], [u'scottja', 5], [u'sethpe', 2], [u'sueda', 2]], u'BellePlaine': [[u'billhe', 3],
[u'billsc', 1], [u'bplaine', 331], [u'christopherga', 1], [u'craigla', 1], [u'dennisst', 40], [u'elkriver', 1],
[u'empire', 1], [u'gis_guest', 12], [u'jasonfe', 1], [u'joedu', 27], [u'johnsh', 61], [u'joshst', 4], [u'lancefr', 1],
[u'nickme', 47], [u'richardde', 3], [u'scottja', 12], [u'teresabu', 2], [u'travisje', 2], [u'wadena', 1]],
u'Osseo': [[u'johnsh', 22], [u'karlan', 2], [u'kevinbi', 23], [u'marcusth', 11], [u'nickme', 66], [u'osseo', 54],
[u'scottja', 3]], u'EmpireTownship': [[u'empire', 96], [u'johnsh', 10], [u'lancefr', 1], [u'lanile', 14], [u'marksc', 2],
[u'nickme', 224], [u'richardde', 1], [u'scottja', 1], [u'travisje', 3]], u'Courtland': [[u'courtland', 24], [u'empire', 3],
[u'joedu', 3], [u'johnsh', 10], [u'nickme', 27], [u'scottja', 2]], u'LeSueur': [[u' ', 8], [u'johnsh', 6], [u'marksc', 1],
[u'nickme', 98]], u'Stratford': [[u'johnsh', 9], [u'neilgu', 3], [u'scottja', 2], [u'stratford', 47]]}
# Populate pie chart
for case, vals in vals_dict.iteritems():
    the_fig = figure(1, figsize=(8,8))
    axes([0.1, 0.1, 0.8, 0.8])
    fig = os.path.join(_dir, '{0}.png'.format(case.replace(' ','_')))
    ax = the_fig.add_subplot(111)
    label = [v[0] for v in vals]
    fracs = [v[1] for v in vals]
    print '%s %s %s' % (case, label, fracs)
    if len(label) == len(fracs):  # to make sure same number of labels as slices
        cmap = plt.cm.prism
        color = cmap(np.linspace(0., 1., len(fracs)))
        pie_wedges = pie(fracs, colors=color, labels=label, pctdistance=0.5, labeldistance=1.1)
        for wedge in pie_wedges[0]:
            wedge.set_edgecolor('white')
        ax.set_title(case)
        savefig(fig)
        print 'Created: %s' % fig
        del label, pie_wedges, fracs
    the_fig.clf()

From what I can tell without being able to run your example, the problem is that you're using the same figure in your loop over the cases:

for case, vals in vals_dict.iteritems():
    the_fig = figure(1, figsize=(8,8))

This gets figure 1 each time, and that figure never gets cleared, so each chart is drawn on top of all the previous ones. So either make a new figure each time (call figure(figsize=(8,8)) instead of figure(1, figsize=(8,8))), or clear the figure right after you create it (with the_fig.clf()). Let me know if that helps.
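Both options can be sketched in a standalone example (the data, output folder, and Agg backend here are illustrative assumptions, not from the question):

```python
import os
import tempfile

import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

# Create a brand-new figure per iteration instead of reusing figure 1,
# and close it afterwards so figures do not pile up in memory.
out_dir = tempfile.mkdtemp()
for name, fracs in {'A': [1, 2, 3], 'B': [4, 5]}.items():
    fig = plt.figure(figsize=(8, 8))      # no number -> always a fresh figure
    plt.pie(fracs, labels=[str(f) for f in fracs])
    fig.savefig(os.path.join(out_dir, '{0}.png'.format(name)))
    plt.close(fig)                        # the alternative is fig.clf() + reuse

print(sorted(os.listdir(out_dir)))  # ['A.png', 'B.png']
```

Because each iteration gets its own figure, no labels from a previous chart can leak into the next one.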
Edit: here's a version that doesn't show the axes, and uses a legend instead of labels so that the labels don't get squished together. I also added percentages to the legend entries, especially since with your color scheme you sometimes end up with slices of the same color (I tried to fix that but didn't have any luck). Matplotlib actually does a pretty decent job of placing small slices as far from each other as possible; the problem is that in some of your charts you just have too many small slices, so it's not possible to separate them all. That's why I switched to using a legend.
for case, vals in vals_dict.iteritems():
    the_fig = figure(figsize=(8,8))
    axes([0.1, 0.1, 0.8, 0.8])
    fig = os.path.join(_dir, '{0}.png'.format(case.replace(' ','_')))
    label = [v[0] for v in vals]
    fracs = [v[1] for v in vals]
    print '%s %s %s' % (case, label, fracs)
    if len(label) == len(fracs):  # to make sure same number of labels as slices
        cmap = plt.cm.prism
        color = cmap(np.linspace(0., 1., len(fracs)))
        pie_wedges = pie(fracs, colors=color, pctdistance=0.5, labeldistance=1.1)
        for wedge in pie_wedges[0]:
            wedge.set_edgecolor('white')
        legend(map(lambda x, f: '%s (%0.0f%%)' % (x, f), label, fracs), loc=4)
        title(case)
        savefig(fig)
        print 'Created: %s' % fig

Related

Share and manipulate multiple numpy arrays through multiprocessing

I'm trying to make use of multiprocessing to speed up my array-based calculations. General workflow is as follows:
I have three arrays:
id_array holds the segment IDs (entries with the same ID belong together)
class_array is a classified array (just integers representing class values from an image classification)
prob_array has the probability for these classes
Based on the segments, I want to:
find the class majority within each segment
average the probabilities within the segment, but only for the "pixels" that have the class majority
Here is my non-parallel example, which works fine:
import numpy as np

id_array = np.array([[1, 1, 2, 2, 2],
                     [1, 1, 2, 2, 4],
                     [3, 3, 4, 4, 4],
                     [3, 3, 4, 4, 4]])
class_array = np.array([[7, 7, 6, 8, 8],
                        [5, 7, 7, 8, 8],
                        [8, 8, 5, 5, 8],
                        [9, 9, 8, 7, 7]])
prob_array = np.array([[0.7, 0.3, 0.9, 0.5, 0.1],
                       [0.4, 0.6, 0.3, 0.5, 0.9],
                       [0.8, 0.6, 0.2, 0.2, 0.3],
                       [0.4, 0.4, 0.6, 0.3, 0.7]])

all_ids = np.unique(id_array)
dst_classes = np.zeros_like(class_array)
dst_probs = np.zeros_like(prob_array)

for my_id in all_ids:
    segment = np.where(id_array == my_id)
    class_data = class_array[segment]
    # get majority of classes within segment
    majority = np.bincount(class_data.flatten()).argmax()
    # get probabilities within segment
    prob_data = prob_array[segment]
    # get probabilities within segment where class equals majority
    majority_probs = prob_data[np.where(class_data == majority)]
    # get median of these probabilities
    median_prob = np.nanmedian(majority_probs)
    # write values
    dst_classes[segment] = majority
    dst_probs[segment] = median_prob

print(dst_classes)
print(dst_probs)
The problem is that my real data have something like 4 million segments and this then takes a week to compute. So I followed this tutorial and came up with this:
import numpy as np
import multiprocessing as mp

WORKER_DICT = dict()
NODATA = 0

def shared_array_from_np_array(data_array, init_value=None):
    raw_array = mp.RawArray(np.ctypeslib.as_ctypes_type(data_array.dtype), data_array.size)
    shared_array = np.frombuffer(raw_array, dtype=data_array.dtype).reshape(data_array.shape)
    if init_value:
        np.copyto(shared_array, np.full_like(data_array, init_value))
        return raw_array, shared_array
    else:
        np.copyto(shared_array, data_array)
        return raw_array, shared_array

def init_worker(id_array, class_array, prob_array, class_results, prob_results):
    WORKER_DICT['id_array'] = id_array
    WORKER_DICT['class_array'] = class_array
    WORKER_DICT['prob_array'] = prob_array
    WORKER_DICT['class_results'] = class_results
    WORKER_DICT['prob_results'] = prob_results
    WORKER_DICT['shape'] = id_array.shape
    mp.freeze_support()

def worker(id):
    id_array = WORKER_DICT['id_array']
    class_array = WORKER_DICT['class_array']
    prob_array = WORKER_DICT['prob_array']
    class_result = WORKER_DICT['class_results']
    prob_result = WORKER_DICT['prob_results']
    # array indices for "id"
    segment = np.where(id_array == id)
    # get data at these indices, mask nodata values
    class_data = np.ma.masked_equal(class_array[segment], NODATA)
    # get majority value
    majority_class = np.bincount(class_data.flatten()).argmax()
    # get probabilities
    probs = prob_array[segment]
    majority_probs = probs[np.where(class_array[segment] == majority_class)]
    med_majority_probs = np.nanmedian(majority_probs)
    class_result[segment] = majority_class
    prob_result[segment] = med_majority_probs
    return

if __name__ == '__main__':
    # segment IDs
    id_ra, id_array = shared_array_from_np_array(np.array(
        [[1, 1, 2, 2, 2],
         [1, 1, 2, 2, 4],
         [3, 3, 4, 4, 4],
         [3, 3, 4, 4, 4]]))
    # classification
    cl_ra, class_array = shared_array_from_np_array(np.array(
        [[7, 7, 6, 8, 8],
         [5, 7, 7, 8, 8],
         [8, 8, 5, 5, 8],
         [9, 9, 8, 7, 7]]))
    # probabilities
    pr_ra, prob_array = shared_array_from_np_array(np.array(
        [[0.7, 0.3, 0.9, 0.5, 0.1],
         [0.4, 0.6, 0.3, 0.5, 0.9],
         [0.8, 0.6, 0.2, 0.2, 0.3],
         [0.4, 0.4, 0.6, 0.3, 0.7]]))
    cl_res, class_results = shared_array_from_np_array(class_array, 0)
    pr_res, prob_results = shared_array_from_np_array(prob_array, 0.)
    unique_ids = np.unique(id_array)
    init_args = (id_ra, cl_ra, pr_ra, cl_res, pr_res, id_array.shape)
    with mp.Pool(processes=2, initializer=init_worker, initargs=init_args) as pool:
        pool.map_async(worker, unique_ids)
    print('Majorities:', cl_res)
    print('Probabilities:', pr_res)
But I do not see how I can now get my results, or whether they are correct. I tried

np.frombuffer(cl_res)
np.frombuffer(pr_res)

but that gives me only 10 values for cl_res (there should be 20) and they seem completely random, while pr_res has exactly the same values as prob_array.
I have tried making use of other examples around here, like this one, but can't get them to work either. That one looks like a similar problem, but it already requires a lot of knowledge of how multiprocessing really works, and I don't have that (I'm a total beginner with multiprocessing).
Several things to fix:
You need to create the numpy arrays in init_worker(), which should also take a shape argument:
def init_worker(id_ra, cl_ra, pr_ra, cl_res, pr_res, shape):
    WORKER_DICT['id_array'] = np.ctypeslib.as_array(id_ra, shape)
    WORKER_DICT['class_array'] = np.ctypeslib.as_array(cl_ra, shape)
    WORKER_DICT['prob_array'] = np.ctypeslib.as_array(pr_ra, shape)
    WORKER_DICT['class_results'] = np.ctypeslib.as_array(cl_res, shape)
    WORKER_DICT['prob_results'] = np.ctypeslib.as_array(pr_res, shape)
You should check if init_value is not None instead of just if init_value in shared_array_from_np_array(), since an init_value of 0 evaluates to False.
mp.freeze_support() should only be called immediately after if __name__ == '__main__', as per its docs.
pool.map_async() returns an AsyncResult object that needs to be waited on; you probably want pool.map(), which blocks until the processing is done.
You can access the results directly in the main section with the class_results and prob_results arrays.
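The first and last points hinge on the same fact, which can be shown in a minimal single-process sketch (the shape and values here are made up): a NumPy view built with np.ctypeslib.as_array shares its buffer with the RawArray, so a write made through the view a worker builds in init_worker() is visible through the main process's view once pool.map() has returned.

```python
import multiprocessing as mp

import numpy as np

# Two independent NumPy views over the same RawArray buffer: one standing in
# for the main process's view, one for the view init_worker() would build.
shape = (2, 3)
raw = mp.RawArray('q', shape[0] * shape[1])              # 'q' = signed 64-bit int

main_view = np.ctypeslib.as_array(raw).reshape(shape)    # "main process" view
worker_view = np.ctypeslib.as_array(raw).reshape(shape)  # "worker" view

worker_view[0, :] = 7          # a worker writing its result for one segment
print(main_view[0].tolist())   # [7, 7, 7] -- same memory, no copying
```

With actual worker processes the same sharing works because RawArray memory is inherited by (or passed to) the children, which is exactly why no explicit result transfer is needed.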

How to slice arrays with a percentage of overlap

I have a set of data like this:
numpy.array([[3, 7],[5, 8],[6, 19],[8, 59],[10, 42],[12, 54], [13, 32], [14, 19], [99, 19]])
which I want to split into a number of chunks with a percentage of overlap, for each column separately... for example, for column 1, splitting into 3 chunks with 50% overlap (resulting in a 2-D array):
[[3, 5, 6, 8,],
[6, 8, 10, 12,],
[10, 12, 13, 14,]]
(ignoring the last row, which would result in [13, 14, 99], not identical in size to the rest).
I'm trying to write a function that takes the array, the number of chunks, and the overlap percentage, and returns the result.
That's a window function, so use skimage.util.view_as_windows:
from skimage.util import view_as_windows
out = view_as_windows(in_arr[:, 0], window_shape = 4, step = 2)
If you need numpy only, you can use this recipe
For numpy only, a quite fast approach is:

def rolling(a, window, step):
    shape = ((a.size - window)//step + 1, window)
    strides = (step*a.itemsize, a.itemsize)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

And you can call it like so:

rolling(arr[:,0].copy(), 4, 2)

Remark: I got unexpected outputs for rolling(arr[:,0], 4, 2), so I just took a copy instead.
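On newer NumPy (1.20+), the same windowing can be sketched without as_strided: sliding_window_view builds every window and the step is then applied by slicing the first axis. It returns a read-only view, which sidesteps the copy quirk noted above. (The helper name rolling_swv is my own; the data is the question's column 1.)

```python
import numpy as np

# Step-able sliding windows without as_strided (requires NumPy >= 1.20).
def rolling_swv(a, window, step):
    # all windows of length `window`, then keep every `step`-th one
    return np.lib.stride_tricks.sliding_window_view(a, window)[::step]

arr = np.array([[3, 7], [5, 8], [6, 19], [8, 59], [10, 42],
                [12, 54], [13, 32], [14, 19], [99, 19]])

print(rolling_swv(arr[:, 0], 4, 2))
# [[ 3  5  6  8]
#  [ 6  8 10 12]
#  [10 12 13 14]]
```

This reproduces the 50%-overlap example from the question (window 4, step 2) and drops the incomplete trailing window automatically.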

Drawing a 3d box in a 3d scatterplot using plotly

I was trying to plot a 3d box in a 3d scatterplot. Basically, this was the result of an optimization problem (background is here). The box is the largest empty box possible given all the points.
In the plotly docs I noticed an example of a 3d cube built using Mesh3d. I copied this:
import plotly.graph_objects as go
x=[ 0.93855, 0.20203, 0.54967, 0.58658, 0.39931, 0.06736, 0.61786, 0.36016, 0.12761, 0.71581, 0.81998, 0.04528, 0.08231, 0.41814, 0.58679, 0.21181, 0.34489, 0.21812, 0.46830, 0.81898,
0.57360, 0.18453, 0.99792, 0.37970, 0.51954, 0.84264, 0.22431, 0.31440, 0.23893, 0.28493, 0.76353, 0.45365, 0.44480, 0.94911, 0.98050, 0.28615, 0.02626, 0.85477, 0.60404, 0.47469,
0.10588, 0.55919, 0.42194, 0.34432, 0.80530, 0.88291, 0.53627, 0.45454, 0.01345, 0.84411, 0.04520, 0.35532, 0.45255, 0.99365, 0.72259, 0.08634, 0.78806, 0.28674, 0.57993, 0.84025,
0.22766, 0.51236, 0.83945, 0.21910, 0.41881, 0.18910, 0.00183, 0.59310, 0.12687, 0.45273, 0.14348, 0.66694, 0.28690, 0.32822, 0.93954, 0.34411, 0.25276, 0.14377, 0.08142, 0.05422,
0.51448, 0.48659, 0.66585, 0.25156, 0.69205, 0.21175, 0.72413, 0.92027, 0.79572, 0.13293, 0.81984, 0.25584, 0.42517, 0.41333, 0.75978, 0.60823, 0.83418, 0.37497, 0.10177, 0.01215]
y=[ 0.61424, 0.39918, 0.57526, 0.04537, 0.24058, 0.18701, 0.18450, 0.82907, 0.66274, 0.96315, 0.58458, 0.12807, 0.38695, 0.30646, 0.88417, 0.63859, 0.40404, 0.06445, 0.19149, 0.91259,
0.99317, 0.67468, 0.12954, 0.11868, 0.79252, 0.98170, 0.74706, 0.28944, 0.55650, 0.91190, 0.26978, 0.94868, 0.82534, 0.37846, 0.38055, 0.42637, 0.26349, 0.09109, 0.10308, 0.63728,
0.37470, 0.85528, 0.19407, 0.29683, 0.71095, 0.72789, 0.47052, 0.54725, 0.62322, 0.52442, 0.32547, 0.54581, 0.51336, 0.58652, 0.76841, 0.00042, 0.80743, 0.32560, 0.29931, 0.19091,
0.95850, 0.42236, 0.70728, 0.85435, 0.79661, 0.14909, 0.80658, 0.36827, 0.46344, 0.92196, 0.09802, 0.02856, 0.73966, 0.55969, 0.34595, 0.80634, 0.18350, 0.84283, 0.04560, 0.41515,
0.50151, 0.52665, 0.44211, 0.48040, 0.39643, 0.99743, 0.18206, 0.09721, 0.33793, 0.69245, 0.97670, 0.70870, 0.75288, 0.51147, 0.22298, 0.84305, 0.62014, 0.41474, 0.82815, 0.42865]
z=[ 0.13338, 0.81253, 0.46946, 0.76145, 0.83335, 0.96434, 0.79175, 0.20481, 0.60056, 0.26519, 0.89917, 0.16271, 0.02890, 0.49017, 0.18970, 0.16751, 0.47065, 0.85533, 0.73768, 0.14031,
0.92923, 0.11933, 0.40330, 0.46713, 0.69964, 0.25784, 0.87656, 0.25886, 0.64603, 0.92604, 0.83728, 0.71988, 0.48486, 0.57123, 0.78618, 0.70429, 0.30544, 0.20687, 0.47584, 0.58176,
0.43336, 0.35453, 0.96509, 0.98293, 0.88605, 0.70571, 0.51733, 0.09292, 0.69618, 0.76415, 0.82743, 0.99876, 0.86101, 0.58373, 0.03917, 0.60540, 0.59567, 0.94481, 0.35552, 0.80555,
0.97449, 0.31020, 0.61952, 0.48569, 0.50740, 0.69248, 0.01918, 0.04973, 0.21958, 0.98663, 0.09143, 0.24220, 0.96312, 0.66227, 0.91103, 0.26285, 0.28079, 0.10938, 0.07499, 0.34065,
0.83692, 0.33815, 0.89640, 0.06275, 0.01852, 0.08153, 0.88351, 0.08171, 0.87036, 0.51620, 0.90021, 0.67128, 0.36607, 0.54804, 0.72661, 0.18951, 0.11629, 0.46170, 0.24500, 0.88841]
fig = go.Figure(data=[
    go.Scatter3d(x=x, y=y, z=z,
                 mode='markers',
                 marker=dict(size=2)),
    go.Mesh3d(
        # 8 vertices of a cube
        x=[0.608, 0.608, 0.998, 0.998, 0.608, 0.608, 0.998, 0.998],
        y=[0.091, 0.963, 0.963, 0.091, 0.091, 0.963, 0.963, 0.091],
        z=[0.140, 0.140, 0.140, 0.140, 0.571, 0.571, 0.571, 0.571],
        i=[7, 0, 0, 0, 4, 4, 6, 6, 4, 0, 3, 2],
        j=[3, 4, 1, 2, 5, 6, 5, 2, 0, 1, 6, 3],
        k=[0, 7, 2, 3, 6, 7, 1, 1, 5, 5, 7, 6],
        opacity=0.6,
        color='#DC143C'
    )
])
fig.show()
However, the picture really shows the individual triangles. Is there a better way to draw a 3d box (in Plotly)?
For me, using the argument flatshading=True did the job.
Code
fig = go.Figure(data=[
    go.Scatter3d(x=x, y=y, z=z,
                 mode='markers',
                 marker=dict(size=2)),
    go.Mesh3d(
        # 8 vertices of a cube
        x=[0.608, 0.608, 0.998, 0.998, 0.608, 0.608, 0.998, 0.998],
        y=[0.091, 0.963, 0.963, 0.091, 0.091, 0.963, 0.963, 0.091],
        z=[0.140, 0.140, 0.140, 0.140, 0.571, 0.571, 0.571, 0.571],
        i=[7, 0, 0, 0, 4, 4, 6, 6, 4, 0, 3, 2],
        j=[3, 4, 1, 2, 5, 6, 5, 2, 0, 1, 6, 3],
        k=[0, 7, 2, 3, 6, 7, 1, 1, 5, 5, 7, 6],
        opacity=0.6,
        color='#DC143C',
        flatshading=True
    )
])
Output
You can use Poly3DCollection to create the shape you need by defining the shape's corners.
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from itertools import product

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# create list of corners
z = list(product([-1, 1], repeat=3))
# set verts connectors
verts = [[z[0], z[1], z[5], z[4]], [z[4], z[6], z[7], z[5]],
         [z[7], z[6], z[2], z[3]], [z[2], z[0], z[1], z[3]],
         [z[5], z[7], z[3], z[1]], [z[0], z[2], z[6], z[4]]]
ax.set_xlim3d(-2, 2)
ax.set_ylim3d(-2, 2)
ax.set_zlim3d(-2, 2)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
# plot sides
ax.add_collection3d(Poly3DCollection(verts, facecolors='blue', linewidths=1, edgecolors='black', alpha=.1))
plt.show()
Output:

Plotting an array of arrays crashes matplotlib

I have an array of shape (6416, 17, 3). I am trying to plot each (17, 3) entry one after another in a 3D grid, as if it were a video. This is the code I wrote for the visualizer function:
def draw_limbs_3d(ax, joints_3d, limb_parents):
    # ax.clear()
    for i in range(joints_3d.shape[0]):
        x_pair = [joints_3d[i, 0], joints_3d[limb_parents[i], 0]]
        y_pair = [joints_3d[i, 1], joints_3d[limb_parents[i], 1]]
        z_pair = [joints_3d[i, 2], joints_3d[limb_parents[i], 2]]
        ax.plot(x_pair, y_pair, zs=z_pair, linewidth=3)

def visualizer(joints_3d):
    joint_parents = [16, 15, 1, 2, 3, 1, 5, 6, 14, 8, 9, 14, 11, 12, 14, 14, 1]
    fig = plt.figure('3D Pose')
    ax_3d = plt.axes(projection='3d')
    plt.ion()
    ax_3d.clear()
    ax_3d.view_init(-90, -90)
    ax_3d.set_xlim(-1000, 1000)
    ax_3d.set_ylim(-1000, 1000)
    ax_3d.set_zlim(0, 4000)
    ax_3d.set_xticks([])
    ax_3d.set_yticks([])
    ax_3d.set_zticks([])
    white = (1.0, 1.0, 1.0, 0.0)
    ax_3d.w_xaxis.set_pane_color(white)
    ax_3d.w_yaxis.set_pane_color(white)
    ax_3d.w_xaxis.line.set_color(white)
    ax_3d.w_yaxis.line.set_color(white)
    ax_3d.w_zaxis.line.set_color(white)
    draw_limbs_3d(ax_3d, joints_3d, joint_parents)
and I use this code to run on all entries:

joints_3d = np.load('output.npy')
for joint in joints_3d:
    joint = joint.reshape((17, 3))
    visualizer(joint)

which causes the program to crash. It works for a single array, though, and I get the correct plot. I would be grateful if you could help me. Thank you.

How do I repeat my function using the next three values in the list?

If I type:

microcar(np.array([[45, 10, 10], [110, 10, 8], [60, 10, 5], [170, 10, 4]]), np.array([[47, 10, 15], [112, 9, 8.5], [50, 10, 8], [160, 8.5, 5]]))

it returns:

(52.53219888177297, 85.09035245341184, -148.85032037263932, 18.5359684117836, 100, 150.0)

which is good; however, I want it to repeat this code for the next set of 3 values and so on, e.g. [110, 10, 8] for the expected and [50, 10, 8] for the actual.
I can't figure out how to incorporate a loop where it treats the next set of 3 values as the new one to look at.
Also, cos(45) = 0.707106 for 45 degrees; however, it treats the input as radians and gives cos(45) = 0.5253. Is there a way to convert the setting to degrees?
Below is my code
import numpy as np

def microcar(expected, actual):
    horizontal_expected = expected[0,1]*expected[0,2]*np.cos(expected[0,0])
    vertical_expected = expected[0,1]*expected[0,2]*np.sin(expected[0,0])
    horizontal_actual = actual[0,1]*actual[0,2]*np.cos(actual[0,0])
    vertical_actual = actual[0,1]*actual[0,2]*np.sin(actual[0,0])
    distance_expected = expected[0,1]*expected[0,2]
    distance_actual = actual[0,1]*actual[0,2]
    return horizontal_expected, vertical_expected, horizontal_actual, vertical_actual, distance_expected, distance_actual
You can zip the inputs and loop over them like this:
import numpy as np

def microcar(expected, actual):
    l = zip(expected, actual)
    res = []
    for e in l:
        horizontal_expected = e[0][1]*e[0][2]*np.cos(e[0][0])
        vertical_expected = e[0][1]*e[0][2]*np.sin(e[0][0])
        horizontal_actual = e[1][1]*e[1][2]*np.cos(e[1][0])
        vertical_actual = e[1][1]*e[1][2]*np.sin(e[1][0])
        distance_expected = e[0][1]*e[0][2]
        distance_actual = e[1][1]*e[1][2]
        res.append([
            horizontal_expected,
            vertical_expected,
            horizontal_actual,
            vertical_actual,
            distance_expected,
            distance_actual
        ])
    return res

x = microcar(
    np.array([[45, 10, 10], [110, 10, 8], [60, 10, 5], [170, 10, 4]]),
    np.array([[47, 10, 15], [112, 9, 8.5], [50, 10, 8], [160, 8.5, 5]])
)
print(x)
The output:
[
[52.53219888177297, 85.09035245341184, -148.85032037263932, 18.5359684117836, 100, 150.0],
[-79.92166506517184, -3.539414246805677, 34.88163648998712, -68.08466373406274, 80, 76.5],
[-47.62064902075782, -15.240531055110834, 77.19728227936906, -20.9899882963143, 50, 80.0],
[37.51979008477766, 13.865978219881212, -41.46424579379759, 9.325573481107702, 40, 42.5]
]
I don't know what kind of output you expected, so this simply returns a list of lists with the results.
As for your question about np.cos: it expects input in radians, so you can convert degrees to radians with np.deg2rad:

import numpy as np
print(np.cos(np.deg2rad(45)))
# 0.7071067811865476
Without using zip, you can create a range equal to the length of one of the arrays and loop over that, using the index i to access both arrays:
import numpy as np

def microcar(expected, actual):
    res = []
    for i in range(len(expected)):
        horizontal_expected = expected[i,1]*expected[i,2]*np.cos(expected[i,0])
        vertical_expected = expected[i,1]*expected[i,2]*np.sin(expected[i,0])
        horizontal_actual = actual[i,1]*actual[i,2]*np.cos(actual[i,0])
        vertical_actual = actual[i,1]*actual[i,2]*np.sin(actual[i,0])
        distance_expected = expected[i,1]*expected[i,2]
        distance_actual = actual[i,1]*actual[i,2]
        res.append([
            horizontal_expected,
            vertical_expected,
            horizontal_actual,
            vertical_actual,
            distance_expected,
            distance_actual
        ])
    return res

x = microcar(
    np.array([[45, 10, 10], [110, 10, 8], [60, 10, 5], [170, 10, 4]]),
    np.array([[47, 10, 15], [112, 9, 8.5], [50, 10, 8], [160, 8.5, 5]])
)
print(x)
Output:
[
[52.53219888177297, 85.09035245341184, -148.85032037263932, 18.5359684117836, 100, 150.0],
[-79.92166506517184, -3.539414246805677, 34.88163648998712, -68.08466373406274, 80, 76.5],
[-47.62064902075782, -15.240531055110834, 77.19728227936906, -20.9899882963143, 50, 80.0],
[37.51979008477766, 13.865978219881212, -41.46424579379759, 9.325573481107702, 40, 42.5]
]
Note that this assumes both inputs are of equal length; if they are not, you will likely encounter an IndexError. The assumption matters for zip as well, but there you would "lose" the surplus entries of the longer array instead.
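That difference in behavior can be shown in a tiny sketch (the values are made up):

```python
# zip silently truncates to the shorter input, while index-based access
# raises IndexError once i runs past the shorter list's end.
short, long_ = ['a', 'b'], [1, 2, 3]

print(list(zip(long_, short)))  # [(1, 'a'), (2, 'b')] -- the 3 is dropped

try:
    for i in range(len(long_)):
        pair = (long_[i], short[i])   # fails at i == 2
except IndexError as e:
    print('IndexError:', e)
```

So with zip you get a shorter result silently, and with indexing you get a loud failure; which one is preferable depends on whether mismatched lengths indicate a bug in your data.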
