How to convert a grayscale image into a list of pixel values? - python

I am trying to create a Python program which takes a grayscale, 24×24 pixel image file (I haven't decided on the format, so suggestions are welcome) and converts it to a list of pixel values from 0 (white) to 255 (black).
I plan on using this list to create an MNIST-like byte file of the picture that can be recognized by TensorFlow handwriting-recognition algorithms.
I have found the Pillow library to be the most useful for this task, iterating over each pixel and appending its value to a list:
from PIL import Image

img = Image.open('eggs.png').convert('1')
rawData = img.load()
data = []
for y in range(24):
    for x in range(24):
        data.append(rawData[x, y])
Yet this solution has two problems:

1. The pixel values are not stored as integers, but as pixel objects that cannot be further manipulated mathematically and are therefore useless for my purpose.
2. Even the Pillow docs state that:
Accessing individual pixels is fairly slow. If you are looping over all of the pixels in an image, there is likely a faster way using other parts of the Pillow API.

You can convert the image data into a Python list (or list-of-lists) like this:
from PIL import Image

img = Image.open('eggs.png').convert('L')  # convert image to 8-bit grayscale
WIDTH, HEIGHT = img.size

data = list(img.getdata())  # convert image data to a list of integers
# convert that to a 2D list (list of lists of integers)
data = [data[offset:offset+WIDTH] for offset in range(0, WIDTH*HEIGHT, WIDTH)]

# At this point the image's pixels are all in memory and can be accessed
# individually using data[row][col].

# For example:
for row in data:
    print(' '.join('{:3}'.format(value) for value in row))
# Here's another, more compact representation.
chars = '#%#*+=-:. '  # Change as desired.
scale = (len(chars)-1)/255.
print()
for row in data:
    print(' '.join(chars[int(value*scale)] for value in row))
Here's the output from the second example, using a small 24x24 RGB eggs.png image for testing:
# # % * # # # # % - . * # # # # # # # # # # # #
# # . . + # # . = # # # # # # # # # # # #
# * . . * # # # # # # # # # # # #
# # . . . . + % % # # # # # = # # # #
# % . : - - - : % # % : # # # #
# # . = = - - - = - . . = = % # # #
# = - = : - - : - = . . . : . % # # #
% . = - - - - : - = . . - = = = - # # #
= . - = - : : = + - : . - = - : - = : * %
- . . - = + = - . . - = : - - - = . -
= . : : . - - . : = - - - - - = . . %
% : : . . : - - . : = - - - : = : # #
# # : . . = = - - = . = + - - = - . . # #
# # # . - = : - : = - . - = = : . . # #
# # % : = - - - : = - : - . . . - #
# # * : = : - - - = . . - . . . + #
# # . = - : - = : : : . - % # # #
* . . . : = = - : . . - . - # # # # #
* . . . : . . . - = . = # # # # # #
# : - - . . . . # # # # # # # # #
# # = # # # * . . . - # # # # # # # # #
# # # # # # # . . . # # # # # # # # # # # #
# # # # # # # - . % # # # # # # # # # # # #
# # # # # # # # . : % # # # # # # # # # # # # #
Access to the pixel data should now be faster than using the object img.load() returns (and the values will be integers in the range of 0..255).
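If NumPy is available, the whole conversion can also be done in one call, which avoids per-pixel loops entirely. A minimal sketch (a synthetic 24×24 image stands in for eggs.png so the snippet is self-contained):

```python
import numpy as np
from PIL import Image

# Stand-in for eggs.png so the sketch is self-contained.
img = Image.new('L', (24, 24), color=128)
arr = np.asarray(img)            # 2D uint8 array, shape (height, width)
pixels = arr.flatten().tolist()  # flat row-major list of ints in 0..255
```

`arr[row, col]` then gives the same values as `data[row][col]` above.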

You can access the greyscale value of each individual pixel by accessing the r, g or b value, which will all be the same for a greyscale image, i.e.:
img = Image.open('eggs.png').convert('RGB')  # 'RGB' so each pixel is an (r, g, b) tuple
rawData = img.load()
data = []
for y in range(24):
    for x in range(24):
        data.append(rawData[x, y][0])
This doesn't solve the problem of access speed.
I'm more familiar with scikit-image than Pillow. It seems to me that if all you are after is listing the greyscale values, you could use scikit-image, which stores images as NumPy arrays, and use img_as_ubyte to represent the image as a uint8 array containing values between 0 and 255.
Images are NumPy Arrays provides a good starting point to see what the code looks like.
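For illustration, here is roughly what `img_as_ubyte` does for a float image already clipped to [0, 1] (a NumPy-only sketch; scikit-image's exact rounding and range checks are more thorough):

```python
import numpy as np

# A float image in [0, 1], as skimage functions often produce.
float_img = np.linspace(0.0, 1.0, 24 * 24).reshape(24, 24)

# Roughly equivalent to skimage.util.img_as_ubyte for in-range float input:
# rescale to 0..255 and round to unsigned 8-bit.
ubyte_img = np.round(float_img * 255).astype(np.uint8)
```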


Simulations taking too long and failing with a simple model in Abaqus

I'm trying to simulate the compression of a tensegrity structure in Abaqus, but I can't reach a solution: the time increments are too small and the simulations keep failing.
The model simulates a regular truncated tensegrity cuboctahedron being compressed by 0.15 of the radius of the circumsphere surrounding the structure (the sphere is not present in the model, as it is only used to size it).
The model is fixed at the bottom while the upper part is free to rotate, and a displacement BC is used to compress the structure. I've added some constraints to the nodes in the middle to force them to stay in the same plane (this should be typical behaviour for tensegrity structures, and the BCs are okay since they don't produce significant reaction forces).
My goal is to run this and many other tensegrity simulations, adding bendability (using quadratic beams) and other internal structures, but all the simulations have more or less the same problem: at some point they fail due to excessive rotation.
BCs can be seen here
The model has few elements and nodes, so it should be easy to run in Abaqus, but if I don't force mass scaling I end up with very small increments, and that's not feasible for my model.
The material properties are taken from the literature, as the model simulates the compression of the cytoskeleton of a cell. I've tried everything from increasing or decreasing the seed number on the beams to adjusting the boundary conditions, and I don't know what to do anymore.
I'm attaching the Python 3 code that I wrote to create this model; in the initial part it is possible to adjust the mass scaling, the seeding of the beams, and the geometrical and mechanical parameters of the model.
Thanks to anyone who'll help me understand what I'm doing wrong!
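A back-of-envelope check shows why the increments get so small without mass scaling: the explicit stable increment scales with element length over the material wave speed. A rough sketch using the material values from the script below (the element length Le is my own assumption, not taken from the model):

```python
import math

E = 1200.0       # Young's modulus of the microtubules, MPa
rho = 1.43e-09   # density, ton/mm^3

c = math.sqrt(E / rho)        # dilatational wave speed, mm/s
Le = 0.015 / 2.0              # assumed characteristic element length, mm
dt_stable = Le / c            # rough stable time increment, s
increments = 9.0 / dt_stable  # increments needed for the 9 s step
```

With these numbers the estimate lands around 1e-8 s per increment, i.e. on the order of a billion increments for the 9 s step, which is why mass scaling (or a much shorter step time) becomes necessary.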
# Script aimed at generating the necessary geometries and material properties
# of the Tensegrity cuboctahedron with 1 layer (no nucleoskeleton).
## Units
# Length - mm
# Force - N
# Stress - MPa
# Mass - Ton
# Density - Ton/mm3
# Time - s
# # # # # # # # VARIABLES INITIALIZATION # # # # # # # #
# -*- coding: mbcs -*-
# General
import copy
import math

import mesh
from part import *
from material import *
from section import *
from assembly import *
from step import *
from interaction import *
from load import *
from mesh import *
from optimization import *
from job import *
from sketch import *
from visualization import *
from connectorBehavior import *
import numpy as np
# import matplotlib.pyplot as plt
# General
MassScale = 1e-4
cpus = 1
seedmt = 2
# Type of experiment
Bendable = True
Compression = True
Time = 9 # seconds
Displace = 0.15 # Relative displacement (Percentage of cell radius)
# Microtubules
Emt = 1200 # Young's modulus in MPa
BSmt = 1 # Bending stiffness
CAmt = 0.00000000019 # Cross sectional Area in mm2
vmt = 0.3 # Poisson's ratio
Rmt = math.sqrt(CAmt / math.pi) # Radius of the cross sectional area
Densmt = 1.43e-09 # Density
# Microfilaments
Emf = 2600 # Young's modulus in MPa
CAmf = 0.000000000019 # Cross sectional Area in mm2
vmf = 0.3 # Poisson's ratio
Rmf = math.sqrt(CAmf / math.pi) # Radius of the cross sectional area
Densmf = 1.43e-09 # Density
# Almost fixed variables
cell_radius = 0.015
prestress = 0
model_name = ''
Displace = cell_radius*Displace # Total displacement in mm
# Compute model name
if Bendable:
    model_name = model_name + 'B_'
if Compression:
    model_name = model_name + 'Comp'
else:
    model_name = model_name + 'Trac'
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # CALCULATIONS FOR NODES # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Input points
x = np.array([-1.5, -1.5, -1, -1, -1, -1, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1.5, 1.5])
y = np.array([-0.5, 0.5, -1, -1, 1, 1, -1.5, -0.5, -0.5, 0.5, 0.5, 1.5, -1.5, -0.5, -0.5, 0.5, 0.5, 1.5, -1, -1, 1, 1, -0.5, 0.5])
z = np.array([0, 0, -(1/math.sqrt(2)), 1/math.sqrt(2), -(1/math.sqrt(2)), 1/math.sqrt(2), 0, -(math.sqrt(2)), math.sqrt(2), -(math.sqrt(2)), math.sqrt(2), 0, 0, -(math.sqrt(2)), math.sqrt(2), -(math.sqrt(2)), math.sqrt(2), 0,-(1/math.sqrt(2)), 1/math.sqrt(2), -(1/math.sqrt(2)), 1/math.sqrt(2), 0, 0])
# Adjust to the cell radius
factor = (1/math.sqrt(5/2))*cell_radius
x1 = np.multiply(x, factor)
y1 = np.multiply(y, factor)
z1 = np.multiply(z, factor)
pts = np.column_stack((x1, y1, z1))
(p1, p2, p3, p4, p5, p6, p7, p8, p9, p10, p11, p12,
 p13, p14, p15, p16, p17, p18, p19, p20, p21, p22, p23, p24) = pts
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # PART CREATION # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
profile_name = 'CytoskeletonProfile'
Model = mdb.Model(modelType=STANDARD_EXPLICIT, name=model_name)
# Microtubuli
CskPart = Model.Part(dimensionality=THREE_D, name='Csk', type=DEFORMABLE_BODY)
CskPart.ReferencePoint(point=(p17[0], p17[1], p17[2]))
# Datum points, in the order the wire definitions below expect
for p in (p13, p19, p4, p20, p21, p8, p23, p7, p10, p3, p6,
          p15, p1, p16, p2, p22, p5, p9, p12, p18, p14, p24, p11):
    CskPart.DatumPointByCoordinate(coords=(p[0], p[1], p[2]))
Microtubules = CskPart.WirePolyLine(meshable=ON, points=((CskPart.referencePoints[1], CskPart.datums[2]),
(CskPart.datums[3], CskPart.datums[4]),
(CskPart.datums[5], CskPart.datums[6]),
(CskPart.datums[7], CskPart.datums[8]),
(CskPart.datums[9], CskPart.datums[10]),
(CskPart.datums[11], CskPart.datums[12]),
(CskPart.datums[13], CskPart.datums[14]),
(CskPart.datums[15], CskPart.datums[16]),
(CskPart.datums[17], CskPart.datums[18]),
(CskPart.datums[19], CskPart.datums[20]),
(CskPart.datums[21], CskPart.datums[22]),
(CskPart.datums[23], CskPart.datums[24]))) # IMPRINT
Microfilaments = CskPart.WirePolyLine(meshable=ON, points=((CskPart.referencePoints[1], CskPart.datums[24]),
(CskPart.referencePoints[1], CskPart.datums[17]),
(CskPart.referencePoints[1], CskPart.datums[13]),
(CskPart.datums[2], CskPart.datums[5]),
(CskPart.datums[2], CskPart.datums[9]),
(CskPart.datums[2], CskPart.datums[3]),
(CskPart.datums[3], CskPart.datums[8]),
(CskPart.datums[3], CskPart.datums[22]),
(CskPart.datums[4], CskPart.datums[19]),
(CskPart.datums[4], CskPart.datums[14]),
(CskPart.datums[4], CskPart.datums[9]),
(CskPart.datums[5], CskPart.datums[13]),
(CskPart.datums[5], CskPart.datums[8]),
(CskPart.datums[6], CskPart.datums[15]),
(CskPart.datums[6], CskPart.datums[23]),
(CskPart.datums[6], CskPart.datums[21]),
(CskPart.datums[7], CskPart.datums[11]),
(CskPart.datums[7], CskPart.datums[10]),
(CskPart.datums[7], CskPart.datums[22]),
(CskPart.datums[8], CskPart.datums[23]),
(CskPart.datums[9], CskPart.datums[11]),
(CskPart.datums[10], CskPart.datums[18]),
(CskPart.datums[10], CskPart.datums[15]),
(CskPart.datums[11], CskPart.datums[14]),
(CskPart.datums[12], CskPart.datums[16]),
(CskPart.datums[12], CskPart.datums[20]),
(CskPart.datums[12], CskPart.datums[24]),
(CskPart.datums[13], CskPart.datums[19]),
(CskPart.datums[14], CskPart.datums[16]),
(CskPart.datums[15], CskPart.datums[22]),
(CskPart.datums[16], CskPart.datums[18]),
(CskPart.datums[17], CskPart.datums[23]),
(CskPart.datums[17], CskPart.datums[21]),
(CskPart.datums[18], CskPart.datums[20]),
(CskPart.datums[19], CskPart.datums[24]),
(CskPart.datums[20], CskPart.datums[21]))) # IMPRINT
# Sets
MtSet = CskPart.Set(edges=CskPart.getFeatureEdges(Microtubules.name), name='MtSet')
MfSet = CskPart.Set(edges=CskPart.getFeatureEdges(Microfilaments.name), name='MfSet')
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # PROPERTIES # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
## Material properties
# Microfilaments
MfMat = Model.Material(name='Microfilaments')
MfMat.Density(table=((Densmf,),))
MfMat.Elastic(noCompression=OFF, table=((Emf, vmf),)) # Table contains E and v
# Microtubules
MtMat = Model.Material(name='Microtubules')
MtMat.Density(table=((Densmt,),))
MtMat.Elastic(noCompression=OFF, table=((Emt, vmt),)) # Table contains E and v
## Section assignment
if Bendable:
    # Microtubules
    MtProfile = Model.CircularProfile(name='MicrotubulesProfile', r=Rmt)
    MtSection = Model.BeamSection(consistentMassMatrix=False, integration=DURING_ANALYSIS, material=MtMat.name, name='MtSection', poissonRatio=0.3, profile=MtProfile.name, temperatureVar=LINEAR)
    CskPart.SectionAssignment(offset=0.0, offsetField='', offsetType=MIDDLE_SURFACE, region=MtSet, sectionName=MtSection.name, thicknessAssignment=FROM_SECTION)
    CskPart.assignBeamSectionOrientation(method=N1_COSINES, n1=(0.0, 1.0, -1.0), region=MtSet)
else:
    # Microtubules
    MtSection = Model.TrussSection(area=CAmt, material=MtMat.name, name='MtSection')
    CskPart.SectionAssignment(offset=0.0, offsetField='', offsetType=MIDDLE_SURFACE, region=MtSet, sectionName=MtSection.name, thicknessAssignment=FROM_SECTION)
# Microfilaments
MfSection = Model.TrussSection(area=CAmf, material=MfMat.name, name='MfSection')
CskPart.SectionAssignment(offset=0.0, offsetField='', offsetType=MIDDLE_SURFACE, region=MfSet, sectionName=MfSection.name, thicknessAssignment=FROM_SECTION)
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # ASSEMBLY # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Generate Assembly
Assembly = Model.rootAssembly
Assembly.DatumCsysByDefault(CARTESIAN)
# Create Cytoskeleton
CskInstance = Assembly.Instance(dependent=ON, name='Csk', part=CskPart)
Centroid = np.array([(p14[0] + p24[0]) / 2, (p14[1] + p24[1]) / 2, (p14[2] + p24[2]) / 2])
norm = Centroid / np.linalg.norm(Centroid)
zaxis = np.array([0,0,1])
direct = np.cross(norm,zaxis)
angle = math.acos(norm[2])
angle = angle * 180 / math.pi
Assembly.rotate(angle=angle, axisDirection=direct, axisPoint=(0.0, 0.0, 0.0), instanceList=[CskInstance.name])
Model.rootAssembly.regenerate()
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # STEPS # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Preparing steps for explicit solver
Model.ExplicitDynamicsStep(improvedDtMethod=ON, timePeriod=Time,
massScaling=((SEMI_AUTOMATIC, MODEL, THROUGHOUT_STEP, 0.0, MassScale, BELOW_MIN, 1, 0, 0.0,
0.0, 0, None),),
name='Loading', previous='Initial')
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # LOADS # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Create Set of fixed nodes
minz = 0
maxz = 0
for i in range(24):
    tempz = CskInstance.vertices[i].pointOn[0][2]
    if tempz < minz:
        minz = tempz
    elif tempz > maxz:
        maxz = tempz
LoadNodes = []
FixedNodes = []
InternalNodes = []
for i in range(24):
    node = CskInstance.vertices[i]
    tempz = node.pointOn[0][2]
    if tempz == minz:
        FixedNodes.append(node)
    elif tempz == maxz:
        LoadNodes.append(node)
    else:
        InternalNodes.append(node)
# Reverse Force and displacement if traction
if not Compression:
    Displace = -Displace
Model.TabularAmplitude(data=((0.0, 0.0), (Time/float(2), 0.5), (Time, 1)), name='LinearLoading', smooth=SOLVER_DEFAULT, timeSpan=STEP)
# Create Loads and boundary conditions
Model.DisplacementBC(name='Displacement', createStepName='Loading', region=LoadNodes, amplitude='LinearLoading', u3=-Displace, ur3=0)
Model.EncastreBC(createStepName='Initial', localCsys=None, name='FixedNodes', region=FixedNodes)
Model.DisplacementBC(name='InternalPlanes', createStepName='Loading', region=InternalNodes, amplitude='LinearLoading', ur1=0, ur2=0)
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # MESH # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
if not Bendable:
    seedmt = 1
# Assign number of elements per microfilaments and Microtubules
CskPart.seedEdgeByNumber(edges=MtSet.edges[:], number=seedmt, constraint=FINER)
CskPart.seedEdgeByNumber(edges=MfSet.edges[:], number=1, constraint=FINER)
TrussMeshMf = mesh.ElemType(elemCode=T3D2, elemLibrary=EXPLICIT) # Microfilaments are always not bendable
if Bendable:
    TrussMeshMt = mesh.ElemType(elemCode=B32, elemLibrary=EXPLICIT) # B31 Linear Beam, B32 Quadratic Beam
else:
    TrussMeshMt = mesh.ElemType(elemCode=T3D2, elemLibrary=EXPLICIT)
# Assign Element type
CskPart.setElementType(regions=MfSet, elemTypes=(TrussMeshMf,))
CskPart.setElementType(regions=MtSet, elemTypes=(TrussMeshMt,))
# Generate the mesh
CskPart.generateMesh()
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # JOB # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Build job name
JobName = 'Cubo'
if Bendable:
    JobName = JobName + '_Bend'
if Compression:
    JobName = JobName + '_Comp'
else:
    JobName = JobName + '_Tract'
# Build Job
mdb.Job(name=JobName, model=model_name, description='A description',
type=ANALYSIS, atTime=None, waitMinutes=0, waitHours=0, queue=None,
memory=90, memoryUnits=PERCENTAGE, explicitPrecision=DOUBLE_PLUS_PACK,
nodalOutputPrecision=FULL, echoPrint=OFF, modelPrint=OFF,
contactPrint=OFF, historyPrint=OFF, userSubroutine='', scratch='',
resultsFormat=ODB, parallelizationMethodExplicit=DOMAIN, numDomains=cpus,
activateLoadBalancing=False, multiprocessingMode=DEFAULT, numCpus=cpus)
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # HISTORY # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
Model.FieldOutputRequest(createStepName='Loading',
name='Loading', numIntervals=300, variables=PRESELECT)
Model.HistoryOutputRequest(createStepName='Loading',
name='Loading', variables=PRESELECT)
Model.rootAssembly.regenerate()

How to maintain the surround-view cameras' (4 camera images) positions in the point cloud

I am trying to generate 3D point cloud from 4 RGB-D images. I am able to do that with Open3D but I am unable to maintain the position of the images. You can find the camera_parameters.json here.
import open3d as o3d
import cv2
import os
import numpy as np
import matplotlib.pyplot as plt
import json
def load_json(path):
    with open(path) as f:
        return json.load(f)
def parse_camera_params(camera):
    param = {}
    param['camera'] = camera['camera']
    param['depth'] = {}
    param['depth']['fx'] = camera['K_depth'][0][0]
    param['depth']['fy'] = camera['K_depth'][1][1]
    param['depth']['cx'] = camera['K_depth'][0][2]
    param['depth']['cy'] = camera['K_depth'][1][2]
    param['depth']['K'] = camera['K_depth']
    # ignore distCoeffs_depth's 5th (1000) and 6th (0) element
    # since they are strange
    param['depth']['distCoeffs'] = np.array(camera['distCoeffs_depth'][:5])
    param['depth_width'] = camera['depth_width']
    param['depth_height'] = camera['depth_height']
    param['color'] = {}
    param['color']['fx'] = camera['K_color'][0][0]
    param['color']['fy'] = camera['K_color'][1][1]
    param['color']['cx'] = camera['K_color'][0][2]
    param['color']['cy'] = camera['K_color'][1][2]
    param['color']['K'] = camera['K_color']
    # ignore distCoeffs_color's 5th (1000) and 6th (0) element
    # since they are strange
    param['color']['distCoeffs'] = np.array(camera['distCoeffs_color'][:5])
    param['color_width'] = camera['color_width']
    param['color_height'] = camera['color_height']
    # world to depth
    w2d_T = np.array(camera['M_world2sensor'])
    param['w2d_R'] = w2d_T[0:3, 0:3]
    param['w2d_t'] = w2d_T[0:3, 3]
    param['w2d_T'] = camera['M_world2sensor']
    d2w_T = np.linalg.inv(w2d_T)
    param['d2w_R'] = d2w_T[0:3, 0:3]
    param['d2w_t'] = d2w_T[0:3, 3]
    param['d2w_T'] = d2w_T
    return param
if __name__ == '__main__':
    data_dir = "data/"
    camera_params = load_json(os.path.join(data_dir, 'camera_parameters.json'))
    SVC_NUM = 4
    pcd_combined = o3d.geometry.PointCloud()
    for i in range(SVC_NUM):
        param = parse_camera_params(camera_params['sensors'][i])
        color = cv2.imread(os.path.join(data_dir, 'color_{:05d}.png'.format(i)))
        color = cv2.cvtColor(color, cv2.COLOR_BGR2RGB)
        depth = cv2.imread(os.path.join(data_dir, 'depth_{:05d}.png'.format(i)), -1)
        # depth = depth * 0.001 # factor to scale the depth image from carla
        o3d_color = o3d.geometry.Image(color)
        o3d_depth = o3d.geometry.Image(depth)
        rgbd_image = o3d.geometry.RGBDImage.create_from_tum_format(o3d_color, o3d_depth, False)
        h, w = depth.shape
        dfx, dfy, dcx, dcy = param['depth']['fx'], param['depth']['fy'], param['depth']['cx'], param['depth']['cy']
        intrinsic = o3d.camera.PinholeCameraIntrinsic(w, h, dfx, dfy, dcx, dcy)
        intrinsic.intrinsic_matrix = param['depth']['K']
        cam = o3d.camera.PinholeCameraParameters()
        cam.intrinsic = intrinsic
        cam.extrinsic = np.array(param['w2d_T'])
        pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd_image, cam.intrinsic, cam.extrinsic)
        o3d.io.write_point_cloud("svc_{:05d}_v13.pcd".format(i), pcd)
        pcd_combined += pcd
    o3d.io.write_point_cloud("svc_global_v13.pcd", pcd_combined)
With the above code I am getting output of svc_global_v13.pcd like below.
As you can see, all the images are projected into the center. As indicated in the JSON file, I would like the images to be positioned as left, right, front and rear in the 3D point cloud.
What am I missing here?

Python script simplification

I created this Python script to replace images in a PSD file, but I feel that I am repeating the same code and am looking to simplify it; since I have more instances of "groups", I pasted just 2 here to keep it shorter. Here is the PS file screenshot.
photoshop file screenshot
Thank you for your time.
import win32com.client
#Open Adobe Photoshop App programmatically and locate PSD file
ps = win32com.client.Dispatch("Photoshop.Application")
mainPsd = ps.Open("C:/Users/Admin/Documents/images/edit_tiling_wall.psd")
#group 5 copy
mainGrp = mainPsd.layerSets[0]
#all grps in main group
allLayerSets = mainPsd.layerSets[0].layerSets
#function to resize
def resize(layer, w, h):
    w_max = w
    h_max = h
    bounds = layer.bounds
    w = bounds[2] - bounds[0]
    h = bounds[3] - bounds[1]
    # print(w, h)
    diffw = w_max - w
    diffh = h_max - h
    # print(diffw, diffh)
    aumw = diffw + w
    aumh = diffh + h
    # print(aumw, aumh)
    moltw = 100 * aumw / w
    molth = 100 * aumh / h
    # print(moltw, molth)
    layer.Resize(moltw, molth)
    bounds = layer.bounds
    w = bounds[2] - bounds[0]
    h = bounds[3] - bounds[1]
    return w, h
def translate(doc, layer, goalX, goalY):
    bounds = layer.bounds
    w = bounds[2] - bounds[0]
    h = bounds[3] - bounds[1]
    posX = doc.width - round(doc.width - (bounds[0] + w / 2))
    posY = doc.height - round(doc.height - (bounds[1] + h / 2))
    layer.Translate(goalX - posX, goalY - posY)
    bounds = layer.bounds
    w = bounds[2] - bounds[0]
    h = bounds[3] - bounds[1]
    posX = doc.width - round(doc.width - (bounds[0] + w / 2))
    posY = doc.height - round(doc.height - (bounds[1] + h / 2))
    return posX, posY
for grp in allLayerSets:
    # filter first type of texture
    if grp.name.startswith('Group 1'):
        print(grp.name)
        print(grp.artLayers[-1].name)  # base layer name
        # base layer
        baseLayer = grp.artLayers[-1]
        # get width and height of base layer
        boundsLayer = baseLayer.bounds
        w = boundsLayer[2] - boundsLayer[0]
        h = boundsLayer[3] - boundsLayer[1]
        print(w, h)
        # get position (center of layer img) of base layer
        posX = mainPsd.width - round(mainPsd.width - (boundsLayer[0] + w / 2))
        posY = mainPsd.height - round(mainPsd.height - (boundsLayer[1] + h / 2))
        print(posX, posY)
        # new image to add
        newTexture = "C:/CG_CONTENT/TEXTURES/Wall_v2_height.exr"
        newHeightMap = ps.Open(newTexture)
        newHeightMap_layer = newHeightMap.ArtLayers.Item(1)
        newHeightMap_layer.Copy()
        newHeightMap.Close(2)
        # Set as active image in Photoshop
        ps.ActiveDocument = mainPsd
        # Paste 'newHeightMap' image from clipboard
        pasted = mainPsd.Paste()
        currentLayer = mainPsd.activeLayer
        # Move the pasted layer to the correct grp
        currentLayer.Move(grp, 1)
        # resize in pixels the new texture
        resize(currentLayer, w, h)
        # IS IT FLIPPED HORIZONTALLY OR VERTICALLY? MAYBE ADD A TAG TO FLIPPED LAYERS?
        if baseLayer.name.endswith('_x'):
            # flip horizontal
            currentLayer.Resize(-100, 100)
        elif baseLayer.name.endswith('_y'):
            # flip vertical
            currentLayer.Resize(100, -100)
        elif baseLayer.name.endswith('_x_n_y'):
            # flip horizontal and vertical
            currentLayer.Resize(-100, -100)
        # translate the texture to position
        translate(mainPsd, currentLayer, posX, posY)
        baseLayer.visible = False
    # filter second type of texture
    elif grp.name.startswith('Group 2'):
        print(grp.name)
        print(grp.artLayers[-1].name)  # base layer name
        # base layer
        baseLayer = grp.artLayers[-1]
        # get width and height of base layer
        boundsLayer = baseLayer.bounds
        w = boundsLayer[2] - boundsLayer[0]
        h = boundsLayer[3] - boundsLayer[1]
        print(w, h)
        # get position (center of layer img) of base layer
        posX = mainPsd.width - round(mainPsd.width - (boundsLayer[0] + w / 2))
        posY = mainPsd.height - round(mainPsd.height - (boundsLayer[1] + h / 2))
        print(posX, posY)
        # new image to add
        newTexture = "C:/CG_CONTENT/TEXTURES/Wall_v3_height.exr"
        newHeightMap = ps.Open(newTexture)
        newHeightMap_layer = newHeightMap.ArtLayers.Item(1)
        newHeightMap_layer.Copy()
        newHeightMap.Close(2)
        # Set as active image in Photoshop
        ps.ActiveDocument = mainPsd
        # Paste 'newHeightMap' image from clipboard
        pasted = mainPsd.Paste()
        currentLayer = mainPsd.activeLayer
        # Move the pasted layer to the correct grp
        currentLayer.Move(grp, 1)
        # resize in pixels the new texture
        resize(currentLayer, w, h)
        # IS IT FLIPPED HORIZONTALLY OR VERTICALLY? MAYBE ADD A TAG TO FLIPPED LAYERS?
        if baseLayer.name.endswith('_x'):
            # flip horizontal
            currentLayer.Resize(-100, 100)
        elif baseLayer.name.endswith('_y'):
            # flip vertical
            currentLayer.Resize(100, -100)
        elif baseLayer.name.endswith('_both'):
            # flip horizontal and vertical
            currentLayer.Resize(-100, -100)
        # translate the texture to position
        translate(mainPsd, currentLayer, posX, posY)
        baseLayer.visible = False
I pasted your two branches into a comparison tool, and they are almost identical.
The only differences are the texture file name and the layer-name suffix, which can easily be stored in two string variables; let's call them texture_name and ending_name.
Hence, here is how I would refactor your code:
for grp in allLayerSets:
    # [ ... ]
    texture_name = ""
    ending_name = ""
    if grp.name.startswith('Group 1'):
        texture_name = "Wall_v2_height.exr"
        ending_name = "_x_n_y"
    elif grp.name.startswith('Group 2'):
        texture_name = "Wall_v3_height.exr"
        ending_name = "_both"
    else:
        print("Deal with this problem")
    # Now there is only one copy of the shared code.
    # [ ... ]
    newTexture = "C:/CG_CONTENT/TEXTURES/" + texture_name
    # [ ... ]
    elif baseLayer.name.endswith(ending_name):
See how we have used the variables in your big chunk of code? We now write it only once!
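Taking the same idea one step further, the branch-specific values could live in a lookup table instead of an if/elif chain; a sketch (GROUP_SETTINGS and settings_for are hypothetical names, not part of the original script):

```python
# Map each group-name prefix to its (texture_name, ending_name) pair.
GROUP_SETTINGS = {
    'Group 1': ('Wall_v2_height.exr', '_x_n_y'),
    'Group 2': ('Wall_v3_height.exr', '_both'),
}

def settings_for(group_name):
    """Return (texture_name, ending_name) for a group, or None if unknown."""
    for prefix, settings in GROUP_SETTINGS.items():
        if group_name.startswith(prefix):
            return settings
    return None
```

Inside the loop you would then write `texture_name, ending_name = settings_for(grp.name)`, and adding a new group type becomes a one-line change to the table.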

How do I count the number of vector geometry that intersect a raster cell?

Or: The search for a faster and more accurate way to rasterize small OpenStreetMap extracts into population-weighted rasters.
I'd like to turn a small .pbf file into a GeoTiff which will be easier to do further spatial analysis on. For the purpose of this question I will constrain the requirements to dealing with polygon geometry since I already found a solution that works very well for lines. It works so well that I am considering converting all my polygons into lines.
To give an example of the type of data that I'd like to convert:
wget https://download.geofabrik.de/europe/liechtenstein-latest.osm.pbf
osmium tags-filter liechtenstein-latest.osm.pbf landuse=grass -o liechtenstein_grass.pbf
ogr2ogr -t_srs EPSG:3857 liechtenstein_grass.sqlite -dsco SPATIALITE=YES -nln multipolygons -nlt POLYGON -skipfailures liechtenstein_grass.pbf
I found a zonal statistics script here which we might be able to build from to solve this problem: http://pcjericks.github.io/py-gdalogr-cookbook/raster_layers.html#calculate-zonal-statistics
The above script takes a vector layer and a raster layer, iterates over the vector features, clips the raster to each feature, and computes some statistics on the clipped area.
Instead of normal zonal statistics I would like to count the number of vector features which intersect with each raster pixel. I have a global raster grid Int32 with a unique value for each pixel.
{qgis_process} run native:creategrid -- TYPE=2 EXTENT="-20037760, -8399416, 20037760, 18454624 [EPSG:3857]" HSPACING=1912 VSPACING=1912 HOVERLAY=0 VOVERLAY=0 CRS="EPSG:3857" OUTPUT="grid.gpkg"
sqlite3 land.gpkg
SELECT load_extension("mod_spatialite");
alter table output add column ogcfod int;
update output set ogcfod = fid;
gdal_rasterize -l output -a ogcfod -tap -tr 1912.0 1912.0 -a_nodata 0.0 -ot Int32 -of GTiff -co COMPRESS=DEFLATE -co PREDICTOR=2 grid.gpkg grid.tif -te -20037760 -8399416 20037760 18454624
So I'm thinking that if we could still iterate over the vector features (as there are far, far fewer of those, while there are 88m+ zones in the raster grid), it would probably be much more performant.
We want a script which takes a vector layer and a raster layer, iterates over the vector features, looks up the values of all the pixels each feature covers, and then adds one to a dictionary: {px_id: qty}
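That accumulation step could be sketched like this (a hypothetical helper, separate from the GDAL code below; px_ids would come from reading the Int32 grid raster for the feature's bounding box, and mask from rasterizing the feature into that same window):

```python
from collections import Counter

import numpy as np

def count_feature_pixels(px_ids, mask, counts):
    """Add 1 to counts[pid] for every unique pixel id the feature covers.

    px_ids: 2D int array of pixel ids clipped to the feature's bbox
    mask:   2D bool array, True where the feature was rasterized
    counts: Counter mapping pixel id -> number of intersecting features
    """
    for pid in np.unique(px_ids[mask]):
        counts[int(pid)] += 1
```

Calling this once per feature leaves counts holding the {px_id: qty} dictionary described above.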
However, when trying to make this script work it keeps giving me the same geometry... It only shows me 1 of the pixel IDs over and over
import sys

import numpy
from osgeo import gdal, ogr, osr
from rich import inspect, print
def zonal_stats(feat, lyr, raster):
    # Get raster georeference info
    transform = raster.GetGeoTransform()
    xOrigin = transform[0]
    yOrigin = transform[3]
    pixelWidth = transform[1]
    pixelHeight = transform[5]
    # Reproject vector geometry to same projection as raster
    sourceSR = lyr.GetSpatialRef()
    targetSR = osr.SpatialReference()
    targetSR.ImportFromWkt(raster.GetProjectionRef())
    coordTrans = osr.CoordinateTransformation(sourceSR, targetSR)
    feat = lyr.GetNextFeature()
    geom = feat.GetGeometryRef()
    geom.Transform(coordTrans)
    # Get extent of feat
    geom = feat.GetGeometryRef()
    if geom.GetGeometryName() == "MULTIPOLYGON":
        count = 0
        pointsX = []
        pointsY = []
        for polygon in geom:
            geomInner = geom.GetGeometryRef(count)
            ring = geomInner.GetGeometryRef(0)
            numpoints = ring.GetPointCount()
            for p in range(numpoints):
                lon, lat, z = ring.GetPoint(p)
                pointsX.append(lon)
                pointsY.append(lat)
            count += 1
    elif geom.GetGeometryName() == "POLYGON":
        ring = geom.GetGeometryRef(0)
        numpoints = ring.GetPointCount()
        pointsX = []
        pointsY = []
        for p in range(numpoints):
            lon, lat, z = ring.GetPoint(p)
            pointsX.append(lon)
            pointsY.append(lat)
    else:
        sys.exit("ERROR: Geometry needs to be either Polygon or Multipolygon")
    xmin = min(pointsX)
    xmax = max(pointsX)
    ymin = min(pointsY)
    ymax = max(pointsY)
    print(xmin, xmax)
    print(ymin, ymax)
    # Specify offset and rows and columns to read
    xoff = int((xmin - xOrigin) / pixelWidth)
    yoff = int((yOrigin - ymax) / pixelWidth)
    xcount = int((xmax - xmin) / pixelWidth) + 1
    ycount = int((ymax - ymin) / pixelWidth) + 1
    # Create memory target raster
    target_ds = gdal.GetDriverByName("MEM").Create(
        "", xcount, ycount, 1, gdal.GDT_Int32
    )
    target_ds.SetGeoTransform(
        (
            xmin,
            pixelWidth,
            0,
            ymax,
            0,
            pixelHeight,
        )
    )
    # Create for target raster the same projection as for the value raster
    raster_srs = osr.SpatialReference()
    raster_srs.ImportFromWkt(raster.GetProjectionRef())
    target_ds.SetProjection(raster_srs.ExportToWkt())
    # Rasterize zone polygon to raster
    gdal.RasterizeLayer(target_ds, [1], lyr, burn_values=[1])
    # Read raster as arrays
    banddataraster = raster.GetRasterBand(1)
    dataraster = banddataraster.ReadAsArray(xoff, yoff, xcount, ycount)
    bandmask = target_ds.GetRasterBand(1)
    datamask = bandmask.ReadAsArray(0, 0, xcount, ycount)
    print(dataraster)
    # Mask zone of raster
    # zoneraster = numpy.ma.masked_array(dataraster, numpy.logical_not(datamask))
    # print(zoneraster)
    # exit()
def loop_zonal_stats(lyr, raster):
featList = range(lyr.GetFeatureCount())
statDict = {}
for FID in featList:
feat = lyr.GetFeature(FID)
meanValue = zonal_stats(feat, lyr, raster)
statDict[FID] = meanValue
return statDict
def main(input_zonal_raster, input_value_polygon):
raster = gdal.Open(input_zonal_raster)
shp = ogr.Open(input_value_polygon)
lyr = shp.GetLayer()
return loop_zonal_stats(lyr, raster)
if __name__ == "__main__":
if len(sys.argv) != 3:
print(
"[ ERROR ] you must supply two arguments: input-zone-raster-name.tif input-value-shapefile-name.shp "
)
sys.exit(1)
main(sys.argv[1], sys.argv[2])
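The counting step itself, once each feature's window of zone IDs and its rasterized mask are in hand, reduces to a Counter update over the masked IDs. A minimal numpy sketch (standalone, with made-up arrays standing in for the clipped raster window and the rasterized feature mask):

```python
import numpy as np
from collections import Counter

# Hypothetical stand-ins: a window of unique pixel IDs clipped from the
# zonal raster, and the 0/1 mask produced by rasterizing one feature.
zone_ids = np.array([[10, 11, 12],
                     [13, 14, 15]])
feature_mask = np.array([[0, 1, 1],
                         [0, 1, 0]])

counts = Counter()
# Count each pixel ID covered by this feature; repeating this per
# feature accumulates the desired {px_id: qty} mapping.
counts.update(zone_ids[feature_mask == 1].tolist())

print(counts)  # Counter({11: 1, 12: 1, 14: 1})
```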
Prior research:
https://gis.stackexchange.com/questions/177738/count-overlapping-polygons-including-duplicates
https://stackoverflow.com/a/47443399/697964
If gdal_rasterize could burn in the count of all the polygons which intersect with each pixel (rather than a fixed value) that would likely fulfill my needs.
https://github.com/rory/osm-summary-heatmap/blob/main/Makefile
https://old.reddit.com/r/gis/comments/4n2q5v/count_overlapping_polygons_qgis/
qgis:heatmapkerneldensityestimation does not work very well, or maybe I'm not using it correctly, but its output seems off:
{qgis_process} run qgis:heatmapkerneldensityestimation -- INPUT="{basen}.json.geojson" RADIUS=2868 RADIUS_FIELD=None PIXEL_SIZE=1912 WEIGHT_FIELD=None KERNEL=4 DECAY=0 OUTPUT_VALUE=0 OUTPUT="{basen}.tif"
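For what it's worth, gdal_rasterize does have an -add option, which accumulates burn values instead of overwriting them, so burning a fixed value of 1 for every feature should produce the per-pixel overlap count directly. An untested sketch modeled on the earlier gdal_rasterize command (the output file name is made up):

```shell
# Burn 1 for every feature and accumulate (-add) rather than replace,
# yielding a per-pixel count of intersecting features.
gdal_rasterize -l output -burn 1 -add -tap -tr 1912.0 1912.0 \
  -a_nodata 0.0 -ot Int32 -of GTiff -co COMPRESS=DEFLATE \
  grid.gpkg counts.tif -te -20037760 -8399416 20037760 18454624
```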
This seems to work. Output is a CSV with columns ["px_id", "qty"]:
"""
python rasterlayerzonalvectorcounts.py grid.tif liechtenstein_grass.sqlite
MIT License
Based on https://github.com/pcjericks/py-gdalogr-cookbook/blob/master/raster_layers.rst#calculate-zonal-statistics
"""
import sys

import osr
import os
import ogr
import numpy
import gdal
import pandas
from joblib import Parallel, delayed
from collections import Counter


def zonal_stats(FID, lyr, raster):
    feat = lyr.GetFeature(FID)

    # Get raster georeference info
    transform = raster.GetGeoTransform()
    xOrigin = transform[0]
    yOrigin = transform[3]
    pixelWidth = transform[1]
    pixelHeight = transform[5]

    # Reproject vector geometry to same projection as raster
    sourceSR = lyr.GetSpatialRef()
    targetSR = osr.SpatialReference()
    targetSR.ImportFromWkt(raster.GetProjectionRef())
    coordTrans = osr.CoordinateTransformation(sourceSR, targetSR)
    geom = feat.GetGeometryRef()
    geom.Transform(coordTrans)

    # Get extent of feat
    geom = feat.GetGeometryRef()
    xmin, xmax, ymin, ymax = geom.GetEnvelope()

    # Specify offset and rows and columns to read
    xoff = int((xmin - xOrigin) / pixelWidth)
    yoff = int((yOrigin - ymax) / pixelWidth)
    xcount = int((xmax - xmin) / pixelWidth) + 1
    ycount = int((ymax - ymin) / pixelWidth) + 1

    # Create memory target raster
    target_ds = gdal.GetDriverByName("MEM").Create(
        "", xcount, ycount, 1, gdal.GDT_Int32
    )
    target_ds.SetGeoTransform(
        (
            xmin,
            pixelWidth,
            0,
            ymax,
            0,
            pixelHeight,
        )
    )

    # Create for target raster the same projection as for the value raster
    raster_srs = osr.SpatialReference()
    raster_srs.ImportFromWkt(raster.GetProjectionRef())
    target_ds.SetProjection(raster_srs.ExportToWkt())

    # Rasterize zone polygon to raster
    gdal.RasterizeLayer(target_ds, [1], lyr, burn_values=[1])

    # Read raster as arrays
    banddataraster = raster.GetRasterBand(1)
    dataraster = banddataraster.ReadAsArray(xoff, yoff, xcount, ycount)
    bandmask = target_ds.GetRasterBand(1)
    datamask = bandmask.ReadAsArray(0, 0, xcount, ycount)

    # Mask zone of raster
    zoneraster = numpy.ma.masked_array(dataraster, numpy.logical_not(datamask))
    return numpy.array(zoneraster).tolist()


def loop_zonal_stats(input_value_polygon, input_zonal_raster):
    shp = ogr.Open(input_value_polygon)
    lyr = shp.GetLayer()
    print("Processing", lyr.GetFeatureCount(), "features")
    featList = range(lyr.GetFeatureCount())

    def processFID(input_value_polygon, input_zonal_raster, FID):
        shp = ogr.Open(input_value_polygon)
        raster = gdal.Open(input_zonal_raster)
        lyr = shp.GetLayer()
        if FID:
            px_ids = zonal_stats(FID, lyr, raster)
            # print(px_ids)
            px_ids = [item for sublist in px_ids for item in sublist]
            return px_ids

    return Parallel(n_jobs=8)(
        delayed(processFID)(input_value_polygon, input_zonal_raster, FID)
        for FID in featList
    )


if __name__ == "__main__":
    if len(sys.argv) != 3:
        print(
            "[ ERROR ] you must supply two arguments: input-zone-raster-name.tif input-value-shapefile-name.shp "
        )
        sys.exit(1)
    input_zonal_raster = sys.argv[1]
    input_value_polygon = sys.argv[2]
    counts = list(
        filter(None, loop_zonal_stats(input_value_polygon, input_zonal_raster))
    )
    counts = Counter([item for sublist in counts for item in sublist])
    pandas.DataFrame.from_dict(data=counts, orient="index").to_csv(
        os.path.splitext(input_value_polygon)[0] + ".csv", header=False
    )
This one will create an output GeoTIFF on the same grid system as the source zonal GeoTIFF.
I wonder if it could be sped up by using numpy.meshgrid (see "What is the purpose of meshgrid in Python / NumPy?").
"""
python rasterlayerzonalvectorcounts.py grid.tif liechtenstein_grass.sqlite
MIT License
Based on https://github.com/pcjericks/py-gdalogr-cookbook/blob/master/raster_layers.rst#calculate-zonal-statistics
"""
import sys

import osr
import os
import ogr
import numpy
import gdal
import pandas
from joblib import Parallel, delayed
from collections import Counter
from rich import print, inspect


def zonal_stats(FID, lyr, raster):
    feat = lyr.GetFeature(FID)

    # Get raster georeference info
    transform = raster.GetGeoTransform()
    xOrigin = transform[0]
    yOrigin = transform[3]
    pixelWidth = transform[1]
    pixelHeight = transform[5]

    # Get extent of feat
    geom = feat.GetGeometryRef()
    xmin, xmax, ymin, ymax = geom.GetEnvelope()

    # Specify offset and rows and columns to read
    xoff = int((xmin - xOrigin) / pixelWidth)
    yoff = int((yOrigin - ymax) / pixelWidth)
    xcount = int((xmax - xmin) / pixelWidth) + 1
    ycount = int((ymax - ymin) / pixelWidth) + 1

    feat_arr = []
    # if xcount != 1 or ycount != 1:
    #     print(xoff, yoff, xcount, ycount)
    for x in range(xcount):
        for y in range(ycount):
            # print(xoff + x, yoff + y)
            feat_arr.append((xoff + x, yoff + y))
    return feat_arr


def loop_zonal_stats(input_value_polygon, input_zonal_raster):
    shp = ogr.Open(input_value_polygon)
    lyr = shp.GetLayer()
    print("Processing", lyr.GetFeatureCount(), "features")
    featList = range(lyr.GetFeatureCount())

    def processFID(input_value_polygon, input_zonal_raster, FID):
        shp = ogr.Open(input_value_polygon)
        raster = gdal.Open(input_zonal_raster)
        lyr = shp.GetLayer()
        if FID:
            px_ids = zonal_stats(FID, lyr, raster)
            return px_ids

    return Parallel(n_jobs=1)(
        delayed(processFID)(input_value_polygon, input_zonal_raster, FID)
        for FID in featList
    )


if __name__ == "__main__":
    if len(sys.argv) != 3:
        print(
            "[ ERROR ] you must supply two arguments: input-zone-raster-name.tif input-value-shapefile-name.shp "
        )
        sys.exit(1)
    input_zonal_raster = sys.argv[1]
    input_value_polygon = sys.argv[2]
    counts = list(
        filter(None, loop_zonal_stats(input_value_polygon, input_zonal_raster))
    )
    counts = Counter([item for sublist in counts for item in sublist])
    raster_srs = osr.SpatialReference()
    raster_srs.ImportFromWkt(gdal.Open(input_zonal_raster).GetProjectionRef())
    raster_arr = numpy.empty((14045, 20960), numpy.int32)
    for px in counts.items():
        # print(px)
        raster_arr[px[0][1]][px[0][0]] = px[1]
    target_ds = gdal.GetDriverByName("GTiff").Create(
        os.path.splitext(input_value_polygon)[0] + ".tif",
        20960,
        14045,
        1,
        gdal.GDT_Int32,
        options=["COMPRESS=LZW"],
    )
    target_ds.SetGeoTransform(gdal.Open(input_zonal_raster).GetGeoTransform())
    target_ds.SetProjection(raster_srs.ExportToWkt())
    target_ds.GetRasterBand(1).WriteArray(raster_arr)
    target_ds.GetRasterBand(1).SetNoDataValue(0)
    target_ds.GetRasterBand(1).GetStatistics(0, 1)
    target_ds.FlushCache()
    target_ds = None
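On the meshgrid idea: the nested x/y loop in zonal_stats above can indeed be vectorized. A small sketch of that substitution (the function name is made up; it produces the same (xoff + x, yoff + y) pairs as the double loop, just built in one shot):

```python
import numpy as np

def window_pixel_coords(xoff, yoff, xcount, ycount):
    # Build the grid of column/row indices covered by the window
    # with meshgrid instead of a Python double loop.
    xs, ys = np.meshgrid(np.arange(xoff, xoff + xcount),
                         np.arange(yoff, yoff + ycount))
    return np.column_stack([xs.ravel(), ys.ravel()])

coords = window_pixel_coords(5, 10, 2, 3)
print(coords.tolist())  # [[5, 10], [6, 10], [5, 11], [6, 11], [5, 12], [6, 12]]
```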

Python - paint a tree HW

I need to draw a tree out of *. The height should be the number the user enters, with each row having 2 more * than the one above, and the trunk being 1/3 of the height.
In addition, the last (widest) row needs to have no leading space.
I wrote the whole program, but the last and widest row appears with one leading space.
Where am I wrong?
print "Enter tree height:"
height = input()
i = 1
space = height - 1
while i <= 2 * height:
    print space * " ", i * "*"
    i = i + 2
    space = space - 1
trunk = height / 3
j = 0
while j < trunk:
    print (height - 1) * " ", "*"
    j = j + 1
output:
Enter tree height: 3
   *
  ***
 *****
   *
Enter tree height: 8
        *
       ***
      *****
     *******
    *********
   ***********
  *************
 ***************
        *
        *
Try changing:
print space*" ", i*"*"
to:
print space*" " + i*"*"
When you do:
print ' ', '3'
the output is different from:
print ' ' + '3'
because the comma makes print insert a space between the items, while + concatenates them with no separator.
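The same difference can be demonstrated in Python 3, where print() joins its arguments with a single-space separator by default while + concatenates with nothing. A standalone check (not part of the original answer):

```python
# A comma-separated print inserts sep (a space by default) between
# arguments; concatenation with + does not.
comma_style = " ".join(["  ", "*"])   # what print("  ", "*") emits
concat_style = "  " + "*"             # what print("  " + "*") emits

print(repr(comma_style))   # '   *'  (extra space from the separator)
print(repr(concat_style))  # '  *'
```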
I would use string formatting for this:
print "Enter tree height:"
height = input()
max_width = ((height - 1) * 2) + 1
for i in range(1, max_width + 1, 2):
    # ^ centers each row of stars within max_width
    print "{0:^{1:}}".format("*" * i, max_width)
trunk = height / 3
for i in range(trunk):
    print "{0:^{1:}}".format("*", max_width)
output:
monty@monty-Aspire-5050:~$ python so27.py
Enter tree height:
3
  *
 ***
*****
  *
monty@monty-Aspire-5050:~$ python so27.py
Enter tree height:
8
       *
      ***
     *****
    *******
   *********
  ***********
 *************
***************
       *
       *
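As an aside on the design choice: the same centering can also be done with str.center, which some find easier to read than a nested format spec. A small Python 3 sketch (not from the original answer):

```python
height = 3
max_width = (height - 1) * 2 + 1  # width of the widest row

# Center each row of stars, then strip the trailing padding spaces.
rows = [("*" * i).center(max_width).rstrip() for i in range(1, max_width + 1, 2)]
rows += ["*".center(max_width).rstrip() for _ in range(height // 3)]
print("\n".join(rows))
```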
Here's what I did:
#!/usr/bin/env python3
import math
import random


class Tree(object):
    def __init__(self, height=24, width=80, branches=5):
        self.height = height
        self.width = width
        self.branches = branches

    def draw(self):
        tree = [[" " for i in range(self.width)] for j in range(self.height)]

        # These are arbitrary trunk dimensions, you can change them
        trunk = {}
        trunk["width"] = math.floor(self.width / 12)
        trunk["height"] = math.floor(self.height * (2 / 3))

        # Center the trunk. (0, 0) is the top-left corner of the screen.
        trunk["x"] = int(math.floor(self.width / 2) - math.floor(trunk["width"] / 2))
        trunk["y"] = int(self.height - trunk["height"])
        for i in range(trunk["height"]):
            for j in range(trunk["width"]):
                tree[i + trunk["y"]][j + trunk["x"]] = "#"

        # Add branches
        for i in range(self.branches):
            # Choose a position on the side of the trunk
            branch = {}
            if random.randint(0, 1):
                side = -1
                branch["x"] = trunk["x"] - 1
            else:
                side = 1
                branch["x"] = trunk["x"] + trunk["width"]
            branch["y"] = random.randint(1, math.floor(trunk["height"] / 2)) + (self.height - trunk["height"]) - 1
            tree[branch["y"]][branch["x"]] = "#"

            # Just sort of squiggle around for a while
            branch["length"] = math.floor(self.width / 2)
            for i in range(branch["length"]):
                direction_determiner = random.randint(0, 10)
                direction = {}
                if direction_determiner < 8:
                    direction["x"] = side
                    direction["y"] = 0
                else:
                    direction["x"] = 0
                    direction["y"] = -1
                branch["x"] += direction["x"]
                branch["y"] += direction["y"]
                if not (0 <= branch["x"] < self.width) or not (0 <= branch["y"] < self.height):
                    break
                tree[branch["y"]][branch["x"]] = "#"

                # Add a leaf sometimes
                if branch["y"] < self.height - 2 and random.randint(0, 1):
                    tree[branch["y"] + 1][branch["x"]] = "."
                    tree[branch["y"] + 2][branch["x"]] = "."

        # Draw the tree
        for i in range(self.height):
            for j in range(self.width):
                print(tree[i][j], end="")
            print("")


if __name__ == "__main__":
    tree_drawer = Tree()
    tree_drawer.draw()
It's not perfect, but it can output something like this:
~/tmp $ ./tree.py
## ### #########
.##### .######## ######.... ...
... ..#### .. # #### . .... ...
... .. . .##### ###### .... .
. . . .# . . .### #.... ####
###### . .# . . . .. #.. ##...
.. ..# ########## ##. ####.....
.. ...## . . ..## #. . ## . ..
.. .# . . ....# ###### .. #### .
. ..###### ..# ##############. #....
.. ..### .######## ...############....
.. .. .##### .######## ###.. . ... . .
. .. . ######.#.###### ###. .. . ... .
. . .. . #.########... .
.. . ######## ..
#######
######
######
######
######
######
######
######
######
