ReportLab [Platypus] - Image does not render - Python

I am working on a report that includes a mixture of tables and images. The images [graphs, actually] are saved to the filesystem as .png files.
The method that actually renders the PDF is:
def _render_report(report_data):
    file_name = get_file_name()  # generate random filename for report
    rpt = Report(settings.MEDIA_ROOT + os.sep + file_name)
    Story = []
    for (an, sam, event), props in report_data.iteritems():
        Story.append(Paragraph("%s - sample %s results for %s" % (an.name, sam.name, event.name), styles["Heading2"]))
        data_list = [['Lab', 'Method', 'Instrument', 'Unit', 'Raw Result', 'Converted Result', 'Outlier']]
        for (index, series) in props['frame'].iterrows():
            data_list.append(_format([
                Paragraph(Lab.objects.get(pk=series['labs']).name, styles['BodyText']),
                Paragraph(Method.objects.get(pk=series['methods']).name, styles['BodyText']),
                Paragraph(Instrument.objects.get(pk=series['instruments']).name, styles['BodyText']),
                Paragraph(Unit.objects.get(pk=series['units']).name, styles['BodyText']),
                series['raw_results'],
                series['results'],
                series['outlier']
            ]))
        table = Table(data_list, colWidths=[45 * mm, 35 * mm, 35 * mm, 25 * mm, 35 * mm, 35 * mm, 35 * mm], repeatRows=1)
        Story.append(table)
        Story.append(PageBreak())
        if props['graph'] is not None:
            Story.append(Image("/tmp/%s" % props['graph'], width=10 * inch, height=6 * inch))
            Story.append(PageBreak())
    rpt.draw(Story, onFirstPage=setup_header_and_footer, onLaterPages=setup_header_and_footer)
    return file_name
Background Information
The page is set up as an A4, in landscape orientation
My development environment is a virtualenv; PIL 1.1.7 and reportlab 2.6 are installed and functional
The "Report" class used above is simply a thin wrapper around SimpleDocTemplate that sets up some defaults but delegates to SimpleDocTemplate's build implementation. Its code is:
class Report(object):
    def __init__(self, filename, doctitle="Report", docauthor="<default>",
                 docsubject="<default>", doccreator="<default>", orientation="landscape", size=A4):
        DEFAULTS = {
            'leftMargin': 10 * mm,
            'rightMargin': 10 * mm,
            'bottomMargin': 15 * mm,
            'topMargin': 36 * mm,
            'pagesize': landscape(size) if orientation == "landscape" else portrait(size),
            'title': doctitle,
            'author': docauthor,
            'subject': docsubject,
            'creator': doccreator
        }
        self.doc = SimpleDocTemplate(filename, **DEFAULTS)

    def draw(self, flowables, onFirstPage=setup_header_and_footer, onLaterPages=setup_header_and_footer):
        self.doc.build(flowables, onFirstPage=onFirstPage,
                       onLaterPages=onLaterPages, canvasmaker=NumberedCanvas)
What I have already looked at
I have confirmed that the images exist on disk. The paths are fully qualified paths.
PIL is installed, and is able to read the images correctly
The space assigned to the image is adequate; I have confirmed this by calculation. Also, if I increase the image size, ReportLab complains about the Image flowable being too large. The current dimensions should fit.
I have tested with and without the page breaks; they do not seem to make any difference
The Problem
The table, headings and page templates render OK, but the images are blank. Earlier today I had encountered this issue [when setting up the templates used by this report]. The workaround was to use canvas.drawInlineImage(...) in place of canvas.drawImage(...). It therefore looks as though there is an issue with my setup; I could use some pointers on how to debug it.
Update
I was able to apply a variant of the workaround used in the linked question (use canvas.drawInlineImage in place of canvas.drawImage). I subclassed `Image` as follows:
class CustomImage(Image):
    """
    Override - to use inline image instead; inexplicable bug with non-inline images
    """
    def draw(self):
        lazy = self._lazy
        if lazy >= 2:
            self._lazy = 1
        self.canv.drawInlineImage(self.filename,
                                  getattr(self, '_offs_x', 0),
                                  getattr(self, '_offs_y', 0),
                                  self.drawWidth,
                                  self.drawHeight)
        if lazy >= 2:
            self._img = None
            self._lazy = lazy
The only change from the "stock" Image class is in one line: using self.canv.drawInlineImage where there was self.canv.drawImage before.
This "works" in the sense that the images are finally visible in my PDF. The reason why drawImage is not working still remains a mystery.
I have tried @PedroRomano's suggestion (making sure the images are RGBA), and even tried JPEG images instead of PNGs. Neither made a difference.

I eventually brought this matter to a close by using a custom Image subclass:
class CustomImage(Image):
    """
    Override - to use inline image instead; inexplicable bug with non-inline images
    """
    def draw(self):
        lazy = self._lazy
        if lazy >= 2:
            self._lazy = 1
        self.canv.drawInlineImage(self.filename,
                                  getattr(self, '_offs_x', 0),
                                  getattr(self, '_offs_y', 0),
                                  self.drawWidth,
                                  self.drawHeight)
        if lazy >= 2:
            self._img = None
            self._lazy = lazy
Saving the graphs in vector graphics formats (e.g. EPS), saving as JPEG, and saving as PNG with and without the alpha channel all made no difference.
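Whether a particular PNG actually carries an alpha channel can be checked without PIL by reading the colour type byte from its IHDR chunk (colour type 2 = RGB, 6 = RGBA). A stdlib-only sketch; the hand-built header at the bottom is purely illustrative:

```python
import struct

def png_color_type(data: bytes) -> int:
    """Return the PNG colour type from the IHDR chunk (2 = RGB, 6 = RGBA)."""
    assert data[:8] == b'\x89PNG\r\n\x1a\n', "not a PNG file"
    # After the 8-byte signature: chunk length (4), chunk type (4), chunk data.
    assert data[12:16] == b'IHDR'
    width, height, bit_depth, color_type = struct.unpack('>IIBB', data[16:26])
    return color_type

# A hand-built IHDR for a 1x1 RGBA image (illustration only):
header = (b'\x89PNG\r\n\x1a\n' + struct.pack('>I', 13) + b'IHDR'
          + struct.pack('>IIBBBBB', 1, 1, 8, 6, 0, 0, 0))
print(png_color_type(header))  # 6 -> RGBA
```

With a real file you would pass `open(path, 'rb').read(26)` instead of the hand-built header.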

The optimal solution would be to generate your graphs in a vector format like PostScript, which ReportLab supports. A lot of UNIX software can do this out of the box, and on Windows you can use the excellent PDFCreator.
If you have to use raster images for your graphs, try converting them to JPEG format. Those can be easily embedded in a PDF file using the DCTDecode filter. (This is, e.g., what jpeg2pdf does.)

Related

Failing to place an image with pylatex

I am using pylatex to create a pdf with an image in it at the coordinates that I specify. I have used the code below but no matter what coordinates I enter, the image is always in the upper left corner. Please help me.
from pylatex import (Document, TikZ,
                     TikZNode, TikZCoordinate,
                     TikZOptions, StandAloneGraphic, NoEscape)

geometry_options = {"margin": "0cm"}
doc = Document(documentclass='standalone', document_options=('tikz'), geometry_options=geometry_options)
doc.append(NoEscape(r'\noindent'))

with doc.create(TikZ()) as pic:
    # img = TikZNode(text='\\includegraphics[width=0.8\\textwidth]{example-image-a}',
    #                at=TikZCoordinate(3, 4),
    #                options=TikZOptions('inner sep=0pt,anchor=south west'))
    img = TikZNode(text=StandAloneGraphic('example-image-a').dumps(),
                   at=TikZCoordinate(1, 2),
                   options=TikZOptions('inner sep=0pt'))
    pic.append(img)

tex = doc.dumps()
doc.generate_pdf('basic', compiler='lualatex', clean_tex=False)
doc.generate_tex()
Tex-Code:
\documentclass[tikz]{standalone}%
\usepackage[T1]{fontenc}%
\usepackage[utf8]{inputenc}%
\usepackage{lmodern}%
\usepackage{textcomp}%
\usepackage{lastpage}%
\usepackage{geometry}%
\geometry{margin=0cm}%
\usepackage{tikz}%
%
%
%
\begin{document}%
\normalsize%
\noindent%
\begin{tikzpicture}%
\node[inner sep=0pt] at (1.0,2.0) {\includegraphics[width=0.8\textwidth]{example-image-a}};%
\end{tikzpicture}%
\end{document}
This looks pretty similar to the code in this post to me:
https://tex.stackexchange.com/questions/9559/drawing-on-an-image-with-tikz
The size of the tikz picture will adapt to the content and it is normally placed wherever you use it on the page, just as if you would write a normal letter.
You can see this if you add another, bigger element to your picture. If you then play around with the coordinates of your node, you will see the image moving around.
Normally you could also position your nodes relative to the paper with the [remember picture, overlay] options, but you are using the standalone class, which automatically adapts the paper size to the content, so in your case this does not help.
\documentclass[tikz]{standalone}%
\usepackage[T1]{fontenc}%
\usepackage[utf8]{inputenc}%
\usepackage{lmodern}%
\usepackage{textcomp}%
\usepackage{lastpage}%
%\usepackage{geometry}%
%\geometry{margin=0cm}%
\usepackage{tikz}%
%
%
%
\begin{document}%
\normalsize%
\noindent%
\begin{tikzpicture}%
\path (-20,-20) rectangle (20,20);
\node[inner sep=0pt] at (3.0,2.0) {\includegraphics[width=0.8\textwidth]{example-image-a}};%
\end{tikzpicture}%
\end{document}

Download Image from Google Earth Engine `ImageCollection` to drive

I need to download single or multiple images from an Earth Engine ImageCollection (preferably multiple, but I can just put single-image code in a loop).
My main problem --> download an image for every month from a start date to an end date, for a specific location (lat: "", long: "") with zoom 9
I am trying to download historical simple satellite data from the SKYSAT/GEN-A/PUBLIC/ORTHO/RGB. This gives an image like -->
https://developers.google.com/earth-engine/datasets/catalog/SKYSAT_GEN-A_PUBLIC_ORTHO_RGB#description
I am working on this in Python. This code gives me the collection, but I can't download the entire collection; I need to select an image out of it.
import ee

# Load the SkySat RGB ImageCollection for a date range.
collection = (ee.ImageCollection('SKYSAT/GEN-A/PUBLIC/ORTHO/RGB').filterDate('2016-03-01', '2018-03-01'))
#pp.pprint('Collection: ' + str(collection.getInfo()) + '\n')

# Get the number of images.
count = collection.size()
print('Count: ', str(count.getInfo()))

image = ee.Image(collection.sort('CLOUD_COVER').first())
Here image holds an <ee.image.Image at 0x23d6cf5dc70> object, but I don't know how to download it.
Also, how do I specify that I want it for a specific location (lat, long) with zoom 19?
Thanks for anything
You can download all of the images in the image collection using geetools.
You can install it with pip install geetools.
Then change the area, dates, and folder name (if the folder does not exist, it will be created on your Drive) and execute this code.
import ee
import geetools

StartDate = ee.Date.fromYMD(2018, 1, 16)
EndDate = ee.Date.fromYMD(2021, 10, 17)

Area = ee.Geometry.Polygon(
    [[[-97.93534101621628, 49.493287372441955],
      [-97.93534101621628, 49.49105034378085],
      [-97.93049158231736, 49.49105034378085],
      [-97.93049158231736, 49.493287372441955]]])

collection = (ee.ImageCollection('SKYSAT/GEN-A/PUBLIC/ORTHO/RGB')
              .filterDate(StartDate, EndDate)
              .filterBounds(Area))

data_type = 'float32'
name_pattern = '{system_date}'
date_pattern = 'yMMdd'  # dd: day, MMM: month (JAN), y: year
scale = 10
folder_name = 'GEE_Images'

tasks = geetools.batch.Export.imagecollection.toDrive(
    collection=collection,
    folder=folder_name,
    region=Area,
    namePattern=name_pattern,
    scale=scale,
    dataType=data_type,
    datePattern=date_pattern,
    verbose=True,
    maxPixels=1e10
)
Or;
You can use the Earth Engine library to export images.
nimg = collection.size().getInfo()
for i in range(nimg):
    img = ee.Image(collection.toList(nimg).get(i))
    date = img.date().format('yyyy-MM-dd').getInfo()
    task = ee.batch.Export.image.toDrive(img.toFloat(),
                                         description=date,
                                         folder='Folder_Name',
                                         fileNamePrefix=date,
                                         region=Area,
                                         dimensions=(256, 256),
                                         # fileFormat='TFRecord',
                                         maxPixels=1e10)
    task.start()
There are two file formats: TFRecord and GeoTIFF; the default is GeoTIFF. You can also extract images with specific dimensions, as shown above. If you want to download images using a scale factor instead, just remove the dimensions line and add a scale parameter in its place.
You can read this document for more information.
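For the "every month from a start date to an end date" part of the question, the filterDate window can be stepped one month at a time and each window exported in turn. A stdlib-only sketch of just the date arithmetic (the ee export calls are omitted; windows always start on the first of the month):

```python
from datetime import date

def month_windows(start, end):
    """Yield (month_start, next_month_start) pairs covering start..end."""
    y, m = start.year, start.month
    while date(y, m, 1) < end:
        ny, nm = (y + 1, 1) if m == 12 else (y, m + 1)
        yield date(y, m, 1), date(ny, nm, 1)
        y, m = ny, nm

# Each pair can be fed to collection.filterDate(a.isoformat(), b.isoformat()).
for a, b in month_windows(date(2016, 3, 1), date(2016, 6, 1)):
    print(a.isoformat(), b.isoformat())
```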
Insert your region of analysis (geom) by constructing a bounding box. Then, use the code below to batch download the images.
// Example geometry. You could also insert points etc.
var geom = ee.Geometry.Polygon(
    [[[-116.8, 44.7],
      [-116.8, 42.6],
      [-110.6, 42.6],
      [-110.6, 44.7]]], null, false);

for (var i = 0; i < count; i++) {
  var img = ee.Image(collection.toList(1, i).get(0));
  var geom = img.geometry().getInfo();
  Export.image(img, img.get('system:index').getInfo(), {crs: crs, scale: scale, region: geom});
}

OpenGL Texturing - some jpg's are being distorted in a strange way

I am trying to draw a textured square using Python, OpenGL and GLFW.
Here are all the images I need to show you.
Sorry for the way of posting images, but I don't have enough reputation to post more than 2 links (and I can't even post a photo).
I am getting this:
[the second image from the album]
Instead of that:
[the first image from the album]
BUT if I use some different jpg files:
some of them are being displayed properly,
some of them are being displayed properly until I rotate them 90 degrees (I mean using numpy rot90 function on an array with RGB components) and then send them to the GPU. And it looks like that (colors don't change, I only get some distortion):
Before rotation:
[the third image from the album]
After rotation:
[the fourth image from the album]
It all depends on a file.
Does anybody know what I do wrong? Or see anything that I don't see?
Code:
First, I do the thing with initializing glfw, creating a window, etc.
if __name__ == '__main__':
    import sys
    import glfw
    import OpenGL.GL as gl
    import numpy as np
    from square import square
    from imio import imread, rgb_flag, swap_rb
    from txio import tx2gpu, txrefer

    glfw.glfwInit()
    win = glfw.glfwCreateWindow(800, 800, "Hello")
    glfw.glfwMakeContextCurrent(win)
    glfw.glfwSwapInterval(1)
    gl.glClearColor(0.75, 0.75, 0.75, 1.0)
Then I load an image using OpenCV's imread function, remembering to swap red with blue. Then I send the image to the GPU - I will describe tx2gpu in a minute.
    image = imread('../imtools/image/ummagumma.jpg')
    if not rgb_flag: swap_rb(image)
    #image = np.rot90(image)
    tx_id = tx2gpu(image)
The swap_rb() function (defined in a different file, imported):
def swap_rb(mat):
    X = mat[:, :, 2].copy()
    mat[:, :, 2] = mat[:, :, 0]
    mat[:, :, 0] = X
    return mat
Then comes the main loop (in a while I will describe txrefer and square):
    while not glfw.glfwWindowShouldClose(win):
        gl.glClear(gl.GL_COLOR_BUFFER_BIT)
        txrefer(tx_id); square(2); txrefer(0)
        glfw.glfwSwapBuffers(win)
        glfw.glfwPollEvents()
And here is the end of the main function:
    glfw.glfwDestroyWindow(win)
    glfw.glfwTerminate()
NOW IMPORTANT THINGS:
A function that defines a square looks like this:
def square(scale=1.0, color=None, solid=True):
    s = scale * .5
    if type(color) != type(None):
        if solid:
            gl.glBegin(gl.GL_TRIANGLE_FAN)
        else:
            gl.glBegin(gl.GL_LINE_LOOP)
        gl.glColor3f(*color[0][:3]); gl.glVertex3f(-s, -s, 0)
        gl.glColor3f(*color[1][:3]); gl.glVertex3f(-s, s, 0)
        gl.glColor3f(*color[2][:3]); gl.glVertex3f(s, s, 0)
        gl.glColor3f(*color[3][:3]); gl.glVertex3f(s, -s, 0)
    else:
        if solid:
            gl.glBegin(gl.GL_TRIANGLE_FAN)
        else:
            gl.glBegin(gl.GL_LINE_LOOP)
        gl.glTexCoord2f(0, 0); gl.glVertex3f(-s, -s, 0)
        gl.glTexCoord2f(0, 1); gl.glVertex3f(-s, s, 0)
        gl.glTexCoord2f(1, 1); gl.glVertex3f(s, s, 0)
        gl.glTexCoord2f(1, 0); gl.glVertex3f(s, -s, 0)
    gl.glEnd()
And the texturing functions look like this:
import OpenGL.GL as gl

unit_symbols = [
    gl.GL_TEXTURE0, gl.GL_TEXTURE1, gl.GL_TEXTURE2,
    gl.GL_TEXTURE3, gl.GL_TEXTURE4,
    gl.GL_TEXTURE5, gl.GL_TEXTURE6, gl.GL_TEXTURE7,
    gl.GL_TEXTURE8, gl.GL_TEXTURE9,
    gl.GL_TEXTURE10, gl.GL_TEXTURE11, gl.GL_TEXTURE12,
    gl.GL_TEXTURE13, gl.GL_TEXTURE14,
    gl.GL_TEXTURE15, gl.GL_TEXTURE16, gl.GL_TEXTURE17,
    gl.GL_TEXTURE18, gl.GL_TEXTURE19,
    gl.GL_TEXTURE20, gl.GL_TEXTURE21, gl.GL_TEXTURE22,
    gl.GL_TEXTURE23, gl.GL_TEXTURE24,
    gl.GL_TEXTURE25, gl.GL_TEXTURE26, gl.GL_TEXTURE27,
    gl.GL_TEXTURE28, gl.GL_TEXTURE29,
    gl.GL_TEXTURE30, gl.GL_TEXTURE31]

def tx2gpu(image, flip=True, unit=0):
    gl.glActiveTexture(unit_symbols[unit])
    texture_id = gl.glGenTextures(1)
    gl.glBindTexture(gl.GL_TEXTURE_2D, texture_id)
    gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_S, gl.GL_REPEAT)
    gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_T, gl.GL_REPEAT)
    gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, gl.GL_LINEAR)
    gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER, gl.GL_LINEAR)
    yres, xres, cres = image.shape
    from numpy import flipud
    gl.glTexImage2D(gl.GL_TEXTURE_2D, 0, gl.GL_RGB, xres, yres, 0, gl.GL_RGB, gl.GL_UNSIGNED_BYTE, flipud(image))
    gl.glBindTexture(gl.GL_TEXTURE_2D, 0)
    return texture_id

def txrefer(tex_id, unit=0):
    gl.glColor4f(1, 1, 1, 1)
    gl.glActiveTexture(unit_symbols[unit])
    if tex_id != 0:
        gl.glEnable(gl.GL_TEXTURE_2D)
        gl.glBindTexture(gl.GL_TEXTURE_2D, tex_id)
    else:
        gl.glBindTexture(gl.GL_TEXTURE_2D, 0)
        gl.glDisable(gl.GL_TEXTURE_2D)
The problem you have there is an alignment issue. OpenGL's initial alignment setting for "unpacking" images assumes each row starts on a 4-byte boundary. Trouble arises when the row size (width × bytes per pixel) is not a multiple of 4, e.g. 3-byte RGB pixels with an awkward width. But it's easy enough to change this:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
would probably do the trick for you. Call it right before glTex[Sub]Image.
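The arithmetic behind that alignment rule is easy to check without OpenGL at all; a small pure-Python sketch of how row padding makes an RGB image with an awkward width go wrong:

```python
def padded_row_bytes(width, bytes_per_pixel, alignment=4):
    # OpenGL rounds each row up to the next multiple of GL_UNPACK_ALIGNMENT
    row = width * bytes_per_pixel
    return (row + alignment - 1) // alignment * alignment

# A 101-pixel-wide RGB image: tightly packed rows are 303 bytes,
# but with the default alignment of 4 OpenGL expects 304 bytes per row,
# so every successive row is read one byte further off.
print(padded_row_bytes(101, 3, 4))  # 304
print(padded_row_bytes(101, 3, 1))  # 303
```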
Another thing: your unit_symbols list is completely unnecessary. The OpenGL specification explicitly says that GL_TEXTUREn = GL_TEXTURE0 + n, so you can simply do glActiveTexture(GL_TEXTURE0 + n). Also, when loading a texture image the unit is largely irrelevant; the only thing that matters is that loading requires binding the texture, which happens in some texture unit, and the texture can afterwards be bound in any texture unit desired.
Personally I use the highest texture unit for loading images, to avoid accidently clobbering required state.
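As a quick illustration of that additive rule (0x84C0 is the GL_TEXTURE0 enum value from the OpenGL registry):

```python
GL_TEXTURE0 = 0x84C0  # enum value from the OpenGL registry

def texture_unit(n):
    # The spec guarantees GL_TEXTUREn == GL_TEXTURE0 + n,
    # so no lookup table of 32 symbols is needed.
    return GL_TEXTURE0 + n

print(hex(texture_unit(0)))   # 0x84c0  (GL_TEXTURE0)
print(hex(texture_unit(31)))  # 0x84df  (GL_TEXTURE31)
```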

Render my map in Mapnik

Sorry for disturbing you.
I cannot render my map, and I don't know why.
I read a CSV file using OGR, through a .vrt file I created for the CSV.
Then I have a simple piece of code to render my map, but I cannot get it to work: a map with an empty background is created, with nothing on it.
I get a warning, but I think it is normal:
Warning 1: The 'LON' and/or 'LAT' fields of the source layer are not declared as numeric fields,
so the spatial filter cannot be turned into an attribute filter on them
Do you have an idea?
Thanks!
My .csv (called ZZZ.csv), just the beginning and the interesting fields:
RecordID,VehId,DateTime,LAT,LON
0,2232,2012-04-07 18:54:39,32.801926,-116.871742
0,2232,2012-04-07 18:54:40,32.801888,-116.871727
0,2232,2012-04-07 18:54:41,32.801849,-116.871704
My .vrt:
<OGRVRTDataSource>
<OGRVRTLayer name="ZZZ">
<SrcDataSource>ZZZ.csv</SrcDataSource>
<GeometryType>wkbPoint</GeometryType>
<LayerSRS>WGS84</LayerSRS>
<GeometryField encoding="PointFromColumns" x="LON" y="LAT"/>
</OGRVRTLayer>
</OGRVRTDataSource>
My Python module to render the map:
"""module mapniktest"""
import mapnik
#Defining the envelope
MIN_LAT = 30
MAX_LAT = +35
MIN_LON = -120
MAX_LON =-110
MAP_WIDTH = 1000
MAP_HEIGHT = 500
#defining the datasource: the .vrt above
datasource = mapnik.Ogr(file="ZZZ.vrt",layer = "ZZZ")
#Creating layer, rules and styles
layer = mapnik.Layer("ZZZ")
layer.datasource = datasource
layer.styles.append("LineStyle")
stroke = mapnik.Stroke()
stroke.color = mapnik.Color("#008000")
stroke.add_dash(50, 100)
symbol = mapnik.LineSymbolizer(stroke)
rule = mapnik.Rule()
rule.symbols.append(symbol)
style = mapnik.Style()
style.rules.append(rule)
print style
#creating the map
map = mapnik.Map(MAP_WIDTH, MAP_HEIGHT, "+proj=longlat +datum=WGS84")
map.append_style("LineStyle", style)
map.background = mapnik.Color("#8080a0")
map.layers.append(layer)
#displaying the map
map.zoom_to_box(mapnik.Envelope(MIN_LON, MIN_LAT, MAX_LON, MAX_LAT))
mapnik.render_to_file(map, "map.png")
Thanks!
The problem is that you are applying a LineSymbolizer to point data. You need to apply either a PointSymbolizer or a MarkersSymbolizer to point data.
Also, Mapnik 2.1 and above supports reading directly from CSV files, so you do not need to use a VRT and the OGR plugin, although both should work similarly.
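For reference, a hypothetical Mapnik XML sketch of the same setup with a PointSymbolizer and the CSV plugin in place of the LineSymbolizer and the VRT (file, layer, and style names are taken from the question; the CSV plugin is assumed to detect the LAT/LON columns automatically):

```xml
<Map background-color="#8080a0" srs="+proj=longlat +datum=WGS84">
  <Style name="PointStyle">
    <Rule>
      <PointSymbolizer/>
    </Rule>
  </Style>
  <Layer name="ZZZ" srs="+proj=longlat +datum=WGS84">
    <StyleName>PointStyle</StyleName>
    <Datasource>
      <Parameter name="type">csv</Parameter>
      <Parameter name="file">ZZZ.csv</Parameter>
    </Datasource>
  </Layer>
</Map>
```

The equivalent change in the Python script is to build the style around mapnik.PointSymbolizer() instead of a LineSymbolizer.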

How to get width of a truetype font character in 1200ths of an inch with Python?

I can get the height and width of a character in pixels with PIL (see below), but (unless I'm mistaken) pixel size depends on the screen's DPI, which can vary. Instead what I'd like to do is calculate the width of a character in absolute units like inches, or 1200ths of an inch ("wordperfect units").
>>> # Getting pixels width with PIL
>>> font = ImageFont.truetype('/blah/Fonts/times.ttf' , 12)
>>> font.getsize('a')
(5, 14)
My reason for wanting to do this is to create a word-wrapping function for writing binary Word Perfect documents. Word Perfect requires soft linebreak codes to be inserted at valid points throughout the text, or the file will be corrupt and unopenable. The question is where to add them for variable width fonts.
I realize that I don't fully understand the relationship between pixels and screen resolution and font sizes. Am I going about this all wrong?
Raw text widths are usually calculated in typographer's points, but since the point for the purpose of font definitions is defined as 1/72 of an inch, you can easily convert it into any other unit.
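For the WordPerfect units mentioned in the question (1200ths of an inch), that conversion is a single ratio; a minimal sketch:

```python
POINTS_PER_INCH = 72      # a typographer's point is defined as 1/72 inch
WP_UNITS_PER_INCH = 1200  # WordPerfect units are 1200ths of an inch

def points_to_wp_units(pt):
    return pt * WP_UNITS_PER_INCH / POINTS_PER_INCH

print(points_to_wp_units(72))  # 1200.0 (one inch)
```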
To get the design width of a character (expressed in em units), you need access to the low-level data of the font. The easiest way is to pip install fonttools, which has everything to work at the lowest possible level of font definitions.
With fontTools installed, you can:
load the font data – this requires the path to the actual font file;
character widths are stored as glyph widths, meaning you must retrieve a 'character-to-glyph' mapping; this is in the cmap table of a font:
a. load the cmap for your font. The most useful is the Unicode map – a font may contain others.
b. load the glyph set for your font. This is a list of names for the glyphs in that font.
Then, for each Unicode character, first look up its name and then use the name to retrieve its width in design units.
Don't forget that these 'design units' are relative to the font's overall design size (its unitsPerEm). This can be a standard value of 1000 (typical for Type 1 fonts), 2048 (typical for TrueType fonts), or any other value.
That leads to this function:
from fontTools.ttLib import TTFont
from fontTools.ttLib.tables._c_m_a_p import CmapSubtable

font = TTFont('/Library/Fonts/Arial.ttf')
cmap = font['cmap']
t = cmap.getcmap(3, 1).cmap
s = font.getGlyphSet()
units_per_em = font['head'].unitsPerEm

def getTextWidth(text, pointSize):
    total = 0
    for c in text:
        if ord(c) in t and t[ord(c)] in s:
            total += s[t[ord(c)]].width
        else:
            total += s['.notdef'].width
    total = total * float(pointSize) / units_per_em
    return total

text = 'This is a test'
width = getTextWidth(text, 12)

print('Text: "%s"' % text)
print('Width in points: %f' % width)
print('Width in inches: %f' % (width / 72))
print('Width in cm: %f' % (width * 2.54 / 72))
print('Width in WP Units: %f' % (width * 1200 / 72))
The result is:
Text: "This is a test"
Width in points: 67.353516
Width in inches: 0.935465
Width in cm: 2.376082
Width in WP Units: 1122.558594
and is correct when compared to what Adobe InDesign reports. (Note that per-character kerning is not applied here! That would require a lot more code.)
Characters that are not defined in the font are silently ignored and, as is usually done, the width of the .notdef glyph is used instead. If you want this reported as an error, remove the if test in the function.
The cast to float in the function getTextWidth is so this works under both Python 2.7 and 3.5, but note that if you use Python 2.7 and larger value Unicode characters (not plain ASCII), you need to rewrite the function to correctly use UTF8 characters.
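Once widths are available in absolute units (for instance from the getTextWidth function above), choosing where to insert WordPerfect's soft line breaks is a greedy fill. A minimal sketch; the fixed per-character width in the demo call is a made-up illustration value, and a real caller would pass a font-based measurement:

```python
def wrap_positions(words, width_of, max_width, space_width):
    """Greedy fill: return the indices of words that must start a new line.
    width_of maps a word to its width in the same units as max_width
    (e.g. 1200ths of an inch)."""
    breaks, line = [], 0
    for i, w in enumerate(words):
        needed = width_of(w) if line == 0 else line + space_width + width_of(w)
        if line and needed > max_width:
            breaks.append(i)       # soft line break goes before word i
            line = width_of(w)
        else:
            line = needed
    return breaks

# Toy fixed-width measure: 10 units per character, 45-unit line, 10-unit space.
words = "aa b cc d".split()
print(wrap_positions(words, lambda w: 10 * len(w), 45, 10))  # [2]
```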
This worked better for me:
from PIL import Image, ImageDraw, ImageFont

def pixel_width(unicode_text):
    width = len(unicode_text) * 50
    height = 100
    back_ground_color = (0, 0, 0)
    font_size = 64
    font_color = (255, 255, 255)
    im = Image.new("RGB", (width, height), back_ground_color)
    draw = ImageDraw.Draw(im)
    unicode_font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf", font_size)
    draw.text((0, 0), unicode_text, font=unicode_font, fill=font_color)
    im.save("/dev/shm/text.png")
    box = Image.open("/dev/shm/text.png").getbbox()
    return box[2] - box[0]
