Sorry to disturb you.
I cannot render my map, and I don't know why...
I read a CSV file using OGR, through a .vrt file I created for the CSV (both are shown below).
Then I have some simple code to render the map, but I cannot get it to work: a map with an empty background is created, with nothing on it...
I get a warning, but I think it is normal:
Warning 1: The 'LON' and/or 'LAT' fields of the source layer are not declared as numeric fields,
so the spatial filter cannot be turned into an attribute filter on them
Do you have any ideas?
Thanks!
My .csv (called ZZZ.csv), just the beginning and the relevant fields:
RecordID,VehId,DateTime,LAT,LON
0,2232,2012-04-07 18:54:39,32.801926,-116.871742
0,2232,2012-04-07 18:54:40,32.801888,-116.871727
0,2232,2012-04-07 18:54:41,32.801849,-116.871704
My .vrt:
<OGRVRTDataSource>
<OGRVRTLayer name="ZZZ">
<SrcDataSource>ZZZ.csv</SrcDataSource>
<GeometryType>wkbPoint</GeometryType>
<LayerSRS>WGS84</LayerSRS>
<GeometryField encoding="PointFromColumns" x="LON" y="LAT"/>
</OGRVRTLayer>
</OGRVRTDataSource>
My Python module to render the map:
"""module mapniktest"""
import mapnik
#Defining the envelope
MIN_LAT = 30
MAX_LAT = +35
MIN_LON = -120
MAX_LON = -110
MAP_WIDTH = 1000
MAP_HEIGHT = 500
#defining the datasource: the .vrt above
datasource = mapnik.Ogr(file="ZZZ.vrt", layer="ZZZ")
#Creating layer, rules and styles
layer = mapnik.Layer("ZZZ")
layer.datasource = datasource
layer.styles.append("LineStyle")
stroke = mapnik.Stroke()
stroke.color = mapnik.Color("#008000")
stroke.add_dash(50, 100)
symbol = mapnik.LineSymbolizer(stroke)
rule = mapnik.Rule()
rule.symbols.append(symbol)
style = mapnik.Style()
style.rules.append(rule)
print style
#creating the map
map = mapnik.Map(MAP_WIDTH, MAP_HEIGHT, "+proj=longlat +datum=WGS84")
map.append_style("LineStyle", style)
map.background = mapnik.Color("#8080a0")
map.layers.append(layer)
#displaying the map
map.zoom_to_box(mapnik.Envelope(MIN_LON, MIN_LAT, MAX_LON, MAX_LAT))
mapnik.render_to_file(map, "map.png")
Thanks!
The problem is that you are applying a LineSymbolizer to point data. You need to apply either a PointSymbolizer or a MarkersSymbolizer to point data.
Also, Mapnik 2.1 and above supports reading directly from CSV files, so you do not need the VRT and the OGR plugin, although both approaches should work similarly.
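For illustration, here is a minimal sketch of the script above with the line styling swapped for a PointSymbolizer (Mapnik 2.x Python API assumed; the default PointSymbolizer just draws a small square marker at each point):
import mapnik

# Same OGR/VRT datasource as in the question; only the styling changes.
datasource = mapnik.Ogr(file="ZZZ.vrt", layer="ZZZ")

layer = mapnik.Layer("ZZZ")
layer.datasource = datasource
layer.styles.append("PointStyle")

rule = mapnik.Rule()
rule.symbols.append(mapnik.PointSymbolizer())  # point marker instead of LineSymbolizer

style = mapnik.Style()
style.rules.append(rule)

m = mapnik.Map(1000, 500, "+proj=longlat +datum=WGS84")
m.append_style("PointStyle", style)
m.background = mapnik.Color("#8080a0")
m.layers.append(layer)
m.zoom_to_box(mapnik.Envelope(-120, 30, -110, 35))
mapnik.render_to_file(m, "map.png")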
I am trying to load a shapefile using geopandas. I have read the file in, but when I go to add the layer to the ipyleaflet map, I get an error that says 'GeoDataFrame' object has no attribute 'model_id'.
I have tried many files and it just will not work with the GeoDataFrame.
import geopandas as gpd
from ipyleaflet import Map, basemaps

fire_was = gpd.read_file('fire_wa.shp')
#add map
center = [47.409824923593575, -120.43401014843751]
zoom = 7
m = Map(basemap=basemaps.OpenStreetMap.Mapnik, center=center, zoom=zoom)
m.add_layer(fire_was)
m
Any ideas?
I figured it out. The example I had seen did not show this, but after checking more of the ipyleaflet documentation I found that you must use GeoData to load the GeoDataFrame.
import geopandas as gpd
from ipyleaflet import Map, GeoData, basemaps, LayersControl

fire_was = gpd.read_file('fire_wa.shp')
firewa = GeoData(geo_dataframe=fire_was)
m = Map(center=(47.409824923593575, -120.43401014843751), zoom=7, basemap=basemaps.Esri.WorldTopoMap)
m.add_layer(firewa)
m
I have a global dataset in TIFF format at about 300 m resolution, and I want to upscale it to 9 km resolution (my code is below). Because of the high resolution and the long computing time, I decided to do the upscaling piecewise: I divided the global data into 10 pieces, upscale each one, and store each piece in its own TIFF file. Now my problem: the last piece of the global data is NOT saved completely to disk. Each piece should be about 2 MB, but piece #10 is only 1.7 MB. The strange thing is that after running my script a second time, the earlier piece #10 becomes complete (it grows from 1.7 MB to 2 MB), but the newly written piece #10 is again incomplete.
import numpy as np
from osgeo import gdal
from osgeo import osr
from osgeo.gdalconst import *
import pandas as pd
#
#%%
#-----converting--------#
df_new = pd.read_excel("input_attribute_table.xlsx",sheet_name='Global_data')
listvar = ['var1']
number = df_new['data_number'][:]
##The size of global array is 129599 x 51704. The pieces should be square
xoff = np.array([0, 25852.00, 51704.00, 77556.00, 103408.00])
yoff = np.array([0, 25852.00])
xcount = 25852
ycount = 25852
o = 1
for q in range(len(yoff)):
    for p in range(len(xoff)):
        src = gdal.Open('Global_database.tif')
        ds_xform = src.GetGeoTransform()
        ds_driver = gdal.GetDriverByName('Gtiff')
        srs = osr.SpatialReference()
        srs.ImportFromEPSG(4326)
        data = src.GetRasterBand(1).ReadAsArray(xoff[p], yoff[q], xcount, ycount).astype(np.float32)
        Var = np.zeros(data.shape, dtype=np.float32)
        Variable_load = df_new[listvar[0]][:]
        for m in range(len(number)):
            Var[data == number[m]] = Variable_load[m]
        #-------rescaling-----------#
        Var[np.where(np.isnan(Var))] = 0
        ds_driver = gdal.GetDriverByName('Gtiff')
        srs = osr.SpatialReference()
        srs.ImportFromEPSG(4326)
        sz = Var.itemsize
        h, w = Var.shape
        bh, bw = 36, 36
        shape = (h/bh, w/bw, bh, bw)
        shape2 = (int(shape[0]), int(shape[1]), shape[2], shape[3])
        strides = sz*np.array([w*bh, bw, w, 1])
        blocks = np.lib.stride_tricks.as_strided(Var, shape=shape2, strides=strides)
        resized_array = ds_driver.Create(str(listvar[0])+'_resized_to_9km_glob_piece'+str(o)+'.tif', shape2[1], shape2[0], 1, gdal.GDT_Float32)
        resized_array.SetGeoTransform((ds_xform[0], ds_xform[1]*bw, ds_xform[2], ds_xform[3], ds_xform[4], ds_xform[5]*bh))
        resized_array.SetProjection(srs.ExportToWkt())
        band = resized_array.GetRasterBand(1)
        zero_array = np.zeros([shape2[0], shape2[1]], dtype=np.float32)
        for z in range(len(blocks)):
            for k in range(len(blocks)):
                zero_array[z][k] = np.mean(blocks[z][k])
        band.WriteArray(zero_array)
        band.FlushCache()
        band = None
        del zero_array
        del Var
        o = o + 1
Normally, you should either be sure to call close on a file, or use the with statement. However, it looks like neither of those is supported by gdal.
Instead, you're expected to remove all references to the file. You're already setting band = None, but you also need to set src = None.
This is a bad, non-Pythonic interface, but that's apparently what the Python gdal library does. In addition to being a weird gotcha in its own right, it also interacts poorly with exceptions; any unhandled exceptions may also result in the file not being saved (or being partly saved, or being corrupted).
For the immediate problem, though, adding src = None or del src should do the trick.
PS (from comments): Another option would be to move the body of the for loop into a function; that will automatically delete all the variables without you having to list them all and potentially miss one. It'll still have problems if there's an exception, but at least the normal case should start working...
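As a rough illustration of that pattern (the helper name and the simplified body here are mine, not the question's code): moving the per-piece work into a function means src, the output dataset and the band all go out of scope when it returns, so GDAL can flush and close the output file; explicitly assigning None inside the loop works the same way.
from osgeo import gdal
import numpy as np

def process_piece(p, q, o, xoff, yoff, xcount, ycount):
    # Hypothetical helper wrapping the body of the nested loop from the question.
    src = gdal.Open('Global_database.tif')
    data = src.GetRasterBand(1).ReadAsArray(int(xoff[p]), int(yoff[q]), xcount, ycount)
    out = gdal.GetDriverByName('GTiff').Create(
        'piece' + str(o) + '.tif', xcount, ycount, 1, gdal.GDT_Float32)
    out.SetGeoTransform(src.GetGeoTransform())
    out.GetRasterBand(1).WriteArray(data.astype(np.float32))
    # Dropping every reference lets GDAL flush and close the datasets;
    # when this function returns, the same happens automatically.
    out = None
    src = None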
I have this code:
import pandas as pd
import numpy as np
from geopandas import GeoDataFrame
import geopandas
from shapely.geometry import LineString, Point
import matplotlib.pyplot as plt
import contextily
''' Do Something'''
df = start_stop_df.drop('track', axis=1)
crs = {'init': 'epsg:4326'}
gdf = GeoDataFrame(df, crs=crs, geometry=geometry)
ax = gdf.plot()
contextily.add_basemap(ax)
ax.set_axis_off()
plt.show()
Basically, this generates a background map that is in Singapore. However, when I run it, I get the following error: HTTPError: Tile URL resulted in a 404 error. Double-check your tile url: http://tile.stamen.com/terrain/29/268436843/268435436.png
However, it still produces this image:
How can I change the Tile URL? I would still like to have the map of Singapore as the base layer.
EDIT:
I also tried passing this argument to add_basemap:
url ='https://www.openstreetmap.org/#map=12/1.3332/103.7987'
Which produced this error:
OSError: cannot identify image file <_io.BytesIO object at 0x000001CC3CC4BC50>
First, make sure your GeoDataFrame is in the Web Mercator projection (EPSG:3857). Once your GeoDataFrame is correctly georeferenced, you can reproject it with GeoPandas:
df = df.to_crs(epsg=3857)
Once that is done, you can easily choose any of the supported map styles. A full list can be found in the contextily.sources module; at the time of writing:
### Tile provider sources ###
ST_TONER = 'http://tile.stamen.com/toner/tileZ/tileX/tileY.png'
ST_TONER_HYBRID = 'http://tile.stamen.com/toner-hybrid/tileZ/tileX/tileY.png'
ST_TONER_LABELS = 'http://tile.stamen.com/toner-labels/tileZ/tileX/tileY.png'
ST_TONER_LINES = 'http://tile.stamen.com/toner-lines/tileZ/tileX/tileY.png'
ST_TONER_BACKGROUND = 'http://tile.stamen.com/toner-background/tileZ/tileX/tileY.png'
ST_TONER_LITE = 'http://tile.stamen.com/toner-lite/tileZ/tileX/tileY.png'
ST_TERRAIN = 'http://tile.stamen.com/terrain/tileZ/tileX/tileY.png'
ST_TERRAIN_LABELS = 'http://tile.stamen.com/terrain-labels/tileZ/tileX/tileY.png'
ST_TERRAIN_LINES = 'http://tile.stamen.com/terrain-lines/tileZ/tileX/tileY.png'
ST_TERRAIN_BACKGROUND = 'http://tile.stamen.com/terrain-background/tileZ/tileX/tileY.png'
ST_WATERCOLOR = 'http://tile.stamen.com/watercolor/tileZ/tileX/tileY.png'
# OpenStreetMap as an alternative
OSM_A = 'http://a.tile.openstreetmap.org/tileZ/tileX/tileY.png'
OSM_B = 'http://b.tile.openstreetmap.org/tileZ/tileX/tileY.png'
OSM_C = 'http://c.tile.openstreetmap.org/tileZ/tileX/tileY.png'
Keep in mind that you should not be adding actual x,y,z tile numbers in your tile URL (like you did in your "EDIT" example). ctx will take care of all this.
You can find a working copy-pastable example and further info at GeoPandas docs.
import contextily as ctx
# Dataframe you want to plot
gdf = GeoDataFrame(df, crs={"init": "epsg:4326"})  # Create a georeferenced dataframe
gdf = gdf.to_crs(epsg=3857) # reproject it in Web mercator
ax = gdf.plot()
# choose any of the supported maps from ctx.sources
ctx.add_basemap(ax, url=ctx.sources.ST_TERRAIN)
ax.set_axis_off()
plt.show()
Contextily's default CRS is EPSG:3857, but your dataframe is in a different CRS. Use the following (refer to the manual here):
ctx.add_basemap(ax, crs='epsg:4326', source=ctx.providers.Stamen.TonerLite)
Please refer to this link for using different sources such as Stamen.Toner, Stamen.Terrain, etc. (Stamen.Terrain is used by default).
Alternatively, you can reproject your dataframe to EPSG:3857 with df.to_crs(); in that case, skip the crs argument in ctx.add_basemap(), as sketched below.
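For illustration, a minimal sketch of that alternative (assuming gdf is the GeoDataFrame from the question, already in EPSG:4326):
import contextily as ctx
import matplotlib.pyplot as plt

gdf_3857 = gdf.to_crs(epsg=3857)  # reproject to Web Mercator
ax = gdf_3857.plot()
ctx.add_basemap(ax, source=ctx.providers.Stamen.TonerLite)  # no crs argument needed
ax.set_axis_off()
plt.show()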
I'm too new to add a comment, but I wanted to point out to those saying in the comments that they get a 404 error: check your capitalization. Stamen's URLs are specific about this; for instance, there is no all-caps form, only the first letter is capitalized. For example:
ctx.add_basemap(ax=ax, url=ctx.providers.Stamen.Toner, zoom=10)
I am working with openmesh, installed for Python 3.6 via pip. I need to add custom properties to the vertices of a mesh in order to store some data at each vertex. My code is as follows:
import openmesh as OM
import numpy as np
mesh = OM.TriMesh()
#Some vertices
vh0 = mesh.add_vertex(np.array([0,0,0]));
vh1 = mesh.add_vertex(np.array([1,0,0]));
vh2 = mesh.add_vertex(np.array([1,1,0]));
vh3 = mesh.add_vertex(np.array([0,1,0]));
#Some data
data = np.arange(mesh.n_vertices)
#Add custom property
for vh in mesh.vertices():
mesh.set_vertex_property('prop1', vh, data[vh.idx()])
#Check properties have been added correctly
print(mesh.vertex_property('prop1'))
OM.write_mesh('mesh.om',mesh)
print returns [0, 1, 2, 3]. So far, so good. But when I read the mesh back, the custom property has disappeared:
mesh1 = OM.TriMesh()
mesh1 = OM.read_trimesh('mesh.om')
print(mesh1.vertex_property('prop1'))
returns [None, None, None, None]
I have two guesses:
1 - The property was not saved in the first place.
2 - The reader does not know there is a custom property when it reads the file mesh.om.
Does anybody know how to properly save and read a mesh with custom vertex properties with openmesh in Python? Is it even possible (has anybody done it before?), or is there something wrong with my code?
Thanks for your help,
Charles.
The OM writer currently does not support custom properties. If you are working with numeric properties, it is probably easiest to convert the data to a NumPy array and save it separately.
Say your mesh and properties are set up like this:
import openmesh as om
import numpy as np
# create example mesh
mesh1 = om.TriMesh()
v00 = mesh1.add_vertex([0,0,0])
v01 = mesh1.add_vertex([0,1,0])
v10 = mesh1.add_vertex([1,0,0])
v11 = mesh1.add_vertex([1,1,0])
mesh1.add_face(v00, v01, v11)
mesh1.add_face(v00, v11, v01)
# set property data
mesh1.set_vertex_property('color', v00, [1,0,0])
mesh1.set_vertex_property('color', v01, [0,1,0])
mesh1.set_vertex_property('color', v10, [0,0,1])
mesh1.set_vertex_property('color', v11, [1,1,1])
You can extract the property data as a numpy array using one of the *_property_array methods and save it alongside the mesh using NumPy's save function.
om.write_mesh('mesh.om', mesh1)
color_array1 = mesh1.vertex_property_array('color')
np.save('color.npy', color_array1)
Loading is similar:
mesh2 = om.read_trimesh('mesh.om')
color_array2 = np.load('color.npy')
mesh2.set_vertex_property_array('color', color_array2)
# verify property data is equal
for vh1, vh2 in zip(mesh1.vertices(), mesh2.vertices()):
color1 = mesh1.vertex_property('color', vh1)
color2 = mesh2.vertex_property('color', vh2)
assert np.allclose(color1, color2)
When you store the data, you should set the property's set_persistent flag to true, as below.
(Sorry for using C++; I don't know the Python API.)
OpenMesh::VPropHandleT<float> vprop_float;
mesh.add_property(vprop_float, "vprop_float");
mesh.property(vprop_float).set_persistent(true);
OpenMesh::IO::write_mesh(mesh, "tmesh.om");
Then you have to request this custom property on your mesh before reading the file; the order is important.
TriMesh readmesh;
OpenMesh::VPropHandleT<float> vprop_float;
readmesh.add_property(vprop_float, "vprop_float");
OpenMesh::IO::read_mesh(readmesh, "tmesh.om");
I referred to the documentation below:
https://www.openmesh.org/media/Documentations/OpenMesh-4.0-Documentation/a00062.html
https://www.openmesh.org/media/Documentations/OpenMesh-4.0-Documentation/a00060.html
I am working on a report that includes a mixture of tables and images. The images (graphs, actually) are saved to the filesystem as .png files.
The method that actually renders the PDF is:
def _render_report(report_data):
    file_name = get_file_name()  # generate random filename for report
    rpt = Report(settings.MEDIA_ROOT + os.sep + file_name)
    Story = []
    for (an, sam, event), props in report_data.iteritems():
        Story.append(Paragraph("%s - sample %s results for %s" % (an.name, sam.name, event.name), styles["Heading2"]))
        data_list = [['Lab', 'Method', 'Instrument', 'Unit', 'Raw Result', 'Converted Result', 'Outlier']]
        for (index, series) in props['frame'].iterrows():
            data_list.append(_format([
                Paragraph(Lab.objects.get(pk=series['labs']).name, styles['BodyText']),
                Paragraph(Method.objects.get(pk=series['methods']).name, styles['BodyText']),
                Paragraph(Instrument.objects.get(pk=series['instruments']).name, styles['BodyText']),
                Paragraph(Unit.objects.get(pk=series['units']).name, styles['BodyText']),
                series['raw_results'],
                series['results'],
                series['outlier']
            ]))
        table = Table(data_list, colWidths=[45 * mm, 35 * mm, 35 * mm, 25 * mm, 35 * mm, 35 * mm, 35 * mm], repeatRows=1)
        Story.append(table)
        Story.append(PageBreak())
        if props['graph'] is not None:
            Story.append(Image("/tmp/%s" % props['graph'], width=10 * inch, height=6 * inch))
            Story.append(PageBreak())
    rpt.draw(Story, onFirstPage=setup_header_and_footer, onLaterPages=setup_header_and_footer)
    return file_name
Background Information
The page is set up as an A4, in landscape orientation
My development environment is a virtualenv; PIL 1.1.7 and reportlab 2.6 are installed and functional
The "Report" class used above is simply a thin wrapper around SimpleDocTemplate that sets up some defaults but delegates to SimpleDocTemplate's build implementation. Its code is:
class Report(object):
    def __init__(self, filename, doctitle="Report", docauthor="<default>",
                 docsubject="<default>", doccreator="<default>", orientation="landscape", size=A4):
        DEFAULTS = {
            'leftMargin': 10 * mm,
            'rightMargin': 10 * mm,
            'bottomMargin': 15 * mm,
            'topMargin': 36 * mm,
            'pagesize': landscape(size) if orientation == "landscape" else portrait(size),
            'title': doctitle,
            'author': docauthor,
            'subject': docsubject,
            'creator': doccreator
        }
        self.doc = SimpleDocTemplate(filename, **DEFAULTS)

    def draw(self, flowables, onFirstPage=setup_header_and_footer, onLaterPages=setup_header_and_footer):
        self.doc.build(flowables, onFirstPage=setup_header_and_footer,
                       onLaterPages=setup_header_and_footer, canvasmaker=NumberedCanvas)
What I have already looked at
I have confirmed that the images exist on disk. The paths are fully qualified paths.
PIL is installed, and is able to read the images correctly
The space assigned to the image is adequate; I have confirmed this by calculation. Also, if I increase the image size, ReportLab complains about the Image Flowable being too large. The current dimension should fit.
I have tested with and without the page breaks; they do not seem to make any difference
The Problem
The table, headings, and page templates render OK, but the images are blank. Earlier today, I had encountered this issue (when setting up the templates used by this report). The workaround was to use canvas.drawInlineImage(...) in place of canvas.drawImage(...). It therefore looks as though there is an issue with my setup; I could use some pointers on how to debug it.
Update
I was able to apply a variant of the same workaround used in the linked question (use canvas.drawInlineImage in place of canvas.drawImage). I subclassed Image as follows:
class CustomImage(Image):
    """
    Override - to use inline image instead; inexplicable bug with non inline images
    """
    def draw(self):
        lazy = self._lazy
        if lazy >= 2:
            self._lazy = 1
        self.canv.drawInlineImage(self.filename,
                                  getattr(self, '_offs_x', 0),
                                  getattr(self, '_offs_y', 0),
                                  self.drawWidth,
                                  self.drawHeight)
        if lazy >= 2:
            self._img = None
            self._lazy = lazy
The only change from the "stock" Image class is in one line: using self.canv.drawInlineImage where there was self.canv.drawImage before.
This "works" in the sense that the images are finally visible in my PDF. The reason why drawImage is not working still remains a mystery.
I tried @PedroRomano's suggestion (making sure the images are RGBA), and even tried JPEG images instead of PNGs. Neither made a difference.
I eventually brought this matter to a close by using a custom Image subclass:
class CustomImage(Image):
    """
    Override - to use inline image instead; inexplicable bug with non inline images
    """
    def draw(self):
        lazy = self._lazy
        if lazy >= 2:
            self._lazy = 1
        self.canv.drawInlineImage(self.filename,
                                  getattr(self, '_offs_x', 0),
                                  getattr(self, '_offs_y', 0),
                                  self.drawWidth,
                                  self.drawHeight)
        if lazy >= 2:
            self._img = None
            self._lazy = lazy
Saving the graphs in vector formats (e.g. EPS), saving them as JPEG, and saving them as PNG with and without the alpha channel all made no difference.
The optimal solution would be to generate your graphs in a vector format like postscript that is supported by reportlab. A lot of UNIX software can do this out of the box, and on windows you can use the excellent PDFCreator.
If you have to use raster images for your graphs, try converting your images to JPEG format. Those can be easily embedded in a PDF file using the DCTDecode filter. (This is e.g. what jpeg2pdf does.)
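For illustration, a minimal sketch of that conversion using PIL (the helper name is mine; it flattens any alpha channel before saving, since JPEG has no transparency):
from PIL import Image as PILImage

def png_to_jpeg(png_path, jpeg_path):
    # Convert an RGBA/RGB PNG to an RGB JPEG, which ReportLab can embed
    # via the DCTDecode filter.
    img = PILImage.open(png_path).convert("RGB")
    img.save(jpeg_path, "JPEG", quality=90)
    return jpeg_path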