In this article, I will show how basic OpenGL ES 2.0/3.2 techniques that utilize the power of modern GPUs can be used to draw 2D vector nautical charts, using the S57 format as an example. As you have probably guessed already, the first technique I am going to consider is using vertex buffers to store the objects that make up the chart. I will say at once that this is not a trivial task. For example, an area in an S57 chart can be filled and outlined with specific patterns generated from the so-called Presentation Library, and the fill pattern must be aligned to the top-left corner of the screen rather than to the area itself, so that the pattern does not move when the user shifts the chart. Another example is a point object or symbol that must be shown in the center of the visible part of an area, so its geographic coordinates change when the user offsets or scales the chart. Theoretically, if we have OpenGL ES 3.2, we can use geometry shaders for drawing patterns; but if we have only OpenGL ES 2.0, which does not support geometry shaders, those patterns can still be drawn with vertex buffers that are regenerated each time the user changes the scale.

So let's start solving this complex task with a simple example: a geographic area (polygon) filled with blue color. At least this example will demonstrate the basic idea of using vertex buffers.
The first question we need to answer is in which coordinates we will store our polygon in a vertex buffer. Here we have at least two alternatives: geographic coordinates (latitude and longitude) and so-called plane coordinates. The first alternative is very straightforward: the vertex contains its latitude and longitude, and the vertex shader converts them to screen coordinates using projection-specific formulas. This approach has its benefits, because theoretically it allows drawing the polygon as a 3D object on the surface of the 3D ellipsoid representing the Earth. But if we need only 2D charts, we can significantly simplify the vertex shader by using plane coordinates. Below I will talk about projections a bit to explain what the plane coordinates are.
We can think of 2D geographic projections of all types (Mercator, Transverse Mercator, Cylindrical, Conic, etc.) as a transformation of the ellipsoid onto a plane paper chart of a certain scale. After the projection is applied, we no longer have latitude and longitude; instead we have some plane_x, plane_y coordinates on the paper chart. I call these paper chart coordinates the plane coordinates, and the conversion from geographic coordinates to plane coordinates the first stage of conversion. The second stage is the conversion from plane coordinates to screen coordinates according to the scale, offset, and rotation angle selected by the user; this conversion can be done easily by multiplying by a 2×2 matrix and adding a two-dimensional offset vector. The first stage does not depend on the scale, offset, and rotation angle, and the second stage does not depend on the projection type. The C++ classes representing the projection should look like this:
class GeoProjection
{
public:
    virtual PlainPoint GeoToPlain(GeoPoint src) = 0;
    virtual GeoPoint PlainToGeo(PlainPoint src) = 0;
};

class MercatorProjection : public GeoProjection
{
public:
    virtual PlainPoint GeoToPlain(GeoPoint src) override;
    virtual GeoPoint PlainToGeo(PlainPoint src) override;
};

class MatrixProjection
{
public:
    PlainPoint ScreenToPlain(ScreenPoint src);
    ScreenPoint PlainToScreen(PlainPoint src);
};

class ScreenProjection
{
public:
    PlainPoint GeoToPlain(GeoPoint src);
    ScreenPoint GeoToScreen(GeoPoint src);
    GeoPoint ScreenToGeo(ScreenPoint src);
    PlainPoint ScreenToPlain(ScreenPoint src);
    GeoPoint PlainToGeo(PlainPoint src);
    ScreenPoint PlainToScreen(PlainPoint src);

private:
    GeoProjectionParameters Params; // scale, offset, angle
    std::unique_ptr<GeoProjection> pGeoProjection;
    MatrixProjection matrixProjection;
};
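For illustration, here is how the first stage could be implemented for MercatorProjection. This is a minimal sketch assuming a spherical Earth model and GeoPoint fields lat/lon in radians (those field names are my assumptions); a production implementation would use the ellipsoidal WGS-84 formulas instead:

#include <cmath>

// Minimal sketch: spherical Mercator for the first conversion stage.
// Assumes GeoPoint stores lat/lon in radians and PlainPoint stores
// meters on the "paper chart" plane.
static const double kEarthRadius = 6378137.0; // WGS-84 equatorial radius, m
static const double kPi = 3.14159265358979323846;

PlainPoint MercatorProjection::GeoToPlain(GeoPoint src)
{
    PlainPoint dst;
    dst.x = kEarthRadius * src.lon;
    dst.y = kEarthRadius * std::log(std::tan(kPi / 4.0 + src.lat / 2.0));
    return dst;
}

GeoPoint MercatorProjection::PlainToGeo(PlainPoint src)
{
    GeoPoint dst;
    dst.lon = src.x / kEarthRadius;
    dst.lat = 2.0 * std::atan(std::exp(src.y / kEarthRadius)) - kPi / 2.0;
    return dst;
}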
Returning to vertex buffers, my idea is that the first stage of the conversion should be performed on the CPU and the second stage on the GPU. Thus the video memory will be filled with "paper charts" in plane coordinates, and the vertex shader will multiply them by the matrix:
uniform mat4 matrix;     // scale, offset and rotation angle
attribute vec2 position; // polygon vertices in plane coordinates

void main()
{
    gl_Position = matrix * vec4(position, 0.0, 1.0);
}
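On the CPU side, the second stage then boils down to composing this matrix once per frame. Below is a minimal sketch of how it could be built and uploaded; the function name, parameter set, and the pixel-to-NDC mapping are my assumptions, not code from a real chart engine:

#include <cmath>
#include <GLES2/gl2.h>

// Sketch: combine scale, rotation and offset (plane -> screen pixels)
// with the pixel -> normalized-device-coordinate mapping, laid out in
// the column-major order OpenGL expects.
void UploadMatrix(GLint matrixLocation, float scale, float angle,
                  float offsetX, float offsetY,
                  float screenWidth, float screenHeight)
{
    const float c = std::cos(angle) * scale;
    const float s = std::sin(angle) * scale;
    const float sx = 2.0f / screenWidth;
    const float sy = -2.0f / screenHeight; // flip Y: screen Y grows downwards
    const GLfloat m[16] = {
        c * sx,              s * sy,              0.0f, 0.0f, // column 0
       -s * sx,              c * sy,              0.0f, 0.0f, // column 1
        0.0f,                0.0f,                1.0f, 0.0f, // column 2
        offsetX * sx - 1.0f, offsetY * sy + 1.0f, 0.0f, 1.0f  // column 3
    };
    glUniformMatrix4fv(matrixLocation, 1, GL_FALSE, m);
}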
This should work nicely for a blue polygon, but what if the polygon has, for example, a two-pixel-wide outline?
The answer is that we are out of luck here: OpenGL ES 2.0 does not require lines wider than one pixel to be supported, so the following query can return {1, 1}:
GLfloat line_width_range[2];
glGetFloatv(GL_ALIASED_LINE_WIDTH_RANGE, line_width_range);
and in that case the following code will draw a line only one pixel wide:
glLineWidth(3); // values above the supported maximum are silently clamped
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glDrawElements(GL_LINES, indice_count, GL_UNSIGNED_SHORT, 0);
There are two possible workarounds. The first is to use a geometry shader that, given the two vertices of a segment, outputs a rectangle representing that segment. The second is to put the rectangles into the vertex buffer themselves (see the sketch below); but in this case we need to recalculate their coordinates whenever the user changes the scale, and therefore we need a separate vertex buffer for each view (window) with a different scale. The same approach can be used for outlining with patterns, although in general the geometry shader will probably not help there, because we can run into the limit on the output vertex count (the max_vertices parameter).
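A minimal sketch of the second workaround, expanding each polyline segment into a quad of constant on-screen width on the CPU, could look like this. PlainPoint is redeclared here for self-containment, and pixelsPerPlaneUnit (the current scale factor) is an assumed parameter:

#include <cmath>
#include <vector>

struct PlainPoint { double x, y; }; // same role as in the classes above

void AppendSegmentQuad(std::vector<PlainPoint>& vertices,
                       PlainPoint a, PlainPoint b,
                       double widthPixels, double pixelsPerPlaneUnit)
{
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0)
        return;
    // Unit normal to the segment, scaled to half the line width
    // expressed in plane units.
    double half = widthPixels / (2.0 * pixelsPerPlaneUnit);
    double nx = -dy / len * half, ny = dx / len * half;
    // Two triangles forming the quad (to be drawn with GL_TRIANGLES).
    PlainPoint a1{a.x + nx, a.y + ny}, a2{a.x - nx, a.y - ny};
    PlainPoint b1{b.x + nx, b.y + ny}, b2{b.x - nx, b.y - ny};
    vertices.insert(vertices.end(), {a1, a2, b1, b1, a2, b2});
}

Note that widthPixels is divided by the current scale, which is precisely why such buffers must be regenerated on every scale change. Also, with plain GL_TRIANGLES the joints between segments will show gaps or overlaps on sharp turns, so a real implementation would have to add miter or round joins as well.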
The conclusion is that changing the scale is a bottleneck; fortunately, in most cases changing the scale is initiated by a user action and occurs rarely. Changing the offset and angle should work fast enough; we will probably see something comparable to 300 fps on modern graphics cards.