Chapter 3

The practical result of this work is a fully functional open-source project, although some parts and features are still in the “beta” phase. This chapter introduces some of the tools used during development and describes important or interesting parts of the implementation at the time of writing.

3.1 Closure Tools

Closure Tools is a set of tools designed for efficient JavaScript development. It is actively developed by Google as a base for their large applications such as Gmail, Google Maps, or Google Docs. Closure Tools were released as open source in November 2009, and Google has been accepting patches since May 2010. The main purpose of Closure Tools is to make JavaScript development easier and more efficient by keeping the code clean and easy to read. We decided to use Closure Tools for WebGL Earth development to make it easy for anyone who would like to join the project in the future.

Closure Library

The Closure Library is an object-oriented codebase with a modular architecture that contains many useful functions and classes for a wide range of web-related tasks, such as DOM manipulation, a WYSIWYG editor, and client–server communication utilities.

The library also provides a module-dependency and namespace system, as well as class inheritance, which helps programmers achieve consistent project structure and higher maintainability. Because the library is divided into many small files (modules), the best practice is to use the Closure Compiler along with it to minify the code.
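To illustrate the namespace system, the following is a simplified model of how Closure-style namespaces work; it is not the real goog.provide/goog.require implementation, only a sketch of the idea that each declared namespace builds a nested object path that classes can then hang off:

```javascript
// Simplified model of Closure-style namespaces (illustration only,
// not the actual goog.provide implementation): provide() creates
// every missing object along the dotted path.
var root = {};

function provide(ns) {
  var parts = ns.split('.');
  var cur = root;
  for (var i = 0; i < parts.length; i++) {
    cur = cur[parts[i]] || (cur[parts[i]] = {});
  }
  return cur;
}

// Declare a namespace and attach a class to it, in the style of the
// we.texturing.TileProvider class mentioned in section 3.3.
provide('we.texturing');
root.we.texturing.TileProvider = function(name) {
  this.name = name;
};

var p = new root.we.texturing.TileProvider('OSM');
```

In the real library, goog.require additionally records a dependency edge, which is what allows the compiler and the debug loader to compute a correct file ordering.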

Closure Compiler

The Closure Compiler is an optimization tool that can be used to “compile” JavaScript source code into faster and more compact, though behaviorally-equivalent, output. It utilizes several different optimization techniques such as function inlining, dead code removal, function and property renaming, constant folding, etc.
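The effect of the listed optimizations can be seen on a small snippet; the comments below are a hand-made illustration of what the compiler's advanced mode would do, not actual compiler output:

```javascript
// Illustration of Closure Compiler optimizations (hand-derived):
function add(a, b) { return a + b; }   // small enough to be inlined at the call site
var UNUSED = 'never read';             // removed entirely as dead code
var result = add(2, 3) * 60;           // constant folding collapses this to 300
// After inlining, dead code removal, and constant folding, the whole
// snippet would compile down to roughly: var a = 300;
```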

Closure Linter

Another important part of Closure Tools is a rather small utility for code style checking. When developing a project together with other people, it is often difficult to make every contributor follow the same code style guidelines. Closure Linter notifies the programmer of the most common violations of good JavaScript coding practices1 and is even able to auto-fix most of them.

3.2 Project overview

We utilize Closure Tools to organize the code of WebGL Earth into a number of namespaces and modules (figure 3.1).


Figure 3.1: Visualization of the most important classes and modules

The project is divided into three main parts. The namespace we (WebGL Earth) serves as the codebase and provides all of the core functionality for the other two namespaces, weapp and api. As their names suggest, weapp represents the final application (with additional UI (User Interface) controls), while api contains an application and exports for the WebGL Earth API. If a programmer wants to build their own complex application, the codebase in the we namespace should be used together with the Closure Compiler. However, for simple applications that use only the most common operations, the API should provide enough functionality and can easily be used together with a wide variety of JavaScript libraries and compilers.

This approach is similar to the Google Maps model, where the core codebase is shared between the Google Maps application itself and the public JavaScript API.

3.3 Tile management

One of the most important parts of WebGL Earth is tile management. All tiles are loaded via the appropriate TileProvider in the we.texturing namespace. Each implementation of this abstract class provides access to tiles of one particular data source (BingTileProvider, OSMTileProvider, etc.). Nearly all tile requests are directed to an instance of the TileCache class, which serves cached images or requests loading of the unavailable ones and invokes a callback after loading finishes. The only exception is tiles for the ClipLevelN, which don’t need to be cached. The tile loading itself is realized by dynamically creating an image element (figure 3.2).

tile.image = new Image();
// define the function to be called after loading finishes
tile.image.onload = ...;
// start loading the tile image
tile.image.src = this.getTileURL(tile.zoom,
                                 tile.x, tile.y);

Figure 3.2: Loading of the image data
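The caching flow described above can be sketched as follows. The class and method names mirror the text (TileCache, TileProvider, getTileURL), but the bodies are illustrative, not the actual WebGL Earth code, and the tile-server URL is made up:

```javascript
// Minimal sketch of the TileCache flow (assumed structure, not the
// real implementation): cached tiles are served immediately, unknown
// tiles are created, cached, and handed to the caller's callback.
function TileCache(provider) {
  this.provider = provider;
  this.cache = {};  // key "zoom/x/y" -> tile record
}

TileCache.prototype.retrieveTile = function(zoom, x, y, onload) {
  var key = zoom + '/' + x + '/' + y;
  var tile = this.cache[key];
  if (!tile) {
    tile = {zoom: zoom, x: x, y: y,
            url: this.provider.getTileURL(zoom, x, y)};
    this.cache[key] = tile;
  }
  // The real code would only call back from image.onload once the
  // network request finishes; here we call back synchronously.
  onload(tile);
  return tile;
};

var cache = new TileCache({
  getTileURL: function(z, x, y) {   // hypothetical tile server
    return 'https://tile.example.org/' + z + '/' + x + '/' + y + '.png';
  }
});
```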

The ClipStack, as described in section 2.2.2, is implemented in the relevant classes in the we.scene namespace. The ClipStack class creates and controls individual ClipLevel class instances – it manages their offsets and distributes instances of the ClipBuffer. Each ClipLevel is also periodically prompted to request the tiles it needs from the appropriate TileCache to “fill” itself.

All the “low-level” operations are encapsulated in the aforementioned classes so that the Earth class only needs to move the center of the ClipStack to correspond with the camera movement.
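How the center of a clip window could follow the camera can be sketched with the standard web-map tile formulas; this is an assumption about the actual implementation, and the function names (tileCoords, clipOffset) are illustrative:

```javascript
// Sketch: convert the camera's latitude/longitude to fractional tile
// coordinates at a given zoom level (standard Web Mercator tiling),
// then center a square clip window of sideTiles x sideTiles on it.
function tileCoords(latDeg, lonDeg, zoom) {
  var n = Math.pow(2, zoom);  // tiles per side at this zoom level
  var lat = latDeg * Math.PI / 180;
  var x = (lonDeg + 180) / 360 * n;
  var y = (1 - Math.log(Math.tan(lat) + 1 / Math.cos(lat)) / Math.PI) / 2 * n;
  return {x: x, y: y};
}

function clipOffset(latDeg, lonDeg, zoom, sideTiles) {
  var c = tileCoords(latDeg, lonDeg, zoom);
  return {x: Math.floor(c.x - sideTiles / 2),  // top-left tile of the window
          y: Math.floor(c.y - sideTiles / 2)};
}
```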

In the current WebGL Earth implementation, no more than one tile per frame is buffered into the ClipBuffer. It would be optimal to distribute the load of tile buffering over several frames either via asynchronous operations or by dividing tiles into smaller units. However, we don’t have any means to achieve this at the moment.

3.4 Segmented Plane

In order to visualize the Earth, we also need properly optimized geometry onto which to map the texture data. As previously stated, we need polygon edges to be aligned with tile edges. It would be impossible to create and store a triangle mesh that would cover the whole Earth at once.

We observed that with a top-down oriented camera, the number of visible tiles is practically constant, and even with a free-look camera (adjustable tilt, heading, etc.), the number of visible tiles does not grow nearly as fast as the number of existing tiles (O(2^(2n))).
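The growth of the number of existing tiles is easy to quantify: a quadtree tiling has 4^n = 2^(2n) tiles at zoom level n, which is the bound referred to above.

```javascript
// Number of tiles at zoom level n in a quadtree tiling: each level
// subdivides every tile into four, so the count is 4^n = 2^(2n).
function tileCount(zoom) {
  return Math.pow(4, zoom);
}
```

Already at zoom 18 (a common maximum for street-level maps) this is about 6.9 × 10^10 tiles, so any structure proportional to the number of existing tiles is clearly infeasible.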

Therefore, we developed a specialized structure called the Segmented Plane. This two-dimensional square grid is designed to represent a constant number of tiles and is immutable during application execution.

When rendering, the raw Segmented Plane data, which are stored in graphics memory, are projected (in the vertex shader) into three-dimensional space based on the current zoom level and offset (calculated from the camera position and direction) to form a part of the actual globe. Because we project the corners of tiles to their appropriate positions, we also inherently solve the problem of Mercator-to-sphere mapping, albeit with inexact linear interpolation between vertices. This does not, however, cause any visible problems. Figure 3.3 shows the optimized vertex position calculation, including the Mercator-to-sphere transformation.

// real-world coordinates
vec2 phi = PI2*(aVertexPosition+uOffset)/uTileCount;
// bend the segplane
float exp_2y = exp(2.0*phi.y);
float tanh = ((exp_2y-1.0)/(exp_2y+1.0));
float cosy = sqrt(1.0-tanh*tanh);
vec3 pos = vec3(sin(phi.x)*cosy, tanh, cos(phi.x)*cosy);
gl_Position = uMVPMatrix*vec4(pos, 1.0);

Figure 3.3: Vertex position calculation extracted from the vertex shader
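The Mercator-to-sphere mapping in figure 3.3 can be checked in plain JavaScript; the function below is a re-derivation of the same math, not code from the project. The key identity is that tanh(y) equals the sine of the latitude corresponding to Mercator coordinate y, so the resulting vector is always on the unit sphere:

```javascript
// Map a Mercator point (x = longitude in radians, y = Mercator
// northing) onto the unit sphere, mirroring the shader in fig. 3.3.
function mercatorToSphere(x, y) {
  var exp2y = Math.exp(2 * y);
  var sinLat = (exp2y - 1) / (exp2y + 1);       // tanh(y) = sin(latitude)
  var cosLat = Math.sqrt(1 - sinLat * sinLat);  // cos(latitude)
  return [Math.sin(x) * cosLat, sinLat, Math.cos(x) * cosLat];
}
```

Because sin²+cos² cancels in both the longitude and latitude terms, no normalization step is needed in the shader.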

If we used only four vertices and two triangles for every tile, the Earth would look very “angular” at lower zoom levels. This can easily be solved by introducing additional geometry subdivision. The current WebGL Earth uses several SegmentedPlane instances with different dimensions and subdivision levels. The appropriate instance is then chosen every frame based on the current zoom level.
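The per-frame selection could look like the following sketch; the zoom thresholds and subdivision counts are made up for illustration and are not the values used by WebGL Earth:

```javascript
// Hypothetical table of SegmentedPlane variants: at low zoom the
// globe's curvature is clearly visible, so finer subdivision is
// needed; close to the surface, two triangles per tile suffice.
var planes = [
  {maxZoom: 4,  subdivision: 8},   // strongly curved view
  {maxZoom: 8,  subdivision: 4},
  {maxZoom: 99, subdivision: 1}    // nearly flat view
];

function choosePlane(zoom) {
  for (var i = 0; i < planes.length; i++) {
    if (zoom <= planes[i].maxZoom) return planes[i];
  }
  return planes[planes.length - 1];
}
```

Because every variant is built once at startup and is immutable, switching between them costs only a buffer bind, never a buffer upload.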

After proper constant adjustments and fine-tuning, the Segmented Plane proved to be an efficient way to manage the geometry without having to update the vertex buffers at all.


Figure 3.4: Progressively subdivided Segmented Plane

After we implemented the free-look camera (section 3.6) and 3D terrain (section 3.5), additional refactoring and improvements to the Segmented Plane had to be made. Most notable is the introduction of progressive geometry subdivision (figure 3.4), inspired by [Boe00], that allows us to widen the covered area (substantial for the free-look camera) while keeping enough geometry detail in the important area (necessary for good-looking terrain).

When two neighboring geometry tiles have different levels of detail, visible geometry gaps can occur. This is explained in detail by Boer [Boe00]. Figure 3.4 shows how we solved this problem by introducing extra transitional triangles at the T-junctions.

3.5 3D terrain

One of the most intuitive utilizations of the three-dimensional view is the ability to display elevation data. The terrain visualization currently implemented in WebGL Earth is based on a regular heightmap approach, similar to [AH05] and [LH04]. We have a dedicated ClipStack for heightmap data alongside the one for regular texture data. We don’t need a 256 × 256 px heightmap for each tile, so we automatically request tiles several levels less detailed than the texture data. This allows the ClipStack to be smaller, contain fewer active ClipLevels, and thus require fewer system resources.
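The relationship between texture and heightmap zoom levels reduces to a fixed offset, as sketched below; the offset value of 3 is illustrative, not necessarily what WebGL Earth uses:

```javascript
// Sketch: elevation tiles are requested a fixed number of zoom
// levels coarser than the texture tiles. The offset of 3 means one
// heightmap tile covers the area of 64 texture tiles.
var TERRAIN_ZOOM_OFFSET = 3;  // illustrative value

function terrainZoom(textureZoom) {
  return Math.max(0, textureZoom - TERRAIN_ZOOM_OFFSET);
}
```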

The integration of elevation data is executed in the vertex shader, where the needed texel is sampled from the appropriate ClipLevel, raw elevation is calculated, and the position of the vertex is properly altered.


Figure 3.5: Experimental 3D terrain (Gurtnellen, Switzerland)

For the terrain to work properly, the graphics card and the driver have to support Vertex Texture Fetch (VTF). Some web browsers (Google Chrome and Mozilla Firefox 4) use the ANGLE (Almost Native Graphics Layer Engine) project for WebGL-to-Direct3D mapping on the Microsoft Windows platform, because a large number of users do not have proper OpenGL drivers installed [Bri10]. VTF support in the ANGLE project is currently under development and will be available soon [Tra11].

For debugging purposes, Chrome can be configured to use native desktop OpenGL instead of the ANGLE library by launching chrome.exe with the --use-gl=desktop command-line argument. The same can be achieved in Firefox 4 by setting webgl.prefer-native-gl (or webgl.prefer_gl in older versions of the browser) to true on the about:config page. This allows us to test the terrain implementation on most graphics cards, even in web browsers that use the ANGLE project. After VTF support is fully implemented in this library, more users will be able to see virtual terrain in WebGL Earth.

3.6 Free-look camera and different behavior models

To be able to fully utilize the potential of 3D terrain, we simultaneously developed the free-look camera, which allows the user to change the way the globe is displayed.


Figure 3.6: Visualization of attributes of the free-look camera

The free-look camera is defined by several parameters (illustrated in figure 3.6) that together fully describe its position and orientation.

To provide the best-looking, one-to-one pixel mapping, most two-dimensional online map applications express distance by a (usually integral) zoom level rather than by altitude. To reflect this behavior (especially in the API), WebGL Earth also originally operated on the zoom level. But because the Mercator projection is not an equal-area projection, the camera moved closer to the surface as the distance from the Equator increased, which most users found counterintuitive.

To solve this issue, the WebGL Earth codebase defines two distinct camera behavior models: Fixed zoom (the original behavior – moving the camera does not change the zoom level, but altitude is adjusted) and Fixed altitude (more intuitive – altitude does not change, but occasional “zoom jumps” may be seen). It is up to the final application which behavior model to choose – api currently operates in fixed-zoom mode, while weapp uses fixed altitude by default but allows runtime mode switching.
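The relation underlying both models can be sketched as follows. On a Mercator map, the ground size of one pixel shrinks with cos(latitude), so a fixed zoom level implies an altitude proportional to cos(latitude); the proportionality constant below is arbitrary, chosen only for illustration:

```javascript
// Sketch of the zoom/altitude relation behind the two behavior
// models. BASE_ALTITUDE is an arbitrary illustrative constant.
var BASE_ALTITUDE = 40000000;  // meters, illustrative only

// Fixed zoom: moving poleward at constant zoom lowers the altitude.
function altitudeForZoom(zoom, latDeg) {
  var lat = latDeg * Math.PI / 180;
  return BASE_ALTITUDE * Math.cos(lat) / Math.pow(2, zoom);
}

// Fixed altitude: moving poleward at constant altitude raises the
// (now fractional) zoom level, producing the "zoom jumps".
function zoomForAltitude(altitude, latDeg) {
  var lat = latDeg * Math.PI / 180;
  return Math.log(BASE_ALTITUDE * Math.cos(lat) / altitude) / Math.LN2;
}
```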

3.7 WebGL Earth JavaScript API

In order to enable web developers to easily include the WebGL Earth virtual globe in their own websites, the project also provides a simple JavaScript API (Application Programming Interface), which is easy to use without any knowledge of the inner workings of the codebase (figure 3.7).

<script src=""></script>
<script>
  function initialize() {
    var options = {zoom: 3, center: [47.1953, 8.5244],
                   map: WebGLEarth.Maps.OSM};
    var earth = new WebGLEarth('earth', options);
  }
</script>
<body onload="initialize()">
  <div id="earth" style="width:600px;height:400px"></div>
</body>

Figure 3.7: Example of simple “Hello World” API usage

It is also currently possible to programmatically display custom datasets and dynamically change camera position and zoom. We are planning on adding support for marker placement and other features soon.

3.8 Contents of the attached CD

Data on the attached CD are organized into the following directory structure:

  1. – WebGL Earth Git repository snapshot
  2. – Compiled WebGL Earth application
  3. – LaTeX source files of this document
  4. – This document in electronic form

In order to compile WebGL Earth or use the codebase for your own application, follow these steps to set up the development environment:

  1. A Python interpreter1 and a Java Virtual Machine2 are required to successfully compile the code using the Closure Tools applications.
  2. Download the source code from our GitHub repository3 or use the files provided on the CD.
  3. Under Linux & Mac: just run make in the project directory.
  4. Under Windows: download the latest Closure Library4 and unpack the Closure Compiler5 into the project directory. Then run build_app.bat to build the WebGL Earth application or build_api.bat to build the WebGL Earth JavaScript API.

Work on this document included writing most of the source code, with the exception of the original inertial animation in SceneDragger and the API examples (contributed by Petr Přidal), the PanControl and ZoomSlider UI elements (by Leonardo Salom), and the Makefile (by Tom Payne). Details of authorship can be found in the repository log.