Vis Comput (2013) 29:277–286 DOI 10.1007/s00371-012-0773-1
ORIGINAL ARTICLE
Perception-motivated visualization for 3D city scenes

Bin Pan · Yong Zhao · Xiaoming Guo · Xiang Chen · Wei Chen · Qunsheng Peng
Published online: 26 January 2013 © Springer-Verlag Berlin Heidelberg 2013
Abstract Many approaches have been developed to visualize 3D city scenes, most of which present their results in a uniform rendering style. This paper presents an expressive rendering approach that visualizes large-scale 3D city scenes with multiple rendering styles integrated seamlessly. Each view combines photorealistic and non-photorealistic rendering to highlight the information that interests the user and to de-emphasize what is less important. At runtime, users interactively specify the locations they are interested in. Our system automatically computes the salience of each location and illustrates the entire scene with emphasis on the areas of interest. A GPU-based implementation delivers interactive real-time performance. Our prototype system demonstrates benefits in applications such as 3D GPS navigation and tourist information. We have also performed a pilot user evaluation of how effectively users access information in a 3D city.

Keywords Large-scale · City scenes · Expressive rendering · Landmarks
B. Pan () · X. Guo
School of Sciences, Liaoning Shihua University, Fushun, China
e-mail: [email protected]

X. Chen · W. Chen · Q. Peng
State Key Lab of CAD&CG, Zhejiang University, Zhejiang, China

Y. Zhao
School of Mathematical Sciences, Ocean University of China, Qingdao, China
1 Introduction

Due to the rapid development of computer hardware and progress in (semi)automatic data acquisition, it is now possible to create large-scale 3D city scenes at reasonable cost. An increasing number of applications and systems incorporate virtual 3D city scenes as essential components, for example in urban planning and redevelopment or facility management [6]. Large-scale 3D city scenes are typically characterized by a large number of objects of different types, structures, and hierarchies, yielding a high degree of visual detail. They therefore convey a huge amount of information, which frequently causes perceptual and cognitive problems for users, such as heavy visual clutter and information overload. This observation reveals a fundamental problem for the visualization of complex 3D city scenes: how can users access information efficiently and effectively?

The 3D representation of a city scene serves as a medium to convey spatially related information in a comprehensive way. The requirements on virtual 3D city scenes vary between applications. In the context of tourism, entertainment, or public participation, a high degree of photorealism is required. In applications that provide analytical and exploratory functionality, visual details of buildings are not of primary interest; shape and structure are the main concerns. For commercial applications like Google Earth, WorldWind, and Microsoft Virtual Earth, fast and accurate access to geospatial information becomes more and more important. In almost all cases, users are interested in only a small part or a specific aspect of the scene. How to present this meaningful information efficiently is therefore of great importance.

A rendering is an abstraction that favors, preserves, or even emphasizes some qualities while sacrificing, suppressing, or omitting other characteristics that are not the focus
Fig. 1 Results of our algorithm. From left to right: Phong shading, the point-based single focus model, and the building-based single focus model
of attention [13]. However, most current 3D city illustration applications present all buildings at the same level of detail, making it difficult for users to filter out useless information. Large areas of the screen contain useless or even misleading pixels with respect to information content and transfer, which are called dead values [17]. For example, a bird's-eye view of a 3D city scene frequently shows too many details at once and does not distinguish between focus and context areas. As a consequence, the information on the screen often appears overly complex to users.

Visualization is the science of representing data visually in order to enhance communication or understanding [24]. We propose that 3D city visualization should be grounded in human visual perception. Rendering style plays an important role in visualization; it affects how efficiently users can extract the information that interests them from the scene. Current virtual 3D environments adopt a single rendering style, such as photorealistic (PR) or non-photorealistic (NPR) rendering, for all buildings in the scene. We believe that an expressive rendering technique can clarify meaningful structures in a 3D city scene. Which regions and structures are interesting, however, depends on the users' demands. In our system, the models that are important to the user are rendered with full detail using PR, the area surrounding the important buildings is rendered with a simple shading technique, and regions far from the focal point are presented merely by line drawing. By combining different rendering styles with different weighting and proximity schemes, our approach supports abstractive visualization of large-scale scenes in a smooth and characteristic-preserving way (Fig. 1).

The main contributions of our method are as follows:

1. A concept and implementation of a simple system that links 3D city visualization with human visual perception.
2.
A GPU-based rendering pipeline that includes a CUDA-based line visibility method, 2D image processing, and a multipass rendering technique.
3. A pilot user evaluation of how efficiently users access information from the system. We believe that a more general and professional user study could further advance the field of 3D city visualization.
2 Related work

A 3D city scene is a three-dimensional representation of a city or an urban environment. Virtual 3D city scenes now allow huge amounts of urban information to be integrated visually within a single framework. To facilitate comprehensive exploration of a city scene, artists have developed principles by which they adjust rendering qualities, such as the amount of color and contrast, to emphasize some areas of an illustration and de-emphasize others.

Focus + context visualization Focus + context visualization has been well studied in the past years [14, 18, 23, 27]. In [25], a set of tools was presented to enable simultaneous exploration of a virtual world from two different viewpoints: one viewport displays the surrounding environment, while the other is interactively adjusted to display the area the user is concerned with. In [19], Semantic Depth of Field (SDOF) is used for information visualization: parts of the depicted scene are blurred according to their relevance, so objects of interest are depicted sharply while the context of the visualization is blurred. Trapp et al. [27] proposed 3D generalization lenses, a visualization technique for virtual 3D city scenes that combines different levels of structural abstraction: focus areas within lens volumes are shown in full detail while less important details in the surrounding area are suppressed. Glander et al. [14] presented a concept for the real-time depiction of landmarks that emphasizes 3D objects by improving
Fig. 2 Flow chart of our algorithm
their visibility with respect to their surroundings; the landmarks are adaptively scaled by view-dependent deformations.

To make the best use of the screen space, several approaches employ nonstandard projections [2, 22]. Möser et al. [22] proposed a nonstandard perspective method that reallocates the available screen space: the familiar pedestrian perspective and a standard map depiction are blended into an efficient deformation technique that interactively allocates screen space for the city model. Lorenz et al. [20] presented an interactive visualization technique that generates multiperspective views of 3D city scenes, using a standard linear projection for rendering and a global space deformation to simulate the multiperspective views. In virtual environments rendered with such nonstandard projections, however, users often misjudge actual distances.

A recent work by Semmo et al. [26] introduces saliency-guided visualization, also in the context of virtual 3D cities and rendering-style combination, and shows convincing results. They achieve level-of-abstraction transitions mainly via image-space abstraction, such as texture-map abstraction. We believe, however, that feature lines also strongly influence the presentation of objects, especially buildings in a city, so we adopt line drawing to show the general structure of the buildings. Moreover, as such a visualization system is closely related to human perception, we consider a user study necessary to test its effects.

Building generalization To achieve efficient rendering and reduce visual confusion, many building generalization techniques have been proposed [10, 21]; they were nicely summarized by Glander et al. [11].
These techniques provide representations of the buildings at different degrees of generalization according to the distance to the viewpoint, also known as Levels of Detail (LoD). Employing vector-based morphology and a discrete/continuous curvature space, Mayer [21] presented a generalization technique that detects local curvature and shifts the adjacent polygons accordingly; the method is costly in processing time. In [8], generalization is achieved by moving nearly parallel faces of the building geometry to a common plane and merging them if
possible. Unfortunately, the algorithm is applicable to orthogonal buildings only. Thiemann [28] proposed creating a constructive solid geometry (CSG) representation of the given building geometry based on feature removal: the planes of the building's faces subdivide the geometry into a body and features. In contrast to these techniques, which simplify and abstract a single 3D building model, Glander et al. [11] proposed an algorithm that generalizes clustered 3D building models: in a preprocessing step, individual building models are clustered, and three different techniques then automatically derive generalized 3D building ensembles for each cluster.

Illustrative visualization of 3D city scenes NPR renders objects and scenes as artists and designers might want to view them, and provides an effective visual interface to urban spatial information and associated thematic information. Much work has applied this rendering technique to virtual 3D city scenes, such as the work of Döllner et al. on NPR rendering of city models [6, 7]. Grabler et al. [9] developed an automatic tourist map generation algorithm that combines a 2D map with 3D city scenes, using NPR techniques to clarify the information presented to tourists on demand. However, they only give an illustrative symbol for each building, so tourists can hardly obtain a more detailed illustration of the city buildings. Cole et al. [3] presented an interactive system for directing a viewer's gaze in stylized imagery rendered from 3D models; each pixel is assigned a normalized scalar value E(p) that indicates how much emphasis to place at that point in the scene.
3 Algorithm overview

The outline of our algorithm is shown in Fig. 2. Our basic idea is to highlight user-relevant information in the underlying scene while de-emphasizing the rest through seamless integration of several rendering styles. We use different graphic styles to convey the information of a virtual 3D city: the photorealistic style shows the detailed appearance of the buildings, while the non-photorealistic
Fig. 3 Results of four different illustration styles. From left to right: the PR style, the NPR style, the line drawing style, and the combination of the NPR and line drawing styles
style draws the user's attention to the general information. While exploring a 3D city, users are attracted by structures and objects that are interesting and well known in daily life, i.e., landmark buildings; they rely on these buildings to maintain a correct sense of direction. We seek to highlight such user-relevant features to improve the legibility of 3D city scenes.

In a preprocessing step, given a scene, we manually tag the buildings in the city. The tagged city model then serves as input for user exploration. At run time, using multipass rendering, we render the 3D city scene in each style into a separate frame buffer (Sect. 4.1). Users interactively specify the regions, locations, buildings, or information they are interested in, and the system automatically derives the focus points. From the specified focus points, an algorithm determines the proportion of each rendering style at every location of the scene (Sect. 4.2). The final result blends the styles according to the computed proportions. We have also performed a pilot user evaluation of how effectively users access information in the 3D city (Sect. 6).
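As a CPU-side caricature, the three stages above (per-style rendering, proportion computation, blending) can be sketched as follows; all function and parameter names are illustrative stand-ins for the actual GPU passes, not our implementation.

```python
def render_frame(vertices, styles, focus, style_fn, proportion_fn):
    """One frame of the multistyle pipeline sketched above.

    styles:        style identifiers, e.g. ("phong", "gooch", "lines")
    style_fn:      per-vertex result of rendering one style (a "frame buffer")
    proportion_fn: per-vertex blending proportions derived from the focus point
    """
    # Stage 1: render the scene once per style into its own buffer.
    buffers = {s: [style_fn(s, v) for v in vertices] for s in styles}
    # Stage 2: blend per vertex with the distance-driven proportions.
    frame = []
    for i, v in enumerate(vertices):
        f = proportion_fn(v, focus)  # dict: style -> proportion, sums to 1
        frame.append(sum(f[s] * buffers[s][i] for s in styles))
    return frame
```

In the real system the buffers are GPU frame buffers and the blend runs in a shader; the sketch only mirrors the data flow.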
4 Multistyle illustration of 3D city scenes

In this section, we describe how our system illustrates 3D city scenes by highlighting the areas the user is interested in. We introduce a multistyle illustration model together with meaningful distance metrics and multifocus models.

4.1 Multistyle illustration model

For each vertex v, our multistyle illustration model determines its intensity I(v) by the following equation:

I(v) = f^p(v) · I_v^p + f^n(v) · I_v^n + f^l(v) · I_v^l,   (1)
f^p(v) + f^n(v) + f^l(v) = 1.0

Here, I_v^p, I_v^n, I_v^l are the results of the different rendering styles, which we combine to highlight the information the user is interested in, and f^p(v), f^n(v), f^l(v) are scalar values that stand for the proportion of each style used in the blend at v. In this paper, we employ three rendering styles: the PR style, the NPR style, and line drawing.

– PR style: PR produces images indistinguishable from photographs of real-world objects and scenes. This style is suitable for emphasizing interesting objects. We use the Phong shading model, a classical PR technique that is simple and effective at producing reasonable pictures.
– NPR style: NPR renders objects and scenes to generate imagery that looks as if made by artists, such as cartoons or watercolors. It conveys shape and structure comprehensively while omitting details. Here, we use the Gooch lighting model [12, 13] to expressively render the less important objects in the scene; this lighting model restricts shading to mid-tones so that edge lines and highlights remain visually prominent.
– Line drawing style: Besides the color of the buildings, feature lines also strongly influence the presentation of important buildings, so we adopt a line drawing method. There is much prior work on this technique [3–5, 16].

Our scheme combines these three styles to achieve expressive effects. The effects of the three styles, and of the combination of shading and line drawing, are shown in Fig. 3.

4.2 Multistyle composition

The composition consists of two stages. In the first stage, we obtain the output of each rendering style and compute a proportion f*(v) for each style; each coefficient f*(v) is calculated from the distance between the point v and the user-specified focus point v^f. In the second stage, the results of the first stage are blended at each vertex according to the computed proportions.
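Equation (1) is a per-vertex convex combination of the three style results; a minimal sketch with illustrative intensity values (not our shader code):

```python
def blend_intensity(I_p, I_n, I_l, f_p, f_n, f_l):
    """Eq. (1): blend the PR, NPR, and line-drawing results at one vertex.

    The proportions must satisfy the constraint f_p + f_n + f_l = 1.
    """
    assert abs(f_p + f_n + f_l - 1.0) < 1e-9, "proportions must sum to 1"
    return f_p * I_p + f_n * I_n + f_l * I_l

# Innermost zone (f_p = 1): the vertex shows pure Phong shading.
inner = blend_intensity(0.9, 0.5, 0.1, 1.0, 0.0, 0.0)   # -> 0.9
# Halfway through the second zone: an even Phong/Gooch mix.
mid = blend_intensity(0.9, 0.5, 0.1, 0.5, 0.5, 0.0)     # approximately 0.7
```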
Fig. 4 Results with different distance metrics. From left to right: the 3D Euclidean distance, the 2D Euclidean distance, the 2D city block distance, and the per-building distance. The top-right region of each image shows the distance field: the red region is the innermost zone, the blue region the second zone, and the yellow region the outermost zone
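The distance metrics compared in Fig. 4 can be sketched as follows; the focus point, radii, and zone labels are hypothetical, chosen only to mirror the color-coded regions of the caption.

```python
def dist_euclidean_2d(v, vf):
    # 2D Euclidean distance on the ground plane (ignores building height).
    return ((v[0] - vf[0]) ** 2 + (v[1] - vf[1]) ** 2) ** 0.5

def dist_city_block(v, vf):
    # City block (Manhattan) distance: travel along a grid of streets.
    return abs(v[0] - vf[0]) + abs(v[1] - vf[1])

def zone(d, d0, d1, d2):
    # Map a distance to the rendering zones around a focus point.
    if d <= d0:
        return "inner"    # mainly Phong shading
    if d <= d1:
        return "middle"   # Phong-to-Gooch transition
    if d <= d2:
        return "outer"    # Gooch-to-line-drawing transition
    return "context"      # pure line drawing

focus = (0.0, 0.0)
dist_euclidean_2d((3.0, 4.0), focus)   # -> 5.0
dist_city_block((3.0, 4.0), focus)     # -> 7.0
zone(5.0, 2.0, 6.0, 10.0)              # -> "middle"
```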
4.2.1 Distance computation

With the users' specifications of the zones and locations of interest, we extract the focus point v^f. The proportion f*(v) at each vertex v is then calculated from the distance d(v) between v and v^f. Each focus point yields three regions in the scene based on user-specified radii d0, d1, d2 (Fig. 5(a)). Regions in the innermost zone are rendered mainly by Phong shading, which shows the scene's appearance in detail. Those in the second zone are rendered by gradually changing from Phong shading to Gooch shading, giving a concise image of shape and structure. Regions in the outermost zone, far from the focus point, are rendered mainly in the line drawing style, which conveys shape efficiently while reducing visual clutter. We have tried several distance metrics to express different combined effects.

Euclidean distance metric This metric is the most intuitive for 3D scenes. In [3], the Euclidean distance in three-dimensional space specifies the importance of each vertex. However, the 3D Euclidean distance truncates tall buildings (Fig. 4(a)). Inspired by real-world distances, we instead employ the 2D Euclidean distance, i.e., the distance on the ground plane:

d_E(v) = ||v − v^f||_2 = sqrt((v_x − v_x^f)^2 + (v_y − v_y^f)^2)

City block distance This metric computes the distance traveled from one location to the other when a grid-like path is followed. In modern cities, it is closer to the real travel distance:

d_C(v) = ||v − v^f||_1 = |v_x − v_x^f| + |v_y − v_y^f|

Per-building distance In daily life, the basic unit of 3D geo-space is a building: people reckon the distance between two buildings, rather than between two 3D points, when describing how far it is from home to school. If we render different parts of one building with different style proportions, we break the building's integrity. To treat each building as an integrated unit, we utilize an R-tree [1, 15], a data structure used for spatial access methods, i.e., for indexing multidimensional information (a typical real-world query is "find all museums within 2 miles of my current location"). For each building b, we calculate the 2D minimum bounding box of its footprint r(b), from which we construct the R-tree. When the user has specified a center building b^f and the number of buildings n_i (i = 0, 1, 2) in each zone, the R-tree returns all contained buildings b_i in ascending order of their distance to b^f. To render each building with appropriate emphasis, we assign a pseudodistance d(b_i) from building b_i to the center building b^f:

d(b_i) = i/n,   n = n0 + n1 + n2

The coefficients f*(b_i) are then calculated with Eq. (2) by substituting the radii d_i with n_i/n. The results of the above metrics are shown in Fig. 4.

4.2.2 The focus model

With the computed distance, we calculate the proportion f*(v) of each rendering style according to the model of Eq. (1). In some cases the user is interested in a single point; we call this the single focus model. In other cases the user is interested in multiple points, which are handled by a multifocus model.

Single focus model Often users expect to find an exact place in a city, for example the government building. When the user chooses a building, our system first calculates the building's 3D bounding box and takes its center as the focus point v^f, from which the degree of the user's concern falls off radially. According to
the distance from v to the specified focus point, we identify the proportion f*(v). To control the effect of our blending algorithm, we calculate each f*(v) in Eq. (1) from the computed distance d(v). The integrity and contrast of a building are very important for the user's examination, so the user-concerned buildings in the innermost zone are rendered mainly with Phong shading: for vertices in the innermost zone we set f^p(v) = 1.0. In the second zone, the proportion of Phong shading gradually drops to zero while the proportion of Gooch shading rises to 1.0. In the next zone, the Gooch shading coefficient gradually drops to zero while the line drawing coefficient grows to 1.0. In the outermost area, all buildings are drawn by the line drawing method alone. The proportion of each rendering style is defined as follows:

f^p(v) = 1.0                             if d(v) <= d0
f^p(v) = 1.0 − (d(v) − d0)/(d1 − d0)     if d0 < d(v) <= d1

f^n(v) = (d(v) − d0)/(d1 − d0)           if d0 < d(v) <= d1      (2)
f^n(v) = 1 − (d(v) − d1)/(d2 − d1)       if d1 < d(v) <= d2

f^l(v) = (d(v) − d1)/(d2 − d1)           if d1 < d(v) <= d2
f^l(v) = 1.0                             if d(v) > d2

The curve for each coefficient is shown in Fig. 5(b).

Fig. 5 The weighting functions for combining different styles

Multifocus model In many cases, the user is interested in more than one building, for example all the hospitals in a city. A point in the scene may then be affected by several focus points. We therefore propose a multifocus model to calculate the proportion f*(v) of each rendering result. Given a point v and several user-specified focus points v_i^f, we calculate f*(v) as follows:

f*(v) = Σ_{i=1}^{n} w_i · f_i*(v),   w_i = d_i(v) / Σ_{i=1}^{n} d_i(v)   (3)

Here, f_i*(v) is the coefficient of v calculated with respect to the focus point v_i^f. The results are shown in Fig. 6.

Rather than a random distribution of undivided buildings, a city has certain parts that people intuitively understand and aggregate when describing it at different levels of abstraction. In particular, some buildings are landmarks: elements of the 3D city scene with distinctive importance for user orientation. In most existing systems, however, landmarks are often occluded or too ordinary to be noticed (Fig. 6). We propose to make landmarks clearly visible through expressive rendering. We first augment the scene by manual tagging, marking the landmarks of the city; like Grabler et al. [9], we identify landmarks manually from travel-related web sites. To let users keep an accurate sense of direction while exploring the scene, our system always visualizes the landmarks with Phong shading, so the landmark buildings are always highlighted.

5 Implementation details

Rendering pipeline We exploit programmable graphics hardware to devise a multipass rendering pipeline and achieve real-time performance. The implementation of the underlying rendering technique relies on the scene-graph-based high-level rendering framework OSG. Given a 3D city scene, we first extract the visible feature lines, then render the three styles into three frame buffers using three different pixel shaders. In the same pass, we calculate f*(v) for each vertex v in a vertex shader and save the values as textures in three frame buffers. Thereafter, we combine the styles using Eq. (1). If we treat each building as a basic unit, we obtain a building ID buffer in the first stage and use it to combine the rendering effects per building. In the second stage, we can obtain more expressive effects by means of GPU-based image processing.

CUDA-based line visibility We have observed that, besides the color of the buildings, feature lines strongly influence the presentation of important buildings; much work has been done on this technique [3–5, 16]. In a city, the building models often have simple structures, and crease lines are usually adequate: in practice, the edges connecting two triangles with significantly deviating normals are chosen as crease lines. After extracting the feature lines, the visible lines in the scene are drawn. Traditional methods perform the line visibility analysis on the CPU, which is costly due to the large amount of data transferred between CPU and GPU. In our work, we
Fig. 6 Results of our expressive illustration: (a) result rendered in the PR style; (b)–(e) results rendered by our system, where (b) and (c) are per-point style compositions with single and multiple focus points, respectively, and (d) and (e) are per-building style compositions with single and multiple focus buildings, respectively
use an improved line visibility test algorithm, implemented entirely on the GPU with CUDA, to achieve real-time rendering of a large city model. We first render the whole scene only to obtain the depth buffer; in our implementation, the depth buffer is rendered to a render buffer object and then copied to a texture for further use. Before comparing the lines, a data structure is needed to record the visibility. A single visibility value per line is far from enough: some lines may be partially visible, and exact visibility computation is impossible in this method. We therefore build a buffer recording the line visibility whose length corresponds to the projection of the line in screen space. The test points are spaced one pixel apart, which is sufficient for rendering. Finally, for every test point on a line, we check
Table 1 Performance (frames per second) for the campus model at three screen resolutions

model / screen res.   1920 × 1080   1024 × 768   800 × 600
Campus Model          30.1          78.4         92.6
the depth in its neighborhood. The final visibility of a point is the average of all comparison results, weighted by a Gaussian kernel.
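The test above can be sketched on the CPU as follows; the buffer layout, kernel radius, and depth tolerance are assumptions for illustration, not our exact CUDA implementation.

```python
import math

def visibility(samples, depth_buffer, radius=1, sigma=1.0, eps=1e-3):
    """Per-sample visibility of a feature line against a depth buffer.

    samples:      list of (x, y, depth) for test points ~1 pixel apart
    depth_buffer: 2D list of scene depths, indexed depth_buffer[y][x]
    Returns one visibility value in [0, 1] per test point.
    """
    h, w = len(depth_buffer), len(depth_buffer[0])
    out = []
    for x, y, d in samples:
        num = den = 0.0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                px, py = x + dx, y + dy
                if 0 <= px < w and 0 <= py < h:
                    # Gaussian weight for this neighbor.
                    wgt = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
                    # The point passes if it is not behind the scene depth.
                    num += wgt * (1.0 if d <= depth_buffer[py][px] + eps else 0.0)
                    den += wgt
        out.append(num / den if den else 0.0)
    return out
```

A test point in front of every neighboring depth sample gets visibility 1.0, a fully occluded point gets 0.0, and partial occlusion yields intermediate values.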
6 Results and evaluation

Performance evaluation We have tested our approach on a campus model with about 100 K faces and 40 K feature lines. The evaluation platform is an Intel Core 2 Duo at 2.3 GHz with 4 GB memory and an NVidia GeForce GTX 280 with 1 GB VRAM. Figure 6 compares the results of our algorithm with a uniform rendering style: Fig. 6(a) is rendered in the PR style, while Figs. 6(b)–6(e) are rendered by our system. Since we use several frame buffers for multipass rendering, the efficiency of the system is closely tied to the frame resolution, and the line visibility algorithm consumes a large share of the resources; the results in Table 1 confirm this. Figure 6 shows that the target buildings are well emphasized by our algorithm, and the user's attention is effectively drawn to the point of interest. The point-based method gives a visually plausible effect but may break the integrity of objects in the scene; the building-based method preserves the integrity of each building but may produce visual discontinuities.

Pilot user study One usage scenario of our system is tourist guidance. We performed a pilot user study based on the campus model, designed to determine whether the combination of rendering styles improves the efficiency with which users access information. We invited 20 users, all unfamiliar with the campus, and randomly divided them into two groups. Each group was shown 10 images, each containing one marked building: the first group saw images rendered in a uniform style, the second group images in our composite styles. Each image was displayed for 5 seconds on a 22-inch LCD display. Afterwards, every participant was shown a randomly chosen building image and asked to identify the building's name.
The result shows that 7 participants in the second group identified more than 5 buildings accurately, compared with only 5 participants in the first group. Moreover, we found that animated transitions draw the user's attention effectively: every time the user selects a building, a zooming-in animation is generated to attract the user's concentration.
7 Limitations and future work

In this paper, we have proposed a concept and implementation for real-time expressive illustration of 3D city scenes. Based on user interaction with the scene, we emphasize the user-concerned buildings through a combination of several rendering techniques. As human perception is sensitive to color, rendering the buildings of interest photorealistically while drawing the others only with lines improves their visibility compared with a uniform rendering style; users can thus access the information they need more efficiently and effectively, which improves the readability of modern 3D city scenes.

Our work has several limitations. First, we calculate the interest of each building only according to a view metric; other metrics, such as semantic metrics, could be taken into consideration. Second, we conducted only a coarse user study; a more rigorous study, grounded in human visual perception, would push 3D city visualization forward. In the current implementation, objects can only be emphasized up to the features present on the buildings, and some results exhibit inadequate geometric detail. We expect to add other resources, such as textures, to enhance expressiveness [26]; for instance, feature patterns in a texture could be extracted and added to strengthen the features. Meanwhile, areas with too much geometric detail need further simplification to reduce their visual influence, which calls for a method that simplifies the geometry of low-importance models and transitions between levels of detail. In addition, we plan to incorporate more artistic styles into our system.

Acknowledgements This research is supported by the National Basic Research Program of China (973 Program, No. 2009CB320802), the National Natural Science Foundation of China (Nos. 60970020, 60873123), the Foundation of Liaoning Educational Committee (No. L2012131), the Research Award Fund of Shandong Province, China (No. BS2012DX043), and the Fundamental Research Funds for the Central Universities (No. 201313005).
References 1. Arge, L., Berg, M.D., Haverkort, H., Yi, K.: The priority R-tree: a practically efficient and worst-case optimal R-tree. ACM Trans. Algorithms 4, 1–30 (2008) 2. Brosz, J., Samavati, F.F., Sheelagh, M.T.C., Sousa, M.C.: Single camera flexible projection. In: NPAR ’07, pp. 33–42 (2007) 3. Cole, F., DeCarlo, D., Finkelstein, A., Kin, K., Morley, K., Santella, A.: Directing gaze in 3D models with stylized focus. In: Proc. of EGSR’06, pp. 377–387 (2006) 4. Cole, F., Finkelstein, A.: Partial visibility for stylized lines. In: NPAR 2008 (2008) 5. Cole, F., Finkelstein, A.: Fast high-quality line visibility. In: Proceedings of I3D 2009, pp. 115–120 (2009)
6. Döllner, J., Buchholz, H., Nienhaus, M., Kirsch, F.: Illustrative visualization of 3D city models. In: Proc. of Visualization and Data Analysis 2005, pp. 42–51 (2005)
7. Döllner, J., Walther, M.: Real-time expressive rendering of city models. In: Proc. of InfoVis '03, p. 245 (2003)
8. Forberg, A., Mayer, H.: Generalization of 3D building data based on scale spaces. In: International Archives of Photogrammetry and Remote Sensing, pp. 225–230 (2002)
9. Grabler, F., Agrawala, M., Sumner, R.W., Pauly, M.: Automatic generation of tourist maps. ACM Trans. Graph. 27(3), 1–11 (2008)
10. Glander, T., Döllner, J.: Cell-based generalization of 3D building groups with outlier management. In: Proc. of the 15th Annual ACM International Symposium on Advances in Geographic Information Systems '07, pp. 1–4 (2007)
11. Glander, T., Döllner, J.: Techniques for generalizing building geometry of complex virtual 3D city models. In: 2nd International Workshop on 3D Geo-Information, pp. 381–400 (2007)
12. Gooch, A., Gooch, B., Shirley, P., Cohen, E.: A non-photorealistic lighting model for automatic technical illustration. In: Proc. of SIGGRAPH '98, pp. 447–452 (1998)
13. Gooch, B., Sloan, P.-P.J., Gooch, A., Shirley, P., Riesenfeld, R.: Interactive technical illustration. In: Proc. of I3D '99, pp. 31–38 (1999)
14. Glander, T., Trapp, M., Döllner, J.: A concept of effective landmark depiction in geovirtual 3D environments by view-dependent deformation. In: 4th International Symposium on LBS and Telecartography (2007)
15. Guttman, A.: R-trees: a dynamic index structure for spatial searching. In: Proc. of SIGMOD '84, pp. 47–57 (1984)
16. Isenberg, T., Freudenberg, B., Halper, N., Schlechtweg, S., Strothotte, T.: A developer's guide to silhouette algorithms for polygonal models. IEEE Comput. Graph. Appl. 23(4), 28–37 (2003)
17. Jobst, M., Döllner, J.: 3D city model visualization with cartography-oriented design. In: REAL CORP Proc., Vienna (2008)
18. Kosara, R., Hauser, H., Gresh, D.L.: An interaction view on information visualization. In: EG 2003, pp. 123–137 (2003)
19. Kosara, R., Miksch, S., Hauser, H.: Semantic depth of field. In: Proc. of INFOVIS '01, p. 97 (2001)
20. Lorenz, H., Trapp, M., Döllner, J.: Interactive multi-perspective views of virtual 3D landscape and city models. In: Lecture Notes in Geoinformation and Cartography '08, pp. 301–321 (2008)
21. Mayer, H.: Scale-space events for the generalization of 3D building data. In: International Archives of Photogrammetry and Remote Sensing, pp. 639–646 (1999)
22. Möser, S., Degener, P., Wahl, R., Klein, R.: Context aware terrain visualization for wayfinding and navigation. Comput. Graph. Forum 27(7), 1853–1860 (2008)
23. Qu, H., Wang, H., Cui, W., Wu, Y., Chan, M.-Y.: Focus+context route zooming and information overlay in 3D urban environments. IEEE Trans. Vis. Comput. Graph. 15(6), 1547–1554 (2009)
24. Rheingans, P., Landreth, C.: Perceptual principles for effective visualizations. In: Perceptual Issues in Visualisation, pp. 59–74 (1995)
25. Straßer, W., Stoev, S.L., Schmalstieg, D.: The through-the-lens metaphor: taxonomy and application. In: Proc. of the IEEE Virtual Reality, pp. 285–286 (2002)
26. Semmo, A., Trapp, M., Kyprianidis, J.E., Döllner, J.: Interactive visualization of generalized virtual 3D city models using level-of-abstraction transitions. Comput. Graph. Forum 31(3), 885–894 (2012)
27. Trapp, M., Glander, T., Buchholz, H., Döllner, J.: 3D generalization lenses for interactive focus+context visualization of virtual city models. In: Proc. of InfoVis '08, pp. 225–230 (2008)
28. Thiemann, F.: Generalization of 3D building data. In: Proc. of Joint International Symposium on GeoSpatial Theory, Processing and Applications, pp. 225–230 (2002)
Bin Pan received his Ph.D. degree in computer graphics from the State Key Lab of CAD&CG, Zhejiang University, China, in 2011. Currently, he is an instructor at Liaoning Shihua University, Fushun, China. His main scientific interests include computer graphics, visualization, and digital image processing.
Yong Zhao received his Ph.D. degree in 2009 from the State Key Lab of CAD&CG, Zhejiang University. He is currently an assistant professor in the School of Mathematical Sciences, Ocean University of China. His research interests include digital geometry processing, non-photorealistic rendering, and computer animation.
Xiaoming Guo received her M.Sc. degree in 2008 from Liaoning Normal University, Dalian, P.R. China. She is currently an instructor at Liaoning Shihua University, Fushun, China. Her research interests include Support Vector Machines, expressive rendering, and digital image processing.
Xiang Chen received his B.Eng. (with Honors) in Computer Science from Chu Kochen Honors College, Zhejiang University, Hangzhou, China, and is now an M.Sc. candidate in the Interactions Lab at the University of Calgary. His research mission is to explore novel Ubicomp input techniques that are driven and guided by well-defined design concepts, eventually leading to a design space that creates more ideas and opportunities.
Wei Chen is a professor in the State Key Lab of CAD&CG at Zhejiang University, P.R. China. From June 2000 to June 2002, he was a joint Ph.D. student at the Fraunhofer Institute for Graphics, Darmstadt, Germany, and he received his Ph.D. degree in July 2002. He was a visiting scholar at Purdue University, working in PURPL with Prof. David S. Ebert. His current research interests include scientific visualization, visual analytics, and biomedical image computing.
Qunsheng Peng is a professor in the State Key Lab of CAD&CG, Zhejiang University. He graduated from Beijing Mechanical College in 1970 and received his Ph.D. degree from the School of Computing Studies, University of East Anglia, UK, in 1983. His research interests include realistic image synthesis, virtual reality, biomolecule graphics, and scientific visualization. In these fields, he has authored and coauthored more than two hundred journal and conference papers. He is a member of the editorial boards of several international and Chinese journals.