
Map Overlay Problem

lie strictly either on the left or the right side of the line are stored in the corresponding subtree. The segments that intersect the line are stored in both. The process continues until only a few segments of one color remain, and the brute-force algorithm can be run for them.

R-tree allows O((n + k) log(n + k)) running time and O(n + k) space in the worst case (van Oosterom, 1994). It is based on the spatial data structure R-Tree (Guttman, 1984). It can also deal with non-uniformly distributed data: the insertion algorithm creates more nodes in areas with a larger number of line segments, so it automatically adapts to the distribution of the data. The R-Tree has been designed to work efficiently with large amounts of data that do not fit in primary memory; it can therefore handle the expensive operations of swapping memory pages between primary and secondary memory.

Spatial Ordering Algorithms: Because algorithms based on plane partitions divide the space without considering the topology of the input, they run in quadratic time. However, we can consider spatial ordering algorithms that use an aboveness ordering to reduce the number of comparisons between segments. A segment A is above B if some vertical line intersects A at a greater y-coordinate than B. Some examples are (Andrews et al., 1994):

•	Bentley-Ottmann sweep has a running time in the worst and average case of O(n log n + k log n), and O(n) space (Bentley & Ottmann, 1979). This algorithm is based on the plane sweep technique: a vertical line is moved across the plane, stopping at each node (event) of the subdivisions. It uses two data structures: a priority queue for storing the nodes of both subdivisions, ordered by x-coordinate, and a binary search tree for the segments that intersect the line at each event, kept in aboveness order. The initial subdivisions are stored in a Doubly-Connected Edge List (DCEL), a structure frequently used for representing planar subdivisions.
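As a point of contrast with these output-sensitive methods, the brute-force red/blue test that the partition-based algorithms fall back on (once only a few segments of one color remain) can be sketched as follows. This is a minimal illustration in general position (no collinear or touching endpoints); the function names are our own.

```python
def orientation(p, q, r):
    """Sign of the cross product (q - p) x (r - p):
    1 for a left turn, -1 for a right turn, 0 if collinear."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(s, t):
    """True if segments s and t cross, assuming general position."""
    a, b = s
    c, d = t
    return (orientation(a, b, c) != orientation(a, b, d) and
            orientation(c, d, a) != orientation(c, d, b))

def red_blue_intersections(red, blue):
    """Report every intersecting red/blue pair; O(n * m) comparisons."""
    return [(r, b) for r in red for b in blue if segments_intersect(r, b)]
```

For the small leftover sets produced by the partition, this quadratic cost is acceptable; the sweep and partition structures exist precisely to avoid paying it over the whole input.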
The DCEL keeps a record for each node, edge, and face of the subdivision. It also stores the topological relationships among elements, which allow operations such as traversing the edges along a face boundary, reporting the incident edges of a node, or determining which edges are shared by two faces.

•	Trapezoid Sweep provides O(n log n + k) average- and worst-case running time, and O(n) space (Chan, 1994). This algorithm is an extension of a classical plane-sweep algorithm (the Bentley-Ottmann sweep), and it also works with other types of curves. In the algorithm by Bentley and Ottmann the intersections are sorted according to the x-coordinate; each intersection then takes logarithmic time, which leads to the k log n term. The Trapezoid Sweep avoids the logarithmic factor by computing the segment intersections in a way that does not require sorting. A blue trapezoidation is defined as the decomposition of the plane into closed trapezoids formed by the blue segments and some vertical segments, which are vertical extensions of the segment endpoints spanning from the next blue segment above the point to the next blue segment below it (Figure 1). The algorithm is based on a sweep from left to right, stepping through each blue trapezoid. Once the whole plane has been swept, all intersections have been reported.

Figure 1. A blue trapezoidation

•	Hereditary Segment Tree allows O(n log n) running time in the average and worst cases, and takes O(n) space (Chazelle et al., 1990; Palazzi & Snoeyink, 1994). The Hereditary Segment Tree is the basic data structure for computing the red-blue intersections. First, all the segments and endpoints are sorted using a horizontal sweep. The result is a sweep tree which is traversed in-order and returns the sorted list of points and segments. As input the algorithm takes two lists of segments and points in topological order. It then assigns each segment and point to its respective
slab and computes the number of intersections per slab.

Raster Algorithms

Overlay algorithms for raster maps are much simpler than vector ones. The former make use of map algebra, which is the application of Boolean operators (AND, OR, NOT, and XOR) to the corresponding layers (Chang, 2006). To apply these operators, both layers must have the same level of resolution; otherwise, the layer with the coarser resolution is converted into the finer resolution. It is also necessary that both layers cover the same geographic area; otherwise, the operation is performed only on the intersecting area. Other problems can be introduced by a wrong orientation of the pixels; in this case, rotations or resampling can be applied in order to adjust the layers.

The most widely used data structures to represent raster maps in GIS are as follows:

•	Quadtree is one of the best-known data structures for representing raster maps. The initial map must be a power of two in size. The map is recursively divided along the coordinate axes into four quadrants of equal size, labeled NW, NE, SW, and SE. The division stops when the whole quadrant lies in a homogeneous region, and the value of the region is then stored in it. The data structure is represented as a tree whose root is the whole map. Every time a quadrant is divided, the four children that represent the new smaller quadrants are inserted under the corresponding node in the tree.
•	Run-length codes store each row's first and last columns as the index of the region, along with its thematic value.
•	Block codes use the same idea as run-length codes, extended into two dimensions using square blocks to fill in the map regions. The data structure consists of the origin (center or bottom-left corner) and the size of each square.
•	Chain codes encode the contour of a region by means of an origin defined by the user and a clockwise sequence of vectors in the direction of the cardinal points (east = 0, north = 1, west = 2, south = 3). The length of a vector indicates the number of pixels.
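The chain-code decoding just described can be sketched as follows. This is a minimal illustration, not the chapter's own implementation: we assume a (row, column) pixel convention in which row indices grow downward (so "north" decreases the row index), and all names are our own.

```python
# Direction -> (row delta, col delta), assuming rows grow downward.
MOVES = {0: (0, 1),   # east
         1: (-1, 0),  # north
         2: (0, -1),  # west
         3: (1, 0)}   # south

def decode_chain(origin, code):
    """Trace a chain code into pixel coordinates.

    origin: (row, col) starting pixel.
    code:   list of (direction, length) pairs.
    Returns every pixel visited along the contour, origin included.
    """
    r, c = origin
    pixels = [(r, c)]
    for direction, length in code:
        dr, dc = MOVES[direction]
        for _ in range(length):
            r, c = r + dr, c + dc
            pixels.append((r, c))
    return pixels
```

Tracing a pair such as (0, 2) from pixel (5, 6) moves two pixels east, then a following (3, 1) moves one pixel south, reconstructing the contour one run at a time.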
For example: 010, 37, 23, 13, 25, 33, 24, 17 (Figure 5), starting from the pixel on row and column 6, where the first number of each pair represents the direction and the second number represents the length of the vector.

Figure. QuadTree
Figure. Run-length codes
Figure. Chain codes
Figure. Block codes

CONCLUSION

One of the principal applications of GIS is to analyze geographic data. In order to implement complex operations that involve more than one layer, it is necessary to overlay several maps or partitions so that a single map that relates all the inputs is obtained. In this article we have covered the principal techniques and algorithms for the Map Overlay Problem, which is one of the most widely used spatial analysis operations in GIS. There are several problems that can be solved through map overlay, ranging from complex queries involving more than one layer, and Boolean operations between polygons, to windowing and combined buffering.

REFERENCES

Andrews, D. S., Snoeyink, J., Boritz, J., Chan, T., Denham, G., Harrison, J., & Zhu, C. (1994). Further comparison of algorithms for geometric intersection problems. In Proceedings of the 10th International Symposium on Spatial Data Handling.

Balaban, I. J. (1995). An optimal algorithm for finding segment intersections. In Proceedings of the 11th Annual ACM Symposium on Computational Geometry, 211-219.

Bentley, J. L., & Ottmann, T. A. (1979). Algorithms for reporting and counting geometric intersections. IEEE Transactions on Computers, 28(9), 643-647.

Chan, T. M. (1994). A simple trapezoid sweep algorithm for reporting red/blue segment intersections. In Proceedings of the 6th Canadian Conference on Computational Geometry.

Chang, Kang-tsung (2006). Introduction to Geographic Information Systems (3rd ed.). McGraw-Hill, New York, NY.

Finke, U., & Hinrichs, K. (1995). Overlaying simply connected planar subdivisions in linear time. In Proceedings of
the 11th Annual ACM Symposium on Computational Geometry, 119-126.

Franklin, W. R. (1989). Efficient intersection calculations in large databases. In International Cartographic Association 14th World Conference, Budapest, A62-A63.

Franklin, W. R., Kankanhalli, M., & Narayanaswami, C. (1989). Geometric computing and the uniform grid data technique. Computer-Aided Design, 410-420.

Franklin, W. R., Kankanhalli, M., Narayanaswami, C., Sun, D., Zhou, C., & Wu, P. Y. F. (1989). Uniform grids: A technique for intersection detection on serial and parallel machines. In Proceedings of Auto Carto 9: Ninth International Symposium on Computer-Assisted Cartography, Baltimore, Maryland, 100-109.

Guttman, A. (1984). R-Trees: A dynamic index structure for spatial searching. ACM SIGMOD, 13, 47-57.

Chazelle, B., & Edelsbrunner, H. (1992). An optimal algorithm for intersecting line segments in the plane. Journal of the Association for Computing Machinery, 39(1), 1-54.

Kriegel, Hans-Peter, Brinkhoff, Thomas, & Schneider, Ralf (1992). An efficient map overlay algorithm based on spatial access methods and computational geometry. Springer-Verlag, 194-211.

Chazelle, B., Edelsbrunner, H., Guibas, L., & Sharir, M. (1990). Algorithms for bichromatic line segment problems and polyhedral terrains. Technical Report UIUC DCS-R-90-1578, Dept. of Computer Science, University of Illinois at Urbana.

Mairson, H. G., & Stolfi, J. (1988). Reporting and counting intersections between two sets of line segments. In Theoretical Foundations of Computer Graphics and CAD, NATO ASI Series F, Volume 40, Springer-Verlag, 307-325.

de Berg, M., van Kreveld, M., Overmars, M., & Schwarzkopf, O. (2000). Computational Geometry: Algorithms and Applications. Springer, 2, 19-42.

Palazzi, L., & Snoeyink, J. (1994). Counting and reporting red/blue segment intersections. CVGIP: Graphical Models and Image Processing.

de Hoop, S., van Oosterom, P., & Molenaar, M. (1993). Topological querying of multiple map layers. In COSIT'93, Springer-Verlag, 139-157.

Preparata, F. P., & Shamos, M. I. (1985). Computational Geometry. Springer-Verlag, New York.
Shaffer, C. A., Samet, H., & Nelson, R. C. (1990). QUILT: A geographic information system based on quadtrees. Int. J. GIS, 103-131.

van Oosterom, Peter (1994). An R-tree based map-overlay algorithm. EGIS/MARI '94, 1-10.

van Oosterom, Peter (1990). A modified binary space partition for geographic information systems. Int. J. GIS, 133-140.

Wu, Peter Y. (2005). The notion of vicinity: in space, time, and workflow automation. URISA Journal.

KEY TERMS

Combined Buffer: Buffers are generated around points or lines. To obtain the combined buffer of two or more points and/or lines, the union of the buffers generated for each one is computed.

Line Segments Intersection: Consists of computing the intersections between two sets of line segments. It is also known as the red-blue line segments intersection when one set contains red segments and the other contains blue segments; in this case, intersections between red and blue segments are reported.

Map Overlay: Overlaying two or more thematic maps to obtain a single output map.

Plane Sweep: A computational geometry technique, mostly used for finding intersections between lines. It consists of moving a line across a plane while stopping at certain events stored in a priority queue. The events are taken according to their priorities. A binary search tree is also used to represent the state of the line at each moment. The problem is partially solved for all the area that has already been swept by the line at each event; it is completely solved when the line finishes with the last event.

Subdivision Overlay: The process of generating an output plane subdivision from overlaying two or more input plane subdivisions. A plane subdivision is a set of points, lines, and the regions determined by such points and lines. The output subdivision must contain all the points from the input maps plus the points of intersection between the input subdivision segments. The lines in the output subdivision will be those made up of the input lines divided
at the intersection points, together with the input lines that do not intersect. The output regions will be the union and intersection of all the input regions.

Windowing: Overlaying a rectangular window on a map and discarding everything that is not inside the window.

Chapter X
Dealing with 3D Surface Models: Raster and TIN

Mahbubur R. Meenar, Temple University, USA
John A. Sorrentino, Temple University, USA

Abstract

Three-dimensional surface modeling has become an important element in the processing and visualization of geographic information. Models are created from a finite sample of data points over the relevant area. The techniques used for these activities can be broadly divided into raster-based interpolation methods and vector-based triangulation methods. This chapter contains a discussion of the benefits and costs of each set of methods. The functions available using 3D surface models include elevation, queries, contours, slope and aspect, hillshade, and viewshed. Applications include modeling elevation, pollution concentration, and run-off and erosion potential. The chapter ends with a brief discussion of future trends, and concludes that the choice among the methods depends on the nature of the input data and the goals of the analyst.

INTRODUCTION

A surface can be defined as a continuously variable and measurable field that contains an infinite number of points. It typically embodies a great deal of information, and may vary in elevation or proximity to a feature (Bratt, 2004; O'Sullivan et al., 2003). The points that create a 3D topographic surface store elevation or z-values on the z-axis in a three-dimensional x, y, z coordinate system. Other examples of using continuous 3D surfaces are climate, meteorology, pollution, land cover,

Copyright © 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.

Figure 1. Six alternative digital surface representations (Adapted from Longley et al.,
2005b)

and natural resources (McCloy, 2006; Lo et al., 2002; Schneider, 2001). In a Geographic Information Systems (GIS) environment, a surface cannot measure or record the z-values of all of the infinite points stored inside it. A surface model, commonly known as a Digital Terrain Model (DTM) or, more generally, a Digital Elevation Model (DEM), is used to create 3D surfaces in various areas of science and technology. The surface model primarily follows a two-step process in recording and storing data: sampling and interpolation (O'Sullivan et al., 2003; Heywood et al., 2002). The model selects sample observation points, either systematically or adaptively; takes values of this finite number of points; interpolates the values between these points; and finally stores the output information (McCloy, 2006; Bratt, 2004; Heywood et al., 2002).

There are six alternative ways, as shown in Figure 1, to express the continuous variation of a surface in digital representation: (1) regularly spaced sample points (e.g., a 10-meter spacing DEM); (2) irregularly spaced sample points (e.g., temperature recorded at different weather stations); (3) rectangular cells (e.g., values of reflected radiation in a remotely sensed scene); (4) irregularly shaped polygons (e.g., vegetation cover class of different parcels); (5) an irregular network of triangles (e.g., elevation in a Triangulated Irregular Network or TIN); and (6) polylines representing contours (e.g., elevation in contour lines) (Longley et al., 2005b). Among all these surfaces, numbers 1 and 3 are rasters and the remaining four are vectors, which use points (number 2), lines (number 6), or polygons (numbers 4 and 5).

The DEMs or DTMs may be derived from a number of data sources, including contour and spot height information,

Table 1. DTM data collection methods

•	Ground Survey: Suitable for large-scale surface modeling for engineering projects. Data can be acquired using GPS or
an Electronic Tacheometer.
•	Photogrammetry: Suitable for smaller-scale surfaces covering larger geographic areas. Data can be acquired using an Analog Stereoplotter, an Analytical Plotter, or Digital Photogrammetry.
•	Cartography: Suitable for digitizing maps of smaller-scale areas having regional or national coverage. Data can be prepared by digitizing or scanning.
Source: Lo et al., 2002.

stereoscopic aerial photography or photogrammetry, satellite images, cartography (existing maps), and ground surveys (Heywood et al., 2002; Lo et al., 2002; Cohen-Or et al., 1995). Table 1 gives an idea about different surface data collection methods and the techniques they use.

In a raster, a DTM is structured as a regular grid consisting of a rectangular array of uniformly-spaced, equally-sized cells with sampled or interpolated z-values. In vector GIS, a more advanced, more complex, and more common form of DTM is the TIN, which is constructed as a set of irregularly located nodes with z-values, connected by edges to form a network of contiguous, non-overlapping triangular facets (Bratt, 2004; Longley et al., 2005b; Heywood et al., 2002; Lo et al., 2002). Figure 2 displays examples of raster and TIN surfaces. Both surfaces can be created using two main methods: interpolation and triangulation.

BACKGROUND

Creating and Interpolating a Raster Surface

Spatial interpolation is the prediction of exact values of attributes at unsampled locations, a process of intelligent guesswork which is invoked without the user's direct involvement (Longley et al., 2005b; O'Sullivan et al., 2003; Siska et al., 2001; Schneider, 2001; McLean et al., 2000). An interpolation method assumes that spatially-distributed objects are spatially correlated. Using this method, cell values are predicted in a raster from a limited number of sample geographic point data, such as elevation, rainfall, or temperature. Such points can be randomly, strategically, or regularly-spaced features. The several interpolation methods used to
create raster surfaces are Inverse Distance Weighted (IDW), Kriging, Spline, Natural Neighbors, and Trend.

The IDW is the simplest and most commonly used method; it weights the points closer to the processing cell more heavily than those located further away. The weights follow the inverse square of distance: w_i = 1/d_i^2. This method is useful when the variable being mapped decreases in influence with distance from the sampled location (Kennedy, 2006; Bratt, 2004; Longley et al., 2005b; Clarke et al., 2002).

Kriging, a popular geostatistical method, assumes that the distance or direction between sample points reflects a spatial correlation that can be used to explain variation in the surface. In this method, values between points are interpolated by considering the nearest eight points covaried within a circular kernel (Hupy et al., 2005). Kriging is mostly used in geology or soil science, where the data might have spatially-correlated distance or directional bias (McCloy, 2006; Kennedy, 2006; Bratt, 2004; Longley et al., 2005b; Clarke et al., 2002). Studies have shown that the Kriging method provides reasonable results in regions with sufficiently dense observations (Hofierka et al., 2002).

The Spline method may be thought of as the mathematical equivalent of fitting a thin sheet of rubber to a series of data points (Clarke et al., 2002). It is the best method for gradually-varying surfaces such as elevations or pollution concentrations, but it is not appropriate for surfaces with significant changes within short horizontal distances, or for regions where there are no nearby control points (Kennedy, 2006; Bratt, 2004; O'Sullivan et al., 2003).

The Trend method uses a least-squares regression fit. In the Natural Neighbors method, the raster surface is interpolated using the input points that are natural neighbors of a cell. It creates a Delaunay Triangulation of the input points, which is part of the TIN creation process and is discussed in the following section (McCloy, 2006; Kennedy, 2006; Bratt, 2004).
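The IDW scheme described above, with its inverse-square weights w_i = 1/d_i^2, can be sketched in a few lines. This is a minimal illustration rather than any GIS package's implementation; the function name and the sample-list format are our own, and a query point that coincides with a sample simply returns that sample's value to avoid division by zero.

```python
import math

def idw(query, samples, power=2):
    """Inverse Distance Weighted estimate at a query location.

    query:   (x, y) location to estimate.
    samples: list of ((x, y), value) observation pairs.
    power:   distance exponent; 2 gives the w_i = 1/d_i^2 weights.
    """
    num = den = 0.0
    for (x, y), value in samples:
        d = math.hypot(x - query[0], y - query[1])
        if d == 0.0:
            return value  # exact hit on a sample: no interpolation needed
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den
```

A query point equidistant from two samples simply averages them, and the estimate is pulled increasingly toward the nearest sample as the distance contrast grows, which is exactly the "closer points weigh more" behavior the text describes.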
Creating a TIN Surface

TIN surfaces are usually created from a combination of point (mass point), line (breakline), and polygon data, at least some of which should have z-values. The vertices of the TIN triangles represent peaks, pits, depressions, and passes; the edges represent ridges, river channels, and valleys; and the surfaces of the triangles provide area, slope (gradient), and aspect (orientation) (Kennedy, 2006; Heywood et al., 2002; Lo et al., 2002; O'Sullivan et al., 2003). In vector GIS, TINs are stored as polygons, each with three sides.

Mass points are the primary input features for creating a TIN, as they determine the overall shape of the surface. These mass points with their z-values become nodes in the triangulated network. Breaklines and polygons can provide more control over the shape of the TIN surface. Breaklines may or may not have height measurements, and they represent natural or built features such as a stream or a road. Two types of breaklines are available: hard and soft. Hard breaklines are used to show abrupt changes in a surface, whereas soft breaklines do not indicate any change in slope, but may display a study area boundary. Polygons are separately-interpolated areas, and they represent surface features that have areas, such as lakes or ponds (Bratt, 2004).

According to Heywood et al. (2002), there are two methods available to achieve the linking of TIN vertices. In the distance-ordering method, the distance between all pairs of points is calculated and sorted, and the closest points are connected. This method can, however, produce too many short triangles. In the Delaunay Triangulation method, a convex hull is created for the dataset. Then non-overlapping straight lines are drawn from interior points to points on the boundary of the convex hull and to each other. Thus the hull is divided into a set of polygons, and then divided again into triangles by drawing
more lines between the vertices of the polygons (McCloy, 2006; Longley et al., 2005b; Clarke et al., 2002; Okabe et al., 1992). This method is better than the other one for the following reasons: (1) the triangles are as equi-angular as possible, which reduces the potential numerical precision problems created by long, thin triangles; (2) the triangles ensure that any point on the surface is as close as possible to a node; and (3) the triangulation is independent of the order in which the points are processed (Geocities web site, 2006). Delaunay Triangulation breaks down when (a) the observation points get further apart than the feature(s) being mapped; (b) the surface becomes more anisotropic; and (c) the data do not match the shape of the features in the surface (McCloy, 2006).

Another type of irregular network, known as a Tetrahedral Irregular Network and not often used, creates a volumetric representation of a 3D set of points by using tetrahedron cells to fill the 3D convex hull of the data (Clarke et al., 2002).

RASTER AND TIN: PROS AND CONS

Some advantages of raster surfaces are as follows: (1) they give a uniform density of point data that is easy to process; (2) they do not need to store spatial coordinates explicitly for every control point; (3) each z-value's position can be easily found (because of the GRID structure), as well as its spatial relationship to all the other points; and (4) they can be processed using the array data structures available in most computer programming languages (O'Sullivan et al., 2003).

Since an interpolation method relies heavily on the spatial correlation of points, the accuracy of a raster DTM depends on the complexity of its surface and the grid spacing (McCloy, 2006; Kennedy, 2006; Heywood et al., 2002; Siska et al., 2001). Any feature that is more precise than the size of the grid cell cannot be located in a raster. Whether an input point falls on the center or another part of a cell, there is no guarantee
that the whole cell will have the same value as that input point (Bratt, 2004). For this reason, the more input points in a raster surface and the more even their distribution, the more reliable or accurate the results will be (Bratt, 2004; Lo et al., 2002; Siska et al., 2001).

On the other hand, because of the irregular shapes of the triangles, the TIN DTM can have higher resolution in areas where the surface is more complex (such as mountainous areas) by including more mass points, and lower resolution in areas that are relatively flat (Longley et al., 2005b; Bratt, 2004; Heywood et al., 2002). Thus TIN surfaces can produce more accurate results in spatial analysis, even if they are built with a significantly smaller number of sample points compared to raster surfaces (Lo et al., 2002; Siska et al., 2001). The other advantage of a TIN model is the efficiency of data storage (Heywood et al., 2002; Lo et al., 2002). In addition to nodes and edges, other features such as roads and streams can be used as input features in the TIN creation process, which makes it more precise than a raster GRID (Bratt, 2004).

However, because of their complex data structure, TIN surface models are considered more expensive and time-consuming, and thus are not as widely used as raster surfaces. TINs are used mostly in smaller areas, for more detailed, large-scale applications; raster surfaces are used in more regional, small-scale applications (Bratt, 2004). Some other limitations of TINs are their inability to deal with discontinuity of slope across triangle boundaries, the difficulty of calculating optimum routes, and the need to ensure that peaks, ridges, and channels are recorded (Longley et al., 2005b).

Raster and TIN surfaces are interchangeable. A TIN can be created from a raster to simplify the surface model for visualization; a raster can

Figure 2. Raster and TIN surfaces of the same area

About the Point Location Problem

Point in Polygon: In a general formulation, the ability to
superimpose a set of points on a set of polygons and determine which polygons (if any) contain each point.

Polygon: A planar shape created by a set of connected line segments (or vectors) that form vertices at their meeting points. Note that an n-gon is a polygon with an undetermined number of sides.

Polyline: A set of coordinate points connected by a set of line segments. In programming, geometric objects such as polylines, points, and lines are defined in an Application Programming Interface (API) as primitives with graphical output.

Quadtree: A data structure used, among other uses, to reduce the storage requirements of a raster by coding contiguous homogeneous areas singly. A raster of 2^n by 2^n cells is recursively divided into four equal squares. Subdivision continues in each square until the square is homogeneous, subdivision is no longer possible, or, in general, the final condition is reached.

Quad-Tree: An abbreviation for quadrilateral tree.

Vertex: The location at which vectors and polygon faces or edges intersect. The vertices of an object are used in transformation algorithms to describe the object's location and its location in relation to other objects.

Quadrilateral: A closed polygon with four vertices, and thus four sides. Quadrilaterals, or quads, can be planar, in which case all vertices lie on the same plane, or non-planar. With non-planar quadrilaterals (sometimes called bow ties, because of their shape), it is more difficult to calculate orientation and lighting, and thus many systems tessellate quads to triangles, which are definitively planar.

Chapter XIV
Classification in GIS Using Support Vector Machines

Alina Lazar, Youngstown State University, USA
Bradley A. Shellito, Youngstown State University, USA

Abstract

Support Vector Machines (SVM) are powerful tools for the classification of data. This article describes the functionality of SVM, including their design and operation. SVM have been shown to provide high classification accuracies and have
good generalization capabilities. SVM can classify linearly separable data as well as nonlinearly separable data through the use of the kernel function. The advantages of using SVM are discussed, along with the standard types of kernel functions. Furthermore, the effectiveness of applying SVM to large, spatial datasets derived from Geographic Information Systems (GIS) is also described. Future trends and applications are also discussed. The extracted dataset described here contains seven independent variables related to urban development plus a class label which denotes urban areas versus rural areas. This large dataset, with over a million instances, really proves the generalization capabilities of the SVM methods. Also, the spatial property allows experts to analyze the error signal.

Introduction

This entry addresses the usage of Support Vector Machines (SVM) for the classification of remotely sensed data and other spatial data created from Geographic Information Systems (GIS). Variability, noise, and the nonlinear separability property are problems that must be confronted when dealing with spatial data, and SVM have become popular tools for classification and regression as they address most of these problems. SVM are widely recognized as classification tools by computer scientists, data mining and machine learning researchers. To date, GIS researchers have used older, more developed techniques such as the logistic model for regression and artificial neural networks for classification and regression. However, we consider that the SVM method has much potential, and the goal of our chapter is to bring it to the attention of the GIS community.

SVM translate the input data into a larger feature space, using a nonlinear mapping, where the instances are linearly separable. A straight model line in the new
feature space corresponds to a nonlinear model in the original space. In order to build the straight delimitation line called "the maximum margin hyperplane" in the new space, a quadratic optimization learning algorithm is applied. The support vectors are the instances with the minimum distance to the hyperplane. The new space is the result of the dot product of the data points in the feature space.

Classification Problems on Spatial Data

Machine learning and data mining methodologies (such as artificial neural networks and agent-based modeling) have been adapted for the classification of geospatial data in numerous studies, such as Fischer (1994), Pijanowski and Alexandidris (2002), and Brown et al. (2005). Proposed by Vapnik (1999) in 1992 and improved in 1995, the SVM algorithm was inspired by statistical learning theory. SVM became a very popular classification tool after it was successfully implemented and applied to handwritten digit recognition. Lately, SVM have been adapted and utilized in the classification of remotely sensed data for land cover mapping (e.g., see Foody and Mathur 2004; Mantero et al. 2005; Marcal et al. 2005; Niishi and Eguchi 2005). SVMs have also been applied to the classification of hyperspectral data by Melgani and Bruzzone (2004). Other studies have utilized SVMs as a classification tool with remotely sensed data alongside data derived from traditional GIS sources (Song et al. 2004; Watanachaturaporn et al. 2005).

SVM have been less commonly utilized as a classification tool in GIS modeling. Guo et al. (2005) have utilized SVM together with GIS tools for studying the ecology of oak death in California. Other studies (e.g., see Shellito and Lazar 2005; Lazar and Shellito 2005) use SVM with GIS data to examine distributions of urbanization in Ohio with regard to predictor variables (in raster format) of urban development. The SVM methods not only provide higher accuracy compared to other classifiers while using fewer parameters, but are also particularly robust to instances of missing data, errors, and redundancy, all of which typically characterize real datasets.
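The geometric picture just described, a separating hyperplane with the support vectors lying at minimum distance from it, can be sketched directly. This is a toy illustration, not a training algorithm: we assume the weight vector w and offset b already come from a trained model, and all function names are our own.

```python
import math

def decision(w, b, x):
    """Signed decision value <w, x> + b; its sign gives the predicted class."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def margin_distance(w, b, x):
    """Geometric distance from point x to the hyperplane <w, x> + b = 0."""
    return abs(decision(w, b, x)) / math.sqrt(sum(wi * wi for wi in w))

def support_vectors(w, b, points, tol=1e-9):
    """Training points whose distance to the hyperplane is minimal (within tol)."""
    dists = [margin_distance(w, b, p) for p in points]
    best = min(dists)
    return [p for p, d in zip(points, dists) if d <= best + tol]
```

With w = (1, 0) and b = -1 the hyperplane is the vertical line x = 1, and of the points (0, 0), (3, 0), and (1.5, 0), only the last lies closest to it, so it would play the role of a support vector for that boundary.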
particularly robust to instances of missing data, errors, and redundancy, all of which typically characterize real datasets.

The SVM Approach and Critical Issues

The SVM algorithms of Cristianini and Shawe-Taylor (2000) and Schölkopf and Smola (2002), inspired by statistical learning theory (Vapnik, 1999), combine kernel methodology and convex function optimization to solve classification and regression problems. With numerous advantages (Table 1) and several available computer implementations (Table 2), SVM has become a viable and efficient tool. SVM builds a decision boundary by choosing the separating hyperplane which maximizes the margin (Figure 1) between positive and negative examples. Decision boundaries with large margins usually generalize better than those with small margins, and they will not be affected by using a big training set. The training points that lie closest to the separating hyperplane are called support vectors. The algorithm used to build the hyperplane is a quadratic optimization algorithm.

Let xi, i = 1,…,l, be a training set of examples for the problem of interest. For a binary classification task, each input instance xi ∈ R^N in the attribute space will have an associated decision yi ∈ {-1, 1}.

Figure 1. Separation of two classes

Table 1. Advantages of SVM
• High classification accuracies
• Good generalization capabilities
• Few parameters to set up
• Good for heterogeneous classes
• No need for feature reduction
• Uses quadratic programming
• Good for high dimensional data

Table 2. SVM Implementations
• LIBSVM
• SVMLight
• WEKA
• Gist
• YALE

In the simple case of a linearly separable problem, the set of hyperplanes defined by the dot product ⟨w,x⟩ + b = 0 is considered in order to find the ideal separator of the data points. The vector w determines the orientation of the discriminating plane and the scalar b represents the offset from the origin. The defining property of this optimal hyperplane is that its distance
to the closest input points is maximized. Through simple geometrical analysis, the problem reduces to a system of quadratic equations. The solution defines the parameters w and b of the hyperplane.

The previous analysis applies to linearly separable data sets, but the method is easily extendable to nonlinear problems by replacing the dot product ⟨w,x⟩ by a kernel function k(x,x') = ⟨Φ(x), Φ(x')⟩. A kernel, which is the similarity measure between different members of the data set, is used together with the mapping function Φ to translate the initial dataset into a higher dimensional space (Figure 2). Typically, the kernels are parametric functions whose parameters are tuned for the best accuracy on the training data set. Polynomial, Gaussian, radial basis, and sigmoid kernel functions (Table 3) have been successfully used, with similar behavior in terms of the resulting accuracy. There is no accepted method for choosing the best kernel function.

Figure 2. Kernel transformation

For nonlinearly separable data, finding the decision boundary requires not only maximizing the margin, as in the case of linearly separable data, but also minimizing the error of wrongly classified instances. In practice, slack variables (Schölkopf & Smola, 2002), multiplied by a constant C, were introduced to account for the wrongly classified data points. Here the positive constant C controls the trade-off between the maximization of the margin and the training error minimization. A larger C value means a higher penalty for misclassified samples.

Finding the best SVM model involves the selection of the parameter C and the parameter or parameters associated with the kernel function. One way to determine the best parameters is by running a grid search, in which the values for the parameters are generated in a predefined interval with a fixed step. The performance of each combination is computed and used to determine the best set of parameters. Due to
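The kernel substitution described above can be made concrete with a small sketch. The example below (plain Python, no SVM library) uses the degree-2 polynomial kernel without the constant term, whose explicit feature map for 2-D inputs can be written out by hand, and verifies the defining identity k(x,x') = ⟨Φ(x), Φ(x')⟩:

```python
import math

def poly2_kernel(x, y):
    """Degree-2 polynomial kernel k(x, y) = <x, y>^2 (no constant term)."""
    return (x[0] * y[0] + x[1] * y[1]) ** 2

def phi(x):
    """Explicit feature map for the degree-2 kernel on 2-D inputs:
    Phi(x) = (x1^2, x2^2, sqrt(2) * x1 * x2)."""
    return (x[0] ** 2, x[1] ** 2, math.sqrt(2.0) * x[0] * x[1])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, xp = (1.0, 2.0), (3.0, 0.5)
# The "kernel trick": the kernel evaluated in the original 2-D space
# equals the dot product in the 3-D feature space.
assert abs(poly2_kernel(x, xp) - dot(phi(x), phi(xp))) < 1e-9
```

For the full kernels of Table 3 (with the +1 constant, or the Gaussian exponential) the feature space is much higher- or even infinite-dimensional, which is exactly why SVM implementations evaluate k(x,x') directly rather than mapping the points.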
computing limitations, and because of the quadratic growth of the kernel matrix with the number of training examples, the search for the SVM parameters is usually done on datasets containing no more than 10³ data instances. Thus, for larger datasets only a randomly selected subset of training instances is used for the grid search.

The field of mathematical programming provides deterministic algorithms for the quadratic programming problem described above. Unfortunately, these algorithms are suitable only for small datasets. Large real spatial datasets can be handled by a decomposition method called Sequential Minimal Optimization (SMO) (Platt, 1998). This type of algorithm is implemented such that at each iteration only a small subset of the initial training dataset is used to optimize two of the parameters while the rest are kept fixed.

Table 3. Types of Kernels

Kernel | Formula | Number of parameters
Polynomial | k(x,x') = (⟨x,x'⟩ + 1)^p | 1
Gaussian Radial Function | k(x,x') = exp(−‖x − x'‖² / (2σ²)) | 1
Sigmoid | k(x,x') = tanh(κ⟨x,x'⟩ + ϑ) | 2
Radial Basis Function (RBF) | k(x,x') = exp(−‖x − x'‖ / (2σ²)) | 1

Future Trends

Due to its applicability as a classification and predictive tool, the authors foresee SVM becoming more prevalently utilized with geospatial technologies. SVMs are being used extensively with the classification of remotely sensed data as well as with other spatial data generated within GIS. Studies are also being done to compare the effectiveness of SVM with other machine learning methods, such as artificial neural networks, maximum likelihood, and decision trees. Huang et al. (2002) and Pal and Mather (2005) found SVM to be very competitive when compared to other methods, often outperforming them in accuracy levels. When utilized with spatial modeling, the ability to map outputs or error signals provides another useful tool. Using a 'spatial error signal' similar to a predictive model used by Pijanowski et al. (2002), Shellito and Lazar (2005)
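The grid search procedure described above can be sketched in a few lines. The sketch below is library-free: `evaluate` is a stand-in for whatever accuracy estimate is used in practice (for example, cross-validated accuracy of an SVM trained with those parameters), and the parameter ranges are arbitrary illustrative choices:

```python
def grid_search(evaluate, c_values, gamma_values):
    """Exhaustive search over a fixed (C, gamma) grid, keeping the
    combination with the highest score."""
    best_score, best_params = float("-inf"), None
    for c in c_values:
        for gamma in gamma_values:
            score = evaluate(c, gamma)
            if score > best_score:
                best_score, best_params = score, (c, gamma)
    return best_params, best_score

# Stand-in scoring function: in practice this would train an SVM with
# the given C and kernel parameter and return cross-validated accuracy.
def evaluate(c, gamma):
    return -((c - 10.0) ** 2) - (gamma - 0.1) ** 2  # peaks at C=10, gamma=0.1

params, score = grid_search(evaluate,
                            c_values=[0.1, 1.0, 10.0, 100.0],
                            gamma_values=[0.01, 0.1, 1.0])
assert params == (10.0, 0.1)
```

The nested loops make the quadratic cost of the search explicit, which is why, as noted above, the grid search is typically run on a small random subset of a large spatial dataset.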
were able to spatially examine the error results of the SVM output to search for under-prediction and over-prediction of errors.

The SVM algorithms work well for binary classification problems. Future studies should take into consideration modifying the algorithms for multi-class problems and regression.

Conclusion

This article focused on the usage and operation of Support Vector Machines (SVM), especially when applied to real-world datasets, including spatial data. SVM have been used as a classification method for geospatial data, including remotely sensed imagery and spatial modeling with GIS. SVM has numerous advantages for use in image classification and prediction or classification of spatial data when compared with other methods. Overall, SVM provides better accuracies and requires fewer parameters, but the selection process should lead to an optimal set of parameters.

References

Brown, D., Riolo, R., Robinson, D. T., North, M., & Rand, W. (2005). Spatial process and data models: Toward integration of agent-based models and GIS. Journal of Geographical Systems, 7(1), 25-47.

Cristianini, N., & Shawe-Taylor, J. (2000). An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge, England: Cambridge University Press.

Fischer, M. M. (1994). Expert systems and artificial neural networks for spatial analysis and modeling. Geographical Systems, 1, 221-235.

Foody, G. M., & Mathur, A. (2004). A relative evaluation of multiclass image classification by support vector machines. IEEE Transactions on Geoscience and Remote Sensing, 42(6), 1335-1343.

Guo, Q., Kelly, M., & Graham, C. H. (2005). Support vector machines for predicting distribution of Sudden Oak Death in California. Ecological Modelling, 182(1), 75-90.

Huang, C., Davis, L. S., & Townshend, J. R. G. (2002). An assessment of support vector machines for land cover classification. International Journal of Remote Sensing, 23(4), 725-749.

Lazar, A., & Shellito, B. A. (2005). Comparing Machine Learning Classification Schemes - A GIS
Approach. Paper presented at the Fourth International Conference on Machine Learning and Applications, Los Angeles, CA, USA, December 15-17, 2005.

Mantero, P., Moser, G., & Serpico, S. B. (2005). Partially supervised classification of remote sensing images through SVM-based probability density estimation. IEEE Transactions on Geoscience and Remote Sensing, 43(3).

Marcal, A. R. S., Borges, J. S., Gomes, J. A., & Costa, J. F. P. D. (2005). Land cover update by supervised classification of segmented ASTER images. International Journal of Remote Sensing, 26(7), 1347-1362.

Melgani, F., & Bruzzone, L. (2004). Classification of hyperspectral remote sensing images with support vector machines. IEEE Transactions on Geoscience and Remote Sensing, 42(8), 1778-1790.

Nishii, R., & Eguchi, S. (2005). Supervised image classification by contextual AdaBoost based on posteriors in neighborhoods. IEEE Transactions on Geoscience and Remote Sensing, 43(11), 2547-2554.

Pal, M., & Mather, P. M. (2005). Support vector machines for classification in remote sensing. International Journal of Remote Sensing, 26(5), 1007-1011.

Pijanowski, B., Brown, D., Shellito, B., & Manik, G. (2002). Use of neural networks and GIS to predict land use change.
Computers, Environment, and Urban Systems, 26(6), 553-575.

Pijanowski, B., & Alexandidris, K. (2002). A Multi-Agent Based Environmental Landscape (MABEL) model: A distributed artificial intelligence simulation model. In Proceedings of the Second World Congress of Environmental and Resources Economics, Monterey, CA.

Platt, J. (1998). Sequential Minimal Optimization: A fast algorithm for training support vector machines. Technical Report MSR-TR-98-14, Microsoft Research.

Schölkopf, B., & Smola, A. (2002). Learning with Kernels. Cambridge, MA: MIT Press.

Shellito, B. A., & Lazar, A. (2005). Applying support vector machines and GIS to urban pattern recognition. Paper presented at the Applied Geography Conferences, Washington, DC, USA, November 2-5, 2005.

Song, X., Fan, G., & Rao, M. N. (2004). Machine learning approaches to multisource geospatial data classification: Application to CRP mapping in Texas County, Oklahoma. Paper presented at the IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, NASA Goddard Center, Greenbelt, MD, October 27-28, 2003.

Vapnik, V. N. (1999). The Nature of Statistical Learning Theory (2nd ed.). New York, NY: Springer-Verlag.

Watanachaturaporn, P., Varshney, P. K., & Arora, M. K. (2005). Multisource fusion for land cover classification using support vector machines. Paper presented at the 7th International Conference on Information Fusion, Philadelphia, PA, USA, July 25-28, 2005.

Key Terms

Classification: The task of building predictive models from training datasets. The model is then used to label objects from a testing dataset and assign them into predefined categories.

Kernel: The similarity measure between different members of a data set that is used together with a mapping function Φ to translate the initial dataset into a higher dimensional space.

Linearly Separable Data: Two sets of data points in a two-dimensional space are said to be linearly separable when they can be completely separated by a single straight line. In
general, two groups of data points are separable in an n-dimensional space if they can be separated by an (n-1)-dimensional hyperplane.

Maximal Margin Hyperplane: A hyperplane which separates two clouds of points and is at equal distance from the two. The margin between the hyperplane and the clouds is thus maximal.

Noise: The random component of a measurement error.

Regression: The task of learning a target function that maps one or more input attributes to a single numeric attribute.

Spatial Data: Geographic information related to the location and distribution of objects on the Earth.

Support Vectors: A subset of the training examples that are closest to the maximum margin hyperplane.

Support Vector Machines: A supervised learning technique based on statistical learning theory.

Chapter XV
Network Modeling

Kevin M. Curtin
George Mason University, USA

Abstract

Network models are some of the earliest and most consistently important data models in GISystems. Network modeling has a strong theoretical basis in the mathematical discipline of graph theory, and methods for describing and measuring networks and proving properties of networks are well-developed. There are a variety of network models in GISystems, which are primarily differentiated by the topological relationships they maintain. Network models can act as the basis for location through the process of linear referencing. Network analyses such as routing and flow modeling have to some extent been implemented, although there are substantial opportunities for additional theoretical advances and diversified application.

Introduction

Network modeling encompasses a wide range of procedures, techniques, and methods for the examination of phenomena that can be modeled in the form of connected sets of edges and vertices. Such sets are termed networks or graphs, and the mathematical basis for network analysis is known as graph theory. Graph theory contains descriptive
measures and indices of networks, as well as methods for proving the properties of networks. Networks have long been recognized as an efficient way to model many types of geographic data, including transportation networks, river networks, and utility networks, among many others. Network structures in Geographic Information Systems (GISystems) were among the first to be developed and have persisted with wide use to this day. Most importantly, network models enable the analysis of phenomena that operate on networks. This article reviews the types of networks modeled in geographic applications, describes the graph theoretic bases underlying network models, outlines the implementations of network models in GISystems and the analysis performed with those models, and describes future challenges in network modeling.

Background

The most familiar network models are those used to represent the networks with which much of the population interacts every day: transportation and utility networks. Figure 1 shows three networks (roads, rivers, and railroads) with their typical GISystems representations. For most, the objects that these networks represent are obvious due to a familiarity created by frequent use. Cartographic conventions serve to reinforce the interpretation of the functions of these networks. It is clear that these three networks represent fundamentally different phenomena: roads and railroads are man-made, while river networks are natural. Rivers flow only in one direction, while—depending on the network model—roads and railroads can allow flow in both directions. Perhaps most importantly, different types of activities occur on these networks. Pedestrian, bicycle, car, and truck traffic occur only on the road network, and the others are similarly limited in the vehicles that use them. Similar idiosyncrasies could be noted for many other
types of networks that can be modeled in GISystems, including utility networks (electricity, telephone, cable, etc.), other transportation networks (airlines, shipping lanes, transit routes), and even networks based on social connections if there is a geographic component. Although the differences among the variety of networks that can be modeled allow for a great diversity of applications, it is the similarities in their structure that provide a basis for analysis.

Figure 1. Geographic network datasets

Graph Theory for Modeling Networks

Networks exist as a general class for geoinformatic research based on the concept of topology, and the properties of networks are formalized in the mathematical sub-discipline of graph theory. All networks or graphs (the terms can be used interchangeably), regardless of their application or function, consist of connected sets of edges (a.k.a. arcs or lines) and vertices (a.k.a. nodes or points). The topological properties of graphs are those that are not altered by elastic deformations (such as stretching or twisting). Therefore, properties such as connectivity and adjacency are topological properties of networks, and they remain constant even if the network is deformed by some process in a GISystem (such as projecting or rubber sheeting). The permanence of these properties allows them to serve as a basis for describing, measuring, and analyzing networks.

Network Descriptions and Measures

In Network Modeling—as in many scientific endeavors—the first concern is to define and describe the basic elements to be analyzed. Simple descriptions of graphs include the number of edges and vertices; the maximum, minimum, or total length of the edges; or the diameter of the graph (the length of the longest minimal cost path between vertices). As a measure of the complexity of a graph, the number of fundamental cycles (cycles that do not contain other cycles) can be computed from the number of edges, vertices, and sub-graphs
(disconnected parts of the graph). Measures of centrality within a graph include the maximum, minimum, or average degree of the vertices (the degree of a vertex is the number of edges incident to it); the closeness of nodes to other nodes, as measured by the distances between them; the betweenness of a node, which is a measure of the number of paths between nodes that go through it; and eigenvector centrality, which measures centrality via a neighborhood mechanism.

Networks can also be classified into idealized types based on topological or structural characteristics. Examples of these include tree networks, where branching occurs at nodes but no cycles are created (river networks are frequently tree networks); Manhattan networks, which are formed from edges crossing at right angles creating rectangular "blocks" (urban street networks frequently approximate the Manhattan network); and hub-and-spoke networks, where edges radiate from a central vertex (as airline routes radiate from a hub). These special cases of network models have distinctive properties, and provide a range of platforms on which network analysis can take place.

Network Indices

One can move beyond the description of the attributes of graphs toward comparison and analysis of those graphs with several graph indices (Kansky, 1963; Rodrigue et al., 2006). These indices are more complex in that they compare two or more measures of networks, and they can loosely be grouped into measures of pure graph properties and measures of applied networks. Since topological properties are of primary concern, most of the indices of pure graph properties are measures of connectivity. Examples include:

• The Alpha Index: A measure of connectivity that compares the number of fundamental cycles in a graph to the maximum possible number of fundamental cycles. Since more cycles indicates greater connectivity in the graph, the larger the Alpha Index, the more connected the graph. The possible values of the Alpha
Index range from 0 for graphs with no cycles (such as trees) to 1 for completely connected networks.

• The Beta Index: A measure of connectivity that compares the number of edges to the number of vertices in a graph. When comparing two graphs with the same number of vertices, if there are more edges in one of the graphs then that graph must be more thoroughly connected.

• The Gamma Index: A measure of connectivity that compares the number of edges in a graph to the maximum possible number of edges in a graph. Like the Alpha Index, the Gamma Index may have a value between 0 (a graph with no edges) and 1 (a completely connected graph).

The measures just described can be used on any graph simply by counting the number of edges, vertices, and sub-graphs. With applied networks comes additional attribute information, such as the length of the edges or the area in which the graph exists. This information allows for another set of indices to be defined:

• The Eta Index: The average length per link. The Eta Index provides a comparative measure of the distance between vertices or the cost of traveling between nodes.

• The Pi Index: The relationship between the diameter of the graph and the total length of the edges in the graph. This index measures the shape of the graph, where a low value indicates a graph dominated by its diameter, with relatively little development off of this axis. A high value of the Pi Index indicates a greater level of development (more and longer edges) away from the diameter.

• The Detour Index: A comparison of the Euclidean distance between two points on the network and the actual travel distance between them. The closer the actual distance is to the Euclidean distance, the more efficiently the network overcomes distance.

• Network Density: A measure comparing the length of network edges to the areal extent of the network. The greater the length of edges per unit area, the more developed the network.

These measures and indices begin to describe the properties of networks,
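The connectivity indices above can be computed directly from the edge, vertex, and sub-graph counts. The sketch below uses one common set of formulas for planar networks, following Kansky's formulations (the exact normalizations vary between sources, so treat these as one reasonable convention rather than the only definition): alpha = (e − v + p) / (2v − 5), beta = e / v, gamma = e / (3(v − 2)).

```python
def alpha_index(e, v, p=1):
    """Fundamental cycles (e - v + p) over the maximum possible number
    of cycles in a planar graph (2v - 5). p is the number of sub-graphs."""
    return (e - v + p) / (2 * v - 5)

def beta_index(e, v):
    """Edges per vertex."""
    return e / v

def gamma_index(e, v):
    """Edges over the maximum possible number of edges
    in a planar graph, 3(v - 2)."""
    return e / (3 * (v - 2))

# A tree with 10 vertices: 9 edges, one component, no cycles.
assert alpha_index(9, 10) == 0.0        # no cycles -> minimum connectivity
assert beta_index(9, 10) == 0.9
# Adding edges (creating cycles) raises the connectivity indices.
assert alpha_index(12, 10) > alpha_index(9, 10)
assert gamma_index(12, 10) > gamma_index(9, 10)
```

Because the indices depend only on counts, they allow exactly the kind of comparison described above: two networks, or two versions of the same network over time, can be compared without reference to their geometry.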
and provide a means for making basic comparisons among different networks, or among different versions of the same network over time. Although this is not the appropriate venue to review them, there are many more advanced graph theoretic techniques for describing networks, for proving their properties, and for determining paths across them (Harary, 1982; Wilson, 1996).

Network Models in Geographic Information Systems

Although the graph theoretic definition of a network—a connected set of edges and vertices—applies to network models as they exist in GISystems, the details of implementing networks have changed over time. The earliest computer systems for automated cartography employed the "spaghetti" data model for geographic objects, including the elements of networks. This data model did not preserve topological properties among the parts of the network, but simply recorded each feature individually with an identifier and a list of coordinates that defined its shape. While this data model is simple to understand and efficient in terms of display, it is essentially useless for network analysis. The spaghetti data model persisted in Computer Aided Design software for several decades, but was abandoned (at least for a time) by most GISystems in favor of topologically integrated network models.

Topological Network Data Models

Prior to the 1970 Census of Population, the U.S. Census Bureau had not been widely known for its mapping capability. The necessity of identifying explicit locations to which census questionnaires could be mailed resulted in an extraordinary advance for GISystem network modeling. Researchers at the Census Bureau are credited with developing the first GISystem topological data model, termed the Dual Incidence Matrix Encoding (DIME) model (Cooke, 1998). "Dual Incidence" refers to the capture of topological information between nodes (which nodes are incident) and along lines (which polygons are adjacent). Figure 2 shows both a
graphic and a tabular representation of how points, lines, and polygons are stored in the topological data model. The DIME databases and their successors eventually became known as the Topologically Integrated Geographic Encoding and Referencing (TIGER) files, and the data model employed for TIGER became the de facto standard for network representations in GISystems. A similar data model was employed for the Digital Line Graph (DLG) series of products from the United States Geological Survey (USGS).

The introduction of topological incidence into network GISystem data structures had a profound influence on the ability to conduct network analysis. The graph theoretic methods developed over several centuries could now be employed. In addition to enabling network analysis, the topologically integrated data model was also more efficient in terms of storage and eliminated "sliver" and overlap errors. However, this data model also imposed constraints on network analysts. The Census Bureau designed the data model to explicitly associate all potential locations within the United States with a set of hierarchical polygons, and it was the definition of these polygons that was the ultimate goal. The fact that streets were often used as the boundaries between these polygons was a fortunate twist of fate for transportation modelers and others interested in network GISystems. In order to create the complete polygonal coverage, the data model had to enforce planarity. Planar graphs are those that can be drawn in such a way that no edges intersect without a vertex at that location. The dual incidence data model is therefore a fully intersected planar data model.

There are several consequences of planar enforcement for network analysis. First, geographic features (such as roads) are divided into separate database objects (records) based on every intersection. That is, a road that is perceived and used as a single object in the transportation network would need to be represented as a
series of records in the geographic database. This repetition increases the database size many times over and exposes the database to error when multiple features are assigned attribute values. Second, the planar network data model does not easily support the creation of bridge, overpass, or tunnel features: places where network features cross without an intersection being present. These limitations have necessitated the development of non-planar data models designed specifically for network routing applications (Fohl et al., 1996). As network models have become further divorced from the TIGER data model over time, modelers have employed efficient storage structures such as the forward star representation (Evans & Minieka, 1992).

Figure 2. Dual incidence encoding

Linear Referencing on Networks

For many applications it is the network itself that acts as the underlying datum (or reference set), rather than a coordinate system designed to locate objects on the Earth's surface. The process of using a network for reference is termed linear referencing. This process (also known as location referencing or dynamic segmentation) emerged from engineering applications where it was necessary to locate a point along a linear feature (often roads). The most frequently recognized application of linear referencing is the mile markers along US highways (Federal Transit Administration, 2003). For any network application, the use of linear referencing has several primary benefits. First, locations specified with linear referencing can be readily recovered in the field and are generally more intuitive than locations specified with traditional coordinates. Second, linear referencing removes the requirement of a highly segmented network based on differences in attribute values. The implementation of linear referencing allows an organization to maintain a network database with many different attribute events associated with a single, reasonably small set of network features.
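The core linear-referencing operation can be sketched as locating the point at a given measure along a polyline by linear interpolation between its vertices. The sketch below is plain Python with illustrative coordinates and measures; real linear referencing systems add calibration points, offsets, and event tables on top of this primitive:

```python
import math

def locate(polyline, measure):
    """Return the (x, y) point at distance `measure` along `polyline`,
    a list of (x, y) vertices. Raises ValueError if the measure is
    beyond the end of the line."""
    remaining = float(measure)
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if remaining <= seg:
            t = remaining / seg if seg else 0.0
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        remaining -= seg
    raise ValueError("measure exceeds polyline length")

# A road stored as one feature; an "event" (e.g., an accident report)
# is referenced by its measure along the road, not by coordinates.
road = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0)]   # total length 7.0
assert locate(road, 2.0) == (2.0, 0.0)         # on the first segment
assert locate(road, 5.5) == (4.0, 1.5)         # 1.5 units up the second
```

This illustrates the second benefit noted above: the road stays a single, unsegmented feature, and any number of attribute events can be attached to it purely by measure.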
Linear referencing can be applied to any network-based phenomenon. It has been applied for mapping accidents, traffic stops, or other incident locations; displaying traffic counts along streets; maintaining the location of fleet vehicles; and performing asset management functions such as the recording of pavement conditions or the location of street signs, bridges, exits, and many other traffic-related objects (Federal Highway Administration, 2001). There are a myriad of linear referencing systems (Fletcher et al., 1998; Scarponcini, 2001), methods (Noronha & Church, 2002; Nyerges, 1990), and data models (Curtin et al., 2001; Dueker & Butler, 2000; Koncz & Adams, 2002; Sutton & Wyman, 2000; Vonderohe et al., 1997) available for implementation.

Analysis with Network Models

The wide variety of applications and the concomitant set of network analysis techniques demand that variations in network models be accommodated. A large portion of network analysis is concerned with routing. This is a substantial research area by itself, and therefore is not thoroughly addressed here, but several special modeling cases are discussed. In the case of network flow problems (such as the flow along a river, through a pipeline, or along a highway), the network must be able to support the concepts of capacity and flow direction. The capacity is generally implemented as an attribute value associated with features. The concept of flow direction is more complex in that, although a direction for an edge in the network can be assigned with an attribute value, more frequently flow direction is a function of the location of—and topological connection to—sources and destinations. Although flow direction can be determined automatically for some network structures (trees are particularly suited for this), other networks (in particular road networks) will have indeterminate flow unless there are large numbers of sources and sinks. When flow direction is maintained, GISystems can be
programmed to solve problems such as tracing up- or down-stream, or to determine the maximal flow through the network.

While costs are frequently a function of distance for analytical purposes, impedance can be measured in many different ways, and network models must have the flexibility to support these measures. As an example, costs on a network may have a temporal component, such as the increased costs associated with rush hour traffic in a road network. When congestion occurs, a convex cost function may more accurately portray the changing levels of cost under varying conditions. Advances in network analysis are ongoing, and network models must evolve to meet new analytical needs.

FUTURE TRENDS

Recent advances in network modeling include the ability to generate multi-modal networks. These allow the modeling of comprehensive transportation networks that include, for example, road networks, light-rail and commuter rail networks, and the transfer stations that connect these modes. Increasingly, the ability to model turns or turn restrictions is becoming available. Networks are not always static, thus the ability to incorporate dynamic networks in GISystems is a challenge for researchers. Moreover, new applications such as sensor networks and mobile communications networks are requiring research and development into network modeling for GISystems. With increasing flexibility in network models, GISystems are beginning to move beyond solutions to simple routing problems, in order to tackle the more difficult problems of network design (Ahuja et al., 1993). Currently there are implementations of heuristics for problems such as the Traveling Salesman Problem, the Maximal Covering Location Problem, or the P-Median Problem. These combinatorially complex network location problems provide a challenge to GIScientists and will require the integration of both GISystems and optimization techniques (Curtin et al., 2005).

CONCLUSION

Network models are some of the earliest and most consistently
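The routing analyses discussed above rest on shortest-path computation over weighted edges, where the weight is whatever impedance the model supports: distance, travel time, or a congested rush-hour cost. A minimal Dijkstra sketch over a toy road network follows (plain Python; the node names and impedance values are purely illustrative):

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra's algorithm. `graph` maps node -> {neighbor: impedance}.
    Returns (total_impedance, path as a list of nodes)."""
    queue = [(0.0, source, [source])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# Impedance here is travel time; the direct edge A-D is slower than
# the two-hop route, so the router avoids it.
roads = {
    "A": {"B": 2.0, "D": 10.0},
    "B": {"A": 2.0, "C": 3.0},
    "C": {"B": 3.0, "D": 1.0},
    "D": {"C": 1.0, "A": 10.0},
}
cost, path = shortest_path(roads, "A", "D")
assert (cost, path) == (6.0, ["A", "B", "C", "D"])
```

Swapping the edge weights for time-of-day-dependent values is one simple way to express the temporal impedance component mentioned above without changing the algorithm itself.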
important data models in GISystems. Network modeling has a strong theoretical basis in the mathematical discipline of graph theory,
