
3D Graphics with OpenGL ES and M3G (Part 38)


CHAPTER 15 — THE M3G SCENE GRAPH

Excess data will always use up memory for no good reason. This applies to data such as texture coordinates and vertex colors as well—if you have no use for some piece of data, drop it before putting it into the M3G format. Next, let us figure out how to move the objects around in the World.

15.3 TRANSFORMING OBJECTS

Remember how in immediate mode rendering you had to pass in a modeling transformation for each rendering call? In the scene graph, all you have to do is to move the objects themselves. Let us move the meshes we created in the previous section:

    myMesh.setTranslation(0.0f, 0.0f, -20.0f);
    myMesh2.setTranslation(10.0f, 0.0f, -20.f);
    myMesh2.setOrientation(30.0f, 1.0f, 1.0f, 0.0f);

Node is derived from Transformable, which provides you with the functions for setting various transformation components as discussed in Section 13.3: translation T, rotation R, scale S, and an arbitrary 4 × 4 matrix M. These are combined into a single composite transformation in each node:

    C = T R S M                                                    (15.1)

For scene graph Node objects, there is an additional restriction that the bottom row of the matrix component must be (0 0 0 1)—in other words, the W component will retain its value of 1 in all scene graph node transformations. There is normally no need for projective transformations in this context, so supporting them would unnecessarily complicate M3G implementations.

Querying transformations

In addition to the getters for each of the transformation components, you can also query the composite transformation C directly. To do this, call Transformable.getCompositeTransform(Transform transform). Upon return, transform will contain the composite transformation matrix. This is usually faster than combining the individual components in Java code.

The node transformations are concatenated hierarchically within the scene graph.
If you have a Group with a Mesh inside it, the composite world-space transformation of the Mesh object is

    C_mesh-to-world = C_group C_mesh                               (15.2)

where C_group and C_mesh are the composite transformations of the group and mesh, respectively. Note that the transformation of World nodes is always ignored, as only observers outside a world would notice when the whole world moves.

Often, you will want to do some computation between two objects in the scene graph. For that, you need to know the transformation from one object to the other so that you can do your computations in a single coordinate system. To get the composite transformation from one Node to another, call the Node member function:

    boolean getTransformTo(Node target, Transform transform)

where target is the node you want to transform to, and transform is the resulting composite transformation. M3G will automatically find the shortest path between the two nodes, or return false if no path exists, i.e., the two nodes are not in the same scene graph. The only restriction is that all transformations along the path must be nonsingular, as inverse node transformations may be needed in order to compute the composite transformation. As an example, this will return the world-space transformation of myMesh:

    boolean pathFound = myMesh.getTransformTo(myWorld, myMeshToWorldTransform);

Reversing the nodes will give you the transformation from world space to your node:

    boolean pathFound = myWorld.getTransformTo(myMesh, myWorldToMeshTransform);

Note that this is mathematically equivalent to calling invert on myMeshToWorldTransform. Numeric precision issues may, however, cause the results to differ, and M3G may be able to compute the inverse faster if it knows to look for it in the first place.

15.3.1 CAMERA, LIGHT, AND VIEWING TRANSFORMATIONS

Concatenating the node transformations up to world space gives us the modeling transformation for each object.
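The composite and hierarchical transformations of Equations (15.1) and (15.2) can be illustrated with a toy numeric sketch. The class below is plain Java of our own, not the M3G API; the rotation and matrix components are left as identity, and matrices are stored row-major in a float[16]:

```java
// Toy sketch of C = T R S M (15.1) and C_mesh-to-world = C_group C_mesh (15.2).
// Plain Java illustration only; class and helper names are ours, not M3G's.
public class SceneMath {
    // Multiply two 4x4 row-major matrices.
    static float[] mul(float[] a, float[] b) {
        float[] c = new float[16];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    c[4 * i + j] += a[4 * i + k] * b[4 * k + j];
        return c;
    }

    static float[] identity() {
        return new float[] { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
    }

    static float[] translation(float x, float y, float z) {
        float[] t = identity();
        t[3] = x; t[7] = y; t[11] = z;
        return t;
    }

    static float[] scale(float s) {
        float[] m = identity();
        m[0] = m[5] = m[10] = s;
        return m;
    }

    // C = T * R * S * M, with R and M identity in this sketch.
    static float[] composite(float[] t, float[] s) {
        return mul(mul(mul(t, identity()), s), identity());
    }

    public static void main(String[] args) {
        float[] cGroup = composite(translation(10, 0, -20), scale(2));
        float[] cMesh  = composite(translation(0, 5, 0), identity());
        float[] meshToWorld = mul(cGroup, cMesh);   // Equation (15.2)
        // The group's scale of 2 doubles the mesh's local Y offset of 5,
        // while the group's own translation is unaffected by its scale:
        System.out.println(meshToWorld[7]);   // 10.0
        System.out.println(meshToWorld[11]);  // -20.0
    }
}
```

Note how the order of T and S in the composite matters: because T is leftmost in Equation (15.1), a node's own translation is never scaled by its own scale component, but a child's translation *is* scaled by its parent's.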
As discussed in Section 14.3.1, the viewing part of the modelview transformation is obtained from the Camera class. Moving to the scene graph world, the only difference from our treatment of the subject in Section 14.3.1 is that you no longer have to give the viewing transformation explicitly. Instead, you can place your camera—or as many cameras as you like—in the scene graph like any other nodes. They can be placed directly into the World, or inside Group objects. The inverse of the camera-to-world transformation is then automatically computed and concatenated with each modeling transformation when rendering the scene. Let us add some light and cameras to our world:

    Light sunLight = new Light();
    sunLight.setMode(Light.DIRECTIONAL);
    sunLight.setColor(0xFFEE88);
    sunLight.setIntensity(1.5f);
    sunLight.setOrientation(20.f, -1.f, 0.f, 1.f);
    myWorld.addChild(sunLight);

    // Note that these getters are only available in M3G 1.1.
    // The cast to float avoids truncating integer division.
    float aspectRatio = (float) g3d.getViewportWidth() / g3d.getViewportHeight();
    float fovXToY = 1.f / aspectRatio;

    Camera myWideCamera = new Camera();
    myWorld.addChild(myWideCamera);
    myWideCamera.setPerspective(60.f*fovXToY, aspectRatio, 1.f, 100.f);

    Camera myTeleCamera = new Camera();
    myWorld.addChild(myTeleCamera);
    myTeleCamera.setTranslation(-50.f, 20.f, -30.f);
    myTeleCamera.setOrientation(30.f, 0.f, -1.f, 0.f);
    myTeleCamera.postRotate(10.f, -1.f, 0.f, 0.f);
    myTeleCamera.setPerspective(20.f*fovXToY, aspectRatio, 1.f, 100.f);

    myWorld.setActiveCamera(myWideCamera);

Now, you can use setActiveCamera to switch between the two cameras. This saves you the trouble of having to move a single camera around the scene graph to switch between different predefined viewpoints.

Pitfall: Your camera must be a part of your World! Otherwise, M3G will be unable to compute the camera-to-world transformation, and will raise an exception.
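The snippet above scales the horizontal field of view linearly by 1/aspectRatio to get a vertical one. That linear scaling is only an approximation: the exact relation between horizontal and vertical FOV is tan(fovY/2) = tan(fovX/2)/aspect. A plain-Java comparison of the two (the class and method names here are ours, not part of M3G):

```java
// Compare the linear FOV approximation used above with the exact relation
// tan(fovY/2) = tan(fovX/2) / aspect. Angles are in degrees.
public class FovMath {
    // Linear approximation, as in the book's snippet: fovY ≈ fovX / aspect.
    static float fovYApprox(float fovXDeg, float aspect) {
        return fovXDeg / aspect;
    }

    // Exact conversion via the tangent relation.
    static float fovYExact(float fovXDeg, float aspect) {
        double halfX = Math.toRadians(fovXDeg) * 0.5;
        return (float) Math.toDegrees(2.0 * Math.atan(Math.tan(halfX) / aspect));
    }

    public static void main(String[] args) {
        // For a 60-degree horizontal FOV on a 4:3 viewport, the two differ
        // by almost two degrees; the gap grows with wider fields of view.
        System.out.println(fovYApprox(60.f, 4.f / 3.f)); // ≈ 45.0
        System.out.println(fovYExact(60.f, 4.f / 3.f));  // ≈ 46.8
    }
}
```

For the narrow FOVs typical of telephoto cameras the approximation is close to exact, so the book's simpler arithmetic is usually acceptable in practice.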
Pitfall: If you want an entire World garbage-collected, it will not happen as long as its camera and lights are referenced from Graphics3D (see Section 13.1.3). It is therefore not enough to do myWorld = null in your cleanup code; you also need to do g3d.setCamera(null, null) and g3d.resetLights().

15.3.2 NODE ALIGNMENT

In addition to setting the transformations explicitly, there is a semi-automatic mechanism for orienting nodes of which you can take advantage in some cases. Node alignment lets you, for example, force an object to always face some other object, or maintain an upright position in the world. Alignment can be applied to any Node and the entire subtree of its descendants with a single function call. Typically, you would apply alignment after all animations, just before rendering:

    myWorld.align(myWorld.getActiveCamera());  // apply node alignments
    g3d.render(myWorld);                       // draw the world

Let us leave the details of that align call for later, though. First, we will discuss how alignment works and what you can do with it.

The specification for node alignment is rather involved, because it attempts to ensure that all implementations work in the same way. The actual operation is much simpler. For both the Z and the Y axis of a node, you can specify that the axis should always point toward a specific point or direction in the scene. If you specify alignment for both of the axes, it is the Z axis that rules: it will always be exactly aligned, while the Y axis will only make its best effort thereafter. We will clarify this with a couple of examples in a moment.

Pitfall: Some early M3G implementations lack sufficient numeric range and precision to compute alignments accurately. You may not be able to rely on alignment working reliably across the entire range of M3G-enabled devices.
Setting up node alignment

To set up alignment for a Node, call:

    void setAlignment(Node zRef, int zTarget, Node yRef, int yTarget)

This looks complicated, but note that there are two identical sets of parameters, comprising a reference node and an alignment target for each axis. Looking at the Z parameters, the reference node zRef is what you want your node to use as a guide when aligning itself; zTarget is what in zRef you want your node to align to. The same goes for the equivalent Y parameters.

Valid values for zTarget and yTarget are NONE, ORIGIN, X_AXIS, Y_AXIS, and Z_AXIS. NONE, fairly obviously, disables alignment for that axis. ORIGIN makes the axis point toward the origin of the reference node. The three axis targets make the aligned axis point in the same direction as the specified axis of the reference node. Here are two examples:

    myMesh.setAlignment(null, Node.NONE, myWorld, Node.Y_AXIS);
    myMesh2.setAlignment(myWorld, Node.ORIGIN, null, Node.NONE);

Now, the Y axis of myMesh will always point in the same direction as the Y axis of myWorld, and the Z axis of myMesh2 will always point toward the origin of myWorld. Note that we are specifying no alignment for the Y axis of myMesh2—this means that the Y axis will point in whatever direction it happens to point after myMesh2 is rotated to align its Z axis. The M3G specification states that the alignment rotation will always start from a fixed reference orientation, without any rotation applied. Therefore, even though you may not know the exact orientation of your object after the alignment, you can still rely on the Y axis behaving nicely: given a target Z axis direction, you will always get a deterministic Y axis direction, and it will change smoothly rather than jump around randomly each frame.

Pitfall: Make sure that your alignment reference nodes are in the same scene graph as the nodes being aligned!
Otherwise, M3G will be unable to compute the alignment, and will throw an exception that may be hard to track down.

In addition to fixed nodes, you can also easily align objects based on the current camera, without giving an explicit node reference to it. To make a billboard that always faces the camera, you could apply this setting on a piece of flat geometry:

    myNode.setAlignment(null, Node.ORIGIN, null, Node.NONE);

Note that we left zRef and yRef null in this example. Then, return to our first code example on alignment and notice how we passed in the active camera of the world. The only argument to align is a Node to be used as the alignment reference in all cases where you have specified a null zRef or yRef. The most common use for this is passing in the currently active camera, as in our example. You could of course specify any Camera node as a reference to setAlignment—however, if you have multiple cameras in your scene graph, using null instead lets you switch between them, or even create new cameras, without having to reset the alignment for any camera-aligned nodes. It is therefore good practice to use null to mean "align to my current camera."

Alignment examples

Now, let us try a couple more examples to illustrate what you can do with alignment. These examples, again, assume that you are passing the active camera of your world to each align call. As an alternative to our billboard example above, you may want to align the billboard with the Z axis of the camera rather than aiming it at the camera origin:

    myNode.setAlignment(null, Node.Z_AXIS, null, Node.NONE);

This may be faster on some M3G implementations, but the result will also look slightly different—especially with a wide field of view—because the billboard will align to the orientation of the camera rather than its position. You can see this difference in Figure 15.1. Which alternative looks better depends on the desired effect of the billboard.
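To make the "Z exact, Y best effort" rule concrete, here is a plain-Java sketch of what ORIGIN alignment conceptually computes: the Z axis is aimed exactly at the target point, and the Y axis is then fitted as closely as possible to a reference up direction. Gram-Schmidt orthogonalization is one way an implementation might realize this; the class and helper names below are ours, not M3G's:

```java
// Conceptual sketch of ORIGIN alignment: aim +Z exactly at a target point,
// then make Y a best-effort match to a reference "up" via Gram-Schmidt.
// Plain Java illustration only; not the actual M3G implementation.
public class AlignSketch {
    static float[] normalize(float[] v) {
        float len = (float) Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new float[] { v[0]/len, v[1]/len, v[2]/len };
    }

    static float dot(float[] a, float[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // Returns { zAxis, yAxis }: z points exactly at the target (given in the
    // node's local frame), y is the component of "up" perpendicular to z.
    static float[][] align(float[] targetPos, float[] up) {
        float[] z = normalize(targetPos);
        float d = dot(up, z);
        float[] y = normalize(new float[] {
            up[0] - d*z[0], up[1] - d*z[1], up[2] - d*z[2] });
        return new float[][] { z, y };
    }

    public static void main(String[] args) {
        float[][] axes = align(new float[] {3, 0, 4}, new float[] {0, 1, 0});
        // Z is the unit vector toward the target; Y stays exactly "up" here
        // because the up vector is already perpendicular to Z.
        System.out.println(axes[0][0] + " " + axes[0][2]); // 0.6 0.8
        System.out.println(axes[1][1]);                    // 1.0
    }
}
```

The Y result is "best effort" in exactly the sense the text describes: when the up reference is not perpendicular to the target direction, its parallel component is discarded, so Z wins and Y gets as close to up as the geometry allows.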
Of course, we can also make a billboard and align its Y axis. If you want to emulate Sprite3D, you can align the billboard with both the Z and Y axes of the camera:

    myNode.setAlignment(null, Node.Z_AXIS, null, Node.Y_AXIS);

Note that unlike Sprite3D, this still lets you use any geometry you want, as well as apply multi-texturing and multi-pass rendering. Of course, if you just want to draw a sprite, the dedicated sprite class is optimized for that and may give you better performance, but on many practical implementations you are unlikely to notice a difference.

To simulate complex objects such as trees, as in Figure 15.1, you may want to have the impostor geometry and textures oriented vertically with respect to the world, while turning about the vertical axis to face the camera—in other words, have the orientation of your billboard constrained by a fixed axis. Since Y-axis alignment is subordinate to Z-axis alignment in M3G, we must use the Z axis as the constraint. Assuming that the Y axis represents the vertical direction, or height, in your world, you would align your impostor trees like this:

    myNode.setAlignment(myWorld, Node.Y_AXIS, null, Node.ORIGIN);

Of course, you must also model your geometry so that the Z axis is the vertical axis of your impostor geometry.

Figure 15.1: Two variants of billboard trees. On the left, the trees are aligned with the Z axis of the camera; on the right, they are aligned to face the camera origin. In both cases, the vertical axis of each tree is constrained to be perpendicular to the ground plane.

Performance tip: Alignment comes at a price, as it involves quite a bit of computation. It may therefore not be the best idea to use our example above to make lots of trees using aligning billboards—note that you cannot just group them and align the group as one object, because you want each tree to stay at a fixed location, so you would have to align them individually.
You will likely get better performance if you limit that technique to a few close-by or medium-distance trees, and implement the faraway ones using static impostor objects representing larger portions of the forest.

Targeting the camera and lights

Our final example is also a common one: a target camera. Often, you will want your camera or lights to track an object. To do this, let us aim the Z axis at the object myTarget, and align the Y axis with the world Y axis so that the camera stays upright while tracking:

    myCamera.setAlignment(myTarget, Node.ORIGIN, myWorld, Node.Y_AXIS);
    myCamera.setScale(-1.f, 1.f, -1.f);

What is it with that setScale line? Remember that the camera in M3G looks in the direction of the negative Z axis. Alignment aims the positive Z axis at myTarget, so the camera will by default look away from the target. We need to rotate the camera by 180° after the alignment to aim it in the right direction. We could wrap the camera in an extra Group node and align that instead, but reversing the X and Z axes of the camera itself achieves the same result for free. The scale component of the node transformation does not affect alignment computations; if you refer to Equation (15.1), you will see that the rotation component, which is replaced by the alignment rotation, resides on the left side of the scale and matrix components. In practice, this means that the scale and matrix components of the node itself are ignored when computing the alignment rotation.

What we said about cameras above applies equally to lights, except that the Y axis does not matter for them: all M3G lights are symmetric about the Z axis, so you only need to align that. You can therefore save some processing time by specifying null and Node.NONE for yRef and yTarget, respectively.
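The claim that flipping the X and Z axes is the same as a 180-degree rotation about Y is easy to verify numerically. This small sketch (our own helper, not M3G code) compares a Y-axis rotation matrix against the scale matrix diag(-1, 1, -1):

```java
// Numeric check that scaling by (-1, 1, -1) equals a 180-degree rotation
// about the Y axis. 3x3 row-major matrices; plain Java, not M3G code.
public class FlipCheck {
    // Standard rotation about Y by the given angle in degrees.
    static float[] rotY(double deg) {
        double r = Math.toRadians(deg);
        float c = (float) Math.cos(r), s = (float) Math.sin(r);
        return new float[] { c, 0, s,   0, 1, 0,   -s, 0, c };
    }

    public static void main(String[] args) {
        float[] rot = rotY(180.0);
        float[] scale = { -1, 0, 0,   0, 1, 0,   0, 0, -1 };
        boolean same = true;
        for (int i = 0; i < 9; i++)
            if (Math.abs(rot[i] - scale[i]) > 1e-6f) same = false;
        System.out.println(same); // true
    }
}
```

This is also why the trick costs nothing at run time: the 180-degree turn is baked into the node's scale component, which alignment ignores, so no extra node or per-frame rotation is needed.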
15.4 LAYERING AND MULTI-PASS EFFECTS

There is a default sorting rule in the M3G scene graph: all blended primitives, i.e., primitives using a blending mode of anything other than REPLACE in CompositingMode, are drawn after all non-blended primitives. This is sufficient to cover many cases where semitransparent geometry and opaque geometry are used—such as our example on rendering a separate specular lighting pass in Section 14.2.5, which is easily enough wrapped into a Mesh object:

    IndexBuffer[] primitives = { myTriangles, myTriangles };
    Appearance[] passes = { diffusePass, specularPass };
    Mesh mesh = new Mesh(myVertexBuffer, primitives, passes);

This works, regardless of the order in which you specify your rendering passes, because diffusePass used the default REPLACE blending mode, whereas specularPass used ALPHA_ADD (refer back to page 336 for the details). However, sometimes you will want to override this default sorting, or force sorting based on some other criteria. This is where rendering layers come into the picture.

Pitfall: Other than the default sorting rule, the M3G specification does not require implementations to sort semitransparent primitives in any particular way—such as back-to-front. This was intentionally left out, as sorting can be an expensive operation and it still does not quite solve all of the problems associated with rendering semitransparent geometry. In the end, it is your responsibility to make sure that your blended triangles get drawn in the correct order. However, if transparency is rare enough that you do not routinely expect to view transparent objects through other transparent objects, the default rule will be quite sufficient.

Rendering layers

When discussing Appearance in Section 14.2, we already mentioned the subject of rendering layers, but dismissed it as something that is only useful in scene graphs. Each Appearance object has a rendering layer index that you can set with setLayer(int layer).
Valid values for layer range from -63 to 63, with the default being 0. The layer index overrides the default sorting rule when determining the rendering order for submeshes and sprites. The default rule is still obeyed within each layer, but the layers are sorted in ascending order: the layer with the smallest index gets drawn first. For example, we could use a sprite as a waypoint or some other marker overlaid on top of the 3D scene:

    Image2D markerImage = ...;   // (image loading omitted)

    CompositingMode alphaOverlay = new CompositingMode();
    alphaOverlay.setBlending(CompositingMode.ALPHA);
    alphaOverlay.setDepthTestEnable(false);
    alphaOverlay.setDepthWriteEnable(false);

    Appearance overlay = new Appearance();
    overlay.setCompositingMode(alphaOverlay);
    overlay.setLayer(63);

    Sprite3D myMarker = new Sprite3D(false, markerImage, overlay);

Setting the rendering layer to 63, the maximum value, ensures that our marker is drawn last, and not overwritten by anything in the scene.

You can use the layer index to separate things into discrete passes or to do a coarse sorting. For example, if you know that some semitransparent geometry will always be close to the viewer, put it in one of the higher (that is, larger-numbered) layers to have it drawn on top of anything behind it. If you have two-sided transparent geometry, use an Appearance with a lower layer index on the "inside" polygons to make them correctly visible through the "outside" polygons drawn in front. Any lens flares and other light-blooming effects should be in the highest layers so that they are drawn on top of the entire scene.

Performance tip: If you have a sky cube, draw it last to save fill rate. If you have geometry that you know will always be close to the viewer, draw that first to occlude larger parts of the scene early on. This way, depth buffering can drop the hidden pixels before they are drawn at all, saving the work of shading and texturing them.
Translucent objects will naturally need different sorting for blending to work.

Multi-pass Meshes

In multi-pass rendering, you can just put the same IndexBuffer into your Mesh multiple times, with a different Appearance object for each rendering pass, and use the layer index to indicate the order in which to render the passes. This way, you can easily do shading beyond simple light mapping without having to explicitly draw your objects multiple times.

Performance tip: When rendering multiple passes of the same geometry, make the first pass opaque and disable depth writes for all subsequent passes. Depth testing is still needed, but you save the cost of rewriting the existing values into the depth buffer. Also note that the layer sorting works across all objects in the scene, so the first opaque pass will save on fill rate for all subsequent passes of any occluded geometry.

Once the layer indices are set, M3G handles the sorting of multiple rendering passes automatically. The specified depth test function also guarantees that multiple passes of the same geometry get drawn at the same depth, allowing you to blend arbitrarily many layers on top of a single opaque layer.

15.5 PICKING

Picking is one more thing you can only do with the scene graph. Put briefly, picking lets you fire a ray into a Group in the scene graph and see what you hit.

Pitfall: The performance of picking varies widely from one implementation to another. As a rule of thumb, consider picking a once-in-a-while utility function rather than a tool that your physics engine can make extensive use of.

Picking through the camera

You can use picking in either of two ways: picking through a camera plane, or picking from a 3D point. To use the first alternative, call the Group member function:

    boolean pick(int scope, float x, float y, Camera camera, RayIntersection ri)
The first parameter, scope, is tested for a match with the scope mask of each node (Section 15.6.2) prior to performing the actual picking test. The x and y parameters specify a point on the image plane of camera that the ray is fired from. The origin is in the upper left corner of the viewport, with (1, 1) being in the lower right corner, so you can fire a ray through the center of the camera image by specifying (0.5, 0.5). The direction of the ray is always away from the eye, i.e., the origin of the Camera node. Note that you can pick from any camera in the scene, not just the currently active camera. By using the active camera, though, it is easy to pick the object in the center of your current view:

    RayIntersection hitInfo = new RayIntersection();
    if (myWorld.pick(-1, 0.5f, 0.5f, myWorld.getActiveCamera(), hitInfo)) {
        Node objectHit = hitInfo.getIntersected();
        float distance = hitInfo.getDistance();
    }

Note that the ray is fired from the near clipping plane—you cannot hit objects closer to the camera than that. The unit of distance is equal to the distance between the near and far clipping planes, as measured along the picking ray, so that distance 0 is at the near clipping plane and 1 at the far clipping plane. This lets you easily determine whether the hit object is actually visible when rendered. The actual origin and direction of the ray, in the coordinates of the world or group node being picked, can also be queried from the RayIntersection object.

Performance tip: If you really want to fire the ray from the origin of your camera, you can use getTransformTo to get the transformation from your camera to world space: myCamera.getTransformTo(myWorld, myMatrix). Now, the last column of myMatrix gives you the origin of the ray, and the third column is the (positive) Z axis of the camera, which you can use as the ray direction. You can then pass these to the other picking variant, which we introduce below.
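The distance convention just described (0 at the near plane, 1 at the far plane, measured along the ray) can be converted to world units with a one-line helper. This sketch and its names are ours, not part of the M3G API:

```java
// Convert RayIntersection.getDistance() from its near/far-relative unit into
// a world-space distance from the eye. nearAlongRay and farAlongRay are the
// distances at which the ray crosses the near and far clipping planes.
// Plain-Java sketch; not part of the M3G API.
public class PickDistance {
    static float toWorldDistance(float t, float nearAlongRay, float farAlongRay) {
        // t == 0 at the near plane, t == 1 at the far plane.
        return nearAlongRay + t * (farAlongRay - nearAlongRay);
    }

    public static void main(String[] args) {
        // With near=1 and far=100 (as in the camera setup earlier), a hit at
        // t = 0.5 lies halfway between the clipping planes along the ray.
        System.out.println(toWorldDistance(0.5f, 1.f, 100.f)); // 50.5
    }
}
```

The same convention also gives a quick visibility test: a returned distance greater than 1 means the intersection lies beyond the far clipping plane and would be clipped away when rendered.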
Picking with an explicit ray

The other pick variant lets you specify the picking ray explicitly:

    boolean pick(int scope, float ox, float oy, float oz,
                 float dx, float dy, float dz, RayIntersection ri)

The point (ox, oy, oz) is the origin of the picking ray and (dx, dy, dz) is its direction. Both are expressed in the local coordinate system of the node that pick is invoked on. Obviously, you do not need a camera to use this variant:

    RayIntersection hitInfo = new RayIntersection();
    if (myWorld.pick(-1, 0.f, 0.f, 0.f, 0.f, 0.f, 1000.f, hitInfo)) {
        Node objectHit = hitInfo.getIntersected();
    }

The example fires a picking ray from the origin of myWorld along the positive Z axis. Here, the unit of distance is the length of the given direction vector. In this case, we gave a non-unit direction vector, so our distance would be scaled accordingly; if the world coordinates are in meters, for example, the picking distance returned would be in kilometers.

Picking traversal

In either case, the picking ray will only be tested against objects inside the scene subtree spanned by the group you invoked pick for. The return value tells you if anything was hit in the first place. If it is true, details about the closest object intersected are returned in the RayIntersection object. Table 15.1 lists the functions you can use for retrieving data about the intersection.

Like rendering, picking is controlled hierarchically via the setPickingEnable function. If you disable picking on a group node, picking from higher up in the scene graph, such as from your root World object, will ignore everything inside that group. However, if you fire your picking ray from a child of a disabled group, picking traversal will proceed normally to all enabled nodes inside that child group.

Performance tip: Always choose the smallest possible group of objects for picking. For example, if you only want to test against the terrain, create a separate group to hold just your terrain, and fire your picking ray into that.
This saves the picking traversal from visiting all the non-terrain objects in your scene.
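With the explicit-ray pick variant described above, the returned distance is expressed in multiples of the direction vector's length. A small sketch (our own helper, not the M3G API) of converting such a distance to world units:

```java
// Convert the distance returned by the explicit-ray pick variant into world
// units: the unit is the length of the supplied direction vector.
// Plain-Java sketch; not part of the M3G API.
public class RayDistance {
    static float toWorldUnits(float t, float dx, float dy, float dz) {
        return t * (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    public static void main(String[] args) {
        // The earlier example fired along (0, 0, 1000): a hit at t = 0.25 is
        // 250 world units (e.g., meters) from the ray origin.
        System.out.println(toWorldUnits(0.25f, 0.f, 0.f, 1000.f)); // 250.0
    }
}
```

Passing a unit-length direction vector in the first place avoids this bookkeeping entirely, since the returned distance is then already in world units.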

Posted: 03/07/2014, 11:20