
Expert One-on-One J2EE Design and Development, Part 10




DOCUMENT INFORMATION

Pages: 67
Size: 3.03 MB

Contents

o Can we cope with the increased implementation complexity required to support caching? This will be mitigated if we use a good, generic cache implementation, but we must be aware that read-write caching introduces significant threading issues.
o Is the volume of data we need to cache manageable? Clearly, if the data set we need to cache contains millions of entities, and we can't predict which ones users will want, a cache will just waste memory. Databases are very good at plucking small numbers of records from a large range, and our cache isn't likely to do a better job.
o Will our cache work in a cluster? This usually isn't an issue for reference data: it's not a problem if each server has its own copy of read-only data, but maintaining the integrity of cached read-write data across a cluster is hard. If replication between caches looks necessary, it's pretty obvious that we shouldn't be implementing such infrastructure as part of our application, but looking for support in our application server or a third-party product.
o Can the cache reasonably satisfy the kind of queries clients will make against the data? Otherwise we might find ourselves trying to reinvent a database. In some situations, the need for querying might be satisfied more easily by an XML document than by cached Java objects.
o Are we sure that our application server cannot meet our caching requirements? For example, if we know that it offers an efficient entity bean cache, caching data on the client may be unnecessary. One decisive issue here will be how far (in terms of network distance) the client is from the EJB tier.

The Pareto Principle (the 80/20 rule) is applicable to caching. Most of the performance gain can often be achieved with a small proportion of the effort involved in tackling the more difficult caching issues.

Data caching can radically improve the performance of J2EE applications. However, caching can add much complexity and is a common cause of bugs. The difficulty of implementing different caching solutions varies greatly. Jump at any quick wins, such as caching read-only data: this adds minimal complexity and can produce a good performance improvement. Think much more carefully about the alternatives when caching is a harder problem - for example, when it concerns read-write data. Don't rush to implement caching on the assumption that it will be required; base caching policy on performance analysis.

A good application design, with a clean relationship between architectural tiers, will usually make it easy to add any caching that proves necessary. In particular, interface-based design facilitates caching: we can easily replace any interface's implementation with a caching implementation, so long as business requirements are still satisfied. We'll look at an example of a simple cache shortly.
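As a first, minimal sketch of that interface-swap idea - using illustrative names rather than classes from the sample application - a caching implementation of a read-only reference-data interface can simply wrap the real one:

    import java.util.List;

    // Hypothetical interface for read-only reference data (not from the sample application).
    interface GenreDao {
        List getGenres();
    }

    // Caching decorator: because callers depend only on the interface, this class
    // can be substituted for the real DAO purely through configuration.
    class CachingGenreDao implements GenreDao {

        private final GenreDao realDao;
        private List cachedGenres;

        CachingGenreDao(GenreDao realDao) {
            this.realDao = realDao;
        }

        public synchronized List getGenres() {
            if (cachedGenres == null) {
                // Load once from the real implementation (for example, a JDBC DAO).
                // Safe only because this data is read-only for the life of the application.
                cachedGenres = realDao.getGenres();
            }
            return cachedGenres;
        }
    }

Because the data never changes, the only concurrency concern is the lazy load itself, which the synchronized method handles.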
Where to Cache

As using J2EE naturally produces a layered architecture, there are multiple locations where caching may occur. Some of these types of caching are implemented by the J2EE server or underlying database, and are accessible to the developer via configuration, not code. Other forms of caching must be implemented by developers, and can absorb a large part of total development effort.

Let's look at choices for cache locations, beginning from the backend: [...]

Generally, the closer to the client we can cache, the bigger the performance improvement, especially in distributed applications. The flip side is that the closer to the client we cache, the narrower the range of scenarios that benefit from the cache. For example, if we cache the whole of an application's dynamically generated pages, response time on these pages will be extremely fast (of course, this particular optimization only works for pages that don't contain user-specific information). However, this is a "dumb" form of caching: the cache may have an obvious key for the data (probably the requested URL), but it can't understand the data it is storing, because it is mixed with presentation markup. Such a cache would be of no use to a Swing client, even if the data in the varying fragments of the cached pages were relevant to it.

J2EE standard infrastructure is really geared only to support the caching of data in entity EJBs. This option isn't available unless we choose to use entity EJBs (and there are many reasons why we might not). It's also of limited value in distributed applications, as they face as much of a problem in moving data from the EJB container to remote clients as in moving data from the database to the EJB container. Thus we often need to implement our own caching solution, or resort to a third-party caching solution.

I recommend the following guidelines for caching:

o Avoid caching unless it involves reference data (in which case it's simple to implement) or unless performance clearly requires it. In general, distributed applications are much more likely to need to implement data caching than collocated applications.
o As read-write caches involve complex concurrency issues, use third-party libraries (discussed below) to conceal the complexity of the necessary synchronization. Use the simplest approach to ensuring integrity under concurrent access that delivers satisfactory performance (a minimal sketch follows this list).
o Consider the implications of multiple caches working together. Would the combination result in users seeing data that is staler than any one of the caches would tolerate? Or does one cache eliminate the need for another?
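The following is a minimal sketch of that "simplest approach" for a small read-write cache, assuming the data volume is modest and that values up to a fixed age are acceptable to users; the class and interface names are illustrative, not taken from the sample application or from any product discussed below:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    // Very simple timeout-based cache: a synchronized map preserves integrity under
    // concurrent access, and entries older than timeoutMillis are reloaded on demand.
    class SimpleTimeoutCache {

        // Callback used to load a value when the cache has no fresh entry for a key.
        interface CacheLoader {
            Object load(Object key);
        }

        private static class Entry {
            final Object value;
            final long loadedAt;

            Entry(Object value) {
                this.value = value;
                this.loadedAt = System.currentTimeMillis();
            }
        }

        private final Map entries = Collections.synchronizedMap(new HashMap());
        private final long timeoutMillis;

        SimpleTimeoutCache(long timeoutMillis) {
            this.timeoutMillis = timeoutMillis;
        }

        Object get(Object key, CacheLoader loader) {
            Entry e = (Entry) entries.get(key);
            if (e == null || System.currentTimeMillis() - e.loadedAt > timeoutMillis) {
                // Two threads may race and load the same value twice; for this kind of
                // data that is harmless, as discussed later in this chapter.
                e = new Entry(loader.load(key));
                entries.put(key, e);
            }
            return e.value;
        }
    }

This is deliberately naive - stale entries are only refreshed when they are requested - but it shows the basic trade: a little staleness in exchange for far fewer trips to the database.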
Third-party Caching Products for Use in J2EE Applications

Let's look at some third-party commercial caching products that can be used in J2EE applications. The main reasons we might spend money on a commercial solution are to achieve reliable replicated caching functionality, and to avoid the need to implement and maintain complex caching functionality in-house.

Coherence, from Tangosol (http://www.tangosol.com/products-clustering.jsp), is a replicated caching solution, which claims even to support clusters that include geographically dispersed servers. Coherence integrates with most leading application servers, including JBoss. Coherence caches are basically alternatives to standard Java map implementations, such as java.util.HashMap, so using them merely requires Coherence-specific implementations of Java core interfaces.

SpiritCache, from SpiritSoft (http://www.spiritsoft.net/products/jmsjcache/overview.html), is also a replicated caching solution, and claims to provide a "universal caching framework for the Java platform". The SpiritCache API is based on the proposed JCache standard API (JSR-107: http://jcp.org/jsr/detail/107.jsp). JCache, proposed by Oracle, defines a standard API for caching and retrieving objects, including an event-based system allowing application code to register for notification of cache events.

Commercial caching products are likely to prove a very good investment for applications with sophisticated caching requirements, such as the need for caching across a cluster of servers. Developing and maintaining complex caching solutions in-house can prove very expensive. However, even if we use third-party products, running a clustered cache will significantly complicate application deployment, as the caching product - in addition to the J2EE application server - will need to be configured appropriately for our clustered environment.

Code Optimization

Since design largely determines performance, code optimization is seldom worth the effort in J2EE applications unless the code is particularly badly written or the optimization is targeted at known problem areas. However, all professional developers should be familiar with performance issues at the code level, to avoid making basic errors. For discussion of Java performance in general, I recommend Java Performance Tuning by Jack Shirazi from O'Reilly (ISBN: 0-596-00015-4) and Java 2 Performance and Idiom Guide from Prentice Hall (ISBN: 0-13-014260-3). There are also many good online resources on performance tuning. Shirazi maintains a performance tuning web site (http://www.javaperformancetuning.com/) that contains an exhaustive directory of code tuning tips from many sources.

Avoid code optimizations that reduce maintainability unless there is an overriding performance imperative. Such "optimizations" are not just a one-off effort, but are likely to prove an ongoing cost and a cause of bugs.

The higher-level the coding issue, the bigger the potential performance gain from code optimization. Thus there is often potential to achieve good results through techniques such as reordering the steps of an algorithm so that expensive tasks are executed only if absolutely essential.
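For example - a hypothetical sketch, not code from the sample application - simply reordering two tests so that the cheap one short-circuits the expensive one removes most of the cost on the common path:

    // Hypothetical example: "User" and queryOrderHistory() stand in for any cheap
    // in-memory test and any expensive operation (such as a database query).
    class EligibilityCheck {

        // Before: the expensive database check runs on every call.
        boolean isEligibleOriginal(User user) {
            boolean hasGoodHistory = queryOrderHistory(user);  // expensive
            return hasGoodHistory && user.isActive();          // cheap
        }

        // After: reordering lets the cheap test short-circuit the expensive one,
        // so the database is consulted only for active users.
        boolean isEligibleReordered(User user) {
            return user.isActive() && queryOrderHistory(user);
        }

        private boolean queryOrderHistory(User user) {
            // Placeholder for an expensive operation, for example a database query.
            return true;
        }
    }

    // Minimal stub so the sketch is self-contained.
    class User {
        private boolean active;
        boolean isActive() { return active; }
    }

Changes of this kind cost nothing in readability, unlike low-level micro-optimizations.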
As with design, an ounce of prevention is worth a pound of cure. While obsession with performance is counter-productive, good programmers don't write grossly inefficient code that will later need optimization. Sometimes, however, it does make sense to try a simple algorithm first, and change the implementation to use a faster but more complex algorithm only if it proves necessary. Really low-level techniques such as loop unrolling are unlikely to bring any benefit to J2EE systems.

Any optimization should be targeted, and based on the results of profiling. When looking at profiler output, concentrate on the slowest five methods; effort directed elsewhere will probably be wasted.

The following table lists some potential code optimizations (worthwhile and counter-productive), to illustrate some of the tradeoffs between performance and maintainability to be considered: [...]

As an example of this, consider logging in our sample application. The following seemingly innocent statement in our TicketController web controller, performed only once, accounts for a surprisingly high 5% of total execution time if a user requests information about a reservation already held in their session:

    logger.fine("Reservation request is [" + reservationRequest + "]");

The problem is not the logging statement itself, but the cost of performing a string concatenation (which HotSpot optimizes to a StringBuffer operation) and of invoking the toString() method on the ReservationRequest object, which performs several further string operations. Adding a check as to whether the log message will ever be displayed, so that we avoid creating it if it won't be, will all but eliminate this cost in production, as any good logging package provides highly efficient querying of the log configuration:

    if (logger.isLoggable(Level.FINE))
        logger.fine("Reservation request is [" + reservationRequest + "]");

Of course, a 5% performance saving is no big deal in most cases, but such careless use of logging can be much more costly in frequently-invoked methods. Such conditional logging is essential in heavily used code.

Generating log output usually has a minor impact on performance. However, building log messages unnecessarily, especially if it involves unnecessary toString() invocations, can be surprisingly expensive.

Two particularly tricky issues are synchronization and reflection. These are potentially important because they sit midway between design and implementation. Let's take a closer look at each in turn.

Correct use of synchronization is an issue of both design and coding. Excessive synchronization throttles performance and has the potential to deadlock; insufficient synchronization can cause state corruption. Synchronization issues often arise when implementing caching. The essential reference on Java threading is Concurrent Programming in Java: Design Principles and Patterns from Addison-Wesley (ISBN: 0-201-31009-0). I strongly recommend referring to this book when implementing any complex multi-threaded code. However, the following tips may be useful:

o Don't assume that synchronization will always prove disastrous for performance. Base decisions empirically. Especially if operations executed under synchronization execute quickly, synchronization may ensure data integrity with minimal impact on performance. We'll look at a practical example of the issues relating to synchronization later in this chapter.
o Use automatic (local) variables instead of instance variables where possible, so that synchronization is not necessary (this advice is particularly relevant to web-tier controllers).
o Use the least synchronization consistent with preserving state integrity.
o Synchronize the smallest possible sections of code.
o Remember that object references, like ints (but not longs and doubles), are atomic - read or written in a single operation - so their state cannot be corrupted. Hence a race condition in which two threads initialize the same object in succession (as when putting an object into a cache) may do no harm, so long as it's not an error for initialization to occur more than once, and may be acceptable in pursuit of reduced synchronization.
o Use lock splitting to minimize the performance impact of synchronization. Lock splitting is a technique that increases the granularity of synchronization locks, so that each synchronized block locks out only threads interested in the particular object being updated (a sketch appears after these tips). If possible, use a standard package such as Doug Lea's util.concurrent to avoid the need to implement well-known synchronization techniques such as lock splitting yourself.

Remember that using EJB to take care of concurrency issues isn't the only alternative to writing your own low-level multi-threaded code: util.concurrent is an open source package that can be used anywhere in a Java application.
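The following is a minimal sketch of lock splitting; the class and its fields are illustrative, not taken from the sample application or from util.concurrent:

    // Lock splitting: rather than synchronizing every method on "this" (one lock for
    // the whole object), guard independent pieces of state with separate lock objects,
    // so that threads working on one piece do not block threads working on the other.
    class CacheStatistics {

        private final Object hitLock = new Object();
        private final Object missLock = new Object();

        private int hits;
        private int misses;

        void recordHit() {
            synchronized (hitLock) {
                hits++;
            }
        }

        void recordMiss() {
            synchronized (missLock) {
                misses++;
            }
        }

        int getHits() {
            synchronized (hitLock) {
                return hits;
            }
        }

        int getMisses() {
            synchronized (missLock) {
                return misses;
            }
        }
    }

With a single lock on this, a thread recording a hit would also block threads recording misses; with split locks, only threads touching the same counter ever contend.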
Reflection has a reputation for being slow. Reflection is central to much J2EE functionality and is a powerful tool in writing generic Java code, so it's worth taking a close look at the performance issues involved. Doing so reveals that most of the fear surrounding the performance of reflection is unwarranted. To illustrate this, I ran a simple test to time four basic reflection operations:

o Loading a class by name with the Class.forName(String) method. The cost of invoking this method depends on whether the requested class has already been loaded. Any operation - using reflection or not - will be much slower if it requires a class to be loaded for the first time.
o Instantiating a loaded class by invoking the Class.newInstance() method, using the class's no-argument constructor.
o Introspection: finding a class's methods using Class.getMethods().
o Method invocation using Method.invoke(), once a reference to the method has been cached.

The source code for the test can be found in the sample application download, under the path /framework/test/reflection/Tests.java; an illustrative timing harness along the same lines is sketched at the end of this discussion. The following method was invoked via reflection: [...]

The most important results, in running these tests concurrently on a 1 GHz Pentium III under JDK 1.3.1_02, were:

o 10,000 invocations of this method via Method.invoke() took 480ms.
o 10,000 invocations of this method directly took 301ms (less than twice as fast).
o 10,000 creations of an object with two superclasses and a fairly large amount of instance data took 21,371ms.
o 10,000 creations of objects of the same class using the new operator took 21,280ms.

This means that whether reflection or the new operator is used has no effect on the cost of creating a large object.

My conclusions, from this and tests I have run in the past, and from experience developing real applications, are that: [...]
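The following is an illustrative timing harness in the spirit of the test described above. It is not the Tests.java class from the sample application download, and the method it invokes is an arbitrary stand-in; absolute timings will vary widely with JVM version and hardware, which is exactly why such measurements should be repeated in your own environment:

    import java.lang.reflect.Method;

    // Compares direct invocation with cached-Method reflective invocation.
    public class ReflectionTiming {

        // Arbitrary method to invoke; any cheap instance method would do.
        public String target(String s) {
            return s.toLowerCase();
        }

        public static void main(String[] args) throws Exception {
            ReflectionTiming obj = new ReflectionTiming();
            // Look up the Method once and cache it, as in the test described above.
            Method m = ReflectionTiming.class.getMethod("target", new Class[] { String.class });

            int runs = 10000;

            long start = System.currentTimeMillis();
            for (int i = 0; i < runs; i++) {
                obj.target("Hello");
            }
            long direct = System.currentTimeMillis() - start;

            start = System.currentTimeMillis();
            for (int i = 0; i < runs; i++) {
                m.invoke(obj, new Object[] { "Hello" });
            }
            long reflective = System.currentTimeMillis() - start;

            System.out.println("Direct: " + direct + "ms, reflective: " + reflective + "ms");
        }
    }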
Since the web-tier code in com.wrox.expertj2ee.ticket.web.TicketController is coded to use the com.wrox.expertj2ee.ticket.command.AvailabilityCheck interface to retrieve availability information, rather than a concrete implementation, we can easily substitute a different JavaBean implementation to implement caching. Interface-driven design is an area in which good design practice leads to maximum freedom...

[...]

    PerformanceWithAvailabilityImpl pai = new PerformanceWithAvailabilityImpl(p, avail);
    for (int i = 0; i < p.getPriceBands().size(); i++) {
        PriceBand pb = (PriceBand) p.getPriceBands().get(i);
        avail = boxOffice.getFreeSeatCount(p.getId(), pb.getId());
        PriceBandWithAvailability pba = new PriceBandWithAvailabilityImpl(pb, avail);
        pai.addPriceBand(pba);
    }
    return pai;
}

We begin by trying the simplest possible approach: caching performance...

[...]

Since using a synchronized hash table guarantees data integrity...

[...]

sent across the network, and the receiver reassembles object parameters. Marshaling and unmarshaling have an overhead over and above the work of serialization and deserialization and the time taken to communicate the bytes across the network. The overhead depends on the protocol being used, which may be IIOP or an optimized proprietary protocol such as WebLogic's T3 or Orion's ORMI. J2EE 1.3 application servers...

[...]

It eliminates the reflection overhead of the standard serialization process (which may not, however, be particularly great), and may allow us to use a more efficient representation as fields are persisted. Many standard library classes implement readObject() and writeObject(), including java.util.Date, java.util.HashMap, java.util.LinkedList, and most AWT and Swing components. In the second technique,...

[...]

applications are concerned, this means deploying the web tier and EJB tier in the same J2EE server. Most J2EE servers detect collocation and can use local calls in place of remote calls (in most servers, this optimization is enabled by default). This optimization avoids the overhead of serialization and remote invocation protocols. Both caller and receiver will use the same copy of an object, meaning that...

[...]

reflection. Reflective operations are generally faster - and some dramatically faster - in JDK 1.3.1 and JDK 1.4 than in JDK 1.3.0 and earlier JDKs. Sun have realized the importance of reflection, and have put much effort into improving the performance of reflection with each new JVM. The assumption among many Java developers that "reflection is slow" is misguided, and is becoming increasingly anachronistic with maturing...

[...]

persistent class extends an abstract base class, and a new field is added to it, we would need to modify our externalizable implementation. The standard handling of serialization enables us to ignore all these issues. Remember that adding complexity to an application's design or implementation to achieve optimization is not a one-off operation. Complexity is forever, and may cause ongoing costs in maintenance...

[...]

of J2EE applications. Another possibility is moving data in generic Java objects such as java.util.HashMap or javax.sql.RowSet. In this case it's important to consider the cost of serializing and deserializing the objects and the size of their serialized form. In the case of java.util.HashMap, which implements writeObject() and...

[...]

object written to the stream is assigned a handle, meaning that subsequent references to the object can be represented in the output by the handle, not the object data. When the objects are instantiated on the client, a faithful copy of the references will be constructed. This may produce a significant benefit in serialization and deserialization time and network bandwidth in the case of object graphs with...
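Since the size and cost of an object's serialized form are hard to guess, it is worth measuring them before committing to a data transfer strategy. The following is a minimal, illustrative sketch (not from the sample application) that measures the serialized size of a candidate transfer object, here a HashMap of strings:

    import java.io.ByteArrayOutputStream;
    import java.io.ObjectOutputStream;
    import java.util.HashMap;

    // Serializes a candidate transfer object to a byte array and reports its size.
    public class SerializedSizeCheck {

        public static void main(String[] args) throws Exception {
            HashMap data = new HashMap();
            for (int i = 0; i < 100; i++) {
                data.put("key" + i, "value" + i);
            }

            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(data);
            oos.close();

            System.out.println("Serialized size: " + bos.size() + " bytes");
        }
    }

Wrapping the writeObject() call in a timing loop, as in the reflection harness earlier, gives a rough idea of serialization cost as well as size.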

Date posted: 13/08/2014, 12:21
