Advanced PHP Programming - P9

[Figure 15.6: Stale cache data resulting in inconsistent cluster behavior. Client X gets a fresh copy of Joe's page from server A, whose copy is newly cached; client Y gets a stale copy from server B, whose cache is older.]

Centralized Caches

One of the easiest and most common techniques for guaranteeing cache consistency is to use a centralized cache solution. If all participants use the same set of cache files, most of the worries regarding distributed caching disappear (basically because the caching is no longer completely distributed, just the machines performing it).

Network file shares are an ideal tool for implementing a centralized file cache. On Unix systems, the standard tool for doing this is NFS. NFS is a good choice for this application for two main reasons:

- NFS server and client software are bundled with essentially every modern Unix system.
- Newer Unix systems supply reliable file-locking mechanisms over NFS, meaning that the cache libraries can be used without change.

[Figure 15.7: Inconsistent cached session data breaking shopping carts. Joe starts his shopping cart on server A, but when he is served by server B he gets a brand-new, empty cart; cart A is never merged into B.]

The real beauty of using NFS is that from a user level it appears no different from any other filesystem, so it provides a very easy path for growing a cache implementation from a single machine to a cluster of machines. If you have a server that uses /cache/www.foo.com as its cache directory (with the Cache_File module developed in Chapter 10, "Data Component Caching"), you can extend this caching architecture seamlessly by creating an exportable directory /shares/cache/www.foo.com on your NFS server and then mounting it on any interested machine as follows:

```
# /etc/fstab
nfs-server:/shares/cache/www.foo.com /cache/www.foo.com nfs rw,noatime - -
```

Then you can mount it with this:

```
# mount -a
```
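Because the mounted share behaves like any local directory, cache code built on plain file operations ports over unchanged. Here is a minimal sketch of such a file cache; the helper names and serialization format are illustrative, not the book's Cache_File API. It leans on flock(), which is exactly the locking that the newer NFS implementations mentioned above make reliable:

```php
<?php
// Minimal file cache that works identically on local disk and on the
// NFS mount above. Illustrative only; the real Cache_File class differs.
define('CACHE_DIR', '/cache/www.foo.com');

function cache_set($key, $data)
{
    $fp = fopen(CACHE_DIR . '/' . md5($key), 'c');  // create, don't truncate yet
    if ($fp === false) {
        return false;
    }
    if (flock($fp, LOCK_EX)) {       // exclusive lock; needs working NFS locking
        ftruncate($fp, 0);
        fwrite($fp, serialize($data));
        fflush($fp);
        flock($fp, LOCK_UN);
    }
    fclose($fp);
    return true;
}

function cache_get($key)
{
    $fp = @fopen(CACHE_DIR . '/' . md5($key), 'r');
    if ($fp === false) {
        return false;                // cache miss
    }
    flock($fp, LOCK_SH);             // shared lock so readers never see partial writes
    $data = stream_get_contents($fp);
    flock($fp, LOCK_UN);
    fclose($fp);
    return unserialize($data);
}
```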
These are the drawbacks of using NFS for this type of task:

- It requires an NFS server. In most setups, this is a dedicated NFS server.
- The NFS server is a single point of failure. A number of vendors sell enterprise-quality NFS server appliances. You can also rather easily build a highly available NFS server setup yourself.
- The NFS server is often a performance bottleneck. The centralized server must sustain the disk input/output (I/O) load for every Web server's cache interaction and must transfer that data over the network. This can cause both disk and network throughput bottlenecks.

A few recommendations can reduce these issues:

- Mount your shares by using the noatime option. This turns off file metadata updates when a file is accessed for reads.
- Monitor your network traffic closely and use trunked Ethernet/Gigabit Ethernet if your bandwidth grows past 75Mbps.
- Take your most senior systems administrator out for a beer and ask her to tune the NFS layer. Every operating system has its quirks in relation to NFS, so this sort of tuning is very difficult.

My favorite quote in this regard is the following note from the 4.4BSD man pages regarding NFS mounts:

> Due to the way that Sun RPC is implemented on top of UDP (unreliable datagram) transport, tuning such mounts is really a black art that can only be expected to have limited success.

Another option for centralized caching is using an RDBMS. This might seem completely antithetical to one of our original intentions for caching (reducing the load on the database), but that isn't necessarily the case. Our goal throughout all this is to eliminate or reduce expensive code, and database queries are often expensive. Often is not always, however, so we can still cache effectively if we make the results of expensive database queries available through inexpensive queries.
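One way to realize that idea (shown with PDO for brevity; the cache table layout and helper names are assumptions, not the book's) is to store serialized results keyed by a hash and fetch them back with a primary-key lookup, which almost any database can serve cheaply:

```php
<?php
// Sketch of an RDBMS-backed cache, assuming a table like:
//   CREATE TABLE cache (id VARCHAR(32) PRIMARY KEY, data BLOB, expires INT);
// REPLACE INTO is MySQL-specific; use an UPDATE/INSERT pair elsewhere.
function db_cache_get(PDO $dbh, $key)
{
    $stmt = $dbh->prepare('SELECT data FROM cache WHERE id = ? AND expires > ?');
    $stmt->execute(array(md5($key), time()));
    $row = $stmt->fetch(PDO::FETCH_NUM);
    return $row ? unserialize($row[0]) : false;   // false == miss
}

function db_cache_set(PDO $dbh, $key, $data, $ttl = 300)
{
    $stmt = $dbh->prepare('REPLACE INTO cache (id, data, expires) VALUES (?, ?, ?)');
    $stmt->execute(array(md5($key), serialize($data), time() + $ttl));
}
```

On a miss, the caller runs the expensive query and stores its result with db_cache_set(), so the expensive work happens once per expiration window.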
Fully Decentralized Caches Using Spread

A more ideal solution than using centralized caches is to have cache reads be completely independent of any central service and to have writes coordinate in a distributed fashion to invalidate all cache copies across the cluster.

To achieve this, you can use Spread, a group communication toolkit designed at the Johns Hopkins University Center for Networking and Distributed Systems to provide an extremely efficient means of multicast communication between services in a cluster, with robust ordering and reliability semantics. Spread is not a distributed application in itself; it is a toolkit (a messaging bus) that allows the construction of distributed applications.

The basic architecture plan is shown in Figure 15.8. Cache files will be written in a nonversioned fashion locally on every machine. When an update to the cached data occurs, the updating application will send a message to the cache Spread group. On every machine, there is a daemon listening to that group. When a cache invalidation request comes in, the daemon will perform the cache invalidation on that local machine.

[Figure 15.8: A simple Spread ring.]

This methodology works well as long as there are no network partitions. A network partition event occurs whenever a machine joins or leaves the ring. Say, for example, that a machine crashes and is rebooted. During the time it was down, cache entries may have changed. It is possible, although complicated, to build a system using Spread whereby changes could be reconciled on network rejoin. Fortunately for you, the nature of most cached information is that it is temporary and not terribly painful to re-create. You can use this assumption and simply destroy the cache on a Web server whenever the cache maintenance daemon is restarted. This measure, although draconian, allows you to easily prevent usage of stale data.

To implement this strategy, you need to install some tools. To start with, you need to download and install the Spread toolkit from www.spread.org. Next, you need to install the Spread wrapper from PEAR:

```
# pear install spread
```

The Spread wrapper library is written in C, so you need all the PHP development tools installed to compile it (these are installed when you build PHP from source).

So that you can avoid having to write your own protocol, you can use XML-RPC to encapsulate your purge requests. This might seem like overkill, but XML-RPC is actually an ideal choice: It is much lighter-weight than a protocol such as SOAP, yet it still provides a relatively extensible and "canned" format, which ensures that you can easily add clients in other languages if needed (for example, a standalone GUI to survey and purge cache files).

To start, you need to install an XML-RPC library. The PEAR XML-RPC library works well and can be installed with the PEAR installer, as follows:

```
# pear install XML_RPC
```

After you have installed all your tools, you need a client. You can augment the Cache_File class with a method that allows for purging data:

```php
require_once 'XML/RPC.php';

class Cache_File_Spread extends File {
    private $spread;
```

Spread works by having clients attach to a network of servers, usually a single server per machine. If the daemon is running on the local machine, you can simply specify the port that it is running on, and a connection will be made over a Unix domain socket. The default Spread port is 4803:

```php
    private $spreadName = '4803';
```

Spread clients join groups to send and receive messages on. If you are not joined to a group, you will not see any of the messages for it (although you can send messages to a group you are not joined to). Group names are arbitrary, and a group will be automatically created when the first client joins it. You can call your group xmlrpc:

```php
    private $spreadGroup = 'xmlrpc';
    private $cachedir = '/cache/';

    public function __construct($filename, $expiration = false)
    {
        parent::__construct($filename, $expiration);
```

You create a new Spread object in order to have the connect performed for you automatically:

```php
        $this->spread = new Spread($this->spreadName);
    }
```

Here's the method that does your work. You create an XML-RPC message and then send it to the xmlrpc group with the multicast method:

```php
    function purge()
    {
        // We don't need to perform this unlink;
        // our local spread daemon will take care of it.
        // unlink("$this->cachedir/$this->filename");
        $params = array($this->filename);
        $client = new XML_RPC_Message("purgeCacheEntry", $params);
        $this->spread->multicast($this->spreadGroup, $client->serialize());
    }
}
```

Now, whenever you need to poison a cache file, you simply use this:

```php
$cache->purge();
```
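It helps to see what actually crosses the wire. This snippet builds the same kind of message that purge() multicasts and dumps its serialization; the filename is only an example, and the exact whitespace varies by library version:

```php
<?php
// Inspect the XML-RPC payload that gets multicast to the group.
require_once 'XML/RPC.php';

$params = array(new XML_RPC_Value('/www.foo.com/index.html', 'string'));
$msg = new XML_RPC_Message('purgeCacheEntry', $params);
print $msg->serialize();

// Prints a standard XML-RPC methodCall document, roughly:
// <methodCall>
//   <methodName>purgeCacheEntry</methodName>
//   <params>
//     <param><value><string>/www.foo.com/index.html</string></value></param>
//   </params>
// </methodCall>
```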
You also need an RPC server to receive these messages and process them:

```php
require_once 'XML/RPC/Server.php';

$CACHEBASE  = '/cache/';
$serverName = '4803';
$groupName  = 'xmlrpc';
```

The function that performs the cache file removal is quite simple. You decode the file to be purged and then unlink it. Prepending the cache directory to the filename is a half-hearted attempt at security. A more robust solution would be to use chroot to confine the daemon to the cache directory at startup. Because you're using this purely internally, you can let this slide for now.

Here is a simple cache removal function:

```php
function purgeCacheEntry($message)
{
    global $CACHEBASE;
    $val = $message->params[0];
    $filename = $val->getval();
    unlink("$CACHEBASE/$filename");
}
```

Now you need to do some XML-RPC setup, setting the dispatch array so that your server object knows what functions it should call:

```php
$dispatches = array(
    'purgeCacheEntry' => array('function' => 'purgeCacheEntry'));
$server = new XML_RPC_Server($dispatches, 0);
```

Now you get to the heart of your server. You connect to your local Spread daemon, join the xmlrpc group, and wait for messages. Whenever you receive a message, you call the server's parseRequest method on it, which in turn calls the appropriate function (in this case, purgeCacheEntry):

```php
$spread = new Spread($serverName);
$spread->join($groupName);
while (1) {
    $message = $spread->receive();
    $server->parseRequest($message->message);
}
```
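Before wiring the daemon into your cache infrastructure, it is worth sanity-checking the Spread plumbing itself. This loopback sketch uses only the wrapper calls shown above (connect, join, multicast, receive); the group name is arbitrary, and it assumes the default Spread behavior that a group member receives its own multicasts:

```php
<?php
// Loopback test: join a scratch group, multicast to it, read the message back.
$spread = new Spread('4803');            // local daemon, default port
$spread->join('smoketest');              // must join before we can receive
$spread->multicast('smoketest', 'hello, ring');
$msg = $spread->receive();               // blocks until delivery
print "received: {$msg->message}\n";     // should print "hello, ring"
```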
Scaling Databases

One of the most difficult challenges in building large-scale services is the scaling of databases. This applies not only to RDBMSs but to almost any kind of central data store. The obvious solution to scaling data stores is to approach them as you would any other service: partition and cluster. Unfortunately, RDBMSs are usually much more difficult to scale this way than other services.

Partitioning actually works wonderfully as a database scaling method. There are a number of degrees of partitioning. On the most basic level, you can partition by breaking the data objects for separate services into distinct schemas. Assuming that a complete (or at least mostly complete) separation of the dependent data for the applications can be achieved, the schemas can be moved onto separate physical database instances with no problems.

Sometimes, however, you have a database-intensive application where a single schema sees so much DML (Data Manipulation Language; SQL that causes change in the database) that it needs to be scaled as well. Purchasing more powerful hardware is an easy way out and is not a bad option in this case. However, sometimes simply buying larger hardware is not an option:

- Hardware pricing is not linear with capacity. High-powered machines can be very expensive.
- I/O bottlenecks are hard (read: expensive) to overcome.
- Commercial applications often run on a per-processor licensing scale and, like hardware, scale nonlinearly with the number of processors. (Oracle, for instance, does not allow Standard Edition licensing on machines that can hold more than four processors.)

Common Bandwidth Problems

You saw in Chapter 12, "Interacting with Databases," that selecting more rows than you actually need can result in your queries being slow because all that information must be pulled over the network from the RDBMS to the requesting host. In high-volume applications, it's very easy for this query load to put a significant strain on your network. Consider this: If you request 100 rows to generate a page and your average row width is 1KB, then you are pulling 100KB of data across your local network per page. If that page is requested 100 times per second, then just for database data, you need to fetch 100KB × 100 = 10MB of data per second. That's bytes, not bits. In bits, it is 80Mbps, which will effectively saturate a 100Mb Ethernet link.

This example is a bit contrived. Pulling that much data over in a single request is a sure sign that you are doing something wrong, but it illustrates the point that it is easy to have back-end processes consume large amounts of bandwidth. Database queries aren't the only actions that require bandwidth. These are some other traditional large consumers:

- Networked file systems. Although most developers quickly recognize that requesting 100KB of data per request from a database is a bad idea, many seemingly forget that requesting 100KB files over NFS or another network file system requires just as much bandwidth and puts a huge strain on the network.
- Backups. Backups have a particular knack for saturating networks. They have almost no computational overhead, so they are traditionally network bound. That means a backup system will easily grab whatever bandwidth you have available.

For large systems, the solution to these ever-growing bandwidth demands is to separate out the large consumers so that they do not step on each other. The first step is often to dedicate separate networks to Web traffic and to database traffic. This involves putting multiple network cards in your servers. Many network switches support being divided into multiple logical networks (that is, virtual LANs [VLANs]). This is not technically necessary, but it is more efficient (and secure) to manage. You will want to conduct all Web traffic over one of these virtual networks and all database traffic over the other. Purely internal networks (such as your database network) should always use private network space. Many load balancers also support network address translation, meaning that you can have your Web traffic network on private address space as well, with only the load balancer bound to public addresses.

As systems grow, you should separate out functionality that is expensive. If you have a network-available backup system, putting in a dedicated network for the hosts that use it can be a big win. Some systems may eventually need to go to Gigabit Ethernet or trunked Ethernet. Backup systems, high-throughput NFS servers, and databases are common applications that end up being network bound on 100Mb Ethernet networks. Some Web systems, such as static image servers running high-speed Web servers like Tux or thttpd, can be network bound on Ethernet networks as well.

Finally, never forget that the first step in guaranteeing scalability is to be careful when executing expensive tasks. Use content compression to keep your Web bandwidth small. Keep your database queries small. Cache data that never changes on your local server. If you need to back up four different databases, stagger the backups so that they do not overlap.
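Of those suggestions, content compression is the cheapest to adopt from PHP. A sufficient sketch, using the standard zlib output handler (the same effect is available globally by setting zlib.output_compression in php.ini):

```php
<?php
// Compress output for clients that send Accept-Encoding: gzip.
// ob_gzhandler (from the zlib extension) falls back to plain text
// for clients that don't advertise compression support.
ob_start('ob_gzhandler');

echo str_repeat("highly compressible page content\n", 1000);
```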
There are two common solutions to this scenario: replication and object partitioning.

Replication comes in the master/master and master/slave flavors. Despite what any vendor might tell you in order to sell its product, no master/master solution currently performs very well. Most require shared storage to operate properly, which means that I/O bottlenecks are not eliminated. In addition, there is overhead introduced in keeping the multiple instances in sync (so that you can provide consistent reads during updates).

The master/master schemes that do not use shared storage have to handle the overhead of synchronizing transactions and handling two-phase commits across a network (plus the read-consistency issues). These solutions tend to be slow as well. (Slow here is a relative term: Many of these systems can be made blazingly fast, but not as fast as a doubly powerful single system, and often not even as fast as an equally powerful single system.)

The problem with master/master schemes is with write-intensive applications. When a database is bottlenecked doing writes, the overhead of a two-phase commit can be crippling. Two-phase commit guarantees consistency by breaking the commit into two phases:

- The promissory phase, in which the database that the client is committing to requests that all its peers promise to perform the commit.
- The commit phase, in which the commit actually occurs.

As you can probably guess, this process adds significant overhead to every write operation, which spells trouble if the application is already having trouble handling its volume of writes.

In the case of a severely CPU-bound database server (which is often an indication of poor SQL tuning anyway), it might be possible to see performance gains from clustered systems. In general, though, multimaster clustering will not yield the performance gains you might expect. This doesn't mean that multimaster systems don't have their uses. They are a great tool for crafting high-availability solutions.

That leaves us with master/slave replication. Master/slave replication poses fewer technical challenges than master/master replication and can yield good speed benefits. A critical difference between master/master and master/slave setups is that in master/master architectures, state needs to be globally synchronized: Every copy of the database must be in complete synchronization with every other. In master/slave replication, updates are often not even in real time. For example, in both MySQL replication and Oracle's snapshot-based replication, updates are propagated asynchronously from the data change. Although in both cases the degree of staleness can be tightly regulated, the allowance for even slightly stale data radically improves the cost overhead involved.

The major constraint in dealing with master/slave databases is that you need to separate read-only operations from write operations. Figure 15.9 shows a cluster of MySQL servers set up for master/slave replication. The application can read data from any of the slave servers but must make any updates to replicated tables on the master server.

MySQL does not have a corner on the replication market, of course. Many databases have built-in support for replicating entire databases or individual tables. In Oracle, for example, you can replicate tables individually by using snapshots, or materialized views. Consult your database documentation (or your friendly neighborhood database administrator) for details on how to implement replication in your RDBMS.

Master/slave replication relies on transmitting and applying all write operations across the interested machines. In applications with high-volume read and write concurrency, this can cause slowdowns (due to read-consistency issues). Thus, master/slave replication is best applied in situations that have a higher read volume than write volume.

[Figure 15.9: Overview of MySQL master/slave replication.]
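Nothing stops you from enforcing that read/write split in application code, whatever the RDBMS. Here is a minimal sketch (the class name and DSN details are ours, not the book's, and error handling is omitted): all DML goes through the master handle, and reads are spread across randomly chosen slaves:

```php
<?php
// Application-level master/slave routing, shown with PDO for brevity.
class ReplicatedDB
{
    private $master;
    private $slaves = array();

    public function __construct($masterDsn, array $slaveDsns, $user, $pass)
    {
        $this->master = new PDO($masterDsn, $user, $pass);
        foreach ($slaveDsns as $dsn) {
            $this->slaves[] = new PDO($dsn, $user, $pass);
        }
    }

    // All writes (INSERT/UPDATE/DELETE) must hit the master.
    public function write($sql, array $params = array())
    {
        $stmt = $this->master->prepare($sql);
        $stmt->execute($params);
        return $stmt->rowCount();
    }

    // Reads tolerate slight staleness, so any slave will do.
    public function read($sql, array $params = array())
    {
        $slave = $this->slaves[array_rand($this->slaves)];
        $stmt = $slave->prepare($sql);
        $stmt->execute($params);
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }
}

// Example wiring (hostnames are placeholders):
$db = new ReplicatedDB('mysql:host=master;dbname=app',
                       array('mysql:host=slave1;dbname=app',
                             'mysql:host=slave2;dbname=app'),
                       'user', 'password');
```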
Writing Applications to Use Master/Slave Setups

In MySQL version 4.1 or later, there are built-in functions to magically handle query distribution over a master/slave setup. This is implemented at the level of the MySQL client libraries, which means that it is extremely efficient. To utilize these functions in PHP, you need to be using the new mysqli extension, which breaks backward compatibility with the standard mysql extension and does not support MySQL prior to version 4.1.

If you're feeling lucky, you can turn on completely automagical query dispatching, like this:

```php
$dbh = mysqli_init();
mysqli_real_connect($dbh, $host, $user, $password, $dbname);
mysqli_rpl_parse_enable($dbh);
// prepare and execute queries as per usual
```

The mysqli_rpl_parse_enable() function instructs the client libraries to attempt to automatically determine whether a query can be dispatched to a slave or must be serviced by the master.
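If the automatic parsing feels too magical, mysqli of that era also exposed the dispatch decision directly, via mysqli_master_query() and mysqli_slave_query(). A sketch, with the caveat that these replication helpers existed only in early mysqli releases and were later removed from PHP ($host and friends are the same connection parameters as above):

```php
<?php
// Manual routing with the early mysqli replication helpers (since removed):
// SELECTs go to a slave, everything else to the master.
$dbh = mysqli_init();
mysqli_real_connect($dbh, $host, $user, $password, $dbname);

function repl_query($dbh, $query)
{
    if (preg_match('/^\s*select\b/i', $query)) {
        return mysqli_slave_query($dbh, $query);   // reads may be slightly stale
    }
    return mysqli_master_query($dbh, $query);      // all DML belongs on the master
}

repl_query($dbh, 'SELECT name FROM users WHERE id = 1');
repl_query($dbh, "UPDATE users SET name = 'joe' WHERE id = 1");
```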
[...]

... and talk over low-level network sockets by using the sockets extension in PHP.

Further Reading

An interesting development in PHP-oriented application servers is the SRM project (www.vl-srm.net), which is headed up by Derick Rethans. SRM is an application server framework built around an embedded PHP interpreter. Application services are scripted in PHP and are interacted with by using a bundled communication extension. Of course, ...

[...]

... autoglobal $HTTP_RAW_POST_DATA, you need to make certain that you do not turn off always_populate_raw_post_data in your php.ini file. Now, if you place the server code at www.example.com/xmlrpc.php and execute the client code from any machine, you should get back this:

```
> php system_load.php
0.34
```

or whatever your one-minute load average is.

Building a Server: Implementing the MetaWeblog API

The power of ...

[...]

... in WSDL. After the call to getQuote() is made, the result is deserialized into native PHP types by using deserializeBody(). When you execute it, you get this:

```
> php delayed-stockquote.php
Current price of IBM is 90.25
```

Rewriting system.load as a SOAP Service

A quick test of your new SOAP skills is to reimplement the XML-RPC system.load ...

[...]

```php
... $detail->Asin\n"; }
```

When you run this, you get the following:

```
Title: Advanced PHP Programming, ASIN: 0672325616
```

Generating Proxy Code

You can quickly write the code to generate dynamic proxy objects from WSDL, but this generation incurs a good deal of parsing that should be avoided when calling Web services repeatedly. The SOAP WSDL manager can generate actual PHP code for you so that you can invoke the calls directly, ...

[...]

... names to PHP function names. Finally, an XML_RPC_Server object is created, and the dispatch array $dispatches is passed to it. The second parameter, 1, indicates that it should immediately service a request, using the service() method (which is called internally). service() looks at the raw HTTP POST data, parses it for an XML-RPC request, and then performs the dispatching. Because it relies on the PHP autoglobal ...

[...]

XML-RPC

Of course you don't have to build and interpret these documents yourself. There are a number of different XML-RPC implementations for PHP. I generally prefer to use the PEAR XML-RPC classes because they are distributed with PHP itself. (They are used by the PEAR installer.) Thus, they have almost 100% deployment. Because of this, there is little reason to look elsewhere. An XML-RPC ...

[...]

... representations into PHP types by using the getval() method. Then metaWeblog_newPost() authenticates the specified user. If the user fails to authenticate, metaWeblog_newPost() returns an empty XML_RPC_Response object, with an "Authentication Failed" error message. If the authentication is successful, metaWeblog_newPost() reads in the item_struct parameter and deserializes it into the PHP array $item_struct, ...

[...]

... the three methods combined to get a complete picture of what an XML-RPC server implements. Here is a script that lists the documentation and signatures for every method on a given XML-RPC server:

```php
<?php
require_once 'XML/RPC.php';

if ($argc != 2) {
    print "Must specify a url.\n";
    exit;
}
$url = parse_url($argv[1]);
$client = new XML_RPC_Client($url['path'], ...

// ...

            ... $return $method($params)\n";
        }
    } else {
        print "NO SIGNATURE\n";
    }
    print "\n";
}
?>
```

[...]

SOAP

[...]

Running this against a Serendipity installation generates the following:

```
> xmlrpc-listmethods.php http://www.example.org/serendipity_xmlrpc.php

Method metaWeblog.newPost: Takes blogid, username, password, item_struct,
publish_flag and returns the postid of the new entry
Signature #0: string metaWeblog.newPost(string, ...
```

[...]

... sends it to a server, and parses the response. The following code generates the request document shown earlier in this section and parses the resulting response:

```php
require_once 'XML/RPC.php';

$client = new XML_RPC_Client('/xmlrpc.php', 'www.example.com');
$msg = new XML_RPC_Message('system.load');
$result = $client->send($msg);
if ($result->faultCode()) {
    echo "Error\n";
}
print XML_RPC_decode($result->value());
```
