Unified Logging Service

Developer Dashboard

While this book is squarely aimed at SharePoint administrators, we need to cover a new piece of functionality called the developer dashboard. Despite what the name may suggest, it's not just for developers. The developer dashboard is a dashboard that shows how long it took for a page to load, and which components loaded with it. A picture is worth a thousand words, so Figure 15-8 probably explains it better.

[Figure 15-8]

This dashboard is loaded at the bottom of your requested web page. As you can see, the dashboard is chock full of information about the page load. You can see how long the page took to load (708.77 ms) as well as who requested it, its correlation ID, and so on. This info is useful when the helpdesk gets those ever-popular "SharePoint's slow" calls from users. Now you can quantify exactly what "slow" means, as well as see what led up to the page load being slow. If Web Parts were poorly designed and did a lot of database queries, you'd see it here. If they fetched large amounts of SharePoint content, you'd see it here. If you're really curious, you can click the link on the bottom left, "Show or hide additional tracing information," to get several pages' worth of information about every step that was taken to render that page.

Now that you're sold on the developer dashboard, how do you actually use it? As mentioned before, it is exposed as a dashboard at the bottom of the page when it renders. The user browsing the page must be a site collection administrator to see the developer dashboard, and it must be enabled in your farm. By default it is shut off, which is one of the three possible states. It can also be on, which means the dashboard is displayed on every page load. Not only is that tedious when you're using SharePoint, but it also has a performance penalty. The third option, ondemand, is a more reasonable approach.
In ondemand mode the developer dashboard is not on, but it's warming up in the on-deck circle, waiting for you to put it in the big game. When the need arises, a site collection administrator can turn it on by clicking the icon indicated in Figure 15-9. When you are finished with it, you can put it back on the bench by clicking the same icon.

[Figure 15-9]

How do you go about enabling the developer dashboard to make this possible? You have two options. You can use sad, old STSADM, or you can use shiny new Windows PowerShell. The following code shows both ways of enabling it.

Using STSADM:

```
stsadm -o setproperty -pn developer-dashboard -pv on
stsadm -o setproperty -pn developer-dashboard -pv off
stsadm -o setproperty -pn developer-dashboard -pv ondemand
```

Using Windows PowerShell:

```powershell
$dash = [Microsoft.SharePoint.Administration.SPWebService]::ContentService.DeveloperDashboardSettings
$dash.DisplayLevel = 'OnDemand'
$dash.TraceEnabled = $true
$dash.Update()
```

Notice that at no point do you specify a URL when you're setting this. It is a farm-wide setting. Never fear, though; only site collection administrators will see it, so hopefully it won't scare too many users if you have to enable it for troubleshooting.

Logging Database

Microsoft has always made it pretty clear how it feels about people touching the SharePoint databases. The answer is always a very clear and concise, "Knock that off!" They didn't support reading from or writing to SharePoint databases, period. End of story. That became a problem, however, because not all of the information administrators wanted about their farm or servers was discoverable in the interface, or with the SharePoint Object Model. This resulted in rogue administrators, in the dark of night, quietly querying their databases, hoping to never get caught. SharePoint 2010 addresses this by introducing a logging database. This database is a repository of SharePoint events from every machine in your farm.
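Before changing the display level, you may want to confirm which state the farm is currently in. The following is a minimal sketch, assuming it is run from the SharePoint 2010 Management Shell on a server in the farm; it simply reads back the same settings object the enabling code writes to:

```powershell
# Read the farm-wide developer dashboard settings
$settings = [Microsoft.SharePoint.Administration.SPWebService]::ContentService.DeveloperDashboardSettings

# DisplayLevel is one of Off, On, or OnDemand
Write-Host "Developer dashboard is: $($settings.DisplayLevel)"
Write-Host "Tracing enabled:        $($settings.TraceEnabled)"
```

Because this is the same object used to enable the dashboard, checking it first is a cheap way to verify that a change actually took effect across the farm.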
It aggregates information from many different locations and writes it all to a single database. This database contains just about everything you could ever want to know about your farm. Even better, you can read from and write to this database if you would like, as the schema is public. Do your worst to it; Microsoft doesn't care.

Microsoft's reason for forbidding access to databases before was well intentioned. Obviously, writing to a SharePoint database potentially puts it in a state where SharePoint can no longer read it and render the content in it. We all agree this is bad. What is less obvious, though, is that reading from a database can have the same impact. A seemingly innocent but poorly written SQL query that only reads values could put a lock on a table, or the whole database. This lock would also mean that SharePoint could not render out the content of that database for the duration of the lock. That's also a bad thing. This logging database, however, is just a copy of information gathered from other places and is not used to satisfy end user requests, so it's safe for you to read from it or write to it. If you destroy the database completely, you can just delete it and let SharePoint re-create it. The freedom is invigorating. Figure 15-10 shows some of the information that is copied into the logging database.

[Figure 15-10]

Configuring the Logging Database

How do you use this magical database and leverage all this information? By default, health data collection is enabled. This builds the logging database. To view the settings, open SharePoint Central Administration and go into the now familiar Monitoring section. Under the Reporting heading, click "Configure usage and health data collection," as shown in Figure 15-11.

[Figure 15-11]

Let's start our tour of the settings at the top. The first checkbox on the page determines whether the usage data is collected and stored in the logging database.
This is turned on by default, and here is where you would disable it, should you choose to.

The next section enables you to determine which events you want reported in the log. By default, all eight events are logged. If you want to reduce the impact logging has on your servers, you can disable events for which you don't think you'll want reports. You always have the option to enable events later. You may want to do this if you need to investigate a specific issue. You can turn the logging on during your investigation, and then shut it off after the investigation is finished.

The next section determines where the usage logs will be stored. By default they are stored in the Logs directory of the SharePoint Root, along with the trace logs. The usage logs follow the same naming convention as the trace logs, but have the suffix .usage. As with the trace logs, it's a good idea to move these logs off of the C:\ drive if possible. You can also limit the amount of space the usage logs take, with 5GB being the default.

The next section, Health data collection, seems simple enough: just a checkbox and a link. The checkbox determines whether SharePoint will periodically collect health information about the members of the farm. When you click the Health Logging Schedule link, you're taken to a page that lists all of the timer jobs that collect this information. You can use this page to disable the timer jobs for any information you don't want to collect. Again, the more logging you do, the greater the impact on performance. Figure 15-12 shows the health data collection timer jobs.

[Figure 15-12]

Clearly, SharePoint collects a vast amount of information. Not only does it monitor SharePoint-related performance, such as the User Profile Service Application Synchronization Job, it also keeps track of the health of non-SharePoint processes, like SQL.
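The same usage settings can be scripted instead of clicked through in Central Administration. The sketch below assumes the SharePoint 2010 Management Shell; the log path is a hypothetical example, and you should verify the parameter names with Get-Help Set-SPUsageService on your own farm:

```powershell
# List the usage event types and whether each is currently logged
Get-SPUsageDefinition | Select-Object Name, Enabled

# Move the usage logs off the system drive and cap their total size.
# D:\UsageLogs is a hypothetical path; it must exist on every farm server.
Set-SPUsageService -UsageLogLocation "D:\UsageLogs" -UsageLogMaxSpaceGB 5
```

Scripting this is handy when you have several farms to keep consistent, since the same two lines can be run against each one.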
It reports SQL blocking queries and DMV (dynamic management view) data. Not only can you disable the timer jobs for information you don't want to collect, you can also decrease how often they run, to reduce the impact on your servers.

The next section of the Configure web analytics and health data collection page is the log collection schedule, which enables you to configure how frequently the logs are collected from the servers in the farm, and how frequently they are processed and written to the logging database. This lets you control the impact the log collection has on your servers. The default setting collects the logs every 30 minutes, but you can increase that to reduce the load placed on the servers.

The final section of the page displays the SQL instance and database name of the reporting database itself. The default settings use the same SQL instance as the default content database SQL instance, and use the database name WSS_Logging. Although the page recommends using the default settings, there are some pretty good reasons to change its location and settings. Considering the amount of information that can be written to this database, and how frequently that data can be written, it might make sense to move this database to its own SQL server. While reading from and writing to the database won't directly impact end user performance, the amount of usage this database could see might overwhelm your SQL server, or fill up the drives that also house your other SharePoint databases. If your organization chooses to use the logging database, keep an eye on the disk space that it uses, and the amount of activity it generates. On a test environment with about one month's worth of use by one user, the logging database grew to over 1GB. This database can get huge.

If you need to alter those settings you can do so in Windows PowerShell with the Set-SPUsageApplication cmdlet.
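Those timer-job adjustments can also be scripted rather than made in Central Administration. This is only a sketch, assuming the SharePoint 2010 Management Shell: the name filter and the example job name are assumptions, so inspect the output of Get-SPTimerJob on your own farm for the exact names, and check Get-Help Set-SPTimerJob for the schedule string format:

```powershell
# Find health/usage collection timer jobs (name pattern is an assumption)
Get-SPTimerJob | Where-Object { $_.Name -like "*health*" -or $_.Name -like "*usage*" } |
    Select-Object Name, Schedule

# Reschedule a hypothetical job to run nightly instead of its default
$job = Get-SPTimerJob | Where-Object { $_.Name -eq "job-usage-log-file-import" }
if ($job) { Set-SPTimerJob -Identity $job -Schedule "daily at 23:00:00" }
```

Running a heavy collection job off-hours like this is a middle ground between leaving it at the default frequency and disabling it outright.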
The following PowerShell syntax shows how to change the logging database's location:

```powershell
Set-SPUsageApplication -DatabaseServer <Database server name>
    -DatabaseName <Database name>
    [-DatabaseUsername <User name>]
    [-DatabasePassword <Password>]
    [-Verbose]
```

Specify the name of the SQL server or instance where you would like to host the logging database. You must also specify the database name, even if you want to use the default name, WSS_Logging. If the user running the Set-SPUsageApplication cmdlet is not the owner of the database, provide the username and password of an account that has sufficient permissions. Because this database consists of data aggregated from other locations, you can move it without losing any data. It will simply be repopulated as the collection jobs run.

To get the full list of PowerShell cmdlets that deal with the Usage service, use Get-Command as follows:

```powershell
Get-Command -Noun SPUsage*
```

Consuming the Logging Database

We've talked a lot about this logging database, what's in it, and how to configure it, but we haven't yet covered how you can enjoy its handiwork. There are many places to consume the information in the logging database. The first place is Central Administration. Under Monitoring ➪ Reporting are three reports that use information in the logging database.

The first link is View administrative reports. Clicking that link takes you to a document library in Central Administration that contains a few canned administrative reports. Out of the box there are only search reports, as shown in Figure 15-13, but any type of report could be put here. Microsoft could provide these reports, or they can be created by SharePoint administrators. The documents in this library are simply web pages, so click any of them to see the information they contain. These particular reports are very handy for determining the source of search bottlenecks. This enables you to be proactive in scaling out your search infrastructure.
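Filled in with concrete values, moving the logging database might look like the sketch below. The server name is entirely hypothetical; substitute your own SQL instance, and note that keeping the WSS_Logging name is just convention:

```powershell
# "SQL02" is a hypothetical dedicated SQL server for the logging database
Set-SPUsageApplication -DatabaseServer "SQL02" -DatabaseName "WSS_Logging" -Verbose
```

Because the cmdlet points the Usage service at a fresh database rather than migrating data, the new database starts empty and fills back up as the collection timer jobs run, exactly as described above.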
You are able to see how long discrete parts of search take, and then scale out your infrastructure before end users are affected.

[Figure 15-13]

The next set of reports in Central Administration are the health reports. These reports enable you to isolate the slowest pages in your web app, and the most active users per web app. Like the search reports, these reports enable you to proactively diagnose issues in your farm. After viewing details about the slowest pages being rendered, you can take steps to improve their performance. Figure 15-14 shows part of the report.

[Figure 15-14]

To view a report, click the Go button on the right. The report shows how long each page takes to load, including minimums, maximums, and averages. This gives you a very convenient way to find your trouble pages. You can also see how many database queries the page makes. This is helpful, as database queries are expensive operations that can slow down a page render. You can drill down to a specific server or web app with this report as well, since the logging database aggregates information from all the servers in your farm. Pick the scope of the report you want and click the Go button. The reports are generated at runtime, so it might take a few seconds for them to appear. After the results appear, you can click a column heading to sort by those values.

The third and final set of reports in Central Admin that are fed from the logging database are the Web Analytics reports. These reports provide usage information about each of your farm's web applications, excluding Central Admin. Clicking the View Web Analytics reports link takes you to a summary page listing the web apps in your farm, along with some high-level metrics like total number of page views and total number of daily unique visitors. Figure 15-15 shows the Summary page.
[Figure 15-15]

When you click a web application on the Summary page, you're taken to a Summary page for that web app that provides more detailed usage information. This includes additional metrics for the web app, such as referrers, total number of page views, and the trends for each, as shown in Figure 15-16.

[Figure 15-16]

The web app summary report also adds new links on the left. These links enable you to drill further down into each category. Each new report has a graph at the top, with more detailed information at the bottom of the screen. If you want to change the scope of a report, click Analyze in the ribbon. This shows the options you have for the report, including the date ranges included. You can choose one of the date ranges provided or, as shown in Figure 15-17, choose custom dates.

[Figure 15-17]

This gives you the flexibility to drill down to the exact date you want. You can also export the report to a CSV file with the Export to Spreadsheet button. Because this is a CSV file, the graph is not included, only the dates and their values. These options are available for any of the reports after you choose a web app.

As mentioned, the web analytics reports do not include Central Administration. While it's unlikely that you'll need such a report, they are available to you. The Central Admin site is simply a highly specialized site collection in its own web app. Because it is a site collection, usage reports are also available for it. To view them, click Site Actions ➪ Site Settings. Under Site Administration, click Site web analytics reports. This brings up the same usage reports you just saw at the web app level. You also have the same options in the ribbon, with the exception of being able to export to CSV. Figure 15-18 shows the browser report for Central Admin.
[Figure 15-18]

Because these reports are site collection web analytics reports, they are available in all site collections as well as in Central Admin. This is another way to consume the information in the logging database. You can view the usage information for any site collection or web; just open Site Actions ➪ Site Settings to get the web analytics links. You have two similar links: Site Web Analytics reports and Site Collection Web Analytics reports. These are the same sets of reports, but at different scopes. The site collection–level reports are for the entire site collection. The site-level reports provide the same information but at the site (also called web) level. You have a further option of scoping the reports at that particular site, or that site and its subsites. Figure 15-19 shows the options available at the site level.

[Figure 15-19]

You may also notice another option that was not available in the Central Administration web analytics reports: the capability to use workflows to schedule alerts or reports. You can use this [...] specific reports sent to people at specific intervals, or when specific values are met. This is another way that you can use the logging database and the information it collects to be proactive with your SharePoint farm.

There is one final way to consume the information stored in the logging database: directly from SQL. Although it might feel like you're doing something wrong, Microsoft supports this method [...] Normally, it's a very bad thing to touch any of the SharePoint databases; the logging database, mentioned earlier, is the only exception to that rule. Open Management Studio and find the WSS_Logging database. Go ahead and poke around; it's fine. You'll notice the large number of tables in the database. Each category of information has 32 tables to partition the data. It's obvious this database was designed to accommodate a lot of growth. Because of the database partitions, it's tough to do SELECT statements against them. Fortunately, the database also includes a Views feature that you can use to view the data. Expand the Views node of the database to see which [...] you can simply delete it and SharePoint will re-create it.

Health Analyzer

By now you've seen there are a lot of ways for you to keep an eye on SharePoint. What if there were some magical way for SharePoint to watch over itself? What if it could use all that fancy monitoring to see when something bad was going to happen to it and just fix it itself? Welcome to the future. SharePoint 2010 introduces a feature called the Health Analyzer that does just that. The Health Analyzer utilizes Timer Jobs to run rules periodically and check on system metrics that are based on SharePoint best practices. When a rule fails, SharePoint can alert an administrator in Central Administration, or, in some cases, just fix the problem itself. To access all this magic, just select Monitoring ➪ Health Analyzer [...]
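Returning to the logging database for a moment: the views it exposes can be queried like any other SQL views, and you don't need Management Studio to do it. The sketch below uses plain ADO.NET from PowerShell. The server name is hypothetical and the view name dbo.RequestUsage is an assumption; expand the Views node of your own WSS_Logging database to find the real names before running anything like this:

```powershell
# Query a logging database view with ADO.NET from PowerShell.
# "SQL01" and dbo.RequestUsage are assumptions; verify against your farm.
$conn = New-Object System.Data.SqlClient.SqlConnection("Server=SQL01;Database=WSS_Logging;Integrated Security=SSPI")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT TOP 10 * FROM dbo.RequestUsage"
$reader = $cmd.ExecuteReader()
while ($reader.Read()) {
    # Print each column of the row; useful for exploring an unfamiliar view
    for ($i = 0; $i -lt $reader.FieldCount; $i++) {
        Write-Host "$($reader.GetName($i)) = $($reader.GetValue($i))"
    }
    Write-Host "---"
}
$conn.Close()
```

Querying the views rather than the underlying partitioned tables is the point of this sketch: the views stitch the 32 partitions back together, so a simple SELECT works as you would expect.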
