
Network Performance Toolkit: Using Open Source Testing Tools (Part 10)


Contents

Comparing Application Performance Tools

Running the Model

After creating and saving the simulation model, you can run it using the ns command-line program:

```
$ ns test.tcl
```

Depending on the speed of your Unix host, the simulation may run for a few seconds, then finish. After it finishes, you should see two files: out.nam and out.tr. The out.nam file contains the network simulation data for the nam animator program. The out.tr file contains the raw trace data for the simulated data transfers. You can observe the network in action by using the nam program with the out.nam file:

```
$ nam out.nam
```

This starts the nam application and loads the data from the out.nam file. You may want to click the Relayout button a few times, until the network layout suits your taste. Figure 19.5 shows the basic layout of the network simulation, along with packets as they are being processed.

Figure 19.5 The nam network simulation display.

You can watch the queue status of the router devices as packets are passed from the remote client to the server device.

Interpreting the Results

After the ns simulation is run, you can analyze the out.tr file to observe the data transfers. A sample of the out.tr contents looks like:

```
+ 1 0 1 tcp 40 ------- 1 0.0 6.0 0 0
- 1 0 1 tcp 40 ------- 1 0.0 6.0 0 0
+ 1 5 4 tcp 40 ------- 2 5.0 6.1 0 1
- 1 5 4 tcp 40 ------- 2 5.0 6.1 0 1
r 1.010003 0 1 tcp 40 ------- 1 0.0 6.0 0 0
+ 1.010003 1 2 tcp 40 ------- 1 0.0 6.0 0 0
- 1.010003 1 2 tcp 40 ------- 1 0.0 6.0 0 0
r 1.010003 5 4 tcp 40 ------- 2 5.0 6.1 0 1
+ 1.010003 4 6 tcp 40 ------- 2 5.0 6.1 0 1
- 1.010003 4 6 tcp 40 ------- 2 5.0 6.1 0 1
r 1.010006 4 6 tcp 40 ------- 2 5.0 6.1 0 1
```

The trace file shows the status of each simulated packet at any given time in the simulation. The format of the trace file is:

```
event time src dst pkttype pktsize flags fid srcaddr dstaddr seqnum pktid
```

The event field defines the status of the packet record.
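The fixed-column trace format lends itself to simple scripted analysis. As an illustration, here is a short Python sketch of my own (not from the book) that splits one trace record into named fields, assuming the standard 12-field layout listed above, in which the flags column is a run of dashes:

```python
# Field names taken from the ns trace format described above:
# event time src dst pkttype pktsize flags fid srcaddr dstaddr seqnum pktid
FIELDS = ("event", "time", "src", "dst", "pkttype", "pktsize",
          "flags", "fid", "srcaddr", "dstaddr", "seqnum", "pktid")

def parse_trace_line(line):
    """Split one ns trace record into a dict of named fields."""
    tokens = line.split()
    if len(tokens) != len(FIELDS):
        raise ValueError("unexpected field count: %d" % len(tokens))
    rec = dict(zip(FIELDS, tokens))
    rec["time"] = float(rec["time"])      # timestamp in simulated seconds
    rec["pktsize"] = int(rec["pktsize"])  # packet size in bytes
    return rec

rec = parse_trace_line("r 1.010003 0 1 tcp 40 ------- 1 0.0 6.0 0 0")
print(rec["event"], rec["time"], rec["pktsize"])  # r 1.010003 40
```

With records parsed this way, the event field ("+", "-", or "r") can be filtered directly instead of pattern-matching raw text.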
A plus sign indicates that the packet has been placed in the queue for the associated source and destination node link. A minus sign indicates that the packet has been removed from the link queue, while a letter r indicates that the destination node has received the packet. You can use this information to monitor the amount of data that is received by each node in the simulation.

The goal of this simulation was to watch how long it would take each client to send 10 Mb of data to the server. You can use the trace file to watch the data packets as they are received within the network, and count the data. To ensure that you are tracking data from the right client, you need to choose two points in the network that are unique to the individual client data paths. Monitoring data received by node 4 from node 3 ensures that it is data received from client 0, while monitoring data received by node 4 from node 5 ensures that it is from client 5.

You can write a simple shell script to create an output file containing the time field and the running sum of the packet-size field for specific source and destination nodes. A sample would look like this:

```
cat out.tr | grep ^r | grep "3 4 tcp 1040" | \
  awk '{old = old + $6; printf("%f\t%d\n", $2, old)}' > out.34
```

This script scans the trace file, looking for received packets (the ^r part) from node 3 to node 4 (the 3 4 part) that contain data (the 1040 part). That information is fed into an awk script, which sums the packet sizes and prints the time along with the packet-size sum. The result is redirected to an output file, which looks like:

```
$ head out.34
1.096288	1040
1.101835	2080
1.162349	3120
1.167940	4160
1.173487	5200
1.179034	6240
1.228499	7280
1.234046	8320
1.239593	9360
1.245139	10400
$
```

By changing the node numbers in the shell script, you can produce a similar file for the data between nodes 5 and 4, showing the local client data path.
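The grep/awk pipeline above can also be expressed in a few lines of Python, which is handy if you want to post-process the running totals further. This is my own sketch, not the book's; it assumes the field order shown earlier (event, time, src, dst, pkttype, pktsize, ...):

```python
def cumulative_bytes(trace_lines, src="3", dst="4", size="1040"):
    """Mimic: grep ^r | grep "3 4 tcp 1040" | awk '{old += $6; print $2, old}'.
    Returns a list of (time, cumulative bytes) pairs."""
    total = 0
    out = []
    for line in trace_lines:
        f = line.split()
        # keep only received packets on the requested link with data payloads
        if f and f[0] == "r" and f[2:6] == [src, dst, "tcp", size]:
            total += int(f[5])
            out.append((float(f[1]), total))
    return out

trace = [
    "r 1.096288 3 4 tcp 1040 ------- 1 0.0 6.0 0 10",
    "r 1.101835 3 4 tcp 1040 ------- 1 0.0 6.0 1 11",
    "r 1.105000 5 4 tcp 1040 ------- 2 5.0 6.1 0 12",  # other client, skipped
]
print(cumulative_bytes(trace))  # [(1.096288, 1040), (1.101835, 2080)]
```

Changing the src and dst arguments produces the equivalent of the out.54 data for the local client path.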
Now you can use the xgraph program to chart both data lines and observe the data streams, as shown in Figure 19.6.

Figure 19.6 xgraph display of the data traffic.

As seen in the xgraph display, both of the data streams start at 1 second into the simulation. The data stream from client 5 to the server reached 10 Mb at about 11 seconds into the simulation, while the data stream from client 0 to the server reached the 10-Mb data mark at about 55 seconds. This indicates that the remote client transfer took about 54 seconds, while the local client transfer took about 10 seconds.

Performing additional analysis of the trace output, you can compare the individual times for each data stream. It took on average 6 ms for one data packet to traverse the network from the remote client to the server, while it took less than 1 ms for the same data to traverse the local network from the local client. A 5-ms difference does not seem like a long time but, as seen in this example, when large amounts of data are traversing the network, it adds up to a large overall performance delay.

Using SSFNet

Next up is the SSFNet model. To simulate the model network, you must create a DML program defining the network devices and links, along with the protocols used. This section describes the steps necessary to build an SSFNet model and observe the results.

Building the Model

The SSFNet DML model must define the pertinent devices and links to simulate from the production network. Figure 19.7 shows the SSFNet model used to simulate the sample network.

Figure 19.7 SSFNet network model.
[Figure 19.7 depicts hosts 0, 3, and 4 and routers 1 and 2; the hosts attach to their local networks over 100-Mbps links, and the two routers are joined by a 1.5-Mbps link. Interface numbers 0 and 1 appear next to each node.]

The SSFNet network model uses the Ethernet LAN modeling feature to simulate the speed and delay present in the internal switch network within the buildings. This greatly simplifies the model. Each device must also have the proper interfaces configured (shown by the small numbers on the nodes) to represent the link speeds in the model. Figure 19.8 shows the resulting DML model code.

```
Net [
  frequency 1000000000
  randomstream [
    generator "MersenneTwister"
    stream "stream1"
    reproducibility_level "timeline"
  ]
  traffic [
    pattern [ client 0 servers [nhi 4(0) port 1600] ]
    pattern [ client 3 servers [nhi 4(0) port 1600] ]
  ]
  host [
    id 0
    interface [id 0 bitrate 10000000]
    route [dest default interface 0]
    graph [
      ProtocolSession [
        name TCPclient use SSF.OS.TCP.test.tcpClient
        start_time 1.0 start_window 1.0
        file_size 10000000 request_size 40
        show_report true
      ]
      ProtocolSession [name socket use SSF.OS.Socket.socketMaster]
      ProtocolSession [name tcp use SSF.OS.TCP.tcpSessionMaster
        tcpinit [ show_report true ]
      ]
      ProtocolSession [name ip use SSF.OS.IP]
    ]
  ]
  host [
    id 3
    interface [id 0 bitrate 100000000]
    nhi_route [dest default interface 0 next_hop 2(0)]
    graph [
      ProtocolSession [
        name tcpClient use SSF.OS.TCP.test.tcpClient
        start_time 1.0 start_window 1.0
        file_size 10000000 request_size 40
        show_report true
      ]
      ProtocolSession [name socket use SSF.OS.Socket.socketMaster]
      ProtocolSession [name tcp use SSF.OS.TCP.tcpSessionMaster
        tcpinit [ show_report true ]
      ]
      ProtocolSession [name ip use SSF.OS.IP]
    ]
  ]
  host [
    id 4
    interface [id 0 bitrate 100000000 tcpdump test7.dmp]
    nhi_route [dest default interface 0 next_hop 2(0)]
    graph [
      ProtocolSession [
        name TCPServer use SSF.OS.TCP.test.tcpServer
        port 1600 request_size 10
        show_report true
      ]
      ProtocolSession [name socket use SSF.OS.Socket.socketMaster]
      ProtocolSession [name tcp use SSF.OS.TCP.tcpSessionMaster
        tcpinit [ show_report true ]
      ]
      ProtocolSession [name ip use SSF.OS.IP]
    ]
  ]
  router [
    id 1
    interface [id 0 bitrate 100000000]
    interface [id 1 bitrate 1500000]
    graph [ProtocolSession [name ip use SSF.OS.IP]]
    route [dest default interface 1]
  ]
  router [
    id 2
    interface [id 0 bitrate 100000000]
    interface [id 1 bitrate 1500000]
    graph [ProtocolSession [name ip use SSF.OS.IP]]
    route [dest default interface 1]
  ]
  link [attach 0(0) attach 1(0) delay 0.010]
  link [attach 1(1) attach 2(1)]
  link [attach 2(0) attach 3(0) attach 4(0) delay 0.010]
]
```

Figure 19.8 SSFNet DML model.

Hosts 0 and 3 are configured as TCP client devices, capable of sending a 10-Mb file using standard TCP. Host 4 is configured as a TCP server device, accepting the 10-Mb file stream from the clients, and returning an acknowledgment packet. Two router devices are configured to simulate the T1 link within the production network. Since no IP addresses are assigned to the simulation, default routes are specified for each device on the network.
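Each client's show_report line ends with an effective receive rate in kbps. As far as the reported numbers indicate, that rate is simply the file size converted to bits, divided by the elapsed time between session start and the final read. The helper below is my own check, not part of SSFNet, and the exact formula is an assumption inferred from the output; it reproduces the rates that appear in the run output in the next section:

```python
def report_kbps(nbytes, t_start, t_end):
    """Effective transfer rate in kbps: bits moved over elapsed sim seconds.
    Assumed to match what SSFNet's show_report prints."""
    return nbytes * 8 / (t_end - t_start) / 1000.0

# Timestamps taken from the simulation output shown in Running the Model
print(round(report_kbps(10_000_000, 1.920456578, 57.521336685), 3))  # 1438.826
print(round(report_kbps(10_000_000, 1.919942997, 14.39844274), 3))   # 6411.027
```

The first figure is essentially the T1 line rate, confirming that the slow WAN link, not the endpoints, is the bottleneck for the remote client.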
Both of the local networks are modeled using a single LAN connecting the devices, and a 10-ms delay period.

Running the Model

The SSFNet model is run using the java command-line interpreter, along with the SSF.Net.Net base class used in the Raceway SSFNet system. Since each of the client devices and the server device use the show_report feature, you can observe the start and end of the data streams. The output looks like:

```
$ java SSF.Net.Net 100 test.dml
| Raceway SSF 1.1b01 (15 March 2002)
| (c)2000,2001,2002 Renesys Corporation
|
CIDR IP Block   b16         NHI
0.0.0.0/27      0x00000000  0
0.0.0.12/30     0x0000000c  0(0) 1(0) 1
0.0.0.8/30      0x00000008  1(1) 2(1) 2
0.0.0.0/29      0x00000000  2(0) 3(0) 4(0)
NHI Addr CIDR Level IP Address Block % util
0.0.0.0/27 56.25
** Using specified 1.0ns clock resolution
Phase I: construct table of routers and hosts
Phase II: connect Point-To-Point links
Phase III: add static routes
## Net config: 5 routers and hosts
## Elapsed time: 0.667 seconds
** Running for 100000000000 clock ticks (== 100.0 seconds sim time)
1.919942997 TCP host 3 src={0.0.0.2:10001} dest={0.0.0.3:1600} Active Open
1.920456578 TCP host 0 src={0.0.0.13:10001} dest={0.0.0.3:1600} Active Open
1.930046197 TCP host 4 src={0.0.0.3:1600} dest={0.0.0.2:10001} SYN recvd
1.941005111 TCP host 4 src={0.0.0.3:1600} dest={0.0.0.13:10001} SYN recvd
14.39844274 [ sid 1 start 1.919942997 ] tcpClient 3 srv 4(0) rcvd 10000000B at 6411.027kbps - read() SUCCESS
14.39844274 TCP host 3 src={0.0.0.2:10001} dest={0.0.0.3:1600} Active Close
14.40854594 TCP host 4 src={0.0.0.3:1600} dest={0.0.0.2:10001} Active Close
14.40854594 TCP host 4 src={0.0.0.3:1600} dest={0.0.0.2:10001} Passive Close
57.521336685 [ sid 1 start 1.920456578 ] TCPclient 0 srv 4(0) rcvd 10000000B at 1438.826kbps - read() SUCCESS
57.521336685 TCP host 0 src={0.0.0.13:10001} dest={0.0.0.3:1600} Active Close
57.541885218 TCP host 4 src={0.0.0.3:1600} dest={0.0.0.13:10001} Active Close
57.541885218 TCP host 4 src={0.0.0.3:1600} dest={0.0.0.13:10001} Passive Close
| 1 timelines, 5 barriers, 117752 events, 7450 ms, 17 Kevt/s
$
```

By analyzing the output from the simulation, you can see the start and stop times for each data stream. The next step is to analyze this information to see how the data transfers performed.

Interpreting the Results

From the output data produced by SSFNet, you can determine the time it took for each individual data transfer. For the slow link from host 0 to host 4, the transfer started at about 1.92 seconds into the simulation, and finished at about 57.52 seconds, for a total transfer time of about 55.6 seconds, very close to what the ns test predicted. For the local file transfer, the transfer again started at about 1.92 seconds into the simulation, and finished at about 14.4 seconds, for a total transfer time of about 12.5 seconds. The ns prediction for this value was a little less close, but still in the same ballpark. Remember, the point of the simulation is to compare the two transfer times, which in both cases, so far, indicate that remote clients will observe a significant decrease in performance when running the network application.

Using dummynet

The first network emulator package to test is the dummynet application. This is used to create a test network environment on a FreeBSD system that can emulate the actual production network. This section explains how to create the dummynet network emulation, and how to observe network application behavior within the test network.

Building the Emulation Environment

Since the dummynet application intercepts packets at the kernel level of the FreeBSD system, you can configure dummynet to affect data traffic either between two installed network cards, or between the local system and the network.
For this test, I will configure dummynet to intercept packets between itself and a remote device on the test network. The test network application will be an FTP session from the local machine to the remote test host.

Dummynet builds rules that affect network traffic as it traverses the system kernel. A single rule will define the total network behavior between the two endpoints. You must incorporate all network delays and bandwidth limitations within the single rule.

WARNING: Remember that dummynet affects the network traffic both when it enters the kernel and as it leaves the kernel, so any delays configured must be cut in half.

Since dummynet uses the ipfw program, you must clear out any existing rules that may affect the test, and then enter the rules necessary to create the emulation rule:

```
# ipfw flush
Are you sure [yn] y
# ipfw add pipe 1 tcp from 192.168.1.1 to 192.168.1.6
# ipfw add pipe 2 tcp from 192.168.1.6 to 192.168.1.1
# ipfw pipe 1 config bw 1.5Mb/s delay 10ms
# ipfw pipe 2 config bw 1.5Mb/s delay 10ms
```

The first command clears any existing rules in the ipfw table. The second command creates a pipe used for all TCP traffic originating from the local host IP address (192.168.1.1) to the IP address of the remote test host (192.168.1.6). The next command creates a second pipe handling traffic in the opposite direction. After the two pipes are created, they can be configured for the appropriate network bandwidth limitation and delay time. These create an emulation environment for the T1 router and incorporate a network delay representing the delay found on the local network switches.

After the first emulation is complete, you must remove these rules, using the flush option, and create two new rules using the 100-Mbps link speed to emulate the local client test.
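Before running the FTP test, it is worth estimating what the emulated pipes should allow. The back-of-the-envelope Python below is my own arithmetic, not from the book; it divides the file size by the configured pipe bandwidth, ignoring TCP slow start, ACK overhead, and propagation delay, so the results are lower bounds. The 53-second figure for the 1.5-Mb/s pipe lines up well with the roughly 54 to 55.6 seconds the ns and SSFNet models reported for the remote client:

```python
FILE_BYTES = 10_000_000   # size of the test file sent over FTP
T1_BPS = 1_500_000        # bandwidth of the emulated T1 pipes
LAN_BPS = 100_000_000     # bandwidth for the local-client emulation

def transfer_seconds(nbytes, bps):
    """Ideal serialization time for nbytes over a link of bps bits/second."""
    return nbytes * 8 / bps

print(round(transfer_seconds(FILE_BYTES, T1_BPS), 1))   # 53.3
print(round(transfer_seconds(FILE_BYTES, LAN_BPS), 1))  # 0.8
```

If the measured FTP times land far above these bounds, the pipe configuration (or the test setup) deserves a second look before drawing conclusions about the application.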
Running the Emulation

After creating the first set of emulation rules, you can begin a simple network application test by starting an FTP session between the test hosts and sending a 10-Mb file to represent the data stream between the hosts. First, you must create a sample 10-Mb file to use. I like to echo a string of a known size to a file, then ping-pong copy the file to another file until the size is appropriate:

```
$ echo 0123456789012345678901234 > test
$ cat test > test1
$ cat test1 >> test
$ cat test >> test1
.
.
$ ls -al test
-rw-r--r--  1 rich  rich  10000000 Jan 28 15:39 test
$
```

This example creates a 25-byte file, and continually concatenates it onto a work file until the file size is 10 Mb. When the test file is complete, you are ready to start the FTP session:

[...]

... production network environment within a test network. As seen from the results, each of the network application performance tools produced results that were consistent with the actual results observed on the production network. This proves that you can indeed easily duplicate production network performance with a simplified test network environment. This chapter concludes our walk through network performance tools.
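Because each cat roughly doubles the file, hitting exactly 10,000,000 bytes with the ping-pong method takes some care. As an alternative, here is a small Python sketch of my own (any tool that produces a file of the right size will do) that builds the payload to an exact size:

```python
def make_test_payload(size, seed=b"0123456789012345678901234\n"):
    """Repeat a known pattern and trim the result to exactly `size` bytes."""
    reps = size // len(seed) + 1
    return (seed * reps)[:size]

payload = make_test_payload(10_000_000)
print(len(payload))  # 10000000

# To write the file out for the FTP test:
# with open("test", "wb") as f:
#     f.write(payload)
```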
[...] each of the network models to more accurately represent the actual production network environment.

Summary

This section of the book presented five different scenarios for testing network application performance for a production network environment. This chapter concluded the section by comparing the tools in a common scenario. The first step to working with network application performance tools is to [...] and ideas, to keep your network running smoothly. Happy networking.

APPENDIX

Resources

One of the advantages of using open source tools is their availability on the Internet. There are scores of Web sites devoted to network performance issues and the tools used to monitor and analyze network performance. This appendix lists some of the resources that are available for each of the tools presented in this book.

[...]

Ngày đăng: 14/08/2014, 12:20

TỪ KHÓA LIÊN QUAN