Chapter 1
Introduction
1.1 Background
Recently, wireless mobile ad-hoc networks (MANETs) have received much
attention from the research community for their important applications in emergency and
military situations, among other areas. Most current multi-hop wireless mobile ad-hoc
network implementations suffer from severe throughput limitations, lack robustness and
do not scale well. MANET nodes are typically resource constrained, e.g. battery power
may be limited, so higher data throughput cannot always be achieved by
increasing transmission power. Overall data throughput is also an issue for MANETs
because communications are often multi-hop in nature. It is therefore a challenging research
problem to send more information with low power in order to optimize throughput.
To improve network throughput, the novel idea of network coding for packet
switched networks was proposed by Ahlswede et al. [1]. Traditionally, an
intermediate node in the network simply forwards input packets to the intended nodes.
Network coding, however, allows the intermediate node to combine several input packets
into one or more output packets, on the assumption that the intended nodes are
able to decode these combined packets. Figure 1 is a simple illustration of this promising
idea, showing how network coding can save a great deal of transmissions and thereby
improve the overall wireless network throughput.
In the three-node scenario in Figure 1, Alice and Bob want to send packets to each
other with the help of a relay. Without network coding, Alice sends packet p1 to the relay
and then the relay sends it to Bob. Likewise, Bob sends packet p2 to Alice. Therefore, a
total of 4 transmissions are required for Alice and Bob to exchange one pair of packets.
With network coding, the relay XORs packets p1 and p2 together and broadcasts
the combined packet to both Alice and Bob, who then extract the packets they
want by performing the corresponding decoding operation. The total number of transmissions is
therefore reduced to 3. This illustrates the basic idea of how network coding is able to
improve the network throughput.
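The exchange above can be sketched in a few lines of Python. This is a toy illustration of the XOR idea only, not COPE's actual code, and the packet contents are made up:

```python
# Toy illustration of the three-node exchange: the relay XORs the two
# packets and broadcasts one combined packet instead of forwarding each.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"from-Alice"        # packet Alice sends toward Bob
p2 = b"from-Bob\x00\x00"  # packet Bob sends toward Alice (padded to equal length)

coded = xor_bytes(p1, p2)          # one broadcast instead of two unicasts

# Each endpoint XORs the coded packet with the packet it already knows:
assert xor_bytes(coded, p1) == p2  # Alice recovers Bob's packet
assert xor_bytes(coded, p2) == p1  # Bob recovers Alice's packet
```

Because each endpoint already holds its own packet, one broadcast carries both payloads, which is exactly why the transmission count drops from 4 to 3.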
(a) No coding
(b) Coding
Figure 1: A simplified illustration of network coding, showing how network
coding saves bandwidth: Alice and Bob exchange a pair of packets using 3
transmissions instead of 4.
Recently the research focus on network coding has shifted towards practical
aspects, in particular for wireless mesh networks. For example, COPE [2] is
regarded as the first practical implementation of network coding in wireless mesh
networks. In [2-5] the authors introduce COPE as a new packet forwarding architecture
that combines several packets together using the bit-wise exclusive OR (XOR) operation,
coupled with a completely opportunistic approach to routing packets. COPE inserts a
coding shim between the IP and MAC layers, which identifies coding opportunities and
benefits from them by forwarding multiple packets in a single transmission. Moreover,
by taking advantage of the opportunistic property and a simple XOR coding algorithm,
COPE manages to address the practical issues of integrating network coding into the
current communication protocol stack. The details of the opportunistic property and
practical considerations are described in the next chapter. Experimental results [2-5] have
shown that COPE can substantially improve the throughput of wireless mesh networks, by
a factor of 3 to 4 in simple one-hop topologies, and it also slightly improves the throughput
of large scale multi-hop wireless mesh networks.
1.2 Motivations
Though S. Katti et al. [2] have shown COPE to work well in wireless mesh networks,
a number of issues remain to be investigated. For example, it is shown that in
a large scale multi-hop wireless environment, COPE is unable to handle TCP traffic as
well as it handles UDP traffic, because TCP's retransmission scheme and congestion
control degrade the performance. In general, network coding causes packet
reordering, and for TCP traffic the performance suffers further. On the other hand, it is
observed that even for UDP traffic, the performance of COPE in large scale networks is
much worse than in single-hop topologies. One of the main reasons could be that
the impact of COPE's control messages on the overall performance of the opportunistic
network coding scheme has not been studied. We further suspect that if COPE's control
messages are scheduled intelligently, the performance of the opportunistic network coding
scheme in large scale scenarios may be improved.
In addition, few works have been reported on designing a practical network coding
scheme for wireless mobile ad-hoc networks. MANETs play a very important role in many
fields of application nowadays, for example emergency situations and military
applications. However, the throughput and robustness of wireless mobile ad-hoc networks
are limited by many factors, such as constrained node resources and changing topologies.
We are therefore interested in investigating the outstanding issues described here, with the
aim of improving the performance of opportunistic network coding in wireless mobile
ad-hoc networks.
In this thesis we propose to first study the performance behavior
and key control parameters of opportunistic network coding in large scale static wireless
mesh networks. Our proposed approach is independent of the routing protocol, so that the
opportunistic network coding scheme can take advantage of any newly designed or existing
routing protocol. To validate our proposed schemes, we choose QualNet as the
implementation environment and simulation platform, since QualNet is a highly realistic
network simulator whose protocol models closely follow actual implementations. Our
proposed solutions and simulation results should therefore be much closer to real system
implementations.
Due to the time restrictions of this project, our experiments deal only with UDP
traffic, rather than the various traffic types found in the real world, and could not be
carried out in large scale mobile ad-hoc network scenarios. However, the simulation results
and analysis presented in this thesis should shed light on further research into complex
traffic situations in mobile environments.
1.3 Thesis Contributions
This thesis has carried out the following work and contributes to the understanding
of the performance of practical network coding in wireless mesh networks. Detailed
studies of the behavior and key control parameters of COPE on simple topologies
shed light on how to address the issues in large scale wireless networks. Based on
these insights, we have proposed and designed an intelligent opportunistic network
coding scheme that is particularly suitable for the MANET environment. We summarize
our contributions as follows:
1. Extend the QualNet simulator to include opportunistic network coding
functionalities – We have designed and developed the new functionalities in the
existing protocol stack of QualNet and integrated them into QualNet as a new
enhanced network layer protocol. Through this implementation, the performance of
any opportunistic network coding scheme, on any network size, can be easily studied
through QualNet simulations.
2. Evaluate and study the behavior of the opportunistic network coding scheme via our
enhanced QualNet on both simple and large scale topologies – We enhance
the original COPE functionality and simulate its behavior in the Alice-and-Bob (i.e.
1-to-1), X and Cross topologies. Based on the simulation results from these simple
topologies, we further study its performance in a 20-node multi-hop wireless mesh
network.
3. Evaluate and study the key parameters of the opportunistic network coding scheme
with respect to overall network throughput, e.g. the impact of control messages on
its performance. We then use the findings for a further study of the network coding
scheme on large scale wireless ad-hoc networks, and find that there is an optimal
value of the control message interval that maximizes the throughput improvement.
4. Propose an intelligent opportunistic network coding scheme that is suitable for
large scale wireless ad-hoc networks and demonstrate the effectiveness of our
solutions through simulations. The proposed intelligent algorithm reduces the
overhead and interference caused by control messages in large scale multi-hop
networks without degrading the benefit brought by network coding.
1.4 Related Works
Network coding is regarded as a promising technique to improve network
throughput. It originates from Ahlswede et al. [1], who demonstrated that intermediate
nodes in a network may combine several received packets into one or several output packets.
Much theoretical work has been done to optimize network coding schemes in information
and networking systems. Li et al. [6] showed that linear codes are sufficient for
multicast traffic to achieve the maximum capacity bounds. Koetter and
Medard [7] proposed an algebraic approach and showed that coding and decoding can
be done in polynomial time. Ho et al. [8] presented the concept of random linear
network coding, which makes network coding more practical, especially in distributed
settings such as wireless networks. In [4] an intra-flow network coding scheme is
proposed to deal with intra-flow traffic, which can effectively handle reliability issues.
Joon-Sang Park et al. [9] presented CodeCast, a network coding based ad-hoc multicast
protocol that is especially well suited for multimedia applications in wireless networks.
All of the above works show results analytically and/or through extensive simulations.
In the last few years, many researchers have focused on developing practical
network coding techniques for inter-flow traffic in wireless networks [10-12], in order to
significantly improve network capacity. A great deal of attention has gone into
dealing with practical issues and designing implementable protocols with network
coding [13-15]. The common practical issues are how to integrate the network
coding technique into current network protocol stacks and how to keep the complexity of
the coding and decoding scheme low. In [2] S. Katti et al. proposed COPE, regarded as the
first practical network coding implementation for wireless mesh networks dealing with
inter-flow, unicast traffic. With its opportunistic listening and opportunistic coding
characteristics, COPE exploits the broadcast nature of the wireless channel. Through
eavesdropping and sharing information among neighbors, an intermediate node running
COPE can simply XOR multiple packets into one packet and broadcast it to several
neighbors. All the neighbors are then able to decode their specific packets from the
combined packet by the same simple XOR method. The authors show that this opportunistic
network coding scheme improves network throughput several-fold for wireless mesh
networks. In addition, K. Ajoy et al. [21] further evaluated the performance of COPE with
two different routing protocols, AODV and OLSR, using three different queue
management schemes, FIFO, RED and RIO. The authors show that OLSR provides
better performance than AODV for COPE, while FIFO achieves the shortest packet delay
among the three queue management schemes. Finally, paper [16], a valuable
primer on network coding, explicitly explains some popular network coding
schemes and describes the advantages and challenges in both research and practical
implementation. It also enumerates other promising fields where network coding
could be applied, from peer-to-peer (P2P) file distribution networks to wireless ad-hoc
networks, and from improving network capacity to increasing network security.
1.5 Thesis Organization
The rest of this thesis is organized as follows.
Chapter 2 presents an overview of opportunistic network coding, including the two
components of its opportunistic character, i.e. opportunistic listening and opportunistic
coding. We also explain the packet coding and decoding algorithms implemented in this
specific opportunistic network coding scheme.
In Chapter 3, we present the design of the opportunistic network coding algorithm and
architecture. A detailed description of the control flow shows how we design
this opportunistic network coding scheme.
Chapter 4 shows our implementation of the opportunistic network coding scheme in the
QualNet simulator. We first introduce the architecture of the QualNet simulator, including
its protocol model and application program interface. Then we enumerate some key
programming abstractions of our implementation, which show how we implement the
opportunistic network coding scheme and how we integrate it into the QualNet simulator.
In Chapter 5, we present our simulation results and analyze the observations,
including the parameters that affect the results and ways to improve the coding scheme.
In Chapter 6, an intelligent version of the opportunistic network coding scheme is
proposed based on the conclusions drawn in the previous chapter. We further demonstrate
how this intelligent scheme is suitable for MANETs.
Chapter 7 presents the performance evaluation of the intelligent opportunistic
network coding scheme proposed in the previous chapter.
In the last chapter, Chapter 8, we draw some conclusions based on those
simulations. We also highlight a number of areas where further enhancements can be
explored.
Chapter 2
Opportunistic Network Coding Overview
COPE is a new forwarding architecture for current packet switched networks,
especially wireless mesh networks carrying unicast traffic. Traditionally, an intermediate
node in a wireless network directly forwards each input packet to its next hop. In COPE,
however, the intermediate node may XOR several input packets into one output packet
and then broadcast the combined packet to several intended neighbors, on the
assumption that all of them are able to decode it. No synchronization or prior knowledge
of senders, receivers, or traffic rates is necessary; all of these may vary at any time.
COPE depends heavily on local information shared by neighbors to detect and exploit
coding opportunities whenever they arise. To successfully exchange this information
among neighbors in a wireless environment, an opportunistic mechanism is introduced
which has two main components, i.e.
1. Opportunistic Listening
2. Opportunistic Coding
In this chapter, all these techniques will be explained and the corresponding coding
and decoding schemes will also be introduced.
2.1 Opportunistic Listening
The wireless network comprises a broadcast medium wherein the nodes in the
network are able to hear packets even when they are not the intended recipients. This is the
basic idea of opportunistic listening which allows the nodes to snoop in the network and
store all overheard packets in their local buffers, called the packet pool. To achieve this, all
10
nodes in the network should be equipped with omni-direction antennas and set to
“promiscuous” listening mode. A specific callback function is necessary to handle the
packets heard in the promiscuous mode. We will introduce how the callback function is
implemented in QualNet simulator in Chapter 4. The packet pool, in our implementation, is
an FIFO queue with fixed capacity of N0. The larger the N0 is, the more packets the node
can store, therefore, there would be more opportunities to perform network coding.
However, if N0 is too large, the packets in the head of packet pool would be too old, thus
degrading the coding efficiency. In our implementation, N0 is set at 128. The total amount
of storage required is less than 192 kilobytes, which is easily available on today’s PCs,
laptops or PDAs. This constitutes the Opportunistic Listening function.
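The packet pool can be sketched as a fixed-capacity FIFO keyed by packet ID. This is a simplified sketch; the class and method names are ours, not from the COPE code:

```python
from collections import OrderedDict

class PacketPool:
    """Fixed-capacity FIFO of overheard packets, keyed by packet ID.
    When the pool is full, the oldest overheard packet is evicted
    (capacity would be N0 = 128 in our implementation)."""
    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self.packets = OrderedDict()   # packet_id -> payload, oldest first

    def store(self, packet_id, payload):
        if packet_id in self.packets:
            return                              # already overheard
        if len(self.packets) >= self.capacity:
            self.packets.popitem(last=False)    # evict the oldest entry
        self.packets[packet_id] = payload

    def __contains__(self, packet_id):
        return packet_id in self.packets

# With capacity 3, storing five packets evicts the two oldest:
pool = PacketPool(capacity=3)
for i in range(5):
    pool.store(i, b"payload")
assert list(pool.packets) == [2, 3, 4]
```

The FIFO eviction models the trade-off described above: a larger capacity keeps more coding opportunities alive, but lets stale packets linger at the head of the pool.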
In addition to promiscuous listening, each node periodically sends reception reports
to its neighbors to share information that is important to the coding decision. A reception
report lists the packets that the host has heard and stored in its packet pool. Once the
neighbors have received these reception reports, they extract the relevant information and
store it in another local buffer, called the report pool. The reception reports are normally
inserted into the data packets as an extra packet header and sent together with the data.
However, when a node has no data packets to send, it periodically broadcasts "Hello"
messages containing the reception reports to its neighbors. In the original COPE
implementation, this Hello message doubles as the control packet, but in our
implementation control packets are a separate kind of packet. The Hello message is part
of an intelligent scheduling scheme, which will be described in Chapter 6.
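The report-sharing logic above can be sketched as follows. This is our own simplified sketch; the function and field names are assumptions, not COPE's actual message format:

```python
def build_outgoing(data_packet, packet_pool_ids, hello_interval_expired):
    """Attach a reception report to outgoing data; if there is no data to
    send and the Hello timer has fired, send a standalone Hello message
    carrying the same report instead."""
    report = {"heard": sorted(packet_pool_ids)}   # IDs overheard so far
    if data_packet is not None:
        # Normal case: the report piggybacks on a data packet.
        return {"type": "data", "payload": data_packet, "report": report}
    if hello_interval_expired:
        # No data to send: broadcast a Hello carrying the report.
        return {"type": "hello", "report": report}
    return None  # nothing to send this round

msg = build_outgoing(None, {7, 3}, hello_interval_expired=True)
assert msg == {"type": "hello", "report": {"heard": [3, 7]}}
```

Piggybacking keeps the reporting overhead near zero whenever data traffic is flowing, while the Hello path bounds how stale a neighbor's view can become during idle periods.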
In summary, through the Opportunistic Listening technique, the nodes learn and
share their states with each other, thus contributing to the nodes’ network coding decision.
2.2 Opportunistic Coding
Opportunistic coding allows nodes to combine multiple packets into a single packet
based on the assumption that all intended recipients can extract the packets they want from
the combined packet. However, the main issue is regarding which packets to code, and
how to code. Each node should answer this question based on local information and
without further consulting with other nodes. A basic-method to address the coding issue in
wireless networks is that each node maintains a FIFO queue of packets to be forwarded.
When the MAC indicates that the node can send, the node picks the packets at the head of
the queue, checks which other packets in the queue which may be encoded with this packet,
XORs those packets together, and broadcasts the single combined packet.
However, the question is which packets should be combined to maximize network
throughput, since a node may have multiple coding options. It should pick the one that
maximizes the number of packets delivered in a single transmission. In [2] the authors
provide a good example to illustrate this situation. In Figure 2(b), node E has eight
packets P1-P8 in its output queue. The list in Figure 2(a) shows the next hop of each
packet in E's output queue. When the MAC notifies node E to transmit, node E dequeues
packet P1 from the head of the output queue and tries to code it with other packets in the
output queue. With the help of the information in the reception reports, node E knows
which packets its neighbors have in their packet pools. Node E now has the coding
options shown in Figure 2(c).
The first option is P1 ⊕ P2, which is a bad choice because node D has not heard
P2 and therefore cannot decode it. The second option in Figure 2(c) shows a better
coding decision, P1 ⊕ P3: node C has packet P1 so it can decode P3, and node D has
packet P3 so it can decode P1. As for the third and fourth options, P1 ⊕ P3 ⊕ P4 and
P1 ⊕ P3 ⊕ P5, these are bad coding decisions because not all of the intended recipients
can decode them. One interesting observation is that the second option is a better coding
decision than the third and fourth options even though those combine more packets than
the second option. The fifth option, P1 ⊕ P3 ⊕ P6, is much better still, as all three
recipients (B, C and D) can decode their intended packets: node B has P1 and P3 so it
can decode P6, node C has P1 and P6 so it can decode P3, and node D has P3 and P6 so
it can decode P1. The sixth option, P1 ⊕ P3 ⊕ P6 ⊕ P7, combines four packets but is a
bad decision because not every intended next hop can decode it. The seventh option,
P1 ⊕ P3 ⊕ P6 ⊕ P8, is the best coding decision for this scenario, XOR-ing the maximum
number of packets while remaining decodable: node A has P1, P3 and P6 so it can decode
P8, and similarly nodes B, C and D can decode their intended packets P6, P3 and P1,
respectively, from the encoded packet P1 ⊕ P3 ⊕ P6 ⊕ P8.
As can be seen from this simple example, a general opportunistic coding rule, as
proposed in [2], is the following: a node can XOR n packets p1, …, pn together and
transmit them to n next hops r1, …, rn only if each next hop ri has all n-1 packets pj for
j ≠ i. Each time a node is ready to send, it tries to find the maximum n, in order to code
and transmit as many packets as possible in a single transmission. This coding scheme
has a few
other important characteristics. First, there is no scheduling or assumed synchronization.
Second, no packet is delayed: every time the node sends, it picks the head of the queue
just as it would in the conventional approach; the difference is that, whenever possible,
the node loads each transmission with additional information through coding. Third, the
scheme does not cause packet reordering, as it considers packets according to their order
in the FIFO queue for both transmission and coding. This characteristic is particularly
important for TCP flows, which may mistake packet reordering for a congestion signal.
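The coding rule stated above can be written directly as a predicate. This is our own sketch; `pools` maps each next hop to the set of packet IDs its reception report says it holds, and the packet pools mirror the Figure 2 example:

```python
def can_code(packets, nexthops, pools):
    """COPE rule: packets p1..pn headed to next hops r1..rn may be XORed
    together only if every next hop r_i already has the other n-1 packets."""
    return all(
        all(p in pools[r] for j, p in enumerate(packets) if j != i)
        for i, r in enumerate(nexthops)
    )

pools = {"A": {"P1", "P3", "P6"}, "B": {"P1", "P3"},
         "C": {"P1", "P6"}, "D": {"P3", "P6"}}
# P1 -> D, P3 -> C, P6 -> B: every next hop holds the other two packets.
assert can_code(["P1", "P3", "P6"], ["D", "C", "B"], pools)
# P1 -> D, P2 -> C: D has not heard P2, so this combination is rejected.
assert not can_code(["P1", "P2"], ["D", "C"], pools)
```

The predicate makes the asymmetry of the rule explicit: each recipient only needs the packets intended for the *others*, never its own.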
2.3 Packet Coding Algorithm
In this section, we introduce the details of the packet coding algorithm based on the
opportunistic coding idea described above. Some practical issues in the implementation,
and their specific solutions, are also discussed.
First, the coding scheme in the original COPE does not introduce additional delay:
the node always dequeues the head of its output queue and checks for coding
opportunities when it is ready to send. If there is no coding opportunity, the node sends
the packet without waiting for the arrival of a matching codable packet. In our design,
however, we have added a waiting scheme before the coding procedure, shown in Figure 3.
This waiting scheme takes effect immediately before the network coding procedure: when
the node is ready to send, it examines its output queue and applies the waiting scheme
before executing the network coding. The waiting scheme works as follows: a node
checks for a coding opportunity if and only if the number of packets in its output queue is
greater than N and no new packet has arrived during the last T seconds. N and T are called
queue threshold and waiting duration of the waiting scheme, respectively.

(a) Next hop of each packet in E's output queue: P1 → D, P2 → C, P3 → C, P4 → C,
P5 → B, P6 → B, P7 → A, P8 → A.
(b) Sending scenario: E's output queue holds P1–P8; the neighbors' packet pools are
A = {P1, P3, P6}, B = {P1, P3, P6, P8}, C = {P1, P6, P7, P8}, D = {P3, P6, P8}.
(c) Possible coding options: P1 ⊕ P2 (bad), P1 ⊕ P3 (better), P1 ⊕ P3 ⊕ P4 (bad),
P1 ⊕ P3 ⊕ P5 (bad), P1 ⊕ P3 ⊕ P6 (much better), P1 ⊕ P3 ⊕ P6 ⊕ P7 (bad),
P1 ⊕ P3 ⊕ P6 ⊕ P8 (best).
Figure 2: An Example of Opportunistic Coding

The values of N and T would significantly impact the performance of the algorithm. It is understandable
that the waiting scheme accumulates N packets in the output queue, improving the
coding opportunities at the price of delaying each packet by up to T. However, the
resulting improvement in network throughput may shorten queueing delay, which may
compensate for the extra delay introduced by the waiting scheme.
Figure 3: A simplified illustration of the waiting scheme, showing the threshold N on the
output queue. A node checks for coding opportunities if and only if the threshold N is
reached and no packet has arrived during the last T seconds.
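The waiting rule can be sketched as a simple gate. This is our own sketch; the function name and the example values of N and T are assumptions for illustration:

```python
import time

def should_try_coding(queue_len, last_arrival_time, N=4, T=0.01, now=None):
    """Waiting scheme: attempt coding only when more than N packets are
    queued and no new packet has arrived during the last T seconds."""
    now = time.monotonic() if now is None else now
    return queue_len > N and (now - last_arrival_time) >= T

# Queue long enough and quiet for longer than T: coding is attempted.
assert should_try_coding(5, last_arrival_time=0.0, N=4, T=0.01, now=1.0)
# A packet arrived only 1 ms ago: keep waiting.
assert not should_try_coding(5, last_arrival_time=0.999, N=4, T=0.01, now=1.0)
```

Passing `now` explicitly keeps the gate deterministic for testing; in the simulator the current simulation time would be used instead.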
Second, the coding scheme gives preference to XOR-ing packets of similar lengths,
because XOR-ing small packets with larger ones reduces the overall bandwidth savings.
Empirical studies show that the packet-size distribution in the Internet is bimodal, with
peaks at 40 and 1500 bytes [17]. We can therefore limit the overhead of searching for
packets of the right size by distinguishing between small and large packets. We still
XOR packets of different sizes when necessary; in this case, the shorter packets are
padded with zeros, and the receiving node can easily remove the padding by checking
the packet-size field in the IP header of each native packet.
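Padding before XOR can be sketched as follows. This is a toy sketch of the zero-padding idea; in the real scheme the receiver trims using the native packet's IP length field, which we model here simply as the known packet length:

```python
def xor_padded(a: bytes, b: bytes) -> bytes:
    """XOR two packets of different sizes by zero-padding the shorter one."""
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

small, large = b"ack", b"long-payload"
coded = xor_padded(small, large)

# A receiver holding `large` recovers `small` plus trailing zeros, then
# trims to the length recorded in the native packet's IP header (3 here).
recovered = xor_padded(coded, large)[:len(small)]
assert recovered == small
```

Zero-padding is free to undo because XOR with a zero byte is the identity, which is exactly why the length field alone suffices to strip it.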
Third, the coding scheme never encodes together packets headed to the same next
hop, since that next hop would be unable to decode them; we only need to consider
packets headed to different next hops. The relay node therefore maintains a virtual queue
for each neighbor: when a new packet is inserted into the output queue, an entry is also
added to the virtual queue of its intended neighbor.
Finally, we want to ensure that each intended neighbor is able to decode its native
packet from the combined packet. Thus, for each packet in the output queue, the relay
node checks whether each of its neighbors has already heard the packet, based on the
information learned from the reception reports mentioned previously.
In our implementation, each node maintains the following data structures.
Each node has three FIFO queues of packets to be forwarded, collectively called
the output queue (the default node configuration). The three queues have different
priorities from 0 to 2: data packets have the lowest priority 0, while Hello messages and
control messages have the highest priority 2.
For each neighbor, the node maintains two per-neighbor virtual queues, one for
small packets (e.g. smaller than 100 bytes) and the other for large packets. The virtual
queues for a neighbor A contain pointers to the packets in the output queue whose next
hop is A.
Each node also maintains two extra buffers, named the packet pool and the
reception report pool. The packet pool stores the native packets the node has overheard,
while the reception report pool stores the reports indicating which packets have been
overheard by each neighbor. One entry of a report contains the packet's ID together with
its previous hop and next hop addresses. The details of the packet format will be
explained in the next chapter.
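Taken together, the per-node state can be sketched as below. This is our own simplified sketch; the class and field names are assumptions, not identifiers from the actual implementation:

```python
from collections import deque

class CopeNodeState:
    """Per-node buffers used by the coding scheme (simplified sketch)."""
    def __init__(self, neighbors, small_threshold=100):
        # Three priority output queues: 0 = data (lowest), 2 = Hello/control.
        self.output_queues = {prio: deque() for prio in (0, 1, 2)}
        # Two virtual queues per neighbor, holding references to queued
        # packets, split into small (< small_threshold bytes) and large.
        self.virtual_queues = {n: {"small": deque(), "large": deque()}
                               for n in neighbors}
        self.small_threshold = small_threshold
        self.packet_pool = {}   # packet_id -> overheard native packet
        self.report_pool = {n: set() for n in neighbors}  # IDs each neighbor heard

    def enqueue(self, packet_id, payload, nexthop, priority=0):
        """Insert a packet into the output queue and mirror it in the
        size-appropriate virtual queue of its next hop."""
        entry = (packet_id, payload, nexthop)
        self.output_queues[priority].append(entry)
        size_class = ("small" if len(payload) < self.small_threshold
                      else "large")
        self.virtual_queues[nexthop][size_class].append(entry)

state = CopeNodeState(neighbors=["A", "B"])
state.enqueue("p1", b"x" * 20, "A")    # small packet for A
state.enqueue("p2", b"y" * 500, "A")   # large packet for A
assert len(state.virtual_queues["A"]["small"]) == 1
assert len(state.virtual_queues["A"]["large"]) == 1
```

Keeping the virtual queues as references into the output queue means the coding procedure can scan one candidate per neighbor per size class without walking the whole output queue.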
The specific coding procedure is illustrated by the following pseudo-code.
Coding procedure –
    Pick packet p at the head of the output queue
    Sending Packets = {p}
    Nexthops = {nexthop(p)}
    if size(p) > 100 bytes then
        queue = virtual queues for large packets
    else
        queue = virtual queues for small packets
    end if
    for neighbor i = 1 to M do
        Pick packet pi, the head of virtual queue Q(i, queue)
        if ∀ n ∈ Nexthops ∪ {i}: n can decode p ⊕ pi based on the reception reports then
            p = p ⊕ pi
            Sending Packets = Sending Packets ∪ {pi}
            Nexthops = Nexthops ∪ {i}
        end if
    end for
    queue = !queue
    for neighbor i = 1 to M do
        Pick packet pi, the head of virtual queue Q(i, queue)
        if ∀ n ∈ Nexthops ∪ {i}: n can decode p ⊕ pi based on the reception reports then
            p = p ⊕ pi
            Sending Packets = Sending Packets ∪ {pi}
            Nexthops = Nexthops ∪ {i}
        end if
    end for
    return Sending Packets
In the above pseudo-code, p represents the packet being assembled from the output
queue, while Q denotes the overall virtual queue structure for all neighbours. The
variable queue is a two-state variable indicating which of the two queue sets, for large or
small packets, is currently selected. The algorithm uses the "!" operation to switch
between these two states: for example, after queue = !queue, if queue originally indicated
the queues for large packets it now indicates those for small packets, and vice versa.
Given the values of i and queue, Q(i, queue) can locate the specific virtual queue for
neighbour i.
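The pseudo-code above can be rendered as runnable Python. This is our own sketch under simplifying assumptions: packets are represented by (id, size, nexthop) tuples, and the actual XOR of payloads is elided since we only track which packet IDs get combined:

```python
def coding_procedure(output_queue, virtual_queues, report_pool,
                     size_threshold=100):
    """Sketch of the coding procedure (names are ours).
    output_queue: list of (packet_id, size, nexthop) tuples, head first.
    virtual_queues: nexthop -> {"small": [...], "large": [...]} of tuples.
    report_pool: nexthop -> set of packet IDs that neighbor has heard."""
    head_id, head_size, head_hop = output_queue[0]
    sending = [(head_id, head_hop)]    # packets chosen so far + next hops

    def all_can_decode(candidates):
        # Every next hop must already hold all combined packets except its own.
        return all(
            all(other_id in report_pool[hop]
                for other_id, other_hop in candidates if other_hop != hop)
            for _, hop in candidates)

    first = "large" if head_size > size_threshold else "small"
    second = "small" if first == "large" else "large"
    for size_class in (first, second):   # prefer similar-size packets
        for neighbor, queues in virtual_queues.items():
            if neighbor in {h for _, h in sending} or not queues[size_class]:
                continue                 # never two packets to one next hop
            cand_id, _, cand_hop = queues[size_class][0]
            trial = sending + [(cand_id, cand_hop)]
            if all_can_decode(trial):
                sending = trial          # p = p XOR pi (payload XOR elided)
    return sending

# Figure-2-style scenario: P1 -> D codes with P3 -> C and P6 -> B.
report_pool = {"B": {"P1", "P3"}, "C": {"P1", "P6"}, "D": {"P3", "P6"}}
virtual_queues = {"B": {"small": [], "large": [("P6", 500, "B")]},
                  "C": {"small": [], "large": [("P3", 500, "C")]},
                  "D": {"small": [], "large": []}}
chosen = coding_procedure([("P1", 500, "D")], virtual_queues, report_pool)
assert {pid for pid, _ in chosen} == {"P1", "P3", "P6"}
```

The greedy structure mirrors the pseudo-code: candidates are accepted one neighbor at a time, and each acceptance must keep the whole combination decodable by every next hop.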
In the original coding scheme, the authors introduced an intelligent guessing
scheme based on the integrated ETX [18] routing metric. In congestion situations,
reception reports may be dropped on the wireless channel or may reach the intended node
too late, so the relay node may miss some coding opportunities. Depending on the packet
delivery probability of the wireless link, the relay node may guess whether the intended
neighbour has received the packets or not. Even though the authors show that this
intelligent guessing technique can benefit the total network coding opportunity under
congestion in static wireless mesh networks, we do not implement it in our design, for
several reasons. First, we would like to design an opportunistic network coding scheme
that is independent of the routing protocol, making the algorithm flexible in different
scenarios by cooperating with whatever routing protocol is suitable, instead of being
integrated with an ETX-based routing algorithm. Furthermore, the ETX algorithm
calculates its metric by measuring the loss rate of broadcast packets between a pair of
neighbour nodes, indicating the link quality. This method is not suitable for the wireless
mobile environment, where the topology, and with it the link quality, is always changing.
In contrast, AODV [19] and OLSR [20] are two routing protocols that are more practical
for wireless mobile ad-hoc networks than those based on ETX.
2.4 Packet Decoding
The packet decoding scheme is much simpler than the coding side. As mentioned
above, in COPE each node maintains an extra buffer, named the packet pool, to store a
copy of the packets it has overheard or sent out. Each packet is identified by a packet ID
consisting of the packet's source address and IP sequence number. When a node receives
an encoded packet consisting of n native packets, it goes through the IDs of the native
packets one by one, looks them up in the local packet pool, and retrieves the
corresponding n-1 packets. It then XORs these n-1 packets with the received encoded
packet to obtain the intended packet.
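Decoding can be sketched as follows, reusing a byte-wise XOR helper. This is our own sketch; the function names and the packet-pool representation are assumptions:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def decode(encoded_payload, native_ids, packet_pool):
    """XOR the encoded packet with the n-1 native packets found in the
    local packet pool; what remains is the one missing native packet.
    Returns (missing_id, payload), or (None, None) if undecodable."""
    result = encoded_payload
    missing = None
    for pid in native_ids:
        if pid in packet_pool:
            result = xor_bytes(result, packet_pool[pid])
        elif missing is None:
            missing = pid
        else:
            return None, None   # more than one unknown: cannot decode
    return missing, result

pool = {"p1": b"\x01\x01", "p2": b"\x02\x02"}
encoded = xor_bytes(xor_bytes(b"\x01\x01", b"\x02\x02"), b"\x04\x04")
pid, payload = decode(encoded, ["p1", "p2", "p3"], pool)
assert (pid, payload) == ("p3", b"\x04\x04")
```

The sketch also makes the failure mode explicit: if the pool is missing more than one of the native packets, the XOR cannot be unwound and the encoded packet is useless to this node.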
2.5 Pseudo Broadcast
In COPE, a node may send packets in encoded form: a single encoded packet
carries information from several packets with several different next hops. Moreover, for
opportunistic listening, all nodes snoop on the network to monitor all packets transmitted
among their neighbors. The natural way to do this would be broadcast. However, one of
the biggest disadvantages of the 802.11 MAC protocol is that a recipient does not send an
acknowledgement in response to a broadcast packet. In the absence of acknowledgements,
broadcast mode offers no retransmissions and consequently very low reliability. In
addition, a broadcast source does not detect collisions, and thus does not back off and
retransmit. If several nodes sharing the same wireless channel broadcast data packets to
their neighbors, the total network throughput would be severely degraded by the resulting
congestion. Unicast mode, on the other hand, provides sender retransmissions and
back-off, but only to one specific destination at a time; it does not support opportunistic
listening and consequently not opportunistic coding.
To address this problem, pseudo broadcast is introduced in COPE. Pseudo broadcast
is actually unicast, and therefore benefits from unicast's reliability and back-off
mechanism. The link-layer destination field of the encoded packet is set to the MAC
address of one of the intended recipients, and an extra header listing all the other next
hops of the encoded packet (except the link-layer destination) is added after the
link-layer header. Recall that all nodes in the network listen in promiscuous mode: they
snoop on the network by eavesdropping on all packets transmitted among their neighbors,
so a node is able to process packets not addressed to it. When a node hears an encoded
packet, it checks the link-layer destination field to determine whether it is the
intended receiver. If it is, it processes the packet directly. If not, the node further
checks the next-hop list in the extra header to see whether it is one of the listed next
hops. If not, it simply stores a copy of the packet (as a native packet) in its packet
pool. If it is a listed next hop, it processes the encoded packet further to retrieve the
intended native packet and then stores a copy of the decoded packet in its packet pool.
As all packets are sent using 802.11 unicast, the MAC layer is able to detect collisions
and back off properly. Pseudo broadcast is therefore more reliable than simple broadcast
while retaining the advantages of broadcast.
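The receive-side decision just described can be condensed into a short sketch. The structure and field names below are our own illustrative assumptions, not COPE's or QualNet's actual definitions.

```c
#include <stdint.h>

/* Hypothetical header view of a pseudo-broadcast encoded packet:
   one link-layer destination plus a list of the remaining next hops. */
typedef struct {
    uint32_t mac_dest;      /* link-layer destination (one next hop) */
    uint32_t next_hops[8];  /* remaining next hops from the extra header */
    int      n_next_hops;
} EncodedHeader;

enum RxAction { RX_PROCESS, RX_DECODE_AND_STORE, RX_STORE_ONLY };

/* Decide what an overhearing node does with a received encoded packet. */
static enum RxAction classify_rx(const EncodedHeader *h, uint32_t my_addr)
{
    if (h->mac_dest == my_addr)
        return RX_PROCESS;              /* we are the link-layer receiver */
    for (int i = 0; i < h->n_next_hops; i++)
        if (h->next_hops[i] == my_addr)
            return RX_DECODE_AND_STORE; /* listed next hop: decode it */
    return RX_STORE_ONLY;               /* overheard only: keep a copy */
}
```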
In this chapter, we have presented an overview of opportunistic network coding
for the wireless environment. In the next chapter, we will present the specific
architecture of opportunistic network coding in our implementation, including the packet
header structure and the overall control flow of the algorithm.
Chapter 3
Opportunistic Network Coding Architecture
In this chapter, we introduce the architecture of the opportunistic network coding
scheme which we have implemented in QualNet simulator. The details of the architecture
are based on the overview of the opportunistic characteristics, coding and decoding
algorithm introduced in the previous chapter. Here the packet header structure will be
shown and the functionality of each field in the header will be explained. Next the overall
control flowchart will be presented to illustrate the structure of the opportunistic network
coding algorithm in our implementation.
3.1 Packet header
Figure 4 shows the modified variable-length coding header for the opportunistic
network coding scheme, which is inserted into each packet. If the routing protocol has its
own header, our coding header sits between the routing and MAC-layer headers.
Otherwise, it sits between the IP and MAC headers. Only the shaded fields in Figure 4 are
required in every coding header (called the constant block). Besides this, there are two
other header blocks containing the identifiers of the coded native packets and the
reception reports.
Constant block: The first block records some constant values for the whole
coding header. For example, it records the number of coded native packets in this encoded
packet, the number of reception reports attached in this header, the packet sequence
number and the total length of the header. Besides these, protocol version information
and a parity check can also be inserted in this constant block. In our implementation, we
have added version and checksum fields to this block.
Figure 4: Packet header for our algorithm. The first constant block indicates the number of
entries in the following blocks. The second block identifies the native packets encoded and
their next hops. The last block contains reception reports. Each entry identifies a source, the
last IP sequence number received from the source, and a 32-bit long bit-map of most recent
packets seen from that source.
Identifiers (Ids) of the coded native packets: This block records the metadata
needed for packet decoding. The number of entries is indicated in the constant block.
Each entry contains the information of the corresponding native packet. It begins with
the packet id, which is a 32-bit hash of the packet's source IP address and IP sequence
number. This is followed by the IP address of the native packet's next hop. When a node
hears an XOR-ed packet, it checks the list of next hops in this block to see whether it is
an intended next hop of the XOR-ed packet, in which case it decodes the packet and
processes it further.
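As an illustration, such a packet id might be computed as below. The exact hash function is not specified here; any 32-bit mix that all nodes agree on works, since the id only needs to be consistent across encoder and decoder.

```c
#include <stdint.h>

/* Illustrative 32-bit packet id derived from the source IP address and
   the IP sequence number. The multiplier is Knuth's multiplicative
   constant; the final shift-xor spreads entropy into the low bits. */
static uint32_t packet_id(uint32_t src_ip, uint32_t ip_seq)
{
    uint32_t h = src_ip ^ (ip_seq * 2654435761u);
    h ^= h >> 16;
    return h;
}
```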
Reception reports: As shown in Figure 4, reception reports form the last block in
the header, and the number of report entries is also recorded in the first constant block.
Each report entry specifies the source of the reported packet SRC_IP, which is followed
by the IP sequence number of the last packet received from the source LAST_PKT, and a
bit-map of recently heard packets. This bit-map technique for reception reports has two
advantages: compactness and effectiveness. In particular, it allows a node to report
each packet multiple times with minimal overhead, which prevents reception reports from
being lost in highly congested conditions.
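A minimal sketch of this bitmap lookup, under the assumption that bit i of the map refers to packet LAST_PKT-1-i (the real bit layout may differ):

```c
#include <stdbool.h>
#include <stdint.h>

/* One reception-report entry: the last IP sequence number heard from a
   source plus a 32-bit map of the packets heard just before it. */
typedef struct {
    uint32_t last_pkt; /* most recent sequence number heard (LAST_PKT) */
    uint32_t bitmap;   /* packets last_pkt-1 .. last_pkt-32 */
} ReceptionReport;

/* Has packet 'seq' from this source been heard, per the report? */
static bool report_has(const ReceptionReport *r, uint32_t seq)
{
    if (seq == r->last_pkt)
        return true;
    if (seq >= r->last_pkt || r->last_pkt - seq > 32)
        return false; /* outside the reported window */
    return (r->bitmap >> (r->last_pkt - seq - 1)) & 1u;
}
```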
Our packet header structure by and large follows the original COPE header
structure; however, in order to make this opportunistic mechanism fit QualNet and the
mobile environment, we made some modifications in our implementation. First, we
removed the asynchronous acknowledgment scheme of the original COPE. Originally,
COPE exploits hop-by-hop ACKs and retransmissions to guarantee reliability in a
hop-by-hop fashion, adding an ACK block at the end of the header structure.
However, we found that this asynchronous ACK technique is not a good solution;
sometimes it even makes performance worse. Our header structure therefore has no
ACK block, resulting in smaller overhead. Besides this, we rearranged the header
structure, placing the constant block at the beginning of the header as shown in Figure 4.
In addition, we use a 32-bit bitmap instead of an 8-bit one in the reception report block.
The bitmap is used to represent packets: for example, if the first bit of the bitmap
indicates packet 10, then the second bit indicates packet 11, and so on. A longer bitmap
can indicate more packets than a short one, which compensates for the delay associated
with the reception report. This is particularly useful in a mobile environment, where a
node can provide more information to a newly arrived neighbor that has no information
about its neighbors, thereby potentially improving the coding opportunities. Another
modification is that we replace the MAC address in the Nexthop field of the second block
of the header with an IP address. An IPv4 address is 16 bits shorter than a MAC address,
which means less overhead in the header, compensating for the longer bitmap. Moreover,
this replacement makes our solution independent of the underlying MAC layer, making it
suitable for different networks.
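Under the description above, the header could be laid out roughly as in the following sketch. Field widths here are assumptions for illustration; the actual on-wire layout in our implementation may differ.

```c
#include <stdint.h>

/* Constant block: fixed fields present in every coding header. */
typedef struct {
    uint8_t  version;   /* protocol version information */
    uint8_t  n_coded;   /* number of coded native packets */
    uint8_t  n_reports; /* number of reception-report entries */
    uint16_t total_len; /* total header length in bytes */
    uint16_t checksum;
    uint16_t seq_no;    /* packet sequence number */
} CodingConstBlock;

/* One entry of the coded-native-packets block. */
typedef struct {
    uint32_t pkt_id;     /* 32-bit hash of source IP + IP seq number */
    uint32_t nexthop_ip; /* IPv4 next hop (replaces COPE's MAC address) */
} CodedIdEntry;

/* One reception-report entry. */
typedef struct {
    uint32_t src_ip;   /* SRC_IP */
    uint16_t last_pkt; /* LAST_PKT: last IP seq heard from src */
    uint32_t bitmap;   /* 32-bit map of recently heard packets */
} ReportEntry;

/* Packed on-wire size of a header with n id entries and m report
   entries: 9-byte constant block, 8 bytes per id, 10 bytes per report. */
static uint32_t coding_header_len(uint8_t n, uint8_t m)
{
    return (uint32_t)(9 + n * 8 + m * 10);
}
```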
3.2 Control Flow
This section describes the overall packet flow of our opportunistic network coding
scheme, which mainly consists of two parts, i.e. the Sending Side and the Receiving Side.
3.2.1 Sending Side
(a) Sender Side
(b) Receiver Side
Figure 5: Flowcharts for our opportunistic network coding implementation
Figure 5(a) shows the flowchart of the sender side. When a node is ready to send
a packet, it first checks whether it needs to wait for the next new input packet. Based on
the waiting scheme introduced in the previous chapter, the node simply checks the number
of packets in its output queue and whether it has received new packets in the last T
seconds. If the number of packets in the output queue is less than the threshold value N,
or if the wait period of T seconds has yet to expire, the node waits for additional new
packets. Otherwise, the node immediately proceeds to de-queue the packet at the head of
the output queue. Next the node traverses the remaining packets in the output queue to
pick out packets that can be coded with the head packet according to the coding
algorithm. After the packets are XOR-ed together, the node constructs the header with
the ids of the coded native packets, followed by the reception reports. Finally, the
combined packet with the extra coding header is transmitted. Alternatively, if no other
packets can be coded with the head packet, the native head packet is given the coding
header and transmitted without any delay.
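The waiting decision can be expressed as a small predicate; the parameter names follow the text, while the exact form of the check is an assumption.

```c
#include <stdbool.h>

/* Wait for more packets when the output queue is still short (< N) or
   the T-second window since the last new arrival has not yet expired,
   mirroring the waiting scheme described above. */
static bool should_wait(int queue_len, int threshold_n,
                        double now, double last_arrival, double wait_t)
{
    bool queue_short = queue_len < threshold_n;
    bool window_open = (now - last_arrival) < wait_t;
    return queue_short || window_open;
}
```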
3.2.2 Receiving Side
At the receiver side, whenever a node receives a packet, it checks whether the
packet is a coded packet. If the packet is native, i.e. without the extra coding header,
the node processes it in the usual way. Otherwise, the node processes it according to the
receiver-side flowchart illustrated in Figure 5(b). First, the node extracts the
reception reports from the header and updates the neighbor state recorded in its report
pool. Next, the node checks whether more than one packet is combined in this packet, in
which case the node tries to decode it. After the node obtains the native packet, it
stores a copy of the native packet in its packet pool and goes on to check whether it is
the intended next hop. If not, it simply stops handling the packet. Otherwise, it passes
the native packet to the higher-layer protocol for further processing.
In the next chapter, we will describe the implementation details of this
opportunistic network coding architecture in the QualNet simulator.
Chapter 4
Implementation in QualNet Simulator
In this chapter, we describe the details of our implementation of opportunistic
network coding scheme in QualNet simulator. Before describing the detailed
programming, it is useful to present a brief description of the QualNet simulator.
4.1 Simulator Abstraction
The QualNet simulator is a commercial network simulation tool derived from
GloMoSim and first released in 2000 by Scalable Network Technologies (SNT). We use
QualNet 4.0 as our performance evaluation platform. QualNet is known for its high
fidelity, scalability, portability, and extensibility. It can simulate scenarios with
thousands of network nodes because it takes full advantage of the multi-threading
capability of multi-core 64-bit processors. In addition, the source code and configuration
files are organized according to the OSI protocol stack model, just as in a real
communication system. Furthermore, the protocol architecture in QualNet is very close to
the real TCP/IP network structure, consisting of the Application, Transport, Network,
Link (MAC) and Physical layers, from top to bottom. Compared to other open-source
network simulation tools, QualNet is closest to a real system implementation, and is thus
capable of generating more realistic and accurate simulation results.
4.1.1 Discrete event simulator
Compared to continuous-time simulators, discrete event simulators are far more
popular in both industry and academic research; ample evidence and analysis supporting
this statement is available in the literature, so we do not discuss it further in this
thesis. QualNet is a discrete event simulator, in which the system state changes over
time only when an event occurs. An event can be anything in the network system, such as
a packet generation request, a collision or a timeout, that triggers the system to change
its state or perform a specific
operation. In QualNet there are two event types: Packet events and Timer events. Packet
events are used to simulate the exchange of data packets between layers or nodes. To
send a packet to an adjacent layer, the QualNet kernel passes the handle to the specific
node, and the node schedules a packet event for the adjacent layer, then returns the
handle to the kernel. After a pre-set delay, the occurrence of the packet event simulates
the arrival of the data packet, which triggers the QualNet kernel to pass the handle to
this node again so that the adjacent layer in this node can process the data packet
further. The data packet is then passed from layer to layer until it is freed. Packet
events are also used to model communication between different nodes in the network. In
fact, communication among nodes in the network can only be achieved by scheduling data
packets and exchanging them with each other. Timer events, on the other hand, are used
to perform the function of alarms; for example, a timer can trigger the periodic
broadcast of a control message every second. Timer events are very useful and important
for the simulator to schedule more complex event patterns, as in a real application
system.
In QualNet, both the Packet event and Timer event are defined via the same
message data structure. A message contains the information about an event such as the
event type, sequence number, generating node and the associated data. Figure 6 shows the
message data structure in QualNet.
struct message_str
{
    Message* next;            // For kernel use only.
    short layerType;          // Layer which will receive the message.
    short protocolType;       // Protocol which will receive the message in the layer.
    short instanceId;
    short eventType;          // Message's event type.
    // ... (other fields omitted)
    MessageInfoHeader infoArray[MAX_INFO_FIELDS];
    int packetSize;
    char *packet;
    NodeAddress originatingNodeId;
    int sequenceNumber;
    int originatingProtocol;
    int numberOfHeaders;
    int headerProtocols[MAX_HEADERS];
    int headerSizes[MAX_HEADERS];
};
Figure 6: Message structure in QualNet
Some of the fields of the message data structure are explained below.
layerType: Layer associated with the event, indicating which layer will receive this
message.
protocolType: Protocol associated with the event, indicating which protocol will process
this message in the layer.
instanceId: For multiple instances of a protocol, this field indicates which instance
will receive this message.
infoArray: Stores additional information that is used in the processing of events, as
well as information that needs to be transported between layers.
packet: This field is optional. If the event is for an actual data packet, this field
holds the data. Headers added by different layers are included in this field.
packetSize: The total size of the packet field.
numberOfHeaders: Records how many headers have been added.
headerProtocols: An array storing the specific protocol that added each header.
headerSizes: An array storing the size of each header.
The last three fields are for packet tracing. If the packet tracing function is enabled,
these fields are filled in during the simulation to facilitate analysis afterwards, at
the cost of slowing down the simulation. The fields listed above are the key parts of the
message data structure; more details can be found in the API reference provided by
QualNet. We have added some new entries to the message structure to facilitate our
implementation of the network coding scheme, which will be described later.
4.1.2 Protocol Model in QualNet
As mentioned above, each node in QualNet runs a protocol stack just like a
physical communication device in the real world, and each protocol operates at one of the
layers in the stack. Before implementing our own protocol in QualNet, we describe how a
protocol is modeled. Figure 7 shows the general protocol model as a finite state machine
in QualNet.
Figure 7: Protocol model in QualNet
A general protocol model in QualNet consists of three states: Initialization, Event
Dispatcher and Finalization. In the Initialization state, the protocol reads parameters
from the simulation configuration file to configure its initial state. The protocol then
transfers to the Event Dispatcher state, which is the kernel of the protocol model. This
state contains two sub-states, the Wait For Event state and the Event Handler state,
which form a loop. A protocol waits for an event directed to it to occur, at which point
it transfers to the Event Handler state to call the specific handler function that
processes the event. Afterwards, the protocol transfers back to the Wait For Event state
to wait for the next event. After all events in the simulation have been processed, the
protocol transfers to the last state, Finalization, in which the protocol may print out
packet tracing data and simulation statistics to output files.
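The three-state model can be caricatured in a few lines of C; the event representation here is purely illustrative.

```c
#include <stdio.h>

/* Illustrative event types; EV_END stands in for the end of simulation. */
typedef enum { EV_PACKET, EV_TIMER, EV_END } EventType;

typedef struct { EventType type; } Event;

static int handled_packets, handled_timers;

/* Event Handler sub-state: process one event. */
static void handle_event(const Event *ev)
{
    if (ev->type == EV_PACKET) handled_packets++;
    else if (ev->type == EV_TIMER) handled_timers++;
}

/* Initialization, then the Wait For Event / Event Handler loop,
   then Finalization (printing statistics). */
static void run_protocol(const Event *events, int n)
{
    handled_packets = handled_timers = 0;  /* Initialization */
    for (int i = 0; i < n; i++) {          /* Event Dispatcher loop */
        if (events[i].type == EV_END) break;
        handle_event(&events[i]);
    }
    printf("packets=%d timers=%d\n", handled_packets, handled_timers);
}
```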
4.1.3 Application Program Interface
QualNet provides a number of Application Program Interface (API) functions for
event operations. Some of these APIs can be called from any layer, while others can only
be called by specific layers. The complete list of APIs and their descriptions can be
found in the API reference provided by QualNet. Here we select some examples that are
helpful for our implementation.
MESSAGE_Alloc: Allocates a new Message structure; called when a new message has to be
sent through the system.
MESSAGE_Free: Called to free a message when it is no longer needed in the system.
MESSAGE_AddInfo: Allocates one "info" field of the given info type for the message.
MESSAGE_RemoveInfo: Removes one "info" field of the given info type from the info array
of the message.
MESSAGE_PacketAlloc: Allocates the payload field for the packet to be delivered; called
when a message is a packet event carrying specific data.
MESSAGE_AddHeader: Adds a new header of the specified size to the packet enclosed in the
message.
MESSAGE_RemoveHeader: Removes a header from the packet enclosed in the message.
MESSAGE_Send: Called to pass a message within QualNet.
IO_ReadNodeInput: Called to read parameters from an external configuration file.
IO_PrintStat: Called to print out the statistics collected during the simulation into an
output file.
NetworkIpSneakPeekAtMacPacket: Called directly by the MAC layer; allows a routing
protocol to "sneak a peek" at, or "tap", messages it would not normally see from the MAC
layer.
Besides these APIs for processing messages, there are many other APIs, enclosed in
the Scheduler and Queue classes, that deal with the queuing system. For example, some
APIs insert a packet into or de-queue a packet from a queue, while others construct
various kinds of queuing systems. Several queue-management disciplines are also
available, such as FIFO, RIO and RED queues.
4.1.4 QualNet Simulator Architecture
From the section on the protocol model in QualNet, we have learned that a protocol
is modeled as a finite state machine with three states: Initialization, Event Dispatcher
and Finalization. But how are these protocols managed as a stack in QualNet, similar to
the TCP/IP protocol stack in the real world? As in the TCP/IP model, the protocols are
grouped into layers in the protocol stack, which is achieved by registering the protocols
in the event type table and protocol type table managed by QualNet. In addition, the
corresponding event handler function is embedded at each layer's entrance. Take the
AODV routing protocol as an example: in a wireless ad-hoc network whose nodes run the
AODV routing protocol, when a packet is passed to the network layer from the transport
layer, the entrance function of the network layer checks whether the packet needs to be
routed. If it does, the network-layer entrance function calls the embedded event handler
function of the AODV protocol registered in the protocol table to process the packet
further.
Another issue is how this protocol stack operates in QualNet. As mentioned above,
a protocol model in QualNet has three components, Initialization, Event Dispatcher and
Finalization, which operate in a hierarchical manner: first at the node level, then at
the layer level and finally at the protocol level.
At the start of the simulation, each node in the network is initialized by the kernel
of QualNet. The initialization function of the node calls the initialize function for layer
initialization. The layers are initialized in a bottom up order. All the layers are initialized
one node at a time, except the MAC layer which is initialized locally. Each layer
initialization function then calls all the protocol initialization functions running in that
layer. The initialization functions of a protocol create and initialize the protocol state
variables. If the value of a variable is not given by the user, a default value is
selected during the Initialization state. After all the nodes are initialized, the
simulator is ready to generate and process events.
When an event occurs, the QualNet kernel passes the handle to the node where the
event occurs. The node calls a dispatcher function to determine which layer should
process the event further. The event dispatcher function of that selected layer then calls
the event dispatcher function for the appropriate protocol based on the protocol type
information enclosed in the event, normally in the packet header. The protocol event
dispatcher function then calls the corresponding event handler function to perform the
actions for the event that occurred.
At the end of the simulation, Finalization functions are called automatically and
hierarchically, in a manner similar to the Initialization functions. Finalization
functions usually print out the statistics collected during the simulation.
4.2 Programming Abstraction in QualNet
Since native QualNet does not support any network coding functionality, we must
implement the opportunistic network coding functionality in QualNet's protocol stack
before we can evaluate its performance via simulations. In this section we describe the
implementation details of our opportunistic network coding scheme in QualNet. As
mentioned in Chapter 2, this opportunistic network coding scheme works between the MAC
and IP layers by inserting an extra coding header between the MAC and IP headers.
However, in order to be consistent with the real network architecture, we do not create
an extra independent communication layer between the Link and Network layers. Instead,
we implement the opportunistic network coding protocol in the network layer, at its
bottom entrance. This means the opportunistic network coding protocol processes a packet
passed up from the MAC layer before passing it to the IP protocol. In the other
direction, before the IP protocol passes an IP packet down to the MAC layer, the network
coding protocol traverses the packets in the output queue to search for opportunities to
combine several packets together.
Before describing the details of the functions, we will present the states and
variables that our network coding protocol maintains. Figure 8 shows the overall map of
structures created for the protocol.
Figure 8: Implemented data structure
The main structure maintained by the protocol is CopeData, which contains several
smaller data structures. It is initialized by the protocol initialization function,
CopeInit(), at the beginning of the simulation. Recall the protocol model in QualNet,
where the protocol initialization function is called hierarchically to read parameters
from the external configuration files and initialize the protocol state. As can be seen,
there are two local buffers in CopeData, namely "ppScheduler" and "reportInfo".
"ppScheduler" is managed as a FIFO queue and serves as the packet pool storing packets
overheard from neighbors. Three functions manage this queue: CopePpQueueInit()
initializes the queue system, CopePpQueueInsert() inserts one copy of a newly overheard
packet into the local packet pool, and CopePpQueueExtract() extracts specific packets
from the packet pool for decoding. The capacity of this queue is set to the QualNet
default value; when the queue is full, the head of the queue is dropped automatically, as
is characteristic of FIFO. The other local buffer is "reportInfo", which is in fact a
hash table keeping the reception report information for the neighbors and the node
itself. The head of this table holds the node's own reception report records. When a new
neighbor sends a reception report to this node, the CreateSelfReport() function is called
to insert a new element into the "reportInfo" hash table. If an existing neighbor sends
new information to this node, the CopeReportUpdate() and Cope_SubReportUpdate() functions
are called to update the records for this neighbor. To facilitate debugging, we added
another function, CopePrintReportInfo(), to print out the entire content of this local
buffer. The length of this hash table is not limited, since the number of neighbors of
any one node is not large enough to consume a significant amount of memory. However, the
specific array for each
element in this hash table is upper-bounded by MAX_Entry, which is set to 128 in our
implementation. Besides these two local buffers, the protocol also keeps a structure,
CopeStats, to collect statistics during the simulation; these statistics are printed out
by the CopeFinalize() function at the end of the simulation. Another important variable
is pkt_seq_no, which records the local packet IP sequence number and, together with the
packet's original source address, helps identify a specific IP packet in the network.
Some other entries in the CopeData structure are variables storing the parameter values
configured in the external configuration file, such as "helloInterval", storing the Hello
message interval, and "processHello", storing the Boolean value indicating whether to
process Hello messages. Besides the main data structure CopeData, which maintains the
protocol state, the CopeHeaderType structure represents the extra coding header. It is
created according to the packet header structure discussed in Chapter 3. The functions
CopeAddHeader() and CopeRemoveHeader() are used to add the extra coding header to, or
remove it from, an IP packet.
Having explained the overall data structures required by this opportunistic network
coding scheme, we next describe the programming details of the algorithm. On the sender
side, the challenging part is checking for coding opportunities and intelligently
combining packets together. Based on the flowchart shown in Figure 5(a) in Chapter 3, we
construct a more detailed modeling diagram for the sender-side function CopeCoding(),
shown in Figure 9.
Figure 9: CopeCoding() flowchart
This function acts immediately after an IP packet is successfully de-queued from
the output queue of the network layer. First, it updates the node's own reception report
and inserts a copy of the de-queued packet into the local packet pool. It then
initializes the coding header structure, whose components will be filled in later. The
next stage is to traverse the packets in the output queue to check whether any packet can
be combined with the de-queued packet. The checking and coding functionality is
implemented in the Coding() function. There are two criteria for a coding opportunity:
first, no two packets may head to the same recipient; second, each of the intended
recipients must have received all the other packets except the one destined for it. The
Coding() function checks the second criterion against the reception reports stored in the
local report pool. After all the codable packets in the output queue are found, the
Coding() function combines them via a simple XOR and generates the list of encoded
packets' ids, which is inserted into the coding header. After the coding procedure, a
report is generated and sent to the neighbors. The report contains information about the
packets recently received by this node, and is attached at the bottom of the coding
header. Finally, the CopeAddHeader() function is called to assemble all the header
components and add the header to the encoded packet, which is then passed to the MAC
layer.
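The two criteria can be sketched as a predicate over a candidate set, with reception reports abstracted as per-next-hop bitmasks (an illustrative simplification of the report pool lookup):

```c
#include <stdbool.h>
#include <stdint.h>

/* Can the n candidate packets be XOR-ed into one encoded packet?
   nexthop[i] is the next hop of packet i; bit j of heard_mask[i] says
   whether that next hop has already heard packet j. */
static bool can_code(const uint32_t *nexthop, const uint32_t *heard_mask,
                     int n)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            if (i == j) continue;
            if (nexthop[i] == nexthop[j])
                return false;                   /* criterion 1 violated */
            if (!((heard_mask[i] >> j) & 1u))
                return false;                   /* criterion 2 violated */
        }
    return true;
}
```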
We have so far discussed the implementation details of the sender side. Figure 10
shows the flowchart of the receiver-side function, CopeSneakPeekAtMacPacket().
Figure 10: CopeSneakPeekAtMacPacket() flowchart
As mentioned in Chapter 2, all nodes in the network work in promiscuous mode,
and the opportunistic network coding scheme takes advantage of pseudo broadcast, so a
node is able to process a packet even if it is not the intended recipient. This is
achieved by calling the MAC_SneakPeekAtMacPacket() function at the MAC layer. This
function is provided as a default API at the MAC layer to enable promiscuous mode. It in
turn calls the NetworkIpSneakPeekAtMacPacket() function implemented at the network layer,
allowing the network layer to process the packet. Our function
CopeSneakPeekAtMacPacket() is implemented as a child function of
NetworkIpSneakPeekAtMacPacket(), so that the network coding scheme sitting in the network
layer can process the encoded packets. The first step of our function
is to check whether the packet carries the coding header. If not, the packet is simply
dropped. Otherwise, the CopeRemoveHeader() function is called to remove the coding
header from the packet, and the enclosed coding information and reception report are
extracted. The coding information assists the decoding procedure, while the neighbor's
reception report is used to update the neighbor state recorded in the local report pool.
Next, the node goes through the list of next hops enclosed in the second block of the
header to check whether it is one of the next hops. If it is not, and only one packet is
enclosed, the node updates its reception report and stores a copy of the native packet in
the packet pool; otherwise the packet is dropped. If, on the other hand, the node is one
of the next hops, it uses the coding information extracted from the header to decode the
native packet headed to it, then updates its report and stores a copy of the packet in
the local packet pool. After that, the network coding protocol passes the native IP
packet to the IP protocol for further processing.
We have presented our implementation details in the QualNet simulator in this
chapter. The performance evaluation will be discussed in the next chapter.
Chapter 5
Performance Evaluation and Discussion
5.1 Simulation Environment
We have implemented the opportunistic network coding algorithm in the QualNet 4.0
simulator as described in the previous chapter. In this chapter we evaluate and analyze
the performance of this opportunistic network coding scheme over several simple
topologies in a wireless mesh network setting. In our simulation, we use CBR (Constant
Bit Rate) as the application traffic, carried over the UDP transport protocol. CBR is
the simplest traffic pattern for modeling streaming Internet traffic, such as audio or
video; it is chosen to simplify the analysis of our simulation results. More complex
application traffic will be studied in the future. For the CBR application, the packet
size is set to 512 bytes while the packet interval varies from 0.1 s to 0.01 s, which
means the actual load of each source node ranges from 41 kbps to 410 kbps. Each source
node starts its transmission at the first second, while the entire simulation lasts 2
minutes.
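As a quick sanity check of these load figures:

```c
/* Offered load of a CBR source: packet size in bits over the packet
   interval. 512 bytes every 0.1 s gives 40960 bps (about 41 kbps);
   every 0.01 s gives ten times that (about 410 kbps). */
static double cbr_load_bps(double pkt_bytes, double interval_s)
{
    return pkt_bytes * 8.0 / interval_s;
}
```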
5.1.1 Simulation Parameters
In this section we present the details of the simulation configuration. Our
simulation uses IPv4 (Internet Protocol Version 4) as the network-layer protocol with
three output queues of different priorities, while the Internet Control Message Protocol
(ICMP) model is disabled. IEEE 802.11 is chosen as the MAC-layer protocol, and the
propagation delay is set to 1 µs. Both RTS and CTS mechanisms are enabled in our
simulation, with the RTS threshold value set to zero. The wireless channel frequency is
set to 2.4 GHz. The signal propagation model is statistical with a limit of -110.0 dBm.
The two-ray ground model is used to model the propagation path loss. Shadowing is
modeled as constant with a mean of 4.0 dB. Channel fading effects are not considered.
The noise factor of the channel is 10.0. IEEE 802.11b is selected as the radio type,
with a nominal data rate of 2 Mb/s. The transceiver is equipped with an omni-directional
antenna with an efficiency of 0.8. Antenna mismatch loss and antenna connection loss are
set to 0.3 dB and 0.2 dB, respectively. The transmission power of the antenna is
15.0 mW. For the 2 Mb/s data rate, the receiver sensitivity is set to -89.0 dBm. The
estimated directional antenna gain is 15, and packet reception is modeled as Phy802.11b.
Besides these pre-determined parameters for the device, there are also some
parameters for the opportunistic network coding algorithm. For the waiting scheme, the
waiting duration T is set at 40 milliseconds; a further in-depth study of the impact of T is
part of the future work. The queue threshold N of the waiting scheme increases from 0 to 37,
which is slightly larger than the default size of the output queue in the QualNet simulator and
thus large enough to study the behavior of the waiting scheme. The Hello message interval T0
is initially set at 40 milliseconds; however, it will be adjusted in our further study later.
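To make the waiting scheme concrete, a minimal sketch of its behavior is given below. The class and method names are our own assumptions, not QualNet code: a node defers sending until either N packets are queued or T has elapsed since the first packet arrived.

```python
class WaitingScheme:
    """Sketch of the waiting scheme: defer transmission until either the
    queue threshold N is reached or the waiting duration T has elapsed
    since the first packet was enqueued."""

    def __init__(self, n_threshold, t_wait):
        self.n_threshold = n_threshold   # queue threshold N
        self.t_wait = t_wait             # waiting duration T, in seconds
        self.queue = []
        self.first_enqueue_time = None

    def enqueue(self, packet, now):
        if not self.queue:
            self.first_enqueue_time = now
        self.queue.append(packet)

    def ready_to_send(self, now):
        if not self.queue:
            return False
        if len(self.queue) >= self.n_threshold:
            return True   # enough queued packets to look for codable pairs
        return now - self.first_enqueue_time >= self.t_wait  # timer expired
```

Note that with N = 0 the threshold check passes as soon as any packet is queued, so the node forwards packets immediately, consistent with the no-waiting behavior studied in the results.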
5.1.2 Experimental Topology
First we run our simulation on three simple network topologies: the Alice-and-Bob, X
and Cross topologies. We then run each topology on a 20-node multi-hop scenario. The
Alice-and-Bob scenario has been shown in Figure 1. Figure 11(a) shows the X topology, where
node 1, node 4 and the relay can hear each other, as can node 2, node 5 and the relay. However,
node 1 and node 4 cannot hear node 2 and node 5, and vice versa. In this scenario node 1
and node 2 send packets to node 5 and node 4, respectively, with the help of the relay
node. The Cross topology is shown in Figure 11(b) where there are 5 nodes placed as a
cross. The relay can be heard by all other nodes, and each of the other nodes can hear its 2
nearest nodes besides the relay. How many nodes any one node can hear is an important
consideration: the neighborhood condition and local traffic pattern are the essential
differences between the Cross and X topologies. In the Cross scenario,
4 traffic flows travel across the relay. Node 1 and node 5 communicate with each other, and
so do node 2 and node 4. Figure 12 shows the topology of the 20-node scenario. There are
(a) X topology
(b) Cross topology
Figure 11: X and Cross wireless topologies
Figure 12: 20-node multi-hop topology, where all nodes are uniformly placed over a
1000 m * 1000 m area
20 nodes uniformly distributed over a 1000 m * 1000 m area. Node 1 and node 5 communicate
with each other, and AODV is used as the routing protocol.
5.1.3 Performance Metrics
In our simulation, we use three metrics to evaluate the performance of our
opportunistic network coding scheme by comparing it with the situation without network
coding. The three comparison metrics are total packets dropped, network throughput and
average end-to-end packet delay. The total packets dropped metric denotes the total
number of packets dropped over the whole network by the end of the simulation. It is simply
calculated by subtracting the number of packets received by the application servers from the
number of packets generated by the application clients. The network throughput is
calculated at the application layer as the CBR throughput, using the default
calculation method provided by the CBR application in QualNet. The average end-to-end
packet delay is also calculated using the default method in QualNet. The values
for each traffic flow are averaged at the end to give the final packet delay for the network. In
addition, we also use Coding Gain to demonstrate how many transmissions can be saved
in the network by using opportunistic network coding. Coding Gain is the ratio of the number
of data packets to the number of transmissions. In this thesis, data packets are
counted as the lowest-priority IP packets dequeued by any node in the network, since the
traffic is carried over UDP. Under this definition, even if one original data packet is
relayed 5 times across a multi-hop network, it is counted as 5 different data
packets, because the copies have different IP headers. On the other hand, a transmission is
counted as a MAC-layer data frame, excluding RTS/CTS transmissions. This
statistical information is easily obtained through the QualNet simulation platform. Based on
these assumptions, the Coding Gain should be 1 in the traditional situation, where
one IP data packet normally requires one MAC layer data frame, while with network
coding the Coding Gain would be greater than 1, because one MAC layer data frame may
carry several encoded IP data packets. Coding Gain is a useful metric for understanding
how network coding improves network throughput.
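The Coding Gain computation itself is trivial; the sketch below just restates the definition above, with made-up example counts (none of these numbers come from the simulations):

```python
# Coding Gain as defined above: IP data packets dequeued divided by
# MAC-layer data frames transmitted (RTS/CTS frames excluded).

def coding_gain(ip_data_packets, mac_data_frames):
    return ip_data_packets / mac_data_frames

# Without coding, every IP data packet needs its own MAC data frame.
print(coding_gain(1000, 1000))  # 1.0

# With coding, an XOR-ed frame carries several IP packets, so the gain
# exceeds 1: e.g. 1000 IP packets carried on 800 MAC data frames.
print(coding_gain(1000, 800))   # 1.25
```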
5.2 Simulation Results
Figure 13 shows the simulation results for the Alice-and-Bob topology. As can be
seen from Figure 13(a), for the situation without network coding, the throughput of the
network increases with the actual load and then saturates at 600 Kbps, which indicates the
capacity of this simple network. For the situation with network coding, the results for three
different waiting scheme queue thresholds N, i.e. 0, 1 and 37, are shown. When N is 0 or
1, the corresponding lines coincide with the one without network coding. However, the
network capacity is improved by 17% to nearly 700 Kbps when the queue threshold N is
set at 37. The improvement can be easily explained by Figure 13(b), which shows there is
no coding gain when N is 0. In this situation, the waiting scheme does not take action, and
thus the relay forwards the input packet immediately when it is received, resulting in no
coding opportunity. In addition, for such a simple topology, even under heavy load
there would be no more than one packet queuing at the output queue of the relay
unless the waiting scheme is applied. When N is 1, the coding opportunity is low under
light load and almost zero under heavy load. As mentioned earlier, there would be
at most two packets queuing at the output queue of the relay due to the waiting scheme,
which limits the potential coding opportunity. Moreover, in such a wireless channel,
Alice and Bob have to contend for the transmission opportunity, which leads to
asymmetric traffic flows between Alice and Bob. This asymmetry is more pronounced
under heavy load, which explains the trends shown in the figure. However,
the coding opportunity is significantly improved when N increases to 37, because
more packets queue at the relay’s output queue due to the large queue
threshold. The increase in coding opportunity reduces the total number of transmissions required,
resulting in an improvement in network capacity. Correspondingly, the total number of
packets dropped is also reduced when N is 37, as shown in Figure 13(d). As expected, the
waiting scheme increases the average end-to-end delay under light load, while
the increase in network capacity compensates for the packet delay under heavy load,
as seen in Figure 13(c). The average end-to-end packet delay is even reduced when the
improvement in network capacity is much greater.
(a) Comparison of the network throughput
(b) Comparison of the Coding Gain with different queue threshold N of the waiting scheme
(c) Comparison of the average end-to-end packet delay
(d) Comparison of the number of packets dropped
Figure 13: Simulation results for the Alice-and-Bob scenario showing comparisons of
the network throughput, total number of packets dropped, average end-to-end packet
delay and coding gain between the situations without network coding and with network
coding under different queue thresholds of the waiting scheme
Figure 14 shows the simulation results for the Cross topology. As can be seen, the
network capacity increases nearly fourfold, from 200 Kbps to 800 Kbps, because of our
opportunistic network coding scheme, irrespective of the value of the queue threshold of
the waiting scheme. Correspondingly, the total number of packets dropped in this network is
also significantly reduced. Likewise, as shown in Figure 14(c), the overall average end-to-end
packet delay is also reduced by the opportunistic network coding scheme, though the
delay increases slightly due to the waiting scheme at light load.
However, in the X topology the network coding scheme actually results in poorer
network performance compared to the scheme without network coding, as shown in
Figure 15(a). Figure 15(b) shows that there is no coding opportunity under the heavy load
situation in the X topology, resulting in no network throughput improvement. The key
difference between the X and Cross scenarios is that in the X scenario the Hello message
is the dominant method of carrying reception reports. Thus an improper value of the Hello message
(a) Comparison of the network throughput
(b) Comparison of the coding gain
(c) Comparison of end-to-end packet delay
(d) Comparison of the packets dropped
Figure 14: Simulation results for the Cross scenario showing the comparisons of
network throughput, total number of packets dropped and average end-to-end
packet delay between the situations without network coding and with network
coding under different queue thresholds of the waiting scheme
interval T0 could cause this poor performance. To verify this, we ran another simulation
on the X topology, adjusting the value of T0 from 5 milliseconds to 40 milliseconds to
study the impact of the Hello message interval on network coding performance. The results
are shown in Figure 16.
(a) Comparison of the network throughput
(b) Comparison of the coding gain
(c) Comparison of the average end-to-end packet delay
(d) Comparison of the number of packets dropped
Figure 15: Simulation results for the X scenario showing the comparisons of network
throughput, total number of packets dropped, average end-to-end packet delay and
coding gain between the situations without network coding and with network coding
under different queue thresholds of the waiting scheme
Figure 16(a) shows the coding opportunity in the X topology decreases as the Hello
message interval T0 increases. If T0 is large, the information exchanged via Hello
messages becomes out of date and useless for opportunistic network coding decisions,
resulting in fewer coding opportunities. On the other hand, if T0 is small, nodes can
exchange information in a timely manner via Hello messages, contributing to
useful coding decisions. However, in this case the periodic transmission of Hello
messages may consume considerable bandwidth and may even flood the network if the
interval is too small. Figure 16(d) shows there is an optimal value of T0 that obtains the best
performance. The optimal value is 13 ms, which is close to the average interval at which the source
sends packets; this implies that replying to a node with a Hello message immediately after receiving a
packet from it is the best arrangement. The comparison of the throughput is shown in
Figure 16(b). As can be seen, network coding with the optimal T0 improves the network
capacity by nearly 25%. The improvement is significantly less than that of the Cross scenario
because the periodic transmission of Hello messages consumes significant bandwidth of
the wireless channel. In addition, the average end-to-end packet delay can also be
improved by a proper setting of the Hello message interval, as shown in Figure 16(c).
(a) In the X topology, where some nodes exchange information via Hello messages, the
coding opportunity decreases as the Hello message interval increases.
(b) Comparison of the network throughput of the X topology between the situations without
network coding and with network coding under different Hello message intervals T0. The queue
threshold N of the waiting scheme is 16.
(c) The end-to-end packet delay changes according to the value of Hello message interval.
It shows there is an optimal value for Hello message interval in this simple X topology.
(d) The network throughput changes according to the value of Hello message interval. It
shows there is an optimal value for Hello message interval in this simple X topology.
Figure 16: The impact of Hello message interval on the performance of network
coding for simple X topology.
The above results demonstrate that this opportunistic network coding scheme could
potentially improve the network capacity. The study of the waiting scheme and the impact
of the Hello message interval provides good suggestions on how we should make the algorithm
more flexible, so that it is more suitable for a dynamic neighborhood
environment such as a wireless mobile ad-hoc network; this is especially so for setting
the value of the waiting scheme and the Hello message interval. Having experimented with
these simple topologies, we continue by running the algorithm on a multi-hop scenario.
Figure 17 shows the simulation results on a 20-node multi-hop scenario. The
figure shows that the performance of our scheme is much poorer than in the
situation without network coding. Comparison with the simple Alice-and-Bob
scenario suggests that this is caused by the interference of periodic Hello messages. Nodes 1,
3 and 5 communicate with each other, while all other nodes in the network proactively
send Hello messages among themselves, which in this case seriously interferes with the
communication channel between nodes 1 and 5, resulting in the poor performance
observed. In a large scale network the periodic Hello messages consume large amounts of
bandwidth and cause serious channel congestion. To address this problem,
we conclude that the opportunistic network coding scheme should be adapted in a more
intelligent way for it to work well.
Figure 17: Comparison of the network throughput between the situations without network
coding and with network coding (queue threshold N = 16) on the 20-node multi-hop
network scenario
5.3 Discussion
Simulation results for the simple topologies show that the opportunistic
network coding scheme is a practical technique which can potentially improve the network
capacity by 3 to 4 times for UDP traffic. In addition, the waiting scheme is a useful
feature of this algorithm; however, its usefulness depends largely on the neighborhood
situation. The waiting scheme is helpful for networks without bottlenecks,
for example the simple Alice-and-Bob scenario, where it helps packets accumulate at the
relay’s output queue, improving the coding opportunity. However, for
networks with bottlenecks, such as the Cross scenario, the waiting scheme is somewhat
redundant, since packets automatically queue at the bottleneck under heavy load.
Therefore, it is better if the algorithm is intelligent enough to dynamically apply this
waiting scheme based on the actual neighborhood conditions in multi-hop networks.
Moreover, the use of Hello messages is also affected by the actual neighbor situation and
traffic pattern. In the X scenario, the source nodes share information via data packets
while the sink nodes use Hello messages; in the Cross topology, however, all nodes use
data packets to share information. The difference between the results of the X and Cross
topologies implies that the ideal situation is for all nodes to exchange information
via data packets instead of Hello messages, because periodic Hello messages
increase overhead and introduce interference to the nodes in the network. So a more practical
algorithm should detect the neighborhood situation and local traffic pattern and then make
smart coding decisions based on this local one-hop information. Furthermore, in
situations where some nodes have to exchange information by Hello messages, the
interval of these Hello messages significantly impacts the performance of the algorithm.
Through our simulation, we find there is an optimal value for the Hello message interval
which minimizes interference, i.e. it is best for one node to share information immediately
after it receives new packets. In multi-hop scenarios with very complex local traffic
patterns, it is better for the nodes to adjust the value of this Hello message interval
intelligently in order to get the best performance.
Thus a more flexible and “intelligent” opportunistic network coding scheme
should be proposed to address these issues. The intelligence of the algorithm should
include the capabilities discussed below. For example, by intelligently listening
to the packets from its neighbors, a node should be able to decide whether to
turn its Hello messages on or off, because in the multi-hop scenario some nodes do not
contribute to the communication at all. In addition, if a node turns Hello messages on, it
should know how to adjust the value of the Hello message interval and how to set the value of
the queue threshold for the waiting scheme based on its knowledge of its neighbors and
the local traffic pattern. In this way every node would minimize the interference of its
Hello messages without sacrificing coding opportunities.
In the next chapter, we will present an intelligent version of this opportunistic network
coding scheme incorporating some of these suggestions. Further simulations and
evaluations are also carried out.
Chapter 6
Intelligent Opportunistic Network Coding
As discussed in the previous chapter, there are some issues in the current opportunistic
network coding scheme for wireless environments, such as the ineffective Hello messages.
Sending Hello messages is an important technique for the opportunistic mechanism
adopted in this network coding scheme: it enables the nodes in the network to
share information with their neighbors, thus contributing to the overall coding opportunities.
However, in the current algorithm the Hello message is scheduled in a proactive style,
which means all nodes in the network actively broadcast Hello messages to share
information without considering whether the information benefits the coding decision at
the relay node or not. This proactive Hello message technique consumes considerable
wireless channel bandwidth, resulting in the poor network performance observed. In
addition, the current network coding scheme does not provide a way to dynamically adjust
the value of the Hello message interval. As implied by the previous simulation results, there is an
optimal Hello message interval that achieves the best network performance in the simple X topology,
and this optimal value varies across network topologies with different traffic patterns.
To address these findings, we present our intelligent version of this opportunistic network
coding scheme in this chapter. The coding opportunity is detected automatically, while the
Hello message is scheduled in an on-demand style with the interval set
accordingly.
6.1 Intelligent Hello Message
First, we discuss some challenges in making this opportunistic network coding
scheme intelligent enough to address the issues mentioned above. In a large scale wireless network,
not all nodes are active: some nodes snooping on the network do not
generate, forward or receive traffic. These nodes do not do network coding.
Even among the active nodes, not all have a coding opportunity. As we know, this
opportunistic network coding scheme only combines packets from different traffic
flows, so only relay nodes with at least two different traffic flows crossing them have
coding opportunities. The first issue, then, is how a node in the network can
discover that it is a relay node with different traffic flows crossing it. Likewise, not all
of a node's information benefits the overall network coding. For
example, inactive nodes that merely snoop on the network should not broadcast Hello
messages and consume channel bandwidth, since they do not take part in the communication. In
other words, only the neighbors of a relay node with coding opportunities can
possibly contribute to the coding decision. However, they should not broadcast Hello
messages all the time, since the relay node does not always have a coding opportunity.
So the second issue is when the neighbors should turn on Hello messages and share their
information with the relay node to contribute to the coding decision. Furthermore, when the
Hello message is turned on, what value should be chosen as the interval? In addition, the
coding opportunity does not exist indefinitely, since one traffic flow may end at
any moment while another begins somewhere else. This poses another challenge: nodes must
decide when to turn off Hello messages. As can be seen, our proposed scheme should be
made more intelligent to overcome these challenges and thus improve the overall
throughput. The control flow of our intelligent scheme is shown in the next section, and
solutions to address these challenges are presented.
6.2 Control Flow
In this section, the overall control flow of our intelligent opportunistic network
coding scheme is shown. The overall control flow consists of two parts, the Host Side and
the Neighbor Side. The Host Side includes the decision procedures at the relay node, which
detects the coding opportunity and broadcasts control messages to its neighbors, while the
Neighbor Side covers the response procedures at the corresponding neighbors, which
receive the control messages from the Host Side. Through the communication between
the host and its neighbors, the neighbors schedule the Hello message in an on-demand style,
which is more efficient and causes less overhead than the basic approach.
6.2.1 Host Side
Figure 18 shows the control flowchart of the Host Side. Initially, all nodes work in
promiscuous mode and snoop on the network, but with their Hello messages turned off,
unlike in the basic algorithm described previously, where all
nodes eavesdropping on the network are eager to share their own states without considering
whether their information is valuable. Here we make all nodes “keep quiet” until
some of them are told to share information. Our scheme works as follows: whenever a
node is ready to send packets, it checks the information of all packets in its output queue,
from which it can decide whether it has a coding opportunity. The information for each
packet contains its previous hop, next hop and destination.
information can be provided by integrating this opportunistic network coding scheme with
some routing protocols. In addition, recall that each node also exploits a waiting scheme
with a queue threshold N when it is ready to send packets, which means there are at least
N packets in the node’s output queue. By collecting the information array of these N
packets, the node is able to detect the traffic pattern crossing itself. The detection
procedure is as follows. All packets from the same previous hop and to the same next hop
are classified into one traffic flow, since this opportunistic network coding scheme does
one-hop network coding. This means that the coded packet is decoded immediately
at the next hop, and the next hop then checks the coding opportunity for the decoded packet
again. Thus, after the node gets the information array of all N packets in the output queue,
it knows how many different traffic flows are crossing it. A potential
coding opportunity exists when at least two different traffic flows cross each other,
because this network coding scheme only does inter-flow coding. However, the real
coding opportunity still depends on the exact topology the node belongs to, which has a
bearing on the overall performance, as shown in the simulation results presented in chapter
5. In order to learn its local topology, the node needs information up to two hops
away, which can be obtained by a route discovery protocol that we do not elaborate
here. Instead, we simply assume that all nodes are able to discover two-hop
information in the current configuration. The explicit route discovery protocol design and
integration with this network coding scheme is beyond the scope of this work. Once the
node finds out its local topology, a decision metric for its coding decision is
available. For example, take any two traffic flows: if the next hop of one flow can
hear the previous hop of the other, and vice versa, then packets
from these two flows can be combined, since the next hops of these packets are
able to decode the combined packets. This is easily illustrated in the X topology shown in
Figure 11(a) in chapter 5. There are two traffic flows crossing the relay node, from node 1
to node 5 and from node 2 to node 4, which can be written as 1->5 and 2->4. As long as
{1, 4} and {2, 5} are able to hear each other, we deem that the relay node has a coding
opportunity. This decision metric can also be applied in the Alice-and-Bob, Cross or other
scenarios.
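A minimal sketch of this flow classification and decision metric follows. The data structures and the `can_hear` predicate are our own assumptions; in the real scheme the neighborhood information would come from the assumed two-hop discovery.

```python
from itertools import combinations

def detect_flows(queued_packets):
    """Classify queued packets into flows by (previous hop, next hop),
    as the one-hop coding rule above requires."""
    return {(pkt["prev_hop"], pkt["next_hop"]) for pkt in queued_packets}

def has_coding_opportunity(flows, can_hear):
    """Two flows are codable when each flow's next hop can hear the other
    flow's previous hop, so both next hops can decode the XOR-ed packet.
    `can_hear(a, b)` is an assumed predicate built from two-hop info."""
    for f1, f2 in combinations(flows, 2):
        if can_hear(f1[1], f2[0]) and can_hear(f2[1], f1[0]):
            return True
    return False

# X-topology example from the text: flows 1->5 and 2->4 cross the relay,
# and {1, 4} and {2, 5} can hear each other.
hears = {(5, 2), (2, 5), (4, 1), (1, 4)}
flows = detect_flows([{"prev_hop": 1, "next_hop": 5},
                      {"prev_hop": 2, "next_hop": 4}])
print(has_coding_opportunity(flows, lambda a, b: (a, b) in hears))  # True
```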
Figure 18: Host Side
Once the node has a coding opportunity, it continues to check whether it needs the
help of its neighbors, since it still needs reception reports from neighbors to do
network coding. The procedure for deciding whether it needs help is as follows.
First, if the node has no record for the intended neighbor, it certainly
needs a reception report from that neighbor. On the other hand, if the intended
neighbor has previously sent something to this node, the node must have recorded that
information in its local buffer, so it checks the time stamp of the neighbor’s record
there. If the record is relatively new, the node does not need help from
the neighbor, because the neighbor has already sent its latest information. Otherwise, the
node needs fresh information from the neighbor. A record whose time stamp falls
within the last T2 seconds, called the New Record Duration, is regarded as new;
otherwise, it is regarded as old. If the node needs help from its neighbors, it schedules
a control message broadcast informing them to turn on Hello messages
with a specific packet interval T0. This T0 is the average input packet interval of the
traffic flow, which can be measured while the node is checking the packets’ information in
the output queue. Once the above coding decision is taken, the node goes through the
opportunistic coding procedures as in the previous algorithm.
On the other hand, if this node has no coding opportunity, it simply skips the
opportunistic coding procedures. Before that, it checks whether it has previously informed its
neighbors to turn on Hello messages; if so, it schedules another control packet
asking them to turn the Hello messages off.
6.2.2 Neighbor Side
Figure 19 shows the flowchart for the Neighbor Side, where the procedure is much
simpler since the neighbors just do what the host tells them to do. When a
neighbor receives a control packet from the relay node, it simply checks the flag in the
packet to decide whether to turn the Hello message on or off. To turn
it on, it schedules the corresponding periodic timer based on the Hello message
interval enclosed in the control packet; to turn it off, it simply cancels the
Hello message timer maintained in the protocol state structure. In addition, when a
neighbor is ready to broadcast a Hello message, it should always check whether it
has received any new messages during the last T1 seconds, called the New Packet Duration. If not,
it should stop broadcasting Hello messages; otherwise, it broadcasts the Hello
message as usual. This check prevents the neighbors from broadcasting useless
information to the relay node, which would again consume extra wireless bandwidth.
Figure 19: Neighbor Side
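A sketch of this neighbor-side reaction is given below. The `state` dictionary stands in for the protocol state structure, and the function names are our own; T1 is the New Packet Duration.

```python
def on_control_packet(state, packet, now):
    """React to a host control packet: start the periodic Hello timer with
    the enclosed interval T0, or cancel it when told to turn off."""
    if packet["hello_on"]:
        state["hello_interval"] = packet["t0"]
        state["next_hello_at"] = now + packet["t0"]
    else:
        state["next_hello_at"] = None   # cancel the Hello timer

def should_broadcast_hello(state, now, t1_new_packet=1.0):
    """Before each Hello broadcast, check that the timer has fired and that
    something new was received within the last T1 seconds; otherwise stay
    quiet rather than broadcast stale information."""
    if state.get("next_hello_at") is None or now < state["next_hello_at"]:
        return False
    return now - state.get("last_rx_time", float("-inf")) <= t1_new_packet
```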
We have presented the intelligent version of the opportunistic network coding
scheme in this chapter. In the next chapter, the evaluation of this intelligent scheme will be
elaborated.
Chapter 7
Simulation And Evaluation
In the previous chapter, we presented the details of our intelligent version of
opportunistic network coding for wireless mobile ad-hoc networks. In this chapter, the
simulation and evaluation of this intelligent scheme are presented. The
implementation of this intelligent scheme is relatively simple: we just add a few extra
protocol states in the CopeData structure. An extra coding opportunity detection function
is added before the coding procedure function Coding(). In addition, a control packet
handler function is implemented to process the control message at the neighbor side. The
details of the implementation can be found in our source code. With regard to the simulation,
most of the simulator parameters and performance metrics remain the same as those
mentioned in chapter 5. We further expand our simulation from simple wireless topologies
to a large scale multi-hop wireless environment, thus evaluating the performance of our
intelligent scheme in large scale networks. In our simulation, the New Packet Duration T1
and the New Record Duration T2 are both set at 1 second.
7.1 Simulation
Ideally, we would compare the performance of this intelligent algorithm with the
previous basic algorithm on the simple scenarios shown in Chapter 5. However, the
intelligent algorithm builds on the basic one while targeting large scale
wireless networks, so the same simulation results are expected for both
algorithms on those simple network scenarios. We therefore do not present those simulation
results here due to space limitations; in fact, the simulation
results shown in Figures 20 and 23 support this expectation.
We simulated the intelligent scheme in the same 20-node scenario shown in Figure
12 in chapter 5 to show the expected improvement over the previous basic algorithm.
In this scenario, node 1 and node 5 communicate with each other with the help of node 3,
where the route is discovered by AODV. Figure 20 shows the simulation results for this
scenario. As can be seen from Figure 20(a), the lines with square markers and diamond
markers indicate the network throughput achieved by the non-coding method and the basic
coding algorithm, respectively, as previously shown in Figure 17. The basic network
coding scheme makes the network performance much worse than the non-coding method
on a large scale wireless network. However, the line with the x marker, indicating the network
throughput achieved by our intelligent scheme, shows that the network performance has
improved significantly compared to the basic coding scheme. The network throughput
achieved by the intelligent scheme is also much better than that of the non-coding method, which
number of packets dropped has also been significantly reduced by our intelligent scheme
in this large scale wireless network. However, this improvement in network throughput is
achieved with larger average end-to-end packet delay. Figure 20(b) shows the average
end-to-end packet delay has increased, especially at the light load situation. The trimodal
delay curve is expected because in this simple traffic pattern one packet would have to
experience two stages of delay due to the waiting scheme. The first stage of delay happens
at the output queue of the source while the second happens at the output queue of the relay
node. At each stage the delay can be modeled like this:
d1 = { c1,     if Ti > T
     { N·Ti,   if Ti ≤ T

d2 = { c2,     if Ti’ > T
     { N·Ti’,  if Ti’ ≤ T
where d1 and d2 are the delays incurred at the source and the relay node, respectively;
c1 and c2 are the constant delays when the waiting scheme is not executed; N is the
packet-count threshold of the waiting scheme; and T is the waiting duration of the waiting
scheme, here set at 40 milliseconds. Ti is the input packet interval at the source, while Ti’
is the interval at the relay node. It is easy to see that Ti = 2Ti’. Thus d2 can be rewritten as:
d2 = { c2,        if Ti > 2T
     { 0.5·N·Ti,  if Ti ≤ 2T
Then the total delay d is
d = d1 + d2 = { c1 + c2,        if Ti > 2T
              { c1 + 0.5·N·Ti,  if T < Ti ≤ 2T
              { 1.5·N·Ti,       if Ti ≤ T
which is a three-segment curve matching the first half of the curve with the x marker
shown in Figure 20(b). The tail of the curve is caused by the significant packet drops, which
lead to an increasingly long average end-to-end packet delay.
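The piecewise model above can be checked numerically with a small sketch (symbols as defined in the derivation; the parameter values below are examples only):

```python
def total_waiting_delay(t_i, n, t_wait, c1=0.0, c2=0.0):
    """Two-stage waiting-scheme delay from the derivation above: d1 at the
    source (packet interval Ti) plus d2 at the relay, whose interval is
    Ti' = Ti/2 because the relay carries both directions of the flow."""
    d1 = c1 if t_i > t_wait else n * t_i
    d2 = c2 if t_i > 2 * t_wait else 0.5 * n * t_i
    return d1 + d2

T, N = 0.04, 16
print(total_waiting_delay(0.1, N, T, c1=0.001, c2=0.002))  # Ti > 2T: c1 + c2
print(total_waiting_delay(0.06, N, T, c1=0.001))           # T < Ti <= 2T
print(total_waiting_delay(0.02, N, T))                     # Ti <= T: 1.5*N*Ti
```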
Besides this simulation of the simple traffic pattern, we extend our simulation to more complex multi-hop traffic patterns. As mentioned in the previous chapter, we do not design a specific route discovery protocol to obtain the two-hop neighborhood information required by our intelligent scheme, but simply assume that all nodes in the network already have this information. To simplify setting up the static route table for each node, we use the regular multi-hop wireless scenario shown in Figure 21, where all 20 nodes are regularly placed in a grid covering an area of 1200m*1200m. In this scenario, each node can only hear its immediate neighbors, and the static route table of each node is set accordingly.
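The grid layout and its neighbor-derived adjacency can be illustrated with a short sketch. The 4x5 arrangement and 300 m spacing below are assumptions for illustration only; the thesis does not specify the exact coordinates used in QualNet.

```python
# Hypothetical 4x5 layout of the 20 nodes; spacing is a placeholder.
ROWS, COLS, SPACING = 4, 5, 300


def position(node_id):
    """Map node id 1..20 to an (x, y) grid coordinate in meters."""
    r, c = divmod(node_id - 1, COLS)
    return (c * SPACING, r * SPACING)


def neighbors(node_id):
    """Each node can only hear its immediate grid neighbors, so the
    static route table is built from this adjacency."""
    r, c = divmod(node_id - 1, COLS)
    adj = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < ROWS and 0 <= nc < COLS:
            adj.append(nr * COLS + nc + 1)
    return adj
```

With this (assumed) numbering, nodes 6 and 10 sit at opposite ends of one row with nodes 7, 8 and 9 between them, consistent with the 4-hop chain scenario discussed later; a static next-hop table for such a flow is built by chaining one-hop neighbors.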
(a) Comparison of network throughput
(b) Comparison of average end-to-end packet delay
(c) Comparison of total packet dropped
Figure 20: Simulation results for the 20-node scenario, comparing network throughput, average end-to-end packet delay, and total number of packets dropped for the cases without network coding, with basic network coding, and with intelligent network coding
Figure 21: 20-node multi-hop scenario, where 20 nodes are regularly placed in a grid covering an area of 1200m*1200m
We first repeat the simple X and Cross scenarios from the original 5-node topology in this 20-node environment, as shown in Figure 22. As expected, the simulation results on this larger-scale wireless network are similar to those of the simple scenarios, as confirmed by the results shown in Figure 23. Figure 23(a) shows that for the X topology the network throughput is improved by nearly 40%, from 500 Kbps to 700 Kbps, by the intelligent scheme. Figure 23(b) shows that for the Cross topology the network throughput is improved almost 2.7-fold, from 300 Kbps to 800 Kbps.
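The quoted gains follow directly from the reported throughput figures; a quick check (an illustrative helper of ours, not part of the simulator):

```python
def gain_and_fold(before_kbps, after_kbps):
    """Relative gain and fold change between two throughput readings."""
    return (after_kbps - before_kbps) / before_kbps, after_kbps / before_kbps


# X scenario: 500 -> 700 Kbps is a 40% gain.
x_gain, _ = gain_and_fold(500, 700)

# Cross scenario: 300 -> 800 Kbps is roughly a 2.7-fold improvement.
_, cross_fold = gain_and_fold(300, 800)
```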
(a) X scenario
(b) Cross scenario
Figure 22: X and Cross scenarios on regular 20-node wireless environment
(a) Comparison of network throughput on X scenario
(b) Comparison of network throughput on Cross scenario
Figure 23: Network throughput from X and Cross scenarios on regular 20-node
wireless environment
(a) 4-hop chain scenario
(b) Long X scenario
Figure 24: Multi-hop scenarios on regular 20-node wireless environment
Next, we further expand our simulation to multi-hop traffic flows. Figure 24(a) shows the 4-hop chain scenario, where node 6 and node 10 communicate with each other over CBR with the help of the relay nodes between them. This scenario is similar to the simple Alice-and-Bob scenario but two hops longer, so the coding procedure is considerably more complex. The simulation results are shown in Figure 25, where we see that the total numbers of packets dropped by the two methods are almost the same, while the network throughput achieved by our intelligent network coding scheme is slightly better than that of the non-coding method.
Another example is the Long X scenario shown in Figure 24(b), which is expanded from the simple X scenario. However, the overall coding procedure is quite different from that of the basic scenario. For example, all nodes except the circled ones decide to keep quiet, sending no Hello messages during the communication period, since they know they cannot contribute to the ongoing communication. In addition, the relay node where the two traffic flows cross decides not to ask its neighbors to turn on Hello messages, while the other two relay nodes, 8 and 14, detect that they have no coding opportunity and therefore do not ask the two destinations to turn on Hello messages either. None of these features was available in the previous opportunistic network coding algorithm. The corresponding simulation results are shown in Figure 26, where we observe that the maximum network throughput is improved by nearly 33%, from 300 Kbps to 400 Kbps, compared to the non-coding method. Meanwhile, the total number of packets dropped is also significantly reduced by the intelligent scheme.
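The on-demand Hello decisions described above can be condensed into a sketch. The predicate names below are hypothetical and only summarize the behavior; they do not correspond to the actual QualNet implementation.

```python
def hello_policy(node):
    """Illustrative on-demand Hello decision for one node.

    - Nodes not involved in any active flow stay silent (the
      uncircled nodes in Figure 24(b)).
    - A coding relay asks its neighbors to turn on Hello messages
      only when decoding at the next hops depends on
      opportunistically overheard packets; if the next hops already
      hold the native packets because they forwarded them, no Hello
      messages are requested.
    - Relays without a coding opportunity likewise leave their
      neighbors' Hello messages off.
    """
    if not node.involved_in_active_flows:
        return "silent"
    if node.has_coding_opportunity and node.needs_overheard_packets:
        return "ask_neighbors_for_hello"
    return "no_hello_requests"
```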
(a) Comparison of network throughput
(b) Comparison of total number of packets dropped
Figure 25: Simulation results for the 4-hop chain scenario
(a) Comparison of network throughput
(b) Comparison of total number of packets dropped
Figure 26: Simulation results for the Long X scenario
7.2 Evaluation
As can be seen from the simulation results above, our intelligent opportunistic network coding scheme solves the issues encountered in the basic algorithm. The intelligent scheme works well in large-scale multi-hop wireless environments by replacing the proactive Hello message technique with on-demand Hello messages. Nodes that cannot contribute positively to the network-coding communication keep quiet, causing no overhead or interference to the network, while nodes that take part in the communication are able to detect coding opportunities and correctly instruct their neighbors to turn Hello messages on or off. For example, in the 4-hop chain scenario, our simulation results show that all three intermediate nodes, 7, 8 and 9, coded several packets, whereas in the Long X scenario only node 13 coded packets and the other two relay nodes, 8 and 14, simply forwarded packets because they had no coding opportunity. The two destination nodes therefore just receive the data quietly without broadcasting any Hello messages.
Though only a few regular wireless scenarios were chosen to verify our intelligent opportunistic network coding scheme, the simulation results are sufficient to show that the scheme improves overall network throughput regardless of topology, albeit to different degrees. For some specific network topologies and traffic patterns, the network throughput can be improved by up to 3-fold. Our intelligent scheme is effective in detecting potential coding opportunities based on the local topology and the current traffic patterns, and in taking the corresponding actions to maximize overall network throughput. In general, the total number of packets dropped is also significantly reduced.
In the next chapter, we will present our overall conclusions arising from the implementation and evaluation of our opportunistic network coding scheme in wireless mobile ad-hoc networks. Future work will also be outlined to facilitate possible enhancements to our intelligent opportunistic network coding scheme.
Chapter 8
Conclusions and Future Work
8.1 Conclusions
Network coding has been demonstrated to be a promising technique for current wireless packet-switched networks. In order to bring the benefits of network coding to wireless mobile ad-hoc networks, we have designed an intelligent opportunistic network coding scheme and implemented it in the QualNet simulator. To achieve this, we first studied the behavior of the opportunistic network coding scheme and evaluated its performance in wireless mesh networks carrying UDP traffic. Our implementation and subsequent investigations using the QualNet simulator have shown that this intelligent opportunistic network coding scheme can practically be integrated into the current protocol stack and works well with current 802.11 wireless networks. Furthermore, by eliminating the overhead and interference caused by inefficient Hello messages, this intelligent network coding scheme can be deployed in a large-scale network. Our simulation results also show that this intelligent opportunistic network coding scheme improves the overall network throughput of wireless mesh networks with UDP traffic, and the total number of packets dropped is significantly reduced as well. Although the improvement in performance depends highly on the overall wireless scenario and the exact traffic patterns, the network throughput of some specific simple topologies can be improved by nearly 3-fold with a significant reduction in packets dropped. Even for the long chain scenario, the network throughput can be improved by up to 10%.
8.2 Future Work
Our goal is to research network coding in wireless mobile ad-hoc networks (MANETs) to improve metrics such as network throughput gain and robustness. Our work here has achieved some important milestones with respect to this goal and has also shed light on possible future work that would complete the overall objective. First, in order to evaluate the performance of pure network coding, we made our intelligent network coding scheme independent of the underlying routing protocol. In future, therefore, an explicit routing protocol for wireless mobile ad-hoc networks that is able to discover the local topology should be designed and integrated with our intelligent network coding scheme for evaluation. We believe that, by taking full advantage of an efficient routing protocol for wireless mobile ad-hoc networks, the proposed intelligent opportunistic network coding scheme should perform even better in mobile environments.
In addition, the impact of two parameters in the coding opportunity discovery procedure, namely New Packet Duration and New Record Duration, should be evaluated further, especially in mobile environments; these two parameters may be associated with the degree of mobility of the network nodes. Furthermore, since we have evaluated the performance of network coding with UDP traffic, the more complex TCP traffic could be explored next. Compared to UDP, TCP includes features such as congestion control, acknowledgements and retransmissions, which may raise additional challenging issues to be addressed.
Finally, another interesting area for consideration would be information security. In our intelligent opportunistic network coding scheme, we not only exploit a simple XOR to combine input packets but also allow all nodes to eavesdrop on the network, which may be a potential security issue. A study of the security issues of network coding for wireless mobile ad-hoc networks would be relevant here.
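To make the concern concrete: with plain XOR coding, any eavesdropper that has captured one native packet can recover the other from a coded transmission. A minimal sketch (equal-length packets assumed for simplicity):

```python
def xor_combine(p1: bytes, p2: bytes) -> bytes:
    """XOR two equal-length packets into one coded packet."""
    return bytes(a ^ b for a, b in zip(p1, p2))


# The relay encodes two native packets into a single transmission.
coded = xor_combine(b"hello", b"world")

# Any node that already holds one native packet -- including an
# eavesdropper -- recovers the other by XORing again.
assert xor_combine(coded, b"hello") == b"world"
assert xor_combine(coded, b"world") == b"hello"
```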
REFERENCES
[1] R. Ahlswede, N. Cai, S. Li, and R. Yeung, “Network information flow,” IEEE
Transactions on Information Theory, vol. 46, no. 4, Jul. 2000, pp. 1204-1216.
[2] S. Katti, H. Rahul, W. Hu, D. Katabi, M. Médard and J. Crowcroft, “XORs in The
Air: Practical Wireless Network Coding,” ACM SIGCOMM, Pisa, Italy, Sep. 2006.
[3] Qunfeng Dong, Jianming Wu, Wenjun Hu and Jon Crowcroft, “Practical Network
Coding in Wireless Networks,” ACM MOBICOM, Montréal, Québec, Canada, Sep.
2007.
[4] Christina Fragouli, Dina Katabi, Athina Markopoulou, Muriel Médard and Hariharan
Rahul, “Wireless Network Coding: Opportunities & Challenges,” IEEE MILCOM,
Orlando, FL, USA, Oct. 2007, pp. 1-8.
[5] S. Katti, D. Katabi, W. Hu, H. S. Rahul, and M. Médard, “The importance of being
opportunistic: Practical network coding for wireless environments,” 43rd Annual
Allerton Conference on Communication, Control, and Computing, Monticello, IL,
USA, Sep. 2005.
[6] S. Li, R. Yeung and N. Cai, “Linear network coding”, IEEE Transactions on
Information Theory, vol. 49, no. 2, Feb. 2003, pp. 371–381.
[7] R. Koetter and M. Medard, “An algebraic approach to network coding”, IEEE/ACM
Transactions on Networking, vol. 11, no. 5, Oct. 2003, pp. 782–795.
[8] T. Ho, M. Medard, J. Shi, M. Effros, and D. Karger, “On randomized network coding,”
41st Annual Allerton Conference on Communication, Control, and Computing,
Monticello, IL, USA, Oct. 2003.
[9] J-S. Park, M. Gerla, D. S. Lun, Y. Yi and M. Medard, “CodeCast: A Network-Coding-Based Ad Hoc Multicast Protocol,” IEEE Wireless Communications, vol. 13, no. 5,
Oct. 2006, pp. 76-81.
[10] A. Ramamoorthy, J. Shi and R. Wesel, “On the capacity of Network coding for
random networks,” IEEE Transactions on Information Theory, vol. 51, no. 8, Aug.
2005, pp. 2878–2885.
[11] S. Deb, M. Effros, T. Ho, D. Karger, R. Koetter, D. Lun, M. Medard and N.
Ratnakar, “Network coding for wireless applications: A brief tutorial,” International
Workshop on Wireless Ad-hoc Networks (IWWAN), London, UK, May 2005.
[12] D. Lun, N. Ratnakar, R. Koetter, M. Medard, E. Ahmed and H. Lee, “Achieving
minimum-cost multicast: A decentralized approach based on network coding,” IEEE
INFOCOM, Miami, FL, USA, Mar. 2005. pp. 1607-1617
[13] S. Chachulski, M. Jennings, S. Katti and D. Katabi, “Trading structure for
randomness in wireless opportunistic routing,” ACM SIGCOMM, Kyoto, Japan, Aug.
2007.
[14] S. Katti, D. Katabi, H. Balakrishnan and M. Medard, “Symbol-level network
coding for wireless mesh networks,” ACM SIGCOMM, Seattle, WA, USA, Aug. 2008.
[15] X. Zhang and B. Li, “Dice: A game-theoretic framework for wireless multipath
network coding,” 9th ACM International Symposium on Mobile Ad-hoc Networking
and Computing, Hong Kong, China, May 2008, pp. 293-302.
[16] Christina Fragouli, Jean-Yves Le Boudec and Jorg Widmer, “Network Coding: An
Instant Primer,” ACM SIGCOMM Computer Communication Review, vol. 36, no. 1,
Jan. 2006, pp. 63-68.
[17]
Internet packet size distributions: Some observations.
http://www.isi.edu/~johnh/PAPERS/Sinha07a.html
http://netweb.usc.edu/~rsinha/pkt-sizes/.
[18] D. S. DeCouto, D. Aguayo, J. Bicket and R. Morris, “A High Throughput Path
Metric for Multi-Hop Wireless Routing,” 9th Annual International Conference on
Mobile Computing and Networking, San Diego, CA, USA, Sep. 2003, pp. 134-146.
[19] C. E. Perkins, E. M. Royer, “Ad-hoc On-Demand Distance Vector Routing” 2nd
IEEE Workshop on Mobile Computing Systems and Applications, New Orleans, LA,
USA, Feb. 1999.
[20] T. Clausen, P. Jacquet, A. Laouiti, P. Muhlethaler, A. Qayyum and L. Viennot,
“Optimized link state routing protocol for ad hoc networks,” IEEE International Multi
Topic Conference, Pakistan, Dec. 2001, pp. 62-68.
[21] Ajoy Kar, “Performance study of practical wireless network coding in 802.11
based wireless mesh networks,” Master's thesis, National University of Singapore, 2009.