The problem of exploiting all the available shortest paths between a source and
destination to improve Quality of Service (QoS) is deceptively simple. From prior
work it is known that even when multiple shortest paths are available, a straightforward
approach to using such paths will result in nodes along one of the paths handling
more entities than the nodes along the other shortest paths. This phenomenon,
called Loading, limits the QoS and warrants the design of new methods
to better utilize the available paths. A contour was defined as the union of
all the shortest paths between a source and a destination. It was also shown that contours
have an interesting structure that depended on the number of neighbors of each node
and the relative location of the source and the destination. This structure
characterized the number of corner nodes, the length of the shortest paths, the number
of shortest paths, the depth of the expansion region, and the depth of the contraction region
in graph theoretic terms. The structural characterization of contours was important both
for precisely characterizing loading and for designing effective methods to exploit the
paths available in the contours. However, several questions remained open.
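The contour definition above admits a simple computational characterization: a node v lies on some shortest path between s and t exactly when d(s,v) + d(v,t) = d(s,t), so two breadth-first searches suffice to recover the entire contour. The following sketch illustrates this on a small four-neighbor mesh (the helper names bfs_distances, contour, and mesh4 are illustrative, not taken from the dissertation):

```python
from collections import deque

def bfs_distances(adj, start):
    """Unweighted single-source shortest-path distances via BFS."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def contour(adj, s, t):
    """Union of all shortest s-t paths: v qualifies iff
    d(s,v) + d(v,t) == d(s,t)."""
    ds, dt = bfs_distances(adj, s), bfs_distances(adj, t)
    return {v for v in ds if v in dt and ds[v] + dt[v] == ds[t]}

def mesh4(rows, cols):
    """Regular mesh in which each interior node has four neighbors."""
    adj = {}
    for r in range(rows):
        for c in range(cols):
            nbrs = [(r + dr, c + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            adj[(r, c)] = [(x, y) for x, y in nbrs
                           if 0 <= x < rows and 0 <= y < cols]
    return adj

adj = mesh4(3, 3)
C = contour(adj, (0, 0), (2, 2))
# For opposite corners of a 3x3 four-neighbor mesh, every node lies on
# some shortest path, so the contour is the whole mesh.
```

The same double-BFS test works on irregular topologies, which is why, as discussed later, discovering contours is straightforward even when characterizing their structure is not.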
This dissertation presents a new graph theoretic characterization of the structure
of contours in regular mesh connected topologies when each node has three
neighbors. This is a particularly interesting structure because it appears to be robust
while making efficient use of the available resources. The results show that the
structure of contours is severely affected by the relative location of the source
and destination. This means that a small change in the location of either the source
or the destination can dramatically change the structure of the contours and hence
affect the QoS that can be achieved. The results here show that for over two-thirds
of the possible source-destination pairs, the contours will have a pendant
node, i.e., a node that is connected to only one neighbor, and this pendant node is
the bottleneck that limits performance.
For a new application, it is possible to use different kinds of connectivity between
the nodes; for example, each node can have three, four, six, or eight neighbors.
Each choice has an associated cost, and the choice critically affects the QoS.
This dissertation presents empirical results that offer new insights
into the cost versus QoS trade-offs in the design of mesh connected systems. These
results show that the best throughput can be achieved when each node has six
neighbors and the best latency is achieved when each node has eight neighbors.
This dissertation also presents two applications that demonstrate how the structure
of the neighborhoods and the structure of contours can be exploited to improve
performance. The first application is in the domain of data aggregation, where it is shown
that the structure of neighborhoods helps to identify cluster heads. The second application
demonstrates how the structure of contours can be exploited to improve QoS in
a Network-on-Chip that is central to the design of next-generation processors.
The third focus for this dissertation is to understand how to weaken the regularity
assumption that was central to the graph theoretic characterizations. Given a system
that does not have a regular topology, discovering contours is not a significant
challenge, because simple extensions of the classical shortest-path algorithms
suffice. However, characterizing the structure of the resulting contours is difficult.
In the absence of such a characterization, one needs to use a stochastic technique
to improve QoS. It is shown that classical machine learning algorithms can effectively
exploit the contours to improve QoS. It is first shown empirically that critical parameters
that control the performance of the machine learning algorithm, namely the learning
rate and the feedback interval, depend on the topology. Next, it is shown that when
multiple contours overlap, the machine learning algorithm effectively spreads
entities to improve QoS. Finally, empirical results also show that the machine learning
algorithm exploits non-shortest paths to achieve the best possible QoS.
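The dissertation does not fix a specific algorithm in this summary, but the two parameters named above, a learning rate and a feedback interval, are characteristic of classical learning automata. A minimal linear reward-inaction sketch, assuming a per-node automaton that chooses a next hop and shifts probability toward hops that receive favorable feedback (all names here are illustrative):

```python
import random

class RoutingAutomaton:
    """Linear reward-inaction automaton choosing a next hop at one node.
    An illustrative stand-in for the classical learning algorithm
    referenced in the text, not the dissertation's exact method."""

    def __init__(self, next_hops, learning_rate=0.1):
        self.hops = list(next_hops)
        # Start with a uniform action-probability vector.
        self.p = [1.0 / len(self.hops)] * len(self.hops)
        self.lr = learning_rate

    def choose(self):
        """Sample a next hop according to the current probabilities."""
        return random.choices(range(len(self.hops)), weights=self.p)[0]

    def reward(self, i):
        """On favorable feedback for action i, move probability mass toward it;
        the update preserves sum(p) == 1."""
        for j in range(len(self.p)):
            if j == i:
                self.p[j] += self.lr * (1.0 - self.p[j])
            else:
                self.p[j] *= (1.0 - self.lr)

auto = RoutingAutomaton(["east", "north", "northeast"], learning_rate=0.2)
for _ in range(50):
    auto.reward(0)  # consistent positive feedback for the "east" hop
# Probability mass concentrates on the consistently rewarded hop.
```

Because the probability vector never zeroes out any action, the automaton keeps occasionally sampling alternative hops, which is consistent with the observation that such algorithms can spread entities across overlapping contours and exploit non-shortest paths.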