Tropical linear algebra is the study of classical linear algebra problems with arithmetic
done over the tropical semiring, namely with addition replaced by max, and multiplication
replaced by addition. It allows one to reformulate nonlinear problems as piecewise-linear ones. This approach has been successfully employed to solve and characterize solutions to
many problems in combinatorial optimization, control theory and game theory [5]. Tropical
spectral theory, the study of tropical eigenvalues and eigenspaces, often plays a central role
in these applications. We derive the basics of this theory in Chapter 1.
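For concreteness, in the max-plus convention just described the two tropical operations are
\[
a \oplus b := \max(a,b), \qquad a \odot b := a + b,
\]
and the tropical eigenproblem for an $n \times n$ matrix $A = (a_{ij})$ asks for a vector $x$ and a scalar $\lambda$ with $A \odot x = \lambda \odot x$, that is,
\[
\max_{1 \le j \le n} \,(a_{ij} + x_j) = \lambda + x_i \qquad \text{for all } i.
\]
These formulas are only meant to fix notation here; the precise definitions and basic properties are developed in Chapter 1.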
In Chapter 2 we give a combinatorial description of the cones of linearity of the tropical
eigenvector map. In Chapter 3 we extend this work to cones of linearity of the tropical
eigenspace and polytrope map. Our results contribute to a better understanding of the
polyhedral foundations of tropical linear algebra.
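The polyhedral structure underlying these chapters can already be seen in the tropical eigenvalue: for an irreducible $n \times n$ matrix $A$, a basic fact of tropical spectral theory (cf. Chapter 1) identifies the eigenvalue with the maximum cycle mean
\[
\lambda(A) = \max_{(i_1, \ldots, i_k)} \frac{a_{i_1 i_2} + a_{i_2 i_3} + \cdots + a_{i_k i_1}}{k},
\]
the maximum running over all cycles in the complete directed graph on $n$ nodes. Being a maximum of finitely many linear functions of the entries of $A$, the eigenvalue is piecewise linear in $A$, and its domains of linearity are polyhedral cones indexed by which cycle attains the maximum. The eigenvector, eigenspace and polytrope maps admit analogous, finer cone decompositions, which Chapters 2 and 3 describe combinatorially.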
Chapter 4 illustrates the above results in the context of pairwise ranking. Here one assigns to each pair of candidates a comparison score, and the algorithm produces a cardinal (numerically quantified) ranking of the candidates. This setup is natural in sports competitions, business and decision making. The difficulty lies in the existence of inconsistencies of the
form $A > B > C > A$, since pairwise comparisons are performed independently. TropicalRank is an algorithm pioneered by Elsner and van den Driessche. The solution sets of this ranking method are precisely the polytropes studied in Chapter 3. For a generic input pairwise comparison matrix, the solution set consists of a single point, the tropical eigenvector, which is then interpreted as the vector of comparison scores. In particular, the results in Chapter 3
provide a complete classification of all possible solution sets to the optimization problem
that TropicalRank solves. This answers open questions from several papers [22, 32] in the
area.
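As a toy illustration of the inconsistency issue, in an additive convention let $X_{ij}$ denote the score by which candidate $i$ is preferred to candidate $j$, so that $X_{ji} = -X_{ij}$. A cycle $A > B > C > A$ means $X_{AB}, X_{BC}, X_{CA} > 0$, and then no cardinal score vector $s$ can satisfy $X_{ij} = s_i - s_j$ for all pairs, since
\[
X_{AB} + X_{BC} + X_{CA} > 0 \qquad \text{while} \qquad (s_A - s_B) + (s_B - s_C) + (s_C - s_A) = 0 .
\]
In this convention the tropical eigenvalue of the comparison matrix is a maximum cycle mean and so records the worst such cyclic inconsistency, while the tropical eigenvector is the cardinal score that TropicalRank outputs; the precise conventions are fixed in Chapter 4, and the additive picture here is only meant as orientation.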
In Chapter 4 we also show that TropicalRank belongs to the same parametrized family
of ranking methods as two other commonly used algorithms, PerronRank and HodgeRank.
Furthermore, we show that HodgeRank and PerronRank asymptotically give the same score
under certain random ranking models. Despite their mathematical connections, we can
construct instances in which these three methods produce arbitrarily different rank orders.
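In outline, and up to normalizations and conventions that Chapter 4 makes precise, the three methods can be compared as follows. Assume a multiplicative comparison matrix $A = (a_{ij})$, with $a_{ij}$ measuring how strongly candidate $i$ is preferred to candidate $j$, and let $Y_{ij} = \log a_{ij}$ be its additive counterpart. PerronRank scores by the principal (Perron) eigenvector of $A$, HodgeRank (for complete comparison data) by the row means of $Y$, and TropicalRank by the tropical eigenvector of $Y$:
\[
A v = \lambda_{\max} v, \qquad s_i = \frac{1}{n} \sum_{j=1}^{n} Y_{ij}, \qquad \max_{1 \le j \le n} \,(Y_{ij} + x_j) = \lambda + x_i ,
\]
with $v$, $s$ and $x$ the respective score vectors. This rough dictionary is only meant as orientation for the comparisons carried out in Chapter 4.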
The last two chapters treat topics in applied probability. Chapter 5 studies the exact
and asymptotic distribution of size-biased permutations of finite sequences with independent
and identically distributed (i.i.d.) terms. The size-biased permutation of a positive summable sequence $(x_1, x_2, \ldots)$ is the same sequence presented in a random order $(x_{[1]}, x_{[2]}, \ldots)$, where $x_{[1]}$ is defined to be $x_i$ with probability proportional to its `size' $x_i$; given that $x_{[1]}$ is $x_i$, the next term $x_{[2]}$ is defined to be $x_j$ for $j \neq i$ with probability again proportional to its `size' $x_j$, and so on. Size-biased permutations model the successive sampling method in
ecology and oil discovery, where species (or oil reserves) are discovered in a random order
proportional to their abundances. In the ranking literature it is known as the Plackett-Luce
model, a parametrized family of distributions on rankings. Size-biased permutation is
one of the building blocks of combinatorial stochastic processes, an area of probability with
applications to computer science [78]. The finite i.i.d. sequence setup serves as a simple model for successive sampling, or for ranking with an increasing number of items. We study the size-biased
permutation of such a sequence using two separate methods: Markov chains and induced
order statistics. By going back and forth between the two approaches, we arrive at more
general results with simplified proofs, and provide a Poisson coupling argument which leads
to an explicit formula for the asymptotic distribution of the last few terms in the size-biased
permutation.
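In the notation above, the first two steps of the size-biased permutation are given explicitly by
\[
\mathbb{P}\bigl(x_{[1]} = x_i\bigr) = \frac{x_i}{\sum_k x_k}, \qquad
\mathbb{P}\bigl(x_{[2]} = x_j \mid x_{[1]} = x_i\bigr) = \frac{x_j}{\sum_k x_k - x_i} \quad (j \neq i),
\]
and at each later step the next term is drawn from the remaining terms with probability proportional to its size.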
Chapter 6 is about the binary Little-Hopfield network. This is an established computational model of neural memory storage and retrieval which can have exponential capacity relative to the number of neurons. However, previously known algorithms have only produced networks with linear capacity, and it has been a long-standing open problem whether robust exponential storage is possible. For a network with $n$ neurons, the problem involves a linear program in $n^2$ variables and exponentially many constraints. We exploit the action of the symmetric group on the neuron labels to reduce the problem to a linear program in three variables and three constraints. We thus explicitly construct simple networks that answer the question affirmatively, with the best possible asymptotic robustness index. This work
calls for further research into Little-Hopfield networks and their applications to theoretical
neuroscience and computer science.
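To indicate where the linear program above comes from, here is a schematic in one common convention (Chapter 6 sets up its own precise model and robustness measure). A binary Little-Hopfield network on $n$ neurons is specified by a symmetric weight matrix $J$ and a threshold vector $\theta$, and updates a state $x \in \{-1,+1\}^n$ coordinatewise by
\[
x_i \;\mapsto\; \operatorname{sgn}\Bigl( \sum_{j \neq i} J_{ij} x_j - \theta_i \Bigr).
\]
A pattern $\xi \in \{-1,+1\}^n$ is stored robustly as a fixed point when every neuron's input clears its threshold by a margin $\kappa > 0$,
\[
\xi_i \Bigl( \sum_{j \neq i} J_{ij} \xi_j - \theta_i \Bigr) \ge \kappa \qquad \text{for all } i,
\]
and these conditions are linear in the unknowns $(J, \theta)$. One such set of constraints per stored pattern, over exponentially many patterns, yields the linear program in roughly $n^2$ variables described above, which the symmetry reduction collapses to three variables and three constraints.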