12.4.2 Reductions Among Matrix Problems
NOTATIONS:
- MQ is the problem of multiplying arbitrary square matrices.
- MT is the problem of multiplying upper triangular square
matrices.
- MS is the problem of multiplying symmetric matrices.
- IT is the problem of inverting non-singular upper
triangular matrices.
These problems are all ("under reasonable assumptions")
linearly equivalent.
(The problem of inverting an *arbitrary* nonsingular matrix is
also essentially linearly equivalent to these problems, but the
proof is not given in our text.)
We know from the discussion in section 7.6 that MQ can be solved by an
O(n^2.81) or even an O(n^2.376) algorithm. Because of the linear
equivalences stated above, we know there are also O(n^2.376) algorithms
for all the other problems listed. The equivalence also tells us that if
we find an algorithm with a smaller big-Θ for any one of the problems,
we can use it to obtain better algorithms for the other problems too.
THEOREM 12.4.5 [MT] <=L [MQ] and [MS] <=L [MQ]
PROOF: [MT] <=L [MQ] and [MS] <=L [MQ] are obvious: any algorithm for
multiplying *arbitrary* square matrices can be used to multiply two upper
triangular square matrices or to multiply two symmetric matrices.
THEOREM 12.4.6 [MQ] <=L [MT], assuming MT is smooth.
PROOF: Suppose that upper triangular square matrices can be multiplied
by an algorithm that is O(t(n)), where t(n) is smooth. If A and B are
two arbitrary n x n matrices that we wish to multiply, we can construct
the two 3n x 3n matrices depicted on the left below and form the
product shown, using our algorithm for upper triangular matrices.
[ 0  A  0 ]   [ 0  0  0 ]   [ 0  0  AB ]
[ 0  0  0 ] X [ 0  0  B ] = [ 0  0  0  ]
[ 0  0  0 ]   [ 0  0  0 ]   [ 0  0  0  ]
The work involved will be O(t(3n)), including the "overhead" of
building the matrices and extracting AB, which is O(n^2). Since t is
smooth, t(3n) is O(t(n)), and it follows that the transformation
indicated provides us with an O(t(n)) algorithm for multiplying
arbitrary square matrices.
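To make the construction concrete, here is a small sketch in
Python/NumPy. (The names multiply_via_triangular and mt_multiply are
ours, not from the text; an ordinary matrix product stands in for the
assumed O(t(n)) MT algorithm.)

import numpy as np

def multiply_via_triangular(A, B, mt_multiply):
    """Multiply arbitrary square matrices A and B using only an
    algorithm mt_multiply for multiplying upper triangular matrices.
    mt_multiply is a hypothetical stand-in for the assumed MT routine."""
    n = A.shape[0]
    Z = np.zeros((n, n))
    # Embed A and B into 3n x 3n upper triangular matrices.
    T1 = np.block([[Z, A, Z],
                   [Z, Z, Z],
                   [Z, Z, Z]])
    T2 = np.block([[Z, Z, Z],
                   [Z, Z, B],
                   [Z, Z, Z]])
    P = mt_multiply(T1, T2)    # product has AB in its upper-right n x n block
    return P[:n, 2*n:]         # extract AB

# Example, with the ordinary product standing in for the MT algorithm:
A = np.arange(4.0).reshape(2, 2)
B = np.arange(4.0, 8.0).reshape(2, 2)
print(np.allclose(multiply_via_triangular(A, B, lambda X, Y: X @ Y), A @ B))  # True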
THEOREM 12.4.7 [MQ] <=L [MS], assuming MS is smooth.
PROOF: Similar to the proof of Theorem 12.4.6. Use the 2n x 2n
matrices:

[ 0    A ]   [ 0   B^T ]   [ AB   0       ]
[ A^T  0 ] X [ B   0   ] = [ 0    A^T B^T ]

(Here, A^T and B^T are the transposes of A and B, respectively.) Both
matrices on the left are symmetric, and the product AB can be read off
the upper-left n x n block of the result.
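As before, a small Python/NumPy sketch of the construction (the helper
names are ours; an ordinary product stands in for the assumed O(t(n))
MS algorithm):

import numpy as np

def multiply_via_symmetric(A, B, ms_multiply):
    """Multiply arbitrary square matrices A and B using only an
    algorithm ms_multiply for multiplying symmetric matrices.
    ms_multiply is a hypothetical stand-in for the assumed MS routine."""
    n = A.shape[0]
    Z = np.zeros((n, n))
    # Both block matrices below are symmetric.
    S1 = np.block([[Z,   A],
                   [A.T, Z]])
    S2 = np.block([[Z, B.T],
                   [B, Z]])
    P = ms_multiply(S1, S2)    # product has AB in its upper-left n x n block
    return P[:n, :n]

A = np.arange(4.0).reshape(2, 2)
B = np.arange(4.0, 8.0).reshape(2, 2)
print(np.allclose(multiply_via_symmetric(A, B, lambda X, Y: X @ Y), A @ B))  # True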
THEOREM 12.4.8 [MQ] <=L [IT], assuming IT is smooth.
PROOF: Suppose that A and B are two arbitrary n x n matrices that we
wish to multiply. Consider the 3n x 3n matrices depicted below.

[ I  A  0 ]   [ I  -A  AB ]   [ I  0  0 ]
[ 0  I  B ] X [ 0   I  -B ] = [ 0  I  0 ]
[ 0  0  I ]   [ 0   0   I ]   [ 0  0  I ]
The equation above shows that the second matrix is the inverse of the
first. The first matrix is upper triangular and non-singular (its
diagonal entries are all 1), and its inverse contains AB in the
upper-right n x n block.
It should be clear now that if we have an O(t(n)) algorithm that can
invert non-singular upper triangular matrices, and if t(n) is smooth,
then we can also construct AB from A and B with an O(t(n)) algorithm:
build the first matrix, invert it, and read off AB; the overhead beyond
the inversion is O(n^2).
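A small Python/NumPy sketch of this reduction (the helper names are
ours; NumPy's general-purpose inverse stands in for the assumed
O(t(n)) IT algorithm):

import numpy as np

def multiply_via_inversion(A, B, it_invert):
    """Multiply arbitrary square matrices A and B using only an
    algorithm it_invert for inverting non-singular upper triangular
    matrices. it_invert is a hypothetical stand-in for the IT routine."""
    n = A.shape[0]
    I = np.eye(n)
    Z = np.zeros((n, n))
    # Upper triangular with unit diagonal, hence non-singular.
    M = np.block([[I, A, Z],
                  [Z, I, B],
                  [Z, Z, I]])
    Minv = it_invert(M)    # inverse is [[I, -A, AB], [0, I, -B], [0, 0, I]]
    return Minv[:n, 2*n:]  # AB sits in the upper-right n x n block

A = np.arange(4.0).reshape(2, 2)
B = np.arange(4.0, 8.0).reshape(2, 2)
print(np.allclose(multiply_via_inversion(A, B, np.linalg.inv), A @ B))  # True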
LEMMA: Assuming IT2 is smooth, [IT] <=L [IT2]. (IT2 is the problem of
inverting a non-singular upper triangular matrix whose size is a power
of 2.)
PROOF: If we want to invert an arbitrary n x n non-singular upper
triangular matrix A, we can find the smallest 2^j such that n <= 2^j,
and then form the 2^j x 2^j matrix B:
[ A  0 ]
[ 0  I ]
Then, assuming we have an algorithm for IT2 that is O(t(n)) where t(n) is
smooth, we can use the algorithm to produce the inverse of B, which is:
[ A^-1  0 ]
[ 0     I ]
This gives us the inverse of A in the upper-left n x n block. Because
2^j is less than 2n, it is easy to use the smoothness of t(n) to show
that the amount of work done is O(t(n)). We invert B with at most
t(2^j) work. This is t(2 * 2^(j-1)). Since t is smooth,
t(2 * 2^(j-1)) <= c * t(2^(j-1)) for some constant c and all
sufficiently large n. Since 2^(j-1) < n and t is eventually
non-decreasing (part of being smooth), t(2^(j-1)) <= t(n). Thus the
work to invert B is no more than c * t(n). (The "overhead" is O(n^2)
and is dominated by t.)
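A small Python/NumPy sketch of the padding argument (the helper names
are ours; NumPy's general-purpose inverse stands in for the assumed
IT2 algorithm):

import numpy as np

def invert_triangular(A, it2_invert):
    """Invert an arbitrary n x n non-singular upper triangular matrix A
    using only an algorithm it2_invert that handles sizes that are
    powers of 2. it2_invert is a hypothetical stand-in for IT2."""
    n = A.shape[0]
    m = 1
    while m < n:               # smallest 2^j with n <= 2^j
        m *= 2
    # Pad A to the m x m upper triangular matrix B = [[A, 0], [0, I]].
    B = np.eye(m)
    B[:n, :n] = A
    Binv = it2_invert(B)       # inverse is [[A^-1, 0], [0, I]]
    return Binv[:n, :n]        # A^-1 is the upper-left n x n block

A = np.triu(np.arange(1.0, 10.0).reshape(3, 3))  # 3 x 3 example, padded to 4 x 4
print(np.allclose(invert_triangular(A, np.linalg.inv), np.linalg.inv(A)))  # True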
LEMMA 12.4.10 [IT2] <=L [MQ], assuming MQ is strongly
quadratic.
PROOF: The proof is omitted here. See Brassard and Bratley,
pp. 436-437. The second half of the proof is a bit involved,
but the first half is straightforward and conveys the basic
idea of the proof well.
THEOREM 12.4.11 [IT] <=L [MQ], assuming MQ is strongly quadratic.
PROOF: This is "almost immediate" from the preceding two results,
though there is a technical complication in combining them. The
authors give hints and suggest we work out the details. We'll decline
this time.