Determinant of a sum of matrices
























I would like to know whether the following formula is well known, and to get some references for it.

I don't know yet how to prove it (I am working on it), but I am quite confident of its validity, having performed a few symbolic computations with Maple.

Given $n$ square matrices $A_1,\ldots,A_n$ of size $m$, with $m<n$:

$$\sum_{p=1}^n(-1)^p\sum_{1\leqslant i_1<\cdots<i_p\leqslant n}\det(A_{i_1}+\cdots+A_{i_p})=0$$

For example, if $A,B,C$ are three $2\times2$ matrices, then:

$$\det(A+B+C)-\left[\det(A+B)+\det(A+C)+\det(B+C)\right]+\det(A)+\det(B)+\det(C)=0$$
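The identity is easy to spot-check numerically. Here is a minimal sketch in Python/NumPy (matrix values and tolerance are my own choices, not from the question); it evaluates the alternating sum for three random $2\times2$ matrices:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
A, B, C = (rng.integers(-5, 6, size=(2, 2)).astype(float) for _ in range(3))

# Alternating sum over all nonempty subsets of {A, B, C}.
mats = [A, B, C]
total = 0.0
for p in range(1, 4):
    for subset in combinations(mats, p):
        total += (-1) ** p * np.linalg.det(sum(subset))

print(abs(total) < 1e-9)  # True: the identity holds since m = 2 < 3 = n
```

Any CAS (Maple, as in the question, or SymPy) gives the same check symbolically.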










  • This is directly related to this MSE question – Somos, Nov 14 '17 at 19:31












  • @Somos: Thank you! I will jump to it right now :) – Adren, Nov 14 '17 at 19:47
















Tags: matrices, determinant






edited Aug 18 '18 at 19:16 by Rodrigo de Azevedo










asked Nov 14 '17 at 19:27 by Adren












3 Answers



















Let me outline two other proofs. Let me first rename your $m$ and $n$ as $n$ and $r$, since I find it confusing when $n$ is not the size of the square matrices involved. So you are claiming the following:

Theorem 1. Let $\mathbb{K}$ be a commutative ring. Let $n\in\mathbb{N}$ and $r\in\mathbb{N}$ be such that $n<r$. Let $A_{1},A_{2},\ldots,A_{r}$ be $n\times n$-matrices over $\mathbb{K}$. Then,
\begin{equation}
\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)^{\left\vert I\right\vert }\det\left( \sum\limits_{i\in I}A_{i}\right) =0.
\end{equation}

Notice that I've snuck one more little change into your formula: I've added the addend for $I=\varnothing$. This addend usually doesn't contribute much, because $\det\left( \sum\limits_{i\in\varnothing}A_{i}\right) =\det\left( 0_{n\times n}\right)$ is usually $0$... unless $n=0$, in which case it contributes $\det\left( 0_{0\times0}\right) =1$ (keep in mind that there is only one $0\times0$-matrix, and its determinant is $1$), and the whole equality fails if this addend is missing.

A first proof of Theorem 1 appears in (the solution to) Exercise 6.53 in my Notes on the combinatorial fundamentals of algebra, version of 10 January 2019. (To obtain Theorem 1 from this exercise, set $G=\left\{ 1,2,\ldots,r\right\}$.) The main idea of this proof is that Theorem 1 holds not only for determinants, but also for each of the $n!$ products that make up the determinant (assuming that you define the determinant of an $n\times n$-matrix as a sum over the $n!$ permutations); this is proven by interchanging summation signs and exploiting discrete "destructive interference" (i.e., the fact that if $G$ is a finite set and $R$ is a subset of $G$, then
\begin{equation}
\sum\limits_{\substack{I\subseteq G;\\ R\subseteq I}}\left( -1\right)^{\left\vert I\right\vert }=
\begin{cases}
\left( -1\right)^{\left\vert G\right\vert }, & \text{if }R=G;\\
0, & \text{if }R\neq G
\end{cases}
\end{equation}
).



Let me now sketch a second proof of Theorem 1, which shows that it isn't really about determinants. It is about finite differences, in a slightly more general context than the one in which they are usually studied.

Let $M$ be any $\mathbb{K}$-module. The dual $\mathbb{K}$-module $M^{\vee}=\operatorname{Hom}_{\mathbb{K}}\left( M,\mathbb{K}\right)$ of $M$ consists of all $\mathbb{K}$-linear maps $M\rightarrow\mathbb{K}$. Thus, $M^{\vee}$ is a $\mathbb{K}$-submodule of the $\mathbb{K}$-module $\mathbb{K}^{M}$ of all maps $M\rightarrow\mathbb{K}$. The $\mathbb{K}$-module $\mathbb{K}^{M}$ becomes a commutative $\mathbb{K}$-algebra (we just define multiplication to be pointwise, i.e., the product $fg$ of two maps $f,g:M\rightarrow\mathbb{K}$ sends each $m\in M$ to $f\left( m\right) g\left( m\right) \in\mathbb{K}$).

For any $d\in\mathbb{N}$, we let $M^{\vee d}$ be the $\mathbb{K}$-linear span of all elements of $\mathbb{K}^{M}$ of the form $f_{1}f_{2}\cdots f_{d}$ for $f_{1},f_{2},\ldots,f_{d}\in M^{\vee}$. (For $d=0$, the only such element is the empty product $1$, so $M^{\vee 0}$ consists of the constant maps $M\rightarrow\mathbb{K}$. Notice also that $M^{\vee 1}=M^{\vee}$.) The elements of $M^{\vee d}$ are called homogeneous polynomial functions of degree $d$ on $M$. The underlying idea is that if $M$ is a free $\mathbb{K}$-module with a given basis, then the elements of $M^{\vee d}$ are the maps $M\rightarrow\mathbb{K}$ that can be expressed as polynomials in the coordinate functions with respect to this basis; but the $\mathbb{K}$-module $M^{\vee d}$ makes perfect sense whether or not $M$ is free.

We also set $M^{\vee d}=0$ (the zero $\mathbb{K}$-submodule of $\mathbb{K}^{M}$) for $d<0$.

For each $d\in\mathbb{Z}$, we define a $\mathbb{K}$-submodule $M^{\vee\leq d}$ of $\mathbb{K}^{M}$ by
\begin{equation}
M^{\vee\leq d}=\sum\limits_{i\leq d}M^{\vee i}.
\end{equation}
The elements of $M^{\vee\leq d}$ are called (inhomogeneous) polynomial functions of degree $\leq d$ on $M$. The submodules $M^{\vee\leq d}$ satisfy
\begin{equation}
M^{\vee\leq d}M^{\vee\leq e}\subseteq M^{\vee\leq\left( d+e\right)}
\end{equation}
for any integers $d$ and $e$.

For any $x\in M$, we define the $\mathbb{K}$-linear map $S_{x}:\mathbb{K}^{M}\rightarrow\mathbb{K}^{M}$ by setting
\begin{equation}
\left( S_{x}f\right) \left( m\right) =f\left( m+x\right) \qquad\text{for each }m\in M\text{ and }f\in\mathbb{K}^{M}.
\end{equation}
This map $S_{x}$ is called a shift operator. It is an endomorphism of the $\mathbb{K}$-algebra $\mathbb{K}^{M}$ and preserves all the $\mathbb{K}$-submodules $M^{\vee\leq d}$ (for all $d\in\mathbb{Z}$).

Moreover, for any $x\in M$, we define the $\mathbb{K}$-linear map $\Delta_{x}:\mathbb{K}^{M}\rightarrow\mathbb{K}^{M}$ by $\Delta_{x}=\operatorname*{id}-S_{x}$. Hence,
\begin{equation}
\left( \Delta_{x}f\right) \left( m\right) =f\left( m\right) -f\left( m+x\right) \qquad\text{for each }m\in M\text{ and }f\in\mathbb{K}^{M}.
\end{equation}
This map $\Delta_{x}$ is called a difference operator. The following crucial fact shows that it "decrements the degree" of a polynomial function, similarly to how differentiation decrements the degree of a polynomial:




Lemma 2. Let $x\in M$. Then, $\Delta_{x}M^{\vee d}\subseteq M^{\vee\leq\left( d-1\right)}$ for each $d\in\mathbb{Z}$.

[Let me sketch a proof of Lemma 2:

Lemma 2 clearly holds for $d<0$ (since $M^{\vee d}=0$ if $d<0$). Hence, it remains to prove Lemma 2 for $d\geq0$. We shall prove this by induction on $d$. The induction base is the case $d=0$, which is easy to check (indeed, each $f\in M^{\vee 0}$ is a constant map, and thus satisfies $\Delta_{x}f=0$; therefore, $\Delta_{x}M^{\vee 0}=0\subseteq M^{\vee\leq\left( 0-1\right) }$).

For the induction step, we fix some nonnegative integer $e$, and assume that Lemma 2 holds for $d=e$. We must then show that Lemma 2 holds for $d=e+1$.

We have assumed that Lemma 2 holds for $d=e$. In other words, we have $\Delta_{x}M^{\vee e}\subseteq M^{\vee\leq\left( e-1\right)}$.

Our goal is to show that Lemma 2 holds for $d=e+1$. In other words, our goal is to show that $\Delta_{x}M^{\vee\left( e+1\right)}\subseteq M^{\vee\leq e}$.

But the $\mathbb{K}$-module $M^{\vee\left( e+1\right)}$ is spanned by maps of the form $fg$ with $f\in M^{\vee e}$ and $g\in M^{\vee}$ (since it is spanned by products of the form $f_{1}f_{2}\cdots f_{e+1}$ with $f_{1},f_{2},\ldots,f_{e+1}\in M^{\vee}$, but each such product can be rewritten in the form $fg$ with $f=f_{1}f_{2}\cdots f_{e}\in M^{\vee e}$ and $g=f_{e+1}\in M^{\vee}$). Hence, it suffices to show that $\Delta_{x}\left( fg\right) \in M^{\vee\leq e}$ for each $f\in M^{\vee e}$ and $g\in M^{\vee}$.

Let us first notice that if $g\in M^{\vee}$ is arbitrary, then $\Delta_{x}g$ is the constant map whose value is $-g\left( x\right)$ (because each $m\in M$ satisfies
\begin{equation}
\left( \Delta_{x}g\right) \left( m\right) =g\left( m\right) -\underbrace{g\left( m+x\right) }_{\substack{=g\left( m\right) +g\left( x\right) \\ \text{(since }g\text{ is }\mathbb{K}\text{-linear)}}}=g\left( m\right) -\left( g\left( m\right) +g\left( x\right) \right) =-g\left( x\right)
\end{equation}
), and thus belongs to $M^{\vee 0}$. In other words, $\Delta_{x}M^{\vee}\subseteq M^{\vee 0}$.

For each $f\in\mathbb{K}^{M}$ and $g\in\mathbb{K}^{M}$, we have
\begin{align*}
\Delta_{x}\left( fg\right) & =\left( \operatorname*{id}-S_{x}\right)\left( fg\right) \qquad\left( \text{since }\Delta_{x}=\operatorname*{id}-S_{x}\right) \\
& =fg-\underbrace{S_{x}\left( fg\right) }_{\substack{=\left( S_{x}f\right)\left( S_{x}g\right) \\ \text{(since }S_{x}\text{ is an endomorphism}\\ \text{of the }\mathbb{K}\text{-algebra }\mathbb{K}^{M}\text{)}}}\\
& =fg-\left( S_{x}f\right) \left( S_{x}g\right) =\underbrace{\left( f-S_{x}f\right) }_{=\left( \operatorname*{id}-S_{x}\right) f}g+\left( S_{x}f\right) \underbrace{\left( g-S_{x}g\right) }_{=\left( \operatorname*{id}-S_{x}\right) g}\\
& =\left( \underbrace{\left( \operatorname*{id}-S_{x}\right) }_{=\Delta_{x}}f\right) g+\left( S_{x}f\right) \left( \underbrace{\left( \operatorname*{id}-S_{x}\right) }_{=\Delta_{x}}g\right) \\
& =\left( \Delta_{x}f\right) g+\left( \underbrace{S_{x}}_{\substack{=\operatorname*{id}-\Delta_{x}\\ \text{(since }\Delta_{x}=\operatorname*{id}-S_{x}\text{)}}}f\right) \left( \Delta_{x}g\right) \\
& =\left( \Delta_{x}f\right) g+\underbrace{\left( \left( \operatorname*{id}-\Delta_{x}\right) f\right) }_{=f-\Delta_{x}f}\left( \Delta_{x}g\right) \\
& =\left( \Delta_{x}f\right) g+\left( f-\Delta_{x}f\right) \left( \Delta_{x}g\right) \\
& =\left( \Delta_{x}f\right) g+f\left( \Delta_{x}g\right) -\left( \Delta_{x}f\right) \left( \Delta_{x}g\right) .
\end{align*}
Hence, for each $f\in M^{\vee e}$ and $g\in M^{\vee}$, we have
\begin{align*}
\Delta_{x}\left( fg\right) & =\left( \Delta_{x}\underbrace{f}_{\in M^{\vee e}}\right) \underbrace{g}_{\in M^{\vee}}+\underbrace{f}_{\in M^{\vee e}}\left( \Delta_{x}\underbrace{g}_{\in M^{\vee}}\right) -\left( \Delta_{x}\underbrace{f}_{\in M^{\vee e}}\right) \left( \Delta_{x}\underbrace{g}_{\in M^{\vee}}\right) \\
& \in\underbrace{\left( \Delta_{x}M^{\vee e}\right) }_{\subseteq M^{\vee\leq\left( e-1\right) }}M^{\vee}+M^{\vee e}\underbrace{\left( \Delta_{x}M^{\vee}\right) }_{\subseteq M^{\vee 0}}-\underbrace{\left( \Delta_{x}M^{\vee e}\right) }_{\subseteq M^{\vee\leq\left( e-1\right) }}\underbrace{\left( \Delta_{x}M^{\vee}\right) }_{\subseteq M^{\vee 0}}\\
& \subseteq\underbrace{M^{\vee\leq\left( e-1\right) }M^{\vee}}_{\subseteq M^{\vee\leq e}}+\underbrace{M^{\vee e}M^{\vee 0}}_{\subseteq M^{\vee e}\subseteq M^{\vee\leq e}}-\underbrace{M^{\vee\leq\left( e-1\right) }M^{\vee 0}}_{\subseteq M^{\vee\leq\left( e-1\right) }\subseteq M^{\vee\leq e}}\\
& \subseteq M^{\vee\leq e}+M^{\vee\leq e}-M^{\vee\leq e}\subseteq M^{\vee\leq e}.
\end{align*}
This proves that $\Delta_{x}\left( M^{\vee\left( e+1\right) }\right)\subseteq M^{\vee\leq e}$, as we intended to prove.

Thus, the induction step is complete, and Lemma 2 is proven.]
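As a quick sanity check of this "degree decrementing" behaviour, here is a sketch of mine (not part of the proof) for the simplest case $\mathbb{K}=M=\mathbb{R}$: a cubic has degree $3$, so by Lemma 2 applied four times, any four difference operators annihilate it:

```python
def delta(x, f):
    """Difference operator: (Delta_x f)(m) = f(m) - f(m + x)."""
    return lambda m: f(m) - f(m + x)

f = lambda t: t ** 3   # homogeneous polynomial function of degree 3 on R
g = f
for x in [1.0, 2.0, -0.5, 3.0]:   # apply four difference operators
    g = delta(x, g)

# The fourth iterated difference of a cubic is identically zero.
print(all(abs(g(m)) < 1e-9 for m in [-2.0, 0.0, 1.5, 7.0]))  # True
```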



The following fact follows by induction using Lemma 2:

Corollary 3. Let $r\in\mathbb{N}$. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Then,
\begin{equation}
\Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}M^{\vee d}\subseteq M^{\vee\leq\left( d-r\right) }
\end{equation}
for each $d\in\mathbb{Z}$.

And as a consequence of this, we obtain the following:

Corollary 4. Let $r\in\mathbb{N}$. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Then,
\begin{equation}
\Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}M^{\vee d}=0
\end{equation}
for each $d\in\mathbb{Z}$ satisfying $d<r$.

[In fact, Corollary 4 follows immediately from Corollary 3, because $d<r$ implies $M^{\vee\leq\left( d-r\right) }=0$.]

To make use of Corollary 4, we want a more-or-less explicit expression for how $\Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}$ acts on maps in $\mathbb{K}^{M}$. This is the following fact:

Proposition 5. Let $r\in\mathbb{N}$. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Then,
\begin{equation}
\left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}f\right) \left( m\right) =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) \qquad\text{for each }m\in M\text{ and }f\in\mathbb{K}^{M}.
\end{equation}

[Proposition 5 can be proven by induction over $r$, where the induction step involves splitting the sum on the right hand side into the part with the $I$ that contain $r$ and the part with the $I$ that don't. But there is also a slicker argument, which needs some preparation. The maps $S_{x}\in\operatorname{End}_{\mathbb{K}}\left( \mathbb{K}^{M}\right)$ for different elements $x\in M$ commute; better yet, they satisfy the multiplication rule $S_{x}S_{y}=S_{x+y}$ (as can be checked immediately). Hence, by induction over $\left\vert I\right\vert$, we conclude that if $I$ is any finite set, and if $x_{i}$ is an element of $M$ for each $i\in I$, then
\begin{equation}
\prod\limits_{i\in I}S_{x_{i}}=S_{\sum\limits_{i\in I}x_{i}}\qquad\text{in the ring }\operatorname{End}_{\mathbb{K}}\left( \mathbb{K}^{M}\right) .
\end{equation}
I shall refer to this fact as the S-multiplication rule.

Now, let us prove Proposition 5. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Recall the well-known formula
\begin{equation}
\prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left( 1-a_{i}\right)=\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)^{\left\vert I\right\vert }\prod\limits_{i\in I}a_{i},
\end{equation}
which holds whenever $a_{1},a_{2},\ldots,a_{r}$ are commuting elements of some ring. Applying this formula to $a_{i}=S_{x_{i}}$, we obtain
\begin{equation}
\prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left( \operatorname*{id}-S_{x_{i}}\right) =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)^{\left\vert I\right\vert }\prod\limits_{i\in I}S_{x_{i}}
\end{equation}
(since $S_{x_{1}},S_{x_{2}},\ldots,S_{x_{r}}$ are commuting elements of the ring $\operatorname{End}_{\mathbb{K}}\left( \mathbb{K}^{M}\right)$). Thus,
\begin{align*}
\Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}} & =\prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\underbrace{\Delta_{x_{i}}}_{\substack{=\operatorname*{id}-S_{x_{i}}\\ \text{(by the definition of }\Delta_{x_{i}}\text{)}}}=\prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left( \operatorname*{id}-S_{x_{i}}\right) \\
& =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)^{\left\vert I\right\vert }\underbrace{\prod\limits_{i\in I}S_{x_{i}}}_{\substack{=S_{\sum\limits_{i\in I}x_{i}}\\ \text{(by the S-multiplication rule)}}}=\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)^{\left\vert I\right\vert }S_{\sum\limits_{i\in I}x_{i}}.
\end{align*}
Hence, for each $m\in M$ and $f\in\mathbb{K}^{M}$, we obtain
\begin{align*}
& \left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}f\right) \left( m\right) \\
& =\left( \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)^{\left\vert I\right\vert }S_{\sum\limits_{i\in I}x_{i}}f\right) \left( m\right) \\
& =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)^{\left\vert I\right\vert }\underbrace{\left( S_{\sum\limits_{i\in I}x_{i}}f\right)\left( m\right) }_{\substack{=f\left( m+\sum\limits_{i\in I}x_{i}\right) \\ \text{(by the definition of the shift operators)}}}\\
& =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) .
\end{align*}
Thus, Proposition 5 is proven.]
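Proposition 5 holds for an arbitrary map $f$ (not only polynomial ones), and is easy to check numerically; here is a small sketch of mine with $M=\mathbb{R}^2$ and $r=3$ (the function and points are arbitrary choices):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
xs = [rng.standard_normal(2) for _ in range(3)]   # x_1, x_2, x_3 in M = R^2
m = rng.standard_normal(2)
f = lambda v: np.sin(v[0]) + v[0] * v[1] ** 2     # any map M -> R

def delta(x, f):
    """Difference operator: (Delta_x f)(v) = f(v) - f(v + x)."""
    return lambda v: f(v) - f(v + x)

# Left-hand side: iterated difference operators applied to f, evaluated at m.
lhs_f = f
for x in xs:
    lhs_f = delta(x, lhs_f)
lhs = lhs_f(m)

# Right-hand side: alternating sum over all subsets I of {1, 2, 3}.
rhs = sum((-1) ** p * f(m + sum(c))
          for p in range(len(xs) + 1)
          for c in combinations(xs, p))

print(abs(lhs - rhs) < 1e-9)  # True
```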



We can now combine Corollary 4 with Proposition 5 and obtain the following:

Corollary 6. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Let $d\in\mathbb{Z}$ be such that $d<r$. Let $f\in M^{\vee d}$ and $m\in M$. Then,
\begin{equation}
\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) =0.
\end{equation}

[Indeed, Corollary 6 follows from the computation
\begin{align*}
& \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) \\
& =\underbrace{\left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}f\right) }_{\substack{=0\\ \text{(by Corollary 4, since }f\in M^{\vee d}\text{)}}}\left( m\right) \qquad\left( \text{by Proposition 5}\right) \\
& =0.
\end{align*}
]

Finally, let us prove Theorem 1. The matrix ring $\mathbb{K}^{n\times n}$ is a $\mathbb{K}$-module. Let $M$ be this $\mathbb{K}$-module $\mathbb{K}^{n\times n}$. For each $i,j\in\left\{ 1,2,\ldots,n\right\}$, we let $x_{i,j}$ be the map $M\rightarrow\mathbb{K}$ that sends each matrix $A$ to its $\left( i,j\right)$-th entry; this map $x_{i,j}$ is $\mathbb{K}$-linear and thus belongs to $M^{\vee}$.

It is easy to see that the map $\det:\mathbb{K}^{n\times n}\rightarrow\mathbb{K}$ (sending each $n\times n$-matrix to its determinant) is a homogeneous polynomial function of degree $n$ on $M$; indeed, it can be represented in the commutative $\mathbb{K}$-algebra $\mathbb{K}^{M}$ as
\begin{equation}
\det=\sum\limits_{\sigma\in S_{n}}\left( -1\right)^{\sigma}x_{1,\sigma\left( 1\right) }x_{2,\sigma\left( 2\right) }\cdots x_{n,\sigma\left( n\right) },
\end{equation}
where $S_{n}$ is the $n$-th symmetric group, and where $\left( -1\right)^{\sigma}$ denotes the sign of a permutation $\sigma$. In other words, $\det\in M^{\vee n}$. Hence, Corollary 6 (applied to $x_{i}=A_{i}$, $d=n$, $f=\det$ and $m=0$) yields
\begin{equation}
\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)^{\left\vert I\right\vert }\det\left( 0+\sum\limits_{i\in I}A_{i}\right) =0.
\end{equation}
In other words,
\begin{equation}
\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)^{\left\vert I\right\vert }\det\left( \sum\limits_{i\in I}A_{i}\right) =0.
\end{equation}
This proves Theorem 1. $\blacksquare$
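To see Theorem 1 in action (including the $I=\varnothing$ addend), here is a quick numerical sketch; $n=3$, $r=4$ and the matrix values are my own arbitrary choices:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, r = 3, 4
As = [rng.integers(-4, 5, size=(n, n)).astype(float) for _ in range(r)]

# Sum over ALL subsets I of {1,...,r}, including I = {} (whose addend
# is det of the zero matrix, i.e. 0, since n > 0 here).
total = sum((-1) ** p * np.linalg.det(sum(c, np.zeros((n, n))))
            for p in range(r + 1)
            for c in combinations(As, p))

print(abs(total) < 1e-6)  # True, since n = 3 < 4 = r
```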







Given integers $n>m>0$, let $[n]$ be shorthand for the set $\{1,\ldots,n\}$.

For any $t\in\mathbb{R}$ and $x_1,\ldots,x_n\in\mathbb{C}$, we have the identity

$$\prod_{k=1}^n (1 - e^{tx_k}) = \sum_{P\subset [n]} (-1)^{|P|} e^{t\sum_{k\in P} x_k}$$

Treat both sides as functions of $t$ and expand in powers of $t$; one notices that on the LHS the coefficient of $t^k$ vanishes whenever $k<n$. By comparing the coefficients of $t^m$, we obtain:

$$0 = \sum_{P\subset [n]} (-1)^{|P|} \left(\sum_{k\in P} x_k\right)^m\tag{*1}$$

Notice the RHS is a polynomial function in $x_1,\ldots,x_n$ with integer coefficients. Since it evaluates to $0$ for all $(x_1,\ldots,x_n)\in\mathbb{C}^n$, it is valid as a polynomial identity in $n$ indeterminates with integer coefficients. As a corollary, it is valid as an algebraic identity when $x_1, x_2, \ldots, x_n$ are elements taken from any commutative algebra.
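A quick numerical spot-check of $(*1)$, sketched in Python with $n=4$, $m=2$ and random complex $x_k$ (my own choice of parameters):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, m = 4, 2
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Alternating sum over all subsets P of [n] of (sum_{k in P} x_k)^m.
total = sum((-1) ** len(P) * sum(x[list(P)]) ** m
            for p in range(n + 1)
            for P in combinations(range(n), p))

print(abs(total) < 1e-9)  # True, since m < n
```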



Let $V$ be a vector space over $\mathbb{C}$ spanned by elements $\eta_1,\ldots,\eta_m$ and $\bar{\eta}_1,\ldots,\bar{\eta}_m$.

Let $\Lambda^{e}(V) = \bigoplus_{k=0}^n \Lambda^{2k}(V)$ be the 'even' portion of its exterior algebra. $\Lambda^{e}(V)$ is itself a commutative algebra.

For any $m\times m$ matrix $A$, let $\tilde{A}\in\Lambda^e(V)$ be the element defined by:

$$A = (a_{ij}) \quad\longrightarrow\quad \tilde{A} = \sum_{i=1}^m\sum_{j=1}^m a_{ij}\,\bar{\eta}_i\wedge\eta_j$$

Notice the $m$-fold power of $\tilde{A}$ satisfies an interesting identity:

$$\tilde{A}^m = \underbrace{\tilde{A}\wedge\cdots\wedge\tilde{A}}_{m\text{ times}} = \det(A)\,\omega
\quad\text{ where }\quad
\omega = m!\,\bar{\eta}_1\wedge\eta_1\wedge\cdots\wedge\bar{\eta}_m\wedge\eta_m\tag{*2}$$

Given any $n$-tuple of matrices $A_1,\ldots,A_n\in M_{m\times m}(\mathbb{C})$, if we substitute $x_k$ in $(*1)$ by $\tilde{A}_k$ and apply $(*2)$, we find

$$
\sum_{P\subset [n]} (-1)^{|P|}\left(\sum_{k\in P}\tilde{A}_k\right)^m
= \sum_{P\subset [n]} (-1)^{|P|}\det\left(\sum_{k\in P} A_k\right)\omega
= 0
$$

Extracting the coefficient in front of $\omega$, the desired identity follows:

$$\sum_{P\subset [n]} (-1)^{|P|}\det\left(\sum_{k\in P} A_k\right) = 0$$






  • A very beautiful result and very beautiful proof! – Jair Taylor, Nov 15 '17 at 18:22



















HINT:

The determinant of an $n\times n$ matrix is a form of degree $n$, and forms come from multilinear forms.

Consider an abelian group $M$. For $a\in M$, denote by $a^{[n]}$ the element $a\otimes a\otimes\ldots\otimes a\in M^{\otimes n}$. Let now $a_i\in M$, $i\in I$, be finitely many elements of $M$. Let's try to find
$$\sum_{J\subset I}(-1)^{|I|-|J|}\left(\sum_{i\in J} a_i\right)^{[n]}$$

Consider a product $a_{i_1}\otimes\ldots\otimes a_{i_n}$, and let $J=\{i_1,\ldots,i_n\}$. It appears in the above sum with the coefficient
$$\sum_{J\subset K\subset I}(-1)^{|I|-|K|}$$
This is $0$ for $J\ne I$ and $1$ for $J=I$ (a Möbius function computation).

Therefore
$$\sum_{J\subset I}(-1)^{|I|-|J|}\left(\sum_{i\in J} a_i\right)^{[n]}=\sum_{\substack{\phi\colon\{1,\ldots,n\}\to I\\ \phi\text{ surjective}}}a_{\phi(1)}\otimes\ldots\otimes a_{\phi(n)}$$

Particular cases:

1. $|I|>n$: we get $0$, the desired result.

2. $|I|=n$: we get $\sum_{\substack{\phi\colon\{1,\ldots,n\}\to I\\ \phi\text{ bijective}}}a_{\phi(1)}\otimes\ldots\otimes a_{\phi(n)}$
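For $M=\mathbb{R}^2$ and $n=2$, the vanishing in case 1 can be verified directly, realizing $a^{[2]}$ as the outer product $a\otimes a$ (a sketch of mine, with arbitrary vectors and $|I|=3>n$):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
a = [rng.standard_normal(2) for _ in range(3)]   # I = {0, 1, 2}, |I| = 3

# Alternating sum over all subsets J of I of (sum_{i in J} a_i)^{[2]},
# where b^{[2]} is the 2x2 outer product b (x) b.
total = np.zeros((2, 2))
for p in range(4):
    for J in combinations(range(3), p):
        s = sum((a[i] for i in J), np.zeros(2))
        total += (-1) ** (3 - p) * np.outer(s, s)

print(np.allclose(total, 0))  # True, since |I| = 3 > 2 = n
```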







      3 Answers
      3






      active

      oldest

      votes








      3 Answers
      3






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      4












      $begingroup$

      Let me outline two other proofs. Let me first rename your $m$ and $n$ as $n$
      and $r$, since I find it confusing when $n$ is not the size of the square
      matrices involved. So you are claiming the following:




      Theorem 1. Let $mathbb{K}$ be a commutative ring. Let $ninmathbb{N}$
      and $rinmathbb{N}$ be such that $n<r$. Let $A_{1},A_{2},ldots,A_{r}$ be
      $ntimes n$-matrices over $mathbb{K}$. Then,
      begin{equation}
      sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left( -1right)
      ^{leftvert Irightvert }detleft( sumlimits_{iin I}A_{i}right) =0.
      end{equation}




      Notice that I've snuck in one more little change into your formula: I've added
      the addend for $I=varnothing$. This addend usually doesn't contribute much,
      because $detleft( sumlimits_{iinvarnothing}A_{i}right) =detleft(
      0_{ntimes n}right) $
      is usually $0$... unless $n=0$, in which case it
      contributes $detleft( 0_{0times0}right) =1$ (keep in mind that there is
      only one $0times0$-matrix and its determinant is $1$), and the whole equality
      fails if this addend is missing.



      A first proof of Theorem 1 appears in (the solution to) Exercise 6.53 in my
      Notes on the combinatorial fundamentals of algebra, version of 10 January
      2019. (To obtain
      Theorem 1 from this exercise, set $G=left{ 1,2,ldots,rright} $.) The
      main idea of this proof is that Theorem 1 holds not only for determinants, but
      also for each of the $n!$ products that make up the determinant (assuming that
      you define the determinant of an $ntimes n$-matrix as a sum over the $n!$
      permutations); this is proven by interchanging summation signs and exploiting
      discrete "destructive interference" (i.e., the fact that if $G$ is a finite
      set and $R$ is a subset of $G$, then $sumlimits_{substack{Isubseteq
      G;\Rsubseteq I}}left( -1right) ^{leftvert Irightvert }=
      begin{cases}
      1, & text{if }R=G;\
      0, & text{if }Rneq G
      end{cases}
      $
      ).



      Let me now sketch a second proof of Theorem 1, which shows that it isn't
      really about determinants. It is about finite differences, in a slightly more
      general context than they are usually studied.



      Let $M$ be any $mathbb{K}$-module. The dual $mathbb{K}$-module $M^{vee
      }=operatorname{Hom}_{mathbb{K}}left( M,mathbb{K}right) $
      of
      $M$ consists of all $mathbb{K}$-linear maps $Mrightarrowmathbb{K}$. Thus,
      $M^{vee}$ is a $mathbb{K}$-submodule of the $mathbb{K}$-module
      $mathbb{K}^{M}$ of all maps $Mrightarrowmathbb{K}$. The $mathbb{K}
      $
      -module $mathbb{K}^{M}$ becomes a commutative $mathbb{K}$-algebra (we just
      define multiplication to be pointwise, i.e., the product $fg$ of two maps
      $f,g:Mrightarrowmathbb{K}$ sends each $min M$ to $fleft( mright)
      gleft( mright) inmathbb{K}$
      ).



      For any $dinmathbb{N}$, we let $M^{vee d}$ be the $mathbb{K}$-linear span
      of all elements of $mathbb{K}^{M}$ of the form $f_{1}f_{2}cdots f_{d}$ for
      $f_{1},f_{2},ldots,f_{d}in M^{vee}$. (For $d=0$, the only such element is
      the empty product $1$, so $M^{vee0}$ consists of the constant maps
      $Mrightarrowmathbb{K}$. Notice also that $M^{vee1}=M^{vee}$.) The elements
      of $M^{vee d}$ are called homogeneous polynomial functions of degree $d$ on
      $M$
      . The underlying idea is that if $M$ is a free $mathbb{K}$-module with a
      given basis, then the elements of $M^{vee d}$ are the maps $Mrightarrow
      mathbb{K}$
      that can be expressed as polynomials of the coordinate functions
      with respect to this basis; but the $mathbb{K}$-module $M^{vee d}$ makes
      perfect sense whether or not $M$ is free.



      We also set $M^{vee d}=0$ (the zero $mathbb{K}$-submodule of $mathbb{K}
      ^{M}$
      ) for $d<0$.



      For each $d in mathbb{Z}$, we define a $mathbb{K}$-submodule
      $M^{vee leq d}$ of $mathbb{K}^M$ by
      begin{equation}
      M^{vee leq d} = sumlimits_{i leq d} M^{vee i} .
      end{equation}

      The elements of $M^{vee leq d}$ are called (inhomogeneous) polynomial
      functions of degree $leq d$ on $M$
      .
      The submodules $M^{vee leq d}$ satisfy
      begin{equation}
      M^{vee leq d} M^{vee leq e} subseteq M^{vee leq left(d+eright)}
      end{equation}

      for any integers $d$ and $e$.



      For any $xin M$, we define the $mathbb{K}$-linear map $S_{x}:mathbb{K}
      ^{M}rightarrowmathbb{K}^{M}$
      by setting
      begin{equation}
      left( S_{x}fright) left( mright) =fleft( m+xright) qquadtext{for
      each }min Mtext{ and }finmathbb{K}^{M}.
      end{equation}

      This map $S_{x}$ is called a shift operator. It is an endomorphism of the
      $mathbb{K}$-algebra $mathbb{K}^{M}$ and preserves all the $mathbb{K}
      $
      -submodules $M^{vee leq d}$ (for all $dinmathbb{Z}$).



Moreover, for any $x\in M$, we define the $\mathbb{K}$-linear map
$\Delta_{x}:\mathbb{K}^{M}\rightarrow\mathbb{K}^{M}$ by
$\Delta_{x}=\operatorname*{id}-S_{x}$. Hence,
\begin{equation}
\left( \Delta_{x}f\right) \left( m\right) =f\left( m\right) -f\left(
m+x\right) \qquad\text{for each }m\in M\text{ and }f\in\mathbb{K}^{M}.
\end{equation}

This map $\Delta_{x}$ is called a difference operator. The following crucial
fact shows that it "decrements the degree" of a polynomial function, similarly
to how differentiation decrements the degree of a polynomial:




Lemma 2. Let $x \in M$. Then,
$\Delta_{x}M^{\vee d}\subseteq M^{\vee \leq \left( d-1\right)}$
for each $d\in\mathbb{Z}$.




      [Let me sketch a proof of Lemma 2:



Lemma 2 clearly holds for $d < 0$ (since $M^{\vee d} = 0$ if $d < 0$).
Hence, it remains to prove Lemma 2 for $d \geq 0$.
We shall prove this by induction on $d$.
The induction base is the case $d = 0$, which is easy to
check (indeed, each $f \in M^{\vee 0}$ is a constant map, and thus
satisfies $\Delta_x f = 0$; therefore,
$\Delta_{x}M^{\vee 0} = 0 \subseteq M^{\vee \leq \left( 0-1\right) }$).



      For the induction step, we fix some nonnegative integer $e$, and assume
      that Lemma 2 holds for $d = e$. We must then show that Lemma 2
      holds for $d = e+1$.



We have assumed that Lemma 2 holds for $d = e$.
In other words, we have
$\Delta_{x}M^{\vee e}\subseteq M^{\vee \leq \left( e-1\right)}$.



Our goal is to show that Lemma 2
holds for $d = e+1$. In other words, our goal is to show
that
$\Delta_{x}M^{\vee \left(e+1\right)}\subseteq M^{\vee \leq e}$.



But the $\mathbb{K}$-module $M^{\vee \left(e+1\right)}$ is
spanned by maps of the form $fg$ with $f\in M^{\vee e}$ and
$g\in M^{\vee}$ (since it is spanned by products of the
form $f_1 f_2 \cdots f_{e+1}$ with
$f_1, f_2, \ldots, f_{e+1} \in M^{\vee}$, but each such
product can be rewritten in the form $fg$
with $f = f_1 f_2 \cdots f_e \in M^{\vee e}$ and
$g = f_{e+1} \in M^{\vee}$).
Hence, it suffices to show that
$\Delta_x \left( fg \right) \in M^{\vee \leq e}$
for each $f\in M^{\vee e}$ and
$g\in M^{\vee}$.



Let us first notice that if $g \in M^{\vee}$ is arbitrary,
then $\Delta_x g$ is the constant map whose value is
$- g\left(x\right)$
(because each $m \in M$ satisfies
\begin{equation}
\left(\Delta_x g\right) \left(m\right)
= g\left(m\right) - \underbrace{g\left(m+x\right)}_{\substack{=g\left(m\right) + g\left(x\right)\\ \text{(since }g\text{ is }\mathbb{K}\text{-linear)}}}
= g\left(m\right) - \left(g\left(m\right) + g\left(x\right)\right)
= - g\left(x\right)
\end{equation}

), and thus belongs to $M^{\vee 0}$.
In other words, $\Delta_x M^{\vee} \subseteq M^{\vee 0}$.



For each $f \in \mathbb{K}^M$ and $g \in \mathbb{K}^M$,
we have
\begin{align*}
\Delta_{x}\left( fg\right) & =\left( \operatorname*{id}-S_{x}\right)
\left( fg\right) \qquad\left( \text{since }\Delta_{x}=\operatorname*{id}
-S_{x}\right) \\
& =fg-\underbrace{S_{x}\left( fg\right) }_{\substack{=\left( S_{x}f\right)
\left( S_{x}g\right) \\\text{(since }S_{x}\text{ is an endomorphism}
\\\text{of the }\mathbb{K}\text{-algebra }\mathbb{K}^{M}\text{)}}}\\
& =fg-\left( S_{x}f\right) \left( S_{x}g\right) =\underbrace{\left(
f-S_{x}f\right) }_{=\left( \operatorname*{id}-S_{x}\right) f}g+\left(
S_{x}f\right) \underbrace{\left( g-S_{x}g\right) }_{=\left(
\operatorname*{id}-S_{x}\right) g}\\
& =\left( \underbrace{\left( \operatorname*{id}-S_{x}\right) }_{=\Delta
_{x}}f\right) g+\left( S_{x}f\right) \left( \underbrace{\left(
\operatorname*{id}-S_{x}\right) }_{=\Delta_{x}}g\right) \\
& =\left( \Delta_{x}f\right) g+\left(
\underbrace{S_{x}}_{\substack{=\operatorname*{id}-\Delta_{x}\\
\text{(since }\Delta_{x}=\operatorname*{id}-S_{x}\text{)}}}f\right) \left( \Delta_{x}g\right)
\\
& =\left( \Delta_{x}f\right) g+\underbrace{\left( \left(
\operatorname*{id}-\Delta_{x}\right) f\right) }_{=f-\Delta_{x}f}\left(
\Delta_{x}g\right) \\
& =\left( \Delta_{x}f\right) g+\left( f-\Delta_{x}f\right) \left(
\Delta_{x}g\right) \\
& =\left( \Delta_{x}f\right) g+f\left( \Delta_{x}g\right) -\left(
\Delta_{x}f\right) \left( \Delta_{x}g\right) .
\end{align*}

Hence, for each $f\in M^{\vee e}$ and $g\in M^{\vee}$, we have
\begin{align*}
\Delta_{x}\left( fg\right) & =\left( \Delta_{x}\underbrace{f}_{\in
M^{\vee e}}\right) \underbrace{g}_{\in M^{\vee}}+\underbrace{f}_{\in M^{\vee
e}}\left( \Delta_{x}\underbrace{g}_{\in M^{\vee}}\right) -\left( \Delta
_{x}\underbrace{f}_{\in M^{\vee e}}\right)
\left( \Delta_{x}\underbrace{g}_{\in M^{\vee}}\right) \\
& \in\underbrace{\left( \Delta_{x}M^{\vee e}\right) }_{\subseteq M^{\vee
\leq\left( e-1\right) }}M^{\vee}+M^{\vee e}\underbrace{\left( \Delta
_{x}M^{\vee}\right) }_{\subseteq M^{\vee 0}}-\underbrace{\left( \Delta
_{x}M^{\vee e}\right) }_{\subseteq M^{\vee\leq\left( e-1\right) }
}\underbrace{\left( \Delta_{x}M^{\vee}\right) }_{\subseteq M^{\vee 0}}\\
& \subseteq\underbrace{M^{\vee\leq\left( e-1\right) }M^{\vee}}_{\subseteq
M^{\vee\leq e}}+\underbrace{M^{\vee e}M^{\vee 0}}_{\subseteq M^{\vee
e}\subseteq M^{\vee\leq e}}-\underbrace{M^{\vee\leq\left( e-1\right)
}M^{\vee 0}}_{\subseteq M^{\vee\leq\left( e-1\right) }\subseteq M^{\vee\leq
e}}\\
& \subseteq M^{\vee\leq e}+M^{\vee\leq e}-M^{\vee\leq e}\subseteq M^{\vee\leq
e}.
\end{align*}

This proves that $\Delta_{x}\left( M^{\vee\left( e+1\right) }\right)
\subseteq M^{\vee\leq e}$, as we intended to prove.



      Thus, the induction step is complete, and Lemma 2 is proven.]
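[As an illustrative aside, not part of the proof: taking $M = \mathbb{K} = \mathbb{Z}$, so that polynomial functions are ordinary polynomials in one variable, one can watch $\Delta_x$ drop the degree by one. A minimal Python sketch, assuming the sympy library is available (the particular polynomial is my choice):

```python
import sympy as sp

m, x = sp.symbols('m x')
f = m**3 + 2*m**2 - m                        # a polynomial function of degree 3 on M = K = Z
delta_x_f = sp.expand(f - f.subs(m, m + x))  # (Delta_x f)(m) = f(m) - f(m + x)
print(sp.degree(delta_x_f, gen=m))           # the degree in m has dropped to 2
```

The leading term $m^3$ cancels in $f(m) - f(m+x)$, leaving a polynomial of degree $2$ in $m$, in accordance with Lemma 2.]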



      The following fact follows by induction using Lemma 2:




Corollary 3. Let $r\in\mathbb{N}$. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$
elements of $M$. Then,
\begin{equation}
\Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}M^{\vee d}\subseteq
M^{\vee \leq \left( d-r\right) }
\end{equation}

for each $d\in\mathbb{Z}$.




      And as a consequence of this, we obtain the following:




Corollary 4. Let $r\in\mathbb{N}$. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$
elements of $M$. Then,
\begin{equation}
\Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}M^{\vee d}=0
\end{equation}

for each $d\in\mathbb{Z}$ satisfying $d<r$.




[In fact, Corollary 4 follows immediately from Corollary 3, because $d<r$
implies $M^{\vee \leq \left( d-r\right) }=0$.]



To make use of Corollary 4, we want a more-or-less explicit expression for how
$\Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}$ acts on maps in
$\mathbb{K}^{M}$. This is the following fact:




Proposition 5. Let $r\in\mathbb{N}$. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$
elements of $M$. Then,
\begin{equation}
\left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}f\right) \left(
m\right) =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right)
\qquad\text{for each }m\in M\text{ and }f\in\mathbb{K}^{M}.
\end{equation}




[Proposition 5 can be proven by induction over $r$, where the induction step
involves splitting the sum on the right hand side into the part with the $I$
that contain $r$ and the part with the $I$ that don't. But there is also a
slicker argument, which needs some preparation. The maps $S_{x}\in
\operatorname{End}_{\mathbb{K}}\left( \mathbb{K}^{M}\right) $ for
different elements $x\in M$ commute; better yet, they satisfy the
multiplication rule $S_{x}S_{y}=S_{x+y}$ (as can be checked immediately).
Hence, by induction over $\left\vert I\right\vert $, we conclude that if $I$
is any finite set, and if $x_{i}$ is an element of $M$ for each $i\in I$, then
\begin{equation}
\prod\limits_{i\in I}S_{x_{i}}=S_{\sum\limits_{i\in I}x_{i}}
\qquad \text{in the ring } \operatorname{End}_{\mathbb{K}} \left(\mathbb{K}^M\right) .
\end{equation}

I shall refer to this fact as the S-multiplication rule.



Now, let us prove Proposition 5. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$
elements of $M$. Recall the well-known formula
\begin{equation}
\prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left( 1-a_{i}\right)
=\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
^{\left\vert I\right\vert }\prod\limits_{i\in I}a_{i},
\end{equation}

which holds whenever $a_{1},a_{2},\ldots,a_{r}$ are commuting elements of some
ring. Applying this formula to $a_{i}=S_{x_{i}}$, we obtain
\begin{equation}
\prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left( \operatorname*{id}
-S_{x_{i}}\right) =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left(
-1\right) ^{\left\vert I\right\vert }\prod\limits_{i\in I}S_{x_{i}}
\end{equation}

(since $S_{x_{1}},S_{x_{2}},\ldots,S_{x_{r}}$ are commuting elements of the
ring $\operatorname{End}_{\mathbb{K}}\left( \mathbb{K}^{M}\right) $). Thus,
\begin{align*}
\Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}} & =\prod\limits_{i\in\left\{
1,2,\ldots,r\right\} }\underbrace{\Delta_{x_{i}}}
_{\substack{=\operatorname*{id}-S_{x_{i}}\\\text{(by the definition of }
\Delta_{x_{i}}\text{)}}}=\prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left(
\operatorname*{id}-S_{x_{i}}\right) \\
& =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
^{\left\vert I\right\vert }\underbrace{\prod\limits_{i\in I}S_{x_{i}}}
_{\substack{=S_{\sum\limits_{i\in I}x_{i}}\\\text{(by the S-multiplication rule)}
}}=\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
^{\left\vert I\right\vert }S_{\sum\limits_{i\in I}x_{i}}.
\end{align*}

Hence, for each $m\in M$ and $f\in\mathbb{K}^{M}$, we obtain
\begin{align*}
& \left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}f\right) \left(
m\right) \\
& =\left( \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
^{\left\vert I\right\vert }S_{\sum\limits_{i\in I}x_{i}}f\right) \left( m\right)
\\
& =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
^{\left\vert I\right\vert }\underbrace{\left( S_{\sum\limits_{i\in I}x_{i}}f\right)
\left( m\right) }_{\substack{=f\left( m+\sum\limits_{i\in I}x_{i}\right)
\\\text{(by the definition of the shift operators)}}}\\
& =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) .
\end{align*}

Thus, Proposition 5 is proven.]
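[For the skeptical reader, Proposition 5 is also easy to test numerically. Here is a short Python sketch, taking $M = \mathbb{K} = \mathbb{Z}$ and an arbitrary map $f : M \rightarrow \mathbb{K}$; the helper names are mine, not standard:

```python
import itertools

def delta(x, f):
    # the difference operator: (Delta_x f)(m) = f(m) - f(m + x)
    return lambda m: f(m) - f(m + x)

def nested_delta(xs, f):
    # Delta_{x_1} Delta_{x_2} ... Delta_{x_r} f, applying Delta_{x_r} first
    for x in reversed(xs):
        f = delta(x, f)
    return f

def inclusion_exclusion(xs, f, m):
    # the right-hand side of Proposition 5
    r = len(xs)
    return sum((-1) ** len(I) * f(m + sum(xs[i] for i in I))
               for p in range(r + 1)
               for I in itertools.combinations(range(r), p))

f = lambda m: m**3 - 2*m   # any map M -> K; here M = K = Z
xs = [2, -1, 5]
print(nested_delta(xs, f)(4), inclusion_exclusion(xs, f, 4))  # the two sides agree
```

Note that the formula holds for an arbitrary map $f$, not just a polynomial function; polynomiality only enters through Corollary 4.]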



      We can now combine Corollary 4 with Proposition 5 and obtain the following:




Corollary 6. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Let
$d\in\mathbb{Z}$ be such that $d<r$. Let $f\in M^{\vee d}$ and $m\in M$. Then,
\begin{equation}
\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) =0.
\end{equation}




[Indeed, Corollary 6 follows from the computation
\begin{align*}
& \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) \\
& =\underbrace{\left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}
}f\right) }_{\substack{=0\\\text{(by Corollary 4, since }f \in M^{\vee d}\text{)}}}\left( m\right)
\qquad\left( \text{by Proposition 5}\right) \\
& =0.
\end{align*}

]



Finally, let us prove Theorem 1. The matrix ring $\mathbb{K}^{n\times n}$ is a
$\mathbb{K}$-module. Let $M$ be this $\mathbb{K}$-module $\mathbb{K}^{n\times
n}$. For each $i,j\in\left\{ 1,2,\ldots,n\right\} $, we let $x_{i,j}$ be the
map $M\rightarrow\mathbb{K}$ that sends each matrix $A \in M$ to its $\left(
i,j\right)$-th entry; this map $x_{i,j}$ is $\mathbb{K}$-linear and thus
belongs to $M^{\vee}$.



It is easy to see that the map $\det:\mathbb{K}^{n\times n}\rightarrow
\mathbb{K}$ (sending each $n\times n$-matrix to its determinant) is a
homogeneous polynomial function of degree $n$ on $M$; indeed, it can be
represented in the commutative $\mathbb{K}$-algebra $\mathbb{K}^M$ as
\begin{equation}
\det=\sum\limits_{\sigma\in S_{n}}\left( -1\right) ^{\sigma}x_{1,\sigma\left(
1\right) }x_{2,\sigma\left( 2\right) }\cdots x_{n,\sigma\left( n\right)
},
\end{equation}

where $S_{n}$ is the $n$-th symmetric group, and where $\left( -1\right)
^{\sigma}$ denotes the sign of a permutation $\sigma$. In other words,
$\det\in M^{\vee n}$. Hence, Corollary 6 (applied to $x_{i}=A_{i}$, $d=n$,
$f=\det$ and $m=0$) yields
\begin{equation}
\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
^{\left\vert I\right\vert }\det\left( 0+\sum\limits_{i\in I}A_{i}\right) =0.
\end{equation}

In other words,
\begin{equation}
\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
^{\left\vert I\right\vert }\det\left( \sum\limits_{i\in I}A_{i}\right) =0.
\end{equation}

This proves Theorem 1. $\blacksquare$
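[And here is Theorem 1 itself checked numerically, as the original poster did in Maple. A minimal Python sketch using exact integer arithmetic and a naive Laplace-expansion determinant; all helper names are mine:

```python
import itertools
import random

def det(A):
    # determinant by Laplace expansion along the first row (exact for integers);
    # the empty 0x0 matrix has determinant 1, matching the I = {} addend when n = 0
    n = len(A)
    if n == 0:
        return 1
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(n))

def alternating_sum(mats, n):
    # sum over all subsets I of {1,...,r} of (-1)^{|I|} det(sum_{i in I} A_i)
    r = len(mats)
    total = 0
    for p in range(r + 1):
        for I in itertools.combinations(range(r), p):
            S = [[sum(mats[i][a][b] for i in I) for b in range(n)]
                 for a in range(n)]
            total += (-1) ** p * det(S)
    return total

random.seed(0)
n, r = 2, 3   # n x n matrices with n < r, as in Theorem 1
mats = [[[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]
        for _ in range(r)]
print(alternating_sum(mats, n))  # 0
```

Of course, such a check only probes one choice of $n$, $r$ and matrices; the proof above is what establishes the identity in general.]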






      share|cite|improve this answer











      $endgroup$


















        4












        $begingroup$

        Let me outline two other proofs. Let me first rename your $m$ and $n$ as $n$
        and $r$, since I find it confusing when $n$ is not the size of the square
        matrices involved. So you are claiming the following:




        Theorem 1. Let $mathbb{K}$ be a commutative ring. Let $ninmathbb{N}$
        and $rinmathbb{N}$ be such that $n<r$. Let $A_{1},A_{2},ldots,A_{r}$ be
        $ntimes n$-matrices over $mathbb{K}$. Then,
        begin{equation}
        sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left( -1right)
        ^{leftvert Irightvert }detleft( sumlimits_{iin I}A_{i}right) =0.
        end{equation}




        Notice that I've snuck in one more little change into your formula: I've added
        the addend for $I=varnothing$. This addend usually doesn't contribute much,
        because $detleft( sumlimits_{iinvarnothing}A_{i}right) =detleft(
        0_{ntimes n}right) $
        is usually $0$... unless $n=0$, in which case it
        contributes $detleft( 0_{0times0}right) =1$ (keep in mind that there is
        only one $0times0$-matrix and its determinant is $1$), and the whole equality
        fails if this addend is missing.



        A first proof of Theorem 1 appears in (the solution to) Exercise 6.53 in my
        Notes on the combinatorial fundamentals of algebra, version of 10 January
        2019. (To obtain
        Theorem 1 from this exercise, set $G=left{ 1,2,ldots,rright} $.) The
        main idea of this proof is that Theorem 1 holds not only for determinants, but
        also for each of the $n!$ products that make up the determinant (assuming that
        you define the determinant of an $ntimes n$-matrix as a sum over the $n!$
        permutations); this is proven by interchanging summation signs and exploiting
        discrete "destructive interference" (i.e., the fact that if $G$ is a finite
        set and $R$ is a subset of $G$, then $sumlimits_{substack{Isubseteq
        G;\Rsubseteq I}}left( -1right) ^{leftvert Irightvert }=
        begin{cases}
        1, & text{if }R=G;\
        0, & text{if }Rneq G
        end{cases}
        $
        ).



        Let me now sketch a second proof of Theorem 1, which shows that it isn't
        really about determinants. It is about finite differences, in a slightly more
        general context than they are usually studied.



        Let $M$ be any $mathbb{K}$-module. The dual $mathbb{K}$-module $M^{vee
        }=operatorname{Hom}_{mathbb{K}}left( M,mathbb{K}right) $
        of
        $M$ consists of all $mathbb{K}$-linear maps $Mrightarrowmathbb{K}$. Thus,
        $M^{vee}$ is a $mathbb{K}$-submodule of the $mathbb{K}$-module
        $mathbb{K}^{M}$ of all maps $Mrightarrowmathbb{K}$. The $mathbb{K}
        $
        -module $mathbb{K}^{M}$ becomes a commutative $mathbb{K}$-algebra (we just
        define multiplication to be pointwise, i.e., the product $fg$ of two maps
        $f,g:Mrightarrowmathbb{K}$ sends each $min M$ to $fleft( mright)
        gleft( mright) inmathbb{K}$
        ).



        For any $dinmathbb{N}$, we let $M^{vee d}$ be the $mathbb{K}$-linear span
        of all elements of $mathbb{K}^{M}$ of the form $f_{1}f_{2}cdots f_{d}$ for
        $f_{1},f_{2},ldots,f_{d}in M^{vee}$. (For $d=0$, the only such element is
        the empty product $1$, so $M^{vee0}$ consists of the constant maps
        $Mrightarrowmathbb{K}$. Notice also that $M^{vee1}=M^{vee}$.) The elements
        of $M^{vee d}$ are called homogeneous polynomial functions of degree $d$ on
        $M$
        . The underlying idea is that if $M$ is a free $mathbb{K}$-module with a
        given basis, then the elements of $M^{vee d}$ are the maps $Mrightarrow
        mathbb{K}$
        that can be expressed as polynomials of the coordinate functions
        with respect to this basis; but the $mathbb{K}$-module $M^{vee d}$ makes
        perfect sense whether or not $M$ is free.



        We also set $M^{vee d}=0$ (the zero $mathbb{K}$-submodule of $mathbb{K}
        ^{M}$
        ) for $d<0$.



        For each $d in mathbb{Z}$, we define a $mathbb{K}$-submodule
        $M^{vee leq d}$ of $mathbb{K}^M$ by
        begin{equation}
        M^{vee leq d} = sumlimits_{i leq d} M^{vee i} .
        end{equation}

        The elements of $M^{vee leq d}$ are called (inhomogeneous) polynomial
        functions of degree $leq d$ on $M$
        .
        The submodules $M^{vee leq d}$ satisfy
        begin{equation}
        M^{vee leq d} M^{vee leq e} subseteq M^{vee leq left(d+eright)}
        end{equation}

        for any integers $d$ and $e$.



        For any $xin M$, we define the $mathbb{K}$-linear map $S_{x}:mathbb{K}
        ^{M}rightarrowmathbb{K}^{M}$
        by setting
        begin{equation}
        left( S_{x}fright) left( mright) =fleft( m+xright) qquadtext{for
        each }min Mtext{ and }finmathbb{K}^{M}.
        end{equation}

        This map $S_{x}$ is called a shift operator. It is an endomorphism of the
        $mathbb{K}$-algebra $mathbb{K}^{M}$ and preserves all the $mathbb{K}
        $
        -submodules $M^{vee leq d}$ (for all $dinmathbb{Z}$).



        Moreover, for any $xin M$, we define the $mathbb{K}$-linear map $Delta
        _{x}:mathbb{K}^{M}rightarrowmathbb{K}^{M}$
        by $Delta_{x}
        =operatorname*{id}-S_{x}$
        . Hence,
        begin{equation}
        left( Delta_{x}fright) left( mright) =fleft( mright) -fleft(
        m+xright) qquadtext{for each }min Mtext{ and }finmathbb{K}^{M}.
        end{equation}

        This map $Delta_{x}$ is called a difference operator. The following crucial
        fact shows that it "decrements the degree" of a polynomial function, similarly
        to how differentiation decrements the degree of a polynomial:




        Lemma 2. Let $x in M$. Then,
        $Delta_{x}M^{vee d}subseteq M^{vee leq left( d-1right)}$
        for each $dinmathbb{Z}$.




        [Let me sketch a proof of Lemma 2:



        Lemma 2 clearly holds for $d < 0$ (since $M^{vee d} = 0$ if $d < 0$).
        Hence, it remains to prove Lemma 2 for $d geq 0$.
        We shall prove this by induction on $d$.
        The induction base is the case $d = 0$, which is easy to
        check (indeed, each $f in M^{vee 0}$ is a constant map, and thus
        satisfies $Delta_x f = 0$; therefore,
        $Delta_{x}M^{vee 0} = 0 subseteq M^{vee leq left( 0-1right) }$).



        For the induction step, we fix some nonnegative integer $e$, and assume
        that Lemma 2 holds for $d = e$. We must then show that Lemma 2
        holds for $d = e+1$.



        We have assumed that Lemma 2 holds for $d = e$.
        In other words, we have
        $Delta_{x}M^{vee e}subseteq M^{vee leq left( e-1right)}$.



        Our goal is to show that Lemma 2
        holds for $d = e+1$. In other words, our goal is to show
        that
        $Delta_{x}M^{vee left(e+1right)}subseteq M^{vee leq e}$.



        But the $mathbb{K}$-module $M^{vee left(e+1right)}$ is
        spanned by maps of the form $fg$ with $fin M^{vee e}$ and
        $gin M^{vee}$ (since it is spanned by products of the
        form $f_1 f_2 cdots f_{e+1}$ with
        $f_1, f_2, ldots, f_{e+1} in M^{vee}$, but each such
        product can be rewritten in the form $fg$
        with $f = f_1 f_2 cdots f_e in M^{vee e}$ and
        $g = f_{e+1} in M^{vee}$).
        Hence, it suffices to show that
        $Delta_x left( fg right) in M^{vee leq e}$
        for each $fin M^{vee e}$ and
        $gin M^{vee}$.



        Let us first notice that if $g in M^{vee}$ is arbitrary,
        then $Delta_x g$ is the constant map whose value is
        $- gleft(xright)$
        (because each $m in M$ satisfies
        begin{equation}
        left(Delta_x gright) left(mright)
        = gleft(mright) - underbrace{gleft(m+xright)}_{substack{=gleft(mright) + gleft(xright)\ text{(since }g text{ is } mathbb{K}text{-linear)}}}
        = gleft(mright) - left(gleft(mright) + gleft(xright)right)
        = - gleft(xright)
        end{equation}

        ), and thus belongs to $M^{vee 0}$.
        In other words, $Delta_x M^{vee} subseteq M^{vee 0}$.



        For each $f in mathbb{K}^M$ and $g in mathbb{K}^M$,
        we have
        begin{align*}
        Delta_{x}left( fgright) & =left( operatorname*{id}-S_{x}right)
        left( fgright) qquadleft( text{since }Delta_{x}=operatorname*{id}
        -S_{x}right) \
        & =fg-underbrace{S_{x}left( fgright) }_{substack{=left( S_{x}fright)
        left( S_{x}gright) \text{(since }S_{x}text{ is an endomorphism}
        \text{of the }mathbb{K}text{-algebra }mathbb{K}^{M}text{)}}}\
        & =fg-left( S_{x}fright) left( S_{x}gright) =underbrace{left(
        f-S_{x}fright) }_{=left( operatorname*{id}-S_{x}right) f}g+left(
        S_{x}fright) underbrace{left( x-S_{x}gright) }_{=left(
        operatorname*{id}-S_{x}right) g}\
        & =left( underbrace{left( operatorname*{id}-S_{x}right) }_{=Delta
        _{x}}fright) g+left( S_{x}fright) left( underbrace{left(
        operatorname*{id}-S_{x}right) }_{=Delta_{x}}gright) \
        & =left( Delta_{x}fright) g+left(
        underbrace{S_{x}}_{substack{=operatorname*{id}-Delta_{x}\
        text{(since }Delta
        _{x}=operatorname*{id}-S_{x}text{)}}}fright) left( Delta_{x}gright)
        \
        & =left( Delta_{x}fright) g+underbrace{left( left(
        operatorname*{id}-Delta_{x}right) fright) }_{=f-Delta_{x}f}left(
        Delta_{x}gright) \
        & =left( Delta_{x}fright) g+left( f-Delta_{x}fright) left(
        Delta_{x}gright) \
        & =left( Delta_{x}fright) g+fleft( Delta_{x}gright) -left(
        Delta_{x}fright) left( Delta_{x}gright) .
        end{align*}

        Hence, for each $fin M^{vee e}$ and $gin M^{vee}$, we have
        begin{align*}
        Delta_{x}left( fgright) & =left( Delta_{x}underbrace{f}_{in
        M^{vee e}}right) underbrace{g}_{in M^{vee}}+underbrace{f}_{in M^{vee
        e}}left( Delta_{x}underbrace{g}_{in M^{vee}}right) -left( Delta
        _{x}underbrace{f}_{in M^{vee e}}right)
        left( Delta_{x}underbrace{g}_{in M^{vee}}right) \
        & inunderbrace{left( Delta_{x}M^{vee e}right) }_{subseteq M^{vee
        leqleft( e-1right) }}M^{vee}+M^{vee e}underbrace{left( Delta
        _{x}M^{vee}right) }_{subseteq M^{vee0}}-underbrace{left( Delta
        _{x}M^{vee e}right) }_{subseteq M^{veeleqleft( e-1right) }
        }underbrace{left( Delta_{x}M^{vee}right) }_{subseteq M^{vee0}}\
        & subsetequnderbrace{M^{veeleqleft( e-1right) }M^{vee}}_{subseteq
        M^{veeleq e}}+underbrace{M^{vee e}M^{vee0}}_{subseteq M^{vee
        e}subseteq M^{veeleq e}}-underbrace{M^{veeleqleft( e-1right)
        }M^{vee0}}_{subseteq M^{veeleqleft( e-1right) }subseteq M^{veeleq
        e}}\
        & subseteq M^{veeleq e}+M^{veeleq e}-M^{veeleq e}subseteq M^{veeleq
        e}.
        end{align*}

        This proves that $Delta_{x}left( M^{veeleft( e+1right) }right)
        subseteq M^{veeleq e}$
        , as we intended to prove.



        Thus, the induction step is complete, and Lemma 2 is proven.]



        The following fact follows by induction using Lemma 2:




        Corollary 3. Let $rinmathbb{N}$. Let $x_{1},x_{2},ldots,x_{r}$ be $r$
        elements of $M$. Then,
        begin{equation}
        Delta_{x_{1}}Delta_{x_{2}}cdotsDelta_{x_{r}}M^{vee d}subseteq
        M^{vee leq left( d-rright) }
        end{equation}

        for each $dinmathbb{Z}$.




        And as a consequence of this, we obtain the following:




        Corollary 4. Let $rinmathbb{N}$. Let $x_{1},x_{2},ldots,x_{r}$ be $r$
        elements of $M$. Then,
        begin{equation}
        Delta_{x_{1}}Delta_{x_{2}}cdotsDelta_{x_{r}}M^{vee d}=0
        end{equation}

        for each $dinmathbb{Z}$ satisfying $d<r$.




        [In fact, Corollary 4 follows immediately from Corollary 3, because $d<r$
        implies $M^{vee leq left( d-rright) }=0$.]



        To make use of Corollary 4, we want a more-or-less explicit expression for how
        $Delta_{x_{1}}Delta_{x_{2}}cdotsDelta_{x_{r}}$ acts on maps in
        $mathbb{K}^{M}$. This is the following fact:




        Proposition 5. Let $rinmathbb{N}$. Let $x_{1},x_{2},ldots,x_{r}$ be $r$
        elements of $M$. Then,
        begin{equation}
        left( Delta_{x_{1}}Delta_{x_{2}}cdotsDelta_{x_{r}}fright) left(
        mright) =sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left( -1right)
        ^{leftvert Irightvert }fleft( m+sumlimits_{iin I}x_{i}right)
        qquadtext{for each }min Mtext{ and }finmathbb{K}^{M}.
        end{equation}




        [Proposition 5 can be proven by induction over $r$, where the induction step
        involves splitting the sum on the right hand side into the part with the $I$
        that contain $r$ and the part with the $I$ that don't. But there is also a
        slicker argument, which needs some preparation. The maps $S_{x}in
        operatorname{End}_{mathbb{K}}left( mathbb{K}^{M}right) $
        for
        different elements $xin M$ commute; better yet, they satisfy the
        multiplication rule $S_{x}S_{y}=S_{x+y}$ (as can be checked immediately).
        Hence, by induction over $leftvert Irightvert $, we conclude that if $I$
        is any finite set, and if $x_{i}$ is an element of $M$ for each $iin I$, then
        begin{equation}
        prodlimits_{iin I}S_{x_{i}}=S_{sumlimits_{iin I}x_{i}}
        qquad text{in the ring } operatorname{End}_{mathbb{K}} left(mathbb{K}^Mright) .
        end{equation}

        I shall refer to this fact as the S-multiplication rule.



        Now, let us prove Proposition 5. Let $x_{1},x_{2},ldots,x_{r}$ be $r$
        elements of $M$. Recall the well-known formula
        begin{equation}
        prodlimits_{iinleft{ 1,2,ldots,rright} }left( 1-a_{i}right)
        =sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left( -1right)
        ^{leftvert Irightvert }prodlimits_{iin I}a_{i},
        end{equation}

        which holds whenever $a_{1},a_{2},ldots,a_{r}$ are commuting elements of some
        ring. Applying this formula to $a_{i}=S_{x_{i}}$, we obtain
        begin{equation}
        prodlimits_{iinleft{ 1,2,ldots,rright} }left( operatorname*{id}
        -S_{x_{i}}right) =sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left(
        -1right) ^{leftvert Irightvert }prodlimits_{iin I}S_{x_{i}}
        end{equation}

        (since $S_{x_{1}},S_{x_{2}},ldots,S_{x_{r}}$ are commuting elements of the
        ring $operatorname{End}_{mathbb{K}}left( mathbb{K}^{M}right)
        $
        ). Thus,
        begin{align*}
        Delta_{x_{1}}Delta_{x_{2}}cdotsDelta_{x_{r}} & =prodlimits_{iinleft{
        1,2,ldots,rright} }underbrace{Delta_{x_{i}}}
        _{substack{=operatorname*{id}-S_{x_{i}}\text{(by the definition of }
        Delta_{x_{i}}text{)}}}=prodlimits_{iinleft{ 1,2,ldots,rright} }left(
        operatorname*{id}-S_{x_{i}}right) \
        & =sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left( -1right)
        ^{leftvert Irightvert }underbrace{prodlimits_{iin I}S_{x_{i}}}
        _{substack{=S_{sumlimits_{iin I}x_{i}}\text{(by the S-multiplication rule)}
        }}=sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left( -1right)
        ^{leftvert Irightvert }S_{sumlimits_{iin I}x_{i}}.
        end{align*}

        Hence, for each $min M$ and $finmathbb{K}^{M}$, we obtain
        begin{align*}
        & left( Delta_{x_{1}}Delta_{x_{2}}cdotsDelta_{x_{r}}fright) left(
        mright) \
        & =left( sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left( -1right)
        ^{leftvert Irightvert }S_{sumlimits_{iin I}x_{i}}fright) left( mright)
        \
        & =sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left( -1right)
        ^{leftvert Irightvert }underbrace{left( S_{sumlimits_{iin I}x_{i}}fright)
        left( mright) }_{substack{=fleft( m+sumlimits_{iin I}x_{i}right)
        \text{(by the definition of the shift operators)}}}\
        & =sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left( -1right)
        ^{leftvert Irightvert }fleft( m+sumlimits_{iin I}x_{i}right) .
        end{align*}

        Thus, Proposition 5 is proven.]



        We can now combine Corollary 4 with Proposition 5 and obtain the following:




        Corollary 6. Let $x_{1},x_{2},ldots,x_{r}$ be $r$ elements of $M$. Let
        $dinmathbb{Z}$ be such that $d<r$. Let $fin M^{vee d}$ and $min M$. Then,
        begin{equation}
        sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left( -1right)
        ^{leftvert Irightvert }fleft( m+sumlimits_{iin I}x_{i}right) =0.
        end{equation}




        [Indeed, Corollary 6 follows from the computation
        begin{align*}
        & sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left( -1right)
        ^{leftvert Irightvert }fleft( m+sumlimits_{iin I}x_{i}right) \
        & =underbrace{left( Delta_{x_{1}}Delta_{x_{2}}cdotsDelta_{x_{r}
        }fright) }_{substack{=0\text{(by Corollary 4, since } f in M^{vee d} text{)}}}left( mright)
        qquadleft( text{by Proposition 5}right) \
        & =0.
        end{align*}

        ]



        Finally, let us prove Theorem 1. The matrix ring $mathbb{K}^{ntimes n}$ is a
        $mathbb{K}$-module. Let $M$ be this $mathbb{K}$-module $mathbb{K}^{ntimes
        n}$
        . For each $i,jinleft{ 1,2,ldots,nright} $, we let $x_{i,j}$ be the
        map $Mrightarrowmathbb{K}$ that sends each matrix $M$ to its $left(
        i,jright) $
        -th entry; this map $x_{i,j}$ is $mathbb{K}$-linear and thus
        belongs to $M^{vee}$.



        It is easy to see that the map $det:mathbb{K}^{ntimes n}rightarrow
        mathbb{K}$
        (sending each $ntimes n$-matrix to its determinant) is a
        homogeneous polynomial function of degree $n$ on $M$; indeed, it can be
        represented in the commutative $mathbb{K}$-algebra $mathbb{K}^M$ as
        begin{equation}
        det=sumlimits_{sigmain S_{n}}left( -1right) ^{sigma}x_{1,sigmaleft(
        1right) }x_{2,sigmaleft( 2right) }cdots x_{n,sigmaleft( nright)
        },
        end{equation}

        where $S_{n}$ is the $n$-th symmetric group, and where $left( -1right)
        ^{sigma}$
        denotes the sign of a permutation $sigma$. In other words,
        $detin M^{vee n}$. Hence, Corollary 6 (applied to $x_{i}=A_{i}$, $d=n$,
        $f=det$ and $m=0$) yields
        begin{equation}
        sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left( -1right)
        ^{leftvert Irightvert }detleft( 0+sumlimits_{iin I}A_{i}right) =0.
        end{equation}

        In other words,
        begin{equation}
        sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left( -1right)
        ^{leftvert Irightvert }detleft( sumlimits_{iin I}A_{i}right) =0.
        end{equation}

        This proves Theorem 1. $blacksquare$






        share|cite|improve this answer











        $endgroup$
















          4












          4








          4





          $begingroup$

          Let me outline two other proofs. Let me first rename your $m$ and $n$ as $n$
          and $r$, since I find it confusing when $n$ is not the size of the square
          matrices involved. So you are claiming the following:




          Theorem 1. Let $\mathbb{K}$ be a commutative ring. Let $n\in\mathbb{N}$
          and $r\in\mathbb{N}$ be such that $n<r$. Let $A_{1},A_{2},\ldots,A_{r}$ be
          $n\times n$-matrices over $\mathbb{K}$. Then,
          \begin{equation}
          \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
          ^{\left\vert I\right\vert }\det\left( \sum\limits_{i\in I}A_{i}\right) =0.
          \end{equation}




          Notice that I've snuck one more little change into your formula: I've added
          the addend for $I=\varnothing$. This addend usually doesn't contribute much,
          because $\det\left( \sum\limits_{i\in\varnothing}A_{i}\right) =\det\left(
          0_{n\times n}\right) $ is usually $0$... unless $n=0$, in which case it
          contributes $\det\left( 0_{0\times0}\right) =1$ (keep in mind that there is
          only one $0\times0$-matrix, and its determinant is $1$), and the whole equality
          fails if this addend is missing.
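          Theorem 1 is easy to check numerically. Here is a minimal sketch in plain
          Python (exact integer arithmetic) for the case $n=2$, $r=3$; the helper names
          det2, mat_sum and alternating_sum are mine, purely for illustration:

```python
import itertools
import random

def det2(A):
    # determinant of a 2x2 matrix given as [[a, b], [c, d]]
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def mat_sum(mats):
    # entrywise sum of a list of 2x2 matrices; the empty sum is the zero matrix
    S = [[0, 0], [0, 0]]
    for A in mats:
        for i in range(2):
            for j in range(2):
                S[i][j] += A[i][j]
    return S

def alternating_sum(mats):
    # sum over all subsets I of {1, ..., r} of (-1)^|I| * det(sum_{i in I} A_i),
    # including the addend for I = {} (det of the 2x2 zero matrix, which is 0)
    r = len(mats)
    return sum((-1) ** p * det2(mat_sum([mats[i] for i in I]))
               for p in range(r + 1)
               for I in itertools.combinations(range(r), p))

random.seed(0)
mats = [[[random.randint(-5, 5) for _ in range(2)] for _ in range(2)]
        for _ in range(3)]  # r = 3 matrices of size n = 2, so n < r
print(alternating_sum(mats))  # 0, as Theorem 1 predicts
```

          By Theorem 1, the printed value is $0$ for any choice of the three
          $2\times2$ matrices, since $n=2<3=r$.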



          A first proof of Theorem 1 appears in (the solution to) Exercise 6.53 in my
          Notes on the combinatorial fundamentals of algebra, version of 10 January
          2019. (To obtain
          Theorem 1 from this exercise, set $G=\left\{ 1,2,\ldots,r\right\} $.) The
          main idea of this proof is that Theorem 1 holds not only for determinants, but
          also for each of the $n!$ products that make up the determinant (assuming that
          you define the determinant of an $n\times n$-matrix as a sum over the $n!$
          permutations); this is proven by interchanging summation signs and exploiting
          discrete "destructive interference" (i.e., the fact that if $G$ is a finite
          set and $R$ is a subset of $G$, then $\sum\limits_{\substack{I\subseteq
          G;\\R\subseteq I}}\left( -1\right) ^{\left\vert I\right\vert }=
          \begin{cases}
          \left( -1\right) ^{\left\vert G\right\vert }, & \text{if }R=G;\\
          0, & \text{if }R\neq G
          \end{cases}
          $
          ).



          Let me now sketch a second proof of Theorem 1, which shows that it isn't
          really about determinants: it is about finite differences, in a slightly more
          general setting than the one in which they are usually studied.



          Let $M$ be any $\mathbb{K}$-module. The dual $\mathbb{K}$-module
          $M^{\vee}=\operatorname{Hom}_{\mathbb{K}}\left( M,\mathbb{K}\right) $ of
          $M$ consists of all $\mathbb{K}$-linear maps $M\rightarrow\mathbb{K}$. Thus,
          $M^{\vee}$ is a $\mathbb{K}$-submodule of the $\mathbb{K}$-module
          $\mathbb{K}^{M}$ of all maps $M\rightarrow\mathbb{K}$. The
          $\mathbb{K}$-module $\mathbb{K}^{M}$ becomes a commutative
          $\mathbb{K}$-algebra (we just
          define multiplication to be pointwise, i.e., the product $fg$ of two maps
          $f,g:M\rightarrow\mathbb{K}$ sends each $m\in M$ to $f\left( m\right)
          g\left( m\right) \in\mathbb{K}$).



          For any $d\in\mathbb{N}$, we let $M^{\vee d}$ be the $\mathbb{K}$-linear span
          of all elements of $\mathbb{K}^{M}$ of the form $f_{1}f_{2}\cdots f_{d}$ for
          $f_{1},f_{2},\ldots,f_{d}\in M^{\vee}$. (For $d=0$, the only such element is
          the empty product $1$, so $M^{\vee0}$ consists of the constant maps
          $M\rightarrow\mathbb{K}$. Notice also that $M^{\vee1}=M^{\vee}$.) The elements
          of $M^{\vee d}$ are called homogeneous polynomial functions of degree $d$ on
          $M$. The underlying idea is that if $M$ is a free $\mathbb{K}$-module with a
          given basis, then the elements of $M^{\vee d}$ are the maps
          $M\rightarrow\mathbb{K}$ that can be expressed as polynomials of the
          coordinate functions
          with respect to this basis; but the $\mathbb{K}$-module $M^{\vee d}$ makes
          perfect sense whether or not $M$ is free.

          We also set $M^{\vee d}=0$ (the zero $\mathbb{K}$-submodule of
          $\mathbb{K}^{M}$) for $d<0$.



          For each $d\in\mathbb{Z}$, we define a $\mathbb{K}$-submodule
          $M^{\vee\leq d}$ of $\mathbb{K}^{M}$ by
          \begin{equation}
          M^{\vee\leq d}=\sum\limits_{i\leq d}M^{\vee i}.
          \end{equation}

          The elements of $M^{\vee\leq d}$ are called (inhomogeneous) polynomial
          functions of degree $\leq d$ on $M$.
          The submodules $M^{\vee\leq d}$ satisfy
          \begin{equation}
          M^{\vee\leq d}M^{\vee\leq e}\subseteq M^{\vee\leq\left( d+e\right) }
          \end{equation}

          for any integers $d$ and $e$.



          For any $x\in M$, we define the $\mathbb{K}$-linear map
          $S_{x}:\mathbb{K}^{M}\rightarrow\mathbb{K}^{M}$ by setting
          \begin{equation}
          \left( S_{x}f\right) \left( m\right) =f\left( m+x\right) \qquad\text{for
          each }m\in M\text{ and }f\in\mathbb{K}^{M}.
          \end{equation}

          This map $S_{x}$ is called a shift operator. It is an endomorphism of the
          $\mathbb{K}$-algebra $\mathbb{K}^{M}$ and preserves all the
          $\mathbb{K}$-submodules $M^{\vee\leq d}$ (for all $d\in\mathbb{Z}$).



          Moreover, for any $x\in M$, we define the $\mathbb{K}$-linear map
          $\Delta_{x}:\mathbb{K}^{M}\rightarrow\mathbb{K}^{M}$ by
          $\Delta_{x}=\operatorname*{id}-S_{x}$. Hence,
          \begin{equation}
          \left( \Delta_{x}f\right) \left( m\right) =f\left( m\right) -f\left(
          m+x\right) \qquad\text{for each }m\in M\text{ and }f\in\mathbb{K}^{M}.
          \end{equation}

          This map $\Delta_{x}$ is called a difference operator. The following crucial
          fact shows that it "decrements the degree" of a polynomial function, similarly
          to how differentiation decrements the degree of a polynomial:




          Lemma 2. Let $x\in M$. Then,
          $\Delta_{x}M^{\vee d}\subseteq M^{\vee\leq\left( d-1\right) }$
          for each $d\in\mathbb{Z}$.




          [Let me sketch a proof of Lemma 2:



          Lemma 2 clearly holds for $d<0$ (since $M^{\vee d}=0$ if $d<0$).
          Hence, it remains to prove Lemma 2 for $d\geq0$.
          We shall prove this by induction on $d$.
          The induction base is the case $d=0$, which is easy to
          check (indeed, each $f\in M^{\vee0}$ is a constant map, and thus
          satisfies $\Delta_{x}f=0$; therefore,
          $\Delta_{x}M^{\vee0}=0\subseteq M^{\vee\leq\left( 0-1\right) }$).



          For the induction step, we fix some nonnegative integer $e$, and assume
          that Lemma 2 holds for $d = e$. We must then show that Lemma 2
          holds for $d = e+1$.



          We have assumed that Lemma 2 holds for $d=e$.
          In other words, we have
          $\Delta_{x}M^{\vee e}\subseteq M^{\vee\leq\left( e-1\right) }$.

          Our goal is to show that Lemma 2
          holds for $d=e+1$. In other words, our goal is to show that
          $\Delta_{x}M^{\vee\left( e+1\right) }\subseteq M^{\vee\leq e}$.



          But the $\mathbb{K}$-module $M^{\vee\left( e+1\right) }$ is
          spanned by maps of the form $fg$ with $f\in M^{\vee e}$ and
          $g\in M^{\vee}$ (since it is spanned by products of the
          form $f_{1}f_{2}\cdots f_{e+1}$ with
          $f_{1},f_{2},\ldots,f_{e+1}\in M^{\vee}$, but each such
          product can be rewritten in the form $fg$
          with $f=f_{1}f_{2}\cdots f_{e}\in M^{\vee e}$ and
          $g=f_{e+1}\in M^{\vee}$).
          Hence, it suffices to show that
          $\Delta_{x}\left( fg\right) \in M^{\vee\leq e}$
          for each $f\in M^{\vee e}$ and $g\in M^{\vee}$.



          Let us first notice that if $g\in M^{\vee}$ is arbitrary,
          then $\Delta_{x}g$ is the constant map whose value is
          $-g\left( x\right) $ (because each $m\in M$ satisfies
          \begin{equation}
          \left( \Delta_{x}g\right) \left( m\right)
          =g\left( m\right) -\underbrace{g\left( m+x\right) }_{\substack{=g\left( m\right) +g\left( x\right) \\\text{(since }g\text{ is }\mathbb{K}\text{-linear)}}}
          =g\left( m\right) -\left( g\left( m\right) +g\left( x\right) \right)
          =-g\left( x\right)
          \end{equation}

          ), and thus belongs to $M^{\vee0}$.
          In other words, $\Delta_{x}M^{\vee}\subseteq M^{\vee0}$.



          For each $f\in\mathbb{K}^{M}$ and $g\in\mathbb{K}^{M}$,
          we have
          \begin{align*}
          \Delta_{x}\left( fg\right) & =\left( \operatorname*{id}-S_{x}\right)
          \left( fg\right) \qquad\left( \text{since }\Delta_{x}=\operatorname*{id}
          -S_{x}\right) \\
          & =fg-\underbrace{S_{x}\left( fg\right) }_{\substack{=\left( S_{x}f\right)
          \left( S_{x}g\right) \\\text{(since }S_{x}\text{ is an endomorphism}
          \\\text{of the }\mathbb{K}\text{-algebra }\mathbb{K}^{M}\text{)}}}\\
          & =fg-\left( S_{x}f\right) \left( S_{x}g\right) =\underbrace{\left(
          f-S_{x}f\right) }_{=\left( \operatorname*{id}-S_{x}\right) f}g+\left(
          S_{x}f\right) \underbrace{\left( g-S_{x}g\right) }_{=\left(
          \operatorname*{id}-S_{x}\right) g}\\
          & =\left( \underbrace{\left( \operatorname*{id}-S_{x}\right) }_{=\Delta
          _{x}}f\right) g+\left( S_{x}f\right) \left( \underbrace{\left(
          \operatorname*{id}-S_{x}\right) }_{=\Delta_{x}}g\right) \\
          & =\left( \Delta_{x}f\right) g+\left(
          \underbrace{S_{x}}_{\substack{=\operatorname*{id}-\Delta_{x}\\
          \text{(since }\Delta
          _{x}=\operatorname*{id}-S_{x}\text{)}}}f\right) \left( \Delta_{x}g\right)
          \\
          & =\left( \Delta_{x}f\right) g+\underbrace{\left( \left(
          \operatorname*{id}-\Delta_{x}\right) f\right) }_{=f-\Delta_{x}f}\left(
          \Delta_{x}g\right) \\
          & =\left( \Delta_{x}f\right) g+\left( f-\Delta_{x}f\right) \left(
          \Delta_{x}g\right) \\
          & =\left( \Delta_{x}f\right) g+f\left( \Delta_{x}g\right) -\left(
          \Delta_{x}f\right) \left( \Delta_{x}g\right) .
          \end{align*}

          Hence, for each $f\in M^{\vee e}$ and $g\in M^{\vee}$, we have
          \begin{align*}
          \Delta_{x}\left( fg\right) & =\left( \Delta_{x}\underbrace{f}_{\in
          M^{\vee e}}\right) \underbrace{g}_{\in M^{\vee}}+\underbrace{f}_{\in M^{\vee
          e}}\left( \Delta_{x}\underbrace{g}_{\in M^{\vee}}\right) -\left( \Delta
          _{x}\underbrace{f}_{\in M^{\vee e}}\right)
          \left( \Delta_{x}\underbrace{g}_{\in M^{\vee}}\right) \\
          & \in\underbrace{\left( \Delta_{x}M^{\vee e}\right) }_{\subseteq M^{\vee
          \leq\left( e-1\right) }}M^{\vee}+M^{\vee e}\underbrace{\left( \Delta
          _{x}M^{\vee}\right) }_{\subseteq M^{\vee0}}-\underbrace{\left( \Delta
          _{x}M^{\vee e}\right) }_{\subseteq M^{\vee\leq\left( e-1\right) }
          }\underbrace{\left( \Delta_{x}M^{\vee}\right) }_{\subseteq M^{\vee0}}\\
          & \subseteq\underbrace{M^{\vee\leq\left( e-1\right) }M^{\vee}}_{\subseteq
          M^{\vee\leq e}}+\underbrace{M^{\vee e}M^{\vee0}}_{\subseteq M^{\vee
          e}\subseteq M^{\vee\leq e}}-\underbrace{M^{\vee\leq\left( e-1\right)
          }M^{\vee0}}_{\subseteq M^{\vee\leq\left( e-1\right) }\subseteq M^{\vee\leq
          e}}\\
          & \subseteq M^{\vee\leq e}+M^{\vee\leq e}-M^{\vee\leq e}\subseteq M^{\vee\leq
          e}.
          \end{align*}

          This proves that $\Delta_{x}\left( M^{\vee\left( e+1\right) }\right)
          \subseteq M^{\vee\leq e}$, as we intended to prove.



          Thus, the induction step is complete, and Lemma 2 is proven.]
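          As a sanity check, the product rule
          $\Delta_{x}\left( fg\right) =\left( \Delta_{x}f\right) g+f\left(
          \Delta_{x}g\right) -\left( \Delta_{x}f\right) \left( \Delta_{x}g\right) $
          derived in the proof above can be tested numerically. A minimal Python
          sketch, taking $M=\mathbb{K}=\mathbb{Z}$ and two sample functions of my own
          choosing:

```python
def Delta(x, f):
    # difference operator: (Delta_x f)(m) = f(m) - f(m + x)
    return lambda m: f(m) - f(m + x)

def mul(f, g):
    # pointwise product of two functions M -> K
    return lambda m: f(m) * g(m)

f = lambda m: m * m + 1  # a sample quadratic polynomial function
g = lambda m: 3 * m      # a linear map, i.e. an element of M^{vee}

x = 5
lhs = Delta(x, mul(f, g))
Df, Dg = Delta(x, f), Delta(x, g)
rhs = lambda m: Df(m) * g(m) + f(m) * Dg(m) - Df(m) * Dg(m)

# the two sides agree pointwise
print(all(lhs(m) == rhs(m) for m in range(-10, 11)))  # True
```

          The identity holds for arbitrary $f,g\in\mathbb{K}^{M}$, not just the two
          samples; the range of test points here is arbitrary.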



          The following fact follows by induction using Lemma 2:




          Corollary 3. Let $r\in\mathbb{N}$. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$
          elements of $M$. Then,
          \begin{equation}
          \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}M^{\vee d}\subseteq
          M^{\vee\leq\left( d-r\right) }
          \end{equation}

          for each $d\in\mathbb{Z}$.




          And as a consequence of this, we obtain the following:




          Corollary 4. Let $r\in\mathbb{N}$. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$
          elements of $M$. Then,
          \begin{equation}
          \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}M^{\vee d}=0
          \end{equation}

          for each $d\in\mathbb{Z}$ satisfying $d<r$.

          [In fact, Corollary 4 follows immediately from Corollary 3, because $d<r$
          implies $M^{\vee\leq\left( d-r\right) }=0$.]



          To make use of Corollary 4, we want a more-or-less explicit expression for how
          $\Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}$ acts on maps in
          $\mathbb{K}^{M}$. This is the following fact:




          Proposition 5. Let $r\in\mathbb{N}$. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$
          elements of $M$. Then,
          \begin{equation}
          \left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}f\right) \left(
          m\right) =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
          ^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right)
          \qquad\text{for each }m\in M\text{ and }f\in\mathbb{K}^{M}.
          \end{equation}




          [Proposition 5 can be proven by induction on $r$, where the induction step
          involves splitting the sum on the right hand side into the part with the $I$
          that contain $r$ and the part with the $I$ that don't. But there is also a
          slicker argument, which needs some preparation. The maps $S_{x}\in
          \operatorname{End}_{\mathbb{K}}\left( \mathbb{K}^{M}\right) $ for
          different elements $x\in M$ commute; better yet, they satisfy the
          multiplication rule $S_{x}S_{y}=S_{x+y}$ (as can be checked immediately).
          Hence, by induction on $\left\vert I\right\vert $, we conclude that if $I$
          is any finite set, and if $x_{i}$ is an element of $M$ for each $i\in I$, then
          \begin{equation}
          \prod\limits_{i\in I}S_{x_{i}}=S_{\sum\limits_{i\in I}x_{i}}
          \qquad\text{in the ring }\operatorname{End}_{\mathbb{K}}\left( \mathbb{K}^{M}\right) .
          \end{equation}

          I shall refer to this fact as the S-multiplication rule.



          Now, let us prove Proposition 5. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$
          elements of $M$. Recall the well-known formula
          \begin{equation}
          \prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left( 1-a_{i}\right)
          =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
          ^{\left\vert I\right\vert }\prod\limits_{i\in I}a_{i},
          \end{equation}

          which holds whenever $a_{1},a_{2},\ldots,a_{r}$ are commuting elements of some
          ring. Applying this formula to $a_{i}=S_{x_{i}}$, we obtain
          \begin{equation}
          \prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left( \operatorname*{id}
          -S_{x_{i}}\right) =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left(
          -1\right) ^{\left\vert I\right\vert }\prod\limits_{i\in I}S_{x_{i}}
          \end{equation}

          (since $S_{x_{1}},S_{x_{2}},\ldots,S_{x_{r}}$ are commuting elements of the
          ring $\operatorname{End}_{\mathbb{K}}\left( \mathbb{K}^{M}\right) $). Thus,
          \begin{align*}
          \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}} & =\prod\limits_{i\in\left\{
          1,2,\ldots,r\right\} }\underbrace{\Delta_{x_{i}}}
          _{\substack{=\operatorname*{id}-S_{x_{i}}\\\text{(by the definition of }
          \Delta_{x_{i}}\text{)}}}=\prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left(
          \operatorname*{id}-S_{x_{i}}\right) \\
          & =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
          ^{\left\vert I\right\vert }\underbrace{\prod\limits_{i\in I}S_{x_{i}}}
          _{\substack{=S_{\sum\limits_{i\in I}x_{i}}\\\text{(by the S-multiplication rule)}
          }}=\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
          ^{\left\vert I\right\vert }S_{\sum\limits_{i\in I}x_{i}}.
          \end{align*}

          Hence, for each $m\in M$ and $f\in\mathbb{K}^{M}$, we obtain
          \begin{align*}
          & \left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}f\right) \left(
          m\right) \\
          & =\left( \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
          ^{\left\vert I\right\vert }S_{\sum\limits_{i\in I}x_{i}}f\right) \left( m\right)
          \\
          & =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
          ^{\left\vert I\right\vert }\underbrace{\left( S_{\sum\limits_{i\in I}x_{i}}f\right)
          \left( m\right) }_{\substack{=f\left( m+\sum\limits_{i\in I}x_{i}\right)
          \\\text{(by the definition of the shift operators)}}}\\
          & =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
          ^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) .
          \end{align*}

          Thus, Proposition 5 is proven.]
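          Corollary 4 (through the subset formula of Proposition 5) can also be checked
          numerically. A minimal Python sketch with $M=\mathbb{K}=\mathbb{Z}$, a sample
          polynomial $f$ of degree $d=2$, and $r=3$ shifts, so that the alternating sum
          must vanish:

```python
import itertools

def subset_alternating_sum(f, m, xs):
    # right-hand side of Proposition 5:
    # sum over subsets I of {1, ..., r} of (-1)^|I| * f(m + sum_{i in I} x_i)
    r = len(xs)
    return sum((-1) ** p * f(m + sum(xs[i] for i in I))
               for p in range(r + 1)
               for I in itertools.combinations(range(r), p))

f = lambda m: 2 * m * m - m + 7  # a polynomial function of degree d = 2
xs = [3, -1, 4]                  # r = 3 shifts, and d = 2 < 3 = r

# all values are 0, as Corollary 4 predicts
print([subset_alternating_sum(f, m, xs) for m in range(-3, 4)])
```

          For $r=1$, the same function computes $\left( \Delta_{x}f\right) \left(
          m\right) =f\left( m\right) -f\left( m+x\right) $, which is generally
          nonzero; the vanishing is special to $d<r$.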



          We can now combine Corollary 4 with Proposition 5 and obtain the following:




          Corollary 6. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Let
          $d\in\mathbb{Z}$ be such that $d<r$. Let $f\in M^{\vee d}$ and $m\in M$. Then,
          \begin{equation}
          \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
          ^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) =0.
          \end{equation}




          [Indeed, Corollary 6 follows from the computation
          \begin{align*}
          & \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
          ^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) \\
          & =\underbrace{\left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}
          }f\right) }_{\substack{=0\\\text{(by Corollary 4, since }f\in M^{\vee d}\text{)}}}\left( m\right)
          \qquad\left( \text{by Proposition 5}\right) \\
          & =0.
          \end{align*}

          ]



          Finally, let us prove Theorem 1. The matrix ring $\mathbb{K}^{n\times n}$ is a
          $\mathbb{K}$-module. Let $M$ be this $\mathbb{K}$-module
          $\mathbb{K}^{n\times n}$. For each $i,j\in\left\{ 1,2,\ldots,n\right\} $, we
          let $x_{i,j}$ be the map $M\rightarrow\mathbb{K}$ that sends each matrix
          $A\in M$ to its $\left( i,j\right) $-th entry; this map $x_{i,j}$ is
          $\mathbb{K}$-linear and thus belongs to $M^{\vee}$.

          It is easy to see that the map $\det:\mathbb{K}^{n\times n}\rightarrow
          \mathbb{K}$ (sending each $n\times n$-matrix to its determinant) is a
          homogeneous polynomial function of degree $n$ on $M$; indeed, it can be
          represented in the commutative $\mathbb{K}$-algebra $\mathbb{K}^{M}$ as
          \begin{equation}
          \det=\sum\limits_{\sigma\in S_{n}}\left( -1\right) ^{\sigma}x_{1,\sigma\left(
          1\right) }x_{2,\sigma\left( 2\right) }\cdots x_{n,\sigma\left( n\right) },
          \end{equation}

          where $S_{n}$ is the $n$-th symmetric group, and where $\left( -1\right)
          ^{\sigma}$ denotes the sign of a permutation $\sigma$. In other words,
          $\det\in M^{\vee n}$. Hence, Corollary 6 (applied to $x_{i}=A_{i}$, $d=n$,
          $f=\det$ and $m=0$; its hypothesis $d<r$ holds since $n<r$) yields
          \begin{equation}
          \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
          ^{\left\vert I\right\vert }\det\left( 0+\sum\limits_{i\in I}A_{i}\right) =0.
          \end{equation}

          In other words,
          \begin{equation}
          \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right)
          ^{\left\vert I\right\vert }\det\left( \sum\limits_{i\in I}A_{i}\right) =0.
          \end{equation}

          This proves Theorem 1. $\blacksquare$
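          The permutation expansion of $\det$ used here is easy to implement directly.
          A minimal Python sketch of this Leibniz-formula definition (the function names
          are mine); note that it also returns $1$ for the empty $0\times0$ matrix,
          matching the earlier remark about the $I=\varnothing$ addend:

```python
import itertools

def sign(perm):
    # sign of a permutation, given as a tuple of the images of 0, 1, ..., n-1
    s = 1
    n = len(perm)
    for i in range(n):
        for j in range(i + 1, n):
            if perm[i] > perm[j]:  # count inversions by parity
                s = -s
    return s

def det_leibniz(A):
    # det A = sum over permutations sigma in S_n of
    #         sign(sigma) * A[0][sigma(0)] * ... * A[n-1][sigma(n-1)]
    n = len(A)
    total = 0
    for perm in itertools.permutations(range(n)):
        term = sign(perm)
        for i in range(n):
            term *= A[i][perm[i]]
        total += term
    return total

print(det_leibniz([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
print(det_leibniz([]))  # 1: the empty 0x0 matrix has determinant 1
```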






          share|cite|improve this answer











          $endgroup$



          Let me outline two other proofs. Let me first rename your $m$ and $n$ as $n$
          and $r$, since I find it confusing when $n$ is not the size of the square
          matrices involved. So you are claiming the following:




          Theorem 1. Let $mathbb{K}$ be a commutative ring. Let $ninmathbb{N}$
          and $rinmathbb{N}$ be such that $n<r$. Let $A_{1},A_{2},ldots,A_{r}$ be
          $ntimes n$-matrices over $mathbb{K}$. Then,
          begin{equation}
          sumlimits_{Isubseteqleft{ 1,2,ldots,rright} }left( -1right)
          ^{leftvert Irightvert }detleft( sumlimits_{iin I}A_{i}right) =0.
          end{equation}




          Notice that I've snuck in one more little change into your formula: I've added
          the addend for $I=varnothing$. This addend usually doesn't contribute much,
          because $detleft( sumlimits_{iinvarnothing}A_{i}right) =detleft(
          0_{ntimes n}right) $
          is usually $0$... unless $n=0$, in which case it
          contributes $detleft( 0_{0times0}right) =1$ (keep in mind that there is
          only one $0times0$-matrix and its determinant is $1$), and the whole equality
          fails if this addend is missing.



          A first proof of Theorem 1 appears in (the solution to) Exercise 6.53 in my
          Notes on the combinatorial fundamentals of algebra, version of 10 January
          2019. (To obtain
          Theorem 1 from this exercise, set $G=left{ 1,2,ldots,rright} $.) The
          main idea of this proof is that Theorem 1 holds not only for determinants, but
          also for each of the $n!$ products that make up the determinant (assuming that
          you define the determinant of an $ntimes n$-matrix as a sum over the $n!$
          permutations); this is proven by interchanging summation signs and exploiting
          discrete "destructive interference" (i.e., the fact that if $G$ is a finite
          set and $R$ is a subset of $G$, then $sumlimits_{substack{Isubseteq
          G;\Rsubseteq I}}left( -1right) ^{leftvert Irightvert }=
          begin{cases}
          1, & text{if }R=G;\
          0, & text{if }Rneq G
          end{cases}
          $
          ).



          Let me now sketch a second proof of Theorem 1, which shows that it isn't
          really about determinants. It is about finite differences, in a slightly more
          general context than they are usually studied.



          Let $M$ be any $mathbb{K}$-module. The dual $mathbb{K}$-module $M^{vee
          }=operatorname{Hom}_{mathbb{K}}left( M,mathbb{K}right) $
          of
          $M$ consists of all $mathbb{K}$-linear maps $Mrightarrowmathbb{K}$. Thus,
          $M^{vee}$ is a $mathbb{K}$-submodule of the $mathbb{K}$-module
          $mathbb{K}^{M}$ of all maps $Mrightarrowmathbb{K}$. The $mathbb{K}
          $
          -module $mathbb{K}^{M}$ becomes a commutative $mathbb{K}$-algebra (we just
          define multiplication to be pointwise, i.e., the product $fg$ of two maps
          $f,g:Mrightarrowmathbb{K}$ sends each $min M$ to $fleft( mright)
          gleft( mright) inmathbb{K}$
          ).



          For any $dinmathbb{N}$, we let $M^{vee d}$ be the $mathbb{K}$-linear span
          of all elements of $mathbb{K}^{M}$ of the form $f_{1}f_{2}cdots f_{d}$ for
          $f_{1},f_{2},ldots,f_{d}in M^{vee}$. (For $d=0$, the only such element is
          the empty product $1$, so $M^{vee0}$ consists of the constant maps
          $Mrightarrowmathbb{K}$. Notice also that $M^{vee1}=M^{vee}$.) The elements
          of $M^{vee d}$ are called homogeneous polynomial functions of degree $d$ on
          $M$
          . The underlying idea is that if $M$ is a free $mathbb{K}$-module with a
          given basis, then the elements of $M^{vee d}$ are the maps $Mrightarrow
          mathbb{K}$
          that can be expressed as polynomials of the coordinate functions
          with respect to this basis; but the $mathbb{K}$-module $M^{vee d}$ makes
          perfect sense whether or not $M$ is free.



          We also set $M^{vee d}=0$ (the zero $mathbb{K}$-submodule of $mathbb{K}
          ^{M}$
          ) for $d<0$.



          For each $d in mathbb{Z}$, we define a $mathbb{K}$-submodule
          $M^{vee leq d}$ of $mathbb{K}^M$ by
          begin{equation}
          M^{vee leq d} = sumlimits_{i leq d} M^{vee i} .
          end{equation}

          The elements of $M^{vee leq d}$ are called (inhomogeneous) polynomial
          functions of degree $leq d$ on $M$
          .
          The submodules $M^{vee leq d}$ satisfy
          begin{equation}
          M^{vee leq d} M^{vee leq e} subseteq M^{vee leq left(d+eright)}
          end{equation}

          for any integers $d$ and $e$.



          For any $xin M$, we define the $mathbb{K}$-linear map $S_{x}:mathbb{K}
          ^{M}rightarrowmathbb{K}^{M}$
          by setting
          begin{equation}
          left( S_{x}fright) left( mright) =fleft( m+xright) qquadtext{for
          each }min Mtext{ and }finmathbb{K}^{M}.
          end{equation}

          This map $S_{x}$ is called a shift operator. It is an endomorphism of the
          $mathbb{K}$-algebra $mathbb{K}^{M}$ and preserves all the $mathbb{K}
          $
          -submodules $M^{vee leq d}$ (for all $dinmathbb{Z}$).



          Moreover, for any $xin M$, we define the $mathbb{K}$-linear map $Delta
          _{x}:mathbb{K}^{M}rightarrowmathbb{K}^{M}$
          by $Delta_{x}
          =operatorname*{id}-S_{x}$
          . Hence,
          begin{equation}
          left( Delta_{x}fright) left( mright) =fleft( mright) -fleft(
          m+xright) qquadtext{for each }min Mtext{ and }finmathbb{K}^{M}.
          end{equation}

          This map $Delta_{x}$ is called a difference operator. The following crucial
          fact shows that it "decrements the degree" of a polynomial function, similarly
          to how differentiation decrements the degree of a polynomial:




          Lemma 2. Let $x in M$. Then,
          $Delta_{x}M^{vee d}subseteq M^{vee leq left( d-1right)}$
          for each $dinmathbb{Z}$.




          [Let me sketch a proof of Lemma 2:



          Lemma 2 clearly holds for $d < 0$ (since $M^{vee d} = 0$ if $d < 0$).
          Hence, it remains to prove Lemma 2 for $d geq 0$.
          We shall prove this by induction on $d$.
          The induction base is the case $d = 0$, which is easy to
          check (indeed, each $f in M^{vee 0}$ is a constant map, and thus
          satisfies $Delta_x f = 0$; therefore,
          $Delta_{x}M^{vee 0} = 0 subseteq M^{vee leq left( 0-1right) }$).



          For the induction step, we fix some nonnegative integer $e$, and assume
          that Lemma 2 holds for $d = e$. We must then show that Lemma 2
          holds for $d = e+1$.



          We have assumed that Lemma 2 holds for $d = e$.
          In other words, we have
          $Delta_{x}M^{vee e}subseteq M^{vee leq left( e-1right)}$.



          Our goal is to show that Lemma 2
          holds for $d = e+1$. In other words, our goal is to show
          that
          $Delta_{x}M^{vee left(e+1right)}subseteq M^{vee leq e}$.



          But the $mathbb{K}$-module $M^{vee left(e+1right)}$ is
          spanned by maps of the form $fg$ with $fin M^{vee e}$ and
          $gin M^{vee}$ (since it is spanned by products of the
          form $f_1 f_2 cdots f_{e+1}$ with
          $f_1, f_2, ldots, f_{e+1} in M^{vee}$, but each such
          product can be rewritten in the form $fg$
          with $f = f_1 f_2 cdots f_e in M^{vee e}$ and
          $g = f_{e+1} in M^{vee}$).
          Hence, it suffices to show that
          $Delta_x left( fg right) in M^{vee leq e}$
          for each $fin M^{vee e}$ and
          $gin M^{vee}$.



          Let us first notice that if $g in M^{vee}$ is arbitrary,
          then $Delta_x g$ is the constant map whose value is
          $- gleft(xright)$
          (because each $m in M$ satisfies
          begin{equation}
          left(Delta_x gright) left(mright)
          = gleft(mright) - underbrace{gleft(m+xright)}_{substack{=gleft(mright) + gleft(xright)\ text{(since }g text{ is } mathbb{K}text{-linear)}}}
          = gleft(mright) - left(gleft(mright) + gleft(xright)right)
          = - gleft(xright)
          end{equation}

          ), and thus belongs to $M^{vee 0}$.
          In other words, $Delta_x M^{vee} subseteq M^{vee 0}$.



          For each $f in mathbb{K}^M$ and $g in mathbb{K}^M$,
          we have
          begin{align*}
          Delta_{x}left( fgright) & =left( operatorname*{id}-S_{x}right)
          left( fgright) qquadleft( text{since }Delta_{x}=operatorname*{id}
          -S_{x}right) \
          & =fg-underbrace{S_{x}left( fgright) }_{substack{=left( S_{x}fright)
          left( S_{x}gright) \text{(since }S_{x}text{ is an endomorphism}
          \text{of the }mathbb{K}text{-algebra }mathbb{K}^{M}text{)}}}\
          & =fg-left( S_{x}fright) left( S_{x}gright) =underbrace{left(
          f-S_{x}fright) }_{=left( operatorname*{id}-S_{x}right) f}g+left(
          S_{x}fright) underbrace{left( x-S_{x}gright) }_{=left(
          operatorname*{id}-S_{x}right) g}\
          & =left( underbrace{left( operatorname*{id}-S_{x}right) }_{=Delta
          _{x}}fright) g+left( S_{x}fright) left( underbrace{left(
          operatorname*{id}-S_{x}right) }_{=Delta_{x}}gright) \
          & =left( Delta_{x}fright) g+left(
          underbrace{S_{x}}_{substack{=operatorname*{id}-Delta_{x}\
          text{(since }Delta
          _{x}=operatorname*{id}-S_{x}text{)}}}fright) left( Delta_{x}gright)
          \
          & =left( Delta_{x}fright) g+underbrace{left( left(
          operatorname*{id}-Delta_{x}right) fright) }_{=f-Delta_{x}f}left(
          Delta_{x}gright) \
          & =left( Delta_{x}fright) g+left( f-Delta_{x}fright) left(
          Delta_{x}gright) \
          & =left( Delta_{x}fright) g+fleft( Delta_{x}gright) -left(
          Delta_{x}fright) left( Delta_{x}gright) .
          end{align*}

          Hence, for each $fin M^{vee e}$ and $gin M^{vee}$, we have
          begin{align*}
          Delta_{x}left( fgright) & =left( Delta_{x}underbrace{f}_{in
          M^{vee e}}right) underbrace{g}_{in M^{vee}}+underbrace{f}_{in M^{vee
          e}}left( Delta_{x}underbrace{g}_{in M^{vee}}right) -left( Delta
          _{x}underbrace{f}_{in M^{vee e}}right)
          left( Delta_{x}underbrace{g}_{in M^{vee}}right) \
          & inunderbrace{left( Delta_{x}M^{vee e}right) }_{subseteq M^{vee
          leqleft( e-1right) }}M^{vee}+M^{vee e}underbrace{left( Delta
          _{x}M^{vee}right) }_{subseteq M^{vee0}}-underbrace{left( Delta
          _{x}M^{vee e}right) }_{subseteq M^{veeleqleft( e-1right) }
          }underbrace{left( Delta_{x}M^{vee}right) }_{subseteq M^{vee0}}\
          & subsetequnderbrace{M^{veeleqleft( e-1right) }M^{vee}}_{subseteq
          M^{veeleq e}}+underbrace{M^{vee e}M^{vee0}}_{subseteq M^{vee
          e}subseteq M^{veeleq e}}-underbrace{M^{veeleqleft( e-1right)
          }M^{vee0}}_{subseteq M^{veeleqleft( e-1right) }subseteq M^{veeleq
          e}}\
          & subseteq M^{veeleq e}+M^{veeleq e}-M^{veeleq e}subseteq M^{veeleq
          e}.
          end{align*}

          This proves that $Delta_{x}left( M^{veeleft( e+1right) }right)
          subseteq M^{veeleq e}$
          , as we intended to prove.



          Thus, the induction step is complete, and Lemma 2 is proven.]



          The following fact follows by induction using Lemma 2:




          Corollary 3. Let $rinmathbb{N}$. Let $x_{1},x_{2},ldots,x_{r}$ be $r$
          elements of $M$. Then,
          begin{equation}
          Delta_{x_{1}}Delta_{x_{2}}cdotsDelta_{x_{r}}M^{vee d}subseteq
          M^{vee leq left( d-rright) }
          end{equation}

          for each $dinmathbb{Z}$.




          And as a consequence of this, we obtain the following:




          Corollary 4. Let $rinmathbb{N}$. Let $x_{1},x_{2},ldots,x_{r}$ be $r$
          elements of $M$. Then,
          begin{equation}
          Delta_{x_{1}}Delta_{x_{2}}cdotsDelta_{x_{r}}M^{vee d}=0
          end{equation}

          for each $dinmathbb{Z}$ satisfying $d<r$.




          [In fact, Corollary 4 follows immediately from Corollary 3, because $d<r$
          implies $M^{vee leq left( d-rright) }=0$.]



          To make use of Corollary 4, we want a more-or-less explicit expression for how
          $Delta_{x_{1}}Delta_{x_{2}}cdotsDelta_{x_{r}}$ acts on maps in
          $mathbb{K}^{M}$. This is the following fact:




Proposition 5. Let $r\in\mathbb{N}$. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Then,
\begin{equation}
\left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}f\right) \left( m\right) =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) \qquad\text{for each }m\in M\text{ and }f\in\mathbb{K}^{M}.
\end{equation}




[Proposition 5 can be proven by induction over $r$, where the induction step involves splitting the sum on the right hand side into the part with the $I$ that contain $r$ and the part with the $I$ that don't. But there is also a slicker argument, which needs some preparation. The maps $S_{x}\in\operatorname{End}_{\mathbb{K}}\left( \mathbb{K}^{M}\right) $ for different elements $x\in M$ commute; better yet, they satisfy the multiplication rule $S_{x}S_{y}=S_{x+y}$ (as can be checked immediately). Hence, by induction over $\left\vert I\right\vert $, we conclude that if $I$ is any finite set, and if $x_{i}$ is an element of $M$ for each $i\in I$, then
\begin{equation}
\prod\limits_{i\in I}S_{x_{i}}=S_{\sum\limits_{i\in I}x_{i}}\qquad\text{in the ring }\operatorname{End}_{\mathbb{K}}\left( \mathbb{K}^{M}\right) .
\end{equation}

I shall refer to this fact as the S-multiplication rule.
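A minimal numerical sketch of the S-multiplication rule (not part of the proof): here I take $M=\mathbb{Z}$ and maps $\mathbb{Z}\to\mathbb{Q}$ purely for illustration; the function names are ad hoc.

```python
# Sanity check of the S-multiplication rule S_x S_y = S_{x+y},
# taking M = Z and an arbitrary map f : M -> K for illustration.
def S(x):
    """Shift operator: (S_x f)(m) = f(m + x)."""
    return lambda f: (lambda m: f(m + x))

f = lambda m: 5 * m ** 2 - m + 1   # an arbitrary map M -> K
x, y = 4, -9
for m in range(-5, 6):
    assert S(x)(S(y)(f))(m) == S(x + y)(f)(m)
```

Both sides evaluate $f$ at $m+x+y$, which is exactly why the rule holds for any $M$.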



Now, let us prove Proposition 5. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Recall the well-known formula
\begin{equation}
\prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left( 1-a_{i}\right) =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }\prod\limits_{i\in I}a_{i},
\end{equation}

which holds whenever $a_{1},a_{2},\ldots,a_{r}$ are commuting elements of some ring. Applying this formula to $a_{i}=S_{x_{i}}$, we obtain
\begin{equation}
\prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left( \operatorname*{id}-S_{x_{i}}\right) =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }\prod\limits_{i\in I}S_{x_{i}}
\end{equation}

(since $S_{x_{1}},S_{x_{2}},\ldots,S_{x_{r}}$ are commuting elements of the ring $\operatorname{End}_{\mathbb{K}}\left( \mathbb{K}^{M}\right) $). Thus,
\begin{align*}
\Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}} & =\prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\underbrace{\Delta_{x_{i}}}_{\substack{=\operatorname*{id}-S_{x_{i}}\\\text{(by the definition of }\Delta_{x_{i}}\text{)}}}=\prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left( \operatorname*{id}-S_{x_{i}}\right) \\
& =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }\underbrace{\prod\limits_{i\in I}S_{x_{i}}}_{\substack{=S_{\sum\limits_{i\in I}x_{i}}\\\text{(by the S-multiplication rule)}}}=\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }S_{\sum\limits_{i\in I}x_{i}}.
\end{align*}

Hence, for each $m\in M$ and $f\in\mathbb{K}^{M}$, we obtain
\begin{align*}
& \left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}f\right) \left( m\right) \\
& =\left( \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }S_{\sum\limits_{i\in I}x_{i}}f\right) \left( m\right) \\
& =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }\underbrace{\left( S_{\sum\limits_{i\in I}x_{i}}f\right) \left( m\right) }_{\substack{=f\left( m+\sum\limits_{i\in I}x_{i}\right) \\\text{(by the definition of the shift operators)}}}\\
& =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) .
\end{align*}

Thus, Proposition 5 is proven.]
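Proposition 5 also lends itself to a direct numerical check. The sketch below takes $M=\mathbb{Z}$ (an illustrative choice, not part of the argument) and compares the iterated difference operators against the inclusion-exclusion sum:

```python
from itertools import combinations

# Check Proposition 5 for M = Z: applying Delta_{x_1} ... Delta_{x_r}
# agrees with the inclusion-exclusion sum over subsets I of {1,...,r}.
def delta(f, x):
    """Return Delta_x f = f - S_x f, where (S_x f)(m) = f(m + x)."""
    return lambda m: f(m) - f(m + x)

def iterated_delta(f, xs):
    for x in xs:
        f = delta(f, x)
    return f

def inclusion_exclusion(f, xs, m):
    r = len(xs)
    return sum((-1) ** len(I) * f(m + sum(xs[i] for i in I))
               for p in range(r + 1)
               for I in combinations(range(r), p))

f = lambda m: m ** 3 + 2 * m - 7   # any map Z -> Q will do
xs = [2, -5, 3, 11]
for m in range(-3, 4):
    assert iterated_delta(f, xs)(m) == inclusion_exclusion(f, xs, m)
```

Since $f$ here is a polynomial of degree $3$ and $r=4$, both sides are in fact $0$, in line with Corollary 4.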



          We can now combine Corollary 4 with Proposition 5 and obtain the following:




Corollary 6. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Let $d\in\mathbb{Z}$ be such that $d<r$. Let $f\in M^{\vee d}$ and $m\in M$. Then,
\begin{equation}
\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) =0.
\end{equation}

[Indeed, Corollary 6 follows from the computation
\begin{align*}
& \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) \\
& =\underbrace{\left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}f\right) }_{\substack{=0\\\text{(by Corollary 4, since }f\in M^{\vee d}\text{)}}}\left( m\right) \qquad\left( \text{by Proposition 5}\right) \\
& =0.
\end{align*}
]



Finally, let us prove Theorem 1. The matrix ring $\mathbb{K}^{n\times n}$ is a $\mathbb{K}$-module. Let $M$ be this $\mathbb{K}$-module $\mathbb{K}^{n\times n}$. For each $i,j\in\left\{ 1,2,\ldots,n\right\} $, we let $x_{i,j}$ be the map $M\rightarrow\mathbb{K}$ that sends each matrix $A\in M$ to its $\left( i,j\right) $-th entry; this map $x_{i,j}$ is $\mathbb{K}$-linear and thus belongs to $M^{\vee}$.

It is easy to see that the map $\det:\mathbb{K}^{n\times n}\rightarrow\mathbb{K}$ (sending each $n\times n$-matrix to its determinant) is a homogeneous polynomial function of degree $n$ on $M$; indeed, it can be represented in the commutative $\mathbb{K}$-algebra $\mathbb{K}^M$ as
\begin{equation}
\det=\sum\limits_{\sigma\in S_{n}}\left( -1\right) ^{\sigma}x_{1,\sigma\left( 1\right) }x_{2,\sigma\left( 2\right) }\cdots x_{n,\sigma\left( n\right) },
\end{equation}

where $S_{n}$ is the $n$-th symmetric group, and where $\left( -1\right) ^{\sigma}$ denotes the sign of a permutation $\sigma$. In other words, $\det\in M^{\vee n}$. Hence, Corollary 6 (applied to $x_{i}=A_{i}$, $d=n$, $f=\det$ and $m=0$) yields
\begin{equation}
\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }\det\left( 0+\sum\limits_{i\in I}A_{i}\right) =0.
\end{equation}

In other words,
\begin{equation}
\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }\det\left( \sum\limits_{i\in I}A_{i}\right) =0.
\end{equation}

This proves Theorem 1. $\blacksquare$
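As a sanity check of Theorem 1 (purely illustrative, not part of the proof), one can verify the alternating sum numerically with arbitrarily chosen integer matrices, here with $n=2$ and $r=3$:

```python
from itertools import combinations, permutations
import random

# Check: for r matrices of size n x n with n < r, the alternating sum
# over all subsets I of {1,...,r} of det(sum of the chosen matrices) is 0.
def sign(perm):
    """Sign of a permutation given as a tuple of 0..n-1 (inversion count)."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det(A):
    """Leibniz-formula determinant (exact for integer matrices)."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        term = sign(perm)
        for i in range(n):
            term *= A[i][perm[i]]
        total += term
    return total

def mat_sum(mats, n):
    """Entrywise sum of a list of n x n matrices (zero matrix if empty)."""
    return [[sum(A[i][j] for A in mats) for j in range(n)] for i in range(n)]

random.seed(0)
n, r = 2, 3
As = [[[random.randint(-9, 9) for _ in range(n)] for _ in range(n)]
      for _ in range(r)]
total = sum((-1) ** len(I) * det(mat_sum([As[i] for i in I], n))
            for p in range(r + 1) for I in combinations(range(r), p))
assert total == 0
```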







          edited Jan 10 at 2:43

























          answered Nov 19 '17 at 6:54









darij grinberg




































Given integers $n > m > 0$, let $[n]$ be shorthand for the set $\{1,\ldots,n\}$.

For any $t \in \mathbb{R}$ and $x_1, \ldots, x_n \in \mathbb{C}$, we have the identity

$$\prod_{k=1}^n (1 - e^{tx_k}) = \sum_{P \subseteq [n]} (-1)^{|P|} e^{t\sum_{k\in P} x_k}$$

Treat both sides as functions of $t$ and expand in powers of $t$. Since each factor $1 - e^{tx_k}$ on the LHS is divisible by $t$, the coefficient of $t^k$ on the LHS vanishes whenever $k < n$. Comparing coefficients of $t^m$ (and multiplying by $m!$), we obtain:

$$ 0 = \sum_{P\subseteq [n]} (-1)^{|P|} \left(\sum_{k\in P} x_k\right)^m\tag{*1}$$

Notice the RHS is a polynomial function in $x_1,\ldots,x_n$ with integer coefficients. Since it evaluates to $0$ for all $(x_1,\ldots,x_n) \in \mathbb{C}^n$, it is valid as a polynomial identity in $n$ indeterminates with integer coefficients. As a corollary, it is valid as an algebraic identity when $x_1, x_2, \ldots, x_n$ are elements taken from any commutative algebra.
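A quick brute-force check of $(*1)$ (with arbitrarily chosen integer values, so exactness is not an issue):

```python
from itertools import combinations

# Check (*1): for m < n, the alternating sum over all subsets P of [n]
# of (sum of the chosen x_k)^m vanishes.
n, m = 5, 3
xs = [3, -7, 2, 11, 5]
total = sum((-1) ** len(P) * sum(xs[k] for k in P) ** m
            for p in range(n + 1) for P in combinations(range(n), p))
assert total == 0
```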



Let $V$ be a vector space over $\mathbb{C}$ spanned by elements $\eta_1, \ldots, \eta_m$ and $\bar{\eta}_1,\ldots,\bar{\eta}_m$.

Let $\Lambda^{e}(V) = \bigoplus_{k=0}^m \Lambda^{2k}(V)$ be the 'even' portion of its exterior algebra. $\Lambda^{e}(V)$ itself is a commutative algebra.

For any $m \times m$ matrix $A$, let $\tilde{A} \in \Lambda^e(V)$ be the element defined by:

$$A = (a_{ij}) \quad\longrightarrow\quad \tilde{A} = \sum_{i=1}^m\sum_{j=1}^m a_{ij}\bar{\eta}_i \wedge \eta_j$$

Notice the $m$-fold power of $\tilde{A}$ satisfies an interesting identity:

$$\tilde{A}^m = \underbrace{\tilde{A} \wedge \cdots \wedge \tilde{A}}_{m \text{ times}} = \det(A)\, \omega
\quad\text{ where }\quad
\omega = m!\, \bar{\eta}_1 \wedge \eta_1 \wedge \cdots \wedge \bar{\eta}_m \wedge \eta_m\tag{*2}$$

Given any $n$-tuple of matrices $A_1, \ldots, A_n \in M_{m\times m}(\mathbb{C})$, if we substitute $x_k$ in $(*1)$ by $\tilde{A}_k$ and apply $(*2)$, we find

$$
\sum_{P\subseteq [n]} (-1)^{|P|} \left(\sum_{k\in P} \tilde{A}_k\right)^m
= \sum_{P\subseteq [n]} (-1)^{|P|} \det\left(\sum_{k\in P} A_k\right)\omega
= 0
$$
Extracting the coefficient in front of $\omega$, the desired identity follows:
$$\sum_{P\subseteq [n]} (-1)^{|P|} \det\left(\sum_{k\in P} A_k\right) = 0$$



















• A very beautiful result and very beautiful proof! – Jair Taylor Nov 15 '17 at 18:22
















              edited Nov 16 '17 at 15:24

























              answered Nov 15 '17 at 16:19









achille hui













HINT:

The determinant of an $n\times n$ matrix is a form of degree $n$. Forms come from multilinear forms.

Consider $M$ an abelian group. For $a \in M$, denote by $a^{[n]}$ the element $a\otimes a \otimes \ldots \otimes a\in M^{\otimes n}$. Let now $a_i\in M$, $i \in I$, be finitely many elements of $M$. Let's try to find
$$\sum_{J\subseteq I}(-1)^{|I|-|J|}\left(\sum_{i \in J} a_i\right)^{[n]}$$

Consider a product $a_{i_1}\otimes \ldots \otimes a_{i_n}$. It appears in the above sum with the coefficient
$$\sum_{J\subseteq K \subseteq I}(-1)^{|I| - |K|}$$ where $J=\{i_1, \ldots, i_n \}$. This is $0$ for $J\ne I$ and $1$ for $J=I$ (a Möbius function).

Therefore
$$\sum_{J\subseteq I}(-1)^{|I|-|J|}\left(\sum_{i \in J} a_i\right)^{[n]}=\sum_{\substack{\phi\colon \{1,\ldots ,n\}\to I\\\phi \text{ surjective}}}a_{\phi(1)}\otimes \ldots \otimes a_{\phi(n)}$$

Particular cases:

1. $|I|>n$: we get $0$, the desired result.

2. $|I|=n$: we get $\sum_{\substack{\phi\colon \{1,\ldots ,n\}\to I\\\phi \text{ bijective}}}a_{\phi(1)}\otimes \ldots \otimes a_{\phi(n)}$
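The Möbius-type coefficient above is easy to confirm by brute force; the following sketch (with an arbitrary four-element $I$, chosen only for illustration) enumerates all $K$ between $J$ and $I$:

```python
from itertools import combinations

# Check: for J a subset of I, the sum over K with J <= K <= I of
# (-1)^(|I|-|K|) equals 1 if J = I and 0 otherwise.
def coeff(J, I):
    rest = sorted(I - J)          # elements that may be added to J
    total = 0
    for s in range(len(rest) + 1):
        for extra in combinations(rest, s):
            K = J | set(extra)
            total += (-1) ** (len(I) - len(K))
    return total

I = {1, 2, 3, 4}
for p in range(len(I) + 1):
    for Jt in combinations(sorted(I), p):
        J = set(Jt)
        assert coeff(J, I) == (1 if J == I else 0)
```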

























                  answered Nov 26 '17 at 22:41









Orest Bucicovschi





























