The definition of Determinant in the spirit of algebra and geometry


























The concept of the determinant is a difficult topic to motivate. Textbooks use rather "strung out" introductions such as an axiomatic definition, the Laplace expansion, Leibniz's permutation formula, or the notion of signed volume.



Question: is the following a possible way to introduce the determinant?





The determinant is all about determining whether a given set of vectors is linearly independent, and a direct way to check this is to add scalar multiples of column vectors to reach the diagonal form:



$$\begin{pmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{pmatrix} \thicksim \begin{pmatrix}
d_1 & 0 & 0 & 0 \\
0 & d_2 & 0 & 0 \\
0 & 0 & d_3 & 0 \\
0 & 0 & 0 & d_4
\end{pmatrix}.$$



Now it's clear that the vectors are linearly independent if and only if every $d_i$ is nonzero, i.e. $\prod_{i=1}^n d_i \neq 0$. It may also happen that two columns are equal, in which case no diagonal form exists; we must then add a condition forcing the determinant to vanish (consistent with $\prod_{i=1}^n d_i = 0$), since the column vectors cannot be linearly independent.
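As an illustration (my own sketch, not part of the question), this column-elimination check can be written in a few lines; it assumes exact arithmetic and a nonzero pivot $a_{ii}$ at every step, so it covers only the generic case:

```python
from fractions import Fraction

def diagonalize_columns(A):
    """Reduce A to diagonal form using only the column operation
    c_j <- c_j + k * c_i  (property (1)).  Assumes every pivot
    A[i][i] is nonzero along the way (the generic case)."""
    A = [[Fraction(x) for x in row] for row in A]
    n = len(A)
    for i in range(n):
        for j in range(n):
            if j != i:
                k = -A[i][j] / A[i][i]      # kill the entry in row i, column j
                for r in range(n):          # column j += k * column i
                    A[r][j] += k * A[r][i]
    return A

D = diagonalize_columns([[2, 1], [1, 3]])
d_product = D[0][0] * D[1][1]               # product of the d_i
```

For this $2\times 2$ example the product of the diagonal entries is $5$, matching $2\cdot 3 - 1\cdot 1$; the columns are independent exactly when this product is nonzero.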



If we want a real-valued function that conveys this information, then we simply introduce an ad hoc function $\det:\mathbb{R}^{n \times n} \rightarrow \mathbb{R}$ with the following properties:




  1. $$\det (a_1,\ldots,a_i,\ldots,a_j,\ldots,a_n)=\det (a_1,\ldots,a_i,\ldots,k\cdot a_i+a_j,\ldots,a_n).$$


  2. $$\det(d_1\cdot e_1,\ldots,d_n\cdot e_n)=\prod_{i=1}^n d_i.$$


  3. $$\det (a_1,\ldots,a_i,\ldots,a_j,\ldots,a_n)=0 \quad \text{if} \quad a_i=a_j.$$



The determinant also offers information about how orthogonal a set of vectors is: if we normalize every vector to length 1, then the determinant of an orthogonal set of vectors is $\pm 1$. More generally, with the Gram-Schmidt process we can form an orthogonal set of vectors from the set $(a_1,\ldots, a_n)$, and the absolute value of the determinant is the volume of the parallelepiped spanned by the set of vectors.



Definition.
The volume of the parallelepiped spanned by the vectors $(a_1,\ldots, a_n)$ is $\operatorname{Vol}(a_1,\ldots, a_n)=\operatorname{Vol}(a_1,\ldots, a_{n-1})\cdot |a_{n}^{\bot}|=|a_{1}^{\bot}|\cdots |a_{n}^{\bot}|$, where $a_{i}^{\bot} \bot \operatorname{span}(a_1,\ldots, a_{i-1}).$
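This recursive volume can be computed directly via Gram-Schmidt. A small sketch of my own (the function name is mine, and linear independence of the inputs is assumed so that no zero division occurs):

```python
import math

def parallelepiped_volume(vectors):
    """Vol(a_1, ..., a_n) = |a_1^perp| * ... * |a_n^perp|, where a_i^perp
    is the component of a_i orthogonal to span(a_1, ..., a_{i-1}).
    Assumes the vectors are linearly independent."""
    basis = []                      # the orthogonal vectors built so far
    vol = 1.0
    for a in vectors:
        perp = list(a)
        for b in basis:             # subtract the projection onto each b
            coef = sum(x * y for x, y in zip(a, b)) / sum(y * y for y in b)
            perp = [p - coef * y for p, y in zip(perp, b)]
        basis.append(perp)
        vol *= math.sqrt(sum(p * p for p in perp))
    return vol

vol = parallelepiped_volume([(1, 0, 0), (1, 2, 0), (1, 1, 3)])
```

For these three vectors the result is $6$, agreeing with the absolute value of the determinant of the matrix having them as rows (triangular, with diagonal $1, 2, 3$).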





From the preceding definition of the determinant we can infer the multilinearity property:



$$[a_1,\ldots,c_1 \cdot u+c_2 \cdot v,\ldots,a_n]\thicksim \operatorname{diag}[d_1,\ldots,c_1 \cdot d'_i+c_2 \cdot d''_i ,\ldots,d_n],$$ so $$\det[a_1,\ldots,c_1 \cdot u+c_2 \cdot v,\ldots,a_n]=\prod_{j=1,\, j\neq i}^n d_j\,(c_1 \cdot d'_i+c_2 \cdot d''_i)$$ $$=c_1\det(\operatorname{diag}[d_1,\ldots, d'_i,\ldots,d_n])+c_2\det(\operatorname{diag}[d_1,\ldots, d''_i,\ldots,d_n])$$ $$=c_1\det[a_1,\ldots,u,\ldots,a_n]+c_2\det[a_1,\ldots, v,\ldots,a_n].$$
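Multilinearity in a single column is easy to spot-check numerically; a tiny $2\times 2$ illustration of mine (not from the question):

```python
def det_cols(a, b):
    """2x2 determinant of the matrix with columns a and b."""
    return a[0] * b[1] - b[0] * a[1]

a, u, v = (1, 2), (3, 4), (5, 6)
c1, c2 = 2, -3
mixed = tuple(c1 * x + c2 * y for x, y in zip(u, v))   # c1*u + c2*v

lhs = det_cols(a, mixed)                               # det[a, c1*u + c2*v]
rhs = c1 * det_cols(a, u) + c2 * det_cols(a, v)
```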



Note that this multilinearity together with property $(1)$ yields property $(2)$, so we know from the literature that the determinant function $\det:\mathbb{R}^{n \times n} \rightarrow \mathbb{R}$ actually exists and is unique.



Now, by property $(1)$, matrices $A$ and $B$ with linearly independent columns admit the factorizations $A=L_a D_a$ and $B=L_b D_b$, where $D_a$ and $D_b$ are diagonal matrices and $L_a$ and $L_b$ are products of column-addition matrices. Note that if we multiply a diagonal matrix $D$ by a column-addition matrix $L$, we can always "counter" it with another column-addition matrix $L'$, i.e. $L'DL=D.$



Now we can infer the multiplication property:



$\det(A)=\det(L_a D_a)=\det(D_a)=\prod_{i=1}^n \alpha_i$ and $\det(B)=\det(L_b D_b)=\det(D_b)=\prod_{i=1}^n \beta_i$.



Therefore



$\det(AB)=\det(L_a D_a L_b D_b)=\det(D_a L_b D_b)=\det(L'_b D_a L_b D_b)=\det(D_a D_b)
=\prod_{i=1}^n \alpha_i \beta_i=\det(A)\det(B).$
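As a sanity check (an illustration only, not part of the argument), the multiplicative property can be confirmed numerically in the $2\times 2$ case:

```python
def det2(M):
    """2x2 determinant ad - bc."""
    (a, b), (c, d) = M
    return a * d - b * c

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 3]]
B = [[0, 4], [5, 6]]
product_det = det2(matmul(A, B))   # should equal det2(A) * det2(B)
```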





This approach to the determinant works equally well whether we begin with the volume of a parallelepiped (the geometric approach) or with invertibility (the algebraic approach). I was motivated by Chapter 5 of the book Linear Algebra and Its Applications by Lax:



Rather than start with a formula for the determinant, we shall deduce it from the properties forced on it by the geometric properties of signed volume. This approach to determinants is due to E. Artin.





  1. $\det (a_1,\ldots,a_n)=0$, if $a_i=a_j$, $i\neq j.$


  2. $\det (a_1,\ldots,a_n)$ is a multilinear function of its arguments, in the sense that if all $a_i$, $i \neq j$, are fixed, $\det$ is a linear function of the remaining argument $a_j.$

  3. $\det(e_1,\ldots,e_n)=1.$


















  • Have you posted this in the Mathematics Educators SE? You might get better answers there. – SplitInfinity, Mar 14 '16 at 13:29

  • You're not adding columns but rather diagonalizing the matrix (which cannot always be done for matrices with non-zero determinant). – Justin Benfield, Mar 14 '16 at 13:30

  • See Down with Determinants!: axler.net/DwD.html. – Martín-Blas Pérez Pinilla, Mar 14 '16 at 13:34

  • @Juho The problem is that there are matrices, both with zero determinant and with nonzero determinant, that cannot be expressed as a diagonal matrix (non-diagonal entries all $0$). – Justin Benfield, Mar 14 '16 at 13:53

  • I think the main issue with the plan, however, is that it is not at all clear that the determinant of a matrix found in this way will be unique. – Henning Makholm, Mar 14 '16 at 14:20
















Tags: linear-algebra, matrices, soft-question, determinant






edited Jan 16 at 7:29 by Hulkster

asked Mar 14 '16 at 13:27 by Hulkster








5 Answers


















That seems quite opaque: it's a way of computing a quantity rather than telling what exactly it is, or even motivating it. It also leaves completely open the question of why such a function exists and is well-defined. The properties you give are sufficient if you're trying to put a matrix in upper-triangular form, but what about other computations? It also gives no justification for one of the most important properties of the determinant, that $\det(ab) = \det a \det b$.



I think the best way to define the determinant is to introduce the wedge product $\Lambda^* V$ of a finite-dimensional space $V$. Given that, any map $f:V \to V$ induces a map $\bar{f}:\Lambda^n V \to \Lambda^n V$, where $n = \dim V$. But $\Lambda^n V$ is a $1$-dimensional space, so $\bar{f}$ is just multiplication by a scalar (independent of a choice of basis); that scalar is by definition exactly $\det f$. Then, for example, we get the condition that $\det f \neq 0$ iff $f$ is an isomorphism for free: for a basis $v_1, \dots, v_n$ of $V$, we have $\det f \neq 0$ iff $\bar{f}(v_1\wedge \cdots \wedge v_n) = f(v_1) \wedge \cdots \wedge f(v_n) \neq 0$; that is, iff the $f(v_i)$ are linearly independent. Furthermore, since $h = fg$ has $\bar{h} = \bar{f}\bar{g}$, we have $\det(fg) = \det f \det g$. The other properties follow similarly. It requires a bit more sophistication than is usually assumed in a linear algebra class, but it's the first construction of $\det$ I've seen that's motivated and transparently explains what's otherwise a list of arbitrary properties.






  • I beg to disagree. This is arguably the most refined, the most elegant, or even the most useful definition of the determinant, but I think it is also the least motivated one. Why on earth is there any need to introduce exterior algebra in order to define the volume of a parallelepiped, or the proportional change in volume due to the application of a linear map? How can one see the merits of such a definition at the outset? Motivation-wise, this definition makes no sense at all. – user1551, Mar 21 '16 at 7:39

  • The motivation is the idea that $\det f = 0$ is equivalent to singularity, which is much more interesting and important than computing the volume of a parallelepiped (and still makes sense over a field $k \neq \mathbb{R}$). The volume form on a general manifold is defined in terms of the same exterior product, so it follows automatically. (This requires more analysis than is usually assumed in an introductory class, but it's not like computing the volume of a parallelepiped in arbitrary dimensions is really a compelling or interesting motivation anyway.) – anomaly, Mar 21 '16 at 14:18

  • ...also, in the volume definition, you're still stuck showing that the determinant has the laundry list of properties in the OP, which is not immediate. If you assume that $\det$ is well-defined and invariant under conjugation (or similar behavior), then you can interpret $\det g$ for $g$ upper-triangular as the volume of a parallelepiped and proceed from there. But there's no obvious way of showing the geometric meaning of $\det$ for an arbitrary matrix, and it's not clear from the given three conditions why $\det$ should be multiplicative, one of its most important properties. – anomaly, Mar 21 '16 at 14:28

  • I think we are not on the same page about the meaning of the word "motivation". You seem to be talking about whether a certain goal is worth pursuing, but in the context of the OP and my previous comment, we are talking about whether it's convincing at the outset that a certain definition or a certain setup would lead us to our desired goal. In this sense the use of exterior algebra is rather unmotivated. – user1551, Mar 26 '16 at 13:18

  • As Paul Halmos put it (on p. 53, Finite-Dimensional Vector Spaces): "The reader might well charge that the discussion was not very strongly motivated. The complete motivation cannot be contained in this book; the justification for studying multilinear algebra is the wide applicability of the subject. The only application that we shall make is to the theory of determinants (which, to be sure, could be treated by more direct but less elegant methods, involving much greater dependence on arbitrary choices of bases)." – user1551, Mar 26 '16 at 13:19



















The way I teach determinants to my students is to start with the case $n=2$, and to use the complex numbers and/or trigonometry in order to show that, for vectors $(a,b), (c,d)$ in the plane, the quantity
$$ad-bc=\|(a,b)\|\cdot \|(c,d)\| \sin \theta$$
is the signed area between $(a,b)$ and $(c,d)$ (in this order).
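The identity can be spot-checked numerically (a toy check I'm adding for illustration), with $\theta$ measured as the counterclockwise angle from $(a,b)$ to $(c,d)$:

```python
import math

def signed_area(u, v):
    """ad - bc for u = (a, b), v = (c, d)."""
    return u[0] * v[1] - u[1] * v[0]

def area_via_trig(u, v):
    """||u|| * ||v|| * sin(theta), theta the angle from u to v."""
    theta = math.atan2(v[1], v[0]) - math.atan2(u[1], u[0])
    return math.hypot(*u) * math.hypot(*v) * math.sin(theta)

u, v = (3.0, 0.0), (1.0, 2.0)
```

Here both expressions give $6$: swapping $u$ and $v$ flips the sign, which is the "signed" in signed area.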



Then, using the vector product and its properties (which we have seen before coming to the topic of determinants in full generality), we check that $3\times 3$ determinants carry the meaning of signed volumes.



The next step is to introduce determinants as alternating multilinear functions. We have seen examples of bilinear maps (inner products), trilinear maps such as
$$(u,v,w)\mapsto (u\times v)\bullet w,$$
and the quadrilinear maps $$(a,b,c,d)\mapsto (a\bullet c) (b\bullet d)-(b\bullet c) (a\bullet d),$$
$$(a,b,c,d)\mapsto (a \times b)\bullet (c\times d).$$



Now, when explaining multilinearity we did emphasise that the equality of the last two examples can be proven by checking it only in the case where $a,b,c,d$ are vectors of the canonical basis.



Then the time comes to define the determinant of $n$ vectors in $\mathbb{R}^n$, which is a new example of an $n$-linear, alternating function. They check that the vector space of such maps indeed has dimension $\binom{n}{n}=1$. The students thus learn that the determinant is essentially the only possible such function, up to a multiple, in the same way they saw that more general multilinear maps depend exclusively on their values on vectors of a chosen basis (say, the canonical basis in our case).



Although I learnt to prove facts such as $\det(AB)=\det(A) \det(B)$ by strict functoriality, in class we do define the map
$$L(X_1, \ldots , X_n)=\det(AX_1, \ldots , AX_n),$$ which by uniqueness is a constant multiple of the determinant function $T(X_1, \ldots , X_n)=\det(X_1, \ldots, X_n),$ and we compute the constant by evaluating at the identity matrix, i.e. $X_i=e_i.$



Thus $\det(AB)=\det(A)\det(B).$



It is in T. W. Korner's book Vectors, Pure and Applied that one can see a construction that uses elementary matrices and is rigorous. The OP can check Korner's book for a nice, slightly more down-to-earth exposition.



In op. cit. one can see how Korner uses the fact that an invertible matrix can be decomposed as a product of elementary matrices to obtain the formula $\det(AB)=\det(A)\det(B).$



Note: I have been deliberately brief in my exposition, just so as not to repeat too much stuff that was already included in other answers.



























    1













    $\det$ is the only multilinear alternating map such that $\det I = 1$. Properties (2) and (3), combined with the following property, would define $\det$ uniquely:
    $$ \det(a_1,\dots,u,\dots,a_n) + \det(a_1,\dots,\lambda v,\dots,a_n) = \det(a_1,\dots,u + \lambda v,\dots,a_n).
    $$
    The only thing missing from your definition is linearity.
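To see how properties (1) and (2) already determine the value computationally, here is a small sketch (my own illustration, not from this answer; `det_by_shears` is a hypothetical helper name). It evaluates the determinant using only shears, i.e. property (1), reducing to triangular form, from which further shears reach the diagonal form of property (2):

```python
from fractions import Fraction

def det_by_shears(M):
    """Determinant of a square matrix using only the shear property (1):
    adding a scalar multiple of one column to another leaves the value
    unchanged.  Reduces to lower-triangular form; property (2) (after
    clearing the remaining off-diagonal entries with more shears, which
    does not touch the diagonal) gives the product of the diagonal.
    Exact arithmetic via Fraction."""
    A = [[Fraction(x) for x in row] for row in M]
    n = len(A)
    for i in range(n):
        # Zero pivot: repair it by adding a later column with a nonzero
        # entry in row i (a determinant-preserving shear).
        if A[i][i] == 0:
            for j in range(i + 1, n):
                if A[i][j] != 0:
                    for r in range(n):
                        A[r][i] += A[r][j]
                    break
            else:
                return Fraction(0)  # columns i..n-1 are dependent: singular
        # Clear row i to the right of the pivot with column shears.
        for j in range(i + 1, n):
            factor = A[i][j] / A[i][i]
            for r in range(n):
                A[r][j] -= factor * A[r][i]
    prod = Fraction(1)
    for i in range(n):
        prod *= A[i][i]
    return prod
```

Note that a column swap is never needed: the zero-pivot repair is itself a shear, which is why the sign comes out right, e.g. `det_by_shears([[0, 1], [1, 0]])` gives $-1$.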



















    • $begingroup$
      Thanks for answering. Do you believe that it's impossible to derive multilinearity from properties $(1)$, $(2)$ and $(3)$?
      $endgroup$
      – Hulkster
      Mar 21 '16 at 10:17






    • 2




      $begingroup$
      @Juho Yes, but it is difficult to prove.
      $endgroup$
      – Henricus V.
      Mar 21 '16 at 13:56










    • $begingroup$
      But if the properties $(1)$, $(2)$ and $(3)$ give correct determinant for every matrix, then isn't the multilinearity an automatic additional property?
      $endgroup$
      – Hulkster
      Mar 21 '16 at 14:37










    • $begingroup$
      @Juho The current definition of $\det$ satisfies (1), (2) and (3), but there is no guarantee that no other, nonlinear function also satisfies them.
      $endgroup$
      – Henricus V.
      Mar 21 '16 at 14:49










    • $begingroup$
      @Juho I think your definition is equivalent to Artin's, but Artin's definition makes the proofs of several properties of the determinant much easier.
      $endgroup$
      – user1551
      Mar 26 '16 at 13:02



















    1













    The geometric meaning of the determinant of, say, a 3 by 3 matrix is the (signed) volume of the parallelepiped spanned by the three column vectors (alternatively the three row vectors). This generalizes the (signed) area of the parallelogram spanned by the two column vectors of a 2 by 2 matrix.



    At the next stage, to pursue the geometric definition you would have to clarify the meaning of "signed" above. The naive definition of volume is always positive whereas the determinant could be negative, so there is some explaining to do in terms of orientations.



    The route most often chosen both by instructors and textbook writers is the algebraic one where one can write down a magic formula and, boom! the determinant has been defined. This is fine if you want to get through a certain amount of material required by the course, but pedagogically this may not be the best approach.



    Ultimately a combination of the geometry and the algebra is required to explain this concept properly. It connects to more advanced topics like exterior algebras but that's the next stage already.
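The signed area in the $2 \times 2$ case is easy to experiment with; a minimal sketch (my own addition, not part of the answer) using NumPy:

```python
import numpy as np

u = np.array([3.0, 0.0])
v = np.array([1.0, 2.0])

# Determinant with u, v as columns: signed area of the parallelogram.
area_uv = np.linalg.det(np.column_stack([u, v]))  # positive: (u, v) counter-clockwise
area_vu = np.linalg.det(np.column_stack([v, u]))  # negative: orientation reversed

assert np.isclose(area_uv, 6.0) and np.isclose(area_vu, -6.0)
```

Swapping the two columns flips the sign but not the magnitude, which is exactly the orientation issue discussed above.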



























      -1













      A rationale I could give for the determinant function is that it generalises a property of powers, i.e. we know that for any scalars $x,a,b,c,\ldots$ it holds that
      $x^{a + b + c + \cdots} = x^a x^b x^c\cdots$

      But let's assume that $a,b,c,\ldots$ are actually the diagonal entries of a matrix $\mathbf{A}$; then it holds that $x^{\operatorname{trace}(\mathbf{A})} = \det(x^{\mathbf{A}})$, with $x^{\mathbf{A}}$ the matrix exponential (of base $x$).
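The identity can be checked numerically for a diagonalizable matrix; in this sketch (my own, not from the answer) `mat_power` is a hypothetical helper computing $x^{\mathbf{A}}$ via an eigendecomposition:

```python
import numpy as np

def mat_power(x, A):
    # x**A = V diag(x**lambda_i) V^{-1} for diagonalizable A (hypothetical helper).
    w, V = np.linalg.eig(A)
    return V @ np.diag(x ** w) @ np.linalg.inv(V)

x = 2.0
A = np.array([[1.0, 0.5],
              [0.0, 3.0]])

# det(x^A) should equal x^{trace(A)}: here 2^(1+3) = 16.
assert np.isclose(np.linalg.det(mat_power(x, A)), x ** np.trace(A))
```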















      • 4




        $begingroup$
        what is the matrix power?
        $endgroup$
        – ASKASK
        Mar 21 '16 at 0:51













      5 Answers









      6





      +50








      That seems quite opaque: It's a way of computing a quantity rather than telling what exactly it is or even motivating it. It also leaves completely open the question of why such a function exists and is well-defined. The properties you give are sufficient if you're trying to put a matrix in upper-triangular form, but what about other computations? It also gives no justification for one of the most important properties of the determinant, that $\det(ab) = \det a \det b$.



      I think the best way to define the determinant is to introduce the wedge product $\Lambda^* V$ of a finite-dimensional space $V$. Given that, any map $f:V \to V$ induces a map $\bar{f}:\Lambda^n V \to \Lambda^n V$, where $n = \dim V$. But $\Lambda^n V$ is a $1$-dimensional space, so $\bar{f}$ is just multiplication by a scalar (independent of a choice of basis); that scalar is by definition exactly $\det f$. Then, for example, we get the condition that $\det f \neq 0$ iff $f$ is an isomorphism for free: For a basis $v_1, \dots, v_n$ of $V$, we have $\det f \neq 0$ iff $\bar{f}(v_1\wedge \cdots \wedge v_n) = f(v_1) \wedge \cdots \wedge f(v_n) \neq 0$; that is, iff the $f(v_i)$ are linearly independent. Furthermore, since $h = fg$ has $\bar{h} = \bar{f}\bar{g}$, we have $\det(fg) = \det f \det g$. The other properties follow similarly. It requires a bit more sophistication than is usually assumed in a linear algebra class, but it's the first construction of $\det$ I've seen that's motivated and transparently explains what's otherwise a list of arbitrary properties.
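As a concrete instance of this definition (my own worked example, not from the answer), take $n=2$ and write $f(e_1)=a e_1 + c e_2$, $f(e_2)=b e_1 + d e_2$; the usual $2\times 2$ formula drops out of the top exterior power:

```latex
\begin{aligned}
\bar{f}(e_1 \wedge e_2) = f(e_1) \wedge f(e_2)
  &= (a e_1 + c e_2) \wedge (b e_1 + d e_2) \\
  &= ad\,(e_1 \wedge e_2) + cb\,(e_2 \wedge e_1)
   = (ad - bc)\,(e_1 \wedge e_2),
\end{aligned}
```

using $e_i \wedge e_i = 0$ and $e_2 \wedge e_1 = -\,e_1 \wedge e_2$; hence $\det f = ad - bc$.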















      • 2




        $begingroup$
        I beg to disagree. This is arguably the most refined, the most elegant or even the most useful definition of determinant, but I think this is also the least motivated definition. Why on earth is there any need to introduce exterior algebra in order to define the volume of a parallelepiped or the proportional change in volume due to the application of a linear map? How can one see the merits of such a definition at the outset? Motivation-wise, this definition makes no sense at all.
        $endgroup$
        – user1551
        Mar 21 '16 at 7:39








      • 4




        $begingroup$
        The motivation is the idea that $\det f = 0$ is equivalent to singularity, which is much more interesting and important than computing the volume of a parallelepiped (and still makes sense over a field $k \neq \mathbb{R}$). The volume form on a general manifold is defined in terms of the same exterior product, so it follows automatically. (This requires more analysis than is usually assumed in an introductory class, but it's not like computing the volume of a parallelepiped in arbitrary dimensions is really a compelling or interesting motivation anyway.)
        $endgroup$
        – anomaly
        Mar 21 '16 at 14:18








      • 3




        $begingroup$
        ...also, in the volume definition, you're still stuck showing that the determinant has the laundry list of properties in the OP, which is not immediate. If you assume that $\det$ is well-defined and invariant under conjugation (or similar behavior), then you can interpret $\det g$ for $g$ upper-triangular as the volume of a parallelepiped and proceed from there. But there's not an obvious way of showing the geometric meaning of $\det$ for an arbitrary matrix, and it's not clear from the given three conditions why $\det$ should be multiplicative, one of its most important properties.
        $endgroup$
        – anomaly
        Mar 21 '16 at 14:28






      • 1




        $begingroup$
        I think we are not on the same page about the meaning of the word "motivation". You seem to be talking about whether a certain goal is worth pursuing, but in the context of the OP and my previous comment, we are talking about whether it's convincing at the outset that a certain definition or a certain setup would lead us to our desired goal. In this sense the use of exterior algebra is rather unmotivated.
        $endgroup$
        – user1551
        Mar 26 '16 at 13:18










      • $begingroup$
        As Paul Halmos put it (on p.53, Finite-Dimensional Vector Spaces): "The reader might well charge that the discussion was not very strongly motivated. The complete motivation cannot be contained in this book; the justification for studying multilinear algebra is the wide applicability of the subject. The only application that we shall make is to the theory of determinants (which, to be sure, could be treated by more direct but less elegant methods, involving much greater dependence on arbitrary choices of bases)".
        $endgroup$
        – user1551
        Mar 26 '16 at 13:19
















      edited Mar 21 '16 at 15:27

























      answered Mar 21 '16 at 0:25









      anomaly








      • 2




        $begingroup$
        I beg to disagree. This is arguably the most refined, the most elegant or even the most useful definition of determinant, but I think this is also the least motivated definition. Why on earth is there any need to introduce exterior algebra in order to define the volume of a parallelepiped or the proportional change in volume due to the application of a linear map? How can one see the merits of such a definition at the outset? Motivation-wise, this definition makes no sense at all.
        $endgroup$
        – user1551
        Mar 21 '16 at 7:39








      • 4




        $begingroup$
        The motivation is the idea that $det f = 0$ is equivalent to singularity, which is much more interesting and important that computing the volume of a parallelepiped (and still makes sense over a field $knot = mathbb{R}$). The volume form on a space on general manifold is defined in terms of the same exterior product, so it follows automatically. (This requires more analysis than is usually assumed in an introductory class, but it's not like computing the volume of a parallelepiped in arbitrary dimensions is really a compelling or interesting motivation anyway.)
        $endgroup$
        – anomaly
        Mar 21 '16 at 14:18








      • 3




        $begingroup$
        ...also, in the volume definition, you're still stuck showing that the determinant has the laundry list of properties in the OP, which is not immediate. If you assume that $det$ is well-defined and invariant under conjugation (or similar behavior), then you can interpret $det g$ for $g$ upper-triangular as the volume of a parallelepiped and proceed from there. But there's not an obvious way of showing the geometric meaning of $det$ for an arbitrary matrix, and it's not clear from the given three conditions why $det$ should be multiplicative, one of its most important properties.
        $endgroup$
        – anomaly
        Mar 21 '16 at 14:28






      • 1




        $begingroup$
        I think we are not on the same page about the meaning of the word "motivation". You seem to be talking about whether a certain goal is worth pursuing, but in the context of the OP and my previous comment, we are talking about whether it's convincing at the outset that a certain definition or a certain setup would lead us to our desired goal. In this sense the use of exterior algebra is rather unmotivated.
        $endgroup$
        – user1551
        Mar 26 '16 at 13:18










      • $begingroup$
        As Paul Halmos put it (on p.53, Finite-Dimensional Vector Spaces): "The reader might well charge that the discussion was not very strongly motivated. The complete motivation cannot be contained in this book; the justification for studying multilinear algebra is the wide applicability of the subject. The only application that we shall make is to the theory of determinants (which, to be sure, could be treated by more direct but less elegant methods, involving much greater dependence on arbitrary choices of bases)".
        $endgroup$
        – user1551
        Mar 26 '16 at 13:19














      • 2




        $begingroup$
        I beg to disagree. This is arguably the most refined, the most elegant or even the most useful definition of determinant, but I think this is also the least motivated definition. Why on earth is there any need to introduce exterior algebra in order to define the volume of a parallelepiped or the proportional change in volume due to the application of a linear map? How can one see the merits of such a definition at the outset? Motivation-wise, this definition makes no sense at all.
        $endgroup$
        – user1551
        Mar 21 '16 at 7:39








      • 4




        $begingroup$
        The motivation is the idea that $det f = 0$ is equivalent to singularity, which is much more interesting and important that computing the volume of a parallelepiped (and still makes sense over a field $knot = mathbb{R}$). The volume form on a space on general manifold is defined in terms of the same exterior product, so it follows automatically. (This requires more analysis than is usually assumed in an introductory class, but it's not like computing the volume of a parallelepiped in arbitrary dimensions is really a compelling or interesting motivation anyway.)
        $endgroup$
        – anomaly
        Mar 21 '16 at 14:18








      • 3




        $begingroup$
        ...also, in the volume definition, you're still stuck showing that the determinant has the laundry list of properties in the OP, which is not immediate. If you assume that $det$ is well-defined and invariant under conjugation (or similar behavior), then you can interpret $det g$ for $g$ upper-triangular as the volume of a parallelepiped and proceed from there. But there's not an obvious way of showing the geometric meaning of $det$ for an arbitrary matrix, and it's not clear from the given three conditions why $det$ should be multiplicative, one of its most important properties.
        $endgroup$
        – anomaly
        Mar 21 '16 at 14:28






      • 1




        $begingroup$
        I think we are not on the same page about the meaning of the word "motivation". You seem to be talking about whether a certain goal is worth pursuing, but in the context of the OP and my previous comment, we are talking about whether it's convincing at the outset that a certain definition or a certain setup would lead us to our desired goal. In this sense the use of exterior algebra is rather unmotivated.
        $endgroup$
        – user1551
        Mar 26 '16 at 13:18










      • $begingroup$
        As Paul Halmos put it (on p.53, Finite-Dimensional Vector Spaces): "The reader might well charge that the discussion was not very strongly motivated. The complete motivation cannot be contained in this book; the justification for studying multilinear algebra is the wide applicability of the subject. The only application that we shall make is to the theory of determinants (which, to be sure, could be treated by more direct but less elegant methods, involving much greater dependence on arbitrary choices of bases)".
        $endgroup$
        – user1551
        Mar 26 '16 at 13:19








      2




      2




      $begingroup$
      I beg to disagree. This is arguably the most refined, the most elegant or even the most useful definition of determinant, but I think this is also the least motivated definition. Why on earth is there any need to introduce exterior algebra in order to define the volume of a parallelepiped or the proportional change in volume due to the application of a linear map? How can one see the merits of such a definition at the outset? Motivation-wise, this definition makes no sense at all.
      $endgroup$
      – user1551
      Mar 21 '16 at 7:39






      $begingroup$
      I beg to disagree. This is arguably the most refined, the most elegant or even the most useful definition of determinant, but I think this is also the least motivated definition. Why on earth is there any need to introduce exterior algebra in order to define the volume of a parallelepiped or the proportional change in volume due to the application of a linear map? How can one see the merits of such a definition at the outset? Motivation-wise, this definition makes no sense at all.
      $endgroup$
      – user1551
      Mar 21 '16 at 7:39






      4




      4




      $begingroup$
      The motivation is the idea that $det f = 0$ is equivalent to singularity, which is much more interesting and important that computing the volume of a parallelepiped (and still makes sense over a field $knot = mathbb{R}$). The volume form on a space on general manifold is defined in terms of the same exterior product, so it follows automatically. (This requires more analysis than is usually assumed in an introductory class, but it's not like computing the volume of a parallelepiped in arbitrary dimensions is really a compelling or interesting motivation anyway.)
      $endgroup$
      – anomaly
      Mar 21 '16 at 14:18






      $begingroup$
      The motivation is the idea that $det f = 0$ is equivalent to singularity, which is much more interesting and important that computing the volume of a parallelepiped (and still makes sense over a field $knot = mathbb{R}$). The volume form on a space on general manifold is defined in terms of the same exterior product, so it follows automatically. (This requires more analysis than is usually assumed in an introductory class, but it's not like computing the volume of a parallelepiped in arbitrary dimensions is really a compelling or interesting motivation anyway.)
      $endgroup$
      – anomaly
      Mar 21 '16 at 14:18






      3




      3




      $begingroup$
      ...also, in the volume definition, you're still stuck showing that the determinant has the laundry list of properties in the OP, which is not immediate. If you assume that $det$ is well-defined and invariant under conjugation (or similar behavior), then you can interpret $det g$ for $g$ upper-triangular as the volume of a parallelepiped and proceed from there. But there's not an obvious way of showing the geometric meaning of $det$ for an arbitrary matrix, and it's not clear from the given three conditions why $det$ should be multiplicative, one of its most important properties.
      $endgroup$
      – anomaly
      Mar 21 '16 at 14:28




      $begingroup$
      ...also, in the volume definition, you're still stuck showing that the determinant has the laundry list of properties in the OP, which is not immediate. If you assume that $det$ is well-defined and invariant under conjugation (or similar behavior), then you can interpret $det g$ for $g$ upper-triangular as the volume of a parallelepiped and proceed from there. But there's not an obvious way of showing the geometric meaning of $det$ for an arbitrary matrix, and it's not clear from the given three conditions why $det$ should be multiplicative, one of its most important properties.
      $endgroup$
      – anomaly
      Mar 21 '16 at 14:28




      1




      1




      $begingroup$
      I think we are not on the same page about the meaning of the word "motivation". You seem to be talking about whether a certain goal is worth pursuing, but in the context of the OP and my previous comment, we are talking about whether it's convincing at the outset that a certain definition or a certain setup would lead us to our desired goal. In this sense the use of exterior algebra is rather unmotivated.
      $endgroup$
      – user1551
      Mar 26 '16 at 13:18




      $begingroup$
      I think we are not on the same page about the meaning of the word "motivation". You seem to be talking about whether a certain goal is worth pursuing, but in the context of the OP and my previous comment, we are talking about whether it's convincing at the outset that a certain definition or a certain setup would lead us to our desired goal. In this sense the use of exterior algebra is rather unmotivated.
      $endgroup$
      – user1551
      Mar 26 '16 at 13:18












      $begingroup$
      As Paul Halmos put it (on p.53, Finite-Dimensional Vector Spaces): "The reader might well charge that the discussion was not very strongly motivated. The complete motivation cannot be contained in this book; the justification for studying multilinear algebra is the wide applicability of the subject. The only application that we shall make is to the theory of determinants (which, to be sure, could be treated by more direct but less elegant methods, involving much greater dependence on arbitrary choices of bases)".
      $endgroup$
      – user1551
      Mar 26 '16 at 13:19




      $begingroup$
      As Paul Halmos put it (on p.53, Finite-Dimensional Vector Spaces): "The reader might well charge that the discussion was not very strongly motivated. The complete motivation cannot be contained in this book; the justification for studying multilinear algebra is the wide applicability of the subject. The only application that we shall make is to the theory of determinants (which, to be sure, could be treated by more direct but less elegant methods, involving much greater dependence on arbitrary choices of bases)".
      $endgroup$
      – user1551
      Mar 26 '16 at 13:19











      2












      $begingroup$

The way I teach determinants to my students is to start with the case $n=2$, and to use the complex numbers and/or trigonometry in order to show that, for vectors $(a,b), (c,d)$ in the plane, the quantity
      $$ad-bc=\|(a,b)\|\cdot \|(c,d)\| \sin \theta$$
      is the signed area between $(a,b)$ and $(c,d)$ (in this order).
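This identity is easy to spot-check numerically; the following is a minimal pure-Python sketch (the function names are ad hoc, introduced only for this illustration):

```python
import math

def signed_area(u, v):
    # ad - bc for u = (a, b), v = (c, d): the signed area between u and v.
    (a, b), (c, d) = u, v
    return a * d - b * c

def signed_area_trig(u, v):
    # ||u|| * ||v|| * sin(theta), with theta the angle measured from u to v.
    theta = math.atan2(v[1], v[0]) - math.atan2(u[1], u[0])
    return math.hypot(*u) * math.hypot(*v) * math.sin(theta)

u, v = (3.0, 1.0), (1.0, 2.0)
assert math.isclose(signed_area(u, v), signed_area_trig(u, v))  # both give 5.0
```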



      Then, using the vector product and its properties (we have seen it before coming to the topic of determinants in full generality), we check that $3$ by $3$ determinants carry the meaning of signed volumes.



The next step is to introduce determinants as alternating multilinear functions. We have seen examples of bilinear maps (inner products), trilinear maps, such as
      $$(u,v,w)\mapsto (u\times v)\bullet w,$$
      and the quadrilinear maps $$(a,b,c,d)\mapsto (a\bullet c)(b\bullet d)-(b\bullet c)(a\bullet d),$$
      $$(a,b,c,d)\mapsto (a \times b)\bullet (c\times d).$$



Now, when explaining multilinearity we did emphasise that the equality of the last two examples can be proven by checking it only in the case where $a,b,c,d$ are vectors of the canonical basis.
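Concretely, the equality of the two quadrilinear maps above (the classical Lagrange/Binet–Cauchy identity in $\mathbb{R}^3$) can be spot-checked on sample vectors; a small sketch, not a substitute for the basis argument:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    # Vector product in R^3.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

a, b, c, d = (1, 2, 0), (0, 1, 3), (2, 0, 1), (1, 1, 1)
lhs = dot(a, c) * dot(b, d) - dot(b, c) * dot(a, d)   # (a.c)(b.d) - (b.c)(a.d)
rhs = dot(cross(a, b), cross(c, d))                   # (a x b).(c x d)
assert lhs == rhs  # both equal -1 here
```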



Then the time comes to define the determinant of $n$ vectors in $\mathbb{R}^n$, which is a new example of an $n$-linear alternating function. They check that the vector space of such maps has dimension $\binom{n}{n}=1$. The students thus learn that the determinant is essentially the only possible such function, up to a scalar multiple, in the same way they saw that more general multilinear maps depend exclusively on their values on vectors of a chosen basis (say, the canonical basis in our case).



Although I learnt to prove facts such as $\det(AB)=\det(A)\det(B)$ by strict functoriality, in class we do define the map
      $$L(X_1, \ldots , X_n)=\det(AX_1, \ldots , AX_n),$$ which by uniqueness is a constant multiple of the determinant function $T(X_1, \ldots , X_n)=\det(X_1, \ldots, X_n)$, and compute the constant by evaluating at the identity matrix, i.e. $X_i=e_i$.



Thus $\det(AB)=\det(A)\det(B).$
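The multiplicativity can also be illustrated numerically (this only checks one instance; it does not replace the uniqueness argument). A pure-Python sketch with a cofactor-expansion determinant:

```python
def det3(m):
    # Cofactor expansion of a 3x3 determinant along the first row.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def matmul(A, B):
    # 3x3 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[2, 1, 0], [0, 3, 1], [1, 0, 1]]
B = [[1, 2, 0], [0, 1, 1], [2, 0, 1]]
assert det3(matmul(A, B)) == det3(A) * det3(B)  # 35 == 7 * 5
```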



It is in T.W. Korner's book Vectors, Pure and Applied that one can see a rigorous construction that uses elementary matrices. The OP can check Korner's book for a nice, slightly more down-to-earth exposition.



In op. cit. one can see how Korner uses the fact that an invertible matrix can be decomposed as a product of elementary matrices to obtain the formula $\det(AB)=\det(A)\det(B).$



      Note: I have been deliberately brief in my exposition, just so as not to repeat too much stuff that was already included in other answers.

















      $endgroup$


















          edited Mar 26 '16 at 20:51

























          answered Mar 26 '16 at 2:47









Theon Alexander

          1,177411



























              1












              $begingroup$

$\det$ is the only multilinear alternating map such that $\det I = 1$. Properties (2) and (3), combined with the following property, would define $\det$ uniquely:
              $$ \det(a_1,\dots,u,\dots,a_n) + \det(a_1,\dots,\lambda v,\dots,a_n) = \det(a_1,\dots,u + \lambda v,\dots,a_n).
              $$
              The only thing missing from your definition is linearity.
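In the $2\times 2$ case this slot-wise linearity is easy to check numerically; a minimal sketch with ad hoc helper names:

```python
def det2(u, v):
    # 2x2 determinant with columns u and v.
    return u[0] * v[1] - u[1] * v[0]

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(k, u):
    return (k * u[0], k * u[1])

a1 = (2.0, 5.0)
u, v, lam = (1.0, -3.0), (4.0, 2.0), 1.5
lhs = det2(add(u, scale(lam, v)), a1)        # det(u + lam*v, a1)
rhs = det2(u, a1) + det2(scale(lam, v), a1)  # det(u, a1) + det(lam*v, a1)
assert abs(lhs - rhs) < 1e-12
```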















              $endgroup$













              • $begingroup$
Thanks for answering. Do you believe that it's impossible to derive multilinearity from properties $(1)$, $(2)$ and $(3)$?
                $endgroup$
                – Hulkster
                Mar 21 '16 at 10:17






              • 2




                $begingroup$
                @Juho Yes, but it is difficult to prove.
                $endgroup$
                – Henricus V.
                Mar 21 '16 at 13:56










              • $begingroup$
But if the properties $(1)$, $(2)$ and $(3)$ give the correct determinant for every matrix, then isn't multilinearity an automatic additional property?
                $endgroup$
                – Hulkster
                Mar 21 '16 at 14:37










              • $begingroup$
@Juho The current definition of $\det$ satisfies (1), (2) and (3), but there is no guarantee that no other nonlinear function also satisfies them.
                $endgroup$
                – Henricus V.
                Mar 21 '16 at 14:49










              • $begingroup$
                @Juho I think your definition is equivalent to Artin's, but Artin's definition makes the proofs of several properties of the determinant much easier.
                $endgroup$
                – user1551
                Mar 26 '16 at 13:02


























              answered Mar 21 '16 at 0:22









Henricus V.

              15.1k22149



























              1












              $begingroup$

              The geometric meaning of the determinant of, say, a 3 by 3 matrix is the (signed) volume of the parallelepiped spanned by the three column vectors (alternatively the three row vectors). This generalizes the (signed) area of the parallelogram spanned by the two column vectors of a 2 by 2 matrix.



              At the next stage, to pursue the geometric definition you would have to clarify the meaning of "signed" above. The naive definition of volume is always positive whereas the determinant could be negative, so there is some explaining to do in terms of orientations.
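Both points — the determinant as a signed volume, and the orientation sign — can be illustrated in a few lines via the scalar triple product $(u\times v)\bullet w$, which for $3$ by $3$ agrees with the determinant with columns $u,v,w$ (a sketch, with ad hoc function names):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def signed_volume(u, v, w):
    # Scalar triple product (u x v) . w: signed volume of the parallelepiped.
    return dot(cross(u, v), w)

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert signed_volume(e1, e2, e3) == 1    # right-handed triple: positive volume
assert signed_volume(e2, e1, e3) == -1   # swapping two vectors reverses orientation
```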



              The route most often chosen both by instructors and textbook writers is the algebraic one where one can write down a magic formula and, boom! the determinant has been defined. This is fine if you want to get through a certain amount of material required by the course, but pedagogically this may not be the best approach.



              Ultimately a combination of the geometry and the algebra is required to explain this concept properly. It connects to more advanced topics like exterior algebras but that's the next stage already.















              $endgroup$




























                  answered Mar 23 '16 at 15:05









Mikhail Katz

                  30.7k14399



























                      -1












                      $begingroup$

A rationale I could give for the determinant function is that it generalises a property of powers, i.e. we know that for any scalars $x,a,b,c,\ldots$ it holds that
                      $x^{a + b + c + \cdots} = x^a x^b x^c \cdots$

                      But let's assume that $a,b,c,\ldots$ are actually the diagonal entries of a matrix $\mathbf{A}$; then it holds that $x^{\operatorname{tr}(\mathbf{A})} = \det(x^{\mathbf{A}})$, with $x^{\mathbf{A}}$ the matrix exponential (of base $x$).
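For a diagonal matrix the identity is transparent, since $x^{\mathbf{A}}$ is then just entrywise powers on the diagonal; a minimal check of that special case (the diagonalizable case reduces to it by similarity):

```python
import math

# For A = diag(a, b, c): x^A = diag(x^a, x^b, x^c), so
# det(x^A) = x^a * x^b * x^c = x^(a + b + c) = x^trace(A).
x = 2.0
diag = (1.0, 3.0, -2.0)
det_x_A = math.prod(x ** a for a in diag)
assert math.isclose(det_x_A, x ** sum(diag))  # both equal 4.0
```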

















                      $endgroup$









                      • 4




                        $begingroup$
                        what is the matrix power?
                        $endgroup$
                        – ASKASK
                        Mar 21 '16 at 0:51
























                      edited Mar 28 '16 at 16:27

























                      answered Mar 15 '16 at 14:54









cellfunder

                      111





























