1-norm $\|\cdot\|_1$ of a non-square matrix


























Does the $1$-norm exist for non-square matrices? By the $1$-norm I mean

$$d(x,y)=\sum_{i=1}^{n} |x_i-y_i|, \qquad x=(x_1,\dots,x_n),\ y=(y_1,\dots,y_n).$$

Suppose $A$ is an $m\times n$ matrix with $m\ne n$. What can we say about $\|A\|_1$? Also, can we say $\|A\|_1=\|A^T\|_1=\|A^TA\|_1=\|AA^T\|_1$, and $\|AB\|_1\le \|A\|_1\|B\|_1$? Thanks for helping.































  • What property of the norm are you uncertain about $d$ satisfying?
    – mathworker21
    Jan 7 at 10:46










  • @OscarLanzi I don't understand what you are talking about; if possible, give an example and explain.
    – Markov
    Jan 7 at 12:05










  • Cannot do so in a comment. If this fails to be clear I am forced to delete.
    – Oscar Lanzi
    Jan 7 at 12:17
















matrix-norms






asked Jan 7 at 10:35









Markov






















1 Answer






























There is no problem defining an "entrywise" matrix norm as follows:
$$\|A\|_1=\|\operatorname{vec}(A)\|_1=\sum_{i,j}|A_{i,j}|$$

see Wikipedia, Matrix norms. It is a norm, and the induced distance is (if $A$ and $B$ have the same dimensions):
$$d(A,B)=\sum_{i,j}|A_{i,j}-B_{i,j}|$$

This norm is sub-multiplicative (see here):
$$\|AB\|_1\leq\|A\|_1\|B\|_1$$

but, attention, in general:
$$\|A^tA\|_1\neq\|AA^t\|_1$$
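The two properties above can be checked numerically. A minimal sketch using NumPy; the matrices `A` and `B` are made-up examples chosen only for illustration:

```python
import numpy as np

def entrywise_norm1(A):
    """Entrywise 1-norm: the 1-norm of vec(A), i.e. the sum of |A_ij|."""
    return np.abs(A).sum()

# Made-up example matrices; any shapes with compatible products work.
A = np.array([[1.0, -2.0, 3.0],
              [0.0,  4.0, -1.0]])   # 2 x 3 (non-square)
B = np.array([[2.0, 0.0],
              [1.0, -1.0],
              [0.0, 3.0]])          # 3 x 2

# Sub-multiplicativity: ||AB||_1 <= ||A||_1 ||B||_1
assert entrywise_norm1(A @ B) <= entrywise_norm1(A) * entrywise_norm1(B)

# In general ||A^T A||_1 != ||A A^T||_1 (here: 61 vs 53).
assert entrywise_norm1(A.T @ A) != entrywise_norm1(A @ A.T)
```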



Another example of an "entrywise" matrix norm often encountered in practice is the Frobenius norm, defined as follows:

$$\|A\|_F = \sqrt{\operatorname{tr}(A^tA)}=\sqrt{\sum_{i,j}|A_{i,j}|^2}$$

The Frobenius norm also fulfills the sub-multiplicative property (Cauchy-Schwarz in action, see here):
$$\|AB\|_F\leq \|A\|_F\|B\|_F$$

In contrast to the previous case $\|\cdot\|_1$, since $\operatorname{tr}(A^tA)=\operatorname{tr}(AA^t)$, the Frobenius norm also fulfills the property:

$$\|AA^t\|_F=\|A^tA\|_F$$
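Again a small numerical check is possible. A sketch assuming NumPy, with a made-up example matrix:

```python
import numpy as np

def frob(A):
    """Frobenius norm via the trace formula sqrt(tr(A^T A))."""
    return np.sqrt(np.trace(A.T @ A))

A = np.array([[1.0, -2.0, 3.0],
              [0.0,  4.0, -1.0]])   # made-up 2 x 3 example

# Agrees with the entrywise formula and NumPy's built-in Frobenius norm.
assert np.isclose(frob(A), np.sqrt((A ** 2).sum()))
assert np.isclose(frob(A), np.linalg.norm(A, 'fro'))

# tr(A^T A) = tr(A A^T) implies ||A^T A||_F == ||A A^T||_F,
# even though A^T A is 3 x 3 and A A^T is 2 x 2.
assert np.isclose(frob(A.T @ A), frob(A @ A.T))
```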





To be complete, one must also say a word about matrix norms induced by vector norms. One can define:
$$\|A\|_1=\sup_{x\neq 0}\frac{\|Ax\|_1}{\|x\|_1}$$

(Attention: despite the identical notation, this norm is different from the previously defined $\|\operatorname{vec}(\cdot)\|_1$; here we have $\|A\|_1=\max_j \sum_i|A_{i,j}|$.)

An immediate generalization, valid for any $1\leq p \leq \infty$, is:
$$\|A\|_p=\sup_{x\neq 0}\frac{\|Ax\|_p}{\|x\|_p}$$

Such norms are called matrix norms induced by vector norms; they automatically fulfill the sub-multiplicative property:
$$\|AB\|_p \leq \|A\|_p \|B\|_p$$

Attention: however, in general,
$$\|AA^t\|_p \neq \|A^tA\|_p$$
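The difference between the induced 1-norm (maximum absolute column sum) and the entrywise 1-norm can be seen directly; `np.linalg.norm(A, 1)` computes the induced version. A sketch with a made-up matrix:

```python
import numpy as np

A = np.array([[1.0, -2.0, 3.0],
              [0.0,  4.0, -1.0]])   # made-up 2 x 3 example

# Induced 1-norm = maximum absolute column sum.
induced_1 = np.linalg.norm(A, 1)
assert induced_1 == np.abs(A).sum(axis=0).max()   # column sums: 1, 6, 4

# It differs from the entrywise 1-norm despite the shared notation.
entrywise_1 = np.abs(A).sum()                     # sums all entries: 11
assert induced_1 != entrywise_1
```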





Also note this important fact: as we are in finite dimension, all of the previously defined matrix norms are equivalent, in the sense that for any two matrix norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$ there exist $r$ and $s$ such that:
$$\forall A,\quad r\|A\|_\alpha\leq \|A\|_\beta \leq s\|A\|_\alpha$$

In particular, if a sequence $n\mapsto A_n$ converges for one matrix norm, it converges for all the others.
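For a concrete pair of equivalence constants: taking $\|\cdot\|_\alpha=\|\cdot\|_F$ and $\|\cdot\|_\beta=\|\operatorname{vec}(\cdot)\|_1$, the usual $\ell_1/\ell_2$ inequalities give $r=1$ and $s=\sqrt{mn}$ for $m\times n$ matrices. A quick randomized sanity check (a sketch assuming NumPy; the tolerances guard against floating-point round-off):

```python
import numpy as np

rng = np.random.default_rng(0)

# Check ||A||_F <= ||vec(A)||_1 <= sqrt(m*n) * ||A||_F on random matrices.
for _ in range(100):
    m, n = rng.integers(1, 6, size=2)
    A = rng.standard_normal((m, n))
    fro = np.linalg.norm(A, 'fro')
    ent = np.abs(A).sum()
    assert fro <= ent * (1 + 1e-12)
    assert ent <= np.sqrt(m * n) * fro * (1 + 1e-12)
```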






























        edited Jan 7 at 12:07

























        answered Jan 7 at 10:45









Picaud Vincent





























