Basic doubts about matrices
Today I was studying matrices and got stuck on a basic question. My book says that a matrix is a two-dimensional arrangement of real numbers in rows and columns. But as I worked through the chapter, I realized that matrices seem to solve familiar problems, such as systems of linear equations, in their own way: they arrange the numbers involved in the problem in a 2D array, they have their own rules for operating on those entries, and the output of such an operation is again a matrix. Similarly, if we rotate the coordinate axes through some angle and want to find the new coordinates of a point, say $(2,3)$, a matrix can solve this: we arrange the coordinates in an array and multiply by a special matrix that converts the old coordinates into the new ones. So can we say that matrices are a new mathematical structure whose rules of operation have been defined by mathematicians? And specifically, are matrices an alternative way of solving problems we already knew how to solve differently, like different programming languages on a computer?
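The axis-rotation example above can be checked numerically. Here is a minimal NumPy sketch; the 30° angle is an arbitrary illustrative choice, and the matrix uses the "rotate the axes" (passive) convention:

```python
import numpy as np

theta = np.deg2rad(30)  # rotate the axes by 30 degrees (arbitrary choice)

# Rotating the AXES counterclockwise by theta changes a point's
# coordinates via this matrix (the "passive" convention):
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

p = np.array([2.0, 3.0])   # the point (2, 3) in the old axes
p_new = R @ p              # its coordinates in the rotated axes

# A rotation preserves lengths, so the distance to the origin is unchanged.
assert np.isclose(np.linalg.norm(p_new), np.linalg.norm(p))
```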
matrix-equations
edited Jan 17 at 16:07
asked Jan 17 at 15:59 by Rifat Safin
2 Answers
It sounds like you're learning some basic linear algebra, so I'll be writing with that in mind.
A key point that many students (seem to) miss is the difference between a linear transformation and a matrix. Fundamentally, matrices are a useful "bookkeeping" tool for linear transformations. They present all the "information" in the linear transformation in a neat, efficient way.
When you talk about "rotating coordinate axes by some angle", you're talking about a linear transformation of $\mathbb{R}^n$. However, this linear transformation (and every linear transformation, in fact) can be represented by a matrix. That is, for a linear transformation $T:\mathbb{R}^n \rightarrow \mathbb{R}^m$, we can write $T(v) = Av$, where $A$ is an $m \times n$ matrix.
As for matrix operations, mathematicians defined them to be consistent with the underlying linear transformations the matrices represent. Many students feel that matrix multiplication doesn't make sense (why do we go across the rows and down the columns? why not the other way? why not multiply entry by entry?). The reason for the seemingly arbitrary definition follows from the fact that linear maps can be composed. Given linear transformations $S: \mathbb{R}^n \rightarrow \mathbb{R}^m$ and $T: \mathbb{R}^m \rightarrow \mathbb{R}^k$, we can look at their composition $T \circ S: \mathbb{R}^n \rightarrow \mathbb{R}^k$. This transformation is also linear, and is defined by $(T \circ S)(v) = T(S(v))$.
In matrix notation, if we write these functions as $S(v) = Av$ and $T(v) = Bv$, then we have $T(S(v)) = T(Av) = BAv$. If we want this to make sense, the product of the matrices $B$ and $A$ had better be consistent with the composition of the functions they represent. It turns out that the "across the rows, down the columns" matrix multiplication makes this consistent.
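This consistency is easy to check concretely. A small NumPy sketch (the particular matrices $A$ and $B$ are arbitrary illustrative choices):

```python
import numpy as np

# S: R^2 -> R^3 and T: R^3 -> R^2, represented by A (3x2) and B (2x3).
A = np.array([[1, 2],
              [0, 1],
              [3, 0]])
B = np.array([[1, 0, 2],
              [0, 1, 1]])

v = np.array([1, 2])

# Applying S, then T, one step at a time ...
step_by_step = B @ (A @ v)
# ... agrees with applying the single matrix BA:
one_matrix = (B @ A) @ v

assert np.array_equal(step_by_step, one_matrix)
```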
Using matrices isn't an "alternative way" to do mathematics. It's just a convenient notation for linear transformations.
It's worth noting that there are some places where using matrices as a bookkeeping tool is counterproductive. For example, suppose I asked you to prove that matrix multiplication is associative: that is, for matrices $A,B,C$, we have $(AB)C = A(BC)$. If you try to write out the matrices entry by entry, this is a mess! However, if we recall that these matrices represent linear maps, and that function composition is associative, the proof is two lines long.
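The associativity itself is also easy to confirm numerically, even though the entry-by-entry proof on paper is messy. (The random 3×3 matrices below are an arbitrary choice; the agreement is up to floating-point rounding.)

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# (AB)C and A(BC) agree, as associativity of function composition predicts.
assert np.allclose((A @ B) @ C, A @ (B @ C))
```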
answered Jan 17 at 16:59 by Joe

Thanks a lot. Your answer really helped me. – Rifat Safin, Jan 17 at 17:14
I'd like to add one thing to @Joe's excellent answer, to try to address the idea that matrices are "new" in some way. Your book's definition of a matrix as a 2D arrangement of numbers is nice and simple, but it requires notions like "2D" and "arrangement". There's a simpler story.
Let me start with vectors: maybe you've been told a vector is a 1D list of numbers, or something like that. But if you've been learning mathematics with various axioms ("addition is commutative", for example), you may think "Hey, we never defined a list!" Let me help out. I'm going to assume you know what a set and a function are, and that you know about the natural numbers ($1, 2, 3, \ldots$ for the sake of this explanation, although many folks start at $0$).
For any natural number $n$, let's let $1:n$ denote the set $\{1, 2, \ldots, n\}$. Then an $n$-vector (or "vector with $n$ entries", or "vector with $n$ components", or "$n$-dimensional real vector") can be defined to be ...a function from $1:n$ to $\Bbb R$.
For instance, the vector that you might write as $b = (1, -2, 4)$ is (in my terms) the function
$$
b: \{1,2,3\} \to \Bbb R : i \mapsto \begin{cases}
1 & i = 1 \\
-2 & i = 2 \\
4 & i = 3
\end{cases}
$$
You'll notice that (in my world) $b(1) = 1, b(2) = -2, b(3) = 4$, while in your world, you have $b_1 = 1, b_2 = -2, b_3 = 4$. In short, I've replaced "1D arrangement of numbers" with "a particular kind of function" and "subscripting with $i$" by "applying the function to $i$".
If you look carefully at the rules for addition of functions, etc., you'll find that my "function things" behave just like your vectors. And once you do that, you realize that vectors with their new notation (subscripts, parentheses and commas, etc.) are just a repackaging of something you already knew about (i.e., functions).
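This "vectors are functions" viewpoint can be written out literally in code. A Python sketch (the dictionary encoding of $b$ is just one convenient way to realize the function):

```python
# The vector b = (1, -2, 4), literally as a function from {1, 2, 3} to R.
def b(i):
    return {1: 1.0, 2: -2.0, 3: 4.0}[i]

# "Addition of functions" is pointwise, exactly like vector addition:
def add(u, v):
    return lambda i: u(i) + v(i)

c = add(b, b)
assert (c(1), c(2), c(3)) == (2.0, -4.0, 8.0)  # matches (1,-2,4) + (1,-2,4)
```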
In the same way, an $n times k$ matrix is just a function
$$
m : (1:n) \times (1:k) \to \Bbb R
$$
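In the same spirit, here is a matrix-as-function sketch; the 2×2 entries and the `apply` helper are illustrative assumptions, not standard library calls:

```python
# The 2x2 matrix [[1, 2], [3, 4]] as a function on index pairs (i, j).
def m(i, j):
    return {(1, 1): 1.0, (1, 2): 2.0,
            (2, 1): 3.0, (2, 2): 4.0}[(i, j)]

# Matrix-vector multiplication in this language: (Mv)(i) = sum_j m(i,j)*v(j).
def apply(mat, vec, k):
    return lambda i: sum(mat(i, j) * vec(j) for j in range(1, k + 1))

v = lambda j: {1: 1.0, 2: 1.0}[j]   # the vector (1, 1) as a function
w = apply(m, v, 2)
assert (w(1), w(2)) == (3.0, 7.0)   # [[1,2],[3,4]] applied to (1,1) is (3,7)
```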
Some programming languages (Fortran, Matlab) actually use the same notation for subscripts and for function application. I halfway like this (because of the "vectors and matrices are functions" idea presented above), and halfway dislike it, because the implementations of the two things in most languages are wildly different, and the textual similarity may lead one to think that they are somehow the same in terms of things like computational complexity, etc.
Once I've said "Vectors and matrices are just functions," does that mean you should always write $b(i)$ instead of $b_i$? And that you should always write function-like descriptions of vectors rather than just listing their elements like this: (1, -2, 4)? Not at all! Once you know about the correspondence, you should use whatever notation lets you express things most compactly and clearly (for you). For most of us, this involves writing rows or columns of numbers, and rectangular arrays of numbers.
Can you please tell me: if I were sometime asked what the most appropriate definition of a matrix is, could it be given like this: a mathematical structure governed by certain rules which makes the purpose of linear transformation easy and efficient? – Rifat Safin, Jan 18 at 16:25

If you were asked "What's the most appropriate definition of a matrix?", I'd recommend answering "Appropriate for what?" My answer for "mathematicians interested in technical rigor" might be rather different from my answer for "people trying to learn Matlab". The answer you've proposed is, I think, too general for either category --- it also applies to vector spaces, dual spaces, and many other things. It's a description rather than a definition. – John Hughes, Jan 18 at 16:42

OK, I am at the 10+2 level now; I am asking according to this level. – Rifat Safin, Jan 18 at 16:45

I'm afraid I don't know what "10+2 level" means. I think I've said everything useful that I have to say about this question. – John Hughes, Jan 18 at 16:46

In our country, India, the 10+2 level is the last level of school study. Anyway, thanks for helping me 😊 – Rifat Safin, Jan 18 at 16:50
$begingroup$
I'd like to add one thing to @Joe's excellent answer, to try to address the idea that matrices are "new" in some way. Your book's definition of a matrix as a 2D arrangement of number is nice and simple, but it requires notions like "2D" and "arrangement". There's a simpler story.
Let me start with vectors: maybe you've been told a vector is a 1D list of numbers, or something like that. But if you've been learning mathematics with various axioms ("addition is commutative", for example), you may think "Hey, we never defined a list!" Let me help out. I'm going to assume you know what a set and a function are, and that you know about the natural numbers ($1, 2, 3, ldots$ for the sake of this explanation, although many folks start at $0$).
For any natural number $n$, let's let $1:n$ denote the set ${1, 2, ldots, n}$. Then an $n$-vector (or "vector with $n$ entries", or "vector with $n$ components" or "$n$-dimensional real vector") can be defined to be ...a function from $1:n$ to $Bbb R$.
For instance, the vector that you might write as $b = (1, -2, 4)$ is (in my terms) the function
$$
b: {1,2,3} to Bbb R : i mapsto begin{cases}
1 & i = 1 \
2 & i = -2\
4 & i = 3
end{cases}
$$
You'll notice that (in my world) $b(1) = 1, b(2) = -2, b(3) = 4$, while in your world, you have $b_1 = 1, b_2 = -2, b_3 = 4$. In short, I've replaced "1D arrangement of numbers" with "a particular kind of function" and "subscripting with $i$" by "applying the function to $i$".
If you look carefully at the rules for addition of functions, etc., you'll find that my "function things" behave just like your vectors. And once you do that, you realize that vectors with their new notation (subscripts, parentheses and commas, etc.) are just a repackaging of something you already knew about (i.e., functions).
In the same way, an $n times k$ matrix is just a function
$$
m : (1:n) times (1:k) to Bbb R
$$
Some programming languages (Fortran, Matlab) actually use the same notation for subscripts and for function application. I halfway like this (because of the "vectors and matrices are functions" idea presented above), and halfway dislike it, because the implementations of the two things in most languages are wildly different, and the textual similarity may lead one to think that they are somehow the same in terms of things like computational complexity, etc.
Once I've said "Vectors and matrices are just functions," does that mean you should always write $b(i)$ instead of $b_i$? And that you should always write function-like descriptions of vectors rather than just listing their elements like this: (1, -2, 4)? Not at all! Once you know about the correspondence, you should use whatever notation lets you express things most compactly and clearly (for you). For most of us, this involves writing rows or columns of numbers, and rectangular arrays of numbers.
$endgroup$
$begingroup$
Can you please tell me if i was asked sometime that what would be the most appropriate defination of matrix. Can it be given like this - a mathematical structure governed by certain rules which makes the purpose of linear transformation easy and efficient
$endgroup$
– Rifat Safin
Jan 18 at 16:25
$begingroup$
If you were asked "What's the most appropriate definition of a matrix?", I'd recommend answering "Appropriate for what?" My answer for "mathematicians interested in technical rigor" might be rather different from my answer for "people trying to learn Matlab". The answer you've proposed is, I think, too general for either category --- it also applies to vector spaces, dual spaces, and many other things. It's a description rather than a definition.
$endgroup$
– John Hughes
Jan 18 at 16:42
$begingroup$
Ok i am in 10+2 level now according to this level i am asking
$endgroup$
– Rifat Safin
Jan 18 at 16:45
$begingroup$
I'm afraid I don't know what "10 + 2 level" means. I think I've said everything useful that I have to say about this question.
$endgroup$
– John Hughes
Jan 18 at 16:46
$begingroup$
In our country India 10+2 level is the last level of school study. Anyways thank for helping me 😊😊
$endgroup$
– Rifat Safin
Jan 18 at 16:50
add a comment |
$begingroup$
I'd like to add one thing to @Joe's excellent answer, to try to address the idea that matrices are "new" in some way. Your book's definition of a matrix as a 2D arrangement of number is nice and simple, but it requires notions like "2D" and "arrangement". There's a simpler story.
Let me start with vectors: maybe you've been told a vector is a 1D list of numbers, or something like that. But if you've been learning mathematics with various axioms ("addition is commutative", for example), you may think "Hey, we never defined a list!" Let me help out. I'm going to assume you know what a set and a function are, and that you know about the natural numbers ($1, 2, 3, ldots$ for the sake of this explanation, although many folks start at $0$).
For any natural number $n$, let's let $1:n$ denote the set ${1, 2, ldots, n}$. Then an $n$-vector (or "vector with $n$ entries", or "vector with $n$ components" or "$n$-dimensional real vector") can be defined to be ...a function from $1:n$ to $Bbb R$.
For instance, the vector that you might write as $b = (1, -2, 4)$ is (in my terms) the function
$$
b: {1,2,3} to Bbb R : i mapsto begin{cases}
1 & i = 1 \
2 & i = -2\
4 & i = 3
end{cases}
$$
You'll notice that (in my world) $b(1) = 1, b(2) = -2, b(3) = 4$, while in your world, you have $b_1 = 1, b_2 = -2, b_3 = 4$. In short, I've replaced "1D arrangement of numbers" with "a particular kind of function" and "subscripting with $i$" by "applying the function to $i$".
If you look carefully at the rules for addition of functions, etc., you'll find that my "function things" behave just like your vectors. And once you do that, you realize that vectors with their new notation (subscripts, parentheses and commas, etc.) are just a repackaging of something you already knew about (i.e., functions).
In the same way, an $n times k$ matrix is just a function
$$
m : (1:n) times (1:k) to Bbb R
$$
Some programming languages (Fortran, Matlab) actually use the same notation for subscripts and for function application. I halfway like this (because of the "vectors and matrices are functions" idea presented above), and halfway dislike it, because the implementations of the two things in most languages are wildly different, and the textual similarity may lead one to think that they are somehow the same in terms of things like computational complexity, etc.
Once I've said "Vectors and matrices are just functions," does that mean you should always write $b(i)$ instead of $b_i$? And that you should always write function-like descriptions of vectors rather than just listing their elements like this: (1, -2, 4)? Not at all! Once you know about the correspondence, you should use whatever notation lets you express things most compactly and clearly (for you). For most of us, this involves writing rows or columns of numbers, and rectangular arrays of numbers.
$endgroup$
$begingroup$
Can you please tell me if i was asked sometime that what would be the most appropriate defination of matrix. Can it be given like this - a mathematical structure governed by certain rules which makes the purpose of linear transformation easy and efficient
$endgroup$
– Rifat Safin
Jan 18 at 16:25
$begingroup$
If you were asked "What's the most appropriate definition of a matrix?", I'd recommend answering "Appropriate for what?" My answer for "mathematicians interested in technical rigor" might be rather different from my answer for "people trying to learn Matlab". The answer you've proposed is, I think, too general for either category --- it also applies to vector spaces, dual spaces, and many other things. It's a description rather than a definition.
$endgroup$
– John Hughes
Jan 18 at 16:42
$begingroup$
Ok i am in 10+2 level now according to this level i am asking
$endgroup$
– Rifat Safin
Jan 18 at 16:45
$begingroup$
I'm afraid I don't know what "10 + 2 level" means. I think I've said everything useful that I have to say about this question.
$endgroup$
– John Hughes
Jan 18 at 16:46
$begingroup$
In our country, India, the 10+2 level is the last level of school study. Anyway, thanks for helping me 😊😊
$endgroup$
– Rifat Safin
Jan 18 at 16:50
answered Jan 17 at 17:27, edited Jan 18 at 16:39
– John Hughes