Showing $\sum_{k=1}^{nm} \frac{1}{k} \approx \sum_{k=1}^{n} \frac{1}{k} + \sum_{k=1}^{m} \frac{1}{k}$
Since $\log(nm) = \log(n) + \log(m)$, and $\sum_{k=1}^n \frac{1}{k} \approx \log n$ for large $n$, we would expect that $$\sum_{k=1}^{nm} \frac{1}{k} \approx \sum_{k=1}^{n} \frac{1}{k} + \sum_{k=1}^{m} \frac{1}{k}$$
when $n,m$ are large.
I'm wondering whether this approximation can be demonstrated by discrete means, that is, by manipulations of rational fractions and/or elementary number-theoretic considerations, without using the $\log n$ approximation.
sequences-and-series harmonic-numbers
asked Jan 1 at 3:05 by MathematicsStudent1122
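For a concrete sense of how close the two sides are, here is a quick numerical check (an illustrative Python sketch; the sample values of $n,m$ are arbitrary):

```python
from math import fsum

def H(N):
    """Harmonic number H_N = sum_{k=1}^N 1/k, in floating point."""
    return fsum(1.0 / k for k in range(1, N + 1))

for n, m in [(10, 10), (100, 50), (500, 400)]:
    gap = H(n) + H(m) - H(n * m)
    print(f"n={n:4d}  m={m:4d}  H_n + H_m - H_nm = {gap:.6f}")
# The gap settles near 0.5772..., suggesting the two sides differ by
# roughly a constant while each side grows without bound.
```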
There is a constant error of $\gamma \approx 0.577$ in this approximation: the harmonic numbers satisfy $H_n \approx \log n + \gamma$, so there are two $\gamma$'s on the right but only one on the left. It is still a reasonable idea.
– Ross Millikan, Jan 1 at 3:25
Probably you know this, but just in case you don't: you can use integrals and the Fundamental Theorem of Calculus to define logs. Define $L(x) = \int_1^x \frac{1}{t}\, dt$. Then you can check: $L'(x) = 1/x$, $L(ax) = L(a) + L(x)$, and $L(x^p) = pL(x)$. Your question is the series version of showing that $\int_1^{ax} \frac{1}{t}\, dt = \int_1^a \frac{1}{t}\, dt + \int_1^x \frac{1}{t}\,dt$.
– JavaMan, Jan 1 at 4:02
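The additivity $L(ax) = L(a) + L(x)$ mentioned in that comment is easy to see numerically as well; a minimal Python sketch using a crude midpoint rule (the step count and the values of $a, x$ are arbitrary):

```python
def L(x, steps=100_000):
    """Approximate L(x) = integral of 1/t from t=1 to t=x by the midpoint rule."""
    h = (x - 1.0) / steps
    return h * sum(1.0 / (1.0 + (i + 0.5) * h) for i in range(steps))

a, x = 3.0, 7.0
print(L(a * x))        # ~3.0445
print(L(a) + L(x))     # ~3.0445, matching L(ax) = L(a) + L(x)
```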
4 Answers
This is a nice little problem, and thankfully it has a simple solution.
Assume that $m$ and $n$ are large natural numbers. Consider the following inequalities:
$$
\sum_{k=cn+1}^{(c+1)n}\frac{1}{(c+1)n}\leq\sum_{k=cn+1}^{(c+1)n}\frac{1}{k} \leq\sum_{k=cn+1}^{(c+1)n}\frac{1}{cn+1}
$$
This means that we may estimate the middle sum by the lower sum, with an error no greater than the following (using the fact that $n$ is large):
$$
\sum_{k=cn+1}^{(c+1)n}\frac{n-1}{(c+1)n(nc+1)} \approx \sum_{k=cn+1}^{(c+1)n}\frac{1}{c(c+1)n} = \frac{1}{c(c+1)}
$$
Thus we have the following approximate upper bound (using the fact that $m$ is large):
$$
\sum_{k=1}^{nm}\frac{1}{k} = \sum_{k=1}^{n}\frac{1}{k} + \sum_{c=1}^{m-1}\sum_{k=cn+1}^{(c+1)n}\frac{1}{k} \lesssim \sum_{k=1}^n\frac{1}{k}+ \sum_{c=1}^{m-1}\left[\frac{1}{c+1}+\frac{1}{c(c+1)}\right] \approx \sum_{k=1}^n\frac{1}{k}+\sum_{k=1}^m\frac{1}{k}
$$
And we have the following lower bound:
$$
\sum_{k=1}^{nm}\frac{1}{k} = \sum_{k=1}^{n}\frac{1}{k} + \sum_{c=1}^{m-1}\sum_{k=cn+1}^{(c+1)n}\frac{1}{k} \geq \sum_{k=1}^n\frac{1}{k}+ \sum_{c=1}^{m-1}\frac{1}{c+1} = \left[\sum_{k=1}^n\frac{1}{k}+\sum_{k=1}^m\frac{1}{k}\right]-1
$$
– InequalitiesEverywhere, answered Jan 1 at 3:34 (edited Jan 1 at 3:46)
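Writing $S = \sum_{k=1}^n \frac{1}{k} + \sum_{k=1}^m \frac{1}{k}$, the two bounds above say $S - 1 \leq \sum_{k=1}^{nm}\frac{1}{k} \lesssim S$. A quick numerical check (an illustrative Python sketch; the test pairs are arbitrary):

```python
from math import fsum

def H(N):
    """Harmonic number H_N = sum_{k=1}^N 1/k."""
    return fsum(1.0 / k for k in range(1, N + 1))

for n, m in [(5, 7), (30, 30), (200, 150)]:
    S, Hnm = H(n) + H(m), H(n * m)
    assert S - 1 <= Hnm <= S        # lower bound and (approximate) upper bound
    print(f"n={n}, m={m}:  S - H_nm = {S - Hnm:.4f}")
```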
As an aside, all of the $\approx$'s and the $\lesssim$ can be replaced with $\leq$'s. So this argument is fully rigorous, but it is also not sharp; one of the other answers does better with the upper bound.
– InequalitiesEverywhere, Jan 1 at 4:08
Let $$S = \sum_{k=1}^n \frac{1}k + \sum_{k=1}^m \frac{1}k.$$
Note that we can write $\sum_{k=1}^{nm} \frac{1}k$ as
$$\begin{align*}
&1 + \cdots + \frac{1}n \\
&\frac1{n+1} + \cdots + \frac{1}{2n} \\
&\cdots \\
&\frac{1}{n(m-1)+1} + \cdots + \frac{1}{nm}
\end{align*}$$
Label the rows of the above table from $1$ up to $m$. Note that all the numbers in the $j$th row are $\ge \frac{1}{jn}$, so the sum of the numbers in the $j$th row is at least $n \cdot \frac{1}{jn} = \frac{1}j$. Summing this from $j=2$ to $m$ gives us
$$ \sum_{k=1}^{nm} \frac{1}k \ge \left(\sum_{k=1}^n \frac{1}k + \sum_{j=1}^m \frac{1}j \right) - 1 = S-1.$$
Similarly, each entry in the $j$th row is at most $\frac{1}{(j-1)n}$, so summing from $j=2$ to $m$ gives
$$ \sum_{k=1}^{nm} \frac{1}k \le \left(\sum_{k=1}^n \frac{1}k + \sum_{j=1}^m \frac{1}j \right) - \frac{1}m = S-\frac{1}m.$$
Therefore,
$$ \frac{1}m \le \sum_{k=1}^n \frac{1}k + \sum_{k=1}^m \frac{1}k - \sum_{k=1}^{nm} \frac{1}k \le 1. $$
– Sandeep Silwal, answered Jan 1 at 3:41
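These inequalities need no largeness assumption, so they can be confirmed with exact rational arithmetic; a minimal Python sketch over a small range of $n,m$ (the range is arbitrary):

```python
from fractions import Fraction

def H(N):
    """Exact harmonic number H_N as a rational."""
    return sum(Fraction(1, k) for k in range(1, N + 1))

for n in range(2, 15):
    for m in range(2, 15):
        diff = H(n) + H(m) - H(n * m)
        assert Fraction(1, m) <= diff <= 1
print("1/m <= S - H_nm <= 1 holds for all 2 <= n, m <= 14")
```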
Avoiding $\log$s, we still have $H_N=\sum_{k=1}^{N}\frac{1}{k}=\int_{0}^{1}\frac{x^N-1}{x-1}\,dx$ and
$$ H_{NM}-H_{N}-H_{M} = \int_{0}^{1}\frac{(x^N-1)(x^M-1)}{x-1}\,dx\leq 0, $$
whose absolute value can be bounded (by the Cauchy-Schwarz inequality) by
$$ \sqrt{\int_{0}^{1}\frac{(1-x^N)^2}{1-x}\,dx \int_{0}^{1}\frac{(1-x^M)^2}{1-x}\,dx}=\sqrt{(2H_N-H_{2N})(2H_M-H_{2M})}. $$
On the other hand,
$$ H_{2N}-H_N = \sum_{k=1}^{2N}\frac{(-1)^{k+1}}{k} $$
is a partial sum of a series converging to $\log 2$, hence
$$ H_N+H_M-\sqrt{H_N H_M}\leq H_{NM} \leq H_N+H_M $$
holds for any large $N,M$.
– Jack D'Aurizio, answered Jan 1 at 21:47
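The integral representation of $H_N$ and the final two-sided bound are easy to spot-check numerically; an illustrative Python sketch (the midpoint-rule step count and the test values of $N,M$ are arbitrary):

```python
from math import fsum, sqrt

def H(N):
    """Harmonic number H_N = sum_{k=1}^N 1/k."""
    return fsum(1.0 / k for k in range(1, N + 1))

def H_via_integral(N, steps=200_000):
    """Midpoint-rule approximation of the integral of (1 - x^N)/(1 - x) over [0, 1]."""
    h = 1.0 / steps
    return h * fsum((1.0 - ((i + 0.5) * h) ** N) / (1.0 - (i + 0.5) * h)
                    for i in range(steps))

print(H(10), H_via_integral(10))     # both ~2.928968

for N, M in [(10, 20), (100, 100)]:
    HN, HM, HNM = H(N), H(M), H(N * M)
    assert HN + HM - sqrt(HN * HM) <= HNM <= HN + HM
print("H_N + H_M - sqrt(H_N H_M) <= H_NM <= H_N + H_M verified")
```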
Define the sequence $\{c_n\}_{n\in\mathbb{N}}$ by
\begin{equation}
c_n = \sum_{k=1}^n \frac{1}{k} - \ln(n)
\end{equation}
We can notice that
\begin{eqnarray*}
c_{n+1}-c_n &=& \left[\sum_{k=1}^{n+1} \frac{1}{k} - \ln(n+1)\right]
-\left[\sum_{k=1}^{n} \frac{1}{k} - \ln(n)\right] \\
&=& \frac{1}{n+1} - \ln(n+1) + \ln(n) \\
&=& \frac{1}{n+1} - \left[\ln(n+1) - \ln(n)\right] \\
&=& \frac{1}{n+1} - \int_{n}^{n+1} \frac{1}{x}\, dx
\end{eqnarray*}
If we graph $y = \frac{1}{x}$ and draw the rectangle of height $\frac{1}{n+1}$ over $[n, n+1]$, we notice that the area of that rectangle, $\frac{1}{n+1}$, is less than $\int_{n}^{n+1} \frac{1}{x}\, dx$. Then for all $n\in \mathbb{N}$:
\begin{equation}
c_{n+1}-c_n = \frac{1}{n+1} - \int_{n}^{n+1} \frac{1}{x}\, dx < 0
\end{equation}
Thus the sequence $\{c_n\}_{n\in\mathbb{N}}$ is monotonically decreasing, with $c_{n+1} < c_n < \cdots < c_2 < c_1 = 1$; since it is also bounded below by $0$ (because $\sum_{k=1}^n \frac{1}{k} > \int_{1}^{n+1}\frac{1}{x}\,dx > \ln(n)$), the limit $\lim_{n\to\infty}c_n$ exists and is less than $1$. This limit is the Euler-Mascheroni constant $\gamma \approx 0.577$.
Now let $\{b_n\}_{n\in\mathbb{N}}$ be the sequence $b_n=nm$ for a fixed $m\in\mathbb{N}$. The sequence $c_n$ and its subsequence $c_{b_n}$ converge to the same number:
\begin{eqnarray}
\lim_{n\to\infty}c_{b_n} &=& \lim_{n\to\infty}c_n \\
\lim_{n\to\infty}\left[\sum_{k=1}^{nm} \frac{1}{k} - \ln(nm)\right] &=& \lim_{n\to\infty}\left[\sum_{k=1}^n \frac{1}{k} - \ln(n)\right] \\
\lim_{n\to\infty}\left[\sum_{k=1}^{nm} \frac{1}{k} - \ln(n) - \ln(m)\right] &=& \lim_{n\to\infty}\left[\sum_{k=1}^n \frac{1}{k} - \ln(n)\right] \\
\lim_{n\to\infty}\left[\sum_{k=1}^{nm} \frac{1}{k} -\sum_{k=1}^n \frac{1}{k}\right] &=& \ln(m) \\
\lim_{n\to\infty}\left[\sum_{k=1}^{nm} \frac{1}{k} -\sum_{k=1}^n \frac{1}{k} -\sum_{k=1}^{m} \frac{1}{k}\right] &=& \ln(m) -\sum_{k=1}^{m} \frac{1}{k} \\
\lim_{m\to\infty}\lim_{n\to\infty}\left[\sum_{k=1}^{nm} \frac{1}{k} -\sum_{k=1}^n \frac{1}{k} -\sum_{k=1}^{m} \frac{1}{k}\right]
&=&
\lim_{m\to\infty}\left[\ln(m) -\sum_{k=1}^{m} \frac{1}{k}\right] \\
\lim_{m\to\infty}\lim_{n\to\infty}\left[\sum_{k=1}^{nm} \frac{1}{k} -\sum_{k=1}^n \frac{1}{k} -\sum_{k=1}^{m} \frac{1}{k}\right]
&=&
-\gamma
\end{eqnarray}
Then for $n,m$ very large:
\begin{eqnarray}
\sum_{k=1}^{nm} \frac{1}{k} -\sum_{k=1}^n \frac{1}{k} -\sum_{k=1}^{m} \frac{1}{k} &\approx& -\gamma \\
\sum_{k=1}^{nm} \frac{1}{k} &\approx& \sum_{k=1}^n \frac{1}{k} + \sum_{k=1}^{m} \frac{1}{k} - \gamma \\
\sum_{k=1}^{nm} \frac{1}{k} &\approx& \sum_{k=1}^n \frac{1}{k} + \sum_{k=1}^{m} \frac{1}{k}
\end{eqnarray}
because $\gamma < 1 \ll \sum_{k=1}^n \frac{1}{k} + \sum_{k=1}^{m} \frac{1}{k}$.
Part of this argument is taken from the book "Euler: The Master of Us All" [1].
References:
[1] Dunham, William, Euler: The Master of Us All, The Dolciani Mathematical Expositions 22, Washington, DC: The Mathematical Association of America, xxviii+185 p. (1999). ZBL0951.01012.
– El Pasta, answered Jan 1 at 5:51 (edited Jan 17 at 20:45)
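The convergence of $c_n$ to $\gamma$ is also easy to see numerically; a minimal Python sketch (the sample values of $n$ are arbitrary):

```python
from math import log, fsum

def c(n):
    """c_n = H_n - ln(n)."""
    return fsum(1.0 / k for k in range(1, n + 1)) - log(n)

for n in [1, 10, 100, 10_000, 1_000_000]:
    print(f"c_{n} = {c(n):.6f}")
# Decreases toward the Euler-Mascheroni constant 0.577216...
```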