Algorithm to find $ab = N$ with $a$ and $b$ as close as possible
Given a number $N$ I would like to factor it as $N=ab$ where $a$ and $b$ are as close as possible; say when $|b-a|$ is minimal. For certain $N$ this is trivial: when $N$ is prime, a product of two primes, a perfect square, or one less than a perfect square, for example.
But in general it seems tricky. A lot of obvious first steps don't work. For example, it is tempting to guess that when $N=k^2M$, we can find $a_M\cdot b_M = M$, and the optimal solution for $N$ will be $ka_M\cdot kb_M$. But this is quite wrong; a counterexample is $N=20$. Are there any divide-and-conquer tactics of this sort that do work?
An obvious algorithm to find the optimal $a$ and $b$ might begin by calculating $s=\left\lfloor\sqrt N\right\rfloor$, which can be done efficiently, and then by working its way down from $s$ looking for a factor of $N$ by trial division, which is slow. Is there a more efficient algorithm?
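The baseline just described might look like this (a hypothetical sketch, not code from the question):

```python
from math import isqrt

def closest_factors(N):
    """Trial division downward from floor(sqrt(N)).  The first
    divisor found is the largest a <= sqrt(N), so (a, N // a) is
    the pair with minimal |b - a|.  Worst case (N prime) this
    does about sqrt(N) divisions -- the slowness noted above."""
    a = isqrt(N)      # floor of the square root, computed exactly
    while N % a:      # walk down until a divides N
        a -= 1
    return a, N // a  # a <= b by construction
```

For instance, `closest_factors(20)` returns `(4, 5)`, and `closest_factors(5040)` returns `(70, 72)`.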
It seems to me that even if the complete factorization of $N$ is known, partitioning the factors into two sets whose products are as close as possible will be NP-hard; it looks similar to the subset-sum problem, for example.
elementary-number-theory factoring
By close, I assume $\vert b - a \vert$ is minimized?
– user17762
Jan 19 '13 at 3:47
I was not sure whether to minimize $|b-a|$ or $\left|\log \frac ba\right|$. But I think the same factorizations are optimal regardless of which metric one chooses. I have amended the question to commit to minimizing $|b-a|$.
– MJD
Jan 19 '13 at 3:49
Peripheral comment: I think I first began to consider this problem when I noticed that $7! = 70\cdot72$.
– MJD
Jan 19 '13 at 3:54
Variants of Fermat's factorization method would be more efficient than your algorithm.
– Ethan
Jan 19 '13 at 3:54
Because of the monotonicity of the log function, minimizing $|b-a|$ or $|\log \frac ba| = |\log b - \log a|$ amounts to the same thing.
– Ross Millikan
Jan 19 '13 at 3:57
edited Jan 8 at 16:10
MJD
asked Jan 19 '13 at 3:45
3 Answers
You are right: it is the knapsack problem. If you factor $N=\prod_i p_i^{a_i}$ and take the log, you get $\log N = \sum_i a_i \log p_i$. Now you are looking to fill a knapsack of size $\frac 12 \log N$ as full as possible with things of size $\log p_i$ (and a limit on the quantity of each) without going over.
answered Jan 19 '13 at 3:56
Ross Millikan
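One way to sketch this view in code (a hypothetical helper, assuming the factorization of $N$ is given as a dict of prime exponents; it enumerates divisors rather than running a knapsack solver):

```python
def best_split(prime_powers):
    """prime_powers: {p: e} giving N = prod(p**e).  Enumerate all
    divisors and keep the largest d with d*d <= N; then (d, N // d)
    minimizes |b - a|.  Work is linear in the number of divisors,
    which can be exponential in the input size -- the knapsack
    flavour of the problem."""
    N = 1
    for p, e in prime_powers.items():
        N *= p ** e
    divisors = [1]
    for p, e in prime_powers.items():
        divisors = [d * p ** k for d in divisors for k in range(e + 1)]
    a = max(d for d in divisors if d * d <= N)
    return a, N // a
```

For example, `best_split({2: 2, 5: 1})` returns `(4, 5)` for $N=20$, and `best_split({2: 4, 3: 2, 5: 1, 7: 1})` returns `(70, 72)` for $7! = 5040$.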
There are some modest improvements one can make over trial division, at least when $N$ is odd. Certainly not enough to make this competitive with modern factoring methods and knapsack algorithms in most cases, but still better than trial division.
See the sections "Fermat's and trial division" and "Sieve improvement" in the Wikipedia article on Fermat factorization; in the case of odd $N$, $N = ab$ with $a$ and $b$ close together is equivalent to $N=A^2 -B^2$ with $B$ as small as possible.
answered Jan 19 '13 at 5:32
Erick Wong
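A minimal sketch of the plain (unsieved) Fermat search for odd $N$, under the equivalence just stated:

```python
from math import isqrt

def fermat_closest(N):
    """Fermat's method for odd N: starting from A = ceil(sqrt(N)),
    find the first A with A*A - N a perfect square B*B.  Then
    N = (A - B)(A + B), and since B is as small as possible, that
    pair of factors is the closest one."""
    assert N % 2 == 1, "this sketch assumes odd N"
    A = isqrt(N)
    if A * A < N:           # round up to ceil(sqrt(N))
        A += 1
    while True:
        B = isqrt(A * A - N)
        if B * B == A * A - N:
            return A - B, A + B
        A += 1
```

For example, `fermat_closest(8051)` returns `(83, 97)` after only two trial values of $A$, where trial division downward from $\lfloor\sqrt{8051}\rfloor = 89$ would take seven.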
This is a hard problem in general since factoring is already a hard problem. Though perhaps you are assuming that $N$'s factorization is already known? It seems not, since you give an algorithm that does not use its factorization.
Given $N=p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}$ where the $p_i$ are prime, you want to partition this multiset into multisets $S$ and $T$ such that their respective products are as close as possible. I'm going to guess this is NP-hard, since it's pretty similar to the knapsack problem (http://en.wikipedia.org/wiki/Knapsack_problem).
answered Jan 19 '13 at 3:59
Fixee
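A brute-force illustration of that partition problem (a hypothetical helper; exponential in the number of prime factors, as the knapsack analogy suggests):

```python
from itertools import combinations
from math import prod

def closest_partition(primes):
    """primes: N's prime factorization as a list with multiplicity,
    e.g. [2, 2, 5] for 20.  Try every sub-multiset S: its product a
    and complement product b = N // a form one candidate split.
    combinations() over a multiset revisits duplicate subsets,
    which is wasteful but harmless for an illustration."""
    N = prod(primes)
    best = (1, N)
    for r in range(len(primes) + 1):
        for combo in combinations(primes, r):
            a = prod(combo)
            if a * a <= N and N // a - a < best[1] - best[0]:
                best = (a, N // a)
    return best
```

For example, `closest_partition([2, 2, 5])` returns `(4, 5)`, matching the $N=20$ counterexample in the question.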
Factoring isn't known to be hard. The hardness comes from the knapsack property.
– John Moeller
Jan 19 '13 at 4:05
Factoring is thought to be hard. Knapsack is NP-Hard, and therefore thought to be hard. If $P = NP$ then all of these are poly-time solvable, but that's unlikely. The best known general factoring algorithms are super-polynomial (on a conventional computer; quantum algorithms are polynomial-time).
– Fixee
Jan 19 '13 at 4:19
Factoring is actually thought not to be NP-hard: en.wikipedia.org/wiki/…
– John Moeller
Jan 19 '13 at 4:22
Correct, which is why I was careful to not claim that it was. If you know an efficient (meaning poly-time) factoring algorithm, please send me a private note and we'll write a paper. Or start a company. Or be captured/assassinated before we can do either...
– Fixee
Jan 19 '13 at 4:35
There's no need to be snide. My point here is that if you mention "hard" in the context of complexity it's good to be specific about what you mean and avoid the colloquial use of "hard" as something that's merely practically intractable. There are problems in $P$ that are practically "hard."
– John Moeller
Jan 19 '13 at 4:41