What is the advantage of adding a $\log$ barrier to solve a linear program?


























Let $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^{m}$, and $x \in \mathbb{R}^{n}$, and write the system of linear inequalities $Ax \leq b$ row-wise as $a_i^T x - b_i \leq 0$ for all $i = 1, \ldots, m$. Consider the following linear program

$$
\min_{Ax \leq b} c^T x \tag{1}
$$

where $c \in \mathbb{R}^{n}$ is an arbitrary vector. If this problem has a solution, then by the representation theorem (Weyl–Minkowski), a solution is attained at a vertex of the constraint set.
*Convex Optimization* (p. 499) states that $f(x) = -\sum_{i=1}^m \log(b_i - a_i^T x)$ is a self-concordant $\log$-barrier function for linear inequalities. So the constrained problem above can be rewritten as the unconstrained problem

$$
\min_{x \in \mathbb{R}^{n}} c^T x - \sum_{i=1}^m \log(b_i - a_i^T x) \tag{2}
$$



Let the solutions of (1) and (2) be $x_1^{\ast}$ and $x_2^{\ast}$, respectively.



1- Are the solutions the same ($x_1^{\ast} = x_2^{\ast}$)? (I mean, the barrier prevents the algorithm from approaching the boundary of the constraint set, since the barrier blows up there. However, the solution lies on the boundary. How can (2) recover it?)

2- In terms of iterations and algorithmic complexity, which one is superior? I know we can use Newton's method to solve (2), but how does using Newton's method accelerate the algorithm?



3- Can we show analytically that the two problems are the same, or that one is superior to the other?



Please address my questions carefully by providing rigorous proofs.
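To make question 1 concrete, here is a small numerical experiment on a hypothetical 1-D instance (not from the book): with $c = 1$ and the box $0 \leq x \leq 1$ (constraints $-x \leq 0$ and $x - 1 \leq 0$), the LP optimum is the vertex $x_1^{\ast} = 0$, yet Newton's method applied to (2) converges to an interior point.

```python
# Hypothetical 1-D instance of (1): minimize c*x with c = 1 subject to
# 0 <= x <= 1, i.e. -x <= 0 and x - 1 <= 0.  LP optimum: vertex x1* = 0.
# Problem (2) then reads: minimize x - log(x) - log(1 - x).
import math

def grad(x):   # first derivative of x - log(x) - log(1 - x)
    return 1.0 - 1.0 / x + 1.0 / (1.0 - x)

def hess(x):   # second derivative (the Hessian is a scalar in 1-D)
    return 1.0 / x ** 2 + 1.0 / (1.0 - x) ** 2

x = 0.5                      # strictly feasible starting point
for _ in range(50):
    x -= grad(x) / hess(x)   # Newton step

x2_star = (3.0 - math.sqrt(5.0)) / 2.0   # exact minimizer of (2), about 0.382
print(x, x2_star)            # x2* differs from the LP vertex x1* = 0
```

Setting the derivative to zero gives $x^2 - 3x + 1 = 0$, so the minimizer of (2) is $(3 - \sqrt{5})/2 \approx 0.382$, strictly inside the feasible interval.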










  • Ah, thanks, yes!
    – jjjjjj, Jan 17 at 0:02
















optimization convex-analysis convex-optimization linear-programming






edited Jan 17 at 0:13 by El borito










asked Jan 16 at 23:25 by Saeed












1 Answer




















  1. No, the solutions are not the same. On p. 499, the book is merely presenting an example of a self-concordant function.


  2. When you ask about algorithm complexity for problem (1), which algorithm do you have in mind? At this point in the book's development, it is not clear how to solve problem (1) efficiently. The book will present a strategy that utilizes log-barrier functions in chapter 11.


  3. No, the problems are not equivalent.



You should read chapter 11 (Interior-point methods), and specifically section 11.2, which presents a strategy to minimize $f_0(x)$ subject to the constraints $f_i(x) \leq 0$, $Ax = b$ by solving a sequence of subproblems of the form
\begin{align}
\text{minimize} &\quad f_0(x) - \frac{1}{t} \sum_{i=1}^m \log(-f_i(x)) \\
\text{subject to} &\quad Ax = b
\end{align}

with increasing values of $t$. Newton's method is used to solve each subproblem. As $t$ increases, the solution of the subproblem becomes a better approximation to the solution of the original problem. That is how log-barrier functions are used to solve problems with inequality constraints.
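This strategy can be sketched on a hypothetical toy LP (minimize $x$ subject to $0 \leq x \leq 1$, optimum $x^{\ast} = 0$). Each subproblem, in the scaled form $t\,x - \log(x) - \log(1 - x)$, is solved with a damped Newton iteration, warm-started from the previous solution:

```python
# Hypothetical toy LP: minimize x subject to 0 <= x <= 1; optimum x* = 0.
# Subproblem for a given t (scaled form): minimize t*x - log(x) - log(1 - x).

def newton(t, x, iters=50):
    """Damped Newton iteration for the 1-D barrier subproblem."""
    for _ in range(iters):
        g = t - 1.0 / x + 1.0 / (1.0 - x)        # gradient
        h = 1.0 / x ** 2 + 1.0 / (1.0 - x) ** 2  # Hessian (a scalar in 1-D)
        step = g / h
        while not (0.0 < x - step < 1.0):        # damp to stay strictly feasible
            step /= 2.0
        x -= step
    return x

x, t = 0.5, 1.0
path = []                   # central-path points x(t)
for _ in range(8):
    x = newton(t, x)        # warm-start from the previous solution
    path.append(x)
    t *= 10.0               # increase t and re-solve

print(path[0], path[-1])    # x(1) is interior; x(10^7) is very close to 0
```

As $t$ grows, the barrier's influence shrinks and $x(t)$ slides toward the vertex that solves the LP, which is exactly the behavior described above.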






  • I appreciate your patience, but I am a little confused. A linear program is a convex problem, so it can be solved using an interior-point method. Is that true? I have the following questions to understand my own question better. Are the solutions $x_1^{\ast}$ and $x_2^{\ast}$ close? Can (2) be seen as a subproblem of (1) that gives an approximate solution? How many subproblems do we need to solve (1) using an interior-point method? What is the optimal value of $t$ for getting a solution close to $x_1^{\ast}$?
    – Saeed, Jan 17 at 0:21










  • Yes, a linear program is a convex problem, and a linear program can be solved using an interior point method. In fact, interior point methods often give state of the art performance for solving LPs. Interior point methods were originally developed for linear programming problems, and only later were extended to more general convex problems.
    – littleO, Jan 17 at 0:26






  • No, the solution to problem (2) is not necessarily very close to the solution to problem (1). However, if you had written problem (2) as minimizing $c^T x - (1/t) \sum_{i=1}^m \log(b_i - a_i^T x)$, then we could say that if $t$ is large, the solution to problem (2) is close to the solution to problem (1).
    – littleO, Jan 17 at 0:29










  • When the factor of $1/t$ is included, and $t$ is large, then the solution to (2) gives a good approximation to the solution of (1). Regarding how many subproblems you need to solve, and how large $t$ needs to be, I would say just read chapter 11 for more info about that.
    – littleO, Jan 17 at 0:31










  • Thank you so much.
    – Saeed, Jan 17 at 1:02












edited Jan 17 at 0:09

























answered Jan 16 at 23:53 by littleO











