Minimising a black box function














I have a function with 8 inputs, which yields a single output. I do not know what the function does, so I cannot use any derivative-based method to minimise that output.
Currently this is done by picking n random vectors within some bounds for each input, evaluating the function at each vector, and keeping the lowest of the n outputs.
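For concreteness, a minimal sketch of that current approach in Python, assuming the simulation has been wrapped in a callable `loss` (the quadratic below is only a dummy stand-in for the real black box):

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(x):
    # Dummy stand-in for the expensive 8-input black box; replace with the real call.
    return float(np.sum((x - 0.3) ** 2))

def random_search(n, lower, upper):
    """Draw n random points in the box [lower, upper] and keep the best one."""
    best_x, best_f = None, np.inf
    for _ in range(n):
        x = lower + (upper - lower) * rng.random(len(lower))
        f = loss(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# best_x, best_f = random_search(n=100, lower=np.zeros(8), upper=np.ones(8))
```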



I have been told to try Ant Colony Optimisation, but I struggle to see how I could implement that for a function with this many inputs.



Any ideas as to how to approach this problem in a better way than it is currently being done will be much appreciated.



The function itself takes a non-negligible time to run, so I am interested in finding a minimum as efficiently as possible, i.e. with as few function evaluations as possible.



EDIT: The bounds for each of the 8 inputs are (0,1), but could conceivably be tightened a little bit.



EDIT 2: The function is actually a collection of processes, a simulation of sorts, that produces a number of outputs. I have those 8 inputs, which I am trying to calibrate so that the outputs closely match the reality I am simulating. So I have 'observed' values for those outputs, and the simulated ones for each set of 8 inputs. That is how the 'loss' is defined: as the distance from that fixed set of observed values.
I do not know its limits, or whether it is differentiable.
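In code, that calibration loss might look like the following sketch, where `run_simulation` is a hypothetical wrapper around the actual simulation (the names are illustrative, not part of the real setup):

```python
import numpy as np

def run_simulation(params):
    # Hypothetical wrapper: run the simulation for the 8 inputs and return its outputs.
    ...

def loss(params, observed):
    """Distance between simulated outputs and the fixed observed values."""
    simulated = np.asarray(run_simulation(params))
    return float(np.linalg.norm(simulated - np.asarray(observed)))
```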










optimization economics

asked Jan 4 at 11:57, edited Jan 4 at 12:39 – MrKaplan












  • What kind of mathematical solution are you expecting? If I interpret right, you have a function $f:\mathbb R^8\to\mathbb R$ with random outputs for each input, and you want to minimise $f$?
    – YiFan, Jan 4 at 12:06










  • I'm afraid I do not follow. What do you mean by random outputs for each input?
    – MrKaplan, Jan 4 at 12:24










  • My point being, we know nothing about the function you just gave us. We don't even know that it is well-behaved (in the sense that it is continuous, differentiable, etc.), so what results, mathematically, do you think we can derive?
    – YiFan, Jan 4 at 12:27










  • Right, I also know relatively little about it. It is continuous. I know that it takes those 8 inputs and produces a single output in the form of a 'loss', which is what I want to minimise.
    – MrKaplan, Jan 4 at 12:31










  • So it is continuous; is it differentiable? Bounded? Do you know any limits? You should put everything you know in the question, which will make the problem easier to answer. It would definitely help to put the context in which the function comes up as well; what "loss" does this function represent?
    – YiFan, Jan 4 at 12:34
















1 Answer



















A relatively simple way to optimize a continuous, non-differentiable function is Nelder-Mead simplex optimization (which, as the linked Wikipedia article comments, is "[n]ot to be confused with Dantzig's simplex algorithm for the problem of linear optimization"). Nelder-Mead has been implemented in many computational systems (e.g. optim(...,method="Nelder-Mead") in R, scipy.optimize.minimize(..., method='Nelder-Mead') in Python, etc. etc.; Numerical Recipes will give you versions in FORTRAN, C, and C++). It's also simple enough that you could implement it yourself if necessary. The simplex method works by constructing an $n$-dimensional simplex of evaluation points and then iteratively using simple heuristic rules (e.g. "find the worst point and establish a new trial point by reflecting through the opposing face of the simplex") to update until the simplex converges approximately to a point.
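To make the SciPy option concrete, here is a minimal sketch of the call, assuming the black box has been wrapped as a Python callable (the quadratic `loss` below is only a placeholder for the real simulation):

```python
import numpy as np
from scipy.optimize import minimize

def loss(x):
    # Placeholder for the expensive 8-input simulation; replace with the real call.
    return float(np.sum((x - 0.3) ** 2))

x0 = np.full(8, 0.5)  # start from the centre of the (0, 1) box
result = minimize(loss, x0, method="Nelder-Mead",
                  options={"xatol": 1e-3, "fatol": 1e-3, "maxfev": 500})

print(result.x, result.fun, result.nfev)
```

Since each evaluation is expensive, the `maxfev` option is the natural place to cap the evaluation budget.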



It's hard to say how many function evaluations would be required, but I would say you could expect on the order of dozens to hundreds of evaluations for a reasonably well-behaved 8-dimensional optimization problem. At $\approx$ 5 minutes per evaluation that would be tedious but not infeasible.



It's not clear whether the (0,1) bounds on your inputs are hard constraints (i.e., are you pretty sure the optimum is inside that space, or do you need to constrain the solution to it?). If they are, things get a bit harder, as most Nelder-Mead implementations don't allow for box constraints (i.e., independent upper/lower bounds on parameters). You could take a look at Powell's BOBYQA method, which might actually be a little more efficient than Nelder-Mead (although also more complex; FORTRAN implementations are available ...)
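One common workaround for the box constraints (a standard trick, not something the answer above spells out) is to reparameterise: optimise unconstrained variables and squash each one into (0, 1) with a logistic transform before calling the simulation, so plain Nelder-Mead can never leave the box. A sketch, with the same placeholder `loss` as above:

```python
import numpy as np
from scipy.optimize import minimize

def loss(x):
    # Placeholder for the real simulation; x is expected to lie in (0, 1)^8.
    return float(np.sum((x - 0.3) ** 2))

def to_unit_box(z):
    """Map unconstrained z in R^8 into the open box (0, 1)^8 via the logistic function."""
    return 1.0 / (1.0 + np.exp(-z))

result = minimize(lambda z: loss(to_unit_box(z)), np.zeros(8), method="Nelder-Mead")
x_best = to_unit_box(result.x)  # best point, back in the original (0, 1) coordinates
```

(Recent SciPy releases also accept a `bounds` argument for `method='Nelder-Mead'`, but whether that is available depends on the installed version.)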



If you can guess that your function is differentiable (which seems like a good guess if the underlying simulation process is deterministic and doesn't contain sharp if/then switches), then you could also (in principle) compute gradients at each point by finite difference methods, i.e.



$$
\frac{\partial f(x_1,\dots,x_i,\dots,x_n)}{\partial x_i} \approx \frac{1}{\delta}\left(f(x_1^*,\dots,x_i^*+\delta,\dots,x_n^*) - f(x_1^*,\dots,x_i^*,\dots,x_n^*)\right)
$$



for a $\delta$ that is neither too small (roundoff/floating-point error) nor too big (approximation error). Then you can use any derivative-based method you like. In practice this isn't always worth it; the combination of cost and numerical error in the finite-difference computations means this method may be dominated by the derivative-free methods described above.
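A minimal sketch of that forward-difference gradient, again with a placeholder for the real black box; note that it costs one extra function evaluation per input (eight here) on top of the base point:

```python
import numpy as np

def loss(x):
    # Placeholder for the real simulation.
    return float(np.sum((x - 0.3) ** 2))

def forward_difference_gradient(f, x, delta=1e-4):
    """Approximate the gradient of f at x by forward differences."""
    x = np.asarray(x, dtype=float)
    f0 = f(x)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = x.copy()
        step[i] += delta
        grad[i] = (f(step) - f0) / delta
    return grad

# grad = forward_difference_gradient(loss, np.full(8, 0.5))
```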






answered Jan 4 at 15:25, edited Jan 5 at 17:39 – Ben Bolker













  • Hi Ben, thank you very much for your answer. I will have a look at the methods listed above. Anything that represents an improvement on the current approach would be fantastic.
    – MrKaplan, Jan 5 at 16:26










