Is combinatorial novelty without insight useful? Who cares if we're the first to use tool T on problem P?
I'm a bioinformatician and former student of applied math. I would like help seeing whether I should change my view.
Many academics, including all of the PIs on my major projects, justify their work by saying "We're the first ones to apply fashionable technique T to problem P". This is in situations where P is often well studied and T was developed and established by other groups. I call it "combinatorial novelty" in the title because the novelty is not in new tools, nor in new insights, but rather in new combinations. The justification is essentially "We're early adopters."
This would be fine if the studies produced valuable new insight about P. P is important and I'd be proud to make progress on it whether or not I'm using fancy new techniques like T. But usually, our progress on P is weak despite using T, so we need to turn to T's fanciness to justify our work. I see people using this "combinatorial novelty" to make their work seem like a big deal.
This seems flimsy, but if every PI I've worked under is doing it, then either it impresses grant reviewers, or it actually is valuable to science and I just don't understand why. Or both. Is this valuable to science? If so, why?
publishability
This feels to me more like a rant than a real question (though I think the answers are helpful to you), and I think it's also using a bit of a straw man argument. I often read papers that tout what you term 'combinatorial novelty' of their approach. I rarely, if ever, see papers where that is the primary or sole justification published in high-impact journals.
– Bryan Krause
Feb 1 at 17:24
If you use the "CMV" tag, then perhaps you may want to replace "good point" and "+1" with "Δ" :P
– Ooker
Feb 1 at 17:30
@BryanKrause It feels like a rant to me too, but the answerers seem to have real answers. Your comment is informative to me as well: "I rarely, if ever, see papers where that is the primary or sole justification published in high-impact journals." So would you say it's typically not enough except as a means to an end?
– burner account
Feb 1 at 17:41
@burneraccount Not sure what you mean as a 'means to an end' but my personal opinion is that all science that is done should be publishable, regardless of impact. I felt like your question implied that you think people are using these justifications to make their work seem like a 'big deal' when they may be simply explaining what they did. Negative/uninteresting findings can still be very important, as the answers have noted.
– Bryan Krause
Feb 1 at 18:22
Not enough for an answer, but I would imagine that if your team thought T might work on P, and decided to spend time and money finding that out, some other team might feel the same way in the future. If you publish your results, the other team(s) may decide to spend their time and money on different avenues of science, instead of carrying out the same experiment.
– Xantix
Feb 1 at 19:06
asked Feb 1 at 14:03 – burner account
edited Feb 3 at 0:14 – D.W.
5 Answers
Is this valuable to science? If so, why?
Because it tells us whether tool T works on problem P. How big the "insight" we gain from this is depends a lot on how different T is from other tools that have already been used on P, or, conversely, how different P is from other problems that T has been applied to.
The range here goes from "it is mind-blowing that T could work on P" all the way to "meh, everybody knew that T would work, because we use it all the time for the closely related problem P′ anyway", although I will grant that, in practice, most work following this schema ends up on the rather incremental side of things.
answered Feb 1 at 14:24 – xLeitix
Good point. I think in my situation, the takeaway is usually "Too bad T didn't tell us as much about P as we hoped." But the paper is usually still pitched as "Look how cool T is!" So there is a disconnect between the rhetoric about T and the true value as seen by this question. Thanks for helping me think this through.
– burner account
Feb 1 at 14:37
"There is a disconnect between the rhetoric and the true value" is the motto of science. You should have it tattooed somewhere.
– user101106
Feb 1 at 15:52
@CJ59, then maybe the tattoo should say something different.
– burner account
Feb 1 at 16:14
@burneraccount are you saying that there's a disconnect between the motto of science and the way the motto should be presented to people?
– John Dvorak
Feb 1 at 20:32
No, I was just joking.
– burner account
Feb 1 at 20:45
I think this is difficult to answer in general. While it is probably not very valuable to apply random technique A to random problem B, it is also useful to find new ways to explore existing problems. Sometimes this "insight" is just that a tool from some other domain might be used to generate additional insight. For example, finding a new way to prove an old theorem in mathematics is often (not always) valuable as the new proof may, itself, offer insights.
So yes, valuable. So no, not so valuable. But it depends. It would be valuable if it helps other researchers get more insight into a field in general, not just into the problem at hand. But throwing stuff at the wall to see what sticks is just throwing stuff at the wall.
+1 For "It would be valuable if it helps other researchers get more insight into a field in general, not just into the problem at hand."
– burner account
Feb 1 at 14:32
I’m more familiar with the final adage in the form “Throwing stuff at the wall to see what sticks is just feeding the paint industry”. :-þ
– Janus Bahs Jacquet
Feb 2 at 18:25
I think your gut impression is likely correct. If P and T are relatively well known, then there is little value (other than for a new student in their learning) in applying one to the other. For instance, using sophisticated crystallography software to solve a crystal structure that was already solved correctly using direct methods.
I am very much a fan of datapoint science. But ideally, you can do something new (make a new compound, find a boiling point, correlate response to a medical treatment, etc.). In other words, I am fine with "stamp collecting". But T on P sounds a bit weak. There are so many cool things to look at that you would think these guys could find a little more novelty (even moderate things).
I'm even sympathetic to some "negative" results. But it sounds like these guys are milking things. And then the gushy wording...but don't get me started on hype scientists.
I think you have the right instinct. I would just try to figure out something a little bit more interesting. Doesn't need to be discovering gravity. But in your own work, do a little more.
This would be fine if the studies produced valuable new insight about P
New insight isn't the only important thing in research. Many types of research have resource limits, including manpower, equipment costs, computing resources, etc. If applying technique T to problem P reduces resource requirements, despite giving the exact same results, that can have profound impacts on future research, whether it's being able to increase sample sizes, free up money for other equipment, or being able to do things at a larger scale or finer granularity. This in turn also makes it more feasible for smaller research groups to start tackling the same problem when they couldn't before because of limited resources.
The wording you use is "Combinatorial Novelty". Interesting choice of words.
First, I guess you admit that there is indeed novelty to work of this kind. And novelty is an important part of academic work: in order for knowledge to progress, new things need to be tried and invented. In academia, you also need to make clear what your innovations are, and for everything that is not yours, you had better cite the original source.
I myself look down on anyone who claims that they, their group, or their company is "the first to...", since in the modern world such a claim requires so many concessions and qualifications before it is actually true. Example: "Our company is the first one to apply this research with only partial public funding into a product released in this region with academic and government advisers." Pretty surely someone else already did the thing, under slightly but not meaningfully different conditions. I am also guessing that whoever says this kind of thing is trying to trick a reporter into writing "this company is the first to develop a product with this technology".
Note that your combinatorial novelty is usually far from that. Also, unlike reporters and marketing departments, researchers usually report much more honestly and are reviewed with more care.
Of course, there is much less glory in applying a known technique from another field in a new context than in developing a completely new technique. Then again, what is more important: glory and effort, or results? If the technique you have borrowed heavily improves the state of the art in your field, how would you have found this out if not by trying to apply it? And since you have gone through the effort of learning, implementing and applying a new technique, then publish the damn results!
In mathematics, it happens that the same technique is rediscovered over and over again by researchers in different fields, because those researchers really needed it in the fields they were working in (the method of least squares is the example that comes to mind). Usually this is restricted to simpler techniques, but imagine a method that would require thousands of lines of code to implement: do you really believe it becomes something so specific that it has no further use? Do you think it would be efficient for everyone to develop their own brand of the procedure just to have authorship?
There are indeed cases where you might think in advance that a certain technique will surely not improve anything. But likewise, you sometimes have an idea that you think will be a great advancement, only to discover it is no improvement at all. When that happens, you probably recognize that it was a worthy attempt. But if you were using a technique from another field, you might trick yourself into believing that you knew the idea was bad all along.
Also, remember that people in academic careers don't find exciting new results every day, maybe not every year, maybe not every decade. Yet master's degrees, PhDs and postdocs need to be completed within their respective time constraints. It's good that we have methods that reliably generate novelty in a limited time frame. It's also good that something can be done to advance an academic career that could otherwise be halted. It's "better than nothing".
It is also important to compare between methods, so that a reference methodology can emerge in certain fields. Sometimes it is the "state of the art" technique; sometimes it is just the "sanity check" one. In portfolio management theory, for example, the Markowitz model can be used as a broad reference for comparing against improvements, while a constant-ratio portfolio usually provides a good baseline. For a research team to develop these kinds of references, it is also important to address the same problems with different tools and cultivate a tradition around them.
However, no research team should restrict itself to doing only that. After the simple combination is done and a novelty is published, some effort should be dedicated to tweaking methods and adapting techniques so that some new, meaningful advancement is at least consciously attempted, rather than stumbled upon by luck.
add a comment |
Your Answer
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "415"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2facademia.stackexchange.com%2fquestions%2f124177%2fis-combinatorial-novelty-without-insight-useful-who-cares-if-were-the-first-to%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
5 Answers
5
active
oldest
votes
5 Answers
5
active
oldest
votes
active
oldest
votes
active
oldest
votes
Is this valuable to science? If so, why?
Because it leads us to understand if tool T works on problem B. How big the "insight" that we gain from this is depends a lot on how different T is to other tools that have already been used on B, or, conversely, how different B is from other problems that T has been applied to.
The range here goes from "it is mind-blowing that T could work on B" all the way to "meh, everybody knew that T would work because we use it all the time for B' anyway" - although I will grant that most works following this schema in practice end up more on the rather incremental side of things.
6
Good point. I think in my situation, the takeaway is usually "Too bad T didn't tell us as much about P as we hoped." But the paper is usually still pitched as "Look how cool T is!" So there is a disconnect between the rhetoric about T and the true value as seen by this question. Thanks for helping me think this through.
– burner account
Feb 1 at 14:37
10
"There is a disconnect between the rhetoric and the true value" is the motto of science. You should have it tattooed somewhere.
– user101106
Feb 1 at 15:52
5
@CJ59, then maybe the tattoo should say something different.
– burner account
Feb 1 at 16:14
3
@burneraccount are you saying that there's a disconnect between the motto of science and the way the motto should be presented to people?
– John Dvorak
Feb 1 at 20:32
No, I was just a joking.
– burner account
Feb 1 at 20:45
add a comment |
Is this valuable to science? If so, why?
Because it leads us to understand if tool T works on problem B. How big the "insight" that we gain from this is depends a lot on how different T is to other tools that have already been used on B, or, conversely, how different B is from other problems that T has been applied to.
The range here goes from "it is mind-blowing that T could work on B" all the way to "meh, everybody knew that T would work because we use it all the time for B' anyway" - although I will grant that most works following this schema in practice end up more on the rather incremental side of things.
6
Good point. I think in my situation, the takeaway is usually "Too bad T didn't tell us as much about P as we hoped." But the paper is usually still pitched as "Look how cool T is!" So there is a disconnect between the rhetoric about T and the true value as seen by this question. Thanks for helping me think this through.
– burner account
Feb 1 at 14:37
10
"There is a disconnect between the rhetoric and the true value" is the motto of science. You should have it tattooed somewhere.
– user101106
Feb 1 at 15:52
5
@CJ59, then maybe the tattoo should say something different.
– burner account
Feb 1 at 16:14
3
@burneraccount are you saying that there's a disconnect between the motto of science and the way the motto should be presented to people?
– John Dvorak
Feb 1 at 20:32
No, I was just a joking.
– burner account
Feb 1 at 20:45
add a comment |
Is this valuable to science? If so, why?
Because it leads us to understand if tool T works on problem B. How big the "insight" that we gain from this is depends a lot on how different T is to other tools that have already been used on B, or, conversely, how different B is from other problems that T has been applied to.
The range here goes from "it is mind-blowing that T could work on B" all the way to "meh, everybody knew that T would work because we use it all the time for B' anyway" - although I will grant that most works following this schema in practice end up more on the rather incremental side of things.
Is this valuable to science? If so, why?
Because it leads us to understand if tool T works on problem B. How big the "insight" that we gain from this is depends a lot on how different T is to other tools that have already been used on B, or, conversely, how different B is from other problems that T has been applied to.
The range here goes from "it is mind-blowing that T could work on B" all the way to "meh, everybody knew that T would work because we use it all the time for B' anyway" - although I will grant that most works following this schema in practice end up more on the rather incremental side of things.
answered Feb 1 at 14:24
xLeitixxLeitix
102k37245386
102k37245386
6
Good point. I think in my situation, the takeaway is usually "Too bad T didn't tell us as much about P as we hoped." But the paper is usually still pitched as "Look how cool T is!" So there is a disconnect between the rhetoric about T and the true value as seen by this question. Thanks for helping me think this through.
– burner account
Feb 1 at 14:37
10
"There is a disconnect between the rhetoric and the true value" is the motto of science. You should have it tattooed somewhere.
– user101106
Feb 1 at 15:52
5
@CJ59, then maybe the tattoo should say something different.
– burner account
Feb 1 at 16:14
3
@burneraccount are you saying that there's a disconnect between the motto of science and the way the motto should be presented to people?
– John Dvorak
Feb 1 at 20:32
No, I was just a joking.
– burner account
Feb 1 at 20:45
add a comment |
6
Good point. I think in my situation, the takeaway is usually "Too bad T didn't tell us as much about P as we hoped." But the paper is usually still pitched as "Look how cool T is!" So there is a disconnect between the rhetoric about T and the true value as seen by this question. Thanks for helping me think this through.
– burner account
Feb 1 at 14:37
10
"There is a disconnect between the rhetoric and the true value" is the motto of science. You should have it tattooed somewhere.
– user101106
Feb 1 at 15:52
5
@CJ59, then maybe the tattoo should say something different.
– burner account
Feb 1 at 16:14
3
@burneraccount are you saying that there's a disconnect between the motto of science and the way the motto should be presented to people?
– John Dvorak
Feb 1 at 20:32
No, I was just a joking.
– burner account
Feb 1 at 20:45
6
6
Good point. I think in my situation, the takeaway is usually "Too bad T didn't tell us as much about P as we hoped." But the paper is usually still pitched as "Look how cool T is!" So there is a disconnect between the rhetoric about T and the true value as seen by this question. Thanks for helping me think this through.
– burner account
Feb 1 at 14:37
Good point. I think in my situation, the takeaway is usually "Too bad T didn't tell us as much about P as we hoped." But the paper is usually still pitched as "Look how cool T is!" So there is a disconnect between the rhetoric about T and the true value as seen by this question. Thanks for helping me think this through.
– burner account
Feb 1 at 14:37
10
10
"There is a disconnect between the rhetoric and the true value" is the motto of science. You should have it tattooed somewhere.
– user101106
Feb 1 at 15:52
"There is a disconnect between the rhetoric and the true value" is the motto of science. You should have it tattooed somewhere.
– user101106
Feb 1 at 15:52
5
5
@CJ59, then maybe the tattoo should say something different.
– burner account
Feb 1 at 16:14
@CJ59, then maybe the tattoo should say something different.
– burner account
Feb 1 at 16:14
3
3
@burneraccount are you saying that there's a disconnect between the motto of science and the way the motto should be presented to people?
– John Dvorak
Feb 1 at 20:32
@burneraccount are you saying that there's a disconnect between the motto of science and the way the motto should be presented to people?
– John Dvorak
Feb 1 at 20:32
No, I was just a joking.
– burner account
Feb 1 at 20:45
No, I was just a joking.
– burner account
Feb 1 at 20:45
add a comment |
I think this is difficult to answer in general. While it is probably not very valuable to apply random technique A to random problem B, it is also useful to find new ways to explore existing problems. Sometimes this "insight" is just that a tool from some other domain might be used to generate additional insight. For example, finding a new way to prove an old theorem in mathematics is often (not always) valuable as the new proof may, itself, offer insights.
So yes, valuable. So no, not so valuable. But it depends. It would be valuable if it helps other researchers get more insight into a field in general, not just into the problem at hand. But throwing stuff at the wall to see what sticks is just throwing stuff at the wall.
+1 For "It would be valuable if it helps other researchers get more insight into a field in general, not just into the problem at hand."
– burner account
Feb 1 at 14:32
I’m more familiar with the final adage in the form “Throwing stuff at the wall to see what sticks is just feeding the paint industry”. :-þ
– Janus Bahs Jacquet
Feb 2 at 18:25
answered Feb 1 at 14:22
– Buffy
I think your gut impression is likely correct. If P and T are both relatively well known, then there is little value (other than to a new student, for learning) in applying one to the other. For instance, using sophisticated crystallography software to solve a crystal structure that was already solved correctly using direct methods.
I am very much a fan of datapoint science. But ideally, you can do something new (make a new compound, find a boiling point, correlate response to a medical treatment, etc.). In other words, I am fine with "stamp collecting". But T on P sounds a bit weak. There are so many cool things to look at; you would think these guys could get a little more novelty (even modest things).
I'm even sympathetic to some "negative" results. But it sounds like these guys are milking things. And then the gushy wording... but don't get me started on hype scientists.
I think you have the right instinct. I would just try to figure out something a little more interesting. It doesn't need to be discovering gravity. But in your own work, do a little more.
edited Feb 1 at 19:40
answered Feb 1 at 18:30
– guest
This would be fine if the studies produced valuable new insight about P
New insight isn't the only important thing in research. Many types of research have resource limits, including manpower, equipment costs, computing resources, etc. If applying technique T to problem P reduces resource requirements, despite giving the exact same results, that can have profound impacts on future research, whether it's being able to increase sample sizes, free up money for other equipment, or being able to do things at a larger scale or finer granularity. This in turn also makes it more feasible for smaller research groups to start tackling the same problem when they couldn't before because of limited resources.
answered Feb 2 at 0:31
– anjama
The wording you use is "Combinatorial Novelty". Interesting choice of words.
First, I guess you admit that there is indeed novelty in work of this kind. And novelty is an important part of academic work. For knowledge to progress, new things need to be tried and invented. In academic writing, you also need to make clear what your innovations are, and where you have none, you had better cite the original source.
I myself look down on anyone who claims that they, their group, or their company is "the first to...", since in the modern world you need lots of concessions and qualifications before you can truly make such a claim. Example: "Our company is the first one to apply this research with only partial public funding into a product released in this region with academic and government advisers." Pretty sure someone else already did the thing, under slightly but not meaningfully different conditions. I'm also guessing whoever says this kind of thing is trying to trick a reporter into writing "this company is the first to develop a product with this technology".
Note that your combinatorial novelty is usually far from that. Also, unlike reporters and marketing departments, researchers usually report much more honestly and are reviewed with more care.
Of course, there is much less glory in applying a known technique from another field to a new context than in developing a completely new technique. Then again, what is more important: glory and effort, or results? If the technique you've borrowed heavily improves the state of the art in the field you work in, how would you have found that out if not by trying to apply it? And since you've gone through the effort of learning, implementing, and applying a new technique, then publish the damn results!
In mathematics, it happens that the same technique is rediscovered over and over again by researchers in different fields, because those researchers really needed it in the fields they were working in (the least-squares method is the example that comes to mind). Usually this is restricted to simpler techniques. But imagine a method that would require thousands of lines of code to implement: do you really believe it becomes something so specific that it has no further use? Do you think it would be efficient for everyone to develop their own brand of the procedure just to have authorship?
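The least-squares point can be made concrete: rather than every field rederiving and reimplementing the method, one reuses an established implementation. A minimal sketch, with made-up data, using NumPy's off-the-shelf solver as one such example:

```python
import numpy as np

# Fit y ≈ a*x + b by least squares, reusing an established implementation
# instead of rederiving the normal equations by hand.
# The data points below are invented purely for illustration.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])  # generated from y = 2x + 1

A = np.column_stack([x, np.ones_like(x)])      # design matrix [x | 1]
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(a, b)  # recovers slope ≈ 2.0 and intercept ≈ 1.0
```

The borrowed tool does the heavy lifting; the contribution, if any, lies in what the fit tells you about the problem.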
There are indeed cases where you might think in advance that a certain technique will surely not improve anything. But likewise, you sometimes have an idea that you think will be a great advancement, only to discover it is no improvement at all. When that happens, you probably recognize that it was a worthy attempt. But if you were using a technique from another field, you might trick yourself into believing that you knew the idea was bad all along.
Also, remember that people in academic careers don't find exciting new results every day, maybe not every year, maybe not every decade. Yet masters' and PhD theses and postdocs need to be completed within their respective time constraints. It's good that we have methods that reliably generate novelty in a limited time frame. It's also good that something can be done to advance an academic career that would otherwise be halted. It's "better than nothing".
It is also important to compare methods, so that a reference methodology can emerge in a field. Sometimes it is the "state of the art" technique; sometimes it's just the "sanity check" one. In portfolio management theory, the Markowitz model can serve as a broad reference for comparing against improvements, while a constant-ratio portfolio usually provides a good comparison. For a research team to develop these kinds of references, it is also important to address the same problems with different tools and cultivate a tradition around them.
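As a toy illustration of benchmarking against a reference method: below, an equal-weight portfolio stands in for the constant-ratio baseline, and the "candidate" weights are entirely hypothetical, as is the simulated return data.

```python
import numpy as np

# Invented daily returns for 4 assets over 250 trading days (illustration only).
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0005, scale=0.02, size=(250, 4))

def mean_return(weights, returns):
    """Average daily return of a fixed-weight (daily rebalanced) portfolio."""
    return float((returns @ weights).mean())

baseline = np.full(4, 0.25)                   # constant-ratio (equal-weight) reference
candidate = np.array([0.4, 0.3, 0.2, 0.1])    # hypothetical weights from a "new" method

# A new method is only interesting relative to the agreed-upon baseline.
print(mean_return(baseline, returns), mean_return(candidate, returns))
```

The point is not the numbers themselves but the habit: any claimed improvement is reported against the same reference everyone else uses.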
However, no research team should restrict itself to doing only that. After the simple combination is done and the novelty is published, some effort should be dedicated to tweaking methods and adapting techniques, so that some new, meaningful advancement is at least consciously attempted rather than stumbled upon by luck.
edited Feb 5 at 16:19
answered Feb 1 at 16:58
– Mefitico
This feels to me more like a rant than a real question (though I think the answers are helpful to you), and I think it's also using a bit of a straw man argument. I often read papers that tout what you term 'combinatorial novelty' of their approach. I rarely, if ever, see papers where that is the primary or sole justification published in high-impact journals.
– Bryan Krause
Feb 1 at 17:24
if you use the "CMV" tag, then perhaps you may want to replace "good point" and "+1" with "Δ" :P
– Ooker
Feb 1 at 17:30
@BryanKrause It feels like a rant to me too, but the answerers seem to have real answers. Your comment is informative to me as well: "I rarely, if ever, see papers where that is the primary or sole justification published in high-impact journals." So would you say it's typically not enough except as a means to an end?
– burner account
Feb 1 at 17:41
@burneraccount Not sure what you mean by a 'means to an end', but my personal opinion is that all science that is done should be publishable, regardless of impact. I felt like your question implied that you think people are using these justifications to make their work seem like a 'big deal' when they may simply be explaining what they did. Negative/uninteresting findings can still be very important, as the answers have noted.
– Bryan Krause
Feb 1 at 18:22
Not enough for an answer, but I would imagine that if your team thought T might work on P, and decided to spend time and money finding that out, some other team might feel the same way in the future. If you publish your results, the other team(s) may decide to spend their time and money on different avenues of science, instead of carrying out the same experiment.
– Xantix
Feb 1 at 19:06