Are there logicians who argue that knowing and believing are NOT amenable to formal study via modal logics?

Vincent Hendricks and John Symons note the following about epistemic logic:




Epistemic logic gets its start with the recognition that expressions like ‘knows that’ or ‘believes that’ have systematic properties that are amenable to formal study.




However, they also note the problem of logical omniscience implied by the K axiom:




A particularly malignant philosophical problem for epistemic logic is related to closure properties. Axiom K can, under certain circumstances, be generalized to a closure property for an agent's knowledge which is implausibly strong — logical omniscience:



Whenever an agent c knows all of the formulas in a set Γ and A follows logically from Γ, then c also knows A.
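(For concreteness, here is one standard way to write axiom K and the closure property in modal notation. The formalization below is mine, not the entry's, with K_c read as "agent c knows that".)

```latex
% Axiom K (distribution of knowledge over implication):
K_c(\varphi \rightarrow \psi) \rightarrow (K_c\varphi \rightarrow K_c\psi)

% Generalized closure (logical omniscience):
\text{if } \Gamma \models A \text{ and } K_c\gamma \text{ for all } \gamma \in \Gamma,\ \text{then } K_c A
```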




They mention logicians such as Hintikka and Rantala who appear to offer ways around this problem.



Are there logicians who claim there is no way around the problem of logical omniscience and reject the view that expressions such as 'knows that' or 'believes that' are amenable to formal study?





Hendricks, Vincent and Symons, John, "Epistemic Logic", The Stanford Encyclopedia of Philosophy (Fall 2015 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/fall2015/entries/logic-epistemic/.










Tags: logic, epistemology, modal-logic






asked Dec 30 '18 at 16:03 by Frank Hubeny

1 Answer

Logical omniscience was always only a technical problem related to the formalization of epistemic logic in terms of possible worlds. Since classical possible worlds are supposed to be consistent and deductively closed, they must include all the consequences along with their premises, and nothing contradicting the premises. So if we describe the acquisition of knowledge as the elimination of uncertainty, by ruling out possible worlds incompatible with it (Hintikka), we must "know" all the logical consequences of what we know. Hintikka called it the "scandal of deduction": it is absurd that we must know whether the Riemann hypothesis is true just because we know the axioms of set theory.
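A minimal sketch of why this happens, under standard single-agent Kripke semantics (the model, names, and encoding of formulas as nested tuples are my own illustration, not anything from Hintikka or the SEP entry):

```python
# Minimal sketch: knowledge as truth in all accessible possible worlds.

def holds(formula, world, access, val):
    """Truth of a formula at a world of a Kripke model."""
    op = formula[0]
    if op == "atom":                      # atomic proposition
        return formula[1] in val[world]
    if op == "not":
        return not holds(formula[1], world, access, val)
    if op == "implies":
        return (not holds(formula[1], world, access, val)
                or holds(formula[2], world, access, val))
    if op == "K":                         # known = true in every accessible world
        return all(holds(formula[1], v, access, val) for v in access[world])
    raise ValueError(f"unknown operator {op}")

# A toy model: the agent cannot rule out either world.
access = {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}}
val = {"w1": {"p", "q"}, "w2": {"p", "q"}}

p, q = ("atom", "p"), ("atom", "q")
p_implies_q = ("implies", p, q)

assert holds(("K", p), "w1", access, val)            # the agent knows p
assert holds(("K", p_implies_q), "w1", access, val)  # and knows p -> q
assert holds(("K", q), "w1", access, val)            # ...so knowing q is forced

# There is no way to block the last assert: any classical world that makes
# p and p -> q true makes q true as well (remove q from a world's valuation
# and p -> q fails there too), so every consequence of what is known is
# automatically "known".
```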



Hintikka's own solution was to stratify consequences according to their "depth", the number of extra quantifier layers introduced to derive them, and to describe "deep" consequences as inaccessible. New quantifier layers correspond to new objects introduced in proofs that are not mentioned in the initial premises; a simple example is the auxiliary constructions needed to derive geometric theorems. Euclid's demonstrations are by no means trivial.
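As a rough illustration of counting quantifier layers, here is the quantifier rank of a formula, a simplified stand-in for Hintikka's actual measure (the encoding and names are mine):

```python
# Quantifier rank: the deepest nesting of quantifiers in a formula. On this
# simplified picture, a consequence counts as "deep" when deriving it
# requires formulas of higher rank than any of the premises.

def quantifier_rank(formula):
    """formula: nested tuples, e.g. ('forall', 'x', body) or ('and', f, g)."""
    op = formula[0]
    if op == "atom":
        return 0
    if op in ("forall", "exists"):        # one extra quantifier layer
        return 1 + quantifier_rank(formula[2])
    if op == "not":
        return quantifier_rank(formula[1])
    # binary connectives: and / or / implies
    return max(quantifier_rank(f) for f in formula[1:])

# "Every x has some y above it": the inner 'exists y' introduces a new
# object inside the scope of 'forall x', giving rank 2.
f = ("forall", "x", ("exists", "y", ("atom", "above(y,x)")))
assert quantifier_rank(f) == 2
```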



This turned out to be too simplistic for two reasons: Hintikka's depth does not fully capture the complexity of deriving consequences, and he introduced an arbitrary cut-off on what is accessible. The first problem was solved by introducing a second depth, roughly the number of layers of additional assumptions admitted and discharged in deriving the consequences; see D'Agostino's Philosophy of Mathematical Information. The difficulty of deriving some Boolean tautologies from the axioms, even though no quantifiers are involved, illustrates this second depth. The second problem was solved by introducing vague accessibility. A full technical solution in terms of classically impossible (open) possible worlds is outlined, e.g., in Jago's Logical Information and Epistemic Space. To summarize, logical omniscience is no longer considered a problem.
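A toy sketch of the impossible-worlds move (my illustration of the general idea, not Jago's actual construction): once some worlds may fail to be deductively closed, knowledge is no longer closed under consequence.

```python
# Impossible-worlds sketch: a "world" is now just a set of formulas it
# counts as true, with no consistency or deductive-closure requirement.

worlds_true = {
    "w1":     {"p", "p->q", "q"},   # a classical, deductively closed world
    "w_open": {"p", "p->q"},        # impossible world: modus ponens fails here
}
access = {"w1": {"w1", "w_open"}}   # the agent cannot rule out w_open

def knows(formula, world):
    """Knowledge is still truth in every accessible world."""
    return all(formula in worlds_true[v] for v in access[world])

assert knows("p", "w1")
assert knows("p->q", "w1")
assert not knows("q", "w1")         # closure fails: no logical omniscience
```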



There is, however, a much bigger obstacle to the analysis of knowledge, at least along the traditional lines of Plato's justified true belief: the Gettier problem of epistemic luck. It is easy to give examples where the justification, while reasonable, has little to do with the truth of the belief, so the "knowledge" becomes a lucky accident (as with subjects in the Matrix believing they have limbs). Although some technical workarounds are known, see SEP's Reliabilist Epistemology, it is widely believed that the underlying problem is intractable. Zagzebski in The Inescapability of Gettier Problems suggests that knowledge is unanalyzable, and Fodor in Concepts: Where Cognitive Science Went Wrong (freely available) even argues that few interesting concepts are analyzable in the reductive sense. A two-part chapter there is titled The Demise of Definitions.



But to say that a concept is unanalyzable is not to say that it is not amenable to formal study. Fodor himself is well known for a formal study of meaning. Euclid's lines and points, or the sets and elements of set theory, are also unanalyzable. As we have known since Hilbert, basic notions can only be defined implicitly, in terms of their interrelations. Williamson, a modal arch-formalist, is also a proponent of "knowledge first", knowledge as a basic notion. There is a good overview of the various approaches in SEP's Analysis of Knowledge.



Short in Peirce's Theory of Signs outlines a non-possible-worlds approach to modality and knowledge based on Peirce's "vague descriptions", but it is not very developed. The late Wittgenstein moved away from positivist formal approaches to knowledge and meaning toward the language games of the Philosophical Investigations. For the most radical theses one would have to go to the continental anti-formalization thinkers; Heidegger especially comes to mind. Somewhere in between are the proponents of semiotics and hermeneutics, who favor more informal analyses in terms of meaning, understanding and interpretation over formal epistemology; classical names are Dilthey and Husserl. See also What are the differences between philosophies presupposing one Logic versus many logics? on the older, non-formal sense of "logic".






answered Dec 30 '18 at 20:46 by Conifold (edited Dec 31 '18 at 12:19)

          • "logical omniscience is no longer considered a problem" seems like an oversimplification. Just take a look at philpapers.org/s/logical%20omniscience
            – Eliran
            2 days ago










          • @Eliran The problem with logical omniscience asked about in the OP is not whether rationality requires it, which is what the linked paper addresses, but rather that it was forced by the technical apparatus of possible worlds, regardless of what rationality requires. And that is no longer a problem.
            – Conifold
            2 days ago













          Your Answer








          StackExchange.ready(function() {
          var channelOptions = {
          tags: "".split(" "),
          id: "265"
          };
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function() {
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled) {
          StackExchange.using("snippets", function() {
          createEditor();
          });
          }
          else {
          createEditor();
          }
          });

          function createEditor() {
          StackExchange.prepareEditor({
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: false,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: null,
          bindNavPrevention: true,
          postfix: "",
          imageUploader: {
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          },
          noCode: true, onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          });


          }
          });














          draft saved

          draft discarded


















          StackExchange.ready(
          function () {
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fphilosophy.stackexchange.com%2fquestions%2f59230%2fare-there-logicians-who-argue-that-knowing-and-believing-are-not-amenable-to-for%23new-answer', 'question_page');
          }
          );

          Post as a guest















          Required, but never shown

























          1 Answer
          1






          active

          oldest

          votes








          1 Answer
          1






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          4














          Logical omniscience was always only a technical problem related to formalization of epistemic logic in terms of possible worlds. Since classical possible worlds are supposed to be consistent and deductively closed they must include all the consequences along with their premises, and nothing contradicting the premises. So if we are describing acquisition of knowledge as elimination of uncertainty, by ruling out possible worlds incompatible with it (Hintikka), we must "know" all the logical consequences. Hintikka called it the "scandal of deduction": it is absurd that we must know whether the Riemann hypothesis is true just because we know the axioms of set theory.



          Hintikka's own solution was to stratify consequences according to their "depth", the number of extra quantifier layers introduced to derive them, and to describe "deep" consequences as inaccessible. New quantifier layers correspond to new objects introduced in proofs, not mentioned in the initial premises, a simple example are auxiliary constructions needed to derive geometric theorems. Euclid's demonstrations are by no means trivial.



          This turned out to be too simplistic for two reasons: Hintikka's depth does not fully capture the complexity of deriving consequences, and he introduced an arbitrary cut-off on what is accessible. The first problem was solved by introducing a second depth, roughly the number of layers of additional assumptions admitted and discharged in deriving the consequences, see D'Agostino's Philosophy of Mathematical Information. The difficulty of deriving some Boolean tautologies from the axioms, even though there are no quantifiers, illustrates this depth. The second problem was solved by introducing vague accessibility. A full technical solution in terms of classically impossible (open) possible worlds is outlined e.g. in Jago's Logical Information and Epistemic Space. To summarize, logical omniscience is no longer considered a problem.



          There is, however, a much bigger obstacle to analysis of knowledge, at least along the traditional lines of Plato's justified true belief, - the Gettier problem of epistemic luck. It is easy to give examples where justification, while reasonable, has little to do with the truth of the belief, so the "knowledge" becomes a lucky accident (as in subjects in the Matrix believing they have limbs). Although some technical workarounds are known, see SEP's Reliabilist Epistemology, it is widely believed that the underlying problem is intractable. Zagzebski in The Inescapability of Gettier Problems suggests that knowledge is unanalyzable, and Fodor in Concepts: Where Cognitive Science Went Wrong (freely available) even argues that few interesting concepts are analyzable in the reductive sense. A two part chapter there is titled The Demise of Definitions.



          But to say that a concept is unanalyzable is not to say that it is not amenable to formal study. Fodor himself is well known for a formal study of meaning. Euclid's lines and points, or sets and elements of set theory, are also unanalyzable. As we know since Hilbert, basic notions can only be defined implicitly, in terms of their interrelations. Williamson, a modal arch-formalist, is also a proponent of "knowledge first", knowledge as a basic notion. There is a good overview of various approaches in SEP's Analysis of Knowledge.



          Short in Peirce's Theory of Signs outlines a non-possible worlds approach to modality and knowledge based on Peirce's "vague descriptions", but it is not very developed. Late Wittgenstein moved away from positivist formal approaches to knowledge and meaning to language games in the Philosophical Investigations. For the most radical theses one would have to go to the continental anti-formalization thinkers, especially Heidegger comes to mind. Somewhere in between are proponents of semiotics and hermeneutics, who favor more informal analyses in terms of meaning, understanding and interpretation over formal epistemology, classical names are Dilthey and Husserl. See also What are the differences between philosophies presupposing one Logic versus many logics? on the older, non-formal, sense of "logic".






          share|improve this answer























          • "logical omniscience is no longer considered a problem" seems like an oversimplification. Just take a look at philpapers.org/s/logical%20omniscience
            – Eliran
            2 days ago










          • @Eliran The problem with logical omniscience asked about in the OP is not whether rationality requires it, which is what the linked paper addresses, but rather that it was forced by the technical apparatus of possible worlds, regardless of what rationality requires. And that is no longer a problem.
            – Conifold
            2 days ago


















          4














          Logical omniscience was always only a technical problem related to formalization of epistemic logic in terms of possible worlds. Since classical possible worlds are supposed to be consistent and deductively closed they must include all the consequences along with their premises, and nothing contradicting the premises. So if we are describing acquisition of knowledge as elimination of uncertainty, by ruling out possible worlds incompatible with it (Hintikka), we must "know" all the logical consequences. Hintikka called it the "scandal of deduction": it is absurd that we must know whether the Riemann hypothesis is true just because we know the axioms of set theory.



          Hintikka's own solution was to stratify consequences according to their "depth", the number of extra quantifier layers introduced to derive them, and to describe "deep" consequences as inaccessible. New quantifier layers correspond to new objects introduced in proofs, not mentioned in the initial premises, a simple example are auxiliary constructions needed to derive geometric theorems. Euclid's demonstrations are by no means trivial.



          This turned out to be too simplistic for two reasons: Hintikka's depth does not fully capture the complexity of deriving consequences, and he introduced an arbitrary cut-off on what is accessible. The first problem was solved by introducing a second depth, roughly the number of layers of additional assumptions admitted and discharged in deriving the consequences, see D'Agostino's Philosophy of Mathematical Information. The difficulty of deriving some Boolean tautologies from the axioms, even though there are no quantifiers, illustrates this depth. The second problem was solved by introducing vague accessibility. A full technical solution in terms of classically impossible (open) possible worlds is outlined e.g. in Jago's Logical Information and Epistemic Space. To summarize, logical omniscience is no longer considered a problem.



          There is, however, a much bigger obstacle to analysis of knowledge, at least along the traditional lines of Plato's justified true belief, - the Gettier problem of epistemic luck. It is easy to give examples where justification, while reasonable, has little to do with the truth of the belief, so the "knowledge" becomes a lucky accident (as in subjects in the Matrix believing they have limbs). Although some technical workarounds are known, see SEP's Reliabilist Epistemology, it is widely believed that the underlying problem is intractable. Zagzebski in The Inescapability of Gettier Problems suggests that knowledge is unanalyzable, and Fodor in Concepts: Where Cognitive Science Went Wrong (freely available) even argues that few interesting concepts are analyzable in the reductive sense. A two part chapter there is titled The Demise of Definitions.



          But to say that a concept is unanalyzable is not to say that it is not amenable to formal study. Fodor himself is well known for a formal study of meaning. Euclid's lines and points, or sets and elements of set theory, are also unanalyzable. As we know since Hilbert, basic notions can only be defined implicitly, in terms of their interrelations. Williamson, a modal arch-formalist, is also a proponent of "knowledge first", knowledge as a basic notion. There is a good overview of various approaches in SEP's Analysis of Knowledge.



          Short in Peirce's Theory of Signs outlines a non-possible worlds approach to modality and knowledge based on Peirce's "vague descriptions", but it is not very developed. Late Wittgenstein moved away from positivist formal approaches to knowledge and meaning to language games in the Philosophical Investigations. For the most radical theses one would have to go to the continental anti-formalization thinkers, especially Heidegger comes to mind. Somewhere in between are proponents of semiotics and hermeneutics, who favor more informal analyses in terms of meaning, understanding and interpretation over formal epistemology, classical names are Dilthey and Husserl. See also What are the differences between philosophies presupposing one Logic versus many logics? on the older, non-formal, sense of "logic".






          share|improve this answer























          • "logical omniscience is no longer considered a problem" seems like an oversimplification. Just take a look at philpapers.org/s/logical%20omniscience
            – Eliran
            2 days ago










          • @Eliran The problem with logical omniscience asked about in the OP is not whether rationality requires it, which is what the linked paper addresses, but rather that it was forced by the technical apparatus of possible worlds, regardless of what rationality requires. And that is no longer a problem.
            – Conifold
            2 days ago
















          4












          4








          4






          Logical omniscience was always only a technical problem related to formalization of epistemic logic in terms of possible worlds. Since classical possible worlds are supposed to be consistent and deductively closed they must include all the consequences along with their premises, and nothing contradicting the premises. So if we are describing acquisition of knowledge as elimination of uncertainty, by ruling out possible worlds incompatible with it (Hintikka), we must "know" all the logical consequences. Hintikka called it the "scandal of deduction": it is absurd that we must know whether the Riemann hypothesis is true just because we know the axioms of set theory.



          Hintikka's own solution was to stratify consequences according to their "depth", the number of extra quantifier layers introduced to derive them, and to describe "deep" consequences as inaccessible. New quantifier layers correspond to new objects introduced in proofs, not mentioned in the initial premises, a simple example are auxiliary constructions needed to derive geometric theorems. Euclid's demonstrations are by no means trivial.



          This turned out to be too simplistic for two reasons: Hintikka's depth does not fully capture the complexity of deriving consequences, and he introduced an arbitrary cut-off on what is accessible. The first problem was solved by introducing a second depth, roughly the number of layers of additional assumptions admitted and discharged in deriving the consequences, see D'Agostino's Philosophy of Mathematical Information. The difficulty of deriving some Boolean tautologies from the axioms, even though there are no quantifiers, illustrates this depth. The second problem was solved by introducing vague accessibility. A full technical solution in terms of classically impossible (open) possible worlds is outlined e.g. in Jago's Logical Information and Epistemic Space. To summarize, logical omniscience is no longer considered a problem.



          There is, however, a much bigger obstacle to analysis of knowledge, at least along the traditional lines of Plato's justified true belief, - the Gettier problem of epistemic luck. It is easy to give examples where justification, while reasonable, has little to do with the truth of the belief, so the "knowledge" becomes a lucky accident (as in subjects in the Matrix believing they have limbs). Although some technical workarounds are known, see SEP's Reliabilist Epistemology, it is widely believed that the underlying problem is intractable. Zagzebski in The Inescapability of Gettier Problems suggests that knowledge is unanalyzable, and Fodor in Concepts: Where Cognitive Science Went Wrong (freely available) even argues that few interesting concepts are analyzable in the reductive sense. A two part chapter there is titled The Demise of Definitions.



          But to say that a concept is unanalyzable is not to say that it is not amenable to formal study. Fodor himself is well known for a formal study of meaning. Euclid's lines and points, or sets and elements of set theory, are also unanalyzable. As we know since Hilbert, basic notions can only be defined implicitly, in terms of their interrelations. Williamson, a modal arch-formalist, is also a proponent of "knowledge first", knowledge as a basic notion. There is a good overview of various approaches in SEP's Analysis of Knowledge.



          Short in Peirce's Theory of Signs outlines a non-possible worlds approach to modality and knowledge based on Peirce's "vague descriptions", but it is not very developed. Late Wittgenstein moved away from positivist formal approaches to knowledge and meaning to language games in the Philosophical Investigations. For the most radical theses one would have to go to the continental anti-formalization thinkers, especially Heidegger comes to mind. Somewhere in between are proponents of semiotics and hermeneutics, who favor more informal analyses in terms of meaning, understanding and interpretation over formal epistemology, classical names are Dilthey and Husserl. See also What are the differences between philosophies presupposing one Logic versus many logics? on the older, non-formal, sense of "logic".






          share|improve this answer














          Logical omniscience was always only a technical problem related to formalization of epistemic logic in terms of possible worlds. Since classical possible worlds are supposed to be consistent and deductively closed they must include all the consequences along with their premises, and nothing contradicting the premises. So if we are describing acquisition of knowledge as elimination of uncertainty, by ruling out possible worlds incompatible with it (Hintikka), we must "know" all the logical consequences. Hintikka called it the "scandal of deduction": it is absurd that we must know whether the Riemann hypothesis is true just because we know the axioms of set theory.



          Hintikka's own solution was to stratify consequences according to their "depth", the number of extra quantifier layers introduced to derive them, and to describe "deep" consequences as inaccessible. New quantifier layers correspond to new objects introduced in proofs, not mentioned in the initial premises, a simple example are auxiliary constructions needed to derive geometric theorems. Euclid's demonstrations are by no means trivial.



          This turned out to be too simplistic for two reasons: Hintikka's depth does not fully capture the complexity of deriving consequences, and he introduced an arbitrary cut-off on what is accessible. The first problem was solved by introducing a second depth, roughly the number of layers of additional assumptions admitted and discharged in deriving the consequences, see D'Agostino's Philosophy of Mathematical Information. The difficulty of deriving some Boolean tautologies from the axioms, even though there are no quantifiers, illustrates this depth. The second problem was solved by introducing vague accessibility. A full technical solution in terms of classically impossible (open) possible worlds is outlined e.g. in Jago's Logical Information and Epistemic Space. To summarize, logical omniscience is no longer considered a problem.



          There is, however, a much bigger obstacle to analysis of knowledge, at least along the traditional lines of Plato's justified true belief, - the Gettier problem of epistemic luck. It is easy to give examples where justification, while reasonable, has little to do with the truth of the belief, so the "knowledge" becomes a lucky accident (as in subjects in the Matrix believing they have limbs). Although some technical workarounds are known, see SEP's Reliabilist Epistemology, it is widely believed that the underlying problem is intractable. Zagzebski in The Inescapability of Gettier Problems suggests that knowledge is unanalyzable, and Fodor in Concepts: Where Cognitive Science Went Wrong (freely available) even argues that few interesting concepts are analyzable in the reductive sense. A two part chapter there is titled The Demise of Definitions.



          But to say that a concept is unanalyzable is not to say that it is not amenable to formal study. Fodor himself is well known for a formal study of meaning. Euclid's lines and points, or sets and elements of set theory, are also unanalyzable. As we know since Hilbert, basic notions can only be defined implicitly, in terms of their interrelations. Williamson, a modal arch-formalist, is also a proponent of "knowledge first", knowledge as a basic notion. There is a good overview of various approaches in SEP's Analysis of Knowledge.



          Short in Peirce's Theory of Signs outlines a non-possible worlds approach to modality and knowledge based on Peirce's "vague descriptions", but it is not very developed. Late Wittgenstein moved away from positivist formal approaches to knowledge and meaning to language games in the Philosophical Investigations. For the most radical theses one would have to go to the continental anti-formalization thinkers, especially Heidegger comes to mind. Somewhere in between are proponents of semiotics and hermeneutics, who favor more informal analyses in terms of meaning, understanding and interpretation over formal epistemology, classical names are Dilthey and Husserl. See also What are the differences between philosophies presupposing one Logic versus many logics? on the older, non-formal, sense of "logic".







          share|improve this answer














          share|improve this answer



          share|improve this answer








          edited Dec 31 '18 at 12:19

























          answered Dec 30 '18 at 20:46









          Conifold

          35.1k252139




          35.1k252139












          • "logical omniscience is no longer considered a problem" seems like an oversimplification. Just take a look at philpapers.org/s/logical%20omniscience
            – Eliran
            2 days ago










          • @Eliran The problem with logical omniscience asked about in the OP is not whether rationality requires it, which is what the linked paper addresses, but rather that it was forced by the technical apparatus of possible worlds, regardless of what rationality requires. And that is no longer a problem.
            – Conifold
            2 days ago




















          • "logical omniscience is no longer considered a problem" seems like an oversimplification. Just take a look at philpapers.org/s/logical%20omniscience
            – Eliran
            2 days ago










          • @Eliran The problem with logical omniscience asked about in the OP is not whether rationality requires it, which is what the linked paper addresses, but rather that it was forced by the technical apparatus of possible worlds, regardless of what rationality requires. And that is no longer a problem.
            – Conifold
            2 days ago


















          "logical omniscience is no longer considered a problem" seems like an oversimplification. Just take a look at philpapers.org/s/logical%20omniscience
          – Eliran
          2 days ago




          "logical omniscience is no longer considered a problem" seems like an oversimplification. Just take a look at philpapers.org/s/logical%20omniscience
          – Eliran
          2 days ago












          @Eliran The problem with logical omniscience asked about in the OP is not whether rationality requires it, which is what the linked paper addresses, but rather that it was forced by the technical apparatus of possible worlds, regardless of what rationality requires. And that is no longer a problem.
          – Conifold
          2 days ago






          @Eliran The problem with logical omniscience asked about in the OP is not whether rationality requires it, which is what the linked paper addresses, but rather that it was forced by the technical apparatus of possible worlds, regardless of what rationality requires. And that is no longer a problem.
          – Conifold
          2 days ago




















          draft saved

          draft discarded




















































          Thanks for contributing an answer to Philosophy Stack Exchange!


          • Please be sure to answer the question. Provide details and share your research!

          But avoid



          • Asking for help, clarification, or responding to other answers.

          • Making statements based on opinion; back them up with references or personal experience.


          To learn more, see our tips on writing great answers.





          Some of your past answers have not been well-received, and you're in danger of being blocked from answering.


          Please pay close attention to the following guidance:


          • Please be sure to answer the question. Provide details and share your research!

          But avoid



          • Asking for help, clarification, or responding to other answers.

          • Making statements based on opinion; back them up with references or personal experience.


          To learn more, see our tips on writing great answers.




          draft saved


          draft discarded














          StackExchange.ready(
          function () {
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fphilosophy.stackexchange.com%2fquestions%2f59230%2fare-there-logicians-who-argue-that-knowing-and-believing-are-not-amenable-to-for%23new-answer', 'question_page');
          }
          );

          Post as a guest















          Required, but never shown





















































          Required, but never shown














          Required, but never shown












          Required, but never shown







          Required, but never shown

































          Required, but never shown














          Required, but never shown












          Required, but never shown







          Required, but never shown







          Popular posts from this blog

          Human spaceflight

          Can not write log (Is /dev/pts mounted?) - openpty in Ubuntu-on-Windows?

          File:DeusFollowingSea.jpg