
Utilitarianism is right, we're just not omniscient


  • Utilitarianism is right, we're just not omniscient

    What do you make of the argument that utilitarianism is, in principle, the correct ethical theory, and that we just can't properly practice it because our finite minds can't take into account the complex causal chains every human action can set in motion?

  • #2
    Just realized I posted this in Religion by accident. Can an admin move it?


    • #3
      It's been moved.

      There are still good reasons to dismiss the claim that pleasure equals goodness or happiness. For example, the utilitarian thought experiment involving a pleasure machine you hook yourself up to could be set up so that the machine's controller was effectively omniscient, yet there would still be good reasons not to plug into that machine.


      • #4
        That's just a reply to the objection that utilitarianism is impracticable; it's not an argument for utilitarianism. There are plenty of stronger arguments against utilitarianism: the incommensurability of goods, the impossibility of selecting an appropriate function for weighing consequences, inattention to how the concept of good is actually used, and the tendency ultimately to rationalize judgments reached on non-utilitarian grounds.

        However, the impracticability point is the start of one of the objections Bernard Williams raises in his debate with J.J.C. Smart. The utilitarian is placed in an uncomfortable position when it comes to saying how reflective agents should actually be. Agents would be paralyzed if they attempted thorough utilitarian deliberation in every choice they made. But to the extent that one therefore leaves more of the decision-making to habit and prejudice, utilitarianism ceases to be action-guiding (though that is supposed to be one of its major advantages) and tends to eliminate the role of the agent. Once one appreciates this, it becomes harder to see how to formulate a utilitarian principle at all. One doesn't want to say that an action is good or right only if it yields the best consequences; or, at least, if we do say that, then we will simultaneously want agents not always to try to choose the best action, because not aiming directly at it will lead to better consequences in the long run.