Instructions: Use 1000 words to answer each question.

  1. The classical model of visual perception assumes that vision constructs a three-dimensional map of a subject’s immediate surroundings from two-dimensional projections of those surroundings. Explain. What, in your opinion, is the best evidence that supports this model? On what assumptions is this classic model thought to be needed for effective action? Herbert the robot (pp. 100–101 of the Clark text) arguably shows that vision does not have to work this way for effective action. Explain and critically assess.
    Structure:
    • Explain the classic model
    • What kind of evidence supports the classic model in your opinion?
    • Why do people think that the classic model of vision is needed for effective action?
    • Describe Herbert the robot.
    • In what ways is Herbert able to operate effectively without having vision of the classic kind?
    • Does Herbert show that the classic model is wrong? Critically explain.
    Link to Andy Clark’s text
    file:///C:/Users/Anita%20Aliu/Downloads/Mindware%20An%20Introduction%20to%20the%20Philosophy%20of%20Cognitive%20Science%20by%20Andy%20Clark%20(z-lib.org)%20(1).pdf
  2. Critically evaluate Stuart Russell’s conception of a beneficial ultra-intelligent machine (UI). Identify clearly what you think are the dangers of a non-beneficial UI, and critically assess Russell’s argument. Clearly identify your own attitudes both to the dangers and to Russell’s argument.

Background: In Chapter 8 of Human Compatible, Stuart Russell argues that ultra-intelligent (UI) machines won’t be guaranteed to be beneficial unless they are uncertain about the preferences of their users. On page 199, he writes (in summary) that the machine (“Robbie”) has a positive incentive to allow the user (“Harriet”) to switch him off under certain circumstances of uncertainty.
Clearly explain the reasoning behind Russell’s argument. Why are machines playing “assistance games” ideally more deferential to their human users than machines simply programmed with that human’s preferences? Does the structure of assistance games guard against the dangers of machines that take over decision-making from their users?
Here’s how your answer should be structured:
• Why would it be bad if UI took over?
• Russell’s conception of a UI machine that wouldn’t take over.
• His argument to show that it wouldn’t take over.
• Your assessment of his argument.
• An argument to back up your assessment.
