1 comment

  • Friday, Jun 08 2018

    When people display symptoms, let's say chest pain, further testing is provided. Note the first sentence here: the move from symptom to diagnosis. Presumably, someone can have a symptom without a diagnosis: chest pain could be something other than a heart attack. Both the doctor and the machine look to diagnose whether the symptom was a heart attack.

    The flaw in this argument is that we are judging which is better, doctor or machine, on the basis of possibly incomplete information: just the positive diagnoses. Meaning the machine says "yes, that's a heart attack" and gets a higher percentage "correct" than the doctor. This leaves open the odd, but nonetheless fatal, possibility that the machine was simply saying "yes, that's a heart attack" to every patient. A true rendering of who or what is better at diagnosis would include both positive ("yes, that's a heart attack") and negative ("no, that symptom is not a heart attack") instances.

    So let's look at a situation:

    10 patients: 4 have heart attacks and 6 do not.

    The machine correctly diagnoses the 4, but incorrectly diagnoses the 6.

    The doctor correctly diagnoses 3 of the 4, and correctly diagnoses all 6.

    This brings the totals to:

    Machine: 4 correct

    Human: 9 correct

    In the situation described above, how can we say the machine is better? That's what (C) does for us.
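    The arithmetic above can be sketched in a few lines of code. This is a minimal illustration with made-up names: the patient list and the two prediction lists are assumptions built to match the numbers in the example, not data from the original question.

    ```python
    # Hypothetical cohort: True = heart attack present, False = no heart attack.
    patients = [True] * 4 + [False] * 6

    # The machine says "heart attack" for every patient.
    machine = [True] * 10

    # The doctor misses one real heart attack but clears all six healthy patients.
    doctor = [True] * 3 + [False] + [False] * 6

    def accuracy(predictions, actual):
        """Fraction correct over ALL patients (positives and negatives)."""
        return sum(p == a for p, a in zip(predictions, actual)) / len(actual)

    def positive_only_accuracy(predictions, actual):
        """Fraction correct counting only patients who truly had heart attacks --
        the incomplete view the argument warns about."""
        pairs = [(p, a) for p, a in zip(predictions, actual) if a]
        return sum(p == a for p, a in pairs) / len(pairs)

    # Positive-only view: machine looks perfect, doctor looks worse.
    print(positive_only_accuracy(machine, patients))  # 1.0  (4 of 4)
    print(positive_only_accuracy(doctor, patients))   # 0.75 (3 of 4)

    # Full-spectrum view: the doctor clearly wins.
    print(accuracy(machine, patients))  # 0.4 (4 of 10)
    print(accuracy(doctor, patients))   # 0.9 (9 of 10)
    ```

    The same say-yes-to-everyone machine scores 100% on positives alone but only 40% once the negatives are counted, which is exactly the gap the argument turns on.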

    The key for future questions: ask ourselves, is this the full spectrum of the data, or only half of it?

    David

