Yes yes please
I need help clarifying LR 1/q 24.
I understand why B is right - if the info in the computer program codes for the structures of the proteins but not their interactions, then a computerized model of the human genome couldn't perform the operations of the human brain. Therefore, scientists will require more than the info in the human genome to create AI.
I can't succinctly explain why A and C are wrong. I chose A on test day, and C on blind review. I probably skipped over B because it's worded confusingly >.<
I'll do my best below, someone please tell me if I'm understanding this right. Been thinking about it for half an hour.
A (negated) 'The functions of the human brain are governed by processes that can be simulated by a computer.'
The biologist's argument could still be valid. Even if the functions* of the human brain could be simulated by a computer, creating AI would still require more than a computerized version of the human genome if those functions aren't coded for in the genome (essentially what B is saying).
Here's what confuses me: are we supposed to assume that modeling the operation of the human brain is necessary for creating AI? I know that sounds silly but this is the LSAT, take no common sense for granted.
*I'm assuming that functions = operations.
C (negated) 'there are other ways to create an artificial intelligence besides modeling on the operation of the human brain.'
Here's where I got f'd up. This answer leaves open the possibility that there are other ways to create AI using the human genome and modeling it on something besides a human brain. I know that sounds crazy when it's written out in plain English, but like I said earlier, are we just supposed to assume that we're modeling an AI on the operation of a brain? And isn't that sort of a necessary assumption? I'm a little rusty after nearly a month off of studying, but bear* with me here. If the scientists model AI on something else, couldn't they still use the human genome and nothing else? Why do you need a brain model at all?
(And while we're on the subject of bears, I will admit that I had to read question 23 five times just now before I realized it compares dead bear bones to living bear blood. Honestly I feel like I deserved the 8 point test day drop. The answer was immediately obvious after that. Sloppy.)
@ - I think it's possible that you're interpreting this correctly but I'm still confused on the difference between 'encapsulates the information' and 'encoded in.' To me, 'encapsulates the information contained in' is the same thing as saying 'encoded in.' They're both referring to the information contained in the human genome.
The way I interpreted it, the biologist is saying that the coded information isn't enough to make AI. Something has to trigger the interactions between proteins in order for the brain to operate - and that trigger isn't inherently part of the human genome. This is a really dated example, but I think it would be equivalent to saying 'all you need to watch a movie is this tape.' All the information is encoded on the tape, but without the VCR, you can't watch it.