Hello fellow 7Sagers,
A thought struck me this morning and I wanted to share it with you guys. Correct me if I am wrong, but I remember JY Ping saying that the difficulty ratings of questions (Easy, Easier, Medium, Hard, Very Hard) are based on data from 7Sagers who take the preptests. If enrolled members adhere to the order of taking the exams from older to more recent, and it is reasonable to assume that their logical skills, and hence their accuracy, improve in the process, then doesn't that skew the representative difficulty of questions on the more recent tests (i.e., the 50s, 60s, and 70s)?
I would very much appreciate it if someone with the appropriate knowledge could clarify or correct my misconceptions. Thank you.
Comments
Well, one thing we have to assume for your argument is that people actually are taking the tests in order and improving their skills from section to section or test to test. Another assumption we need to make is that the tests themselves aren't inherently getting harder.
I totally understand what you are saying. However, my thinking is kind of like this: the easy-to-medium questions are kind of irrelevant, because those questions should be answered correctly regardless. We should detach ourselves from trying to take anything meaningful from what others have answered correctly or incorrectly in the easy-to-medium range. The hard to very hard questions are going to hold true forever and always, because these are the curve-breaker questions which essentially establish our curve.
@jknauf Thank you for your response. I already stated the assumptions you pointed out in my initial posting.
-I disagree. Performance on "Hard" and "Very Hard" questions will only hold true so long as your logical skills are not up to par.
-A little explanation of what I meant in my initial posting: A question in the 60s of similar logical difficulty to a question in the 30s might end up in the "Medium" pile instead of the "Hard" pile (as it was in the 30s) because the logical skills of the test takers have improved.
I guess what I am wondering is whether there is an objective standard of difficulty for every question, independent of how we perform. I know it is a weird question but I am just curious.
There actually is! In the book LSAT Superprep II, the LSAC goes into detail explaining their questions. One of the components that they provide is the level of difficulty for each question. Is this completely objective? No. But it is probably the closest thing that we can get to it.
I personally think that while difficulty is mostly a subjective thing, many questions are objectively harder. An extreme example would be preptest 62, section 4, question 18. This is an objectively hard question that a majority of LSAT students will struggle with. So while I do think that there is some objectivity to difficulty, the difficulty is substantially mitigated by the student's level of preparation.
Well, it's hard to explain: if you are a beast at, let's say, main point questions, of course a VERY HARD main point question (if such a thing even exists) won't be as hard for you as it is for others.
However, the LSAC does assign difficulty ratings to all questions, which you can see outlined in the Superprep test booklets. The ratings are based on the experimental sections of the LSAT and how many people correctly or incorrectly choose an answer choice.
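To make the general idea concrete: a rating system like this boils down to bucketing each question by the fraction of test-takers who answered it correctly. Here is a minimal sketch of that idea; the cutoff percentages and sample data are invented for illustration and are not 7Sage's or LSAC's actual thresholds.

```python
# Hypothetical sketch: map a question's accuracy rate to a difficulty
# tier. The cutoffs below are made up for the example; they are NOT
# 7Sage's or LSAC's real thresholds.

def difficulty_tier(pct_correct: float) -> str:
    """Map an accuracy rate (0.0-1.0) to a difficulty label."""
    if pct_correct >= 0.90:
        return "Easy"
    elif pct_correct >= 0.75:
        return "Easier"
    elif pct_correct >= 0.55:
        return "Medium"
    elif pct_correct >= 0.35:
        return "Hard"
    else:
        return "Very Hard"

# Made-up accuracy rates for three hypothetical questions
sample = {"Q1": 0.93, "Q2": 0.60, "Q3": 0.28}
for question, accuracy in sample.items():
    print(question, difficulty_tier(accuracy))
```

This also shows the original poster's worry in miniature: if the pool of test-takers taking the later PTs is stronger, `pct_correct` rises and a question slides into an easier tier even though the question itself never changed.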
@jknauf @JustDoIt Thank you both. I appreciate your responses.
@TheLSAT I found that the 7Sage difficulty rankings do not always adhere to my own experience. However, as @jknauf and @JustDoIt mention, I do find that questions ranked 'hardest' are more likely to be difficult than those with other rankings, and the same goes for 'easiest' questions being easy. There are definitely exceptions, though.
Since there are those questions that are (almost) objectively hard or easy, these outliers will be similarly difficult for nearly all test-takers. The middle ground, though, is much more subjective. There are so many reasons that people choose a wrong answer, be it trap answers, confusing wording, long ACs, etc., and we all have strengths and weaknesses that may not be as apparent from the data we are given.
In the earlier phase of my studying, particularly during the core curriculum, I think the difficulty rankings were more accurate. If one has not studied as much, they may struggle more with harder questions, as they are not yet familiar with the pitfalls that can lead to a wrong AC. For me, as I progressed through the course and my familiarity with the content grew, I found that my perceived difficulty of a question was less consistent with the 7Sage sample. My strengths and weaknesses introduced a personal bias toward certain questions, so I did not always agree with the difficulty rankings.
If this is true, it would mean that the data has more room for error in later tests, as there are more subjective reasons that someone might miss a question on PT 70 vs 40.
Other possible factors affecting the data:
1. Not all PTs are taken in order by every student. People may skip to new PTs before an exam.
2. All data is self-reported, and thus, not confirmable as accurate.
3. Not all users complete PT 36 through the most recent PT, so sample size diminishes (and probably the amount of study time per student increases).
4. Some students use more than one material source, so the sample is not homogenous.
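Point 3 above matters quantitatively: as the pool of respondents shrinks on later PTs, the uncertainty in each question's measured accuracy widens. A quick back-of-the-envelope sketch using the standard error of a proportion (the sample sizes here are invented purely for illustration):

```python
import math

# Standard error of an observed accuracy rate p from n respondents:
#   SE = sqrt(p * (1 - p) / n)
# The respondent counts below are made up to illustrate how a
# shrinking sample widens the uncertainty in the measured accuracy.

def standard_error(p: float, n: int) -> float:
    return math.sqrt(p * (1 - p) / n)

# Same observed accuracy (60%), different numbers of respondents
for n in (2000, 200):
    print(f"n={n}: SE = {standard_error(0.60, n):.3f}")
    # n=2000 -> SE ~ 0.011; n=200 -> SE ~ 0.035
```

So with ten times fewer respondents, the error bar on a question's accuracy is roughly three times wider, which supports the point that difficulty tiers on later PTs are measured less precisely.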
At the end of the day, I use the difficulty info to cluster questions for review. Other than that, I just treat it as a guideline to how others may have experienced the question.