Showing posts with label SAT. Show all posts

Thursday, October 9, 2014

Step-back SAT/GRE problems -- trying something new at "You Do the Math"

I've been thinking about the problem of adapting lessons for different media in general and for video in particular. There is a popular but wildly misguided impression that you can create an effective video by just sticking a camera in front of a live presentation. Teaching live is an interactive process. Even when the students don't say a word, the good teacher is alert to the class's reactions. You speed up, slow down, offer words of encouragement, come up with new examples and occasionally stop what you're doing and go back and reteach a previous section.

With a video lesson you set the course then you leave the room. What's worse, it's a really big room and many if not most of the kids are there because the standard methods of instruction have not served them well.

One idea I'm playing with is thinking of the problems in terms of a graph (as in graph theory, not data visualization) where the path is determined by how well the student is doing. As a start in that direction, I'm playing around with paired problems -- if you are confused by the first (more difficult) problem, there's an easier one to try -- and I've got the first couple up at the teaching blog.

Here's the medium problem:

[Figure: Circle 1]

The radius of circle 1 is 5. Both line segments pass through the center of the circle. Find the area of the shaded region.


You can find the answer and explanation at You Do the Math. Feedback is always appreciated.







Thursday, March 27, 2014

On SAT changes, The New York Times gets the effect right but the direction wrong

That was quick.

Almost immediately after posting this piece on the elimination of the SAT's correction for guessing (The SAT and the penalty for NOT guessing), I came across this from Todd Balf in the New York Times Magazine.
Students were docked one-quarter point for every multiple-choice question they got wrong, requiring a time-consuming risk analysis to determine which questions to answer and which to leave blank. 
I went through this in some detail in the previous post but for a second opinion (and a more concise one), here's Wikipedia:
The questions are weighted equally. For each correct answer, one raw point is added. For each incorrect answer one-fourth of a point is deducted. No points are deducted for incorrect math grid-in questions. This ensures that a student's mathematically expected gain from guessing is zero. The final score is derived from the raw score; the precise conversion chart varies between test administrations.

The SAT therefore recommends only making educated guesses, that is, when the test taker can eliminate at least one answer he or she thinks is wrong. Without eliminating any answers one's probability of answering correctly is 20%. Eliminating one wrong answer increases this probability to 25% (and the expected gain to 1/16 of a point); two, a 33.3% probability (1/6 of a point); and three, a 50% probability (3/8 of a point). 
You could go even further. You don't actually have to eliminate a wrong answer to make guessing a good strategy. If you have any information about the relative likelihood of the options, guessing will have positive expected value.
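As a quick sketch (not from the original post), the arithmetic behind those numbers can be checked in a few lines of Python, with the quarter-point correction applied:

```python
def expected_gain(options=5, eliminated=0, penalty=0.25):
    """Expected raw points from guessing uniformly among the
    options that have not been eliminated."""
    remaining = options - eliminated
    p_correct = 1 / remaining
    # one point if right, minus the correction if wrong
    return p_correct - (1 - p_correct) * penalty

print(expected_gain(eliminated=0))  # 0.0    -- blind guessing breaks even
print(expected_gain(eliminated=1))  # 0.0625 -- 1/16 of a point
print(expected_gain(eliminated=3))  # 0.375  -- 3/8 of a point
```

The same function shows the point made above: any skew in the probabilities -- not just a full elimination -- pushes the expected value above zero.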

The result is that, while time management for a test like the SAT can be complicated, the rule for guessing is embarrassingly simple: give your best guess for questions you read; don't waste time guessing on questions that you didn't have time to read.

The risk analysis actually becomes much more complicated when you take away the penalty for guessing. On the ACT (or the new SAT), there is a positive expected value associated with blind guessing and that value is large enough to cause trouble. Under severe time constraints (a fairly common occurrence with these tests), the minute it would take you to attempt a problem, even if you get it right, would be better spent filling in bubbles for questions you haven't read.
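To put rough numbers on that trade-off (the bubbling rate here is a hypothetical, not a figure from the post):

```python
def blind_guess_ev(options=5, penalty=0.0):
    """Expected points from one blind guess; penalty=0 models the ACT
    or the new SAT, which deduct nothing for wrong answers."""
    return 1 / options - (1 - 1 / options) * penalty

# Hypothetical numbers: suppose a student can bubble 20 unread
# questions in the minute it would take to work one problem.
bubbles_per_minute = 20
ev_bubbling = bubbles_per_minute * blind_guess_ev()  # 20 * 0.2 = 4.0
ev_solving = 1.0  # at most one point, even if the answer is right
print(ev_bubbling > ev_solving)  # True
```

Under the old quarter-point correction, `blind_guess_ev(penalty=0.25)` is zero, so this whole calculation disappears and the student can ignore unread questions.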

Putting aside what this does to the validity of the test, trying to decide when to start guessing is a real and needless distraction for test takers. In other words, just to put far too fine a point on it, the claim about the effects of the correction for guessing isn't just wrong; it is the opposite of right. The old system didn't require time-consuming risk analysis, but the new one does.

As I said in the previous post, this represents a fairly small aspect of the changes in the SAT (loss of orthogonality being a much bigger concern). Furthermore, the SAT represents a fairly small and perhaps even relatively benign part of the story of David Coleman's education reform initiatives. Nonetheless, this one shouldn't be that difficult to get right, particularly for a publication with the reputation of the New York Times.

Of course, given that this is the second recent high-profile piece from the paper to take an anti-SAT slant, it's possible certain claims weren't vetted as well as others.

Wednesday, March 26, 2014

The SAT and the penalty for NOT guessing

Last week we had a post on why David Coleman's announcement that the SAT would now feature more "real world" problems was bad news, probably leading to worse questions and almost certainly hurting the test's orthogonality with respect to GPA and other transcript-based variables. Now let's take a look at the elimination of the so-called penalty for guessing.

The SAT never had a penalty for guessing, not in the sense that guessing lowered your expected score. What the SAT did have was a correction for guessing. On a multiple-choice test without the correction (which is to say, pretty much all tests except the SAT), blindly guessing on the questions you didn't get a chance to look at will tend to raise your score. Let's say, for example, two students took a five-option test where they knew the answers to the first fifty questions and had no clue what the second fifty were asking (assume they were in Sanskrit). If Student 1 left the Sanskrit questions blank, he or she would get fifty points on the test. If Student 2 answered 'B' to all the Sanskrit questions, he or she would probably get around sixty points.

From an analytic standpoint, that's a big concern. We want to rank the students based on their knowledge of the material, but here we have two students with the same mastery of the material and a ten-point difference in scores. Worse yet, let's say we have a third student who knows a bit of Sanskrit and manages to answer five of those questions, leaving the rest blank, thus scoring fifty-five points. Student 3 knows the material better than Student 2, but Student 2 makes a higher score. That's pretty much the worst-case scenario for a test.

Now let's say that we subtracted a fraction of a point for each wrong answer -- 1/4 in this case, 1/(number of options - 1) in general -- but not for a blank. Now Student 1 and Student 2 both have fifty points while Student 3 still has fifty-five. The lark's on the wing, the snail's on the thorn, the statistician has rank-ordered the population and all's right with the world.
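The bookkeeping for the three students is easy to sketch (expected values, following the example above):

```python
def raw_score(right, wrong, penalty=0):
    """Raw score with an optional per-wrong-answer correction."""
    return right - penalty * wrong

# Student 1: 50 right, 50 blank.
# Student 2: 50 right plus blind guesses on 50 five-option
#            questions -- about 10 more right, 40 wrong.
# Student 3: 55 right, 45 blank.
no_correction = [raw_score(50, 0), raw_score(60, 40), raw_score(55, 0)]
corrected = [raw_score(50, 0, 0.25), raw_score(60, 40, 0.25), raw_score(55, 0, 0.25)]
print(no_correction)  # [50, 60, 55] -- the guesser outscores Student 3
print(corrected)      # [50.0, 50.0, 55.0] -- rank order restored
```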

[Note that these scales are set to balance out for blind guessing. Students making informed guesses ("I know it can't be 'E'") will still come out ahead of those leaving a question blank. This too is as it should be.]

You can't really say that Student 2 has been penalized for guessing since the outcome for guessing is, on average, the same as the outcome for not guessing. It would be more accurate to say that 1 and 3 were originally penalized for NOT guessing.

Compared to some of the other issues we've discussed regarding the SAT, this one is fairly small, but it does illustrate a couple of important points about the test. First, the SAT is a carefully designed test, and second, some of the recent changes aren't nearly so well thought out.

Friday, September 24, 2010

Alphametics, the SAT and the theory behind math tests

I once saw an alphametic in an SAT question -- simpler than this one but with the same basic principle. My first thought (after, "Was that an alphametic?") was: what a great question.

Of course, solving alphametics is a completely useless skill. No one has ever needed, or will ever need, to do one of these. It is that very frivolousness that makes it such a good question for a college entrance exam. It requires sophisticated mathematical reasoning but it comes in a form almost none of the students will have seen before.
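The SAT item itself isn't reproduced here, so as an illustration, here's a brute-force solver for the classic alphametic SEND + MORE = MONEY (each letter stands for a distinct digit, leading digits nonzero):

```python
from itertools import permutations

def solve_send_more_money():
    """Try assignments of distinct digits to the eight letters."""
    for m, s, e, n, d, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:  # no leading zeros
            continue
        send = 1000*s + 100*e + 10*n + d
        more = 1000*m + 100*o + 10*r + e
        money = 10000*m + 1000*o + 100*n + 10*e + y
        if send + more == money:
            return send, more, money

print(solve_send_more_money())  # (9567, 1085, 10652) -- the unique solution
```

The SAT version would be far simpler, of course; the point is that the reasoning (constraints on carries, distinct digits) can't be reduced to a drilled algorithm.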

For comparison, consider a problem you would not see on the SAT*, factoring a trinomial that wasn't the square of a binomial (this is another skill you'll never actually need, but it's not a bad way for students to get a feel for working with polynomials). Let's look at two students who got the problem right:

Student one hasn't taken algebra since junior high but understands the fundamental relationships, and finds the correct answer by multiplying out the possibilities;

Student two was recently taught an algorithm for factoring, doesn't really understand the foundation but is able to grind out the right answer.

Obviously, we have a confounding problem here, and a fairly common one at that. We would like to identify understanding and long-term retention, but these can easily be confused with familiarity with recently presented information (particularly when certain teachers bend their schedules and curricula out of shape to teach to the test). The people behind the SAT have partly addressed this confounding by including puzzle-type questions that most students would be unfamiliar with.**

All too often, the people behind other standardized tests deal with the issue by pretending it doesn't exist.




* Not to be confused with the SAT II, which is a different and less interesting test.

** The type of kid who reads Martin Gardner books for recreation would generally do fine on the SAT even without the familiarity factor (though the prom may not go as well).

Sunday, May 16, 2010

How to ace the essay section of the SAT

Write badly.





When I was teaching at a small urban prep school, we had a faculty meeting to discuss the writing section of the SAT. The people at College Board had provided a set of sample essays with grading guidelines. We individually scored each of the papers then got together to compare our results. There were some minor disagreements -- two of the essays were close in quality and we had trouble deciding which was best -- but there was one thing which we all agreed on: the one that the College Board listed as best came in a distant third.

The College Board's choice was terrible, but it was terrible in a distinct way that any English comp teacher would immediately recognize. The writer had tried to show his or her erudition by stuffing the essay with vocabulary words that weren't apt and literary allusions that didn't advance the argument. The prose was choppy, the sentences were clumsy, and the logic was flawed.

I remember discussing how we should handle this. Should we teach students how to write sharp, readable essays or should we tell them just to use the biggest words they could think of and shoehorn in a list of inappropriate literary references?

Now I know we should have added: make it as long as possible.

(with thanks to Mr. Colbert for the tip.)

Friday, May 14, 2010

How to ace the math section of the SAT

The SAT is the toughest ninth grade math test you'll ever take. The questions can be complex, subtle, even tricky in the sense that you have to pay attention to what the problem actually says, not what you expect it to say, but even the most challenging of the problems require no mathematical background beyond Algebra I and a few basic geometry concepts.

Because of this basic-math/difficult-questions dichotomy, many of those questions have a short solution and a long one (check out this Colbert clip for an example). Quite a few others just have long solutions. Since the SAT gives students about ninety seconds for each problem, a high score normally indicates a student who was insightful enough to spot AHA! solutions* and fast enough to get through the rest.

Of course, if the SAT were not a timed test, a high score wouldn't indicate much of anything. Which is why the following is so troubling.

From ABC News:
At the elite Wayland High school outside Boston, the number of students receiving special accommodations is more than 12 percent, more than six times the estimated national average of high school students with learning disabilities.

Wayland guidance counselor Norma Greenberg said that it's not that difficult for wealthy, well-connected students to get the diagnoses they want.

"There are a lot of hired guns out there, there are a lot of psychologists who you can pay a lot of money to and get a murky diagnosis of subtle learning issues," Greenberg said. ...

The natural proportion of learning disabilities should be somewhere around 2 percent, the College Board said, but at some elite schools, up to 46 percent of students receive special accommodations to take the tests, including extra time.

This is not a new problem. I know from personal experience as a teacher that public schools have a history of trying to keep kids from being diagnosed as LD, both to save money and avoid paperwork. Everyone in education knew that, just as everyone knew that expensive private schools were working the system in the other direction.

* I assume everyone has read this. If you haven't, you should.

"Just when I thought I was out... they pull me back in."

I was going to take a week off from blogging and work on another project, but this clip from the Colbert Report was too good to go unmentioned. Pay particular attention to the parts about learning disabilities and the grading of the writing section:

[Video: The Colbert Report, "Stephen's Sound Advice - How to Ace the SATs," www.colbertnation.com]