I can never forget an evening late into a semester of my Introduction to Python course, during which I asked my students a question about user-defined classes. Here’s the code I had put on the board:
```python
class MyClass(object):
    var = 0

    def __init__(self):                # called on each instantiation
        MyClass.var = MyClass.var + 1

x = MyClass()  # new instance created
y = MyClass()  # new instance created
```
As new information for this particular lesson, I informed them that every time a new MyClass instance is created, the __init__() method is called implicitly. In other words, the code above calls __init__() twice, and in executing the code in __init__(), the variable MyClass.var is incremented, so this, too, is happening twice.
So, I asked them: after the above code is executed, what is the value of MyClass.var?
The hand of this class’ most enthusiastic student shot into the air.
“One!” he answered proudly. And for a moment my mouth stood open. I had worked with this student for hours, patiently explained variable assignment, incrementation, method calls, and instantiation; and I was now adding one new piece: that the __init__() method gets called once for every new instance of the class (expecting them to see that each call incremented the same variable, MyClass.var). My student was an eager learner, and sounded smart when he talked about his work as a hedge fund analyst.
The answer was staring him in the face. And perhaps you could understand my frustration; I mean, just look at the code. How many instances are being created (and thus how many times is __init__() called, and thus how many times is MyClass.var incremented)? Why didn’t my earnest student say “two”?
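For readers who want to verify the answer for themselves, here is the board example as runnable code, with one added print statement to report the final count:

```python
class MyClass(object):
    var = 0  # a class variable, shared by all instances

    def __init__(self):
        # runs implicitly each time an instance is created
        MyClass.var = MyClass.var + 1

x = MyClass()  # first call to __init__()
y = MyClass()  # second call to __init__()

print(MyClass.var)  # prints 2: one increment per instance
```

Two instances, two implicit calls to __init__(), two increments of the shared class variable.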
Perhaps he just saw the number 1 in the code and assumed the increment was happening once; or perhaps he wasn’t listening for a moment — lecture is a notoriously poor method of disseminating technical knowledge. But it was clear that he was guessing, and somewhat randomly. In other words, he didn’t seem to be modeling the cause-and-effect of procedural statements. It seemed that in nine weeks of lectures and numerous homework exercises, I had taught him nothing.
And as I smiled in a grandfatherly sort of way and called on the next student, my little teacher’s party balloon quietly deflated and lay itself limply on the floor.
All teachers of programming must at some point acknowledge that some of our students aren’t “getting it.” But what’s to be done? Should we decide that there are some who can code, and some who can’t?
In a series of papers, Richard Bornat and Saeed Dehnadi of Middlesex University initially claimed to have devised a diagnostic test, given to incoming first-year programming students, that revealed a “double hump,” or double bell curve, describing two groups of students: those who “got” programming concepts intuitively, even before their training began, and those who didn’t. Inferring that some people are simply not cut out to be coders, Bornat couldn’t hide the frustration born of 30 years’ experience teaching programming to beginners:
In particular, most people can’t learn to program: between 30% and 60% of every university computer science department’s intake fail the first programming course. Experienced teachers are weary but never oblivious to this fact; bright-eyed beginners who believe that the old ones must have been doing it wrong learn the truth from bitter experience; and so it has been for almost two generations, ever since programming took off in the 1960s. [emphasis mine]
But in a subsequent paper, Bornat retracted both the idea about prior aptitude and the reliability of the test, while acknowledging that the first, unpublished paper has spread across the Web and “continues to mislead to this day.”
Indeed, one can find a good number of opinionators weighing in on the subject. In “Separating Programming Sheep from Non-Programming Goats,” Stack Overflow co-founder Jeff Atwood cites Bornat’s initial paper and concludes, “the act of programming seems literally unteachable to a sizable subset of incoming computer science students.” Linux creator Linus Torvalds has been quoted as saying, “I actually don’t believe that everybody should necessarily try to learn to code,” although he does propose that people be exposed to it to see if they have “the aptitude.” Clayton Lewis of the University of Colorado at Boulder conducted a survey in which 77% of responding computer science faculty strongly disagreed with the statement “Nearly everyone is capable of succeeding in the computer science curriculum if they work at it.”
As a “bright-eyed beginner” (with a scant 15 years of introductory programming teaching under my belt), it’s hard for me to accept the assertion that there are “some who can’t.” Such reasoning smacks of elitism and prejudice, even if such attitudes aren’t expressed consciously. Of course, I’ll be the first to admit that my own opinion rests heavily on my own preconceptions: I’ve always had that “Montessori feeling” — every interested student should be given a chance to try, and sometimes fail, in a supportive environment.
So, rather than give up on some, shouldn’t educators themselves keep trying? The inverse of the question “are there some students who can’t learn?” is this question: “are there some students whom our (current) teaching methods can’t reach?” The first question by itself implies a “yes,” and thus closes a door on some students. The second question opens up a world of inquiry: if basic coding concepts are truly so simple (as they are once the abstraction is understood), what do we need to do to bring the hard cases home?
I have observed in many hours of one-on-one tutoring work with my students that there is a strange disconnect — what I see as a kind of “code dyslexia” — that affects some of them. (More than a few students have expressed confusion between Python’s str.strip() and str.split() methods — and what is confusing about these distinct and descriptive English words but the spelling?) I find the problem fascinating, often witnessing how the simplest concepts can elude otherwise manifestly intelligent students.
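The two string methods most often tangled in my experience differ by only two letters while doing entirely different jobs, and a quick interactive check makes the distinction concrete:

```python
s = "  spam, eggs  "

# str.strip() removes leading and trailing whitespace
# (or other characters, if an argument is given)
print(s.strip())      # 'spam, eggs'

# str.split() breaks the string into a list on a separator
print(s.split(","))   # ['  spam', ' eggs  ']
```

Seeing the two results side by side, rather than merely hearing the names, is usually what finally separates them in a student’s mind.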
But I don’t question any student’s aptitude to understand the basics. Instead, I think it’s a function of assumptions and cognitive perspective — they haven’t yet learned how to discern the dynamics of this strange new environment, how even to ask the questions or perform the tests that would lead to such an understanding. Sometimes when I’m describing the statements of a procedural program as though the student is constructing a kind of Rube Goldberg machine, I reflect that if students could see what was happening in their little code machines, if they could visualize the cause and effect of each piece, indeed if each little function and operator were rendered as a little wooden toy that accomplished one little task, they’d be far more likely to see the relationships between elements and eventually construct a solution. After all, we all know how to play with toys!
Fortunately, my observations are not new, and exploration of this kind is ongoing. Studies and opinions on this topic are legion, and a quick review shows the depths already explored. Mark Guzdial, a prolific contributor on this topic, asserts that “How we Teach Introductory Computer Science is Wrong” and exhorts educators to embrace active learning and eschew lecture. Marcia Linn and Michael Clancy argue that students should spend time reading code before beginning to write code. Jens Bennedsen and Michael E. Caspersen argue that “process recordings” (videos) are far more effective than textbooks because the programming process is dynamic and is thus better suited to dynamic demonstration. Naseem Rahman, David Nandigam, and Sreenivas Sremath Tirumala offer the “coaching mindset” as a superior approach, one that takes into account the whole student: their perspective, and the skills and knowledge they already possess.
And, of course, we are currently witnessing new ways of approaching coding education through the interactivity of the Web. Codecademy and Khan Academy are two well-known examples. Scratch from MIT can help children (and adults!) understand the cause-and-effect of code “assertions.” Bret Victor, a former Apple UI designer, provides a fascinating look at how an interactive and visual coding environment could be designed in “Learnable Programming.”
And my approach? I am currently focusing on reading as well as writing. My online students are offered a brief, focused video lesson that is followed by short exercises that allow them to apply the new concepts immediately. There are two keys: first, the student is instructed not to guess, and if stumped, to review the solution carefully, including the annotations and explanation, and to make a note to try the same exercise again later. Second, I offer frequent lessons on debugging. This involves reading and understanding error messages, testing each line before writing the next, and using print statements and the Python debugger to report on code execution as it happens. Gradually, through both coding and review, more coding and more review, my students report lasting understanding based on experience.
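As a sketch of that debugging habit in practice (the function and values here are my own invented example, not from a lesson), a student might instrument a short loop with print() calls to watch the state change line by line, and switch to the built-in debugger when print alone isn’t enough:

```python
def running_total(values):
    """Sum a list, reporting on execution as it happens."""
    total = 0
    for v in values:
        total = total + v
        print("added", v, "-> total is now", total)
    return total

# To pause and inspect variables interactively instead, a student can
# insert this line inside the loop:
#     import pdb; pdb.set_trace()

result = running_total([3, 4, 5])
print("result:", result)  # result: 12
```

The point is not the arithmetic but the habit: every intermediate state is made visible, so the student reads cause and effect directly instead of guessing.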
And my enthusiastic beginner who got the question wrong? I sat down with him and explained the Python __init__() constructor from another angle, and he nodded.
“It’s pretty cool, isn’t it?” he smiled.
If you’ve learned the basics of Python and you’re ready for the next step, check out David Blaikie’s video tutorial Python Beyond The Basics — Object Oriented Programming.