Explaining the Why of Your Ed-tech Choices

What does personalized learning mean at your school?

But what exactly does “personalized learning” mean across these varied products and contexts? And more broadly speaking, which labels and claims employed by companies can be trusted? How do the products schools are being offered differ from what teachers are already doing in their classrooms? Is personalized learning being oversold?

They’re all questions that get more complicated by the year for district officials trying to settle on personalized learning strategies and figure out which products will help them meet their goals.

“It’s become such a generic term. It’s aspirin,” said Daniel Gohl, the chief academic officer of Florida’s Broward County Schools, the sixth-largest district in the country. “Slapping on the label ‘personalized’ does not mean that [a product] helps me systematically move student achievement.”

Source: Are Companies Overselling Personalized Learning? — Education Week

I don’t really know what my school district means by personalized learning. It’s nebulous and never really explained. I don’t know the why.

Some years ago, therefore, I hatched the idea of supporting such educators by convening a brain trust of leading theorists, researchers, and practitioners to create – and then disseminate – concise defenses of various features of progressive education. I imagined a set of handouts, each consisting of a single (double-sided) sheet that responded to a common question. The idea was to lay out the case briskly, making liberal use of bullet points and offering a short bibliography at the end for anyone who wanted more information.

One of these "Why Sheets," for example, might explain a teacher's decision to create a curriculum based on kids' questions. Or to set aside time each day for a class meeting. It might defend helping students to understand mathematical principles rather than just memorizing facts and algorithms. Or it might lay out the case for avoiding worksheets, or tests, or homework, or traditional bribe-and-threat classroom management strategies.

Eventually I started thinking about creating additional Why Sheets to help administrators defend enlightened schoolwide policies: why we don’t track students; why we push back against standardized testing and never brag about high scores; why we have multiage classrooms; why we’ve replaced report cards with student-led parent conferences; why we use a problem-solving approach to discipline in place of suspensions and detentions; why our commitment to building community has led us to avoid awards assemblies, spelling bees, and other rituals that pit kids against one another.

In short, any practice that’s constructive yet still controversial would be fair game for one of these punchy handouts. The idea was to help educators explain why they do what they do – and, equally important, why they deliberately avoid doing some things. The sheets would be made available free of charge, uncopyrighted, and accompanied by an invitation to distribute them promiscuously.

The Why Axis – Alfie Kohn

Kohn presents these why sheets as a way to provide support for progressive teachers trying new things, something I’ve suggested at school a time or two.

I don’t consider the mainstream ed-tech notions of personalized learning progressive, but I still want to know the why. I want to know the why of choosing behaviorism and data collection. I want to know the why of choosing, for example, platooning vs. looping. I want to know the why of many things I see in ed.

My professional culture is heavy on writing.

For organizations, the single biggest difference between remote and physical teams is the greater dependence on writing to establish the permanence and portability of organizational culture, norms and habits. Writing is different than speaking because it forces concision, deliberation, and structure, and this impacts how politics plays out in remote teams.

Writing changes the politics of meetings. Every Friday, Zapier employees send out a bulletin with: (1) things I said I'd do this week and their results, (2) other issues that came up, (3) things I'm doing next week. Everyone spends the first 10 minutes of the meeting in silence reading everyone's updates.

Remote teams practice this context setting out of necessity, but it also provides positive auxiliary benefits of “hearing” from everyone around the table, and not letting meetings default to the loudest or most senior in the room. This practice can be adopted by companies with physical workplaces as well (in fact, Zapier CEO Wade Foster borrowed this from Amazon), but it takes discipline and leadership to change behavior, particularly when it is much easier for everyone to just show up like they’re used to.

Writing changes the politics of information sharing and transparency.

Source: Distributed teams are rewriting the rules of office(less) politics | TechCrunch

Communication is oxygen. At my company, we build our communication culture on blogging. We create FAQs and Field Guides and Master Posts for everything. Writing and transparency are important parts of managing change and creating alignment.

Administrators are educators. Educate by writing in the open. Educate by publishing why sheets. Borrow from what works in distributed work: a culture of writing and transparency. Do some of the heavy lifting for teachers who have to defend district decisions to parents. Write. Write on the open web so that teachers can reference why sheets when communicating with parents. Default to open.

Persuasion and Operant Conditioning: The Influence of B. F. Skinner in Big Tech and Ed-tech

I would argue, in total seriousness, that one of the places that Skinnerism thrives today is in computing technologies, particularly in “social” technologies. This, despite the field’s insistence that its development is a result, in part, of the cognitive turn that supposedly displaced behaviorism.

Source: B. F. Skinner: The Most Important Theorist of the 21st Century

Audrey Watters notes the Skinner influence in the behaviorism of big tech and ed-tech in two great pieces: “B. F. Skinner: The Most Important Theorist of the 21st Century” and “Education Technology and the New Behaviorism”.

B. J. Fogg and his Persuasive Technology Lab at Stanford are often touted by those in Silicon Valley as among the "innovators" in this "new" practice of building "hooks" and "nudges" into technology. These folks like to point to what's been dubbed colloquially "The Facebook Class" – a class Fogg taught in which students like Kevin Systrom and Mike Krieger, the founders of Instagram, and Nir Eyal, the author of Hooked, "studied and developed the techniques to make our apps and gadgets addictive," as Wired put it in a recent article talking about how some tech executives now suddenly realize that this might be problematic.

(It’s worth teasing out a little – but probably not in this talk, since I’ve rambled on so long already – the difference, if any, between “persuasion” and “operant conditioning” and how they imagine to leave space for freedom and dignity. Rhetorically and practically.)

I'm on the record elsewhere arguing this framing – "technology as addictive" – has its problems. Nevertheless it is fair to say that the kinds of compulsive behavior that we display with our apps and gadgets are being encouraged by design. All that pecking. All that clicking.

These are “technologies of behavior” that we can trace back to Skinner – perhaps not directly, but certainly indirectly due to Skinner’s continual engagement with the popular press. His fame and his notoriety. Behavioral management – and specifically through operant conditioning – remains a staple of child rearing and pet training. It is at the core of one of the most popular ed-tech apps currently on the market, ClassDojo. Behaviorism also underscores the idea that how we behave and data about how we behave when we click can give programmers insight into how to alter their software and into what we’re thinking.

If we look more broadly – and Skinner surely did – these sorts of technologies of behavior don’t simply work to train and condition individuals; many technologies of behavior are part of a broader attempt to reshape society. “For your own good,” the engineers try to reassure us. “For the good of the world.”

Source: B. F. Skinner: The Most Important Theorist of the 21st Century

In that Baffler article, I make the argument that behavior management apps like ClassDojo's are the latest manifestation of behaviorism, a psychological theory that has underpinned much of the development of education technology. Behaviorism is, of course, most closely associated with B. F. Skinner, who developed the idea of his "teaching machine" when he visited his daughter's fourth grade class in 1953. Skinner believed that a machine could provide a superior form of reinforcement to the human teacher, who relied too much on negative reinforcement, punishing students for bad behavior, rather than on positive reinforcement, the kind that better trains the pigeons.

But I think there's been a resurgence in behaviorism. Its epicenter isn't Harvard, where Skinner taught. It's Stanford. It's Silicon Valley. And this new behaviorism is fundamental to how many new digital technologies are being built.

It’s called “behavior design” today (because at Stanford, you put the word “design” in everything to make it sound beautiful not totally rotten). Stanford psychologist B. J. Fogg and his Persuasive Technology Lab teach engineers and entrepreneurs how to build products – some of the most popular apps can trace their origins to the lab – that manipulate and influence users, encouraging certain actions or behaviors and discouraging others and cultivating a kind of “addiction” or conditioned response. “Contingencies of reinforcement,” as Skinner would call them. “Technique,” Jacques Ellul would say. “Nudges,” per behavioral economist Richard Thaler, recipient of this year’s Nobel Prize for economics.

New technologies are purposefully engineered to demand our attention, to “hijack our minds.” They’re designed to elicit certain responses and to shape and alter our behaviors. Ostensibly all these nudges are supposed to make us better people – that’s the shiniest version of the story promoted in books like Nudge and Thinking about Thinking. But much of this is really about getting us to click on ads, to respond to notifications, to open apps, to stay on Web pages, to scroll, to share – actions and “metrics” that Silicon Valley entrepreneurs and investors value.

There's a darker side still to this as I argued in the first article in this very, very long series: this kind of behavior management has become embedded in our new information architecture. It's "fake news," sure. But it's also disinformation plus big data plus psychological profiling and behavior modification. The Silicon Valley "nudge" is a corporate nudge. But as these technologies are increasingly part of media, scholarship, and schooling, it's a civics nudge too.

Those darling little ClassDojo monsters are a lot less cute when you see them as part of a new regime of educational data science, experimentation, and “psycho-informatics.”

Source: Education Technology and the New Behaviorism
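
The mechanism Watters describes has a simple shape. Here is a minimal sketch, in Python, of a variable-ratio reinforcement schedule – the intermittent-reward pattern Skinner found most habit-forming, and the one that pull-to-refresh feeds and notification streams loosely imitate. The class and parameter names are mine, purely for illustration; this is not any vendor's actual code.

```python
import random

class VariableRatioSchedule:
    """Reward roughly one response in `mean_ratio`, unpredictably.

    Skinner found this schedule produces the most persistent responding;
    slot machines use it, and "refresh for new likes" loosely imitates it.
    (Strictly, this is the random-ratio approximation of variable ratio.)
    """

    def __init__(self, mean_ratio, seed=None):
        self.mean_ratio = mean_ratio
        self.rng = random.Random(seed)

    def respond(self):
        """One response (a lever press, a refresh). True if rewarded."""
        return self.rng.random() < 1.0 / self.mean_ratio

schedule = VariableRatioSchedule(mean_ratio=5, seed=42)
print([schedule.respond() for _ in range(20)])
# Rewards arrive unpredictably, so there is never a "safe" point to stop
# responding. The contingency itself does the hooking.
```

Note how little machinery the "hook" requires: the conditioning lives in the schedule, not in any particular content.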

Autistic people keep warning us about behaviorism. Behaviorism brings the mindset and legacy of the awful men who developed it (Skinner, Lovaas, et al.) into our schools. Behaviorism has a history in autistic and gay conversion therapy. It hasn't grown far enough from that history. It's a bad lens for seeing and understanding humans. It is primitive moral development.

Autistic self-advocates are very concerned about behaviorism and deficit ideology, particularly ABA.

"My experience with special education and ABA demonstrates how the dichotomy of interventions that are designed to optimize the quality of life for individuals on the spectrum can also adversely impact their mental health, and also their self-acceptance of an autistic identity. This is why so many autistic self-advocates are concerned about behavioral modification programs: because of the long-term effects they can have on autistic people's mental health. This is why we need to preach autism acceptance, and center self-advocates in developing appropriate supports for autistic people. That means we need to take autistic people's insights, feelings, and desires into account, instead of dismissing them."

With behaviorism, "the literal meaning of the words is irrelevant when you're being abused. When I was a little girl, I was autistic. And when you're autistic, it's not abuse. It's therapy."

"The abuse of autistic children is so expected, so normalised, so glorified that many symptoms of trauma and ptsd are starting to be seen as autistic traits."

Source: I’m Autistic. Here’s what I’d like you to know.

One of my favorite anecdotes from Asperger’s thesis is when he asks an autistic boy in his clinic if he believes in God. “I don’t like to say I’m not religious,” the boy replies, “I just don’t have any proof of God.” That anecdote shows an appreciation of autistic non-compliance, which Asperger and his colleagues felt was as much a part of their patients’ autism as the challenges they faced. Asperger even anticipated in the 1970s that autistic adults who “valued their freedom” would object to behaviorist training, and that has turned out to be true.

Source: Thinking Person's Guide to Autism: On Hans Asperger, the Nazis, and Autism: A Conversation Across Neurologies

"It's time we outgrew this limited and limiting psychological theory." Reject it from our companies and schools.

Plenty of policies and programs limit our ability to do right by children. But perhaps the most restrictive virtual straitjacket that educators face is behaviorism – a psychological theory that would have us focus exclusively on what can be seen and measured, that ignores or dismisses inner experience and reduces wholes to parts. It also suggests that everything people do can be explained as a quest for reinforcement – and, by implication, that we can control others by rewarding them selectively.

Allow me, then, to propose this rule of thumb: The value of any book, article, or presentation intended for teachers (or parents) is inversely related to the number of times the word “behavior” appears in it. The more our attention is fixed on the surface, the more we slight students’ underlying motives, values, and needs.

It's been decades since academic psychology took seriously the orthodox behaviorism of John B. Watson and B.F. Skinner, which by now has shrunk to a cult-like clan of "behavior analysts." But, alas, its reductionist influence lives on – in classroom (and schoolwide) management programs like PBIS and Class Dojo, in scripted curricula and the reduction of children's learning to "data," in grades and rubrics, in "competency"- and "proficiency"-based approaches to instruction, in standardized assessments, in reading incentives and merit pay for teachers.

It’s time we outgrew this limited and limiting psychological theory. That means attending less to students’ behaviors and more to the students themselves.

Source: It’s Not About Behavior – Alfie Kohn

Operant conditioning and the manipulation of response to stimuli are at the heart of theories that support instructional design. But more, they form the foundation of almost all educational technology – from the VLE or LMS to algorithms for adaptive learning. Building upon behaviorism, Silicon Valley – often in collaboration with venture capitalists with a stake in the education market – has begun to realize Skinner's teaching machines in today's schools and universities.

And there’s the rub. When we went online to teach, we went online almost entirely without any other theories to support us besides instructional design. We went online first assuming that learning could be a calculated, brokered, duplicatable experience. For some reason, we took one look at the early internet and forgot about all the nuance of teaching, all the strange chaos of learning, and surrendered to a philosophy of see, do, hit submit.

The problem we face is not just coded into the VLE, either. It's not just coded into Facebook and Twitter and the way we send an e-mail or the machines we use to send text messages. It's coded into us. We believe that online learning happens this way. We believe that discussions should be posted once and replied to twice. We believe that efficiency is a virtue, that automated proctors and plagiarism detection services are necessary – and more than necessary, helpful.

But these are not things that are true, they are things that are sold.

Source: A Call for Critical Instructional Design

Related:

Algorithms: Opinions Embedded in Math … and Ed-tech

A much more accurate definition of an algorithm is that it’s an opinion embedded in math.

So, we do that every time we build algorithms — we curate our data, we define success, we embed our own values into algorithms.

So when people tell you algorithms make things objective, you say "no, algorithms make things work for the builders of the algorithms."

In general, we have a situation where algorithms are extremely powerful in our daily lives but there is a barrier between us and the people building them, and those people are typically coming from a kind of homogenous group of people who have their particular incentives — if it’s in a corporate setting, usually profit and not usually a question of fairness for the people who are subject to their algorithms.

So we always have to penetrate this fortress. We have to be able to question the algorithms themselves.

We live in the age of the algorithm – mathematical models are sorting our job applications, curating our online worlds, influencing our elections, and even deciding whether or not we should go to prison. But how much do we really know about them? Former Wall St quant, Cathy O’Neil, exposes the reality behind the AI, and explains how algorithms are just as prone to bias and discrimination as the humans who program them.

Source: The Truth About Algorithms | Cathy O’Neil – YouTube
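
O'Neil's "opinion embedded in math" is easy to see in code. The hypothetical sketch below (the teachers, metrics, and weights are all invented for illustration) ranks the same two teachers from the same data under two different definitions of success; the builder's choice of definition, not the data, decides who comes out on top.

```python
# Two hypothetical definitions of teacher "success" over the same data.
teachers = [
    {"name": "A", "test_score_gain": 0.9, "student_wellbeing": 0.2},
    {"name": "B", "test_score_gain": 0.4, "student_wellbeing": 0.9},
]

def score_test_centric(t):
    # Builder 1's opinion: success means test scores, nothing else.
    return 1.0 * t["test_score_gain"] + 0.0 * t["student_wellbeing"]

def score_holistic(t):
    # Builder 2's opinion: success weighs wellbeing equally.
    return 0.5 * t["test_score_gain"] + 0.5 * t["student_wellbeing"]

for score in (score_test_centric, score_holistic):
    ranked = sorted(teachers, key=score, reverse=True)
    print(score.__name__, [t["name"] for t in ranked])
# score_test_centric ranks A first; score_holistic ranks B first.
# Same data, same math. Different embedded opinion, different "best" teacher.
```

Neither ranking is more "objective" than the other; each simply works for whoever chose its weights.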

Follow the video up with Cathy O’Neil’s book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. Here are some selected quotes from the introduction.

And then I made a big change. I quit my job and went to work as a quant for D. E. Shaw, a leading hedge fund. In leaving academia for finance, I carried mathematics from abstract theory into practice. The operations we performed on numbers translated into trillions of dollars sloshing from one account to another. At first I was excited and amazed by working in this new laboratory, the global economy. But in the autumn of 2008, after I’d been there for a bit more than a year, it came crashing down.

The crash made it all too clear that mathematics, once my refuge, was not only deeply entangled in the world's problems but also fueling many of them. The housing crisis, the collapse of major financial institutions, the rise of unemployment – all had been aided and abetted by mathematicians wielding magic formulas. What's more, thanks to the extraordinary powers that I loved so much, math was able to combine with technology to multiply the chaos and misfortune, adding efficiency and scale to systems that I now recognized as flawed.

If we had been clear-headed, we all would have taken a step back at this point to figure out how math had been misused and how we could prevent a similar catastrophe in the future. But instead, in the wake of the crisis, new mathematical techniques were hotter than ever, and expanding into still more domains. They churned 24/7 through petabytes of information, much of it scraped from social media or e-commerce websites. And increasingly they focused not on the movements of global financial markets but on human beings, on us. Mathematicians and statisticians were studying our desires, movements, and spending power. They were predicting our trustworthiness and calculating our potential as students, workers, lovers, criminals.

This was the Big Data economy, and it promised spectacular gains. A computer program could speed through thousands of résumés or loan applications in a second or two and sort them into neat lists, with the most promising candidates on top. This not only saved time but also was marketed as fair and objective.

Yet I saw trouble. The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed in our society, while making the rich richer.

I came up with a name for these harmful kinds of models: Weapons of Math Destruction, or WMDs for short.

Equally important, statistical systems require feedback – something to tell them when they're off track. Statisticians use errors to train their models and make them smarter. If Amazon.com, through a faulty correlation, started recommending lawn care books to teenage girls, the clicks would plummet, and the algorithm would be tweaked until it got it right. Without feedback, however, a statistical engine can continue spinning out faulty and damaging analysis while never learning from its mistakes.

Many of the WMDs I'll be discussing in this book, including the Washington school district's value-added model, behave like that. They define their own reality and use it to justify their results. This type of model is self-perpetuating, highly destructive – and very common.

In WMDs, many poisonous assumptions are camouflaged by math and go largely untested and unquestioned.

This underscores another common feature of WMDs. They tend to punish the poor. This is, in part, because they are engineered to evaluate large numbers of people. They specialize in bulk, and they’re cheap. That’s part of their appeal. The wealthy, by contrast, often benefit from personal input. A white-shoe law firm or an exclusive prep school will lean far more on recommendations and face-to-face interviews than will a fast-food chain or a cash-strapped urban school district. The privileged, we’ll see time and again, are processed more by people, the masses by machines.

Needless to say, racists don’t spend a lot of time hunting down reliable data to train their twisted models. And once their model morphs into a belief, it becomes hardwired. It generates poisonous assumptions, yet rarely tests them, settling instead for data that seems to confirm and fortify them. Consequently, racism is the most slovenly of predictive models. It is powered by haphazard data gathering and spurious correlations, reinforced by institutional inequities, and polluted by confirmation bias. In this way, oddly enough, racism operates like many of the WMDs I’ll be describing in this book.

Source: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (pp. 2-3, pp. 6-8). Crown/Archetype. Kindle Edition.
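
The feedback point deserves a sketch of its own. Below is a hypothetical model of the self-perpetuating loop O'Neil describes, in the spirit of predictive policing (all names and numbers are invented): attention goes where the model points, incidents are only recorded where attention goes, and the records are fed back in as evidence that the model was right.

```python
# Hypothetical feedback loop (all names and numbers invented):
# patrols go where the model predicts crime, incidents can only be
# recorded where patrols are, and the records feed back in as "data".

crime_counts = {"north": 10, "south": 9}  # seed data, nearly equal

for year in range(5):
    # The model's verdict: patrol the neighborhood with more recorded crime.
    patrolled = max(crime_counts, key=crime_counts.get)
    # More patrols -> more recorded incidents there, none recorded elsewhere.
    crime_counts[patrolled] += 5
    print(year, crime_counts)

# The initial one-incident gap hardens into a lopsided one. The model
# never observes the unpatrolled neighborhood at all; it defines its
# own reality and uses it to justify its results.
```

Nothing in that loop can ever contradict the model, which is exactly what "without feedback" means.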

For how this fits into education, read Weapons of Math Destruction along with Lower Ed and Paying the Price.

Lower Ed shows exploitation of vulnerable. Paying the Price tells how we fail them. Weapons of Math Destruction outlines tools we made to do it.

Source: Kyle Johnson on Twitter

Indeed. These three great books provide a systems view of higher education and its intersections with tech and algorithms. Below, I excerpt from their introductions and book blurbs, provide chapter lists, and select a handful of tweets from authors Tressie McMillan Cottom, Sara Goldrick-Rab, and Cathy O’Neil. They are all active on Twitter and well worth a follow.

Source: Lower Ed, Paying the Price, and Weapons of Math Destruction – Ryan Boren

See also Safiya Umoja Noble's "Algorithms of Oppression: How Search Engines Reinforce Racism".

This book is about the power of algorithms in the age of neoliberalism and the ways those digital decisions reinforce oppressive social relationships and enact new modes of racial profiling, which I have termed technological redlining. By making visible the ways that capital, race, and gender are factors in creating unequal conditions, I am bringing light to various forms of technological redlining that are on the rise. The near-ubiquitous use of algorithmically driven software, both visible and invisible to everyday people, demands a closer inspection of what values are prioritized in such automated decision-making systems. Typically, the practice of redlining has been most often used in real estate and banking circles, creating and deepening inequalities by race, such that, for example, people of color are more likely to pay higher interest rates or premiums just because they are Black or Latino, especially if they live in low-income neighborhoods. On the Internet and in our everyday uses of technology, discrimination is also embedded in computer code and, increasingly, in artificial intelligence technologies that we are reliant on, by choice or not. I believe that artificial intelligence will become a major human rights issue in the twenty-first century. We are only beginning to understand the long-term consequences of these decision-making tools in both masking and deepening social inequality. This book is just the start of trying to make these consequences visible. There will be many more, by myself and others, who will try to make sense of the consequences of automated decision making through algorithms in society.

Part of the challenge of understanding algorithmic oppression is to understand that mathematical formulations to drive automated decisions are made by human beings. While we often think of terms such as “big data” and “algorithms” as being benign, neutral, or objective, they are anything but. The people who make these decisions hold all types of values, many of which openly promote racism, sexism, and false notions of meritocracy, which is well documented in studies of Silicon Valley and other tech corridors.

Source: Algorithms of Oppression: How Search Engines Reinforce Racism (Kindle Locations 162-177). NYU Press. Kindle Edition.
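
Noble's "technological redlining" can likewise be made concrete. In the hypothetical sketch below (zip codes, surcharges, and rates are all invented), no protected attribute appears anywhere in the code, yet the pricing discriminates, because the surcharge table was learned from historically redlined data.

```python
# Hypothetical illustration of technological redlining: no race variable
# appears, but zip code acts as a proxy for one. All data invented.

# Surcharges "learned" from historically redlined lending data.
premium_surcharge_by_zip = {"75001": 1.30, "90210": 1.00}

def quote_premium(base_rate, zip_code):
    # The code looks neutral; the discrimination lives in the weights
    # the model inherited from biased history.
    return base_rate * premium_surcharge_by_zip[zip_code]

print(quote_premium(100.0, "75001"))  # 130.0 – same risk profile, higher price
print(quote_premium(100.0, "90210"))  # 100.0
```

Auditing the code alone would find nothing; the opinion is in the table, which is why Noble argues these systems demand closer inspection of the values they encode.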