Algorithms: Opinions Embedded in Math … and Ed-tech

A much more accurate definition of an algorithm is that it’s an opinion embedded in math.

So, we do that every time we build algorithms — we curate our data, we define success, we embed our own values into algorithms.

So when people tell you algorithms make things objective, you say “no, algorithms make things work for the builders of the algorithms.”

In general, we have a situation where algorithms are extremely powerful in our daily lives, but there is a barrier between us and the people building them. Those people typically come from a fairly homogenous group with their own incentives – in a corporate setting, usually profit, and not usually a question of fairness for the people who are subject to their algorithms.

So we always have to penetrate this fortress. We have to be able to question the algorithms themselves.

We live in the age of the algorithm – mathematical models are sorting our job applications, curating our online worlds, influencing our elections, and even deciding whether or not we should go to prison. But how much do we really know about them? Former Wall Street quant Cathy O’Neil exposes the reality behind the AI and explains how algorithms are just as prone to bias and discrimination as the humans who program them.

Source: The Truth About Algorithms | Cathy O’Neil – YouTube
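To make the “opinion embedded in math” point concrete, here is a minimal sketch in Python. Everything in it is invented for illustration – the candidates, the fields, and both scoring functions – but it shows the mechanism O’Neil describes: the same applicants rank differently depending on which definition of “success” the builder encodes.

```python
# Hypothetical data and scoring rules, invented for illustration.
candidates = [
    {"name": "A", "test_score": 88, "years_experience": 2, "resume_gaps": 1},
    {"name": "B", "test_score": 75, "years_experience": 9, "resume_gaps": 3},
    {"name": "C", "test_score": 92, "years_experience": 1, "resume_gaps": 0},
]

def success_as_test_performance(c):
    # One builder's opinion: success means scoring well on a test.
    return c["test_score"]

def success_as_unbroken_tenure(c):
    # Another builder's opinion: success means long, uninterrupted employment.
    # Penalizing resume gaps quietly penalizes caregivers and the recently ill.
    return c["years_experience"] - 2 * c["resume_gaps"]

for scorer in (success_as_test_performance, success_as_unbroken_tenure):
    ranked = sorted(candidates, key=scorer, reverse=True)
    print(scorer.__name__, "->", [c["name"] for c in ranked])

# success_as_test_performance -> ['C', 'A', 'B']
# success_as_unbroken_tenure  -> ['B', 'C', 'A']
```

Neither ranking is more “objective” than the other; each is a value judgment expressed as arithmetic.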

Follow the video up with Cathy O’Neil’s book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. Here are some selected quotes from the introduction.

And then I made a big change. I quit my job and went to work as a quant for D. E. Shaw, a leading hedge fund. In leaving academia for finance, I carried mathematics from abstract theory into practice. The operations we performed on numbers translated into trillions of dollars sloshing from one account to another. At first I was excited and amazed by working in this new laboratory, the global economy. But in the autumn of 2008, after I’d been there for a bit more than a year, it came crashing down.

The crash made it all too clear that mathematics, once my refuge, was not only deeply entangled in the world’s problems but also fueling many of them. The housing crisis, the collapse of major financial institutions, the rise of unemployment – all had been aided and abetted by mathematicians wielding magic formulas. What’s more, thanks to the extraordinary powers that I loved so much, math was able to combine with technology to multiply the chaos and misfortune, adding efficiency and scale to systems that I now recognized as flawed.

If we had been clear-headed, we all would have taken a step back at this point to figure out how math had been misused and how we could prevent a similar catastrophe in the future. But instead, in the wake of the crisis, new mathematical techniques were hotter than ever, and expanding into still more domains. They churned 24/7 through petabytes of information, much of it scraped from social media or e-commerce websites. And increasingly they focused not on the movements of global financial markets but on human beings, on us. Mathematicians and statisticians were studying our desires, movements, and spending power. They were predicting our trustworthiness and calculating our potential as students, workers, lovers, criminals.

This was the Big Data economy, and it promised spectacular gains. A computer program could speed through thousands of résumés or loan applications in a second or two and sort them into neat lists, with the most promising candidates on top. This not only saved time but also was marketed as fair and objective.
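The “neat lists” are, mechanically, just a sort over a scoring function, and the speed claim is easy to verify. Here is a rough sketch; the applications and the weights are synthetic, assumed purely for illustration.

```python
# Rough sketch: ranking 100,000 synthetic "loan applications" in
# well under a second. Data and weights are invented for illustration.
import random
import time

random.seed(0)
applications = [
    {"id": i,
     "credit_score": random.randint(300, 850),
     "income_k": random.randint(20, 200)}
    for i in range(100_000)
]

def score(app):
    # The weights are a design choice made by a person, not a law of nature.
    return 0.7 * app["credit_score"] + 0.3 * app["income_k"]

start = time.perf_counter()
ranked = sorted(applications, key=score, reverse=True)
elapsed = time.perf_counter() - start

print(f"ranked {len(ranked):,} applications in {elapsed:.3f}s")
print("top of the neat list:", [a["id"] for a in ranked[:3]])
```

The bulk processing is real; what the marketing elides is that “most promising” means whatever the score function says it means.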

Yet I saw trouble. The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed in our society, while making the rich richer.

I came up with a name for these harmful kinds of models: Weapons of Math Destruction, or WMDs for short.

Equally important, statistical systems require feedback – something to tell them when they’re off track. Statisticians use errors to train their models and make them smarter. If Amazon.com, through a faulty correlation, started recommending lawn care books to teenage girls, the clicks would plummet, and the algorithm would be tweaked until it got it right. Without feedback, however, a statistical engine can continue spinning out faulty and damaging analysis while never learning from its mistakes.

Many of the WMDs I’ll be discussing in this book, including the Washington school district’s value-added model, behave like that. They define their own reality and use it to justify their results. This type of model is self-perpetuating, highly destructive – and very common.
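A toy sketch of the difference between these two paragraphs, with all numbers invented: a model that hears about its mistakes converges on reality, while a model that only ever observes the outcomes of the people it approves “defines its own reality” and never moves.

```python
# Toy model, all numbers invented. True rule: applicants with
# ability >= 50 succeed. Both models start with a far-too-strict
# approval threshold of 90.
import random

random.seed(1)

def succeeds(ability):
    return ability >= 50  # the ground truth the models try to learn

with_feedback = 90.0
without_feedback = 90.0

for _ in range(2000):
    ability = random.randint(0, 100)

    # With feedback we eventually learn the outcome for everyone, so a
    # wrongly rejected success pushes the threshold down and a wrongly
    # approved failure pushes it up.
    if ability < with_feedback and succeeds(ability):
        with_feedback -= 0.5
    elif ability >= with_feedback and not succeeds(ability):
        with_feedback += 0.5

    # Without feedback, rejected applicants simply vanish from the data.
    # Everyone approved at threshold 90 succeeds, so the model looks
    # perfect to itself and never changes.
    if ability >= without_feedback and not succeeds(ability):
        without_feedback += 0.5

print(f"with feedback:    threshold settles near {with_feedback:.1f}")
print(f"without feedback: threshold stays at {without_feedback:.1f}")
```

The second model’s results confirm its assumptions on every decision it can see, which is exactly the self-justifying loop O’Neil describes.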

In WMDs, many poisonous assumptions are camouflaged by math and go largely untested and unquestioned.

This underscores another common feature of WMDs. They tend to punish the poor. This is, in part, because they are engineered to evaluate large numbers of people. They specialize in bulk, and they’re cheap. That’s part of their appeal. The wealthy, by contrast, often benefit from personal input. A white-shoe law firm or an exclusive prep school will lean far more on recommendations and face-to-face interviews than will a fast-food chain or a cash-strapped urban school district. The privileged, we’ll see time and again, are processed more by people, the masses by machines.

Needless to say, racists don’t spend a lot of time hunting down reliable data to train their twisted models. And once their model morphs into a belief, it becomes hardwired. It generates poisonous assumptions, yet rarely tests them, settling instead for data that seems to confirm and fortify them. Consequently, racism is the most slovenly of predictive models. It is powered by haphazard data gathering and spurious correlations, reinforced by institutional inequities, and polluted by confirmation bias. In this way, oddly enough, racism operates like many of the WMDs I’ll be describing in this book.

Source: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (pp. 2-3, pp. 6-8). Crown/Archetype. Kindle Edition.

For how this fits into education, read Weapons of Math Destruction along with Lower Ed and Paying the Price.

Lower Ed shows the exploitation of the vulnerable. Paying the Price tells how we fail them. Weapons of Math Destruction outlines the tools we made to do it.

Source: Kyle Johnson on Twitter

Indeed. These three great books provide a systems view of higher education and its intersections with tech and algorithms. Below, I excerpt from their introductions and book blurbs, provide chapter lists, and select a handful of tweets from authors Tressie McMillan Cottom, Sara Goldrick-Rab, and Cathy O’Neil. They are all active on Twitter and well worth a follow.

Source: Lower Ed, Paying the Price, and Weapons of Math Destruction – Ryan Boren

See also Safiya Umoja Noble’s “Algorithms of Oppression: How Search Engines Reinforce Racism”.

This book is about the power of algorithms in the age of neoliberalism and the ways those digital decisions reinforce oppressive social relationships and enact new modes of racial profiling, which I have termed technological redlining. By making visible the ways that capital, race, and gender are factors in creating unequal conditions, I am bringing light to various forms of technological redlining that are on the rise. The near-ubiquitous use of algorithmically driven software, both visible and invisible to everyday people, demands a closer inspection of what values are prioritized in such automated decision-making systems. Typically, the practice of redlining has been most often used in real estate and banking circles, creating and deepening inequalities by race, such that, for example, people of color are more likely to pay higher interest rates or premiums just because they are Black or Latino, especially if they live in low-income neighborhoods. On the Internet and in our everyday uses of technology, discrimination is also embedded in computer code and, increasingly, in artificial intelligence technologies that we are reliant on, by choice or not. I believe that artificial intelligence will become a major human rights issue in the twenty-first century. We are only beginning to understand the long-term consequences of these decision-making tools in both masking and deepening social inequality. This book is just the start of trying to make these consequences visible. There will be many more, by myself and others, who will try to make sense of the consequences of automated decision making through algorithms in society.

Part of the challenge of understanding algorithmic oppression is to understand that mathematical formulations to drive automated decisions are made by human beings. While we often think of terms such as “big data” and “algorithms” as being benign, neutral, or objective, they are anything but. The people who make these decisions hold all types of values, many of which openly promote racism, sexism, and false notions of meritocracy, which is well documented in studies of Silicon Valley and other tech corridors.

Source: Algorithms of Oppression: How Search Engines Reinforce Racism (Kindle Locations 162-177). NYU Press. Kindle Edition.
