Algorithms: Opinions Embedded in Math … and Ed-tech

A much more accurate definition of an algorithm is that it’s an opinion embedded in math.

So we do that every time we build algorithms – we curate our data, we define success, we embed our own values into them.

So when people tell you algorithms make things objective, you say, “no, algorithms make things work for the builders of the algorithms.”

In general, we have a situation where algorithms are extremely powerful in our daily lives, but there is a barrier between us and the people building them. Those people typically come from a fairly homogeneous group with their own incentives – in a corporate setting, usually profit, and not usually a question of fairness for the people who are subject to their algorithms.

So we always have to penetrate this fortress. We have to be able to question the algorithms themselves.

We live in the age of the algorithm – mathematical models are sorting our job applications, curating our online worlds, influencing our elections, and even deciding whether or not we should go to prison. But how much do we really know about them? Former Wall Street quant Cathy O’Neil exposes the reality behind the AI, and explains how algorithms are just as prone to bias and discrimination as the humans who program them.

Source: The Truth About Algorithms | Cathy O’Neil – YouTube
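
Her point that “we define success” is easy to make concrete. Below is a minimal, hypothetical sketch – the applicants and both scoring functions are invented for illustration, not taken from the video – showing how the same data, ranked under two different definitions of success, produces two different “objective” results.

```python
# Hypothetical illustration: one applicant pool, two definitions of "success".
# Neither ranking is objective; each encodes the builder's choice of what to reward.

applicants = [
    {"name": "A", "test_score": 88, "gap_years": 2, "referred": False},
    {"name": "B", "test_score": 75, "gap_years": 0, "referred": True},
    {"name": "C", "test_score": 92, "gap_years": 3, "referred": False},
]

def success_as_pedigree(a):
    # Opinion 1: success means an unbroken career path and an insider referral.
    return a["test_score"] - 10 * a["gap_years"] + (20 if a["referred"] else 0)

def success_as_ability(a):
    # Opinion 2: success means demonstrated ability, nothing else.
    return a["test_score"]

for scorer in (success_as_pedigree, success_as_ability):
    ranking = sorted(applicants, key=scorer, reverse=True)
    print(scorer.__name__, [a["name"] for a in ranking])

# success_as_pedigree ['B', 'A', 'C']
# success_as_ability ['C', 'A', 'B']
```

The ranking code is trivial; the opinion lives entirely in the scoring function. Swap the function and the “best” candidate changes, which is why “the algorithm is objective” misdescribes what is happening.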

Follow the video up with Cathy O’Neil’s book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. Here are some selected quotes from the introduction.

And then I made a big change. I quit my job and went to work as a quant for D. E. Shaw, a leading hedge fund. In leaving academia for finance, I carried mathematics from abstract theory into practice. The operations we performed on numbers translated into trillions of dollars sloshing from one account to another. At first I was excited and amazed by working in this new laboratory, the global economy. But in the autumn of 2008, after I’d been there for a bit more than a year, it came crashing down.

The crash made it all too clear that mathematics, once my refuge, was not only deeply entangled in the world’s problems but also fueling many of them. The housing crisis, the collapse of major financial institutions, the rise of unemployment – all had been aided and abetted by mathematicians wielding magic formulas. What’s more, thanks to the extraordinary powers that I loved so much, math was able to combine with technology to multiply the chaos and misfortune, adding efficiency and scale to systems that I now recognized as flawed.

If we had been clear-headed, we all would have taken a step back at this point to figure out how math had been misused and how we could prevent a similar catastrophe in the future. But instead, in the wake of the crisis, new mathematical techniques were hotter than ever, and expanding into still more domains. They churned 24/7 through petabytes of information, much of it scraped from social media or e-commerce websites. And increasingly they focused not on the movements of global financial markets but on human beings, on us. Mathematicians and statisticians were studying our desires, movements, and spending power. They were predicting our trustworthiness and calculating our potential as students, workers, lovers, criminals.

This was the Big Data economy, and it promised spectacular gains. A computer program could speed through thousands of résumés or loan applications in a second or two and sort them into neat lists, with the most promising candidates on top. This not only saved time but also was marketed as fair and objective.

Yet I saw trouble. The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed in our society, while making the rich richer.

I came up with a name for these harmful kinds of models: Weapons of Math Destruction, or WMDs for short.

Equally important, statistical systems require feedback – something to tell them when they’re off track. Statisticians use errors to train their models and make them smarter. If Amazon.com, through a faulty correlation, started recommending lawn care books to teenage girls, the clicks would plummet, and the algorithm would be tweaked until it got it right. Without feedback, however, a statistical engine can continue spinning out faulty and damaging analysis while never learning from its mistakes.

Many of the WMDs I’ll be discussing in this book, including the Washington school district’s value-added model, behave like that. They define their own reality and use it to justify their results. This type of model is self-perpetuating, highly destructive – and very common.

In WMDs, many poisonous assumptions are camouflaged by math and go largely untested and unquestioned.

This underscores another common feature of WMDs. They tend to punish the poor. This is, in part, because they are engineered to evaluate large numbers of people. They specialize in bulk, and they’re cheap. That’s part of their appeal. The wealthy, by contrast, often benefit from personal input. A white-shoe law firm or an exclusive prep school will lean far more on recommendations and face-to-face interviews than will a fast-food chain or a cash-strapped urban school district. The privileged, we’ll see time and again, are processed more by people, the masses by machines.

Needless to say, racists don’t spend a lot of time hunting down reliable data to train their twisted models. And once their model morphs into a belief, it becomes hardwired. It generates poisonous assumptions, yet rarely tests them, settling instead for data that seems to confirm and fortify them. Consequently, racism is the most slovenly of predictive models. It is powered by haphazard data gathering and spurious correlations, reinforced by institutional inequities, and polluted by confirmation bias. In this way, oddly enough, racism operates like many of the WMDs I’ll be describing in this book.

Source: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (pp. 2-3, pp. 6-8). Crown/Archetype. Kindle Edition.
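
O’Neil’s feedback point can also be sketched in a few lines of code. The toy simulation below is my own illustration, not from the book – the numbers and the recommender are invented – but it contrasts the two cases she describes: a model that observes its clicks corrects a faulty correlation, while a model that never receives feedback keeps producing the same faulty analysis.

```python
import random

random.seed(0)

# A faulty correlation leaves the model badly overestimating how often a
# recommendation (lawn care books for teenage girls, say) will be clicked.
true_click_rate = 0.02   # what actually happens when the item is shown
faulty_belief = 0.60     # the model's initial, spurious estimate
learning_rate = 0.1

def observed_click_rate(impressions=1000):
    # Simulate serving the recommendation and measuring real clicks.
    clicks = sum(random.random() < true_click_rate for _ in range(impressions))
    return clicks / impressions

# With feedback: each batch of observed clicks pulls the estimate toward reality.
belief = faulty_belief
for _ in range(60):
    belief += learning_rate * (observed_click_rate() - belief)
print(f"with feedback:    {belief:.3f}")         # ends close to the true 0.02

# Without feedback (the WMD case): the estimate is never tested or corrected.
print(f"without feedback: {faulty_belief:.3f}")  # stays at 0.60 indefinitely
```

Nothing in the second case is self-correcting: the model defines its own reality, the self-perpetuating behavior O’Neil describes above.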

For how this fits into education, read Weapons of Math Destruction along with Lower Ed and Paying the Price.

Lower Ed shows exploitation of vulnerable. Paying the Price tells how we fail them. Weapons of Math Destruction outlines tools we made to do it.

Source: Kyle Johnson on Twitter

Indeed. These three great books provide a systems view of higher education and its intersections with tech and algorithms. Below, I excerpt from their introductions and book blurbs, provide chapter lists, and select a handful of tweets from authors Tressie McMillan Cottom, Sara Goldrick-Rab, and Cathy O’Neil. They are all active on Twitter and well worth a follow.

Source: Lower Ed, Paying the Price, and Weapons of Math Destruction – Ryan Boren

See also Safiya Umoja Noble’s “Algorithms of Oppression: How Search Engines Reinforce Racism”.

This book is about the power of algorithms in the age of neoliberalism and the ways those digital decisions reinforce oppressive social relationships and enact new modes of racial profiling, which I have termed technological redlining. By making visible the ways that capital, race, and gender are factors in creating unequal conditions, I am bringing light to various forms of technological redlining that are on the rise. The near-ubiquitous use of algorithmically driven software, both visible and invisible to everyday people, demands a closer inspection of what values are prioritized in such automated decision-making systems.

Typically, the practice of redlining has been most often used in real estate and banking circles, creating and deepening inequalities by race, such that, for example, people of color are more likely to pay higher interest rates or premiums just because they are Black or Latino, especially if they live in low-income neighborhoods. On the Internet and in our everyday uses of technology, discrimination is also embedded in computer code and, increasingly, in artificial intelligence technologies that we are reliant on, by choice or not.

I believe that artificial intelligence will become a major human rights issue in the twenty-first century. We are only beginning to understand the long-term consequences of these decision-making tools in both masking and deepening social inequality. This book is just the start of trying to make these consequences visible. There will be many more, by myself and others, who will try to make sense of the consequences of automated decision making through algorithms in society.

Part of the challenge of understanding algorithmic oppression is to understand that mathematical formulations to drive automated decisions are made by human beings. While we often think of terms such as “big data” and “algorithms” as being benign, neutral, or objective, they are anything but. The people who make these decisions hold all types of values, many of which openly promote racism, sexism, and false notions of meritocracy, which is well documented in studies of Silicon Valley and other tech corridors.

Source: Algorithms of Oppression: How Search Engines Reinforce Racism (Kindle Locations 162-177). NYU Press. Kindle Edition.

Strong Opinions, Weakly Held

I try to write and accept feedback with an open posture of “strong opinions, weakly held”, advocating for what I believe but ready to change and refine with new information. Letting go of or altering our pet notions is difficult. Confirmation bias is a cozy blanket. Practice at being wrong in public helps develop the necessary critical distance.

There is so much to know and so many perspectives and angles. None of us are experts, not really. We’re all amateurs on learning curves approaching infinity. We can distill in our writing only the merest fraction of what we know, and what we know is the merest fraction of what there is to know. Write strongly while knowing our ignorance and knowing the curve goes on forever.

A couple years ago, I was talking to the Institute for the Future’s Bob Johansen about wisdom, and he explained that – to deal with an uncertain future and still move forward – they advise people to have “strong opinions, which are weakly held.” They’ve been giving this advice for years, and I understand that it was first developed by Institute Director Paul Saffo. Bob explained that weak opinions are problematic because people aren’t inspired to develop the best arguments possible for them, or to put forth the energy required to test them. Bob explained that it was just as important, however, to not be too attached to what you believe because, otherwise, it undermines your ability to “see” and “hear” evidence that clashes with your opinions. This is what psychologists sometimes call the problem of “confirmation bias.”

Source: Strong Opinions, Weakly Held – Bob Sutton

Everything in software is so new and so frequently being reinvented that almost nobody really knows what they are doing. It is amateurs who make all the progress.

When it comes to software development, if you profess expertise, if you pitch yourself as an authority, you’re either lying to us, or lying to yourself. In our heart of hearts, we know: the real progress is made by the amateurs. They’re so busy living software they don’t usually have time to pontificate at length about the breadth of their legendary expertise. If I’ve learned anything in my career, it is that approaching software development as an expert, as someone who has already discovered everything there is to know about a given topic, is the one surest way to fail.

Experts are, if anything, more suspect than the amateurs, because they’re less honest.

I’ll never be one of the best. But what I lack in talent, I make up in intensity.

To me, writing without a strong voice, writing filled with second guessing and disclaimers, is tedious and difficult to slog through. I go out of my way to write in a strong voice because it’s more effective. But whenever I post in a strong voice, it is also an implied invitation to a discussion, a discussion where I often change my opinion and invariably learn a great deal about the topic at hand. I believe in the principle of strong opinions, weakly held.

So when you read one of my posts, please consider it a strong opinion weakly held, a mock fight between fellow amateurs of equal stature, held in an Octagon where everyone retains their sense of humor, has an open mind, and enjoys a spirited debate where we all learn something.

Source: Strong Opinions, Weakly Held

As leaders we should always question new ideas and ensure they’re supported by fact. However, when there is mounting evidence and experience that shows our ideas and beliefs are wrong, we should not resist change. This is why wise leaders keep their strong opinions, weakly held.

When dealing with the complex practices of strategy, leadership, and innovation in an uncertain and changing environment, wise leaders keep their strong opinions, weakly held.

Strong opinions are not fundamental truths. Rather, opinions are working hypotheses used to guide your thinking, decisions, and actions.

Wise leaders emphasise experimentation over theory. They understand that experimentation is a requirement for agility.

The fastest way of moving into the future is through defining and validating a series of hypotheses. Formulate an hypothesis based on the best available information – adopt a strong opinion. Then act, seeking feedback, adjusting as you go – weakly held.

Source: Wise Leaders Keep Strong Opinions, Weakly Held • George Ambler

The point of forecasting is not to attempt illusory certainty, but to identify the full range of possible outcomes. Try as one might, when one looks into the future, there is no such thing as “complete” information, much less a “complete” forecast. As a consequence, I have found that the fastest way to an effective forecast is often through a sequence of lousy forecasts. Instead of withholding judgment until an exhaustive search for data is complete, I will force myself to make a tentative forecast based on the information available, and then systematically tear it apart, using the insights gained to guide my search for further indicators and information. Iterate the process a few times, and it is surprising how quickly one can get to a useful forecast.

Allow your intuition to guide you to a conclusion, no matter how imperfect – this is the “strong opinion” part. Then – and this is the “weakly held” part – prove yourself wrong. Engage in creative doubt. Look for information that doesn’t fit, or indicators that point in an entirely different direction. Eventually your intuition will kick in and a new hypothesis will emerge out of the rubble, ready to be ruthlessly torn apart once again. You will be surprised by how quickly the sequence of faulty forecasts will deliver you to a useful result.

Source: Strong Opinions Weakly Held – Paul Saffo