For those who have not been following the disaster in the UK over this summer’s A-level and GCSE exam results, here is a summary:

The History

  • A-levels are the exams taken at the end of secondary education in the UK that largely determine where students go to university. Most English students receive university offers that are conditional on attaining specific A-level results.
  • Due to COVID, these national exams were canceled, leaving students facing an uncertain future.
  • In late March 2020, Gavin Williamson, Secretary of State for Education, instructed Sally Collier, the head of Ofqual (the Office of Qualifications and Examinations Regulation), to “ensure, as far as is possible, that qualification standards are maintained, and the distribution of grades follows a similar profile to that in previous years”. On 31 March, Williamson issued a ministerial direction under the Apprenticeships, Skills, Children and Learning Act 2009.
  • In August, an algorithm devised by Ofqual computed 82% of A-level grades. More than 4.6 million GCSE grades in England – about 97% of the total – were assigned solely by the algorithm. Teacher rankings were taken into consideration, but not the teacher-assessed grades submitted by schools and colleges.

The Outcome

  • Ofqual’s Direct Centre Performance model used each centre’s (school or college) historical results in the subject assessed, mapping this year’s teacher-ranked students onto the centre’s past grade distribution (see the sketch after this list). Details of the algorithm were released only after its first use in August 2020, and then only in part.
  • Students at small schools or taking minority subjects, such as those offered at small private schools, saw their grades inflated relative to their teachers’ predictions. Traditionally, such students have a narrower range of marks, as these schools encourage weaker students to leave.
  • Students at large state schools, sixth-form colleges, and FE colleges – institutions with open-access policies that have historically educated black and minority-ethnic or vulnerable students – saw their results plummet so that they fit the historical distribution curve. Nearly 300,000 of the 730,000 A-level grades awarded this summer were lower than the teacher assessments.
  • While 49% of entries by students at private schools received an A grade or above, only 22% of entries at comprehensive schools received such marks.
  • The fact that students at elite private schools benefited at the expense of those from disadvantaged backgrounds sparked national outrage, including protests.
  • According to some reports, Ofqual barred individual pupils from appealing their grades on academic grounds, advising families not to waste time complaining but instead to contact college or university admissions offices to confirm their places in the event of unexpectedly poor grades.
  • At first, the government refused to change the results, but under the weight of protest it soon backed down.
  • The government announced that official results would be the higher of the algorithm’s grade or the teachers’ estimate of how their students would have done. On 19 August, the Universities and Colleges Admissions Service determined that, even with the change, 15,000 pupils had been rejected by their first-choice university on the basis of the algorithm-generated grades.
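
To make the mechanics concrete, here is a minimal sketch in Python of the core idea as it has been publicly described: take a centre’s historical grade distribution for a subject and slice this year’s teacher ranking to match it. This is not Ofqual’s actual code; every name and number is illustrative.

    # Illustrative sketch of a "centre performance" style standardization
    # step. This is NOT Ofqual's actual model; names and details are
    # hypothetical. The publicly described idea: map this year's
    # teacher-ranked students onto the centre's historical grade
    # distribution so the grade profile matches previous years.

    from typing import Dict, List

    def assign_grades(ranked_students: List[str],
                      historical_shares: Dict[str, float]) -> Dict[str, str]:
        """Assign grades (best-ranked first) so each grade's share of the
        cohort roughly matches the centre's historical share."""
        n = len(ranked_students)
        grades: Dict[str, str] = {}
        start = 0
        cumulative = 0.0
        for grade, share in historical_shares.items():  # e.g. {"A": 0.2, ...}
            cumulative += share
            end = round(cumulative * n)
            for student in ranked_students[start:end]:
                grades[student] = grade
            start = end
        for student in ranked_students[start:]:  # rounding leftovers
            grades[student] = grade              # get the lowest grade
        return grades

    # Ten students ranked by their teachers, in a centre whose history
    # says 20% A, 50% B, 30% C. However well this cohort performed,
    # exactly two students can get an A.
    cohort = [f"student_{i}" for i in range(1, 11)]
    print(assign_grades(cohort, {"A": 0.2, "B": 0.5, "C": 0.3}))

Nothing in this scheme looks at what an individual student actually achieved this year: the ranking decides who gets which grade, and the centre’s history decides how many of each grade exist. For a cohort stronger than its predecessors, the historical distribution simply wins – which is exactly the failure mode described above.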

What is the problem?

Well, first, there is chaos: many students are unsure whether they can get into their first-choice universities. For many, the algorithm was just another example of how the UK educational system consistently favors those from elite backgrounds. Statisticians have criticized Ofqual’s algorithm, saying it does not have sufficient data to award grades fairly at most state schools in England because of wide variations in results within schools and between years. Furthermore, the Royal Statistical Society has called for an urgent review of the statistical procedures used in England and Scotland, to be carried out by the UK Statistics Authority.

However, the deeper questions for those of us who aren’t affected by these results are (i) how did the algorithm get it wrong, and (ii) how many other algorithms are messing up our personal and business lives without our knowledge?

AI Bias

The category of algorithms known as deep learning is behind the vast majority of AI applications. Deep-learning algorithms seek to find patterns in data, and the decisions they drive have a significant effect on people’s lives. They can perpetuate injustice in hiring, retail, insurance, advertising, education, and security, and may already be doing so in the criminal legal system, leading to decisions that harm the poor, reinforce racism, and amplify inequality. In addition to articles by MIT and others, Cathy O’Neil laid out these issues in her 2016 book, Weapons of Math Destruction – a must-read for anyone with an interest in this area. O’Neil argues that these problematic mathematical tools share three key features. They are:

  1. Opaque – especially those run by private companies that don’t want to share their IP. As a result, no one gets to audit the results;
  2. Unregulated – they damage important areas of people’s lives with little consequence; and
  3. Difficult to contest – users don’t know how the models were built and so deflect responsibility, while providers hide behind their IP.

Also, such systems are scalable, which amplifies any inherent biases and spreads them across ever-larger populations.

Most troubling, they reinforce discrimination: If a poor student can’t get a loan because a lending model deems him too risky (because of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a “toxic cocktail for democracy.”

A recent MIT article pointed out that AI bias arises for three reasons:

  1. Framing the problem. In creating a deep-learning model, computer scientists first decide what they want it to achieve. For example, if a credit card company wants to predict a customer’s creditworthiness, how is “creditworthiness” defined? What most credit card companies want are customers who will use the card and make partial payments that never pay the balance down, so that the company earns lots of interest. Thus, what they mean by “creditworthiness” is profit maximization. When business reasons define the problem, fairness and discrimination are no longer part of what the model considers. If the algorithm discovers that providing subprime loans is an effective way to maximize profit, it will engage in predatory behavior even if that wasn’t the company’s intention (see the sketch after this list).
  2. Collecting the data. Bias shows up in training data for two reasons: either the data collected is unrepresentative of reality, or it reflects existing prejudices. The first became apparent recently with face-recognition software: feeding the deep-learning algorithms more photos of light-skinned faces than dark-skinned faces results in a system that is inevitably worse at recognizing darker-skinned faces. The second case is what Amazon discovered with its internal recruiting tool: trained on historical hiring decisions that favored men over women, the tool learned to do the same and dismissed female candidates.
  3. Preparing the data. Finally, bias can be introduced during data preparation, the stage at which you identify which attributes the algorithm should consider. (Do not confuse this with the problem-framing stage.) In the creditworthiness case above, possible “attributes” are the customer’s age, income, or number of paid-off loans; in the Amazon recruiting tool, an “attribute” could be the candidate’s gender, education level, or years of experience. Choosing the appropriate attributes can significantly influence the model’s prediction accuracy, which is why this is considered the “art” of deep learning. But while an attribute’s impact on accuracy is easy to measure, its impact on the model’s bias is not.
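
To see how much the framing alone matters, consider this small sketch in Python. The data and variable names are entirely hypothetical; the only difference between the two models is which target they are trained on.

    # A sketch of how "framing the problem" bakes the business objective
    # into a model. All data and variable names here are hypothetical.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    income = rng.normal(50, 15, n)       # applicant income, $k/year
    zip_risk = rng.integers(0, 2, n)     # 1 = historically "risky" zip code
    X = np.column_stack([income, zip_risk])

    # Framing A: "creditworthy" means likely to repay.
    repays = income + rng.normal(0, 10, n) > 45

    # Framing B: "creditworthy" means profitable - hypothetically, a
    # customer who carries a balance and pays interest month after month,
    # which here correlates with lower income and "risky" zip codes.
    profitable = (income < 55) & (rng.random(n) < 0.6 + 0.3 * zip_risk)

    model_a = LogisticRegression(max_iter=1000).fit(X, repays)
    model_b = LogisticRegression(max_iter=1000).fit(X, profitable)

    for name, model in [("repayment framing", model_a),
                        ("profit framing   ", model_b)]:
        income_w, zip_w = model.coef_[0]
        print(f"{name}: income weight {income_w:+.3f}, "
              f"risky-zip weight {zip_w:+.3f}")

The repayment framing rewards higher incomes; the profit framing learns the opposite, and also learns to seek out the “risky” zip codes, because that is where the interest revenue is. No one programmed discrimination in – the choice of objective did the work.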

So, given that we know how bias arises in models, why is it so hard to fix? There are four main reasons:

  1. Unknown unknowns. The influence that data and design choices have on a model’s downstream impact is often not apparent until much later, and once a bias is discovered, retroactively identifying its cause and how to remove it isn’t easy. When the engineers realized the Amazon tool was penalizing female candidates, they reprogrammed it to ignore explicitly gendered words like “women’s.” However, they discovered that the revised system still picked up on implicitly gendered words – verbs highly correlated with men over women, e.g., “executed” and “captured” – and used those to make its decisions.
  2. Imperfect processes. Bias was not a consideration in the design of many of deep learning’s standard practices. Testing deep-learning models before deployment should provide a perfect opportunity to catch bias; in practice, however, the data used to test a model’s performance carries the same biases as the data used to train it, so testing fails to flag skewed or prejudiced results.
  3. Lack of social context. How computer scientists learn to frame problems is often incompatible with the best way to think about social issues. According to Andrew Selbst, a postdoc at the Data & Society Research Institute, the problem is the “portability trap.” In computer science, a system that can be used for different tasks in different contexts – a portable one – is considered good design. However, portability ignores the social setting. As Selbst said, “You can’t have a system designed in Utah and then applied in Kentucky directly because different communities have different versions of fairness. Or you can’t have a system that you apply for ‘fair’ criminal justice results then applied to employment. How we think about fairness in those contexts is just totally different.”
  4. Definitions of fairness. It is not clear what an absence of bias would look like. This is not just an issue for computer science; the question has a long history of debate in philosophy, social science, and law. What is different in computer science is that fairness must be defined in mathematical terms, such as balancing the false-positive and false-negative rates of a prediction system. Researchers have discovered that there are many different mathematical definitions of fairness and that they can be mutually exclusive. Does “fairness” mean that the same proportion of each group should be scored high-risk, or that the same level of risk should result in the same score regardless of race? It is impossible to fulfill both definitions at the same time, so at some point you have to pick one (see the sketch after this list). While other fields accept that such definitions can change over time, computer science cannot: a fixed definition is required. “By fixing the answer, you’re solving a problem that looks very different than how society tends to think about these issues,” says Selbst.
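
To make the mutual exclusivity concrete, here is a toy numeric sketch in Python (all numbers invented, not modeled on any real system). A risk score whose distribution depends only on the true outcome – the same risk always produces the same score, regardless of group – has identical error rates across two groups, yet what a “high-risk” flag actually means differs between them whenever their base rates differ.

    # A toy demonstration that two common mathematical definitions of
    # fairness conflict when base rates differ. All numbers are invented.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    group = rng.integers(0, 2, n)
    # Hypothetical base rates: the outcome occurs for 30% of group 0
    # and 50% of group 1.
    outcome = rng.random(n) < np.where(group == 0, 0.3, 0.5)

    # A risk score that treats individuals identically regardless of
    # group: its distribution depends only on the true outcome.
    score = 0.25 + 0.35 * outcome + rng.normal(0, 0.15, n)
    flagged = score > 0.45

    for g in (0, 1):
        m = group == g
        fpr = (flagged & ~outcome)[m].sum() / (~outcome)[m].sum()
        fnr = (~flagged & outcome)[m].sum() / outcome[m].sum()
        ppv = (flagged & outcome)[m].sum() / flagged[m].sum()
        print(f"group {g}: FPR={fpr:.2f}  FNR={fnr:.2f}  PPV={ppv:.2f}")

    # Both groups see the same FPR and FNR ("error rate balance"),
    # but a flag means something different in each group: the PPV
    # differs ("predictive parity" fails) because base rates differ.

Equalizing false-positive and false-negative rates across groups and equalizing predictive value across groups are both reasonable definitions of “fair,” but with different base rates a single model cannot satisfy both.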

As the UK A-level exam debacle reminds us, algorithms can’t fix broken systems. The problem began when the regulator lost sight of the goal and pushed for standardization above all else. So when someone approaches you with a tempting AI solution, consider all the ramifications of potential bias – because if there is bias in the system, you, not the AI program, will bear the responsibility.
