For those who have not been following the disaster in the UK with the GCSE and A-level exam results, here is a summary:

The History

  • A-levels are the exams taken in the UK that determine where students go to university. Most English students receive university offers that are conditional on attaining specific A-level results.
  • Due to COVID, these national exams were canceled, which left students with an uncertain future.
  • In late March 2020, Gavin Williamson, Secretary of State for Education, instructed Sally Collier, the head of Ofqual (the Office of Qualifications and Examinations Regulation), to “ensure, as far as is possible, that qualification standards are maintained, and the distribution of grades follows a similar profile to that in previous years”. On 31 March, Williamson issued a ministerial direction under the Apprenticeships, Skills, Children and Learning Act 2009.
  • In August, an algorithm devised by Ofqual computed 82% of A-level grades. More than 4.6 million GCSEs in England – about 97% of the total – were assigned solely by the algorithm. Teacher rankings were taken into consideration, but not the teacher-assessed grades submitted by schools and colleges.

The Outcome

  • Ofqual’s Direct Centre Performance model used the historical results of each centre (school or college) in the subject being assessed; a simplified sketch of this kind of rank-to-distribution allocation appears after this list. Only after the model’s first use in August 2020 were details of the algorithm released, and then only in part.
  • Students at small schools, or those taking minority subjects of the kind offered at small private schools, saw their grades inflated above their teachers’ predictions. Traditionally, such students have a narrower range of marks, as these schools encourage weaker students to leave.
  • Students at large state schools, sixth-form colleges, and FE colleges – which have open-access policies and have historically educated black and minority ethnic students and vulnerable students – saw their results plummet so that they fitted the historic distribution curve. Nearly 300,000 of the 730,000 A-level grades awarded this summer were lower than the teacher assessment.
  • While 49% of entries by students at private schools received an A grade or above, only 22% of students at comprehensive schools received such marks.
  • The fact that students at elite private schools benefited at the expense of those from disadvantaged backgrounds sparked national outrage, including protests.
  • Ofqual reportedly barred individual pupils from appealing against their grades on academic grounds; families were advised not to waste time complaining but instead to contact college or university admissions offices to confirm their places in the event of unexpectedly poor grades.
  • At first, the government refused to change the results, but in the face of mounting protest, it soon backed down.
  • The government announced that official results would be the higher of the algorithm’s grade or the teacher’s estimate of how the student would have done. On 19 August, the Universities and Colleges Admissions Service determined that, despite the change, 15,000 pupils had already been rejected by their first-choice university on the basis of the algorithm-generated grades.
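
To make the mechanics concrete, below is a minimal sketch of a rank-based, distribution-preserving grade allocation. It is not Ofqual’s actual model, whose details were only partially published; the centre, the ranking, and the historical shares are all hypothetical. What it illustrates is that once a centre’s past distribution caps this year’s grades, even a strong cohort cannot earn more top grades than its predecessors did.

```python
# Minimal sketch: allocate this year's grades so they match a centre's
# historical grade distribution, using only the teachers' rank order.
# Hypothetical data throughout; not Ofqual's published algorithm.

def allocate_grades(ranked_students, historical_shares):
    """Assign grades (best-ranked student first) so that the share of each
    grade matches the centre's historical distribution."""
    n = len(ranked_students)
    grades = []
    for grade, share in historical_shares:            # e.g. ("A", 0.10)
        grades.extend([grade] * round(share * n))
    # Pad or trim so every student receives exactly one grade.
    grades = (grades + [historical_shares[-1][0]] * n)[:n]
    return dict(zip(ranked_students, grades))

# A hypothetical centre of ten students, ranked by their teachers.
ranked = [f"student_{i}" for i in range(1, 11)]
history = [("A", 0.10), ("B", 0.20), ("C", 0.40), ("D", 0.20), ("E", 0.10)]

print(allocate_grades(ranked, history))
# Only the top-ranked student can receive an A, however able the rest of the
# cohort is: each grade is capped by the centre's past results, not by the
# individual's own work.
```

The sketch makes the failure mode visible: the allocation is driven entirely by the centre’s history and the rank order, so an unusually strong year at a historically weak centre is marked down by construction.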

What is the problem?

Well, first, there is chaos, as many students are not sure they can get into their first-choice universities. For many, the algorithm was just another example of how the UK educational system consistently favors those from elite backgrounds. Statisticians have criticized Ofqual’s algorithm, saying it did not have sufficient data to award grades fairly to most state schools in England because of wide variations in results within schools and between years. Furthermore, the Royal Statistical Society has called for an urgent review of the statistical procedures used in England and Scotland, to be carried out by the UK Statistics Authority.

However, the deeper questions for all of us who are not affected by these results are (i) how did the algorithm get it wrong, and (ii) how many other algorithms are messing up our personal and business lives without our knowing?

AI Bias

The category of algorithms known as deep learning is behind the vast majority of AI applications. Deep-learning algorithms seek to find patterns in data, and these technologies have a significant effect on people’s lives. They can perpetuate injustice in hiring, retail, insurance, advertising, education, and security, and may already be doing so in the criminal legal system, leading to decisions that harm the poor, reinforce racism, and amplify inequality. In addition to articles by MIT and others, Cathy O’Neil laid out these issues in her 2016 book, Weapons of Math Destruction – a must-read for anyone with an interest in this area. O’Neil argues that these problematic mathematical tools share three key features; they are:

  1. Opaque – especially those run by private companies who don’t want to share their IP. As a result, no one gets to audit the results.
  2. Unregulated – they do damage to important areas of people’s lives with little consequence; and
  3. Difficult to contest – the users don’t know how the tools were built and so deflect, while the providers hide behind their IP.

Also, such systems are scalable, which amplifies any inherent biases across ever larger populations.

Most troubling, they reinforce discrimination: If a poor student can’t get a loan because a lending model deems him too risky (because of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a “toxic cocktail for democracy.”

A recent MIT article pointed out that AI bias arises for three reasons:

  1. Framing the problem. In creating a deep-learning model, computer scientists first decide what they want it to achieve. For example, if a credit card company wants to predict a customer’s creditworthiness, how is “creditworthiness” defined? What most credit card companies want are customers who will use the card and make partial payments that never pay down the entire balance, so that the company earns lots of interest. Thus, what they mean by “creditworthiness” is profit maximization. When business reasons define the problem, fairness and discrimination are no longer part of what the model considers. If the algorithm discovers that providing subprime loans is an effective way to maximize profit, it will engage in predatory behavior even if that wasn’t the company’s intention.
  2. Collecting the data. Bias shows up in training data for two reasons: either the data collected are unrepresentative of reality, or they reflect existing prejudices. The first has become apparent recently with face recognition software: feeding deep-learning algorithms more photos of light-skinned faces than dark-skinned faces results in a face recognition system that is inevitably worse at recognizing darker-skinned faces (see the sketch after this list). The second case is what Amazon discovered with its internal recruiting tool: trained on historical hiring decisions that favored men over women, the tool learned to do the same and dismissed female candidates.
  3. Preparing the data. Finally, bias can be introduced during data preparation. This stage involves identifying which attributes the algorithm is to consider. Do not confuse this with the problem-framing stage. In the creditworthiness case above, possible “attributes” are the customer’s age, income, or the number of paid-off loans. In the Amazon recruiting tool, an “attribute” could be the candidate’s gender, education level, or years of experience. Choosing the appropriate attributes can significantly influence the model’s prediction accuracy, which is why this is considered the “art” of deep learning. While an attribute’s impact on accuracy is easy to measure, its impact on the model’s bias is not.
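
To illustrate the “collecting the data” point above, here is a small, purely synthetic sketch: a classifier trained on data dominated by one group performs noticeably worse on the under-represented group. The groups, features, and numbers are invented; the only claim is that the skew in the data alone produces the skew in the results.

```python
# Synthetic sketch of unrepresentative training data producing unequal accuracy.
# Groups, features, and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate a group whose feature distribution (and true decision
    boundary) is offset by `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
X_a, y_a = make_group(1000, shift=0.0)
X_b, y_b = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate on fresh, equally sized samples from each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
# Group A typically scores in the high nineties while group B hovers near
# chance: nothing malicious was coded; the imbalance in the data did it.
```

The same structure explains the face recognition example: the model is simply much better calibrated for the group it has seen the most of.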

So given we know how the bias in models arises, why is it so hard to fix? There are four main reasons:

  1. Unknown unknowns. The downstream impact of the data and of the choices made during a model’s construction is not apparent until much later. Once a bias is discovered, retroactively identifying what caused it and how to get rid of it isn’t easy. When the engineers realized the Amazon tool was penalizing female candidates, they reprogrammed it to ignore explicitly gendered words like “women’s.” However, they discovered that the revised system still picked up on implicitly gendered words – verbs that were highly correlated with men over women, e.g., “executed” and “captured” – and was using those to make its decisions.
  2. Imperfect processes. Bias was not a consideration in the design of many of deep learning’s standard practices. Testing of deep-learning models before deployment should provide a perfect opportunity to catch any bias; however, in practice, the data used to test the model’s performance carry the same biases as the data used to train it, so testing fails to flag skewed or prejudiced results.
  3. Lack of social context. How computer scientists learn to frame problems isn’t compatible with the best way to think about social issues. According to Andrew Selbst, a postdoc at the Data & Society Research Institute, the problem is the “portability trap.” In computer science, a system that can be used for different tasks in different contexts – i.e., one that is portable – is considered excellent. However, this ignores a great deal of social context. As Selbst said, “You can’t have a system designed in Utah and then applied in Kentucky directly because different communities have different versions of fairness. Or you can’t have a system that you apply for ‘fair’ criminal justice results then applied to employment. How we think about fairness in those contexts is just totally different.”
  4. Definitions of fairness. It is not clear what an absence of bias would look like. This is not just an issue for computer science; the question has a long history of debate in philosophy, social science, and law. But in computer science, the concept of fairness must be defined in mathematical terms, like balancing the false positive and false negative rates of a prediction system. What researchers have discovered is that there are many different mathematical definitions of fairness that are also mutually exclusive. Does “fairness” mean that the same proportion of each group should be flagged as high risk, or that the same level of risk should result in the same score regardless of race? It’s impossible to fulfill both definitions at the same time, so at some point, you have to pick one; the short example after this list shows the tension numerically. (For a more in-depth discussion of why, click here.) While other fields accept that these definitions can change over time, computer science cannot: a fixed definition is required. “By fixing the answer, you’re solving a problem that looks very different than how society tends to think about these issues,” says Selbst.
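
As a concrete illustration of that last point, here is a small numeric sketch of two common fairness criteria pulling against each other. The scores and outcomes are made up; the point is only that when two groups have different base rates, applying the same risk threshold to everyone (“equal treatment of equal scores”) produces different false positive rates across the groups, so both criteria cannot hold at once.

```python
# Toy numbers only: same threshold for everyone vs. equal false positive rates.

def rates(scores, outcomes, threshold):
    """Return (share flagged as high risk, false positive rate) for one group."""
    flagged = [s >= threshold for s in scores]
    flag_rate = sum(flagged) / len(scores)
    flagged_negatives = [f for f, y in zip(flagged, outcomes) if y == 0]
    fpr = sum(flagged_negatives) / len(flagged_negatives)
    return flag_rate, fpr

# Hypothetical risk scores (0-1) and true outcomes (1 = the predicted event happened).
group_a = ([0.2, 0.3, 0.4, 0.6, 0.7, 0.8], [0, 0, 0, 1, 1, 1])   # base rate 50%
group_b = ([0.2, 0.3, 0.4, 0.5, 0.6, 0.8], [0, 0, 0, 0, 0, 1])   # base rate ~17%

threshold = 0.5   # identical treatment: the same score leads to the same decision
for name, (scores, outcomes) in [("A", group_a), ("B", group_b)]:
    flag_rate, fpr = rates(scores, outcomes, threshold)
    print(f"group {name}: flagged {flag_rate:.0%}, false positive rate {fpr:.0%}")
# With these toy numbers, group A has a 0% false positive rate and group B a 40%
# one, even though both face exactly the same threshold. Equalizing the false
# positive rates would require treating identical scores differently by group.
```

Whichever criterion you equalize, the other one breaks, which is exactly the trade-off the COMPAS-style debates turn on.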

As the UK A-level exam debacle reminded us, algorithms can’t fix broken systems. The problem began when the regulator lost sight of the goal and pushed for standardization above all else. When someone approaches you with a tempting AI solution, consider all the ramifications of potential bias, because if there is bias in the system, you, not the AI program, will bear the responsibility.
