
Algorithms Once More Run Amok
For those who have not been following the disaster in the UK with the GCSE and A-level exam results, here is a summary:
The History
- A-levels are the exams taken in the UK that largely determine where students go to university. Most English students receive university offers that are conditional on attaining specific A-level results.
- Due to COVID, these national exams were canceled, which left students with an uncertain future.
- In late March 2020, Gavin Williamson, Secretary of State for Education, instructed Sally Collier, the head of Ofqual (the Office of Qualifications and Examinations Regulation), to “ensure, as far as is possible, that qualification standards are maintained, and the distribution of grades follows a similar profile to that in previous years”. On 31 March, Williamson issued a ministerial direction under the Apprenticeships, Skills, Children and Learning Act 2009.
- In August, an algorithm devised by Ofqual computed 82% of A-level grades. More than 4.6 million GCSE grades in England – about 97% of the total – were assigned solely by the algorithm. Teacher rankings were taken into consideration, but not the teacher-assessed grades submitted by schools and colleges.
The Outcome
- Ofqual’s Direct Centre Performance model used the historical results of each centre (school or college) for the subject assessed. Details of the algorithm were released only after the model’s first use in August 2020, and then only in part. (A simplified sketch of this approach follows this list.)
- Students at small schools, or those taking minority subjects such as those offered at small private schools, saw their grades inflated relative to their teachers’ predictions. Traditionally, such students have a narrower range of marks, as these schools encourage weaker students to leave.
- Students at large state schools, sixth-form colleges, and FE colleges, which have open-access policies and have historically educated black and minority ethnic or vulnerable students, saw their results plummet so that they fitted the historic distribution curve. Nearly 300,000 of the 730,000 A-level grades awarded this summer were lower than the teacher assessment.
- While 49% of entries by students at private schools received an A grade or above, only 22% of entries at comprehensive schools received such marks.
- The fact that students at elite private schools benefited at the expense of those from disadvantaged backgrounds sparked national outrage, including protests.
- According to some reports, Ofqual barred individual pupils from appealing their grades on academic grounds, advising families not to waste time complaining but instead to contact college or university admissions offices to confirm their places in the event of unexpectedly poor grades.
- At first, the government refused to change the results, but given the level of protest, it soon backed down.
- The government announced that official results would be the higher of the algorithm’s grade or the teacher’s estimate of how the student would have done. On 19 August, the Universities and Colleges Admissions Service determined that 15,000 pupils had been rejected by their first-choice university on the basis of the algorithm-generated grades.
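To see how a centre-history-plus-ranking scheme can cap individual results, here is a deliberately simplified sketch in Python. This is not Ofqual’s actual model; the function, the data, and the student names are invented for illustration. Each student is mapped onto the centre’s historical grade distribution by their teacher-assigned rank, so the cohort can never outperform the centre’s past.

```python
# Simplified sketch of a "centre history + teacher ranking" grading scheme
# (invented data; not Ofqual's actual model). Each student receives the grade
# at their rank position in the centre's historical distribution.

def assign_grades(ranked_students: list[str], historical: list[str]) -> dict[str, str]:
    """ranked_students: teacher ranking, best first. historical: past grades, best first."""
    n, m = len(ranked_students), len(historical)
    result = {}
    for i, student in enumerate(ranked_students):
        # Map rank i of n students onto the matching quantile of m past grades.
        j = min(i * m // n, m - 1)
        result[student] = historical[j]
    return result

# A centre that historically awarded one A, two Bs, one C, and one D:
history = ["A", "B", "B", "C", "D"]
cohort = ["Amira", "Ben", "Chloe", "Dev", "Eli"]  # teacher ranking, best first
print(assign_grades(cohort, history))
# {'Amira': 'A', 'Ben': 'B', 'Chloe': 'B', 'Dev': 'C', 'Eli': 'D'}
```

Even if teachers had assessed every one of these five students at grade A, only Amira would receive one: the centre’s history, not the student’s work, sets the ceiling. That is the mechanism behind the outcomes described above.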
What is the problem?
Well, first, there is chaos, as many students are not sure they can get into their first-choice universities. For many, the algorithm was just another example of how the UK educational system consistently favors those from elite backgrounds. Statisticians have criticized Ofqual’s algorithm, saying it does not have sufficient data to award grades fairly to most state schools in England because of wide variations in results within schools and between years. Furthermore, the Royal Statistical Society has called for an urgent review of the statistical procedures used in England and Scotland, to be carried out by the UK Statistics Authority.
However, the deep questions for all of us who aren’t affected by these results are: (i) how did the algorithm get it wrong? And (ii) how many other algorithms are messing up our personal and business lives without our knowing?
AI Bias
The category of algorithms known as deep learning is behind the vast majority of AI applications. Deep-learning algorithms seek to find patterns in data, and because of that, these technologies have a significant effect on people’s lives. They can perpetuate injustice in hiring, retail, insurance, advertising, education, and security, and may already be doing so in the criminal legal system, leading to decisions that harm the poor, reinforce racism, and amplify inequality. In addition to articles by MIT and others, Cathy O’Neil laid out these issues in her 2016 book, Weapons of Math Destruction – a must-read for anyone with an interest in this area. O’Neil argues that these problematic mathematical tools share three key features. They are:
- Opaque – especially those run by private companies who don’t want to share their IP. As a result, no one gets to audit the results.
- Unregulated – they do damage to important areas of people’s lives with little consequence; and
- Difficult to contest – those affected don’t know how the models were built and so cannot challenge them, while the providers hide behind their IP.
Also, such systems are scalable, which amplifies any inherent biases so that they affect ever-larger populations.
Most troubling, they reinforce discrimination: If a poor student can’t get a loan because a lending model deems him too risky (because of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a “toxic cocktail for democracy.”
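A tiny simulation makes the vicious spiral concrete. The sketch below is purely illustrative; the threshold, the score adjustments, and the starting scores are all invented numbers. A lender approves anyone above a score threshold; approval builds credit while denial erodes it, so a small initial gap between two applicants compounds each round.

```python
# Illustrative feedback-loop sketch (all numbers invented). Denial makes it
# harder to build credit, so an initially small gap widens every round.

THRESHOLD = 600  # hypothetical approval cutoff

def next_score(score: float) -> float:
    """Approved applicants build credit; denied applicants slip further."""
    return score + 10 if score >= THRESHOLD else score - 10

applicant_a, applicant_b = 610.0, 590.0  # initial gap: 20 points
for year in range(1, 11):
    applicant_a, applicant_b = next_score(applicant_a), next_score(applicant_b)
    print(f"year {year}: A={applicant_a:.0f}  B={applicant_b:.0f}")
# After 10 rounds the 20-point gap has grown to 220 points.
```

Nothing in the loop refers to merit; the model simply props up whoever started above the line and punishes whoever started below it.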
A recent MIT article pointed out that AI bias arises for three reasons:
- Framing the problem. In creating a deep-learning model, computer scientists first decide what they want it to achieve. For example, if a credit card company wants to predict a customer’s creditworthiness, how is “creditworthiness” defined? What most credit card companies want are customers who will use the card and make partial payments that never pay down the entire balance, so that the company earns lots of interest. Thus, what they mean by “creditworthiness” is profit maximization. When business reasons define the problem, fairness and discrimination are no longer part of what the model considers. If the algorithm discovers that providing subprime loans is an effective way to maximize profit, it will engage in predatory behavior even if that wasn’t the company’s intention.
- Collecting the data. Bias shows up in training data for two reasons: either the data collected are unrepresentative of reality, or they reflect existing prejudices. The first became apparent recently with face recognition software. Feeding deep-learning algorithms more photos of light-skinned faces than dark-skinned faces resulted in face recognition systems that are inevitably worse at recognizing darker-skinned faces. The second case is what Amazon discovered with its internal recruiting tool. Trained on historical hiring decisions that favored men over women, the tool dismissed female candidates, as it had learned to do the same.
- Preparing the data. Finally, bias can be introduced during data preparation. This stage involves identifying which attributes the algorithm is to consider. Do not confuse this with the problem-framing stage. In the creditworthiness case above, possible “attributes” are the customer’s age, income, or the number of paid-off loans. In the Amazon recruiting tool, an “attribute” could be the candidate’s gender, education level, or years of experience. Choosing the appropriate attributes can significantly influence the model’s prediction accuracy, so this is considered the “art” of deep learning. While the attribute’s impact on accuracy is easy to measure, its impact on the model’s bias is not. (The sketch after this list shows how a seemingly neutral attribute can smuggle a protected one back in.)
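To make that last point concrete, here is a minimal sketch using synthetic data. Everything in it is invented for illustration: the feature names (income, zip_code), the group labels, and the numbers. The protected attribute is never shown to the model, yet because zip code is highly correlated with it, the model’s approvals still split sharply along group lines.

```python
# Sketch with synthetic data: dropping a protected attribute does not remove
# bias when another attribute (here, zip code) acts as a proxy for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g., a demographic group), withheld from the model.
group = rng.integers(0, 2, n)

# Proxy attribute: matches group membership 90% of the time.
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical labels reflect past discrimination against group 1.
income = rng.normal(50, 10, n)
approved = (income - 15 * group + rng.normal(0, 5, n)) > 40

# Train only on the "neutral" attributes: income and zip code.
X = np.column_stack([income, zip_code])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# Approval rates still differ sharply by the withheld protected attribute.
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
```

The model never sees the group column, but it reconstructs it from the proxy, which is exactly the zip-code dynamic in the lending example earlier.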
So, given that we know how bias in models arises, why is it so hard to fix? There are four main reasons:
- Unknown unknowns. The downstream impact of the data and of the choices made during a model’s construction is not apparent until much later. Once a bias is discovered, retroactively identifying what caused it and how to get rid of it isn’t easy. When the engineers realized the Amazon tool was penalizing female candidates, they reprogrammed it to ignore explicitly gendered words like “women’s.” However, they discovered that the revised system still picked up on implicitly gendered words – verbs such as “executed” and “captured” that were highly correlated with men over women – and was using those to make its decisions.
- Imperfect processes. Bias was not a consideration in the design of many of deep learning’s standard practices. Testing a deep-learning model before deployment should provide a perfect opportunity to catch bias; however, in practice, the data used to test the model’s performance carry the same biases as the data used to train it, so testing fails to flag skewed or prejudiced results.
- Lack of social context. How computer scientists learn to frame problems isn’t compatible with the best way to think about social issues. According to Andrew Selbst, a postdoc at the Data & Society Research Institute, the problem is the “portability trap.” In computer science, a system that is usable for different tasks in different contexts is excellent, i.e., portable. However, this mindset ignores how much social settings differ. As Selbst said, “You can’t have a system designed in Utah and then applied in Kentucky directly because different communities have different versions of fairness. Or you can’t have a system that you apply to ‘fair’ criminal justice results then applied to employment. How we think about fairness in those contexts is just totally different.”
- Definitions of fairness. It is not clear what an absence of bias would look like. This is not just an issue for computer science; the question has a long history of debate in philosophy, social science, and law. But in computer science, the concept of fairness must be defined in mathematical terms, like balancing the false positive and false negative rates of a prediction system. What researchers have discovered is that there are many different mathematical definitions of fairness, and some of them are mutually exclusive. Does “fairness” mean that the same proportion of each group should be labeled high risk, or that the same level of risk should result in the same score regardless of race? It’s impossible to fulfill both definitions at the same time, so at some point you have to pick one; the arithmetic sketch below shows why. While other fields accept that such definitions can change over time, computer science requires a fixed definition. “By fixing the answer, you’re solving a problem that looks very different than how society tends to think about these issues,” says Selbst.
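The tension between fairness definitions can be shown with a few lines of arithmetic. In the sketch below, all the numbers are invented for illustration: two groups share identical true- and false-positive rates (one common definition of fairness), yet because their underlying base rates differ, a “high risk” label is far less likely to be correct for one group than for the other, violating the other definition. Equalizing the precision instead would force the error rates apart.

```python
# Invented numbers illustrating a well-known impossibility result: when base
# rates differ, equal error rates and equal precision cannot both hold.

def precision(base_rate: float, tpr: float, fpr: float) -> float:
    """Probability that a 'high risk' label is correct for a group."""
    true_pos = base_rate * tpr          # correctly flagged, per capita
    false_pos = (1 - base_rate) * fpr   # wrongly flagged, per capita
    return true_pos / (true_pos + false_pos)

tpr, fpr = 0.8, 0.2  # identical error rates for both groups (equalized odds)
p_a = precision(base_rate=0.5, tpr=tpr, fpr=fpr)  # group A: 50% base rate
p_b = precision(base_rate=0.2, tpr=tpr, fpr=fpr)  # group B: 20% base rate

print(f"precision, group A: {p_a:.2f}")  # 0.80
print(f"precision, group B: {p_b:.2f}")  # 0.50
```

With identical error rates, a “high risk” label is right 80% of the time for group A but only 50% of the time for group B. Whichever definition you equalize, the other breaks, so the choice of a fairness metric is a value judgment, not a technical detail.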
As the UK A-level exam debacle reminded us, algorithms can’t fix broken systems. The problem began when the regulator lost sight of the goal and pushed for standardization above all else. When someone approaches you with a tempting AI solution, consider all the ramifications of potential bias, because if there is bias in the system, you will bear the responsibility, not the AI program.