An interesting look at the pitfalls of blindly using AI, machine learning and other correlation-based methods without subject matter expertise.
We’ve all heard the hype around big data and AI: “Experts will no longer be needed; just put your data through machine learning or AI algorithms and the computer will give the magical answer.” The reality is rather different, as Nate Silver put it: “The numbers have no way of speaking for themselves. We speak for them, we imbue them with meaning.” Take the example of school exam grades. In the summer of 2020, with exams cancelled by the COVID-19 pandemic, the English Government used a predictive model to assign exam grades to schoolchildren. The grades it produced drew major objections, and the government put the blame on a “mutant algorithm”. Yet we could all see the unfairness of using postcode as a predictor of exam performance. This wasn’t a failure of the algorithm; rather, it was a failure of the subject matter experts to imbue the data and the resulting predictive models with meaning and fairness.
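To make the pitfall concrete, here is a minimal, hypothetical Python sketch, on entirely synthetic data (this is not the actual 2020 grading model, and all variable names are invented for illustration). It shows how a model trained naively on a postcode-like proxy will lean on it heavily, and how inspecting the fitted model surfaces that reliance so a domain expert can ask whether the feature should be used at all:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical synthetic data -- none of this reflects the real 2020 model.
rng = np.random.default_rng(0)
n = 2000
postcode_affluence = rng.normal(size=n)   # proxy for postcode/area
prior_attainment = rng.normal(size=n)     # the pupil's own track record

# In this synthetic world, grades correlate strongly with area affluence,
# so a naively trained model will happily lean on the postcode proxy.
grade = (0.6 * postcode_affluence
         + 0.4 * prior_attainment
         + rng.normal(scale=0.5, size=n))

X = np.column_stack([postcode_affluence, prior_attainment])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, grade)

# Inspecting the fitted model shows which inputs drive the predictions --
# the point where domain judgement, not the algorithm, must decide fairness.
for name, imp in zip(["postcode_affluence", "prior_attainment"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

The algorithm is doing exactly what it was asked to do; only subject matter knowledge can flag that the dominant feature is an unfair basis for individual predictions.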
This webinar will be a fascinating demonstration of the importance of integrating subject matter knowledge with data-analytic know-how to ensure we extract the most helpful insights from our data. We will outline the situations in which prediction alone is sufficient, and how to treat the modelling process differently when our goal is to increase understanding (as is often the case in research, development and production settings). Using several examples, we will explore some basic machine learning approaches, their pitfalls, and how to avoid misleading conclusions.
You will learn:
- The importance of using your domain knowledge, experience and intuition alongside statistical and machine learning methods.
- How to open the “black box” of data-driven methods and algorithms safely, and engage your audience with simple, interactive visuals of your findings (a minimal illustrative sketch follows this list).
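As a flavour of what opening the black box can look like, here is a minimal sketch, assuming scikit-learn and matplotlib on synthetic data (the webinar's own interactive visuals may well differ). It uses partial dependence plots, one of the simplest ways to show an audience how each input drives a model's average prediction:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Hypothetical synthetic data standing in for a production dataset.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(1000, 3))
# Nonlinear ground truth: feature 0 acts quadratically, feature 1 linearly,
# and feature 2 is pure noise.
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=1000)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Partial dependence plots show, one feature at a time, how the model's
# average prediction responds -- a view a domain expert can sanity-check
# against what they know about the real process.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1, 2])
plt.tight_layout()
plt.show()
```

Even a static plot like this lets a subject matter expert check whether the model's learned relationships match physical or business reality before anyone acts on its predictions.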