Is AI “noisy” in the sense described by Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein in their 2021 book Noise: A Flaw in Human Judgment? And if so, what might this mean?
In the book, ‘noise’ refers to unwanted variability in decisions or outputs that should ideally be consistent:
Imagine a patient who arrives at a hospital emergency department with ambiguous symptoms: mild chest pain and dizziness. On one occasion the attending clinician diagnoses a likely case of indigestion and advises minimal intervention. On another day, with nearly identical symptoms, a potential cardiac event is flagged and immediate, invasive treatment is urged. This is precisely the kind of ‘noisy’ decision-making observed in many situations with human decision makers. Could the same happen with AI performing a similar task?
What is ‘noise’ in AI, then?
In Kahneman et al’s terms, noise is inconsistency in judgments or decisions made under similar circumstances. While “bias” (which often draws far more attention) refers to a systematic error that pushes judgments in a particular direction, “noise” is random variability arising from the design of the decision system. AI systems, particularly those driven by machine learning (ML), are notorious for giving different answers to similar queries at different times. That is noise.
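The distinction can be made concrete with the book’s error equation: overall error (mean squared error) decomposes into bias squared plus noise squared. A minimal Python sketch, using made-up judgment numbers purely for illustration:

```python
import statistics

# Hypothetical repeated judgments of the same case; the 'true'
# value is taken to be 50 (illustrative numbers only).
true_value = 50
judgments = [62, 58, 65, 60, 61, 63, 59, 64]

mean_judgment = statistics.mean(judgments)

# Bias: the systematic component, i.e. how far the average
# judgment sits from the true value.
bias = mean_judgment - true_value

# Noise: the variability component, i.e. how much judgments
# scatter around their own average (population std deviation).
noise = statistics.pstdev(judgments)

# The error equation: MSE = bias^2 + noise^2 (an exact identity
# when population statistics are used).
mse = statistics.mean([(j - true_value) ** 2 for j in judgments])
assert abs(mse - (bias**2 + noise**2)) < 1e-9

print(f"bias = {bias:.2f}, noise = {noise:.2f}, MSE = {mse:.2f}")
```

Note that noise, unlike bias, can be measured without knowing the true value at all: the scatter is visible from the judgments alone.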
The popular AI systems of today, especially generative large language models, rely on probabilistic sampling (governed by parameters such as temperature) and random seeds to produce their output. These introduce variability, often intentionally, to enhance creativity or prevent repetitiveness. They are also sensitive to context, which can amplify noise:
- Slight changes in phrasing, timing, or prior interactions may lead to different outputs.
- Identical queries posed on different occasions might yield different results due to updates in training data or model fine-tuning.
This variability, while sometimes beneficial for adaptability, can introduce inconsistencies in scenarios requiring repeatable outcomes, as the sketch below illustrates.
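Here is a toy next-token sampler showing the mechanism at work (a sketch only: the token scores are invented, and real LLM decoding involves far more machinery, but temperature scaling and seeding operate on the same principle):

```python
import math
import random

def sample_token(logits, temperature=1.0, seed=None):
    """Sample one token from raw model scores via temperature-scaled softmax."""
    rng = random.Random(seed)  # fixed seed: reproducible draws; None: system entropy
    # Lower temperature sharpens the distribution; higher flattens it.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    r = rng.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # guard against floating-point rounding

# Invented scores for the next words after "The likely diagnosis is ..."
logits = {"indigestion": 2.0, "a cardiac event": 1.7, "anxiety": 1.2}

# Same input, no fixed seed: the answer varies from run to run. That is noise.
print([sample_token(logits) for _ in range(5)])

# Same input, fixed seed: the variability disappears.
print([sample_token(logits, seed=42) for _ in range(5)])
```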
Bias
Bias attracts far more attention than noise, but bias and noise can coexist in AI systems. A model might consistently generate biased outputs (high bias, low noise) for certain tasks, while being inconsistent (low bias, high noise) in others. Bias might stem from skewed training data, while noise arises from design features of the AI.
Mitigating AI Noise
Efforts to reduce noise in AI systems mirror the strategies for addressing human noise:
- Standardization: Configuring models to run deterministically (for example, fixed random seeds or zero-temperature decoding) so that identical inputs yield identical outputs.
- Calibration: Ensuring models are trained on high-quality, uniform data to minimize inconsistencies.
- Evaluation: Continuously monitoring outputs to identify and address sources of variability; a minimal version of such a ‘noise audit’ is sketched after this list.
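The book’s idea of a ‘noise audit’ translates naturally to AI: pose the same query repeatedly and measure how often the system agrees with itself. In this sketch, ask_model is a hypothetical stand-in for whatever inference API is actually in use:

```python
from collections import Counter

def consistency_rate(answers):
    """Fraction of responses agreeing with the most common answer:
    1.0 means fully consistent; lower values indicate noise."""
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / len(answers)

def audit(ask_model, prompt, n=20):
    """Run the same prompt n times through a caller-supplied model
    function and report how consistent the answers are."""
    return consistency_rate([ask_model(prompt) for _ in range(n)])

# Canned example echoing the triage scenario: 20 runs of the same
# prompt, 14 agreeing and 6 disagreeing.
sample = ["indigestion"] * 14 + ["cardiac event"] * 6
print(consistency_rate(sample))  # 0.7: the system disagrees with itself 30% of the time
```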
In scenarios where consistency is paramount, simpler rule-based AI systems, such as ‘expert’ or knowledge-based systems, might offer an alternative. These rely on explicitly defined rules and deterministic processes, which might make them inherently less prone to noise. They are, however, less adaptable to scenarios outside their predefined rules, making them less suitable for dynamic or ambiguous tasks; and they can be difficult to maintain at scale, where large rule bases can develop complex, unpredictable interactions and offer little ability to learn.
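By way of contrast, a toy rule-based triage system, echoing the emergency-department example above (the rules are invented for illustration, not clinical guidance):

```python
def triage(chest_pain, dizziness, age):
    """Toy rule-based triage: identical inputs always produce
    identical outputs. Invented rules, not medical advice."""
    if chest_pain and (dizziness or age >= 60):
        return "flag possible cardiac event: escalate immediately"
    if chest_pain:
        return "likely indigestion: monitor and reassess"
    return "no rule fired: route to general assessment"

# Zero noise by construction: the same case always gets the same answer.
assert triage(True, True, 45) == triage(True, True, 45)
print(triage(True, True, 45))
```

The brittleness is equally visible: any case the rule author never anticipated simply falls through to the default branch.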
Hybrid Approaches
In practice, it’s a matter of choosing the right approach for each case, and of integrating well-understood decision design and simple algorithmic techniques alongside ML, rather than forgetting them.
We shouldn’t just throw AI at everything: not every application needs more noise added in.