HOME

TheInfoList



OR:

Inner alignment is a core challenge in
AI safety AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to ensure AI systems are mor ...
: ensuring that a
machine learning Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of Computational statistics, statistical algorithms that can learn from data and generalise to unseen data, and thus perform Task ( ...
system that becomes a mesa-optimizer—an optimizer produced by the training process—remains aligned with its original training objective. This issue arises when a system performs well during training but adopts a different goal once deployed, particularly under distributional shifts. A classic analogy is human evolution: while natural selection optimized for reproductive success, humans often pursue pleasure, sometimes at the expense of reproduction—a divergence known as inner misalignment. The concept was introduced in a widely cited paper that distinguishes inner alignment from outer alignment, which focuses on specifying the intended objective correctly. Addressing inner alignment involves managing risks such as deceptive alignment, gradient hacking, and objective drift.


Mesa-optimization

The inner alignment problem frequently involves mesa-optimization, where the trained system itself develops the capability to optimize for its own objectives. During training with techniques like
Stochastic Gradient Descent Stochastic gradient descent (often abbreviated SGD) is an Iterative method, iterative method for optimizing an objective function with suitable smoothness properties (e.g. Differentiable function, differentiable or Subderivative, subdifferentiable ...
(SGD), the model might evolve into an internal optimizer whose goals differ from the original training signal. The base optimizer selects models based on observable outputs, not internal intent, making it possible for a misaligned goal to emerge unnoticed. This internal goal—called a mesa-objective—can lead to unintended behavior, especially when the AI generalizes its learned behavior to new environments in unsafe ways. Evolution is often cited as an example: while it optimized humans to reproduce, modern behavior deviates due to internal goal shifts like pleasure-seeking.


Distinction from outer alignment

The distinction between inner and outer alignment was formalized in the paper ''Risks from Learned Optimization in Advanced Machine Learning Systems''. Outer alignment refers to ensuring that the training objective—also called the "outer objective", such as the loss function in supervised learning—correctly captures the goals and values intended by human designers. In contrast, inner alignment concerns whether the trained model actually pursues a goal that aligns with this outer objective. While the outer objective is explicitly specified, the inner objective is typically implicit and emerges from the training dynamics. This means that a model can perform well during training—appearing aligned—yet internally optimize for a different goal once deployed, especially in novel contexts or under distributional shifts. This risk is compounded by the fact that current machine learning systems do not provide a clear or transparent representation of their internal objectives. Consequently, significant misalignment can arise without being detected during development. This divergence is also explored in research on goal misgeneralization, which highlights how models can generalize their learned behavior in unintended ways that reflect internal goals differing from those specified by the outer training signal.


Practical illustrations

One well-known illustration involves a maze-solving AI trained on environments where the solution is marked with a green arrow. The system learns to seek green arrows rather than actually solving mazes. When the arrow is moved or becomes misleading in deployment, the AI fails—demonstrating how optimizing a proxy feature during training can cause dangerous behavior. Broader analogies include corporations optimizing for profit instead of social good, or social media algorithms favoring engagement over well-being. Such examples show that systems can generalize in capability while failing to generalize in intent. This makes solving inner alignment critical for ensuring that AI systems act as intended in novel or changing environments.


Definitional ambiguity

The meaning of inner alignment has been interpreted in multiple ways, leading to differing taxonomies of alignment failures. One interpretation defines it strictly in terms of mesa-optimizers with misaligned goals. Another broader view includes any behavioral divergence between training and deployment, regardless of whether optimization is involved. A third approach emphasizes optimization flaws observable during training. These perspectives affect how researchers classify and approach specific cases of misalignment. Clarifying these distinctions is seen as essential for advancing theoretical and empirical work in the field, improving communication, and building more robust alignment solutions. Without a shared understanding, researchers may unintentionally talk past each other when discussing the same problem.


Strategic importance

There is a growing sense of urgency around solving inner alignment, especially as advanced AI systems approach general-purpose capabilities. Misalignment has already been observed in deployed systems—for example, recommendation algorithms optimizing for engagement rather than user well-being. Even seemingly minor misbehaviors, such as
AI hallucination In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation, or delusion) is a response generated by AI that contains false or misleading information presented as fact. Thi ...
s, point to misalignment risks. Proposals to address inner alignment include hard-coded optimization routines, internals-based model selection, and adversarial training. However, these techniques are still under development, and there is concern that current approaches may not scale. It has been argued that embedding richer, human-aligned goals within systems is more promising than continuing to optimize narrow performance metrics.


Alternative framings

Several framings of the inner alignment problem have been proposed to clarify the conceptual boundaries between types of misalignment. One framing focuses on behavioral divergence in test environments: failures that arise due to bad training signals are classified as outer misalignment, while failures due to internal misgeneralized goals are classified as inner. A second framing considers the causal source of the failure—whether it stems from the reward function or the inductive biases of the training method. Another framing shifts to cognitive alignment, analyzing whether the AI’s internal goals match human values. A final framing considers alignment during continual learning, where models may evolve their goals post-deployment. Each approach highlights different risks and informs different research agendas.


See also

* Outer alignment *
AI alignment In the field of artificial intelligence (AI), alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered ''aligned'' if it advances the intended objectives. A '' ...
* Mesa-optimization * Deceptive alignment *
Goodhart's law Goodhart's law is an adage that has been stated as, "When a measure becomes a target, it ceases to be a good measure". It is named after British economist Charles Goodhart, who is credited with expressing the core idea of the adage in a 1975 arti ...
*
Reward hacking Specification gaming or reward hacking occurs when anArtificial intelligence , AI trained with reinforcement learning optimizes an objective function—achieving the literal, formal specification of an objective—without actually achieving an out ...


References

{{reflist AI safety