Cut MoE Inference
Costs by 30-50%
Entropy-guided dynamic expert selection for Mixture-of-Experts models. Same accuracy, dramatically lower compute. Validated on Mixtral, Qwen-MoE, and OLMoE.
def select_experts(router_logits):
# Compute routing entropy
probs = softmax(router_logits)
H = -sum(p * log(p))
# Low entropy = confident routing
# Use fewer experts!
if H < 0.6:
K = 1 # 87.5% compute saved
elif H < 1.2:
K = 2 # 75% compute saved
else:
K = 4 # Full routing
return top_k(probs, K)Validated Results
Real compute savings on production MoE models. Accuracy measured relative to full Top-K routing baseline.
Mixtral 8x7B
MoEK=1 used 78% of the time with minimal quality loss
Qwen-MoE
MoEEffective across all entropy thresholds
OLMoE-1B-7B
MoEConsistent savings on smaller MoE architecture
How It Works
Adaptive-K uses information theory to make intelligent routing decisions. The key insight: routing entropy predicts when fewer experts are sufficient.
Compute Router Entropy
For each token, calculate the entropy H of the router softmax distribution. Low entropy = confident routing.
H = -sum(p * log(p))Dynamic K Selection
Based on entropy thresholds, select fewer experts for confident tokens, more for uncertain ones.
K = 1 if H < 0.6 else (2 if H < 1.2 else 4)Sparse Expert Execution
Only execute the selected K experts. Skip unnecessary computation entirely.
output = sum(expert[i](x) * w[i] for i in top_k)The Key Insight
When the router is confident (low entropy), it has already identified the "right" expert. Running additional experts adds compute cost but minimal value. By dynamically adjusting K based on entropy, we skip unnecessary work while maintaining output quality.