Daniel Lee

Stanford University  ·  Mathematics & Computer Science

2× AIME Qualifier  ·  USACO Gold  ·  MIT AI Hack-Nation Top 1.5%

Experience

Stanford AI Lab

ML Researcher  ·  Jun 2025 – Present

  • Developing interpretability methods to discover discrete reasoning operators in large language models.

Meta

Software Engineer Intern  ·  Jun 2026 (incoming)

Projects

DYNAMO

Reinforcement learning agent for dynamic portfolio optimization under market uncertainty. Trained with Proximal Policy Optimization on synthetic market environments to learn risk-adjusted allocation strategies.

RL  ·  PPO  ·  Finance  ·  PyTorch

Mixture-of-Steering-Vectors (MoSV)

Framework for targeted hallucination mitigation in LLMs via a learned mixture of activation steering vectors, applied at inference time through a sparse MLP router.

Steering Vectors  ·  Alignment  ·  PyTorch

ShED-HD

Entropy-based hallucination detection that leverages token distribution signals across model layers to flag unreliable generations, with no external verifier required.

Entropy  ·  Uncertainty  ·  NLP
Coming soon

OPUS

Discovering discrete reasoning operators in large language models via mechanistic interpretability.

Interpretability  ·  Mechanistic  ·  Transformers