Lilian Weng
Technical Blog
OpenAI researcher focused on AI safety, agents, and reliable deployment of large models.
Lilian Weng is a researcher at OpenAI whose long-form posts have become reference material for anyone studying LLMs, tool use, and the emerging “agent” ecosystem. She writes with unusual thoroughness, often synthesizing dozens of papers into a single, carefully structured survey that still rewards close reading by specialists.
Her articles cover reward hacking, hallucination, retrieval-augmented generation, planning with language models, and the pattern libraries that emerge when models are wrapped in loops, memory, and external APIs. The writing is technical, with equations and architecture diagrams where they clarify a point, but the organization and citations make it feasible to go from intuition to implementation detail in one sitting.
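The "model in a loop with memory and tools" pattern mentioned above can be sketched minimally. The snippet below is illustrative only and not taken from Weng's posts: `run_agent`, `stub_model`, and the tool registry are hypothetical names, and a deterministic stub stands in for a real LLM.

```python
# Minimal sketch of an agent loop: model proposes an action, the loop
# executes tools and appends observations (the "memory") to the history.
# Names here (run_agent, stub_model, TOOLS) are illustrative assumptions.

def calculator(expression: str) -> str:
    """A toy tool the agent can invoke."""
    return str(eval(expression, {"__builtins__": {}}))  # demo only; unsafe in production

TOOLS = {"calculator": calculator}

def stub_model(history):
    """Stand-in for an LLM: calls a tool once, then gives a final answer."""
    tool_msgs = [m for m in history if m["role"] == "tool"]
    if not tool_msgs:
        return {"action": "tool", "name": "calculator", "input": "2 + 3 * 4"}
    return {"action": "final", "answer": f"The result is {tool_msgs[-1]['content']}."}

def run_agent(model, question, max_steps=5):
    """Loop until the model emits a final answer or the step budget runs out."""
    history = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        decision = model(history)
        if decision["action"] == "final":
            return decision["answer"]
        observation = TOOLS[decision["name"]](decision["input"])
        history.append({"role": "tool", "name": decision["name"], "content": observation})
    return "Step budget exhausted."

print(run_agent(stub_model, "What is 2 + 3 * 4?"))
```

Evaluating such workflows means inspecting the full `history`, not just the final answer, which is one reason her posts stress multi-step evaluation.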
Follow Weng if you want serious, primary-source-grounded explainers on how frontier systems fail, how people try to mitigate those failures, and what open problems remain when models act over multiple steps. Her blog is especially useful for teams that are moving from prompting a chat UI to building semi-autonomous workflows where safety and evaluation matter.