Elevator Pitch
- Cloudflare's new OAuth provider library, built largely with Anthropic's Claude LLM, demonstrates that while AI can assist with code generation, producing a secure and standards-compliant OAuth implementation still requires deep human expertise and careful review.
Key Takeaways
- The AI-generated code is structurally sound but lacks comprehensive testing and contains several security and specification compliance issues, some of which were not caught during human review.
- Critical implementation flaws include overly permissive CORS handling, missing standard security headers, misuse of deprecated OAuth grants, and subtle cryptographic mistakes.
- Effective use of LLMs for security-sensitive code demands that developers possess enough expertise to identify and correct AI-generated mistakes; otherwise, significant vulnerabilities may slip through.
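The CORS and security-header issues listed above can be illustrated with a minimal sketch. This is not the library's actual code; the handler shape, the `ALLOWED_ORIGINS` set, and both function names are hypothetical, chosen only to contrast the permissive pattern the review criticizes with a stricter alternative.

```typescript
// Hypothetical illustration (not Cloudflare's implementation) of the
// "overly permissive CORS" pattern: reflecting any Origin back while
// also allowing credentials effectively disables the same-origin
// policy for authenticated endpoints.
function permissiveCorsHeaders(requestOrigin: string): Record<string, string> {
  return {
    "Access-Control-Allow-Origin": requestOrigin, // echoes the attacker's origin
    "Access-Control-Allow-Credentials": "true",
  };
}

// Stricter sketch: validate against an explicit allow-list, and include
// the kind of baseline security headers the review found missing.
const ALLOWED_ORIGINS = new Set(["https://app.example.com"]); // hypothetical

function strictCorsHeaders(requestOrigin: string): Record<string, string> {
  const headers: Record<string, string> = {
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "no-referrer",
  };
  if (ALLOWED_ORIGINS.has(requestOrigin)) {
    headers["Access-Control-Allow-Origin"] = requestOrigin;
    headers["Vary"] = "Origin"; // caches must key responses on Origin
  }
  return headers;
}
```

The contrast is the point: the permissive version trusts the caller's `Origin` header, while the strict version only ever echoes origins it was configured to trust.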
Most Memorable Aspects
- The revelation that certain security flaws, such as biased token generation and incorrect Basic auth handling, were overlooked, challenging claims of exhaustive expert review.

- Illustrative examples where the LLM proposed insecure cryptographic constructs, which only an expert human could spot and correct.
- The commit history exposes both the strengths and the persistent risks of “AI agentic” coding, especially in complex, high-stakes domains like authentication.
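The "biased token generation" flaw mentioned above is typically a modulo-bias bug. The sketch below is a generic illustration of that class of bug, not the library's code: both function names and the alphabet are assumptions for the example. Reducing a random byte modulo an alphabet size that does not evenly divide 256 makes the first few characters slightly more likely; rejection sampling removes the skew.

```typescript
import { randomBytes } from "node:crypto";

// 62-character alphabet: 256 % 62 === 8, so a plain modulo mapping
// over-represents the first 8 characters.
const ALPHABET =
  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

// Biased (the kind of subtle mistake reviewers describe): every byte is
// mapped with `%`, skewing the distribution toward early alphabet entries.
function biasedToken(length: number): string {
  let out = "";
  for (const b of randomBytes(length)) out += ALPHABET[b % ALPHABET.length];
  return out;
}

// Unbiased via rejection sampling: discard bytes at or above the largest
// multiple of the alphabet size (248 here), so every character is equally
// likely.
function unbiasedToken(length: number): string {
  const limit = 256 - (256 % ALPHABET.length); // 248 for 62 characters
  let out = "";
  while (out.length < length) {
    for (const b of randomBytes(length)) {
      if (b < limit && out.length < length) out += ALPHABET[b % ALPHABET.length];
    }
  }
  return out;
}
```

The bias is small (each of the first eight characters is about 3% more likely than the rest), which is exactly why such flaws survive casual review while still weakening token unpredictability.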
Direct Quotes
- "The idea that you can get an LLM to knock one up for you is not serious."
- "What this interaction shows is how much knowledge you need to bring when you interact with an LLM."
- "Yes, this does come across as a bit 'vibe-coded', despite what the README says, but so does a lot of code I see written by humans. LLM or not, we have to give a shit."
Source URL
- Original: 2359 words
- Summary: 262 words