AutoKaam Playbook

LangChain, the Framework I Have a Love-Hate Relationship With

Powerful when you need it, overkill when you don't, and the breaking-changes tax is real.

Last reviewed:

The operator take

I shipped my first production LangChain pipeline in early 2024. I have ripped LangChain out of three empire codebases since. Both statements are true.

The pull of LangChain is genuine: it gives you retrieval, prompt templates, output parsers, agents, memory, and integrations with 200+ vendors out of the box. If you want to prototype a RAG pipeline by Friday and you have no opinions yet about chunking or vector stores, you will move faster with LangChain than without.

The push is also genuine. LangChain ships breaking changes the way most projects ship docs. The migration from v0.1 to v0.2 was a weekend; v0.2 to v0.3 was another weekend. By v0.3 I had a recon pipeline that was 80 percent LangChain wrappers around 20 percent Anthropic SDK calls, and the wrappers kept breaking on minor releases. I rewrote it as direct Anthropic SDK plus a 60-line retrieval helper. The rewrite has not needed a single migration in eight months.
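The 60-line retrieval helper isn't shown in the text; a minimal sketch of the idea (toy bag-of-words scoring instead of a real embedding API, all names illustrative, not the author's code) looks roughly like this:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real helper would call an embedding API.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    # Rank documents by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# The retrieved chunks would then be spliced into a prompt for a direct
# SDK call (e.g. anthropic.Anthropic().messages.create(...)).
docs = ["LangChain migration notes", "Anthropic SDK usage", "chunking strategies"]
print(retrieve("how do I call the Anthropic SDK", docs, k=1))
```

The point of the sketch is the size: retrieval without the framework is a few dozen lines you own outright, with no migration surface.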

For 2026, the question is not "do I use LangChain", it is "where in the stack does the abstraction earn its keep". My answer in the empire today is LangGraph for explicit state machines (multi-agent flows where you want a graph) and direct SDK calls for everything else. LangChain core (chains, runnables, output parsers) is a tax I no longer pay.
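The explicit-state-machine pattern that LangGraph formalizes can be sketched in plain Python (this is the shape of the idea, not the LangGraph API; all names here are illustrative):

```python
from typing import Callable

# Nodes are functions over a shared state dict; edges pick the next node
# from the current state. LangGraph's StateGraph packages this same idea
# with checkpointing and streaming on top.
Node = Callable[[dict], dict]

def run_graph(nodes: dict[str, Node],
              edges: dict[str, Callable[[dict], str]],
              start: str, state: dict) -> dict:
    current = start
    while current != "END":
        state = nodes[current](state)
        current = edges[current](state)
    return state

def draft(state: dict) -> dict:
    # Agent 1: produce a draft answer.
    state["draft"] = f"answer to: {state['question']}"
    return state

def review(state: dict) -> dict:
    # Agent 2: approve the draft or send it back.
    state["approved"] = "answer" in state["draft"]
    return state

result = run_graph(
    nodes={"draft": draft, "review": review},
    edges={"draft": lambda s: "review",
           "review": lambda s: "END" if s["approved"] else "draft"},
    start="draft",
    state={"question": "should we use LangGraph?"},
)
print(result["approved"])  # → True
```

When a flow has loops and conditional edges like this, the graph abstraction earns its keep; when it is a straight line of calls, it doesn't.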

The Indian-operator angle is the time cost. Most Indian SaaS startups have one or two engineers across the whole AI stack. Time spent debugging LangChain abstractions is time not spent on product. If you have one engineer and a 3-month runway, write the SDK call directly and move on.

The genuine LangChain win is when you need to swap models cheaply. With direct SDK calls, switching from Anthropic to OpenAI is a half-day rewrite. With LangChain, it is a config flip. If you are running a multi-vendor router for cost or latency reasons, LangChain still wins.
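What makes the swap a config flip is a common interface over the vendors. A hedged sketch of that routing pattern in plain Python (stub classes stand in for real SDK wrappers; the names are assumptions, not any library's API):

```python
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

# Stubs standing in for real vendor wrappers; real implementations
# would call the anthropic / openai client libraries.
class AnthropicStub:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"

class OpenAIStub:
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"

PROVIDERS: dict[str, ChatModel] = {
    "anthropic": AnthropicStub(),
    "openai": OpenAIStub(),
}

def route(provider: str, prompt: str) -> str:
    # Swapping vendors is now a one-key config change; this is the
    # abstraction LangChain's model classes give you out of the box.
    return PROVIDERS[provider].complete(prompt)

print(route("anthropic", "hello"))
print(route("openai", "hello"))
```

If you only ever call one vendor, this layer is dead weight; if you route by cost or latency across several, it pays for itself.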

For everyone else, learn what LangChain does, then decide which 20 percent you actually need, and write the rest yourself. This is how I run LLM calls across my empire today, and I have not regretted the trade once.

Why it matters in 2026

Despite the love-hate noise, LangChain remains the most complete LLM framework in the ecosystem. LangGraph (the same team's state-machine layer) is the cleanest way to build multi-agent flows with explicit state. The 2026 winner pattern is selective use: LangGraph for orchestration, direct SDK for individual calls, drop the rest.

Cost in INR

Free open source; LangSmith (paid sister product) from Rs 4,000/mo

Use when

  • Multi-vendor model routing where you want to swap models cheaply
  • LangGraph for explicit state-machine multi-agent flows
  • Prototyping when you have no strong opinions on the stack yet
  • Teams with prior LangChain investment and trained engineers

Skip when

  • Single-vendor pipelines (direct SDK is faster to build and maintain)
  • Latency-critical production paths (the abstraction adds 20-50ms)
  • Codebases where breaking changes cost more than feature velocity

Alternatives I would consider