Learn how to migrate LangChain to LaikaTest with practical steps and code examples. Improve observability and testing for your LLMOps.
Naman Arora
January 24, 2026

# Migrate LangChain to LaikaTest
[ANECDOTE_PLACEHOLDER]
I will show how to migrate LangChain to LaikaTest, step by step. This guide is practical: it has code you can copy, and it explains the trade-offs. The primary goal is to help you migrate LangChain to LaikaTest safely, gaining better observability, easier testing, and stable LLMOps APIs. This is not a total rewrite guide; it is a migration path that keeps your system running.
## Why Migrate and Goals
Moving from LangChain to LaikaTest is like moving from a utility cart to a painted service van. You want to keep tools accessible, and you want to add a radio for alerts. LangChain is great as a rapid development tool. It lets you build agents, chains, memory, and callbacks quickly. But it lacks first-class observability. That becomes a problem in production. You do not have structured telemetry for model calls. You do not have built-in A/B testing tied to prompt versions. You do not get stable LLMOps APIs.
### Goals Before Touching Code
1. Define what success looks like. For example, 95th percentile LLM latency stays within 10 percent, telemetry coverage is 100 percent for calls, and there are no behavior regressions in core agent flows.
2. Identify non-goals. We will not rewrite UI or user-facing workflows yet. We will not change model vendors during the first pass.
3. Plan a staged rollout. Start with shadowing, then canary, then full switch.
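The staged rollout in step 3 can be sketched as a deterministic percentage gate. This is an illustrative sketch, not LaikaTest API: the stage names and percentages are assumptions. Hashing the user ID, rather than random sampling, keeps each user in the same cohort across requests, which makes canary metrics stable.

```python
import hashlib

# Hypothetical rollout stages mirroring the staged plan above.
STAGES = {"shadow": 0, "canary": 5, "partial": 25, "full": 100}

def in_cohort(user_id: str, percent: int) -> bool:
    """Deterministically bucket a user into the rollout cohort.

    The SHA-256 hash maps each user ID to a stable bucket in [0, 100),
    so a user stays in (or out of) the canary for every request.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

# Example: at the 5 percent canary stage, roughly 5 in 100 users are routed.
canary_users = sum(in_cohort(f"user-{i}", STAGES["canary"]) for i in range(1000))
print(canary_users)  # roughly 50 of 1000; exact count depends on the hash
```

The same gate can drive the later promotion steps (5, 25, 100 percent) by changing only the percentage.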
### What is the Problem with LangChain?
LangChain is a flexible SDK, and that is its strength. The problem is that it lacks built-in production-grade telemetry and experiment support. It was not designed to track prompt versions, tool calls, costs, or to run A/B tests on live traffic. You can add callbacks and tracers, but that requires custom work. That custom work is what makes production hard to maintain. LaikaTest adds structured telemetry, tracing, and evaluation hooks out of the box, which helps close that gap.
## Inventory Your LangChain Usage
Before you move anything, inventory your usage. Like taking stock before shifting offices, label each box by function and owner. You want to know where LangChain is used and how it is used.
### Checklist
- Model calls, LLM classes, and wrappers
- Agents and tools
- Chains and sub-chains
- Memory backends
- Callbacks and tracer code
- Custom wrappers or monkey patches
- Versions and pinned dependencies
- Owners for each component
### Quick Repo Scanner
Below is a small Python script that scans a repo for LangChain imports and prints CSV rows with file path, line number, and import text. It uses the `ast` module, so it is robust to formatting differences.

```python
import ast
import csv
import sys
from pathlib import Path

root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
out = Path("langchain_imports.csv")
rows = []

for py in root.rglob("*.py"):
    try:
        src = py.read_text()
        tree = ast.parse(src)
    except Exception:
        continue  # skip unreadable or syntactically invalid files
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for n in node.names:
                if n.name.startswith("langchain"):
                    rows.append((str(py), node.lineno, f"import {n.name}"))
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.startswith("langchain"):
                names = ", ".join(n.name for n in node.names)
                rows.append((str(py), node.lineno, f"from {node.module} import {names}"))

with out.open("w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["file", "line", "import"])
    w.writerows(rows)

print(f"Wrote {len(rows)} rows to {out}")
```
Run this script and then review each file. Tag owners and note what tests exist for that code. This will make migration surgical.
## Check Environment and Python Requirements
Upgrading Python can block your migration. Treat it like ensuring the truck fits under the bridge before loading. Confirm runtime, virtualenv, container constraints, and CI images.
### Dockerfile Snippet
```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
CMD ["gunicorn", "app:app", "-b", "0.0.0.0:8000"]
```
### pyproject.toml Snippet
```toml
[project]
name = "my-llm-app"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "langchain>=0.0.200",
    "laikatest-sdk>=0.1.0",
]
```
### Version Gate Command
```bash
python -V || true
python - <<'PY'
import sys
if sys.version_info < (3, 10):
    print("Python 3.10 or higher is required. Exiting.")
    sys.exit(1)
print("Python version OK")
PY
```
### What Version of Python is Needed for LangGraph?
LangGraph is the LangChain team's newer graph-based orchestration library. For LangChain and modern adapter tooling, target Python 3.10 or higher; many LangChain features assume 3.10+ typing and syntax. Aim for 3.11 if possible for performance and compatibility.
## Map LangChain Constructs to LaikaTest Patterns
Mapping is like assigning room functions when moving houses. Assign each LangChain part to a new room in LaikaTest.
### Prose Mapping Plan
- LLM calls, single model invocations -> LaikaTest LLM event emitter, with telemetry for prompt, response, tokens, and cost.
- Chains -> LaikaTest test harness and step tracing. Each chain becomes a traced step sequence.
- Agents -> LaikaTest orchestrated runners, where tool calls are events.
- Memory -> Keep memory backends, but add hooks to record memory reads and writes.
- Callbacks -> Replace or augment with LaikaTest tracers for consistent events.
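The memory mapping above ("keep the backend, add hooks") can be sketched as a thin wrapper. This is an illustrative sketch: the `events` list stands in for a LaikaTest client, and the `load`/`save` method names are assumptions about your memory backend's interface.

```python
# Wrap an existing memory backend so every read and write also emits a
# telemetry event. Swap the `events` list for real SDK calls when wiring up.
class InstrumentedMemory:
    def __init__(self, backend, events):
        self.backend = backend   # here, a plain dict stands in for the backend
        self.events = events     # stand-in event sink

    def load(self, key):
        value = self.backend.get(key)
        self.events.append({"name": "memory.read", "key": key, "hit": value is not None})
        return value

    def save(self, key, value):
        self.backend[key] = value
        self.events.append({"name": "memory.write", "key": key})

events = []
memory = InstrumentedMemory({}, events)
memory.save("session-1", "user asked about invoices")
print(memory.load("session-1"))  # -> "user asked about invoices"
print(len(events))               # -> 2: one write, one read
```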
### LangChain Example and Adapter
LangChain

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(template="Say hello to {name}", input_variables=["name"])
llm = ChatOpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run({"name": "Sita"}))
```
LaikaTest Conceptual Adapter

```python
class LaikaLLMAdapter:
    def __init__(self, langchain_llm, laika_client):
        self.llm = langchain_llm
        self.laika = laika_client

    def predict(self, prompt):
        self.laika.start_event("llm.call", {"prompt": prompt})
        resp = self.llm.predict(prompt)
        self.laika.end_event("llm.call", {"response": resp})
        return resp
```
This adapter keeps your code stable. You can replace the internals later to call LaikaTest SDK directly.
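A possible second implementation with the same `predict()` surface, for when you swap the internals. This is a sketch under assumptions: `trace_llm_call` is a hypothetical SDK method, and a minimal stand-in client makes the example runnable. The point is that callers never change when you swap this in for the LangChain-backed adapter.

```python
# Stand-in for the real LaikaTest client; replace with the actual SDK.
class FakeLaikaClient:
    def __init__(self):
        self.traces = []

    def trace_llm_call(self, prompt, response):  # hypothetical SDK method
        self.traces.append((prompt, response))

class DirectLaikaAdapter:
    """Same predict() interface as the LangChain-backed adapter."""

    def __init__(self, model_fn, laika_client):
        self.model_fn = model_fn  # any callable: prompt -> response
        self.laika = laika_client

    def predict(self, prompt):
        resp = self.model_fn(prompt)
        self.laika.trace_llm_call(prompt, resp)
        return resp

client = FakeLaikaClient()
adapter = DirectLaikaAdapter(lambda p: f"echo: {p}", client)
print(adapter.predict("hello"))  # -> "echo: hello"
print(len(client.traces))        # -> 1
```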
## How to Import LangChain Chains in Python
Finding the right import is like finding the correct kitchen cabinet after a remodel. Items moved to new namespaces in newer LangChain versions.
### Canonical Imports
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(template="Say hello to {name}", input_variables=["name"])
llm = ChatOpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run({"name": "Sita"}))
```
### Legacy Note
Some older code used `from langchain import LLMChain`. With recent releases, core items moved. If you see legacy imports, check whether your project needs the `langchain-classic` compatibility package. The module locations changed to make the package modular.
### How to Import LangChain Chains in Python?
Use the newer explicit imports, like `from langchain.chains import LLMChain` and `from langchain.chat_models import ChatOpenAI`. If you rely on legacy style, check the classic compatibility package or update imports.
### Is LangChain Chain Deprecated?
The concept of a chain is not deprecated. Some old namespaces and convenience imports are deprecated. The chain abstraction remains a core feature. You may need to update import paths in modern LangChain versions.
## Small Adapter Pattern to Keep Code Stable
An adapter layer is like a universal plug adapter. It lets old appliances work with a new socket. Add a thin adapter and you can migrate step by step.
### Adapter Example
```python
class LLMAdapter:
    def __init__(self, langchain_llm, laika_client):
        self.llm = langchain_llm
        self.laika = laika_client

    def predict(self, prompt):
        self.laika.start_event("llm.call", payload={"prompt": prompt})
        resp = self.llm.predict(prompt)
        self.laika.end_event("llm.call", payload={"response": resp})
        return resp
```
You can swap the adapter later. Implement one adapter that wraps LangChain. Implement another that uses LaikaTest SDK directly. Your app calls `LLMAdapter.predict` and stays the same.
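One way to make the swap reversible is a small factory behind a config flag, so rollback is a single environment variable change. This is a sketch: the `LLM_BACKEND` flag name is an assumption, and the two lambdas stand in for the LangChain-backed and LaikaTest-backed adapters described above.

```python
import os

def make_adapter(langchain_predict, laika_predict):
    """Return whichever predict() implementation the LLM_BACKEND flag selects.

    Both callables must share the same interface, so callers never change.
    """
    backend = os.environ.get("LLM_BACKEND", "langchain")
    return laika_predict if backend == "laika" else langchain_predict

predict = make_adapter(lambda p: "from-langchain", lambda p: "from-laika")
print(predict("hi"))  # "from-langchain" unless LLM_BACKEND=laika is set
```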
## Instrumenting LangChain Calls with LaikaTest
Like adding checkpoints on a delivery route to measure delays and package condition, add checkpoints for prompts. Capture prompt, response, latency, tokens, and deterministic IDs.
### LangChain Tracer Example
```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from laika_sdk import LaikaClient

class LaikaTracer(BaseCallbackHandler):
    def __init__(self, laika_client):
        self.laika = laika_client

    def on_llm_end(self, response, **kwargs):
        payload = {
            "text": getattr(response, "generations", str(response)),
            "meta": kwargs,
        }
        self.laika.log("llm.response", payload)

laika = LaikaClient(api_key="...")
tracer = LaikaTracer(laika)
llm = ChatOpenAI(callbacks=[tracer])
```
This adds a tracer so LaikaTest gets all LLM events. Capture tokens and latency if your LLM returns usage fields. Add a deterministic request ID to each call. That allows tracing across systems.
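The deterministic request ID mentioned above can combine two pieces: a random ID that is unique per call for cross-system tracing, and a content fingerprint that is stable for a given prompt and prompt version, so replays map back to the same fingerprint. This is an illustrative sketch; the field names are assumptions.

```python
import hashlib
import uuid

def request_ids(prompt: str, prompt_version: str):
    """Return (request_id, fingerprint) for one LLM call.

    request_id is unique per call: propagate it in headers and logs.
    fingerprint is deterministic: the same prompt + version always hashes
    to the same value, which makes replayed calls easy to correlate.
    """
    request_id = str(uuid.uuid4())
    fingerprint = hashlib.sha256(f"{prompt_version}:{prompt}".encode()).hexdigest()[:16]
    return request_id, fingerprint

rid, fp = request_ids("Say hello to Sita", "v3")
print(fp)  # same prompt + version always yields this fingerprint
```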
## Migrating Agents and Chains with Examples
Move a complex machine part by part. Test each connection after you move it.
### LangChain Agent Example
```python
from langchain.agents import initialize_agent, AgentType

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
result = agent.run("Find the latest invoice for order 123")
```
### Migration Path
1. Wrap each tool with a tracer that logs inputs and outputs to LaikaTest.
2. Run the agent in shadow mode. Duplicate calls to LaikaTest but return the original result to users.
3. Add tests for each agent step. Verify the tool was called with expected args.
4. Optionally replace the agent loop with a LaikaTest orchestrated runner that records steps and assertions.
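Step 2 (shadow mode) can be sketched as a wrapper that duplicates each call but always returns the original result to the user. This is an illustrative sketch, not LaikaTest API; the two lambdas in the example stand in for the original and new call paths. The shadow call is wrapped in try/except so a failure in the new path can never affect users.

```python
def shadowed_call(primary_fn, shadow_fn, mismatches, prompt):
    result = primary_fn(prompt)          # user-facing path, unchanged
    try:
        shadow = shadow_fn(prompt)       # new path, observed only
        if shadow != result:
            mismatches.append({"prompt": prompt, "primary": result, "shadow": shadow})
    except Exception:
        pass                             # shadow failures are recorded, never raised
    return result

mismatches = []
out = shadowed_call(lambda p: "A", lambda p: "B", mismatches, "test prompt")
print(out, len(mismatches))  # A 1 -- the user still sees the primary result
```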
### Tool Wrapper Example
```python
def wrap_tool(tool, laika):
    def wrapped(*args, **kwargs):
        laika.start_event("tool.call", {"tool": tool.name, "args": args, "kwargs": kwargs})
        out = tool.run(*args, **kwargs)
        laika.end_event("tool.call", {"tool": tool.name, "output": out})
        return out
    return wrapped
```
This makes tool calls visible without changing agent logic.
## Testing Workflows with LaikaTest and pytest
Switch from manual QA to an automated checklist that rings the bell when something breaks. Tests should assert both behavior and telemetry.
### pytest Example
```python
def test_agent_response(monkeypatch, laika_client):
    monkeypatch.setattr("path.to.llm.predict", lambda p: "OK")
    resp = agent.run("Check inventory")
    assert resp == "OK"
    events = laika_client.get_events(filter={"name": "llm.response"})
    assert any("Check inventory" in e["payload"].get("prompt", "") for e in events)
```
Run this in CI. Gate releases on telemetry coverage and behavior checks.
## CI/CD and Rollout Strategy
Adopt a staged rollout. Start with shadow mode, then canary, then full switch. Like raising the volume slowly after changing radio stations, increase traffic gradually to make sure the new path is clear.
### GitHub Actions Snippet
```yaml
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install
        run: pip install -r requirements.txt
      - name: Run tests
        run: pytest -q
      - name: Run Laika validations
        run: |
          python scripts/laika_validate.py
      - name: Notify Slack on failure
        if: failure()
        uses: slackapi/slack-github-action@v1
        with:
          payload: '{"text":"CI failed"}'
```
### Shadow Run Pattern
- Duplicate each LLM call to LaikaTest in parallel.
- Do not change the response path.
- Compare LaikaTest events to baseline.
- Once parity is verified, move traffic for a canary.
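The "compare events to baseline" step above can be sketched as a parity report keyed by request ID. This is an illustrative sketch: the record shape (`request_id`, `response`) is an assumption about your captured telemetry.

```python
def parity_report(baseline, shadow_events):
    """Compare shadow-run events against baseline records by request ID."""
    by_id = {e["request_id"]: e for e in shadow_events}
    missing, mismatched = [], []
    for b in baseline:
        e = by_id.get(b["request_id"])
        if e is None:
            missing.append(b["request_id"])       # telemetry gap
        elif e["response"] != b["response"]:
            mismatched.append(b["request_id"])    # behavior drift
    total = len(baseline)
    coverage = 100 * (total - len(missing)) / total if total else 0.0
    return {"coverage_pct": coverage, "missing": missing, "mismatched": mismatched}

baseline = [{"request_id": "r1", "response": "ok"}, {"request_id": "r2", "response": "ok"}]
shadow = [{"request_id": "r1", "response": "ok"}]
print(parity_report(baseline, shadow))  # 50.0 percent coverage, r2 missing
```

Gating the canary on `coverage_pct` and an empty `mismatched` list makes the "parity is verified" criterion concrete.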
## Common Pitfalls and Troubleshooting
Like a missing screw that shows up only under vibration, some bugs only appear under load.
### Common Issues
- Dependency conflicts with LangChain versions.
- API surface changes between LangChain releases.
- Token accounting differences across SDKs.
- Telemetry gaps when callbacks are not registered.
- Flaky tests due to non-deterministic outputs.
### Quick Remediation
- Pin LangChain versions during migration. Test against new versions in a branch.
- Add deterministic IDs to prompts and requests.
- Replay failed prompts locally using captured telemetry.
### Replay Script Example
```python
from laika_sdk import LaikaClient

laika = LaikaClient(api_key="...")
events = laika.get_events(filter={"name": "llm.response", "limit": 1})
e = events[0]
prompt = e["payload"]["prompt"]
print("Replaying prompt:", prompt)
```
### Is LangChain Chain Deprecated?
No, chains are still active in LangChain. Some import paths are deprecated. Update imports as needed.
### What is the Problem with LangChain?
LangChain is not made for production monitoring out of the box. You need structured telemetry, A/B testing, and stable LLMOps APIs for production. That gap is what LaikaTest closes.
## Post-Migration Validation and KPIs
Run a full test drive after servicing a car. Validate correctness, latency, cost, and telemetry coverage.
### Example Metric Check
```python
from laika_sdk import LaikaClient

laika = LaikaClient(api_key="...")
baseline = laika.metrics("llm.latency", window="24h", tag={"env": "baseline"})
current = laika.metrics("llm.latency", window="24h", tag={"env": "canary"})
assert current.p95 <= baseline.p95 * 1.10, "Latency regression"
print("Metrics OK")
```
### Track
- Telemetry coverage percent.
- Latency P50 and P95.
- Cost per 1k tokens.
- Error rate for agent failures.
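The tracked KPIs above can be computed from raw call records. This is an illustrative sketch: the record shape (`instrumented`, `tokens`, `cost_usd`) is an assumption about your captured telemetry, not LaikaTest's schema.

```python
def kpis(calls):
    """Compute telemetry coverage and cost per 1k tokens from call records."""
    total = len(calls)
    instrumented = sum(1 for c in calls if c.get("instrumented"))
    tokens = sum(c.get("tokens", 0) for c in calls)
    cost = sum(c.get("cost_usd", 0.0) for c in calls)
    return {
        "telemetry_coverage_pct": 100 * instrumented / total if total else 0.0,
        "cost_per_1k_tokens": 1000 * cost / tokens if tokens else 0.0,
    }

calls = [
    {"instrumented": True, "tokens": 500, "cost_usd": 0.01},
    {"instrumented": False, "tokens": 500, "cost_usd": 0.01},
]
print(kpis(calls))  # coverage 50.0 percent, cost 0.02 per 1k tokens
```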
## Rollforward, Rollback, and Documentation
Keep a spare key hidden when moving house. Keep a tested rollback path.
### Checklist Sample (runbook.md)
- Precheck telemetry coverage.
- Run canary for N hours.
- Validate metrics against baseline.
- If failure, switch traffic back by setting feature flag.
- Postmortem and revert commits if needed.
### Runbook Entry
1. Switch feature flag to shadow for 100 percent.
2. Wait 2 hours and check LaikaTest telemetry.
3. If OK, enable canary 5 percent.
4. Monitor for 6 hours.
5. Promote to 25, then 100 percent.
6. If errors rise, revert to feature flag off.
## Conclusion with LaikaTest
LaikaTest is designed to handle the observability and test needs we discussed. It helps teams experiment, evaluate, and debug prompts and agents safely in real usage. Start with an adapter layer. Enable LaikaTest tracers in shadow mode. Run the demo test suite while you canary. Pilot LaikaTest on a single critical agent first. Measure telemetry coverage and latency. Then roll out across more agents.
LaikaTest helps with prompt A/B testing. It helps with agent experiments. It gives one-line observability and tracing for prompt versions, model outputs, tool calls, costs, and latency. It helps you build an evaluation feedback loop that ties human or automated scores to the exact prompt version.
### Recommended Next Steps
1. Run the repo scanner and label owners.
2. Add the LLMAdapter and LaikaTracer.
3. Start a shadow run and collect telemetry.
4. Run the demo test suite and gates in CI.
5. Move to canary when metrics are stable.
This migration path is rarely documented. I wrote this from hands-on experience. If you start with the adapter pattern and shadow mode, you will reduce risk. You will keep app stability. You will gain the telemetry that helps you iterate safely.