Experiments

Set up A/B tests, route traffic intelligently, and build multi-variant workflows using visual decision trees.

What is an Experiment?

Experiments in LaikaTest let you compare different prompt versions and deliver dynamic behavior based on user context. They allow you to:

  • Set up A/B tests: Compare prompt variants fairly
  • Define conditional workflows: Show different variants to users based on attributes like region, user type, or subscription tier
  • Route traffic: Control percentage splits like 50-50, 60-40, or multi-variant configs

Key Feature: Experiments support conditional routing. For example, premium users may receive Variant A, while all other users get Variant B.
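As a minimal sketch of what such a conditional rule evaluates to (the attribute and variant names here are illustrative, not part of the LaikaTest API):

```typescript
// Illustrative only: attribute and variant names are hypothetical.
type UserContext = { user_type: string };

// Premium users receive Variant A; all other users get Variant B.
function routeUser(ctx: UserContext): string {
  return ctx.user_type === 'premium' ? 'variant-a' : 'variant-b';
}
```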

Tree Architecture (Dashboard UI)

Experiments are built using a visual decision tree in the dashboard. You can combine:

  • Filter Nodes: Route users conditionally (e.g., user_type = premium → Variant A).
  • Split Nodes: Divide traffic by percentage (e.g., 50-50, 60-40).
  • Variant Buckets: Assign each user to a final prompt variant.
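Conceptually, these three node types compose into a tree that resolves each user to one variant. The model below is a hypothetical sketch of that structure, not the dashboard's internal schema:

```typescript
// Hypothetical model of an experiment tree; not the dashboard's actual format.
type TreeNode =
  | { kind: 'filter'; attribute: string; equals: string; then: TreeNode; else: TreeNode }
  | { kind: 'split'; branches: { weight: number; node: TreeNode }[] }
  | { kind: 'bucket'; variant: string };

// Example tree: premium users → Variant A; others → 50-50 split of B and C.
const tree: TreeNode = {
  kind: 'filter',
  attribute: 'user_type',
  equals: 'premium',
  then: { kind: 'bucket', variant: 'variant-a' },
  else: {
    kind: 'split',
    branches: [
      { weight: 50, node: { kind: 'bucket', variant: 'variant-b' } },
      { weight: 50, node: { kind: 'bucket', variant: 'variant-c' } },
    ],
  },
};

// Walk the tree for a user; split branches are resolved by a roll in [0, 100).
function evaluate(node: TreeNode, attrs: Record<string, string>, roll: number): string {
  switch (node.kind) {
    case 'bucket':
      return node.variant;
    case 'filter':
      return evaluate(attrs[node.attribute] === node.equals ? node.then : node.else, attrs, roll);
    case 'split': {
      let acc = 0;
      for (const b of node.branches) {
        acc += b.weight;
        if (roll < acc) return evaluate(b.node, attrs, roll);
      }
      // Fall back to the last branch if weights sum to slightly under 100.
      return evaluate(node.branches[node.branches.length - 1].node, attrs, roll);
    }
  }
}
```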

Experiment Workflow

  1. Design: Build the experiment visually using drag-and-drop nodes.
  2. Routing: Users are assigned variants based on your conditions and traffic splits.
  3. Assignment: Each user consistently receives the same variant (deterministic hashing).
  4. Analytics: Monitor conversion, satisfaction, and custom metrics in the dashboard.
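The assignment step can be sketched as follows. The hash function and bucket boundaries here are illustrative, not LaikaTest's actual algorithm; the point is only that the same (experiment, user) pair always hashes to the same variant:

```typescript
import { createHash } from 'crypto';

// Hash (experimentName, userId) to a stable percentile in [0, 100).
// Illustrative only: LaikaTest's real hashing scheme may differ.
function stablePercentile(experimentName: string, userId: string): number {
  const digest = createHash('sha256').update(`${experimentName}:${userId}`).digest();
  // Interpret the first 4 bytes as an unsigned integer, then scale to [0, 100).
  return (digest.readUInt32BE(0) / 0x100000000) * 100;
}

// A 50-50 split: the same user always lands in the same half.
function assignVariant(experimentName: string, userId: string): string {
  return stablePercentile(experimentName, userId) < 50 ? 'variant-a' : 'variant-b';
}
```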

Setting Up A/B Tests

The dashboard provides a guided way to create A/B tests without writing code. Simply:

  1. Navigate to your project
  2. Select Create Experiment
  3. Enter the experiment title and optional end criteria
  4. Add split nodes to route traffic (e.g., 50-50)
  5. Add filter nodes for conditional behavior
  6. Assign your prompt variants to each bucket
  7. Review and launch

Tip: Start with a simple 50-50 test, then gradually add filters as needed.

Traffic Routing

LaikaTest allows flexible control over how users are split between variants:

  • 50-50 Split: Ideal for standard A/B tests.
  • 60-40 Split: Good for conservative rollouts of new variants.
  • Multi-variant: Test multiple variants simultaneously (e.g., 33-33-34).

Important: Users will always receive the same variant on repeated evaluations, ensuring consistency.
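One way to realize a weighted config such as 33-33-34 is to map a user's stable percentile onto cumulative weight ranges. A sketch, with hypothetical variant names:

```typescript
// Map a percentile in [0, 100) onto weighted variants (illustrative helper).
function pickByWeights(percentile: number, config: [string, number][]): string {
  let acc = 0;
  for (const [variant, weight] of config) {
    acc += weight;
    if (percentile < acc) return variant;
  }
  // Fall back to the last variant if weights sum to slightly under 100.
  return config[config.length - 1][0];
}

// A 33-33-34 multi-variant configuration.
const multiVariant: [string, number][] = [
  ['variant-a', 33],
  ['variant-b', 33],
  ['variant-c', 34],
];
```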

Evaluating Experiments (SDK)

After setting up an experiment, you can retrieve the assigned prompt variant in your application using the SDK:

TypeScript
import { LaikaTest } from '@laikatest/js-client';

const client = new LaikaTest(process.env.LAIKATEST_API_KEY);

const result = await client.getExperimentPrompt('Premium User Greeting Test', {
  userId: 'user-12345',
  user_type: 'premium',
  region: 'us-west'
});

// Compile the assigned prompt
const message = result.prompt.compile({ customerName: 'Alice' });

console.log('Bucket:', result.getBucketId());
console.log('Message:', message.getContent());

client.destroy();

Best Practices

  • Start Simple: Begin with a 50-50 split
  • Use Meaningful Names: Name variants clearly
  • Define End Criteria: Use days or event count
  • Validate Before Launch: Use the dashboard validator
  • Don’t Modify Running Tests: Pause before editing

Next Steps