Retell AI

LLM Playground


Last updated 4 months ago

Learn how to effectively test and debug your AI agents using the LLM Playground

The LLM Playground provides a convenient environment for testing your AI agents without making actual web or phone calls. This interactive testing interface enables:

  • Rapid prototyping and debugging of agent responses

  • Testing different conversation scenarios

  • Immediate feedback on agent behavior

  • Faster development iterations

1. Access the LLM Playground

  1. Navigate to your agent’s detail page

  2. Click on the “Test LLM” tab

  3. You’ll see the chat interface where you can start testing

LLM Playground Interface

2. Test Basic Conversations

  1. Type your message in the input field

  2. Observe the agent’s response

3. Test Function Calling

  1. Use prompts that should trigger specific functions

  2. Verify that functions are called with correct parameters
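The verification step above can be scripted once you have a captured function call in front of you. The sketch below is a minimal, hypothetical example: the `book_appointment` function and the payload shape (`name` plus JSON-encoded `arguments`) are assumptions for illustration, so adapt the field names to whatever your agent's call records actually contain.

```python
import json

# Hypothetical payload representing a function call the agent emitted.
# The exact record shape in your logs may differ.
tool_call = {
    "name": "book_appointment",
    "arguments": json.dumps({"date": "2024-07-01", "time": "15:00"}),
}

def check_tool_call(call, expected_name, required_args):
    """Verify the agent called the expected function with the required parameters."""
    args = json.loads(call["arguments"])
    assert call["name"] == expected_name, f"unexpected function: {call['name']}"
    missing = [a for a in required_args if a not in args]
    assert not missing, f"missing arguments: {missing}"
    return args

args = check_tool_call(tool_call, "book_appointment", ["date", "time"])
print(args["date"])  # 2024-07-01
```

Checks like this make parameter mistakes visible immediately, rather than surfacing later as a failed booking or a malformed API request.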

4. Iterate and Refine

  1. Monitor agent behavior and responses

  2. Update prompts or functions as needed

  3. Click the “delete” button to reset conversations

  4. Test the updated behavior

Iterate and test

5. Save Test Cases

  1. Click the “Save” button to store your test conversation

  2. Add a descriptive name for the test case

  3. Access saved tests from the agent detail page

Save test case

Saved test cases

6. Test Dynamic Variables

  1. Use dynamic variables in your prompts

  2. Verify that variables are properly interpolated

  3. Test different variable values and scenarios
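To sanity-check interpolation outside the playground, a small sketch of double-brace substitution (assuming the `{{variable_name}}` placeholder style) might look like this. Leaving unknown placeholders intact makes missing variables easy to spot during testing.

```python
import re

def interpolate(prompt: str, variables: dict) -> str:
    """Replace {{name}} placeholders with supplied values; leave unknown
    placeholders intact so missing variables stand out during testing."""
    def sub(match):
        key = match.group(1).strip()
        return str(variables.get(key, match.group(0)))
    return re.sub(r"\{\{([^{}]+)\}\}", sub, prompt)

prompt = "Hello {{customer_name}}, your order {{order_id}} has shipped."
print(interpolate(prompt, {"customer_name": "Ada", "order_id": "1042"}))
# Hello Ada, your order 1042 has shipped.
print(interpolate(prompt, {"customer_name": "Ada"}))
# Hello Ada, your order {{order_id}} has shipped.
```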

Best Practices

  • Start with simple conversations and gradually test more complex scenarios

  • Save important test cases for regression testing

  • Test edge cases and error handling

  • Document unexpected behaviors for future reference
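Saved test cases can also back a lightweight regression check. The sketch below is a minimal, hypothetical harness: `run_agent` stands in for however you invoke your agent (SDK call, HTTP request, etc.), and each case pairs a prompt with a substring the reply should contain.

```python
# Hypothetical regression harness: each case pairs a prompt with a substring
# the agent's reply must contain for the case to pass.
test_cases = [
    {"name": "greeting", "prompt": "Hi there", "must_contain": "help"},
    {"name": "hours", "prompt": "When are you open?", "must_contain": "9"},
]

def run_regression(run_agent, cases):
    """Run every saved case and return the names of those that failed."""
    failures = []
    for case in cases:
        reply = run_agent(case["prompt"])
        if case["must_contain"].lower() not in reply.lower():
            failures.append(case["name"])
    return failures

# Example with a stubbed agent standing in for a real call:
stub = lambda prompt: "We are happy to help, open from 9 to 5."
print(run_regression(stub, test_cases))  # []
```

Rerunning such a harness after every prompt or function change catches regressions that are easy to miss when testing conversations by hand.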
