# Testing
## Overview
The Testing feature provides a comprehensive framework for testing AI agent pipelines, subgraphs, and tool interactions in the Koog framework. It lets developers create controlled test environments with mock LLM (Large Language Model) executors, tool registries, and agent environments.
## Purpose
The key purpose of this feature is to facilitate testing of agent-based AI features by:
- Mocking LLM responses to specific prompts
- Simulating tool calls and their results
- Testing agent pipeline subgraphs and their structure
- Verifying the correct flow of data between agent nodes
- Providing assertions for expected behavior
## Configuration and initialization
### Setting up test dependencies
Before setting up a test environment, make sure you have the following dependencies added:
```kotlin
// build.gradle.kts
dependencies {
    testImplementation("ai.koog:agents-test:LATEST_VERSION")
    testImplementation(kotlin("test"))
}
```
### Mocking LLM responses
The most basic form of testing involves mocking LLM responses to ensure deterministic behavior. You can do this with `MockLLMBuilder` and related utilities.
<!--- INCLUDE import ai.koog.agents.core.tools.ToolRegistry import ai.koog.agents.testing.tools.getMockExecutor
val toolRegistry = ToolRegistry {}
-->
```java
import ai.koog.agents.core.tools.ToolRegistry;
import ai.koog.agents.testing.tools.MockExecutor;
import ai.koog.prompt.executor.model.PromptExecutor;

// Create a tool registry (empty)
ToolRegistry toolRegistry = ToolRegistry.builder().build();

// Create a mock LLM executor
PromptExecutor mockLLMApi = MockExecutor.builder()
    .toolRegistry(toolRegistry)
    .mockLLMAnswer("Hello!").onRequestContains("Hello")
    .mockLLMAnswer("I don't know how to answer that.").asDefaultResponse()
    .build();
```
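For comparison, the same setup can be sketched in Kotlin with the `getMockExecutor` DSL used throughout this document (this exact combination is illustrative; it is assembled from the DSL calls shown in the complete examples below):

```kotlin
import ai.koog.agents.core.tools.ToolRegistry
import ai.koog.agents.testing.tools.getMockExecutor

// Create a tool registry (empty)
val toolRegistry = ToolRegistry {}

// Create a mock LLM executor: fixed answers for matching prompts,
// plus a fallback default response for everything else
val mockLLMApi = getMockExecutor(toolRegistry) {
    mockLLMAnswer("Hello!") onRequestContains "Hello"
    mockLLMAnswer("I don't know how to answer that.").asDefaultResponse
}
```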
### Mocking tool calls
You can mock the LLM to call specific tools based on input patterns:
<!--- INCLUDE import ai.koog.agents.core.tools.* import ai.koog.agents.ext.tool.AskUser import ai.koog.agents.ext.tool.SayToUser import ai.koog.agents.testing.tools.getMockExecutor import ai.koog.serialization.typeToken import kotlinx.serialization.Serializable import ai.koog.agents.core.tools.annotations.LLMDescription
public object CreateTool : Tool
override suspend fun execute(args: Args): String = args.message
}
public object SearchTool : Tool
override suspend fun execute(args: Args): String = args.query
}
public object AnalyzeTool : Tool
override suspend fun execute(args: Args): String = args.query
}
typealias PositiveToneTool = SayToUser typealias NegativeToneTool = SayToUser
val mockLLMApi = getMockExecutor { -->
```kotlin
// Mock a tool call response
mockLLMToolCall(CreateTool, CreateTool.Args("solve")) onRequestEquals "Solve task"

// Mock tool behavior - the simplest form, without a lambda
mockTool(PositiveToneTool) alwaysReturns "The text has a positive tone."

// Use a lambda when you need to perform extra actions
mockTool(NegativeToneTool) alwaysTells {
    // Perform some extra actions
    println("Negative tone tool called")
    // Return the result
    "The text has a negative tone."
}

// Mock tool behavior based on specific arguments
mockTool(AnalyzeTool) returns "Detailed analysis" onArguments AnalyzeTool.Args("analyze deeply")

// Mock tool behavior with conditional argument matching
mockTool(SearchTool) returns "Found results" onArgumentsMatching { args ->
    args.query.contains("important")
}
```
The examples above show different ways to mock tools, from simplest to most complex:

- `alwaysReturns`: the simplest form; returns a value directly, without a lambda.
- `alwaysTells`: uses a lambda when you need to perform additional actions.
- `returns...onArguments`: returns a specific result for exactly matching arguments.
- `returns...onArgumentsMatching`: returns a result based on a custom argument condition.
### Enabling test mode
To enable the test mode on an agent, use the `withTesting()` function within the `AIAgent` constructor block:
<!--- INCLUDE import ai.koog.agents.core.agent.AIAgent import ai.koog.agents.example.exampleTesting03.mockLLMApi import ai.koog.agents.example.exampleTesting02.toolRegistry import ai.koog.agents.testing.feature.withTesting import ai.koog.prompt.executor.clients.openai.OpenAIModels
val llmModel = OpenAIModels.Chat.GPT4o
// Create the agent with testing enabled
fun main() { -->
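A minimal sketch of the agent construction with testing enabled, assuming the `mockLLMApi`, `toolRegistry`, and `llmModel` from the setup above:

```kotlin
AIAgent(
    promptExecutor = mockLLMApi,  // the mock executor defined earlier
    toolRegistry = toolRegistry,
    llmModel = llmModel
) {
    // Enable testing mode on the agent
    withTesting()
}
```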
## Advanced testing
### Testing the graph structure
Before testing detailed node behavior and edge connections, it is important to verify the overall structure of your agent's graph. This includes checking that all required nodes exist and are properly connected within the expected subgraphs.
The Testing feature provides a comprehensive way to test your agent's graph structure. This approach is particularly valuable for complex agents with multiple subgraphs and interconnected nodes.
#### Basic structure testing
Start by verifying the basic structure of your agent's graph:
<!--- INCLUDE
import ai.koog.agents.core.agent.AIAgent import ai.koog.agents.core.environment.ReceivedToolResult import ai.koog.agents.example.exampleTesting03.mockLLMApi import ai.koog.agents.example.exampleTesting02.toolRegistry import ai.koog.agents.testing.feature.testGraph import ai.koog.prompt.executor.clients.openai.OpenAIModels import ai.koog.prompt.message.Message
val llmModel = OpenAIModels.Chat.GPT4o
fun main() {
-->
```kotlin
AIAgent(
    // Constructor arguments
    promptExecutor = mockLLMApi,
    toolRegistry = toolRegistry,
    llmModel = llmModel
) {
    testGraph<String, String>("test") {
        val firstSubgraph = assertSubgraphByName<String, String>("first")
        val secondSubgraph = assertSubgraphByName<String, String>("second")

        // Assert subgraph connections
        assertEdges {
            startNode() alwaysGoesTo firstSubgraph
            firstSubgraph alwaysGoesTo secondSubgraph
            secondSubgraph alwaysGoesTo finishNode()
        }

        // Verify the first subgraph
        verifySubgraph(firstSubgraph) {
            val start = startNode()
            val finish = finishNode()

            // Assert nodes by name
            val askLLM = assertNodeByName<String, Message.Response>("callLLM")
            val callTool = assertNodeByName<Message.Tool.Call, ReceivedToolResult>("executeTool")

            // Assert node reachability
            assertReachable(start, askLLM)
            assertReachable(askLLM, callTool)
        }
    }
}
```
### Testing node behavior
Node behavior testing lets you verify that nodes in your agent's graph produce the expected outputs for given inputs. This is essential for ensuring that your agent's logic works correctly across different scenarios.
#### Basic node testing
Start with simple input-output validation for individual nodes:
<!--- INCLUDE import ai.koog.agents.core.agent.AIAgent import ai.koog.agents.example.exampleTesting03.mockLLMApi import ai.koog.agents.example.exampleTesting02.toolRegistry import ai.koog.agents.example.exampleTesting03.CreateTool import ai.koog.agents.testing.feature.assistantMessage import ai.koog.agents.testing.feature.testGraph import ai.koog.agents.testing.feature.toolCallMessage import ai.koog.prompt.executor.clients.openai.OpenAIModels import ai.koog.prompt.message.Message
val llmModel = OpenAIModels.Chat.GPT4o
fun main() {
AIAgent(
// Constructor arguments
promptExecutor = mockLLMApi,
toolRegistry = toolRegistry,
llmModel = llmModel
) {
testGraph<String, String>("test") {
assertNodes {
val askLLM = assertNodeByName<String, Message.Response>("callLLM")
-->
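The node assertions themselves follow the `withInput ... outputs` pattern; a sketch, assuming the `askLLM` node reference and the mocks defined above:

```kotlin
// Verify the input-to-output behavior of the LLM node
askLLM withInput "Hello" outputs assistantMessage("Hello!")
askLLM withInput "Solve task" outputs toolCallMessage(CreateTool, CreateTool.Args("solve"))
```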
The example above shows how to test the following behavior:
1. When the LLM node receives `Hello` as input, it responds with a simple text message.
2. When it receives `Solve task`, it responds with a tool call.
#### Testing tool-running nodes
You can also test nodes that run tools:
<!--- INCLUDE import ai.koog.agents.core.agent.AIAgent import ai.koog.agents.core.environment.ReceivedToolResult import ai.koog.agents.core.tools.* import ai.koog.agents.example.exampleTesting03.mockLLMApi import ai.koog.agents.example.exampleTesting02.toolRegistry import ai.koog.agents.ext.tool.AskUser import ai.koog.agents.testing.feature.testGraph import ai.koog.agents.testing.feature.toolCallMessage import ai.koog.agents.testing.feature.toolResult import ai.koog.prompt.executor.clients.openai.OpenAIModels import ai.koog.prompt.message.Message import ai.koog.serialization.typeToken import kotlinx.serialization.Serializable import ai.koog.agents.core.tools.annotations.LLMDescription
object SolveTool : SimpleTool
override suspend fun execute(args: Args): String {
return args.message
}
}
val llmModel = OpenAIModels.Chat.GPT4o
fun main() {
AIAgent(
// Constructor arguments
promptExecutor = mockLLMApi,
toolRegistry = toolRegistry,
llmModel = llmModel
) {
testGraph<String, String>("test") {
assertNodes {
val callTool = assertNodeByName<Message.Tool.Call, ReceivedToolResult>("executeTool")
-->
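A sketch of such an assertion, assuming the `callTool` node reference above and the `"solved"` result that `SolveTool` produces in the complete example later in this document:

```kotlin
// Verify that a specific tool call produces the expected tool result
callTool withInput toolCallMessage(
    SolveTool,
    SolveTool.Args("solve")
) outputs toolResult(SolveTool, "solved")
```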
This verifies that when the tool execution node receives a specific tool call signature, it produces the expected tool result.
#### Advanced node testing
For more complex scenarios, you can test nodes with structured inputs and outputs:
<!--- INCLUDE import ai.koog.agents.core.agent.AIAgent import ai.koog.agents.core.tools.* import ai.koog.agents.example.exampleTesting03.mockLLMApi import ai.koog.agents.example.exampleTesting02.toolRegistry import ai.koog.agents.ext.tool.AskUser import ai.koog.agents.testing.feature.assistantMessage import ai.koog.agents.testing.feature.testGraph import ai.koog.agents.testing.feature.toolCallMessage import ai.koog.prompt.executor.clients.openai.OpenAIModels import ai.koog.prompt.message.Message import ai.koog.serialization.typeToken import kotlinx.serialization.Serializable import ai.koog.agents.core.tools.annotations.LLMDescription
object AnalyzeTool : Tool
@Serializable
data class Args(
@property:LLMDescription("Message from the agent")
val query: String,
val depth: Int
)
override suspend fun execute(args: Args): String = args.query
}
val llmModel = OpenAIModels.Chat.GPT4o
fun main() {
AIAgent(
// Constructor arguments
promptExecutor = mockLLMApi,
toolRegistry = toolRegistry,
llmModel = llmModel
) {
testGraph<String, String>("test") {
assertNodes {
val askLLM = assertNodeByName<String, Message.Response>("callLLM")
-->
```kotlin
assertNodes {
    // Test different inputs to the same node
    askLLM withInput "Simple query" outputs assistantMessage("Simple response")

    // Test with complex parameters
    askLLM withInput "Complex query with parameters" outputs toolCallMessage(
        AnalyzeTool,
        AnalyzeTool.Args(query = "parameters", depth = 3)
    )
}
```
You can also test complex tool call scenarios with detailed result structures:
<!--- INCLUDE import ai.koog.agents.core.agent.AIAgent import ai.koog.agents.core.environment.ReceivedToolResult import ai.koog.agents.core.tools.* import ai.koog.agents.example.exampleTesting03.mockLLMApi import ai.koog.agents.example.exampleTesting02.toolRegistry import ai.koog.agents.testing.feature.testGraph import ai.koog.agents.testing.feature.toolCallMessage import ai.koog.agents.testing.feature.toolResult import ai.koog.prompt.executor.clients.openai.OpenAIModels import ai.koog.prompt.message.Message import ai.koog.serialization.typeToken import kotlinx.serialization.Serializable
object AnalyzeTool : Tool
@Serializable
data class Result(
val analysis: String,
val confidence: Double,
val metadata: Map<String, String> = mapOf()
)
override suspend fun execute(args: Args): Result {
return Result(
args.query, 0.95,
mapOf("source" to "mock", "timestamp" to "2023-06-15")
)
}
}
val llmModel = OpenAIModels.Chat.GPT4o
fun main() {
AIAgent(
// Constructor arguments
promptExecutor = mockLLMApi,
toolRegistry = toolRegistry,
llmModel = llmModel
) {
testGraph<String, String>("test") {
assertNodes {
val callTool = assertNodeByName<Message.Tool.Call, ReceivedToolResult>("executeTool")
-->
```kotlin
assertNodes {
    // Test a complex tool call with structured results
    callTool withInput toolCallMessage(
        AnalyzeTool,
        AnalyzeTool.Args(query = "complex", depth = 5)
    ) outputs toolResult(
        AnalyzeTool,
        AnalyzeTool.Args(query = "complex", depth = 5),
        AnalyzeTool.Result(
            analysis = "Detailed analysis",
            confidence = 0.95,
            metadata = mapOf("source" to "database", "timestamp" to "2023-06-15")
        )
    )
}
```
These advanced tests help ensure that your nodes correctly handle complex data structures, which is essential for sophisticated agent behavior.
### Testing edge connections
Edge connection testing lets you verify that your agent's graph correctly routes the output of one node to the appropriate next node. This ensures that your agent follows the expected workflow paths for different outputs.
#### Basic edge testing
Start with simple edge connection tests:
<!--- INCLUDE import ai.koog.agents.core.agent.AIAgent import ai.koog.agents.core.environment.ReceivedToolResult import ai.koog.agents.core.tools.* import ai.koog.agents.example.exampleTesting03.mockLLMApi import ai.koog.agents.example.exampleTesting02.toolRegistry import ai.koog.agents.example.exampleTesting03.CreateTool import ai.koog.agents.testing.feature.assistantMessage import ai.koog.agents.testing.feature.testGraph import ai.koog.agents.testing.feature.toolCallMessage import ai.koog.prompt.executor.clients.openai.OpenAIModels import ai.koog.prompt.message.Message import kotlinx.serialization.KSerializer import kotlinx.serialization.Serializable
val llmModel = OpenAIModels.Chat.GPT4o
fun main() {
AIAgent(
// Constructor arguments
promptExecutor = mockLLMApi,
toolRegistry = toolRegistry,
llmModel = llmModel
) {
testGraph<String, String>("test") {
assertNodes {
val callTool = assertNodeByName<Message.Tool.Call, ReceivedToolResult>("executeTool")
val askLLM = assertNodeByName<String, Message.Response>("callLLM")
val giveFeedback = assertNodeByName<String, Message.Response>("giveFeedback")
-->
```kotlin
assertEdges {
    // Test text message routing
    askLLM withOutput assistantMessage("Hello!") goesTo giveFeedback

    // Test tool call routing
    askLLM withOutput toolCallMessage(CreateTool, CreateTool.Args("solve")) goesTo callTool
}
```
This example verifies the following behavior:
1. When the LLM node outputs a simple text message, the flow is directed to the `giveFeedback` node.
2. When it outputs a tool call, the flow is directed to the `callTool` node.
#### Testing conditional routing
You can test more complex routing logic based on the content of the output:
<!--- INCLUDE import ai.koog.agents.core.agent.AIAgent import ai.koog.agents.core.environment.ReceivedToolResult import ai.koog.agents.example.exampleTesting03.mockLLMApi import ai.koog.agents.example.exampleTesting02.toolRegistry import ai.koog.agents.testing.feature.assistantMessage import ai.koog.agents.testing.feature.testGraph import ai.koog.prompt.executor.clients.openai.OpenAIModels import ai.koog.prompt.message.Message
val llmModel = OpenAIModels.Chat.GPT4o
fun main() {
AIAgent(
// Constructor arguments
promptExecutor = mockLLMApi,
toolRegistry = toolRegistry,
llmModel = llmModel
) {
testGraph<String, String>("test") {
assertNodes {
val askLLM = assertNodeByName<String, Message.Response>("callLLM")
val askForInfo = assertNodeByName<String, ReceivedToolResult>("askForInfo")
val processRequest = assertNodeByName<String, Message.Response>("processRequest")
-->
```kotlin
assertEdges {
    // Different text responses can route to different nodes
    askLLM withOutput assistantMessage("Need more information") goesTo askForInfo
    askLLM withOutput assistantMessage("Ready to proceed") goesTo processRequest
}
```
#### Advanced edge testing
For complex agents, you can test conditional routing based on structured data in tool results:
<!--- INCLUDE import ai.koog.agents.core.agent.AIAgent import ai.koog.agents.core.environment.ReceivedToolResult import ai.koog.agents.example.exampleTesting03.mockLLMApi import ai.koog.agents.example.exampleTesting02.toolRegistry import ai.koog.agents.example.exampleTesting09.AnalyzeTool import ai.koog.agents.testing.feature.testGraph import ai.koog.agents.testing.feature.toolResult import ai.koog.prompt.executor.clients.openai.OpenAIModels import ai.koog.prompt.message.Message
val llmModel = OpenAIModels.Chat.GPT4o
fun main() {
AIAgent(
// Constructor arguments
promptExecutor = mockLLMApi,
toolRegistry = toolRegistry,
llmModel = llmModel
) {
testGraph<String, String>("test") {
assertNodes {
val callTool = assertNodeByName<Message.Tool.Call, ReceivedToolResult>("executeTool")
val processResult = assertNodeByName<String, Message.Response>("processResult")
-->
```kotlin
assertEdges {
    // Test routing based on the tool result content
    callTool withOutput toolResult(
        AnalyzeTool,
        AnalyzeTool.Args(query = "parameters", depth = 3),
        AnalyzeTool.Result(analysis = "Needs more processing", confidence = 0.5)
    ) goesTo processResult
}
```
You can also test complex decision paths based on different result properties:
<!--- INCLUDE import ai.koog.agents.core.agent.AIAgent import ai.koog.agents.core.environment.ReceivedToolResult import ai.koog.agents.example.exampleTesting03.mockLLMApi import ai.koog.agents.example.exampleTesting02.toolRegistry import ai.koog.agents.example.exampleTesting09.AnalyzeTool import ai.koog.agents.testing.feature.testGraph import ai.koog.agents.testing.feature.toolResult import ai.koog.prompt.executor.clients.openai.OpenAIModels import ai.koog.prompt.message.Message
val llmModel = OpenAIModels.Chat.GPT4o
fun main() {
AIAgent(
// Constructor arguments
promptExecutor = mockLLMApi,
toolRegistry = toolRegistry,
llmModel = llmModel
) {
testGraph<String, String>("test") {
assertNodes {
val callTool = assertNodeByName<Message.Tool.Call, ReceivedToolResult>("executeTool")
val finish = assertNodeByName<String, Message.Response>("finish")
val verifyResult = assertNodeByName<String, Message.Response>("verifyResult")
-->
```kotlin
assertEdges {
    // Route to different nodes based on the confidence level
    callTool withOutput toolResult(
        AnalyzeTool,
        AnalyzeTool.Args(query = "parameters", depth = 3),
        AnalyzeTool.Result(analysis = "Complete", confidence = 0.9)
    ) goesTo finish

    callTool withOutput toolResult(
        AnalyzeTool,
        AnalyzeTool.Args(query = "parameters", depth = 3),
        AnalyzeTool.Result(analysis = "Uncertain", confidence = 0.3)
    ) goesTo verifyResult
}
```
## Complete testing example
Here is a user story that presents a complete testing scenario:
You are developing a tone analysis agent that analyzes the tone of a text and provides feedback. The agent uses tools that detect positive, negative, and neutral tones.
Here is how you can test the agent:
```kotlin
@Test
fun testToneAgent() = runTest {
    // Create a list to track tool calls
    val toolCalls = mutableListOf<String>()
    var result: String? = null

    // Create a tool registry
    val toolRegistry = ToolRegistry {
        // A special tool, required with this type of agent
        tool(SayToUser)
        with(ToneTools) {
            tools()
        }
    }

    // Create an event handler
    val eventHandler = EventHandler {
        onToolCallStarting { tool, args ->
            println("[DEBUG_LOG] Tool called: tool ${tool.name}, args $args")
            toolCalls.add(tool.name)
        }

        handleError {
            println("[DEBUG_LOG] An error occurred: ${it.message}\n${it.stackTraceToString()}")
            true
        }

        handleResult {
            println("[DEBUG_LOG] Result: $it")
            result = it
        }
    }

    val positiveText = "I love this product!"
    val negativeText = "Awful service, hate the app."
    val defaultText = "I don't know how to answer this question."

    val positiveResponse = "The text has a positive tone."
    val negativeResponse = "The text has a negative tone."
    val neutralResponse = "The text has a neutral tone."

    val mockLLMApi = getMockExecutor(toolRegistry, eventHandler) {
        // Set up LLM responses for different input texts
        mockLLMToolCall(NeutralToneTool, ToneTool.Args(defaultText)) onRequestEquals defaultText
        mockLLMToolCall(PositiveToneTool, ToneTool.Args(positiveText)) onRequestEquals positiveText
        mockLLMToolCall(NegativeToneTool, ToneTool.Args(negativeText)) onRequestEquals negativeText

        // Mock the behavior where the LLM responds with just the tool result
        // when the tool returns it
        mockLLMAnswer(positiveResponse) onRequestContains positiveResponse
        mockLLMAnswer(negativeResponse) onRequestContains negativeResponse
        mockLLMAnswer(neutralResponse) onRequestContains neutralResponse

        mockLLMAnswer(defaultText).asDefaultResponse

        // Tool mocks
        mockTool(PositiveToneTool) alwaysTells {
            toolCalls += "Positive tone tool called"
            positiveResponse
        }
        mockTool(NegativeToneTool) alwaysTells {
            toolCalls += "Negative tone tool called"
            negativeResponse
        }
        mockTool(NeutralToneTool) alwaysTells {
            toolCalls += "Neutral tone tool called"
            neutralResponse
        }
    }

    // Create a strategy
    val strategy = toneStrategy("tone_analysis")

    // Create an agent config
    val agentConfig = AIAgentConfig(
        prompt = prompt("test-agent") {
            system(
                """
                You are a question answering agent with access to tone analysis tools.
                You need to answer 1 question with the best of your ability.
                Be as concise as possible in your answers.
                DO NOT ANSWER ANY QUESTIONS THAT ARE BESIDES PERFORMING TONE ANALYSIS!
                DO NOT HALLUCINATE!
                """.trimIndent()
            )
        },
        model = mockk<LLModel>(relaxed = true),
        maxAgentIterations = 10
    )

    // Create an agent with testing enabled
    val agent = AIAgent(
        promptExecutor = mockLLMApi,
        toolRegistry = toolRegistry,
        strategy = strategy,
        eventHandler = eventHandler,
        agentConfig = agentConfig,
    ) {
        withTesting()
    }

    // Test the positive text
    agent.run(positiveText)
    assertEquals("The text has a positive tone.", result, "Positive tone result should match")
    assertEquals(1, toolCalls.size, "One tool call expected")

    // Test the negative text
    agent.run(negativeText)
    assertEquals("The text has a negative tone.", result, "Negative tone result should match")
    assertEquals(2, toolCalls.size, "Two tool calls expected")

    // Test the neutral text
    agent.run(defaultText)
    assertEquals("The text has a neutral tone.", result, "Neutral tone result should match")
    assertEquals(3, toolCalls.size, "Three tool calls expected")
}
```
For complex agents with multiple subgraphs, you can also test the graph structure:
=== "Kotlin"
```kotlin
@Test
fun testMultiSubgraphAgentStructure() = runTest {
    val strategy = strategy("test") {
        val firstSubgraph by subgraph(
            "first",
            tools = listOf(DummyTool, CreateTool, SolveTool)
        ) {
            val callLLM by nodeLLMRequest(allowToolCalls = false)
            val executeTool by nodeExecuteTool()
            val sendToolResult by nodeLLMSendToolResult()
            val giveFeedback by node<String, String> { input ->
                llm.writeSession {
                    appendPrompt {
                        user("Call tools! Don't chat!")
                    }
                }
                input
            }

            edge(nodeStart forwardTo callLLM)
            edge(callLLM forwardTo executeTool onToolCall { true })
            edge(callLLM forwardTo giveFeedback onAssistantMessage { true })
            edge(giveFeedback forwardTo giveFeedback onAssistantMessage { true })
            edge(giveFeedback forwardTo executeTool onToolCall { true })
            edge(executeTool forwardTo nodeFinish transformed { it.content })
        }

        val secondSubgraph by subgraph<String, String>("second") {
            edge(nodeStart forwardTo nodeFinish)
        }

        edge(nodeStart forwardTo firstSubgraph)
        edge(firstSubgraph forwardTo secondSubgraph)
        edge(secondSubgraph forwardTo nodeFinish)
    }

    val toolRegistry = ToolRegistry {
        tool(DummyTool)
        tool(CreateTool)
        tool(SolveTool)
    }

    val mockLLMApi = getMockExecutor(toolRegistry) {
        mockLLMAnswer("Hello!") onRequestContains "Hello"
        mockLLMToolCall(CreateTool, CreateTool.Args("solve")) onRequestEquals "Solve task"
    }

    val basePrompt = prompt("test") {}

    AIAgent(
        toolRegistry = toolRegistry,
        strategy = strategy,
        eventHandler = EventHandler {},
        agentConfig = AIAgentConfig(prompt = basePrompt, model = OpenAIModels.Chat.GPT4o, maxAgentIterations = 100),
        promptExecutor = mockLLMApi,
    ) {
        testGraph("test") {
            val firstSubgraph = assertSubgraphByName<String, String>("first")
            val secondSubgraph = assertSubgraphByName<String, String>("second")

            assertEdges {
                startNode() alwaysGoesTo firstSubgraph
                firstSubgraph alwaysGoesTo secondSubgraph
                secondSubgraph alwaysGoesTo finishNode()
            }

            verifySubgraph(firstSubgraph) {
                val start = startNode()
                val finish = finishNode()

                val askLLM = assertNodeByName<String, Message.Response>("callLLM")
                val callTool = assertNodeByName<Message.Tool.Call, ReceivedToolResult>("executeTool")
                val giveFeedback = assertNodeByName<Any?, Any?>("giveFeedback")

                assertReachable(start, askLLM)
                assertReachable(askLLM, callTool)

                assertNodes {
                    askLLM withInput "Hello" outputs Message.Assistant("Hello!")
                    askLLM withInput "Solve task" outputs toolCallMessage(CreateTool, CreateTool.Args("solve"))

                    callTool withInput toolCallSignature(
                        SolveTool,
                        SolveTool.Args("solve")
                    ) outputs toolResult(SolveTool, "solved")

                    callTool withInput toolCallSignature(
                        CreateTool,
                        CreateTool.Args("solve")
                    ) outputs toolResult(CreateTool, "created")
                }

                assertEdges {
                    askLLM withOutput Message.Assistant("Hello!") goesTo giveFeedback
                    askLLM withOutput toolCallMessage(CreateTool, CreateTool.Args("solve")) goesTo callTool
                }
            }
        }
    }
}
```
<!--- KNIT example-testing-15.kt -->
=== "Java"
<!--- INCLUDE
/**
-->
<!--- SUFFIX
**/
-->
```java
```
<!--- KNIT example-testing-java-14.java -->
## API reference
For a complete API reference related to the Testing feature, see the reference documentation for the [agents-test](https://api.koog.ai/agents/agents-test/index.html) module.
## FAQ and troubleshooting
#### How do I mock a specific tool response?
Use the `mockTool` method in `MockLLMBuilder`:
=== "Kotlin"
<!--- INCLUDE
/*
-->
<!--- SUFFIX
*/
-->
```kotlin
val mockExecutor = getMockExecutor {
mockTool(myTool) alwaysReturns myResult
// Or with conditions
mockTool(myTool) returns myResult onArguments myArgs
}
```
<!--- KNIT example-testing-16.kt -->
=== "Java"
<!--- INCLUDE
-->
```java
```
<!--- KNIT example-testing-java-15.java -->
#### How can I test complex graph structures?
Use the subgraph assertions, `verifySubgraph`, and node references:
=== "Kotlin"
<!--- INCLUDE
import ai.koog.agents.core.agent.AIAgent
import ai.koog.agents.example.exampleTesting03.mockLLMApi
import ai.koog.agents.example.exampleTesting02.toolRegistry
import ai.koog.agents.testing.feature.testGraph
import ai.koog.prompt.executor.clients.openai.OpenAIModels
val llmModel = OpenAIModels.Chat.GPT4o
fun main() {
AIAgent(
// Constructor arguments
promptExecutor = mockLLMApi,
toolRegistry = toolRegistry,
llmModel = llmModel
) {
-->
<!--- SUFFIX
}
}
-->
```kotlin
testGraph<Unit, String>("test") {
val mySubgraph = assertSubgraphByName<Unit, String>("mySubgraph")
verifySubgraph(mySubgraph) {
// Get node references
val nodeA = assertNodeByName<Unit, String>("nodeA")
val nodeB = assertNodeByName<String, String>("nodeB")
// Assert reachability
assertReachable(nodeA, nodeB)
// Assert edge connections
assertEdges {
nodeA.withOutput("result") goesTo nodeB
}
}
}
```
<!--- KNIT example-testing-17.kt -->
=== "Java"
<!--- INCLUDE
/**
-->
<!--- SUFFIX
**/
-->
```java
```
<!--- KNIT example-testing-java-16.java -->
#### How can I simulate different LLM responses based on input?
Use the pattern matching methods:
=== "Kotlin"
<!--- INCLUDE
import ai.koog.agents.testing.tools.getMockExecutor
val promptExecutor =
-->
```kotlin
getMockExecutor {
mockLLMAnswer("Response A") onRequestContains "topic A"
mockLLMAnswer("Response B") onRequestContains "topic B"
mockLLMAnswer("Exact response") onRequestEquals "exact question"
mockLLMAnswer("Conditional response") onCondition { it.contains("keyword") && it.length > 10 }
}
```
=== "Java"
```java
import ai.koog.agents.testing.tools.MockExecutor;
import ai.koog.prompt.executor.model.PromptExecutor;

PromptExecutor promptExecutor = MockExecutor.builder()
    .mockLLMAnswer("Response A").onRequestContains("topic A")
    .mockLLMAnswer("Response B").onRequestContains("topic B")
    .mockLLMAnswer("Exact response").onRequestEquals("exact question")
    .mockLLMAnswer("Conditional response").onCondition(s -> s.contains("keyword") && s.length() > 10)
    .build();
```
### Troubleshooting
#### The mock executor always returns the default response
Check that your pattern matching is correct. Patterns are case-sensitive and must match exactly as specified.
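For example, a sketch of how case sensitivity causes an unexpected fallback (the patterns and answers here are illustrative):

```kotlin
val mockExecutor = getMockExecutor {
    // Matches requests containing "Hello" - but NOT "hello" (case-sensitive)
    mockLLMAnswer("Greeting response") onRequestContains "Hello"
    // Requests that match no pattern fall through to this default
    mockLLMAnswer("Default response").asDefaultResponse
}
```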
#### Tool calls are not being intercepted
Make sure that:
- The tool registry is properly set up.
- The tool names match exactly.
- The tool actions are properly configured.
#### Graph assertions fail
- Verify that the node names are correct.
- Check that the graph structure matches your expectations.
- Use the `startNode()` and `finishNode()` methods to get the correct entry and exit points.