Abstract
As large language models (LLMs) demonstrate significant potential in complex task reasoning and decision-making, an increasing number of studies focus on how to use LLMs for task planning and tool calling. To achieve good results, most methods require fine-tuning of the model, yet high-quality training data is always scarce, which prevents these methods from being rapidly applied in practice. To address these issues, we propose a plug-and-play method for complex task handling with few-shot prompting (PnP-FSP). It fully exploits few-shot prompting for complex task analysis and processing, eliminating the need for LLM fine-tuning. To facilitate rapid application of the proposed method in vertical domains, a task planning strategy based on a task-planning reference library is proposed: use cases in the reference library that are similar to the user's question serve as prompt context to assist in the rapid planning of complex problems. In addition, a mechanism that adjusts subsequent tasks based on the results of previous tasks is introduced, effectively addressing the weakness of one-step planning methods, which emphasize the global nature of task planning while ignoring dynamic changes during task execution. Furthermore, PnP-FSP decouples the stages of complex task processing, such as task planning and tool calling, allowing the most suitable LLM to be selected for each stage according to actual needs; the LLMs then collaborate in a plug-and-play manner to complete the target task. Experimental results show that PnP-FSP outperforms mainstream complex task handling methods while significantly reducing dependence on supervised data.
Keywords
large language model/task planning/tool calling/few-shot prompting/deep learning
Classification
Information Technology and Security Science
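The reference-library strategy summarized above (retrieve use cases similar to the user's question and place them in the prompt as few-shot context) can be illustrated with a minimal sketch. This is not the paper's implementation: the data layout, function names, and the token-overlap (Jaccard) similarity measure are assumptions chosen for demonstration; a real system would likely use embedding-based retrieval.

```python
# Illustrative sketch only: selecting similar use cases from a task-planning
# reference library to serve as few-shot prompt context. All names and the
# similarity measure are assumptions, not the method described in the paper.

def jaccard_similarity(a: str, b: str) -> float:
    """Token-overlap similarity between two texts (0.0 to 1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

def select_few_shot_examples(question: str, library: list, k: int = 2) -> list:
    """Pick the k reference use cases most similar to the user question."""
    ranked = sorted(
        library,
        key=lambda case: jaccard_similarity(question, case["question"]),
        reverse=True,
    )
    return ranked[:k]

def build_planning_prompt(question: str, examples: list) -> str:
    """Assemble a few-shot prompt: similar use cases first, then the new question."""
    shots = "\n\n".join(
        f"Question: {ex['question']}\nPlan: {ex['plan']}" for ex in examples
    )
    return f"{shots}\n\nQuestion: {question}\nPlan:"

# Toy reference library (hypothetical use cases with tool-calling plans).
library = [
    {"question": "book a flight and reserve a hotel",
     "plan": "1) search_flights 2) book_flight 3) search_hotels 4) reserve_hotel"},
    {"question": "summarize a report and email it",
     "plan": "1) read_file 2) summarize 3) send_email"},
]

user_question = "book a flight to Tokyo and a hotel near the airport"
examples = select_few_shot_examples(user_question, library)
prompt = build_planning_prompt(user_question, examples)
```

The resulting `prompt` would then be sent to the planning LLM; because planning and tool calling are decoupled, a different LLM could execute each planned step.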