Abstract
Generative artificial intelligence offers new technological pathways for library cataloging, yet its professional application faces challenges of technical adaptability. Based on the human-in-the-loop (HITL) mechanism, this study designs a three-phase experimental framework comprising baseline testing, model comparison, and targeted optimization to systematically evaluate the performance of two domestic large language models, Kimi and DeepSeek, in MARC21 cataloging of Western-language books. Experimental results indicate that, without professional intervention, the native capabilities of the two models differ significantly (Kimi F1 = 7.41%, DeepSeek F1 = 51.30%). Under unified prompt guidance, DeepSeek demonstrates superior overall performance (F1 = 83.00%), significantly outperforming Kimi (F1 = 63.50%). After refined prompt engineering, DeepSeek's performance improves markedly (F1 = 95.16%). Through dynamic calibration and feedback from human catalogers, generative artificial intelligence can overcome its initial technical limitations and adapt from general dialogue to professional cataloging tasks. This study validates the effectiveness of the HITL mechanism in model selection and performance optimization, and proposes practical recommendations, such as establishing a prompt knowledge base and implementing hierarchical field management, providing a reference solution for building a human-machine collaborative intelligent cataloging system in libraries.
Keywords
Generative Artificial Intelligence/Human-in-the-Loop/Intelligent Cataloging/Kimi/DeepSeek
Classification
Social Sciences