Journal of System Simulation ›› 2025, Vol. 37 ›› Issue (7): 1665-1683. doi: 10.16182/j.issn1004731x.joss.25-0032
• Invited Reviews •
Dong Zhiming1, Hu Zhongqi1,2, Liu Zhaoyang3, Zhou Heyang1
Received: 2025-01-08
Revised: 2025-04-06
Online: 2025-07-18
Published: 2025-07-30
Contact: Hu Zhongqi
Dong Zhiming, Hu Zhongqi, Liu Zhaoyang, Zhou Heyang. A Review of Intelligent Generation of Combat Simulation Scenarios[J]. Journal of System Simulation, 2025, 37(7): 1665-1683.
Table 1
Intelligent generation paradigms for combat simulation scenarios based on LLMs
| Generation paradigm | Characteristics | Drawbacks | Techniques used |
|---|---|---|---|
| Retrieval-based generation | Efficiency: an LLM retrieving from a scenario database can quickly generate combat simulation scenarios that meet requirements. Convenience: the LLM can combine and adjust database entries as needed to produce a variety of scenarios. Data reusability: entries in the scenario database can be reused, avoiding wasted data resources and labor costs | Heavy data processing: collecting, organizing, and validating scenario data demands substantial manpower and time. Limited coverage: the database cannot cover every possible combat scenario, so data must be adjusted or supplemented manually | Fine-tuning; retrieval-augmented generation |
| Combat-scenario text conversion | Efficiency: LLMs can process large volumes of text rapidly, quickly converting input scenario text into simulation scenarios. Accuracy: LLMs understand the semantics and context of scenario text fairly accurately and generate scenarios that meet requirements. Template reusability: templates in the template library can be reused, avoiding wasted data resources and labor costs | Templates are hard to update: producing, validating, and managing templates demands substantial manpower and time. Limited templates: for complex combat scenes and task requirements, the types and number of templates are limited | Fine-tuning; information extraction; retrieval-augmented generation |
| Combat situation map conversion | Efficiency: feeding a situation map into an LLM quickly yields a simulation scenario that meets user needs. Flexibility: processing situation maps with an LLM extends the user's input modalities | Feasibility unverified: the paradigm is still theoretical and has not been tested in practice. Technology dependence: relies on the maturity of multimodal data processing in LLMs | Fine-tuning; image captioning; retrieval-augmented generation |
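The retrieval-based paradigm above can be illustrated with a minimal sketch: a toy scenario database, a keyword-overlap retriever standing in for a real vector retriever, and a prompt assembled for an LLM to complete. All identifiers here (`SCENARIO_DB`, `retrieve`, `build_prompt`) are illustrative, not taken from the surveyed systems.

```python
# Minimal sketch of the retrieval-based generation paradigm (Table 1):
# retrieve candidate scenario records, then assemble them into a prompt
# for an LLM to adapt into a full combat simulation scenario.
# All identifiers here are hypothetical illustrations.

SCENARIO_DB = [
    "Red air-defense battalion screens a coastal sector with two radar posts.",
    "Blue strike package suppresses enemy air defenses before the main raid.",
    "Amphibious landing rehearsal with naval gunfire support.",
]

def retrieve(query: str, db: list[str], k: int = 2) -> list[str]:
    """Rank records by word overlap with the query (stand-in for a vector retriever)."""
    q = set(query.lower().split())
    scored = sorted(db, key=lambda rec: -len(q & set(rec.lower().split())))
    return scored[:k]

def build_prompt(query: str, db: list[str]) -> str:
    """Combine retrieved scenario data with the user's requirement."""
    context = "\n".join(f"- {rec}" for rec in retrieve(query, db))
    return (f"Using the scenario records below, generate a simulation scenario.\n"
            f"{context}\nRequirement: {query}")

prompt = build_prompt("air defense radar sector", SCENARIO_DB)
print(prompt)
```

In a full pipeline, the table's drawbacks show up exactly here: the quality of `SCENARIO_DB` bounds the quality of the output, and assembling it is the expensive step.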
Table 2
Representative open-source Chinese large models
| Model series | Strengths | Weaknesses | Latest releases |
|---|---|---|---|
| GLM | Open source and flexible; strong long-text generation | Weak multimodal data processing | GLM-4, GLM-4V |
| Qwen | Multilingual support; strong contextual coherence | Weak multimodal data processing | Qwen3, Qwen2.5 |
| DeepSeek | Strong deep logical reasoning; high parameter efficiency; supports fine-grained rule constraints | Smaller model scale; complex text prone to fragmentation | DeepSeek LLM, DeepSeek-V2, DeepSeek-V3, DeepSeek-R1 |
| ERNIE (文心) | Strong understanding of Chinese contexts; grasps task requirements well; strong multimodal data processing | Weaker complex logical reasoning; relies on external knowledge bases | ERNIE 4.0, ERNIE 4.5 |
| Pangu (盘古) | Large parameter count; strong compute; strong cross-domain transfer | High training cost; consumes large amounts of compute and storage | Huawei Cloud Pangu Model 5.0 |
Table 3
Representative quantization techniques
| Quantization technique | Strengths | Weaknesses | Applicable scenarios |
|---|---|---|---|
| LLM.int8()[30] | Nearly lossless accuracy; supports very large models | Slower inference; depends on GPU hardware support | High-precision scenario generation |
| GPTQ[31] | No fine-tuning required; fast single-GPU quantization | Lower accuracy at low bit widths; sensitive to activation distributions | Lightweight deployment |
| QuIP[32] | Maintains high accuracy at low bit widths | High computational complexity; relies on matrix pre-decomposition | High-precision scenario generation |
| AWQ[33] | Nearly lossless accuracy; hardware friendly | Relies on calibration data; adapts poorly to dynamic inputs | Dynamically adjusted scenarios |
| SpQR[34] | Multiplied compression ratio; preserves accuracy on critical paths | Requires highly sparse models; needs dedicated accelerator support | Lightweight deployment |
| FineQuant[35] | Best accuracy-efficiency trade-off; adapts to model structure | Requires complex layer-wise strategy design; complex hardware scheduling logic | Multi-stage long-text scenario generation |
| SmoothQuant[36] | No fine-tuning required; better adaptation to dynamic ranges | Relies on calibration data; limited handling of extreme dynamic ranges | Lightweight deployment; low-precision scenario generation |
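The techniques above differ in how they choose scales and which tensors they quantize, but the core operation they share is mapping floating-point weights to a narrow integer range. A minimal pure-Python sketch of per-row absmax int8 quantization (the basic step behind weight-only schemes such as LLM.int8()) illustrates the accuracy/size trade-off; real implementations operate on GPU tensors, and `quantize_row` here is an illustrative name.

```python
# Sketch of per-row absmax int8 quantization, the core operation behind
# the weight-only quantization schemes in Table 3.
# Pure-Python stand-in; real implementations operate on GPU tensors.

def quantize_row(row: list[float]) -> tuple[list[int], float]:
    """Map floats into int8 range [-127, 127] using the row's absolute maximum as scale."""
    scale = max(abs(x) for x in row) / 127.0
    return [round(x / scale) for x in row], scale

def dequantize_row(qrow: list[int], scale: float) -> list[float]:
    return [q * scale for q in qrow]

weights = [0.42, -1.30, 0.07, 0.95]
qrow, scale = quantize_row(weights)
restored = dequantize_row(qrow, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Round-to-nearest bounds the error by half a quantization step (scale / 2).
assert max_err <= scale / 2 + 1e-12
```

The "sensitive to activation distributions" drawback in the table follows directly: one outlier in a row inflates `scale`, coarsening the grid for every other value in that row.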
Table 5
Representative open-source Chinese embedding models
| Model series | Strengths | Weaknesses | Latest releases |
|---|---|---|---|
| E5-multilingual | Supports many languages; instruction tuning improves retrieval accuracy | Weak long-text handling; limited multimodal extensibility | E5-multilingual |
| BGE-Embedding | Open source and fine-tunable; supports reranker-based result ordering | Not optimized for sensor data; long texts must be chunked | BGE-M3-Embedding, BGE-Reranker V2.0 |
| BCE-Embedding | Lightweight; low edge-deployment cost; supports reranker-based result ordering | Limited multilingual support; weak understanding of complex logic | BCE-Embedding-Base_V1, BCE-Reranker-Base_V1 |
| M3E-Embedding | Flexible deployment; cost-effective; supports dynamic dimension truncation to save resources | Weak long-text handling; limited multimodal extensibility | M3E-Small, M3E-Base, M3E-Large |
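All of the embedding families above are used the same way downstream: documents and queries are mapped to vectors and candidates are ranked by cosine similarity. A minimal sketch with tiny hand-made vectors (stand-ins for real embedding outputs, e.g. from the BGE or M3E families) shows the ranking step:

```python
# Sketch of dense retrieval with an embedding model: documents and the query
# are mapped to vectors, and candidates are ranked by cosine similarity.
# The 3-d vectors here are hand-made stand-ins for real embedding outputs.

import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

doc_vectors = {
    "radar deployment order": [0.9, 0.1, 0.0],
    "naval gunfire plan": [0.1, 0.8, 0.2],
    "logistics timetable": [0.0, 0.2, 0.9],
}
query_vector = [0.8, 0.2, 0.1]  # hypothetical embedding of "air-defense radar positions"

ranked = sorted(doc_vectors, key=lambda d: -cosine(query_vector, doc_vectors[d]))
print(ranked[0])  # the radar document ranks first
```

The table's "long texts must be chunked" caveat applies before this step: each chunk gets its own vector, and the ranking runs over chunks rather than whole documents.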
Table 6
Representative retrieval-augmented generation technologies for scenario generation
| Category | Representative work | Characteristics | Drawbacks |
|---|---|---|---|
| Retrieve-then-Read | G-Retriever[50] | Efficient graph question answering | Large data requirements; long training time |
| | IM-RAG[51] | Multi-round interactive retrieval for complex reasoning | Nonlinear "inner monologue" is hard to realize |
| | QA-RAG[52] | Uses LLM outputs as auxiliary signals for retrieval | Heavily dependent on the quality of retrieved context documents |
| | Rewrite-Retrieve-Read[53] | Narrows the gap between input text and required knowledge | Prone to semantic drift |
| | DRAGIN[54] | Dynamically fills LLM knowledge gaps | Technically complex; hard to implement |
| | PromptVoteRAG[55] | Multi-path retrieval combining BM25 and vector retrieval | Not tested across multiple datasets |
| | MSRAG[56] | Multi-strategy retrieval controller with adaptive search strategies | Cannot adaptively match retrieval at the semantic level of the question |
| Generate-then-Read | MuRAG[57] | Extends user input modalities | Multi-model fusion; complex structure; large footprint |
| | Tree-RAG[58] | Uses an entity tree to enrich context | Over-reliant on high-quality entity trees |
| | RAG-end2end[59] | Fine-tunes the model with a neural retriever | Weak generalization; limited versatility |
| | Recite-Read[60] | Replaces external search with recitation from the LLM's internal knowledge | Knowledge quality limited by the LLM's training quality |
| Retrieval-Generation Synergy | GAR[61] | Augments queries with heuristically discovered relevant context | Effect on non-English datasets untested |
| | ITRG[62] | Exploits both parametric and non-parametric knowledge | Requires large amounts of training data |
| | ITER-RETGEN[63] | Iterates the retrieval and generation process | Slow convergence |
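The Retrieval-Generation Synergy row can be sketched with a toy loop in the spirit of ITER-RETGEN: each round retrieves with the current draft, folds the evidence into the draft, and the enriched draft drives the next retrieval. The knowledge base, retriever, and generator here are hypothetical stand-ins, not the surveyed systems.

```python
# Toy sketch of the retrieval-generation synergy loop (cf. ITER-RETGEN, Table 6):
# each round retrieves with the current draft and folds the evidence back in,
# so later rounds can retrieve facts unreachable from the original question.
# KB, retrieve, and generate are illustrative stand-ins.

KB = {
    "blue force": "Blue force stages from the eastern airfield.",
    "eastern airfield": "The eastern airfield hosts two fighter squadrons.",
}

def retrieve(query: str) -> str:
    """Return the first fact whose key appears in the query and is not yet used."""
    for key, fact in KB.items():
        if key in query.lower() and fact not in query:
            return fact
    return ""

def generate(query: str, evidence: str) -> str:
    # Toy generator: append the evidence to the running draft.
    return f"{query} {evidence}".strip()

draft = "Where does blue force stage, and what is based there?"
for _ in range(2):  # two retrieve-generate rounds
    evidence = retrieve(draft)
    draft = generate(draft, evidence)
# Round 1 pulls the staging fact; only then does round 2 reach the airfield fact.
```

The second fact is unreachable from the original question alone, which is exactly the multi-hop case these synergy methods target; the table's "slow convergence" drawback is the price of running the loop.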
Table 7
Representative scenario-based key information extraction technologies
| Category | Representative work | Characteristics | Drawbacks |
|---|---|---|---|
| Named entity recognition | ChatGPT | Highly general; supports few-shot settings | Recognition results require error analysis and filtering |
| | GPT-NER[67] | Reformulates sequence labeling as a generation task | Noisy data easily causes extraction errors |
| | Chain-of-thought fine-tuning | Deeply exploits LLM reasoning; high recognition accuracy | High compute demand; slow inference |
| | Prompt tuning | Improves LLM task capability; high recognition accuracy | Tested only in general domains; vertical-domain performance unknown |
| | Knowledge graph augmentation | Efficient mining of latent information; high recognition accuracy | Complex structure; large footprint |
| Relation extraction | ChatGPT | Addresses training-data sourcing; reduces reliance on annotated data | Large model size slows extraction |
| | In-context fine-tuning | Enables in-context few-shot learning | Poor generality |
| | Prompt tuning | Improves LLM task capability; high extraction accuracy | Few test datasets |
| | Parameter-efficient fine-tuning | Incorporates question-answering tasks; high extraction accuracy | High compute demand; slow inference |
| | GPT-RE[75] | Enhances LLM reasoning; high extraction accuracy | Large model size; large training-data requirements |
| Event extraction | InstructUIE | Better uncovers hidden extraction patterns; high extraction accuracy | Demands high-quality chain-of-thought knowledge |
| | Prompt-pattern | Unsupervised zero-shot event extraction | Coarse-grained training data |
| | ICL-D3IE[78] | Enables in-context few-shot learning | Weak fine-grained precise extraction |
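The generation-based NER reformulation (as in GPT-NER above) can be sketched end to end: prompt the model to emit entities in a fixed "TYPE: surface form" format, then parse the text back into structured records. `fake_llm` is a hard-coded stand-in for a real model call; all names are illustrative.

```python
# Sketch of LLM-based NER as a generation task (in the spirit of GPT-NER,
# Table 7): prompt for a fixed "TYPE: text" output format, then parse it.
# fake_llm is a hard-coded stand-in for a real model call.

def build_ner_prompt(text: str) -> str:
    return ("Extract military entities as lines of 'TYPE: text'.\n"
            f"Input: {text}\nOutput:")

def fake_llm(prompt: str) -> str:
    # Stand-in response; a real system would call a deployed LLM here.
    return "UNIT: 3rd Fighter Squadron\nLOCATION: eastern airfield"

def parse_entities(response: str) -> list[tuple[str, str]]:
    """Parse 'TYPE: text' lines back into (type, surface form) pairs."""
    entities = []
    for line in response.splitlines():
        if ":" in line:
            etype, _, surface = line.partition(":")
            entities.append((etype.strip(), surface.strip()))
    return entities

text = "The 3rd Fighter Squadron relocated to the eastern airfield."
entities = parse_entities(fake_llm(build_ner_prompt(text)))
print(entities)  # [('UNIT', '3rd Fighter Squadron'), ('LOCATION', 'eastern airfield')]
```

The parsing step is where the table's "results require error analysis and filtering" drawback bites: a real model can deviate from the requested format, so production systems validate and filter these lines.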
[1] 陈彩辉, 魏曙光. 基于任务空间概念模型(CMMS)的作战想定研究[J]. 计算机科学, 2006, 33(1): 188-190.
Chen Caihui, Wei Shuguang. The Study of the Battling Scenario Based on CMMS[J]. Computer Science, 2006, 33(1): 188-190.
[2] 黄四牛, 陈宗基, 张鹏. 分布交互仿真/高层体系结构中作战想定的可视化生成系统[J]. 系统仿真学报, 2002, 14(3): 310-312, 325.
Huang Siniu, Chen Zongji, Zhang Peng. Visual Scenario Generation System in Distributed Interactive Simulation/High Level Architecture (DIS/HLA)[J]. Journal of System Simulation, 2002, 14(3): 310-312, 325.
[3] 李群, 杨峰, 朱一凡, 等. 空军战役过程推演系统的想定生成设计与实现[J]. 系统仿真学报, 2003, 15(3): 414-416.
Li Qun, Yang Feng, Zhu Yifan, et al. Design and Implementation of Scenario Editor of the War Game Simulation System of Air Force[J]. Journal of System Simulation, 2003, 15(3): 414-416.
[4] 北京华如科技股份有限公司. 一种基于大语言模型快速构建仿真想定方法及装置: CN202410058427.0[P]. 2024-04-19.
Beijing Huaru Technology Co., Ltd. A Method and Device for Rapidly Constructing Simulation Scenarios Based on a Large Language Model: CN202410058427.0[P]. 2024-04-19.
[5] 中国航天系统科学与工程研究院. 一种基于大语言模型的作战仿真想定智能生成方法: CN202410540523.9[P]. 2024-07-16.
China Academy of Aerospace Systems Science and Engineering. An Intelligent Generation Method for Combat Simulation Scenarios Based on a Large Language Model: CN202410540523.9[P]. 2024-07-16.
[6] 董柏顺, 王虹森, 罗汝斌. 基于大模型的仿真想定智能构建技术研究[C]//第六届体系工程学术会议论文集—体系工程与高质量发展会议论文集. 昆明: 国防科技大学系统工程学院, 2024: 799-806.
Dong Baishun, Wang Hongsen, Luo Rubin. Research on Intelligent Construction of Simulation Scenarios Based on Large Models[C]//Proceedings of the 6th Systems Engineering Conference: Systems Engineering and High-quality Development. Kunming: College of Systems Engineering, National University of Defense Technology, 2024: 799-806.
[7] 康晓予, 邓贵仕. 作战模拟系统想定研究综述[J]. 系统仿真学报, 2009, 21(10): 2797-2800.
Kang Xiaoyu, Deng Guishi. Overview of Military Scenario Research in Warfare Simulation[J]. Journal of System Simulation, 2009, 21(10): 2797-2800.
[8] 徐享忠, 熊君, 王嘉铭. 作战仿真想定描述语言及描述规范综述[J]. 计算机仿真, 2021, 38(11): 1-4, 26.
Xu Xiangzhong, Xiong Jun, Wang Jiaming. Overview of Scenario Description Language and Description Specification for Combat Simulation[J]. Computer Simulation, 2021, 38(11): 1-4, 26.
[9] 唐新德, 张宏军, 程恺, 等. 作战实验仿真想定规范化描述方法研究[J]. 信息系统工程, 2019(11): 28-31.
Tang Xinde, Zhang Hongjun, Cheng Kai, et al. Research on Standardized Description Methods for Combat Experiment Simulation Scenarios[J]. Information Systems Engineering, 2019(11): 28-31.
[10] 李晨, 柏彦奇, 史宪铭. 军事仿真想定生成问题研究[J]. 指挥控制与仿真, 2017, 39(6): 77-81.
Li Chen, Bai Yanqi, Shi Xianming. Overview on Military Simulation Scenario Generation[J]. Command Control & Simulation, 2017, 39(6): 77-81.
[11] Lewis P, Perez E, Piktus A, et al. Retrieval-augmented Generation for Knowledge-intensive NLP Tasks[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2020: 9459-9474.
[12] GLUE. Leaderboard[EB/OL]. (2024-06-15) [2025-02-28]. .
[13] Liu Pengfei, Yuan Weizhe, Fu Jinlan, et al. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing[J]. ACM Computing Surveys, 2023, 55(9): 195.
[14] 沈思, 冯暑阳, 吴娜, 等. 融合大语言模型的政策文本检索增强生成研究[J/OL]. 数据分析与知识发现. (2024-11-19) [2025-02-28]. .
Shen Si, Feng Shuyang, Wu Na, et al. Research on Retrieval-augmented Generation of Policy Texts Based on Large Language Models[J/OL]. Data Analysis and Knowledge Discovery. (2024-11-19) [2025-02-28]. .
[15] 张力军, 刘偲, 廖纪童, 等. 基于大模型检索增强生成的计算机网络实验课程问答系统设计与实现[J]. 实验技术与管理, 2024, 41(12): 186-192.
Zhang Lijun, Liu Si, Liao Jitong, et al. Design and Implementation of Large Language Model Retrieval-augmented Generation-based Computer Network Experiment Course QA System[J]. Experimental Technology and Management, 2024, 41(12): 186-192.
[16] 王合庆, 魏杰, 景红雨, 等. Meta-RAG: 基于元数据驱动的电力领域检索增强生成框架[J/OL]. 计算机工程. (2024-12-25) [2025-02-28]. .
Wang Heqing, Wei Jie, Jing Hongyu, et al. Meta-RAG: A Metadata-driven Retrieval Augmented Generation Framework for the Power Industry[J/OL]. Computer Engineering. (2024-12-25) [2025-02-28]. .
[17] 孟序阳, 王昊, 李远清, 等. 基于细粒度知识图谱检索增强生成的提示学习研究[J/OL]. 数据分析与知识发现. (2024-12-10) [2025-02-28]. .
Meng Xuyang, Wang Hao, Li Yuanqing, et al. Prompt Learning Based on Retrieval-augmented Generation of Fine-grained Knowledge Graph[J/OL]. Data Analysis and Knowledge Discovery. (2024-12-10) [2025-02-28]. .
[18] 王润周, 张新生, 王明虎, 等. 基于混合检索增强生成大语言模型的网络舆情多任务分析[J]. 情报杂志, 2025, 44(5): 91-103.
Wang Runzhou, Zhang Xinsheng, Wang Minghu, et al. Multi-task Analysis of Online Public Opinion Based on Hybrid Retrieval-augmented Generation of Large Language Model[J]. Journal of Intelligence, 2025, 44(5): 91-103.
[19] 翟洁, 李艳豪, 李彬彬, 等. 基于大语言模型的个性化实验报告评语自动生成与应用[J]. 计算机工程, 2024, 50(7): 42-52.
Zhai Jie, Li Yanhao, Li Binbin, et al. Personalized Experiment Report Comments Auto-generation and Application Based on Large Language Models[J]. Computer Engineering, 2024, 50(7): 42-52.
[20] Peer Jordan, Mordecai Yaniv, Reich Yoram. NLP4ReF: Requirements Classification and Forecasting: From Model-based Design to Large Language Models[C]//2024 IEEE Aerospace Conference. Piscataway: IEEE, 2024: 1-16.
[21] 裴炳森, 李欣, 蒋章涛, 等. 基于大语言模型的司法文本摘要生成与评价技术研究[J]. 数据与计算发展前沿(中英文), 2024, 6(6): 62-73.
Pei Bingsen, Li Xin, Jiang Zhangtao, et al. Research on the Generation and Evaluation of Judicial Text Summarization Based on Large Language Models[J]. Frontiers of Data & Computing, 2024, 6(6): 62-73.
[22] 朱丹浩, 黄肖宇, 李堯霖, 等. 基于大语言模型的法律文本的自动摘要方法[J/OL]. 数据分析与知识发现. (2024-10-14) [2025-02-28]. .
Zhu Danhao, Huang Xiaoyu, Li Yaolin, et al. Automatic Summarization of Legal Texts Based on Large Language Models[J/OL]. Data Analysis and Knowledge Discovery. (2024-10-14) [2025-02-28]. .
[23] 宋梦鹏, 白海燕. 基于大语言模型的文献综述智能生成与循证研究[J/OL]. 数据分析与知识发现. (2024-12-09) [2025-02-28]. .
Song Mengpeng, Bai Haiyan. Research on Intelligent Generation and Evidence Based of Literature Review Based on Large Language Model[J/OL]. Data Analysis and Knowledge Discovery. (2024-12-09) [2025-02-28]. .
[24] 刘学. 大规模视觉语言模型在军事装备问答系统中的应用研究[J/OL]. 计算机应用与软件. (2024-10-12) [2025-02-28]. .
Liu Xue. MILQWEN: A Large Vision Language Model and Application for Military Equipment[J/OL]. Computer Applications and Software. (2024-10-12) [2025-02-28]. .
[25] Perrina Filippo, Marchiori Francesco, Conti Mauro, et al. AGIR: Automating Cyber Threat Intelligence Reporting with Natural Language Generation[C]//2023 IEEE International Conference on Big Data (BigData). Piscataway: IEEE, 2023: 3053-3062.
[26] Wang Zhanyu, Liu Lingqiao, Wang Lei, et al. R2GenGPT: Radiology Report Generation with Frozen LLMs[J]. Meta-Radiology, 2023, 1(3): 100033.
[27] 钱乾, 孙丽萍, 刘佳霖, 等. 基于判别增强大语言模型微调的医学影像报告生成[J]. 计算机应用研究, 2025, 42(3): 762-769.
Qian Qian, Sun Liping, Liu Jialin, et al. Medical Imaging Report Generation Via Multi-modal Large Language Models with Discrimination-enhanced Fine-tuning[J]. Application Research of Computers, 2025, 42(3): 762-769.
[28] Jong Hak Moon, Lee Hyungyung, Shin Woncheol, et al. Multi-modal Understanding and Generation for Medical Images and Text via Vision-language Pre-training[J]. IEEE Journal of Biomedical and Health Informatics, 2022, 26(12): 6070-6080.
[29] Wan Zhongwei, Wang Xin, Liu Che, et al. Efficient Large Language Models: A Survey[EB/OL]. (2023-12-06) [2025-02-28]. .
[30] Dettmers T, Lewis M, Belkada Y, et al. LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale[EB/OL]. (2022-08-15) [2025-02-28]. .
[31] Frantar Elias, Ashkboos Saleh, Hoefler Torsten, et al. GPTQ: Accurate Post-training Quantization for Generative Pre-trained Transformers[EB/OL]. (2022-10-31) [2025-02-28]. .
[32] Chee J, Cai Yaohui, Kuleshov V, et al. QuIP: 2-bit Quantization of Large Language Models with Guarantees[EB/OL]. (2023-07-25) [2025-02-28]. .
[33] Lin Ji, Tang Jiaming, Tang Haotian, et al. AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration[EB/OL]. (2023-10-03) [2025-02-28]. .
[34] Dettmers T, Svirschevski Ruslan, Egiazarian Vage, et al. SpQR: A Sparse-quantized Representation for Near-lossless LLM Weight Compression[EB/OL]. (2023-06-05) [2025-02-28]. .
[35] Kim J Y, Henry R, Fahim R, et al. FineQuant: Unlocking Efficiency with Fine-grained Weight-only Quantization for LLMs[EB/OL]. (2023-08-16) [2025-02-28]. .
[36] Xiao Guangxuan, Lin Ji, Seznec M, et al. SmoothQuant: Accurate and Efficient Post-training Quantization for Large Language Models[EB/OL]. (2022-11-18) [2025-02-28]. .
[37] Hu E J, Shen Yelong, Wallis P, et al. LoRA: Low-rank Adaptation of Large Language Models[EB/OL]. (2021-06-17) [2025-02-28]. .
[38] Chen Yukang, Qian Shengju, Tang Haotian, et al. LongLoRA: Efficient Fine-tuning of Long-context Large Language Models[EB/OL]. (2024-03-08) [2025-02-28]. .
[39] Chavan Arnav, Liu Zhuang, Gupta Deepak, et al. One-for-all: Generalized LoRA for Parameter-efficient Fine-tuning[EB/OL]. (2023-06-13) [2025-02-28]. .
[40] Zhang Qingru, Chen Minshuo, Bukharin A, et al. Adaptive Budget Allocation for Parameter-efficient Fine-tuning[EB/OL]. (2023-03-18) [2025-02-28]. .
[41] Liu Jiachang, Shen Dinghan, Zhang Yizhe, et al. What Makes Good In-context Examples for GPT-3?[EB/OL]. (2021-01-17) [2025-02-28]. .
[42] Wei J, Wang Xuezhi, Schuurmans D, et al. Chain of Thought Prompting Elicits Reasoning in Large Language Models[EB/OL]. (2022-01-28) [2025-02-28]. .
[43] Chen Wenhu, Ma Xueguang, Wang Xinyi, et al. Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks[EB/OL]. (2023-10-23) [2025-02-28]. .
[44] Long Jieyi. Large Language Model Guided Tree-of-thought[EB/OL]. (2023-05-15) [2025-02-28]. .
[45] Ning Xuefei, Lin Zinan, Zhou Zixuan, et al. Skeleton-of-thought: Large Language Models Can Do Parallel Decoding[EB/OL]. (2023-10-08) [2025-02-28]. .
[46] Besta Maciej, Blach Nils, Kubicek Ales, et al. Graph of Thoughts: Solving Elaborate Problems with Large Language Models[EB/OL]. (2024-02-06) [2025-02-28]. .
[47] 徐刚, 刘志鹏, 冯骐, 等. 大语言模型在教育信息化中的实践:规范、框架与应用[J]. 通信学报, 2024, 45(增2): 229-241.
Xu Gang, Liu Zhipeng, Feng Qi, et al. Practical Application of Large Language Models in Educational Informatics: Specification, Framework, and Applications[J]. Journal on Communications, 2024, 45(S2): 229-241.
[48] 杨喆, 许甜, 靳哲, 等. 基于知识图谱的羊群疾病问答系统的构建与实现[J]. 华中农业大学学报, 2023, 42(3): 63-70.
Yang Zhe, Xu Tian, Jin Zhe, et al. Construction and Application of Knowledge Graph of Sheep & Goat Disease[J]. Journal of Huazhong Agricultural University, 2023, 42(3): 63-70.
[49] 成志宇, 陈星霖, 王菁, 等. 一种基于知识图谱的检索增强生成情报问答技术[J]. 计算机科学, 2025, 52(1): 87-93.
Cheng Zhiyu, Chen Xinglin, Wang Jing, et al. Retrieval-augmented Generative Intelligence Question Answering Technology Based on Knowledge Graph[J]. Computer Science, 2025, 52(1): 87-93.
[50] He Xiaoxin, Tian Yijun, Sun Yifei, et al. G-retriever: Retrieval-augmented Generation for Textual Graph Understanding and Question Answering[EB/OL]. (2024-02-12) [2025-02-28]. .
[51] Yang Diji, Rao Jinmeng, Chen Kezhen, et al. IM-RAG: Multi-round Retrieval-augmented Generation Through Learning Inner Monologues[C]//Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2024: 730-740.
[52] Roy K, Zi Yuxin, Shyalika C, et al. QA-RAG: Leveraging Question and Answer-based Retrieved Chunk Re-formatting for Improving Response Quality During Retrieval-augmented Generation[EB/OL]. (2024-07-03) [2024-12-24]. .
[53] Ma Xinbei, Gong Yeyun, He Pengcheng, et al. Query Rewriting for Retrieval-augmented Large Language Models[EB/OL]. (2023-10-23) [2025-02-28]. .
[54] Su Weihang, Tang Yichen, Ai Qingyao, et al. DRAGIN: Dynamic Retrieval Augmented Generation based on the Information Needs of Large Language Models[EB/OL]. (2024-09-21) [2025-02-28]. .
[55] 王昱婷, 陈波, 闫强, 等. 基于问题导向提示学习和多路推理的检索增强生成问答[J/OL]. 计算机工程与应用. (2024-12-12) [2025-02-28]. .
Wang Yuting, Chen Bo, Yan Qiang, et al. Retrieval-augmented Question-answering Generation Based on Question-oriented Prompt Learning and Multi-channel Reasoning[J/OL]. Computer Engineering and Applications. (2024-12-12) [2025-02-28]. .
[56] 张艳萍, 陈梅芳, 田昌海, 等. 面向军事领域知识问答系统的多策略检索增强生成方法[J]. 计算机应用, 2025, 45(3): 746-754.
Zhang Yanping, Chen Meifang, Tian Changhai, et al. Multi-strategy Retrieval-augmented Generation Method for Military Domain Knowledge Question Answering Systems[J]. Journal of Computer Applications, 2025, 45(3): 746-754.
[57] Chen Wenhu, Hu Hexiang, Chen Xi, et al. MuRAG: Multimodal Retrieval-augmented Generator for Open Question Answering over Images and Text[EB/OL]. (2022-10-20) [2025-02-28]. .
[58] Fatehkia Masoomali, Ji Kim Lucas, Chawla Sanjay. T-RAG: Lessons from the LLM Trenches[EB/OL]. (2024-06-06) [2025-02-28]. .
[59] Siriwardhana Shamane, Weerasekera Rivindu, Wen Elliott, et al. Improving the Domain Adaptation of Retrieval Augmented Generation (RAG) Models for Open Domain Question Answering[J]. Transactions of the Association for Computational Linguistics, 2023, 11: 1-17.
[60] Sun Zhiqing, Wang Xuezhi, Tay Y, et al. Recitation-augmented Language Models[EB/OL]. (2022-10-04) [2024-12-24]. .
[61] Mao Yuning, He Pengcheng, Liu Xiaodong, et al. Generation-augmented Retrieval for Open-domain Question Answering[EB/OL]. (2021-08-06) [2025-02-28]. .
[62] Feng Zhangyin, Feng Xiaocheng, Zhao Dezhi, et al. Retrieval-generation Synergy Augmented Large Language Models[C]//ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Piscataway: IEEE, 2024: 11661-11665.
[63] Shao Zhihong, Gong Yeyun, Shen Yelong, et al. Enhancing Retrieval-augmented Large Language Models with Iterative Retrieval-generation Synergy[EB/OL]. (2023-10-23) [2025-02-28]. .
[64] 鲍彤, 章成志. ChatGPT中文信息抽取能力测评-以三种典型的抽取任务为例[J]. 数据分析与知识发现, 2023, 7(9): 1-11.
Bao Tong, Zhang Chengzhi. Extracting Chinese Information with ChatGPT: An Empirical Study by Three Typical Tasks[J]. Data Analysis and Knowledge Discovery, 2023, 7(9): 1-11.
[65] Li Bo, Fang Gexiang, Yang Yang, et al. Evaluating ChatGPT's Information Extraction Capabilities: An Assessment of Performance, Explainability, Calibration, and Faithfulness[EB/OL]. (2023-04-23) [2025-02-28]. .
[66] 张颖怡, 章成志, 周毅, 等. 基于ChatGPT的多视角学术论文实体识别: 性能测评与可用性研究[J]. 数据分析与知识发现, 2023, 7(9): 12-24.
Zhang Yingyi, Zhang Chengzhi, Zhou Yi, et al. ChatGPT-based Scientific Paper Entity Recognition: Performance Measurement and Availability Research[J]. Data Analysis and Knowledge Discovery, 2023, 7(9): 12-24.
[67] Wang Shuhe, Sun Xiaofei, Li Xiaoya, et al. GPT-NER: Named Entity Recognition via Large Language Models[EB/OL]. (2023-04-20) [2025-02-28]. .
[68] 张逸勤, 邓三鸿, 王东波. 基于生成式大语言模型的非遗文本嵌套命名实体识别研究[J/OL]. 现代情报. (2024-12-31) [2025-02-28]. .
Zhang Yiqin, Deng Sanhong, Wang Dongbo. Research on Nested Named Entity Recognition of Intangible Cultural Heritage Texts Based on Generative Language Models[J/OL]. Journal of Modern Information. (2024-12-31) [2025-02-28]. .
[69] 余池, 陈亮, 许海云, 等. 基于大语言模型的专利命名实体识别方法研究[J/OL]. 数据分析与知识发现. (2024-12-19) [2025-02-28]. .
Yu Chi, Chen Liang, Xu Haiyun, et al. Research on Patent Named Entity Recognition Method Based on Large Language Model[J/OL]. Data Analysis and Knowledge Discovery. (2024-12-19) [2025-02-28]. .
[70] 陈文杰, 胡正银, 石栖, 等. 融合知识图谱与大语言模型的科技文献复杂知识对象抽取研究[J/OL]. 现代情报. (2024-12-04) [2025-02-28]. .
Chen Wenjie, Hu Zhengyin, Shi Qi, et al. Research on Scientific and Technological Literature Complex Knowledge Object Extraction Fusing Knowledge Graph and Large Language Model[J/OL]. Journal of Modern Information. (2024-12-04) [2025-02-28]. .
[71] Tang Ruixiang, Han Xiaotian, Jiang Xiaoqian, et al. Does Synthetic Data Generation of LLMs Help Clinical Text Mining?[EB/OL]. (2023-04-10) [2025-02-28]. .
[72] Dagdelen J, Dunn A, Lee S, et al. Structured Information Extraction from Scientific Text with Large Language Models[J]. Nature Communications, 2024, 15(1): 1418.
[73] 段宇锋, 谢佳宏. 基于大语言模型和提示工程的中文医学文本实体关系抽取研究[J/OL]. 数据分析与知识发现. (2024-12-18) [2025-02-28]. .
Duan Yufeng, Xie Jiahong. Entity Relation Extraction of Chinese Medical Text Based on Large Language Model and Prompt Engineering[J/OL]. Data Analysis and Knowledge Discovery. (2024-12-18) [2025-02-28]. .
[74] Zhang Kai, Bernal Jiménez Gutiérrez, Su Yu. Aligning Instruction Tasks Unlocks Large Language Models as Zero-shot Relation Extractors[C]//Findings of the Association for Computational Linguistics: ACL 2023. Stroudsburg: ACL, 2023: 794-812.
[75] Wan Zhen, Cheng Fei, Mao Zhuoyuan, et al. GPT-RE: In-context Learning for Relation Extraction Using Large Language Models[C]//Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2023: 3534-3547.
[76] 赵勤博, 王又辰, 陈荣, 等. 面向开源情报的信息抽取大语言模型[J]. 计算机工程与设计, 2024, 45(12): 3772-3778.
Zhao Qinbo, Wang Youchen, Chen Rong, et al. Large Language Models for Open-source Intelligence Information Extraction[J]. Computer Engineering and Design, 2024, 45(12): 3772-3778.
[77] 斯彬洲, 孙海春, 吴越. 基于大语言模型和事件融合的电信诈骗事件风险分析[J/OL]. 数据分析与知识发现. (2024-12-24) [2025-02-28]. .
Si Binzhou, Sun Haichun, Wu Yue. Risk Analysis of Telecom Fraud Events Based on Big Language Models and Event Fusion[J/OL]. Data Analysis and Knowledge Discovery. (2024-12-24) [2025-02-28]. .
[78] He Jiabang, Wang Lei, Hu Yi, et al. ICL-D3IE: In-context Learning with Diverse Demonstrations Updating for Document Information Extraction[EB/OL]. (2023-08-21) [2025-02-28]. .
[79] 王舰, 孙宇清. 可控文本生成技术研究综述[J]. 中文信息学报, 2024, 38(10): 1-23.
Wang Jian, Sun Yuqing. Survey on Controllable Text Generation[J]. Journal of Chinese Information Processing, 2024, 38(10): 1-23.
[80] 曹露, 许林, 张宇洁, 等. 大语言模型在中医领域的标准化评估[J]. 南京中医药大学学报, 2024, 40(12): 1383-1392.
Cao Lu, Xu Lin, Zhang Yujie, et al. Standardized Evaluation of Large Language Models in Traditional Chinese Medicine[J]. Journal of Nanjing University of Traditional Chinese Medicine, 2024, 40(12): 1383-1392.
[81] 徐月梅, 叶宇齐, 何雪怡. 大语言模型的偏见挑战:识别、评估与去除[J]. 计算机应用, 2025, 45(3): 697-708.
Xu Yuemei, Ye Yuqi, He Xueyi. Bias Challenges of Large Language Models: Identification, Evaluation, and Mitigation[J]. Journal of Computer Applications, 2025, 45(3): 697-708.
[82] 孙振. 仿真想定描述及想定生成的研究与实现[D]. 北京: 北京理工大学, 2015.
Sun Zhen. Research and Realization of Simulation Scenario Description and Scenario Generation[D]. Beijing: Beijing Institute of Technology, 2015.