Meet Your Personal Legal Advisor: An Intelligent Legal LLM That Answers Your Legal Questions

2024-02-27 14:20:45


To bring legal services within everyone's reach and make legal help available to more people, the 【律知】 project was launched, dedicated to building a series of large models that advance legal intelligence. The AI legal model acts as a virtual legal advisor: it has broad legal knowledge and skills, and can answer legal questions and offer legal advice.

Language Models

  • Law-GLM-10B: based on the GLM-10B model, instruction-tuned on 30 GB of Chinese legal data.

| Name | Params | Language | Corpus | Objective | File | Config |
|---|---|---|---|---|---|---|
| GLM-Base | 110M | English | Wiki+Book | Token | glm-base-blank.tar.bz2 | model_blocklm_base.sh |
| GLM-Large | 335M | English | Wiki+Book | Token | glm-large-blank.tar.bz2 | model_blocklm_large.sh |
| GLM-Large-Chinese | 335M | Chinese | WuDaoCorpora | Token, Sent, Doc | glm-large-chinese.tar.bz2 | model_blocklm_large_chinese.sh |
| GLM-Doc | 335M | English | Wiki+Book | Token, Doc | glm-large-generation.tar.bz2 | model_blocklm_large_generation.sh |
| GLM-410M | 410M | English | Wiki+Book | Token, Doc | glm-1.25-generation.tar.bz2 | model_blocklm_1.25_generation.sh |
| GLM-515M | 515M | English | Wiki+Book | Token, Doc | glm-1.5-generation.tar.bz2 | model_blocklm_1.5_generation.sh |
| GLM-RoBERTa | 335M | English | RoBERTa | Token | glm-roberta-large-blank.tar.bz2 | model_blocklm_roberta_large.sh |
| GLM-2B | 2B | English | Pile | Token, Sent, Doc | glm-2b.tar.bz2 | model_blocklm_2B.sh |
| GLM-10B | 10B | English | Pile | Token, Sent, Doc | Download | model_blocklm_10B.sh |
| GLM-10B-Chinese | 10B | Chinese | WuDaoCorpora | Token, Sent, Doc | Download | model_blocklm_10B_chinese.sh |
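Each row above pairs a `.tar.bz2` checkpoint archive with a config shell script from the GLM repository. A minimal sketch of unpacking one of the listed archives; since the download URLs are not given here, a placeholder archive is built first for illustration, and the file name inside it is an assumption:

```shell
set -e
# Stand-in for a downloaded checkpoint (placeholder contents, assumed layout):
mkdir -p demo/glm-large-chinese
echo "placeholder weights" > demo/glm-large-chinese/model_states.pt
tar -cjf demo/glm-large-chinese.tar.bz2 -C demo glm-large-chinese
# Extracting works the same way for the real glm-large-chinese.tar.bz2:
mkdir -p checkpoints
tar -xjf demo/glm-large-chinese.tar.bz2 -C checkpoints
ls checkpoints/glm-large-chinese
```

The extracted checkpoint directory is then referenced by the matching config script (here, model_blocklm_large_chinese.sh) when launching the GLM training or inference scripts.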

  • GLM model results (SuperGLUE)

dev set, single model, single-task finetuning

| Model | COPA | WSC | RTE | WiC | CB | MultiRC | BoolQ | ReCoRD |
|---|---|---|---|---|---|---|---|---|
| GLM-10B | 98.0 | 95.2 | 93.1 | 75.7 | 98.7/98.2 | 88.1/63.3 | 88.7 | 94.4/94.0 |
| DeBERTa-XXLarge-v2 | 97.0 | – | 93.5 | – | – | 87.8/63.6 | 88.3 | 94.1/93.7 |

  • Seq2Seq

CNN/Daily Mail (test set, no additional data used)

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|
| GLM-10B | 44.7 | 21.4 | 41.4 |
| T5-11B | 43.5 | 21.6 | 40.7 |
| PEGASUS-Large | 44.2 | 21.5 | 41.4 |
| BART-Large | 44.2 | 21.3 | 40.9 |

XSum (test set, no additional data used)

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|
| GLM-10B | 48.9 | 25.7 | 40.4 |
| PEGASUS-Large | 47.2 | 24.6 | 39.3 |
| BART-Large | 45.1 | 22.3 | 37.3 |

  • Language Modeling

test set, zero-shot

| Model | LAMBADA (accuracy) | Wikitext103 (perplexity) |
|---|---|---|
| GLM-10B (bi) | 72.35 | 11.33 |
| GLM-10B (uni) | 67.18 | 12.22 |
| GPT-2 | 52.66 | 17.48 |
| Megatron-LM (8.3B) | 66.51 | 10.81 |
| Turing-NLG | 67.98 | 10.21 |

2. Quick Start and Deployment

The released language models can be loaded through HuggingFace.
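A minimal sketch of loading a GLM checkpoint via HuggingFace Transformers. Assumptions: the Hub id `THUDM/glm-large-chinese` (the Law-GLM legal checkpoints may be published under a different name), and the generation helpers (`build_inputs_for_generation`, `eop_token_id`) shipped with GLM's custom tokenizer code, which is pulled in via `trust_remote_code=True`:

```python
MODEL_ID = "THUDM/glm-large-chinese"  # assumed Hub id; swap in the legal checkpoint


def build_prompt(question: str) -> str:
    # GLM answers by infilling [MASK] spans, so the question is phrased
    # as a blank-filling prompt.
    return f"问题:{question} 回答:[MASK]"


def answer(question: str, max_gen_length: int = 256) -> str:
    # Requires network access and a GPU; deliberately not run at import time.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = model.half().cuda().eval()

    inputs = tokenizer(build_prompt(question), return_tensors="pt")
    inputs = tokenizer.build_inputs_for_generation(inputs, max_gen_length=max_gen_length)
    inputs = {k: v.cuda() for k, v in inputs.items()}
    outputs = model.generate(**inputs, max_length=max_gen_length,
                             eos_token_id=tokenizer.eop_token_id)
    return tokenizer.decode(outputs[0].tolist())
```

The `[MASK]`-based prompt reflects GLM's blank-infilling objective: rather than plain left-to-right continuation, the model fills the masked span with the answer.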
