alea-institute committed · Commit 6ed9e60 · verified · Parent(s): 7306c1f

Update README and config files - README.md

Files changed (1): README.md (+254 -167)
---
language:
- en
library_name: transformers
license: cc-by-4.0
tags:
- kl3m
- legal
- financial
- mlm
- roberta
- embedding
- uncased
- feature-extraction
pipeline_tag: fill-mask
widget:
- text: "<|cls|> this credit agreement is made and entered into as of january 1, 2025, by and between acme corporation, a delaware <|mask|>, and first national bank, as lender. <|sep|>"
- text: "<|cls|> pursuant to section 13(a) of the <|mask|> exchange act of 1934, the undersigned hereby certifies that the quarterly report fully complies with the requirements. <|sep|>"
inference:
  parameters:
    temperature: 0.7
    do_sample: true
date: '2025-01-25T00:00:00.000Z'
---
# kl3m-doc-small-uncased-001

`kl3m-doc-small-uncased-001` is a domain-specific masked language model (MLM) based on the RoBERTa architecture, designed for legal and financial document analysis. With approximately 236M parameters, it balances capability and computational cost for specialized NLP tasks, supporting both fill-mask prediction and feature extraction for document embeddings. This uncased variant is particularly useful for case-insensitive applications, maintaining strong performance while disregarding capitalization differences.
## Model Details

- **Architecture**: RoBERTa
- **Size**: 236M parameters
- **Hidden Size**: 1024
- **Layers**: 8
- **Attention Heads**: 8
- **Intermediate Size**: 4096
- **Max Position Embeddings**: 512
- **Max Sequence Length**: 509
- **Tokenizer**: [alea-institute/kl3m-004-128k-uncased](https://huggingface.co/alea-institute/kl3m-004-128k-uncased)
- **Vector Dimension**: 1024 (hidden_size)
- **Pooling Strategy**: CLS token or mean pooling
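
These values can be checked against the published configuration; a minimal sketch using the standard `transformers` config fields:

```python
from transformers import AutoConfig, AutoTokenizer

# Load the published configuration and tokenizer
config = AutoConfig.from_pretrained("alea-institute/kl3m-doc-small-uncased-001")
tokenizer = AutoTokenizer.from_pretrained("alea-institute/kl3m-doc-small-uncased-001")

# These fields should agree with the list above
print(config.hidden_size)              # 1024
print(config.num_hidden_layers)        # 8
print(config.num_attention_heads)      # 8
print(config.intermediate_size)        # 4096
print(config.max_position_embeddings)  # 512
print(len(tokenizer))                  # vocabulary size of kl3m-004-128k-uncased
```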
## Use Cases

This model is particularly useful for:

- Document classification in legal and financial domains
- Entity recognition for specialized terminology
- Understanding legal citations and references
- Filling in missing terms in legal documents
- Feature extraction for downstream legal analysis tasks
- Document similarity and retrieval tasks where capitalization is not significant
- Applications requiring a balance between performance and model size

The uncased nature of this model makes it efficient for scenarios where case distinctions are not important to the task, while its small size offers a good balance between performance and computational requirements.
## Standard Test Examples

Using our standardized test examples for comparing embedding models:

### Fill-Mask Results

1. **Contract Clause Heading**:
   `"<|cls|> 8. representations and<|mask|>. each party hereby represents and warrants to the other party as of the date hereof as follows: <|sep|>"`

   Top 5 predictions:
   1. warranties (0.940)
   2. warranty (0.016)
   3. warranties (0.015)
   4. covenants (0.005)
   5. warrants (0.004)

2. **Defined Term Example**:
   `"<|cls|> \"effective<|mask|>\" means the date on which all conditions precedent set forth in article v are satisfied or waived by the administrative agent. <|sep|>"`

   Top 5 predictions:
   1. date (0.989)
   2. time (0.005)
   3. deadline (0.003)
   4. day (0.001)
   5. dates (0.0005)

3. **Regulation Example**:
   `"<|cls|> all transactions shall comply with the requirements set forth in the truth in<|mask|> act and its implementing regulation z. <|sep|>"`

   Top 5 predictions:
   1. lending (0.609)
   2. information (0.075)
   3. practices (0.049)
   4. claims (0.020)
   5. lending (0.019)

Repeated strings in these lists (e.g., "warranties" at ranks 1 and 3) correspond to distinct vocabulary entries in the space-prefixed BPE tokenizer, such as variants with and without a leading space (see Special Tokens below).

### Document Similarity Results

Using the standardized document examples for embeddings:

| Document Pair | Cosine Similarity (CLS token) | Cosine Similarity (Mean pooling) |
|---------------|-------------------------------|----------------------------------|
| Court Complaint vs. Consumer Terms | 0.429 | 0.440 |
| Court Complaint vs. Credit Agreement | 0.435 | 0.572 |
| Consumer Terms vs. Credit Agreement | 0.562 | 0.498 |

The small model shows moderate document similarity performance, with better differentiation between document types than some larger models in the family, and performs comparably under both CLS token and mean pooling strategies.
## Usage

### Masked Language Modeling

You can use this model for masked language modeling with the simple pipeline approach:

```python
from transformers import pipeline

# Load the fill-mask pipeline with the model
fill_mask = pipeline('fill-mask', model="alea-institute/kl3m-doc-small-uncased-001")

# Example: contract clause heading
# Note the mask token placement - directly adjacent to "and" without space
text = "<|cls|> 8. representations and<|mask|>. each party hereby represents and warrants to the other party as of the date hereof as follows: <|sep|>"
results = fill_mask(text)

# Display predictions
print("Top predictions:")
for result in results:
    print(f"- {result['token_str']} (score: {result['score']:.3f})")

# Output:
# Top predictions:
# - warranties (score: 0.940)
# - warranty (score: 0.016)
# - warranties (score: 0.015)
# - covenants (score: 0.005)
# - warrants (score: 0.004)
```
### Feature Extraction for Embeddings

```python
from transformers import pipeline
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Load the feature-extraction pipeline
extractor = pipeline('feature-extraction', model="alea-institute/kl3m-doc-small-uncased-001", return_tensors=True)

# Example legal documents (truncated for brevity)
texts = [
    # Court Complaint
    "<|cls|> in the united states district court for the eastern district of pennsylvania\n\njohn doe,\nplaintiff,\n\nvs.\n\nacme corporation,\ndefendant. <|sep|>",

    # Consumer Terms
    "<|cls|> terms and conditions\n\nlast updated: april 10, 2025\n\nthese terms and conditions govern your access to and use of the service. <|sep|>",

    # Credit Agreement
    "<|cls|> credit agreement\n\ndated as of april 10, 2025\n\namong\n\nacme borrower inc.,\nas the borrower,\n\nand bank of finance,\nas administrative agent. <|sep|>"
]

# Generate embeddings for each document
embeddings = []
for text in texts:
    # Get features for the text
    features = extractor(text)

    # Extract the CLS token embedding (first token)
    # Convert to numpy if needed
    features_array = features[0].numpy() if hasattr(features[0], 'numpy') else features[0]
    cls_embedding = features_array[0]  # Shape: [hidden_size]

    embeddings.append(cls_embedding)

# Calculate cosine similarity between documents
similarity_matrix = cosine_similarity(embeddings)
print("\nDocument similarity matrix:")
print(similarity_matrix)
```

For more advanced usage with mean pooling:

```python
# Mean pooling - taking average of all token embeddings
def mean_pooling(features):
    # Convert to numpy if needed
    features_array = features[0].numpy() if hasattr(features[0], 'numpy') else features[0]
    return np.mean(features_array, axis=0)

# Generate mean-pooled embeddings
mean_embeddings = [mean_pooling(extractor(text)) for text in texts]

# Calculate similarity with mean pooling
mean_similarity = cosine_similarity(mean_embeddings)
print("\nMean pooling similarity matrix:")
print(mean_similarity)
```
## Training

The model was trained on a diverse corpus of legal and financial documents, ensuring high-quality performance in these domains. It leverages the KL3M tokenizer, which provides 9-17% more efficient tokenization for domain-specific content than cl100k_base or the LLaMA/Mistral tokenizers (a quick way to check this is sketched below).

Training combined masked language modeling (MLM) objectives with attention to dense document representation for retrieval and classification tasks. The model was trained on lowercase text to improve efficiency while maintaining strong performance.
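
As a rough illustration of the tokenizer comparison, token counts can be measured directly; a minimal sketch, assuming the `tiktoken` package for the cl100k_base side (the sample sentence is illustrative, and measured ratios will vary by text):

```python
from transformers import AutoTokenizer
import tiktoken

# Illustrative domain-specific sentence (not drawn from the training corpus)
sample = (
    "the borrower shall deliver to the administrative agent a compliance "
    "certificate pursuant to section 6.02(a) of the credit agreement."
)

kl3m = AutoTokenizer.from_pretrained("alea-institute/kl3m-004-128k-uncased")
cl100k = tiktoken.get_encoding("cl100k_base")

# Compare raw token counts (no special tokens) on the same text
n_kl3m = len(kl3m.encode(sample, add_special_tokens=False))
n_cl100k = len(cl100k.encode(sample))
print(f"kl3m-004-128k-uncased: {n_kl3m} tokens")
print(f"cl100k_base: {n_cl100k} tokens")
```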
## Intended Usage

This model is intended for both:

1. **Masked Language Modeling**: Filling in missing words/terms in legal and financial documents
2. **Document Embedding**: Generating fixed-length vector representations for document similarity and classification

It is particularly well suited to applications requiring a good balance between model size and performance, and where case sensitivity is not important.
## Special Tokens

This model includes the following special tokens:

- CLS token: `<|cls|>` (ID: 5) - Used for the beginning of input text
- MASK token: `<|mask|>` (ID: 6) - Used to mark tokens for prediction
- SEP token: `<|sep|>` (ID: 4) - Used for the end of input text
- PAD token: `<|pad|>` (ID: 2) - Used for padding sequences to a uniform length
- BOS token: `<|start|>` (ID: 0) - Beginning of sequence
- EOS token: `<|end|>` (ID: 1) - End of sequence
- UNK token: `<|unk|>` (ID: 3) - Unknown token

Important usage note:

When using the MASK token for predictions, be aware that this model uses a **space-prefixed BPE tokenizer**: most tokens encode a leading space as part of the token itself. The `<|mask|>` token should therefore be placed immediately after the previous token, with no intervening space, e.g. `"word<|mask|>"` rather than `"word <|mask|>"`. This space-aware placement is crucial for accurate predictions, as the sketch below illustrates.
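
To see why placement matters, compare how the two variants tokenize; a minimal sketch (the printed token strings depend on the tokenizer's vocabulary, so treat them as illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("alea-institute/kl3m-doc-small-uncased-001")

# Correct: <|mask|> directly follows "and", so the model is asked to predict
# a token that carries its own leading space (e.g. " warranties")
print(tokenizer.tokenize("<|cls|> representations and<|mask|>. <|sep|>"))

# Incorrect: the extra space before <|mask|> changes the surrounding
# tokenization, biasing predictions toward tokens without a leading space
print(tokenizer.tokenize("<|cls|> representations and <|mask|>. <|sep|>"))
```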
## Limitations

While providing a good balance between size and performance, this model has some limitations:

- Moderate parameter count (236M) compared to larger language models
- Uncased, so it cannot distinguish between different capitalizations
- Primarily focused on English legal and financial texts
- Best suited for domain-specific rather than general-purpose tasks
- Maximum input of 512 positions (509-token sequences; see Model Details), so lengthy documents may require chunking, as sketched below
- Requires domain expertise to interpret results effectively
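
For documents longer than the context window, a common workaround is to embed overlapping chunks and average them. The following is a minimal sketch under stated assumptions: the chunk and stride sizes are illustrative rather than tuned values, the `embed_long_document` helper is hypothetical, and the `<|cls|>`/`<|sep|>` wrappers used elsewhere in this card are omitted for brevity:

```python
from transformers import AutoTokenizer, AutoModel
import torch

model_id = "alea-institute/kl3m-doc-small-uncased-001"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

def embed_long_document(text: str, chunk_size: int = 509, stride: int = 256) -> torch.Tensor:
    # Tokenize the whole document once, then slide an overlapping window over the ids
    ids = tokenizer.encode(text, add_special_tokens=False)
    chunk_embeddings = []
    for start in range(0, len(ids), stride):
        window = torch.tensor([ids[start:start + chunk_size]])
        with torch.no_grad():
            output = model(input_ids=window)
        # Mean-pool each chunk's token embeddings
        chunk_embeddings.append(output.last_hidden_state[0].mean(dim=0))
        if start + chunk_size >= len(ids):
            break
    # Average the per-chunk vectors into a single document embedding
    return torch.stack(chunk_embeddings).mean(dim=0)
```

Other aggregation choices, such as weighting chunks by length or max-pooling across chunks, are equally reasonable depending on the retrieval task.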
## References

- [KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications](https://arxiv.org/abs/2503.17247)
- [The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models](https://arxiv.org/abs/2504.07854)
## Citation

If you use this model in your research, please cite:

```bibtex
@misc{kl3m-doc-small-uncased-001,
  author = {ALEA Institute},
  title = {kl3m-doc-small-uncased-001: A Domain-Specific Uncased Language Model for Legal and Financial Text Analysis},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/alea-institute/kl3m-doc-small-uncased-001}}
}

@article{bommarito2025kl3m,
  title={KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications},
  author={Bommarito, Michael J and Katz, Daniel Martin and Bommarito, Jillian},
  journal={arXiv preprint arXiv:2503.17247},
  year={2025}
}

@misc{bommarito2025kl3mdata,
  title={The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models},
  author={Bommarito II, Michael J. and Bommarito, Jillian and Katz, Daniel Martin},
  year={2025},
  eprint={2504.07854},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
## License

This model is licensed under [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/).

## Contact

The KL3M model family is maintained by the [ALEA Institute](https://aleainstitute.ai). For technical support, collaboration opportunities, or general inquiries:

- Email: hello@aleainstitute.ai
- Website: https://aleainstitute.ai
- GitHub: https://github.com/alea-institute/kl3m-model-research

![logo](https://aleainstitute.ai/images/alea-logo-ascii-1x1.png)