Dataset Viewer
url | text
---|---
https://aclanthology.org/2024.emnlp-main.1.pdf | Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 1–14
November 12-16, 2024 ©2024 Association for Computational Linguistics
UNIGEN: Universal Domain Generalization
for Sentiment Classification via Zero-shot Dataset Generation
Juhwan Choi1, Yeonghwa Kim1, Seunguk Yu1, Jungmin Yun1 and YoungBin Kim1,2
1Department of Artificial Intelligence, Chung-Ang University
2Graduate School of Advanced Imaging Sciences, Multimedia and Film, Chung-Ang University
{gold5230, movie112, bokju128, cocoro357, ybkim85}@cau.ac.kr
Abstract
Although pre-trained language models have
exhibited great flexibility and versatility with
prompt-based few-shot learning, they suffer
from the extensive parameter size and limited
applicability for inference. Recent studies have
suggested that PLMs be used as dataset gener-
ators and a tiny task-specific model be trained
to achieve efficient inference. However, their
applicability to various domains is limited be-
cause they tend to generate domain-specific
datasets. In this work, we propose a novel ap-
proach to universal domain generalization that
generates a dataset regardless of the target do-
main. This allows for generalization of the tiny
task model to any domain that shares the label
space, thus enhancing the real-world applica-
bility of the dataset generation paradigm. Our
experiments indicate that the proposed method
accomplishes generalizability across various
domains while using a parameter set that is
orders of magnitude smaller than PLMs.
1 Introduction
As the size and performance of pre-trained lan-
guage models (PLMs) increase, generation of new
data by using PLMs has attracted the attention of
many researchers (Anaby-Tavor et al., 2020; Ku-
mar et al., 2020; Yoo et al., 2021). While scholars
have applied this method to solve data augmenta-
tion problems, in recent studies, they have started to
explore zero-shot dataset generation settings (Meng
et al., 2022; Ye et al., 2022a, 2023). This novel ap-
proach first generates training data from a PLM
based on a specific prompt and trains a tiny task
model (TAM) by using the dataset generated in the
first step. This strategy facilitates effective distilla-
tion of the knowledge pertaining to the desired task
from the PLM and helps train the TAM without
the need for guidance from human-annotated data,
thereby enabling zero-shot learning and achieving
low-cost inference compared to the case in which
PLMs are used directly for inference.
However, the approaches proposed thus far have
relied on domain-specific prompts, for example,
“The movie review in positive sentiment is: .” Be-
cause the data generated using this prompt are re-
lated only to the domain of movie reviews, the
TAM trained on these data has limited general-
ization ability across other domains. This is the
primary limitation of the TAM-based approach
compared to prompt-based zero-shot learning that
directly uses PLMs (PROMPTING ), which allows
for generalizability across diverse domains. This
restricts the real-world applicability of the TAM-
based approach because it requires many separately
trained TAMs for various domains. Moreover, as
the costs of dataset generation and TAM training
increase, the cost-efficiency of the TAM-based ap-
proach may decrease. Hence, a novel strategy is
desired to effectively distill the domain generaliz-
ability of large-scale PLMs into TAMs while main-
taining the cost-efficiency of TAMs.
Meanwhile, the existing approaches to domain
generalization often require multiple source do-
mains (Wang et al., 2022; Zhou et al., 2022). This
requirement limits the application of these meth-
ods because it is difficult to gather the required
data from multiple domains. Although the concept
of single-domain generalization, which achieves
domain generalizability by using data from only
one source domain, has been proposed in recent
computer vision studies, such a concept is yet to
be explored for natural language processing (Qiao
et al., 2020; Wang et al., 2021).
In this study, we propose a simple but effective
method called UNIGEN to solve the problem of
domain generalizability between PLMs and TAMs.
Table 1 presents a comparison between UNIGEN
and the existing approaches. UNIGEN first fo-
cuses on generating a domain-invariant training
dataset that is not restricted to specific domains.
This allows TAMs to achieve domain generalizabil-
ity without the need for multiple source domains.
| | Learning without Human-annotated Data | Domain Generalizability | Light Inference | Handling Noise of Generated Data |
| Task-specific Fine-tuning | ✗ | ✗ | ✓ | |
| Previous Domain Generalization (Tan et al., 2022) | ✗ | ✓ | ✓ | |
| PROMPTING | ✓ | ✓ | ✗ | |
| ZEROGEN (Ye et al., 2022a) | ✓ | ✗ | ✓ | ✗ |
| PROGEN & SUNGEN (Ye et al., 2022b; Gao et al., 2023) | ✓ | ✗ | ✓ | ✓ |
| UNIGEN (Ours) | ✓ | ✓ | ✓ | ✓ |
Table 1: Comparison between previous approaches and UNIGEN.
We extend domain generalization strategies based
on supervised contrastive learning (Khosla et al.,
2020), as suggested in a previous work (Tan et al.,
2022). Moreover, we employ additional tactics
such as momentum encoder (He et al., 2020) and
denoised memory bank, in addition to the method
suggested by the previous work (Tan et al., 2022).
Furthermore, because the PLM-based dataset gen-
eration method can generate noisy data (Ye et al.,
2022b; Gao et al., 2023; Zou et al., 2024), we pro-
pose a pseudo-relabeling-based additional denois-
ing method.
Our experiments show that UNIGEN achieves
generalizability across various domains and out-
performs PROMPTING . This indicates that smaller
TAMs can be used universally in various domains,
thereby reducing the costs of PROMPTING , dataset
generation, and TAM training.
Our contributions are summarized as follows:
• We propose UNIGEN, a universal domain gen-
eralization strategy by using zero-shot dataset
generation.
• We develop a pseudo-relabeling-based
method for denoising the generated data.
• Our extensive experiment reveals that the
TAM trained using UNIGEN has domain gen-
eralizability, and it can outperform the PLM
with considerably fewer parameters.
2 Related Work
2.1 Dataset Generation for Efficient Zero-shot
Learning
The evolution of PLMs in terms of parameter size
and performance has facilitated zero-shot learning
through the use of well-designed prompts (Radford
et al., 2019; Brown et al., 2020). However, it is
expensive to directly deploy these massive models
into daily services because the process requires
numerous rounds of inference. Dataset generation
mitigates this problem through the generation of
training datasets by using PLMs and training a
small TAM on the generated datasets (Meng et al.,
2022; Ye et al., 2022a). This TAM is deployed
in downstream tasks to reduce inference costs and
improve performance compared to PROMPTING .
However, mere generation, that is, ZEROGEN,
yields noisy data, such as incorrectly labeled data
or irrelevant data (Ye et al., 2022b; Gao et al.,
2023). PROGEN (Ye et al., 2022b) proposed to al-
leviate this problem by adding examples based on
in-context feedback. Meanwhile, SUNGEN (Gao
et al., 2023) proposed re-weighting the generated
samples during training with a noise-robust loss.
Additionally, a concurrent study suggested leveraging
multiple PLMs as data generators and assigning
weights to the generated samples within a single
training procedure, unlike SUNGEN (Zou et al., 2024).
In this work, we propose a novel approach to
extend dataset generation for universal domain gen-
eralization that is not restricted to specific training
source data, as well as a pseudo-relabeling-based
method to denoise the generated dataset.
2.2 Methods for Learning from Noisy Data
Researchers have explored various methods to miti-
gate noisily labeled data, that is, data whose labels
deviate from the ground truth (Song et al., 2023). A rel-
evant study in this field defined two types of noisy
labels and evaluated the effectiveness of various
methods with respect to the BERT model (Agro and
Aldarmaki, 2023). Another study proposed leverag-
ing GPT-4 to provide guidance on noisily labeled
data (Wang et al., 2023). However, such methods
depend on massive LLMs, which are costly.
Moreover, these studies primarily focused on
human-crafted noisy labels, rather than the noisy
labels of data generated by PLMs.
In this work, we suggest a straightforward
method to handle noisy data based on pseudo-
relabeling, particularly designed for synthetic data.
2.3 Domain Generalization for Text
Classification
Domain generalization aims to improve the gener-
alization ability in the target domain by employing
source data from multiple domains to mitigate the
domain shift problem (Wang et al., 2022; Zhou
et al., 2022). This domain shift can be observed in
natural language processing tasks, such as restau-
rant reviews and reviews of consumer electronics.
For example, long waiting time in a restaurant’s
reviews can represent a negative sentiment about
the restaurant, while long battery life in a laptop’s
reviews can represent a positive sentiment of the
laptop (Tan et al., 2022).
Previous studies to alleviate domain shift in
text classification have focused primarily on do-
main adaptation setting, for which training data
are needed in the target domain (Chen and Cardie,
2018; Ye et al., 2020; Guo et al., 2020). Recently,
researchers have explored the application of do-
main generalization to natural language processing
tasks. A representative study applied supervised
contrastive learning (Khosla et al., 2020) to achieve
domain generalizability in text classification tasks
(Tan et al., 2022).
In this work, we extend an existing method for
domain generalization to generate datasets, includ-
ing the adoption of momentum encoder (He et al.,
2020), in addition to proposing a denoising mem-
ory bank to further enhance its effectiveness and
handle noisy data.
3 Method
3.1 Preliminaries
3.1.1 Dataset Generation
First, we briefly explain the concept and notation
of the preliminary dataset generation method, that
is, ZEROGEN (Ye et al., 2022a). ZEROGEN aims
to create a synthetic dataset Ssyn = (Xsyn, Ysyn)
by using a large-scale PLM P and a task-specific
prompt Ttask. For a text classification problem,
a desired pseudo-label ysyn is first sampled from
the uniform distribution across every class. Next,
ysyn is passed to the prompt Ttask to construct
Ttask(ysyn), that is, the final prompt for P. There-
after, synthesized input data xsyn are generated
using xsyn ∼P(·|Ttask(ysyn)). Finally, Ssyn is com-
posed of these pairs of generated (xsyn,ysyn). No-
tably, the domain of Ssyn is defined by the structure
of Ttask. For example, a Tbook = “The book review
in <y> sentiment is: ” would harness P to gener-
ate xsyn about book reviews. The TAM is trained
on the generated Ssyn and deployed for inference
instead of directly using PLMs with PROMPTING .
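For illustration, a minimal sketch of such a generation loop is given below. This is not the authors' implementation: the domain-specific prompt wording, generation length, and use of GPT2-XL with top-k = 40 and top-p = 0.9 (the values reported in Appendix B) are assumptions of this sketch.

```python
# A minimal sketch of ZEROGEN-style dataset generation (illustrative, not the authors' code).
import random
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2-xl")
model = AutoModelForCausalLM.from_pretrained("gpt2-xl")
model.eval()

def t_task(y_syn: str) -> str:
    # Domain-specific prompt T_task(y_syn); here, the movie-review template.
    return f"The movie review in {y_syn} sentiment is: "

@torch.no_grad()
def generate_pair():
    # Sample a pseudo-label y_syn uniformly over the label space.
    y_syn = random.choice(["positive", "negative"])
    inputs = tokenizer(t_task(y_syn), return_tensors="pt")
    output = model.generate(
        **inputs, do_sample=True, top_k=40, top_p=0.9,
        max_new_tokens=64, pad_token_id=tokenizer.eos_token_id,
    )
    # Keep only the newly generated continuation as x_syn.
    x_syn = tokenizer.decode(
        output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    ).strip()
    return x_syn, y_syn
```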
3.1.2 Supervised Contrastive Learning
Supervised contrastive learning (Khosla et al.,
2020) is a variant of contrastive learning (Chen
et al., 2020) that utilizes label values. It allows
for explicit pulling of the representation of positive
(i.e., same class) samples to the anchor representa-
tion while pushing negative representations away
from the anchor. Studies have reported that this
characteristic is valuable for domain generalization,
which aims to group the representations of different
domains (Kim et al., 2021; Tan et al., 2022). The
supervised contrastive loss is expressed as follows:
$$\mathcal{L}_{\mathrm{SCL}} = -\sum_{z_i \in B} \frac{1}{|P(i)|} \sum_{z_p \in P(i)} \log \frac{\exp(z_i \cdot z_p / \tau_{\mathrm{SCL}})}{\sum_{z_a \in A(i)} \exp(z_i \cdot z_a / \tau_{\mathrm{SCL}})} \quad (1)$$
where z denotes an encoded representation, and
zi is an anchor. P(i) ≡ {zj ∈ B : yj = yi} is the
set of positive samples for each anchor i, and zp
symbolizes a positive representation from P(i).
A(i) ≡ {zj ∈ B : j ≠ i} refers to the union of every
sample except the anchor, including positive and
negative samples. za indicates each representation
from A(i). B denotes a mini-batch, and τSCL is the
temperature of supervised contrastive learning.
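A minimal PyTorch sketch of Eq. (1) is shown below, assuming z is a batch of encoded representations and y the corresponding labels; τSCL = 0.2 follows Appendix B. This is an illustration rather than the authors' implementation.

```python
# A minimal sketch of the supervised contrastive loss in Eq. (1).
import torch
import torch.nn.functional as F

def supcon_loss(z, y, tau=0.2):
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                                   # z_i . z_a / tau for all pairs
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))         # exclude the anchor from A(i)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask   # P(i): same label, not the anchor
    n_pos = pos_mask.sum(dim=1).clamp(min=1)                # anchors without positives contribute 0
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / n_pos
    return loss.mean()
```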
Although supervised contrastive learning is ef-
fective, the introduction of a memory bank and
momentum encoder may augment the advantages
of the method (Wu et al., 2018; He et al., 2020).
The potency of contrastive learning is often influ-
enced by the size of B because a larger B may
introduce more diverse negative samples. How-
ever, increasing the size of B can introduce con-
cerns related to memory consumption. A mem-
ory bank is a mechanism that fulfills this demand
for a greater number of negative samples by stor-
ing previously processed samples within the dic-
tionary M. Memory-efficient contrastive learning
can be achieved using this dictionary with the cur-
rent batch, that is, establishing a union of B and
M instead of solely using B to construct P(i) and
A(i).

Figure 1: Overall framework for generating a dataset and training a TAM using UNIGEN.

Momentum encoder is another technique that
smooths the process of updating the representations
stored in M. The momentum encoder θk is trained
by momentum update, θk ← mθk + (1 − m)θq,
where m is a coefficient for momentum update,
and θq is a normal encoder that is updated through
backpropagation. By using the momentum encoder,
the representations in M are processed by θk.
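The following sketch illustrates the momentum update and a queue-style memory bank; the encoder interface is an assumption of this sketch, while the memory-bank size (64) and m = 0.999 follow the values reported in Appendix B.

```python
# A minimal sketch of the momentum update and memory bank (illustrative assumptions).
from collections import deque
import torch

memory_bank = deque(maxlen=64)  # stores (representation, label) pairs from past batches

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    # theta_k <- m * theta_k + (1 - m) * theta_q
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

@torch.no_grad()
def update_memory(encoder_k, x_batch, y_batch):
    # Representations stored in the memory bank are produced by the momentum encoder.
    z_k = encoder_k(x_batch)
    for z_i, y_i in zip(z_k, y_batch):
        memory_bank.append((z_i.detach(), y_i))
```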
3.2 UNIGEN
To build a TAM that can be applied universally
to various target domains, UNIGEN generates a
domain-invariant dataset by using the universal
prompt Tuni, instead of task-specific Ttask. Consider
“The text in <y> sentiment is:” as an example of
Tuni. Next, the final input prompt for P is con-
structed as Tuni(ysyn). The synthesized input data
xsyn are generated by following the same process
as that of ZEROGEN:
$$x_{\mathrm{syn}} \sim P(\cdot \mid T_{\mathrm{uni}}(y_{\mathrm{syn}})) \quad (2)$$
This configuration of prompt design allows us to
generate a sentence with the desired label without
being restricted to any specific domain. Therefore,
it steers Pto generate various sentences within a
predefined label space. This domain-invariant data
generation allows the TAM trained using UNIGEN
to learn the domain-invariant characteristics of the
desired label space, thereby resulting in generaliz-
ability across the domains that share the label space.
Supervised contrastive loss is applied along with
conventional cross entropy loss to aid this process.
The training loss is defined as follows:
$$\mathcal{L} = \mathcal{L}_{\mathrm{CE}} + \alpha\,\mathcal{L}_{\mathrm{SCL}} \quad (3)$$
where α is a hyperparameter that balances the
ratio between the two losses.
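As a sketch, the combined objective can be written as below; α = 0.5 follows Appendix B, `supcon_loss` refers to the sketch in Section 3.1.2, and the use of the soft labels' argmax to define positives is an assumption of this sketch rather than the authors' stated design.

```python
# A minimal sketch of the combined objective in Eq. (3) (illustrative, not the authors' code).
import torch.nn.functional as F

def unigen_loss(logits, features, soft_labels, alpha=0.5):
    # Cross entropy against the soft labels assigned during generation
    # (F.cross_entropy accepts class probabilities as targets in PyTorch >= 1.10).
    ce = F.cross_entropy(logits, soft_labels)
    # Supervised contrastive loss over projected features; positives are defined here
    # by the argmax of the soft labels, which is an assumption of this sketch.
    scl = supcon_loss(features, soft_labels.argmax(dim=1))
    return ce + alpha * scl
```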
3.3 Handling Noisy Data through Relabeling
However, the application of Tuni instead of Ttask
might lead to the generation of noisy sentences,
which was noted as a drawback of ZEROGEN. This
is because Tuni does not have a specific topic to
guide the generation process. Furthermore, a pre-
viously developed approach to effectively mitigate
this problem is applied in the training phase but not
the generation phase. Therefore, there is scope to
improve the quality of Ssyn (Gao et al., 2023). This
problem highlights the necessity to use a denoising
scheme in the generation procedure. In the present
work, we propose a pseudo-relabeling-based de-
noising process for dataset generation. In a previ-
ous study, the approach of relabeling the generated
data and assigning soft labels for data augmenta-
tion was proposed (Yoo et al., 2021). Herein, we
first perform pseudo-relabeling by using P:
$$\ell(y_i \mid x_{\mathrm{syn}}) = P(M(y_i) \mid T_{\mathrm{uni}}(x_{\mathrm{syn}})) \quad (4)$$
where M(·) denotes a verbalizer that transforms
each label yi into a word. We share Tuni between
this process and the generation process. These
logit values yielded by Pare normalized using the
softmax function with the temperature τRE :
$$\hat{y}_i = p(y_i \mid x_{\mathrm{syn}}) = \frac{\exp(\ell(y_i \mid x_{\mathrm{syn}})/\tau_{\mathrm{RE}})}{\sum_j \exp(\ell(y_j \mid x_{\mathrm{syn}})/\tau_{\mathrm{RE}})} \quad (5)$$
Finally, we assign ˆyi instead of the predefined
ysyn to the generated xsyn. This provides two dis-
tinct advantages: (1) because ŷi is a soft label rather
than a hard label, it contains richer information
about xsyn, such as the degree of the desired la-
bel, which enhances the effectiveness of training
(Szegedy et al., 2016). (2) Because it relabels the
generated xsyn and replaces the predefined ysyn, it
can solve the noisy label issue, which results in the
generation of xsyn that does not correspond to the
designated ysyn, as pointed out in previous work
(Gao et al., 2023). We validate the effectiveness
of this relabeling strategy in the ablation study de-
scribed in Section 4.5.1.
Furthermore, we discard xsyn if its pseudo-label
ˆyi does not exceed the threshold TRE to enhance
the quality of Ssyn. This guarantees that only those
data that have the desired degree of each label are
maintained.
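The relabeling and filtering steps can be sketched as follows; the exact way x_syn is combined with the shared prompt is an assumption here, while τRE = 0.1 and TRE = 0.2 follow Appendix B.

```python
# A minimal sketch of pseudo-relabeling (Eqs. 4-5) and threshold filtering (illustrative).
import torch

LABEL_WORDS = [" negative", " positive"]  # verbalizer M(y_i); assumed to be single tokens

@torch.no_grad()
def relabel_and_filter(model, tokenizer, x_syn, tau_re=0.1, t_re=0.2):
    # Hypothetical composition of the shared prompt with the generated text x_syn.
    prompt = f'"{x_syn}" The text is in'
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    next_token_logits = model(input_ids).logits[0, -1]
    label_ids = [tokenizer.encode(w)[0] for w in LABEL_WORDS]
    logits = next_token_logits[label_ids]                 # l(y_i | x_syn)
    soft_label = torch.softmax(logits / tau_re, dim=0)    # Eq. (5)
    # Discard x_syn unless its strongest label exceeds uniform + T_RE (e.g., 0.7 for binary).
    if soft_label.max().item() < 1.0 / len(LABEL_WORDS) + t_re:
        return None
    return soft_label  # assigned to x_syn in place of the predefined y_syn
```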
3.4 Denoising Memory Bank
In addition to the relabeling strategy, we propose a
denoising memory bank mechanism to further alle-
viate the issue of noisy data. We first apply SUNGEN
(Gao et al., 2023), which employs a noise-robust loss
function to learn a weight w for each training sample
during training, so that noisy data receive small
weights. We aim
to ensure that the memory bank M contains clean
samples, rather than noisy samples. We utilize the
weights w learned from the noise-robust loss func-
tion for this purpose. In the process of updating
M, we store only those samples whose weights are
larger than the threshold TMB. This organization of
the memory bank ensures the exclusion of noisy
samples from the comparison, resulting in higher-
quality negative and positive samples (Robinson
et al., 2021).
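A sketch of the resulting memory-bank update is shown below; w is assumed to hold the per-sample weights learned with SUNGEN's noise-robust objective, and TMB = 0.8 follows Appendix B.

```python
# A minimal sketch of the denoising memory bank update (illustrative assumptions).
import torch

@torch.no_grad()
def update_denoised_memory(memory_bank, z_k, y, w, t_mb=0.8):
    # Only samples whose learned weight w exceeds T_MB enter the memory bank,
    # so contrastive comparisons draw on cleaner positives and negatives.
    keep = w > t_mb
    for z_i, y_i in zip(z_k[keep], y[keep]):
        memory_bank.append((z_i.detach(), y_i))
```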
4 Experiment
4.1 Experimental Setup
In this section, we briefly explain the experimen-
tal setup used herein to validate the effectiveness
of UNIGEN. We employ seven different senti-
ment classification datasets in our main experiment.
Among them, IMDB (Maas et al., 2011), SST-2
(Socher et al., 2013), and Rotten Tomatoes (Pang
and Lee, 2005) are datasets comprising movie re-
views. Meanwhile, the Amazon (McAuley and
Leskovec, 2013) dataset consists of customer re-
views of various products, and the Yelp (Zhang
et al., 2015) dataset is composed of restaurant re-
views. CR (Ding et al., 2008) is another customer
review dataset focusing on consumer electronics.
Lastly, Tweet (Rosenthal et al., 2017) is composed
of messages from Twitter. This configuration al-
lows us to evaluate the ability of UNIGEN, which
can be applied to various domains without pro-
viding any prior information or domain-specific
training. Following the previous study, we adopted
long short-term memory (LSTM) (Hochreiter and
Schmidhuber, 1997) and DistilBERT (Sanh et al.,
2019), and we included RoBERTa (Liu et al., 2019)
as our TAMs. We compared our approach to ZE-
ROGEN and SUNGEN, as well as to PROMPTING
using GPT2-XL (Radford et al., 2019), to ensure
a fair comparison. We did not include other larger
PLMs in the experiments because the previous
work discovered that larger PLMs did not offer
performance gains (Ye et al., 2022a). We report the
average of the performance results obtained across
five different random seeds.
4.2 Comparison with Task-specific TAMs
Table 2 presents a comparison between the exper-
imental results of UNIGEN and PROMPTING and
task-specific TAMs trained by ZEROGEN and SUN-
GEN. The comparison results suggest that UNI-
GEN can generalize across various domains using
a single model without requiring any prior infor-
mation about the test domain. Nonetheless, UNI-
GEN underperformed compared to the task-specific
baselines in each domain. However, the primary
benefit of UNIGEN lies in its unique domain gener-
alizability while using orders-of-magnitude fewer
parameters than PLMs. Additionally, its training
procedure is more efficient than those of other TAM
training strategies. As can be inferred from Ta-
ble 3, SUNGEN generates and synthesizes 1,000k
data for each task domain. This means that 5,000k
data would be required for our experiment, which
involves five different domains, in addition to in-
dividual denoising processes for finding the best
weights of the samples in each of these domains.
By contrast, UNIGEN is not limited by such restric-
tions and requires only a single data generation and
denoising process, as well as a single training pro-
cess. This is extremely beneficial when a novel test
Model #Param Training Domain Setup SST-2 IMDB Rotten Amazon Yelp CR Tweet Average
Test Domain Movie Products Restaurant Electronics Tweet
GPT2-XL 1.5B - PROMPTING 82.15 70.26 77.56 79.06 78.04 80.30 80.38 78.25
LSTM 7M
Movie ZEROGEN 75.11 66.39 69.85 67.24 70.25 69.32 63.43 68.80
SUNGEN 78.79 69.97 73.76 72.15 73.21 70.39 66.84 72.16
Products ZEROGEN 64.26 61.82 60.13 70.32 67.78 69.46 62.29 65.15
SUNGEN 67.83 63.87 63.46 74.43 73.71 73.35 63.51 68.59
Restaurant ZEROGEN 67.41 63.01 62.74 68.73 75.51 69.23 66.35 63.28
SUNGEN 69.15 66.62 64.56 73.22 79.56 70.12 67.43 70.09
Electronics ZEROGEN 64.69 59.13 60.20 66.34 67.72 72.50 60.25 64.40
SUNGEN 68.38 64.33 63.25 72.61 73.01 76.18 66.78 69.22
Tweet ZEROGEN 61.84 60.17 59.43 64.13 63.68 65.02 74.10 64.05
SUNGEN 66.57 63.96 64.21 69.36 71.68 72.57 81.29 69.95
- UNIGEN 64.15 60.02 60.51 63.82 63.20 69.61 70.32 64.52
DistilBERT 66M
Movie ZEROGEN 80.06 69.13 74.73 73.02 72.77 73.59 74.83 74.02
SUNGEN 82.43 70.59 76.37 74.13 73.56 75.14 75.96 75.45
Products ZEROGEN 71.04 64.99 65.57 74.54 71.89 74.57 71.93 70.65
SUNGEN 72.35 65.95 66.84 76.92 74.98 75.84 73.01 72.27
Restaurant ZEROGEN 77.32 65.47 68.86 74.01 77.94 74.89 73.74 73.18
SUNGEN 78.93 67.12 69.92 74.93 80.67 76.06 75.28 74.70
Electronics ZEROGEN 73.77 66.14 66.78 72.38 73.21 78.82 74.58 72.24
SUNGEN 74.49 67.19 68.29 73.49 75.34 80.49 75.37 73.52
Tweet ZEROGEN 73.98 66.58 67.43 72.88 71.86 75.68 80.86 72.75
SUNGEN 75.12 67.53 69.06 73.64 72.73 78.17 82.46 74.10
- UNIGEN 77.67 67.81 73.16 75.06 74.81 79.86 81.41 75.68
RoBERTa 110M
Movie ZEROGEN 84.38 73.03 78.38 77.38 76.83 77.36 77.94 77.90
SUNGEN 85.24 74.09 79.19 78.56 77.61 78.21 79.72 78.95
Products ZEROGEN 79.14 71.16 70.92 79.94 75.79 76.35 80.17 76.21
SUNGEN 81.51 71.28 72.67 81.50 77.76 78.55 81.94 77.87
Restaurant ZEROGEN 82.87 70.71 69.58 78.61 81.47 76.43 79.51 77.03
SUNGEN 83.65 71.40 71.05 79.42 82.72 77.60 80.92 78.11
Electronics ZEROGEN 76.82 69.42 67.89 75.02 76.53 81.24 76.51 74.78
SUNGEN 77.51 71.23 68.77 76.91 78.33 83.49 79.03 76.47
Tweet ZEROGEN 78.43 68.31 72.25 78.09 74.61 79.08 82.96 76.25
SUNGEN 82.19 70.62 73.21 79.84 76.27 81.46 83.25 78.12
- UNIGEN 84.86 72.24 78.82 80.79 79.15 86.37 87.89 81.45
Table 2: Experimental results of UNIGEN and baselines across various datasets and training domains. The
performance of TAM, which is superior to that of PROMPTING , is underlined, and the best result in each test dataset
within the group for each TAM is presented in boldface.
Amount of generated data Number of trained TAMs
ZEROGEN 1,000k 5
SUNGEN 5,000k 5
UNIGEN 1,000k 1
Table 3: Amount of data generated for training TAMs
by using each method, and number of trained TAMs per
method.
domain is introduced, where ZEROGEN and SUN-
GEN necessitate a separate procedure for the new
domain, but UNIGEN directly reuses the already
trained TAM.
Notably, the performance of the LSTM-based
TAM trained using UNIGEN was significantly
lower than that of ZEROGEN and SUNGEN. This
implies that while a small-sized TAM can be
trained effectively for a single, specific domain,
it struggles to generalize to a universal domain
that requires a broad understanding of the generated
data, as evidenced by the detailed study in Appendix E.
Accordingly, the performance of the TAM trained
using UNIGEN improves significantly as the model
size increases. For instance, the DistilBERT-based
TAM trained using UNIGEN exhibited the best av-
erage performance against each task-specific base-
line. This is particularly remarkable as it outper-
formed the SUNGEN baseline in the movie do-
main, which has three in-domain datasets, giving
it an inherent advantage for average performance.
Moreover, the RoBERTa-based TAM trained using
UNIGEN not only yielded the best average per-
formance against these baselines but also outper-
formed PROMPTING in every domain. This result
indicates that it can surpass the zero-shot perfor-
mance of its PLM counterpart (e.g., GPT2-XL)
while using less than 10% of the number of param-
eters and securing the domain generalizability of
the PLM, extending the achievement of the pre-
vious study that leveraged small TAMs in single
domain (Ye et al., 2022a).
RoBERTa DVD Electronics Kitchen Book Average
PROMPTING w/ GPT2-XL 77.73 78.71 81.64 80.27 79.59
UNIGEN 78.14 80.68 82.31 80.93 80.52
SUPERVISED (Tan et al., 2022) 91.40 95.10 95.05 93.25 93.70
Table 4: Experiments conducted using the multi-domain
review dataset. The experimental result of SUPERVISED
was reported in a previous study (Tan et al., 2022) with
the memory bank size of 64.
4.3 Comparison with Supervised Domain
Generalization Method
Next, we analyzed the performance of UNIGEN
against that of a domain generalization method
that uses human-annotated data (Tan et al., 2022).
For this purpose, we used a multi-domain review
dataset comprising four domains: DVD, books,
kitchen and housewares, and consumer electronics
(Blitzer et al., 2007). Following the previous study,
we split the dataset into 1,600 training data and
400 testing data for each domain. Table 4 presents
the comparison results. These results suggest that
UNIGEN can be applied to various domains, and its
performance is superior to that of its PLM counter-
part. Notably, the SUPERVISED baseline relies on
three source domains with human-annotated data
to generalize to a target domain, while UNIGEN is
based on zero-shot dataset generation and does not
require any human-annotated data, which greatly
improves its real-world applicability.
4.4 Domain Generalizability of UNIGEN
To intuitively examine the domain generalizability
of UNIGEN, we plotted the T-SNE (Van der Maaten
and Hinton, 2008) visualization of the features in-
terpreted by the RoBERTa-based TAM trained us-
ing UNIGEN. Figure 2 depicts the visualization
results. These results suggest that the single TAM
classified the given data from every domain with-
out explicit training or prior information about the
domains, thus demonstrating the unique efficiency
of UNIGEN.
Table 5 presents examples of the sentences gen-
erated using UNIGEN. These examples showcase
that UNIGEN can generate domain-invariant sen-
tences with the designated labels. By training
TAMs on these data, it is possible to distill the
domain generalizability of PLMs into TAMs.
Figure 2: T-SNE visualization of the encoded represen-
tation of the RoBERTa model trained using UNIGEN.
The model was trained only on the data generated using
UNIGEN, which is shown in gray color. We used the
test set of the multi-domain review dataset.
4.5 Ablation Study
This section describes the ablation studies con-
ducted to offer rationales for the engineering
choices made in this study. We used the
DistilBERT-based TAM for these experiments.
4.5.1 Effectiveness of Relabeling Strategy
First, we performed an ablation study to validate
the effectiveness of the relabeling strategy dis-
cussed in Section 3.3. We compared the basic ap-
proach that uses soft labels to the two other options.
The first option utilizes the pseudo-relabeling pro-
cess, but it assigns hard labels instead of soft labels.
In other words, it only reflects the decision emanat-
ing from the PLM, not the probability. The second
option completely excludes the relabeling process.
While this option would generate the dataset faster
than the other options, it might generate text with
noisy labels, as already discussed in previous works
(Ye et al., 2022a,b; Gao et al., 2023).
The experimental results are presented in the
second and third rows of Table 6. They suggest
that the use of soft labels offers practical benefits
in terms of performance. This finding is consistent
with that of a previous study in which the strength
of soft labels was demonstrated (Yoo et al., 2021;
Fang et al., 2024). Therefore, according to the re-
sults of this ablation study, relabeling the generated
data with the assignment of soft labels is effective
for mitigating the issue of noisy labels.
Positive Examples Labels
You are a person who is hardworking, honest, and reliable. You have a good sense of humor, and you love being in charge. [0.19,0.81]
You are beautiful, you are powerful, you are amazing. [0.29,0.71]
In a city full of great ideas and creativity, I’ve met a few people who have done things you wouldn’t believe. [0.26,0.74]
The American Dream is alive in this great city. As a new generation of American heroes begins to realize their own American Dream. [0.24,0.76]
Negative Examples Labels
No one likes it. Nobody wants it. It is a disgrace. [0.7,0.3]
The company is no longer in business and has ceased operations. [0.71,0.29]
Please don’t use this feature to communicate with customers [0.74,0.26]
Do not buy from this seller. [0.79,0.21]
Table 5: Examples of the data generated using UNIGEN.
DistilBERT SST-2 IMDB Rotten Amazon Yelp CR Tweet Average
UNIGEN 77.67 67.81 73.16 75.06 74.81 79.86 81.41 75.68
UNIGEN w/ Hard Relabeling 77.18 67.18 72.37 72.91 72.95 78.14 80.39 74.45
UNIGEN w/o Relabeling 76.34 66.58 71.78 70.63 70.97 76.59 79.62 73.22
UNIGEN w/o Denoising MB 77.06 67.13 72.04 74.69 73.66 78.47 80.84 74.84
UNIGEN w/o SCL 75.53 66.10 69.63 71.43 69.58 77.22 79.31 72.69
Combined Prompts 74.19 63.16 71.08 73.62 72.93 78.05 78.02 73.01
Table 6: Results of ablation studies on methodological
choices in Section 4.5.1, 4.5.2, and 4.5.3.
DistilBERT SST-2 IMDB Rotten Amazon Yelp CR Tweet Average
UNIGEN w/ GPT2-XL 77.67 67.81 73.16 75.06 74.81 79.86 81.41 75.68
UNIGEN w/ Gemma-2b 71.50 69.40 67.04 76.48 76.89 77.24 52.03 70.08
UNIGEN w/ Qwen2-1.5B 66.37 63.19 63.76 71.69 72.44 66.06 63.49 66.71
UNIGEN w/ Phi-1.5 74.98 68.35 70.82 73.86 75.11 71.82 84.01 74.13
Table 7: Results of ablation studies on comparison be-
tween various PLMs in Section 4.5.4.
4.5.2 Effectiveness of Supervised Contrastive
Learning and Denoising Memory Bank
Second, we conducted a comparison to investigate
the effectiveness of supervised contrastive learn-
ing, which was discussed in Section 3.1.2, and
denoising memory bank, which was discussed in
Section 3.4. The results of the comparison are
presented in the fourth and fifth rows of Table 6. In-
tuitively, if the quality of each sample in the
dataset is given as a weight, it would be effective to
employ only high-quality samples for comparison
in contrastive learning rather than utilizing all data,
regardless of their quality. The experimental result
in the fourth row demonstrated that the use of a de-
noising memory bank yielded a performance gain,
which was consistent with our intuition. Similarly,
the result in the fifth row suggests that supervised
contrastive learning plays a crucial role in UNI-
GEN.
4.5.3 Comparison with Combined
Domain-specific Datasets
Third, we compared the performance of the TAMs
trained with two different synthetic datasets. The
first uses the synthetic dataset generated with the
prompt of UNIGEN, and the second uses the con-
catenation of datasets generated with five different
domain-specific prompts used in the other experi-
ments. For this experiment, we only differentiated
the synthetic dataset used for training and set every
other configuration identical, such as the usage of
pseudo-relabeling and denoised memory bank, as
well as other hyperparameters. The result of the ab-
lation study is presented in the last row of Table 6.
The result indicates that the model trained with
the dataset generated by the universal prompt in
UNIGEN demonstrated better average performance.
This suggests that the broad understanding of the
label space offered by the synthetic dataset gener-
ated by UNIGEN plays an important role in domain
generalization.
4.5.4 Comparison between PLMs for Data
Generation
Lastly, we evaluated the performance of TAMs
trained using various PLMs. Initially, we utilized
GPT2-XL as the PLM for data generation. In
this experiment, we extended the evaluation by
incorporating more recent models as data genera-
tors. Specifically, we compared the performance
of TAMs trained with UNIGEN using Gemma-
2b (Team et al., 2024), Qwen2-1.5B (Yang et al.,
2024), and Phi-1.5 (Li et al., 2023), which are more
recent models with parameter sizes comparable to
GPT2-XL. All other configurations, aside from the
PLM used for data generation, were kept consistent
with the original GPT2-XL-based TAM.
Table 7 presents the results of this experiment.
Interestingly, the findings suggest that employing
more recent PLMs does not necessarily lead to bet-
ter performance in UNIGEN. The TAM trained
with GPT2-XL, our original choice for data gen-
eration, achieved the highest average performance.
This aligns with previous studies, which indicate
that using a larger PLM does not always result in
superior outcomes (Ye et al., 2022a). However, de-
spite using identical hyperparameters and prompts
to ensure a fair comparison, it is important to rec-
ognize that optimal hyperparameters, such as top-k,
top-p, and τRE, as well as the prompt configurations,
may vary for each PLM. Future research could fo-
cus on developing a unified framework to optimize
hyperparameters and prompts for each PLM, akin
to methods like AutoAugment (Cubuk et al., 2019;
Ren et al., 2021).
5 Conclusion
In this study, we proposed UNIGEN in an attempt
to achieve universal domain generalization. UNI-
GEN successfully transferred the domain generaliz-
ability of PLMs into orders-of-magnitude smaller
TAMs. Moreover, human annotation was not re-
quired for UNIGEN, which significantly reduced
the burden of acquiring labeled data from multi-
ple source domains. Our relabeling method and
denoising memory bank offered additional perfor-
mance gains. Furthermore, our extensive experi-
ments demonstrated that UNIGEN outperformed
PROMPTING , facilitating light inference while pre-
serving the domain generalizability of PLMs.
Although we explored an interesting framework
for zero-shot, lightweight domain generalization,
the performance of UNIGEN appears weaker than
those of baseline models that are trained on each
domain in several cases. It is desirable to achieve
a higher level of performance than those of the in-
domain baselines, which we will attempt in future
work. To this end, the generation of small task-
specific data for additional training of the TAM
trained using UNIGEN is a possible approach, es-
pecially when a downstream task domain is intro-
duced. By employing TAMs that are pre-trained
using UNIGEN as a warm start, high performance
could be achieved in the target domain with a small
amount of task-specific data, which would reduce
the total amount of data generated compared to
that when individually training each TAM by using
ZEROGEN or SUNGEN from scratch. Another pos-
sible approach may involve combining UNIGEN
with the concept of test-time learning (Jeong et al.,
2023). Similar to the first strategy, it may generate
small amounts of test domain-specific data given
test-time data as in-context examples. We are com-
mitted to exploring these possible strategies, which
will enhance the effectiveness of UNIGEN.
Limitations
The primary limitation of UNIGEN is its relatively
weaker in-domain performance than those of base-
lines that are trained with domain-specific datasets.
While it is beneficial for its smaller parameter set
and lower inference cost while maintaining the
domain generalizability of PLMs, there exists a
tradeoff between in-domain performance and effi-
ciency, unlike ZEROGEN and SUNGEN. Therefore,
a method for further enhancing the performance
of UNIGEN should be explored, as stated in the
Conclusion section. A possible solution is a proper
prompt designed for UNIGEN because the quality
of the generated sentences is affected by prompt de-
sign. Even though we adapted an effective prompt
designed in a previous work (Ye et al., 2022a), a
more effective prompt for UNIGEN that aims to
generate diverse and general expressions could ex-
ist.
Ethics Statement
The data generated by the PLM may contain biased
sentences, which may offend the readers. This can
be attributed to the potential bias of PLMs (Liu
et al., 2022). These generated biased sentences do
not reflect the views of the authors.
Acknowledgements
This research was supported by Basic Science Re-
search Program through the National Research
Foundation of Korea(NRF) funded by the Ministry
of Education(NRF-2022R1C1C1008534), and In-
stitute for Information & communications Tech-
nology Planning & Evaluation (IITP) through the
Korea government (MSIT) under Grant No. 2021-
0-01341 (Artificial Intelligence Graduate School
Program, Chung-Ang University).
References
Maha Agro and Hanan Aldarmaki. 2023. Handling
realistic label noise in bert text classification. In
Proceedings of ICNLSP, pages 11–20.
Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich,
Amir Kantor, George Kour, Segev Shlomov, Naama
Tepper, and Naama Zwerdling. 2020. Do not have
enough data? deep learning to the rescue! In Pro-
ceedings of AAAI, pages 7383–7390.
John Blitzer, Mark Dredze, and Fernando Pereira. 2007.
Biographies, bollywood, boom-boxes and blenders:
Domain adaptation for sentiment classification. In
Proceedings of ACL, pages 440–447.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, et al. 2020. Language models are few-shot
learners. In Proceedings of NeurIPS, pages 1877–
1901.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and
Geoffrey Hinton. 2020. A simple framework for
contrastive learning of visual representations. In Pro-
ceedings of ICML, pages 1597–1607.
Xilun Chen and Claire Cardie. 2018. Multinomial adver-
sarial networks for multi-domain text classification.
In Proceedings of NAACL, pages 1226–1240.
Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay
Vasudevan, and Quoc V Le. 2019. Autoaugment:
Learning augmentation strategies from data. In Pro-
ceedings of CVPR, pages 113–123.
Xiaowen Ding, Bing Liu, and Philip S Yu. 2008. A
holistic lexicon-based approach to opinion mining.
In Proceedings of WSDM, pages 231–240.
Tianqing Fang, Wenxuan Zhou, Fangyu Liu, Hongming
Zhang, Yangqiu Song, and Muhao Chen. 2024. On-
the-fly denoising for data augmentation in natural
language understanding. In Findings of EACL, pages
766–781.
Jiahui Gao, Renjie Pi, Lin Yong, Hang Xu, Jiacheng
Ye, Zhiyong Wu, Weizhong Zhang, Xiaodan Liang,
Zhenguo Li, and Lingpeng Kong. 2023. Self-guided
noise-free data generation for efficient zero-shot
learning. In Proceedings of ICLR.
Han Guo, Ramakanth Pasunuru, and Mohit Bansal.
2020. Multi-source domain adaptation for text clas-
sification via distancenet-bandits. In Proceedings of
AAAI, pages 7830–7838.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and
Ross Girshick. 2020. Momentum contrast for unsu-
pervised visual representation learning. In Proceed-
ings of CVPR, pages 9729–9738.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long
short-term memory. Neural computation, 9(8):1735–
1780.
Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung
Hwang, and Jong Park. 2023. Test-time self-adaptive
small language models for question answering. In
Findings of EMNLP, pages 15459–15469.
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao
Chen, Linlin Li, Fang Wang, and Qun Liu. 2020.
Tinybert: Distilling bert for natural language under-
standing. In Findings of EMNLP, pages 4163–4174.
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron
Sarna, Yonglong Tian, Phillip Isola, Aaron
Maschinot, Ce Liu, and Dilip Krishnan. 2020. Su-
pervised contrastive learning. In Proceedings of
NeurIPS, pages 18661–18673.
Daehee Kim, Youngjun Yoo, Seunghyun Park, Jinkyu
Kim, and Jaekoo Lee. 2021. Selfreg: Self-supervised
contrastive regularization for domain generalization.
In Proceedings of ICCV, pages 9619–9628.
Yoon Kim. 2014. Convolutional neural networks for
sentence classification. In Proceedings of EMNLP,
pages 1746–1751.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In Proceedings
of ICLR.
Varun Kumar, Ashutosh Choudhary, and Eunah Cho.
2020. Data augmentation using pre-trained trans-
former models. In Proceedings AACL 2020 Work-
shop on Life-long Learning for Spoken Language
Systems, pages 18–26.
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie
Del Giorno, Suriya Gunasekar, and Yin Tat Lee. 2023.
Textbooks are all you need ii: phi-1.5 technical report.
arXiv preprint arXiv:2309.05463.
Ruibo Liu, Chenyan Jia, Jason Wei, Guangxuan Xu,
and Soroush Vosoughi. 2022. Quantifying and alle-
viating political bias in language models. Artificial
Intelligence, 304:103654.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man-
dar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining ap-
proach. arXiv preprint arXiv:1907.11692.
Andrew Maas, Raymond E Daly, Peter T Pham, Dan
Huang, Andrew Y Ng, and Christopher Potts. 2011.
Learning word vectors for sentiment analysis. In
Proceedings of ACL, pages 142–150.
Julian McAuley and Jure Leskovec. 2013. Hidden fac-
tors and hidden topics: understanding rating dimen-
sions with review text. In Proceedings of RecSys,
pages 165–172.
Yu Meng, Jiaxin Huang, Yu Zhang, and Jiawei Han.
2022. Generating training data with language mod-
els: Towards zero-shot language understanding. In
Proceedings of NeurIPS, pages 462–477.
Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Jus-
tifying recommendations using distantly-labeled re-
views and fine-grained aspects. In Proceedings of
EMNLP, pages 188–197.
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting
class relationships for sentiment categorization with
respect to rating scales. In Proceedings of ACL, pages
115–124.
Fengchun Qiao, Long Zhao, and Xi Peng. 2020. Learn-
ing to learn single domain generalization. In Pro-
ceedings of CVPR, pages 12556–12565.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan,
Dario Amodei, Ilya Sutskever, et al. 2019. Language
models are unsupervised multitask learners. OpenAI
blog, 1(8):9.
Shuhuai Ren, Jinchao Zhang, Lei Li, Xu Sun, and Jie
Zhou. 2021. Text autoaugment: Learning composi-
tional augmentation policy for text classification. In
Proceedings of EMNLP, pages 9029–9043.
Joshua David Robinson, Ching-Yao Chuang, Suvrit Sra,
and Stefanie Jegelka. 2021. Contrastive learning with
hard negative samples. In Proceedings of ICLR.
Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017.
Semeval-2017 task 4: Sentiment analysis in twitter.
In Proceedings of SemEval, pages 502–518.
Victor Sanh, Lysandre Debut, Julien Chaumond, and
Thomas Wolf. 2019. Distilbert, a distilled version
of bert: smaller, faster, cheaper and lighter. arXiv
preprint arXiv:1910.01108.
Richard Socher, Alex Perelygin, Jean Wu, Jason
Chuang, Christopher D Manning, Andrew Y Ng, and
Christopher Potts. 2013. Recursive deep models for
semantic compositionality over a sentiment treebank.
In Proceedings of EMNLP, pages 1631–1642.
Hwanjun Song, Minseok Kim, Dongmin Park, Yooju
Shin, and Jae-Gil Lee. 2023. Learning from noisy
labels with deep neural networks: A survey. IEEE
Transactions on Neural Networks and Learning Sys-
tems, 34(11):8135–8153.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe,
Jon Shlens, and Zbigniew Wojna. 2016. Rethinking
the inception architecture for computer vision. In
Proceedings of CVPR, pages 2818–2826.
Qingyu Tan, Ruidan He, Lidong Bing, and Hwee Tou
Ng. 2022. Domain generalization for text classifica-
tion with memory-based supervised contrastive learn-
ing. In Proceedings of COLING, pages 6916–6926.
Gemma Team, Thomas Mesnard, Cassidy Hardin,
Robert Dadashi, Surya Bhupatiraju, Shreya Pathak,
Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale,
Juliette Love, et al. 2024. Gemma: Open models
based on gemini research and technology. arXiv
preprint arXiv:2403.08295.
Laurens Van der Maaten and Geoffrey Hinton. 2008.
Visualizing data using t-sne. Journal of machine
learning research, 9(86):2579–2605.
Jindong Wang, Cuiling Lan, Chang Liu, Yidong
Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun
Zeng, and Philip Yu. 2022. Generalizing to unseen
domains: A survey on domain generalization. IEEE
Transactions on Knowledge and Data Engineering,
35(8):8052–8072.
Song Wang, Zhen Tan, Ruocheng Guo, and Jundong
Li. 2023. Noise-robust fine-tuning of pretrained lan-
guage models via external guidance. In Findings of
EMNLP, pages 12528–12540.
Zijian Wang, Yadan Luo, Ruihong Qiu, Zi Huang, and
Mahsa Baktashmotlagh. 2021. Learning to diversify
for single domain generalization. In Proceedings of
ICCV, pages 834–843.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Pier-
ric Cistac, Tim Rault, Rémi Louf, Morgan Funtow-
icz, et al. 2020. Transformers: State-of-the-art natu-
ral language processing. In Proceedings of EMNLP
(Demo Track), pages 38–45.
Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua
Lin. 2018. Unsupervised feature learning via non-
parametric instance discrimination. In Proceedings
of CVPR, pages 3733–3742.
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng,
Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan
Li, Dayiheng Liu, Fei Huang, et al. 2024. Qwen2
technical report. arXiv preprint arXiv:2407.10671.
Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou
Ng, and Lidong Bing. 2020. Feature adaptation of
pre-trained language models across languages and
domains with robust self-training. In Proceedings of
EMNLP, pages 7386–7399.
Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao
Feng, Zhiyong Wu, Tao Yu, and Lingpeng Kong.
2022a. Zerogen: Efficient zero-shot learning via
dataset generation. In Proceedings of EMNLP, pages
11653–11669.
Jiacheng Ye, Jiahui Gao, Zhiyong Wu, Jiangtao Feng,
Tao Yu, and Lingpeng Kong. 2022b. Progen: Pro-
gressive zero-shot dataset generation via in-context
feedback. In Findings of EMNLP, pages 3671–3683.
Jiacheng Ye, Chengzu Li, Lingpeng Kong, and Tao Yu.
2023. Generating data for symbolic language with
large language models. In Proceedings of EMNLP,
pages 8418–8443.
Kang Min Yoo, Dongju Park, Jaewook Kang, Sang-Woo
Lee, and Woomyoung Park. 2021. Gpt3mix: Lever-
aging large-scale language models for text augmenta-
tion. In Findings of EMNLP, pages 2225–2239.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text clas-
sification. In Proceedings of NeurIPS.
Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and
Chen Change Loy. 2022. Domain generalization: A
survey. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 45(4):4396–4415.
Tianyuan Zou, Yang Liu, Peng Li, Jianqing Zhang,
Jingjing Liu, and Ya-Qin Zhang. 2024. Fusegen: Plm
fusion for data-generation based zero-shot learning.
arXiv preprint arXiv:2406.12527.
A Prompt for Each Domain
Domain | Prompt
Movie | The movie review in [positive/negative] sentiment is:
Products | The product review in [positive/negative] sentiment is:
Restaurant | The restaurant review in [positive/negative] sentiment is:
Electronics | The electronics product review in [positive/negative] sentiment is:
Tweet | The tweet in [positive/negative] sentiment is:
UNIGEN & PROMPTING | The text in [positive/negative] sentiment is:
Table 8: The prompt used for each domain inZERO GEN
and SUNGEN, as well as the prompt used for UNIGEN
and PROMPTING .
B Implementation Detail
For UNIGEN, we first generated 1,000k data from
the 1.5B GPT2-XL model asPby using the prompt
Tuni “The text in positive/negative sentiment is: ”,
which is a slightly modified version of the best
prompt suggested in a previous study. Top-k and
top-p were set to 40 and 0.9 during the generation
procedure, respectively. The soft relabeling process
was performed using a τRE of 0.1. After obtaining
the soft labels of each of the generated samples, we
filtered them using TRE of 0.2. This required the
largest value from the soft labels to be larger than
the sum of the uniform distribution and TRE, for
instance, 0.7 in binary classification with TRE of
0.2. As an example, the sentence corresponding to
the soft label [0.64,0.36] was discarded because it
did not exceed the threshold.
After generation, we followed the bi-level opti-
mization approach proposed in SUNGEN to cleanse
the generated dataset and find the sample weights
for 50 epochs. The outer learning rate was set
to 5e-2, and we randomly sampled 50k data for
each outer validation process. Then, we selected
200k data with high weights, which represent high-
quality data, to train the TAMs.
We used a one-layer bi-LSTM model for
the LSTM-based TAM and the distilbert-base-
uncased and roberta-base from Transformers (Wolf
et al., 2020) for the DistilBERT-based TAM and
RoBERTa-based TAM, respectively. We trained the
LSTM-based TAM for 5 epochs with the learning
rate of 1e-3 by using the Adam (Kingma and Ba,
2015) optimizer. The DistilBERT-based TAM was
trained for 3 epochs with a learning rate of 2e-5 by
using the Adam optimizer. The RoBERTa-based
TAM was trained for 3 epochs with a learning rate
of 2e-5 by using the Adam optimizer. During the
training process, αfor supervised contrastive learn-
ing loss was set to 0.5, with a projection size of
256. The temperature τSCL was set to 0.2, and the
memory bank size Mwas set to 64. The coefficient
mfor updating the momentum encoder was set to
0.999, and the threshold of the denoising memory
bank TMB was set to 0.8. The dataset generation
and training procedures were executed on a
single NVIDIA A100 40GB GPU. Please refer to
attached source code for further details.1
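For reference, the hyperparameters listed above can be collected into a single configuration, sketched below as a plain dictionary; the key names are illustrative, while the values follow this appendix.

```python
# Hyperparameters described in Appendix B, gathered into one illustrative config dict.
UNIGEN_CONFIG = {
    "generator": "gpt2-xl",                           # P, 1.5B parameters
    "prompt": "The text in {label} sentiment is: ",   # T_uni
    "top_k": 40,
    "top_p": 0.9,
    "num_generated": 1_000_000,                       # 1,000k generated samples
    "tau_re": 0.1,                                    # relabeling softmax temperature
    "t_re": 0.2,                                      # relabeling threshold
    "sungen_epochs": 50,
    "outer_lr": 5e-2,
    "outer_val_size": 50_000,
    "num_selected": 200_000,                          # high-weight samples kept for TAM training
    "alpha": 0.5,                                     # weight of the supervised contrastive loss
    "projection_size": 256,
    "tau_scl": 0.2,
    "memory_bank_size": 64,
    "momentum": 0.999,
    "t_mb": 0.8,                                      # denoising memory bank threshold
    "lstm": {"epochs": 5, "lr": 1e-3, "optimizer": "Adam"},
    "distilbert": {"epochs": 3, "lr": 2e-5, "optimizer": "Adam"},
    "roberta": {"epochs": 3, "lr": 2e-5, "optimizer": "Adam"},
}
```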
C Extensibility of Relabeling Strategy
DistilBERT SST-2 IMDB Rotten Amazon Yelp CR Tweet Average
ZEROGEN 80.06 69.13 74.73 73.02 72.77 73.59 74.83 74.02
ZEROGEN w/ Hard Relabeling 80.72 69.25 73.98 73.41 73.18 73.76 74.91 74.17
ZEROGEN w/ Soft Relabeling 81.79 70.40 75.32 73.65 73.31 74.72 75.14 74.90
Table 9: Experimental result on the extensibility of rela-
beling strategy. We trained the TAM using ZEROGEN
based on the movie domain.
We examined the extensibility of the relabeling
strategy discussed in Section 3.3. We applied two
different options for relabeling, namely assigning
hard labels and soft labels to ZERO GEN. Table 9
summarizes the results. These results suggest that
the relabeling strategy is beneficial for the perfor-
mance of the TAM trained using ZEROGEN. There-
fore, filtering the generated data through the relabel-
ing strategy is an extensible strategy for enhancing
zero-shot learning methods based on dataset gener-
ation. Furthermore, the assignment of soft labels
was more beneficial compared to the assignment
of hard labels, which is consistent with the results
of the ablation study described in Section 4.5.1.
We will further investigate the relabeling-based ap-
proach to enhance ZEROGEN and SUNGEN in fu-
ture works.
D Additional Experiment on Domain
Generalizability
To further reveal the domain generalizability of
UNIGEN, we conducted an additional experiment
on Amazon Review dataset (Ni et al., 2019). We
used 5-core data for 29 domains and reported the
performance of PROMPTING using GPT2-XL (Rad-
ford et al., 2019) and RoBERTa-based TAM trained
by UNIGEN. The result in Table 10 demonstrates
that UNIGEN achieves performance comparable to
PROMPTING with less than 10% of the parameters.
Additionally, this experiment showcases the univer-
sality of UNIGEN, the characteristics that distin-
1https://github.com/c-juhwan/unigen
Domain PROMPTING UNIGEN
Fashion 93.29 91.16
Beauty 95.63 92.87
Appliances 68.27 79.10
Arts, Crafts and Sewing 91.05 92.08
Automotive 91.07 88.23
Books 89.18 91.26
CDs and Vinyl 82.44 86.42
Cell Phones and Accessories 90.47 88.65
Clothing, Shoes and Jewelry 91.83 90.80
Digital Music 93.72 90.62
Electronics 88.68 88.34
Gift Cards 94.03 92.50
Grocery and Gourmet Food 92.31 91.09
Home and Kitchen 92.11 91.53
Industrial and Scientific 91.07 92.34
Kindle Store 89.49 92.76
Luxury Beauty 90.03 91.82
Magazine Subscriptions 85.97 89.64
Movies and TV 86.39 88.19
Musical Instruments 90.72 90.20
Office Products 91.74 89.60
Patio, Lawn and Garden 89.96 87.87
Pet Supplies 90.60 89.91
Prime Pantry 93.64 88.15
Software 82.55 83.39
Sports and Outdoors 88.63 90.36
Tools and Home Improvement 87.41 88.90
Toys and Games 91.54 92.02
Video Games 85.79 86.07
Average 89.30 89.51
Table 10: The result of the experiment on the Amazon
Review dataset.
guish UNIGEN from previous ZEROGEN and SUN-
GEN. Compared to previous methods that would
require 29 separately trained TAMs to conduct this
experiment, UNIGEN only used one single TAM
to perform the experiment, which exhibits the real-
world applicability of UNIGEN.
E Additional Study on the Performance
of UNIGEN on Small-sized TAMs
From the experiment in Table 2, we found that
UNIGEN struggles to fully exhibit its performance
on the LSTM model. To further investigate this
phenomenon, we expanded our experiment to two
additional small-sized TAMs: TextCNN (Kim, 2014) and TinyBERT
(Jiao et al., 2020). Table 11 showcases the result of
the additional experiment. In the case of TextCNN-
based TAM, baseline methods such as ZEROGEN
and SUNGEN demonstrated comparable or slightly
higher performance compared to that of LSTM-
based TAM. Nonetheless, TextCNN-based TAM
trained on UNIGEN reported slightly worse per-
formance compared to LSTM-based TAM despite
increased parameter size. We hypothesize that
this phenomenon is owing to the architecture of
TextCNN, which leverages CNN layers that have
fixed window size, leading to limited ability to
understand the context of diverse expression gen-
erated by UNIGEN. On the contrary, TinyBERT-
based TAM trained on UNIGEN exhibited the best
average performance among the baselines. Fur-
thermore, its average performance is comparable
to DistilBERT-based TAM despite a much smaller
parameter size. It is noteworthy that TinyBERT is
also a model that has a general understanding of
the language through knowledge distillation from
BERT. Through this investigation, we reveal that
the pre-trained knowledge of the TAM aids the
successful training of the TAM through UNIGEN.
Model #Param Training Domain Setup SST-2 IMDB Rotten Amazon Yelp CR Tweet Average
Test Domain Movie Products Restaurant Electronics Tweet
GPT2-XL 1.5B - PROMPTING 82.15 70.26 77.56 79.06 78.04 80.30 80.38 78.25
LSTM 7M
Movie ZEROGEN 75.11 66.39 69.85 67.24 70.25 69.32 63.43 68.80
SUNGEN 78.79 69.97 73.76 72.15 73.21 70.39 66.84 72.16
Products ZEROGEN 64.26 61.82 60.13 70.32 67.78 69.46 62.29 65.15
SUNGEN 67.83 63.87 63.46 74.43 73.71 73.35 63.51 68.59
Restaurant ZEROGEN 67.41 63.01 62.74 68.73 75.51 69.23 66.35 63.28
SUNGEN 69.15 66.62 64.56 73.22 79.56 70.12 67.43 70.09
Electronics ZEROGEN 64.69 59.13 60.20 66.34 67.72 72.50 60.25 64.40
SUNGEN 68.38 64.33 63.25 72.61 73.01 76.18 66.78 69.22
Tweet ZEROGEN 61.84 60.17 59.43 64.13 63.68 65.02 74.10 64.05
SUNGEN 66.57 63.96 64.21 69.36 71.68 72.57 81.29 69.95
- UNIGEN 64.15 60.02 60.51 63.82 63.20 69.61 70.32 64.52
CNN 10M
Movie ZEROGEN 74.34 67.91 70.22 68.69 71.03 70.89 64.77 69.69
SUNGEN 76.98 68.97 73.49 73.04 73.97 71.55 69.43 72.49
Products ZEROGEN 63.46 62.13 60.35 70.94 68.34 72.34 65.71 66.18
SUNGEN 65.89 63.27 61.97 73.98 72.81 74.02 67.38 68.47
Restaurant ZEROGEN 67.76 64.18 62.16 70.17 76.65 71.27 65.43 68.23
SUNGEN 68.86 65.62 64.96 73.20 77.87 72.43 68.36 70.19
Electronics ZEROGEN 65.05 63.04 62.13 67.19 69.50 73.66 63.23 66.26
SUNGEN 67.43 65.13 63.25 70.82 72.79 77.42 67.19 69.15
Tweet ZEROGEN 60.56 60.68 61.33 64.91 64.37 66.86 75.62 64.90
SUNGEN 65.12 61.56 63.42 66.45 68.46 68.71 80.17 67.70
- UNIGEN 62.31 60.48 61.82 61.08 61.63 68.24 65.95 63.07
TinyBERT 14.5M
Movie ZEROGEN 78.95 68.37 71.34 70.59 71.35 71.18 68.94 71.53
SUNGEN 80.78 69.86 73.47 72.36 72.42 73.75 70.81 73.35
Products ZEROGEN 69.22 62.79 63.44 72.57 69.70 73.22 71.21 68.88
SUNGEN 71.74 64.38 64.51 75.81 73.76 74.17 72.86 71.03
Restaurant ZEROGEN 75.79 64.62 65.53 71.33 77.10 73.52 70.84 71.25
SUNGEN 77.45 67.41 68.01 74.41 79.16 75.86 72.11 73.49
Electronics ZEROGEN 71.22 64.37 63.06 69.51 70.75 75.71 69.49 69.16
SUNGEN 73.10 65.81 66.71 71.33 74.86 78.43 73.88 72.02
Tweet ZEROGEN 70.76 63.40 64.43 68.74 70.44 73.72 78.14 69.95
SUNGEN 73.94 64.87 66.31 71.39 72.21 78.16 81.23 72.59
- UNIGEN 76.74 66.88 69.63 73.29 72.10 78.64 80.52 73.97
Table 11: Result of ablation study that examines the performance of UNIGEN and baselines on small-sized TAMs.
The performance of TAM, which is superior to that of PROMPTING , is underlined, and the best result in each test
dataset within the group for each TAM is presented in boldface.
|
https://aclanthology.org/2024.emnlp-main.2.pdf | Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15–29
November 12-16, 2024 ©2024 Association for Computational Linguistics
MULTI-NEWS+: Cost-efficient Dataset Cleansing
via LLM-based Data Annotation
Juhwan Choi1, Jungmin Yun1, Kyohoon Jin2 and YoungBin Kim1,2
1Department of Artificial Intelligence, Chung-Ang University
2Graduate School of Advanced Imaging Sciences, Multimedia and Film, Chung-Ang University
{gold5230, cocoro357, fhzh123, ybkim85}@cau.ac.kr
Abstract
The quality of the dataset is crucial for ensuring
optimal performance and reliability of down-
stream task models. However, datasets often
contain noisy data inadvertently included dur-
ing the construction process. Numerous at-
tempts have been made to correct this issue
through human annotators. However, hiring
and managing human annotators is expensive
and time-consuming. As an alternative, recent
studies are exploring the use of large language
models (LLMs) for data annotation.
In this study, we present a case study that ex-
tends the application of LLM-based data anno-
tation to enhance the quality of existing datasets
through a cleansing strategy. Specifically, we
leverage approaches such as chain-of-thought
and majority voting to imitate human anno-
tation and classify unrelated documents from
the Multi-News dataset, which is widely used
for the multi-document summarization task.
Through our proposed cleansing method, we
introduce an enhanced MULTI-NEWS+. By em-
ploying LLMs for data cleansing, we demon-
strate an efficient and effective approach to im-
proving dataset quality without relying on ex-
pensive human annotation efforts.
1 Introduction
The significance of dataset quality in deep learning
applications cannot be overstated as mislabeled or
noisy data can severely degrade performance (Song
et al., 2023). Datasets with incorrect labels, noise,
or inconsistencies undermine the consistency and
stability of model training. Cleansing these datasets
contributes to enhancing model performance and
generalization capabilities. Hence, ensuring the
quality of the dataset by identifying and eliminat-
ing noisy data is imperative. In the realm of natural
language processing, several researchers have at-
tempted to improve the quality of noisy datasets
(Jiang et al., 2020, 2022). For example, ReDo-
cRED (Tan et al., 2022) addressed issues such as
Source 1
Starting in 1996, alexa internet has been donating their
crawl data to the internet archive. Flowing in every day,
these data are added to the wayback machine after an
embargo period.
Source 2
... For the first time in decades, researchers trying to de-
velop a vaccine for malaria have discovered a new target
they can use to attack this deadly and common parasite...
Source 3
Focused crawls are collections of frequently-updated
webcrawl data from narrow ( as opposed to broad or
wide ) web crawls, often focused on a single domain or
subdomain.
Summary
Researchers think they’ve found a promising new potential
weapon in the fight against malaria in a fairly unlikely
place: the blood of toddlers. In a paper published in sci-
ence today, ...
Table 1: Examples of noisy documents in Multi-News
dataset. Sources 1 and 3 do not contribute to the sum-
mary. We aim to identify such noisy documents without
a human annotator.
false negatives in DocRED (Yao et al., 2019), a
widely used dataset for relation extraction. Simi-
larly, annotation inconsistencies were found in the
MultiWOZ dataset (Budzianowski et al., 2018) for
dialogue state tracking (Qian et al., 2021), leading
to efforts to rectify these issues (Eric et al., 2020;
Zang et al., 2020; Han et al., 2021; Ye et al., 2022a).
Despite these efforts, relying on human annota-
tors to enhance datasets poses challenges such as
high costs and time constraints. The quality of the
annotation might also be affected by potential vari-
ations, such as subjective bias and the proficiency
of the annotator (Rashtchian et al., 2010). Further-
more, cleansing a noisy dataset typically requires
a larger budget, often involving majority voting by
multiple annotators or validation by experts (Tan
et al., 2022). Given the significance and neces-
sity of enhancing the quality of existing datasets,
these obstacles hinder practical efforts to cleanse
datasets efficiently. Therefore, it is crucial to ex-
plore cost-effective methods that can cleanse the
existing dataset, minimizing human involvement.
Figure 1: Overall framework for cleansing data and composing MULTI-NEWS+.
In this study, we propose leveraging large lan-
guage model (LLM)-based annotation for dataset
cleansing. Researchers have explored cost-efficient
alternatives to human annotators by employing
LLMs across various tasks (Wang et al., 2021; Ding
et al., 2023; He et al., 2024; Bansal and Sharma,
2023; Zhang et al., 2023; Choi et al., 2024). How-
ever, the real-world applicability of LLM-based
annotation on existing datasets is still less explored.
Building on these insights, we extend the appli-
cation of LLM-based annotations to denoise the
existing dataset and improve its quality. Specifi-
cally, we conduct a case study to cleanse the Multi-
News (Fabbri et al., 2019), a dataset for multi-
document summarization tasks. This dataset con-
sists of news articles crawled from the internet and
is widely used in multi-document summarization
research. However, as shown in Table 1, we iden-
tify several issues related to the noise in the dataset.
For instance, the set of documents contained sys-
tem messages from platforms such as Twitter, Way-
back Machine, or Dow Jones that are unrelated to
the summary and degrade the dataset quality.
To accomplish our purpose, we utilize LLMs to
analyze the summary and associated documents,
identifying and excluding any documents that are
not relevant to the summary. Specifically, we em-
ploy approaches such as chain-of-thought (CoT),
providing the rationale for decision-making with
enhanced transparency and facilitating human in-
vestigation. We further enhance our cleansing pro-
cess by incorporating self-consistency considera-
tions, which mimic the majority voting process
used by human annotators (Wang et al., 2023b).
Based on our carefully designed framework, we
introduce MULTI -NEWS +, an enhanced version of
the existing Multi-News dataset, achieved through
our LLM-based cleansing strategy. To the best of
our knowledge, this is the first attempt to exploit
LLMs to enhance the quality of real-world datasets.
Our experiments demonstrate the effectiveness of
MULTI -NEWS +, providing a valuable resource for
future research. We make MULTI -NEWS + and our
source code publicly available for further study.
2 Related Work
Dataset quality has long been of interest to researchers
because of its importance in ensuring the qual-
ity of the model trained with the dataset (Budach
et al., 2022). Previous studies found that large
amounts of data automatically crawled from the
web may contain noisy documents, and proper
filtering procedures can be an efficient solution
against them (Xu and Koehn, 2017; Khayrallah
and Koehn, 2018; Kry´sci´nski et al., 2019; Luccioni
and Viviano, 2021; Kreutzer et al., 2022). Accord-
ingly, several studies in text summarization inves-
tigated various strategies to filter out noisy data
(Matsumaru et al., 2020; Nan et al., 2021; Guo
et al., 2022) and released new datasets with better
quality (Grusky et al., 2018; Urlana et al., 2022).
However, their strategies are primarily composed
of coarse rule-based methods and less interpretable
model output, or costly human investigation has
been applied for constructing new datasets. Fur-
thermore, such strategies have not been applied to
multi-document summarization datasets.
In the meantime, with the advancement of LLMs
(Zhao et al., 2023), researchers have explored the
usage of LLMs for data annotation, a task that
traditionally relied on human annotators. Initial
attempts have revealed the potential capabilities
of models like GPT-3 for data annotation (Wang
Figure 2: Histogram comparing the amount of input
articles in each dataset.
et al., 2021). These studies indicate that GPT-3
can annotate datasets more efficiently and cost-
effectively than human annotators. This results in
enhanced downstream task performance, with the
model trained on the GPT-3 annotated dataset out-
performing the one trained on the human-annotated
dataset. Subsequent studies have further demon-
strated the capabilities of GPT-3, showing its ability
to generate labeled data using external knowledge
or instructions about desired labels and domains
(Ding et al., 2023). Additionally, researchers have
examined the usefulness of newer models like GPT-
3.5 and evaluated the effectiveness of CoT in im-
proving annotation quality (He et al., 2024). LLM-
based annotation has also been extended to low-
resource languages where hiring human annotators
is challenging (Choi et al., 2024).
In this work, we introduce a novel approach
to filtering noisy documents from multi-document
summarization dataset by extending cost-efficient
LLM-based annotation beyond traditional data
annotation tasks. By leveraging the capabili-
ties of LLMs, our study facilitates real-world
dataset cleansing, enhancing the quality of existing
datasets. This attempt is noteworthy as it broadens
the scope of LLM applications, offering effective
solutions for improving dataset quality and stream-
lining its cleansing process, minimizing reliance
on human annotations.
3 MULTI-NEWS+
The original Multi-News dataset plays an im-
portant role in multi-document summarization re-
search. It consists of sets of documents and their
corresponding summaries. However, as shown in
Table 1 and detailed in Appendix G and H, the
Multi-News dataset contains several noisy and ir-
relevant articles that are unrelated to the summary
or other documents. This issue arises from their
construction process, which relies on automated
crawling from the Internet Archive.
To solve this issue and cleanse the dataset, we
defined our problem as a classification task deter-
mining whether each document is relevant to the
summary. To this end, we designed the prompt
for the model as shown in Appendix J. We inte-
grated CoT to enhance the model’s performance by
evaluating the relevance of each document to the
summary. Thus, a rationale for the decision can
be made available, which marks the difference be-
tween LLM-based and human annotations. While
traditional human annotation through crowdsourc-
ing platforms like Amazon Mechanical Turk usu-
ally produces annotation results without underlying
reasons due to additional costs, LLM-based anno-
tators can easily offer explanations through CoT.
These rationales can assist human managers in re-
viewing results and rectifying erroneous decisions.
Furthermore, we imitated the conventional
dataset cleansing procedure which typically in-
volves multiple human annotators and their col-
lective judgments, primarily through majority vot-
ing. Similarly to the majority voting process used
by human annotators, we applied this approach
to the LLM-based annotators. In particular, we
generated five individual LLM agents to read the
summary and documents and determine if the doc-
ument is relevant to the summary. This strategy
based on self-consistency can boost the quality of
annotations, by rectifying potential errors made by
individual agents (Wang et al., 2023b). Figure 1
presents the summary of the overall process.
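As an illustration of this procedure, the sketch below is our own simplification rather than the released implementation: the prompt constant, the decoding temperature, the parsing of the final answer line, and the majority threshold are assumptions that should be adapted to the actual prompt in Appendix J.

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = "..."  # the 3-shot CoT instructions shown in Appendix J

def annotate_once(summary, documents, model="gpt-3.5-turbo-0125"):
    """One agent reads the summary and documents and names the irrelevant ones."""
    doc_block = "\n".join(f"[Document {i + 1}]\n{d}" for i, d in enumerate(documents))
    response = client.chat.completions.create(
        model=model,
        temperature=1.0,  # sampling diversity is what makes repeated queries informative
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"[Summary]\n{summary}\n{doc_block}"},
        ],
    )
    # The answer ends with e.g. "Document 1|Document 3" or "None" after the rationale.
    verdict = response.choices[0].message.content.strip().splitlines()[-1]
    if "none" in verdict.lower():
        return set()
    return {p.strip() for p in verdict.split("|") if p.strip().lower().startswith("document")}

def flag_noisy_documents(summary, documents, n_agents=5):
    """Majority vote over independent agents: flag a document if at least 3 of 5 agree."""
    votes = Counter()
    for _ in range(n_agents):
        for doc_id in annotate_once(summary, documents):
            votes[doc_id] += 1
    return {doc_id for doc_id, n in votes.items() if n > n_agents // 2}
```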
Based on the proposed method, we utilized
five LLM agents to individually annotate 56,216
sets of summaries and documents from the Multi-
News dataset. Specifically, we employed the
GPT-3.5-turbo-0125 model1, the most re-
cent model at the time of this study. With a prompt
designed for a 3-shot CoT, approximately 3,500 to-
kens were required to annotate the input summaries
and articles, along with around 100 tokens for gen-
erating reasoning processes and annotation results.
The cost per annotation sample amounted to ap-
proximately 0.01$ (0.002$ per agent), resulting in
a total cost of approximately 550$ to annotate the entire Multi-News dataset.
1GPT-3.5-turbo-0125 charges 0.0005$ for the input
of 1,000 tokens, and 0.0015$ for the generation of 1,000
tokens.
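For reference, the reported budget can be roughly reproduced from these figures; the token counts below are the approximations stated above, and the small gap to the quoted total comes only from rounding.

```python
input_cost = 0.0005 * 3500 / 1000    # 3-shot CoT prompt, roughly 3,500 input tokens
output_cost = 0.0015 * 100 / 1000    # rationale plus label, roughly 100 generated tokens
per_agent = input_cost + output_cost # ~0.0019$, i.e. about the quoted 0.002$ per agent
per_sample = 5 * per_agent           # five agents vote on every set -> ~0.0095$ (~0.01$)
total = per_sample * 56216           # all summary/document sets -> ~534$; with the rounded
                                     # 0.01$ per sample this is ~562$, hence "approximately 550$"
```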
Model BART-large-cnn
Metric ROUGE-1 ROUGE-2 ROUGE-L BERTScore BARTScore
Multi-News 48.64 18.86 24.11 0.6401 -2.763
MULTI-NEWS+ 49.17 19.04 24.36 0.6418 -2.698
Ablation (Urlana et al., 2022) 47.48 18.27 23.81 0.6362 -2.767
Model T5-base
Metric ROUGE-1 ROUGE-2 ROUGE-L BERTScore BARTScore
Multi-News 40.11 13.90 21.58 0.6003 -2.407
MULTI-NEWS+ 40.45 14.17 21.84 0.6027 -2.362
Ablation (Urlana et al., 2022) 39.30 13.65 21.42 0.5967 -2.457
Table 2: Performance comparison of the Multi-News and MULTI-NEWS+ datasets on two models. The “Ablation”
row represents a version of the Multi-News dataset that has been cleansed using methods from previous study
(Urlana et al., 2022).
After annotation, we found that 27,052 of the
153,091 articles can be considered noisy documents
and do not contribute to the summarization. Sub-
sequently, we constructed MULTI -NEWS + by re-
moving these noisy documents from Multi-News
while preserving the train/valid/test split. Figure 2
presents the comparison of the Multi-News and
MULTI -NEWS + datasets in terms of the number of
documents per set. More than 15% of the docu-
ments in Multi-News are irrelevant, diminishing
the dataset’s quality and degrading the model’s per-
formance. Furthermore, 379 sets have no relevant
source articles, as shown in Appendix H. In con-
trast, by deleting noisy documents, MULTI -NEWS +
demonstrates enhanced quality.
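A minimal sketch of this construction step (our own illustration; the record format and field names are assumptions and should be adapted to the released data files):

```python
def build_multi_news_plus(split, flags):
    """Drop documents flagged as noisy by the majority vote while keeping the split intact.

    `split` is a list of {"summary": str, "documents": [str, ...]} records for one of the
    train/valid/test splits, and `flags` maps a record index to the set of flagged document
    indices. Records whose documents are all flagged are set aside for inspection (Appendix H).
    """
    cleansed, no_relevant_source = [], []
    for idx, record in enumerate(split):
        keep = [doc for d, doc in enumerate(record["documents"])
                if d not in flags.get(idx, set())]
        if keep:
            cleansed.append({"summary": record["summary"], "documents": keep})
        else:
            no_relevant_source.append(record)
    return cleansed, no_relevant_source
```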
4 Experiment
4.1 Experimental Design
To validate the efficacy of data cleansing and the
development of MULTI -NEWS + in filtering out
noisy documents and improving the performance
of downstream task models, we measured the multi-
document summarization performance of models
trained on each dataset, similar to previous study
(Guo et al., 2022). Enhanced model performance
indicates superior dataset quality (Ye et al., 2022b;
Choi et al., 2024). We fine-tuned two different
models, BART (Lewis et al., 2020) and T5 (Raffel
et al., 2020) on Multi-News and MULTI -NEWS +.
Performance evaluation metrics included the fol-
lowing metrics: ROUGE (Lin, 2004), BERTScore
(Zhang et al., 2020), and BARTScore (Yuan et al.,
2021). For a fair comparison, we used the test set
of MULTI -NEWS + for each model and reported the
average performance across three random seeds.
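A sketch of the evaluation step for the metrics listed above (our illustration; BARTScore is omitted here because it is computed with the scorer class distributed with its original repository, and the package options shown are assumptions):

```python
from bert_score import score as bert_score
from rouge_score import rouge_scorer

def evaluate(predictions, references):
    """Average ROUGE-1/2/L F1 and BERTScore F1 over one test split."""
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    totals = {"rouge1": 0.0, "rouge2": 0.0, "rougeL": 0.0}
    for pred, ref in zip(predictions, references):
        scores = scorer.score(ref, pred)  # (target, prediction) order
        for key in totals:
            totals[key] += scores[key].fmeasure
    results = {key: 100 * value / len(predictions) for key, value in totals.items()}
    _, _, f1 = bert_score(predictions, references, model_type="bert-base-uncased")
    results["bertscore_f1"] = f1.mean().item()
    return results
```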
4.2 Result
The results in Table 2 demonstrate the superiority
of the MULTI -NEWS + dataset in enhancing the per-
formance of summarization models compared to
the original Multi-News dataset. Across various
metrics, models trained on MULTI -NEWS + con-
sistently outperform those trained on Multi-News,
indicating better summarization quality with the
refined dataset. This highlights the effectiveness of
dataset cleansing in removing noisy and irrelevant
documents, thereby enhancing the overall perfor-
mance of summarization models. Additionally, we
performed a human evaluation on the output of
379 sets that are classified as having no relevant
source articles and found that 356 sets are correctly
classified, corresponding to a human-machine
agreement rate of 93.9%. We provide an example
of error analysis in Appendix I.
Additionally, we conducted an ablation study us-
ing the cleansing method proposed by a previous
study (Urlana et al., 2022), detailed in Appendix F.
Our findings indicate that this method is ineffec-
tive in improving downstream task performance on
the Multi-News dataset, which focuses on multi-
document summarization and differs from the con-
figuration used in the prior study. This underscores
the effectiveness of our proposed method and the
value of MULTI -NEWS +.
5 Discussion and Future Works
In this section, we discuss recent advancements in
the field since the submission of the manuscript
and propose strategies for incorporating them in
future research.
Cutting-edge models. Although we employed
five GPT-3.5-turbo-0125 models for our ex-
periments, the field has seen the release of more
advanced models, such as GPT-4o (OpenAI,
2024b), GPT-4o-mini (OpenAI, 2024a), and
OpenAI O1 (OpenAI, 2024c), along with the con-
tinued development of open-source models like
LLaMA-3 (Dubey et al., 2024), Gemma-2 (Team
et al., 2024), andMistral Nemo (Mistral, 2024).
Models such as GPT-4o-mini and other open-
source alternatives offer reduced costs compared to
GPT-3.5-turbo-0125, making their adoption
promising for both lowering the expense of dataset
cleansing and improving the accuracy of detecting
noisy documents.
Weighted majority voting. The availabil-
ity of high-performance yet cost-effective
models like GPT-4o presents the oppor-
tunity to use them as expert annotators,
given their superior capabilities compared
to models like GPT-3.5-turbo-0125 or
GPT-4o-mini. For example, rather than using
five GPT-3.5-turbo-0125 models, we could
employ three GPT-3.5-turbo-0125 models
alongside one GPT-4o, with GPT-4o carrying
double the weight of a GPT-3.5-turbo-0125
annotator. This approach positions GPT-4o as
an expert, where agreement between at least one
GPT-3.5-turbo-0125 model and GPT-4o
would trigger document deletion.
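The sketch below illustrates this idea; the weights and threshold are the hypothetical values from the example above, not a tested configuration.

```python
from collections import Counter

def weighted_vote(flags_per_agent, weights, threshold):
    """Each agent contributes a set of flagged document ids with a given vote weight;
    a document is removed once its accumulated weight reaches the threshold."""
    tally = Counter()
    for flags, weight in zip(flags_per_agent, weights):
        for doc_id in flags:
            tally[doc_id] += weight
    return {doc_id for doc_id, votes in tally.items() if votes >= threshold}

# Hypothetical setup from the paragraph above: three standard annotators (weight 1) and one
# expert annotator (weight 2). With a total weight of 5, a strict majority needs 3 votes,
# i.e. the expert plus at least one standard annotator.
# noisy = weighted_vote([a1, a2, a3, expert], weights=[1, 1, 1, 2], threshold=3)
```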
Supervision from superior models. Another po-
tential approach involves using more capable mod-
els to verify annotation results. In this scenario,
GPT-4o would not participate in the initial annota-
tion process but would instead verify the outcomes
produced by GPT-3.5-turbo-0125 models.
By taking the documents, summaries, and anno-
tation results as input, GPT-4o acts as an expert
reviewer overseeing the outputs of standard anno-
tators.
Cost-efficient cleansing via pre-screening. In this
paper, we applied the data cleansing strategy to
every document in the dataset. However, a more
cost-efficient approach could involve performing
the annotation procedure only on documents likely
to contain noise. Techniques such as dataset car-
tography (Swayamdipta et al., 2020) could serve as
a pre-screening method to identify cleansing candi-
dates, thereby reducing the overall cost of dataset
cleansing.
6 Conclusion
In this study, we suggest deploying cost-efficient
LLM-based data annotation to cleanse real-world
datasets by identifying and excluding irrelevant
and noisy data. We conducted a case study us-
ing this strategy to cleanse the Multi-News dataset
and proposed the improved MULTI-NEWS+ dataset.
Our case study revealed that MULTI -NEWS + pro-
vides superior data quality compared to the orig-
inal Multi-News dataset. Additionally, we have
made MULTI -NEWS + publicly available, thereby
supporting further research in the field of multi-
document summarization.
Our work paves the road to extending our data
cleansing strategy to other datasets, broadening the
scope of utilizing LLMs. This extension would
enhance the quality of existing datasets across var-
ious domains without the need to construct new
datasets from scratch. As such, our approach
not only contributes to the advancement of multi-
document summarization research but also offers a
cost-efficient solution for enhancing dataset quality.
We are committed to extending our LLM-based
method to other datasets, further solidifying its ap-
plicability to other tasks.
Limitations
We acknowledge several limitations regarding our
proposed method. First, our method is primarily
limited by the possibility of wrong classification
even with majority voting and CoT. In the future,
we may adopt various LLMs as agents and apply
weighted majority voting according to their perfor-
mance to alleviate this issue, as discussed in Sec-
tion 5.
Secondly, the nature of the Multi-News dataset
might exhibit a real-world case of automatic collec-
tion of documents from the web that are not always
relevant to the summary. In other words, the in-
clusion of noisy documents might demonstrate the
characteristics of real-world automatic crawling.
For instance, the model trained on the Multi-News
dataset may be more suitable for a real-time sys-
tem that automatically crawls data from the web
and summarizes them. However, we believe such a
possibility can be dealt with through the reciprocal
usage of our MULTI -NEWS + and previous Multi-
News dataset. For instance, one could utilize a pre-
vious Multi-News dataset when the trained model
is expected to consistently deal with noisy docu-
ments for inference and there are no pre-defined
strategies for filtering out these noisy documents
at inference time. Otherwise, for cases where the
model is expected to only handle clean documents,
it will be more beneficial to utilize our proposed
MULTI -NEWS + dataset for training the model.
Ethics Statement
As we are exploiting LLMs for classifying irrel-
evant documents rather than text generation, the
ethical concern with our method is smaller than
that of studies that utilize LLMs to generate texts.
Nonetheless, recent studies suggest that the CoT
technique may induce ethical bias in LLM (Shaikh
et al., 2023). In future work, we plan to investigate
this phenomenon’s appearance in our method.
Acknowledgements
This research was supported by Basic Science Re-
search Program through the National Research
Foundation of Korea(NRF) funded by the Ministry
of Education(NRF-2022R1C1C1008534), and In-
stitute for Information & communications Tech-
nology Planning & Evaluation (IITP) through the
Korea government (MSIT) under Grant No. 2021-
0-01341 (Artificial Intelligence Graduate School
Program, Chung-Ang University).
References
Parikshit Bansal and Amit Sharma. 2023. Large lan-
guage models as annotators: Enhancing generaliza-
tion of nlp models at minimal cost. arXiv preprint
arXiv:2306.15766.
Lukas Budach, Moritz Feuerpfeil, Nina Ihde, Andrea
Nathansen, Nele Noack, Hendrik Patzlaff, Felix Nau-
mann, and Hazar Harmouch. 2022. The effects of
data quality on machine learning performance. arXiv
preprint arXiv:2207.14529.
Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang
Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ra-
madan, and Milica Gasic. 2018. Multiwoz-a large-
scale multi-domain wizard-of-oz dataset for task-
oriented dialogue modelling. In Proceedings of
EMNLP, pages 5016–5026.
Juhwan Choi, Eunju Lee, Kyohoon Jin, and YoungBin
Kim. 2024. GPTs are multilingual annotators for
sequence generation tasks. In Findings of EACL,
pages 17–40.
Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken
Chia, Boyang Li, Shafiq Joty, and Lidong Bing. 2023.
Is GPT-3 a good data annotator? In Proceedings of
ACL, pages 11173–11195.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey,
Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman,
Akhil Mathur, Alan Schelten, Amy Yang, Angela
Fan, et al. 2024. The llama 3 herd of models. arXiv
preprint arXiv:2407.21783.
Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi,
Sanchit Agarwal, Shuyang Gao, Adarsh Kumar, Anuj
Goyal, Peter Ku, and Dilek Hakkani-Tur. 2020. Mul-
tiwoz 2.1: A consolidated multi-domain dialogue
dataset with state corrections and state tracking base-
lines. In Proceedings of LREC, pages 422–428.
Alexander Richard Fabbri, Irene Li, Tianwei She, Suyi
Li, and Dragomir Radev. 2019. Multi-news: A large-
scale multi-document summarization dataset and ab-
stractive hierarchical model. In Proceedings of ACL,
pages 1074–1084.
Max Grusky, Mor Naaman, and Yoav Artzi. 2018.
Newsroom: A dataset of 1.3 million summaries
with diverse extractive strategies. In Proceedings
of NAACL, pages 708–719.
Yanzhu Guo, Chloé Clavel, Moussa Kamal Eddine, and
Michalis Vazirgiannis. 2022. Questioning the valid-
ity of summarization datasets and improving their fac-
tual consistency. In Proceedings of EMNLP, pages
5716–5727.
Ting Han, Ximing Liu, Ryuichi Takanabu, Yixin Lian,
Chongxuan Huang, Dazhen Wan, Wei Peng, and Min-
lie Huang. 2021. Multiwoz 2.3: A multi-domain
task-oriented dialogue dataset enhanced with anno-
tation corrections and co-reference annotation. In
Proceedings of NLPCC, pages 206–218.
Xingwei He, Zhenghao Lin, Yeyun Gong, A-Long Jin,
Hang Zhang, Chen Lin, Jian Jiao, Siu Ming Yiu, Nan
Duan, and Weizhu Chen. 2024. Annollm: Making
large language models to be better crowdsourced an-
notators. In Proceedings of NAACL (Industry Track),
pages 165–190.
Albert Q Jiang, Alexandre Sablayrolles, Arthur Men-
sch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guil-
laume Lample, Lucile Saulnier, et al. 2023. Mistral
7b. arXiv preprint arXiv:2310.06825.
Chao Jiang, Mounica Maddela, Wuwei Lan, Yang
Zhong, and Wei Xu. 2020. Neural crf model for
sentence alignment in text simplification. In Proceed-
ings of ACL, pages 7943–7960.
Chao Jiang, Wei Xu, and Samuel Stevens. 2022. arx-
ivedits: Understanding the human revision process in
scientific writing. In Proceedings of EMNLP, pages
9420–9435.
Huda Khayrallah and Philipp Koehn. 2018. On the im-
pact of various types of noise on neural machine trans-
lation. In Proceedings of ACL 2018 Workshop on
Neural Machine Translation and Generation, pages
74–83.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In Proceedings
of ICLR.
Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab,
Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera
Tapo, Nishant Subramani, Artem Sokolov, Claytone
Sikasote, et al. 2022. Quality at a glance: An audit of
web-crawled multilingual datasets. Transactions of
the Association for Computational Linguistics, 10:50–
72.
Wojciech Kry´sci´nski, Nitish Shirish Keskar, Bryan Mc-
Cann, Caiming Xiong, and Richard Socher. 2019.
Neural text summarization: A critical evaluation. In
Proceedings of EMNLP, pages 540–551.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan
Ghazvininejad, Abdelrahman Mohamed, Omer Levy,
Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart:
Denoising sequence-to-sequence pre-training for nat-
ural language generation, translation, and compre-
hension. In Proceedings of ACL, pages 7871–7880.
Chin-Yew Lin. 2004. Rouge: A package for automatic
evaluation of summaries. In Proceedings of ACL
2004 Workshop Text Summarization Branches Out,
pages 74–81.
Alexandra Luccioni and Joseph Viviano. 2021. What’s
in the box? an analysis of undesirable content in the
common crawl corpus. In Proceedings of ACL, pages
182–189.
Kazuki Matsumaru, Sho Takase, and Naoaki Okazaki.
2020. Improving truthfulness of headline generation.
In Proceedings of ACL, pages 1335–1346.
Mistral. 2024. Mistral nemo. Accessed: Sep 21, 2024.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos,
Ça˘glar G˙ulçehre, and Bing Xiang. 2016. Abstractive
text summarization using sequence-to-sequence rnns
and beyond. In Proceedings of CoNLL, pages 280–
290.
Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero dos
Santos, Henghui Zhu, Dejiao Zhang, Kathleen Mck-
eown, and Bing Xiang. 2021. Entity-level factual
consistency of abstractive text summarization. In
Proceedings of EACL, pages 2727–2733.
OpenAI. 2024a. Gpt-4o mini: advancing cost-efficient
intelligence. Accessed: Sep 21, 2024.
OpenAI. 2024b. Hello gpt-4o. Accessed: Sep 21, 2024.
OpenAI. 2024c. Introducing openai o1-preview. Ac-
cessed: Sep 21, 2024.
Adam Paszke, Sam Gross, Francisco Massa, Adam
Lerer, James Bradbury, Gregory Chanan, Trevor
Killeen, Zeming Lin, Natalia Gimelshein, Luca
Antiga, et al. 2019. Pytorch: An imperative style,
high-performance deep learning library. In Proceed-
ings of NeurIPS.
Kun Qian, Ahmad Beirami, Zhouhan Lin, Ankita De,
Alborz Geramifard, Zhou Yu, and Chinnadhurai
Sankar. 2021. Annotation inconsistency and entity
bias in multiwoz. In Proceedings of SIGDIAL, pages
326–337.
Colin Raffel, Noam Shazeer, Adam Roberts, Kather-
ine Lee, Sharan Narang, Michael Matena, Yanqi
Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the
limits of transfer learning with a unified text-to-text
transformer. Journal of Machine Learning Research,
21(140):1–67.
Cyrus Rashtchian, Peter Young, Micah Hodosh, and Ju-
lia Hockenmaier. 2010. Collecting image annotations
using amazon’s mechanical turk. In Proceedings of
NAACL 2010 Workshop on Creating Speech and Lan-
guage Data with Amazon’s Mechanical Turk, pages
139–147.
Omar Shaikh, Hongxin Zhang, William Held, Michael
Bernstein, and Diyi Yang. 2023. On second thought,
let’s not think step by step! bias and toxicity in zero-
shot reasoning. In Proceedings of ACL, pages 4454–
4470.
Hwanjun Song, Minseok Kim, Dongmin Park, Yooju
Shin, and Jae-Gil Lee. 2023. Learning from noisy
labels with deep neural networks: A survey. IEEE
Transactions on Neural Networks and Learning Sys-
tems, 34(11):8135–8153.
Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie,
Yizhong Wang, Hannaneh Hajishirzi, Noah A Smith,
and Yejin Choi. 2020. Dataset cartography: Mapping
and diagnosing datasets with training dynamics. In
Proceedings of EMNLP, pages 9275–9293.
Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng, and
Sharifah Mahani Aljunied. 2022. Revisiting docred-
addressing the false negative problem in relation ex-
traction. In Proceedings of EMNLP, pages 8472–
8487.
Gemma Team, Morgane Riviere, Shreya Pathak,
Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupati-
raju, Léonard Hussenot, Thomas Mesnard, Bobak
Shahriari, Alexandre Ramé, et al. 2024. Gemma 2:
Improving open language models at a practical size.
arXiv preprint arXiv:2408.00118.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Al-
bert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023. Llama 2: Open founda-
tion and fine-tuned chat models. arXiv preprint
arXiv:2307.09288.
Ashok Urlana, Nirmal Surange, Pavan Baswani,
Priyanka Ravva, and Manish Shrivastava. 2022.
Tesum: Human-generated abstractive summarization
corpus for telugu. In Proceedings of LREC, pages
5712–5722.
Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao
Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang,
Xu Chen, Yankai Lin, et al. 2023a. A survey on large
language model based autonomous agents. arXiv
preprint arXiv:2308.11432.
Shuohang Wang, Yang Liu, Yichong Xu, Chenguang
Zhu, and Michael Zeng. 2021. Want to reduce la-
beling cost? gpt-3 can help. In Findings of EMNLP,
pages 4195–4205.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le,
Ed H Chi, Sharan Narang, Aakanksha Chowdhery,
and Denny Zhou. 2023b. Self-consistency improves
chain of thought reasoning in language models. In
Proceedings of ICLR.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Pier-
ric Cistac, Tim Rault, Rémi Louf, Morgan Funtow-
icz, et al. 2020. Transformers: State-of-the-art natu-
ral language processing. In Proceedings of EMNLP
(Demo Track), pages 38–45.
Hainan Xu and Philipp Koehn. 2017. Zipporah: a fast
and scalable data cleaning system for noisy web-
crawled parallel corpora. In Proceedings of EMNLP,
pages 2945–2950.
Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin,
Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou,
and Maosong Sun. 2019. Docred: A large-scale
document-level relation extraction dataset. In Pro-
ceedings of ACL, pages 764–777.
Fanghua Ye, Jarana Manotumruksa, and Emine Yilmaz.
2022a. Multiwoz 2.4: A multi-domain task-oriented
dialogue dataset with essential annotation corrections
to improve state tracking evaluation. In Proceedings
of SIGDIAL, pages 351–360.
Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao
Feng, Zhiyong Wu, Tao Yu, and Lingpeng Kong.
2022b. Zerogen: Efficient zero-shot learning via
dataset generation. In Proceedings of EMNLP, pages
11653–11669.
Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021.
Bartscore: Evaluating generated text as text gener-
ation. In Proceedings of NeurIPS, pages 27263–
27277.
Xiaoxue Zang, Abhinav Rastogi, Srinivas Sunkara,
Raghav Gupta, Jianguo Zhang, and Jindong Chen.
2020. Multiwoz 2.2: A dialogue dataset with addi-
tional annotation corrections and state tracking base-
lines. In Proceedings of ACL 2020 Workshop on NLP
for Conversational AI, pages 109–117.
Ruoyu Zhang, Yanzeng Li, Yongliang Ma, Ming Zhou,
and Lei Zou. 2023. LLMaAA: Making large lan-
guage models as active annotators. In Findings of
EMNLP, pages 13088–13103.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Wein-
berger, and Yoav Artzi. 2020. Bertscore: Evaluating
text generation with bert. In Proceedings of ICLR.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang,
Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen
Zhang, Junjie Zhang, Zican Dong, et al. 2023. A
survey of large language models. arXiv preprint
arXiv:2303.18223.
Figure 3: A screenshot of a webpage that is relevant to
the article in Appendix H. Multi-News includes the text
in the red box instead of the desired content in the blue
box.
A Dataset Statistics
MULTI-NEWS+ keeps the train/valid/test split of
Multi-News, which is 80%, 10%, and 10%. Table 3
displays the number of articles per each split.
Multi-News MULTI-NEWS+ % of modification
Sets Articles Sets Articles Sets Articles
Train 44,972 125,417 44,668 102,057 0.7% 18.6%
Validation 5,622 15,367 5,585 12,509 0.7% 18.6%
Test 5,622 15,505 5,584 12,703 0.7% 18.1%
Table 3: Number of sets and articles per each split.
B Construction Process of Multi-News
In this section, we briefly explain the construc-
tion process of the Multi-News dataset. Multi-
News is based on data from newser.com2 that offers
human-written summaries of news articles. Each
summary is written by professional human editors
and involves several outlinks to the original arti-
cles and relevant websites. Multi-News collected
this human-written summary and documents from
its outlinks, which behave as source documents
for summarization. Notably, the authors of Multi-
News archived every article leveraging Wayback
Machine3, a system that supports archiving of the
circumstances of a given website, to ensure the re-
producibility and support future investigation. Con-
tents of each document have been accessed and
crawled from these Wayback-archived links.
2https://newser.com
3https://web.archive.org
However, this caused problems regarding the
quality of the dataset. As shown in examples of
noisy documents in Appendix G, several noisy doc-
uments consist of a message from Wayback Ma-
chine. Moreover, the failure to crawl the content
of the webpage caused other problems. We investi-
gated the case shown in Appendix H and found
that it is a result of the crawling of the wrong
part of the website. Figure 3 clearly showcases
this phenomenon where the content in the red box
is crawled instead of the content in the blue box,
which is desired. Even though the content in the
blue box is different for each article, the system
wrongly crawled the shared red box, which resulted
in five noisy documents that share the same content
and do not contribute to the summary.
From the example above, we revealed the pres-
ence of wrongly crawled documents that af-
fect the quality of the dataset. We believe such
phenomena would be alleviated with the advance-
ment of LLM-based autonomous agents (Wang
et al., 2023a), as they could visit the website and
only crawl the text relevant to the summary. Even
though we leave this as future work, this research
direction should be pursued.
C Implementation Details
We utilized PyTorch (Paszke et al., 2019) and Hug-
gingface Transformers (Wolf et al., 2020) to im-
plement and evaluate the model. Specifically, we
employed facebook/bart-large-cnn4 and google-
t5/t5-base, with 406M and 220M parameters, re-
spectively, for BART and T5. Each model was
trained using Adam (Kingma and Ba, 2015) with
a learning rate of 2e-5 over 3 epochs. We used
a batch size of 4 and implemented a gradient
accumulation step of 4, resulting in a practical
batch size of 16. For evaluation, we utilized
bert-base-uncased and facebook/bart-large-cnn for
BERTScore and BARTScore, respectively. We re-
ported BERTScore-F1 in Table 2. ROUGE scores
were measured using the rouge-score5 library, with
the F1 score of each metric. The training was con-
ducted on a single NVIDIA A100 40GB GPU. We
provide the source code and dataset to the public.6
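A sketch of this configuration with the Seq2SeqTrainer API is shown below; note that the Trainer defaults to AdamW rather than plain Adam, data preprocessing is omitted, and the snippet is an approximation of the stated setup rather than the released code.

```python
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

def finetune(tokenized_train, tokenized_valid,
             model_name="facebook/bart-large-cnn", seed=42):
    """Fine-tune a summarizer on already tokenized MULTI-NEWS+ splits."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    args = Seq2SeqTrainingArguments(
        output_dir=f"{model_name.split('/')[-1]}-multinews-plus",
        learning_rate=2e-5,
        num_train_epochs=3,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,  # practical batch size of 16
        seed=seed,                      # repeated for three random seeds in the paper
    )
    trainer = Seq2SeqTrainer(
        model=model,
        args=args,
        train_dataset=tokenized_train,
        eval_dataset=tokenized_valid,
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.train()
    return trainer
```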
4Note that this model is already fine-tuned with the
CNN/DM dataset (Nallapati et al., 2016), a single-document
summarization dataset.
5https://pypi.org/project/rouge-score/
6https://github.com/c-juhwan/multi_news_plus
Model Mistral-7B-Instruct-v0.2
Metric BERTScore BARTScore
No Noisy Example 0.6004 -2.704
One Noisy Example 0.5976 -2.721
Two Noisy Examples 0.5954 -2.738
Model Llama-2-7b-chat-hf
Metric BERTScore BARTScore
No Noisy Example 0.6038 -2.507
One Noisy Example 0.6022 -2.521
Two Noisy Examples 0.6016 -2.539
Table 4: Performance of LLM-based summarization
of Multi-News with different amounts of noisy exam-
ples. We only report two model-based metrics as the
human-generated reference summary has a different
form compared to the LLM-generated summary.
For the human evaluation, we recruited three volunteers and individually asked them to determine
whether the decision of the model was correct or
not given the summary, original articles, and ratio-
nale of the model. We defined the model made an
incorrect decision when at least one human evalua-
tor flagged the output as an incorrect classification.
D Manual Analysis
To perform a more detailed analysis of the accuracy
of the proposed method, we randomly selected 60
instances from the validation set, which comprises
153 documents. A confusion matrix was defined
to evaluate the classification for each document as
follows:
• True Positive (TP): Relevant documents that
were correctly classified as relevant.
• False Positive (FP): Documents classified as
relevant but are not actually relevant.
• True Negative (TN): Irrelevant documents cor-
rectly classified as not relevant.
• False Negative (FN): Relevant documents in-
correctly classified as not relevant.
Upon review, we found that 127 documents were
classified as TP, 24 as TN, and 2 as FN. The anno-
tation framework identified 26 documents as irrele-
vant and noisy, which accounts for approximately
17% of the total 153 documents. This aligns closely
with the statistics in Table 3 of Appendix A, which
indicates that 18.6% of documents in the validation
set were classified as noisy.
From these results, the precision is 1.0, as there
were no FP documents, while the recall is approxi-
mately 0.984. Additionally, we observed that 17 of
the 24 TN documents could be classified as noisy
system messages, such as “This will appear next
to all of your comments; this will not appear any-
where on Newser,” as illustrated in Appendix G.
The remaining 7 documents were irrelevant to the
summary.
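As a quick check, the reported precision and recall follow directly from these counts:

```python
tp, fp, tn, fn = 127, 0, 24, 2             # counts from the manual review above
precision = tp / (tp + fp)                 # 1.0: no irrelevant document was kept as relevant
recall = tp / (tp + fn)                    # 127 / 129, approximately 0.984
flagged = (tn + fn) / (tp + fp + tn + fn)  # 26 / 153, approximately 0.17 (the ~17% above)
```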
Furthermore, we investigated the two FN cases.
In one instance, the summary included a portion
related to the misclassified document at the very
end. In the other, the misclassified document pro-
vided context for the summary but was not directly
connected to it. These cases are consistent with the
error patterns discussed in Appendix I.
It is important to note that while individual anno-
tators occasionally made incorrect classifications,
the majority voting process effectively corrected
these errors. This highlights the efficacy of our pro-
posed method in improving data annotation quality
and ensuring thorough dataset cleansing.
E Additional Experiment with Large
Language Models
This section introduces our additional experiment
that investigates the influence of noisy examples on
LLMs in a few-shot learning scheme. For this pur-
pose, we used 7B-sized, instruction-tuned Llama2
(Touvron et al., 2023) and Mistral (Jiang et al.,
2023). Specifically, we used meta-llama/Llama-2-
7b-chat-hf and mistralai/Mistral-7B-Instruct-v0.2
from Transformers (Wolf et al., 2020). In this ex-
periment, we prompted the model to summarize
the documents in the test set of Multi-News with
two-shot examples selected from the training set
of Multi-News. Additionally, we varied the
number of noisy documents in the examples given
in the prompt. Table 4 presents the experimental
results, which demonstrate that the inclusion
of noise in the examples degrades the quality of
the summaries generated by the LLM. This suggests
the significance of excluding and filtering such
noise for LLMs, which underscores the necessity
of dataset cleansing presented in this paper.
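A sketch of this setup is given below; it is our own simplification, and the prompt template, chat formatting, and decoding parameters used in the actual experiment may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def build_two_shot_prompt(demonstrations, target_docs):
    """`demonstrations` is a list of (documents, summary) pairs; the noisy conditions simply
    include one or two irrelevant documents inside those demonstrations."""
    blocks = []
    for docs, summary in demonstrations:
        joined = "\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(docs))
        blocks.append(f"{joined}\nSummary: {summary}")
    joined = "\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(target_docs))
    blocks.append(f"{joined}\nSummary:")
    return "\n\n".join(blocks)

def summarize(prompt, model_name="mistralai/Mistral-7B-Instruct-v0.2"):
    """Greedy generation of a summary continuation for the few-shot prompt."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```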
F Analysis of Multi-News
Following the previous study of TeSum (Urlana
et al., 2022), we apply filtering strategies and ana-
lyze the characteristics of Multi-News with these
strategies. Table 5 exhibits the results of this analysis.
Multi-News
Dataset Size 56,216
Source Article Size 156,289
Avg Words in Source 433.62
Avg Sentences in Source 23.42
Avg Words in Summary 228.69
Avg Sentences in Summary 11.52
Empty Summary 0
Duplicated Summary 0
Prefixes Summary 0
Empty Source 570
Duplicated Source 544
Source < 4 Sentences 45
Source < 40 Words 7
Summary < 10 Words 0
Compression < 50% 31,994
Compression > 80% 390
Abstractivity < 10 496
Abstractivity > 80 126
Avg Abstractivity 41.42
Avg Compression 46.19%
Table 5: The result of analysis of Multi-News dataset
with rule-based filtering methods (Urlana et al., 2022).
We concatenated every source document to measure
their average word and sentence length.
First, we found that 0.7% of the total source documents
can be considered noisy, as they are empty or duplicated
from other source documents within the same set. Second,
we found that previous rule-based filtering methods are
not very effective for the Multi-News dataset. For instance,
there were no sets that had empty summaries, sum-
maries that were duplicated with other summaries,
or summaries that repeated the first few sentences
of source documents. The only exception is Com-
pression < 50%, which identified more than half of
the dataset. However, it should be noted that Multi-
News is a multi-document summarization dataset,
which is different from datasets for previous stud-
ies. For instance, average compression is signifi-
cantly lower than other single-document summa-
rization datasets reported in the previous study
(Urlana et al., 2022), as multiple source documents
in Multi-News involve more information compared
to the source document of single-document sum-
marization datasets. In conclusion, this analysis
demonstrates that previous filtering strategies are
less practical for multi-document summarization
datasets such as Multi-News and highlights the
necessity of novel approaches for these datasets.
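For reference, a few of the Table 5 filters can be reproduced as sketched below. The compression definition (word-level percentage reduction from the concatenated sources to the summary) is our assumption, chosen because it roughly matches the reported averages, and may differ in detail from Urlana et al. (2022); abstractivity is omitted.

```python
def compression(source_docs, summary):
    """Word-level reduction from the concatenated sources to the summary, in percent."""
    source_words = sum(len(doc.split()) for doc in source_docs)
    summary_words = len(summary.split())
    return 100.0 * (1.0 - summary_words / max(source_words, 1))

def rule_based_flags(source_docs, summary):
    """A subset of the rule-based checks reported in Table 5."""
    flags = []
    if not summary.strip():
        flags.append("empty summary")
    if any(not doc.strip() for doc in source_docs):
        flags.append("empty source")
    if len(set(source_docs)) < len(source_docs):
        flags.append("duplicated source")
    ratio = compression(source_docs, summary)
    if ratio < 50:
        flags.append("compression < 50%")
    elif ratio > 80:
        flags.append("compression > 80%")
    return flags
```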
G Examples of Noisy Documents
This section demonstrates several examples of noisy documents observed in the Multi-News dataset not
related to the summary. Please refer to the released dataset file for details.
• Tweet with a location you can add location information to your tweets, such as your city or precise
location, from the web and via third-party applications. You always have the option to delete your
tweet location history. Learn more
• Focused crawls are collections of frequently-updated webcrawl data from narrow ( as opposed to
broad or wide ) web crawls, often focused on a single domain or subdomain.
• Dow jones reprints: this copy is for your personal, non-commercial use only. To order
presentation-ready copies for distribution to your colleagues, clients or customers, use the order
reprints tool at the bottom of any article or visit www.djreprints.com
• This crawl of online resources of the 115th us congress was performed on behalf of the united states
national archives & records
• The seed for this crawl was a list of every host in the wayback machine this crawl was run at a level 1
( urls including their embeds, plus the urls of all outbound links including their embeds ) the warc
files associated with this crawl are not currently available to the general public.
• These crawls are part of an effort to archive pages as they are created and archive the pages that they
refer to. That way, as the pages that are referenced are changed or taken from the web, a link to the
version that was live when the page was written will be preserved.then the internet archive hopes that
references to these archived pages will be put in place of a link that would be otherwise be broken, or
• Please enable cookies on your web browser in order to continue. The new european data protection
law requires us to inform you of the following before you use our website: we use cookies and other
technologies to customize your experience, perform analytics and deliver personalized advertising
on our sites, apps and newsletters and across the internet based on your interests. By clicking “i
agree” below, you consent to the use by us and our third-party partners of cookies and data gathered
from your use of our platforms. See our privacy policy and third party partners to learn more about
the use of data and your rights. You also agree to our terms of service.
• Thank you for reading. Please purchase a subscription to continue reading. A subscription is
required to continue reading. Thank you for reading 5 free articles. You can come back at the end of
your 30-day period for another 5 free articles, or you can purchase a subscription and continue to
enjoy valuable local news and information. If you are a current 7-day subscriber you are granted an
all-access pass to the website and digital newspaper replica. Please click sign up to subscribe, or
login if you are already a member. Thank you for reading 5 free articles. You can come back at the
end of your 30-day period for another 5 free articles, or you can purchase a subscription and continue
to enjoy valuable local news and information. If you are a current 7-day subscriber you are granted
an all-access pass to the website and digital newspaper replica. Please click below to get started.
• Add a location to your tweets when you tweet with a location, twitter stores that location. You can
switch location on/off before each tweet and always have the option to delete your location history.
Learn more
H Extreme Cases of Noisy Documents
In addition to examples of noisy documents, we discovered the following extreme case of noisy data in
the Multi-News dataset. In this example, five documents have the same content but offer no information
on the summary. Thus, it cannot generate a reasonable summary based on the given documents. We
witnessed 379 similar cases during the dataset cleansing process, as reported in Figure 2. While they were
excluded from training and testing, we included them in the dataset file for future investigation.
Summary
Note to tweeting politicians: watch what you post, because politwoops will remember it forever. The
transparency-minded website is safeguarding politicians’deleted tweets, enabling the rest of us to giggle
or ponder over them at our leisure, the atlantic reports. The site’s current 6-month stash includes a few
doozey deletions, including john mccain mocking vladimir putin’s tears and rep. Jeff miller posting a link
to a poll that asked, " was obama born in the united states? " a few deletions are more odd than obvious,
begging us to ask what politicians were thinking. Why, for example, did rep. Tom graves remove a tweet
about going out one night with his wife? or rep. Kathy hochul delete one about her visit to a cancer
institute? perhaps rep. Stephen fincher’s tweet comparing the bachelor to the hunger games is a more
obvious case, but the online avenues of a politician’s mind can be dimly lit indeed.
Document 1
An archive of the public statements deleted by u.s. Politicians. Explore the tweets they would prefer you
couldn’t see. If you aren’t an elected official or running for office and feel your account is being tracked
by mistake then please contact us.
Document 2
An archive of the public statements deleted by u.s. Politicians. Explore the tweets they would prefer you
couldn’t see. If you aren’t an elected official or running for office and feel your account is being tracked
by mistake then please contact us.
Document 3
An archive of the public statements deleted by u.s. Politicians. Explore the tweets they would prefer you
couldn’t see. If you aren’t an elected official or running for office and feel your account is being tracked
by mistake then please contact us.
Document 4
An archive of the public statements deleted by u.s. Politicians. Explore the tweets they would prefer you
couldn’t see. If you aren’t an elected official or running for office and feel your account is being tracked
by mistake then please contact us.
Document 5
An archive of the public statements deleted by u.s. Politicians. Explore the tweets they would prefer you
couldn’t see. If you aren’t an elected official or running for office and feel your account is being tracked
by mistake then please contact us.
I Error Analysis
Following the format of a previous study (Choi et al., 2024), we present an error analysis to provide a
more balanced view of the behavior and limitations of our proposed method. In the first example, we can
observe that while Document 1 can be regarded as irrelevant to the summary except that there is a mention
of fusion tv, Document 2 contains information about Mike Tyson and his new TV documentary series.
However, the model predicted both documents are irrelevant to the summary, primarily because the model
concentrated on the mention of the “world team tennis exhibition” from Document 2. From this insight,
we hypothesize that GPT-3.5 struggles when irrelevant and relevant information are mixed in one document.
Summary
Over his career, former heavyweight champion mike tyson recorded 50 wins and six losses. But he
recently notched another big loss in latin america — this time as a coach of a bird, reports the ap. Tyson
traveled to suriname as part of the new fusion tv documentary series outpost, and was soundly beaten
when he entered a bird in a songbird contest, a cherished local tradition. Cameras captured iron mike as
he learned about the contest, located a bird to enter — he dubbed the tiny guy " little mike " — but then
suffered a tko when a competing champion cheeped and peeped more than his bird did in the same
15-minute period. " little mike let us down, man. I was in his corner, though, " said tyson. " it was just
amazing meeting the people, meeting the culture — i had a great time. " the series, kicking off on sunday
with tyson’s episode, mixes travel adventure, history, and journalism to shine a light on global stories.
The first season focuses on latin america and includes as hosts the late show with stephen colbert
bandleader jon batiste, brain games star jason silva, and transgender model carmen carrera. Spanish
versions air on unimas. Tyson was lured onto the show by the chance to visit a country he’d never heard
of and his love of birds. The former boxer has loved pigeons and kept them since he was a kid in
brooklyn. ( sunday’s show recorded the moment tyson lovingly released his bird in suriname. ) " my wife
always says the reason i keep my pigeons is they connect me to my childhood, " tyson said. " once it’s in
your blood, it never leaves. It’s just who you are. "
Document 1
Starting in 1996, alexa internet has been donating their crawl data to the internet archive. Flowing in
every day, these data are added to the wayback machine after an embargo period. [Abbreviated duplicated
text] Outpost shows you the world like you’ve never seen it. The series lives at the intersection of
investigative journalism and adventure travel, bringing you a local perspective on faraway places and
inviting you to explore. The series premieres march 26 @ 8 and 11 pm on fusion tv. In the first episode,
transgender model carmen carrera travels to brazil, a place where rates of violence against lgbt people are
some of the highest in the world, to find out what’s happening, what life is like for young transgendered
people in brazil, and what the future might hold. Gabriel leigh takes us to el alto, bolivia, where some of
the craziest architecture on earth is taking shape as part of a surge in indigenous purchasing power.
Document 2
[Abbreviated duplicated text]file - in this monday, oct. 10, 2016, file photo, mike tyson attends a world
team tennis exhibition to benefit the elton john aids foundation in las vegas. Tyson traveled to suriname as
part of the new fusion tv documentary series "outpost " and was soundly beaten when he entered a bird in
a songbird... ( associated press ) [Abbreviated duplicated text]new york ( ap ) — over his career, former
heavyweight champion mike tyson recorded 50 wins and six losses. But he recently notched another big
loss in latin america — this time as a coach of a bird. Tyson traveled to suriname as part of the new fusion
tv documentary series " outpost " and was soundly beaten when he
This second example also showcases the characteristics of the GPT-3.5 model we used. In this example, it is
obvious that Document 2 is less relevant to the summary, which is mainly about the relationship between
Gwyneth Paltrow and Chris Martin. However, although it is not the main content of Document 1 either,
Document 1 contains a sentence that mentions the relationship between the two (“her
amicable split from husband chris martin of coldplay”). Nonetheless, the model predicted Document 1 is
also irrelevant to the summary, implying the model is strict toward documents that contribute only partially
to the summary. However, it is important to note that we categorized these instances as errors based on
rigorous human evaluation, and such errors constituted fewer than 10% of the total classifications, where
a single flag from any of the human evaluators was sufficient to deem it an error. We plan to manually
revise these errors in the released version of MULTI-NEWS+.
Summary
Gwyneth paltrow continues to paint the sunniest of pictures of her post-conscious-uncoupling life with
chris martin, but the description she gives glamour in a new interview may be the most interesting one so
far. " we’re still very much a family, even though we don’t have a romantic relationship. He’s like my
brother, " she says, explaining that the two of them and their two kids still spend quite a bit of time
together, even staying in one another’s houses and spending holidays together ( not to mention
collaborating on songs together ). " the ideal is to stay married. But if you can’t stay married, wouldn’t
the ideal be that you could still be a family and you could put aside your own stuff long enough to explore
— what is this new family and who am i in it? " paltrow muses. " and chris is a great ex-husband ’ cause
he’s a very, very willing partner in how to do that. " she adds that, though she’s " very independent, " she
does see the value in having a husband, and though she’s not quite divorced yet, she could perhaps see
herself getting married again someday. ( click to see what she has to say about her other famous exes. )
Document 1
Gwyneth paltrow is in a state of deep focus. The new goop office is under construction — "it’s like a dust
bowl, " she says with a laugh — so today she’s helming her company from the kitchen island of her los
angeles home. Fitting, considering it was at her kitchen table ( then in london ) that paltrow, 43, started
goop as a newsletter to friends nearly eight years ago. Since then, she has built goop into a global brand:
it has produced sought-after collaborations with valentino and stella mccartney; opened pop-up shops;
and brought terms like conscious uncoupling and vaginal steaming to the masses ( the first a description
of her amicable split from husband chris martin of coldplay; the second, a way to cleanse one’s uterus —
don’t try it at home ). Her presence has also unwittingly exposed a dirty little secret: as fans, we provide
actresses with wealth and fame, only to scoff when they actually lead that rich and famous lifestyle
publicly. We want these stars to be "just like us. " but paltrow’s life simply isn’t. She won’t pretend that
she shops at the dollar store for beauty products or feeds her kids, apple, 11, and moses, 9, a steady diet of
fast food;
Document 2
Gwyneth paltrow was definitely in the mood to share during her appearance on howard stern’s siriusxm
radio show on wednesday.... Especially when it came to her a-list exes. In the hour-long chat, stern of
course wanted to know all about paltrow’s ex-fiance brad pitt, who the shakespeare in love star was
engaged to when she was 24 years old. The beautiful blondes eventually called it quits in 1997 after three
years together. Getty images " i didn’t think about it at the time, but i ’ m sure it did help with my career, "
the now 42-year-old actress admits about the start of all the paparazzi attention when the two got together
on the set of seven. " i definitely fell in love with him. He was so gorgeous, and sweet – i mean, he was
brad pitt, you know? " video: a history of gwyneth’s former flames her parents, the late bruce paltrow and
actress blythe danner, also 100 percent approved. " my father was so devastated when we broke up, " she
dishes. " my father loved him like he was his son. " in hindsight, she blames the demise of their
relationship on her youth. " i was such a kid, i was 22 when we met, " she explains.
J Prompt
This section describes the prompt used for dataset cleansing. We truncated several parts to save space and
only displayed a 1-shot CoT example. Please refer to the released source code for the exact prompt.
System
You are a helpful assistant. Your job is to identify the document that is not relevant to the summary. The
relevance is determined by the mention of the key points in the summary. For example, the alert message
from the twitter, the location information, and the unrelated information are considered irrelevant. Please
respond with the reasoning process and the result as document number or ‘None’. Do not end the
response with a period. If multiple documents are irrelevant to the summary, please respond in the form of
‘Document 1|Document 3’
User
Given the summary and documents, please identify the document that is not relevant to the summary such
as advertisements. If every document is relevant to the summary, output ‘None’. Input document will be
given in next dialogue.
User
[Summary]
# The content of summary
[Document 1]
# The content of document 1
[Document 2]
# The content of document 2
[Document 3]
# The content of document 3
[Document 4]
# The content of document 4
Assistant
The summary discusses Tucker Carlson’s controversial tweet regarding Sarah Palin and the subsequent
backlash he faced from conservative women. Document 1 is related to Tucker Carlson’s controversial
tweet. Document 2 is unrelated as it contains information about adding location information to tweets and
does not contribute to the discussion about Tucker Carlson’s tweet and the reaction it received. Document
3 is a tweet message that calls Tucker Carlson a misogynist, related to the summary. Document 4 is
related to the summary as it deals with Tucker Carlson’s movement. Therefore, the irrelevant document is:
Document 2
https://aclanthology.org/2024.emnlp-main.3.pdf | Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 30–45
November 12-16, 2024 ©2024 Association for Computational Linguistics
FIZZ: Factual Inconsistency Detection by Zoom-in Summary and
Zoom-out Document
Joonho Yang1, Seunghyun Yoon2, Byeongjeong Kim1, Hwanhee Lee1†
1Department of Artificial Intelligence, Chung-Ang University, 2Adobe Research, USA
{plm3332, michael97k, hwanheelee}@cau.ac.kr, syoon@adobe.com
Abstract
Through the advent of pre-trained language
models, there have been notable advancements
in abstractive summarization systems. Simulta-
neously, a considerable number of novel methods for evaluating factual consistency in abstractive summarization systems have been developed. However, these evaluation approaches have substantial limitations, especially in refinement and interpretability. In this work, we propose a highly effective and interpretable factual inconsistency detection method, FIZZ
(Factual Inconsistency Detection by Zoom-in
Summary and Zoom-out Document) for ab-
stractive summarization systems that is based
on fine-grained atomic facts decomposition.
Moreover, we align atomic facts decomposed
from the summary with the source document
through adaptive granularity expansion. These
atomic facts represent a more fine-grained
unit of information, facilitating detailed un-
derstanding and interpretability of the sum-
mary’s factual inconsistency. Experimental re-
sults demonstrate that our proposed factual con-
sistency checking system significantly outper-
forms existing systems. We release the code at
https://github.com/plm3332/FIZZ.
1 Introduction
With the development of pre-trained language
models, abstractive summarization systems us-
ing these language models have made remarkable
progress in generating fluent and natural summa-
rizations (Chang et al., 2023). However, one of the
notable challenges these systems confront is hallucination, which causes language models to gener-
ate summaries that are factually inconsistent with
the given article (Maynez et al., 2020; Kryscin-
ski et al., 2020; Tam et al., 2023; Zhang et al.,
2023). Recognizing the significance of this is-
sue, various evaluation metrics have been intro-
duced to detect these errors, starting from tra-
†Corresponding author.
Figure 1: Comparison between sentence level evaluation and atomic facts level evaluation. The numbers in parentheses represent the maximum NLI entailment scores obtained by comparing each sentence and atomic fact with the source document on a sentence-wise basis.
ditional methods like ROUGE (Lin, 2004) and
BERTScore (Zhang et al., 2020) to a large num-
ber of advanced metrics (Goyal and Durrett, 2020,
2021; Scialom et al., 2021; Fabbri et al., 2022; La-
ban et al., 2022; Luo et al., 2023; Zha et al., 2023;
Wang et al., 2023a). In particular, many of the recent
works (Laban et al., 2022; Schuster et al., 2022;
Zha et al., 2023) adopted sentence level evaluation
using Natural Language Inference (NLI) systems
for factual consistency checking.
Although these studies have shown a certain
level of performance in summary evaluation, they
still exhibit significant deficiencies in accuracy. Ad-
ditionally, they substantially lack interpretability,
an area crucial for further development in the field
of summarization factual consistency detection. As
shown in Figure 1, sentence level evaluation often
fails to check the details of the various facts in each
sentence, resulting in lower accuracy and lower in-
terpretability. Furthermore, we find that pair-wise
single sentence level evaluation is vulnerable to
summary evaluation that requires multi-sentence
reasoning. In addition, expressions such as pro-
nouns in sentences can lead the NLI system to
make incorrect judgments in single sentence level
evaluation.
In this paper, we propose an interpretable sum-
marization factual inconsistency detection system,
FIZZ, which overcomes the issues of previous
sentence level NLI-based evaluation. As in Fig-
ure 2, FIZZ first resolves coreferences in both the
source document and the generated summary. Sub-
sequently, we decompose this coreference resolved
summary into atomic facts, which is an approach
that zooms in on the summary. This atomic fact can be considered a more fine-grained information unit embedded within the text than a sentence at a broad level. As in the atomic fact examples in Figure 1,
a single sentence from the summary can be seg-
mented into two or more distinct units of infor-
mation. This approach allows for a more detailed
analysis of textual information, which is crucial for
evaluating the factuality of generated text. Using
these atomic facts, we check the consistency of
each atomic fact against the source document using
an NLI model. As highlighted in Figure 1, factual
inconsistencies that cannot be detected at the sen-
tence level can be identified through evaluation at
this atomic fact level with higher interpretability.
Also, we propose a granularity expansion method
that can adaptively increase the number of context
sentences when verifying the consistency of each
atomic fact. Through this way of zooming out
the document, we efficiently check the consistency
of certain atomic facts that require multi-sentence
level reasoning.
Experimental results show that our proposed sys-
tem FIZZ achieves state-of-the-art performance on
the AGGREFACT (Tang et al., 2023) benchmark dataset. FIZZ exhibits high interpretability by utilizing atomic facts. Furthermore, we have tested various LLMs on the atomic fact generation task and identified the best model suited for it. Additionally, our analysis shows that flex-
ibly increasing the granularity choice of the source
document significantly enhances accuracy.
2 Related Work
Summarization Factual Consistency Evaluation
A multitude of metrics designed to evaluate sum-
marization factual consistency are currently being
refined by leveraging NLP pipelines originally de-
veloped for disparate tasks, including QA-based
evaluation, parsing-based methods, LLM-based
prompting, and NLI-based metrics.
QA-based methods involve two steps of ques-
tion generation (QG) and question answering (QA). While FEQA (Durmus et al., 2020) generates questions with the summary as the source, QUESTEVAL (Scialom et al., 2021) and QAFACTEVAL (Fabbri et al., 2022) generate questions with
both the summary and the document.
Parsing-based methods discover relationships by
employing syntactic parsing process, thereafter cal-
culating the proportion of summary-derived rela-
tions that align with those extracted from source
documents. Goodrich et al. (2019) extract relation
tuples for the evaluation. DAE (Goyal and Durrett,
2020, 2021) propose utilizing a dependency arc
between the entities and the relationship.
There is a growing trend for using LLMs like
ChatGPT (OpenAI, 2022) and GPT-4 (OpenAI,
2023) on summarization factual consistency check-
ing (Luo et al., 2023; Chen et al., 2023; Wang et al.,
2023a; Gekhman et al., 2023; Yang et al., 2024).
Initially, Luo et al. (2023) explores ChatGPT’s abil-
ity in evaluating factual consistency for text sum-
marization with zero-shot prompting. Yang et al.
(2024) extend the work by excluding irrelevant
sentences from both documents before providing
prompts to GPT-4.
SUMMAC (Laban et al., 2022) revisits NLI-based models and granularity choices for inconsistency detection in summarization. ALIGNSCORE (Zha et al., 2023) develops an alignment
system, incorporating a summarization consistency
checking metric and an NLI model, which has
been trained across a diverse array of tasks that
can be aligned with NLI. The recently proposed
method, FENICE (Scirè et al., 2024), also aligns
decomposed atomic facts with several document sentences, but it lacks interpretability on the summary side. Our proposed system, FIZZ, is also based on
NLI. However, unlike the aforementioned systems,
which mostly compare the summary at the sentence
level, FIZZ conducts comparisons at a more fine-
grained atomic fact level with high interpretability.
Atomic Facts Generation To the best of our
knowledge, van Halteren and Teufel (2003) pio-
neered the introduction of an atomic information
unit, named a factoid, within the field of summa-
rization evaluation. Building on this foundational
work, Nenkova and Passonneau (2004) proposed
the Pyramid method, a manual evaluation proto-
col for summarization that employs Summariza-
tion Content Units (SCUs), also referred to as Se-
Figure 2: Overall flow of FIZZ. The pipeline begins by applying coreference resolution to both the summary and the document. Atomic facts are then decomposed from the summary using an LLM. These atomic facts are filtered and subsequently scored against the document. The scores are refined through granularity expansion. The ultimate score is defined by choosing the minimum score.
mantic Content Units. This innovative approach
has inspired a significant body of subsequent re-
search (Harnly et al., 2005; Shapira et al., 2019;
Gao et al., 2019; Bhandari et al., 2020; Zhang and
Bansal, 2021). Liu et al. (2023) referred to these el-
ementary information units as Atomic Content Units, or Atomic Facts. However, these investigations are primarily concentrated on assessing
summarization itself via the examination of atomic
facts crafted by human annotators1.
In the scope of hallucination detection and fact
verification for text generated by models, there has
been a recent initiative to employ LLMs to cre-
ate atomic facts. FACTSCORE (Min et al., 2023)
utilizes InstructGPT (Ouyang et al., 2022) for the creation of atomic facts. Following this work, FACTOOL (Chern et al., 2023) introduces a fact veri-
fication pipeline that leverages fine-grained infor-
mation units generated by ChatGPT, referred to as
claims. In this study, we present a novel method
FIZZ that leverages atomic semantic units, hereafter called atomic facts, in the domain of summariza-
tion factual inconsistency detection.
3 FIZZ
The overall flow of our proposed system FIZZ is
presented in Figure 2. Our method first begins with
the application of a coreference resolution model to
a given (document, summary) pair, resulting in
a new pair of texts (document, summary) where
coreferences have been resolved (Section 3.1). Fol-
1We note that Zhang and Bansal (2021) generated SCUs
with semantic role labeling.
lowing this, we proceed to generate atomic facts
from the coreference-resolved summary leveraging
LLMs as a zooming-in approach for the summary
(Section 3.2). Using the generated atomic facts,
we compute the score of each atomic factwith the
NLI system (Section 3.3). Finally, we propose a
granularity expansion method, which is a way of
zooming out the documents, to compute the score
for the summaries that contain high abstractiveness
more accurately.
3.1 Coreference Resolution
To enhance the entailment recognition capabili-
ties of NLI models, FIZZ first conducts coreference resolution in both document
and summary texts. The motivation behind this
approach is driven by the inherent limitations ob-
served in NLI models when processing texts with
pronouns. Specifically, we find that NLI models
tend to struggle with recognizing entailment when
presented with premises and hypotheses that con-
tain the same content but differ in their use of pro-
nouns and explicit entity names. To address this
challenge, FIZZ employs pronoun resolution in
summaries by analyzing them on a sentence-by-
sentence basis to extract atomic facts. This strategy
not only facilitates a more granular understanding
of the summary content but also avoids the limited
context length in LLMs.
Furthermore, applying pronoun resolution to the
document text ensures that the entities are explic-
itly named, aligning the premise more closely with
the hypothesis. By resolving coreferences in both
documents and summaries, our approach aims to
bridge the gap between pronoun use and explicit
entity naming, thereby improving the performance
of NLI models in entailment tasks. This dual focus
on both document and summary texts underscores
the comprehensive nature of our strategy to bol-
ster the accuracy and reliability of NLI models in
handling a variety of linguistic expressions.
Formally, given a document D and its summary
S, we define coreference resolution as fcoref, which gives:
D′ = fcoref(D), S′ = fcoref(S)    (1)
where D′ and S′ are the coreference resolved texts of
D and S, respectively.
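The following is a minimal sketch of this step, assuming a hypothetical `predict_clusters` callable that returns coreference clusters as token-index spans (the paper obtains such clusters from an mT5-based seq2seq coreference model, see Section 4.1). The rule-based pronoun substitution shown here only illustrates the idea and is not the released implementation.

```python
from typing import Callable, List, Tuple

Span = Tuple[int, int]       # (start, end) token indices, end exclusive
Cluster = List[Span]         # all mention spans that refer to the same entity

PRONOUNS = {"he", "she", "it", "they", "him", "her", "them",
            "his", "hers", "its", "their", "theirs"}

def resolve_coreferences(tokens: List[str],
                         predict_clusters: Callable[[List[str]], List[Cluster]]) -> str:
    """Replace single-token pronoun mentions with an explicit mention of their cluster."""
    resolved = list(tokens)
    for cluster in predict_clusters(tokens):
        mentions = [" ".join(tokens[s:e]) for s, e in cluster]
        # Use the longest non-pronoun mention (e.g. "Chris Gunter") as the canonical name.
        canonical = max((m for m in mentions if m.lower() not in PRONOUNS),
                        key=len, default=None)
        if canonical is None:
            continue
        for (s, e), mention in zip(cluster, mentions):
            if e - s == 1 and mention.lower() in PRONOUNS:
                resolved[s] = canonical
    return " ".join(resolved)
```

Applying this routine to the tokenized document and summary yields D′ and S′, which feed the decomposition and scoring steps below.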
3.2 Atomic Facts Decomposition
Atomic Facts Generation As demonstrated in
Figure 1, sentence level evaluation of summaries
can often yield inaccurate results. Therefore, we
propose a method that evaluates the factuality of
summaries at a more fine-grained level, specifically
at the level of atomic facts as exemplified in Fig-
ure 2. By employing atomic facts, which are highly
detailed units of information, FIZZ considerably
enhances interpretability.
The definition of an atomic fact differs across studies, primarily due to the inherently subjective nature of this concept. We propose our own definition of an atomic fact that is designed to align with and complement the nature of NLI models. Building upon Bhandari et al. (2020), we further specify that an atomic fact is short and concise, containing no more than two or three entities, with any coreferences to person entities explicitly resolved.
We generate atomic facts from summaries at the
sentence level after resolving coreferences. This
strategy for atomic fact generation not only in-
creases the quantity of atomic facts but also substan-
tially augments the generated summary’s pool of
information. To extract atomic facts from the sum-
maries, we input prompts into the LLM that include
both a task description and a sentence-level sum-
mary, as exemplified in Table 10. This approach
systematically decomposes each sentence in the
summary into individual atomic facts, facilitating
a comprehensive extraction and representation of
information. The coreference resolved summary
S′ = {s′_j}^N_{j=1}, where s′_j represents the j-th sentence in S′ and N the total number of sentences in S′, can be decomposed into a set of atomic facts
Algorithm 1 Filtering Out Incorrect Atomic Facts
Input: An NLI model M; coreference resolved summary S′ = {s′_j}^N_{j=1}; decomposed atomic facts A′ = {a′_k}^L_{k=1}.
Initialize: set A_filtered = ∅
1: for k = 1, 2, ..., L do
2:   for j = 1, 2, ..., N do
3:     (e_{j,k}, c_{j,k}, n_{j,k}) ← M(s′_j, a′_k)
4:     if max(e_{j,k}, c_{j,k}, n_{j,k}) is e_{j,k} then
5:       Append a′_k to A_filtered.
6:     end if
7:   end for
8: end for
Output: A set of atomic facts A_filtered.
A′ = {a′_k}^L_{k=1}, with L denoting the total number of atomic facts in A′.
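As a rough illustration of this decomposition step (not the exact released prompt, which is shown in Table 10), the sketch below assumes a generic text-generation callable `llm_generate` standing in for the Orca-2 model; the task description string is a placeholder.

```python
from typing import Callable, List

TASK_DESCRIPTION = (
    "Break the following sentence into short, self-contained atomic facts, "
    "one per line, each with at most two or three entities."
)  # placeholder wording; the exact prompt is given in Table 10 / the released code

def decompose_summary(summary_sentences: List[str],
                      llm_generate: Callable[[str], str]) -> List[str]:
    """Decompose each coreference-resolved summary sentence into atomic facts."""
    atomic_facts = []
    for sentence in summary_sentences:
        prompt = f"{TASK_DESCRIPTION}\n\nSentence: {sentence}\nAtomic facts:"
        output = llm_generate(prompt)
        # One atomic fact per non-empty output line; strip list markers such as "1."
        for line in output.splitlines():
            fact = line.strip().lstrip("0123456789.-) ").strip()
            if fact:
                atomic_facts.append(fact)
    return atomic_facts
```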
Atomic Facts Filtering One significant issue
with atomic facts generated by LLMs is that these
facts are often produced not from the content of
summaries themselves but from the pretrained
knowledge embedded within the LLMs. For ex-
ample, when we decompose the sentence of the
summary "The mass, which has risen some 50ft
above sea level, measures roughly 1,000 - 1,640ft
long, and 100ft wide", the decomposed atomic facts
contain an atomic fact "The mass is a noun". Such
atomic facts may not align with either the sum-
maries or the documents and can significantly influ-
ence the scoring method described in Section 3.3.
Consequently, the exclusion of these atomic facts
becomes a necessary step in our process.
Hence, we utilize an NLI model to filter out in-
correct atomic facts. Our approach leverages the
probabilistic distribution of the NLI model, which
categorizes outcomes into three types: Entailment
(E), Contradiction (C), and Neutral (N). In the
filtering process, we set the summary S′ as the
premise, and the atomic fact A′ as the hypothesis.
We filter out atomic facts that exhibit exception-
ally low entailment scores. We outline the detailed
procedure of the atomic facts filtering process in
Algorithm 1.
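A simplified sketch of Algorithm 1 in code is given below. It loads the same off-the-shelf NLI checkpoint referenced in Appendix D; the name of the entailment label is an assumption and should be checked against the checkpoint's `id2label` mapping.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NLI_CHECKPOINT = "tals/albert-xlarge-vitaminc-mnli"   # NLI model also used by SummaC (Appendix D)
tokenizer = AutoTokenizer.from_pretrained(NLI_CHECKPOINT)
nli_model = AutoModelForSequenceClassification.from_pretrained(NLI_CHECKPOINT)
nli_model.eval()

def nli_probs(premise: str, hypothesis: str) -> dict:
    """Return {label_name: probability} for a single (premise, hypothesis) pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(nli_model(**inputs).logits, dim=-1)[0]
    return {nli_model.config.id2label[i]: p.item() for i, p in enumerate(probs)}

ENTAILMENT = "ENTAILMENT"   # assumption: adjust to the checkpoint's actual label name

def filter_atomic_facts(summary_sentences, atomic_facts):
    """Keep a fact only if some summary sentence's most probable NLI label is entailment."""
    kept = []
    for fact in atomic_facts:
        for sentence in summary_sentences:
            probs = nli_probs(sentence, fact)   # premise = summary sentence, hypothesis = fact
            if max(probs, key=probs.get) == ENTAILMENT:
                kept.append(fact)
                break
    return kept
```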
3.3 Atomic Facts Scoring
Atomic Facts Pair-Wise Scoring To compute
the score for each atomic fact of the summaries,
FIZZ first decomposes the coreference resolved
document into sentences. We split the document
D′ into M sentences and the filtered atomic facts A_filtered into L sentences, formulating D′ = {d′_i}^M_{i=1} and A_filtered = {a_k}^L_{k=1}, respectively. We use each pair (d′_i, a_k) as an input for an NLI model, positioning the generated atomic fact as the hypothesis and the sentence of the document as the
premise.
Finally, we assign scores to each atomic fact
based on the maximum entailment score obtained
through comparison with every sentence in the
document. The atomic fact entailment scores
E = {e_{i,k}}, where 1 ≤ i ≤ M and 1 ≤ k ≤ L, are aggregated into a vector T:
t_k = max_{1≤i≤M} e_{i,k},    T = {t_1, . . . , t_L}    (2)
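Continuing the sketch above, the pair-wise scoring of Equation (2) can be written as a max over per-sentence entailment probabilities; `nli_probs` and `ENTAILMENT` are the (assumed) helpers defined in the filtering sketch.

```python
def pairwise_scores(document_sentences, atomic_facts):
    """Compute T = {t_1, ..., t_L}: t_k is fact k's max entailment score over all document sentences."""
    scores = []
    for fact in atomic_facts:
        t_k = max(nli_probs(sentence, fact).get(ENTAILMENT, 0.0)
                  for sentence in document_sentences)
        scores.append(t_k)
    return scores
```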
Adaptive Granularity Expansion Summaries
generated by abstractive summarization systems
contain a high degree of abstractiveness. This ab-
stractiveness occurs when content spread across
multiple sentences in the document is condensed
into one or two sentences in the summary. To ac-
curately detect factual inconsistencies within such
summaries, it is necessary to zoom out and exam-
ine multiple sentences across the source document.
Furthermore, several studies have demonstrated
that considering multiple sentences from the docu-
ment leads to better accuracy (Laban et al., 2022;
Glover et al., 2022).
We aim to identify the scores in T where max(e_k, c_k, n_k) is not e_k. For atomic facts associated
with these scores, we further increase the granular-
ity of the document and perform computation once
again. We incrementally increase the granularity
starting from the document sentence di that con-
tributed to each identified score, limiting the granu-
larity at a maximum of three sentences (d_{i−1} + d_i, d_i + d_{i+1}, d_{i−2} + d_{i−1} + d_i, d_i + d_{i+1} + d_{i+2}, d_{i−1} + d_i + d_{i+1}). Subsequently, we re-calculate the
scores within this expanded context and replace the
original scores with the maximum value observed
among the re-calculated scores and the original.
As a result, the vector T is transformed into T∗
as certain scores are replaced by new scores. De-
tailed information on this procedure is provided in
Algorithm 2.
The final score is then determined by the
minimum score within vector T∗, enabling a highly
interpretable evaluation:
FIZZ score = min(T∗) (3)
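The sketch below combines a simplified version of this zoom-out step with the final score of Equation (3). It is a rough rendering of Algorithm 2 (shown later) rather than the released implementation, and again reuses the assumed `nli_probs`/`ENTAILMENT` helpers.

```python
def fizz_score(document_sentences, atomic_facts, max_gran=3):
    """Adaptive granularity expansion (zoom-out) followed by FIZZ score = min(T*)."""
    t_star = []
    for fact in atomic_facts:
        per_sent = [nli_probs(s, fact) for s in document_sentences]
        ent = [p.get(ENTAILMENT, 0.0) for p in per_sent]
        best = max(range(len(ent)), key=ent.__getitem__)   # index of the best-matching sentence
        score = ent[best]
        if max(per_sent[best], key=per_sent[best].get) != ENTAILMENT:
            # Zoom out: every window of up to `max_gran` consecutive sentences containing `best`.
            for size in range(2, max_gran + 1):
                for start in range(best - size + 1, best + 1):
                    if start < 0 or start + size > len(document_sentences):
                        continue
                    window = " ".join(document_sentences[start:start + size])
                    score = max(score, nli_probs(window, fact).get(ENTAILMENT, 0.0))
        t_star.append(score)
    return min(t_star)
```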
4 Experiments
4.1 Experimental Setups
In our experiments, we leverage MT5 (Bohnet et al.,
2023) for coreference resolution, which returns
Algorithm 2 Scoring with Document Granularity Expansion
Input: An NLI model M; coreference resolved document D′ = {d′_i}^M_{i=1}; decomposed atomic facts A′ = {a′_k}^L_{k=1}.
Initialize: T∗ = ∅; max granularity size gran = 3.
1: Define C(D, g) = list of subsets of D with size g.
2: Define F(C(D, g)), which returns whether C(D, g) is a consecutive list.
3: Define D(C(D, g)) = list of document sentences at the indices in C(D, g).
4: for k = 1, 2, ..., L do
5:   set E = ∅
6:   for i = 1, 2, ..., M do
7:     (e_{i,k}, c_{i,k}, n_{i,k}) ← M(d′_i, a′_k)
8:     Append e_{i,k} to E.
9:   end for
10:  m_idx = E.index(max(E))
11:  if max(e_{i,k}, c_{i,k}, n_{i,k}) is not e_{i,k} then
12:    set D_idx = [0, ..., M − 1]
13:    set D_expanded = ∅
14:    for g = 1, 2, ..., gran + 1 do
15:      if m_idx in C(D_idx, g) and F(C(D_idx, g)) then
16:        Extend C(D_idx, g) to D_expanded.
17:      end if
18:    end for
19:    set E_expanded = ∅
20:    for d_expanded ∈ D(D_expanded) do
21:      (e, c, n) ← M(d_expanded, a′_k)
22:      Append e to E_expanded.
23:    end for
24:    Append max(E_expanded) to T∗.
25:  else
26:    Append e_{i,k} to T∗.
27:  end if
28: end for
Output: vector T∗ with maximum entailment scores from each atomic fact.
with the identification of clusters referring to the
same entities. With these clusters, we further im-
plement rule-based pronoun substitution strategies
to generate coreference resolved texts. For atomic
fact decomposition, the Orca-2 model (Mitra et al.,
2023) is utilized. Additionally, this work adopts
the same off-the-shelf NLI model as implemented
in SUMMAC (see Appendix D for more details).
4.2 Benchmark Datasets
We use the AGGREFACT (Tang et al., 2023) benchmark
dataset, a comprehensive aggregation of 9 lead-
ing summary factual consistency detection datasets
currently available. AGGREFACT is stratified into three distinct splits, namely FTSOTA, EXFORMER,
and OLD, with each split containing its own valida-
tion and test sets. We standardize the evaluation as
a binary classification and choose the best threshold
from the validation set following SummaC. Finally,
we apply this threshold to the test set and report
the balanced accuracy score, considering the imbal-
Method | AGGREFACT-CNN-FTSOTA | AGGREFACT-XSUM-FTSOTA | AVG
DAE | 65.4 ±4.4 | 70.2 ±2.3 | 67.8
QuestEval | 70.2 ±3.2 | 59.5 ±2.7 | 64.9
SummaC-ZS | 64.0 ±3.8 | 56.4 ±1.2 | 60.2
SummaC-Conv | 61.0 ±3.9 | 65.0 ±2.2 | 63.0
QAFactEval | 67.8 ±4.1 | 63.9 ±2.4 | 65.9
AlignScore | 62.5 ±3.3 | 69.6 ±1.7 | 66.1
ChatGPT-ZS | 56.3 ±2.9 | 62.7 ±1.7 | 59.5
ChatGPT-CoT | 52.5 ±3.3 | 55.9 ±2.1 | 54.2
ChatGPT-DA | 53.7 ±3.5 | 54.9 ±1.9 | 54.3
ChatGPT-Star | 56.3 ±3.1 | 57.8 ±0.2 | 57.1
FactScore | 60.8 ±3.2 | 68.0 ±2.0 | 64.4
FacTool | 49.3 ±3.5 | 59.0 ±2.0 | 54.2
FIZZ (Ours) | 72.6 ±3.0 | 69.3 ±1.9 | 71.0
  w/o GE | 72.2 ±2.8 | 66.3 ±1.9 | 69.3
  w/o Filtering | 64.7 ±3.3 | 70.0 ±1.8 | 67.4
  w/o AF | 63.6 ±2.9 | 65.8 ±2.0 | 64.7
Table 1: Balanced accuracy using a single threshold with 95% confidence intervals on the AGGREFACT-FTSOTA split dataset. Highest performance is highlighted in bold, and the second highest is underlined.
ance in the dataset.
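As a small illustration of this protocol, the sketch below selects the threshold by grid search on the validation split and reports balanced accuracy on the test split (scikit-learn; the candidate grid is our assumption).

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def choose_threshold(val_scores, val_labels, candidates=None):
    """Pick the threshold with the highest balanced accuracy on the validation split."""
    candidates = np.linspace(0.0, 1.0, 101) if candidates is None else candidates
    return max(candidates, key=lambda t: balanced_accuracy_score(
        val_labels, (np.asarray(val_scores) >= t).astype(int)))

def test_balanced_accuracy(test_scores, test_labels, threshold):
    """Apply the chosen threshold to the test split and report balanced accuracy."""
    preds = (np.asarray(test_scores) >= threshold).astype(int)
    return balanced_accuracy_score(test_labels, preds)
```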
4.3 Baselines
We adopt all of the baselines of the AGGREFACT
dataset: DAE (Goyal and Durrett, 2020, 2021),
QuestEval (Scialom et al., 2021), SummaC-
ZS and SummaC-Conv (Laban et al., 2022),
QAFactEval (Fabbri et al., 2022), ChatGPT-ZS and
ChatGPT-CoT (Luo et al., 2023), ChatGPT-DA and
ChatGPT-Star (Wang et al., 2023a). Also, we re-
port the results with AlignScore (Zha et al., 2023),
which is a recently introduced system for checking
the factual consistency of summaries based on NLI.
Additionally, we incorporate FACTSCORE (Min
et al., 2023) and FACTOOL (Chern et al., 2023) in
our baselines. These methods decompose gener-
ated texts into atomic facts and then retrieve cor-
responding entries from a given knowledge base,
such as Wikipedia, to evaluate the factuality of the
generated context. For the purpose of verification,
we assume the availability of this knowledge base,
which we use as the source document to assess
summary factual consistency. In FACTSCORE, we employ a No-context LM for factual verification.
This approach operates on a QA basis, assessing
whether atomic facts are true or false with respect
to the source document. In FACTOOL, we utilize a Knowledge-based QA approach. This also fol-
lows a QA format but incorporates the CoT method,
where the LLM evaluates if claims are true or false
relative to the source document. Details of the
experiments are provided in Appendix B.
Method | CNN FTSOTA | CNN EXF | CNN OLD | XSUM FTSOTA | XSUM EXF | XSUM OLD | AVG
Baseline | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0
DAE* | 59.4 | 67.9 | 69.7 | 73.1 | - | - | 67.5
QuestEval | 63.7 | 64.3 | 65.2 | 61.6 | 60.1 | 59.7 | 62.4
SummaC-ZS | 63.3 | 76.5 | 76.3 | 56.1 | 51.4 | 53.3 | 62.8
SummaC-Cv | 70.3 | 69.8 | 78.9 | 67.0 | 64.6 | 67.5 | 69.7
QAFactEval | 61.6 | 69.1 | 80.3 | 65.9 | 59.6 | 60.5 | 66.2
AlignScore | 53.4 | 73.1 | 80.2 | 70.2 | 80.1 | 63.7 | 70.1
ChatGPT-ZS | 66.2 | 64.5 | 74.3 | 62.6 | 69.2 | 60.1 | 66.2
ChatGPT-CoT | 49.7 | 60.4 | 66.7 | 56.0 | 60.9 | 50.1 | 57.3
ChatGPT-DA | 48.0 | 63.6 | 71.0 | 53.6 | 65.6 | 61.5 | 60.6
ChatGPT-Star | 55.8 | 65.8 | 71.2 | 57.7 | 70.6 | 53.8 | 62.5
FactScore | 69.9 | 71.6 | 73.9 | 68.0 | 63.5 | 66.8 | 69.0
FacTool | 72.7 | 66.1 | 60.8 | 68.0 | 64.0 | 62.2 | 65.6
FIZZ (Ours) | 73.2 | 67.3 | 76.0 | 69.7 | 72.4 | 68.5 | 71.2
Table 2: Balanced accuracy on the AGGREFACT dataset (AGGREFACT-CNN and AGGREFACT-XSUM, each with FTSOTA, EXFORMER, and OLD splits). As in Tang et al. (2023), we omitted the results from DAE, as it was trained on the XSumFaith (Goyal and Durrett, 2021) dataset, which includes human-annotated summaries from EXFORMER and OLD.
4.4 Results
We present the performance outcomes obtained by
applying each metric to the AGGREFACT benchmark dataset in Table 2. We show the performance of three versions of our proposed metric: FIZZ, a version without granularity expansion (FIZZ w/o GE), and a version without atomic facts (FIZZ w/o AF). The complete results for AGGREFACT-CNN and AGGREFACT-XSUM are displayed in Table 2. FIZZ demonstrates the highest average performance, followed by FIZZ w/o GE and FIZZ w/o AF.
Additionally, we provide results for a single-
threshold approach on the AGGREFACT-FTSOTA split as in Tang et al. (2023). We list the best thresholds for the AGGREFACT-CNN-FTSOTA and AGGREFACT-XSUM-FTSOTA splits, with the corresponding binary classification balanced accuracy scores, in Table 1. In this setting, FIZZ achieves the highest average performance, with FIZZ w/o GE
coming in second. Both metrics perform exception-
ally well on the CNN split. Furthermore, the gran-
ularity expansion in FIZZ leads to notably higher
performance improvements on the XSUM split.
Both FACTSCORE and FACTOOL demonstrate scores that are comparable to or exceed those
of ChatGPT-based metrics. It appears that decom-
posing summaries into atomic facts and comparing
them with the source document is more effective
than performing factuality checking on the entire
summary. However, metrics based on ChatGPT in-
herently face disadvantages compared to other met-
rics, which can be tuned by adjusting thresholds;
LLM | CNN | XSUM | AVG | Avg. Token Length
Zephyr | 65.1 ±3.3 | 65.2 ±2.0 | 65.2 | 97.6
gpt-3.5-turbo | 68.7 ±3.4 | 68.7 ±2.0 | 68.7 | 95.9
gpt-3.5-turbo-instruct | 70.7 ±3.1 | 67.0 ±1.8 | 68.9 | 90.5
Mistral | 70.5 ±3.5 | 68.7 ±2.1 | 69.6 | 86.5
Orca-2 | 72.6 ±3.0 | 69.3 ±1.9 | 71.0 | 81.4
Table 3: Experimental results of FIZZ with atomic facts generated by different LLMs using the same prompt on the AGGREFACT-FTSOTA split. Avg. Token Length indicates the average number of total tokens of atomic facts per summary.
such tuning is unnecessary for ChatGPT-based met-
rics. This distinction may limit the effectiveness of
ChatGPT-based evaluations in some contexts.
4.5 Analysis
LLMs used for Atomic Facts Decomposition
To investigate the most suitable LLMs for gen-
erating atomic facts, we evaluate the generation
of atomic facts using various LLMs, including
gpt-3.5-turbo, gpt-3.5-turbo-instruct, and
other 7B models such as Zephyr (Tunstall et al.,
2023) and Mistral (Jiang et al., 2023). The results,
documented in Table 3, demonstrate that while
the atomic facts generated by gpt-3.5-turbo and
gpt-3.5-turbo-instruct generally perform bet-
ter compared to other metrics, they are still inferior
to those produced by Orca-2. The performance
drop associated with the gpt series suggests a note-
worthy observation. We explain that this discrep-
ancy is due to the length of the atomic facts. As
shown in Table 3, which includes the average token
length of atomic facts after the filtering process
per summary, there is a clear inverse relationship
between the number of tokens in an atomic fact
and its average performance. Longer atomic facts
tend to contain more entities and are less concise.
Such sentences are less suitable as hypotheses when
compared sentence-wise using NLI models. Fur-
thermore, the sensitivity of using the minimum
atomic fact scores as the final score exacerbates the
challenge, making it difficult to achieve desired out-
comes with lengthy sentences. In contrast, other 7B
Model | ROUGE-1 P | ROUGE-1 R | ROUGE-1 F1 | Avg. Number of Atomic Facts | Avg. Token Length
Human | 1.00 | 1.00 | 1.00 | 8.7 | 98.4
Orca-2 | 0.70 | 0.69 | 0.68 | 8.7 | 96.3
gpt-3.5-turbo | 0.78 | 0.84 | 0.79 | 7.8 | 105.0
gpt-3.5-turbo-instruct | 0.73 | 0.72 | 0.70 | 13.0 | 149.6
Mistral | 0.63 | 0.62 | 0.61 | 9.6 | 104.1
Zephyr | 0.51 | 0.60 | 0.52 | 10.1 | 122.0
Table 4: Experimental results of generated atomic facts on the RoSE dataset. The results with the highest human correlation are highlighted in bold.
Figure 3: The effect of granularity expansion and coreference resolution in the real AGGREFACT dataset. The entailment score of an atomic fact and document sentence with (a) only Coreference Resolution, (b) only Granularity Expansion, and (c) both.
models such as LLaMa (Touvron et al., 2023) show
limitations in adhering to instructions for atomic
fact decomposition. Details of the model usage are
provided in Appendix C.
In previous studies (Zhang and Bansal, 2021;
Chern et al., 2023; Scirè et al., 2024), the evalu-
ation of the quality and the completeness of the
LLM generated atomic facts focuses solely on con-
tent similarity (i.e., ROUGE-1) with human-written
atomic facts. However, we consider content similar-
ity evaluation to be insufficient and added two ad-
ditional factors: 1) Average token length in atomic
facts and 2) Average number of atomic facts. In
Table 3, we demonstrate the correlation between
the average token length of atomic facts and overall
performance. Building on this, we now analyze the
token length of both human-written and generated
atomic facts. Additionally, since the content sim-
ilarity metric does not take into account the num-
ber of atomic facts, we also include the average
number of atomic facts in our results. We report
the comparative analysis of the LLM generated
atomic facts against human-written atomic facts
in Table 4. The experiments were conducted
using the RoSE (Liu et al., 2023) dataset, which
includes 2,500 summaries and their corresponding
human-written atomic facts. As shown in the ex-
perimental results, gpt-3.5-turbo demonstrates
the highest capability by achieving the top score in
content similarity. However, it shows a significant
Doc. Max Granularity | AGGREFACT-CNN-FTSOTA | AGGREFACT-XSUM-FTSOTA | AVG | s/it
One Sent. | 72.2 ±2.8 | 66.3 ±1.9 | 69.25 | 2.49
Two Sent. | 71.0 ±3.2 | 69.3 ±2.0 | 70.15 | 2.53
Three Sent. | 72.6 ±3.0 | 69.3 ±1.9 | 70.95 | 2.64
Four Sent. | 72.1 ±3.1 | 70.0 ±1.8 | 71.05 | 2.80
Table 5: Size of granularity choice in granularity expansion on the AGGREFACT-FTSOTA split. s/it indicates seconds per iteration for the inference of an NLI model.
difference in the number of atomic facts and the
number of tokens in atomic facts. In contrast, Mis-
tral scores lower in content similarity but exhibits
higher human correlation in the number of atomic
facts and token lengths. The model that achieves
the highest human correlation in both the number
of atomic facts and token lengths is Orca-2, which
shows the best performance among LLMs as in
Table 3. These findings suggest that while content
similarity is important, the number of atomic facts
and token lengths are equally critical factors to con-
sider. Details on computing content similarity are
provided in Appendix G.
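A sketch of the content-similarity computation used in Table 4 is shown below, assuming the `rouge-score` package; how the generated and human-written atomic facts are concatenated before scoring is our assumption here, since the exact procedure is described in Appendix G.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

def content_similarity(generated_facts, human_facts):
    """ROUGE-1 precision/recall/F1 of the generated atomic facts against the human-written ones."""
    result = scorer.score(" ".join(human_facts), " ".join(generated_facts))["rouge1"]
    return result.precision, result.recall, result.fmeasure
```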
Sizes of Granularity Expansion As underscored
in Section 3.3, accurately evaluating the factual
consistency of abstractive summaries necessitates
an expansion of document granularity. This re-
quirement stems from the observation that a single
sentence within a summary may incorporate con-
tent from multiple sentences within the document.
Illustrative of this point, Figure 3 highlights that
segmenting conversational dialogues into discrete
sentences can lead to a loss of contextual clarity,
where the synthesis of various segmented sentences
is required for an accurate interpretation.
SUMMAC presents experimental results across
different granularity choices, categorizing docu-
ment granularity into a sentence, two sentences,
paragraph, and full document levels. However,
adjusting document granularity in such a manner
reduces interpretability and undermines result re-
liability. Our approach is to adaptively increase
granularity only for atomic facts where the entail-
ment score significantly decreases.
Table 5 presents the outcomes associated with
varying granularity sizes in adaptive granularity
expansion. The experimental findings reveal a con-
sistent improvement in average performance with
increasing granularity, particularly for summaries
derived from XSum (Narayan et al., 2018). This
significant performance boost can be attributed to
the inherently abstractive nature of XSum-based
Atomic Facts | Doc | CNN | XSUM | AVG
Original | Original | 63.2 ±2.3 | 66.4 ±1.8 | 64.8
Original | Coref Resolved | 65.7 ±3.4 | 67.8 ±2.0 | 66.7 (+1.95)
Coref Resolved | Original | 66.2 ±3.4 | 66.6 ±1.9 | 66.4
Coref Resolved | Coref Resolved | 72.2 ±2.7 | 66.3 ±1.9 | 69.2 (+2.85)
Table 6: Effect of coreference resolution of document and atomic facts on the AGGREFACT-FTSOTA splits before the process of granularity expansion.
summaries.
Despite the increase in average score for the
maximum of four sentences, the scores for CNN
summaries actually declined. Additionally, we ob-
serve that computational costs rose with increasing
granularity. Hence, we determined that the maxi-
mum of three sentences represents the best trade-
off between computational cost and performance.
Details on granularity expansion condition choice
are provided in Appendix F.
Effectiveness of Coreference ResolutionIn the
application of NLI models for comparing premises
with hypotheses, the significance of coreference
resolution cannot be overstated. As outlined in Sec-
tion 3.1, failure to resolve pronouns in the premise
significantly hinders the attainment of desired out-
comes. This point is vividly illustrated in Figure
3, where the difference between document(b) and
document(c) is merely the resolution of pronouns.
Yet, this seemingly minor modification leads to
a stark contrast in entailment scores, with docu-
ment(b) achieving a score of 0.09 compared to
document(c)’s 0.83. The discrepancy arises due
to the document (premise)’s reference to "he" not
being recognized as pertaining to "Chris Gunter",
as stated in the atomic fact (hypothesis).
Moreover, Table 6 presents more granular ex-
perimental results on the impact of coreference
resolution. We implemented experiments to eval-
uate the impact of coreference resolution on both
documents and atomic facts. Our investigation in-
cluded scenarios where coreference resolution was
applied and cases where it was not. We show that
texts with resolved coreferences, whether they be
atomic facts or documents, consistently outperform
those without resolution. Notably, there is a marked
improvement in performance on datasets based on
CNN (Hermann et al., 2015) summaries compared
to those based on XSum summaries. This is likely
due to the extractive nature of CNN-based sum-
maries, as opposed to the more abstractive sum-
maries derived from XSum. Details on coreference
Figure 4: Drawbacks of atomic fact level evaluation versus sentence level evaluation. The numbers represent the maximum NLI entailment scores obtained by comparing each sentence and atomic fact with the source document on a sentence-wise basis.
resolution usage are provided in Appendix E.
Failure Case Study We analyze the drawbacks
of decomposing summaries into atomic facts in
the summary factual consistency checking task,
through the main example in Figure 4, which com-
pares the drawbacks of analyzing atomic facts ver-
sus sentences. When comparisons are made at the
sentence level, a sentence can be correctly judged
as entailing the content of a document. Conversely,
when breaking down the content into atomic facts,
the fact "The tweet was about a rocket landing."
receives a maximum entailment score of only 0.33.
This particular atomic fact remains even after under-
going the filtering process. As a result, a summary
that is factually consistent may be erroneously clas-
sified as factually inconsistent due to the analysis
of this single atomic fact.
5 Conclusion
In this work, we propose a novel method, FIZZ, for detecting summary factual inconsistency. Our approach decomposes summaries into atomic facts, conducts a sentence-wise comparison with the document, and achieves state-of-the-art performance on the AGGREFACT benchmark dataset. Also, our proposed system has higher interpretability due to its ability to precisely identify which parts of a summary are factually inaccurate by breaking it down into atomic facts.
we analyze the necessity and significance of coref-
erence resolution and granularity expansion in the
context of summary factual consistency checking.
Limitations
Our proposed method is quite time-consuming. No-
tably, during the coreference resolution phase, we
leverage an 11B model. This process requires more
time than other factual consistency checking sys-
tems. The practical applicability of FIZZ in real-
time settings remains to be determined.
Furthermore, our research was limited to sum-
maries based on articles and news domains. We
did not verify the effectiveness of FIZZ in other
domains such as dialogue summarization (Tang
et al., 2024) or medical summarization (Wang et al.,
2023b). Additionally, our study was confined to
English-language data. The validity of FIZZ needs
to be assessed in datasets based on other languages.
Despite these limitations, we believe our method
paves a new path in the area of summarization
factual consistency detection. This work could be a
significant contribution to the field, pending further
validation across varied domains and languages.
Ethics Statement
This work uses an English document summarization dataset, AGGREFACT. This dataset is publicly
available online. We also provided adequate ci-
tations for the papers and sources we consulted in
writing our paper. Our work may have implica-
tions for society in terms of preventing the spread
of inaccurate information, as it deals with factual
consistency checking.
Acknowledgement
This research was supported by the Chung-Ang
University Research Grants in 2023. This research
was partly supported by Institute for Information &
Communications Technology Planning & Evalua-
tion (IITP) through the Korea government (MSIT)
under Grant No. 2021-0-01341 (Artificial Intelli-
gence Graduate School Program (Chung-Ang Uni-
versity)).
References
Manik Bhandari, Pranav Narayan Gour, Atabak Ash-
faq, Pengfei Liu, and Graham Neubig. 2020. Re-
evaluating evaluation in text summarization. In
Proceedings of the 2020 Conference on Empirical
Methods in Natural Language Processing (EMNLP),
pages 9347–9359, Online. Association for Computa-
tional Linguistics.
Stephen Bird, Edward Loper, and Ewan Klein. 2009.
Natural Language Processing with Python. O’Reilly
Media Inc.
Bernd Bohnet, Chris Alberti, and Michael Collins. 2023.
Coreference resolution through a seq2seq transition-
based system. Transactions of the Association for
Computational Linguistics, 11:212–226.
Samuel R. Bowman, Gabor Angeli, Christopher Potts,
and Christopher D. Manning. 2015. A large anno-
tated corpus for learning natural language inference.
In Proceedings of the 2015 Conference on Empiri-
cal Methods in Natural Language Processing, pages
632–642, Lisbon, Portugal. Association for Compu-
tational Linguistics.
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu,
Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi,
Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang,
Yi Chang, Philip S. Yu, Qiang Yang, and Xing Xie.
2023. A survey on evaluation of large language mod-
els.
Shiqi Chen, Siyang Gao, and Junxian He. 2023. Eval-
uating factual consistency of summaries with large
language models.
I Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua
Feng, Chunting Zhou, Junxian He, Graham Neubig,
Pengfei Liu, et al. 2023. Factool: Factuality detec-
tion in generative ai–a tool augmented framework
for multi-task and multi-domain scenarios. arXiv
preprint arXiv:2307.13528.
Esin Durmus, He He, and Mona Diab. 2020. FEQA: A
question answering evaluation framework for faith-
fulness assessment in abstractive summarization. In
Proceedings of the 58th Annual Meeting of the Asso-
ciation for Computational Linguistics, pages 5055–
5070, Online. Association for Computational Lin-
guistics.
Alexander Fabbri, Chien-Sheng Wu, Wenhao Liu, and
Caiming Xiong. 2022. QAFactEval: Improved QA-
based factual consistency evaluation for summariza-
tion. In Proceedings of the 2022 Conference of the
North American Chapter of the Association for Com-
putational Linguistics: Human Language Technolo-
gies, pages 2587–2601, Seattle, United States. Asso-
ciation for Computational Linguistics.
Yanjun Gao, Chen Sun, and Rebecca J. Passonneau.
2019. Automated pyramid summarization evaluation.
In Proceedings of the 23rd Conference on Computa-
tional Natural Language Learning (CoNLL), pages
404–418, Hong Kong, China. Association for Com-
putational Linguistics.
Zorik Gekhman, Jonathan Herzig, Roee Aharoni, Chen
Elkind, and Idan Szpektor. 2023. TrueTeacher:
Learning factual consistency evaluation with large
language models. In Proceedings of the 2023 Con-
ference on Empirical Methods in Natural Language
Processing, pages 2053–2070, Singapore. Associa-
tion for Computational Linguistics.
John Glover, Federico Fancellu, Vasudevan Jagan-
nathan, Matthew R. Gormley, and Thomas Schaaf.
2022. Revisiting text decomposition methods for
NLI-based factuality scoring of summaries. In Pro-
ceedings of the 2nd Workshop on Natural Language
Generation, Evaluation, and Metrics (GEM), pages
97–105, Abu Dhabi, United Arab Emirates (Hybrid).
Association for Computational Linguistics.
Ben Goodrich, Vinay Rao, Peter J. Liu, and Mohammad
Saleh. 2019. Assessing the factual accuracy of gener-
ated text. In Proceedings of the 25th ACM SIGKDD
International Conference on Knowledge Discovery
& Data Mining, KDD ’19, page 166–175, New York,
NY , USA. Association for Computing Machinery.
Tanya Goyal and Greg Durrett. 2020. Evaluating factu-
ality in generation with dependency-level entailment.
In Findings of the Association for Computational Lin-
guistics: EMNLP 2020, pages 3592–3603, Online.
Association for Computational Linguistics.
Tanya Goyal and Greg Durrett. 2021. Annotating and
modeling fine-grained factuality in summarization.
In Proceedings of the 2021 Conference of the North
American Chapter of the Association for Computa-
tional Linguistics: Human Language Technologies,
pages 1449–1462, Online. Association for Computa-
tional Linguistics.
Aaron Harnly, Ani Nenkova, Rebecca Passonneau, and
Owen Rambow. 2005. Automation of summary eval-
uation by the pyramid method. In International Con-
ference on Recent Advances in Natural Language
Processing, RANLP 2005 - Proceedings, Interna-
tional Conference Recent Advances in Natural Lan-
guage Processing, RANLP, pages 226–232. Associ-
ation for Computational Linguistics (ACL). Inter-
national Conference on Recent Advances in Natural
Language Processing, RANLP 2005 ; Conference
date: 21-09-2005 Through 23-09-2005.
Karl Moritz Hermann, Tomáš Koˇciský, Edward Grefen-
stette, Lasse Espeholt, Will Kay, Mustafa Suleyman,
and Phil Blunsom. 2015. Teaching machines to read
and comprehend.
Matthew Honnibal, Ines Montani, Sofie Van Lan-
deghem, and Adriane Boyd. 2020. spaCy: Industrial-
strength Natural Language Processing in Python.
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Men-
sch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guil-
laume Lample, Lucile Saulnier, Lélio Renard Lavaud,
Marie-Anne Lachaux, Pierre Stock, Teven Le Scao,
Thibaut Lavril, Thomas Wang, Timothée Lacroix,
and William El Sayed. 2023. Mistral 7b.
Wojciech Kryscinski, Bryan McCann, Caiming Xiong,
and Richard Socher. 2020. Evaluating the factual
consistency of abstractive text summarization. In
Proceedings of the 2020 Conference on Empirical
Methods in Natural Language Processing (EMNLP),
pages 9332–9346, Online. Association for Computa-
tional Linguistics.
Philippe Laban, Tobias Schnabel, Paul N. Bennett, and
Marti A. Hearst. 2022. SummaC: Re-visiting NLI-
based models for inconsistency detection in summa-
rization. Transactions of the Association for Compu-
tational Linguistics, 10:163–177.
Chin-Yew Lin. 2004. ROUGE: A package for auto-
matic evaluation of summaries. In Text Summariza-
tion Branches Out, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Yixin Liu, Alex Fabbri, Pengfei Liu, Yilun Zhao, Liny-
ong Nan, Ruilin Han, Simeng Han, Shafiq Joty,
Chien-Sheng Wu, Caiming Xiong, and Dragomir
Radev. 2023. Revisiting the gold standard: Ground-
ing summarization evaluation with robust human
evaluation. In Proceedings of the 61st Annual Meet-
ing of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 4140–4170, Toronto,
Canada. Association for Computational Linguistics.
Zheheng Luo, Qianqian Xie, and Sophia Ananiadou.
2023. Chatgpt as a factual inconsistency evaluator
for text summarization.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and
Ryan McDonald. 2020. On faithfulness and factu-
ality in abstractive summarization. In Proceedings
of the 58th Annual Meeting of the Association for
Computational Linguistics, pages 1906–1919, On-
line. Association for Computational Linguistics.
Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis,
Wen-tau Yih, Pang Koh, Mohit Iyyer, Luke Zettle-
moyer, and Hannaneh Hajishirzi. 2023. FActScore:
Fine-grained atomic evaluation of factual precision
in long form text generation. In Proceedings of the
2023 Conference on Empirical Methods in Natural
Language Processing, pages 12076–12100, Singa-
pore. Association for Computational Linguistics.
Arindam Mitra, Luciano Del Corro, Shweti Mahajan,
Andres Codas, Clarisse Simoes, Sahaj Agarwal, Xuxi
Chen, Anastasia Razdaibiedina, Erik Jones, Kriti Ag-
garwal, Hamid Palangi, Guoqing Zheng, Corby Ros-
set, Hamed Khanpour, and Ahmed Awadallah. 2023.
Orca 2: Teaching small language models how to rea-
son.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018. Don’t give me the details, just the summary!
topic-aware convolutional neural networks for ex-
treme summarization. In Proceedings of the 2018
Conference on Empirical Methods in Natural Lan-
guage Processing, pages 1797–1807, Brussels, Bel-
gium. Association for Computational Linguistics.
Ani Nenkova and Rebecca Passonneau. 2004. Evaluat-
ing content selection in summarization: The pyramid
method. In Proceedings of the Human Language
Technology Conference of the North American Chap-
ter of the Association for Computational Linguistics:
HLT-NAACL 2004, pages 145–152, Boston, Mas-
sachusetts, USA. Association for Computational Lin-
guistics.
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal,
Jason Weston, and Douwe Kiela. 2020. Adversarial
NLI: A new benchmark for natural language under-
standing. In Proceedings of the 58th Annual Meet-
ing of the Association for Computational Linguistics,
pages 4885–4901, Online. Association for Computa-
tional Linguistics.
OpenAI. 2022. Chatgpt blog post. https://openai.
com/blog/chatgpt.
OpenAI. 2023. Gpt-4 technical report.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida,
Carroll Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Gray, John
Schulman, Jacob Hilton, Fraser Kelton, Luke Miller,
Maddie Simens, Amanda Askell, Peter Welinder,
Paul Christiano, Jan Leike, and Ryan Lowe. 2022.
Training language models to follow instructions with
human feedback. In Advances in Neural Information
Processing Systems.
Tal Schuster, Sihao Chen, Senaka Buthpitiya, Alex
Fabrikant, and Donald Metzler. 2022. Stretching
sentence-pair NLI models to reason over long doc-
uments and clusters. In Findings of the Association
for Computational Linguistics: EMNLP 2022, pages
394–412, Abu Dhabi, United Arab Emirates. Associ-
ation for Computational Linguistics.
Tal Schuster, Adam Fisch, and Regina Barzilay. 2021.
Get your vitamin C! robust fact verification with
contrastive evidence. In Proceedings of the 2021
Conference of the North American Chapter of the
Association for Computational Linguistics: Human
Language Technologies, pages 624–643, Online. As-
sociation for Computational Linguistics.
Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier,
Benjamin Piwowarski, Jacopo Staiano, Alex Wang,
and Patrick Gallinari. 2021. QuestEval: Summariza-
tion asks for fact-based evaluation. In Proceedings of
the 2021 Conference on Empirical Methods in Natu-
ral Language Processing, pages 6594–6604, Online
and Punta Cana, Dominican Republic. Association
for Computational Linguistics.
Alessandro Scirè, Karim Ghonim, and Roberto Navigli.
2024. FENICE: Factuality evaluation of summariza-
tion based on natural language inference and claim
extraction. In Findings of the Association for Compu-
tational Linguistics ACL 2024, pages 14148–14161,
Bangkok, Thailand and virtual meeting. Association
for Computational Linguistics.
Ori Shapira, David Gabay, Yang Gao, Hadar Ronen, Ra-
makanth Pasunuru, Mohit Bansal, Yael Amsterdamer,
and Ido Dagan. 2019. Crowdsourcing lightweight
pyramids for manual summary evaluation. In Pro-
ceedings of the 2019 Conference of the North Amer-
ican Chapter of the Association for Computational
Linguistics: Human Language Technologies, Volume
1 (Long and Short Papers), pages 682–687, Min-
neapolis, Minnesota. Association for Computational
Linguistics.
Derek Tam, Anisha Mascarenhas, Shiyue Zhang, Sarah
Kwan, Mohit Bansal, and Colin Raffel. 2023. Evalu-
ating the factual consistency of large language mod-
els through news summarization. In Findings of
the Association for Computational Linguistics: ACL
2023, pages 5220–5255, Toronto, Canada. Associa-
tion for Computational Linguistics.
Liyan Tang, Tanya Goyal, Alex Fabbri, Philippe La-
ban, Jiacheng Xu, Semih Yavuz, Wojciech Kryscin-
ski, Justin Rousseau, and Greg Durrett. 2023. Un-
derstanding factual errors in summarization: Errors,
summarizers, datasets, error detectors. In Proceed-
ings of the 61st Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers),
pages 11626–11644, Toronto, Canada. Association
for Computational Linguistics.
Liyan Tang, Igor Shalyminov, Amy Wing mei Wong,
Jon Burnsky, Jake W. Vincent, Yu’an Yang, Siffi
Singh, Song Feng, Hwanjun Song, Hang Su, Lijia
Sun, Yi Zhang, Saab Mansour, and Kathleen McK-
eown. 2024. Tofueval: Evaluating hallucinations of
llms on topic-focused dialogue summarization.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, Aurelien Rodriguez, Armand Joulin, Edouard
Grave, and Guillaume Lample. 2023. Llama: Open
and efficient foundation language models. ArXiv,
abs/2302.13971.
Lewis Tunstall, Edward Beeching, Nathan Lambert,
Nazneen Rajani, Kashif Rasul, Younes Belkada,
Shengyi Huang, Leandro von Werra, Clémentine
Fourrier, Nathan Habib, Nathan Sarrazin, Omar San-
seviero, Alexander M. Rush, and Thomas Wolf. 2023.
Zephyr: Direct distillation of lm alignment.
Hans van Halteren and Simone Teufel. 2003. Examin-
ing the consensus between human summaries: initial
experiments with factoid analysis. In Proceedings of
the HLT-NAACL 03 Text Summarization Workshop,
pages 57–64.
Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui
Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu,
and Jie Zhou. 2023a. Is ChatGPT a good NLG evalu-
ator? a preliminary study. In Proceedings of the 4th
New Frontiers in Summarization Workshop, pages
1–11, Singapore. Association for Computational Lin-
guistics.
Lucy Lu Wang, Yulia Otmakhova, Jay DeYoung,
Thinh Hung Truong, Bailey Kuehl, Erin Bransom,
and Byron Wallace. 2023b. Automated metrics
for medical multi-document summarization disagree
with human evaluations. In Proceedings of the 61st
Annual Meeting of the Association for Computational
Linguistics (Volume 1: Long Papers), pages 9871–
9889, Toronto, Canada. Association for Computa-
tional Linguistics.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sen-
tence understanding through inference. In Proceed-
ings of the 2018 Conference of the North American
Chapter of the Association for Computational Lin-
guistics: Human Language Technologies, Volume
1 (Long Papers), pages 1112–1122, New Orleans,
Louisiana. Association for Computational Linguis-
tics.
Jiuding Yang, Hui Liu, Weidong Guo, Zhuwei Rao,
Yu Xu, and Di Niu. 2024. Sifid: Reassess summary
factual inconsistency detection with llm.
Yuheng Zha, Yichi Yang, Ruichen Li, and Zhiting Hu.
2023. AlignScore: Evaluating factual consistency
with a unified alignment function. In Proceedings
of the 61st Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers),
pages 11328–11348, Toronto, Canada. Association
for Computational Linguistics.
Shiyue Zhang and Mohit Bansal. 2021. Finding a bal-
anced degree of automation for summary evaluation.
In Proceedings of the 2021 Conference on Empiri-
cal Methods in Natural Language Processing, pages
6617–6632, Online and Punta Cana, Dominican Re-
public. Association for Computational Linguistics.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore: Evalu-
ating text generation with bert.
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu,
Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang,
Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei
Bi, Freda Shi, and Shuming Shi. 2023. Siren’s song
in the ai ocean: A survey on hallucination in large
language models. ArXiv, abs/2309.01219.
A Prompt for Atomic Facts Decomposition
The prompt for atomic fact decomposition is shown
in Table 10. The examples given in the prompt are
used similarly for the other LLMs.
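As an illustration of how such a prompt can be used, the sketch below feeds the instruction and an abbreviated set of in-context examples from Table 10 to a Hugging Face text-generation pipeline and parses the bulleted output into a list of atomic facts. The model choice, the decoding settings, the single in-context example (Table 10 uses eight), and the line-parsing rule are our own assumptions rather than the exact FIZZ implementation.

```python
from transformers import pipeline

# Any instruction-tuned checkpoint can stand in here; FIZZ reports results
# with Orca-2-7B, Zephyr-7B-beta, Mistral-7B-Instruct-v0.2, and ChatGPT.
generator = pipeline("text-generation", model="microsoft/Orca-2-7b")

FEW_SHOT_PROMPT = """You are a helpful assistant. Please give me a list of atomic facts of the following texts.

no charges were filed, there will be no travel ban.
- No charges were filed.
- There will be no travel ban.

{sentence}
"""


def decompose(sentence: str) -> list:
    """Generate atomic facts for one coreference-resolved summary sentence."""
    prompt = FEW_SHOT_PROMPT.format(sentence=sentence)
    output = generator(prompt, max_new_tokens=128, do_sample=False,
                       return_full_text=False)[0]["generated_text"]
    # The prompt format encourages one "- fact" bullet per line.
    return [line[2:].strip() for line in output.splitlines()
            if line.startswith("- ")]


print(decompose("rudd has pleaded guilty to threatening to kill "
                "and possession of drugs in a court."))
```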
B Details on Baselines
In this section, we present the implementation de-
tails of FACTSCORE and FACTOOL, which have
been integrated into our experimental baseline.
For decomposing atomic facts, FACTSCORE uses
the gpt-3.5-turbo-instruct model, and the QA
process is conducted using gpt-3.5-turbo, with
prompts exactly as specified in the paper2. We gave
one point for each atomic fact that is judged true and
then divided by the total number of atomic facts:
\text{score} = \frac{1}{|A|} \sum_{a \in A} \mathbb{I}\left[\, a \text{ is True} \,\right] \qquad (4)
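As a minimal sketch of how these judgments are aggregated in our reimplementation, the function below assumes the per-fact verdicts have already been collected from the judge model as strings; the parsing heuristic (checking whether the answer starts with "true") is our own simplification, not part of the original FACTSCORE code.

```python
from typing import List


def factscore_style_score(verdicts: List[str]) -> float:
    """Aggregate per-atomic-fact verdicts into a single score (Eq. 4).

    Each verdict is the raw answer returned by the judge model for one
    atomic fact; a fact earns one point only if it is judged True.
    """
    if not verdicts:
        return 0.0
    num_true = sum(1 for v in verdicts if v.strip().lower().startswith("true"))
    return num_true / len(verdicts)


# Example: 3 of 4 atomic facts are supported -> score 0.75
print(factscore_style_score(["True", "True", "False", "True"]))
```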
Similar to FACTSCORE, FACTOOL employs
gpt-3.5-turbo for both the claim extraction and
the QA tasks, again using prompts taken directly from the
paper3.
C Details on the Usage of Large
Language Models
We report the details and Hugging Face links of
the LLMs used in Section 4. We employed the Orca-2-
7B model4 for the experiments on the AGGREFACT bench-
mark dataset. For Zephyr, we used Zephyr-7B-
beta5, while for Mistral, we used Mistral-7B-
Instruct-v0.26. Additionally, we used the
gpt-3.5-turbo-0125 version of ChatGPT.
D Details on the Usage of NLI Model
In this study, we analyzed the effect of our
proposed atomic-fact-level decomposition instead
of using entire sentences. To ensure a fair compari-
son of our approach with SUMMAC, which demon-
strated the best performance using whole sentences,
we employed the same NLI model that was utilized
in SUMMAC7.
2https://github.com/shmsw25/FActScore
3https://github.com/GAIR-NLP/factool
4https://huggingface.co/microsoft/Orca-2-7b
5https://huggingface.co/HuggingFaceH4/
zephyr-7b-beta
6https://huggingface.co/mistralai/
Mistral-7B-Instruct-v0.2
7https://huggingface.co/tals/
albert-xlarge-vitaminc-mnli
conventional NLI datasets SNLI (Bowman et al.,
2015), MNLI (Williams et al., 2018), ANLI (Nie
et al., 2020), and also on VitaminC (Schuster et al.,
2021).
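For reference, the snippet below shows one way to obtain entailment, contradiction, and neutral probabilities from the same checkpoint with the Transformers library; the label order is read from the model config rather than hardcoded, since it differs between NLI checkpoints. This is an illustrative sketch, not the exact inference code used in SUMMAC or FIZZ.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "tals/albert-xlarge-vitaminc-mnli"  # same checkpoint as SUMMAC

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()


def nli_probs(premise: str, hypothesis: str) -> dict:
    """Return label -> probability for a (source sentence, atomic fact) pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    probs = torch.softmax(logits, dim=-1)
    # Label names (e.g., entailment / contradiction / neutral) come from the config.
    return {model.config.id2label[i]: probs[i].item() for i in range(len(probs))}


print(nli_probs("Rudd pleaded guilty to possession of drugs in court.",
                "Rudd has pleaded guilty."))
```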
In Table 7, we present the performance results
of various NLI models. Specifically, we have in-
cluded the results for DeBERTa-large-mnli8 and
RoBERTa-large-pyrxsum9. The average perfor-
mance scores for DeBERTa and RoBERTa are 68.7
and 68.5, respectively. Although these scores are
lower than ALBERT's, they surpass the previous
best score of 67.8 achieved by DAE on the
FtSota split.
NLI Model   AGGREFACT-CNN-FTSOTA   AGGREFACT-XSUM-FTSOTA   AVG
ALBERT      72.6±3.0               69.3±1.9                71.0
DeBERTa     67.3±3.0               70.1±1.9                68.7
RoBERTa     70.5±3.0               66.5±1.9                68.5

Table 7: Performance of different NLI models on the
AGGREFACT-FTSOTA split.
E Details on the Usage of Coreference
Resolution
We used the mT5-11B model for coreference resolution10.
Coreference resolution is the task of identifying all
expressions that refer to the same entity within a text.
While recent models perform well on this task, returning
a text with resolved coreferences is an entirely different
challenge. We tested various models, but none functioned
adequately. A significant issue was the prevalent method
of using the first word in a cluster for resolution instead
of the entity's name, which frequently resulted in improper
replacements with pronouns. To address this, we slightly
modified the code so that, where an entity name is
available, it replaces pronouns as much as possible11.
Furthermore, when an adjective or a modifier refers to an
entity, we prefixed it with the entity's name followed by a
comma. Table 11 illustrates these modifications. By
enhancing coreference resolution in this manner, we were
able to capture more comprehensive atomic facts without
omitting critical information.
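A simplified sketch of this replacement rule is shown below. It assumes coreference clusters have already been predicted as lists of character spans; the cluster format, the pronoun list, and the helper name are our own simplifications rather than the actual mT5 post-processing code.

```python
from typing import List, Tuple

PRONOUNS = {"he", "she", "it", "they", "him", "her", "them", "his", "their", "its"}


def resolve_cluster(text: str, cluster: List[Tuple[int, int]]) -> str:
    """Rewrite one coreference cluster so that mentions use the entity name."""
    mentions = sorted((s, e, text[s:e]) for s, e in cluster)
    # Pick the first non-pronoun mention as the entity name instead of
    # blindly copying the first mention in the cluster.
    name = next((m for _, _, m in mentions if m.lower() not in PRONOUNS),
                mentions[0][2])

    pieces, last = [], 0
    for start, end, mention in mentions:
        pieces.append(text[last:start])
        if mention == name:
            pieces.append(mention)               # canonical name: keep as is
        elif mention.lower() in PRONOUNS:
            pieces.append(name)                  # pronoun -> entity name
        else:
            pieces.append(f"{name}, {mention}")  # modifier -> "Name, modifier"
        last = end
    pieces.append(text[last:])
    return "".join(pieces)


text = "Emmanuel Adebayor scored twice. The 27-year-old joined spurs in 2011."
cluster = [(text.index(m), text.index(m) + len(m))
           for m in ("Emmanuel Adebayor", "The 27-year-old")]
print(resolve_cluster(text, cluster))
# -> Emmanuel Adebayor scored twice. Emmanuel Adebayor, The 27-year-old joined spurs in 2011.
```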
8https://huggingface.co/MoritzLaurer/
DeBERTa-v3-large-mnli-fever-anli-ling-wanli
9https://huggingface.co/shiyue/
roberta-large-pyrxsum
10https://huggingface.co/mt5-coref-pytorch/
link-append-xxl
11https://github.com/google-research/
google-research/tree/master/coref_mt5
Condition       AGGREFACT-CNN-FTSOTA   AGGREFACT-XSUM-FTSOTA   AVG
!(e>c & e>n)    72.6±3.0               69.3±1.9                71.0
!(e>c || e>n)   71.1±2.9               68.7±1.9                69.9

Table 8: Granularity Expansion condition choice on the
AGGREFACT-FTSOTA split.
F Details on Granularity Expansion
In Section 3.3, we set the criterion for granularity
expansion as max(e, c, n) ≠ e. This criterion was
chosen because it intuitively signifies a lack of en-
tailment. Notably, max(e, c, n) ≠ e is equivalent
to !(e > c & e > n), and thus, we also conducted
experiments using the !(e > c ∥ e > n) condition.
Table 8 presents the results of these experiments.
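The check itself reduces to a few lines. The sketch below assumes the three NLI probabilities for an atomic fact have already been computed (e.g., with the model from Appendix D); the function name and the default values are ours.

```python
def needs_granularity_expansion(e: float, c: float, n: float,
                                condition: str = "and") -> bool:
    """Decide whether an atomic fact should trigger granularity expansion.

    e, c, n: entailment, contradiction, and neutral probabilities.
    "and" implements max(e, c, n) != e, i.e., !(e > c and e > n);
    "or" implements the alternative !(e > c or e > n) from Table 8.
    """
    if condition == "and":
        return not (e > c and e > n)   # default criterion in FIZZ
    return not (e > c or e > n)


print(needs_granularity_expansion(0.2, 0.7, 0.1))   # True: entailment is not the maximum
print(needs_granularity_expansion(0.8, 0.1, 0.1))   # False: the fact is entailed
```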
G Details on Computing Content
Similarity
The content similarity (ROUGE-1) in Table 4 was
computed using the following equation:
\frac{1}{N_{\text{data}}} \sum_{N_{\text{data}}} \frac{1}{N_c} \sum_{i=1}^{N_c} \max_{j=1}^{N_g} \left( \mathrm{ROUGE}(c_i, g_j) \right) \qquad (5)
where c denotes the LLM-generated atomic facts and
g denotes the human-written atomic facts.
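A sketch of Eq. 5 using the rouge-score package is given below; the stemming setting and the assumption that both fact lists are non-empty are simplifications on our side.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)


def content_similarity(generated: list, reference: list) -> float:
    """Inner part of Eq. 5: average over generated atomic facts of their
    best ROUGE-1 F1 match against the human-written atomic facts."""
    best = [max(scorer.score(g, c)["rouge1"].fmeasure for g in reference)
            for c in generated]
    return sum(best) / len(best)


def corpus_content_similarity(samples) -> float:
    """Outer average over the dataset; samples is a list of
    (generated_facts, reference_facts) pairs."""
    scores = [content_similarity(c, g) for c, g in samples]
    return sum(scores) / len(scores)


print(content_similarity(["Rudd has pleaded guilty."],
                         ["Rudd has pleaded guilty.",
                          "Rudd has pleaded guilty to possession of drugs."]))
```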
H Other Details
In this section, we report the differences ob-
served when splitting text into sentences using
NLTK (Bird et al., 2009) and Spacy (Honnibal
et al., 2020). We utilized the NLTK sentence splitter in
FIZZ. The results of the experiments are presented
in Table 9.
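For completeness, the two splitters can be used as shown below; the spaCy model name and the NLTK download step are the standard ones and may need to be adapted to the local environment.

```python
import nltk
import spacy

nltk.download("punkt", quiet=True)       # newer NLTK versions may also need "punkt_tab"
nlp = spacy.load("en_core_web_sm")       # any English pipeline with a sentencizer works

text = ("Lisa Courtney is from Hertfordshire. She has spent most of her life "
        "collecting Pokémon memorabilia.")

nltk_sents = nltk.sent_tokenize(text)            # splitter used in FIZZ
spacy_sents = [s.text for s in nlp(text).sents]  # alternative reported in Table 9

print(nltk_sents)
print(spacy_sents)
```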
Sentence Splitter   AGGREFACT-CNN-FTSOTA   AGGREFACT-XSUM-FTSOTA   AVG
Spacy               72.5±3.4               67.0±2.0                69.8
NLTK                72.6±3.0               69.3±1.9                71.0

Table 9: Sentence splitter choice on the AGGREFACT-
FTSOTA split.
Input Prompt
You are a helpful assistant. Please give me a list of atomic facts of the following texts.
lisa courtney, of hertfordshire, has spent most of her life collecting pokemon memorabilia.
- Lisa Courtney is from Hertfordshire.
- Lisa Courtney has spent most of her life collecting Pokémon memorabilia.
prince jan zylinski said he was fed up with discrimination against poles living in britain.
- Prince Jan Zylinski made a statement.
- The statement made by Prince Jan Zylinski was about discrimination.
- The statement made by Prince Jan Zylinski was regarding Poles living in Britain.
- Prince Jan Zylinski expressed feeling fed up with this type of discrimination.
no charges were filed, there will be no travel ban.
- No charges were filed.
- There will be no travel ban.
rudd has pleaded guilty to threatening to kill and possession of drugs in a court.
- Rudd has pleaded guilty.
- Rudd has pleaded guilty to threatening to kill.
- Rudd has pleaded guilty to possession of drugs.
Lee made his acting debut in the film The Moon is the Sun’s Dream (1992), and continued to appear in small and supporting roles throughout the 1990s.
- Lee made his acting debut in The Moon is the Sun’s Dream.
- The Moon is the Sun’s Dream is a film.
- The Moon is the Sun’s Dream was released in 1992.
- After Lee’s acting debut, he appeared in small and supporting roles throughout the 1990s.
In 1963, Collins became one of the third group of astronauts selected by NASA and he served as the back-up Command Module Pilot for the Gemini 7 mission.
- Collins became an astronaut.
- Collins became one of the third group of astronauts selected by NASA in 1963.
- Collins served as the back-up Command Module Pilot for the Gemini 7 mission.
In addition to his acting roles, Bateman has written and directed two short films and is currently in development on his feature debut.
- Bateman has acting roles.
- Bateman has written two short films.
- Bateman has directed two short films.
- Bateman is currently in development on his feature debut.
Michael Collins (born October 31, 1930) is a retired American astronaut and test pilot who was the Command Module Pilot for the Apollo 11 mission in 1969.
- Michael Collins was born on October 31, 1930.
- Michael Collins is retired.
- Michael Collins is an American.
- Michael Collins was an astronaut.
- Michael Collins was a test pilot.
- Michael Collins was the Command Module Pilot for the Apollo 11 mission in 1969.
Summary Sentence
Table 10: Prompt used to generate atomic facts from the coreference-resolved summary in Section 3.2. We employed
8-shot learning to enhance the model's performance.
Original Text: The 27-year-old joined spurs from manchester city in 2011.

Others
Coref Resolved Text: Emmanuel Adebayor joined spurs from manchester city in 2011.
Atomic Fact #1: Emmanuel Adebayor joined spurs.
Atomic Fact #2: Emmanuel Adebayor joined spurs from manchester city.
Atomic Fact #3: Emmanuel Adebayor joined spurs in 2011.

Ours
Coref Resolved Text: Emmanuel Adebayor, the 27-year-old joined spurs from manchester city in 2011.
Atomic Fact #1: Emmanuel Adebayor is 27-year-old.
Atomic Fact #2: Emmanuel Adebayor joined spurs.
Atomic Fact #3: Emmanuel Adebayor joined spurs from manchester city.
Atomic Fact #4: Emmanuel Adebayor joined spurs in 2011.
Table 11: Our distinct approach to coreference resolution. The original text is coreference-resolved in two ways,
Others and Ours. We ensure that critical information is preserved while generating atomic facts by prefixing
modifiers with the names of entities during coreference resolution.