A Story About Cohesion and Separation: Unsupervised Metric for Log Parser Evaluation
Log parsing is a critical stage in log analysis: converting log messages into structured event templates enables automated log analysis and reduces manual inspection effort. To select the most compatible parser for a specific system, multiple evaluation metrics are commonly used for performance comparisons. However, we observed that existing log parser evaluation metrics rely heavily on labeled log data, which limits prior studies to a fixed set of datasets and hinders parser evaluation and selection in industry. Further, we discovered that the different versions of ground truth used in existing studies can lead to inconsistent performance conclusions. Motivated by these challenges, we propose a novel label-free template-level metric, PMSS (parser medoid silhouette score), to evaluate log parser performance. PMSS evaluates both parser grouping and template quality using medoid silhouette analysis and Levenshtein distance, with generally linear time complexity. To understand its relationship with label-based template-level metrics (i.e., FGA and FTA), we compared their evaluation outcomes for six log parsers on the standard corrected Loghub 2.0 dataset. Our results indicate that log parsers achieving the highest PMSS or FGA exhibit comparable performance, differing by only 2.1% on average in terms of the FGA score; the difference is 9.8% for FTA. PMSS is also significantly ($p < 10^{-8}$) and positively correlated with both FGA and FTA: the Spearman's $\rho$ correlation coefficients of PMSS-FGA and PMSS-FTA are 0.648 and 0.587, respectively. Based on these experiments, we further discuss how to interpret the conclusions and provide guidelines for conducting parser selection with our metric. Our label-free metric provides a valuable evaluation alternative when ground-truth labels are inconsistent or no labeled data is available.
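To make the two building blocks concrete, the sketch below computes a Levenshtein distance and a simplified medoid silhouette in plain Python. It is an illustrative sketch only, not the paper's PMSS implementation: the function names, the choice of parsed templates as medoids, and the per-message score $s = 1 - d_1/d_2$ (distance to the nearest medoid over distance to the second-nearest) are assumptions for illustration.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]


def medoid_silhouette(messages: list[str], medoids: list[str]) -> float:
    """Mean simplified medoid silhouette over all messages.

    `medoids` plays the role of the parser-produced templates (an
    assumption); each message is scored as 1 - d1/d2, where d1 and d2
    are the distances to its nearest and second-nearest medoid.
    Requires at least two medoids.
    """
    scores = []
    for msg in messages:
        d = sorted(levenshtein(msg, m) for m in medoids)
        d1, d2 = d[0], d[1]
        scores.append(0.0 if d2 == 0 else 1.0 - d1 / d2)
    return sum(scores) / len(scores)
```

Because each message is compared only against the (typically small) set of medoids rather than against all other messages, the cost grows linearly in the number of log lines, which mirrors the linear-time property claimed for PMSS.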