Welcome to the EvalxNLP documentation!
EvalxNLP is a Python framework for benchmarking state-of-the-art feature attribution methods for transformer-based NLP models.
The framework allows users to:
- Visualize and compare output explanations of transformer-based models using various post-hoc feature attribution (Ph-FA) methods.
- Use natural language text explanations from LLMs to interpret importance scores and evaluation metrics for different explainers.
- Evaluate the effectiveness of explainers using a variety of evaluation metrics covering faithfulness, plausibility, and complexity.
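To make the evaluation idea concrete, here is a small self-contained sketch of the kind of faithfulness check such metrics formalize: remove the tokens an explainer ranked most important and measure how much the model's score drops (often called comprehensiveness). All names here are illustrative; this is not the EvalxNLP API, and the "model" is a toy word-counting function standing in for a transformer.

```python
# Illustrative only: a toy comprehensiveness-style faithfulness check.
# None of these names come from EvalxNLP; they are made up for this sketch.

POSITIVE = {"great", "good", "excellent", "love"}

def toy_model_score(tokens):
    """Toy 'sentiment model': fraction of positive words in the input."""
    if not tokens:
        return 0.0
    return sum(t in POSITIVE for t in tokens) / len(tokens)

def comprehensiveness(tokens, importances, k=2):
    """Drop the k tokens the explainer ranked most important and
    measure how much the model score falls (higher = more faithful)."""
    ranked = sorted(range(len(tokens)), key=lambda i: importances[i], reverse=True)
    removed = set(ranked[:k])
    reduced = [t for i, t in enumerate(tokens) if i not in removed]
    return toy_model_score(tokens) - toy_model_score(reduced)

tokens = ["the", "movie", "was", "great", "and", "excellent"]
# Pretend an explainer assigned these per-token importance scores:
importances = [0.01, 0.05, 0.02, 0.9, 0.01, 0.8]

print(round(comprehensiveness(tokens, importances, k=2), 3))  # prints 0.333
```

A faithful explainer ranks exactly the tokens the model relies on, so removing them causes a large score drop; an unfaithful one yields a drop near zero.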