Get Started with Jury
Evaluate NLP model outputs with automated text-to-text metrics.
Getting Started
1
Read the official documentation
The Jury team maintains comprehensive docs that cover installation, configuration, and common patterns.
Open Jury Docs↗
2
Create an account
Visit the Jury website to create your account and explore pricing options.
Visit Jury↗
3
Review strengths, tradeoffs, and alternatives
Our full tool profile covers Jury's strengths, weaknesses, and pricing, and compares it with alternative evaluation tools.
View full profile→
Best For
Research teams needing a comprehensive set of evaluation metrics for NLG models
Developers integrating automated text-to-text metrics into their NLP pipelines
Data scientists who require easy integration and community support