Measuring Massive Multitask Language Understanding - Pro


In artificial intelligence, Measuring Massive Multitask Language Understanding - Pro (MMLU-Pro) is a benchmark for evaluating the capabilities of large language models.[1]

Benchmark

The benchmark consists of roughly 12,000 multiple-choice questions spanning 14 academic subjects, including mathematics, physics, chemistry, law, engineering, psychology, and health. Unlike the original MMLU, each question presents up to ten answer options rather than four. It is one of the most commonly used benchmarks for comparing the capabilities of large language models.
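
The question set is distributed through the Hugging Face datasets hub (see the dataset entry in the references). As a minimal sketch of how the composition described above can be inspected, the example below loads the benchmark with the Hugging Face datasets library and tallies questions per subject; the repository identifier "TIGER-Lab/MMLU-Pro" and the "category" field name are assumptions for illustration rather than details confirmed in this article.

    from collections import Counter
    from datasets import load_dataset  # Hugging Face "datasets" library

    # Load the evaluation split of MMLU-Pro.
    # The repository id and field names are assumed; check the dataset card
    # for the exact schema before relying on them.
    mmlu_pro = load_dataset("TIGER-Lab/MMLU-Pro", split="test")
    print(f"Total questions: {len(mmlu_pro)}")  # roughly 12,000

    # Count questions per academic subject (assumed "category" field).
    per_subject = Counter(example["category"] for example in mmlu_pro)
    for subject, count in per_subject.most_common():
        print(f"{subject}: {count}")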

MMLU-Pro was released by Yubo Wang and a team of researchers in 2024[2] and was designed to be more challenging than then-existing benchmarks such as Measuring Massive Multitask Language Understanding (MMLU), on which new language models were achieving better-than-human accuracy. Because each question offers up to ten answer options, random guessing yields only about 10% accuracy, and models score substantially lower on MMLU-Pro than on the original MMLU.[2] At the time of the benchmark's release, the best-performing model, GPT-4o, achieved 72.6% accuracy.[2] The developers of MMLU-Pro estimate that human domain experts achieve around 90% accuracy.[2]
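
Scoring is plain multiple-choice accuracy: a prediction counts as correct when the selected option matches the answer key. The following sketch illustrates that computation; the option-letter convention ("A" through "J") used here is an assumption for illustration.

    def mmlu_pro_accuracy(predictions, gold_answers):
        """Fraction of questions whose predicted option letter matches the key.

        predictions and gold_answers are parallel lists of option letters
        ("A" through "J", since questions carry up to ten options).
        """
        correct = sum(pred == gold for pred, gold in zip(predictions, gold_answers))
        return correct / len(gold_answers)

    # Example: 3 of 4 questions answered correctly -> 75% accuracy.
    print(mmlu_pro_accuracy(["A", "J", "C", "B"], ["A", "J", "C", "D"]))  # 0.75
    # A uniform random guesser over ten options scores about 0.10 in expectation.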

Leaderboard[3]

Organisation | LLM | MMLU-Pro score (%)
Anthropic | Claude 3.5 Sonnet[4] | 76.12
Google | Gemini-1.5 Pro[5] | 75.8
xAI | Grok-2[6] | 75.46
Rubik's AI | Nova-Pro[7] | 74.2
OpenAI | GPT-4o | 72.55

References

  1. ^ Roose, Kevin (15 April 2024). "A.I. Has a Measurement Problem". The New York Times.
  2. ^ a b c Wang, Yubo; Ma, Xueguang; Zhang, Ge; Ni, Yuansheng; Chandra, Abhranil; Guo, Shiguang; Ren, Weiming; Arulraj, Aaran; He, Xuan; Jiang, Ziyan; Li, Tianle; Ku, Max (2024). "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark". arXiv:2406.01574 [cs.CL].
  3. ^ "MMLU-Pro Dataset". HuggingFace. 24 July 2024.
  4. ^ "Introducing Claude 3.5 Sonnet". www.anthropic.com.
  5. ^ "Gemini Pro". Google DeepMind. September 26, 2024.
  6. ^ "Grok-2 Beta Release". x.ai.
  7. ^ "Nova Release - Introducing Our Latest Suite of LLMs". Rubik's AI. rubiks.ai.

Category:Large language models