The Standard Performance Evaluation Corp. (SPEC)
announced the formation of the SPEC Machine
Learning Committee. The SPEC ML Committee will develop practical
methodologies for benchmarking artificial intelligence (AI) and machine
learning (ML) performance in the context of real-world platforms and
environments. The Committee will also work with other SPEC committees to update
their benchmarks for ML environments. Current members include AMD, Dell,
Inspur, Intel, NetApp, NVIDIA and Red Hat.
"IDC
expects enterprises to spend nearly $342
billion on AI in 2021, and it's essential that these companies understand
what that money will buy," said Arthur Kang, Chair of the SPEC ML Committee.
"This new committee will design and develop the vendor agnostic benchmarks that
vendors need to prove their solutions in a competitive market and enterprises
need to make an informed buying decision. I encourage anyone interested in the
future of ML processing to join the SPEC ML Committee and help shape these
invaluable benchmarks."
The
SPEC ML Committee is initially developing benchmarks to measure end-to-end
performance of a system under test (SUT) handling ML training and inference
tasks. The goal is to represent industry practice more faithfully than
existing benchmarks by covering the major stages of the end-to-end ML/DL
pipeline, from data preparation through training and inference. This
vendor-neutral, third-party benchmark will enable ML system designers to
compare their offerings against those of their competitors, and will allow ML
users, such as enterprises and scientific research institutions, to better
understand how solutions will perform in real-world environments and make
better-informed purchasing decisions.
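
As a rough illustration only, and not the SPEC ML benchmark itself, an
end-to-end measurement of this kind might time each pipeline stage separately
and report per-stage and aggregate figures. The dataset, model, and metric in
the sketch below are arbitrary placeholders chosen for brevity.

    # Illustrative sketch: times the major stages of a small end-to-end ML
    # pipeline (data prep, training, inference), in the spirit of the
    # end-to-end measurement described above. Not an official SPEC workload.
    import time

    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler


    def timed(label, fn, *args, **kwargs):
        """Run fn, print its wall-clock time, and return its result."""
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        print(f"{label}: {time.perf_counter() - start:.3f} s")
        return result


    def prepare_data():
        # Data-prep stage: load and scale features, then split train/test.
        X, y = load_digits(return_X_y=True)
        X = StandardScaler().fit_transform(X)
        return train_test_split(X, y, test_size=0.2, random_state=0)


    X_train, X_test, y_train, y_test = timed("data prep", prepare_data)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    timed("training", model.fit, X_train, y_train)      # training stage
    preds = timed("inference", model.predict, X_test)   # inference stage
    print("accuracy:", (preds == y_test).mean())

A real end-to-end benchmark would additionally pin down the dataset, model,
quality target, and reporting rules so results are comparable across systems.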
The SPEC ML Committee welcomes current SPEC
members willing to join in developing and managing the SPEC ML
benchmark, as well as new SPEC members, especially ML/DL end users and
manufacturers.