Scale-out processors: Bridging the efficiency gap between servers and emerging cloud workloads
- Scale-out workloads (web search, social networking, business analytics) have dramatically different hardware utilization trends from conventional workloads
- Conventional and emerging server processors are unsuitable for scale-out computing
- Scale-out processors improve server efficiency and optimize datacenter total cost of ownership
March 20, 2012. Cloud computing has emerged as a dominant computing platform providing billions of users worldwide with online services. The software applications powering these services, commonly referred to as scale-out workloads and including web search, social networking, and business analytics, tend to be characterized by massive working sets, high degrees of parallelism, and real-time constraints – features that set them apart from desktop, parallel, and traditional commercial server applications.
To support the growing popularity and continued expansion of cloud services, providers must overcome the physical space and power constraints that limit the growth of data centers. Problematically, the predominant processor micro-architecture is inherently inefficient for running these demanding scale-out workloads, which results in low compute density and poor trade-offs between performance and energy. Continuing the current trends for data production and analysis will further exacerbate these inefficiencies.
Improving the cloud’s computational resources while operating within physical constraints requires optimizing server efficiency, so that server hardware meets the needs of scale-out workloads.
To this end, the team of HiPEAC member Babak Falsafi, a Professor in the School of Computer and Communication Sciences at EPFL and the director of the EcoCloud research center at EPFL (founded to innovate future energy-efficient and environmentally friendly cloud technologies), presented Clearing the Clouds: A Study of Emerging Workloads on Modern Hardware, which received the best paper award at ASPLOS 2012. ASPLOS is a flagship international computer systems venue with a high citation index.
“While we have been studying and tuning conventional server workloads (such as transaction processing and decision support) on hardware for over a decade, we really wanted to see how emerging scale-out workloads in modern datacenters behave,” says Falsafi. “To our surprise, we found that much of a modern server processor’s hardware resources, including the cores, caches, and off-chip connectivity, are overprovisioned when running scale-out workloads, leading to huge inefficiencies.”
Mike Ferdman, a senior PhD student team member, explains: “Efficiently executing scale-out workloads requires optimizing the instruction-fetch path for up to a few megabytes of program instructions, reducing the core complexity while increasing core counts, and shrinking the capacity of on-die caches to reduce area and power overheads.”
“The insights from the evaluation are now driving us to develop server processors tuned to the demands of scale-out workloads,” says Boris Grot, a postdoctoral team member. “In a paper that will appear in the flagship computer architecture conference, ISCA, this year, our team proposes the Scale-Out Processor, a processor organization that, unlike current industrial chip design trends, does away with power-hungry cores and much of the on-die cache capacity and network fabric to free area and power for a large number of simple cores built around a streamlined memory hierarchy.” Not only do these improvements lead to greater performance and efficiency at the level of each processor chip, they also enable a net reduction in the total cost of ownership in datacenters.
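The area-for-cores trade-off behind this design can be illustrated with a back-of-envelope model: under a fixed die-area budget, shrinking the on-die cache and replacing a few complex cores with many simple ones can raise aggregate throughput for highly parallel scale-out workloads. The sketch below uses purely hypothetical area and performance numbers chosen only to make the arithmetic concrete; they are not figures from the ASPLOS or ISCA papers.

```python
# Hypothetical back-of-envelope model of the trade-off described above:
# under a fixed die-area budget, trading cache and fat-core area for
# many lean cores can raise aggregate throughput on parallel workloads.
# All numbers are illustrative assumptions, not measured data.

DIE_AREA = 400.0  # mm^2 available for cores + last-level cache (assumed)

def aggregate_throughput(core_area, core_perf, llc_area):
    """Cores fill whatever die area the last-level cache leaves free."""
    num_cores = int((DIE_AREA - llc_area) // core_area)
    return num_cores, num_cores * core_perf

# Conventional design: a few complex cores, large on-die cache.
fat_cores, fat_total = aggregate_throughput(core_area=25.0,  # big out-of-order core
                                            core_perf=1.0,   # per-core throughput units
                                            llc_area=150.0)  # large last-level cache

# Scale-out-style design: many lean cores, modest cache, same die.
lean_cores, lean_total = aggregate_throughput(core_area=5.0,  # simple core
                                              core_perf=0.5,  # slower per core
                                              llc_area=50.0)  # small last-level cache

print(f"conventional: {fat_cores} cores, throughput {fat_total:.1f}")
print(f"scale-out:    {lean_cores} cores, throughput {lean_total:.1f}")
```

With these assumed numbers the lean design fits several times more cores and more than triples aggregate throughput, even though each simple core is only half as fast; this holds only when the workload parallelizes well and tolerates a smaller cache, which is exactly the property of scale-out workloads the paper identifies.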
This work was partially funded by the EuroCloud Server project, a European Commission FP7 Computing Systems project that grew out of the HiPEAC network and is led by major research centers and industrial partners such as ARM, IMEC, Nokia, and the University of Cyprus. The EuroCloud Server project coordinator Emre Özer from ARM adds: “Our goal is a ten-fold increase in overall server power efficiency through mobile processors and 3D memory stacking.” EuroCloud was also a highlight in a recent keynote by Max Lemke at the HiPEAC 2012 conference. Lemke is the Deputy Head of Unit for Embedded Systems and Control in the Directorate General Information Society and Media of the European Commission. He says: “Europe has to leverage its unique expertise in embedded and mobile computing systems to innovate in energy efficient and low-cost computing technologies.”
The Team. From left to right: Almutaz Adileh, Mike Ferdman (first author), Onur Kocberber, Stavros Volos, Djordje Jevdjic, Cansu Kaynak, Prof. Babak Falsafi. Not pictured: Mohammad Alisafaee, Adrian Popescu, Anastasia Ailamaki. Credit: EPFL.
Notes for Editors
Prof. Falsafi joined the School of Computer and Communication Sciences at EPFL in 2008. Prior to that, he was a full Professor of Electrical & Computer Engineering and Computer Science at Carnegie Mellon. He is the founding director of the EcoCloud research center pioneering future energy-efficient and environmentally-friendly cloud technologies at EPFL. His research targets technology-scalable datacenters, design for dark silicon, architectural support for software and hardware robustness, and analytic and simulation tools for computer system performance evaluation. He is a fellow of IEEE. For more information please visit EcoCloud’s website at www.ecocloud.ch.
About the EuroCloud Server project
Emerging scale-out workloads are primarily data-driven (e.g., streaming, analytics, data serving, search, and web) and as such require highly efficient and parallel access to data with simple processing components. The EuroCloud Server project is an EU FP7-funded initiative to showcase energy reductions of up to 10x in future servers using server chips that integrate ARM cores with 3D DRAM. For more information please visit www.eurocloudserver.com.
The FP7 HiPEAC network of excellence is Europe’s premier organization for coordinating research, improving mobility, and enhancing visibility in the computing system field. Created in 2004, HiPEAC today gathers over 1000 leading European academic and industrial computing system researchers from about 100 universities and 50 companies in one virtual center of excellence. HiPEAC covers all computing market segments: embedded systems, general purpose computing systems, data centers and high performance computing. For more information please visit www.hipeac.net.
Contact: Eduardo Martínez (firstname.lastname@example.org)
This item on other websites
Alpha Galileo 20/03/2012
Compute Scotland 20/03/2012
Elankovan Sundararajan 19/03/2012
eScience News 21/03/2012
I4U News 21/03/2012
ACM Tech News 23/03/2012
Communications of the ACM 23/03/2012
Cloud Cofares 24/03/2012
Tendencias Informáticas (Spanish) 27/03/2012