

HP Launches Compute Platforms to Support the Transition to Software-Defined Data Centers

May 12, 2015

IT departments are under heavy stress from the recent explosion of mobile data, social media, cloud services, new security technologies and Big Data, which antiquated IT systems were never designed to handle. The issue will only get worse with the impending expansion of the Internet of Things (IoT) and Artificial Intelligence (AI). This massive increase in data has created the need for new technologies and architectures, better suited to modern workloads, that can process vast quantities of data effectively and help businesses sustain efficient operations.

In recognition of this trend, as well as its status as an industry leader, tech giant HP recently announced purpose-built Compute platforms and solutions that enable businesses to leverage extensive data assets effectively. These offerings lay out a path for moving toward a software-defined data center (SDDC). According to HP, IT operations pass through three phases along the way: Organized Compute, in which the focus is on efficiency and productivity and the line between server, storage and networking begins to disappear; Predictive Compute, which emphasizes flexibility and resource utilization to enable more service-oriented, cloud-like delivery in a software-defined environment; and Autonomic Compute, in which fungible resource pools are disaggregated to the component level and IT systems become fully resilient and self-healing.

HP’s Compute is essentially a pool of processing resources divided into Scale-Out and Scale-Up Compute Portfolios. The Scale-Out Compute Portfolio comprises the HP Apollo 2000 and the Apollo 4000 server family, addressing scale-out workloads that require performance scalability, density optimization, storage simplicity and configuration flexibility. These solutions are supported by HP’s Haven Big Data platform and can be combined with HP Moonshot to form the HP Big Data Reference Infrastructure, an Apache Hadoop infrastructure design that relies on Compute to deliver a differentiated solution.
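
To make the scale-out pattern concrete, the sketch below shows a classic word-count job written for Hadoop Streaming, the kind of embarrassingly parallel workload an Apache Hadoop cluster of this sort is built to run. It is a generic illustration rather than part of HP's reference design, and the jar and HDFS paths in the submission command are placeholders.

    #!/usr/bin/env python
    # mapper.py -- reads raw text on stdin and emits one "word<TAB>1" pair per word.
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print("%s\t%d" % (word, 1))

    #!/usr/bin/env python
    # reducer.py -- sums the counts for each word; Hadoop sorts mapper output by key,
    # so all lines for a given word arrive consecutively.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print("%s\t%d" % (current_word, current_count))
            current_word, current_count = word, int(count)
    if current_word is not None:
        print("%s\t%d" % (current_word, current_count))

    # Submitted with Hadoop Streaming (jar and HDFS paths are placeholders):
    # hadoop jar hadoop-streaming.jar \
    #     -input /data/raw -output /data/wordcounts \
    #     -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py

Because each mapper and reducer runs independently on a slice of the data, adding more dense servers to the cluster raises throughput almost linearly, which is the scaling behavior the Apollo family is aimed at.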

The Scale-Up Compute Portfolio addresses data-intensive scale-up workloads such as in-memory and structured databases with the HP Integrity Superdome X as well as the HP ProLiant DL580, DL560 and BL660c Gen9 servers. These scale-up offerings deliver the high performance, availability, reliability and disaster tolerance that demanding workload environments such as databases, virtualization, consolidation, simulation and public cloud require. This group of servers is well suited to Microsoft SQL Server 2014 deployments and in-memory computing applications that demand additional flexibility.
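
For the scale-up side, a minimal sketch of the kind of workload described here might look like the following, which uses the pyodbc library to run an aggregate query against a SQL Server 2014 database. The driver string, server name, credentials and table schema are hypothetical placeholders and are not part of HP's announcement.

    # Minimal sketch of a scale-up database workload.
    # Assumes the pyodbc package and a SQL Server ODBC driver are installed;
    # all connection details and the dbo.Orders table are hypothetical.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server Native Client 11.0};"  # driver name is an assumption
        "SERVER=superdome-db01;DATABASE=SalesDW;"
        "UID=analyst;PWD=example"
    )
    cursor = conn.cursor()

    # An aggregate query of the kind SQL Server 2014's in-memory and
    # columnstore features are meant to accelerate.
    cursor.execute(
        "SELECT region, SUM(amount) AS total "
        "FROM dbo.Orders GROUP BY region ORDER BY total DESC"
    )
    for region, total in cursor.fetchall():
        print(region, total)

    conn.close()

Unlike the scale-out case, this kind of workload benefits from a single large system with abundant memory and strong reliability guarantees, which is why it maps to the Superdome X and high-end ProLiant servers rather than to a cluster of smaller nodes.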

“The ever-increasing volume, velocity and variety of data have stretched traditional server technologies beyond their limits — what is needed is a set of purpose-built compute platforms specifically designed to extract the maximum value from the data,” said Alain Andreoli, Senior Vice President and General Manager of the Servers Business Unit at HP. “HP is innovating the designs of its broad Compute portfolio to align it to specific workload needs in order to help customers deliver the most impactful business outcomes by using data in ways that were impossible in the past.”

The HP Apollo 2000, HP Integrity Superdome X and HP ProLiant DL580 Gen9 are currently available worldwide, whereas the other servers are expected to be launched early next month. HP’s vision for the future of IT through its Compute portfolios is enhanced by comprehensive support, consulting and financial services as well as a robust partner ecosystem.




Edited by Dominick Sorrentino
