Related News – HPCwire
The battle over science funding – how much and for what kinds of science – under the Trump administration is heating up. Today, the Information Technology and Innovation Foundation (ITIF) labeled a potential Trump plan to slash funding along the lines of a Heritage Foundation blueprint as harmful to U.S. innovation and competitiveness. Last week, Rep. Lamar Smith (R-TX), chairman of the House Science, Space and Technology Committee, blasted NSF for past “frivolous and wasteful” projects, while still affirming NSF’s role as the bedrock of taxpayer-funded basic science.
The emerging tug of war over science funding directions isn’t likely to diminish soon as competing forces struggle to influence the new administration’s policy. NSF invests about $7 billion of public funds each year on research projects and related activities. ITIF’s just released report (Bad Blueprint: Why Trump Should Ignore the Heritage Plan to Gut Federal Investment) takes direct aim at the Heritage Foundation plans (Blueprint for Balance) said to underpin Trump thinking on science and technology funding.
ITIF: “There is no doubt that many federal programs, including some that support business, could be cut, or even eliminated, with little or no negative effect on economic growth. But that doesn’t mean that most could. In fact, many programs are intended to compensate for serious market failures and effectively advance one or more of three key national goals: competitiveness, productivity, and innovation. Rather than being cut or eliminated, such programs should be improved and expanded.
“Such nuance and pragmatism, however, are not Heritage’s strengths; doctrinaire ideology is. Heritage’s analysis to support its efforts to cut $10 trillion from the deficit over 10 years is marked by profound misunderstandings about markets, technology, and the global economy. Markets sometimes work wonders, but they sometimes fail. They fail to provide sufficient incentives for innovation and knowledge creation. In an environment marked by financial market short-termism, markets fail to foster long-term investments in people and capabilities. And even if markets acting alone did maximize economic welfare, that doesn’t mean that maximization will occur on U.S. shores.”
Rep. Smith’s commentary (Fund science for a new millennium in America: Lamar Smith), presumably more reflective of the Trump position, was published in USA Today and posted on the committee web site; it reads less as an attack on funding levels than as a clear directive to NSF to focus on applied research directly connected to U.S. competitiveness – though defining the latter has always been a matter of debate.
Excerpt: “Despite the U.S. government spending more on research and development than any other country, American pre-eminence in several fields is slipping. Other countries are focusing investments on new technologies, advanced scientific and manufacturing facilities, and harnessing their workforces to go into STEM fields. For example, last year China launched the fastest supercomputer in the world, five times faster than any supercomputer in the United States.
“Business as usual is not the answer. NSF must be as nimble and innovative as the speed of technology, and as open and transparent as information in the digital age. NSF Director France Cordova has publicly committed NSF to accountability and transparency and restoring its original mission to support science in the national interest…When NSF is only able to fund one out of every five proposals submitted by scientists, why did it award $225,000 to study animal photos in National Geographic or $920,000 to study textile-making in Iceland during the Viking era? Why did studying tourism in northern Norway warrant $275,000 of limited federal funds?
“These grants and hundreds like them might be worthwhile projects, but how are they in the national interest and how can they justify taxpayer dollars? The federal government should not fund this type of research at the expense of other potentially ground-breaking science.”
Link to Heritage Foundation report: http://www.heritage.org/budget-and-spending/report/blueprint-balance-federal-budget-2017
Link to Rep. Smith commentary: https://science.house.gov/news/in-the-news/fund-science-new-millennium-america-lamar-smith
The post Battle Brews over Trump Intentions for Funding Science appeared first on HPCwire.
As part of an ongoing effort to differentiate its public cloud services, Google made good this week on its intention to bring custom Xeon Skylake chips from Intel Corp. to its Google Compute Engine. The cloud provider is the first to offer the next-gen Xeons, and is getting access ahead of traditional server-makers like Dell and HPE.
Google announced plans to incorporate the next-generation Intel server chips into its public cloud last November. On Friday (Feb. 24), Urs Hölzle, Google’s senior vice president for cloud infrastructure, said the Skylake upgrade would deliver a significant performance boost for demanding applications and workloads ranging from genomic research to machine learning.
The cloud vendor noted that Skylake includes Intel Advanced Vector Extensions (AVX-512) that target workloads such as data analytics, engineering simulations and scientific modeling. When compared to previous generations, the Skylake extensions are touted as doubling floating-point performance “for the heaviest calculations,” Hölzle noted in a blog post.
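To see why wider vectors roughly double floating-point throughput, consider a simple peak-FLOPS estimate. The core count, clock rate and FMA-unit count below are hypothetical placeholders for illustration, not Google's actual Skylake configuration:

```python
# Back-of-envelope peak double-precision FLOPS for a vectorized CPU.
# An AVX-512 register holds 8 doubles (512 / 64) versus 4 for AVX2,
# and each fused multiply-add (FMA) counts as 2 FLOPs per element.
def peak_gflops(cores, ghz, vector_bits, fma_units=2):
    doubles_per_vector = vector_bits // 64
    return cores * ghz * doubles_per_vector * 2 * fma_units

# Hypothetical 16-core, 2.0 GHz part with two FMA units per core.
avx2 = peak_gflops(cores=16, ghz=2.0, vector_bits=256)
avx512 = peak_gflops(cores=16, ghz=2.0, vector_bits=512)
print(avx2, avx512)  # → 512.0 1024.0 – the wider vectors double throughput
```

The doubling comes purely from vector width; real chips also vary clock rates under AVX-512 load, so measured gains are workload dependent.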
Internal testing showed improved application performance by as much as 30 percent compared with earlier Xeon generations. The addition of Skylake chips also gives the cloud vendor a temporary performance advantage over its main cloud rivals, Amazon Web Services and Microsoft Azure, as well as server makers. (Intel also collaborates with AWS.)
Google and Intel launched a cloud alliance last fall designed to boost enterprise cloud adoption. At the time, company executives noted that the processor’s AVX-512 extensions could help optimize enterprise and HPC workloads.
“Google and Intel have had a long standing engineering partnership working on datacenter innovation,” Diane Bryant, general manager of Intel’s datacenter group, added in a statement.
“This technology delivers significant enhancements for compute-intensive workloads” such as data analytics.
Hölzle added that Skylake was tweaked for Google Compute Engine’s family of virtual machines, ranging from standard through “custom machine types” to boost the performance of compute instances for enterprise workloads.
Google said Skylake processors are available in five public cloud regions across the United States, Western Europe and eastern Asia-Pacific.
A version of this article also appears on EnterpriseTech.
BARCELONA, Spain, Feb. 27 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, has announced that its ConnectX-5 100Gb/s Ethernet Network Interface Card (NIC) has achieved 126 million packets per second (Mpps) of record-setting forwarding capabilities running the open source Data Path Development Kit (DPDK). This breakthrough performance signifies the maturity of high-volume server I/O to support large-scale, efficient production deployments of Network Function Virtualization (NFV) in both Communication Service Provider (CSP) and cloud data centers. The DPDK performance of 126 Mpps was achieved on HPE ProLiant 380 Gen9 servers with Mellanox ConnectX-5 100Gb/s interface.
The I/O-intensive nature of Virtualized Network Functions (VNFs) – including virtual firewall, virtual Evolved Packet Core (vEPC), virtual Session Border Controller (vSBC), anti-DDoS and Deep Packet Inspection (DPI) applications – has posed significant challenges to building cost-effective NFV infrastructures that meet packet-rate, latency, jitter and security requirements. Leveraging its wealth of experience in building high-performance server/storage I/O components and switching systems for High Performance Computing (HPC), hyperscale data centers, and telecommunications operators, Mellanox has the industry’s broadest range of intelligent Ethernet NIC and switch solutions, spanning interface speeds from 10, 25, 40 and 50 to 100Gb/s. In addition, both the Mellanox ConnectX series of NICs and the Spectrum series of Ethernet switches feature best-in-class packet rates with 64-byte traffic, low and consistent latency, and enhanced security with hardware-based memory protection.
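For context on the 126 Mpps figure, the theoretical 64-byte line rate of a 100GbE link follows from standard Ethernet framing overhead (preamble, start-frame delimiter and inter-frame gap). The quick calculation below shows the record sits at roughly 85 percent of line rate:

```python
# Theoretical maximum packet rate on a 100GbE link with 64-byte frames.
# Each frame on the wire carries 20 bytes of fixed overhead:
# 7-byte preamble + 1-byte start-frame delimiter + 12-byte inter-frame gap.
LINK_BPS = 100e9
FRAME_BYTES = 64
WIRE_OVERHEAD_BYTES = 20

wire_bits_per_frame = (FRAME_BYTES + WIRE_OVERHEAD_BYTES) * 8  # 672 bits
max_mpps = LINK_BPS / wire_bits_per_frame / 1e6
print(round(max_mpps, 1))  # → 148.8 (Mpps line rate for 64-byte frames)

# The reported DPDK record relative to that ceiling:
print(round(126 / max_mpps * 100))  # → 85 (percent of line rate)
```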
In addition to designing cutting-edge hardware, Mellanox also actively works with infrastructure software partners and open source consortiums to drive system-level performance to new levels. Mellanox has continually improved DPDK Poll Mode Driver (PMD) performance and functionality through multiple generations of ConnectX-3 Pro, ConnectX-4, ConnectX-4 Lx, and ConnectX-5 NICs.
“We have established Mellanox as the leading cloud networking vendor, by working closely with 9 out of 10 of hyperscale customers who now leverage our advanced offload and acceleration capabilities that boost total infrastructure efficiency of their cloud, analytics, machine learning deployments,” said Kevin Deierling, vice president of marketing at Mellanox Technologies. “We are extending the same benefits to our CSP customers through a distinctive blend of enhanced packet processing and virtualization and storage offload technologies, enabling them to deploy Telco cloud and NFV with confidence.”
“As CSPs deploy NFV in production, they demand reliable NFV Infrastructure (NFVI) that delivers the quality of service their subscribers demand. A critical aspect of this is making sure the NFVI offers the data packet processing performance required to support the service traffic,” said Claus Pedersen, Director, Communication Service Provider Platforms, Data Center Infrastructure Group, Hewlett Packard Enterprise. “The HPE NFV Infrastructure lab has worked closely with Mellanox to ensure that HPE ProLiant Servers with the Mellanox ConnectX series of NICs will enable our CSP customers to achieve the scale, reliability and efficiency they require of their NFV deployments.”
Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.
The post Mellanox Sets New DPDK Performance Record With ConnectX-5 appeared first on HPCwire.
Feb. 27 — The first major call-for-participation deadline for the inaugural PEARC Conference is March 6, 2017. PEARC17, which will take place in New Orleans, July 9-13, 2017, is open to professionals and students in advanced research computing. Technical paper and tutorial submissions are due by March 6 and must be submitted through EasyChair, linked from the PEARC17 Call for Participation webpage.
The official Call for Participation contains details about each of the four technical paper tracks and the tutorials. Technical track paper submissions may be full papers (strongly preferred) or extended abstracts. External Program and Workshop proposals are due March 31, and Poster, Visualization Showcase and Birds-of-a-Feather submissions are due May 1.
The PEARC (Practice & Experience in Advanced Research Computing) conference series is being ushered in with support from many organizations and will build upon earlier conferences’ success and core audiences to serve the broader community. In addition to XSEDE, organizations supporting the new conference include the Advancing Research Computing on Campuses: Best Practices Workshop (ARCC), the Science Gateways Community Institute (SGCI), the Campus Research Computing Consortium (CaRC), the ACI-REF consortium, the Blue Waters project, ESnet, Open Science Grid, Compute Canada, the EGI Foundation, the Coalition for Academic Scientific Computation (CASC), and Internet2.
The post PEARC17 Call for Participation Deadline Approaching appeared first on HPCwire.
BARCELONA, Spain, Feb. 27 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced the IDG4400 6WIND Network Routing and IPsec platform based on the combination of Indigo, Mellanox’s newest network processor, and 6WIND’s 6WINDGate packet processing software, which includes routing and security features such as IPsec VPNs.
The IDG4400 6WIND 1U platform supports 10, 40 and 100GbE network connectivity and is capable of sustaining record rates of up to 180Gb/s of encryption/decryption while providing IPv4/IPv6 routing functions at rates up to 400Gb/s. As a result of the strong partnership between the two companies, Mellanox’s IDG4400 6WIND delivers a price/performance advantage in a turnkey solution designed for carrier, data center, cloud and Web 2.0 applications requiring high-performance cryptographic capabilities along with IP routing. The IDG4400 6WIND complements Mellanox’s Spectrum-based Ethernet switches to provide a full solution for the data center.
“We are proud to partner with Mellanox to include our high performance networking software in the new IDG4400 6WIND appliance,” said Eric Carmès, CEO and Founder of 6WIND. “By combining the performance and flexibility of Mellanox’s Indigo network processor together with our 6WINDGate routing and security features, we bring to market a ready-to-use networking appliance with an impressive cost/performance advantage for customers.”
“The need for scalable IPsec solutions has become vital for telco, data center, hyperscale infrastructures and more,” said Yael Shenhav, vice president of product marketing at Mellanox. “As security concerns in data centers continue to rise, encrypting data by default becomes a crucial requirement. The combined Mellanox and 6WIND solution provides the required security capabilities in the most efficient manner possible to our mutual customers.”
The IDG4400 delivers an effective routing and crypto alternative to traditional networking vendors at a fraction of the cost. For carriers, it offers a way to overcome the security exposure of today’s LTE networks. In addition, scaling to millions of routes, the IDG4400 6WIND is an ideal solution for Point of Presence (POP) applications. It can also be used to enable secure data interconnect between geographically dispersed data centers. Customers using 6WIND software on standard x86 servers can migrate to the IDG4400 and gain a cost/performance advantage while still enjoying the same software and features. The IDG4400 6WIND is a complete product backed by the extensive support capabilities of Mellanox.
For more information on the IDG4400 6WIND, come and visit us at HP’s booth no. 3E11, Hall 3.
The post Mellanox Introduces 6WIND-Based Router and IPsec Indigo Platform appeared first on HPCwire.
The US advances in high performance computing over many decades have been a product of the combined engagement of research centers in industry, government labs, and academia. Often these have been intertwined with cross collaborations in all possible combinations and under the guidance of Federal agencies with mission critical goals. But each class of R&D environment has operated at its own pace and with differing goals, strengths, and timeframes, the superposition of which has met the short, medium, and long term needs of the nation and the field of HPC.
Different countries across the Americas, Europe, Asia, and Africa have evolved their own formulas for such combinations, sometimes in cooperation with others. Many, but not all, emphasize the educational component of academic contributions for workforce development, incorporate the products of international industrial suppliers, and specialize their own government bodies to specific needs. In the US, academic involvement has provided critical long-term vision and, perhaps most importantly, greatly expanded the areas of pursuit.

Thomas Sterling, Director, CREST, Indiana University
The field of HPC is unique in that its success appears heavily weighted in terms of its impact on adoption by industry and community. This tight coupling sometimes works against certain classes of research, especially those that explore long term technologies, that investigate approaches outside the mainstream, or that require substantial infrastructure often beyond the capabilities or finances of academic partners. A more subtle but insidious factor is the all-important driver of legacy applications, often agency mission critical, that embody past practices constraining future possibilities.
How university research in HPC stays vibrant, advances the state of the art, and still makes useful contributions to the real world is a challenge that demands innovation in organization within schools and colleges. Perhaps most importantly, real research – as opposed to important development – not only involves but demands risk; it is the exploration of the unknown. Risk-averse strategies are appropriate when goals and approaches are already determined and time to deployment is the determining factor of success. But beyond a certain point, honesty recognizes that future methods lie outside the scope of certainty; there the scientific method applies and, when employed, must not just tolerate but benefit from uncertainty of outcome.
Without such research into the unknown, the field is restricted to incremental perturbations of the conventional, essentially limiting the future to the cul-de-sac of the past. This is insufficient to drive the field into areas beyond our sight. The power and richness of the mixed and counterbalancing approaches of government labs, industry, and academia guarantee both the near-term quality of deployable hardware and software platforms and the long-term, as-yet-unrealized improved concepts whose enabling technologies and trends are distinct from the present.
This is the strength of the US HPC R&D approach and was reflected in the 2015 NSCI executive order for exascale computing. How academia conducts its component of this triad is a messy and diverse methodology, sensitive to the nature of the institutions of which such centers are a part, the priorities of their universities, funding sources, and the vision of the individual faculty and senior administrators responsible for direction, strategy, staffing, facilities, and the accomplishments by which success will be measured. This article presents one such enterprise, the Center for Research in Extreme Scale Technologies (CREST) at Indiana University (IU), which incorporates one possible strategy balancing cost, impact, and risk on the national stage.
CREST is a medium-scale research center, somewhere between the small single-faculty-led research groups found at many universities and those few premier research environments such as the multiple large-scale academic laboratories at MIT and similar facilities like TACC and NCSA at UT-Austin and UIUC, respectively. While total staffing numbers are routinely in flux, a representative number is on the order of 50 people. It occupies a modern two-story building of about 20,000 square feet conveniently located within walking distance of the IU Bloomington campus and the center of the city.
CREST was established in the fall of 2011 with Prof. Andrew Lumsdaine as its founding Director, Dr. Craig Stewart as its Assistant Director, and Prof. Thomas Sterling as its Chief Scientist. Over almost six years of its existence, CREST has evolved with changes in responsibilities: Sterling currently serves as Director, Prof. Martin Swany as Associate Director, and Laura Pettit as Assistant Director. Overall staffing is deemed particularly important to ensure that all required operating functions are performed, which means significant engagement of administrative staff – not typical of academic environments. But cost effectiveness that maximizes productivity in research and education is also a goal, eliminating tasks that could be performed better, and at lower cost, by others. An important CREST strategy is to let everyone working as part of a team do what they are best at, yielding the highest impact at the lowest cost.
As per IU policy, research direction is faculty led with as many as six professors slotted for CREST augmented with another half dozen full-time research scientists including post-docs. A small number of hardware and software engineers both expedites and enhances quality of prototype development for experimentation and product delivery to collaborating institutions. CREST can support as many as three-dozen doctoral students with additional facilities for Masters and undergraduate students.
Organizationally, CREST has oversight by the Office of the Dean of the IU School of Informatics and Computing (SOIC) in cooperation with the Office of the VP of IT and the Office of the VP of Research. It coexists with the many departments making up SOIC and has the potential to include faculty and students from any and all of them. It also extends its contributions and collaborations to other departments within the university as research opportunities and interdisciplinary projects permit. While these details are appropriate, they are rather prosaic and more importantly do not describe either the mandate or the essence of CREST; that is about the research it enables.
CREST was established not for the purpose of creating a research center, but as an enabler of a focused area of research: specifically, to advance the state of the art in high performance computing systems beyond conventional practices. This was neither arbitrary nor naive on the part of IU senior leadership; it was viewed as the missing piece of an ambitious but realizable strategy to bring HPC leadership and capability to Indiana. Already in place were strong elements of cyber-infrastructure support and HPC data center facilities for research and education (more about this shortly). CREST was created as the third pillar of this HPC thrust by bringing original research in hardware and software to IU, with a balanced portfolio of near- and long-term initiatives providing both initial computing environments of immediate value and extended exploration of alternative concepts unlikely to be undertaken by mainstream, product-oriented activities. Therefore, the CREST research strategy addresses real-world challenges in HPC, including classes of applications not currently well satisfied through incremental changes to conventional practices.
One of the critical factors in the impact of CREST is its close affiliation with the Office of the Vice President for Information Technology (OVPIT), including the IU Pervasive Technology Institute (IUPTI) and University Information Technology Services (UITS). This dramatically reduces the costs and ancillary activities of CREST research by leveraging OVPIT’s major investments in broader facilities and services for the IU computing community, permitting CREST as a work unit to stay precisely focused on its mission research while remaining lean and mean. IU VP for IT and CIO Brad Wheeler played an instrumental role in the creation of CREST and the recruitment of Thomas Sterling and Martin Swany to IU.
The IUPTI operates supercomputers with more than 1 PetaFLOPS aggregate processing capability, including the new Big Red II Plus, a Cray supercomputer allowing large-scale testing and performance analysis of HPX+ software. This is housed and operated in a state-of-the-art 33,000-square-foot data center that, among its other attributes, is tornado proof. IUPTI exists to aid the transformation of computer science innovation into tools usable by the practicing scientist within IU. IUPTI creates special provisions for support of CREST software on their systems and at the same time has provided two experimental compute systems (one cluster, one very small Cray test system) for dedicated use within CREST.

CREST founding director Andrew Lumsdaine (l) and current director Thomas Sterling in front of Big Red II Plus (Cray)
IUPTI staff are engaged and active in CREST activities. For example, IUPTI Executive Director Craig Stewart gave the keynote address at the 2016 SPPEXA (the German priority program on software for exascale computing) workshop held in Munich, discussing US exascale initiatives in general and CREST technologies in particular. IUPTI coordinates its vendor interactions with CREST so as to create opportunities for R&D partnerships and promulgation of CREST software. Last, and definitely not least, the UITS Learning Technologies Division supports CREST in the distribution of online teaching materials created by CREST. All in all, CREST, SOIC, and OVPIT are partners in supporting basic research in HPC and rendering CS innovations to practical use for science and society while managing costs.
The CREST charter is one of focused research toward the common goal of advancing future generations of HPC system structures and applications; the Center is simply a vehicle for achieving IU’s goals in HPC and the associated research objectives, rather than an end in itself. The research premise is that a handful of key factors determine ultimate delivered performance: starvation, latency, overhead, waiting for contention resolution, availability (including resilience), and the normalizing operation issue rate reflecting power (e.g., clock rate). Additional factors of performance portability and user productivity also contribute to the overall effectiveness of any particular strategy of computation.
A core postulate of CREST HPC research and experimental development is the opportunity to address these challenge parameters through dynamic adaptive techniques through runtime resource management and task scheduling to achieve (if/when possible) dramatic improvements in computing efficiency and scalability. The specific foundational principles of the dynamic computational method used are established by the experimental ParalleX execution model which expands computational parallelism, addresses the challenge of uncertainty caused by asynchrony, permits exploitation of heterogeneity, and exhibits a global name space to the application.
ParalleX is intended to replace prior execution models such as Communicating Sequential Processes (CSP), SMP-based multiple threaded shared memory computing (e.g., OpenMP), vector and SIMD-array computing, and the original von Neumann derivatives. ParalleX has been formally specified through operational semantics by Prof. Jeremy Siek for verification of correctness, completeness, and compliance. As a first reduction to practice, a family of HPX runtime systems have been developed and deployed for experimentation and application. LSU has guided important extensions to C++ standards led by Dr. Hartmut Kaiser. HPX+ is being used to extend the earlier HPX-5 runtime developed by Dr. Luke D’Alessandro and others into areas of cyber-physical systems and other diverse application domains while supporting experiments in computer architecture.
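The asynchronous many-task style that the HPX runtimes support can be illustrated, very loosely, with ordinary futures. The sketch below uses plain Python `concurrent.futures`, not the HPX API; it shows only the pattern of launching tasks that return futures and synchronizing on exactly the values needed, rather than bulk-synchronizing all workers:

```python
# A minimal futures-based sketch (not HPX itself) of asynchronous
# many-task execution: work is expressed as tasks returning futures,
# so the runtime can overlap computation and resolve dependencies
# as values become available instead of at global barriers.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

data = list(range(1000))
chunks = [data[i:i + 250] for i in range(0, 1000, 250)]

with ThreadPoolExecutor() as pool:
    # Launch all partial sums asynchronously; each submit returns a future.
    futures = [pool.submit(partial_sum, c) for c in chunks]
    # The reduction blocks only on the specific values it consumes.
    total = sum(f.result() for f in futures)

print(total)  # → 499500
```

In a real HPX program the tasks would be far finer grained and distributed across nodes in a global address space, but the dependency-driven control flow is the same idea.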
One important area pursued by CREST in system design and operation is advanced lightweight messaging and control through the Photon communication protocol, led by Prof. Martin Swany, with additional work in low-overhead NIC design. Many application areas have been explored. Some conventional problems exhibiting static, regular data structures show little improvement through these methods. But many applications incorporating time-varying irregular data structures – such as the graphs found in adaptive mesh refinement, wavelet algorithms, N-body problems, particle-in-cell codes, and fast multipole methods, among others – demonstrate improvements, sometimes significant, in the multi-dimensional performance tradeoff space. Drs. Matt Anderson, Bo Zhang, and others have driven this research by developing such codes, producing useful software including the DASHMM library.
The CREST research benefits from both internal and external sponsorship. CREST has contributed to NSF, DOE, DARPA, and NSA projects over the last half dozen years and continues to participate in advanced research projects as appropriate. CREST represents an important experience base in advancing academic research in HPC systems for future scalable computing, employing co-design methodologies between applications and innovations in hardware and software system structures and continues to evolve. It provides a nurturing environment for mentoring of graduate students and post-docs in the context of advanced research even as the field itself continues to change under national demands and changing technology opportunities and challenges.
The post Thomas Sterling on CREST and Academia’s Role in HPC Research appeared first on HPCwire.
Knowing that the jump to exascale will require novel architectural approaches capable of delivering dramatic efficiency and performance gains, researchers around the world are hard at work on next-generation HPC systems.
In Europe, the DEEP project has successfully built a next-generation heterogeneous architecture based on an innovative “cluster-booster” approach. The new architecture can dynamically assign individual code parts in a simulation to different hardware components based on which component can deliver the highest computational efficiency. It also provides a foundation for a modular type of supercomputing where a variety of top-level system components, such as a memory module or a data analytics module for example, could be swapped in and out based on workload characteristics. Recently, Norbert Eicker, head of the Cluster Computing research group at Jülich Supercomputing Centre (JSC), explained how the DEEP and DEEP-ER projects are advancing the idea of “modular supercomputing” in pursuit of exascale performance.
Why go DEEP?
Eicker says that vectorization and multi-core processors have become the two main strategies for acceleration. He noted that the main advantages of general-purpose multi-core processors include high single-thread performance, due to relatively high frequency, along with their ability to do out-of-order processing. Their downsides include limited energy efficiency and a higher cost per FLOP. Accelerators, such as the Intel Xeon Phi coprocessor or GPUs, on the other hand, are more energy efficient but harder to program.
Given the different characteristics of general-purpose processors and accelerators, it was only a matter of time before researchers began looking for ways to integrate different types of compute modules into an overall HPC system. Eicker said that most efforts have involved building heterogeneous clusters wherein standard cluster nodes are connected using a fabric and accelerators are attached to each cluster node.

Figure 1: An example of a basic architecture for a heterogeneous cluster.
Per Eicker, this heterogeneous approach has drawbacks, including the need for static assignment of accelerators to CPUs. Since some applications benefit greatly from accelerators and others not at all, getting the ratio of CPUs to accelerators right is tricky and inevitably leads to inefficiencies. Eicker explained that the idea behind the DEEP project was to combine compute resources into a common fabric and make the accelerating resources more autonomous. The goal was to not only enable dynamic assignments between cluster nodes and the accelerator, but also to enable the accelerators to run a kind of MPI so the system could offload more complex kernels to the accelerators rather than needing to always rely on the CPU.
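The dynamic-assignment idea can be sketched in a few lines of Python. This is purely illustrative, not DEEP's actual scheduler: the kernel names and efficiency figures below are invented, and the point is only that each code part goes to whichever component is expected to run it most efficiently, rather than being bound to a fixed CPU-accelerator pairing.

```python
def assign_parts(parts):
    """Map each code part to 'cluster' or 'booster' by comparing an
    estimated efficiency score for running it on each component."""
    placement = {}
    for name, eff in parts.items():
        placement[name] = "booster" if eff["booster"] > eff["cluster"] else "cluster"
    return placement

# Hypothetical code parts with made-up efficiency estimates: low-concurrency
# work favors the cluster, highly parallel work favors the booster.
parts = {
    "mesh_setup":      {"cluster": 1.0, "booster": 0.3},
    "chemistry_solve": {"cluster": 0.4, "booster": 2.1},
}
print(assign_parts(parts))  # {'mesh_setup': 'cluster', 'chemistry_solve': 'booster'}
```

In the real system the assignment is driven by analysis of each application's concurrency levels, but the scheduling decision has the same shape: pick the component that delivers the best efficiency for that part.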
The building blocks of a successful prototype
Work on the prototype Dynamical Exascale Entry Platform (DEEP) system began in 2011 and was mostly finalized toward the end of 2015. It took the combined efforts of 20 partners to complete the European Commission-funded project. The 500 TFLOP/s DEEP prototype system includes a “cluster” component with general-purpose Intel Xeon processors and a “booster” component with Intel Xeon Phi coprocessors, along with a software stack capable of dynamically separating code parts in a simulation based on concurrency levels and sending them to the appropriate hardware component. The University of Heidelberg developed the fabric, which has been commercialized by EXTOLL and dubbed the EXTOLL 3D Torus Network.
Figure 2: The DEEP cluster-booster hardware architecture. The cluster is based on an Aurora HPC system from Eurotech. The booster includes 384 Intel Xeon Phi processors interconnected by EXTOLL fabric.
Given the unusual architecture, the project team knew it would need to modify and test applications from a variety of HPC fields on the DEEP system to prove its viability. The team analyzed each selected application to determine which parts would run better on the cluster and which would run better on the booster, and modified the applications accordingly. One example is a climate application from Cyprus Institute. The standard climate model part of the application runs on the cluster side while an atmospheric chemical simulation runs on the booster side, with both sides interacting with each other from time to time to exchange data.
The new software architecture
One of the most important developments of the DEEP project is a software architecture that includes new communication protocols for transferring data between network technologies, programming model extensions and other important advancements.
Figure 3: The DEEP software architecture includes standard software stack components along with some new components developed specifically for the project.
While the left- and right-hand sides of the architecture in Figure 3 are identical to the standard MPI-based software stacks of most present-day HPC architectures, the components in the middle add some important new capabilities. Eicker explained that in the DEEP software architecture, the main part of an application and its less scalable code parts run only on the cluster nodes, and everything starts on the cluster side. What’s different is that the cluster part of the application can collectively start a group of MPI processes on the booster (right-hand) side using a global MPI.
The spawn for the booster is a collective operation of cluster processes that creates an inter-communicator containing all parents on one side and all children on the other. For example, the processes of MPI_COMM_WORLD on the cluster side, or a subset of them, can collectively call the MPI_Comm_spawn function to create a new MPI_COMM_WORLD on the booster side that is capable of standard MPI communication. Once started, the processes on the booster side can communicate amongst each other and exchange messages, making it possible to offload complex kernels to the booster.
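As a very loose illustration of this spawn-and-offload pattern, here is a Python sketch using the standard multiprocessing module. It is an analogy only: the real system uses MPI_Comm_spawn and inter-communicators across the EXTOLL fabric, not a local process pool, and the kernel shown is invented.

```python
from multiprocessing import Pool

def booster_kernel(chunk):
    """Stand-in for a highly scalable kernel offloaded to the booster:
    each spawned worker computes a partial sum of squares."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # The "cluster" side partitions its data into chunks...
    data = [list(range(i, i + 4)) for i in range(0, 16, 4)]
    # ...then "spawns" a group of workers (analogous to starting MPI
    # processes on the booster) and offloads one kernel per chunk.
    with Pool(processes=4) as booster:
        partials = booster.map(booster_kernel, data)
    # The cluster side combines the partial results it gets back.
    print(sum(partials))  # 1240
```

The analogy breaks down in one important way: in DEEP the spawned booster processes can also talk to each other over their own MPI_COMM_WORLD, which is what allows whole communication-heavy kernels, not just independent tasks, to be offloaded.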
Using MPI to bridge between the different fabrics in the cluster and booster may seem like it would significantly complicate the lives of application developers. However, Barcelona Supercomputing Center invented what is basically a source-to-source compiler, called the OmpSs Offload Abstraction compiler, that does much of the work. Developers see a familiar-looking cluster side with an InfiniBand-based MPI and a booster side with an EXTOLL-based MPI. Their job is to annotate the code to tell the compiler which parts should run on the cluster versus the booster. The OmpSs compiler introduces the MPI_Comm_spawn call and the other communication calls required for sharing data between the two code parts.
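As a rough Python analogy for this annotation-driven offload (OmpSs itself works through compiler directives on C/Fortran code, not decorators), a marked function can be routed through a dispatch layer while the developer's code stays unchanged. All names here are invented for illustration.

```python
import functools

OFFLOADED = []  # record of calls the "runtime" routed to the booster

def offload_to_booster(func):
    """Toy analogy for an OmpSs-style annotation: mark a function so a
    dispatch layer can send it to the accelerator side. Here we just
    log the dispatch and run the function locally."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        OFFLOADED.append(func.__name__)  # in DEEP: spawn/offload via MPI
        return func(*args, **kwargs)
    return wrapper

@offload_to_booster
def chemistry_step(concentrations):
    # Stand-in for a scalable kernel the developer marked for the booster.
    return [2 * c for c in concentrations]

print(chemistry_step([1.0, 2.0]))  # [2.0, 4.0]
print(OFFLOADED)                   # ['chemistry_step']
```

The design point is the same in both cases: the developer only declares *where* a code part should run, and the tooling generates the plumbing (in DEEP, the MPI_Comm_spawn and data-transfer calls) that actually moves the work.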
Eicker explained that the flexible DEEP approach has many advantages, including options for multiple operational modes that enable much more efficient use of system resources. Beyond the specialized symmetric mode described above, the booster can be used discretely, or as a pool of accelerators. He cited applications that scale well on Blue Gene systems as an example, noting they can be run entirely on the booster side with no cluster interaction.
From DEEP to DEEP-ER
Plans for the DEEP-ER (Dynamical Exascale Entry Platform – Extended Reach) phase include updating the booster to the latest generation of Intel Xeon Phi processors. The team is also exploring how on-node Non-Volatile Memory (NVM), network attached memory and a simplified interface can improve the overall system capabilities.
Figure 4: The DEEP-ER cluster-booster hardware architecture.
Eicker said that since the new Xeon Phi processors are self-booting, the upgrade will make the hardware implementation easier. The team also significantly simplified the interface by using the EXTOLL fabric throughout the entire system. The global use of the EXTOLL fabric enabled the team to eliminate the booster interface nodes and the DEEP cluster-booster protocol; the DEEP-ER system will instead use a standard EXTOLL protocol running across the two types of nodes. The EXTOLL interconnect also enables the system to take advantage of the network attached memory.
One of the main objectives of the DEEP-ER project is to explore scalable I/O. To that end, the project team is investigating the integration of different storage types, ranging from disk to on-node NVM, while also making use of the network attached memory. Eicker said the team is using the BeeGFS file system, with extensions that enable smart caching to local NVMe devices within the file system’s common namespace, to help improve performance. It is also using SIONlib, a scalable I/O library developed by JSC for parallel access to task-local files, to enable more efficient local handling of I/O. Exascale10 I/O software from Seagate sits on top of the BeeGFS file system, enabling MPI I/O to make use of the file system’s cache extensions.
Beyond I/O, the DEEP-ER project is also exploring how to improve resiliency. Eicker noted that because the offloaded parts of programs are stateless in the DEEP approach, it’s possible to improve the overall resiliency of the software and make functions like checkpoint restart a lot more efficient than standard approaches.
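The resiliency idea can be sketched as follows; this is an illustrative toy, not DEEP-ER's actual checkpoint/restart machinery. Because a stateless offloaded kernel is a pure function of its inputs, recovering from a lost booster node only requires re-running the kernel from those same inputs, with no coordinated rollback of the whole application.

```python
def run_with_restart(kernel, inputs, max_attempts=3):
    """If the offloaded, stateless kernel fails, simply re-run it from
    the same inputs. A stateful kernel would instead force a global
    checkpoint restore across the application."""
    for _ in range(max_attempts):
        try:
            return kernel(inputs)
        except RuntimeError:
            continue  # transient failure: re-offload the same work
    raise RuntimeError("kernel failed on all attempts")

failures = {"count": 2}  # simulate two transient node losses

def flaky_kernel(xs):
    if failures["count"] > 0:
        failures["count"] -= 1
        raise RuntimeError("booster node lost")
    return sum(xs)

print(run_with_restart(flaky_kernel, [1, 2, 3]))  # 6
```

The cheap part is what is *not* in the sketch: no periodic snapshot of kernel state is ever taken, because the inputs held on the cluster side already are the checkpoint.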
Toward modular supercomputing
Each phase of the DEEP project is an important step toward modular supercomputing. Eicker said that the DEEP cluster-booster concept showed that it’s possible to integrate heterogeneous systems in new ways. With DEEP-ER, the combination of the NAM and network attached storage adds what is essentially a memory booster module. Moving forward, there are all kinds of possibilities for new modules, according to Eicker. He mentioned an analytics module that might look like a cluster but include more memory or different types of processors, or a module that acts as a graphics cluster for online visualization.
Figure 5: The end goal of the DEEP project is to create a truly modular supercomputer, which could pave the way for increasingly specialized modules for solving different types of supercomputing challenges.
The ultimate goal of the DEEP project is to build a flexible modular supercomputer that allows users to organize applications for efficient use of the various system modules. Eicker said that the DEEP-ER team hopes to extend its JURECA cluster with a next-generation Xeon Phi processor-based booster. Then the team will begin exploring new possibilities for the system, which could include adding new modules, such as graphics, storage and data analytics modules. The next steps could even include a collaboration with the Human Brain Project on neuromorphic computing. And these ideas are only the beginning. The DEEP approach could enable scientists to dream up new modules for tackling their specific challenges. Eicker acknowledges that there is much work to be done, but he believes the co-design approach used by the DEEP team will continue to drive significant steps forward.
Watch a short video capturing highlights of Eicker’s presentation.
About the Author
Sean Thielen, the founder and owner of Sprocket Copy, is a freelance writer from Portland, Oregon who specializes in high-tech subject matter.
The post Advancing Modular Supercomputing with DEEP and DEEP-ER Architectures appeared first on HPCwire.
SAN DIEGO, Calif., Feb. 24 — The Leverage Big Data + EnterpriseHPC 2017 Summit, a live hosted event dedicated to exploring the convergence happening as enterprises increasingly leverage High Performance Computing (HPC) to solve modern scaling challenges of the big data era, today announced that Dell EMC has joined the summit as a sponsor.
The summit, scheduled for March 19-21, 2017 at the Ponte Vedra Inn & Club in Ponte Vedra Beach, Florida, will focus on bridging the challenges that CTOs, CIOs, database, systems & solutions architects, and other decision-makers involved in the build-out of scalable big data solutions face as they work to build systems and applications that require increasing amounts of performance and throughput.
Dell EMC, a part of Dell Technologies, enables organizations to modernize, automate and transform their data center using industry-leading converged infrastructure, servers, storage and data protection technologies. This provides a trusted foundation for businesses to transform IT, through the creation of a hybrid cloud, and transform their business through the creation of cloud-native applications and big data solutions. Dell EMC services customers across 180 countries – including 98% of the Fortune 500 – with the industry’s most comprehensive and innovative portfolio from edge to core to cloud.
Bringing together the industry’s leading hardware and software solutions under one roof, the converged Leverage Big Data + EnterpriseHPC 2017 Summit unites leaders from across industries who are overcoming streaming and high-performance challenges as they drive their organizations to success. Attendees of this invitation-only summit will engage with luminaries facing similar technical challenges, build dialogue and share solutions for delivering both systems and software performance in this emerging era of computing.
“Streaming analytics and high-performance computing loom large in the future of enterprises which are realizing the scaling limitations of their legacy environments,” said Tom Tabor, CEO of Tabor Communications. “As organizations develop analytic models that require increasing levels of compute, throughput and storage, there is a growing need to understand how businesses can leverage high performance computing architectures that can meet the increasing demands being put on their infrastructure. At Leverage Big Data + EnterpriseHPC’17, we look to support the leaders who are trying to navigate their scaling challenges, and connect them with others who are finding new and novel ways to succeed.”
The summit will be co-chaired by EnterpriseTech Managing Editor, Doug Black, and Datanami Managing Editor, Alex Woodie.
ATTENDING THE SUMMIT
This is an invitation-only hosted summit; for qualified attendees, costs including flight, hotel, meals and summit badge are fully covered. The summit targets CTOs, CIOs, database, systems and solutions architects, and other decision-makers involved in the build-out of scalable big data solutions. To apply for an invitation to this exclusive event, please fill out the qualification form at the following link: Hosted Attendee Interest Form
Current sponsors for the summit include Amazon Web Services, ANSYS, ASRock Rack, Birst, Caringo, Cray, DDN Storage, Dell EMC, HDF Group, Impetus, Intel, Lawrence Livermore National Lab, Paxata, Quantum, Redline Performance, Striim, Verne Global, with more to be announced. For sponsorship opportunities, please contact us at firstname.lastname@example.org.
The summit is hosted by Datanami, EnterpriseTech and HPCwire through a partnership between Tabor Communications and nGage Events, the leader in host-based, invitation-only business events.
Source: Tabor Communications
The post Dell EMC to Sponsor Leverage Big Data + EnterpriseHPC 2017 Summit appeared first on HPCwire.
OAK RIDGE, Tenn., Feb. 24 — Dr. Thom Mason will step down as director of Oak Ridge National Laboratory effective July 1, 2017, exactly 10 years after becoming director of the nation’s largest science and energy laboratory.
Mason will take on a new role as Senior Vice President for Laboratory Operations at Battelle in Columbus, Ohio. Battelle, in partnership with the University of Tennessee, has managed and operated ORNL for the U.S. Department of Energy since April 2000.
“Thom has been an exemplary scientific leader and we’re fortunate that he will continue to be engaged with Oak Ridge National Laboratory as he uses his experience and expertise to benefit DOE, Battelle, and other labs where Battelle has a management role,” said Joe DiPietro, chairman of the UT-Battelle board of governors. Battelle has a substantial management role at six DOE labs and one lab for the Department of Homeland Security.
Mason is an experimental condensed matter physicist by training and came to ORNL in 1998 to work on the Spallation Neutron Source (SNS), a facility that was under construction to serve scientists worldwide. He soon assumed responsibility for completion of the $1.4 billion project.
SNS and its sister neutron facility, the High Flux Isotope Reactor, solidified ORNL’s role as the leading source of neutrons for scientific research in the U.S. and recently welcomed their 20,000th scientific user.
Mason’s decade as lab director was marked by a number of other milestones, including:
- Two supercomputers ranked as the most powerful in the world, bringing the power of high performance computing to a wide range of science and engineering problems;
- Leadership of ambitious, multi-institutional research organizations such as the BioEnergy Science Center and Consortium for Advanced Simulation of Light Water Reactors;
- Establishment of ORNL as a center for advanced manufacturing, and creation of game-changing technologies in support of clean energy and industry;
- Continued revitalization of laboratory infrastructure;
- Support of national priorities in nuclear science and energy, in fission, fusion, isotope production, and nuclear security.
At Battelle, Mason will work with Executive Vice President of Global Laboratory Operations Ron Townsend by participating in governance at each Battelle-managed lab, engaging key sponsors, contributing to capture management, and leading strategic planning for lab operations that integrate with Battelle’s overall strategic plan.
ORNL has formed a search committee to seek Mason’s replacement. More information will be available at the ORNL Director Search website http://public.ornl.gov/ornlsearch.
PALO ALTO, Calif., Feb. 23 — Hewlett Packard Enterprise (NYSE: HPE) today announced financial results for its fiscal 2017 first quarter, ended January 31, 2017.
First quarter net revenue of $11.4 billion was down 10% from the prior-year period and down 4% when adjusted for divestitures and currency.
First quarter GAAP diluted net earnings per share (EPS) was $0.16, up from $0.15 in the prior-year period, and above its previously provided outlook of $0.03 to $0.07. First quarter non-GAAP diluted net EPS was $0.45, up from $0.41 in the prior-year period, and near the high end of its previously provided outlook of $0.42 to $0.46. First quarter non-GAAP net earnings and non-GAAP diluted net EPS exclude after-tax costs of $505 million and $0.29 per diluted share, respectively, related to separation costs, restructuring charges, amortization of intangible assets, acquisition and other related charges, an adjustment to earnings from equity interests, defined benefit plan settlement and remeasurement charges and tax indemnification adjustments.
“I believe HPE remains on the right track,” said Meg Whitman, President and CEO of Hewlett Packard Enterprise. “The steps we’re taking to strengthen our portfolio, streamline our organization, and build the right leadership team, are setting us up to win long into the future.”
HPE fiscal 2017 first quarter financial performance

                                            Q1 FY17    Q1 FY16    Y/Y
  GAAP net revenue ($B)                     $11.4      $12.7      (10%)
  GAAP operating margin                     4.1%       3.0%       1.1 pts.
  GAAP net earnings ($B)                    $0.3       $0.3       flat
  GAAP diluted net earnings per share       $0.16      $0.15      7%
  Non-GAAP operating margin                 9.2%       8.1%       1.1 pts.
  Non-GAAP net earnings ($B)                $0.8       $0.7       6%
  Non-GAAP diluted net earnings per share   $0.45      $0.41      10%
  Cash flow from operations ($B)            ($1.5)     ($0.1)     ($1.4)
Three significant headwinds have developed since Hewlett Packard Enterprise provided its original fiscal 2017 outlook at its Securities Analyst Meeting in October 2016: increased pressure from foreign exchange movements, higher commodities pricing, and some near-term execution issues. Given these challenges, the company is reducing its FY17 outlook by $0.12 in order to continue making the appropriate investments to secure the long-term success of the business.
For the fiscal 2017 second quarter, Hewlett Packard Enterprise estimates GAAP diluted net EPS to be in the range of ($0.03) to $0.01 and non-GAAP diluted net EPS to be in the range of $0.41 to $0.45. Fiscal 2017 second quarter non-GAAP diluted net EPS estimates exclude after-tax costs of approximately $0.44 per diluted share, related primarily to separation costs, restructuring charges and the amortization of intangible assets.
For fiscal 2017, Hewlett Packard Enterprise estimates GAAP diluted net EPS to be in the range of $0.60 to $0.70 and non-GAAP diluted net EPS to be in the range of $1.88 to $1.98. Fiscal 2017 non-GAAP diluted net EPS estimates exclude after-tax costs of approximately $1.28 per diluted share, related primarily to separation costs, restructuring charges and the amortization of intangible assets.
Fiscal 2017 first quarter segment results
- Enterprise Group revenue was $6.3 billion, down 12% year over year, down 6% when adjusted for divestitures and currency, with a 12.7% operating margin. Servers revenue was down 12%, down 11% when adjusted for divestitures and currency, Storage revenue was down 13%, down 12% when adjusted for divestitures and currency, Networking revenue was down 33%, up 6% when adjusted for divestitures and currency, and Technology Services revenue was down 2%, up 4% when adjusted for divestitures and currency.
- Enterprise Services revenue was $4.0 billion, down 11% year over year, down 6% when adjusted for divestitures and currency, with a 7.0% operating margin. Infrastructure Technology Outsourcing revenue was down 8%, down 7% when adjusted for divestitures and currency, and Application and Business Services revenue was down 17%, down 3% when adjusted for divestitures and currency.
- Software revenue was $721 million, down 8% year over year, down 1% when adjusted for divestitures and currency, with a 21.4% operating margin. License revenue was down 9%, down 2% when adjusted for divestitures and currency, Support revenue was down 9%, down 2% when adjusted for divestitures and currency, Professional Services revenue was down 7%, down 5% when adjusted for divestitures and currency, and Software-as-a-service (SaaS) revenue was up 4%, up 6% when adjusted for divestitures and currency.
- Financial Services revenue was $823 million, up 6% year over year, net portfolio assets were up 2%, and financing volume was down 10%. The business delivered an operating margin of 9.5%.
Revenue adjusted for divestitures and currency excludes revenue resulting from businesses divestitures in fiscal 2017, 2016 and 2015 and also assumes no change in the foreign exchange rate from the prior-year period. A reconciliation of GAAP revenue to revenue adjusted for divestitures and currency is provided in the earnings presentation at investors.hpe.com.
About Hewlett Packard Enterprise
Hewlett Packard Enterprise (HPE) is an industry leading technology company that enables customers to go further, faster. With the industry’s most comprehensive portfolio, spanning the cloud to the data center to workplace applications, our technology and services help customers around the world make IT more efficient, more productive and more secure.
Source: Hewlett Packard Enterprise
DENVER, Colo., Feb. 23 — Applications are now being accepted for the Student Volunteers program at the SC17 conference to be held Nov. 12-17 in Denver. Both undergraduate and graduate students are encouraged to apply.
Students will be required to work a minimum number of hours during the conference, giving them time to engage in important education and career-advancing activities such as tutorials, technical talks, panels, poster sessions and workshops. Student Volunteers help with the administration of the conference and have the opportunity to participate in student-oriented activities, including professional development workshops, technical talks by famous researchers and industry leaders, exploring the exhibits and developing lasting peer connections.
The Student Volunteers program will accept a large number of students, both domestic and international, with the goal of transitioning students into the main conference by way of the Technical Program, Doctoral Showcase and Early Career professional development sessions.
Being a Student Volunteer can be transformative, from helping to find internships to deciding to pursue graduate school. Read about how Ather Sharif’s Student Volunteer experience inspired him to enroll in a Ph.D. program.
The deadline to apply is June 15.
The post Applications Now Open for Student Volunteers at SC17 Conference appeared first on HPCwire.
FRANKFURT, Germany, Feb. 23 — The organizers of the ISC High Performance conference are very pleased to introduce data scientist Prof. Dr. Jennifer Tour Chayes, managing director and co-founder of Microsoft Research New England and Microsoft Research New York City, as the ISC 2017 conference keynote speaker. Her talk will be titled “Network Science: From the Massive Online Networks to Cancer Genomics.”
She will be speaking at 9 am, which is right after the opening session on Monday, June 19. This year’s ISC High Performance conference will be held at Messe Frankfurt from June 18 – 22, and will be attended by over 3,000 HPC community members, including researchers, scientists and business leaders.
In her keynote abstract, Chayes sets up her topic as follows:
“Everywhere we turn these days, we find massive data sets that are appropriately described as networks. In the high tech world, we see the Internet, the World Wide Web, mobile phone networks, a variety of online social networks like Facebook and LinkedIn, and massive online networks of users and products like Netflix and Amazon. In economics, we are increasingly experiencing both the positive and negative effects of a global networked economy. In epidemiology, we find disease spreading over our ever growing social networks, complicated by mutation of the disease agents. In biomedical research, we are beginning to understand the structure of gene regulatory networks, with the prospect of using this understanding to manage many human diseases.”
Chayes is one of the inventors of the field of graphons, which are graph functions now widely used for machine learning on massive networks. She will briefly introduce some of the models she and her collaborators are using to describe these networks, the processes they are studying on the networks, and the algorithms they have devised for them, as well as methods to indirectly infer latent network structure from measured data and to derive insights from those networks.
“I’ll discuss in some detail two particular applications: the very efficient machine learning algorithms for doing collaborative filtering on massive sparse networks of users and products, like the Netflix network; and the inference algorithms on cancer genomic data to suggest possible drug targets for certain kinds of cancer,” explains Chayes.
She joined Microsoft Research in 1997, when she co-founded the Theory Group. She is the co-author of over 135 scientific papers and the co-inventor of more than 30 patents. Her research areas include phase transitions in discrete mathematics and computer science, structural and dynamical properties of self-engineered networks, graph theory, graph algorithms, algorithmic game theory, and computational biology.
Chayes holds a BA in biology and physics from Wesleyan University, where she graduated first in her class, and a PhD in mathematical physics from Princeton. She did postdoctoral work in the Mathematics and Physics Departments at Harvard and Cornell. She is the recipient of the NSF Postdoctoral Fellowship, the Sloan Fellowship, the UCLA Distinguished Teaching Award, and the ABI Women of Leadership Vision Award. She has twice been a member of the IAS in Princeton. Chayes is a Fellow of the American Association for the Advancement of Science, the Fields Institute, the Association for Computing Machinery, and the American Mathematical Society, and an elected member of the American Academy of Arts and Sciences. She is the winner of the 2015 John von Neumann Award, the highest honor of the Society of Industrial and Applied Mathematics. In 2016, Chayes received an Honorary Doctorate from Leiden University.
2017 Conference Registration Opens March 1
The organizers are also pleased to announce that the early-bird registration for this year’s conference and exhibition will open March 1. By registering early, attendees will be able to save money and secure their choice of hotels. For ISC 2017 partner hotels and special rates, please look at Frankfurt Hotels under Travel & Stay.
About ISC High Performance
First held in 1986, ISC High Performance is the world’s oldest and Europe’s most important conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments.
Over 400 hand-picked expert speakers and 150 exhibitors, consisting of leading research centers and vendors, will greet attendees at ISC High Performance. A number of events complement the Monday – Wednesday keynotes, including the Distinguished Speaker Series, the Industry Track, the Machine Learning Track, Tutorials, Workshops, the Research Paper Sessions, Birds-of-a-Feather (BoF) Sessions, Research Poster Sessions, the PhD Forum, Project Poster Sessions and Exhibitor Forums.
Source: ISC High Performance
The post Microsoft Researcher Tapped for Opening Keynote at ISC 2017 appeared first on HPCwire.
Feb. 23 — CIARA, a global technology provider specializing in the design, development, manufacturing, integration and support of cutting-edge products and services, announced today that it is an official Intel Technology Provider, HPC Data Center Specialist. This recognition certifies CIARA as an expert in delivering innovative high performance computing technology based on the latest Intel processors to customers.
“We are proud to have achieved Intel Technology Provider HPC Data Center Specialist status,” said Shannon Shragie, Director of Product Management at CIARA. “Our collaboration with Intel HPC experts enables us to leverage Intel test tools, reducing our research and development costs, ensuring the highest quality, and offering customers the lowest total cost of ownership.”
This certification recognizes CIARA’s history of technology excellence in the design and deployment of HPC solutions, including high performance computing clusters, performance-optimized servers, storage solutions, GPU platforms and large-scale data center solutions for businesses worldwide. CIARA’s total end-to-end data center solution includes design, rack and stack, on-site rack deployment, hardware support and recycling.
CIARA works with Intel to develop solutions using the latest technologies and architectures. Achieving HPC Data Center Specialist status means Intel has validated CIARA as a trusted partner.
Founded in 1984, CIARA is a global technology provider that specializes in the design, engineering, manufacturing, integration, deployment, support and recycling of cutting-edge IT products. With its vast range of products and services, including desktops, workstations, servers and storage, HPC products, high frequency servers, OEM services, deployment services, colocation services and IT asset disposition services, CIARA is considered to be one of the largest system manufacturers in North America and the only provider capable of offering a total hardware lifecycle management solution. The company’s products are employed worldwide by organizations small to large in the sectors of public cloud, content delivery, finance, aerospace, engineering, transportation, energy, government, education and defense.
The post CIARA Achieves Platinum Intel Technology Provider, HPC Data Center Specialist Status appeared first on HPCwire.
Feb. 23 — As with many fields, computing is changing how geologists conduct their research. One example: the emergence of digital rock physics, where tiny fragments of rock are scanned at high resolution, their 3-D structures are reconstructed, and this data is used as the basis for virtual simulations and experiments.
Digital rock physics complements the laboratory and field work that geologists, petroleum engineers, hydrologists, environmental scientists, and others traditionally rely on. In specific cases, it provides important insights into the interaction of porous rocks and the fluids that flow through them that would be impossible to glean in the lab.
In 2015, the National Science Foundation (NSF) awarded a team of researchers from The University of Texas at Austin and the Texas Advanced Computing Center (TACC) a two-year, $600,000 grant to build the Digital Rocks Portal where researchers can store, share, organize and analyze the structures of porous media, using the latest technologies in data management and computation.
“The project lets researchers organize and preserve images and related experimental measurements of different porous materials,” said Maša Prodanović, associate professor of petroleum and geosystems engineering at The University of Texas at Austin (UT Austin). “It improves access to them for a wider geosciences and engineering community and thus enables scientific inquiry and engineering decisions founded on a data-driven basis.”
The grant is a part of EarthCube, a large NSF-supported initiative that aims to create an infrastructure for all available Earth system data to make the data easily accessible and useable.
Small pores, big impacts
The small-scale material properties of rocks play a major role in their large-scale behavior – whether it is how the Earth retains water after a storm or where oil might be discovered and how best to get it out of the ground.
As an example, Prodanović points to the limestone rock above the Edwards Aquifer, which underlies central Texas and provides water for the region. Fractures occupy about five percent of the aquifer rock volume, but these fractures tend to dominate the flow of water through the rock.
“All of the rain goes through the fractures without accessing the rest of the rock. Consequently, there’s a lot of flooding and the water doesn’t get stored,” she explained. “That’s a problem in water management.”
Digital rock physicists typically perform computed tomography (CT) scans of rock samples and then reconstruct the material’s internal structure using computer software. Alternatively, a branch of the field creates synthetic, virtual rocks to test theories of how porous rock structures might impact fluid flow.
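To make the synthetic-rock idea concrete, here is a minimal, purely illustrative Python sketch (none of this code is from the project): it generates a random 3-D binary voxel volume and computes its porosity, the void fraction that controls how much fluid a rock can hold. The grid size, void probability, and seed are arbitrary assumptions.

```python
import random

def synthetic_rock(n, void_prob, seed=42):
    """Generate an n x n x n binary volume: 1 = pore (void), 0 = solid grain."""
    rng = random.Random(seed)
    return [[[1 if rng.random() < void_prob else 0
              for _ in range(n)] for _ in range(n)] for _ in range(n)]

def porosity(volume):
    """Porosity = fraction of voxels that are pore space."""
    pores = sum(voxel for plane in volume for row in plane for voxel in row)
    total = len(volume) ** 3
    return pores / total

rock = synthetic_rock(n=20, void_prob=0.05)  # ~5% voids, like fractured aquifer rock
print(round(porosity(rock), 3))
```

Real digital rock workflows operate on CT-derived volumes with billions of voxels and use far more sophisticated structure generation, which is exactly why the resulting datasets run to gigabytes.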
In both cases, the three-dimensional datasets that are created are quite large — frequently several gigabytes in size. This leads to significant challenges when researchers seek to store, share and analyze their data. Even when data sets are made available, they typically only live online for a matter of months before they are erased due to space issues. This impedes scientific cross-validation.
Furthermore, scientists often want to conduct studies that span multiple length scales — connecting what occurs at the micrometer scale (a millionth of a meter: the size of individual pores and grains making up a rock) to the kilometer scale (the level of a petroleum reservoir, geological basin or aquifer), but cannot do so without available data.
The Digital Rocks Portal helps solve many of these problems.
James McClure, a computational scientist at Virginia Tech, uses the Digital Rocks Portal to access the data he needs to perform large-scale fluid flow simulations and to share data directly with collaborators.
“The Digital Rocks Portal is essential to share and curate experimentally-generated data, both of which are essential to allow for re-analyses and reproducibility,” said McClure. “It also provides a mechanism to enable analyses that span multiple data sets, which researchers cannot perform individually.”
The Portal is still young, but its creators hope that, over time, material studies at all scales can be linked together and results can be confirmed by multiple studies.
“When you have a lot of research revolving around a five-millimeter cube, how do I really say what the properties of this are on a kilometer scale?” Prodanović said. “There’s a big gap in scales and bridging that gap is where we want to go.”
A framework for knowledge sharing
When the research team was preparing the Portal, they visited the labs of numerous research teams to better understand the types of data researchers collected and how they naturally organized their work.
Though there was no domain-wide standard, there were enough commonalities to enable them to develop a framework that researchers could use to input their data and make it accessible to others.
“We developed a data model that ended up being quite intuitive for the end-user,” said Maria Esteva, a digital archivist at TACC. “It captures features that illustrate the individual projects but also provides an organizational schema for the data.”
The entire article can be found here.
Source: Aaron Dubrow, TACC
The post Supercomputer-Powered Portal Provides Data, Simulations to Geology and Engineering Community appeared first on HPCwire.
OAK RIDGE, Tenn., Feb. 23 — The Department of Energy’s Oak Ridge National Laboratory has announced the latest release of its Adaptable I/O System (ADIOS), a middleware that speeds up scientific simulations on parallel computing resources such as the laboratory’s Titan supercomputer by making input/output operations more efficient.
While ADIOS has long been used by researchers to streamline file reading and writing in their applications, the production of data in scientific computing is growing faster than I/O can handle. Reducing data “on the fly” is critical to keeping I/O up to speed with today’s largest scientific simulations and realizing the full potential of resources such as Titan for real-world scientific breakthroughs. It is also a key feature of the latest ADIOS release.
“As we approach the exascale, there are many challenges for ADIOS and I/O in general,” said Scott Klasky, scientific data group leader in ORNL’s Computer Science and Mathematics Division. “We must reduce the amount of data being processed and program for new architectures. We also must make our I/O frameworks interoperable with one another, and version 1.11 is the first step in that direction.”
The upgrade boasts a number of new improvements aimed at ensuring these challenges are met, including:
- a simplified write application programming interface (API) that reduces complexity via introduction of a novel buffering technique;
- lossy compression with ZFP, software developed by Peter Lindstrom at Lawrence Livermore National Laboratory, which reduces the size of data on storage;
- a query API with multiple indexing/query methods, from John Wu at Lawrence Berkeley National Laboratory and Nagiza Samatova of North Carolina State University;
- a “bprecover” utility for resilience that exploits the ADIOS file format’s multiple copies of metadata;
- in-memory time aggregation for file-based output, allowing for efficient I/O with difficult write patterns;
- novel Titan-scale-supported staging from Manish Parashar at Rutgers University; and
- various other performance improvements.
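To see what lossy, on-the-fly reduction buys, consider this toy uniform quantizer in Python. It is a far cruder scheme than ZFP’s and is not ADIOS code; the tolerance value is an arbitrary assumption. Each float collapses to a small integer, so nearby values share codes and compress well, at the cost of a bounded error.

```python
def quantize(values, tolerance):
    """Lossy-compress floats to small integers; max error is tolerance / 2."""
    return [round(v / tolerance) for v in values]

def dequantize(codes, tolerance):
    """Reconstruct approximate floats from the integer codes."""
    return [k * tolerance for k in codes]

# Simulation output: values in a narrow band compress to a few distinct integers.
data = [1.0003, 0.9998, 1.0101, 0.9895, 1.0204]
codes = quantize(data, tolerance=0.01)
approx = dequantize(codes, tolerance=0.01)
assert all(abs(a - b) <= 0.005 for a, b in zip(data, approx))
```

ZFP instead compresses 4x4x4 blocks of floating-point arrays with user-controlled accuracy, but the trade-off is the same: accept a bounded loss of precision in exchange for far less data hitting the file system.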
These modifications represent the latest evolution in ADIOS’s journey from research to production, as version 1.11 now makes it easier to move data from one code to another. ADIOS’s user base has gone from just a single code to hundreds of parallel applications spread across dozens of domain areas.
“ADIOS has been a vital part of our large-scale XGC fusion code,” said Choong-Seock Chang, head of the Center for Edge Physics Simulation at Princeton Plasma Physics Laboratory. “With the continuous version updates, the performance of XGC keeps getting better; during one of our most recent ITER runs, we were able to further accelerate the I/O, which enabled new insights into our scientific results.”
ADIOS’s success in the scientific community has led to its adoption in several industrial applications seeking more efficient I/O. Demand for ADIOS has grown sufficiently that the development team is now partnering with Kitware, a world leader in data visualization infrastructure, to construct a data framework for the scientific community that will make it easier to locate and reduce the flood of data plaguing parallel scientific computing, and that will likely further grow ADIOS’s user base.
Throughout its evolution, ADIOS’s development team has ensured that the middleware remains fast, concurrent, scalable, portable, and perhaps most of all, resilient (as demonstrated by the bprecover feature in 1.11, which allows for the recovery of uncorrupted data). According to Klasky, being part of the DOE national lab system was critical to ensuring the scalability of the ever-growing platform, an asset that will remain critical as ORNL moves toward the exascale.
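The idea behind recovering data from duplicated metadata can be sketched as follows. This is a hypothetical illustration in Python, not the BP file format or the actual bprecover logic: the writer stores two copies of a tiny index, and the reader falls back to the second copy when the first is corrupted.

```python
import json

def write_with_redundant_index(records):
    """Serialize records plus two copies of the index, echoing ADIOS's
    duplicated metadata; purely illustrative, not the BP file format."""
    payload = json.dumps(records)
    index = {"count": len(records), "length": len(payload)}
    return {"data": payload, "index_a": dict(index), "index_b": dict(index)}

def recover(blob):
    """Prefer index_a; fall back to index_b if the first copy is corrupted."""
    for key in ("index_a", "index_b"):
        idx = blob.get(key)
        if idx and idx.get("length") == len(blob["data"]):
            return json.loads(blob["data"])[: idx["count"]]
    raise ValueError("both metadata copies corrupted")

blob = write_with_redundant_index([1, 2, 3])
blob["index_a"]["length"] = -1  # simulate corruption of the first copy
print(recover(blob))  # -> [1, 2, 3], recovered via the second copy
```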
Because exascale hardware is widely expected to be disruptive, particularly in terms of incredibly fast nodes that will make it difficult for networks and I/O to keep up, researchers are preparing now for the daunting I/O challenge to come.
ADIOS was one of four ORNL-led software development projects to receive funding from the Exascale Computing Project, a collaborative effort between the DOE’s Office of Science and the National Nuclear Security Administration to develop a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce to meet the scientific and national security mission needs of DOE in the mid-2020s timeframe.
The award is a testament to ADIOS’s ability to make newer technologies sustainable, usable, fast, and interoperable, so that they will all be able to read from, and possibly write to, other important file formats.
As the journey to exascale continues, ADIOS’s unique I/O capabilities will be necessary to ensure that the world’s most powerful computers, and the applications they host, can continue to facilitate scientific breakthroughs impossible through experimentation alone.
“With ADIOS we saw a 20-fold increase in I/O performance compared to our best previous solution,” said Michael Bussmann, a junior group leader in computational radiation physics at Helmholtz-Zentrum Dresden-Rossendorf. “This made it possible to take full snapshots of the simulation, enabling us to study our laser-driven particle accelerator from the single-particle level to the full system. It is a game changer, going from 20 minutes to below one minute for a snapshot.”
The Titan supercomputer is part of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility.
ORNL is managed by UT-Battelle for DOE’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
The post ADIOS Version 1.11 Moves I/O Framework from Research to Production appeared first on HPCwire.
The performance of trade and match servers can be a critical differentiator for financial trading houses. Latency is often the most problematic bottleneck affecting an institution’s ability to quickly match and complete trades. Earlier this month, HPE’s ProLiant XL170r Gen9 Trade & Match Servers demonstrated the lowest max latency of any system tested with the STAC-N1 benchmark, according to STAC.
Compared to all other public STAC-N1 reports of Ethernet-based SUTs, this SUT (stack under test) demonstrated:
- The lowest max latency at both the base rate (100K messages per second) and the highest rate tested (1M messages per second). Max at 1 million messages per second was 18 microseconds vs. the previous best 51 microseconds (SUT ID SFC141110).
- The lowest mean latency at both the base rate and the highest rate tested.
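As a quick back-of-the-envelope check (not a STAC metric), the reported figures imply roughly a 2.8x reduction in worst-case latency at the 1 million messages-per-second rate:

```python
prev_max_us = 51.0  # previous best max latency, microseconds (SUT ID SFC141110)
new_max_us = 18.0   # HPE/Solarflare SUT max latency at 1M messages/second
improvement = prev_max_us / new_max_us
print(f"{improvement:.1f}x lower max latency")  # -> 2.8x lower max latency
```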
STAC notes, “Because STAC-N1 is not tied to a particular network API, it can be used to compare stacks using different APIs (for example, UDP/Ethernet vs. RDMA/InfiniBand). However, STAC-N1 is often used to compare different stacks using the same API (for example, UDP with one vendor’s NIC and driver vs. UDP with another vendor’s NIC and driver). When making the latter type of comparison, it is essential that the SUTs you are comparing used the same STAC-N1 binding.”
In this instance, the stack under test consisted of UDP over 10GbE using OpenOnload on RHEL 6.6 with Solarflare SFN 8522-PLUS Adapters on HPE ProLiant XL170r Gen9 Trade & Match Servers.
Test details: “The STAC-N benchmark was performed on two of the 4 HPE ProLiant XL170r Gen9 Servers in a 2U HPE Apollo r2600 chassis, a component of the HPE Apollo 2000 System. The HPE Apollo Trade and Match Server Solution is designed to minimize system latency, utilizing the HPE Apollo 2000 system that has been optimized for applications performing best at maximum frequency and with lower core count. This solution utilizes custom tools to enable over-clocked processors for improved performance specifically for high-frequency trading operations. The HPE Trade and Match Server Solution building block is based on the HPE Apollo 2000 System. Each chassis can accommodate up to four HPE ProLiant XL170r Gen9 Server trays, supporting one Intel Xeon E5-1680v3 processor each.
“The STAC N1 Benchmark exercised two of four HPE ProLiant XL170r Gen9 Servers in a 2U HPE Apollo r2600 Chassis, with each server configured with one Intel E5-1680v3 processor and eight 32 GiB DIMMs. The chassis was configured with two 1400W Power supplies.”
In this recurring feature, we’ll provide you with financial highlights from companies in the HPC industry. Check back in regularly for an updated list with the most pertinent fiscal information.
Cray (NASDAQ: CRAY)
Cray has reported fourth quarter and full year 2016 financial results. Total revenue for the year was $629.8 million, a decrease of nearly $100 million from the year prior ($724.7 million). Net income was $10.6 million for the year, or $0.26 per diluted share.
For the fourth quarter, sales reached $346.6 million, a substantial increase from the same quarter of 2015 ($267.5 million). Net income for the quarter was $51.8 million, or $1.27 per diluted share.
“While 2016 wasn’t nearly as strong as we originally targeted, we finished the year well, with the largest revenue quarter in our history and solid cash balances, as well as delivering profitability for the year,” said Peter Ungaro, president and CEO of Cray. “We completed numerous large system installations around the world in the fourth quarter, providing our customers with the most scalable, highest performance supercomputing, storage and analytics solutions in the market. We continue to lead the industry at the high-end and, despite an ongoing downturn in the market, we’re in excellent position to continue to deliver for our customers and drive long-term growth.”
Super Micro Computer (NASDAQ: SMCI)
Supermicro has announced second quarter 2017 financial results. The company reported quarterly net sales of $652 million, which was an increase of 23.3% from the first quarter of the year and up 2% from the same quarter of 2016. GAAP net income was $22 million, up 62.5% from the first quarter and down 36.6% from the same quarter of 2016. Server offerings accounted for 68.1% of the total revenue.
For the third quarter of 2017, Supermicro expects $570-$630 million in net sales and GAAP earnings per diluted share to sit between $0.34 and $0.42. For more information, click here.
“We are pleased to report record second quarter revenues of $652.0 million that exceeded our guidance and outpaced a strong compare with last year. Contributing to this strong growth was our Twin family product line including our FatTwin, Storage, HPC, MicroBlade, and strong growth from enterprise cloud and Asia Pacific, particularly China. Component shortages and pricing, product and geographic mix adversely impacted gross margins while improved leverage allowed us to deliver stronger operating margins from last quarter,” said Charles Liang, Chairman and Chief Executive Officer. “We expect to continue the growth of last quarter and be reflected in the year-over-year revenue growth in the March quarter based on an increasing number of sizable customer engagements demanding the performance and advantages of our leading product lines. In addition, we are well positioned to benefit from technology transitions in 2017 and have upgraded our product lines to optimize these new technologies.”
Mellanox Technologies (NASDAQ: MLNX)
Mellanox Technologies has reported fourth quarter and full year 2016 results. For the year, total revenue was $857.5 million, GAAP operating income was $30.6 million, and GAAP net income was $18.5 million ($0.37 per diluted share). For the fourth quarter, revenue was $221.7 million, GAAP operating income was $13.4 million, and GAAP net income was $9 million ($0.18 per diluted share).
For the first quarter of 2017, the company predicts revenue to range between $200-210 million. For more information, click here.
“During the fourth quarter we saw continued sequential growth in our InfiniBand business, driven by robust customer adoption of our 100 Gigabit EDR solutions into artificial intelligence, machine learning, high-performance computing, storage, database and more. Our quarterly, and full-year 2016 results, highlight InfiniBand’s continued leadership in high-performance interconnects,” said Eyal Waldman, president and CEO of Mellanox. “Customer adoption of our 25, 50, and 100 gigabit Ethernet solutions continued to grow in the fourth quarter. Adoption of Spectrum Ethernet switches by customers worldwide generated positive momentum exiting 2016. Our fourth quarter and full-year 2016 results demonstrate Mellanox’s diversification, and leadership in both Ethernet and InfiniBand. We anticipate growth in 2017 from all Mellanox product lines.”
Hewlett Packard Enterprise (NYSE: HPE)
HPE has announced full year and fourth quarter financial results for 2016. The company brought in $50.1 billion for the year, down 4% from the prior year period. For the fourth quarter, HPE’s net revenue was $12.5 billion, a decrease of 7% from the fourth quarter of 2015. HPE reported GAAP diluted net earnings per share of $1.82 for the year and $0.18 for the quarter.
For the first quarter of 2017, HPE predicts GAAP diluted net earnings per share to sit between $0.03 and $0.07. For the year, the company expects it to range between $0.72 and $0.82. For more information, click here.
“FY16 was a historic year for Hewlett Packard Enterprise,” said Meg Whitman, president and CEO of Hewlett Packard Enterprise. “During our first year as a standalone company, HPE delivered the business performance we promised, fulfilled our commitment to introduce groundbreaking innovation, and began to transform the company through strategic changes designed to enable even better financial performance.”
NVIDIA (NASDAQ: NVDA)
NVIDIA has reported results for the fourth quarter and fiscal 2017. Total sales for the year reached $6.91 billion, an increase of 38% from the year prior. GAAP earnings per diluted share were $1.13, up 117% from the previous year ($0.52). For the quarter, revenue was $2.17 billion, up 55% from the same quarter of 2016. GAAP earnings per diluted share reached $0.99, an increase of 19% from the third quarter.
For the first quarter of 2018, NVIDIA expects sales to sit around $1.90 billion. For more information, click here.
“We had a great finish to a record year, with continued strong growth across all our businesses,” said Jen-Hsun Huang, founder and CEO of NVIDIA. “Our GPU computing platform is enjoying rapid adoption in artificial intelligence, cloud computing, gaming, and autonomous vehicles. Deep learning on NVIDIA GPUs, a breakthrough approach to AI, is helping to tackle challenges such as self-driving cars, early cancer detection and weather prediction. We can now see that GPU-based deep learning will revolutionize major industries, from consumer internet and transportation to health care and manufacturing. The era of AI is upon us.”
IBM (NYSE: IBM)
IBM has reported 2016 fourth quarter and full year financial results. For the year, IBM announced $11.9 billion in net income from continuing operations, down 11% from the previous year ($13.4 billion). Diluted earnings per share were $12.39, down 9% from the year before. For the fourth quarter, the company reported net income of $4.5 billion from continuing operations, up 1% from the same quarter a year prior.
For 2017, IBM predicts GAAP diluted earnings per share to be at least $11.95. For more information, click here.
“In 2016, our strategic imperatives grew to represent more than 40 percent of our total revenue and we have established ourselves as the industry’s leading cognitive solutions and cloud platform company,” said Ginni Rometty, IBM chairman, president and CEO. “IBM Watson is the world’s leading AI platform for business, and emerging solutions such as IBM Blockchain are enabling new levels of trust in transactions of every kind. More and more clients are choosing the IBM Cloud because of its differentiated capabilities, which are helping to transform industries, such as financial services, airlines and retail.”
AMD (NASDAQ: AMD)
AMD has reported 2016 fourth quarter and full year financial results. For the year, the company announced revenue of $4.27 billion, an increase of 7% from 2015 ($3.99 billion). Total revenue for the quarter was $1.11 billion, up 15% year-over-year ($958 million).
For the first quarter of 2017, AMD predicts revenue to decrease 11%, plus or minus 3%. For more information, click here.
“We met our strategic objectives in 2016, successfully executing our product roadmaps, regaining share in key markets, strengthening our financial foundation, and delivering annual revenue growth,” said Dr. Lisa Su, AMD president and CEO. “As we enter 2017, we are well positioned and on-track to deliver our strongest set of high-performance computing and graphics products in more than a decade.”
Fujitsu (OTC: FJTSY)
Fujitsu has announced 2016 third quarter results. Consolidated revenue for the quarter was 1,115.4 billion yen, down 51.4 billion yen from the same quarter of 2015. The company also reported an operating profit of 37.3 billion yen, up 23.2 billion yen from the year prior. Net financial income was 5.5 billion yen, an improvement of 2.9 billion yen from the same period of 2015.
For the full year of 2016, Fujitsu expects revenue to reach 4,500 billion yen with an operating profit of 120 billion yen. For more information, click here.
Seagate Technology (NASDAQ: STX)
Seagate has reported second quarter 2017 financial results. The company announced revenue of $2.9 billion, net income of $297 million, and diluted earnings per share of $1.00. For more information, click here.
“The Company’s product execution, operational performance, and financial results improved every quarter throughout 2016. In the December quarter we achieved near record results in gross margin, cash flow, and profitability. Seagate’s employees are to be congratulated for their incredible effort,” said Steve Luczo, Seagate’s chairman and CEO. “Looking ahead, we are optimistic about the long-term opportunities for Seagate’s business as enterprises and consumers embrace and benefit from the shift of storage to cloud and mobile applications. Seagate is well positioned to work with the leaders in this digital transformation with a broad market-leading storage solution portfolio.”
Just what constitutes HPC and how best to support it is a keen topic currently. A new paper posted last week on arXiv.org – Rethinking HPC Platforms: Challenges, Opportunities and Recommendations – by researchers from the University of Edinburgh and University of St. Andrews suggests the emergence of “second generation” HPC applications (and users) requires a new approach to supporting infrastructure that draws on container-like technology and services.
In the paper they describe a set of services, which they call ‘cHPC’ (container HPC), to accommodate these emerging HPC application requirements and indicate they plan to benchmark key applications as a next step. “Many of the emerging second generation HPC applications move beyond tightly-coupled, compute-centric methods and algorithms and embrace more heterogeneous, multi-component workflows, dynamic and ad-hoc computation and data-centric methodologies,” write authors Ole Weidner, Rosa Filgueira Vicente, Malcolm Atkinson, and Adam Barker.
“While diverging from the traditional HPC application profile, many of these applications still rely on the large number of tightly coupled cores, cutting-edge hardware and advanced interconnect topologies provided only by HPC clusters. Consequently, HPC platform providers often find themselves faced with requirements and requests that are so diverse and dynamic that they become increasingly difficult to fulfill efficiently within the current operational policies and platform models.”
It’s best to read the paper in full; it examines the challenges and potential solutions in some detail. The authors single out three application areas and report that, as a group, they have deep experience working with them:
- Data Intensive Applications. Data-intensive applications require large volumes of data and devote a large fraction of their execution time to I/O and manipulation of data. Careful attention to data handling is necessary to achieve acceptable performance or completion. “They are frequently sensitive to local storage for intermediate results and reference data. It is also sensitive to the data-intensive frameworks and workflow systems available on the platform and to the proximity of data it uses.” Examples of large-scale, data-intensive HPC applications are seismic noise cross-correlation and misfit calculation as encountered, e.g. in the VERCE project.
- Dynamic Applications. These fall into two broad categories: “(i) applications for which we do not have full understanding of the runtime behavior and resource requirements prior to execution and (ii) applications which can change their runtime behavior and resource requirements during execution.” Two examples cited are: (a) applications that use ensemble Kalman-Filters for data assimilation in forecasting, and (b) simulations that use adaptive mesh refinement (AMR) to refine the accuracy of their solutions.
- Federated applications. “Based on the idea that federation fosters collaboration and allows scalability beyond a single platform, policies and funding schemes explicitly supporting the development of concepts and technology for HPC federations have been put into place. Larger federations of HPC platforms are XSEDE in the US and PRACE in the EU. Both platforms provide access to several TOP500-ranked HPC clusters and an array of smaller and experimental platforms.”
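A toy 1-D refinement loop (hypothetical, not from the paper) shows why AMR-style applications are “dynamic”: the number of cells, and hence the memory and compute required, depends on the solution itself and cannot be known before execution.

```python
def refine(cells, f, threshold):
    """Split any cell whose endpoint values of f differ by more than threshold."""
    out = []
    for a, b in cells:
        if abs(f(b) - f(a)) > threshold:
            mid = (a + b) / 2.0
            out.extend([(a, mid), (mid, b)])  # refine: one cell becomes two
        else:
            out.append((a, b))                # coarse cell is good enough
    return out

def step(x):
    """A sharp front at x = 0.5, standing in for a steep solution feature."""
    return 0.0 if x < 0.5 else 1.0

cells = [(0.0, 1.0)]
for _ in range(5):
    cells = refine(cells, step, threshold=0.1)
print(len(cells))  # prints 6: cells accumulate only where the solution varies
```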
“To explore the implementation options for our new platform model, we have developed cHPC, a set of operating-system level services and APIs that can run alongside and integrate with existing jobs via Linux containers (LXC) to provide isolated, user-deployed application environment containers, application introspection and resource throttling via the cgroups kernel extension. The LXC runtime and software-defined networking are provided by Docker and run as OS services on the compute nodes,” say the authors.
The authors note prominently in their discussion that many traditional HPC applications are still best served by traditional HPC environments for which they have been carefully coupled.
“It would be false to claim that current production HPC platforms fail to meet the requirements of their application communities. It would be equally wrong to claim that the existing platform model is a pervasive problem that generally stalls the innovation and productivity of HPC applications…[There are] significant classes of applications, often from the monolithic, tightly-coupled parallel realm, [that] have few concerns regarding the issues outlined in this paper…They are the original tenants and drivers of HPC and have an effective social and technical symbiosis with their platform environments.
“However, it is equally important to understand that other classes of applications (that we call second generation applications) and their respective user communities share a less rosy perspective. These second generation applications are typically non-monolithic, dynamic in terms of their runtime behavior and resource requirements, or based on higher-level tools and frameworks that manage compute, data and communication. Some of them actively explore new compute and data handling paradigms, and operate in a larger, federated context that spans multiple, distributed HPC clusters.”
To qualify and quantify their assumptions, the authors report they are in the process of designing a survey that will be sent out to platform providers and application groups to verify current issues on a broader and larger scale. They write, “The main focus of our work will be on the further evaluation of our prototype system. We are working on a ‘bare metal’ deployment on HPC cluster hardware at EPCC. This will allow us to carry out detailed measurements and benchmarks to analyze the overhead and scalability of our approach. We will also engage with computational science groups working on second generation applications to explore their real-life application in the context of cHPC.”
The post Rethinking HPC Platforms for ‘Second Gen’ Applications appeared first on HPCwire.
SANTA CLARA, Calif., Feb. 22 — DataDirect Networks (DDN) today announced that it was, once again, the top storage provider among HPC sites surveyed by Intersect360 Research. For the third consecutive year, DDN posted the largest share of installed systems at surveyed HPC sites, holding its solid lead over other storage providers in Intersect360 Research’s “Top of All Things in HPC” survey. This report caps off a year of strong recognition of DDN as the performance storage leader, with awards ranging from best HPC storage product/technology company, best big data innovator, best storage company and best enterprise NAS to leadership recognition in IDC’s MarketScape report.
As illustrated in the table below, DDN had the largest share of installed systems at HPC sites (14.8 percent), gaining almost a full percentage point over the previous year. DDN’s closest competitors follow at 12.7 and 11.0 percent, and all other suppliers had less than 10 percent share of reported storage systems. DDN’s continued strong showing is a testament to the success of the company’s focus on solving the toughest data access and management challenges to deliver consistent, cost-effective performance at scale.
Intersect360 Research forecasts storage to be the fastest growing hardware sector in HPC, and according to a recent DDN survey, end users in the world’s most data-intense environments, like those in many general IT environments, are increasing their use of cloud. However, unlike general IT environments, the HPC sector is overwhelmingly opting for private and hybrid clouds instead of the public cloud. More than 90 percent of HPC sites surveyed are modernizing their data centers with flash, with the largest cited use cases being flash acceleration of parallel file system metadata, specific application data and specific end-user data. Survey responses show that I/O performance and rapid data growth remain the biggest issues for HPC organizations – a circumstance that favors continuing strong demand for DDN technologies that are leading the market in solving these challenges.
“High-performance sites are incredibly challenging IT environments with massive data requirements across very diverse application and user types,” said Laura Shepard, senior director of product marketing, DDN. “Because we are a leader in this space, we have the expertise to provide the optimal solutions for traditional and commercial high-performance customers to ensure they are maximizing their compute investment with the right storage infrastructure.”
DataDirect Networks (DDN) is the world’s leading big data storage supplier to data-intensive, global organizations. For more than 18 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.
The post DDN Named Top Storage Provider Among HPC Sites by Intersect360 appeared first on HPCwire.
UNIVERSITY PARK, Pa., Feb. 22 — The Penn State Cyber-Laboratory for Astronomy, Materials, and Physics (CyberLAMP) is acquiring a high-performance computer cluster, funded by a grant from the National Science Foundation, that will facilitate interdisciplinary research and training in cyberscience. The hybrid computer cluster will combine general purpose central processing unit (CPU) cores with specialized hardware accelerators, including the latest generation of NVIDIA graphics processing units (GPUs) and Intel Xeon Phi processors.
“This state-of-the-art computer cluster will provide Penn State researchers with over 3200 CPU and Phi cores, as well as 101 GPUs, a significant increase in the computing power available at Penn State,” said Yuexing Li, assistant professor of astronomy and astrophysics and the principal investigator of the project. (Source: Penn State)
Astronomers and physicists at Penn State will use this computer cluster to improve the analysis of the massive observational datasets generated by cutting-edge surveys and instruments. They will be able to broaden the search for Earth-like planets by the Habitable Zone Planet Finder, sharpen the sensitivity of the Laser Interferometer Gravitational-Wave Observatory (LIGO) to the cataclysmic merger of ultra-massive astrophysical objects like black holes and neutron stars, and dramatically enhance the ability of the IceCube experiment to detect and reconstruct elusive cosmological and atmospheric neutrinos.
“The order-of-magnitude improvement in processing power provided by CyberLAMP GPUs will revolutionize the way the IceCube experiment analyzes its data, enabling it to extract many more neutrinos, with much finer detail, than ever before,” said co-principal investigator Doug Cowen, professor of physics and astronomy and astrophysics.
“The CyberLAMP team performs sophisticated simulations to study the formation of planetary systems and the universe,” said co-principal investigator Eric Ford, professor of astronomy and astrophysics and deputy director of the Center for Exoplanets and Habitable Worlds. “The CyberLAMP cluster will enable simulations with greater realism to investigate mysteries such as how Earth-like planets form, and to probe the nature of dark energy.”
“Researchers from Penn State’s Material Research Institute (MRI) will perform realistic, atomistic-scale simulations to guide the design and development of next-generation complex materials,” said co-principal investigator Adri van Duin, professor of mechanical and nuclear engineering and director of the Materials Computation Center.
Co-principal investigator Mahmut Kandemir, professor of computer science and engineering, said, “Computer scientists will work with other scientists to analyze the performance of their calculations when using new hardware accelerators so as to increase the efficiency of their simulations and to inform the design of future computer architectures.”
“Penn State’s Institute for CyberScience (ICS) is excited by this opportunity to rapidly expand the access of Penn State researchers and students to the new generation of hardware accelerators that will be critical to meet the growing computational needs of ‘Big Data’ and ‘Big Simulation’ research,” said Jenni Evans, professor of meteorology and interim director of the Institute for CyberScience.
“This grant will enable Penn State to shed new light on high-priority topics in U.S. national strategic plans,” said Andrew Stephenson, distinguished professor of biology and associate dean for research and innovation of Penn State’s Eberly College of Science, “such as the National Research Council’s 2010 Decadal Survey for astronomy and astrophysics to search for habitable planets and to understand the fundamental physics of the cosmos, as well as the White House’s Materials Genome Initiative to expedite development of new materials.”
The new system will support research in five broad research groups, including 29 Penn State faculty members across seven departments, three colleges and two institutes at Penn State’s University Park campus, as well as four faculty members from three Commonwealth Campuses, and numerous graduate students. The CyberLAMP system will be installed in Penn State’s new Tower Road Data Center and will be accessible to faculty and students across the Commonwealth.
“The grant will also provide access to the CyberLAMP system to support a wide range of outreach programs at regional and national levels, including the training of students and young researchers nationwide, educational programs for K-12 students and teachers, broadening participation of women and underrepresented minority students in cyberscience, and partnering with industry on materials research and the design of next-generation high-performance computer architectures,” said Chris Palma, outreach director of CyberLAMP.
The three-year project, titled “MRI: Acquisition of High Performance Hybrid Computing Cluster to Advance Cyber-Enabled Science and Education at Penn State,” is led by Li, with co-principal investigators Ford, Cowen, Kandemir and van Duin, in partnership with the Institute for CyberScience’s (ICS) Advanced CyberInfrastructure group, led by Chuck Gilbert, chief architect of ICS, and Wayne Figurelle, assistant director of ICS. ICS is a University-wide institute whose mission is to promote interdisciplinary research. It was established in 2012 to develop a strategic and coherent vision for cyberscience at Penn State.
Source: Penn State