Feed aggregator

Lustre: Stronger-than-Ever

Related News- HPC Wire - 2 hours 47 min ago

Clearly, Lustre* will continue to dominate the persistent parallel file system arena, at least for the next few years. The development of such complex technology does not move as quickly as it does for many other applications, and even if parallel file systems are eventually replaced, a gap would remain until that successor technology becomes available. DDN® announced in November 2016 that all of its Lustre features would be merged into the Lustre master branch to give the entire community more transparent access to the code, reducing the overhead of code development management and better aligning with the latest advancements. Although numerous contributors and collaborators have asked why DDN would choose to share these patches rather than leverage them as a competitive advantage and differentiator, DDN is committed to delivering these features as a foundational framework coded into the Lustre file system. These features will then support DDN’s broader development efforts, which now extend into areas such as security, performance, RAS, and data management.

Along with the recently announced features, DDN proposes a novel approach for Lustre’s policy engine (LiPE) that aims to reduce installation and deployment complexity while delivering significantly faster results. LiPE relies on a set of components that allows the engine to scan Lustre MDTs quickly, create an in-memory mapping of the file system’s objects, and implement data management policies based on that mapped information. Initially, this approach allows users to define policies that trigger data automation via Lustre HSM hooks or external data management mechanisms. In the next stage of development, LiPE may be integrated with a File Heat Map mechanism for more automated and transparent data management, resulting in better utilization of parallel storage infrastructure. (File Heat Map is another feature under development that will create a file mapping that weights each object according to its utilization. For example, the weight of unmodified files will decay over time, indicating the likelihood that such a file is a WORM-style file suitable for moving to a different disk tier.)
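To make the heat-decay idea concrete, here is a minimal Python sketch of how a policy pass over scan records might flag cold files for a lower tier. The record format, decay half-life, and threshold are illustrative assumptions, not LiPE’s actual design.

```python
# Minimal file-heat decay sketch, assuming hypothetical scan records of
# (path, heat, last_modified). Not DDN's LiPE implementation.
import math
import time

DECAY_HALF_LIFE = 30 * 24 * 3600   # assumed: heat halves every 30 days without modification
ARCHIVE_THRESHOLD = 0.05           # assumed: below this weight, treat as a WORM-style candidate

def decayed_heat(heat, last_modified, now=None):
    """Exponentially decay a file's heat based on time since last modification."""
    now = now or time.time()
    age = max(0.0, now - last_modified)
    return heat * math.exp(-math.log(2) * age / DECAY_HALF_LIFE)

def select_for_archive(scan_records):
    """Yield paths whose decayed heat suggests moving them to a colder tier."""
    for path, heat, last_modified in scan_records:
        if decayed_heat(heat, last_modified) < ARCHIVE_THRESHOLD:
            yield path   # candidate for an HSM archive/release action

# Example: one recently modified file and one untouched for a year.
records = [("/lustre/project/results.h5", 1.0, time.time() - 3600),
           ("/lustre/project/old_run.log", 1.0, time.time() - 365 * 24 * 3600)]
print(list(select_for_archive(records)))   # -> ['/lustre/project/old_run.log']
```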

Regarding performance, DDN has designed and developed a new Quality of Service (QoS) approach. A QoS mechanism based on the Token Bucket Filter algorithm has been implemented at the OST level, allowing system administrators to define the maximum number of RPCs that a user, group, or job ID may issue to a given OST. Throttling performance in this way provides I/O control and bandwidth reservation; for example, by guaranteeing that higher-priority jobs run in a more predictable time, performance variations due to I/O delays can be avoided. A new initiative between DDN and a few renowned European universities will investigate the implementation of a higher-level tool, possibly at the user level, that would allow easier utilization and configuration of QoS along with a set of new usability enhancements.
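For background, the Token Bucket Filter itself is a simple rate-limiting algorithm; the Python sketch below shows the general idea. The per-job rate and burst size are illustrative assumptions, and this is not the Lustre or DDN implementation.

```python
# Minimal token-bucket rate limiter sketch in the spirit of the Token Bucket Filter
# described above. Rates and burst sizes are illustrative assumptions only.
import time

class TokenBucket:
    def __init__(self, rate_rpcs_per_s, burst):
        self.rate = rate_rpcs_per_s    # tokens (RPC credits) added per second
        self.capacity = burst          # maximum tokens the bucket can hold
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, cost=1):
        """Return True if an RPC of the given cost may be issued now."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Example: limit a hypothetical job ID to 100 RPCs/s with a burst of 20 toward one OST.
bucket = TokenBucket(rate_rpcs_per_s=100, burst=20)
issued = sum(bucket.allow() for _ in range(50))
print(f"RPCs admitted immediately: {issued}")   # roughly the burst size
```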

Other interesting features from DDN that will be available in Lustre 2.10 and its minor releases during the LTS cycle include the Project Quotas facility, single-thread performance enhancements, and secured Lustre (MLS and isolation), among others. In keeping with new HPC trends, a tremendous amount of work has also been invested in integrating Lustre with Linux container-based workloads, providing native Lustre file system capabilities within containers, support for newer kernels, and specialized Artificial Intelligence and Machine Learning appliances. Customers who are moving toward software-defined storage may be surprised to learn that, as part of its parallel file system strategy, DDN has also recently announced that it will offer ZFS and Lustre as software-only products.

For more information about DDN’s Lustre offerings, visit the EXAScaler® product page.

Note: Other names and brands may be claimed as the property of others.

The post Lustre: Stronger-than-Ever appeared first on HPCwire.

GlobalFoundries Puts Wind in AMD’s Sails with 12nm FinFET

Related News- HPC Wire - Sun, 09/24/2017 - 13:54

From its annual tech conference last week (Sept. 13), where GlobalFoundries welcomed more than 600 semiconductor professionals (reaching the Santa Clara venue’s max capacity and doubling 2016 attendee numbers), the foundry business, one of only four of its kind, launched a raft of announcements, including a new 12nm FinFET process for high performance applications. Prominent customer Advanced Micro Devices announced it will use the 12nm technology in its Ryzen CPUs and Vega GPUs, bolstering its competitiveness against Intel and Nvidia.

GlobalFoundries said the new 12nm platform, which will be ready for risk production in the first half of 2018, will offer a 15 percent improvement in circuit density and a greater than 10 percent performance boost over industry 16nm FinFET solutions. (There’s an intended improvement over 14nm too, of course, but no specific figures were offered.) Manufactured at GlobalFoundries’ Fab 8 factory in Malta, N.Y., 12LP (LP stands for Leading Performance) builds on the company’s 14nm FinFET platform, which has been in high-volume production since early 2016.

“It pushes new design rules and some new constructs,” said GlobalFoundries CEO Sanjay Jha, “but our fundamental focus is to enable people who have already designed 14nm to be able to migrate to 12LP. It is a cost reduction path as well as a performance enhancement path for a number of our customers, including AMD.”

Source: GlobalFoundries

The 12LP platform features enhancements for RF/analog applications as well as new market-focused capabilities for automotive, a major focus for GlobalFoundries and close customer AMD, which, according to a report published Wednesday by CNBC, has snagged Tesla as a customer (though both GlobalFoundries and AMD categorically deny the rumor).

The 12nm technology is an intermediate step on the way to the 7nm FinFET node, slated for risk production in the first half of 2018. “It’s not a full redesign, but there is some design work to move into it,” said GlobalFoundries Chief Technology Officer Gary Patton in a press briefing. “We want it to be as close to an extension of 14 as possible. If you’ve invested all this design IP in 14,” he added, “you want to extend that as much as possible. We’ve already done several performance enhancements on our 14nm, and this is just the next step to provide another performance enhancement but also provide a path to get some additional scaling [ahead of 7nm].”

Tirias Research analyst Jim McGregor has a positive outlook on 12nm, despite what he believes is a marketing-driven naming convention. “It’s really a subnode,” he shared with HPCwire. “It’s kind of funny because most people don’t make their subnodes public. Every company, every manufacturer, whether it be Intel or GlobalFoundries, has those subnodes. GlobalFoundries isn’t doing a full node at 10nm; they’re still going to 7nm.”

“So the announcement of calling it 12nm kind of surprised me but [having a subnode is] not something new. Where most people are just getting into 10nm manufacturing at the end of this year, right now, GlobalFoundries is pushing for the beginning of their manufacturing for first half of 2018, so they’re still aggressive. It surprised me a little bit, but I think that it was more of a benefit for AMD than anything else.

“AMD is very competitive with Intel right now, and in some cases is slaughtering Intel, so they don’t want to be perceived as being behind the curve of Intel from a process perspective,” McGregor added.

AMD uses GlobalFoundries 14nm FinFET process technology for its x86 Zen-based chips (Ryzen and EPYC) and for its Vega high performance GPUs. So far only Ryzen and Vega have been tapped for a 12nm upgrade; whether the EPYC server platform will also take this intermediary step may well hinge on 7nm’s readiness and capacity.

With the introduction of its 7nm process node, GlobalFoundries is touting a 40 percent performance improvement over 14nm and a 60 percent power reduction. The company is launching 7nm with optical lithography and has designed the technology to be drop-in compliant with EUV.

GlobalFoundries’ CEO Jha said he expects 12nm to be the last optical node, followed by 7nm becoming the first EUV node. “It will start out optical just as… 90nm started out being 200mm and then migrated to 300mm and of course the vast majority of 90nm shipped actually happened in 300mm so at the moment, I think the last 200mm node that we see is really 130-110nm and thereafter everything is 300mm. I think EUV will turn out to be that way; 7nm is the first place that will start.”

Jha’s keynote emphasized the coming age of connected intelligence (read EnterpriseTech coverage here) that is pushing the development of new silicon technologies.

“We’re seeing an important shift in the business model of the foundry business,” the CEO said. “System companies, like Google, like Amazon, like Tesla, like Microsoft, are coming directly to foundries, they are working with EDA companies, IP companies and system design houses to get the IP. They want to control the hardware/software interface for the next generation of AI developments. They really want to control the architecture of both hardware and software, and it’s been the scenario over the last 10 years.

“People who control and capture the hardware/software interface capture most of the value in the industry, and certainly Apple has proven that’s where innovation occurs. I think more and more people are beginning to see this business model and I think we’re seeing more system houses hiring semiconductor engineers and driving innovation.”

The post GlobalFoundries Puts Wind in AMD’s Sails with 12nm FinFET appeared first on HPCwire.

Machine Learning at HPC User Forum: Drilling into Specific Use Cases

Related News- HPC Wire - Fri, 09/22/2017 - 12:47

The 66th HPC User Forum was held September 5-7 in Milwaukee, Wisconsin, at the elegant and historic Pfister Hotel. The 1893 Victorian décor and art of “The Grand Hotel Of The West” contrasted nicely with presentations on the latest trends in modern computing: deep learning, machine learning and AI.

Over the course of two days of presentations, a couple of common themes became obvious: first, that machine and deep learning currently focus on specific rather than general use cases; and second, that ML and DL need to be part of an integrated workflow to be effective.

This was exemplified by Dr. Maarten Sierhuis from Nissan Research Facility Silicon Valley with his presentation “Technologies for Making Self-Driving Vehicles the Norm.” One of the most engaging talks, Dr. Sierhuis’s multimedia presentation on the triumphs and challenges facing Nissan while developing its self-driving vehicle program showcased how machine and deep learning “drive” the autonomous vehicle revolution.

The challenge that Nissan and other deep learning practitioners face is that current deep learning algorithms are programmed to learn to do one thing extremely well, the specific use case: image recognition of stop signs, for example. Once an algorithm learns to recognize stop signs, the same amount of discrete learning must be applied to every other road sign a vehicle may encounter. To create a general-purpose “road sign learning algorithm,” you need not only a massive amount of image data (tens of millions of varied images) but also the compute to power the learning effort.
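To illustrate how narrow such a specific use case is, here is a minimal sketch (assuming PyTorch and torchvision are available) that fine-tunes a single binary “stop sign vs. everything else” head on a pretrained backbone; a general road-sign system would have to repeat this for every sign class, with vastly more data and compute. The dataset here is a random placeholder.

```python
# Hedged sketch: fine-tune one narrow "stop sign vs. other" classifier head.
# Random tensors stand in for a real labeled image set.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(pretrained=True)            # downloads ImageNet weights
backbone.fc = nn.Linear(backbone.fc.in_features, 2)    # 2 classes: stop sign / other

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)  # train only the new head
loss_fn = nn.CrossEntropyLoss()

for step in range(10):                                  # placeholder training loop
    images = torch.randn(8, 3, 224, 224)                # stand-ins for labeled photos
    labels = torch.randint(0, 2, (8,))
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
```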

Dr. Weng-Keen Wong from the NSF echoed much the same distinction between specific and general-case algorithms during his talk “Research in Deep Learning: A Perspective From NSF,” a distinction also mentioned by Nvidia’s Dale Southard during the disruptive technology panel. Arno Kolster from Providentia Worldwide, in his presentation “Machine and Deep Learning: Practical Deployments and Best Practices for the Next Two Years,” likewise argued that general-purpose learning algorithms are obviously the way to go, but are still some time out.

Nissan’s Dr. Sierhuis went on to highlight some challenges computers still face that human drivers take for granted. For example, what does an autonomous vehicle do when a road crew is blocking the road in front of it? As a human driver, we’d simply move into the opposite lane to “just go around,” but to the algorithms this breaks all the rules: crossing a double line, checking the opposite lane for oncoming traffic, shoulder checking, ensuring no pedestrians are crossing, and so on. All of this requires real-time re-programming for the vehicle encountering the obstacle and for other vehicles arriving at it.

Nissan proposes an “FAA-like” control system, but the viability of such a system remains to be seen. Certainly, autonomous technologies are slowly being integrated into new cars to augment human drivers, but a complete self-driving vehicle won’t appear in the marketplace overnight; cars will continue to function in a hybrid mode for some time. Rest assured, many of today’s young folks likely will never learn how to drive (or ask their parents to borrow the car on Saturday night).

This algorithmic specificity spotlights the difficulty of integrating deep learning into an actual production workflow.

Tim Barr’s (Cray) “Perspectives on HPC-Enabled AI” showed how Cray’s HPC technologies can be leveraged for machine and deep learning in vision, speech and language. Stating that it all starts with analytics, Mr. Barr illustrated how companies such as Daimler improve manufacturing processes and products by leveraging deep learning to curtail vehicle noise and reduce vibration in their newest vehicles. Nikunj Oza from NASA Ames gave examples of machine learning behind aviation safety and astronaut health maintenance in “NASA Perspective on Deep Learning.” Dr. Oza’s background in analytics brought a fresh perspective to the proceedings and showed that machine learning from historical data has earned a real place alongside modeling among industrial best practices.

In the simulation space, a fascinating talk from the LLNL HPC4Mfg program was William Elmer’s (LLNL) discussion of Procter & Gamble’s “Faster Turnaround for Multiscale Models of Paper Fiber Products.” Simulating various paper product textures and fibers greatly reduces the amount of energy required for drying and compaction. Likewise, Shiloh Industries’ Hal Gerber described “High Pressure Casting for Structural Requirements and The Implications on Simulation.” Shiloh’s team leverages HPC to change vehicle structure, especially in creating lighter components with composites like carbon fiber and mixed materials.

It’s clear from the discussion that machine learning and AI are set to become first-class citizens alongside traditional simulation within the HPC community in short order. Hewlett Packard Labs presented a first-of-its-kind analysis of ML benchmarking on HPC platforms, a space that is still unproven and crowded with a wide variety of new software implementations. Natalia Vassilieva’s “Characterization and Benchmarking of Deep Learning” showcased the “Book of Recipes” the lab is developing across various hardware and software configurations. Fresh off HPE’s integration of SGI technology into its stack, the talk not only highlighted the newer software platforms that learning systems leverage but also demonstrated that HPE’s portfolio of systems and experience in both HPC and hyperscale environments is impressive indeed.

Graham Anthony, CFO of BioVista, spoke on the “Pursuit of Sustainable Healthcare Through Personalized Medicine With HPC.” Mr. Anthony was very passionate about the work BioVista is doing with HPE and about how HPC and deep learning change the costs of healthcare by increasing the precision of treatment through better insights derived from data. BioVista takes insight from deep learning and feeds it into simulations for better treatments, a true illustration that learning is here to stay and works hand in hand with traditional HPC business process flows.

In his talk entitled “Charliecloud: Containers are Good for More Than Serving Cat Pictures?” Reid Priedhorsky from LANL covered a wide range of topics, including software stacks and design philosophy, and demoed Charliecloud, which enables execution of Docker containers on supercomputers.

The tongue-in-cheek title about cat pictures, which have become synonymous with deep learning image recognition, is no accident. Stand-alone image recognition is really cool, but as expounded upon above, the true benefit of deep learning comes from an integrated workflow in which data sources are ingested by a general-purpose deep learning platform, with outcomes that benefit business, industry and academia.

From the talks, it is also clear that machine learning, deep learning and AI are presently fueled more by industry than by academia. This could be due to strategic and competitive business drivers as well as the sheer amount of data that companies like Facebook, Baidu and Google have available to them, driving AI research and deep learning-backed products. HPC might not be needed to push these disciplines forward, which is likely why we see this trend becoming more prevalent in everyday news.

There was obvious concern from the audience about a future where machines rule the world. Ethical questions of companies knowingly replacing workers with robots or AI came up in a very lively discussion. Some argued that there is a place for both humans and AI — quieting the fear that tens of thousands of people would be replaced by algorithms and robots. Others see a more dismal human future with evil and malevolent robots taking control and little left for humans to do. These are, of course, difficult questions to answer and further debates will engage and entertain everyone as we keep moving toward an uncertain, technical future.

On a lighter note, Wednesday evening’s dinner featured a local volunteer docent, Dave Fehlauer, giving an enjoyable, informative talk on Captain Frederick Pabst: his family, his world and his well-known Milwaukee staple, The Pabst Brewing Company.

By all accounts, this was one of the most enjoyable HPC User Forum meetings. With a coherent theme and a dynamic range of presentations, the Forum kept everyone’s interest and showcased the realm of possibilities within this encouraging trend in computing, from both industry and academic research perspectives.

The next domestic HPC User Forum will be held April 16-18, 2018 at the Loews Ventana Canyon in Tucson, Arizona. See http://hpcuserforum.com for further information.

About the Author

Arno Kolster is Principal & Co-Founder of Providentia Worldwide, a technical consulting firm. Arno focuses on bridging enterprise and HPC architectures and was co-winner of IDC’s HPC Innovation Award with his partner Ryan Quick in 2012 and 2014. He was the recipient of the Alan El Faye HPC Inspiration Award in 2016. Arno can be reached at Arno.kolster@providentiaworldwide.com.

The post Machine Learning at HPC User Forum: Drilling into Specific Use Cases appeared first on HPCwire.

Biosoft Integrates Lab Equipment for Genetics Research with Help from PSSC Labs

Related News- HPC Wire - Fri, 09/22/2017 - 09:43

LAKE FOREST, Calif., Sept. 22, 2017 — PSSC Labs, a developer of custom High-Performance Computing (HPC) and Big Data computing solutions, today announced its work with Biosoft Integrators to provide powerful, turn-key HPC Cluster solutions for researchers in the biotech and genetic research fields.

Biosoft Integrators (BSI) works with researchers around the world to integrate laboratory technology platforms. With extensive experience in laboratory settings, the company’s founders realized that equipment and software are often poorly integrated and lack the functionality to work with each other, requiring researchers to manually transfer work and data between software and equipment. BSI gives researchers greater efficiency and manageability by providing tools that unify the laboratory and laboratory informatics. BSI brings knowledge, experience and technology platforms to the biotechnology marketplace, serving everything from the manually tracked lab to the fully automated and integrated consumer genomics facility.

PSSC Labs will work with BSI to create truly turn-key high performance computing (HPC) clusters, servers and storage solutions. PSSC Labs has already delivered several hundred computing platforms for genomics and bioinformatics research worldwide. Utilizing the PowerWulf HPC Cluster as a base solution platform, PSSC Labs and BSI can customize individual components for a specific end user’s research goals.

PowerWulf HPC Clusters are proven compatible with several genomics research platforms including both Illumina and Pacific Biosciences. Each solution includes the latest Intel Xeon processors, high performance memory, advanced storage arrays and fast networking topology. The PowerWulf HPC Clusters also include PSSC Labs CBeST Cluster Management Toolkit to help researchers easily manage, monitor, maintain and upgrade their clusters.

“PSSC Labs was willing to work with us to design each HPC system, even allowing our software engineers to work directly with personnel at their production facility to ensure each HPC platform was built to work with each individual research project,” said Stu Shannon, Co-Founder and COO of BSI. “The performance and reliability of PSSC Labs’ products are amazing. Many of our clients are conducting research in remote regions of southeast Asia, where repairs to equipment are extremely difficult to perform, and since partnering with PSSC Labs the HPC systems have required little more than the occasional hard drive replacement.”

PSSC Labs’ PowerWulf HPC Cluster offers a reliable, flexible, high performance computing platform for a variety of applications in the following verticals: Design & Engineering, Life Sciences, Physical Science, Financial Services and Machine/Deep Learning.

Every PowerWulf HPC Cluster includes a three-year unlimited phone/email support package (additional years of support available), with all support provided by the company’s US-based team of experienced engineers. Prices for a custom-built PowerWulf HPC Cluster solution start at $20,000. For more information see http://www.pssclabs.com/solutions/hpc-cluster/

About PSSC Labs

For technology powered visionaries with a passion for challenging the status quo, PSSC Labs is the answer for hand-crafted HPC and Big Data computing solutions that deliver relentless performance with the absolute lowest total cost of ownership.  All products are designed and built at the company’s headquarters in Lake Forest, California.

Source: PSSC Labs

The post Biosoft Integrates Lab Equipment for Genetics Research with Help from PSSC Labs appeared first on HPCwire.

Intel Awards Paderborn University a Cluster Powered by Xeon Processors and Arria 10 FPGAs

Related News- HPC Wire - Fri, 09/22/2017 - 09:20

Sept. 22, 2017 — The Paderborn Center for Parallel Computing (PC²) has been selected by Intel to host a computer cluster that uses Intel’s Xeon processor with its Arria 10 FPGA software development platform. This server cluster connects an Intel Xeon processor with an in-package field-programmable gate array (FPGA) via the platform’s high-speed QuickPath Interconnect, improving system bandwidth. The Intel FPGA can be programmed to serve as a workload-optimized accelerator, offering substantial performance, agility, and energy-efficiency advantages. This solution is suitable for a number of application domains, such as machine learning, data encryption, compression, image processing and video-stream processing. The platform is also ideal for experimentation in innovative operating systems or computing systems research that focuses on novel approaches to integrating CPUs with accelerators at the software and hardware level.

“We are very happy to have been selected by Intel as one of only two academic sites worldwide to host a cluster based on Intel Xeon processors and Intel Arria 10 FPGAs. Our computing center has a strong research background in accelerating demanding applications with FPGAs. The availability of these systems allows us to further expand our leadership in this area and, as a next step, bring Intel FPGA accelerators from the lab to HPC production systems,” says Prof. Dr. Christian Plessl, director of the Paderborn Center for Parallel Computing, who has been active in this research area for almost two decades.

Researchers worldwide can get access to the cluster by applying to Intel’s Hardware Accelerator Research Program. “We are looking forward to collaborate with Intel and other members of the Hardware Accelerator Research Program on using FPGA acceleration for emerging HPC and data center workloads. By provisioning access to the system to a large number of researchers, we are also gathering experience in how to manage systems with FPGA accelerators in a multi-user setting and for handling parallel applications that use multiple servers with FPGAs. This experience is crucial for deploying systems with FPGAs at scale,” explains Dr. Tobias Kenter, senior researcher and FPGA expert at the Paderborn Center for Parallel Computing.

Currently, the Paderborn Center is working on accelerating applications including theoretical physics, material sciences and machine learning with FPGAs.  This work is in collaboration with scientists from the application areas. In addition, novel domain-specific programming approaches for FPGAs are being developed to simplify the use of FPGAs for developers without a hardware design background.

About the Paderborn Center for Parallel Computing

The Paderborn Center for Parallel Computing, PC², is a scientific institute of Paderborn University, Germany. Our mission is to advance interdisciplinary research in parallel and distributed computing with innovative computer systems. We operate several high-performance cluster systems with up to 10,000 cores to provide HPC services to researchers in the computational sciences at Paderborn University and the state of North Rhine-Westphalia.

One of our key research areas is the study of computing systems with FPGA accelerators for energy-efficient HPC. The ability to customize the processing architecture implemented by the FPGA to the needs of applications allows us to build accelerators for demanding applications that are high-performance and energy-efficient at the same time.

Source: Paderborn Center

The post Intel Awards Paderborn University a Cluster Powered by Xeon Processors and Arria 10 FPGAs appeared first on HPCwire.

Google Cloud Makes Good on Promise to Add Nvidia P100 GPUs

Related News- HPC Wire - Thu, 09/21/2017 - 16:11

Google has taken down the notice on its cloud platform website that said Nvidia Tesla P100s were “coming soon.” The search giant announced Thursday (Sept. 21) the beta launch of the high-end P100 Nvidia Tesla GPUs on the Google Cloud Platform as well as general availability of Tesla K80s, which have been in public beta since February.

Google also announced discounts for users running virtual machine instances for more than one week per month on Google Compute Engine. The discounts, which increase on a sliding scale based on monthly usage, apply to both K80 and P100 GPUs. Google said the discounts mean customers pay only for the number of minutes they use an instance during a given month.
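The sliding-scale mechanics work roughly as sketched below: successive slices of the month are billed at progressively deeper discounts. The tier breakpoints, discount rates, and per-minute price here are hypothetical placeholders, not Google’s published pricing.

```python
# Hedged sketch of a sliding-scale sustained-use discount. All numbers are
# hypothetical placeholders, not Google's actual rates.
HYPOTHETICAL_TIERS = [          # (fraction of the month used, discount on that slice)
    (0.25, 0.00),
    (0.50, 0.20),
    (0.75, 0.40),
    (1.00, 0.60),
]

def discounted_gpu_cost(minutes_used, minutes_in_month, price_per_minute):
    """Bill each successive slice of the month at a progressively deeper discount."""
    used_fraction = min(1.0, minutes_used / minutes_in_month)
    cost, prev = 0.0, 0.0
    for upper, discount in HYPOTHETICAL_TIERS:
        slice_frac = max(0.0, min(used_fraction, upper) - prev)
        cost += slice_frac * minutes_in_month * price_per_minute * (1 - discount)
        prev = upper
    return cost

# Example: a GPU instance used for three weeks of a 30-day month.
print(round(discounted_gpu_cost(3 * 7 * 24 * 60, 30 * 24 * 60, 0.01), 2))
```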

Google and other public cloud providers have been ramping up GPU integration on their platforms as a way of differentiating their services in a cutthroat market that is gradually shifting to multi-cloud deployments. A recent industry survey found that enterprises are on average using three public cloud providers as they seek to spread out workloads and avoid vendor lock-in.

For its part, Google is stressing cloud GPUs as a way of accelerating workloads that utilize machine learning training and inference as well as geophysical data processing, genomics and other high-performance computing applications.

Released last year as a datacenter accelerator, the Tesla P100 GPU, based on Nvidia’s Pascal architecture, is touted as delivering a ten-fold performance increase compared to the K80. Google said the rollout would allow cloud customers to attach up to four P100s or eight K80s per VM. It is also offering up to four K80 boards with two GPUs per board.

Google is the latest public cloud vendor to embrace Nvidia’s P100 GPUs for hardware acceleration in the cloud. IBM said in April it would add P100s to its Bluemix development cloud for customers running computing intensive workloads such as deep learning and data analytics. Microsoft followed in May with plans to debut Pascal-generation GPU instances on its Azure cloud later this year. Microsoft hasn’t deployed them yet though (and neither has cloud king Amazon), which makes Google the first of the big three to have them.

With K80 GPUs from Nvidia now generally available on Google Compute Engine and P100s in beta, cloud GPUs are now being integrated “at all levels of the stack,” the company noted in a blog post announcing the hardware upgrades.

In terms of infrastructure, GPU workloads can run with VMs or application containers. For machine learning applications, Google stressed that its cloud tools could be reconfigured to leverage cloud GPUs to reduce the time required to train and scale models using the TensorFlow machine intelligence library.
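As a concrete illustration, the sketch below uses the TensorFlow 1.x API of that era to pin a compute-heavy op to an attached GPU and log device placement. The matrix sizes are arbitrary, and this shows only basic device placement rather than Google’s managed tooling.

```python
# Minimal TensorFlow 1.x sketch of running work on an attached cloud GPU.
import tensorflow as tf

with tf.device('/gpu:0'):                  # requires an attached GPU (e.g., K80 or P100)
    a = tf.random_normal([4096, 4096])
    b = tf.random_normal([4096, 4096])
    c = tf.matmul(a, b)                    # compute-heavy op placed on the GPU

config = tf.ConfigProto(log_device_placement=True)   # log which device runs each op
with tf.Session(config=config) as sess:
    sess.run(c)
```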

The cloud GPUs are available within Google’s U.S. East and West Coast regions as well as European West and Asia East regions.

The post Google Cloud Makes Good on Promise to Add Nvidia P100 GPUs appeared first on HPCwire.

Cray Wins $48M Supercomputer Contract from KISTI

Related News- HPC Wire - Thu, 09/21/2017 - 15:48

It was a good day for Cray, which won a $48 million contract from the Korea Institute of Science and Technology Information (KISTI) for a 128-rack CS500 cluster supercomputer. The new system, equipped with Intel Xeon Scalable processors and Intel Xeon Phi processors, will be the largest supercomputer in South Korea and will provide supercomputing services for universities, research institutes, and industry.

The see-saw sales cycles for supercomputer vendors are always challenging, and Cray had hit a couple of speed bumps caused by market slowness and other issues. The new system is expected to be put into production in 2018.

“Our supercomputing division is focused on maximizing research performance while significantly reducing research duration and costs by building a top-notch supercomputing infrastructure,” said Pillwoo Lee, General Director, KISTI. “Cray’s proficiency in designing large and complex high-performance computing systems ensures our researchers can now apply highly-advanced HPC cluster technologies towards resolving scientific problems using the power of Cray supercomputers.”

Since 1962, KISTI has served as a national science and technology information center and has provided information that researchers need to enhance South Korea’s national competitiveness as a specialized science and technology research institute supported by the government.

The Cray CS500 systems provide flexible node configurations featuring the latest processor and interconnect technologies giving customers the ability to tailor a system to specific needs — from an all-purpose high-performance computing cluster to an accelerated system configured for shared memory, large memory, or accelerator-based tasks. The contract includes the product and services.

“Leading global supercomputing centers like KISTI are pushing the boundaries of science and technology for the benefit of everyone,” said Trish Damkroger, Vice President of Technical Computing at Intel. “The leading Intel Xeon Scalable processors, Intel Xeon Phi processors and high-bandwidth Intel Omni-Path Architecture, combined with the expertise and innovation of Cray supercomputers, unleash researchers to achieve groundbreaking discoveries that address society’s most complex challenges and yield answers faster than has ever been possible before.”

Link to release: http://investors.cray.com/phoenix.zhtml?c=98390&p=irol-newsArticle&ID=2302209

The post Cray Wins $48M Supercomputer Contract from KISTI appeared first on HPCwire.

Avoiding the Storage Silo Trap; Best Practices for Data Storage in Scientific Research

Related News- HPC Wire - Thu, 09/21/2017 - 14:42

From mismatches between compute and storage capabilities to colossal data volumes, data storage presents a number of challenges for scientific research. And as silos pop up and challenges expand, the pace of research often suffers.

The post Avoiding the Storage Silo Trap; Best Practices for Data Storage in Scientific Research appeared first on HPCwire.

Adolfy Hoisie to Lead Brookhaven’s Computing for National Security Effort

Related News- HPC Wire - Thu, 09/21/2017 - 13:50

Brookhaven National Laboratory announced today that Adolfy Hoisie will chair its newly formed Computing for National Security department, which is part of Brookhaven’s new Computational Science Initiative (CSI).

“We see a huge potential to make a positive impact on the nation’s security by bringing our unique extreme-scale data expertise to bear on challenges of national importance,” said CSI Director Kerstin Kleese van Dam in the announcement. “The formation of this new department in CSI is our first step in this direction.”

Adolfy Hoisie, Brookhaven

Worries over computer and cyber attacks need little introduction. The rapid growth in internet traffic and users and the voluminous data exchanges required between organizations to conduct business make the protection of the nation’s critical assets (including power grid infrastructure, telecommunication networks, and nuclear power stations) a big-data, real-time analysis challenge.

Hoisie is an experienced and familiar name in the HPC community. Most recently he was founding director of the Department of Energy’s Center for Advanced Technology Evaluation (CENATE) based at Pacific Northwest National Laboratory. He first joined PNNL as a laboratory fellow in 2010, and went on to direct the Advanced Computing, Mathematics, and Data Division, and serve as PNNL’s lead for DOE’s ASCR programs.

“Adolfy is a long-time principal investigator in DOE’s Advanced Scientific Computing Research (ASCR) programs,” said Kleese van Dam. “At Brookhaven, he will continue in this capacity and contribute to solving computing challenges faced by other federal agencies, including those within the Department of Defense, such as the Defense Threat Reduction Agency and Defense Advanced Research Projects Agency, and the National Nuclear Security Administration. In addition, he will work closely with me and my leadership team to further CSI’s overall computing endeavors.”

Brookhaven describes the scope of the effort as, “From field-programmable gate arrays (configurable computing devices) integrated with traditional central processing units, and quantum computing that takes advantage of the way the tiniest of particles behave, to neuromorphic computing that mimics the neural networks of the human brain, these architectures are someday expected to perform operations much more quickly and with less energy. Ensuring the optimal performance of these architectures and achieving the timescales needed for different national security applications requires evaluating new hardware technologies and developing the needed system software, programming models, and analytical software in tandem.”

Link to announcement: https://www.bnl.gov/newsroom/news.php?a=212363

The post Adolfy Hoisie to Lead Brookhaven’s Computing for National Security Effort appeared first on HPCwire.

Stanford University and UberCloud Achieve Breakthrough in Living Heart Simulations

Related News- HPC Wire - Thu, 09/21/2017 - 13:00

Cardiac arrhythmia can be an undesirable and potentially lethal side effect of drugs. During this condition, the electrical activity of the heart turns chaotic, decimating its pumping function and thus diminishing the circulation of blood through the body. Some kinds of cardiac arrhythmia, if not treated with a defibrillator, will cause death within minutes.

Before a new drug reaches the market, pharmaceutical companies need to check for the risk of inducing arrhythmias. Currently, this process takes years and involves costly animal and human studies. In this project, the Living Matter Laboratory of Stanford University developed a new software tool enabling drug developers to quickly assess the viability of a new compound. This means better and safer drugs reaching the market to improve patients’ lives.

This research project has been performed by researchers from the Living Matter Laboratory at Stanford University, and supported by Living Heart Project members from SIMULIA, Hewlett Packard Enterprise, Advania, and UberCloud. It is based on the development of a Living Heart Model (LHM) that encompasses advanced electro-physiological modeling. The end goal is to create a biventricular finite element model to be used to study drug-induced arrhythmias of a human heart.

The Living Heart Project is uniting leading cardiovascular researchers, educators, medical device developers, regulatory agencies, and practicing cardiologists around the world on a shared mission to develop and validate highly accurate personalized digital human heart models. These models will establish a unified foundation for cardiovascular in silico medicine and serve as a common technology base for education and training, medical device design, testing, clinical diagnosis and regulatory science, creating an effective path for rapidly translating current and future cutting-edge innovations directly into improved patient care.

The Stanford team, in conjunction with SIMULIA, has developed a multi-scale, three-dimensional model of the heart that can predict the risk of these lethal drug-induced arrhythmias. The team added capabilities to the Living Heart Model to include highly detailed cellular models, to differentiate cell types within the tissue, and to compute electro-cardiograms (ECGs). This model is now able to bridge the gap between the effect of drugs at the cellular level and the chaotic electrical propagation that a patient would experience at the organ level.

A computational model that is able to assess the response of new drug compounds rapidly and inexpensively is of great interest for pharmaceutical companies, doctors, and patients. Such a tool will increase the number of successful drugs that reach the market, while decreasing cost and time to develop them, and thus help hundreds of thousands of patients in the future. However, the creation of a suitable model requires taking a multiscale approach that is computationally expensive: the electrical activity of cells is modelled in high detail and resolved simultaneously in the entire heart. Due to the fast dynamics that occur in this problem, the spatial and temporal resolutions are highly demanding.

Figure 1: Tetrahedral mesh (left) and cube mesh (right)

During the preparation and Proof of Concept phase (UberCloud Experiment 196), we set out to build and calibrate the healthy baseline case, which we then perturbed with different drugs. After creating the UberCloud software container for SIMULIA’s Abaqus 2017 and deploying it on the HPE server in the Advania cloud, we started refining the computational mesh, which consisted of roughly 5 million tetrahedral elements and 1 million nodes. Due to the intricate geometry of the heart, the mesh quality limited the time step, which in this case was 0.0012 ms for a total simulation time of 5000 ms. After realizing that it would be very difficult to calibrate our model with such a long runtime, we decided to work on our mesh, which was the bottleneck to speeding up our model. We created a mesh made out of cube elements (Figure 1). With this approach, we lost the smoothness of the outer surface, but reduced the number of elements by a factor of ten and increased the time step by a factor of four for the same element size (0.7 mm). With a much faster model, we were able to calibrate the healthy baseline case, which was assessed by an electro-cardiogram (ECG) tracing (Figure 2) that recapitulates the essential features.
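A quick back-of-the-envelope check, sketched in Python below, shows how the two changes multiply: roughly ten times fewer elements and a four times larger time step give on the order of a 40x reduction in work per run. The element counts and the baseline time step come from the text above; the simple cost model (work proportional to elements times time steps) and the exact cube-mesh time step are assumptions.

```python
# Back-of-the-envelope estimate of the cube-mesh savings described above.
elements_tet, elements_cube = 5_000_000, 500_000     # ~10x fewer elements (from the text)
dt_tet_ms, dt_cube_ms = 0.0012, 0.0048               # ~4x larger time step (assumed 4 * 0.0012)
total_time_ms = 5000.0                               # simulated duration (from the text)

def relative_cost(n_elements, dt_ms):
    """Work proportional to element count times number of time steps (simplifying assumption)."""
    return n_elements * (total_time_ms / dt_ms)

speedup = relative_cost(elements_tet, dt_tet_ms) / relative_cost(elements_cube, dt_cube_ms)
print(f"Estimated work reduction from the cube mesh: ~{speedup:.0f}x")   # ~40x
```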

Figure 2: ECG tracing for the healthy, baseline case

During the final production phase, we adapted all features of the model to a finer mesh with 7.5 million nodes and 250,000,000 internal variables that are updated and stored within each step of the simulation. We ran 42 simulations to study whether a drug causes arrhythmias. With all the changes above, we were able to speed up one simulation by a factor of 27, which still took 40 hours using 160 CPU cores on Advania’s HPE system equipped with the latest Intel Broadwell E5-2683 v4 nodes and the Intel Omni-Path interconnect. In these simulations, we applied the drugs by blocking different ionic currents in our cellular model, replicating what is observed in cellular experiments. For each case, we let the heart beat naturally and observed whether an arrhythmia developed.

Figure 3: Evolution of the electrical activity for the baseline case (no drug) and after the application of Quinidine. The electrical propagation turns chaotic after the drug is applied, showing the high risk of Quinidine to produce arrhythmias.

Figure 3 shows the application of the drug Quinidine, which is an anti-arrhythmic agent but carries a high risk of producing Torsades de Pointes, a particular type of arrhythmia. It shows the electrical transmembrane potentials of a healthy versus a pathological heart, a quantity that has been widely used in studies of normal and pathological heart rhythms and defibrillation. The propagation of the electrical potential turns chaotic (Figure 3, bottom) when compared to the baseline case (Figure 3, top), showing that our model is able to correctly and reliably predict the arrhythmic risk of commonly used drugs. We envision that our model will help researchers, regulatory agencies, and pharmaceutical companies rationalize safe drug development and reduce the time-to-market of new drugs.

Acknowledgement: The authors are deeply grateful for the support from Hewlett Packard Enterprise (the Sponsor), Dassault Systemes SIMULIA (for Abaqus 2017), Advania (providing HPC Cloud resources), and the UberCloud tech team for containerizing Abaqus and integrating all software and hardware components into one seamless solution stack.

The post Stanford University and UberCloud Achieve Breakthrough in Living Heart Simulations appeared first on HPCwire.

PNNL’s Center for Advanced Tech Evaluation Seeks Wider HPC Community Ties

Related News- HPC Wire - Thu, 09/21/2017 - 13:00

Two years ago the Department of Energy established the Center for Advanced Technology Evaluation (CENATE) at Pacific Northwest National Laboratory (PNNL). CENATE’s ambitious mission was to be a proving ground for near-term and long-term technologies that could impact DoE workloads and HPC broadly. This month the leadership baton was passed from founding director Adolfy Hoisie to Kevin Barker, a veteran PNNL researcher and member of the CENATE project since its start. Hoisie has moved to Brookhaven National Lab to lead another new initiative as chair of the just-formed Computing for National Security Department.

In its short lifespan, CENATE has made steady strides. It has assembled an impressive infrastructure of test and measurement capabilities to explore computer technology. It has tackled several specific projects, ranging from the study of novel architectures from Data Vortex and Nvidia’s DGX-1 to longer-horizon efforts around neuromorphic technology. The change in leadership, emphasizes Barker, won’t alter CENATE’s ambitious plans, but it will enable refinement of several processes, notably an effort to forge tighter links to the HPC community writ large, including DoE researchers, academia, and commercial technology partners.

Today there are about a dozen CENATE staff at PNNL, says Barker. One of the biggest changes will be standing up a more inclusive, more activist steering committee to guide CENATE.

Kevin Barker, P.I., Center for Advanced Technology Evaluation (CENATE), PNNL

Recently, HPCwire talked with Barker about the developing plans for CENATE and its priorities. Barker is certainly no stranger to HPC. He joined PNNL in 2010 as a senior HPC research scientist rising to team lead for modeling and simulation in 2014. Before joining PNNL, Barker spent nearly six years at Los Alamos National Laboratory as an HPC research scientist.

HPCwire: Your prior CENATE experience will no doubt be helpful. Perhaps you could describe your role and provide a sense of what CENATE has accomplished to date.

Barker: Really, I’ve been with CENATE since it began. It had a couple of focus areas in terms of how it was organized internally. I was in charge of one of those areas, around performance prediction. The idea was that CENATE would be a program that encompassed not only testbeds and performance measurements but would also take those performance measurements we could get from physical systems, or prototype systems at small scale, and use performance prediction techniques to explore what the performance impacts would be at large scale. That was my role. Now, I am the PI.

In the first two years of the project, CENATE has deployed testbeds incorporating emerging technologies in the areas of processing, memory, and networking, and has engaged with the research community to assess the applicability of these technologies to workloads of interest to the Department of Energy. Specifically, CENATE has explored high-throughput architectures applied to large-scale Machine Learning frameworks; non-volatile memories; reconfigurable optical networking technology; and self-routing, dynamic, congestion-free networks applied to graph analytics. Through a broad community engagement, CENATE has ensured that its findings are fed back through workshops and deep collaborations with leading researchers.

HPCwire: That’s an extensive list. What’s going to change?

Barker: This change in leadership isn’t a dramatic change in terms of the technical capabilities or what we hope to accomplish technically. Now we want to ensure CENATE is more tightly integrated with the outside community, the HPC community in the DoE, and the vendor space. We also want to make sure the work we are doing at CENATE has an impact back at the DoE.

We’re working on getting a good plan in place to accomplish that – engaging with the vendor community, engaging with application developers, systems software developers, with the DoE complex making CENATE resources available to those people so that we can have a collaborative research environment. They can bring their problems to CENATE and we could provide access to some of these novel and emerging technologies that CENATE is tasked with assessing.

HPCwire: Maybe we should step back and review CENATE’s mission. How do you see it evolving, particularly in light of establishing a more inclusive and activist steering committee?

Barker: Again, it hasn’t really changed. When the steering committee stands up we envision CENATE taking on two kinds of tracks in terms of the research and the resources that we look at in each track. In the first track, we envision a shorter time scale where we are looking at technologies that are very near to market that we can get close to either prototype hardware, early release hardware, or engineering sample hardware. For the second track, in terms of timescale, we want CENATE to have an impact on more novel or high risk architectural approaches. So we might look at such things as beyond Moore’s Law computing technologies.

We envision the steering committee having a big impact because we want to have some indication from the community regarding which technologies we should be most interested in from a community perspective. [Tentatively] we envision a fixed six-month schedule of steering committee meetings, in particular to discuss which architectures should be looked at in the next six-month window and feedback from the previous six-month window. We haven’t decided yet whether those will take the form of a meeting or a workshop where we have more community involvement from outside just the steering committee. Those are some of the things still under discussion.

HPCwire: Given the importance of the steering committee, how big will it be and who will be on it?

Barker: It could be 15 or so organizations, maybe a person from each organization. We would like to have participation from other labs in the DoE community, and potentially academic partners. For example Indiana University is a major user of the Data Vortex architecture so it makes sense for them to participate. The third group being the commercial vendor space. We want to have this settled (and up on the web) before supercomputing (SC17, November 12-17).

HPCwire: There are so many new technologies bubbling up at various stages of development. Adolfy had mentioned interest in developing neuromorphic chips. Is that work ongoing? What’s the thinking on longer term technologies?

Barker: We are definitely interested in these longer term technologies and think that CENATE can have a big impact in the community, presenting that to the funding sources and saying, hey, we want CENATE to be positioned to have an impact beyond the next thing that you can buy from your hardware vendor, and to explore the next-gen technologies that aren’t necessarily tied to commercial products at this point but may still have real impact, particularly in the generations beyond exascale.

For example, the exascale systems are pretty well covered with the Exascale Computing Project. We’re very familiar now with what those systems are going to look like. People are very focused on getting their applications to run on those architectures. That’s not really where we see CENATE having a play. In looking beyond that, what are the technologies that are going to shape high performance computing in the beyond-exascale timeframe? We really want CENATE to be positioned to have an impact in those areas. This is what I mean by a refinement of the CENATE direction. Up until now CENATE has looked at a number of near-to-market or new-to-market technologies. And they have had a big impact. The DGX-1 is a great example. We stood up a DGX-1 and immediately we had users from around the lab complex and academia clamoring to get on the machine to explore how their applications are going to perform and to develop system software and things such as that.

But we want CENATE also to look beyond that, at things like extreme heterogeneity and software-reconfigurable computing. This is really why we are placing an emphasis on the connection to the research community, so that we can get as accurate a prediction as possible of why these are the technologies that we think are going to make an impact. How can CENATE position itself to help assess those technologies in the near term? That might involve a much deeper dive into specific technologies. CENATE doesn’t have an unlimited amount of resources (time, personnel, dollars), so it’s very important we target those things as effectively as we can.

HPCwire: Funding is always an issue on advanced research and DoE is a big player. What about tapping into programs such as DARPA’s Electronics Resurgence Initiative (ERI) which is focused on post-Moore’s law technology and just received additional funding?

Barker: There are definitely some programs that are worth [looking at]. We are not working with ERI in particular, although that could be something we explore together with program management from DoE headquarters. But there are some opportunities exactly along those lines that we are looking into, nothing firm at this point.

HPCwire: Given the various changes, and the enhanced role of the steering committee, is it premature to identify the top five technologies we are going to tackle in the next year?

Barker: Exactly. Unfortunately the end of the year is kind of a busy time in the HPC world with SC (the supercomputing conference) and everything else going on. We hope to have those kinds of things pinned down with at least some degree of certainty within the next few months.

HPCwire: One of the distinguishing aspects of CENATE is the diversity and sophistication of the test and measurement equipment and capabilities at PNNL. What’s happening on that front?

Barker: We have equipment for testing power and energy as well as thermal measurement capability. That is all still in place. We’re expanding the evaluation test suite that we have been using up until this point, the benchmark codes. CENATE itself has an evaluation test suite, in addition to reaching out to collaborators who are interested in the equipment and who bring their own software test suites. We’re interested in looking at these machines in the context of numerical simulation and high performance computing codes, as well as graph analytics codes and machine learning codes, so we are expanding that set of benchmark codes, but the measurement capabilities we have are still in place.

HPCwire: It sounds like, among other things, you are adapting your capabilities to be able to handle emerging, nontraditional ‘HPC’ needs such as deep learning and data analytics?

Barker: Right. One of the important things when we are looking at these architectures, and the DGX-1 is a good example, is that we want to evaluate those technologies in the mode they are designed to operate in. The DGX-1 really is designed as a deep learning/machine learning architecture. Simply exploring traditional HPC simulation codes on it might not be the most appropriate thing to do. We want to paint it [DGX-1 performance and potential] in the light it was designed for. Our evaluation suite of kernels and benchmarks needs to encompass the application areas that these architectures are targeting. And things like machine learning and deep learning are becoming such a part of the DoE workload that for CENATE to remain relevant to the DoE we need to have that capability. The DoE HPC landscape is much more than tightly coupled code.

HPCwire: In the past there had been talk of CENATE workshops and other outreach efforts to diffuse CENATE learnings into the community but I don’t think much has happened along those lines yet. How do you share results and what are the plans going forward?

Barker: This is one area where we have decided that some refinement is necessary. Currently the mechanism that we use to present some of these results back to the community is through publications. It’s a pretty typical route. We’ve had some success there and, for example, we have papers on our work with DGX-1 in the submission process right now. We want to expand how we do this and are still developing the plans.

Hosting user group meetings is another way. Just two weeks ago, we hosted the first Data Vortex user group meeting at PNNL, and CENATE was a player in that it brought together a couple of other programs that were looking at the Data Vortex architecture. That was a really successful workshop. Researchers from DoE, other government agencies, and academia came here to PNNL specifically for the Data Vortex architecture, which is a big architecture in CENATE. We actually have two Data Vortex machines. That’s an example we can point to where we can say CENATE is making an impact in the community.

The NDA issues are sometimes very tricky but we have some experience with other projects where similar issues have arisen so we do have some strategies to deal with NDA issues.

HPCwire: How will you reach potential collaborators? There’s the technical steering committee, but given its relatively small size, how will you reach beyond its immediate interests and attract other collaborators?

Barker: We are standing up a new CENATE web site that we hope to have up very soon, which will solicit this kind of input and have a mechanism where we can say: if you’re a commercial or vendor partner and you want to participate in the CENATE program, here’s how you can get in touch with us. We definitely don’t want to be an exclusive club. We want to cast a wide net in terms of the types of technologies that are represented in the steering committee. Some of this is still in progress.

One of the things we are exploring [for the web site] is a way for potentially interested external parties to propose what they would like to do and the equipment they would be interested in evaluating. Again, this is where the technical steering committee comes in, to evaluate these proposals. It might be a model where, and this is what we are moving towards, we essentially put out a call [for proposals]. That sounds a bit formal. CENATE is not a funding organization and won’t fund external collaborators. But it will be a way for submitters to say what interesting problems they are interested in solving that CENATE could then participate in and possibly provide access to technology for. So if you are a professor with some graduate students you might say, ‘Here’s an application that we want to develop and we want to explore how it might work on architecture x, but we don’t have the means to get architecture x; can CENATE help?’

HPCwire: Thank you for your time.

The post PNNL’s Center for Advanced Tech Evaluation Seeks Wider HPC Community Ties appeared first on HPCwire.

Los Alamos Gains Role in High-Performance Computing for Materials Program

Related News- HPC Wire - Thu, 09/21/2017 - 11:34

LOS ALAMOS, N.M., Sept. 21, 2017 — A new high-performance computing (HPC) initiative announced this week by the U.S. Department of Energy will help U.S. industry accelerate the development of new or improved materials for use in severe environments. Los Alamos National Laboratory, with a strong history in the materials science field, will be taking an active role in the initiative.

“Understanding and predicting material performance under extreme environments is a foundational capability at Los Alamos,” said David Teter, Materials Science and Technology division leader at Los Alamos. “We are well suited to apply our extensive materials capabilities and our high-performance computing resources to industrial challenges in extreme environment materials, as this program will better help U.S. industry compete in a global market.”

“The High-Performance Computing for Materials Program will provide opportunities for our industry partners to access the high-performance computing capabilities and expertise of DOE’s national labs as they work to create and improve technologies that combat extreme conditions,” said Secretary of Energy Rick Perry. “This initiative combines two, crucial elements of the Administration’s mission at DOE – advances in high-performance computing and the improved transition of energy technologies to market.”

The HPC4Mtls initiative will initially focus on challenges facing industry as it works to develop new or improved materials that can sustain extreme conditions—including extreme pressure, radiation, temperature, corrosion, chemical environments, vibration, fatigue, and stress states. It will also focus on developing improved lightweight material technologies. The program aims to enable a step change in the cost, development time, and performance of materials in severe environments and to save millions of dollars in fuel and maintenance costs across sectors. These material advancements will also increase U.S. competitiveness in the global marketplace.

Through HPC4Mtls, industry will be able to solve common materials issues, discover new or improved materials and structures, and enhance their products and processes using the labs’ world-class computational resources and capabilities. These capabilities include:

  • Access to HPC systems, including five of the world’s ten fastest computers
  • Higher-fidelity simulations to augment products or processes
  • Prediction of material behavior in specific severe environments
  • Modeling of missing physical phenomena to enable more realistic simulations
  • Development of more complex models to capture interactions between physical phenomena
  • Access to expertise in computational fluid dynamics, thermodynamics, kinetics, materials modeling, and additive manufacturing.

Companies will be selected to participate in the initiative through an open, two-stage, competitive process and will contribute at least 20 percent of project costs. DOE will hold a closed-press workshop on October 12, 2017 in Pittsburgh, PA to provide more information on the program and engage U.S.-based companies, industry, universities, and government stakeholders.

Sponsored by DOE’s Office of Fossil Energy, the High Performance Computing for Materials (HPC4Mtls) Program is part of the larger HPC4 Energy Innovation Initiative, a Department-wide effort comprising the Office of Fossil Energy, the Office of Energy Efficiency and Renewable Energy, and the Office of Nuclear Energy. Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Oak Ridge National Laboratory, and the National Energy Technology Laboratory serve as the principal leads on this initiative, which could ultimately lower emissions, reduce fuel and maintenance costs across the economy, and save millions of dollars.

About Los Alamos National Laboratory (www.lanl.gov)

Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, BWX Technologies, Inc. and URS Corporation for the Department of Energy’s National Nuclear Security Administration.

Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health and global security concerns.

Source: Los Alamos National Laboratory

The post Los Alamos Gains Role in High-Performance Computing for Materials Program appeared first on HPCwire.

ALCF Simulations Aim to Reduce Jet Engine Noise

Related News- HPC Wire - Thu, 09/21/2017 - 10:53

CHICAGO, Ill., Sept. 21, 2017 — Humans make a lot of noise. The riffs of heavy metal bands like Metallica and Kiss have soared to levels in the 130-decibel range, levels sure to lead to auditory damage.

But try as they might, bands just can’t compete with the decibel ranges produced by jet engines. They are, said Joe Nichols, among the loudest sources of human-made noise that exist.

An assistant professor of Aerospace Engineering and Mechanics at the University of Minnesota, Nichols is fascinated by sound and its ability to find order in chaos – and by applying that understanding to the development of new technologies that can reduce noise in aircraft.


Nichols is working with the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility within the DOE’s Argonne National Laboratory, to create high-fidelity computer simulations to determine how jet turbulence produces noise. The results may lead to novel engineering designs that reduce noise over commercial flight paths and on aircraft carrier decks.

“Noise tells you something about the fundamental nature of turbulence, because noise reveals order that is otherwise hidden in complex, highly nonlinear, chaotic phenomena,” he said.

That is why jet noise presents both a challenging and a beautiful problem for Nichols.

Taming the roar of the engine

Jet engines produce noise in different ways, but mainly it comes from the high-speed exhaust stream that leaves the nozzle at the rear of the engine. And planes are loudest when they move slowly, such as at takeoff or at landing. As the exhaust stream meets relatively still air, it creates tremendous shear that quickly becomes unstable. The turbulence produced from this instability becomes the roar of the engine.

Aeronautic engineers incorporate chevrons, broken eggshell-shaped patterns, into exhaust nozzle designs to change the shape of the jet as it leaves the engine. The idea is to reduce the noise by changing the pattern of the turbulence. But much of the design work remains a guessing game.

Working with ALCF computational scientist Ramesh Balakrishnan and Argonne’s supercomputer Mira, Nichols and his team are applying computational fluid dynamics to remove some of that guesswork. They start by conducting high-fidelity large eddy simulations that accurately capture the physics of the turbulence that is making the noise.

From those simulations they extract reduced-order, or more concise, models that explain what part of the turbulence actually makes the sound. In addition to improving scientific understanding of jet noise, these reduced-order models also provide a fast, yet accurate, means for engineers to evaluate new designs.

Simulating complex geometries like jet turbulence requires the use of an unstructured mesh — a non-uniform 3-D grid — to represent the dynamics involved. In this case, one simulation could have 500 million grid points. Multiply that by five to account for pressure, density and three components of velocity to describe the flow at every grid point. That equates to billions of degrees of freedom, or the number of variables Mira uses to simulate jet noise.

“But what if inside the jet turbulence there is a skeleton of coherent flow structures that we can describe with just 50 degrees of freedom?” suggested Nichols. “Which aspects are most important to the jet noise production? How do the flow structures interact with each other? How closely can the skeleton model represent the high-fidelity simulation?”
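To make the “skeleton” idea concrete, the sketch below shows one standard way such reduced-order models are extracted from simulation data: proper orthogonal decomposition (POD), i.e. an SVD of a matrix of flow snapshots. It is a generic, hypothetical illustration (the snapshot sizes and the 50-mode cutoff are assumptions for the toy example), not the team’s actual code.

    import numpy as np

    # Hypothetical snapshot matrix: each column stacks all flow variables at one
    # instant of a simulation. Real cases have billions of rows (roughly 500
    # million grid points times 5 variables); this toy uses a tiny stand-in.
    rng = np.random.default_rng(0)
    n_state, n_snapshots = 10_000, 200
    snapshots = rng.normal(size=(n_state, n_snapshots))

    # Proper orthogonal decomposition: the left singular vectors are the coherent
    # "skeleton" modes, ordered by how much fluctuation energy they capture.
    mean_flow = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

    k = 50                                        # keep ~50 degrees of freedom
    modes = U[:, :k]                              # reduced basis
    coeffs = modes.T @ (snapshots - mean_flow)    # time history of each mode

    energy = (s[:k] ** 2).sum() / (s ** 2).sum()
    print(f"{k} modes capture {energy:.1%} of the fluctuation energy")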

This work, published last year in the journal Physics of Fluids, could help engineers more precisely direct the modeling of jet engine nozzle geometries by determining, for instance, the ideal number and length of chevrons.

“What distinguishes Joe’s work from those of the other computational fluid dynamics projects at ALCF is that it involves the development of a method that could mature into becoming a design tool for aero-acoustics,” said ALCF’s Balakrishnan. “His project leverages computational data with what he calls input-output analysis, which reveals the origins of jet noise that are otherwise hidden in direct run-of-the-mill forward simulations, or even experiments.”

Simulating waves of aviation

One of the leading ways to predict the instability waves that create sound inside of turbulence is through methods based on a type of computational tool called parabolized stability equations. But while they’re good at predicting supersonic sound sources, they have a hard time predicting all the components of subsonic jet noise, especially in the sideline direction, or perpendicular to the exhaust stream.

The University of Minnesota team developed a new method based on input-output analysis that can predict both the downstream noise and the sideline noise. While it was thought that the sideline noise was random, the input-output modes show coherent structure in the jet that is connected to the sideline noise, such that it can be predicted and controlled.
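The general flavor of that approach can be illustrated on a toy linear system: form the resolvent operator that maps forcing “inputs” to response “outputs” at a chosen frequency and take its SVD; the leading singular value is the largest input-to-output amplification, and the corresponding singular vectors are the optimal forcing and response modes. This is a minimal, generic sketch of input-output (resolvent) analysis, not the University of Minnesota method or code; the system matrix and frequency are arbitrary assumptions.

    import numpy as np

    # Toy linear model dx/dt = A x + B f with output y = C x; A is kept stable
    # by construction so the frequency response is well defined.
    rng = np.random.default_rng(1)
    n = 50
    A = -np.eye(n) + (0.5 / np.sqrt(n)) * rng.normal(size=(n, n))
    B = np.eye(n)          # where the forcing ("input") enters
    C = np.eye(n)          # what is observed ("output")

    omega = 1.0            # frequency at which to probe the system
    resolvent = C @ np.linalg.inv(1j * omega * np.eye(n) - A) @ B

    # SVD of the resolvent: the leading singular value is the optimal gain from
    # forcing to response; the singular vectors are the input and output modes.
    U, s, Vh = np.linalg.svd(resolvent)
    print("optimal input-to-output gain at omega = 1:", s[0])
    forcing_mode, response_mode = Vh[0].conj(), U[:, 0]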

Nichols also uses a variation on the input-output analysis to study noise produced by impingement, where a jet blast is directed at a flat surface, such as aircraft taking off from or hovering over an aircraft carrier deck.

Like decibel-breaking guitar licks, impingement produces a feedback loop when the turbulence hits a flat surface and accelerates outward. As the noise loops back towards the jet nozzle, new turbulence is triggered, creating extremely loud tones that can reach into the 170-decibel range and do structural damage to the aircraft in question.

Nichols and his team are applying computational fluid dynamics to reduce the noise by changing the pattern of the turbulence. With Nichols are Anubhav Dwivedi (left) and Jinah Jeun (right), graduate students in Aerospace Engineering and Mechanics at the University of Minnesota. (Image courtesy of University of Minnesota.)

The team turned to Mira to conduct a high-fidelity simulation of an impinging jet without any modifications, and then measured the noise it produced. Compared against ongoing experiments, the simulation predicted those same tones very accurately. A reduced-order model of the simulations helped Nichols more precisely predict how to change the jet configuration to eliminate feedback tones. Another simulation of the modified jet showed that the tones were almost completely gone.

“The simulations play a crucial role because they let us see spatio-temporally resolved fluid motions that would be impossible to measure experimentally, especially if you’re talking about a hot exhaust moving at Mach 1.5,” noted Nichols.

This research, says Balakrishnan, is still a work in progress, but the results are encouraging. While it still needs some refinement, it holds the promise of becoming a design tool that jet engine manufacturers may one day use to help quiet the skies.

For electric guitar makers Fender and Gibson, on the other hand, perhaps not so much.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.

Source: John Spizzirri, ANL

The post ALCF Simulations Aim to Reduce Jet Engine Noise appeared first on HPCwire.

NSF-funded ‘SLATE’ Platform to Stitch Together Global Science Efforts

Related News- HPC Wire - Thu, 09/21/2017 - 08:14

Sept. 21, 2017 — Today’s most ambitious scientific quests — from the cosmic radiation measurements by the South Pole Telescope to the particle physics of the Large Hadron Collider — are multi-institutional research collaborations requiring computing environments that connect instrumentation, data, and computational resources. Because of the scale of the data and the complexity of this science,  these resources are often distributed among university research computing centers, national high performance computing centers, or commercial cloud providers.  This resource heterogeneity causes scientists to spend more time on the technical aspects of computation than on discoveries and knowledge creation, while computing support staff are required to invest more effort integrating domain specific software with limited applicability beyond the community served.

With Services Layer At The Edge (SLATE), a $4 million project funded by the National Science Foundation, a team from the Enrico Fermi and Computation Institutes at University of Chicago will lead an effort with the Universities of Michigan and Utah to provide technology that simplifies connecting university and laboratory data center capabilities to the national cyberinfrastructure ecosystem. Once installed, SLATE connects local research groups with their far-flung collaborators, allowing central research teams to automate the exchange of data, software and computing tasks among institutions without burdening local system administrators with installation and operation of highly customized scientific computing services. By stitching together these resources, SLATE will also expand the reach of domain-specific “science gateways” and multi-site research platforms.

SLATE works by implementing “cyberinfrastructure as code”, augmenting high bandwidth science networks with a programmable “underlayment” edge platform. This platform hosts advanced services needed for higher-level capabilities such as data and software delivery, workflow services and science gateway components.

SLATE uses best-of-breed data center virtualization components, and where available, software defined networking, to enable automation of lifecycle management tasks by domain experts. As such, it simplifies the creation of scalable platforms that connect research teams, institutions and resources, accelerating science while reducing operational costs and development time. Since SLATE needs only commodity components, it can be used for distributed systems across all data center types and scales, thus enabling creation of ubiquitous, science-driven cyberinfrastructure.

 

At UChicago, the SLATE team will partner with the Research Computing Center and Information Technology Services to help the ATLAS experiment at CERN, the South Pole Telescope and the XENON dark matter search collaborations create the advanced cyberinfrastructure necessary for rapidly sharing data, computer cycles and software between partner institutions.  The resulting systems will provide blueprints for national and international research platforms supporting a variety of science domains.

For example, the SLATE team will work with researchers from the Computation Institute’s Knowledge Lab to develop a hybrid platform that elastically scales computational social science applications between commercial cloud and campus HPC resources. The platform will allow researchers to use their local computational resources with the analytical tools and sensitive data shared through Knowledge Lab’s Cloud Kotta infrastructure, reducing cost and preserving data security.

“SLATE is about creating a ubiquitous cyberinfrastructure substrate for hosting, orchestrating and managing the entire lifecycle of higher level services that power scientific applications that span multiple institutions,” said Rob Gardner, a Research Professor in the Enrico Fermi Institute and Senior Fellow in the Computation Institute. “It clears a pathway for rapidly delivering capabilities to an institution, maximizing the science impact of local research IT investments.”

Many universities and research laboratories use a “Science DMZ” architecture to balance security with the ability to rapidly move large amounts of data in and out of the local network. As sciences from physics to biology to astronomy become more data-heavy, the complexity and need for these subnetworks grows rapidly, placing additional strain on local IT teams.

That stress is further compounded when local scientists join multi-institutional collaborations, often requiring the installation of specialized, domain-specific services for the sharing of compute and data resources.

“Science, ultimately, is a collective endeavor. Most scientists don’t work in a vacuum, they work in collaboration with their peers at other institutions,” said Shawn McKee, director of the Center for Network and Storage-Enabled Collaborative Computational Science at the University of Michigan. “They often need to share not only data, but systems that allow execution of workflows across multiple institutions. Today, it is a very labor-intensive, manual process to stitch together data centers into platforms that provide the research computing environment required by forefront scientific discoveries.”

With SLATE, research groups will be able to fully participate in multi-institutional collaborations and contribute resources to their collective platforms with minimal hands-on effort from their local IT team. When joining a project, the researchers and admins can select a package of software from a cloud-based service — a kind of “app store” — that allows them to connect and work with the other partners.

“Software and data can then be updated automatically by experts from the platform operations and research teams, with little to no assistance required from local IT personnel,” said Joe Breen, Senior IT Architect for Advanced Networking Initiatives at the University of Utah’s Center for High Performance Computing. “While the SLATE platform is designed to work in any data center environment, it will utilize advanced network capabilities, such as software defined overlay networks, when the devices support it.”

By reducing the technical expertise and time demands for participating in multi-institution collaborations, the SLATE platform will be especially helpful to smaller universities that lack the resources and staff of larger institutions and computing centers. The SLATE functionality can also support the development of “science gateways” which make it easier for individual researchers to connect to HPC resources such as the Open Science Grid and XSEDE.

“A central goal of SLATE is to lower the threshold for campuses and researchers to create research platforms within the national cyberinfrastructure,” Gardner said.

Initial partner sites for testing the SLATE platform and developing its architecture include New Mexico State University and Clemson University, where the focus will be creating distributed  cyberinfrastructure in support of large scale bioinformatics and genomics workflows. The project will also work with the Science Gateways Community Institute, an NSF funded Scientific Software Innovation Institute, on SLATE integration to make gateways more powerful and reach more researchers and resources.

Source: Rob Mitchum, University of Chicago

The post NSF-funded ‘SLATE’ Platform to Stitch Together Global Science Efforts appeared first on HPCwire.

Berkeley Lab Cosmology Software Scales Up to 658,784 Knights Landing Cores

Related News- HPC Wire - Wed, 09/20/2017 - 15:00

Sept. 20 — The Cosmic Microwave Background (CMB) is the oldest light ever observed and is a wellspring of information about our cosmic past. This ancient light began its journey across space when the universe was just 380,000 years old. Today it fills the cosmos with microwaves. By parsing its subtle features with telescopes and supercomputers, cosmologists have gained insights about both the properties of our Universe and of fundamental physics.

Despite all that we’ve learned from the CMB so far, there is still much about the universe that remains a mystery. Next-generation experiments like CMB Stage-4 (CMB-S4) will probe this primordial light at even higher sensitivity to learn more about the evolution of space and time and the nature of matter. But before this can happen scientists need to ensure that their data analysis infrastructure will be able to handle the information deluge.

Cumulative daily maps of the sky temperature and polarization at each frequency showing how the atmosphere and noise integrate down over time. The year-long campaign spanned 129 observation-days during which the ACTpol SS patch was available for a 13-hour constant elevation scan. To make these maps, the signal, noise, and atmosphere observations were combined (including percent level detector calibration error), filtered with a 3rd order polynomial, and binned into pixels. (Image Credit: Julian Borrill, Berkeley Lab)

That’s where researchers in Lawrence Berkeley National Laboratory’s (Berkeley Lab’s) Computational Cosmology Center (C3) come in. They recently achieved a critical milestone in preparation for upcoming CMB experiments: scaling their data simulation and reduction framework TOAST (Time Ordered Astrophysics Scalable Tools) to run on all 658,784 Intel Knights Landing (KNL) Xeon Phi processor cores on the National Energy Research Scientific Computing Center’s (NERSC’s) Cori system.

The team also extended TOAST’s capabilities to support ground-based telescope observations, including implementing a module to simulate the noise introduced by looking through the atmosphere, which must then be removed to get a clear picture of the CMB. All of these achievements were made possible with funding from Berkeley’s Laboratory Directed Research and Development (LDRD) program.

“Over the next 10 years, the CMB community is expecting a 1,000-fold increase in the volume of data being gathered and analyzed—better than Moore’s Law scaling, even as we enter an era of energy-constrained computing,” says Julian Borrill, a cosmologist in Berkeley Lab’s Computational Research Division (CRD) and head of C3. “This means that we’ve got to sit at the bleeding edge of computing just to keep up with the data volume.”

TOAST: Balancing Scientific Accessibility and Performance

Cori Supercomputer at NERSC.

To ensure that they are making the most of the latest in computing technology, the C3 team worked closely with staff from NERSC, Intel and Cray to get their TOAST code to run on all of Cori supercomputer’s 658,784 KNL processors. This collaboration is part of the NERSC Exascale Science Applications Program (NESAP), which helps science code teams adapt their software to take advantage of Cori’s manycore architecture and could be a stepping-stone to next generation exascale supercomputers.

“In the CMB community, telescope properties differ greatly, and up until now each group typically had its own approach to processing data. To my knowledge, TOAST is the first attempt to create a tool that is useful for the entire CMB community,” says Ted Kisner, a Computer Systems Engineer in C3 and one of the lead TOAST developers.

“TOAST has a modular design that allows it to adapt to any detector or telescope quite easily,” says Rollin Thomas, a big data architect at NERSC who helped the team scale TOAST on Cori. “So instead of having a lot of different people independently re-inventing the wheel for each new experiment, thanks to C3 there is now a tool that the whole community can embrace.”

According to Kisner, the challenges to building a tool that can be used by the entire CMB community were both technical and sociological. Technically, the framework had to perform well at high concurrency on a variety of systems, including supercomputers, desktop workstations and laptops. It also had to be flexible enough to interface with different data formats and other software tools. Sociologically, parts of the framework that researchers interact with frequently had to be written in a high-level programming language that many scientists are familiar with.

The C3 team achieved a balance between computing performance and accessibility by creating a hybrid application. Parts of the framework are written in C and C++ to ensure that it can run efficiently on supercomputers, but it also includes a layer written in Python, so that researchers can easily manipulate the data and prototype new analysis algorithms.

“Python is a tremendously popular and important programming language, it’s easy to learn and scientists value how productive it makes them. For many scientists and graduate students, this is the only programming language they know,” says Thomas. “By making Python the interface to TOAST, the C3 team essentially opens up HPC resources and experiments to scientists that would otherwise be struggling with big data and not have access to supercomputers. It also helps scientists focus their customization efforts at parts of the code where differences between experiments matter the most, and re-use lower-level algorithms common across all the experiments.”
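As a rough illustration of that division of labor (not actual TOAST code), a Python-level pipeline step can stay short and readable while the per-sample arithmetic runs in compiled routines; here numpy’s C core stands in for TOAST’s C/C++ layer, and the 3rd-order polynomial filter echoes the processing mentioned in the map caption above. The function name and data are hypothetical.

    import numpy as np

    def remove_atmospheric_drift(tod, order=3):
        """Python-level pipeline step: easy for scientists to read, prototype
        and swap out. The heavy numerics (a polynomial fit over a long
        time-ordered data stream) run inside numpy's compiled core, mirroring
        TOAST's split between a C/C++ performance layer and a Python
        interface layer."""
        t = np.linspace(-1.0, 1.0, tod.size)
        coeffs = np.polyfit(t, tod, order)        # compiled routine does the fit
        return tod - np.polyval(coeffs, t)        # subtract the slow drift

    # Example: filter a fake single-detector timestream of one million samples.
    fake_tod = np.random.default_rng(0).normal(size=1_000_000)
    cleaned = remove_atmospheric_drift(fake_tod)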

To ensure that all of TOAST could effectively scale up to 658,784 KNL cores, Thomas and his colleagues at NERSC helped the team launch their software on Cori with Shifter—an open-source, software package developed at NERSC to help supercomputer users easily and securely run software packaged as Linux Containers. Linux container solutions, like Shifter, allow an application to be packaged with its entire software stack including libraries, binaries and scripts as well as defining other run-time parameters like environment variables.  This makes it easy for a user to repeatedly and reliably run applications even at large-scales.

“This collaboration is a great example of what NERSC’s NESAP for data program can do for science,” says Thomas. “By fostering collaborations between the C3 team and Intel engineers, we increased their productivity on KNL. Then, we got them to scale up to 658,784 KNL cores with Shifter. This is the biggest Shifter job done for science so far.”

With this recent hero run, the cosmologists also accomplished an important scientific milestone: simulating and mapping 50,000 detectors observing 20 percent of the sky at 7 frequencies for 1 year. That’s the scale of data expected to be collected by the Simons Observatory, which is an important stepping-stone to CMB-S4.

“Collaboration with NERSC is essential for Intel Python engineers – this is a unique opportunity for us to scale Python and other tools to hundreds of thousands of cores,” says Sergey Maidanov, Software Engineering Manager at Intel. “TOAST was among a few applications where multiple tools helped to identify and address performance scaling bottlenecks, from Intel MKL and Intel VTune Amplifier to Intel Trace Analyzer and Collector and other tools. Such a collaboration helps us set the direction for our tools development.”

Accounting for the Atmosphere


The telescope’s view through one realization of turbulent, wind-blown, atmospheric water vapor. The volume of atmosphere being simulated depended on (a) the scan width and duration and (b) the wind speed and direction, both of which changed every 20 minutes. The entire observation used about 5000 such realizations. (Image Credit: Julian Borrill)

The C3 team originally deployed TOAST at NERSC nearly a decade ago primarily to support data analysis for Planck, a space-based mission that observed the sky for four years with 72 detectors. By contrast, CMB-S4 will scan the sky with a suite of ground-based telescopes, fielding a total of around 500,000 detectors for about five years beginning in the mid 2020s.

In preparation for these ground-based observations, the C3 team recently added an atmospheric simulation module that naturally generates correlated atmospheric noise for all detectors, even detectors on different telescopes in the same location. This approach allows researchers to test new analysis algorithms on much more realistic simulated data.

“As each detector observes the microwave sky through the atmosphere it captures a lot of thermal radiation from water vapor, producing extremely correlated noise fluctuations between the detectors,” says Reijo Keskitalo, a C3 computer systems engineer who led the atmospheric simulation model development.

Keskitalo notes that previous efforts by the CMB community typically simulated the correlated atmospheric noise for each detector separately. The problem with this approach is it can’t scale to the huge numbers of detectors expected for experiments like CMB-S4. But by simulating the common atmosphere observed by all the detectors once, the novel C3 method ensures that the simulations are both tractable and realistic.
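A toy numpy sketch (an illustrative assumption, not the C3 module) shows why sharing one atmosphere realization matters: detectors that look through the same slowly varying atmospheric signal end up strongly correlated, so this noise does not average down as more detectors are added.

    import numpy as np

    rng = np.random.default_rng(0)
    n_det, n_samp = 4, 10_000

    # One shared atmosphere realization: a slowly varying random signal...
    atmosphere = 0.1 * np.cumsum(rng.normal(size=n_samp))

    # ...seen by every detector, plus small independent instrument noise.
    tod = atmosphere + rng.normal(scale=0.2, size=(n_det, n_samp))

    # The detector-detector correlation is dominated by the common atmosphere,
    # which is why it does not average away with more detectors.
    print(np.corrcoef(tod).round(2))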

“For satellite experiments like Planck, the atmosphere isn’t an issue. But when you are observing the CMB with ground-based telescopes, the atmospheric noise problem is malignant because it doesn’t average out with more detectors. Ultimately, we needed a tool that would simulate something that looks like the atmosphere because you don’t get a realistic idea of experiment performance without it,” says Keskitalo.

“The ability to simulate and reduce the extraordinary data volume with sufficient precision and speed will be absolutely critical to achieving CMB-S4’s science goals,” says Borrill.

In the short term, tens of realizations are needed to develop the mission concept, he adds. In the medium term, hundreds of realizations are required for detailed mission design and the validation and verification of the analysis pipelines. Long term, tens of thousands of realizations will be vital for the Monte Carlo methods used to obtain the final science results.

“CMB-S4 will be a large, distributed collaboration involving at least 4 DOE labs. We will continue to use NERSC – which has supported the CMB community for 20 years now – and, given our requirements, likely need the Argonne Leadership Computing Facility (ALCF) systems too. There will inevitably be several generations of HPC architecture over the lifetime of this effort, and our recent work is a stepping stone that allows us to take full advantage of the Xeon Phi based systems currently being deployed at NERSC,” says Borrill.

The work was funded through Berkeley Lab’s LDRD program designed to seed innovative science and new research directions. NERSC and ALCF are both DOE Office of Science User Facilities.

The Office of Science of the U.S. Department of Energy supports Berkeley Lab. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 6,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. DOE Office of Science. Learn more about computing sciences at Berkeley Lab.

Source: NERSC

The post Berkeley Lab Cosmology Software Scales Up to 658,784 Knights Landing Cores appeared first on HPCwire.

NVM Express, Inc. to Host NVMe over Fabrics Webcast on Sept. 20

Related News- HPC Wire - Wed, 09/20/2017 - 10:13

WAKEFIELD, Mass., Sept. 20, 2017 — NVM Express, Inc., the organization that developed the NVM Express (NVMe) and NVMe Management Interface (NVMe-MI) specifications for accessing solid-state drives (SSDs) on a PCI Express (PCIe) bus as well as over fabrics, will lead an educational webcast titled “NVMe over Fabrics: Market Uptake, Benefits and Use Cases” on Wednesday, Sept. 20, 2017, at noon Eastern. There is no charge to attend, but advance registration is required.

WHAT: The webcast will cover how NVMe over Fabrics (NVMe-oF™) takes NVMe beyond the single-computer paradigm, making it useful in the cloud, clusters, distributed systems, and large-scale websites. The presentation will examine how the NVMe-oF specification can unleash the benefits of NVMe drives in a scalable manner by leveraging the capabilities of mainstream high-performance interconnects.

Attendees will learn about the benefits of using different fabrics as the platform, such as Fibre Channel, RoCE, iWARP, and InfiniBand. The presentation will explore the next frontier of the NVMe-oF specification, including use at the TCP layer or deployment as NVMe SSDs in the data center. Furthermore, the webcast discussion will move beyond transports to cover new storage architectures and use cases.

WHO: Presenter is NVM Express’ Brandon Hoff, distinguished software architect, Broadcom

WHERE: https://www.brighttalk.com/webcast/12367/275423

WHEN: Wednesday, Sept. 20, 2017, at noon Eastern

About NVM Express, Inc.

With more than 100 members, NVM Express, Inc. is a non-profit organization focused on enabling broad ecosystem adoption of high performance and low latency non-volatile memory (NVM) storage through a standards-based approach. The organization offers an open collection of NVM Express (NVMe™) specifications and information to fully expose the benefits of non-volatile memory in all types of computing environments from mobile to data center. NVMe-based specifications are designed from the ground up to deliver high bandwidth and low latency storage access for current and future NVM technologies. For more information, visit http://www.nvmexpress.org.

Source: NVM Express, Inc.

The post NVM Express, Inc. to Host NVMe over Fabrics Webcast on Sept. 20 appeared first on HPCwire.

DeepL Anchors Neural Machine Translator at Verne Global’s HPC-Optimised Data Center

Related News- HPC Wire - Wed, 09/20/2017 - 10:05

LONDON & KEFLAVIK, Iceland, Sept. 20, 2017 — Verne Global, a provider of highly optimised, secure, and 100% renewably powered data center solutions, today announced that DeepL has deployed its 5.1 petaFLOPS supercomputer at the Verne Global campus. The supercomputer supports DeepL’s artificial intelligence (AI) driven, neural network translation service, which is viewed by many as the world’s most accurate and natural-sounding machine translation service. Verne Global was selected because of the following factors:

  • The innovative campus design specialised to support HPC and other intensive compute environments driven by the rise in AI, machine learning and big data analytics
  • The expertise and technical knowledge of the Verne Global team, and
  • Verne Global’s access to Iceland’s abundant, renewable power and its highly reliable, low-cost energy grid.

“For DeepL, we needed a data center optimised for high-performance computing (HPC) environments and determined that our needs could not be met in Germany. Verne Global’s Icelandic campus provides us with the scalability, flexibility and technical resources we need. In addition, the abundance of low-cost renewable energy and free cooling will allow us to train DeepL’s neural networks at lower cost and faster scalability,” says Jaroslaw Kutylowski, CTO of DeepL. “Verne Global’s team has a high level of technical expertise, which helps us to implement ad hoc requests quickly and easily. I’ve never seen such an excellent cooperation before.”

On the supercomputer located within Verne Global’s campus, DeepL trains its neural translation networks on collected data sets. As DeepL learns, the network leverages AI to examine millions of translations and learn independently how to translate with the right grammar and structure.

“We are pleased that our HPC-optimised campus was the ideal location for DeepL’s supercomputer. Our location in Iceland provides a low and stable energy price with the highest possible availability and scalability – criteria that are indispensable for computational and power-intensive applications,” says Tate Cantrell, Chief Technology Officer of Verne Global. “We are seeing growing interest from companies using AI tools, such as deep neural network (DNN) applications, to revolutionise how they move their businesses forward, create change, and elevate how we work, live and communicate.”

The market for AI, machine learning and cognitive computing is expanding rapidly. According to a recent paper, “Artificial Intelligence, The Next Digital Frontier?”, issued by the McKinsey Global Institute, the total annual external investment in AI was between $8B to $12B in 2016, with machine learning attracting nearly 60% of that investment. McKinsey also states that, “A confluence of developments is driving this new wave of AI development. Computer power is growing, algorithms and AI models are becoming more sophisticated, and, perhaps most important of all, the world is generating once unimaginable volumes of the fuel that powers AI—data. Billions of gigabytes every day, collected by networked devices ranging from web browsers to turbine sensors.”

Verne Global’s data center, located on a former NATO base in Iceland, draws its electricity from hydroelectric and geothermal energy. The cool, temperate climate in Iceland enables free cooling that, when combined with the low-cost, renewable power, means companies can save more than 70% on the total cost of operations for their compute resources compared with less optimal locations in the US, UK and continental Europe. The combination of innovative technical design and an optimal location makes Verne Global one of the world’s most efficient data center campuses.

Source: Verne Global

The post DeepL Anchors Neural Machine Translator at Verne Global’s HPC-Optimised Data Center appeared first on HPCwire.

DataSite Deploys High Performance Computing Solution for Enterprise Customer in Orlando

Related News- HPC Wire - Wed, 09/20/2017 - 09:38

ORLANDO, Fla., Sept. 20, 2017 — DataSite, a wholesale data center space and network service provider that owns and operates world-class colocation facilities in Atlanta, Boise and Orlando, announces it has deployed a high-performance computing (HPC) system for a major hospitality corporation at its Orlando data center. The mission-critical environment will accommodate air-cooled power densities of 20kW+ per rack.

Market research firm IDC projects the HPC market will grow from $23.1B in 2016 to $31.4B in 2019, fueled by the desire to reap the benefits of power-hungry servers, storage, middleware, and applications. DataSite has responded to soaring demand for HPC with a multitude of solutions to accommodate high-density workloads across its three data centers. Chimney cabinets, which direct hot air up and into the active containment plenum; extra rack depth to accommodate hot air containment; and the use of blanking panels and materials are all part of this sophisticated, integrated solution.

“DataSite is thrilled to expand its deployment of HPC solutions to a growing list of customers,” comments Rob Wilson, Executive Vice President for DataSite. “We are excited to support other industries as they embark on HPC initiatives. With expertise in thermodynamics and a wide arsenal of technological solutions at its disposal, DataSite is well-equipped to meet custom HPC requirements that call for up to 90kW per rack.”

Visit www.datasitecolo.com to learn more about the company’s data centers and HPC initiatives.

About DataSite

DataSite offers secure world-class facilities in Atlanta, Boise and Orlando, capable of accommodating varying needs in wholesale data center space. DataSite data centers are a unique blend of purpose-built, specially constructed data center facilities and expertly managed data center infrastructure designed to offer affordable colocation options that meet the demanding power density and up-time requirements of the modern computing environment. DataSite’s Tier III data center design provides completely redundant and continually operating facilities that are concurrently maintainable with zero scheduled downtime.

Source: DataSite

The post DataSite Deploys High Performance Computing Solution for Enterprise Customer in Orlando appeared first on HPCwire.

Altair Extends Strategic Relationship with HPE

Related News- HPC Wire - Wed, 09/20/2017 - 09:27

TROY, Mich., Sept. 19, 2017 — Altair announces today that it has entered into a multi-year original equipment manufacturing (OEM) agreement with HPE. This agreement represents an expansion of the long-term partnership between HPE and SGI (which HPE recently acquired). HPE will now be able to include Altair’s PBS Professional workload manager and job scheduler on all of HPE’s high performance computing (HPC) systems, ensuring scalability of price and performance as system sizes and CPU-core counts continue to increase.

“We are delighted to strengthen our strategic collaboration with HPE,” said Sam Mahalingam, Chief Technical Officer for Enterprise Solutions at Altair. “With PBS Professional as its premier workload management software supplier, HPE will be able to provide our common customers with a powerful solution to meet their growing HPC requirements.”

PBS Professional gives HPE cluster users a more efficient, reliable solution for HPC workload management. As an HPE-integrated product, PBS Professional optimizes job scheduling on HPE Apollo and HPE SGI servers to achieve the highest levels of system utilization. PBS Professional is also integrated with HPE’s HPC system management solutions: HPE Insight Cluster Management Utility (CMU) for HPE Apollo and HPE ProLiant platforms, as well as HPE SGI Management Suite for HPE SGI 8600 systems.

“Altair’s PBS Professional is an established leader in HPC workload management,” said Bill Mannel, Vice President and General Manager for HPC and AI segment solutions at HPE. “We look forward to leveraging this agreement to give our customers access to an attractive PBS Professional offering to manage job scheduling and maximize system utilization on HPE’s industry leading HPC infrastructure.”

As the hardware vendor with the largest HPC market share, HPE offers the broadest spectrum of high-performance computing solutions, from workgroup and departmental servers to systems designed for the engineering enterprise and supercomputing centers (for more information please visit www.hpe.com/info/hpc).

Altair has served the HPC market for decades with award-winning workload management, engineering, and cloud computing software. Used by thousands of companies worldwide, PBS Professional enables engineers in HPC environments to improve productivity, optimize resource utilization and efficiency, and simplify the process of cluster workload management.

Click here for more information about HPE and Altair collaborations.

Customers can already obtain PBS Professional through HPE and its authorized resellers under the terms of the OEM agreement.

About Altair

Altair is focused on the development and broad application of simulation technology to synthesize and optimize designs, processes and decisions for improved business performance. Privately held with more than 2,600 employees, Altair is headquartered in Troy, Michigan, USA and operates 68 offices throughout 24 countries. Today, Altair serves more than 5,000 corporate clients across broad industry segments.

Source: Altair

The post Altair Extends Strategic Relationship with HPE appeared first on HPCwire.

HSA and ROCm Architectures to be Highlighted at Next Week’s CppCon

Related News- HPC Wire - Wed, 09/20/2017 - 09:19

BEAVERTON, Ore., Sept. 20, 2017 — The HSA (Heterogeneous System Architecture) Foundation and Foundation member AMD will be providing a comprehensive session on HSA technologies and AMD’s ROCm architecture at next week’s CppCon. The conference will be held from Sept. 24-29 in Bellevue, WA at the Meydenbauer Conference Center.

CppCon is an annual gathering for the worldwide C++ community and is geared to appeal to anyone from C++ novices to experts.

The presentation by AMD Fellow Paul Blinzer is included as part of a session on ‘concurrency and parallelism’ running from 8:30-10 PM on Tuesday, Sept. 28 at the Meydenbauer Conference Center, Harvard, Room #406. Attendees will learn what allows these architectures to use computational hardware accelerators like GPUs, DSPs and others with native C++, without resorting to proprietary APIs, programming libraries or limited language features.

Heterogeneous System Architecture (HSA) is a standardized platform design that unlocks the performance and power efficiency of the parallel computing engines found in most modern electronic devices. It provides an ideal mainstream platform for next-generation SoCs in a range of applications including artificial intelligence.

For more information on the presentation and to register, please see https://cppcon.org/registration/.

For more information, including a full list of speakers, supporting organizations and sponsors please visit: https://cppcon.org/cppcon-2017-program/

About Paul Blinzer

Paul Blinzer works on a wide variety of Platform System Software architecture projects and specifically on the Heterogeneous System Architecture (HSA) System Software at Advanced Micro Devices, Inc. (AMD) as a Fellow in the System Software group. Living in the Seattle, WA area, during his career he has worked in various roles on system level driver development, system software development, graphics architecture, graphics & compute acceleration since the early ’90s. Paul is the chairperson of the “System Architecture Workgroup” of the HSA Foundation. He has a degree in Electrical Engineering (Dipl.-Ing) from TU Braunschweig, Germany.

About the HSA Foundation

The HSA (Heterogeneous System Architecture) Foundation is a non-profit consortium of SoC IP vendors, OEMs, Academia, SoC vendors, OSVs and ISVs, whose goal is making programming for parallel computing easy and pervasive. HSA members are building a heterogeneous computing ecosystem, rooted in industry standards, which combines scalar processing on the CPU with parallel processing on the GPU, while enabling high bandwidth access to memory and high application performance with low power consumption. HSA defines interfaces for parallel computation using CPU, GPU and other programmable and fixed function devices, while supporting a diverse set of high-level programming languages, and creating the foundation for next-generation, general-purpose computing.

Source: HSA Foundation

The post HSA and ROCm Architectures to be Highlighted at Next Week’s CppCon appeared first on HPCwire.
