Related News - HPCwire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Quantum Corporation Names Patrick Dennis CEO

Tue, 01/16/2018 - 18:46

SAN JOSE, Calif., Jan. 16, 2018 — Quantum Corp. today announced that its board of directors has appointed Patrick Dennis as president and CEO, effective today. Dennis was most recently president and CEO of Guidance Software and has also held senior executive roles in strategy, operations, sales, services and engineering at EMC. He succeeds Adalio Sanchez, a member of Quantum’s board who had served as interim CEO since early November 2017. Sanchez will remain on the board and assist with the transition.

“Patrick has been a successful public company CEO and brings a broad range of experience in storage and software, including a proven track record leading business transformations,” said Raghu Rau, Quantum’s chairman. “The other board members and I look forward to working closely with him to drive growth, cost reductions, and profitability and deliver long-term shareholder value. We also want to thank Adalio for stepping in and leading the company during a critical transition period.”

“During my time as CEO, I’ve greatly appreciated the commitment to change I’ve seen from team members across Quantum and will be supporting Patrick in any way I can to build on the important work we started,” said Sanchez.

Dennis served as president and CEO of Guidance Software, a provider of cyber security software solutions, from May 2015 until its acquisition by OpenText last September. During his tenure, he turned the company around, growing revenue and significantly improving profitability. Before joining Guidance Software, Dennis was senior vice president and chief operating officer, Products and Marketing, at EMC, where he led the business operations of its $10.5 billion enterprise and mid-range systems division, including management of its cloud storage business. Dennis spent 12 years at EMC, including as vice president and chief operating officer of EMC Global Services, overseeing a 3,500-person technical sales force. In addition to his time at EMC, he served as group vice president, North American Storage Sales, at Oracle, where he turned around a declining business.

“With its long-standing expertise in addressing the most demanding data management challenges, Quantum is well-positioned to help customers maximize the strategic value of their ever-growing digital assets in a rapidly changing environment,” said Dennis. “I’m excited to be joining the company as it looks to capitalize on this market opportunity by leveraging its strong solutions portfolio in a more focused way, improving its cost structure and execution, and continuing to innovate.”

About Quantum

Quantum is a leading expert in scale-out tiered storage, archive and data protection, providing solutions for capturing, sharing, managing and preserving digital assets over the entire data lifecycle. From small businesses to major enterprises, more than 100,000 customers have trusted Quantum to address their most demanding data workflow challenges. Quantum’s end-to-end, tiered storage foundation enables customers to maximize the value of their data by making it accessible whenever and wherever needed, retaining it indefinitely and reducing total cost and complexity. See how at www.quantum.com/customerstories.

Source: Quantum Corp.

New C-BRIC Center Will Tackle Brain-Inspired Computing

Tue, 01/16/2018 - 15:17

WEST LAFAYETTE, Ind., Jan. 16, 2018 — Purdue University will lead a new national center to develop brain-inspired computing for intelligent autonomous systems such as drones and personal robots capable of operating without human intervention.

The Center for Brain-inspired Computing Enabling Autonomous Intelligence, or C-BRIC, is a five-year project supported by $27 million in funding from the Semiconductor Research Corp. (SRC) via its Joint University Microelectronics Program, which provides funding from a consortium of industrial sponsors as well as from the Defense Advanced Research Projects Agency. The SRC operates research programs in the United States and globally that connect industry to university researchers, deliver early results to enable technological advances, and prepare a highly trained workforce for the semiconductor industry. Additional funds include $3.96 million from Purdue as well as support from other participating universities. At the state level, the Indiana Economic Development Corporation will provide funds, pending board approval, to establish an intelligent autonomous systems laboratory at Purdue.

C-BRIC, which begins operating in January 2018, will be led by Kaushik Roy, Purdue’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering (ECE), with Anand Raghunathan, Purdue professor of ECE, as associate director. Other Purdue faculty involved in the center include Suresh Jagannathan, professor of computer science and ECE, and Eugenio Culurciello, associate professor of biomedical engineering, ECE and mechanical engineering. The center will also involve, pending final contracts, seven other universities (Arizona State University, Georgia Institute of Technology, Pennsylvania State University, Portland State University, Princeton University, University of Pennsylvania, and University of Southern California), around 17 faculty members, and around 85 graduate students and postdoctoral researchers.

“The center’s goal is to develop neuro-inspired algorithms, architectures and circuits for perception, reasoning and decision-making, which today’s standard computing is unable to do efficiently,” Roy said.

Efficiency here refers to energy use. For example, while advanced computers such as IBM’s Watson and Google’s AlphaGo have beaten humans at high-level cognitive tasks, they consume hundreds of thousands of watts of power to do so, whereas the human brain requires only around 20 watts.
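As a rough sense of scale (the machine figure below is an illustrative assumption standing in for "hundreds of thousands of watts," not a measurement of Watson or AlphaGo), the gap amounts to roughly four orders of magnitude:

```python
# Back-of-the-envelope comparison of the power gap described above.
machine_watts = 200_000   # assumed ballpark for a large cognitive-computing system
brain_watts = 20          # commonly cited estimate for the human brain

gap = machine_watts / brain_watts
print(f"Efficiency gap: ~{gap:,.0f}x")   # ~10,000x, i.e. about four orders of magnitude
```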

“We have to narrow this huge efficiency gap to enable continued improvements in artificial intelligence in the face of diminishing benefits from technology scaling,” Raghunathan said. “C-BRIC will develop technologies to perform brain-like functions with brain-like efficiency.”

In addition, the center will enable next-generation autonomous intelligent systems capable of both performing “end-to-end” functions and completing mission-critical tasks without human intervention.

“Autonomous intelligent systems will require real-time closed-loop control, leading to new challenges in neural algorithms, software and hardware,” said Venkataramanan (Ragu) Balakrishnan, Purdue’s Michael and Katherine Birck Head and Professor of Electrical and Computer Engineering. “Purdue’s long history of preeminence in related research areas such as neuromorphic computing and energy-efficient electronics positions us well to lead this effort.”

“Purdue is up to the considerable challenges that will be posed by C-BRIC,” said Suresh Garimella, Purdue’s executive vice president for research and partnerships and the R. Eugene and Susie E. Goodson Distinguished Professor of Mechanical Engineering. “We are excited that our faculty and students are embarking on this ambitious mission to shape the future of intelligent autonomous systems.”

Mung Chiang, Purdue’s John A. Edwardson Dean of the College of Engineering, said, “C-BRIC represents a game-changer in artificial intelligence. These outstanding colleagues in Electrical and Computer Engineering and other departments at Purdue will carry out transformational research on efficient, distributed intelligence.”

To achieve their goals, C-BRIC researchers will improve the theoretical and mathematical underpinnings of neuro-inspired algorithms.

“This is very important,” Raghunathan said. “The underlying theory of brain-inspired computing needs to be better worked out, and we believe this will lead to broader applicability and improved robustness.”

At the same time, new autonomous systems will have to possess “distributed intelligence” that allows various parts, such as the multitude of “edge devices” in the so-called Internet of Things, to work together seamlessly.

“We are excited to bring together a multi-disciplinary team with expertise spanning algorithms, theory, hardware and system-building, that will enable us to pursue a holistic approach to brain-inspired computing, and to hopefully deliver an efficiency closer to that of the brain,” Roy said.

Information about the SRC can be found at https://www.src.org/.

Source: Purdue University

New Center at Carnegie Mellon University to Build Smarter Networks to Connect Edge Devices to the Cloud

Tue, 01/16/2018 - 15:14

PITTSBURGH, Jan. 16, 2018 — Carnegie Mellon University will lead a $27.5 million Semiconductor Research Corporation (SRC) initiative to build more intelligence into computer networks.

Researchers from six U.S. universities will collaborate in the CONIX Research Center headquartered at Carnegie Mellon. For the next five years, CONIX will create the architecture for networked computing that lies between edge devices and the cloud. The challenge is to build this substrate so that future applications that are crucial to IoT can be hosted with performance, security, robustness, and privacy guarantees.

“The extent to which IoT will disrupt our future will depend on how well we build scalable and secure networks that connect us to a very large number of systems that can orchestrate our lives and communities. CONIX will develop novel architectures for large-scale, distributed computing systems that have immense implications for social interaction, smart buildings and infrastructure, and highly connected communities, commerce, and defense,” says James H. Garrett Jr., dean of Carnegie Mellon College of Engineering.

CONIX, an acronym for Computing on Network Infrastructure for Pervasive Perception, Cognition, and Action, is directed by Anthony Rowe, associate professor of Electrical and Computer Engineering at Carnegie Mellon. The assistant director, Prabal Dutta, is an associate professor at the University of California, Berkeley.

IoT has put a major focus on edge devices. These devices make our homes and communities smarter through connectivity, and they are capable of sensing, learning, and interacting with humans. In most current IoT systems, sensors send data to the cloud for processing and decision-making. However, massive amounts of sensor data coupled with technical constraints have created bottlenecks in the network that curtail efficiency and the development of new technologies, especially when timing is critical.

“There isn’t a seamless way to merge cloud functionality with edge devices without a smarter interconnect, so we want to push more intelligence into the network,” says Rowe. “If networks were smarter, decision-making could occur independent of the cloud at much lower latencies.”
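A rough latency budget illustrates Rowe's point (all figures below are assumed ballpark values for typical deployments, not CONIX measurements):

```python
# Illustrative latency budget for a control decision on a 30 fps video stream.
# All latency figures are assumptions, not measurements.
frame_budget_ms = 1000 / 30   # ~33 ms between frames

scenarios = {
    "cloud": 80,   # assumed WAN round trip to a distant data center, in ms
    "edge": 5,     # assumed hop to an in-network compute node, in ms
}
inference_ms = 15  # assumed model inference time in either location

for name, rtt_ms in scenarios.items():
    total = rtt_ms + inference_ms
    verdict = "fits" if total <= frame_budget_ms else "misses"
    print(f"{name}: {total:.0f} ms -> {verdict} the {frame_budget_ms:.0f} ms frame budget")
```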

The cloud’s centralized nature makes it easier to optimize and secure; however, there are tradeoffs. “Large systems that are centralized tend to struggle in terms of scale and have trouble reacting quickly outside of data centers,” explains Rowe. CONIX researchers will look at how machine-learning techniques that are often used in the context of cloud computing can be used to self-optimize networks to improve performance and even defend against cyberattacks.

Developing a clean-slate distributed computing network will take an integrated view of sensing, processing, memory, dissemination and actuation. CONIX researchers intend to define the architecture for such networks now before attempts to work around current limitations create infrastructure that will be subject to rip-and-repair updates, resulting in reduced performance and security.

CONIX’s research is driven by three applications:

Smart and connected communities—Researchers will explore the mechanisms for managing and processing millions of sensors’ feeds in urban environments. They will deploy CONIX edge devices across participating universities to monitor and visualize the flow of pedestrians. At scale, this lays the groundwork for all kinds of infrastructure management.

Enhanced situational awareness at the edge—Efforts here will create on-demand information feeds for decision makers by dispatching human-controlled swarming drones to provide aerial views of city streets. Imagine a system like Google Street View, only with live real-time data. This would have both civilian and military applications. For example, rescue teams in a disaster could use the system to zoom in on particular areas of interest at the click of a button.

Interactive Mixed Reality—Physical and virtual reality systems will merge in a collaborative digital teleportation system.  Researchers will capture physical aspects about users in a room, such as their bodies and facial expressions. Then, like a hologram, this information will be shared with people in different locations. The researchers will use this technology for meetings, uniting multiple CONIX teams. This same technology will be critical to support next-generation augmented reality systems being used in applications ranging from assisted surgery and virtual coaching to construction and manufacturing.

In addition to Carnegie Mellon and the University of California, Berkeley, other participants include the University of California, Los Angeles; the University of California, San Diego; the University of Southern California; and the University of Washington, Seattle.

CONIX is one of six research centers funded by the SRC’s Joint University Microelectronics Program (JUMP), which represents a consortium of industrial participants and the Defense Advanced Research Projects Agency (DARPA).

About the College of Engineering at Carnegie Mellon University

The College of Engineering at Carnegie Mellon University is a top-ranked engineering college that is known for its intentional focus on cross-disciplinary collaboration in research. The College is well-known for working on problems of both scientific and practical importance. Our “maker” culture is ingrained in all that we do, leading to novel approaches and transformative results. Our acclaimed faculty have a focus on innovation management and engineering to yield transformative results that will drive the intellectual and economic vitality of our community, nation and world.

About the SRC

Semiconductor Research Corporation (SRC), a world-renowned, high-technology-based consortium, serves as a crossroads of collaboration between technology companies, academia, government agencies, and SRC’s highly regarded engineers and scientists. Through its interdisciplinary research programs, SRC plays an indispensable part in addressing global challenges, using research and development strategies, advanced tools and technologies. Sponsors of SRC work together synergistically, gaining access to research results, fundamental IP, and highly experienced students to compete in the global marketplace and build the workforce of tomorrow. Learn more at: www.src.org.

Source: Carnegie Mellon University

SRC Spends $200M on University Research Centers

Tue, 01/16/2018 - 15:10

The Semiconductor Research Corporation, as part of its JUMP initiative, has awarded $200 million to fund six research centers whose areas of focus span cognitive computing, memory-centric computing, high-speed communications, nanotechnology, and more. It’s not a bad way to begin 2018 for the winning institutions, which include the University of Notre Dame, the University of Michigan, the University of Virginia, Carnegie Mellon University, Purdue University, and UC Santa Barbara.

SRC’s JUMP (Joint University Microelectronics Program) is a collaborative network of research centers sponsored by U.S. industry participants and DARPA. As described on the SRC website, “[JUMP’s] mission is to enable the continued pace of growth of the microelectronics industry with discoveries which release the evolutionary constraints of traditional semiconductor technology development. JUMP research, guided by the university center directors, tackles fundamental physical problems and forges a nationwide effort to keep the United States and its technology firms at the forefront of the global microelectronics revolution.”

The six projects, funded over five years, were launched on January 1st and are listed below with short descriptions. Links to press releases from each center are at the end of the article:

  • ASCENT (Applications and Systems driven Center for Energy-Efficient Integrated NanoTechnologies at Notre Dame). “ASCENT focuses on demonstration of foundational material synthesis routes and device technologies, novel heterogeneous integration (package and monolithic) schemes to support the next era of functional hyper-scaling. The mission is to transcend the current limitations of high-performance transistors confined to a single planar layer of integrated circuit by pioneering vertical monolithic integration of multiple interleaved layers of logic and memory.”
  • ADA (Applications Driving Architectures Center at University of Michigan). “[ADA will drive] system design innovation by drawing on opportunities in application driven architecture and system-driven technology advances, with support from agile system design frameworks that encompass programming languages to implementation technologies. The center’s innovative solutions will be evaluated and quantified against a common set of benchmarks, which will also be expanded as part of the center efforts. These benchmarks will be initially derived from core computational aspects of two application domains: visual computing and natural language processing.”
  • CRISP (Center for Research on Intelligent Storage and Processing-in-memory at University of Virginia). “Certain computations are just not feasible right now due to the huge amounts of data and the memory wall,” says Kevin Skadron, who chairs UVA Engineering’s Department of Computer Science and leads the new center. “Solving these challenges and enabling the next generation of data-intensive applications requires computing to be embedded in and around the data, creating ‘intelligent’ memory and storage architectures that do as much of the computing as possible as close to the bits as possible.”
  • CONIX (Computing On Network Infrastructure for Pervasive Perception, Cognition, and Action at Carnegie Mellon University). “CONIX will create the architecture for networked computing that lies between edge devices and the cloud. The challenge is to build this substrate so that future applications that are crucial to IoT can be hosted with performance, security, robustness, and privacy guarantees.”
  • CBRIC (Center for Brain-inspired Computing Enabling Autonomous Intelligence at Purdue University). Charged with delivering key advances in cognitive computing, with the goal of enabling a new generation of autonomous intelligent systems, “CBRIC will address these challenges through synergistic exploration of Neuro-inspired Algorithms and Theory, Neuromorphic Hardware Fabrics, Distributed Intelligence, and Application Drivers.”
  • ComSenTer (Center for Converged TeraHertz Communications and Sensing at UCSB). “ComSenTer will develop the technologies for a future cellular infrastructure using hubs with massive spatial multiplexing, providing 1-100Gb/s to the end user and, with 100-1000 simultaneous independently-modulated beams, aggregate hub capacities in the tens of Tb/s. Backhaul for this future cellular infrastructure will be a mix of optical links and Tb/s-capacity point-to-point massive MIMO links.”

Links to individual press releases/program descriptions:

ASCENT, Notre Dame: https://www.src.org/newsroom/press-release/2018/921/

ADA, University of Michigan: https://www.src.org/newsroom/press-release/2018/922/

CRISP, University of Virginia: https://www.src.org/newsroom/press-release/2018/920/

CONIX, Carnegie Mellon: https://www.prnewswire.com/news-releases/new-center-headquartered-at-carnegie-mellon-university-will-build-smarter-networks-to-connect-edge-devices-to-the-cloud-300582210.html

CBRIC, Purdue: https://www.src.org/newsroom/press-release/2018/919/

ComSenTer, UCSB: https://www.src.org/program/jump/comsenter/

UVA Engineering Tapped to Lead $27.5 Million Center to Reinvent Computing

Tue, 01/16/2018 - 15:09

CHARLOTTESVILLE, Va., Jan. 16, 2018 — The University of Virginia School of Engineering & Applied Science has been selected to establish a $27.5 million national center to remove a bottleneck built into computer systems 70 years ago that is increasingly hindering technological advances today.

UVA Engineering’s new Center for Research in Intelligent Storage and Processing in Memory, or CRISP, will bring together researchers from eight universities to remove the separation between memories that store data and processors that operate on the data.

That separation has been part of all mainstream computing architectures since 1945, when John von Neumann, one of the pioneering computer scientists, first outlined how programmable computers should be structured. Over the years, processor speeds have improved much faster than memory and storage speeds, and also much faster than the speed at which wires can carry data back and forth.

These trends lead to what computer scientists call the “memory wall,” in which data access becomes a major performance bottleneck. The need for a solution is urgent, because of today’s rapidly growing data sets and the potential to use big data more effectively to find answers to complex societal challenges.
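A minimal experiment makes the memory wall tangible (this sketch is purely illustrative and not part of the CRISP program): summing the same array with cache-friendly sequential access and with cache-hostile random access performs identical arithmetic, yet the random version is typically several times slower because the processor stalls waiting for data.

```python
import time
import numpy as np

N = 20_000_000
data = np.ones(N, dtype=np.float64)
sequential = np.arange(N)                 # stride-1, cache-friendly indices
shuffled = np.random.permutation(N)       # cache-hostile access pattern

def timed_sum(indices):
    start = time.perf_counter()
    total = data[indices].sum()           # same arithmetic either way
    return total, time.perf_counter() - start

_, t_seq = timed_sum(sequential)
_, t_rand = timed_sum(shuffled)
print(f"sequential: {t_seq:.3f} s, random: {t_rand:.3f} s, slowdown: {t_rand / t_seq:.1f}x")
```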

“Certain computations are just not feasible right now due to the huge amounts of data and the memory wall,” said Kevin Skadron, who chairs UVA Engineering’s Department of Computer Science and leads the new center. “One example is in medicine, where we can imagine mining massive data sets to look for new indicators of cancer. The scale of computation needed to make advances for health care and many other human endeavors, such as smart cities, autonomous transportation, and new astronomical discoveries, is not possible today. Our center will try to solve this problem by breaking down the memory-wall bottleneck and finally moving beyond the 70-year-old paradigm. This will enable entirely new computational capabilities, while also improving energy efficiency in everything from mobile devices to datacenters.”

CRISP is part of a $200 million, five-year national program that will fund centers led by six top research universities: UVA, University of California at Santa Barbara, Carnegie Mellon University, Purdue University, the University of Michigan and the University of Notre Dame. The Joint University Microelectronics Program is managed by North Carolina-based Semiconductor Research Corporation, a consortium that includes engineers and scientists from technology companies, universities and government agencies.

Each research center will examine a different challenge in advancing microelectronics, a field that is crucial to the U.S. economy and its national defense capabilities. The centers will collaborate to develop solutions that work together effectively. Each center will have liaisons from the program’s member companies, collaborating on the research and supporting technology transfer.

“The trifecta of academia, industry and government is a great model that benefits the country as a whole,” Skadron said. “Close collaboration with industry and government agencies can help identify interesting and relevant problems that university researchers can help solve, and this close collaboration also helps accelerate the impact of the research.”

The program includes positions for about a dozen new Ph.D. students at UVA Engineering, and altogether, about 100 Ph.D. students across the entire center. The center will also create numerous opportunities for undergraduate students to get involved in research. The program provides all these students with professional development opportunities and internships with companies that are program sponsors.

Engineering Dean Craig Benson said the new center expresses UVA Engineering’s commitment to research and education that add value to society.

“Most of the grand challenges the National Academy of Engineering has identified for humanity in the 21st century will require effective use of big data,” Benson said. “This investment affirms the national research community’s confidence that UVA has the vision and expertise to lead a new era for technology.”

Pamela Norris, UVA Engineering’s executive associate dean for research, said the center is also an example of the bold ideas that propelled the School to a nearly 36 percent increase in research funding in fiscal year 2017, compared to the prior year.

“UVA Engineering has a culture of collaborative, interdisciplinary research programs,” Norris said. “Our researchers are determined to use this experience to address some of society’s most complex challenges.”

UVA’s center will include researchers from seven other universities, working together in a holistic approach to solve the data bottleneck in current computer architecture.

“Solving these challenges and enabling the next generation of data-intensive applications requires computing to be embedded in and around the data, creating ‘intelligent’ memory and storage architectures that do as much of the computing as possible as close to the bits as possible,” Skadron said.

This starts at the chip level, where computer processing capabilities will be built inside the memory storage. Processors will also be paired with memory chips in 3-D stacks. UVA Electrical and Computer Engineering Professor Mircea Stan, an expert on the design of high-performance, low-power chips and circuits, will help lead the center’s research on 3-D chip architecture, thermal and power optimization, and circuit design.
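A toy cost model shows why moving computation toward the data pays off for low-arithmetic-intensity workloads (the bandwidth and compute figures are illustrative assumptions, not projections for any CRISP hardware):

```python
# Toy "ship data to the CPU" vs. "compute near the memory" model. All numbers are assumptions.
data_bytes = 100e9        # 100 GB dataset to scan
ops_per_byte = 0.5        # low arithmetic intensity, typical of data mining

link_bw = 25e9            # assumed CPU <-> memory/storage link, bytes/s
near_mem_bw = 400e9       # assumed aggregate bandwidth inside a 3-D memory stack, bytes/s
compute_rate = 1e12       # assumed 1 Tera-op/s available in either location

def scan_time(bandwidth):
    move = data_bytes / bandwidth
    compute = data_bytes * ops_per_byte / compute_rate
    return max(move, compute)   # assume data movement and compute overlap

print(f"ship data to CPU:  {scan_time(link_bw):.2f} s")      # ~4 s, bandwidth-bound
print(f"compute in memory: {scan_time(near_mem_bw):.2f} s")  # ~0.25 s
```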

CRISP researchers also will examine how other aspects of computer systems will have to change when computer architecture is reinvented, from operating systems to software applications to data centers that house entire computer system stacks. UVA Computer Science Assistant Professor Samira Khan, an expert in computer architecture and its implications for software systems, will help guide the center’s efforts to rethink how the many layers of hardware and software in current computer systems work together.

CRISP also will develop new system software and programming frameworks so computer users can accomplish their tasks without having to manage complex hardware details, and so that software is portable across diverse computer architectures. All this work will be developed in the context of several case studies to help guide the hardware and software research to practical solutions and real-world impact. These include searching for new cancer markers; mining the human gut microbiome for new insights on interactions among genetics, environment, lifestyle and wellness; and data mining for improving home health care.

“Achieving a vision like this requires a large team with diverse expertise across the entire spectrum of computer science and engineering, and such a large-scale initiative is very hard to put together without this kind of investment,” Skadron said. “These large, center-scale programs profoundly enhance the nation’s ability to maintain technological leadership, while simultaneously training a large cohort of students who will help address the nation’s rapidly growing need for technology leadership. This is an incredibly exciting opportunity for us.”

Source: University of Virginia

Notre Dame to Lead $26 Million Multi-University Research Center Developing Next-Generation Computing Technologies

Tue, 01/16/2018 - 15:03

Jan. 16, 2018 — In today’s age of ubiquitous computing, society produces roughly the same amount of data in 10 minutes that would have previously taken 100 years. Within the next decade, experts anticipate the ability to create, share and store a century’s worth of data in less than 10 seconds.

To get there, researchers and technologists must overcome data-transfer bottlenecks and improve the energy efficiency of current electronic devices.

Now, a new $26 million center led by the University of Notre Dame will focus on conducting research that aims to increase the performance, efficiency and capabilities of future computing systems for both commercial and defense applications.

At the state level, the Indiana Economic Development Corporation (IEDC) has offered to provide funding for strategic equipment, pending final approval from the IEDC Board of Directors, to support execution of the program’s deliverables.

“We have assembled a group of globally recognized technical leaders in a wide range of areas — from materials science and device physics to circuit design and advanced packaging,” said Suman Datta, director of the Applications and Systems-driven Center for Energy-Efficient integrated Nano Technologies (ASCENT) and Frank M. Freimann Professor of Engineering at Notre Dame. “Working together, we look forward to developing the next generation of innovative device technologies.”

The multidisciplinary research center will develop and utilize advanced technologies to sustain the semiconductor industry’s goals of increasing performance and reducing costs. Researchers have been steadily advancing toward these goals via relentless two-dimensional scaling as well as the addition of performance boosters to complementary metal-oxide-semiconductor (CMOS) technology. Both approaches have provided enhanced performance-to-energy-consumption ratios.

The exponentially increasing demand for connected devices, big data analytics, cloud computing and machine-learning technologies, however, requires future innovations that transcend the impending limits of current CMOS technology.

ASCENT comprises 20 faculty members from 13 of the nation’s leading research universities, including Arizona State University, Cornell University, Georgia Institute of Technology, Purdue University, Stanford University, University of Minnesota, University of California-Berkeley, University of California-Los Angeles, University of California-San Diego, University of California-Santa Barbara, University of Colorado, and the University of Texas-Dallas.

Sayeef Salahuddin, professor of electrical engineering and computer science, at the University of California-Berkeley, will serve as the center’s associate director.

Datta said the center’s research agenda has been shaped by valuable lessons learned from past research conducted at Notre Dame’s Center for Nano Science and Technology (NDnano), as well as the Notre Dame-led Center for Low Energy Systems Technology (LEAST) and the Midwest Institute for Nanoelectronics Discovery (MIND), which stemmed from the Semiconductor Research Corporation’s (SRC) STARnet program and Nanoelectronics Research Initiative, respectively.

Researchers at ASCENT will pursue four technology areas: three-dimensional integration of device technologies beyond a single planar layer (vertical CMOS); spin-based device concepts that combine processing and memory functions (beyond CMOS); heterogeneous integration of functionally diverse nano-components into integrated microsystems (heterogeneous integration fabric); and hardware accelerators for data-intensive cognitive workloads (merged logic-memory fabric).

“The problems that Professor Datta and his team will try to solve are among the most challenging and important facing the electronics industry,” said Thomas G. Burish, Charles and Jill Fischer Provost of Notre Dame. “The selection committee in their feedback was highly complimentary of the vision, technical excellence, diverse talent and collaborative approach that Suman and his colleagues have undertaken. Notre Dame is delighted to be able to host this effort.”

ASCENT is one of six research centers funded by the SRC’s Joint University Microelectronics Program (JUMP), which represents a consortium of industrial participants and the Defense Advanced Research Projects Agency (DARPA). Information about the SRC can be found at https://www.src.org/.

Source: University of Notre Dame

UMass Center for Data Science Partners with Chan Zuckerberg Initiative to Accelerate Science and Medicine

Tue, 01/16/2018 - 14:34

AMHERST, Mass., Jan. 16, 2018 — Distinguished scientist and professor Andrew McCallum, director of the Center for Data Science at the University of Massachusetts Amherst, will lead a new partnership with the Chan Zuckerberg Initiative to accelerate science and medicine. The goal of this project, called Computable Knowledge, is to create an intelligent and navigable map of scientific knowledge using a branch of artificial intelligence known as knowledge representation and reasoning.

The Computable Knowledge project will facilitate new ways for scientists to explore, navigate, and discover potential connections between millions of new and historical scientific research articles. Once complete, the service will be accessible through Meta, a free CZI tool, and will help scientists track important discoveries, uncover patterns, and deliver insights among an up-to-date collection of published scientific texts, including more than 60 million articles.
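To give a flavor of what a navigable knowledge map looks like at the smallest scale (a toy sketch with invented entities and relations, not the project's actual schema or data), articles and concepts can be stored as graph triples and traversed to surface indirect connections:

```python
# Toy knowledge graph over invented entities -- not the Computable Knowledge representation.
from collections import defaultdict

triples = [
    ("paper:1234", "mentions", "gene:BRCA1"),
    ("gene:BRCA1", "associated_with", "disease:breast_cancer"),
    ("paper:5678", "mentions", "drug:olaparib"),
    ("drug:olaparib", "targets", "gene:BRCA1"),
]

graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append(obj)

def reachable(entity, hops=3):
    """Entities reachable from `entity` within `hops` steps (breadth-first walk)."""
    frontier, seen = [entity], {entity}
    for _ in range(hops):
        frontier = [obj for node in frontier for obj in graph[node] if obj not in seen]
        seen.update(frontier)
    return seen - {entity}

# A drug paper is indirectly connected to a disease through the gene the drug targets.
print(reachable("paper:5678"))   # {'drug:olaparib', 'gene:BRCA1', 'disease:breast_cancer'}
```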

“We are excited for the opportunity to advance our research in deep learning, representation and reasoning for such a worthy challenge,” said McCallum. “We believe the result will be a first-of-its-kind guide for every scientist, just as map apps are now indispensable tools for navigating the physical world. We hope our results will help solve the mounting problem of scientific knowledge complexity, democratize scientific knowledge, and put powerful reasoning in the hands of individual scientists.”

The Chan Zuckerberg Initiative (CZI) is building a team of AI scientists to collaborate on the project, and has made an initial grant of $5.5 million to the university’s Center for Data Science. It is CZI’s first donation to, and partnership with, the University of Massachusetts Amherst.

McCallum expects CZI’s investment to result in hiring software engineers in Western Massachusetts to work on the project. It will also support the related research of several graduate, Ph.D. and postdoctoral students in the Center for Data Science and create internships for UMass Amherst students at other CZI projects worldwide.

“We are very pleased CZI selected UMass Amherst to play a major role in this groundbreaking initiative that will give scientists tremendous power to share their research around the world,” Massachusetts Governor Charlie Baker said. “Massachusetts’ renowned research and health care institutions make the Commonwealth an attractive location to advance CZI’s work, and we welcome their engagement here.”

“We are grateful for CZI’s generous support and recognition of UMass Amherst’s leadership in artificial intelligence,” said UMass Amherst Chancellor Subbaswamy. “Andrew McCallum and his colleagues are engaged in extraordinary and innovative research, and we are thrilled to be partners with CZI in their goal to cure, prevent, or manage all diseases by the end of the century.”

“This project has the potential to accelerate the work of millions of scientists around the globe,” said Cori Bargmann, president of science at the Chan Zuckerberg Initiative. “Andrew McCallum and the Center for Data Science at UMass Amherst are global leaders in artificial intelligence and natural language processing. Andrew will bring deep knowledge and expertise to this effort, and we are honored to partner with him.”

About Professor Andrew McCallum

McCallum, who joined the UMass Amherst faculty in 2002, focuses his research on statistical machine learning applied to text, including information extraction, social network analysis, and deep neural networks for knowledge representation. He served as president of the International Society of Machine Learning and is a Fellow of the Association for the Advancement of Artificial Intelligence as well as the Association for Computing Machinery. Recognized as a pre-eminent researcher in these fields, he has published more than 150 papers and received over 50,000 citations from fellow researchers. He was named the founding director of the UMass Amherst Center for Data Science in 2015.

About the Chan Zuckerberg Initiative

The Chan Zuckerberg Initiative was founded by Facebook founder and CEO Mark Zuckerberg and his wife Priscilla Chan in December 2015. The philanthropic organization brings together world-class engineering, grant-making, impact investing, policy, and advocacy work. Its initial areas of focus include supporting science through basic biomedical research and education through personalized learning. It is also exploring other issues tied to the promotion of equal opportunity including access to affordable housing and criminal justice reform.

Source: UMass Amherst

US Seeks to Automate Video Analysis

Tue, 01/16/2018 - 12:11

U.S. military and intelligence agencies continue to look for new ways to use artificial intelligence to sift through huge amounts of video imagery in hopes of freeing analysts to identify threats and otherwise put their skills to better use.

The latest AI effort announced last week by the research arm of the U.S. intelligence apparatus focuses on video surveillance and using machine vision to automate video monitoring. The initiative is similar to a Pentagon effort to develop computer vision algorithms to scan full-motion video.

The new effort unveiled by the Intelligence Advanced Research Projects Activity (IARPA) would focus on public safety applications such as securing government facilities or monitoring public spaces that have become targets for terror attacks.

Program officials said last week they have selected six teams to develop machine vision techniques to scan video under a new program called Deep Intermodal Video Activity, or DIVA. The U.S. National Institute of Standards and Technology along with contractor Kitware Inc., an HPC visualization specialist, will evaluate research data and test proposed DIVA systems, the research agency said.

Among the goals is developing an automated capability to detect threats and, failing that, quickly locating attacks using machine vision and automated video monitoring. “There [are] an increasing number of cases where officials, and the communities they represent, are tasked with viewing large stores of video footage, in an effort to locate perpetrators of attacks, or other threats to public safety,” Terry Adams, DIVA program manager, noted in a statement announcing the effort.

“The resulting technology will provide the ability to detect potential threats while reducing the need for manual video monitoring,” Adams added.
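For a sense of the most basic building block behind automated video monitoring (a generic frame-differencing sketch using OpenCV, purely illustrative and unrelated to the systems the DIVA teams are actually building; the input file name is hypothetical):

```python
import cv2

# Flag frames that differ noticeably from the previous frame -- a crude "activity" cue.
cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    changed = cv2.countNonZero(mask) / mask.size
    if changed > 0.02:   # more than 2% of pixels changed: candidate activity
        print(f"possible activity at frame {frame_idx} ({changed:.1%} of pixels changed)")
    prev_gray = gray

cap.release()
```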

The agency also stressed that the surveillance technology would not be used to track the identity of individuals and “will be implemented to protect personal privacy.” Program officials did not elaborate.

IARPA was established in 2006 to coordinate research across the National Security Agency, CIA and other U.S. spy agencies. The office is modeled after the Defense Advanced Research Projects Agency, which funds risky but promising technology development. Those efforts have focused on the ability to process the enormous video and data haul generated by spy satellites and, increasingly, drones and sensor networks.

Similarly, the Pentagon launched an AI effort last year dubbed Project Maven to accelerate DoD’s integration of big data and machine learning into its intelligence operations. The first computer vision algorithms focused on parsing full-motion video were scheduled for release by the end of 2017.

These and other efforts are aimed at automating the tedious task of poring through hours of surveillance data to detect threats. Among IARPA’s research thrusts is speeding the analysis of sensor data “to maximize insight from the information we collect,” the agency said.

RAIDIX 4.6 Ensures Data Integrity on Power Down

Tue, 01/16/2018 - 10:46

Jan. 16, 2018 — Data storage vendor RAIDIX launches a new edition of the software-defined storage technology – RAIDIX 4.6. The RAIDIX volume management software powers commodity hardware to create fault-tolerant high-performance data storage systems for data-intensive applications. Building on in-house RAID algorithms, advanced data reconstruction and smart QoS, RAIDIX enables peak GB/s and IOPS in Media & Entertainment, HPC, CCTV, and Enterprise with minimal hardware overheads.
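For readers unfamiliar with how RAID-style redundancy recovers lost data, the sketch below shows the simplest single-parity case (generic XOR parity as in RAID 5; RAIDIX's patented erasure-coding algorithms are more sophisticated and are not reproduced here):

```python
# Generic XOR-parity illustration -- not RAIDIX's proprietary algorithms.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]   # stripe spread across three data drives
parity = xor_blocks(data_blocks)            # stored on a parity drive

# Simulate losing the second drive and rebuilding it from the survivors plus parity.
rebuilt = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert rebuilt == data_blocks[1]
print("rebuilt block:", rebuilt)            # b'BBBB'
```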

RAIDIX 4.5 (shipped in October 2017) focused on hybrid storage performance, efficient SSD caching and virtualization of siloed SAN storage devices. Ver. 4.5 further improved multi-thread data processing and employed proprietary intelligent algorithms to avoid redundant write levels. Adding to the previous major edition, RAIDIX 4.6 enables the use of NVDIMM-N, ensures support for new 100Gbit adapters and brings along more features and improvements.

In version 4.6, the RAIDIX R&D team implemented write-back cache protection leveraging non-volatile dual in-line memory modules (NVDIMM-N). RAIDIX-based systems prevent data loss in the event of power loss or other failures on the node. Unlike hardware controllers, NVDIMM does not require battery replacement, and data storage built on non-volatile memory does not require a second controller to ensure reliability. NVDIMM-powered solutions also deliver higher performance, whereas guaranteed cache synchronization in dual-controller mode leads to inevitable latencies. Thus, the implemented protection mechanism combines high write speeds and caching with the reliability of synchronous writes.

Enhancing the interoperability matrix, RAIDIX 4.6 adds the ability to connect to a Linux client through high-speed InfiniBand Mellanox ConnectX-4 100Gbit interfaces. This results in accelerated performance, with minimal latencies, in Big Data, HPC and corporate environments. On the ease-of-use front, RAIDIX 4.6 encompasses a host of interface tweaks for better control and manageability.

Established in 2009, RAIDIX is an SDS vendor that empowers system integrators and end customers to design and operate high-performance and cost-effective data storage systems. Flexible RAIDIX configurations ranging from entry-level systems up to multi-petabyte clusters are employed by the global partner network in 35 countries. IT solution providers utilize RAIDIX as the key component in turnkey projects or deliver industry-tailored appliances powered by RAIDIX.

About RAIDIX

RAIDIX (www.raidix.com) is a leading solution provider and developer of high-performance data storage systems. The company’s strategic value builds on patented erasure coding methods and innovative technology designed by the in-house research laboratory. The RAIDIX Global Partner Network encompasses system integrators, storage vendors and IT solution providers offering RAIDIX-powered products for professional and enterprise use.

Source: RAIDIX

Cray Announces Selected Preliminary 2017 Financial Results

Tue, 01/16/2018 - 10:32

SEATTLE, Jan. 16, 2018 — Global supercomputer leader Cray Inc. (Nasdaq:CRAY) today announced selected preliminary 2017 financial results. The 2017 anticipated results presented in this release are based on preliminary financial data and are subject to change until the year-end financial reporting process is complete.

Based on preliminary results, total revenue for 2017 is expected to be about $390 million.

While a wide range of results remains possible for 2018, based on the Company’s preliminary 2017 results Cray expects revenue to grow by 10-15% in 2018. Revenue is expected to be about $50 million for the first quarter of 2018.

“With a strong effort across the company and in partnership with our customers, we completed all our large acceptances during the fourth quarter,” said Peter Ungaro, president and CEO of Cray. “A couple of smaller acceptances that we did not finish are now expected to be completed early in 2018. While 2017 was challenging, we’re beginning to see early signs of a rebound in our core market and I’m proud of the progress we made during the year to position the company for long-term growth.”

Based on currently available information, Cray estimates that the impact of the Tax Cuts and Jobs Act (Tax Legislation) passed in December 2017 will result in a reduction to the Company’s GAAP earnings for the fourth quarter and year ended December 31, 2017 in the range of $30-35 million.  The large majority of this charge is due to the remeasurement of the Company’s U.S. deferred tax assets at lower enacted corporate tax rates.  The charge may differ from this estimate, possibly materially, due to, among other things, changes in interpretations and assumptions the Company has made, and guidance that may be issued. This charge has no impact on the Company’s previously provided non-GAAP guidance.  Going forward, the Company does not expect an increase in its non-GAAP tax rates as a result of the Tax Legislation.
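The arithmetic behind such a remeasurement is simple (the deferred-tax-asset figure below is purely hypothetical and not Cray's actual balance; the 35 and 21 percent rates are the federal statutory rates before and after the Tax Legislation):

```python
# Hypothetical illustration of remeasuring a deferred tax asset (DTA) at a lower rate.
future_deductions = 230_000_000    # hypothetical future tax deductions underlying the DTA
old_rate, new_rate = 0.35, 0.21

dta_old = future_deductions * old_rate    # DTA valued at the old corporate rate
dta_new = future_deductions * new_rate    # DTA valued at the newly enacted rate
charge = dta_old - dta_new                # one-time non-cash charge to GAAP earnings

print(f"DTA falls from ${dta_old/1e6:.1f}M to ${dta_new/1e6:.1f}M; charge ~ ${charge/1e6:.1f}M")
# With these assumed inputs the charge is ~$32M, in the $30-35M range Cray estimates.
```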

About Cray Inc.

Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray Inc.

Honored Physicist Steven Chu Selected as AAAS President-Elect

Tue, 01/09/2018 - 14:40

Jan. 9, 2018 — Nobel laureate and former Energy Secretary Steven Chu has been chosen as president-elect of the American Association for the Advancement of Science. Chu will start his three-year term as an officer and member of the Executive Committee of the AAAS Board of Directors at the 184th AAAS Annual Meeting in Austin, Texas, in February.

“As Secretary of Energy, I was reminded daily that science must continue to be elevated and integrated into our national life and throughout the world. The work of AAAS in connecting science with society, public policy, human rights, education, diplomacy and journalism – through its superb journals and programs – is essential,” said Chu in his candidacy statement.

“Never has there been a more important time than today for AAAS to communicate the advances in science, the methods we use to acquire this knowledge and the benefits of these discoveries to the public and our policymakers,” he said.

Chu cited his role in key reports by National Academies and the American Academy of Arts and Sciences on the competitiveness of the U.S. scientific enterprise and the state of fundamental research, studies that “sounded alarms that the health of science, science education and integration of science into public decision-making in the U.S. was in peril and heading in the wrong direction,” he said in his candidacy statement. “Concern among scientists and friends of science is even greater today and we in AAAS have our work cut out for us.”

AAAS must continue its efforts to communicate the benefits of scientific progress, Chu noted, saying the world’s largest general scientific organization must continue to ensure scientists and students have access to the free exchange of ideas and the ability to pursue discovery across national boundaries.

Chu currently serves as the William R. Kenan Jr. Professor of Physics and Professor of Molecular and Cellular Physiology at Stanford University. Prior to rejoining Stanford in 2013, Chu was secretary of energy during President Barack Obama’s first term, the first scientist to head the Department of Energy, the home of the nation’s 17 National Laboratories.

Prior to his appointment as energy secretary, Chu was director of the Lawrence Berkeley National Laboratory as well as a professor of physics and molecular and cell biology at University of California, Berkeley. He first joined Stanford University in 1987, where he was a professor of physics until 2004.

Between 1978 and 1987, Chu worked at Bell Labs, where he ultimately led its Quantum Electronics Research Department. At Bell Labs, Chu carried out research on laser cooling and atom trapping, work that would earn him – along with Claude Cohen-Tannoudji and William Daniel Phillips – the Nobel Prize for Physics in 1997. Their new methods for using laser light to “trap” and slow down atoms to study them in greater detail “contributed greatly to increasing our knowledge of the interplay between radiation and matter,” the Nobel Committee said in 1997.

Chu received bachelor’s degrees in mathematics and physics from the University of Rochester and a Ph.D. in physics from the University of California, Berkeley.

He was named an elected fellow of AAAS in 2000 and has been a member of AAAS since 1995. He served on the AAAS Committee on Nominations, which selects the annual slate of candidates for AAAS president-elect and Board of Directors elections, from 2009 to 2011.

The current AAAS president-elect, Margaret Hamburg, will begin her term as AAAS president at the close of the 2018 Annual Meeting. Hamburg is foreign secretary of the National Academy of Medicine. The current president, Susan Hockfield, will become chair of the AAAS Board of Directors. Hockfield is president emerita of the Massachusetts Institute of Technology.

Source: AAAS

Micron and Intel Announce End to NAND Memory Joint Development Program

Tue, 01/09/2018 - 11:20

BOISE, Idaho, and SANTA CLARA, Calif., Jan. 8, 2018 – Micron and Intel today announced an update to their successful NAND memory joint development partnership that has helped the companies develop and deliver industry-leading NAND technologies to market.

The announcement involves the companies’ mutual agreement to work independently on future generations of 3D NAND. The companies have agreed to complete development of their third generation of 3D NAND technology, which will be delivered toward the end of this year and extend into early 2019. Beyond that technology node, both companies will develop 3D NAND independently in order to better optimize the technology and products for their individual business needs.

Micron and Intel expect no change in the cadence of their respective 3D NAND technology development for future nodes. The two companies are currently ramping products based on their second-generation (64-layer) 3D NAND technology.

Both companies will also continue to jointly develop and manufacture 3D XPoint at the Intel-Micron Flash Technologies (IMFT) joint venture fab in Lehi, Utah, which is now entirely focused on 3D XPoint memory production.

“Micron’s partnership with Intel has been a long-standing collaboration, and we look forward to continuing to work with Intel on other projects as we each forge our own paths in future NAND development,” said Scott DeBoer, executive vice president of Technology Development at Micron. “Our roadmap for 3D NAND technology development is strong, and we intend to bring highly competitive products to market based on our industry-leading 3D NAND technology.”

“Intel and Micron have had a long-term successful partnership that has benefited both companies, and we’ve reached a point in the NAND development partnership where it is the right time for the companies to pursue the markets we’re focused on,” said Rob Crooke, senior vice president and general manager of Non-Volatile Memory Solutions Group at Intel Corporation. “Our roadmap of 3D NAND and Optane technology provides our customers with powerful solutions for many of today’s computing and storage needs.”

Source: Intel

Activist Investor Ratchets up Pressure on Mellanox to Boost Returns

Tue, 01/09/2018 - 10:29

Activist investor Starboard Value has sent a letter to Mellanox CEO Eyal Waldman demanding dramatic operational changes to boost returns to shareholders. This is the latest missive in an ongoing struggle between Starboard and Mellanox that began back in November when Starboard raised its stake in the interconnect specialist to 10.7 percent. Starboard argues Mellanox is significantly undervalued and that its costs, notably R&D, are unreasonably high.

The letter, dated January 8 and under the signature of Peter Feld, is pointed as shown in this excerpt:

“As detailed in the accompanying slides, over the last twelve months Mellanox’s R&D expenditures as a percentage of revenue were 42%, compared to the peer median of 22%. On SG&A, Mellanox spent 24% of revenue versus the peer median of 17%. It is critical to appreciate that Mellanox is not just slightly worse than peers on these key metrics, it is completely out of line with the peer group.”

Mellanox issued 2018 guidance for “low-to-mid-teens” (percent) revenue growth. Starboard cites a ‘consensus’ estimate of $816.5 million in revenue for 2017 and $986.4 million (14.5 percent). At 70.6 percent, Mellanox has one of the highest gross margins among comparable companies, and one of the lowest operating margins at 13.8 percent, according to Starboard.
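Put in dollar terms using only the figures cited above (a rough illustration; actual trailing-twelve-month revenue may differ from the 2017 consensus estimate):

```python
# Rough arithmetic based on the percentages and revenue figure quoted above.
revenue = 816.5e6                    # consensus 2017 revenue estimate cited in the letter

rd_actual, rd_peer = 0.42, 0.22      # R&D as a share of revenue: Mellanox vs. peer median
sga_actual, sga_peer = 0.24, 0.17    # SG&A as a share of revenue

rd_gap = (rd_actual - rd_peer) * revenue
sga_gap = (sga_actual - sga_peer) * revenue
print(f"R&D spend above the peer-median ratio:  ~${rd_gap / 1e6:.0f}M per year")   # ~$163M
print(f"SG&A spend above the peer-median ratio: ~${sga_gap / 1e6:.0f}M per year")  # ~$57M
```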

“We believe there is a tremendous opportunity at Mellanox, but it will require substantial change, well beyond just the Company’s recently announced 2018 targets,” wrote Feld.

Link to Starboard letter: http://www.starboardvalue.com/wp-content/uploads/Starboard_Value_LP_Letter_to_MLNX_01.08.2018.pdf

ACM Names New Director of Global Policy and Public Affairs

Tue, 01/09/2018 - 10:24

NEW YORK, Jan. 9, 2018 — ACM, the Association for Computing Machinery, has named Adam Eisgrau as its new Director of Global Policy and Public Affairs, effective January 3, 2018. Eisgrau will coordinate and support ACM’s engagement with public technology policy issues involving information technology, globally and particularly in the US and Europe. ACM aims to educate and inform computing professionals, policymakers, and the public about information technology policy and its consequences, and to shape public technology policy through a deeper understanding of the information technology issues involved.

“ACM has long been committed to providing policy makers in the US and abroad with the most current, accurate, objective and non-partisan information about all things digital as they wrestle with issues that profoundly affect billions of people,” said ACM President Vicki L. Hanson. “We’re thrilled to add a communicator of Adam’s caliber to our team as the computing technologies pioneered, popularized and promulgated by ACM members become ever more integrated to the fabric of daily life.”

“Speaking tech to power clearly, apolitically and effectively has never been more important,” said Eisgrau. “The chance to do so for ACM in Washington, Brussels and beyond is a dream opportunity.”

A former communications attorney, Eisgrau began his policy career as Judiciary Committee Counsel to then-freshman US Senator Dianne Feinstein (D-CA). Since leaving Senator Feinstein’s office in 1995, he has represented both public- and private-sector interests in international forums and to Congress, federal agencies and the media on a host of technology-driven policy matters. These include: digital copyright, e-commerce competition, peer-to-peer software, cybersecurity, encryption, online financial services, warrantless surveillance and digital privacy.

Prior to joining ACM, Eisgrau directed the government relations office of the American Library Association. He is a graduate of Dartmouth College and Harvard Law School.

About ACM

ACM, the Association for Computing Machinery www.acm.org, is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Source: ACM

The post ACM Names New Director of Global Policy and Public Affairs appeared first on HPCwire.

NOAA to Expand Compute Capacity by 50 Percent with Two New Dells

Tue, 01/09/2018 - 10:10

January 9, 2018 — NOAA’s combined weather and climate supercomputing system will be among the 30 fastest in the world, with the ability to process 8 quadrillion calculations per second, when two Dell systems are added to the IBMs and Crays at data centers in Reston, Virginia, and Orlando, Florida, later this month.

“NOAA’s supercomputers play a vital role in monitoring numerous weather events from blizzards to hurricanes,” said Secretary of Commerce Wilbur Ross. “These latest updates will further enhance NOAA’s abilities to predict and warn American communities of destructive weather.”

This upgrade completes phase three of a multi-year effort to build more powerful supercomputers that make complex calculations faster to improve weather, water and climate forecast models. It adds 2.8 petaflops of speed at both data centers combined, increasing NOAA’s total operational computing speed to 8.4 petaflops — or 4.2 petaflops per site.

Sixty percent more storage

The upgrade also adds 60 percent more storage capacity, allowing NOAA to collect and process more weather, water and climate observations used by all the models than ever before.

“NOAA’s supercomputers ingest and analyze billions of data points taken from satellites, weather balloons, airplanes, buoys and ground observing stations around the world each day,” said retired Navy Rear Adm. Timothy Gallaudet, Ph.D., acting NOAA administrator. “Having more computing speed and capacity positions us to collect and process even more data from our newest satellites — GOES-East, NOAA-20 and GOES-S — to meet the growing information and decision-support needs of our emergency management partners, the weather industry and the public.”

With this upgrade, U.S. weather supercomputing paves the way for NOAA’s National Weather Service to implement the next-generation Global Forecast System, known as the “American Model,” next year. Already one of the leading global weather prediction models, the GFS produces hourly forecast output and is run every six hours. The new GFS will see significant upgrades in 2019, including increased resolution that will allow NOAA to run the model at 9 kilometers and 128 levels out to 16 days, compared to the current run at 13 kilometers and 64 levels out to 10 days. The revamped GFS will run in research mode on the new supercomputers during this year’s hurricane season.
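To put that resolution upgrade in perspective, here is a rough back-of-envelope sketch (not from NOAA) estimating how much more compute the new configuration demands. The scaling assumptions are mine: cost grows with horizontal grid points (proportional to 1/Δx²), a proportionally smaller time step (CFL condition), the number of vertical levels, and the forecast length.

```python
# Rough, hypothetical estimate of the relative compute cost of the GFS upgrade.
# Assumes cost ~ (horizontal points) * (time steps) * (vertical levels) * (forecast days);
# horizontal points scale as (1/dx)^2 and the time step shrinks linearly with dx (CFL).

old = {"dx_km": 13, "levels": 64, "days": 10}
new = {"dx_km": 9, "levels": 128, "days": 16}

def relative_cost(cfg, base):
    horizontal = (base["dx_km"] / cfg["dx_km"]) ** 2   # more grid columns
    timestep   = base["dx_km"] / cfg["dx_km"]          # smaller dt (CFL condition)
    vertical   = cfg["levels"] / base["levels"]
    length     = cfg["days"] / base["days"]
    return horizontal * timestep * vertical * length

print(f"New GFS config is roughly {relative_cost(new, old):.1f}x more expensive per run")
# -> about 9.6x under these simplifying assumptions
```

Under these assumptions a single forecast run costs nearly an order of magnitude more compute, which is why the capacity additions precede the 2019 model upgrade.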

“As we look toward launching the next generation GFS in 2019, we’re taking a ‘community modeling approach’ and working with the best and brightest model developers in this country and abroad to ensure the new U.S. model is the most accurate and reliable in the world,” said National Weather Service Director Louis W. Uccellini, Ph.D.

Supporting a Weather-Ready Nation

The upgrade announced today – part of the agency’s commitment to support the Weather-Ready Nation initiative – will lead to more innovation, efficiency and accuracy across the entire weather enterprise. It opens the door for the National Weather Service to advance its seamless suite of weather, water and climate models over the next few years, allowing for more precise forecasts of extreme events a week in advance and beyond.

Improved hurricane forecasts and expanded flood information will enhance the agency’s ability to deliver critical support services to local communities. In addition, the new supercomputers will allow NOAA’s atmosphere and ocean models to run as one system, helping forecasters to more readily identify interaction between the two and reducing the number of operational models; as well as allow for development of a new seasonal forecast system to replace the Climate Forecast System in 2022, paving the way for improved seasonal forecasts as part of the Weather Research and Forecasting Innovation Act.

The added computing power will support upgrades to the National Blend of Models, which is being developed to provide a common starting point for all local forecasts; allow for more sophisticated ensemble forecasting, which is a method of improving the accuracy of forecasts by averaging results of various models; and provide quicker turnaround for atmosphere and ocean simulations, leading to earlier predictions of severe weather.

NOAA’s mission is to understand and predict changes in the Earth’s environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources.

Source: NOAA

The post NOAA to Expand Compute Capacity by 50 Percent with Two New Dells appeared first on HPCwire.

Momentum Builds for US Exascale

Tue, 01/09/2018 - 10:08

2018 looks to be a great year for the U.S. exascale program. The last several months of 2017 revealed a number of important developments that help put the U.S. quest for exascale on a solid foundation. In my last article, I provided a description of the elements of the High Performance Computing (HPC) ecosystem and its importance for advancing and sustaining this strategically important technology. It is good to report that the U.S. exascale program seems to be hitting the full range of ecosystem elements.

As a reminder, the National Strategic Computing Initiative (NSCI) assigned the U.S. Department of Energy (DOE) Office of Science (SC) and the National Nuclear Security Administration (NNSA) to execute a joint program to deliver capable exascale computing that emphasizes sustained performance on relevant applications and analytic computing to support their missions. The overall DOE program is known as the Exascale Computing Initiative (ECI) and is funded by the SC Advanced Scientific Computing Research (ASCR) program and the NNSA Advanced Simulation and Computing (ASC) program. Elements of the ECI include the procurement of exascale class systems and the facility investments in site preparations and non-recurring engineering. Also, ECI includes the Exascale Computing Project (ECP) that will conduct the Research and Development (R&D) in the areas of middleware (software stack), applications, and hardware to ensure that exascale systems will be productively usable to address Office of Science and NNSA missions.

In the area of hardware, the last part of 2017 revealed a number of important developments. First and most visible is the initial installation of the SC Summit system at Oak Ridge National Laboratory (ORNL) and the NNSA Sierra system at Lawrence Livermore National Laboratory (LLNL). Both systems are being built by IBM using Power9 processors with Nvidia GPU co-processors. The machines will have two Power9 CPUs per system board and will use a Mellanox InfiniBand interconnection network.

Beyond that, the architecture of each machine is slightly different. The ORNL Summit machine will use six Nvidia Volta GPUs per two Power9 CPUs on a system board and will use NVLink to connect to 512 GB of memory. The Summit machine will use a combination of air and water cooling. The LLNL Sierra machine will use four Nvidia Voltas and 256 GB of memory connected with the two Power9 CPUs per board. The Sierra machine will use only air cooling. As was reported by HPCwire in November 2017, the peak performance of the Summit machine will be about 200 petaflops and the Sierra machine is expected to be about 125 petaflops.
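As a rough illustration of where those peak numbers come from, the sketch below multiplies out node counts, GPUs per node, and per-GPU double-precision throughput. The node counts (~4,600 for Summit, ~4,300 for Sierra) and the ~7.5 teraflops FP64 figure per Volta are approximations not stated in this article, so treat the result as a sanity check rather than official specifications.

```python
# Hypothetical back-of-envelope check of the reported peak performance figures.
# Node counts and per-GPU FP64 throughput are assumptions, not official numbers.

VOLTA_FP64_TFLOPS = 7.5  # approximate double-precision peak of one Nvidia V100

systems = {
    "Summit (ORNL)": {"nodes": 4600, "gpus_per_node": 6},
    "Sierra (LLNL)": {"nodes": 4300, "gpus_per_node": 4},
}

for name, cfg in systems.items():
    peak_pflops = cfg["nodes"] * cfg["gpus_per_node"] * VOLTA_FP64_TFLOPS / 1000.0
    print(f"{name}: ~{peak_pflops:.0f} petaflops from GPUs alone")

# Summit lands near the ~200 PF cited and Sierra near the ~125 PF cited;
# the Power9 CPUs add a comparatively small amount on top.
```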

Installation of both the Summit and Sierra systems is currently underway with about 279 racks (without system boards) and the interconnection network already installed at each lab. Now that IBM has formally released the Power9 processors, the racks will soon start being populated with the boards that contain the CPUs, GPUs and memory. Once that is completed, the labs will start their acceptance testing, which is expected to be finished later in 2018.

Another important piece of news about the DOE exascale program is the clarification of the status of the Argonne National Laboratory (ANL) Aurora machine. This system was part of the collaborative CORAL procurement that also selected the Sierra and Summit machines. The Aurora system is being manufactured by Intel with Cray Inc. acting as the system integrator. The machine was originally scheduled to be an approximately 180 peak petaflops system using the third-generation “Knights Hill” Xeon Phi processors. However, during SC17 we learned that Intel is removing Knights Hill from its roadmap. This explains why, during the September ASCR Advisory Committee (ASCAC) meeting, Barb Helland, Associate Director of the ASCR office, announced that the Aurora system would be delayed to 2021 and upgraded to 1,000 petaflops (i.e., 1 exaflops).

The full details of the revised Aurora system are still under wraps. We have learned that it is going to use “novel” processor technologies, but exactly what that means is unclear. The ASCR program subjected the new Aurora design to an independent outside review. It found, “The hardware choices/design within the node is extremely well thought through. Early projections suggest that the system will support a broad workload.” The review committee even suggested that, “The system as presented is exciting with many novel technology choices that can change the way computing is done.” The Aurora system is in the process of being “re-baselined” by the DOE. Hopefully, once that is complete, we will get a better understanding of the meaning of “novel” technologies. If things go as expected, the changes to Aurora will allow the U.S. to achieve exascale by 2021.

An important, but sometimes overlooked, aspect of the U.S. exascale program is the number of computing systems being procured, tested and optimized by the ASCR and ASC programs as part of the buildup to exascale. Other “pre-exascale” systems include the 8.6 petaflops Mira computer at ANL and the 14 petaflops Cori system at Lawrence Berkeley National Lab (LBNL). The NNSA also has the 14.1 petaflops Trinity system at Los Alamos National Lab (LANL). Up to 20 percent of these precursor machines will serve as testbeds for the computing science R&D needed to ensure that U.S. exascale systems can productively address important national security and discovery science objectives.

The last, but certainly not least, bit of hardware news is that the ASCR and ASC programs are expected to start their next computer system procurements in early 2018. During her presentation to the U.S. Consortium for the Advancement of Supercomputing (USCAS), Barb Helland told the group that she expects the Request for Proposals (RFP) for the follow-ons to the Summit and Sierra systems to be released soon. These systems, to be delivered in the 2021-2023 timeframe, are expected to provide in excess of an exaflops of performance. The procurement process will be similar to the CORAL procurement and will be a collaboration between the DOE-SC ASCR and NNSA ASC programs. The ORNL exascale system will be called Frontier and the LLNL system will be known as El Capitan.

2017 also saw significant developments in the people element of the U.S. HPC ecosystem. As was previously reported, at last September’s ASCAC meeting Paul Messina announced that he would be stepping down as ECP Director on October 1st. Doug Kothe, previously the applications development lead, was announced as the new ECP Director. Upon taking the director job, Kothe, with his deputy Stephen Lee of LANL, instituted a process to review the organization and management of the ECP. At the December ASCAC conference call, Kothe reported that the review had been completed and had resulted in a number of changes. These included paring ECP down from five components to four (applications development, software technology, hardware and integration, and project management). He also reported that ECP has implemented a more structured management approach that includes a revised work breakdown structure (WBS), additional milestones, new key performance parameters and new risk management approaches. Finally, the new ECP Director reported that an Extended Leadership Team had been established with a number of new faces.

Another important element of the HPC ecosystem is the people doing the R&D and other work needed to keep the ecosystem going. The DOE ECI involves a huge number of people: last year about 500 researchers attended the ECP Principal Investigator meeting, and many more are involved in other DOE/NNSA programs and from industry. The ASCR and ASC programs run a number of efforts to educate and train future members of the HPC ecosystem, such as the ASCR- and ASC-co-funded Computational Science Graduate Fellowship (CSGF) and the Early Career Research Program. The NNSA offers similar opportunities. Both the ASCR and ASC programs continue to coordinate with National Science Foundation educational programs to ensure that America’s top computational science talent continues to flow into the ecosystem.

Finally, in addition to people and hardware, the U.S. program continues to develop the software stack (aka middleware) and end-user applications needed to ensure that exascale systems will be used productively. Doug Kothe reported that ECP has adopted standard Software Development Kits (SDKs), designed to support the goal of building a comprehensive, coherent software stack that enables application developers to productively write highly parallel applications targeting diverse exascale architectures. Kothe also reported that ECP is making good progress on applications software, including innovative approaches such as machine learning to utilize the GPUs that will be part of future exascale computers.

All in all, the last several months of 2017 have set the stage for a very exciting 2018 for the U.S. exascale program. It has been about five years since the ORNL Titan supercomputer debuted at #1 on the TOP500 list. Over that time, other more powerful DOE computers have come online (Trinity, Cori, etc.), but they were overshadowed by Chinese and European systems. It remains unclear whether the upcoming exascale systems will put the U.S. back on top of the supercomputing world. However, the recent developments are reassurance that the country is not going to give up its computing leadership position without a fight. That is great news because, for more than 60 years, the U.S. has sought leadership in high performance computing for the strategic value it provides in national security, discovery science, energy security, and economic competitiveness.

About the Author

Alex Larzelere is a senior fellow at the U.S. Council on Competitiveness, the president of Larzelere & Associates Consulting and HPCwire’s policy editor. He is currently a technologist, speaker and author on a number of disruptive technologies that include: advanced modeling and simulation; high performance computing; artificial intelligence; the Internet of Things; and additive manufacturing. Alex’s career has included time in federal service (working closely with DOE national labs), private industry, and as founder of a small business. Throughout that time, he led programs that implemented the use of cutting edge advanced computing technologies to enable high resolution, multi-physics simulations of complex physical systems. Alex is the author of “Delivering Insight: The History of the Accelerated Strategic Computing Initiative (ASCI).”

The post Momentum Builds for US Exascale appeared first on HPCwire.

Stampede1 Helps Researchers Examine a Greener Carbon Fiber Alternative

Tue, 01/09/2018 - 07:48

Jan. 9, 2018 — From cars and bicycles to airplanes and space shuttles, manufacturers around the world are trying to make these vehicles lighter, which helps lower fuel use and lessen the environmental footprint.

One way that cars, bicycles, airplanes and other modes of transportation have become lighter over the last several decades is by using carbon fiber composites. Carbon fiber is five times stronger than steel, twice as stiff, and substantially lighter, making it an ideal manufacturing material for many parts. But with the industry relying on petroleum products to make carbon fiber today, could we instead use renewable sources?

In the December 2017 issue of Science, Gregg Beckham, a group leader at the National Renewable Energy Laboratory (NREL), and an interdisciplinary team reported the results of experimental and computational investigations on the conversion of lignocellulosic biomass into a bio-based chemical called acrylonitrile, the key precursor to manufacturing carbon fiber.

The catalytic reactor shown here is for converting chemical intermediates into acrylonitrile. The work is part of the Renewable Carbon fiber Consortium. Photo by Dennis Schroeder/NREL

Acrylonitrile is a large commodity chemical, and it’s made today through a complex petroleum-based process at industrial scale. Propylene, which is derived from oil or natural gas, is mixed with ammonia, oxygen, and a complex catalyst. The reaction generates high amounts of heat and hydrogen cyanide, a toxic by-product. The catalyst used to make acrylonitrile today is also quite complex and expensive, and researchers still do not fully understand its mechanism.

“That’s where our study comes in,” Beckham said. “Acrylonitrile prices have witnessed large fluctuations in the past, which has in turn led to lower adoption rates for carbon fibers for making cars and planes lighter weight. If you can stabilize the acrylonitrile price by providing a new feedstock from which to make acrylonitrile, in this case renewably-sourced sugars from lignocellulosic biomass, we might be able to make carbon fiber cheaper and more widely adopted for everyday transportation applications.”

To develop new ideas to make acrylonitrile manufacturing from renewable feedstocks, the Department of Energy (DOE) solicited a proposal several years ago that asked: Is it possible to make acrylonitrile from plant waste material? These materials include corn stover, wheat straw, rice straw, wood chips, etc. They’re basically the inedible part of the plant that can be broken down into sugars, which can then be converted to a large array of bio-based products for everyday use, such as fuels like ethanol or other chemicals.

“If we could do this in an economically viable way, it could potentially decouple the acrylonitrile price from petroleum and offer a green carbon fiber alternative to using fossil fuels,” Beckham said.

Beckham and the team moved forward to develop a different process. The NREL process takes sugars derived from waste plant materials and converts those to an intermediate called 3-hydroxypropionic acid (3-HP). The team then used a simple catalyst and new chemistry, dubbed nitrilation, to convert 3-HP to acrylonitrile at high yields. The catalyst used for the nitrilation chemistry is about three times less expensive than the catalyst used in the petroleum-based process and it’s a simpler process. The chemistry is endothermic so it doesn’t produce excess heat, and unlike the petroleum-based process, it doesn’t produce the toxic byproduct hydrogen cyanide. Rather, the bio-based process only produces water and alcohol as its byproducts.

From a green chemistry perspective, the bio-based acrylonitrile production process has multiple advantages over the petroleum-based process that is being used today. “That’s the crux of the study,” Beckham said.

XSEDE’s Role in the Chemistry

Beckham is no stranger to XSEDE, the eXtreme Science and Engineering Discovery Environment that’s funded by the National Science Foundation. He’s been using XSEDE resources, including Stampede1, Bridges, Comet and now Stampede2, for about nine years as a principal investigator. Stampede1 and Stampede2 (currently #12 on the Top500 list) are deployed and maintained by the Texas Advanced Computing Center.

Most of the biological and chemistry research conducted for this project was experimental, but the mechanism of the nitrilation chemistry was at first only hypothesized. Vassili Vorotnikov, a postdoctoral researcher on the team at NREL, was recruited to run periodic density functional theory calculations on Stampede1, as well as on machines at NREL, to elucidate the mechanism of this new chemistry.

Over about two months and several million CPU-hours on Stampede1, the researchers were able to shed light on the chemistry of this new catalytic process. “The experiments and computations lined up nicely,” Vorotnikov said.
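For a sense of scale, the sketch below (not from the study) converts “several million CPU-hours over about two months” into an average sustained core count; the three-million-hour figure is an assumed round number used purely for illustration.

```python
# Hypothetical arithmetic: what does "several million CPU-hours over two months" imply?
# The 3-million-hour figure is an illustrative assumption, not a number from the study.

cpu_hours = 3_000_000          # assumed total allocation burned
wall_days = 60                 # "about two months"
wall_hours = wall_days * 24

avg_cores = cpu_hours / wall_hours
print(f"~{avg_cores:,.0f} cores busy around the clock for {wall_days} days")
# -> roughly 2,000 cores sustained for the entire period
```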

Because they had an allocation on Stampede1, they were able to rapidly turn around a complete mechanistic picture of how this chemistry works. “This will help us and other Top500 institutions to develop this chemistry further and design catalysts and processes more rationally,” Vorotnikov said. “XSEDE and the predictions of Stampede1 are pointing the way forward on how to improve nitrilation chemistry, how we can apply it to other molecules, and how we can make other renewable products for industry.”

“After the initial experimental discovery, we wanted to get this work out quickly,” Beckham continued. “Stampede1 afforded a great deal of bandwidth for doing these expensive, computationally intensive density functional theory calculations. It was fast and readily available and just a great machine to do these kind of calculations on, allowing us to turn around the mechanistic work in only a matter of months.”

Next Steps

There is a large community of chemists, biologists and chemical engineers developing ways to make everyday chemicals and materials from plant waste instead of petroleum. Researchers have tried this before with acrylonitrile, but none had achieved yields high enough to offer commercial potential for this particular product. With their new discovery, the team hopes this work makes the transition into industry sooner rather than later.

The immediate next step is scaling the process up to produce 50 kilograms of acrylonitrile. The researchers are working with several companies including a catalyst company to produce the necessary catalyst for pilot-scale operation; an agriculture company to help scale up the biology to produce 3-HP from sugars; a research institute to scale the separations and catalytic process; a carbon fiber company to produce carbon fibers from the bio-based acrylonitrile; and a car manufacturer to test the mechanical properties of the resulting composites.

“We’ll be doing more fundamental research as well,” Beckham said. “Beyond scaling acrylonitrile production, we are also excited about using this powerful, robust chemistry to make other everyday materials that people can use from bio-based resources. There are lots of applications for nitriles out there — applications we’ve not yet discovered.”

Source: Faith Singer-Villalobos, TACC

The post Stampede1 Helps Researchers Examine a Greener Carbon Fiber Alternative appeared first on HPCwire.

Mixed-Signal Neural Net Leverages Memristive Technology

Mon, 01/08/2018 - 13:11

Memristive technology has long been attractive for potential use in neuromorphic computing. Among other things it would permit building artificial neural network (ANN) circuits that are processed in parallel and more directly emulate how neuronal circuits in the brain work. Recent work led by an Oak Ridge National Laboratory researcher proposes a mixed signal approach that leverages memristive technology to build better ANNs.

“[Our] mixed-signal approach implements neural networks with spiking events in a synchronous way. Moreover, the use of nano-scale memristive devices saves both area and power in the system… The proposed [system] includes synchronous digital long term plasticity (DLTP), an online learning methodology that helps the system train the neural networks during the operation phase and improves the efficiency in learning considering the power consumption and area overhead,” writes Catherine Schuman, a Liane Russell Early Career Fellow in Computational Data Analytics at Oak Ridge National Laboratory, and colleagues[i].

Their paper, Memristive Mixed-Signal Neuromorphic Systems: Energy-Efficient Learning at the Circuit-Level, was published in the IEEE Journal on Emerging and Selected Topics in Circuits and Systems.

The researchers point out that digital and analog approaches to building ANNs each have drawbacks. While digital implementations have precision, robustness, noise resilience and scalability, they are area intensive. Conversely, analog counterparts are efficient in terms of silicon area and processing speed, but “rely on representing synaptic weights as volatile voltages on capacitors or in resistors, which do not lend themselves to energy and area efficient learning.”

Instead, they propose a mixed-signal system where communication and control are digital while the core multiply-and-accumulate functionality is analog. The researchers used a hafnium-oxide memristor design based on earlier work (“A practical hafnium-oxide memristor model suitable for circuit design and simulation,” Proceedings of the IEEE International Symposium on Circuits and Systems).

Their design (figure two in the paper) consists of m x n memristive neuromorphic cores. “Each core has several memristive synapses and one mixed-signal neuron (analog in, digital out) to implement a spiking neural network. This arrangement helps maintain similar capacitance at the synaptic outputs and corresponding neurons. The similar distance between synapse and inputs also results in negligible difference in charge accumulation,” write the authors.
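To make the analog multiply-and-accumulate idea concrete, here is a minimal, purely illustrative NumPy sketch (not from the paper): synaptic weights are stored as conductances, input spikes appear as read voltages on the crossbar rows, and each column’s summed current feeds a neuron that makes a digital fire/no-fire decision. All device values are made up.

```python
# Minimal sketch of analog multiply-and-accumulate in a memristive crossbar,
# with a digital (thresholded) spiking output. Illustrative only; values are invented.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_neurons = 8, 4
G = rng.uniform(1e-6, 1e-4, size=(n_inputs, n_neurons))   # conductances = synaptic weights (siemens)

input_spikes = rng.integers(0, 2, size=n_inputs)           # digital input events (0/1)
V_READ = 0.2                                                # read voltage applied on spiking rows (volts)
voltages = input_spikes * V_READ

# Ohm's law per device plus Kirchhoff's current law per column:
# each column current is the weighted sum of its row voltages.
column_currents = voltages @ G                              # analog accumulate

THRESHOLD = 2e-5                                            # amps; arbitrary firing threshold
output_spikes = (column_currents > THRESHOLD).astype(int)   # digital decision per neuron

print("input spikes :", input_spikes)
print("column currents (A):", np.round(column_currents, 7))
print("output spikes:", output_spikes)
```

The point of the arrangement is that the weighted sum happens "for free" in the analog domain, while only the compact spike events cross the digital boundary.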

Also exciting is the researchers’ approach to implementing learning. Most ANNs require offline learning. For a network to learn online, Long Term Plasticity plays an important role in training the circuit with continuous updates of synaptic weights based on the timing of pre- and post-neuron fires.

“Instead of carefully crafting analog tails to provide variation in the voltage across the synapses, we utilize digital pre- and post-neuron firing signals and apply pulse modulation to implement a digital LTP (DLTP) technique… Basically the online learning process implemented here is a one-clock-cycle-tracking version of spike-timing-dependent plasticity… A more thorough STDP learning implementation would need to track several clock cycles before and after the post-neuron fire, leading to more circuitry and hence increased power and area. Our DLTP approach acts similarly but ensures lower area and power,” write the authors.
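The sketch below is a loose, hypothetical interpretation of such a one-clock-cycle learning rule (it is not the authors’ circuit): a synapse is strengthened when its pre-neuron fired in the previous cycle and the post-neuron fires now, weakened in the opposite ordering, and left alone otherwise.

```python
# Hypothetical software analogue of a one-clock-cycle plasticity rule (DLTP-like).
# This illustrates the idea only; it is not the circuit described in the paper.

def dltp_update(weight, pre_fired_prev, post_fires_now,
                post_fired_prev, pre_fires_now,
                lr=0.01, w_min=0.0, w_max=1.0):
    """Update one synaptic weight using only the current and previous clock cycles."""
    if pre_fired_prev and post_fires_now:      # pre just before post -> strengthen
        weight += lr
    elif post_fired_prev and pre_fires_now:    # post just before pre -> weaken
        weight -= lr
    return min(max(weight, w_min), w_max)      # keep weight in its allowed range

# Toy usage: a pre spike at cycle t followed by a post spike at t+1 potentiates the synapse.
w = 0.5
w = dltp_update(w, pre_fired_prev=True,  post_fires_now=True,
                   post_fired_prev=False, pre_fires_now=False)
print(w)  # 0.51
```

Tracking only one cycle keeps the bookkeeping (and hence circuit area and power) small, which is the trade-off the authors emphasize.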

Link to paper: http://ieeexplore.ieee.org/document/8119503/

Feature image source: ORNL

[i] Gangotree Chakma, Student Member, IEEE, Md Musabbir Adnan, Student Member, IEEE, Austin R. Wyer, Student Member, IEEE, Ryan Weiss, Student Member, IEEE, Catherine D. Schuman, Member, IEEE, and Garrett S. Rose, Member, IEEE

The post Mixed-Signal Neural Net Leverages Memristive Technology appeared first on HPCwire.

Curie Supercomputer Uses HPC to Help Improve Agricultural Production

Mon, 01/08/2018 - 11:47

Jan. 8, 2018 — Agriculture is the principal means of livelihood in many regions of the developing world, and the future of our world depends on sustainable agriculture at a planetary level. High Performance Computing is becoming critical to agricultural activity, pest control, pesticide design and the study of pesticide effects. Climate data are used to understand impacts on water and agriculture in many regions of the world, to help local authorities manage water and agricultural resources, and to assist vulnerable communities through improved drought management and response.

Image courtesy of the European Commission.

The demand for agricultural products has increased globally, and meeting this growing demand has a negative effect on the environment: agricultural production already requires about 70% of the world’s water resources, and increasing it means a rise in greenhouse gas emissions.

To reduce the negative impact on the ecosystem, seed companies are on the lookout for new plant varieties that yield more produce. Companies normally find such varieties through field trials. Field trials are a simple observational method, but they cost a lot of money and are time consuming, often taking years to identify the best varieties.

Using High Performance Computing (HPC), the Curie supercomputer offers a more efficient route to solving this problem. HPC enables numerical simulations of plant growth that help seed companies identify superior varieties without relying on field trials, which are more expensive and harder on the environment.

For example, to know the conditions under which a plant grows best (its genetic parameters), a farmer would have to test its growth rate under various conditions to select the parameters best suited to the specific environment of the region. With the help of HPC, estimating these parameters becomes more accurate and simpler: plant growth is simulated with models that account for the plant’s interaction with its environment. This reduces the number of field trials needed by a large margin; for example, 10 field trials instead of 100 may be enough to estimate the best genetic parameters.
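As a purely illustrative sketch of the idea (the actual Cybele Tech models are not described here), the code below fits the “genetic parameters” of a toy logistic growth model to a handful of observed growth measurements, standing in for calibrating a plant-growth simulation against a small number of field trials. The model form and all numbers are assumptions.

```python
# Toy illustration of calibrating a plant-growth model against few observations.
# The logistic model and all numbers are assumptions, not Cybele Tech's actual models.
import numpy as np
from scipy.optimize import curve_fit

def logistic_growth(t, r, K, b0=1.0):
    """Biomass over time: growth rate r and capacity K play the role of 'genetic parameters'."""
    return K / (1.0 + (K / b0 - 1.0) * np.exp(-r * t))

# Pretend these are measurements from a small number of field trials (days, biomass).
t_obs = np.array([0, 10, 20, 30, 40, 50], dtype=float)
b_obs = np.array([1.0, 3.1, 8.9, 19.5, 28.7, 32.4])

# Fit the parameters by least squares, the same role a large simulation campaign would play.
(r_fit, K_fit), _ = curve_fit(logistic_growth, t_obs, b_obs, p0=[0.1, 30.0])
print(f"estimated growth rate r = {r_fit:.3f}, capacity K = {K_fit:.1f}")
```

In practice the simulations and parameter searches are far larger, which is where supercomputer allocations like the one on Curie come in.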

Cybele Tech, a French company, has used High Performance Computing to enable farmers to produce more with less and to know exactly what their plants need to get a better yield.

The company has been awarded 4 million core hours on Curie, hosted by GENCI at CEA in France.

Source: European Commission

The post Curie Supercomputer Uses HPC to Help Improve Agricultural Production appeared first on HPCwire.

Ellexus Publishes White Paper Advising HPCers on Meltdown, Spectre

Mon, 01/08/2018 - 10:10

Jan 8 — Can you afford to lose a third of your compute real estate? If not, you need to pre-empt the impact of Meltdown and Spectre.

Meltdown and Spectre are quickly becoming household names, and not just in the HPC space. The severe design flaws in Intel microprocessors could allow sensitive data to be stolen, and the fixes are likely to be bad news for I/O-intensive applications such as those often used in HPC.

Ellexus Ltd, the I/O profiling company, has released a white paper: How the Meltdown and Spectre bugs work and what you can do to prevent a performance plummet.

Why is the Meltdown fix worse for HPC applications?

The changes being made to the Linux kernel (the KAISER patch) to more securely separate user and kernel space add overhead to every context switch. This has a measurable impact on the performance of shared file systems and I/O-intensive applications, and is particularly noticeable in I/O-heavy workloads, where the performance penalty could reach 10-30%.

Systems that were previously just about coping with I/O heavy workloads could now be in real trouble. It’s very easy for applications sharing datasets to overload the file system and prevent other applications from working, but bad I/O can also affect each program in isolation, even before the patches for the attacks make that worse.

Profile application I/O to rescue lost performance

You don’t have to put up with poor performance in order to improve security, however. The most obvious way to mitigate performance losses is to profile I/O and identify ways to optimise applications’ I/O performance.

By using the tool suites from Ellexus, Breeze and Mistral, to analyse workflows it is possible to identify changes that will help to eliminate bad I/O and regain the performance lost to these security patches.

Ellexus’ tools locate bottlenecks and applications with bad I/O on large distributed systems, cloud infrastructure and supercomputer clusters. Once applications with bad I/O patterns have been located, our tools indicate the potential performance gains as well as pointers on how to achieve them. Often the optimisation is as simple as changing an environment variable, changing a single line in a script or changing a simple I/O call to read more than one byte at a time.
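To illustrate why byte-at-a-time I/O is such an easy win (independent of any Ellexus tooling), here is a small, hypothetical Python micro-benchmark comparing single-byte reads with buffered 1 MiB reads. Every tiny read is a separate system call, and system calls are exactly what the KAISER/KPTI patches make more expensive; the file name and sizes are arbitrary.

```python
# Hypothetical micro-benchmark: many 1-byte reads vs. few large reads.
# Each os.read() is a system call, so the first loop pays the post-KPTI
# context-switch overhead millions of times; the second pays it a handful of times.
import os
import time

PATH = "testfile.bin"
SIZE = 8 * 1024 * 1024  # 8 MiB of throwaway data

with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))

def read_in_chunks(chunk_size):
    fd = os.open(PATH, os.O_RDONLY)
    start = time.perf_counter()
    while os.read(fd, chunk_size):   # loop until EOF
        pass
    elapsed = time.perf_counter() - start
    os.close(fd)
    return elapsed

slow = read_in_chunks(1)             # one syscall per byte
fast = read_in_chunks(1024 * 1024)   # one syscall per MiB
print(f"1-byte reads : {slow:.2f} s")
print(f"1-MiB reads  : {fast:.4f} s  (~{slow / fast:.0f}x faster)")

os.remove(PATH)
```

The exact ratio depends on the kernel and hardware, but the gap only widens once page-table isolation makes each syscall costlier.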

In some cases, the candidates for optimisation will be obvious – a workflow that clearly stresses the file system every time it is run, for example, or one that runs for significantly longer than a typical task.

In others it may be necessary to perform an initial high-level analysis of each job. Follow three steps to optimise application I/O and mitigate the impact of the KAISER patch:

1. Profile all your applications with Mistral to look for the worst I/O patterns

Mistral, our I/O profiling tool, is lightweight enough to run at scale. In this case Mistral would be set up to record relatively detailed information on the type of I/O that workflows are performing over time, looking at factors such as how many metadata operations are being performed, the number of small I/O operations, and so on.

2. Deal with the worst applications, delving into detail with Breeze

Once the candidate workflows have been identified they can be analysed in detail with Breeze. As a first step, the Breeze trace can be run through our Healthcheck tool that identifies common issues such as an application that has a high ratio of file opens to writes or a badly configured $PATH causing the file system to be trawled every time a workflow uses “grep”.

3. Put in place longer-term I/O quality assurance

Implement the Ellexus tools across your systems to get the most from the compute and storage and to prevent problems reoccurring.

By following these simple steps and our best-practice guidance, it is easy to find and fix the biggest issues quickly, giving you more time to optimise for the best performance possible.

Source: Ellexus Ltd

The post Ellexus Publishes White Paper Advising HPCers on Meltdown, Spectre appeared first on HPCwire.
