Related News – HPCwire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

AI in the News: Rao in at Intel, Ng out at Baidu, Nvidia on at Tencent Cloud

11 hours 19 min ago

Just as AI has become the leitmotif of the advanced scale computing market, infusing much of the conversation about HPC in commercial and industrial spheres, it is also driving high-level management changes in the industry.

This week saw two headliner announcements:

  • Naveen Rao, former CEO of the AI company Nervana, which Intel acquired last year, announced he will lead Intel’s new Artificial Intelligence Products Group (AIPG), a strategic, “cross-Intel organization.”
  • Andrew Ng, one of the highest-profile players in AI, announced that he has resigned his post as chief scientist at Baidu. His destination: unknown.

In addition, Nvidia announced that Tencent Cloud will integrate its Tesla GPU accelerators and deep learning platform, along with Nvidia NVLink technology, into Tencent’s public cloud platform.

Naveen Rao of Intel

Rao announced his new position and AIPG in a blog post (“Making the Future Starts with AI”) that underscores Intel’s AI push, along with its recent $15B acquisition of Mobileye. The formation of AIPG adds to the drumbeat among industry observers that the company views AI, broadly defined, as its next big growth market. In addition, the company’s processor roadmap emphasizes co-processors (aka accelerators) used for AI workloads. To date, Nvidia GPUs have enjoyed the AI processor spotlight. But commenting on Intel’s x86-based roadmap at this week’s Leverage Big Data+Enterprise HPC event in Florida, a senior IT manager at a financial services company said he believes Intel will mount a major competitive response in the AI market. “I wouldn’t want to be Nvidia right now,” he said.

Rao himself referred to Intel as “a data company.”

“The new organization (AIPG) will align resources from across the company to include engineering, labs, software and more as we build on our current leading AI portfolio: the Intel Nervana platform, a full-stack of hardware and software AI offerings that our customers are looking for from us,” Rao said.

“Just as Intel has done in previous waves of computational trends, such as personal and cloud computing, Intel intends to rally the industry around a set of standards for AI that ultimately brings down costs and makes AI more accessible to more people – not only institutions, governments and large companies, as it is today,” he said.

Nvidia had significant news of its own this week in announcing Tencent Cloud’s adoption of its Tesla GPU accelerators to help advance AI for enterprise customers.

“Tencent Cloud GPU offerings with NVIDIA’s deep learning platform will help companies in China rapidly integrate AI capabilities into their products and services,” said Sam Xie, vice president of Tencent Cloud. “Our customers will gain greater computing flexibility and power, giving them a powerful competitive advantage.”

As part of the companies’ collaboration, Tencent Cloud said it will offer a range of cloud products that will include GPU cloud servers incorporating Nvidia Tesla P100, P40 and M40 GPU accelerators and Nvidia deep learning software.

As for Andrew Ng, he did not state what his next career step will be, only saying “I will continue my work to shepherd in this important societal change.

Andrew Ng

“In addition to transforming large companies to use AI, there are also rich opportunities for entrepreneurship as well as further AI research,” he said on Twitter. “I want all of us to have self-driving cars; conversational computers that we can talk to naturally; and healthcare robots that understand what ails us. The industrial revolution freed humanity from much repetitive physical drudgery; I now want AI to free humanity from repetitive mental drudgery, such as driving in traffic. This work cannot be done by any single company — it will be done by the global AI community of researchers and engineers.”

Ng, who was a founder of the Google Brain project, joined Baidu in 2014 to work on AI, and since then, he said, Baidu’s AI group has grown to roughly 1,300 people.

“Our AI software is used every day by hundreds of millions of people,” said Ng. “My team birthed one new business unit per year each of the last two years: autonomous driving and the DuerOS Conversational Computing platform. We are also incubating additional promising technologies, such as face-recognition (used in turnstiles that open automatically when an authorized person approaches), Melody (an AI-powered conversational bot for healthcare) and several more.”


Huawei, Altair Sign MoU to Jointly Pursue HPC Opportunities

Fri, 03/24/2017 - 11:33

HANNOVER, Germany, March 24, 2017 — Huawei and Altair have signed a Memorandum of Understanding (MoU) at CeBIT 2017, marking the beginning of a cooperation on high-performance computing (HPC) and cloud solutions. The two companies will cooperate to develop industrial simulation cloud solutions for their customers.

In accordance with the terms of the cooperation, they will build a joint test center in Huawei’s Munich OpenLab to carry out software and hardware optimization tests based on Altair’s PBS Works and Huawei’s HPC and cloud platforms. Taking full advantage of the high performance and reliability of Huawei’s HPC and cloud platforms, the joint tests will help customers reduce software integration and performance verification workloads considerably and simplify the deployment and management of industrial simulation cloud platforms. Altair PBS Works is the leading HPC workload management suite that offers comprehensive, reliable HPC resource management solutions and policy-based job scheduling solutions.
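
For readers unfamiliar with it, PBS Works is built around policy-based batch scheduling: a user describes a job’s resource needs in a short script, and the scheduler decides where and when it runs. The sketch below is a generic Python illustration of that workflow, not a Huawei or Altair artifact; the job name, resource sizing, and solver binary are hypothetical, and it assumes a PBS-style qsub command is installed.

```python
# Hypothetical illustration of submitting a simulation job to a PBS-style
# scheduler such as PBS Professional, the engine underlying PBS Works.
import subprocess
import tempfile
import textwrap

job_script = textwrap.dedent("""\
    #!/bin/bash
    #PBS -N crash_sim               # job name (hypothetical)
    #PBS -l select=4:ncpus=28       # request 4 nodes x 28 cores each
    #PBS -l walltime=02:00:00       # 2-hour limit; site policy applies
    cd $PBS_O_WORKDIR
    mpirun ./solver input.cfg       # placeholder simulation binary
""")

with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(job_script)
    script_path = f.name

# qsub prints the new job's ID on success (requires a PBS installation).
result = subprocess.run(["qsub", script_path], capture_output=True, text=True)
print(result.stdout.strip() or result.stderr.strip())
```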

“Digital transformation is now bringing revolutionary changes to the manufacturing industry, especially in the automobile industry. An industrial simulation cloud platform can accelerate engineering simulation tests and allow local and remote R&D personnel to simultaneously work on product designs with cutting-edge technologies and designs, gaining edges in the market. The Huawei-Altair cooperation will be dedicated to building highly efficient, high-performance industrial simulation cloud solutions leveraging Altair’s PBS Works software suite,” said Yu Dong, President of Industry Marketing & Solution Dept of Enterprise BG, Huawei. “Committed to a vision of openness, cooperation, and win-win, Huawei cooperates with global partners to provide customers with innovative solutions for industrial manufacturing and help them achieve business success.”

“We are very happy about this cooperation with Huawei,” said Dr. Detlef Schneider, Senior Vice President EMEA at Altair. “By combining our HPC technologies, namely Altair’s market-leading HPC workload management suite PBS Works and Huawei’s HPC and cloud platforms, we will provide our industrial manufacturing customers with more value for their HPC and cloud applications. This combined solution will significantly reduce software integration efforts and simplify the deployment and management of industrial simulation cloud platforms.”

About Huawei

Huawei is a leading global information and communications technology (ICT) solutions provider. Our aim is to build a better connected world, acting as a responsible corporate citizen, innovative enabler for the information society, and collaborative contributor to the industry. Driven by customer-centric innovation and open partnerships, Huawei has established an end-to-end ICT solutions portfolio that gives customers competitive advantages in telecom and enterprise networks, devices and cloud computing. Huawei’s 170,000 employees worldwide are committed to creating maximum value for telecom operators, enterprises and consumers. Our innovative ICT solutions, products and services are used in more than 170 countries and regions, serving over one-third of the world’s population. Founded in 1987, Huawei is a private company fully owned by its employees. For more information, visit Huawei online at www.huawei.com.

About PBS Works

PBS Works is the market leader in comprehensive, secure workload management for high-performance computing (HPC) and cloud environments. This market-leading workload management suite allows HPC users to simplify their environment while optimizing system utilization, improving application performance, and improving ROI on hardware and software investments. PBS Works is the preferred solution for many of the planet’s largest, most complex clusters and supercomputers – and is the choice for smaller organizations needing HPC solutions that are easy to adopt and use.  www.pbsworks.com

About Altair

Founded in 1985, Altair is focused on the development and application of simulation technology to synthesize and optimize designs, processes and decisions for improved business performance. Privately held with more than 2,600 employees, Altair is headquartered in Troy, Michigan, USA with more than 45 offices throughout 20 countries, and serves more than 5,000 corporate clients across broad industry segments. To learn more, please visit www.altair.com.

Source: Altair


Tencent Cloud Adopts NVIDIA Tesla for AI Cloud Computing

Fri, 03/24/2017 - 08:11

SANTA CLARA, Calif., Mar 24, 2017 — NVIDIA (NASDAQ: NVDA) today announced that Tencent Cloud will adopt NVIDIA Tesla GPU accelerators to help advance artificial intelligence for enterprise customers.

Tencent Cloud will integrate NVIDIA’s GPU computing and deep learning platform into its public cloud computing platform. This will provide users with access to a set of new cloud services powered by Tesla GPU accelerators, including the latest Pascal architecture-based Tesla P100 and P40 GPU accelerators with NVIDIA NVLink technology for connecting multiple GPUs and NVIDIA deep learning software.

NVIDIA’s AI computing technology is used worldwide by cloud service providers, enterprises, startups and research organizations for a wide range of applications.

“Companies around the world are harnessing their data with our AI computing technology to create breakthrough products and services,” said Ian Buck, general manager of Accelerated Computing at NVIDIA. “Through Tencent Cloud, more companies will have access to NVIDIA’s deep learning platform, the world’s most broadly adopted AI platform.”

“Tencent Cloud GPU offerings with NVIDIA’s deep learning platform will help companies in China rapidly integrate AI capabilities into their products and services,” said Sam Xie, vice president of Tencent Cloud. “Our customers will gain greater computing flexibility and power, giving them a powerful competitive advantage.”

GPU-Based Cloud Offerings for AI

Organizations across many industries are seeking greater access to the core AI technologies required to develop advanced applications, such as facial recognition, natural language processing, traffic analysis, intelligent customer service, and machine learning.

The massively efficient parallel processing capabilities of GPUs make the NVIDIA computing platform highly effective at accelerating a host of other data-intensive workloads, including advanced analytics and high performance computing.

As part of the companies’ collaboration, Tencent Cloud intends to offer customers a wide range of cloud products based on NVIDIA’s AI computing platforms. This will include GPU cloud servers incorporating NVIDIA Tesla P100, P40 and M40 GPU accelerators and NVIDIA deep learning software. Tencent Cloud launched GPU servers based on NVIDIA Tesla M40 GPUs and NVIDIA deep learning software in December.

During the first half of this year, these cloud servers will integrate up to eight GPU accelerators, providing users with superior performance while meeting the requirements for deep learning and algorithms that involve ultra-high data volume and ultra-sized equipment.
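
Neither company’s announcement spells out how tenants would exercise those GPUs; as a minimal sketch, assuming a Python environment with PyTorch installed on the cloud server (our assumption, not part of the Tencent offering), one could enumerate the visible Tesla devices and smoke-test each with a small matrix multiply:

```python
# Minimal sketch: list the CUDA devices a cloud instance exposes and run a
# small matrix multiply on each one. Assumes PyTorch with CUDA support.
import torch

def check_gpus():
    n = torch.cuda.device_count()
    print(f"visible CUDA devices: {n}")
    for i in range(n):
        name = torch.cuda.get_device_name(i)  # e.g. "Tesla P100-PCIE-16GB"
        mem_gib = torch.cuda.get_device_properties(i).total_memory / 2**30
        x = torch.randn(2048, 2048, device=f"cuda:{i}")
        y = x @ x                      # exercise the device
        torch.cuda.synchronize(i)      # wait for the kernel to finish
        print(f"  cuda:{i}: {name}, {mem_gib:.0f} GiB, matmul ok {tuple(y.shape)}")

if __name__ == "__main__":
    check_gpus()
```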

About NVIDIA

NVIDIA‘s (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. More information at http://nvidianews.nvidia.com/.

Source: NVIDIA


GCS Furthers European Research As Hosting Member of PRACE 2

Fri, 03/24/2017 - 08:04

BERLIN, Germany, March 24, 2017 — The Gauss Centre for Supercomputing (GCS) extends its role as a hosting member of the Partnership for Advanced Computing in Europe (PRACE) into the European programme’s second phase, PRACE 2, which will run from 2017 to 2020. At the 25th PRACE Council meeting held in Amsterdam (The Netherlands), it was agreed that the petascale high performance computing (HPC) systems of the GCS members HLRS (High Performance Computing Center Stuttgart), JSC (Jülich Supercomputing Centre), and LRZ (Leibniz Supercomputing Centre Garching/Munich) will continue to be available for European research activities of outstanding scientific excellence and value. By extending its partnership as a PRACE 2 hosting member, GCS will again take a leading role in HPC in Europe and will contribute significantly to boosting scientific and industrial advancement by offering principal investigators access to GCS’s world-class HPC infrastructure for approved large-scale research activities.

By providing computing time on its three petascale HPC systems – Hazel Hen hosted at HLRS, JUQUEEN hosted at JSC, and SuperMUC hosted at LRZ – the Gauss Centre for Supercomputing is the PRACE 2 hosting member that provides the lion’s share of computing resources for the European HPC programme. Together with the other four hosting members (BSC representing Spain, CINECA representing Italy, CSCS representing Switzerland, and GENCI representing France), the PRACE programme provides a federated world-class Tier-0 supercomputing infrastructure that is architecturally diverse and allows for capability allocations that are competitive with comparable programmes in the USA and in Asia. The computing capacity offered to European research projects through PRACE 2 is planned to grow to 75 million node hours per year.

Investigators from academia and industry residing in one of the 24 PRACE member countries are eligible to apply for computing time for large-scale simulation projects on these supercomputers through the PRACE 2 programme. Resource allocations will be granted based on a single, thorough peer-review process based exclusively on scientific excellence of the highest standard. The principal investigators of accepted research projects will be able to use computing resources of the highest level for a predefined period of time. Additionally, they will be supported by the coordinated high-level support teams of the supercomputing centres providing access to their world-class HPC infrastructure.

The continuation of the European HPC programme with the 2nd phase of their partnership, PRACE 2, was resolved at the 25th PRACE Council Meeting in Amsterdam in early March, 2017. The overarching goal of PRACE is to provide the federated European supercomputing infrastructure that is science-driven and globally competitive. It builds on the strengths of European science providing high-end computing and data analysis resources to drive discoveries and new developments in all areas of science and industry, from fundamental research to applied sciences including: mathematics and computer sciences, medicine, and engineering, as well as digital humanities and social sciences.

Further information: http://www.prace-ri.eu/IMG/pdf/2017-03-20-PRACE-2-Press-Release-V11.pdf

About GCS

The Gauss Centre for Supercomputing (GCS) combines the three national supercomputing centres HLRS (High Performance Computing Center Stuttgart), JSC (Jülich Supercomputing Centre), and LRZ (Leibniz Supercomputing Centre, Garching near Munich) into Germany’s Tier-0 supercomputing institution. Concertedly, the three centres provide the largest and most powerful supercomputing infrastructure in all of Europe to serve a wide range of industrial and research activities in various disciplines. They also provide top-class training and education for the national as well as the European High Performance Computing (HPC) community. GCS is the German member of PRACE (Partnership for Advanced Computing in Europe), an international non-profit association consisting of 25 member countries, whose representative organizations create a pan-European supercomputing infrastructure, providing access to computing and data management resources and services for large-scale scientific and engineering applications at the highest performance level. GCS is jointly funded by the German Federal Ministry of Education and Research and the federal states of Baden-Württemberg, Bavaria, and North Rhine-Westphalia. It has its headquarters in Berlin/Germany. www.gauss-centre.eu

Source: GCS


Scalable Informatics Ceases Operations

Thu, 03/23/2017 - 15:59

On the same day we reported on the difficulties HPC compiler maker PathScale is facing, we are sad to learn that another HPC vendor, Scalable Informatics, is closing its doors. For the last 15 years, Scalable Informatics, the HPC storage and system vendor founded by Joe Landman in 2002, provided high performance software-defined storage and compute solutions to a wide range of markets — from financial and scientific computing to research and big data analytics. Its platform was based on placing tightly coupled storage and computing in the same unit to eliminate bottlenecks and enable high-performance data and I/O for computationally intensive workloads.

Tom Tabor with Jacob Loveless, CEO, Lucera (right), and Joe Landman, CEO, Scalable Informatics (left).

In a letter to the community posted on the company website, founder and CEO Joe Landman writes:

We want to thank our many customers and partners, whom have made this an incredible journey. We enjoyed designing, building, and delivering market dominating performance and density systems to many groups in need of this tight coupling of massive computational, IO, and network firepower.

Sadly, we ran into the economic realities of being a small player in a capital intensive market. We offered real differentiation in terms of performance, by designing and building easily the fastest systems in market.

But building the proverbial “better mousetrap” was not enough to cause the world to beat a path to our door. We had to weather many storms, ride out many fads. At the end of this process, we simply didn’t have the resources to continue fighting this good fight.

Performance does differentiate, and architecture is the most important aspect of performance. There are no silver bullets, there are no magical software elements that can take a poor architecture and make it a good one. This has been demonstrated by us and many others so often, it ought to be an axiom. Having customers call you up to express regret for making the wrong choice, while somewhat satisfying, doesn’t pay the bills. This happened far too often, and is in part, why we had to make this choice.

This is not how we wanted this to end. But end it did.

Thank you all for your patronage, and maybe in the near future, we will all be …

Landman shares additional thoughts about this difficult situation here: Requiem.


Helix Nebula Science Cloud Moves to Prototype Phase

Thu, 03/23/2017 - 14:42

GENEVA, March 23, 2017 — HNSciCloud, the H2020 Pre-Commercial Procurement project aiming to establish a European hybrid cloud platform that will support high-performance, data-intensive scientific use-cases, today announced a webcast for the awards ceremony for the successful contractors moving to the Prototype Phase.

The awards ceremony for the successful contractors moving to the Prototype Phase of the Helix Nebula Science Cloud Pre-Commercial Procurement will take place at CERN in Geneva, Switzerland, on April 3, 2017, at 14:30 CEST.

In November 2016, four consortia won the €5.3 million joint HNSciCloud Pre-Commercial Procurement (PCP) tender and started to develop the designs for the European hybrid cloud platform that will support high-performance, data-intensive scientific use-cases. At the beginning of February 2017, the four consortia met at CERN to present their proposals to the buyers. After the submission of their designs, the consortia were asked to prepare their bids for the prototyping phase.

In early April the winners of the bids to build prototypes will be announced at CERN during the “Launching the Helix Nebula Science Cloud Prototype Phase” webcast event. The award ceremony and the presentations of the solutions moving into the prototyping phase will be the focus of the webcast.

If you are interested in understanding more about the prototypes that will be developed, or simply want more insights on the Pre-Commercial Procurement process, mark the date in your agenda and follow the live webcast of the event directly from our website www.hnscicloud.eu.

For more information about the event, please contact: info@hnscicloud.eu

About HNSciCloud

HNSciCloud, with Grant Agreement 687614, is a Pre-Commercial Procurement Action sponsored by 10 of Europe’s leading public research organisations and co-funded by the European Commission.

Source: HNSciCloud


TACC Supercomputer Facilitates Reverse Engineering of Cellular Control Networks

Thu, 03/23/2017 - 12:40

AUSTIN, Texas, March 23, 2017 — The Texas Advanced Computing Center (TACC) announced today that its Stampede supercomputer has helped researchers from Tufts University and the University of Maryland, Baltimore County create tadpoles with pigmentation never before seen in nature.

The flow of information between cells in our bodies is exceedingly complex: cells sense, signal, and influence each other in a constant flow of microscopic engagements. These interactions are critical for life, and when they go awry they can lead to illness and injury.

Scientists have isolated thousands of individual cellular interactions, but to chart the network of reactions that leads cells to self-organize into organs or form melanomas has been an extreme challenge.

“We, as a community, are drowning in quantitative data coming from functional experiments,” says Michael Levin, professor of biology at Tufts University and director of the Allen Discovery Center there. “Extracting a deep understanding of what’s going on in the system from the data in order to do something biomedically helpful is getting harder and harder.”

Working with Maria Lobikin, a Ph.D. student in his lab, and Daniel Lobo, a former post-doc and now assistant professor of biology and computer science at the University of Maryland, Baltimore County (UMBC), Levin is using machine learning to uncover the cellular control networks that determine how organisms develop, and to design methods to disrupt them. The work paves the way for computationally-designed cancer treatments and regenerative medicine.

“In the end, the value of machine learning platforms is in whether they can get us to new capabilities, whether for regenerative medicine or other therapeutic approaches,” Levin says.

Writing in Scientific Reports in January 2016, the team reported the results of a study where they created a tadpole with a form of mixed pigmentation never before seen in nature. The partial conversion of normal pigment cells to a melanoma-like phenotype — accomplished through a combination of two drugs and a messenger RNA — was predicted by their machine learning code and then verified in the lab.

Read the full report from TACC at: https://www.tacc.utexas.edu/-/machine-learning-lets-scientists-reverse-engineer-cellular-control-networks

Source: TACC


‘Strategies in Biomedical Data Science’ Advances IT-Research Synergies

Thu, 03/23/2017 - 11:41

“Strategies in Biomedical Data Science: Driving Force for Innovation” by Jay A. Etchings (John Wiley & Sons, Inc., Jan. 2017) is both an introductory text and a field guide for anyone working with biomedical data – IT professionals as well as medical and research staff.

Director of operations at Arizona State University’s Research Computing program, Etchings writes that the primary motivation for the book was to bridge the divide “between IT and data technologists, on one hand, and the community of clinicians, researchers, and academics who deliver and advance healthcare, on the other.” As biology and medicine move squarely into the realm of data sciences, driven by the twin engines of big compute and big data, removing the traditional silos between IT and biomedicine will allow both groups to work better and more efficiently, Etchings asserts.

“Work in sciences is routinely compartmentalized and segregated among specialists,” ASU Professor Ken Buetow, PhD, observes in the foreword. “This segregation is particularly true in biomedicine as it wrestles with the integration of data science and its underpinnings in information technology. While such specialization is essential for progress within disciplines, the failure to have cross-cutting discussions results in lost opportunities.”

Aimed at this broader audience, “Strategies in Biomedical Data Science” introduces readers to the cutting-edge and fast-moving field of biomedical data. The 443-page book lays out a foundation in the concepts of data management in the biomedical sciences and empowers readers to:

  • Efficiently gather data from disparate sources for effective analysis;
  • Get the most out of the latest and preferred analytic resources and technical tool sets; and
  • Intelligently examine bioinformatics as a service, including the untapped possibilities for medical and personal health devices.

A diverse array of use cases and case studies highlight specific applications and technologies being employed to solve real-world challenges and improve patient outcomes. Contributing authors, experts working and studying at the intersection of IT and biomedicine, offer their knowledge and experience in traversing this rapidly-changing field.

We reached out to BioTeam VP Ari Berman to get his view on the IT/research gap challenge. “This is exactly what BioTeam [a life sciences computing consultancy] is focused on,” he told us.  “Since IT organizations have traditionally supported business administration needs, they are not always equipped to handle the large amounts of data that needs to be moved and stored, or the amount of computational power needed to run the analysis pipelines that may yield new discoveries for the scientists. Because of this infrastructure, skills, and services gap between IT and biomedical data science, many research organizations spend too much time and money trying to bridge that gap on their own through cloud infrastructures or shadow IT running in their laboratories. I’ve spent my career bridging this gap, and I can tell you first hand that doing it correctly has certainly moved the needle forward on scientists’ ability to make new discoveries.”

Arizona State University’s director of operations for research computing and senior HPC architect Jay Etchings

Never lost in this far-ranging survey of biomedical data challenges and strategies is the essential goal: to improve human life and reduce suffering. Etchings writes that the book was inspired by “the need for a collaborative and multidisciplinary approach to solving the intricate puzzle that is cancer.” Author proceeds support the Pediatric Brain Tumor Foundation. The charity serves the more than 28,000 children and teens in the United States who are living with the diagnosis of a brain tumor.

To read an excerpt, visit the book page on the publisher’s website.

A listing of chapter headings:

  • Chapter 1: Healthcare, History, and Heartbreak
  • Chapter 2: Genome Sequencing: Know Thyself, One Base Pair at a Time
  • Chapter 3: Data Management
  • Chapter 4: Designing a Data-Ready Network Infrastructure
  • Chapter 5: Data-Intensive Compute Infrastructures
  • Chapter 6: Cloud Computing and Emerging Architectures
  • Chapter 7: Data Science
  • Chapter 8: Next-Generation Cyberinfrastructures


Scientists Use IBM Power Systems to Assemble Genome of West Nile Mosquito

Thu, 03/23/2017 - 11:34

ARMONK, NY, March 23, 2017 — A team led by researchers from The Center for Genome Architecture (TC4GA) at Baylor College of Medicine has used technologies from IBM, Mellanox and NVIDIA to assemble the 1.2 billion letter genome of the Culex quinquefasciatus mosquito, which carries West Nile virus. The new genome can help scientists better combat West Nile virus by identifying vulnerabilities in the mosquito that the virus uses to spread.

The high performance computing (HPC) system, dubbed “VOLTRON,” is based on the IBM Power Systems platform, which provides scalable HPC capabilities necessary to accommodate a broad spectrum of data-enabled research activities. Baylor College of Medicine joins leading supercomputing agencies globally – the Department of Energy’s Oak Ridge and Lawrence Livermore National Labs and the U.K. government’s Science and Technology Facilities Council’s Hartree Centre – that have recently selected IBM’s Power Systems platform for cutting-edge HPC research.

VOLTRON’s 3D assembly is changing the way in which researchers are able to sequence genomes, by using DNA folding patterns to trace the genome as it crisscrosses the nucleus. The resulting methodology is faster and less expensive. For example, while the original Human Genome Project took ten years and cost $4 billion, 3D assembly produces a comparable genome sequence in a few weeks and for less than $10,000.

Such efforts take on increased urgency when they are needed to combat disease outbreaks, like the West Nile virus.

“Taking advantage of IBM POWER8 and Mellanox InfiniBand interconnect, we are now able to change the way we assemble a genome,” said Olga Dudchenko, a postdoctoral fellow at The Center for Genome Architecture at Baylor College of Medicine. “And while we originally created Voltron to sequence the human genome, the method can be applied to a dizzying array of species. This gives us an opportunity to explore mosquitoes, which carry diseases that impact many people around the globe.”

“3D assembly and IBM technology are a terrific combination: one requires extraordinary computational firepower, which the other provides,” said Erez Lieberman Aiden, Director of The Center for Genome Architecture.

The Center for Genome Architecture is working closely with Mellanox to maximize their research capabilities with the VOLTRON high-performance computing system. By leveraging Mellanox’s intelligent interconnect technology and acceleration engines, TC4GA is able to provide its researchers with an efficient and scalable platform to enhance genome sequencing in order to find cures for the world’s life-threatening diseases.

Key to Baylor’s research breakthrough is a multi-year collaboration between IBM and NVIDIA to design systems capable of leveraging the POWER processor’s open architecture to take advantage of the NVIDIA Tesla accelerated computing platform.

Incorporated into the design of VOLTRON is a POWER and Tesla technology combination that allows Baylor researchers to handle extreme amounts of data with incredible speed. VOLTRON consists of a cluster of four systems, each featuring a set of eight NVIDIA Tesla GPUs tuned by NVIDIA engineers to help Baylor’s researchers achieve optimum performance on their data-intensive genomic research computations.

Source: IBM


IEEE Unveils Next Phase of IRDS to Drive Beyond Moore’s Law

Thu, 03/23/2017 - 09:25

PISCATAWAY, N.J., March 23, 2017 — IEEE today announced the next milestone phase in the development of the International Roadmap for Devices and Systems (IRDS)—an IEEE Standards Association (IEEE-SA) Industry Connections (IC) Program sponsored by the IEEE Rebooting Computing (IEEE RC) Initiative—with the launch of a series of nine white papers that reinforce the initiative’s core mission and vision for the future of the computing industry. The white papers also identify industry challenges and solutions that guide and support future roadmaps created by IRDS.

IEEE is taking a lead role in building a comprehensive, end-to-end view of the computing ecosystem, including devices, components, systems, architecture, and software. In May 2016, IEEE announced the formation of the IRDS under the sponsorship of IEEE RC. The integration of IEEE RC and the International Technology Roadmap for Semiconductors (ITRS) 2.0 addresses mapping the ecosystem of the reborn electronics industry. The evolved roadmap’s migration from ITRS to IRDS is proceeding seamlessly, as the reports produced by ITRS 2.0 form the starting point of IRDS.

While engaging other segments of IEEE in complementary activities to assure alignment and consensus across a range of stakeholders, the IRDS team is developing a 15-year roadmap with a vision to identify key trends related to devices, systems, and other related technologies.

“Representing the foundational development stage in IRDS is the publishing of nine white papers that outline the vital and technical components required to create a roadmap,” said Paolo A. Gargini, IEEE Fellow and Chairman of IRDS. “As a team, we are laying the foundation to identify challenges and recommendations on possible solutions to the industry’s current limitations defined by Moore’s Law. With the launch of the nine white papers on our new website, the IRDS roadmap sets the path for the industry benefiting from all fresh levels of processing power, energy efficiency, and technologies yet to be discovered.”

“The IRDS has taken a significant step in creating the industry roadmap by publishing nine technical white papers,” said IEEE Fellow Elie Track, 2011-2014 President, IEEE Council on Superconductivity; Co-chair, IEEE RC; and CEO of nVizix. “Through the public availability of these white papers, we’re inviting computing professionals to participate in creating an innovative ecosystem that will set a new direction for the greater good of the industry. Today, I open an invitation to get involved with IEEE RC and the IRDS.”

The series of white papers delivers the starting framework of the IRDS roadmap and, through the sponsorship of IEEE RC, will inform the various roadmap teams in the broader task of mapping the devices’ and systems’ ecosystem.

“IEEE is the perfect place to foster the IRDS roadmap and fulfill what the computing industry has been searching for over the past decades,” said IEEE Fellow Thomas M. Conte, 2015 President, IEEE Computer Society; Co-chair, IEEE RC; and Professor, Schools of Computer Science, and Electrical and Computer Engineering, Georgia Institute of Technology. “In essence, we’re creating a new Moore’s Law. And we have so many next-generation computing solutions that could easily help us reach uncharted performance heights, including cryogenic computing, reversible computing, quantum computing, neuromorphic computing, superconducting computing, and others. And that’s why the IEEE RC Initiative exists: creating and maintaining a forum for the experts who will usher the industry beyond the Moore’s Law we know today.”

The IRDS leadership team hosted a winter workshop and kick-off meeting at the Georgia Institute of Technology on 1-2 December 2016. Key discoveries from the workshop included the international focus teams’ plans and focus topics for the 2017 roadmap, top-level needs and challenges, and linkages among the teams. Additionally, the IRDS leadership invited presentations from the European and Japanese roadmap initiatives. This resulted in the 2017 IRDS global membership expanding to include team members from the “NanoElectronics Roadmap for Europe: Identification and Dissemination” (NEREID) sponsored by the European Semiconductor Industry Association (ESIA), and the “Systems and Design Roadmap of Japan” (SDRJ) sponsored by the Japan Society of Applied Physics (JSAP).

The IRDS team and its supporters will convene 1-3 April 2017 in Monterey, California, for the Spring IRDS Workshop, which is part of the 2017 IEEE International Reliability Physics Symposium (IRPS). The team will meet again for the Fall IRDS Conference—in partnership with the 2017 IEEE International Conference on Rebooting Computing (ICRC)—scheduled for 6-7 November 2017 in Washington, D.C. More information on both events can be found here: http://irds.ieee.org/events.

IEEE RC is a program of IEEE Future Directions, designed to develop and share educational tools, events, and content for emerging technologies.

IEEE-SA’s IC Program helps incubate new standards and related products and services, by facilitating collaboration among organizations and individuals as they hone and refine their thinking on rapidly changing technologies.

About the IEEE Standards Association

The IEEE Standards Association, a globally recognized standards-setting body within IEEE, develops consensus standards through an open process that engages industry and brings together a broad stakeholder community. IEEE standards set specifications and best practices based on current scientific and technological knowledge. The IEEE-SA has a portfolio of over 1,100 active standards and more than 500 standards under development. For more information visit the IEEE-SA website.

About IEEE

IEEE is the largest technical professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice in a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics. Learn more at http://www.ieee.org.

Source: IEEE


EDEM Brings GPU-Optimized Solver to the Cloud with Rescale

Thu, 03/23/2017 - 08:16

SAN FRANCISCO, Calif., March 23, 2017 — Rescale and EDEM are pleased to announce that the EDEM GPU solver engine is now available on Rescale’s ScaleX platform, a scalable, on-demand cloud platform for high-performance computing. The GPU solver, which was a highlight of the latest release of EDEM, enables performance increases from 2x to 10x compared to single-node, CPU-only runs.

EDEM offers Discrete Element Method (DEM) simulation software for virtual testing of equipment that processes bulk solid materials in the mining, construction, and other industrial sectors. EDEM software has been available on Rescale’s ScaleX platform since July 2016. Richard LaRoche, CEO of EDEM commented: “The introduction of the EDEM GPU solver has made a key impact on our customers’ productivity by enabling them to run larger simulations faster. Our partnership with Rescale means more users will be able to harness the power of the EDEM engine by accessing the market’s latest GPUs through Rescale’s cloud platform.”

The addition of an integrated GPU solver to Rescale gives users shorter time-to-answer and enables a deeper impact on design innovation. To Rescale, the addition of EDEM’s GPU solver also signals a strengthening partnership. “Rescale’s GPUs are the cutting edge of compute hardware, and EDEM is ahead of the curve in optimizing their software to leverage GPU capabilities. We are proud to be their partner of choice to bring this forward-thinking simulation solution to the cloud, bringing HPC within easy reach of engineers everywhere,” said Rescale CEO Joris Poort.

About EDEM

EDEM is the market-leading Discrete Element Method (DEM) software for bulk material simulation. EDEM software is used for ‘virtual testing’ of equipment that handles or processes bulk materials in the manufacturing of mining, construction, off-highway and agricultural machinery, as well as in the mining and process industries. Blue-chip companies around the world use EDEM to optimize equipment design, increase productivity, reduce costs of operations, shorten product development cycles and drive product innovation. In addition EDEM is used for research at over 200 academic institutions worldwide. For more information visit: www.edemsimulation.com.

About Rescale

Rescale is the global leader for high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s ScaleX platform transforms traditional fixed IT resources into flexible hybrid, private, and public cloud resources—built on the largest and most powerful high-performance computing network in the world. For more information on Rescale’s ScaleX platform, visit www.rescale.com.

Source: Rescale


Google Launches New Machine Learning Journal

Wed, 03/22/2017 - 10:07

On Monday, Google announced plans to launch a new peer-reviewed journal and “ecosystem” for machine learning. Writing on the Google Research Blog, Shan Carter and Chris Olah described the project as follows:

“Science isn’t just about discovering new results. It’s also about human understanding. Scientists need to develop notations, analogies, visualizations, and explanations of ideas. This human dimension of science isn’t a minor side project. It’s deeply tied to the heart of science.

“That’s why, in collaboration with OpenAI, DeepMind, YC Research, and others, we’re excited to announce the launch of Distill, a new open science journal and ecosystem supporting human understanding of machine learning. Distill is an independent organization, dedicated to fostering a new segment of the research community.

“Modern web technology gives us powerful new tools for expressing this human dimension of science. We can create interactive diagrams and user interfaces that enable intuitive exploration of research ideas. Over the last few years we’ve seen many incredible demonstrations of this kind of work.

“Unfortunately, while there are a plethora of conferences and journals in machine learning, there aren’t any research venues that are dedicated to publishing this kind of work. This is partly an issue of focus, and partly because traditional publication venues can’t, by virtue of their medium, support interactive visualizations. Without a venue to publish in, many significant contributions don’t count as “real academic contributions” and their authors can’t access the academic support structure.”

According to Carter and Olah, “Distill aims to build an ecosystem to support this kind of work, starting with three pieces: a research journal, prizes recognizing outstanding work, and tools to facilitate the creation of interactive articles.”

Here’s a snapshot of guidelines for working with the new journal:

  • “Distill articles are prepared in HTML using the Distill infrastructure — see the getting started guide for details. The infrastructure provides nice default styling and standard academic features while preserving the flexibility of the web.
  • Distill articles must be released under the Creative Commons Attribution license. Distill is a primary publication and will not publish content which is identical or substantially similar to content published elsewhere.
  • To submit an article, first create a GitHub repository for your article. You can keep it private during the review process if you would like — just share it with @colah and @shancarter. Then email review@distill.pub to begin the process.

Distill handles all reviews and editing through GitHub issues. Upon publication, the repository is made public and transferred to the @distillpub organization for preservation. This means that reviews of published work are always public. It is at the author’s discretion whether they share reviews of unpublished work.”
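
In practice, the local half of that submission flow is ordinary git. Here is a rough sketch, assuming git is installed; the repository name and stub page are hypothetical placeholders, and the GitHub and email steps remain manual:

```python
# Hypothetical walk-through of preparing a Distill submission repository.
import pathlib
import subprocess

repo = pathlib.Path("my-distill-article")  # hypothetical name
repo.mkdir(exist_ok=True)
# Stub page; a real submission would build on the Distill infrastructure.
(repo / "index.html").write_text("<html><body><h1>Draft</h1></body></html>\n")

def git(*args):
    """Run git inside the article repo with a placeholder identity."""
    subprocess.run(["git", "-C", str(repo),
                    "-c", "user.name=Author",
                    "-c", "user.email=author@example.com",
                    *args], check=True)

subprocess.run(["git", "init", str(repo)], check=True)
git("add", ".")
git("commit", "-m", "Draft for Distill review")
# Remaining steps are manual: create a GitHub repository (private is fine
# during review), push, share it with @colah and @shancarter, then email
# review@distill.pub to begin the process.
```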


Penguin Computing Announces Expanded HPC Cloud

Wed, 03/22/2017 - 09:38

FREMONT, Calif., March 22, 2017 — Penguin Computing, provider of high performance computing, enterprise data center and cloud solutions, today announced the availability of the company’s expanded Penguin Computing On-Demand (POD) High Performance Computing Cloud.

“As current Penguin POD users, we are excited to have more resources available to handle our mission-critical real-time global environmental prediction workload,” said Dr. Greg Wilson, CEO, EarthCast Technologies. “The addition of the Lustre file system will allow us to scale our applications to full global coverage, run our jobs faster and provide more accurate predictions.”

The expanded POD HPC cloud extends into Penguin Computing’s latest cloud datacenter location, MT2, which adds Intel Xeon E5-2680 v4 processors through the company’s B30 node class offering.

B30 Node Specifications

  • Dual Intel Xeon E5-2680 v4 processors
  • 28 non-hyperthreaded cores per node
  • 256GB RAM per node
  • Intel Omni-Path low-latency, non-blocking, 100Gb/s fabric

In addition to the new processors, the MT2 location provides customers with access to a Lustre parallel file system, delivered through Penguin’s FrostByte storage solution. POD’s latest Lustre file system provides high-speed storage with an elastic billing model, billing customers only for the storage they consume, metered hourly.
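
The announcement does not publish rates, but the effect of hourly metering is easy to illustrate: a short burst of heavy storage use costs far less than provisioning for the peak all month. A toy sketch, with an entirely hypothetical price and usage pattern:

```python
# Toy illustration of hourly-metered storage billing. The rate and the
# usage figures are hypothetical, not Penguin Computing's actual pricing.
HYPOTHETICAL_RATE_PER_TB_HOUR = 0.05  # dollars, made up for illustration

def monthly_bill(hourly_usage_tb):
    """Sum hour-by-hour usage rather than charging for peak provisioned size."""
    return sum(tb * HYPOTHETICAL_RATE_PER_TB_HOUR for tb in hourly_usage_tb)

# A 720-hour month: 10 TB held all month vs. a 48-hour burst to 100 TB.
steady = [10.0] * 720
bursty = [100.0] * 48 + [1.0] * 672
print(f"steady 10 TB for a month:  ${monthly_bill(steady):,.2f}")
print(f"100 TB burst for 48 hours: ${monthly_bill(bursty):,.2f}")
```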

The new POD MT2 public cloud location also provides customers with cloud redundancy – enabling multiple, distinct cloud locations to ensure that business-critical and time-sensitive HPC workflows are always able to compute.

“The latest expansion to our MT2 location extends the capabilities of our HPC cloud,” said Victor Gregorio, SVP Cloud Services at Penguin Computing. “As an HPC service, we work closely with our customers to deliver their growing cloud needs – scalable bare-metal compute, easy access to ready-to-run applications, and tools such as our Scyld Cloud Workstation for remote 3D visualization.”

Penguin Computing customers in fields such as manufacturing, engineering, and weather sciences are able to run more challenging HPC applications and workflows on POD with the addition of these capabilities.

These workloads can be time sensitive and complex – demanding the specialized HPC cloud resources Penguin makes available on POD. The compute needs of HPC users are not normally satisfied in a general-purpose public cloud, and Penguin Computing continues to be a leader in unique, cost effective, high-performance cloud services for HPC workloads.

POD customers have immediate access to these new offerings through their existing accounts via the POD Portal. Experience POD by visiting https://www.pod.penguincomputing.com to request a free trial account.

About Penguin Computing

Penguin Computing is one of the largest private suppliers of enterprise and high performance computing solutions in North America and has built and operates the leading specialized public HPC cloud service Penguin Computing On-Demand (POD). Penguin Computing pioneers the design, engineering, integration and delivery of solutions that are based on open architectures and comprise non-proprietary components from a variety of vendors. Penguin Computing is also one of a limited number of authorized Open Compute Project (OCP) solution providers leveraging this Facebook-led initiative to bring the most efficient open data center solutions to a broader market, and has announced the Tundra product line which applies the benefits of OCP to high performance computing. Penguin Computing has systems installed with more than 2,500 customers in 40 countries across eight major vertical markets.

Source: Penguin


Swiss Researchers Peer Inside Chips with Improved X-Ray Imaging

Wed, 03/22/2017 - 09:14

Peering inside semiconductor chips using x-ray imaging isn’t new, but the technique hasn’t been especially good or easy to accomplish. New advances reported by Swiss researchers in Nature last week suggest practical use of x-rays for fast, accurate, reverse-engineering of chips may be near.

“You’ll pop in your chip and out comes the schematic. Total transparency in chip manufacturing is on the horizon,” said Anthony Levi of the University of Southern California, describing the research in an IEEE Spectrum article (X-rays Map the 3D Interior of Integrated Circuits). “This is going to force a rethink of what computing is,” he added, and of what it means for a company to add value in the computing industry.

The work by Mirko Holler, Manuel Guizar-Sicairos, Esther H. R. Tsai, Roberto Dinapoli, Elisabeth Müller, Oliver Bunk, Jörg Raabe (all of Paul Scherrer Institut) and Gabriel Aeppli (ETH) is described in their Nature Letter, “High-resolution non-destructive three-dimensional imaging of integrated circuits.”

“[We] demonstrate that X-ray ptychography – a high-resolution coherent diffractive imaging technique – can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip,” write the researchers in the abstract.

“Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.”

Starting with a known structure – an ASIC developed at the institute – and then moving to an Intel chip (the Intel G3260 processor) about which they had limited information, the researchers were able to accurately identify and map components in the chips. A good summary of the experiment is provided in the IEEE Spectrum article:

“The ASIC was produced using 110-nanometer chip manufacturing technology, more than a decade from being cutting edge. But the Intel chip was just a couple of generations behind the state of the art: It was produced using the company’s 22-nm process…To produce a 3D rendering of the Intel chip—an Intel G3260 processor—the team shined an X-ray beam through a portion of the chip. The various circuit components—its copper wires and silicon transistors, for example—scatter the light in different ways and cause constructive and destructive interference. Through a technique called X-ray ptychography, the researchers could point the beam at their sample from a number of different angles and use the resulting diffraction patterns to reconstruct the chip’s internal structure.”
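
Ptychography’s coherent phase-retrieval step is well beyond a short example, but the underlying tomographic idea – illuminate the sample from many angles, then combine the views – can be shown with a toy. The sketch below, assuming only numpy and scipy, simulates parallel-beam projections of a 2D “chip cross-section” and reconstructs it by unfiltered backprojection; it illustrates the principle, not the authors’ method:

```python
# Toy tomography: project a 2D phantom at many angles, then backproject.
# Real ptychography additionally recovers phase from coherent diffraction
# patterns, which this sketch does not attempt.
import numpy as np
from scipy.ndimage import rotate

def make_phantom(n=64):
    """A crude 2D 'chip cross-section': two rectangular features."""
    img = np.zeros((n, n))
    img[20:28, 10:54] = 1.0   # a 'wire'
    img[40:52, 24:40] = 0.6   # a 'via block'
    return img

def project(img, angles_deg):
    """Parallel-beam projections: rotate the object, sum along the beam."""
    return [rotate(img, a, reshape=False, order=1).sum(axis=0)
            for a in angles_deg]

def backproject(sinogram, angles_deg, n):
    """Unfiltered backprojection: smear each view back and counter-rotate."""
    recon = np.zeros((n, n))
    for proj, a in zip(sinogram, angles_deg):
        smear = np.tile(proj, (n, 1))             # constant along the beam
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles_deg)

angles = np.linspace(0.0, 180.0, 60, endpoint=False)
phantom = make_phantom()
recon = backproject(project(phantom, angles), angles, phantom.shape[0])
print("brightest reconstructed pixel:",
      np.unravel_index(recon.argmax(), recon.shape))
```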

The experiment was carried out at the cSAXS beamline of the Swiss Light Source (SLS) at the Paul Scherrer Institut, Villigen, Switzerland. Details of the components are as follows. Coherent X-rays enter the instrument and pass optical elements that in their combination form an X-ray lens used to generate a defined illumination of the sample. These elements are a gold central stop, a Fresnel zone plate and an order sorting aperture. The diffracted X-rays are measured by a 2D detector, a Pilatus 2M in the present case. Accurate sample positioning is essential in a scanning microscopy technique and is achieved by horizontal and vertical interferometers.

As the IEEE Spectrum article notes, “Even if this approach isn’t widely adopted to tear down competitors’ chips, it could find a use in other applications. One of those is verifying that a chip only has the features it is intended to have, and that a “hardware Trojan”—added circuitry that could be used for malicious purposes—hasn’t been introduced.”

Link to IEEE article: http://spectrum.ieee.org/nanoclast/semiconductors/processors/xray-ic-imaging

Link to Nature paper: http://www.nature.com/nature/journal/v543/n7645/full/nature21698.html


ISC High Performance Adds STEM Student Day to the 2017 Program

Wed, 03/22/2017 - 07:37

FRANKFURT, Germany, March 22, 2017 — ISC High Performance is pleased to announce the inclusion of the STEM Student Day & Gala at this year’s conference. The new program aims to connect the next generation of regional and international STEM practitioners with the high performance computing industry and its key players.

ISC 2017 has created this program to welcome STEM students into the world of HPC with the hope that an early exposure to the community will encourage them to acquire the necessary HPC skills to propel their future careers.

The ISC STEM Student Day & Gala will take place on Wednesday, June 21, and is free to attend for 200 undergraduate and graduate students. All regional and international students are welcome to register for the program, including those not attending the main conference. The organizers also encourage female STEM students to take advantage of this opportunity, as ISC 2017 is very committed to improving gender diversity.

Students will be able to register for the program starting mid-April via the program webpage.

Participating students will enjoy an afternoon discovering HPC by visiting the exhibition and then joining a conference keynote before participating in a career fair. In the evening, they can network with key HPC players at a special gala event.

Supermicro, PRACE, CSCS and GNS Systems GmbH have already come forward to support this program. Funding from another six organizations is needed to ensure the full success of the STEM Day & Gala. Sponsorship opportunities start at 500 euros, with all resources flowing directly into the event organization. Please contact anna.schachoff@isc-group.com to get involved.

“There is currently a shortage of a skilled STEM workforce in Europe and it is projected that the gap between available jobs and suitable candidates will grow very wide beyond 2020 if nothing is done about it,” said Martin Meuer, the general co-chair of ISC High Performance.     

“This gave us the idea to organize the STEM Day, as many organizations that exhibit at ISC could profit from meeting the future workforce directly.” 

The ISC STEM Student Day & Gala is also a great opportunity for organizations to associate themselves as STEM employers and invest in their future HPC user base. 

About ISC High Performance

First held in 1986, ISC High Performance is the world’s oldest and Europe’s most important conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments.

Over 400 hand-picked expert speakers and 150 exhibitors, consisting of leading research centers and vendors, will greet attendees at ISC High Performance. A number of events complement the Monday – Wednesday keynotes, including the Distinguished Speaker Series, the Industry Track, the Machine Learning Track, Tutorials, Workshops, the Research Paper Sessions, Birds-of-a-Feather (BoF) Sessions, Research Poster Sessions, the PhD Forum, Project Poster Sessions and Exhibitor Forums.

Source: ISC


LANL Simulation Shows Massive Black Holes Break “Speed Limit”

Tue, 03/21/2017 - 10:48

A new computer simulation based on codes developed at Los Alamos National Laboratory is shedding light on how supermassive black holes could have formed in the early universe, contrary to most prior models, which impose a limit on how fast these massive ‘objects’ can form. The simulation is based on a computer code used to understand the coupling of radiation and certain materials.

“Supermassive black holes have a speed limit that governs how fast and how large they can grow,” said Joseph Smidt of the Theoretical Design Division at Los Alamos National Laboratory. “The relatively recent discovery of supermassive black holes in the early development of the universe raised a fundamental question: how did they get so big so fast?”

Using codes developed at Los Alamos for modeling the interaction of matter and radiation related to the Lab’s stockpile stewardship mission, Smidt and colleagues created a simulation of collapsing stars that resulted in supermassive black holes forming in less time than expected, cosmologically speaking, in the first billion years of the universe.

“It turns out that while supermassive black holes have a growth speed limit, certain types of massive stars do not,” said Smidt. “We asked, what if we could find a place where stars could grow much faster, perhaps to the size of many thousands of suns; could they form supermassive black holes in less time?” The work is detailed in a recent paper, “The Formation Of The First Quasars In The Universe.”

It turns out the Los Alamos computer model not only confirms the possibility of speedy supermassive black hole formation, but also fits many other black hole phenomena that astrophysicists routinely observe. The research shows that the simulated supermassive black holes also interact with galaxies in the same way observed in nature, including star formation rates, galaxy density profiles, and thermal and ionization rates in gases.

“This was largely unexpected,” said Smidt.  “I thought this idea of growing a massive star in a special configuration and forming a black hole with the right kind of masses was something we could approximate, but to see the black hole inducing star formation and driving the dynamics in ways that we’ve observed in nature was really icing on the cake.”

A key mission area at Los Alamos National Laboratory is understanding how radiation interacts with certain materials.  Because supermassive black holes produce huge quantities of hot radiation, their behavior helps test computer codes designed to model the coupling of radiation and matter. The codes are used, along with large- and small-scale experiments, to assure the safety, security, and effectiveness of the U.S. nuclear deterrent.

“We’ve gotten to a point at Los Alamos,” said Smidt, “with the computer codes we’re using, the physics understanding, and the supercomputing facilities, that we can do detailed calculations that replicate some of the forces driving the evolution of the Universe.”

Link to LANL release: http://www.lanl.gov/discover/news-release-archive/2017/March/03.21-supermassive-black-hole-speed-limit.php?source=newsroom

Link to paper: https://arxiv.org/pdf/1703.00449.pdf

Link to video about the discovery: https://youtu.be/LD4xECbHx_I

Source: LANL

The post LANL Simulation Shows Massive Black Holes Break “Speed Limit” appeared first on HPCwire.

Supermicro Launches Intel Optane SSD Optimized Platforms

Tue, 03/21/2017 - 07:53

SAN JOSE, Calif., March 21, 2017 — Super Micro Computer, Inc. (NASDAQ: SMCI), a leader in compute, storage and networking technologies including green computing, has expanded the industry’s broadest portfolio of NVMe flash server and storage systems with support for the Intel Optane SSD DC P4800X, the world’s most responsive data center SSD.

Supermicro’s NVMe SSD systems with Intel Optane SSDs for the data center deliver breakthrough performance compared to traditional NAND-based SSDs. The Intel Optane SSDs for the data center are the first products to begin blurring the line between memory and storage, enabling customers to do more per server, or to extend memory working sets to enable new usages and discoveries. The PCIe-compliant expansion card delivers an industry-leading combination of up to 2 times better latency, more than 3 times higher endurance, and up to 3 times higher write throughput than NVMe NAND SSDs. Optane is supported across Supermicro’s complete product line, including the BigTwin, SuperBlade, Simply Double Storage and Ultra servers supporting the current and next-generation Intel Xeon processors. These solutions enable a new high-performance storage tier that combines the attributes of memory and storage, ideal for financial services, cloud, HPC, storage and general enterprise applications.

The first generation of Supermicro-supported Intel Optane SSDs ships as a PCIe-compliant expansion card, with additional form factors to follow. A 2U Supermicro Ultra system will be able to deliver 6 million write IOPS and 16.5 TB of high-performance Optane storage. Intel Optane will deliver optimal performance in the 1U 10-drive NVMe all-flash SuperServer and the capacity-optimized 2U 48-drive all-flash NVMe Simply Double storage server, and will provide accelerated caching across the complete line of NVMe-supported scale-out storage servers, including the new 4U 45-drive system with NVMe cache drives.

“Being first to market with the latest in computing technology continues to be our corporate strength. The addition of Intel Optane memory technology gives our top-tier customers a new memory deployment strategy that provides better write performance and latency than existing NVMe NAND SSD solutions, including more than 30 drive writes per day,” said Charles Liang, President and CEO of Supermicro. “In addition, this new memory is slated to consume 30 percent lower max power than SSD NAND memory, supporting our customers’ green computing priorities.”

“Supermicro’s system readiness for the new Optane memory technology will provide fast storage and cache for MySQL and HCI applications,” said Bill Lesczinske, Vice President, Intel Non-Volatile Memory Solutions Group. “With 77x better read latency in the presence of a high write workload, and as a memory replacement with Intel Memory Drive Technology (software that makes the Optane SSD look like DRAM, transparently to the OS), it will provide greater in-memory compute performance to Supermicro systems.”

For more information on Supermicro’s complete range of NVMe Flash Solutions, please visit http://www.supermicro.com/products/nfo/NVMe.cfm.

About Super Micro Computer, Inc. (NASDAQ: SMCI)
Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.

Source: Supermicro

The post Supermicro Launches Intel Optane SSD Optimized Platforms appeared first on HPCwire.

DDN Names Bret Costelow VP of Global Sales

Tue, 03/21/2017 - 07:45

SANTA CLARA, Calif., March 21, 2017 — DataDirect Networks (DDN) today announced the appointment of Bret Costelow as the company’s vice president of global sales. In his new role, Costelow will oversee technical computing sales worldwide, and will leverage more than 25 years of sales and sales leadership experience to further boost visibility of DDN’s deep technical expertise and high-performance computing (HPC) storage platform offerings, develop new business strategies and drive revenue growth. Costelow’s leadership and experience spans leading technology companies, including Intel and Ricoh Americas.

“Bret Costelow is an inspiring sales leader with a clear understanding of our customers’ needs and a vision of how DDN’s technologies and solutions can best solve their toughest data storage challenges,” said Robert Triendl, senior vice president, global sales, marketing, and field services, DDN. “Bret’s proven success in high-growth business settings, deep knowledge of the Lustre and HPC markets, proven track record of generating traction with innovative, advanced technologies, and broad experience with software sales make him a great asset to our team and a great resource for our partners and customers around the world.”

Costelow joins DDN from Intel Corporation, where he led a global sales and business development team for Intel’s HPC software business and supported Intel’s 2012 acquisition of Whamcloud, the main development arm for the open source Lustre file system, and its subsequent sales and marketing. Costelow was instrumental in leading the Lustre business unit to expand into adjacent markets, reaching beyond HPC file systems to HPC cluster orchestration software. Under his leadership, the HPC software business unit opened new markets in Asia, launched a comprehensive, global software sales channel program and drove year-over-year revenue growth that averaged more than 30 percent in each of the past five years. Costelow is also on the board of directors of the European Open File Systems (EOFS), a non-profit organization focused on the promotion and support of open scalable file systems for high-performance computing in the technical computing and enterprise computing markets.

“DDN is the uncontested market leader in HPC storage, with a highly differentiated portfolio of solutions for technical computing users in all vertical markets. This portfolio, combined with aggressive investments in new technologies, positions the company incredibly well for continued growth and success as disruptive technologies, such as non-volatile memory (NVM), unsettle the storage market landscape and create exciting new opportunities,” said Bret Costelow, vice president, global sales at DDN. “The current market dynamics and DDN’s agility to respond made this the perfect time to join DDN. I look forward to working with the incredible talent in DDN’s field team, product management, product development and software engineering teams to help drive DDN’s success and growth to new levels, and to help accelerate the success of DDN’s customers and partners around the world.”

About DDN

DataDirect Networks (DDN) is the world’s leading big data storage supplier to data-intensive, global organizations. For more than 18 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.

Source: DDN

The post DDN Names Bret Costelow VP of Global Sales appeared first on HPCwire.

Cray CEO to Speak on Convergence of Big Data, Supercomputing at TechIgnite

Tue, 03/21/2017 - 07:41

SEATTLE, Wash., March 21, 2017 — Supercomputer leader Cray Inc. (Nasdaq:CRAY) today announced that the company’s President and CEO, Peter Ungaro, will give a presentation on “The Convergence of Big Data and Supercomputing” at TechIgnite, an IEEE Computer Society conference exploring the trends, threats, and truth behind technology.

The convergence of artificial intelligence technologies and supercomputing at scale is happening now. As a featured speaker in TechIgnite’s “AI and Machine Learning” track, Ungaro will examine how the convergence of big data with modeling and simulation, run on supercomputing platforms at scale, is creating new opportunities for organizations to discover innovative ways of extracting value from massive data sets.

Other TechIgnite speakers include Apple co-founder Steve Wozniak; Tony Jebara, director of machine learning at Netflix; William Ruh, CEO of GE Digital; and others.

TechIgnite will take place on March 21-22, 2017 at the Hyatt Regency San Francisco Airport Hotel in Burlingame, CA. Ungaro’s presentation will be held at 2:00pm PT on Wednesday, March 22. A complete list of TechIgnite speakers is available online via the following URL: http://techignite.computer.org/speakers/.

About Cray Inc.

Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray

The post Cray CEO to Speak on Convergence of Big Data, Supercomputing at TechIgnite appeared first on HPCwire.

Quantum Bits: D-Wave and VW; Google Quantum Lab; IBM Expands Access

Tue, 03/21/2017 - 06:38

For a technology that’s usually characterized as far off and in a distant galaxy, quantum computing has been steadily picking up steam. Just how close real-world applications are depends on whom you talk to and on the kind of application. Los Alamos National Lab, for example, has an active application development effort for its D-Wave system, and LANL researcher Susan Mniszewski and colleagues have made progress on using the D-Wave machine for aspects of quantum molecular dynamics (QMD) simulations.

At CeBIT this week D-Wave and Volkswagen will discuss their pilot project to monitor and control taxi traffic in Beijing using a hybrid HPC-quantum system – this is on the heels of recent customer upgrade news from D-Wave (more below). Last week IBM announced expanded access to its five-qubit cloud-based quantum developer platform. In early March, researchers from the Google Quantum AI Lab published an excellent commentary in Nature examining real-world opportunities, challenges and timeframes for quantum computing more broadly. Google is also considering making its homegrown quantum capability available through the cloud.

As an overview, the Google commentary provides a great snapshot, noting soberly that challenges such as the lack of solid error correction and the small size (number of qubits) of today’s machines – whether “universal” digital machines like IBM’s or “analog” adiabatic annealing machines like D-Wave’s – have prompted many observers to declare useful quantum computing still a decade away. Not so fast, says Google.

“This conservative view of quantum computing gives the impression that investors will benefit only in the long term. We contend that short-term returns are possible with the small devices that will emerge within the next five years, even though these will lack full error correction…Heuristic ‘hybrid’ methods that blend quantum and classical approaches could be the foundation for powerful future applications. The recent success of neural networks in machine learning is a good example,” write Masoud Mohseni, Peter Read, and John Martinis (a 2017 HPCwire Person to Watch) and colleagues (Nature, March 8, “Commercialize early quantum technologies”).

The D-Wave/VW project is a good example of a hybrid approach (details to follow), but first here’s a brief summary of recent quantum computing news:

  • IBM released a new API and upgraded simulator for modeling circuits up to 20 qubits on its 5-qubit platform. It also announced plans for a software developer kit by mid-year for building “simple” quantum applications. So far, says IBM, its quantum cloud has attracted about 40,000 users, including, for example, the Massachusetts Institute of Technology, which used the cloud service for its online quantum information science course. IBM also noted heavy use of the service by Chinese researchers. (See HPCwire coverage, IBM Touts Hybrid Approach to Quantum Computing)
  • D-Wave has been actively extending its development ecosystem (qbsolv from D-Wave and QMASM from LANL, among others) and says researchers have recently been able to simulate a 20,000-qubit problem on a 1,000-qubit machine using qbsolv (more below). After announcing a 2,000-qubit machine in the fall, the company has begun deploying them. The first will go to a new customer, Temporal Defense Systems, and another is planned for the Google/NASA/USRA partnership, which has a 1,000-qubit machine now. D-Wave also just announced that Virginia Tech and the Hume Center will begin using D-Wave systems for work on defense and intelligence applications.
  • Google’s commentary declares: “We anticipate that, within a few years, well-controlled quantum systems may be able to perform certain tasks much faster than conventional computers based on CMOS (complementary metal oxide–semiconductor) technology. Here we highlight three commercially viable uses for early quantum-computing devices: quantum simulation, quantum-assisted optimization and quantum sampling. Faster computing speeds in these areas would be commercially advantageous in sectors from artificial intelligence to finance and health care.”
D-Wave 2000Q System

Clearly there is a lot going on even at this stage of quantum computing’s development. There’s also been a good deal of wrangling over just what a quantum computer is, and over the differences between IBM’s “universal” digital approach – essentially a machine able to do anything computers do now – and D-Wave’s adiabatic annealing approach, which is currently intended to solve specific classes of optimization problems.

“They are different kinds of machines. No one has a universal quantum computer now, so you have to look at each case individually for its particular strengths and weaknesses,” Martinis explained to HPCwire. “The D-Wave has minimal quantum coherence (it loses the information exchanged between qubits quite quickly), but makes up for it by having many qubits.”

“The IBM machine is small, but the qubits have quantum coherence enough to do some standard quantum algorithms. Right now it is not powerful, as you can run quantum simulations on classical computers quite easily. But by adding qubits the power will scale up quickly. It has the architecture of a universal machine and has enough quantum coherence to behave like one for very small problems,” Martinis said.

Notably, Google has developed 9-qubit devices with 3-5x more coherence than IBM’s, according to Martinis, though they are not on the cloud yet. “We are ready to scale up now, and plan to have this year a ‘quantum supremacy’ device that has to be checked with a supercomputer. We are thinking of offering cloud also, but are more or less waiting until we have a hardware device that gives you more power than a classical simulation.”

Quantum supremacy as described in the Google commentary is a term coined by theoretical physicist John Preskill to describe “the ability of a quantum processor to perform, in a short time, a well-defined mathematical task that even the largest classical supercomputers (such as China’s Sunway TaihuLight) would be unable to complete within any reasonable time frame. We predict that, in a few years, an experiment achieving quantum supremacy will be performed.”

Bo Ewald

For the moment, D-Wave is the only vendor offering near-production machines rather than research machines, said Bo Ewald, the company’s ever-cheerful evangelist, though he quickly agrees that, at least for now, there aren’t any production-ready applications. Developing a quantum tool/software ecosystem is a driving focus at D-Wave. The LANL app dev work, though impressive, still represents proto-application development. Nevertheless, the ecosystem of tools is growing quickly.

“We have defined a software architecture that has several layers starting at the quantum machine instruction layer where if you want to program in machine language you are certainly welcome to do that; that is kind of the way people had to do it in the early days,” said Ewald.

“The next layer up is if you want to be able to create quantum machine instructions from C or C++ or Python. We now have libraries that run on host machines – regular HPC machines – so you can use those languages to generate programs that run on the D-Wave machine. But the challenge that we have faced, that customers have faced, is that our machines had 500 qubits or 1,000 qubits and now 2,000; we know there are problems that are going to consume many more qubits than that,” he said.

For D-Wave systems, qbsolv helps address this problem. It allows a “meta-description” of the machine and of the problem you want to solve as a quadratic unconstrained binary optimization, or QUBO – an intermediate representation. D-Wave then extended this capability to what it calls virtual QUBOs, likening it to virtual memory.

“You can create QUBOs, or representations of problems, which are much larger than the machine itself, and then, using combined classical and quantum computing techniques, we can partition the problem, solve it in chunks, and glue them back together after we’ve solved the D-Wave part. We’ve done that now with the 1,000-qubit machine and run problems that have the equivalent of 20,000 qubits,” said Ewald, adding that the new 2,000-qubit machines will handle problems of even greater size using this capability.
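
To make the QUBO idea concrete, here is a minimal illustrative sketch in Python. It is not D-Wave’s qbsolv API; the matrix Q and the brute-force solver below are hypothetical stand-ins showing what a QUBO objective looks like and what “solving” one means, regardless of hardware:

    # Illustrative only: a QUBO (quadratic unconstrained binary optimization)
    # asks for the binary vector x that minimizes x^T Q x. Tools like qbsolv
    # partition a Q far larger than the annealer into sub-QUBOs; here we just
    # brute-force a toy instance classically.
    from itertools import product

    # Upper-triangular QUBO coefficients: Q[(i, i)] are linear terms,
    # Q[(i, j)] couples bits i and j.
    Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,
         (0, 1): 2.0, (1, 2): 2.0}  # off-diagonal terms penalize adjacent 1s

    def qubo_energy(x, Q):
        """Evaluate x^T Q x for one binary assignment x."""
        return sum(c * x[i] * x[j] for (i, j), c in Q.items())

    def solve_qubo_bruteforce(Q, n):
        """Exhaustively search all 2^n assignments (fine only for tiny n)."""
        best = min(product((0, 1), repeat=n), key=lambda x: qubo_energy(x, Q))
        return best, qubo_energy(best, Q)

    print(solve_qubo_bruteforce(Q, 3))  # ((1, 0, 1), -2.0)

In Ewald’s description, the virtual-QUBO step slices such a Q into hardware-sized chunks, anneals each chunk on the QPU, and stitches the partial assignments back together on the classical host.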

At LANL, researcher Scott Pakin has developed another tool: a quantum macro assembler for D-Wave systems (QMASM). Ewald said part of the goal of Pakin’s work was to determine “if you could map gates onto the machine even though we are not a universal or a gate model. You can in fact model gates on our machine, and he has started to [create] a library of gates (OR gates, AND gates, NAND gates), and you can assemble those to become macros.”

Pakin said, “My personal research interest has been in making the D-Wave easier to program. I’ve recently built something really nifty on top of QMASM: edif2qmasm, which is my answer to the question: Can one write classical-style code and run it on the D-Wave?

“For many difficult computational problems, solution verification is simple and fast. The idea behind edif2qmasm is that one can write an ordinary(-ish) program that reports whether a proposed solution to a problem is in fact valid. This gets compiled for the D-Wave, then run _backwards_, giving it ‘true’ for the proposed solution being valid and getting back a solution to the difficult computational problem.”

Pakin noted there are many examples on GitHub that give a feel for the power of this tool.

“For example, mult.v is a simple, one-line multiplier. Run it backwards, and it factors a number, an operation that underlies modern data decryption. In a dozen or so lines of code, circsat.v evaluates a Boolean circuit. Run it backwards, and it tells you what inputs lead to an output of ‘true’, which is used in areas of artificial intelligence, circuit design, and automatic theorem proving. map-color.v reports whether a map is correctly colored with four colors such that no two adjacent regions have the same color. Run it backwards, and it _finds_ such a coloring.

“Although current-generation D-Wave systems are too limited to apply this approach to substantial problems, the trends in system scale and engineering precision indicate that some day we should be able to perform real work on this sort of system. And with the help of tools like edif2qmasm, programmers won’t need an advanced degree to figure out how to write code for it,” he explained.
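
A purely classical toy analogue may help make Pakin’s “run it backwards” idea concrete. The names below (verify_factoring, run_backwards) are hypothetical illustrations, not part of edif2qmasm; on the D-Wave the inner search would be performed by annealing over a gate-model embedding rather than by loops:

    # Toy analogue of the edif2qmasm workflow: write a cheap verifier,
    # then "run it backwards" by searching for inputs it accepts.
    def verify_factoring(p, q, n):
        """Fast to check: is (p, q) a nontrivial factorization of n?"""
        return p > 1 and q > 1 and p * q == n

    def run_backwards(n, bound=256):
        """Invert the verifier: find any (p, q) it accepts."""
        for p in range(2, bound):
            for q in range(2, bound):
                if verify_factoring(p, q, n):
                    return p, q
        return None

    print(run_backwards(143))  # (11, 13): factors recovered from the verifier

The asymmetry Pakin points to is exactly this one: checking a factorization is trivial while finding one is hard, which is why fixing the verifier’s output to “true” and letting the annealer search for satisfying inputs is an attractive fit.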

The D-Wave/VW collaboration, just a year or so old, is one of the more interesting quantum computing proof-of-concept efforts because it tackles an optimization problem of a kind that is widespread in everyday life. As described by Ewald, VW CIO Martin Hoffman was making his yearly swing through Silicon Valley and stopped in at D-Wave, where talk turned to the many optimization challenges big automakers face – supply logistics, vehicle delivery, various machine learning tasks – and to doing a D-Wave project around one of them. Instead, said Ewald, VW eventually settled on a more driver-facing problem.

It turns out there are about 10,000 taxis in Beijing, said Ewald. Each has a GPS device, and its position is recorded every five seconds. Traffic congestion, of course, is a huge problem in Beijing. The idea was to explore whether an application running on both traditional computing resources and the D-Wave could help monitor and guide taxi movement more quickly and effectively.

“Ten thousand taxis on all of the streets in Beijing is way too big for our machine at this point, but they came to this same idea we talked about with qbsolv, where you partition problems,” said Ewald. “On the traditional machines, VW created a map and grid, subdivided the grid into quadrants, and would find the quadrant that was the most red.” That’s red as in long cab waits.

The problem quadrant was then sent to the D-Wave to be solved. “We would optimize the flow, basically minimize the wait time for all of the taxis within the quadrant, send that [solution] back to the traditional machine, which would then send us the next most red, and we would try to turn it green,” said Ewald.
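
A minimal sketch of that hybrid loop, assuming a simple congestion grid; the names here (wait_times, optimize_quadrant_on_qpu) are hypothetical stand-ins, since VW’s actual code has not been published:

    # Hedged sketch of the classical/quantum loop Ewald describes: the host
    # grids the city, ranks quadrants by congestion ("redness"), and hands
    # the worst one at a time to the quantum solver.
    import numpy as np

    def redness(wait_times, quad):
        """Mean taxi wait time inside a quadrant = its congestion score."""
        (r0, r1), (c0, c1) = quad
        return wait_times[r0:r1, c0:c1].mean()

    def optimize_quadrant_on_qpu(quad):
        """Stub: in the real system, a QUBO for routing the taxis in this
        quadrant would be built and submitted to the D-Wave here."""
        pass

    def hybrid_dispatch(wait_times, quads, rounds=3):
        for _ in range(rounds):
            worst = max(quads, key=lambda q: redness(wait_times, q))
            optimize_quadrant_on_qpu(worst)        # quantum step
            (r0, r1), (c0, c1) = worst
            wait_times[r0:r1, c0:c1] *= 0.5        # pretend it turned greener

    waits = np.random.rand(4, 4) * 10              # toy 4x4 city grid
    quads = [((0, 2), (0, 2)), ((0, 2), (2, 4)),
             ((2, 4), (0, 2)), ((2, 4), (2, 4))]
    hybrid_dispatch(waits, quads)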

According to Ewald, VW was able to create the “hybrid” solution relatively quickly and “get what they say are pretty good results.” The companies have also talked about extending the project to predict where traffic jams are going to form and giving people perhaps 45 minutes’ warning that there is the potential for a traffic jam at a given intersection. The two companies have a press conference planned this week at CeBIT to showcase the project.

It’s worth emphasizing that the VW/D-Wave exercise is developmental – what Ewald labels a proto-application: “But just the fact that they were able to get it running is a great step forward in many ways, in that we believe our machine will be used side by side with existing machines, much like GPUs were used in the early days on graphics. In this case VW has demonstrated quite clearly how our machine, our QPU if you will, can be used to help accelerate the work being done on traditional HPC machines.”

Image art, chip diagram: D-Wave

The post Quantum Bits: D-Wave and VW; Google Quantum Lab; IBM Expands Access appeared first on HPCwire.
