Related News- HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

MSST 2017 Announces Conference Themes, Keynote

April 25, 2017 — The 33rd International Conference on Massive Storage Systems and Technology (MSST 2017) will dedicate five days to computer-storage technology, including a day of tutorials, two days of invited papers, two days of peer-reviewed research papers, and a vendor exposition. The conference will be held on the beautiful campus of Santa Clara University, in the heart of Silicon Valley May 15-19, 2017.

Kimberly Keeton, Hewlett Packard Enterprise, will keynote:

Data growth and data analytics requirements are outpacing the compute and storage technologies that have provided the foundation of processor-driven architectures for the last five decades. This divergence requires a deep rethinking of how we build systems, and points towards a memory-driven architecture, where memory is the key resource and everything else, including processing, revolves around it.

Memory-driven computing (MDC) brings together byte-addressable persistent memory, a fast memory fabric, task-specific processing, and a new software stack to address these data growth and analysis challenges. At Hewlett Packard Labs, we are exploring MDC hardware and software design through The Machine. This talk will review the trends that motivate MDC, illustrate how MDC benefits applications, provide highlights from our Machine-related work in data management and programming models, and outline challenges that MDC presents for the storage community.

Themes for the conference this year include:

  • Emerging Open Source Storage System Design for Hyperscale Computing
  • Leveraging Compression, Encryption, and Erasure Coding Chip
  • Hardware Support to Construct Large Scale Storage Systems
  • The Limits of Open Source in Large-Scale Storage Systems Design
  • Building Extreme-Scale SQL and NoSQL Processing Environments
  • Storage Innovation in Large HPC Data Centers
  • How Large HPC Data Centers Can Leverage Public Cloud for Computing and Storage
  • Supporting Extreme-Scale Name Spaces with NAS Technology
  • Storage System Designs Leveraging Hardware Support
  • How Can Large Scale Storage Systems Support Containerization?
  • Trends in Non-Volatile Media

For registration and the full agenda, visit the MSST 2017 website.

Source: MSST

The post MSST 2017 Announces Conference Themes, Keynote appeared first on HPCwire.

Cycle Computing Flies Into HTCondor Week

NEW YORK, April 25, 2017 — Cycle Computing today announced that it will address attendees at HTCondor Week 2017, to be held May 2-5 in Madison, Wisconsin. Cycle will also be sponsoring a reception for attendees, slated for Wednesday, May 3rd from 6:00 pm to 7:00 pm at the event in Madison.

Cycle’s Customer Operations Manager, Andy Howard, will present “Using Docker, HTCondor, and AWS for EDA model development” on Thursday, May 4th at 1:30 pm. Andy’s session will detail how a Cycle Computing customer used HTCondor to manage Docker containers in AWS to increase productivity and throughput and reduce overall time-to-results.
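HTCondor’s “docker universe” lets the scheduler launch each job as a Docker container, which is the pattern the session title describes. A minimal submit-file sketch along those lines, with the image name, entry point, and resource requests as hypothetical placeholders:

```
# image name and entry point are hypothetical placeholders
universe       = docker
docker_image   = example.com/eda/model-build:latest
executable     = run_model.sh
output         = job.$(Cluster).$(Process).out
error          = job.$(Cluster).$(Process).err
log            = jobs.log
request_cpus   = 4
request_memory = 8GB
queue 16
```

Submitted with `condor_submit`, this queues 16 container jobs that HTCondor matches to Docker-capable execute nodes like any other batch work.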

The HTCondor project develops, implements, deploys, and evaluates mechanisms and policies that support High Throughput Computing (HTC). Guided by both the technological and sociological challenges of such a computing environment, the Center for High Throughput Computing at UW-Madison continues to build the open source HTCondor distributed computing software and related technologies to enable scientists and engineers to increase their computing throughput. An extension of that research is HTCondor Week, the annual conference for the HTCondor batch scheduler, featuring presentations from developers and users in academia and industry. The conference gives collaborators and users the chance to exchange ideas and experiences, to learn about the latest research, to experience live demos, and to influence HTCondor’s short- and long-term research and development directions.

“At Cycle we have a great deal of history and context for HTCondor. Even today, some of our largest customers are using HTCondor under the hood in their cloud environments,” said Jason Stowe, CEO, Cycle Computing. “Simply put, HTCondor is an important scheduler to us and to our customers. We’re happy to remain part of the HTCondor community and support it with our presentation and the reception.”

Cycle Computing’s CycleCloud orchestrates Big Compute and Cloud HPC workloads, enabling users to overcome the challenges typically associated with large workloads. CycleCloud takes the delays, configuration, administration, and sunk hardware costs out of HPC clusters. CycleCloud easily leverages multi-cloud environments, moving seamlessly between internal clusters, Amazon Web Services, Google Cloud Platform, Microsoft Azure, and other cloud environments.

More information about the CycleCloud cloud management software suite can be found on the Cycle Computing website.

Cycle Computing is the leader in Big Compute software for managing simulation, analytics, and Big Data workloads. Cycle turns the cloud into an innovation engine for your organization by providing simple, managed access to Big Compute. CycleCloud is the enterprise software solution for managing multiple users running multiple applications across multiple clouds, enabling users to never wait for compute and to solve problems at any scale. Since 2005, Cycle Computing software has empowered customers in Global 2000 manufacturing, Big 10 life insurance, Big 10 pharma, Big 10 hedge funds, startups, and government agencies to leverage hundreds of millions of hours of cloud-based computation annually to accelerate innovation. For more information, visit the Cycle Computing website.

Source: Cycle Computing

IBM, NVIDIA, Stone Ridge Claim Gas & Oil Simulation Record

IBM, NVIDIA, and Stone Ridge Technology today reported setting the performance record for a “billion cell” oil and gas reservoir simulation. Using IBM Minsky servers with NVIDIA P100 GPUs and Stone Ridge’s ECHELON petroleum reservoir simulation software, the trio claim their effort “shatters previous (Exxon) results using one-tenth the power and 1/100th of the space. The results were achieved in 92 minutes with 60 Power processors and 120 GPU accelerators and broke the previous published record (Aramco) of 20 hours using thousands of processors.”
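As a quick sanity check, the headline figures quoted above translate into roughly a 13x speedup over the previous published record:

```python
# Back-of-the-envelope check using only the figures quoted above.
prev_minutes = 20 * 60   # previous published record (Aramco): 20 hours
new_minutes = 92         # IBM/NVIDIA/Stone Ridge result
speedup = prev_minutes / new_minutes
print(f"speedup: {speedup:.1f}x")  # prints "speedup: 13.0x"
```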

The “billion cell” simulation represents a heady challenge typically tackled with supercomputer-class HPC infrastructure. The Minsky, of course, is the top of IBM’s Power server line and leverages NVIDIA’s fastest GPU and NVLink interconnect. This simulation used 60 processors and 120 accelerators. IBM owned the systems; each Minsky had two Power8 CPUs with 256 GB of memory, four NVIDIA P100 GPUs, and InfiniBand EDR.

Image: Reservoir simulation (Source: Stone Ridge)

“This calculation is a very salient demonstration of the computational capability and density of solution that GPUs offer. That speed lets reservoir engineers run more models and ‘what-if’ scenarios than previously, so they can produce oil more efficiently, open up fewer new fields, and make responsible use of limited resources,” said Vincent Natoli, President of Stone Ridge Technology, in the official announcement. “By increasing compute performance and efficiency by more than an order of magnitude, we’re democratizing HPC for the reservoir simulation community.”

According to the collaborators, the data set was taken from public information and used to mimic large oil fields like those found in the Middle East. Key code optimizations included taking advantage of the CPU-GPU and GPU-GPU NVLink connections in the Power systems, and scaling the software across tens of Minsky systems in an HPC cluster.

The new solution, say the collaborators, is intended “to transform the price and performance for business critical High Performance Computing (HPC) applications for simulation and exploration.” The performance is impressive but not cheap. IBM estimates the cost of the 30 Minsky systems in the range of $1.5 million to $2 million. ECHELON is a standard Stone Ridge product, and IBM and Stone Ridge plan to jointly sell the new solution into the oil and gas market.

Image: Sumit Gupta, IBM

Sumit Gupta, IBM Vice President, High Performance Computing & Analytics, said, “The bottom line is that by running ECHELON on Minsky, users can achieve faster run-times using a fraction of the hardware. One recent effort used more than 700,000 processors in a server installation that occupies nearly half a football field. Stone Ridge did this calculation on two racks of IBM machines that could fit in the space of half a ping-pong table.”  

IBM has been steadily ratcheting up efforts to showcase its Power systems – including Minsky – as it tries to wrest market share in an x86-dominated landscape. Last month, the company spotlighted another Power8-based system – VOLTRON at Baylor College – which researchers used to assemble the 1.2 billion letter genome of the mosquito that carries the West Nile virus.

IBM and its collaborators argue “this latest advance” challenges misconceptions that GPUs can’t be efficient on complex application codes such as reservoir simulators and are better suited to simple, more naturally parallel applications such as seismic imaging.

They do note, “Billion cell models in the industry are rare in practice, but the calculation was accomplished to highlight the growing disparity in performance between new fully GPU based codes like ECHELON and equivalent legacy CPU codes. ECHELON scales from the cluster to the workstation and while it can turn over a billion cells on 30 servers, it can also run smaller models on a single server or even on a single NVIDIA P100 board in a desktop workstation, the latter two use cases being more in the sweet spot for the industry.”

ASC17 Championship to Challenge Front-end Science

Train an AI, challenge a Gordon Bell Prize-nominated application, optimize the latest third-generation sequencing assembly tool, and revitalize traditional scientific computing software on a cutting-edge many-core platform. All these sound like tasks for a team of top engineers, but the truth is that these are the challenges that groups of university students, with an average age of 20 years old, need to overcome in the finals of the 2017 ASC Student Supercomputer Challenge (ASC17). The finals of this tournament are scheduled to be held at the National Supercomputing Center in Wuxi, China, from April 24 to 28, where 20 teams from around the world will compete to be crowned champion.

In the ASC17 finals, the competitors have to use the PaddlePaddle framework to accurately predict the traffic situation in a city on a particular day in the future. This requires each team to design and build an intelligent “brain” on their own, and then employ high-intensity training to coach this “brain” to come up with the results. They also need to ensure that the training is efficient and that the trained “brains” will have high recognition accuracy.

MASNUM is the third-generation oceanic wave numerical model developed by China, and it was nominated for the Gordon Bell Prize. To work with this top-tier application, the participants will get to perform their calculations in the finals on the world’s fastest supercomputer, Sunway TaihuLight, as they attempt to extend the software’s parallel calculations to 10,000 computing cores or more.

Current third-generation gene sequencers can generate as many as hundreds of thousands of gene fragments per sequencing run. Once the sequencing is completed, a more critical challenge emerges: scientists have to assemble millions of gene fragments into a complete and correct genome and chromosome sequence. The finalists in ASC17 will attempt to optimize Falcon, a third-generation gene sequencing assembly tool, and the results will help advance research in human genetics and even the origin of life.

LAMMPS is the abbreviation for Large-scale Atomic/Molecular Massively Parallel Simulator, the most widely used molecular dynamics simulation software worldwide. It is key software for research in many cutting-edge disciplines, including chemistry, materials, and molecular biology. The challenge for ASC17 finalists is to port this very mature software to the latest “Knights Landing” architecture platform and to improve its operational efficiency.

In addition, the teams in the ASC17 finals are also required by the organizing committee to use the supercomputing nodes from Inspur to design and build a supercomputer of their own within a 3,000 W power budget to optimize HPL, HPCG, and one mystery application. Each team must also deliver a presentation in English.

The ASC Student Supercomputer Challenge was initiated by China and is supported by experts and institutions worldwide. The competition aims to be a platform that promotes exchanges among young supercomputing talent from different countries and regions, as well as to groom young talent. It also aims to be a key driving force in promoting technological and industrial innovation by improving the standards of supercomputing applications and research. The ASC Challenge has been held for six years. This year the ASC17 Challenge is co-organized by Zhengzhou University, the National Supercomputing Center in Wuxi, and Inspur, with 230 teams from all over the world having taken part in the competition.

PEARC17 Announces Keynote Speakers

Mon, 04/24/2017 - 22:41

April 24, 2017 — The organizers of PEARC17 (Practice & Experience in Advanced Research Computing) today announced the keynote speakers for the conference in New Orleans, July 9–13, 2017.

The PEARC17 keynote on Tuesday, July 11 will be presented by Paula Stephan, professor of economics, Georgia State University and a research associate, National Bureau of Economic Research. Stephan’s talk, “How Economics Shapes Science,” will focus on the effects of incentives and costs on U.S. campuses.

Paul Morin, founder and director of the Polar Geospatial Center, an NSF science and logistics support center at the University of Minnesota, will present a keynote session on Wednesday, July 12, titled “Mapping the Poles with Petascale.” This is the compelling story of a small NSF-funded team from academia joining with the National Geospatial-Intelligence Agency and Blue Waters to create the largest ever topographic mapping project.

PEARC17’s inaugural conference will address the challenges of using and operating advanced research computing within academic and open science communities. Bringing together the high-performance computing and advanced digital research communities, this year’s theme—Sustainability, Success and Impact—reflects key objectives for those who manage, develop, and use advanced research computing throughout the nation and the world.

About the Speakers:

Paula Stephan is a Fellow of the American Association for the Advancement of Science and a member of the Board of Reviewing Editors, Science. Science Careers named Stephan its first “Person of the Year” in December 2012. Stephan has published numerous articles in such journals as The American Economic Review, The Journal of Economic Literature, Management Science, Nature, Organization Science, Research Policy and Science. Her book, How Economics Shapes Science, was published by Harvard University Press. Her research has been supported by the Alfred P. Sloan Foundation, the Andrew W. Mellon Foundation, and the National Science Foundation. Stephan serves on the National Academies Committee on the Next Generation of Researchers Initiative and the Research Council of The State University of New York (SUNY) System. See Stephan’s full bio on the PEARC17 website.

Paul Morin is Founder and Director of the Polar Geospatial Center, an NSF science and logistics support center at the University of Minnesota. Morin leads a team of two dozen responsible for imaging, mapping, and monitoring the Earth’s polar regions for the National Science Foundation’s Division of Polar Programs. Morin is the liaison between the National Science Foundation and the National Geospatial-Intelligence Agency’s commercial imagery program. Before founding PGC, Morin was at the National Center for Earth-Surface Dynamics at the University of Minnesota, and he has worked at the University of Minnesota since 1987. Morin serves as the National Academy of Sciences-appointed U.S. representative to the Standing Committee on Antarctic Geographic Information under the Scientific Committee for Antarctic Research (i.e., the Antarctic Treaty System). One of his current projects is ArcticDEM, a White House initiative to produce a high-resolution, time-dependent elevation model of the Arctic using Blue Waters. See Morin’s full bio on the PEARC17 website.

Source: PEARC

ASC17 Makes Splash at Wuxi Supercomputing Center

Mon, 04/24/2017 - 20:13

A record-breaking twenty student teams, plus scores of company representatives, media professionals, staff, and student volunteers, transformed a formerly empty hall inside the Wuxi Supercomputing Center into a bustling hub of HPC activity, kicking off day one of the 2017 Asia Student Supercomputer Challenge (ASC17).

As the sun rose higher in the sky over nearby Taihu Lake and with the world’s fastest TaihuLight supercomputer in close proximity, the 100-some students focused intently on their task: unboxing their shiny new hardware and building their clusters.

From an initial pool of 220 teams, representing more than one thousand students from schools around the globe, these 20 teams earned their spots in the final round. Among them are former champions, such as Huazhong University of Science and Technology, Shanghai Jiao Tong University, and “triple crown” winner Tsinghua University, but for seven of the teams, ASC17 marks their first time as competition finalists. Contest officials are particularly proud of the event’s reach in cultivating young talent.

In the six years since its inception, ASC has developed into the largest student supercomputing competition and the one with the highest award levels. During the four days of the competition, the 20 teams at ASC17 will race to conduct real-world benchmarking and science workloads as they vie for a total of six prizes worth nearly $35,000.

Inspur provides the teams with a rack and NF5280M4 servers, outfitted with two Intel Xeon E5-2680v4 (2.4 GHz, 14-core) CPUs. The primary event sponsor also supplies DDR4 memory, SATA storage, and Mellanox InfiniBand networking (card, switch, and cables), as well as an Ethernet switch and cables.

Image: Eight Nvidia P100 boxes

Teams can substitute or add other componentry (except the servers) at their own expense or through sponsorship opportunities. Most of the teams we spoke with were able to forge a relationship with Nvidia, whose GPU gear is now widely used at all three major cluster challenges (at SC, ISC, and ASC). We saw mostly P100 cards getting snapped into server trays this morning, but at least two teams had acquired K40 parts in the hope that they would offer an energy profile more conducive to staying within the 3,000-watt contest power threshold. The most common configuration placed eight P100 GPUs in four nodes, but on everyone’s mind was how much of the available compute power they would be able to leverage without exceeding the power threshold.
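To make that power-budget tension concrete, here is an illustrative sketch; the per-node and per-GPU wattages below are assumptions for the example, not contest measurements:

```python
# Illustrative power-budget arithmetic for the 3,000-watt threshold.
# The wattage figures are assumptions for this sketch, not contest data.
NODE_BASE_W = 350   # assumed draw per dual-socket node (CPUs, RAM, fans, NIC)
GPU_TDP_W = 250     # P100 board power at stock clocks (published TDP)

def total_power(nodes, gpus_per_node, gpu_watts=GPU_TDP_W):
    """Total draw for a homogeneous cluster."""
    return nodes * (NODE_BASE_W + gpus_per_node * gpu_watts)

def max_gpu_watts(budget, nodes, gpus_per_node):
    """Largest per-GPU power cap that keeps the cluster under budget."""
    return (budget - nodes * NODE_BASE_W) // (nodes * gpus_per_node)

# Four nodes with two P100s each exceed the budget at full clocks...
print(total_power(4, 2))          # 3400 W, over the 3000 W limit
# ...so each GPU must be capped (e.g. by lowering clocks) to stay legal.
print(max_gpu_watts(3000, 4, 2))  # 200 W per GPU
```

Under these assumed figures, an eight-P100 cluster only fits the budget if each GPU is throttled well below its stock power, which is exactly the trade-off the teams were weighing.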

Days one and two of the competition are devoted to cluster building and testing. The on-site clusters are used for an application set that includes the High Performance Linpack (HPL), the High Performance Conjugate Gradient (HPCG), the mystery application (to be announced Wednesday), the genome analysis code Falcon, and a traffic prediction problem to be solved with Baidu’s deep learning framework, PaddlePaddle. The teams report different levels of experience with PaddlePaddle and with scaling to multiple GPUs, a skill that will be critical for achieving optimum performance.

Two other platforms will be used in the competition: the homegrown TaihuLight and a Xeon Phi Knights Landing (KNL) machine. Students will use TaihuLight to run and optimize the China-developed numerical wave modeler MASNUM; the Inspur NF6248 KNL server (there’s a 20-node rack of these inside the contest hall) will be used for the molecular dynamics simulator LAMMPS. There is no 3,000-watt power limit for these workloads. Teams can receive a total of 100 points: 90 points for performance optimizations and 10 points for the presentation that they deliver to the judges after the conclusion of the testing.
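Porting LAMMPS to Knights Landing typically centers on building with the USER-INTEL package and enabling its accelerated styles at run time; a command-line sketch (the rank count, thread count, and input deck name are illustrative):

```
# enable USER-INTEL accelerated styles (-sf intel) with the intel
# package configured for 0 coprocessors and 2 OpenMP threads per rank
mpirun -np 64 lmp -sf intel -pk intel 0 omp 2 -in in.lammps
```

The `-sf intel` switch substitutes Intel-optimized variants of supported styles, and `-pk intel` sets the package options that would otherwise appear as a `package intel` command in the input script.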

One of the most exciting parts of this year’s competition is the inclusion of the Sunway TaihuLight machine, which teams have had access to since March. Each team will be allowed to use at most 64 SW CPUs with 256 CGs. According to the rules: “Every team is allowed to design and implement proper parallel algorithm optimization and many-core optimization for the MASNUM source code. Each team needs to pass the correctness checking of each workload, and the goal is to achieve the shortest runtime of each workload.”

All in on AI

The addition of the PaddlePaddle framework continues the contest’s focus on AI and deep learning, which began last year with the incorporation of a deep neural network program under the e-Prize category.

Wang Endong, founder of the ASC challenge, academician of the Chinese Academy of Engineering and chief scientist at Inspur, believes that with the convergence of HPC, big data and cloud computing, intelligent computing as represented by artificial intelligence will become the most important and significant component for the coming computing industry, bringing new challenges in computing technologies.

The AI thread has also been woven into the HPC Connection Workshop, which will be held at the Wuxi Supercomputing Center on Thursday. The theme for the 15th HPC Connection Workshop is machine intelligence and supercomputing. The impressive lineup of speakers includes Jack Dongarra (ASC Advisory Committee Chair, University of Tennessee, Oak Ridge National Laboratory), Depei Qian (professor, Beihang University, Sun Yat-sen University; director of the Key Project on HPC, National High-Tech R&D program); Simon See (chief solution architect, Nvidia AI Technology Center and Solution Architecture and Engineering), and Haohuan Fu (deputy director, National Supercomputing Center in Wuxi, Associate Professor, Tsinghua University).

The awards ceremony will be held Friday afternoon.

The 20 ASC17 Teams (asterisk indicates first-time finalist):

Tsinghua University

Beihang University

Sun Yat-sen University

Shanghai Jiao Tong University

Hong Kong Baptist University

Southeast University*

Northwestern Polytechnical University

Taiyuan University of Technology

Dalian University of Technology

The PLA Information Engineering University*

Ocean University of China*

Weifang University*

University of Erlangen-Nuremberg*

National Tsing Hua University

Saint Petersburg State University

Ural Federal University

University of Miskolc

University of Warsaw*

Huazhong University of Science and Technology

Zhengzhou University*

Image: Taihu Lake, Wuxi, China

NCSA Director Named U of I VP for Economic Development and Innovation

Mon, 04/24/2017 - 15:10

URBANA, Ill., April 24, 2017 — NCSA Director Edward Seidel has been named vice president for economic development and innovation for the University of Illinois System, pending Board of Trustees approval, President Tim Killeen announced Monday. Seidel has served since August as interim vice president for research, a position that has been restructured and retitled to reflect the U of I System’s focus on fostering innovation to help drive the state’s economy through research and discovery.

Killeen said Seidel’s leadership over the last eight months has helped advance several new initiatives, such as working with executives of leading Illinois companies to develop collaborative research projects that will serve their businesses and lift the state’s economy. A longtime administrator and award-winning researcher, Seidel will lead an office that works with the System’s three universities to help harness their nearly $1 billion per year sponsored-research portfolio for technology commercialization and economic development activities.

“Ed’s personal experience with leading-edge research and with federal and international agencies – combined with his deep understanding of the U of I System’s capabilities and aspirations – has given him a rock-solid foundation for success,” Killeen said. “He’s off to a flying start.”

Seidel has served since 2013 as director of the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. Seidel retained the title of NCSA director while serving as interim vice president, with Dr. William “Bill” Gropp taking on the role of acting director. Gropp, the Thomas M. Siebel Chair in Computer Science and director of the Parallel Computing Institute in the Coordinated Science Laboratory, will continue to serve as interim director until a permanent NCSA director is named.

“NCSA congratulates Vice President Seidel on this well-earned appointment,” Gropp said. “It has been an honor co-leading and planning a vibrant and innovative future for NCSA. As interim director, I am looking forward to continuing to work with Ed, in his new role, as we advance new opportunities for the University of Illinois and NCSA.”

Seidel’s appointment as director three years ago marked a return to NCSA, where he once led the center’s numerical relativity group from 1991 to 1996. He also was among the original co-principal investigators for Blue Waters, a federally funded project that brought one of the world’s most powerful supercomputers to Urbana-Champaign. He also is a Founder Professor in the Department of Physics and a professor in the Department of Astronomy at Illinois.

“It has been an honor leading NCSA during this exciting period,” said Seidel. “I am proud of what the center’s team has done to keep NCSA in a prominent national leadership position with projects like Blue Waters, XSEDE, LSST, the Midwest Big Data Hub, the National Data Service, and many others. I am also pleased to have helped NCSA move in directions that better leverage the great strengths of the university, in creating the world’s most advanced integrated cyberinfrastructure environment, in making it a home for transdisciplinary research and education programs at Illinois, and in enhancing NCSA’s industry program. As I take on new challenges with the U of I system, I look forward to continuing as a member of NCSA’s faculty, and to working with Bill as he and the team take NCSA to new heights in the future.”

About the National Center for Supercomputing Applications

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

Source: NCSA

IARPA Launches QEO Program to Develop Quantum Enhanced Computers

Mon, 04/24/2017 - 11:55

WASHINGTON, D.C., April 24, 2017 — The Intelligence Advanced Research Projects Activity, within the Office of the Director of National Intelligence (ODNI), announced today that it has embarked on a multi-year research effort to develop special-purpose algorithms and hardware that harness quantum effects to surpass conventional computing. Practical applications include more rapid training of machine learning algorithms, circuit fault diagnostics on larger circuits than possible today, and faster optimal scheduling of multiple machines on multiple tasks. If successful, technology developed under the Quantum Enhanced Optimization—“QEO”—program will provide a plausible path to performance beyond what is possible with today’s computers.

“The goal of the QEO program is a design for quantum annealers that provides a 10,000-fold increase in speed on hard optimization problems, which improves at larger and larger problem sizes when compared to conventional computing methods,” said Dr. Karl Roenigk, QEO program manager at IARPA.

Through a competitive Broad Agency Announcement process, IARPA has awarded a research contract in support of the QEO program to an international team led by the University of Southern California. Subcontractors include the California Institute of Technology, Harvard University, Massachusetts Institute of Technology, University of California at Berkeley, University College London, Saarland University, University of Waterloo, Tokyo Institute of Technology, Lockheed Martin, and Northrop Grumman. Other participants providing validation include NASA Ames Research Center and Texas A&M. Participants providing government-furnished hardware and test bed capabilities include MIT Lincoln Laboratory and MIT.

For any questions, please contact IARPA.

IARPA invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges of the agencies and disciplines in the Intelligence Community. Additional information on IARPA and its research may be found on the IARPA website.

Source: ODNI

ALCF Seeks Proposals to Advance Big Data Problems in Big Science

Mon, 04/24/2017 - 10:30

Argonne, Ill., April 24, 2017 — The Argonne Leadership Computing Facility Data Science Program (ADSP) is now accepting proposals for projects that aim to gain insight into very large datasets produced by experimental, simulation, or observational methods. The larger the data, in fact, the better.

From April 24 to June 15, ADSP’s open call provides an opportunity for researchers to make transformational advances in data science and software technology through allocations of computer time and supporting resources at the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy Office of Science User Facility.

The ADSP, now in its second year, is the first program of its kind in the nation, and targets “big data” science problems that require the scale and performance of leadership computing resources, such as ALCF’s two petascale supercomputers: Mira, an IBM Blue Gene/Q, and Theta, an Intel/Cray system that came online earlier this year.

Data—the raw, voluminous bits and bytes that pour out of today’s large-scale experiments—are the proverbial haystacks to the science community’s needles. Data analysis is the art (of sorts) of sorting and making sense of the output of supercomputers, telescopes, particle accelerators, and other big instruments of scientific discovery.

ADSP projects will focus on employing leadership-class systems and infrastructure to explore, prove, and improve a wide range of data science techniques. These techniques include uncertainty quantification, statistics, machine learning, deep learning, databases, pattern recognition, image processing, graph analytics, data mining, real-time data analysis, and complex and interactive workflows.

The winning proposals will be awarded time on ALCF resources and will receive support and training from dedicated ALCF staff. Applications undergo a review process to evaluate potential impact, data scale readiness, diversity of science domains and algorithms, and other criteria. This year, there will be an emphasis on identifying projects that can use the architectural features of Theta in particular, as future ADSP projects will eventually transition to Aurora, ALCF’s 200-petaflops Intel/Cray system expected to arrive late next year.

To submit an application or for additional details about the proposal requirements, visit the ALCF website. Proposals will be accepted until the call deadline of 5 p.m. CDT on Thursday, June 15, 2017. Awards will be announced in September and commence October 1, 2017.

About Argonne National Laboratory

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

About the U.S. Department of Energy’s Office of Science

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.

Source: Argonne National Laboratory

The post ALCF Seeks Proposals to Advance Big Data Problems in Big Science appeared first on HPCwire.

Rescale Adds CST STUDIO SUITE to Its ScaleX Cloud Platform for HPC

Mon, 04/24/2017 - 08:57

SAN FRANCISCO, April 24, 2017 — Rescale is pleased to announce a partnership with Computer Simulation Technology (CST), part of SIMULIA, a Dassault Systèmes brand, that will allow engineers and scientists running simulations in CST STUDIO SUITE to easily access the world’s largest high-performance computing (HPC) network via Rescale’s ScaleX platform.

CST STUDIO SUITE is a best-in-class software package for electromagnetic simulation. Customers often demand high-performance IT resources for large system-level simulations. Reducing run times, particularly for multi-parameter optimization, can improve design throughput and the critical time-to-market for a product. Such IT resources are traditionally on-premises, but they can incur large start-up and maintenance costs and can become obsolete within three years as new technology arrives. Rescale offers an alternative: a scalable, secure, turnkey, cloud-based platform that now allows CST STUDIO SUITE to run on its worldwide network of high-performance computers, including the most state-of-the-art hardware available.

Under the new partnership, CST customers can bring their own licenses, and CST STUDIO SUITE will be available pre-configured on Rescale’s ScaleX platform. By accessing the ScaleX platform through any browser, CST STUDIO SUITE users can run sophisticated engineering simulations on Rescale’s global multi-cloud HPC network of over 60 data centers in 30-plus locations worldwide. Demanding users can scale out to thousands of cores and choose hardware configurations optimized for CST STUDIO SUITE’s complete technology portfolio, with options ranging from economical HPC configurations to cutting-edge bare-metal systems, low-latency InfiniBand interconnects, and the latest Intel processors and NVIDIA GPUs.

With Rescale’s ScaleX platform, enterprises can leverage built-in administration and collaboration tools to build teams, manage resources, and share jobs with team members. Additionally, enterprise administrators can take advantage of best-in-class security features such as multi-factor authentication, single sign-on, and configurable IP access rights, on a platform that meets the highest security standards, including ISO 27001 and 27017, SOC 2 Type 2, ITAR, and HIPAA.

“We are very excited to be partnering with CST, as a new part of the SIMULIA brand of Dassault Systèmes,” said Joris Poort, CEO at Rescale. “We believe that CST STUDIO SUITE users will benefit from the fast, flexible, secure, and huge on-demand resources that Rescale can bring to computationally-demanding tools, such as electromagnetic simulation.”

Dr. Martin Timm, Director Global Marketing at CST added, “CST STUDIO SUITE provides comprehensive, advanced solving engines based on various numerical methods for world-class electromagnetic simulation. These engines run optimally on various types of hardware, and we believe that making them available on Rescale’s ScaleX platform will allow our customers access to the best possible performance across the whole suite of tools.”

Rescale is sponsoring the CST European User Conference 2017 in Darmstadt, Germany, this week, April 27-28, 2017. Attend Rescale’s presentation or visit its booth to discuss the advantages of running CST STUDIO SUITE in the cloud with Rescale.

About Rescale

Rescale is the global leader for high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s ScaleX platform transforms traditional fixed IT resources into flexible hybrid, private, and public cloud resources—built on the largest and most powerful high-performance computing network in the world. For more information on Rescale’s ScaleX platform, visit the Rescale website.

Source: Rescale

The post Rescale Adds CST STUDIO SUITE to Its ScaleX Cloud Platform for HPC appeared first on HPCwire.

Lenovo Drives into Software Defined Datacenter with DSS-G Storage Solution

Mon, 04/24/2017 - 08:52

ORLANDO, Fla. April 24, 2017 — Lenovo (SEHK:0992) (Pink Sheets:LNVGY) today announced, at its annual Accelerate Partner Forum, the Lenovo Distributed Storage Solution for IBM Spectrum Scale (DSS-G)— a scalable software-defined storage (SDS) solution. Designed to support dense scalable file and object storage suitable for high-performance and data-intensive environments, the Lenovo DSS-G enables customers to manage the exponential rate of data growth and the subsequent need to store large amounts of both structured and unstructured data.

Today, deploying storage solutions for HPC, Artificial Intelligence (AI), analytics and cloud environments (key technology trends that are dramatically reshaping the data center) places a significant burden on IT resources. DSS-G is Lenovo’s latest offering intended to accelerate adoption of software-defined data center technology, which provides customers with key benefits such as greater infrastructure simplicity, enhanced performance and lower total cost of ownership.

This announcement is a first step in executing Lenovo’s HPC and AI commitment to bring the benefits of Software Defined Storage (SDS) to HPC clusters; additional offerings will follow for customers deploying Ceph or Lustre.

Built on Lenovo’s System x3650 M5 server with powerful Intel Xeon processors, renowned for its industry-leading reliability and performance, the Lenovo DSS-G is available as a pre-integrated, easy-to-deploy rack-level offering. Featuring the Lenovo D1224 and D3284 12Gbps SAS storage enclosures and drives as well as software and networking components – including Red Hat Enterprise Linux support – the new offering allows for a wide choice of technology within an integrated solution.

As a follow-on to the successful GPFS Storage Server (GSS), the Lenovo DSS-G delivers on the needs of today’s agile and digital businesses. New features include:

  • Easy Scalability: Start small and easily grow performance / capacity via a modular approach
  • Innovative RAID: With IBM Spectrum Scale Declustered RAID, reduce rebuild overhead by up to 8X
  • Choice of high-speed network: InfiniBand or Ethernet up to 100Gbps
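The “up to 8X” rebuild claim above reflects how declustered RAID works: a conventional RAID group reconstructs a failed drive onto one dedicated spare, so that single spare’s write bandwidth caps the rebuild, whereas declustered RAID spreads spare capacity across every drive in the array, so all surviving drives share the rebuild I/O. The sketch below is a deliberately simplified model; the capacity, bandwidth, and drive-count numbers are illustrative assumptions, not Lenovo or IBM figures.

```python
# Simplified rebuild-time model: conventional RAID vs. declustered RAID.
# Assumes every drive sustains the same rebuild bandwidth and that
# rebuild I/O parallelizes perfectly across drives (illustrative only).

DRIVE_CAPACITY_MB = 10_000_000   # 10 TB failed drive to reconstruct
PER_DRIVE_MBPS = 100             # sustained rebuild bandwidth per drive

def conventional_rebuild_hours() -> float:
    """All reconstructed data funnels onto a single dedicated spare."""
    return DRIVE_CAPACITY_MB / PER_DRIVE_MBPS / 3600

def declustered_rebuild_hours(array_drives: int) -> float:
    """Spare capacity is distributed; surviving drives share the writes."""
    return DRIVE_CAPACITY_MB / (PER_DRIVE_MBPS * (array_drives - 1)) / 3600

print(f"conventional spare:      {conventional_rebuild_hours():.1f} h")
print(f"declustered (84 drives): {declustered_rebuild_hours(84):.2f} h")
```

In practice, rebuild I/O is throttled to protect foreground workloads, so observed improvements (the “up to 8X” above) are much smaller than this idealized ratio, but the direction of the advantage is the same.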

The new Lenovo DSS-G offering is fulfilled by Lenovo Scalable Infrastructure (LeSI). LeSI leverages decades of engineering experience and leadership to reduce the complexity of deployment, delivering an integrated and fully supported solution that matches best-in-industry components with optimized solution design. This enables maximum system availability and rapid root-cause problem detection throughout the life of the system.

Collectively, these features empower customers running data-intensive HPC, big data or cloud workloads to focus their efforts on maximizing business value and to reclaim valuable resources previously spent on designing, optimizing, installing and supporting the infrastructure required to meet business demands.

In addition, Lenovo offers a comprehensive portfolio of services that supports the full lifecycle of the Lenovo DSS-G and all Lenovo IT assets. Expert professionals can assist with complex deployments as well as provide 24×7 monitoring and technical systems management with managed services. Available benefits also include a single point-of-contact for solution-level support.

For more information on the Lenovo DSS-G, visit the Lenovo website.

Lenovo Quote (Madhu Matta, VP & GM, High Performance Computing and A.I.)

“The Lenovo HPC solutions are part of research projects focused on solving humanity’s most complex challenges. One in every five supercomputers in the world is built on Lenovo HPC offerings and we are proud to count major research universities among our partners. The Lenovo DSS-G offering enhances that capability. Clients can now deploy a software defined storage solution that enhances performance, scalability and capability of the HPC environment.”

About Lenovo

Lenovo (SEHK:0992) (Pink Sheets:LNVGY) is a $45 billion global Fortune 500 company and a leader in providing innovative consumer, commercial, and enterprise technology. Our portfolio of high-quality, secure products and services covers PCs (including the legendary Think and multimode Yoga brands), workstations, servers, storage, smart TVs and a family of mobile products like smartphones (including the Moto brand), tablets and apps.

Source: Lenovo

The post Lenovo Drives into Software Defined Datacenter with DSS-G Storage Solution appeared first on HPCwire.

Mellanox InfiniBand Delivers up to 250 Percent Higher ROI for HPC

Mon, 04/24/2017 - 08:48

SUNNYVALE, Calif. and YOKNEAM, Israel, April 24, 2017 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced that EDR 100Gb/s InfiniBand solutions have demonstrated from 30 to 250 percent higher HPC application performance versus Omni-Path. These performance tests were conducted at end-user installations and at the Mellanox benchmarking and research center, and covered a variety of HPC application segments including automotive, climate research, chemistry, bioscience, genomics and more.

Examples of extensively used mainstream HPC applications:

  • GROMACS is a molecular dynamics package designed for simulations of proteins, lipids and nucleic acids, and one of the fastest and most broadly used applications for chemical simulations. GROMACS demonstrated a 140 percent performance advantage on an InfiniBand-enabled 64-node cluster.
  • NAMD, noted for its parallel efficiency, is used to simulate large biomolecular systems and plays an important role in modern molecular biology. Using InfiniBand, NAMD demonstrated a 250 percent performance advantage on a 128-node cluster.
  • LS-DYNA is an advanced multi-physics simulation software package used across the automotive, aerospace, manufacturing and bioengineering industries. Using the InfiniBand interconnect, LS-DYNA demonstrated a 110 percent performance advantage on a 32-node cluster.

Due to its scalability and offload technology advantages, InfiniBand has demonstrated higher performance while utilizing just 50 percent of the data center infrastructure, thereby enabling the industry’s lowest Total Cost of Ownership (TCO) for these applications and HPC segments. For the GROMACS application, a 64-node InfiniBand cluster delivers 33 percent higher performance than a 128-node Omni-Path cluster; for NAMD, a 32-node InfiniBand cluster delivers 55 percent higher performance than a 64-node Omni-Path cluster; and for LS-DYNA, a 16-node InfiniBand cluster delivers 75 percent higher performance than a 32-node Omni-Path cluster.
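Because each InfiniBand cluster in these comparisons has half the nodes of its Omni-Path counterpart, the cluster-level figures translate directly into per-node efficiency. A short script that normalizes them (the throughput ratios are taken from the paragraph above; the node-count scaling is the only step added):

```python
# Convert Mellanox's cluster-level performance claims to per-node terms.
cases = [
    # (application, InfiniBand nodes, Omni-Path nodes, cluster-level advantage)
    ("GROMACS", 64, 128, 1.33),
    ("NAMD",    32,  64, 1.55),
    ("LS-DYNA", 16,  32, 1.75),
]

for app, ib_nodes, opa_nodes, cluster_advantage in cases:
    # Per-node advantage = cluster advantage scaled by the node-count ratio.
    per_node = cluster_advantage * opa_nodes / ib_nodes
    print(f"{app}: one InfiniBand node does ~{per_node:.2f}x "
          f"the work of one Omni-Path node")
```

By this arithmetic, each InfiniBand node does roughly 2.7x to 3.5x the work of an Omni-Path node, which is how half the infrastructure can still come out ahead.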

“InfiniBand solutions enable users to maximize their data center performance and efficiency versus proprietary competitive products. EDR InfiniBand enables users to achieve 2.5X higher performance while reducing their capital and operational costs by 50 percent,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “As a standard and intelligent interconnect, InfiniBand guarantees both backward and forward compatibility, and delivers optimized data center performance to users for any compute elements – whether they include CPUs by Intel, IBM, AMD or ARM, or GPUs or FPGAs. Utilizing the InfiniBand interconnect, companies can gain a competitive advantage, reducing their product design time while saving on their needed data center infrastructure.”

The application testing was conducted at end-user data centers and at the Mellanox benchmarking and research center; the full report will be available on the Mellanox website. For more information, please contact Mellanox Technologies.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure. Mellanox intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. Mellanox offers a choice of high performance solutions: network and multicore processors, network adapters, switches, cables, software and silicon, that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage, network security, telecom and financial services. More information is available on the Mellanox website.

Source: Mellanox

The post Mellanox InfiniBand Delivers up to 250 Percent Higher ROI for HPC appeared first on HPCwire.

DreamWorks Taps HPE, Qumulo to Accelerate Digital Content Pipeline

Mon, 04/24/2017 - 08:39

SEATTLE, Wash., April 24, 2017 —  Hewlett Packard Enterprise (HPE) and Qumulo today announced that DreamWorks Animation has selected the two companies to accelerate its digital content pipeline. The joint solution of HPE Apollo Servers and Qumulo Core software enables DreamWorks Animation to replace legacy storage systems used for HPC file-based workloads such as data intensive simulations for animated films and programs.

DreamWorks Animation was challenged to keep pace with the vast amount of small-file data generated by animation rendering workflows. The studio faced significant challenges with its existing systems, including insufficient scalability and write performance for large numbers of small files, lack of data visibility, and limited APIs for custom integrations with important media workflows. DreamWorks Animation upgraded to HPE Apollo servers and Qumulo Core, a flash-first hybrid storage architecture with the performance and scalability to meet the demands of its digital content needs.

“Our film creation process requires an exceptional amount of digital manufacturing, and file-based data is one of the core assets of our business,” said Skottie Miller, Technology Fellow for Engineering and Infrastructure at DreamWorks Animation. “If storage fails to perform, everything is impacted. HPE and Qumulo deliver the next generation of scale-out storage that meets our demanding requirements. For any given film, we can generate more than half a billion files. Having the capability to support that with a best of breed solution such as HPE Apollo Servers and Qumulo’s modern scale-out storage software keeps our pipeline humming. Qumulo’s modern code base and architecture, write scalability, and integrated file systems analytics provides great value to our business and further strengthens our relationship with Hewlett Packard Enterprise.”

HPE Apollo servers and Qumulo Core offer maximum flexibility, scale and performance for on-premises and private cloud workloads. It is a complete and reliable solution for storing and managing tens of billions of files and objects and hundreds of petabytes of data. Qumulo’s scale-out storage easily scales capacity and performance linearly through an efficient and low cost flash-first hybrid architecture. Qumulo Core is also the world’s smartest storage system for continual delivery, real-time insight into data and storage, and offers users the option to choose their own hardware. Customers can integrate Qumulo Core into their existing workflows via robust REST APIs.
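To make “integrate … via robust REST APIs” concrete, the general shape of such an integration is: authenticate, then poll an analytics endpoint over HTTPS. The base URL, route, token, and JSON field names below are hypothetical placeholders for illustration, not Qumulo’s actual API; consult Qumulo’s API documentation for the real routes.

```python
# Sketch of pulling capacity analytics from a REST-style storage API.
# NOTE: the URL, route, auth token, and JSON fields are placeholders,
# not Qumulo's published API surface.
import json
import urllib.request

BASE_URL = "https://storage.example.com/api"  # hypothetical cluster address
TOKEN = "session-token"                       # hypothetical bearer token

def get_capacity_report() -> dict:
    """Fetch a capacity-analytics report and return the parsed JSON."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/analytics/capacity",  # hypothetical route
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A workflow script would call get_capacity_report() on a schedule and
# feed the numbers into monitoring or render-farm planning tools.
```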

“With a sterling reputation for innovation, DreamWorks Animation makes every new technology investment in support of driving creativity and new entertainment experiences,” said Peter Godman, Co-Founder and CTO of Qumulo. “The collaboration between HPE, Qumulo and DreamWorks Animation demonstrates the power of technology innovation to push the industry forward. DreamWorks Animation and large enterprises can now implement a truly modern IT infrastructure for storing and managing file-based data at scale, achieve remarkable efficiencies and extreme performance, and gain real-time analytics for their massive data footprints.”

About Qumulo

Qumulo, headquartered in Seattle, is the leader in modern scale-out storage, enabling enterprises to manage and store enormous numbers of digital assets through real-time analytics built directly into the file system. Qumulo Core is a software-only solution designed to leverage the price/performance of commodity hardware coupled with the modern technologies of flash, virtualization and cloud. Qumulo was founded in 2012 by the inventors of scale-out NAS, and has attracted a team of storage innovators from Isilon, Amazon Web Services, Google and Microsoft. Qumulo has raised $130 million in four rounds of funding from leading investors. For more information, visit the Qumulo website.

Source: Qumulo

The post DreamWorks Taps HPE, Qumulo to Accelerate Digital Content Pipeline appeared first on HPCwire.

OCF Supports Scientific Research at the Atomic Weapons Establishment

Mon, 04/24/2017 - 07:40

LONDON, April 24, 2017 — High Performance Computing (HPC), storage and data analytics integrator OCF is supporting scientific research at the UK Atomic Weapons Establishment (AWE) with the design, testing and implementation of a new HPC cluster and a separate big data storage system.

AWE has been synonymous with science, engineering and technology excellence in support of the UK’s nuclear deterrent for more than 60 years. AWE, working to the required Ministry of Defence programme, provides and maintains warheads for the Trident nuclear deterrent.

The new HPC system is built on IBM’s POWER8 architecture and a separate parallel file system, called Cedar 3, built on IBM Spectrum Scale. In early benchmark testing, Cedar 3 is operating 10 times faster than the previous high-performance storage system at AWE. Both server and storage systems use IBM Spectrum Protect for data backup and recovery.

“Our work to maintain and support the Trident missile system is undertaken without actual nuclear testing, which has been the case ever since the UK became a signatory to the Comprehensive Nuclear Test Ban Treaty (CTBT); this creates extraordinary scientific and technical challenges – something we’re tackling head on with OCF,” comments Paul Tomlinson, HPC Operations at AWE. “We rely on cutting-edge science and computational methodologies to verify the safety and effectiveness of the warhead stockpile without conducting live testing. The new HPC system will be vital in this ongoing research.”

From the initial design and concept to manufacture and assembly, AWE works across the entire life cycle of warheads through the in-service support to decommissioning and disposal, ensuring the maximum safety and protecting national security at all times.

The central data storage system, Cedar 3, will be used by scientists across the AWE campus, with data replicated across the site.

“The work of AWE is of national importance, and so its team of scientists needs complete faith and trust in the HPC and big data systems in use behind the scenes, and in the people deploying the technology,” says Julian Fielden, managing director, OCF. “Through our partnership with IBM, and the people, skills and expertise of our own team, we have been able to deliver a system that will enable AWE to maintain its vital research.”

The new HPC system runs on a suite of IBM POWER8 processor-based Power Systems servers running the IBM AIX V7.1 and Red Hat Enterprise Linux operating systems. The HPC platform consists of IBM Power E880, IBM Power S824L, IBM Power S812L and IBM Power S822 servers, providing ample processing capability to support all of AWE’s computational needs, plus an IBM tape library device to back up computation data.

Cedar 3, AWE’s parallel file system storage, is built on an IBM Storwize storage system. IBM Spectrum Scale enables AWE to more easily manage data access among multiple servers.

About the Atomic Weapons Establishment (AWE)

The Atomic Weapons Establishment has been central to the defence of the United Kingdom for more than 60 years through its provision and maintenance of the warheads for the country’s nuclear deterrent. This encompasses the initial concept, assessment and design of the nuclear warheads, through component manufacture and assembly, in-service support, decommissioning and then disposal.

Around 4,500 staff are employed at the AWE sites together with over 2,000 contractors. The workforce consists of scientists, engineers, technicians, crafts-people and safety specialists, as well as business and administrative experts – many of whom are leaders in their field. The AWE sites and facilities are government owned, but the UK Ministry of Defence (MOD) has a government-owned contractor-operated contract with AWE Management Limited (AWE ML) to manage the day-to-day operations and maintenance of the UK’s nuclear stockpile. AWE ML is formed of three shareholders – Lockheed Martin, Serco and Jacobs Engineering Group. For further information, visit the AWE website.

About OCF

OCF specialises in supporting the significant big data challenges of private and public UK organisations. Our in-house team and extensive partner network can design, integrate, manage or host the high performance compute, storage hardware and analytics software necessary for customers to extract value from their data. With a 14-year heritage in HPC, managing big data challenges, OCF now works with over 20 per cent of the UK’s Universities, Higher Education Institutes and Research Councils, as well as commercial clients from the automotive, aerospace, financial, manufacturing, media, oil & gas, pharmaceutical and utilities industries.

Source: OCF

The post OCF Supports Scientific Research at the Atomic Weapons Establishment appeared first on HPCwire.

Internet2 Announces Winners of 2017 Gender Diversity Award

Mon, 04/24/2017 - 07:36

WASHINGTON, D.C., April 24, 2017 — Internet2 today announced six recipients of the Gender Diversity Award and two recipients of the Network Startup Resource Center (NSRC)-Internet2 Fellowship ahead of its annual meeting, the Internet2 Global Summit, taking place this week in Washington, D.C. from April 23-26. The Global Summit meeting hosts nearly 1,000 C-level information technology decision-makers and high-level influencers from higher education, government and scientific research organizations. This year’s winners and fellows are:

  • Zeynep Ondin, Virginia Tech, gender diversity award winner
  • Meloney Linder, University of Wisconsin, gender diversity award winner
  • Courtney Fell, University of Colorado Boulder, gender diversity award winner
  • Kerry Havens, University of Colorado Boulder, gender diversity award winner
  • Claire Stirm, Purdue University, gender diversity award winner
  • Jieyu Gao, Purdue University, gender diversity award winner
  • Sarah Kiden, Uganda Christian University, NSRC-Internet2 fellow
  • Dr. Kanchana Kanchanasut, Asian Institute of Technology in Thailand, NSRC-Internet2 fellow

According to a recent report by the National Center for Science and Engineering Statistics, while women have reached parity with men among science and engineering degree recipients overall, they constitute disproportionally smaller percentages of employed scientists and engineers than they do of the U.S. population.

The Gender Diversity Award was established in 2014 by the Internet2 community as part of a larger Gender Diversity Initiative, with the aim of improving gender diversity in the information technology field within research and education. It provides awardees the opportunity to engage in discussions around the latest applied innovations and best-practices for their campuses, as well as access to mentors and a network of women IT and technology professionals. The Gender Diversity Award is offered twice a year, once at the Internet2 Global Summit meeting and once at the Internet2 Technology Exchange meeting.

Since 2011, the NSRC and Internet2 have worked with universities, network service providers, and industry and government agencies in Africa, Asia, Europe, the Pacific Islands, the Middle East, Latin America, and the Caribbean to provide support to research and education communities in countries underserved by the current research and education networking infrastructure.

“We continue to see a growing number of talented nominees each year and I’m so grateful for our community’s continued efforts to promote diversity and support our colleagues who are just starting their career or thinking about growing their career in the IT and technology field,” said Ana Hunsinger, Internet2’s vice president of community engagement. “These awards and fellowships are significant because they remove financial barriers to women’s participation in timely discussions around applied innovations and best-practices in their profession, and give them access to a new experience of professional growth and development for their career. It’s also an opportunity for our community to engage with talented individuals from both the U.S. and abroad, and help mentor the next generation of community leadership.”

Both the award and fellowship cover travel expenses, hotel accommodation, and conference registration for the 2017 Global Summit. Funding for two of this year’s awards is made possible by the Internet2 Gender Diversity Initiative, while Cisco Systems and ServiceNow, in their capacity as industry sponsors of the 2017 Global Summit, are funding one award each. The University of Colorado Boulder and Purdue University are providing travel support for one of their respective award winners. Funding for the two fellowships is provided by NSRC and Internet2.

Ondin, Linder, Fell, Havens, Stirm, Gao, Kiden, and Dr. Kanchanasut will be recognized during the 2017 Global Summit General Session on Wednesday, April 26 at 10:30 a.m. EST. A full list of the 2017 Internet2 Gender Diversity Award winners and NSRC-Internet2 fellows, along with their bios, appears below:

Zeynep Ondin, Ph.D., has been a user experience and interaction designer for the IT Experience & Engagement unit within the Division of Information Technology at Virginia Tech since 2016. In her current role, she works to improve the user experience across the division’s various platforms and mechanisms of user engagement in order to provide a consistent experience for all students, faculty, and staff who interact with IT systems and services. Prior to joining Virginia Tech, she spent 10 years working in various IT roles at higher education institutions.

Meloney Linder serves as associate dean for communications, facilities and technology for the University of Wisconsin – Madison, Wisconsin School of Business (WSB). Meloney’s responsibilities include strategic oversight of WSB’s brand and consumer insights, integrated marketing communications, information technology services, academic technology and web, and building and conference services for the school. She is committed to advancing higher education and the mission of the UW-Madison and Wisconsin School of Business through collaboration. Meloney serves as an advisor on WSB Dean’s Leadership Team and currently serves as the chair of the University of Wisconsin – Madison’s divisional technology advisory group.

Courtney Fell is a learning experience designer at the University of Colorado (CU) Boulder. She first came to CU in 2007 as a Spanish instructor and soon began leveraging technology to create interactive online lessons for her language students. From there, Courtney left the classroom to support other faculty in the sound incorporation of technology in their classrooms. Courtney now works for CU’s Office of Information Technology where she partners with campus leaders to find human-centered solutions to the university’s most complex challenges. In the last few years, she has led a number of successful and transformative initiatives for CU Boulder including: moving new student orientation online for domestic and international students, developing an innovative cross-campus large lecture experience for space studies, and exploring the use of robotic technologies paired with video conferencing software to provide a flexible learning solution for CU students.

Kerry Havens is an ambitious and caring working mother and perpetual student. Working in the Office of Information Technology at the University of Colorado (CU) Boulder for the past 16 years, she developed a passion for finding broad solutions that fit many needs. She continually finds herself at a crossroads between working with people and technology and is currently seeking opportunities to solidify her path towards a career in leadership in an organization that helps kids and young adults find purpose, gratefulness, and kindness.

Claire Stirm is a science gateway manager with HUBzero in the Academic Research Computing Department at Purdue University. Stirm graduated from Purdue University in 2016 with a degree in Professional Writing and a degree in Classical Studies. Stirm is currently earning a Master’s of Science in Communication with a focus in strategic communication from the Brian Lamb School of Communication at Purdue University. Since joining the HUBzero Team, Stirm has worked with researchers in plant genomics, healthcare and volcanology. In her free time Stirm enjoys reading, writing and camping.

Jieyu Gao joined the Emerging IT Leaders program in the Information Technology department at Purdue upon her graduation from Purdue’s Applied Statistics and Economics (Honors) program in 2016. She works with researchers and faculty to help resolve their data analysis concerns. She is interested in learning new technologies and machine learning algorithms and applications.


Sarah Kiden is the head of systems at Uganda Christian University and a facilitator at the Research and Education Network for Uganda (RENU). She loves to learn, build and support systems and networks, and has been involved in coordinating capacity-building initiatives for universities and research institutions in Uganda since 2014. In her free time, she volunteers with the Internet Society Uganda Chapter, through which she took an interest and became active in internet policy development at ICANN. She recently co-founded DigiWave Africa, a non-profit organization which supports the safe and responsible use of technology by youth and children. Sarah holds an MSc in information systems and a BSc in information technology.

Dr. Kanchana Kanchanasut is a professor in computer science at the Asian Institute of Technology (AIT), Thailand. Starting in 1988, she was among the first to bring the internet to Thailand, and has worked closely with the research and education (R&E) networks in Thailand and in the Asia-Pacific region. Nearly 20 years ago, Dr. Kanchanasut established the Internet Education and Research Laboratory (intERLab) at AIT to provide much-needed capacity building for internet engineers in the region. In recognition of her pioneering role in the early days of internet development, driving cross-border R&E networks, and starting the first open and neutral Internet Exchange Point in Southeast Asia in 2015, the Bangkok Neutral Internet Exchange (BKNIX), Dr. Kanchanasut was inducted into the Internet Hall of Fame as the first representative from Thailand. In 2016, she was awarded the prestigious Jon Postel Service Award for her many years of service to R&E and internet development in Asia. Currently she is a researcher at intERLab, where she focuses on challenged-networks research and community wireless mobile network deployments.

For more information on the Global Summit, taking place April 23-26 at the Renaissance Washington, D.C. Downtown Hotel, visit

About Internet2

Internet2 is a non-profit, member-driven advanced technology community founded by the nation’s leading higher education institutions in 1996. Internet2 serves 317 U.S. universities, 70 government agencies, 43 regional and state education networks, and through them supports more than 94,000 community anchor institutions, over 900 InCommon participants, 78 leading corporations working with our community, and more than 60 national research and education network partners representing more than 100 countries.

Internet2 delivers a diverse portfolio of technology solutions that leverages, integrates, and amplifies the strengths of its members and helps support their educational, research and community service missions. Internet2’s core infrastructure components include the nation’s largest and fastest research and education network, built to deliver advanced, customized services that are accessed and secured by the community-developed trust and identity framework.

Source: Internet2

The post Internet2 Announces Winners of 2017 Gender Diversity Award appeared first on HPCwire.

Inspur Sets up New AI Department

Mon, 04/24/2017 - 01:01
Inspur Establishes Artificial Intelligence (AI) Department

Inspur announced at its annual meeting that it has set up an artificial intelligence (AI) department and will continue to introduce innovative computing platforms geared toward AI applications.

“Inspur will continue to provide advanced AI computing solutions to meet the rapidly increasing demand for AI applications”, says Liu Jun, General Manager of the AI and HPC Department at Inspur. In 2017, the company will focus on product innovations for AI computing data centers, optimizations of deep learning algorithm frameworks, as well as ecosystem development. It will soon launch the most powerful deep learning supercomputer server in the computing industry, and in conjunction with this, Inspur will continue to develop and optimize open-source deep learning frameworks such as Caffe-MPI. For its customers, Inspur will offer training in cluster management software and performance optimization tools. At the same time, Inspur will work on end-to-end AI solutions designed for industries like health care, security and finance in order to build a comprehensive industry ecosystem together with its partners.
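Caffe-MPI and similar frameworks scale training by data parallelism: each worker computes gradients on its own shard of the data, and the gradients are averaged across workers before every weight update (the job an MPI allreduce performs). The following is a minimal pure-Python sketch of that averaging step only, with a toy model and made-up data; it contains no actual MPI or Caffe code:

```python
def local_gradient(shard, w):
    # Toy stand-in for a worker's backward pass: gradient of mean
    # squared error for the model y = w * x on one data shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def allreduce_mean(values):
    # What an MPI allreduce (sum across ranks, divide by rank count) achieves.
    return sum(values) / len(values)

# Two "workers", each holding a shard of data generated by y = 2x.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(50):
    grads = [local_gradient(s, w) for s in shards]  # in parallel on real workers
    w -= 0.05 * allreduce_mean(grads)               # synchronized update
print(round(w, 3))  # → 2.0, the true slope
```

In a real cluster the list comprehension is replaced by each rank computing its own gradient, and `allreduce_mean` by a collective over the interconnect, which is where the framework-level optimization work happens.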

Inspur has been working on new developments in the AI domain. Today, the company accounts for more than 60% of China's AI computing server market. It works closely with other leading AI companies, such as Baidu, Alibaba, Tencent, iFLYTEK, Qihoo 360, Sogou, Toutiao, and Face++, on systems and applications, helping its customers achieve significant performance improvements in voice, image, video, search, networking and other applications. Inspur offers the industry's most complete GPU server product line, with 2-, 4- and 8-card configurations for standalone machines. Inspur and Baidu have also jointly developed an extendable PBox AI Rack designed for training and optimization on standalone 16-card machines, and Inspur is the only mainstream server vendor that offers FPGA accelerator cards for deep learning.

The post Inspur Sets up New AI Department appeared first on HPCwire.

Musk’s Latest Startup Eyes Brain-Computer Links

Fri, 04/21/2017 - 21:08

Elon Musk, the auto and space entrepreneur and outspoken critic of artificial intelligence, is forming a new venture that reportedly will seek to develop an interface between the human brain and computers.

The initial goal is aiding the disabled, but the visionary inventor reportedly views the AI startup as a way of forging non-verbal forms of communication while at the same time promoting ethical AI research.

Details are sketchy, but according to several reports this week, the new venture, called Neuralink Corp., would assist researchers in keeping up with steady advancements in machine intelligence. Details of the AI interface startup were first reported by the Wall Street Journal.

Neuralink’s proposed interface reportedly involves implanting “tiny electrodes in human brains.” On Thursday (April 20), Musk confirmed details of the startup, saying he would serve as chief executive. The startup’s initial goal is developing links between computers and the brain that could be used to assist the disabled.

Ultimately, Neuralink’s goal is to forge a new language Musk calls “consensual telepathy.”

More details about the neural startup emerged this week on the web site Wait But Why. Based on the premise that spoken words are merely “compressed approximations of uncompressed thoughts,” Musk explained the notion of consensual telepathy this way:

“If I were to communicate a concept to you, you would essentially engage in consensual telepathy. You wouldn’t need to verbalize unless you want to add a little flair to the conversation or something, but the conversation would be conceptual interaction on a level that’s difficult to conceive of right now.”

Asked about a timeline, Musk said a computer-brain interface for applications beyond the disabled remains nearly a decade away. “Genetics is just too slow, that’s the problem,” Musk asserted, according to the web site. “For a human to become an adult takes twenty years. We just don’t have that amount of time.”

Raising concerns about the societal implications of AI, Musk helped launch OpenAI in 2015 to redirect research toward “safe artificial general intelligence.” In launching OpenAI, Musk and his co-founders noted: “It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.”

Also in 2015, Musk donated $10 million to the Future of Life Institute that seeks to mitigate the “existential risks” posed by advanced AI.

The post Musk’s Latest Startup Eyes Brain-Computer Links appeared first on HPCwire.

MIT Mathematician Spins Up 220,000-Core Google Compute Cluster

Fri, 04/21/2017 - 17:59

On Thursday, Google announced that MIT math professor and computational number theorist Andrew V. Sutherland had set a record for the largest Google Compute Engine (GCE) job. Sutherland ran the massive mathematics workload on 220,000 GCE cores using preemptible virtual machine instances. This is the largest known high-performance computing cluster to run in the public cloud, according to Google’s Alex Barrett and Michael Basilyan.

Sutherland used Google’s cloud to explore generalizations of the Sato-Tate Conjecture and the conjecture of Birch and Swinnerton-Dyer to curves of higher genus, write Barrett and Basilyan on the Google Cloud Platform blog. “In his latest run, he explored 10^17 hyperelliptic curves of genus 3 in an effort to find curves whose L-functions can be easily computed, and which have potentially interesting Sato-Tate distributions. This yielded about 70,000 curves of interest, each of which will eventually have its own entry in the L-functions and Modular Forms Database (LMFDB),” they explain.

Sutherland compared the quest to find suitable genus 3 curves to “searching for a needle in a fifteen-dimensional haystack.” It’s highly compute-intensive research that can require evaluating a 50-million-term polynomial in 15 variables.

Before moving to the public cloud platform, Sutherland conducted his research locally on a 64-core machine but runs would take months. Using MIT clusters was another option, but there were sometimes access and software limitations. With Compute Engine, Sutherland can create a cluster with his preferred operating system, libraries and applications, the Google blog authors note.

According to Google, the preemptible VMs that Sutherland used are “full-featured instances that are priced up to 80 percent less than regular equivalents, but can be interrupted by Compute Engine.”

Since the computations are embarrassingly parallel, interruptions have limited impact and the workload can also grab available instances across Google Cloud Regions. Google reports that in a given hour, about 2-3 percent of jobs are interrupted and automatically restarted.

Instances were coordinated with a combination of Cloud Storage and Datastore, which assigns tasks to instances based on requests from the Python client API. “Instances periodically checkpoint their progress on their local disks from which they can recover if preempted, and they store their final output data in a Cloud Storage bucket, where it may undergo further post-processing once the job has finished,” write the blog authors. Pricing for the 220,000-core cluster was not shared.
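The checkpoint-and-restart pattern the blog authors describe can be sketched as follows. This is an illustration only, not Sutherland's actual code: the checkpoint path, task list and `process` function are hypothetical stand-ins, and a real worker would write results to a Cloud Storage bucket rather than a local list.

```python
import json
import os

CHECKPOINT = "/tmp/curve_search.ckpt"  # hypothetical local-disk path

def load_checkpoint():
    # Resume from the last saved index if a preempted run left one behind.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next_index"]
    return 0

def save_checkpoint(next_index):
    with open(CHECKPOINT, "w") as f:
        json.dump({"next_index": next_index}, f)

def process(task):
    return task * task  # stand-in for the real per-curve computation

def run(tasks, results):
    # Embarrassingly parallel loop: if the VM is preempted mid-run,
    # a restarted instance simply picks up at the checkpointed index.
    for i in range(load_checkpoint(), len(tasks)):
        results.append(process(tasks[i]))
        save_checkpoint(i + 1)
    return results
```

Because each task is independent, losing an instance costs at most the work done since the last checkpoint, which is why an hourly interruption rate of a few percent barely dents overall throughput.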

Sutherland is already planning an even larger run of 400,000 cores, noting that when you “can ask a question and get an answer in hours rather than months, you ask different questions.”

There have been several other notably large cloud runs conducted by HPC cloud specialist Cycle Computing over the years. In late 2013, Cycle spun up a 156,000-core AWS cluster for Schrödinger and the University of Southern California to power a quantum chemistry application. The year prior, Cycle Computing created a 50,000-core virtual supercomputer on AWS to facilitate Schrödinger’s search for novel drug compounds for cancer research. In November 2014, Cycle customer HGST ran a one-million-simulation job in eight hours to help identify an optimal advanced drive head design. At peak, the cluster incorporated 70,908 Ivy Bridge cores with a peak performance of 729 teraflops.

Cycle has also leveraged the Google Compute Engine (GCE). In 2015, Cycle ran a 50,000-core cancer gene analysis workload for the Broad Institute using preemptible virtual machine instances.

Amazon has also benchmarked several self-made clusters for the Top500 list. The most recent, a 26,496 core Intel Xeon cluster, entered the list in November 2013 at position 64 with 484 Linpack teraflops. As of November 2016, the cluster was in 334th position.

The post MIT Mathematician Spins Up 220,000-Core Google Compute Cluster appeared first on HPCwire.

NERSC Cori Shows the World How Many-Cores for the Masses Works

Fri, 04/21/2017 - 11:49

As its mission, NERSC (the National Energy Research Scientific Computing Center), the high performance computing center for the U.S. Department of Energy Office of Science, supports a broad spectrum of forefront scientific research across diverse areas that include climate, materials science, chemistry, fusion energy, high-energy physics and many others.

“We have about 6,000 users – with 700 different codes – who are doing research across all fields of interest to the Office of Science and we support them all,” said Richard Gerber, NERSC HPC department head and senior science advisor. “That means that all our users and all their codes have to run, and run well, on our systems. One of our challenges is to get our entire workload to run efficiently and effectively on next-generation supercomputers. This goal has become known as ‘Many core for the masses,’ and that’s what we will be spending a lot of time working on in the upcoming year.”

By definition then, many-cores for the masses at NERSC means getting all the Office of Science applications running on NERSC’s new Cori supercomputer, which pairs 9,300 Intel Xeon Phi processor nodes (the processor formerly known as Knights Landing, or KNL) with 1,900 Intel Xeon compute nodes.

“Cori is NERSC’s first manycore system and is on the path to exascale,” Gerber continued. “In particular it’s the first system where single-thread performance may be lower than single-thread performance on the previous system. This presents a real challenge for some users.” The Cori supercomputer also presents a deeper memory/storage hierarchy from the Intel Xeon Phi processor on-package MCDRAM, to DDR, to a burst buffer flash storage layer and all the way through to the Lustre file system.

In preparing for Cori over the past two years, the NERSC team launched NESAP (the NERSC Exascale Science Applications Program), which is a collaborative effort where NERSC partners with code teams, library and tools developers, Intel, Cray and the HPC community to prepare for the Cori many-core architecture. Twenty projects were selected for NESAP based on computational and scientific reviews by NERSC and other DOE staff. These projects represent about half of the runtime hours utilized on the NERSC supercomputers.

Figure 1: NESAP activities

The idea is to provide training for staff and postdocs and apply the lessons learned to the broad NERSC user community. These lessons are also widely applicable to the general Intel Xeon and Intel Xeon Phi processors user community. “As we learn things, a big part of our strategy is to take that knowledge and spread it out to the community – the community of our 6,000 users but also the worldwide community,” Gerber pointed out in the NERSC talk at the recent Intel HPC Developer conference, Many Cores for the Masses: Lessons Learned from Application Readiness Efforts at NERSC for the Knights Landing Based Cori System.

Jack Deslippe, who leads the NESAP effort and the NERSC Application Performance Group, reiterated the point that “Cori represents the first machine that NERSC has procured where doing nothing means that a user’s code can actually run slower on the new system node-per-node.” That is why the NESAP program is an “all hands on deck” effort to work at a much deeper level with user code than NERSC has done before. “This effort has touched every group at NERSC,” he said, “and has created a level of collaboration with Cray and Intel engineers on apps that has never occurred at the center before.”

Optimization for the Masses

When talking to scientists and users, the NERSC team likens the optimization process to an ant farm, an analogy that has become popular, no doubt, due to its silliness. “This is the sort of out-of-the-box thinking that gives you a promotion at Berkeley,” Deslippe noted in an SC16 talk, which garnered a hearty laugh from the audience. The truth reflected by the ant hill model (shown below) is that optimizing code is not always a straightforward process. In particular, Deslippe observed that “it is easy to get lost in the weeds,” especially with Intel Xeon Phi processors, given the wealth of new architectural features on these devices that a programmer might want to target.

Figure 2: How to talk to the masses about optimizing codes for Cori

Profiling your code is “like a lawnmower that constantly finds and knocks down the next tallest blade of grass,” he said, an analogy to optimizing the next section of code that consumes the greatest amount of runtime. The programmers then take that code section away for investigation. To bring order to the ant farm, NERSC has employed the roofline model, which tells programmers not only how much they are improving the code against an absolute measure of performance (shown on the y-axis below), but also which architectural features might help. The position of a code's performance relative to the ceilings in the model shows where potential performance gains can be achieved, whether via vectorization (AVX), code restructuring for instruction-level parallelism (ILP), or more efficient use of the high-bandwidth memory (HBM).
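The roofline model itself reduces to a single formula: attainable performance is the lesser of the machine's peak compute rate and the kernel's arithmetic intensity (FLOPs per byte moved) times memory bandwidth. A minimal sketch, using illustrative rather than official KNL-class numbers:

```python
def attainable_gflops(arith_intensity, peak_gflops, mem_bw_gbs):
    # Roofline: performance is capped either by compute or by memory
    # bandwidth, whichever ceiling the kernel hits first.
    return min(peak_gflops, arith_intensity * mem_bw_gbs)

PEAK = 3000.0       # GFLOP/s, assumed node peak
MCDRAM_BW = 400.0   # GB/s, assumed high-bandwidth on-package memory
DDR_BW = 90.0       # GB/s, assumed DDR memory

ai = 5.0  # FLOPs per byte, measured for some hypothetical kernel
print(attainable_gflops(ai, PEAK, MCDRAM_BW))  # → 2000.0 (memory-bound in MCDRAM)
print(attainable_gflops(ai, PEAK, DDR_BW))     # → 450.0 (far worse out of DDR)
```

A kernel sitting under the bandwidth ceiling gains most from data-movement fixes, such as running out of MCDRAM, while one under the compute ceiling gains from vectorization and ILP.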

Figure 3: The roofline model is a valuable optimization tool

The ability to easily collect accurate roofline performance data is the result of collaboration between Intel and NERSC staff. The NERSC team is actively working with Intel on the co-design of performance tools in the Intel Advisor utility, which now includes the roofline model.

Early Cori Intel Xeon Phi Processor Single-Node Results

Early single Intel Xeon Phi processor node results show excellent speedups on the NESAP codes with a maximum speedup of 13x for the BerkeleyGW package, a set of computer codes written at Berkeley that calculate the quasiparticle properties and optical responses for a large variety of materials.

The optimization of one of the kernels (Kernel-C) utilized the roofline model, and the performance impact of six optimization steps is shown below. Note that the optimization process also delivered significant performance increases on the Intel Xeon processors.

Figure 4

Overall, the NESAP optimization process delivered significant increases in performance on both the Intel Xeon and Intel Xeon Phi processor Cori computational nodes. Intel Xeon processor results are shown in orange below and Intel Xeon Phi processor results in blue. In most cases, the speedup was greater on the Intel Xeon Phi processors than on the Intel Xeon processors. Doug Doerfler noted that “Haswell tends to be more forgiving of unoptimized code.” The Boxlib code is one exception because it started as a bandwidth-limited code that fit into the Intel Xeon Phi processor MCDRAM memory.

Figure 5

In general, the MCDRAM system benefitted most of the NESAP applications.

Figure 6: NESAP performance improvements attributed to the MCDRAM memory system

Early Cori Scaling Studies

Cori contains a large number of computational nodes, so scaling is a key factor in efficiently utilizing the machine. Of concern is the observation that the Intel Xeon Phi cores deliver roughly one-third the sequential performance of a Haswell/Broadwell Xeon core. However, this lower-performance core must support both the application and much of the communication stack, including the processing of MPI communication calls.

Summarized in the following graphic, NERSC has found that Cori shows performance improvements at all scales and decompositions.

Figure 7: The Cori supercomputer shows scaling speedups at all scales


For a majority of the reported NESAP codes and kernels, single-node runs on the Intel Xeon Phi nodes outperformed single-node runs on the Intel Xeon processor (Haswell) nodes. However, the superior Intel Xeon Phi processor performance came only after optimization guided by the roofline model.

About the Author

Rob Farber is a global technology consultant and author with an extensive background in HPC and in developing machine learning technology that he applies at national labs and commercial organizations. He was also the editor of Parallel Programming with OpenACC. Rob can be reached at

The post NERSC Cori Shows the World How Many-Cores for the Masses Works appeared first on HPCwire.

Academic Communities Join Forces to Safeguard Against Cyberattacks

Fri, 04/21/2017 - 08:51

DENVER, Colo., April 21, 2017 — The increase in the risk from cyberattacks has received significant attention from the research and education (R&E) community and has spurred many campuses to adopt new security controls and implement additional tools to protect their institutions. These risks include:

  • ransomware attacks, which typically target a system or computer with the intent to disrupt or block access to data until a ransom is paid;
  • distributed denial-of-service (DDoS) attacks, which are intended to interfere with the availability of a campus’ network or applications.

“There are always new threats to cybersecurity, and the threats often evolve faster than the safeguards,” said Kim Milford, executive director of the Research and Education Networking Information Sharing and Analysis Center (REN-ISAC), which started in 2002 and coordinates information sharing about computer security threats and countermeasures among higher education institutions. “Through active sharing among the research and education community about the most current threats, we can collectively defend against them by updating our processes and finding safeguards that most effectively protect against those threats.”

Higher education institutions are susceptible to cyberthreats for many different reasons, but they are likely targets due to their computing resources, intellectual property, and the vast amount of personal information belonging to their students, faculty, and staff. In 2016, REN-ISAC sent out 67,000 notifications to R&E institutions about machines potentially compromised by vulnerability exploits.

Milford will be presenting an annual assessment of the current risks along with practical and operational advice to the research and education community at the 2017 Internet2 Global Summit on Tuesday, April 25 from 4:30 – 5:30 p.m. EST at the Renaissance Downtown Washington, D.C. Hotel.

EDITOR’S NOTE: Interviews are available to members of the media upon request. Reporters interested in obtaining a press badge for the 2017 Global Summit should contact Sara Aly,

Read the full press release here.



About REN-ISAC

Established in 2004 as part of the National Council of Information Sharing and Analysis Centers (ISACs), the Research and Education Networking Information Sharing and Analysis Center (REN-ISAC) is a member organization committed to aiding and promoting cybersecurity protection in the research and education (R&E) community. With over 500 member institutions and 1,600 active participants, REN-ISAC helps to analyze cybersecurity threat trends and protection techniques that impact R&E. REN-ISAC analyzes this information, along with information provided in publicly available resources such as the Verizon Data Breach Report, and provides R&E IT professionals with alerts, advisories, ongoing discussions and recommendations to help reduce risks. For more information, visit

Source: Internet2

The post Academic Communities Join Forces to Safeguard Against Cyberattacks appeared first on HPCwire.