Related News – HPCwire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Bio-IT World Announces 2017 Best Practices Awards Winners

Fri, 05/26/2017 - 08:54

NEEDHAM, Mass., May 26, 2017 — Bio-IT World has announced the winners of the 2017 Best Practices Awards this morning at the Bio-IT World Conference and Expo in Boston, MA. Entries from Maccabi Healthcare System, Rady Children’s Institute for Genomic Medicine, Allotrope Foundation, Earlham Institute, Biomedical Imaging Research Services Section (BIRSS), and Alexion Pharmaceuticals were honored.

Since 2003, the Bio-IT World Best Practices Awards have honored excellence in bioinformatics, basic and clinical research, and IT frameworks for biology and drug discovery. This year, winners were chosen in four categories, along with two discretionary awards.

“Looking back at the fourteen years since our first Best Practices competition, I am amazed by how far the bio-IT field has come. I continue to be inspired by the work done in our field,” said Bio-IT World Editor Allison Proffitt. “The Bio-IT World Community is increasingly open, and the partnerships and projects showcased here prove our dedication to collaborative excellence.”

Bio-IT World debuted the Best Practices Awards at the second Bio-IT World Conference & Expo in 2003, hoping to not only elevate the critical role of information technology in modern biomedical research, but also to highlight platforms and strategies that could be widely shared across the industry to improve the quality, pace, and reach of science. In the years since, hundreds of projects have been entered in the annual competition, and over 80 prizes have been given out to the most outstanding entries.

This year, a panel of eleven invited expert judges joined the Bio-IT World editors in reviewing detailed submissions from pharmaceutical companies, academic centers, government agencies, and technology providers.

The awards ceremony was held at the Seaport World Trade Center in Boston, where the winning teams received their prizes from Proffitt, veteran judge Chris Dwan, and Philips Kuhl, president of conference organizer Cambridge Healthtech Institute.

2017 Bio-IT World Best Practices Award Winners:

Clinical IT & Precision Medicine: Maccabi Healthcare System nominated by Medial EarlySign

Identifying High-Risk, Under-the-Radar Patients

In October 2015, Maccabi Healthcare System joined forces with Medial EarlySign to implement advanced AI and machine learning algorithms that uncover the “hidden” signals within electronic medical records (EMRs) and identify unscreened individuals at high risk of harboring colorectal cancer. The system used only existing EMR data, including routine blood counts.

ColonFlag evaluated nearly 80,000 outpatient blood count test results collected over one year and flagged 690 individuals (approximately 1%) as the highest-risk population for further evaluation. Of those, 220 underwent colonoscopy; 42% of these procedures yielded findings, including 20 cancers (10%).

Informatics: Rady Children’s Institute for Genomic Medicine nominated by Edico Genome

Precision medicine for newborns by 26-hour Whole Genome Sequencing

Genetic diseases, of which there are more than 5,000, are the leading cause of death in infants, especially in Neonatal Intensive Care Units (NICU) and Pediatric Intensive Care Units (PICU). The gateway to precision medicine and improved outcomes in NICUs/PICUs is a rapid genetic diagnosis, yet diagnosis by standard methods, including whole genome sequencing (WGS), is too slow to guide NICU/PICU management. Edico Genome, Rady Children’s Institute for Genomic Medicine, and Illumina have developed scalable infrastructure to enable widespread deployment of ultra-rapid diagnosis of genetic diseases in NICUs and PICUs. First described in “A 26-hour system of highly sensitive WGS for emergency management of genetic diseases” in September 2015, this infrastructure has since been improved and implemented at Rady Children’s Hospital (RCH). Among the first 48 RCH infants tested, 23 received diagnoses and 16 had a substantial change in NICU/PICU treatment. Other children’s hospitals are now being equipped to emulate these results.

Knowledge Management: Allotrope Foundation

The Allotrope Framework: A holistic set of capabilities to improve data access, interoperability and integrity through standardization, and enable data-driven innovation

The Allotrope Framework comprises three components: a technique-, vendor-, and platform-independent file format for data and contextual metadata, with class libraries to ensure consistent implementation; taxonomies and ontologies, an extensible controlled vocabulary for unambiguously describing and structuring metadata; and data models that describe the structure of the data.

Member companies, collaborating with vendor partners, have begun to demonstrate how the Framework enables cross-platform data transfer, facilitates finding, accessing and sharing data, and enables increased automation in laboratory data flow with a reduced need for error-prone manual input. The first production release is available to members and partners (as of Q4 2015), and phased public releases of the framework components will become available beginning mid-2017.

IT infrastructure/HPC: Earlham Institute

Improving Global Food Security and Sustainability By Applying High-Performance Computing To Unlock The Complex Bread Wheat Genome

One of the most important global challenges facing humanity will be feeding a world population of approximately nine billion people by 2050. Wheat is grown on the largest area of land of any crop, over 225 million hectares, and more than two billion people worldwide depend on it as their daily staple. Unfortunately, the six primary crop species see up to 40% loss in yield due to plant disease. Furthermore, a changing climate, increased degradation of arable land, reduction in biodiversity through rainforest destruction, and rising sea levels all contribute to declining crop yields that greatly undermine global food security and sustainability. A solution to this grand challenge is to unlock the complex genomics of important crops, such as bread wheat, to identify the genes that underlie resistance to disease and environmental factors. One of the toughest crops to tackle, bread wheat has a hugely complex genome five times larger than the human genome, at 17 billion base pairs of DNA. By exploiting leading-edge HPC infrastructure deployed at the Earlham Institute (EI), scientists have now assembled the genomic blueprint of bread wheat for the very first time. By analyzing this assembly, breeders worldwide can begin to explore new variations of wheat that exhibit the very traits needed to improve its durability in the face of persistent disease and climate change.

Judges’ Choice: Biomedical Imaging Research Services Section (BIRSS) nominated by SRA International

Biomedical Research Informatics Computing System (BRICS)

The Biomedical Research Informatics Computing System (BRICS) is a dynamic, expanding, and easily reproducible informatics ecosystem developed to create secure, centralized biomedical databases that accelerate scientific discovery by aggregating and sharing data through Web-based clinical report form generators and a data dictionary of Clinical Data Elements. Effective sharing of data is fundamental in this new era of data informatics, and such informatics advances create both technical and political challenges to using biomedical resources efficiently and effectively. Designed to be initially unbranded and not associated with a particular disease, BRICS has so far supported multiple neurobiological studies, including the Federal Interagency Traumatic Brain Injury Research (FITBIR) program, the Parkinson’s Disease Biomarkers Program (PDBP), and the National Ophthalmic Disease Genotyping and Phenotyping Network (eyeGENE). Supporting the storage of phenotypic, imaging, neuropathological, and genomics data, the BRICS instances currently hold data on more than 31,500 subjects.

Editor’s Choice: Alexion Pharmaceuticals nominated by EPAM Systems

Alexion Insight Engine

The Alexion Insight (AI) Engine is a decision support system that provides senior executives and corporate planning staff with answers to business and scientific questions across a landscape of approximately 9,000 rare diseases. The AI Engine filters and sorts across key criteria such as prevalence, clinical trials, severity, and onset to prioritize, in real time, diseases of interest for targets, line extensions, and business development activity. Over a period of two years, Alexion worked with EPAM to develop the AI Engine. The system integrates data from several external sources into a cloud-based, Semantic Web database. Gaps and errors in publicly available data were filled and corrected by a team of expert curators. The engine supports an interactive, web-based interface presenting the rare disease landscape, and has reduced the time required to produce recommendations to senior management on promising disease candidates from a few months to mere minutes.

About Bio-IT World (www.Bio-ITWorld.com)

Part of Healthtech Publishing, Bio-IT World provides outstanding coverage of cutting-edge trends and technologies that impact the management and analysis of life sciences data, including next-generation sequencing, drug discovery, predictive and systems biology, informatics tools, clinical trials, and personalized medicine. Through a variety of sources, including Bio-ITWorld.com, the Weekly Update Newsletter, and the Bio-IT World News Bulletins, Bio-IT World is a leading source of news and opinion on technology and strategic innovation in the life sciences, including drug discovery and development.

About Cambridge Healthtech Institute (www.healthtech.com)

Cambridge Healthtech Institute (CHI), a division of Cambridge Innovation Institute, is the preeminent life science network for leading researchers and business experts from top pharmaceutical and biotech companies, CROs, academia, and niche service providers. CHI is renowned for its vast conference portfolio held worldwide, including PepTalk, Molecular Medicine Tri-Conference, SCOPE Summit, Bio-IT World Conference & Expo, PEGS Summit, Drug Discovery Chemistry, Biomarker World Congress, World Preclinical Congress, Next Generation Dx Summit and Discovery on Target. CHI’s portfolio of products includes Cambridge Healthtech Institute Conferences, Barnett International, Insight Pharma Reports, Cambridge Marketing Consultants, Cambridge Meeting Planners, Knowledge Foundation, Bio-IT World, Clinical Informatics News and Diagnostics World.

Source: Bio-IT World


PRACEdays Strengthens European HPC Community Ties

Thu, 05/25/2017 - 20:39

More than 250 attendees and participants came together for PRACEdays17 in Barcelona last week, part of the European HPC Summit Week 2017, held May 15-19 at the Polytechnic University of Catalonia. The program was packed with high-level international keynote speakers covering the European HPC strategy and science and industrial achievements in HPC. A diverse mix of engaging sessions showcased the latest advances across the array of computational sciences within academia and industry.

What began as mainly an internal PRACE conference now boasts an impressive scientific program. Chair of the PRACE Scientific Steering Committee Erik Lindahl is one of the people responsible for the program’s growth and success. At PRACEdays, HPCwire spoke with the Stockholm University biophysics professor (and GROMACS project lead) about the goals of PRACE, his role with the conference and his research interests. So much interesting ground was covered that we’re presenting the interview in two parts, with part one focusing on PRACE and PRACEdays activities and part two showcasing Lindahl’s research interests and his perspective on where HPC is heading with regard to artificial intelligence and mixed-precision arithmetic.

HPCwire: Tell us about your role as Chair of the PRACE Scientific Steering Committee.

Erik Lindahl

Erik Lindahl: The scientific steering committee is really the scientific oversight body, and our job is to do the scientific prioritization in PRACE. The reason I have engaged in PRACE was very much based on creating a European network of science and making sure that, rather than being happy just competing in Sweden – Sweden is a nice country but it’s a very small part of Europe – what I really love about PRACE is we are getting researchers throughout Europe to have a common community of computing. And I think this is a more important goal of PRACE than we realize. Machines are nice, but machines come and go, and four years later we’ve used that money; building this network of human infrastructure, that is something that is lasting.

HPCwire: How is PRACEdays helping accomplish that goal?

Lindahl: We have all of these Centers of Excellence that we are bringing together here, so Europe has now eight Centers of Excellence that provide joint training, tutorials, and tools to improve application performance. These are very young; they’ve been around for roughly 18 months, so right now we don’t have all students going to PRACEdays – we can’t handle a conference that large – but we have all these Centers of Excellence, and the various organizations and EU projects get together and then they in turn go out and spread the knowledge in their networks. In a couple of years we might very well have a PRACEdays that’s 500 people, and then I hope we have all the students here. From the start this was mostly a PRACE internal conference, and the part that I’m very happy about is that we are increasing the scientific content, and that’s what it’s going to take for the scientists to come.

HPCwire: PRACEdays is the central event of the European HPC Summit Week 2017, now in its second year.

Lindahl: That’s also something I’m very happy to see co-organized. It comes back to the same thing: Europe has a very strong computational landscape, but we sometimes forget that because we don’t collaborate enough.

HPCwire: What is the mission of PRACE?

Lindahl: The important thing with PRACE – not just PRACEdays but PRACE as a whole project – is that we are really establishing a European organization for computing. This is partly more of a challenge in Europe because, in contrast with the U.S., where you have your 50 states but it is clear that it is one country with one grant organization sponsoring computing, the national organizations of Europe are far stronger than the states in the US, while on the equivalent of the federal level, the European Union, the system has historically been much weaker. What PRACE has established is that we finally have an organization that is not just providing computing cycles in the European arena, but also helping establish the vision for computing: how should scientists in Europe push computing, and what are the really big grand challenges that people should start approaching? And the challenge here is that no matter how good individual groups are, these problems are really hard, just as you are seeing in the states – as nice as California is, if California tried to go it alone it would find it pretty difficult to compete with China and Japan.

HPCwire: How does PRACE serve European researchers?

Lindahl: The main role of PRACE is to provision resources and PRACE makes it possible for researchers to get what we call tier 0 resources for the very largest problems, the problems that are so large that it gets difficult to allocate them in a single country, and in particular most of these national systems tend to have, I wouldn’t say conservative programs, but kind of continuous allocations. What PRACE tries to push is these really grand challenge ideas: risky research; it’s perfectly okay to fail. You can spend one hundred million core hours to possibly solve a really difficult problem. I think in large part we are starting to achieve that. As always, of course, scientists want more resources. I’m very happy with the way that PRACE 2 has gotten countries to sign on and significantly increase the resources compared with what we had a few years ago.

The other part that I personally really like about PRACE is the software values, and part of it of course has to do with establishing a vision and making sure there is really good education, because no matter how good our universities are, when people are sitting in Stockholm, Barcelona or Frankfurt, there might be only a handful of students in their area. PRACE makes it possible to provide training at a much more advanced level than we normally can in our national systems. Cost-wise it is not as large a part of the budget, but when it comes to competing and [facilitating] advanced computing, it is probably just as important as buying these machines.

The third part of this has to do with our researchers, and this is where my role comes in as chair of the scientific steering committee. Researchers, we are a bit of a split personality. On the one hand we don’t like to apply for resources; writing research grants takes time away from the research you would like to be doing. On the other hand, a very important aspect of having to compete for resources is that when we are writing these grant applications, that’s also when we need to formulate our ideas – that’s when I need to be better than I was two or three years ago. Can I identify the really important problems to solve here, what I would like to do the next few years? I think here, surprisingly, lies a danger in our national systems, in particular the ones that are fairly generously funded, because in a generously funded system you become complacent and you are kind of used to getting your resources. What I like with PRACE is you get a challenge: what if you had a factor of ten more resources than you do now? But you can’t just say that you would like to have it; you need to have a really good idea to get that, and it starts to challenge our best researchers, who in essence compete against each other in Europe and become better than they were last year. I think that’s a very important driving factor for science.

HPCwire: What is the vision for the PRACEdays conference?

Lindahl: PRACEdays is fairly young as a conference and we are still trying to help it find its form. It’s not really an industry conference in the sense of having vendors here – there are other great venues, both ISC and Supercomputing, and we see no point in trying to compete with them – but we are increasingly trying to move PRACEdays to become the venue where the scientists meet. Not necessarily by discipline, because as a biophysicist I tend to go to a biophysical society meeting, but of course there are lots of people working with computational aspects that are interdisciplinary, or who might very well be using similar types of molecular simulation models in, say, materials sciences. [At PRACEdays] we really focus on computational techniques. We get to see what people are doing in other domains. We are going to start having computers with one million processors, and I think as scientists it’s very easy to try to become incrementally better – we all do that all the time; my code scales better this year than it did last year – but we have colleagues that already scale to a quarter million processors. That’s a challenge; we need to become 100 times better than we are, which is of course difficult, but if we don’t even think about it, we don’t start to do the work. I like these challenges because I’m seeing what people can do in other areas that I don’t get at my disciplinary conferences.

PRACEdays is also a venue where we get to meet all the different groups – the Centers of Excellence that the European commission has started to fund, so I think all of this is part of a budding computational infrastructure that is really shared in Europe. It’s certainly not without friction. If there wasn’t any friction it would be because we weren’t approaching hard problems. But I think things are really moving in the right direction and we are starting to establish a scheme where if you are like me, if you are a biophysicist, you should not just go to your national organization; the best help, the best resources, the best training is on the European [level] today and that I’m very happy with.

HPCwire: Is it fair to think of PRACE as parallel to XSEDE in the US?

Lindahl: Yes and no; they have slightly different roles. PRACE works very closely with XSEDE, and we are doing wonderful things together in training, and we’re very happy to have them there. When it comes to the provisioning of resources, PRACE is more similar to the INCITE program, and this is intentional.

I think XSEDE does a wonderful thing in the US. The main thing that XSEDE managed to change in the US was to put the focus on the users – not just on buying sexier machines, or how many boxes or FLOPS you have, but on what you are really doing for science and what the scientist needs – and that was sorely needed, not just in the US but throughout the world.

This is a development that has happened in Europe too, but the challenge with Europe is that we have lots of countries with very strong existing organizations, and if PRACE went in and started to take over the normal computing, I think you would suddenly alienate all these national organizations that PRACE still very much depends on having good relations with. That’s also why we’ve said that PRACE will engage on all these levels when it comes to training and organization.

We have what we call a Tier 1 program, where it’s possible for researchers to get access to a large resource, say Knights Landing. A researcher in Europe who needs access to a special computer that’s not available in their own country can get access to it through these collaborative programs.

Then PRACE itself has hardware access through a program that’s much more similar to INCITE, for the very largest programs, the ones that are really too large for any of the national systems. I think overall that works well, because on this level most countries see it as a complement rather than competition for their existing organizations.

HPCwire: Science and industry sometimes have split incentives. How much involvement should science have with industry and what’s your perspective on how public private partnerships and similar arrangements should work?

Lindahl: This is a difficult question, and it comes down to the question of what HPC is. The traditional view that we’ve taken, particularly in academia, is that we focus on all of these very high-end machines – whether it’s a petaflop, exaflop, yottaflop – the very extreme moonshot programs. That is of course important to large fields of science, or I actually would say the reason academia stresses this is because academia’s role is to push the boundaries, and industry normally shouldn’t be at the boundary, with a couple of exceptions today.

I think the joint role we have both in academia and industry is understanding this whole spectrum of approaches. Scientists might be thinking of running MPI over millions of processors, but the very same techniques – if we can improve scaling, if we can make computers work faster – are used in machine learning too. In machine learning you might only run over four nodes, but they too are just as interested in making this run faster; it’s just that the problems they apply them to might be slightly different.

The other part that I think has changed completely in the last few years is this whole approach with artificial intelligence and machine learning, which is now so extremely dependent on floating point performance in general. What we today call graphics processors, accelerators – they are now everywhere; it’s probably just a matter of time before you have a petaflop in your car. And it was less than ten years ago that a petaflop was the sexiest machine we had in the world. At that level, even in your car, you are going to run parallel computations over maybe 20,000 cores. When I was a student, we didn’t dream of that level of parallelism. Somewhere there, I think, you are going to run on different machines, because you wouldn’t buy a car if it cost you a billion dollars. The goals and the applications are different, but the fundamental problems we work on are absolutely the same.

That was a bit of a detour, but when it comes to public-private partnerships and the challenges here, there are certainly lots of areas where we are all starting to use commodity technology, and accelerators might very well be one of them. By the time that industry has caught on, by the time there is a market, then we can just go out and procure things on the open market. But then there are of course other areas where we are not quite sure where we are going to end up yet. And industry might not yet be at the point where it turns this into a product, and if we’re talking about chip development or networking technology, these things can also be very expensive. I certainly see a role for some of these projects where we might very well have to engage together, because there is no way that an academic lab can develop a competitive microprocessor – we simply don’t have those resources. On the other hand, there is no way a company would do it alone, because they are afraid they can’t market this and can’t get their money back. So at some point, starting to collaborate on this is not just okay; I think we have to do it.

The difficult part is that we have to steer very carefully along this balance. This can’t turn into industry subsidies, and similarly it can’t turn into industry subsidizing academia either, because then it’s pointless. It’s a very difficult problem, but I don’t think we have any choice; we have to collaborate. If you start looking at machine learning nowadays – not just the most advanced hardware technology but in many cases even the software – suddenly we have commercial companies hiring academics not because they are tier 2, but because they are the very best academics. So in artificial intelligence, some of the best research environments are actually in industry, not in academia. I think it’s a new world, but one we will gradually have to adapt to.

Stay tuned for part two, where Dr. Lindahl highlights his research passions and champions a promising future for HPC-AI synergies. We also couldn’t pass up the opportunity to ask about his pioneering work retooling the molecular dynamics code GROMACS to take advantage of single-precision arithmetic. It’s a fascinating story that takes on new relevance as AI algorithms push hardware vendors to optimize for single and even half precision instructions.


Russian Researchers Claim First Quantum-Safe Blockchain

Thu, 05/25/2017 - 14:08

The Russian Quantum Center today announced it has overcome the threat that quantum computers pose to cryptography by creating the first quantum-safe blockchain, securing cryptocurrencies like Bitcoin, along with classified government communications and other sensitive digital transfers.

The center said the technology has been successfully tested by one of Russia’s largest banks, Gazprombank, and that the center is now working to expand the capability to other Russian and international financial services organizations.

The announcement was greeted with a wait-and-see attitude by industry observers, including HPC analyst Steve Conway, of Hyperion (formerly IDC), who noted that, given the complexity of the use case, neither the press release nor the white paper issued by the Russian Quantum Center provided enough technical detail to validate its announcement.

“As far as the use case goes,” Conway said, “it’s pretty universally acknowledged that one of the key early uses for quantum computing is going to be for cyber defense, so that’s no surprise. Efforts like that are underway around the world. It’s difficult to assess this one in comparison with any other without having any technical details about what they’re doing.”

Addison Snell, CEO of Intersect360 Research, said, “It is still early in the development of quantum computing and difficult to compare the efficacy of the Russians’ approach versus efforts we have seen from companies like D-Wave and IBM. The most important point is that Russia, which already has capable supercomputing vendors, such as RSC and T-Platforms, is now part of the quantum computing discussion as well.”

The Russian Quantum Center said it secures the blockchain by combining quantum key distribution (QKD) with post-quantum cryptography, making it essentially “un-hackable,” according to the center. The technology creates special blocks that are signed by quantum keys rather than the traditional digital signatures, the center said, with the quantum keys generated by a QKD network.

QKD networks have become increasingly common around the world, particularly in the financial sector. China, Europe and the United States have existing QKD networks used for smart contracts, financial transactions and classified information.

Quantum computing holds the promise of delivering performance exponentially more powerful than today’s computers, but its commercial realization remains years away. It’s also seen as a major threat when in the hands of hackers.

Google appears to be at the forefront of this work – the company’s quantum-AI team has set for itself the goal of making a quantum annealer with 100 qubits by the end of this year. A qubit, or quantum bit, is the quantum computing equivalent of the classical bit. Conway pointed out that the Russian Quantum Center’s claims would require sophisticated quantum computing capabilities.

“It’s interesting because the challenges with creating a quantum computer increase dramatically with the number of qubits,” said Conway. “It’s a whole lot easier to do something with a couple of qubits than it is with hundreds or thousands of qubits. But in fact if you want to get serious about this you have to get to the thousands of qubits… I’d be surprised if this were in the thousands of qubits range, which is what you’d really need for serious cybersecurity.”


OpenMP ARB Appoints Duncan Poole of NVIDIA and Kathryn O’Brien of IBM to its Board of Directors

Thu, 05/25/2017 - 11:59

AUSTIN, Texas, May 25, 2017 — The OpenMP ARB, a group of leading hardware and software vendors and research organizations which creates the OpenMP standard parallel programming specification, has appointed Duncan Poole and Kathryn O’Brien to its Board of Directors. They bring a wealth of experience to the OpenMP ARB.

Duncan Poole is director of platform alliances for NVIDIA’s Accelerated Computing Division. He is responsible for driving partnerships where engineering interfaces are adopted by external parties who are building tools for accelerated computing. Duncan is also the president of OpenACC, and responsible for NVIDIA’s membership of OpenMP. His goal is to encourage the adoption of accelerators by developers who want good performance and portability of their accelerated code.

Kathryn O’Brien is a Principal Research Staff Member at IBM T.J. Watson Research Center, where she has worked for over 25 years. She managed the compiler team that implemented OpenMP on the CELL heterogeneous architecture. Since that time she has been heavily engaged in the adoption of OpenMP across a range of product and research compiler efforts. Over the last 8 years she has been part of the leadership team driving IBM Research’s Exascale program, where her focus has been on the evolution and development of the broader software programming and tools environment.

“Duncan and Kathryn bring us great experience,” says Partha Tirumalai, Chairman of the OpenMP Board of Directors. “We are very pleased to have them join the OpenMP board.”

In addition to Duncan and Kathryn, the board of directors of the OpenMP ARB consists of Partha Tirumalai of Oracle, Sanjiv Shah of Intel, and Josh Simons of VMware.

About OpenMP

The OpenMP ARB has a mission to standardize directive-based multi-language high-level parallelism that is performant, productive and portable. Jointly defined by a group of major computer hardware vendors, software vendors, and researchers, the OpenMP API is a portable, scalable model that gives parallel programmers a simple and flexible interface for developing parallel applications for platforms ranging from embedded systems and accelerator devices to multicore systems and large-scale shared-memory machines. The OpenMP ARB owns the OpenMP brand, oversees the OpenMP specification, and produces and approves new versions of the specification. Further information can be found at http://www.openmp.org/.

Source: OpenMP


Google Debuts TPU v2 and will Add to Google Cloud

Thu, 05/25/2017 - 10:38

Not long after stirring attention in the deep learning/AI community by revealing the details of its Tensor Processing Unit (TPU), Google last week announced that the second-generation TPU v2 will soon be added to Google Compute Engine, with availability across Google Cloud to follow. The folks in the lab are clearly busy at Google. The new TPU is said to deliver 180 teraflops of floating-point performance.

“Powerful as these TPUs are on their own, though, we designed them to work even better together. Each TPU includes a custom high-speed network that allows us to build machine learning supercomputers we call “TPU pods.” A TPU pod contains 64 second-generation TPUs and provides up to 11.5 petaflops to accelerate the training of a single large machine learning model,” wrote Jeff Dean, Google senior fellow, and Urs Hölzle, senior vice president of Google cloud infrastructure, in a blog post (Build and train machine learning models on our new Google Cloud TPUs) last week.
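
(Those two figures are internally consistent: 64 second-generation TPUs at 180 teraflops each works out to 11,520 teraflops, or roughly the 11.5 petaflops quoted per pod.)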

Google TPU Pod

Google says that using these TPU pods has already produced dramatic improvements in training times. “One of our new large-scale translation models used to take a full day to train on 32 of the best commercially-available GPUs—now it trains to the same accuracy in an afternoon using just one eighth of a TPU pod,” wrote Dean and Hölzle.

NVIDIA CEO Jensen Huang, in a lengthy blog post on the AI/deep learning revolution this week, paid tribute to Google: “It’s great to see the two leading teams in AI computing race while we collaborate deeply across the board – tuning TensorFlow performance, and accelerating the Google cloud with NVIDIA CUDA GPUs. AI is the greatest technology force in human history.”

TPUs, of course, aren’t new to Google data centers, but the company started talking about them publicly only recently in a blog and also released a technical paper, titled “In-Datacenter Performance Analysis of a Tensor Processing Unit​,” that details the design and performance characteristics of the TPU.

According to that paper, Google’s TPU was 15 to 30 times faster at inference than Nvidia’s K80 GPU and Intel Haswell CPU in a Google benchmark test. On a performance per watt scale, the TPUs are 30 to 80 times more efficient than the CPU and GPU (with the caveat that these are older designs). (See HPCwire/Datanami article by Alex Woodie, Groq This: New AI Chips to Give GPUs a Run for Deep Learning Money)

In last week’s blog, the authors note, “We’re bringing our new TPUs to Google Compute Engine as Cloud TPUs, where you can connect them to virtual machines of all shapes and sizes and mix and match them with other types of hardware, including Skylake CPUs and NVIDIA GPUs. You can program these TPUs with TensorFlow, the most popular open-source machine learning framework on GitHub, and we’re introducing high-level APIs, which will make it easier to train machine learning models on CPUs, GPUs or Cloud TPUs with only minimal code changes.”

Link to Google blog: https://www.blog.google/topics/google-cloud/google-cloud-offer-tpus-machine-learning/

Link to NVIDIA blog: https://blogs.nvidia.com/blog/2017/05/24/ai-revolution-eating-software/


Nvidia CEO Predicts AI ‘Cambrian Explosion’

Thu, 05/25/2017 - 09:59

The processing power and cloud access to developer tools used to train machine-learning models are making artificial intelligence ubiquitous across computing platforms and data frameworks, insists Nvidia CEO Jensen Huang.

One consequence of this AI revolution will be “a Cambrian explosion of autonomous machines” ranging from billions of AI-powered Internet of Things devices to autonomous vehicles, Huang forecasts.

Along with a string of AI-related announcements coming out of the GPU powerhouse’s annual technology conference, Huang used a May 24 blog post to tout the rollout of the latest hardware behind Google’s TensorFlow machine-learning framework: the Cloud Tensor Processing Unit, or TPU.

The combination of Nvidia’s new Volta GPU architecture and Google’s TPU illustrates how—in a variation on a technology theme—”AI is eating software,” Huang asserted.

Arguing that GPUs are defying the predicted end of Moore’s Law, Huang further argued: “AI developers are racing to build new frameworks to tackle some of the greatest challenges of our time. They want to run their AI software on everything from powerful cloud services to devices at the edge of the cloud.”

Nvidia CEO Jensen Huang

Along with the muscular Volta architecture, Nvidia earlier this month also unveiled a GPU-accelerated cloud platform geared toward deep learning. The AI development stack runs on the company’s distribution of Docker containers and is touted as “purpose built” for developing deep learning models on GPUs.

That dovetails with Google’s “AI-first” strategy that includes the Cloud TPU initiative aimed at automating AI development. The new TPU is a four-processor board described as a machine-learning “accelerator” that can be accessed from the cloud and used to train machine-learning models.

Google said its Cloud TPU could be mixed-and-matched with the Volta GPU or Skylake CPUs from Intel.

Cloud TPUs were designed to be clustered in datacenters, with 64 stacked processors dubbed “TPU pods” capable of 11.5 petaflops, according to Google CEO Sundar Pichai. The cloud-based Tensor processors are aimed at compute-intensive training of machine learning models as well as real-time tasks like making inferences about images.

Along with TensorFlow, Huang said Nvidia’s Volta GPU would be optimized for a range of machine-learning frameworks, including Caffe2 and Microsoft Cognitive Toolkit.

Nvidia is meanwhile releasing as open source technology its version of a “dedicated, inferencing TPU” called the Deep Learning Accelerator that has been designed into its Xavier chip for AI-based autonomous vehicles.

In parallel with those efforts, Google has been using its TPUs for the inference stage of a deep neural network since 2015. TPUs are credited with helping to bolster the effectiveness of various AI workloads, including language translation and image recognition programs, the company said.

Processing power, cloud access, and machine-learning training models are combining to fuel Huang’s projected “Cambrian explosion” of AI technology: “Deep learning is a strategic imperative for every major tech company,” he observed. “It increasingly permeates every aspect of work from infrastructure, to tools, to how products are made.”


PGAS Use will Rise on New H/W Trends, Says Reinders

Thu, 05/25/2017 - 09:25

If you have not already tried using PGAS, it is time to consider adding PGAS to the programming techniques you know. Partitioned Global Address Space, commonly known as PGAS, has been around for decades in academic circles but has seen extremely limited use in production applications. PGAS methods include UPC, UPC++, Coarray Fortran, OpenSHMEM and the latest MPI standard.

Developments in hardware design are giving a boost to PGAS performance that will lead to more widespread usage in the next few years. How much more, of course, remains to be seen. In this article, I’ll explain why interest in and support for PGAS are increasing, show some sample code to illustrate PGAS approaches, and explain why Intel Xeon Phi processors offer an easy way to explore PGAS with performance at a scale not previously available.

PGAS defined

PGAS programming models offer a partitioned global shared memory capability, via a programming language or API, whether special support exists in hardware or not. Four keys in this definition:

  • Global address space – any thread can read/write remote data
  • Partitioned – data is designated as local or global; this is NOT hidden from us – this is critical so we can write our code for locality to enable scaling
  • via a programming language or API – PGAS does not fake that all memory is shared via techniques such as copies on page faults, etc. Instead, PGAS always has an interface that a programmer uses to access this “shared memory” capability. A compiler (with a language interface) or a library (with an API) does whatever magic is needed.
  • whether special support exists in hardware or not – as a programmer, I do not care if there is hardware support other than my craving for performance!

PGAS rising

Discussion of PGAS has been around for decades. It has been steadily growing in practicality for more and more of us, and it is ripe for a fresh look by all of us programmers. I see at least three factors that are coming together which will lead to more widespread usage in the upcoming years.

Factor 1: Hardware support for more and more cores connected coherently. In the 1990s, hardware support for the distributed shared memory model emerged with research projects including Stanford DASH and MIT Alewife, and commercial products including the SGI Origin, Cray T3D/E and Sequent NUMA-Q. Today’s Intel Xeon Phi processor has many architectural similarities to these early efforts, but is designed specifically for a single-chip implementation. The number of threads of execution is nearly identical, and the performance much higher, owing largely to a couple of decades of technological advances. This trend not only empowers PGAS, it also enables exploring PGAS today at a scale and performance level never before possible.

Factor 2: Low latency interconnects. Many disadvantages of PGAS are being addressed by low latency interconnects, partly driven by exascale development. The Cray Aries interconnect has driven latencies low enough that PGAS is quite popular in some circles, and Cray’s investments in UPC, UPC++, SHMEM and Chapel reflect their continued investments in PGAS. Other interconnects, including Intel Omni-Path Architecture, stand to extend this trend. A key to lower latency is driving functionality out of the software stack and into the interconnect hardware, where it can be performed more quickly and independently. This is a trend that greatly empowers PGAS.

Factor 3: Software support growing. The old adage “where there’s smoke there’s fire” might be enough to convince us PGAS is on the rise, because software support for PGAS is leading the way. When the U.S. government ran a competition for proposals for high-productivity “next generation” programming languages for high performance computing (the High Productivity Computing Systems, or HPCS, program), three competitors were awarded contracts to develop their proposals of Fortress, Chapel, and X10. It is interesting to note that all three included some form of PGAS support. Today, we also see considerable interest and activity in SHMEM (notably OpenSHMEM), UPC, UPC++ and Coarray Fortran (the latter being a part of the Fortran standard since Fortran 2008). Even MPI 3.0 offers PGAS capabilities. Any software engineer will tell you that “hardware is nothing without software.” It appears that the hardware support for PGAS will not go unsupported, making this a key factor in empowering PGAS.

OpenSHMEM

OpenSHMEM is both an effort to standardize an API and a reference implementation as a library. This means that reads and writes of globally addressable data are performed with functions rather than simple assignments that we will see in language implementations.  Library calls may not be as elegant, but they leave us free to use any compiler we like. OpenSHMEM is a relatively restrictive programming model because of the desire to map its functionality directly to hardware. One important limitation is that all globally addressable data must be symmetric, which means that the same global variables or data buffers are allocated by all threads. Any static data is also guaranteed to be symmetric. This ensures that the layout of remotely accessible memory is the same for all threads, and enables efficient implementations.

#include <shmem.h>

int main(void)
{
    shmem_init();
    if (num_pes() < 2) shmem_global_exit(1);
    /* allocate from the global heap */
    int *A = shmem_malloc(sizeof(int));
    int B = 134;
    /* store local B at PE 0 into A at processing element (PE) 1 */
    if (my_pe() == 0) shmem_int_put(A, &B, 1, 1);
    /* global synchronization of execution and data */
    shmem_barrier_all();
    /* observe the result of the store */
    if (my_pe() == 1) printf("A@1=%d\n", *A);
    /* global synchronization to make sure the print is done */
    shmem_barrier_all();
    shmem_free(A);
    shmem_finalize();
    return 0;
}

A simple OpenSHMEM program, written according to the OpenSHMEM 1.2 specification.  C standard library headers are omitted.
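
The example above allocates its symmetric data from the global heap, but the same guarantee applies to static and global variables, as noted earlier. As a quick illustration of that point, here is a minimal sketch of my own (not from the book chapter; the variable name flag is hypothetical) that targets a global variable with shmem_int_p, the single-element counterpart of the shmem_int_put call used above:

#include <stdio.h>
#include <shmem.h>

/* Global (static-storage) data is symmetric in OpenSHMEM: every PE
   has its own copy at the same symmetric address, so it can be the
   target of remote puts without any explicit allocation. */
int flag = 0; /* hypothetical example variable; symmetric because it is global */

int main(void)
{
    shmem_init();
    if (num_pes() < 2) shmem_global_exit(1);
    /* single-element put: PE 0 writes 134 into flag on PE 1 */
    if (my_pe() == 0) shmem_int_p(&flag, 134, 1);
    shmem_barrier_all();
    if (my_pe() == 1) printf("flag@1=%d\n", flag);
    shmem_finalize();
    return 0;
}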

UPC 

Unified Parallel C (UPC) is an extension to C99. The key language extension is the shared type qualifier. Data objects that are declared with the shared qualifier are accessible by all threads, even if those threads are running on different hosts. An optional layout qualifier can also be provided as part of the shared array type to indicate how the elements of the array are distributed across threads. Because UPC is a language and has compiler support, the assignment operator (=) can be used to perform remote memory access. Pointers to shared data can also themselves be shared, allowing us to create distributed, shared linked data structures (e.g., lists, trees, or graphs). Because compilers may not always recognize bulk data transfers, UPC provides functions (upc_memput, upc_memget, upc_memcpy) that explicitly copy data into and out of globally addressable memory. UPC can allocate globally addressable data in a non-symmetric and non-collective manner, which increases the flexibility of the model and can help to enable alternatives to the conventional bulk-synchronization style of parallelism.

#include <upc.h>

int main(void)
{
    if (THREADS < 2) upc_global_exit(1);
    /* allocate from the shared heap */
    shared int *A = upc_all_alloc(THREADS, sizeof(int));
    int B = 134;
    /* store local B at PE 0 into A at processing element (PE) 1 */
    if (MYTHREAD == 0) A[1] = B;
    /* global synchronization of execution and data */
    upc_barrier;
    /* observe the result of the store */
    if (MYTHREAD == 1) printf("A@1=%d\n", A[1]);
    upc_all_free(A);
    return 0;
}

A simple UPC program, written according to the version 1.3 specification.  C standard library headers are omitted.
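
To illustrate the layout qualifier and bulk-transfer functions mentioned above, here is a second sketch of my own (not from the book chapter; the array name A and block size N are illustrative): each thread fills a private buffer and uses upc_memput to copy it in one operation into the block of a shared array owned by the next thread, with the [N] layout qualifier placing one contiguous block of N elements on each thread:

#include <stdio.h>
#include <upc.h>

#define N 4

/* The [N] layout qualifier distributes the array in blocks of N
   elements, so block t (elements t*N .. t*N+N-1) has affinity to
   thread t. */
shared [N] int A[N*THREADS];

int main(void)
{
    int local[N];
    for (int i = 0; i < N; i++)
        local[i] = MYTHREAD * 100 + i;
    /* bulk-copy the private buffer into the block owned by the next
       thread (wrapping around), rather than relying on the compiler
       to coalesce element-wise remote assignments */
    int dest = (MYTHREAD + 1) % THREADS;
    upc_memput(&A[dest * N], local, N * sizeof(int));
    upc_barrier;
    /* each thread inspects its own block, written by its neighbor */
    printf("thread %d: A[%d]=%d\n", MYTHREAD, MYTHREAD * N, A[MYTHREAD * N]);
    return 0;
}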

Fortran Coarrays

The concept of Fortran Coarrays, developed as an extension to Fortran 95, was standardized in Fortran 2008. An optional codimension attribute can be added to Fortran arrays, allowing remote access to the array instances across all threads. When using a Coarray, an additional codimension is specified using square brackets to indicate the image in which the array locations will be accessed.

program main
  implicit none
  integer, allocatable :: A(:)[:]
  integer :: B
  if (num_images() < 2) error stop
  ! allocate from the shared heap
  allocate(A(1)[*])
  B = 134
  ! store local B at image 1 into A at image 2
  ! (coarray images are numbered from 1, unlike the 0-based PEs above)
  if (this_image() .eq. 1) A(1)[2] = B
  ! global synchronization of execution and data
  sync all
  ! observe the result of the store
  if (this_image() .eq. 2) print *, 'A@2=', A(1)[2]
  ! make sure the print is done
  sync all
  deallocate(A)
end program main

A simple Fortran program, written according to the 2008 specification.

MPI‑3 RMA

The MPI community first introduced one-sided communication, also known as Remote Memory Access (RMA), in the MPI 2.0 standard.  MPI RMA defines library functions for exposing memory for remote access through RMA windows. Experiences with the limitations of MPI 2.0 RMA led to the introduction in MPI 3.0 of new atomic operations, synchronization methods, methods for allocating and exposing remotely accessible memory, a new memory model for cache-coherent architectures, plus several other features. The MPI‑3 RMA interface remains large and complex partly because it aims to support a wider range of usages than most PGAS models. MPI RMA may end up most used as an implementation layer for other PGAS models such as Global Arrays, OpenSHMEM, or Fortran coarrays, as there is at least one implementation of each of these using MPI‑3 RMA under the hood.

#include <mpi.h>

int main(void)
{
    MPI_Init(NULL, NULL);
    int me, np;
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    if (np < 2) MPI_Abort(MPI_COMM_WORLD, 1);
    /* allocate from the shared heap */
    int *Abuf;
    MPI_Win Awin;
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &Abuf, &Awin);
    MPI_Win_lock_all(MPI_MODE_NOCHECK, Awin);
    int B = 134;
    /* store local B at processing element (PE) 0 into A at PE 1 */
    if (me == 0) {
        MPI_Put(&B, 1, MPI_INT, 1, 0, 1, MPI_INT, Awin);
        MPI_Win_flush_local(1, Awin);
    }
    /* global synchronization of execution and data */
    MPI_Win_flush_all(Awin);
    MPI_Barrier(MPI_COMM_WORLD);
    /* observe the result of the store */
    if (me == 1) printf("A@1=%d\n", *Abuf);
    MPI_Win_unlock_all(Awin);
    MPI_Win_free(&Awin);
    MPI_Finalize();
    return 0;
}

A simple MPI RMA program, written according to the version 3.1 specification.  C standard library headers are omitted.
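
The put-based example above also has a natural analog for the atomic operations that MPI 3.0 added. As a hedged sketch of one common idiom (my own construction, not from the book chapter): every rank draws a ticket from a shared counter living on rank 0 using MPI_Fetch_and_op, an atomic fetch-and-add that could not be built safely from separate get and put operations:

#include <stdio.h>
#include <mpi.h>

int main(void)
{
    MPI_Init(NULL, NULL);
    int me;
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    /* expose one int per rank; only rank 0's copy is used as the counter */
    int *cnt;
    MPI_Win win;
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &cnt, &win);
    *cnt = 0;
    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    MPI_Barrier(MPI_COMM_WORLD); /* make sure the counter is initialized */
    int one = 1, ticket;
    /* atomic fetch-and-add on the int at displacement 0 of rank 0 */
    MPI_Fetch_and_op(&one, &ticket, MPI_INT, 0, 0, MPI_SUM, win);
    MPI_Win_flush(0, win); /* complete the operation at the target */
    printf("rank %d drew ticket %d\n", me, ticket);
    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}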

PGAS and Intel Xeon Phi processors

With up to 72 cores that share memory, an Intel Xeon Phi processor is a perfect device to explore PGAS with performance at a scale not previously so widely available. Since we do care about performance, running PGAS on a shared memory device with so many cores is a fantastic proxy for future machines that will offer increased support for performance using PGAS across larger and larger systems.

Code examples and figures are adapted from Chapter 16 (PGAS Programming Models) of the book Intel Xeon Phi Processor High Performance Programming – Intel Xeon Phi processor Edition, used with permission. Jeff Hammond and James Dinan were the primary contributors to the book chapter and to the examples used in the chapter and in this article. I owe both of them a great deal of gratitude for all their help.

About the Author

James Reinders likes fast computers and the software tools to make them speedy. Last year, James concluded a 10,001-day career at Intel, where he contributed to projects including the world’s first TeraFLOPS supercomputer (ASCI Red) as well as compiler and architecture work for a number of Intel processors and parallel systems. James is the founding editor of The Parallel Universe magazine and has been the driving force behind books on VTune (2005), TBB (2007), Structured Parallel Programming (2012), Intel Xeon Phi coprocessor programming (2013), Multithreading for Visual Effects (2014), High Performance Parallelism Pearls Volume One (2014) and Volume Two (2015), and the Intel Xeon Phi processor (2016). James resides in Oregon, where he enjoys gardening as well as HPC and HPDA consulting.


CSRA Announces Fourth Quarter and Fiscal Year 2017 Financial Results

Thu, 05/25/2017 - 08:15

FALLS CHURCH, Va., May 24, 2017 — CSRA Inc. (NYSE: CSRA), a leading provider of next-generation IT solutions and professional services to government organizations, today announced financial results for the fourth quarter of fiscal year 2017, which ended March 31, 2017.

“In fiscal year 2017, we built a strong foundation for the future through robust business development success, differentiated technical offerings, and strong financial management,” said Larry Prior, CSRA president and CEO. “We ended the year on a high note, as our fourth quarter revenue, adjusted EBITDA, and adjusted EPS met or exceeded consensus estimates, and we booked $1.3 billion in awards. Our book-to-bill ratio of 1.1x marked the ninth straight quarter with bookings at or above revenue. This success gives us confidence that we will achieve organic revenue growth in fiscal year 2018 while also maintaining strong profitability and free cash flow. I am also pleased to announce that we will soon make our first acquisition as a public company. NES Associates will bring us strong competitive advantage in a number of large, near-term IT network opportunities—another example of how we live our tagline, ‘Think Next. Now.'”

Summary Operating Results (Unaudited)

(Dollars in millions, except per share data)

                                                       Three Months Ended              Fiscal Years Ended
                                                  March 31, 2017  April 1, 2016  March 31, 2017  April 1, 2016(a)
Revenue                                                   $1,254         $1,290          $4,993          $4,250
Operating income (loss)                                      $90          $(76)            $622            $187
Net income (loss) attributable to
  CSRA common stockholders                                   $37          $(72)            $304             $87
GAAP diluted EPS                                           $0.22        $(0.44)           $1.84           $0.53
Adjusted revenue                                          $1,254         $1,290          $4,993          $5,198
Adjusted EBITDA                                             $207           $197            $792            $787
Adjusted diluted EPS                                       $0.49          $0.46           $1.91           $1.74

Note: All quarterly and adjusted figures are unaudited; refer to “Reconciliation of Non-GAAP Financial Measures” at the end of this news release for a more detailed discussion of management’s use of non-GAAP measures and for reconciliations to GAAP financial measures.

(a) For the fiscal year ended April 1, 2016, adjusted revenue, adjusted EBITDA, and adjusted diluted EPS are pro forma measures.

Revenue for the fourth quarter of fiscal year 2017 was $1.25 billion, up 3 percent compared to the third quarter of fiscal year 2017 (sequentially). Quarterly revenue was down 3 percent compared to the fourth quarter of fiscal year 2016 (year-over-year), the lowest such decline since the Company was formed in November 2015. Revenue for fiscal year 2017 was $5.0 billion, down 4 percent compared to adjusted revenue for fiscal year 2016.

Operating income for the fourth quarter of fiscal year 2017 was $90 million (7.2% operating margin) and included $61 million of expense related to the amendment of the Intellectual Property Matters Agreement (the “Original IPMA” and, as amended, the “IPMA”) with Computer Sciences Corporation (now known as DXC Technology) (“CSC”); another $5 million of other separation, merger, and integration costs; $16 million of pension and other post-retirement benefit (“OPEB”) plans mark-to-market expense; $20 million of other pension benefits; and $11 million of amortization from acquisition-related intangible assets. Adjusted EBITDA, which excludes these items, was $207 million for the fourth quarter, up 5 percent year-over-year. The adjusted EBITDA margin of 16.5% matched the highest in the last three years (including pro forma results), driven by strong contract performance and disciplined cost management. Adjusted EBITDA for fiscal year 2017 was $792 million, up 1 percent compared to fiscal year 2016, reflecting a margin of 15.9%, an improvement of 80 basis points over the prior fiscal year.

Net income attributable to CSRA shareholders for the fourth quarter of fiscal year 2017 was $37 million, or $0.22 per share, compared to a loss of $72 million, or $0.44 per share in the fourth quarter of fiscal year 2016. Adjusted diluted EPS was $0.49 for the quarter and $1.91 for the fiscal year, up 7 percent and 10 percent, respectively, from the comparable periods in fiscal year 2016.

The adjusted results reflect the methodology laid out in the Company’s Form 8-K filing on April 10, 2017. Compared to the previously reported measures, adjusted EBITDA excludes all costs and benefits associated with the defined benefit plans, and adjusted EPS excludes all costs and benefits associated with the defined benefit plans as well as amortization of acquisition-related intangible assets. Prior year amounts have been revised to conform to the current year presentation.

Cash Management and Capital Deployment

For the fourth quarter of fiscal year 2017, operating cash flow was $50 million, and free cash flow was $62 million. Operating cash flow included $61 million associated with the payment to CSC in connection with the signing of the IPMA; this payment is not included in free cash flow, which excludes non-recurring separation-related payments. The remaining $4 million from the $65 million IPMA payment is included in investing cash flow and captured on the balance sheet as a software asset.

During the fourth quarter, the Company used $20 million to pay down debt and returned $16 million to shareholders through its regular quarterly cash dividend program. The Board of Directors declared a cash dividend of $0.10 per share, payable on July 12, 2017 to all common shareholders of record as of June 15, 2017. As of March 31, 2017, the Company had $126 million in cash and cash equivalents and $2.6 billion in debt (excluding capital lease obligations).

After the close of the quarter, the Company signed a definitive agreement to acquire the Alexandria, VA-based network engineering firm NES Associates, LLC, a leading provider of telecommunications, infrastructure, and application architecture and implementation services to Defense and other government customers. The transaction is expected to close in the first half of fiscal year 2018, and is subject to regulatory approval and customary closing conditions.

Business Development

Bookings totaled $1.3 billion in the fourth quarter, representing a book-to-bill ratio of 1.1x. The fourth quarter marked the ninth consecutive quarter with a book-to-bill ratio of 1.0x or higher. Bookings for the fiscal year totaled $6.9 billion, representing a book-to-bill ratio of 1.4x.
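
For context, book-to-bill is simply bookings divided by revenue for the same period; a ratio at or above 1.0x means new business is being signed at least as fast as existing business is billed. A minimal sketch of the arithmetic, using the rounded figures quoted in this release (dollars in billions):

    def book_to_bill(bookings: float, revenue: float) -> float:
        # Ratio of new business signed to revenue recognized in the same period.
        return bookings / revenue

    q4 = book_to_bill(1.3, 1.254)   # release reports 1.1x from unrounded inputs
    fy = book_to_bill(6.9, 4.993)
    print(f"Q4 book-to-bill: {q4:.2f}x")   # ~1.04x with the rounded figures here
    print(f"FY book-to-bill: {fy:.2f}x")   # ~1.38x, reported as 1.4x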

Included in the quarterly bookings were several particularly important single-award prime contracts:

  • Enterprise IT Support for the Environmental Protection Agency (EPA). Under a $266 million, five-year contract, CSRA will provide a full range of services to develop and operate the EPA’s infrastructure and application platforms. Services CSRA will deliver under this new contract include data center management, application hosting, application deployment and maintenance, geospatial service support, network security, cybersecurity, cloud computing, continuity of operations (COOP) services, enterprise identity and access management (EIAM), and Active Directory (AD).
  • Program Executive Office (PEO) Aircraft Carriers Support. CSRA secured a five-year, $61 million recompete to provide a full range of acquisition program support services to PEO Aircraft Carriers, including the design, development, construction, modernization, and life cycle management of aircraft carriers for the Navy. CSRA has supported PEO Aircraft Carriers for over 25 years.
  • Administrative Office of U.S. Courts (AOUSC) IT Security Support. The AOUSC awarded CSRA a new $57 million, four-year contract to secure the Courts’ IT assets. Under this task order, CSRA will provide highly-specialized security services, such as security engineering, penetration testing, security assessments, and training.
  • EPA High Performance Computing (HPC) Support. CSRA secured a new five-year, $58 million contract to provision, maintain, and support the EPA’s HPC environment, as well as its scientific visualization hardware and software. CSRA’s support of computational modeling and simulation tools will allow the EPA to solve complex research problems quickly and in a cost-effective manner to guide decisions and better protect human health and the environment.
  • Department of the Navy Chief of Information (CHINFO) Support. Under a five-year, $39 million contract, CSRA will continue to support the Navy’s worldwide public communication and media support services program.

The Company’s backlog of signed business orders at the end of fourth quarter of fiscal year 2017 was $15.2 billion, of which $2.4 billion was funded.

Forward Guidance

Based on the substantial momentum from its business development success, the Company is initiating guidance ranges that anticipate organic growth in revenue and free cash flow and robust performance in adjusted EBITDA and adjusted diluted EPS. The Company elects to provide ranges for certain metrics that are not prepared and presented in accordance with GAAP because it cannot make reliable estimates of key items that would be necessary to provide guidance for its GAAP operating and cash flow measures, including pension and OPEB mark-to-market adjustments and the initial sale associated with any changes to its receivables purchase agreement.

Metric                                    Fiscal Year 2018
Revenue (millions)                        $5,000 – $5,200
Adjusted EBITDA (millions)                $770 – $800
Adjusted Diluted Earnings per Share       $1.88 – $2.00
Free Cash Flow (millions)                 $330 – $380

The fiscal year 2018 adjusted EBITDA and adjusted diluted EPS guidance is based on the same definitions used in this press release and described fully in the company’s Form 8-K filed with the Securities and Exchange Commission on April 10, 2017.

CSRA Chief Financial Officer Dave Keffer commented, “I am pleased to post such strong earnings growth in the quarter and the year, underscoring CSRA’s commitment to long-term earnings growth. We expect to grow revenue in fiscal year 2018 in line with our long-term model. Our pending acquisition is a great example of the disciplined growth we are able to pursue, consistent with our balanced, long-term approach to capital allocation, as our balance sheet continues to evolve. After aggressively paying down debt, we look to add acquisitions and opportunistic share repurchases to accelerate growth and drive shareholder value.”

About CSRA Inc.

CSRA (NYSE: CSRA) solves our nation’s hardest mission problems as a bridge from mission and enterprise IT to Next Gen, from government to technology partners, and from agency to agency.  CSRA is tomorrow’s thinking, today. For our customers, our partners, and ultimately, all the people our mission touches, CSRA is realizing the promise of technology to change the world through next-generation thinking and meaningful results. CSRA is driving towards achieving sustainable, industry-leading organic growth across federal and state/local markets through customer intimacy, rapid innovation and outcome-based experience. CSRA has approximately 18,500 employees and is headquartered in Falls Church, Virginia. To learn more about CSRA, visit www.csra.com. Think Next. Now.

Source: CSRA

The post CSRA Announces Fourth Quarter and Fiscal Year 2017 Financial Results appeared first on HPCwire.

DDN Delivers Production-Level Performance for Machine Learning at Scale

Thu, 05/25/2017 - 08:00

SANTA CLARA, Calif., May 25, 2017 — DataDirect Networks (DDN) today announced that large commercial machine learning programs in manufacturing, autonomous vehicles, smart cities, medical research and natural-language processing are overcoming production scaling challenges with DDN’s large-scale, high-performance storage solutions. Machine learning projects often stumble in the transition from proof of concept to production scale, which can introduce significant production delays in rapidly developing markets where time is of the essence.

For machine learning applications at scale, DDN delivers up to 40X faster performance than competing enterprise scale-out NAS and up to 6X faster performance than enterprise SAN solutions, while providing faster results against more types of data using a wide variety of techniques. It also allows machine learning and deep learning programs to start small for proof of concept and scale to production-level performance and petabytes per rack with no additional architecting required.

“The high performance and flexible sizing of DDN systems make them ideal for large-scale machine learning architectures,” said Joel Zysman, director of advanced computing at the Center for Computational Science at the University of Miami. “With DDN, we can manage all our different applications from one centrally located storage array, which gives us both the speed we need and the ability to share information effectively. Plus, for our industrial partnership projects that each rely on massive amounts of instrument data in areas like smart cities and autonomous vehicles, DDN enables us to do mass transactions on a scale never before deemed possible. These levels of speed and capacity are capabilities that other providers simply can’t match.”

With storage appliances that can start at a few hundred terabytes and grow to ~10 PB in a single rack, DDN’s machine learning customers can scale from test bed to production ramp and beyond in a single platform. DDN solutions are enabling customers to leverage machine learning applications to speed results and improve competitiveness, profitability, customer service, business intelligence and research effectiveness, including:

  • Smart cities planning for tourism via a city government and academic research cooperation;
  • Fraud detection for wire transfers and credit card transactions at a large U.S. bank;
  • Digital assistant/natural-language processing at a Fortune 100 SaaS provider;
  • Route optimization, pricing and informed consumer metrics for autonomous vehicles; and
  • Near real-time affinity marketing and fraud detection for online payments.

DDN storage allows machine learning algorithms to run faster and to include more data than any other system in the market, which enables researchers to accelerate algorithm testing, decrease development/refinement times and ultimately decrease time to market for the “learned” results – a significant advantage in today’s competitive markets.

“The uniqueness of DDN’s architecture enables The University of Miami to save data being generated constantly from literally millions of sensors to address the entire storage needs for a smart city with up to 15,000 residents,” Zysman added. “Equally impressive, we can do all that without impacting our other research, computations and simulations that are going on at the same time.”

As huge amounts of processing power and large data repositories have become more affordable, a rich environment for the advancement of machine learning and deep learning has emerged. Machine learning applications are being created and implemented across a wide range of processes, replacing or improving human input, and addressing problems that previously were not undertaken because of the sheer volume of the data.

“To be successful, machine learning programs need to think big from the start,” said Laura Shepard, senior director of product marketing at DDN. “Prototypes of programs that start by using mid-range enterprise storage or by adding drives to servers often find that these approaches are not sustainable when they need to ramp to production. With DDN, customers can transition easily with a single high-performance platform that scales massively. Because of this, DDN is experiencing tremendous demand from both research and enterprise organizations looking for high-performance storage solutions to support machine learning applications.”

About DDN

DataDirect Networks (DDN) is the world’s leading big data storage supplier to data-intensive, global organizations. For more than 18 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.

Source: DDN

The post DDN Delivers Production-Level Performance for Machine Learning at Scale appeared first on HPCwire.

Intel, Broad Institute Announce Breakthrough Genomics Analytics Stack

Wed, 05/24/2017 - 14:48

May 24, 2017 — Today is a milestone in advancing genomics research, and Intel is thrilled to be involved in these three important developments:

  • The Broad Institute of MIT and Harvard is open-sourcing the world’s most popular and now much-improved genome analysis software, GATK4.
  • Intel and Broad have developed a breakthrough architecture, called the Broad-Intel Genomics Stack (BIGstack), which currently delivers a 5x improvement to Broad’s genomics analytics pipeline using Intel’s CPUs, Omni-Path Fabric and SSDs. The stack also includes optimizations for the forthcoming release of Intel’s integrated CPU + FPGA products.
  • China’s industry leader in genomics, BGI, is announcing adoption of the most current GATK tools, including Broad and Intel optimizations — a groundbreaking step toward global alignment in the rapidly growing genomics community.

I want to do justice to these tremendous achievements, so let me expand on them.

First, Intel and Broad share the common vision of harnessing the power of genomic data and making it widely accessible for research around the world to yield important discoveries. Genomics offers insights into the inner workings of DNA within organisms. Advances in genomics are fueling discovery-based research to better understand the complexities of biological systems.

Nearly everyone has experienced cancer and its devastating effects in their family, some more than others. With today’s announcements, we can take new steps toward understanding the molecular drivers of cancer and other diseases and accelerating the promise of precision medicine.

That’s why Intel and Broad are making the BIGstack available to run the new GATK4 Best Practices pipeline up to five times faster than the previous versions, supporting data volumes at truly unprecedented scale and simplifying deployment with production-ready scripts. The architecture yields performance based on the combination of Intel CPUs, Omni-Path Fabric and SSDs. The BIGstack also includes optimizations for Intel FPGAs with early results showing a potential for more than 35x improvement in the PairHMM algorithm.
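
PairHMM refers to the pair hidden Markov model at the heart of GATK’s HaplotypeCaller, which scores how well each read aligns to each candidate haplotype. The toy sketch below illustrates only the forward-algorithm idea behind it; the transition and emission parameters are invented placeholders, and this is not Broad’s or Intel’s implementation:

    def pairhmm_likelihood(read, hap, go=0.05, ge=0.1, e_match=0.99):
        # Toy pair-HMM forward pass over Match/Insert/Delete states.
        # go = gap-open, ge = gap-extend; all values are illustrative only.
        R, H = len(read), len(hap)
        fM = [[0.0] * (H + 1) for _ in range(R + 1)]  # paths ending in a match
        fI = [[0.0] * (H + 1) for _ in range(R + 1)]  # ... in an inserted read base
        fD = [[0.0] * (H + 1) for _ in range(R + 1)]  # ... in a skipped hap base
        fM[0][0] = 1.0                                # global-alignment start
        for i in range(R + 1):
            for j in range(H + 1):
                if i > 0 and j > 0:
                    emit = e_match if read[i-1] == hap[j-1] else (1 - e_match) / 3
                    fM[i][j] = emit * ((1 - 2*go) * fM[i-1][j-1]
                                       + (1 - ge) * (fI[i-1][j-1] + fD[i-1][j-1]))
                if i > 0:   # insertion emits one read base (uniform over A/C/G/T)
                    fI[i][j] = 0.25 * (go * fM[i-1][j] + ge * fI[i-1][j])
                if j > 0:   # deletion consumes a haplotype base, emits nothing
                    fD[i][j] = go * fM[i][j-1] + ge * fD[i][j-1]
        return fM[R][H] + fI[R][H] + fD[R][H]

    print(pairhmm_likelihood("ACGT", "ACGT"))  # higher than for a mismatched pair

This inner dynamic-programming loop is exactly the kind of regular, data-parallel arithmetic that maps well onto an FPGA, which is where the reported 35x PairHMM improvement comes in.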

Version 1.0 of the Broad-Intel Genomics stack is the kind of breakthrough in affordability for the genomics analytics community that we sought to create as part of the Intel-Broad Center for Genomic Data Engineering, a five-year $25 million collaboration announced in November. The stack is now available to the 45,000 registered academic, nonprofit and commercial users of the GATK, the Broad’s popular genomics analysis toolkit.

There’s more on the Intel website about this new reference architecture, announced today at the Bio-IT World Conference & Expo. Additionally, we wanted to share the following:

I’m tremendously excited by BGI’s announcement – it means the leading genomics institutions in China and the United States will be using the same set of open source software tools. And this expanded access will facilitate the standardization and sharing of data for bigger and better research in the future.

It’s rewarding that GATK4 includes key optimizations made possible through collaboration at the Intel-Broad Center for Genomic Data Engineering in Cambridge, which I had the pleasure of visiting last month. I hope that the BIGstack will be a common platform for advanced analytics workloads that the world’s leading genomics institutions can utilize to facilitate collaboration and scientific breakthroughs.

Finally, this turnkey solution will be available as a reference architecture and through original equipment manufacturers (OEMs) and system integrators (SIs), including Lenovo, HPE, Inspur and Colfax, with more to follow.

I’m proud of the accomplishments of the Intel team in enabling Intel technology as a facilitator of scientific breakthroughs. Moments such as these make me believe that in our lifetime we will see the cure for cancer, and I am so honored to be partnering with great institutions such as Broad and BGI to help them make this happen.

Looking ahead, it’s clear that the complex interplay of genetic variants and how treatments can affect molecular pathways is an area of study ripe for machine learning because of the need to learn by example, over and over. Working with some of the world’s most brilliant minds, Intel engineers are eager to apply artificial intelligence to this grand challenge.

Learn more by visiting the Intel-Broad Center for Genomic Data Engineering website.

Source: Jason Waxman, Corporate Vice President and General Manager of Data Center Solutions Group, Intel Corporation

The post Intel, Broad Institute Announce Breakthrough Genomics Analytics Stack appeared first on HPCwire.

RoCE Initiative Launches New Online Product Directory for CIOs, IT Professionals

Wed, 05/24/2017 - 10:53

BEAVERTON, Ore., May 24, 2017 – The RoCE Initiative, an education and resource program of the InfiniBand Trade Association (IBTA), today announced the availability of the RoCE Product Directory. The new online resource is intended to inform CIOs and enterprise data center architects about their options for deploying RDMA over Converged Ethernet (RoCE) technology within their Ethernet infrastructure.

The RoCE Product Directory comprises a growing catalogue of RoCE-enabled solutions provided by members of the IBTA. The directory allows users to search by product type and/or brand, connecting them directly to each item’s specific product page. The online directory currently contains RoCE-capable adapter cards and LAN-on-motherboard application-specific ICs (LOM ASICs).

“Broadcom is excited about the IBTA’s launch of the RoCE Product Directory and is proud to have Broadcom’s RoCE-enabled NetXtreme E-Series Ethernet adapters featured,” said Robert Lusinsky, Broadcom Director of Marketing and InfiniBand Trade Association Marketing Working Group co-chair.  “Our 10/25/50/100GbE NetXtreme E-Series adapters integrate advanced congestion control algorithms that enable high performance RoCE deployments for applications such as machine learning and NVMe-oF, and the IBTA’s RoCE Product Directory will be a valuable resource for customers.”

“Cavium is a member of IBTA, a supporter of the RoCE Initiative and proud to be listed in the RoCE product directory,” said Christopher Moezzi, Vice President of Marketing, Ethernet Adapter Group, Cavium, Inc. “Cavium FastLinQ® 10/25/50/100GbE NICs with Universal RDMA deliver groundbreaking performance combined with the intelligence and offload capabilities that enable enterprise, cloud and Telco platforms to deliver exceptional levels of service and application processing, while offloading packet I/O.”

“As a longtime member of the IBTA and pioneer of RoCE technology, Mellanox is proud to participate in the RoCE Initiative’s product directory,” said Kevin Deierling, Vice President of Marketing, Mellanox Technologies. “Now on our fifth generation of ConnectX RoCE adapters, our customers are able to achieve dramatic improvements in storage and compute platform efficiency and performance. Continued innovation has allowed our RoCE solutions to accelerate performance over ordinary Ethernet networks, without requiring any special configuration to operate losslessly.”

Access the RoCE Product Directory at www.roceinitiative.org/product-search.

Additional RoCE Technical Resources Now Available

The RoCE Initiative partnered with Demartek, an independent computer industry analyst firm, to develop the Demartek RoCE Deployment Guide. The free guide is designed for IT managers and technical professionals who are exploring the benefits of RoCE technology and looking for practical guidance on the deployment of RoCE solutions. The technical document demonstrates step-by-step deployment of RoCE-capable 25GbE and 100GbE products from several vendors.

More information on the Demartek RoCE Deployment Guide can be found at http://www.demartek.com/Demartek_RoCE_Deployment_Guide.html.

Additionally, the RoCE Initiative recently published a white paper explaining the many benefits of RoCE capabilities and how the technology can address the ever-changing needs of next generation data centers. The document, titled RoCE Accelerates Data Center Performance, Cost Efficiency, and Scalability, features recent case studies and outlines the following RoCE advantages:

  • Freeing the CPU from Performing Storage Access
  • Altering Design Optimization
  • Future-Proofing the Data Center
  • Presenting New Opportunities for the Changing Data Center

The technical brief is available for download at http://www.roceinitiative.org/resources/.

About the RoCE Initiative

The RoCE Initiative promotes RDMA over Converged Ethernet (RoCE) awareness, technical education and reference solutions for high performance Ethernet topologies in traditional and cloud-based data centers. Leading RoCE technology providers are contributing to the Initiative through the delivery of case studies and white papers, as well as sponsorship of webinars and other events. For more information, visit www.RoCEInitiative.org.

About the InfiniBand Trade Association

The InfiniBand Trade Association was founded in 1999 and is chartered with maintaining and furthering the InfiniBand and RoCE specifications. The IBTA is led by a distinguished steering committee that includes Broadcom, Cray, HP, IBM, Intel, Mellanox Technologies, Microsoft, Oracle, and Cavium. Other members of the IBTA represent leading enterprise IT vendors who are actively contributing to the advancement of the InfiniBand and RoCE specifications. The IBTA markets and promotes InfiniBand and RoCE from an industry perspective through online, marketing and public relations engagements, and unites the industry through IBTA-sponsored technical events and resources. For more information on the IBTA, visit www.infinibandta.org.

Source: InfiniBand Trade Association

The post RoCE Initiative Launches New Online Product Directory for CIOs, IT Professionals appeared first on HPCwire.

IBM Platform Deployed by Sidra to Advance Qatar’s Biomedical Research Capabilities

Wed, 05/24/2017 - 09:15

ARMONK, N.Y., May 24, 2017 — IBM (NYSE: IBM) today announced the deployment of its solutions as the compute and storage infrastructure for Sidra Medical and Research Center (Sidra).

Sidra, a groundbreaking hospital, biomedical research and educational institution, chose IBM solutions to manage and store clinical genome sequencing data as well as to provide the organization’s biomedical informatics technology infrastructure capabilities that will serve as a national resource. The IBM platform is used for data management and storage, bioinformatics and High Performance Computing (HPC).

One of the first programs for which Sidra used the IBM technology platform was the Qatar Genome Programme (QGP). The QGP is a large national genome medical research project that aims to develop personalized healthcare therapies for the Qatari population. Sidra is a key partner of the QGP and is responsible for sequencing, analyzing and providing the data management for whole genome sequences from the population. Sidra sequenced over 3,000 samples in Phase I and has embarked on the sequencing and analysis for Phase II of the QGP.

“Sidra has undertaken to implement personalized medicine to better meet the unmet needs of the women and children in Qatar and beyond. Biomedical informatics plays a central role in bringing this concept to life,” said Dr. Rashid Al-Ali, Division Chief of the Biomedical Informatics Division at Sidra. “This is why we hired a multidisciplinary team of experts from all over the world, invested in leading technologies and chose vendors like IBM to help enhance our approach to offering personalized care to the women and children of Qatar. Our implementation of technologies goes beyond meeting Sidra’s needs; we have the basic building blocks in place to be considered a national resource in the country and to build local capacity.”

A number of IBM solutions underpin Sidra’s analytics and data architecture, including:

  • IBM Flash Storage systems to accelerate access to critical metadata by the Sidra community;
  • IBM software-defined infrastructure as a workload and resource manager, to ease the management of big data analytics and to scale up capabilities to manage HPC analytics involving hundreds of thousands of jobs and vast amounts of data.

“The Sidra and IBM work effort is unique – it was a joint collaboration in which our bioinformatics experts led the complex analysis component and built the pipelines while IBM customised the system to ensure best performance and ease of use,” said Dr. Mohamed-Ramzi Temanni, Manager, Bioinformatics Technical Group at Sidra Medical and Research Center. “Analyzing hundreds of samples in parallel on a regular basis requires a robust HPC system to handle the load properly. From our experience, IBM systems have proven to be reliable in helping us address this technical requirement.”

Performing analysis on each sample takes between two and seven days. Failures at any point in the analysis can be very costly, as they would require each job to restart from the beginning. Using IBM Spectrum software provides Sidra the high reliability needed to manage the application pipeline and meet its deadlines.
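
The reliability point is easy to see in miniature with a toy checkpoint/restart scheme. The sketch below is purely illustrative and assumes nothing about IBM Spectrum software’s actual interfaces: each pipeline stage records its completion, so a rerun after a failure skips finished work instead of starting from the beginning:

    import json, os

    def run_pipeline(stages, ckpt_path="pipeline.ckpt"):
        # Run ordered (name, callable) stages, skipping any stage already
        # recorded in the checkpoint file. Real workload managers persist
        # far richer state; this only shows the restart-from-checkpoint idea.
        done = set()
        if os.path.exists(ckpt_path):
            with open(ckpt_path) as f:
                done = set(json.load(f))
        for name, fn in stages:
            if name in done:
                print(f"skipping {name} (already complete)")
                continue
            fn()                                  # may take days per sample
            done.add(name)
            with open(ckpt_path, "w") as f:       # persist progress after each stage
                json.dump(sorted(done), f)

    run_pipeline([
        ("align", lambda: print("aligning reads")),
        ("dedup", lambda: print("marking duplicates")),
        ("call",  lambda: print("calling variants")),
    ])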

Since deploying the IBM platform, Sidra has completed hundreds of thousands of computing tasks comprising millions of files and directories, without experiencing system downtime. Overall, Sidra has reduced its time-to-completion for long-running jobs while increasing its resource utilization substantially. As a result, Sidra completed its requirements for Phase I of the QGP ahead of schedule.

“Sidra’s remit for the Qatar Genome Program is an ambitious genomics medical research project in terms of compute and data scope,” said Dr. Robert Eades, Research Advisor, IBM Middle East & Africa. “To most effectively manage and analyze a large number of whole genome sequences for population genomics, Sidra chose IBM as one of its key players to build a long-term technology foundation for medical analytics research at this scale.”

IBM at Bio-IT World

As a Bio-IT World Platinum sponsor, IBM will deliver several presentations during the event, including sessions taking place on Wednesday, May 24th.

In addition, IBM solutions will appear across the Bio-IT World exhibition floor: IBM Spectrum Scale within IBM Business Partner DDN’s booth #357, IBM Aspera in booth #348, and IBM Cloud Object Storage in booth #554.

For more on IBM Storage, visit www.ibm.com/systems/storage.

About Sidra Medical and Research Center

Sidra Medical and Research Center will be a groundbreaking hospital, research and education institution, focusing on the health and wellbeing of children and women regionally and globally.

Sidra represents the vision of Her Highness Sheikha Moza bint Nasser who serves as its Chairperson. The high-tech facility will not only provide world-class patient care but will also help build Qatar’s scientific expertise and resources.

Sidra will be a fully digital facility, incorporating the most advanced information technology applications in all its functions. Designed by renowned architect Cesar Pelli, Sidra features a main hospital building and a separate outpatient clinic.

Sidra opened its Outpatient Clinic on 1 May 2016 and offers outpatient services for women and children through a referral based system in partnership with other healthcare providers in Qatar.

Sidra is also part of a dynamic research and education environment in Qatar and through strong partnerships with leading institutions around the world, Sidra is creating an intellectual ecosystem to help advance scientific discovery through investment in medical research. For more information please visit www.sidra.org.

Source: IBM

The post IBM Platform Deployed by Sidra to Advance Qatar’s Biomedical Research Capabilities appeared first on HPCwire.

DDN Names Eric Barton as CTO for Software-Defined Storage

Wed, 05/24/2017 - 08:51

SANTA CLARA, Calif., May 24, 2017 — DataDirect Networks (DDN) today appointed Eric Barton as the company’s chief technology officer (CTO) for software-defined storage. In this role, Barton will lead the company’s strategic roadmap, technology architecture and product design for DDN’s newly created Infinite Memory Engine (IME) business unit. Barton brings with him more than 30 years of technology innovation, entrepreneurship and expertise in networking, distributed systems and storage software.

Eric Barton, CTO of Software-Defined Storage at DDN

“Eric Barton is a visionary with a proven track record and deep understanding of data-intensive computing and complex distributed storage systems. As we expand our customer base into cloud, AI, machine learning and rack-scale SSD environments, we are excited to have Eric lead our next wave of technological innovation,” said Alex Bouzari, chief executive officer, chairman and co-founder, DDN. “The rapid growth of our Software-Defined Storage business and Eric’s appointment as CTO reflect DDN’s commitment and leadership in developing innovative solutions for the world’s most demanding data intensive environments.”

Prior to DDN, Barton was lead architect for the High Performance Data Division (HPDD) at Intel Corporation, where he created some of the world’s most innovative storage architectures, leveraging object storage, NVMe, distributed file systems, multi-core processors and 3D XPoint. Before Intel, Barton was co-founder and CTO of Whamcloud, the main development arm behind the open-source Lustre file system; Whamcloud was acquired by Intel Corporation in 2012. Prior to that, Barton designed and built one of the first commercially available distributed file systems, actively contributed to the development of advanced networking protocols for technical computing, and implemented the Lustre Networking layer (LNET).

“Eric is a true thought leader in distributed and high-performance storage, one of the very finest in the industry, and we are fortunate to have him join the team,” said Robert Triendl, SVP for sales, marketing and field services. “With the advent of technologies such as NVMe and 3D XPoint, and innovations in device technologies and storage fabrics, the market for elastic storage solutions is undergoing dramatic changes. As the leader in scalable storage systems with thousands of mission critical customers, DDN is uniquely positioned for a significant expansion of our addressable market.”

DDN’s appointment of Barton and other recent initiatives aimed at accelerating the expansion of its IME software-defined storage product family exemplify the company’s long-term commitment to lead in the creation and delivery of innovative storage solutions for AI, Machine Learning, Enterprise, Cloud and Technical Computing. Available today as both software-only and appliance servers, IME is a scale-out, flash-native storage solution that solves I/O bottlenecks and accelerates applications and workflows in a reliable and cost-effective fashion. IME provides predictable, scalable performance at a fraction of the cost of conventional storage solutions.

“During the past years, DDN has developed a remarkable set of core technologies for scalable storage systems based on flash and non-volatile memory (NVM),” Barton said. “I am excited to join the extremely talented team at DDN and help drive the company’s efforts in building a family of elastic, NVM-based storage products that will bring these technologies to a broader enterprise and cloud market.”

About DDN

DataDirect Networks (DDN) is the world’s leading big data storage supplier to data-intensive, global organizations. For more than 18 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.

Source: DDN

The post DDN Names Eric Barton as CTO for Software-Defined Storage appeared first on HPCwire.

Mellanox Delivers Interconnect Solution for Open Platform for DBaaS on IBM Power Systems

Wed, 05/24/2017 - 08:48

SUNNYVALE, Calif. & YOKNEAM, Israel, May 24, 2017 — Mellanox Technologies, Ltd. (NASDAQ:MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced support for the new Open Platform for Database-as-a-Service (DBaaS) on IBM Power Systems.

“As the need for new applications to be delivered faster than ever increases in a digital world, developers are turning to modern software development models including DevOps, as-a-Service and self-service to increase the volume, velocity and variety of business applications,” said Terri Virnig, VP, Power Ecosystem and Strategy at IBM. “With Open Platform for DBaaS, IBM is supporting these cloud development models to provide complete control of data, access and security for compliance, as well as the choice and flexibility for agile development of innovative new applications.”

In the Cognitive Computing era, businesses face a deluge of data from a wide variety of sources including internal, external, social, mobile, and sensors. Traditional enterprise computing, with its reliance on conventional databases designed for structured data, is unable to cope with this flood of data. Organizations everywhere are modernizing their data platforms, using the entire range of available technology to derive patterns and insights from both structured and unstructured data. Next generation applications based on open source technology, in particular open source databases, are critical for this modernization.

“Companies must undergo digital transformation in order to compete and lead in the data-driven Insight Economy, and open source economics and innovation are speeding this digital transformation,” said Kevin Deierling, vice president marketing, Mellanox Technologies. “Open Database-as-a-Service has proven to be one of the most efficient ways to meet today’s business requirements including high performance, high availability, and scalability at a lower cost of ownership. In order to achieve these goals, it is critical to choose the right interconnect solution.”

DBaaS has a number of advantages over traditional databases. First, it delivers improvement in speed and agility, as today’s cloud workloads require on-demand and agile provisioning of databases. Second, Open DBaaS results in a significant reduction in licensing and infrastructure costs. Finally, database sprawl, generated by the hundreds or thousands of unutilized databases that have accumulated over the years in organizations, is eliminated.

The solution is ideal for developers who need fast, flexible, and secure performance and reliability as they work on multiple applications using multiple operating systems and databases. DevOps and Operations staff need a span of control to allocate database services, and CIO/CTO executives need to optimize return on investment. Modern and mature open source DBaaS such as EnterpriseDB and MongoDB are hardened for enterprise deployments and have innovative new capabilities beyond those of traditional enterprise databases. For example, healthcare organizations are using MongoDB to build new customer call-center applications that combine traditional patient billing data with new data formats such as MRIs and lab test results and text data from specialist appointments. These new call centers are significantly reducing patient wait times and improving quality of care. Other organizations are using open source database management systems to combine GPS location data with real-time social media and video feeds with their own data to offer public transportation alternatives during peak travel times.

Open Platform for DBaaS on IBM Power Systems is an open source-based platform that integrates servers, intelligent interconnect, storage, operating systems, middleware, and databases, and disrupts conventional x86-based systems with demonstrable price-performance advantages.

The exponential growth of data demands not only the fastest throughput but also smarter networks. Mellanox intelligent interconnect solutions incorporate advanced acceleration engines that perform sophisticated processing algorithms on the data as it moves through the network. Intelligent network solutions greatly improve the performance and total infrastructure efficiency of data intensive applications.

About Mellanox

Mellanox Technologies (NASDAQ:MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox

The post Mellanox Delivers Interconnect Solution for Open Platform for DBaaS on IBM Power Systems appeared first on HPCwire.

HPE Delivers Broadest, Deepest Flash Storage Portfolio for Hybrid IT

Wed, 05/24/2017 - 08:25

PALO ALTO, Calif., May 24, 2017 — Hewlett Packard Enterprise (HPE) (NYSE:HPE) today announced a comprehensive flash portfolio update with new products and data protection solutions designed to help customers continue their journey to an all-flash data center. The new offerings include:

  • A more powerful midrange HPE 3PAR StoreServ 9450 all-flash array
  • Availability of Nimble Storage primary and new secondary flash arrays
  • Affordable fifth generation HPE MSA Storage
  • High-speed, cloud-connected StoreOnce data protection

The adoption of flash storage continues to gain pace, with 51 percent of customers predicting that they will have an all-flash data center within five years. It’s not just enterprises; smaller companies are also adopting flash as prices come down. At the same time, IT teams are seeking deeper integration across servers, storage, networks, and automation tools to maximize value from investments.

“As flash permeates the datacenter it has become critical to move beyond the array – from predictive analytics to data protection to investment strategies,” said Bill Philbin, senior vice president, Data Center Infrastructure Group, Hewlett Packard Enterprise. “These new solutions help more customers maximize the value of flash on-premises and enable flexible off-premises data mobility.”

Leading All-Flash price-performance and density across multiple segments
Data growth and app development in the data center are expanding exponentially and putting pressure on IT to consolidate more data on less infrastructure while also evaluating the right mix of on-premises and off-premises investments. To address these demands, HPE is updating its overall flash portfolio, including the addition of cloud-ready Nimble Storage flash arrays powered by predictive analytics for a new-style approach to storage support and monitoring.

For midrange customers hitting performance limits of aging systems, HPE 3PAR is redefining expectations with the new HPE 3PAR StoreServ 9450, a highly scalable, multi-tenant, and performance-rich all-flash platform that builds on the product family’s number one position in the midrange fibre channel segment. To enable the consolidation of more data in less space without compromising service levels, HPE is increasing performance 70 percent, doubling scale to 6PB, and enabling 3X the front-end connectivity with 80 host ports. This new, more powerful member of the 3PAR family enables consistent and predictable performance at less than half the cost of EMC VMAX 250F and supports consolidation of both block and file workloads. For those considering long-term investment strategies, the 3PAR StoreServ 9450 also provides a future-proof path to next-generation Storage Class Memory and NVM Express (NVMe) using HPE 3PAR 3D Cache.

HPE is also unveiling the fifth generation of its entry SAN platform, HPE MSA, starting with the introduction of the HPE MSA 2050 and 2052 models. MSA has been the leading entry fibre channel array for eight years and now delivers 2x more performance than the previous generation, starting under $10,000. With all-inclusive software and 1.6 terabytes of solid-state disk (SSD) capacity included, the new HPE MSA 2052 offers a 40 percent cost savings on hybrid flash models. Both MSA platforms are affordable starting points for application acceleration, providing flexibility to mix any combination of SSD and SAS drives, plus resiliency features such as snapshots and remote replication.

Flash-optimized protection now cloud-ready and 15x faster
With the scale and speed of flash storage becoming pervasive it’s critical that customers re-think data protection to avoid risk and eliminate bottlenecks – this includes taking advantage of public cloud to achieve the right mix of Hybrid IT.  As part of that shift, HPE has federated primary, secondary, and object storage to enable zero-impact data protection while putting backup data to work.

For customers exploring public-cloud tiers for secondary data, HPE announced StoreOnce CloudBank, a long-term data retention solution that provides significantly lower cost protection on multi-cloud destinations such as AWS, Azure or on premises object storage. Unlike many cloud backup solutions, StoreOnce CloudBank has been architected to reduce bandwidth requirements by over 99 percent, helping to lower costs of cloud-based storage to just $0.001 per gigabyte per month. The solution also supports the full set of on-premises StoreOnce features, including 3PAR integration, so customers can automatically transition data off-premises while assuring rapid and flexible disaster recovery.
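
Those two figures are easy to sanity-check together. The back-of-the-envelope sketch below uses the 99 percent bandwidth reduction and $0.001-per-gigabyte-per-month rate quoted above; the 100 TB backup set is an invented example, and the release does not specify whether the rate applies to logical or deduplicated capacity (here it is applied to the logical set):

    GB_PER_TB = 1024

    backup_set_tb = 100          # hypothetical protected data set
    bandwidth_reduction = 0.99   # figure cited in the release
    rate_per_gb_month = 0.001    # cloud retention cost cited in the release

    wan_tb = backup_set_tb * (1 - bandwidth_reduction)
    monthly_cost = backup_set_tb * GB_PER_TB * rate_per_gb_month

    print(f"sent over the WAN: {wan_tb:.0f} TB")       # ~1 TB instead of 100 TB
    print(f"retention cost: ${monthly_cost:,.2f}/mo")  # ~$102.40 for 100 TB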

As HPE 3PAR flash customers seek to rapidly protect larger and larger data sets, HPE Recovery Manager Central (RMC) already provides the fastest possible protection of applications running on 3PAR by directly connecting to secondary StoreOnce data protection systems through patented block-level integration. RMC is part of 3PAR’s all-inclusive licensing model and has now been enhanced with a new Express Restore feature, which enables 15x faster data recovery from on-prem or off-prem StoreOnce repositories. RMC has also been integrated with Veeam Explorer so customers can recover application items such as e-mails, documents and database schemas directly from RMC-V Express Protect backups.

Many customers moving to an all-flash datacenter are looking for ways to maximize return on those investments. For some that includes changing how they utilize data stored as backup copies. The new Nimble Secondary Flash Array (SFA) allows customers to put their backup and copy data to work for secondary applications. The system brings together always-on deduplication and compression to lower capacity costs for backup with flash-optimized performance and zero-copy cloning to run production workloads like dev/test, QA and analytics.

Pricing and Availability

  • HPE 3PAR StoreServ 9450 will be available June 2017 with US street pricing starting at $74,840.
  • HPE MSA 2050 will be available June 2017 with US street pricing starting at $7,750.
  • HPE MSA 2052 will be available June 2017 with US street pricing starting at $9,600.
  • RMC 4.1 is included as part of the standard all-inclusive 3PAR StoreServ licensing package and will be available from September 2017.
  • HPE StoreOnce CloudBank is available as part of an Early Access Program. Contact HPE to see if you qualify.
  • Nimble Secondary Flash Array is currently available with US street pricing starting under $40,000.

About Hewlett Packard Enterprise

Hewlett Packard Enterprise is an industry leading technology company that enables customers to go further, faster. With the industry’s most comprehensive portfolio, spanning the cloud to the data center to workplace applications, our technology and services help customers around the world make IT more efficient, more productive and more secure.

Source: HPE

The post HPE Delivers Broadest, Deepest Flash Storage Portfolio for Hybrid IT appeared first on HPCwire.

Dr. Ralf Schneider of the HPC Center Stuttgart Improves Bone Fracture Treatment

Wed, 05/24/2017 - 08:23

May 24, 2017 — You never want your doctor taking guesses with your health. But when it comes to treating bone fractures with implants, guesswork is part of the process.

“You have two patients where everything is exactly the same and one implant will fail and the other won’t,” says Ralf Schneider from the High Performance Computing Center Stuttgart (HLRS). “Why?”

Dr. Schneider is working to answer this very question — and put some certainty into a very uncertain science.

Bone implants are a common method for treating fractures of the hip, in particular. They allow patients to maintain mobility and avoid the severe complications that can come from bed rest. While an ideal solution, hip fracture implants are plagued by a consistent failure rate.

Doctors select an implant for their patients from among several types. And thus far, there hasn’t been much science behind that choice. They choose “based on their experience in the moment,” says Schneider, and then wait to see if it will hold or be among those that fail.

The implant process is unpredictable because bone composition varies from person to person. Thus, bones react to “loading” — the process by which they “remodel” or heal themselves — differently. That variance is the reason an implant might work for one person but not another.

Schneider knew if doctors could predict how a specific patient’s bone would remodel, they could better select and position an implant. To do that, they need to see the bone’s microstructure and determine its stiffness.

Along with his research colleagues, Schneider is using HLRS’s Cray supercomputer to conduct micromechanical simulations of bone tissue. “You have to calculate the local strain within the bone in the correct way,” says Schneider. “If you don’t have the right elasticity, you’ll formulate the strain incorrectly, which will lead to an incorrect estimation of bone remodeling” and an incorrect calculation of the risk of implant failure.
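
Schneider’s point is visible even in the simplest one-dimensional case of linear elasticity, where strain equals stress divided by Young’s modulus (Hooke’s law), so any error in the assumed stiffness propagates directly into the computed strain. The numbers in this sketch are invented for illustration and have nothing to do with HLRS’s actual models:

    def strain(stress_mpa: float, youngs_modulus_mpa: float) -> float:
        # 1-D Hooke's law: strain = stress / E. Real bone is anisotropic and
        # inhomogeneous; this scalar form only shows how a stiffness error
        # propagates into the strain estimate.
        return stress_mpa / youngs_modulus_mpa

    stress = 50.0       # MPa, hypothetical local load within the bone
    E_true = 17_000.0   # MPa, a plausible cortical-bone stiffness
    E_wrong = 12_000.0  # MPa, an underestimate of that stiffness

    print(f"strain with correct E: {strain(stress, E_true):.2e}")
    print(f"strain with wrong E:   {strain(stress, E_wrong):.2e}")  # ~42% too high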

Getting the correct calculations starts with correct material data — meaning bone tissue. It’s impractical to get a sample from every patient, so Schneider’s goal is to conduct bone tissue simulations for a range of ages and genders and compile a representative database.

“You’ll get an idea of what bone elasticities for people in particular ranges look like,” he says. “So when you have a patient with no bone tissue sample, you can compare his bone density with the samples you already have and say, ‘Okay, he’s most likely to have this stiffness, so I’ll use this stiffness parameter; then I can calculate the strains and bone remodeling correctly.’”

Schneider calls the planned database a “decision support system.” And while these micromechanical simulations aren’t large, resolving each tissue sample will require 120,000 individual simulations. Schneider says: “I want to help surgeons with this wonderful simulation and I couldn’t do it without supercomputing. On a workstation I would be calculating for years. With HPC I do it in a day. It’s a perfect tool for it.”

High-Performance Computing Center Stuttgart

The High Performance Computing Center Stuttgart (HLRS) supports researchers from science and industry with high-performance computing platforms, technologies and services. Their Cray XC system “Hazel Hen” is the fastest supercomputer in the European Union and one of the most powerful in the world.

System Details

  • Cray XC supercomputer
  • 7.42 PF peak performance
  • 41 cabinets
  • 7,712 compute nodes
  • 185,088 compute cores
  • 964 TB memory

Source: Cray

The post Dr. Ralf Schneider of the HPC Center Stuttgart Improves Bone Fracture Treatment appeared first on HPCwire.

NSF Presents FY 2018 Budget Request

Wed, 05/24/2017 - 08:17

May 23, 2017 — National Science Foundation (NSF) Director France A. Córdova today publicly presented President Donald J. Trump’s Fiscal Year (FY) 2018 NSF budget request to Congress.

The FY 2018 NSF budget request of $6.65 billion reflects NSF’s commitment to establishing clear priorities in areas of national importance and identifying the most innovative and promising research ideas that will yield the highest return on investment for the nation. It supports fundamental research that will drive the U.S. economy, support our nation’s security, and keep the U.S. a global leader in science, engineering and technology.

“For nearly seven decades, NSF investments in fundamental and transformational research have catalyzed discoveries that impact the lives and livelihoods of all Americans,” Córdova said. “This proposal allows us to determine the priorities for funding across the spectrum of science and engineering; facilitates interdisciplinary research and our goal to broaden participation in science; funds the construction of large facilities that will transform our understanding of nature; and seeds innovation and discovery by initiating our 10 Big Ideas.”

Detailed information on the FY 2018 budget request is available beginning today on the NSF website.

NSF continues to bring together researchers from all fields of science and engineering to address today’s challenges through foundation-wide activities. The agency continues to efficiently invest in the fundamental research and talented people who make the innovative discoveries that transform our future. NSF remains committed to supporting cross-disciplinary investments that would have significant scientific, national and societal impact.

Under the budget request:

  • Cyber-Enabled Materials, Manufacturing, and Smart Systems (CEMMSS) would receive $222.43 million. This investment aims to integrate science and engineering activities across NSF, including breakthrough materials, advanced manufacturing and smart systems, which include robotics and cyber-physical systems.
  • Inclusion across the Nation of Communities of Learners of Underrepresented Discoverers in Engineering and Science (NSF INCLUDES) would receive $14.88 million. This is an integrated, national initiative to increase the preparation, participation, advancement and potential contributions of those who have been traditionally underserved or underrepresented in the science, technology, engineering and mathematics (STEM) enterprise.
  • Innovations at the Nexus of Food, Energy and Water Systems (INFEWS) would receive $24.4 million. This investment aims to understand, design and model interconnected food, energy and water systems through an interdisciplinary research effort that incorporates all areas of science and engineering and addresses the natural, social and human-built factors involved.
  • NSF Innovation Corps (I-Corps) would receive $26.15 million. This program improves NSF-funded researchers’ access to resources that can assist in bridging the gap between discoveries and technologies, helping to transfer knowledge to downstream technological applications and use at scale.
  • Risk and Resilience investments would receive $31.15 million to improve predictability and risk assessment and increase preparedness for extreme natural and man-made events to reduce their impact on quality of life, society and the economy.
  • Secure and Trustworthy Cyberspace (SaTC) would receive $113.75 million. This investment aims to build the knowledge base in cybersecurity that enables discovery, learning and innovation, and leads to a more secure and trustworthy cyberspace.
  • Understanding the Brain (UtB) would receive $134.46 million. This initiative encompasses ongoing cognitive science and neuroscience research and NSF’s contributions to the ongoing Brain Research through Advancing Innovation and Neurotechnologies (BRAIN) Initiative. The goal of UtB is to enable scientific understanding of the full complexity of the brain, in action and in context.

In FY 2016, NSF provided 27 percent of total federal support for academic basic research in all science and engineering fields in the United States. Approximately 2,000 U.S. colleges, universities and other institutions received NSF funding. The vast majority of NSF’s funding — about 93 percent — is committed to supporting research, education and related activities. Thus, most of NSF’s budget goes back to states and localities through the grants and awards the agency makes.

NSF expects to evaluate over 50,000 proposals in FY 2018 and, through its competitive merit review process, make nearly 11,000 awards, including 8,000 new research grants. The request reflects a decrease of $841 million from the FY 2016 actuals and an estimated funding rate of 19 percent, down from 21 percent in FY 2016.

NSF’s FY 2018 support through grants and other awards is anticipated to reach an estimated 300,000 researchers, postdoctoral fellows, trainees, teachers and students who will make the innovative discoveries that transform our future. The support is divided equally among individuals, teams, centers and major facilities.

In her budget presentation, Córdova highlighted how robust NSF investments in research have returned exceptional dividends to the American people, expanding knowledge, improving lives and ensuring our security.

Source: NSF

The post NSF Presents FY 2018 Budget Request appeared first on HPCwire.

Breakthrough for Large-Scale Computing: ‘Memory Disaggregation’ Made Practical

Wed, 05/24/2017 - 08:15

ANN ARBOR, Mich., May 24, 2017 — For decades, operators of large computer clusters in both the cloud and high-performance computing communities have searched for an efficient way to share server memory in order to speed up application performance.

Now, newly available open-source software developed by University of Michigan engineers makes that practical.

The software is called Infiniswap, and it can help organizations that utilize Remote Direct Memory Access networks save money and conserve resources by stabilizing memory loads among machines. Unlike its predecessors, it requires no new hardware and no changes to existing applications or operating systems.

Infiniswap can boost the memory utilization in a cluster by up to 47 percent, which can lead to financial savings of up to 27 percent, the researchers say. More efficient use of the memory the cluster already has means less money spent on additional memory.

“Infiniswap is the first system to scalably implement cluster-wide ‘memory disaggregation,’ whereby the memory of all the servers in a computing cluster is transparently exposed as a single memory pool to all the applications in the cluster,” said Infiniswap project leader Mosharaf Chowdhury, U-M assistant professor of computer science and engineering.

“Memory disaggregation is considered a crown jewel in large scale computing because of memory scarcity in modern clusters.”

The software lets servers instantly borrow memory from other servers in the cluster when they run out, instead of writing to slower storage media such as disks. Writing to disk when a server runs out of memory is known as “paging out” or “swapping.” Disks are orders of magnitude slower than memory, and data-intensive applications often crash or halt when servers need to page.
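
A toy sketch of that decision logic follows. The real Infiniswap is, per the team's paper, a Linux kernel block device; the Python class and method names below are purely illustrative assumptions, not Infiniswap's actual API:

    class Disk:
        def write(self, page_id, data):
            print(f"slow path: page {page_id} written to local disk")

    class RemotePool:
        """Stand-in for memory borrowed from other servers over the network."""
        def __init__(self, capacity_pages):
            self.capacity = capacity_pages
            self.pages = {}

        def has_space(self):
            return len(self.pages) < self.capacity

        def write(self, page_id, data):
            # In the real system this would be a one-sided RDMA write to a
            # remote server's memory; here a dict stands in for it.
            self.pages[page_id] = data

    def page_out(page_id, data, remote, disk):
        # Prefer remote memory (microseconds away); fall back to disk
        # (milliseconds away) only when no server has spare memory.
        if remote.has_space():
            remote.write(page_id, data)
        else:
            disk.write(page_id, data)

    # A server under memory pressure evicts two pages:
    remote, disk = RemotePool(capacity_pages=1), Disk()
    page_out(1, b"...", remote, disk)   # lands in fast remote memory
    page_out(2, b"...", remote, disk)   # pool is full; falls back to disk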

Prior approaches toward memory disaggregation—from the computer architecture, high-performance computing and systems communities, as well as industry—aren’t always practical. Beyond requiring new hardware or modifications to existing applications, many depend on centralized control that becomes a bottleneck as the system scales up. If that central controller fails, the whole system goes down.

To avoid the bottleneck, the Michigan team designed a fully decentralized structure. With no centralized entity keeping track of the memory status of all the servers, it doesn’t matter how large the computer cluster is. Additionally, Infiniswap does not require designing any new hardware or making modifications to existing applications.
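
The team's paper describes one way to make such leaderless placement work: a "power of two choices" scheme, in which a machine needing memory probes just two randomly chosen servers and places its memory slab on the one with more free memory. A minimal sketch of that idea (the server records and field names below are our own illustration):

    import random

    def pick_remote_server(servers):
        # No central coordinator and no global memory map: probe two
        # randomly chosen servers and take the less-loaded one.
        a, b = random.sample(servers, 2)
        return a if a["free_gb"] >= b["free_gb"] else b

    # A mock cluster the size of the 32-machine testbed mentioned below.
    cluster = [{"name": f"node{i:02d}", "free_gb": random.randint(1, 64)}
               for i in range(32)]

    target = pick_remote_server(cluster)
    print(f"placing memory slab on {target['name']} ({target['free_gb']} GB free)")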

“We’ve rethought the well-known remote memory paging problem in the context of RDMA,” Chowdhury said.

The research team tested Infiniswap on a 32-machine RDMA cluster with workloads from data-intensive applications that ranged from in-memory databases such as VoltDB and Memcached to popular big data software Apache Spark, PowerGraph and GraphX.

They found that Infiniswap improves both “throughput”—the number of operations performed per second—and “tail latency”—the latency of the slowest operations, typically reported as a high percentile—by an order of magnitude. Throughput improved between 4 and 16 times with Infiniswap, and tail latency by a factor of 61.
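
For readers less familiar with the two metrics, a toy calculation of both from per-operation timings (the numbers are invented, and summing latencies to get elapsed time is a closed-loop, one-operation-at-a-time simplification):

    # Invented per-operation latencies in milliseconds; the two large
    # values are the kind of "tail" that paging to disk produces.
    latencies_ms = sorted([1.2, 0.9, 1.1, 250.0, 1.0, 0.8,
                           1.3, 1.1, 0.9, 310.0])

    # Throughput: operations completed per second.
    throughput = len(latencies_ms) / (sum(latencies_ms) / 1000.0)

    # Tail latency: a high percentile of the latency distribution
    # (roughly p99 here), i.e., how slow the slowest operations are.
    p99 = latencies_ms[int(0.99 * len(latencies_ms))]

    print(f"throughput = {throughput:.1f} ops/s, p99 tail latency = {p99} ms")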

“The idea of borrowing memory over the network if your disk is slow has been around since the 1990s, but network connections haven’t been fast enough,” Chowdhury said. “Now, we have reached the point where most data centers are deploying low-latency RDMA networks of the type previously only available in supercomputing environments.”

Infiniswap is being actively developed by U-M computer science and engineering graduate students Juncheng Gu, Youngmoon Lee and Yiwen Zhang, under the guidance of Chowdhury and Kang Shin, professor of electrical engineering and computer science.

The research that led to Infiniswap was funded by the National Science Foundation, Office of Naval Research and Intel. A recent paper on Infiniswap, titled “Efficient Memory Disaggregation with Infiniswap,” was presented at the USENIX Symposium on Networked Systems Design and Implementation in March.

Source: University of Michigan

The post Breakthrough for Large-Scale Computing: ‘Memory Disaggregation’ Made Practical appeared first on HPCwire.

SPEC/HPG Seeks Applications for Upcoming MPI Accelerator Benchmark

Wed, 05/24/2017 - 08:12

GAINESVILLE, Va., May 24, 2017  – The Standard Performance Evaluation Corp.’s High-Performance Group (SPEC/HPG) is offering rewards of up to $5,000 and a free benchmark license for application code and datasets accepted under its new SPEC MPI Accelerator Benchmark Search Program.

Applications that make it through the entire search program will be incorporated into a new benchmark currently called SPEC MPI ACCEL. The new benchmark will combine components of the current SPEC MPI and SPEC ACCEL benchmark suites.

Real-world scientific applications

“Our goal is to develop a benchmark that contains real-world scientific applications and scales from a single node of a supercomputer to thousands of nodes,” says Robert Henschel, SPEC/HPG chair. “The broader the base of contributors, the better the chance that we can cover a wide range of scientific disciplines and parallel-programming paradigms.”

The search program is open to individual and group submissions from industry, academia and research communities. Monetary rewards will be provided upon successful completion of escalating steps in the evaluation process, including initial porting; provision of workloads and benchmark profiling; code testing and benchmark infrastructure work; and final acceptance into the new benchmark suite.

Beyond the monetary reward

Beyond the monetary reward, free benchmark license and industry recognition, those whose applications are selected under the search program will benefit from seeing a large number of performance results published on the SPEC website, giving them a better understanding of how their applications scale. Currently, there are more than 500 performance results posted on the SPEC website for SPEC ACCEL and SPEC MPI benchmarks.

Henschel says the search program is important to SPEC/HPG because the new benchmark is a departure from past benchmarks, which focused on very specific parallelization techniques such as MPI, OpenMP and OpenACC.

“This new benchmark is a fresh start for SPEC/HPG, as we are trying to include a much broader set of parallelization techniques in order to better exploit modern HPC architectures and better represent the landscape of scientific applications.”

Deadline: December 31, 2017

Proposals for the SPEC MPI Accelerator Benchmark Search Program will be accepted from now until December 31, 2017, at 11:59 p.m. U.S. Pacific Standard Time. For more information on the program and an entry form, visit http://www.spec.org/hpg/search/.

About SPEC

SPEC is a non-profit organization that establishes, maintains and endorses standardized benchmarks and tools to evaluate performance and energy consumption for the newest generation of computing systems.  Its membership comprises more than 120 leading computer hardware and software vendors, educational institutions, research organizations, and government agencies worldwide.

Source: SPEC

The post SPEC/HPG Seeks Applications for Upcoming MPI Accelerator Benchmark appeared first on HPCwire.

Exascale Escapes 2018 Budget Axe; Rest of Science Suffers

Tue, 05/23/2017 - 22:52

President Trump’s proposed $4.1 trillion U.S. FY 2018 budget puts America on track to stand up an exascale-capable machine by 2021, but it is grim for the rest of science and technology spending. As a total crosscut from the DOE Office of Science and the National Nuclear Security Administration, exascale-focused efforts receive $508 million, a full 77 percent hike over FY17 enacted levels.

Nearly alone among government science programs, Advanced Scientific Computing Research (ASCR) and the Exascale Computing Project (ECP) run by the Department of Energy (DOE) escaped pervasive and deep cuts in spending in the FY 2018 budget proposal (“A New Foundation for American Greatness”), released Tuesday morning. Total ASCR funding gets an 11.6 percent lift to $722 million, and ECP funding rises nearly 20 percent to $196.6 million over FY17 enacted levels ($164 million).

The rest of the Office of Science programs were not so fortunate. Neither were the NIH (19 percent cut), the NSF (11 percent cut) nor, most severely, the EPA (31.4 percent cut).

Science observes that aside from the $197 million allotted to the DOE’s exascale computing project, spending on computing research actually falls, and “with all the other cuts in DOE’s science programs, it’s not clear what all that extra computing power would be used to do.”

The President’s budget slashes DOE Office of Science funding 17 percent from enacted 2017 levels to $4.47 billion, with five of its six research programs (all but ASCR) slated for steep cuts.

+ Basic energy sciences (BES), which funds research in chemistry, materials sciences, and condensed matter physics, would see its budget contract by 16.9 percent to $1.555 billion.

+ The high energy physics (HEP) program would receive a cut of 18.4 percent to $673 million.

+ Nuclear physics would see its budget drop 19.1 percent to $503 million.

+ Fusion energy sciences (FES) would be cut by 18.4 percent to $310 million. The FES budget makes $63 million available for ITER, the international fusion experiment under construction in France, but that’s far less than the estimated $230 million the US part of the project requires to stay afloat (as reported by Science in March).

+ The biological and environmental research (BER) program is hardest hit, facing a 43 percent chop to $349 million. Science reports, “much of that cut would come out of DOE’s climate modeling research.”

The Trump budget, which seeks $54 billion in expanded military spending, carves out additional funding for the NNSA, the organization responsible for enhancing national security through the military application of nuclear science. The NNSA would receive a boost of 7.8 percent, from $12.9 billion to $13.9 billion, while funding for the NNSA’s Advanced Simulation and Computing program surges 10.7 percent from $663 million to $734 million. The NNSA has requested $183 million in FY 2018 “for activities and research leading to deployment of exascale capability for national security applications in the early 2020s.” ($161 million of this exascale allotment is slated for ASC.)

The $183 million request would boost the NNSA’s FY17 exascale budget by $88 million. Included in the new target is $22 million for the Exascale Class Computer Cooling Equipment (ECCCE) project at the Los Alamos National Laboratory. The ECCCE would fund open-cell evaporative cooling towers for an exascale-class machine at the lab. Another $3 million is allocated for the Exascale Computing Facility Modernization (ECFM) Project at Lawrence Livermore National Laboratory. The purpose of the ECFM Project is to fund the facilities and infrastructure upgrades necessary to site an exascale-class system.

Of the nearly $508 million slated for exascale, $346.58 million flows into the Office of Science coffers with $161 million going to the NNSA’s Advanced Simulation and Computing (ASC) program (according to DOE budget documents). As we noted above, however, the NNSA FY18 budget actually requests $183 million — an apparent discrepancy of $22 million. Now that happens to be the exact amount associated with the ECCCE project (also discussed above), but we’re still tracking down details and won’t speculate further.

Of the $346,580,000 Office of Science ECI Request, $196,580,000 is slated for the ECP project “to accelerate research and the preparation of applications, develop a software stack for both exascale platforms, and support additional co-design centers in preparation for exascale system deployment in 2021.” The remaining $150,000,000 would fund Leadership Computing Facilities, “to begin planning, non-recurring engineering, and site preparations for the intended deployment of at least one exascale system in 2021.” The DOE intends to deploy one exascale platform at Argonne in 2021, followed by a second exascale-capable system with a different advanced architecture at Oak Ridge.
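
As a quick arithmetic check, those figures reconcile as follows (all values in millions of dollars, taken from the budget numbers cited in this article):

    ecp = 196.58                   # Exascale Computing Project request
    facilities = 150.0             # Leadership Computing Facilities site prep
    office_of_science = ecp + facilities      # 346.58

    nnsa_asc = 161.0               # NNSA ASC share per DOE budget documents
    crosscut = office_of_science + nnsa_asc   # 507.58, the "nearly $508 million"

    nnsa_request = 183.0           # NNSA's own FY18 exascale request
    gap = nnsa_request - nnsa_asc  # 22.0, the apparent discrepancy noted above
    print(f"crosscut total: ${crosscut:.2f}M, NNSA gap: ${gap:.0f}M")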

Besides the US government’s significantly increased commitment to funding exascale, what stands out in these numbers is a shift in total exascale funding toward the NNSA under Trump’s term. NNSA’s share of the pie has gone from 25 percent in FY16 to 32 percent in FY17 and (proposed) FY18. (Plugging in the $183 million NNSA exascale funding figure would push that to nearly 36 percent.)

While we’ve just scratched the surface of the budget’s implications for science and advanced computing, in the big picture there is little chance the final FY18 version will bear a close resemblance to the Trump plan (essentially a fleshed-out version of the “skinny budget” that was modeled after the Heritage Foundation blueprint). The budget has not been well-received by either side of the political aisle and has been widely criticized for too-steep cuts and unrealistic accounting practices.

“There’s this rosy optimism that somehow growth will magically occur, and yet it cuts the principal source of that growth,” said Rush Holt, CEO of the American Association for the Advancement of Science, cited in a Washington Post article. Yet “[the proposal] savages research. Economists are clear: That’s where we ultimately get our economic growth.”

The Information Technology and Innovation Foundation (ITIF), a prominent US science and technology think tank, released a statement that read in part:

The United States has suffered for more than a decade from chronic underinvestment in basic science, research and development, and technology commercialization, and from insufficient support for small manufacturers. Further reducing federal investment in these kinds of foundational goods will set back the country even further—undermining economic growth, causing standards of living to stagnate, and putting prosperity at risk for future generations of Americans. Yet the administration’s budget calls for a nearly 10 percent cut for non-defense R&D. The administration needs to recognize there is a big difference between wasteful spending and critical investments that ensure the U.S. economy, citizens, and businesses thrive. Targeted federal government programs of the sort the administration is suggesting Congress cut are widely used by even the most conservative Republican governors to help businesses in their states compete.

Further reading:

http://www.sciencemag.org/news/2017/05/what-s-trump-s-2018-budget-request-science

https://www.washingtonpost.com/news/to-your-health/wp/2017/05/22/trump-budget-seeks-huge-cuts-to-disease-prevention-and-medical-research-departments/

https://www.washingtonpost.com/news/energy-environment/wp/2017/05/22/epa-remains-top-target-with-trump-administration-proposing-31-percent-budget-cut/

https://www.washingtonpost.com/news/wonk/wp/2017/05/23/larry-summers-trumps-budget-is-simply-ludicrous/

https://itif.org/publications/2017/05/23/trump-budget-proposal-undermines-us-innovation-and-competitiveness

–John Russell contributed to this report.

The post Exascale Escapes 2018 Budget Axe; Rest of Science Suffers appeared first on HPCwire.
