Related News - HPCwire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Crossing the HPC chasm to AI and Deep Learning with IBM Spectrum Computing

Sun, 03/18/2018 - 22:06

IBM recently announced new software to deliver faster time to insight for AI, deep learning, high-performance computing, and data analytics, based on the same technology being deployed for the Department of Energy CORAL supercomputers at Oak Ridge and Lawrence Livermore National Laboratories.

The post Crossing the HPC chasm to AI and Deep Learning with IBM Spectrum Computing appeared first on HPCwire.

A3Cube Partners with Edgeware Computing to Increase Sales of Supercomputing Solutions for the Enterprise

Sat, 03/17/2018 - 15:40

A3Cube is the inventor and maker of the Kira family of enterprise supercomputers, powered by the Fortissimo Foundation system, which produce a high-performance data platform using standard hardware and processors that plugs quickly into existing architectures. A3Cube’s approach solves file-management, storage, memory-usage, latency, and processor-sharing challenges that other systems have not mastered. Independent testing shows application and network speed improvements of several-fold.

“We are very pleased to plug our solutions into the professional networks of EdgeWare Computing,” said Emilio Billi, CTO and inventor of all A3Cube’s technology, just named Rising Entrepreneur of the Year by Technology Headlines. “Their ability to discover, design, and deploy ground-breaking solutions to meet some of the world’s most complex technology challenges is a perfect match for our High Performance Data systems.”

“A3Cube provides the type of game-changing technology we bring to our customers,” said EdgeWare Computing co-founder Greg Powers. “With big-data, security, AI, analytics, edge, and streaming workflow needs increasing at logarithmic speed, we can no longer rely on the geometric increases embodied in Moore’s Law.”

“We’ve entered the Age of Exascale Computing,” added EdgeWare Computing co-founder Eliot Bergson, “and A3Cube and EdgeWare help companies use data as jet fuel today.”

About A3Cube

A3Cube, based in San Jose, Calif., pioneers the transformation from High Performance Computing into High Performance Data. It brings together Emilio Billi’s more than 20 years of experience developing supercomputer systems and a team of veteran hardware and software engineers focused on delivering innovative products that integrate diverse technologies into a unified architecture. A3Cube’s integrated ultra-computing portfolio consists of three product lines: HPC supercomputers, data analytics solutions, and specialized AI supercomputers. For more information, visit

About EdgeWare Computing

EdgeWare Computing, based in New York, Los Angeles, and Ames, Iowa, brings almost 100 years of collective business experience to government agencies and Fortune 500 companies in security, aerospace, technology, healthcare, finance, and other industries. They specialize in delivering solutions that every CEO, CTO, and CIO needs to increase revenue and constantly improve their business. For more information, visit

Source: A3Cube; EdgeWare Computing


Supermicro Receives Nasdaq Staff Determination Letter; Has Requested Hearing Before Hearings Panel

Sat, 03/17/2018 - 15:16

SAN JOSE, Calif., March 17, 2018 — Super Micro Computer, Inc. (NASDAQ:SMCI) (the “Company”), a global leader in high performance, high-efficiency server, storage technology and green computing, today announced that it has received a letter from the staff of the Listing Qualifications Department (the “Staff”) of The Nasdaq Stock Market LLC (“Nasdaq”) notifying the Company that since it remains delinquent in filing its Annual Report on Form 10-K for the fiscal year ended June 30, 2017 and its Quarterly Reports on Form 10-Q for the quarterly periods ended September 30, 2017 and December 31, 2017, the Staff has determined that the Company is non-compliant with Nasdaq Listing Rule 5250(c)(1). Previously the Staff granted the Company an extension until March 13, 2018 to file all delinquent periodic reports. As a result of the foregoing, the Company’s common stock is subject to suspension in trading on March 23, 2018 and delisting from Nasdaq unless the Company requests a hearing before a Hearings Panel by March 21, 2018.

On March 16, 2018, the Company submitted a letter to Nasdaq requesting a hearing before a Hearings Panel at which it intends to present its plan to regain and thereafter maintain compliance with all applicable Nasdaq listing requirements. The hearing request automatically stays the delisting process for a period of 15 calendar days from the date of the deadline to request a hearing. The Hearings Panel has the authority to grant the Company additional time of up to 360 days from the original due date of the Company’s first late filing to regain compliance before further action would be taken to delist the Company’s common stock.

In connection with its request for a hearing, the Company has also requested a stay of the suspension of trading and delisting of its common stock, pending the decision of the Hearings Panel. The Hearings Panel will notify the Company by April 5, 2018 of its decision to allow the Company to continue to trade on Nasdaq pending the hearing and a decision by the Hearings Panel. There can be no assurance that the Hearings Panel will grant the Company’s request for continued listing or stay the delisting of its common stock.


About Super Micro Computer, Inc.

Supermicro, a global leader in high-performance, high-efficiency server technology and innovation, is a premier provider of end-to-end green computing solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro’s advanced Server Building Block Solutions offer a vast array of components for building energy-efficient, application-optimized computing solutions. Architecture innovations include Twin, TwinPro, FatTwin, Ultra Series, MicroCloud, MicroBlade, SuperBlade, Double-sided Storage, Battery Backup Power (BBP) modules and WIO/UIO.

Products include servers, blades, GPU systems, workstations, motherboards, chassis, power supplies, storage, networking, server management software and SuperRack® cabinets / accessories delivering unrivaled performance and value.

Founded in 1993 and headquartered in San Jose, California, Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative. The Company has global logistics and operations centers in Silicon Valley (USA), the Netherlands (Europe) and its Science & Technology Park in Taiwan (Asia). Supermicro, FatTwin, TwinPro, SuperBlade, Double-Sided Storage, BBP, SuperRack, Building Block Solutions and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names and trademarks are the property of their respective owners.

Source: Super Micro Computer, Inc.


Nominations Open for 2018 ACM SIGHPC/Intel Computational & Data Science Fellowships

Sat, 03/17/2018 - 15:09

March 17 — ACM’s Special Interest Group on High Performance Computing (SIGHPC), in partnership with Intel, awards the ACM SIGHPC/Intel Computational and Data Science Fellowships to 12 world-class students from under-represented groups in computing.

The fellowships, funded by Intel, were established to increase diversity among students pursuing graduate degrees in data and computational sciences, including women and students from racial/ethnic backgrounds historically underrepresented in the computing field. The fellowship provides $15,000 USD annually for up to five years for students at institutions anywhere in the world who have completed less than half of their planned program of study.

Students, nominated by their graduate advisors, span disciplines from finance and robotics to managing personal health data, and represent institutions of all sizes in 25 countries. In 2017, 80% of nominees were female, and 40% identified as an underrepresented minority in their country of study.

Nominations are evaluated and ranked by a panel of experts (themselves diverse with respect to race, gender, discipline, and nationality) based on nominees’ overall potential for excellence in data science and/or computational science, and the extent to which they will serve as leaders and role models to increase diversity in the workplace.

Nominations Open: March 15, 2018
Nominations Close: April 30, 2018
Notifications Sent: July 31, 2018

Nomination Details

For complete information and to make a nomination.


For questions about the nomination process, contact: SIGHPC


Funding is awarded in August. Winners receive travel support to attend the SC Conference, where they are recognized during the Awards Ceremony. Winners also receive a complimentary membership in SIGHPC for the duration of their fellowship.


The ACM Special Interest Group on HPC is the first international group within a major professional society that is devoted exclusively to the needs of students, faculty, and practitioners in high performance computing. SIGHPC’s mission is to help spread the use of HPC, raise the standards of the profession, and ensure a rich and rewarding career for people involved in the field.

For more information, visit: SIGHPC.

Source: SIGHPC


Mellanox Reacts to Activist Investor Pressures in Letter to Shareholders

Fri, 03/16/2018 - 17:55

As we’ve previously reported, activist investor Starboard Value has been exerting pressure on Mellanox Technologies to increase its returns. In response, the high-performance networking company on Monday, March 12, published a letter to shareholders outlining its proposal for a May 2018 extraordinary general meeting (EGM) of shareholders and highlighting its long-term growth strategy and focus on operating margin improvement.

Here is the text of that letter in full:

Dear Fellow Shareholders,

On behalf of the Mellanox Board of Directors, we are writing to you today to emphasize that we believe calling an extraordinary general meeting of shareholders (EGM) is essential to protecting shareholder choice, to reiterate that continued successful execution of our long-term growth strategy is delivering value, and to provide clarity on certain perspectives in the marketplace.

The EGM Proposals Reflect Our Board’s Commitment to Best-in-Class Governance

At Mellanox’s EGM in May 2018, shareholders will be asked to vote in favor of two best-in-class governance proposals, designed to enhance shareholder choice in a contested election by allowing shareholders to vote for the director candidates who they believe will best guide Mellanox’s strategy and success over the long term.

First, you will be asked to establish plurality voting in the event of a contested election. With a plurality voting standard, which has been adopted by the vast majority of U.S.-listed companies in the event of contested elections, the Board would consist of the director nominees gaining the greatest number of votes and all of the directors who serve on the Board would be elected directly by shareholders. Under our current majority voting structure, it would be possible for fewer than 11 director candidates to receive the necessary votes to get elected, leaving vacancies to be filled by those majority-elected directors rather than by you, our shareholders. Such an outcome becomes particularly likely in contested elections, where votes may be split among a larger number of nominees.
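The mechanics behind this concern can be sketched with a toy tally (the vote counts below are hypothetical, chosen only to illustrate the two standards; `elect_majority` and `elect_plurality` are illustrative names, not part of any real proxy system):

```python
# Toy illustration of why a majority voting standard can leave board
# seats unfilled in a contested election, while a plurality standard
# always fills every seat with a shareholder-elected nominee.

def elect_majority(votes_for, votes_cast):
    """Majority standard: a nominee is elected only if more than half
    of all votes cast are in their favor."""
    return [c for c, v in votes_for.items() if v > votes_cast / 2]

def elect_plurality(votes_for, seats):
    """Plurality standard: the nominees with the most votes fill all seats."""
    ranked = sorted(votes_for, key=votes_for.get, reverse=True)
    return ranked[:seats]

# Hypothetical contested election: 100 votes cast, 11 seats, support
# split between two competing slates of 11 nominees each.
votes = {f"incumbent_{i}": 48 for i in range(11)}
votes.update({f"challenger_{i}": 52 if i < 3 else 40 for i in range(11)})

print(len(elect_majority(votes, votes_cast=100)))  # 3 nominees clear 50%
print(len(elect_plurality(votes, seats=11)))       # all 11 seats filled
```

In the majority-standard case, only three nominees cross the 50% threshold, leaving eight vacancies to be filled by the elected directors rather than by shareholders, which is the outcome the letter warns about.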

Second, you will be asked to vote in favor of the use of universal proxy cards, which would require any nominee to the Board to consent to being named as a nominee in any proxy statement or proxy card used in connection with the general meeting at which directors will be elected. The adoption of universal proxy cards will provide for all nominees put forth by either the Board or any shareholder of Mellanox to be listed together on one universal proxy card, enabling shareholders to elect any combination of director nominees they choose. Under our current rules, shareholders may only validly submit one proxy, meaning they cannot easily cast votes for nominees from both proxies as they could when voting in person.

The EGM is an Essential Step to Protect Shareholder Choice and Interests

We strongly believe that the composition of the Mellanox Board of Directors should reflect the true intentions of our shareholders. We have carefully taken into consideration many different factors, including a detailed consideration of all relevant U.S. and Israeli legal requirements. We have scheduled the EGM as soon as practicable in order to ensure that all shareholders’ voices will be heard.

On January 17, 2018, without warning and despite what we believed to be constructive discussions, Starboard Value nominated a full slate of directors to the Mellanox Board with the intention of replacing all of the current directors and taking control of the Company. Thus, the decisions made at the Company’s annual general meeting of shareholders (AGM) will truly affect the future of Mellanox. In light of this, the Mellanox Board, in consultation with our independent financial and legal advisors, immediately conducted a comprehensive assessment of how to proceed in the best interests of its shareholders. This review made it clear that updating Mellanox’s proxy voting mechanics to better align them with best practice in a contested election is an essential first step and one that needs to be addressed before the 2018 AGM.

The Mellanox Board believes that shareholders will place far greater value on the freedom to choose the Board they want, without fear of unintended consequences, than on what amounts to a brief delay in the Company’s historical AGM schedule.

Executing Our Long-Term Growth Strategy to Deliver Shareholder Value

Mellanox employs a careful and thoughtful approach to investing and planning and, as our past investments continue to yield impressive results, we remain well-positioned to further benefit from our long-term growth strategy.

With a total addressable market size of approximately $10.6 billion forecasted in 2021, the Ethernet segment represents a highly attractive growth opportunity for the Company. We began investing in R&D efforts related to Ethernet technology in 2013. The Ethernet design wins we are now seeing across our product suite are the rewards of our focus on innovation through R&D and investing in the future. This focused investment strategy has enabled us to outpace our competitors since 2010, with Mellanox now being the leader in innovative end-to-end solutions for connecting servers and storage platforms, holding the #1 or #2 position in many of the key markets in which we operate.

In fiscal 2017, the Company made significant strategic investments to complete its data center-focused portfolio. Specifically:

  • Improving InfiniBand competitive strength in AI and HPC markets through InfiniBand 200Gb/s generation, to be introduced shortly;
  • Developing the BlueField family of products from our EZchip acquisition, which provides Mellanox access to a $2 billion addressable system-on-a-chip (SoC) market; and
  • Increasing the deployment of Spectrum Ethernet Switch platforms.

Strongly Positioned to Continue Realizing the Benefits of Our Prior Investments

Our emphasis on R&D has enabled us to create the most cutting-edge solutions in the industry, resulting in consistent revenue growth, averaging 27% on a pro forma basis since the Company’s initial public offering in 2007. Our fiscal 2017 results demonstrate the successful execution of our multi-year revenue diversification strategy and our leadership position in 25Gb+ Ethernet adapters. Notably, our Ethernet switch business in the 25Gb+ segment grew 41% sequentially in the fourth quarter of 2017, as customers around the world increasingly adopted our Ethernet products.

These trends are holding firm, and customer transition from 10Gb/s to 25Gb+ Ethernet adapters is accelerating across the board – we believe in large part due to Mellanox. We foresaw this industry shift, which is why we made the right investments and worked so diligently to position our business to have the technology and capacity to capture market share and meet demand when the transition began. In fact, we recently raised our guidance estimates for the first quarter of 2018. Based on the mid-point of our guidance, we expect to deliver around 30% year-over-year growth in revenue and 11% operating margin expansion. Importantly, our strong financial performance was a direct result of the investments we made years before in 25Gb+ Ethernet technology.

Executing a Disciplined Campaign to Dramatically Drive Operating Margin Improvements

Mellanox is also focused on expanding market share and improving operating margins. We are committed to achieving these objectives, while maintaining our competitive advantage of superior technology for the long term. We are rationalizing our product portfolio and focusing our investment on businesses with the greatest potential for growth and highest return on investment capital. Due to our ongoing cost-cutting initiatives, we continue to reduce our operating expenses. Recently, we announced that we are ceasing investment in a new generation of network processing family of products, and discontinuing our development in 1550 nanometer silicon photonics.

The Board and management team are fully focused on executing the Company’s strategy of driving sustainable growth, which we are seeing in the accelerated adoption of key products across the customer base, while at the same time delivering on our commitment to more efficiently manage costs. In line with our strategy, we plan to end fiscal 2018 with operating margins in the 20%+ range. And as we scale faster with our Ethernet solutions, we anticipate growing and driving further efficiencies to reach operating margins in the 30s.

Mellanox is, and Will Continue to Be, a Growth Story

Mellanox has been executing on our growth strategy, and we are delivering positive results. The tremendous growth and success we have achieved is a culmination of years of carefully planned investment, as well as a direct result of our Board and management team’s extensive expertise and dedication to research and development. We are seeing significant design wins across our product suite, penetrating key markets and building traction with customers.

The value we have created over the past two quarters is the result of the widespread adoption of 25Gb+ technology in every major market around the world – a trend Mellanox has long anticipated and planned for. In short, our R&D strategy is working, as proven by our financial results. Cost cuts can drive profitability in the short term, but they can also stifle innovation and growth and have limited upside over the long term, particularly if they are undertaken to meet specific operating margin targets and not with a view towards sustainable value creation. Growth takes vision, time, investment, expertise and patience – but the upside is significant and more sustainable than financial engineering. That is the true long-term value we are creating at Mellanox for all shareholders.

New Independent Directors Further Enhance Our Diverse, Highly-Qualified Board Committed to Shareholder Value Creation

Mellanox has a diverse and experienced Board that is actively engaged in overseeing the execution of our strategy to continue to increase revenue, expand market share and improve our operating margins. Our Board is composed of 11 highly qualified and experienced directors, nine of whom are independent and all of whom are seasoned leaders committed to driving shareholder value.

Notably, our Board recently welcomed two new, independent directors, Steve Sanghi and Umesh Padval, concluding our search that began last year to fill two vacant board seats. As CEO of Microchip since 1991, Steve has established himself as one of the best operators in the semiconductor industry with a proven ability to drive profitable growth and value creation. Since joining the board of Integrated Device Technology in 2008, Umesh has seen significant operating margin expansion and stock price appreciation, resulting in over 5x market cap growth during his tenure. Each is a skilled and deeply knowledgeable leader who brings new perspectives and extensive semiconductor industry knowledge to our team. We are confident that their leadership abilities will be invaluable to Mellanox as we continue to execute on our strategic plan.

The EGM Will Ensure That Your Voice Will Be Heard

Rushing into an AGM with our existing voting policies puts our company, shareholders and progress at risk. Mellanox, like many Israel-domiciled corporations, does not have articles of association that provide for a fair and transparent contested election. The unintended consequences of rushing into an AGM without first solving for the majority vote standard and establishing a universal proxy card could be exploited at the expense of Mellanox and its shareholders.

This year’s EGM was scheduled on approximately the same timeline as our 2017 uncontested AGM. In light of the requirement to hold the EGM first, the AGM has been pushed out. Following the EGM, we intend to immediately begin the process of scheduling the AGM, which we anticipate holding on July 25, 2018, in accordance with the Israeli Companies Law and Mellanox’s articles of association.

The Mellanox Board of Directors will be sending you proxy materials shortly so that you can vote to approve these best-in-class governance proposals: establishing plurality voting and requiring the use of universal proxy cards in the event of contested elections. Preliminary copies of the proxy materials have been filed with the U.S. Securities and Exchange Commission and are publicly available on our website.

The Mellanox Board is committed to building and protecting your investment by holding the EGM promptly and taking the steps necessary to align our governance policies with your interests and ensure that the composition of our Board fairly reflects shareholders’ intentions.

On behalf of your Board of Directors, thank you for your continued support.


Irwin Federman

Chairman of the Board

Eyal Waldman

President, CEO and Director


MathWorks Announces Release 2018a of the MATLAB and Simulink Product Families

Fri, 03/16/2018 - 15:00

NATICK, Mass., March 16, 2018 — MathWorks introduced Release 2018a (R2018a) with a range of new capabilities in MATLAB and Simulink. R2018a includes two new products: Predictive Maintenance Toolbox, for designing and testing condition monitoring and predictive maintenance algorithms, and Vehicle Dynamics Blockset, for modeling and simulating vehicle dynamics in a virtual 3D environment. In addition to the new products and new features in MATLAB and Simulink, this release also includes updates and bug fixes to 94 other products.

MATLAB Product Family Updates Include:

  • MATLAB:
    • Live functions, documentation authoring, debugging, and interactive controls for embedding sliders and drop-down menus in the Live Editor
    • App (UI) testing framework, C++ MEX interface, custom tab completion, and function assistants for advanced software development
  • MATLAB Online:
    • Hardware connectivity for communicating with USB webcams
  • Econometrics Toolbox:
    • Econometric Modeler app for performing time series analysis, specification testing, modeling, and diagnostics
  • Image Processing Toolbox:
    • 3-D image processing and volume visualization
  • Partial Differential Equation Toolbox:
    • Structural dynamic analysis to find natural frequencies, mode shapes, and transient response
  • Optimization Toolbox:
    • Branching methods for solving mixed-integer linear problems faster

Deep Learning

  • Neural Network Toolbox:
    • Support package for importing deep learning layers and networks designed in TensorFlow-Keras
    • Long short-term memory (LSTM) networks for solving regression problems, and doing text classification with Text Analytics Toolbox
    • Adam, RMSProp, and gradient clipping to improve network training
    • Accelerated training for directed acyclic graph (DAG) networks using multiple GPUs and computing intermediate layer activations
  • Computer Vision System Toolbox:
    • Image Labeler app to automate labeling of individual pixels for semantic segmentation
  • GPU Coder:
    • CUDA code generation for networks with directed acyclic graph (DAG) topology and pretrained networks like GoogLeNet, ResNet, and SegNet
    • C code generation for deep learning networks on Intel and ARM processors

Data Analytics

  • Statistics and Machine Learning Toolbox:
    • High-density data visualization with scatter plots in the Classification Learner app
    • Big data algorithms for kernel SVM regression, computing confusion matrices, and creating nonstratified partitions for cross-validation
  • Text Analytics Toolbox:
    • Multiword phrase extraction and counting, HTML text extraction, and detection of sentences, email addresses, and URLs
    • Stochastic LDA model training for large datasets
  • Predictive Maintenance Toolbox:
    • A new product for designing and testing condition monitoring and predictive maintenance algorithms

Simulink Product Family Updates Include:

  • Simulink:
    • Predictive quick insert to connect a recommended block to an existing block in a model
    • Simulation Pacing for running simulations at wall clock speed or other specified pace for improved visualization
    • Simulation Data Inspector in the Live Editor for directly adding, viewing, and editing plots
  • Simulink 3D Animation:
    • Collision detection for sensing collisions of virtual world objects using point clouds, raytracing, and primitive geometries
  • Simscape:
    • Moist air domain and block library to model HVAC and environmental control systems
    • Partitioning local solver to increase real-time simulation speed


  • Automated Driving System Toolbox:
    • Driving Scenario Designer app for interactively defining actors and driving scenarios to test control and sensor fusion algorithms
  • Model Predictive Control Toolbox:
    • ADAS blocks for designing, simulating, and implementing adaptive cruise control and lane-keeping algorithms
  • Vehicle Network Toolbox:
    • CAN FD protocol support in Simulink, and XCP over Ethernet to communicate with ECUs from MATLAB or Simulink
  • Model-Based Calibration Toolbox:
    • Powertrain Blockset integration for using measured data to calibrate and generate tables for Powertrain Blockset mapped engines
  • Vehicle Dynamics Blockset:
    • A new product for modeling and simulating vehicle dynamics in a virtual 3D environment

Code Generation

  • Embedded Coder:
    • Embedded Coder dictionary for defining custom code generation configurations for data and functions
    • Code Perspective for customizing Simulink desktop for code generation workflows
  • MATLAB Coder:
    • Row-major array layout to simplify interfacing generated code with C environments storing arrays in row-major format
    • Sparse matrix support to enable more efficient computation using sparse matrices in generated code
    • C code generation for machine learning deployment including k-nearest neighbor, nontree ensemble models, and distance calculations with Statistics and Machine Learning Toolbox
  • Fixed-Point Designer:
    • Lookup table optimization for approximating functions and minimizing existing lookup table RAM usage
  • HDL Coder:
    • Matrix support enabling HDL code generation directly from algorithms with two-dimensional matrix data types and operations
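The row-major layout option listed under MATLAB Coder above addresses a long-standing mismatch: MATLAB stores arrays in column-major order, while C code conventionally stores them row-major. A minimal Python sketch of the two flattening orders for the same matrix:

```python
# Column-major (MATLAB/Fortran) versus row-major (C) array flattening,
# the mismatch the MATLAB Coder row-major option addresses.
matrix = [[1, 2, 3],
          [4, 5, 6]]          # 2 rows x 3 columns

row_major = [x for row in matrix for x in row]                   # C order
col_major = [matrix[r][c] for c in range(3) for r in range(2)]   # MATLAB order

print(row_major)  # [1, 2, 3, 4, 5, 6]
print(col_major)  # [1, 4, 2, 5, 3, 6]
```

Generating code that already uses row-major order avoids the transposes or index remapping otherwise needed when passing arrays between generated code and an existing C environment.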

Signal Processing and Communications

  • Signal Processing Toolbox:
    • Signal Analyzer app for processing multiple signals and extracting regions of interest from signals
    • Vibration signal analysis from rotating machinery using RPM tracking and order analysis
  • LTE System Toolbox:
    • NB-IoT support to model the narrowband Internet of Things transport and physical downlink shared channel
  • RF Blockset:
    • Power amplifier model for capturing nonlinearity and memory effects based on input/output device characteristics
  • Wavelet Toolbox:
    • Continuous and discrete wavelet transform filter banks
  • Robotics System Toolbox:
    • Lidar-based SLAM for localizing robots and mapping environments using lidar sensors

Verification and Validation

  • Simulink Requirements:
    • Requirements import with ReqIF for importing requirements from third-party tools such as IBM Rational DOORS Next Generation or Siemens Polarion
  • Simulink Test:
    • Coverage aggregation to combine coverage results from multiple test runs
  • Polyspace Code Prover:
    • AUTOSAR support for static analysis of AUTOSAR software components

R2018a is available immediately worldwide. For more information, see R2018a Highlights.

Follow @MATLAB on Twitter for the conversation about what’s new in R2018a, or like the MATLAB Facebook page.

About MathWorks

MathWorks is the leading developer of mathematical computing software. MATLAB, the language of technical computing, is a programming environment for algorithm development, data analysis, visualization, and numeric computation. Simulink is a graphical environment for simulation and Model-Based Design for multidomain dynamic and embedded systems. Engineers and scientists worldwide rely on these product families to accelerate the pace of discovery, innovation, and development in automotive, aerospace, electronics, financial services, biotech-pharmaceutical, and other industries. MATLAB and Simulink are also fundamental teaching and research tools in the world’s universities and learning institutions. Founded in 1984, MathWorks employs more than 3500 people in 15 countries, with headquarters in Natick, Massachusetts, USA. For additional information, visit

Source: MathWorks


NVIDIA Releases PGI 2018

Fri, 03/16/2018 - 14:01

March 16, 2018 — NVIDIA has announced the availability of PGI 2018. PGI compilers and tools are used by scientists and engineers who develop applications for high-performance computing (HPC) systems. They deliver world-class multicore CPU performance, an easy on-ramp to GPU computing with OpenACC directives, and performance portability across all major HPC platforms.

New Features in 2018:

  • Support for Intel Skylake, IBM POWER9 and AMD Zen
  • AVX-512 code generation on compatible Intel processors
  • Full OpenACC 2.6 directives-based parallel programming on both Tesla GPUs and multicore CPUs
  • OpenMP 4.5 for x86-64 and OpenPOWER multicore CPUs
  • Integrated CUDA 9.1 toolkit and libraries for Tesla GPUs including V100 Volta
  • Partial C++17 support and GCC 7.2 interoperability
  • New PGI fastmath intrinsics library including AVX-512 support

For additional details, check out the SlideShare here:

Source: NVIDIA


Asetek Announces Ongoing Collaboration with Intel on Liquid Cooling for Servers and Datacenters

Fri, 03/16/2018 - 08:42

March 16, 2018 — In anticipation of forthcoming product announcements, Asetek today announced an ongoing collaboration with Intel to provide hot water liquid cooling for servers and datacenters.

This collaboration, which includes Asetek’s ServerLSL and RackCDU D2C technologies, is focused on the liquid cooling of density-optimized Intel® Compute Modules supporting high-performance Intel® Xeon® Scalable processors.

“Asetek liquid cooling solutions are designed to support high-powered CPUs in an energy-efficient and cost-effective manner,” said Andre Eriksen, Asetek CEO and Founder. “We are excited about the work we’ve done with Intel to enable their customers and partners to realize the benefits that hot water liquid cooling can provide in datacenter environments.”

“Our customers are looking to use high-performance Intel processors in very dense configurations,” said Al Diaz, VP and general manager, Product Collaboration and Systems Division, Intel Data Center Group. “The work we’ve done with Asetek will enable them to support highly demanding datacenter workloads in a liquid cooled environment.”

Further announcements by Asetek related to this collaboration will be forthcoming in the next few months.

About Asetek

Asetek is a global leader in liquid cooling solutions for data centers, servers and PCs. Founded in 2000, Asetek is headquartered in Denmark and has operations in California, Texas, China and Taiwan. Asetek is listed on the Oslo Stock Exchange (ASETEK). For more information,

Source: Asetek

The post Asetek Announces Ongoing Collaboration with Intel on Liquid Cooling for Servers and Datacenters appeared first on HPCwire.

Scientists Estimate North American Snowfall with NASA’s Pleiades Supercomputer

Thu, 03/15/2018 - 17:33

COLUMBUS, Ohio, March 15, 2018 — There’s a lot more snow piling up in the mountains of North America than anyone knew, according to a first-of-its-kind study.

Scientists have revised an estimate of snow volume for the entire continent, and they’ve discovered that snow accumulation in a typical year is 50 percent higher than previously thought.

In the journal Geophysical Research Letters, researchers at The Ohio State University place the yearly estimate at about 1,200 cubic miles of snow accumulation. If spread evenly across the surface of the continent from Canada to Mexico, the snow would measure a little over 7.5 inches deep. If confined to Ohio, it would bury the state under 150 feet of snow.

Most of the snow accumulates atop the Canadian Rockies and 10 other mountain ranges. And while these mountains compose only a quarter of the continent’s land area, they hold 60 percent of the snow, the researchers determined.

The research represents an important step toward understanding the true extent of fresh water sources on the continent, explained doctoral student Melissa Wrzesien, lead author on the paper.

“Our big result was that there’s a lot more snow in the mountains than we previously thought,” she said. “That suggests that mountain snow plays a much larger role in the continental water budget than we knew.”

It’s currently impossible to directly measure how much water is on the planet, said Michael Durand, associate professor of earth sciences at Ohio State. “It’s extremely important to know—not just so we can make estimates of available fresh water, but also because we don’t fully understand Earth’s water cycle.”

The fundamentals are known, Durand explained. Water evaporates, condenses over mountains and falls to earth as rain or snow. From there, snow melts, and water runs into rivers and lakes and ultimately into the ocean.

But exactly how much water there is—or what proportion of it falls as snow or rain—isn’t precisely known. Satellites make reasonable measurements of snow on the plains where the ground is flat, though uncertainties persist even there. But mountain terrain is too unpredictable for current satellites. That’s why researchers have to construct regional climate computer models to get a handle on snow accumulation at the continental scale.

For her doctoral thesis, Wrzesien is combining different regional climate models to make a more precise estimate of annual snow accumulation on 11 North American mountain ranges, including the Canadian Rockies, the Cascades, the Sierra Nevada and the Appalachian Mountains. She stitches those results together with snow accumulation data from the plains.

So far, the project has consumed 1.8 million core-hours on NASA’s Pleiades supercomputer and produced about 16 terabytes of data. On a typical laptop, the calculations would have taken about 50 years to complete.
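The “about 50 years” figure is consistent with simple arithmetic, assuming a hypothetical 4-core laptop running around the clock (the core count is an assumption for illustration, not from the article):

```python
# Back-of-envelope check of the article's laptop comparison.
core_hours = 1.8e6          # total compute reported on Pleiades
laptop_cores = 4            # assumed laptop core count (hypothetical)
hours_per_year = 24 * 365   # 8760 hours, running non-stop

years = core_hours / laptop_cores / hours_per_year
print(round(years, 1))      # ~51 years, matching the article's "about 50 years"
```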

Whereas scientists previously thought the continent held a little more than 750 cubic miles of snow each year, the Ohio State researchers found the total to be closer to 1,200 cubic miles.

They actually measure snow-water equivalent, the amount of water that would form if the snow melted—at about a 3-to-1 ratio. For North America, the snow-water equivalent would be around 400 cubic miles of water—enough to flood the entire continent 2.5 inches deep, or the state of Ohio 50 feet deep.
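The conversion in the paragraph above is straightforward to check using the stated 3-to-1 snow-to-water ratio:

```python
# Snow-water equivalent at the article's roughly 3-to-1 ratio.
snow_cubic_miles = 1200
swe_ratio = 3  # about 3 cubic miles of snow per cubic mile of water

water_cubic_miles = snow_cubic_miles / swe_ratio
print(water_cubic_miles)  # 400.0 cubic miles of water, as the article states
```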

And while previous estimates placed one-third of North American snow accumulation in the mountains and two-thirds on the plains, the exact opposite turned out to be true: Around 60 percent of North American snow accumulation happens in the mountains, with the Canadian Rockies holding as much snow as the other 10 mountain ranges in the study combined.

“Each of these ranges is a huge part of the climate system,” Durand said, “but I don’t think we realized how important the Canadian Rockies really are. We hope that by drawing attention to the importance of the mountains, this work will help spur development in understanding how mountains fit into the large-scale picture.”

What scientists really need, he said, is a dedicated satellite capable of measuring snow depth in both complex terrain and in the plains. He and his colleagues are part of a collaboration that is proposing just such a satellite.

Co-authors on the paper included Distinguished University Scholar C.K. Shum, senior research associate Junyi Guo and doctoral student Yu Zhang, all of the School of Earth Sciences at Ohio State; Tamlin Pavelsky of the University of North Carolina at Chapel Hill; and Sarah Kapnick of the National Oceanic and Atmospheric Administration. Durand and Wrzesien hold appointments at the Byrd Polar and Climate Research Center, and Shum holds an appointment with the Chinese Academy of Sciences.

Their work was funded by NASA and the National Science Foundation.

Source: Pam Frost Gorder, Ohio State University (link)

The post Scientists Estimate North American Snowfall with NASA’s Pleiades Supercomputer appeared first on HPCwire.

Quantum Computing vs. Our ‘Caveman Newtonian Brain’: Why Quantum Is So Hard

Thu, 03/15/2018 - 17:06

Quantum is coming. Maybe not today, maybe not tomorrow, but soon enough. Within 10 to 12 years, we’re told, special-purpose quantum systems (see related story: Hyperion on the Emergence of the Quantum Computing Ecosystem) will enter the commercial realm. Assuming this happens, we can also assume that quantum will, over extended time, become increasingly general purpose as it delivers mind-blowing power.

Here’s the quantum computing dichotomy: even as quantum evolves toward commercial availability, very few of us in the technology industry have the slightest idea what it is. But it turns out there’s a perfectly good reason for this. As you’ll see, quantum (referred to as the “science of the very small”) is based on a non-human, non-Newtonian stratum of earthly existence, which means it does things, and acts in accordance with certain laws, for which we humans have no frame of reference.

Realizing why quantum is so alien can be liberating. It frees us from the gnawing worry that we’re not smart enough to ever understand it. It also means we can stop trying to fake it when quantum comes up in conversation. Speaking as a confirmed “Newtonian caveman” (see below), this writer asserts that at least the thinnest, outermost layer of quantum may not be as incomprehensible as we suppose. It might be a good idea if all of us were to make a late New Year’s Resolution to take a fresh stab at grasping quantum’s basic principles.

To help in this process, below are remarks delivered this week at the Rice University Oil & Gas HPC Conference in Houston by Kevin Kissell, technical director in Google’s Office of the CTO. In an interview last year, Kissell told us that while he works with Google’s quantum computing R&D group, he is by background a systems architect; his role with the quantum group is to advise his colleagues on assembling the technology into usable form.

“I’m not really a quantum guy,” he told us at SC17, “though I do read quantum physics textbooks in my spare time.”

Oh, ok.

If you’ve never been to a Kevin Kissell presentation at an industry conference, make a point of it at your next opportunity. It’s appointment viewing. The profusion of technical and scientific knowledge that pours forth, colored by humor, energy and intelligence, is something to see. A tech enthusiast, Kissell gives you the sense that he can’t get his thoughts and words out fast enough. He put on such a performance at the Houston conference, taking on the Herculean task of explaining quantum computing to the rest of us. To Kissell’s great credit, he did it with the empathy of a natural teacher who understands where comprehension stops and mystification begins.

Below is an excerpt of his remarks:

Google has been working on quantum computing for a while, and it’s really hard to explain to people sometimes. And it’s my belief that this is because our brains are not wired for it. There’s an evolutionary advantage in having a brain that understands Newtonian mechanics. Which is to say that when I throw a rock, it’s going to follow a parabola. Now it took us 10,000-20,000 years to be able to define a parabola mathematically. But the intuition that it’s going to start dropping – and dropping at an accelerated rate, because that’s what gravity does – that’s pretty instinctive because that’s a survival thing. But with quantum mechanics, there’s no reason why our brain needs to wrap itself around quantum mechanics in the same way, and in part this is because it contradicts intuition.

One of the classic examples that I found quite helpful in understanding this stuff is the classic demo that you can do with a laser: you have a controlled source of individual photons, you fire photons into a beam splitter, you have a couple of mirrors, you have another beam splitter and you have a couple of detectors.

Kevin Kissell this week at the Rice University Oil & Gas HPC Conference

Now my Newtonian caveman brain tells me what should be happening is that a statistically equal number of photons should be hitting on either detector. But that’s not what happens. Because photons ain’t Newtonian things, they’re quantum things. And if you accept this just on faith – because I couldn’t derive this personally – that a beam splitter can be modeled as that matrix (see image) and that the path on which the photon is traveling can be thought of as a vector of a couple of probabilities, then I multiply that probability vector by the beam splitter matrix, which gives me a resulting vector, and then I run that vector through the second beam splitter. The result I get is that the probability of it going into the upper target is zero and the probability that it goes to the target on the right becomes one. That seems strange, and the math only works if it is mathematically, at least, possible that the photon is on both paths at the same time.
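The two-beam-splitter (Mach–Zehnder) setup Kissell describes can be sketched numerically. A minimal sketch, assuming a common Hadamard-style matrix convention for a 50/50 beam splitter (the specific matrix and port labels are assumptions, not taken from his slides):

```python
import numpy as np

# A 50/50 beam splitter modeled as a unitary acting on probability *amplitudes*
# (Hadamard-style convention; other sign conventions exist).
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

photon = np.array([1.0, 0.0])   # photon enters on one path
after_first = H @ photon        # amplitude 1/sqrt(2) on each path
after_second = H @ after_first  # second beam splitter recombines the paths

probabilities = np.abs(after_second) ** 2
print(probabilities)            # ≈ [1. 0.] -- one detector always fires, the other never
```

A classical (Newtonian) model would predict 50/50 counts at both detectors; the amplitudes interfere instead, which is exactly the point of the demo.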

This hurts our brains, but this seems to be the way the universe works at a microscopic level.

And so taking this…, if I think of my element of data as a quantum bit – or a qubit – it’s not something that I can represent as an on/off thing. In fact the usual graphical representation is a point on a sphere. So you can represent that point on a sphere as X-Y-Z coordinates, or I can represent it as a pair of angles relative to the basis. Typically, it’s done with angles. It hurts my eyes to read it, but that’s the way it’s done.

What’s cute about this is that with a normal bit, it’s 0 or 1…. (But) the quantum bit actually just has that photon which is on both paths at the same time. So this qubit is in a certain sense both 0 and 1 at the same time. It’s got a couple of values that are superimposed on it.

That’s kind of cool, but what is cooler is that if I’ve got two qubits then the vector spaces just sort of blossom. If I have two bits, I can express only one of four possible values at a time. But if I have two qubits I can express four values at the same time. And that’s the power of it. It’s just exponentially more expressive, if you can actually master it.

So if I have 50 qubits, that state space is actually up there with a very large (Department of Energy) machine. I don’t know if it’s up there with an exascale machine, but it’s getting way up there. If I have 300 qubits, in principle I can represent and manipulate more states than there are atoms in the universe.

And, very conveniently, if I have 333 qubits I can represent a googol of them. (audience laughs) I’m not saying that that’s our design goal, but I’ll be very surprised if we don’t do at least a few runs with a 333-qubit machine… (more laughs)…
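The scaling claims in the remarks above are just powers of two, and are easy to verify:

```python
# The amplitudes needed to describe an n-qubit state grow as 2**n.
def state_space(n_qubits):
    return 2 ** n_qubits

assert state_space(2) == 4            # two qubits: four superposed values
print(state_space(50))                # ~1.1e15 -- an enormous classical state space
print(state_space(300) > 10 ** 80)    # True: more states than atoms in the universe (~1e80)
print(state_space(333) > 10 ** 100)   # True: 333 qubits exceed a googol of states
```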

The post Quantum Computing vs. Our ‘Caveman Newtonian Brain’: Why Quantum Is So Hard appeared first on HPCwire.

How the Cloud Is Falling Short for HPC

Thu, 03/15/2018 - 14:40

The last couple of years have seen cloud computing gradually build some legitimacy within the HPC world, but the HPC industry still lags far behind enterprise IT in its willingness to outsource computational power. The most often touted reason for this is cost – but such a simple description hides a series of more interesting causes for the lukewarm relationship the HPC community has with public cloud providers. Here, we explore how things stand in 2018 – and more importantly, what the cloud vendors need to do if they want to make their services competitive with on-premise HPC.


Despite the huge volume of SaaS and PaaS solutions available within the cloud, the nature of HPC is such that vanilla IaaS servers and associated networking are likely to form the bulk of research computing cloud usage for the foreseeable future. The overheads of virtualisation have previously been cited as a good reason not to move into the cloud, but this argument holds water less and less as time goes on; both because researchers are generally willing to pay an (admittedly smaller) overhead to make use of containerisation, and because the actual overhead is decreasing as cloud vendors shift to custom, external silicon for managing their infrastructure. To address cases where the small remaining overhead is still too much, bare-metal infrastructure is starting to show up in the price lists of major clouds.


Without low-latency interconnects, cloud usage will be effectively impossible for massive MPI jobs typical of the most ambitious “grand challenge” research. Azure tries to fill the niche for providing this sort of hardware in the public cloud – at present they miss the mark due to high costs, though that is a problem which can be remedied easily given enough internal political will.

It is not a given that cloud providers must offer low-latency interconnects more widely, but if they make the business decision not to do so, they must recognise that there will always be a segment of the market which is closed to them. Rather than trying to bluff their way into the high-end HPC market, cloud vendors who choose to eschew the low-latency segment should focus on their genuine strength; the near-infinite scale they can offer for high-throughput workloads and cloudbursting of single-node applications.

Data movement

Before we even reach the complexities of managing data once it is in the cloud, there are issues to be faced with getting it there, and eventually getting it back.

All three major cloud providers have set up very similar schemes for academic research customers which include discounted or free data egress; effectively, the costs for moving data out of the cloud are waived as long as they represent no more than 15% of the total bill for the institution. At the moment then, there is no obvious reason to favour one provider over the others on this front.
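One plausible reading of the waiver described above is a cap: egress is free so long as it stays under 15% of the institution’s total bill. A toy sketch of that reading (the billing figures, and the behaviour above the cap, are assumptions for illustration):

```python
# Hypothetical model of the academic egress waiver described in the article.
def egress_charge(total_bill, egress_cost, waiver_fraction=0.15):
    """Egress is waived up to waiver_fraction of the total bill; any excess is billed."""
    waived = min(egress_cost, total_bill * waiver_fraction)
    return egress_cost - waived

print(egress_charge(total_bill=10_000, egress_cost=1_200))  # 0 -- under the 15% cap, fully waived
print(egress_charge(total_bill=10_000, egress_cost=2_000))  # 500 -- only the excess over 1500 is billed
```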

For industry users, data being held hostage as it grows in volume is less of a concern – the chain of ownership is much more straightforward, and as long as the company retains an account with the cloud provider, someone will be able to access the files (whether they are in a position to make decisions about data migration is another question…). Data produced by university researchers is more tricky in this regard – funding council rules are deliberately non-specific about what is actually required from researchers when they make a data management plan. The general consensus is that published data needs some level of discoverability and cataloguing; implementing a research data service in the cloud is likely to be far easier in the long-term than providing an on-premise solution, but requires a level of commitment to operational spending that many institutions would not be comfortable with. Cloud providers could certainly afford to make this easier.


The storage landscape within the cloud presents another complication, one which many HPC users will be far less prepared for than simply tuning their core-count and wall-clock times. Migrating data directly in and out of instance-attached block storage volumes via SSH might be the way to go for short, simple tasks – but any practical workflow with data persisting across jobs is going to need to make use of object storage.

While the mechanisms to interact with object storage are fairly simple for all three cloud providers, the breadth of options available when considering what to do next (stick with standard storage, have a tiered model with migration policies, external visibility, etc) could lead to a lot of analysis paralysis. For researchers who just want to run some jobs, storage is the first element of the cloud they will touch which is likely to provoke a strong desire to give in and go back to waiting for time on the local cluster.

For more demanding users, the problems only get worse – none of the built-in storage solutions available across the public cloud providers is going to be suitable for applications with high bandwidth requirements. Parallel file systems built on top of block storage are the obvious fix, but can quickly become expensive even without the licensing costs for a commercially supported solution. Managing high-performance storage on an individual level is going to require more heavyweight automation approaches than many HPC researchers will be used to deploying, and so local administrators could suddenly find themselves supporting not one, but dozens of questionably optimised Lustre installs.

A parallel file system appliance spun up by the cloud provider is the obvious solution here – just like database services and Hadoop clusters, the back-end of a performant file system should not need to be re-invented by every customer.


All major cloud providers have taken roughly the same approach to research computing, best summarised as “build it, and they will come”. Sadly for them, it hasn’t quite worked out that way. Much of the ecosystem associated with each public cloud is predicated on the fact that third-party software vendors can come along and offer a tool which manages, or sits on top of, the IaaS layer. These third parties then charge a small per-hour fee for use of the tool, which is billed alongside the regular cloud service charges. Alternatively, a monthly fee for support can be used where a per-instance charge does not scale appropriately.

These models both work pretty well for enterprise, but do not mesh well with scientific computing, which is typically funded by unpredictable capital investments – a researcher with a fixed pot of money needs to be really confident that your software is worth the cost if they are going to add a further percentage on top of every core-hour charge they pay. More often, they will choose to cobble something together themselves. This duplication of effort is a false economy as far as the whole research community is concerned, but for individuals it can often appear to be the most efficient way forward.

Cloud providers could address the low-hanging fruit here by putting together their own performance-optimised instance images for HPC, based on (for example) a simple CentOS base and with their own tested performance tweaks pre-enabled, hyperthreading disabled, and perhaps some sensible default software stack such as OpenHPC. Doing this themselves, rather than relying on a company to find some way to monetise it, would give the user community confidence that their interests are actually being taken into consideration.

Funding, billing and cost management

Cloud prices are targeted at enterprise customers, where hardware utilisation below 20% is common. Active HPC sites tend to be in the 70-90% utilisation range, making on-demand cloud server pricing decidedly unattractive. In order to be cost-competitive with on-premise solutions, cloud HPC requires the use of pre-emptible instances and spot-pricing.
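The utilisation argument can be made concrete: dividing an hourly hardware cost by the fraction of hours the hardware is actually busy gives the cost per useful core-hour. A toy sketch (the hourly figure is a made-up assumption; only the 20% vs 70–90% utilisation split comes from the text):

```python
# Hypothetical illustration of why on-demand cloud pricing competes against
# idle enterprise clusters but not against busy HPC sites.
def cost_per_useful_core_hour(hourly_cost, utilisation):
    return hourly_cost / utilisation

on_prem_hpc = cost_per_useful_core_hour(hourly_cost=0.03, utilisation=0.85)   # busy HPC site
enterprise = cost_per_useful_core_hour(hourly_cost=0.03, utilisation=0.20)    # typical enterprise

print(round(on_prem_hpc, 3), round(enterprise, 3))  # 0.035 0.15 -- ~4x gap from utilisation alone
```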

The upshot of this price sensitivity is that cloud vendors could be forgiven for finding the HPC community to be a bit of a nuisance; we demand expensive hardware in the form of low-latency interconnects and fancy accelerators… but aren’t willing to pay much of a premium for them. HPC is therefore unlikely to drive much innovation in cloud solutions – that is, until a big customer (think oil & gas, weather, or perhaps pharmaceuticals) negotiates a special deal and decides to take the leap. Dipping in a toe will not be enough (many companies are there already) – the move will have to include 100% of the application stack if the cloud providers hope to silence the naysayers. Once that happens, the lessons learned from the migration can filter out to the rest of the industry.

The challenges of funding an open-ended operational service out of largely capital-backed budgets are a barrier to wholesale adoption of the cloud by universities, though this is one which central government really ought to be the ones to address. Cloud vendors can certainly help matters – the subscription model taken by Azure is a good start, but needs to be rolled out to the other providers and explained much better to potential users.

Finally there is, perhaps, scope for these multi-billion dollar companies to accept some of the cost risk by allowing for hard caps on charges or refunds on a portion of pre-empted jobs, mirroring the way that hardware resellers are expected to cope with liquidated damage contract terms. Call it a charitable donation to science and they might even be able to write it off…

What’s next?

Cloud providers have a few ways to get out of the doldrums they currently find themselves in with regards to the HPC market.

Firstly, they should sanitise their sign-up process; AWS has this covered for the most part, but the Windows-feel of Azure is surely off-putting to hardcore technical users. GCP offers probably the most comfortable experience for this crowd, but desperately needs to do something about the fact that individuals trying to sign up for a personal account in the EU are warned that for tax reasons, the Google cloud is for business use only; I hate to think how many potential customers have been dissuaded from trying out the platform based on this alone.

Secondly, they need to find a way to be more open-handed with trial opportunities suitable for research computing. The standard free trials available for AWS, Azure and GCP are generous if you are an individual hosting a trove of cat pictures, but not so much when you are dealing with terabytes of data and hundreds of core-hours of usage. These trials are already done on the corporate level for target customers, but need to be expanded substantially.

As discussed earlier, the HPC software ecosystem in the cloud is somewhat more stunted than the providers might have hoped – an easy way around this is to provide a stepping-stone between generic enterprise resources and solutions with third-party support. An open framework of tools would allow the ecosystem to develop more readily, and with less risk to third-party vendors.

Training is an area where all three of the cloud providers discussed here put in a considerable effort already. This should be enough to get HPC system administration staff up to speed, but there is still the matter of the end-users – local training by the admin teams of an organisation will clearly play some part, but the cloud vendors would do well to offer more tailored, lightweight courses for those who need to be able to understand, but not necessarily manage, their infrastructure.

Finally, there is the matter of vendor lock-in – one of the major factors which dissuades larger organisations from committing to a particular supplier. Any time you see a large organisation throw their lot in with one of the big three, you can be sure that there have been some lengthy discussions on discounts. Not every customer can expect this treatment, but if vendors wish to inspire any sort of confidence in their customers, they need to make a convincing case that you will be staying long term because you want to, and not because you have to. Competitive costs and rapid innovation have been the story of the cloud so far, but the trend must continue apace if Google, Microsoft or Amazon wish to become leading brands in HPC.

About the Author

Chris Downing joined Red Oak Consulting @redoakHPC in 2014 on completion of his PhD thesis in computational chemistry at University College London. Having performed academic research using the last two UK national supercomputing services (HECToR and ARCHER) as well as a number of smaller HPC resources, Chris is familiar with the complexities of matching both hardware and software to user requirements. His detailed knowledge of materials chemistry and solid-state physics means that he is well-placed to offer insight into emerging technologies. As a Senior Consultant, Chris works mainly in the innovation and research team, applying a highly technical skill set to a broad range of technical consultancy services. To find out more

The post How the Cloud Is Falling Short for HPC appeared first on HPCwire.

Univa Open Sources Project Tortuga

Thu, 03/15/2018 - 13:01

CHICAGO, March 15, 2018 — Univa, a leading innovator in on-premise and hybrid cloud workload management solutions for enterprise HPC customers, announced the contribution of its Navops Launch (née Unicloud) product to the open source community as Project Tortuga under an Apache 2.0 license to help proliferate the transition of enterprise HPC workloads to the cloud.

“Having access to more software that applies to a broad set of applications like high performance computing is key to making the transition to the cloud successful,” said William Fellows, Co-Founder and VP of Research, 451 Research. “Univa’s contribution of Navops Launch to the open source community will help with this process, and hopefully be an opportunity for cloud providers to contribute and use Tortuga as the on-ramp for HPC workloads.”

Navops Launch offers faster path to the cloud for HPC workloads

While the software is largely used in enterprise HPC environments today, Project Tortuga is a general purpose cluster and cloud management framework with applicability to a broad set of applications including high performance computing, big data frameworks, Kubernetes and scale-out machine learning / deep learning environments. Tortuga automates the deployment of these clusters in local on-premise, cloud-based and hybrid-cloud configurations through repeatable templates.

Tortuga can provision and manage both virtual and bare-metal environments and includes cloud-specific adapters for AWS, Google Cloud, Microsoft Azure, OpenStack and Oracle Cloud Infrastructure with full support for bring-your-own image (BYOI). The built-in policy engine allows users to dynamically create, scale and teardown cloud-based infrastructure in response to changing workload demand. Management, monitoring and accounting of cloud resources is the same as for local servers.

“There is no denying that enterprises are increasingly migrating key workloads to the cloud and our HPC customers are no exception,” said Gary Tyreman, President and CEO of Univa. “We have seen a prominent up-tick of enterprise HPC users looking to tap the vast potential of the public cloud. To stimulate a more robust and broad path to the cloud, we have decided to open source one of Univa’s core products with an eye on simplifying and bringing community involvement to the cloud onboarding process.”

Delivering a commercially supported cloud management system to The Wharton School

When it looked to migrating to the cloud, The Wharton School, University of Pennsylvania was faced with either developing its own software or finding a proven, supported solution. While its HPCC hardware is located on the Penn campus, Navops Launch now allows Wharton to triple its core count with Amazon Web Services EC2 (AWS), with users accessing “anything and everything.” This new flexibility allows researchers to scale beyond on-campus resources, work in isolated environments, and control their own services and costs. “Navops Launch was a solid choice. Being able to use a commercially supported cloud management system that is tightly integrated with Univa Grid Engine is a big plus for us,” said Gavin Burris, Senior Project Leader, The Wharton School.


The open source project is available now at

Univa will continue to evolve Project Tortuga and offer commercial support under the product name Navops Launch, which is production-proven in large-scale distributed computing environments.

For more information visit or contact Univa at

About Univa Corporation

Univa is a leading independent provider of software-defined computing infrastructure and workload orchestration solutions. Univa’s intelligent cluster management software increases efficiency while accelerating enterprise migration to hybrid clouds. Millions of compute cores are currently managed by Univa products in industries such as life sciences, manufacturing, oil and gas, transportation and financial services. We help hundreds of companies to manage thousands of applications and run billions of tasks every day. Univa is headquartered in Chicago, with offices in Toronto and Munich. For more information, please visit

Source: Univa Corporation

The post Univa Open Sources Project Tortuga appeared first on HPCwire.

UT Researchers Develop New Visualization Tools to Explore Fusion Physics

Thu, 03/15/2018 - 12:59

March 15, 2018 — Scientific visualization brings research data to life, but it still frequently lies flat on a computer screen, making interpretation difficult.

Augmented and virtual reality, on the other hand, provides another dimension through which researchers can see their data.

In recent years, staff at the Texas Advanced Computing Center (TACC) have begun developing applications for the Microsoft HoloLens that let scientists interact with their computer models in new ways.

At the 2017 International Conference for High Performance Computing, Networking, Storage and Analysis, they unveiled a proof-of-concept demonstration that allowed scientists to see a plasma model, developed by University of Texas physicist Wendell Horton, evolve over time in virtual 3D space.

“The dynamics of plasmas is complicated due to the interaction of the electric and magnetic fields. This leads to complex vortex structures in three dimensions that are nearly impossible to understand with two-dimensional projections,” Horton said. “Augmented and virtual reality tools give us a new way to really see what is taking place in these evolving complex nonlinear vortex plasma structures.”

To bring the augmented reality visualization to life, experts from TACC converted Horton’s plasma datasets into a form that could be ingested into the Unity platform, a leading framework for AR content creation. They then overlaid text and audio, making a sharable AR experience that members of Horton’s group, collaborators, and other researchers can view and interpret.

“The HoloLens project with Dr. Horton served as a successful proof-of-concept for augmented reality scientific visualization,” said Greg Foss, one of the lead visualization researchers on the project. “His team is now considering how best to take advantage of the technology.”

Horton’s fusion model isn’t the only application that TACC has experimented with. They have created AR representations of physics-based models of clouds and developed a tool that allows air traffic controllers to perceive planes in the sky, even when they are indoors.

“Virtual and augmented reality capabilities will continue to expand how we explore and discover, both in scientific inquiry and in our daily lives,” said Paul Navrátil, deputy director of Visualization at TACC. “We’re proud to be pushing the art of the possible here at TACC in conjunction with a broad and diverse set of academic partners.”

Source: Aaron Dubrow, University of Texas

The post UT Researchers Develop New Visualization Tools to Explore Fusion Physics appeared first on HPCwire.

Supercomputer Simulation Opens Prospects for Obtaining Ultra-Dense Electron-Positron Plasmas

Thu, 03/15/2018 - 11:36

March 15, 2018 — To achieve breakthrough research results in various fields of modern science, it is vital to develop successful interdisciplinary collaborations. Long-term interaction among physicists from the Institute of Applied Physics of the Russian Academy of Sciences, researchers from Chalmers University of Technology, and computer scientists from Lobachevsky University has resulted in PICADOR, a new software tool for numerical modeling of laser plasmas on modern supercomputers.

A field structure in a dipole wave. (Image courtesy of E. Efimenko)

Work on the PICADOR software system started in 2010. PICADOR is a parallel implementation of the particle-in-cell method, optimized for modern heterogeneous cluster systems. The project combined the competencies and efforts of experts from different fields, becoming the basis for well-thought-out optimization and for new computing approaches that take various physical processes into account. Eventually, this opened the way for a breakthrough in modeling capabilities across a number of research projects. The system’s functional capabilities and performance make it possible to run numerical simulations for a range of problems at the forefront of modern laser plasma physics.
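PICADOR itself is a large heterogeneous-cluster code whose internals are not shown in this article. As a rough sketch of the particle-in-cell idea it parallelizes, a minimal 1D electrostatic PIC step might look like the following (illustrative only; the function and parameter names are hypothetical, not PICADOR's API):

```python
import numpy as np

def pic_step(x, v, q_over_m, grid_n, box_len, dt):
    """One explicit particle-in-cell step on a periodic 1D grid:
    deposit charge, solve Poisson with an FFT, gather the field
    back to the particles, and push them with a leapfrog update."""
    dx = box_len / grid_n

    # 1. Deposit: cloud-in-cell weighting of unit charges to grid nodes.
    xg = x / dx
    left = np.floor(xg).astype(int) % grid_n
    frac = xg - np.floor(xg)
    rho = np.zeros(grid_n)
    np.add.at(rho, left, 1.0 - frac)
    np.add.at(rho, (left + 1) % grid_n, frac)
    rho = rho / dx
    rho -= rho.mean()                      # neutralizing background

    # 2. Field solve: d^2(phi)/dx^2 = -rho, done spectrally.
    k = 2.0 * np.pi * np.fft.fftfreq(grid_n, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2     # skip the k = 0 mean mode
    e_grid = np.real(np.fft.ifft(-1j * k * phi_k))   # E = -d(phi)/dx

    # 3. Gather: interpolate the grid field to each particle position.
    e_part = (1.0 - frac) * e_grid[left] + frac * e_grid[(left + 1) % grid_n]

    # 4. Push: leapfrog velocity and position update with periodic wrap.
    v = v + q_over_m * e_part * dt
    x = (x + v * dt) % box_len
    return x, v
```

Production codes such as PICADOR repeat these phases in 3D with electromagnetic field solvers, relativistic pushers, and additional physics modules, distributing the grid and particles across many nodes.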

In their article published in Scientific Reports, Nizhny Novgorod scientists formulated the conditions, found theoretically and verified in a numerical experiment, under which the avalanche-like generation of electrons and positrons at the focus of a high-power laser pulse yields an electron-positron plasma of record density. Studying such objects will bring scientists closer to understanding processes occurring in astrophysical objects and to probing elementary particle production.

A well-known fact in quantum physics is that certain particles can transform into other particles. In particular, in a sufficiently strong electric or magnetic field, a gamma photon can decay into two particles: an electron and a positron. Until now, this effect was observed in laboratory conditions mainly when gamma radiation was transmitted through crystals, where sufficiently strong fields exist near atomic nuclei. In the near future, however, scientists may get a new tool for studying this phenomenon: lasers capable of generating short pulses with a power of more than 10 petawatts (1 petawatt = 10^15 watts, or 1 quadrillion watts). This level of power is achieved by extreme focusing of radiation. For example, scientists suggest using a laser field configuration referred to as dipole focusing, in which the focal point is irradiated from all sides. It was previously shown theoretically that electron-positron avalanches can be observed at the focus of such a laser facility: particles created by the decay of a gamma photon are accelerated by the laser field and emit gamma photons, which in turn give rise to new electrons and positrons. As a result, the number of particles should grow immensely in a short time, giving rise to a superdense electron-positron plasma.

However, there are limits on the density of plasma that can be obtained in this way. At some point, the laser radiation will no longer be able to penetrate the plasma that has become too dense, and the avalanche will stop growing. According to existing estimates, the particle concentration in the laser focus will be just over 10^24 particles per cubic centimeter. For comparison, approximately the same electron concentration is found in heavy metals such as platinum or gold.
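The growth-then-saturation behavior described above can be caricatured with a simple logistic model. This is only a toy sketch with placeholder numbers, not the QED cascade model used in the actual simulations:

```python
def cascade_population(n0, growth_rate, n_max, dt, steps):
    """Toy model of an electron-positron avalanche: the pair population
    grows exponentially while the plasma is transparent to the laser,
    then saturates as the density approaches the opacity limit n_max."""
    n = n0
    history = [n]
    for _ in range(steps):
        # The (1 - n/n_max) factor throttles growth as the plasma
        # becomes too dense for the laser field to penetrate.
        n += growth_rate * n * (1.0 - n / n_max) * dt
        history.append(n)
    return history

# One seed pair, a placeholder growth rate, and an arbitrary density cap.
pairs = cascade_population(n0=1.0, growth_rate=1.0, n_max=1e6, dt=0.1, steps=400)
```

The population climbs exponentially at first and then flattens just below n_max, mirroring the avalanche cutoff the researchers describe.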

In their new paper, a team of authors headed by Professor A.M. Sergeev, Academician of the Russian Academy of Sciences, showed that under certain conditions this number can be an order of magnitude higher.

Large-scale numerical simulation of electron-positron avalanche development in a tightly focused laser field revealed a fundamentally new object of investigation: quasistationary states of a dense electron-positron plasma. These states have a very interesting and unexpected structure. While the laser field in the form of a dipole wave has axial symmetry, the distribution of the electron-positron plasma resulting from the development of the current instability degenerates into two thin layers oriented at a random angle. The thickness of the layers and the particle concentration within them are apparently limited only by the randomness of the radiation process, which leads to extreme plasma density values. With a total number of particles on the order of 10^11, the density exceeds 10^26 particles per cubic centimeter, and in our case it was limited only by the resolution of the numerical simulation.

Source: Lobachevsky University

The post Supercomputer Simulation Opens Prospects for Obtaining Ultra-Dense Electron-Positron Plasmas appeared first on HPCwire.

TACC’s Stampede1 Used to Simulate and Study Dynamics of Red Blood Cells

Thu, 03/15/2018 - 10:59

March 15, 2018 — Microvascular networks, composed of the body’s smallest blood vessels, are central to its function. They facilitate the exchange of essential nutrients and gases between the bloodstream and surrounding tissues, and they regulate blood flow in individual organs.

While the behavior of blood cells flowing within single, straight vessels is well understood, less is known about the individual cellular-scale events that give rise to blood behavior in microvascular networks.

To better understand this, researchers Peter Balogh and Prosenjit Bagchi published a recent study in the Biophysical Journal. Bagchi is in the Mechanical and Aerospace Engineering Department at Rutgers University, and Balogh is his PhD student.

To the researchers’ knowledge, theirs is the first work to simulate and study red blood cells flowing in physiologically realistic microvascular networks, capturing both the highly complex vascular architecture as well as the 3D deformation and dynamics of each individual red blood cell.

Balogh and Bagchi developed and used a state-of-the-art simulation code to study the behavior of red blood cells as they flow and deform through microvascular networks. The code simulates 3D flows within complex geometries, and can model deformable cells, such as red blood cells, as well as rigid particles, such as inactivated platelets or some drug particles.

“Our research in microvascular networks is important because these vessels provide a very strong resistance to blood flow,” said Bagchi. “How much energy the heart needs to pump blood, for example, is determined by these blood vessels. In addition, this is where many blood diseases take root. For example, for someone with sickle cell anemia, this is where the red blood cells get stuck and cause enormous pain.”

One of the paper’s findings involves the interaction between red blood cells and the vasculature within the regions where vessels bifurcate. They observed that as red blood cells flow through these vascular bifurcations, they frequently jam for very brief periods before proceeding downstream. Such behavior can cause the vascular resistance in the affected vessels to increase, temporarily, by several orders of magnitude.

There have been many attempts to understand blood flow in microvascular networks, dating back to the 1800s and the French physician and physiologist Jean-Louis-Marie Poiseuille, whose interest in the circulation of blood led him to conduct a series of experiments on the flow of liquids in narrow tubes. He also formulated a mathematical expression for the non-turbulent flow of fluids in circular tubes.
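Poiseuille's expression for laminar flow in a circular tube is usually written as Q = π·ΔP·r^4 / (8·μ·L). A quick numerical check, using illustrative capillary-scale values rather than figures from the study:

```python
import math

def poiseuille_flow(delta_p, radius, viscosity, length):
    """Hagen-Poiseuille volumetric flow rate (SI units) for laminar
    flow through a straight circular tube of the given radius and length."""
    return math.pi * delta_p * radius ** 4 / (8.0 * viscosity * length)

# Illustrative values: a 4-micron-radius, 1-mm-long vessel with a
# 2 kPa pressure drop and a plasma-like viscosity of 1.2 mPa*s.
q = poiseuille_flow(delta_p=2000.0, radius=4e-6, viscosity=1.2e-3, length=1e-3)
print(f"{q:.3e} m^3/s")   # on the order of 1e-13 m^3/s
```

The fourth-power dependence on radius is one reason brief jams at vessel bifurcations can raise vascular resistance by orders of magnitude: even a modest effective narrowing sharply cuts the flow.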

Updating this research, Balogh and Bagchi use computation to enhance the understanding of blood flow in these networks. Like many other groups, they originally modelled capillary blood vessels as small, straight tubes and predicted their behavior. “But if you look at the capillary-like vessels under the microscope, they are not straight tubes…they are very winding and continuously bifurcate and merge with each other,” Bagchi said. “We realized that no one else had a computational tool to predict the flow of blood cells in these physiologically realistic networks.”

“This is the first study to consider the complex network geometry in 3D and simultaneously resolve the cell details in 3D,” Balogh said. “One of the underlying goals is to better understand what is occurring in these very small vessels in these complex geometries. We hope that by being able to model this next level of detail we can add to our understanding of what is actually occurring at the level of these very small vessels.”

In terms of cancer research, this model may have tremendous implications. “This code is just the beginning of something really big,” Bagchi said.

In the medical field today, there are advanced imaging systems that image the capillary network of blood vessels, but it’s sometimes difficult for those imaging systems to predict the blood flow in every vessel simultaneously. “Now, we can take those images, put them into our computational model, and predict even the movement of each blood cell in every capillary vessel that is in the image,” Bagchi said.

This is a huge benefit because the researchers can see whether the tissue is getting enough oxygen or not. In cancer research, angiogenesis — the physiological process through which new blood vessels form from pre-existing vessels — is dependent upon the tissue getting enough oxygen.

The team is also working on modeling targeted drug delivery, particularly for cancer. In this approach nanoparticles are used to carry drugs and target the specific location of the disease. For example, if someone has cancer in the liver or pancreas, then those specific organs are targeted. Targeted drug delivery allows increased dose of the drug so other organs don’t get damaged and the side effects are minimized.

“The size and shape of these nanoparticles determine the efficiency of how they get transported through the blood vessels,” Bagchi said. “We think the architecture of these capillary networks will determine how well these particles are delivered. The architecture varies from organ to organ. The computational code we developed helps us understand how the architecture of these capillary networks affects the transport of these nanoparticles in different organs.”

This research used computational simulations to answer questions like: How accurately can a researcher capture the details of every blood cell in complex geometries? How can this be accomplished in 3D? How do you take into account the many interactions between these blood cells and vessels?

“In order to do this, we need large computing resources,” Bagchi said. “My group has been working on this problem using XSEDE resources from the Texas Advanced Computing Center. We used Stampede1 to develop our simulation technique, and soon we will be moving to Stampede2 because we’ll be doing even larger simulations. We are using Ranch to store terabytes of our simulation data.”

The eXtreme Science and Engineering Discovery Environment (XSEDE) is a National Science Foundation-funded virtual organization that integrates and coordinates the sharing of advanced digital services — including supercomputers and high-end visualization and data analysis resources — with researchers nationally to support science. Stampede1, Stampede2, and Ranch are XSEDE-allocated resources. The simulations reported in the paper took a few weeks of continuous simulation and resulted in terabytes of data.

In terms of how this research will help the medical community, Bagchi said: “Based on an image of capillary blood vessels in a tumor, we can simulate it in 3D and predict the distribution of blood flow and nanoparticle drugs inside the tumor vasculature, and, perhaps, determine the optimum size, shape and other properties of nanoparticles for most effective delivery,” Bagchi said. “This is something we’ll be looking at in the future.”

To read the original article, click here.

Source: Faith Singer-Villalobos, TACC

The post TACC’s Stampede1 Used to Simulate and Study Dynamics of Red Blood Cells appeared first on HPCwire.

Cray Tapped to Deliver Largest Supercomputer Dedicated to Fusion Science in Japan

Thu, 03/15/2018 - 10:31

SEATTLE, March 15, 2018 — Global supercomputer leader Cray Inc. (Nasdaq:CRAY) has announced that the National Institutes for Quantum and Radiological Science and Technology (QST) selected a Cray XC50 supercomputer to be its new flagship supercomputing system. The new system is expected to deliver peak performance of over 4 petaflops, more than double that of the system it is replacing. It will support QST in powering complex calculations for plasma physics (turbulence, edge physics, integrated modeling) and fusion technology, and it will be the largest supercomputer used specifically for nuclear fusion science in Japan.

The Cray XC50 supercomputer will be installed at the Rokkasho Fusion Institute in Japan, where it will replace Helios, the Institute’s prior supercomputer used for the Broader Approach project between the European Atomic Energy Community (Euratom) and Japan. The supercomputer will accelerate the realization of fusion energy through R&D on advanced technologies, and it will complement the ITER project, a worldwide collaboration designed to demonstrate the scientific feasibility of fusion power as an environmentally responsible energy source.

“We’re looking forward to delivering a supercomputer for QST that will further the Institute’s work in discovering opportunities for fusion power as a reliable energy source,” said Mamoru Nakano, president of Cray Japan. “The speed and integrated software environment of the Cray XC50 will enhance QST’s infrastructure and allow researchers to speed time to discovery.”

QST will be providing more than 1,000 European and Japanese fusion researchers with the high-performance computing technology required to advance game-changing research in fusion power. The Cray system will provide the performance and scale necessary to support QST researchers in running complex plasma calculations as part of the ITER project.

The Cray XC series of supercomputers is designed to handle the most challenging workloads requiring sustained multi-petaflop performance. Cray XC systems incorporate the Aries high-performance network interconnect for low latency and scalable global bandwidth, the HPC-optimized Cray Linux Environment, the Cray programming environment with powerful tools for application developers, and the latest Intel processors and NVIDIA GPU accelerators. They deliver on Cray’s commitment to performance supercomputing with an architecture and tightly integrated software environment that provide extreme scalability and sustained performance.

The system is expected to be put into production in 2018.

For more information on the Cray XC supercomputers, please visit the Cray website at

About Cray Inc.

Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to for more information.

Source: Cray Inc.

The post Cray Tapped to Deliver Largest Supercomputer Dedicated to Fusion Science in Japan appeared first on HPCwire.

NSF to Host CAREER Program Webinar

Thu, 03/15/2018 - 10:25

March 15, 2018 — The NSF CAREER Coordinating Committee will host a webinar to answer participants’ questions about development and submission of proposals to the NSF Faculty Early Career Development Program (CAREER). The webinar will give participants the opportunity to interact with members of the NSF-wide CAREER Coordinating Committee in a question-and-answer format. The webinar will be held on May 15 from 1 PM to 3 PM EDT.

In preparation for the webinar, participants are strongly encouraged to consult material available on-line concerning the CAREER program. In particular, the CAREER program web page has a wealth of current information about the program, including:

Additionally, there is a video of a live presentation about the CAREER program accessible through the library of videos from a recent NSF Grants Conference.

How to Submit Questions

Participants may submit questions about CAREER proposal development and submission in advance of the webinar by sending e-mail to:  Questions received by May 11, 2018 will be considered for inclusion in the webinar.

Please note that questions regarding eligibility for the CAREER program in any individual case will not be addressed during the webinar.  Questions about the CAREER program that are not covered during the webinar should be directed to the appropriate NSF Divisional contact shown on the web page


Participants should register in advance at the web page

How to Access the Webinar

Video and audio for the webinar are provided separately.

Source: NSF

The post NSF to Host CAREER Program Webinar appeared first on HPCwire.

Record Breaking Amount in Total Tape Capacity Shipments Announced by the LTO Program

Thu, 03/15/2018 - 10:06

SILICON VALLEY, Calif., March 15, 2018 — The LTO Program Technology Provider Companies (TPCs), Hewlett Packard Enterprise, IBM Corporation and Quantum today released their annual tape media shipment report, detailing year-over-year shipments. The report showed a record 108,457 petabytes (PB) of total tape capacity (compressed) shipped in 2017, an increase of 12.9 percent over the previous year.

“This year, we’ve seen a rising interest in tape technology from a variety of industries – particularly from those who prioritize protecting their important data from ransomware attack and who require solutions for long-term data retention,” said Chris Powers, Vice President HPE Storage, at Hewlett Packard Enterprise. “With our recent announcement of the updated LTO technology roadmap, we expect to see the amount of capacity shipped continue to grow year-over-year as we provide customers with a cost-effective and secure data storage solution.”

To put this immense amount of tape capacity into perspective, two PB of data is roughly equivalent to the amount of information held in all U.S. research libraries. Multiply that number by 54,228 for a sense of the massive amount of data capacity shipped in 2017.
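The multiplier quoted above follows directly from the shipment figure. A quick arithmetic check (the 2 PB figure is the article's own rough library estimate):

```python
total_pb = 108_457           # total compressed LTO capacity shipped in 2017
library_estimate_pb = 2      # rough size of all U.S. research libraries
multiplier = total_pb // library_estimate_pb
print(multiplier)            # prints 54228

# The 12.9% year-over-year growth also implies roughly 96,000 PB
# shipped in 2016.
implied_2016_pb = total_pb / 1.129
```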

With new LTO-8 technology specifications designed to enable customers to store up to 30 TB of compressed capacity, and with the LTO-7 cartridge initialized as Type M media giving customers the opportunity to write 9 TB (22 TB compressed) on a brand-new LTO-7 cartridge using LTO-8 drives, 2018 media capacity shipments are expected to soar as tape users migrate to these newer technologies. The increased adoption of LTO-7 technology also remains a key contributor to this capacity increase.

“Many organizations continue to rely on tape for their long-term archive and high-capacity, low-cost data storage needs,” said Phil Goodwin, Research Director, IDC. “Moreover, having tape as part of a backup strategy can provide an ‘air gap’ to help protect against data loss due to ransomware. The higher capacity, faster throughput of the new LTO-8 technology offers continued price-to-performance gains for organizations using tape in their data centers.”

Media unit shipments in 2017 reflect a small decrease from the prior year, which is typical as the market anticipates the introduction of a new generation, in this case LTO-8. The decline in unit shipments is offset by the growth in total capacity shipped over the same period, indicating that tape usage is migrating to higher-capacity LTO-7 technology.

The LTO Program will continue to produce annual shipment reports for tape media, which are available for download from the LTO Program website,

About Linear Tape-Open (LTO)

The LTO Ultrium format is a powerful, scalable, adaptable open tape format developed and continuously enhanced by technology providers Hewlett Packard Enterprise (HPE), IBM Corporation and Quantum Corporation (and their predecessors) to help address the growing demands of data protection in midrange to enterprise-class server environments. This ultra-high-capacity generation of tape storage products is designed to deliver outstanding performance, capacity and reliability by combining the advantages of linear multi-channel, bi-directional formats with enhancements in servo technology, data compression, track layout, and error correction.

The LTO Ultrium format has a well-defined roadmap for growth and scalability. The roadmap represents intentions and goals only and is subject to change or withdrawal. There is no guarantee that these goals will be achieved. The roadmap is intended to outline a general direction of technology and should not be relied upon in making a purchasing decision. Format compliance verification is vital to meet the free-interchange objectives that are at the core of the LTO Program. Ultrium tape mechanism and tape cartridge interchange specifications are available on a licensed basis. For additional information on the LTO Program, visit and the LTO Program Web site at

Source: LTO Program

The post Record Breaking Amount in Total Tape Capacity Shipments Announced by the LTO Program appeared first on HPCwire.

SURF Cooperative Makes iRODS the Data Management Solution for Dutch National Data Infrastructure

Thu, 03/15/2018 - 08:45

March 15, 2018 — SURF, a cooperative of research and educational institutions that supports researchers throughout the Netherlands, has joined the iRODS Consortium and plans to use iRODS to support Dutch scientists and their research data management needs.

SURF joins Bayer, Dell/EMC, DDN, HGST, IBM, Intel, MSC, the U.S. National Institute of Environmental Health Sciences, OCF, RENCI, the Swedish National Infrastructure for Computing, University College London, University of Groningen, Utrecht University, and the Wellcome Trust Sanger Institute as iRODS Consortium members. The consortium leads efforts to develop, support, and sustain the integrated Rule-Oriented Data System (iRODS) as an open source data management platform.

As a cooperative of research and educational institutions that supports researchers throughout the Netherlands with large-scale storage facilities, SURF needs innovative and comprehensive data management tools. Those needs include access to distributed storage facilities, tools to support effective and efficient data management, and data provenance that complies with national and international regulatory requirements. To support its clients’ needs, SURF began using iRODS about two years ago. Now, it is creating a national Research Data Management (RDM) expertise center that will incorporate iRODS into nationwide services for researchers. The center, with help from the iRODS Consortium, plans to build knowledge on the use of iRODS among members of the SURF cooperative, a collaborative information and communication technology (ICT) organization for education and research in the Netherlands.

According to Mark Cole, head of business development and product management at SURF, the new RDM center will use iRODS as the backbone of its data management infrastructure. iRODS will make it possible for researchers to find and share data across multiple sites without the cost, time, compatibility issues, or security risks that come with transferring large datasets. iRODS will also enable data provenance, a historical record of the data and its origins, that implements privacy by design.

“We view iRODS as an RDM tool that has the potential to support researchers throughout the entire research cycle,” said Saskia van Eeuwijk, project manager at SURF. “The iRODS technology is for that reason positioned by us in the center of RDM solutions for researchers.  We are very enthusiastic on iRODS as a platform and we want to contribute to the further development of this platform for the coming years.”

As a consortium member, SURF will play a role in guiding the future development of iRODS, growing the user and developer community, and facilitating iRODS support, education, and collaboration opportunities. SURF will also have the opportunity to participate in the annual iRODS User Group meeting, which this year will take place June 5–7 in Durham, NC, USA.

“Having SURF as an iRODS Consortium member gives us the chance to make a real impact on how research is done in the Netherlands,” said iRODS Consortium Executive Director Jason Coposky. “Large, multi-institutional research support organizations need practical solutions to their data challenges, while adhering to data security and provenance requirements. We think we can meet those needs for SURF and look forward to a long-lasting relationship.”

For more on iRODS and the iRODS Consortium, visit the iRODS website.

Source: iRODS

The post SURF Cooperative Makes iRODS the Data Management Solution for Dutch National Data Infrastructure appeared first on HPCwire.

Stephen Hawking, Legendary Scientist, Dies at 76

Wed, 03/14/2018 - 17:47

Stephen Hawking passed away at his home in Cambridge, England, in the early morning of March 14; he was 76. Born on January 8, 1942, Hawking was an English theoretical physicist, cosmologist, author and director of research at the Centre for Theoretical Cosmology within the University of Cambridge. A brilliant scientist and visionary, Hawking advanced cosmology as a computational science and led the launch of several UK supercomputers dedicated to cosmology and particle physics.

Considered one of the greatest minds of our time, Professor Hawking brought his passion for science into the public sphere through his writing, lectures, television appearances and collaboration on biographical films. He was world renowned for his work with black holes and relativity and wrote the best-selling “A Brief History of Time” as well as several other popular science books.

Stephen Hawking, Andrey Kaliazin, Mike Woodacre, Paul Shellard and Simon Appleby. Circa: 2012. Credit: Judith Croasdell. (Source)

Professor Hawking was the first to set out a theory of cosmology linking general relativity and quantum mechanics. He also showed that black holes emit energy due to quantum effects near the event horizon, a phenomenon today known as Hawking radiation. He was a proponent of the many-worlds interpretation of quantum mechanics.

Professor Hawking was at the forefront of transforming cosmology from a largely speculative endeavor into a quantitative and predictive science. “Without supercomputers, we would just be philosophers,” he once stated.

Hawking led the founding of the COSMOS supercomputing facility in 1997. Last December, the center deployed an HPE Superdome Flex in-memory computing platform to process massive data sets that represent 14 billion years of history.

“Curiosity is essential to being human,” said Hawking at the time of the collaboration. “From the dawn of humanity we’ve looked up at the stars and wondered about the Universe around us. My COSMOS group is working to understand how space and time work, from before the first trillion trillionth of a second after the big bang up to today, fourteen billion years later.”

In November 2015, the Stephen Hawking Centre for Theoretical Cosmology was recognized with the HPCwire Readers’ Choice Award for Best Use of High Performance Data Analytics. The award was for the many-core acceleration of the MODAL analysis pipeline which offered new statistical insights from the Cosmic Microwave Background as observed by the ESA Planck Satellite. The work was achieved on the Intel Xeon Phi-enabled SGI UV2000, developed in collaboration with the STFC DiRAC HPC Facility and the largest shared-memory computer in Europe at the time.

Professor Hawking was an inspiration to people around the world for his brilliant scientific achievements and for his determination and good humor in living with a disability. Diagnosed with amyotrophic lateral sclerosis (ALS) in 1963 when he was 21, and outliving his doctors’ prognosis by decades, Hawking used a wheelchair and communicated through a computerized voice system. “The ability to see the lighter side of life and his perseverance in the face of adversity were important aspects of his warm and open personality,” remarked friends and colleagues in a tribute. “He was a living demonstration that there should be no boundary to human endeavour.”

Speaking at the opening of the 2012 London Paralympics games, Hawking said, “There is no such thing as a standard run-of-the mill human being. And however difficult life may seem, there is always something you can do, and succeed at.”

Professor Hawking was awarded the US Presidential Medal of Freedom in 2009. Other distinguished honors he received include the Copley Medal of the Royal Society, the Albert Einstein Award, the Gold Medal of the Royal Astronomical Society, the Fundamental Physics Prize, and the BBVA Foundation Frontiers of Knowledge Award for Basic Sciences. He was a Fellow of The Royal Society, a Member of the Pontifical Academy of Sciences, and a Member of the US National Academy of Sciences.

Hawking leaves behind three children and three grandchildren.

In a statement, Hawking’s children said: “We are deeply saddened that our beloved father passed away today. He was a great scientist and an extraordinary man whose work and legacy will live on for many years. His courage and persistence with his brilliance and humour inspired people across the world.

“He once said: ‘It would not be much of a universe if it wasn’t home to the people you love.’ We will miss him for ever.”

The post Stephen Hawking, Legendary Scientist, Dies at 76 appeared first on HPCwire.