Feed aggregator

InfiniBand Accelerates the World’s Fastest Supercomputers

Related News- HPC Wire - 1 hour 10 min ago

BEAVERTON, Ore., Nov. 21, 2017 — The InfiniBand Trade Association (IBTA), a global organization dedicated to maintaining and furthering the InfiniBand specification, today highlighted the latest TOP500 List, which reports that the world’s first- and fourth-fastest supercomputers are accelerated by InfiniBand. The results also show that InfiniBand continues to be the most used high-speed interconnect on the TOP500 List, reinforcing its status as the industry’s leading high performance interconnect technology. The updated list reflects continued demand for InfiniBand’s unparalleled combination of network bandwidth, low latency, scalability and efficiency.

InfiniBand connects 77 percent of the new High Performance Computing (HPC) systems added since the June 2017 list, eclipsing the 55 percent gain from the previous six-month period. This upward trend indicates increasing InfiniBand usage by HPC system architects designing new clusters to solve larger, more complex problems. Additionally, InfiniBand is the preferred fabric of the leading Artificial Intelligence (AI) and Deep Learning systems currently featured on the list. As HPC demands continue to evolve, especially for AI and Deep Learning applications, the industry can rely on InfiniBand to meet these applications’ rigorous network performance requirements and scalability needs.

The latest TOP500 List also featured positive developments for RDMA over Converged Ethernet (RoCE) technology. All 23 systems running Ethernet at 25Gb/s or higher are RoCE capable. We expect the number of RoCE enabled systems on the TOP500 List to rise as more systems look to take advantage of advanced high-speed Ethernet interconnects for further performance and efficiency gains.

“InfiniBand being the preferred interconnect for new HPC systems shows the increasing demand for the performance it can deliver. Its presence at #1 and #4 is an excellent example of that performance,” said Bill Lee, IBTA Marketing Working Group Co-Chair. “In addition to delivering world-leading performance and scalability, InfiniBand guarantees backward and forward compatibility, ensuring users the highest return on investment and future-proofing their data centers.”

The TOP500 List (www.top500.org) is published twice per year and ranks the top supercomputers worldwide based on the LINPACK benchmark rating system, providing valuable statistics for tracking trends in system performance and architecture.

About the InfiniBand Trade Association

The InfiniBand Trade Association was founded in 1999 and is chartered with maintaining and furthering the InfiniBand and the RoCE specifications. The IBTA is led by a distinguished steering committee that includes Broadcom, Cray, HPE, IBM, Intel, Mellanox Technologies, Microsoft, Oracle and QLogic. Other members of the IBTA represent leading enterprise IT vendors who are actively contributing to the advancement of the InfiniBand and RoCE specifications. The IBTA markets and promotes InfiniBand and RoCE from an industry perspective through online, marketing and public relations engagements, and unites the industry through IBTA-sponsored technical events and resources. For more information on the IBTA, visit www.infinibandta.org.

Source: InfiniBand Trade Association

The post InfiniBand Accelerates the World’s Fastest Supercomputers appeared first on HPCwire.

Five from ORNL Elected Fellows of American Association for the Advancement of Science

Related News- HPC Wire - 4 hours 57 min ago

OAK RIDGE, Tenn., Nov. 21, 2017 — Five researchers at the Department of Energy’s Oak Ridge National Laboratory have been elected fellows of the American Association for the Advancement of Science (AAAS).

AAAS, the world’s largest multidisciplinary scientific society and publisher of the Science family of journals, honors fellows in recognition of “their scientifically or socially distinguished efforts to advance science or its applications.”

Budhendra Bhaduri, leader of the Geographic Information Science and Technology group in the Computational Sciences and Engineering Division, was elected by the AAAS section on geology and geography for “distinguished contributions to geographic information science, especially for developing novel geocomputational approaches to create high resolution geographic data sets to improve human security.”

Bhaduri’s research focuses on novel implementation of geospatial science and technology, namely the integration of population dynamics, geographic data science and scalable geocomputation to address the modeling and simulation of complex urban systems at the intersection of energy, human dynamics and urban sustainability. He is also the director of ORNL’s Urban Dynamics Institute, a founding member of the DOE’s Geospatial Sciences Steering Committee and was named an ORNL corporate fellow in 2011.

Sheng Dai, leader of the Nanomaterials Chemistry group in the Chemical Sciences Division, was elected by the AAAS section on chemistry for “significant and sustained contribution in pioneering and developing soft template synthesis and ionothermal synthesis approaches to functional nanoporous materials for energy-related applications.”

Dai’s research group synthesizes and characterizes novel functional nanomaterials, ionic liquids and porous materials for applications in catalysis, efficient chemical separation processes and energy storage systems. He is the director of the Fluid Interface Reactions, Structures and Transport (FIRST) Center, a DOE Energy Frontier Research Center, and was named an ORNL corporate fellow in 2011.

Mitchel Doktycz, leader of the Biological and Nanoscale Systems Group in the Biosciences Division, was elected by the AAAS section on biological sciences for “distinguished contributions to the field of biological sciences, particularly advancing the use of nanotechnologies for characterizing and interfacing to biological systems.”

Doktycz is also a researcher at ORNL’s Center for Nanophase Materials Sciences and specializes in the development of analytical technologies for post-genomics studies, molecular and cellular imaging techniques and nanomaterials used to study and mimic biological systems. He holds a joint faculty appointment in the UT-ORNL Bredesen Center for Interdisciplinary Research and Graduate Education and the Genome Science and Technology Program at the University of Tennessee, Knoxville.

Bobby G. Sumpter, deputy director of the Center for Nanophase Materials Sciences (CNMS), was elected by the AAAS section on physics for “distinguished contributions to the field of computational and theoretical chemical physics, particularly for developing a multifaceted approach having direct connections to experimental research in nanoscience and soft matter.”

Sumpter’s research combines modern computational capabilities with chemistry, physics and materials science for new innovations in soft matter science, nanomaterials and high-capacity energy storage. He is the leader of both the Computational Chemical and Materials Science Group in the Computational Sciences and Engineering Division and the Nanomaterials Theory Institute at CNMS, which is a DOE Office of Science User Facility. He was named an ORNL corporate fellow in 2013, is chair of the Corporate Fellows Council and holds a joint faculty appointment in the UT-ORNL Bredesen Center.

Robert Wagner, director of the National Transportation Research Center in the Energy and Transportation Science Division, was elected by the AAAS section on engineering for “distinguished contributions to the fields of combustion and fuel science, particularly for seminal research on combustion instabilities and abnormal combustion phenomena.”

Wagner is the lead of the Sustainable Mobility theme for ORNL’s Urban Dynamics Institute and the co-lead of the DOE’s Co-Optimization of Fuels and Engines Initiative, which brings together the unique research and development capabilities of nine national labs and industry partners to accelerate the introduction of efficient, clean, affordable and scalable high-performance fuels and engines. He also holds a joint faculty appointment in the UT-ORNL Bredesen Center and is a fellow of the Society of Automotive Engineers International and the American Society of Mechanical Engineers.

The new fellows will be formally recognized in February at the 2018 AAAS Annual Meeting in Austin, Texas.

ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov/.

Source: ORNL

The post Five from ORNL Elected Fellows of American Association for the Advancement of Science appeared first on HPCwire.

HPE Announces Antonio Neri to Succeed Meg Whitman as CEO

Related News- HPC Wire - 5 hours 11 min ago

PALO ALTO, Calif., Nov. 21, 2017 — Hewlett Packard Enterprise today announced that, effective February 1, 2018, Antonio Neri, current President of HPE, will become President and Chief Executive Officer, and will join the HPE Board of Directors.  Meg Whitman, current Chief Executive Officer, will remain on the HPE Board of Directors.

“I’m incredibly proud of all we’ve accomplished since I joined HP in 2011.  Today, Hewlett Packard moves forward as four industry-leading companies that are each well positioned to win in their respective markets,” said Meg Whitman, CEO of HPE. “Now is the right time for Antonio and a new generation of leaders to take the reins of HPE. I have tremendous confidence that they will continue to build a great company that will thrive well into the future.”

Meg Whitman was appointed President and CEO of HP in September 2011.  Since then, she has executed against a five-year turnaround strategy that has repositioned the company to better compete and win in today’s environment.  Under her leadership, the company rebuilt its balance sheet, reignited innovation, strengthened operations and improved customer and partner satisfaction.  It also made strategic moves to focus and strengthen its portfolio, most notably its separation from HP Inc., which was the largest corporate separation in history.  She also led the subsequent spin off and mergers of HPE’s Enterprise Services and Software businesses, as well as strategic acquisitions including Aruba, SGI, SimpliVity and Nimble Storage.

Under Whitman’s leadership, significant shareholder value has been created, including nearly $18 billion in share repurchases and dividends.  Since the birth of HPE on November 2, 2015, the company has delivered a total shareholder return of 89 percent, which is more than three times that of the S&P 500.

“During the past six years, Meg has worked tirelessly to bring stability, strength and resiliency back to an iconic company,” said Pat Russo, Chairman of HPE’s Board of Directors. “Antonio is an HPE veteran with a passion for the company’s customers, partners, employees and culture. He has worked at Meg’s side and is the right person to deliver on the vision the company has laid out.”

Neri, 50, joined HP in 1995 as a customer service engineer in the EMEA call center.  He went on to hold various roles in HP’s Printing business and then to run customer service for HP’s Personal Systems unit.  In 2011, Neri began running the company’s Technology Services business, then its Server and Networking business units, before running all of Enterprise Group beginning in 2015.  As the leader for HPE’s largest business segment, comprising server, storage, networking and services solutions, Neri was responsible for setting the R&D agenda, bringing innovations to market, and go-to-market strategy and execution.  Neri was appointed President of HPE in June 2017.  In addition to leading the company’s four primary lines of business, as President, Neri has been responsible for HPE Next, a program to accelerate the company’s core performance and competitiveness.

“The world of technology is changing fast, and we’ve architected HPE to take advantage of where we see the markets heading,” said Antonio Neri, President of HPE. “HPE is in a tremendous position to win, and we remain focused on executing our strategy, driving our innovation agenda, and delivering the next wave of shareholder value.”

HPE’s strategy is based on three pillars.  First, making Hybrid IT simple through its offerings in the traditional data center, software-defined infrastructure, systems software, private cloud and through cloud partnerships.  Second, powering the Intelligent Edge through offerings from Aruba in Campus and Branch networking, and the Industrial Internet of Things (IoT) with products like Edgeline and its Universal IoT software platform. Third, providing the services that are critical to customers today, including Advisory, Professional and Operational Services.

About Hewlett Packard Enterprise

Hewlett Packard Enterprise is an industry leading technology company that enables customers to go further, faster. With the industry’s most comprehensive portfolio, spanning the core data center to the cloud to the intelligent edge, our technology and services help customers around the world make IT more efficient, more productive and more secure.

Source: HPE

The post HPE Announces Antonio Neri to Succeed Meg Whitman as CEO appeared first on HPCwire.

Live and in Color, Meet the European Student Cluster Teams

Related News- HPC Wire - 6 hours 6 min ago

The SC17 Student Cluster Competition welcomed two teams from Europe, the German team of FAU/TUC and Team Poland, the pride of Warsaw. Let’s get to know them better through the miracle of video…..

Team FAU/TUC is a combined team from Friedrich-Alexander University Erlangen-Nürnberg and the Technical University of Chemnitz. The team is coming off a LINPACK win at ISC17, but ran into some trouble at SC17. Lots of it was just bad luck and the kind of niggling technical problems that we have all run into with practically every tech project we undertake.

One triumph for the team is their optimization of the Born application onto GPUs. This brought the Born ‘shot’ time down to a mere seven minutes from the three-hour CPU-only run. That’s one hell of an optimization.

As you’ll see in the video, the team is taking their bad luck philosophically and plugging ahead in the competition, which is exactly what we love to see.

Team Poland is a mix of students from several Warsaw area universities. We first met them at ASC16 and now, as then, they weren’t camera or microphone shy. A self-admitted emotional team, we heard all about their trials and travails at SC17.

They missed turning in their LINPACK result by a tiny 30-second margin. The team had NVIDIA V100 GPUs and needed to get some missing firmware for the accelerators. Unfortunately for Team Poland, it took them quite a while to acquire said firmware, which caused them to miss the LINPACK time limit.

They’re a fun and enthusiastic team, to say the least, and, again, not camera shy. They lay it all out there in the video posted above.

In our next article, we’ll introduce the Asian cluster competition teams….stay tuned….

The post Live and in Color, Meet the European Student Cluster Teams appeared first on HPCwire.

RAIDIX Data Storage Celebrates RAID’s 30th Anniversary

Related News- HPC Wire - 6 hours 44 min ago

Nov. 21, 2017 — Data storage software developer RAIDIX celebrates the 30th anniversary of the RAID technology that enables the user to consolidate multiple drives into a logical array for improved redundancy and performance. The concept of RAID (redundant array of independent disks) was first introduced in 1987 by University of California, Berkeley researchers David Patterson, Garth A. Gibson, and Randy Katz.

In June 1988, the researchers presented the paper “A Case for Redundant Arrays of Inexpensive Disks (RAID)” at the SIGMOD conference (“inexpensive” was later replaced by “independent” in the name). The primary levels of the RAID specification were RAID 1 (a mirrored disk array), RAID 2 (reserved for levels with dedicated Hamming-code parity), RAID 3 and 4 (byte- and block-level striping with dedicated parity), and RAID 5 (block-level striping with distributed parity).

Initially, RAID was regarded largely as a hardware technology. Indeed, a physical RAID controller is capable of supporting multiple arrays of various levels concurrently. However, a more efficient implementation of RAID is available with the use of software components (drivers). The Linux kernel, for instance, enables flexible management of RAID devices. Leveraging Linux kernel modules and erasure coding technology, the RAIDIX software developers created a building block for high-performance, fault-tolerant data storage systems based on commodity off-the-shelf hardware.
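
To make the single-parity idea behind block-striped RAID levels such as RAID 5 concrete, here is a minimal Python sketch of XOR parity and rebuild (an illustration of the general technique only, not of RAIDIX’s patented algorithms):

    from functools import reduce

    def xor_parity(blocks):
        """Byte-wise XOR of equal-sized data blocks yields the parity block."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    def rebuild_missing(surviving_blocks, parity):
        """XORing the parity with every surviving block recovers the single lost block."""
        return xor_parity(surviving_blocks + [parity])

    stripe = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks striped across three drives
    parity = xor_parity(stripe)            # parity block stored on a fourth drive

    # Simulate losing the drive that held stripe[1] and reconstruct its contents.
    assert rebuild_missing([stripe[0], stripe[2]], parity) == stripe[1]

Triple-parity schemes such as RAID 7.3 extend the same principle with additional, independent checksums so that up to three simultaneous drive failures can be tolerated.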

RAIDIX operates with the RAID 0, RAID 5, RAID 6, and RAID 10 levels. The patented proprietary RAIDIX algorithms include the unique RAID 7.3 and N+M levels.

RAID 7.3 is a sibling of double-parity RAID 6, but delivers greater reliability. RAID 7.3 is a block-interleaving level with triple distributed parity, allowing the system to restore data if up to three drives fail. RAID 7.3 delivers high performance without additional CPU load and is optimally suited to large arrays of over 32 TB capacity.

Another patented technology, RAID N+M, is a block-interleaving level with flexible parity that allows the user to choose the number of disks allocated to checksums. RAID N+M requires at least 8 disks and can sustain the complete failure of up to 64 drives in the same group (depending on the number of parity disks).

Development of new algorithms and cutting-edge technology inspired by artificial intelligence and machine learning is the core competency of the RAIDIX in-house Research Lab. Over the years, RAIDIX has registered 10+ technology patents in the US. The Lab’s key lines of research involve gradual elimination of obsolete write levels and associated latencies, advancement of predictive analytics, configuration of data storage policies on-the-fly, and more.

About RAIDIX

RAIDIX (www.raidix.com) is a leading solution provider and developer of high-performance data storage systems. The company’s strategic value builds on patented erasure coding methods and innovative technology designed by the in-house research laboratory. The RAIDIX Global Partner Network encompasses system integrators, storage vendors and IT solution providers offering RAIDIX-powered products for professional and enterprise use.

Source: RAIDIX

The post RAIDIX Data Storage Celebrates RAID’s 30th Anniversary appeared first on HPCwire.

SC17 Student Cluster Kick Off – Guts, Glory, Grep

Related News- HPC Wire - 6 hours 46 min ago

The SC17 Student Cluster Competition started with a well-orchestrated kick-off emceed by Stephen Harrell, the competition chair.

It began with a welcome from SC17 chair Bernd Mohr, where he lauded the competition for being the most internationally flavored event of the entire SC conference.

Stephen Harrell then took over, introducing his committee members (who are vital to the smooth operation of the competition) and having Andy Howard briefly explain the cloud component of the tourney. After that, we heard Andrea Orton unveil the mystery application (it’s MPAS-A, a sophisticated atmospheric modeling app).

At that point, it was on, and students were a blur of activity as they hurried to their booths to begin work on the applications.

Our video camera puts you right in the middle of the action, as you’ll see in the upcoming articles. Stay tuned…..

The post SC17 Student Cluster Kick Off – Guts, Glory, Grep appeared first on HPCwire.

DDN Congratulates Customer JCAHPC for Winning Inaugural IO500 Award

Related News- HPC Wire - 10 hours 39 min ago

SANTA CLARA, Calif., Nov. 21, 2017 – DataDirect Networks (DDN) today announced that the Oakforest-PACS system at the Joint Center for Advanced HPC (JCAHPC) in Japan, which uses DDN’s Infinite Memory Engine (IME), has been named the first annual IO500 winner. The award was revealed at the Supercomputing 2017 (SC17) trade show. The goal of the IO500 (www.vi4io.org/std/io500/start) is to create a suite of I/O benchmarks that allows comparison of storage systems; it is similar in concept to the Top500 for computing systems. Storage systems are ranked according to a combination of I/O benchmarks designed to represent a mix of applications and real-world workloads.
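
As a rough illustration of that scoring approach (the figures below are invented, not JCAHPC’s measurements), the IO500 combines bandwidth and metadata sub-benchmarks into a single figure using geometric means, along these lines:

    from math import prod

    def geometric_mean(values):
        """Geometric mean keeps any single outsized sub-score from dominating the total."""
        return prod(values) ** (1.0 / len(values))

    # Hypothetical sub-results, for illustration only.
    bandwidth_gib_s = [120.0, 15.0, 110.0, 18.0]       # e.g. easy/hard write and read runs
    metadata_kiops = [900.0, 60.0, 800.0, 70.0, 50.0]  # e.g. create/stat/delete/find runs

    bw = geometric_mean(bandwidth_gib_s)
    md = geometric_mean(metadata_kiops)
    score = geometric_mean([bw, md])
    print(f"bandwidth {bw:.2f} GiB/s, metadata {md:.2f} kIOP/s, combined score {score:.2f}")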

JCAHPC’s IME deployment now takes the #1 position, ahead of all other filesystem and burst buffer solutions. The secret behind IME’s performance is the ground-up development of a new leaner I/O path that delivers flash performance direct to the applications rather than presenting flash through a file system.

“We are very pleased to finally have a benchmark suite that is accepted industry wide and reflects the kinds of workloads found within our most data-intensive customer environments,” said James Coomer, vice president of product and benchmarking at DDN.  “Even more exciting is the fact that our customer, JCAHPC, achieved this top honor utilizing DDN’s IME flash cache.  This recognition is validation that IME is delivering the type of I/O acceleration gains for which it was designed.”

IME is DDN’s scale-out flash cache, designed to accelerate applications beyond the capabilities of today’s file systems. IME manages I/O in an innovative way to avoid the bottlenecks and latencies of traditional I/O management and delivers the complete potential of flash all the way to connected applications. IME also features advanced data protection, flexible erasure coding and adaptive I/O routing, which together provide new levels of resilience and performance consistency.

About DDN

DataDirect Networks (DDN) is a leading big data storage supplier to data-intensive, global organizations. For almost 20 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.

Source: DDN

The post DDN Congratulates Customer JCAHPC for Winning Inaugural IO500 Award appeared first on HPCwire.

First-Ever High School Team Squares Off Against Top Universities in Annual Supercomputing Challenge

Related News- HPC Wire - 12 hours 34 min ago

Nov. 21, 2017 — Most underdogs don’t take home the trophy. But that didn’t stop the Sudo Wrestlers from competing as the first all-high school team in the 11th annual Student Cluster Competition, held last week at the SC17 supercomputing show, in Denver.

All 16 teams participating in the Student Cluster Competition at SC17 gathered on the steps of the Denver Convention Center for a commemorative shot.

Dozens of undergraduate students in 16 teams from some of the world’s most lauded universities joined the high schoolers, all armed with the latest NVIDIA Tesla V100 GPU accelerators. Their aim: to create small computing clusters for a non-stop, 48-hour challenge measuring how fast they could complete scientific, high performance computing workloads.

Traveling to the show in Denver from William Henry Harrison High School in West Lafayette, Indiana, the Sudo Wrestlers comprised one senior, four juniors and a sophomore. Their interest in the challenge was inspired by a presentation two Purdue University instructors made last year to their school’s robotics club.

“We’re probably not going to win, but we’re just happy to be here,” said team member Jacob Sharp, as the competition was getting started.

Sharp was charged with ensuring the cluster was up and running smoothly at all times. As a gamer and fan of NVIDIA’s GeForce cards, Sharp said he and his teammates were “psyched” to have access to NVIDIA Volta GPUs to build their cluster.

The competition fosters collaboration not just within teams, but across them. Sharp reported that the other, older teams offered assistance when the Sudo Wrestlers ran into compiling and other technical issues.

Unfortunately, Sharp’s prediction was right. Toward the end of the week, the winner of the Student Cluster Competition was announced. It was not the Sudo Wrestlers.

Instead, the honor went to Singapore’s Nanyang Technological University, whose members shattered two benchmark records with a screaming-fast cluster they packed with 16 NVIDIA V100 GPUs.

The NTU team posted a SCC LINPACK score of 51.77 TFlop/s, beating the previous record of 37.05 TFlop/s, held by Germany’s Friedrich-Alexander-Universität. Then, it captured the competition’s HPCG record — a benchmark meant to mimic modern HPC workloads — with a score of 2,056, easily topping the 1,394 record set by the Purdue/NEU team six months ago at ISC17.

The Sudo Wrestlers posted a LINPACK score of 28.65 TFlop/s, landing them in ninth place — an impressive feat for such a young team.

Source: NVIDIA

The post First-Ever High School Team Squares Off Against Top Universities in Annual Supercomputing Challenge appeared first on HPCwire.

Activist Investor Starboard Buys 10.7% Stake in Mellanox; Sale Possible?

Related News- HPC Wire - Mon, 11/20/2017 - 17:16

Starboard Value has reportedly taken a 10.7 percent stake in interconnect specialist Mellanox Technologies, and according to the Wall Street Journal, has urged the company “to improve its margins and stock and explore a potential sale.”

The WSJ article, written by David Benoit, reports, “The New York activist investor has a long record of successful semiconductor investments, highlighted earlier Monday by Marvell’s $6 billion deal for Cavium Inc., less than two years after Starboard arrived and the company promptly ousted its founders. The deal helped send Marvell stock higher, and it has now returned about 157% since before Starboard arrived in February 2016, compared with roughly 127% from the iShares Semiconductor exchange-traded fund.”

According to the report, Starboard believes Mellanox spends too much on research and development, among other things, to try to grow revenue, “sacrificing margins compared with peers, according to people familiar with the matter.”

Mellanox is a leading provider of HPC interconnect technology, offering both InfiniBand and Ethernet products. It also purchased interconnect chip specialist EZchip and has been incorporating its technology (see HPCwire article, Mellanox Spins EZchip/Tilera IP Into BlueField Networking Silicon, and EnterpriseTech coverage, Mellanox Ethernet/ARM NICs Lighten CPU Burden).

Link to Wall Street Journal article:  https://www.wsj.com/articles/starboard-value-takes-10-7-stake-in-mellanox-technologies-1511216509

The post Activist Investor Starboard Buys 10.7% Stake in Mellanox; Sale Possible? appeared first on HPCwire.

Croatia Signs the European Declaration on High-Performance Computing

Related News- HPC Wire - Mon, 11/20/2017 - 11:41

Nov. 20, 2017 — Croatia is the 13th country to sign the European declaration on high-performance computing (HPC). Blaženka Divjak, Croatian Minister of Science and Education, signed the declaration today in Brussels in the presence of Roberto Viola, Director-General for Communications Networks, Content and Technology at the European Commission.

Vice-President Ansip, responsible for the Digital Single Market, and Mariya Gabriel, Commissioner for Digital Economy and Society, welcomed this important step for EuroHPC: “We are pleased to welcome Croatia in this bold European project. By aligning our European and national strategies and pooling resources, we will put Europe in a leading global position in HPC and provide access to world-class supercomputing resources for both public and private users, especially for SMEs, who use more and more HPC in their business processes. The scientific and industrial developments will have a direct positive impact on European citizens’ daily lives in areas going from biotechnological and biomedical research to personalised medicine, and from renewable energy to urban development.”

Blaženka Divjak, Croatian Minister of Science and Education, added: “The Republic of Croatia recognizes the need for an EU-integrated, world-class high performance computing infrastructure which, in combination with EU data and network infrastructures, would raise both Europe’s and Croatia’s scientific capabilities and industrial competitiveness. Therefore, we are very pleased that Croatia is now part of this ambitious European project. It is widely agreed that scientific progress as well as economic growth will increasingly rely on top-level HPC-enabled methods and tools, services and products. Signing this Declaration is a step in the right direction for our country, which will help Croatia to further develop our research and industrial potential. Europe needs to combine resources to overcome its fragmentation and the dilution of efforts.”

The goal of the EuroHPC agreement is to establish a competitive HPC ecosystem by acquiring and operating leading-edge high-performance computers. The ecosystem will comprise hardware and software components, applications, skills and services. It will be underpinned by a world-class HPC and data infrastructure, available across the EU, no matter where the supercomputers are located. This infrastructure will also support the European Open Science Cloud and will allow millions of researchers to share and analyse data in a trusted environment. Focusing initially on the scientific community, the cloud will over time enlarge its user base to a wide range of users: scientific communities, large industry and SMEs, as well as the public sector.

The EuroHPC declaration aims at having EU exascale supercomputers, capable of at least 10¹⁸ calculations per second, in the global top three by 2022-2023.

The EuroHPC initiative was launched during the Digital Day in March 2017 and signed by France, Germany, Italy, Luxembourg, the Netherlands, Portugal and Spain (see the press statement, speech and blog post by Vice-President Ansip). Five other countries have since joined this bold European initiative: Belgium in June, Slovenia in July, Bulgaria and Switzerland in October and Greece in November.

Why HPC matters

Supercomputers are very powerful systems with hundreds of thousands or millions of processors working in parallel to analyse billions of pieces of data in real time. They do extremely demanding computations for simulating and modelling scientific and engineering problems that cannot be performed using general-purpose computers. Access to HPC is therefore becoming essential in many areas, from health, biology and climate change to automotive, aerospace, energy and banking.

Moreover, as the problems we want to solve become more and more complex, the demands on computational resources grow accordingly. At this pace, today’s state-of-the-art machines are obsolete after 5-7 years of operation.

Developing a European HPC ecosystem will benefit both academia and industry. As a wide range of scientific and industrial applications will be made available at EU level, citizens will benefit from an increased level of HPC resources in areas like:

  • Health, demographic change and wellbeing
  • Secure, clean and efficient energy
  • Smart, green and integrated urban planning
  • Cybersecurity
  • Weather forecasting and climate change
  • Food security

More examples in the HPC factsheet.

Next steps

The European Commission, together with the countries that have signed the declaration, is preparing, by the end of 2017, a roadmap with implementation milestones for deploying the European exascale supercomputing infrastructure.

All other Member States and countries associated to Horizon 2020 are encouraged to join EuroHPC and work together, and with the European Commission, in this initiative.

Source: European Commission

The post Croatia Signs the European Declaration on High-Performance Computing appeared first on HPCwire.

AMD EPYC Processor Powers New HPE Gen10 Server to World Records in SPEC CPU Benchmarks

Related News- HPC Wire - Mon, 11/20/2017 - 11:14

SUNNYVALE, Calif., Nov. 20, 2017 — AMD (NASDAQ: AMD) today announced that the new Hewlett Packard Enterprise (HPE) (NYSE: HPE) ProLiant DL385 Gen10 server, powered by AMD EPYC processors, set world records in both SPECrate2017_fp_base and SPECfp_rate2006. The secure and flexible 2P 2U HPE ProLiant DL385 Gen10 Server joins the HPE Cloudline CL3150 server in featuring AMD EPYC processors. With designs ranging from 8 to 32 cores, AMD EPYC delivers industry-leading memory bandwidth across the HPE line-up, with eight channels of memory and unprecedented support for integrated, high-speed I/O with 128 lanes of PCIe 3 on every EPYC processor.

“HPE is joining with AMD today to extend the world’s most secure industry standard server portfolio to include the AMD EPYC processor. We now give customers another option to optimize performance and security for today’s virtualized workloads,” said Justin Hotard, vice president and GM, Volume Global Business Unit, HPE. “The HPE ProLiant DL385 featuring the AMD EPYC processor is the result of a long-standing technology engagement with AMD and a shared belief in continuing innovation.”

AMD EPYC Leadership Cost-per-VM Server Configurations

The performance of AMD EPYC is delivered by up to 64 cores in the HPE DL385 2P server configuration, with access to 4 terabytes of memory and 128 lanes of PCIe connectivity. HPE reports that this combination of core count and features attains up to 50 percent lower cost per virtual machine (VM) than traditional server solutions.

AMD EPYC-powered HPE ProLiant DL385 Gen10 World Record Floating Point Performance

  • An AMD EPYC model 7601-based HPE DL385 Gen10 system scored 257 on SPECrate2017_fp_base, higher than any other two-socket system score published by SPEC.
  • An AMD EPYC model 7601-based HPE DL385 Gen10 system scored 1980 on SPECfp_rate2006, higher than any other two-socket system score published by SPEC.

“The HPE DL385 positions the AMD EPYC processor right in the heart of the high-volume market where dual-socket servers are frequently deployed by service providers, large enterprises and small-to-medium size businesses,” said Matt Eastwood, senior vice president, enterprise, datacenter and cloud infrastructure, IDC. “With its combination of high-performance cores, memory bandwidth and PCIe connectivity options it is an attractive choice to address a wide range of business applications and workloads.”

AMD Secure Processor

Every EPYC processor integrates hardware-based security. The ProLiant DL385 delivers unmatched security via the HPE Silicon Root of Trust, enabling only validated firmware to run. The HPE Silicon Root of Trust is linked to the AMD Secure Processor in the AMD EPYC SoC for firmware validation before the server boots.
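
As a conceptual sketch only (not HPE’s or AMD’s actual mechanism, which is considerably more involved), the root-of-trust idea amounts to comparing a measurement of the firmware image against a value anchored in silicon before handing over control:

    import hashlib

    # Hypothetical known-good measurement, standing in for a value anchored in silicon.
    FUSED_DIGEST = hashlib.sha256(b"known-good-firmware-v1").hexdigest()

    def firmware_is_trusted(image: bytes) -> bool:
        """Boot proceeds only if the image's hash matches the anchored measurement."""
        return hashlib.sha256(image).hexdigest() == FUSED_DIGEST

    assert firmware_is_trusted(b"known-good-firmware-v1")  # validated firmware boots
    assert not firmware_is_trusted(b"tampered-firmware")   # anything else is rejected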

The AMD Secure Processor and HPE DL385 also enable:

  • Secure Encrypted Memory − All the memory or a portion of the memory can be encrypted to protect data against memory hacks and scrapes.
  • Secure Encrypted Virtualization − VMs have separate encryption keys as does the hypervisor, isolating the VMs from one another and from the hypervisor itself.

“AMD is proud to deliver to HPE a superb balance of high-performance cores, memory, and I/O for optimal performance with AMD EPYC,” said Scott Aylor, Corporate VP and GM Enterprise Business Unit at Advanced Micro Devices.  “With AMD EPYC the HPE ProLiant DL385 Gen10 can support more virtual machines per server, process more data in parallel, directly access more local storage, while more securely protecting data in memory.”

About AMD

For more than 45 years AMD has driven innovation in high-performance computing, graphics and visualization technologies ― the building blocks for gaming, immersive platforms, and the datacenter. Hundreds of millions of consumers, leading Fortune 500 businesses and cutting-edge scientific research facilities around the world rely on AMD technology daily to improve how they live, work and play. AMD employees around the world are focused on building great products that push the boundaries of what is possible.

Source: AMD

The post AMD EPYC Processor Powers New HPE Gen10 Server to World Records in SPEC CPU Benchmarks appeared first on HPCwire.

Installation of Sierra Supercomputer Steams Along at LLNL

Related News- HPC Wire - Mon, 11/20/2017 - 10:55

Sierra, the 125-petaflops machine based on IBM’s Power9 chip and being built at Lawrence Livermore National Laboratory, sometimes takes a back seat to Summit, the 180-petaflops system being built at Oak Ridge National Laboratory that may well top the Top500 list in June. Like Sierra, Summit features a heterogeneous architecture based on Power9 CPUs and Nvidia GPUs.

Livermore today posted a brief update on Sierra’s progress along with a short video. Trucks began delivering racks and hardware over the summer with system acceptance scheduled in fiscal 2018. Sierra, part of the CORAL effort, is expected to provide four to six times the sustained performance of the Lab’s current workhorse system, Sequoia.

“Sierra is what we call an advanced technology platform,” says Mike McCoy, Program Director, Advanced Simulation and Computing, in the video. “[It] will serve the three NNSA (National Nuclear Security Administration) laboratories. So the ATS2, which is Sierra, is the second in a series of four systems that are on a roadmap to get us to exascale computing [around] 2024.”

Sierra is expected to have roughly 260 racks and will be the biggest computer installed at Livermore in size, number of racks, and speed.

“IBM analyzed our benchmark applications, showed us how the system would perform well for them, and how we would be able to achieve similar performance for our real applications,” said Bronis de Supinski, Livermore Computing’s chief technology officer and head of Livermore Lab’s Advanced Technology (AT) systems, in the article. “Another factor was that we had a high probability, given our estimates of the risks associated with that proposal, of meeting our scheduling requirements.”

While Lab scientists have positive indications from their early access systems, de Supinski said until Sierra is on the floor and running stockpile stewardship program applications, which could take up to two years, they won’t be certain how powerful the machine will be or how well it will work for them.

Sierra will feature two IBM Power9 processors and four NVIDIA Volta GPUs per node. The Power9s will provide a large amount of memory bandwidth from the chips to Sierra’s DDR4 main memory, and the Lab’s workload will benefit from the use of second-generation NVLink, which forms a high-speed connection between the CPUs and GPUs.

As Livermore’s first extreme-scale CPU/GPU system, Sierra has presented challenges to Lab computer scientists in porting codes, identifying what data to make available on GPUs and moving data between the GPUs and CPUs to optimize the machine’s capability. Through the Sierra Center of Excellence, Livermore Lab code developers and computer scientists have been collaborating with on-site IBM and NVIDIA employees to port applications.

Feature Image: Sierra, LLNL

The post Installation of Sierra Supercomputer Steams Along at LLNL appeared first on HPCwire.

Marvell to Acquire Cavium

Related News- HPC Wire - Mon, 11/20/2017 - 09:25

SANTA CLARA and SAN JOSE, Calif., Nov. 20 — Marvell Technology Group Ltd. and Cavium, Inc. today announced a definitive agreement, unanimously approved by the boards of directors of both companies, under which Marvell will acquire all outstanding shares of Cavium common stock in exchange for consideration of $40.00 per share in cash and 2.1757 Marvell common shares for each Cavium share. Upon completion of the transaction, Marvell will become a leader in infrastructure solutions with approximately $3.4 billion in annual revenue.

The transaction combines Marvell’s portfolio of leading HDD and SSD storage controllers, networking solutions and high-performance wireless connectivity products with Cavium’s portfolio of leading multi-core processing, networking communications, storage connectivity and security solutions. The combined product portfolios provide the scale and breadth to deliver comprehensive end-to-end solutions for customers across the cloud data center, enterprise and service provider markets, and expands Marvell’s serviceable addressable market to more than $16 billion. This transaction also creates an R&D innovation engine to accelerate product development, positioning the company to meet today’s massive and growing demand for data storage, heterogeneous computing and high-speed connectivity.

“This is an exciting combination of two very complementary companies that together equal more than the sum of their parts,” said Marvell President and Chief Executive Officer, Matt Murphy. “This combination expands and diversifies our revenue base and end markets, and enables us to deliver a broader set of differentiated solutions to our customers. Syed Ali has built an outstanding company, and I’m excited that he is joining the Board. I’m equally excited that Cavium’s Co-founder Raghib Hussain and Vice President of IC Engineering Anil Jain will also join my senior leadership team. Together, we all will be able to deliver immediate and long-term value to our customers, employees and shareholders.”

“Individually, our businesses are exceptionally strong, but together, we will be one of the few companies in the world capable of delivering such a comprehensive set of end-to-end solutions to our combined customer base,” said Cavium Co-founder and Chief Executive Officer, Syed Ali. “Our potential is huge. We look forward to working closely with the Marvell team to ensure a smooth transition and to start unlocking the significant opportunities that our combination creates.”

The transaction is expected to generate at least $150 to $175 million of annual run-rate synergies within 18 months post close and to be significantly accretive to revenue growth, margins and non-GAAP EPS.

Transaction Structure and Terms 
Under the terms of the definitive agreement, Marvell will pay Cavium shareholders $40.00 in cash and 2.1757 Marvell common shares for each share of Cavium common stock. The exchange ratio was based on a purchase price of $80 per share, using Marvell’s undisturbed price prior to November 3, when media reports of the transaction first surfaced. This represents a transaction value of approximately $6 billion. Cavium shareholders are expected to own approximately 25% of the combined company on a pro forma basis.
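
As a quick, purely illustrative check of the arithmetic in the terms above:

    # Back-of-the-envelope check of the stated deal terms (illustrative only).
    cash_per_share = 40.00    # cash paid per Cavium share
    exchange_ratio = 2.1757   # Marvell shares issued per Cavium share
    purchase_price = 80.00    # stated purchase price per Cavium share

    implied_marvell_price = (purchase_price - cash_per_share) / exchange_ratio
    print(f"Implied undisturbed Marvell share price: ${implied_marvell_price:.2f}")  # about $18.38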

Marvell intends to fund the cash consideration with a combination of cash on hand from the combined companies and $1.75 billion in debt financing. Marvell has obtained commitments consisting of an $850 million bridge loan commitment and a $900 million committed term loan from Goldman Sachs Bank USA and Bank of America Merrill Lynch, in each case, subject to customary terms and conditions. The transaction is not subject to any financing condition.

The transaction is expected to close in mid-calendar 2018, subject to regulatory approval as well as other customary closing conditions, including the adoption by Cavium shareholders of the merger agreement and the approval by Marvell shareholders of the issuance of Marvell common shares in the transaction.

Management and Board of Directors 
Matt Murphy will lead the combined company, and the leadership team will have strong representation from both companies, including Marvell’s current Chief Financial Officer Jean Hu, Cavium’s Co-founder and Chief Operating Officer Raghib Hussain and Cavium’s Vice President of IC Engineering Anil Jain. In addition, Cavium’s Co-founder and Chief Executive Officer, Syed Ali, will continue with the combined company as a strategic advisor and will join Marvell’s Board of Directors, along with two additional board members from Cavium’s Board of Directors, effective upon closing of the transaction.

Advisors
Goldman Sachs & Co. LLC served as the exclusive financial advisor to Marvell and Hogan Lovells US LLP served as legal advisor. Qatalyst Partners LP and J.P. Morgan Securities LLC served as financial advisors to Cavium and Skadden, Arps, Slate, Meagher & Flom LLP served as legal advisor.

Marvell Preliminary Third Fiscal Quarter Results 
Based on preliminary financial information, Marvell expects revenue of $610 to $620 million and non-GAAP earnings per share to be between $0.32 and $0.34, above the mid-point of guidance provided on August 24, 2017. Further information regarding third fiscal quarter results will be released on November 28, 2017 at 1:45 p.m. Pacific Time.

Transaction Website 
For more information, investors are encouraged to visit http://MarvellCavium.transactionannouncement.com, which will be used by Marvell and Cavium to disclose information about the transaction and comply with Regulation FD.

Call/Webcast to Discuss Transaction 
Interested parties may join a conference call Monday, November 20, 2017 at 5:00 a.m. Pacific Time to discuss the transaction by dialing 1 (866) 547-1509 in the U.S. or +1 (920) 663-6208 internationally, with the conference ID 6386325. A webcast of the call can be accessed by visiting Marvell’s investor relations website. A replay will be available until December 4, 2017 by dialing 1 (800) 585-8367, replay ID 6386325.

About Marvell  
Marvell first revolutionized the digital storage industry by moving information at speeds never thought possible. Today, that same breakthrough innovation remains at the heart of the company’s storage, networking, and connectivity solutions. With leading intellectual property and deep system-level knowledge, Marvell’s semiconductor solutions continue to transform the enterprise, cloud, automotive, industrial, and consumer markets. To learn more, visit: www.marvell.com.

About Cavium  
Cavium, Inc., offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Data Center and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware-reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan. For more information, please visit: http://www.cavium.com.

Source: Marvell; Cavium, Inc.

The post Marvell to Acquire Cavium appeared first on HPCwire.

Paul K. Kearns Appointed Director of Argonne National Laboratory

Related News- HPC Wire - Mon, 11/20/2017 - 07:54

Nov. 20, 2017 — Paul K. Kearns has been appointed director of the U.S. Department of Energy’s Argonne National Laboratory. President Robert J. Zimmer announced the appointment in his capacity as chairman of the board of directors of UChicago Argonne LLC, which operates Argonne for the U.S. Department of Energy.

Paul K. Kearns

Kearns, who has served in multiple leadership roles in the national laboratory system and at the Department of Energy, is currently the interim director of Argonne. His appointment is effective immediately.

Kearns is the 14th director of Argonne, a multidisciplinary science and engineering research center that seeks scientific and engineering solutions to the grand challenges of our time: sustainable energy, a healthy environment and a secure nation.

“Paul has a strong record of leadership at laboratories across the country, and brings to Argonne a deep understanding of how to support and advance research and scientific discovery,” said Zimmer. “We look forward to working with him on an ambitious program of research in science and engineering that helps address critical challenges faced by society.”

The University of Chicago manages the laboratory for the Department of Energy through UChicago Argonne, LLC. Argonne was established in 1946 following the first sustained nuclear reaction conducted at the University as part of the Manhattan Project. Argonne was the first in a series of national laboratories funded to conduct scientific research in the nation’s interest.

Today, the laboratory’s mission is to lead discovery and to power innovation in a wide range of energy and scientific priorities—from fundamental research on physics, computing and chemistry to cutting-edge applications for batteries and energy storage, security and sustainable energy analysis, and innovation.

The laboratory works closely with UChicago in these areas as well as such emerging priorities as quantum computing, microbiome research, sensing and detecting, and water research.

Kearns will lead the laboratory as it pursues the next generation of science. Such work includes bringing the nation to the next level of supercomputing power—called “exascale”—by the year 2021, and new initiatives in materials science and chemistry. Argonne is in the process of upgrading the brightness and energy of the Advanced Photon Source, the laboratory’s powerful X-ray synchrotron, where thousands of scientists annually conduct research across a wide-range of fields.

Kearns joined Argonne in 2010 as its chief operations officer. During his career at Argonne, he has helped drive collaboration to advance the laboratory’s most critical initiatives and has expanded engagement with the University and its Institute for Molecular Engineering. He has streamlined operations for efficiency, improving execution and the delivery of services, and has worked to increase collaboration across the laboratory, strengthen relationships and raise the laboratory’s visibility with sponsors and partners.

Kearns’ appointment was informed by a panel of distinguished leaders and scientists, chaired by Eric D. Isaacs, UChicago executive vice president for research, innovation and national laboratories and a former director of Argonne. Kearns became interim director in January after then-Laboratory Director Peter Littlewood stepped down to assume a faculty position at the University of Chicago.

Prior to joining Argonne, Kearns served as the laboratory director of Idaho National Engineering and Environmental Laboratory and held a series of roles at Battelle Global Laboratory Operations. At Battelle, he conducted strategic planning and business development for research activities in energy, environment and national security.

Kearns holds a doctorate and a master’s degree in bionucleonics, and a bachelor’s degree in natural resources and environmental sciences, all from Purdue University. He is a fellow of the American Association for the Advancement of Science and a member of the American Nuclear Society and the Society for Conservation Biology.

Source: University of Chicago

The post Paul K. Kearns Appointed Director of Argonne National Laboratory appeared first on HPCwire.

New Technologies, Industry Luminaries, and Outstanding Top500 Results Highlight Intel’s SC17 Presence

Related News- HPC Wire - Mon, 11/20/2017 - 01:01

Last week, the rhetorical one-two punch of the Intel® HPC Developer Conference and Supercomputing 2017 offered global HPC aficionados new insights into the direction of advanced HPC technologies, and how those tools will empower the future of discovery and innovation. In case you missed it, here is a breakdown of all the action!

The Intel® HPC Developer Conference 2017 kicked off the week with 700+ attendees hearing industry luminaries share best practices and techniques to realize the full potential of the latest HPC tools and approaches. Intel’s Joe Curley, Gadi Singer, and Dr. Al Gara took the main stage and offered a thought-provoking keynote outlining the intertwined futures of HPC and AI. As individuals who are helping architect the future of HPC, the three speakers discussed the adaptation of AI into workflows, the technological opportunities to enable it, and the driving forces behind the future range of architectures, systems and solutions. Attendees also gained hands-on experience with Intel platforms, obtained insights to maximize software efficiency and advance the work of researchers and organizations of all sizes, and networked with peers and industry experts. Watch the Intel HPC Developer Conference website as we publish the videos and multiple technical sessions over the next few weeks.

Then with the kickoff of SC17, Intel announced outstanding industry acceptance results for Intel® Xeon® Scalable processors and Intel® Omni-Path Architecture (Intel® OPA). Intel also provided additional insights into AI, machine learning and the latest HPC technologies.

Intel detailed how Intel® Xeon® Scalable processors have delivered the fastest adoption rate of any new Intel Xeon processor on the Top500 [1]. The latest processor surpasses the previous generation’s capability with a 63% improvement in performance across 13 common HPC applications, and up to double the number of FLOPS per clock [2]. On the November 2017 Top500 list, Intel-powered supercomputers accounted for six of the top 10 systems and a record high of 471 out of 500 systems. Also, Intel powered all 137 new systems added to the November list.

To date, 18 HPC systems utilizing the new processors appear on November 2017’s Top500 list of the world’s fastest supercomputers, together delivering total performance surpassing 25 petaFLOPS. Other organizations using the new Intel Xeon Scalable processors at the heart of their HPC systems report substantial boosts in system speed, resulting in 110 world performance records [1].

In addition to the processors, Intel OPA momentum continued with systems using Intel OPA delivering a combined 80 petaFLOPS, surpassing the June 2017 Top500 numbers by nearly 20%. Among those organizations using 100Gb fabric for their Top500 HPC systems, Intel OPA now connects almost 60 percent of nodes [3].

The demos in Intel’s booth allowed attendees to see how the power of these technologies enables advancements across the HPC industry. Taking center stage at the Intel’s booth was a virtual-reality motorsports demonstration where visitors experienced the power of advanced technology which will enable the next generation of vehicles.

Attendees seeking a deeper dive into the technologies joined “Nerve Center Sessions” at the Intel pavilion where they gained cutting-edge insights from industry luminaries and joined the presenters for small table discussions afterwards.

With recent AI advancements, are humans the only ones making “intelligent” decisions? Intel Fellow Pradeep Dubey, who is also the director of Intel’s Parallel Computing Lab, presented “Artificial Intelligence and the Virtuous Cycle of Compute.” He took the opportunity to explain how the convergence of Big Data, AI, and algorithmic advances is transforming the relationship between humans and HPC systems.

In case you missed the conference this year, you can get more detail from Intel’s SC17 page and follow Intel on Twitter @intelHPC for ongoing insights. And to learn more about the latest Intel HPC and AI technologies, check out www.intel.com/hpc.

 

~~~~~~~~~~

1 https://newsroom.intel.com/news/sc17-intel-boasts-record-breaking-top500-position-fastest-ramp-new-xeon-processor-list/

2 Up to 1.63x Gains based on Geomean of Weather Research and Forecasting – Conus 12Km, HOMME, LSTC LS-DYNA Explicit, INTES PERMAS V16, MILC, GROMACS water 1.5M_pme, VASP Si256, NAMD stmv, LAMMPS, Amber GB Nucleosome, Binomial option pricing, Black-Scholes, Monte Carlo European options. Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance. Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance/datacenter.

[3] Intel estimate based on Top500 data and other public sources.

 

The post New Technologies, Industry Luminaries, and Outstanding Top500 Results Highlight Intel’s SC17 Presence appeared first on HPCwire.

SC Bids Farewell to Denver, Heads to Dallas for 30th

Related News- HPC Wire - Fri, 11/17/2017 - 23:23

After a jam-packed four-day expo and an intensive six-day technical program, SC17 has wrapped up another successful event that brought nearly 13,000 visitors to the Colorado Convention Center in Denver for the largest HPC conference in the world. In keeping with the conference theme of HPC Connects, General Chair Bernd Mohr of the Juelich Supercomputing Centre, the event’s first international chair, highlighted strong global attendance: 2,800 international attendees from 71 countries, including 122 international exhibitors.

Ask attendees what they liked most about the 29th annual Supercomputing Conference and you’ll get a lot of folks geeking out over the technology, but ultimately it’s the community that keeps them coming back to connect with old friends and meet new ones. Other fan favorites that had attendees buzzing this year were the electrifying student cluster competition (with first-ever planned power shutoffs!), record-setting SCinet activities, impressive Gordon Bell results and an out-of-this-world keynote, plus a lineup of 12 invited talks, including a presentation by the supercomputing pioneer himself, Gordon Bell.

On Tuesday morning, SC17 Chair Mohr welcomed thousands of SC attendees to the conference center ballroom to share conference details and introduce the SC17 keynote, “Life, the Universe and Computing: The Story of the SKA Telescope.” In front of a stunning widescreen visual display, Professor Philip Diamond, director general of the international Square Kilometre Array (SKA) project, and Dr. Rosie Bolton, SKA Regional Centre project scientist, described the SKA’s vision and strategy to map and study the entire sky in greater detail than ever before. Everyone we spoke with was enthralled by the keynote, and several long-time attendees said it was the best one yet. Read more about it in our feature coverage, “HPC Powers SKA Efforts to Peer Deep into the Cosmos.”

In his introduction, Mohr pointed out that the discovery of gold about five miles from the conference site in 1858 led to the founding of Denver. “[It] is fitting,” said Mohr, “because today high performance computing is at the forefront of a new gold rush, a rush to discovery using an ever-growing flood of information and data. Computing is now essential to science discovery like never before. We are the modern pioneers pushing the bounds of science for the betterment of society.”

One of the marvels of the show each year is SCinet, for one week the fastest, most powerful scientific network in the world. This year SCinet broke multiple records, achieving 3.63 Tbps of bandwidth as well as the most floor fiber laid and the most circuits ever. SCinet takes three weeks to set up and operates with over $66 million in loaned state-of-the-art equipment and software. In the weeks ahead, we will have more coverage of how this fascinating feat is pulled off, as well as the ground-breaking networking research it enables.

The 50th edition of the Top500, covered in-depth here, was announced on Monday. The list will go down in history as the one on which China pulled ahead in multiple dimensions, not just with the number one system (which China has claimed for ten consecutive lists), but with the highest number of systems and the largest share of total flops.

A roundup of benchmark winners from SC17:

Top500: China’s Sunway TaihuLight system (93 petaflops)

Green500: Japan’s RIKEN ZettaScaler Shoubu system B (17 gigaflops/watt)

HPCG: Japan’s RIKEN K computer (0.6027 petaflops)

Graph500: Japan’s RIKEN K computer

For the second year, China won the coveted Gordon Bell Prize, the Nobel Prize of supercomputing, presented by the Association for Computing Machinery each year in association with SC. The 12-member Chinese team employed the world’s fastest supercomputer, Sunway TaihuLight, to simulate the 20th century’s most devastating earthquake, which occurred in Tangshan, China, in 1976. The research project, “18.9-Pflops Nonlinear Earthquake Simulation on Sunway TaihuLight: Enabling Depiction of 18-Hz and 8-Meter Scenarios,” achieved greater efficiency than had previously been attained running similar programs on the Titan and TaihuLight supercomputers. You can read about the important practical implications of this work in the ACM writeup.

SC17 Student Cluster Champs: Nanyang Technological University, Singapore (Source: @SCCompSC)

In an awards ceremony Thursday, the team from Nanyang Technological University took the gold in the Student Cluster Competition. With its dual-node Intel Xeon 2699-based cluster accelerated with 16 Nvidia V100 GPUs, the team from Singapore pulled off a triple play, also setting record runs for both Linpack and HPCG. At 51.77 teraflops, the team’s SC17 Linpack score beat the previous record by nearly 40 percent, and its HPCG score of 2,055.85 was a nearly 50 percent improvement over the previous record-holder. Deserving honorable mention among this year’s competitors is the first all-high school team, from William Henry Harrison High. Check back next week for more extensive coverage of the contest and a rundown of the winning teams from our roving contest reporter Dan Olds.

Ralph A. McEldowney

The HPCwire editorial team would like to congratulate everyone on their achievements this year. And we applaud our HPCwire Readers’ and Editors’ choice award winners, a diverse and exceptional group of organizations and people who are on the cutting-edge of scientific and technical progress. (The photo gallery of award presentations can be viewed on Twitter.)

We look forward to seeing many of you in June at the International Supercomputing Conference in Frankfurt, Germany, and then in Dallas for SC18, November 11-16, when SC will be celebrating its 30th anniversary. The SC18 website is already live and the golden key has been handed to next year’s General Chair Ralph A. McEldowney of the US Department of Defense HPC Modernization Program. If you’re partial to the Mile High City, you’re in luck because SC will be returning to Denver in 2019 under the leadership of University of Delaware’s Michela Taufer, general chair of the SC19 conference.

Stay tuned in the coming weeks as we release our SC17 Video Interview series.

The post SC Bids Farewell to Denver, Heads to Dallas for 30th appeared first on HPCwire.

How Cities Use HPC at the Edge to Get Smarter

Related News- HPC Wire - Fri, 11/17/2017 - 20:30

Cities are sensoring up, collecting vast troves of data that they’re running through predictive models and using the insights to solve problems that, in some cases, city managers didn’t even know existed.

Speaking at SC17 in Denver this week, a panel of smart city practitioners shared the strategies, techniques and technologies they use to understand their cities better and to improve the lives of their residents. With data coming in from all over the urban landscape and worked over by machine learning algorithms, Debra Lam, managing director for smart cities & inclusive innovation at Georgia Tech, who works on strategies for Atlanta and the surrounding area, said, “We’ve embedded research and development into city operations; we’ve formed a matchmaking exercise between the needs of the city and the most advanced research techniques.”

Panel moderator Charlie Catlett, director of the Urban Center for Computation and Data at Argonne National Laboratory, who works on smart city strategies for Chicago, said that the scale of data involved in complex, long-term modeling will require nothing less than the most powerful supercomputers, including the next generation of exascale systems under development within the Department of Energy. The vision for exascale, he said, is to build “a framework for different computation models to be coupled together in multiple scales to look at long-range forecasting for cities.”

“Let’s say the city is thinking about taking 100 acres and spending a few hundred million dollars to build some new things and rezone and maybe augment public transit,” Catlett said. “How do you know that that plan is actually going to do what you think it’s going to do? You won’t until 10-20 years later. But if you forecast using computation models you can at least eliminate some of the approaches that would be strictly bad.”

With both Amazon and Microsoft in its metropolitan area, it’s not surprising that Seattle is doing impressive smart city work. Michael Mattmiller, CTO of Seattle, said good planning is necessary for a city expected to grow by 32 percent. Mattmiller said 75 percent of the new residents moving to Seattle are coming for jobs in the technology sector, and they will tend to have high expectations for how their city uses technology.

Some of Seattle’s smart city tactics are relatively straightforward, if invaluable, methods for city government to open the lines of communication with residents and to respond to problems faster. For example, the city developed an app called “Find It, Fix It” in which residents who encounter broken or malfunctioning city equipment (broken street light, potholes, etc.) are encouraged to take a cell phone picture and send a message to the city with a description of the problem and its location.

Of a more strategic nature is Seattle’s goal of becoming carbon neutral by 2050. The key challenges are brought on by the 100,000 people who come to the downtown areas each day for their jobs. The city’s Office of Sustainability collects data on energy consumption from sensors placed on HVAC and lighting systems in office buildings and retail outlets and has developed benchmarks for comparing energy consumption on a per-building basis, notifying building owners if they are above or below their peer group.
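The benchmarking logic Mattmiller describes is straightforward to picture. Below is a minimal, hypothetical sketch (invented building names, figures and peer groups, not Seattle’s actual system or data) of comparing each building’s energy-use intensity against the median of its peer group and flagging those above it.

```python
# Hypothetical peer-group energy benchmarking sketch; buildings and figures are invented
# for illustration and do not represent Seattle's actual program or data.
from collections import defaultdict
from statistics import median

# (building, peer group, annual energy-use intensity in kBtu per square foot)
buildings = [
    ("Office A", "office", 78.0),
    ("Office B", "office", 52.5),
    ("Office C", "office", 61.0),
    ("Retail A", "retail", 95.0),
    ("Retail B", "retail", 88.0),
]

by_group = defaultdict(list)
for _, group, eui in buildings:
    by_group[group].append(eui)

benchmarks = {group: median(values) for group, values in by_group.items()}

for name, group, eui in buildings:
    status = "above" if eui > benchmarks[group] else "at or below"
    print(f"{name}: {eui:.1f} kBtu/sqft, {status} its {group} peer median of {benchmarks[group]:.1f}")
```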

Mattmiller said Amazon and Microsoft helped build analytics algorithms that run on Microsoft Azure public cloud. The program is delivering results; Mattmiller said energy consumption is down, with a reduction of 27 million tons of carbon.

Seattle also analyzed weather data and rainfall amounts, discovering that the city has distinct microclimates, with some sections of the city getting as much as eight more inches of rain (the total annual amount of rain in Phoenix) per year than others. This has led to the city issuing weather alerts to areas more likely to have rain events and to send repair and maintenance trucks to higher risk areas.

Transportation, of course, is a major source of pollution, carbon and frustration (30 percent of urban driving is spent looking for parking spaces). Seattle trolled residents for ideas and held a hackathon that produced 14 prototype solutions, including one from a team of Microsoft employees who bike to work: they developed a machine learning program that predicts the availability of space on the bike racks attached to city buses, “an incredibly clever solution,” Mattmiller said.

In Chicago, Pete Beckman, co-director of the Northwestern-Argonne Institute of Science and Engineering at Argonne National Laboratory, helped develop the sensors placed throughout the city in its Array of Things project. While most sensors used by cities are big, expensive and sparse, Beckman said the project managers wanted to “blanket the city with sensors” that would collect a broad variety of data and also have significant computational power: a “programmable sensor” that doesn’t just report data but can run programs written for the device itself. They also wanted it to be attractive, so students at the Art Institute of Chicago were recruited to help design the enclosure.

“This becomes a high performance computing problem,” Beckman said. “Why do you need to run programs at the edge? Why run parallel computing out there? Because the amount of data we want to analyze would swamp any network. The ability to have 4K cameras, to have hyperspectral imaging, to have audio, all that (data) can’t be sent back to the data center for processing; it has to be processed right there in a small, parallel supercomputer. Whether it’s OpenCV (the Open Source Computer Vision Library), Caffe or another deep learning framework like TensorFlow, we have to run the computation out at the edge.”
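Beckman’s point about pushing computation to the edge can be illustrated with a minimal sketch: the edge node analyzes frames locally and emits only tiny event records instead of shipping raw video back to a data center. The example below uses OpenCV background subtraction as a stand-in for the real analysis; it is not the Array of Things software, and the camera index, frame budget and 1% motion threshold are assumptions made purely for illustration.

```python
# Minimal sketch of edge-side video analysis: emit small events, never ship raw frames.
# Illustrative only; not the Array of Things stack. Camera index and thresholds assumed.
import cv2

cap = cv2.VideoCapture(0)   # local camera on the edge device (index 0 is an assumption)
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

for frame_id in range(300):                    # a short, bounded run for the sketch
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)             # foreground mask: pixels that changed
    moving = cv2.countNonZero(mask)
    if moving > 0.01 * mask.size:              # arbitrary 1% "motion event" threshold
        # Only this tiny record would leave the device, never the raw (e.g. 4K) frame.
        print({"frame": frame_id, "moving_pixels": int(moving)})

cap.release()
```

The same pattern holds whatever the model is: heavy inference runs on the device, and only compact results cross the network.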

One scenario outlined was of a sensor detecting an out-of-control vehicle approaching a busy intersection; the sensor picks up on the impending danger and delays the pedestrian “WALK” sign and turns all the traffic lights in the intersection red. These are calculations that require HPC-class computing at the street corner.

Chicago is using its Array of Things sensors in other critical roles as well, such as monitoring floods in real time, tracking pedestrian, bicycle, car and truck traffic, and predictively modeling accidents.

“The questions for us in the parallel computing world,” Beckman said, “are how do we take that structure on our supercomputers and scale it in a way so we have a virtuous loop to do training of large-scale data on the supercomputer and create models that are inference-based, that are quick and fast, that can be pushed out to parallel hardware accelerated out on the edge? The Array of Things project is working on that now.”

The post How Cities Use HPC at the Edge to Get Smarter appeared first on HPCwire.

SC17 Keynote – HPC Powers SKA Efforts to Peer Deep into the Cosmos

Related News- HPC Wire - Fri, 11/17/2017 - 19:06

This week’s SC17 keynote – Life, the Universe and Computing: The Story of the SKA Telescope – was a powerful pitch for the potential of Big Science projects that also showcased the foundational role of high performance computing in modern science. It was also visually stunning, as images of stars and galaxies and of tiny telescopes and giant telescopes streamed across the high-definition screen that extended the length of the Colorado Convention Center ballroom’s stage. One was reminded of astronomer Carl Sagan narrating the Cosmos TV series.

SKA, you may know, is the Square Kilometre Array project, run by an international consortium and intended to build the largest radio telescope in the world; it will be 50 times more powerful than any other radio telescope today. The largest today is ALMA (the Atacama Large Millimeter/submillimeter Array), located in Chile, which has 66 dishes.

SKA will be sited in two locations, South Africa and Australia. The two keynoters, Philip Diamond, Director General of SKA, and Rosie Bolton, SKA Regional Centre Project Scientist and Project Scientist for the international engineering consortium designing the high performance computers, took turns outlining radio astronomy history and SKA’s ambition to build on it. Theirs was a swiftly moving talk, both entertaining and informative, with the flashing visuals adding to the impact.

Their core message: this massive new telescope will open a new window on astrophysical phenomena and create a mountain of data for scientists to work on for years. SKA, say Diamond and Bolton, will help clarify the early evolution of the universe, detect gravitational waves by their effect on pulsars, shed light on dark matter, produce insight around cosmic magnetism, create detailed, accurate 3D maps of galaxies, and much more. It could even play a SETI-like role in the search for extraterrestrial intelligence.

“When fully deployed, SKA will be able to detect TV signals, if they exist, from the nearest tens, maybe 100, stars and will be able to detect airport radars across the entire galaxy,” said Diamond, in response to a question. SKA is creating a new intergovernmental organization to run the observatory, “something like CERN or the European Space Agency, and [we] are now very close to having this process finalized,” said Diamond.

Indeed this is exciting stuff. It is also incredibly computationally intensive. Think about an army of dish arrays and antennas capturing signals 24×7, moving them over high-speed networks to one of two digital signal processing facilities (one for each location), and then on to two “science data processor” centers (think big computers). And let’s not forget that the data must be made available to scientists around the world.

A few of the data points flashed across the stage during the keynote hinted at the scale involved; the context will become clearer later.

It’s a grand vision and there’s still a long way to go; SKA, like all Big Science projects, won’t happen overnight. SKA was first conceived in the 1990s at the International Union of Radio Science (URSI), which established the Large Telescope Working Group to begin a worldwide effort to develop the scientific goals and technical specifications for a next-generation radio observatory. The idea arose to create a “hydrogen array” able to detect the hydrogen radiofrequency emission (~1420 MHz). A square kilometer of collecting area was required to see back into the early universe. In 2011 those efforts consolidated in a not-for-profit company that now has ten member countries (link to brief history of SKA). The U.S., which did participate in early SKA efforts, chose not to join the consortium at the time.

Although first conceived as a hydrogen array, Diamond emphasized, “With a telescope of that size you can study many things. Even in its early stages SKA will be able to map galaxies early in the universe’s evolution. When fully deployed it will conduct the fullest galaxy mapping in 3D, encompassing up to one million individual galaxies and covering 12.5 billion years of cosmic history.”

A two-phase deployment is planned. “We’re heading full steam towards critical design reviews next year,” said Diamond. Construction of the first phase is expected to begin in 2019. So far €200 million has been committed for design, along with “a large fraction” of the €640 million required for first-phase construction. Clearly there are technology and funding hurdles ahead. Diamond quipped that if the U.S. were to join SKA and pony up, say, $2 billion, they would ‘fix’ the spelling of kilometre to kilometer.

There will actually be two telescopes, one in South Africa about 600 km north of Cape Town and another roughly 800 km north of Perth in Western Australia. They are being located in remote regions to reduce radiofrequency interference from human activities.

“In South Africa we are going to be building close to 200 dishes, 15 meters in diameter, and the dishes will be spread over 150 km. They [will operate] over a frequency range of 350 MHz to 14 GHz. In Australia we will build 512 clusters, each of 256 antennas. That means a total of over 130,000 2-meter-tall antennas spread over 65 km. These low-frequency antennas will be log-periodic dipoles and will cover the frequency range 50 to 350 MHz. It is this array that will be the time machine that observes hydrogen all the way back to the dawn of the universe.”

Pretty cool stuff. Converting those signals into data is a mammoth task, and SKA plans two different types of processing center for each location. “The radio waves induce voltages in the receivers that capture them, and modern technology allows us to digitize them to higher precision than ever before. From there optical fibers transmit the digital data from the telescopes to what we call central processing facilities, or CPFs. There’s one for each telescope,” said Bolton.

Using a variety of technologies, including “some exciting FPGA, CPU-GPU, and hybrid” designs, the CPFs are where the signals are combined. Great care must be taken first to synchronize the data so it enters the processing chain exactly when it should, accounting for the fact that radio waves from space reach one antenna before they reach another. “We need to correct that phase offset down to the nanosecond,” said Bolton.

Once that’s done, a Fourier transform is applied to the data. “It essentially decomposes a function of time into the frequencies that make it up; it moves us into the frequency domain. We do this with such precision that the SKA will be able to process 65,000 different radio frequencies simultaneously,” said Diamond.
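At its core, the step Diamond describes is a fast Fourier transform applied to blocks of digitized voltage samples, turning a time series into a set of frequency channels. The toy sketch below uses synthetic data, a made-up sample rate and far fewer channels than the SKA’s 65,000; it illustrates channelization in general, not the SKA’s actual signal chain.

```python
# Toy channelization: split a sampled voltage stream into frequency channels with an FFT.
# Synthetic data and invented parameters; the SKA's real channelizer is far more elaborate.
import numpy as np

sample_rate = 1_000_000          # 1 MHz sampling, assumed for illustration
block_len = 1024                 # samples per block (rfft yields block_len/2 + 1 channels)
t = np.arange(4 * block_len) / sample_rate

# Synthetic "voltage": two tones plus noise
voltage = (np.sin(2 * np.pi * 120_000 * t)
           + 0.5 * np.sin(2 * np.pi * 310_000 * t)
           + 0.1 * np.random.randn(t.size))

# Break the stream into blocks and transform each block into the frequency domain
blocks = voltage.reshape(-1, block_len)
spectra = np.fft.rfft(blocks, axis=1)          # one spectrum per block
power = (np.abs(spectra) ** 2).mean(axis=0)    # average power per frequency channel

freqs = np.fft.rfftfreq(block_len, d=1 / sample_rate)
peaks = freqs[np.argsort(power)[-2:]]
print("strongest channels near (Hz):", sorted(peaks.round().tolist()))
```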

Once the signals have been separated into frequencies, they are processed in one of two ways. “We can either stack the signals from the various antennas together in what we call time-domain data. Each stacking operation corresponds to a different direction in the sky; we’ll be able to look at 2,000 such directions simultaneously. This time-domain processing detects repeating objects such as pulsars, or one-off events like gamma-ray explosions. If we do find an event, we are planning to store the raw voltage signals at the antennas for a few minutes so we can go back in time and investigate them to see what happened,” said Bolton.
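The “stacking” Bolton describes is essentially delay-and-sum beamforming: for each candidate sky direction, the per-antenna signals are aligned for the geometric delay to that direction and summed, so emission from that direction adds coherently while everything else averages down. The sketch below is a one-dimensional, narrowband toy with invented geometry and only four directions rather than 2,000; it is not SKA code.

```python
# Toy narrowband delay-and-sum beamforming for a 1-D line of antennas. Invented geometry
# and source; purely illustrative of the "stacking toward a direction" idea.
import numpy as np

c = 3.0e8                                   # speed of light, m/s
freq = 150e6                                # observing frequency, Hz (assumed)
wavelength = c / freq
n_ant = 32
positions = np.arange(n_ant) * 2.0          # antennas every 2 m along a line

true_dir = np.deg2rad(20.0)                 # direction of the simulated source
n_samples = 2000
signal = np.random.randn(n_samples)         # the source's waveform

# Geometric delay per antenna expressed as a phase shift (narrowband approximation)
def steering(theta):
    return np.exp(-2j * np.pi * positions * np.sin(theta) / wavelength)

# Simulated antenna voltages: phase-shifted copies of the source plus noise
voltages = np.outer(steering(true_dir), signal) + 0.5 * (
    np.random.randn(n_ant, n_samples) + 1j * np.random.randn(n_ant, n_samples))

# Form beams toward a handful of candidate directions and compare beam power
for deg in (0, 10, 20, 30):
    theta = np.deg2rad(deg)
    beam = (np.conj(steering(theta))[:, None] * voltages).sum(axis=0)  # align, then stack
    print(f"beam toward {deg:2d} deg: power = {np.mean(np.abs(beam) ** 2):.1f}")
```

Only the beam pointed at the simulated source shows a large power, which is exactly what lets the time-domain pipeline watch thousands of directions at once.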

Researchers can use this time-domain data to measure the arrival times of pulsar signals (pulsars are a bit like cosmic lighthouses) with great accuracy and to detect the drift, if there is one, as a gravitational wave passes through.

“We can also use these radio signals to make images of the sky. To do that we take the signals from each pair of antennas, each baseline, and effectively multiply them together, generating data objects we call visibilities. Imagine it being done for 200 dishes and 512 groups of antennas; that’s 150,000 baselines and 65,000 different frequencies. That makes up to 10 billion different data streams. Doing this is a data-intensive process that requires around 50 petaflops of dedicated digital signal processing.”
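The figures in that quote follow from simple pair counting: an array of N elements has N(N-1)/2 baselines, and every baseline is correlated in every frequency channel. The sketch below checks that arithmetic and forms a toy visibility for a single antenna pair; the element counts come from the quote, while the toy signal and its phase offset are invented for illustration.

```python
# Back-of-the-envelope check of the baseline and data-stream counts quoted in the keynote,
# plus a toy "visibility" for one antenna pair. Purely illustrative.
import numpy as np

def n_baselines(n_elements):
    """Number of distinct antenna pairs in an array of n_elements."""
    return n_elements * (n_elements - 1) // 2

total_baselines = n_baselines(200) + n_baselines(512)   # dishes + antenna clusters
n_freq = 65_000

print(f"baselines: {total_baselines:,}")                          # ~150,000
print(f"baseline x frequency streams: {total_baselines * n_freq:,}")  # ~10 billion

# Toy visibility: time-average of one antenna's voltage times the conjugate of another's
phase = 0.7                                   # pretend geometric phase difference (radians)
v1 = np.random.randn(10_000) + 1j * np.random.randn(10_000)
v2 = v1 * np.exp(-1j * phase) + 0.1 * (np.random.randn(10_000) + 1j * np.random.randn(10_000))
visibility = np.mean(v1 * np.conj(v2))
print("toy visibility phase (rad):", round(float(np.angle(visibility)), 2))  # ~0.7
```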

Signals are processed inside these central processing facilities in a way that depends on the science the teams want to do with them. Once processed, the data are then sent via more fiber optic cables to the Science Data Processors, or SDPs. Two of these “great supercomputers” are planned, one in Cape Town for the dish array and one in Perth for the low-frequency antennas.

“We have two flavors of data within the science processor. In the time domain we’ll be panning for astrophysical gold, searching over 1.5 million candidate objects every ten minutes, sniffing out real astrophysical phenomena such as pulsar signals or flashes of radio light,” said Diamond. The expectation is a 10,000-to-1 ratio of negative to positive events. Machine learning will play a key role in finding the “gold.”
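To give a flavor of that sifting problem, and without claiming anything about the SKA’s actual pulsar-search pipeline, the sketch below trains a classifier on heavily imbalanced synthetic “candidates” and ranks them by score so the few real events can be pushed toward the top of the review queue. The feature set, the imbalance ratio (softened to roughly 1,000:1 to keep the toy tractable) and the model choice are all assumptions.

```python
# Toy candidate-sifting sketch on synthetic, heavily imbalanced data. Illustrative only;
# it says nothing about the SKA's actual machine learning pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# ~1,000 negatives per positive here; the keynote cites closer to 10,000:1
X, y = make_classification(
    n_samples=100_000, n_features=12, n_informative=6,
    weights=[0.999, 0.001], flip_y=0.0, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)

# Rank test candidates by score; only the most promising go on for closer inspection.
scores = clf.predict_proba(X_test)[:, 1]
top = np.argsort(scores)[::-1][:200]
print("real events in the test set:", int(y_test.sum()))
print("real events among the top 200 ranked candidates:", int(y_test[top].sum()))
```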

Making sense of the 10 billion incoming visibility data streams poses the greatest computational burden, emphasized Bolton: “This is really hard because inside the visibilities (data objects) the sky and the antenna responses are all jumbled. We need to do another massive Fourier transform to get from the visibility space, which depends on the antenna separations, to sky planes. Ultimately we need to develop self-consistent models not only of the sky that generated the signals but also of how each antenna was behaving and even how the atmosphere was changing during the data gathering.

“We can’t do that in one fell swoop. Instead we’ll have several iterations trying to find the calibration parameters and source positions and brightnesses.” With each iteration, bit by bit, fainter and fainter signals emerge from the noise. “Every time we do another iteration we apply different calibration techniques and we improve a lot of them, but we can’t be sure when this process is going to converge, so it is going to be difficult,” said Bolton.
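That iterate-until-it-converges description can be made concrete with a stripped-down gain-calibration loop. In the toy below, synthetic visibilities are corrupted by unknown per-antenna complex gains; the loop alternately fits each antenna’s gain against the others until the “observed” data agree with a sky model that, unlike in real life, is assumed to be known perfectly. Real SKA calibration must also solve for the sky and the atmosphere, which is what makes convergence so uncertain.

```python
# Toy per-antenna gain calibration against a known sky model. Illustrative only; real
# interferometric calibration also solves for the sky and the atmosphere.
import numpy as np

rng = np.random.default_rng(1)
n_ant = 20

# Model visibilities for an ideally calibrated array (Hermitian, zero on the diagonal)
model = rng.normal(size=(n_ant, n_ant)) + 1j * rng.normal(size=(n_ant, n_ant))
model = (model + model.conj().T) / 2
np.fill_diagonal(model, 0)

# Unknown per-antenna gains corrupt the measurement, plus a little (Hermitian) noise
true_gains = rng.uniform(0.8, 1.2, n_ant) * np.exp(1j * rng.uniform(-1, 1, n_ant))
noise = 0.01 * (rng.normal(size=model.shape) + 1j * rng.normal(size=model.shape))
observed = np.outer(true_gains, true_gains.conj()) * model + (noise + noise.conj().T) / 2

# Iterative solution: for each antenna, a least-squares gain fit holding the others fixed
gains = np.ones(n_ant, dtype=complex)
for _ in range(50):
    z = gains[None, :].conj() * model                  # what each baseline should look like
    update = (observed * z.conj()).sum(axis=1) / (np.abs(z) ** 2).sum(axis=1)
    gains = 0.5 * (gains + update)                     # damping keeps the update stable

# Gains are only determined up to a global phase; align it before comparing
gains *= np.exp(1j * (np.angle(true_gains[0]) - np.angle(gains[0])))
print("max gain error:", float(np.max(np.abs(gains - true_gains))))
```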

A typical SKA map, she said, will probably contain hundreds of thousands of radio sources. The incoming images are about 10 petabytes in size; output 3D images are 5,000 pixels on each axis and 1 petabyte in size.

Distributing this data to scientists for analysis is another huge challenge. The plan is to distribute data via fiber to SKA regional centers. “This is another real game changer that the SKA, CERN, and a few other facilities are bringing about. Scientists will use the computing power of the SKA regional centers to analyze these data products,” said Diamond.

The keynote was a wowing, multimedia presentation, and warmly received by attendees. It bears repeating that many issues remain and schedules have slipped slightly, but it is still a stellar example of Big Science, requiring massively coordinated international efforts, and underpinned with enormous computing resources. Such collaboration is well aligned with SC17’s theme – HPC Connects.

Link to video recording of the presentation: https://www.youtube.com/watch?time_continue=2522&v=VceKNiRxDBc

The post SC17 Keynote – HPC Powers SKA Efforts to Peer Deep into the Cosmos appeared first on HPCwire.

Argonne to Install Comanche System to Explore ARM Technology for HPC

Related News- HPC Wire - Fri, 11/17/2017 - 18:05

Nov. 17, 2017 — The U.S. Department of Energy’s (DOE) Argonne National Laboratory is collaborating with Hewlett Packard Enterprise (HPE) to provide system software expertise and a development ecosystem for a future high-performance computing (HPC) system based on 64-bit ARM processors.

ARM is a RISC-based processor architecture that has dominated the mobile computing space for years. That dominance is due to how tightly ARM CPUs can be integrated with other hardware, such as sensors and graphics coprocessors, and also because of the architecture’s power efficiency. ARM’s capacity for HPC workloads, however, has been an elusive target within the industry for years.

“Inducing competition is a critical part of our mission and our ability to meet our users’ needs.” – Rick Stevens, associate laboratory director for Argonne’s Computing, Environment and Life Sciences Directorate.

Several efforts are now underway to develop a robust HPC software stack to make ARM processors capable of supporting the multithreaded floating-point workloads that are typically required by high-end scientific computing applications.

HPE, a California-based technology company and provider of high-end IT services and hardware, is leading a collaboration to accelerate ARM chip adoption for high-performance computing applications. Argonne is working with HPE to evaluate early versions of chipmaker Cavium’s 64-bit ARM ThunderX2 processors for the ARM ecosystem. Argonne is interested in evaluating the ARM ecosystem as a cost-effective and power-efficient alternative to x86 architectures based on Intel CPUs, which currently dominate the high-performance computing market.

To support this work, Argonne will install a 32-node Comanche Wave prototype ARM64 server platform in its testing and evaluation environment, the Joint Laboratory for System Evaluation, in early 2018. Argonne researchers from various computing divisions will run applications on the ecosystem and provide performance feedback to HPE and partnering vendors.

Argonne’s advanced computing ecosystem, chiefly its Argonne Leadership Computing Facility, a DOE Office of Science User Facility, supports a research community whose work requires cutting-edge computational resources — some of the most powerful in the world. For more than a decade, Argonne has been partnering with industry vendor IBM, and more recently, Intel and Cray, to produce custom architectures optimized for scientific and engineering research. These architectures not only feature custom processor systems, but novel interconnects, software stacks and solutions for power and cooling, among other things.

“We have to build the pipeline for future systems, too,” said Rick Stevens, associate laboratory director for Argonne’s Computing, Environment and Life Sciences Directorate. “Industry partnerships are critical to our ability to do our job — which is to provide extreme-scale computing capabilities for solving some of the biggest challenges facing the world today. Inducing competition is a critical part of our mission and our ability to meet our users’ needs.”

“By initiating the Comanche collaboration, HPE brought together industry partners and leadership sites like Argonne National Laboratory to work in a joint development effort,” said HPE’s Chief Strategist for HPC and Technical Lead for the Advanced Development Team Nic Dubé. “This program represents one of the largest customer-driven prototyping efforts focused on the enablement of the HPC software stack for ARM. We look forward to further collaboration on the path to an open hardware and software ecosystem.”

Argonne researchers may eventually contribute to development of the ARM system’s compilers, which are the programs that translate application code into instructions interpreted by the processor. In the past, the difficulty and expense of compiler development have impeded the adoption of alternative processor architectures by high-performance computing applications. Such obstacles are now mitigated by robust open source compiler projects, such as LLVM, which Argonne contributes to actively.

The Comanche collaboration will be presenting at different venues and showcasing a full rack of next generation ARM servers at the 2017 International Conference for High Performance Computing, Networking, Storage and Analysis (SC17) this week (booth #494).

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.

Source: Argonne National Laboratory

The post Argonne to Install Comanche System to Explore ARM Technology for HPC appeared first on HPCwire.

Julia Computing Wins RiskTech100 2018 Rising Star Award

Related News- HPC Wire - Fri, 11/17/2017 - 14:09

NEW YORK, Nov. 17, 2017 — Julia Computing was selected by Chartis Research as a RiskTech Rising Star for 2018.

The RiskTech100 Rankings are acknowledged globally as the most comprehensive and independent study of the world’s major players in risk and compliance technology. Based on nine months of detailed analysis by Chartis Research, the RiskTech100 Rankings assess the market effectiveness and performance of firms in this rapidly evolving space.

Rob Stubbs, Chartis Research Head of Research, explains, “We interviewed thousands of risk technology buyers, vendors, consultants and systems integrators to identify the leading RiskTech firms for 2018. We know that risk analysis, risk management and regulatory requirements are increasingly complex and require solutions that demand speed, performance and ease of use. Julia Computing has been developing next-generation solutions to meet many of these requirements.”

For example, Aviva, Britain’s second-largest insurer, selected Julia to achieve compliance with the European Union’s new Solvency II requirements.  According to Tim Thornham, Aviva’s Director of Financial Modeling Solutions, “Solvency II compliant models in Julia are 1,000x faster than our legacy system, use 93% fewer lines of code and took 1/10 the time to implement.” Furthermore, the server cluster size required to run Aviva’s risk model simulations fell 95% from 100 servers to 5 servers, and simpler code not only saves programming, testing and execution time and reduces mistakes, but also increases code transparency and readability for regulators, updates, maintenance, analysis and error checking.

About Julia and Julia Computing

Julia is a high performance open source computing language for data, analytics, algorithmic trading, machine learning, artificial intelligence, and many other domains. Julia solves the two-language problem by combining the ease of use of Python and R with the speed of C++. Julia provides parallel computing capabilities out of the box and unlimited scalability with minimal effort. For example, Julia has run at petascale on 650,000 cores with 1.3 million threads to analyze over 56 terabytes of data using Cori, the world’s sixth-largest supercomputer. With more than 1.2 million downloads and +161% annual growth, Julia is one of the top programming languages developed on GitHub. Julia adoption is growing rapidly in finance, insurance, machine learning, energy, robotics, genomics, aerospace, medicine and many other fields.

Julia Computing was founded in 2015 by all the creators of Julia to develop products and provide professional services to businesses and researchers using Julia. Julia Computing offers the following products:

  • JuliaPro for data science professionals and researchers to install and run Julia with more than one hundred carefully curated popular Julia packages on a laptop or desktop computer

  • JuliaRun for deploying Julia at scale on dozens, hundreds or thousands of nodes in the public or private cloud, including AWS and Microsoft Azure

  • JuliaFin for financial modeling, algorithmic trading and risk analysis including Bloomberg and Excel integration, Miletus for designing and executing trading strategies and advanced time-series analytics

  • JuliaDB for in-database in-memory analytics and advanced time-series analysis

  • JuliaBox for students or new Julia users to experience Julia in a Jupyter notebook right from a Web browser with no download or installation required

To learn more about how Julia users deploy these products to solve problems using Julia, please visit the Case Studies section on the Julia Computing Website.

Julia users, partners and employers hiring Julia programmers in 2017 include Amazon, Apple, BlackRock, Capital One, Comcast, Disney, Facebook, Ford, Google, IBM, Intel, KPMG, Microsoft, NASA, Oracle, PwC, Uber, and many more.

About Chartis Research

Chartis Research is a leading provider of research and analysis on the global market for risk technology. It is part of Infopro Digital, which owns market-leading brands such as Risk and WatersTechnology. Chartis’ goal is to support enterprises as they drive business performance through improved risk management, corporate governance and compliance, and to help clients make informed technology and business decisions by providing in-depth analysis and actionable advice on virtually all aspects of risk technology.

Source: Julia

The post Julia Computing Wins RiskTech100 2018 Rising Star Award appeared first on HPCwire.
