Related News – HPCwire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

CMU’s Latest “Card Shark” – Libratus – is Beating the Poker Pros (Again)

Fri, 01/20/2017 - 09:33

It’s starting to look like Carnegie Mellon University has a gambling problem – can’t stay away from the poker table. This morning CMU reports its latest poker-playing AI software, Libratus, is winning against four of the world’s best professional poker players in a 20-day, 120,000-hand tournament – Brains vs. AI – at Rivers Casino in Pittsburgh. Maybe it’s a new way to fund graduate programs. (Just kidding!)

One of the pros, Jimmy Chou, said he and his colleagues initially underestimated Libratus, but have come to regard it as one tough player: “The bot gets better and better every day. It’s like a tougher version of us.” Chou and three other leading players – Dong Kim, Jason Les and Daniel McAulay – specialize in this two-player, no-limit form of Texas Hold’em and are considered among the world’s top players of the game.

According to the CMU report, while the pros are fighting for humanity’s pride – and shares of a $200,000 prize purse – Carnegie Mellon researchers are hoping their computer program will establish a new benchmark for artificial intelligence by besting some of the world’s most talented players.

Libratus was developed by Tuomas Sandholm, professor of computer science, and his student, Noam Brown. “Libratus is being used in this contest to play poker, an imperfect information game that requires the AI to bluff and correctly interpret misleading information to win. Ultimately programs like Libratus also could be used to negotiate business deals, set military strategy, or plan a course of medical treatment – all cases that involve complicated decisions based on imperfect information,” according to the CMU report.
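
The CMU report does not detail Libratus’ algorithms, but Sandholm and Brown’s published poker bots are based on counterfactual regret minimization, a family of self-play algorithms for imperfect-information games. The toy sketch below – a hypothetical illustration only, not Libratus code – runs plain regret matching on rock-paper-scissors to show how repeated self-play drives a player toward a balanced, hard-to-exploit mixed strategy (the same intuition that leads a poker bot to bluff at the right frequency):

```python
import numpy as np

# Row player's payoff in rock-paper-scissors (a tiny zero-sum stand-in
# for the vastly larger imperfect-information game Libratus plays).
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)

def strategy_from_regrets(regret):
    """Play each action in proportion to its accumulated positive regret."""
    positive = np.maximum(regret, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(regret.size, 1.0 / regret.size)

regrets = [np.zeros(3), np.zeros(3)]
strategy_sums = [np.zeros(3), np.zeros(3)]

for _ in range(20000):
    s0 = strategy_from_regrets(regrets[0])
    s1 = strategy_from_regrets(regrets[1])
    strategy_sums[0] += s0
    strategy_sums[1] += s1
    u0 = PAYOFF @ s1          # expected payoff of each action for the row player
    u1 = -(PAYOFF.T @ s0)     # and for the column player (zero-sum game)
    regrets[0] += u0 - s0 @ u0
    regrets[1] += u1 - s1 @ u1

# The time-averaged strategy converges toward the Nash equilibrium (1/3, 1/3, 1/3).
print(strategy_sums[0] / strategy_sums[0].sum())
```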

CMU, of course, has been sharpening its AI poker skills for quite some time. Back in the fall of 2016, CMU’s software Baby Tartanian8, also created by Sandholm and Brown, placed third in the bankroll instant run-off category of another computer poker tournament (see HPCwire article, CMU’s Baby Tartanian8 Pokerbot Sweeps Annual Competition).

Back then Sandholm said, “Our ‘baby’ version of Tartanian8 was scaled down to fit within the competition’s 200 gigabyte data storage limit. It also could not do sophisticated, real-time deliberation because of the competition’s processing limit. The original Tartanian8 strategy was computed in late fall by myself and Noam on the Comet supercomputer at the San Diego Supercomputer Center (SDSC).”

In the spring of 2015, CMU’s Claudico software competed in the first Brains vs. AI contest (see HPCwire article, CMU’s Claudico Goes All-In Against World-Class Poker Pros), in which four leading pros amassed more chips than the AI. In the current contest, by contrast, Libratus had amassed a lead of $459,154 in chips over the 49,240 hands played by the end of Day Nine.

The post CMU’s Latest “Card Shark” – Libratus – is Beating the Poker Pros (Again) appeared first on HPCwire.

Atos Announces First UK Delivery of New Bull Sequana Supercomputer

Fri, 01/20/2017 - 07:14

PARIS, France, Jan. 20 — Atos, a leader in digital transformation, announces the first installation of its new-generation Bull sequana X1000 supercomputer system in the UK, at the Hartree Centre. Funded by the UK government, the Science and Technology Facilities Council (STFC) Hartree Centre is a high performance computing and data analytics research facility. The world’s most efficient supercomputer, Bull sequana is an exascale-class design intended to process a billion billion operations per second while consuming 10 times less energy than current systems.

This major collaboration between Atos and the Centre focuses on various initiatives aimed at addressing the UK Government’s Industrial Strategy which encourages closer collaboration between academia and industry. It includes:

  • The launch of a new UK-based High Performance Computing (HPC) as a Service offering (HPCaaS), which enables both large enterprises and small and medium-sized enterprises (SMEs) to take advantage of extreme computing performance through easily accessible Cloud portals. Improving SME access to such tools encourages and supports high-tech business innovation across the UK.
  • ‘Deep Learning’ as a service (DLaaS); an emerging cognitive computing technique with broad applicability from automated voice recognition to medical imaging. The technology can be used, for example, to automatically detect anomalies in mammography scans with a higher degree of accuracy than the human eye.

The new supercomputer will allow both academic and industry organisations to use the latest technology and develop applications using the most recent advances in artificial intelligence and high performance data analytics. As such, the Bull sequana system will help the Hartree Centre become the ‘go-to’ place in the UK for technology evaluation, supporting the work of major companies in fields ranging from engineering and consumer goods to healthcare and pharmaceuticals.

Andy Grant, Head of Big Data and HPC, Atos UK&I, said, “We believe that our Bull supercomputing technology and our expertise will reinforce the Centre’s reputation as a world class HPC centre of excellence and as the flagship model for industry-academic collaboration.”

Alison Kennedy, Director of the Hartree Centre, said, “The Hartree Centre works at the leading edge of emerging technologies and provides substantial benefits to the many industrial and research organisations that come to us.  Our collaboration with Atos will ensure that we continue to enable businesses, large and small, to make the best use of supercomputing and Big Data to develop better products and services that will boost productivity and drive growth.”

The partnership also encompasses a joint project to develop next-generation hardware and software solutions and application optimisation services, so that commercial and academic users benefit from the Hartree systems. It is also helping promote participation in STEM careers at higher education level and beyond, particularly in the North West of the UK.

The Bull sequana will deliver approximately 3.4 PFlops when installed and is composed of Intel Xeon and manycore Xeon Phi (Knights Landing) processor technology. It has been designed to accommodate future blade systems for deep learning, GPU-based and ARM-based computing.

The new Bull sequana system is one of the most energy efficient general purpose supercomputers in the world and is in the TOP20 of the Green500 list of the most energy efficient computers.

About Atos

Atos SE (Societas Europaea) is a leader in digital transformation with circa 100,000 employees in 72 countries and pro forma annual revenue of circa € 12 billion. Serving a global client base, the Group is the European leader in Big Data, Cybersecurity, Digital Workplace and provides Cloud services, Infrastructure & Data Management, Business & Platform solutions, as well as transactional services through Worldline, the European leader in the payment industry. With its cutting edge technologies, digital expertise and industry knowledge, the Group supports the digital transformation of its clients across different business sectors: Defense, Financial Services, Health, Manufacturing, Media, Utilities, Public sector, Retail, Telecommunications, and Transportation. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and is listed on the Euronext Paris market. Atos operates under the brands Atos, Atos Consulting, Atos Worldgrid, Bull, Canopy, Unify and Worldline. www.atos.net

Source: Atos

The post Atos Announces First UK Delivery of New Bull Sequana Supercomputer appeared first on HPCwire.

IBM Reports 2016 Fourth Quarter and Full Year Financial Results

Fri, 01/20/2017 - 06:54

ARMONK, N.Y., Jan. 20 — IBM (NYSE: IBM) has announced fourth-quarter and full-year 2016 earnings results.

“In 2016, our strategic imperatives grew to represent more than 40 percent of our total revenue and we have established ourselves as the industry’s leading cognitive solutions and cloud platform company,” said Ginni Rometty, IBM chairman, president and chief executive officer.  “IBM Watson is the world’s leading AI platform for business, and emerging solutions such as IBM Blockchain are enabling new levels of trust in transactions of every kind.  More and more clients are choosing the IBM Cloud because of its differentiated capabilities, which are helping to transform industries, such as financial services, airlines and retail.”

“In 2016, we again made substantial capital investments, increased our R&D spending and acquired 15 companies — a total of more than $15 billion across these elements.  The acquisitions further strengthened our capabilities in analytics, security, cognitive and cloud, while expanding our level of industry expertise with additions such as Truven Health Analytics and Promontory Financial Group,” said Martin Schroeter, IBM senior vice president and chief financial officer.  “At the same time, we returned almost $9 billion to shareholders through dividends and gross share repurchases.”

Strategic Imperatives

Fourth-quarter cloud revenues increased 33 percent.  The annual exit run rate for cloud as-a-service revenue increased to $8.6 billion from $5.3 billion at year-end 2015.  Revenues from analytics increased 9 percent.  Revenues from mobile increased 16 percent (up 17 percent adjusting for currency) and revenues from security increased 7 percent (up 8 percent adjusting for currency).

For the full year, revenues from strategic imperatives increased 13 percent (up 14 percent adjusting for currency).  Cloud revenues increased 35 percent to $13.7 billion.  The annual exit run rate for cloud as-a-service revenue increased 61 percent (up 63 percent adjusting for currency) year to year.  Revenues from analytics increased 9 percent.  Revenues from mobile increased 34 percent (up 35 percent adjusting for currency) and from security increased 13 percent (up 14 percent adjusting for currency).

Full-Year 2017 Expectations

The company expects operating (non-GAAP) diluted earnings per share of at least $13.80 and GAAP diluted earnings per share of at least $11.95.  Operating (non-GAAP) diluted earnings per share exclude $1.85 per share of charges for amortization of purchased intangible assets, other acquisition-related charges and retirement-related charges.  IBM expects a free cash flow realization rate in excess of 90 percent of GAAP net income.
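
The GAAP and operating figures differ only by the excluded charges; a quick check of the company’s own numbers (a simple arithmetic illustration, not IBM-provided code):

```python
gaap_eps = 11.95            # GAAP diluted EPS guidance (at least)
excluded_charges = 1.85     # amortization, acquisition- and retirement-related charges per share
print(round(gaap_eps + excluded_charges, 2))  # 13.8 -- the operating (non-GAAP) EPS guidance
```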

Cash Flow and Balance Sheet

In the fourth quarter, the company generated net cash from operating activities of $3.2 billion, or $5.6 billion excluding Global Financing receivables.  IBM’s free cash flow was $4.7 billion.  IBM returned $1.3 billion in dividends and $0.9 billion of gross share repurchases to shareholders.  At the end of December 2016, IBM had $5.1 billion remaining in the current share repurchase authorization.

The company generated full-year free cash flow of $11.6 billion, excluding Global Financing receivables.  The company returned $8.8 billion to shareholders through $5.3 billion in dividends and $3.5 billion of gross share repurchases.

IBM ended the fourth-quarter 2016 with $8.5 billion of cash on hand.  Debt, including Global Financing debt of $27.9 billion, totaled $42.2 billion.  Core (non-Global Financing) debt totaled $14.3 billion.  The balance sheet remains strong and is well positioned to support the business over the long term.

Segment Results for Fourth Quarter

  • Cognitive Solutions (includes solutions software and transaction processing software) — revenues of $5.3 billion, up 1.4 percent (up 2.2 percent adjusting for currency), were driven by growth in cloud, analytics and security.
  • Global Business Services (includes consulting, global process services and application management) — revenues of $4.1 billion, down 4.1 percent (down 3.6 percent adjusting for currency).
  • Technology Services & Cloud Platforms (includes infrastructure services, technical support services and integration software) — revenues of $9.3 billion, up 1.7 percent (up 2.4 percent adjusting for currency).  Growth was driven by strong hybrid cloud services, analytics and security performance.
  • Systems (includes systems hardware and operating systems software) — revenues of $2.5 billion, down 12.5 percent (down 12.1 percent adjusting for currency).  Gross profit margins improved driven by z Systems performance.
  • Global Financing (includes financing and used equipment sales) — revenues of $447 million, down 1.5 percent (down 2.1 percent adjusting for currency).

Full-Year 2016 Results

Diluted earnings per share from continuing operations were $12.39, down 9 percent compared to the 2015 period.  Net income from continuing operations for the twelve months ended December 31, 2016 was $11.9 billion compared with $13.4 billion in the year-ago period, a decrease of 11 percent.

Consolidated net income was $11.9 billion compared to $13.2 billion in the year-ago period.  Consolidated diluted earnings per share were $12.38 compared to $13.42, down 8 percent year to year. Revenues from continuing operations for the twelve-month period totaled $79.9 billion, a decrease of 2 percent year to year compared with $81.7 billion for the twelve months of 2015.

Operating (non-GAAP) diluted earnings per share from continuing operations were $13.59 compared with $14.92 per diluted share for the 2015 period, a decrease of 9 percent.  Operating (non-GAAP) net income from continuing operations for the twelve months ended December 31, 2016 was $13.0 billion compared with $14.7 billion in the year-ago period, a decrease of 11 percent.

Source: IBM

The post IBM Reports 2016 Fourth Quarter and Full Year Financial Results appeared first on HPCwire.

IDG to Be Bought by Chinese Investors; IDC to Spin Out HPC Group

Thu, 01/19/2017 - 16:09

US-based publishing and investment group International Data Group, Inc. (IDG) will be acquired by a pair of Chinese investors, China Oceanwide Holdings Group Co., Ltd. (“China Oceanwide”) and IDG Capital, the companies announced today (Thursday). The official announcement comes after months of speculation with Reuters reporting in November that the parties were in “advanced discussions.”

Tech analyst firm IDC is included in the deal but will go forth without the HPC group, which will find a new corporate home before the sale goes through (more details below).

The terms of the deal were not disclosed, but sources have estimated the sale price to be between $500 million and $1 billion. The transaction is expected to close within the first quarter of 2017.

Founded in 1964 by Pat McGovern, IDG is a prominent global media, market research and venture company; it operates in 97 countries around the world. McGovern, the long-time CEO for the company, passed away in 2014.

“Pat was not only a great boss, but also a mentor to me for over 22 years,” said Hugo Shong, founding general partner of IDG Capital. “IDG’s culture is at the core of its success, and its strength has always been rooted in the talent and dedication of its people. Our focus going forward will be on investing in the company and its people for growth over the long term, as we carry the flag for Pat’s legacy for many years to come.”

IDG Capital is an independently operated investment management partnership, which cites IDG as one of many limited partners. It was formed in 1993 as China’s first technology venture investment firm. It operates in a wide swath of sectors, including Internet and wireless communications, consumer products, franchise services, new media, entertainment, education, healthcare and advanced manufacturing.

China Oceanwide is a privately held, multi-billion dollar, international conglomerate founded by Chairman Zhiqiang Lu. Its operations span financial services, real estate assets, media, technology and strategic investment. The company has a global workforce of 12,000.

The Future of IDC’s HPC Team

Given IDC’s position as an analyst firm of record for the HPC community, you may be wondering how IDG’s sale to Chinese interests will impact IDC’s HPC group, which deals with sensitive US information. Earl Joseph, IDC program vice president and executive director of the HPC User Forum, explained that due to the nature of their business dealings, IDC’s HPC group will be separated out of IDC’s holdings prior to the sale.

“We want to let you know that we will be fully honoring all IDC HPC contracts and deliverables, and will continue our HPC operations as before,” Joseph shared in an email.

“Because the HPC group conducts sensitive business with governments, the group is being separated prior to the deal closing. It will be operated under new ownership that will be independent from the buyer of IDC to ensure that the group can continue to fully support government research requirements. The HPC group will continue to do business as usual, including research reports, client studies, and the HPC User Forums. After the deal closes, all research reports will be provided by the new HPC group at: www.hpcuserforum.com.”

The IDC HPC team also clarified that it will retain control of “all the IP on HPC, numbers, reports, etc.” and it “won’t be part of a non-US company.”

“We are being setup to be a healthy, growing concern,” said Joseph.

Until the IDC HPC group finds a new corporate home, it will remain part of IDG.

The post IDG to Be Bought by Chinese Investors; IDC to Spin Out HPC Group appeared first on HPCwire.

Weekly Twitter Roundup (Jan. 19, 2017)

Thu, 01/19/2017 - 14:05

Here at HPCwire, we aim to keep the HPC community apprised of the most relevant and interesting news items that get tweeted throughout the week. The tweets that caught our eye this past week are presented below.

Just received our new clear tile for the Owens Cluster! #supercomputer pic.twitter.com/w0h4HVn0Ai

— OhioSupercomputerCtr (@osc) January 18, 2017

Preparation for #SC17 has begun, check out the new look and feel for SC17 with the #HPCConnects Logo! Supercomputing is only 10 months away

— SC17 (@Supercomputing) January 19, 2017

As part of the @MontBlanc_Eu Project, we got to visit the MareNostrum, a #supercomputer housed in what used to be a chapel. #HPC #whataphoto pic.twitter.com/QUOgefWkep

— Connect Tech Inc. (@ConnectTechInc) January 18, 2017

Even today's low temperatures did not prevent our lively #HPC discussions at today's event #ARM on the road. Thanks all for attending! pic.twitter.com/gbPGaksQ6d

— Mont-Blanc (@MontBlanc_Eu) January 17, 2017

Stampede supercomputer simulates silica glass, science to save on energy bills from heat loss https://t.co/ESdrytAJIx

— TACC (@TACC) January 19, 2017

We're creating a first-of-its-kind #supercomputer with @GW4Alliance thanks to £3M @EPSRC funding https://t.co/ojbC5XkmLB

— University of Bath (@UniofBath) January 17, 2017

Awesome week of presenting and learning at the @ddn_limitless Sales Conference – 2017 is going to be exciting across the product portfolio pic.twitter.com/fBHrAUIBPL

— Kurt Kuckein (@kkuckein) January 18, 2017

Congratulations @ragerber! Well deserved: https://t.co/JIF6H7I6Kl @BerkeleyLab pic.twitter.com/G0gcrumM3x

— NERSC (@NERSC) January 17, 2017

Last talk at @MontBlanc_Eu Conference. Jean Gonnord #CEA makes a vibrant appeal to buy European in #HPC pic.twitter.com/7QIXN7XV8P

— Pascale BernierBruna (@PBernierBruna) January 17, 2017

Highlights of @DeptofDefense Secretary Ashton Carter term include Vislab visit. https://t.co/fUdtcRX01z #HPCmatters pic.twitter.com/KztfVsSHKw

— TACC (@TACC) January 19, 2017

Click here to view the top tweets from last week.

The post Weekly Twitter Roundup (Jan. 19, 2017) appeared first on HPCwire.

France’s CEA and Japan’s RIKEN to Partner on ARM and Exascale

Thu, 01/19/2017 - 11:09

France’s CEA and Japan’s RIKEN institute announced a multi-faceted five-year collaboration to advance HPC generally and prepare for exascale computing. Among the particulars are efforts to: build out the ARM ecosystem; work on code development and code sharing on the existing and future platforms; share expertise in specific application areas (material and seismic sciences for example); improve techniques for using numerical simulation with big data; and expand HPC workforce training. It seems to be a very full agenda.

CEA (Alternative Energies and Atomic Energy Commission), long a force in European HPC, and RIKEN, Japan’s largest research institution, share broad goals in the new initiative. On the RIKEN side, the Advanced Institute for Computational Science (AICS) will coordinate much of the work, although activities are expected to extend RIKEN-wide and to other Japanese academic institutions.

Perhaps not surprisingly, further development of ARM is a driving force. Here are comments by project leaders from both partners:

  • RIKEN. “We are committed to building the ARM-based ecosystems and we want to send that message to those who are related to ARM so that those people will be excited in getting in contact with us,” said Shig Okaya, director, Flagship 2020 Project, RIKEN. Japan and contractor Fujitsu, of course, have committed to using ARM on the post-K computer.
  • CEA. “We are [also] committed to development of the [ARM] ecosystem and we will [also] compare and cross test with the other platforms such as Intel. It’s a way for us to anticipate the future needs of our scientists and industry people so that we have a full working co-design loop,” said Jean-Philippe Bourgoin, director of strategic analysis and member of the executive committee, CEA. Europe also has a major ARM project – Mont-Blanc, now in its third phase – that is exploring use of ARM in leadership-class machines. Atos/Bull is the lead contractor.
(Pictured: Jean-Philippe Bourgoin, director of strategic analysis and member of the executive committee, CEA, right; Shig Okaya, director, Flagship 2020 Project, RIKEN.)

The agreement, announced last week in Japan and France, has been in the works for some time, said Okaya and Bourgoin, and is representative of the CEA-RIKEN long-term relationship. Although details are still forthcoming, the press release on the CEA website provides a worthwhile snapshot:

“The scope of the collaboration covers the development of open source software components, organized in an environment that can benefit both hardware developers and software and application developers on x86 as well as ARM architectures. The open source approach is particularly suited to combining the respective efforts of partners, bringing software environments closer to today’s very different architectures and giving as much resonance to the results as possible – in particular through contributions to the OpenHPC collaborative project.

“Priority topics include programming environments and languages, runtime systems, and job schedulers optimized for energy. Particular attention is paid to performance and efficiency indicators and metrics – with a focus on designing useful and cost-effective computers – as well as training and skills development. Finally, the first applications included in the collaboration concern quantum chemistry and condensed matter physics, as well as the seismic behavior of nuclear installations.”

The new agreement, say both parties, “should enable France and Japan to join forces in the global race on this strategic (HPC and exascale) subject. The French and Japanese approaches have many similarities not only in their technological choices, but also in the importance given to building user ecosystems around these new supercomputers.”

Formally, the collaboration is part of an agreement between the French Ministry of National Education, Higher Education and Research and Japan’s Ministry of Education, Culture, Sports, Science and Technology (MEXT). Europe and Japan have both been supporters of open architectures and open source software. It also helps each nation further explore non-x86 (Intel) processor architectures.

It’s worth noting that ARM, founded in the U.K., was purchased last year by Japanese technology conglomerate SoftBank (see HPCwire article, SoftBank will Purchase ARM Ltd for $32B).

Steve Conway, IDC research vice president, HPC/HPDA, said, “This CEA-RIKEN collaboration to advance open source software for leadership-class supercomputers, including exascale systems, makes great sense. Both organizations are among the global leaders for HPC innovations in hardware and software, and both have been strong supporters of the OpenHPC collaborative. IDC has said for years that software advances will be even more important than hardware progress for the future of supercomputing.”

(Pictured: the K computer, RIKEN.)

The collaboration is a natural one, said Okaya and Bourgoin, not least because each organization is leading exascale development efforts in their respective countries and each already hosts formidable HPC resources – RIKEN/AICS’s K computer and CEA’s Curie machine which is part of the Partnership for Advanced Computing in Europe (PRACE) network of computers.

“One of the outcomes of this partnership will be that the applications and codes developed by the Japanese will be able to be ported and run on the French computer and of course the French codes and applications will be able to be run on the Japanese computer. So the overall ecosystem [of both] will benefit,” said Bourgoin. He singled out three critical areas for collaboration: programming environment, runtime environment, and energy-aware job scheduling.

Okaya noted there are differences in the way each organization has tackled these problems but emphasized they are largely complementary. One example of tool sharing is the microkernel strategy being developed at RIKEN, which will be enriched by use of a virtualization tool (PCOCC) from CEA. At the application level, at least to start, two application areas have been singled out:

  • Quantum Chemistry/Molecular Dynamics. There’s an early effort to port BigDFT, developed in large measure in Europe, to the K computer with follow-up work to develop libraries.
  • Earth Sciences. Japan has leading edge seismic simulation/prediction capabilities and will work with CEA to port Japan’s simulation code, GAMERA. Bourgoin noted the value of such simulations in nuclear installation evaluations and recalled that Japan and France have long collaborated on a variety of nuclear science issues.

The partnership seems likely to bear fruit on several fronts. Bourgoin noted the agreement has a lengthy list of detailed deliverables and a timetable for delivery. While the RIKEN effort is clearly focused on ARM, Bourgoin emphasized it is not clear which processor(s) will emerge for next-generation HPC and exascale in the coming decade. Europe and CEA want to be ready for whatever mix of processor architectures emerges.

In addition to co-development, Bourgoin and Okaya said they would also work on HPC training issues. There is currently a lack of needed trained personnel, they agreed. How training would be addressed was not yet spelled out. It will be interesting to watch this collaboration and monitor what effect it has on accelerating ARM traction more generally. Recently, of course, Cray announced an ARM-based supercomputer project to be based in the U.K.

Neither partner wanted to go on record regarding geopolitical influences on processor development generally or this collaboration specifically. Past European Commission statements have made it clear the EC would likely back a distinctly European (origin, IP, manufacture) processor alternative to the x86. Japan seems likely to share such homegrown and home-control concerns with regard to HPC technology, which is seen as an important competitive advantage for industry and science.

The post France’s CEA and Japan’s RIKEN to Partner on ARM and Exascale appeared first on HPCwire.

Appentra Joins the OpenPOWER Foundation

Thu, 01/19/2017 - 10:12

A Coruña, Spain, January 19 — Appentra Corporation (@Appentra), a software company for guided parallelization, today announced it has joined the OpenPOWER Foundation, an open development community based on the POWER microprocessor architecture.

Appentra joins a growing roster of technology organizations working collaboratively to build advanced server, networking, storage and acceleration technology as well as industry leading open source software aimed at delivering more choice, control and flexibility to developers of next-generation, hyperscale and cloud data centers. The group makes POWER hardware and software available to open development for the first time, as well as making POWER intellectual property licensable to others, greatly expanding the ecosystem of innovators on the platform.

The OpenPOWER Foundation provides a collaborative environment in which members obtain current information on OpenPOWER activities and get involved in areas of interest to them. Thus, we will actively participate in the OpenPOWER Ready program to demonstrate that our new software, Parallware Trainer, is interoperable with other OpenPOWER Ready products. We are also interested in working with the Academia Discussion Group to better understand how Parallware Trainer can help in teaching parallel programming with OpenMP and OpenACC.

“For us it is of great value to share our experiences and learn from world-wide leading universities, national laboratories and supercomputing centers that are also members of the OpenPOWER Foundation,” said Manuel Arenaz, CEO at Appentra.

“The development model of the OpenPOWER Foundation is one that elicits collaboration and represents a new way in exploiting and innovating around processor technology,” says Calista Redmond, Director of OpenPOWER Global Alliances at IBM. “With the Power architecture designed for Big Data and Cloud, new OpenPOWER Foundation members like Appentra will be able to add their own innovations on top of the technology to create new applications that capitalize on emerging workloads.”

About OpenPOWER Foundation

The OpenPOWER Foundation is an open technical community based on the POWER architecture, enabling collaborative development and opportunity for member differentiation and industry growth. The goal of the OpenPOWER Foundation is to create an open ecosystem, using the POWER Architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers and industry. To learn more about OpenPOWER and to view the complete list of current members, go to www.openpowerfoundation.org. #OpenPOWER

About Appentra

Appentra is a technology company providing software tools for guided parallelization in high-performance computing and HPC-like technologies.

Appentra was founded in 2012 as a spin-off from the University of A Coruña. Dr. Manuel Arenaz and his team were conducting research in the area of advanced compilation techniques to improve the performance in high-performance parallel computing codes. Specifically, Dr. Arenaz’s team was focused on the static program analysis for parallelization of sequential scientific applications that use sparse computations, automatic parallelism discovery, and development of parallelizing code transformations for sparse applications.

This led to an idea: develop a set of tools, the Parallware Suite, that helps users manage the complexity of parallel programming, keeps up with leading industry standards, and not only parallelizes code but also trains users how to parallelize their own code. By using the Parallware Suite, users can take control of their parallel applications, improve their productivity, and realize the full potential of HPC in their environment.

Source: Appentra

The post Appentra Joins the OpenPOWER Foundation appeared first on HPCwire.

Altair to Offer HPC Cloud Offerings on Oracle Cloud Platform

Thu, 01/19/2017 - 08:46

TROY, Mich., Jan. 19 — Altair today announced a business collaboration with Oracle to build and offer High Performance Computing (HPC) solutions on the Oracle Cloud Platform. This follows Oracle’s decision to name Altair’s PBS Works as its preferred workload management solution for Oracle Cloud customers.

Altair PBS Works running on the Oracle Cloud Platform offers independent software vendors (ISVs) faster time to market on a proven HPC platform to address markets such as Oil & Gas, Insurance Information Processing and the internet of things (IoT). The Altair HPC advantage is the ability to quickly jumpstart an ISV interested in HPC with short time-to-market for their solutions on a proven platform.

“The Oracle Cloud Platform provides superior performance in terms of price, predictability, and throughput, with a low cost pay-as-you-go cloud model,” said Sam Mahalingam, Chief Technical Officer, Altair. “We are delighted to partner with Oracle to provide High Performance Computing (HPC) solutions with Altair’s PBS Works for the Oracle Cloud.”

Altair has served the HPC market for over a decade with award-winning workload management, engineering, and cloud computing software. Used by thousands of companies worldwide, PBS Works enables engineers in HPC environments to improve productivity, optimize resource utilization and efficiency, and simplify the process of workload management.

“Altair is a longtime leader in HPC and cloud solutions,” said Deepak Patil, Vice President of Product Management, Oracle Cloud Platform. “Their unique combination of HPC and engineering expertise makes PBS Works Oracle’s preferred workload management suite for High Performance Computing on the Oracle Cloud.”

The Oracle Platform promises to offer HPC users superior performance in terms of price, predictability, and throughput.

As part of the collaboration, Altair will work closely with Oracle to develop turnkey solutions allowing users to access cloud HPC resources in the Oracle Cloud Platform from any web-enabled device. These solutions will leverage Altair’s industry leading PBS Works job scheduling solution on the Oracle Cloud to enable intuitive web portal access and secure workload management for rapid, scalable access to Oracle Cloud Platform HPC resources.

Initial solutions will target life sciences, energy, and academia, where increasing HPC demand is driving compute-intensive workloads including DNA sequencing, advanced simulations, and big data analytics to test new concepts or products in virtual space.

For more information on Altair’s PBS Works and HPC cloud offerings, visit www.pbsworks.com/overview.

About Altair

Altair is focused on the development and broad application of simulation technology to synthesize and optimize designs, processes and decisions for improved business performance. Privately held with more than 2,600 employees, Altair is headquartered in Troy, Michigan, USA and operates more than 45 offices throughout 20 countries. Today, Altair serves more than 5,000 corporate clients across broad industry segments. To learn more, please visit www.altair.com.

Source: Altair

The post Altair to Offer HPC Cloud Offerings on Oracle Cloud Platform appeared first on HPCwire.

SDSC’s Gordon Supercomputer Assists in New Microbiome Study

Thu, 01/19/2017 - 06:45

Jan. 19 — A new proof-of-concept study by researchers from the University of California San Diego has succeeded in training computers to “learn” what a healthy versus an unhealthy gut microbiome looks like based on its genetic makeup. Since this can be done by genetically sequencing fecal samples, the research suggests there is great promise for new diagnostic tools that are, unlike blood draws, non-invasive.

As recent advances in scientific understanding of Parkinson’s disease and cancer immunotherapy have shown, our gut microbiomes – the trillions of bacteria, viruses and other microbes that live within us – are emerging as one of the richest untapped sources of insight into human health.

The problem is these microbes live in a very dense ecology of up to one billion microbes per gram of stool. Imagine the challenge of trying to specify all the different animals and plants in a complex ecology like a rain forest or coral reef – and then imagine trying to do this in the gut microbiome, where each creature is microscopic and identified by its DNA sequence.

Determining the state of that ecology is a classic ‘Big Data’ problem, where the data is provided by a powerful combination of genetic sequencing techniques and supercomputing software tools. The challenge then becomes how to mine this data to obtain new insights into the causes of diseases, as well as novel therapies to treat them.

The new paper, titled “Using Machine Learning to Identify Major Shifts in Human Gut Microbiome Protein Family Abundance in Disease,” was presented last month at the IEEE International Conference on Big Data. It was written by a joint research team from UC San Diego and the J. Craig Venter Institute (JCVI). At UC San Diego, it included Mehrdad Yazdani, a machine learning and data scientist at the California Institute for Telecommunications and Information Technology’s (Calit2) Qualcomm Institute; Biomedical Sciences graduate student Bryn C. Taylor and Pediatrics Postdoctoral Scholar Justine Debelius; Rob Knight, a professor in the UC San Diego School of Medicine’s Pediatrics Department as well as the Computer Science and Engineering Department and director of the Center for Microbiome Innovation; and Larry Smarr, Director of Calit2 and a professor of Computer Science and Engineering. The UC San Diego team also collaborated with Weizhong Li, an associate professor at JCVI.

Metagenomics and Machine Learning

The software to carry out the study was developed by Li and run on the data-intensive Gordon supercomputer at the San Diego Supercomputer Center (SDSC), an Organized Research Unit of UC San Diego, using 180,000 core-hours. That’s equivalent to running a personal computer 24 hours a day for about 20 years.
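
A quick sanity check of that comparison (assuming a single core running around the clock):

```python
core_hours = 180_000
hours_per_year = 24 * 365
print(core_hours / hours_per_year)  # ~20.5 -- roughly 20 years of one core running 24/7
```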

The work began with a genetic sequencing technique known as “metagenomics,” which breaks up the DNA of the hundreds of species of microbes that live in the human large intestine (our “gut”). The technique was applied to 30 healthy people (using sequencing data from the National Institutes of Health’s Human Microbiome Program), together with 30 samples from people suffering from the autoimmune Inflammatory Bowel Disease (IBD), including those with ulcerative colitis and with ileal or colonic Crohn’s disease. This resulted in sequencing around 600 billion DNA bases, which were then fed into the Gordon supercomputer to reconstruct the relative abundance of these species; for instance, how many E. coli are present compared to other bacterial species.

Since each bacterium’s genome contains thousands of genes and each gene can express a protein, this technique made it possible to translate the reconstructed DNA of the microbial community into hundreds of thousands of proteins, which are then grouped into about 10,000 protein families.

To discover the patterns hidden in this huge pile of numbers, the researchers harnessed what they refer to as “fairly out-of-the-bag” machine-learning techniques originally developed for spam filters and other data mining applications. Their goal was to use these algorithms to classify major changes in the protein families found in the gut bacteria of both healthy subjects and those with IBD, based on the DNA found in their fecal samples.

The researchers first used standard biostatistics routines to identify the 100 most statistically significant protein families that differentiate health and disease states. These 100 protein families were then used as a “training set” to build a machine learning classifier that could classify the remaining 9,900 protein families in diseased versus healthy states. The goal was to find a “signature” for which protein families were elevated or suppressed in disease versus healthy states.
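
The paper’s exact features and model are not described in this summary, but the overall pattern – label a small, statistically screened subset, then let a classifier generalize to the rest – can be sketched as follows (a hypothetical illustration using scikit-learn on synthetic abundance data, not the study’s actual code):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in: one row per protein family, one column per subject,
# values are relative abundances (the study derived these from metagenomic
# sequencing of 30 healthy and 30 IBD samples).
n_families, n_subjects = 10_000, 60
abundance = rng.random((n_families, n_subjects))

# Suppose biostatistics flagged the first 100 families and labeled each as
# elevated (1) or suppressed (0) in disease -- the "training set".
train_X, train_y = abundance[:100], rng.integers(0, 2, size=100)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(train_X, train_y)

# Apply the trained classifier to the remaining 9,900 protein families.
predictions = clf.predict(abundance[100:])
print(predictions.shape)  # (9900,)
```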

The entire article can be found here.

Source: Tiffany Fox, SDSC

The post SDSC’s Gordon Supercomputer Assists in New Microbiome Study appeared first on HPCwire.

NERSC Selects Six Teams to Participate in NESAP for Data Program

Thu, 01/19/2017 - 06:41

Jan. 19 — Following a call for proposals issued last October, NERSC has selected six science application teams to participate in the NERSC Exascale Science Applications Program for Data (NESAP for Data).

Since the NESAP program was unveiled in 2014, NERSC has been partnering with code teams and library and tool developers to prepare and optimize their codes for the Cori manycore architecture. Like NESAP, the NESAP for Data program joins application teams with resources at NERSC, Cray and Intel; however, while the initial NESAP projects involve mostly simulation codes, NESAP for Data targets science applications that process and analyze massive datasets acquired from U.S. Department of Energy-supported experimental and observational sources, such as telescopes, microscopes, genome sequencers, light sources and particle physics detectors. The goal is to enable these applications to take full advantage of the Intel Xeon Phi Knights Landing (KNL) chipset on Cori.

The selected NESAP for Data projects are:

  • Dark Energy Spectroscopic Instrument Codes; Stephen Bailey, Berkeley Lab (HEP)
  • Union of Intersections Framework; Kris Bouchard, Berkeley Lab (BER)
  • Cosmic Microwave Background Codes (TOAST); Julian Borrill, Berkeley Lab (HEP)
  • ATLAS Simulation/Analysis Code; Steve Farrell, Berkeley Lab (HEP)
  • Tomographic Reconstruction; Doga Gursoy, Argonne (BES)
  • CMS Offline Reconstruction Code; Dirk Hufnagel, Fermilab (HEP)

“We’re very excited to welcome these new data-intensive science application teams to NESAP,” said Rollin Thomas, a big data architect in NERSC’s Data Analytics and Services group who is coordinating NESAP for Data. “NESAP’s tools and expertise should help accelerate the transition of these data science codes to KNL. But I’m also looking forward to uncovering and understanding the new performance and scalability challenges that are sure to arise along the way.”

Through NESAP, the teams will have full access to Cori-KNL, plus NERSC Data Department expertise, testbeds and collaborations with vendors. Participants are already lining up to start submitting jobs to the KNL debug queues, according to Thomas. NESAP for Data participants will also be included in the various NESAP meetings and events, such as the upcoming dungeon session to be held in March at Intel’s Portland, Ore. facility.

Like the initial NESAP program, NESAP for Data also includes post-doctoral opportunities; the first NESAP post-doc to start working on the data projects is Zahra Ronaghi, who joined the NESAP program in January. NERSC is now in the process of reviewing more applications to fill two additional post-doc positions within NESAP for Data.

“We’ve learned a tremendous amount in the last couple years working with our existing NESAP teams to prepare their applications for Knights Landing,” said Jack Deslippe, acting group lead for NERSC’s Application Performance group. “We’ve developed an optimization strategy that the greater NERSC community can use to prepare for Cori. It’s really exciting to get a chance to bring in some new data-centric applications—to apply some of what we’ve learned already but also to learn more from the unique challenges these apps face.”

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is the primary high-performance computing facility for scientific research sponsored by the U.S. Department of Energy’s Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 6,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. DOE Office of Science.

Source: NERSC

The post NERSC Selects Six Teams to Participate in NESAP for Data Program appeared first on HPCwire.

SC17 Now Accepting Proposals for Workshops

Thu, 01/19/2017 - 06:40

Jan. 19 — SC includes full- and half-day workshops that complement the overall Technical Program events, with the goal of expanding the knowledge base of practitioners and researchers in a particular subject area. These workshops provide a focused, in-depth venue for presentations, discussion and interaction. Workshop proposals are peer-reviewed with a focus on submissions that inspire deep and interactive dialogue on topics of interest to the HPC community.

Publishing through SIGHPC

Workshops held in conjunction with the SC conference are *not* included as part of the SC proceedings.

If a workshop will have a rigorous peer-review process for selecting papers, we encourage the organizers to approach ACM SIGHPC about its special collaborative arrangement, which allows the workshop’s proceedings to be published in two digital archives (the ACM Digital Library and IEEE Xplore). The workshop’s proceedings will also be linked to the SC17 online program.

Please note that this option requires a second proposal to SIGHPC and imposes additional requirements; see http://www.sighpc.org/events/collaboration/scworkshops for details.

Important Dates

  • Web submissions open: January 1, 2017
  • Submission Deadline: February 7, 2017

Web Submissions: https://submissions.supercomputing.org/

Email Contact: workshops@info.supercomputing.org

SC17 Workshop Chair: Almadena Chtchelkanova, NSF

SC17 Workshop Vice-Chair: Luiz DeRose, Cray Inc.

Source: SC17

The post SC17 Now Accepting Proposals for Workshops appeared first on HPCwire.

ITER and BSC Collaborate to Simulate the Process of Fusion Power Generation

Thu, 01/19/2017 - 06:30

Jan. 19 — The ITER Organization and the Barcelona Supercomputing Center have gone one step further in their collaboration to simulate the process of fusion power generation. Both parties have signed a Memorandum of Understanding (MoU) in which they agree on the importance of promoting and furthering academic and scientific cooperation in all academic and scientific fields of mutual interest and to advance the training of young researchers. ITER is the international nuclear fusion R&D project, which is building the world’s largest experimental tokamak in France. It aims to demonstrate that fusion energy is scientifically and technologically feasible.

ITER and BSC already collaborate in the area of numerical modelling to assess the design of the ITER pellet injector. These computer simulations are based upon non-linear 3D Magnetohydrodynamics (MHD) methods. Their focus is modelling the injection of pellets to forecast and control instabilities that could damage the reactor. These instabilities are called Edge Localized Modes (ELM), which can occur at the boundary of the fusion plasma and are problematic because they can release large amounts of energy to the reactor wall, wearing it away in the process. The goal of these simulations is to assess the optimal pellet size and speed of the pellet injector.

The MoU is valid for a duration of five years and further tightens the cooperation between the two institutions, each a leader in its respective field. ITER will become the biggest and most relevant fusion device in the world, while BSC – with its 475 researchers and experts and the upgrade of MareNostrum 3 to MareNostrum 4 taking place later this year – is one of the top supercomputing centers worldwide. As the first step within this new MoU, the two institutes will start a collaboration on the ITER Integrated Modelling infrastructure, IMAS, together with the EUROfusion Work Package for Code Development.

Mervi Mantsinen

The Barcelona Supercomputing Center Fusion team is coordinated by Mervi Mantsinen, ICREA professor at BSC since October 2013. During this time, Mantsinen has been one of the scientific coordinators for the EUROfusion experimental campaign to prepare fusion at ITER. Mantsinen has coordinated one of the two largest experiments for 2015-2016 at the Joint European Torus (JET), the biggest and most powerful fusion reactor in the world, which is assisting the design and construction of ITER. Previously Mantsinen worked at JET and the ASDEX Upgrade tokamak at the Max-Planck Institute for Plasma Physics in Garching, Germany.

Mantsinen’s research focuses on the numerical modelling of experiments in magnetically confined fusion devices in preparation for ITER operation. Her objective is to enhance modelling capabilities in the field of fusion through code validation and optimization. This research is done within the European fusion research program EUROfusion for Horizon 2020 in close collaboration with ITER, the International Tokamak Physics Activity, EUROfusion and the Spanish national fusion laboratory CIEMAT.

ITER is the international nuclear fusion R&D project, which is building the world’s largest experimental tokamak nuclear fusion reactor in France. ITER aims to demonstrate that fusion energy is scientifically and technologically feasible by producing ten times more energy than is put in.

Fusion energy is released when hydrogen nuclei collide, fusing into heavier helium atoms and releasing tremendous amounts of energy in the process. ITER is constructing a tokamak device for the fusion reaction, which uses magnetic fields to contain and control the plasma – the hot, electrically charged gas that is produced in the process.
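
For context (these figures are ITER’s published design targets rather than numbers stated in this release), the “ten times more energy” goal corresponds to a fusion gain of

$$ Q = \frac{P_{\text{fusion}}}{P_{\text{heating}}} \approx \frac{500\ \text{MW}}{50\ \text{MW}} = 10 $$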

EUROfusion, the ‘European Consortium for the Development of Fusion Energy’, manages and funds European fusion research activities. The EUROfusion consortium is composed of the member states of the European Union plus Switzerland as an associated member.

The Joint European Torus (JET) is located at the Culham Centre for Fusion Energy in Oxfordshire, Great Britain.  JET is presently the largest and most powerful fusion reactor in the world and studies fusion in conditions approaching those needed for a fusion power plant.

About the Barcelona Supercomputing Center (BSC)

Barcelona Supercomputing Center (BSC) is the national supercomputing centre in Spain. BSC specializes in high performance computing (HPC) and its mission is two-fold: to provide infrastructure and supercomputing services to European scientists, and to generate knowledge and technology to transfer to business and society.

BSC is a Severo Ochoa Center of Excellence and a first-level hosting member of the European research infrastructure PRACE (Partnership for Advanced Computing in Europe). The center also manages the Spanish Supercomputing Network (RES).

The BSC Consortium is composed of the Ministerio de Economía, Industria y Competitividad of the Spanish Government, the Departament d’Empresa i Coneixement of the Catalan Government and the Universitat Politècnica de Catalunya – BarcelonaTech.

Source: BSC

The post ITER and BSC Collaborate to Simulate the Process of Fusion Power Generation appeared first on HPCwire.

ARM Waving: Attention, Deployments, and Development

Wed, 01/18/2017 - 17:07

It’s been a heady two weeks for the ARM HPC advocacy camp. At this week’s Mont-Blanc Project meeting held at the Barcelona Supercomputing Center, Cray announced plans to build an ARM-based supercomputer in the U.K. while Mont-Blanc selected Cavium’s ThunderX2 ARM chip for its third phase of development. Last week, France’s CEA and Japan’s RIKEN announced a deep collaboration aimed largely at fostering the ARM ecosystem. This activity follows a busy 2016 when SoftBank acquired ARM, OpenHPC announced ARM support, ARM released its SVE spec, Fujitsu chose ARM for the post-K machine, and ARM acquired HPC tool provider Allinea in December.

The pieces of an HPC ecosystem for ARM seem to be sliding, albeit unevenly, into place. Market traction in terms of HPC product still seems far off – there needs to be product available after all – but the latest announcements suggest growing momentum in sorting out the needed components for potential ARM-based HPC offerings. Plenty of obstacles remain – Fujitsu’s much-discussed ARM-based post K computer schedule has been delayed amid suggestions that processor issues are the main cause. Nevertheless interest in ARM for HPC is rising.

The biggest splash at this week’s Mont-Blanc project meeting was the announcement of Cray’s plans to build a massive ARM supercomputer for the GW4 consortium in the U.K. At first glance, it looks to be the first production ARM-based supercomputer. Named Isambard after the Victorian engineer Isambard Kingdom Brunel, the new system is scheduled for delivery in the March-December 2017 timeframe. Importantly, Isambard “will provide multiple advanced architectures within the same system in order to enable evaluation and comparison across a diverse range of hardware platforms.”

Project leader and professor of HPC at the University of Bristol, Simon McIntosh-Smith, said “Scientists have a growing choice of potential computer architectures to choose from, including new 64-bit ARM CPUs, graphics processors, and many-core CPUs from Intel. Choosing the best architecture for an application can be a difficult task, so the new Isambard GW4 Tier 2 HPC service aims to provide access to a wide range of the most promising emerging architectures, all using the same software stack. [It’s] a unique system that will enable direct ‘apples-to-apples’ comparisons across architectures, thus enabling UK scientists to better understand which architecture best suits their application.”

Here’s a quick Isambard snapshot:

  • Cray CS-400 system
  • 10,000+ 64-bit ARMv8 cores
  • HPC-optimized software stack
  • Will be used to compare against x86, Knights Landing, and Pascal processors
  • Cost: £4.7M over three years

The specific ARM chip planned for use was not named, although the speculation is it is likely to be a Cavium part. The new machine will be hosted by the U.K. Met Office, the national weather and climate forecasting agency. Paul Selwood, Manager for HPC Optimization at the Met Office, said in the release announcing the project: “This system will enable us, in co-operation with our partners, to accelerate insights into how our weather and climate models need to be adapted for these emerging CPU architectures.” The GW4 Alliance brings together four leading research-intensive universities: Bath, Bristol, Cardiff and Exeter.

The second splash at the BSC meeting was perhaps less spectacular but also important. The Mont-Blanc project has been percolating along since 2011. A smaller prototype was stood up in 2015 and it seems clear much of Europe is hoping that ARM-based processors will offer an HPC alternative and greater European control over its exascale efforts. Cavium’s ThunderX2 chip – a 64-bit ARMv8-A server processor that’s compliant with ARMv8-A architecture specifications and ARM SBSA and SBBR standards – will power the third phase prototype.

Mont-Blanc, of course, is the European effort to explore how ARM can be practically scaled for larger machines including future exascale systems. Atos/Bull is the primary contractor. The third phase of the Mont-Blanc project seeks to:

  • Define the architecture of an Exascale-class compute node based on the ARM architecture, and capable of being manufactured at industrial scale.
  • Assess the available options for maximum compute efficiency.
  • Develop the matching software ecosystem to pave the way for market acceptance.

The CEA-RIKEN collaboration announced last week is yet another ARM ecosystem momentum builder. “We are committed to building the ARM-based ecosystems and we want to send that message to those who are related to ARM so that those people will be excited in getting in contact with us,” said Shig Okaya, director, Flagship 2020, and a project leader for the CEA-RIKEN effort. It will, among other things, focus on programming environments and languages, runtime environments, and job schedulers optimized for energy. Co-development of codes and code sharing are big parts of the deal. (HPCwire will cover the CEA-RIKEN collaboration in greater detail in a future article.)

Whether the increased attention on ARM will translate into success beyond the mobile and SoC world where it is now a dominant player isn’t clear. One of CEA’s goals is to compare ARM with a range of architectures to determine which performs best and for which workloads. Many market watchers are wary of ARM’s potential in HPC, which is still a relatively small market. Then again, limited success in HPC wouldn’t necessarily rule out success in traditional servers. We’ll see.

The post ARM Waving: Attention, Deployments, and Development appeared first on HPCwire.

Richard Gerber Named Head of NERSC’s HPC Department

Wed, 01/18/2017 - 11:32

Jan. 18 — Richard Gerber has been named head of NERSC’s High-Performance Computing (HPC) Department, formed in early 2016 to help the center’s 6,000 users take full advantage of new supercomputing architectures – those already here and those on the horizon – and guide and support them during the ongoing transition to exascale.

For the past year, Gerber served as acting head of the department, which comprises four groups: Advanced Technologies, Application Performance, Computational Systems and User Engagement.

“This is an exciting time because the whole HPC landscape is changing with manycore, which is a big change for our users,” said Gerber, who joined NERSC’s User Services Group in 1996 as a postdoc, having earned his PhD in physics from the University of Illinois. “Users are facing a big challenge; they have to be able to exploit the architectural features on Cori (NERSC’s newest supercomputing system), and the HPC Department plays a critical role in helping them do this.”

The HPC Department is also responsible for standing up and supporting world-class systems in a production computing environment and looking to the future. “We work with complex, first-of-a-kind systems that present unique challenges,” Gerber said. “Our staff is constantly providing innovative solutions that make systems more capable and productive for our users. Looking forward, we are evaluating emerging technologies and gathering scientific needs to influence future HPC directions that will best support the science community.”

In addition, NERSC is working to acquire its next large system, NERSC-9, and prepare users to make effective use of it and exascale architectures in general, Gerber noted.

“The challenge really is getting the community to exascale, and there are many aspects to that, including helping users explore different programming models,” he said. “Beyond that we are starting to think about how to prepare for a post-Moore’s Law world when it arrives. We want to help move the community toward exascale and make sure they are ready.”

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is the primary high-performance computing facility for scientific research sponsored by the U.S. Department of Energy’s Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 6,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. DOE Office of Science.

Source: NERSC

The post Richard Gerber Named Head of NERSC’s HPC Department appeared first on HPCwire.

Nimbix Unveils Expanded Cloud Product Strategy

Wed, 01/18/2017 - 07:25

RICHARDSON, Tex., Jan. 18 — Nimbix, a leading provider of high performance and cloud supercomputing services, announced today its new combined product strategy for enterprise computing, end users and developers.  This new strategy will focus on three key capabilities – JARVICE Compute for high performance processing, including Machine Learning, AI and HPC workloads; PushToCompute for application developers creating and monetizing high performance workflows; and MaterialCompute, a brand new intuitive user interface, featuring the industry’s largest high performance application marketplace available from a cloud provider.

Nimbix’s JARVICE platform powers the Nimbix Cloud and is capable of processing massively parallel turnkey workflows ranging from enterprise simulation to machine learning, serving all major industries and organizations.  Unlike other cloud providers, which use virtualization technology to give users slices of physical machines, JARVICE delivers high performance computation on bare-metal supercomputing systems using Nimbix’s patented Reconfigurable Cloud Computing technology and fully containerized application components for agility and security.  JARVICE is also available as a product for both hosted and on-premises private cloud deployments.

PushToCompute, released in September 2016, is the fastest, easiest way for developers to onboard commercial or open source compute-intensive applications into the cloud.  Using the industry-standard Docker format, PushToCompute seamlessly interfaces with major third-party registries such as Docker Hub and Google Container Registry, as well as private registries. PushToCompute is available as a subscription service and will expand to include build capabilities for both x86 and POWER architectures in the first half of 2017.  With these new capabilities, PushToCompute will offer end-to-end continuous integration and continuous delivery services for developers of compute-intensive workflows such as machine learning and other complex algorithms.  Once deployed, these workflows can be made available in the public marketplace for on-demand monetization.
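For context on the Docker-centric workflow described above, here is a minimal sketch of the generic build, tag, and push sequence a developer might use to get a containerized application into a registry such as Docker Hub before onboarding it with a service like PushToCompute. The image name, repository path, and wrapper script are illustrative assumptions rather than Nimbix tooling; only the standard Docker CLI commands are real.

```python
# Illustrative sketch: the generic Docker build/tag/push flow a developer
# would run before pointing a registry-driven service at the image.
# IMAGE and REMOTE below are hypothetical names, not Nimbix defaults.
import subprocess

IMAGE = "my-solver:latest"                           # hypothetical local image tag
REMOTE = "docker.io/example-user/my-solver:latest"   # hypothetical registry path

def run(cmd):
    """Echo a command, run it, and fail loudly on a non-zero exit."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Build the application image from a Dockerfile in the current directory.
run(["docker", "build", "-t", IMAGE, "."])

# Tag the image for the remote registry and push it; a registry-integrated
# service can then pull the image from there for deployment.
run(["docker", "tag", IMAGE, REMOTE])
run(["docker", "push", REMOTE])
```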

MaterialCompute, Nimbix’s newest offering, sets a new standard for ease of use and accessibility of high-end computing services.  MaterialCompute aims to reduce clicks, improve flows, and optimize display on both desktop and mobile devices.  With MaterialCompute, users choose applications and workflows from the marketplace and execute them on the Nimbix Cloud at any scale, from any network, on any device, leveraging advanced computing technologies such as the latest GPUs from NVIDIA and FPGAs from Xilinx.  Developers can also use MaterialCompute to create and manage applications and to interface seamlessly with PushToCompute mechanisms.

“Delivering optimized technology capabilities to different communities is key to a successful public cloud offering,” said Nimbix Chief Technology Officer Leo Reiter.  “With this unified approach, Nimbix delivers discrete product capabilities to different audiences while maximizing value to all parties with the underlying power of the JARVICE platform.”

JARVICE and PushToCompute are available with both on-demand and subscription pricing.  MaterialCompute will be available for public access in February 2017 and will serve as the primary front-end for all Nimbix Cloud services.

About Nimbix

Nimbix is the leading provider of purpose-built cloud computing for big data and computation. Powered by JARVICE, the Nimbix Cloud provides high performance software as a service, dramatically speeding up data processing for Energy, Life Sciences, Manufacturing, Media and Analytics applications. Nimbix delivers unique accelerated high-performance systems and applications from its world-class datacenters as a pay-per-use service. Additional information about Nimbix is included in the company overview, which is available on the Nimbix website at https://www.nimbix.net.

Source: Nimbix

The post Nimbix Unveils Expanded Cloud Product Strategy appeared first on HPCwire.

Michela Taufer Named SC19 Chair

Wed, 01/18/2017 - 07:00

Jan. 18 — The University of Delaware’s Michela Taufer has been elected general chair of the 2019 International Conference for High Performance Computing, Networking, Storage and Analysis (SC19).

Sponsored by the Association for Computing Machinery and IEEE, SC is the primary international high-performance computing (HPC) conference.

“We are excited to have the benefit of Dr. Taufer’s leadership for SC19,” says John West, director of strategic initiatives at the Texas Advanced Computing Center and chair of the SC Steering Committee.

“This conference has a unique role in our community, and we depend upon the energy, drive, and dedication of talented leaders to keep SC fresh and relevant after nearly 30 years of continuous operation. The Steering Committee also wants to express its gratitude for the commitment that the University of Delaware is making by supporting Michela in this demanding service role.”

Taufer has been involved with the SC conference since 2007 and has served in many roles, including reviewer, technical papers area chair, doctoral showcase chair, and technical program co-chair. She is currently on the Student Cluster Competition Reproducibility committee and the Reproducibility Advisory Board of the Steering Committee. She is the finance chair for 2017, and she was elected to the Steering Committee in 2015.

In addition to her work with the SC conference, Taufer has been involved in other major conferences in the HPC field. In 2015 she co-chaired the IEEE International Conference on Cluster Computing, and in 2017, she will be general chair of the IEEE International Parallel and Distributed Processing Symposium.

“This is a well-deserved honor for Prof. Taufer and marks her as one of a few recognized leaders in the field of HPC,” says Kathy McCoy, chair of the Department of Computer and Information Sciences.

“This brings tremendous recognition to Prof. Taufer and her contributions, and it shines a spotlight on all of Delaware’s HPC efforts. We are thankful for her leadership.”

About the SC conference series

Established in 1988, the annual SC conference has grown in size and impact each year. Approximately 5,000 people participate in the technical program, with about 11,000 people overall.

SC has built a diverse community of participants including researchers, scientists, application developers, computing center staff and management, computing industry staff, agency program managers, journalists, and congressional staffers.

The SC technical program has addressed virtually every area of scientific and engineering research, as well as technological development, innovation, and education. Its presentations, tutorials, panels, and discussion forums have included breakthroughs in many areas and inspired new and innovative areas of computing.

Source: Diane Kukich, University of Delaware

The post Michela Taufer Named SC19 Chair appeared first on HPCwire.

HiPEAC Conference Begins January 23

Wed, 01/18/2017 - 06:45

Jan. 18 — Taking place in Stockholm from January 23-25, the 12th HiPEAC conference will bring together Europe’s top thinkers on computer architecture and compilation to tackle the key issues facing the computing systems on which we depend. HiPEAC17 will see the launch of the HiPEAC Vision 2017, a technology roadmap which lays out how technology affects our lives and how it can, and should, respond to the challenges facing European society and economies, such as the ageing population, climate change and shortages in the ICT workforce.

The Vision 2017 proposes a reinvention of computing. “We are at a crossroads, as our current way of making computers and their associated software is reaching its limit,” says Editor of the Vision, Marc Duranton of CEA. “New domains such as cyber-physical systems, which entangle the cyber and physical worlds, and artificial intelligence require us to trust systems and so develop more efficient approaches to cope with the challenges of safety, security, privacy, energy efficiency and increasing complexity. It really is the right time to reinvent computing!”

The Vision 2017 also highlights the economic importance of Europe remaining at the forefront of technological innovation. In that vein, HiPEAC17 is not a traditional academic conference; the network brings together computing systems research teams based in universities and research labs with those based in industry so as to ensure that research is relevant to market needs.  Indeed, the network has recently given a Technology Transfer Award to Horacio Pérez-Sánchez of the Universidad Católica de Murcia for his team’s work on computational drug discovery technologies, work supported by the EU-funded Tetracom initiative, which facilitated the transfer of research results from university labs to commercial application.

HiPEAC17 will also serve as a platform for HiPEAC’s recruitment service, which aims to help match European companies and research teams with the people with the skills they need, something that often proves to be a hurdle to business development.

Highlights of the conference include:

  • Launch of Matryx Computers, pre-integrated (hardware and fully-featured OS) computer platforms based on FPGA, by Embedded Computing Specialists (Brussels);
  • New startup Zeropoint Technologies (Stockholm), which is innovating ultrafast memory compression systems;
  • RWTH Aachen spinoff SILEXICA, just awarded $8 million in series A funding and celebrating the release of its next generation SLX Tool Suite for multicore platforms;
  • Keynotes from well-known experts Kathryn McKinley (Microsoft), Sarita Adve (University of Illinois) and Sandro Gaycken (Digital Society Institute, ESMT Berlin) will focus on data centre tail latency, memory hierarchies in the era of specialization, and the ‘as yet unsolvable problem’ of cybersecurity.

The City of Stockholm will host a conference evening reception at the famous Stockholm City Hall, home of the Nobel Prize banquet. Once again, the biggest international names in technology have shown their confidence in HiPEAC by generously supporting the conference.

Source: Barcelona Supercomputing Center

The post HiPEAC Conference Begins January 23 appeared first on HPCwire.

NEC Joins Forces With Micro Strategies

Wed, 01/18/2017 - 06:40

IRVING, Tex., Jan. 18 — NEC Corporation of America (NEC), a leading provider and integrator of advanced IT, communications, networking and biometric solutions, today announced that it has significantly strengthened its data networking channel with the addition of Parsippany, New Jersey-based Micro Strategies Inc., a leading provider of enterprise technology solutions for over 30 years. Micro Strategies specializes in the implementation of Networking, Mobility, Analytics, Security, Cloud, Infrastructure, Software, ECM, and High Availability solutions.

“We are delighted to join with Micro Strategies, one of the fastest growing companies in our space over the past 12 years, averaging annual revenue growth of around twelve per cent,” said Larry Levenberg, Vice President, NEC Corporation of America. “This complementary relationship combines our strength in Infrastructure, SDN, and cloud services with Micro Strategies’ growing footprint in multiple facets of IT.”

Micro Strategies has two innovation centers in New Jersey and Pennsylvania and the NEC relationship will initially focus on delivering a broad range of converged infrastructure technology solutions and backup.

Starting with its mainframes 40 years ago, NEC has engineered highly efficient storage solutions that reduce the ever-growing cost of storing business critical data. NEC storage solutions deliver high performance, superior scalability, and higher data resiliency. Virtualization extends storage infrastructure investments to reduce costs and simplify manageability. NEC’s Express5800 Server Series provides innovative features that address today’s complex IT infrastructure computing needs. Powered by energy efficient and reliable Intel Xeon processors, Express5800 servers deliver the proven performance and advanced functionality that reduce procurement and operational costs.

“We are very pleased to join forces with NEC,” said Anthony Bongiovanni, president and CEO of Micro Strategies. “This is an incredibly exciting time of growth for Micro Strategies and fundamental to our success is our customer-centric focus and the broad range of solutions we are able to offer through our partner relationships. We looked with diligence at how the addition of a partner can benefit our customers and NEC met all the criteria. We feel NEC aligns with our strategy going forward with a similar business philosophy they refer to as ‘Smart Enterprise’.”

Source: NEC

The post NEC Joins Forces With Micro Strategies appeared first on HPCwire.

Women Coders from Russia, Italy, and Poland Top Study

Tue, 01/17/2017 - 16:27

According to a study posted today on HackerRank, the best women coders, as judged by performance on HackerRank challenges, come from Russia, Italy, and Poland; the U.S. placed 14th. The countries with the largest proportions of women coders participating in the challenges are India, the United Arab Emirates, and Romania, with the U.S. eleventh.

Attracting women to STEM careers generally, and to HPC specifically, is an ongoing challenge, although progress is being made (see HPCwire interview: A Conversation with Women in HPC Director Toni Collis). In the HackerRank study, roughly 17 percent of all coders participating in its challenges are women. Interestingly, that figure roughly mirrors the proportion of women in technical positions at Google (17 percent) and Facebook (15 percent), according to HackerRank.

As with all such studies, this one should be taken with a grain of salt. “We began our analysis with an attempt to assess exactly how many HackerRank test takers are female. Though we don’t collect gender data from our users, we were able to assign a gender to about 80% of users based on their first name. We did not include first names with equal gender distributions,” reports HackerRank.

To determine the top performers, HackerRank reviewed scores on algorithms challenges, which account for more than 40 percent of all HackerRank tests. Algorithms challenges include sorting data, dynamic programming, searching for keywords, and other logic-based tasks. “Scores typically range from 0 to 115 points, although scores can reach as high as 10,000. We examined the 20 countries with the most female users in order to have large sample sizes. Russia’s female developers, who only account for 7.8 percent of Russian HackerRank users, top the list with an average score of 244.7 on algorithms tests,” according to the blog.
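To make the methodology concrete, here is a toy sketch of the kind of aggregation the blog describes: assign a gender from a first-name lookup, drop ambiguous or unknown names, and average algorithms scores for women by country. The name lists and sample records below are invented for illustration and are not HackerRank’s data or code.

```python
# Toy illustration of the study's approach: infer gender from first names,
# exclude names that cannot be classified, and average algorithm scores
# for women by country. All names and records here are invented.
from collections import defaultdict

FEMALE_NAMES = {"anna", "maria", "olga"}   # hypothetical lookup table
MALE_NAMES = {"ivan", "marco", "piotr"}

def infer_gender(first_name):
    name = first_name.lower()
    if name in FEMALE_NAMES:
        return "female"
    if name in MALE_NAMES:
        return "male"
    return None  # ambiguous or unknown names are excluded, as in the study

# (first_name, country, algorithms_score) sample records, invented
records = [
    ("Anna", "Russia", 250.0),
    ("Olga", "Russia", 239.4),
    ("Maria", "Italy", 230.1),
    ("Ivan", "Russia", 180.0),
]

totals = defaultdict(lambda: [0.0, 0])   # country -> [score sum, count]
for first_name, country, score in records:
    if infer_gender(first_name) == "female":
        entry = totals[country]
        entry[0] += score
        entry[1] += 1

# Rank countries by the average algorithms score of their women coders.
for country, (score_sum, n) in sorted(totals.items(),
                                      key=lambda kv: kv[1][0] / kv[1][1],
                                      reverse=True):
    print(f"{country}: average female algorithms score {score_sum / n:.1f}")
```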

More details can be found in the full blog: https://www.hackerrank.com/work/tech-recruiting-insights/blog/female-developers

The post Women Coders from Russia, Italy, and Poland Top Study appeared first on HPCwire.

Spurred by Global Ambitions, Inspur in Joint HPC Deal with DDN

Tue, 01/17/2017 - 12:30

Inspur, the fast-growth cloud computing and server vendor from China that has several systems on the current Top500 list, and DDN, a leader in high-end storage, have announced a joint sales and marketing agreement to produce solutions based on DDN storage platforms integrated with servers, networking, software and services from Inspur.

The two companies said they will jointly target oil and gas, life sciences, financial services, academia and other sectors.

The two companies have a track record of working together on joint deals, primarily in Asia.

“Inspur has worked closely with DDN on projects across China for many years, and we are excited to expand our collaboration with DDN to deliver joint solutions to customers worldwide,” said Vangel Bojaxhi, Inspur’s worldwide business development manager.

Inspur, founded in 2000, is headquartered in Jinan, Shandong Province, and has 26,000 employees. It has the largest share (18.2 percent) of China’s server market and is, according to the company, the largest server provider for Alibaba and Baidu. According to industry watcher Gartner Group, Inspur was the world’s fastest growing server vendor for the first three quarters of 2016, with server shipment year-on-year growth of 28 percent during that period.

Privately held DDN has evolved over the past two years from a nearly 100 percent partner-led sales model to a 50-50 balance between partnerships and direct sales, according to Larry Jones, DDN’s partner manager for the Inspur relationship. He said the deal was spurred by Inspur’s ambition to expand its reach beyond China; Inspur hired an international business development manager who is familiar with DDN and has worked with Jones in the past, and the joint agreement grew from there.

“It’s really exciting for both companies,” he said. “Inspur has its own storage organization, but like most server manufacturers they don’t have an HPC storage offering that’s anywhere near what DDN can do.”

“For us, we’ve done some deals with Inspur in China but never on a global basis,” said Jones. The relationship is “in its infancy, but we’re hoping to grow it slowly and build on our mutual relationships with clients and take advantage of the expertise and core competencies of each company.”

While the partnership will help Inspur gain a toehold in the U.S. market, for DDN it is intended to help the company reach markets in China and Europe, according to Jones. He said this is the first partnership of this kind in the U.S. for Inspur.

“I think it’s going to start more in the commercial marketplace,” he said, “then as time goes on it will progress into the traditional HPC market as Inspur is accepted on a global basis. The things they do in China in the high end, traditional HPC space, they do with Chinese components. But they also are a Western component servers vendor too, so they make computers out of all the western components, (x86) machines that look very much like a Dell EMC or an HPE or Lenovo or IBM kind of thing, with Intel and Nvidia processors, as opposed to machines based on all Chinese technologies.”

“We’ll be offering a fully integrated stack,” added Jones. “So if you know what we do with Lustre and Spectrum Scale file systems offerings, here’s a set of equipment that has been built, tested, we know it works, deployed, supported, so everything is there. Inspur is in position to say: ‘Here’s a complete, integrated solution that includes DDN storage.’”

The post Spurred by Global Ambitions, Inspur in Joint HPC Deal with DDN appeared first on HPCwire.
