Feed aggregator

Mellanox Technologies Updates First Quarter Outlook

Related News- HPC Wire - 1 hour 20 min ago

SUNNYVALE, Calif. & YOKNEAM, Israel, Feb. 23, 2018 — Mellanox Technologies, Ltd. (NASDAQ:MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, has announced updates to its first quarter outlook previously provided on its fourth quarter earnings call and earnings release on January 18, 2018.

First Quarter 2018 Outlook

Mellanox currently projects:

  • Quarterly revenues of $240 million to $250 million
  • Non-GAAP gross margins of 68.5% to 69.5%
  • Non-GAAP operating expenses of $120 million to $122 million
  • Share-based compensation expenses of $16.3 million to $16.8 million
  • Non-GAAP diluted share count in the range of 52.4 million to 52.9 million

“Throughout the first quarter, it has become clear that the trends we experienced at the end of 2017 are holding firm, and customer transition from 10 gigabit per second to 25 gigabit per second Ethernet adapters is accelerating across the board,” said Eyal Waldman, Chief Executive Officer of Mellanox. “We are particularly pleased to see that this widespread adoption of 25 gigabit per second technology covers the majority of customer categories in every major market around the world, a direct result of the strategy we have been executing on in recent years. Our investment in R&D is driving product innovation and sustainable long-term growth, and we are well positioned to capture further market share as the landscape shifts to 25 gigabit per second and beyond. In fact, Mellanox is already offering leading edge 25, 50 and 100 gigabit per second Ethernet solutions. We continue to build momentum and make progress on our financial and operational initiatives, by reducing our operating expense run rate and driving efficiencies in our business, and are confident that our focused investment strategy will continue to deliver positive results into the future.”

CFO Transition

Today, Mellanox also announced that Jacob Shulman has accepted an executive position at a pre-IPO company and will step down as Chief Financial Officer of Mellanox on May 4, 2018, after announcing fiscal first quarter 2018 earnings and signing off on the filing of the first quarter financials with the SEC.

The Company has been identifying and evaluating candidates to succeed Mr. Shulman as CFO with the assistance of an executive search firm.

“On behalf of the Board and management team, I would like to thank Jacob for his financial leadership and contributions to Mellanox,” said Mr. Waldman. “Jacob played an instrumental role in building Mellanox’s solid financial foundation. The Board of Directors, our employees and I are grateful to Jacob for his service and wish him the best as he embarks on an exciting new chapter in his career.”

Mr. Shulman said, “I joined Mellanox because I believed in our strategic direction, and I continue to believe the Company is positioned to serve our customers and deliver value to shareholders. We have made substantial investments in innovation and R&D over the past five years, and I look forward to seeing those investments bear fruit.”

Mr. Waldman continued, “During his tenure at Mellanox, Jacob added talent and strength to our finance team, which will continue to execute as we conduct the search for our next CFO. The Board and I are committed to finding a strong successor, and are actively working to identify a new finance leader with a proven track record of driving profitable growth and taking decisive action to enable margin expansion.”

About Mellanox

Mellanox Technologies (NASDAQ:MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox

The post Mellanox Technologies Updates First Quarter Outlook appeared first on HPCwire.

New Head of Los Alamos National Lab’s Tech Transfer Division Announced

Related News- HPC Wire - 1 hour 28 min ago

LOS ALAMOS, N.M., Feb. 23, 2018 — On February 26, Antonio “Tony” Redondo will be taking over as head of Los Alamos National Laboratory’s tech transfer division, the Richard P. Feynman Center for Innovation. Named after the famous Manhattan Project physicist, the Feynman Center helps to transition science and technology created at the Laboratory to the private sector.

Antonio “Tony” Redondo. Image courtesy of Los Alamos National Laboratory.

Redondo is the former Theoretical Division leader and currently a senior scientist in the Theory, Simulation and Computation Directorate. In his 35 years at Los Alamos, he has served as principal investigator for several projects, including Soft Matter Mechanical, Rheological and Stability Properties, funded by Procter & Gamble; Metal Corrosion, funded by Chevron; Sustainable Materials, funded by Procter & Gamble; and Crystallization of Sugar, funded by Mars, Inc.

“Tony’s background has given him firsthand experience building effective partnerships between industry and external sponsors, and program and line organizations—skills that will serve him well as director of the Feynman Center,” said Nancy Jo Nicholas, principal associate director for Global Security, which oversees the Feynman Center. “He’ll be a real asset to the organization.”

Redondo will replace David Pesiri, who has been director of the Feynman Center since 2011. Pesiri will move into the Director’s Office to help manage the upcoming management and operation (M&O) contract change. In addition to leading the Feynman Center, he was a team leader for business development for five years. Prior to joining Los Alamos, he was a successful entrepreneur, helping to create and lead several technology companies. Pesiri has more than a decade of management experience at the Laboratory.

“I want to thank Dave for his hard work piloting Feynman Center through significant change over the last several years,” said Laboratory Director Terry Wallace. “He helped establish many important strategic partnerships and strengthened the tie between those partnerships and the Lab’s mission. His work is greatly appreciated and I look forward to having him in the Director’s Office to help with the upcoming M&O contract transition.”

About Los Alamos National Laboratory

Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, BWXT Government Group, and URS, an AECOM company, for the Department of Energy’s National Nuclear Security Administration.

Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.

Source: Los Alamos National Laboratory

The post New Head of Los Alamos National Lab’s Tech Transfer Division Announced appeared first on HPCwire.

OCF Deploys Petascale Lenovo Supercomputer at University of Southampton

Related News- HPC Wire - 9 hours 4 min ago

Feb. 22 — Researchers from across the University of Southampton are benefitting from a new high performance computing (HPC) machine named Iridis, which has entered the Top500, debuting at 251 on the list. The new 1,300 teraflops system was designed, integrated and configured by high performance compute, storage and data analytics integrator, OCF, and will support research demanding traditional HPC as well as projects requiring large scale deep storage, big data analytics, web platforms for bioinformatics, and AI services.

Over the past decade, the University has seen a 425 per cent increase in the number of research projects using HPC services, from across multiple disciplines such as engineering, chemistry, physics, medicine and computer science. The new HPC system is also supporting the University’s Wolfson Unit. Best known for ship model testing, sailing yacht performance and ship design software, the Unit was founded in 1967 to enable industry to benefit from the facilities, academic excellence and research activities at the University of Southampton.

“We have a worldwide customer base and have worked with the British Cycling Team for the last three Olympic games, as well as working with teams involved in the America’s Cup yacht race,” comments Sandy Wright, Principal Research Engineer, Wolfson Unit at the University of Southampton. “In the past 10 years, Computational Fluid Dynamics (CFD) has become a perfectly valid commercial activity, reducing the need for physical experimentation. CFD gives as good an answer as the wind tunnel, without the need to build models, so you can speed up research whilst reducing costs. Iridis 5 will enable the Wolfson Unit to get more accurate results, whilst looking at more parameters and asking more questions of computational models.”

It’s a sentiment echoed by Syma Khalid, Professor of Computational Biophysics at the University: “Our research focuses on understanding how biological membranes function – we use HPC to develop models to predict how membranes protect bacteria. These membranes control how molecules move in and out of bacteria. We aim to understand how they do this at the molecular level. The new insights we gain from our HPC studies have the potential to inform the development of novel antibiotics. We’ve had early access to Iridis 5 and it’s substantially bigger and faster than its previous iteration – it’s well ahead of any other in use at any University across the UK for the types of calculations we’re doing.”

Four times more powerful than the University’s previous HPC system, Iridis comprises more than 20,000 Intel Skylake cores on next-generation Lenovo ThinkSystem SD530 servers – the first UK installation of the hardware. In addition, it is using 10x Gigabyte servers in total containing 40x NVIDIA GTX 1080 Ti GPUs for projects requiring high single precision performance, and OCF has committed to the delivery of 20x Volta GPUs when they become available. OCF’s xCAT-based software is used to manage the main HPC resources, with Bright Computing’s Advanced Linux Cluster Management software chosen to provide the research cloud and data analytics portions of the system.
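
As a rough sanity check on that figure (a sketch only; the exact processor SKU and clock are not stated above, so the 2.0 GHz base clock is an assumption), a Skylake-SP core with two AVX-512 FMA units peaks at 32 double-precision FLOP per cycle, which puts 20,000 cores close to the quoted 1,300 teraflops:

    # Back-of-envelope peak for the Iridis CPU partition.
    # Assumptions: ~2.0 GHz base clock, 32 DP FLOP/cycle/core (2x AVX-512 FMA).
    cores = 20_000
    clock_ghz = 2.0
    flops_per_cycle = 32
    peak_tflops = cores * clock_ghz * flops_per_cycle / 1_000
    print(f"Peak ~{peak_tflops:,.0f} TFLOPS")  # -> Peak ~1,280 TFLOPS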

“We’re purposefully embracing more researchers and disciplines than ever before at the University, which brings a lot of competing demands, so we need a more agile way to provision systems,” says Oz Parchment, Director of iSolutions at the University of Southampton. “Users need an infrastructure that’s flexible and easily managed, which is why Bright Computing is the ideal solution, particularly as we’re now embracing more complex research disciplines.”

Iridis has two petabytes of storage provided by a Lenovo DSS Spectrum Scale Appliance connected via Mellanox EDR InfiniBand and more than five petabytes of research data management storage using IBM tape, taking advantage of the latest 15TB-capable drives and media. The University has committed to a proof of concept of StrongBox’s StrongLink data and tape management solution, a unique approach to managing data environments that automates data classification for life-cycle data management for any data, on any storage, anywhere.

Oz continues: “The University of Southampton has a long tradition in the use of computational techniques to generate new knowledge and insight, stretching back to 1959 when our researchers first used modelling techniques on the design of the Sydney Opera House. Data, and the analysis of that data using computational methods, is at the heart of modern science and technology and, in order to attract the best world-class researchers, we need world-class research facilities.”

Julian Fielden, Managing Director of OCF comments: “Academia really is feeling the pressure in attracting new researchers, groups and grants. Competition has never been fiercer. Throughout our 13-year relationship with the University of Southampton, it has had the determination and ambition to compete not just nationally, but internationally and, critically, provide the HPC, Cloud and Data Analytics services that world-class researchers desire.”

On working with OCF, Parchment concludes: “We’ve been working with OCF since 2004. The team has always delivered to our needs and gone the extra mile providing services, support and consultancy in addition to the hardware and software solutions. OCF listens and understands our needs, putting forward ideas that we haven’t even thought about. The team are all technical innovators.”

About the University of Southampton

The University of Southampton drives original thinking, turns knowledge into action and impact, and creates solutions to the world’s challenges.  We are among the top one per cent of institutions globally.  Our academics are leaders in their fields, forging links with high-profile international businesses and organisations, and inspiring a 24,000-strong community of exceptional students, from over 135 countries worldwide. Through our high-quality education, the University helps students on a journey of discovery to realise their potential and join our global network of over 200,000 alumni.  www.southampton.ac.uk

About OCF

OCF specialises in supporting the significant big data challenges of private and public UK organisations. Our in-house team and extensive partner network can design, integrate, manage or host the high performance compute, storage hardware and analytics software necessary for customers to extract value from their data. With heritage of over 15 years in HPC, managing big data challenges, OCF now works with over 20 per cent of the UK’s Universities and Research Councils, as well as commercial clients from the automotive, aerospace, financial, manufacturing, media, oil & gas, pharmaceutical and utilities industries. www.ocf.co.uk

Source: OCF

The post OCF Deploys Petascale Lenovo Supercomputer at University of Southampton appeared first on HPCwire.

Lenovo Unveils Warm Water Cooled ThinkSystem SD650 in Rampup to LRZ Install

Related News- HPC Wire - Thu, 02/22/2018 - 22:19

Lenovo today unveiled the ThinkSystem SD650 high-density server with third-generation direct water cooling technology developed in tandem with partner Leibniz Supercomputing Center (LRZ) in Germany. The servers are designed to operate using warm water, up to 45°C, lowering datacenter power consumption 30-40 percent compared to traditional cooling methods, according to Lenovo.

Nearly 6,500 of the ThinkSystem SD650s featuring Intel Xeon Platinum processors interconnected with Intel Omni-Path Architecture will be put into production at LRZ this year, providing the supercomputing center with 26.7 petaflops of peak performance, housed in about 100 racks. The SuperMUC-NG supercomputer will be deployed with Lenovo’s new Lenovo Intelligent Computing Orchestrator (LiCO) and the Lenovo Energy Aware Runtime (EAR) software, a technology that dynamically optimizes system infrastructure power while applications are running.

Lenovo’s Scott Tease holding a ThinkSystem SD650 server

“Pretty much all the investments that we made to get to exascale LRZ is taking advantage of in this bid we won with them,” said Scott Tease, executive director, HPC and AI at Lenovo in an on-site briefing at Lenovo’s headquarters in Morrisville, North Carolina, last week. “We will start building systems and start shipping them in March; the floor will be ready by the end of April, and move-in starts in early May. We’ll be ready to do acceptance in September with final customer acceptance in November.”

The direct-water cooled design of the SD650 enables 85-90 percent heat recovery; the rest can easily be managed by a standard computer room air conditioner. The hot water coming off the servers can be recycled to warm buildings in the winter, as LRZ does with its petascale SuperMUC cluster, but the technology developed by Lenovo for SuperMUC-NG actually transforms that heat energy back into cooling for networking and storage components.
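
To put that recovery fraction in concrete terms, a quick sketch (the 40 kW rack power is a hypothetical round number, not an LRZ figure) shows how little heat is left for the room air to absorb:

    # Residual air-side heat with 85-90% of the heat captured by the water loop.
    # The 40 kW rack power is a hypothetical round number.
    rack_kw = 40.0
    for recovery in (0.85, 0.90):
        to_air_kw = rack_kw * (1 - recovery)
        print(f"{recovery:.0%} recovery -> {to_air_kw:.1f} kW to room air per rack")
    # 85% recovery -> 6.0 kW; 90% recovery -> 4.0 kW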

The endothermic magic trick only works with “high quality heat,” Lenovo thermal engineer Vinod Kamath told us, so the SD650 servers were designed to accept inlet water temperatures of up to 50°C. Water is piped out of the servers at 58-60°C depending on workload and sent through an adsorption chiller, where it is converted to chilled 20°C water suitable for cooling storage and networking components.

Adsorption chilling will be applied to half the nodes at the LRZ install, generating about 600 kilowatts of chilled water capacity. This translates into about 100,000 Euros a year in saved energy at the European site, where the cost for energy is about 16-18 Eurocents per kilowatt-hour (roughly 2-3 times the cost for similar sites in the United States). Lenovo claims a 50 percent energy savings with the endothermic reaction versus a standard compressor, dropping the datacenter PUE from 1.6 to less than 1.1.
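
A hedged back-of-envelope check of those numbers: 100,000 Euros a year at roughly 0.17 Euro per kilowatt-hour corresponds to about 67 kW of avoided electrical draw running year-round, which is roughly half of what a conventional compressor chiller (assuming a COP of about 4.5, a value not stated above) would consume to deliver 600 kW of cooling, so the figures hang together with the 50 percent savings claim:

    # Rough consistency check on the quoted ~100,000 EUR/year saving.
    # The compressor COP and continuous operation are assumptions.
    chilled_kw = 600                 # adsorption chilled-water capacity
    price_eur_kwh = 0.17             # mid-point of 16-18 Eurocents/kWh
    hours = 8760                     # assume year-round operation
    compressor_cop = 4.5             # assumed COP of a displaced compressor chiller
    compressor_kw = chilled_kw / compressor_cop      # ~133 kW electrical
    saved_kw = 0.5 * compressor_kw                   # the article's ~50% saving
    print(f"~{saved_kw * hours * price_eur_kwh:,.0f} EUR/year")  # -> ~99,280 EUR/year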

An interesting fact: adsorption chillers were used in the late 1800s in America to keep milk cold. At least moderately warm water is required, and if you’re using chilled water to cool, you can’t really take advantage of the economics of an adsorption chiller. With 60°C inlet water, the efficiency of Lenovo’s adsorption chiller is about 60 percent. If your energy source has a higher temperature, say 80-90°C, then the extraction is even more efficient, but 60°C is good enough for some significant savings.

The cooling solution can be traced back to 2012, when Lenovo-IBM (Lenovo acquired IBM’s x86 server business in 2014) was approached by LRZ to develop a system that was both powerful and extremely energy efficient. The first production implementation to come out of the partnership was the 9,200-node SuperMUC at LRZ, which achieved a number four ranking on the June 2012 Top500 list. The custom motherboard, developed with Intel, was cooled by water piped over compute and memory and back out of the system. LRZ used the hot water coming out of the system to heat parts of their building, which defrayed some of their overall energy costs.

The partnership also led to the deployment of the CoolMuc-2 cluster at LRZ in 2016. That system was the prototype for the next-gen LRZ cooling solution; it used hot outlet water to drive adsorption chillers that generated cool water to remove watts from storage systems in the cluster.

“When we started doing this it was all about power cost,” said Tease. “It was all about datacenter optimization. Those things are still important, but we’re starting to see people recognize that water will allow them to do things that air can’t. I can do special processors that I can’t do with air; I can achieve densities that in the future I can’t do with air. We are really excited that we’ve got such a unique design, what we believe is an industry-leading design point as the market is coming to where we’ve been.”

The SD650 HPC servers have no system fans, and operate at lower temperatures when compared to standard air-cooled systems. Chillers are not needed for most customers, which translates into further savings and a lower total cost of ownership. The new server supports high-speed EDR InfiniBand and Omni-Path fabrics as well as standard SSDs, NVMe SSDs, and M.2 boot SSDs.

In demoing the SD650, Kamath showed how the water supply comes in through the 6U NeXtScale n1200 chassis and goes into the servers. “We have a calibrated flow split between the processor and the memory to tune the heat transfer,” he said. “We recognize that networking devices are power hungry devices now and will be more so in the future, so the water that splits to the memory is coupled to a drive, an NVMe or SSD, and coupled to a network device, like ConnectX-5 or OPA, and then the water flows and connects back to the conduction point.”

Two Lenovo ThinkSystem SD650 servers on the compute tray that provides water cooling. Source: Lenovo

The Lenovo ThinkSystem SD650 dual-node tray is designed for high-performance computing (HPC), large-scale cloud, and heavy simulations. One 6U NeXtScale n1200 enclosure accommodates up to 12 SD650 compute nodes, accommodating up to 24 Intel Xeon Scalable Processors, 9.2TB of memory, 24 SFF SSDs or 12 SFF NVMe drives, and 24 M.2 boot drives.

Lenovo has paid special attention to next-generation memory technologies. The system has 12 DIMM slots using TruDDR4 memory, but there are actually 16 slots total. Four have been reserved for 3D XPoint and other future components. The cooling system is able to extract 10 watts from standard DIMMs, and for 3D XPoint and other future higher-powered memory designs, there will be two water lines going through a DIMM that can consume 18 watts. Lenovo also provides a handy DIMM removal tool that makes changing out memory quick and easy.

Lenovo has been picking up major system awards in Germany since acquiring IBM’s x86 business three years ago. It has the fastest supercomputer in Spain, Italy, Denmark, Norway, Australia, Canada, and soon in Germany with LRZ. It has also been making in-roads with its warm water cooling solutions. In addition to its systems at LRZ, it has warm water HPC installations at Peking University (first ever in China), India Space Administration (first ever in India), and a multi-university system in Norway.

Liquid cooling is becoming mainstream in HPC, especially in environments where there is a need for high density or in locations with high electricity rates. Lenovo tells clients that when it comes to electricity prices, anything over 15 cents a kilowatt-hour will provide a return on investment within one year.
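
That rule of thumb can be sketched from the PUE figures quoted earlier: dropping PUE from 1.6 to about 1.1 removes roughly 0.5 kW of facility overhead per kW of IT load, and payback is simply the water-cooling premium divided by the resulting annual saving. The premium used below is a hypothetical placeholder, not a Lenovo price:

    # Sketch of the "ROI within a year above ~15 cents/kWh" rule of thumb.
    # The per-kW water-cooling premium is a hypothetical placeholder.
    pue_air, pue_water = 1.6, 1.1
    overhead_saved = pue_air - pue_water      # ~0.5 kW saved per kW of IT load
    price_usd_kwh = 0.15
    hours = 8760
    annual_saving = overhead_saved * hours * price_usd_kwh   # per kW of IT
    premium_per_kw_it = 600.0                 # assumed extra cost of water cooling
    print(f"${annual_saving:.0f}/kW-IT/year saved, "
          f"payback {premium_per_kw_it / annual_saving:.2f} years")
    # -> $657/kW-IT/year saved, payback 0.91 years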

Another benefit of removing more heat is that CPUs can run in “turbo” mode nonstop, which translates into 10 percent greater performance from the CPU. The SD650 is managed by Lenovo Intelligent Computing Orchestrator (LiCO), a management suite with an intuitive GUI that supports management of large HPC cluster resources and accelerates development of AI applications. LiCO works with the most common AI frameworks, including TensorFlow, Caffe and Microsoft CNTK.

The post Lenovo Unveils Warm Water Cooled ThinkSystem SD650 in Rampup to LRZ Install appeared first on HPCwire.

Do Cryptocurrencies Have a Part to Play in HPC?

Related News- HPC Wire - Thu, 02/22/2018 - 15:48

It’s easy to be distracted by news from the US, China, and now the EU on the state of various exascale projects, but behind the vinyl-wrapped cabinets and well-groomed sales execs is an army of Excel-wielding PMO and accountancy staff trying desperately to keep projects on time and on budget. Applying the same level of scrutiny once the system is installed is sadly somewhat rare, however.

One argument we often hear from staff at research-focused organisations is that they shouldn’t need to justify their expenditure on HPC in great detail – after all, supporting fundamental research is part of the core mission of the institution, and no-one is doubting the role computational work plays in that effort. Ultimately though, universities and public-sector research organisations are increasingly being driven towards more business-like operational models, and so HPC may need to follow suit. Since cloud HPC is only just barely starting to gain acceptance as a viable option, providers of on-premise HPC could do with a way of insulating themselves from the scrutiny of accountants who don’t concern themselves with the details of why a bare-metal solution is still the preferred choice of the user community.

It has always been the case that those responsible for paying for HPC have a vested interest in exploring alternative sources of revenue, particularly by squeezing out wasted cycles. It isn’t that they don’t trust their research staff to deliver value – investment in HPC has been shown time and again to deliver a strong return, with numbers up to 500x ROI commonly quoted. Seeking alternatives to the normal user-base isn’t about increasing income per se, but rather about diversifying where that income originates from as a hedge against sudden changes in funding policies or unexpected drops in utilisation.

Let’s take university HPC funding as an example. Deciding how much to spend on a HPC system is generally a simple function of how much capital is actually available, and how much research income the university expects the system to generate. If the university were to invite paying commercial customers onto their system, they could augment their investment and create opportunities for collaboration at the same time – surely a win-win?

Sadly, it isn’t all that easy. Creating a service which meets the higher standards of industry in terms of data protection and reliability can quickly make the proposition unaffordable, particularly now that the customers can obtain pay-as-you-go computing resources from cloud vendors, which readily offer the sort of SLAs a company would expect when paying for a service. It is for this reason that Red Oak generally advises its university customers to focus their HPC outreach efforts on delivering value to partners through the expertise of their staff, rather than their unused compute capability. So, without bringing commercial users on board, how could a research HPC system owner diversify their income sources?

Cryptocurrencies have created an odd effect in the computing world whereby idle cycles have suddenly become quite valuable – it is now worthwhile for hackers and unscrupulous websites to mine cryptocurrency in your browser, sometimes justified as an alternative to presenting ads or harvesting your data.

Based on recent trends in mining difficulty and price, we can estimate that mining a cache-bound altcoin will typically deliver in excess of a 2x return on the cost of electricity, but will not come close to meeting the TCO for a HPC system. Relative to just running research then, we can determine that it might be worthwhile filling up unused nodes on your cluster with low-priority mining workloads, which can easily be pre-empted by “real” compute jobs.
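
A minimal sketch of that backfill decision, with every figure a hypothetical placeholder rather than a measurement from any real system or coin, is to compare estimated coin revenue per node-hour against the marginal electricity cost, while keeping the full amortised TCO in view:

    # Hypothetical sketch: is backfill mining on an idle node worth its electricity?
    # All figures are illustrative placeholders, not real measurements.
    def mining_margin_per_hour(revenue_per_hour, node_power_kw, price_per_kwh):
        """Net gain (or loss) per node-hour of low-priority mining."""
        electricity = node_power_kw * price_per_kwh
        return revenue_per_hour - electricity

    revenue = 0.06        # assumed coin revenue per node-hour
    power_kw = 0.35       # assumed node draw under mining load
    price = 0.08          # assumed electricity price per kWh
    tco_per_hour = 0.50   # assumed fully amortised TCO per node-hour

    margin = mining_margin_per_hour(revenue, power_kw, price)
    print(f"margin {margin:+.3f}/hr, {revenue / (power_kw * price):.1f}x electricity, "
          f"but only {revenue / tco_per_hour:.0%} of TCO")
    # -> margin +0.032/hr, 2.1x electricity, but only 12% of TCO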

There is an obvious, and dangerous, corollary to the above; while we are deriving some sort of tangible benefit from this mining activity, why not just bump up the priority a bit to exclude those users whose research doesn’t bring in any grant income? This is the sort of thing which might seem like an easy win to someone with too much focus on the balance sheet, but clearly neglects the benefits which speculative research activities can offer down the road. Companies whose HPC activity is defined by a rigid set of workflows might be able to justify the more mercenary approach, but will ultimately suffer the same long-term negative impact on innovation if they focus too heavily on squeezing value out of their hardware by brute force.

Once a HPC system owner has generated a stack of digital currency, they are faced with a new problem – sell or hold? This is where things become even more interesting; selling the currency immediately would cover the extra electricity costs which are being generated (assuming someone is occasionally tracking prices, and switching off the miners when they become unprofitable). As the safe and sensible (read: boring and unimaginative) option, this will appeal to some managers who are already nervous about the whole cryptocurrency business, but those who are more bullish could instead focus on convincing their senior stakeholders that this kind of high risk, high reward “investment” has greater long-term gains to exploit. They should be careful though; if too much staff time is spent tracking prices and making investment decisions, the miners might quickly find their profits wiped out by decreased productivity!

Besides the potential to act as an enormous distraction, institutions should also consider that getting involved in cryptocurrency might carry some reputational risk. The widely adopted variants which are suitable for CPU mining (and hence more appropriate for mining on HPC) tend to emphasise privacy, naturally leading to a large uptake within criminal circles. Furthermore, while turning electricity directly into money might have a good business case, it doesn’t exactly fit well within any legitimate “green” agenda which might be in effect.

One final message which anyone considering the addition of mining to their HPC repertoire ought to heed – remember who owns the coins! Earlier days of cryptocurrency saw mining occasionally being done on HPC systems, but back then it was rogue users looking to pocket the income rather than administrators with a plan and, more importantly, approval.

Whether this extra monetisation scheme is a good idea or not depends more on the risk appetite of the institution and the availability of good ROI calculations than on any technical quirk of the setup. Every HPC system manager ought to be evaluating their TCO against the benefits their service provides; the potential for monetising wasted cycles need only be one more term in the equation.
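
Expressed that way, the monetisation term really is just one more line in the annual cost-benefit comparison every service manager should already be making (all figures below are hypothetical placeholders):

    # The "one more term" view of a service's annual balance.
    # Every figure is a hypothetical placeholder.
    annual_tco = 1_200_000       # hardware amortisation, power, staff, hosting
    research_value = 1_500_000   # estimated value of the research output enabled
    mining_income = 40_000       # net income from monetising wasted cycles
    net_benefit = research_value + mining_income - annual_tco
    print(f"Net annual benefit: {net_benefit:,}")  # mining is a small extra term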

About the Author

Chris Downing joined Red Oak Consulting @redoakHPC in 2014 on completion of his PhD thesis in computational chemistry at University College London. Having performed academic research using the last two UK national supercomputing services (HECToR and ARCHER) as well as a number of smaller HPC resources, Chris is familiar with the complexities of matching both hardware and software to user requirements. His detailed knowledge of materials chemistry and solid-state physics means that he is well-placed to offer insight into emerging technologies. Chris, a Senior Consultant, has a highly technical skill set and works mainly in the innovation and research team, providing a broad range of technical consultancy services. To find out more, visit www.redoakconsulting.co.uk.

This article originally appeared on the Red Oak Consulting blog. It is republished here by agreement with Red Oak.

The post Do Cryptocurrencies Have a Part to Play in HPC? appeared first on HPCwire.

ICEI Public Information Event to be Held at Barcelona Supercomputing Center

Related News- HPC Wire - Thu, 02/22/2018 - 12:39

Feb. 22, 2018 — BSC (Spain), CEA (France), CINECA (Italy), ETH Zuerich/CSCS (Switzerland) and Forschungszentrum Juelich/JSC (Germany) jointly announce a Public Information Event, which will be held on 15 March 2018 from 10:00 to 16:00 CET on the premises of BSC in Barcelona, Spain. The purpose of the event is to consult the market in preparation for a possible procurement of equipment and R&D services.

The partners plan to deliver a set of e-infrastructure services that will be federated to form the Fenix Infrastructure. The distinguishing characteristic of this e-infrastructure is that data repositories and scalable supercomputing systems will be in close proximity and well integrated. First steps in this direction are planned to be realised in the context of the European Human Brain Project, which will be the initial prime user of this research infrastructure.

The purpose of the ICEI Public Information Event is to inform all interested suppliers about the expectations and plans of the partners, as well as to gather their feedback. All interested suppliers of relevant solutions and services are invited to participate in this event. The overall number of participants is limited. To maximise the number of participating suppliers, the number of representatives per company is restricted to two.

In order to register, please use this registration form by 12:00 CET on 12 March.

For any further information, please send an email to ice-pie@bsc.es

Source: Barcelona Supercomputing Center

The post ICEI Public Information Event to be Held at Barcelona Supercomputing Center appeared first on HPCwire.

Mellanox Appoints Steve Sanghi and Umesh Padval to Board of Directors

Related News- HPC Wire - Thu, 02/22/2018 - 08:39

SUNNYVALE, Calif. & YOKNEAM, Israel, Feb. 22, 2018 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced the appointment of Steve Sanghi, Chief Executive Officer of Microchip Technology, and Umesh Padval, Partner at Thomvest Ventures, to the Company’s Board of Directors, effective immediately. With these additions, the Mellanox Board now consists of eleven directors. Mr. Sanghi and Mr. Padval fill existing vacancies on the Board.

Mr. Sanghi has established himself as a highly respected leader in the semiconductor industry with unmatched operational expertise and a proven ability to drive profitable growth. Under his leadership as CEO of Microchip since 1991, the company’s revenue has grown from $89 million as of its 1993 IPO to a $4.0 billion run rate with an industry leading 38% operating margin. In addition, Microchip has acquired and integrated 19 companies and executed a clear investment strategy focused on developing and sustaining profitable product lines while driving market share, leading to stock price appreciation of approximately 14,200%, excluding dividends.

Mr. Padval brings to Mellanox over 30 years of technology, marketing, operations and strategic expertise. Throughout his career, Mr. Padval has been a CEO, investor and entrepreneur, and has served on over 20 public and private boards. Since Mr. Padval joined the Integrated Device Technology (IDT) Board in 2008, the company has seen significant operating margin expansion and stock price appreciation, which has resulted in over 5x market cap growth. Mr. Padval’s deep operating experience, his profound understanding of the inner workings of the semiconductor industry and strategic expertise, together with his boardroom leadership, have created significant shareholder value for several companies and make him a tremendous addition to the Board of Mellanox.

“The Mellanox Board of Directors has been actively seeking highly qualified, independent directors to fill the two remaining seats on our Board, and we are delighted to welcome Steve and Umesh,” said Irwin Federman, Chairman of the Board of Mellanox. “In addition to their outstanding experience and complementary skill sets, Steve and Umesh bring strong leadership and new perspectives that will be instrumental to continued growth and increased shareholder value for Mellanox investors.

“Steve is one of the best operators in the semiconductor industry, with an impressive track record of value creation as CEO of Microchip. We are proud to add him to our Board and are confident his steadfast focus on operational efficiency and disciplined approach to capital allocation will be invaluable to Mellanox as the Company continues to execute on our strategic plan. Additionally, Umesh brings extensive public company leadership experience and operational excellence to drive long-term growth and profitability. His balanced perspective is ideally suited for Mellanox which has maintained its high growth trajectory while achieving the scale to provide end-to-end differentiated solutions to its customers.”

“This is an exciting time to join the Mellanox Board,” said Mr. Sanghi. “Mellanox’s focus on innovation and R&D, along with its forward looking strategy, has positioned the Company for sustained growth and profitability – and that is being proven out by the Company’s strong results and guidance. I have always been focused on the importance of driving profitable growth and will work closely with the Mellanox Board to ensure Mellanox continues on its upward trajectory and reaches its full potential.”

“I am excited to be part of the Mellanox team, which is addressing a large market opportunity by developing innovative and market leading product platforms,” said Mr. Padval. “I am looking forward to using my past experience to contribute to the Company’s goals of continued revenue growth and maximizing shareholder value.”

Mr. Federman continued, “Today’s appointments demonstrate our continued commitment to best-in-class corporate governance and to ensuring we have the right mix of diversity, independence, experience and skills to position Mellanox for future success. I look forward to working with Steve and Umesh, as well as the other outstanding members of our Board and management team to continue delivering industry-leading growth and maximizing shareholder value.”

About Steve Sanghi

Steve Sanghi was named the President of Microchip in August 1990, Chief Executive Officer in October 1991 and the Chairman of the Board of Directors in October 1993. Prior to that, Mr. Sanghi was Vice President of Operations at Waferscale Integration, Inc., a semiconductor company, from 1988 to 1990. Mr. Sanghi was employed by Intel Corporation from 1978 to 1988, where he held various positions in management and engineering, most recently serving as General Manager of Programmable Memory Operations. Additionally, Mr. Sanghi currently serves on the board of Myomo Inc., a commercial stage medical device company. He has previously served on the boards of many public and private companies including Hittite Microwave, Xyratex, Adflex Solutions, Artisoft and Flip Chip International. Mr. Sanghi has won numerous industry awards, including “Executive of the Year” by Electronic Engineering Times in 2010 and 2016. He also won the “Arizona Entrepreneur of the Year” award from Ernst and Young in 1994. Mr. Sanghi holds a Master of Science degree in Electrical and Computer Engineering from the University of Massachusetts and a Bachelor of Science degree in Electronics and Communication from Punjab University, India.

About Umesh Padval

Mr. Padval brings over 30 years of broad operating experience in technology, marketing, sales, operations and general management in a variety of high technology industries coupled with his board experiences at over 20 public and private companies. Mr. Padval is a Partner at Thomvest Ventures. Prior to that, Mr. Padval served as a Partner at Bessemer Venture Partners, and before that, as Executive Vice President of the Consumer Products Group at LSI Logic Corporation, where he was also previously the Senior Vice President of the company’s Broadband Entertainment Division. Prior to that, Mr. Padval served as the CEO and Director of C-Cube (which was acquired by LSI Logic in 2001) and was previously President of the company’s Semiconductor Division. Prior to joining C-Cube, Mr. Padval held senior management positions at VLSI Technology and Advanced Micro Devices. Mr. Padval currently serves on the board of one public company, IDT, as well as on the boards of several private companies: Avnera Corporation, Avalanche Technologies, Lastline, Tactus Technologies and Sutter Health Pacific Division. Mr. Padval has previously served on the public company boards of Monolithic Power Systems, Elantec Semiconductor, Silicon Image, Entropic Communications and C-Cube Microsystems. He also served on the advisory boards at Stanford University. Mr. Padval holds a Bachelor of Technology from the Indian Institute of Technology, Mumbai, and an MS in Engineering from Stanford University.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox

The post Mellanox Appoints Steve Sanghi and Umesh Padval to Board of Directors appeared first on HPCwire.

Packet Deploys AMD EPYC Processors in its Global Bare Metal Cloud

Related News- HPC Wire - Thu, 02/22/2018 - 07:57

NEW YORK, Feb. 22, 2018 — Packet, a leading bare metal cloud for developers, today expanded its lineup to include the no-compromise single socket, high-performance AMD EPYC processor. Packet’s new “c2.medium” configuration is based on the AMD EPYC 7401P processor, and features 24 physical cores that can be up and running in as little as eight minutes for just $1.00 / hr.

The system, built on Dell EMC’s new PowerEdge R6415 platform, includes 64GB of RAM and dual 480GB SSDs. It is available via API, portal or developer tools like Terraform at Packet’s Parsippany (NJ) and Sunnyvale (CA) locations. Private deployments of customized AMD-based configurations are available at any of Packet’s 15 global datacenters.

“Packet’s ability to automate and deliver new hardware solutions like the AMD EPYC is a cornerstone of our value proposition,” said Zachary Smith, CEO at Packet. “As the first bare metal cloud platform to provide direct developer access to EPYC, we are leading the charge to enable innovation on the next wave of datacenter hardware.”

While bare metal has long been a favorite of the AMD gaming customer base, the combination of 24 high-performance physical cores and a modestly priced single-socket system is attractive to a wide variety of use cases, from scale-out SaaS platforms, to Kubernetes-based cloud native applications and enterprise workloads leveraging virtualization.

“We’re thrilled to see Packet adopt the AMD EPYC 7401P, no-compromise single socket solution,” noted Dan Bounds, Senior Director of Datacenter Solutions at AMD. “Their unique combination of cloud-style consumption with direct access to bare metal is a fantastic way to showcase EPYC to a new generation of compute-hungry developers.”

Packet’s proprietary technology automates physical servers and networks to provide cloud-style automation without the use of virtualization or multi-tenancy. The company is 100% focused on automating fundamental, bare-metal infrastructure – enabling customers, partners, and the open source community to innovate on top of un-opinionated infrastructure.

For more information, visit www.packet.net/AMD

About Packet

Packet is a leading bare metal cloud for developers. Its proprietary technology automates physical servers and networks without the use of virtualization or multi-tenancy – powering over 60k deployments each month in its 20 global datacenters.

Founded in 2014 and based in New York City, Packet has quickly become the provider of choice for leading enterprises, SaaS companies, and software innovators.  In addition to its public cloud, Packet’s unique “Private Deployment” model enables companies to automate their own infrastructure in facilities all over the world.

Packet is a proud member of the Open19 Foundation, as well as the Cloud Native Computing Foundation (CNCF), where it donates and manages the CNCF Community Infrastructure Lab.  Additionally, Packet supports many open source projects, including Memcached.org, NixOS, Docker, and Kernel.org.

Source: Packet

The post Packet Deploys AMD EPYC Processors in its Global Bare Metal Cloud appeared first on HPCwire.

InfiniBand Trade Association Members Conclude Most Extensive Compliance and Interoperability Testing Event to Date

Related News- HPC Wire - Wed, 02/21/2018 - 15:23

BEAVERTON, Ore., Feb. 21, 2018 — The InfiniBand Trade Association (IBTA), a global organization dedicated to maintaining and furthering the InfiniBand and RoCE specifications, today announced the availability of its latest InfiniBand Combined Cable and Device Integrators’ List and RDMA over Converged Ethernet (RoCE) Interoperability List. Held October 2017 at the University of New Hampshire Interoperability Laboratory (UNH-IOL), IBTA Plugfest 32 featured new member vendors, cables, devices and testing capabilities, making it the most extensive and successful compliance and interoperability event to date. The rigorous, independent third-party compliance and interoperability program ensures that each cable and device tested successfully meets end user needs and expectations of InfiniBand and RoCE technology.

Customers ranging from data centers and research facilities to universities and government labs leverage these results when determining which products to use when designing or upgrading their systems. Fabric design is an important and costly business decision, making the InfiniBand Integrators’ List and the RoCE Interoperability List crucial elements when building systems and ensuring that all equipment will operate seamlessly. The independent validation provided by the IBTA Compliance and Interoperability program is critical to the advancement of RDMA technology and the industry as a whole.

Key highlights from Plugfest 32:

  • Records for RoCE interoperability testing
      ◦ Four major device vendors and seven of the most prominent cable vendors are now participating in the RoCE interoperability testing program
      ◦ 21 Ethernet devices and over 60 Ethernet cables were registered for the event and run through 30 different scenarios at 10, 25, 40, 50 and 100 GbE speeds
  • Cutting-Edge Testing Equipment and Capabilities
      ◦ Work on a new SFI Transport tester at Plugfest 32 will allow 25 additional tests for Plugfest 33 in April 2018
      ◦ New test equipment and application software enabled testing of InfiniBand HDR 200 Gb/s copper cables and the development of new HDR 200 Gb/s testing suites for implementation at Plugfest 33

“The IBTA Plugfest is widely acknowledged as the most demanding and effective compliance and interoperability program in the industry, which creates a robust and reliable ecosystem of InfiniBand and RoCE solutions that end users can depend on,” said Rupert Dance, Chair of the IBTA Compliance and Interoperability Working Group (CIWG). “Moreover, each IBTA Plugfest provides our members – both cable and device vendors – with a neutral venue that is also attended by leading RDMA test equipment vendors who supply cutting edge equipment for advanced compliance and interoperability testing. This results in a unique opportunity only offered by the IBTA for its members to test the latest InfiniBand and Ethernet-based products in many different scenarios, debug in real-time and resolve any issues uncovered in the process.”

The IBTA Plugfest is essential to members and creates a clear path to develop products that are compliant to the InfiniBand and RoCE specifications and also interoperable within the larger ecosystem. Each product that completes compliance and interoperability testing is compiled into the resulting InfiniBand Integrators’ List and RoCE Interoperability List. These lists are technical resources that provide immense benefits to all end users deploying RDMA-based fabrics.

Vendors that contributed test equipment to IBTA Plugfest 32 include Ace Unitech, Anritsu, Keysight Technologies, Molex, Software Forge, TE Connectivity, Tektronix and Wilder Technologies.

The following sources provide additional information about the IBTA Integrators’ List Program:

  • Methods of Implementation (MOI)
  • Integrators’ List Policy and Testing Procedures
  • Archives of past InfiniBand Integrators’ Lists and RoCE Interoperability Lists

IBTA Plugfest 33 will be held April 9-20, 2018 at UNH-IOL. Registration information is available on the Plugfest page.

To learn more about the benefits of becoming an IBTA member, visit the Membership Information page.

About the InfiniBand Trade Association

The InfiniBand Trade Association was founded in 1999 and is chartered with maintaining and furthering the InfiniBand and the RoCE specifications. The IBTA is led by a distinguished steering committee that includes Broadcom, Cray, HPE, IBM, Intel, Mellanox Technologies, Microsoft, Oracle and QLogic. Other members of the IBTA represent leading enterprise IT vendors who are actively contributing to the advancement of the InfiniBand and RoCE specifications. The IBTA markets and promotes InfiniBand and RoCE from an industry perspective through online, marketing and public relations engagements, and unites the industry through IBTA-sponsored technical events and resources. For more information on the IBTA, visit www.infinibandta.org.

Source: InfiniBand Trade Association

The post InfiniBand Trade Association Members Conclude Most Extensive Compliance and Interoperability Testing Event to Date appeared first on HPCwire.

HOKUSAI’s BigWaterfall Cluster Extends RIKEN’s Supercomputing Performance

Related News- HPC Wire - Wed, 02/21/2018 - 15:16

RIKEN, Japan’s largest comprehensive research institution, recently expanded the capacity and capabilities of its HOKUSAI supercomputer, a key resource managed by the institution’s Advanced Center for Computing and Communications (ACCC). RIKEN is known for its high-quality research in a wide range of scientific disciplines, including health, brain, and life sciences, accelerator science, physical sciences, and computational science, among others.

“In 2015, with ongoing advances in research and technology,” commented Hiroo Kenzaki of the ACCC, “we needed to support more large-scale and medium-scale computation-intensive workloads across the sciences in which our researchers work.” These fields of computational science include quantum chromodynamics (QCD), condensed matter physics, quantum chemistry, biophysics, genomics, and more. RIKEN scientists run a wide range of commercially available software from ISVs, open source software, and their own, internally developed codes. “We introduced HOKUSAI, our next-generation cluster for general-purpose computing,” added Kenzaki.

GreatWave + BigWaterfall = HOKUSAI

Famous Kirifuri waterfall painting by Katsushika Hokusai

HOKUSAI was built in two stages. In 2015, the first stage called GreatWave—designed for highly parallelized calculations—was put into production. It includes a 1.0 petaflops massively parallel general-purpose system built on the Fujitsu PRIMEHPC FX100 platform based on the SPARC64 processor. Thus, it has compatibility with Japan’s K computer. This first stage also provides two workload-specialized application computing systems, one with two 60-core large-memory nodes using Intel Xeon E7 v2 processors and the other with 30 nodes housing four Nvidia GPUs and two 12-core Intel Xeon E5 v3 processors per node.

Early in 2017, BigWaterfall, a second and larger general-purpose cluster, was acquired; it was placed into production on October 11, 2017. “BigWaterfall accommodates a growing demand for higher performance supercomputing for computational research,” stated Kenzaki. BigWaterfall extends HOKUSAI’s capacity for workloads built on software optimized for Intel processors. BigWaterfall is built on Fujitsu PRIMERGY CX2550 M4 servers, based on the Intel Xeon Gold 6148 (Skylake) processor. With 840 dual-socket nodes, the system contains 1,680 CPUs (33,600 cores) with 78.7 TB of memory. It provides 2.5 petaflops theoretical performance—2.6X more performance than the old system—and placed at 82 on the November 2017 Top500 list.
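
Those headline figures are internally consistent: 840 dual-socket nodes of 20-core Xeon Gold 6148 parts give 33,600 cores, and at the 6148’s 2.4 GHz base clock with 32 double-precision FLOP per cycle per core (AVX-512 with two FMA units), the theoretical peak lands near the quoted 2.5 petaflops. A small sketch of the arithmetic (the clock used for the official peak figure is an assumption):

    # Consistency check on the BigWaterfall headline figures.
    # The 2.4 GHz clock used for the official peak figure is an assumption.
    nodes, sockets, cores_per_cpu = 840, 2, 20   # Xeon Gold 6148: 20 cores/CPU
    total_cores = nodes * sockets * cores_per_cpu
    clock_ghz = 2.4                              # 6148 base clock
    flops_per_cycle = 32                         # AVX-512, two FMA units
    peak_pflops = total_cores * clock_ghz * flops_per_cycle / 1e6
    print(f"{total_cores} cores, ~{peak_pflops:.2f} PFLOPS peak")
    # -> 33600 cores, ~2.58 PFLOPS peak (quoted: 2.5 petaflops)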

Skylake—Processor of Choice

In 2016, prior to building BigWaterfall, RIKEN ran a proof of concept (PoC) using a few nodes built on Intel Xeon Phi processors. “With Intel Advanced Vector Extensions 512 (Intel AVX-512) and six-channel memory, the Intel Xeon Phi processors offered high performance for computation-intensive codes, but they did not perform well on I/O-intensive workloads,” stated Kenzaki. That led the ACCC to look at the Intel Xeon Scalable Processor. “Skylake has Intel AVX-512 and six-channel memory, plus it features a high operating frequency,” said Kenzaki. “The combination delivered very high parallel floating-point performance and computational throughput on the tests we ran, which will benefit the types of workloads RIKEN researchers run.” Additionally, RIKEN ACCC can leverage many IA programming tools and software, including the Intel Parallel Studio XE Cluster Edition, to help developers optimize their codes.

Both GreatWave and BigWaterfall are interconnected with common storage systems—both general storage and hierarchical storage resources—based on the Fujitsu Exabyte File System (FEFS), and they utilize common login and control nodes. RIKEN chose the InfiniBand Enhanced Data Rate (EDR, 100 Gbps) fabric for the new clusters.

2.6X the Performance of Previous Cluster

“Our biggest challenge was to ensure uninterrupted service on GreatWave as we brought BigWaterfall into production. Since they had common components, special care was taken to minimize the impact on users and avoid problems. We needed to maintain accessibility for users and sustain running jobs, while BigWaterfall was built, tested, and released,” commented Kenzaki.

During the build out phase of BigWaterfall, benchmarks were conducted for various performance measurements, including LINPACK, HimenoBMT, Gaussian, and network performance. “The computation power and the memory bandwidth are particularly important to us in order to maintain a balanced system that delivers optimal application performance,” he added.

With BigWaterfall, RIKEN is able to run many types of workloads—memory-intensive, I/O-intensive, and compute-intensive—because of Intel AVX-512, high-bandwidth memory, and high operating frequency. Overall, the new system delivers 2.6X more performance and twice the memory capacity for bio-informatics, genomics, and engineering workloads, enabling RIKEN to accelerate their engineering research and projects.

More information about HOKUSAI can be found at http://accc.riken.jp/en/tag/hokusai/.

Ken Strandberg is a technical storyteller. He writes articles, white papers, seminars, web-based training, video and animation scripts, and technical marketing and interactive collateral for emerging technology companies, Fortune 100 enterprises, and multi-national corporations. Mr. Strandberg’s technology areas include Software, HPC, Industrial Technologies, Design Automation, Networking, Medical Technologies, Semiconductor, and Telecom. He can be reached at ken@catlowcommunications.com.

The post HOKUSAI’s BigWaterfall Cluster Extends RIKEN’s Supercomputing Performance appeared first on HPCwire.

Irish Centre for High-End Computing Asks Students to Name Ireland’s National Supercomputer

Related News- HPC Wire - Wed, 02/21/2018 - 11:28

Feb. 21, 2018 — The Minister for Education & Skills Richard Bruton and ICHEC launched a competition in a bid to name Ireland’s newest supercomputer which will be made available to all Irish researchers. The supercomputer will be installed in 2018 to replace “Fionn”, the current system in use since 2013. This supercomputer will provide Irish researchers with the High-Performance Computing (HPC) power to address some of the toughest challenges in science and society such as tackling climate change, improving healthcare and innovating Irish products through agriculture, engineering and manufacturing. It will also facilitate emerging technologies such as artificial intelligence, machine learning and earth observation that will foster new skills in the Irish educational system and workforce.

Image courtesy of ICHEC.

ICHEC is asking primary and secondary schoolchildren in Ireland to choose an appropriate name for the new supercomputer through a naming competition. The competition looks to shine a light on a shortlist of six pioneering Irish scientists and to educate young students about their lives and achievements. Students from a class are encouraged to vote for a candidate accompanied by a short essay, poster or video to support their choice.

“It is important to honour the amazing Irish scientists who have blazed a trail for the current and future generations of scientists” said Prof JC Desplat, Director of ICHEC. “We hope that the competition will inspire students to learn about the importance of computing for research and new discoveries, while recognising some of the Irish achievements in science and technology in the past.”

The competition is open to both primary and secondary level classes, and the winning submissions (one from each level) will each be awarded eight Raspberry Pi-tops for their respective classroom. These build-it-yourself laptops are particularly well suited to introducing coding and computer science to children through practical experiments and inventions. ICHEC will also provide coding tutorials for the winning classes.

The Competition Candidates

  • Kay Antonelli – Computer programmer
  • Francis Beaufort – Hydrographer
  • Nicholas Callan – Inventor and experimental physicist
  • Ellen Hutchins – Botanist
  • Richard Kirwan – Geologist
  • Eva Philbin – Chemist

Speaking at the competition launch, Minister Richard Bruton said, “We are aiming to make Ireland’s education and training service the best in Europe by 2026, which I believe will be integral to our continued national success.

“This year’s Action Plan has a particular focus on innovation, with the new Leaving Certificate Computer Science curriculum to be introduced on a phased basis from September 2018, the continued development of the primary mathematics curriculum to take account of computational thinking and problem-solving skills, and the use of digital technologies to enhance teaching and learning across the range of teaching and learning services.”

The Minister added “The ongoing provision of High-Performance Computing (HPC) resources, principally for researchers in third-level institutions, will be central to facilitating our drive towards excellence and innovation across the education system.”

The Competition

To enter the competition, visit nameourcomputer.ichec.ie. Submissions can take the form of a short essay, poster or video. Students are encouraged to research all candidates and incorporate their research into their final submission. Submissions for the most popular candidate will be judged by a panel to select the winning entries based on content, technical and artistic merit. All submissions must be made before 12:00 Friday 20th April.

Source: ICHEC

The post Irish Centre for High-End Computing Asks Students to Name Ireland’s National Supercomputer appeared first on HPCwire.

Super Micro Computer Inc. Announces Receipt of Non-Compliance Letter from Nasdaq

Related News- HPC Wire - Wed, 02/21/2018 - 11:19

SAN JOSE, Calif., Feb. 21, 2018 — Super Micro Computer, Inc. (NASDAQ:SMCI), a global leader in high-performance, high-efficiency server, storage technology and green computing, today announced that the Company received a notification letter (the “Letter”) from Nasdaq stating that the Company is not in compliance with Nasdaq listing rule 5250(c)(1), which requires timely filing of reports with the U.S. Securities and Exchange Commission. The Letter was sent as a result of the Company’s delay in filing its Quarterly Report on Form 10-Q for the period ended December 31, 2017 (the “Q2 10-Q”) and its continued delay in filing its Annual Report on Form 10-K for the fiscal year ended June 30, 2017 (the “Form 10-K”) and the Quarterly Report on Form 10-Q for the quarter ended September 30, 2017 (the “Q1 10-Q”). The Company previously submitted a revised plan of compliance to Nasdaq on November 29, 2017 (the “Plan”) with respect to its delay in filing the Form 10-K and Q1 10-Q (the “Initial Delinquent Filings”). Upon review of the Plan and in connection with this additional delinquency, the Letter requires that the Company submit an update to the Plan by February 28, 2018 (the “Revised Plan”).

The Letter has no immediate effect on the listing or trading of the Company’s common stock on the Nasdaq Global Select Market. If the Revised Plan is submitted and accepted, the Company could be granted up to 180 days from the due date of the Initial Delinquent Filing, or March 13, 2018, to regain compliance. If Nasdaq does not accept the Company’s plan, then the Company will have the opportunity to appeal that decision to a Nasdaq hearings panel.

As previously announced, the Company has been unable to file the Form 10-K and the Q1 10-Q. The Audit Committee of the Company’s Board of Directors has completed the previously disclosed investigation. Additional time is required to analyze the impact, if any, of the results of the investigation on the Company’s historical financial statements, as well as to conduct additional reviews before the Company will be able to finalize the Form 10-K. The Company is unable at this time to provide a date as to when the Form 10-K will be filed or to determine whether the Company’s historical financial statements will be adjusted or, if so, the amount of any such adjustment(s) and what periods any such adjustments may impact. The Company intends to file the Q1 10-Q and Q2 10-Q promptly after filing the Form 10-K.

About Super Micro Computer, Inc.

Supermicro, a global leader in high-performance, high-efficiency server technology and innovation, is a premier provider of end-to-end green computing solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro’s advanced Server Building Block Solutions offer a vast array of components for building energy-efficient, application-optimized computing solutions. Architecture innovations include Twin, TwinPro, FatTwin, Ultra Series, MicroCloud, MicroBlade, SuperBlade, Double-sided Storage, Battery Backup Power (BBP) modules and WIO/UIO.

Source: Super Micro Computer, Inc.

The post Super Micro Computer Inc. Announces Receipt of Non-Compliance Letter from Nasdaq appeared first on HPCwire.

Neural Networking Shows Promise in Earthquake Monitoring

Related News- HPC Wire - Wed, 02/21/2018 - 11:17

A team of Harvard University and MIT researchers report their new neural networking method for monitoring earthquakes is more accurate and orders of magnitude faster than traditional approaches. Their study, published in Science Advances last week, centered on Oklahoma where before 2009 there were roughly two earthquakes of magnitude 3.0 or higher per year; in 2015 the number of such earthquakes exceeded 900.

Earthquake monitoring, particularly for small to medium magnitude earthquakes, has been pushed into the limelight in states where the fracking industry is booming and where the related disposal of wastewater has been implicated in the rising number of earthquakes. The result has been contentious debate over fracking’s role in the problem and over what regulatory and remediation steps are needed. Not surprisingly, efforts aimed at improving monitoring and understanding of the underlying science have accelerated.

Researchers Thibaut Perol (Harvard), Michaël Gharbi (MIT), and Marine Denolle (Harvard) summarize the challenge nicely in their abstract:

“Over the last decades, the volume of seismic data has increased exponentially, creating a need for efficient algorithms to reliably detect and locate earthquakes. Today’s most elaborate methods scan through the plethora of continuous seismic records, searching for repeating seismic signals. We leverage the recent advances in artificial intelligence and present ConvNetQuake, a highly scalable convolutional neural network for earthquake detection and location from a single waveform. We apply our technique to study the induced seismicity in Oklahoma, USA. We detect more than 17 times more earthquakes than previously cataloged by the Oklahoma Geological Survey. Our algorithm is orders of magnitude faster than established methods.”

Their model is a deep convolutional network that takes a window of three-channel waveform seismogram data as input and predicts its label either as seismic noise or as an event with its geographic cluster (see figure below).

Fig. 2 ConvNetQuake architecture. The input is a waveform of 1000 samples on three channels. Each convolutional layer consists of 32 filters that downsample the data by a factor of 2 (see Eq. 1). After the eighth convolution, the features are flattened into a 1D vector of 128 features. A fully connected layer outputs the class scores.
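
For readers who want a concrete picture of the architecture described above, the following is a minimal, illustrative sketch in TensorFlow/Keras, not the authors’ code. The kernel size, activation, and the assumption of seven output classes (seismic noise plus six geographic clusters) are inferred or assumed from the paper’s description and figure captions.

    # Minimal sketch (not the authors' code): a ConvNetQuake-style classifier in
    # TensorFlow/Keras. Kernel size, activation, and class count are assumptions.
    import tensorflow as tf

    NUM_CLASSES = 7  # assumption: seismic noise + six geographic event clusters

    def build_convnetquake_like_model():
        inputs = tf.keras.Input(shape=(1000, 3))        # 1000-sample window, 3 channels
        x = inputs
        for _ in range(8):                              # eight convolutional layers
            x = tf.keras.layers.Conv1D(
                filters=32, kernel_size=3, strides=2,   # stride 2 halves the length
                padding="same", activation="relu")(x)
        x = tf.keras.layers.Flatten()(x)                # 4 time steps x 32 filters = 128 features
        outputs = tf.keras.layers.Dense(NUM_CLASSES)(x) # class scores (logits)
        return tf.keras.Model(inputs, outputs)

    model = build_convnetquake_like_model()
    model.summary()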

The computational requirements were substantial. Both the parameter set and training data were too large to fit in memory, prompting the use of a batched stochastic gradient descent algorithm. ConvNetQuake was implemented in TensorFlow and all training runs were performed on Nvidia Tesla K20Xm GPUs. “We trained for 32,000 iterations, which took approximately 1.5 hours,” write the authors, who used the Odyssey cluster supported by the Faculty of Arts and Sciences Division of Science, Research Computing Group at Harvard University.
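
As a rough illustration of how such batched training might be set up (continuing the sketch above and reusing `model` and `NUM_CLASSES`), the snippet below streams mini-batches through an SGD optimizer; the batch size, learning rate, and synthetic stand-in data are placeholders, not the authors’ settings.

    # Sketch of batched SGD training (illustrative only; batch size, learning rate,
    # and the synthetic data below are placeholders, not the authors' settings).
    import numpy as np
    import tensorflow as tf

    waveforms = np.random.randn(2048, 1000, 3).astype("float32")   # stand-in windows
    labels = np.random.randint(0, NUM_CLASSES, size=2048)          # stand-in labels

    dataset = (tf.data.Dataset.from_tensor_slices((waveforms, labels))
               .shuffle(2048)
               .batch(128)          # mini-batches avoid holding everything in memory
               .repeat())

    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"])

    # 1000 steps x 32 epochs = 32,000 iterations, a rough analogue of the paper's run
    model.fit(dataset, steps_per_epoch=1000, epochs=32)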

The researchers note that traditional approaches to earthquake detection generally fail to detect events buried in even modest levels of seismic noise. Waveform autocorrelation is generally the most effective method for identifying these repeating earthquakes in seismograms, but it is computationally intensive and impractical for long time series. One approach to reducing the computation is to select a small set of representative waveforms as templates and correlate only these with the full-length continuous time series.
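
The classical template-matching idea can be illustrated with a short, self-contained sketch using normalized cross-correlation; the window length, detection threshold, and synthetic data below are placeholders chosen only for demonstration.

    # Illustrative template-matching detector via normalized cross-correlation
    # (a sketch of the classical approach described above, not production code).
    import numpy as np

    def normalized_cross_correlation(trace, template):
        """Slide the template along the trace; return correlation scores in [-1, 1]."""
        n = len(template)
        t = (template - template.mean()) / (template.std() * n)
        scores = np.empty(len(trace) - n + 1)
        for i in range(len(scores)):
            window = trace[i:i + n]
            scores[i] = np.sum(t * (window - window.mean()) / (window.std() + 1e-12))
        return scores

    rng = np.random.default_rng(0)
    template = rng.standard_normal(200)            # stand-in earthquake waveform
    trace = rng.standard_normal(20000) * 0.5       # stand-in continuous record
    trace[5000:5200] += template                   # bury a repeat of the event

    scores = normalized_cross_correlation(trace, template)
    detections = np.flatnonzero(scores > 0.8)      # threshold is a placeholder
    print("candidate detections near samples:", detections)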

Bringing machine learning to bear on the problem isn’t new. Recently, an unsupervised earthquake detection method, referred to as Fingerprint and Similarity Thresholding (FAST), has succeeded in reducing the complexity of the template matching approach.

“FAST extracts features, or fingerprints, from seismic waveforms, creates a bank of these fingerprints, and reduces the similarity search through locality-sensitive hashing. The scaling of FAST has shown promise with near-linear scaling to large data sets,” write the researchers.
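
The locality-sensitive hashing idea behind FAST can be sketched in a toy example: feature vectors (“fingerprints”) are hashed with random hyperplanes so that only vectors landing in the same bucket are compared. The fingerprint function below is a simple stand-in, not the actual FAST feature pipeline.

    # Toy sketch of locality-sensitive hashing over waveform "fingerprints"
    # (illustrates the FAST idea of reducing the similarity search; the fingerprint
    # extraction here is a placeholder, not the real FAST feature pipeline).
    from collections import defaultdict
    import numpy as np

    rng = np.random.default_rng(1)

    def fingerprint(window):
        # Placeholder feature vector; FAST uses spectrogram-derived binary fingerprints.
        spectrum = np.abs(np.fft.rfft(window))
        return spectrum / (np.linalg.norm(spectrum) + 1e-12)

    def lsh_key(vec, hyperplanes):
        # Signs of projections onto random hyperplanes form a short binary hash.
        return tuple((vec @ hyperplanes.T > 0).astype(int))

    windows = [rng.standard_normal(256) for _ in range(1000)]   # stand-in waveform windows
    fps = [fingerprint(w) for w in windows]
    hyperplanes = rng.standard_normal((16, len(fps[0])))        # 16-bit hashes

    buckets = defaultdict(list)
    for idx, fp in enumerate(fps):
        buckets[lsh_key(fp, hyperplanes)].append(idx)

    # Only fingerprints sharing a bucket become candidates for pairwise comparison,
    # which is what makes the search near-linear instead of quadratic.
    candidate_pairs = sum(len(v) * (len(v) - 1) // 2 for v in buckets.values())
    print("pairs to compare:", candidate_pairs, "out of", 1000 * 999 // 2)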

Their work poses the problem as one of supervised classification – ConvNetQuake is trained on a large data set of labeled raw seismic waveforms and learns a compact representation that can discriminate seismic noise from earthquake signals. The waveforms are no longer classified by their similarity to other waveforms, as in previous work.

“Instead, we analyze the waveforms with a collection of nonlinear local filters. During the training phase, the filters are optimized to select features in the waveforms that are most relevant to classification. This bypasses the need to store a perpetually growing library of template waveforms. Owing to this representation, our algorithm generalizes well to earthquake signals never seen during training. It is more accurate than state-of-the-art algorithms and runs orders of magnitude faster,” they write. The figure below shows the data sets used.

Fig. 1 Earthquakes and seismic station in the region of interest (near Guthrie, OK) from 14 February 2014 to 16 November 2016. GS.OK029 and GS.OK027 are the two stations that continuously record the ground motion velocity. The colored circles are the events in the training data set. Each event is labeled with its corresponding area. The thick black lines delimit the six areas. The black squares are the events in the test data set. Two events from the test set are highlighted because they do not belong to the same earthquake sequences and are nonrepeating events.

The limitation of the methodology, they say, is the size of the training set required for good performance in earthquake detection and location. “Data augmentation has enabled great performance for earthquake detection, but larger catalogs of located events are needed to improve the performance of our probabilistic earthquake location approach. This makes the approach ill-suited to areas of low seismicity or areas where instrumentation is recent, but well-suited to well-instrumented areas with high seismicity rates.”

Link to paper: http://advances.sciencemag.org/content/4/2/e1700578/tab-pdf

Link to article discussing the work on The Verge: https://www.theverge.com/2018/2/14/17011396/ai-earthquake-detection-oklahoma-neural-network

The post Neural Networking Shows Promise in Earthquake Monitoring appeared first on HPCwire.

Missing Link to Novel Superconductivity Revealed at Ames Laboratory

Related News- HPC Wire - Wed, 02/21/2018 - 09:00

Feb. 21, 2018 — Scientists at the U.S. Department of Energy’s Ames Laboratory have discovered a state of magnetism that may be the missing link to understanding the relationship between magnetism and unconventional superconductivity. The research, recently published in npj Nature Quantum Materials, provides tantalizing new possibilities for attaining superconducting states in iron-based materials.

Image courtesy of Ames Laboratory.

“In the research of quantum materials, it’s long been theorized that there are three types of magnetism associated with superconductivity. One type is very commonly found, another type is very limited and only found in rare situations, and this third type was unknown, until our discovery,” said Paul Canfield, a senior scientist at Ames Laboratory and a Distinguished Professor and the Robert Allen Wright Professor of Physics and Astronomy at Iowa State University.

The scientists suspected that the material they studied, the iron arsenide CaKFe4As4, was such a strong superconductor because there was an associated magnetic ordering hiding nearby. Creating a variant of the compound by substituting in cobalt and nickel at precise locations, called “doping,” slightly distorted the atomic arrangements, which induced the new magnetic order while retaining the compound’s superconducting properties.

“The resources of the national laboratories were essential for providing for the diversity of techniques needed to reveal this new magnetic state,” said Canfield. “We’ve been able to stabilize it, it’s robust, and now we’re able to study it. We think by understanding the three different types of magnetism that can give birth to iron-based superconductors, we’ll have a better sense of the necessary ingredients for this kind of superconductivity.”

The research is further discussed in the paper, “Hedgehog spin-vortex crystal stabilized in a hole-doped iron-based superconductor,” authored by William R. Meier, Qing-Ping Ding, Andreas Kreyssig, Sergey L. Bud’ko, Aashish Sapkota, Karunakar Kothapalli, Vladislav Borisov, Roser Valentí, Cristian D. Batista, Peter P. Orth, Rafael M. Fernandes, Alan I. Goldman, Yuji Furukawa, Anna E. Böhmer, and Paul C. Canfield; and published in the journal npj Nature Quantum Materials.

This research used resources of the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory.

Ames Laboratory is a U.S. Department of Energy Office of Science national laboratory operated by Iowa State University. Ames Laboratory creates innovative materials, technologies and energy solutions. We use our expertise, unique capabilities and interdisciplinary collaborations to solve global problems.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.  For more information, please visit science.energy.gov.

Source: Ames Laboratory

The post Missing Link to Novel Superconductivity Revealed at Ames Laboratory appeared first on HPCwire.

HPE Wins $57 Million DoD Supercomputing Contract

Related News- HPC Wire - Tue, 02/20/2018 - 17:35

Hewlett Packard Enterprise (HPE) today revealed details of its massive $57 million HPC contract with the U.S. Department of Defense (DoD). The deal calls for HPE to provide the DoD High Performance Computing Modernization Program (HPCMP) with supercomputing capability and support services to “accelerate the development and acquisition of advanced national security capabilities.”

The DoD has ordered a total of seven HPE SGI 8600 systems: four for the Air Force Research Laboratory (AFRL) DoD Supercomputing Resource Center (DSRC) near Dayton, Ohio, and three for the Navy DSRC at Stennis Space Center, Mississippi. The AFRL machines will be housed at Wright-Patterson Air Force Base and will support hypersonics research and computational modeling of air, naval, and ground weapon systems and platforms. The Navy DSRC machines will be used for advanced weapons development and global weather modeling requirements. Combined, the systems represent 14.1 petaflops of peak computational capacity and more than 24 petabytes of usable storage, leveraging DDN EXAScaler Lustre-based technology.

“In our data-driven world, supercomputing is increasingly becoming a key to staying ahead of the competition – this applies to national defense just as it does to commercial enterprises,” said Bill Mannel, vice president and general manager, HPC and AI, Hewlett Packard Enterprise, in a statement. “The DoD’s continuous investment in supercomputing innovation is a clear testament to this development and an important contribution to U.S. national security. HPE has been a strategic partner with the HPCMP for two decades, and we are proud that the DoD now significantly extends this partnership, acknowledging HPE’s sustained leadership in high performance computing.”

When HPE purchased SGI for $275 million in 2016, SGI’s strength in the government vertical was one of the motivating factors. Introduced in 2017 and based on the legacy SGI ICE XA architecture, the HPE SGI 8600 delivers petaflops-class speed for challenging problems ranging from the life, earth, and space sciences to engineering, manufacturing, and national security. HPE says the sixth-generation server line provides scale and efficiency for the most complex, largest environments – “up to thousands of nodes with leading power efficiency” – achieved with direct liquid cooling of high-wattage components. Last June, HPE received the Green500 award for the HPE SGI 8600-based TSUBAME cluster, heralded as Japan’s fastest artificial intelligence supercomputer.

All seven DoD systems employ 24-core Intel Xeon Platinum 8168 (Skylake) processors on an Intel Omni-Path Architecture fabric. Four of the seven systems have been outfitted with Nvidia P100 GPUs.

The AFRL DSRC side of the contract consists of:

  • A single system of 2,352 Intel Skylake CPUs plus 24 Nvidia Tesla P100 GPUs, 244 terabytes of memory, and 9.2 petabytes of usable storage.
  • A single system of 576 Intel Skylake CPUs, 58 terabytes of memory, and 1.6 petabytes of usable storage.
  • Two systems, each with 288 Skylake CPUs, 30 terabytes of memory, and 1.0 petabytes of usable storage.

The Navy DSRC will receive:

  • Two systems, each consisting of 1,472 Skylake CPUs, 16 Nvidia Tesla P100 GPUs, 154 terabytes of memory, and 5.6 petabytes of usable storage.
  • A single system consisting of 296 Skylake CPUs, four Nvidia Tesla P100 GPUs, 32 terabytes of memory, and 1.0 petabytes of usable storage.
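
As a back-of-the-envelope check on the quoted combined peak, the short calculation below assumes each listed “CPU” is a 24-core Xeon Platinum 8168 socket running at its 2.7 GHz base clock and delivering 32 double-precision FLOPs per core per cycle; GPU contributions and the exact clock used for official peak ratings are not specified in the announcement, so this is only an approximation.

    # Back-of-the-envelope check of the quoted combined peak (illustrative only;
    # assumes each listed "CPU" is a 24-core Xeon Platinum 8168 socket at its
    # 2.7 GHz base clock delivering 32 double-precision FLOPs per core per cycle).
    cpus = (
        2352          # AFRL system 1
        + 576         # AFRL system 2
        + 2 * 288     # AFRL systems 3 and 4
        + 2 * 1472    # Navy systems 1 and 2
        + 296         # Navy system 3
    )
    cores_per_cpu = 24
    ghz = 2.7
    flops_per_core_per_cycle = 32   # AVX-512, two FMA units

    peak_pflops = cpus * cores_per_cpu * ghz * flops_per_core_per_cycle / 1e6
    print(f"{cpus} CPUs -> ~{peak_pflops:.1f} PFLOPS peak (GPUs excluded)")
    # Prints roughly 14.0 PFLOPS, consistent with the ~14.1 PFLOPS quoted above.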

The U.S. Army Corps of Engineers, Engineering and Support Center, Huntsville, Alabama, awarded the supercomputing contract, which includes five years of 24/7 system support with on-site system administration and applications support personnel from HPE. The machines are expected to enter production status in the latter half of 2018.

The post HPE Wins $57 Million DoD Supercomputing Contract appeared first on HPCwire.

Research and Markets Releases Global Quantum Networking Markets Report

Related News- HPC Wire - Tue, 02/20/2018 - 11:39

DUBLIN, Feb. 20, 2018 — The “Quantum Networking: Deployments, Components and Opportunities – 2017-2026” report has been added to ResearchAndMarkets.com’s offering.

This report will be essential reading for marketing, business development and product managers throughout the data communications and telecommunications industry, especially those at firms in the fiber optic and small satellite sectors. The report will also be valuable for those planning business development and investment in the quantum computing and quantum encryption businesses.

For the past 15 years, major service providers and research institutions worldwide have run quantum network trials. We are now entering a period in which permanent quantum networks are being built. These are designed initially to support quantum encryption services, but will soon also provide the infrastructure for quantum computing.

CIR believes that as quantum networks are deployed, they will eventually create opportunities at the service level, but more immediately at the components and modules level. This is because quantum networks will require a slew of new optical networking technologies to make them function effectively. In this report, CIR identifies the leading opportunities that will emerge from the building of quantum networks throughout the world.

This report includes:

  • Profiles of all the leading quantum networks and related R&D around the globe. We discuss which technologies and components these networks are using and developing, and how quantum networks will impact the telecommunications and data communications industries more generally. For each of these networks, current and planned applications are discussed, and we analyze where the potential for commercialization will be found.
  • Ten-year forecasts of the deployment of quantum network nodes around the globe with breakouts by technologies used, applications served and the kinds of components being used. These forecasts are developed in the context of a roadmap for future needs for encryption, high-performance computing (HPC), and big data infrastructure support.
  • A thorough analysis of the commercialization potential for the technologies associated with quantum networking. This analysis will discuss how leading commercial organizations active in building today’s quantum networks expect to build businesses around their experience.

Key Topics Covered:

Executive Summary

Chapter One: Introduction

Chapter Two: Technologies and Components

Chapter Three: Quantum Network Profiles

Chapter Four: Ten-Year Forecasts of Quantum Network Markets

Companies Mentioned

  • AT&T
  • BT
  • Battelle Institute
  • Raytheon
  • Toshiba

For more information about this report visit https://www.researchandmarkets.com/research/hfpv22/global_quantum?w=4

Source: Research and Markets

The post Research and Markets Releases Global Quantum Networking Markets Report appeared first on HPCwire.

Industry HPC User Group to Meet May 9-11 in Chicago

Related News- HPC Wire - Tue, 02/20/2018 - 11:35

Feb. 20, 2018 — The Industry HPC User Group (iHPCug) is a meeting of peers, arranged by companies that use High Performance Computing (HPC) for research and production purposes. These companies come from aerospace, automotive, manufacturing, oil & gas, other energy sectors, life sciences, and more.

Each meeting has two focus themes, each taking up a half-day, structured around invited speakers and a moderated panel-plus-audience discussion. These are followed by private sessions (two half-days) for the industry members only, with discussions on an agreed set of topics of interest. All attendees are expected to engage (this is not a sit-and-listen conference).

The open session focus themes for 2018 are:

  • Storage and I/O for HPC;
  • Proprietary vs Standards vs DIY approaches for HPC.

Attendance at the meeting is by invitation only. iHPCug welcomes requests for invitations from any companies that use HPC or who are planning to do so – please contact them for more information using the contact and registration page.

Source: iHPCug

The post Industry HPC User Group to Meet May 9-11 in Chicago appeared first on HPCwire.

Caringo Announces Enhancements to Swarm Scale-Out Hybrid Storage and SwarmNFS

Related News- HPC Wire - Tue, 02/20/2018 - 11:22

AUSTIN, Texas, Feb. 20, 2018 — Caringo, Inc. today announced performance and interoperability enhancements to Swarm Scale-Out Hybrid Storage and SwarmNFS. A pioneer in on-premises object storage technology, Caringo launched its masthead product, Swarm, in 2006. Field-hardened at version 9.5, Swarm serves as the foundation for massively scalable storage solutions spanning use cases from Media & Entertainment (M&E), High-Performance Computing, the Internet of Things (IoT), Government, Medical, Research, and Education to Cloud Storage and Enterprise IT.

Released in 2016, Caringo SwarmNFS was the first lightweight file protocol converter to bring the benefits of scale-out object storage—including built-in data protection, high availability, and powerful metadata—to NFSv4. Unlike cumbersome file gateways and file connectors, SwarmNFS is a stateless Linux® process that integrates directly with Caringo Swarm—allowing mount points to be accessed across a campus, across the country, or across the world. The patent-pending technology delivers a truly global namespace across NFSv4, S3, HDFS, and SCSP/HTTP, providing data distribution and data management at scale without the high cost and complexity of legacy solutions.

SwarmNFS 2.0 leverages a powerful new patent-pending feature in Swarm 9.5 that allows a client to send only the data of an object that has changed. Swarm then combines the changes with the existing object data, reducing both the bandwidth required and the time taken for a client to update existing objects. Before this change, a client had to re-upload all the data bytes in an object, even if only a small portion had changed. The ability to send just a few bytes of modified data when updating a large object is an object storage industry first that closes the gap between file and object storage; Caringo says it has improved overall NFS performance by up to 20x for concurrent client access and file operations.
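
The mechanics of a partial object update can be illustrated generically: the client works out which byte ranges changed and transmits only those, and the store splices them into the existing object. The sketch below is a conceptual illustration only; it is not Caringo’s API, wire format, or patent-pending implementation.

    # Conceptual illustration of a partial object update: send only the byte ranges
    # that changed and let the store splice them into the existing object.
    # This is NOT Caringo's API or wire format; it just sketches the idea above.

    def changed_ranges(old: bytes, new: bytes, block: int = 4096):
        """Compare fixed-size blocks and yield (offset, data) for blocks that differ."""
        for offset in range(0, len(new), block):
            chunk = new[offset:offset + block]
            if old[offset:offset + block] != chunk:
                yield offset, chunk

    def apply_ranges(stored: bytes, ranges) -> bytes:
        """Server-side splice of the changed ranges into the stored object."""
        buf = bytearray(stored)
        for offset, chunk in ranges:
            buf[offset:offset + len(chunk)] = chunk
        return bytes(buf)

    old = b"A" * 1_000_000                       # 1 MB object already in the store
    new = bytearray(old)
    new[500_000:500_010] = b"0123456789"         # a 10-byte edit somewhere in the middle

    updates = list(changed_ranges(old, bytes(new)))
    sent = sum(len(chunk) for _, chunk in updates)
    print(f"bytes sent: {sent} instead of {len(new)}")   # one 4 KB block, not 1 MB

    assert apply_ranges(old, updates) == bytes(new)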

Additional highlights of Swarm 9.5 include:

  • Ability to directly edit object metadata through the user interface. This complements the long-standing feature that has been available to applications.
  • Storage transaction metadata filtering for Managed Service Providers (MSPs) and other direct-to-Internet deployments to manage bandwidth and information flow.
  • Hybrid Cloud tiering capabilities have been enhanced for improved support of Caringo Swarm Hybrid Cloud for Microsoft Azure.

“We have made significant enhancements to Swarm and SwarmNFS to bridge the gap between existing file-based applications and object storage in response to customer requests and an evolving data landscape that needs a cohesive, scalable, intelligent storage strategy,” said Tony Barbagallo, Caringo VP of Product. “These releases deliver on our commitment to develop technology innovations that help our customers manage the unbridled growth of unstructured data and rapidly evolving access requirements.”

Caringo will be exhibiting at the JB&A pre-NAB Technology event April 8–9, 2018 as well as in booth number SL11807 of the NAB Show Expo in Las Vegas, NV April 9–12, 2018. They will be demonstrating the new capabilities of Swarm 9.5 and SwarmNFS 2.0 as well as their award-winning FileFly Secondary Storage Platform and Caringo Drive.

For more information, visit http://www.Caringo.com.

About Caringo 

Caringo was founded in 2005 to change the economics of storage by designing software from the ground up to solve the issues associated with relentless data growth. Caringo’s flagship product, Swarm, decouples data from applications and hardware, providing a foundation for continued data access and analysis that continuously evolves while guaranteeing data integrity. Today, Caringo software-defined object storage solutions are used to preserve and provide access to rapidly scaling data sets across many industries by organizations such as NEP, iQ Media, Argonne National Labs, Texas Tech University, Department of Defense, the Brazilian Federal Court System, City of Austin, British Telecom and hundreds more worldwide.

Source: Caringo

The post Caringo Announces Enhancements to Swarm Scale-Out Hybrid Storage and SwarmNFS appeared first on HPCwire.

Topological Quantum Superconductor Progress Reported

Related News- HPC Wire - Tue, 02/20/2018 - 07:55

Overcoming sensitivity to decoherence is a persistent stumbling block in efforts to build effective quantum computers. Now, a group of researchers from Chalmers University of Technology (Sweden) report progress in devising a superconductor able to host Majorana particles whose relative insensitivity to decoherence is a promising advantage.

The work is described in an account posted yesterday on Phys.org (“Unconventional superconductor may be used to create quantum computers of the future”). The basic idea is that Majorana particles could become stable building blocks of quantum computers. The problem is that they only occur under special circumstances. The Chalmers team, led by Floriana Lombardi, reported manufacturing a component that is able to host the sought-after particles.

“Majorana fermions are highly original particles, quite unlike those that make up the materials around us. In highly simplified terms, they can be seen as half electron. In a quantum computer the idea is to encode information in a pair of Majorana fermions which are separated in the material, which should, in principle, make the calculations immune to decoherence,” according to Phys.org.

In solid-state materials, Majorana fermions appear to occur only in topological superconductors. Microsoft is perhaps the best known of the commercial organizations betting big on topological quantum computers. For a long time many doubted the existence of Majorana fermions, although evidence has been piling up in their favor (see HPCwire article, Neutrons Zero in on the Elusive Magnetic Majorana Fermion).

To create their unconventional superconductor, the Chalmers researchers started with a topological insulator made of bismuth telluride (Bi2Te3). The researchers placed a layer of aluminum, a conventional superconductor, on top, which conducts current entirely without resistance at low temperatures. The superconducting pairs of electrons then leak into the topological insulator, which also becomes superconducting.

Initial measurements all indicated that they had only induced standard superconductivity in the Bi2Te3 topological insulator, but when they later cooled the component to repeat some measurements, the situation suddenly changed—the characteristics of the superconducting pairs of electrons varied in different directions.

“That isn’t compatible at all with conventional superconductivity. Unexpected and exciting things occurred,” says Lombardi in the article. “For practical applications, the material is mainly of interest to those attempting to build a topological quantum computer. We want to explore the new physics hidden in topological superconductors – this is a new chapter in physics.”

Link to full account on Phys.org: https://phys.org/news/2018-02-unconventional-superconductor-quantum-future.html

The post Topological Quantum Superconductor Progress Reported appeared first on HPCwire.

Supercomputer Unlocks Possibilities for Tinier Devices and Affordable DNA Sequencing

Related News- HPC Wire - Tue, 02/20/2018 - 07:31

Feb. 20, 2018 — Since its discovery in 2004, graphene has captured imaginations and sparked innovation in the scientific community. Perhaps rightly so, as it is 200 times stronger than the strongest steel yet still flexible, incredibly light but extremely tough, and conducts heat and electricity more efficiently than copper. Professor Jerry Bernholc of North Carolina State University is using the National Center for Supercomputing Applications’ Blue Waters supercomputer at the University of Illinois at Urbana-Champaign to explore graphene’s applications, including its use in nanoscale electronics and electrical DNA sequencing.

Graphene and Nanoscale Electronics

Currently, the trend toward smaller silicon semiconductors is slowing down as the technology approaches the limits of miniaturization. The world is moving past Moore’s Law, the observation that transistor density, and with it computing capability, doubles roughly every two years while costs fall. Transistor density is still increasing, but speed increases have slowed dramatically. In addition, transistors are no longer shrinking at the pace they once did as they approach physical limits.

This is bad news for anyone counting on computers, and electronics in general, continuing to get faster and thinner.

However, graphene may be a new way forward.

“We’re looking at what’s beyond Moore’s law, whether one can devise very small transistors based on only one atomic layer, using new methods of making materials,” Bernholc says. “We are looking at potential transistor structures consisting of a single layer of graphene, etched into lines of nanoribbons, where the carbon atoms are arranged like a chicken wire pattern. We are looking at which structures will function well, at a few atoms of width.”

Trying to do computations like this on normal computers is impossible, so Bernholc and his team utilized the Blue Waters supercomputer.

“We are doing quantum mechanical computations with thousands of atoms, and several thousands of electrons, and that requires very fast, very powerful systems, and we need to do calculations in parallel,” Bernholc says. “The computer chips are not fast enough—one computer chip in a desktop machine cannot do such calculations. On Blue Waters, we use thousands of nodes in parallel, so we can complete quantum mechanical calculations in a time that’s practical and receive results in a timely fashion.”

Graphene and DNA Sequencing

Bernholc is among the researchers who think graphene may also play a major role in the push to reduce the cost of gene sequencing. With 19 companies offering personal, direct-to-consumer genetic tests, it is easier than ever to have your DNA sequenced to learn about your family history and identify genetic risks.

Some forms of sequencing DNA include electrophoresis, which involves running a current through gel with DNA segments in it, causing DNA strands of varying lengths to move to different locations (shorter strands move faster). This allows comparison between known DNA strands and unknown ones.

As graphene is an excellent conductor of electricity, it is not surprising its use in gene sequencing is being explored. Recently, a group of researchers in California explored the possibility of using nanotubes (a tubular cousin of graphene) to electrically detect a single nucleotide addition during DNA replication. If the nucleotides can also be distinguished electrically, one would be able to sequence DNA and other genetic materials more cheaply and accurately. Currently, DNA sequencing involves complex labeling and readout schemes, which are quite costly and time-consuming. But nanotubes could lead to a simple nanocircuit that could operate faster and be much cheaper.

Bernholc and his team ran calculations to reproduce the California experiment, but with changed electrical conditions. The calculations showed that some DNA bases could be distinguished electrically, but not others. There are four chemical bases used to store information in DNA: adenine (A), guanine (G), cytosine (C) and thymine (T). The sequence of the DNA tells the cells in your body what proteins and chemicals to make. The bases pair up with each other (A with T and C with G) to form base pairs.

“That allows us to distinguish A from T. G and T are very clear, we can tell G and T from C and A, but we cannot distinguish C and A at the moment using graphene,” Bernholc says. “That’s where more work is needed, but we are moving towards being able to have a new way to sequence DNA.”

For Bernholc’s team and other researchers, the possibilities for graphene’s applications—nanoscale electronics, DNA sequencing and beyond—seem endless.

About NCSA

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

About the Blue Waters Project

The Blue Waters petascale supercomputer is one of the most powerful supercomputers in the world, and is the fastest sustained supercomputer on a university campus. Blue Waters uses hundreds of thousands of computational cores to achieve peak performance of more than 13 quadrillion calculations per second. Blue Waters has more memory and faster data storage than any other open system in the world. Scientists and engineers across the country use the computing and data power of Blue Waters to tackle a wide range of challenges. Recent advances that were not possible without these resources include computationally designing the first set of antibody prototypes to detect the Ebola virus, simulating the HIV capsid, visualizing the formation of the first galaxies and exploding stars, and understanding how the layout of a city can impact supercell thunderstorms.

Source: Susan Szuch, NCSA

The post Supercomputer Unlocks Possibilities for Tinier Devices and Affordable DNA Sequencing appeared first on HPCwire.
