Related News - HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

NCSA Announces 2018-2019 Illinois Faculty Fellows

11 hours 4 min ago

April 23, 2018 — The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign has named Faculty Fellowship awardees for 2018-2019. These six Illinois faculty members will work with NCSA to investigate a wide array of subjects including cancer, agriculture, cinema, economics, civil infrastructure and more.

Each faculty member will work closely with experts at NCSA on a project that aligns with research focus areas and/or major projects (i.e., the Blue Waters project, XSEDE, the Midwest Big Data Hub, and the Industry program).

2018-2019 NCSA Faculty Fellows

HETEROGENEOUS DATA ANALYTICS FRAMEWORK FOR PREDICTING THE DETERIORATION OF CIVIL INFRASTRUCTURE SYSTEMS

Nora El-Gohary, Civil and Environmental Engineering
Civil infrastructure systems (CIS) data, such as structured bridge inventory data, unstructured textual bridge inspection reports, and traffic and weather data, are most often analyzed in isolation by CIS decision makers. The overarching goal of El-Gohary’s project is to develop a novel computational data analytics framework for learning from multi-source heterogeneous data to enhance civil infrastructure system deterioration prediction and maintenance decision making.

LEARNING HUGE-SCALE DIFFUSION NETWORKS IN REAL-TIME

Niao He, Industrial and Enterprise Systems Engineering
Current approaches to modeling and learning the temporal dynamics and recurrent behaviors of huge-scale diffusion networks, such as social networks, disease networks, and cyber-crime networks, typically lack the flexibility and/or scalability to accommodate ever-growing massive event data. This leads to poor predictions of abnormal events, while accurate prediction is crucial in life-or-death contexts such as healthcare and crime. He’s project aims to establish statistically and computationally efficient approaches that enable flexible modeling of diffusion processes using deep neural networks, and to reinforce their scalability and wide applicability in emerging healthcare and security analytics.

DEVELOPMENT OF MACHINE LEARNING-BASED ANALYTICS AND VISUALIZATION APPROACHES FOR PREDICTIVE TOXICOLOGY

Zeynep Madak-Erdogan, Food Science and Human Nutrition
As new molecules such as pesticides are developed for the agriculture industry, they must be rigorously assessed for acute to chronic mammalian toxicity and risk to human health. In many cases, toxicity to human health is only detected during late stages of product development, resulting in major setbacks to agricultural R&D pipelines. Returning NCSA Faculty Fellow Madak-Erdogan aims to develop a machine learning approach for early prediction of the mammalian toxicity of small molecules using gene expression data, specifically to identify early biomarkers of liver cancer due to liver toxicant exposure, a critical problem at the intersection of the agriculture industry and human health today.

INDIAN CINEMA IN CONTEXT: INTERACTIVE FILM HISTORY ARCHIVE AND TOOLS

Rini Mehta, Comparative and World Literature
Indian cinema, born in the early 1900s, has had a unique and interesting history, particularly with the recent and popular development of Bollywood, the Hindi film industry based in Bombay. The project Indian Cinema in Context (ICIC) proposes to build a set of tools to study the history and evolution of Indian cinema, the world’s largest film industry, which produces more films annually than the second- and third-ranked nations (the U.S. and China) combined.

A WORKSHOP TO JUMPSTART HIGH-PERFORMANCE COMPUTING IN FINANCE

Mao Ye, Finance
Modern financial markets generate vast amounts of data, and industry practitioners routinely apply big-data techniques to guide their decisions. However, financial economists’ empirical tools have not kept pace with the markets they analyze. Ye’s project aims to stimulate collaboration between financial economists and high-performance computing experts via a workshop this summer, in hopes of catalyzing a solution for how supercomputing resources can be used to address new questions in financial economics.

OMIX DEVELOPMENT: A VISUAL ANALYTICS PLATFORM FOR MULTI-OMIC MICROBIOME DATA

Ruoqing Zhu, Statistics
Microbial communities have a significant impact on human health. A growing body of research has demonstrated the influence of the host microbiota in cancer and its interaction with the human immune system, which has implications for diagnosis and treatment. In addition, gastrointestinal microbiome perturbations and diet are independently linked to public health issues including obesity, nonalcoholic fatty liver disease, and type 2 diabetes. Zhu’s project aims to advance the development of the NCSA VI-Bio group’s prototype OmiX, an informatics tool that will enable scientists to study and better understand the microbiome-human interaction.

About the Faculty Fellows Program

The Faculty Fellows Program at NCSA provides an opportunity for faculty and researchers at the University of Illinois at Urbana-Champaign to catalyze and develop long-term research collaborations between Illinois departments, research units, and NCSA. This competitive program provides seed funding for demonstration or start-up projects, workshops, and/or other activities with the potential to lead to longer-term collaborations around research, development and education. Learn more about the Faculty Fellows program.

About NCSA

The National Center for Supercomputing Applications (NCSA) is a hub of transdisciplinary research and digital scholarship where University of Illinois faculty, staff, and students, and collaborators from around the globe, unite to address research grand challenges for the benefit of science and society. Current research focus areas are Bioinformatics and Health Sciences, Computing and Data Sciences, Culture and Society, Earth and Environment, Materials and Manufacturing, and Physics and Astronomy. Learn more about NCSA.

Source: NCSA

The post NCSA Announces 2018-2019 Illinois Faculty Fellows appeared first on HPCwire.

HERMES Public Health Modeling Software Released for Public Use

12 hours 35 min ago

April 23, 2018 — Public health experts at the Global Obesity Prevention Center (GOPC) at Johns Hopkins University and the Pittsburgh Supercomputing Center (PSC) have released their HERMES (Highly Extensible Resource for Modeling Event-Driven Supply Chains) supply-chain modeling software for public use. The user-friendly HERMES software will enable decision makers and other stakeholders to analyze supply chains of vaccines and other medical supplies to make them more efficient and reliable.

All too often, life-saving vaccines are not reaching people who need them, according to Jay DePasse, PSC’s Director of Public Health Applications. Bottlenecks, stock-outs, power outages and vaccine wastage occur when a vaccine supply chain isn’t fully functioning. Without the ability to properly measure and analyze the system of a given vaccine supply chain, malfunctions will continue to impede the delivery of vaccines to target populations.

Understanding how the various components of a vaccine supply chain interact with each other is critical to evaluating the logistics of a supply chain, identifying the root causes of issues and formulating sustainable solutions.

HERMES is a software platform that allows users to create a model of a given supply chain in order to better visualize and understand the complete system. Developed by members of the GOPC and PSC, HERMES serves as a “virtual laboratory” in which users can evaluate a supply chain and test the effects of implementing different potential policies, interventions, practices and technology changes, all within the safety of a computer before trying them in real life.

To date, the HERMES team has used the software to create vaccine supply chain models to help decision makers in a wide range of countries, including Niger, Benin, Senegal, Chad, Kenya, Mozambique, Thailand, Vietnam and India. Potential policies and interventions have ranged from assessing the value of using unmanned aerial vehicles (UAVs), also known as drones, to transport vaccines in Mozambique, to modeling the economic and clinical impacts of heat-stable vaccines in various countries.

Learn more about the HERMES software.

Read the GOPC press release.

About PSC

The Pittsburgh Supercomputing Center is a joint effort of Carnegie Mellon University and the University of Pittsburgh. Established in 1986, PSC is supported by several federal agencies, the Commonwealth of Pennsylvania and private industry, and is a leading partner in XSEDE (Extreme Science and Engineering Discovery Environment), the National Science Foundation cyberinfrastructure program.

Source: PSC

The post HERMES Public Health Modeling Software Released for Public Use appeared first on HPCwire.

New Exascale System for Earth Simulation Introduced

13 hours 26 min ago

After four years of development, the Energy Exascale Earth System Model (E3SM) will be unveiled today and released to the broader scientific community this month. The E3SM project is supported by the Department of Energy’s Office of Science in the Biological and Environmental Research Office. The E3SM release will include model code and documentation, as well as output from an initial set of benchmark simulations.

The Earth, with its myriad interactions of atmosphere, oceans, land and ice components, presents an extraordinarily complex system for investigation. Earth system simulation involves solving approximations of physical, chemical and biological governing equations on spatial grids at resolutions that are as fine in scale as computing resources will allow. The full press release is available on the LLNL website.

The E3SM project will reliably simulate aspects of earth system variability and project decadal changes that will critically impact the U.S. energy sector in the near future. These critical factors include a) regional air/water temperatures, which can strain energy grids; b) water availability, which affects power plant operations; c) extreme water-cycle events (e.g. floods and droughts), which impact infrastructure and bio-energy; and d) sea-level rise and coastal flooding, which threaten coastal infrastructure.

The goal of the project is to develop an earth system model (ESM) that has not been possible because of limitations in current computing technologies. Meeting this goal will require advances on three frontiers: 1) better resolving earth system processes through a strategic combination of developing new processes in the model, increased model resolution and enhanced computational performance; 2) representing more realistically the two-way interactions between human activities and natural processes, especially where these interactions affect U.S. energy needs; and 3) ensemble modeling to quantify uncertainty of model simulations and projections.

E3SM will provide insights on earth system interactions in the Arctic and their influence on mid-latitude weather. In this E3SM model simulation, winter storm clouds, represented here by outgoing longwave radiation, or OLR, affect sea ice coverage as the clouds move across the Arctic.

“The quality and quantity of observations really makes us constrain the models,” said David Bader, Lawrence Livermore National Laboratory (LLNL) scientist and lead of the E3SM project. “With the new system, we’ll be able to more realistically simulate the present, which gives us more confidence to simulate the future.”

Simulating atmospheric and oceanic fluid dynamics with fine spatial resolution is especially challenging for ESMs.

The E3SM project is positioned on the forefront of this research challenge, acting on behalf of an international ESM effort. Increasing the number of earth system days simulated per day of computing time is a prerequisite for achieving the E3SM project goal. It also is important for E3SM to effectively use the diverse computer architectures that the DOE Advanced Scientific Computing Research (ASCR) Office procures to be prepared for the uncertain future of next-generation machines.

A long-term aim of the E3SM project is to use the exascale machines to be procured over the next five years. The development of E3SM is proceeding in tandem with the Exascale Computing Initiative (ECI). (Exascale refers to a computing system capable of carrying out a billion billion, or 10^18, calculations per second, a thousand-fold increase in performance over the most advanced computers from a decade ago.)

“This model adds a much more complete representation between interactions of the energy system and the earth system,” Bader said. “The increase in computing power allows us to add more detail to processes and interactions that results in more accurate and useful simulations than previous models.”

To address the diverse critical factors impacting the U.S. energy sector, the E3SM project is dedicated to answering three overarching scientific questions that drive its numerical experimentation initiatives:

  • Water Cycle: How does the hydrological cycle interact with the rest of the human-earth system on local to global scales to determine water availability and water cycle extremes?
  • Biogeochemistry: How do biogeochemical cycles interact with other earth system components to influence the energy sector?
  • Cryosphere Systems: How do rapid changes in cryosphere (continental and ocean ice) systems evolve with the earth system, and contribute to sea-level rise and increased coastal vulnerability?

The post New Exascale System for Earth Simulation Introduced appeared first on HPCwire.

2018 FLOW-3D European Users Conference Speakers Announced

13 hours 32 min ago

SANTA FE, N.M., April 23, 2018 — Flow Science, Inc. has announced the speakers for its 2018 FLOW-3D European Users Conference, which will be held at Le Méridien Stuttgart Hotel in Stuttgart, Germany on May 14 – 16, 2018. The conference will be co-hosted by Flow Science Deutschland.

Customers who use the FLOW-3D product suite as the basis for innovative research and development will present their work, including topics such as additive manufacturing and foaming applications, sediment transport modeling, centrifugal casting processes, cryogenic tank flows, and flow in a peristaltic pump. Speakers from Pöyry Energy GmbH, Roche Diagnostics GmbH, ArianeGroup GmbH, Mott MacDonald, EDF-CIH, Österreichisches Gießerei-Institut, JBA Consulting and Endurance Overseas are part of a diverse lineup of presenters. Additionally, Flow Science’s senior technical staff members will present current and upcoming developments for the FLOW-3D product suite. A full list of speakers and their topics is available at: https://www.flow3d.com/speakers-announced-for-the-2018-flow-3d-european-users-conference/

Advanced training will be offered on Monday, May 14. Attendees can choose from two tracks: Metal Casting and Water & Environmental. The Metal Casting track, taught by Dr. Matthias Todte of Flow Science Deutschland, will focus on best practices for setting up models and interpreting results, including defect identification. The Water & Environmental track, taught by John Wendelbo, Director of Sales at Flow Science, will explore many facets of a real-world fish passage case study.

Flow Science’s HPC partner, Penguin Computing, will participate as a conference sponsor. Penguin Computing is a leading developer of open, Linux-based HPC, enterprise data center and cloud solutions, offering a range of products from Linux servers to integrated, turn-key HPC clusters. Penguin Computing on Demand (POD) offers HPC accelerated time-to-solution without the complexity and expense of owning a cluster. Penguin Computing can be found online at https://www.penguincomputing.com/

More information about the conference, including online registration, can be found at: https://www.flow3d.com/2018-flow-3d-european-users-conference/

About Flow Science

Flow Science, Inc. is a privately-held software company specializing in transient, free-surface CFD flow modeling software for industrial and scientific applications worldwide. Flow Science has distributors for FLOW-3D sales and support in nations throughout the Americas, Europe, and Asia. Flow Science is located in Santa Fe, New Mexico.

Source: Flow Science

The post 2018 FLOW-3D European Users Conference Speakers Announced appeared first on HPCwire.

Overcoming Space and Power Limitations in HPC Data Centers

21 hours 24 min ago

In companies of all sizes, critical applications are being adopted to accelerate product development, make forecasts based on predictive models, enhance business operations, and improve customer engagement. As a result, many businesses face a growing need for big data analytics, more sophisticated and more granular modeling and simulation, widespread adoption of AI (and the need to train neural networks), and new applications such as the use of genomic analysis in clinical settings and personalized medicine.

These applications generate workloads that overwhelm the capacity of most installed data center server systems. Simply put, today’s compute-intensive workloads require access to significant HPC resources.

Challenges Bring HPC to the Mainstream

Many of today’s new and critical business applications are pushing the limits of traditional data centers. As a result, most companies that previously did not need HPC capabilities now find such processing power is required to stay competitive. Unfortunately, several problems prevent this from happening.

When attempting to upgrade infrastructure, most organizations face inherent data center limitations with space and power. Specifically, many data centers lack the physical space to increase compute capacity significantly. And all organizations incur high electricity costs to run and cool servers, while some data centers have power constraints that cannot be exceeded.

Additionally, there is a lack of internal HPC expertise. IT staff may not have the knowledge base to determine which HPC elements (including processors, memory, storage, power, and interconnects) are best for the organization’s workloads, or the expertise to carry out HPC system integration and optimization. These skills have not been required in mainstream business applications until now.

As a result, most organizations need help when selecting an HPC solution to ensure it is the right match for the organization’s compute requirements and budget constraints, and one that fits into an existing data center.

Selecting the Right Technology Partner

Traditional clusters consisting of commodity servers and storage will not run the compute-intensive workloads being introduced into many companies today. Fortunately, HPC systems can be assembled using the newest generation of processors, high-performance memory, high-speed interconnect technologies, and high-performance storage devices such as NVMe SSDs.

However, to address data center space and power issues, an appropriate solution must deliver not just HPC capabilities, but the most compute power per watt in a densely packed enclosure.

To achieve this, it makes sense to find a technology partner with deep HPC experience who can bring together optimized systems solutions with rack hardware integration and software solution engineering to deliver ultimate customer satisfaction. This is an area where Super Micro Computer, Inc. can help.

Supermicro® has a wide-range of solutions to meet the varying HPC requirements found in today’s organizations. At the heart of its HPC offerings are the SuperBlade® and MicroBlade product lines, which are advanced high-performance, density-optimized, and energy-efficient solutions for scalable resource-saving HPC applications.

Both lines offer industry-leading performance, density, and energy efficiency. They support the option of BBP® (Battery Backup Power modules), so the systems provide extra protection to the data centers when a power outage or UPS failure occurs. This feature is ideal for critical workloads, ensuring uptime in the most demanding situations.

SuperBlade and MicroBlade solutions are offered in several form factors (8U, 6U, 4U, 3U) to meet the various compute requirements in different business environments.

At the high end of the spectrum, there is the 8U SuperBlade:

SBE-820C series enclosure supports 20x 2-socket (Intel® Xeon® Scalable processor) blade servers with 40 hot-plug NVMe SSDs or 10x 4-socket (Intel® Xeon® Scalable processor) blade servers with 80 hot-plug NVMe SSDs, 100Gbps EDR InfiniBand or 100Gbps Intel Omni-Path switch, and 2x 10GbE switches. This SKU is best for HPC, enterprise-class applications, cloud computing, and compute-intensive applications.

SBE-820J series enclosure supports 20x 2-socket (Intel® Xeon® Scalable processor) blade servers with 40 hot-plug NVMe SSDs or 10x 4-socket (Intel® Xeon® Scalable processor) blade servers with 80 hot-plug NVMe SSDs, and 4x Ethernet switches (25GbE/10GbE). This SKU is similar to the SKU above, except it is built to operate at 25G/10G Ethernet instead of 100G InfiniBand or Omni-Path. This solution is most suitable for HPC workloads in IT environments that leverage Ethernet switches with 40G or 100G uplinks.

The 8U SuperBlade offering includes the highest-density x86-based servers, which can support Intel® Xeon® Scalable processors of up to 205W. One Supermicro customer at a leading semiconductor equipment company is using 8U SuperBlade systems for HPC applications with 120x 2-socket (Intel® Xeon® Scalable processor) blade servers per rack. This allows the company to save a significant amount of space and investment dollars in its data center.

Supermicro solutions helped a Fortune 50 Company scale its processing capacity to support its rapidly growing compute requirements. To address space limitations and power consumption issues, the company deployed over 75,000 Supermicro MicroBlade disaggregated, Intel® Xeon® processor-based servers at its Silicon Valley data center. Both SuperBlade and MicroBlade are equipped with advanced airflow and thermal design and can support free-air cooling. As a result, this data center is one of the world’s most energy efficient with a Power Usage Effectiveness (PUE) of 1.06.

Compared to a traditional data center running at a PUE of 1.49, this new Silicon Valley data center powered by Supermicro blade servers achieves an 88 percent reduction in facility power overhead, i.e., the energy consumed beyond the IT load itself. When the build-out is complete at a 35 megawatt IT load, the company is targeting $13.18M in savings per year in total energy costs across the entire data center.
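
The efficiency figures above follow from simple PUE arithmetic. A minimal sketch of that calculation, using only the 1.49 and 1.06 PUE values and the 35 MW IT load quoted in this article (no electricity prices are assumed, so the dollar savings are not reproduced here):

```python
def overhead_reduction(pue_new: float, pue_old: float) -> float:
    """Fractional reduction in facility overhead energy, i.e. the power
    spent beyond the IT load itself, when moving from pue_old to pue_new."""
    return (pue_old - pue_new) / (pue_old - 1.0)

# PUE values quoted in the article.
pue_traditional = 1.49
pue_supermicro = 1.06

print(f"Overhead reduction: {overhead_reduction(pue_supermicro, pue_traditional):.0%}")
# -> roughly 88%, matching the figure cited above

# At the planned 35 MW IT load, the difference in total facility draw is:
it_load_mw = 35.0
saved_mw = it_load_mw * (pue_traditional - pue_supermicro)
print(f"Facility power avoided: {saved_mw:.1f} MW")  # about 15 MW of overhead
```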

Summary

Supermicro provides customers around the world with application-optimized server, workstation, blade, storage, and GPU systems. Based on its advanced Server Building Block Solutions and system architecture innovations, Supermicro offers the industry’s most optimized selection for IT, datacenter, and HPC deployments. Its SuperBlade and MicroBlade solutions deliver industry-leading density and energy efficiency to address common data center limitations when scaling HPC capacity.

To learn how your organization can run new compute-intensive workloads while addressing space and power limitations, visit:  

http://www.supermicro.com/products/SuperBlade/

http://www.supermicro.com/products/MicroBlade/

The post Overcoming Space and Power Limitations in HPC Data Centers appeared first on HPCwire.

HPC Systems Professionals Workshop at SC18 to Focus on Reproducibility

Fri, 04/20/2018 - 16:45

Defining the HPC Systems Professional Workshop Audience

In order to meet the demands of HPC researchers, large-scale computational and storage machines require many staff members who design, install, and maintain high-performance systems. These HPC systems professionals include system engineers, system administrators, network administrators, storage administrators, and operations staff, all of whom face challenges that are specific to high-performance systems.

HPCSYSPROS18 Focus

The HPC Systems Professionals Workshop (HPCSYSPROS) is a platform for discussing the unique challenges and developing the state of the practice for the HPC systems community.  The program committee is soliciting submissions that address the best practices of building and operating high performance systems with an emphasis on reproducible solutions that can be implemented by systems staff at other institutions.

Aligning with the SC18 push for reproducibility in workshops and the HPC systems professional discipline, the HPCSYSPROS workshop is calling for a new type of submission. Along with academic papers, the committee is soliciting artifact submissions with short descriptions.

These artifacts should have novel and reproducible outcomes, and those outcomes will be verified during the review process. All artifacts and papers will be published as an archive for best practices and novel techniques for dissemination to the broader HPC Systems Professionals community.

Some topics of interest for this group are below (please note, topics are not limited to this list).

  • Cluster, configuration, or software management
  • Performance tuning/Benchmarking
  • Resource manager and job scheduler configuration
  • Monitoring/Mean-time-to-failure/ROI/Resource utilization
  • Virtualization/Clouds
  • Designing and troubleshooting HPC interconnects
  • Designing and maintaining HPC storage solutions
  • Cybersecurity and data protection
  • Cluster storage

Some Artifact Types for Submissions

  • Architecture Descriptions of interesting network, storage or system architecture
  • Small Middleware or Systems Software
  • System Configuration and Configuration Management

For more information about submitting to the HPCSYSPROS18 workshop, please read the CFP.

The workshop will be held in conjunction with SC18; see the workshop page for complete details and additional links.

About the authors: Stephen Lien Harrell is the SC18 Program Chair and Alex Younts is the General Chair of the HPCSYSPROS18. Both work for Purdue University.

Source: SC18

The post HPC Systems Professionals Workshop at SC18 to Focus on Reproducibility appeared first on HPCwire.

ISC STEM Student Day Opens for Signup

Fri, 04/20/2018 - 08:47

FRANKFURT, Germany, April 20, 2018 – The signup for the ISC STEM Student Day program opened a few days ago and the organizers have already received an overwhelming response. So far, close to 130 regional and international applications have been received. The program can admit a total of 200 participants.

The enthusiastic response underlines the value of offering STEM degree students a program that gives them an early insight into the field of high performance computing and the important players in the sector. Businesses looking for talent to hire or universities considering candidates for their postgraduate degree programs will benefit from being a part of the ISC STEM Student Day & Gala.

The STEM Student Day & Gala will be held on Wednesday, June 27, kicking off with a morning tutorial on HPC, machine learning and data analytics. Other important elements of the program include a tour of the exhibition, the Wednesday keynote address delivered by Dr. Thomas Sterling, and the HPC career fair and dinner at a nearby hotel.

Due to space constraints, the tutorial can only admit 70 attendees, but the rest of the program is open to 200 students. Interested applicants are encouraged to sign up through the website as soon as possible. Those accepted will receive a notification in the month of May, which gives them free admission to the program and the Wednesday conference.

The current applicants are enrolled in various degree programs, including meteorology, physics, computer science, mathematics, biosciences, biochemistry, machine learning for neuroscience, computer engineering, computational physics, automation engineering, biology, chemistry, climate science, engineering cybernetics, business IT, and molecular biotechnology.

“Last year slightly over 100 students signed up for ISC STEM Student Day & Gala and this year we have already exceeded this number within a few days of opening the program,” remarked Nages Sieslack, ISC Communications Manager. “This shows that both local and international students are very keen on a comprehensive introduction to high performance computing and converging technologies, but have very limited opportunities to attain that currently.

“Every employer in the HPC sector hopes to find a qualified work force, but many STEM graduates are not entering high performance computing, as they aren’t all that aware of the career opportunities in this field. Many universities still don’t integrate high performance computing as a module in their existing curriculum, which puts the students and the HPC sector at a disadvantage.

“When I presented this disparity to the ISC management, they fully supported me in establishing this program. As conference organizers, it is our hope to enable more students to attend the ISC STEM Student Day & Gala in the future. Besides providing the students free registration, networking opportunities, meals and career advice, we hope to help them achieve a realistic impression of this challenging field and its tight-knit community,” said Sieslack.

The organizers are glad to see PRACE, Fraunhofer ITWM, Hyperion Research, Intersect360 Research, ATOS, ICHEC, Leibniz Supercomputing Centre, NAG, RIKEN R-CCS, Goethe Universität Frankfurt am Main and HiPEAC step forward to support this year’s STEM Student Day & Gala.

If you are interested in the attendee demographics or sponsoring the ISC STEM Day & Gala in the future, please contact josiah.tabor@isc-group.com.

About ISC High Performance

First held in 1986, ISC High Performance is the world’s oldest and Europe’s most important conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments.

ISC High Performance attracts engineers, IT specialists, system developers, vendors, scientists, researchers, students, journalists, and other members of the global HPC community. The exhibition draws decision-makers from the automotive, finance, defense, aeronautical, gas & oil, banking, pharmaceutical and other industries, as well as those providing hardware, software and services for the HPC community. Attendees will learn firsthand about new products and applications, in addition to the latest technological advances in the HPC industry.

Source: ISC High Performance

The post ISC STEM Student Day Opens for Signup appeared first on HPCwire.

Fujitsu Boosts RIKEN’s AI Research Computer ‘RAIDEN’ by 13.5x

Thu, 04/19/2018 - 21:14

TOKYO, April 19, 2018 — Fujitsu has significantly boosted the performance of RAIDEN (Riken AIp Deep learning ENvironment), a computer system for artificial intelligence research that it originally deployed in 2017 at the RIKEN Center for Advanced Intelligence Project (AIP Center), the AI research arm of RIKEN. Fujitsu carried out the upgrade based on an order from RIKEN, and the RIKEN AIP Center put the upgraded system into operation in April 2018.

The upgrade increases RAIDEN’s total theoretical computational performance by a considerable margin, from an initial four petaflops to 54 petaflops [editor’s note: all figures are half-precision], placing it in the top tier of Japan’s systems. In building this system, Fujitsu demonstrates its commitment to supporting cutting-edge AI research in Japan.

Background

Since it began operations after system delivery in April 2017, the RIKEN AIP Center in Japan has put RAIDEN to use for R&D on next-generation AI technology. Such cutting-edge AI research is conducted with enormous neural networks, particularly in deep learning. The increasing scale of neural networks promises improvements such as greater accuracy in handling more complex characteristics, but it has also led to a drastic increase in computational volume. Moreover, even in AI research beyond deep learning, computational time is increasing due to the growing complexity of algorithms and the volumes of data involved.

The RIKEN AIP Center undertook this upgrade in light of its expanded usage needs, with a view toward increasing the efficiency of research and development and promoting further AI research using RAIDEN.

Structure of the Upgraded System

For the GPU servers specialized for deep learning, the system has been upgraded from NVIDIA DGX-1 servers featuring NVIDIA Tesla P100 GPUs to DGX-1 servers with the latest NVIDIA Tesla V100 GPUs. In addition, by increasing the number of DGX-1 servers from 24 to 54, the upgraded system achieves a computational performance of 54 petaflops.
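
As a rough sanity check of the headline numbers, the arithmetic below assumes the commonly cited figures of eight Tesla V100 GPUs per DGX-1 and roughly 125 teraflops of half-precision Tensor Core peak per V100; neither figure appears in the release, so treat them as assumptions:

```python
# Back-of-the-envelope check of the RAIDEN upgrade figures.
TFLOPS_PER_V100_FP16 = 125.0  # assumed half-precision Tensor Core peak per Tesla V100
GPUS_PER_DGX1 = 8             # assumed GPU count per NVIDIA DGX-1

dgx1_count = 54
total_pflops = dgx1_count * GPUS_PER_DGX1 * TFLOPS_PER_V100_FP16 / 1000.0
print(f"Estimated peak: {total_pflops:.0f} petaflops")  # ~54 PF, as quoted

speedup = 54.0 / 4.0  # upgraded system vs. the original 2017 deployment
print(f"Speedup over the original RAIDEN: {speedup:.1f}x")  # 13.5x, per the headline
```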

For the computational cluster servers capable of more general-purpose processing, in addition to the 32 existing Fujitsu Server PRIMERGY RX2530 M2 x86 servers, Fujitsu has newly deployed 64 additional Fujitsu Server PRIMERGY CX2550 M4 servers. In addition, it has deployed one Fujitsu Server PRIMERGY RX4770 M4 unit as a compute server that handles high-volume data.

Related Websites

Fujitsu Technical Computing

Fujitsu to Build RIKEN’s “Deep learning system,” One of Japan’s Largest Systems Dedicated to AI Research

About Fujitsu

Fujitsu is the leading Japanese information and communication technology (ICT) company offering a full range of technology products, solutions and services. Approximately 155,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE: 6702) reported consolidated revenues of 4.5 trillion yen (US$40 billion) for the fiscal year ended March 31, 2017. For more information, please see http://www.fujitsu.com.

Source: Fujitsu

The post Fujitsu Boosts RIKEN’s AI Research Computer ‘RAIDEN’ by 13.5x appeared first on HPCwire.

JUWELS Phase 1 Installation Completes at Jülich Supercomputing Centre

Thu, 04/19/2018 - 20:36

April 19 — Capable of 12 quadrillion calculations per second, JUWELS will replace the Tier-0 machine JUQUEEN this spring, opening the way to exascale supercomputers.

Spring 2018 came with big news in Jülich Supercomputing Centre (JSC): the first module of the JUWELS supercomputer has been installed.

JUWELS stands for “Jülich Wizard for European Leadership Science” and was built in a joint development project between Atos and ParTec.

It is the first element of an innovative supercomputer architecture called “Modular Supercomputing”. A highly scalable module with significantly higher performance will follow in 2019.

The prototype for this smart exascale architecture was developed in the three EU-funded projects DEEP, DEEP-ER and DEEP-EST.

JUWELS is built on the basis of European IP and technology. It can be seen as the first proof of concept to demonstrate European performance on the way to exascale supercomputers.

Source: European Commission

The post JUWELS Phase 1 Installation Completes at Jülich Supercomputing Centre appeared first on HPCwire.

Supermicro Announces Closing of Refinancing

Thu, 04/19/2018 - 19:56

SAN JOSE, Calif., April 19 — Super Micro Computer, Inc., a global leader in high-performance, high-efficiency server, storage technology and green computing, today announced that the Company has closed an expanded new credit facility, replacing its existing amended credit facility. The new credit facility, led by Bank of America Merrill Lynch, received commitments from syndicate banks for an amount in excess of $250 million, a 60% increase in borrowing capacity. In addition, the new credit facility provides a conversion opportunity to expand borrowing capacity to $400 million, after certain conditions have been met.

“As Supermicro continues its strong growth momentum, it was imperative that we increase our network of relationship banks as well as total liquidity. Our new increased credit facility of $250 million in addition to our other existing credit facility allows an upside in borrowing capacity to meet our immediate growth objectives,” said Kevin Bauer, chief financial officer. “We are pleased to enter into this new corporate credit facility, which provides flexibility for future capacity while we continue to execute on our long term growth strategy.”

About Super Micro Computer, Inc.

Supermicro, a global leader in high-performance, high-efficiency server technology and innovation is a premier provider of end-to-end green computing solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro’s advanced Server Building Block Solutions offer a vast array of components for building energy-efficient, application-optimized, computing solutions. Architecture innovations include Twin, TwinPro, FatTwin, Ultra Series, MicroCloud, MicroBlade, SuperBlade, Double-sided Storage, Battery Backup Power (BBP) modules and WIO/UIO.

Products include servers, blades, GPU systems, workstations, motherboards, chassis, power supplies, storage, networking, server management software and SuperRack cabinets and accessories, delivering unrivaled performance and value.

Founded in 1993 and headquartered in San Jose, California, Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative. The Company has global logistics and operations centers in Silicon Valley (USA), the Netherlands (Europe) and its Science & Technology Park in Taiwan (Asia).

Source: Supermicro

The post Supermicro Announces Closing of Refinancing appeared first on HPCwire.

Atipa Technologies Wins 2017 Partner of the Year for HPC Technical Solution at Intel Technology Partner Awards

Thu, 04/19/2018 - 19:22

LAWRENCE, Kansas, April 19 — Atipa Technologies, a leading provider of high-performance computing and storage solutions, today announced that it received the Intel Technology Partner award for HPC Technical Solution. Atipa Technologies was honored for its streamlined high-performance computing deployments for the Department of Energy using Intel technologies. The award was presented at the 2018 Intel Partner Connect conference in Washington, D.C.

Atipa Technologies architected an extremely cost-effective 924-node computing cluster for a U.S. Naval Nuclear Laboratory. Atipa evaluated the cost/performance ratio across a wide range of processors in the Intel Xeon E5-2600 v4 family and chose to build the system using the Intel Xeon processor E5-2683 v4 and the Intel Compute Module HNS2600KP Family. During system testing and burn-in, Atipa went the extra mile to ensure all nodes individually achieved SPECfp2006 rates within two percent of the average, giving the Naval Nuclear Laboratory greater predictability and reliability across nodes throughout the entire cluster. The entire system was turned on and accepted within seven business days after the start of the on-site installation. The final optimized system performance surpassed the minimum requirements by 15 percent while staying within budget.
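
The article does not describe Atipa’s test harness, but the uniformity criterion it mentions (every node’s SPECfp2006 rate within two percent of the cluster average) is easy to express as a check. A hedged sketch, with node names and scores invented purely for illustration:

```python
# Hypothetical burn-in check: flag nodes whose SPECfp2006 rate drifts more
# than 2% from the cluster-wide average (the criterion described above).
# Node names and scores below are invented for illustration only.
scores = {"node001": 1012.0, "node002": 1025.0, "node003": 998.0, "node004": 960.0}

mean = sum(scores.values()) / len(scores)
tolerance = 0.02  # two percent, per the article

outliers = {node: rate for node, rate in scores.items()
            if abs(rate - mean) / mean > tolerance}

print(f"Cluster mean: {mean:.1f}")
for node, rate in outliers.items():
    print(f"{node} out of tolerance: {rate:.1f} ({(rate - mean) / mean:+.1%})")
```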

“Our engineers are among the most thorough and meticulous in the business. Their passion for technology and drive for perfection allow us to deliver the highest quality HPC systems and exceed our customers’ expectations,” said Dana Chang, vice president, at Atipa Technologies.

“Intel appreciates the work that Atipa Technologies has done in finding new and innovative ways to use Intel technology to improve the deployment process for high performance computing solutions,” said Jason Kimrey, general manager, U.S. Channel Scale and Partners at Intel. “We congratulate them on their successes and look forward to our continued collaboration.”

For additional details about Atipa Technologies’ High-Performance Computing services and solutions, visit www.atipa.com.

About Atipa Technologies

Atipa Technologies is the high-performance computing division of Microtech Computers, which has been in business and under the same ownership since 1986. Atipa has been deploying High-Performance Computing and Storage systems since 2001 and has since received many awards and recognitions, including most recently the 2017 Intel Partner of the Year award for HPC Technical Solution. We consistently deliver quality high-performance solutions ranked on the Top500.org list of the fastest supercomputers in the world and deployed the 13th fastest supercomputer in the world in 2013. Atipa’s commitment to rigorous testing of every hardware and software component before shipping a system is the key to our success and customer satisfaction.

Source: Atipa Technologies

The post Atipa Technologies Wins 2017 Partner of the Year for HPC Technical Solution at Intel Technology Partner Awards appeared first on HPCwire.

Moscow State University Team Wins World Finals of the ACM International Collegiate Programming Contest

Thu, 04/19/2018 - 19:15

NEW YORK, April 19, 2018 – The 2018 World Finals of the Association for Computing Machinery (ACM) International Collegiate Programming Contest (ICPC) culminated today at Peking University in Beijing, China. Three students from Moscow State University earned the title of 2018 World Champions. Teams from the Moscow Institute of Physics and Technology, Peking University and The University of Tokyo placed in second, third and fourth places and were recognized with gold medals in the prestigious competition.

ACM ICPC is the premier global programming competition conducted by and for the world’s universities. The global competition is conceived, operated and shepherded by ACM and headquartered at Baylor University. This year’s World Finals were hosted by Peking University and CYSC: Children and Youth Science Center of CAST, and the contest was sponsored by Founder Group and JetBrains. For more than four decades, the competition has raised the aspirations and performance of generations of the world’s problem solvers in computing sciences and engineering.

In the competition, teams of three students tackle eight or more complex, real-world problems. The students are given a problem statement, and must create a solution within a looming five-hour time limit. The team that solves the most problems in the fewest attempts in the least cumulative time is declared the winner. This year’s World Finals saw 140 teams competing. Now in its 42nd year, ICPC has gathered more than 320,000 students from around the world to compete since its inception.
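
The ranking rule sketched above can be made concrete. The snippet below implements the customary ICPC scoring, in which ties on problems solved are broken by total penalty time, with a 20-minute penalty per rejected run on a solved problem; the 20-minute figure is the conventional rule and is not stated in this release. The team data are invented for illustration.

```python
# Conventional ICPC ranking: most problems solved wins; ties broken by penalty
# time (minutes to each accepted run plus an assumed 20 minutes per rejected
# attempt on problems that were eventually solved).
from typing import List, Tuple

def penalty(solved: List[Tuple[int, int]]) -> int:
    """solved: (minutes from contest start to the accepted run, wrong attempts before it)."""
    return sum(minutes + 20 * wrong for minutes, wrong in solved)

def rank_key(solved: List[Tuple[int, int]]) -> Tuple[int, int]:
    return (-len(solved), penalty(solved))  # more solved first, then less penalty time

teams = {  # invented results
    "Team A": [(25, 0), (70, 1), (140, 0)],
    "Team B": [(30, 0), (65, 0), (150, 2)],
}
for name in sorted(teams, key=lambda t: rank_key(teams[t])):
    print(f"{name}: {len(teams[name])} solved, {penalty(teams[name])} penalty minutes")
```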

As computing increasingly becomes part of the daily routines of a growing percentage of the global population, the solution to many of tomorrow’s challenges will be written with computing code. The ICPC serves as a unique forum for tomorrow’s computing professionals to showcase their skills, learn new proficiencies and to work together to solve many real-world problems. This international event fosters the innovative spirit that continues to transform our world.

The 140 teams that participated in this year’s World Finals emerged from local and regional ICPC competitions that took place in the fall of 2017. Initially, selection took place from a field of more than 300,000 students in computing disciplines worldwide. A record number of students advanced to the regional level. 49,935 contestants from 3,089 universities in 111 countries on six continents competed at more than 585 sites, all with the goal of earning one of the coveted invitations to Beijing.

In addition to the World Champion designation, gold, silver, and bronze medals were awarded. The top teams this year included:

Moscow State University (1st)

Moscow Institute of Physics and Technology (2nd)

Peking University (3rd)

The University of Tokyo (4th)

Seoul National University (5th)

University of New South Wales (6th)

Tsinghua University (7th)

Shanghai Jiao Tong University (8th)

St. Petersburg ITMO University (9th)

University of Central Florida (10th)

Massachusetts Institute of Technology (11th)

Vilnius University (12th)

Ural Federal University (13th)

For full results, to learn more about the ICPC, view historic competition results, or investigate sample problems, please visit https://icpc.baylor.edu.

About the ACM-ICPC

Headquartered at Baylor University, the ACM-ICPC is a global competition among the world’s university students, nurturing new generations of talent in the science and art of information technology. For more information about the ACM-ICPC, including downloadable high-resolution photographs and videos, visit ICPC headquarters and ICPCNews. Additional information can be found via the “Battle of the Brains” podcast series. Follow the contest on Twitter @ICPCNews and #ICPC2018.

About ACM

ACM, the Association for Computing Machinery www.acm.org, is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Source: ACM

The post Moscow State University Team Wins World Finals of the ACM International Collegiate Programming Contest appeared first on HPCwire.

MSOE to Break Ground on New Facility for AI and Computational Science Education

Thu, 04/19/2018 - 12:26

MILWAUKEE, Wis., April 19, 2018 — On Monday, April 23 at 2 p.m., Milwaukee School of Engineering (MSOE) will break ground on its new academic facility. The $34 million Dwight and Dian Diercks Computational Science Hall has been funded by a donation from MSOE Regent and alumnus Dr. Dwight Diercks and his wife, Dian.

Dwight and Dian Diercks Computational Science Hall Rendering. Rendering by Uihlein-Wilson Architects.

The gift marks a bold step forward for the university. With the addition of Dwight and Dian Diercks Computational Science Hall, and its new Bachelor of Science in Computer Science program, MSOE will be positioned at the educational forefront in artificial intelligence (AI), deep learning, cyber security, robotics, cloud computing and other next-generation technologies.

Dr. Diercks earned his bachelor’s degree in computer science and engineering at MSOE in 1990. He also holds an Honorary Doctor of Engineering degree from the university. Today, Diercks serves as senior vice president at NVIDIA, a California-based technology company and global leader in AI, supercomputing and visual computing.

About MSOE

MSOE is an independent, non-profit university with about 2,800 students that was founded in 1903. MSOE offers bachelor’s and master’s degrees in engineering, business and nursing. The university has a national academic reputation; longstanding ties to business and industry; dedicated professors with real-world experience; a 95% graduate outcomes rate; and the highest ROI and average starting salaries of any Wisconsin university according to PayScale Inc. MSOE graduates are well-rounded, technologically experienced and highly productive professionals and leaders.

Source: MSOE

The post MSOE to Break Ground on New Facility for AI and Computational Science Education appeared first on HPCwire.

IBM Joins NSF’s BIGDATA Program

Thu, 04/19/2018 - 12:10

April 19, 2018 — The National Science Foundation’s (NSF) Directorate for Computer and Information Science and Engineering (CISE) is pleased to announce that IBM has joined as one of the cloud resource providers for the Critical Techniques, Technologies, and Methodologies for Advancing Foundations and Applications of Big Data Sciences and Engineering (BIGDATA) program solicitation in Fiscal Year (FY) 2018.

IBM joins Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure in providing cloud credits/resources to qualifying NSF-funded projects, thereby supporting researchers in their big data research and education activities, especially those focusing on large-scale experimentation and scalability studies.

Following the introduction and success of the cloud option last year, CISE issued a call to encourage participation by all cloud providers.

Proposals submitted to the NSF BIGDATA program that request cloud credits/resources must adhere to a maximum 70-30 split between the requested NSF funds and the requested cloud resources, respectively, and must not request less than $100,000 for cloud resources.
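
Read literally, the rule above caps the cloud portion at 30 percent of the total request and sets a $100,000 floor on it. A minimal sketch of that compliance check follows; this reading of the 70-30 split is an interpretation, and the solicitation text remains authoritative.

```python
# Sketch of the BIGDATA cloud-budget rule as described above: cloud resources
# may be at most 30% of the total request and no less than $100,000.
# (Interpretation of the 70-30 split is ours; consult the solicitation.)
def cloud_request_ok(nsf_funds: float, cloud_resources: float) -> bool:
    total = nsf_funds + cloud_resources
    within_split = cloud_resources <= 0.30 * total
    meets_minimum = cloud_resources >= 100_000
    return within_split and meets_minimum

print(cloud_request_ok(nsf_funds=700_000, cloud_resources=300_000))  # True
print(cloud_request_ok(nsf_funds=700_000, cloud_resources=350_000))  # False: over the 30% cap
print(cloud_request_ok(nsf_funds=900_000, cloud_resources=90_000))   # False: under the $100k floor
```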

Proposers interested in using IBM cloud resources may use the information on the following webpage to develop the total cost of cloud resources along with an annual usage plan over the duration of their projects: https://www.ibm.com/cloud/pricing. Corresponding information for the other cloud providers is provided in the BIGDATA program solicitation.

BIGDATA proposal submissions are due between May 7 and May 14, 2018 (and no later than 5 p.m. submitter’s local time on May 14th).  All those interested in submitting a proposal should refer to the solicitation for details.

Of the 21 BIGDATA awards announced in FY 2017, eight benefited from the cloud option.  We are confident that our partnerships with AWS, GCP, Microsoft Azure, and now IBM will help accelerate research and innovation in big data and data science.  We look forward to the response from the national big data and data science research community!

Source: NSF

The post IBM Joins NSF’s BIGDATA Program appeared first on HPCwire.

Hands-on Supercomputing Labs Prepare Students for Careers in Drug Discovery

Thu, 04/19/2018 - 09:06

April 18, 2018 — When the students in Pierre Neuenschwander’s master’s level “Proteins and Nucleic Acids” class prepared for their midterm in March, they were actually following a path that their professor had begun nearly a decade ago.

Docking simulation of Argatroban to Thrombin: The Argatroban drug is shown as a stick structure with the Thrombin macromolecule shown as a white surface. Simulation with the proper Thrombin binding site residues for reference resulted in an almost perfect position of the drug in the binding pocket. Image courtesy of TACC.

A lab scientist by training, Neuenschwander, an associate professor of biochemistry at The University of Texas Health Science Center at Tyler (UTHSCT), began to experiment in the mid-2000s with computational drug docking, then in its early days.

The idea of discovering and testing how new drugs bind to proteins in silico (or on a computer) had been around for a while, but it was only around that time that computers were getting powerful enough to simulate the physics and chemistry involved in molecules interacting.

Neuenschwander used newly-available software to perform virtual experiments on clotting factor IXa, a blood clotting enzyme that was the subject of his research at that time.

He began the computations and day after day would return to his office to jiggle the mouse to make sure the computer was still processing. After several months, his computer finally spit out an answer. He was astounded to find it had deduced the location where a drug would bind.

“Eventually it gave an answer — nice different structural configurations of the small molecule — and one was just right,” Neuenschwander said. “I had given the computer no clue and told it to search the entire surface and it narrowed down to one spot on factor IXa.”

The story might have ended there had not a staff member from the Texas Advanced Computing Center (TACC) visited UTHSCT in 2009 and mentioned to Neuenschwander that, as part of the University of Texas Research Cyberinfrastructure (UTRC) program, researchers at all 14 of the University of Texas System institutions had free access to TACC’s supercomputers.

Neuenschwander re-ran his simulation on TACC’s Ranger supercomputer — one of the most powerful in the world at the time — and, lo and behold, he was able to solve in 15 minutes the same problem that had taken months on his laptop.

“I realized this is doable. Now we can start teaching students how to do this so when they go out and work for drug companies they can come armed with the knowledge of what’s pushing the limits and what’s possible,” he said.

Neuenschwander has offered the Proteins and Nucleic Acids course every year since 2014. Access to TACC allows him to expose students to computational modeling and design in a way that makes molecular and atomic reactions more concrete for students.

“In lectures, they learn about interactions between proteins, lipids and nucleic acids, but it’s really hard to do a lab on that,” he explained. “You can do experiments for binding but you can’t really see the interactions; you can’t get a good feel for all that’s happening. But with computer modeling on the other hand, you can, because you can visualize the process when you’re done. So I developed the lab portion of that course as a computing lab.”

Most of his students have no Linux or programming experience. In the class, they learn how to access TACC supercomputers and run virtual experiments. Neuenschwander sets up the projects in advance so students can focus on the scientific research rather than the computer programming. Nonetheless, they get a hands-on experience logging in to TACC’s systems, adjusting parameters, and running simulations.

For their mid-term exams, Neuenschwander has students download the crystal structure of docked molecules from a protein database and use TACC supercomputers to pull the molecules apart and predict how they will bind.

“The students learn that if they allow the computer to vary wildly from what’s known, they may not get the right results,” he said. “But if they make judicious choices that they can justify scientifically, the computer gets it right fairly often.”

For their finals, students are asked to take a sequence of RNA, fold it to make 1,000 three-dimensional structures, and then screen each one to see whether it will bind to a molecule – a process that forms the basis of a biotechnology tool called siRNA, which uses an RNA molecule to shut down a specific gene.
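
The final exam is essentially a generate-and-screen loop: fold the sequence many times, score each candidate structure against a binding partner, and keep the best. The sketch below shows only that control flow; fold_rna and binding_score are hypothetical placeholders, since the article does not name the folding or docking codes the students actually run on TACC systems.

```python
# Schematic of the generate-and-screen exercise described above.
# fold_rna() and binding_score() are hypothetical stand-ins for real
# folding and docking tools, which the article does not name.
import random

def fold_rna(sequence: str, model_index: int) -> str:
    """Placeholder: return an identifier for one predicted 3D structure."""
    return f"{sequence[:8]}_model_{model_index:04d}"

def binding_score(structure_id: str, target: str) -> float:
    """Placeholder: lower (more negative) scores mean tighter predicted binding."""
    return random.uniform(-12.0, 0.0)

sequence = "AUGGCUACGUUAGC"   # illustrative RNA sequence
target = "target_protein"    # illustrative binding partner

models = [fold_rna(sequence, i) for i in range(1000)]           # 1,000 structures
scored = sorted((binding_score(m, target), m) for m in models)  # best (lowest) first

print("Top candidate structures:")
for score, model in scored[:5]:
    print(f"  {model}: {score:.2f}")
```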

The exam is actually a bit of a trick though.

“What they will find out when they do the RNA modeling is that the computer can’t really find a good solution because it’s much too complicated,” Neuenschwander revealed. “So, it shows that even with the supercomputers that we have now, we need even more computing power to predict those structures with any accuracy.”

Several students who have taken Neuenschwander’s class have gone on to use advanced computing for their theses. Others end up using the skills they were first introduced to in his class in their careers and further education.

This was the case for Juan Macias, a PhD candidate in the computational and systems biology program at Washington University in Saint Louis, where he uses supercomputers to study epigenetic regulation in metabolic diseases like obesity and diabetes. He says Neuenschwander’s class helped “demystify” the process of using advanced computing.

“Exposure to the TACC resources was very useful in getting me used to working on those sorts of systems,” Macias said. “Having someone guide you through the process of working with these sorts of resources, as was done in that class, is invaluable.”

Drug docking is not the only problem that advanced computing can be used for. Molecular dynamics — a method for studying the physical movements of atoms and molecules computationally — is another area that Neuenschwander is exploring. Using molecular dynamics, researchers can understand how bonds form between molecules and how their shape changes when molecules interact. He hopes to develop a new class teaching molecular dynamics to students in the biochemistry program in the near future.

Meanwhile, the number of problems that use computation is growing. Neuenschwander is particularly excited about proteomics and efforts to develop virtual human test subjects.

“Wouldn’t it be great instead of going into human trials and risking getting someone sick first, you have a computer tell you what the potential problems might be?” he asked. “The more we learn about how these molecules interact, the closer to reality that becomes.”

It may take decades to reach that goal, but by training the next-generation of computational biochemists, Neuenschwander is helping to make it a reality.

Source: Aaron Dubrow, TACC

The post Hands-on Supercomputing Labs Prepare Students for Careers in Drug Discovery appeared first on HPCwire.

XSEDE ECSS, Blacklight and Stampede Support California Yellowtail Genome Assembly

Thu, 04/12/2018 - 10:45

April 12, 2018 — If you eat fish in the U.S., chances are it once swam in another country. That’s because the U.S. imports over 80 percent of its seafood, according to estimates by the United Nations. New genetic research could help make farmed fish more palatable and bring America’s wild fish species to dinner tables. Scientists have used big data and supercomputers to catch a fish genome, a first step in its sustainable aquaculture harvest.

Researchers assembled and annotated for the first time the genome – the total genetic material – of the fish species Seriola dorsalis. Also known as California Yellowtail, it is a fish of high value to the sashimi, or raw seafood, industry. The research team drew from the Southwest Fisheries Science Center of the U.S. National Marine Fisheries Service, Iowa State University, and the Instituto Politécnico Nacional in Mexico. They published their results in January 2018 in the journal BMC Genomics.

“The major findings in this publication were to characterize the Seriola dorsalis genome and its annotation, along with getting a better understanding of sex determination of this fish species,” said study co-author Andrew Severin, a Scientist and Facility Manager at the Genome Informatics Facility of Iowa State University.

“We can now confidently say,” added Severin, “that Seriola dorsalis has a Z-W sex determination system, and that we know the chromosome that it’s contained on and the region that actually determines the sex of this fish.” The labels refer to the sex chromosomes: in an X-Y system the male is the heterogametic sex (XY, versus XX females), while in a Z-W system the female is heterogametic (ZW, versus ZZ males). Another way to think about this is that in Z-W sex determination, the fish ovum determines the sex of the offspring, whereas in the X-Y system, such as is found in humans, the sperm determines the sex of the offspring.
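To make the heterogamety distinction concrete, here is a toy simulation (not anything from the paper) showing which parent's gamete settles the offspring's sex in an X-Y system versus a Z-W system.

```python
import random

def offspring_sex(system):
    """Toy model of which parent's gamete decides offspring sex under X-Y vs. Z-W."""
    if system == "XY":                   # e.g., humans: mothers are XX, fathers are XY
        sperm = random.choice("XY")      # every egg carries X; the sperm carries X or Y
        return "male" if sperm == "Y" else "female"
    if system == "ZW":                   # e.g., Seriola dorsalis: fathers are ZZ, mothers are ZW
        egg = random.choice("ZW")        # every sperm carries Z; the egg carries Z or W
        return "female" if egg == "W" else "male"
    raise ValueError("unknown system")

print("Z-W offspring:", [offspring_sex("ZW") for _ in range(8)])
print("X-Y offspring:", [offspring_sex("XY") for _ in range(8)])
```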

It’s hard to tell the difference between a male and a female yellowtail because the fish lack obvious phenotypic, or outwardly distinguishing, physical traits. “Being able to determine sex in fish is really important because we can develop a marker that can be used to determine sex in young fish that you can’t determine phenotypically,” Severin explained. “This can be used to improve aquaculture practices.” Sex identification lets fish farmers stock tanks with the right ratio of males to females and get better yield.

Assembling and annotating a genome is like building an enormous three-dimensional jigsaw puzzle. The Seriola dorsalis genome has 685 million pieces – its base pairs of DNA – to put together. “Gene annotations are locations on the genome that encode transcripts that are translated into proteins,” explained Severin. “Proteins are the molecular machinery that operate all the biochemistry in the body from the digestion of your food, to the activation of your immune system to the growth of your fingernails. Even that is an oversimplification of all the regulation.”

Severin and his team assembled the 685-megabase (Mb) genome from thousands of smaller fragments, each contributing a piece of the complete picture. “We had to sequence them for quite a bit of depth in order to construct the full 685 Mb genome,” said study co-author Arun Seetharam. “This amounted to a lot of data,” added Seetharam, who is an associate scientist at the Genome Informatics Facility of Iowa State University.

The raw DNA sequence data ran to 500 gigabytes for the Seriola dorsalis genome, coming from tissue samples of a juvenile fish collected at the Hubbs SeaWorld Research Institute in San Diego. “In order to put them together,” Seetharam said, “we needed a computer with a lot more RAM to put it all into the computer’s memory and then put it together to construct the 685 Mb genome. We needed really powerful machines.”
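For a back-of-the-envelope sense of the sequencing depth involved, one can divide the total number of usable bases by the genome length. The fraction of raw file bytes that are actual base calls below is purely an assumption for illustration (FASTQ files also carry headers and quality strings); only the 500 GB and 685 Mb figures come from the article.

```python
# Rough coverage-depth estimate; base_fraction is an assumption, not a study figure.
genome_size_bp = 685e6      # ~685 Mb Seriola dorsalis genome (from the article)
raw_data_bytes = 500e9      # ~500 GB of raw sequence data (from the article)
base_fraction = 0.4         # assumed share of file bytes that are actual base calls

total_bases = raw_data_bytes * base_fraction
coverage = total_bases / genome_size_bp
print(f"approximate sequencing depth: {coverage:.0f}x")   # ~292x under these assumptions
```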

That’s when Seetharam realized that the computational resources at Iowa State University at the time weren’t sufficient to get the job done in a timely manner, and he turned to XSEDE, the eXtreme Science and Engineering Discovery Environment funded by the National Science Foundation. XSEDE is a single virtual system that scientists can use to interactively share computing resources, data and expertise.

“When we first started using XSEDE resources,” explained Seetharam, “there was an option for us to select for ECSS, the Extended Collaborative Support Services. We thought it would be a great help if there were someone from the XSEDE side to help us. We opted for ECSS. Our interactions with Phillip Blood of the Pittsburgh Supercomputing Center were extremely important to get us up and running with the assembly quickly on XSEDE resources,” Seetharam said.

The genome assembly work was computed at the Pittsburgh Supercomputing Center (PSC) on the Blacklight system, which at one point was the world’s largest coherent shared-memory computing system. Blacklight has since been superseded by the data-centric Bridges system at PSC, which includes similar large-memory nodes of up to 12 terabytes — a thousand times more than a typical personal computer. “We ended up using Blacklight at the time because it had a lot of RAM,” recalled Andrew Severin. That’s because they needed to put all the raw data into the computer’s random access memory (RAM) so that it could use the algorithms of the Maryland Super-Read Celera Assembler genome assembly software. “You have to be able to compare every single piece of sequence data to every other piece to figure out which pieces need to be joined together, like a giant puzzle,” Severin explained.
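Severin's "giant puzzle" comparison, matching every piece of sequence against every other, is why assemblers of this style benefit from having the whole dataset in memory at once. The toy overlap finder below shows the all-versus-all pattern on a few short made-up reads; real assemblers such as the one used here rely on far more sophisticated indexing and error handling.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a matching a prefix of b (at least min_len)."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

reads = ["ATTAGACCTG", "CCTGCCGGAA", "GCCGGAATAC", "TACCGTTAAG"]  # made-up reads

# All-vs-all comparison: every read against every other (quadratic in the number of reads),
# which is why keeping the whole dataset in RAM on a large shared-memory machine helped.
for i, a in enumerate(reads):
    for j, b in enumerate(reads):
        if i != j:
            olen = overlap(a, b)
            if olen:
                print(f"read {i} -> read {j}: overlap of {olen} bases")
```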

“We also used Stampede,” continued Severin, “the first Stampede, which is another XSEDE computational resource that has lots and lots of compute nodes. Each compute node you can think of as a separate computer.” The Stampede1 system at the Texas Advanced Computing Center had over 6,400 Dell PowerEdge server nodes; it later added 508 Intel Knights Landing (KNL) nodes in preparation for its successor, Stampede2, which fields 4,200 KNL nodes.

“We used Stampede to do the annotation of these gene models that we identified in the genome to try and figure out what their functions are,” Severin said. “That required us to perform an analysis called the Basic Local Alignment Search Tool (BLAST), and it required us to use many CPUs, over a year’s worth of compute time that we ended up doing within a couple of weeks’ worth of actual time because of the many nodes that were on Stampede.”
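The annotation step Severin describes is embarrassingly parallel: the predicted gene or protein set can be split into chunks and each chunk searched independently, which is how a year of serial compute collapses into a couple of weeks across many nodes. The sketch below only splits a FASTA file and prints one plausible blastp command per chunk; the file names, database, and chunk count are placeholders, and it assumes NCBI BLAST+ would be available on the compute nodes.

```python
# Split a protein FASTA into chunks for independent BLAST jobs (illustrative only).
# Paths, the database name, and the chunk count are placeholders, not values from the study.

def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line, []
            elif line:
                seq.append(line)
        if header is not None:
            yield header, "".join(seq)

def split_and_plan(fasta="predicted_proteins.fa", n_chunks=1000, db="swissprot"):
    records = list(read_fasta(fasta))
    for i in range(n_chunks):
        chunk = records[i::n_chunks]          # round-robin split across chunks
        if not chunk:
            continue
        chunk_file = f"chunk_{i:04d}.fa"
        with open(chunk_file, "w") as out:
            for header, seq in chunk:
                out.write(f"{header}\n{seq}\n")
        # One independent job per chunk; these could be handed to a batch scheduler.
        print(f"blastp -query {chunk_file} -db {db} -outfmt 6 -out {chunk_file}.tsv")

if __name__ == "__main__":
    split_and_plan()
```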


Source: TACC

The post XSEDE ECSS, Blacklight and Stampede Support California Yellowtail Genome Assembly appeared first on HPCwire.

Ang Joins PNNL as Chief Scientist for Computing

Thu, 04/12/2018 - 09:59

RICHLAND, Wash., April 12, 2018 — James “Jim” Ang has joined the Department of Energy’s Pacific Northwest National Laboratory as chief scientist for computing. Ang will also serve as the lab’s point of contact for DOE’s Advanced Scientific Computing Research mission. His responsibilities include providing senior leadership in high performance computing and advanced computing.

“Jim’s experience and skills are complementary to PNNL’s existing strengths and will further our important work in graph analytics, performance modeling/simulation, runtime systems, programming models, and more,” said Louis Terminello, associate laboratory director for Physical and Computational Sciences.

For the past three years, Ang has served as the technical manager of the DOE Exascale Computing Program department at Sandia National Laboratories. He focused on developing and defining its strategy. In his 28-year career at Sandia, Ang also held management roles in its Scalable Computer Architectures department, Extreme-scale Computing group and the Scalable Systems Integration department.

“Transcending his many titles has been Jim’s overall aptitude for business development and inter-organizational collaborations,” said Terminello.

Jim’s areas of active research include HPC system architectures, system-on-chip processor designs, advanced memory subsystems, interconnection networks, large-scale system resilience, power monitoring and control, application performance analysis, the development and use of HPC architectural system simulators, proxy applications, and advanced architecture test beds for HPC co-design. Jim also serves as the president and founding board member for the Association for High Speed Computing.

Ang earned his doctorate in mechanical engineering from the University of California at Berkeley. He also received a master’s in mechanical engineering from Berkeley, a bachelor’s in mechanical engineering from the University of Illinois at Urbana-Champaign and a bachelor’s in physics from Grinnell College in Iowa.

To connect with Ang or to learn more about PNNL’s computing research portfolio—spanning from the basic to the applied—visit https://www.pnnl.gov/computing/.

About Pacific Northwest National Laboratory

Pacific Northwest National Laboratory is the nation’s premier laboratory for scientific discovery in chemistry, earth sciences, and data analytics and for solutions to the nation’s toughest challenges in energy resiliency and national security. Founded in 1965, PNNL is operated by Battelle for the U.S. Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, visit PNNL’s News Center.

Source: PNNL

The post Ang Joins PNNL as Chief Scientist for Computing appeared first on HPCwire.

US Plans $1.8 Billion Spend on DOE Exascale Supercomputing

Wed, 04/11/2018 - 23:10

On Monday, the United States Department of Energy announced its intention to procure up to three exascale supercomputers at a cost of up to $1.8 billion with the release of the much-anticipated CORAL-2 request for proposals (RFP). Although funding is not yet secured, the anticipated budget range for each system is significant: $400 million to $600 million per machine including associated non-recurring engineering (NRE).

CORAL of course refers to the joint effort – the Collaboration of Oak Ridge, Argonne and Livermore – to procure next-generation supercomputers for the Department of Energy’s national laboratories at those three sites. The fruits of the original CORAL RFP include Summit and Sierra, ~200 petaflops systems being built by IBM in partnership with Nvidia and Mellanox for Oak Ridge and Livermore, respectively, and “A21,” the retooled Aurora contract with prime Intel (and partner Cray), destined for Argonne in 2021 and slated to be the United States’ first exascale machine.

The heavyweight supercomputers are required to meet the mission needs of the Advanced Scientific Computing Research (ASCR) Program within the DOE’s Office of Science and the Advanced Simulation and Computing (ASC) Program within the National Nuclear Security Administration.

The CORAL-2 collaboration specifically seeks to fund non-recurring engineering and up to three exascale-class systems: one at Oak Ridge, one at Livermore and a potential third system at Argonne if it chooses to make an award under the RFP and if funding is available. The Exascale Computing Project (ECP), a joint DOE-NNSA effort, has been organizing and leading R&D in the areas of the software stack, applications, and hardware to ensure “capable,” i.e., productively usable, exascale machines that can solve science problems 50x faster (or more complex) than the 20 petaflops systems of today.

Like the original CORAL program, which kicked off in 2012, CORAL-2 has a mandate to field architecturally diverse machines in a way that manages risk during a period of rapid technological evolution. “Regardless of which system or systems are being discussed, the systems residing at or planned to reside at ORNL and ANL must be diverse from one another,” notes the CORAL-2 RFP cover letter [PDF]. Sharpening the point, that means the Oak Ridge system must be distinct from A21 and from a potential CORAL-2 machine at Argonne. It is conceivable, then, that this RFP may result in one, two or three different architectures, depending of course on the selections made by the labs and whether Argonne’s CORAL-2 machine comes to fruition.

“Diversity,” according to the RFP documents, “will be evaluated by how much the proposed system(s) promotes a competition of ideas and technologies; how much the proposed system(s) reduces risk that may be caused by delays or failure of a particular technology or shifts in vendor business focus, staff, or financial health; and how much the proposed system(s) diversity promotes a rich and healthy HPC ecosystem.”

[A table listing current and future CORAL machines accompanied the original article.]

Proposals for CORAL-2 are due in May with bidders to be selected later this year. Acquisition contracts are anticipated for 2019.

If Argonne takes delivery of A21 in 2021 and deploys an additional machine (or upgrade) in the third quarter of 2022, it would be fielding two exascale machines/builds in less than two years.

“Whether CORAL-2 winds up being two systems or three may come down to funding, which is ‘expected’ at this point, but not committed,” commented HPC veteran and market watcher Addison Snell, CEO of Intersect360 Research. “If ANL does not fund an exascale system as part of CORAL-2, I would nevertheless expect an exascale system there in a similar timeframe, just possibly funded separately.”

Several HPC community leaders we spoke with shared more pointed speculation on what the overture for a second exascale machine at Argonne so soon on the heels of A21 may indicate, insofar as there may be doubt about whether Intel’s “novel architecture” will satisfy the full scope of DOE’s needs. Given the close timing and the reality of lengthy procurement cycles, the decision on a follow-on will have to be made without the benefit of experience with A21.

Argonne’s Associate Laboratory Director for Computing, Environment and Life Sciences Rick Stevens, commenting for this piece, underscored the importance of technology diversity and shined a light on Argonne’s thinking. “We are very interested in getting as broad a range of responses as possible to consider for our planning. We would love to have multiple choices to consider for the DOE landscape including exciting options for potential upgrades to Aurora,” he said.

If Intel, working with Cray, is able to fulfill the requirements for a 1-exaflops A21 machine in 2021, the pair may be in a good position to fulfill the slightly more rigorous “capable exascale” requirements outlined by ECP and CORAL-2.

The overall bidding pool for CORAL-2 is likely to include IBM, Intel, Cray and Hewlett Packard Enterprise (HPE), although upstart system-maker Nvidia may also have a hand to play. HPE could come in with a GPU-based machine or an implementation of its memory-centric architecture, known as The Machine. In IBM’s court, the successors to Power9 are no doubt being eyed as candidate architectures.

And while it’s always fun dishing over the sexy processing elements (with flavors from Intel, Nvidia, AMD and IBM in the running), Snell pointed out it is perhaps more interesting to look at the interconnect topologies in the field. “Will we be looking at systems based on an upcoming version of a current technology, such as InfiniBand or OmniPath, or a future technology like Gen-Z, or something else proprietary?” he pondered.

Stevens weighed in on the many technological challenges still at hand, including memory capacity, power consumption, and system balance, but he noted that, fortunately, the DOE has been investing in many of these areas for years through the PathForward program and its predecessors, created to foster the technology pipeline needed for extreme-scale computing. It’s no accident or coincidence that we’ve already run through all the names in the current “Forward” program: AMD, Cray, HPE, IBM, Intel, and Nvidia.

“Hopefully the vendors will have some good options for us to consider,” said Stevens, adding that Argonne is seeking a broad set of responses from as many vendors as possible. “This RFP is really about opening up the aperture to new architectural concepts and to enable new partnerships in the vendor landscape. I think it’s particularly important to notice that we are interested in systems that can support the integration of simulation, data and machine learning. This is reflected in both the technology specifications as well as the benchmarks outlined in the RFP.”

Other community members also shared their reactions.

“It is good to see a commitment to high-end computing by DOE, though I note that the funding has not yet been secured,” said Bill Gropp, director of the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (home to the roughly 13-petaflops Cray “Blue Waters” supercomputer). “What is needed is a holistic approach to HEC; this addresses the next+1 generation of systems but not work on applications or algorithms.”

“What stands out [about the CORAL-2 RFP] is that it doesn’t take advantage of the diversity of systems to encourage specialization in the hardware to different data structure/algorithm choices,” Gropp added. “Once you decide to acquire several systems, you can consider specialization. Frankly, for example, GPU-based systems are specialized; they run some important algorithms very well, but are less effective at others. Rather than deny that, make it into a strength. There are hints of this in the way the different classes of benchmarks are described and the priorities placed on them [see page 23 of the RFP’s Proposal Evaluation and Proposal Preparation Instructions], but it could be much more explicit.

“Also, this line on page 23 stands out: ‘The technology must have potential commercial viability.’ I understand the reasoning behind this, but it is an additional constraint that may limit the innovation that is possible. In any case, this is an indirect requirement. DOE is looking for viable technologies that it can support at reasonable cost. But this misses the point that using commodity (which is how commercial viability is often interpreted) technology has its own costs, in the part of the environment that I mentioned above and that is not covered by this RFP.”

Gropp, who is awaiting the results of the NSF Track 1 RFP that will award the follow-on to Blue Waters, also pointed out that NSF has only found $60 million for the next-generation system, and has (as of November 2017) cut the number of future Track 2 systems to one. “I hope that DOE can convince Congress to not only appropriate the funds for these systems, but also for the other science agencies,” he said.

Adding further valuable insight into the United States’ strategy to field next-generation leadership-class supercomputers, especially with regard to the “commercial viability” precept, is NNSA Chief Scientist Dimitri Kusnezov. Interviewed by the Supercomputing Frontiers Europe 2018 conference in Warsaw, Poland, last month (link to video), Kusnezov characterized DOE and NNSA’s $258 million funding of the PathForward program as “an investment with the private sector to buy down risk in next-generation technologies.”

“We would love to simply buy commercial,” he said. “It would be more cost-effective for us. We’d run in the cloud if that was the answer for us, if that was the most cost-effective way, because it’s not about the computer, it’s about the outcomes. The $250 million [spent on PathForward] was just a piece of ongoing and much larger investments we are making to try and steer, on the sides, vendor roadmaps. We have a sense where companies are going. They share with us their technology investments, and we ask them if there are things we can build on those to help modify it so they can be more broadly serviceable to large scalable architectures.

“$250 million dollars is not a lot of money in the computer world. A billion dollars is not a lot of money in the computer world, so you have to have measured expectations on what you think you can actually impact. We look at impacting the high-end next-generation roadmaps of companies where we can, to have the best output. The best outcome for us is we invest in modifications, lower-power processors, memory closer to the processor, AI-injected into the CPUs in some way, and, in the best case, it becomes commercial, and there’s a market for it, a global market ideally because then the price point comes down and when we build something there, it’s more cost-effective for us. We’re trying to avoid buying special-purpose, single-use systems because they’re too expensive and it doesn’t make a lot of sense. If we can piggyback on where companies want to go by having a sense of what might ultimately have market value for them, we leverage a lot of their R&D and production for our value as well.

“This investment we are doing buys down risk. If other people did it for us that would even be better. If they felt the urgency and invested in the areas we care about, we’d be really happy. So we fill in the gaps where we can. …But ultimately it’s not about the computer, it’s really about the purpose…the problems you are solving and do they make a difference.”

The post US Plans $1.8 Billion Spend on DOE Exascale Supercomputing appeared first on HPCwire.

Trio of Heavy Hitters Talk Shop at Supercomputing Frontiers Europe 2018

Wed, 04/11/2018 - 20:01

Organizers of Supercomputing Frontiers Europe 2018, held in Warsaw, Poland, last month, have posted brief, fascinating video interviews with three prominent keynoters – Dimitri Kusnezov, chief scientist, National Nuclear Security Administration, DoE; Thomas Sterling, professor of intelligent systems engineering and director, Center for Research in Extreme Scale Technologies, Indiana University; and Karlheinz Meier, a leader in the EU Human Brain Project and professor (chair) of experimental physics, University of Heidelberg.

The conference, now in its fourth year, was held in Singapore the first three times; it tackles all things supercomputing, spanning technology, international competition, and efforts to transfer advanced technology into productive use by industry. No surprise, AI writ large, the race to exascale, and convergence of AI and traditional HPC were all part of this year’s agenda.

With apologies to those quoted, here are three teaser snippets from their video interviews intended to entice but hardly catch the full scope of their comments (their keynote topics are in parentheses):

  • Kusnezov (Precision Medicine as an Accelerator for Next Generation Supercomputing) – “We have looked for ways to partner with private sector to share best practices, share our equipment, share our capabilities and tools to help in advancing their type of competitive edge in the business sectors they are in. [We’ve found] in partnering with the private sector, it’s hit or miss.”
  • Sterling (Simultac Fonton) – “The US, according to the [NSCI] initiative, wanted to remain the leader in the field. Now in truth, by several metrics, the US is not the leader of the field although it certainly is one of the highest [users] of high performance computing in the world….If there is a race between the U.S. and China in terms of achieving exascale, in all likelihood China has already won.”
  • Meier (Neuromorphic computing: From biology to user facilities) – “There is a two-way connection between brain science and computing technology. Brain science can help develop future computing technology. We know a lot about deep learning and we think deep learning is somehow inspired by biology but currently there is very little inspiration from biology. The artificial neural networks that we use don’t have a lot to do with how the brain works.”

The interviews were wide-ranging and worth watching. Links to all three follow.

1. Dimitri Kusnezov, DoE

Worried about the global exascale race? Not Kusnezov. “If it’s a race I think that’s great because it means there are many exerting significant energy to advance the field,” said Kusnezov. He spent time distinguishing the DoE’s distinct needs from those of most HPC user communities, discussed why he thinks HPC in the cloud has yet to gain substantial traction, and talked about the DoE’s desire to influence vendor technology roadmaps. While collaboration between DoE supercomputer facilities and industry to foster HPC adoption remains difficult, Kusnezov singled out oil and gas companies and tire manufacturers as success stories and expects more to come.

 

2. Thomas Sterling, Indiana University and CREST

Sterling was informative and entertaining as always. “Simply a prediction on my part. I believe many will disagree [but] I believe intelligent machines that embody principle intelligence – we don’t have these yet – will ultimately go back to the early schools of symbolic computing rather than statistical, probability computing or the training-session-based computing, and will ultimately consume the vast majority of cycles in computing whether it’s small computers or supercomputers larger than can be imagined. So I believe that intelligent computing will dominate computing within our lifetime.” 

 

3. Karlheinz Meier, Human Brain Project and University of Heidelberg

Meier made clear how little we know about how the brain works, and how neuromorphic computing that more accurately mimics brain function and organization could be game-changing. Duplicating the brain’s efficient use of energy – roughly 20 watts versus megawatts – has long been an attractive goal and would be a game changer in mobile devices. Fault tolerance is another critical attribute. “We lose about one brain cell per second, 100 thousand per day, and still we kind of function reasonably well. If you would kill a transistor [in] a processor per second, it would stop processing immediately.”

The post Trio of Heavy Hitters Talk Shop at Supercomputing Frontiers Europe 2018 appeared first on HPCwire.

TACC Promotes Paul Navratil to Director of Visualization

Wed, 04/11/2018 - 16:46

April 11 — The Texas Advanced Computing Center (TACC) at The University of Texas at Austin today announced that Paul Navrátil has been promoted to the position of Director of Visualization.

In this role, he will manage the Scalable Visualization Technologies (SVT) and Visualization Interfaces and Applications (VIA) groups and direct the overall strategy for the visualization area at TACC. Navrátil most recently served as deputy director of the Visualization group; before that he managed the SVT group for seven years. He has been with TACC for a total of 11 years.

Navrátil leads many grants and contracts as either a principal investigator (PI) or co-PI, including the Intel Visualization Center of Excellence. Through that center, Navrátil has worked with Intel to develop TACC’s strategy for the adoption of software-defined visualization, which enables visualization on any compute resource and eliminates the need for a dedicated visualization subsystem. He has led many of TACC’s remote visualization efforts, including management of large-scale visualization resources on TACC’s Longhorn, Stampede and Maverick systems.

Paul Navrátil, TACC’s Director of Visualization

“I’m excited about working with our outstanding visualization team to develop seamless solutions for the complete analysis life-cycle that enable more effective scientific workflows independent of data size, data velocity and computing platform,” Navrátil said.

“Our aim is to make our users more effective in performing their science, whether analyzing the data from a ten thousand-core weather model or a ten thousand time-step molecular simulation, and whether they are using a high-resolution tiled display or a smart phone,” he said. “We also will leverage advancements in virtual reality and augmented reality technologies to improve analytic capabilities. The future outlook is very exciting.”

In his new role, Navrátil will continue to advance in situ visualization techniques for large-scale data, in which analysis runs at the same time as the simulation, to cope with the widening gap between a simulation’s ability to generate data and its ability to write that data to disk for later analysis.
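A minimal sketch of the in situ pattern: instead of writing every timestep's full output to disk for later visualization, the simulation loop hands its in-memory data to an analysis routine and keeps only small summaries. The function names and sizes below are illustrative assumptions, not TACC software.

```python
import math

def simulate_step(step, n=200_000):
    """Stand-in for one timestep of a simulation producing a large field in memory."""
    return [math.sin(0.001 * (step + i)) for i in range(n)]

def in_situ_analysis(field, step):
    """Reduce the full field to a small summary instead of writing it all to disk."""
    return {"step": step, "min": min(field), "max": max(field),
            "mean": sum(field) / len(field)}

summaries = []
for step in range(10):
    field = simulate_step(step)          # the large data lives only in memory
    if step % 2 == 0:                    # analyze every other step, in situ
        summaries.append(in_situ_analysis(field, step))
    # no full-field write: only the small summaries would ever go to disk

print(summaries[:2])
```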

Navrátil earned his doctorate, master’s and bachelor’s degrees in Computer Science from The University of Texas at Austin as well as a bachelor’s degree in Plan II Honors from the university. He has authored numerous papers and publications on scientific data visualization, irregular algorithms, and parallel systems. His work has been featured in numerous venues, both nationally and internationally, including the New York Times, Discover, and PBS News Hour.

Kelly Gaither, who previously served as TACC’s Director of Visualization for 16 years, now has a joint appointment at the Dell Medical School (DMS) as an associate professor with the DMS Women’s Health research team. Gaither will remain as a director and senior research scientist at TACC and split her time between the two appointments.

“The words ‘Big Data’ seem to be used ever more frequently, but the best way for many humans to interpret large amounts of data is to *look* at a visual representation of it,” said TACC Executive Director Dan Stanzione. “Fundamentally, that’s what visualization is about — trying to take the very large and complicated datasets that come from modern scientific computing, and turn them into something that people can understand and gain new insight from.”

“Visualization is a key component of advanced computing, and a critical part of the portfolio of activities at TACC,” Stanzione said. “We are amazingly fortunate to have someone of Paul’s talents bring his experience to this crucial area of the center, and we are also incredibly grateful to Kelly for so skillfully building this area of TACC over the last 15+ years, and look forward to her contributions from her new role in the Dell Medical School.”

TACC provides world-class high performance computing systems and cyberinfrastructure for the U.S. open science community; since its inception in 2001 it has deployed and operated more than 15 NSF-funded supercomputers and advanced visualization systems for national programs. TACC’s visualization resources blend science, technology and art to bring the complexities of our world to life, helping solve large-scale problems.

Source: TACC

The post TACC Promotes Paul Navratil to Director of Visualization appeared first on HPCwire.
