
Scottish supercomputer hits 100 million hours computing milestone

June 6th, 2024
The UK CropDiversity HPC cluster. Credit: The James Hutton Institute

One of Scotland's largest supercomputing clusters has reached a milestone of 100 million hours of computing time devoted to cutting-edge UK science.

The UK CropDiversity High-Performance Computing (HPC) cluster, hosted and managed by independent research organisation The James Hutton Institute in north-east Scotland, has also handled more than 400 terabytes of new data in the last 12 months alone—equivalent to around 100,000 HD movies.

Its huge number-crunching capacity, with processing power equivalent to about 80,000 standard laptops, allows significant gaps in knowledge about the natural world, crops and disease to be closed much faster than would otherwise be possible, including through the use of modern AI methods such as machine learning.

One recent project saw the HPC used to help scientists at the Royal Botanic Gardens, Kew, assess extinction risks for the world's flowering plants; work that would previously have taken many weeks was completed in just days.

Dr. Iain Milne, who manages the UK CropDiversity HPC at the Hutton, says, "Since the first building blocks of the HPC were put in place in 2020, about 19 million analysis tasks have been run, using more than 100 million hours (or nearly 11,500 years) of computing time, the combined running time of all our processors, and covering more than a petabyte* of data.

"While most of the work it is doing is focused on scientific research, it is a capability that's also available to commercial organisations who need access to this level of computing."

At the Hutton, the HPC is used to study gene expression and genomes, helping to understand how plants adapt to different environments or inputs.

"Time is not on our side and we need to move fast if we are to find solutions to the very real, fast moving impact of climate change on crop resilience and so we need to be able to fast track so much of the research and these machines do it," says machine learning engineer, Fraser Macfarlane.

"Here at the Hutton, it's helping process ever expanding datasets generated by the likes of high-throughput phenotyping. This process uses a suite of sensors and cameras to monitor and understand how plants grow and develop. Additionally, the vast quantities of remote sensing data produced by satellites and aerial platforms like drones can be analysed, helping us understand the world around us."

The HPC recently handled its largest data throughput for a single project, 25 terabytes, for the Biodiversity for Opportunities, Livelihoods and Development (BOLD) project, a Crop Trust project on which the Hutton is a partner, which aims to strengthen global food and nutrition security by supporting the conservation and use of crop diversity.

This year alone, more than 20 scientific papers have been published based on work that relied on the UK CropDiversity HPC, on topics ranging from how human genetic diversity influences antibiotic resistance to the development of seedless clementines.

Contact the Hutton if you are interested in finding out more about access to the HPC and what it can offer you.

The UK CropDiversity HPC was funded by the Biotechnology and Biological Sciences Research Council's Advanced Life Sciences Research Technology Initiative (ALERT) grants BB/S019669/1 and BB/X019683/1 and the Department for Business, Energy and Industrial Strategy's Public Sector Research Establishment Infrastructure Fund.

The HPC partners are the National Institute of Agricultural Botany (NIAB), the Natural History Museum, Scotland's Rural College, the Royal Botanic Gardens, Kew, the Royal Botanic Garden Edinburgh, the University of Edinburgh and the University of St Andrews.

More information:
*If that were MP3 tracks, it would take you 1,900 years to listen to them all, even listening 24/7.
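
As a rough check on that comparison, here is a minimal Python sketch; the bitrate of roughly 128 kbps (about 1 MB per minute of audio) is an assumed typical value, not a figure from the release:

    # Sanity check of the "1,900 years of MP3s" footnote. The ~1 MB per
    # minute of audio (roughly 128 kbps) is an assumed typical bitrate.
    PETABYTE_BYTES = 10**15
    BYTES_PER_MINUTE = 10**6          # ~128 kbps MP3
    MINUTES_PER_YEAR = 60 * 24 * 365  # 525,600

    minutes_of_audio = PETABYTE_BYTES / BYTES_PER_MINUTE
    print(f"{minutes_of_audio / MINUTES_PER_YEAR:,.0f} years of 24/7 listening")
    # -> roughly 1,903 years, in line with the footnote's 1,900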
A CPU-hour - or core-hour (because CPUs now have lots of cores) - is simply an hour that a CPU spent "doing something". So 100,000,000 CPU (or core) hours could be a single CPU working itself to death over 100 million hours, or, as in our case, thousands of cores working together so that the cumulative time across all of them (since 2020) hits 100 million hours.
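
To make the additive nature of core-hours concrete, a minimal Python sketch follows; the 10,000-core split is a hypothetical illustration, not the cluster's actual configuration:

    # Core-hours add up across a cluster: the total is the sum of busy
    # time over every core, not the wall-clock age of one machine.
    TOTAL_CORE_HOURS = 100_000_000   # cumulative since 2020 (from the release)
    HOURS_PER_YEAR = 24 * 365        # 8,760

    # One CPU alone would need nearly 11,500 years (the figure quoted
    # above) to log this much time...
    print(f"{TOTAL_CORE_HOURS / HOURS_PER_YEAR:,.0f} years on a single core")

    # ...but spread over, say, 10,000 cores (a hypothetical figure), each
    # core only needs about 10,000 busy hours, a little over a year each.
    cores = 10_000
    hours_each = TOTAL_CORE_HOURS / cores
    print(f"{hours_each:,.0f} hours (~{hours_each / HOURS_PER_YEAR:.1f} years) per core")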
By the end of 2024, the UK CropDiversity HPC cluster will have a total of 497,588 combined CPU and GPU cores, with a maximum theoretical peak performance of 2.17 petaflops. It also has significant memory capacity and network speeds of up to 100 gigabits per second.

Provided by The James Hutton Institute

Citation: Scottish supercomputer hits 100 million hours computing milestone (2024, June 6) retrieved 24 July 2024 from https://sciencex.com/wire-news/479119609/scottish-supercomputer-hits-100-million-hours-computing-mileston.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.