Last week I attended SC17, the supercomputing conference in Denver, and it was one of the best shows around. Unfortunately, I didn't have the chance to attend the technical and educational sessions, but the show floor was full of great ideas and amazing technology. I spent some time talking with vendors and visitors to our booth, and I'm back home with new ideas as well as a clearer vision of what is happening in the HPC world, which is usually a precursor of what comes next in the rest of IT.
Even with the differences between HPC and traditional IT, there are many technologies that, once developed for HPC, are then deployed in a smaller or simplified form years later in other fields.
It’s no surprise that machine learning and artificial intelligence are the talk of the town, and SC17 was no exception.
GPUs, FPGAs, ASICs, you name it. The race for the ultimate hardware to support AI is incredible, and some of the demos are mind-blowing; you can easily get lost in this flood of news, benchmarks, comparisons, and papers.
It is also incredible how little of the show was dedicated to visualization. VR and AR were essentially absent, relegated to a few niches, as if to confirm that these are still almost exclusively consumer technologies right now, while good applications to share and visualize data remain very traditional.
Another important change I saw this year (the last time I attended the event was in 2015) is the massive presence of ARM-based solutions. New supercomputers based on ARM, vendors showing specific designs for ARM, massive ARM-based educational clusters, and more. ARM is quickly becoming the norm, and there are some valid reasons for that. With modern and efficient co-processors (like GPUs) and the need to reduce power consumption while increasing compute density, ARM is gaining a lot of attention. And now, with the maturity of distributed software programming, it is possible to work around most of the limits of single-core CPU performance.
And that's not all. I was particularly impressed by the case study of an Australian company, BitScope. They built a 150-node cluster module in 6U of rack space based on Raspberry Pis (the equivalent of 1,000 nodes in a 42U rack). This project, backed by several research centers including Los Alamos National Laboratory, will enable students and researchers to explore extreme-scale cluster designs and learn how to build software for them at a very low cost. This is the perfect prototype or proof of concept for what will come tomorrow; and if the research has been done on ARM, what will your first choice be when you build a production cluster?
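The scale-out idea behind these ARM and Raspberry Pi clusters can be sketched in a few lines: instead of relying on one fast core, work is split across many modest workers and the partial results are combined. The toy below uses Python processes to stand in for cluster nodes; all names are illustrative.

```python
# Toy sketch of scale-out computing: fan work out to many modest
# "nodes" (here, local processes) and reduce the partial results.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done independently on one node: sum of squares of its chunk."""
    return sum(x * x for x in chunk)

def distributed_sum_of_squares(data, workers=4):
    # Split the input into one strided chunk per worker, map, then reduce.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(1000))
    # Same result as a single-core sum(x * x for x in data), but the
    # work is spread across workers, which is the whole point.
    print(distributed_sum_of_squares(data))
```

On a real cluster the fan-out would go over the network (MPI, or a job scheduler) rather than local processes, but the split/map/reduce shape is the same.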
Object Storage everywhere
The term Exascale was used extensively by many attendees. Object storage is universally recognized as an important infrastructure layer of these Exascale designs. The sheer amount of data generated by these systems must be saved somewhere, and high-performance storage is too expensive for long-term retention.
The number of use cases where object storage is involved is growing too, and two trends are making it even more interesting than in the past:
- The sheer amount of data necessary to support ML/AI, IoT, and any Exascale project you can think of.
- With large projects now developed by multiple research teams dispersed around the world, shared knowledge is more important than ever.
In both cases object storage is becoming increasingly relevant because it offers a low $/GB together with ease of access through HTTP-based protocols. At the same time, it is clear to me that everybody is now looking for more performance and ease of use at scale than in the past; and most traditional object stores don't check all those boxes, while we do!
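That "ease of access" comes from the model itself: a flat namespace of keys mapped to blobs, manipulated with simple HTTP-style verbs (PUT, GET, DELETE) rather than a POSIX filesystem. Here is a minimal in-memory sketch of those semantics; the class and method names are illustrative, not any vendor's API.

```python
# Minimal in-memory sketch of object-store semantics: a flat namespace
# of keys mapped to blobs, mirroring the HTTP PUT/GET/DELETE verbs.
# Names are illustrative only, not a real object-store API.
class ObjectStore:
    def __init__(self):
        self._objects = {}  # key -> bytes; no directories, just keys

    def put(self, key: str, data: bytes) -> None:
        """HTTP PUT: store (or overwrite) an object under a key."""
        self._objects[key] = bytes(data)

    def get(self, key: str) -> bytes:
        """HTTP GET: fetch an object by key (a KeyError maps to a 404)."""
        return self._objects[key]

    def delete(self, key: str) -> None:
        """HTTP DELETE: remove the object if it exists."""
        self._objects.pop(key, None)

    def list(self, prefix: str = "") -> list:
        """List keys by prefix, the way S3-style APIs emulate folders."""
        return sorted(k for k in self._objects if k.startswith(prefix))

store = ObjectStore()
store.put("datasets/run42/output.bin", b"\x00\x01")
print(store.list("datasets/"))  # ['datasets/run42/output.bin']
```

Because every operation is addressable by key over HTTP, any team anywhere in the world can read or write the same data with nothing more than an HTTP client, which is exactly what dispersed research projects need.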
Enterprise IT is just starting to look at these topics, while HPC is already deeply invested in them. Even though some of the solutions seen at this show won't be directly applicable to enterprise IT, the underlying ideas will be the same, and getting a glimpse of the future is always a good thing.
I was surprised by the absence of some well-known object storage vendors at #SC17. But since most of the objections I heard about object storage concerned its rigidity, overall complexity, and the limited set of use cases it can address (partly because of the inconsistent performance of traditional ring-based layouts), I can easily understand why.
Nevertheless, for us it was one of the best events of the year. I had the opportunity to talk about the upcoming evolution of our products and also to show what we can do with serverless computing, thanks to Grid for Apps.
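To illustrate the serverless pattern in general terms (this is the event-driven idea only, not the actual Grid for Apps API, whose interfaces I'm not reproducing here): functions are registered against storage events and run automatically whenever a matching event fires, for example when a new object lands. All names below are hypothetical.

```python
# Generic sketch of event-driven ("serverless") processing on top of an
# object store: handlers subscribe to event types and run on each event.
# Illustrative pattern only; not the Grid for Apps API.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event_type):
        """Decorator: register a handler for an event type."""
        def register(fn):
            self._handlers[event_type].append(fn)
            return fn
        return register

    def emit(self, event_type, payload):
        """Fire an event and collect each handler's result."""
        return [fn(payload) for fn in self._handlers[event_type]]

bus = EventBus()

@bus.on("object.created")
def index_object(event):
    # Hypothetical handler: e.g. extract metadata as soon as data lands.
    return f"indexed {event['key']} ({event['size']} bytes)"

results = bus.emit("object.created", {"key": "logs/day1.gz", "size": 1024})
print(results)  # ['indexed logs/day1.gz (1024 bytes)']
```

The appeal of this model for storage is that the computation moves to where the data already is, instead of copying data out to a separate compute tier.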