More than two decades ago, the Java programming language, originally developed by Sun Microsystems, offered developers the promise of being able to build an application once and then run it on any operating system.
Intel CTO Greg Lavender remembers Java’s early promise better than most, having spent more than a decade working at Sun. Instead of forcing developers to create separate applications for different hardware and operating systems, Java promised a more uniform and streamlined development model.
The ability to build once and run anywhere, however, isn’t uniform across the computing landscape in 2022. It’s a situation Intel is looking to help change, at least when it comes to accelerated computing and the use of GPUs.
The need for a uniform Java-like language for GPUs
“Today in the accelerated computing and GPU world, you can use CUDA and then you can only run on an Nvidia GPU, or you can use AMD’s CUDA equivalent running on an AMD GPU,” Lavender told VentureBeat. “You can’t use CUDA to program an Intel GPU, so what do you use?”
This is where Intel is a strong contributor to the open source SYCL specification (pronounced like “sickle”), which aims to do for GPUs and accelerated computing what Java did decades ago for application development. Intel’s investment in SYCL isn’t entirely selfless, and it isn’t just about supporting an open source effort; it’s also about steering development toward Intel’s recently launched consumer and data center GPUs.
SYCL is an approach to data-parallel programming in C++ and, according to Lavender, it is very similar to CUDA.
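To give a sense of what that looks like in practice, here is a minimal sketch of a SYCL 2020-style vector addition. It is illustrative only: the device the runtime picks and the problem size are assumptions, not details from the article.

```cpp
#include <sycl/sycl.hpp>
#include <vector>
#include <iostream>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    sycl::queue q;  // the runtime picks a default device: an Intel, AMD, or Nvidia GPU, or the CPU

    {
        // Buffers hand the host data to the runtime for the scope below.
        sycl::buffer<float> buf_a(a.data(), sycl::range<1>(N));
        sycl::buffer<float> buf_b(b.data(), sycl::range<1>(N));
        sycl::buffer<float> buf_c(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(buf_a, h, sycl::read_only);
            sycl::accessor B(buf_b, h, sycl::read_only);
            sycl::accessor C(buf_c, h, sycl::write_only, sycl::no_init);
            // The lambda is the kernel: it runs once per index, in parallel, on the device.
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    }  // buffer destruction copies the result back to the host vector

    std::cout << "c[0] = " << c[0] << "\n";  // expect 3
    return 0;
}
```

The structure will look familiar to CUDA developers: a host program submits a kernel to a queue, and the kernel body is the per-element work.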
Intel supports standardizing one code to rule them all
To date, the development of SYCL has been handled by the Khronos Group, a cross-industry organization that develops standards for parallel computing, virtual reality, and 3D graphics. On June 1, Intel acquired Scottish development company Codeplay Software, which is a major contributor to the SYCL specification.
“We should have an open programming language with extensions to C++ that are being standards-tracked that can run on Intel, AMD, and Nvidia GPUs without changing your code,” Lavender said.
Automated tool to convert CUDA to SYCL
Lavender is also a realist and knows there is a lot of code already written specifically for CUDA. That’s why Intel developers have created an open source tool called SYCLomatic, which aims to migrate CUDA code to SYCL. Lavender claimed that SYCLomatic today covers around 95% of the functionality present in CUDA; he noted that the remaining 5% it does not cover involves features specific to Nvidia hardware.
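The sketch below shows the kind of mapping such a migration performs, written by hand for illustration rather than taken from SYCLomatic output: a simple CUDA kernel (in comments) and one plausible SYCL equivalent, with the CUDA grid/block/thread indices expressed through an nd_range and nd_item.

```cpp
// --- CUDA original (for comparison) ---
// __global__ void scale(float* x, float s, int n) {
//     int i = blockIdx.x * blockDim.x + threadIdx.x;
//     if (i < n) x[i] *= s;
// }
// // launch: scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);

// --- SYCL equivalent (hand-written sketch) ---
#include <sycl/sycl.hpp>

void scale(sycl::queue& q, float* x, float s, int n) {
    // nd_range mirrors the CUDA grid/block shape; nd_item plays the role of
    // blockIdx/blockDim/threadIdx.
    const int block = 256;
    const int grid = (n + block - 1) / block;
    q.parallel_for(sycl::nd_range<1>(sycl::range<1>(grid * block), sycl::range<1>(block)),
                   [=](sycl::nd_item<1> item) {
                       int i = static_cast<int>(item.get_global_id(0));
                       if (i < n) x[i] *= s;
                   }).wait();
}

int main() {
    sycl::queue q;
    const int n = 1000;
    // Unified shared memory stands in for cudaMalloc/cudaMemcpy in this sketch.
    float* x = sycl::malloc_shared<float>(n, q);
    for (int i = 0; i < n; ++i) x[i] = 1.0f;
    scale(q, x, 2.0f, n);
    sycl::free(x, q);
    return 0;
}
```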
Along with SYCL, Lavender said there are code libraries developers can use that are device independent. The idea is that a developer writes the code once, and a SYCL compiler then builds it for whatever architecture is needed, be it an Nvidia, AMD, or Intel GPU.
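A small sketch of that “write once, pick the device at run time” idea: the same SYCL source can enumerate whatever devices the runtime exposes and request a GPU if one is present. The selector and info queries are standard SYCL 2020; which devices actually show up depends on the backends installed on a given machine, which is an assumption here rather than a claim from the article.

```cpp
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    // List every device the SYCL runtime can see (Intel, AMD, or Nvidia GPUs,
    // depending on which backends/plugins are installed).
    for (const auto& platform : sycl::platform::get_platforms()) {
        for (const auto& dev : platform.get_devices()) {
            std::cout << platform.get_info<sycl::info::platform::name>() << " : "
                      << dev.get_info<sycl::info::device::name>() << "\n";
        }
    }

    // Ask for a GPU specifically; an exception is thrown if none is available.
    try {
        sycl::queue q{sycl::gpu_selector_v};
        std::cout << "Running on: "
                  << q.get_device().get_info<sycl::info::device::name>() << "\n";
    } catch (const sycl::exception& e) {
        std::cout << "No GPU found: " << e.what() << "\n";
    }
}
```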
Looking ahead, Lavender said he hopes SYCL can become a Linux Foundation project, to allow more participation and growth of the open source effort. Both Intel and Nvidia are members of the Linux Foundation and support multiple efforts there. Among the projects both companies participate in today is the Open Programmable Infrastructure (OPI) project, which aims to provide an open standard for infrastructure processing units (IPUs) and data processing units (DPUs).
“We should have write once, run everywhere for accelerated computing, then let the market decide which GPU it wants to use, and level the playing field,” Lavender said.