At this year’s ARC Industry Forum in Orlando, Phil Trickovic, Vice President of Sales at Diamanti, spoke with ARC’s Craig Resnick, Vice President, Consulting. Phil discussed the company’s Diamanti Enterprise Kubernetes Platform, its use for container management, and some of its features, benefits, and industrial applications. You can watch the full video here and/or on YouTube.
When asked about the “secret ingredient” of the Diamanti platform, Phil Trickovic pointed to one of the company’s biggest innovations: acceleration cards that deliver a performance boost by offloading the processing of container IO and storage, as well as network IO, from the host CPU. This frees the host CPU to handle only background and base OS tasks. All of this is managed from a single HMI screen.
According to Phil Trickovic, “Obviously, in the Kubernetes container space, the OpenShift space, or anything that can be deployed in a microservices type architecture, is a use case for us. Secondary to those, also quite important would be HPC workloads, AI and ML workloads, or any kind of acceleration or data availability that you may need for refactored apps, which is the restructuring of existing computer code to improve its performance, readability, portability or code adherence without changing the code's intended functions.”
Phil Trickovic continued, “The speed and flexibility that you get within the container architecture should be enough to steer people in that direction. This emerging kind of technology has eliminated the need for hypervisors, and in some cases for the massive amounts of front-end computing that you're seeing in traditional modern-day data centers. So, this consolidation is huge.”
Phil Trickovic said, “The edge we provide customers for AI, ML, or HPC type workloads is a granular level of performance scaling. The scheduler that we have, the ability to segment GPUs for specific workloads, and QoS settings that you can adjust to speed these up or slow them down while you're doing your neural network training, depending on the results that you're getting, are unparalleled.”
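As a general illustration of the kind of GPU segmentation and QoS control described above, standard Kubernetes expresses this through per-container resource requests and limits; the sketch below is a generic example under that assumption, not Diamanti's actual configuration, and the pod name and image are hypothetical.

```yaml
# Illustrative only: a generic Kubernetes Pod spec, not Diamanti-specific.
apiVersion: v1
kind: Pod
metadata:
  name: nn-training                    # hypothetical workload name
spec:
  containers:
  - name: trainer
    image: registry.example.com/trainer:latest   # hypothetical image
    resources:
      requests:
        cpu: "4"
        memory: 16Gi
        nvidia.com/gpu: 1              # pin the workload to one GPU (requires the NVIDIA device plugin)
      limits:
        cpu: "4"
        memory: 16Gi
        nvidia.com/gpu: 1              # equal requests and limits place the pod in the Guaranteed QoS class
```

In stock Kubernetes, adjusting these requests and limits is how an operator trades performance against consolidation for a given training job; a platform-level scheduler can layer finer-grained controls on top of the same mechanism.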
“There's a number of ways you can do that. You can run in parallel while you are refactoring the applications. Refactoring means you're taking them off legacy architectures that require a heavy hypervisor and heavy operating system involvement, and you're transitioning to a microservices type architecture. It doesn't have to be a disruptive kind of upgrade; the old and new environments can run in parallel, so it is very flexible in how you're doing that architecture,” according to Phil Trickovic.
There are a number of solutions, because the nature of containers, in conjunction with Diamanti as the platform, provides highly flexible distribution across remote devices. Where an application is containerized across many different types of edge or IoT devices, that is where Diamanti would bring value to its customers.
Concluding the interview, Craig Resnick agreed that Diamanti’s Enterprise Kubernetes Platform has made substantial progress in growing its customer implementations and success stories, and ARC looks forward to following the company’s continued progress.