Data Science Now Supports Distributed Jobs

Data Science now supports distributed jobs. By eliminating infrastructure complexity, distributed jobs offer on-demand scalability and cost efficiency while ensuring fast, secure, and reliable execution of AI workloads. You can now run ML or data workloads as jobs that span several compute nodes provisioned and orchestrated by Data Science, and specify how those nodes are grouped and how they interact. Each node group in your cluster can be configured independently and provisioned in parallel or in sequence to match the needs of your distributed training or serving framework. You can bring your own code, artifacts, and containers. A sketch of how such a cluster might be described follows.
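To make the node-group idea concrete, the following minimal Python sketch models a cluster as a set of independently configured groups, each with its own replica count, compute shape, container image, and provisioning order. The `NodeGroup` and `DistributedJob` classes, field names, shapes, and image names are illustrative assumptions, not the Data Science SDK or API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical configuration model for illustration only; it does not
# reflect the actual Data Science SDK, API, or resource names.

@dataclass
class NodeGroup:
    name: str                         # label for the group, e.g. "workers"
    replica_count: int                # number of compute nodes in this group
    shape: str                        # compute shape used by each node
    container_image: str              # bring-your-own container for this group
    entrypoint: List[str]             # command run on each node in the group
    depends_on: Optional[str] = None  # provision after another group (sequence),
                                      # or None to provision in parallel

@dataclass
class DistributedJob:
    display_name: str
    node_groups: List[NodeGroup] = field(default_factory=list)

# Example: one coordinator group and one worker group, provisioned in sequence.
job = DistributedJob(
    display_name="distributed-training-example",
    node_groups=[
        NodeGroup(
            name="coordinator",
            replica_count=1,
            shape="VM.GPU.A10.1",
            container_image="my-registry/trainer:latest",
            entrypoint=["python", "train.py", "--role", "coordinator"],
        ),
        NodeGroup(
            name="workers",
            replica_count=4,
            shape="VM.GPU.A10.1",
            container_image="my-registry/trainer:latest",
            entrypoint=["python", "train.py", "--role", "worker"],
            depends_on="coordinator",  # wait for the coordinator group first
        ),
    ],
)

if __name__ == "__main__":
    for group in job.node_groups:
        order = f"after '{group.depends_on}'" if group.depends_on else "in parallel"
        print(f"{group.name}: {group.replica_count} x {group.shape}, provisioned {order}")
```

In this sketch, groups without a dependency are provisioned in parallel, while a `depends_on` value forces sequential provisioning, mirroring the independent, per-group configuration described above.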

For more information, see Distributed Jobs in the Data Science documentation.