Pre-conference Training Sessions are conducted by GridGain engineers and solution architects with deep technical expertise and real-world experience solving customer problems. During the sessions, attendees will use the actual products, and learn how to unlock the full potential of the Ignite platform.
After the training sessions, attendees will receive special badges so they can share their Apache Ignite proficiency with peers and prospective employers.
Please click on the links below to join us at a future training session:
This two-hour training is for developers, architects, and DevOps engineers who want to deploy and orchestrate Apache Ignite in a Kubernetes environment. You begin with the configuration and scalability essentials as you deploy Ignite in pure in-memory mode. Next, you convert the Ignite in-memory cluster into a multitier database that scales beyond available memory capacity, as you set up disk storage for Ignite native persistence and GridGain backups. Finally, you select the connectivity option that best suits your applications and simplify the monitoring, management, and troubleshooting of your Kubernetes-based deployment.
- Ignite in in-memory mode: basic configuration principles, cluster discovery, auto-scaling, availability zones, and rolling restarts
- Ignite in multitier database mode (in-memory plus native persistence): storage configuration (data versus WAL), cluster backups, and storage performance monitoring
- Application deployment and connectivity options: apps and Ignite in K8s, apps in K8s but Ignite outside, and Ignite in K8s but apps outside
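To give a flavor of the first topic, a bare-bones manifest for a pure in-memory Ignite cluster might look like the sketch below. This is illustrative only, not the full configuration built during the training: the names, namespace, replica count, and image tag are assumptions, and a real deployment also needs a headless Service and RBAC objects for the Kubernetes IP finder that handles cluster discovery.

```yaml
# Illustrative sketch only: a minimal in-memory Ignite StatefulSet.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ignite
  namespace: ignite
spec:
  serviceName: ignite        # headless Service (not shown) for discovery
  replicas: 3                # scale the in-memory cluster by changing this
  selector:
    matchLabels:
      app: ignite
  template:
    metadata:
      labels:
        app: ignite
    spec:
      containers:
        - name: ignite
          image: apacheignite/ignite:2.14.0   # version tag is illustrative
          ports:
            - containerPort: 47100   # node-to-node communication
            - containerPort: 47500   # discovery
            - containerPort: 10800   # thin clients
```

The ports shown are Ignite's defaults for communication, discovery, and thin-client connections; enabling native persistence (the second topic) additionally requires persistent volume claims for the data and WAL directories.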
If you are new to Ignite, attend the Apache Ignite Essentials course: https://www.gridgain.com/products/services/training/apache-ignite-essentials
This two-hour training is for Java developers and architects who build high-performance and data-intensive applications that are powered by Apache Ignite. During the course, you are introduced to three of Ignite's essential capabilities (data partitioning, affinity co-location, and co-located processing) and learn how to apply your newly acquired knowledge to increase the speed and scale of your applications.
- The essential capabilities of Apache Ignite
- How to use data partitioning to achieve limitless horizontal scalability
- How affinity co-location of data makes it possible to run high-performance, distributed queries at scale
- How to run custom compute tasks directly on the cluster nodes to eliminate the performance impact of network round trips on your applications
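The sketch below illustrates the core idea behind the first two capabilities. It deliberately does not use Ignite's real affinity function; it is a plain-Java illustration (with hypothetical names) of how every key hashes to one of a fixed number of partitions, and how records that share an affinity key, here a customer ID, always land in the same partition, which is what lets joins and compute tasks run without crossing the network.

```java
import java.util.Objects;

/**
 * Plain-Java sketch of data partitioning and affinity co-location.
 * NOT Ignite's actual RendezvousAffinityFunction -- just the idea:
 * keys hash to a fixed number of partitions, and keys that share an
 * affinity key always map to the same partition (and thus same node).
 */
public class AffinitySketch {
    static final int PARTITIONS = 1024; // Ignite's default partition count

    /** Map an affinity key to a partition (simplified hash-based assignment). */
    static int partitionFor(Object affinityKey) {
        return Math.floorMod(Objects.hashCode(affinityKey), PARTITIONS);
    }

    public static void main(String[] args) {
        String customerId = "customer-42";

        // The Customer record is keyed by customerId...
        int customerPartition = partitionFor(customerId);
        // ...and each Order uses customerId as its affinity key,
        // so it is assigned to the same partition as its customer.
        int orderPartition = partitionFor(customerId);

        System.out.println("customer -> partition " + customerPartition);
        System.out.println("order    -> partition " + orderPartition);
        // Co-located data: a join or compute task over a customer and its
        // orders never has to fetch data from another node.
    }
}
```

In real Ignite code, co-location is declared with the `@AffinityKeyMapped` annotation on the key class, and co-located compute is submitted with `IgniteCompute#affinityRun` / `affinityCall`, which route the task to the node owning the partition.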
This two-hour training is for Java developers and architects who want to explore the best practices and nuances of using Spring Boot and Spring Data with Apache Ignite. During the training, you build a RESTful web service that uses Apache Ignite as an in-memory database. The service is a Spring Boot application that interacts with the Ignite cluster via Spring Data repository abstractions.
- Configuring an Apache Ignite cluster that uses Spring Boot and Spring Data
- Designing Java POJOs for Ignite Spring Data repositories
- Defining custom SQL queries for Ignite Spring Data repositories
- Designing DTOs (data transfer objects) for Ignite Spring Data services
- Building a Spring Boot RESTful endpoint that works with Ignite Spring Data services
- Learning tips and tricks while working with Ignite Spring Data and Ignite Spring Boot
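As a taste of the POJO and DTO topics above, here is a plain-Java sketch of an entity class and the narrower DTO a service layer would expose for it. All class and field names are hypothetical, and the actual Spring Data wiring (an `IgniteRepository` annotated with `@RepositoryConfig`, custom SQL via the `@Query` annotation) requires the ignite-spring-data dependency and is covered in the training.

```java
/**
 * Sketch of the POJO/DTO split used with Ignite Spring Data services.
 * Names are illustrative. The entity is what the repository stores in
 * Ignite; the DTO is the narrower, serialization-friendly shape that
 * the RESTful endpoint returns to clients.
 */
public class PojoDtoSketch {

    /** Entity stored in the Ignite cache (would carry Ignite SQL annotations). */
    static class Customer {
        long id;
        String name;
        String email;
        String internalNotes; // not for external consumption

        Customer(long id, String name, String email, String internalNotes) {
            this.id = id;
            this.name = name;
            this.email = email;
            this.internalNotes = internalNotes;
        }
    }

    /** DTO exposed by the REST endpoint: only the client-facing fields. */
    static class CustomerDto {
        final long id;
        final String name;

        CustomerDto(Customer c) {
            this.id = c.id;
            this.name = c.name;
        }
    }

    public static void main(String[] args) {
        Customer entity = new Customer(1L, "Alice", "alice@example.com", "VIP");
        CustomerDto dto = new CustomerDto(entity);
        System.out.println("DTO: id=" + dto.id + ", name=" + dto.name);
    }
}
```

Keeping the DTO separate from the entity lets the REST layer evolve independently of the cache schema and keeps internal fields out of API responses.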
This two-hour, hands-on training is for those wondering how to monitor and manage Apache Ignite clusters in production: which metrics matter most, how to set up alerting, how to troubleshoot performance when the cluster is under production load, and how to develop queries. The list of questions related to Ignite production monitoring goes on and on, and this training covers many of them.
During the training, you’re going to set up a management and monitoring solution based on GridGain Control Center, an enterprise-grade tool for Ignite deployments. The solution will let you perform or automate the following tasks:
- Ignite storage monitoring - including, but not limited to, memory and disk usage, checkpointing, and WAL-related I/O
- Alert configuration and triggering - to ensure that you are notified when a crucial cluster event occurs
- Distributed tracing - to detect and resolve bottlenecks or hot spots in operations that span multiple cluster nodes
- Query development - with tips for analyzing query performance statistics