Ignite Summit schedule
Creating a Credit Card Processor with Apache Ignite and Kafka
As event-driven architecture increasingly dominates industry data pipelines, a variety of infrastructure patterns have emerged. Using Kafka and Apache Ignite hosted on an IBM OpenShift cluster, we built a real-time credit-card-transaction processor. The pattern uses Kafka to process and map the credit card data, and GridGain Kafka Connect to sink and source data between Apache Ignite and Kafka. Apache Ignite serves not only as a database but also runs an in-memory continuous query that flags fraudulent transactions. Flagged transactions are placed into a separate cache and, by way of a sink connector, transferred back into Kafka on a “fraud” topic. This pattern combines the event-processing power of Kafka with the in-memory computing and querying speed of Apache Ignite. This session walks through the process of creating this architecture, from writing the Apache Ignite Helm chart to configuring the continuous query. We discuss what we learned, as well as the next steps in building on a design like this.
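As a rough illustration of the flagging step described above, the sketch below wires up an Ignite continuous query that watches a transaction cache and copies suspicious entries into a separate fraud cache. The cache names, the key/value types, and the fixed amount threshold are illustrative assumptions, not details from the talk; fraud detection logic in practice would be more involved, and the snippet assumes a running Ignite node plus the Ignite dependency on the classpath.

```java
import javax.cache.event.CacheEntryEvent;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;

public class FraudFlagger {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Cache fed by the Kafka source connector (name is an assumption).
        IgniteCache<Long, Double> txCache = ignite.getOrCreateCache("transactions");
        // Cache drained back to Kafka's "fraud" topic by the sink connector.
        IgniteCache<Long, Double> fraudCache = ignite.getOrCreateCache("fraud");

        ContinuousQuery<Long, Double> qry = new ContinuousQuery<>();

        // Evaluate the filter on the server nodes so that only suspicious
        // transactions are shipped to the local listener. A flat amount
        // threshold stands in for real fraud-detection logic here.
        qry.setRemoteFilterFactory(() -> e -> e.getValue() > 10_000.0);

        // Copy each flagged transaction into the fraud cache, from which
        // the sink connector forwards it to Kafka.
        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends Long, ? extends Double> e : events)
                fraudCache.put(e.getKey(), e.getValue());
        });

        // The query keeps running until the returned cursor is closed.
        txCache.query(qry);
    }
}
```

Pushing the filter to the server side keeps the in-memory scan close to the data, which is the main appeal of pairing Ignite's continuous queries with Kafka's event stream.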
Daniel Rossos
Daniel Rossos is an IBM Software Engineering Intern and an Engineering Science - Machine Intelligence student at the University of Toronto. Daniel has been on a work term with IBM Global Business Services since May 2021, spending that time on the Business Transformation Services team. He works with many distributed technologies, including Confluent, Redpanda, and GridGain, and is responsible for a range of infrastructure tools and concepts, including OpenShift and Kubernetes, Terraform, Helm, and Ansible. Daniel uses these distributed technologies while integrating them with existing pipelines and platforms.