Apache Kafka Streams + Machine Learning (Spark, TensorFlow, H2O.ai)

I started at Confluent in May 2017 to work as a Technology Evangelist focusing on topics around the open source framework Apache Kafka. I think Machine Learning is one of the hottest buzzwords these days, as it can add huge business value in any industry. Therefore, you will see various other posts from me around Apache Kafka (messaging), Kafka Connect (integration), Kafka Streams (stream processing), and Confluent’s additional open source add-ons on top of Kafka (Schema Registry, Replicator, Auto Balancer, etc.). I will explain how to leverage all of this for machine learning and other big data technologies in real-world production scenarios.

Read this if you wonder why I am so excited about moving (back) to open source for messaging, integration and stream processing in the big data world.

In the following blog post, I want to share my first slide deck from a conference talk representing Confluent: a software architecture user group in Leipzig, Germany, organized a two-day event to discuss big data in practice.

Apache Kafka Streams + Machine Learning / Deep Learning

This is the abstract of the slide deck:

Big Data and Machine Learning are key for innovation in many industries today. Large amounts of historical data are stored and analyzed in Hadoop, Spark or other clusters to find patterns and insights, e.g. for predictive maintenance, fraud detection or cross-selling.

The first part of the session explains how to build analytic models with R, Python and Scala, leveraging open source machine learning / deep learning frameworks like Apache Spark, TensorFlow or H2O.ai.

The second part discusses how to deploy these analytic models in your own real-time streaming applications or microservices. It explains how to leverage an Apache Kafka cluster and Kafka Streams instead of building your own stream processing cluster. The session focuses on live demos and teaches lessons learned for executing analytic models in a highly scalable and performant way.

The last part explains how Apache Kafka can help move from manual building and deployment of analytic models to continuous online model improvement in real time.
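To give a first taste of these three parts before the detailed follow-up posts arrive, here are minimal code sketches in Java; they are my own illustrative assumptions, not code taken from the slides. First, building and persisting an analytic model with Spark MLlib. The data path, output path and hyperparameters are hypothetical placeholders:

```java
import java.io.IOException;

import org.apache.spark.ml.classification.LogisticRegression;
import org.apache.spark.ml.classification.LogisticRegressionModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ModelTraining {

  public static void main(String[] args) throws IOException {
    SparkSession spark = SparkSession.builder()
        .appName("model-training")
        .master("local[*]")
        .getOrCreate();

    // Hypothetical historical data set in LibSVM format (label plus feature
    // vector per row), e.g. labeled transactions for fraud detection.
    Dataset<Row> training = spark.read().format("libsvm").load("data/transactions.libsvm");

    // Train a simple logistic regression classifier with Spark MLlib.
    LogisticRegression lr = new LogisticRegression()
        .setMaxIter(10)
        .setRegParam(0.01);
    LogisticRegressionModel model = lr.fit(training);

    // Persist the model so a separate streaming application can load and apply it later.
    model.write().overwrite().save("models/fraud-detection-lr");

    spark.stop();
  }
}
```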
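Second, a sketch of embedding such a pre-trained model directly in a Kafka Streams application, so every event is scored in-process instead of calling out to a separate model-serving cluster. The AnalyticModel interface, the topic names and the loading logic are assumptions for illustration:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StreamingModelServer {

  // Hypothetical wrapper around a pre-trained model (e.g. an H2O POJO or a
  // TensorFlow model loaded via its Java API); not a real library class.
  interface AnalyticModel {
    String predict(String inputRecord);
  }

  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "model-serving-app");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    // Load the model once at startup; it is then applied to every record
    // in-process, with no remote call to a model-serving cluster.
    AnalyticModel model = loadModel();

    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, String> events = builder.stream("input-events");
    events
        .mapValues(model::predict)   // score each event with the embedded model
        .to("predictions");

    KafkaStreams streams = new KafkaStreams(builder.build(), props);
    streams.start();
    Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
  }

  static AnalyticModel loadModel() {
    // Placeholder: a real application would deserialize the persisted model here.
    return input -> "prediction-for:" + input;
  }
}
```

Because the Kafka Streams library runs inside your own application, scaling out means simply starting more instances; Kafka rebalances the partitions across them.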
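Third, for continuous model improvement, one common pattern (again an assumption on my side, not something prescribed by the slides) is to publish newly trained model artifacts to a compacted Kafka topic keyed by model name, so running streaming applications can swap in improved models without redeployment. Topic name and file path are hypothetical:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ModelUpdatePublisher {

  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer", StringSerializer.class.getName());
    props.put("value.serializer", ByteArraySerializer.class.getName());

    try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
      // Hypothetical path to a freshly trained, serialized model artifact.
      byte[] modelBytes = Files.readAllBytes(Paths.get("models/fraud-detection-lr-v2.bin"));

      // A compacted topic keyed by model name keeps only the latest version,
      // so every streaming instance finds the newest model on (re)start or
      // via a background consumer, without redeploying the application.
      producer.send(new ProducerRecord<>("model-updates", "fraud-detection", modelBytes));
      producer.flush();
    }
  }
}
```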

Slide Deck: How to Build Analytic Models and Deploy Them to Real-Time Processing

Here is the slide deck:

https://www.slideshare.net/KaiWaehner/apache-kafka-streams-machine-learning-deep-learning

More blog posts with more details and specific code examples will follow in the coming weeks. I will also do a web recording of this slide deck and post it on YouTube.

Kai Waehner
