Spring Cloud Data Flow is a toolkit for building data integration and real-time data processing pipelines.
Pipelines consist of Spring Boot apps, built using the Spring Cloud Stream or Spring Cloud Task microservice frameworks. This makes Spring Cloud Data Flow suitable for a range of data processing use cases, from import/export to event streaming and predictive analytics.
The Spring Cloud Data Flow server uses Spring Cloud Deployer to deploy pipelines onto modern runtimes such as Cloud Foundry, Kubernetes, Apache Mesos, or Apache YARN.
A selection of pre-built stream and task/batch starter apps for various data integration and processing scenarios facilitates learning and experimentation.
Custom stream and task applications, targeting different middleware or data services, can be built using the familiar Spring Boot style programming model.
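As a sketch of that programming model (the class, the uppercase transformation, and the app name below are illustrative, not part of this project; it assumes the `spring-cloud-stream` dependency and a binder such as RabbitMQ or Kafka on the classpath), a minimal custom processor app might look like:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.handler.annotation.SendTo;

// Hypothetical custom processor: consumes messages from the bound input
// channel, uppercases the payload, and publishes to the bound output channel.
@SpringBootApplication
@EnableBinding(Processor.class)
public class UppercaseProcessorApplication {

    public static void main(String[] args) {
        SpringApplication.run(UppercaseProcessorApplication.class, args);
    }

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String transform(String payload) {
        return payload.toUpperCase();
    }
}
```

Once registered with the server, such an app can appear in a stream definition just like any pre-built starter app.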
A simple stream pipeline DSL makes it easy to specify which apps to deploy and how to connect outputs and inputs. A new composed task DSL was added in v1.2.
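For illustration (the apps and expressions here are examples, not fixed definitions): a stream definition wires app outputs to inputs with the pipe symbol, while the composed task DSL chains tasks with `&&` for sequential execution:

```
http --server.port=9000 | filter --expression=payload.contains('error') | log

import-data && process-data
```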
The dashboard offers a graphical editor for building new pipelines interactively, as well as views of deployable apps and running apps with metrics.
The Spring Cloud Data Flow server exposes a REST API for composing and deploying data pipelines. A separate shell makes it easy to work with the API from the command line.
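As a rough sketch of that API (assuming a server listening on the default port 9393; consult the REST API reference for the authoritative endpoint list), the core interactions map to HTTP calls such as:

```
GET  http://localhost:9393/apps                  # list registered apps
POST http://localhost:9393/streams/definitions   # create a stream (params: name, definition, deploy)
GET  http://localhost:9393/streams/definitions   # list stream definitions
```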
An easy way to get started with Spring Cloud Data Flow is to follow the platform-specific implementation links in the table below. Each implementation evolves in isolation, with an independent release cadence, so review the platform-specific reference docs to learn about each one's capabilities.
| Server Type | Stable Release | Milestone/Snapshot Release |
|---|---|---|
| Local Server | 1.6.1.RELEASE [docs] | 1.6.2.BUILD-SNAPSHOT [docs] |
| Cloud Foundry Server | 1.6.1.RELEASE [docs] | 1.6.2.BUILD-SNAPSHOT [docs] |
| Kubernetes Server | 1.6.1.RELEASE [docs] | 1.6.2.BUILD-SNAPSHOT [docs] |
| Apache YARN Server | 1.2.2.RELEASE [docs] | 1.2.3.BUILD-SNAPSHOT [docs] |
| Apache Mesos Server | 1.0.0.RELEASE [docs] | 1.1.0.BUILD-SNAPSHOT [docs] |
Step 1 - There are two ways to get started. The quickest is to download the Spring Cloud Data Flow Local Server's Docker Compose artifact. (On macOS, you can use `curl -O` instead of `wget`.)
```
wget https://raw.githubusercontent.com/spring-cloud/spring-cloud-dataflow/v1.6.1.RELEASE/spring-cloud-dataflow-server-local/docker-compose.yml
```
Step 2 - From the directory where you downloaded docker-compose.yml, start the system.

```
DATAFLOW_VERSION=1.6.1.RELEASE docker-compose up
```
Step 3 - Open the dashboard at http://localhost:9393/dashboard.
Step 4 - Use 'Create Stream' under the STREAMS tab to define and deploy a stream called 'ticktock' with the definition time | log.
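Alternatively, the same stream can be created and deployed from the Data Flow shell, connected to the local server:

```
dataflow:> stream create --name ticktock --definition "time | log" --deploy
```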
Once the 'ticktock' stream is deployed, two running stream apps appear under RUNTIME. Click 'ticktock.log' to find the location of its stdout log file.
Step 5 - Verify that events are being written to the ticktock log every second. To view the stream logs, copy the path in the "stdout" text box on the dashboard and in another console type:
```
docker exec -it dataflow-server tail -f <COPIED-STDOUT-PATH>
```
Spring Cloud Data Flow builds upon several projects; the top-level building blocks of the ecosystem are shown in the following visual representation. Each project represents a core capability, and each evolves in isolation with its own release cadence. Follow the links to learn more about each project.
REST APIs / Shell / DSL
        |
    Dashboard (Spring Flo)
        |
Spring Cloud Data Flow Metrics Collector
        |
Spring Cloud Data Flow - Core
        |  uses
Spring Cloud Deployer - Service Provider Interface (SPI)
        |  implemented by:
    Spring Cloud Deployer Local
    Spring Cloud Deployer Cloud Foundry
    Spring Cloud Deployer Kubernetes
    Spring Cloud Deployer Yarn
    Spring Cloud Deployer Mesos
        |  deploys:
    Spring Cloud Stream App Starters
    Spring Cloud Task App Starters
        |  built with Spring Cloud Stream / Spring Cloud Task
        |  which use:
    Spring Integration, Spring Boot, Spring Batch
|