The real-time pipeline of an ecosystem
Present-day business-to-business information systems have become so complex that troubleshooting them properly requires real-time visibility, well-presented data, a thorough understanding of how to interpret that data, and a measure of skill.
The systems must be equipped with powerful instrumentation; otherwise, lack of information leads to lost time and, in some cases, lost revenue.
Speedy detection of threatening issues is by far the most important objective of monitoring and alerting. Paying close attention to anomalous behavior in the system helps detect resource saturation and rare defects.
Data Producers and Consumers
Traditionally, a message queue is used to help reliably deliver communications or messages. Message queues provide an asynchronous communications protocol, meaning that the sender and receiver of the message do not need to interact with the message queue at the same time.
A Publisher is a Producer that produces messages, a Subscriber is a Consumer that consumes messages from a source, and the Message Queue serves as the medium between them.
Queue-based systems are typically designed so that multiple consumers process data from a queue and the work is distributed such that each consumer gets a different set of items to process. Hence, there is no overlap, allowing the workload to be shared and enabling horizontally scalable architectures.
Today there are a handful of messaging technologies, enterprise service buses, and iPaaS vendors on the market. Event streaming is the digital equivalent of the human body's central nervous system. Apache Kafka is an open-source, distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. Kafka is designed for high-volume publish/subscribe messages and streams, and is meant to be durable, fast, and scalable.
Kafka provides a rich set of Producer and Consumer APIs that can be used to transform streams of data and to connect to external applications that pull or push data.
Typical use cases of Kafka are:
- Website activity
- Event sourcing
- Commit log
- Log aggregation
Axway B2Bi with Kafka
AMPLIFY™ B2Bi provides organizations with a set of services for exchanging standardized electronic business documents between applications and trading partners.
B2Bi enables you to organize and automate the flow of electronic documents between participants located both inside and outside your enterprise network.
It is quite interesting to see how Axway B2Bi can be integrated with Kafka in an organization's real-time and batch pipelines.
In this post, we will look at the Messaging, Metrics, and Event sourcing use cases of Kafka, using the rich Java SDK available in Axway B2Bi.
B2Bi Messaging – Inbound data
Kafka provides Java Client APIs that, together with B2Bi's SDK, let you write a piece of code that connects to Kafka as a Producer. Create an application delivery that points to the Kafka broker and specify the corresponding Kafka topic. This makes the end-to-end tracking of the B2Bi transmission visible in Axway Sentinel.
Also, use B2Bi Mapping to chunk or extract the required message from the actual payload received from the Trading Partner. Produce data to Kafka as <Key, Value> pairs with a unique key for the whole transmission.
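As a sketch, the producing side can be a standard Kafka producer that the B2Bi delivery hands the extracted payload to. The broker address, topic name, and key scheme below are illustrative assumptions, not part of B2Bi's own API:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class B2biInboundProducer {

    // Illustrative values: point these at your own broker and topic.
    static final String BOOTSTRAP_SERVERS = "localhost:9092";
    static final String TOPIC = "b2bi-inbound";

    // One key per B2Bi transmission, so every chunk of the same payload
    // lands on the same partition and keeps its ordering.
    static String transmissionKey(String coreId) {
        return "b2bi-tx-" + coreId;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", BOOTSTRAP_SERVERS);
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // A hypothetical B2Bi CoreId stands in for the real transmission id.
            String key = transmissionKey("0123456789abcdef");
            producer.send(new ProducerRecord<>(TOPIC, key, "<extracted message chunk>"));
        }
    }
}
```

Because the key is constant for the whole transmission, all of its chunks share a partition and arrive in order for downstream consumers.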
B2Bi Messaging – Outbound data
Using Kafka's Java Client APIs and B2Bi's SDK, write code that connects to Kafka as a Consumer. Create an application pickup that points to the Kafka broker. This also enables end-to-end tracking of the B2Bi transmission in Axway Sentinel.
One can use B2Bi Integration processing to wrap multiple messages and/or transform the data before the payload is delivered to the Trading Partner.
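A minimal sketch of the consuming side, assuming an illustrative broker, topic, and group name: running several copies of this process with the same `group.id` makes Kafka distribute the partitions among them, which is the work-sharing pattern described earlier.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class B2biOutboundConsumer {

    // Format one consumed record for logging or hand-off.
    static String describe(String key, String value) {
        return "key=" + key + " value=" + value;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // illustrative broker
        props.put("group.id", "b2bi-outbound");            // consumers in one group share the partitions
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("b2bi-outbound-topic"));
            // Poll forever; in a real pickup this loop would hand each value
            // to B2Bi for delivery to the Trading Partner.
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(describe(record.key(), record.value()));
                }
            }
        }
    }
}
```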
B2Bi Metrics
Axway B2Bi currently writes the Trading Engine's usage metrics to a log file, but it can also be configured to send them to JMX, a CSV file, or a Graphite server.
Using the Kafka Java Client APIs, one can write a Kafka Metrics Reporter that sends the metrics to Kafka. Configure the Kafka Reporter in the monitoringconfig.xml file:
<Reporter name="JMX" enabled="false" rateUnit="SECONDS" durationUnit="MILLISECONDS" />
<Reporter name="CSV" enabled="false" rateUnit="SECONDS" durationUnit="MILLISECONDS" writeInterval="5" path="../logs/metrics" />
<Reporter name="GRAPHITE" enabled="false" rateUnit="SECONDS" durationUnit="MILLISECONDS" writeInterval="30" host="host.graphiteserver.com" port="2003" />
<Reporter name="LOG4J" enabled="true" rateUnit="SECONDS" durationUnit="MILLISECONDS" writeInterval="60" />
<Reporter name="KAFKA" enabled="true" rateUnit="SECONDS" durationUnit="MILLISECONDS" writeInterval="60" bootstrapServer="lphxpsowalg1.lab.phx.axway.int:9092" topic="channel5" clientId="B2BiStatsProducer01" />
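The Kafka side of such a reporter can be sketched as below. The hook method, the line-oriented encoding, and the broker address are assumptions for illustration; the actual reporter SPI to implement is described in the Trading Engine Developer Guide.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaMetricsReporter {

    private final Producer<String, String> producer;
    private final String topic;

    KafkaMetricsReporter(Producer<String, String> producer, String topic) {
        this.producer = producer;
        this.topic = topic;
    }

    // Encode one reading as a simple "name value timestamp" line; the format
    // is an assumption, pick whatever your downstream consumers expect.
    static String encode(String metric, double value, long epochMillis) {
        return metric + " " + value + " " + epochMillis;
    }

    // Hypothetical hook, invoked once per writeInterval tick.
    void report(String metric, double value) {
        producer.send(new ProducerRecord<>(topic, metric,
                encode(metric, value, System.currentTimeMillis())));
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // illustrative broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // "channel5" matches the topic in the sample monitoringconfig.xml above.
            new KafkaMetricsReporter(producer, "channel5").report("te.messages.inflight", 12.0);
        }
    }
}
```

Keying each record by metric name keeps all readings of one metric on the same partition, so per-metric ordering is preserved for chart-drawing consumers.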
Additionally, you can pair Kafka with Logstash/Elasticsearch/Kibana or any monitoring tool that can consume from Kafka to enable real-time monitoring of these metrics.
B2Bi Event sourcing
Write an Event Router using B2Bi's SDK and Kafka's Java Client APIs, and configure the events.xml file with the Kafka configuration as follows:
<EventRouter id="Kafka Event Router" class="com.axway.b2bi.test.KafkaEventRouter" active="false" priority="1">
  <Parameters bootstrapServer="lphxpsowalg1.lab.phx.axway.int:9092" topic="B2BiEvents" clientId="B2BiEventsProducer01"/>
</EventRouter>
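The Kafka-facing half of such a router might look like the sketch below. In a real implementation the class would extend the event-router base class from B2Bi's SDK (see the Trading Engine Developer Guide); that interface, and the `route` method name, are omitted or hypothetical here.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaEventRouter {

    private final Producer<String, String> producer;
    private final String topic;

    // bootstrapServer, topic, and clientId come from the Parameters
    // element in events.xml.
    KafkaEventRouter(String bootstrapServer, String topic, String clientId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServer);
        props.put("client.id", clientId);
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        this.producer = new KafkaProducer<>(props);
        this.topic = topic;
    }

    // Key each event by its type so consumers can filter cheaply.
    static String key(String eventType) {
        return eventType == null ? "unknown" : eventType;
    }

    // Hypothetical entry point: forward one B2Bi event to Kafka.
    void route(String eventType, String eventXml) {
        producer.send(new ProducerRecord<>(topic, key(eventType), eventXml));
    }

    void close() {
        producer.close();
    }
}
```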
Additionally, you can pair Kafka with Logstash/Elasticsearch/Kibana or any monitoring tool that can consume from Kafka to enable real-time monitoring of these events.
Discover more about AMPLIFY Streams.
B2Bi 2.6 SP1 Trading Engine Developer Guide
B2Bi 2.6 SP1 Administrator Guide