Using JavaScript Client To Produce and Consume Events
Part 1 – Objectives, Audience, Tools & Prerequisites
Part 3 – Using JavaScript Client To Produce and Consume Events
Part 4 – AWS Lambda Function To Produce Kafka Events
Part 5 – AWS API Gateway For Accepting Events
Part 6 – Consuming Events Using MSK Connect – S3 Bucket Sink
Part 7 – Consuming Events Using AWS ECS
Part 8 – Storing Consumed Events Into AWS RDS
Introduction
In the previous part of this tutorial series, we created an AWS MSK cluster. We also logged in to a client machine and used the handy console producer to produce events and the console consumer to consume them.
In this third part, we are going to learn how to produce and consume events using a JavaScript client. We will need this knowledge in subsequent parts of the series, because we will implement an AWS Lambda function, using JavaScript, to do exactly these two things: produce and consume events from our AWS MSK cluster.
Let’s proceed with the development of these two simple Kafka clients.
Local Kafka Installation
We will develop the simple clients locally. The integration with our AWS MSK cluster will take place in a subsequent phase and will be described in another blog post.
In order to do the development locally, we will need to have Kafka installed locally too.
We download Kafka from here: https://kafka.apache.org/downloads
We choose to download the latest version (3.3.1 as of this writing). In our terminal, in the root folder of our working project, we execute the command:
$ wget https://downloads.apache.org/kafka/3.3.1/kafka_2.13-3.3.1.tgz
This will download the file kafka_2.13-3.3.1.tgz, which we have to untar with the command:
$ tar -xvf kafka_2.13-3.3.1.tgz
This command will create the folder kafka_2.13-3.3.1 with all the Kafka files inside.
Start Kafka Locally
In order to start Kafka locally, we need to start Zookeeper and then the Kafka server.
Start Zookeeper
We change directory to where we have Kafka installed:
$ cd kafka_2.13-3.3.1
Then, we start Zookeeper with the following command:
$ bin/zookeeper-server-start.sh config/zookeeper.properties
This command will start Zookeeper and will keep our terminal occupied, since it runs in the foreground.
Start Kafka Broker Server
With the current terminal occupied by Zookeeper, we start another terminal and cd again to the folder where we have Kafka installed. From there, we execute the following command to start the Kafka broker server:
$ bin/kafka-server-start.sh config/server.properties
This will start the broker server and will keep this terminal occupied too.
Create a Kafka Topic
We are going to need a Kafka topic so that we can publish data into it. Let’s create one.
$ bin/kafka-topics.sh --create --topic order-events --bootstrap-server localhost:9092
The above command creates the topic with the name order-events. Remember this name, because we are going to need it later on to produce events into this topic.
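If we want to verify that the topic has been created, we can describe it (an optional sanity check):

$ bin/kafka-topics.sh --describe --topic order-events --bootstrap-server localhost:9092

This prints the topic's partition count, replication factor and leader assignment.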
Start a Node.js Project
We will now start a new terminal and create a new Node.js project. We need to have Node.js and yarn already installed.
Note: If you don’t have Node.js and yarn installed, follow the instructions relevant to your operating system and install them.
Here is how we initialise the project:
$ yarn init
yarn init v1.22.19
question name (test_kafka_js_client):
question version (1.0.0):
question description:
question entry point (index.js):
question repository url:
question author (Panagiotis Matsinopoulos):
question license (MIT):
question private:
success Saved package.json
✨ Done in 10.87s.
The above creates the file package.json, which will be used as a starting point to build the simple JavaScript Kafka producer and consumer.
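Based on the answers given above, the generated package.json should look roughly like this:

{
  "name": "test_kafka_js_client",
  "version": "1.0.0",
  "main": "index.js",
  "author": "Panagiotis Matsinopoulos",
  "license": "MIT"
}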
Install JavaScript Client Library for Kafka
The Kafka server speaks a binary protocol over TCP, so we need a JavaScript client that speaks the language the Kafka server understands. There is a very popular JavaScript client for Kafka already published, and we are going to use it. It is called KafkaJS.
Hence, we first need to add it to our project:
$ yarn add kafkajs
Write the Kafka Producer
With kafkajs installed, we are ready to write our simple Kafka producer. Create the file producer.js in the root folder of your Node.js project.
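A minimal sketch of its content, consistent with the line-by-line notes below, can look like this (the clientId value is illustrative; note that KafkaJS accepts the allowAutoTopicCreation flag as an option to kafka.producer()):

const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  clientId: 'order-events-producer', // illustrative client id
  brokers: ['localhost:9092'],
});

const producer = kafka.producer({ allowAutoTopicCreation: false });

const run = async () => {
  let counter = 0;
  await producer.connect();

  // Generate and send a new message every second.
  const interval = setInterval(async () => {
    const message = { value: `This is order ${++counter}` };
    console.log('...producing');

    // Send the message to the "order-events" topic.
    await producer.send({
      topic: 'order-events',
      messages: [message],
    });
  }, 1000);

  // Stop producing and disconnect after 10 seconds.
  setTimeout(async () => {
    clearInterval(interval);
    await producer.disconnect();
  }, 10000);
};

console.log('Start producing....');
run().catch(console.error);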
This is a very simple producer that produces one new message every second. After 10 seconds, it terminates.
Here are some details about the code above:
- lines 3 – 6: We construct a new Kafka client object by specifying the clientId and the brokers. The clientId is useful to differentiate one client from another.
- line 8: We instantiate a producer. Note the allowAutoTopicCreation boolean flag, which KafkaJS accepts as a producer option. We set it to false because we have already created the topic.
- line 12: We connect the producer to the brokers.
- lines 15 – 16: Then, repeatedly, every second, we generate a new message.
- lines 20 – 24: Every second, we call producer.send(), which sends the message to the topic order-events.
Run Kafka Producer
We can now run this program and see how it produces messages:
$ node producer.js
Start producing....
...producing
...producing
...producing
...producing
...producing
...producing
...producing
...producing
...producing
$
The messages have been produced and stored in the Kafka topic.
Check Number Of Messages In A Topic
With the messages produced, we might want to confirm that the topic order-events contains the number of messages we have produced.
We can run the following command from the Kafka installation folder:
$ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --bootstrap-server localhost:9092 --topic order-events | awk -F ":" '{sum += $3} END {print "Result: "sum}'
Result: 9
$
The number 9 matches the number of ...producing lines printed above when we ran the producer.
Note: The bin/kafka-run-class.sh kafka.tools.GetOffsetShell command prints the latest offset, i.e. the number of messages, of each partition of the topic. For example, if a topic has 1000 partitions, this would have been 1000 lines of output. The awk command that follows sums up the counts of all the partitions in order to print the total.
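For reference, without the awk part, the raw output is one topic:partition:offset line per partition; for our single-partition topic it looks like this:

$ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --bootstrap-server localhost:9092 --topic order-events
order-events:0:9
$

The third colon-separated field is the latest offset, which is why the awk script sums up $3.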
Write the Kafka Consumer
Now it is time to write the Kafka consumer, in a file named consumer.js. It is very similar and equally simple.
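A minimal sketch of it, consistent with the line-by-line notes below, can look like this (the clientId and groupId values are illustrative):

const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  clientId: 'order-events-consumer', // illustrative client id
  brokers: ['localhost:9092'],
});

const consumer = kafka.consumer({ groupId: 'order-events-group' }); // illustrative group id

const run = async () => {
  await consumer.connect();
  // Read the topic from the very beginning, so that existing messages are printed too.
  await consumer.subscribe({ topics: ['order-events'], fromBeginning: true });
  // Process each incoming message and print its details on screen.
  await consumer.run({
    eachMessage: async ({ partition, message }) => {
      console.log({
        partition,
        offset: message.offset,
        value: message.value.toString(),
      });
    },
  });
};

run().catch(console.error);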
- lines 3 – 6: We create a Kafka client object. We specify the clientId and the brokers.
- line 8: We instantiate a Kafka consumer. Note the groupId argument. Consumer groups are a way for Kafka to scale out and provide high availability for consuming events. Many consumer instances can join the same group, in which case the load of consumption is divided amongst the members of the group.
- line 13: We subscribe our consumer to the Kafka topic we want this consumer to process messages from.
- lines 15 – 23: We call consumer.run(), which processes each message and prints its details on screen.
Run Kafka Consumer
Running the Kafka consumer is quite simple:
$ node consumer.js
...
This will print out all the messages that exist in the Kafka order-events topic. It will print things like this:
{ partition: 0, offset: '1', value: 'This is order 7' }
Closing Note
That is the end of Part 3 of the tutorial series Practical Terraform & AWS. We will continue with Part 4, AWS Lambda Function To Produce Kafka Events.
Contact Me
If you want to contact me to ask questions and provide feedback and comments, you are more than welcome to do it. My name is Panos and my email is panos@mixlr.com.