
Introduction to Message Brokers. Part 1: Apache Kafka vs RabbitMQ

April 15, 2019
8 min read

The growing number of devices connected to the internet has given rise to a new term: the Internet of Things (IoT). It grew out of machine-to-machine communication and describes a set of devices able to interact with each other. The need for better system integration drove the development of message brokers, which are especially important for data analytics and business intelligence. In this article, we will look at two big data tools: Apache Kafka and RabbitMQ.

Why did message brokers appear?

Can you imagine the current amount of data in the world? Nowadays, about 12 billion "smart" machines are connected to the Internet. With about 7 billion people living on the planet, that is almost one and a half devices per person. By 2020, the number is expected to grow to 200 billion or even more. With technological development and the spread of "smart" homes and other automated systems, our everyday life is becoming more and more digitized.

Message broker use case

As a result of this digitization, software developers face the problem of reliable data exchange. Imagine you have your own application, say, an online store. You work within your own technology stack, and one day you need to make the application interact with another one. In the past, you would wire up point-to-point machine-to-machine integrations. Nowadays we have dedicated message brokers. They make the process of data exchange simple and reliable. These tools use different protocols that determine the message format and how the message should be transmitted, processed, and consumed.

Messaging in a nutshell

{{rr19-1="/custom-block-to-blog/two-page"}}

Programs like this are essential parts of computer networks. They ensure that information is transmitted from point A to point B.

When is a message broker needed?

  • If you want to control data feeds, for example, the number of registrations in a system.
  • When the task is to send data to several applications while avoiding direct use of their APIs.
  • When processes must be completed in a defined order, as in a transactional system.

So, we can say that message brokers can do 4 important things:

{{rr19-2="/custom-block-to-blog/two-page"}}

There are self-deployed and cloud-based messaging tools. In this article, I will share my experience of working with the first type.

Message broker Apache Kafka

Pricing: free
Official website: https://kafka.apache.org/
Useful resources: documentation, books

Pros:

  • Easy to pick up
  • Powerful event streaming platform
  • Fault-tolerant and reliable
  • Good scalability
  • Free, community-driven distributed product
  • Multi-tenancy
  • Suitable for real-time processing
  • Excellent for big data projects

Cons:

  • Lack of ready-to-use elements
  • No complete monitoring toolset
  • Dependency on Apache ZooKeeper
  • No routing
  • Potential issues as the number of messages grows

What do Netflix, eBay, Uber, The New York Times, PayPal and Pinterest have in common? All these great enterprises have used or are using the world's most popular message broker, Apache Kafka.

{{rr19-3="/custom-block-to-blog/two-page"}}

The story of Kafka development

With numerous advantages for real-time processing and big data projects, this asynchronous messaging technology has conquered the world. How did it start? In 2010, LinkedIn engineers faced the problem of integrating huge amounts of data from their infrastructure into a lambda architecture that also included Hadoop and real-time event processing systems.

Traditional message brokers didn't satisfy LinkedIn's needs: they were too heavy and slow. So the engineering team developed a scalable and fault-tolerant messaging system without lots of bells and whistles. The new queue manager quickly transformed into a full-fledged event streaming platform.

Apache Kafka capabilities

The technology has become popular largely due to its compatibility: we can use Apache Kafka with a wide range of systems, including:

  • web and desktop custom applications
  • microservices, monitoring and analytical systems
  • any needed sinks or sources
  • NoSQL, Oracle, Hadoop, SFDC

With the help of Apache Kafka, you can successfully create data-driven applications and manage complicated back-end systems. The picture below shows 3 main capabilities of this queue manager.

As you can see, Apache Kafka is able to:

{{rr19-4="/custom-block-to-blog/two-page"}}

Apache Kafka key terms and concepts

First of all, you should know about the abstraction of a distributed commit log. This term is crucial for understanding the message broker. Many web developers are used to thinking about "logs" in the context of a login feature, but Apache Kafka is based on the log data structure: a log here is a time-ordered, append-only sequence of data inserts. As for the other key concepts, they are (see the producer sketch after this list):

  1. Topics (the stored streams of records)
  2. Records (they include a key, a value, and a timestamp)
  3. APIs (Producer API, Consumer API, Streams API, Connector API)
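
To make these concepts concrete, here is a minimal producer sketch. It uses the third-party kafka-python client and assumes a broker running on localhost:9092 and a hypothetical "user-registrations" topic; any Kafka client exposes the same Producer API concepts.

```python
# A minimal Producer API sketch (kafka-python; broker address and
# topic name are assumptions for illustration).
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

# Each record carries a key, a value, and a timestamp. Records with
# the same key always land in the same partition, preserving their order.
future = producer.send(
    "user-registrations",              # topic: a stored stream of records
    key=b"user-42",
    value=b'{"event": "signed_up"}',
)

metadata = future.get(timeout=10)      # wait for the broker to acknowledge
print(metadata.topic, metadata.partition, metadata.offset)
producer.flush()
```

The send() call is asynchronous; the returned future resolves to the record's final position (topic, partition, offset) in the commit log.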

Kafka working principle

There are 2 main patterns of messaging:

  1. Queuing
  2. Publish-subscribe

Both patterns have their pros and cons. The advantage of the first is that it lets you easily scale processing across multiple competing consumers; on the other hand, queues aren't multi-subscriber. The second model makes it possible to broadcast data to multiple subscribers, but scaling is more difficult in this case.

Apache Kafka magically combines these 2 ways of data processing, getting the benefits of both, as the consumer-group sketch below shows. It should also be mentioned that this queue manager provides better ordering guarantees than a traditional message broker.
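
A short sketch of how this combination works in practice, again with the assumed kafka-python client and topic: consumers that share a group_id split the partitions between them (queuing behavior), while each distinct group_id receives the full stream (publish-subscribe behavior).

```python
# Consumer-group sketch (kafka-python; broker, topic, and group names
# are assumptions). Run several copies: copies with the same group_id
# divide the partitions (queuing); a copy with a different group_id
# gets every message as well (publish-subscribe).
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user-registrations",
    bootstrap_servers="localhost:9092",
    group_id="billing-service",        # change to broadcast to another group
    auto_offset_reset="earliest",      # start from the oldest retained record
)

for record in consumer:
    print(record.partition, record.offset, record.key, record.value)
```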

Kafka peculiarities

Combining the functions of messaging, storage, and processing, Kafka isn't a common message broker. It's a powerful event streaming platform capable of handling trillions of messages a day. Kafka is useful both for storing and processing historical data and for real-time work. You can use it for building streaming applications as well as streaming data pipelines; the sketch below shows how a consumer can rewind and replay the stored log.
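
Because the log is persisted, a consumer can re-read history instead of only receiving new events. A hedged sketch with the same assumed client and topic:

```python
# Replaying stored history (kafka-python; topic and partition number
# are assumptions). Kafka keeps records until retention expires, so a
# consumer may seek back and process them again.
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
partition = TopicPartition("user-registrations", 0)
consumer.assign([partition])
consumer.seek_to_beginning(partition)     # rewind to the oldest record

batch = consumer.poll(timeout_ms=2000)    # fetch whatever is retained
for records in batch.values():
    for record in records:
        print(record.offset, record.value)
```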

If you want to follow the steps of Kafka users, you should be mindful of some nuances:

{{rr19-5="/custom-block-to-blog/two-page"}}

Conclusion

Being a perfect open-source solution for real-time statistics and big data projects, this message broker still has some weaknesses. The main one is that it requires a lot of work from you: you will feel the lack of plugins and other components that could simply be reused in your code.

I recommend using this combined publish/subscribe and queuing tool when you need to process really big amounts of data (100,000 messages per second and more). In that case, Apache Kafka will satisfy your needs.

Message broker RabbitMQ

Pricing: free
Official website: https://www.rabbitmq.com
Useful resources: tools, best practices

Pros:

  • Supports many programming languages and messaging protocols
  • Can be used on different operating systems and cloud environments
  • Simple to start using and to deploy
  • Offers a variety of developer tools
  • Modern built-in user interface
  • Offers clustering and is very good at it
  • Scales to around 500,000+ messages per second

Cons:

  • Non-transactional (by default)
  • Needs Erlang
  • Only minimal configuration can be done through code
  • Issues with processing big amounts of data

The next very popular solution is written in Erlang. As it's a simple, general-purpose functional programming language with many ready-to-use components, this software doesn't require lots of manual work. RabbitMQ is known as a "traditional" message broker, suitable for a wide range of projects. It is successfully used both by new startups and by notable enterprises.

The software is built on the Open Telecom Platform framework for clustering and failover. You can find many client libraries for the queue manager, written in all major programming languages.

The story of RabbitMQ development

One of the oldest open-source message brokers, RabbitMQ can be used with various protocols. Many web developers like this software because of its useful features, libraries, development tools, and instructions.

In 2007, Rabbit Technologies Ltd. developed the system as an implementation of AMQP, an open wire protocol for messaging with complex routing features. AMQP brought cross-language flexibility to message brokering outside the Java ecosystem. In fact, RabbitMQ works well with Java, Spring, .NET, PHP, Python, Ruby, JavaScript, Go, Elixir, Objective-C, Swift and many other technologies. The numerous plugins and libraries are the software's main advantage.

RabbitMQ capabilities

Created as a general-purpose message broker, RabbitMQ is based on the pub-sub communication pattern, and the messaging process can be either synchronous or asynchronous, as you prefer. The main features of the message broker are (see the routing sketch after this list):

  • Support for numerous protocols and message queuing, changeable routing to queues, and different exchange types.
  • Clustered deployment for high availability and throughput; the software can be used across various zones and regions.
  • Deployment via Puppet, BOSH, Chef, and Docker; compatibility with the most popular modern programming languages.
  • Simple deployment in both private and public clouds.
  • Pluggable authentication and authorization, with support for TLS and LDAP.
  • Many of the provided tools can be used for continuous integration, operational metrics, and integration with other enterprise systems.
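
To illustrate the routing features from the list above, here is a small publishing sketch with pika, a widely used Python client for RabbitMQ; the exchange, queue, and routing-key names are invented for the example.

```python
# Changeable routing with a direct exchange (pika; names are assumptions).
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# A direct exchange delivers a message to every queue whose binding
# key equals the message's routing key.
channel.exchange_declare(exchange="orders", exchange_type="direct")
channel.queue_declare(queue="invoices", durable=True)
channel.queue_bind(queue="invoices", exchange="orders", routing_key="order.paid")

channel.basic_publish(
    exchange="orders",
    routing_key="order.paid",
    body=b'{"order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```

Swapping the exchange type to "topic" or "fanout" changes the routing behavior without touching the publisher code.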

RabbitMQ working principle

Being a broker-centric program, RabbitMQ provides delivery guarantees between producers and consumers. If you choose this software, you should use transient messages rather than durable ones.

The broker tracks the state of each message and verifies whether delivery was successfully completed; it presumes that consumers are usually online. A consumer confirms each message explicitly, as the sketch below shows.

As for message ordering, consumers receive messages in the order they were published; that publishing order is maintained consistently.
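
This delivery verification is visible in client code: the consumer explicitly acknowledges each message, and the broker redelivers anything left unacknowledged. A sketch with the same assumed pika setup:

```python
# Explicit acknowledgements (pika; queue name matches the assumed
# example above). Unacked messages are requeued by the broker.
import pika

def handle(ch, method, properties, body):
    print("processing", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # confirm completion

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.basic_qos(prefetch_count=1)     # deliver one message at a time
channel.basic_consume(queue="invoices", on_message_callback=handle)
channel.start_consuming()
```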

RabbitMQ peculiarities

The main advantage of this message broker is its excellent set of plugins, combined with good scalability. Many web developers enjoy the clear documentation and well-defined rules, as well as the possibility of working with various message exchange models. In fact, RabbitMQ supports 3 of them:

{{rr19-6="/custom-block-to-blog/two-page"}}

Here you can see the gap between Kafka and RabbitMQ. If a consumer isn't connected to a fanout exchange in RabbitMQ when a message is published, the message will be lost, as the sketch below demonstrates. Kafka avoids this problem because any consumer can read any message retained in the log.
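
A tiny sketch of that caveat, with the assumed pika client and an invented exchange name:

```python
# Fanout caveat (pika; names are assumptions): a message published
# before any queue is bound to the exchange is silently dropped.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="events", exchange_type="fanout")

# No queue is bound yet, so this message is lost:
channel.basic_publish(exchange="events", routing_key="", body=b"dropped")

# After binding a queue, subsequent messages are delivered:
queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(queue=queue, exchange="events")
channel.basic_publish(exchange="events", routing_key="", body=b"received")
connection.close()
```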

Conclusion

As for me, I like RabbitMQ for its wide choice of plugins. They save time and speed up work: you can easily adjust filters, priorities, message ordering, etc. Just like Kafka, RabbitMQ requires you to deploy and manage the software, but it has a convenient built-in UI and allows using SSL for better security. As for the ability to cope with big data loads, RabbitMQ is inferior to Kafka here.

To sum up, both Apache Kafka and RabbitMQ are truly worth the attention of skillful software developers. I hope my article will help you find suitable big data technologies for your project. If you still have any questions, you are welcome to contact Freshcode specialists.
