On-Demand Webinar: MQTT 5 in Azure with HiveMQ Enterprise MQTT Broker

How to integrate MQTT 5 into your Azure IoT Platform with the HiveMQ Enterprise MQTT Broker

Here's what you'll learn in the webinar:

In addition to the technical advantages that MQTT offers for communication in the Internet of Things, another major advantage is that it is also supported by the IoT components of the major cloud service providers - and thus also by Microsoft Azure. However, a closer look reveals that the cloud providers' MQTT support is subject to considerable limitations in some cases; none of the providers offers complete support for MQTT 5. In our webinar, we demonstrate in a practical, example-driven way how the HiveMQ Enterprise MQTT Broker can be integrated into an Azure-based enterprise platform, and thus how the full range of MQTT 5 functionality can be used for IoT solutions. We do not limit ourselves to automating the deployment of HiveMQ, but also cover essential aspects such as monitoring, authentication and authorization, as well as the digital/device twin - and thus show how HiveMQ can be more than just an alternative to the Azure IoT Hub.

Watch now online (German only):

The speakers

Dominik Obermaier is CTO and Co-Founder of HiveMQ and has contributed significantly to the standardization of the MQTT protocol, which is indispensable in IoT today. He is passionate about helping customers develop innovative IoT solutions and related products. Dominik is an author and conference speaker on MQTT, Java application architectures, IoT and highly scalable systems.
Dominik Obermaier
CTO & Co-Founder, HiveMQ
After his studies in computer science, Sven Kobow was self-employed as a consultant and software developer for some time before heading software development at Maul-Theet GmbH for four years. After an interim position as Senior Software Developer at Bosch, he has been working at diva-e as Senior Platform Architect since April 2015. He is, among other things, the founder of the diva-e IoT Lab.
Sven Kobow
(former) Senior Platform Architect, diva-e

Download the presentation (German only):

Transcript of the diva-e Webinar MQTT 5 in Azure with HiveMQ Enterprise MQTT Broker

Angela Meyer: Welcome to our diva-e webinar with our partner HiveMQ. Today you will learn from our experts Dominik Obermaier and Sven Kobow how the HiveMQ Enterprise MQTT Broker can be integrated into an Azure-based enterprise platform. My name is Angela Meyer, and I'm on the diva-e marketing team. Among other things, I oversee our events and webinars, and I'm your presenter today. At this point, I would like to hand over to Dominik and Sven. I hope you all enjoy listening. Sven, I will now hand the broadcast rights over to you.

Introduction diva-e and HiveMQ

About Sven Kobow and diva-e

Sven Kobow: Thank you, Angela, for the introduction. And from my side, of course, a warm welcome. I am very much looking forward to spending the next hour with you and, as Angela already announced, telling you a few things about how you can enjoy MQTT 5 and all its features with the HiveMQ Enterprise MQTT Broker on an Azure-based enterprise platform. First of all, briefly about us, about diva-e: who are we? We are Germany's leading Transactional Experience Partner. We are the number one e-commerce partner in Germany and the partner of choice for many B2B customers. We have the tools and technologies for secure projects. We are now almost 800 employees at eight locations across Germany. Since 2018, we have had our own diva-e IoT Lab, in which we bundle all our activities around IoT and industrial IoT and can thus provide our customers with expertise in this area.

And of course, we also offer services around the HiveMQ Broker and the HiveMQ products in our service portfolio, such as operations or extension development. Here is a short overview of our customers. As you can see, there are quite a few well-known companies. We work for our customers in a wide variety of industries, and the number of IoT-related projects has been increasing rapidly in recent years. This means that we can also report good demand in this area. Who am I? My name is Sven Kobow, and I work at diva-e as an Expert Platform Architect. I founded the diva-e IoT Lab in 2018 and am a specialist in IoT and data platforms. I have many years of experience in agile software development, and I actively contribute to the Open Industry 4.0 Alliance. If you want to follow me on LinkedIn or GitHub, please do so. Contact details are posted here. Exactly. At that point, I would hand it over to Dominik.

About Dominik Obermaier and HiveMQ

Dominik Obermaier: A warm welcome from me as well. My name is Dominik Obermaier. I am the CTO and co-founder of HiveMQ. I come from distributed systems and have done much work in this area over the last few years, including development. And as one of the few Germans, I am also a member of the OASIS MQTT Technical Committee. That means we specify the MQTT standard, which, as perhaps some of you know, is an ISO standard. In the meantime, it has also become the primary protocol in the industry for connecting devices to the Internet. I was actively involved in specifying the current ISO standard and MQTT 5, the new generation. At the moment, I am also working on MQTT-SN, to make MQTT fit for sensor networks so that it can also be deployed in highly constrained environments. I am a book author and conference speaker, and I also deal with topics like working at the university and organizing conferences. You can follow me on Twitter and LinkedIn. (8 sec.)

About HiveMQ: HiveMQ is a company that has been around for eight years. We are located near Munich. At this point, we serve 130 customers with our core product HiveMQ, many of which are retail companies, including all German car manufacturers and also, internationally, some major car manufacturers. And within the next two years, about 50 per cent of all connected cars will be running on HiveMQ. We are specialized in IoT and MQTT software. This includes our core product HiveMQ, but also ecosystem tools such as libraries, which we make available as open source and which are used in industrial environments and in use cases such as connected cars.

The Internet of Things

As we speak, a lot is happening in the so-called Internet of Things. What we see here is the number of people connected to the Internet. If you go back to 2018, around four billion people had access to the Internet. That's a little more than half. In 2020, of course, that has continued to develop, and in the next few years, a large proportion of people will simply have access to the Internet.

But what's exciting now is that this chart that we see here shows the devices connected to the Internet. The chart that we just saw before, with just these four billion people and growing, was already a very steeply growing curve. You can see it here as the gray line drawn down at the bottom. Whereas at this point, we have well over 30 billion devices already on the Internet. What is a device? That can be anything from a washing machine to a car to, let's say, industrial production lines. That means the Internet of Things is growing exponentially, and it's one of the most significant changes there has probably been since the steam engine. The Internet of Things therefore also requires completely different technologies than the Internet of People, because the scales that we see here are entirely different. And one of the critical technologies is the whole topic of cloud computing.

Cloud computing platforms

And there are the big hyperscalers, the cloud providers. Amazon's AWS is one of the most commonly used cloud platforms and makes up a considerable share of the platform market. In the meantime, Microsoft Azure is growing very strongly. And as we will see later, it is probably the better choice for larger, more ambitious customers to do cloud computing. In this webinar, we will also look in detail at how you can technically map your use cases to do IoT with Azure. As I said, when you talk about cloud computing, if you don't want to go into the private cloud, the question is: do I go in the direction of Microsoft Azure, in the direction of AWS, or in the direction of Google Cloud?

And the current distribution of customers shows that AWS has the most customers as of today. Microsoft Azure is much better positioned, especially in industry, especially in large companies with more than 1000 employees. And if you look at the revenue and the number of customers, you can see that Azure has somewhat fewer absolute customers than Google or Amazon, for example, but the usage is significantly higher. All the big cloud providers also have in common that they bring along a so-called Internet of Things platform. These have many, many functionalities, which we don't want to go into in detail right now. Sven will later detail the components and techniques necessary and how to implement a clean architecture. But basically, we very often talk about the fact that you need analytics, that is, evaluations. Machine learning is now ubiquitous, and it's an important aspect, especially when I'm talking about predictive maintenance. Device management, including the provisioning of devices, is a core topic.

But also things like Digital Twin, which is the virtual representation of a physical device that I can interact with. And of course, things like security are also important. And what all cloud providers have in common, whether it's AWS with its IoT offering or Azure with IoT Hub, is the topic of connectivity. We're talking about the direct connection from the device, that can be from a car, can be from a machine to the cloud. And that is also a part that generally comes along out of the box. But it comes with considerable disadvantages if you use these standard functionalities. And that's what we want to look at in detail now.

IoT in the cloud: standard functionalities

The question is: why should you bother with device-to-cloud and cloud-to-device connectivity at all? Because connectivity is usually the part you don't want to deal with. You want to do IoT in the cloud, for example, to map predictive maintenance or other use cases. You often don't want to deal with how the devices connect to the cloud, because it should just work. But the truth is that the devices you produce yourself - if I'm a mechanical engineer now, but also if I'm a car manufacturer - are ubiquitous, or should be in most cases, in my value chain. And one of the topics that has always been very important on the Internet is the topic of open standards. It's essential in that respect to avoid vendor lock-in. On the web, HTTP has emerged as the standard protocol for people. It has the advantage that you don't have to worry about whether the Wikipedia server that I want to visit runs an Apache web server, an nginx web server, or its own implementation.

Because, in the end, my browser speaks HTTP, and you just get a response. And I'm also free to use Chrome, Firefox, or even Microsoft Edge. Looking at the Internet of Things, the standard protocol par excellence is MQTT. And as I said, that's an industry standard, that's an ISO standard. And the guarantee is that if I use MQTT to connect my devices to the cloud, I can change my cloud provider at any time. If for some reason I now want to build an infrastructure on AWS in addition to Microsoft, I can rely on the fact that my device speaks MQTT, this standard, and I can change my endpoint at any time. I don't have that tight coupling. But unfortunately, that tight coupling is something that the cloud providers often impose.

MQTT in detail

And maybe again briefly on the topic of MQTT in detail. MQTT is usually used when I have, let's say, unreliable networks - that is, networks that are not stable. Usually, we're talking about mobile networks, which means some connection over 2G, 3G, 4G, or now 5G in the future. MQTT is mainly used whenever I don't have a connection that is guaranteed to always be online. For example, if I have a vehicle, I could very well drive into a tunnel and lose my connectivity that way. And here, especially in Germany, anyone who has ever travelled with Deutsche Bahn and tried to browse the Internet with their cell phone will know very well that the network is not 100 per cent good most of the time. And that is precisely what MQTT was developed for.

But also to create an industry standard that simply avoids vendor lock-in. MQTT 5 is the latest standard, released in 2018, and it's, let's say, the gold standard for new Internet of Things projects. It's the successor to MQTT 3.1.1. And if you're wondering now which version you're even using: the chances are excellent that it's MQTT 3.1.1, the ISO standard. MQTT 5 was developed very specifically to make this cloud connectivity better but also more flexible. With MQTT 5, it was essential to us in the Technical Committee to collect feedback from the users who have been in production with MQTT over the years in order to improve the protocol, but also to increase the performance. And that's where things like enhanced error reporting and greater scalability came along.
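To make the point about enhanced error reporting concrete, here is a small illustrative sketch (not from the webinar, and not HiveMQ code): MQTT 5 replaced the handful of fixed CONNACK return codes of MQTT 3.1.1 with a rich set of reason codes that can appear on CONNACK, PUBACK, SUBACK, DISCONNECT, and other packets. The code values below are taken from the MQTT 5 specification; the helper function is our own.

```python
# A few real reason-code values from the MQTT 5 specification.
# MQTT 3.1.1 only had six CONNACK return codes; MQTT 5 lets the broker
# tell the client much more precisely what went wrong.
MQTT5_REASON_CODES = {
    0x00: "Success",
    0x80: "Unspecified error",
    0x81: "Malformed Packet",
    0x84: "Unsupported Protocol Version",
    0x87: "Not authorized",
    0x89: "Server busy",
    0x97: "Quota exceeded",
}

def describe_reason(code: int) -> str:
    """Map a reason code to a human-readable string, as a client's
    enhanced error reporting might do."""
    return MQTT5_REASON_CODES.get(code, f"Unknown reason code 0x{code:02X}")
```

With MQTT 3.1.1, a rejected publish simply resulted in a dropped connection; with MQTT 5, the client receives one of these codes and can react accordingly.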

For example, with HiveMQ, we have customers who currently connect more than ten million devices via a single MQTT broker. So, the problem that arises now is that when I use Azure - and that is currently the case, although Microsoft is now also sitting on the MQTT Technical Committee - there is no support for MQTT 5. And that's a shame because, since 2018, it is the new standard, which is also used in new projects. But also, and this is a pity, it is impossible to speak standard-compliant MQTT 5 with Azure IoT Hub at all. There are technical limitations, and it's just, let's say, not MQTT in the true sense. It is based on MQTT, but it is a proprietary variant. And the problem is that if I work with IoT Hub today, it leads to vendor lock-in, and I can't port my solution to another cloud later. It is also difficult for me to base this connectivity on standard-compliant MQTT 5.

That means I'm also, in quotes, doomed to use the vendor's own SDKs for things to work correctly, because it is very, very difficult to interact with IoT Hub using open-source, standard-compliant MQTT libraries. We also have a comparison of the popular IoT cloud platforms, which can be found at this link. There, the limitations are also listed in more detail - what exactly they are. So, what is this webinar about? What is the alternative?

HiveMQ: An alternative

And many of our customers do precisely that as one of the alternatives. It is essential to all of our customers to speak standard-compliant MQTT in order not to get vendor lock-in in the value chain of their own company - to be flexible for the future. And accordingly, HiveMQ implements all MQTT standards 100 per cent. The topics we mainly work on are reliability and scalability. That means we have customers with more than ten million devices in use. These include many automotive manufacturers; many automotive clouds run on HiveMQ. And HiveMQ is the MQTT broker that brings this data to data lakes, machine learning, and applications. In other words, it is the connectivity endpoint to which the devices connect in order to remain standard-compliant on the device side. And HiveMQ is designed to be standards-compliant and can be operated on all common cloud platforms, including Azure, which we will look at in detail today.

But of course, things like Kubernetes are also elementary, and as I said, we'll see more about that in a moment. So HiveMQ, as I said, is built to run on these platforms. And besides the device connectivity happening over standard MQTT, HiveMQ integrates with all the common enterprise systems. That starts with Apache Kafka and, as we will see today, includes cloud services like Event Hubs and Service Bus. Even if I were in the Amazon ecosystem, it would be possible to integrate with all these components. And I simply have the freedom to operate that in the cloud, in the private cloud, or in the public cloud. Exactly, as I said, we have some customers who do that.

We have 130 customers worldwide who use this, including all German car manufacturers in production, but also companies such as Siemens or Telekom, as well as the largest cable operator in the world, Liberty Global. Companies like Munich Airport, for example, also use HiveMQ to map their airport use cases, to remain standard-compliant and be well-positioned for the future. Exactly. And I am delighted that we can now present this to you together with our partner diva-e. They have already implemented this integration of Microsoft Azure and HiveMQ in great depth and are experts in it: how do I bring these components together to map a use case that works, from proof of concept to production? And with that, I would like to hand over to Sven.

Use Case: Integration of HiveMQ into an Azure landscape

Sven Kobow: Exactly, thank you. Right. So the following slides are about showing how you can integrate HiveMQ into your Azure landscape. We assume that a company has chosen Azure for good reasons, for example to build its enterprise infrastructure on it. And that's perfectly fine; we don't want to question that at all. There are good reasons to choose Azure, or any other cloud service provider, to ultimately run your workload there. Exactly. Before we go into detail, let's just take a quick look at what the architecture of an IoT platform could look like. I said "could" deliberately because there are certainly a lot of different approaches, which of course still depend very much on the specific use case.

The architecture of an IoT platform: an example

But all in all, you can say that many architectures follow a standard like the one shown here. Ultimately, the data from the IoT devices usually enters the platform via MQTT brokers or comparable technologies. And then this data stream often splits. You could say that there is something like a hot path, where it is primarily a matter of doing things like stream processing and holding data in order to utilize or display it again as quickly as possible. To do this, the data is often stored in time-series databases, which are high-performance. However, only a limited amount of information is stored, usually the last 90 days. This has the great advantage that, for example, you can access this time-series data very quickly in specific applications.

Then there is often the cold path, which ultimately involves persisting the data permanently in order to execute things like analytics, reporting, or machine learning on it. Storage there is often much slower but also much more cost-efficient; the corresponding components are often much cheaper relative to the amount of data. You can say that such an architecture follows the pattern of generating insights from data and ultimately deriving actions from these insights. If a company has now opted for Azure, it could build a corresponding IoT platform using Azure's onboard resources. The IoT Hub from Azure would be the corresponding component of choice, which provides everything needed to create an IoT platform. This means that ultimately the IoT Hub also includes this concept of the digital twin, so that I have the option of managing my devices, my things, via the IoT Hub.
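The hot-path/cold-path split described above can be sketched in a few lines (an illustrative toy model, not part of the webinar demo; a real platform would use a time-series database for the hot path and blob storage for the cold path):

```python
from datetime import datetime, timedelta, timezone

# Hot path: fast access, limited retention (the 90 days mentioned above).
# Cold path: permanent, cheap persistence of every message.
HOT_RETENTION = timedelta(days=90)

class HotColdRouter:
    def __init__(self):
        self.hot = []   # stands in for a time-series database
        self.cold = []  # stands in for cheap long-term storage

    def ingest(self, timestamp, payload):
        # Every message goes to the cold path for permanent persistence ...
        self.cold.append((timestamp, payload))
        # ... and to the hot path, which only keeps the recent window.
        self.hot.append((timestamp, payload))
        cutoff = timestamp - HOT_RETENTION
        self.hot = [(t, p) for t, p in self.hot if t >= cutoff]
```

Analytics and machine learning then read from the cold store, while dashboards and live applications query the hot store.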

I can provide connectivity via different protocols like MQTT and AMQP. There are many SDKs for different platforms to develop the software and connect it to the IoT Hub. Another significant advantage is that the IoT Hub, as an internal component, is excellently integrated with the other components of Azure, so that in the end, of course, you can get the data wonderfully into your enterprise infrastructure and your other business applications via this path.

But, as we have already heard in Dominik's previous presentation, there are some disadvantages to be considered here. First of all, MQTT is not fully supported. If I implement MQTT and don't use the default Azure SDK, I have to stick to predefined communication schemes, and I'm also limited in using certain concepts or topics. And of course, the IoT Hub as a component of Azure also brings corresponding limitations in terms of SKU size, as it is always called in Azure, i.e. the size of the IoT Hub that you have ordered. If you need more detailed information, you can find out more about quotas and throttling concerning the IoT Hub at the link. This mainly refers to the number of devices, i.e. the number of simultaneous connections that can be established, depending on the chosen size, etc. To remove precisely these limitations and, as Dominik has already described very nicely, to map connectivity in terms of the value chain, and if you want to retain complete control here, you can, of course, use HiveMQ as a message broker and operate it in Azure as well. I have shown here what such an architecture could look like.
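As a small illustration of the topic restriction mentioned above (this sketch is ours, not from the webinar): Azure IoT Hub's documented MQTT support only accepts device-to-cloud telemetry published on its fixed topic scheme devices/{device_id}/messages/events/..., while a standard-compliant broker lets you design the topic tree freely.

```python
# IoT Hub prescribes the telemetry topic per device; anything else is
# rejected. On a standard MQTT broker, any topic structure is allowed.
def iot_hub_accepts(device_id: str, topic: str) -> bool:
    return topic.startswith(f"devices/{device_id}/messages/events/")

# A domain-oriented topic like this would be rejected by IoT Hub but is
# perfectly fine on a standard-compliant broker such as HiveMQ:
free_topic = "plant1/line3/press/temperature"
```

This is exactly the kind of constraint that makes it hard to model your own value chain in the topic structure when using IoT Hub directly.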

HiveMQ as Message Broker

The core component is HiveMQ, which can be operated in a Kubernetes cluster, i.e. in the Azure Kubernetes Service. This ultimately ensures integration into the Azure platform. At this point, it must also be said that HiveMQ is a pure MQTT broker, and in contrast to the IoT Hub, it is, of course, the case that at first glance some functionality that the IoT Hub brings with it is lost here. And that is primarily the concept of the digital twin. This is important because I always want to have information about the status and configurations of my devices in my cloud solution, which is usually mapped with this digital twin concept. Accordingly, we have thought about which components are still needed to reproduce this functionality and to provide equivalent functionality in the end. Of course, this also includes the aspect of monitoring, because anyone who opts for managed services on a hyperscaler does so primarily for the reason that operation should be significantly more straightforward and that they can enjoy high availability, etc.

And appropriate monitoring is always an essential component. As you can see here in this architecture sketch, we have implemented Ditto as a thing-twin component on the one hand. On the other hand, we have connected Azure monitoring via the HiveMQ Prometheus extension, which enables monitoring via Azure-native tools. The components shown here in dashed lines are all components that could additionally be connected in principle. In other words, you are in no way obligated to forward the data to the Event Hub, as the demo will show in our case in a moment. You could, of course, also use any other Azure component at this point, such as Event Grid or the Service Bus, and you could, of course, also use other services for persistence.

You can connect Azure Functions via the Event Hub, persist the data in an Azure Data Lake or Azure Blob Storage, or even integrate it into an Azure Time Series Insights instance, and then use these services accordingly for your business applications. Of course, this architecture could be expanded at will. And you can, at the end of the day, use Azure API Management to integrate the whole thing with an Azure Active Directory for authentication or authorization. Or, if you don't want to use the Event Hub but are already using Kafka, for example, you could use Kafka or other message brokers at that point. Let's go back to the concept of the digital twin.

The digital twin

Ultimately, the digital twin always contains a digital copy of a real asset. There are many different definitions at this point; I don't want to go into that much depth. Ultimately, I always want to have, let's say, permanent interfaces through which I can read out properties of my devices, regardless of whether they are connected at the current time or not. I also want to be able to manage my things, of course. And all of these are functionalities that the Eclipse Ditto component, for example, is supposed to provide. Eclipse Ditto has the great advantage that, on the one hand, it is open source. On the other hand, it has extensive APIs for data management and live streaming of the data, and it has a wide range of possibilities for integration via the Connectivity API, such as MQTT, AMQP, or Kafka. And Ditto provides the ability to transform the payload format, which of course gives you the flexibility at that point to design your payload completely according to your own needs. You are not limited to any message format, and this also allows you to use other standards. That's why you can see these two other icons or logos on the right side.
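The core idea of the twin - the last reported state stays readable even while the device is offline - can be sketched in a toy model (this is not the Eclipse Ditto API; Ditto's real HTTP and WebSocket APIs are far richer):

```python
import copy

# Toy digital twin: keeps the last reported state of a device so that
# applications can read it even while the device itself is offline.
class DigitalTwin:
    def __init__(self, thing_id: str):
        self.thing_id = thing_id
        self.reported = {}    # last state reported by the device
        self.connected = False

    def on_device_update(self, features: dict):
        """Merge a partial feature update reported by the device."""
        self.connected = True
        for name, props in features.items():
            self.reported.setdefault(name, {}).update(props)

    def on_disconnect(self):
        self.connected = False  # the stored state remains readable

    def read(self) -> dict:
        return copy.deepcopy(self.reported)
```

An application querying the twin gets the last known values without having to talk to the device directly - which is exactly the property the IoT Hub twin provides and that Ditto reproduces here.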

These are institutions that are currently very much involved with the topic of message formats and the standardization of message formats. One is the Open Industry 4.0 Alliance, of which we are also a member. And of course, the Plattform Industrie 4.0, which is also working on standards with the asset administration shell to, let's say, map future assets in a model or different models. All of this can ultimately be realized with Ditto. Let's move on to monitoring.

Monitoring

As I said, we have also connected HiveMQ monitoring or HiveMQ metrics to Azure monitoring with the help of the Prometheus extension. And so, I also get the opportunity to monitor my HiveMQ instance, of course, because that is an essential aspect. Of course, I want to know how many devices are connected, how many messages are sent, and any errors. At this point, of course, we can create complete transparency in this way.
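As a rough illustration of how such metrics travel (our sketch, not diva-e's setup): a Prometheus extension exposes metrics in the plain-text Prometheus exposition format, and a scraper parses them into name/value pairs. The metric names below are invented for illustration.

```python
# Minimal parser for un-labeled lines of the Prometheus text exposition
# format: "metric_name value", with comment/metadata lines starting with #.
def parse_prometheus(text: str) -> dict:
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE metadata
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

# Hypothetical scrape output; real HiveMQ metric names will differ.
sample = """\
# HELP example_broker_connections_current Open MQTT connections
example_broker_connections_current 12345
example_broker_messages_incoming_total 987654
"""
```

Azure Monitor can then scrape such an endpoint and feed dashboards and alerts, giving exactly the transparency described above.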

Implementation of the case

Okay. Let's get down to the practical implementation of our demo case. How did we do that? We're coming to the actual implementation now. What might it look like if you decide to use HiveMQ on Azure for your project, for your IoT platform? Yes, many people choose Azure or cloud platforms, as I said before, to enjoy managed services. So, they want to keep the operations effort as low as possible and shift that effort towards the cloud providers. And how you can ultimately achieve a similar effect, or enjoy the same benefits, is what I will now describe below.

We decided to follow a DevOps approach, so, for example, we represent all the infrastructure as code. The entire infrastructure that we provision in Azure is scripted. It is always essential that we can provide these infrastructures without manual intervention, because ultimately this gives you the advantages that the infrastructure is versioned, that it can be tested, and that it can be reproduced at any time. So, in this case, I can roll out different stages at the push of a button, or, if necessary, I can completely roll out a second instance of my platform at any time without additional effort and without the error-proneness of manual provisioning.

The source code

The development process in our demo scenario is as follows: The developers work on the source code. The source code includes HiveMQ extensions, things like Kubernetes manifests, and all kinds of items needed in the pipeline, such as the continuous deployment pipeline. That's all in a source code repository that the developers work on and in which the changes are versioned. As the platform at that point, we used Azure DevOps.

Committing the changes triggers the corresponding pipelines in Azure DevOps. Here on the right side, you can see which tools are used. For example, the build process of the HiveMQ extensions is done with Maven. Then the Docker container is built, and finally, the infrastructure is transferred to Azure, to the platform itself, via Terraform and Kubernetes tools. Here you can see the whole thing mapped out again. So the commit ultimately triggers the pipelines. First, a build pipeline runs, which turns source code into program artefacts, and these are then rolled out via Docker registries and other mechanisms and deployed in Azure.
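A hypothetical, heavily simplified azure-pipelines.yml for such a flow could look like this (Maven@3 and Docker@2 are standard Azure DevOps tasks; the repository names and paths are invented, and the real diva-e pipeline is not shown in the webinar):

```yaml
# Sketch: build the extension with Maven, package and push a Docker
# image, then let Terraform provision the Azure resources.
trigger:
  - main

steps:
  - task: Maven@3            # build the HiveMQ extension
    inputs:
      mavenPomFile: "extension/pom.xml"
      goals: "package"

  - task: Docker@2           # build and push the broker image
    inputs:
      command: buildAndPush
      repository: "example-registry/hivemq-demo"
      dockerfile: "Dockerfile"

  - script: |
      terraform init
      terraform apply -auto-approve
    displayName: "Provision Azure infrastructure"
    workingDirectory: terraform/
```

In a real setup, the Terraform step would run with a service connection to Azure and typically in a separate release pipeline, as the demo later shows.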

Event Hub Extension

Let's take a look at our Event Hub extension. After all, this Event Hub extension is critical for the integration in our case. Using the HiveMQ Extension Framework and the Extension SDK offered by HiveMQ, you can extend the broker with Java or other JVM-based languages and thus add new functionalities. Among other things, you can also implement your own business logic in the broker in this way. This is a great advantage that the IoT Hub, for example, does not offer: it is not extensible, and I cannot change its behaviour.
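The interceptor idea behind such an extension framework can be sketched in a language-agnostic toy model (HiveMQ's real Extension SDK is Java-based and looks different; this only illustrates the control flow of hooking into every inbound PUBLISH):

```python
# Toy broker: extensions register callbacks ("interceptors") that the
# broker invokes for every inbound PUBLISH, e.g. to forward the message
# to an external system such as an Event Hub.
class MiniBroker:
    def __init__(self):
        self.interceptors = []

    def register(self, fn):
        self.interceptors.append(fn)

    def on_publish(self, topic: str, payload: bytes):
        for fn in self.interceptors:
            fn(topic, payload)  # extension logic runs inside the broker

# A stand-in for an Event Hub forwarding extension:
forwarded = []
broker = MiniBroker()
broker.register(lambda t, p: forwarded.append((t, p)))
broker.on_publish("plant1/temperature", b"21.5")
```

The point is that the business logic runs inside the broker's message flow - precisely what a closed component like the IoT Hub does not allow.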

In our concrete case, I developed the extension in Kotlin. It is configurable, and the basic principle is that it forwards the MQTT messages to one or more Event Hubs. And, of course, it tries to preserve the MQTT 5 features and advantages as far as possible in this protocol implementation. Here you can see the actual functionality again. So we have a Thing, Demo Device 0, which has two features. One is a Bosch sensor that can output temperature, humidity, and air pressure as well as a battery level. You now send an MQTT 5 message in this format, a JSON-based message format, to the HiveMQ broker, and the Event Hub extension ultimately sends this message to the Event Hub depending on the external configuration. This is what our HiveMQ cluster setup looks like in our example.
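The kind of JSON message described here can be sketched as follows (the exact schema used in the demo is not shown, so the field names are illustrative):

```python
import json

# Build a JSON device message with several features, as a device might
# publish it via MQTT 5; an extension could forward the serialized
# message to an Event Hub unchanged.
def build_message(thing_id: str, features: dict) -> str:
    return json.dumps({"thingId": thing_id, "features": features})

msg = build_message(
    "demo-device-0",
    {
        "environment": {"temperature": 21.5, "humidity": 48.2, "pressure": 1013.1},
        "battery": {"level": 87},
    },
)
```

Because the payload format is free on a standard broker, the same structure can later be mapped into a Ditto twin or any other standard without changing the device side.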

HiveMQ Extension Framework

We have deployed Kubernetes as a three-node cluster, and HiveMQ as a three-node cluster within that Kubernetes cluster. In addition to the custom-developed Event Hub extension, we also use other extensions found in the HiveMQ Marketplace. For example, the DNS extension, which is necessary to run HiveMQ as a cluster in Kubernetes, and the already mentioned Prometheus extension for the monitoring. I said it before, and here it is explicitly shown again: the HiveMQ Extension Framework is the key piece that enables the complete integration of HiveMQ into your infrastructure, and therefore also a full integration into your business platform. Okay. Then it's time now to show you what that looks like in concrete terms. Now I have to take a quick look. First, I would like to start by visualizing the development process again. This is shown here again. You can see the development environment here. And the developer would now make a few changes here in our HiveMQ extension.

Angela Meyer: Sven, we only see the slide, not the demo. Maybe try again.

Sven Kobow: Yeah, maybe I need to do this demo, perhaps I need to do the presentation mode-, now I see, okay, all right. Then we'll start there again.

Demonstration of the development process on the model

You can see the development environment here. I have made a simple change to the extension. And finally, of course, you want to find this change in your environment. To do this, I take the corresponding change here, which is transferred via Git to the source code repository and pushed. When that happens, the connected pipelines are triggered and start running in Azure DevOps. You can see now here that a new pipeline run was created, number 130, containing the corresponding change. And then the necessary steps are taken by the agents: the code is checked out on the agent, then the appropriate dependencies are downloaded and the extension is built. Let me speed this up a little bit here. And then ultimately, a Docker image is created, which is pushed to a Docker registry, from where the deployments are then made available. Let me make that a little bit faster here. (6 sec.) Here you can see the point. This has now been the view into Azure itself. You can see there is a resource group that is still empty. That means we're at the beginning; the environment is not deployed yet. And now that the image pipeline has run through successfully, a release is triggered automatically. ...

Angela Meyer: Sven, you are currently a bit harder to hear. (Sven Kobow: Okay.) Yes, now you can be heard again.

Sven Kobow: You shouldn't cover the mic with your hand. Okay, so what you see here in the background now are the individual steps necessary to provide the IoT platform in Azure with everything essential for it. So resource groups are created, the Kubernetes cluster is provisioned, public IP addresses are allocated, and those IP addresses are attached to the Kubernetes services. The whole setup, in all its complexity, is done fully automatically here. I prepared this as a video because provisioning such a Kubernetes cluster usually takes quite some time, and this way I have the option to go over it a little bit faster. As you can see, Terraform is used as the provisioning tool. That is, the environment is described in Terraform with all of its resources and their configuration, and can be applied to Azure that way. (8 sec.) Right, now you can see the output in the background; the Event Hub is being created right now.
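The kind of Terraform description mentioned here could look roughly like the following minimal sketch, using the `azurerm` provider. All resource names, sizes, and locations are placeholders, not values from the demo:

```hcl
# Hypothetical sketch of the provisioned environment.
resource "azurerm_resource_group" "iot" {
  name     = "iot-platform-rg"
  location = "westeurope"
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "hivemq-aks"
  location            = azurerm_resource_group.iot.location
  resource_group_name = azurerm_resource_group.iot.name
  dns_prefix          = "hivemq"

  # Three nodes, matching the three-node HiveMQ cluster in the demo.
  default_node_pool {
    name       = "default"
    node_count = 3
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }
}

resource "azurerm_eventhub_namespace" "ns" {
  name                = "iot-platform-ns"
  location            = azurerm_resource_group.iot.location
  resource_group_name = azurerm_resource_group.iot.name
  sku                 = "Standard"
}

resource "azurerm_eventhub" "telemetry" {
  name                = "telemetry"
  namespace_name      = azurerm_eventhub_namespace.ns.name
  resource_group_name = azurerm_resource_group.iot.name
  partition_count     = 2
  message_retention   = 1
}
```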

Of course, what has to be taken into account is that when these resources are created, connection strings and primary keys, etc., are also generated, which are then required in other places to wire the components together. That is all included in this pipeline as well, so that at the end, the HiveMQ Event Hub extension also gets the appropriate keys to connect to the Event Hub. So, now the process is complete. And if you then refresh the view in Azure, you can see that the corresponding resources have all been created here. Let's simply take another look at what has been created in the Kubernetes cluster.

First of all, of course, I have to get the credentials, i.e. the access data, for the newly created Kubernetes cluster. This can be done with the Azure CLI tools, and then I can simply use the standard kubectl commands to get an overview of everything that has happened in the Kubernetes cluster. You can see here that there are, for example, three HiveMQ nodes and the related services, which HiveMQ makes available to the outside via external IPs, et cetera. Exactly. What also happened in the background is that the Ditto instance we deployed was configured accordingly. I see that we've already used quite a bit of time, so I'll move on to the following video. Now you can see my console here, where I'm doing the following.

In the lower area I start the Live View on Ditto, and in the upper window I have sent a message via MQTT; below, via the Ditto Live View or Live Streaming, you can see that it is passed on. Then let's look at how that works with the publishing of messages. That is also shown here. In the lower area, a self-developed Event Hub client has been started; I simply used the Azure Event Hub SDK to create a small demo client. At this point, this client represents any other business service that is listening to the Event Hub. And in the upper area, an MQTT message is sent right now, in the format we saw in the presentation.
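The exact message format from the presentation is not reproduced here, but as an illustration, the following sketch shows the kind of transformation such a bridge extension might apply before forwarding an MQTT publish to the Event Hub: wrapping topic, QoS, and payload into a JSON envelope. The envelope fields and function name are hypothetical, not taken from the demo extension.

```python
import json

def wrap_mqtt_publish(topic: str, payload: bytes, qos: int = 0) -> bytes:
    """Wrap an MQTT publish into a hypothetical JSON envelope for the
    Event Hub (the demo extension's real format was only shown on a slide)."""
    envelope = {
        "topic": topic,
        "qos": qos,
        # Event Hub events are opaque bytes, so the payload is embedded
        # as text here; a real bridge might base64-encode binary data.
        "payload": payload.decode("utf-8"),
    }
    return json.dumps(envelope).encode("utf-8")

event = wrap_mqtt_publish("sensors/temp", b'{"value": 21.5}', qos=1)
print(event.decode("utf-8"))
```

A consuming business service would then read these envelopes from the Event Hub, for example with the Azure Event Hub SDK, as in the demo client.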

And in the lower area, you can now see that the message was received accordingly on the Event Hub. So you can see that messages sent via MQTT can be received on the Event Hub. So, that's about it from the demonstration. I hope that I wasn't too quick and that I could give you a good insight into what the setup looks like and how we envision integrating HiveMQ into Azure to achieve precisely what we described at the beginning: a complete integration of HiveMQ into an Azure-based enterprise platform. That would be the end of the demo for now.

What are the next steps?

If you are interested in MQTT 5, where can you find more sources? HiveMQ has provided excellent documentation material on its web pages. On the HiveMQ website, you can also download HiveMQ, either as the Community Edition or as a trial version of the Enterprise Edition. From our point of view, what's essential as a next step for someone planning to build an IoT platform: know your business and know exactly what you want to do. Ultimately, it doesn't make sense to build an IoT platform just because it's technically possible; the entire business concept must always be coherent. We can provide support if needed, so please contact me. Likewise, feel free to contact me about Azure configuration or the development of the HiveMQ Event Hub extension. So, Angela, I'm at the end now, and I would like to start the Q&A session.


Angela Meyer: Super. Yes, thank you very much for the insights, Sven and Dominik. Right, I will go into the individual questions that have come in here now. And the first question is

Are there already HiveMQ extensions for Service Bus or Event Hub? And where do you get them?

Sven Kobow: Dominik, do you want to answer? Shall I answer?

Dominik Obermaier: Exactly, I'm happy to start. So, it's like this: there is no standard extension at the moment. What we have seen here is the demo that was developed by diva-e to show precisely this integration. At the moment, there is no standard connector here; this is, let's say, a project-specific connector that can be used. And as Sven has also shown, it is possible to connect not only Service Bus or Event Hubs via the Extension SDK, but also things like Event Grid or any other services, directly from the Extension SDK.

Angela Meyer: Okay. And I'd like to ask the participants again whether you are already using HiveMQ for your MQTT services in your company, in a short survey that I'm starting now. In the meantime, participants can continue to ask questions and fill out the short survey. (8 sec.) I'll just read out the next question to you. Namely

What systems does HiveMQ use to manage credentials and devices? Is this integrated with Ditto?

Sven Kobow: Well, I can first answer for our setup and then pass it on to Dominik. In our specific case, we use mutual TLS, which HiveMQ supports, for authentication. HiveMQ itself does not provide a way to manage devices. In our case, HiveMQ and Ditto are not integrated either, but that could be done: you can develop appropriate extensions that integrate the authentication process or the administration with Ditto directly. Ditto provides all the APIs that are needed for this.
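As an illustration of the mutual TLS setup Sven mentions: in HiveMQ this is configured on a TLS listener in `config.xml`, roughly as in the sketch below. The structure follows HiveMQ's documented configuration format; the paths and passwords are placeholders, not values from the demo.

```xml
<listeners>
    <tls-tcp-listener>
        <port>8883</port>
        <bind-address>0.0.0.0</bind-address>
        <tls>
            <keystore>
                <path>/opt/hivemq/conf/broker.jks</path>
                <password>changeme</password>
                <private-key-password>changeme</private-key-password>
            </keystore>
            <truststore>
                <path>/opt/hivemq/conf/client-trust.jks</path>
                <password>changeme</password>
            </truststore>
            <!-- REQUIRED enforces mutual TLS: every client must present
                 a certificate trusted via the truststore. -->
            <client-authentication-mode>REQUIRED</client-authentication-mode>
        </tls>
    </tls-tcp-listener>
</listeners>
```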

Dominik Obermaier: Yes, perhaps I should add something about device management. Typically, device management is about authentication and permissions on the one hand. But very often there is also the issue of provisioning and revocation: for example, if a customer's device has been hacked or whatever, I also want to revoke it. That is complete life cycle management. Some solutions have APIs for that. Many of our customers have a custom-built solution, especially for large IoT deployments; it's often like that. But it is also possible to connect the solution offered by the cloud provider directly via APIs. It would also be possible, for example (it was not integrated in the demo so far), to map at least a part of this via Ditto. That's what the Extension SDK is for. In most cases, device management and provisioning are very, very customer-specific, and accordingly, these systems are connected individually. Very often, they have a REST interface.

And these are then connected via the extension system, which is very simple. That is also something that partners like diva-e do and can implement very quickly, so that this can always be mapped in a use-case-specific way. Of course, HiveMQ supports permissions and authentication with very, very fine granularity. You can even go so far as to enable and disable individual MQTT features for specific devices if necessary, and manage the permissions for devices in a very, very fine-grained way. If you want something out-of-the-box, the Enterprise Security Extension connects with LDAP, with all possible databases, but also with OAuth and so on, and can be used directly out-of-the-box. However, here in the cloud context, it is precisely this integration with Azure, as it was just shown by Sven, that suggests itself. But that is also something that I think we can talk about afterwards, with the colleagues from diva-e or with me, to find out more about how customers do that in concrete projects.

Angela Meyer: Exactly. If you're interested, we'd be happy to discuss that afterwards as well. So, another question came in here.

Was the extension built in the demo embedded directly into a HiveMQ image? Are the extensions reloaded at broker runtime? And does the extension run outside the broker?

Sven Kobow: Yes. So in our specific case, we build the extension in a pipeline and package it together with HiveMQ in a Docker image, so it's shipped right along with it. You can activate and deactivate extensions in HiveMQ at runtime, so you could also deploy such an extension into a running HiveMQ instance at any time. In the demo example, we decided not to do that. Certain rollout functionality is also provided by Kubernetes at this point, for example restarting the pods when the image changes; there are a variety of conceivable scenarios. The extension is executed separately, but within the HiveMQ broker.
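The packaging step described here can be sketched with a short Dockerfile. The image name and extension path are placeholders; the `/opt/hivemq/extensions` directory is the documented extension location in the official HiveMQ image.

```dockerfile
# Hypothetical sketch: ship a custom extension inside the broker image.
FROM hivemq/hivemq4:latest

# Copy the unzipped extension folder into HiveMQ's extensions directory;
# the broker picks it up at startup.
COPY build/hivemq-extension/eventhub-extension /opt/hivemq/extensions/eventhub-extension
```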

Dominik Obermaier: Exactly. Maybe, if I may add something there: the extension is executed in a sandbox. That means isolation is ensured, so extensions can't get in each other's way, for example if I have a security-relevant extension and a non-security-relevant extension, because the broker itself guarantees that isolation. And just as Sven said, it is possible at runtime. It is also the case, not yet officially announced, that in the next few weeks a Kubernetes operator will be officially supported by us. This will enable things like loading extensions directly in the Kubernetes context. In the concrete demo, it is of course already a very, very good pattern to make these deployments and roll them out again at any time using Kubernetes. But from time to time there are project contexts where exactly that is difficult and where I still want to reload extensions in Kubernetes, and that is then possible. And with the Kubernetes operator, which will be released soon and officially supported, it becomes even easier, because a dedicated tool is available from the official side.

Angela Meyer: Okay. So I would include one more question, and then we can gladly clarify any other questions afterwards, since we are running short on time.

What other services are required to run the broker? What does a standard setup look like with slightly fewer self-operated services?

Sven Kobow: So, what other components were necessary in our case to operate it like this? The setup is based on AKS from Azure as the Kubernetes cluster. Ultimately, since these are Docker images, you could also use other services on Azure, but you would then not get to enjoy the benefits of Kubernetes. In any case, to run HiveMQ as a cluster, as far as I know, you need the DNS extension. Dominik, if this is not entirely correct, feel free to correct me. Can you repeat the second part of the question?

Angela Meyer: Yes.

What does a standard setup look like with slightly fewer self-operated services?

Sven Kobow: With a little fewer self-operated services. So, I'll try to answer the question differently; I have to admit I don't think I fully understand it. The setup that we have built is aimed precisely at someone who has chosen a cloud provider for a good reason, namely to get the benefits of services operated by the cloud provider. And ultimately, all the services that we use at this point, except Ditto and HiveMQ of course, are managed services. It has to be said that HiveMQ, especially with the fault tolerance that such a HiveMQ cluster brings with it, behaves very much like a managed component from my point of view.

Angela Meyer: Yes, the participant here means MongoDB specifically, if I'm pronouncing that correctly.

Sven Kobow: MongoDB. (Angela Meyer: Yes.) Ultimately, some Azure services, I think the Azure databases, even offer MongoDB compatibility. So in that respect, you could also use managed services at that point and use the MongoDB APIs in your own applications.

Angela Meyer: Okay. As I said, I would now end the Q&A session here due to the time, and all further questions will be answered by our experts afterwards. You are also welcome to contact Sven and Dominik if you have any other questions. The recording will also be made available afterwards. Here again is Sven's contact information; you can also write to him via LinkedIn or GitHub and connect. He is available for any questions and will be happy to discuss IoT topics with you further. Here's a hint about our upcoming webinars: we host weekly diva-e webinars about SEO, content, or our partner Spryker. Please register for the next webinars on our diva-e website.

We look forward to your participation. Yes, and now I thank Sven and Dominik for your time and insights. And thank you also to the participants for joining us, have a great day and see you all next time.

Dominik Obermaier: See you next time. Thank you for the invitation. Bye.

Sven Kobow: Thank you, bye.