
Your enterprise container orchestrator in the cloud: deploying OpenShift on Azure

openshift_azure

In a “containerized” world running “cloud native” applications, you need a container orchestrator, and Kubernetes is the leading open source platform providing such features. Furthermore, Red Hat has pushed this project to an enterprise-grade level by developing OpenShift (on top of Kubernetes), adding more powerful features in order to simplify the application development pipeline. Today, thanks to these platforms, you can easily move your containerized applications (or microservices-based solutions) from an on-premise installation to the cloud without changing anything on the development side; it’s just a matter of having a Kubernetes or OpenShift cluster up and running and moving your containers from one side to the other.

As announced during the latest Red Hat Summit in San Francisco, in order to improve the developer experience, Red Hat and Microsoft are working side by side to provide a managed OpenShift service (as is already available for Kubernetes with AKS); you can read more about this effort here.

But … while waiting for this awesome and really easy solution, how can you deploy an OpenShift cluster on Azure today?

Microsoft already provides some good documentation to help you do that, for both the open source OKD project (Origin Community Distribution of Kubernetes, formerly known as OpenShift Origin) and the Red Hat productized version named OCP (OpenShift Container Platform), if you have a Red Hat subscription. Furthermore, there are two different GitHub repositories which provide the deployment templates for both: openshift-origin and openshift-container-platform.

In this blog post, I’d like to describe my personal experience deploying OKD on Azure, summarizing and explaining all the steps needed to have OpenShift up and running in the cloud.

Get the Azure template

First of all, you have to clone the openshift-origin repository, available under the Microsoft GitHub organization, which provides the deployment template (in JSON format) for deploying OKD.

git clone https://github.com/Microsoft/openshift-origin.git

The master branch contains the most current release of OKD together with experimental items, so it includes new things but may be unstable. For this reason, it’s better to switch to one of the release branches in order to use a stable release, for example 3.9:

git checkout -b release-3.9 origin/release-3.9
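
If you want to check which stable release branches are available before switching, you can simply list the remote branches of the cloned repository first:

git branch -r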

This repository contains the ARM (Azure Resource Manager) template file for deploying the needed Azure resources, the related configurable parameters file and the Ansible scripts for preparing the nodes and deploying the OpenShift cluster.

You will come back to the artifacts provided in this repository later, because there are some other steps to complete as prerequisites for the final deployment.

Generate SSH key

In order to secure access to the OpenShift cluster (to the master node), an SSH key pair is needed (without a passphrase). Once you have finished deploying the cluster, you can always generate new keys that use a passphrase and replace the ones used during the initial deployment. The following command shows how to do it using the ssh-keygen tool on Linux and macOS (to do the same on Windows, follow here).

ssh-keygen -f ~/.ssh/openshift_rsa -t rsa -N ''

The above command stores the SSH key pair, generated using the RSA algorithm (-t rsa) and without a passphrase (-N ''), in the openshift_rsa (private key) and openshift_rsa.pub (public key) files.

01_ssh_keygen
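
You will need the content of the public key later, for the sshPublicKey template parameter; you can print it with:

cat ~/.ssh/openshift_rsa.pub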

In order to make the SSH private key available in the Azure cloud, we are going to use the Azure Key Vault service as described in the next step.

Store SSH Private Key in Azure Key Vault

Azure Key Vault is used for storing cryptographic keys and other secrets used by cloud applications and services. It’s useful to create a dedicated resource group for hosting the Key Vault instance (this group is different from the one for deploying the OpenShift cluster).

az group create --name keyvaultgroup --location northeurope

The next step is to create the Key Vault in the above group.

az keyvault create --resource-group keyvaultgroup --name keyvaultopenshiftazure --location northeurope --enabled-for-template-deployment true

The Key Vault name must be globally unique (not just unique inside the newly created resource group).

Finally, you have to store the SSH private key in the Key Vault to make it accessible during the deployment process.

az keyvault secret set --vault-name keyvaultopenshiftazure --name openshiftazuresecret --file ~/.ssh/openshift_rsa
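
To double-check that the secret has been stored, you can read it back (a quick sketch; it prints the secret attributes together with its value):

az keyvault secret show --vault-name keyvaultopenshiftazure --name openshiftazuresecret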

Create an Azure Active Directory service principal

Because OpenShift communicates with Azure, you need to give it permission to execute operations, and this is possible by creating a service principal under Azure Active Directory.

The first step is to create the resource group where we are going to deploy the OpenShift cluster. The service principal will be granted Contributor permissions on this resource group.

az group create --name openshiftazuregroup --location northeurope

Finally, create the service principal (replace <subscription_id> in the scope with your Azure subscription ID).

az ad sp create-for-rbac --name openshiftazuresp --role Contributor --password mypassword --scopes /subscriptions/<subscription_id>/resourceGroups/openshiftazuregroup

From the JSON output, it’s important to take note of the appId field, which will be used as the aadClientId parameter in the deployment template.

02_create_sp
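
If you lose that output, the appId can be retrieved later; a sketch querying by the display name chosen above:

az ad sp list --display-name openshiftazuresp --query "[].appId" -o tsv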

Customize and deploy the Azure template

Back to the openshift-origin GitHub repository we cloned before: the azuredeploy.parameters.json file provides all the parameters for the deployment, while the corresponding possible values are defined in the related azuredeploy.json file.

The azuredeploy.json file defines the deployment template: it describes all the Azure resources that will be deployed to make up the OpenShift cluster, such as virtual machines for the nodes, virtual networks, storage accounts, network interfaces, load balancers, public IP addresses (for the master and infra nodes) and so on. It also describes the parameters with their possible values.

Taking a look at the parameters JSON file, some parameters have a “changeme” value. Of course, this means we have to assign meaningful values to them, because they are the main customizable part of the deployment:

  • openshiftClusterPrefix: a prefix used for assigning hostnames to all nodes (master, infra and worker)
  • adminUsername: admin username for both operating system login and OpenShift login
  • openshiftPassword: password for the OpenShift login
  • sshPublicKey: the SSH public key generated in the previous steps (the content of the ~/.ssh/openshift_rsa.pub file)
  • keyVaultResourceGroup: the name of the resource group that contains the Key Vault
  • keyVaultName: the name of the Key Vault you created
  • keyVaultSecret: the Secret Name you used when creating the Secret (that contains the Private Key)
  • aadClientId: the Azure Active Directory Client ID you should have noted when the service principal was created
  • aadClientSecret: the Azure Active Directory service principal secret/password chosen on creation

Other useful parameters are:

  • masterVmSize, infraVmSize and nodeVmSize: size for VMs related to the master, infra and worker nodes
  • masterInstanceCount, infraInstanceCount and nodeInstanceCount: number of master, infra and worker nodes

All the deployed nodes run CentOS as the operating system.

Instead of replacing the values in the original parameters file, it’s better to make a copy and rename it, for example to azuredeploy.parameters.local.json.
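
From inside the repository directory, that’s just:

cp azuredeploy.parameters.json azuredeploy.parameters.local.json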

After configuring the template parameters, it’s possible to deploy the cluster by running the following command from inside the GitHub repository directory (so that it picks up the JSON template file and the related parameters file).

az group deployment create -g openshiftazuregroup --name myopenshiftcluster --template-file azuredeploy.json --parameters @azuredeploy.parameters.local.json
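
While the deployment runs, you can check its state from another terminal; a sketch using the resource group and deployment name chosen above:

# query the provisioning state of the deployment (Running, Succeeded, Failed)
az group deployment show -g openshiftazuregroup --name myopenshiftcluster --query properties.provisioningState -o tsv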

This command can take quite a long time, depending on the number of nodes and their size; at the end, it shows output in JSON format, whose main fields (in the “outputs” section) are:

  • openshift Console Url: the URL of the OpenShift web console, which you can access using the configured adminUsername and openshiftPassword
  • openshift Master SSH: contains the information for accessing the master node via SSH
  • openshift Infra Load Balancer FQDN: contains the URL for accessing the infra node

In order to access the OpenShift web console, you can just copy and paste the “openshift Console Url” value in your preferred browser.

03_openshift_console

In the same way, you can access the OpenShift master node by just copying and pasting the “openshift Master SSH” value in a terminal.

Once on the master node, you can start using the oc tool to interact with the OpenShift cluster, for example listing the available nodes as follows.

04_master_ssh
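
For reference, a couple of typical first checks might look like this (a sketch; the node hostnames will reflect your openshiftClusterPrefix):

# list the cluster nodes with their status and roles
oc get nodes

# check that the default infrastructure pods (router, registry, ...) are running
oc get pods --all-namespaces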

You may notice that these URLs don’t have really friendly names; that’s because the template variables defining them are:


"infraLbPublicIpDnsLabel": "[concat('infradns', uniqueString(concat(resourceGroup().id, 'infra')))]",

"openshiftMasterPublicIpDnsLabel": "[concat('masterdns', uniqueString(concat(resourceGroup().id, 'master')))]"

Conclusion

Nowadays, as you can see, following a bunch of simple steps and leveraging the Azure template provided by Microsoft and Red Hat, it’s not so difficult to have an OpenShift cluster up and running in the Azure cloud. If you think about all the resources that need to be deployed (nodes, virtual network, load balancers, network interfaces, storage, …), the template really simplifies the overall deployment. Of course, developers’ lives will be even easier when Microsoft and Red Hat announce the managed OpenShift offer.

Having an OpenShift cluster running in the cloud is just the beginning of the fun … so, now that you have it, enjoy developing your containerized applications!

Today’s meetup … “Open sourcing the IoT: running EnMasse on Kubernetes”

Yes … I’m at the airport waiting for my flight back home, and I’d like to write something about the reason for my trip … as usual.

IMG_20170605_132419 DBjIUg1W0AEL7u7

Today, I had a meetup in Milan, hosted in the Microsoft office and organized by my friend Felice Pescatore, who leads the AgileIoT project; of course my session was about messaging and IoT … so no news there. The title? “Open sourcing the IoT: running EnMasse on Kubernetes”.

Other friends were there with their sessions: Felice himself, Valter Minute speaking about how to move from an IoT prototype to a product, and Clemente Giorio and Matteo Valoriani with very interesting sessions about real HoloLens scenarios.

I started with an introduction about messaging and how it relates to the IoT, then moved to the EnMasse project, an open source “messaging as a service” platform that is well suited to be the messaging infrastructure of an IoT solution (for example, it’s applicable inside the Eclipse Hono project).

I showed the main EnMasse features, the new ones coming in the next weeks, and how EnMasse provides a messaging and IoT solution from an “on-premise” deployment to the “cloud” on a Kubernetes or OpenShift cluster. This is why I said “open sourcing the IoT”: all the components in such a solution are open source!

IMG_20170605_132407 IMG_20170605_132359

To show that, I ran a demo with a Kubernetes cluster on Azure Container Service, deploying EnMasse and Apache Spark on it. The demo was made of an AMQP publisher sending simulated temperature values to a “temperature” address deployed in EnMasse (as a queue), and a Spark Streaming job reading those values to process them in real time, computing the maximum value over the latest 5 seconds and writing the result to the “max” address (another queue); finally, an AMQP receiver read and showed those values from “max”.

If you want to know more about that, you can find the following resources:

IoT developer survey: my 2 cents, one year later …

As I did last year, I have decided to write a blog post with my point of view on the IoT developer survey from the Eclipse Foundation (IoT Working Group) with IEEE, Agile IoT and the IoT Council.

The final report always gives interesting insights into where the IoT business is going, and Ian Skerrett (Vice President of Marketing at the Eclipse Foundation) has already analyzed the results, available here, in a great blog post.

I just want to add my 2 cents on that …

Industry adoption …

It’s clear that industries are adopting IoT, with big increases in industrial automation, smart cities, energy management, building automation, transportation, healthcare and so on. IoT is becoming “real” even if, as we will see in the next paragraphs, it seems we are still at a prototyping stage. A lot of companies are investing in it, but few of them have real solutions running in the field. Finally, from my point of view, it would be great to add more information about countries, because I think there are big differences in how and where each country is investing in IoT.

The concerns …

Security is always the big concern but, as Ian said, interoperability and connectivity are on a downward trend; I agree with him that the available middleware solutions and IoT connectivity platforms are solving these problems. The great news is that all of them support different open and standard protocols (MQTT, AMQP, even HTTP), which is the way to go for interoperability; at the same time, we are able to connect a lot of different devices supporting different protocols, so the connectivity problem is addressed as well.

Coming back to security, the survey shows that many more software developers are involved in building IoT solutions, and what they mostly use is SSL/TLS and data encryption, so security at the software level. From my point of view, some security concerns should be addressed at the hardware level (using crypto chips, TPMs and so on), but this is an area where software developers lack knowledge. It’s not a surprise: we know that IoT needs a lot of different knowledge from different people, but the survey shows that in some cases the “right” people aren’t involved in developing IoT solutions. Too many web and mobile developers are working on it, too few embedded developers with real hardware knowledge.

Languages: finally a distinction!

Last year, in my 2 cents, I asked for a distinction about which side of an IoT solution we consider when ranking the most used programming languages. I’m happy to see that the Eclipse Foundation took this suggestion, so this year’s survey asked about the languages used on constrained devices, gateways and the cloud.

iot_survey

The results don’t surprise me: C is the most used language on “real” constrained devices, and all the other languages from Java to Python are mostly used on gateways; JavaScript fits in the cloud, mainly with Node.js. In any case, Node.js is not a language, so in my opinion providing only JavaScript as a possible answer would have been enough: besides using a server-side framework like Node.js, the other possibility is using JavaScript in “function as a service” platforms (e.g. AWS Lambda, Azure Functions and so on), which are mostly based on Node.js. Of course, the most used language in the cloud is Java.

What about the OS?

Linux is the most used OS for both constrained devices and IoT gateways but … here a strange thing comes to my mind. On “real” constrained devices based on MCUs (e.g. Cortex-M) you can run only a few specific Linux distros (e.g. uClinux), not a full Linux distro, so it’s strange that Linux wins on constrained devices while, when the survey shows which distros are used, uClinux has a very low percentage. My guess is that a lot of software developers don’t know what a constrained device is 🙂

On constrained devices, I expect developers to use “no OS” (programming on bare metal) or a really tiny RTOS, not something as heavy as Linux.

On gateways I totally agree with Linux winning, but Windows has grown since last year.

Regarding the most used distros, the Raspbian victory shows that we are still at a prototyping stage. I can’t believe that developers are using Raspbian, and thus the related Raspberry Pi hardware, in production! If it’s true … I’m scared! If you know which planes, trains or building automation systems are using something like that, please tell me … I have to avoid them 🙂

Regarding the protocols …

From my point of view, the presence of TCP/IP in the connectivity protocol results is misleading. TCP/IP is a protocol suite used on top of Ethernet and Wi-Fi, which appear in the same results, so we can’t compare them.

Regarding communication protocols, the current know-how is still leading; this is the reason why HTTP 1.1 is still on top and HTTP 2.0 is growing. MQTT is there, followed by CoAP, which surprises me considering the need for an HTTP proxy to export local traffic outside a local device network. AMQP is finding its own way, and I think that in the medium/long term it will become a big player here.

Cloud services

In this area we need a distinction, because the question is pretty general, but we know that you can use Amazon AWS or Microsoft Azure for IoT in two ways:

  • as IaaS hosting your own solution or an open source one for IoT (i.e. just using provided virtual machines for running an IoT software stack)
  • as PaaS using the managed IoT platforms (i.e. AWS IoT, Azure IoT Hub, …)

Having Amazon AWS on top doesn’t surprise me, but it would be useful to have more details on how IoT developers actually use it.

Conclusion

The IoT business is growing, and its adoption as well, but looking at these survey results, most companies are still at a prototyping stage and few of them have a real IoT solution in the field.

It means there is a lot of room for everyone to join the party! 😀

 

A routing IoT gateway to the Cloud

Let’s start with an on-premise solution …

Imagine you have an embedded solution (or, if you like … an IoT solution) with a bunch of tiny devices connected to an on-premise server, which receives telemetry data from them, processes it to show information in real time on a dashboard, and controls the devices.

Imagine that your solution is based on the AMQP protocol and perhaps your on-premise server is running a messaging broker for gathering data from devices as messages through the local network.

Imagine that, due to your very constrained devices, security in the network is guaranteed only at the data level, by encrypting the body of every single AMQP message. It’s possible that, due to their complexity and need for more resources (CPU and memory), you can’t use sophisticated algorithms (e.g. DES, 3DES, AES, …) on your devices but only simple ones (e.g. TEA, …).

Your solution is just working great in your environment.

… but now we want to move it to the Cloud

Imagine that, for some reason, you need to change the on-premise nature of your solution and you want to connect the devices directly to the cloud, with a very strict rule: nothing changes on the devices. At most you can change some configuration parameters (e.g. server IP, …) but not the way and the protocol they use for communication.

The first simple solution could be moving your messaging broker from the on-premise server to an IaaS VM in the Cloud; just change the connection parameters on your devices and everything continues to work as before.

The big problem now is that your data is sent over the public network and your security is based on a simple encryption algorithm applied only to the payload of the messages. For this reason, you start to think about using SSL/TLS in order to have security at the connection level on top of TCP/IP: data encryption and server authentication.

You start to think about it but then … wait … you can’t use SSL/TLS on your tiny devices … they don’t have the needed resources in terms of CPU and memory … and now?

Fog computing and IoT gateways: the solution?

You know about “fog computing” (the new buzzword after IoT?) and that you can solve your problem using an IoT gateway. Having this gateway could mean having an intelligent piece of software which is able to gather data from the local network, process it in some way and then send it to the Cloud. The gateway could give you more features, like filtering data (sending only part of it), offline handling (if the Cloud isn’t reachable) and complex local processing, but … wait … you don’t want that … you just want the data to arrive in the Cloud the same way it did before (to the on-premise server), and for now you don’t need those additional great features.

Could we have a very simple IoT gateway with only the two features we need:

  • SSL/TLS protocol support on behalf of the tiny devices;
  • traffic routing from devices to the Cloud in a transparent way;

The answer is … yes! Such a solution exists, and it’s provided by the Qpid Dispatch Router project from the ASF (Apache Software Foundation).

I already wrote about it in some previous articles [1] [3], so let me just show how you can use the router in a way that solves your “porting” problem.

The router just needs the right configuration

In order to show in a very simple way how to configure the router for our objective, we can use the Azure IoT Hub as the IoT Cloud platform. Like all the Azure messaging services (Service Bus, Event Hubs), the IoT Hub requires an encrypted connection based on the SSL/TLS protocol … exactly the problem we want to solve for our non-SSL-capable devices.

For the sake of simplicity we can run the router on a Raspberry Pi using the Raspbian distribution as OS; you can read about installing the Qpid Dispatch Router on Linux and on the Raspberry Pi in these articles [2] [4].

The main point is the configuration needed for the router in order to connect to an IoT Hub and routing the traffic from devices to it.

First of all, we have to consider all the addresses used at the AMQP level to send telemetry data to the hub, receive commands and reply with feedback. All this information is explained in depth here [5] [6].

The routing mechanism used in this configuration is “link routing” [3], which means that the router creates a sort of “tunnel” between the devices and the IoT Hub; it opens the TCP/IP connection to the hub, establishes SSL/TLS on top of it, and then opens the AMQP connection. All the SSL/TLS stuff happens between the router and the IoT Hub, and the devices aren’t aware of it. You can see what happens in the router trace:

pi@raspberrypi:~ $ PN_TRACE_FRM=1 qdrouterd --conf ex06_iothub.conf
Sat Jul 23 11:56:17 2016 SERVER (info) Container Name: Router.A
Sat Jul 23 11:56:17 2016 ROUTER (info) Router started in Standalone mode
Sat Jul 23 11:56:17 2016 ROUTER_CORE (info) Router Core thread running. 0/Router.A
Sat Jul 23 11:56:17 2016 ROUTER_CORE (info) In-process subscription M/$management
Sat Jul 23 11:56:18 2016 ROUTER_CORE (info) In-process subscription L/$management
Sat Jul 23 11:56:18 2016 AGENT (info) Activating management agent on $_management_internal
Sat Jul 23 11:56:18 2016 ROUTER_CORE (info) In-process subscription L/$_management_internal
Sat Jul 23 11:56:18 2016 DISPLAYNAME (info) Activating DisplayNameService on $displayname
Sat Jul 23 11:56:18 2016 ROUTER_CORE (info) In-process subscription L/$displayname
Sat Jul 23 11:56:18 2016 CONN_MGR (info) Configured Listener: 0.0.0.0:5672 proto=any role=normal
Listening on 0.0.0.0:5672
Sat Jul 23 11:56:18 2016 CONN_MGR (info) Configured Connector: ppatiernoiothub.azure-devices.net:5671 proto=any role=on-demand
Sat Jul 23 11:56:20 2016 POLICY (info) Policy configured maximumConnections: 0, policyFolder: '', access rules enabled: 'false'
Sat Jul 23 11:56:20 2016 SERVER (info) Operational, 4 Threads Running
Connected to ppatiernoiothub.azure-devices.net:5671
[0x19dc6c8]: -> SASL
[0x19dc6c8]:0 -> @sasl-init(65) [mechanism=:ANONYMOUS, initial-response=b"anonymous@raspberrypi"]
[0x19dc6c8]: -> AMQP
[0x19dc6c8]:0 -> @open(16) [container-id="Router.A", hostname="ppatiernoiothub.azure-devices.net", max-frame-size=65536, channel-max=32767, idle-time-out=60000, offered-capabilities=:"ANONYMOUS-RELAY", properties={:product="qpid-dispatch-router", :version="0.6.0"}]
[0x19dc6c8]: <- SASL
[0x19dc6c8]:0 <- @sasl-mechanisms(64) [sasl-server-mechanisms=@PN_SYMBOL[:EXTERNAL, :MSSBCBS, :ANONYMOUS, :PLAIN]]
[0x19dc6c8]:0 <- @sasl-outcome(68) 
[0x19dc6c8]: <- AMQP
[0x19dc6c8]:0 <- @open(16) [container-id="DeviceGateway_1766cd14067b4c4b8008b15ba75f1fd6", hostname="10.0.0.56", max-frame-size=65536, channel-max=8191, idle-time-out=240000]

At this point, the devices can connect locally to the router and, when they ask for the AMQP links related to the IoT Hub addresses, those links are tunneled by the router: the AMQP “attach” performatives are routed to the IoT Hub through the router’s connection. The communication then continues on these links as message transfers directly between the IoT Hub and the devices, encrypted with SSL/TLS from the router onward.

router_iothub

The router configuration looks like this:

listener {
 addr: 0.0.0.0
 port: 5672
 authenticatePeer: no
}

ssl-profile {
 name: azure-ssl-profile
 cert-db: /opt/qdrouterd/Equifax_Secure_Certificate_Authority.pem
}

connector {
 name: IOTHUB
 addr: <iotHub>.azure-devices.net
 port: 5671
 role: on-demand
 sasl-mechanisms: ANONYMOUS
 ssl-profile: azure-ssl-profile
 idleTimeoutSeconds: 120
}

# sending CBS token
linkRoute {
 prefix: $cbs/
 connection: IOTHUB
 dir: in
}

# receiving the status of CBS token request
linkRoute {
 prefix: $cbs/
 connection: IOTHUB
 dir: out
}

# sending telemetry path and command replies from device to hub on : devices/<DEVICE_ID>/messages/events
# ATTENTION ! Here we need CBS Token
linkRoute {
 prefix: devices/
 connection: IOTHUB
 dir: in
}

# receiving command on device from hub on : devices/<DEVICE_ID>/messages/deviceBound
# ATTENTION ! Here we need CBS Token
linkRoute {
 prefix: devices/
 connection: IOTHUB
 dir: out
}

The main points in the configuration are:

  • a listener entity, which defines that the router accepts incoming AMQP connections on port 5672 (not encrypted);
  • the ssl-profile entity, which configures the parameters for the SSL/TLS connection to the IoT Hub, specifically the CA certificate to use for server authentication;
  • the connector entity, which defines how the router connects to the IoT Hub (address and port) using the above SSL profile;

After the above entities, there is a bunch of linkRoute entities defining which addresses should be link-routed by the router from the devices to the hub (using the specified connector).

You can find the complete configuration file here.
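
With the router running, a quick sanity check from the Raspberry Pi itself could look like the following sketch (assuming the qdstat tool, which ships with Qpid Dispatch, is available):

# show the router's active connections: the local listener side and,
# when device links are active, the connector to the IoT Hub
qdstat -b 127.0.0.1:5672 -c

# show the addresses known to the router
qdstat -b 127.0.0.1:5672 -a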

The Netduino Plus 2 use case

In order to develop a device-side application very quickly, I decided to leverage my knowledge of the .Net Micro Framework, using a board that doesn’t have SSL/TLS support: the Netduino Plus 2.

The simple application sends a message to the IoT Hub and receives a new one, replying with feedback. All the code is available here.

In the following pictures you can see the message sent by the board and the command received (with the related feedback) through the Device Explorer tool.

01

02

Conclusion

Of course, the Qpid Dispatch Router project has a bigger objective than what I showed here: providing connections to messaging services at scale thanks to a more complex router network, with path redundancy to reach a broker or a simple receiver.

In this article, I just showed a different way to use it in order to give more power to tiny devices which aren’t able to connect to AMQP-based services due to their limitations (in this case, the lack of SSL/TLS support).

If you consider the starting point, even the configuration change could be avoided, because the router could have the same IP address and AMQP listening port as the previous on-premise server.

It means that just adding the router, configured for the Cloud connection, solves the problem!

[1] Routing AMQP : the Qpid Dispatch Router project

[2] Qpid Dispatch Router installation on your Linux machine

[3] Routing mechanisms for AMQP protocol

[4] My Raspberry Pi runs the Qpid Dispatch Router

[5] Connecting to the Azure IoT Hub using an AMQP stack

[6] Azure IoT Hub : commands and feedback using AMQP .Net Lite

IoT developer survey: my point of view

A few days ago, the Eclipse Foundation published the report of the latest IoT developer survey, sponsored by the foundation itself with IEEE IoT and Agile IoT. The survey’s main objective is to understand the technologies preferred by developers in terms of languages, standards and operating systems; furthermore, it shows the main concerns about IoT and how companies are shipping IoT solutions today.

Great content about this report was published by Ian Skerrett (Vice President of Marketing at the Eclipse Foundation) on his blog and on SlideShare, with a summary of all the main information.

I’d just like to add my 2 cents with some absolutely personal considerations about the results …

Companies are investing …

Regarding how companies are delivering IoT solutions, it’s clear that the IoT market is growing. A lot of companies already have IoT products in the field and the others are planning to develop them in the coming months. It’s not a surprise: more than a buzzword, the IoT is a real business opportunity for all companies strictly related to embedded devices (silicon vendors, OEMs, …) and for software companies (on the cloud and application side), which are rapidly changing how they do business.

Security and interoperability: the big concerns

The result related to the main concerns about IoT is very clear: people and companies are worried about security. All the data flowing to the cloud, from our personal lives or owned by companies, needs to be protected so that no one can steal it. The security concern involves both software protocols (e.g. SSL/TLS, …) and hardware (e.g. crypto chips, …), and today it seems that a really good solution isn’t available. The same goes for interoperability: having a lot of IoT standard protocols means having NO standard protocols. A lot of consortia are trying to define standard specifications and frameworks, but … there are too many of them; the big companies are split across different consortia, and some are part of more than one. This is a big deal: as with protocols … it means NO standard.

Developers prefer Java and C … what about JavaScript?

Java’s first place as the preferred language, with C second, is no surprise: Java is used in a lot of cloud solutions based on open source products, and C is the best language for developing on the device side with great performance at low cost (at least from a hardware point of view). The first strange result is JavaScript as the third most used language: I hope this position is related to its huge usage with Node.js on the server side and not as an “embedded” language on devices … that would scare me.

Protocols: the current know-how is leading

Now, the protocols …

HTTP/1.1 as the most used protocol makes sense, because today it’s the best-known protocol in the developer world; in order to develop and deliver an IoT solution with a quick time to market, companies leverage internal know-how, and sometimes they don’t invest in figuring out how other protocols work and whether they have advantages. This explains its position to me, along with HTTP/1.1’s simplicity and its ASCII/text-based nature: a lot of developers don’t like binary formats very much. A final point is that the REST architecture is a very good solution in a lot of scenarios, and HTTP/1.1 is the most used protocol (the only one?) for that.

MQTT and CoAP are used a lot thanks to the available open source projects and their simplicity; MQTT is very lightweight and works great on tiny embedded devices, while CoAP tries to overcome some HTTP/1.1 disadvantages with new features (e.g. server push, observe, …) and its binary nature.

A lot of developers are scared of AMQP because, I have to admit, it’s not as simple as the previous ones, but it’s powerful and everyone should give it a try. If you want to start with it, you can find a lot of links and resources here.

I’m surprised by HTTP/2.0 in fourth position! I mean … how many developers know, love and use HTTP/2 today? I expected it behind “in-house, proprietary”, AMQP and XMPP. I suppose companies are prototyping solutions with this protocol because they think that, thanks to their HTTP/1.1 knowledge, it’s quite simple to move to the next version. I think that’s totally wrong, because HTTP/2.0 is completely different from HTTP/1.1. I love it … I’ll invest in it.

OS: Linux and RTOSes on bare metal

Regarding operating systems, Linux’s first position isn’t a surprise, but we have to consider it on both the server side and the device side (and there are a lot of Linux-based embedded devices). The other OSes are only for embedded (constrained) devices, so their percentages get no help from the cloud side. Finally, Linux is useful for IoT gateways too (as we know with Kura), even if Microsoft, for example, is investing in its Windows IoT Core and will release an IoT Gateway SDK in the coming months.

All the services in the cloud

Amazon AWS in first position as Cloud services provider is no surprise, but I don’t think it’s about their relatively new AWS IoT platform; rather, it’s all the open source IoT stuff that developers prefer to run on Amazon VMs rather than Azure VMs.

Conclusion

Here, the great news is that the IoT market is growing and developers/companies are investing in it, trying to be on the market as soon as possible. The “bad” news is that too many different protocols and frameworks are in use, and the road to interoperability and interconnection is quite long or … infinite?

Azure IoT Hub is GA: the news!

Yesterday, the Microsoft Azure IoT Hub was released in GA!

The public preview was quite successful, with a lot of people (makers) and companies (professionals) trying it out to develop their end-to-end IoT solutions.

In a previous blog post, I already discussed its main features, with a comparison to AWS IoT, the Internet of Things platform by Amazon.

Relating to that article, there are the following important differences to focus on (for the first one, see the MQTT sketch after this list):

  • Azure IoT Hub now supports MQTT 3.1.1 natively! There is no need to use a field gateway to translate MQTT into AMQP (or HTTP) to communicate with the Hub. Your MQTT-enabled devices can now connect directly to the Cloud, and you can use the SDK provided by Microsoft (with an API abstraction layer on top of MQTT) or any MQTT library (M2Mqtt is a good choice for C# applications). Of course, the connection must always be encrypted with the SSL/TLS protocol. More information is at the official documentation page here.
  • The pricing has changed: first of all, it is no longer related to the number of devices (as in the public preview) but only to the total number of messages per day. The bad news is that, starting from April 1st, the S1 and S2 plans will double in price. Of course, the Free plan … will still be free!
  • AMQP over WebSockets: the AMQP protocol is now supported over WebSockets too (as for Event Hubs, for example).
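
To give an idea of what the native MQTT support means, here is a minimal sketch of publishing a telemetry message with the mosquitto_pub command line client; it assumes a pre-generated SAS token as password, and the values in angle brackets are placeholders to replace with your own hub and device names:

# publish one telemetry message over MQTT 3.1.1 with TLS on port 8883
mosquitto_pub -h <iothub>.azure-devices.net -p 8883 \
  -i <DEVICE_ID> \
  -u "<iothub>.azure-devices.net/<DEVICE_ID>" \
  -P "<SAS_TOKEN>" \
  -t "devices/<DEVICE_ID>/messages/events/" \
  -m '{"temperature": 21.5}' \
  --cafile /etc/ssl/certs/ca-certificates.crt \
  -V mqttv311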

With the first two major news items above, the Azure IoT Hub offer gets closer to the AWS IoT offer: it supports MQTT and removed the device limit from pricing.

The news is not only on the Cloud side but on the device side too!

In recent months, a lot of OEMs and hardware companies have worked hard to support Windows 10 IoT Core and Azure IoT Hub connectivity on their platforms. Today the number of Azure Certified IoT Partners has increased considerably!

new_iothub_partners

It’s great to see that the Hub ecosystem is growing … now we have to wait for real IoT solutions based on it!

To start learning about Azure IoT Hub, I recommend the Azure IoT Hub Learning Path, which will guide you through all the steps needed to use the Hub in the best way.

My chat about IoT on the TecHeroes show!

techeroes_channel9

On December 2nd I had a session at WPC 2015 in Milan speaking about Microsoft Azure IoT Hub.

During the conference I had a chat with Erica Barone (Microsoft Italia Evangelist) about IoT in general and, more specifically, about the Microsoft offering on Azure with IoT Hub and IoT Suite. The chat was recorded as a new episode of the TecHeroes show published on Channel9.

Today it’s there!

It’s in Italian; I’m sorry for all my foreign friends 😦

WPC 2015 Milan: Azure IoT Hub and IoT Suite

wpc2015

Organized by Overnet in collaboration with Microsoft, WPC is the most important Italian conference focused on Microsoft technologies. This year it will run over two full-immersion days, December 1st and 2nd, with 70 sessions in 8 tracks.

I’m honoured to be part of the speaker lineup this year as a Microsoft MVP on Windows Embedded and IoT; on December 2nd, I’ll have a session about Microsoft Azure IoT Hub, with an overview of the new Azure cloud gateway and the related Azure IoT Suite.

For sure, the conference will be great for content and for networking with experts on Microsoft technologies. Don’t forget the “Ask The Expert” corner, with a “bunch” of Microsoft MVPs ready to answer your questions.

All information and details about registration and the conference are on the official web site.

Let’s imagine Azure IoT Hub internal architecture

Today, thanks to the Microsoft Azure IoT Hub, we can focus on developing our “end to end” Internet of Things solution from an application-level perspective, without worrying about communication problems and the interconnection and message exchange between the devices and the service backend.

Before the advent of the IoT Hub, we needed to set up all the communication channels ourselves to achieve bidirectional paths between devices and the Cloud. In that scenario, the best choice could be to use Microsoft Azure Service Bus with Queues, Topics/Subscriptions and Event Hubs instances.

In the next paragraphs I’ll try to imagine (at a very high level) what the IoT Hub service provides internally for us and how it sets up all the mentioned channels; we could mimic the related architecture using a bunch of Service Bus entities. During my explanation, I’ll use terms like “may” and “should” because I don’t know how it really works; I can only imagine it, thinking as if I had to implement it from scratch.

I consider this post a conclusion to my previous “trilogy” on how to connect to the Azure IoT Hub using an AMQP stack, which is useful for understanding how it works internally; those articles covered how to connect from a device perspective, how to handle commands and feedback, and finally how to get telemetry from devices.

The telemetry path

The first “simple” path we need to set up for an IoT solution is the telemetry one, related to messages flowing from devices to the Cloud without any response or feedback in the opposite direction. To support the ingestion of millions of events per second, the IoT Hub “should” use an Event Hub-like mechanism, and this “may” be true because the D2C endpoint (on the service side) is defined as “Event Hub compatible” and we can read from it using a “pure” Event Hub client (like the “low level” Event Hub Receiver or the “high level” Event Processor Host).

iot_hub_internals_telemetry

As explained in this blog post, at the AMQP level the devices send data using a link connected to the following node as the D2C endpoint (on the device side):

/devices/<DEVICE_ID>/messages/events

and the same node is exposed as the “Event Hub compatible” D2C endpoint on the Cloud side (as already mentioned in this blog post). The related information is available on the Azure portal to build the endpoint connection string.

eventhubcompatible

It should be clear that the telemetry path is achieved using an Event Hubs-like channel.

The command path …

For handling commands from service to device, we need a channel for sending them and another one to receive feedback about their delivery (accepted, rejected, expired, …).

As explained in this blog post, the command path is achieved using a link to the following node on the service side:

/messages/devicebound

and this node on the device side:

/devices/<DEVICE_ID>/messages/deviceBound

The command path “should” be a queue on both sides (device and service), with a related TTL (time to live) and a dead letter queue for messages expired or rejected by devices.

As we can see, the sending path “/messages/devicebound” doesn’t carry any information about the target device. To address a specific device, the service needs to set the To AMQP system property to the device-specific node “/devices/<DEVICE_ID>/messages/deviceBound”. An internal mechanism “should” route the command to the right queue for the destination device by analyzing the message and reading the above To property.

iot_hub_internals_command

It means that the internals “should” provide a queue on the service side for sending commands and a queue for each device for receiving them.

… and feedback path

When the device accepts or rejects a message received on its C2D endpoint, the IoT Hub internals generate feedback that is sent to another possible queue, mapped on the following path:

/messages/servicebound/feedback

In this case, the device information related to the feedback is inside the body of the message itself, in JSON format, as described in the following post.

iot_hub_internals_feedback

In this case, the feedback path “should” be implemented with a queue on the service side.

Conclusion

As you can see, the IoT Hub “should” provision a bunch of “Service Bus-like” entities for us inside a unique namespace related to the IoT Hub itself. Before this new service, we needed to set up all the Event Hubs and queue instances ourselves … today the IoT Hub provides the entire architecture.

iot_hub_internals

As I said … it’s only my imagination, but … a possible high-level way to implement the IoT Hub internally. It’s only an analogy game with a “home made” solution, as you can see from the following Twitter conversation about this post with Clemens Vasters and Olivier Bloch from the Service Bus and IoT Hub teams at Microsoft.

iot_hub_internals_twitter

Azure IoT Hub and IoT Suite: my chat with the DotNetPodcast team

podcast_iot_hub_banner

For all Italian people (or all my foreign friends who can understand Italian ;-)) I’d like to announce another podcast about the Internet of Things on DotNetPodcast.

This is my third podcast on this stuff, and I want to thank the DotNetPodcast team (Roberto Albano, Antonio Giglio, Massimo Bonanni) who invited me once again. It’s a pleasure for me.

This time I speak about the new Microsoft Azure managed service for the IoT world: the IoT Hub.

Why the IoT Hub is needed, its main features, connectivity and supported protocols, security, SDKs and certified hardware, and how it fits into an end-to-end IoT solution built using the Azure IoT Suite; finally, a brief comparison with its competitor, AWS IoT from Amazon. These are the main points of my chat, which you can find here.

I hope you’ll enjoy it!