Month: April 2017

IoT developer survey : my 2 cents one year later …

As I did last year, I have decided to write a blog post about my point of view on the IoT developer survey run by the Eclipse Foundation (IoT Working Group) together with IEEE, Agile IoT and the IoT Council.

From my point of view, the final report always gives interesting insights on where the IoT business is going, and Ian Skerrett (Vice President of Marketing at the Eclipse Foundation) has already analyzed the results, available here, in a great blog post.

I just want to add 2 more cents on that …

Industry adoption …

It’s clear that industries are adopting IoT and there is a big increase in industrial automation, smart cities, energy management, building automation, transportation, healthcare and so on. IoT is becoming “real” even if, as we will see in the next paragraphs, it seems that we are still in a prototyping stage. A lot of companies are investing in it but few of them have real solutions running in the field. Finally, from my point of view, it would be great to add more information about countries because I think there is a big difference in how and where each country is investing in IoT.

The concerns …

Security is still the big concern but, as Ian said, interoperability and connectivity are on a downward trend; I agree with him that all the available middleware solutions and IoT connectivity platforms are solving these problems. The great news is that all of them support different open and standard protocols (MQTT, AMQP but even HTTP), which is the way to go for interoperability; at the same time we are able to connect a lot of different devices supporting different protocols, so the connectivity problem is addressed as well.

Coming back to security, the survey shows that many more software developers are involved in building IoT solutions, also because the tools they mostly use are SSL/TLS and data encryption, so security at the software level. From my point of view, some security concerns should be addressed at the hardware level (using crypto-chips, TPMs and so on) but this is an area where software developers lack knowledge. It’s not a surprise, because we know that IoT needs a lot of different knowledge from different people, but the survey shows that in some cases the “right” people are not involved in developing IoT solutions : too many web and mobile developers are working on them, too few embedded developers with real hardware knowledge.

Languages : finally a distinction !

Last year, in my 2 cents, I asked for a distinction about which side of an IoT solution the most used programming languages refer to. I’m happy to know that the Eclipse Foundation took this suggestion, so this year the survey asked about the languages used on constrained devices, on gateways and in the cloud.

iot_survey

The results don’t surprise me : C is the most used language on “real” constrained devices and all the other languages, from Java to Python, are mostly used on gateways; JavaScript fits in the cloud mainly with NodeJS. In any case, NodeJS is not a language, so my idea is that providing only JavaScript as a possible answer would have been enough, also because other than using a server-side runtime like NodeJS, the other possibility is using JavaScript on “function as a service” platforms (i.e. Lambda from AWS, Azure Functions and so on) that are mostly based on NodeJS. Of course, the most used language in the cloud is Java.

What about OS ?

Linux is the most used OS for both constrained devices and IoT gateways but … here a strange thing comes to my mind. On “real” constrained devices that are based on MCUs (i.e. Cortex-Mx) you can run only a few specific Linux distros (i.e. uClinux), not a full Linux distro, so it’s strange that Linux wins on constrained devices but then, when the survey shows what distros are used, uClinux has a very low percentage. My guess is that a lot of software developers don’t know what a constrained device is 🙂

On constrained devices I would expect developers to use “no OS” (programming on bare metal) or a really tiny RTOS, not something close to Linux.

On gateways I totally agree with Linux, but Windows has grown since last year.

Regarding the most used distros, the Raspbian victory shows that we are still in a prototyping stage. I can’t believe that developers are using Raspbian, and so the related Raspberry Pi hardware, in production ! If it’s true … I’m scared about that ! If you know which planes, trains or building automation systems are using something like that, please tell me … I have to avoid them 🙂

Regarding the protocols …

From my point of view, the presence of TCP/IP in the connectivity protocols results is misleading. TCP/IP is a protocol stack used on top of Ethernet and Wi-Fi, which appear in the same results, so we can’t compare them.

Regarding communication protocols, the current know-how is still leading; this is the reason why HTTP 1.1 is still on top and HTTP 2.0 is growing. MQTT is there, followed by CoAP, which surprises me considering the need for an HTTP proxy to export local traffic outside of a local device network. AMQP is finding its own way and I think that in the medium/long term it will become a big player.

Cloud services

In this area we should have a distinction because the question is pretty general, but we know that you can use Amazon AWS or Microsoft Azure for IoT in two ways :

  • as IaaS hosting your own solution or an open source one for IoT (i.e. just using provided virtual machines for running an IoT software stack)
  • as PaaS using the managed IoT platforms (i.e. AWS IoT, Azure IoT Hub, …)

Having Amazon AWS on top doesn’t surprise me, but it would be useful to have more details on how IoT developers actually use it.

Conclusion

The IoT business is growing and its adoption as well but, looking at these survey results, most companies are still in a prototyping stage and few of them have a real IoT solution in the field.

It means that there is a lot of room for everyone to be invited to the party ! 😀

 

Being a “remotee” in Red Hat … one year later

When I started to work at Red Hat last year (on March 1st), all my friends and relatives asked me a lot of questions about my new job … from home !

“What are your working hours ?”, “How does your manager verify that you are working ?”, “How do you share artifacts with your colleagues ?”, “What’s your daily life like without being in touch with your colleagues ?” …

I understand that for people who don’t know what being a “remotee” means, it’s quite difficult to understand this way of spending time at home while … working.

Other people, just kidding, say “You are at home, you can do whatever you want” … but it’s absolutely not true ! It’s exactly the opposite !

I can say that every day I chat with my colleagues, in writing or speaking, depending on the stuff we have to discuss. Once a week I have a video call for syncing on the work each team member has done during the last week. Sometimes we have the chance to meet in person at conferences or on business travel (it was great for me being in Boston in January for the F2F meeting of the whole “messaging” team).

Even if I don’t meet my colleagues in person every day, I have started to develop feelings of friendship with them. I think that if we lived in the same place, I would have a lot of activities with them outside of the office. Of course, it’s a matter of how people are … and I was very lucky with my colleagues. By the way, having such a feeling means that you are comfortable in the team and with your “remote” colleagues.

We are an open source company, so all my artifacts are available online (on GitHub) but in any case we also share documents regarding stuff we are developing in order to have a place for getting feedback and comments.

After one year, I have these main points to share with you :

  • You have to be the manager of yourself and it’s not always simple.
  • You need a separation between being at “home” and being at the “home office”.
  • You have flexibility on the working hours but my preference is to have a full working day, as if I were in a “real” office.
  • You have a manager who trusts you … in a dispersed team, trust is one of the main aspects.
  • You need to be a passionate employee about your job … you have to love it … and I’m lucky on that ! 🙂

Quite often, working from home means working more, and it’s exactly the opposite of what a lot of people think. You are right there at the “office” (just a few seconds from the bed), you are right there at “home” (just a few steps from your desk) … but if you are passionate about what you do … it’s not a problem … at least for me 🙂

Of course, there are some perks about being a “remotee” :

  • I have more free time in the early morning and late afternoon because I avoid wasting my time in traffic jams ! Now I’m a runner who starts his day at 6:00 AM with a workout and also a father who can play with his children just “one minute” after finishing work.
  • I have time for taking my son to school and picking him up.
  • Last year I had a daughter and since July I have been able to see her every hour of every day and watch how she is growing.
  • When I have a break during the day I can speak with my wife or play a little bit with my children.
  • There are few distractions because when you are at your desk … you are alone 🙂 You can concentrate more on the problem you are trying to solve.
  • Two additional perks are … having a good Neapolitan coffee at lunch and watching a “The Big Bang Theory” episode after lunch 🙂

I think that in such a working environment you need two main things : being passionate and being professional.

This short post came to my mind after reading a blog post series written by a Red Hatter during the last few days, explaining how it’s possible to work in a “dispersed” team; I think you can read it to better understand how well “dispersed” teams work here at Red Hat.

So with this … I hope I have answered all the questions people asked me ! 🙂

And now … now I’m ready … ready for the Red Hat Summit where I’ll meet in person some of my colleagues and other Red Hatters from all around the world !

“Hostpath” based volumes dynamically provisioned on OpenShift

Storage is one of the critical pieces in a Kubernetes/OpenShift deployment for those applications which need to store persistent data; a good example is represented by “stateful” applications that are deployed using Stateful Sets (previously known as Pet Sets).

In order to do that, one or more persistent volumes are manually provisioned by the cluster admin and applications can use persistent volume claims for having access to them (read/write). Starting from the 1.2 release (as alpha), Kubernetes offers the dynamic provisioning feature to avoid this pre-provisioning by the cluster admin and to allow auto-provisioning of persistent volumes when they are requested by users. In the current 1.6 release, this feature is considered stable (you can read more about that at the following link).
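Just to recall what the manual workflow looks like, here is a minimal sketch of a persistent volume that a cluster admin could pre-provision by hand and of the claim an application would use to get access to it (names, size and the hostPath path are only examples, not taken from the rest of this article) :

# a statically provisioned volume created by the cluster admin
kind: PersistentVolume
apiVersion: v1
metadata:
  name: manual-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/manual-pv
---
# the claim the application uses to bind to it
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: manual-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

With dynamic provisioning the admin doesn’t create the PersistentVolume at all : the claim, bound to a storage class, is enough and the volume is created on demand.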

As described in the above link, there is a provisioner which is able to provision persistent volumes as requested by users through a specified storage class. In general, each cloud provider (Amazon Web Services, Microsoft Azure, Google Cloud Platform, …) allows you to use some default provisioners, but for a local deployment on a single node cluster (i.e. for development purposes) there is no default provisioner for using a “hostpath” (providing a persistent volume through a local directory on the host).
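As an example of what a default provisioner looks like, on AWS a storage class can simply reference the built-in EBS provisioner; the following sketch is only illustrative (the class name and the gp2 volume type are example choices) :

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-standard              # example class name
provisioner: kubernetes.io/aws-ebs   # built-in AWS EBS provisioner
parameters:
  type: gp2                       # EBS volume type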

There is the following project (in the “Kubernetes incubator”) which provides a library for developing a custom external provisioner and one of the examples is exactly a provisioner for using a local directory on the host for persistent volumes : the hostpath-provisioner.

In this article, I’ll explain the steps needed to have the “hostpath provisioner” working on an OpenShift cluster and what I have learned during this journey. My intention is to provide a single guide gathering information taken from various sources like the official repository.

Installing Golang

First of all, I didn’t have the Go language on my Fedora 24 and the first thing to know is that version 1.7 (or above) is needed because the “context” package (added in the 1.7 release) is required. I started by installing the default Go version provided by the Fedora 24 repositories (1.6.5) but I received the following error trying to build the provisioner :

vendor/k8s.io/client-go/rest/request.go:21:2: cannot find package "context" in any of:
 /home/ppatiern/go/src/hostpath-provisioner/vendor/context (vendor tree)
 /usr/lib/golang/src/context (from $GOROOT)
 /home/ppatiern/go/src/context (from $GOPATH)

In order to install Go 1.7 manually, after downloading the tar file from the web site, you can extract it in the following way :

tar -zxvf go1.7.5.linux-amd64.tar.gz -C /usr/local

After that, two main environment variables need to be set to have the Go compiler and runtime working fine.

  • GOROOT : the directory where Go has just been installed (i.e. /usr/local/go)
  • GOPATH : the directory of the Go workspace (where we need to create two other directories, src and bin)

By modifying the .bashrc (or .bash_profile) file we can export these environment variables.

export GOPATH=$HOME/go
PATH=$PATH:$GOPATH/bin
export GOROOT=/usr/local/go
PATH=$PATH:$GOROOT/bin

Having the GOPATH/bin in the PATH is needed as we’ll see in the next step.

Installing Glide

The provisioner project we want to build has some Go dependencies and Glide is used as the dependency manager.

It can be installed in the following way :

curl https://glide.sh/get | sh

This command downloads the needed files and builds the Glide binary, copying it into the GOPATH/bin directory (that’s why we need that directory in the PATH, as already done, in order to use glide on the command line).

Building the “hostpath-provisioner”

First of all we need to clone the GitHub repository from here and then launch the make command from the docs/demo/hostpath-provisioner directory.

The Makefile has the following steps :

  • using Glide in order to download all the needed dependencies.
  • compiling the hostpath-provisioner application.
  • building a Docker image which contains the above application.

It means that this provisioner needs to be deployed in the cluster in order to provide the dynamic provisioning feature to the other pods/containers which need persistent volumes created dynamically.

Deploying the “hostpath-provisioner”

This provisioner is going to use a directory on the host for persistent volumes. The name of the root folder is hardcoded in the implementation and it is /tmp/hostpath-provisioner. Every time an application claims a persistent volume, a new child directory is created under this one.

Such a root folder needs to be created with full read and write access :

mkdir -p /tmp/hostpath-provisioner
chmod 777 /tmp/hostpath-provisioner

In order to run the “hostpath-provisioner” in a cluster with RBAC (Role Based Access Control) enabled or on OpenShift you must authorize the provisioner.

First of all, create a ServiceAccount resource described in the following way :

apiVersion: v1
kind: ServiceAccount
metadata:
  name: hostpath-provisioner

then a ClusterRole :

kind: ClusterRole
apiVersion: v1
metadata:
  name: hostpath-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]

It’s needed because the controller requires authorization to perform the above API calls (i.e. listing, watching, creating and deleting persistent volumes and so on).

Let’s create a sample project for that, save the above resources in two different files (i.e. serviceaccount.yaml and openshift-clusterrole.yaml) and finally create these resources.

oc new-project test-provisioner
oc create -f serviceaccount.yaml
oc create -f openshift-clusterrole.yaml

Finally we need to provide such authorization in the following way :

oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:test-provisioner:hostpath-provisioner
oc adm policy add-cluster-role-to-user hostpath-provisioner-runner system:serviceaccount:test-provisioner:hostpath-provisioner

The “hostpath-provisioner” example provides a pod.yaml file which describes the Pod to deploy for having the provisioner running in the cluster. Before creating the Pod we need to modify this file, setting the spec.serviceAccount property to the name of the service account, which in this case is just “hostpath-provisioner” (as described in the serviceaccount.yaml file).

kind: Pod
apiVersion: v1
metadata:
  name: hostpath-provisioner
spec:
  containers:
    - name: hostpath-provisioner
      image: hostpath-provisioner:latest
      imagePullPolicy: "IfNotPresent"
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      volumeMounts:
        - name: pv-volume
          mountPath: /tmp/hostpath-provisioner
  serviceAccount: hostpath-provisioner
  volumes:
    - name: pv-volume
      hostPath:
        path: /tmp/hostpath-provisioner

Last steps … just creating the Pod and then the StorageClass and the PersistentVolumeClaim using the provided class.yaml and claim.yaml files.

oc create -f pod.yaml
oc create -f class.yaml
oc create -f claim.yaml
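For reference, the two files describe a StorageClass pointing to the provisioner name registered by the hostpath-provisioner binary and a claim using that class; the structure is roughly the following (I’m quoting the names from memory, so double check them against the class.yaml and claim.yaml files in the repository) :

# storage class handled by the hostpath-provisioner
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-hostpath             # class name used by the claim below
provisioner: example.com/hostpath    # name registered by the provisioner
---
# claim requesting a dynamically provisioned volume from that class
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hostpath-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: "example-hostpath"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi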

Finally we have a “hostpath-provisioner” deployed in the cluster that is ready to provision persistent volumes as requested by the other applications running in the same cluster.

Selection_040

See the provisioner working

To check that the provisioner is really working, there is a test-pod.yaml file in the project which starts a pod claiming a persistent volume in order to create a SUCCESS file inside it.
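The test pod is essentially a busybox container which mounts a volume backed by that claim and touches a SUCCESS file on it; its structure is roughly the following (image and claim name are taken from the sketch above and could differ in the real test-pod.yaml file) :

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  restartPolicy: "Never"
  containers:
    - name: test-pod
      image: busybox                  # any minimal image with a shell works
      command: ["/bin/sh"]
      args: ["-c", "touch /mnt/SUCCESS && exit 0 || exit 1"]
      volumeMounts:
        - name: hostpath-volume
          mountPath: "/mnt"
  volumes:
    - name: hostpath-volume
      persistentVolumeClaim:
        claimName: hostpath-pvc       # the claim created from claim.yaml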

After starting the pod :

oc create -f test-pod.yaml

we should see a SUCCESS file inside a child directory with a very long name inside the root /tmp/hostpath-provisioner.

ls /tmp/hostpath-provisioner/pvc-1c565a55-1935-11e7-b98c-54ee758f9350/
SUCCESS

It means that the provisioner has handled the claim request in the correct way, providing a volume to the test-pod in order to write the file.