Month: April 2017

Being a “remotee” in Red Hat … one year later

When I started working at Red Hat last year (on March 1st), all my friends and relatives asked me a lot of questions about my new job … from home!

“What are your working hours?”, “How does your manager verify that you are working?”, “How do you share artifacts with your colleagues?”, “What’s your daily life like without being in touch with your colleagues?” …

I understand that for people who don’t know what being a “remotee” means, it’s quite difficult to imagine this way of spending your time at home but … working.

Other people, just kidding, say “You are at home, you can do whatever you want” … but that’s absolutely not true! It’s exactly the opposite!

I can say that every day I chat with my colleagues, in writing or by voice depending on the stuff we have to discuss. Once a week I have a video call for syncing on the work each team member has done during the previous week. Sometimes we have the chance to meet in person at conferences or on business trips (it was great for me to be in Boston in January for the F2F meeting of the whole “messaging” team).

Even if I don’t meet my colleagues in person every day, I have started to feel a real friendship with them. I think that if we lived in the same place, I would do a lot of activities with them outside of the office. Of course, it’s a matter of how people are … and I was very lucky with my colleagues. In any case, having such a feeling means that you are comfortable in the team and with your “remote” colleagues.

We are an open source company, so all my artifacts are available online (on GitHub), but we also share documents about the stuff we are developing in order to have a place for gathering feedback and comments.

After one year, I have these main points to share with you:

  • You have to be your own manager, and it’s not always simple.
  • You need a separation between being at “home” and being at the “home office”.
  • You have flexibility in your working hours, but my preference is to work a full day as if I were in a “real” office.
  • Your manager trusts you … in a dispersed team, trust is one of the main aspects.
  • You need to be passionate about your job … you have to love it … and I’m lucky on that! 🙂

Quite often, working from home means working more, which is exactly the opposite of what a lot of people think. You are right there at the “office” (just a few seconds from the bed), you are right there at “home” (just a few steps from your desk) … but if you are passionate about what you do … it’s not a problem … at least for me 🙂

Of course, there are some perks to being a “remotee”:

  • I have more free time in the early morning and late afternoon because I avoid wasting time in traffic jams! Now I’m a runner who starts his day at 6:00 AM with a workout, and a father who can play with his children just “one minute” after finishing work.
  • I have time to take my son to school and pick him up.
  • Last year my daughter was born, and since July I have been able to see her every hour of every day and watch how she is growing.
  • When I have a break during the day I can talk with my wife or play a little bit with my children.
  • There are few distractions because when you are at your desk … you are alone 🙂 You can be more focused on the problem you are trying to solve.
  • Two additional perks are … having a good Neapolitan coffee at lunch and watching a “The Big Bang Theory” episode after lunch 🙂

I think that in such a working environment you need two main things: being passionate and being professional.

This short post came to my mind after reading a blog post series written by a Red Hatter over the last few days, explaining how it’s possible to work in a “dispersed” team; I think you should read it to better understand how well “dispersed” teams work here at Red Hat.

So with this … I hope I have answered all the people and their questions! 🙂

And now … now I’m ready … ready for the Red Hat Summit, where I’ll meet some of my colleagues and other Red Hatters from all around the world in person!

“Hostpath” based volumes dynamically provisioned on OpenShift

Storage is one of the critical pieces in a Kubernetes/OpenShift deployment for applications that need to store persistent data; a good example is “stateful” applications deployed using Stateful Sets (previously known as Pet Sets).

In order to do that, one or more persistent volumes are traditionally provisioned manually by the cluster admin, and applications get read/write access to them through persistent volume claims. Starting from the 1.2 release (as alpha), Kubernetes offers a dynamic provisioning feature that avoids pre-provisioning by the cluster admin, auto-provisioning persistent volumes when users request them. In the current 1.6 release, this feature is considered stable (you can read more about that at the following link).
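To make the contrast concrete, here is a minimal sketch of the “manual” approach (the names, size and path are purely illustrative): the cluster admin defines a PersistentVolume backed by a host directory, and an application binds to it through a PersistentVolumeClaim.

# illustrative only: a pre-provisioned hostPath volume defined by the cluster admin
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/manual-pv
---
# illustrative only: the claim an application uses to get access to a volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manual-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi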

As described in the above link, there is a provisioner which is able to provision persistent volumes as requested by users through a specified storage class. In general, each cloud provider (Amazon Web Services, Microsoft Azure, Google Cloud Platform, …) offers some default provisioners, but for a local deployment on a single-node cluster (i.e. for development purposes) there is no default provisioner for using a “hostpath” (providing a persistent volume through a local directory on the host).

The following project (in the “Kubernetes incubator”) provides a library for developing a custom external provisioner, and one of its examples is exactly that: a provisioner using a local directory on the host for persistent volumes, the hostpath-provisioner.

In this article, I’ll explain the steps needed to get the “hostpath provisioner” working on an OpenShift cluster and what I have learned during this journey. My intention is to provide a single guide gathering information from various sources, such as the official repository.

Installing Golang

First of all, I didn’t have the Go language on my Fedora 24 machine, and the first thing to know is that version 1.7 (or above) is required, because the provisioner relies on the “context” package (added in the 1.7 release). I started by installing the default Go version provided by the Fedora 24 repositories (1.6.5), but I got the following error when trying to build the provisioner:

vendor/k8s.io/client-go/rest/request.go:21:2: cannot find package "context" in any of:
 /home/ppatiern/go/src/hostpath-provisioner/vendor/context (vendor tree)
 /usr/lib/golang/src/context (from $GOROOT)
 /home/ppatiern/go/src/context (from $GOPATH)

In order to install Go 1.7 manually, after downloading the tar file from the website, you can extract it in the following way:

tar -zxvf go1.7.5.linux-amd64.tar.gz -C /usr/local

After that, two main environment variables need to be set for the Go compiler and runtime to work properly.

  • GOROOT: the directory where Go has just been installed (i.e. /usr/local/go)
  • GOPATH: the directory containing the Go workspace (where we need to create two other directories, src and bin)

By modifying the .bashrc (or .bash_profile) file, we can export these environment variables.

export GOPATH=$HOME/go
PATH=$PATH:$GOPATH/bin
export GOROOT=/usr/local/go
PATH=$PATH:$GOROOT/bin

Having $GOPATH/bin in the PATH is needed, as we’ll see in the next step.
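A quick way to verify that the new toolchain is the one actually picked up (the version shown is just the one from the tarball above; yours may differ):

source ~/.bashrc
go version
# expected output, something like: go version go1.7.5 linux/amd64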

Installing Glide

The provisioner project we want to build has some Go dependencies, and Glide is used as the dependency manager.

It can be installed in the following way:

curl https://glide.sh/get | sh

This command downloads the needed files and builds the Glide binary, copying it into the $GOPATH/bin directory (which is why that directory needs to be in the PATH, as already done, in order to use glide on the command line).
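As a quick sanity check that the binary is reachable from the PATH:

which glide   # should point into $GOPATH/bin
glide --version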

Building the “hostpath-provisioner”

First of all, we need to clone the GitHub repository from here and then launch the make command from the docs/demo/hostpath-provisioner directory; the full sequence is sketched after the list below.

The Makefile has the following steps:

  • using Glide in order to download all the needed dependencies.
  • compiling the hostpath-provisioner application.
  • building a Docker image which contains the above application.
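Putting these steps together, the whole build boils down to something like the following (I’m assuming the kubernetes-incubator/external-storage repository here; in any case, use the repository linked above):

# assumed repository; use the GitHub repository linked above
git clone https://github.com/kubernetes-incubator/external-storage.git
cd external-storage/docs/demo/hostpath-provisioner
make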

This means that the provisioner is meant to be deployed in the cluster in order to provide the dynamic provisioning feature to the other pods/containers which need persistent volumes created dynamically.

Deploying the “hostpath-provisioner”

This provisioner uses a directory on the host for persistent volumes. The name of the root folder is hardcoded in the implementation and it is /tmp/hostpath-provisioner. Every time an application claims a persistent volume, a new child directory is created under this one.

This root folder needs to be created with full read and write access:

mkdir -p /tmp/hostpath-provisioner
chmod 777 /tmp/hostpath-provisioner

In order to run the “hostpath-provisioner” in a cluster with RBAC (Role-Based Access Control) enabled, or on OpenShift, you must authorize the provisioner.

First of all, create a ServiceAccount resource, described in the following way:

apiVersion: v1
kind: ServiceAccount
metadata:
 name: hostpath-provisioner

then a ClusterRole:

kind: ClusterRole
apiVersion: v1
metadata:
  name: hostpath-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]

This ClusterRole is needed because the controller requires authorization to perform the above API calls (i.e. listing, watching, creating and deleting persistent volumes, and so on).

Let’s create a sample project for that, save the above resources in two different files (i.e. serviceaccount.yaml and openshift-clusterrole.yaml) and finally create these resources.

oc new-project test-provisioner
oc create -f serviceaccount.yaml
oc create -f openshift-clusterrole.yaml

Finally, we need to grant these authorizations in the following way:

oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:test-provisioner:hostpath-provisioner
oc adm policy add-cluster-role-to-user hostpath-provisioner-runner system:serviceaccount:test-provisioner:hostpath-provisioner

The “hostpath-provisioner” example provides a pod.yaml file which describes the Pod to deploy for running the provisioner in the cluster. Before creating the Pod, we need to modify this file, setting the spec.serviceAccount property to the service account we just created, which in this case is “hostpath-provisioner” (as described in the serviceaccount.yaml file).

kind: Pod
apiVersion: v1
metadata:
  name: hostpath-provisioner
spec:
  containers:
    - name: hostpath-provisioner
      image: hostpath-provisioner:latest
      imagePullPolicy: "IfNotPresent"
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      volumeMounts:
        - name: pv-volume
          mountPath: /tmp/hostpath-provisioner
  serviceAccount: hostpath-provisioner
  volumes:
    - name: pv-volume
      hostPath:
        path: /tmp/hostpath-provisioner

Last steps … just creating the Pod and then the StorageClass and the PersistentVolumeClaim using the provided class.yaml and claim.yaml files.

oc create -f pod.yaml
oc create -f class.yaml
oc create -f claim.yaml
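For reference, class.yaml and claim.yaml look roughly like the following; I’m quoting them from memory, so treat the names, the provisioner identifier and the requested size as illustrative and refer to the files in the repository as the source of truth.

# a StorageClass pointing to the external provisioner (identifier is illustrative)
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-hostpath
provisioner: example.com/hostpath
---
# a claim requesting a volume from that storage class
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hostpath
  annotations:
    volume.beta.kubernetes.io/storage-class: example-hostpath
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi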

Finally we have a “hostpath-provisioner” deployed in the cluster that is ready to provision persistent volumes as requested by the other applications running in the same cluster.
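Before testing it, it’s worth checking that the provisioner pod is actually up and running (the output below is illustrative):

oc get pods
NAME                   READY     STATUS    RESTARTS   AGE
hostpath-provisioner   1/1       Running   0          1m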


See the provisioner working

To check that the provisioner is really working, there is a test-pod.yaml file in the project which starts a pod that claims a persistent volume in order to create a SUCCESS file inside it.

After starting the pod:

oc create -f test-pod.yaml

we should see a SUCCESS file inside a child directory with a very long generated name under the root /tmp/hostpath-provisioner directory.

ls /tmp/hostpath-provisioner/pvc-1c565a55-1935-11e7-b98c-54ee758f9350/
SUCCESS

This means that the provisioner handled the claim request correctly, providing a volume to the test-pod in which to write the file.
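We can also double-check from the cluster side that the claim has been satisfied by a dynamically created volume:

oc get pvc   # the claim should be in the Bound state
oc get pv    # a volume named pvc-<uid> should have been created by the provisioner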