<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.3.2">Jekyll</generator><link href="https://blog.kstaykov.eu/feed.xml" rel="self" type="application/atom+xml" /><link href="https://blog.kstaykov.eu/" rel="alternate" type="text/html" /><updated>2023-03-13T22:28:40+02:00</updated><id>https://blog.kstaykov.eu/feed.xml</id><title type="html">Kalin’s Blog</title><subtitle>This is my personal blog.</subtitle><author><name>Kalin Staykov</name><email>k.t.staykov@gmail.com</email><uri>http://www.kstaykov.eu</uri></author><entry><title type="html">Nostr - microblogging for the free</title><link href="https://blog.kstaykov.eu/dev/nostr-microblogging-for-the-free/" rel="alternate" type="text/html" title="Nostr - microblogging for the free" /><published>2023-03-13T21:00:00+02:00</published><updated>2023-03-13T21:00:00+02:00</updated><id>https://blog.kstaykov.eu/dev/nostr-microblogging-for-the-free</id><content type="html" xml:base="https://blog.kstaykov.eu/dev/nostr-microblogging-for-the-free/"><![CDATA[<p>I have a love/hate relationship with Twitter. I love it for the way I can follow people I find interesting; I like glancing at new updates and generally following topics I like. But I find it irritating that at the end of the day it’s just a platform that exploits my attention to make me look at ads.</p>

<p>I was looking for an alternative that is more open and free of corporate control, and I found Nostr. It has a public protocol, and all the implementations I’ve seen are open source. What got me interested is not just that it is decentralised but also how simple it is. It uses web sockets and messages of a given type. Relays exchange those messages among themselves, and users decide which relays to use. That simplicity is a good starting point because it makes Nostr easy to adopt and start using, even from a development perspective.</p>

<p>I noticed there is a nice Go library that can be used to implement a relay - <a href="https://github.com/fiatjaf/relayer">https://github.com/fiatjaf/relayer</a></p>

<p>I gave it a quick test and got to working proof-of-concept code within minutes. Here’s a fully working relay demo:</p>

<p><a href="https://github.com/fiatjaf/relayer/blob/master/basic/main.go">https://github.com/fiatjaf/relayer/blob/master/basic/main.go</a></p>

<p>But if you ever want to try it out with a Nostr client, keep in mind that web sockets, just like HTTP, can work with or without TLS. In other words:</p>

<ul>
  <li>Use <code class="language-plaintext highlighter-rouge">ws://</code> when you want an insecure connection, similar to <code class="language-plaintext highlighter-rouge">http://</code></li>
  <li>Use <code class="language-plaintext highlighter-rouge">wss://</code> when you want a secure (TLS/SSL) connection, similar to <code class="language-plaintext highlighter-rouge">https://</code></li>
</ul>

<p>Having to terminate TLS during development is tedious, but most clients should support <code class="language-plaintext highlighter-rouge">ws://</code>, so you can experiment with relays if you want.</p>

<p>Another interesting library used in the relayer is this one:</p>

<p><a href="https://github.com/nbd-wtf/go-nostr">https://github.com/nbd-wtf/go-nostr</a></p>

<p>That’s the most feature-rich Nostr library I’ve seen for Go. There are implementations in other languages too, so the ecosystem around Nostr seems to be growing. It is also complemented by the ability to send zaps - small payments in sats (satoshis), which are tiny fractions of a bitcoin. The satoshi (named after the creator of Bitcoin) is the smallest denomination of Bitcoin and is equal to 0.00000001 BTC. That is a pretty small value, and people often use it to zap posts made on Nostr, which creates a value-for-value system. If you like a post, you can zap it. This gives a few sats to the creator of the post, which is a great way for us to support one another.</p>

<p>What is more important is that we have building blocks that enable many use cases. We have:</p>

<ul>
  <li>Ability to connect with one another and share messages</li>
  <li>Ability to decentralise the system that spreads the messages so it is protected from censorship</li>
  <li>Ability to encrypt private messages</li>
  <li>Ability to integrate value transfer by means of Bitcoin Lightning transactions</li>
</ul>

<p>This can be the foundation for many use cases - shops, games, services, etc. I hope to see this system used for great new things.</p>]]></content><author><name>Kalin Staykov</name><email>k.t.staykov@gmail.com</email><uri>http://www.kstaykov.eu</uri></author><category term="Dev" /><summary type="html"><![CDATA[I have a love/hate relationship with Twitter. I love it for the way I can follow people I find interesting; I like glancing at new updates and generally following topics I like. But I find it irritating that at the end of the day it’s just a platform that exploits my attention to make me look at ads.]]></summary></entry><entry><title type="html">Linux Kernel - back to basics</title><link href="https://blog.kstaykov.eu/devops/back-to-basics-kernel/" rel="alternate" type="text/html" title="Linux Kernel - back to basics" /><published>2023-02-15T12:53:00+02:00</published><updated>2023-02-15T12:53:00+02:00</updated><id>https://blog.kstaykov.eu/devops/back-to-basics-kernel</id><content type="html" xml:base="https://blog.kstaykov.eu/devops/back-to-basics-kernel/"><![CDATA[<p>It’s remarkable how time flies. It doesn’t feel like so long ago that I was building my own Linux kernel on a PC with 8 MB of memory. But it was indeed a long time ago.</p>

<p>I have the rare occasion to do it again today for a project I’m working on. It’s funny how rare this is nowadays. We use it almost all the time, but mostly through Linux distributions - Ubuntu, Debian, Fedora and so on. But here I am, watching the build slowly go through <code class="language-plaintext highlighter-rouge">drivers/net/ethernet/...</code> again, and I feel nostalgic.</p>

<p>The other funny thing is that <code class="language-plaintext highlighter-rouge">make menuconfig</code> feels kind of new. It’s the exact same menu interface, but I had to read through the items because most of them I am either seeing for the first time or had forgotten about. So I went with the defaults and didn’t make any edits - yeah, I’m busy! No time to hack around :D</p>

<p>Another interesting thing I found was these commands I had to run:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>scripts/config --disable SYSTEM_TRUSTED_KEYS
scripts/config --disable SYSTEM_REVOCATION_KEYS
</code></pre></div></div>

<p>I don’t recall having to do that back in the old days. I guess this says a lot about how far we’ve come in terms of security. It’s a whole new world we live in now, but I’m glad I had the chance to experience the déjà vu moment of building my own kernel again.</p>]]></content><author><name>Kalin Staykov</name><email>k.t.staykov@gmail.com</email><uri>http://www.kstaykov.eu</uri></author><category term="DevOps" /><summary type="html"><![CDATA[It’s remarkable how time flies. It doesn’t feel like so long ago that I was building my own Linux kernel on a PC with 8 MB of memory. But it was indeed a long time ago.]]></summary></entry><entry><title type="html">Sensu - Getting Started</title><link href="https://blog.kstaykov.eu/devops/Sensu-getting-started/" rel="alternate" type="text/html" title="Sensu - Getting Started" /><published>2021-02-06T21:23:00+02:00</published><updated>2021-02-06T21:23:00+02:00</updated><id>https://blog.kstaykov.eu/devops/Sensu-getting-started</id><content type="html" xml:base="https://blog.kstaykov.eu/devops/Sensu-getting-started/"><![CDATA[<p>Monitoring is often a sore topic. It’s crucial to get it right and get a grasp of what’s going on with your infrastructure. The more details you have during a failure the quicker you’ll act and recover from it. There are many solutions out there like <code class="language-plaintext highlighter-rouge">Nagios</code>, <code class="language-plaintext highlighter-rouge">Shinken</code>, <code class="language-plaintext highlighter-rouge">Zabbix</code>, <code class="language-plaintext highlighter-rouge">Icinga</code>, to name just a few. And they are all great. Today I’ll show you <code class="language-plaintext highlighter-rouge">Sensu</code>, which is one of the options I really like. It was rewritten in Go and has features that in my opinion make it stand out:</p>

<ul>
  <li>Simple - two native binaries talking. One is an agent (host being monitored) and the other a backend (Sensu’s “mothership”).</li>
  <li>A powerful CLI called <code class="language-plaintext highlighter-rouge">sensuctl</code>.</li>
  <li>Feels like Kubernetes with its declarative configuration.</li>
  <li>Packs all the right stuff you need in a monitoring platform - from the check to the alert with great options to automate and remediate in between.</li>
  <li>Event driven - it’s so much more than just alerting about an abnormal condition, since every event can be handled in any way you like.</li>
  <li>Feels like a framework, not just a tool.</li>
  <li>Easy to extend via its powerful asset management.</li>
</ul>

<p>And those are just a few of its great traits. To get started, I would suggest walking through a simple Docker setup with one backend and one agent. According to the documentation at <a href="https://sensu.io/">https://sensu.io/</a>, that involves a few steps:</p>

<blockquote>
  <p>Start Sensu Backend</p>
</blockquote>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo docker network create sensu
$ sudo docker volume create sensu-backend-data
$ sudo docker run -d --rm --name sensu-backend \
  --network sensu -p 8080:8080 -p 3000:3000 \
  -v sensu-backend-data:/var/lib/sensu \
  sensu/sensu:6.2.5 sensu-backend start
11bbb27c3
$ sudo docker run -d --rm --network sensu -p :3030 \
  sensu/sensu:6.2.5 sensu-agent start \
  --backend-url ws://sensu-backend:8081 --deregister \
  --keepalive-interval=5 --keepalive-warning-timeout=10 --subscriptions linux
$ curl -s http://localhost:8080/version
{"etcd":{"etcdserver":"3.3.13","etcdcluster":"3.3.0"},"sensu_backend":"6.2.5"}
$ _
</code></pre></div></div>

<blockquote>
  <p>Install Sensu cli and connect to the backend</p>
</blockquote>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ curl -LO https://s3-us-west-2.amazonaws.com/sensu.io/sensu-go/6.2.5/sensu-go_6.2.5_darwin_amd64.tar.gz
$ sudo tar -xzf sensu-go_6.2.5_darwin_amd64.tar.gz -C /usr/local/bin/
$ sensuctl configure -n --url http://127.0.0.1:8080 \
  --username admin \
  --password 'P@ssw0rd!' \
  --namespace default
$ sensuctl cluster health
ID                NAME     ERROR   HEALTHY
8927110dc66458af  default          true
$ sensuctl cluster id
sensu cluster id: "227b26eb-26c4-46b2-bc7c-8c080b072e6b"
$ _
</code></pre></div></div>

<blockquote>
  <p>Connect an Agent and configure one simple check</p>
</blockquote>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sensuctl asset add sensu/monitoring-plugins:2.2.0-1
fetching bonsai asset: sensu/monitoring-plugins:2.2.0-1
added asset: sensu/monitoring-plugins:2.2.0-1
$ sensuctl check create ntp \
  --runtime-assets "sensu/monitoring-plugins" \
  --command "check_ntp_time -H time.nist.gov --warn 0.5 --critical 1.0" \
  --output-metric-format nagios_perfdata \
  --publish="true" --interval 30 --timeout 10 --subscriptions linux
$ sensuctl event list
ENTITY        CHECK      OUTPUT                                                                    STATUS  SILENCED   TIMESTAMP
a749e3a10d86  keepalive  Keepalive last sent from a749e3a10d86 at 2019-09-11 15:34:25 +0000 UTC    0       false      2019-09-11 08:34:25 -0700 PDT
a749e3a10d86  ntp        NTP OK: Offset -0.03375908732 secs|offset=-0.033759s;0.500000;1.000000;   0       false      2019-09-11 08:34:22 -0700 PDT
$ _
</code></pre></div></div>
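<p>For reference, the same <code class="language-plaintext highlighter-rouge">ntp</code> check can also be expressed as a declarative resource definition instead of CLI flags. This is a sketch from memory of Sensu’s YAML resource format - double-check the field names against what your backend actually stores:</p>

```yaml
type: CheckConfig
api_version: core/v2
metadata:
  name: ntp
  namespace: default
spec:
  command: check_ntp_time -H time.nist.gov --warn 0.5 --critical 1.0
  interval: 30
  timeout: 10
  publish: true
  subscriptions:
    - linux
  runtime_assets:
    - sensu/monitoring-plugins
  output_metric_format: nagios_perfdata
```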

<p>When you’re just starting out, it’s nice to follow those steps and quickly get the UI and CLI working for a sense of accomplishment. Sensu does make this process easy, as you can see from those few steps. Take your time and click through the UI, which will be available on port <code class="language-plaintext highlighter-rouge">:3000</code>. Also get a feel for its CLI interface:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>➜  ~ sensuctl
sensuctl controls Sensu instances

Usage:	sensuctl COMMAND

Flags:
      --api-url string             host URL of Sensu installation
      --cache-dir string           path to directory containing cache &amp; temporary files (default "/Users/kstaykov/Library/Caches/sensu/sensuctl")
      --config-dir string          path to directory containing configuration files (default "/Users/kstaykov/.config/sensu/sensuctl")
  -h, --help                       help for sensuctl
      --insecure-skip-tls-verify   skip TLS certificate verification (not recommended!)
      --namespace string           namespace in which we perform actions (default "default")
      --timeout duration           timeout when communicating with sensu backend (default 15s)
      --trusted-ca-file string     TLS CA certificate bundle in PEM format

Commands:
  completion           Output shell completion code for the specified shell (bash or zsh)
  configure            Initialize sensuctl configuration
  create               Create or replace resources from file or URL (path, file://, http[s]://), or STDIN otherwise.
  delete               Delete resources from file or STDIN
  describe-type        Print details about the supported API resources types
  dump                 Dump resource definitions to JSON or YAML
  edit                 Edit resources interactively
  env                  Display the commands to set up the environment used by sensuctl
  logout               Logout from sensuctl
  prune                Deletes resources that do not appear in the configs from file or URL (path, file://, http[s]://), or STDIN otherwise.
  version              Show the sensuctl version information

Management Commands:
  api-key              Manage apikeys
  asset                Manage assets
  auth                 Manage authentication drivers
  check                Manage checks
  cluster              Manage sensu cluster
  cluster-role         Manage cluster roles
  cluster-role-binding Manage cluster role bindings
  command              Manage sensuctl commands
  config               Modify sensuctl configuration
  entity               Manage entities
  event                Manage events
  filter               Manage filters
  handler              Manage handlers
  hook                 Manage hooks
  license              Manage enterprise license
  login                Authenticate sensuctl to Sensu using the provided arguments
  mutator              Manage mutators
  namespace            Manage namespaces
  role                 Manage roles
  role-binding         Manage role bindings
  secret               Manage secrets
  silenced             Manage silenced subscriptions and checks
  tessen               Manage tessen configuration
  user                 Manage users

Run 'sensuctl COMMAND --help' for more information on a command.
➜  ~
</code></pre></div></div>

<p>I won’t get into more details in this getting-started article, but do you remember when I said that it feels like Kubernetes? I’ll give you a small hint. Check out the <code class="language-plaintext highlighter-rouge">sensuctl dump all</code> command. This is how anything you configure can be dumped to declarative code that you can put in version control. You can also use <code class="language-plaintext highlighter-rouge">sensuctl create -f &lt;yaml file&gt;</code> to bring that configuration into play on the backend you have running. Don’t forget to <code class="language-plaintext highlighter-rouge">sensuctl login</code> first.</p>]]></content><author><name>Kalin Staykov</name><email>k.t.staykov@gmail.com</email><uri>http://www.kstaykov.eu</uri></author><category term="DevOps" /><summary type="html"><![CDATA[Monitoring is often a sore topic. It’s crucial to get it right and get a grasp of what’s going on with your infrastructure. The more details you have during a failure the quicker you’ll act and recover from it. There are many solutions out there like Nagios, Shinken, Zabbix, Icinga, to name just a few. And they are all great. Today I’ll show you Sensu, which is one of the options I really like. It was rewritten in Go and has features that in my opinion make it stand out:]]></summary></entry><entry><title type="html">Kubernetes taint - what is it and how to work with it?</title><link href="https://blog.kstaykov.eu/devops/Kubernetes-taint/" rel="alternate" type="text/html" title="Kubernetes taint - what is it and how to work with it?" /><published>2018-12-29T08:51:00+02:00</published><updated>2018-12-29T08:51:00+02:00</updated><id>https://blog.kstaykov.eu/devops/Kubernetes-taint</id><content type="html" xml:base="https://blog.kstaykov.eu/devops/Kubernetes-taint/"><![CDATA[<p>Taints and affinity control which pods are repelled by a node (taints) and which nodes a pod is attracted to (affinity).
That’s one of the great features of Kubernetes, but there is a catch. If you run a single-node cluster on your laptop (the way I like to do :)) you will often run into a common taint - the NoSchedule one. It’s set to prevent scheduling on the master node, and if you try to deploy some pods to play with (like Helm’s Tiller) you will probably hit this problem:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[root@phix ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-nl4hq               1/1     Running   0          11h
kube-system   coredns-86c58d9df4-wbg8x               1/1     Running   0          11h
kube-system   etcd-phix                              1/1     Running   0          11h
kube-system   kube-apiserver-phix                    1/1     Running   0          11h
kube-system   kube-controller-manager-phix           1/1     Running   1          11h
kube-system   kube-flannel-ds-amd64-jtkqn            1/1     Running   0          11h
kube-system   kube-proxy-fqg5b                       1/1     Running   0          11h
kube-system   kube-scheduler-phix                    1/1     Running   1          11h
kube-system   kubernetes-dashboard-57df4db6b-cptdn   1/1     Running   0          11h
kube-system   tiller-deploy-8485766469-pd9ss         0/1     Pending   0          89s
[root@phix ~]# kubectl -n kube-system describe pod tiller-deploy-8485766469-pd9ss
Name:               tiller-deploy-8485766469-pd9ss
Namespace:          kube-system
Priority:           0
PriorityClassName:  &lt;none&gt;
Node:               &lt;none&gt;
Labels:             app=helm
                    name=tiller
                    pod-template-hash=8485766469
Annotations:        &lt;none&gt;
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/tiller-deploy-8485766469
Containers:
  tiller:
    Image:       gcr.io/kubernetes-helm/tiller:v2.12.1
    Ports:       44134/TCP, 44135/TCP
    Host Ports:  0/TCP, 0/TCP
    Liveness:    http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:   http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from tiller-token-b65qd (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  tiller-token-b65qd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  tiller-token-b65qd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  &lt;none&gt;
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  104s (x2 over 104s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
[root@phix ~]#
</code></pre></div></div>

<p>The simple solution would be to remove this taint.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[root@phix ~]# kubectl get nodes -o json | jq .items[].spec.taints
[
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/master"
  }
]
[root@phix ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/phix untainted
[root@phix ~]# kubectl get nodes -o json | jq .items[].spec.taints
null
[root@phix ~]# 
</code></pre></div></div>

<p>Notice the minus sign at the end of the taint removal command.</p>

<p>A note for production: this is a bad idea in production. Normally, if you run a Kubernetes cluster you don’t have just the master node but also worker nodes. In that case it’s a great idea to keep the master node’s NoSchedule taint and repel pods trying to schedule on it. By design, the worker nodes should be the ones taking pods.</p>]]></content><author><name>Kalin Staykov</name><email>k.t.staykov@gmail.com</email><uri>http://www.kstaykov.eu</uri></author><category term="DevOps" /><summary type="html"><![CDATA[Taints and affinity control which pods are repelled by a node (taints) and which nodes a pod is attracted to (affinity). That’s one of the great features of Kubernetes, but there is a catch. If you run a single-node cluster on your laptop (the way I like to do :)) you will often run into a common taint - the NoSchedule one. It’s set to prevent scheduling on the master node, and if you try to deploy some pods to play with (like Helm’s Tiller) you will probably hit this problem:]]></summary></entry><entry><title type="html">How to deploy Django application using Chef</title><link href="https://blog.kstaykov.eu/devops/deploy-django-app-via-chef/" rel="alternate" type="text/html" title="How to deploy Django application using Chef" /><published>2018-12-20T14:51:00+02:00</published><updated>2018-12-20T14:51:00+02:00</updated><id>https://blog.kstaykov.eu/devops/deploy-django-app-via-chef</id><content type="html" xml:base="https://blog.kstaykov.eu/devops/deploy-django-app-via-chef/"><![CDATA[<p><a href="https://www.djangoproject.com/">Django</a> is a popular Python web framework. I’ve been asked a couple of times how Python-based applications can be automated, so here I’ll give an example.</p>

<p>We’ll be using <a href="https://www.chef.io/">Chef</a>. Before we get started, let’s go over the main items that make up Chef’s ecosystem.</p>

<ul>
  <li>Recipe - a set of steps to make something happen, just like a recipe for a cake. In our case, think of it as the recipe for how to build our application or install its dependencies.</li>
  <li>Cookbook - well, a collection of recipes, plus a few more things we’ll be playing with shortly.</li>
</ul>

<p>There are different commands like <code class="language-plaintext highlighter-rouge">knife</code> but we won’t discuss them in much detail. Now we’ll go over one cookbook used to deploy the following Django app:</p>

<p><a href="https://github.com/gothinkster/django-realworld-example-app">https://github.com/gothinkster/django-realworld-example-app</a></p>

<p>It’s a real-world example of a web application that I found not long ago. It has most of the things you would see in such an app - a database, a frontend, some backend.</p>

<p>All Chef code we are going to use to deploy this application is here:</p>

<p><a href="https://github.com/zinderic/django-realworld">https://github.com/zinderic/django-realworld</a></p>

<h1 id="prerequisites">Prerequisites</h1>

<ul>
  <li>Chef DK or Chef Workstation</li>
  <li>VirtualBox</li>
  <li>Vagrant</li>
</ul>

<h1 id="the-chef-cookbook-structure---recipes">The Chef Cookbook Structure - recipes</h1>

<p>The most important files we’ll be looking at are located in the <a href="https://github.com/zinderic/django-realworld/tree/master/recipes">recipes</a> folder.</p>

<p>Here are the recipes we have, in their order of execution:</p>
<ul>
  <li>install_packages - this one installs some package dependencies using apt-get</li>
</ul>

<p>We’ll be deploying to Ubuntu. It is of course possible to support multiple Linux families - Arch, RedHat, Debian - in the same cookbook, but that’s a more advanced setup we won’t cover here.</p>

<ul>
  <li>create_user - creates the django user and group</li>
  <li>pyenv - sets up a Python virtual environment</li>
  <li>app - deploys the application</li>
</ul>

<p>Here’s one example of a recipe:</p>

<figure class="highlight"><pre><code class="language-ruby" data-lang="ruby"><span class="c1">#</span>
<span class="c1"># Cookbook:: django-realworld</span>
<span class="c1"># Recipe:: install_packages</span>
<span class="c1">#</span>
<span class="c1"># Copyright:: 2018, The Authors, All Rights Reserved.</span>

<span class="n">execute</span> <span class="s2">"update-package-cache"</span> <span class="k">do</span>
    <span class="n">command</span> <span class="s2">"sudo apt-get update"</span>
    <span class="n">action</span> <span class="ss">:run</span>
<span class="k">end</span>
<span class="n">execute</span> <span class="s2">"install-build-essential"</span> <span class="k">do</span>
    <span class="n">command</span> <span class="s2">"sudo apt-get install -y build-essential checkinstall"</span>
    <span class="n">action</span> <span class="ss">:run</span>
<span class="k">end</span>
 <span class="n">execute</span> <span class="s2">"install-prereq"</span> <span class="k">do</span>
    <span class="n">command</span> <span class="s2">"sudo apt-get install -y libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev"</span>
    <span class="n">action</span> <span class="ss">:run</span>
<span class="k">end</span></code></pre></figure>

<p>Here we execute raw commands, but you would often use Chef’s built-in resources instead. Like here:</p>

<figure class="highlight"><pre><code class="language-ruby" data-lang="ruby"><span class="n">git</span> <span class="s1">'/home/django/.pyenv'</span> <span class="k">do</span>
    <span class="n">user</span> <span class="s1">'django'</span>
    <span class="n">group</span> <span class="s1">'django'</span>
    <span class="n">repository</span> <span class="s1">'https://github.com/pyenv/pyenv.git'</span>
    <span class="n">revision</span> <span class="s1">'master'</span>
    <span class="n">action</span> <span class="ss">:sync</span>
<span class="k">end</span></code></pre></figure>

<p>Notice that we’re using a resource called <code class="language-plaintext highlighter-rouge">git</code>, which ships with Chef and handles git-related work. If you want to learn more about the git resource, read the docs here:</p>

<p><a href="https://docs.chef.io/resource_git.html">https://docs.chef.io/resource_git.html</a></p>

<p>Every time you need to write something, check whether there is a pre-defined resource you can use and build things up from there. Of course, you can also use Ruby directly. I prefer pre-defined resources because they often save me a lot of time and I know those implementations are safe and tested. They are probably much better than what I would come up with myself, because my code would be used in just my cookbook while the pre-defined resources are used by the whole community.</p>

<h1 id="the-chef-cookbook-structure---tests">The Chef Cookbook Structure - tests</h1>

<p>Just like with software development it’s a good idea to write tests. Tests that cover the functionality of our cookbook are critical for the long-term success of our CI/CD pipelines. It will help us make future changes easier and feel more secure that we won’t break the deployment chain of events and impair our production releases.</p>

<p>Tests are located in their own <a href="https://github.com/zinderic/django-realworld/tree/master/test/integration/default">test directory</a> and in our case they cover each recipe one-to-one.</p>

<p>Here’s what a test looks like:</p>

<figure class="highlight"><pre><code class="language-ruby" data-lang="ruby"><span class="sx">%w(
    build-essential
    checkinstall
    libreadline-gplv2-dev
    libncursesw5-dev
    libssl-dev
    libsqlite3-dev
    tk-dev
    libgdbm-dev
    libc6-dev
    libbz2-dev
)</span><span class="p">.</span><span class="nf">each</span> <span class="k">do</span> <span class="o">|</span><span class="n">pkg</span><span class="o">|</span>
    <span class="n">describe</span> <span class="n">package</span><span class="p">(</span><span class="n">pkg</span><span class="p">)</span> <span class="k">do</span>
        <span class="n">it</span> <span class="p">{</span> <span class="n">should</span> <span class="n">be_installed</span> <span class="p">}</span>
    <span class="k">end</span>
<span class="k">end</span></code></pre></figure>

<p>Notice that the whole syntax is pure Ruby. The interesting part is the <code class="language-plaintext highlighter-rouge">describe</code> method. This particular test is written for InSpec, the verifier we configure in Kitchen below. For unit testing your recipes you can also look at ChefSpec. You can find it here:</p>

<p><a href="https://docs.chef.io/chefspec.html">https://docs.chef.io/chefspec.html</a></p>

<p>There is more to testing. You can mock all kinds of objects that you don’t normally have. Imagine, for example, that in production you need a MySQL database, or maybe Kafka or ZooKeeper. They provide some service to your application that you don’t need during testing. You can safely mock it and make your tests run independently and quickly, which is one of the most important things about tests.</p>

<p>Don’t over-engineer your recipes just so you can test them. If you put data in your database during a recipe, mock that data up in your test and expect it to be there while you are smoke testing the recipe. That way you’ll know that when you call the real database, your smoke test will indeed check this data and validate it the way you described.</p>

<p>OK, we have some recipes and tests. How can we actually see them run? That’s the fun part. Let’s go and check out Kitchen.</p>

<h1 id="testing-cookbooks-with-kitchen---configuration">Testing Cookbooks with Kitchen - configuration</h1>

<p>The first thing to do when you want to develop a Chef cookbook is to install Chef DK, or nowadays you can also take advantage of the great Chef Workstation. It has some advantages but it’s still in its early days. Both tools ship the <code class="language-plaintext highlighter-rouge">kitchen</code> command.</p>

<p>Here’s our Kitchen definition file called <a href="https://github.com/zinderic/django-realworld/blob/master/.kitchen.yml">.kitchen.yml</a>:</p>

<figure class="highlight"><pre><code class="language-ruby" data-lang="ruby"><span class="o">---</span>
<span class="ss">driver:
  name: </span><span class="n">vagrant</span>
  <span class="ss">customize:
    memory: </span><span class="mi">4096</span>
    <span class="ss">cpuexecutioncap: </span><span class="mi">100</span>

<span class="ss">provisioner:
  name: </span><span class="n">chef_zero</span>
  <span class="c1"># You may wish to disable always updating cookbooks in CI or other testing environments.</span>
  <span class="c1"># For example:</span>
  <span class="c1">#   always_update_cookbooks: &lt;%= !ENV['CI'] %&gt;</span>
  <span class="ss">always_update_cookbooks: </span><span class="kp">true</span>

<span class="ss">verifier:
  name: </span><span class="n">inspec</span>

<span class="ss">platforms:
  </span><span class="o">-</span> <span class="ss">name: </span><span class="n">ubuntu</span><span class="o">/</span><span class="n">xenial64</span>

<span class="ss">suites:
  </span><span class="o">-</span> <span class="ss">name: </span><span class="n">default</span>
    <span class="ss">run_list:
      </span><span class="o">-</span> <span class="n">recipe</span><span class="p">[</span><span class="n">django</span><span class="o">-</span><span class="n">realworld</span><span class="o">::</span><span class="n">install_packages</span><span class="p">]</span>
      <span class="o">-</span> <span class="n">recipe</span><span class="p">[</span><span class="n">django</span><span class="o">-</span><span class="n">realworld</span><span class="o">::</span><span class="n">create_user</span><span class="p">]</span>
      <span class="o">-</span> <span class="n">recipe</span><span class="p">[</span><span class="n">django</span><span class="o">-</span><span class="n">realworld</span><span class="o">::</span><span class="n">pyenv</span><span class="p">]</span>
      <span class="o">-</span> <span class="n">recipe</span><span class="p">[</span><span class="n">django</span><span class="o">-</span><span class="n">realworld</span><span class="o">::</span><span class="n">app</span><span class="p">]</span>
    <span class="c1"># verifier:</span>
    <span class="c1">#   inspec_tests:</span>
    <span class="c1">#     - test/integration/default</span>
    <span class="n">attributes</span><span class="p">:</span></code></pre></figure>

<p>It’s your door to quick and easy development, so let’s see what each section does.</p>

<ul>
  <li>Driver - the hypervisor Kitchen will use when it runs. In our case that’s VirtualBox via a tool called Vagrant.</li>
  <li>Provisioner - our automation tool; since we’re writing Chef cookbooks we’ll use chef_zero.</li>
  <li>Verifier - the testing facility, InSpec in our case.</li>
  <li>Platforms - which image we’ll use. As mentioned, we’ll use an Ubuntu image.</li>
  <li>Suites - how many different systems (VMs in our case) we’ll build to test with. This is also where we list every recipe we want to run.</li>
</ul>

<h1 id="testing-cookbooks-with-kitchen---execution">Testing Cookbooks with Kitchen - execution</h1>

<ul>
  <li><code class="language-plaintext highlighter-rouge">kitchen create</code> - creates the VM.</li>
  <li><code class="language-plaintext highlighter-rouge">kitchen converge</code> - runs the Chef recipes from our cookbook, as defined in the Kitchen configuration file.</li>
  <li><code class="language-plaintext highlighter-rouge">kitchen verify</code> - run tests.</li>
  <li><code class="language-plaintext highlighter-rouge">kitchen destroy</code> - destroys the VM.</li>
</ul>

<p>That’s a nice order of execution while you’re building things up. If you want to test quickly you can also do:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">kitchen test</code> - creates the VM, converges it, runs the tests and destroys it, all in one go.</li>
</ul>

<p>If you have a VM created via <code class="language-plaintext highlighter-rouge">kitchen create</code> you can also log in to the VM.</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">kitchen login &lt;name&gt;</code> - logs in to a suite’s VM. Since we have only one, we can just say <code class="language-plaintext highlighter-rouge">kitchen login</code> or, if we feel more expressive, <code class="language-plaintext highlighter-rouge">kitchen login default</code>.</li>
</ul>

<h1 id="summary">Summary</h1>

<p>In this article you saw how to:</p>

<ul>
  <li>Create Chef Cookbook recipes</li>
  <li>Create Chef Cookbook tests</li>
  <li>Use Kitchen to aid you while creating or testing your cookbooks</li>
</ul>]]></content><author><name>Kalin Staykov</name><email>k.t.staykov@gmail.com</email><uri>http://www.kstaykov.eu</uri></author><category term="DevOps" /><summary type="html"><![CDATA[Django is a popular Python web framework. I’ve been asked a couple of times how Python based applications can be automated so here I’ll give an example.]]></summary></entry><entry><title type="html">How to create admin user in Kubernetes to login to Dashboard</title><link href="https://blog.kstaykov.eu/devops/kubernetes-admin-user/" rel="alternate" type="text/html" title="How to create admin user in Kubernetes to login to Dashboard" /><published>2018-10-28T10:51:00+02:00</published><updated>2018-10-28T10:51:00+02:00</updated><id>https://blog.kstaykov.eu/devops/kubernetes-admin-user</id><content type="html" xml:base="https://blog.kstaykov.eu/devops/kubernetes-admin-user/"><![CDATA[<p>There are cheap Kubernetes clusters out there and nowadays people like to do some tests. In this short article I will show you how to create a simple admin user with complete access easily. I’ll also show you how to enjoy the Kubernetes Dashboard on a DigitalOcean (or any other) cluster.</p>

<p>I’ll assume that you have already created your cluster. Within DigitalOcean this is as simple as a click. If you don’t have access to DO already, you can use this referral link and get $100 of services for free:</p>

<p><a href="https://m.do.co/c/cc8f1a680e11">https://m.do.co/c/cc8f1a680e11</a></p>

<p>Before we start, let’s make things even easier by creating a simple alias. I called my alias “kube” and it passes the <code class="language-plaintext highlighter-rouge">--kubeconfig</code> option at all times. I’ll be using it throughout this article, so adjust your environment to your liking so you can follow along. Here’s my alias:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[kstaykov@manja ~]$ alias kube
alias kube='kubectl --kubeconfig=/home/kstaykov/Downloads/k8s-1-11-1-do-1-lon1-1540329911350-kubeconfig.yaml'
[kstaykov@manja ~]$
</code></pre></div></div>

<p>Now it’s time to set up your service account. Use this command:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kube create -n kube-system serviceaccount admin
</code></pre></div></div>
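If you prefer the declarative style, the imperative command above corresponds to applying a small manifest. A sketch of the equivalent (same name and namespace as in this article):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
```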

<p>Notice that I created my service account in the kube-system namespace. If you want to know which namespaces you have, you can list them using:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kube get namespaces
</code></pre></div></div>

<p>Now let’s apply a very permissive role binding to our cluster.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kube create clusterrolebinding permissive-binding \
  --clusterrole=cluster-admin \
  --user=admin \
  --user=kubelet \
  --group=system:serviceaccounts
</code></pre></div></div>
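For reference, that command corresponds roughly to this manifest; it is a sketch, with the subjects mirroring the command’s --user and --group flags:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: permissive-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: admin
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
```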

<p>Note that this binding will allow ALL service accounts to act as administrators. Bear that in mind and don’t use it for a production cluster. The point of this article is to set up a simple testing cluster.</p>

<p>Now it’s time to get the configuration of our user.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[kstaykov@manja ~]$ kube -n kube-system get serviceaccount admin -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-10-28T08:45:31Z
  name: admin
  namespace: kube-system
  resourceVersion: "463455"
  selfLink: /api/v1/namespaces/kube-system/serviceaccounts/admin
  uid: d3adfa7a-da8d-11e8-aeb9-622f6909f16e
secrets:
- name: admin-token-ndrwp
[kstaykov@manja ~]$ 
</code></pre></div></div>

<p>We can see that there is a secret here. Let’s grab it:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[kstaykov@manja ~]$ kube -n kube-system get secret admin-token-ndrwp -o yaml
apiVersion: v1
data:
  ca.crt: &lt;removed&gt;
  token: &lt;removed&gt;
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: admin
    kubernetes.io/service-account.uid: d3adfa7a-da8d-11e8-aeb9-622f6909f16e
  creationTimestamp: 2018-10-28T08:45:31Z
  name: admin-token-ndrwp
  namespace: kube-system
  resourceVersion: "463454"
  selfLink: /api/v1/namespaces/kube-system/secrets/admin-token-ndrwp
  uid: d3afa9e0-da8d-11e8-aeb9-622f6909f16e
type: kubernetes.io/service-account-token
[kstaykov@manja ~]$ 
</code></pre></div></div>

<p>I removed the ca.crt and token data, but you should be able to see some long strings there. Notice that the token is base64-encoded. Use a command such as this to decode it:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>echo "put-token-here" | base64 --decode
</code></pre></div></div>
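To see the decode step in isolation, here is a sketch using a made-up token value (a real service-account token is a much longer string); the commented one-liner combines the lookup and the decoding using the kube alias from above:

```shell
# One-liner alternative using the alias from this article:
#   kube -n kube-system get secret admin-token-ndrwp \
#     -o jsonpath="{.data.token}" | base64 --decode
# Decode demo with a made-up value (NOT a real token):
encoded="dGhpcy1pcy1ub3QtYS1yZWFsLXRva2Vu"
printf '%s\n' "$encoded" | base64 --decode   # -> this-is-not-a-real-token
```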

<p>Now you should have a different string, and that’s your true token. Keep it private as it grants complete access to your cluster! Time to use it to log in to the Dashboard. Open a proxy to the cluster:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kube proxy
</code></pre></div></div>

<p>This will open port 8001 on your machine, through which you can proxy to the cluster’s API. It’s a tunnel of sorts. Go to this URI:</p>

<p><a href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</a></p>

<p>Login using token authentication and use the token you decoded.</p>

<p>There you go! The Kubernetes Dashboard.</p>

<p><img src="/assets/images/kubernetes-dashboard.png" alt="kubernetes-dashboard" /></p>]]></content><author><name>Kalin Staykov</name><email>k.t.staykov@gmail.com</email><uri>http://www.kstaykov.eu</uri></author><category term="DevOps" /><summary type="html"><![CDATA[There are cheap Kubernetes clusters out there and nowadays people like to do some tests. In this short article I will show you how to create a simple admin user with complete access easily. I’ll also show you how to enjoy the Kubernetes Dashboard on a DigitalOcean (or any other) cluster.]]></summary></entry><entry><title type="html">Start Jenkins on DigitalOcean’s Kubernetes Service</title><link href="https://blog.kstaykov.eu/devops/jenkins-on-kubernetes-digitalocean/" rel="alternate" type="text/html" title="Start Jenkins on DigitalOcean’s Kubernetes Service" /><published>2018-10-21T17:46:00+03:00</published><updated>2018-10-21T17:46:00+03:00</updated><id>https://blog.kstaykov.eu/devops/jenkins-on-kubernetes-digitalocean</id><content type="html" xml:base="https://blog.kstaykov.eu/devops/jenkins-on-kubernetes-digitalocean/"><![CDATA[<p>DigitalOcean recently released to a set of users their new Kubernetes service which is really great. So, I decided to do yet another Jenkins over Kubernetes tutorial for you. It’s close to what I showed you previously but customized for DigitalOcean. You can grab an instance for as little as $5 per month. It’s the perfect testing ground - cheap and stable. If you don’t already have an account go grab one with this promotion code and get $100 of services on their entire platform:</p>

<p><a href="https://m.do.co/c/cc8f1a680e11">https://m.do.co/c/cc8f1a680e11</a></p>

<p>Did you get it? Good. Now let’s install Jenkins and create a ‘Hello, World!’ pipeline. We’ll need to initialize Helm first - a tool used for Kubernetes package management. Before that, download your kubeconfig from the DigitalOcean website; there is a button for it available as soon as you build a cluster. Once done you should be able to use this file to get information on the cluster nodes. I have just one node, so here’s my output:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[kstaykov@manja ~]$ kubectl --kubeconfig ~/Downloads/k8s-1-11-1-do-1-fra1-1540131360772-kubeconfig.yaml get nodes
NAME                  STATUS    ROLES     AGE       VERSION
determined-kare-t8i   Ready     &lt;none&gt;    34m       v1.11.1
[kstaykov@manja ~]$ 
</code></pre></div></div>

<p>My config file is called “k8s-1-11-1-do-1-fra1-1540131360772-kubeconfig.yaml” and it’s located in my Downloads folder. I’m using the Manjaro Linux distribution, but your experience should be similar whatever OS you use.</p>
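As an aside, one way to avoid repeating the long --kubeconfig flag on every command is exporting the KUBECONFIG environment variable, which both kubectl and helm honor; this is an optional convenience, not part of the original setup:

```shell
# Point kubectl/helm at the DigitalOcean cluster for this shell session
# (path taken from this article; adjust to your download location).
export KUBECONFIG="$HOME/Downloads/k8s-1-11-1-do-1-fra1-1540131360772-kubeconfig.yaml"
echo "$KUBECONFIG"
# From here on, plain `kubectl get nodes` or `helm init` targets the same cluster.
```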

<p>Before we can start with Helm we’ll need to tune the cluster a little bit. Run these commands with your own kubeconfig:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl --kubeconfig ~/Downloads/k8s-1-11-1-do-1-fra1-1540131360772-kubeconfig.yaml create serviceaccount --namespace kube-system tiller
kubectl --kubeconfig ~/Downloads/k8s-1-11-1-do-1-fra1-1540131360772-kubeconfig.yaml create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl --kubeconfig ~/Downloads/k8s-1-11-1-do-1-fra1-1540131360772-kubeconfig.yaml patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
</code></pre></div></div>

<p>This configures the service account permissions, and now we can initialize Helm.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>helm init --kubeconfig ~/Downloads/k8s-1-11-1-do-1-fra1-1540131360772-kubeconfig.yaml
</code></pre></div></div>

<p>Simple, eh? You just pass the same <code class="language-plaintext highlighter-rouge">--kubeconfig</code> option as with the kubectl command. Now let’s install Jenkins:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[kstaykov@manja Downloads]$ helm --kubeconfig ~/Downloads/k8s-1-11-1-do-1-fra1-1540131360772-kubeconfig.yaml install stable/jenkins
NAME:   wrinkled-seagull
LAST DEPLOYED: Sun Oct 21 17:36:22 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==&gt; v1/Secret
NAME                      TYPE    DATA  AGE
wrinkled-seagull-jenkins  Opaque  2     0s

==&gt; v1/ConfigMap
NAME                            DATA  AGE
wrinkled-seagull-jenkins        5     0s
wrinkled-seagull-jenkins-tests  1     0s

==&gt; v1/PersistentVolumeClaim
NAME                      STATUS   VOLUME            CAPACITY  ACCESS MODES  STORAGECLASS  AGE
wrinkled-seagull-jenkins  Pending  do-block-storage  0s

==&gt; v1/Service
NAME                            TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)         AGE
wrinkled-seagull-jenkins-agent  ClusterIP     10.245.125.104  &lt;none&gt;       50000/TCP       0s
wrinkled-seagull-jenkins        LoadBalancer  10.245.59.106   &lt;pending&gt;    8080:32690/TCP  0s

==&gt; v1/Deployment
NAME                      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
wrinkled-seagull-jenkins  1        1        1           0          0s

==&gt; v1/Pod(related)
NAME                                       READY  STATUS   RESTARTS  AGE
wrinkled-seagull-jenkins-68f6587f87-gb5p2  0/1    Pending  0         0s


NOTES:
1. Get your 'admin' user password by running:
  printf $(kubectl get secret --namespace default wrinkled-seagull-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        You can watch the status of by running 'kubectl get svc --namespace default -w wrinkled-seagull-jenkins'
  export SERVICE_IP=$(kubectl get svc --namespace default wrinkled-seagull-jenkins --template  "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
  echo http://$SERVICE_IP:8080/login

3. Login with the password from step 1 and the username: admin

For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine

[kstaykov@manja Downloads]$
</code></pre></div></div>

<p>Perfect! Our Jenkins is on its way to being deployed, and within minutes we should be able to see it. The ‘get all’ command lists things nicely for us:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[kstaykov@manja ~]$ kubectl --kubeconfig ~/Downloads/k8s-1-11-1-do-1-fra1-1540131360772-kubeconfig.yaml get all
NAME                                            READY     STATUS    RESTARTS   AGE
pod/wrinkled-seagull-jenkins-68f6587f87-gb5p2   1/1       Running   0          22m

NAME                                     TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)          AGE
service/kubernetes                       ClusterIP      10.245.0.1       &lt;none&gt;            443/TCP          39m
service/wrinkled-seagull-jenkins         LoadBalancer   10.245.59.106    104.248.103.227   8080:32690/TCP   22m
service/wrinkled-seagull-jenkins-agent   ClusterIP      10.245.125.104   &lt;none&gt;            50000/TCP        22m

NAME                                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/wrinkled-seagull-jenkins   1         1         1            1           22m

NAME                                                  DESIRED   CURRENT   READY     AGE
replicaset.apps/wrinkled-seagull-jenkins-68f6587f87   1         1         1         22m
[kstaykov@manja ~]$
</code></pre></div></div>

<p>Notice that we have a running pod, which means Jenkins is already available to us. Also notice the load balancer on external IP 104.248.103.227 and port 8080. That’s where we’ll find the login page. Your IP will of course be different, and before you try anything on mine, be aware that this cluster will be long gone by the time you read this.</p>

<p>Now we only need the password for the admin user and we’ll be on our way to creating our simple pipeline. Review your output from the installation and you’ll see there is a command left for you. Mine is this one:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[kstaykov@manja ~]$ kubectl --kubeconfig ~/Downloads/k8s-1-11-1-do-1-fra1-1540131360772-kubeconfig.yaml get secret --namespace default wrinkled-seagull-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode
FKnT59sZAs
[kstaykov@manja ~]$
</code></pre></div></div>

<p>Oh no, you saw my password! Hehe, as I said - long gone by then. Let’s log in and we’ll see the Jenkins page.</p>

<p><img src="/assets/images/jenkins-login-page.jpg" alt="jenkins-login-page" /></p>

<p>And here’s the execution of our very simple pipeline:</p>

<p><img src="/assets/images/jenkins-login-page-job-output.jpg" alt="jenkins-login-page-job-output" /></p>

<p>For more information on how this pipeline was built, see the previous tutorial here:</p>

<p><a href="https://blog.kstaykov.eu/devops/jenkins-on-kubernetes/">Run Jenkins Master on Kubernetes Cluster</a></p>]]></content><author><name>Kalin Staykov</name><email>k.t.staykov@gmail.com</email><uri>http://www.kstaykov.eu</uri></author><category term="DevOps" /><summary type="html"><![CDATA[DigitalOcean recently released to a set of users their new Kubernetes service which is really great. So, I decided to do yet another Jenkins over Kubernetes tutorial for you. It’s close to what I showed you previously but customized for DigitalOcean. You can grab an instance for as little as $5 per month. It’s the perfect testing ground - cheap and stable. If you don’t already have an account go grab one with this promotion code and get $100 of services on their entire platform:]]></summary></entry><entry><title type="html">Docker - using multistage build</title><link href="https://blog.kstaykov.eu/devops/docker-multistage-build/" rel="alternate" type="text/html" title="Docker - using multistage build" /><published>2018-06-30T10:59:00+03:00</published><updated>2018-06-30T10:59:00+03:00</updated><id>https://blog.kstaykov.eu/devops/docker-multistage-build</id><content type="html" xml:base="https://blog.kstaykov.eu/devops/docker-multistage-build/"><![CDATA[<p>Have you ever tried building code on Docker just to end up with a huge container? Yes? Me too.</p>

<p>I’ll show you the beauty of multistage builds that will enable you to get a result such as this one:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
kstaykov/webin      latest              22016f6268d9        12 minutes ago      10.8MB
golang              1.9.7               f9ff4369deb0        2 days ago          750MB
alpine              latest              3fd9065eaf02        5 months ago        4.15MB
$
</code></pre></div></div>

<p>Notice how big the Golang image is. I’m using it to build my simple Go app, but I’ll host the app on a small Alpine image which is only 4.15 MB. In the end my image is just 10.8 MB - the Alpine base plus my compiled Go binary.</p>

<p>We’ll be reviewing the code in this repo here: <a href="https://gitlab.com/kstaykov/webin">https://gitlab.com/kstaykov/webin</a></p>

<p>If you have a look at <strong>main.go</strong> you’ll see a very simple Go program that prints HTTP headers and some form info to the console for debugging purposes. I needed something like that to debug webhooks, so I wrote those few lines of code. The catch, however, is that I need this tool to be very small and easy to grab on dev machines that have Docker installed.</p>

<p>The straightforward approach would be to make one Dockerfile using the Golang image, host my code there and build it in place. That would work. Let’s do it.</p>

<figure class="highlight"><pre><code class="language-docker" data-lang="docker"><span class="k">FROM</span><span class="s"> golang:1.9.7</span>
<span class="k">WORKDIR</span><span class="s"> /app/</span>
<span class="k">COPY</span><span class="s"> main.go /app/</span>
<span class="k">RUN </span><span class="nv">GOOS</span><span class="o">=</span>linux <span class="nv">GOARCH</span><span class="o">=</span>amd64 <span class="nv">CGO_ENABLED</span><span class="o">=</span>0 <span class="nv">GOPATH</span><span class="o">=</span><span class="sb">`</span><span class="nb">pwd</span><span class="sb">`</span> go build <span class="nt">-o</span> webin
<span class="k">CMD</span><span class="s"> ["./app/webin"]</span></code></pre></figure>

<p>And away it goes.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ docker build -t kstaykov/webin:v1 .
Sending build context to Docker daemon  6.649MB
Step 1/5 : FROM golang:1.9.7
 ---&gt; f9ff4369deb0
Step 2/5 : WORKDIR /app/
Removing intermediate container 79f8edde1bee
 ---&gt; f4f165b3523f
Step 3/5 : COPY main.go /app/
 ---&gt; 29ea023add77
Step 4/5 : RUN GOOS=linux GOARCH=amd64 CGO_ENABLED=0 GOPATH=`pwd` go build -o webin
 ---&gt; Running in f6088791512f
Removing intermediate container f6088791512f
 ---&gt; e90fa77e9fcf
Step 5/5 : CMD ["./app/webin"]
 ---&gt; Running in 400db33ba75f
Removing intermediate container 400db33ba75f
 ---&gt; 830a41c49231
Successfully built 830a41c49231
Successfully tagged kstaykov/webin:v1
$
</code></pre></div></div>

<p>Yey, it works!</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
kstaykov/webin      v1                  830a41c49231        2 minutes ago       756MB
golang              1.9.7               f9ff4369deb0        2 days ago          750MB
alpine              latest              3fd9065eaf02        5 months ago        4.15MB
$
</code></pre></div></div>

<p>Oh, wait. The resulting image is 756 MB in size. That’s not nice - it has everything in there. When I was studying Docker I heard someone compare this to a car factory: what we just did is build a car but leave the whole factory attached to it. That won’t sell well, so let’s fix it.</p>

<p>We’ll tune our <strong>Dockerfile</strong> and build our multistage magic.</p>

<figure class="highlight"><pre><code class="language-docker" data-lang="docker"><span class="k">FROM</span><span class="w"> </span><span class="s">golang:1.9.7</span><span class="w"> </span><span class="k">as</span><span class="w"> </span><span class="s">builder</span>
<span class="k">WORKDIR</span><span class="s"> /app/</span>
<span class="k">COPY</span><span class="s"> main.go /app/</span>
<span class="k">RUN </span><span class="nv">GOOS</span><span class="o">=</span>linux <span class="nv">GOARCH</span><span class="o">=</span>amd64 <span class="nv">CGO_ENABLED</span><span class="o">=</span>0 <span class="nv">GOPATH</span><span class="o">=</span><span class="sb">`</span><span class="nb">pwd</span><span class="sb">`</span> go build <span class="nt">-o</span> webin

<span class="k">FROM</span><span class="s"> alpine:latest  </span>
<span class="k">RUN </span>apk <span class="nt">--no-cache</span> add ca-certificates
<span class="k">WORKDIR</span><span class="s"> /root/</span>
<span class="k">COPY</span><span class="s"> --from=builder /app/webin .</span>
<span class="k">CMD</span><span class="s"> ["./webin"]  </span></code></pre></figure>

<p>Beautiful. What we did is very simple. We start with the big Golang image and build the app there, naming that stage <strong>builder</strong> in this small pipeline. Further down we copy the binary <strong>/app/webin</strong> - the end result of the Go compilation - out of the builder stage. That’s how we build the Alpine-based container with just the binary instead of the whole Golang (factory? :P) toolchain.</p>

<p>Away that goes.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ docker build -t kstaykov/webin:latest .
Sending build context to Docker daemon  6.649MB
Step 1/9 : FROM golang:1.9.7 as builder
 ---&gt; f9ff4369deb0
Step 2/9 : WORKDIR /app/
Removing intermediate container 5a16bfd5407d
 ---&gt; d2e0b5d2a598
Step 3/9 : COPY main.go /app/
 ---&gt; 13ea9e625f06
Step 4/9 : RUN GOOS=linux GOARCH=amd64 CGO_ENABLED=0 GOPATH=`pwd` go build -o webin
 ---&gt; Running in 52d6e844b44e
Removing intermediate container 52d6e844b44e
 ---&gt; 284f90cd0b4e
Step 5/9 : FROM alpine:latest
 ---&gt; 3fd9065eaf02
Step 6/9 : RUN apk --no-cache add ca-certificates
 ---&gt; Running in 92499ce9fbc9
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/1) Installing ca-certificates (20171114-r0)
Executing busybox-1.27.2-r7.trigger
Executing ca-certificates-20171114-r0.trigger
OK: 5 MiB in 12 packages
Removing intermediate container 92499ce9fbc9
 ---&gt; f08ed1ee0649
Step 7/9 : WORKDIR /root/
Removing intermediate container 2bf26a80d6f9
 ---&gt; 4d8b3c4e0e4a
Step 8/9 : COPY --from=builder /app/webin .
 ---&gt; db3207ccbe20
Step 9/9 : CMD ["./webin"]
 ---&gt; Running in fa35ded63440
Removing intermediate container fa35ded63440
 ---&gt; 76887e697c9f
Successfully built 76887e697c9f
Successfully tagged kstaykov/webin:latest
$
</code></pre></div></div>

<p>Aaand the resulting image is… OK, you already saw that at the beginning so no surprise here.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
kstaykov/webin      latest              76887e697c9f        About a minute ago   10.8MB
&lt;none&gt;              &lt;none&gt;              284f90cd0b4e        2 minutes ago        756MB
golang              1.9.7               f9ff4369deb0        2 days ago           750MB
alpine              latest              3fd9065eaf02        5 months ago         4.15MB
$
</code></pre></div></div>

<p>I still keep the intermediate 756 MB image just to show the huge difference. That 10.8 MB image looks kind of sweet - it’s our whole app within a very small container. A microservice, if you will.</p>
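The same two-stage idea ports directly to other ecosystems. As a hedged illustration, here is what it might look like for a hypothetical Java service - the image tags, paths and jar name are illustrative assumptions, not from this article:

```docker
# Stage 1: build the jar inside a full Maven/JDK image
FROM maven:3-jdk-8 AS builder
WORKDIR /app
COPY . .
RUN mvn -q -DskipTests package

# Stage 2: run it on a much smaller JRE-only image
FROM openjdk:8-jre-alpine
WORKDIR /app
COPY --from=builder /app/target/app.jar .
CMD ["java", "-jar", "app.jar"]
```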

<p>In the Java world the builder stage would be a Maven or Gradle build, and the resulting image would run on a smaller JRE-based container. Sweet.</p>]]></content><author><name>Kalin Staykov</name><email>k.t.staykov@gmail.com</email><uri>http://www.kstaykov.eu</uri></author><category term="DevOps" /><summary type="html"><![CDATA[Have you ever tried building code on Docker just to end up with a huge container? Yes? Me too.]]></summary></entry><entry><title type="html">Run Jenkins Master on Kubernetes Cluster</title><link href="https://blog.kstaykov.eu/devops/jenkins-on-kubernetes/" rel="alternate" type="text/html" title="Run Jenkins Master on Kubernetes Cluster" /><published>2018-06-24T22:39:00+03:00</published><updated>2018-06-24T22:39:00+03:00</updated><id>https://blog.kstaykov.eu/devops/jenkins-on-kubernetes</id><content type="html" xml:base="https://blog.kstaykov.eu/devops/jenkins-on-kubernetes/"><![CDATA[<p>Such a lovely evening. It was a great sunny day near the Black Sea where I’m taking some time off with my family. Now it’s late enough to have a beer and… build a Jenkins master? Why not.</p>

<p>Today I’ll play with Kubernetes - one of my favorite toys lately. I don’t want to waste too much time building my cluster, so I’ll use the pre-built Docker package for macOS that bundles Kubernetes. It works great and it’s very simple to install.</p>

<p>Now, since I promised you an easy time this evening, let’s build our Jenkins master using Helm. That’s a kind of package manager for Kubernetes that you can download from GitHub. It’s basically one binary that uses the kubectl context to talk to the cluster.</p>

<p>You’ll need to initialize Tiller (a Kubernetes agent running in a pod on the cluster) and the Helm client itself. Do that by simply running “helm init”. This process might be a bit more involved on a remote cluster since you’ll need to authenticate first, but if you know how to set up a remote cluster that would be a piece of cake for you. So, let’s install Jenkins:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>helm install --name jenkins --namespace jenkins stable/jenkins
</code></pre></div></div>

<p>Here’s how output from this command looks like:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ helm install --name jenkins --namespace jenkins stable/jenkins
NAME:   jenkins
LAST DEPLOYED: Sun Jun 24 21:58:07 2018
NAMESPACE: jenkins
STATUS: DEPLOYED

RESOURCES:
==&gt; v1/Service
NAME           TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)         AGE
jenkins-agent  ClusterIP     10.99.56.118    &lt;none&gt;       50000/TCP       1s
jenkins        LoadBalancer  10.108.218.206  localhost    8080:31573/TCP  1s

==&gt; v1beta1/Deployment
NAME     DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
jenkins  1        1        1           0          1s

==&gt; v1/Pod(related)
NAME                     READY  STATUS   RESTARTS  AGE
jenkins-789554878-5jfkx  0/1    Pending  0         0s

==&gt; v1/Secret
NAME     TYPE    DATA  AGE
jenkins  Opaque  2     1s

==&gt; v1/ConfigMap
NAME           DATA  AGE
jenkins        4     1s
jenkins-tests  1     1s

==&gt; v1/PersistentVolumeClaim
NAME     STATUS   VOLUME    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
jenkins  Pending  hostpath  1s


NOTES:
1. Get your 'admin' user password by running:
  printf $(kubectl get secret --namespace jenkins jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        You can watch the status of by running 'kubectl get svc --namespace jenkins -w jenkins'
  export SERVICE_IP=$(kubectl get svc --namespace jenkins jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
  echo http://$SERVICE_IP:8080/login

3. Login with the password from step 1 and the username: admin

For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine

$
</code></pre></div></div>
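<p>If you come back to the cluster later, you can always check on the release again; this is a sketch — the revision number and timestamps will of course differ on your cluster:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>helm ls --namespace jenkins
helm status jenkins
</code></pre></div></div>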

<p>Easy. Once the pod is deployed you can go to http://localhost:8080 and log in with the username admin and the password you get from this command:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>printf $(kubectl get secret --namespace jenkins jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
</code></pre></div></div>

<p>Next I decided to build a very simple pipeline on Jenkins:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>node {
   echo 'Hello World'
}
</code></pre></div></div>

<p>And here’s its console output:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Started by user admin
[Pipeline] node
Still waiting to schedule task
Waiting for next available executor
Running on default-20hkl in /home/jenkins/workspace/test
[Pipeline] {
[Pipeline] echo
Hello World
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
</code></pre></div></div>

<p>All it does is print “Hello World”, without even setting up a stage. But when you run the job you’ll notice the beauty of this setup right away: there is no executor at first, but a pod gets created for you automatically, and as soon as the executor running on that pod comes up, the job runs.</p>
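<p>You can actually watch the agent pod appear and disappear while the job runs; the pod name is generated, so yours will differ from the <code class="language-plaintext highlighter-rouge">default-20hkl</code> seen in the console output above:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl get pods --namespace jenkins -w
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">-w</code> flag keeps watching, so you’ll see the agent pod go from Pending to Running and then get terminated once the job finishes.</p>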

<p>Now let’s look at some details of what’s going on. We installed Jenkins in a namespace called “jenkins”. Let’s see this namespace and all the resources in it.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kubectl get all --namespace jenkins
NAME                          READY     STATUS    RESTARTS   AGE
pod/jenkins-789554878-5jfkx   1/1       Running   0          1h

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/jenkins         LoadBalancer   10.108.218.206   localhost     8080:31573/TCP   1h
service/jenkins-agent   ClusterIP      10.99.56.118     &lt;none&gt;        50000/TCP        1h

NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/jenkins   1         1         1            1           1h

NAME                                DESIRED   CURRENT   READY     AGE
replicaset.apps/jenkins-789554878   1         1         1         1h
$
</code></pre></div></div>

<p>Notice that there is no executor pod at the moment. That’s because the executors don’t persist: they are created on demand from the Jenkins master and removed again afterwards. That’s a pretty good example of scaling your resources up and killing them once there is no need for them to stick around.</p>

<p>Now to recap:</p>

<ul>
  <li>Docker + Kubernetes on macOS was used as the initial cluster setup, which is very easy to install. You can get the same thing on Windows as well. On Linux you’ll need to do some configuration, but it’s pretty straightforward and there’s lots of documentation available to support you.</li>
  <li>We installed Jenkins using Helm. It’s easy to start with, but it will take some time to get used to once you want to configure all the plugins and settings. Helm uses charts, which are a way to describe packages. Tiller is Helm’s agent in the cluster, which we initialized earlier.</li>
  <li>Then we ran a simple pipeline that says “Hello World”, and Jenkins started another pod to run an executor so that our pipeline could be executed.</li>
</ul>
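<p>In fact, the whole setup above boils down to a handful of commands (all taken from the steps we just walked through):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># initialize Tiller and the Helm client
helm init

# install Jenkins from the stable chart into its own namespace
helm install --name jenkins --namespace jenkins stable/jenkins

# fetch the generated admin password
printf $(kubectl get secret --namespace jenkins jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
</code></pre></div></div>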

<p>What happens next is up to you. You can build a Jenkinsfile to support your app and very easily deploy its whole setup on the cluster. You can even build a complete CI/CD pipeline to promote your app through the different stages of its lifecycle.</p>

<p>That’s however a matter for another day…</p>]]></content><author><name>Kalin Staykov</name><email>k.t.staykov@gmail.com</email><uri>http://www.kstaykov.eu</uri></author><category term="DevOps" /><summary type="html"><![CDATA[Such a lovely evening. It was a great sunny day near the Black Sea where I’m taking some time off with my family. Now it’s late enough to have a beer and… build a Jenkins master? Why not.]]></summary></entry><entry><title type="html">Move to Netlify</title><link href="https://blog.kstaykov.eu/blog/move-to-netlify/" rel="alternate" type="text/html" title="Move to Netlify" /><published>2018-05-31T20:23:00+03:00</published><updated>2018-05-31T20:23:00+03:00</updated><id>https://blog.kstaykov.eu/blog/move-to-netlify</id><content type="html" xml:base="https://blog.kstaykov.eu/blog/move-to-netlify/"><![CDATA[<p>Time to say hi to my new site hosting - <a href="https://www.netlify.com/">netlify.com</a></p>

<p>I was using a web hosting service up until now, but I felt my shell scripting doing scp (because even rsync is not supported by the hosting) was way out of date. Making this blog a Docker container and running it on Kubernetes feels like overkill, and as soon as I saw Netlify with its simplistic interface I fell in love with it.</p>

<p>And now it’s time to say bye bye to the old way of syncing data to a vhost directory. Netlify is this blog’s new home.</p>

<p>There are lots of things that I like about Netlify. One of those things is definitely the free plans. I managed to deploy two separate apps for free. I don’t see many limits other than the dynamic content part which I don’t use anyway. I might try it in the future as there are some great things I can do with it.</p>

<p>My number one reason for turning to this service is its quick and easy configuration. With just a few clicks I:</p>

<ul>
  <li>Configured the deployment from my git repo</li>
  <li>Configured the domain I use and got the configuration for my DNS zone</li>
  <li>Switched DNS service to Netlify for one of my apps so that they take care of it all.</li>
</ul>

<p>I can even configure SSL just like that - fast and easy. In the busy world we live in today those little things that save you time are what matter the most. Quick is the new sexy.</p>]]></content><author><name>Kalin Staykov</name><email>k.t.staykov@gmail.com</email><uri>http://www.kstaykov.eu</uri></author><category term="Blog" /><summary type="html"><![CDATA[Time to say hi to my new site hosting - netlify.com]]></summary></entry></feed>