<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Benchmarking on YABOB</title><link>https://risson.space/tags/benchmarking/</link><description>Recent content in Benchmarking on YABOB</description><generator>Hugo -- gohugo.io</generator><managingEditor>marc.schmitt@risson.space (Marc 'risson' Schmitt)</managingEditor><webMaster>marc.schmitt@risson.space (Marc 'risson' Schmitt)</webMaster><copyright>risson — All rights reserved</copyright><lastBuildDate>Mon, 07 Dec 2020 15:29:00 +0100</lastBuildDate><atom:link href="https://risson.space/tags/benchmarking/index.xml" rel="self" type="application/rss+xml"/><item><title>Benchmarking ZFS cache and log devices</title><link>https://risson.space/2020/12/benchmarking-zfs-cache-and-log-devices/</link><pubDate>Mon, 07 Dec 2020 15:29:00 +0100</pubDate><author>marc.schmitt@risson.space (Marc 'risson' Schmitt)</author><guid>https://risson.space/2020/12/benchmarking-zfs-cache-and-log-devices/</guid><description>Pursuing the updates I am making to my personal infrastructure, I wanted to deploy a Kubernetes cluster. This was supposed to be the new way I would deploy services to my infrastructure, and as such, resources would be transferred to this cluster as I moved services to it. Unfortunately, as I started moving a couple of services over, it exploded in my hands: the disks could not keep up with the etcd instances, so the etcd leader gave up and the cluster was unable to elect a new leader, as all of the members were lagging behind.</description></item></channel></rss>