Blog

The Subtle Art of Selling Magic Beans

last updated on 2019-10-14

With the recent uproar over a specific Magic Quadrant (MQ) report being released, I just couldn't help myself.

In the technology space, some decision makers are believed to put a lot of weight on analyst reports. These reports cover different areas of technology and aim to advise on which products may or may not meet specific solution criteria. This is all well and good, but analysts, like all humans, are fallible. Being unbiased is hard, and comparing different solutions on their actual merits requires very in-depth knowledge of each product.

Read more

Why Enterprise Architecture Still Matters

last updated on 2019-08-28

A different style of post today. It's hard to strike the right balance between process and delivering results. I've been thinking about how I would explain what I do on a daily basis. I like the excitement of delivering results: another happy customer, a new tool, script, or solution design, doing whatever it takes to get the job done better next time. I don't particularly care for process, and sometimes it's hard to even justify taking the time to define one.

I think most people who have worked in IT for some time struggle to explain what they do, and even to keep track of what they did yesterday. But part of the job is ensuring that what we do is informed, documented, and easily repeatable. This means getting rid of as much busywork as possible. Ultimately the goal is that we, and the teams we work with, make time to learn and improve every day.

Enterprise Architecture (EA) exists to create building blocks, a foundation if you will, upon which to build a technology stack that directly supports business strategy. Traditionally, it feels like EA practices focus on the long term, and perhaps even that defining high-level processes and guidelines actually slows down business operations. After all, more process is usually bad, right?

Read more

LucidLink Docker Volume Plugin for Persistent Storage

last updated on 2019-01-22

At this point it is almost cliché: containers should be stateless, yet at the same time they need persistent storage available. There are quite a few ways to address this, and storage vendors all seem to have their own solutions. This usually means third-party storage, which in turn leads to additional management overhead, complexity, and cost.

What if you could just mount object storage into your containers and treat it as local disk? The developers at LucidLink recently made an alpha build with Docker Volume Plugin support available, allowing you to connect your containers to external S3-compatible storage with no additional work required.

The Docker plugin for LucidLink makes it really easy to mount LucidLink Filespaces in Docker containers. All a container has to do is request the volume by name, and no matter which host it moves to, it can still access the same data, since that data lives in your object storage bucket.
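
As a rough sketch of what that workflow looks like (the plugin name and driver alias below are placeholders, not confirmed names from the alpha build):

    # All names here are illustrative; check the LucidLink alpha documentation
    # for the real plugin name and driver alias.
    docker plugin install lucidlink/docker-volume-plugin

    # Create a named volume backed by a LucidLink Filespace
    docker volume create --driver lucidlink --name projectdata

    # Any container, on any host, can now request the volume by name
    docker run --rm -v projectdata:/data alpine ls /data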

Read more

LucidLink with Veeam and Zenko Object Storage

last updated on 2018-09-24

LucidLink is a storage startup that transforms a cloud object store into scale-out, streamable, sharable block storage. It's a log-structured distributed file system that can be mounted on Windows, Linux, or Mac. Marketing buzzwords such as 'cloud volumes' and 'data lakes' come to mind, but here built on top of low-cost, highly scalable object storage. LucidLink offers a metadata service, if you will, that provides the file system magic, and you bring your own object storage.

LucidLink could be used for all kinds of large data sets: for cross-cloud data consumption, or to let legacy applications consume object storage with performance as if it were locally attached. It provides a simple way to host network share data with large datasets of millions of files, and you can access your LucidLink namespace from anywhere. It is a true file system that consumes object storage, offering things like garbage collection, prefetching, and caching, with more advanced features to come.
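
For flavor, connecting a Filespace looks roughly like this. The command and flag names are assumptions from memory of the lucid CLI, so treat this as a sketch rather than verified syntax:

    # Hypothetical invocation: connect to a Filespace and mount it as a local drive.
    # Flag names are assumptions; check the LucidLink documentation for the real syntax.
    lucid link --fs myspace.domain --mount-point L:

    # Once mounted, anything that expects a local disk, such as a Veeam backup
    # repository, can simply point at L:\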

Read more

Veeam Policies from Tags or Attributes

last updated on 2018-07-14

Following on from the blog post on Veeam Backup as Code, several large customers use VMware Tags or Attributes to drive backup job creation. This matters because it reduces individual job maintenance and, in an ideal scenario, allows Veeam Backup & Replication to 'manage itself', with your backups 'automagically' appearing on disk.

This is a pretty cool concept that does not involve any user interface changes. Instead of manually creating jobs associated with specific VMware tags, we create a scheduled task that calls a PowerShell script (sketched below). This script dynamically creates jobs from templates and adds virtual machines based on their tags or attributes. Existing job object count and overall disk size feed into this calculation, so that no single job becomes 'too big' to manage individually. Of course, the reverse needs to be managed too: once a machine no longer requires the association, it is automatically removed from its job.
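
A heavily simplified sketch of the tag-driven flow is below. This is not the downloadable script itself; it assumes VMware PowerCLI and the Veeam PowerShell snap-in are installed, and the tag category and job naming convention are invented for illustration:

    # Simplified illustration of the approach, not the actual BR-UpdateJobByTag script.
    # The 'BackupPolicy' category and 'Template-*'/'Policy-*' names are placeholders.
    Add-PSSnapin VeeamPSSnapin
    Connect-VIServer -Server 'vcenter.example.com'   # placeholder vCenter

    foreach ($tag in Get-Tag -Category 'BackupPolicy') {
        # Each tag value is expected to have a similarly named template job
        $template = Get-VBRJob -Name "Template-$($tag.Name)"
        if (-not $template) { continue }

        # Clone the template into a working job if one does not exist yet
        $job = Get-VBRJob -Name "Policy-$($tag.Name)"
        if (-not $job) {
            $job = Copy-VBRJob -Job $template -Name "Policy-$($tag.Name)"
        }

        # Add every VM carrying the tag; the real script also weighs job object
        # count and size here, and removes VMs whose tag has been cleared
        foreach ($vm in Get-VM -Tag $tag) {
            $entity = Find-VBRViEntity -Name $vm.Name
            Add-VBRViJobObject -Job $job -Entities $entity | Out-Null
        }
    }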

There are two example scripts you can download for this: BR-UpdateJobByTag and BR-UpdateJobByAttribute, both of which have to run as a scheduled task with elevated rights. To update jobs by tag, a tag category is used, and each tag value should have a similarly named template job. For attributes, the attribute name is set, and each value is associated with a template.
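
Running either script as an elevated scheduled task can be set up along these lines; the path, task name, and schedule are placeholders:

    # Placeholder path and schedule; -RunLevel Highest provides the elevated
    # rights mentioned above.
    $action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
        -Argument '-File C:\Scripts\BR-UpdateJobByTag.ps1'
    $trigger = New-ScheduledTaskTrigger -Daily -At 6am
    Register-ScheduledTask -TaskName 'Veeam-UpdateJobByTag' `
        -Action $action -Trigger $trigger -RunLevel Highest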

There are additional things that can be done around scheduling, and understandably this is not a user-interface-driven approach, but feedback is always welcome.