Eventually you will have multiple base cookbooks, and you may want to combine them into a single logical unit so that they can be tested together. Take, for example, a cookbook called role_my_company_website. This cookbook’s default recipe might look like the following:

include_recipe 'my_company_windows_base::default'

include_recipe 'my_company_audit::default'

include_recipe 'my_company_iis::default'

include_recipe 'my_company_website::default'

Then, in this cookbook’s metadata.rb, you would have hard version pins for each of the dependent cookbooks.
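A sketch of what that metadata.rb might contain (the cookbook names come from the recipe above; the version numbers are only placeholders):

name    'role_my_company_website'
version '1.0.0'

depends 'my_company_windows_base', '= 2.3.1'
depends 'my_company_audit',        '= 1.0.4'
depends 'my_company_iis',          '= 3.1.0'
depends 'my_company_website',      '= 5.2.0'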


By doing this you can now apply role_my_company_website to a node and test it as a cumulative collection of all its underlying cookbooks. Then, if all the dependent cookbooks have proper tests, you only have to worry about testing the output of role_my_company_website without having to test each of its underlying components.
This reduces the amount of repeated work and produces an artifact that is:
  • Easy to understand 
  • Version controlled 
  • Independently testable 
This leads to a cookbook that succinctly describes a particular node in your Chef-managed ecosystem. You can use this succinct description of node function to your advantage. For example, your load balancer cookbook could find all nodes that have recipe[role_my_company_website] in their run list and automatically add them to its backend server list, as sketched below.
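A rough sketch of that idea (the HAProxy template and its variables are illustrative, not from any particular cookbook):

# Find every node whose run list includes the role cookbook.
web_nodes = search(:node, 'run_list:recipe\[role_my_company_website\]')

# Feed their addresses into a load balancer configuration template.
template '/etc/haproxy/haproxy.cfg' do
  source 'haproxy.cfg.erb'
  variables(backend_ips: web_nodes.map { |n| n['ipaddress'] })
end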

The importance of role cookbooks is also seen when using the Chef Automate Workflow.

If base cookbooks are used for multiple projects, pinning should not be done on the base cookbook itself. Since cookbooks are pinned at the environment level, a base cookbook that is pinned in an environment and then updated will, in effect, alter every project using it in that environment (Acceptance, Union, Rehearsal, Delivered). To prevent Workflow from applying this pinning, comment out delivery-truck::provision in

.delivery/build-cookbook/provision.rb
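In a build cookbook generated for delivery-truck, that typically amounts to commenting out a single include_recipe line (a sketch; your provision recipe may contain other logic):

# .delivery/build-cookbook/provision.rb
# Commented out so Workflow does not pin base cookbook versions at the environment level:
# include_recipe 'delivery-truck::provision'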


If, in turn, we version pin only the role cookbook at the environment level, then, because the role cookbook is project specific, any changes made to it should not have an effect on any other project.



This does mean that, in order for a base cookbook update to reach a project, its version must be changed in the role cookbook. So for every underlying cookbook change, the role cookbook will need a version bump. This is a more manual process, but it protects projects from breaking when a single base cookbook changes.

This also has the added benefit of version controlling any version bumps we make in our environments for a given project’s nodes. Since the only version pins in an environment fall on the role cookbook, all other version changes should be controlled through the role cookbook’s metadata and Delivery CLI commands. These commits can be tied back to individual users and version changes, which will better stabilize the environments.

When base cookbooks are not project specific, the responsibility for version pinning in Workflow should sit with role cookbooks. Role cookbooks should be used to provision servers and to version pin their underlying cookbooks as changes move through the Union, Rehearsal, and Delivered stages of the Chef Automate Workflow, keeping version pinning separated by project.

Simply put, a wrapper cookbook is just a regular cookbook that includes recipes from other cookbooks. Common use cases for wrapper cookbooks include:
  • Modifying the behavior of a community cookbook from the Chef Supermarket
  • Bundling several base cookbooks into a single cookbook
  • Version controlling a node’s run list and attribute definitions

Writing a Wrapper Cookbook

To include another cookbook in your wrapper cookbook you must do a minimum of two things:
  • Add dependencies to your wrapper cookbook’s metadata.rb
  • Add an include_recipe line to your wrapper cookbook’s recipes/default.rb

Including Dependencies

Including dependencies is as simple as adding the following to your metadata.rb:

depends 'public_cookbook'

You can also optionally perform version pinning like so:

depends 'public_cookbook', '= 1.4.5'
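Other constraint operators are also accepted; for example, either of these (version numbers are placeholders):

depends 'public_cookbook', '~> 1.4'   # pessimistic pin: >= 1.4.0 and < 2.0.0
depends 'public_cookbook', '>= 1.4.5' # minimum version only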

For more information about version pinning see the metadata.rb page on the Chef Docs site.

Setting Attributes

Setting attributes in your wrapper cookbook is a common way to modify the behavior of the cookbook you are wrapping. Well-written community cookbooks support modifying their behavior in this manner and document their attributes in their README.md.
These attributes can be added in your wrapper cookbook’s attributes/default.rb and/or in your default recipe before your include_recipe line.
I decide where to place the attributes as follows: if the attributes are computed from other attributes or set via logic (e.g., case, if, unless), place them in recipes/default.rb; otherwise, place them in attributes/default.rb.
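A small sketch of that rule of thumb (the attribute names here are illustrative):

# attributes/default.rb -- static values
default['public_cookbook']['install_dir'] = 'D:\\apps'

# recipes/default.rb -- computed or conditional values, set before include_recipe
node.default['public_cookbook']['service_user'] =
  if node['platform_family'] == 'windows'
    'LocalSystem'
  else
    'app_svc'
  end

include_recipe 'public_cookbook::default'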

Completing the Wrap

In order to add the functionality from your wrapped cookbook, you will need to include that cookbook in your wrapper cookbook’s default recipe. This is usually done with the help of the include_recipe method, like so:

include_recipe 'public_cookbook::default'

Once you have completed this your cookbook is ready for use.

Sample Use Cases

For the examples below, let’s assume we want to use the IIS cookbook from the Chef Supermarket.

Creating the wrapper

By running the following chef command we can generate our wrapper cookbook:

chef generate cookbook my_company_iis 

From here we add the following to our metadata.rb

depends 'iis' 

Then we can add the necessary include_recipe line to our recipes/default.rb

include_recipe 'iis::default'

Doing the actions above will create a wrapper cookbook that will use the IIS cookbook to:
  • Install IIS
  • Ensure the w3svc is enabled and started
  • Serve the Default Web Site

Modifying Public Cookbook Behavior

The above example is great, but let’s assume that your company hosts its websites on D: instead of C:. We can change this by modifying the attributes that the IIS cookbook consumes.

To host websites out of D:, add the following to your wrapper cookbook’s attributes/default.rb:

default['iis']['pubroot']    = 'D:\\inetpub'

default['iis']['docroot']    = 'D:\\inetpub\\wwwroot'

default['iis']['log_dir']    = 'D:\\inetpub\\logs\\LogFiles'

default['iis']['cache_dir']  = 'D:\\inetpub\\temp'  
 
Adding this to your wrapper cookbook’s attributes file will modify the behavior of the IIS cookbook.

Application Cookbooks

By completing the above you have now created a base cookbook that will install IIS in the fashion that your company desires. Now we can expand on this by utilizing an additional wrapper cookbook.
Let’s say we did the same as above but also created a my_company_app cookbook and included our my_company_iis cookbook with a hard version pin. By doing this we give the developers of my_company_app the freedom to have IIS installed to company specifications without worrying about how things work behind the scenes.
This allows one team to focus on coding the logic that deploys their web application without also having to code the logic for installing IIS to company specifications.
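A sketch of what that layering might look like (the version number is a placeholder):

# my_company_app/metadata.rb
depends 'my_company_iis', '= 1.2.0'

# my_company_app/recipes/default.rb
include_recipe 'my_company_iis::default'
# ...followed by the resources that deploy the web application itself.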

Modifying the Resource Collection

Take, for example, a cookbook that lays down a file on the filesystem via a template, but whose template doesn’t suit your needs.
In your recipe you can use the edit_resource helper method provided by Chef’s Recipe DSL to modify their template resource to point to a template in your wrapper cookbook instead.
In practice it looks like this:

include_recipe 'bad_cookbook::default' 

edit_resource(:template, 'C:\\important\\template\\path.ini') do
  source 'my_beautiful_template.erb'
  cookbook 'my_awesome_wrapper'
end
 
Adding this to your wrapper cookbook’s default recipe would allow you to use their cookbook as intended with the exception that your template will be used and not theirs.




Chef is an automation platform that "turns infrastructure into code," allowing organizations to version control and deploy services and code to multiple servers in a repeatable fashion. Chef cookbooks are the fundamental unit of configuration and policy distribution on the Chef platform. A cookbook defines a system or application and contains everything that is required to support those components. A cookbook can contain the following elements:
  • Recipes that define the resources to use and the order in which to apply them. 
  • Attribute values
  • Files
  • Templates
  • Extensions to Chef including custom resources and libraries. 
This post will discuss the versioning of cookbooks using semantic versioning practices. Please note that this post is a conglomeration of several blog posts.

Cookbook Versioning

Use semantic versioning when numbering cookbooks. This versioning can be found in the metadata.rb file of the cookbook.
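For example, the version sits alongside the cookbook name in metadata.rb (values here reuse the earlier examples):

name    'my_company_iis'
version '1.4.5'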


  • Given a version number MAJOR.MINOR.PATCH, increment the: 
    • MAJOR version when you make incompatible API changes, 
    • MINOR version when you add functionality in a backwards-compatible manner, 
    • PATCH version when you make backwards compatible bug fixes 
  • Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
Only upload stable cookbooks from master.
Only upload unstable cookbooks to your own fork. Merge to master and bump the version when stable.
Never ever decrement the version of a cookbook!
  • Chef-client will always use the highest-numbered cookbook that is available after considering all constraints. If Chef Server knows about a cookbook with a higher number than the one you just uploaded, then your code is not going to get run. Do not add a version constraint in your test environment to work around this; it will definitely bite you later on. Your build system should fail the build if the cookbook version has not been incremented beyond the last uploaded cookbook. This matters even more if you're publishing to Supermarket.
Bug fixes not affecting the code increment the patch version, backwards compatible additions/changes increment the minor version, and backwards incompatible changes increment the major version.
This system is called "Semantic Versioning." Under this scheme, version numbers and the way they change convey meaning about the underlying code and what has been modified from one version to the next.
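One way to implement the build-system check mentioned above is a small pre-upload script. This is only a sketch, and it assumes knife is already configured against your Chef Server:

# check_version.rb -- fail the build if metadata.rb's version is not higher
# than every version already uploaded to the Chef Server.
require 'chef/cookbook/metadata'

metadata = Chef::Cookbook::Metadata.new
metadata.from_file('metadata.rb')
local = Gem::Version.new(metadata.version)

# `knife cookbook show NAME` prints the cookbook name followed by its uploaded versions.
uploaded = `knife cookbook show #{metadata.name}`.split.drop(1).map { |v| Gem::Version.new(v) }

if uploaded.any? && local <= uploaded.max
  abort "Bump metadata.rb: #{local} is not greater than the uploaded #{uploaded.max}"
end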

Cookbook Versioning Specifications

  1. A normal version number MUST take the form X.Y.Z where X, Y, and Z are non-negative integers, and MUST NOT contain leading zeroes. X is the major version, Y is the minor version, and Z is the patch version. Each element MUST increase numerically. For instance: 1.9.0 -> 1.10.0 -> 1.11.0. 
  2. Once a versioned package has been released, the contents of that version MUST NOT be modified. Any modifications MUST be released as a new version. 
  3. Major version zero (0.y.z) is for initial development. Anything may change at any time. This public cookbook should not be considered stable. 
  4. Version 1.0.0 defines the public cookbook. The way in which the version number is incremented after this release depends on how the cookbook changes. 
  5. Patch version Z (x.y.Z | x > 0) MUST be incremented if only backwards compatible bug fixes are introduced. A bug fix is defined as an internal change that fixes incorrect behavior. 
  6. Minor version Y (x.Y.z | x > 0) MUST be incremented if new, backwards compatible functionality is introduced to the public API. It MUST be incremented if any public API functionality is marked as deprecated. It MAY be incremented if substantial new functionality or improvements are introduced within the private code. It MAY include patch level changes. Patch version MUST be reset to 0 when minor version is incremented. 
  7. Major version X (X.y.z | X > 0) MUST be incremented if any backwards incompatible changes are introduced to the public API. It MAY include minor and patch level changes. Patch and minor version MUST be reset to 0 when major version is incremented. 
  8. Precedence refers to how versions are compared to each other when ordered. Precedence MUST be calculated by separating the version into major, minor, patch, and pre-release identifiers in that order (build metadata does not figure into precedence). Precedence is determined by the first difference when comparing each of these identifiers from left to right as follows: major, minor, and patch versions are always compared numerically. Example: 1.0.0 < 2.0.0 < 2.1.0 < 2.1.1. When major, minor, and patch are equal, a pre-release version has lower precedence than a normal version.

FAQ

How should I deal with revisions in the 0.y.z initial development phase?
  • The simplest thing to do is start your initial development release at 0.1.0 and then increment the minor version for each subsequent release. 
How do I know when to release to 1.0.0?
  • If your software is being used in production, it should probably already be 1.0.0. If you have a stable cookbook on which users have come to depend, you should be 1.0.0. If you’re worrying a lot about backwards compatibility, you should probably already be 1.0.0. 
If even the tiniest backwards incompatible changes to the public cookbook require a major version bump, won't I end up at version 42.0.0 very rapidly?
  • This is a question of responsible development and foresight. Incompatible changes should not be introduced lightly to software that has a lot of dependent code. The cost that must be incurred to upgrade can be significant. Having to bump major versions to release incompatible changes means you’ll think through the impact of your changes, and evaluate the cost/benefit ratio involved. 
What do I do if I accidentally release a backwards incompatible change as a minor version?
  • As soon as you realize that you’ve broken the Semantic Versioning spec, fix the problem and release a new minor version that corrects the problem and restores backwards compatibility. Even under this circumstance, it is unacceptable to modify versioned releases. If it’s appropriate, document the offending version and inform your users of the problem so that they are aware of the offending version. 
How should I handle deprecating functionality? 
  • Deprecating existing functionality is a normal part of software development and is often required to make forward progress. When you deprecate part of your public API, you should do two things: (1) update your documentation to let users know about the change, (2) issue a new minor release with the deprecation in place. Before you completely remove the functionality in a new major release there should be at least one minor release that contains the deprecation so that users can smoothly transition to the new API.


Part of what makes Chef tooling so powerful is its ability to test your product quickly and easily across a variety of platforms. Using recipes and Test Kitchen, a Chef user can call on a variety of drivers to push their cookbooks to EC2 instances, Docker containers, vSphere and Azure instances in a matter of moments. If a driver exists for the platform, Test Kitchen can be used for "local" deployment.

This article will talk about deploying infrastructure to EC2 instances through Test Kitchen. Note, however, that EC2 is just an example case to get one started. Several other drivers are available, including:

  • kitchen-all: A driver for everything, or “all the drivers in a single Ruby gem”.
  • kitchen-bluebox: A driver for Blue Box.
  • kitchen-cloudstack: A driver for CloudStack.
  • kitchen-digitalocean: A driver for DigitalOcean.
  • kitchen-docker: A driver for Docker.
  • kitchen-dsc: A driver for Windows PowerShell Desired State Configuration (DSC).
  • kitchen-ec2: A driver for Amazon EC2.
  • kitchen-fog: A driver for Fog, a Ruby gem for interacting with various cloud providers.
  • kitchen-google: A driver for Google Compute Engine.
  • kitchen-hyperv: A driver for Hyper-V Server.
  • kitchen-joyent: A driver for Joyent.
  • kitchen-linode: A driver for Linode.
  • kitchen-opennebula: A driver for OpenNebula.
  • kitchen-openstack: A driver for OpenStack.
  • kitchen-pester: A driver for Pester, a testing framework for Microsoft Windows.
  • kitchen-rackspace: A driver for Rackspace.
  • kitchen-terraform: A driver for Terraform.
  • kitchen-vagrant: A driver for Vagrant. The default driver packaged with the Chef development kit.

This list is pulled directly from Chef's Kitchen docs page. Other drivers can also be found in the open source community, including kitchen-cloudformation, which was used with some success on a recent project my company worked on. 

In order to begin using Test Kitchen you must install ChefDK. This will not be covered here, but I discussed these steps in a previous blog post here. The kitchen-ec2 driver is installed with ChefDK by default, so no further work is needed to set up the EC2 kitchen driver.

An AWS account will also be required to launch instances. 

REQUIREMENTS: 

  • AWS Account
  • ChefDK installed on your local computer 


First start by opening your Chef Development tools by double clicking on the link. 


Next, we will need to create a cookbook to run Test Kitchen on. We can do this by running the command chef generate cookbook [cookbook_name]. I will name this cookbook test_cookbook to keep it descriptive. 


When this command runs successfully you will see a number of things returned, including a section which says "Your cookbook is ready. Type 'cd [cookbook_name]' to enter it" 


Since actually designing these cookbooks is out of scope for now, all we are really interested in is the kitchen.yml file. This file gives our cookbook the information needed to spin up our instance through Test Kitchen. Using your favorite editor (we will be using Visual Studio Code for this demo, which can be downloaded here), open the new cookbook that you just created. 



As we can see in the explorer window, there are a great number of files and directories to choose from. We will discuss these in a later blog post. What we are most interested in now is the kitchen.yml file. 


The kitchen.yml contains a number of different components and provides a great deal of control when spinning up our instances. We are currently using the EC2 driver; its full documentation can be found here. Since this is just a guide to get us started, let's look at four specific sections: driver, transport, platforms, and suites. 






Let's walk through the components.

Driver 

name is the name of the driver that we are going to use to spin up our instance. In this case, as explained earlier, we will use the EC2 kitchen driver. This driver uses the aws-sdk gem to provision and destroy EC2 instances.

instance_type is the EC2 instance type (also known as size) to use. The default is t2.micro or t1.micro, depending on whether the image is HVM or paravirtual (paravirtual images are incompatible with t2.micro).

aws_ssh_key_id is the ID of the AWS key pair you want to use. The default will be read from the AWS_SSH_KEY_ID environment variable if set, or nil otherwise. If aws_ssh_key_id is specified, it must be one of the KeyName values shown by the AWS CLI: aws ec2 describe-key-pairs. Otherwise, if not specified, you must either have a user pre-provisioned on the AMI or provision the user using user_data. This all gets very technical, but luckily there are some good instructions on how to pull down these keys here.

security_group_ids is an array of EC2 security groups that will be applied to the instance. The default is ["default"].

Transport

ssh_key is the private key file for the AWS key pair you want to use. This allows Test Kitchen to SSH into your instance to run your recipe during the kitchen converge stage, and lets you connect with kitchen login.

Platforms

name specifies the image you want to run on your instance. 

Suites

suites is a collection of test suites, with each suite grouping defining an aspect of the cookbook to be tested. Each suite must specify a run_list.

name is the name of the suite that we are going to run. 

run_list is what we want run on the instance once it is up and running. 


All together our kitchen.yml will look something like this. 
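Reconstructed as a sketch from the description below (the key name, key path, and security group IDs are placeholders for your own values):

---
driver:
  name: ec2
  instance_type: t2.medium
  aws_ssh_key_id: my_key
  security_group_ids: ["sg-11111111", "sg-22222222"]

transport:
  ssh_key: ~/.ssh/my_key.pem

provisioner:
  name: chef_zero

platforms:
  - name: ubuntu-16.04

suites:
  - name: default
    run_list:
      - recipe[test_cookbook::default]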



This .yml says we want to provision an Ubuntu 16.04 server on a t2.medium instance. We will use the key my_key as our AWS SSH key to provision the instance; this key will also be used to SSH into the instance once it is provisioned. The instance will have two security groups attached (passed as an array, hence the []), and it will run the default recipe of our cookbook. 

Since our cookbook's default recipe is completely empty, this will be a relatively boring test, but we will run it anyway just to get a feel for what Test Kitchen does. 

First, we will launch the ChefDK tools again, if we have closed them, by going to the icon. 


Navigate to the root folder of our project. In this case it is test_cookbook; yours may differ. 



Next we will run berks install. This will download any cookbook dependencies that we might need to get our current cookbook to run. While we are not layering or wrapping any cookbooks currently, this is a good habit to get into so you don't run into problems in the future. NOTE: If you have already run berks install on a cookbook, run berks update from that point on; berks install is only for the initial run.
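For reference, the Berksfile that chef generate cookbook creates is typically just two lines: it points Berkshelf at the public Supermarket and at your cookbook's own metadata.rb:

# Berksfile
source 'https://supermarket.chef.io'

metadata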


Unfortunately, my connection was not being friendly when this was written, but berks did complete. Next we will actually run test-kitchen. From your command line run kitchen converge. This command will spin up the instance and then install your cookbook on top of it.


If everything is set up appropriately, you should start to see output in your command prompt.


After everything is deployed and set up, you should see the message "Kitchen is finished," letting you know that everything deployed successfully.


In order to log in to our instance we will run kitchen login.



Once in our instance we can make any changes necessary and verify our instance. Type exit to get back out.


Finally, to destroy the instance we will type kitchen destroy. 


These are just the very first steps of what can be done with Kitchen. Test Kitchen is a powerful tool for testing cookbooks locally before pushing to staging or production boxes. 


Recently, I have been working with Chef Automation. Chef, as pulled directly from their literature:

lets you manage ... all (servers) by turning infrastructure into code. Infrastructure described as code is flexible, versionable, human-readable, and testable. Whether your infrastructure is in the cloud, on-premises or in a hybrid environment, you can easily and quickly adapt to your business’s changing needs with Chef

As described in the pull quote above, the benefit of Chef is that everyone from infrastructure engineers to developers writes their product as code, which can be version controlled and deployed to both your internal (on-premises) and external (cloud) solutions.

It's a very powerful tool with huge ramifications for fledgling or aging organizations.

Before we get too deep into the woods on Chef, the philosophy of infrastructure as code, and all the tools Chef opens up to an organization, let's first touch on the installation of the Chef developer tools, which will henceforth be referred to as ChefDK.

Unfortunately, or fortunately, I work in a Windows-based shop, so the following instruction set is designed specifically with the Microsoft OS in mind. 

The following documentation reflects the state of the Chef tooling at the time of writing. Since Chef is at its core an open source solution, it is constantly changing with product needs.

Since I am in love with CLI commands and PowerShell, I avoid the installation guide provided by Chef on their website. While those instructions work fantastically well, I like to script the great majority of my installations so I can call upon them in the future.

With this in mind, I am a huge proponent of Chocolatey. Chocolatey easily manages all aspects of Windows software (installation, configuration, upgrade, and uninstallation) and makes scripting installs so much easier on Windows. To install Chocolatey:
  1. First, ensure that you are using an administrative powershell
  2. Run Get-ExecutionPolicy. If it returns Restricted, then run Set-ExecutionPolicy AllSigned or Set-ExecutionPolicy Bypass.
  3. Now run the following command
  4. Set-ExecutionPolicy Bypass; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
      
And that's it. After the script has completed its run, Chocolatey will have been installed on your local machine.

From here, installing ChefDK is as simple as finding the package on the Chocolatey site:


 From the top right hand corner of https://chocolatey.org/ select "Packages"


In the "Search Packages" box type Chef. 

We are looking for the Chef Development Kit, which is the first selection presented. As we can see in the image above, the Chocolatey command that we need to run to install this package is choco install chefdk.

From our PowerShell window, still running in administrator mode, type the above command. 


When prompted whether you wish for the package to be installed (you do, by the way; hit Y for yes), ChefDK will be installed for you through the Chocolatey tool. 

With the Chocolatey package installed, you can quickly upgrade your ChefDK installation in the future with choco upgrade chefdk, or easily remove the package with choco uninstall chefdk. 

