My company's Chef Automate workflow pipelines were designed as part of a middleware infrastructure project. The project required three auto scaled instances, each sitting behind its own AWS ELB. The project enlisted the services of five teams, each with its own specialization. The Infrastructure team created the AWS CloudFormation Templates (CFTs), CFT cookbooks, VPCs, security groups and ELBs. The Middleware team created the cookbooks for the respective instances, including the underlying base cookbooks which will be used by our company for future projects. The QA team created and iterated upon smoke and functional testing for single instances and their communication with other instances. Finally, the Security team determined the compliance testing necessary for the instances and helped create the security testing that would stop pipelines should servers fall out of compliance.

When designing the infrastructure and procedures for my company's Chef Automate workflow pipelines, we came across a number of hurdles.

First, when provisioning instances via our CFT cookbook, the nodes are bootstrapped with Chef Client by a user data script. After Chef Client is installed via the script, the nodes run their first-boot.json, which contains the name of the cookbook for the current project pipeline. If the recipe fails during this initial bootstrapping process, however, the node will not be attached appropriately to the Chef Server.

This bootstrapping process is a necessary component for auto scaled instances. If new instances are booted as part of an Auto Scaling event, those nodes will require the bootstrap procedure to be run with the latest cookbooks. Therefore, testing of the cookbook needs to be independent of the CFT deployment steps.

In order to bypass this issue, my company developed a pipeline that calls on not only our internal CFT provisioning cookbook but also Test Kitchen for our acceptance nodes.

By using kitchen-ec2 we are able to converge and destroy instances in Acceptance to verify our cookbooks' viability before passing them to our user data script. This is made easier with the inclusion of the delivery-sugar cookbook. Delivery-sugar contains resources that allow for the creation, convergence and destruction of EC2, Azure, Docker and vSphere instances using the delivery_test_kitchen resource.

My company is currently calling on kitchen-ec2 for instance creation. kitchen-ec2 requires ALL of the following components to run successfully.

Test Kitchen Setup (Acceptance Stage Provisioning):

In order to enable this functionality, please perform the following prerequisite steps.

Add ALL of the following items to the appropriate data bag within your Chef Server:




You can convert the private key content to a JSON-compatible string with the following command. 

chef exec ruby -e 'p ARGF.read' automate_kitchen.pem >> automate_kitchen.json 

Since the private key should be secured, this data bag should be encrypted. In order to add an encrypted data bag to the Chef Server you must first have the access required to run knife commands against it. Once that permission is in place, run the following command.

knife data bag create delivery-secrets <ent>-<org>-<project> --secret-file encrypted_data_bag_secret

Where <ent> is the name of your enterprise, <org> is the name of your organization and <project> is the name of the pipeline you are creating.

In order to decrypt this data, the encrypted_data_bag_secret file used to encrypt the data bag must be added to your Chef build servers at the following location.

/etc/chef/


Once these components are deployed, customize your kitchen YAML file with all the required information needed by the kitchen-ec2 driver.

NOTE: This kitchen.yml file is the one found in your .delivery/build_cookbook, not the one found under your project cookbook.


Delivery-sugar will expose the following ENV variables for use by kitchen.

  • KITCHEN_INSTANCE_NAME - set to the <project>-<change-id> value provided by delivery-cli
  • KITCHEN_EC2_SSH_KEY_PATH - path to the SSH private key created from the delivery-secrets data bag item set up in the step above.
These variables may be used in your kitchen YAML as in the following example:
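A minimal .kitchen.ec2.yml sketch along these lines is shown below. The region, instance type, subnet, keypair, platform and run_list entries are placeholders for illustration only; substitute the values for your own VPC and project cookbook.

---
driver:
  name: ec2
  region: us-east-1                 # placeholder - use your own region
  instance_type: t2.micro           # placeholder
  subnet_id: subnet-xxxxxxxx        # placeholder - a subnet in your project VPC
  aws_ssh_key_id: automate_kitchen  # placeholder - the EC2 keypair matching the private key
  tags:
    Name: <%= ENV['KITCHEN_INSTANCE_NAME'] %>

transport:
  ssh_key: <%= ENV['KITCHEN_EC2_SSH_KEY_PATH'] %>

provisioner:
  name: chef_zero

platforms:
  - name: ubuntu-16.04              # placeholder platform

suites:
  - name: default
    run_list:
      - recipe[my_project::default] # placeholder - the cookbook this pipeline builds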



Once the prerequisites are in place you can use delivery_test_kitchen within your .delivery/build_cookbook/provision.rb to deploy instances through test kitchen.




Trigger a kitchen converge and destroy action using the EC2 driver, pointing it at the .kitchen.ec2.yml in the .delivery directory.
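A sketch of that resource call, modeled on the delivery-sugar examples. The property names (driver, yaml, repo_path, action) come from the delivery-sugar README, so verify them against the version you have installed.

delivery_test_kitchen 'acceptance_converge' do
  driver 'ec2'
  yaml '.kitchen.ec2.yml'
  repo_path "#{workflow_workspace_repo}/.delivery/build_cookbook/"
  action [:converge, :destroy]
end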

NOTE: When adding a repo_path my company chooses #{workflow_workspace_repo}/.delivery/build_cookbook/. This is a preference; the .yml file can sit wherever the user requires.



Trigger a kitchen create passing extra options for debugging.
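For example, something along these lines (the options property and the --log-level flag value are taken from the delivery-sugar and Test Kitchen documentation; treat them as assumptions for your versions):

delivery_test_kitchen 'debug_create' do
  driver 'ec2'
  yaml '.kitchen.ec2.yml'
  options '--log-level=debug'
  action :create
end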




Trigger a kitchen create extending the timeout to 20 minutes.
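A sketch, assuming the delivery_test_kitchen timeout property is expressed in seconds:

delivery_test_kitchen 'long_create' do
  driver 'ec2'
  yaml '.kitchen.ec2.yml'
  timeout 1200   # 20 minutes, assuming the property takes seconds
  action :create
end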

 


Since we only use kitchen on our acceptance nodes, my company adds the following logic to verify Test Kitchen is not used outside of the Acceptance stage (workflow_stage is a helper provided by delivery-sugar).
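A sketch of that guard, assuming workflow_stage returns the lowercase stage name:

if workflow_stage == 'acceptance'
  delivery_test_kitchen 'acceptance_converge' do
    driver 'ec2'
    yaml '.kitchen.ec2.yml'
    action [:converge, :destroy]
  end
end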




Version Pinning

The second issue we encountered in creating our workflow pipelines came in the version pinning of our environments.

If base cookbooks are shared by multiple projects, pinning should not be done on the base cookbook itself. Cookbooks are pinned at the environment level, so if a base cookbook is pinned in an environment and then updated, that update will in effect alter every project using it in that environment (Acceptance, Union, Rehearsal, Delivered). To prevent this pinning from taking place through Workflow, comment out the delivery-truck::provision include in .delivery/build_cookbook/provision.rb.


In turn, if we version pin only the role cookbook at the environment level, since it is project specific, any changes made to the role cookbook should not have an effect on any other project.
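Conceptually, each stage environment then ends up carrying a single pin. A minimal sketch of such an environment (the cookbook name and version are illustrative):

name 'union'
description 'Union stage environment - only the role cookbook is pinned'
cookbook_versions(
  'role_my_company_website' => '= 1.2.0'
)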



This does mean that in order for a base cookbook update to reach a project, its version must be changed in the role cookbook. So for every underlying cookbook change, the role cookbook will need a version bump. This is a more manual process, but it protects projects from breaking when a single base cookbook changes.

This also has the added benefit of version controlling any version bumps we make in our environments for a given project's nodes. Since the only version pins in an environment fall on the role cookbook, all other version changes are controlled through the role cookbook's metadata.rb and Delivery CLI commands. These commits can be tied back to individual users and version changes, which helps stabilize the environments.

If base cookbooks are not project specific, the role cookbooks should take the lead in Workflow. They should be used to provision servers and to version pin the underlying cookbooks when going through the Union, Rehearsal and Delivered stages of the Chef Automate Workflow, keeping each project's version pinning separate.

Setting up Metadata.rb, Provision.rb, Kitchen.yml and Berksfile in .delivery/build_cookbook

NOTE: Before adding the workflow provisioning steps to the build cookbook, please add the project cookbook to the Chef Server, either through Automate Workflow or through a knife command. If the project cookbook is not available on the first run of the pipeline, the run will fail when trying to download cookbooks.

With these two problems resolved, and explained, it is now time to set up the rest of our workflow pipeline.

We will start by modifying our Berksfile within .delivery/build_cookbook/. Since we will be calling on cookbooks that are currently stored on the Chef Server, we need to make sure the workflow pipeline can reach out to it to find cookbooks. We do this by adding the Chef Server source:
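A minimal .delivery/build_cookbook/Berksfile sketch with that source added (this assumes a Berkshelf version that supports the :chef_server source):

source 'https://supermarket.chef.io'
source :chef_server   # lets the build cookbook resolve cookbooks already on the Chef Server

metadata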


Next, we will modify our metadata.rb file. We need to make sure we are calling in delivery-sugar, delivery-truck, the current project cookbook for the pipeline and the cookbook we are using to provision our servers. 

NOTE: We only need to call the provisioning cookbook here if this is the role cookbook
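A sketch of the resulting .delivery/build_cookbook/metadata.rb ('my_project' and 'provision_aws' are placeholder names for the project and provisioning cookbooks):

name 'build_cookbook'
version '0.1.0'

depends 'delivery-sugar'
depends 'delivery-truck'
depends 'my_project'     # the project cookbook this pipeline builds
depends 'provision_aws'  # provisioning cookbook - only needed in a role cookbook pipeline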



We will also configure our kitchen.yml (which we have named here as kitchen.ec2.yml) as we described in the steps above. This file is used for our kitchen converge and destroy in our acceptance provisioning stage. 

NOTE: Do not forget to change the cookbook called in the kitchen.yml to reflect the cookbook this pipeline is building (see the run_list in the sketch below).
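For example, the suites section of the kitchen YAML should point at the pipeline's own cookbook ('my_project' is a placeholder):

suites:
  - name: default
    run_list:
      - recipe[my_project::default]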


Finally, we will modify our provision.rb file. Its contents depend on whether this is a role cookbook or a base/wrapper cookbook (see the Version Pinning section above for further explanation).

In a ROLE cookbook we will call upon the provisioning cookbook if we are in the Union, Rehearsal or Delivered stage. This check can be made using the delivery-sugar helper workflow_stage, which returns the stage the pipeline is currently running in.

We will also call on the delivery-truck::provision recipe to pin our environment.

NOTE: the delivery-truck::provision recipe is included AFTER the run of our provisioning cookbook. (See the section on version pinning for more information.)
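Putting that together, a sketch of a role cookbook's .delivery/build_cookbook/recipes/provision.rb ('provision_aws' is a placeholder, and workflow_stage is assumed to return the lowercase stage name):

# The Acceptance stage Test Kitchen guard shown earlier would also live in this recipe.
if %w(union rehearsal delivered).include?(workflow_stage)
  include_recipe 'provision_aws::default'
end

# Pin this role cookbook into the stage environment - included AFTER the provisioning run.
include_recipe 'delivery-truck::provision'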



If we are NOT IN A ROLE COOKBOOK, delivery-truck::provision will not be called. We will also not need to include the provisioning recipe in Union, Rehearsal or Delivered. To keep things simple, and to prevent us from having to make too many modifications to our code, we simply add a warning message in place of the provisioning include_recipe calls.
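A sketch of the equivalent provision.rb for a base or wrapper cookbook, with a warning in place of the provisioning include:

if %w(union rehearsal delivered).include?(workflow_stage)
  log 'skip_provisioning' do
    message 'Provisioning and environment pinning are handled by the role cookbook pipeline.'
    level :warn
  end
end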


Once these changes are saved we can version bump our project cookbook, either through the metadata.rb file or the delivery command, and run delivery review.

NOTE: this version bump is done in the PROJECT COOKBOOK not the build cookbook. 


This will push the cookbook into Automate and start the Chef Automate Workflow Pipeline.


Eventually you will have multiple base cookbooks, and you may want to combine them into a single logical unit so that they can be tested together. Take, for example, a cookbook called role_my_company_website. This cookbook’s default recipe might look like the following:

include_recipe 'my_company_windows_base::default'

include_recipe 'my_company_audit::default'

include_recipe 'my_company_iis::default'

include_recipe 'my_company_website::default'

Then in this cookbook’s metadata.rb you would have hard version pinnings for each of the dependent cookbooks.
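For example (the version numbers are illustrative):

name 'role_my_company_website'
version '1.0.0'

depends 'my_company_windows_base', '= 2.1.0'
depends 'my_company_audit',        '= 1.3.2'
depends 'my_company_iis',          '= 3.0.1'
depends 'my_company_website',      '= 4.2.0'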


By doing this you can now apply role_my_company_website to a node and test it as a cumulative collection of all its underlying cookbooks. Then, if all the dependent cookbooks have proper tests, you only have to worry about testing the output of role_my_company_website without having to test each of its underlying components.
This reduces the amount of repeated work and produces an artifact that is:
  • Easy to understand 
  • Version controlled 
  • Independently testable 
This leads to a cookbook that succinctly describes a particular node in your Chef managed ecosystem. You could use this succinct description of node function to your advantage. For example, your load balancer cookbook could find all nodes that have recipe[role_my_company_website] in their run_list and automatically add them to its backend server list.
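As a sketch, the load balancer recipe could do something like the following (the attribute name is hypothetical):

web_nodes = search(:node, 'recipes:role_my_company_website\:\:default')
node.default['my_company_lb']['backends'] = web_nodes.map { |n| n['ipaddress'] }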

The importance of role cookbooks is also seen when using Automate Workflow. As described in the Version Pinning section above, only the role cookbook should be pinned at the environment level; base cookbook updates are then picked up by bumping the pins in the role cookbook's metadata.rb, so a change to one shared base cookbook cannot ripple into every project that uses it.


Simply put, a wrapper cookbook is just a regular cookbook that includes recipes from other cookbooks. Common use cases for wrapper cookbooks include:
  • Modifying the behavior of a community cookbook from the Chef Supermarket
  • Bundling several base cookbooks into a single cookbook
  • Version controlling a node’s run list and attribute definitions

Writing a Wrapper Cookbook

To include another cookbook in your wrapper cookbook you must do a minimum of two things:
  • Add dependencies to your wrapper cookbook’s metadata.rb
  • Add an include_recipe line to your wrapper cookbook’s recipes/default.rb

Including Dependencies

Including dependencies is as simple as adding the following to your metadata.rb:

depends 'public_cookbook'

You can also optionally perform version pinning like so:

depends 'public_cookbook', '= 1.4.5'

For more information about version pinning see the metadata.rb page on the Chef Docs site.

Setting Attributes

Setting attributes in your wrapper cookbook is a common way to modify the behavior of the cookbook you are wrapping. Well written community cookbooks support modifying their behavior in this manner and document their attributes in their README.md.
These attributes can be added in your wrapper cookbook’s attributes/default.rb and/or in your default recipe before your include_recipe line.
I decide where to place the attributes as follows: if the attributes are computed using other attributes or set via logic (e.g. case, if, unless), place them in recipes/default.rb; otherwise, place them in attributes/default.rb.
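As an illustration, using a hypothetical public_cookbook attribute:

# attributes/default.rb - static overrides
default['public_cookbook']['port'] = 8080

# recipes/default.rb - computed or conditional overrides, set before the include_recipe
node.default['public_cookbook']['install_dir'] =
  platform?('windows') ? 'C:\\app' : '/opt/app'
include_recipe 'public_cookbook::default'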

Completing the Wrap

In order to add the functionality from your wrapped cookbook, you will need to include that cookbook in your wrapper cookbook’s default recipe. This is usually done with the help of the include_recipe method, like so:

include_recipe 'public_cookbook::default'

Once you have completed this your cookbook is ready for use.

Sample Use Cases

For the examples below let’s assume we want to use the IIS cookbook from the Chef Supermarket.

Creating the wrapper

By running the following chef command we can generate our wrapper cookbook:

chef generate cookbook my_company_iis 

From here we add the following to our metadata.rb

depends 'iis' 

Then we can add the necessary include_recipe line to our recipes/default.rb

include_recipe 'iis::default'

Doing the actions above will create a wrapper cookbook that will use the IIS cookbook to:
  • Install IIS
  • Ensure the w3svc is enabled and started
  • Serve the Default Web Site

Modifying Public Cookbook Behavior

The above example is great, but let’s assume that your company hosts its websites on D: instead of C:. We can change this by modifying the attributes that the IIS cookbook consumes.

To host websites out of D: add the following to your wrapper cookbook’s attributes/default.rb

default['iis']['pubroot']    = 'D:\\inetpub'

default['iis']['docroot']    = 'D:\\inetpub\\wwwroot'

default['iis']['log_dir']    = 'D:\\inetpub\\logs\\LogFiles'

default['iis']['cache_dir']  = 'D:\\inetpub\\temp'  
 
Adding this to your wrapper cookbook’s attributes file will modify the behavior of the IIS cookbook.

Application Cookbooks

By completing the above you have now created a base cookbook that will install IIS in the fashion that your company desires. Now we can expand on this by utilizing an additional wrapper cookbook.
Let’s say we did the same as above but also created a my_company_app cookbook and included our my_company_iis cookbook with a hard version pinning (see the sketch below). By doing this we can give the developer of my_company_app the freedom to have IIS installed to company specifications, but without worrying about how things work behind the scenes.
This allows one team to focus on coding the logic that deploys their web application without also having to code the logic for installing IIS to company specifications.
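A sketch of what that pinning looks like in my_company_app’s metadata.rb (the version is illustrative):

depends 'my_company_iis', '= 1.0.0'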

Modifying the Resource Collection

Take, for example, a cookbook that lays down a file on the file system via a template, but that cookbook’s template doesn’t suit your needs.
In your recipe you can use the edit_resource helper method provided by Chef’s Recipe DSL to modify their template resource to point to a template in your wrapper cookbook instead.
In practice it looks like this:

include_recipe 'bad_cookbook::default' 

edit_resource(:template, 'C:\\important\\template\\path.ini') do

  source 'my_beautiful_template.erb'

  cookbook 'my_awesome_wrapper'

end  
 
Adding this to your wrapper cookbook’s default recipe would allow you to use their cookbook as intended with the exception that your template will be used and not theirs.