Creating a Chef Automate Workflow Pipeline


My company's Chef Automate workflow pipelines were designed as part of a middleware infrastructure project. The project required three auto-scaled instances, each sitting behind its own AWS ELB. The project enlisted the services of four teams, each with its own specialization. The Infrastructure team created AWS CloudFormation Templates (CFTs), CFT cookbooks, VPCs, security groups and ELBs. The Middleware team created the cookbooks for the respective instances, including the underlying base cookbooks that would be utilized by our company for future projects. The QA team created and iterated upon smoke and functional testing for single instances and their communication with other instances. Finally, the Security team determined the compliance testing necessary for the instances and helped create proper security testing that would stop pipelines should servers fall out of compliance.

When designing the infrastructure and procedures for my company's Chef Automate workflow pipelines, we came across a number of hurdles.

First, when provisioning instances via our CFT cookbook, the nodes are bootstrapped with the Chef client through a user data script. After the Chef client is installed via the script, each node runs its first-boot.json, which contains the name of the cookbook for the current project pipeline. If the recipe fails during this initial bootstrapping process, however, the node will not be attached appropriately to the Chef server.

This bootstrapping process is a necessary component for auto-scaled instances. When new instances are booted as part of an auto-scaling event, those nodes require that the bootstrap procedure be run with the latest cookbooks. Therefore, testing of the cookbook needs to be independent of the CFT deployment steps.

To bypass this issue, my company developed a pipeline that calls on not only our internal CFT provisioning cookbook but also Test Kitchen for our acceptance nodes.

By using kitchen-ec2 we are able to converge and destroy our cookbooks in acceptance to verify their viability before passing them to our user data script. This is made easier with the inclusion of the delivery-sugar cookbook. Delivery-sugar contains resources that allow for the creation, convergence and destruction of EC2, Azure, Docker and vSphere instances using the delivery_test_kitchen resource.

My company is currently calling on kitchen-ec2 for instance creation. kitchen-ec2 currently requires ALL of the following components to run successfully.

Test Kitchen Setup (Acceptance Stage Provisioning):

In order to enable this functionality, please perform the following prerequisite steps.

Add ALL of the following items to the appropriate data bag within your Chef server.
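A sketch of the data bag item, based on the delivery-secrets conventions used by delivery-sugar; every value is a placeholder, and the exact field names may vary with your delivery-sugar version:

{
  "id": "<ent>-<org>-<project>",
  "ec2": {
    "access_key": "<aws-access-key>",
    "secret_key": "<aws-secret-key>",
    "keypair_name": "<ec2-keypair-name>",
    "private_key": "<JSON-compatible contents of automate_kitchen.pem>"
  }
}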
You can convert the private key content to a JSON-compatible string with the following command:

chef exec ruby -e 'p ARGF.read' automate_kitchen.pem >> automate_kitchen.json 

Since the private key should be secured, this data bag should be encrypted. To add an encrypted data bag to the Chef server, you must first have the proper access required to run knife commands against it. Once that permission is in place, run the following command.

knife data bag create delivery-secrets <ent>-<org>-<project> --secret-file encrypted_data_bag_secret

Where <ent> is the name of your enterprise, <org> is the name of your organization and <project> is the name of the project pipeline you are creating.

In order for this data to be decrypted, the encrypted_data_bag_secret file used to encrypt the data bag must be added to your Chef build servers at the following location.

/etc/chef/encrypted_data_bag_secret


Once these components are deployed, customize your kitchen YAML file with all the required information needed by the kitchen-ec2 driver.

NOTE: This kitchen.yml file is the one found in your .delivery/build_cookbook, not the one found under your project cookbook.


Delivery-sugar will expose the following ENV variables for use by kitchen.

  • KITCHEN_INSTANCE_NAME - set to the <project>-<change-id> values provided by delivery-cli
  • KITCHEN_EC2_SSH_KEY_PATH - path to the SSH private key created from the delivery-secrets data bag set up in the step above.
These variables may be used in your kitchen YAML as in the example below.
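A minimal sketch, assuming the standard kitchen-ec2 driver settings; the region, instance type, subnet, platform and run_list values are placeholders, and exactly where each variable is used (for example, the suite name) is up to you:

---
driver:
  name: ec2
  region: us-east-1                    # placeholder
  instance_type: t2.micro              # placeholder
  subnet_id: subnet-00000000           # placeholder
  aws_ssh_key_id: <ec2-keypair-name>   # the keypair named in delivery-secrets

transport:
  ssh_key: <%= ENV['KITCHEN_EC2_SSH_KEY_PATH'] %>

provisioner:
  name: chef_zero

platforms:
  - name: ubuntu-16.04                 # placeholder

suites:
  - name: <%= ENV['KITCHEN_INSTANCE_NAME'] %>
    run_list:
      - recipe[my_project::default]    # hypothetical project cookbook name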
Once the prerequisites are in place, you can use delivery_test_kitchen within your .delivery/build_cookbook/provision.rb to deploy instances through Test Kitchen.




Trigger a kitchen converge and destroy action using the EC2 driver, pointing it to the .kitchen.ec2.yml file in delivery, as in the sketch following the note below.

NOTE: When adding a repo_path my company chooses #{workflow_workspace_repo}/.delivery/build_cookbook/. This is by preference; the location of the .yml file can sit wherever the user requires.
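A sketch of that converge-and-destroy call in provision.rb; the resource name is arbitrary, and the action array is an assumption (two single-action resources work as well):

delivery_test_kitchen 'acceptance_provision' do
  driver 'ec2'
  repo_path "#{workflow_workspace_repo}/.delivery/build_cookbook/"
  yaml '.kitchen.ec2.yml'
  action [:converge, :destroy]
end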



Trigger a kitchen create, passing extra options for debugging:
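A sketch, assuming the options property passes flags straight through to kitchen:

delivery_test_kitchen 'acceptance_create' do
  driver 'ec2'
  repo_path "#{workflow_workspace_repo}/.delivery/build_cookbook/"
  yaml '.kitchen.ec2.yml'
  options '--log-level=debug'
  action :create
end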




Trigger a kitchen create, extending the timeout to 20 minutes:
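A sketch, assuming the timeout is given in seconds:

delivery_test_kitchen 'acceptance_create' do
  driver 'ec2'
  repo_path "#{workflow_workspace_repo}/.delivery/build_cookbook/"
  yaml '.kitchen.ec2.yml'
  timeout 1200   # 20 minutes, in seconds
  action :create
end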

 


Since we are only using kitchen for our acceptance nodes, my company must add logic to verify Test Kitchen is not used outside of the acceptance stage (workflow_stage is a helper provided by delivery-sugar that returns the stage currently running).
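A sketch of that guard, assuming workflow_stage returns the lowercase stage name:

if workflow_stage == 'acceptance'
  delivery_test_kitchen 'acceptance_provision' do
    driver 'ec2'
    repo_path "#{workflow_workspace_repo}/.delivery/build_cookbook/"
    yaml '.kitchen.ec2.yml'
    action [:converge, :destroy]
  end
end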




Version Pinning

The second issue we encountered in creating our workflow pipelines was the pinning of our environments.

If base cookbooks are used for multiple projects, pinning should not be done on the base cookbook itself. Cookbooks are pinned at the environment level, so if a base cookbook is pinned in an environment and then updated, that update will in effect alter every project using it in that environment (acceptance, union, rehearsal, delivered). To prevent workflow from creating this pin, comment out the delivery-truck::provision include in .delivery/build_cookbook/provision.rb, as sketched below.
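A minimal sketch of the relevant line in .delivery/build_cookbook/recipes/provision.rb:

# Commented out so base cookbooks are not pinned at the environment level:
# include_recipe 'delivery-truck::provision'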


In turn, if we version pin only the role cookbook at the environment level, that pin is project specific, so any changes made to the role cookbook should not have an effect on any other project. An environment then looks like the sketch below.
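A sketch of the resulting environment, with placeholder names and versions:

{
  "name": "acceptance",
  "cookbook_versions": {
    "my_project_role": "= 1.2.3"
  }
}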



This does mean that in order for a base cookbook to be updated in a project, its version pin must be changed in the role cookbook; for every underlying cookbook change, the role cookbook must be version bumped as well. This is a much more manual process, but it protects projects from breaking with a change to a single base cookbook.
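For example, picking up a base cookbook update in the role cookbook's metadata.rb might look like this (names and versions are placeholders):

name 'my_project_role'
version '1.2.4'                         # bumped for the underlying change

depends 'my_base_cookbook', '= 2.0.1'   # the updated base cookbook pin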

This also has the added benefit of version controlling any version bumps we make in our environments for a given project's nodes. Since the only version pins in an environment fall on the role cookbook, all other version changes are controlled through the role cookbook's metadata and delivery CLI commands. These commits can be tied back to individual users and version changes, which better stabilizes the environments.

If base cookbooks are not project specific, the leading measure in Workflow should sit with role cookbooks. These cookbooks should be used to provision servers and to version pin the underlying cookbooks when going through the Union, Rehearsal and Delivered stages of the Chef Automate Workflow, keeping each project's version pinning separate.

Setting up Metadata.rb, Provision.rb, Kitchen.yml and Berksfile in .delivery/build_cookbook

NOTE: Before adding the workflow provisioning steps to the build_cookbook, please add the project cookbook to the Chef server, either through Automate workflow or through a knife command. If the project cookbook is not available upon the first run of the pipeline, it will fail when trying to download cookbooks.

With these two problems resolved and explained, it is now time to set up the rest of our workflow pipeline.

We will start by modifying our Berksfile within .delivery/build_cookbook/. Since we will be calling on cookbooks that are currently stored in the Chef server, we need to make sure the workflow pipeline can reach out to it to find cookbooks. We do this by adding the Chef server source, as sketched below.
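A sketch of the resulting .delivery/build_cookbook/Berksfile, assuming a Berkshelf version that supports the :chef_server source:

source 'https://supermarket.chef.io'
source :chef_server   # resolve cookbooks from the configured Chef server

metadata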


Next, we will modify our metadata.rb file. We need to make sure we are calling in delivery-sugar, delivery-truck, the current project cookbook for the pipeline, and the cookbook we are using to provision our servers.

NOTE: We only need to call the provisioning cookbook here if this is the role cookbook.
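A sketch of the build cookbook's metadata.rb; my_project and my_provision_cookbook are placeholder names:

name 'build_cookbook'
version '0.1.0'

depends 'delivery-sugar'
depends 'delivery-truck'
depends 'my_project'             # the project cookbook for this pipeline
depends 'my_provision_cookbook'  # only if this is the role cookbook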



We will also configure our kitchen.yml (which we have named here as kitchen.ec2.yml) as we described in the steps above. This file is used for our kitchen converge and destroy in our acceptance provisioning stage. 

NOTE: Do not forget to change the cookbook called in the kitchen.yml to reflect the current cookbook we are sitting in (see the run_list sketch below).
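For example, the suites section should point at this project's cookbook (my_project is a placeholder):

suites:
  - name: <%= ENV['KITCHEN_INSTANCE_NAME'] %>
    run_list:
      - recipe[my_project::default]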


Finally, we will modify our provision.rb file. Its contents depend on whether we are in a role cookbook or a base/wrapper cookbook (please see the section on version pinning above for further explanation).

In a ROLE cookbook, we will call upon the provisioning cookbook if we are in the union, rehearsal or delivered stage. This check can be made using the delivery-sugar helper workflow_stage, which returns the stage the pipeline is currently running in.

We will also call on the delivery-truck::provision recipe to pin our environment.

NOTE: The delivery-truck::provision recipe is included AFTER the run of our provisioning cookbook (see the section on version pinning for more information).
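A sketch of the role cookbook's provision.rb under those rules (my_provision_cookbook is a placeholder):

if %w(union rehearsal delivered).include?(workflow_stage)
  include_recipe 'my_provision_cookbook::default'
end

# Pin the environment after the provisioning run (see version pinning above).
include_recipe 'delivery-truck::provision'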



If we are NOT IN A ROLE COOKBOOK, delivery-truck::provision will not be called. We will also not need to include the recipe for provisioning in union, rehearsal or delivered. To keep things simple, and to prevent us from having to make too many modifications to our code, we simply add a warning message in place of the provisioning cookbook includes.
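A sketch of the non-role provision.rb, with the warning in place of the includes:

if %w(union rehearsal delivered).include?(workflow_stage)
  Chef::Log.warn('Provisioning and pinning are handled by the role cookbook pipeline.')
end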


Once these changes are saved, we can version bump our project cookbook, either through the metadata.rb file or the delivery command, and run delivery review.

NOTE: This version bump is done in the PROJECT COOKBOOK, not the build cookbook.
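A sketch of that flow from the project cookbook's repository, assuming the version was bumped by editing metadata.rb on a feature branch:

git checkout -b bump-version
git commit -am 'Bump project cookbook version'
delivery review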


This will push the cookbook into Automate and start the Chef Automate Workflow Pipeline.
