My company's Chef Automate workflow pipelines were designed as part of a middleware infrastructure project. The project required three auto-scaled instances, each sitting behind its own AWS ELB, and enlisted the services of several teams, each with its own specialization. The Infrastructure team created AWS CloudFormation Templates (CFTs), CFT cookbooks, VPCs, security groups, and ELBs. The Middleware team created the cookbooks for the respective instances, including the underlying base cookbooks that will be utilized by our company for future projects. The QA team created and iterated upon smoke and functional testing for single instances and their communication with other instances. Finally, the Security team determined the compliance testing necessary for instances and helped create proper security testing that would stop pipelines should servers fall out of compliance.
When designing the infrastructure and procedures for my company's Chef Automate workflow pipelines, we came across a number of hurdles.
First, when provisioning instances via our CFT cookbook, the nodes are bootstrapped with Chef Client through a user data script. After Chef Client is installed via the script, the nodes run their first-boot.json, which contains the name of the cookbook for the current project pipeline. If the recipe fails during the initial bootstrapping process, however, the node will not be attached appropriately to the Chef Server.
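For context, a minimal first-boot.json is little more than a run list; a sketch, where the cookbook name is a stand-in:

{ "run_list": ["recipe[my_project_cookbook]"] }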
This bootstrapping process is a necessary component for auto-scaled instances. If new instances are booted as part of an Auto Scaling event, those nodes will require the bootstrap procedure to be run with the latest cookbooks. Therefore, testing of the cookbook needs to be independent of the CFT deployment steps.
In order to bypass this issue, my company developed a pipeline that calls on not only our internal CFT provisioning cookbook but also Test Kitchen for our acceptance nodes.
By using kitchen-ec2 we are able to converge and destroy instances in acceptance to verify our cookbooks' viability before passing them to our user data script. This is made easier with the inclusion of the delivery-sugar cookbook. Delivery-sugar contains resources that allow for the creation, convergence, and destruction of EC2, Azure, Docker, and vSphere instances using the delivery_test_kitchen resource.
Test Kitchen Setup (Acceptance Stage Provisioning):
My company is currently calling on kitchen-ec2 for instance creation. kitchen-ec2 currently requires ALL of the following components to run successfully; in order to enable this functionality, please perform the following prerequisite steps.
Add ALL of the following items to the appropriate data bag within your Chef Server:
The following command converts the private key into a JSON-escaped string that can be placed in the data bag item:
chef exec ruby -e 'p ARGF.read' automate_kitchen.pem >> automate_kitchen.json
Since the private key should be secured, this data bag should be encrypted. In order to add an encrypted data bag to the Chef Server you must first have proper access to the Chef Server, which is necessary for any knife command to be run. Once this permission is in place, run the following command:
knife data bag create delivery-secrets <item> --secret-file encrypted_data_bag_secret
where <item> is the name of the data bag item (delivery-sugar expects an item named <ent>-<org>-<project>) and encrypted_data_bag_secret is the secret file used to encrypt the data bag.
In order to decrypt this data, the encrypted_data_bag_secret file used to encrypt the data bag must be added to your Chef Build servers at the following location:
/etc/chef/
Once these components are deployed, customize your kitchen YAML file with all the required information needed by the kitchen-ec2 driver.
NOTE: This kitchen.yml file is the one found in your .delivery/build_cookbook, not the one found under your project cookbook.
Delivery-sugar will expose the following ENV variables for use by kitchen:
- KITCHEN_INSTANCE_NAME - set to the values provided by delivery-cli
- KITCHEN_EC2_SSH_KEY_PATH - path to the SSH private key created from the delivery-secrets data bag in the step above.
Once the prerequisites are in place, you can use delivery_test_kitchen within your .delivery/build_cookbook/provision.rb to deploy instances through Test Kitchen.
Trigger a kitchen converge and destroy action using the EC2 driver, pointing it at the .kitchen.ec2.yml file in delivery, as in the sketch below.
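A minimal sketch of what this might look like, following the delivery-sugar examples (the resource name is arbitrary, and the repo_path reflects the preference described in the note below):

delivery_test_kitchen 'kitchen-ec2-acceptance' do
  driver 'ec2'
  yaml '.kitchen.ec2.yml'
  repo_path "#{workflow_workspace_repo}/.delivery/build_cookbook/"
  action [:converge, :destroy]
end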
NOTE: When adding a repo_path, my company chooses #{workflow_workspace_repo}/.delivery/build_cookbook/. This is by preference, and the .yml file can sit wherever the user requires.
Trigger a kitchen create, passing extra options for debugging, as in the sketch below.
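A sketch under the same assumptions, where the options property passes extra flags through to the kitchen CLI:

delivery_test_kitchen 'kitchen-ec2-debug' do
  driver 'ec2'
  yaml '.kitchen.ec2.yml'
  options '--log-level debug'
  action :create
end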
Version Pinning
If using base cookbooks for multiple projects, pinning should not be done on the base cookbook itself. Since cookbooks are pinned at the environment level, if the base cookbook is pinned in the environment and then updated, that base cookbook update will in effect alter all projects using it in that environment (acceptance, union, rehearsal, delivered). To prevent this pinning from taking place through Workflow, comment out delivery-truck::provision under .delivery/build-cookbook/provision.rb, as sketched below.
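A sketch of the change in that recipe (everything else in provision.rb stays as-is):

# .delivery/build-cookbook/provision.rb
# Commented out so Workflow does not pin the base cookbook at the
# environment level:
# include_recipe 'delivery-truck::provision'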
In turn, if we version pin only the role cookbook at the environment level, then, since it is project specific, any changes made to the role cookbook should not have an effect on any other project.
This does mean that in order for a base cookbook to be updated in a project, its version must be changed in the role cookbook. So for every underlying cookbook change, the role cookbook will need to be version bumped. This is a much more manual process, but it will provide protection from projects breaking with a change to one base cookbook.
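A sketch of how the role cookbook's metadata.rb might carry these pins (cookbook names and versions are hypothetical):

# metadata.rb of the project's role cookbook
name 'my_project_role'
version '1.2.1' # bumped for every underlying base cookbook change

depends 'base_middleware', '= 2.0.1' # explicit pin to the base cookbook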
This also has the added benefit of version controlling any version bumps we have in our environments for a given project's nodes. Since the only version pins in an environment fall on the role cookbook, all other changes to versions should be controlled through the role cookbook's metadata and delivery CLI commands. These commits can be tied back to individual users and version changes, which will better stabilize the environments.
Setting up Metadata.rb, Provision.rb, Kitchen.yml and Berksfile in .delivery/build_cookbook
With these two problems resolved and explained, it is now time to set up the rest of our workflow pipeline.
We will start by modifying our Berksfile within .delivery/build_cookbook/. Since we will be calling on cookbooks that are currently stored in the Chef Server, we need to make sure that the workflow pipeline can reach out to it to find cookbooks. We do this by adding the Chef Server source, as sketched below.
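A sketch of that Berksfile, assuming a Berkshelf version that supports the :chef_server source:

# .delivery/build_cookbook/Berksfile
source 'https://supermarket.chef.io'
source :chef_server # resolve cookbooks stored on the Chef Server

metadata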
We will also configure our kitchen.yml (which we have named kitchen.ec2.yml here) as described in the steps above. This file is used for our kitchen converge and destroy in the acceptance provisioning stage.
NOTE: Do not forget to change the cookbook we are calling in the kitchen.yml to reflect the current cookbook we are sitting in (see the run_list in the sketch below).
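A sketch of such a kitchen.ec2.yml; the region, instance type, and platform are stand-ins, and the SSH key path uses the environment variable delivery-sugar exposes:

# .delivery/build_cookbook/kitchen.ec2.yml
driver:
  name: ec2
  region: us-east-1
  instance_type: t2.micro

transport:
  ssh_key: <%= ENV['KITCHEN_EC2_SSH_KEY_PATH'] %>

provisioner:
  name: chef_zero

platforms:
  - name: ubuntu-16.04

suites:
  - name: default
    run_list:
      - recipe[my_project_cookbook::default] # change to the current cookbook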
In a ROLE cookbook, we will call upon the provisioning cookbook if we are in the union, rehearsal, or delivered stage. This check can be made using the delivery-sugar helper workflow_stage, which returns the stage the pipeline is currently running in.
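A sketch of that check, where 'provisioning' stands in for our internal CFT provisioning cookbook:

# .delivery/build_cookbook/provision.rb of the ROLE cookbook
if %w(union rehearsal delivered).include?(workflow_stage)
  include_recipe 'provisioning::default'
end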
NOTE: This version bump is done in the PROJECT COOKBOOK, not the build cookbook.
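As a hypothetical end-to-end example, the bump and submission might look like this (branch name and versions are stand-ins):

# run inside the PROJECT cookbook
git checkout -b bump-base-cookbook
$EDITOR metadata.rb # version '1.2.0' -> '1.2.1'
git commit -am 'Bump base cookbook pin'
delivery review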
This will push the cookbook into Automate and start the Chef Automate Workflow Pipeline.