# MCP Tools Reference
SousChef provides 95 specialised MCP tools for comprehensive Chef-to-Ansible migration, SaltStack-to-Ansible migration, Puppet-to-Ansible conversion, PowerShell-to-Ansible Windows automation, Bash script migration, and Ansible upgrade planning. Each tool is designed to work seamlessly with any AI model through the Model Context Protocol.
Working with MCP Tools
These tools are invoked through your AI assistant (Claude, GPT-4, Red Hat AI, local models, etc.). Simply describe what you need in natural language, and your AI assistant will use the appropriate tools.
About the Tool Count
Complete tool inventory available in source code
This guide documents the primary user-facing tools (Chef migration, SaltStack migration, Puppet migration, PowerShell migration, Bash migration, and Ansible upgrades) that cover the main capabilities. The MCP server includes additional internal helper tools that your AI assistant uses automatically behind the scenes.
As a user, you'll primarily interact with these documented tools. Your AI assistant may use additional tools automatically when needed (e.g., low-level file operations), but you don't need to invoke them directly.
See souschef/server.py for the complete authoritative list of all MCP tools.
## Quick Reference by Capability Area
| Capability | Tools | Use Case |
|---|---|---|
| Cookbook Analysis & Parsing | 8 tools | Parse and analyze Chef cookbooks, recipes, resources |
| Resource Conversion | 1 tool | Convert Chef resources to Ansible tasks |
| InSpec Integration | 2 tools | Convert InSpec tests and generate from recipes |
| Data Bags | 2 tools | Migrate data bags to Ansible vars/vault |
| Environments | 3 tools | Convert Chef environments to inventory |
| Migration Assessment | 5 tools | Assess complexity and plan migrations |
| Habitat | 1 tool | Parse Habitat plans |
| Performance | 2 tools | Profile and optimise parsing operations |
| CI/CD Pipeline Generation | 3 tools | Generate Jenkins, GitLab CI, and GitHub Actions |
| AWX/AAP Integration | 3 tools | Generate AWX job templates, workflows, and inventory |
| Chef Server Integration | 3 tools | Validate Chef Server connections and query dynamic inventory |
| Ansible Upgrade Planning | 5 tools | Assess Ansible environments and plan version upgrades |
| SaltStack Migration | 12 tools | Parse, convert, assess, and plan SaltStack-to-Ansible migrations |
| PowerShell Migration | 7 tools | Convert PowerShell scripts to Windows Ansible automation |
| Bash Script Migration | 3 tools | Convert Bash provisioning scripts to Ansible playbooks and roles |
| Puppet Migration | 8 tools | Convert Puppet manifests and modules to Ansible playbooks |
## Cookbook Analysis & Parsing
Complete cookbook introspection and analysis tools for understanding your Chef infrastructure.
### parse_template
Parse ERB templates with automatic Jinja2 conversion and variable extraction.
What it does: Converts Chef's ERB template files (Ruby-style templates) into Ansible's Jinja2 format. ERB and Jinja2 are both template languages that let you embed variables and logic into configuration files. In Chef you write <%= hostname %>, in Ansible it's {{ hostname }}. This tool automatically translates between the two syntaxes.
Why you need this: Chef cookbooks often contain dozens of template files for configs like nginx.conf, httpd.conf, database.yml, etc. Manually converting each one is tedious and error-prone. This tool does it instantly.
What you get:
- The converted Jinja2 template, ready for Ansible
- A complete list of all variables referenced in the template
- Guidance on how those variables should be defined in your playbook
Real-world example: Your Chef template templates/default/app.conf.erb with Chef ERB syntax automatically becomes an Ansible-ready app.conf.j2 with proper Jinja2 syntax, plus you get a list like "Variables needed: app_port, app_user, app_home".
Parameters:
- path (string, required): Path to the ERB template file
Returns: - JSON string with extracted variables and Jinja2-converted template
Example Usage:
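The heart of the conversion — rewriting ERB delimiters as Jinja2 ones and collecting referenced variables — can be sketched in a few lines of Python. This is illustrative only, not SousChef's actual implementation; `erb_to_jinja2` is a hypothetical name:

```python
import re

def erb_to_jinja2(erb_source: str) -> tuple[str, list[str]]:
    """Naive ERB -> Jinja2 conversion: <%= expr %> becomes {{ expr }},
    <% stmt %> becomes {% stmt %}. Returns the converted template plus
    the variable names it references."""
    converted = re.sub(r"<%=\s*(.*?)\s*%>", r"{{ \1 }}", erb_source)
    converted = re.sub(r"<%\s*(.*?)\s*%>", r"{% \1 %}", converted)
    variables = sorted(set(re.findall(r"{{\s*([a-zA-Z_]\w*)\s*}}", converted)))
    return converted, variables

template, variables = erb_to_jinja2("listen <%= app_port %>;")
print(template)   # listen {{ app_port }};
print(variables)  # ['app_port']
```

Real ERB can embed arbitrary Ruby (node attribute lookups, conditionals, loops), which is why the tool has to do considerably more than delimiter rewriting.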
### parse_custom_resource
Extract properties, attributes, and actions from Chef custom resources and LWRPs.
What it does: Analyses Chef custom resources (the Ruby files in your resources/ directory) and extracts all the properties, actions, and configuration they define. Custom resources are reusable Chef components that encapsulate complex operations.
Why you need this: Custom resources are often the most complex parts of a Chef cookbook. Understanding what properties they accept and what actions they perform is critical for converting them to Ansible modules or roles. Without this tool, you'd need to manually read through Ruby code and trace execution paths.
What you get:
- Complete list of all properties the resource accepts (like property :port, Integer)
- All actions the resource can perform (like :create, :delete, :configure)
- Default values and validation rules
- Metadata about the resource's purpose
Real-world example: Your resources/database_user.rb file defines a custom Chef resource. This tool extracts that it has properties like username, password, privileges, and actions like :create and :drop, helping you understand what Ansible tasks you need to write.
Parameters:
- path (string, required): Path to the custom resource (.rb) file
Returns: - JSON string with extracted properties, actions, and metadata
Example Usage:
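The kind of extraction this tool performs can be sketched with simple pattern matching over the Ruby source. A minimal illustration — not SousChef's implementation, and `extract_interface` is a hypothetical name:

```python
import re

RESOURCE_SRC = """
property :username, String, required: true
property :password, String, sensitive: true
property :privileges, Array, default: []

action :create do
end

action :drop do
end
"""

def extract_interface(ruby_source: str) -> dict:
    """Pull property and action declarations out of a Chef custom resource."""
    properties = re.findall(r"^property\s+:(\w+),\s*(\w+)", ruby_source, re.MULTILINE)
    actions = re.findall(r"^action\s+:(\w+)", ruby_source, re.MULTILINE)
    return {"properties": dict(properties), "actions": actions}

print(extract_interface(RESOURCE_SRC))
# {'properties': {'username': 'String', 'password': 'String', 'privileges': 'Array'},
#  'actions': ['create', 'drop']}
```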
### list_directory
Navigate and explore cookbook directory structures.
What it does: Lists all files and directories in a given path, like the Unix ls command. Simple but essential for exploring unfamiliar cookbooks.
Why you need this: When you're migrating a cookbook you didn't write (or wrote years ago), you need to understand what's in it. This tool helps you discover recipes, templates, resources, and other files you need to convert.
What you get: A list of all files and directories in the specified location, making it easy to explore the cookbook structure.
Real-world example: Running this on cookbooks/database/ shows you there are recipes/, templates/, attributes/, and resources/ directories, helping you plan your migration strategy.
Parameters:
- path (string, required): Path to the directory
Returns: - List of filenames in the directory, or an error message
Example Usage:
### read_file
Read cookbook files with comprehensive error handling.
What it does: Reads and displays the contents of any file in your cookbook, like the Unix cat command. Provides detailed error messages if the file doesn't exist or can't be read.
Why you need this: Before converting Chef code, you often need to examine the actual content of recipes, metadata files, or configuration files. This tool makes that easy without leaving your AI assistant.
What you get: The complete contents of the file, with helpful error messages if something goes wrong (like "File not found" or "Permission denied").
Real-world example: Reading metadata.rb shows you the cookbook's dependencies, version, and description before you start migration planning.
Parameters:
- path (string, required): Path to the file
Returns: - The contents of the file, or an error message
Example Usage:
### read_cookbook_metadata
Parse metadata.rb files for dependencies and cookbook information.
What it does: Reads and parses the metadata.rb file in a Chef cookbook, extracting structured information about the cookbook's name, version, dependencies, supported platforms, and more.
Why you need this: The metadata file is the "package.json" or "requirements.txt" of Chef cookbooks. It tells you what other cookbooks this one depends on, which is crucial for understanding migration order and complexity. Raw Ruby metadata files are hard to parse by eye.
What you get:
- Cookbook name and version
- All dependencies (like depends 'apt', '>= 2.0')
- Supported platforms (like supports 'ubuntu', '>= 18.04')
- License and maintainer information
- Long description of what the cookbook does
Real-world example: Parsing database/metadata.rb reveals it depends on the postgresql and apt cookbooks, so you need to migrate or handle those dependencies in Ansible too.
Parameters:
- path (string, required): Path to metadata.rb file
Returns: - Formatted string with extracted metadata
Example Usage:
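To see why structured extraction beats reading raw Ruby, here is a toy parser for the common single-line metadata directives. Illustrative only; SousChef's parser handles far more of the metadata DSL, and `parse_metadata` is a hypothetical name:

```python
import re

METADATA_RB = """
name 'database'
version '1.4.0'
depends 'postgresql', '>= 7.0'
depends 'apt'
supports 'ubuntu', '>= 18.04'
"""

def parse_metadata(source: str) -> dict:
    """Extract name, version, dependencies, and platforms from metadata.rb."""
    meta = {"depends": {}, "supports": {}}
    for key, name, constraint in re.findall(
        r"^(name|version|depends|supports)\s+'([^']+)'(?:,\s*'([^']+)')?",
        source, re.MULTILINE,
    ):
        if key in ("depends", "supports"):
            meta[key][name] = constraint or "any"
        else:
            meta[key] = name
    return meta

print(parse_metadata(METADATA_RB))
```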
### parse_recipe
Analyze Chef recipes and extract resources, actions, and properties.
What it does: Reads a Chef recipe file (the .rb files in recipes/ directory) and extracts every resource declared in it, including the resource type, name, action, and all properties. Think of it as a Chef recipe translator that converts Ruby code into structured data.
Why you need this: Chef recipes are written in Ruby DSL, which can be difficult to convert to Ansible YAML by hand, especially for large recipes with dozens of resources. This tool breaks down the recipe into individual components you can convert one by one.
What you get:
- Every resource in the recipe (like package 'nginx', service 'nginx', template '/etc/nginx.conf')
- The action for each resource (:install, :start, :create)
- All properties (like source, variables, mode)
- Guard conditions (like only_if, not_if)
Real-world example: Parsing recipes/default.rb shows it installs 5 packages, configures 3 services, and creates 8 template files. You can now convert each resource to an Ansible task systematically.
Parameters:
- path (string, required): Path to the Chef recipe (.rb) file
Returns: - Formatted string with extracted Chef resources and their properties
Example Usage:
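The structured-data idea — turning `resource 'name' do ... end` blocks into records of type, name, and action — can be sketched like this. A deliberately naive illustration (real recipes need a proper Ruby parser, which is what SousChef provides); `extract_resources` is a hypothetical name:

```python
import re

RECIPE_RB = """
package 'nginx' do
  action :install
end

service 'nginx' do
  action [:enable, :start]
end

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
end
"""

def extract_resources(recipe: str) -> list[dict]:
    """List each resource block as type, name, and action."""
    blocks = re.findall(
        r"^(\w+)\s+'([^']+)'\s+do\n(.*?)^end", recipe, re.MULTILINE | re.DOTALL
    )
    resources = []
    for rtype, name, body in blocks:
        action = re.search(r"action\s+(\S.*)", body)
        resources.append({
            "type": rtype,
            "name": name,
            "action": action.group(1) if action else ":create (default)",
        })
    return resources

for res in extract_resources(RECIPE_RB):
    print(res)
```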
### parse_attributes
Parse Chef attributes files and extract attribute definitions with precedence.
What it does: Reads Chef attributes files (the .rb files in the attributes/ directory) and extracts all attribute definitions. Chef attributes are like variables that configure your cookbook. This tool handles Chef's notoriously complex precedence system, which spans 15 levels built from six attribute types (default, force_default, normal, override, force_override, automatic) combined with where each is set.
Why you need this: Chef's attribute precedence system is notoriously complex. Understanding which attribute value "wins" when multiple are defined is critical for correct Ansible variable migration. This tool resolves precedence automatically so you know exactly what values your Chef recipes will actually use.
What you get:
- All attributes defined in the file (like default['nginx']['port'] = 80)
- The precedence level of each attribute
- The final resolved value when precedence is enabled
- Equivalent Ansible variable structure
Real-world example: Your attributes file defines default['app']['port'] = 3000 and override['app']['port'] = 8080. This tool tells you that 8080 wins due to override precedence, so your Ansible vars should use app_port: 8080.
Parameters:
- path (string, required): Path to the attributes (.rb) file
- resolve_precedence (boolean, optional, default: true): If true, resolves precedence conflicts and shows only winning values
Returns: - Formatted string with extracted attributes
Example Usage:
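The precedence resolution described above boils down to "highest-ranked type wins". A minimal sketch of that rule, using an abridged ranking of the six attribute types (illustrative only; `resolve` is a hypothetical name and Chef's full table also factors in where each attribute was set):

```python
# Precedence rank, lowest to highest (abridged: attribute types only).
PRECEDENCE = ["default", "force_default", "normal",
              "override", "force_override", "automatic"]

def resolve(attribute_values: dict) -> object:
    """Given {precedence_type: value} for one attribute, return the
    value Chef would actually use."""
    winner = max(attribute_values, key=PRECEDENCE.index)
    return attribute_values[winner]

# default['app']['port'] = 3000 and override['app']['port'] = 8080:
print(resolve({"default": 3000, "override": 8080}))  # -> 8080
```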
### list_cookbook_structure
List the structure of a Chef cookbook directory.
What it does: Scans an entire Chef cookbook directory and presents a structured view of all recipes, templates, resources, attributes, files, and other components. Like a "table of contents" for your cookbook.
Why you need this: Before migrating a cookbook, you need to understand its size and complexity. This tool gives you a bird's-eye view of everything that needs conversion, helping you estimate effort and plan your approach.
What you get:
- Complete directory tree of the cookbook
- Count of recipes, templates, resources, etc.
- File paths for each component
- Quick overview of cookbook complexity
Real-world example: Running this on your database cookbook shows it has 3 recipes, 8 templates, 2 custom resources, and 1 attributes file. You now know you need to create an Ansible role with 3 task files, 8 Jinja2 templates, and variable definitions.
Parameters:
- path (string, required): Path to the cookbook root directory
Returns: - Formatted string showing the cookbook structure
Example Usage:
## Resource Conversion
### convert_resource_to_task
Convert a Chef resource to an Ansible task with automatic module selection.
What it does: Takes a single Chef resource (like package 'nginx') and converts it to the equivalent Ansible task with the appropriate module. Automatically selects the best Ansible module for the Chef resource type and handles property mapping.
Why you need this: Chef and Ansible have different syntax and different module names. Chef's package becomes Ansible's apt, yum, or package. Chef's service has different properties than Ansible's service. This tool knows all the mappings and handles the conversion automatically, including tricky cases like guards (only_if, not_if) becoming Ansible's when conditions.
What you get:
- Valid Ansible YAML task ready to use
- Correct module selection (e.g., apt vs yum based on context)
- Properties translated to Ansible syntax
- Guards converted to when conditions
- Comments explaining the conversion
Real-world example: Chef's package 'nginx' do action :install end becomes an Ansible task named "Install nginx" that uses the apt module with name: nginx and state: present. All syntax differences are handled automatically.
Parameters:
- resource_type (string, required): The Chef resource type (e.g., 'package', 'service')
- resource_name (string, required): The name of the resource
- action (string, optional, default: "create"): The Chef action (e.g., 'install', 'start')
- properties (string, optional): Additional resource properties as a string representation
Returns: - YAML representation of the equivalent Ansible task
Example Usage:
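Conceptually this is a lookup table from (Chef resource type, action) to (Ansible module, argument defaults), plus property mapping. A toy sketch — the fully-qualified module names are standard Ansible builtins, but the table and `convert` function are illustrative, not SousChef's mapping:

```python
# Hypothetical mapping table: (chef_type, chef_action) -> (module, extra args).
CHEF_TO_ANSIBLE = {
    ("package", "install"): ("ansible.builtin.package", {"state": "present"}),
    ("package", "remove"):  ("ansible.builtin.package", {"state": "absent"}),
    ("service", "start"):   ("ansible.builtin.service", {"state": "started"}),
    ("service", "enable"):  ("ansible.builtin.service", {"enabled": True}),
}

def convert(resource_type: str, resource_name: str, action: str) -> dict:
    """Build an Ansible task dict for one Chef resource."""
    module, args = CHEF_TO_ANSIBLE[(resource_type, action)]
    return {
        "name": f"{action.capitalize()} {resource_name}",
        module: {"name": resource_name, **args},
    }

task = convert("package", "nginx", "install")
print(task)
# {'name': 'Install nginx',
#  'ansible.builtin.package': {'name': 'nginx', 'state': 'present'}}
```

The real tool also picks distro-specific modules (apt vs yum) where context demands it and translates guards into when clauses.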
## InSpec Integration
### convert_inspec_to_test
Convert InSpec controls to Ansible test format.
What it does: Converts Chef InSpec test suites (compliance and testing code) into Ansible-compatible testing formats like Testinfra (Python-based), Ansible's built-in assert module, ServerSpec (Ruby-based), or Goss (YAML-based). InSpec is Chef's testing framework - think of it like unit tests for infrastructure.
Why you need this: If your Chef cookbooks have InSpec tests (they should!), you want to preserve that testing in Ansible. These tests verify your infrastructure is configured correctly. This tool automatically converts InSpec's Ruby-based syntax to multiple testing formats, saving hours of manual test rewriting.
What you get:
- InSpec controls converted to Testinfra, Ansible assert, ServerSpec, or Goss format
- All test cases preserved with equivalent checks
- Directory structure for test organisation
- Ready-to-run test files
Real-world example: Your InSpec test describe service('nginx') do it { should be_running } end becomes:
- Testinfra: def test_nginx_running(host): assert host.service("nginx").is_running
- Ansible: assert: that: "'nginx' in services"
- ServerSpec: describe service('nginx') do it { should be_running } end
- Goss: service: nginx: running: true
Parameters:
- inspec_path (string, required): Path to InSpec profile or control file
- output_format (string, optional, default: "testinfra"): Output format ('testinfra', 'ansible_assert', 'serverspec', or 'goss')
Returns: - Converted test code in specified format, or error message
Example Usage:
### generate_inspec_from_recipe
Generate InSpec controls from a Chef recipe to validate conversions.
What it does: Analyses a Chef recipe and automatically generates InSpec test cases that verify what the recipe does. If your recipe installs nginx and starts it, this tool creates tests that check nginx is installed and running. This is test generation, not conversion.
Why you need this: Many Chef cookbooks lack proper tests. This tool creates tests from your recipes automatically, which you can then run before and after migration to verify your Ansible conversion works identically to the original Chef code. It's your safety net.
What you get:
- Complete InSpec test suite generated from recipe resources
- Tests for packages installed, services running, files created, etc.
- Validation that your Ansible conversion has the same effect as Chef
- Confidence that nothing was missed in migration
Real-world example: Your recipe installs postgresql and creates /etc/postgresql/postgresql.conf. This tool generates InSpec tests checking package installation and file existence. Run these tests before migration (Chef) and after (Ansible) to verify identical results.
Parameters:
- recipe_path (string, required): Path to the Chef recipe file
Returns: - Generated InSpec control code, or error message
Example Usage:
## Data Bags
### convert_chef_databag_to_vars
Convert Chef data bags to Ansible variables or vault.
What it does: Converts Chef data bags (JSON files storing configuration data) into Ansible variables or Ansible Vault encrypted files. Data bags are Chef's way of storing data separately from cookbooks - things like database passwords, API keys, user lists, etc.
Why you need this: Chef data bags and Ansible variables serve the same purpose but use different formats and locations. Chef uses JSON in data_bags/, Ansible uses YAML in group_vars/, host_vars/, or vault/. This tool handles the conversion and knows when to use encrypted vaults for sensitive data.
What you get:
- Chef data bag JSON converted to Ansible YAML variables
- Automatic detection of sensitive data (passwords, keys) with vault encryption
- Proper variable naming (Chef's data_bag_item becomes Ansible's group_vars)
- Correct file structure for Ansible inventory
Real-world example: Your Chef data bag data_bags/secrets/database.json with {"id": "database", "password": "secret123"} becomes Ansible vault group_vars/all/vault.yml with vault_database_password: secret123 properly encrypted.
Parameters:
- databag_content (string, required): The JSON content of the data bag
- databag_name (string, required): Name of the data bag
- item_name (string, optional, default: "default"): Name of the data bag item
- is_encrypted (boolean, optional, default: false): Whether the data bag is encrypted
- target_scope (string, optional, default: "group_vars"): Target scope ('group_vars', 'host_vars', or 'vault')
Returns: - Converted Ansible variables in YAML format
Example Usage:
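The sensitive-data split described above can be sketched as: parse the data bag JSON, then route any sensitive-looking key into a vault-bound variable with a vault_ prefix. Illustrative only; the key list, naming scheme, and `databag_to_vars` function are assumptions, not SousChef's actual heuristics:

```python
import json

# Hypothetical heuristic: key names that should end up in Ansible Vault.
SENSITIVE_KEYS = {"password", "secret", "token", "key", "api_key"}

def databag_to_vars(databag_name: str, content: str) -> tuple[dict, dict]:
    """Split a data bag item into plain vars and vault-bound vars."""
    item = json.loads(content)
    plain, vault = {}, {}
    for key, value in item.items():
        if key == "id":          # data bag item id, not configuration data
            continue
        var_name = f"{databag_name}_{key}"
        if key.lower() in SENSITIVE_KEYS:
            vault[f"vault_{var_name}"] = value
        else:
            plain[var_name] = value
    return plain, vault

plain, vault = databag_to_vars(
    "database", '{"id": "database", "host": "db1", "password": "secret123"}'
)
print(plain)  # {'database_host': 'db1'}
print(vault)  # {'vault_database_password': 'secret123'}
```

The vault dict would then be written to something like group_vars/all/vault.yml and encrypted with ansible-vault.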
### analyze_chef_databag_usage
Analyze data bag usage in cookbooks and provide migration recommendations.
What it does: Scans your Chef cookbook to find everywhere data bags are referenced (like data_bag_item('users', 'admin')), analyses how they're used, and recommends the best Ansible approach for each use case.
Why you need this: Data bags are often scattered throughout recipes, templates, and attributes. Finding all references manually is tedious and error-prone. This tool finds them all and, crucially, recommends whether each should become group_vars, host_vars, or encrypted vault based on usage patterns.
What you get:
- Complete list of all data bag references in the cookbook
- What data each reference accesses
- Migration recommendation for each (group_vars, host_vars, or vault)
- Impact analysis (how many recipes/files need updating)
Real-world example: Analysis reveals your cookbook accesses data_bag('users') in 5 recipes and data_bag_item('secrets', 'api_key') in 2 templates. Tool recommends: users → group_vars (shared across servers), api_key → vault (sensitive credential).
Parameters:
- cookbook_path (string, required): Path to the cookbook directory
- databags_path (string, optional): Path to the data bags directory
Returns: - Analysis report with usage patterns and migration recommendations
Example Usage:
## Environments
### convert_chef_environment_to_inventory_group
Convert a Chef environment to an Ansible inventory group.
What it does: Converts a Chef environment file (like environments/production.rb) into an Ansible inventory group with equivalent settings. Chef environments separate configurations for dev/staging/production; Ansible uses inventory groups for the same purpose.
Why you need this: Chef environments define environment-specific settings like cookbook versions, attribute overrides, and node constraints. Ansible achieves this through inventory groups and group_vars. This tool translates between the two systems, preserving your environment isolation.
What you get:
- Ansible inventory group for the environment (e.g., [production])
- Group variables matching Chef environment attributes
- Cookbook version constraints translated to role/collection versions
- Ready-to-use inventory structure
Real-world example: Your Chef environments/production.rb defining override_attributes['app']['port'] = 8080 and cookbook_versions['nginx'] = '= 2.0.0' becomes Ansible inventory group [production] with group_vars/production.yml containing app_port: 8080 and role version pinning.
Parameters:
- environment_content (string, required): The content of the Chef environment file
- environment_name (string, required): Name of the environment
- include_constraints (boolean, optional, default: true): Include cookbook version constraints
Returns: - Ansible inventory group configuration in YAML format
Example Usage:
### generate_inventory_from_chef_environments
Generate complete Ansible inventory from Chef environments directory.
What it does: Scans an entire directory of Chef environment files and generates a complete Ansible inventory structure with all environments as groups, including all variables and settings. Does for all environments what convert_chef_environment_to_inventory_group does for one.
Why you need this: Instead of converting environments one-by-one, this tool processes your entire environments/ directory at once, creating a complete, production-ready Ansible inventory. Saves significant time when you have multiple environments (dev, test, staging, production, DR, etc.).
What you get:
- Complete Ansible inventory in YAML or INI format
- All Chef environments as inventory groups
- All group_vars files for each environment
- Proper inventory structure following Ansible best practices
- Ready to use with ansible-playbook
Real-world example: Your Chef environments/ with dev.rb, staging.rb, and production.rb becomes Ansible inventory/ with hosts file defining [dev], [staging], [production] groups, plus group_vars/dev.yml, group_vars/staging.yml, group_vars/production.yml with respective settings.
Parameters:
- environments_directory (string, required): Path to the environments directory
- output_format (string, optional, default: "yaml"): Output format ('yaml' or 'ini')
Returns: - Complete Ansible inventory configuration
Example Usage:
### analyze_chef_environment_usage
Analyze environment usage in cookbooks and suggest migration strategy.
What it does: Examines how your Chef cookbook uses environments (like node.chef_environment or node.environment), identifies patterns, and recommends the best Ansible inventory strategy for your use case.
Why you need this: Different cookbooks use Chef environments in different ways - some for simple dev/prod split, others for complex multi-tenant setups. This tool understands these patterns and recommends whether you need simple inventory groups, multiple inventory files, or dynamic inventory scripts.
What you get:
- All environment references in cookbook code
- Usage patterns identified (simple vs complex)
- Recommended Ansible inventory architecture
- Migration complexity assessment
- Step-by-step migration strategy
Real-world example: Analysis shows your cookbook checks node.chef_environment == 'production' in 3 recipes to conditionally configure replication. Tool recommends: use inventory groups with when: inventory_hostname in groups['production'] in your Ansible playbooks.
Parameters:
- cookbook_path (string, required): Path to the cookbook directory
- environments_path (string, optional): Path to the environments directory
Returns: - Analysis report with usage patterns and migration recommendations
Example Usage:
## Migration Assessment
### assess_chef_migration_complexity
Assess the complexity of migrating Chef cookbooks to Ansible.
What it does: Analyses one or more Chef cookbooks and calculates a complexity score based on factors like number of resources, custom resources, Ruby code complexity, guard conditions, community cookbook dependencies, and template usage. Think of it as a "how hard will this migration be?" calculator.
Why you need this: Before starting a migration, you need realistic effort estimates for planning, budgeting, and resource allocation. This tool prevents surprises by identifying complexity factors upfront. A cookbook with 10 simple resources is very different from one with 50 resources, 5 custom LWRPs, and heavy Ruby logic.
What you get:
- Overall complexity score (Low/Medium/High/Very High)
- Breakdown by complexity factor (resources, custom code, dependencies, etc.)
- Estimated effort in person-hours or days
- Risk factors (things likely to cause problems)
- Recommended migration approach based on complexity
Real-world example: Assessing your database cookbook returns "Medium complexity (32 hours estimated)" because it has 25 resources, 2 custom resources, and depends on 3 community cookbooks. This helps you plan sprint capacity and team allocation.
Parameters:
- cookbook_paths (string, required): Comma-separated paths to cookbook directories
- migration_scope (string, optional, default: "full"): Scope of migration ('full', 'partial', or 'analysis_only')
- target_platform (string, optional, default: "ansible_awx"): Target platform ('ansible_awx', 'ansible_tower', or 'ansible_core')
Returns: - Comprehensive complexity assessment with scores and recommendations
Example Usage:
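The idea of a weighted complexity calculator can be sketched as follows. The weights and thresholds here are invented purely for illustration; SousChef's actual scoring model is not documented on this page:

```python
def complexity_score(resources: int, custom_resources: int,
                     community_deps: int, templates: int) -> tuple[int, str]:
    """Toy weighted score: custom resources and community cookbook
    dependencies cost far more to migrate than plain resources.
    Weights and thresholds are illustrative assumptions."""
    score = resources * 1 + custom_resources * 10 + community_deps * 5 + templates * 2
    for threshold, rating in [(30, "Low"), (80, "Medium"), (150, "High")]:
        if score <= threshold:
            return score, rating
    return score, "Very High"

# 25 resources, 2 custom resources, 3 community deps, 8 templates:
print(complexity_score(25, 2, 3, 8))  # (76, 'Medium')
```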
### generate_migration_plan
Generate a detailed migration plan for Chef to Ansible conversion.
What it does: Creates a comprehensive, phased migration plan with specific tasks, timeline, dependencies, and milestones. Goes beyond just complexity assessment to give you an actionable project plan.
Why you need this: Migrating Chef to Ansible isn't just technical conversion - it's a project requiring planning, sequencing, testing, and validation. This tool generates a realistic plan considering your chosen strategy (phased rollout vs big bang), timeline constraints, and dependencies between cookbooks.
What you get:
- Phase-by-phase migration plan (e.g., Phase 1: Assessment, Phase 2: Core cookbooks, Phase 3: Applications)
- Specific tasks for each phase with effort estimates
- Dependency-aware sequencing (migrate base cookbooks before dependent ones)
- Testing and validation checkpoints
- Rollback contingency plans
- Timeline with milestones
Real-world example: For a 12-week timeline and phased strategy, the plan might say: Weeks 1-2 (Assessment + tooling setup), Weeks 3-6 (Convert base cookbooks: apt, users, security), Weeks 7-10 (Convert application cookbooks: database, web-server), Weeks 11-12 (Testing + deployment).
Parameters:
- cookbook_paths (string, required): Comma-separated paths to cookbook directories
- migration_strategy (string, optional, default: "phased"): Migration strategy ('phased', 'big_bang', or 'parallel_run')
- timeline_weeks (integer, optional, default: 12): Timeline for migration in weeks
Returns: - Detailed migration plan with phases, tasks, and timeline
Example Usage:
### analyze_cookbook_dependencies
Analyze dependencies between cookbooks to determine migration order.
What it does: Maps out the dependency graph of your Chef cookbooks (which cookbooks depend on which others) and recommends the optimal order to migrate them. Uses the depends declarations in metadata.rb plus analysis of actual cookbook usage patterns.
Why you need this: You can't migrate cookbooks in random order - if cookbook A depends on cookbook B, you must migrate B first. With dozens of cookbooks and complex dependency chains, figuring this out manually is time-consuming and error-prone. This tool does the analysis automatically and handles circular dependencies.
What you get:
- Complete dependency graph showing all cookbook relationships
- Recommended migration order (which to do first, second, third, etc.)
- Circular dependency detection and resolution strategies
- Groupings of cookbooks that can be migrated in parallel
- Critical path analysis (dependencies that would block everything else)
Real-world example: Analysis reveals: base cookbook has no dependencies (migrate first), database depends on base (migrate second), application depends on both base and database (migrate last). Attempting to migrate application first would fail.
Parameters:
- cookbook_paths (string, required): Comma-separated paths to cookbook directories
Returns: - Dependency analysis with recommended migration order
Example Usage:
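The ordering problem in the base/database/application example is a topological sort of the dependency graph. A sketch of that idea using Python's standard library (the graph data is from the example above; this is not SousChef's code):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each cookbook maps to the cookbooks it depends on (from metadata.rb).
deps = {
    "base": set(),
    "database": {"base"},
    "application": {"base", "database"},
}

# static_order() emits dependencies before their dependents,
# which is exactly the safe migration order.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['base', 'database', 'application']
```

A circular dependency makes TopologicalSorter raise CycleError, which is the same condition the tool detects and offers resolution strategies for.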
### generate_migration_report
Generate a comprehensive migration report.
What it does: Creates a complete migration report combining complexity assessment, dependency analysis, conversion details, testing coverage, and recommendations. Outputs as Markdown (for engineers), HTML (for sharing), or JSON (for automation/integrations).
Why you need this: Stakeholders, managers, and auditors need documentation of the migration. This tool generates executive summaries, technical details, risk assessments, and recommendations in presentation-ready formats. It's your "migration in a document" for getting buy-in and tracking progress.
What you get:
- Executive summary with effort estimates and risks
- Detailed technical analysis of each cookbook
- Conversion coverage (what's automated vs manual)
- Testing strategy and coverage percentages
- Recommended approach and timeline
- Resource requirements (team size, skills needed)
- Success criteria and validation checkpoints
Real-world example: Generated HTML report shows: 15 cookbooks analyzed, total estimated effort 420 hours over 14 weeks, 3 high-risk items identified (custom Chef providers requiring manual porting), recommended phased approach starting with base infrastructure. This document becomes your migration proposal.
Parameters:
- cookbook_paths (string, required): Comma-separated paths to cookbook directories
- report_format (string, optional, default: "markdown"): Report format ('markdown', 'html', or 'json')
- include_technical_details (string, optional, default: "yes"): Include technical details ('yes' or 'no')
Returns: - Comprehensive migration report in specified format
Example Usage:
### validate_conversion
Validate converted Ansible code against original Chef code.
What it does: Compares your converted Ansible playbook/role with the original Chef recipe to verify they're functionally equivalent. Checks that all resources were converted, properties are correct, guard conditions are preserved, and nothing was missed or misinterpreted.
Why you need this: Automated conversion is fast but needs validation. This tool catches mistakes like: forgetting to convert a guard condition, using wrong Ansible module parameters, missing template variables, or incorrect action mappings. It's your quality assurance for conversion accuracy.
What you get:
- Line-by-line comparison of Chef vs Ansible
- List of any missing or incorrectly converted resources
- Property/parameter mapping verification
- Guard condition translation checks
- Overall conversion accuracy percentage
- Specific suggestions for fixing issues found
Real-world example: Validation reveals: 18 of 20 Chef resources correctly converted, but 2 issues: (1) Chef's not_if guard on package resource missing equivalent Ansible when condition, (2) Template variable node['app']['port'] not mapped to {{ app_port }}. You can now fix these specific issues.
Parameters:
- conversion_type (string, required): Type of conversion ('recipe_to_playbook', 'databag_to_vars', etc.)
- result_content (string, required): The converted Ansible content
- output_format (string, optional, default: "text"): Output format ('text', 'json', or 'yaml')
Returns: - Validation report with any issues or suggestions
Example Usage:
## Habitat
### parse_habitat_plan
Parse Habitat plan files for container conversion.
What it does: Reads and parses Chef Habitat plan.sh files (Bash scripts that define how to build Habitat packages) and extracts all the configuration, dependencies, build steps, and runtime settings. Habitat is Chef's application automation solution that packages apps with their dependencies.
Why you need this: Chef Habitat is being deprecated, and many organisations are migrating Habitat applications to containers (Docker/Kubernetes). This tool extracts all information from Habitat plans so you can convert them to Dockerfiles, docker-compose files, or Kubernetes manifests without manually reverse-engineering the plan.sh scripts.
What you get:
- Package name, version, and maintainer info
- All dependencies (pkg_deps) and build dependencies (pkg_build_deps)
- Build steps and configuration
- Exposed ports and volume mounts
- Runtime configuration and hooks
- Service dependencies and bindings
- Everything needed to write an equivalent Dockerfile
Real-world example: Your habitat/plan.sh for a web application shows it depends on core/node, exposes port 3000, runs build command npm install, and has startup hook node server.js. This tool extracts all details you need to create a FROM node:lts Dockerfile with correct CMD and EXPOSE directives.
Parameters:
- plan_path (string, required): Path to the Habitat plan.sh file
Returns: - Parsed Habitat plan information in JSON format
Example Usage:
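Asking your assistant to "parse the Habitat plan in this repo" is enough; the underlying call needs only the plan path (illustrative here):

```python
# Hypothetical parse_habitat_plan arguments.
args = {"plan_path": "habitat/plan.sh"}
```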
Performance#
profile_cookbook_performance#
Profile cookbook parsing performance and generate an optimisation report.
What it does: Measures how long it takes to parse all components of a Chef cookbook (recipes, templates, resources, attributes) and identifies bottlenecks. Provides detailed timing data and recommendations for optimising slow operations.
Why you need this: For large cookbooks (hundreds of files), parsing can be slow. When you're migrating dozens of cookbooks, every second counts. This tool identifies which parsing operations are slowest (usually large recipe files or complex ERB templates) and suggests optimisations like parallelisation, caching, or breaking up large files.
What you get:
- Total parsing time for the entire cookbook
- Per-file timing data (which recipes/templates are slowest)
- Bottleneck identification (what's taking the most time)
- Memory usage statistics
- Optimisation recommendations (e.g., "Consider splitting recipes/default.rb - 2,500 lines taking 3.2 seconds")
- Comparison against cookbook size benchmarks
Real-world example: Profiling your 50-recipe cookbook shows total parse time of 12 seconds, with recipes/deploy.rb alone taking 4 seconds due to 1,000 resources. Tool recommends splitting this recipe into logical sub-recipes (deploy_setup.rb, deploy_app.rb, deploy_finalize.rb) to improve parsing and conversion performance.
Parameters:
- cookbook_path (string, required): Path to the cookbook directory
Returns: - Performance report with timing, bottlenecks, and optimisation recommendations
Example Usage:
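A request like "profile parsing performance for my webapp cookbook" translates into a single-argument call (the path is illustrative):

```python
# Hypothetical profile_cookbook_performance arguments.
args = {"cookbook_path": "cookbooks/webapp"}
```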
profile_parsing_operation#
Profile a single parsing operation in detail.
What it does: Provides deep performance analysis of parsing a single file with microsecond-level timing, memory allocation tracking, and detailed execution traces. Like a performance profiler specifically for SousChef parsing operations.
Why you need this: When a specific file is causing performance problems, this tool shows exactly where the time is spent - Ruby parsing, AST traversal, property extraction, or conversion logic. Essential for troubleshooting performance issues or contributing performance improvements to SousChef.
What you get:
- Microsecond-precision timing for the operation
- Breakdown of time spent in each parsing phase
- Memory allocation and peak usage
- Function call counts and hotspots
- Detailed execution trace (if detailed=true)
- Comparative metrics (how this file compares to typical files of same type)
- Specific performance recommendations
Real-world example: Profiling parsing of recipes/complex.rb shows: Total time 850ms, with 600ms spent in Ruby AST parsing (slow), 200ms in resource extraction (normal), 50ms in output formatting (fast). Recommendation: This recipe has unusually complex Ruby metaprogramming slowing AST parsing - consider simplifying or expect manual conversion of metaprogrammed sections.
Parameters:
- operation (string, required): Operation to profile ('recipe', 'template', 'resource', 'attributes')
- file_path (string, required): Path to the file to parse
- detailed (boolean, optional, default: false): Include detailed profiling information
Returns: - Detailed performance metrics for the specified operation
Example Usage:
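To dig into the slow recipe from the example above, you might ask: "Profile parsing of recipes/complex.rb in detail." A sketch of the resulting arguments (file path hypothetical):

```python
# Hypothetical profile_parsing_operation arguments: deep-profile one recipe.
args = {
    "operation": "recipe",
    "file_path": "cookbooks/webapp/recipes/complex.rb",
    "detailed": True,
}
```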
CI/CD Pipeline Generation#
generate_jenkinsfile_from_chef#
Generate Jenkins pipeline from Chef cookbook CI patterns.
What it does: Analyses Chef cookbooks for testing tools and patterns (Test Kitchen, ChefSpec, Cookstyle) and generates a complete Jenkinsfile (Declarative or Scripted syntax) with appropriate pipeline stages, parallel execution, and artifact management.
Why you need this: Chef cookbooks often have established CI/CD patterns that need to be preserved in Ansible. This tool automatically detects Chef testing tools and converts them to equivalent Jenkins pipeline stages, maintaining your existing quality gates and testing workflows.
What you get:
- Complete Jenkinsfile with all detected testing stages
- Parallel execution for independent test suites
- Artifact collection and reporting
- Support for both Declarative and Scripted pipeline syntax
- Customisable options for caching and artifact handling
Real-world example: Your Chef cookbook uses Test Kitchen with multiple platforms, ChefSpec for unit tests, and Cookstyle for linting. This tool generates a Jenkinsfile with parallel stages for each platform, unit test execution, and linting checks.
Parameters:
- cookbook_path (string, required): Path to the Chef cookbook directory
- pipeline_type (string, optional, default: "declarative"): Pipeline syntax ('declarative' or 'scripted')
- enable_parallel (boolean, optional, default: true): Enable parallel execution of independent stages
Returns: - Complete Jenkinsfile content
Example Usage:
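"Generate a declarative Jenkinsfile for this cookbook with parallel test stages" maps to arguments like these (cookbook path illustrative):

```python
# Hypothetical generate_jenkinsfile_from_chef arguments.
args = {
    "cookbook_path": "cookbooks/webapp",
    "pipeline_type": "declarative",
    "enable_parallel": True,
}
```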
generate_gitlab_ci_from_chef#
Generate GitLab CI configuration from Chef cookbook testing patterns.
What it does: Scans Chef cookbooks for CI patterns and generates a complete .gitlab-ci.yml file with appropriate stages, jobs, and caching strategies. Converts Chef testing workflows to GitLab CI/CD pipelines.
Why you need this: GitLab CI has different syntax and concepts than Chef's testing approach. This tool bridges that gap by understanding Chef cookbook testing patterns and generating equivalent GitLab CI configurations with proper stage sequencing, artifact passing, and caching.
What you get:
- Complete .gitlab-ci.yml with all detected testing jobs
- Proper stage dependencies and artifact handling
- Caching configuration for faster builds
- Support for multiple testing frameworks
- Customisable options for different environments
Real-world example: Your Chef cookbook has Test Kitchen configurations for Ubuntu and CentOS, plus ChefSpec tests. This tool creates a GitLab CI pipeline with separate jobs for each platform, shared caching, and proper artifact collection.
Parameters:
- cookbook_path (string, required): Path to the Chef cookbook directory
- enable_cache (boolean, optional, default: true): Enable caching for faster builds
- enable_artifacts (boolean, optional, default: true): Enable artifact collection and passing
Returns: - Complete GitLab CI configuration
Example Usage:
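A prompt such as "create a GitLab CI pipeline from this cookbook's tests, with caching" would produce arguments along these lines (path illustrative):

```python
# Hypothetical generate_gitlab_ci_from_chef arguments.
args = {
    "cookbook_path": "cookbooks/webapp",
    "enable_cache": True,
    "enable_artifacts": True,
}
```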
generate_github_workflow_from_chef#
Generate GitHub Actions workflow from Chef cookbook CI patterns.
What it does: Analyses Chef cookbook testing setups and generates GitHub Actions workflow files (.github/workflows/ci.yml) with equivalent testing stages, matrix builds, and artifact management.
Why you need this: GitHub Actions uses different concepts than Chef testing. This tool understands Chef cookbook CI patterns and creates GitHub Actions workflows with proper job matrices, artifact uploads, and status checks.
What you get:
- Complete GitHub Actions workflow file
- Matrix builds for multiple platforms/environments
- Artifact collection and upload
- Status checks and required reviews
- Support for all detected Chef testing tools
- Customisable workflow naming and triggers
Real-world example: Your Chef cookbook tests on multiple platforms with Test Kitchen. This tool generates a GitHub Actions workflow with a matrix strategy testing on Ubuntu, CentOS, and Windows, plus proper artifact collection for test results.
Parameters:
- cookbook_path (string, required): Path to the Chef cookbook directory
- workflow_name (string, optional, default: "Chef Cookbook CI"): Name for the workflow file
- enable_cache (boolean, optional, default: true): Enable caching for faster builds
- enable_artifacts (boolean, optional, default: true): Enable artifact collection
Returns: - Complete GitHub Actions workflow configuration
Example Usage:
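For "generate a GitHub Actions workflow for this cookbook", the call might look like this sketch (the cookbook path is an assumption; defaults match the parameter list):

```python
# Hypothetical generate_github_workflow_from_chef arguments.
args = {
    "cookbook_path": "cookbooks/webapp",
    "workflow_name": "Chef Cookbook CI",
    "enable_cache": True,
    "enable_artifacts": True,
}
```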
AWX/AAP Integration#
generate_awx_job_template_from_cookbook#
Generate AWX/AAP job template from Chef cookbook.
What it does: Analyses a Chef cookbook and generates an AWX/AAP job template configuration that can be imported into your AWX/AAP instance. Converts cookbook metadata, dependencies, and execution patterns into AWX job template parameters.
Why you need this: AWX/AAP job templates define how Ansible playbooks are executed. This tool bridges Chef cookbooks to AWX by creating job templates that encapsulate the cookbook's execution requirements, variables, and dependencies.
What you get:
- Complete AWX job template JSON/YAML
- Proper inventory and credential associations
- Variable definitions from cookbook attributes
- Execution options and limits
- Survey specifications for runtime parameters
Real-world example: Your Chef cookbook requires specific credentials and has configurable attributes. This tool generates an AWX job template with the correct credential requirements, survey questions for attribute overrides, and proper execution settings.
Parameters:
- cookbook_path (string, required): Path to the Chef cookbook directory
- cookbook_name (string, required): Name of the cookbook for the job template
Returns: - AWX job template configuration
Example Usage:
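Both parameters are required, so "create an AWX job template for the webapp cookbook" becomes roughly (names illustrative):

```python
# Hypothetical generate_awx_job_template_from_cookbook arguments.
args = {"cookbook_path": "cookbooks/webapp", "cookbook_name": "webapp"}
```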
generate_awx_workflow_from_chef_runlist#
Generate AWX workflow from Chef run-list.
What it does: Converts Chef run-lists (sequences of recipes to execute) into AWX workflow templates with proper job ordering, dependencies, and execution flow. Handles complex run-list patterns including conditional execution.
Why you need this: Chef run-lists define execution order and dependencies. AWX workflows provide similar orchestration capabilities. This tool translates Chef execution patterns into AWX workflow nodes, edges, and conditions.
What you get:
- Complete AWX workflow template
- Job nodes for each cookbook/recipe
- Dependency relationships and execution order
- Conditional execution based on run-list logic
- Error handling and rollback options
Real-world example: Your Chef run-list executes base recipes first, then application recipes. This tool creates an AWX workflow with sequential job execution, proper failure handling, and success notifications.
Parameters:
- runlist (string, required): Chef run-list specification
- workflow_name (string, required): Name for the AWX workflow
Returns: - AWX workflow template configuration
Example Usage:
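Given a run-list like the one in the example above, the call might be sketched as (run-list and workflow name are hypothetical):

```python
# Hypothetical generate_awx_workflow_from_chef_runlist arguments.
args = {
    "runlist": "recipe[base],recipe[webapp::deploy]",
    "workflow_name": "webapp-deploy",
}
```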
generate_awx_inventory_source_from_chef#
Generate AWX dynamic inventory source from Chef server.
What it does: Creates AWX inventory source configurations that connect to Chef servers for dynamic inventory population. Generates scripts or configurations that query Chef server APIs to populate AWX inventories with current node information.
Why you need this: Chef servers maintain authoritative node inventories. AWX needs access to this data for orchestration. This tool creates the bridge between Chef server node data and AWX inventory management.
What you get:
- AWX inventory source configuration
- Dynamic inventory scripts (Python)
- Chef server API integration
- Node filtering and grouping options
- Credential management for Chef server access
Real-world example: Your Chef server has nodes tagged by environment and role. This tool generates an AWX inventory source that creates groups like [production], [webservers], [databases] automatically populated from Chef node data.
Parameters:
- chef_server_url (string, required): URL of the Chef server
- environment (string, required): Chef environment to filter nodes
- inventory_name (string, required): Name for the AWX inventory
Returns: - AWX inventory source configuration and scripts
Example Usage:
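"Build an AWX inventory source from my Chef server's production environment" could translate into (server URL and names illustrative):

```python
# Hypothetical generate_awx_inventory_source_from_chef arguments.
args = {
    "chef_server_url": "https://chef.example.com",
    "environment": "production",
    "inventory_name": "chef-production",
}
```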
Chef Server Integration#
Dynamic inventory generation and Chef Server connectivity for hybrid environments.
validate_chef_server_connection#
Validate Chef Server connectivity and authentication.
What it does: Tests the Chef Server REST API connection to verify connectivity, authentication, and configuration. Ensures your Chef server is accessible and properly configured before using it for inventory operations.
Why you need this: Before using your Chef server as a source for dynamic inventory or node information, you need to verify it's reachable and responding correctly. This tool does that validation instantly.
What you get:
- Confirmation that Chef Server is reachable
- Status of API connectivity
- Authentication verification
- Clear error messages if there are issues
Real-world example: Your DevOps team sets up a Chef server at https://chef.staging.example.com. Before configuring AWX to pull inventory from it, you use this tool to verify connectivity and get a success message confirming it's ready to use.
Parameters:
- server_url (string, required): Base URL of the Chef Server (e.g., https://chef.example.com)
- organisation (string, optional): Chef organisation name (default: default)
- client_name (string, required): Chef client or user name for authentication
- client_key_path (string, optional): Path to the client key file (PEM format)
- client_key (string, optional): Inline client key content (avoid when possible)
Returns: - Success/failure message with connection details
Example Usage:
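Matching the staging scenario above, a connection check might carry these arguments (organisation, client name, and key path are placeholders; prefer client_key_path over inline client_key, as noted in the parameter list):

```python
# Hypothetical validate_chef_server_connection arguments.
args = {
    "server_url": "https://chef.staging.example.com",
    "organisation": "example-org",
    "client_name": "souschef-client",
    "client_key_path": "~/.chef/souschef-client.pem",
}
```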
get_chef_nodes#
Query Chef Server for nodes matching search criteria.
What it does: Searches your Chef server for nodes matching specific criteria (by role, environment, platform, custom attributes, etc.) and extracts relevant metadata including roles, environment, platform, and IP addresses. Provides JSON output suitable for dynamic inventory generation.
Why you need this: Chef servers maintain the authoritative list of all your infrastructure nodes. Before containerising or migrating to Ansible, you need to understand your current node landscape. This tool queries that data instantly.
What you get:
- List of matching nodes with their names
- Roles assigned to each node
- Chef environment (production, staging, dev, etc.)
- Platform information (Ubuntu, CentOS, etc.)
- IP addresses (both private and FQDN)
- Node attributes relevant for inventory
Real-world example: Your Chef server has 150 nodes across multiple environments. You need to know which ones are web servers in production. Running this tool with search query role:web_server AND environment:production returns only the 15 production web servers with their IPs and roles, perfect for creating Ansible inventory groups.
Parameters:
- search_query (string, optional): Chef search query (default: '*:*' for all nodes)
  - Examples: role:web_server, environment:production, platform:ubuntu
- server_url (string, optional): Chef Server URL (defaults to CHEF_SERVER_URL)
- organisation (string, optional): Chef organisation (defaults to CHEF_ORG)
- client_name (string, optional): Client name (defaults to CHEF_CLIENT_NAME)
- client_key_path (string, optional): Client key path (defaults to CHEF_CLIENT_KEY_PATH)
- client_key (string, optional): Inline client key (defaults to CHEF_CLIENT_KEY)
Returns: - JSON string with list of matching nodes and their attributes
Example Usage:
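For the production web server scenario above, only the search query needs to be supplied; the connection parameters fall back to the CHEF_* environment variables:

```python
# Hypothetical get_chef_nodes arguments; connection details come from env vars.
args = {"search_query": "role:web_server AND environment:production"}
```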
get_chef_roles#
List Chef Server roles.
Parameters:
- server_url (string, optional): Chef Server URL (defaults to CHEF_SERVER_URL)
- organisation (string, optional): Chef organisation (defaults to CHEF_ORG)
- client_name (string, optional): Client name (defaults to CHEF_CLIENT_NAME)
- client_key_path (string, optional): Client key path (defaults to CHEF_CLIENT_KEY_PATH)
- client_key (string, optional): Inline client key (defaults to CHEF_CLIENT_KEY)
Returns: - JSON string with role summaries
get_chef_environments#
List Chef Server environments.
Parameters:
- server_url (string, optional): Chef Server URL (defaults to CHEF_SERVER_URL)
- organisation (string, optional): Chef organisation (defaults to CHEF_ORG)
- client_name (string, optional): Client name (defaults to CHEF_CLIENT_NAME)
- client_key_path (string, optional): Client key path (defaults to CHEF_CLIENT_KEY_PATH)
- client_key (string, optional): Inline client key (defaults to CHEF_CLIENT_KEY)
Returns: - JSON string with environment summaries
get_chef_cookbooks#
List Chef Server cookbooks.
Parameters:
- server_url (string, optional): Chef Server URL (defaults to CHEF_SERVER_URL)
- organisation (string, optional): Chef organisation (defaults to CHEF_ORG)
- client_name (string, optional): Client name (defaults to CHEF_CLIENT_NAME)
- client_key_path (string, optional): Client key path (defaults to CHEF_CLIENT_KEY_PATH)
- client_key (string, optional): Inline client key (defaults to CHEF_CLIENT_KEY)
Returns: - JSON string with cookbook summaries
get_chef_policies#
List Chef Server policies.
Parameters:
- server_url (string, optional): Chef Server URL (defaults to CHEF_SERVER_URL)
- organisation (string, optional): Chef organisation (defaults to CHEF_ORG)
- client_name (string, optional): Client name (defaults to CHEF_CLIENT_NAME)
- client_key_path (string, optional): Client key path (defaults to CHEF_CLIENT_KEY_PATH)
- client_key (string, optional): Inline client key (defaults to CHEF_CLIENT_KEY)
Returns: - JSON string with policy summaries
convert_template_with_ai#
Convert ERB templates to Jinja2 with AI-based validation.
What it does: Converts Chef ERB template files to Ansible Jinja2 format using rule-based conversion with optional AI-enhanced validation for complex Ruby logic. The tool first applies standard conversion rules, then optionally uses AI analysis to validate complex constructs and suggest improvements.
Why you need this: ERB templates are common in Chef cookbooks. Many are simple variable substitutions that convert easily, but some contain complex Ruby logic (loops, conditionals, method calls) that requires careful translation. This tool handles both cases: quick conversion for simple templates and intelligent analysis for complex ones.
What you get:
- Converted Jinja2 template ready for Ansible
- List of variables used in the template
- Warnings about complex Ruby constructs
- AI suggestions for improving the conversion (when AI enhancement is enabled)
- Conversion method used (rule-based or AI-enhanced)
Real-world example: Your cookbook has a simple ERB template <%= @app_name %> which converts instantly to Jinja2: {{ app_name }}. But you also have a complex template with Ruby conditionals and loops. For that one, you can enable AI enhancement to get insights on how to best structure it as Jinja2 loops and filters.
Parameters:
- erb_path (string, required): Path to the ERB template file
- use_ai_enhancement (boolean, optional): Use AI for complex conversions (default: True)
Returns:
- JSON string with conversion results including:
- success: Whether conversion succeeded
- jinja2_output: The converted Jinja2 template
- variables: List of variables referenced
- warnings: Any issues found during conversion
- conversion_method: "rule-based", "ai-enhanced", or "rule-based-fallback"
Example Usage:
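"Convert templates/default/app.conf.erb to Jinja2, using AI for anything complex" might become (the template path is hypothetical):

```python
# Hypothetical convert_template_with_ai arguments.
args = {
    "erb_path": "templates/default/app.conf.erb",
    "use_ai_enhancement": True,
}
```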
Ansible Upgrade Planning#
Comprehensive Ansible version upgrade assessment, planning, and validation tools based on official Ansible–Python compatibility matrices.
Python version detection (internal capability)#
SousChef automatically detects the Python version in your Ansible control-node environments when running the Ansible upgrade planning tools (for example, assess_ansible_upgrade_readiness and plan_ansible_upgrade).
What it does: Transparently inspects the Python interpreter available in the relevant environment (virtual environment, container, or system Python) and derives a semantic version (for example, 3.10.5) for comparison with Ansible version requirements.
Why you need this: Before planning an Ansible upgrade, you need to know what Python version your control nodes are running. Python version directly affects which Ansible versions you can upgrade to — older Python versions may not be supported by newer Ansible releases. The upgrade tools use this internal capability so you do not need a separate MCP tool for Python version checks.
What you get via the upgrade tools:
- The detected Python version (for example, 3.10.5) considered as part of upgrade readiness
- Confirmation that Python is available in the specified environment, or clear errors when it is not
- Compatibility guidance between your Python version and the target Ansible release
Real-world example: You are planning to upgrade from Ansible 2.14 to 2.17. You run assess_ansible_upgrade_readiness against your control node environment at /opt/ansible/venv. Internally, SousChef detects that the environment is running Python 3.9.13 and factors that into the readiness report, including whether Python 3.9 supports Ansible 2.17 according to the compatibility matrix.
This capability is not exposed as a standalone MCP tool; instead, it is used internally by the Ansible upgrade planning tools so you can simply ask your AI assistant to assess or plan an upgrade without managing Python checks yourself.
assess_ansible_upgrade_readiness#
Assess the current Ansible environment for upgrade readiness.
What it does: Analyses your current Ansible installation and detects:
- Ansible version (for example, 2.14.0)
- Python version on control node
- Installed collections and their versions
- Ansible configuration from ansible.cfg
- Environment variables and settings
- Compatibility issues and warnings
- End-of-life (EOL) status
Returns a comprehensive assessment with all relevant environment information needed for upgrade planning.
Why you need this: Before planning an upgrade, you need a complete picture of your current Ansible environment. This tool gathers all that information instantly, providing the baseline needed for creating an upgrade plan.
What you get:
- Current Ansible version and full version string
- Python version information with compatibility status
- List of installed collections with versions
- EOL status and timeline (if applicable)
- Compatibility issues detected
- Actionable recommendations for upgrade preparation
- Warnings about deprecated features or EOL versions
Real-world example: Your Ansible environment runs Ansible 2.14.0 with Python 3.10, and has community.general 5.0.0 installed. This tool reports all that information plus warns that Ansible 2.14 reaches EOL in May 2026, helping you plan your upgrade timeline.
Parameters:
- environment_path (string, required): Path to Ansible environment directory containing playbooks, inventory, and configuration files
Returns: - JSON string with environment assessment including versions, installed collections, compatibility issues, and recommendations
Example Usage:
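"Assess my Ansible environment at /opt/ansible/venv for upgrade readiness" needs only the environment path (the path here matches the earlier example and is illustrative):

```python
# Hypothetical assess_ansible_upgrade_readiness arguments.
args = {"environment_path": "/opt/ansible/venv"}
```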
plan_ansible_upgrade#
Generate a detailed upgrade plan to move between Ansible versions.
What it does: Creates a comprehensive upgrade plan including:
- Upgrade path from current to target version
- Breaking changes you need to be aware of
- Pre-upgrade checklist (backup, compatibility checks)
- Step-by-step upgrade instructions
- Testing plan to validate the upgrade
- Post-upgrade validation steps
- Risk assessment and estimated downtime
- Rollback procedures in case of issues
The plan accounts for major version jumps (like 2.9→2.10 where collections were split) and provides intermediate versions if needed.
Why you need this: Ansible upgrades can be complex with breaking changes and compatibility issues. Manual planning is error-prone. This tool analyses the specific upgrade path and generates a customised plan for your situation.
What you get:
- Detailed upgrade path with all intermediate steps (markdown formatted)
- Complete checklist of pre-upgrade verification steps
- Breaking changes documented for your upgrade path
- Required actions (collection updates, Python upgrades, config changes)
- Step-by-step upgrade instructions with commands
- Comprehensive testing procedure
- Post-upgrade validation checklist
- Risk assessment (Low/Medium/High)
- Estimated downtime and effort
- Rollback plan if things go wrong
Real-world example: You want to upgrade from Ansible 2.14 to 2.17. This tool generates a plan that identifies the collections you'll need to update, breaks down the upgrade into manageable steps, provides a testing procedure to verify everything still works, and gives you a rollback plan if needed.
Parameters:
- environment_path (string, required): Path to Ansible environment directory
- target_version (string, required): Target Ansible version to upgrade to (for example, "2.17")
Returns: - Markdown-formatted upgrade plan with detailed steps and recommendations
Example Usage:
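Matching the 2.14-to-2.17 scenario above, "plan an upgrade of /opt/ansible/venv to Ansible 2.17" becomes (path illustrative):

```python
# Hypothetical plan_ansible_upgrade arguments.
args = {
    "environment_path": "/opt/ansible/venv",
    "target_version": "2.17",
}
```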
validate_ansible_collection_compatibility#
Validate that Ansible collections are compatible with a target Ansible version.
What it does: Checks Ansible collections from a requirements.yml file against a target Ansible version to determine compatibility. For each collection, verifies:
- Minimum Ansible version required
- Maximum Ansible version supported (if applicable)
- Known breaking changes in that version
- Recommended minimum version for the target Ansible release
Returns detailed compatibility information for upgrade planning.
Why you need this: When you upgrade Ansible, your installed collections might not be compatible with the new version. This tool identifies compatibility issues before you upgrade, so you can plan collection updates accordingly.
What you get:
- Compatibility status for each collection (compatible, needs update, not supported)
- Recommended versions for your target Ansible release
- List of breaking changes in target version
- Warnings about deprecated collection versions
- Migration guidance for incompatible collections
Real-world example: You want to upgrade to Ansible 2.17 and have community.general 3.0.0 in your requirements.yml. This tool reports that community.general 3.0.0 works with Ansible 2.17, but it's old and you should upgrade to 5.0.0 which has better features and bug fixes for 2.17.
Parameters:
- collections_file (string, required): Path to requirements.yml file containing collection specifications
- target_version (string, required): Target Ansible version to check compatibility against (for example, "2.17")
Returns:
- JSON string with compatibility report including:
  - compatible: List of compatible collections
  - updates_needed: Collections requiring version updates
  - warnings: List of compatibility warnings
  - Per-collection details:
    - current_version: Current installed version
    - compatible: Boolean indicating if compatible
    - recommended_version: Recommended version for target Ansible
    - breaking_changes: Any breaking changes in target version
Example Usage:
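"Check whether the collections in requirements.yml work with Ansible 2.17" maps to (the file path is an assumption):

```python
# Hypothetical validate_ansible_collection_compatibility arguments.
args = {"collections_file": "requirements.yml", "target_version": "2.17"}
```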
check_ansible_eol_status#
Check if an Ansible version is end-of-life (EOL) or approaching EOL.
What it does: Provides end-of-life status, security risk assessment, and recommendations for Ansible versions based on official support timelines. Checks:
- Whether the version has reached EOL
- EOL date (past or future)
- Days remaining until EOL or days since EOL
- Security risk level (LOW/MEDIUM/HIGH)
- Recommended actions
Why you need this: Running EOL software poses security risks as it no longer receives updates. This tool helps you understand the support status of your Ansible version and plan upgrades before support ends.
What you get:
- Clear EOL status (is_eol: true/false)
- EOL date for the version
- Human-readable status message
- Security risk assessment (LOW/MEDIUM/HIGH)
- Days remaining until EOL or days overdue since EOL
- Whether EOL is approaching (within 180 days)
Real-world example: You're running Ansible 2.9 and want to know its support status. This tool reports that Ansible 2.9 reached EOL on 2022-05-23, security risk is HIGH, and you're 1,357 days overdue for an upgrade.
Parameters:
- version (string, required): Ansible version string (for example, "2.9", "2.16")
Returns: - JSON string with EOL status including is_eol, eol_date, status message, security_risk, and days_remaining or days_overdue
Example Usage:
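For the Ansible 2.9 scenario above, the call takes just the version string:

```python
# Hypothetical check_ansible_eol_status arguments.
args = {"version": "2.9"}
```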
generate_ansible_upgrade_test_plan#
Generate a comprehensive testing plan for Ansible upgrade validation.
What it does: Creates a detailed testing plan covering:
- Pre-upgrade baseline establishment
- Post-upgrade validation procedures
- Regression testing steps
- Performance testing guidelines
- Acceptance criteria
- Sign-off checklist
The plan provides a structured approach to validating that your Ansible upgrade succeeded and everything still works as expected.
Why you need this: After upgrading Ansible, you need to verify that playbooks, roles, and collections still function correctly. This tool generates a comprehensive testing checklist to ensure nothing was broken by the upgrade.
What you get:
- Pre-upgrade baseline testing procedures
- Post-upgrade validation steps with commands
- Syntax validation checklist
- Dry-run testing guidance
- Integration testing procedures
- Performance comparison guidelines
- Regression testing steps
- Acceptance criteria for sign-off
Real-world example: You've just upgraded from Ansible 2.14 to 2.17 in your test environment. This tool generates a markdown checklist walking you through baseline capture, syntax checks, dry runs, full playbook execution, and performance validation to ensure the upgrade was successful.
Parameters:
- environment_path (string, required): Path to Ansible environment directory
Returns: - Markdown-formatted testing plan with checklists and procedures
Example Usage:
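"Generate a test plan for the upgrade of my environment at /opt/ansible/venv" requires only the environment path (illustrative, matching the earlier examples):

```python
# Hypothetical generate_ansible_upgrade_test_plan arguments.
args = {"environment_path": "/opt/ansible/venv"}
```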
PowerShell Migration#
Enterprise Windows automation — convert PowerShell provisioning scripts to idiomatic Ansible playbooks, roles, WinRM inventories, and AWX/AAP job templates using the ansible.windows, community.windows, and chocolatey.chocolatey collections.
parse_powershell#
Parse a PowerShell provisioning script and extract structured actions.
What it does: Analyses a .ps1 script using pattern matching to identify 28+ common Windows provisioning operations: Windows features, services, registry edits, file operations, MSI installs, Chocolatey packages, users/groups, firewall rules, scheduled tasks, environment variables, PS modules, certificates, WinRM, IIS, DNS, and ACL operations. Unrecognised commands are preserved as win_shell fallbacks with confidence warnings and source locations.
Why you need this: Before converting a PowerShell script you need to understand what it actually does. This tool gives you a structured inventory of every provisioning action so you can assess scope, estimate effort, and plan your Ansible migration.
What you get:
- Structured list of all recognised actions with type, parameters, and confidence
- Source location (line number) for every extracted action
- Metrics summary broken down by action category
- Warnings for unrecognised commands that will need manual review
Real-world example: A 300-line setup.ps1 that installs IIS, configures the Windows Firewall, and creates scheduled tasks is parsed into 42 structured actions with full parameter detail — ready for automated conversion.
Parameters:
- script_path (string, required): Path to the PowerShell script (.ps1 file)
Returns:
- JSON string with source, actions, warnings, and metrics keys
Example Usage:
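"Parse scripts/setup.ps1 and tell me what it does" boils down to a single argument (the script path is hypothetical):

```python
# Hypothetical parse_powershell arguments.
args = {"script_path": "scripts/setup.ps1"}
```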
convert_powershell#
Convert a PowerShell provisioning script to an Ansible playbook.
What it does: Maps recognised PowerShell provisioning actions to their idiomatic ansible.windows.*, community.windows.*, and chocolatey.chocolatey.* module equivalents. Produces a complete, runnable Ansible playbook YAML. Unrecognised commands fall back to ansible.windows.win_shell with warnings so nothing is silently lost.
Why you need this: Manually rewriting PowerShell scripts as Ansible playbooks is time-consuming and error-prone. This tool automates the mapping so you can focus on reviewing the output rather than writing boilerplate.
What you get:
- Complete Ansible playbook YAML using idiomatic Windows collection modules
- Task count breakdown (idiomatic vs. win_shell fallbacks)
- Warning list with source locations for every fallback task
- Ready to use with ansible-playbook against a WinRM inventory
Real-world example: `Install-WindowsFeature Web-Server` becomes an `ansible.windows.win_feature` task with `name: Web-Server`, `state: present`, and `include_management_tools: true` — idiomatic, idempotent, and production-ready.
Parameters:
- script_path (string, required): Path to the PowerShell script (.ps1 file)
- playbook_name (string, optional): Name for the generated Ansible play (default: powershell_migration)
- hosts (string, optional): Ansible inventory group or host pattern (default: windows)
Returns:
- JSON string with status, playbook_yaml, tasks_generated, win_shell_fallbacks, warnings, and source
Example Usage:
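To give a feel for the shape of the response, here is a minimal Python sketch that inspects the documented return keys. The values are invented for illustration; only the key names come from the Returns list above.

```python
import json

# Invented sample shaped like the documented return keys:
# status, playbook_yaml, tasks_generated, win_shell_fallbacks,
# warnings, source.
response = json.dumps({
    "status": "success",
    "playbook_yaml": "- hosts: windows\n  tasks: []\n",
    "tasks_generated": 12,
    "win_shell_fallbacks": 2,
    "warnings": ["line 40: unrecognised cmdlet, emitted win_shell"],
    "source": "setup.ps1",
})

result = json.loads(response)
# Idiomatic tasks = everything that did not fall back to win_shell.
idiomatic = result["tasks_generated"] - result["win_shell_fallbacks"]
print(f"{idiomatic} idiomatic tasks, {result['win_shell_fallbacks']} fallbacks")
```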
generate_windows_inventory_tool#
Generate a WinRM-ready Ansible inventory file for Windows managed nodes.
What it does: Produces an INI-format Ansible inventory with a [windows] group and a [windows:vars] section containing all the WinRM connection variables required by the ansible.windows collection (ansible_connection, ansible_winrm_transport, ansible_port, etc.).
Why you need this: Setting up WinRM inventory variables correctly is fiddly and easy to get wrong. This tool generates a battle-tested template so you can start running Windows playbooks immediately.
What you get:
- INI-format inventory/hosts file with [windows] group
- Pre-configured WinRM connection variables
- SSL and non-SSL variants supported
- Placeholder host comments with credential guidance
Real-world example: Generates an inventory for win01.example.com and win02.example.com with HTTPS WinRM on port 5986, ready for ansible-playbook -i inventory/hosts site.yml.
Parameters:
- hosts (string, optional): Comma-separated Windows host names or IPs (default: placeholder)
- winrm_port (integer, optional): WinRM HTTPS listener port (default: 5986)
- use_ssl (boolean, optional): Use HTTPS transport (default: true)
- validate_certs (boolean, optional): Validate WinRM SSL certificate (default: false)
Returns:
- INI-formatted inventory string ready to save as inventory/hosts
Example Usage:
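An illustrative result (host names and credentials are placeholders, and the exact variable set the tool emits may differ slightly):

```ini
# Illustrative output only — generate the real file with the tool.
[windows]
win01.example.com
win02.example.com

[windows:vars]
ansible_user=Administrator
ansible_password={{ vault_windows_password }}
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
```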
generate_windows_requirements#
Generate requirements.yml with required Ansible collections for Windows automation.
What it does: Examines the parsed PowerShell script to determine which Ansible collections are actually needed (ansible.windows, community.windows, chocolatey.chocolatey, etc.) and produces a requirements.yml file pinned to stable versions. When no script is provided, all Windows collections are included.
Why you need this: Manually identifying and versioning Ansible collection dependencies is error-prone. This tool auto-detects which collections your converted playbook needs so ansible-galaxy collection install -r requirements.yml just works.
What you get:
- requirements.yml with all needed Windows Ansible collections
- Pinned to tested, stable versions
- Tailored to your script when a path is provided (omits unused collections)
Real-world example: A script using Chocolatey installs produces a requirements.yml with both ansible.windows and chocolatey.chocolatey; a script with only Windows Features and Services omits the Chocolatey entry.
Parameters:
- script_path (string, optional): Path to a PowerShell script. When omitted all Windows collections are included.
Returns:
- YAML string for requirements.yml
Example Usage:
generate_powershell_role#
Generate a complete Ansible role structure from a PowerShell script.
What it does: Parses the PowerShell script and produces all files for a production-ready Ansible role: tasks/main.yml, handlers/main.yml, defaults/main.yml, vars/main.yml, meta/main.yml, README.md, a top-level playbook, WinRM inventory, group_vars/windows.yml, and requirements.yml. Returns a JSON map of relative path → file content.
Why you need this: A single tool call produces a complete, deployable Ansible role skeleton instead of requiring you to manually create a dozen files in the right directory structure. Ideal as a starting point for production Windows automation.
What you get:
- Full Ansible role directory structure
- tasks/main.yml with converted tasks
- handlers/main.yml for service restart handlers
- defaults/main.yml and vars/main.yml for variable management
- meta/main.yml with collection dependencies
- README.md with role documentation
- Top-level playbook, WinRM inventory, group_vars/windows.yml, and requirements.yml
Real-world example: Running this on a 50-line setup.ps1 produces 10 files ready to commit to your Ansible project and run against a WinRM inventory.
Parameters:
- script_path (string, required): Path to the PowerShell script (.ps1 file)
- role_name (string, optional): Name of the role directory (default: windows_provisioning)
- playbook_name (string, optional): Base name for the top-level playbook file (default: site)
- hosts (string, optional): Ansible inventory host/group pattern (default: windows)
Returns:
- JSON string with status, files (path → content map), and file_count
Example Usage:
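Because the tool returns a JSON map of relative path → file content, materialising the role on disk is a short loop. This sketch uses an invented three-file response; real output contains the full ten-file layout listed above.

```python
import json
import tempfile
from pathlib import Path

# Invented sample response: path -> content map plus file_count,
# as documented above. Real output has the full role layout.
response = json.dumps({
    "status": "success",
    "files": {
        "windows_provisioning/tasks/main.yml": "---\n# converted tasks\n",
        "windows_provisioning/defaults/main.yml": "---\n",
        "site.yml": "- hosts: windows\n  roles:\n    - windows_provisioning\n",
    },
    "file_count": 3,
})

result = json.loads(response)
out = Path(tempfile.mkdtemp())
for rel_path, content in result["files"].items():
    target = out / rel_path
    target.parent.mkdir(parents=True, exist_ok=True)  # create role dirs
    target.write_text(content)
```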
generate_powershell_job_template#
Generate an AWX/AAP Windows job template from a PowerShell script.
What it does: Parses the PowerShell script and produces a JSON configuration importable via awx-cli or the AWX/AAP REST API. The job template is pre-configured for WinRM Windows automation with optional survey specs derived from script variables, an action summary, and the exact CLI import command to run.
Why you need this: Manually creating AWX/AAP job templates with correct Windows credentials, inventory, and survey specs is tedious. This tool generates importable JSON so you can get your Windows automation running in AAP with a single awx command.
What you get:
- AWX/AAP-compatible job template JSON
- Pre-configured Windows credential and WinRM settings
- Optional survey spec for runtime variable overrides
- CLI import command ready to copy-paste
- Action summary showing what the job template will automate
Real-world example: Generates a job template named "Setup IIS Web Server" referencing your windows-migration-project project and windows-winrm-credential credential, ready to import with awx job_templates create.
Parameters:
- script_path (string, required): Path to the PowerShell script (.ps1 file)
- job_template_name (string, optional): Display name for the AWX job template (default: Windows PowerShell Migration)
- playbook (string, optional): Playbook file relative to project root (default: site.yml)
- inventory (string, optional): Inventory name or ID in AWX (default: windows-inventory)
- project (string, optional): Project name or ID in AWX (default: windows-migration-project)
- credential (string, optional): Windows credential name in AWX (default: windows-winrm-credential)
- environment (string, optional): Target environment label (default: production)
- include_survey (boolean, optional): Whether to generate a survey spec (default: true)
Returns:
- Formatted text block with job template JSON, CLI import command, and action summary
Example Usage:
analyze_powershell_fidelity#
Analyse migration fidelity for a PowerShell provisioning script.
What it does: Calculates the percentage of actions that can be automatically mapped to idiomatic Ansible modules (the fidelity score), lists actions needing manual review, and provides actionable next-step recommendations. A score of 100% means full automation is achievable; lower scores highlight areas requiring manual attention.
Why you need this: Before committing to a migration you need to know how much of the work can be automated vs. how much requires manual effort. This tool gives you that answer in seconds so you can plan your sprint and set stakeholder expectations.
What you get:
- Fidelity score (0–100%) — percentage of actions fully automatable
- Total action count broken down by automated, fallback, and manual-review
- List of specific actions requiring manual completion
- Actionable recommendations with suggested Ansible modules
Real-world example: A setup.ps1 with 40 actions scores 87.5% fidelity — 35 actions map automatically, 5 win_shell fallbacks need manual review. The report lists the 5 fallbacks and suggests replacements.
Parameters:
- script_path (string, required): Path to the PowerShell script (.ps1 file)
Returns:
- JSON string with fidelity_score, total_actions, automated_actions, fallback_actions, review_required, summary, and recommendations
Example Usage:
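The arithmetic behind the score is straightforward; this sketch reproduces the worked example above (35 of 40 actions mapped automatically):

```python
# Fidelity = share of actions that map automatically to idiomatic
# modules. Numbers taken from the worked example above.
total_actions = 40
automated_actions = 35   # mapped to ansible.windows / community.windows
fallback_actions = total_actions - automated_actions  # win_shell, manual review

fidelity_score = round(100 * automated_actions / total_actions, 1)
print(fidelity_score)  # 87.5, matching the example above
```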
Bash Script Migration#
Enterprise-grade conversion of provisioning Bash scripts to Ansible playbooks and roles. Built for teams that have dropped out of Salt, Puppet, or Chef into raw Bash — temporarily or permanently — and for teams using AI to author Ansible for AAP.
When to use Bash Script Migration
- You have provisioning scripts that install packages, manage services, or configure systems
- Your team is breaking out of a CM tool (Salt, Puppet, Chef) into Bash temporarily or permanently
- You want to land directly in Ansible/AAP without writing tasks by hand
- You need to identify hardcoded secrets before committing to a repository
parse_bash_script#
Parse a Bash provisioning script and extract structured patterns.
What it does: Reads a Bash script and detects 13 categories of provisioning operation:
| Category | Patterns detected | Target Ansible module |
|---|---|---|
| Packages | `apt-get`, `yum`, `dnf`, `zypper`, `apk`, `pip` | `ansible.builtin.apt`/`yum`/`dnf`/… |
| Services | `systemctl`, `service` | `ansible.builtin.service` |
| File writes | heredoc, `echo` redirect, `tee` | `ansible.builtin.copy` |
| Downloads | `curl`, `wget` | `ansible.builtin.get_url` |
| Users | `useradd`, `adduser`, `usermod`, `userdel` | `ansible.builtin.user` |
| Groups | `groupadd`, `groupmod`, `groupdel` | `ansible.builtin.group` |
| File permissions | `chmod`, `chown`, `chgrp` | `ansible.builtin.file` |
| Git operations | `git clone`, `git pull`, `git checkout` | `ansible.builtin.git` |
| Archives | `tar -x`, `unzip` | `ansible.builtin.unarchive` |
| sed operations | `sed -i` | `ansible.builtin.lineinfile` / `ansible.builtin.replace` |
| Cron jobs | `crontab`, `cron.d` writes | `ansible.builtin.cron` |
| Firewall rules | `ufw`, `firewall-cmd`, `iptables` | `community.general.ufw`, `ansible.posix.firewalld`, `ansible.builtin.iptables` |
| Hostname | `hostnamectl set-hostname` | `ansible.builtin.hostname` |
Bonus detection: environment variables (exported shell vars), sensitive data (passwords, API keys, private key material), and CM escape calls (Salt, Puppet, Chef invocations inside a Bash script).
Why you need this: Before converting, you need to understand what a Bash script actually does. This tool gives you a structured, categorised inventory with confidence scores, idempotency risks, and vault recommendations — making conversion planning fast and reliable.
What you get:
- Categorised inventory of all provisioning operations with line numbers
- Confidence scores (0–100%) for each detected pattern
- Idempotency risk warnings for non-idempotent patterns (e.g. unconditional writes)
- Environment variable list (with sensitive vars flagged)
- Sensitive data alerts (value redacted, line number shown)
- CM escape detection (Salt/Puppet/Chef calls embedded in Bash)
- Shell-fallback list for lines that cannot be mapped to a module
Real-world example: You have a 300-line bootstrap.sh that was the emergency replacement for a failing Salt state. Running this tool reveals 8 package installs, 3 service starts, 2 hardcoded passwords, and a salt-call escape that will need attention before landing in AAP.
Parameters:
- script_path (string, required): Path to the Bash script file
Returns:
- Human-readable summary of all detected patterns, warnings, and idempotency hints
Example Usage:
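For example, you might ask your assistant (path illustrative):

```
Parse ./bootstrap.sh with SousChef. List every provisioning pattern by
category, flag anything non-idempotent, and call out hardcoded secrets
or salt-call/puppet/chef escape hatches.
```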
convert_bash_to_ansible#
Convert a Bash provisioning script to an Ansible playbook.
What it does: Maps all detected Bash patterns to their optimal Ansible modules. High-confidence patterns (≥ 80%) become structured module tasks; low-confidence sections fall back to `ansible.builtin.shell` with `changed_when`, `failed_when`, and idempotency hints embedded as comments. Also returns an AAP hints block and a quality score.
Why you need this: Manually rewriting a Bash script as an Ansible playbook is tedious and error-prone. This tool does the mechanical conversion in seconds, leaving you with a playbook that is idiomatic, idempotent, and ready to review rather than ready to write.
What you get:
- Complete Ansible playbook YAML
- Per-task confidence metadata
- Idempotency report (risks, shell fallback count, suggestions)
- AAP hints — recommended Execution Environment image, credential types, survey variables derived from script environment variables, and actionable notes (vault warnings, missing collections, prerequisite packages)
- Quality score — A–F letter grade (A ≥ 90, B ≥ 75, C ≥ 60, D ≥ 40, F < 40), overall coverage percentage, and a ranked list of improvements
Quality score deductions:
- Each hardcoded secret deducts 5 points (max 20)
- Shell fallback tasks count as non-idempotent
Real-world example: Your deploy.sh installs nginx, creates a service user, writes a config, and then calls salt-call as an escape hatch. The converted playbook maps the first three operations to ansible.builtin.apt, ansible.builtin.user, and ansible.builtin.copy — with the salt call flagged as a CM escape needing manual review. Quality score B (77 %).
Parameters:
- script_path (string, required): Path to the Bash script file
Returns:
- JSON string with playbook_yaml, tasks, warnings, idempotency_report, quality_score, and aap_hints keys
Example Usage:
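The exact scoring formula is internal to the tool, but the deduction rule and letter bands stated above can be sketched like this (`quality_grade` is a hypothetical helper, not part of SousChef):

```python
# Hypothetical reconstruction of the documented grading rules:
# 5 points off per hardcoded secret (capped at 20), then a letter
# from the stated bands (A >= 90, B >= 75, C >= 60, D >= 40, else F).
def quality_grade(coverage_pct, hardcoded_secrets):
    score = coverage_pct - min(5 * hardcoded_secrets, 20)
    for letter, floor in (("A", 90), ("B", 75), ("C", 60), ("D", 40)):
        if score >= floor:
            return score, letter
    return score, "F"

# Two hardcoded passwords knock an 87% conversion into the B band.
print(quality_grade(87.0, 2))  # (77.0, 'B')
```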
generate_ansible_role_from_bash#
Generate a complete Ansible role directory structure from a Bash script.
What it does: Converts a Bash provisioning script into a full Ansible role — splitting tasks by category into separate task files, generating handlers, defaults, vars, meta, and a README. Sensitive environment variables are stubbed in defaults/main.yml with ansible-vault TODO comments rather than being embedded as plaintext.
Why you need this: A bare playbook is a start, but Ansible best practice for reusable automation is a role. This tool goes the extra mile: it produces a role that can be dropped directly into an AAP project, committed to Git, and executed via a job template with survey variables automatically derived from the script's environment variables.
What you get:
- tasks/main.yml — imports each category task file
- tasks/packages.yml — package install tasks
- tasks/services.yml — service management tasks
- tasks/users.yml — user and group management tasks
- tasks/files.yml — file write and permission tasks
- tasks/misc.yml — git, archives, sed, cron, firewall, hostname tasks
- handlers/main.yml — service restart handlers
- defaults/main.yml — environment variables as Ansible defaults; sensitive vars stubbed with vault comment
- vars/main.yml — empty placeholder
- meta/main.yml — role metadata (author, description, licence, min Ansible version)
- README.md — auto-generated role documentation
- Quality score and AAP hints in the JSON response
Real-world example: Your team has a setup_webserver.sh that has been running via Salt. You're migrating to AAP. Running this tool produces a webserver role with 6 task files. The DB_PASSWORD env var appears in defaults/main.yml stubbed as '' with a # TODO: set via ansible-vault comment. You commit the role, create an AAP project, and create a job template — the tool's AAP hints even tell you which Execution Environment image to use.
Parameters:
- script_path (string, required): Path to the Bash script file
- role_name (string, optional): Name for the generated role (default: bash_converted)
Returns:
- JSON string with status, role_name, files (dict of relative path → content), quality_score, and aap_hints keys
Example Usage:
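A request along these lines would exercise the tool (names illustrative):

```
Convert ./setup_webserver.sh into a full Ansible role named "webserver",
stub any secrets for ansible-vault, and tell me which Execution
Environment to use in AAP.
```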
Puppet Migration#
Convert Puppet manifests (.pp files) and module directories to idiomatic Ansible playbooks using ansible.builtin modules.
When to use Puppet Migration
- You have existing Puppet manifests you want to convert to Ansible
- You are migrating infrastructure managed by Puppet to AAP
- You need to translate Puppet classes and resource declarations to Ansible tasks
Supported Puppet Resource Types#
SousChef recognises 14 Puppet resource types. Ten are fully mapped to idiomatic Ansible modules; four (augeas, filebucket, notify, tidy) are recognised but produce ansible.builtin.debug placeholder tasks with manual-review guidance.
Fully mapped (10 types):
| Puppet Resource | Ansible Module |
|---|---|
| `package` | `ansible.builtin.package` |
| `service` | `ansible.builtin.service` |
| `file` (absent/directory) | `ansible.builtin.file` |
| `file` (with content) | `ansible.builtin.copy` |
| `file` (with source template) | `ansible.builtin.template` |
| `user` | `ansible.builtin.user` |
| `group` | `ansible.builtin.group` |
| `exec` | `ansible.builtin.command` (with idempotency warning) |
| `cron` | `ansible.builtin.cron` |
| `host` | `ansible.builtin.lineinfile` (with warning) |
| `mount` | `ansible.posix.mount` |
| `ssh_authorized_key` | `ansible.posix.authorized_key` |
Recognised but not auto-converted (4 types): augeas, filebucket, notify, tidy — each produces an ansible.builtin.debug placeholder task with a guidance message.
Unsupported DSL constructs (Hiera lookups, exported/virtual resources, create_resources) are flagged with line numbers and manual-review guidance — nothing is silently discarded.
parse_puppet_manifest#
Parse a Puppet manifest file (.pp) and extract resources, classes, and variables.
What it does: Reads a Puppet manifest and identifies all resource declarations, class definitions, variables, and any constructs that cannot be auto-converted (Hiera lookups, exported resources, etc.).
Parameters:
- manifest_path (string, required): Path to the Puppet manifest (.pp) file
Returns:
- Human-readable summary of all resources, classes, variables, and unsupported constructs with line numbers
Example Usage:
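For instance (path illustrative):

```
Parse manifests/init.pp with SousChef and list every resource, class,
and variable, plus anything (Hiera lookups, exported resources) that
cannot be auto-converted.
```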
parse_puppet_module#
Parse a Puppet module directory and analyse all manifests.
What it does: Recursively scans a Puppet module directory for .pp files and extracts all resources, classes, and defined types across every manifest.
Parameters:
- module_path (string, required): Path to the Puppet module directory
Returns:
- Aggregated summary of all resources and classes across all manifests in the module
Example Usage:
convert_puppet_manifest_to_ansible#
Convert a Puppet manifest to an Ansible playbook.
What it does: Parses the manifest and maps every resource declaration to the corresponding ansible.builtin module task. High-fidelity resources produce structured module tasks; unsupported constructs become ansible.builtin.debug placeholder tasks with guidance comments.
Parameters:
- manifest_path (string, required): Path to the Puppet manifest (.pp) file
Returns:
- YAML Ansible playbook string ready to review and deploy
Example Usage:
convert_puppet_module_to_ansible#
Convert an entire Puppet module directory to Ansible playbooks.
What it does: Iterates all .pp manifests in the module, converts each to Ansible tasks, and returns a consolidated playbook covering the full module.
Parameters:
- module_path (string, required): Path to the Puppet module directory
Returns:
- YAML Ansible playbook string combining all converted manifests
Example Usage:
convert_puppet_resource_to_task#
Convert a single Puppet resource declaration to an Ansible task.
What it does: Takes an inline Puppet resource definition string (not a file path) and returns the equivalent Ansible task YAML. Useful for quick one-off lookups during a migration.
Parameters:
- resource_type (string, required): Puppet resource type (e.g., package, service, file)
- title (string, required): Resource title (e.g., nginx, /etc/nginx/nginx.conf)
- attributes (dict, optional): Resource attributes (e.g., {"ensure": "installed"})
Returns:
- YAML Ansible task string
Example Usage:
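For instance, calling the tool with `resource_type="package"`, `title="nginx"`, and `attributes={"ensure": "installed"}` should come back as something along these lines, per the mapping table above (exact task name wording may differ):

```yaml
- name: Manage package nginx
  ansible.builtin.package:
    name: nginx
    state: present
```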
list_puppet_supported_resource_types#
List all Puppet resource types that SousChef can convert automatically.
What it does: Returns the full table of Puppet→Ansible module mappings.
Parameters: None
Returns:
- Human-readable table of resource types and their Ansible equivalents
Example Usage:
convert_puppet_manifest_to_ansible_with_ai#
Convert a Puppet manifest to Ansible using AI assistance for complex constructs.
What it does: Converts the manifest using rule-based mapping for standard resources, and uses a configured LLM to produce best-effort Ansible tasks for unsupported constructs (Hiera lookups, exported resources, create_resources, etc.).
Parameters:
- manifest_path (string, required): Path to the Puppet manifest (.pp) file
- ai_provider (string, optional): AI provider — anthropic, openai, watson, lightspeed (default: anthropic)
- api_key (string, optional): API key for the chosen provider
- model (string, optional): Model name (default: claude-3-5-sonnet-20241022)
- temperature (float, optional): Sampling temperature (default: 0.1)
- max_tokens (int, optional): Maximum tokens for AI response
- project_id (string, optional): Project ID for Watson/Lightspeed
- base_url (string, optional): Custom base URL for API calls
Returns:
- YAML Ansible playbook string with AI-generated tasks for unsupported constructs
Example Usage:
convert_puppet_module_to_ansible_with_ai#
Convert a Puppet module to Ansible using AI assistance for complex constructs.
What it does: Same as convert_puppet_manifest_to_ansible_with_ai but operates on a full module directory.
Parameters:
- module_path (string, required): Path to the Puppet module directory
- ai_provider, api_key, model, temperature, max_tokens, project_id, base_url — same as above
Returns:
- YAML Ansible playbook string covering all module manifests
Example Usage:
Tool Selection#
- Start with assessment: Use `assess_ansible_upgrade_readiness` to understand your current state (automatically detects Python version)
- Plan your upgrade: Use `plan_ansible_upgrade` to create a detailed upgrade path
- Check EOL status: Use `check_ansible_eol_status` to verify support timeline
- Validate collections: Use `validate_ansible_collection_compatibility` before upgrading
- Plan testing: Use `generate_ansible_upgrade_test_plan` to ensure thorough validation
- Validate conversions: Always use `validate_conversion` after converting resources or recipes
Error Handling#
All tools provide detailed error messages with suggestions:
- File not found errors include path verification tips
- Parse errors show line numbers and context
- Validation errors explain what needs fixing
- Connection errors provide troubleshooting steps
Workflow Recommendations#
Chef-to-Ansible Migration Workflow#
- Discovery: Use `list_cookbook_structure` and `read_cookbook_metadata`
- Analysis: Use `assess_chef_migration_complexity` and `analyze_cookbook_dependencies`
- Planning: Use `generate_migration_plan`
- Conversion: Use `convert_*` tools for individual resources
- Validation: Use `generate_inspec_from_recipe` and `validate_conversion`
- Assessment: Use `generate_migration_report`
- Chef Server Integration (optional): Use `validate_chef_server_connection` → `get_chef_nodes` → inventory generation
Bash Script Migration Workflow#
- Analyse: Use `parse_bash_script` to understand what the script does
- Review warnings: Check idempotency risks, sensitive data alerts, and CM escape calls
- Convert: Use `convert_bash_to_ansible` for a quick playbook, or `generate_ansible_role_from_bash` for a full reusable role
- Review quality score: Address improvements flagged in the A–F report
- Secure secrets: Move any detected credentials to ansible-vault (see `defaults/main.yml` stubs)
- AAP readiness: Use the `aap_hints` block to configure the correct Execution Environment and credentials in your AAP job template
Puppet Migration Workflow#
- Inventory: Use `parse_puppet_manifest` or `parse_puppet_module` to understand resource coverage
- Review unsupported constructs: Check Hiera lookups, exported resources, and `create_resources` flagged in the parse output
- Convert: Use `convert_puppet_manifest_to_ansible` or `convert_puppet_module_to_ansible` for standard manifests
- Handle complex DSL: Use `convert_puppet_manifest_to_ansible_with_ai` for manifests with unsupported constructs
- One-off tasks: Use `convert_puppet_resource_to_task` to convert individual resource declarations during review
- Validate: Review the generated playbook against the original manifest and test on a staging environment
Ansible Upgrade Workflow#
- Assessment: Use `assess_ansible_upgrade_readiness` (automatically detects Python version)
- EOL Check: Use `check_ansible_eol_status` to verify support timeline
- Planning: Use `plan_ansible_upgrade` for your current → target version
- Compatibility Check: Use `validate_ansible_collection_compatibility` for installed collections
- Testing Preparation: Use `generate_ansible_upgrade_test_plan`
- Execute Upgrade: Follow the plan generated by `plan_ansible_upgrade`
- Validate: Execute the testing plan from `generate_ansible_upgrade_test_plan`
See Also#
- CLI Usage Guide - Command-line interface for all tools
- Examples - Real-world usage examples
- Migration Guide - Step-by-step migration process
- Configuration - Configure SousChef for your environment
SaltStack Migration#
Complete enterprise-grade SaltStack-to-Ansible migration tools covering parsing, conversion, assessment, planning, and reporting. For the full migration methodology and concept mapping, see the Salt Migration Guide.
parse_salt_sls#
Parse a SaltStack SLS state file and extract all states, pillar references, and grain usage.
What it does: Reads a Salt SLS state file and extracts every state declaration, including the state module, state function, parameters, and requisites. Also identifies all pillar references (pillar.get, {{ pillar['...'] }}) and grain references used within the file.
Why you need this: SLS files are the primary unit of Salt configuration. Before converting them to Ansible, you need to understand their contents—what state modules are used, how requisites chain states together, and which pillar values need to be migrated to Ansible variables. This tool provides that structured analysis.
What you get:
- Complete list of all states and their parameters
- State module and function for each declaration (e.g., pkg.installed, service.running)
- All requisites (require, watch, onchanges, onfail)
- Pillar references and their default values
- Grain references used for conditional logic
Parameters:
- sls_path (string, required): Path to the SLS state file
Returns:
- JSON string with extracted states, pillar references, grain references, and requisite graph
Example Usage:
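You might ask (path illustrative):

```
Parse salt/webserver/init.sls with SousChef. Show each state, its
requisites, and every pillar or grain reference I'll need to migrate
to Ansible variables.
```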
parse_salt_pillar#
Parse a Salt pillar file and extract all variable definitions.
What it does: Reads a Salt pillar SLS file and extracts the complete variable tree it defines. Identifies nested structures, default values, and whether values appear to be sensitive (passwords, keys, tokens) so they can be targeted for Ansible Vault during conversion.
Why you need this: Salt pillars are the primary mechanism for storing configuration data, including secrets. Before converting states, you need a complete inventory of all pillar variables so you can map them to the correct Ansible variable files (group_vars/, host_vars/) or Ansible Vault.
What you get:
- Complete variable tree from the pillar file
- Identification of potentially sensitive values
- Nested key paths for each variable (e.g., database:host, database:password)
- Suggested Ansible variable names (flattened from Salt nested structure)
Parameters:
- pillar_path (string, required): Path to the pillar SLS file
Returns:
- JSON string with extracted variable tree, sensitivity classification, and suggested Ansible variable names
Example Usage:
parse_salt_top#
Parse the Salt top.sls file and extract all environment, target, and state mappings.
What it does: Reads the Salt top.sls file (the master targeting file that maps minions to states) and extracts the full targeting tree. Understands glob, grain, compound, and nodegroup matchers. Produces a structured map of which hosts receive which states in which environments.
Why you need this: top.sls is the starting point for understanding your entire Salt infrastructure. Its targeting rules become your Ansible inventory groups. Without understanding it, you cannot correctly map minions to Ansible host groups or ensure every host receives the right playbooks.
What you get:
- All environment blocks (base, production, staging, etc.)
- Targeting expressions per environment (glob, grain, compound)
- States assigned to each target
- Matcher type for each target (glob, grain, compound, nodegroup, pcre)
- Suggested Ansible inventory group names
Parameters:
- top_path (string, required): Path to the top.sls file
Returns:
- JSON string with environment → target → state mappings, matcher types, and suggested inventory groups
Example Usage:
parse_salt_directory#
Scan a full Salt state tree directory and produce a structural inventory.
What it does: Recursively scans a Salt state directory and catalogues every SLS file found, grouping them by logical role (based on directory structure). Identifies init.sls files, detects included states, and builds a dependency summary across the tree.
Why you need this: Before assessing or converting a large Salt installation, you need to know what you are working with. This tool gives you an instant overview of the entire state tree—how many states exist, how they are organised, and which states include or require others.
What you get:
- Complete list of all SLS files in the tree
- Logical grouping by directory (each directory typically maps to a role)
- Counts of states per directory
- Cross-directory include relationships
- Summary statistics (total files, total states, unique state modules used)
Parameters:
- salt_dir (string, required): Path to the Salt states directory
Returns:
- JSON string with directory structure, file inventory, include relationships, and summary statistics
Example Usage:
convert_salt_to_ansible#
Convert a single Salt SLS state file to an Ansible playbook YAML file.
What it does: Transforms a Salt SLS file into an Ansible playbook. Converts each state declaration to the equivalent Ansible task using the correct Ansible module. Maps Salt requisites (require, watch, onchanges, onfail) to Ansible task ordering and notify/handler patterns. Replaces pillar references with Ansible variable syntax.
Why you need this: Manual SLS-to-playbook conversion is labour-intensive and error-prone. This tool automates the mechanical translation, handling the 18 supported Salt state modules and common Jinja2 patterns. You then review and refine the output rather than writing from scratch.
What you get:
- Complete Ansible playbook YAML ready for review and use
- One Ansible task per Salt state declaration
- Handlers generated from watch requisites
- Pillar references converted to {{ variable_name }} syntax
- Comments noting any patterns that required manual attention
Parameters:
- sls_path (string, required): Path to the SLS file to convert
Returns:
- YAML string containing the converted Ansible playbook
Example Usage:
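As a sketch: a classic `nginx` SLS using `pkg.installed` plus `service.running` with a requisite on the package would come out roughly as follows (task names and handler wiring in real output may differ):

```yaml
- name: Install nginx
  ansible.builtin.package:
    name: nginx
    state: present

- name: Ensure nginx is running
  ansible.builtin.service:
    name: nginx
    state: started
    enabled: true
```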
query_salt_master#
Query a live Salt Master REST API (CherryPy netapi) for minion data and state information.
What it does: Connects to a running Salt Master's CherryPy REST API and retrieves live data about minions, grains, and available states. Useful for building an accurate inventory before migration or verifying minion targeting before running converted playbooks.
Why you need this: Static analysis of top.sls and pillar files may not reflect the actual state of your Salt infrastructure. Minion lists may differ from targeting rules, grains may have changed, and some minions may be inactive. Querying the live Salt Master gives you ground truth for inventory generation.
What you get:
- List of all accepted minions
- Grain data for targeted minions
- Minion connectivity status
- Applied highstate status (last run result)
Parameters:
- master_url (string, required): URL of the Salt Master REST API (e.g., https://salt-master.example.com:8000)
- username (string, required): Salt API authentication username
- password (string, required): Salt API authentication password
- target (string, optional, default: *): Salt targeting expression for minion selection
Returns: - JSON string with minion list, grain data, and connectivity status
Example Usage:
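For orientation, the sketch below builds the two requests a client typically issues against salt-api's rest_cherrypy interface: a `/login` call to obtain a token, then a `local` client call to fetch grains. It constructs payloads only and contacts no master; the `eauth: pam` choice and function name are assumptions of the sketch:

```python
def build_salt_api_requests(master_url, username, password, target="*"):
    """Return (login, query) request descriptions for the Salt netapi (sketch)."""
    # Step 1: authenticate against /login to obtain a session token.
    login = {
        "url": f"{master_url}/login",
        "json": {"username": username, "password": password, "eauth": "pam"},
    }
    # Step 2: with the token in the X-Auth-Token header, a "local" client
    # call retrieves grains from every minion matching the target expression.
    query = {
        "url": master_url,
        "json": {"client": "local", "tgt": target, "fun": "grains.items"},
    }
    return login, query

login, query = build_salt_api_requests(
    "https://salt-master.example.com:8000", "souschef", "s3cret"
)
```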
assess_salt_migration_complexity#
Assess the migration complexity and estimate effort for a Salt state directory.
What it does: Analyses a Salt state directory and produces a complexity score and effort estimate for migrating it to Ansible. Evaluates factors including state count, pillar usage depth, requisite complexity, custom module usage, grain targeting intricacy, and use of advanced Salt features (reactors, beacons, mine).
Why you need this: Before committing to a Salt migration, you need to understand its scope. This tool provides objective complexity scoring that you can use to justify timeline and resource estimates to stakeholders, and to prioritise which state directories to migrate first.
What you get:
- Overall complexity score (Low / Medium / High / Very High)
- Per-directory complexity breakdown
- Estimated effort in person-days
- List of high-complexity states requiring manual attention
- Recommended migration order (simplest first)
- Key risk factors identified
Parameters:
- salt_dir (string, required): Path to the Salt states directory to assess
Returns: - JSON string with complexity scores, effort estimates, risk factors, and recommended migration order
Example Usage:
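The shape of the scoring can be sketched as follows. The weights and band thresholds here are invented for illustration and do not reflect SousChef's real scoring model:

```python
def score_state_dir(n_states, n_pillar_refs, n_requisites, uses_reactors):
    """Toy complexity score for one Salt state directory (illustrative weights)."""
    score = n_states + 2 * n_requisites + n_pillar_refs // 5
    if uses_reactors:  # reactors/beacons have no direct Ansible analogue
        score += 20
    # Map the raw score onto the Low / Medium / High / Very High bands.
    for label, limit in (("Low", 15), ("Medium", 40), ("High", 80)):
        if score <= limit:
            return label, score
    return "Very High", score

label, score = score_state_dir(
    n_states=10, n_pillar_refs=12, n_requisites=4, uses_reactors=False
)
# 10 + 8 + 2 = 20 → "Medium"
```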
plan_salt_migration#
Generate a phased migration plan with timeline for a Salt-to-Ansible migration.
What it does: Produces a detailed, phased migration plan tailored to your target platform and available timeline. Breaks the migration into structured phases (Discovery, Assessment, Conversion, Validation, Deployment), assigns states to phases based on complexity, and generates a week-by-week schedule.
Why you need this: A successful migration needs a plan. This tool generates a professional migration plan you can present to stakeholders and use to track progress. It accounts for dependencies between states, allocates time for validation, and adjusts the schedule to fit your target timeline.
What you get:
- Phased migration plan with objectives and activities per phase
- Week-by-week schedule based on your timeline
- States grouped by phase (simplest first)
- Target platform-specific guidance (AAP, AWX, or Ansible Core)
- Resource requirements per phase
- Risk mitigation recommendations
Parameters:
- salt_dir (string, required): Path to the Salt states directory
- timeline_weeks (integer, required): Total available migration timeline in weeks
- target_platform (string, required): Target platform — aap, awx, or ansible_core
Returns: - Markdown-formatted migration plan with phased schedule and platform-specific guidance
Example Usage:
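The week-allocation idea can be sketched like this: split the available timeline across the five fixed phases by rough weight, giving the remainder to the last phase so the plan always fits. The weights are illustrative assumptions, not the tool's actual scheduling logic:

```python
PHASES = [("Discovery", 1), ("Assessment", 1), ("Conversion", 4),
          ("Validation", 2), ("Deployment", 1)]

def allocate_weeks(timeline_weeks):
    """Distribute the timeline across phases proportionally (sketch)."""
    total = sum(w for _, w in PHASES)
    plan, used = [], 0
    for name, weight in PHASES[:-1]:
        weeks = max(1, round(timeline_weeks * weight / total))
        plan.append((name, weeks))
        used += weeks
    # Whatever remains goes to the final phase so the plan fits the timeline.
    plan.append((PHASES[-1][0], max(1, timeline_weeks - used)))
    return plan

plan = allocate_weeks(12)
```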
generate_salt_migration_report#
Generate an executive migration report for a Salt-to-Ansible migration.
What it does: Produces a comprehensive migration report covering the full state tree. Includes an executive summary, complexity analysis, effort estimates, risk assessment, and recommended approach. Suitable for presentation to technical leads, project managers, or business stakeholders.
Why you need this: Enterprise migrations require documentation for governance, budget approval, and project tracking. This tool generates a professional report in your chosen format that communicates migration scope, risks, and plan without requiring manual document authoring.
What you get:
- Executive summary with headline metrics
- Full complexity analysis per state directory
- Total effort estimate with confidence range
- Risk register with mitigations
- Recommended migration approach and phasing
- Technology recommendations (AAP/AWX vs Ansible Core)
Parameters:
- salt_dir (string, required): Path to the Salt states directory
- report_format (string, required): Output format — markdown or json
Returns: - Migration report in the requested format
Example Usage:
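The two output formats behave roughly as in this sketch, which renders the same headline metrics as either markdown or JSON. The field names are invented for illustration:

```python
import json

def render_report(summary, fmt):
    """Render headline metrics as markdown or JSON (sketch; fields invented)."""
    if fmt == "json":
        return json.dumps(summary)
    lines = ["# Salt Migration Report", ""]
    lines += [f"- **{k}**: {v}" for k, v in summary.items()]
    return "\n".join(lines)

report = render_report({"states": 42, "effort_days": 30, "risk": "Medium"}, "markdown")
```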
generate_salt_inventory#
Convert a top.sls file to an Ansible INI inventory file.
What it does: Reads a Salt top.sls file and converts its targeting rules into an Ansible INI inventory. Maps each targeting block to an Ansible host group, preserving environment separation. Handles glob, grain, compound, and nodegroup matchers by generating appropriately named groups.
Why you need this: Your Ansible inventory must replicate the targeting logic of your Salt top.sls so that each host receives the same configuration after migration. Manual inventory creation from complex top.sls files is tedious and error-prone. This tool automates the translation.
What you get:
- Ansible INI inventory with host groups corresponding to Salt targeting
- Environment separation (Salt environments → inventory directories or group naming)
- Host group hierarchy for compound matchers
- Comments explaining the targeting logic from the original top.sls
Parameters:
- top_path (string, required): Path to the Salt top.sls file
Returns: - INI-formatted Ansible inventory string
Example Usage:
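The targeting-to-group translation can be sketched as below, starting from an already parsed top.sls (a real run parses the YAML first). The group-naming scheme is an assumption of the sketch:

```python
def top_to_ini(top):
    """Emit INI group stubs from a parsed top.sls dict (sketch)."""
    lines = []
    for env, targets in top.items():
        for target, states in targets.items():
            # Derive a group name from the environment and target pattern;
            # a bare "*" target becomes the environment-wide "all" group.
            cleaned = target.strip("*").replace("@", "").replace(":", "_")
            group = f"{env}_{cleaned or 'all'}"
            lines.append(f"[{group}]")
            lines.append(f"# salt target: {target} -> states: {', '.join(states)}")
            lines.append("")
    return "\n".join(lines)

ini = top_to_ini({"base": {"web*": ["apache", "php"], "*": ["common"]}})
```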
convert_salt_pillar_to_vars#
Convert a Salt pillar file to Ansible variable files, with optional Vault encryption for sensitive values.
What it does: Reads a Salt pillar file and converts its contents to Ansible variable YAML. When output_format is vault, produces two files: one with non-sensitive variables for group_vars/ and one formatted for Ansible Vault encryption containing sensitive values (identified by key name heuristics such as password, secret, key, token).
Why you need this: Pillars are Salt's equivalent of Ansible vars/ and Ansible Vault combined. To complete a migration, every pillar value must be mapped to an Ansible variable. Doing this manually for large pillar trees is time-consuming and risks missing sensitive values that should be encrypted.
What you get:
- Plain variable YAML for non-sensitive pillar values
- Separate vault YAML for sensitive values (when output_format: vault)
- Ansible variable names derived from Salt pillar key paths
- Comments mapping original pillar keys to new Ansible variable names
Parameters:
- pillar_path (string, required): Path to the pillar SLS file
- output_format (string, required): Output format — yaml (all variables in one file) or vault (split into plain and vault files)
Returns: - YAML string(s) with converted variable definitions
Example Usage:
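The sensitive-value split in vault mode works roughly as in this sketch. The hint list mirrors the heuristics described above; the flattening scheme and helper name are invented for illustration:

```python
SENSITIVE_HINTS = ("password", "secret", "key", "token")

def split_pillar(pillar, prefix=""):
    """Flatten pillar keys and route sensitive ones to the vault file (sketch)."""
    plain, vault = {}, {}
    for key, value in pillar.items():
        name = f"{prefix}_{key}" if prefix else key
        if isinstance(value, dict):
            # Recurse into nested pillar data, carrying the key path along.
            p, v = split_pillar(value, name)
            plain.update(p)
            vault.update(v)
        elif any(hint in key.lower() for hint in SENSITIVE_HINTS):
            vault[name] = value
        else:
            plain[name] = value
    return plain, vault

plain, vault = split_pillar({"app": {"port": 8080, "db_password": "hunter2"}})
# plain → {'app_port': 8080}; vault → {'app_db_password': 'hunter2'}
```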
convert_salt_directory_to_ansible#
Batch convert an entire Salt state directory to a full Ansible roles structure.
What it does: Converts a complete Salt state directory tree to an Ansible roles directory structure in a single operation. Each Salt state directory becomes an Ansible role with the standard layout (tasks/main.yml, handlers/main.yml, templates/, vars/main.yml, defaults/main.yml). Generates a site.yml playbook that orchestrates all roles.
Why you need this: Manually converting each SLS file and assembling a roles structure takes days or weeks for large Salt installations. This tool automates the entire conversion, giving you a starting point that is structurally correct and covers all states. You then refine the output rather than authoring from scratch.
What you get:
- Full Ansible roles directory structure (one role per Salt state directory)
- tasks/main.yml with converted tasks for each role
- handlers/main.yml with handlers generated from watch requisites
- defaults/main.yml with default variable values from pillar references
- site.yml orchestrating all roles
- Summary of any states that required manual attention
Parameters:
- salt_dir (string, required): Path to the Salt states directory to convert
- output_dir (string, required): Path to the output directory for the Ansible roles structure
Returns: - JSON string with conversion summary, file list, and list of items requiring manual review
Example Usage:
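The role skeleton laid down per state directory can be sketched as follows; the sketch creates paths and a stub site.yml only, with file contents elided. Helper names are invented for illustration:

```python
import os
import tempfile

ROLE_LAYOUT = ["tasks/main.yml", "handlers/main.yml", "defaults/main.yml",
               "vars/main.yml", "templates/"]

def scaffold_roles(state_dirs, output_dir):
    """Create the standard role layout for each state directory (sketch)."""
    created = []
    for state in state_dirs:
        for item in ROLE_LAYOUT:
            path = os.path.join(output_dir, "roles", state, item)
            if item.endswith("/"):
                os.makedirs(path, exist_ok=True)
            else:
                os.makedirs(os.path.dirname(path), exist_ok=True)
                open(path, "a").close()  # empty stub file
            created.append(path)
    # One site.yml orchestrates every generated role.
    with open(os.path.join(output_dir, "site.yml"), "w") as fh:
        fh.write("- hosts: all\n  roles:\n")
        fh.writelines(f"    - {s}\n" for s in state_dirs)
    return created

outdir = tempfile.mkdtemp()
created = scaffold_roles(["nginx", "postgres"], outdir)
```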
Tool Selection#
- Start with assessment: Use assess_salt_migration_complexity to understand your current state
- Understand targeting: Use parse_salt_top to map minion targeting before generating inventory
- Migrate pillars first: Use convert_salt_pillar_to_vars before converting states
- Generate inventory: Use generate_salt_inventory to produce your Ansible inventory from top.sls
- Single file conversion: Use convert_salt_to_ansible for targeted SLS conversions
- Batch conversion: Use convert_salt_directory_to_ansible for full tree migrations
- Plan your timeline: Use plan_salt_migration with your target platform and available weeks
- Report to stakeholders: Use generate_salt_migration_report for executive documentation
For the full Salt migration methodology, see the Salt Migration Guide.