
Integrating Ansible with OpenShift & Kubernetes

This article explores how the Ansible Automation Platform integrates with OpenShift to create a powerful automation solution for modern, efficient IT operations.

Introduction

Modern IT runs on automation; it is what allows teams to deliver changes quickly and consistently. At the forefront of this shift is Ansible, valued for its adaptability and its extensive ecosystem of collections. While my book “Ansible for Real Life Automation” covers these aspects in depth, this article takes a deeper look at integrating the Ansible Automation Platform with OpenShift. Combining Ansible’s flexibility with OpenShift’s container orchestration gives you a streamlined, automated, and scalable way to manage cluster resources.

Ansible collections to manage Kubernetes and OpenShift

The key collections here are kubernetes.core (general-purpose Kubernetes modules such as k8s and k8s_info), redhat.openshift (certified OpenShift-specific modules such as openshift_auth and k8s), and community.okd (the upstream counterpart of redhat.openshift). Also check the Ansible Automation Platform Certified and Validated Content to learn about the available certified collections.
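
As a quick sketch, a requirements.yml such as the one below (installed with ansible-galaxy collection install -r requirements.yml) pulls these collections in; note that redhat.openshift is delivered through Red Hat Automation Hub, so that server needs to be configured as a galaxy server in your ansible.cfg.

---
# requirements.yml -- install with: ansible-galaxy collection install -r requirements.yml
collections:
  - name: kubernetes.core      # generic Kubernetes modules (k8s, k8s_info, ...)
  - name: redhat.openshift     # certified OpenShift modules (openshift_auth, k8s, ...)
  - name: community.okd        # upstream counterpart of redhat.openshift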

Let’s connect Ansible to OpenShift

In the following examples, I use simple playbooks to fetch pod details using two different authentication methods.

Method 1: Using OpenShift Auth Tokens

You can use an OpenShift token (oc whoami -t) directly, but these tokens are usually short-lived and are not practical for unattended Ansible automation. In such cases, the redhat.openshift.openshift_auth module can log in with a username and password and fetch a token for you, as follows.

---
- name: Ansible to OpenShift Integration
  hosts: localhost
  gather_facts: false
  vars:
    ansible_python_interpreter: /usr/bin/python3
  tasks:

    # Method 1: Username & password to fetch the Token
  
    - name: Log in to OCP to obtain access token
      redhat.openshift.openshift_auth:
        host: "{{ ocp_host_api }}"
        username: "{{ lookup('ansible.builtin.env', 'OPENSHIFT_LOGIN_USERNAME') }}"
        password: "{{ lookup('ansible.builtin.env', 'OPENSHIFT_LOGIN_PASSWORD') }}"
        validate_certs: false
      register: openshift_auth_results

    - name: Get a list of all pods from openshift-apiserver
      kubernetes.core.k8s_info:
        host: "{{ ocp_host_api }}"
        api_key: "{{ openshift_auth_results.openshift_auth.api_key }}"
        kind: Pod
        namespace: openshift-apiserver
        validate_certs: false
      register: pod_list

    - name: Print details
      ansible.builtin.debug:
        msg: "{{ pod_list }}"

Method 2: Using Username & Password Directly in Modules

Be aware that this method only works on clusters configured for HTTP Basic Auth. If your cluster uses a different authentication mechanism (such as OAuth2 in OpenShift), this approach will not yield the expected results; in that case, use the community.okd.k8s_auth or redhat.openshift.openshift_auth module (explained above), which aligns more closely with those requirements.

Here is a sample playbook (check the GitHub repo) that uses the OpenShift cluster username and password for authentication.

---
- name: Ansible to OpenShift Integration
  hosts: localhost
  gather_facts: false
  vars:
    ansible_python_interpreter: /usr/bin/python3
  tasks:

    # Method 2: Using username & password

    - name: Get a list of all pods from openshift-apiserver
      kubernetes.core.k8s_info:
        host: https://api.cluster-wx549.wx549.sandbox2812.opentlc.com:6443
        username: "{{ lookup('ansible.builtin.env', 'OPENSHIFT_LOGIN_USERNAME') }}"
        password: "{{ lookup('ansible.builtin.env', 'OPENSHIFT_LOGIN_PASSWORD') }}"        
        kind: Pod
        namespace: openshift-apiserver
        validate_certs: false
      register: pod_list
      ignore_errors: true    

    - name: Print details
      ansible.builtin.debug:
        msg: "{{ pod_list }}" 

To pass OPENSHIFT_LOGIN_USERNAME and OPENSHIFT_LOGIN_PASSWORD as environment variables from Ansible Automation Controller, you can create a custom Credential Type with the following input configuration.

---
fields:
  - id: openshift_username
    type: string
    label: Username
  - id: openshift_password
    type: string
    label: Password
    secret: true
required:
  - openshift_username
  - openshift_password

And expose the environment variables through the credential type’s injector configuration as follows.

env:
  OPENSHIFT_LOGIN_PASSWORD: '{{ openshift_password }}'
  OPENSHIFT_LOGIN_USERNAME: '{{ openshift_username }}'

Let’s create some resources in OpenShift using Ansible

Basically, you only need the redhat.openshift.k8s or kubernetes.core.k8s module to create almost any resource in OpenShift using Ansible; it works much like the kubectl apply -f <filename> or oc apply -f <filename.yaml> commands.
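
For example, if you already have a plain manifest on disk, a hedged equivalent of oc apply -f looks like this (my-app.yaml is just an illustrative file name; cluster_api, api_key, and cert_validation are the same connection variables used later in this article).

- name: Apply an existing manifest file (like oc apply -f my-app.yaml)
  kubernetes.core.k8s:
    host: "{{ cluster_api }}"
    api_key: "{{ api_key }}"
    validate_certs: "{{ cert_validation | default(false) }}"
    state: present
    src: my-app.yaml   # path to a local YAML manifest; illustrative only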

For demonstration purposes, let us create a Project (namespace) object. A simple Jinja2 template (project.yaml.j2) is prepared as follows; you need to pass all of the variables referenced in the template.

apiVersion: project.openshift.io/v1
kind: Project
metadata:
  annotations:
    openshift.io/requester: "{{ project_requestor }}"
    openshift.io/display-name: "{{ project_name }}"
    openshift.io/description: "{{ project_description }}"
  labels:
    project: "{{ project_name }}"
    app: "{{ project_app }}"
  name: {{ project_name }}

Remember, you can also embed the YAML definition directly inside the module, but that quickly becomes hard to maintain in larger playbooks.
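
For comparison, here is a sketch of the same Project created with an inline definition; it works, but notice how quickly the task grows compared with the templated version below.

- name: Create project with an inline definition (less tidy than templating)
  kubernetes.core.k8s:
    api_key: "{{ api_key }}"
    host: "{{ cluster_api }}"
    validate_certs: "{{ cert_validation | default(false) }}"
    state: present
    definition:
      apiVersion: project.openshift.io/v1
      kind: Project
      metadata:
        name: "{{ project_name }}"
        labels:
          project: "{{ project_name }}"
          app: "{{ project_app }}"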

Now, in the playbook task file, we have to template the YAML and apply it using the k8s module as follows.

---
- name: Template the project details
  ansible.builtin.set_fact:
    project_definition: "{{ lookup('ansible.builtin.template', 'project.yaml.j2') }}"

- name: Create project
  # redhat.openshift.k8s:
  kubernetes.core.k8s:
    api_key: "{{ api_key }}"
    host: "{{ cluster_api }}"
    validate_certs: "{{ cert_validation | default(false) }}"
    state: present
    definition: "{{ project_definition }}"

That’s it! The resource will be created from the YAML definition you supplied, as long as the user account (the token you generated) has permission to do so.
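
If you want the playbook to confirm the result, a short verification step with kubernetes.core.k8s_info can follow the creation task; the sketch below reuses the same connection variables.

- name: Verify that the project exists
  kubernetes.core.k8s_info:
    api_key: "{{ api_key }}"
    host: "{{ cluster_api }}"
    validate_certs: "{{ cert_validation | default(false) }}"
    api_version: project.openshift.io/v1
    kind: Project
    name: "{{ project_name }}"
  register: project_info

- name: Fail early if the project was not found
  ansible.builtin.assert:
    that:
      - project_info.resources | length > 0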

Some more? Let us create a NetworkPolicy resource in OpenShift using Ansible

Let us use the same k8s module, this time with a different Jinja2 template: a NetworkPolicy that allows traffic into the project from the OpenShift monitoring components (namespaces labelled network.openshift.io/policy-group: monitoring).

The networkpolicy.yaml.j2 template contains the following definition.

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-monitoring
  namespace: "{{ ocp_project_name }}"
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: monitoring
  podSelector: {}
  policyTypes:
  - Ingress

A typical playbook task file can be written as follows.

---
- name: Template the Network Policy details
  ansible.builtin.set_fact:
    netpol_definition: "{{ lookup('ansible.builtin.template', 'networkpolicy.yaml.j2') }}"

- name: Create Network Policy
  redhat.openshift.k8s:
    api_key: "{{ api_key }}"
    host: "{{ cluster_api }}"
    validate_certs: "{{ cert_validation | default(false) }}"
    state: present
    definition: "{{ netpol_definition }}"

Explore the Ansible collections and get your hands dirty with Ansible and OpenShift.
