Saturday, June 12, 2021

AWS and Azure Cloud and DevOps Coaching Online Classes - June 2021 Schedule

DevOps Coaching Schedules June 2021

Date Time Type When?
Jun 21 6:00 to 7:45 PM CST Weekday Mon/Wed
Jun 26 9:45 to 11:25 AM CST on Sat, 10:30 AM to 12:15 PM CST on Sun Weekend Sat/Sun

DevOps Training highlights:

- Comprehensive hands-on knowledge of Git, Jenkins, TeamCity, Maven, SonarQube, Nexus, Terraform, Ansible, Puppet, and Docker on AWS and Azure.

- 20+ years of IT experience, 5+ years in DevOps/Cloud/Automation.

- Many students from my coaching program have already been placed successfully in reputed companies.

- Working as a Sr. DevOps Coach/Architect at one of the top IT services companies in the USA.

- Unique program: less theory, more hands-on lab exercises, with in-person classroom training.

Resume preparation will be done with candidates personally.

One-to-one Interview coaching.

- Coaching is purely hands-on and 100% job-relevant.

100% Job assistance.

- Coached 850+ people successfully over the past three years; many of my students have been placed with large enterprises in the DFW, Chicago, Florida, Seattle, Bay Area, Ohio, and NY areas.

Contact no: 469-733-5248
Email - devops.coaching@gmail.com
Contact: AK

Monday, May 24, 2021

How to create Azure Container Registry using Terraform in Azure Cloud | Setup Azure Container Registry using Terraform

HashiCorp's Terraform is an open-source tool for provisioning and managing cloud infrastructure. Terraform can provision resources on any cloud platform.

Terraform allows you to define infrastructure in configuration files (.tf files) that describe the topology of cloud resources, such as virtual machines, storage accounts, and networking interfaces. The Terraform CLI provides a simple mechanism to deploy and version these configuration files in Azure.

Watch the steps on YouTube:

Advantages of using Terraform:

  • Reduces manual human errors while deploying and managing infrastructure.
  • Deploys the same template multiple times to create identical development, test, and production environments.
  • Reduces the cost of development and test environments by creating them on demand.
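The "identical environments" point usually comes down to parameterizing the configuration with variables. A minimal sketch, assuming hypothetical variable and resource names not taken from this post:

```hcl
# Hypothetical sketch: one template, reused per environment
variable "environment" {
  default = "dev" # pass -var="environment=test" or "prod" to reuse the template
}

resource "azurerm_resource_group" "rg" {
  name     = "rg-${var.environment}"
  location = "southcentralus"
}
```

Running terraform apply -var="environment=test" against the same files produces an identical, separately named environment.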

How to Authenticate with Azure?

Terraform can authenticate with Azure in many ways. In this example, we will use the Azure CLI to authenticate with Azure and then create resources using Terraform.
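If you prefer to pin a specific subscription rather than relying on the CLI's default, the azurerm provider block also accepts a subscription_id argument. A sketch, where the ID below is a placeholder:

```hcl
provider "azurerm" {
  features {}
  # Placeholder ID - replace with a subscription ID from `az account list`
  subscription_id = "00000000-0000-0000-0000-000000000000"
}
```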

Prerequisites:

Azure CLI needs to be installed.

Terraform needs to be installed.

Logging in to the Azure CLI

Log in to the Azure CLI using:

az login

The above command opens a browser and asks for your Microsoft account details. Once you are logged in, you can see the account info by executing the below command:

az account list

Now create a directory to store Terraform files.

mkdir tf-acr

cd tf-acr

Let's create a Terraform file that uses the Azure provider. To configure Terraform to use the default subscription defined in the Azure CLI, use the below code.

sudo vi create-acr.tf

provider "azurerm" {
  features {}
}
resource "azurerm_resource_group" "rg" {
  name     = "rg-tf-acr"
  location = "southcentralus"
}
resource "azurerm_container_registry" "acr" {
  name                     = "azcontainerregistry321"
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  sku                      = "Basic"
  admin_enabled            = true
}
output "admin_password" {
  value       = azurerm_container_registry.acr.admin_password
  description = "The admin password of the container registry"
  sensitive   = true
}
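Alongside the admin password, you may also want the registry's login server URL for docker login later. A sketch of an additional output you could append to the same file (login_server is an exported attribute of azurerm_container_registry):

```hcl
output "login_server" {
  value       = azurerm_container_registry.acr.login_server
  description = "The URL used to log in to the container registry"
}
```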

Perform the below command to initialize the directory.

terraform init

Once the directory is initialized, you can start writing code for setting up the infrastructure. Now perform the below command to validate the Terraform files.

terraform validate

Perform the plan command to see how many resources will be created.

terraform plan


Now perform the apply command to create the resources.

terraform apply

When prompted "Do you want to perform these actions?", type yes.


Now log in to the Azure portal to see the resources created.


How to destroy the resources?
Execute terraform destroy

The above command destroys both the resource group and the container registry created earlier.



Thursday, May 13, 2021

How to store Terraform state file in S3 Bucket | How to manage Terraform state in S3 Bucket?

One of the things that Terraform does (and does really well) is track the infrastructure that you provision. It does this by means of state.

By default, Terraform stores state locally in a file named terraform.tfstate. This does not work well in a team environment: if any developer wants to make a change, they need to make sure nobody else is updating the same state at the same time.

Why should the state file not be stored on your local machine?

  • Local state doesn't work well in a team or collaborative environment.
  • Terraform state can include sensitive information.
  • Storing state locally increases the chance of inadvertent deletion.
With remote state, Terraform writes the state data to a remote data store, which can then be shared between all members of a team. Terraform supports storing state in many ways including the below:

  • Terraform Cloud
  • HashiCorp Consul
  • Amazon S3
  • Azure Blob Storage
  • Google Cloud Storage
  • Alibaba Cloud OSS
  • Artifactory or Nexus 
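As an example of one of the other options, an Azure Blob Storage backend looks broadly like the sketch below; the resource group, storage account, and container names are placeholders, not values from this post:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"        # placeholder
    storage_account_name = "tfstatestorage123" # placeholder
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}
```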

We will learn how to store the state file in an AWS S3 bucket. We will create an S3 bucket and also a DynamoDB table where the state lock will be stored.

Watch the steps on the YouTube channel:


Prerequisites:

Steps:

mkdir project-terraform

cd project-terraform

First, let us create the necessary Terraform files.

Create tf files

sudo vi variables.tf

variable "region" {
    default = "us-east-2"
}

variable "instance_type" {
    default = "t2.micro"
}

sudo vi main.tf
provider "aws" {
  region = var.region
}

#1 - this will create an S3 bucket in AWS
resource "aws_s3_bucket" "terraform_state_s3" {
  bucket        = "terraform-coachdevops-state"
  force_destroy = true

  # Enable versioning to see full revision history of our state files
  versioning {
    enabled = true
  }

  # Enable server-side encryption by default
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

# 2 - this creates the DynamoDB table
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "tf-up-and-run-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

Now let us initialize Terraform.

terraform init

terraform apply

 

This will create two resources: the S3 bucket and the AWS DynamoDB table. But the Terraform state file is still stored locally. If you type the below command, you will see the tfstate file locally.

 ls -al


To store the state file remotely, you need to add the following code with a terraform block. This will add the backend configuration.

sudo vi main.tf

#Step 3 - Creates S3 backend
terraform {
  backend "s3" {
    #Replace this with your bucket name!
    bucket         = "terraform-coachdevops-state"
    key            = "dc/s3/terraform.tfstate"
    region         = "us-east-2"
    #Replace this with your DynamoDB table name!
    dynamodb_table = "tf-up-and-run-locks"
    encrypt        = true
    }
}

terraform init

When prompted to copy the existing state to the new backend, type yes and press enter. Now you will see that the local state file has 0 bytes (it is empty).

Now log in to the AWS console, click on S3, and click on the bucket name.

Now you should be able to see tfstate file in S3.

Click on the terraform.tfstate file; you can see multiple versions of your state file. Terraform is automatically pushing and pulling state data to and from S3.

How to perform destroy?

It is not that straightforward, as the backend references the S3 bucket; if we delete the S3 bucket first, the backend will not know where to find the state. So we need to perform the below steps:

1. Remove the backend reference in main.tf by commenting out the backend section of the code.

sudo vi main.tf

Comment out the below code (or remove it):

/*

terraform {
  backend "s3" {
    #Replace this with your bucket name!
    bucket         = "terraform-coachdevops-state"
    key            = "dc/s3/terraform.tfstate"
    region         = "us-east-2"
    #Replace this with your DynamoDB table name!
    dynamodb_table = "tf-up-and-run-locks"
    encrypt        = true
    }
}

*/

We need to initialize again, so type the below command:

terraform init

Type yes when prompted to copy the state back locally.

Now you will see that the local state file has been updated.

Now you can delete all the resources created by Terraform, including the S3 bucket and the DynamoDB table.

terraform destroy


Tuesday, April 27, 2021

Install Azure CLI on Ubuntu 18.04 | How to set up Azure CLI on Ubuntu 18.04 | How to Install Azure CLI on Ubuntu

The Azure command-line interface (Azure CLI) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation. Azure CLI is Microsoft's cross-platform command-line experience for managing Azure resources.

Azure CLI can be installed by executing the below command:

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

Once the Azure CLI is installed, you can verify it by executing the below command:

az version



How to create Azure Resources using Terraform in Azure Cloud | Automate Infrastructure setup using Terraform in Azure Cloud

HashiCorp's Terraform is an open-source tool for provisioning and managing cloud infrastructure. Terraform can provision resources on any cloud platform.

Terraform allows you to define infrastructure in configuration files (.tf files) that describe the topology of cloud resources, such as virtual machines, storage accounts, and networking interfaces. The Terraform CLI provides a simple mechanism to deploy and version these configuration files in Azure.

Advantages of using Terraform:

  • Reduces manual human errors while deploying and managing infrastructure.
  • Deploys the same template multiple times to create identical development, test, and production environments.
  • Reduces the cost of development and test environments by creating them on demand.

How to Authenticate with Azure?

Terraform can authenticate with Azure in many ways. In this example, we will use the Azure CLI to authenticate with Azure and then create resources using Terraform.

Prerequisites:

Azure CLI needs to be installed.

Terraform needs to be installed.

Logging in to the Azure CLI

Log in to the Azure CLI using:

az login

The above command opens a browser and asks for your Microsoft account details. Once you are logged in, you can see the account info by executing the below command:

az account list

Now create a directory to store Terraform files.

mkdir azure-terraform

cd azure-terraform

Let's create a Terraform file that uses the Azure provider. To configure Terraform to use the default subscription defined in the Azure CLI, use the below code.

sudo vi azure.tf

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.46.0"
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}

Now initialize the working directory

Perform the below command:

terraform init

Once the directory is initialized, you can start writing code for setting up the infrastructure.

sudo vi create-app-svc.tf

# Create a resource group
resource "azurerm_resource_group" "dev-rg" {
  name     = "dev-environment-rg"
  location = "South Central US"
}

# Create an app service plan
resource "azurerm_app_service_plan" "service-plan" {
  name                = "simple-service-plan"
  location            = azurerm_resource_group.dev-rg.location
  resource_group_name = azurerm_resource_group.dev-rg.name
  kind                = "Linux"
  reserved            = true

  sku {
    tier = "Standard"
    size = "S1"
  }

  tags = {
    environment = "dev"
  }
}

# Create a Java app service
resource "azurerm_app_service" "app-service" {
  name                = "my-awesome-app-svc"
  location            = azurerm_resource_group.dev-rg.location
  resource_group_name = azurerm_resource_group.dev-rg.name
  app_service_plan_id = azurerm_app_service_plan.service-plan.id

  site_config {
    linux_fx_version = "TOMCAT|8.5-java11"
  }

  tags = {
    environment = "dev"
  }
}
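If you'd like Terraform to print the app's URL after apply, you could also add an output like the below sketch (default_site_hostname is an exported attribute of azurerm_app_service):

```hcl
output "app_service_url" {
  value       = azurerm_app_service.app-service.default_site_hostname
  description = "Default hostname of the app service"
}
```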

Now perform the below command to validate the Terraform files.

terraform validate

Perform the plan command to see how many resources will be created.

terraform plan

Now perform the apply command to create the resources.

terraform apply

When prompted "Do you want to perform these actions?", type yes.

 
Now log in to the Azure portal to see all the resources created.
 

Saturday, April 17, 2021

ERROR: Can't construct a java object for tag:yaml - Kubernetes Jenkins Deployment issue

ERROR: Can't construct a java object for tag:yaml.org,2002:io.kubernetes.client.openapi.models.V1Deployment; exception=Class not found: io.kubernetes.client.openapi.models.V1Deployment

 in 'reader', line 1, column 1:
    apiVersion: apps/v1
 

Caused by: hudson.remoting.ProxyException: org.yaml.snakeyaml.error.YAMLException: Class not found: io.kubernetes.client.openapi.models.V1Deployment
    at org.yaml.snakeyaml.constructor.Constructor.getClassForNode(Constructor.java:664)
    at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.getConstructor(Constructor.java:322)
    at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:331)
    ... 30 more

Fix/Workaround:
Try the deployment from a Jenkins slave (agent) node instead of deploying from the Jenkins master.


Thursday, February 25, 2021

Automate Docker builds using Jenkins Pipelines | Dockerize PHP App | Upload Images into Nexus Docker Registry

We will learn how to automate Docker builds using Jenkins. We will use a PHP-based application. I have already created a repo with the source code + Dockerfile. We will see how to create a Docker image and upload it into the Nexus Docker registry successfully.

- Automating builds
- Automating Docker image builds
- Automating Docker image upload into Nexus docker registry
- Automating Docker container provisioning
 
Watch here on the YouTube channel:

Prerequisites:
1. Jenkins is up and running
2. Docker is installed on the Jenkins instance. Click here for steps to integrate Docker and Jenkins
3. Docker and Docker Pipeline plug-ins are installed
4. Nexus is up and running and a Docker registry is configured. Click here to know how to do that.
5. Port 80 is opened in the firewall rules to access the PHP app running inside the Docker container


Create an entry in Manage Credentials for connecting to Nexus
Go to Jenkins --> Manage Jenkins--> Click on Manage Credentials.
 

Enter Nexus user name and password with ID as nexus
Click on Save.

Step # 1 - Create a pipeline in Jenkins, name can be anything



Step # 2 - Copy the pipeline code from below
Make sure you change the highlighted values below:
Your registry URL and repo URL should be updated to match your environment.

pipeline {

    agent any

    environment {
        imageName = "myphpapp"
        registryCredentials = "nexus"
        registry = "ec2-13-58-223-172.us-east-2.compute.amazonaws.com:8085/"
        dockerImage = ''
    }

    stages {
        stage('Code checkout') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: '', url: 'https://bitbucket.org/ananthkannan/phprepo/']]])
            }
        }

        // Building Docker images
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build imageName
                }
            }
        }

        // Uploading Docker images into Nexus Registry
        stage('Uploading to Nexus') {
            steps {
                script {
                    docker.withRegistry('http://' + registry, registryCredentials) {
                        dockerImage.push('latest')
                    }
                }
            }
        }

        // Stopping previous containers for a cleaner Docker run
        stage('Stop previous containers') {
            steps {
                sh 'docker ps -f name=myphpcontainer -q | xargs --no-run-if-empty docker container stop'
                sh 'docker container ls -a -f name=myphpcontainer -q | xargs -r docker container rm'
            }
        }

        stage('Docker Run') {
            steps {
                script {
                    sh 'docker run -d -p 80:80 --rm --name myphpcontainer ' + registry + imageName
                }
            }
        }
    }
}


Step # 3 - Click on Build - Build the pipeline
Once you create the pipeline, click on Build now.


Step # 4 - Check that the Docker image is uploaded into the Nexus Registry
Log in to Nexus and click on your repo; you should see the image that was uploaded.


Step # 5 - Access the PHP app in the browser, which is running inside the Docker container
Once the build is successful, go to the browser and enter http://public_dns_name
You should see a page like below:





Wednesday, January 20, 2021

Install SonarQube 8 on Ubuntu | How to setup SonarQube 8 on Ubuntu 18.0.4?

SonarQube is one of the popular static code analysis tools. SonarQube enables developers to write cleaner, safer code. It is an open-source, Java-based tool. SonarQube uses a database for storing analysis results; the database can be MS SQL, Oracle, or PostgreSQL. We will use PostgreSQL, as it is open source as well.

Please find below the steps for installing SonarQube on Ubuntu 18.04 in the AWS Cloud. Make sure port 9000 is opened in the security group (firewall rule).

Prerequisites:
The instance should have at least 2 GB RAM. For AWS, the instance should be at least t2.small.

Watch the steps on YouTube:
 
SonarQube Architecture

SonarQube has three components:
1. Scanner - contains the scanner and analyzer that scan the application code.
2. SonarQube server - contains the web server (UI) and search server.
3. DB server - used for storing the analysis reports.

Let us start with the Java installation (skip this if you already have Java installed).

Install Open JDK 11
sudo apt-get update && sudo apt-get install default-jdk -y

Postgres Installation

sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'

sudo wget -q https://www.postgresql.org/media/keys/ACCC4CF8.asc -O - | sudo apt-key add -

sudo apt-get -y install postgresql postgresql-contrib



sudo systemctl start postgresql
sudo systemctl enable postgresql

Login as postgres user
sudo su - postgres

Now create a user by executing the below command:
createuser sonar

Switch to the SQL shell by entering:
psql
Execute the below three lines (one by one):

ALTER USER sonar WITH ENCRYPTED password 'password';

CREATE DATABASE sonarqube OWNER sonar;

GRANT ALL PRIVILEGES ON DATABASE sonarqube to sonar;

\q





Type exit to come out of the postgres user.




Download SonarQube and Install

sudo wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-8.6.0.39681.zip

sudo apt-get -y install unzip
sudo unzip sonarqube*.zip -d /opt

sudo mv /opt/sonarqube-8.6.0.39681 /opt/sonarqube -v

Create Group and User:
sudo groupadd sonarGroup

Now add the user with directory access
sudo useradd -c "user to run SonarQube" -d /opt/sonarqube -g sonarGroup sonar 
sudo chown sonar:sonarGroup /opt/sonarqube -R

Modify the sonar.properties file
sudo vi /opt/sonarqube/conf/sonar.properties

Uncomment the below lines by removing # and set the values as shown:
sonar.jdbc.username=sonar
sonar.jdbc.password=password

Next, add the below line:
sonar.jdbc.url=jdbc:postgresql://localhost/sonarqube

 
 
 
Press Escape and enter :wq! to save and exit the file.

Edit the sonar script file and set RUN_AS_USER
sudo vi /opt/sonarqube/bin/linux-x86-64/sonar.sh

Uncomment and set the below line:
RUN_AS_USER=sonar







Create Sonar as a service (this will enable SonarQube to start automatically when you restart the server).

Execute the below command:

sudo vi /etc/systemd/system/sonar.service

Add the below configuration:
[Unit]
Description=SonarQube service
After=syslog.target network.target

[Service]
Type=forking

ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
LimitNOFILE=131072
LimitNPROC=8192
User=sonar
Group=sonarGroup
Restart=always

[Install]
WantedBy=multi-user.target

Save the file by entering :wq!
 
Kernel system changes
We must make a few modifications to a couple of kernel system limit files for SonarQube to work.
sudo vi /etc/sysctl.conf

Add the following lines to the bottom of that file:

vm.max_map_count=262144
fs.file-max=65536
 

Next, we're going to edit limits.conf. Open that file with the command:

sudo vi /etc/security/limits.conf
At the end of this file, add the following: 

sonar   -   nofile   65536
sonar   -   nproc    4096


Reload the system-level changes without a server reboot:
sudo sysctl -p

Start SonarQube Now
sudo systemctl start sonar

sudo systemctl enable sonar

sudo systemctl status sonar
Type q now to come out of this mode.

SonarQube may take a few minutes to start. Check the Sonar logs to make sure there is no error:

tail -f /opt/sonarqube/logs/sonar*.log

Make sure you see the message that says SonarQube is up.

Now access the SonarQube UI by going to the browser and entering the public DNS name with port 9000, i.e. http://<public_dns_name>:9000

Please follow these steps for integrating SonarQube with Jenkins:

https://www.coachdevops.com/2020/04/how-to-integrate-sonarqube-with-jenkins.html