DevOps in the Cloud

Hello! Today, we’ll delve into the dynamic realm of DevOps in the Cloud. Cloud computing and DevOps have become inseparable partners, offering unparalleled scalability, flexibility, and efficiency. This time we’ll explore the fundamentals, practices, and tools that make DevOps in the Cloud a game-changer.

Cloud Computing Fundamentals

What is Cloud Computing?

Cloud computing is the delivery of computing services over the internet, providing access to a pool of shared computing resources (servers, storage, databases, networking, software, etc.).

Cloud services are categorized into three main models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

Using Cloud Services (e.g., AWS, Azure) for DevOps

Cloud Providers

Leading cloud providers like Amazon Web Services (AWS) and Microsoft Azure offer a vast array of services that facilitate DevOps practices.

These services include cloud-based infrastructure, container orchestration (e.g., AWS ECS, Azure Kubernetes Service), serverless computing (e.g., AWS Lambda, Azure Functions), and more.

Scalability and Elasticity

Scalability refers to the ability to handle an increasing workload by adding resources, such as servers or processing power, to your infrastructure.

In the cloud, scalability can be achieved horizontally (adding more servers) or vertically (adding more resources to existing servers).

Elasticity builds on scalability by automatically adjusting resource allocation based on demand.

Cloud services can automatically scale resources up during traffic spikes and down during lulls, optimizing cost and performance.
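To make the idea concrete, here is a toy Python sketch of the proportional scaling rule an autoscaler might apply. The function, bounds, and target are invented for illustration; the formula mirrors the one used by the Kubernetes Horizontal Pod Autoscaler.

```python
import math

def desired_replicas(current, utilization, target=0.6,
                     min_replicas=2, max_replicas=10):
    """Proportional scaling: grow when utilization is above the target,
    shrink when it is below, always staying within the configured bounds."""
    # round() guards against floating-point noise before ceil()
    want = math.ceil(round(current * utilization / target, 6))
    return max(min_replicas, min(max_replicas, want))

print(desired_replicas(4, 0.9))   # traffic spike: scale up to 6
print(desired_replicas(4, 0.15))  # lull: scale down, floored at min_replicas (2)
```

The same rule drives both directions: resources grow during spikes and shrink during lulls, which is exactly the cost/performance trade-off elasticity is meant to optimize.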

Cloud-native DevOps Practices

Cloud-native DevOps practices leverage the advantages of cloud services and follow key principles like microservices architecture, continuous delivery, and containerization.

Container orchestration platforms like Kubernetes have become a cornerstone of cloud-native DevOps for managing and scaling containerized applications.

Now, let’s test your understanding with some questions:

  1. What is the main advantage of cloud computing in DevOps?
    a) Lower cost
    b) Increased complexity
    c) Scalability, flexibility, and efficiency
    d) Decreased automation
  2. Which of the following is not a cloud computing service model?
    a) IaaS (Infrastructure as a Service)
    b) PaaS (Platform as a Service)
    c) SaaS (Software as a Service)
    d) HaaS (Hardware as a Service)
  3. What is the primary benefit of elasticity in the cloud?
    a) It allows for horizontal scaling.
    b) It ensures data security.
    c) It automatically adjusts resource allocation based on demand.
    d) It eliminates the need for continuous delivery.
  4. Which cloud provider offers services like AWS Lambda and AWS ECS for serverless computing and container orchestration, respectively?
    a) Microsoft Azure
    b) Google Cloud Platform
    c) IBM Cloud
    d) Amazon Web Services (AWS)
  5. What are the key principles of cloud-native DevOps practices?
    a) Waterfall development, manual testing, and monolithic architecture
    b) Microservices architecture, continuous delivery, and containerization
    c) On-premises infrastructure, infrequent deployments, and single-tier applications
    d) Traditional project management, isolated development, and manual deployments

1 c – 2 d – 3 c – 4 d – 5 b

Security in DevOps

Hello! As we progress through our DevOps journey, we come to a critical aspect that should be ingrained in every step of the DevOps pipeline: Security. Today we’ll explore the fundamental principles, practices, and tools of DevSecOps—where “Sec” stands for security.

DevSecOps Principles

DevSecOps is an approach that integrates security practices into the DevOps pipeline. Instead of treating security as a separate phase, it’s woven into every stage, from development to deployment.

Security is everyone’s responsibility in a DevSecOps culture, not just the security team’s.

Shift Left:

  • The concept of “Shift Left” in DevSecOps emphasizes addressing security concerns early in the development process. This proactive approach reduces the chances of security vulnerabilities making it into production.
  • Security checks, code reviews, and automated security testing are performed as code is developed, not just before deployment.

Security Scanning and Vulnerability Management

Security Scanning

Security scanning tools are used to identify vulnerabilities in code, dependencies, and configurations. Examples include static analysis tools that analyze code for security issues and dynamic analysis tools that test applications during runtime.

Automated scans are integrated into the CI/CD pipeline to catch vulnerabilities early.
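As one possible shape for this, here is a hypothetical GitHub Actions workflow that runs Bandit (a static analysis security tool for Python code) on every push; the `src/` path and job names are assumptions for the example:

```yaml
# Hypothetical CI workflow: run a static security scan on every push.
name: security-scan
on: [push, pull_request]

jobs:
  sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Bandit (static analysis for Python)
        run: pip install bandit
      - name: Scan the source tree
        run: bandit -r src/   # non-zero exit fails the build if issues are found
```

Because the scan is part of the pipeline, an insecure change is flagged at commit time rather than discovered in production.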

Vulnerability Management

Once vulnerabilities are identified, a vulnerability management process is put in place to prioritize, remediate, and track the resolution of issues.

Vulnerability databases like the Common Vulnerabilities and Exposures (CVE) list are used to keep track of known vulnerabilities.

Compliance as Code

Compliance requirements are translated into code, known as Compliance as Code, which is used to automate checks for compliance.

Continuous compliance checks are performed automatically as part of the deployment process.
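Compliance as Code can be illustrated with a toy sketch (not a real compliance framework; the rule names and config fields are invented): each requirement becomes a plain function, so checks can run automatically in the deployment process.

```python
# Toy "Compliance as Code": each rule is a function over a resource config.
def check_encryption_enabled(cfg):
    """Resource must have encryption at rest turned on."""
    return cfg.get("encryption") == "enabled"

def check_no_public_access(cfg):
    """Resource must not be publicly accessible."""
    return not cfg.get("public_access", False)

RULES = {
    "encryption-at-rest": check_encryption_enabled,
    "no-public-access": check_no_public_access,
}

def run_compliance_checks(cfg):
    """Return the names of all rules the given resource config violates."""
    return [name for name, rule in RULES.items() if not rule(cfg)]

bucket = {"encryption": "enabled", "public_access": True}
print(run_compliance_checks(bucket))  # → ['no-public-access']
```

Real-world equivalents of this pattern are policy engines such as Open Policy Agent, where the rules live in version control alongside the infrastructure code.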

Security Best Practices

  • Least Privilege: Users and systems should only have the minimum access and permissions required to perform their tasks.
  • Secure by Design: Security considerations should be part of the design phase, and security controls should be implemented from the beginning.
  • Patch Management: Keep software and systems up-to-date with the latest security patches.
  • Monitoring and Incident Response: Continuously monitor systems for security threats, and have a well-defined incident response plan in place.
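As a concrete illustration of Least Privilege, here is a minimal AWS IAM policy sketch that grants read-only access to a single (hypothetical) S3 bucket and nothing else; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyReportsBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-reports-bucket",
        "arn:aws:s3:::example-reports-bucket/*"
      ]
    }
  ]
}
```

Everything not explicitly allowed is denied, so a compromised credential scoped like this cannot write data or touch other services.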

Now, let’s test your understanding with some questions:

  1. What does “Shift Left” mean in the context of DevSecOps?
    a) Delaying security checks until deployment.
    b) Addressing security concerns early in the development process.
    c) Shifting security responsibilities to the operations team.
    d) Ignoring security concerns in favor of rapid development.
  2. Which type of security scanning tool analyzes code for security issues during development?
    a) Dynamic analysis tools
    b) Monitoring tools
    c) Compliance as Code tools
    d) Static analysis tools
  3. What is the purpose of Vulnerability Management in DevSecOps?
    a) To identify security issues early in development.
    b) To automate deployment.
    c) To prioritize, remediate, and track the resolution of vulnerabilities.
    d) To create compliance checks.
  4. What does “Compliance as Code” refer to in DevSecOps?
    a) A coding style that emphasizes compliance with coding standards.
    b) A way to automate checks for compliance requirements using code.
    c) A coding practice that ignores security concerns.
    d) A coding approach that focuses on rapid development.
  5. Which security best practice emphasizes providing users and systems with only the minimum access and permissions needed to perform their tasks?
    a) Secure by Design
    b) Least Privilege
    c) Patch Management
    d) Monitoring and Incident Response

1 b – 2 b – 3 c – 4 b – 5 b

Importance of Monitoring in DevOps

Hello! As we venture further into the world of DevOps, one of the core pillars we’ll explore today is Monitoring and Logging. Monitoring and logging are essential components of any DevOps strategy, and they play a crucial role in ensuring the health, performance, and reliability of your applications and infrastructure.

Why is Monitoring Important in DevOps?

Monitoring is like the radar of DevOps, providing continuous visibility into your systems. Here are some reasons why monitoring is vital:

  1. Early Issue Detection: Monitoring helps detect issues and anomalies in real-time or near real-time, allowing you to address them before they escalate into critical problems.
  2. Performance Optimization: It enables you to identify bottlenecks and performance issues, helping you fine-tune your applications and infrastructure for optimal performance.
  3. Resource Utilization: Monitoring helps you keep an eye on resource consumption, ensuring that you are not over-provisioning or under-provisioning resources.
  4. Scalability: By monitoring application load and resource usage, you can make informed decisions about scaling your infrastructure horizontally or vertically.

Introduction to Monitoring Tools (e.g., Prometheus, Grafana)


Prometheus

  • Prometheus is an open-source monitoring and alerting toolkit built specifically for reliability and scalability. It is designed to collect metrics from various targets, store them efficiently, and allow you to query and visualize the data.
  • Prometheus uses a “pull” model, where it scrapes data from endpoints at regular intervals. It also has a powerful query language (PromQL) for analyzing and alerting on the collected data.
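For a taste of PromQL, here are two common query patterns. The metric names are the conventional ones exposed by many exporters, but treat them as examples rather than metrics your system necessarily has:

```promql
# Per-second HTTP request rate, averaged over the last 5 minutes
rate(http_requests_total[5m])

# 95th-percentile request latency computed from a histogram metric
histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))
```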


Grafana

  • Grafana is a popular open-source visualization and analytics platform that complements Prometheus and other data sources. It allows you to create interactive and customizable dashboards for visualizing your monitoring data.
  • Grafana supports various data sources, making it a versatile tool for creating visually appealing and informative dashboards.

Log Management and Analysis

Logs and Their Importance

  • Logs are records of events and activities in your systems and applications. They are invaluable for diagnosing issues, debugging, and gaining insights into system behavior.
  • Log management involves collecting, storing, and analyzing logs systematically. Centralized log management solutions make it easier to search and analyze logs across multiple servers and applications.
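To make log analysis concrete, here is a toy Python sketch that tallies log entries by severity; the log format is invented for the example:

```python
from collections import Counter

SAMPLE_LOGS = """\
2024-05-01T10:00:01 INFO  user login ok
2024-05-01T10:00:04 ERROR db connection refused
2024-05-01T10:00:05 WARN  retrying db connection
2024-05-01T10:00:06 ERROR db connection refused
"""

def count_by_level(log_text):
    """Tally log lines by their severity field (second whitespace-separated token)."""
    levels = (line.split()[1] for line in log_text.splitlines() if line.strip())
    return Counter(levels)

print(count_by_level(SAMPLE_LOGS))  # Counter({'ERROR': 2, 'INFO': 1, 'WARN': 1})
```

Centralized tools like Elasticsearch do essentially this at scale: parse structured fields out of each line, index them, and let you aggregate and search across every server at once.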

Examples of Log Analysis Tools

  • Elasticsearch and Kibana: Elasticsearch is a search and analytics engine, and Kibana is an open-source data visualization platform. Together, they provide a powerful solution for log management and analysis.
  • Splunk: Splunk is a well-known commercial log management and analysis tool that offers features for searching, monitoring, and alerting on log data.

Incident Response and Alerting

Incident Response

  • Incident response is the process of managing and mitigating incidents that affect the availability, integrity, or confidentiality of your systems. Incidents can be security breaches, system outages, or other unexpected events.
  • Effective incident response involves well-defined procedures, communication plans, and coordination among teams to minimize the impact of incidents.


Alerting

  • Alerting is a critical aspect of incident response and monitoring. It involves setting up notifications and triggers that notify relevant personnel when predefined conditions or thresholds are met or breached.
  • Monitoring tools like Prometheus and Grafana allow you to set up alerts based on metrics and logs, enabling proactive incident response.
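For instance, a Prometheus alerting rule might look like the following sketch (the group name, threshold duration, and labels are illustrative choices, not requirements):

```yaml
# Hypothetical Prometheus alerting rule: notify if a scrape target disappears.
groups:
  - name: availability
    rules:
      - alert: InstanceDown
        expr: up == 0            # "up" is 0 when a scrape target is unreachable
        for: 5m                  # condition must hold for 5 minutes before firing
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} is down"
```

The `for:` clause is the usual guard against paging someone for a transient blip.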

Now, let’s test your understanding with some questions:

  1. Why is monitoring important in DevOps?
    a) To increase the complexity of systems
    b) To detect and address issues in real-time
    c) To reduce resource utilization
    d) To eliminate the need for incident response
  2. Which tool is designed for collecting and querying metrics in a pull model?
    a) Elasticsearch
    b) Kibana
    c) Prometheus
    d) Grafana
  3. What is the primary purpose of Grafana in the context of monitoring?
    a) Storing log data
    b) Visualizing and analyzing monitoring data
    c) Incident response
    d) Executing queries on metrics data
  4. What are logs primarily used for in DevOps?
    a) Debugging and diagnosing issues
    b) Real-time monitoring
    c) Performance optimization
    d) Creating dashboards
  5. What is incident response in DevOps?
    a) A process for managing and mitigating incidents that affect system availability, integrity, or confidentiality
    b) A process for automating log analysis
    c) A method for increasing system complexity
    d) A tool for generating alerts

1 b – 2 c – 3 b – 4 a – 5 a

Configuration Management in DevOps

Today we’ll explore the crucial concept of Configuration Management in DevOps. Configuration Management ensures that your systems and infrastructure are consistent, reliable, and easily manageable.

Introduction to Configuration Management

Configuration Management is the practice of systematically handling changes and updates to a system’s software, hardware, and configurations. In DevOps, Configuration Management is a vital component for maintaining infrastructure, automating tasks, and ensuring the reliability of your systems.

Infrastructure as Code (IaC) Tools

Infrastructure as Code (IaC) is a key principle in Configuration Management. It treats infrastructure, including servers, networks, and storage, as code. This means you can define your infrastructure using code, making it reproducible, version-controlled, and automated.

Two popular IaC tools are Ansible and Terraform:

  • Ansible: Ansible is an automation tool that allows you to define configuration files (playbooks) in a human-readable format. It’s agentless, meaning it doesn’t require installing any software on target machines.
  • Terraform: Terraform is an infrastructure provisioning tool. It uses a declarative configuration language to define and provision infrastructure resources. Terraform provides support for various cloud providers and on-premises infrastructure.
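For a flavor of Terraform’s declarative style, here is a minimal sketch that provisions a single hypothetical AWS EC2 instance; the region, AMI ID, and tags are placeholders:

```hcl
# Hypothetical Terraform sketch: one EC2 instance on AWS.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
```

You describe the desired end state; `terraform plan` and `terraform apply` work out what to create, change, or destroy to reach it.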

Automating Server Configuration

Configuration Management tools like Ansible can automate server configuration, ensuring consistency and reducing manual intervention. Let’s take a look at an example of how Ansible can be used to automate the installation and configuration of software packages:

# Example: Ansible Playbook for Installing Software Packages

- name: Install Software Packages
  hosts: web_servers
  become: yes  # Run tasks with sudo privileges

  tasks:
    - name: Update package manager cache
      apt:
        update_cache: yes
      when: ansible_os_family == "Debian"  # Only for Debian-based systems

    - name: Install required packages
      package:
        name:
          - nginx
          - postgresql
        state: present  # Ensure the packages are installed
In this Ansible playbook, we define tasks to update the package manager cache and install software packages like Nginx and PostgreSQL.

Managing Infrastructure as Code

Managing Infrastructure as Code (IaC) involves versioning your infrastructure code, collaborating with team members, and ensuring that your infrastructure remains in a desired and consistent state.

Version control systems like Git are used to track changes in your IaC code, enabling collaboration and providing a history of modifications. You can store your IaC code in a Git repository and use branches and pull requests for code review and collaboration.

Now, let’s conclude this week’s lesson with some questions to test your understanding:

  1. What is the primary goal of Configuration Management in DevOps?
    a) Automating server backups
    b) Managing changes and updates to system configurations
    c) Monitoring server performance
    d) Developing software applications
  2. What does Infrastructure as Code (IaC) allow you to do?
    a) Use infrastructure without writing any code
    b) Define and manage infrastructure using code
    c) Create virtual machines manually
    d) Automate software development processes
  3. Which tool is commonly used for automating server configuration in Configuration Management?
    a) Git
    b) Terraform
    c) Ansible
    d) Docker
  4. How does version control benefit Infrastructure as Code (IaC) development?
    a) It makes IaC files executable.
    b) It allows tracking changes, collaboration, and version history.
    c) It eliminates the need for server configuration.
    d) It automates software testing.
  5. Which infrastructure provisioning tool uses a declarative configuration language?
    a) Docker
    b) Ansible
    c) Git
    d) Terraform

1 b – 2 b – 3 c – 4 b – 5 d

Introduction to Containers

Imagine a container as a lightweight, standalone executable package that includes everything needed to run a piece of software, including the code, runtime, libraries, and system tools. Containers are like mini-virtual machines, but they’re more efficient and faster to start.

Containers offer several benefits:

  • Portability: Containers can run on any system that supports the containerization platform, making it easy to move applications between environments.
  • Consistency: Containers ensure that an application runs the same way across different environments, from development to production.
  • Resource Efficiency: Containers share the host operating system’s kernel, making them lightweight and efficient.

Docker Basics

Docker is the most popular containerization platform. It simplifies the process of creating, deploying, and managing containers. Here are some key concepts:

  1. Docker Image: A Docker image is a read-only template containing all the necessary instructions to create a container. Images are used as the building blocks for containers.
  2. Docker Container: A Docker container is a running instance of a Docker image. It’s isolated from the host system and other containers, making it a secure and self-contained unit.
  3. Dockerfile: A Dockerfile is a text file that defines the instructions for building a Docker image. It specifies the base image, adds files, sets environment variables, and more.
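As a hedged illustration, a minimal Dockerfile for a hypothetical Python web app might look like this (file names such as app.py and requirements.txt are assumptions for the example):

```dockerfile
# Hypothetical Dockerfile for a small Python web app.

# Base image with Python preinstalled
FROM python:3.12-slim

# Working directory inside the container
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Command run when the container starts
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` turns this file into an image; `docker run myapp` starts a container from it.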

Container Orchestration with Kubernetes (Overview)

While Docker is excellent for running individual containers, when you have complex applications composed of multiple containers, you need a way to manage and orchestrate them. That’s where Kubernetes comes in.

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It allows you to define how your application should run, manage load balancing, handle failover, and more.

Here are some key concepts in Kubernetes:

  • Pod: The smallest deployable unit in Kubernetes, typically containing one or more containers.
  • Deployment: A Kubernetes resource that manages a set of identical pods, ensuring they are always running and scaling as needed.
  • Service: A Kubernetes Service provides networking and load balancing for pods, allowing them to communicate with each other and with external clients.
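Tying these concepts together, here is a minimal hypothetical Deployment manifest; the names and image tag are placeholders:

```yaml
# Hypothetical Kubernetes Deployment: three replicas of an nginx pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # Kubernetes keeps exactly 3 pods running
  selector:
    matchLabels:
      app: web
  template:                  # pod template: what each replica looks like
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```

If a pod crashes or a node fails, the Deployment controller notices the replica count is off and starts a replacement automatically.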

Now, let’s conclude this week’s lesson with some questions to test your understanding:

  1. What is a container in the context of DevOps and Docker?
    a) A lightweight virtual machine
    b) A type of virtual machine
    c) A standalone executable package with code and dependencies
    d) A physical server
  2. Which of the following is a benefit of using containers in DevOps?
    a) Containers have their own dedicated operating system.
    b) Containers are resource-intensive and slow to start.
    c) Containers ensure consistent application behavior across different environments.
    d) Containers are difficult to move between environments.
  3. What is a Docker image used for?
    a) Running a container
    b) Storing data in a container
    c) Defining the instructions for creating a container
    d) Managing multiple containers
  4. Which file is used to define the instructions for building a Docker image?
    a) Dockerfile
    b) requirements.txt
    c) docker-compose.yml
  5. What is Kubernetes primarily used for in DevOps?
    a) Containerization
    b) Version control
    c) Container orchestration
    d) Load testing

1 c – 2 c – 3 c – 4 a – 5 c

Version Control and Collaboration

Hello, everyone! In this post, we’re going to explore a fundamental aspect of software development and DevOps: Version Control and Collaboration.

Introduction to Version Control Systems (VCS)

Version Control is the practice of tracking and managing changes to code and other digital assets.

It plays a crucial role in enabling collaboration among team members and maintaining a history of changes made to a project. One of the most common tools used for version control is a Version Control System (VCS).

Git Fundamentals

Git is the most widely used VCS in the DevOps and software development world. It was created by Linus Torvalds, the same person who created Linux. Git allows developers to:

  • Track changes in their code.
  • Collaborate with team members.
  • Maintain different versions of their software.

Basic Git Concepts

Let’s dive into some basic Git concepts:

  1. Repository (Repo): A Git repository is like a project folder that contains all the files and history of a project.
  2. Commit: A commit is a snapshot of the project at a particular point in time. It includes changes made to files.
  3. Branch: A branch is a separate line of development within a repository. It allows multiple developers to work on different features or bug fixes simultaneously.
  4. Merge: Merging combines changes from one branch into another, typically used to integrate new features or bug fixes.
  5. Pull Request (PR): In Git-based collaboration, a pull request is a way to propose changes to a repository. It allows team members to review and discuss code changes before merging them into the main branch.
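The concepts above can be tried end-to-end in a few commands; this is a throwaway sketch (file names and commit messages are invented), assuming Git is installed:

```shell
# A minimal Git workflow: repo, commit, branch, merge.
workdir=$(mktemp -d) && cd "$workdir"    # throwaway directory for the demo

git init -q                              # 1. create a repository
git config user.email "dev@example.com"  # commit identity (demo only)
git config user.name "Dev"

echo "v1" > app.txt
git add app.txt
git commit -qm "Initial commit"          # 2. commit: snapshot of the project

git switch -q -c feature-x               # 3. branch: separate line of development
echo "v2" >> app.txt
git commit -qam "Add feature"

git switch -q -                          # back to the default branch
git merge -q feature-x                   # 4. merge: integrate the feature
git log --oneline                        # history now shows both commits
```

On a shared platform like GitHub or GitLab, the merge step would typically go through a pull request so teammates can review the change first.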

Now, let’s conclude this post with some questions to test your understanding:

1) What is the primary purpose of a Version Control System (VCS) like Git?
a) To track and manage changes to code and other digital assets.
b) To compile code and create executable files.
c) To write documentation for software projects.
d) To host and run web applications.

2) What is a Git repository (Repo)?
a) A branch of code in Git.
b) A project folder that contains all the files and history of a project.
c) A code review process in Git.
d) A commit in Git.

3) What is a Pull Request (PR) in Git-based collaboration?
a) A request to add new features to a Git repository.
b) A request to delete a branch in Git.
c) A request to merge changes into a repository after review.
d) A request for technical support in Git.

4) What does it mean to “commit” changes in Git?
a) To delete files from a repository.
b) To take a snapshot of the project’s state at a particular point in time.
c) To create a new branch in Git.
d) To merge changes from one branch into another.

5) Why is branching important in Git-based collaboration?
a) Branching is not important in Git.
b) Branches allow multiple developers to work on different features or bug fixes simultaneously.
c) Branches are used to permanently delete code.
d) Branching slows down the development process.

1 a – 2 b – 3 c – 4 b – 5 b

Operations Fundamentals

Hello, everyone! This time we’re going to explore the fundamentals of IT Operations, a critical component in the world of DevOps.

Introduction to IT Operations

IT Operations, often referred to as Ops, is a crucial part of the DevOps equation. This field focuses on managing and maintaining the infrastructure, servers, networks, and other resources that software applications rely on. The goal of IT Operations is to ensure that these systems run smoothly and efficiently.

Traditional IT vs. DevOps

Let’s start by understanding the key differences between traditional IT and DevOps:

  1. Silos vs. Collaboration: In traditional IT, there are often silos where different teams (e.g., development, operations, and QA) work independently. DevOps encourages collaboration and cross-functional teamwork.
  2. Manual vs. Automated Processes: Traditional IT relies heavily on manual processes, which can be slow and error-prone. DevOps emphasizes automation to speed up tasks and reduce human error.
  3. Long Deployment Cycles vs. Continuous Delivery: Traditional IT tends to have long deployment cycles, with infrequent updates. DevOps enables continuous delivery, allowing for frequent and smaller releases.
  4. Risk Aversion vs. Experimentation: Traditional IT often prioritizes stability over change, fearing that updates might cause disruptions. DevOps embraces experimentation and views change as an opportunity for improvement.

Role of Operations in DevOps

In DevOps, Operations teams play a pivotal role in enabling the continuous delivery of software. Here are some of the key responsibilities of operations within a DevOps context:

  • Infrastructure as Code (IaC): Operations teams use tools like Terraform or Ansible to define and manage infrastructure as code, allowing for consistent and automated provisioning of resources.
  • Automation: Automating repetitive tasks, such as server provisioning, configuration management, and deployment, is essential for DevOps success. Tools like Puppet and Chef are commonly used for configuration management.
  • Monitoring and Alerting: Operations teams implement monitoring solutions to keep an eye on system health and performance. This includes tools like Nagios, Prometheus, and Grafana. When issues arise, automated alerts notify teams for rapid response.
  • Scalability and High Availability: Ensuring that systems can scale horizontally to accommodate increased load and maintain high availability is a core concern of operations. Cloud services like AWS, Azure, and Google Cloud offer resources to achieve this.

Now, it’s your turn to think about how you would automate a task. Consider a scenario where you need to automate a repetitive task in your daily life or at work. What task would you choose, and what programming language or tool would you use?

Now, let’s conclude this post with some questions to test your understanding:

1) What is the primary focus of IT Operations in DevOps?
a) Developing software applications
b) Managing and maintaining infrastructure
c) Providing customer support
d) Creating user interfaces

2) What are the key differences between traditional IT and DevOps?
a) Traditional IT prioritizes risk-taking, while DevOps prioritizes stability.
b) Traditional IT encourages automation, while DevOps relies on manual processes.
c) Traditional IT has silos, while DevOps promotes collaboration.
d) Traditional IT emphasizes frequent and smaller releases, while DevOps prefers infrequent updates.

3) Which of the following tasks is typically automated by DevOps operations teams?
a) Writing code for new software features
b) Monitoring server performance
c) Managing customer support tickets
d) Creating marketing materials

4) What is the purpose of infrastructure as code (IaC) in DevOps?
a) To manually configure servers and networks
b) To automate the provisioning and management of infrastructure
c) To write code for application development
d) To monitor server performance

5) Which of the following tools is commonly used for configuration management in DevOps?
a) Terraform
b) Nagios
c) Python
d) Git

1 b – 2 c – 3 b – 4 b – 5 a

Software Development Fundamentals

Today we’re going to dive into the fundamental aspects of software development. This post is all about building a solid understanding of how software is created and how it relates to DevOps practices.

Introduction to Software Development

Software development is at the core of the DevOps process. Before we can understand DevOps, it’s essential to grasp the basics of software development.

Waterfall vs. Agile methodologies

Historically, software development followed a rigid approach known as the Waterfall methodology.

It was a linear process with distinct phases:

requirements ==> design ==> implementation ==> testing ==> maintenance.

Agile methodologies, on the other hand, introduced a more flexible and iterative approach, emphasizing collaboration, customer feedback, and adaptability.

In DevOps, we often use Agile practices to enable continuous delivery and deployment.

Agile and DevOps Alignment

Agile and DevOps go hand in hand. Agile methodologies promote close collaboration between developers, testers, and stakeholders, encouraging incremental and frequent software releases.

DevOps extends this collaboration to include operations, aiming for the seamless integration of development and IT operations.

Role of Developers in DevOps

Developers play a crucial role in the DevOps journey. They write the code that powers applications and services, but in a DevOps culture, they are also responsible for ensuring that their code can be easily and reliably deployed. This means writing code that is modular, well-documented, and thoroughly tested.

Let’s consider a few key takeaways from today’s post:

1) What is the primary difference between Waterfall and Agile methodologies in software development?
a) Waterfall emphasizes flexibility, while Agile is more structured.
b) Waterfall follows a linear approach, while Agile is iterative and collaborative.
c) Waterfall focuses on continuous deployment, while Agile is more traditional.
d) Waterfall promotes faster development cycles than Agile.

2) In the context of Agile, what is the significance of customer feedback?
a) Customer feedback is not relevant in Agile.
b) Agile teams use customer feedback to improve their products continuously.
c) Customer feedback is only considered after the software is fully developed.
d) Agile teams wait until the end of the project to gather customer feedback.

3) Why is it important for developers to write modular code in DevOps?
a) Modular code is only relevant for large projects.
b) Modular code makes it easier to test and maintain software.
c) Modular code has no impact on DevOps practices.
d) Modular code is a requirement in Waterfall, not DevOps.

4) How does DevOps extend the collaboration introduced by Agile?
a) DevOps focuses on reducing collaboration between teams.
b) DevOps eliminates the need for collaboration altogether.
c) DevOps includes operations teams in the collaboration between development and IT operations.
d) DevOps removes the need for Agile practices.

5) Which of the following best describes the DevOps approach to software development?
a) DevOps replaces software development with IT operations.
b) DevOps focuses solely on writing code.
c) DevOps aims to integrate development and IT operations seamlessly.
d) DevOps eliminates the need for software development.

1 b – 2 b – 3 b – 4 c – 5 c

Continuous Delivery (CD)

CD is the automated process of delivering code changes to servers quickly and efficiently.

It is the natural extension of CI.

In other words, CD is the practice of making sure that the code is always ready to be deployed to production.

The (old) process

The operations team receives regular requests to deploy CI artifacts to servers so that QA can test them.

Developers and the operations team need to work together to fix any deployment issues, because deployment is not just copying the artifact: it also involves configuration changes, networking, and so on.

Only then can the QA team test further and send back feedback.

So there is too much human intervention in this process.

The solution is automation

Every step in deployment should be automated (server provisioning, dependency installation, configuration changes, deployment, …)


  • System automation (Ansible, Puppet, Chef,…)
  • Cloud infrastructure automation (Terraform, CloudFormation, …)
  • CI/CD automation (Jenkins, Octopus Deploy, …)
  • Code testing
  • and many others

The Ops team writes automation code for deployment.

QA testers write automation code for software tests.

The Dev team writes application code and unit tests.

Continuous Integration (CI)

CI is a DevOps development methodology in which developers commit regularly to a centralized repository, where builds and tests are executed automatically.

Through CI, developers commit frequently to a repo managed by a version control system, Git for instance.

Before committing, they can run unit tests locally.

A CI service then builds the project automatically and runs unit tests on the new code. If the code passes, the build produces an artifact and stores it in a software repository.

The big advantage of this approach is that issues are discovered early.


Tools used in CI include:

  • IDE, for coding (Eclipse, Visual Studio, Atom, PyCharm, …)
  • VCS, to store the code (Git, SVN, TFS, …)
  • Build tools, based on the programming language (Maven, Ant, MSBuild, Visual Build, …)
  • Software repository, to store artifacts (Nexus, JFrog, Archiva, …)
  • CI tools, to integrate everything (Jenkins, CircleCI, Bamboo, …)
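Putting these pieces together, a CI pipeline definition might look like this hypothetical Jenkins declarative pipeline for a Maven project; the stage names and commands are illustrative:

```groovy
// Hypothetical Jenkinsfile: build, unit-test, and publish an artifact on every commit.
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                sh 'mvn -B package -DskipTests'   // compile and package the code
            }
        }
        stage('Unit Tests') {
            steps {
                sh 'mvn -B test'                  // run the unit tests
            }
        }
        stage('Publish Artifact') {
            steps {
                sh 'mvn -B deploy -DskipTests'    // push the artifact to a repo such as Nexus
            }
        }
    }
}
```

Any failing stage stops the pipeline, which is precisely how CI surfaces broken builds and failing tests minutes after the commit.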