|
|
{% extends "layout.html" %}
|
|
|
{% block content %}
|
|
|
<!DOCTYPE html>
|
|
|
<html lang="en">
|
|
|
<head>
|
|
|
<meta charset="UTF-8">
|
|
|
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
|
|
<title>Study Guide: Neural Networks for Classification</title>
|
|
|
|
|
|
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
|
|
|
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
|
|
|
<style>
|
|
|
|
|
|
body {
|
|
|
background-color: #ffffff;
|
|
|
color: #000000;
|
|
|
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
|
|
|
font-weight: normal;
|
|
|
line-height: 1.8;
|
|
|
margin: 0;
|
|
|
padding: 20px;
|
|
|
}
|
|
|
|
|
|
|
|
|
.container {
|
|
|
max-width: 800px;
|
|
|
margin: 0 auto;
|
|
|
padding: 20px;
|
|
|
}
|
|
|
|
|
|
|
|
|
h1, h2, h3 {
|
|
|
color: #000000;
|
|
|
border: none;
|
|
|
font-weight: bold;
|
|
|
}
|
|
|
|
|
|
h1 {
|
|
|
text-align: center;
|
|
|
border-bottom: 3px solid #000;
|
|
|
padding-bottom: 10px;
|
|
|
margin-bottom: 30px;
|
|
|
font-size: 2.5em;
|
|
|
}
|
|
|
|
|
|
h2 {
|
|
|
font-size: 1.8em;
|
|
|
margin-top: 40px;
|
|
|
border-bottom: 1px solid #ddd;
|
|
|
padding-bottom: 8px;
|
|
|
}
|
|
|
|
|
|
h3 {
|
|
|
font-size: 1.3em;
|
|
|
margin-top: 25px;
|
|
|
}
|
|
|
|
|
|
|
|
|
strong {
|
|
|
font-weight: 900;
|
|
|
}
|
|
|
|
|
|
|
|
|
p, li {
|
|
|
font-size: 1.1em;
|
|
|
border-bottom: 1px solid #e0e0e0;
|
|
|
padding-bottom: 10px;
|
|
|
margin-bottom: 10px;
|
|
|
}
|
|
|
|
|
|
|
|
|
li:last-child {
|
|
|
border-bottom: none;
|
|
|
}
|
|
|
|
|
|
|
|
|
ul {
|
|
|
list-style-type: none;
|
|
|
padding-left: 0;
|
|
|
}
|
|
|
|
|
|
li::before {
|
|
|
content: "β’";
|
|
|
color: #000;
|
|
|
font-weight: bold;
|
|
|
display: inline-block;
|
|
|
width: 1em;
|
|
|
margin-left: 0;
|
|
|
}
|
|
|
|
|
|
|
|
|
pre {
|
|
|
background-color: #f4f4f4;
|
|
|
border: 1px solid #ddd;
|
|
|
border-radius: 5px;
|
|
|
padding: 15px;
|
|
|
white-space: pre-wrap;
|
|
|
word-wrap: break-word;
|
|
|
font-family: "Courier New", Courier, monospace;
|
|
|
font-size: 0.95em;
|
|
|
font-weight: normal;
|
|
|
color: #333;
|
|
|
border-bottom: none;
|
|
|
}
|
|
|
|
|
|
|
|
|
.story {
|
|
|
background-color: #f8f9fa;
|
|
|
border-left: 4px solid #dc3545;
|
|
|
margin: 15px 0;
|
|
|
padding: 10px 15px;
|
|
|
font-style: italic;
|
|
|
color: #555;
|
|
|
font-weight: normal;
|
|
|
border-bottom: none;
|
|
|
}
|
|
|
|
|
|
|
|
|
table {
|
|
|
width: 100%;
|
|
|
border-collapse: collapse;
|
|
|
margin: 25px 0;
|
|
|
}
|
|
|
th, td {
|
|
|
border: 1px solid #ddd;
|
|
|
padding: 12px;
|
|
|
text-align: left;
|
|
|
}
|
|
|
th {
|
|
|
background-color: #f2f2f2;
|
|
|
font-weight: bold;
|
|
|
}
|
|
|
|
|
|
|
|
|
@media (max-width: 768px) {
|
|
|
body, .container {
|
|
|
padding: 10px;
|
|
|
}
|
|
|
h1 { font-size: 2em; }
|
|
|
h2 { font-size: 1.5em; }
|
|
|
h3 { font-size: 1.2em; }
|
|
|
p, li { font-size: 1em; }
|
|
|
pre { font-size: 0.85em; }
|
|
|
table, th, td { font-size: 0.9em; }
|
|
|
}
|
|
|
</style>
|
|
|
</head>
|
|
|
<body>
|
|
|
|
|
|
<div class="container">
|
|
|
<h1>🧠 Study Guide: Neural Networks for Classification</h1>
|
|
|
|
|
|
|
|
|
|
|
|
<div>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
<!-- Button styled with Tailwind utility classes: a hard shadow gives a 3D effect,
     and the active (pressed) state moves the button down and removes the shadow. -->
<a
|
|
|
href="/Neural-Networks-for-Classification-three"
|
|
|
target="_blank"
rel="noopener noreferrer"
|
|
|
onclick="playSound()"
|
|
|
class="
|
|
|
cursor-pointer
|
|
|
inline-block
|
|
|
relative
|
|
|
bg-blue-500
|
|
|
text-white
|
|
|
font-bold
|
|
|
py-4 px-8
|
|
|
rounded-xl
|
|
|
text-2xl
|
|
|
transition-all
|
|
|
duration-150
|
|
|
|
|
|
|
|
|
shadow-[0_8px_0_rgb(29,78,216)]
|
|
|
|
|
|
|
|
|
active:shadow-none
|
|
|
active:translate-y-[8px]
|
|
|
">
|
|
|
Tap Me!
|
|
|
</a>
|
|
|
</div>
|
|
|
|
|
|
<script>
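// Plays a click sound if an <audio id="clickSound"> element exists on the page
// (e.g., one provided by layout.html); otherwise this is a no-op.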
|
|
|
function playSound() {
|
|
|
const audio = document.getElementById("clickSound");
|
|
|
if (audio) {
|
|
|
audio.currentTime = 0;
|
|
|
audio.play().catch(e => console.log("Audio play failed:", e));
|
|
|
}
|
|
|
}
|
|
|
</script>
|
|
|
|
|
|
|
|
|
|
|
|
<h2>🔹 Core Concepts</h2>
|
|
|
<div class="story">
|
|
|
<p><strong>Story-style intuition: The Corporate Hierarchy</strong></p>
|
|
|
<p>Think of a large company trying to decide if a new project proposal is a "Go" or "No-Go". The raw data (market research, costs) goes to the junior analysts (<strong>input layer</strong>). Each analyst specializes in one piece of data. They pass their summaries to mid-level managers (<strong>hidden layers</strong>), who combine these summaries to spot higher-level patterns. Finally, the CEO (<strong>output layer</strong>) takes the managers' final reports and makes the single classification decision: Go or No-Go. A <strong>Deep Neural Network</strong> is just a company with many layers of management, allowing it to understand extremely complex problems.</p>
|
|
|
</div>
|
|
|
<h3>What is a Neural Network?</h3>
|
|
|
<p>
|
|
|
A <strong>Neural Network (NN)</strong> is a computational model inspired by the structure and function of the human brain. It's composed of interconnected nodes, called artificial neurons, organized in layers. They are excellent at finding complex patterns in data.
|
|
|
</p>
|
|
|
|
|
|
<h3>Shallow vs. Deep Neural Networks</h3>
|
|
|
<ul>
|
|
|
<li><strong>Shallow NN:</strong> A network with only one hidden layer. It's like a small company with just one layer of management. Good for simpler problems.</li>
|
|
|
<li><strong>Deep NN (DNN):</strong> A network with two or more hidden layers. The "depth" allows it to learn hierarchical features, making it powerful for complex tasks like image and speech recognition.</li>
|
|
|
</ul>
|
|
|
|
|
|
<h2>🔹 Neural Network Architecture</h2>
|
|
|
<div class="story">
|
|
|
<p><strong>Story example: The Assembly Line of Information</strong></p>
|
|
|
<p>An NN works like an assembly line. Raw materials (<strong>input data</strong>) enter at one end. Each station (<strong>neuron</strong>) performs a specific task: it takes materials from previous stations, weighs their importance (<strong>weights</strong>), adds a standard adjustment (<strong>bias</strong>), and decides whether to pass its result along (<strong>activation function</strong>). The process of the product moving from start to finish is <strong>Forward Propagation</strong>. If the final product is faulty, a manager goes back down the line (<strong>Backpropagation</strong>), telling each station exactly how to adjust its process to fix the error.</p>
|
|
|
</div>
|
|
|
|
|
|
|
|
|
<p><em>[Image of a simple neural network architecture]</em></p>
|
|
|
|
|
|
<ul>
|
|
|
<li><strong>Input Layer:</strong> Receives the initial data or features (e.g., the pixels of an image).</li>
|
|
|
<li><strong>Hidden Layers:</strong> One or more layers between the input and output. This is where the network learns to transform the data to find patterns.</li>
|
|
|
<li><strong>Output Layer:</strong> Produces the final result. For classification, this is typically the probability for each class.</li>
|
|
|
</ul>
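<p>To make the layer sizes concrete (using the same shape as the Keras example later in this guide): with 20 input features, hidden layers of 64 and 32 neurons, and 1 output neuron, each layer has (inputs × neurons) + biases parameters. That gives 20 × 64 + 64 = 1,344 for the first hidden layer, 64 × 32 + 32 = 2,080 for the second, and 32 × 1 + 1 = 33 for the output layer, for a total of 3,457 trainable parameters.</p>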
|
|
|
|
|
|
<h2>🔹 Mathematical Foundation</h2>
|
|
|
<div class="story">
|
|
|
<p><strong>Story example: The Neuron's Decision</strong></p>
|
|
|
<p>Each neuron is a tiny decision-maker. It listens to several colleagues (<strong>inputs</strong>). It trusts some colleagues more than others (their inputs have higher <strong>weights</strong>). It also has its own personal opinion (a <strong>bias</strong>). It adds up all the weighted opinions and its own bias to get a final score. Based on this score, it decides how strongly to "shout" its conclusion to the next layer of neurons. This "shout" is governed by its <strong>activation function</strong>.</p>
|
|
|
</div>
|
|
|
<h3>Weighted Sum & Activation</h3>
|
|
|
<p>$$ z = w_1x_1 + w_2x_2 + \dots + w_nx_n + b $$</p>
|
|
|
<p>$$ a = f(z) $$</p>
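<p>With illustrative numbers: suppose a neuron receives inputs \(x_1 = 2\) and \(x_2 = 1\), with weights \(w_1 = 0.5\) and \(w_2 = -0.3\) and bias \(b = 0.1\). Then</p>
<p>$$ z = (0.5)(2) + (-0.3)(1) + 0.1 = 0.8 $$</p>
<p>A ReLU neuron would output \(a = \max(0, 0.8) = 0.8\), while a Sigmoid neuron would output \(a = \frac{1}{1 + e^{-0.8}} \approx 0.69\).</p>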
|
|
|
<ul>
|
|
|
<li><strong>Activation Functions:</strong>
|
|
|
<ul>
|
|
|
<li>
|
|
|
<strong>Sigmoid:</strong>
|
|
|
<p>The Sigmoid function takes any real value and squashes it to a range between 0 and 1. This is perfect for the output layer in a <strong>binary classification</strong> task, where the output can be interpreted as a probability.</p>
|
|
|
<p><strong>Example:</strong> In an email spam detector, a Sigmoid output of <strong>0.95</strong> means there is a 95% probability that the email is spam.</p>
|
|
|
<p class="story"><strong>Story Analogy: The Dimmer Switch.</strong> Think of a Sigmoid function as a dimmer switch for a light. It's not just on or off; it can be 0% bright (output 0), 100% bright (output 1), or any percentage in between. This makes it ideal for representing the probability of a single outcome.</p>
|
|
|
</li>
|
|
|
<li>
|
|
|
<strong>Softmax:</strong>
|
|
|
<p>The Softmax function is used in the output layer for <strong>multi-class classification</strong>. It takes a vector of raw scores (logits) and transforms them into a probability distribution, where each value is between 0 and 1, and all values sum up to 1.</p>
|
|
|
<p><strong>Example:</strong> An image classifier for animals might output raw scores of <code>[cat: 2.5, dog: 1.8, bird: 0.5]</code>. After applying Softmax, this becomes a probability distribution of roughly <code>[cat: 0.61, dog: 0.30, bird: 0.08]</code>, indicating about a 61% chance the image is a cat (the NumPy sketch after this list reproduces this calculation).</p>
|
|
|
<p class="story"><strong>Story Analogy: The Voting Poll.</strong> Imagine an election with multiple candidates (classes). Each candidate gets a certain number of raw votes (the logits). The Softmax function is the pollster that converts those raw vote counts into a final percentage for each candidate, ensuring the total percentage adds up to 100%. This tells you the relative likelihood of each candidate winning.</p>
|
|
|
</li>
|
|
|
<li>
|
|
|
<strong>ReLU (Rectified Linear Unit):</strong>
|
|
|
<p>ReLU is the most popular activation function for <strong>hidden layers</strong>. It's a very simple function: if the input is positive, it passes it through unchanged; if it's negative, it outputs zero. This simplicity makes it very fast and helps prevent the vanishing gradient problem.</p>
|
|
|
<p><strong>Example:</strong> If a neuron calculates a weighted sum of <code>z = -0.8</code>, the ReLU activation will be <code>a = 0</code>. If it calculates <code>z = 1.2</code>, the activation will be <code>a = 1.2</code>.</p>
|
|
|
<p class="story"><strong>Story Analogy: The One-Way Gate.</strong> Think of ReLU as a one-way gate that only opens for positive signals. If a positive signal arrives, the gate lets it pass through at full strength. If a negative signal arrives, the gate stays shut, blocking it completely. This simple but effective "go/no-go" mechanism is incredibly efficient for the internal workings of the network.</p>
|
|
|
</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li><strong>Loss Functions:</strong> The "report card" that tells the network how wrong its predictions are.
|
|
|
<ul>
|
|
|
<li><strong>Binary Cross-Entropy:</strong> Used for two-class problems.</li>
|
|
|
<li><strong>Categorical Cross-Entropy:</strong> Used for multi-class problems.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
</ul>
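<p>To make the three activation functions concrete, here is a minimal NumPy sketch (illustrative only, not tied to any particular framework) that reproduces the Sigmoid, Softmax, and ReLU examples above:</p>
<pre><code>
import numpy as np

def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Turns a vector of raw scores (logits) into probabilities that sum to 1.
    # Subtracting the max first is a standard numerical-stability trick.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def relu(z):
    # Positive values pass through unchanged; negative values become 0.
    return np.maximum(0.0, z)

print(sigmoid(2.94))                        # ~0.95 (the "95% spam" example)
print(softmax(np.array([2.5, 1.8, 0.5])))   # ~[0.61 0.30 0.08] for cat/dog/bird
print(relu(np.array([-0.8, 1.2])))          # [0.  1.2]
</code></pre>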
|
|
|
|
|
|
<h2>🔹 Key Concepts in Training</h2>
|
|
|
<div class="story">
|
|
|
<p><strong>Story: The Student Studying for an Exam</strong></p>
|
|
|
<p>A student (the model) is studying a textbook (the dataset). One full read-through of the book is an <strong>Epoch</strong>. If they study in chunks, say 32 pages at a time, that's the <strong>Batch Size</strong>. Each time they review a chunk of pages is an <strong>Iteration</strong>. How much they adjust their notes after finding a mistake is the <strong>Learning Rate</strong>. Memorizing the book word-for-word is <strong>Overfitting</strong>, while not studying enough is <strong>Underfitting</strong>.</p>
|
|
|
</div>
|
|
|
<ul>
|
|
|
<li><strong>Epoch:</strong> One complete pass through the entire training dataset.</li>
|
|
|
<li><strong>Batch Size:</strong> The number of training examples used in one iteration.</li>
|
|
|
<li><strong>Learning Rate:</strong> A hyperparameter that controls how much to change the model in response to the estimated error each time the weights are updated.</li>
|
|
|
<li><strong>Regularization:</strong> Techniques to prevent overfitting (a short Keras sketch combining them follows this list).
|
|
|
<ul>
|
|
|
<li><strong>Dropout:</strong> Randomly "turning off" a fraction of neurons during training to prevent over-reliance on any single neuron.</li>
|
|
|
<li><strong>L2 Penalty:</strong> Adds a cost to having large weights, encouraging the model to use smaller, simpler weights.</li>
|
|
|
<li><strong>Early Stopping:</strong> Monitoring the performance on a validation set and stopping training when performance stops improving.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
</ul>
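<p>As a minimal sketch of how the regularization techniques above fit together (assuming the same Keras API used in the implementation section below; the layer sizes and hyperparameter values are illustrative):</p>
<pre><code>
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.regularizers import l2
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([
    # L2 penalty: discourage large weights in this hidden layer.
    Dense(64, activation='relu', kernel_regularizer=l2(0.01), input_shape=(20,)),
    # Dropout: randomly silence half the neurons during each training step.
    Dropout(0.5),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Early stopping: halt training once validation loss stops improving,
# and keep the best weights seen so far.
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
# model.fit(X_train, y_train, validation_split=0.2, epochs=100, callbacks=[early_stop])
</code></pre>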
|
|
|
|
|
|
<h2>🔹 Variants of Neural Networks</h2>
|
|
|
<table>
|
|
|
<thead>
|
|
|
<tr>
|
|
|
<th>Network Type</th>
|
|
|
<th>Story & Analogy</th>
|
|
|
</tr>
|
|
|
</thead>
|
|
|
<tbody>
|
|
|
<tr>
|
|
|
<td><strong>Deep Neural Network (DNN)</strong></td>
|
|
|
<td>A large corporation with many layers of management, capable of solving very complex business problems.</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td><strong>Convolutional Neural Network (CNN)</strong></td>
|
|
|
<td>A team of image specialists. They use special scanning tools (<strong>filters</strong>) to find simple patterns (edges, corners) and then combine them to recognize complex objects (faces, cars).</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td><strong>Recurrent Neural Network (RNN)</strong></td>
|
|
|
<td>A team that has a short-term memory. When processing a sentence, they remember the previous words to understand the context of the current word. Ideal for sequences like text or speech.</td>
|
|
|
</tr>
|
|
|
</tbody>
|
|
|
</table>
|
|
|
|
|
|
<h2>🔹 Strengths &amp; Weaknesses</h2>
|
|
|
<div class="story">
|
|
|
<p>A Neural Network is like a powerful but mysterious alien artifact. It can perform incredible feats (<strong>learn complex patterns</strong>) that no other tool can. However, it requires a huge amount of energy to run (<strong>data and computation</strong>), it's a "black box" because its inner workings are hard to understand, and you need to press its buttons (<strong>hyperparameters</strong>) in exactly the right way to get it to work.</p>
|
|
|
</div>
|
|
|
<h3>Advantages:</h3>
|
|
|
<ul>
|
|
|
<li>✅ Can learn highly complex, non-linear decision boundaries.</li>
|
|
|
<li>✅ State-of-the-art performance on unstructured data like images, text, and audio.</li>
|
|
|
<li>✅ Can scale with massive datasets.</li>
|
|
|
</ul>
|
|
|
<h3>Disadvantages:</h3>
|
|
|
<ul>
|
|
|
<li>❌ Requires large amounts of data to train effectively.</li>
|
|
|
<li>❌ Computationally expensive and slow to train.</li>
|
|
|
<li>❌ Acts as a "black box," making it difficult to interpret its decisions.</li>
|
|
|
</ul>
|
|
|
|
|
|
<h2>🔹 Python Implementation (Keras/TensorFlow)</h2>
|
|
|
<div class="story">
|
|
|
<p>Here, we use the <code>keras</code> library to build our "corporate hierarchy". We create a <code>Sequential</code> model, which is like setting up a new company. We add layers (departments) one by one; in the code below they are passed as a list to the <code>Sequential</code> constructor. Then, we <code>compile</code> the company's rulebook: its goal (<strong>loss</strong>), its method for improving (<strong>optimizer</strong>), and how it will be graded (<strong>metrics</strong>). Finally, we <code>fit</code> the model, which is the process of training our new company on historical data.</p>
|
|
|
</div>
|
|
|
<pre><code>
|
|
|
import tensorflow as tf
|
|
|
from tensorflow.keras.models import Sequential
|
|
|
from tensorflow.keras.layers import Dense, Dropout
|
|
|
from sklearn.model_selection import train_test_split
|
|
|
from sklearn.preprocessing import StandardScaler
|
|
|
from sklearn.datasets import make_classification
|
|
|
|
|
|
# 1. Generate sample data
|
|
|
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_classes=2, random_state=42)
|
|
|
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
|
|
|
|
|
|
# 2. Normalize data
|
|
|
scaler = StandardScaler()
|
|
|
X_train = scaler.fit_transform(X_train)
|
|
|
X_test = scaler.transform(X_test)
|
|
|
|
|
|
# 3. Define the model
|
|
|
model = Sequential([
|
|
|
Dense(64, activation='relu', input_shape=(X_train.shape[1],)), # Hidden Layer 1
|
|
|
Dropout(0.5), # Regularization
|
|
|
Dense(32, activation='relu'), # Hidden Layer 2
|
|
|
Dense(1, activation='sigmoid') # Output Layer
|
|
|
])
|
|
|
|
|
|
# 4. Compile the model
|
|
|
model.compile(optimizer='adam',
|
|
|
loss='binary_crossentropy',
|
|
|
metrics=['accuracy'])
|
|
|
|
|
|
# 5. Train the model
|
|
|
history = model.fit(X_train, y_train,
|
|
|
epochs=50,
|
|
|
batch_size=32,
|
|
|
validation_split=0.2,
|
|
|
verbose=0)
|
|
|
|
|
|
# 6. Evaluate the model
|
|
|
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
|
|
|
print(f"Test Accuracy: {accuracy*100:.2f}%")
|
|
|
</code></pre>
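<p>As a small follow-up (continuing from the trained model above), the Sigmoid probabilities can be turned into hard class labels by thresholding at 0.5:</p>
<pre><code>
# Probabilities from the sigmoid output layer, then hard 0/1 class labels.
y_prob = model.predict(X_test, verbose=0)
y_pred = (y_prob > 0.5).astype("int32")
print(y_pred[:5].ravel())
</code></pre>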
|
|
|
|
|
|
<h2>🔹 Key Terminology Explained</h2>
|
|
|
<div class="story">
|
|
|
<p><strong>The Story: The Company's Training Manual</strong></p>
|
|
|
<p>Let's demystify the core processes and rules that govern how our neural network company learns and improves.</p>
|
|
|
</div>
|
|
|
<h3>Backpropagation</h3>
|
|
|
<p>
|
|
|
<strong>What it is:</strong> The algorithm used to train neural networks. It calculates the error at the output and propagates it backward through the network layers, determining how much each weight and bias contributed to the error. This information is then used by the optimizer (like Gradient Descent) to update the weights.
|
|
|
</p>
|
|
|
<p>
|
|
|
<strong>Story Example:</strong> In our corporate hierarchy, the final project fails (an error). <strong>Backpropagation</strong> is the process where the CEO blames the senior managers, who in turn figure out which mid-level managers gave them bad information, who then blame the junior analysts. This chain of blame assignment precisely identifies how much each employee at every level needs to adjust their work to fix the overall process.
|
|
|
</p>
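<p>The sketch below shows the idea on the smallest possible "company": a single Sigmoid neuron trained with binary cross-entropy and plain gradient descent. It is a toy illustration of the forward pass, backpropagation, and weight update, not a full multi-layer implementation (the learning rate of 0.1 and the starting values are arbitrary):</p>
<pre><code>
import numpy as np

x = np.array([2.0, 1.0])    # inputs
w = np.array([0.5, -0.3])   # weights
b = 0.1                     # bias
y = 1.0                     # true label

for step in range(100):
    # Forward propagation
    z = np.dot(w, x) + b
    a = 1.0 / (1.0 + np.exp(-z))                         # sigmoid activation
    loss = -(y * np.log(a) + (1 - y) * np.log(1 - a))    # binary cross-entropy

    # Backpropagation: for sigmoid + binary cross-entropy, dLoss/dz = a - y
    dz = a - y
    dw = dz * x    # gradient with respect to each weight
    db = dz        # gradient with respect to the bias

    # Gradient descent update
    w -= 0.1 * dw
    b -= 0.1 * db

print(f"final prediction: {a:.3f}, loss: {loss:.4f}")
</code></pre>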
|
|
|
<h3>Activation Function</h3>
|
|
|
<p>
|
|
|
<strong>What it is:</strong> A function applied to the output of a neuron that determines whether it should be activated ("fire") or not. It introduces non-linearity into the network, allowing it to learn complex patterns.
|
|
|
</p>
|
|
|
<p>
|
|
|
<strong>Story Example:</strong> An activation function is like a neuron's "excitement" level. A neuron listens to all the evidence, and if the total evidence exceeds a certain threshold, it gets excited and fires a strong signal. If not, it stays quiet. This on/off or graded response is what allows the network to make complex, non-linear decisions, rather than just calculating simple averages.
|
|
|
</p>
|
|
|
|
|
|
<h3>Dropout</h3>
|
|
|
<p>
|
|
|
<strong>What it is:</strong> A regularization technique where, during each training iteration, a random fraction of neurons are temporarily "dropped out" or ignored.
|
|
|
</p>
|
|
|
<p>
|
|
|
<strong>Story Example:</strong> Imagine a team of employees working on a project. To ensure no single employee becomes a single point of failure, the manager uses <strong>Dropout</strong>. Each day, they tell a few random employees to take the day off. This forces the remaining team members to become more versatile and robust, unable to rely on any one superstar. The result is a more resilient team that performs better overall.
|
|
|
</p>
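<p>A rough NumPy illustration of the idea ("inverted" dropout, with a 50% rate matching the <code>Dropout(0.5)</code> layer in the Keras example above; the activation values are made up):</p>
<pre><code>
import numpy as np

rng = np.random.default_rng(0)
activations = np.array([0.8, 0.2, 1.5, 0.0, 0.9, 0.4])  # one hidden layer's outputs
rate = 0.5                                               # fraction of neurons to drop

# Training time: zero out a random subset of neurons and rescale the rest,
# so the expected total activation stays the same.
mask = rng.random(activations.shape) >= rate
print(activations * mask / (1.0 - rate))

# Inference time: dropout is switched off and every neuron participates.
</code></pre>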
|
|
|
|
|
|
<h3>Epoch</h3>
|
|
|
<p>
|
|
|
<strong>What it is:</strong> One complete forward and backward pass of all the training examples through the neural network.
|
|
|
</p>
|
|
|
<p>
|
|
|
<strong>Story Example:</strong> An epoch is like one full school year for our neural network student. During the year, they study every chapter in the textbook (all the training data) at least once. For a model to become truly proficient, it often needs to go through multiple school years (epochs) to master the material.
|
|
|
</p>
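<p>The bookkeeping that links these terms: for example, with 1,000 training examples and a batch size of 32, one epoch consists of ⌈1000 / 32⌉ = 32 iterations (the last batch is smaller), so training for 50 epochs performs roughly 1,600 weight updates.</p>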
|
|
|
</div>
|
|
|
|
|
|
</body>
|
|
|
</html>
|
|
|
|
|
|
{% endblock %} |