{% extends "layout.html" %}
{% block content %}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Study Guide: Self-Training</title>
<!-- MathJax for rendering mathematical formulas -->
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<style>
/* General Body Styles */
body {
background-color: #ffffff; /* White background */
color: #000000; /* Black text */
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
font-weight: normal;
line-height: 1.8;
margin: 0;
padding: 20px;
}
/* Container for centering content */
.container {
max-width: 800px;
margin: 0 auto;
padding: 20px;
}
/* Headings */
h1, h2, h3 {
color: #000000;
border: none;
font-weight: bold;
}
h1 {
text-align: center;
border-bottom: 3px solid #000;
padding-bottom: 10px;
margin-bottom: 30px;
font-size: 2.5em;
}
h2 {
font-size: 1.8em;
margin-top: 40px;
border-bottom: 1px solid #ddd;
padding-bottom: 8px;
}
h3 {
font-size: 1.3em;
margin-top: 25px;
}
/* Main words are even bolder */
strong {
font-weight: 900;
}
/* Paragraphs and List Items with a line below */
p, li {
font-size: 1.1em;
border-bottom: 1px solid #e0e0e0; /* Light gray line below each item */
padding-bottom: 10px; /* Space between text and the line */
margin-bottom: 10px; /* Space below the line */
}
/* Remove bottom border from the last item in a list for cleaner look */
li:last-child {
border-bottom: none;
}
/* Ordered lists */
ol {
list-style-type: decimal;
padding-left: 20px;
}
ol li {
padding-left: 10px;
}
/* Unordered Lists */
ul {
list-style-type: none;
padding-left: 0;
}
ul li::before {
content: "•";
color: #000;
font-weight: bold;
display: inline-block;
width: 1em;
margin-left: 0;
}
/* Code block styling */
pre {
background-color: #f4f4f4;
border: 1px solid #ddd;
border-radius: 5px;
padding: 15px;
white-space: pre-wrap;
word-wrap: break-word;
font-family: "Courier New", Courier, monospace;
font-size: 0.95em;
font-weight: normal;
color: #333;
border-bottom: none;
}
/* Self-Training Specific Styling */
.story-st {
background-color: #fffbeb;
border-left: 4px solid #ffc107; /* Amber accent */
margin: 15px 0;
padding: 10px 15px;
font-style: italic;
color: #555;
font-weight: normal;
border-bottom: none;
}
.story-st p, .story-st li {
border-bottom: none;
}
.example-st {
background-color: #fffefa;
padding: 15px;
margin: 15px 0;
border-radius: 5px;
border-left: 4px solid #ffd560; /* Lighter Amber accent */
}
.example-st p, .example-st li {
border-bottom: none !important;
}
/* Quiz Styling */
.quiz-section {
background-color: #fafafa;
border: 1px solid #ddd;
border-radius: 5px;
padding: 20px;
margin-top: 30px;
}
.quiz-answers {
background-color: #fffefa;
padding: 15px;
margin-top: 15px;
border-radius: 5px;
}
/* Table Styling */
table {
width: 100%;
border-collapse: collapse;
margin: 25px 0;
}
th, td {
border: 1px solid #ddd;
padding: 12px;
text-align: left;
}
th {
background-color: #f2f2f2;
font-weight: bold;
}
/* --- Mobile Responsive Styles --- */
@media (max-width: 768px) {
body, .container {
padding: 10px;
}
h1 { font-size: 2em; }
h2 { font-size: 1.5em; }
h3 { font-size: 1.2em; }
p, li { font-size: 1em; }
pre { font-size: 0.85em; }
table, th, td { font-size: 0.9em; }
}
</style>
</head>
<body>
<div class="container">
<h1>🌱 Study Guide: Self-Training in Machine Learning</h1>
<h2>🔹 Core Concepts</h2>
<div class="story-st">
<p><strong>Story-style intuition: The Ambitious Student</strong></p>
<p>Imagine a student learning to identify animals. Their teacher gives them a small, labeled set of 10 flashcards (<strong>labeled data</strong>). The student studies these cards and learns the basic differences between cats and dogs. The teacher then gives the student a huge stack of 1,000 unlabeled photos (<strong>unlabeled data</strong>). The student goes through the stack and labels the photos they are most confident about (e.g., "I'm 99% sure this is a cat"). They add these self-labeled photos to their original small set of flashcards; the labels they wrote themselves are called <strong>pseudo-labels</strong>. Now, with a much larger study set, they retrain their brain to become an even better animal identifier. This process of using your own knowledge to learn more is the essence of <strong>Self-Training</strong>.</p>
</div>
<p><strong>Self-Training</strong> is a simple yet powerful <strong>semi-supervised learning</strong> technique. It is used when you have a small amount of labeled data and a large amount of unlabeled data. The model is first trained on the small labeled set, and then it iteratively "bootstraps" itself by using its own predictions on the unlabeled data to improve its performance.</p>
<h3>Supervised vs. Unsupervised vs. Semi-Supervised</h3>
<ul>
<li><strong>Supervised:</strong> All data is labeled (e.g., thousands of flashcards with answers).</li>
<li><strong>Unsupervised:</strong> No data is labeled (e.g., a pile of photos with no answers).</li>
<li><strong>Semi-Supervised:</strong> A small amount of labeled data and a large amount of unlabeled data (the self-training scenario); a minimal code sketch of this setup follows the list.</li>
</ul>
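<p>Here is a minimal sketch of what a semi-supervised dataset looks like in code. The toy values are hypothetical, chosen only for illustration; the sketch uses the same <code>-1</code> marker for unlabeled points that scikit-learn (and the implementation later in this guide) relies on.</p>
<pre><code>
import numpy as np
# Hypothetical toy dataset: 6 samples, only the first 2 have human labels.
# scikit-learn's semi-supervised tools treat a label of -1 as "unlabeled".
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y = np.array([0, 1, -1, -1, -1, -1])  # 2 labeled, 4 unlabeled
labeled_mask = y != -1
print(f"Labeled samples: {labeled_mask.sum()}, Unlabeled samples: {(~labeled_mask).sum()}")
</code></pre>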
<h2>🔹 Workflow of Self-Training</h2>
<p>The self-training process is an iterative loop that aims to leverage the unlabeled data effectively. A minimal code sketch of the loop follows the steps below.</p>
<ol>
<li><strong>Train Initial Model:</strong> Train a base classifier (like an SVM or Random Forest) on the small, human-labeled dataset (L).</li>
<li><strong>Predict on Unlabeled Data:</strong> Use this initial model to make predictions on the large unlabeled dataset (U).</li>
<li><strong>Select High-Confidence Predictions:</strong> From the predictions, select the ones where the model is most confident (e.g., prediction probability > 95%). These are your "pseudo-labels."</li>
<li><strong>Add to Training Set:</strong> Move these pseudo-labeled data points from the unlabeled set U to the labeled set L.</li>
<li><strong>Retrain the Model:</strong> Train the model again on the newly expanded labeled set.</li>
<li><strong>Repeat:</strong> Continue this loop until no more unlabeled data points meet the confidence threshold or a set number of iterations is reached.</li>
</ol>
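<p>For intuition, the loop above can be written out by hand. The following is an illustrative sketch, not scikit-learn's built-in implementation (that appears later in this guide); the choice of logistic regression, the 0.95 threshold, and the helper name <code>self_train</code> are assumptions made for the example.</p>
<pre><code>
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_l, y_l, X_u, threshold=0.95, max_iter=10):
    """Minimal self-training loop: train, pseudo-label, retrain. Expects NumPy arrays."""
    model = LogisticRegression(max_iter=1000)
    for _ in range(max_iter):
        # Steps 1-2: train on the current labeled pool, predict on the unlabeled pool.
        model.fit(X_l, y_l)
        if len(X_u) == 0:
            break
        proba = model.predict_proba(X_u)
        confidence = proba.max(axis=1)
        pseudo_labels = model.classes_[proba.argmax(axis=1)]
        # Step 3: keep only predictions that clear the confidence threshold.
        confident = confidence >= threshold
        if not confident.any():
            break  # Step 6: stop when nothing clears the threshold.
        # Step 4: move pseudo-labeled points from U to L.
        X_l = np.concatenate([X_l, X_u[confident]])
        y_l = np.concatenate([y_l, pseudo_labels[confident]])
        X_u = X_u[~confident]
        # Step 5: the next loop iteration retrains on the expanded labeled set.
    return model
</code></pre>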
<h2>🔹 Mathematical Formulation</h2>
<div class="story-st">
<p>Think of the model's learning process as minimizing an "error" or "loss" score. Initially, it only cares about the error on the teacher's flashcards. In self-training, it also starts caring about the error on its self-marked homework, but maybe gives it a little less weight so it doesn't get misled by a mistake.</p>
</div>
<p>The learning process is guided by a combined loss function (a small numeric example follows the definitions below):</p>
<p>$$ L = L_{sup} + \lambda L_{pseudo} $$</p>
<ul>
<li>\( L_{sup} \): The supervised loss, calculated on the original, ground-truth labeled data. This is the primary error signal.</li>
<li>\( L_{pseudo} \): The loss calculated on the high-confidence pseudo-labeled data.</li>
<li>\( \lambda \): A weighting parameter that controls how much the model trusts its own pseudo-labels. A smaller \( \lambda \) means the model relies more on the original labeled data.</li>
</ul>
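<p>A small numeric sketch of the combined loss, using cross-entropy (log loss) for both terms. The specific probabilities and the choice of \( \lambda = 0.5 \) are made-up illustration values, not part of the formulation above.</p>
<pre><code>
import numpy as np
from sklearn.metrics import log_loss

# Hypothetical predicted probabilities of the positive class.
y_sup = np.array([1, 0, 1])                      # ground-truth labels
p_sup = np.array([0.9, 0.2, 0.8])                # model outputs on labeled data

y_pseudo = np.array([1, 1, 0, 0])                # model-generated pseudo-labels
p_pseudo = np.array([0.97, 0.96, 0.04, 0.02])    # model outputs on pseudo-labeled data

lam = 0.5  # lambda: how much weight the pseudo-label loss receives

L_sup = log_loss(y_sup, p_sup, labels=[0, 1])
L_pseudo = log_loss(y_pseudo, p_pseudo, labels=[0, 1])
print(f"L_sup={L_sup:.3f}, L_pseudo={L_pseudo:.3f}, combined L={L_sup + lam * L_pseudo:.3f}")
</code></pre>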
<h2>🔹 Key Assumptions of Self-Training</h2>
<p>Self-training can be very effective, but it relies on a few important assumptions. If these aren't true, the model can actually get worse!</p>
<ul>
<li><strong>High-Confidence Predictions are Correct:</strong> This is the most critical assumption. The model must be accurate when it is highly confident. If its confident predictions are wrong, it will start teaching itself incorrect information.</li>
<li><strong>Low-Density Separation:</strong> The classes should be separated by a low-density region in the feature space. This means the decision boundary should fall in an area where there are not many data points, making the confident predictions safer.</li>
</ul>
<h2>🔹 Advantages &amp; Disadvantages</h2>
<table>
<thead>
<tr>
<th>Advantages</th>
<th>Disadvantages</th>
</tr>
</thead>
<tbody>
<tr>
<td>✅ Simple to implement and understand.</td>
<td>❌ <strong>Error Propagation:</strong> The biggest risk. If the model makes a confident mistake, that incorrect pseudo-label is added to the training set, potentially making the model even more wrong in the next iteration.</td>
</tr>
<tr>
<td>✅ Can significantly improve model performance when labeled data is scarce.</td>
<td>❌ <strong>Confirmation Bias:</strong> The model tends to reinforce its own initial biases. If it has a slight bias at the start, self-training can amplify it.</td>
</tr>
<tr>
<td>✅ Leverages vast amounts of cheap, unlabeled data.</td>
<td>❌ Highly sensitive to the choice of the confidence threshold.</td>
</tr>
</tbody>
</table>
<h2>🔹 Applications</h2>
<p>Self-training is most useful in domains where labeling is a bottleneck:</p>
<ul>
<li><strong>Text Classification:</strong> Labeling a few hundred emails as "Spam" or "Not Spam" is easy. Self-training can then use a million unlabeled emails to improve the spam filter.</li>
<li><strong>Medical Image Analysis:</strong> A radiologist can label a small number of X-rays. The model can then use a vast hospital archive of unlabeled X-rays to improve its diagnostic accuracy.</li>
<li><strong>Speech Recognition:</strong> Using a small amount of transcribed audio to help label a much larger corpus of untranscribed speech.</li>
</ul>
<h2>🔹 Python Implementation (Beginner Sketch with Scikit-learn)</h2>
<div class="story-st">
<p>Scikit-learn makes implementing self-training straightforward with its <code>SelfTrainingClassifier</code>. You simply take a standard classifier (such as the SVM used below, or a Random Forest) and wrap it inside <code>SelfTrainingClassifier</code>. It handles the iterative prediction and retraining loop for you automatically.</p>
</div>
<pre><code>
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.metrics import accuracy_score
# --- 1. Create a Sample Dataset ---
# We'll create a dataset with 1000 samples.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, random_state=42)
# Split into a tiny labeled set and a large unlabeled set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.95, random_state=42)
# Let's "hide" most of the labels in the test set to simulate an unlabeled pool
# A value of -1 is the default for "unlabeled" in scikit-learn
y_unlabeled = np.full_like(y_test, -1)
# Combine the small labeled training set with the large unlabeled test set
X_combined = np.concatenate((X_train, X_test))
y_combined = np.concatenate((y_train, y_unlabeled))
# --- 2. Train a Standard Supervised Model (Baseline) ---
base_classifier_baseline = SVC(probability=True, random_state=42)
base_classifier_baseline.fit(X_train, y_train)
y_pred_baseline = base_classifier_baseline.predict(X_test)
print(f"Baseline Accuracy (trained on only {len(X_train)} labeled samples): {accuracy_score(y_test, y_pred_baseline):.2%}")
# --- 3. Train a Self-Training Model ---
# We use the same base classifier
base_classifier_st = SVC(probability=True, random_state=42)
self_training_model = SelfTrainingClassifier(base_classifier_st, threshold=0.95)
# Train the model on the combined labeled and unlabeled data
self_training_model.fit(X_combined, y_combined)
# --- 4. Evaluate the Self-Training Model ---
y_pred_st = self_training_model.predict(X_test)
print(f"Self-Training Accuracy (trained on labeled + pseudo-labeled data): {accuracy_score(y_test, y_pred_st):.2%}")
</code></pre>
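<p>After fitting, it can be helpful to check how much of the unlabeled pool was actually pseudo-labeled. The sketch below continues from the code above and reads the fitted model's <code>labeled_iter_</code> and <code>transduction_</code> attributes of scikit-learn's <code>SelfTrainingClassifier</code>; the printed counts will vary with the data and the threshold.</p>
<pre><code>
# --- 5. Inspect the pseudo-labeling process ---
# labeled_iter_ records when each sample received its label:
#   0 = labeled in the original data, -1 = never pseudo-labeled,
#   k > 0 = pseudo-labeled in iteration k.
labeled_iter = self_training_model.labeled_iter_
print(f"Originally labeled: {np.sum(labeled_iter == 0)}")
print(f"Pseudo-labeled during training: {np.sum(labeled_iter > 0)}")
print(f"Never confident enough to label: {np.sum(labeled_iter == -1)}")
# transduction_ holds the labels (real + pseudo) used in the final fit.
print(f"Labels used in the final fit (first 10): {self_training_model.transduction_[:10]}")
</code></pre>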
<div class="quiz-section">
<h2>📝 Quick Quiz: Test Your Knowledge</h2>
<ol>
<li><strong>What is the primary motivation for using semi-supervised learning techniques like self-training?</strong></li>
<li><strong>What is the biggest risk associated with self-training, and how does it happen?</strong></li>
<li><strong>What is a "pseudo-label"?</strong></li>
<li><strong>If you lower the confidence threshold for pseudo-labeling (e.g., from 0.95 to 0.75), what is the likely trade-off?</strong></li>
</ol>
<div class="quiz-answers">
<h3>Answers</h3>
<p><strong>1.</strong> The primary motivation is to leverage large amounts of cheap, unlabeled data to improve a model's performance when labeled data is scarce or expensive to obtain.</p>
<p><strong>2.</strong> The biggest risk is <strong>error propagation</strong>. It happens when the model makes a confident but incorrect prediction, and that incorrect "pseudo-label" is added to the training set, which can corrupt the model and make it worse in subsequent iterations.</p>
<p><strong>3.</strong> A "pseudo-label" is a label for an unlabeled data point that is generated by the machine learning model itself, not by a human.</p>
<p><strong>4.</strong> The trade-off is between the <strong>quantity and quality</strong> of pseudo-labels. Lowering the threshold will add more data to the training set in each iteration (increasing quantity), but these labels will be less reliable, increasing the risk of error propagation (decreasing quality).</p>
</div>
</div>
<h2>🔹 Key Terminology Explained</h2>
<div class="story-st">
<p><strong>The Story: Decoding the Ambitious Student's Study Guide</strong></p>
</div>
<ul>
<li>
<strong>Semi-Supervised Learning:</strong>
<br>
<strong>What it is:</strong> A learning paradigm that falls between supervised and unsupervised learning, using a mix of labeled and unlabeled data for training.
<br>
<strong>Story Example:</strong> The student's learning process, using both the teacher's few flashcards (labeled) and their own large stack of photos (unlabeled), is a perfect example of <strong>semi-supervised learning</strong>.
</li>
<li>
<strong>Pseudo-Label:</strong>
<br>
<strong>What it is:</strong> A label assigned by a model to an unlabeled data point. It's treated as a "real" label for the purpose of retraining, even though it might be incorrect.
<br>
<strong>Story Example:</strong> When the student confidently writes "Cat" on the back of an unlabeled photo, that "Cat" label is a <strong>pseudo-label</strong>. It's the student's best guess.
</li>
<li>
<strong>Confidence Threshold:</strong>
<br>
<strong>What it is:</strong> A predefined cutoff (e.g., 95% probability) that a model's prediction must meet to be considered a pseudo-label.
<br>
<strong>Story Example:</strong> The student decides they will only label photos if they are "at least 95% sure" of their answer. This 95% cutoff is their <strong>confidence threshold</strong>.
</li>
<li>
<strong>Error Propagation:</strong>
<br>
<strong>What it is:</strong> The process where an error made in an early stage of an iterative process is carried forward and potentially amplified in later stages.
<br>
<strong>Story Example:</strong> If the student confidently mislabels a photo of a fox as a "dog," they will add this incorrect flashcard to their study pile. In the next round of studying, this wrong example might confuse them further, causing them to mislabel even more photos. This is <strong>error propagation</strong>.
</li>
</ul>
</div>
</body>
</html>
{% endblock %}