{
  "title": "Transductive SVM Mastery: 100 MCQs",
  "description": "A structured 3-level mastery set of 100 MCQs on Transductive Support Vector Machines (TSVM) — covering fundamentals, intuition, semi-supervised learning strategy, margin optimization, unlabeled data influence, and real-world scenario-based problem-solving.",
  "questions": [
    {
      "id": 1,
      "questionText": "What type of learning does Transductive SVM belong to?",
      "options": [
        "Unsupervised learning",
        "Reinforcement learning",
        "Supervised learning",
        "Semi-supervised learning"
      ],
      "correctAnswerIndex": 3,
      "explanation": "TSVM uses both labeled and unlabeled data, making it a semi-supervised learning method."
    },
    {
      "id": 2,
      "questionText": "What is the main goal of Transductive SVM?",
      "options": [
        "Label only the given test dataset",
        "Reduce number of features",
        "Perform clustering",
        "Predict all future data points"
      ],
      "correctAnswerIndex": 0,
      "explanation": "TSVM directly optimizes predictions for the given test set instead of generalizing globally."
    },
    {
      "id": 3,
      "questionText": "Which type of data does TSVM use during training?",
      "options": [
        "No data at all",
        "Both labeled and unlabeled data",
        "Only labeled data",
        "Only unlabeled data"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM is semi-supervised and uses both labeled and unlabeled data."
    },
    {
      "id": 4,
      "questionText": "TSVM is mainly used to improve performance when labeled data is:",
      "options": [
        "Balanced",
        "Abundant",
        "Noisy",
        "Very limited"
      ],
      "correctAnswerIndex": 3,
      "explanation": "TSVM performs best when labeled data is scarce but unlabeled data is available."
    },
    {
      "id": 5,
      "questionText": "What does TSVM try to optimize in its classification boundary?",
      "options": [
        "Random margin",
        "Minimum number of support vectors",
        "Margin using both labeled and unlabeled data",
        "Widest margin between clusters"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM adjusts the margin using both labeled and unlabeled samples."
    },
    {
      "id": 6,
      "questionText": "Transductive SVM is different from traditional SVM because it:",
      "options": [
        "Uses gradient descent",
        "Uses kernels",
        "Works only on images",
        "Uses unlabeled test data during training"
      ],
      "correctAnswerIndex": 3,
      "explanation": "TSVM includes test (unlabeled) data during training."
    },
    {
      "id": 7,
      "questionText": "Why does TSVM consider unlabeled data?",
      "options": [
        "To balance datasets",
        "To improve generalization for test data",
        "To reduce memory usage",
        "To remove irrelevant features"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Unlabeled data helps TSVM better align boundaries for the actual test samples."
    },
    {
      "id": 8,
      "questionText": "TSVM belongs to the category of:",
      "options": [
        "Fully supervised learning",
        "Active learning",
        "Semi-supervised learning",
        "Reinforcement learning"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM is a semi-supervised learning method."
    },
    {
      "id": 9,
      "questionText": "What is the typical limitation of TSVM?",
      "options": [
        "Works only offline",
        "Requires millions of labeled samples",
        "Cannot handle text data",
        "Computationally expensive"
      ],
      "correctAnswerIndex": 3,
      "explanation": "TSVM is more computationally heavy because of unlabeled data optimization."
    },
    {
      "id": 10,
      "questionText": "Transductive learning focuses on:",
      "options": [
        "Future unseen data",
        "Only currently given test samples",
        "Generating synthetic data",
        "Creating new labels"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Transductive learning makes predictions only for the given test samples."
    },
    {
      "id": 11,
      "questionText": "Which assumption helps TSVM improve classification?",
      "options": [
        "Data is purely random",
        "Labels are dynamic",
        "Unlabeled data follows the same distribution as labeled data",
        "Classes always overlap"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM assumes unlabeled data follows the same distribution as labeled data."
    },
    {
      "id": 12,
      "questionText": "In TSVM, unlabeled data helps mainly in:",
      "options": [
        "Balancing training batches",
        "Data compression",
        "Changing learning rate",
        "Shifting decision boundary to better margin"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Unlabeled data helps TSVM adjust the decision boundary more accurately."
    },
    {
      "id": 13,
      "questionText": "TSVM is best suited when unlabeled data is:",
      "options": [
        "Much more than labeled data",
        "Less than labeled data",
        "Exactly equal to labeled data",
        "Not available at all"
      ],
      "correctAnswerIndex": 0,
      "explanation": "TSVM benefits most when unlabeled data is available in large quantity."
    },
    {
      "id": 14,
      "questionText": "TSVM mainly helps solve which challenge?",
      "options": [
        "Memory usage",
        "Feature selection",
        "Label scarcity",
        "Overfitting"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM improves performance even with very few labeled samples."
    },
    {
      "id": 15,
      "questionText": "TSVM does not require:",
      "options": [
        "Margin calculation",
        "Labeled test data",
        "Kernel function",
        "Optimization"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM requires unlabeled test data, not labeled test data."
    },
    {
      "id": 16,
      "questionText": "TSVM selects decision boundaries by:",
      "options": [
        "Minimizing random noise",
        "Maximizing margin with unlabeled support",
        "Maximizing entropy",
        "Ignoring unlabeled samples"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM pushes boundary into low-density unlabeled regions."
    },
    {
      "id": 17,
      "questionText": "Transductive SVM improves performance mainly by:",
      "options": [
        "Gradient clipping",
        "Feature reduction",
        "Boundary adjustment using unlabeled data",
        "Random sampling"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Unlabeled data allows TSVM to refine classification boundary."
    },
    {
      "id": 18,
      "questionText": "Which of the following is true for TSVM?",
      "options": [
        "Requires labeled test data",
        "Uses test data during training",
        "Uses only training data",
        "Does not use kernel trick"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM includes test data in the optimization loop."
    },
    {
      "id": 19,
      "questionText": "Which situation best suits TSVM?",
      "options": [
        "Only structured data is present",
        "No test data is known",
        "Few labeled but many unlabeled samples exist",
        "Lots of labeled data"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM is ideal when unlabeled data is abundant but labels are limited."
    },
    {
      "id": 20,
      "questionText": "TSVM differs from supervised SVM mainly because it:",
      "options": [
        "Uses unlabeled data during optimization",
        "Uses learning rate",
        "Does not optimize margin",
        "Can work without data"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Unlabeled data is a key differentiator in TSVM’s boundary optimization."
    },
    {
      "id": 21,
      "questionText": "TSVM attempts to place the decision boundary in:",
      "options": [
        "Random regions",
        "Low-density regions",
        "Fixed center of data",
        "High-density regions"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Best decision boundaries avoid dense regions of unlabeled data."
    },
    {
      "id": 22,
      "questionText": "Which term is most associated with TSVM?",
      "options": [
        "Auto encoding",
        "Generalization only",
        "Label propagation",
        "Transductive inference"
      ],
      "correctAnswerIndex": 3,
      "explanation": "TSVM is a transductive inference-based model."
    },
    {
      "id": 23,
      "questionText": "Unlabeled test data in TSVM is used for:",
      "options": [
        "Feature scaling",
        "Random dropout",
        "Adjusting classification boundary",
        "Validation only"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM modifies its boundary based on test samples’ structure."
    },
    {
      "id": 24,
      "questionText": "Transductive SVM is most helpful when:",
      "options": [
        "Labels change dynamically",
        "No test data exists",
        "Test set is provided beforehand",
        "Only regression is required"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM needs test samples available during training."
    },
    {
      "id": 25,
      "questionText": "TSVM primarily tries to:",
      "options": [
        "Remove irrelevant classes",
        "Cluster unlabeled samples",
        "Predict all possible future distributions",
        "Optimize boundary for specific test data"
      ],
      "correctAnswerIndex": 3,
      "explanation": "TSVM is optimized for performance on specific given test data."
    },
    {
      "id": 26,
      "questionText": "In TSVM, unlabeled points influence the model by:",
      "options": [
        "Being ignored during margin calculation",
        "Randomly deciding class labels",
        "Reducing kernel complexity",
        "Helping move the boundary into low-density regions"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Unlabeled points guide the boundary away from dense data areas to achieve better separation."
    },
    {
      "id": 27,
      "questionText": "What is the main optimization challenge in TSVM?",
      "options": [
        "It is a non-convex optimization problem",
        "It ignores constraints completely",
        "It minimizes feature size",
        "Convex optimization is guaranteed"
      ],
      "correctAnswerIndex": 0,
      "explanation": "TSVM’s optimization is non-convex because unlabeled data labels are unknown and must be inferred during training."
    },
    {
      "id": 28,
      "questionText": "Which term best describes how TSVM assigns labels to unlabeled data during training?",
      "options": [
        "Rule-based labeling",
        "Hard clustering",
        "Automatic label inference",
        "Supervised annotation"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM infers pseudo-labels for unlabeled data as part of its optimization process."
    },
    {
      "id": 29,
      "questionText": "What does TSVM attempt to minimize?",
      "options": [
        "Sum of absolute residuals",
        "Classification error on both labeled and unlabeled data",
        "Only training loss",
        "Kernel bias"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM minimizes overall classification error including pseudo-labeled samples."
    },
    {
      "id": 30,
      "questionText": "The optimization objective of TSVM includes:",
      "options": [
        "Both labeled loss and unlabeled margin penalty",
        "Only kernel regularization",
        "Only labeled data loss",
        "Clustering objective"
      ],
      "correctAnswerIndex": 0,
      "explanation": "TSVM jointly optimizes labeled loss and a penalty term for unlabeled margin consistency."
    },
    {
      "id": 31,
      "questionText": "How does TSVM improve over SVM when labeled data is scarce?",
      "options": [
        "By ignoring unlabeled samples",
        "By using fixed weight initialization",
        "By forcing boundary to pass through dense unlabeled data",
        "By using unlabeled data to refine margin placement"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Unlabeled samples help TSVM adjust its boundary even with limited labeled examples."
    },
    {
      "id": 32,
      "questionText": "Which assumption is most crucial for TSVM to work effectively?",
      "options": [
        "All classes have equal size",
        "Unlabeled data is random noise",
        "Labeled and unlabeled data come from similar distributions",
        "All features are binary"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM assumes labeled and unlabeled data share the same underlying distribution."
    },
    {
      "id": 33,
      "questionText": "What is a pseudo-label in TSVM?",
      "options": [
        "An estimated label for unlabeled data",
        "A feature scaling term",
        "A random class label",
        "A type of kernel parameter"
      ],
      "correctAnswerIndex": 0,
      "explanation": "TSVM assigns temporary labels (pseudo-labels) to unlabeled data during optimization."
    },
    {
      "id": 34,
      "questionText": "Why is TSVM computationally more expensive than SVM?",
      "options": [
        "It uses both labeled and unlabeled samples in optimization",
        "It ignores regularization",
        "It skips margin maximization",
        "It trains multiple neural networks internally"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Involving unlabeled samples increases optimization complexity in TSVM."
    },
    {
      "id": 35,
      "questionText": "In TSVM, pseudo-label assignment affects:",
      "options": [
        "Hyperparameter tuning only",
        "Feature normalization",
        "Decision boundary and margin",
        "Kernel shape only"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Pseudo-labels change how unlabeled data influences the decision boundary."
    },
    {
      "id": 36,
      "questionText": "TSVM is also known as:",
      "options": [
        "Support Vector Data Description",
        "Hybrid Neural Classifier",
        "Maximum Margin Estimator",
        "Semi-supervised SVM"
      ],
      "correctAnswerIndex": 3,
      "explanation": "TSVM is often referred to as Semi-supervised SVM because it uses unlabeled data."
    },
    {
      "id": 37,
      "questionText": "The optimization of TSVM is often solved by:",
      "options": [
        "Greedy pruning of features",
        "Alternating between label estimation and margin optimization",
        "Using only test accuracy",
        "Random initialization of labels and stopping early"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM alternates between assigning pseudo-labels and optimizing the decision boundary."
    },
    {
      "id": 38,
      "questionText": "TSVM tries to ensure that unlabeled points:",
      "options": [
        "Are misclassified intentionally",
        "Are ignored completely",
        "Fall near the decision boundary",
        "Stay in low-density regions away from the boundary"
      ],
      "correctAnswerIndex": 3,
      "explanation": "TSVM prefers placing the boundary where few unlabeled points exist (low-density assumption)."
    },
    {
      "id": 39,
      "questionText": "What is the purpose of using test data in TSVM training?",
      "options": [
        "To tune kernel parameters",
        "To calculate validation loss",
        "To influence boundary for that specific test set",
        "To estimate gradient descent steps"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM includes test data to directly optimize for that specific evaluation set."
    },
    {
      "id": 40,
      "questionText": "Which of the following is NOT an advantage of TSVM?",
      "options": [
        "Simple convex optimization",
        "Uses unlabeled data effectively",
        "Improved test-specific accuracy",
        "Better performance with limited labeled data"
      ],
      "correctAnswerIndex": 0,
      "explanation": "TSVM’s optimization is non-convex, which can make it difficult to solve efficiently."
    },
    {
      "id": 41,
      "questionText": "What happens if unlabeled data comes from a different distribution in TSVM?",
      "options": [
        "It has no effect",
        "Training loss becomes zero",
        "Boundary may become misleading",
        "Model accuracy improves"
      ],
      "correctAnswerIndex": 2,
      "explanation": "If unlabeled data doesn’t match labeled distribution, TSVM’s decision boundary may shift incorrectly."
    },
    {
      "id": 42,
      "questionText": "Which kernel can be used in TSVM?",
      "options": [
        "Any kernel supported by SVM",
        "Only linear kernel",
        "No kernel at all",
        "Only polynomial kernel"
      ],
      "correctAnswerIndex": 0,
      "explanation": "TSVM supports any standard SVM kernel such as linear, RBF, or polynomial."
    },
    {
      "id": 43,
      "questionText": "The low-density separation principle in TSVM ensures:",
      "options": [
        "Boundary avoids dense regions of data",
        "Model ignores test data",
        "Optimization is convex",
        "Boundary passes through dense data points"
      ],
      "correctAnswerIndex": 0,
      "explanation": "TSVM seeks decision boundaries in low-density regions for better generalization."
    },
    {
      "id": 44,
      "questionText": "Which is a potential drawback of TSVM?",
      "options": [
        "Ignores margin constraint",
        "Cannot handle kernels",
        "Overfitting on unlabeled data structure",
        "Too few optimization parameters"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM may overfit to the unlabeled data structure if the assumption about data distribution fails."
    },
    {
      "id": 45,
      "questionText": "In TSVM optimization, the cost function includes terms for:",
      "options": [
        "Feature reduction only",
        "Test accuracy and dropout",
        "Labeled and unlabeled sample penalties",
        "Training error and kernel bias"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Both labeled and unlabeled sample penalties are part of TSVM’s cost function."
    },
    {
      "id": 46,
      "questionText": "The transductive setting assumes access to:",
      "options": [
        "Future unseen data points",
        "Synthetic datasets only",
        "Only current test dataset during training",
        "No data at training time"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM is transductive because it knows the test data in advance."
    },
    {
      "id": 47,
      "questionText": "In TSVM, which samples become support vectors?",
      "options": [
        "Only labeled samples",
        "Randomly chosen samples",
        "Only unlabeled samples",
        "Both labeled and unlabeled samples that define margin"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Support vectors can include both labeled and unlabeled points that lie near the margin."
    },
    {
      "id": 48,
      "questionText": "What role does the kernel trick play in TSVM?",
      "options": [
        "Converts classification to clustering",
        "Removes need for optimization",
        "Reduces computation cost",
        "Transforms data into higher dimensions for better separation"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Kernels in TSVM allow nonlinear separation using higher-dimensional mappings."
    },
    {
      "id": 49,
      "questionText": "If the unlabeled data distribution overlaps both classes, TSVM may:",
      "options": [
        "Misplace the boundary",
        "Achieve perfect separation",
        "Find an ideal margin",
        "Completely ignore labels"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Overlapping distributions make boundary placement difficult, reducing TSVM accuracy."
    },
    {
      "id": 50,
      "questionText": "What is the optimization goal of TSVM?",
      "options": [
        "Minimize Euclidean distance",
        "Maximize kernel variance",
        "Minimize total hinge loss for labeled and pseudo-labeled data",
        "Minimize feature weights"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM extends hinge loss to include pseudo-labeled unlabeled samples."
    },
    {
      "id": 51,
      "questionText": "The margin in TSVM is influenced by:",
      "options": [
        "Feature normalization",
        "Unlabeled data placement",
        "Random initialization",
        "Only kernel parameters"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Unlabeled data impacts how the margin is optimized in TSVM."
    },
    {
      "id": 52,
      "questionText": "TSVM can be sensitive to:",
      "options": [
        "Data sorting order",
        "Kernel choice and unlabeled data quality",
        "Training batch size only",
        "Hardware configuration"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Both kernel and unlabeled data quality strongly influence TSVM performance."
    },
    {
      "id": 53,
      "questionText": "The optimization of TSVM is often performed using:",
      "options": [
        "Backpropagation",
        "Gradient descent only",
        "Iterative label flipping and margin refinement",
        "Reinforcement updates"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM optimization alternates between label flipping and boundary updates."
    },
    {
      "id": 54,
      "questionText": "The unlabeled samples in TSVM are assigned:",
      "options": [
        "Random labels once",
        "Temporary pseudo-labels updated iteratively",
        "No labels ever",
        "Permanent labels"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Pseudo-labels are iteratively refined as the model learns."
    },
    {
      "id": 55,
      "questionText": "What is the primary risk of incorrect pseudo-labels in TSVM?",
      "options": [
        "Reduced convergence speed",
        "Lower feature correlation",
        "Kernel instability",
        "Boundary shift and wrong classification"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Incorrect pseudo-labels can distort margin direction and harm performance."
    },
    {
      "id": 56,
      "questionText": "TSVM can be viewed as solving a problem that involves both:",
      "options": [
        "Clustering and reinforcement",
        "Supervised and unsupervised objectives",
        "Regression and clustering",
        "Classification and regression trees"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM combines labeled (supervised) and unlabeled (unsupervised) objectives."
    },
    {
      "id": 57,
      "questionText": "TSVM uses the concept of margin maximization from:",
      "options": [
        "Decision trees",
        "Neural networks",
        "Traditional SVMs",
        "Naive Bayes"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Like SVM, TSVM also maximizes the decision margin."
    },
    {
      "id": 58,
      "questionText": "Which step in TSVM optimization determines pseudo-label accuracy?",
      "options": [
        "Initialization of weights",
        "Label inference phase",
        "Gradient adjustment",
        "Kernel scaling"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Label inference directly controls pseudo-label quality in TSVM."
    },
    {
      "id": 59,
      "questionText": "The iterative process in TSVM continues until:",
      "options": [
        "Training accuracy is 100%",
        "Labels stop changing significantly",
        "Random seed resets",
        "Kernel converges to zero"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM stops when pseudo-labels stabilize between iterations."
    },
    {
      "id": 60,
      "questionText": "The optimization process in TSVM is considered:",
      "options": [
        "Convex and easy to solve",
        "Always deterministic",
        "Non-convex and computationally hard",
        "Independent of initial labels"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM optimization is non-convex because of unlabeled data label uncertainty."
    },
    {
      "id": 61,
      "questionText": "What happens when unlabeled data contradicts labeled patterns in TSVM?",
      "options": [
        "Model ignores unlabeled data",
        "Pseudo-labels become fixed",
        "Decision boundary becomes unstable",
        "Kernel stops updating"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Contradictory unlabeled data can confuse boundary direction."
    },
    {
      "id": 62,
      "questionText": "Which of the following best describes TSVM’s loss function?",
      "options": [
        "Mean squared error",
        "Exponential loss",
        "Cross-entropy only",
        "Standard hinge loss extended with unlabeled penalty terms"
      ],
      "correctAnswerIndex": 3,
      "explanation": "TSVM extends the hinge loss to include penalties for unlabeled pseudo-labels."
    },
    {
      "id": 63,
      "questionText": "The decision boundary in TSVM depends heavily on:",
      "options": [
        "Distribution of unlabeled samples",
        "Number of epochs",
        "Learning rate decay",
        "Batch normalization"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Unlabeled data distribution strongly influences margin placement in TSVM."
    },
    {
      "id": 64,
      "questionText": "Which element makes TSVM non-trivial to implement?",
      "options": [
        "Fixed kernel trick",
        "No support vector calculation",
        "Static dataset handling",
        "Dynamic pseudo-label optimization"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Dynamic pseudo-label optimization adds complexity to TSVM."
    },
    {
      "id": 65,
      "questionText": "Which kind of semi-supervised principle does TSVM follow?",
      "options": [
        "Cluster assumption",
        "Low-density separation assumption",
        "Graph-based propagation",
        "Entropy minimization only"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM is based on low-density separation — boundaries avoid dense data regions."
    },
    {
      "id": 66,
      "questionText": "The decision function of TSVM is closest in structure to:",
      "options": [
        "Random forest vote",
        "Standard SVM decision function",
        "Neural classifier function",
        "Logistic regression probability"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM shares the same functional form as SVM but with adjusted parameters from unlabeled data."
    },
    {
      "id": 67,
      "questionText": "TSVM optimization complexity grows with:",
      "options": [
        "Number of labels only",
        "Number of unlabeled samples",
        "Learning rate value",
        "Feature normalization"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Larger unlabeled datasets increase TSVM’s computational cost."
    },
    {
      "id": 68,
      "questionText": "In Transductive SVM, what is the primary goal when adjusting the decision boundary?",
      "options": [
        "To cluster data without any decision boundary",
        "To maximize margin while correctly classifying both labeled and unlabeled data",
        "To only minimize error on labeled data",
        "To ignore the unlabeled data completely"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Transductive SVM aims to find a boundary that maximizes the margin while making good use of both labeled and unlabeled data for improved generalization."
    },
    {
      "id": 69,
      "questionText": "What happens if unlabeled data points lie very close to the initial decision boundary in TSVM?",
      "options": [
        "The model forces them to labeled class randomly",
        "TSVM ignores them completely",
        "The boundary may be adjusted to push those points away from the margin",
        "The boundary becomes fixed and cannot move"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM tries to adjust the boundary so unlabeled points do not lie close to or inside the margin region, improving generalization."
    },
    {
      "id": 70,
      "questionText": "Why is Transductive SVM considered more complex than Inductive SVM?",
      "options": [
        "It ignores labeled data",
        "It never uses margin maximization",
        "It optimizes label assignment for unlabeled data too",
        "It is only used for unsupervised problems"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM adds an additional layer of optimization by trying all possible label assignments of unlabeled data while maximizing the margin."
    },
    {
      "id": 71,
      "questionText": "A company has 100 labeled reviews and 10,000 unlabeled reviews. Why might TSVM perform better than regular SVM here?",
      "options": [
        "Because it assumes all reviews are identical",
        "Because it uses unlabeled reviews to better position the decision boundary",
        "Because it randomly guesses the boundary",
        "Because it fully ignores unlabeled reviews"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM benefits from large unlabeled data by adjusting the decision boundary based on its distribution."
    },
    {
      "id": 72,
      "questionText": "In a real-world email spam detection scenario, how does TSVM specifically help compared to standard SVM?",
      "options": [
        "It forces all unlabeled emails to spam class",
        "It uses unlabeled emails to refine the margin location before finalizing decisions",
        "It converts unlabeled emails to labeled by itself",
        "It deletes unlabeled emails"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM leverages structure in unlabeled email data to position the margin more correctly before classification."
    },
    {
      "id": 73,
      "questionText": "If TSVM is applied to medical diagnosis where only 10% of data is labeled, what is its most valuable capability?",
      "options": [
        "Removing all uncertain data",
        "Automatically labeling data without optimization",
        "Ignoring labeled data",
        "Shifting the boundary based on unlabeled data distribution"
      ],
      "correctAnswerIndex": 3,
      "explanation": "TSVM improves decision quality by adjusting boundary using the structure of unlabeled data."
    },
    {
      "id": 74,
      "questionText": "What kind of data scenario benefits MOST from TSVM?",
      "options": [
        "Very few labeled samples but large structured unlabeled data",
        "Plenty of labeled samples and zero unlabeled samples",
        "Fully noisy unlabeled data with no distribution pattern",
        "No patterns in unlabeled data"
      ],
      "correctAnswerIndex": 0,
      "explanation": "TSVM is highly effective when labeled data is limited but unlabeled data is large and meaningful."
    },
    {
      "id": 75,
      "questionText": "Which real-world use case is a strong fit for Transductive SVM?",
      "options": [
        "Perfectly labeled tiny dataset with no unlabeled data",
        "Pure clustering without labels",
        "Handwritten text recognition with many unlabeled samples",
        "Only regression problems"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Handwriting recognition often has few labeled samples and many similar unlabeled data points, ideal for TSVM."
    },
    {
      "id": 76,
      "questionText": "During TSVM training, what happens if an unlabeled sample lies deep inside a wrong class region?",
      "options": [
        "Model stops training immediately",
        "It forces all labeled points to change",
        "TSVM pushes the margin so that it lies on the correct side if possible",
        "It is permanently ignored"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM tries to shift the boundary gradually to avoid putting unlabeled samples deep into the wrong class."
    },
    {
      "id": 77,
      "questionText": "In TSVM, what is the usual effect of including many confidently separable unlabeled points?",
      "options": [
        "It makes decision boundary more stable and better aligned with data clusters",
        "It makes model unstable and random",
        "It causes TSVM to ignore all data",
        "It forces decision boundary to flip constantly"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Confidently separable unlabeled points reinforce stable decision boundary alignment with actual cluster structure."
    },
    {
      "id": 78,
      "questionText": "What type of challenge commonly occurs during TSVM optimization?",
      "options": [
        "It never converges by design",
        "Non-convex optimization due to unknown unlabeled labels",
        "No need for iteration",
        "Linear time complexity"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM solves a non-convex optimization problem due to the additional label assignment search."
    },
    {
      "id": 79,
      "questionText": "Why is TSVM training usually slower than standard SVM?",
      "options": [
        "It ignores mathematics",
        "It does not use margin maximization",
        "It tries multiple label combinations for unlabeled data",
        "It stops training halfway always"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM trains slower because it must attempt different label assignments while maximizing margin."
    },
    {
      "id": 80,
      "questionText": "In TSVM, what is the main purpose of using unlabeled data?",
      "options": [
        "To randomly flip decision boundaries",
        "To confuse the model",
        "To better shape the margin according to real data distribution",
        "To delete labeled samples"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Unlabeled data helps define a boundary that mirrors actual data cluster structure."
    },
    {
      "id": 81,
      "questionText": "In a fraud detection task, why can TSVM be extremely useful?",
      "options": [
        "Because fraud data is always labeled",
        "Because it ignores real-world data distribution",
        "Because TSVM works only with labeled data",
        "Because majority of data is unlabeled and TSVM can learn from it"
      ],
      "correctAnswerIndex": 3,
      "explanation": "TSVM is powerful in fraud detection where labeled fraud cases are rare but transaction patterns (unlabeled) are massive."
    },
    {
      "id": 82,
      "questionText": "Which scenario would likely MISLEAD a TSVM model?",
      "options": [
        "Unlabeled data that is all random noise or mixed distributions",
        "Large structured unlabeled data",
        "High-margin clusterable data",
        "Unlabeled data that follows clear separable clusters"
      ],
      "correctAnswerIndex": 0,
      "explanation": "If unlabeled data is highly noisy or structureless, TSVM may push margins incorrectly."
    },
    {
      "id": 83,
      "questionText": "What happens if all unlabeled data points strongly overlap between classes in TSVM?",
      "options": [
        "TSVM forces them into positive class",
        "Model stops and refuses to run",
        "TSVM may struggle to move the margin meaningfully",
        "TSVM easily finds the perfect boundary"
      ],
      "correctAnswerIndex": 2,
      "explanation": "If unlabeled samples are non-separable, TSVM gets little advantage since it cannot adjust margin meaningfully."
    },
    {
      "id": 84,
      "questionText": "When applying TSVM to a dataset with label noise in the labeled data, what might happen?",
      "options": [
        "TSVM deletes the noisy samples",
        "TSVM ignores unlabeled data",
        "TSVM improves automatically with noise",
        "TSVM may amplify wrong boundary decisions"
      ],
      "correctAnswerIndex": 3,
      "explanation": "If labeled data is noisy, TSVM might reinforce the wrong boundary because it follows a misleading starting label set."
    },
    {
      "id": 85,
      "questionText": "Which TSVM behavior is desirable in real world customer segmentation?",
      "options": [
        "Using no unlabeled input in training",
        "Adjusting decision boundary using natural grouping of unlabeled customers",
        "Deleting all unlabeled customers",
        "Blindly guessing labels"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM benefits segmentation by aligning its boundary with naturally existing customer groups found in unlabeled data."
    },
    {
      "id": 86,
      "questionText": "In TSVM-based image classification, what advantage does it provide during learning?",
      "options": [
        "It forces all images into one class",
        "It shapes the decision boundary using the natural distribution of unlabeled images",
        "It uses only textual features",
        "It refuses to learn if labels are missing"
      ],
      "correctAnswerIndex": 1,
      "explanation": "TSVM adapts boundary using unlabeled images' pattern distribution, improving classification accuracy."
    },
    {
      "id": 87,
      "questionText": "What is a potential risk if unlabeled data is from a DIFFERENT distribution than labeled data in TSVM?",
      "options": [
        "TSVM asks user to delete data",
        "It has no effect on boundary",
        "TSVM automatically fixes it",
        "Performance may degrade because TSVM forces boundary based on misleading structure"
      ],
      "correctAnswerIndex": 3,
      "explanation": "If unlabeled data is from a different domain, TSVM may wrongly adjust boundary, hurting accuracy."
    },
    {
      "id": 88,
      "questionText": "A TSVM is trained on 5 labeled dog images and 2000 unlabeled animal images. What is it most likely to do?",
      "options": [
        "Stop training",
        "Ignore the dog images",
        "Use unlabeled animal patterns to identify the dog boundary more accurately",
        "Randomly assign all unlabeled to dog"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM will exploit unlabeled animal images to shape a correct separation boundary for identifying dogs."
    },
    {
      "id": 89,
      "questionText": "How does TSVM generally react if unlabeled points are clearly clustered far from each other?",
      "options": [
        "It positions margin between clusters for better generalization",
        "It fails to separate them",
        "It merges clusters manually",
        "It shuts down"
      ],
      "correctAnswerIndex": 0,
      "explanation": "TSVM naturally places its margin between unlabeled clusters if they are well separated."
    },
    {
      "id": 90,
      "questionText": "In TSVM, what outcome indicates that unlabeled data has genuinely improved learning?",
      "options": [
        "Lower margin size",
        "Higher generalization accuracy on test data",
        "Model predicts everything as same class",
        "Model ignores patterns"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Proper use of unlabeled data leads to improved generalization accuracy in TSVM."
    },
    {
      "id": 91,
      "questionText": "When might TSVM produce WORSE results than Inductive SVM?",
      "options": [
        "When unlabeled data is huge but structured",
        "When unlabeled data comes from a completely different unrelated distribution",
        "When labeled data is clean and unlabeled data is small",
        "When unlabeled data perfectly follows same distribution"
      ],
      "correctAnswerIndex": 1,
      "explanation": "If unlabeled data is from a different domain, TSVM may adjust margin wrongly leading to worse performance."
    },
    {
      "id": 92,
      "questionText": "A TSVM model is being applied to sentiment analysis with mixed noisy unlabeled social media data. What must be carefully monitored?",
      "options": [
        "That it ignores unlabeled data completely",
        "That all unlabeled data is forcibly labeled positive",
        "That unlabeled data does not distort margin due to noise",
        "That it deletes all neutral comments"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Noisy unlabeled sentiment data may badly affect TSVM, so quality of unlabeled data must be monitored."
    },
    {
      "id": 93,
      "questionText": "What is the ideal unlabeled data characteristic for TSVM success?",
      "options": [
        "Highly overlapping with random noise",
        "Totally unrelated to task",
        "Completely identical to labeled data",
        "Clearly structured with separable cluster tendencies"
      ],
      "correctAnswerIndex": 3,
      "explanation": "TSVM excels when unlabeled data is structured in distinct patterns that help form clear separation margins."
    },
    {
      "id": 94,
      "questionText": "In practice, what process is often done BEFORE using TSVM on real data?",
      "options": [
        "Converting unlabeled to random labels",
        "Deleting all unlabeled data",
        "Data cleaning and verifying unlabeled distribution quality",
        "Removing margin maximization"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Proper preprocessing is essential to ensure unlabeled data is reliable before applying TSVM."
    },
    {
      "id": 95,
      "questionText": "If unlabeled data heavily contradicts labeled examples in TSVM, what might occur?",
      "options": [
        "TSVM successfully corrects labels always",
        "TSVM perfectly separates everything",
        "TSVM automatically detects noise and stops",
        "TSVM may get confused and produce poor boundary"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Contradictory unlabeled data can mislead TSVM into placing wrong boundaries."
    },
    {
      "id": 96,
      "questionText": "In a multi-class extension of TSVM, what additional challenge appears?",
      "options": [
        "It becomes unsupervised",
        "It stops supporting margin maximization",
        "Complexity increases with relabeling across multiple decision boundaries",
        "No unlabeled data is used"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Multi-class TSVM must explore boundary adjustments across multiple possible label assignments, raising complexity."
    },
    {
      "id": 97,
      "questionText": "What strategy can help TSVM avoid overfitting wrongly to noisy unlabeled data?",
      "options": [
        "Weight control or confidence filtering on unlabeled samples",
        "Completely ignore margin",
        "Force all unlabeled data to positive class",
        "Never use labeled data"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Confidence-based control helps TSVM use only reliable unlabeled points for boundary adjustment."
    },
    {
      "id": 98,
      "questionText": "In TSVM-based product recommendation, what is the main benefit of leveraging unlabeled browsing data?",
      "options": [
        "To delete purchase history",
        "To adjust the decision function according to real user interest patterns",
        "To cluster users without any classification",
        "To force all users into one preference group"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Browsing patterns (unlabeled) help TSVM refine decision boundaries to better match actual user preferences."
    },
    {
      "id": 99,
      "questionText": "When deployed in semi-supervised security systems, what critical aspect must be monitored in TSVM?",
      "options": [
        "It must run without labeled data",
        "It never needs retraining",
        "Unlabeled threat data must be from reliable and relevant distribution",
        "All unlabeled threats are ignored"
      ],
      "correctAnswerIndex": 2,
      "explanation": "TSVM relies on unlabeled data being relevant to security context; irrelevant sources can shift boundary incorrectly."
    },
    {
      "id": 100,
      "questionText": "What best represents a real-world success condition for TSVM?",
      "options": [
        "Unlabeled structured perfectly, labeled data small",
        "Labeled data large, unlabeled zero",
        "Unlabeled completely unrelated garbage",
        "Labeled data randomly mislabeled"
      ],
      "correctAnswerIndex": 0,
      "explanation": "TSVM is powerful when unlabeled data is abundant and structured, while labeled data is limited."
    }
  ]
}