Deep Learning & Medical Image Analysis
Build Neural Networks from Scratch in NumPy, Then Master Medical AI with PyTorch
Course Description
Learn deep learning from the ground up: build neural networks from scratch in NumPy before transitioning to PyTorch for medical imaging. This comprehensive course covers linear and softmax regression, feedforward networks, and CNNs implemented from first principles, then progresses to medical applications: image classification (GoogLeNet, ResNet), segmentation (U-Net and its variants), and detection. It closes with thorough coverage of evaluation metrics for clinical AI, including AUC-ROC, sensitivity, specificity, and the Dice coefficient.
After This Course
- ✓ Implement linear, logistic, and softmax regression from scratch in NumPy
- ✓ Build feedforward neural networks with backpropagation from first principles
- ✓ Understand CNNs at a fundamental mathematical level
- ✓ Apply classification architectures (GoogLeNet, ResNet, MobileNet) to medical images
- ✓ Implement U-Net and its variants for accurate medical image segmentation
- ✓ Design detection pipelines for localizing anomalies in medical images
- ✓ Evaluate medical AI systems using appropriate clinical metrics
- ✓ Apply best practices for managing medical imaging AI projects
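To give a flavor of the clinical metrics covered, here is a minimal NumPy sketch of the Dice coefficient for binary segmentation masks (the function name and toy masks are illustrative, not course code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, target), 3))  # 0.667
```

Dice rewards overlap relative to total mask size, which is why it is preferred over plain pixel accuracy for small lesions.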
Course Sessions
Regression Models from Scratch
Build linear regression, logistic regression, and softmax regression from scratch in NumPy. Implement least squares, gradient descent, and cross-entropy loss—the foundations for understanding neural networks.
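As a taste of this from-scratch approach, a minimal sketch of logistic regression trained by batch gradient descent on the cross-entropy loss (names and the toy data are illustrative, not course code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Logistic regression via gradient descent on cross-entropy loss."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)       # predicted probabilities
        grad_w = X.T @ (p - y) / n   # dL/dw for cross-entropy + sigmoid
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy 1-D linearly separable data
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
w, b = train_logistic(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
```

The gradient `X.T @ (p - y) / n` is the same expression that reappears in the output layer of a neural network, which is why regression is the right starting point.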
Model Complexity and Regularization
Master overfitting, underfitting, and the bias-variance tradeoff. Implement L1/L2 regularization from scratch and understand their effects on model generalization.
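One compact way to see the L2 effect is the ridge closed form, where the regularization strength directly shrinks the weights (a sketch with illustrative names, assuming the closed-form variant rather than gradient descent):

```python
import numpy as np

def ridge_closed_form(X, y, lam):
    """Ridge (L2-regularized) least squares: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + 0.1 * rng.normal(size=20)

w_small = ridge_closed_form(X, y, lam=0.01)
w_large = ridge_closed_form(X, y, lam=100.0)
# Stronger regularization shrinks the weight vector toward zero
print(np.linalg.norm(w_large) < np.linalg.norm(w_small))  # True
```

Larger `lam` trades bias for variance: the fit gets worse on training data but the weights, and hence the model's sensitivity to noise, get smaller.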
Feedforward Neural Networks from Scratch
Build complete feedforward neural networks in NumPy. Implement backpropagation, activation functions (ReLU, Leaky ReLU, ELU), weight initialization, and batch normalization from first principles.
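The core of such a network can be sketched in a few dozen lines; the following one-hidden-layer MLP with manual backpropagation (illustrative names and hyperparameters, not course code) learns XOR, a problem no linear model can solve:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=8, lr=0.5, epochs=3000, seed=0):
    """One-hidden-layer MLP for binary classification, trained with
    manual backpropagation on the cross-entropy loss."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    y = y.reshape(-1, 1)
    W1 = rng.normal(scale=np.sqrt(2.0 / d), size=(d, hidden))  # He-style init
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=np.sqrt(2.0 / hidden), size=(hidden, 1))
    b2 = np.zeros(1)
    losses = []
    for _ in range(epochs):
        # Forward pass
        h = relu(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        losses.append(float(-np.mean(y * np.log(p + 1e-12)
                                     + (1 - y) * np.log(1 - p + 1e-12))))
        # Backward pass: sigmoid + cross-entropy gives dL/dz2 = p - y
        dz2 = (p - y) / n
        dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
        dz1 = (dz2 @ W2.T) * (h > 0)  # ReLU gradient mask
        dW1, db1 = X.T @ dz1, dz1.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    predict = lambda Xq: (sigmoid(relu(Xq @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
    return predict, losses

# XOR: not linearly separable, so the hidden layer is essential
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 0])
predict, losses = train_mlp(X, y)
```

Every gradient here falls out of the chain rule; writing them by hand once makes PyTorch's `autograd` feel transparent rather than magical.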
Convolutional Neural Networks from Scratch
Build a CNN and implement the im2col trick in NumPy. Master convolution operations, translation equivariance, receptive field calculations, and pooling layers before moving to frameworks.
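The im2col idea can be sketched briefly: unfold every kernel-sized patch into a column so that convolution reduces to one matrix multiply (a minimal single-channel, stride-1, no-padding version with illustrative names):

```python
import numpy as np

def im2col(x, kh, kw):
    """Unfold a (H, W) image into a (kh*kw, out_h*out_w) matrix whose
    columns are the flattened kh-by-kw patches (stride 1, no padding)."""
    H, W = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = np.empty((kh * kw, out_h * out_w))
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = x[i:i + kh, j:j + kw].ravel()
            idx += 1
    return cols

def conv2d_im2col(x, kernel):
    """Cross-correlation via im2col: one row-vector times patch-matrix product."""
    kh, kw = kernel.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    return (kernel.ravel() @ im2col(x, kh, kw)).reshape(out_h, out_w)

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.array([[1., 0.], [0., -1.]])
out = conv2d_im2col(x, k)  # each output is x[i, j] - x[i+1, j+1]
```

Trading memory (duplicated patch pixels) for a single dense matmul is exactly how many framework convolution kernels are organized under the hood.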