Day 1: Computational Linear Algebra + Project: Compressed Sensing and Reconstruction of a CT Scan-Like Image
Self Introduction¶
Hello folks! In case you don't already know me, I'm Rohan Sai, a developer, AI enthusiast, and passionate explorer of multiple technologies. I love breaking down complex concepts into simple, digestible ideas and sharing them with the community. Writing under the pen name Aiknight, my goal is to demystify advanced topics in technology, making them accessible to learners and developers alike.
I’m thrilled to embark on an exciting 120 Days of Deep Learning adventure, and I’d love for you to join me every step of the way. Over the course of this journey, we’ll travel through the captivating landscape of deep learning, uncovering its foundations, exploring its architectures, and delving into its real-world applications.
As part of this journey, I have also developed a project on Compressed Sensing and Reconstruction of CT Scan-Like Images.
You can try it out at Colab Link
Feel free to check out the Colab notebook, explore the code, and share your thoughts or contributions. Let’s collaborate and grow as a community!
So let's not waste any time and get into today's topic, Computational Linear Algebra, for Day 1 of my "120 Days of Deep Learning" journey.
Fun Fact:¶
Deep learning took a giant leap forward with AlexNet in 2012, smashing benchmarks in image classification by using GPUs for training and popularizing techniques like ReLU activation and dropout. It marked the dawn of the deep learning revolution we thrive in today!
Scalars, Vectors, Matrices, and Tensors¶
Concepts:¶
Scalars: A single number, e.g., $ a = 5 $.
Vectors: A 1D array, $ \vec{v} = [v_1, v_2, ..., v_n] $.
Matrices: A 2D array of numbers, $ A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} $.
Tensors: Higher-dimensional generalizations of matrices.
Procedure:¶
- Represent the entities mathematically.
- Perform operations like addition, subtraction, and multiplication.
Code Example:¶
import numpy as np
# Scalars
scalar = 5
print(f"Scalar: {scalar}")
# Vectors
vector = np.array([1, 2, 3])
print(f"Vector: {vector}")
# Matrices
matrix = np.array([[1, 2], [3, 4]])
print(f"Matrix:\n{matrix}")
# Tensors
tensor = np.random.rand(2, 3, 4) # 3D Tensor
print(f"Tensor:\n{tensor}")
Matrix Multiplication, Identity, and Inverse¶
Concepts:¶
- Matrix Multiplication: For $ A \in \mathbb{R}^{m \times n} $ and $ B \in \mathbb{R}^{n \times p} $: $ C_{ij} = \sum_{k=1}^n A_{ik} \cdot B_{kj} $
- Identity Matrix: $ I $, where $ AI = IA = A $.
- Matrix Inverse: $ A^{-1} $, where $ AA^{-1} = I $.
Procedure:¶
- Multiply matrices whose dimensions are compatible (the number of columns of $ A $ must equal the number of rows of $ B $).
- Verify the inverse using $ A \cdot A^{-1} = I $.
Code Example:¶
# Matrix Multiplication
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = np.dot(A, B) # Or A @ B
print(f"Matrix Multiplication Result:\n{C}")
# Identity Matrix
I = np.eye(2)
print(f"Identity Matrix:\n{I}")
# Matrix Inverse
inverse_A = np.linalg.inv(A)
print(f"Inverse of A:\n{inverse_A}")
# Verify Inverse
verification = np.dot(A, inverse_A)
print(f"Verification (A * A^-1):\n{verification}")
Linear Dependence and Span¶
Concepts:¶
Linear Dependence: Vectors $ v_1, v_2, ..., v_n $ are dependent if: $ c_1v_1 + c_2v_2 + ... + c_nv_n = 0 $
where $ c_i \neq 0 $ for some $ i $.
Span: Set of all possible linear combinations of vectors.
Procedure:¶
- Create a system of equations.
- Check for non-trivial solutions (e.g., rank deficiency).
Code Example:¶
# Checking Linear Dependence
from sympy import Matrix
vectors = Matrix([[1, 2], [2, 4]]) # Two linearly dependent vectors
rank = vectors.rank()
print(f"Rank of the matrix: {rank}")
if rank < vectors.rows:  # rank below the number of vectors implies dependence
    print("Vectors are linearly dependent.")
else:
    print("Vectors are linearly independent.")
# Span Example
vector_space = np.array([[1, 0], [0, 1]]) # Basis vectors for 2D space
linear_comb = np.dot([3, 4], vector_space)  # 3 * [1, 0] + 4 * [0, 1]
print(f"One linear combination from the span:\n{linear_comb}")
Vector and Matrix Norms¶
Concepts:¶
- Vector Norm: Measures vector magnitude: $ \|v\|_p = \left( \sum_{i=1}^n |v_i|^p \right )^{1/p} $
- Matrix Norm: Generalizes norms to matrices, e.g., the Frobenius norm $ \|A\|_F = \sqrt{\sum_{i,j} a_{ij}^2} $.
Code Example:¶
# Vector Norms
vector = np.array([3, 4])
l2_norm = np.linalg.norm(vector, ord=2) # Euclidean Norm
print(f"L2 Norm of the vector: {l2_norm}")
# Matrix Norms
matrix = np.array([[1, 2], [3, 4]])
frobenius_norm = np.linalg.norm(matrix, ord='fro') # Frobenius Norm
print(f"Frobenius Norm of the matrix: {frobenius_norm}")
Eigendecomposition¶
Concepts:¶
- Eigendecomposition: Decomposing a square matrix $ A $ into eigenvalues $ \lambda $ and eigenvectors $ v $, where:
$
Av = \lambda v
$
The matrix $ A $ can then be written as:
$
A = V \Lambda V^{-1}
$
Here, $ \Lambda $ is a diagonal matrix of eigenvalues, and $ V $ contains the eigenvectors as columns.
Procedure:¶
- Compute eigenvalues $ \lambda $ and eigenvectors $ v $ using the characteristic equation $ \det(A - \lambda I) = 0 $.
- Verify $ Av = \lambda v $.
Code Example:¶
# Eigendecomposition
A = np.array([[4, 2], [1, 3]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(f"Eigenvalues:\n{eigenvalues}")
print(f"Eigenvectors:\n{eigenvectors}")
# Verify decomposition
V = eigenvectors
Lambda = np.diag(eigenvalues)
A_reconstructed = V @ Lambda @ np.linalg.inv(V)
print(f"Reconstructed Matrix A:\n{A_reconstructed}")
Singular Value Decomposition (SVD)¶
Concepts:¶
- SVD: Decomposing any matrix $ A $ into:
$
A = U \Sigma V^T
$
- $ U $: Left singular vectors.
- $ \Sigma $: Diagonal matrix of singular values.
- $ V^T $: Right singular vectors.
Procedure:¶
- Compute $ U, \Sigma, V^T $ using SVD.
- Verify $ A = U \Sigma V^T $.
Code Example:¶
# SVD
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
U, Sigma, Vt = np.linalg.svd(A)
print(f"U:\n{U}")
print(f"Singular Values (Sigma):\n{Sigma}")
print(f"V^T:\n{Vt}")
# Reconstruct Matrix A
Sigma_matrix = np.zeros_like(A, dtype=float)
np.fill_diagonal(Sigma_matrix, Sigma)
A_reconstructed = U @ Sigma_matrix @ Vt
print(f"Reconstructed Matrix A:\n{A_reconstructed}")
Moore-Penrose Pseudoinverse¶
Concepts:¶
- Pseudoinverse: Generalized inverse of a matrix $ A $:
$
A^+ = V \Sigma^+ U^T
$
- $ \Sigma^+ $ is obtained by taking the reciprocal of non-zero singular values in $ \Sigma $.
Procedure:¶
- Compute SVD of $ A $.
- Construct $ A^+ $ using $ U, \Sigma^+, V^T $.
Code Example:¶
# Pseudoinverse
A = np.array([[1, 2], [3, 4], [5, 6]])
pseudo_inverse = np.linalg.pinv(A)
print(f"Pseudoinverse of A:\n{pseudo_inverse}")
# Verify Pseudoinverse
reconstructed_A = A @ pseudo_inverse @ A  # A * A^+ * A should reproduce A
print(f"Verification (A * A^+ * A should equal A):\n{reconstructed_A}")
Trace and Determinant¶
Concepts:¶
Trace: Sum of diagonal elements of a square matrix:
$ \text{Trace}(A) = \sum_{i=1}^n a_{ii} $
Determinant: Scalar value indicating whether a matrix is invertible ($ A $ is invertible iff $ \det(A) \neq 0 $), computed by cofactor expansion:
$ \det(A) = \sum_{j=1}^{n} (-1)^{1+j} a_{1j} M_{1j} $, where $ M_{1j} $ is the minor obtained by deleting row 1 and column $ j $ of $ A $.
Code Example:¶
# Trace and Determinant
A = np.array([[4, 2], [3, 1]])
trace = np.trace(A)
determinant = np.linalg.det(A)
print(f"Trace of A: {trace}")
print(f"Determinant of A: {determinant}")
Principal Component Analysis (PCA)¶
Concepts:¶
- PCA: Reduces data dimensions by projecting onto eigenvectors of the covariance matrix.
Procedure:¶
- Compute the covariance matrix.
- Perform eigendecomposition on the covariance matrix.
- Project data onto the top $ k $ eigenvectors.
Code Example:¶
# PCA Example
from sklearn.decomposition import PCA
data = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])
pca = PCA(n_components=1)
reduced_data = pca.fit_transform(data)
print(f"Reduced Data:\n{reduced_data}")
Background Removal with PCA¶
Concepts:¶
- Background Removal: PCA splits an image into its dominant principal components (the main structure) and the remaining low-variance components, which can be treated as background or noise.
Procedure:¶
- Convert the image into a grayscale matrix.
- Flatten the matrix into vectors and compute the covariance matrix.
- Perform PCA to isolate the main components (foreground).
- Reconstruct the image without background components.
Code Example:¶
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
# Simulated image data
image = np.random.rand(100, 100)
plt.imshow(image, cmap='gray')
plt.title("Original Image")
plt.show()
# Flatten image and apply PCA
pca = PCA(n_components=20) # Keep 20 components
transformed = pca.fit_transform(image)
reconstructed = pca.inverse_transform(transformed)
# Display reconstructed image
plt.imshow(reconstructed, cmap='gray')
plt.title("Reconstructed Image with Background Removed")
plt.show()
So that's it for Day 1...
Don't forget to check out the project: Colab Link
Please make sure to subscribe to my blog.
Also, you can follow me on LinkedIn and X for more exciting content.
Stay tuned... Happy Learning!