- Journey Through Embeddings: CBOW with negative sampling. In the previous post, we explored the fundamentals of Word2Vec and implemented the Continuous Bag of Words (CBOW) model, which efficiently predicts target words from their surrounding context. While CBOW is powerful, scaling it to large vocabularies is computationally challenging because the softmax must be computed over the entire vocabulary. This is where negative sampling comes in: a clever optimization… (a minimal sketch of the loss appears below the list)
- Journey Through Embeddings: Word2vec CBOW. Word embeddings have revolutionized natural language processing by transforming text into meaningful numerical representations that capture the relationships between words. In this blog, we focus on Word2Vec, a foundational technique in embedding methods, with a particular emphasis on the Continuous Bag of Words (CBOW) model. CBOW learns embeddings by predicting a target word from its…
- Clustering with Union-Find: Single-Linkage Implementation. In this post, we'll explore how the union-find data structure works and why it matters in machine learning applications. We'll focus specifically on its role in clustering algorithms, demonstrating how it manages dynamic cluster formation and identifies connected components efficiently. By implementing hierarchical clustering, you'll see firsthand how this structure improves the performance of algorithms… (a minimal union-find sketch appears below the list)
- Mastering Backpropagation: Math and Implementation. Artificial intelligence (AI) has rapidly advanced, transforming industries and pushing technological boundaries. Central to this progress is backpropagation, the essential algorithm that enables neural networks to learn and adapt. Often considered the heartbeat of AI, backpropagation drives the learning process, allowing machines to improve with experience. In this blog post, we'll demystify backpropagation, exploring its…
- Unlocking CNNs with JAX: Comprehensive guide. Let's resume our “Unlocking in JAX” series with the type of neural network that changed how we handle vision, both in general and in real-world applications. Today, we dive into Convolutional Neural Networks (CNNs), exploring their unique ability to capture spatial hierarchies in images. By…
- Unlocking Neural Networks with JAX: optimizers with optax. In our previous exploration within the “Unlocking in JAX” series, we covered the foundational concepts of neural networks and their implementation using JAX. Building on that knowledge, this post shifts focus to a crucial component that significantly improves learning efficiency: optimizers. Optax, a gradient processing and optimization library tailored for JAX, provides the tools necessary… (a minimal optax training-loop sketch appears below the list)
- Unlocking Neural Networks with JAX. Welcome back to our “Unlocking with JAX” series! Today, we're getting hands-on with neural networks by building a multilayer perceptron (MLP) using JAX. JAX sits comfortably between the low-level details of CUDA and the higher-level abstractions offered by frameworks like Keras, offering both clarity and control. This balance helps immensely in understanding the inner workings…
- Unlocking Linear Regression with JAX: A Practical Guide. In today's post, we'll walk you through a step-by-step demonstration of implementing a linear regression model with JAX. Whether you're just stepping into the world of machine learning or looking to expand your knowledge, this post is designed to suit all levels. So grab a cup of your favorite brew, and let's dive into…
- JAX VMAP Simplified: An Easy Introduction for Beginners. In this post, I'll dive deep into one of the most exciting features of the JAX library: vmap, or vectorized map. For high-performance computing and machine learning research, efficiently handling computations over large datasets is critical. This is where JAX, a numerical computation library in Python developed by Google, comes in. Not only does… (a minimal vmap sketch appears below the list)
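The sketches referenced above follow. All of them are illustrative assumptions written for this index, not code taken from the posts themselves. First, the negative-sampling idea from the CBOW post: instead of a softmax over the whole vocabulary, the loss scores the true target word against only K sampled noise words. The function name and argument shapes here are hypothetical:

```python
import jax.numpy as jnp
from jax.nn import sigmoid

def neg_sampling_loss(context_vecs, target_vec, negative_vecs):
    # context_vecs:  (C, D) input embeddings of the C context words
    # target_vec:    (D,)   output embedding of the true target word
    # negative_vecs: (K, D) output embeddings of K sampled noise words
    h = context_vecs.mean(axis=0)                         # CBOW hidden state: averaged context
    pos = jnp.log(sigmoid(jnp.dot(target_vec, h)))        # pull the true target toward h
    neg = jnp.sum(jnp.log(sigmoid(-negative_vecs @ h)))   # push noise words away from h
    return -(pos + neg)                                   # negative log-likelihood to minimize
```

The cost per example is O(K) dot products rather than one per vocabulary word, which is the whole point of the technique.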
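For the union-find post, here is a minimal sketch of the data structure with path compression and union by rank; the identifiers are generic, not necessarily those used in the post:

```python
class UnionFind:
    """Disjoint-set forest with path compression and union by rank."""

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.rank = [0] * n

    def find(self, x):
        # Path halving: point nodes at their grandparent while walking up.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already in the same cluster
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra  # attach the shallower tree under the deeper one
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```

Single-linkage clustering then amounts to sorting the pairwise distances and calling union on the endpoints of each edge, from shortest to longest, until the desired number of clusters remains.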
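The optax post builds full training loops; as a preview, here is a minimal sketch of the standard optax pattern (init, update, apply_updates) on a toy objective. The loss function is a made-up stand-in, not the post's network:

```python
import jax
import jax.numpy as jnp
import optax

def loss_fn(params):
    # Toy objective: drive every parameter toward 3.0.
    return jnp.mean((params - 3.0) ** 2)

params = jnp.zeros(5)
optimizer = optax.adam(learning_rate=0.1)
opt_state = optimizer.init(params)

for _ in range(200):
    grads = jax.grad(loss_fn)(params)                         # gradients via autodiff
    updates, opt_state = optimizer.update(grads, opt_state)   # transform the gradients
    params = optax.apply_updates(params, updates)             # apply the step

print(params)  # all entries close to 3.0
```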
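Finally, a minimal vmap sketch. The per-example function below is a hypothetical affine map, but the in_axes pattern, broadcasting the parameters while mapping over the batch axis of the inputs, is the core idea the vmap post explores:

```python
import jax
import jax.numpy as jnp

def affine(w, b, x):
    # Per-example computation: one matrix-vector product plus bias.
    return jnp.dot(w, x) + b

w = jnp.ones((3, 4))
b = jnp.zeros(3)
xs = jnp.ones((8, 4))  # a batch of 8 four-dimensional inputs

# Map over the leading axis of xs only; w and b are shared (in_axes=None).
batched_affine = jax.vmap(affine, in_axes=(None, None, 0))
print(batched_affine(w, b, xs).shape)  # (8, 3)
```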