Hi there!
Hello everyone! 👋 I’m Pablo, and I’m glad you’ve taken the time to visit my portfolio. If you’re here, something must have caught your eye—so welcome! 🎉
I’m passionate about Artificial Intelligence 🤖 and love diving deep into topics I find intriguing. Lately, I’ve been delving into Topological Data Analysis, leveraging my background in mathematics and computer science. Currently, I’m focused on mastering the challenges of understanding imbalanced data. 📊🔍
Outside the technical world, I’m someone who cherishes nature 🌲 and enjoys trekking 🥾. When time is tight, I switch gears and spend my free time cooking—if you’re curious, ask me about my signature dish, “cocido” 🍲 😉
Feel free to explore the projects and posts here. If there’s a topic you’d like to discuss, don’t hesitate to reach out. I’m always up for a good conversation! 💬
Remember that cozy assumption from basic ML theory? That our training data $\mathcal{S} = \{(x_1, y_1), \dots, (x_N, y_N)\}$ is drawn independently and identically distributed (i.i.d.) from some underlying true distribution $\mathcal{D}$? And that minimizing the empirical risk $\mathcal{R}_{emp}(f)$ is a good proxy for minimizing the true risk $\mathcal{R}(f)$? It sounds so clean, so elegant.
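For reference, written out with a generic loss $\ell$ (the loss itself isn't specified above, so take this as the standard textbook form):

$$
\mathcal{R}_{emp}(f) = \frac{1}{N} \sum_{i=1}^{N} \ell\bigl(f(x_i), y_i\bigr),
\qquad
\mathcal{R}(f) = \mathbb{E}_{(x, y) \sim \mathcal{D}}\bigl[\ell\bigl(f(x), y\bigr)\bigr].
$$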
But what happens when $\mathcal{D}$ itself is… lopsided?
Welcome to the Imbalance Zone

Think about real-world problems:...
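To make the "lopsided" point concrete, here's a minimal sketch (mine, not from the post itself) of how a 99:1 class ratio lets a model that never predicts the rare class still look excellent on accuracy:

```python
# Minimal illustration: with ~1% positives, a trivial "always predict the
# majority class" model scores ~99% accuracy while missing every rare case.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
y_true = (rng.random(N) < 0.01).astype(int)   # ~1% positives (the rare class)

y_pred = np.zeros(N, dtype=int)               # trivial model: always predict class 0

accuracy = (y_pred == y_true).mean()
minority_recall = (y_pred[y_true == 1] == 1).mean() if (y_true == 1).any() else 0.0

print(f"accuracy:        {accuracy:.3f}")      # ~0.99, looks great
print(f"minority recall: {minority_recall:.3f}")  # 0.000, useless on the class we care about
```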
Hey everyone! Let’s talk about machine learning. It’s everywhere now, right? From your phone unlocking with your face to recommending weirdly specific t-shirts online. But what is it, fundamentally? At its core, it’s about teaching computers to do stuff by showing them examples, rather than programming explicit rules for every conceivable situation. Think about teaching a kid what a dog is. You don’t list out “has four legs, barks, wags tail, etc....