Aayur Kaul: Empathetic AI - How Human-Centered Design Will Shape the Future of Artificial Intelligence

Aayur shares his philosophy on human-centered design, how it informs responsible AI development, and why empathy could be the most powerful feature we build into machines.

Aayur Kaul

As artificial intelligence continues to shape our everyday lives, one thing is becoming clear: intelligence alone isn't enough. To truly serve humanity, AI must be designed with empathy at its core. Aayur Kaul, a product leader, former founder, and strategist with experience across edtech, fintech, and the consumer internet, believes the next frontier in AI isn't just about pushing technological boundaries, but about deepening its relevance to real human needs. In this conversation, Aayur shares his philosophy on human-centered design, how it informs responsible AI development, and why empathy could be the most powerful feature we build into machines. Below are excerpts from an interview with Aayur Kaul:

1. Why do you believe the next wave of AI must be more empathetic? What risks do we face if it isn't?

The next wave of AI needs empathy because technology should solve human problems, not just technical ones. Imagine an AI that recommends courses based solely on popularity; it might push trending topics but ignore a user’s unique interests, like niche art forms. Without empathy, AI risks becoming a blunt tool that amplifies biases or prioritizes efficiency over inclusivity. For instance, a loan approval algorithm that ignores regional economic disparities could unfairly deny opportunities. At Operabase, we avoided this by designing recommendations that balanced global trends with individual artistic preferences, ensuring smaller theatre productions weren’t drowned out by mainstream shows. If AI lacks empathy, it risks eroding trust and widening societal gaps, problems no amount of technical brilliance can fix.

2. How can product teams operationalize empathy? What does that look like in day-to-day decision-making?

Empathy becomes actionable when teams treat user stories as guiding principles. At one of the places I worked, we noticed learners in India often abandoned courses midway. Instead of blaming "low engagement," we shadowed users and discovered many struggled with unreliable internet. This led us to introduce offline access and shorter video modules, a fix rooted in empathy, not just analytics. Empathy isn't a workshop exercise; it's about making user voices audible in every meeting.

3. Many AI systems unintentionally amplify bias. How can empathy help mitigate that?

Empathy forces us to ask, "Whose perspective is missing?" Think of it like a diverse jury: AI trained on varied voices makes fairer decisions. In my current role, we combined global user data with hyper-local performance details to ensure recommendations didn't favor Western classics. Empathy here isn't sentimental; it's practical. It's about deliberately seeking out blind spots and designing guardrails to protect marginalized groups.

4. Can you share an example of a product or feature you helped build that benefited from a more human-centered approach?

At Operabase, while building the recommendation system, instead of chasing higher click-through rates, we interviewed performers to understand their needs. A ballet choreographer mentioned struggling to discover avant-garde collaborations, while a student wanted smaller, affordable shows. We shifted from a one-size-fits-all algorithm to a hybrid model that blended popular trends with niche filters. The result? Users spent 24% more time on the platform. This human-centered pivot showed that empathy isn't just ethical; it's good business.

5. Do you think there's a tradeoff between personalization and privacy in empathetic AI? How should product leaders navigate it?

The tradeoff exists but can be managed with creativity. At Skillshare, we wanted personalized course suggestions without invasive tracking. Instead of monitoring every click, we asked users to share learning goals upfront, like a teacher tailoring a syllabus after a conversation. It's akin to a chef who learns your tastes through direct feedback, not by snooping in your pantry. Leaders should treat privacy as a feature. For example, Revolut's regional payment adaptations in APAC respected user boundaries while improving service at every step.

6. One product you admire for its human-centricity?

Spotify's Discover Weekly is a masterclass in empathetic AI. It feels like a friend who knows your secret love for 90s Bollywood music but also nudges you to try something new. The algorithm balances your habits with cultural trends, avoiding the "filter bubble" effect of platforms that trap users in repetitive loops. What stands out is its transparency: you can tweak recommendations or skip tracks without guilt. At one of the firms where I worked, we borrowed this philosophy by letting users control their recommendations, empowering them to explore freely. Spotify proves that human-centric design isn't about perfection; it's about creating space for curiosity and trust.

As the conversation wraps, Aayur leaves us with a powerful reminder: building great AI isn't just a technical challenge; it's a human one. By anchoring innovation in empathy and inclusivity, we unlock AI's true potential to enrich lives, broaden access, and empower people across diverse contexts. In Aayur's vision, the future of AI doesn't just think, it understands. And in that understanding lies its greatest impact.
