
The Baron in the Trees
Published: 9/20/2025
Saturday night thoughts 🌙 - I’m gathering a few of my LinkedIn posts that I really enjoyed writing and want to share with everyone here. I’d truly love to hear your thoughts.
The Baron in the Trees
Just finished The Baron in the Trees — and I can’t stop thinking about it.
When I closed the last page, I was stunned to realize it was written in 1957 — nearly 70 years ago. And yet, the voice felt so modern, the ideas so timeless.
Italo Calvino tells the story of a boy who climbs into the trees in protest — and never comes down. He builds a full life up there: adventure, relationships, conflict, reflection. Everything you'd expect to find on the ground... just elevated.
It reads like an allegory for escapism. Even up in the trees, life finds him. The same messiness, the same beauty. It reminded me of how we sometimes think about AI — as something separate, something we can somehow outrun.
But AI is already woven into our reality. Rather than treat it as something detached, we can see it as a tool — a branch we build on, not one we hide in.
What struck me most was how clearly Calvino’s voice reached across time. I felt this odd closeness to him — like we were sitting on the same branch, exchanging thoughts.
Ideas do transcend time. I’ll keep writing to jot down my own — and keep going.
Have you ever read something old that felt unexpectedly relevant? I'd love to hear about it.
🤖 When AI Echoes the Worst of Us 🤖
Last night I stumbled upon some shocking news: a wildly irresponsible and sensational remark allegedly generated by Elon Musk’s latest AI model, Grok.
Whether it’s a glitch, oversight, or misaligned prompt, it was jaw-dropping.
But it also reminded me of Harvard’s Michael Sandel and his framework: The Four Ethics of AI.
1. Job Displacement
2. Amplification of Bias
3. Privacy Erosion
4. Democratic Decay
AI doesn’t just reflect our world—it often magnifies our worst impulses.
You don’t know what you don’t know.
You think you know what others don’t know.
But really—you’re trapped inside an information cocoon built by algorithmic feedback loops.
When bias is embedded at scale, it ceases to feel like bias. It feels like truth.
That’s the danger.
As Sandel says, the ethical challenge is not just about building safer models—it’s about redefining what kind of society we want to be.
If your AI says something that a decent human wouldn’t, it’s not just a bug. It’s a mirror.
The future of AI shouldn’t just be about faster, smarter, cheaper.
It must be wiser.
What are you doing to keep your AI human?
The Picture of Dorian Gray
While spending a quiet afternoon at home, rearranging books and wiping down the shelves, one paperback slipped from my hands and hit the floor with a soft thud. The cover stared up at me — The Picture of Dorian Gray. A novel I hadn’t thought about in years.
It was one of the single best novels I read during my undergraduate years majoring in English literature — haunting, beautiful, and unforgettable.
In Oscar Wilde’s haunting tale, the main character remains flawless and youthful, while his portrait — hidden away — slowly reveals the weight of every selfish act. Each moral compromise, each unspoken cruelty, etched itself into the canvas.
It made me wonder: what if our careers had a portrait?
One that, instead of aging with our moral failings, reflected something deeper —
Every shortcut we take, every time we stay silent when we could lift someone up, every passionate junior we overlook — it leaves a mark.
Not just on the work, but on the invisible culture we help shape.
On the other hand, every act of mentorship, every thoughtful decision, every moment we choose curiosity over convenience — adds depth, color, and meaning.
Now, with AI rapidly transforming how we build and create, it's easy to prioritize speed. But maybe the real question isn't how fast can we go —
It's who are we becoming as we get there?
Privilege & Product Design
Just came across a rental listing in Los Alamos: $625/month, utilities included. Not long ago, rooms like this easily went for over $1,000 with a long waiting list.
It reminded me how often we confuse success with merit alone.
If it were purely about talent and hard work, wouldn’t Elon Musk have built the same empire had he stayed in South Africa?
Where we are — and what we have access to — plays a far bigger role than we often acknowledge. Context matters. Privilege matters.
And it’s the same with people — whether in life or in product design. Before you judge someone’s "results," you have to understand their constraints. That’s what empathy is about. Great design — and great leadership — starts from there.
The Left Hand of Darkness
When I was an undergraduate, I wrote an essay on The Left Hand of Darkness by Ursula K. Le Guin — a novel published in 1969, yet profoundly prophetic in today’s age of AI.
The story follows Genly Ai, an ambassador from another planet, sent to the icy world of Gethen to convince its people to join an interplanetary alliance. But the true challenge isn’t diplomatic — it’s human. Gethenians are ambisexual, and their culture doesn’t conform to the gender, political, or moral binaries Genly is used to. He spends most of the story trying — and failing — to make sense of it all through the lens of his own assumptions.
One of the most moving chapters takes place on a long, frozen trek across the Gobrin Glacier. Genly and Estraven, a Gethenian ally, are forced into exile and must rely entirely on one another to survive. During this slow, grueling journey, Genly begins to let go of the labels — male/female, strong/weak, friend/enemy — and starts to listen. Only then does true understanding emerge.
That part of the book has stayed with me — and lately, it’s shaped how I think about AI.
We often ask: Is AI good or bad? Creative or mechanical? Will it replace us or help us?
But maybe these are the wrong kinds of questions. Maybe, like Genly, we need to stop forcing binary labels and start learning to relate on new terms.
I can’t imagine a world with only AI and no humans — but I also can’t imagine a future where we continue without AI. Like Genly and Estraven, it’s not about one replacing the other. It’s about coexistence, interdependence, and learning to understand something unfamiliar, without fear.
And yes — the main character’s name is Ai. Coincidence? Maybe. But it makes me smile.
AI doesn’t think like us. It doesn’t share our emotions or consciousness. But it reflects what we put into it — our design, our data, our flaws.
If we approach it not with certainty but with humility and curiosity, we might just cross some new terrain — together.
Enough is not failure
When I was in grad school studying global journalism, a professor told us a story I never forgot.
A group of aid workers visited a village in Africa and saw the locals fishing with simple tools. Wanting to help, they offered better equipment and techniques to increase the catch. But the villagers politely declined.
“We already catch enough to feed our families,” they said. “Why would we want to spend all day fishing?”
It was a powerful reminder: sometimes, what looks like inefficiency is really a choice—a deeply human one—to live with balance, not excess.
Reading your piece reminded me of that story. Our deepest skills—the ones we love—might not always align with what’s fastest or most efficient. But they carry meaning. And in this age of AI and automation, maybe it’s worth remembering that enough is not failure.
Mars Needs Elon. Earth Needs Better Humans.
Just read this fascinating piece by Julie Zhuo: "When AI Has Better Taste Than You", where she shares a conversation with Notion’s founder, Ivan Zhao, and I couldn’t agree more with its central idea — that taste can be trained. The ability to discern quality, to appreciate subtlety, to cultivate aesthetics — these are all skills that can be nurtured.
But there’s one thing that can’t be trained: willpower.
That’s also why, in my opinion, Elon Musk’s vision of colonizing Mars is not a failure in engineering — but in imagination. Even if we had the tech to send 8 billion people to Mars or Venus or wherever, we’d simply bring the same problems with us: greed, inequality, conflict, and short-sightedness. New soil doesn’t grow new morals. New planets won’t fix old patterns.
The future doesn’t depend on better planets.
It depends on better people.
Can You Mourn a Chatbot?
I recently watched an interview with Dr. Michael Sandel, the renowned Harvard philosopher, and he posed a powerful question:
"Suppose your grandmother lives alone. She’s lonely. You buy her an AI-powered robot — one that chats, offers advice, and becomes her daily companion. She grows fond of it. She believes it’s her friend. If this makes her happier, would you accept it? Or would something about that still unsettle you?"
Sandel challenges us to think beyond convenience and compassion. He asks:
"When the line between real and virtual intimacy blurs — what do we lose?"
And he goes even further:
"What if, after your grandmother passes away, her digital traces — emails, voice recordings, photos, social media — are used to train a chatbot that speaks like her, remembers your childhood, and gives advice in her tone of voice? One you — and even your children — could talk to. Would that bring comfort — or distort grief?"
These questions aren’t just about technology — they’re about what it means to be human.
📌 If we no longer grieve real loss,
📌 If we replace presence with simulation,
📌 If we treat algorithms as kin —
What happens to memory, mourning, and meaning?
“Loss” is part of our humanity. The very act of saying goodbye shapes who we are.
****************************************************************************
I launched a SaaS tool called Mermaid Mind. It converts YouTube videos (and even platforms like Coursera, as one user highlighted) into mind map diagrams and pill tags.
You can try it out instantly (no signup required; just copy and paste a URL), or sign up to unlock more features like:
– Downloading your mind maps
– Saving them to your account
– Browsing by pill tags for quick organization
It’s especially handy for brainstorming, SEO keyword extraction, and content summarization.
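For readers unfamiliar with Mermaid mind maps, here is a minimal sketch of what that kind of diagram looks like as text. The `to_mermaid_mindmap` helper and the sample outline below are purely hypothetical illustrations of the output format, not Mermaid Mind’s actual code or API:

```python
def to_mermaid_mindmap(title, topics):
    """Render a nested {topic: [subtopics]} outline as Mermaid mindmap syntax."""
    lines = ["mindmap", f"  root(({title}))"]  # double parens draw a circular root node
    for topic, subtopics in topics.items():
        lines.append(f"    {topic}")           # indentation defines the tree depth
        for sub in subtopics:
            lines.append(f"      {sub}")
    return "\n".join(lines)

# A toy outline, as if extracted from a video about the first post above
diagram = to_mermaid_mindmap(
    "The Baron in the Trees",
    {"Themes": ["Escapism", "Coexistence"], "Author": ["Italo Calvino"]},
)
print(diagram)
```

Pasting a string like this into any Mermaid-aware renderer (GitHub, Notion, the Mermaid live editor) turns it into a visual mind map.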
You can also test it out and fill in my quick feedback form 👉 https://forms.gle/NdPNb9ZvMbxyfCfo9