TL;DR
  • Apple's new design system, Liquid Glass, is less about aesthetics than acclimation. Apple is training us for the spatial computing world it plans to build.

I read this week that one toggle-tap in Apple's new Liquid Glass interface uses more computational power than the entire Apollo 11 mission used to land humans on the moon.

This single toggle animation renders 120 frames across 90,000 subpixels, each requiring thousands of shader calculations. All that processing power dedicated to making fake glass shimmer on a screen you'll glance at for half a second.
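The arithmetic is easy to sanity-check. Here's a back-of-envelope sketch in Swift using those same rough figures; the per-subpixel shader count is the loosest assumption, so treat the result as an order of magnitude, not a measurement:

```swift
// Back-of-envelope math for the toggle claim, using the
// rough figures above. Treat the output as an order of
// magnitude, not a benchmark.
let frames = 120                  // one full animation at 120 Hz
let subpixels = 90_000            // subpixels touched by the toggle
let shaderOpsPerSubpixel = 1_000  // "thousands" -- the loosest assumption

let totalOps = frames * subpixels * shaderOpsPerSubpixel
print(totalOps)  // 10,800,000,000 -- roughly the "ten billion
                 // operations" figure that comes up later
```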

A New Operating System

By now you may have heard about Apple's new design system rolling out across all their devices: Liquid Glass. After years of incremental updates (slightly rounder corners here, marginally faster processors there), Apple is treating translucent menus like the second coming of the iPhone.

The marketing team is working overtime to convince us this design refresh represents a "revolutionary leap in human-computer interaction."

It seems pretty laughable.

Still… as soon as I updated my iPhone last week, I caught myself spending twenty minutes just opening and closing apps. Watching screens blur in and out. Notifications glistening with subtle light reflections that shift based on how you're holding your phone.

Yes, it’s absurdly overengineered. But I'm kind of obsessed with it.

As I underwent my annual mass-update ritual (iPhone, Mac, Watch, Apple TV, Vision Pro), I started thinking about these interfaces. Between all of my devices, I spend most of my waking hours inside an Apple-designed operating system. The iPhone is the alarm that pulls me out of sleep. I stare into my Mac for 10+ hours a day. My Watch monitors every step and heartbeat along the way.

For many of us, Apple's interface has become the lens through which we process reality.

Which is why the fanfare around Liquid Glass feels telling. Apple executives framed it as "inspired by visionOS," the headset operating system, signaling a deeper bet on spatial computing.

What began as an AR experiment is now the template for the entire ecosystem. Layers of glass, depth cues, and shifting light effects are becoming part of daily interactions for billions of people. Each swipe and tap trains us to see digital information as if it belongs in physical space, preparing the masses for augmented reality long before the hardware arrives.

How We Got Here

I'm always surprised when I see old screenshots of iPhone interfaces from 2007. They look hilarious and antique, like wood-paneled station wagons or rotary phones.

Looking back at these UIs, you can almost map the two-decade training program Apple's been running on our brains.

2007-2013: The Skeuomorphic Era

The Notes app once looked like yellow legal paper, YouTube was a mini-TV, and Game Center had a green felt poker-table aesthetic. None of this was accidental. Apple needed to convince humans that tapping glass could manipulate objects, so they made digital things look exactly like physical things.

Training wheels for the touchscreen age.

2013-2023: The Flat Years

Once we learned to tap glass confidently, Apple stripped away every fake texture overnight. iOS 7 was jarring. Suddenly everything was white space and simple shapes. But we adapted immediately because the training had worked. We didn't need 3D-looking buttons anymore. We'd internalized that rectangles could be buttons and swipes could move us through digital space.

2023-Now: Spatial Conditioning

Now Liquid Glass surrounds us with layers that feel suspended in air. Menus drift like sheets of frosted glass, notifications hover and blur behind them, and entire screens seem to float on top of one another.

When you swipe, the content doesn’t just disappear, it recedes into imagined depth, its motion guided by curves that engineers spent months refining. There’s no true space here, only transparency and math, arranged to persuade your brain that digital objects occupy physical layers.
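You can fake that depth with surprisingly little. Here's a minimal SwiftUI sketch (not Apple's actual implementation, just one way to approximate the effect) that combines scale, blur, and opacity on a spring curve so a card "recedes" rather than vanishes:

```swift
import SwiftUI

// A minimal sketch of the "recede into depth" effect -- not
// Apple's implementation, just one way to fake depth on a
// flat screen with scale, blur, and opacity.
struct RecedingCard: View {
    @State private var dismissed = false

    var body: some View {
        RoundedRectangle(cornerRadius: 24)
            .fill(.blue)
            .frame(width: 300, height: 180)
            // Shrinking, blurring, and fading together is what
            // reads as "moving away" rather than "disappearing."
            .scaleEffect(dismissed ? 0.85 : 1.0)
            .blur(radius: dismissed ? 8 : 0)
            .opacity(dismissed ? 0 : 1)
            .onTapGesture {
                // A spring curve, not a linear one: the slight
                // overshoot-and-settle is the "refined" motion.
                withAnimation(.spring(response: 0.4, dampingFraction: 0.8)) {
                    dismissed = true
                }
            }
    }
}
```

The spring's overshoot-and-settle is what reads as physical motion; a linear curve between the same endpoints feels dead.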

Features

Beyond the visual evolution, consider how the features your iPhone has collected over the years have shaped your daily life.

Live Photos seemed gimmicky when they launched. Now, half my memories move. I don't just remember moments anymore, I remember the half-second before and after them. The breeze in someone's hair. The wind-up to a high-five.

These micro-movements have become part of how I recall the past.

The original slide-to-unlock gesture from 2007 trained billions of people that horizontal movement meant "begin." Now we swipe up to go home, down to refresh, right to go back. Our thumbs have been programmed with a gestural language more universal than any spoken language.

iMessage has quietly become one of the most powerful pieces of social infrastructure on Earth. For millions, it is the medium through which friendships are built, families coordinate, relationships bloom, and sometimes collapse.

Apple made one of its most consequential design choices when it branded messages from non-iPhone users with that awful green, a subtle but ever-present indicator of "in" and "out," a boundary enforced in every group chat where green interrupts the flow of blue.

Inside the blue-bubble world, Apple has engineered a full emotional vocabulary. The three dots pulsing at the bottom of the screen have become a universal symbol for anticipation. The single thumbs-up reply (👍) can end a conversation like a slammed door.

What once looked like minor interface tweaks has solidified into cultural infrastructure, shaping how we remember and how we connect.

Each of these features taught us something: that digital memories can move, that gestures carry meaning, and that blue means belonging.

And now Liquid Glass is teaching us the next lesson.

Training for Tomorrow

Every interface element in Liquid Glass carries optical qualities of frosted glass, responding to your touch, your tilt, even ambient light. The menu bar on Mac is now transparent. Sidebars refract content behind them. Your lock screen time display sits behind the subject of your wallpaper photo, creating depth.
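You can approximate the basic layering trick with SwiftUI's long-standing Material API, though the real Liquid Glass refraction and light response go far beyond a simple blur. A rough sketch:

```swift
import SwiftUI

// A rough approximation of a frosted layer using SwiftUI's
// Material API. Liquid Glass's actual refraction and light
// response go well beyond this; the sketch only shows the
// basic blur-what's-behind-me layering idea.
struct FrostedPanel: View {
    var body: some View {
        ZStack {
            // Content "behind the glass"
            LinearGradient(colors: [.purple, .orange],
                           startPoint: .top, endPoint: .bottom)
                .ignoresSafeArea()

            // The panel blurs and tints whatever sits behind it,
            // which is what sells the sense of stacked layers.
            Text("9:41")
                .font(.system(size: 64, weight: .semibold))
                .padding(40)
                .background(.ultraThinMaterial,
                            in: RoundedRectangle(cornerRadius: 28))
        }
    }
}
```

The panel isn't transparent so much as it samples and blurs what's behind it, which is exactly the depth cue described above.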

None of this is necessary for functionality. You could tap solid rectangles and get the same results. But Apple is spending billions in research and computation to simulate realistic glass physics because they're conditioning us for augmented reality, where digital objects will need to feel like they belong in physical space.

When AR glasses finally arrive, the concept won't feel alien. Our phones will have spent years teaching us that digital objects have transparency, cast shadows, and exist in layers.

Most of us unlock our phones 150+ times per day. Each time, we're receiving a micro-dose of spatial training. The way notifications stack. The way apps shrink into the dock. The way Control Center emerges from invisible depth.

Ten billion operations per second, all to maintain an illusion of physicality that doesn't exist.

We spend more time looking at these interfaces than at actual reality. Our spatial reasoning, our attention patterns, our expectations about how information should behave. All of it shaped by design decisions made in Cupertino conference rooms.

The fascinating part isn't that this is happening. It's that we barely notice anymore. The interface has achieved its ultimate goal: becoming indistinguishable from intuition.

A Spatial Future

Next time you unlock your phone, pay attention to the shimmer. Notice how your eyes track the animation. Feel how your thumb expects the bounce. Watch yourself unconsciously tilt the screen to catch the light on that fake glass.

Apple has spent more computational power on that single interaction than it took to put humans on the moon. They're playing a longer game than we realize.

Apple is designing for the cognitive patterns we'll need for whatever comes next. Every unnecessarily beautiful detail is preparing us for a world where digital and physical stop being separate categories.

The interface shapes the mind. That’s the quiet magic of design: we think we’re just scrolling.

Up and to the right.