
Agents Are Beginning to Show, Not Just Tell


More signals from Google, my friends. Their A2UI project shows the next evolution in how AI presents your property: by showing guests, not just telling them.

Let’s imagine a guest asks an AI agent to book a spa appointment at your resort.

Currently, in the text-only chat world, it would go something like this:

“I’d like to book a massage.”
“What type of massage would you like?”
“What do you have?”
“We offer Swedish, deep tissue, hot stone, couples, and aromatherapy massages ranging from 50 to 90 minutes.”
“Hot stone sounds good. What times are available tomorrow?”
“We have openings at 10am, 11:30am, 2pm, and 4pm.”
“2pm works.”
“Would you like to add any enhancements?”

Back and forth, seven turns, just to book a massage. Functional, but a bit clunky.

Now imagine the same desired outcome, but the agent responds with a visual interface: a photo of your spa, a menu of massage options with descriptions and pricing, a calendar highlighting tomorrow’s available slots, optional add-ons with checkboxes, and a “Book Now” button.

One interaction. Elegant. Simple. Beautiful. On-brand. Done.

This is the natural evolution of Agentic Commerce.

The Shift from Text to Visual

In December, Google announced A2UI (Agent-to-User Interface), an open-source project. Don’t get hung up on the technical details; the signal matters more: the same kind of standardization I’ve been talking about for data (schema.org) is emerging for AI agents to generate rich, visual, interactive interfaces on the fly.

Instead of describing your property in paragraphs, agents will be able to present your assets visually. Photo carousels. Interactive calendars. Pricing tables. Booking widgets. All generated dynamically based on the conversation.
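To make the idea concrete, here is a sketch of what “generated dynamically” means in practice: instead of replying with prose, the agent emits a declarative description of a UI that the client renders. To be clear, the field names below are invented for illustration only — A2UI’s actual wire format is documented at a2ui.org — and the image URL and time slots are placeholders.

```python
import json

# Illustrative only: an agent answers the spa-booking request with a
# declarative UI description instead of a paragraph of text.
# These field names are NOT the real A2UI schema (see a2ui.org).
spa_booking_ui = {
    "type": "card",
    "title": "Hot Stone Massage",
    "image": "https://example.com/spa/hot-stone.jpg",  # placeholder asset URL
    "children": [
        {"type": "calendar", "label": "Tomorrow's openings",
         "slots": ["10:00", "11:30", "14:00", "16:00"]},
        {"type": "checkbox_group", "label": "Enhancements",
         "options": ["Aromatherapy", "Hot towel", "Scalp treatment"]},
        {"type": "button", "label": "Book Now", "action": "book_spa"},
    ],
}

# The client app receives this JSON and renders the photo, calendar,
# checkboxes, and button — the seven-turn text exchange collapses into one.
payload = json.dumps(spa_booking_ui, indent=2)
print(payload)
```

The point of the sketch is the shape of the interaction, not the schema: one structured message carries everything the guest needs to see and act on.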

This isn’t something that’s in the far-off future. Google is already integrating A2UI into Gemini Enterprise. Flutter’s GenUI SDK uses it today. The major players are building the rails for visual AI interaction right now.

What This Means for Your Property

Think about how a guest might plan a stay at your resort today. They ask ChatGPT or Perplexity for recommendations, get a text description, then have to navigate to your website to actually see anything and book.

Tomorrow? That same AI conversation could present your property directly. Images of your rooms. A real-time availability calendar. Dining options with reservation times. Activity offerings with instant booking. The guest never leaves the conversation. The conversation becomes the visual shopping experience.

Let’s say a guest is planning an anniversary weekend. The agent might show:

  • Suite options with photos and pricing, filtered to availability for their dates
  • A spa couples package with description and booking button
  • Dinner reservations at your signature restaurant with available times
  • A “Build Your Package” interface that bundles it all together

The experience shifts from “let me tell you about this resort” to “let me show you what’s possible and let you book it right here.”

Your Data Determines Your Presentation

Here’s the competitive reality you need to understand: the agent can only show what it has access to.

Properties with rich, structured data (high-quality images, detailed amenity descriptions, real-time availability feeds, well-categorized offerings) will get compelling visual presentations. Properties without that data? They get text summaries.

Same AI. Same guest intent. Radically different experience.

If your spa services exist only as a PDF on your website, the agent can’t build an interactive booking widget. If your room photos aren’t accessible and properly tagged, the agent can’t create a visual room selector. If your availability data isn’t exposed through modern APIs, the agent shows static descriptions instead of live booking options.

The quality of your data directly determines the quality of your AI-powered guest experience. Full stop.

You’re Already Preparing

Here’s the good news. If you’ve been working on structured data for AI visibility, or preparing your systems for agentic commerce, you’re already building the foundation that enables rich visual presentation.

The schema.org markup that helps AI engines find you? It also provides the structured property data agents need to present you well. The real-time availability feeds that enable conversational booking? They also power dynamic visual interfaces. The organized, accessible content that improves your AEO? It gives agents the raw material to show rather than tell.
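As a small, concrete example of that dual-purpose markup, here is a minimal JSON-LD sketch for a resort and one spa service. The vocabulary (`Resort`, `makesOffer`, `Offer`, `amenityFeature`) is standard schema.org; the property name, image URL, and price are placeholders for illustration.

```python
import json

# Minimal schema.org JSON-LD sketch: the same structured data that helps
# AI engines find a property also gives agents the raw material (names,
# images, offerings, prices) to present it visually.
# Property details below are placeholders, not a real resort.
hotel = {
    "@context": "https://schema.org",
    "@type": "Resort",
    "name": "Example Resort & Spa",
    "image": ["https://example.com/photos/lobby.jpg"],  # placeholder
    "amenityFeature": [
        {"@type": "LocationFeatureSpecification",
         "name": "Full-service spa", "value": True},
    ],
    "makesOffer": [
        {
            "@type": "Offer",
            "itemOffered": {
                "@type": "Service",
                "name": "Hot Stone Massage",
                "description": "90-minute hot stone massage",
            },
            "priceSpecification": {
                "@type": "PriceSpecification",
                "price": "250.00", "priceCurrency": "USD",
            },
        },
    ],
}

# Serialized, this is what would sit in a <script type="application/ld+json">
# tag on the property's website.
print(json.dumps(hotel, indent=2))
```

Each field pulls double duty: an AI search engine reads it for visibility, and a UI-generating agent can turn the same image, description, and price into a card with a booking button.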

This is the compound value of foundation-first thinking. Each investment in data structure and accessibility pays dividends across multiple emerging capabilities. You’re not starting from scratch. You’re building on work that matters.

Awareness, Not Panic

A2UI is currently at version 0.8. This is early-stage infrastructure. You don’t need to implement anything today. (Those curious about the technical details can explore a2ui.org.)

But you should understand where things are heading.

The trajectory is clear. AI interactions are becoming visual. The properties that will thrive are those with data rich enough and accessible enough to power these experiences. The ones that will struggle? Those still treating their property information as static website content rather than structured, queryable, API-accessible data.

The question isn’t whether AI will start showing instead of telling. It’s whether your property will be ready when it does.

The Bottom Line

We’ve talked about making sure AI can find you. We’ve talked about enabling AI to transact for you. Now we’re seeing the infrastructure for AI to present you visually, compellingly, in the flow of conversation.

The properties that own their data, structure it thoughtfully, and make it accessible will be the ones that shine in this new reality. Everyone else? Described in a paragraph while their competitors get the photo carousel.

The agents are learning to show. Make sure they have something worth showing.


If you’re not already receiving insights on AI and transformation in luxury hospitality, subscribe to the Luxe Élevé newsletter for weekly perspective on what’s changing and why it matters.

Frequently Asked Questions

What is A2UI?

A2UI (Agent-to-User Interface) is an open-source project from Google that enables AI agents to generate visual, interactive interfaces during conversations rather than responding with text only.

How does A2UI affect luxury hospitality?

A2UI enables AI agents to present hotel and resort offerings visually, including photo carousels, availability calendars, and booking widgets, directly within AI conversations instead of describing them in text.

What data do hotels need for AI visual interfaces?

Properties need structured, accessible data including high-quality images, detailed amenity descriptions, real-time availability feeds, and well-categorized offerings to enable rich AI-generated visual presentations.

Is A2UI ready for implementation today?

A2UI is currently at version 0.8 and is early-stage infrastructure. Properties don't need to implement it today but should ensure their data strategy supports the direction AI interfaces are heading.

How does structured data help with AI visual presentation?

The same structured data (schema.org markup, JSON-LD, organized content) that improves AI visibility also provides the foundation for agents to generate compelling visual interfaces for your property.