Experiment #1.
Challenge:
UX designers spend significant time creating wireframes and comps — from scratch or using established design systems. I’ve worked extensively in Figma and the Expedia Design System (EGDS). Even with system efficiencies, friction remains.
Theory:
What if I skip layered files and components and move directly from a screenshot to a viable proof of concept?
A “build-first” approach.
Assumptions:
The existing page (screenshot) is a shared template.
Content is modular and section-based.
AI Tools:
• Google NotebookLM
• ChatGPT
• UX Pilot (best for prototyping with an existing design system)
• Figma
Persona-driven scenarios:
Context: Booked lodging
Traveler types:
Business
Family
Goal: Introduce a dynamic section in under two hours that adapts to traveler priorities at check-in. (A rough data sketch follows this list.)
Business traveler: Wi-Fi, early check-in
Family traveler: Pool hours, family-friendly dining
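To ground the goal, here is a minimal sketch of the content model such a section might consume, assuming a simple traveler-type key. Every name and string here is a hypothetical placeholder I made up for illustration, not an EGDS component or real Expedia data.

```typescript
// Minimal sketch of a persona-keyed content model for the dynamic
// section. Names and copy are illustrative placeholders, not EGDS.
type TravelerType = "business" | "family";

interface PriorityModule {
  id: string;     // stable key for rendering
  label: string;  // short heading shown in the section
  detail: string; // supporting line surfaced at check-in
}

const checkInPriorities: Record<TravelerType, PriorityModule[]> = {
  business: [
    { id: "wifi", label: "Wi-Fi", detail: "Network and password available at check-in" },
    { id: "early-checkin", label: "Early check-in", detail: "Request arrival before standard check-in time" },
  ],
  family: [
    { id: "pool-hours", label: "Pool hours", detail: "Daily open and close times" },
    { id: "family-dining", label: "Family-friendly dining", detail: "On-site options with kids' menus" },
  ],
};

export { checkInPriorities };
export type { TravelerType, PriorityModule };
```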
Start with a standard screenshot.
Add a simple prompt.
I exported a post-booking lodging screenshot from Figma, brought it into UX Pilot, and entered a straightforward prompt: “Post booking lodging page in Expedia.”
Nothing complex. No system instructions. Just a clean input.
The output did not replicate the screenshot. Instead, it generated its own interpretation of what a post-booking lodging page should look like — reorganizing content, reshaping hierarchy, and introducing structural changes.
Figma screenshot
UX Pilot output
Create Only What I Need
After four or five rounds of generating and regenerating, I realized I was solving the wrong problem. I didn’t need to recreate the entire page. I only needed to prototype the dynamic section.
Trying to reproduce the full layout introduced unnecessary noise — header variations, spacing shifts, component inconsistencies. Each regeneration moved further from the goal.
The actual question was narrower: how many variations of a dynamic module could adapt to the existing screen?
Once I constrained the scope, the work moved faster. Prompts became clearer. Outputs became more usable. Instead of rebuilding the page, I focused on inserting and testing a flexible section that could surface business-focused priorities (Wi-Fi, early check-in) or family-focused priorities (pool hours, dining) without disrupting the surrounding template.
Various UX Pilot outputs
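In component terms, “insert without disrupting” amounts to a self-contained slot in the page template. Here is a hedged sketch of that idea, reusing the hypothetical content model from the earlier snippet; none of this is the production Expedia implementation.

```typescript
import React from "react";
// Reuses the hypothetical content model sketched earlier,
// assumed saved as ./checkInPriorities.
import { checkInPriorities, type TravelerType } from "./checkInPriorities";

// The dynamic section as a drop-in slot: the surrounding template
// never changes; only this component's content varies by persona.
export function DynamicCheckInSection({ traveler }: { traveler: TravelerType }) {
  return (
    <section aria-label="Your check-in priorities">
      {checkInPriorities[traveler].map((m) => (
        <article key={m.id}>
          <h3>{m.label}</h3>
          <p>{m.detail}</p>
        </article>
      ))}
    </section>
  );
}

// Usage inside the existing post-booking page, e.g.:
// <DynamicCheckInSection traveler="family" />
```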
Generate Variants, Not Pages
After narrowing the scope, I prompted UX Pilot to generate only the business traveler section variants, followed by family variants. The focus was the dynamic module — not the full page.
Even with a request for visual consistency, the outputs shifted by audience. Business variants leaned structured and utilitarian. Family variants felt warmer and more expressive.
What stood out were the labels UX Pilot provided — “Notification/Alert Style,” “Minimalist List Item,” and purpose notes like “Best for immediate, critical information on arrival.” The tool wasn’t just generating UI; it was signaling intent.
I also generated large- and small-screen versions to test responsiveness. In some cases prioritization shifted, reinforcing how audience and screen size influence hierarchy; a rough sketch of that behavior follows below.
By isolating the module, I could evaluate those decisions without the noise of the full layout.
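One way to express that reprioritization in code is a hook that trims the module list on small screens so the highest-priority items surface first. This is a sketch under assumptions: the single 600px breakpoint and the module shape are mine, not EGDS tokens.

```typescript
import { useEffect, useState } from "react";
import type { PriorityModule } from "./checkInPriorities";

// Sketch: surface only the top-priority modules on small screens,
// mirroring how the generated small-screen variants reprioritized.
// The breakpoint below is an assumption, not an EGDS token.
export function useVisibleModules(
  modules: PriorityModule[],
  maxOnSmall = 2
): PriorityModule[] {
  const query = "(max-width: 599px)";
  const [isSmall, setIsSmall] = useState(() => window.matchMedia(query).matches);

  useEffect(() => {
    const mq = window.matchMedia(query);
    const onChange = (e: MediaQueryListEvent) => setIsSmall(e.matches);
    mq.addEventListener("change", onChange);
    return () => mq.removeEventListener("change", onChange);
  }, []);

  return isSmall ? modules.slice(0, maxOnSmall) : modules;
}
```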
Business traveler variants
Family traveler variants
Explore the Action Layer
I ran a similar prompt for the top action buttons — six variations in total.
The goal was to test different visual identities for the same set of actions: hierarchy, grouping, emphasis, and density.
Some versions leaned primary-CTA heavy. Others distributed weight evenly or introduced segmented controls and icon-driven layouts.
The structure stayed constant. The expression changed.
Action buttons variants
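“Structure constant, expression changed” is essentially a variant prop: one fixed action list, several visual treatments. A hypothetical sketch of that pattern, with placeholder actions, variant names, and class names rather than anything from the actual outputs:

```typescript
import React from "react";

// One fixed set of actions; the variant only changes presentation.
type ActionVariant = "primary-heavy" | "even-weight" | "segmented" | "icon-driven";

const actions = [
  { id: "directions", label: "Get directions" },
  { id: "message", label: "Message property" },
  { id: "change", label: "Change reservation" },
];

export function ActionRow({ variant }: { variant: ActionVariant }) {
  return (
    <nav className={`action-row action-row--${variant}`}>
      {actions.map((action, i) => (
        <button
          key={action.id}
          // Only the primary-heavy variant singles out a lead CTA;
          // the other variants distribute emphasis via CSS alone.
          className={variant === "primary-heavy" && i === 0 ? "btn btn--primary" : "btn"}
        >
          {action.label}
        </button>
      ))}
    </nav>
  );
}
```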
Back into Figma
UX Pilot rebuilds layers directly in Figma. Some typography shifts on import, but the structure holds.
For this experiment, I pulled the generated variants into Figma-ready screens and integrated them into the existing layout. The handoff between tools was straightforward.
AI accelerated the exploration, and Figma gave me control over the final ideation.
Final Business Traveler comp in Figma
Final Family Traveler comp in Figma
Was this a successful experiment?
Yes. Within the time constraint, I was able to take a screenshot of a system-based page and riff on a dynamic content section. With more rigor, I could have pushed closer to a unified visual identity — but the core objective was met.
What surprised me?
Focusing on a single component was enough. Narrowing the scope saved time, generations, and credits.
What did I not expect?
Not all AI prototyping tools behave the same. For this exercise, I learned that UX Pilot handles screenshots well and integrates smoothly with Figma. I also hadn’t anticipated how distinct the “flavor” of each AI’s responses would be.
What would I have done differently?
Prompting takes intention. When the agent drifted, it slowed the process. Next time, I’ll define the new element first, then expand to the full screen. I’ll also be more deliberate about what I’m asking for — and how I ask it.
About AI Sandbox & Experiment #1
I plan to complete one to three experiments per week, depending on scope and time. The first endeavor is a starting point — not a one-off.
The goal is consistency over intensity. Each experiment isolates a specific design question and tests how AI tools can accelerate, challenge, or reshape the workflow. I’ll revisit similar problems across different AI platforms to compare outputs, assumptions, and integration patterns.
My goal isn’t just to understand these tools, but to use them in practical scenarios so I can speak about them with clarity and confidence. Hands-on application helps me see where they’re effective, where they fall short, and how they fit into real workflows. It also keeps me current as the tools and expectations continue to evolve.