Build with Gemini in Vertex AI Studio and Deploy as an App

Vertex AI Studio tour: prompt design, collaboration, prompt optimization agents, and one-click Deploy as App to Cloud Run. Official Google Cloud resources and media.

3 min read By Jatinder (Jay) Bhola

Vertex AI Studio: Where Prompts Become Apps

Vertex AI Studio is the Google Cloud console environment for building and tuning generative AI experiences — prompts, chat apps, and multimodal flows. It also connects directly to Cloud Run via “Deploy as App,” so you can go from a prompt to a shareable web app without leaving the console.

This post is based on official Google Cloud content only: the Build with Gemini in Vertex AI Studio blog and the Create gen AI apps in 60 seconds post. Links point to Google sources; media references use official demos and docs.

Vertex AI Studio: prompt → Deploy as App → Cloud Run URL


What You Can Do in Vertex AI Studio

- Prompt design: system instructions, few-shot examples, and parameters such as temperature and top_p
- Multiple models: the Gemini family and other models available in Vertex AI
- Collaboration: share and iterate with your team (see the blog for current capabilities)
- Prompt optimization: use agents as tools to suggest or refine prompts
- Code generation: generate code from your design, with one-click deploy to Cloud Run (“Deploy as App”)
- GitHub / Cloud Run: deeper integration for code and deployment (see the official roadmap in the blog)
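The prompt-design controls above (system instructions, few-shot examples, temperature, top_p) map directly onto the request body that Vertex AI's generateContent API accepts. A minimal sketch of that mapping, assuming the public REST body shape; the prompt text and few-shot pair here are illustrative only:

```python
# Sketch: assemble a generateContent-style request body (a plain dict),
# mirroring what the Studio prompt designer configures for you.
# Field names follow the Vertex AI generateContent REST shape; the
# example prompt text is made up.

def build_request(system_instruction, few_shot, user_text,
                  temperature=0.2, top_p=0.95):
    """Build the request body from Studio-style prompt-design inputs."""
    contents = []
    for user_msg, model_msg in few_shot:  # few-shot examples become prior turns
        contents.append({"role": "user", "parts": [{"text": user_msg}]})
        contents.append({"role": "model", "parts": [{"text": model_msg}]})
    contents.append({"role": "user", "parts": [{"text": user_text}]})
    return {
        "system_instruction": {"parts": [{"text": system_instruction}]},
        "contents": contents,
        "generationConfig": {"temperature": temperature, "topP": top_p},
    }

body = build_request(
    "You are a terse release-notes assistant.",
    few_shot=[("Summarize: fixed login bug", "Fix: login bug resolved.")],
    user_text="Summarize: added dark mode",
)
print(body["generationConfig"])  # {'temperature': 0.2, 'topP': 0.95}
```

In Studio you never write this body by hand; the point is that each UI control corresponds to a concrete, inspectable field, which is what “generate code from your design” emits for you.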

The “Deploy as App” Flow (With Media)

  1. Design in Studio — Build your prompt (and optionally multimodal input). Test until the behavior is right.
  2. Click “Deploy as App” — Vertex AI packages the experience into an interactive web UI (Gradio-based) and hands off to Cloud Run.
  3. Configure — Choose public or authenticated access.
  4. Share — Use the generated URL to share the app with stakeholders or testers.
  5. Iterate — Change the prompt in Studio and redeploy to update the app.
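The console performs steps 2–4 for you, but the handoff is conceptually a standard Cloud Run deployment. A hedged sketch of the equivalent manual path with the gcloud CLI, assuming a containerizable app in the current directory; the service name, region, and project ID are placeholders:

```shell
# Deploy app source to Cloud Run; Cloud Build containerizes it for you.
# "--allow-unauthenticated" makes the URL public (the access choice in
# step 3); use "--no-allow-unauthenticated" to require IAM auth instead.
gcloud run deploy my-studio-app \
  --source . \
  --region us-central1 \
  --project my-project-id \
  --allow-unauthenticated

# Print the generated service URL to share with testers (step 4).
gcloud run services describe my-studio-app \
  --region us-central1 \
  --project my-project-id \
  --format "value(status.url)"
```

Redeploying under the same service name rolls out a new revision at the same URL, which is what makes the iterate-and-redeploy loop in step 5 cheap.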

Official demo: for image generation, you can try Nano Banana 2 in the console via Vertex AI Studio — Multimodal (Nano Banana 2). This illustrates the kind of experience you can then “Deploy as App.”


Where to Find Official Media and Demos

When you write your own blog or portfolio, you can embed or link to official screenshots from the blog, or record your own walkthrough of the public console.


Inference-as-a-Service with Cloud Run

Beyond “Deploy as App,” you can run custom inference (for example, open models) on Cloud Run and call it from your app; the Unlock Inference-as-a-Service with Cloud Run and Vertex AI blog describes this pattern. It completes the “build in Studio, run in production” story: prototype in Vertex AI Studio, then move to Cloud Run (or Vertex AI endpoints) for scale and control.
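As a deliberately hypothetical sketch of the calling side: the service URL, the /generate path, and the payload fields below are assumptions standing in for whatever your Cloud Run inference service actually exposes, and the token is a dummy. In a real setup you would mint an ID token for the service (for example with Google's auth libraries), since authenticated Cloud Run services check a bearer ID token via IAM.

```python
import json
import urllib.request

# Hypothetical Cloud Run service URL (placeholder, not a real endpoint).
SERVICE_URL = "https://my-inference-service-abc123-uc.a.run.app"

def make_infer_request(prompt, token):
    """Build an authenticated POST request to a Cloud Run inference service.

    The "/generate" path and the payload shape are assumptions about the
    deployed service's API; the bearer token is how Cloud Run IAM auth works.
    """
    payload = json.dumps({"prompt": prompt, "max_tokens": 128}).encode()
    return urllib.request.Request(
        f"{SERVICE_URL}/generate",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # ID token for IAM-gated services
        },
    )

req = make_infer_request("Hello", token="dummy-id-token")
print(req.get_method(), req.full_url)
```

The request is only constructed here, never sent; swapping the dummy token for a real ID token and urlopen-ing the request is the remaining step in a credentialed environment.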


References (Google sources only)