2026 Public Art Futures Lab · Artist-in-Residence

storyATL

Community Storytelling Platform

Atlanta Downtown (ATL DTN) Residency — Projection Based
James McKay  |  Atlanta, GA

1. Project Overview

storyATL is a platform where Atlanta residents record, preserve, and explore stories about community, place, and their lives in the city. Personal memories, family histories, everyday routines, local change, local lore. People contribute through in-person booths at events or through a web app anywhere. Every story enters a structured repository with transcripts, location data, and tags. A lightweight knowledge base connects stories to landmarks, neighborhoods, corridors, and key dates.

From that archive, the system generates new outputs. Story cards on top of animations. Audio documentary pages. Curated map walks. Projection-mapped visuals. Every output links back to its source stories with attribution, timecoded excerpts, and traceable citations. Consent, privacy, and visibility controls are built into every step.

The primary outputs for ATL DTN are screen-based and projection-mapped installations using generative visuals drawn from the story archive. The MAP Rover serves double duty as a mobile story booth and a projection surface for downtown public spaces. An augmented reality layer lets people point a phone at a downtown location and see stories from that place. Atlanta residents, workers, and visitors see their stories reflected back on the surfaces they walk past every day.

2. Why Atlanta, Why Now

Atlanta is rich in history, culture, and community. Our public spaces reflect that vibrancy. The stories are already there. They live in conversations on Peachtree, in dinner crowds in East Atlanta Village, in outings with friends along the BeltLine and the Chattahoochee, in the barbershops, churches, and parks. They circulate, shift, and then might disappear because there is no system to hold them.

storyATL makes that vibrancy visible. Not as a mural or a sculpture, but as infrastructure that collects, preserves, and resurfaces community knowledge in the spaces where people already are. Downtown plazas. Building facades. The MAP Rover parked at a community event. The stories get captured, transcribed, tagged to their locations, and projected back onto downtown surfaces the same day.

The corridors tell you where the knowledge moves. Peachtree Street is one long argument about what “Atlanta” means. Sweet Auburn, Vine City, Summerhill, Old Fourth Ward: each has a version of its own history that no single archive holds. The BeltLine now connects communities that used to be separated by rail infrastructure. storyATL gives those stories a persistent, findable home and a way to show up in the physical spaces where they happened.

This is not decoration added afterward. It is part of the public space that reflects and grows with the people who use it.

3. The Platform

3.1 Story Capture: Booths and Web App

Atlanta residents record stories through physical booths at the Public Art Futures Lab, event booths, or a web app at home. All three routes use the same capture pipeline and feed the same repository.

Booth Flow

  • Start
  • Consent
  • Record
  • Title/Tags
  • Location
  • Visibility
  • Submit
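
A minimal sketch of that flow as an ordered state machine in TypeScript; step names are illustrative, and the real UI would gate everything after the consent step:

```ts
// Booth flow as an ordered sequence (step names mirror the list above; illustrative only).
const boothSteps = ["start", "consent", "record", "titleTags", "location", "visibility", "submit"] as const;
type BoothStep = (typeof boothSteps)[number];

// Consent gates everything after it: no recording without an affirmative consent step.
function nextStep(current: BoothStep): BoothStep | null {
  const i = boothSteps.indexOf(current);
  return i < boothSteps.length - 1 ? boothSteps[i + 1] : null;
}
```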

Capture Modes

  • In PAFL Booth
  • Using web app
  • Video (1–5 min)
  • Audio-only
  • Photo with caption
  • Interview mode for recording someone else
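
Browser capture can run on getUserMedia and MediaRecorder, both already in the stack (Section 6). A minimal sketch, with the upload step omitted and the 1–5 minute cap enforced by a timer:

```ts
// Sketch: browser capture with getUserMedia + MediaRecorder; upload handling is out of scope here.
async function recordStory(maxMs = 5 * 60_000): Promise<Blob> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  return new Promise((resolve) => {
    recorder.onstop = () => {
      stream.getTracks().forEach((t) => t.stop()); // release camera and mic
      resolve(new Blob(chunks, { type: "video/webm" }));
    };
    recorder.start();
    setTimeout(() => recorder.stop(), maxMs); // enforce the 1–5 minute cap
  });
}
```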

Interview Kit

  • Gentle prompts
  • “Told by” attribution
  • Optional timer and question cards
  • Built for a grandchild interviewing a grandparent

Design Principles

  • Multilingual capture and transcription (via Whisper)
  • Simple and large UI targets
  • Captioning
  • Headset support
  • Consent-first throughout

Prompt examples

Tell a story that could only happen in Atlanta.
Describe a place you return to, and why.
Who helped you feel at home here?
What’s changed, and what has stayed the same?

3.2 Story Repository and Knowledge Base

Repository: structured storage for media files, transcripts, tags, and geolinks. Every story has a transcript with segment-level timecodes for citation.

Knowledge base: wiki-style pages for landmarks, institutions, neighborhood names, key dates, and context notes. These pages make stories discoverable without editorializing them.

Steward role: a small team handles tagging consistency, map placement, sensitive content escalation, and takedown requests. The goal is harm reduction and findability, not editorial approval of voices.
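
A minimal schema sketch for the repository, assuming the Neon Postgres + Drizzle ORM stack listed in Section 6; table and column names are illustrative, not final:

```ts
// Sketch of the story repository schema in Drizzle (illustrative names, not the final data model).
import { pgTable, uuid, text, real, integer, timestamp, jsonb } from "drizzle-orm/pg-core";

export const stories = pgTable("stories", {
  id: uuid("id").primaryKey().defaultRandom(),
  title: text("title").notNull(),
  mediaUrl: text("media_url").notNull(),          // signed Cloudflare R2 URL
  visibility: text("visibility").notNull(),       // "public" | "limited" | "private"
  locationPrecision: text("location_precision"),  // "exact" | "approximate" | "neighborhood" | "hidden"
  lat: real("lat"),
  lng: real("lng"),
  tags: jsonb("tags").$type<string[]>(),          // place, theme, and time tags
  createdAt: timestamp("created_at").defaultNow(),
});

// Segment-level timecodes make every excerpt citable back to the recording.
export const transcriptSegments = pgTable("transcript_segments", {
  id: uuid("id").primaryKey().defaultRandom(),
  storyId: uuid("story_id").references(() => stories.id).notNull(),
  startMs: integer("start_ms").notNull(),
  endMs: integer("end_ms").notNull(),
  text: text("text").notNull(),
});
```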

3.3 Geomapping and Location Tagging

Precision levels

Exact (pin at precise point), approximate (randomized radius), neighborhood-only (area tag, no pin), hidden (searchable but not mapped).

Privacy defaults

Public stories get approximate precision unless the contributor chooses exact. Minors and sensitive topics default to moderated or hidden.

Tag model

Place tags (neighborhoods, landmarks, corridors), theme tags (work, migration, food, school, music), time tags (decade or year range).

Curation tools

Stewards can suggest improved map placement or tags without altering the story itself. Contributors approve or decline.
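
For the approximate precision level, one plausible implementation jitters the exact pin within a fixed radius before anything public is stored or rendered. A sketch, with the radius as an assumed parameter:

```ts
// Sketch: jitter an exact coordinate within a radius (meters) for "approximate" precision.
// Draws uniformly over the disk; the default radius is an assumption, not a spec.
function jitterCoordinate(lat: number, lng: number, radiusMeters = 250): { lat: number; lng: number } {
  const r = radiusMeters * Math.sqrt(Math.random()); // sqrt keeps density uniform over the disk
  const theta = Math.random() * 2 * Math.PI;
  const dLat = (r * Math.cos(theta)) / 111_320;      // ~meters per degree of latitude
  const dLng = (r * Math.sin(theta)) / (111_320 * Math.cos((lat * Math.PI) / 180));
  return { lat: lat + dLat, lng: lng + dLng };
}
```

The jittered coordinate, not the original, is what the public map ever sees; the exact point stays private with the contributor's record.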

3.4 Story Generator

Tools for turning existing stories into new formats. Source links, attribution, and location context stay intact.

Template mode

Captioned story card video (Remotion), audio documentary web page (player, transcript, photos, map pin), map walk playlist (curated route).
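
As a sketch of the Remotion story-card template: a minimal composition that fades in an excerpt and its attribution. Component and prop names are illustrative:

```tsx
// Sketch: a Remotion story-card composition (illustrative; a designer's template pack would replace this).
import React from "react";
import { AbsoluteFill, useCurrentFrame, interpolate } from "remotion";

export const StoryCard: React.FC<{ excerpt: string; attribution: string }> = ({ excerpt, attribution }) => {
  const frame = useCurrentFrame();
  // Fade the card in over the first 30 frames, then hold.
  const opacity = interpolate(frame, [0, 30], [0, 1], { extrapolateRight: "clamp" });
  return (
    <AbsoluteFill style={{ backgroundColor: "#111", justifyContent: "center", padding: 80 }}>
      <p style={{ color: "#fff", fontSize: 56, opacity }}>{excerpt}</p>
      <p style={{ color: "#ffd166", fontSize: 32, opacity }}>{attribution}</p>
    </AbsoluteFill>
  );
};
```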

Prompt mode

User describes what they want. The system produces structured output and cites which story segments were used.

Citation guardrails

Every generated output links back to the original story page. Transcript excerpts are traceable by timecode. No voice cloning or impersonation. Summarization is labeled as such. The Anthropic API handles generation. It does not replace human voices or editorial judgment.
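
One way to make those guardrails enforceable is a structured output contract that rejects any generation arriving without timecoded citations. A sketch with illustrative types:

```ts
// Sketch of the structured output contract for generated pieces (types are illustrative).
type SegmentCitation = {
  storyId: string;
  startMs: number; // timecode into the source recording
  endMs: number;
};

type GeneratedOutput = {
  kind: "story_card" | "audio_doc_page" | "map_walk";
  body: string;                 // generated text, labeled as summary where applicable
  citations: SegmentCitation[]; // every output must cite source segments
};

// Reject any generation that lacks traceable, well-formed citations.
function validateOutput(out: GeneratedOutput): boolean {
  return out.citations.length > 0 && out.citations.every((c) => c.endMs > c.startMs);
}
```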

3.5 Interactive Map

Web and large-screen modes. Pins and clusters, place search, filters by theme, decade, format, and neighborhood. Curated walks for mobile browsing and event display.
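
A minimal sketch of the pin-and-cluster layer using Mapbox GL JS (already in the stack); the GeoJSON endpoint and style choices are illustrative:

```ts
// Sketch: clustered story pins with Mapbox GL JS (source/layer ids and colors are illustrative).
import mapboxgl from "mapbox-gl";

mapboxgl.accessToken = process.env.NEXT_PUBLIC_MAPBOX_TOKEN!; // env var name is an assumption

const map = new mapboxgl.Map({
  container: "map",
  style: "mapbox://styles/mapbox/dark-v11",
  center: [-84.39, 33.755], // downtown Atlanta
  zoom: 13,
});

map.on("load", () => {
  map.addSource("stories", {
    type: "geojson",
    data: "/api/stories.geojson", // hypothetical endpoint serving public, mappable pins
    cluster: true,
    clusterRadius: 50,
  });
  // Cluster circles where pins are dense...
  map.addLayer({
    id: "story-clusters",
    type: "circle",
    source: "stories",
    filter: ["has", "point_count"],
    paint: { "circle-radius": 18, "circle-color": "#e4572e" },
  });
  // ...and individual pins once zoomed in.
  map.addLayer({
    id: "story-pins",
    type: "circle",
    source: "stories",
    filter: ["!", ["has", "point_count"]],
    paint: { "circle-radius": 6, "circle-color": "#ffd166" },
  });
});
```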

Event display: featured loop on autoplay, browse mode with touch-first navigation, projection mapping via MadMapper. Two projection modes: pre-rendered Remotion loops (reliable) and live interactive display routed into MadMapper (advanced, for installations with gesture navigation).

Downtown activation: the interactive map is projected during downtown events. Passersby browse stories on the touchscreen while the Rover is stationed at events and public spaces.

3.6 Generative Projection Layer

Live visuals drawn from transcript excerpts, place tags, and themes. Video, photography, typography, shapes, and collage projected in real time via MadMapper. Displayed excerpts include attribution. QR codes link back to source stories so viewers can discover full recordings and the storytelling repo.

The layer runs two ways: as pre-rendered loops for reliable event display, or as a live interactive installation where gesture input or touch controls shift the content in real time.

4. ATL DTN Deployment

4.1 Generative Projection

Live or pre-rendered visuals from the story archive, projected onto downtown surfaces via MadMapper. New stories enter through booth capture or the web app. The generative layer pulls from the live archive. The projection content grows as the community contributes. Capture, transcribe, and project within one event cycle.
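
That one-event-cycle pipeline maps naturally onto the Inngest background jobs already in the stack. A sketch, with the three step helpers standing in for real implementations:

```ts
// Sketch: capture-to-projection as an Inngest job (event names and helpers are hypothetical).
import { Inngest } from "inngest";

// Hypothetical helpers standing in for the real pipeline steps.
declare function transcribe(mediaUrl: string): Promise<string>;
declare function indexStory(storyId: string, transcript: string): Promise<void>;
declare function refreshProjectionFeed(): Promise<void>;

const inngest = new Inngest({ id: "storyatl" });

export const processStory = inngest.createFunction(
  { id: "process-story" },
  { event: "story/submitted" },
  async ({ event, step }) => {
    const transcript = await step.run("transcribe", () => transcribe(event.data.mediaUrl));
    await step.run("tag-and-index", () => indexStory(event.data.storyId, transcript));
    // The generative layer pulls from the live archive, so the feed refresh is the last step.
    await step.run("refresh-projection-feed", () => refreshProjectionFeed());
  }
);
```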

4.2 MAP Rover Integration

The Rover serves as both a mobile story booth and a projection surface. Capture and projection from the same vehicle. The Surface Hub handles interactive map browsing during the day; it switches to projection output for evening installations. One vehicle, two modes, same archive.

4.3 Augmented Reality Layer

The interactive map doubles as a phone-based AR layer. Point a phone at a downtown location, see stories from that place overlaid on the screen. Same geomapping data, rendered through Mapbox.

4.4 Gesture Navigation

For large-screen installations, MediaPipe tracks hand and body landmarks for hands-free browsing. Open palm to select. Swipe to advance. Step closer to zoom into a neighborhood cluster.
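
A sketch of the gesture loop using MediaPipe's gesture recognizer; the model path and the gesture-to-action mapping are assumptions, though "Open_Palm" is one of MediaPipe's canned gesture categories:

```ts
// Sketch: hands-free browsing with MediaPipe Tasks Vision (mapping to actions is illustrative).
import { FilesetResolver, GestureRecognizer } from "@mediapipe/tasks-vision";

async function initGestures(video: HTMLVideoElement, onGesture: (name: string) => void) {
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/wasm"
  );
  const recognizer = await GestureRecognizer.createFromOptions(vision, {
    baseOptions: { modelAssetPath: "gesture_recognizer.task" }, // bundled model file
    runningMode: "VIDEO",
  });
  const loop = () => {
    const result = recognizer.recognizeForVideo(video, performance.now());
    const top = result.gestures[0]?.[0];
    if (top) onGesture(top.categoryName); // e.g. "Open_Palm" -> select
    requestAnimationFrame(loop);
  };
  loop();
}
```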

4.5 Demo Deliverable

A working capture-to-projection pipeline for public demonstration. A complete vertical slice: booth or web capture, stories transcribed and tagged, generative visuals projected that same day or evening.

5. Community Engagement

Who Contributes

Atlanta residents and families are the primary storytellers. They record through booths, the web app, and interview kit sessions.

Local figures such as longtime community members, educators, organizers, faith leaders, and business owners are invited to contribute.

Commissioned artists and designers build specific outputs from the archive.

Atlanta-Focused Engagement

Pop-up story capture at Atlanta events and public spaces. Recording booths at community gatherings, festivals, farmers markets, and public programming. People walk up, record a story, and see it projected on a nearby surface that same evening. The feedback loop is immediate: contribute a story, watch it become part of the downtown landscape.

Agentic Community Hours at the Futures Lab studio at Underground Atlanta. Recurring open sessions where creators, artists, and local business owners bring a workflow problem or project idea. Creative Context helps them scope it, pick tools, and start building. Participants leave with working prototypes and configured tools.

Community Contribution Models

Designer-built template packs

A local graphic or motion designer creates visual templates for story cards rendered with Remotion

Printable posters and cards with QR codes linking to full stories

Neighborhood timeline walks

Curated by Fulton County librarians or local historians

Interactive map walk, web exhibit page, and event playlist

A walk through Sweet Auburn from 1978 to 1999, linking community milestones to personal accounts

Youth media workshops

Students interview family members, local figures, and influencers using the interview kit

Themed collections like "First jobs," "Songs we grew up with," "A place I feel safe."

Stories presented at a community screening night

Gesture story portal

A movement artist builds gesture navigation for large-screen installations

Visitors browse the story map using motion capture controls … or by dancing

Phased Engagement

Engagement Phase 0

PAFL identifies 3–5 community partners and recruits the first 10–20 storytellers.

Engagement Phase 1

Pilot with small group. One “story drive” week of concentrated capture.

Summer Engagement

Collect (booths at events + web), steward (review tags), publish (featured stories), listen (feedback), patch (iteration).

Who Does What

PAFL and community partners support relationships, event logistics, communications, and additional volunteers. Creative Context builds the technology, runs the system, and maintains the pipeline.

6. Technology Approach

6.1 How Technology Fits

The platform runs on Next.js with TypeScript. Mapbox handles mapping and geocoding. Remotion generates video outputs. MadMapper handles projection mapping. MediaPipe provides gesture tracking. Whisper handles transcription so every story becomes searchable, citable, and captionable. The Anthropic API generates structured outputs with citation guardrails.

The AI components are operational tooling. Whisper converts speech to text so stories can be found, excerpted, and attributed by timecode. The Anthropic API and related services produce story cards, documentary pages, and walk descriptions from existing recordings, and every output cites its sources. The AI does not replace human voices. It does not make editorial decisions.
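
As a sketch of the transcription step, assuming the hosted Whisper endpoint via the OpenAI SDK (a local Whisper install yields the same segment structure):

```ts
// Sketch: timecoded transcription with Whisper (hosted endpoint assumed; deployment not final).
import OpenAI from "openai";
import fs from "node:fs";

const openai = new OpenAI();

async function transcribeStory(path: string) {
  const result = (await openai.audio.transcriptions.create({
    file: fs.createReadStream(path),
    model: "whisper-1",
    response_format: "verbose_json", // includes per-segment start/end times in seconds
  })) as unknown as { segments?: { start: number; end: number; text: string }[] };
  // Each segment becomes a citable, timecoded transcript row in the repository.
  return result.segments?.map((s) => ({ startMs: s.start * 1000, endMs: s.end * 1000, text: s.text }));
}
```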

6.2 Equipment Requested

Creative Context requests

  • Mac minis
  • Cameras and microphones for booth capture
  • Software licenses
  • LLM and API fees and subscriptions

From PAFL inventory

  • Epson Home Cinema 880 + Optoma GT1090HDR projectors
  • Gaming PC for rendering and MadMapper
  • Microsoft Surface Hub 55" for interactive display
  • Azure Kinect DK for depth sensing / gesture
  • Ultraleap 3Di for hand tracking
  • Portable speaker + microphone for booth audio

6.3 18-Week Build Plan

Phase 1: Foundation

Core platform build: recording, consent, storage, map integration.

Phase 2: Community Layer

Neighborhood profiles, walk routes, community moderation tools.

Phase 3: Discovery

Search, filters, thematic collections, MARTA transit layer.

Phase 4: Projection

Projection mapping system, rover integration, gesture recognition.

Phase 5: Public Launch

Public beta, community onboarding, first projection events.

Phase 6: Sustain

Documentation, handoff, sustainability plan, archive protocols.

6.4 Budget Note

Stipend ($5K) covers: Mac mini(s), camera(s), webcam(s), and microphone(s); cloud hosting and API costs (Vercel, R2, Whisper, Anthropic, etc.); event materials (signage, printed QR cards, booth construction); domain and infrastructure; and travel to partner sites.

Full Tech Stack

  • Frontend: Next.js, TypeScript, shadcn/ui, Tailwind CSS, Framer Motion, Vercel AI SDK v6
  • Mapping: Mapbox GL JS, Mapbox Geocoding API
  • Capture: getUserMedia, MediaRecorder, Web Audio API, react-webcam
  • Motion tracking: MediaPipe (pose/hand landmarks)
  • Auth: Better Auth
  • API: Next.js Route Handlers, tRPC
  • Background jobs: Inngest
  • Hosting: Vercel
  • Database: Neon Postgres, Drizzle ORM
  • Media storage: Cloudflare R2, CDN with signed URLs
  • Transcription: Whisper
  • Generation: Anthropic API
  • Video output: Remotion and ffmpeg
  • Projection: MadMapper
  • Observability: Sentry, PostHog

7. Privacy, Consent, and Attribution

Identity runs through Better Auth. Public attribution defaults to first name only. Invited contributors can opt into “verified storyteller” tags with identity known internally but display controlled by the contributor.

Visibility controls per story

  • Public (library and map)
  • Limited (events and playlists only)
  • Private (saved to account, optional share-by-link)

Location precision

  • Independent of story visibility
  • Exact, approximate, neighborhood-only, or hidden
  • A story can be public with hidden location
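
A sketch of how these two independent controls could be typed and checked; the values mirror the lists above, and the check logic itself is illustrative:

```ts
// Visibility and location precision are independent fields (values mirror the lists above).
type Visibility = "public" | "limited" | "private";
type LocationPrecision = "exact" | "approximate" | "neighborhood" | "hidden";

interface StorySettings {
  visibility: Visibility;
  locationPrecision: LocationPrecision; // a public story may still hide its location
}

// Illustrative checks: where a story may be shown, and whether it may be pinned.
function canDisplay(s: StorySettings, context: "library" | "map" | "event" | "playlist"): boolean {
  if (s.visibility === "private") return false; // account-only; share-by-link handled separately
  if (s.visibility === "limited") return context === "event" || context === "playlist";
  return true; // public: library and map
}

const isMappable = (s: StorySettings) =>
  canDisplay(s, "map") && s.locationPrecision !== "hidden";
```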

Minors and school-based capture

  • Guardian consent required
  • Limited visibility by default
  • Conservative location and attribution marking

Sensitive content

  • Reporting and takedown workflow
  • Stewards escalate; they do not make unilateral decisions

Generation guardrails

Every output links to its source stories. Transcript excerpts are traceable by timecode. No voice cloning. No image cloning. No impersonation. Summarization is labeled as summarization.

8. Track Record

I have been producing brand and network content for two decades: making films and television, managing budgets, delivering work on contracted deadlines. The scale has ranged from small branded pieces made on my own to multi-month productions with large teams across locations around the globe.

Scope management is my job. Every production has the same constraints: time, money, people, locations, equipment. The work always ships.

I’ve been building working systems with LLMs such as Claude, ChatGPT, and Gemini, along with custom toolchains. Working software that real people can use. Full-stack technical capability across the tools listed in this application.

I love Atlanta. I grew up here. Atlanta taught me how to be an artist.

The creative technology through-line is simple. Each wave of technology enables new work. The medium changes. The creation stays the same: turn creative ideas into something real, tangible, and executable.

Appendix A: MVP Data Model

Appendix C: Community Contribution Examples

Designer-built template packs

Remotion story cards, printable QR posters

Neighborhood timeline walks

Librarian/historian-curated map routes and web exhibits

Youth media workshops

Student interview projects using interview kit

Gesture story portal

MediaPipe-driven hands-free navigation for installations