Blog by Mark Headd: “…It’s not surprising that the civic tech world has largely metabolized the rise of Artificial Intelligence (AI) as a set of tools we can use to make these interfaces even better. Chatbots! Accessible PDFs! These are good and righteous efforts that make things easier for government employees and better for the people they serve. But they’re sitting on a fault line that AI is shifting beneath our feet: What if the primacy and focus we give *interfaces*, and the constraints we’ve accepted as immutable, are changing?…
Modern generative AI tools can assemble complex, high-fidelity interfaces quickly and cheaply. If you’re a civic designer used to hand-crafting bespoke interfaces with care, the idea of just-in-time interfaces in production makes your hair stand on end. Us, too. The reality is that this is still an idea that lies in the future. But the future is getting here very quickly.
Shopify, with its 5M DAUs and $292B processed annually, is doing its internal prototyping with generative AI. Delivering production UIs this way is gaining steam both in theory and in proof-of-concept (e.g., adaptive UIs, Fred Hohman’s Project Biscuit, Sean Grove’s ConjureUI demo). The idea is serious enough that Google, not a slouch in the setting-web-standards game, is getting into the mix with Stitch and Opal. AWS is throwing its hat in the ring too. Smaller players like BuildAI, Replit, Figma, and Camunda are exploring LLM-driven UI generation and workflow design. At first, all of these may generate wacky interfaces and internet horror stories, and right now they’re mostly focused on dynamic UI generation for a developer, not a user. But these are all different implementations of an idea converging on a clear endpoint, and if they get into use at any substantial scale, they will become more reliable and production-ready very quickly…(More)”.