How Australian Indie Game Studios Are Using AI for Procedural Content Generation
Australian indie game development has always been about doing more with less. When you’re a five-person studio in Collingwood competing against AAA teams of hundreds, you’ve got to be smart about where you spend your time. And increasingly, that means using AI-powered procedural content generation to create worlds, levels, and assets that would otherwise take years to build by hand.
I’ve been talking to studios around Australia about how they’re actually using this tech — not the theoretical “AI will make games for us” hype, but what’s working in practice right now.
What We’re Actually Talking About
Let’s define terms, because “procedural generation” covers a wide range of techniques.
Traditional procedural generation — think Minecraft or No Man’s Sky — uses algorithms and rules to create content. You define parameters, write functions, and let the math generate variations. It’s been around for decades.
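To make the contrast concrete, here's a minimal sketch of the rule-based approach: explicit parameters, a fixed seed, and hand-written placement rules doing all the work. It's illustrative only, not drawn from any studio mentioned here.

```python
import random

def generate_dungeon(seed: int, rooms: int = 8, grid: int = 40):
    """Rule-based generation: the same seed reproduces the same dungeon."""
    rng = random.Random(seed)
    placed = []
    for _ in range(rooms):
        w, h = rng.randint(4, 9), rng.randint(4, 9)        # room-size rules
        x, y = rng.randint(0, grid - w), rng.randint(0, grid - h)
        # Explicit rule: reject any room that overlaps one already placed
        if any(x < px + pw and px < x + w and y < py + ph and py < y + h
               for px, py, pw, ph in placed):
            continue
        placed.append((x, y, w, h))
    return placed

print(generate_dungeon(seed=42))
```

Everything interesting here lives in the rules you wrote; the randomness only shuffles within them, which is exactly why pure proc-gen output can feel samey.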
AI-powered procedural generation is different. It uses machine learning models — often trained on existing game content — to generate new content that reflects learned qualities of good design rather than just following explicit rules. The output tends to be more organic, more varied, and (when it works) more interesting than what purely algorithmic approaches produce.
The distinction matters because AI-driven approaches can produce content that feels designed rather than just randomised. And for indie studios that can’t afford a team of 20 level designers, that’s incredibly valuable.
Melbourne’s Scene Is Leading the Way
No surprise that Melbourne, Australia’s game development capital, is where most of the action is.
Hollow Knight makers Team Cherry reportedly experimented with AI-assisted environment generation during the development of Silksong, though they’ve been characteristically quiet about specifics. What’s filtered out through developer talks suggests they used ML models to generate initial environment layouts that were then hand-refined by their design team — essentially using AI as a first draft tool rather than a final output.
Several smaller Melbourne studios are being more open about their approaches. One studio working on a roguelike (who asked me not to name them pre-launch) is using a fine-tuned language model to generate room descriptions, NPC dialogue, and quest variations. They’ve trained it on their own writing to maintain a consistent voice, and they say it’s reduced their content writing workload by about 60%.
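I haven't seen that studio's pipeline, so treat this as a guess at the general shape: a causal language model fine-tuned on the studio's own writing, prompted with structured fields and sampled for variety. The checkpoint path and prompt format here are invented for illustration.

```python
# Hypothetical sketch: generating room flavour text from a studio's own
# fine-tuned checkpoint. Paths and prompt format are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./checkpoints/house-voice-7b")
model = AutoModelForCausalLM.from_pretrained("./checkpoints/house-voice-7b")

prompt = "Room type: flooded archive\nTone: melancholy\nDescription:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=80,     # short flavour text, not long-form narrative
    do_sample=True,        # sampling provides the variation proc-gen needs
    temperature=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```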
Prideful Sloth in Brisbane has talked publicly about using procedural generation in their games, and their newer projects are incorporating ML-trained asset variation systems — where an AI model generates variations of hand-crafted base assets (vegetation, rocks, architectural elements) to fill out environments without the repetition that plagues most proc-gen games.
What’s Working and What Isn’t
From my conversations, a few patterns emerge about where AI procedural generation actually helps indie studios and where it falls short.
What’s working:
- Environment and level layout. AI models that generate initial level layouts based on design constraints (difficulty curves, pacing, player flow) are saving studios significant time. The key is using AI as a starting point for human designers, not as the final word.
- Asset variation. Training models to create variations of base art assets is perhaps the most practical application right now. One artist creates a tree. The AI creates 200 variations. A human art director picks the best 50. The forest looks hand-crafted but took a fraction of the time. (There's a rough sketch of this kind of pipeline after this list.)
- Dialogue and flavour text. For games that need large volumes of text — item descriptions, NPC chatter, lore entries — AI writing tools are a genuine productivity multiplier when fine-tuned on the game’s existing writing.
- Music and sound. A few studios are using AI-generated adaptive music systems that compose variations based on gameplay state. This is early but promising.
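On the asset-variation point above, the pipelines I heard described generally push a hand-made base asset through an image-to-image model at low strength, so the variations stay close to the original. A rough sketch of that idea, with a placeholder model and parameters rather than any studio's actual setup:

```python
# Rough sketch of asset variation via image-to-image diffusion.
# Model, prompt, and strength are placeholders; a production pipeline
# would use a model trained on the studio's own art.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base = Image.open("assets/tree_base.png").convert("RGB")

for i in range(200):
    out = pipe(
        prompt="stylised eucalyptus tree, hand-painted game asset",
        image=base,
        strength=0.3,          # low strength: stay close to the hand-made base
        guidance_scale=7.0,
    ).images[0]
    out.save(f"assets/generated/tree_{i:03d}.png")
# An art director then culls these down to the keepers.
```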
What isn’t working:
- Core gameplay design. Nobody I talked to is using AI to design actual game mechanics or core loops. That’s still entirely human work, and most developers are sceptical it could work well.
- Narrative structure. AI can write individual dialogue lines, but structuring a compelling narrative arc with foreshadowing, payoff, and emotional resonance is beyond current capabilities.
- Character art. AI-generated character art remains controversial and often legally murky. Most studios are steering well clear of using AI image generation for character design, both for quality and ethical reasons.
The Technical Side
For the technically curious, here’s what the typical stack looks like for an Aussie indie studio doing AI proc-gen:
Most studios are using some combination of custom fine-tuned models (often based on open-source foundations like Llama or Mistral) for text generation, and custom-trained diffusion or GAN models for asset variation. The training data is almost always the studio’s own content — they’re not just scraping the internet.
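None of the studios shared their training code, but a plausible minimal shape for "fine-tune an open-weight base model on our own writing" looks something like the following, using a LoRA adapter to keep the trainable parameter count (and the cloud bill) small. The model name, file names, and hyperparameters are placeholders, not anyone's production config.

```python
# Plausible (not any studio's actual) LoRA fine-tune on a studio's own text.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA: train small adapter matrices instead of the full 7B parameters
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# "lore.jsonl": the studio's own item descriptions, barks, and quest text
data = load_dataset("json", data_files="lore.jsonl")["train"]
data = data.map(lambda row: tokenizer(row["text"], truncation=True, max_length=512),
                remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```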
Several studios mentioned working with external AI development partners to build and train their models. One Melbourne studio worked with Team400’s AI team to develop a custom procedural generation pipeline that could be integrated directly into their Unity workflow. The consensus is that off-the-shelf AI tools are a starting point, but production-quality proc-gen requires customisation.
Compute costs are a consideration for indie budgets. Most studios do their training in the cloud (typically on AWS or Google Cloud) and then run inference locally or on modest hardware. The models used for game content generation are generally much smaller than the massive LLMs making headlines, which keeps costs manageable.
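The "run inference locally" half of that split can be as simple as loading a small quantised checkpoint on a developer machine or build server. A hypothetical example (the model file is invented):

```python
# One way the "train in the cloud, run locally" split can look: a small
# quantised checkpoint queried at build time on modest hardware.
from llama_cpp import Llama

llm = Llama(model_path="models/lore-gen-q4.gguf", n_ctx=2048)

result = llm(
    "Write a one-line shop sign for a dusty outback general store:",
    max_tokens=48,
    temperature=0.8,
)
print(result["choices"][0]["text"].strip())
```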
The Ethical Questions
Australian indie developers, to their credit, seem more thoughtful about the ethics than the broader games industry.
The main concerns I heard were:
- Training data provenance. Studios are careful to train only on their own content or properly licensed material. The legal landscape around AI-generated content is still evolving, and nobody wants to ship a game with assets that might trigger an IP dispute.
- Disclosure. Should games disclose that content was AI-generated? There’s no industry standard yet, but several studios told me they plan to credit AI tools in their development credits, similar to how they’d credit middleware like Unity or Wwise.
- Impact on jobs. This is the elephant in the room. If AI procedural generation means a studio needs two artists instead of five, that’s good for the studio’s budget and bad for three artists. Australian game developers are already underpaid relative to tech peers, and further reducing headcount isn’t a comfortable topic.
What’s Next
I think we’re in the early adoption phase. Within two years, AI-assisted procedural generation will be standard in indie development — not because it’s trendy, but because the economic argument is too strong to ignore. A three-person studio that can generate content at the scale of a 15-person team will have a massive competitive advantage.
The studios that’ll do it best are the ones treating AI as a tool that amplifies human creativity rather than replacing it. The best proc-gen games won’t be the ones where AI made everything. They’ll be the ones where AI handled the repetitive work and freed up humans to focus on the bits that actually matter — the moments that surprise you, move you, or make you laugh.
Australian indie studios have always been resourceful. AI procedural generation is just the latest tool in a long tradition of punching above our weight. And based on what I’m seeing, we’re going to punch pretty damn hard.