Level 3: My Non-Linear Life

Learning Outcomes

  • Explain the difference between linear and non-linear audio, and why games differ from film/TV
  • Understand how interactivity affects audio design
  • Identify the major technical developments in game audio history

Linear vs Non-Linear Audio

  • Linear: fixed sequence, timeline-based (film, TV, radio)
  • Non-Linear: triggered by user actions, adaptive, unpredictable
  • Hybrid: linear cinematic sequences inside interactive play, or interactive branching inside film (e.g. Black Mirror: Bandersnatch)

Why Audio Matters

  • Inform – provides navigation, spatial awareness, environmental cues
  • Entertain – sets tone, increases enjoyment
  • Immerse – creates believability and emotional connection

Unique Challenges in Game Audio

  • Unpredictable Timing – sounds can trigger at any moment and may repeat hundreds of times in different contexts
  • Branching Content – dialogue, music, and effects must adapt to multiple story paths
  • Dynamic Layering – music and ambience must shift smoothly with game state
  • Integration with Code – audio designers collaborate closely with programmers to set triggers and parameters (see the sketch below)
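
A minimal sketch of what that parameter-and-trigger work can look like on the code side; the class, layer names, and the single "tension" parameter are illustrative, not taken from any particular engine:

```python
# Minimal sketch: game state drives stem-layer volumes (illustrative names only).

class AdaptiveMusic:
    """Crossfades looping stem layers from a 'tension' value supplied by game code."""

    def __init__(self):
        # Each layer is a looping stem; volumes are recomputed whenever state changes.
        self.layers = {"ambient_pad": 0.0, "percussion": 0.0, "combat_brass": 0.0}

    def update(self, tension: float) -> None:
        # tension: 0.0 (calm) .. 1.0 (full combat), set by gameplay code.
        self.layers["ambient_pad"] = 1.0 - tension
        self.layers["percussion"] = min(1.0, tension * 2.0)          # fades in early
        self.layers["combat_brass"] = max(0.0, tension * 2.0 - 1.0)  # fades in late


music = AdaptiveMusic()
music.update(tension=0.75)   # e.g. an enemy has spotted the player
print(music.layers)
```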

Before Middleware: How Old-School Game Audio Worked

  • Internal Speaker Era (late 1970s)

    • Single beeper speaker; CPU directly generated tones
    • Non-linear triggering: CPU paused main work to output sound
  • Dedicated Sound Chips & Voices (early 1980s)

    • Consoles/computers added limited multi-voice chips
    • Logic decided which sound got which channel (voice stealing/prioritizing)
    • Each platform had a signature palette (e.g., the NES’s 5 fixed channels vs the C64’s 3 flexible SID voices)

Before Middleware: Trackers, Samples, and Early Rules

  • PCM Samples & Trackers (mid 1980s–1990s)
    • Amiga: 4-channel stereo sample playback
    • Module files = samples + pattern instructions (compact + reusable; sketched below)
    • Music & SFX still triggered by in-game events (no global timeline)
    • Tracker pattern logic = early “rules” for sequencing & variation
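
A rough sketch of the tracker idea, assuming a much-simplified module: a few sample slots plus one pattern whose rows say which sample each channel should trigger (real formats such as MOD store more per cell, but the principle is the same):

```python
# Minimal sketch of tracker-style data: a module = samples + pattern "instructions".

samples = {1: "kick.pcm", 2: "snare.pcm", 3: "bass.pcm"}   # hypothetical sample slots

# One pattern: 4 rows x 4 channels; each cell = (sample, note, volume) or None.
pattern = [
    [(1, "C-2", 64), None,           (3, "C-1", 48), None],
    [None,           (2, "C-2", 64), None,           None],
    [(1, "C-2", 64), None,           (3, "G-1", 48), None],
    [None,           (2, "C-2", 64), None,           None],
]

def play_row(row):
    """Trigger every non-empty cell; the same pattern data can be reused anywhere."""
    for channel, cell in enumerate(row):
        if cell:
            sample, note, volume = cell
            print(f"ch{channel}: play {samples[sample]} at {note}, vol {volume}")

for row in pattern:
    play_row(row)
```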

Watch on YouTube

MIDI and the Rise of Game Composers

  • MIDI = efficient event-based control data, not audio (see the example below)
  • Allowed rich, multi-channel compositions on limited hardware
  • Paved the way for composers like Nobuo Uematsu (Final Fantasy)
  • Set the stage for adaptive, event-driven music in games
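
A small illustration of why MIDI-style data is so compact: a note-on is a few bytes of control information, and a phrase is just a list of timed messages (the tick values and the second note are arbitrary examples):

```python
# Minimal sketch: MIDI is control data, not audio.
# 0x90 = note-on (channel 1), 0x80 = note-off; 60 = middle C; 100 = velocity.

note_on  = bytes([0x90, 60, 100])   # three bytes to start a note
note_off = bytes([0x80, 60, 0])     # three bytes to stop it

# A phrase is timed messages; the sound itself comes from the playback hardware,
# which is why rich multi-channel music fitted into tiny cartridges and RAM.
phrase = [
    (0,   note_on),
    (480, note_off),                  # e.g. 480 ticks = one beat later
    (0,   bytes([0x90, 64, 100])),    # E above middle C
    (480, bytes([0x80, 64, 0])),
]
print(sum(len(msg) for _, msg in phrase), "bytes of music data")   # -> 12
```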

Watch on YouTube

PlayStation 1: Toward Adaptive Music (1/2)

  • SPU ADPCM format: looping, per‑voice reverb, variable sample rates
  • 24 hardware voices: mix/layer music + SFX in real time
  • Layer manipulation enabled early adaptive scoring (add/remove parts)
  • Sequenced (event-based) music instead of long streamed audio
  • Stored note/instrument data → low memory + runtime control (tempo, pitch, instrumentation)

PlayStation 1: Toward Adaptive Music (2/2)

  • Exploited quirks (e.g., “dummy” data to halt or redirect loops) for transitions
  • Could fade voices, swap instruments, or inject percussion on combat start (illustrated below)
  • Real-time parameter tweaks (volume, reverb send) = early adaptive rules
  • Demonstrated hardware-driven path toward later middleware workflows
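
The kind of rule this describes can be sketched as follows; the event names, voice names, and fields are hypothetical stand-ins for what was done directly against the SPU's voice registers at the time:

```python
# Minimal sketch of state-driven voice tweaks (illustrative, not actual PS1 SPU code).

def on_game_event(event: str, voices: dict) -> None:
    """Adjust per-voice volume and reverb send in response to gameplay events."""
    if event == "combat_start":
        voices["percussion"]["volume"] = 1.0      # inject the drum part
        voices["strings"]["reverb_send"] = 0.2    # dry the mix for urgency
    elif event == "combat_end":
        voices["percussion"]["volume"] = 0.0
        voices["strings"]["reverb_send"] = 0.6

voices = {
    "strings":    {"volume": 0.8, "reverb_send": 0.6},
    "percussion": {"volume": 0.0, "reverb_send": 0.1},
}
on_game_event("combat_start", voices)
print(voices)
```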

Watch on YouTube

Activity: Choose Your Own Audio Adventure

  • You’ve seen how audio evolved from single channels to MIDI to adaptive layering.
  • Now imagine designing for this scene:
    • A player enters a forest clearing.
    • Suddenly, an enemy appears.
    • The player can:
      1. Fight → fast combat with swords and spells
      2. Run Away → sneaking and footsteps in brush
      3. Talk → dialogue tree with branching choices
  • In your group (3-4 students):
    • How should the music adapt in each case?
    • What sound effects are essential?
    • How would you keep transitions smooth if the player switches paths quickly?

Share your audio design plans with the rest of the class.

Interactive Audio: Beyond Looping Tracks

  • Must be reactive (responding to player input)
  • Must be responsive (adapting to game state/context)
  • Behaves like a database of musical ideas, ready for recombination
  • Supports branching, layering, and parameter-driven changes (see the transition sketch below)
  • iMUSE (LucasArts, 1991): early system enabling seamless transitions and adaptive scoring
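
One common approach is horizontal re-sequencing: keep playing until a musically sensible exit point, then branch via a short bridge cue. A minimal sketch, with cue names and a transition table invented for illustration (inspired by systems like iMUSE, not a description of its actual implementation):

```python
# Minimal sketch of rule-based transitions between looping music sections.

transitions = {
    # (current_section, target_state) -> bridge cue to play before the new section
    ("explore_loop", "combat"):  "sting_to_combat",
    ("combat_loop",  "explore"): "combat_resolve",
}

def next_cue(current_section: str, game_state: str, beat_in_bar: int) -> str:
    """Pick the next cue, but only switch on the downbeat so phrases stay intact."""
    if beat_in_bar != 1:
        return current_section                 # not an exit point yet: keep looping
    bridge = transitions.get((current_section, game_state))
    if bridge:
        return bridge                          # play the bridge, then the new loop
    return current_section

print(next_cue("explore_loop", "combat", beat_in_bar=1))   # -> sting_to_combat
```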

Rules for Interactive Sound Design

  • Accept technological limits; use them creatively
  • Serve the game’s vision and emotional goals
  • Collaborate with programmers and designers early
  • Plan interactivity into the design doc and asset pipeline

The Rise of Middleware in Game Audio

  • Became widespread in the 2000s as a way to solve workflow challenges
  • Gave composers and designers direct control over adaptive sound
  • Bridges DAW assets → Middleware → Game Engine (hand-off sketched below)
  • Popular tools: FMOD, Wwise, Miles Sound System, XACT
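
From the game-code side the hand-off can be sketched like this, assuming a hypothetical AudioMiddleware wrapper (the method names are placeholders, not the real FMOD or Wwise APIs): designers author events and parameters in the middleware tool, and gameplay code only posts them.

```python
# Minimal sketch of the division of labour between game code and middleware.
# 'AudioMiddleware' and its methods are hypothetical stand-ins, not a real API.

class AudioMiddleware:
    def post_event(self, name: str) -> None:
        print(f"[middleware] event: {name}")

    def set_parameter(self, name: str, value: float) -> None:
        print(f"[middleware] {name} = {value}")

audio = AudioMiddleware()

# Gameplay code stays simple; the adaptive behaviour lives in the middleware project.
audio.post_event("Play_Footstep")            # which variation plays is decided in the tool
audio.set_parameter("player_health", 0.25)   # e.g. drives a low-pass filter on the mix
```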

Internet and Flash: Non-Linear Constraints on the Web

  • Bandwidth limits shaped audio – compressed, short loops were common
  • Flash enabled event-driven sound – triggers tied to gameplay actions in browser games
  • Broadband expansion – richer, layered audio became possible

Modern Challenges and Innovations in Game Audio

  • Advancements in Mobile and Tablet Gaming
  • New Platforms and Audio Challenges
  • Evolution of Handheld Game Audio

Future Directions

  • Procedural Audio: generating or assembling sound at runtime (e.g., No Man’s Sky, Spore; see the sketch below)
  • AI-Driven Systems: adaptive composition, responsive mixing
  • Accessibility: audio cues, haptics, visual indicators
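
A tiny sketch of the procedural idea: instead of streaming a fixed recording, synthesise a wind-like bed from noise and let a game parameter shape it (the filter and scaling here are deliberately crude and purely illustrative):

```python
# Minimal sketch of procedural audio: noise -> simple low-pass -> intensity scaling.

import numpy as np

def wind_block(intensity: float, seconds: float = 1.0, sr: int = 22050) -> np.ndarray:
    """Return one block of wind-like samples; intensity in 0..1 comes from the game."""
    noise = np.random.uniform(-1.0, 1.0, int(sr * seconds))
    alpha = 0.02 + 0.3 * intensity     # higher intensity lets more highs through
    out = np.empty_like(noise)
    acc = 0.0
    for i, x in enumerate(noise):      # one-pole low-pass filter
        acc += alpha * (x - acc)
        out[i] = acc
    return out * (0.2 + 0.8 * intensity)

block = wind_block(intensity=0.7)
print(block.shape, float(np.max(np.abs(block))))
```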