We need a MIDI engine III

– The programmer situation

Let’s face it, programmers don’t really like sound, except at high volume in their headphones with some coffee on the desk. I’ve met two profiles in ten years: the dude who’s building a custom audio engine, and the dude who’s implementing the content and doesn’t really give a shit about sound features. The latter usually implements everything else too; sound is just one more task.

With these two profiles, game audio ended up either with libs that do everything for the programmer integrating content, or, on the other side, with a focus on stuff like 3D audio for the programmers who love math. It could be fine, I mean it’s fine in some ways. But it is so not enough.

– The designer situation

I stumbled upon this Gamasutra article about the 2012 GDC game audio track. It says a lot about the state of game audio and shows how most composers just don’t want to get their hands dirty with the challenges of game design and game audio.

How important is it to be a "game-only" musician or sound architect in today’s industry, or is the market increasingly cross-platform across TV, film, etcetera?
Kenneth Young: I don’t think it’s ever been important to be "game-only," but I do think it’s very important to understand the challenges that games and interactivity pose. To that end, because the general complexity and sophistication of games is increasing, you’d think the market would favor those people with experience. And yet there is a trend for film composers with zero games experience to score AAA games…

So… It’s not important, but games are getting more and more sophisticated, and therefore you would need less and less game audio experience? It doesn’t make sense. It is important to be “game-only”, it’s just not trendy. That’s what is happening. Composers don’t want to wear that nerdy hat, and game directors fantasize about their favorite music scores. Nonetheless, there are a lot of games outside the AAA thing which would benefit from the experience of a game-only-or-pretty-much composer. Also, understanding game development takes a while; it’s not just a matter of knowing which audio engine to use, it goes deeper and there’s no limit to that. That’s where we don’t dig enough, that’s where I want to go.

I’d love to see a guy like John Williams do an original score for a video game, but he would need a seasoned audio pro from the game industry to put it all together, get it interactive, and make it the best it could be.

Well then, John Williams’ music is merely samples that I would assemble to create game audio. If I deconstruct music and reconstruct it to match a game, its mechanics and flow, who’s the main audio artist/craftsman/designer? It’s me. Assets just don’t matter that much; it’s all about execution, implementation. For a game, you need the audio guy who understands how games work much more than the one who sold millions of CDs, because the latter will never get it, while the former can improve his composition skills.

It’s like pretty much all the 80s-90s Japanese composers who are treated like gods today: they started with very little composing experience and got better with time and projects. Koji Kondo didn’t even have a demo tape!

Another social-economy thingy: a composer who has dreams is going to try to stand out much more than a composer who already has all the freaking awards in the world. It’s about freshness.

But this is the best part of this interview:

What do you think are some unexplored avenues for games that rely heavily on music or sound?
Brian Schmidt: There are definitely unexplored — or lightly explored — areas of game sound. For example, tightly coupling audio with physics, direct synthesis — the physical modeling of sounds. We keep hearing that the power of these new consoles may lead to a re-birth of the MIDI synthesized score, perhaps with instrument-based controllers as input devices to obtain more performance nuance than is possible with keyboard input. And to this date, by far — by literally an order of magnitude — the most attended talk at GDC by a game composer has been Koji Kondo, who does MIDI generated music for his games — he believes it essential to the aesthetic.

Direct synthesis and procedural music or FX don’t give you good control and always sound kind of the same. And Koji is absolutely right: MIDI-generated music allows full interactivity; what argument do you need after that? Koji has the most recognizable music themes in the world, maybe ever, he does MIDI synthesized scores, and you don’t want to believe him? It’s not just the music he makes, it’s the tight integration with the gameplay, the aesthetic of the game; the whole thing makes his music a much bigger, better thing than just notes following each other. This magic doesn’t happen otherwise. Grim Fandango is the exact same thing: it’s not just about the fact that it’s good jazz music you never hear in games, it’s the beautiful iMuse system that makes it such a seamless experience with the rest of the game, mechanics and visuals all working together.

It’s beautiful. This is why the entire audio system (assets, tool chain, engine) needs to be tight. And for that we need flexibility, we need to drop the friction of annoying, heavy .wav files, we need to be able to iterate fast and find cool stuff and tricks. We need MIDI.

But even the wider music world has a weird hate/love relationship with MIDI. I mean, every single artist out there, from Radiohead to Bieber, uses some. All of the audio software out there works with MIDI or is heavily based on it. MIDI is 30 years old.

And yet we still don’t have built-in MIDI in guitars, for example (I mean at an affordable, decent price), and it’s not really about anything but the old “MIDI is not real music, it’s like cheating!” mantra. It’s like saying .svg isn’t real graphics because .tga is. Seriously, it’s that dumb. Feel my despair. People love to segregate; it’s a human social disease, seriously.

But back to the game audio world. No MIDI engine except the super expensive Miles Sound System and in-house engines, like at Nintendo.

OSes are a mess. Linux audio is a mess, so is Windows. But both can/could have a built-in MIDI engine with low latency, no doubt.

Even worse, the web and Facebook. We are going backward there. Flash is horrible for some stuff, but for audio it’s just unbelievably bad: you can’t do anything but play/stop/mute/fade. HTML5? Same shit, plus ridiculous problems with codecs and files, inconsistency across browsers… And the brand new Web Audio API from Google just does the same stuff over and over again:

  • Spatialized audio supporting a wide range of 3D games and immersive environments:
    • Panning models: equal-power, HRTF, sound-field, pass-through
    • Distance Attenuation
    • Sound Cones
    • Obstruction / Occlusion
    • Doppler Shift
    • Source / Listener based
  • A convolution engine for a wide range of linear effects, especially very high-quality room effects. Here are some examples of possible effects:
    • Small / large room
    • Cathedral
    • Concert hall
    • Cave
    • Tunnel
    • Hallway
    • Forest
    • Amphitheater
    • Sound of a distant room through a doorway
    • Extreme filters
    • Strange backwards effects
    • Extreme comb filter effects
  • Dynamics compression for overall control and sweetening of the mix
  • Efficient real-time time-domain and frequency analysis / music visualizer support
  • Efficient biquad filters for lowpass, highpass, and other common filters.
  • A Waveshaping effect for distortion and other non-linear effects

Note the “Extreme filters”. God. Hi, programmers who will sure love to challenge themselves with these! But as a designer, I don’t give a damn about 3D (we’re playing in a browser on a laptop, not on a 5.1 setup with a 50-inch TV), and a convolution engine, previously known as reverb? Meh. Of course, with all this you could make your own limited web MIDI engine but… Sigh.
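
Just to show what I mean, here’s a minimal sketch of that kind of limited web MIDI engine, scheduling bare oscillator notes against the AudioContext clock. The Note type, the midiToHz helper and the values are all mine, just for illustration, and it assumes today’s unprefixed AudioContext (back in 2012 it was the prefixed webkitAudioContext):

    // A toy "limited web MIDI engine": schedule oscillator notes on the Web Audio clock.
    // Note, midiToHz and the values below are made up for this sketch.
    type Note = { pitch: number; start: number; duration: number }; // MIDI pitch, seconds

    const midiToHz = (pitch: number): number => 440 * Math.pow(2, (pitch - 69) / 12);

    function playSequence(ctx: AudioContext, notes: Note[]): void {
      const t0 = ctx.currentTime;
      for (const note of notes) {
        const osc = ctx.createOscillator(); // a square wave as a stand-in "instrument"
        const amp = ctx.createGain();
        osc.type = "square";
        osc.frequency.value = midiToHz(note.pitch);
        amp.gain.setValueAtTime(0.2, t0 + note.start);                        // note on
        amp.gain.linearRampToValueAtTime(0, t0 + note.start + note.duration); // note off
        osc.connect(amp);
        amp.connect(ctx.destination);
        osc.start(t0 + note.start);
        osc.stop(t0 + note.start + note.duration);
      }
    }

    // Usage: a two-note motif the game could rebuild from its own state on every trigger.
    const ctx = new AudioContext();
    playSequence(ctx, [
      { pitch: 60, start: 0.0, duration: 0.25 },  // C4
      { pitch: 64, start: 0.25, duration: 0.25 }, // E4
    ]);

Even that toy makes the point: once notes are data, the game can reschedule, transpose or drop them at runtime, which a pile of .wav files will never do.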

We don’t need dynamic real-time mixing. We need dynamic real-time composing, and we are very short on tools to do that.

Somebody give me a high-performance Fmod-like engine for FX, a DirectMusic-like engine for music, wrap it all in a nice interface that can output to any platform, and game audio designers will rise, and your game will sound and feel like nothing before. Actually, Fmod only needs a softsynth layer like Fluidsynth or TiMidity++ and more complex MIDI bindings; otherwise, it already plays .mid.
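
To be clear about what “a nice interface” means to a designer, here’s a purely hypothetical sketch; none of these calls exist in Fmod, DirectMusic, Fluidsynth or anywhere else, it’s just the shape of the thing I’m asking for:

    // Purely hypothetical API: the shape of the MIDI-driven music engine described above.
    // None of these calls map to a real library.
    interface GameMusicEngine {
      loadScore(midiFile: string, soundFont: string): void; // notes + instrument bank, not .wav stems
      play(section: string): void;                          // jump to an authored marker in the score
      setTempo(bpm: number): void;                          // follow the game's pacing in real time
      setLayerVolume(layer: string, volume: number): void;  // fade instrument layers per game state
      transpose(semitones: number): void;                   // recolor a theme with zero new assets
      onBeat(callback: (beat: number) => void): void;       // sync gameplay events to the musical pulse
    }

    // What a designer would write: composition tied to mechanics instead of crossfading files.
    declare const music: GameMusicEngine; // hypothetical instance handed over by the engine
    music.loadScore("overworld.mid", "orchestra.sf2");
    music.play("exploration");
    music.setTempo(96);
    music.setLayerVolume("percussion", 0);               // drums only come in when enemies are near
    music.onBeat(beat => console.log("downbeat", beat)); // hook gameplay accents to the beat grid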

Fuck. It’s so messed up.

If you are a programmer and love to build tools, hit me the fuck up so that we can start something about all that and get famous or rich, or both.

We need a MIDI engine I

We need a MIDI engine II
