
Sound Innovation at Our School: Audio Programmer Interface 2025 Holds Court

London's Music School played host to API 2025, blending music and technology by featuring discussions on AI, MIDI 2.0, programming tools, and upcoming audio innovations.

Programming the Soundscapes of Tomorrow: School Welcomes Audio Programmer Interface 2025 Event

The Music School, a prestigious institution known for its commitment to musical innovation, recently hosted an event in London that showcased the latest advancements in music production and sound engineering.

The event, called Audio Programmer Interface (API), brought together experts from the music and technology industries to discuss the impact of AI, coding, and MIDI 2.0 on the future of music creation.

One of the highlights of the event was a talk by Chris Nash, a programme leader at The Music School. Nash introduced 'Manhattan', a hybrid digital audio workstation (DAW), and 'Klang', a simplified C++ coding language for audio.

In another session, Reuben Thomas spoke about MIDI 2.0, the latest evolution of the technology behind electronic instruments and software communication. MIDI 2.0 allows for higher-resolution messages, per-note control, and two-way communication between devices.

The Music School's BSc (Hons) Music Production & Software Engineering degree, offered at the institution, aims to fuse musical innovation with real-world coding. The degree provides students with hands-on skills that open doors to careers in gaming, app development, immersive media, and more.

The degree focuses on building a solid foundation in music tech from day one, and students are encouraged to explore the latest tools and technologies in the field.

But the learning opportunities at The Music School don't stop with degrees. The Music School's free resources page offers a range of free courses, exclusive music-making tools, and tutorials, open to anyone regardless of location or experience.

The convergence of AI, coding frameworks, and MIDI 2.0 is reshaping music production through enhanced automation, expressive control, widened access, and real-time creative collaboration. AI-driven music creation platforms, such as ElevenLabs Music and Suno AI, enable users to generate high-quality, emotionally rich soundscapes by simply inputting text prompts.

AI integration in digital audio workstations (DAWs) now facilitates automated separation of vocal and instrument stems, cleaner mixes, faster editing, and enhanced mastering processes. Real-time collaboration with AI has also emerged, enabling musicians and producers to co-edit and tweak compositions live with AI assistance.

MIDI 2.0 adoption is advancing, with frameworks like JUCE incorporating new MIDI 2.0 features that provide higher resolution, more expressive control, and improved interoperability between devices and software.

The latest advancements in music production and sound engineering also involve the use of AI tools in ghost production, allowing producers to create tracks rapidly and release music under other artists' names.

Christopher Mitcheltree, another speaker at the event, discussed the use of AI in sound design, demonstrating the Neutone SDK's ability to mimic instrument sounds, recreate effects, and shape audio in real time. Like the event itself, the degree is designed for both coders exploring the world of music and creatives looking to level up with programming.

Overall, the event at The Music School highlighted the exciting future of music production, where AI, coding, and MIDI 2.0 are reshaping the industry and defining a new era of personalized and hybrid music workflows.
