AUTOSPAT SCRIPT TUTORIAL
Introduction to AutoSpat
In this section, the speaker introduces AutoSpat and explains how it was developed.
Development of AutoSpat
- During the pandemic, the team compiled a database of creative decisions made during immersive audio productions.
- The team analyzed the patterns in the database and derived eight sound object modules from them.
- The eight modules are kick, bass, percussion, voice, synthesizer/lead, pad, FX, and ambience.
- In the future, different categories could be used if a different database is used.
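The eight module categories listed above could be represented as a simple enumeration; a minimal sketch in Python (the class and member names are illustrative, not part of AutoSpat's actual API):

```python
from enum import Enum

class SoundObjectModule(Enum):
    """The eight sound object modules derived from the production database."""
    KICK = "kick"
    BASS = "bass"
    PERCUSSION = "percussion"
    VOICE = "voice"
    SYNTH_LEAD = "synthesizer/lead"
    PAD = "pad"
    FX = "fx"
    AMBIENCE = "ambience"
```

If a different database were used, this enumeration is the piece that would change, as the speaker notes.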
How AutoSpat Works
- AutoSpat is an algorithm that spatializes sound objects automatically based on a basic set of initial inputs from the user.
- Users can tell AutoSpat which model to use for each channel based on their knowledge of what instruments are present.
- Based on the chosen models and sound object modules, AutoSpat generates a scene by placing the sound objects in space.
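The workflow above can be sketched in code: the user maps each input channel to an instrument model, and the algorithm places one sound object per channel somewhere in space. This is a hypothetical illustration, assuming per-model spatial ranges; the zone values and function names are invented for the example, not AutoSpat's real data:

```python
import random

# Illustrative spatial ranges per model:
# (azimuth range in degrees, elevation range in degrees, distance range)
MODEL_ZONES = {
    "kick":  ((-10, 10),   (0, 5),   (1.0, 2.0)),
    "voice": ((-30, 30),   (0, 15),  (1.0, 3.0)),
    "pad":   ((-180, 180), (10, 60), (2.0, 6.0)),
}

def generate_scene(channel_models, rng=random):
    """Place one sound object per channel within its model's zone."""
    scene = {}
    for channel, model in channel_models.items():
        az_r, el_r, d_r = MODEL_ZONES[model]
        scene[channel] = {
            "model": model,
            "azimuth": rng.uniform(*az_r),
            "elevation": rng.uniform(*el_r),
            "distance": rng.uniform(*d_r),
        }
    return scene

# The user's only inputs: which model sits on which channel.
scene = generate_scene({1: "kick", 2: "voice", 3: "pad"})
```

The key idea this sketch captures is that the user supplies only the channel-to-model mapping; the spatial placement itself is automatic.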
Outer Spat: A Game Changer for Creative Context and Applications
In this section, the speaker introduces Outer Spat, a tool that can generate 3D soundscapes that are coherent and customizable. The speaker explains how it can be used in various contexts such as museums, flagship stores, immersive exhibitions, and more.
Introduction to Outer Spat
- Outer Spat is a tool that generates 3D soundscapes that are coherent and customizable.
- It can be used in various contexts such as museums, flagship stores, immersive exhibitions, etc.
Benefits of Using Outer Spat
- It eliminates the need for human operators or technicians to operate an immersive audio system.
- It provides interoperability between systems and different frameworks.
- It is easy to use for engineers or users who are new to immersive audio.
How to Use Outer Spat
- The Outer Spat interface is minimal, with a generation module, an OSC settings module, a viewer, a source visualizer, and a global visualizer.
- Users can add new sources to the algorithm by creating a sound source label, assigning it an instrument model, and specifying the channel ID it should use.
- Users can tell the algorithm which generation mode to use: balanced or random. Balanced tries to harmonize the distribution of parameter values across the full range of each instrument model, while random draws values at random from within each model's allowed range.
- After adding sources, users can click the Generate button, which generates a scene based on their inputs.
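The difference between the two generation modes described above can be sketched as follows. This is a guess at the semantics based on the description, with invented function names: balanced spreads values evenly across a parameter's range, while random samples uniformly within it:

```python
import random

def balanced_values(low, high, n):
    """Spread n values evenly across [low, high], one per source."""
    if n == 1:
        return [(low + high) / 2.0]
    step = (high - low) / (n - 1)
    return [low + i * step for i in range(n)]

def random_values(low, high, n, rng=random):
    """Draw n values independently and uniformly within [low, high]."""
    return [rng.uniform(low, high) for _ in range(n)]

# Three sources sharing an azimuth range of -90..90 degrees:
balanced = balanced_values(-90, 90, 3)   # evenly distributed
randomized = random_values(-90, 90, 3)   # anywhere in the range
```

Balanced mode avoids clustering several sources on similar values; random mode trades that guarantee for variety between generated scenes.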
Perceptual Factors
- Users should rely on perceptual factors to evaluate how each sound object behaves in the generated scene.
Updating Individual Parameters
In this section, the speaker discusses updating individual parameters such as spatial crossover, level factor, time factor, and spectral factor.
Updating Individual Sources
- The speaker mentions that they can update individual sources.
- The parameters that can be updated include spatial crossover, level factor, time factor, and spectral factor.
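The four per-source parameters named above could be updated through a small setter interface; a minimal sketch, assuming these parameters are numeric per-source values (the class, defaults, and method are hypothetical, only the parameter names come from the talk):

```python
class SourceParams:
    """Per-source parameters named in the talk; defaults are illustrative."""

    def __init__(self):
        self.spatial_crossover = 0.5
        self.level_factor = 1.0
        self.time_factor = 1.0
        self.spectral_factor = 1.0

    def update(self, **changes):
        """Update only the parameters the user names, rejecting unknown ones."""
        for name, value in changes.items():
            if not hasattr(self, name):
                raise AttributeError(f"unknown parameter: {name}")
            setattr(self, name, value)

# Update one source without touching the rest of the scene.
params = SourceParams()
params.update(level_factor=0.8, spectral_factor=1.2)
```

The point of the per-source update is that a single object can be refined after generation while the rest of the scene stays untouched.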