In speech
In music
Concatenative synthesis for music began to develop in the 2000s, in particular through the work of Schwarz and Pachet (so-called musaicing). The basic techniques are similar to those used for speech, with differences arising from the differing natures of the two media: for example, the segmentation is not into phonetic units but often into subunits of musical notes or events.

''Zero Point'', the first full-length album by Rob Clouth (Mesh, 2020), features self-built concatenative synthesis software called the 'Reconstructor', which "chops sampled sounds into tiny pieces and rearranges them to replicate a target sound. This allowed Clouth to use and manipulate his own beatboxing, a technique used on 'Into' and 'The Vacuum State'." Clouth's concatenative synthesis algorithm was adapted from 'Let It Bee — Towards NMF-Inspired Audio Mosaicing' by Jonathan Driedger, Thomas Prätzlich, and Meinard Müller. Clouth's work on ''Zero Point'' was cited as an inspiration for recent innovations in concatenative synthesis outlined in "The Concatenator: A Bayesian Approach to Real Time Concatenative Musaicing" by Chris Tralie and Ben Cantil (ISMIR 2024), which improved on the speed, accuracy, and playability of prior real-time concatenative synthesis methods. The new algorithm serves as the engine behind the Concatenator plugin by DataMind Audio, which is currently in beta.
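The core idea behind NMF-inspired audio mosaicing can be illustrated with a short sketch: a corpus of source audio provides fixed spectral "grains", and non-negative activations are learned so that a mixture of those grains approximates a target recording. The sketch below is a simplified outline rather than Driedger, Prätzlich and Müller's complete method (their additional constraints for musical continuity, such as activation smoothing and repetition suppression, are omitted), and the file names and parameter values are placeholders.

<syntaxhighlight lang="python">
# Minimal sketch of NMF-based audio mosaicing (simplified; not the full
# "Let It Bee" algorithm). File names and parameters are illustrative.
import numpy as np
import librosa
import soundfile as sf

def mosaic(target_path, corpus_path, n_fft=2048, hop=512, n_iter=50):
    # Load the target (sound to imitate) and the corpus (source material).
    target, sr = librosa.load(target_path, sr=None)
    corpus, _ = librosa.load(corpus_path, sr=sr)

    # Magnitude spectrograms: each column is a short "grain" of audio.
    V = np.abs(librosa.stft(target, n_fft=n_fft, hop_length=hop))  # target
    W = np.abs(librosa.stft(corpus, n_fft=n_fft, hop_length=hop))  # fixed templates
    C = librosa.stft(corpus, n_fft=n_fft, hop_length=hop)          # complex, for resynthesis

    # Non-negative activations H: how strongly each corpus frame is used
    # at each target frame. W stays fixed; only H is updated, using the
    # multiplicative update rule for the KL-divergence NMF objective.
    H = np.random.rand(W.shape[1], V.shape[1])
    eps = 1e-10
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)

    # Resynthesise by mixing the corpus's complex spectrogram frames
    # according to the learned activations, then inverting the STFT.
    mosaic_spec = C @ H
    out = librosa.istft(mosaic_spec, hop_length=hop, length=len(target))
    return out, sr

# Example usage (paths are placeholders):
# y, sr = mosaic("target_beatbox.wav", "corpus_samples.wav")
# sf.write("mosaic.wav", y, sr)
</syntaxhighlight>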
See also
* Granular synthesis