[expand title=”Bertrand Petit (INRIA); Manuel Serrano () – Generative Music using reactive Programming“]
Contact : bertrand.petit@inria.fr; manuel.serrano@inria.fr
Presentation format: In person

Abstract


Generative music, i.e., music produced using algorithms or assisted by algorithms, can be created using many different techniques and even methodologies. It can be generated from grammatical representations, using probabilistic algorithms, neural networks, rule-based systems, constraint programming, etc. In our work, we are interested in a new technique that combines complex combinations of basic musical elements with stochastic phenomena, and that is made possible by the use of synchronous reactive programming.
We have based our work on the HipHop.js programming language, which allows composers to create music programs and produces satisfying and unexpected musical results.
In this paper, we present this new way of composing music and we comment on some concrete realizations.

Paper (preprint)

Video abstract




[/expand]


[expand title=”Jeremy Hyrkas (University of California San Diego) – Network Modulation Synthesis: New Algorithms for Generating Musical Audio Using Autoencoder Networks“]
Contact : jeremy.hyrkas@gmail.com
Presentation format: In person

Abstract


A new framework is presented for generating musical audio using autoencoder neural networks. With the presented framework, called network modulation synthesis, users can create synthesis architectures and use novel generative algorithms to more easily move through the complex latent parameter space of an autoencoder model to create audio.

Implementations of the new algorithms are provided for the open-source CANNe synthesizer network and can be applied to any autoencoder network for audio synthesis. Spectrograms and time-series encoding analysis demonstrate that the new algorithms provide simple mechanisms for users to generate time-varying parameter combinations, and therefore auditory possibilities, that are difficult to create by generating audio from handcrafted encodings.
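
To illustrate the general idea of steering an autoencoder's latent space over time, here is a minimal sketch (not the paper's network modulation synthesis algorithms; `decode` stands in for any audio autoencoder's decoder):

```python
import numpy as np

def modulate_latents(z0, rates_hz, depths, duration_s, frame_rate=86.0):
    """Sweep a starting latent vector z0 with one sine LFO per latent
    dimension, yielding a time-varying latent trajectory (one vector per
    synthesis frame)."""
    n_frames = int(duration_s * frame_rate)
    t = np.arange(n_frames) / frame_rate
    lfo = depths[None, :] * np.sin(2 * np.pi * rates_hz[None, :] * t[:, None])
    return z0[None, :] + lfo                    # shape: (n_frames, latent_dim)

# usage sketch: decode() stands in for a trained decoder
# z0 = encode(reference_frame)
# audio_frames = [decode(z) for z in modulate_latents(z0, rates, depths, 2.0)]
```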

Paper (preprint)

Video abstract




[/expand]


[expand title=”Dustin Lee (HKUST); Andrew Horner (HKUST); Wenyi Song (HKUST) – A Head-to-Head Comparison of the Emotional Characteristics of the Violin and Erhu on the Butterfly Lovers Concerto“]
Contact : dleeai@cse.ust.hk; horner@cse.ust.hk; wsongak@connect.ust.hk

Abstract


Recent work has compared the violin and erhu to determine their difference in emotional characteristics using an absolute scale. Musical excerpts from the Butterfly Lovers Concerto were divided into four emotional categories: Romantic, Joyful, Agitated, and Bittersweet. It was found that the violin had higher overall Valence and Arousal scores than the erhu. The violin also had higher emotional intensity for Romantic, Joyful, and Agitated excerpts, while the erhu had higher emotional intensity for Bittersweet excerpts, though the differences were not statistically significant.

This paper considers the violin and erhu using relative comparisons and the same musical excerpts. The results show that the violin was statistically stronger in emotional intensity for all four emotional categories. The erhu was relatively stronger on Bittersweet excerpts. The results suggest that the violin has a wider range and more dramatic expression than the erhu, at least for this piece.

Paper (preprint)


[/expand]


[expand title=”Cheuk Nam Lau (Hong Kong University of Science and Technology) – An Experiential Course on Creative Sound Design“]
Contact : cnlauac@connect.ust.hk
Presentation format: In person

Abstract


Music is a natural area for experiential learning, since it directly touches us emotionally. In this paper, we describe how we set up an experiential course on sound design. We structured the course as if we were at an LA school for film music. We made each weekly musical assignment a wild Dungeons & Dragons adventure into the dark arts of mood modulation. Students had a chance to explore how plastic music is, radically adapting it to different situations. The structure of the course helped students ascend rapidly through the active learning spiral. A post-survey of the course showed a significant improvement in students' belief that they could apply what they learned. This course tapped the emotional power of music to drive students up the active learning spiral with great momentum. The open-ended fantasy assignments motivated students without a music background to pick up the basics as they explored, and allowed students with a strong music background to look at music in a new way. It allowed everyone to have a great learning experience.

Paper (preprint)


[/expand]


[expand title=”Adrien Bitton (IRCAM); Philippe Esling (IRCAM); Tatsuya Harada (The University of Tokyo / RIKEN) – Neural Granular Sound Synthesis“]
Contact : bitton@ircam.fr; esling@ircam.fr; harada@mi.t.u-tokyo.ac.jp
Presentation format: In person

Abstract


Granular sound synthesis is a popular audio generation technique based on rearranging sequences of small waveform windows. In order to control the synthesis, all grains in a given corpus are analyzed through a set of acoustic descriptors. This provides a representation reflecting some form of local similarities across the grains. However, the quality of this grain space is bound by that of the descriptors. Its traversal is not continuously invertible to signal and does not render any structured temporality.
We demonstrate that generative neural networks can implement granular synthesis while alleviating most of its shortcomings. We efficiently replace its audio descriptor basis by a probabilistic latent space learned with a Variational Auto-Encoder. A major advantage of our proposal is that the resulting grain space is invertible, meaning that we can continuously synthesize sound when traversing its dimensions. It also implies that original grains are not stored for synthesis. To learn structured paths inside this latent space, we add a higher-level temporal embedding trained on arranged grain sequences.
The model can be applied to many types of libraries, including pitched notes or unpitched drums and environmental noises. We experiment with the common granular synthesis processes and enable new ones.
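
As a toy illustration of why an invertible grain space matters, one can decode points along a path between two grain encodings and overlap-add the results. This sketch is not the authors' VAE or temporal embedding; `decode` is a stand-in for a trained decoder:

```python
import numpy as np

def latent_path(z_start, z_end, n_grains):
    """Straight-line path between two grain encodings; every point on the
    path decodes to a new grain, which is impossible with raw descriptors."""
    alphas = np.linspace(0.0, 1.0, n_grains)[:, None]
    return (1.0 - alphas) * z_start[None, :] + alphas * z_end[None, :]

def synthesize(decode, z_start, z_end, n_grains=64, hop=256):
    """Decode each latent point to a grain and overlap-add with a Hann window."""
    grains = [decode(z) for z in latent_path(z_start, z_end, n_grains)]
    grain_len = len(grains[0])
    out = np.zeros(hop * (n_grains - 1) + grain_len)
    for i, g in enumerate(grains):
        out[i * hop : i * hop + grain_len] += g * np.hanning(grain_len)
    return out
```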

Paper (preprint)


[/expand]


[expand title=”Jon Nelson () – MVerb and MVerb3D: Modified Waveguide Mesh Reverb Plugins“]
Contact : jon.nelson@unt.edu
Presentation format: In person

Abstract


MVerb and MVerb3D are plugins that are based on five-by-five 2D and four-by-four-by-three 3D modified waveguide meshes. Developed in Csound within the Cabbage framework, they are highly flexible reverbs that can generate compelling and unique effects ranging from traditional spaces to infinitely morphing spaces as well as the simulation of metallic plates or cymbals. The plugins incorporate a variety of filters for timbral control, optional delay randomization to create more unusual effects, and capacity for eight-channel audio input and output with user-defined taps into the mesh.
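
For readers unfamiliar with waveguide meshes, a minimal rectilinear 2D mesh update (lossless interior nodes only; the plugins' filters, damping and boundary handling are not shown) looks like this:

```python
import numpy as np

def mesh_step(v_prev, v_curr):
    """One time step of a lossless rectilinear 2D waveguide mesh:
    v[n+1] = 0.5 * (sum of the four neighbours at time n) - v[n-1]."""
    v_next = np.zeros_like(v_curr)
    v_next[1:-1, 1:-1] = (0.5 * (v_curr[2:, 1:-1] + v_curr[:-2, 1:-1]
                                 + v_curr[1:-1, 2:] + v_curr[1:-1, :-2])
                          - v_prev[1:-1, 1:-1])
    return v_next

# a 5-by-5 mesh as in MVerb: excite the centre node and read one output tap
v_prev, v_curr = np.zeros((5, 5)), np.zeros((5, 5))
v_curr[2, 2] = 1.0
tap = []
for _ in range(1000):
    v_next = mesh_step(v_prev, v_curr)
    tap.append(v_next[1, 3])                 # arbitrary read position
    v_prev, v_curr = v_curr, v_next
```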

Paper (preprint)

Video abstract


[/expand]


[expand title=”James Dooley (The Open University); Simon Hall (Integra Lab, Royal Birmingham Conservatoire) – MYtrOmbone: exploring gesture controlled live electronics in solo trombone performance“]
Contact : james.dooley@open.ac.uk; simon.hall@bcu.ac.uk
Presentation format: Virtual

Abstract


MYtrOmbone is an interactive system that empowers a trombonist to manipulate live electronic processing through analysis of the engaged slide position. The system was designed and developed at Integra Lab using a Thalmic Labs Myo armband worn on the right forearm as a control device. MyoMapper software translates armband data to OSC messages, and a machine learning system extrapolates trombone slide positions from the incoming OSC data. These positions are then mapped in Integra Live software to audio signal processing that is controlled by the trombonist.

This paper presents the musical context that led to the development of the system, outlines how the system works and its application within 146 Lucina, the first piece composed for the first incarnation of the software, and suggests further work to develop and refine the system in the future.

Paper (preprint)

Video abstract


[/expand]


[expand title=”Jesse Austin-Stewart (Massey University); Bridget Johnson (Massey University) – Spatial System Design As A Spatio-Compositional Strategy“]
Contact : jhjaustin@gmail.com; B.D.Johnson@massey.ac.nz

Abstract


Spatio-compositional strategies are frequently employed in electroacoustic music in conjunction with spatial loudspeaker systems when creating new works. There is much literature detailing a range of these strategies and describing spatio-compositional approaches composers may use when developing a new piece. This paper addresses a gap in that literature, explaining why spatial system design and construction should also be considered a spatio-compositional strategy. This research encourages spatial system design to be a key consideration when concerned with the spatial features of a work, while also encouraging a move away from the regular use of standardized loudspeaker systems.

Paper (preprint)


[/expand]


[expand title=”Tsung-Ching Liu (Chinese Culture University); Wan Tin Lin (Chinese Culture University) – The Real-time Synthesis of the Ancient Chinese Chime-Bells Instrument of Marquis Yi in Max/Msp Using FZ-ARMA Model“]
Contact : gatecomm@icloud.com; oo_3105@yahoo.com.tw
Presentation format: In person

Abstract


In a previous study, we dealt with the real-time synthesis of the ancient Chinese chime-bell instrument of Marquis Yi in Max/MSP for tuning-justification purposes, where two models, the Direct Sinusoid Generator (DSG) and FZ (Frequency Zooming)-ARMA(4,6), were suggested and tested. Although Matlab simulation showed that the FZ-ARMA(4,6) model has better fidelity than the DSG model, we had difficulty implementing it in Max/MSP using the traditional direct form II structure, because computation error propagates in its long feedback loops and eventually renders the system unstable. With the limited computation power available in Max/MSP, the DSG model is nevertheless workable because it uses only parallel biquad blocks. Motivated by that, this paper shows that we are finally able to run the FZ-ARMA(4,6) model in Max/MSP simply by further decomposing the derived direct form II structure into multiple parallel biquad blocks using partial fraction decomposition. We offer the detailed design procedure followed by simulations.
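
The decomposition step described above can be prototyped offline with a partial fraction expansion. A hedged sketch follows, assuming all poles occur in complex-conjugate pairs, as with bell resonances (repeated or purely real poles are not handled):

```python
import numpy as np
from scipy.signal import residuez

def parallel_biquads(b, a):
    """Expand H(z) = B(z)/A(z) into parallel second-order sections by pairing
    the complex-conjugate terms of its partial fraction expansion."""
    r, p, k = residuez(b, a)
    sections, used = [], set()
    for i, (ri, pi) in enumerate(zip(r, p)):
        if i in used:
            continue
        j = next(j for j in range(len(p))
                 if j not in used and j != i and np.isclose(p[j], np.conj(pi)))
        used.update({i, j})
        # r/(1 - p z^-1) plus its conjugate term -> one biquad with real coefficients
        b_sec = [2.0 * ri.real, -2.0 * (ri * np.conj(pi)).real, 0.0]
        a_sec = [1.0, -2.0 * pi.real, abs(pi) ** 2]
        sections.append((b_sec, a_sec))
    return sections, k          # k holds any direct-path (FIR) terms
```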

Paper (preprint)


[/expand]


[expand title=”Jack Woodbury (Victoria University of Wellington) – CORROSE: The Loudspeaker’s Compositional Influence Acoustically Signified“]
Contact : jakwoodbury@gmail.com
Presentation format: Virtual

Abstract


This paper discusses the loudspeaker’s role in the acousmatic compositional process. First, the loudspeaker is characterized as a disruptive force, its influence distorting and warping an input signal. This distortion limits the composer’s control. Second, a series of related works are examined. CORROSE, an audio-visual installation developed by the author, is then presented as a method by which the loudspeaker’s influence may be acoustically signified. CORROSE uses damaged and augmented speaker drivers to disrupt the spectral and spatial acuity of a series of fixed electroacoustic compositions. Through this, the installation speaks to the role of the loudspeaker in the acousmatic compositional process.

Paper (preprint)


[/expand]


[expand title=”Leandro Garber (MUNTREF Arte y Ciencia); Tomás Ciccola (MUNTREF Arte y Ciencia); Juan Cruz Amusategui (MUNTREF Arte y Ciencia) – AudioStellar, an open source corpus-based musical instrument for latent sound structure discovery and sonic experimentation“]
Contact : leandrogarber@gmail.com; tomas.ciccola@gmail.com; juan.x.a@gmail.com
Presentation format: In person

Abstract


Generating a visual representation of the similarities among short audio clips is not only useful for organizing and exploring an audio sample library, but also opens up a new range of possibilities for sonic experimentation. We present AudioStellar, an open-source software that enables creative practitioners to create AI-generated 2D visualizations of their own audio corpus without programming or machine learning knowledge. Sound artists can play their input corpus by interacting with the learned latent space through a user interface that provides built-in modes to experiment with. AudioStellar can interact with other software by MIDI syncing, sequencing, adding audio effects, and more. Creating novel forms of interaction is encouraged through OSC communication or by writing custom C++ code using the provided framework. AudioStellar has also proved useful as an educational strategy in courses and workshops for teaching concepts of programming, digital audio, machine learning and networks to young students in the digital art field.
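
The core idea of mapping a sample library onto a 2D plane can be sketched with standard tools. This is not AudioStellar's actual pipeline, and feature extraction is left out:

```python
from sklearn.manifold import TSNE

def embed_corpus(feature_matrix, perplexity=30, seed=0):
    """Project one feature vector per audio clip onto 2D coordinates so that
    similar-sounding clips land close together, giving the kind of map a
    corpus explorer can display and play."""
    return TSNE(n_components=2, perplexity=perplexity,
                random_state=seed).fit_transform(feature_matrix)

# usage: features.shape == (n_clips, n_features)  ->  xy.shape == (n_clips, 2)
# xy = embed_corpus(features)
```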

Paper (preprint)

Video abstract




[/expand]


[expand title=”Mauricio Rodriguez (San Francisco Conservatory of Music) – MEANING THE SCORE: FROM CODE AND PERFORMANCE TO SCORE “]
Contact : marod@ccrma.stanford.edu
Presentation format: In person

Abstract


This paper describes the compositional approach to the work ‘Meaning The Score’, a series of graphic scores prototyped in SCORE (Smith 1985) and live-typeset with an old heavy-duty music typewriter. The ‘Musicwriter’ is performed as a musical instrument, amplified and processed on stage. The typeset notation is given to other musicians, who react to both the score and the performance that engraved the score in the first place. As a final iteration of the process, the resulting graphic scores are displayed in gallery settings due to their characteristic formatting.

Paper (preprint)


[/expand]


[expand title=”Thales Roel Pessanha (UNICAMP); Thiago Roque (UNICAMP); Guilherme Zanchetta (UNICAMP); Lucas Pereira (UNICAMP); Gabrielly de Oliveira (UNICAMP); Bruna Pinheiro (UNICAMP); Renata Paulino (UNICAMP); Tiago Tavares (UNICAMP) – InFracta: the Body as an Augmented Instrument in a Collaborative, Multi-Modal Piece“]
Contact : thalesroel@hotmail.com; thiago.roque07@gmail.com; guilhermezanchettac@gmail.com; lucasbertoloto@outlook.com; limadeoliveira.gaby@gmail.com; bruu.cmp@gmail.com; r186523@dac.unicamp.br; tiagoft@gmail.com

Abstract


This paper discusses the creative process of the piece “InFracta: Dialogue Processes in a Multi-modal Environment”. The discussion concerns the dialogues between the dance, music, image, and technology knowledge domains, which were all present in the construction of the piece’s poetics. The interaction between these domains fostered a resignification of the dancers’ gestures, so that their bodies interacted with a sound environment as if they were augmented instruments. This discussion adds to previous work on technology-mediated multi-modal art, especially concerning the contribution and the emergence of meaning related to each of the knowledge domains involved in the piece.

Paper (preprint)

Video abstract




[/expand]


[expand title=”Bing Yen Chang (The Hong Kong University of Science and Technology); Hiu Ting Chan (HKUST); Andrew Horner (HKUST) – The Effects of Pitch, Dynamics, and Vowel on the Emotional Characteristics of the Tenor Voice“]
Contact : bychang@connect.ust.hk; htchanai@connect.ust.hk; horner@cse.ust.hk
Presentation format: In person

Abstract


Previous research on the Soprano voice has shown that emotional characteristics change with different Pitch, Dynamics, and Vowel. This work considers a similar investigation with the Tenor voice. Listening tests were conducted whereby listeners gave absolute judgements on Tenor voice tones over 10 emotional categories, and the data were analyzed with logistic regression. The results confirmed that high-Arousal categories (Happy, Heroic, Comic, Angry, Scary) were stronger for loud notes, while low-Arousal categories (Romantic, Calm, Mysterious, Shy, Sad) were stronger for soft notes. Most categories had an upward trend across the pitch range, while the low-Arousal categories Calm, Sad, and Mysterious had a downward trend. Vowel A was ranked the highest for Happy, Heroic, Comic, and Angry, while Vowel U was ranked the highest for Romantic, Calm, Shy, Scary, Sad, and Mysterious. Overall, the effect of Dynamics was approximately twice as strong as Pitch and Vowel. Pitch was slightly stronger than Vowel. The emotional characteristics of both voices were in basic agreement regarding Dynamics and Vowel, but only overlapped over a limited range of pitches. These results give a fresh perspective on how Vowel shapes emotional expression in the singing voice and on its relative importance compared to Pitch and Dynamics.

[/expand]


[expand title=”Thiago Roque (UNICAMP); Rafael Mendes (UNICAMP) – Timbre Manipulation from Audio Features Based on Fractal Additive Synthesis“]
Contact : thiago.roque07@gmail.com; rafael@dca.fee.unicamp.br
Presentation format: In person

Abstract


The search for new sound synthesis techniques has gained considerable impulse from advances in Music Information Retrieval (MIR). From the concept of audio features, introduced by MIR, a new idea of sound synthesis has emerged based on the manipulation of high-level parameters that are more directly involved with perceptual aspects of sound. In this article, we present the initial version of a software tool for feature modulation based on the Fractal Additive Synthesis technique, and report the results of using this software to support the teaching of audio features in an undergraduate music technology class for musicians at the University of Campinas. Four audio features were chosen for this research: spectral centroid, even-to-odd harmonic energy ratio, mean harmonic band Hurst exponent, and the harmonic band correlation coefficient.
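
Two of the four features have compact, commonly used definitions; a minimal sketch of those (the Hurst exponent and band correlation features specific to Fractal Additive Synthesis are omitted):

```python
import numpy as np

def spectral_centroid(mag, freqs):
    """Amplitude-weighted mean frequency of a magnitude spectrum (Hz)."""
    return np.sum(freqs * mag) / np.sum(mag)

def even_to_odd_ratio(harmonic_amps):
    """Energy of even harmonics over energy of odd harmonics;
    harmonic_amps[0] is the fundamental (harmonic number 1)."""
    energy = np.asarray(harmonic_amps, dtype=float) ** 2
    return energy[1::2].sum() / energy[0::2].sum()
```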

Paper (preprint)

Video abstract




[/expand]


[expand title=”Christine Steinmeier (Bielefeld University of Applied Sciences); Dominic Becking (Bielefeld University of Applied Sciences) – Headmusic: Exploring the Potential of a Low-Cost Commercial EEG Headset for the Creation of NIMEs“]
Contact : csteinmeier@fh-bielefeld.de; dbecking@fh-bielefeld.de
Presentation format: In person

Abstract


In recent years, Electroencephalography (EEG) technology has evolved to the point where controlling software with the mind alone is no longer impossible. In addition, with the market introduction of various commercial devices, even private households can now afford to purchase a (simplified) EEG device. This unlocks new prospects for the development of user interfaces. People with severe physical disabilities in particular could benefit, both through easing common difficulties (e.g. in terms of mobility or communication) and for specific purposes such as making music. The goal of our work is to evaluate the applicability of a cheap, commercial EEG headset as an input device for new interfaces for musical expression (NIMEs). Our findings demonstrate that there are at least 7 input actions which can be unambiguously differentiated by machine learning and mapped to musical notes in order to play basic melodies.

[/expand]


[expand title=”Christophe LENGELE (Université de Montréal) – The story and the insides of a spatial performance tool: Live 4 Life“]
Contact : christophe.lengele@yahoo.fr
Presentation format: In person

Abstract


This paper describes the beginning of the story and the insides of a sound performance tool implemented in SuperCollider, whose objective is to provide ergonomic spatiotemporal and spectral control over numerous sound objects in real time, in order to swing between spatialized textures and polyrhythms.

After a brief review of some spatial / sequencing / performance tools with a graphical user interface built with SuperCollider, the incentives behind the creation of another new tool are explained, and development strategies and more recent considerations between coding and composing are discussed from a performer’s perspective.

This spatial performance tool is finally detailed by focusing on the composition process and structure of one of its core parameters: space (motion and rendering algorithm). A way of setting up a library of predefined spatialization models, with dynamic and quick selection among rendering algorithms as well as concrete and more abstract spatial composition techniques, is proposed and detailed practically through the combination of different parameter modules.

Paper (preprint)

Video abstract




[/expand]


[expand title=”Ken Paoli (College of DuPage) – Phil Winsor’s Musical Poetics: Music is Nothing, Music is Nowhere, Music is Nothing.“]
Contact : paolik@cod.edu
Presentation format: In person

Abstract


Phil Winsor worked as both a sound and visual artist and photographer throughout his lifetime (1938-2012). While leaving a large output of sound and video works, and photographic images, Winsor was also an active author and computer programmer writing books concerning computer music composition and computer programs reflecting his algorithmic compositional aesthetic. In his last creative period, he penned a Poetics of Music in two versions. This paper concerns itself with Winsor’s Musical Poetics and his views on music in Academia.

Paper (preprint)

Video abstract


[/expand]


[expand title=”Emilio Rojas (Pontificia Universidad Católica de Chile); Rodrigo Cadiz (Pontificia Universidad Catolica de Chile) – DDSS: a diffusion-based extension of Xenakis’ dynamic stochastic synthesis“]
Contact : elrojas1@uc.cl; rcadiz@uc.cl
Presentation format: In person

Abstract


Dynamic Stochastic Synthesis (DSS), proposed by Iannis Xenakis in 1992, consists in the stochastic variation of a waveform on each cycle. Inspired by this method, a Diffusion Dynamic Stochastic Synthesis (DDSS) method is proposed in this article, which solves a diffusion equation to simulate the process with particles and maps their positions to amplitude values of a waveform. An implementation in Max/MSP is also shown and the results are compared with a simplified version of conventional DSS by analyzing the spectra of the signals generated by both methods; despite the similarity in the overall form of the frequency analysis, noticeable differences are found. A musical application of the proposed method, in the form of an instrument, is also presented to show its potential. The article concludes by summarizing the work, analyzing the main results, and discussing future work.
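
For reference, conventional DSS in its simplest form random-walks the breakpoints of a waveform from cycle to cycle. The following is a minimal sketch of that baseline (amplitudes only, fixed segment durations); the diffusion-based particle simulation of DDSS is not reproduced here:

```python
import numpy as np

def dss(n_cycles=200, n_breakpoints=12, samples_per_segment=20, step=0.05, seed=0):
    """Simplified dynamic stochastic synthesis: the amplitudes of a fixed set
    of waveform breakpoints take a bounded random walk from one cycle to the
    next, and each cycle is rendered by linear interpolation between them."""
    rng = np.random.default_rng(seed)
    amps = rng.uniform(-0.5, 0.5, n_breakpoints)
    out = []
    for _ in range(n_cycles):
        amps = np.clip(amps + rng.uniform(-step, step, n_breakpoints), -1.0, 1.0)
        pts = np.append(amps, amps[0])                    # close the cycle
        for a0, a1 in zip(pts[:-1], pts[1:]):
            out.append(np.linspace(a0, a1, samples_per_segment, endpoint=False))
    return np.concatenate(out)
```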

Paper (preprint)


[/expand]


[expand title=”Jiayue Wu (University of Colorado Denver) – Designing and Practicing Embodied Sonic Meditation “]
Contact : cecilia.wu@ucdenver.edu
Presentation format: In person

Abstract


This article narrates my practice-based research in Embodied Sonic Meditation, as a Digital Musical Instrument (DMI) designer, vocalist, composer, media artist, and long-term meditation practitioner. I develop a methodological framework for Embodied Sonic Meditation practice with three case studies, using a sensor-augmented body as an instrument to create sound and sonic awareness. First, I suggest 15 design principles for Embodied Sonic Meditation HCI systems, based on previous research in DMI design principles, neuroscience research in meditation, Csikszentmihalyi’s Flow Theory, and the criteria of efficiency, music subjectivity, affordance, cultural constraints, and meaning-making. I then make reference to three proof-of-concept case studies that have explored these ideas. I argue that Embodied Sonic Meditation affords an opportunity for sound art to mediate cultures, improve people’s wellbeing, and better connect people to their inner peace and the outer world.

Paper (preprint)


[/expand]


[expand title=”Stefano Kalonaris (Riken); Eric Nichols (Zillow Group); Gianluca Micchi (CRIStAL, UMR 9189, CNRS, Universite de Lille); Anna Aljanaki (University of Tartu) – Modeling Baroque Two-Part Counterpoint with Neural Machine Translation“]
Contact : stefanokalonaris@gmail.com; epnichols@gmail.com; gianluca.micchi@gmail.com; aljanaki@gmail.com
Presentation format: Virtual

Abstract


We propose a system for contrapuntal music generation based on a Neural Machine Translation (NMT) paradigm. We consider Baroque counterpoint and are interested in modeling the interaction between any two given parts as a mapping between a given source material and an appropriate target material. As in translation, the former imposes some constraints on the latter, but does not define it completely. We collate and edit a bespoke dataset of Baroque pieces, use it to train an attention-based neural network model, and evaluate the generated output via BLEU score and musicological analysis. We show that our model is able to respond with some idiomatic trademarks, such as imitation and appropriate rhythmic offset, although it falls short of having learned stylistically correct contrapuntal motion (e.g., avoidance of parallel fifths) or stricter imitative rules, such as canon.

Paper (preprint)

Video abstract


[/expand]


[expand title=”Arthur Tofani (University of São Paulo); Marcelo Queiroz (University of São Paulo) – Using tf-idf and cosine similarity into Shazam-based fingerprinting algorithms“]
Contact : gramofone@gmail.com; mqz@ime.usp.br
Presentation format: In person

Abstract


This study demonstrates the application of text retrieval techniques to Shazam-like fingerprinting algorithms. The goal is to filter query input hashes using tf-idf as a measure of relevance in order to reduce the number of records returned by the database. As accuracy could potentially be affected by this filtering approach, we investigate these requirements together by looking for a filtering threshold t that produces reduced database response payloads with minimal impact on accuracy. We also discuss the use of cosine similarity as an alternative to the original Shazam scoring method, given that the latter restricts the algorithm's robustness against time distortions. We demonstrate that the application of these techniques outperforms the original algorithm's description on many different datasets.
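
A hedged sketch of the two text retrieval ingredients applied to fingerprint hashes (hash extraction and the database layer are assumed to exist elsewhere; names are ours):

```python
import math
from collections import Counter

def tfidf_vector(hashes, doc_freq, n_tracks):
    """tf-idf weights for the fingerprint hashes of one query or track."""
    tf = Counter(hashes)
    return {h: tf[h] * math.log(n_tracks / (1 + doc_freq.get(h, 0))) for h in tf}

def cosine(u, v):
    """Cosine similarity between two sparse hash->weight vectors."""
    dot = sum(w * v.get(h, 0.0) for h, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def filter_query(hashes, doc_freq, n_tracks, t):
    """Keep only the query hashes whose tf-idf weight exceeds threshold t."""
    weights = tfidf_vector(hashes, doc_freq, n_tracks)
    return [h for h in hashes if weights[h] > t]
```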

Paper (preprint)


[/expand]


[expand title=”Zlatko Baracskai (University of the West of England) – Let-act: Challenges of Collaborative Multimedia Performances“]
Contact : zlatko.baracskai@uwe.ac.uk
Presentation format: In person

Abstract


This paper is born out of experiences in the technical and artistic avenues of multimedia art. The plurality of media often calls for artists and technicians to collaborate towards sometimes undefined goals, to experiment. Certain recurrences in such endeavors have led to formulating ideas that seem to have a broader applicability in contemporary art critique. A performance characterized by the proposed trends I shall call a let-act, and I will try to paint a picture of how inevitable it actually is. The reasons for choosing an initially vague name are many and will unfold in the body of this essay. Suffice it to say for now that the main difference from a traditional performance is the dissolution of content significance. In this sense I am proposing that major changes are happening as performers publicly present art that goes beyond being fixed, beyond open, into a new realm. This new art can also be described as arbitrary amalgamations that tick the boxes of funding bodies, and that refrain from challenging their audiences.

Paper


[/expand]


[expand title=”Maxime Poret (LaBRI – Université de Bordeaux); Sébastien Ibarboure (ESTIA); Matthias Robine (LaBRI, University of Bordeaux); Emmanuel Duc (SIGMA Clermont); Nadine Couture (ESTIA); Myriam Desainte-Catherine (LaBRI, University of Bordeaux); Catherine Semal (Bordeaux INP ENSC) – Sonification for 3D Printing Process Monitoring“]
Contact : maxime.poret@labri.fr; s.ibarboure@estia.fr; matthias.robine@labri.fr; emmanuel.duc@sigma-clermont.fr; n.couture@estia.fr; myriam@labri.fr; catherine.semal@ensc.fr
Presentation format: In person

Abstract


In order to monitor a 3D printing industrial process in a context of sensory overload and potential inattentional deafness, we designed a sonification of the information sent by the printer. This sonification focuses not only on proper communication of the system's state, but also on lowering the amount of stress usually induced by prolonged listening. To this end, we used a combination of synthetic and natural sounds whose perceptual properties were modulated according to the data influx using parameter mapping. An experiment was then conducted on the recognition of various normal and abnormal behaviours, also allowing the participants to assess the amount of stress they experienced while listening. The results are quite promising, but also highlight a confusing overlap in the natural sounds used, which will need to be fixed in future iterations. For now, tester opinion regarding stress is mostly positive. However, listening times may need to be longer in further experimentation to better assess how stressful this sonification is.
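
In its simplest form, the parameter mapping mentioned above is a clamped transfer from a data range to a synthesis range. An illustrative sketch with made-up ranges (not the authors' actual mapping):

```python
def lin_map(value, in_lo, in_hi, out_lo, out_hi):
    """Clamp a sensor value to its expected range and map it linearly onto a
    synthesis parameter range."""
    value = min(max(value, in_lo), in_hi)
    norm = (value - in_lo) / (in_hi - in_lo)
    return out_lo + norm * (out_hi - out_lo)

# e.g. map a hypothetical nozzle temperature (180-260 C) onto the pitch of a drone
# pitch_hz = lin_map(temperature, 180.0, 260.0, 110.0, 440.0)
```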

Paper (preprint)

Video abstract




[/expand]


[expand title=”Alejandro Albornoz (Universidad Austral de Chile) – Conjunction of aesthetics through computer music techniques in the acousmatic piece Sheffield 17 “]
Contact : akousma.viva@gmail.com
Presentation format: In person

Abstract


This paper describes the compositional process of the acousmatic piece Sheffield 17, created in 2017 as a tribute to the Chilean electronic music pioneer José Vicente Asuar. In doing so, the text presents the different techniques used, which include live coding, deferred-time sound processing and sampling. These techniques have proven to be carriers of specific aesthetic implications that conclude in a hybrid and heterodox acousmatic piece.
Keywords: acousmatic composition, computer music techniques, live coding, voice.

Paper (preprint)

Video abstract




[/expand]


[expand title=”gabriel duran (Pontificia Universidad Catolica de Chile); Patricio De La Cuadra (Pontificia Universidad Catolica de Chile); Domingo Mery (Pontificia Universidad Católica de Chile) – COMPARISON OF METHODS FOR BASS LINE ONSET DETECTION“]
Contact : geduran@uc.cl; pcuadra@uc.cl; dmery@uc.cl
Presentation format: In person

Abstract


In popular music, the bass line tends to include relevant information about the chord sequence, and thus segmenting musical audio data by bass notes can be used as a mid-level step to improve later higher-level analysis, such as chord detection and music structure analysis. In this paper, we present a comparison between four methods for detecting bass line onsets. The first method uses a multipitch detection algorithm to find the lowest note boundaries. The second method searches for spectral differences in a low frequency range. The third uses Convolutional Neural Networks (CNN) and the fourth Recurrent Neural Networks (RNN). These methods are trained and tested on a MIDI-rendered audio database, and standard evaluation metrics for detection problems are used, as well as a temporal accuracy for each method. The results are also compared to other onset detection systems, showing that the deep learning based methods have better performance and time accuracy. We believe that our work comparing standard approaches provides a useful insight into how onset detection methods can be adapted to specific kinds of onsets.
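
The spirit of the second method, restricting an onset detection function to the bass register, can be sketched as a low-band spectral flux. This is an illustrative baseline, not the paper's exact formulation or its neural methods:

```python
import numpy as np

def low_band_spectral_flux(audio, sr, n_fft=2048, hop=512, f_max=250.0):
    """Onset detection function restricted to low frequencies: half-wave
    rectified frame-to-frame magnitude difference, summed below f_max."""
    n_frames = 1 + (len(audio) - n_fft) // hop
    window = np.hanning(n_fft)
    k_max = int(f_max * n_fft / sr)                  # highest bin kept
    prev = np.zeros(k_max)
    flux = np.zeros(n_frames)
    for i in range(n_frames):
        frame = audio[i * hop : i * hop + n_fft] * window
        mag = np.abs(np.fft.rfft(frame))[:k_max]
        flux[i] = np.sum(np.maximum(mag - prev, 0.0))   # count rises only
        prev = mag
    return flux   # peaks above an adaptive threshold are candidate bass onsets
```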

Paper (preprint)


[/expand]


[expand title=”Diego Espejo (Instituto de Acústica, Universidad Austral de Chile); Victor Poblete (Instituto de Acústica, Universidad Austral de Chile); Pablo Huijse (Universidad Austral de Chile); Felipe Otondo () – High-Performance Tools to Generate and Visualize a Sonic Time-Lapse“]
Contact : diego.espejoa@gmail.com; vpoblete@uach.cl; phuijse@inf.uach.cl; felipe.otondo@uach.cl
Presentation format: In person

Abstract


This paper describes an algorithm to generate a Sound-Time Lapse (STL), i.e., an audio capsule or acoustic summary that aims to reliably represent continuous, long audio signals recorded in wetlands of the Chilean city of Valdivia. The algorithm's input is a long duration audio signal (>20 hours), from which representative chunks are extracted and merged with minimum audible distortion to produce a brief audio file (4-6 minutes). The algorithm is a flexible and adaptable tool; its input parameters can be adjusted by the user to highlight specific portions of the original long recordings. Modern libraries focusing on high performance computing were considered for the implementation of the algorithm. The project aims to highlight the importance of the natural heritage of Valdivia's wetlands and bring them closer to the general public. Additionally, we recognize the opportunity to use STLs in scientific works so as to improve our understanding of the biological diversity present in the natural sound composition of wetlands.

Paper (preprint)

Video abstract




[/expand]


[expand title=”Miller Puckette (University of California, San Diego); Kerry Hagan () – Hand gesture to timbre converter“]
Contact : msp@ucsd.edu; Kerry.Hagan@ul.ie
Presentation format: In person

Abstract


In the design of a novel computer music instrument that uses the Leap Motion game controller to generate and play computer-generated sounds in real time, we consider the specific affordances that hand shapes bring to the control of time-varying timbres. The resulting instrument was used in a new piece, (anonymized), performed as a live duet. Central to our approach are: a geometric exploration of the shape of hands themselves; a consideration of accuracy and speed of limb motion; and an appropriately designed sound generator and control parameter space.

Paper (preprint)


[/expand]


[expand title=”Rebecca Brown (University of Virginia); Juan Vasquez (University of Virginia) – Autoethnography and Emotional Exposure as an Approach for Electroacoustic Music Composition“]
Contact : rlb9fd@virginia.edu; jcv3qj@virginia.edu
Presentation format: In person

Abstract


In a contemporary context, works that are highly specific to the artist's life are commonplace in fields such as performance art. However, this approach is only tangentially employed when it comes to artistic applications of music technology. This paper explores the potential of composing electroacoustic music in the form of ‘music autoethnographies,’ a form of qualitative research that aims to systematically analyze personal experience to understand a specific context. We document our specific approach by contrasting the existing literature and related art pieces with the work of American composer Becky Brown.

Paper (preprint)

Video abstract




[/expand]


[expand title=”Vesa Norilo (University of the Arts Helsinki); Andrew Brown (Griffith University) – Pi-Shaker: A New Workflow for Augmented Instruments“]
Contact : vnorilo@gmail.com; andrew.r.brown@griffith.edu.au
Presentation format: Virtual

Abstract


We present a project that explores the application of efficient digital signal processing techniques for interactive music applications across a range of devices and platforms, focusing on visual programming. As a test case, we implement physically informed models of sound synthesis and sound spatialisation that can respond in real time to performative gestures. We compare the strengths and weaknesses of implementations in several languages and how they can be integrated to best take advantage of these differences.

Paper (preprint)


[/expand]


[expand title=”Jose Lopez (Universidad Nacional de Música) – Reconfiguring Instrumental Performance in Perú: Ensamble de Laptops de la Universidad Nacional de Música – ELUNM.“]
Contact : jlopezr@unm.edu.pe
Presentation format: In person

Abstract


This paper reports on the paradigm-shifting processes produced and activated by the founding of the first Peruvian laptop ensemble in 2019: Ensamble de Laptops de la Universidad Nacional de Música – ELUNM. This ensemble was established within the context of the newly inaugurated Laboratorio de Música Electroacústica y Arte Sonoro at the Universidad Nacional de Música (ex Conservatorio Nacional de Música), and represents the partial results of specific actions taken over a period of ten years (2010–2020) to introduce themes related to technology-based sound arts in Peru on different fronts, including public and private institutions. In this paper we present the activities of the ELUNM as they represent the successes and failures in a process of establishing sustainable development patterns for teaching musical technology in a country with a long history of shortcomings regarding the implementation of technology-based musical practices.

Video abstract




[/expand]


[expand title=”Fadi AL-GHAWANMEH (University of Jordan); Melissa SCOTT (University of California); Mohamed Amine MENACER (University of Lorraine); Kamel Smaïli (University of Lorraine) – Predicting and Critiquing Machine Virtuosity: Mawwal Accompaniment as Case Study“]
Contact : fadighawanmeh@yahoo.com; melissajunes@gmail.com; mohamed-amine.menacer@loria.fr; Kamel.Smaili@loria.fr
Presentation format: Virtual

Abstract


The evaluation of machine virtuosity is critical to improving the quality of virtual instruments, and may also help predict future impact. In this contribution, we evaluate and predict the virtuosity of a statistical machine translation model that provides an automatic responsive accompaniment to mawwal, a genre of Arab vocal improvisation. As an objective evaluation used in natural language processing (BLEU score) did not adequately assess the model's output, we focused on subjective evaluation. First, we culturally locate virtuosity within the particular Arab context of tarab, or modal ecstasy. We then analyze listening test evaluations, which suggest that the corpus size needs to increase to 18K for machine and human accompaniment to be comparable. We also posit that the relationship between quality and inter-evaluator disagreement follows a higher-order polynomial function. Finally, we gather suggestions from a musician in a user experience study for improving machine-induced tarab. We were able to infer that the machine's lack of integration into tarab may be due, in part, to its dependence on a tri-gram language model, and instead suggest using a four- or five-gram model. In the conclusion, we note the limitations of language models for music translation.

Paper


[/expand]


[expand title=”Ka-wing Ho (The Chinese University of Hong Kong); Yiu Ling (The Chinese University of Hong Kong); Chuck-jee Chau (The Chinese University of Hong Kong) – Guitar Virtual Instrument using Physical Modelling with Collision Simulation“]
Contact : 1155085718@link.cuhk.edu.hk; 1155092438@link.cuhk.edu.hk; chuckjee@cse.cuhk.edu.hk
Presentation format: In person

Abstract


We have created a guitar virtual instrument by simulating string vibration using a finite difference method to solve a modified one-dimensional wave equation with damping and stiffness, along with a collision system that allows the simulated guitar to perform a variety of articulations. Convolution with impulse response is also used to enhance the realism of the sound. The core model design, implementation approach and the optimization techniques are presented in this paper.
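
A minimal sketch of the kind of explicit update such a model uses, for a damped, stiff string with fixed ends (the collision system, articulations and convolution stage described in the paper are not included, and parameters must respect the scheme's stability condition):

```python
import numpy as np

def string_step(u, u_prev, lam2, mu2, sigma0, k):
    """One explicit finite-difference step of u_tt = c^2 u_xx - kappa^2 u_xxxx
    - 2*sigma0*u_t with fixed ends; lam2 = (c*k/h)^2, mu2 = (kappa*k/h^2)^2,
    k = time step, u and u_prev are the two previous displacement grids."""
    lap = u[2:] - 2.0 * u[1:-1] + u[:-2]                       # discrete u_xx
    bih = np.zeros_like(u)
    bih[2:-2] = u[4:] - 4.0 * u[3:-1] + 6.0 * u[2:-2] - 4.0 * u[1:-3] + u[:-4]
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2.0 * u[1:-1] - (1.0 - sigma0 * k) * u_prev[1:-1]
                    + lam2 * lap - mu2 * bih[1:-1]) / (1.0 + sigma0 * k)
    return u_next

# usage: initialise u with a pluck shape, copy it into u_prev, then call
# string_step once per sample and read the displacement near the bridge.
```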

Paper

Video abstract


[/expand]


[expand title=”Miriam Akkermann (TU Dresden) – Vocabulary ruts in Mixed Music – multifarious terms with many ascriptions. “]
Contact : miriam.akkermann@tu-dresden.de
Presentation format: In person

Abstract


This paper gathers implicit ascriptions and definitions related to the terms ‘computer,’ ‘tape,’ and ‘electronic’ as used to describe the technical instrumentation of mixed music compositions from the 1970s to the 1990s. The aim is to frame terminological problems and to outline the challenges that these inaccuracies create for later re-performances.

Paper (preprint)


[/expand]


[expand title=”Tian Cheng (National Institute of Advanced Industrial Science and Technology (AIST)); Satoru Fukayama (National Institute of Advanced Industrial Science and Technology (AIST)); Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST)) – Joint Beat and Downbeat Tracking Based on CRNN Models and a Comparison of Using Different Context Ranges in Convolutional Layers“]
Contact : tian.cheng@aist.go.jp; s.fukayama@aist.go.jp; m.goto@aist.go.jp
Presentation format: Virtual

Abstract


In this paper, we address joint beat and downbeat tracking using Convolutional-Recurrent Neural Networks (CRNNs). The model consists of four convolutional layers and four bi-directional recurrent layers. In order to deal with music in various styles, we propose increasing the convolution filter sizes in the convolutional layers, which helps obtain more context information. We compare four different filter sizes (covering 3 to 9 frames) to analyse the effect of context on ten individual datasets. The mean cross-validation results over eight datasets show that context ranges of 5 and 7 frames perform better on downbeat tracking than the other context ranges. The comparison results on two testing-only datasets (an in-house pop dataset and the SMC dataset) show that the proposed CRNN model with a context range of 7 frames outperforms a previous state-of-the-art method.
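
A hedged PyTorch sketch of such a convolutional-recurrent stack; layer widths, input features, and the three-class output encoding are our assumptions rather than the paper's exact configuration, and `context_frames` corresponds to the filter sizes compared in the paper:

```python
import torch
import torch.nn as nn

class BeatCRNN(nn.Module):
    """Four 1-D convolutions over spectrogram frames followed by four
    bidirectional GRU layers and per-frame logits (beat / downbeat / neither)."""
    def __init__(self, n_bins=81, context_frames=7, hidden=128):
        super().__init__()
        pad = context_frames // 2
        self.conv = nn.Sequential(
            nn.Conv1d(n_bins, 64, context_frames, padding=pad), nn.ELU(),
            nn.Conv1d(64, 64, context_frames, padding=pad), nn.ELU(),
            nn.Conv1d(64, 64, context_frames, padding=pad), nn.ELU(),
            nn.Conv1d(64, 64, context_frames, padding=pad), nn.ELU(),
        )
        self.rnn = nn.GRU(64, hidden, num_layers=4,
                          bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, 3)

    def forward(self, spec):                  # spec: (batch, frames, n_bins)
        x = self.conv(spec.transpose(1, 2)).transpose(1, 2)
        x, _ = self.rnn(x)
        return self.out(x)                    # per-frame class logits
```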

Paper


[/expand]


[expand title=”Federico Schumacher (Universidad Diego Portales); Vicente Espinoza (Universidad de Chile); Francisca Mardones (Universidad Diego Portales); Rodrigo Vergara (Universidad de Chile); Alberto Aranguiz (Universidad Diego Portales); Valentina Aguilera (Universidad de Chile) – Sounds in Motion. What We Do Perceive “]
Contact : federico.schumacher@gmail.com; vicente.espinoza@ug.uchile.cl; francisca.mardones1@mail.udp.cl; rodrigoc.vergara@gmail.com; alberto.aranguiz@mail.udp.cl; valentinaaguilera04@gmail.com
Presentation format: In person

Abstract


Sound spatialization is a technique utilized in diverse musical genres as well as in soundtrack production for films and videogames. In this context, specialized software has been developed which allows for the design of sound trajectories that we have classified as (a) basic movements or Image Schemas of Spatial Movement and (b) archetypal geometric figures. The aim of this study is to evaluate the perceptual recognition of some of these sound trajectories. An experiment was designed which consisted of listening to auditory stimuli and associating them with the mentioned categories of spatial movement. The results suggest that, in most cases, the ability to recognize moving sound is hindered when there are no visual stimuli present. Moreover, the results indicate that archetypal geometric figures are rarely perceived as such, and that the perception of sound movement in space can be organized in three spatial dimensions, Height, Depth and Width, as the literature on sound localization also confirms.

Video abstract


[/expand]


[expand title=”Ning Ma (University of Sheffield); Guy Brown (University of Sheffield); Paolo Vecchiotti (University of Sheffield) – AMI – Creating Coherent Musical Composition with Attention“]
Contact : n.ma@sheffield.ac.uk; g.j.brown@sheffield.ac.uk; p.vecchiotti@sheffield.ac.uk
Presentation format: In person

Abstract


We present AMI (Artificial Musical Intelligence), a deep neural network that can generate musical composition for various musical instruments and different musical styles with a coherent long-term structure. AMI uses a state-of-the-art attention-based deep neural network architecture to discover patterns of musical structures such as melodies, chords, and rhythm, from tens of thousands of MIDI files. We encode music data in a way that is similar to reading a music score, which enables the model to better capture music structures. Learning is done in an unsupervised manner, allowing exploitation of large collections of MIDI files that are available on the internet. As an autoregressive model, AMI predicts one musical note at a time, depending on not just the last note, but a long sequence of notes (up to thousands) from previous time steps. Furthermore, we enhance the learning of musical structures by adding embeddings at different time scales. As a result, the model is able to maintain a coherent long-term structure and even occasionally transition to a different movement. Output examples can be heard at https://meddis.dcs.shef.ac.uk/melody/samples.

Paper (preprint)


[/expand]


[expand title=”Vesa Norilo (University of the Arts Helsinki); Josué Moreno (University of the Arts Helsinki) – Aural Weather Etude: Installing Atmosphere“]
Contact : vnorilo@gmail.com; josue.moreno.prieto@uniarts.fi
Presentation format: In person

Abstract


The Aural Weather Etude is a collaborative work that explores the spatial dimension as the primary means of organizing music and the devolution of narrative agency to the audience, inspired by the wall drawings of Sol LeWitt. This paper presents the work, the related creative process and some novel computational techniques related to the efficient realization of a large number of sound sources in rapid spatial modulation and distance-based amplitude panning.
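
Distance-based amplitude panning, mentioned above, is commonly formulated as power-normalised inverse-distance gains. A sketch of that general form (parameter names and the spatial blur term are conventional choices, not taken from this paper):

```python
import numpy as np

def dbap_gains(source_xy, speaker_xy, rolloff=2.0, spatial_blur=0.1):
    """Each speaker gets a gain inversely related to its distance from the
    virtual source, normalised so total power stays constant; spatial_blur
    avoids a singularity when the source sits exactly on a speaker."""
    d = np.sqrt(np.sum((speaker_xy - source_xy) ** 2, axis=1) + spatial_blur ** 2)
    g = 1.0 / d ** (rolloff / 2.0)
    return g / np.sqrt(np.sum(g ** 2))       # unit total power

# usage: pan a source across a square of four speakers
speakers = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
gains = dbap_gains(np.array([0.25, 0.5]), speakers)
```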

Paper (preprint)


[/expand]


[expand title=”Masahiro Hamasaki (National Institute of Advanced Industrial Science and Technology (AIST)); Keisuke Ishida (National Institute of Advanced Industrial Science and Technology (AIST)); Tomoyasu Nakano (National Institute of Advanced Industrial Science and Technology (AIST)); Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST)) – Songrium RelayPlay: A Web-based Listening Interface for Continuously Playing User-generated Music Videos of the Same Song with Different Singers“]
Contact : masahiro.hamasaki@aist.go.jp; ksuke-ishida@aist.go.jp; t.nakano@aist.go.jp; m.goto@aist.go.jp
Presentation format: In person

Abstract


This paper describes “Songrium RelayPlay,” a Web-based user interface for continuously and seamlessly playing back music videos that contain voices of various vocalists singing the same song. Since famous songs often have cover (Me Singing) videos sung by various vocalists on video-sharing services, our interface automatically aligns those videos to their original song to provide a new experience of interactively switching vocalists while listening to the song. Our backend system collects a number of instances of such videos from the Web by means of a Web-mining technique and then our listening interface plays them in relays using signal processing technologies. Even if users listen to a song only once, they can enjoy various singing voices by switching vocalists phrase by phrase (relay-playing). We implemented and publicly launched Songrium RelayPlay where users can enjoy over 18,000 songs having 0.4 million derivative singing videos.

Paper

Video abstract


[/expand]


[expand title=”Cale Plut (Simon Fraser University); Philippe Pasquier (Simon Fraser University) – LazyVoice: A multi-agent approach to fluid voice leading“]
Contact : cplut@sfu.ca; pasquier@sfu.ca
Presentation format: In person

Abstract


We outline and describe the interactive LazyVoice system for realizing chord progressions into individual voices with fluid voice leading, inspired by choral voice leading techniques. Polyphonic music consists of multiple musical lines that, when taken together, form an implicit or explicit harmonic progression. While generative music systems exist that create harmonic progressions, these systems lack a means to translate the harmonic progression into individual polyphonic musical lines. We apply a technique used to improvise multiple-part harmony in choral settings to generate fluid musical lines from a harmonic progression. LazyVoice is a flexible voice leading system that translates abstracted harmonic progressions into multiple fluid musical lines.
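
LazyVoice itself is a multi-agent system; as a naive point of comparison, nearest-chord-tone voice leading can be sketched in a few lines (this does not guarantee complete chords and does not reproduce the paper's approach):

```python
def lead_voices(previous_voices, chord_pitch_classes):
    """Move each voice to the nearest pitch (in semitones) whose pitch class
    belongs to the next chord, keeping the individual lines fluid."""
    next_voices = []
    for note in previous_voices:
        candidates = [note + d for d in range(-11, 12)
                      if (note + d) % 12 in chord_pitch_classes]
        next_voices.append(min(candidates, key=lambda n: abs(n - note)))
    return next_voices

# usage: realize C -> F -> G over four voices given as MIDI note numbers
voices = [48, 55, 64, 72]                      # C3 G3 E4 C5
for chord in [{5, 9, 0}, {7, 11, 2}]:          # F major, then G major
    voices = lead_voices(voices, chord)
```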

Paper (preprint)

Video abstract




[/expand]


[expand title=”Zlatko Baracskai (University of the West of England) – Triggering models for effortful musical interaction beyond direct cognitive control“]
Contact : zlatko.baracskai@uwe.ac.uk
Presentation format: In person

Abstract


As the field of musical computer interaction embraces the need for difficulty of control and elongated learning curves, we explore some possibilities in designing such instruments. This paper discusses models designed to derive triggers from sensor data, putting emphasis on mapping and its potential for hosting effortful interaction design. The focus on deriving triggers entails achieving different triggers and trigger characteristics, while timing accuracy is not discussed. Starting from basic arraying and thresholding principles, complex schemes are devised such that levels of difficulty are built into the system. It is proposed that these mapping strategies designed for musical performance bear resemblance to acoustic instruments in the way they accommodate control at many skill levels. The purpose of this paper is to exemplify such systems and describe their performance based on informal tests, in order to demonstrate the potential of complex mapping strategies paired with simple interfaces. In appropriating interfaces to be operated at different levels of expertise, it is anticipated that an investment of rehearsal time will produce increased accuracy of control, as well as the physicality of virtuosic performance, which allows novice audiences to more deeply appreciate computer-facilitated instruments.
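
A minimal example of deriving a trigger from a continuous sensor stream by thresholding with hysteresis, one of the basic building blocks alluded to above (the paper's more elaborate effortful schemes are not reproduced):

```python
def make_trigger(on_threshold, off_threshold):
    """Hysteresis trigger: fire once when the value rises above on_threshold,
    and re-arm only after it falls below off_threshold, so a noisy signal
    hovering near one threshold cannot re-trigger."""
    armed = True
    def step(value):
        nonlocal armed
        if armed and value > on_threshold:
            armed = False
            return True            # trigger event
        if not armed and value < off_threshold:
            armed = True
        return False
    return step

# usage: trig = make_trigger(0.7, 0.3); events = [trig(v) for v in sensor_stream]
```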

Paper


[/expand]


[expand title=”Toshihisa Tsuruoka (New York University); Leo Chang (); Oliver Hickman (none) – Ear Talk Project: Repurposing YouTube Live for Online Co-composition and Performance“]
Contact : tt1694@nyu.edu; leochang93@gmail.com; okhick@gmail.com
Presentation format: Virtual

Abstract


The Ear Talk project enables people from remote locations to collaboratively share, shape, and form music via an interactive score. Through the crowdsourcing of recordings and creative directions from participants, the Ear Talk project aims to create music online. Our goal is to establish an online environment where experienced composers as well as novices could participate in the proposed music making process without having to learn specific skills. For this reason, Ear Talk’s compositional process is conducted via verbal communication over YouTube live chat messages that interact with the live streamed score in real-time. The system utilizes free-to-use platforms familiar to most participants, such as Google Drive and YouTube. The interactive score is implemented in Max/Jitter for real-time audiovisual processing.

[/expand]


[expand title=”Edmund Hunt (Royal Birmingham Conservatoire); James Dooley (The Open University) – ‘A Core and Yet Absent’: Using Electroacoustic Technology to Mediate between String Quartet and an Ancient Text“]
Contact : Edmund.Hunt@bcu.ac.uk; james.dooley@open.ac.uk
Presentation format: Virtual

Abstract


‘Different Islands’, for dancer, string quartet and live electronics, began as a project to investigate creative approaches to an Old English text known as ‘Wulf and Eadwacer’. By embedding the untranslated text within disparate elements of musical composition, dance and live electronics, the project seeks to explore the text through its thematic content, sound, and structure. In choosing not to present the text in its translated form, the project intends to focus on the ambiguous, layered and fragmentary nature of the poem. Rather than presenting a definitive translation, the creative re-presentation of the poem allows it to remain open to different interpretations. In developing this project, the analogy of the text as a fossil provided a useful starting point. Electroacoustic technology, in the form of analyses and live electronics, provides the medium into which the textual ‘fossil’ leaves its imprint. This paper focuses on the development of the composition and electronics.

Video abstract


[/expand]


[expand title=”Micael Silva (University of Campinas); Danilo Rossetti (Federal University of Mato Grosso); Jonatas Manzoli (UNICAMP) – Emerging structures within microtime of Ligeti’s Continuum“]
Contact : micaelant@gmail.com; d.a.a.rossetti@gmail.com; jonatas@nics.unicamp.br
Presentation format: In person

Abstract


We present a computer-aided analysis of Ligeti's Continuum. This composition is representative of Ligeti's fascination with exploring sound masses and microtime perception in musical works. The central idea of the composition is to create a constellation of continuous sound from the short durations of the harpsichord. Previously, authors explored Ligeti's Continuum from a perceptual point of view, by analyzing the patterns in the composition and by performing experimental studies. Our contribution focuses on the analysis of the performance and its perceptual outcome. We hypothesize that the performance of the Continuum leads to the emergence of changes in loudness, microtemporal patterns, and spectral irregularities. To that end, we associate these emergent features with psychoacoustical audio descriptors, anchored in models of the bark scale, loudness, and spectral irregularity. These tools generate graphical representations of the piece and allow us to discuss the microtime patterns of the composition.

Paper (preprint)

Video abstract




[/expand]


[expand title=”Alejandro Delgado Luezas (Roli / Queen Mary University of London); SKoT McDonald (Roli); Ning Xu (Roli); Charalampos Saitis (Queen Mary University of London); Mark B. Sandler (Queen Mary University of London) – Learning Models for Query by Vocal Percussion: A Comparative Study“]
Contact : alejandro@roli.com; skot@roli.com; ning@roli.com; c.saitis@qmul.ac.uk; mark.sandler@qmul.ac.uk
Presentation format: In person

Abstract


The imitation of percussive sounds via the human voice is a natural and effective tool for communicating rhythmic ideas on the fly. Thus, the automatic retrieval of drum sounds using vocal percussion can help artists prototype drum patterns in a comfortable and quick way, smoothing the creative workflow as a result. Here we explore different strategies to perform this type of query, making use of both traditional machine learning algorithms and recent deep learning techniques. The main hyperparameters from the models involved are carefully selected by feeding performance metrics to a grid search algorithm. We also look into several audio data augmentation techniques, which can potentially regularise deep learning models and improve generalisation. We compare the final performances in terms of effectiveness (classification accuracy), efficiency (computational speed), stability (performance consistency), and interpretability (decision patterns), and discuss the relevance of these results when it comes to the design of successful query-by-vocal-percussion systems.

Paper


[/expand]


[expand title=”Sachi Tanihara (Tokyo University of the Arts) – THE EXPANDING UNIVERSE OF POETIC VOICE“]
Contact : sachitnhr@yahoo.co.jp
Presentation format: Virtual

Abstract


Technologies of computer music enable a new way of poetry appreciation and voice performance. In my work “La poésie de la Poésie – for one vocal performer and computer”, a reader vocalizing a poem can gain a sense of physical expansion and be profoundly united with the poetic world. Spatial sound technologies, multiple reverberations, proliferous voice and automated tracing of the vocalization boost the feeling of a multi-dimensionally expanded body. In addition to the reader's voice, multiple layers of sound, including words, word meanings and even inner sounds such as the background of the poem, have their own movements, and they are integrated so that the overall sonic world of the poem can be felt. Computer technologies allow “active reading” and “active listening” to lead us into the “Unison with the poetic universe” through our whole bodies. This approach to voice performance is distinct from singing or vocal percussion; thus, everyone can experience the poetic world without any training.

For this interactive poetry, collaboration with artificial intelligence is feasible for realizing real-time poetic accompaniment. Moreover, the physical effects of the expanded vocalization could be developed for vocal communication, music therapy and musical eroticism, which directly touch the bodily sensation.


[/expand]


[expand title=”Juan Carlos Vasquez (University of Virginia); Omar Guzman Fraire (UVA) – Current Activism Trends in Sound Art and Electroacoustic Music in Mexico and Colombia“]
Contact : juan.vasquez@msn.com; oeg6bp@virginia.edu
Presentation format: In person

Abstract


This study attempts to provide a current report of activism in forms of artistic expression in Latin America within the fields of electroacoustic music and sound art. Given the lack of dedicated literature documenting examples on this specific topic, we chose to focus on Mexico and Colombia as case studies for this paper, considering the authors’ close connection to and artistic trajectory in both of these countries.

[/expand]


[expand title=”Michael Palumbo (York University); Doug Van Nort (York University); Rory Hoy (York University) – Disperf: A Platform for Telematic Music Concert Production“]
Contact : info@palumbomichael.com; vannort@yorku.ca; rorydavidhoy@gmail.com
Presentation format: Virtual

Abstract


Producing and engineering telematic music concerts requires a diverse set of software, hardware, and expertise spanning networking, command-line interfaces, acoustics, planning, and communication. We present a prototype platform for the production of telematic audiovisual performances, with dual emphases on usability beyond strictly academic contexts and on maintaining a shared “presence”, in the sense of low-level system knowledge, across all sites. Our system, “disperf”, provides a user interface for controlling various established software and hardware already in use in telematic music, including tools for network diagnostics and high-quality transmission of audio and video. The system can pass data between these applications to calibrate settings, while each running instance of disperf receives real-time information about other online peers, exposing relevant data such as parametric namespaces, acoustic information, audio driver settings, and the compatibility and run statuses of available services. We conclude with an example of another project developed in tandem: a virtual acoustic system that takes advantage of the disperf peer discovery model to transmit, receive, and control other virtual acoustic instances at participating remote sites.
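
The abstract does not publish disperf’s message format; purely as a sketch of the peer-status exchange it describes, a peer announcement might be modelled as follows (all field names are hypothetical).

```python
# Hypothetical sketch of a disperf-style peer-status message; the actual
# protocol and field names used by disperf are not specified in the abstract.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class PeerStatus:
    site: str
    sample_rate: int
    buffer_size: int
    services: dict = field(default_factory=dict)       # e.g. {"jacktrip": "running"}
    osc_namespaces: list = field(default_factory=list)  # exposed parametric namespaces

peer = PeerStatus(site="studio-A", sample_rate=48000, buffer_size=128,
                  services={"jacktrip": "running", "video": "stopped"},
                  osc_namespaces=["/reverb/mix", "/source/0/gain"])
payload = json.dumps(asdict(peer))   # what each instance might broadcast to its peers
print(payload)
```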

Paper (preprint)


[/expand]


[expand title=”Richard Dudas (Hanyang University) – “Machinatuosity”: Virtual Strings, Spectral Filters and Temperament Tools for ‘Esquisse’“]
Contact : d3u7d2a4s@richarddudas.com
Presentation format: In person

Abstract


This paper describes the technological component underpinning the author’s composition ‘Esquisse (in Memoriam J.-C. Risset)’, for piano and computer, in the form of real-time signal processing and synthesis implemented in the Max visual programming language. The computer part, designed specifically for this piece, incorporates some novel techniques, including an extension of the Karplus-Strong algorithm that permits the synthesis of string harmonics via one simple control parameter, and a spectral-domain filtering system based on comb-like harmonic filters which also incorporate spectral-domain bandpass “windows” that can be calculated on either a linear frequency or octave scale. The practical result is the ability to create harmonic filters representing a single formant of any size, or full-spectrum harmonic filters whose fundamental changes in different parts of the spectrum. The piece also incorporates specially designed tuning tools to reconcile the equal-tempered tuning of the piano with the use of the harmonic series in the electronics as a prevalent compositional device used throughout the piece.
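
The author’s harmonic-synthesis extension and spectral filters are not reproduced here; for background only, this is a minimal sketch of the basic Karplus-Strong plucked-string algorithm on which the extension builds.

```python
# Minimal sketch of the basic Karplus-Strong algorithm (background only;
# the harmonic-synthesis extension described in the paper is not reproduced).
import numpy as np

def karplus_strong(freq, dur, sr=44100, damping=0.996):
    n = int(sr / freq)                        # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, n)         # noise-burst excitation
    out = np.empty(int(sr * dur))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # two-point averaging low-pass in the feedback loop, scaled by damping
        buf[i % n] = damping * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

tone = karplus_strong(220.0, 2.0)             # two seconds of a plucked A3
```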

Paper (preprint)


[/expand]


[expand title=”Mara Helmuth (); Yunze Mu (University of Cincinnati, College-Conservatory of Music); Owen Hopper (University of Cincinnati, College-Conservatory of Music); Carl Jacobson (University of Cincinnati, College-Conservatory of Music); Shawn Milloway (University of Cincinnati, College-Conservatory of Music) – CCM Center for Computer Music Studio Report 2019“]
Contact : HELMUTMM@ucmail.uc.edu; muye@mail.uc.edu; hopperod@mail.uc.edu; jacobcf@mail.uc.edu; millowsh@mail.uc.edu
Presentation format: In person

Abstract


Recent developments at the University of Cincinnati, College-Conservatory of Music Center for Computer Music include explorations of creating virtual reality musical works using the Unity 3D game engine with RTcmix audio, a new course collaboration between the composition department and the School of Architecture and Interior Design, internet performance software updating and internet performance, music based on plant data and brain waves, and many performances of new student and faculty works, some by the Cincinnati Composers Laptop Orchestra Project.

Paper (preprint)


[/expand]


[expand title=”Edgar Berdahl (Louisiana State University); Landon Viator (LSU) – Sound Synthesis by Connecting a Chaotic Map to a Bank of Resonators“]
Contact : edgarberdahl@lsu.edu; landonviator@gmail.com
Presentation format: In person

Abstract


A chaotic map can be connected to a bank of resonators in order to realize a sound synthesizer. If the right parameters are chosen, this kind of sound synthesizer can explore the edge of chaos, producing tones that sound neither too random nor too simplistic.

In this work, an example synthesizer is created by connecting the De Jong chaotic map to banks of resonators. A multichannel contact microphone called “The Hexapad” is used to excite the sound synthesizer.
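
As a sketch of the general signal flow (parameter values and the coupling between the map and the resonators are illustrative choices, not the authors’ implementation), the De Jong map can be iterated at audio rate and fed through a small bank of two-pole resonators.

```python
# Sketch: De Jong chaotic map exciting a small bank of two-pole resonators.
# Parameter values and the coupling scheme are illustrative, not the paper's.
import numpy as np

def de_jong(n, a=1.4, b=-2.3, c=2.4, d=-2.1):
    x = np.zeros(n); y = np.zeros(n)
    for i in range(1, n):
        x[i] = np.sin(a * y[i-1]) - np.cos(b * x[i-1])
        y[i] = np.sin(c * x[i-1]) - np.cos(d * y[i-1])
    return x, y

def resonator(x, freq, r=0.999, sr=44100):
    """Two-pole resonator: y[n] = x[n] + 2 r cos(w) y[n-1] - r^2 y[n-2]."""
    w = 2 * np.pi * freq / sr
    y = np.zeros_like(x)
    for n in range(2, len(x)):
        y[n] = x[n] + 2 * r * np.cos(w) * y[n-1] - r * r * y[n-2]
    return y

excitation, _ = de_jong(44100)                        # one second at audio rate
bank = sum(resonator(excitation, f) for f in (220.0, 330.0, 495.0))
bank /= np.max(np.abs(bank)) + 1e-12                  # normalize before playback
```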

Video abstract


[/expand]


[expand title=”Carmine-Emanuele Cella (University of California, Berkeley); Daniele Ghisi (University of California, Berkeley); Vincent Lostanlen (Cornell University); Fabien Lévy (Detmold Hochschule fur musik ); Joshua Fineberg (Boston University); Yan Maresz (Conservatoire national supérieur de musique et de danse de Paris) – OrchideaSOL: a dataset of extended instrumental techniques for computer-aided orchestration“]
Contact : carmine.cella@berkeley.edu; danieleghisi@berkeley.edu; vl1019@nyu.edu; fabien.levy@gmx.net; fineberg@bu.edu; yan.maresz@gmail.com
Presentation format: In person

Abstract


This paper introduces OrchideaSOL, a free dataset of samples of extended instrumental playing techniques, designed to be used as the default dataset for the Orchidea framework for target-based computer-aided orchestration.
OrchideaSOL is a reduced and modified subset of Studio On Line, or SOL for short, a dataset developed at Ircam between 1996 and 1998. We motivate the reasons behind OrchideaSOL and describe the differences between the original SOL and our dataset. We also show the work done to improve the dynamic ranges of orchestral families and other aspects of the data.

Paper (preprint)


[/expand]


[expand title=”Edgar Berdahl (Louisiana State University) – Acoustic Control using the Internet of Things: Introducing the Concept of a Socially Responsible Noise Source“]
Contact : edgarberdahl@lsu.edu
Presentation format: In person

Abstract


It is proposed that loud noise sources in the future should become part of the Internet of Things, enabling them to transmit their noise disturbance signals over the Internet. Then, devices wishing to acoustically control the noise can use such a disturbance signal as a reference signal for canceling the noise. This architecture makes it possible for acoustic environments to be remediated by eliminating noise.

The concept is demonstrated using a prototype. As part of this, a socially responsible noise source simulator is created, which loudly radiates the sound of a jackhammer and also transmits the sound of the jackhammer disturbance signal over a network.

Jitter can arise due to asynchrony of clocks from different sound interfaces. This jitter must remain small for the technology to work.
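
The abstract does not name a particular adaptive algorithm; as one plausible sketch of how the network-received disturbance signal could serve as the cancellation reference, below is a textbook normalized-LMS filter. A real active-noise-control deployment would additionally model the acoustic secondary path (an FxLMS-style structure) and compensate for the clock jitter mentioned above.

```python
# Sketch: normalized-LMS cancellation using a network-received reference signal.
# Illustrative only; not the architecture demonstrated in the paper.
import numpy as np

def lms_cancel(reference, mic, n_taps=64, mu=0.5):
    """reference: disturbance signal received over the network.
       mic: signal at the listener, containing the noise plus other sound.
       Returns the error signal (mic with the correlated noise removed)."""
    w = np.zeros(n_taps)
    err = np.zeros(len(mic))
    for n in range(n_taps, len(mic)):
        x = reference[n - n_taps:n][::-1]               # most recent reference samples
        y = np.dot(w, x)                                # filter's estimate of the noise
        err[n] = mic[n] - y
        w += mu * err[n] * x / (np.dot(x, x) + 1e-12)   # normalized LMS update
    return err
```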

Video abstract


[/expand]


[expand title=”Jonathan Kulpa (UC Berkeley); Edmund Campion (University of California, Berkeley); Carmine-Emanuele Cella (University of California, Berkeley) – QuBits, a System for Interactive Sonic Virtual Reality“]
Contact : kulpajj@gmail.com; campion@berkeley.edu; carmine.cella@berkeley.edu
Presentation format: In person

Abstract


This article describes the QuBits system, a virtual reality environment offering an expanded medium for musical experience with space and visuals. The user and the computer jointly shape many of the events. Sound engines were designed to explore an aesthetic of algorithmically generated sonic structures, sound mass, interactive evolution, and spatial sound. Real-time challenges are discussed. A network of software is diagrammed, solving initial issues with latency. Finally, the principles and methods utilized in the current project are evaluated with implications for future iterations.

[/expand]


[expand title=”Ward Slager (University of the Arts Utrecht); Fedde ten Berge (STEIM) – PRP Voyager: an instrument employing complex mapping to generate complex music“]
Contact : ward@wardslager.com; fedde@steim.nl
Presentation format: In person

Abstract


The PRP Voyager is a generative musical instrument with a complex mapping interface based on user-made presets. Presets are prepared beforehand or made during improvisation. Pulse envelopes are generated using the Pseudo Random Pulse (PRP) algorithm. The envelopes can be used to mask a broad variety of carrier oscillator algorithms. PRP generates patterns with a pseudo-random quality that serve as a distinct musical language. The PRP patterns can be recorded and looped to create repeating patterns while still retaining a certain sense of pseudo-randomness within the groove of the loops. This paper discusses the implementation and development of the algorithm and instrument.
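
The PRP algorithm itself is not detailed in the abstract; purely as an illustration of the general idea of masking a carrier with looped, seeded pseudo-random pulse envelopes, one might sketch it as follows (the pulse-generation rule and all parameters are assumptions).

```python
# Hypothetical sketch of pulse-envelope masking of a carrier oscillator;
# the actual Pseudo Random Pulse (PRP) algorithm is not reproduced here.
import numpy as np

def pulse_envelope(dur, sr=44100, density=6.0, decay=0.05, seed=1):
    """Seeded pseudo-random exponential pulses; reusing the seed repeats the pattern."""
    rng = np.random.default_rng(seed)
    env = np.zeros(int(dur * sr))
    t = 0.0
    while t < dur:
        onset = int(t * sr)
        length = min(int(decay * sr), len(env) - onset)
        env[onset:onset + length] = np.exp(-np.linspace(0, 5, length))
        t += rng.exponential(1.0 / density)    # pseudo-random inter-onset interval
    return env

sr, dur = 44100, 4.0
t = np.arange(int(dur * sr)) / sr
carrier = np.sign(np.sin(2 * np.pi * 110 * t))         # simple square-like carrier
masked = carrier * pulse_envelope(dur, sr)             # envelope masks the carrier
```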

Paper

Video abstract


[/expand]


[expand title=”Virginia de las Pozas (NYU) – Extramentality: Automatic Mapping Generation for Gestural Control of Electronic Music“]
Contact : vdelaspozas@gmail.com
Presentation format: In person

Abstract


This paper describes an approach for automatically generating mapping schemes in the context of human interaction with extramusical objects and electronic music. These mappings are structured by comparing sensor input to a synthesized matrix of sequenced audio and determining perceptually significant mappings for each component of the control space. The ultimate goal of Extramentality is to (1) facilitate artists working in live performance and installation environments where extramusical objects are utilized in place of traditional musical instruments and (2) encourage artists to explore extramusical and non-traditional objects as sound-controlling interfaces.
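
The paper’s actual mapping-generation procedure and its perceptual weighting are not reproduced here; as a rough illustration of pairing sensor streams with components of a descriptor matrix by correlation strength, consider the following sketch.

```python
# Illustrative sketch only: pairing sensor channels with synthesis parameters
# by correlation strength; not the mapping-generation method from the paper.
import numpy as np

def generate_mappings(sensor_streams, audio_descriptors):
    """sensor_streams: (n_sensors, n_frames); audio_descriptors: (n_params, n_frames).
       Returns, for each sensor, the index of its most strongly correlated descriptor."""
    mappings = {}
    for s in range(sensor_streams.shape[0]):
        corrs = [abs(np.corrcoef(sensor_streams[s], audio_descriptors[p])[0, 1])
                 for p in range(audio_descriptors.shape[0])]
        mappings[s] = int(np.argmax(corrs))
    return mappings

sensors = np.random.rand(3, 500)        # placeholder sensor data
descriptors = np.random.rand(5, 500)    # placeholder descriptor matrix
print(generate_mappings(sensors, descriptors))
```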

Paper (preprint)


[/expand]


[expand title=”Rob Hamilton (Rensselaer Polytechnic Institute) – Ensemble Nonlinear: Low-cost Ensemble Performance with Networked Raspberry Pi’s“]
Contact : hamilr4@rpi.edu

Abstract


Musical performance ensembles using orchestras of laptops have become firmly established as a vehicle for advancing electronic and electroacoustic composition and research practices at institutions around the world. But while many successful laptop orchestras rely on high-cost, homogeneous technology infrastructures featuring high-end laptops, audio interfaces, and highly specialized hemispherical speakers, the financial cost of creating such an ensemble is simply out of reach for many artists, ensembles, and educational institutions. This paper details the creation of Ensemble Nonlinear, an electronic music performance ensemble and research platform utilizing low-cost Raspberry Pi single-board computers (SBCs) as networked clients connected to a single audio server laptop and speaker array.

[/expand]


[expand title=”Oliver Hancock (MAINZ) – A Ratio-based Interface for Musical Control of an Iterated Function System “]
Contact : oliverjhancock@googlemail.com
Presentation format: In person

Abstract


A musical application of a Cantor set Iterated Function System (IFS) is described. Sonifications display a wide range of musical characters, inter-relatable by arbitrarily fine increments and controllable by specifying as few as two parameters.
A previous GUI allowed the user to specify the 1st iteration as two line segments expressed as percentages of the unit length. A ratio-based GUI is suggested as a more intuitive and efficient control, particularly for generating sets with simple whole-number relationships among the proportions of their 1st iteration. Such sets are observed to have simple, regular relationships between their pitches and rhythms, giving textural and spectral effects resembling conventional, notated music.
The algorithm is explained briefly, and the rules governing sonifications are described. The sonic output of the algorithm can be fully specified by the lengths of the 1st-iteration line segments. A two-dimensional input space is proposed, and sonifications are categorized according to perceivable characteristics.
The ratio-based GUI is presented. Early results are discussed, along with possible further work.
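
As a sketch of how such a set might be generated from the two first-iteration segment lengths (the placement of the segments and the mapping of the resulting intervals to onset times are assumptions, not the paper’s exact rules):

```python
# Sketch: Cantor-set IFS built from two first-iteration segment ratios, mapped
# to onset times. Segment placement and the sonification rule are assumptions.
import numpy as np

def cantor_ifs(r1, r2, depth=5):
    """r1, r2: lengths of the two retained segments as fractions of the unit
       interval (assumed anchored at the left and right ends respectively)."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            length = b - a
            nxt.append((a, a + r1 * length))        # left segment
            nxt.append((b - r2 * length, b))        # right segment
        intervals = nxt
    return intervals

# Simple whole-number ratio 1:2 between the two segments, e.g. r1=0.2, r2=0.4
onsets = sorted(a for a, _ in cantor_ifs(0.2, 0.4, depth=4))
times = [round(4.0 * x, 3) for x in onsets]         # map unit interval to 4 seconds
print(times)
```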

Video abstract




[/expand]


[expand title=”Sabine Breitsameter (Soundscape- & Environmental Media Lab, Darmstadt UAS) – Der Hörweg/The Listening Path: An Acoustic Ecology and Augmented Reality Project for a Rural Community. Redefining distribution, facilitation and propagation of soundscapes and music“]
Contact : sbreitsameter@snafu.de

Abstract


Based on digital technologies, our project Hörweg/Listening Path, situated along a hiking trail in the forest of a rural community, shows a novel way of distributing auditory content to a general public and of implementing a site-specific and at the same time high-quality listening experience in an everyday-life situation. The paper will elucidate the project’s relationship to Acoustic Ecology’s method of sound walking, its site-specific approach corresponding to the term “Locative Media”, and the Hörweg’s listening experience with respect to the term “Augmented Reality” and related expressions.
The project’s main issues, decisions, and problems will be illuminated based on its experiential concept, its aesthetics and design, its technological determinants, and its strategic aspects. It will be shown that the aspects of all four categories form a densely interwoven ensemble that establishes an overall perceptual, emotional, and cognitive experience.
In conclusion, the paper will point to the idea of Audience Development and connect it to Acoustic Ecology’s main goals: to inspire and foster the willingness and ability to listen by deepening consciousness of the value of sounds in everyday life, nature, media, and the arts.


[/expand]


[expand title=”Kittiphan Janbuala (College of music, Seoul National University ); PerMagnus Lindborg (Soundislands) – Sonification of Glitch-Video: Making and Evaluating Audiovisual Art made from the Betta Fish“]
Contact : kj.ice8@gmail.com; pm@permagnus.org
Presentation format: In person

Abstract


The advent of consumer computers has supported artists in various fields in developing their creativity into new territories. Computer-assisted sonification is one of the modern techniques available to the contemporary artist. Music and sound art usually derive from abstract inspiration, an approach they share with sonification pursued for aesthetic ends. Treating non-speech audio as data can uncover potential for aesthetic purposes. This paper first describes how we experimented with using glitch-video of an aquarium fish as the input to a sonification process and audiovisual composition called “1(X)MB”. We then report results from a listening test and discuss the project design as a whole.

[/expand]


[expand title=”Tae Hong Park (NYU) – The Soundscaper: A Tool for Soundscape Re-Synthesis“]
Contact : thp1@nyu.edu
Presentation format: In person

Abstract


Soundscaper is a software tool for re-synthesizing soundscapes based on synthesis-by-analysis methods. Soundscaper generates artificial soundscapes that can be played back for any duration and at six dynamic levels (pp, p, mp, mf, f, and ff) using a content-based synthesis approach which allows real-time control over “loudness” beyond simply scaling the resulting waveform. The analysis is conducted by considering concepts of foreground, middleground, and background soundscape components. Extracted audio segments and acoustic events from the analysis module are “audio-stitched” together, where a background sound carpet is automatically populated and blended with sound events to form a continuous soundscape output.

Paper (preprint)


[/expand]


[expand title=”Halley Young (University of Pennsylvania) – A Foundations-of-Computation Approach to Formalizing Musical Analysis“]
Contact : halleyy@seas.upenn.edu
Presentation format: In person

Abstract


In the last century, two trends have increased the scope of musical analysis: mathematical music theorists such as Dmitri Tymoczko, Godfried Toussaint, and Guerino Mazzola have provided mathematical insight into specific musical scenarios, while musicologists such as Lawrence Zbikowski and Michael Spitzer have closely examined the nature of musical analysis as a cultural, cognitive, and scholarly endeavor. This paper intends to bring these two strands of research together by providing a constructive mathematical foundation for the process of musical analysis. By establishing a mathematical description of the generation of an analysis of a piece of music, useful mathematical tools for performing operations frequently used in analysis, and possible precise definitions for loaded terms such as “musical similarity” and “musical form”, I will extend the analyst’s and the meta-analyst’s ability to create abstractions from musical surfaces, the core of every process of analysis.

[/expand]


[expand title=”Jeffrey Clark (Ball State University) – A Methodology for Virtualizing Complex Sound Sources into 6DoF Recordings“]
Contact : jmclark85@gmail.com
Presentation format: In person

Abstract


Recording sound sources for spatially adaptive, immersive environments can present a problem in cases where the listener is able to move into close enough proximity to the sound source that the source is no longer a point source. This paper presents a recording and encoding methodology for representing these complex sound sources in six-degrees-of-freedom (6DoF) immersive environments in a way that allows them to retain their perceptual size based on the listener’s distance.

This paper also suggests a method for using statistical directivity data to retain a sound source’s frequency-domain signature based on its directivity characteristics and its rotation relative to the listener. It also suggests a method for calculating the damping of a sound based on the listener’s distance that accounts for the perceptual size of the source as it approaches or departs from appearing as a point source. The suggested damping and directivity methods can be applied to simple sound sources as well, allowing the methodology to be applied to all the virtualized sound objects in a scene using the same approach.
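
Purely as an illustration of this kind of damping rule (the paper’s actual formula is not reproduced), inverse-distance attenuation can be made to account for an extended source by using its effective radius as a lower bound on the attenuation distance.

```python
# Illustrative sketch: distance damping that accounts for a source's perceptual
# extent; not the exact method proposed in the paper.
def source_gain(distance, source_radius, ref_distance=1.0):
    """Inverse-distance gain that flattens once the listener is within the
       region where the source no longer behaves as a point source."""
    effective = max(distance, source_radius, 1e-6)
    return min(1.0, ref_distance / effective)

for d in (0.1, 0.5, 1.0, 2.0, 8.0):
    print(d, round(source_gain(d, source_radius=0.5), 3))
```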

Paper (preprint)


[/expand]


[expand title=”Alexis Crawshaw (UCSB Media Arts and Technology and Université Paris 8 EDESTA) – Electro-Somaesthetic Music: Spatial Approaches“]
Contact : storyalexis@gmail.com
Presentation format: In person

Abstract


This paper introduces the concept of electro-somaesthetic music (ESM) and a set of spatial approaches unique to its realization. We describe ESM as computer-generated music intended to engage the human somatosensory system as an essential artistic aim. Specifically, ESM arises from mechanical waves engaging vibration-sensitive corporeal senses by non-cochlear means. Somatic spatial perception affords vibration-based content high spatial acuity within our most proximal, intimate space: at and within the threshold of our perceived self/body from our perceived external environment. We propose that these spatial properties set ESM expressively apart from hearing-based spatial music and present a novel, nuanced territory for compositional exploration. To facilitate spatial ESM composition and to promote compelling results therein, we advance a theoretical system of technical and aesthetic concerns, accompanied by illustrative proofs-of-concept. This paper examines three paradigms for yielding spatial content within ESM: the manipulation of physical, acoustical parameters; of virtual, computational parameters; and of non-intuitive perceptual armatures. Additionally, we examine each of these paradigms through two lenses: egocentric reference (where spatial content is limited to the body) and allocentric reference (where content is distributed within an external environment).

Paper

Video abstract


[/expand]