[expand title=”Radek Rudnicki (Precyzja Foundation)* – PaperTracker: Gamified Music & Tech Teaching Tool“]
Contact : koshicontact@gmail.com
PaperTracker

Radek Rudnicki, Tristan Bunn, Jon He and Andre Murnieks

An interactive installation and educational platform that engages audiences of all ages with music, technology, and game design. The focus is on providing entertaining, inexpensive challenges that promote creative problem-solving, collaborative work, and programming using a visual apparatus.

Users interact with the platform through low-tech, paper-based media and coded image recognition. Layered sound output serves as feedback for testing participants’ solutions to challenges that bridge the arts and sciences through the basics of music, logic, and gaming dynamics. PaperTracker was originally built as an education platform, to build capacity both for educators and for 10-14-year-old children from disadvantaged areas of New Zealand. As such, it is redefining the limits of accessibility of music technologies for disadvantaged communities.
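
To make the mechanism concrete, the sketch below shows one way a camera-tracked paper token could trigger notes. This is not PaperTracker’s implementation: OpenCV’s ArUco markers (opencv-contrib-python 4.7+) and the marker-to-pitch table are stand-ins chosen purely for illustration.

```python
# Illustrative sketch only: detect coded paper markers in one webcam
# frame and map each marker ID to a MIDI note number. ArUco markers
# stand in for PaperTracker's own paper media; the mapping is arbitrary.
import cv2  # requires opencv-contrib-python >= 4.7

MARKER_TO_NOTE = {0: 60, 1: 62, 2: 64, 3: 67}  # hypothetical mapping

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary)

cap = cv2.VideoCapture(0)  # default webcam
ok, frame = cap.read()
if ok:
    corners, ids, _rejected = detector.detectMarkers(frame)
    if ids is not None:
        notes = [MARKER_TO_NOTE.get(int(i)) for i in ids.flatten()]
        print("play notes:", [n for n in notes if n is not None])
cap.release()
```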

PaperTracker was first presented internationally at the Tokyo Festival of Modular 2019, in the form of an audio-visual installation. Since then we have run workshops at the Shizuoka University of Art and Culture (SUAC) in Hamamatsu, Japan, and at the University of York, UK.

PaperTracker was developed as a research project at Massey University, College of Creative Arts, Wellington, New Zealand, by Radek Rudnicki, Tristan Bunn, Jon He and Andre Mūrnieks.

[/expand]


[expand title=”José Miguel Fernandez (IRCAM)* – AntesCollider, an Antescofo library to control SuperCollider“]
Contact : jose.miguel.fernandez@ircam.fr
AntesCollider was presented at ICMC 2019 in the paper “AntesCollider: control and signal processing in the same score”.
AntesCollider is a library written in the Antescofo language that provides higher-level, expressive control of the SuperCollider server (scsynth) directly via OSC. The library is organized around a set of concurrent objects that make it easy to create audio processes dynamically and deploy them on scsynth servers. The motivation is to use the expressiveness of the Antescofo language to write complex electronic musical processes and to synchronize them using the score-following capabilities of the Antescofo meta-sequencer, while taking advantage of scsynth’s optimized and versatile dynamic audio synthesis.
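
For readers unfamiliar with this control path, the sketch below shows the kind of raw OSC message that ultimately reaches scsynth; it does not use AntesCollider’s own syntax. It assumes a running scsynth on the default port 57110 with a SynthDef named “default” already loaded, and uses the python-osc package purely for illustration.

```python
# Driving scsynth over OSC -- the transport layer AntesCollider builds
# on. The /s_new, /n_set, and /n_free commands are documented in the
# SuperCollider Server Command Reference.
from pythonosc.udp_client import SimpleUDPClient

sc = SimpleUDPClient("127.0.0.1", 57110)  # default scsynth port

# /s_new: create a synth node (synthdef name, node ID, add action,
# target group, then parameter name/value pairs).
sc.send_message("/s_new", ["default", 1000, 0, 0, "freq", 440.0])

# /n_set: change a parameter on the running node; /n_free releases it.
sc.send_message("/n_set", [1000, "freq", 660.0])
sc.send_message("/n_free", [1000])
```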

This library is designed to be used by musicians, composers, computer music designers, and sound designers. The workshop will focus on its use in the context of electroacoustic and interactive/mixed music, covering installation and a hands-on tutorial with examples that include, among others, physical models, algorithms for synthesis control, and work in a high-order ambisonics (HOA) spatial context.
[/expand]


[expand title=”Jeff Morris (Texas A&M University)* – Bytebeat: Resources and Lessons from Performing and Teaching with a Single Line of Code“]
Contact : jeff@morrismusic.org
Bytebeat is a synthesis-and-composition technique that can generate a surprising variety of results using a single line of code and a very small set of operations. While it has mostly been used to generate brief looping passages, it can also yield extended concert works, be used in live coding, and drive video animations generated from the same code. Bytebeat interpreters are available free or at low cost on many platforms, including mobile devices and web browsers.
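
To illustrate just how little code is involved, the sketch below renders a classic early bytebeat formula to a WAV file in Python. The 8 kHz, 8-bit unsigned mono format follows bytebeat convention; the particular formula, duration, and file name are arbitrary choices for this example.

```python
# Minimal bytebeat renderer: evaluate one line of integer arithmetic
# per sample index t and keep the low 8 bits as an unsigned sample.
import wave

RATE = 8000      # classic bytebeat sample rate
SECONDS = 30

def formula(t: int) -> int:
    # The single line of code that is the entire piece.
    return t * ((t >> 12 | t >> 8) & 63 & t >> 4)

samples = bytes(formula(t) & 0xFF for t in range(RATE * SECONDS))

with wave.open("bytebeat.wav", "wb") as w:
    w.setnchannels(1)    # mono
    w.setsampwidth(1)    # 8-bit samples (unsigned in WAV)
    w.setframerate(RATE)
    w.writeframes(samples)
```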

This workshop will share lessons from teaching a bytebeat live-coding course to newcomer non-music majors, including a survey of resources and approaches to performing, composing, and teaching with bytebeat. The university course excelled in helping students set aside preconceived notions of how music should sound and adopt exploratory mindsets instead. Additionally, a number of topics for further discussion and inquiry will be presented, including aesthetics and computer history. For example, although bytebeat was created by demoscene artist Viznut in 2011, it would have been possible in the early 1950s yet went unexplored for decades. Further, bytebeat does not follow the ubiquitous orchestra-score model created by Max Mathews; rather, timbre and form are intertwined in this intensely digital-native platform.
[/expand]


[expand title=”Christian Oyarzún (Universidad Austral de Chile)* – Introduction to Live-coding Micro-workshop“]
Contact : imavoodoochild@gmail.com
Live-coding is a practice based on real-time code writing for the creation and improvisation of music and image. From its origins in the mid-1990s, under the wing of electroacoustic music, live-coding has diversified and integrated into various areas of artistic and technological research. It has motivated the development of varied programming environments as well as diverse contexts of socialization, including Algoraves, the algorithmic electronic dance parties organized by local live-coder scenes around the world. The result is a scene formed by “artists who want to learn to code, and coders who want to express themselves” that is permanently redefining and updating its paradigms.

General objective:
Through a review of references, listening to works, and practical exercises, the micro-workshop seeks to introduce attendees to the use of programming languages for the creation of music and image in real time, delivering both technical and conceptual guidelines that allow them to propose and develop their own work.

Specific objectives:
- Deliver basic formal programming tools that allow attendees to propose and realize works executed in real time from code (see the sketch after this list).
- Introduce attendees to the current paradigm of live programming for the development of audiovisual works.
- Stimulate the development of pieces that treat live-coding as a performance strategy.
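
As referenced above, a first live-coding gesture can be as small as the lines below. FoxDot is used here only as one example of a Python-based live-coding environment; it assumes FoxDot and SuperCollider are installed and running, and the patterns themselves are arbitrary.

```python
# Two live-coded patterns in FoxDot. In performance, each line is
# re-evaluated on the fly to change the running music.
from FoxDot import *

p1 >> pluck([0, 2, 4, 7], dur=0.5, amp=0.8)  # looping arpeggio
d1 >> play("x-o-", sample=2)                 # basic drum pattern
```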
[/expand]


[expand title=”Michael Hurtado (PUCP)* – Colors 2.0: building musical instruments using voice recognition“]
Contact : michael.hurtado@pucp.pe
In 1972, the Peruvian poet Jorge Eduardo Eielson composed a piece of vocal poetry called “colors”. For this piece, the poet used different tones and emphases on each word/color, working through permutations of four words and four tones: yellow, green, red, and blue. In some interviews, the poet suggested that the piece has a possible computational origin.
The workshop starts from the idea of the word/sound link, a link explored in sound poetry such as the works of Henri Chopin. Using NLP techniques for voice recognition, we will build a musical instrument that takes vocalized words as input, so that our algorithm analyzes the words in real time and generates sounds and images as output. This makes possible the connection that Eielson was looking for in his piece “colors”, while also allowing the artist to work on the performance aspect.
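
The core word-to-sound mapping can be sketched very simply. In the toy example below, the speech-recognition stage is stubbed out with a plain transcript, the four-word vocabulary echoes Eielson’s Spanish color words, and the frequencies and durations are arbitrary illustrative choices.

```python
# Toy sketch: map each recognized color word to a tone and render the
# sequence to a WAV file. A real instrument would replace `transcript`
# with live speech-recognition output.
import math
import struct
import wave

RATE = 44100
TONES = {"amarillo": 440.0, "verde": 494.0, "rojo": 523.0, "azul": 587.0}

def tone(freq: float, seconds: float = 0.4) -> bytes:
    """One sine tone as 16-bit little-endian PCM."""
    return b"".join(
        struct.pack("<h", int(12000 * math.sin(2 * math.pi * freq * i / RATE)))
        for i in range(int(RATE * seconds)))

transcript = "amarillo verde rojo azul azul rojo verde amarillo"
audio = b"".join(tone(TONES[w]) for w in transcript.split() if w in TONES)

with wave.open("colores.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(audio)
```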
[/expand]


[expand title=”Tae Hong Park (NYU)* – Building, Deploying, and Creative Applications using the Citygram Sensor Network System“]
Contact : thp1@nyu.edu
This workshop will focus on building, deploying, and exploring the creative applications afforded by current work in soundscape research through the Citygram Project. The workshop will offer a hands-on session following an overview of the project, presenting its approaches to soundscape and acoustic ecology research, an overview of our comprehensive cyber-physical sensor network, and the potential for musical, creative, and spatial analysis using real-time and historical spatio-acoustic data streams. The workshop will be divided into four sections: (1) Citygram introduction, (2) building “tulip mics” from off-the-shelf components, (3) sensing and accessing data, and (4) exploring musical and artistic possibilities using soundscape data.
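
To suggest what “sensing and accessing data” can mean at the signal level, the sketch below computes one common per-frame feature, RMS level in dBFS, for a frame of sensor audio. This is purely illustrative: it makes no assumptions about Citygram’s actual data formats, endpoints, or feature set.

```python
# Sketch of per-frame feature extraction of the kind an acoustic sensor
# node might stream: RMS level in dBFS for a frame of float samples.
import math

def rms_dbfs(frame):
    """RMS level of one audio frame (floats in [-1, 1]) in dBFS."""
    rms = math.sqrt(sum(x * x for x in frame) / len(frame))
    return 20 * math.log10(max(rms, 1e-12))  # floor avoids log(0)

# Example: one second of a quiet 440 Hz test tone at 8 kHz.
frame = [0.1 * math.sin(2 * math.pi * 440 * i / 8000) for i in range(8000)]
print(f"{rms_dbfs(frame):.1f} dBFS")  # about -23 dBFS
```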

[/expand]


[expand title=”Philippe Pasquier (Simon Fraser University)* – Generative Music Systems Tutorial“]
Contact : pasquier@sfu.ca
This tutorial aims to introduce the field of generative music, also known as music AI or musical metacreation (MUME), along with its current developments, promises, and challenges, with a particular focus on ICMC-relevant aspects of the field.

MUME involves using tools and techniques from artificial intelligence, artificial life, and machine learning to endow machines with musically creative behavior. From generative soundtracks to computer-assisted composition, and from jazz to electronic music, the field brings together artists, practitioners, and researchers interested in developing systems that autonomously (or interactively) recognize, learn, represent, compose, complete, accompany, or interpret musical data.
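
As a taste of the simplest end of the generative spectrum the tutorial surveys, the toy sketch below learns first-order pitch transitions from a short seed melody and random-walks over them. The seed melody and output length are arbitrary; the tutorial itself covers far richer AI and machine-learning approaches.

```python
# Toy generative model: a first-order Markov chain over MIDI pitches.
import random
from collections import defaultdict

seed = [60, 62, 64, 62, 60, 64, 65, 67, 65, 64, 62, 60]  # seed melody

# Learn: count which pitch follows which in the seed.
transitions = defaultdict(list)
for a, b in zip(seed, seed[1:]):
    transitions[a].append(b)

# Generate: random walk over the learned transitions.
note = seed[0]
melody = [note]
for _ in range(15):
    note = random.choice(transitions[note] or seed)  # fall back to seed
    melody.append(note)
print(melody)
```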

[/expand]