Music Technology Studios, Wood Hall (Basement Level)
gen~ and the GO Book: A Bird's-Eye View
The recent Cycling ’74 book “Generating Sound & Organizing Time: Thinking with gen~ Book 1” is about the astonishing things you can do when you work at the atomic, sample-by-sample level of digital audio. This talk introduces the gen~ environment to new users, surveys the content, design patterns, techniques, and subcircuits used in and included with the book, and looks ahead to what’s in store in Volume 2.
Gregory Taylor was trained initially as a visual artist. He has studied central Javanese gamelan and electroacoustic music in the U.S. and the Netherlands, written for publications such as Wired, Array, and Option, and hosted a radio program of contemporary audio on WORT‐FM since 1986 (http://www.rtqe.net).
After a hiatus from his work in the cassette culture movement of the 1980s/90s, he returned to regular recording and live performance as an improviser in both solo and group contexts. His solo and group recordings are available on the c74, Palace of Lights, pfMENTUM, Flood, Clang, Stasisfield, Loochtone, Mine All Mine, and Spectropol labels.
He is the author of Step by Step: Adventures in Sequencing with Max/MSP and the co-author (with Graham Wakefield) of Generating Sound & Organizing Time: Thinking with gen~, and he currently labors on behalf of the new media software company Cycling ’74, where he is in demand as a workshop facilitator and educator.
Autonomous Music Systems: Teaching AI Music
Autonomous Music Systems is a cross-listed graduate and undergraduate course at IUPUI that introduces musical machine learning, artificial intelligence, data management, musical automation, and music information retrieval techniques. This paper details the history of the course’s development and gives data and examples of student work produced in the past three semesters. In this class, students study music generation using multiple programming environments and software packages (including Max, Wekinator, Magenta, MATLAB, and Python) and build working digital musical systems that use machine-learning data to generate new music. The course culminates in a live performance featuring an IRB-approved reverse Turing test given to the audience. The author concludes by delineating curricular necessities and best practices for other educators wanting to teach similar courses in the future.
Jason Palamara is a technologist and performer on acoustic and electronic instruments. As an Assistant Professor of Music Technology at IUPUI, he specializes in the development of machine learning-enabled performance technologies for music. He is the founder/director
of IUPUI’s 30+ member DISEnsemble (Destructive/Inventive Systems Ensemble), which builds or hacks musical and non-musical objects and plays live concerts. He regularly performs and composes music for modern dance as a solo artist and maintains long-term creative partnerships with electroacoustic musician Justin Comer (under the name JCϟjp) and percussionist-composer Scott Deal, with whom he developed AVATAR, an autonomous music system that uses machine learning to play along with live improvisation. In his spare time, he raises chickens.
More Parameters for Your Music in Real Time
Concept, Construction, and Configuration of the Multi-Sensor Array
Karl F. Gerber
I present preliminary theoretical considerations from the practitioner’s perspective, oriented always toward use in improvisation. How many parameters, and which characteristics, are desirable? The new array sensor32 allows the control of numerous parameters simultaneously in real time (polyphonically). Hands, legs, and the upper body can all be used. IR distance sensors based on the triangulation principle generate analogue voltages that are transmitted as MIDI continuous controllers. The array provides from 32 up to 64 controllers, most of which feed algorithms. For the player’s orientation, LED bars with 10 to 20 display levels are placed close to each sensor. In the first phase, re-synthesis in NI Reaktor and Pianoteq modeling were used for the sonification. I do not start from the paradigm of universal gestures. Rather, the playing (ad hoc composing) of a complex instrument is imitated: operating a (400-year-old) organ with hands and feet produces gestures as a side effect, but those gestures always depend on the purpose of the sound production and the construction of the console. The circuitry of my sensor boards and the data-acquisition path through to the REAKTOR synthesis software will be presented.
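The core mapping in such a pipeline, from an IR sensor’s analogue voltage to a 7-bit MIDI continuous-controller value, can be sketched as follows. This is a minimal illustration, not the author’s actual implementation: the calibration bounds `v_min` and `v_max` are hypothetical, and a real Sharp-style triangulation sensor has a non-linear voltage-versus-distance curve, so a lookup table or reciprocal fit would typically replace the linear scaling shown here.

```python
def voltage_to_cc(voltage, v_min=0.4, v_max=2.4):
    """Map a sensor voltage (in volts) to a MIDI CC value in 0..127.

    v_min/v_max are hypothetical calibration bounds for one sensor;
    readings outside that range are clamped so the controller value
    never leaves the 7-bit MIDI range.
    """
    norm = (voltage - v_min) / (v_max - v_min)
    norm = max(0.0, min(1.0, norm))  # clamp out-of-range readings
    return round(norm * 127)
```

In practice one such mapping would run per sensor channel, with the resulting values emitted as MIDI Control Change messages (status byte 0xB0 plus controller number and value).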
Composer Karl F. Gerber began playing the electric bass autodidactically. In 1975, he attended musicology lectures with Riethmüller in Freiburg as a guest student. After turning to jazz, he studied double bass with Adelhard Roidinger in Munich. He holds an M.Sc. in physics from the LMU Munich. As a composer he is self-taught, but he attended courses with H. W. Erdmann, Cort Lippe, Robert Rowe, Carola Bauckholt, Götz Tangerding, Alex Grünwald, Joe Haider, and Joe Viera. He has given live algorithmic performances, including a co-improvisation with the University of Michigan Dancers at the 1998 ICMC in Ann Arbor, Michigan, which featured live formula editing, an anticipation of live coding.
“Beautiful Numbers” was awarded the electronic “Music for Dance” prize at Bourges. Since “Loops” for solo piano, he has also created works in traditional notation without electronics, such as “VC3e” for harpsichord four hands. After an invitation to the 2017 Kontakte Festival at the AdK Berlin, his “computer music without loudspeakers” has also attracted international interest, for example at Berklee in Boston and in Seoul, South Korea, in 2019. His installation “Violinautomat” was selected by the ISCM for the World Music Days in Tallinn, Estonia; the critic of Dagens Nyheter called it “fascinating both technically and sonically”. He received the “Award of Distinction” at Matera Intermedia 2020 in Italy and the Best Music Award of the CMMR, Tokyo. His current projects are an automaton for alto recorder, a bowed psaltery with 16 bows, an extended snare drum, and a hammer zither.