Martin 125 (Digital Media Production Screening Room; just walk in through the same area used for Friday’s “Transition to Nightlife”)
Five-minute Videos follow the idea of “Show us your research, quickly!” and are intended to be fast introductions to research areas with a follow-up discussion.
Raspberry Rainbow Synth
The Raspberry Rainbow Synth is a DIY synthesizer that uses an onboard Raspberry Pi 4 single-board computer, Python 3, and Cycling ’74’s RNBO software along with various potentiometers, buttons, and an LCD. The synth uses RNBO patches as the sound source and provides a menu on startup for a player to select which patch to load for performance. The synth is an upgraded version of an original instrument I built in 2015 using a Raspberry Pi 2 computer and Pure Data.
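The startup patch menu described above could be sketched in Python 3 roughly as follows; this is an illustrative sketch only, and the directory layout, file extension, and function names are assumptions, not the instrument’s actual code:

```python
from pathlib import Path

def list_patches(patch_dir="patches"):
    """Return sorted patch names found in the patch directory.

    Assumes each exported RNBO patch lives as a .json file in a local
    "patches/" folder -- a hypothetical layout for illustration.
    """
    return sorted(p.stem for p in Path(patch_dir).glob("*.json"))

def prompt_for_patch(patches, read=input):
    """Print a numbered menu and return the patch name the player picks."""
    for i, name in enumerate(patches, start=1):
        print(f"  {i}. {name}")
    choice = int(read("Select a patch to load: "))
    return patches[choice - 1]
```

In the real instrument the chosen name would then be handed off to the RNBO runtime; the `read` parameter is injectable here only so the menu logic can be exercised without a keyboard attached.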
Clay Chaplin is a computer musician, improviser, and audio engineer from Los Angeles who explores the realms of audio-visual improvisation, sound synthesis, field recording, electronics, and computer processing for creative sonic expression. Throughout his career he has worked on many projects involving experimental music, video, audio recording, and interactive computer systems. Chaplin studied composition and computer music with Morton Subotnick, Tom Erbe, and Mark Trayle. Clay is currently the Director of the Computer Music and Experimental Media studios at CalArts, is Co-director of the Experimental Sound Practices program, and holds the Larry Levin Chair in Contemporary Music.
Analytical Descriptors for Sonic Spatialization in Music Composition
The field of music composition delights in discovering new ways to organize elements of music, be it pitch, rhythm, harmony, timbre, texture, or dynamics. But one fundamental component of sound has been comparatively underexplored: its spatial quality. The location of a sound source, or where a sound is perceived to be within space, can be as emotively and rhetorically important to a piece of music as the intervals in a melody or the timbres of the instruments. One potential reason that spatialization is underdiscussed is a dearth of language and analytical tools to describe how it is used in a composition.
To remedy that, my research consists of adapting Steve Larson’s theory of musical forces from a tonal context to a spatial one. Larson’s theory compares events in common practice music to physical phenomena and suggests that listeners use these metaphors to interpret music. These forces include gravity (the tendency for notes to descend to an established point), magnetism (the tendency for some notes to predictably lead to certain others), and inertia (the tendency for a pattern to continue). Interestingly, these same forces map easily onto a spatial context. Gravity describes a fixed sound stuck in a single location, magnetism can be gestures that move towards each other or repel away, and inertia is heard in how sounds may skirt along a predictable path. By using the analogy of musical forces to describe how sounds interact with each other in space, I hope to lay the groundwork for future research to talk about the role of space more critically in music composition.
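As a toy illustration (my own sketch, not Larson’s or the presenter’s formalism), the three forces can be read as terms in a simple update rule for a sound’s position along one spatial axis; every coefficient here is a hypothetical tuning parameter:

```python
def step(pos, vel, anchor, other, g=0.05, m=0.03, inertia=0.9):
    """Advance one time step of a 1-D spatial trajectory.

    gravity pulls the sound back toward a fixed anchor location,
    magnetism attracts it toward another sound's position, and
    inertia carries part of the previous motion forward.
    """
    gravity = g * (anchor - pos)
    magnetism = m * (other - pos)
    vel = inertia * vel + gravity + magnetism
    return pos + vel, vel
```

With `m=0` the sound simply settles back toward its anchor (gravity alone); with `g=0` it drifts toward the other source (magnetism alone); a nonzero `inertia` keeps it moving along its established path even as the pulls change.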
Daniel Karcher is a composer originally from the Pacific Northwest who enjoys creating music for both acoustic and electronic mediums. Currently he is working on a DMA in composition at the University of Georgia and has earned previous degrees from the University of Miami and Oberlin Conservatory. He endeavors to make each piece he writes a unique experience for both himself and others listening to it. Aside from composing, his musical interests include discussing aesthetics, playing viola among friends, and analyzing video game soundtracks, text scores, and indeterminate music.
Faceplayer: a social face for your AI music app
Collaborating with a machine that improvises novel music can be engaging, but watching a human performer collaborate with a musical machine is far less engaging when the machine has no face. To solve this problem, the Machine Musician Lab presents Faceplayer, an easy-to-use system that allows quick prototyping of sound-visual connections using videos of human dancers and musicians as stand-ins for robot bodies. While a computer-bound ML-enabled system actively listens and responds by playing a Disklavier, Faceplayer provides a visual cue that gives the audience the impression that the movement and the sound are linked. The Faceplayer image may then be projected onto any surface for many different effects. The team has devised several tests and use cases, and early results look promising. Later this fall, the system will be used in a live musical Turing test at IUPUI. Attendees of this presentation will see video clips of Faceplayer in action and hear a detailed description of how the team achieves these effects; lastly, the team will discuss its plans for developing the system.
Jason Palamara is a technologist and performer on acoustic and electronic instruments. As an Assistant Professor of Music Technology at IUPUI, he specializes in the development of machine learning-enabled performance technologies for music. He is the founder/director of IUPUI’s 30+ member DISEnsemble (Destructive/Inventive Systems Ensemble), which builds or hacks musical and non-musical stuff and plays live concerts. He regularly performs and composes music for modern dance as a solo artist and maintains long-term creative partnerships with electroacoustic musician Justin Comer (under the name JCϟjp) and percussionist-composer Scott Deal, with whom he developed AVATAR. This autonomous music system uses machine learning to play along with live improvisation. In his spare time, he raises chickens.
Sha#Bang! modules in VCV Rack
VCV Rack is a Eurorack simulator available for Mac, Windows, and Linux. The core application is a free download, and it supports both paid modules and thousands of free and/or open-source community module collections written in C++. I created Sha#Bang! Modules as a unique collection of innovative, probabilistic, generative, and rhythmically complex modules for VCV Rack. A few examples of the modules in the collection include:
“Collider” – a physical model of various shakers and wind chimes
“Cosmosis” – a continuous (non-discrete) sequencer based on constellations
“Neutrinode” – an animated sequencer where time moves out from nodes to their connected particles
“Orbitones” – physics-based animated LFO
“Polyrhythm Clock” – clock featuring three layers of embedded rhythms
“Qubit Crusher” – a bit crusher and downsampler using fractional rates with the ability to modulate or randomize
“StochSeq” & “StochSeq4” – sequencers that use probabilistic patterns
“Talea” – an arpeggiator with polyrhythmic capabilities dependent upon note intervals
These modules make heavy use of probability and animation. Not only are they great for live interactive performance, but they also serve as creative educational tools for teachers, students, and professionals. As open-source software available on all platforms, with a user-friendly interface that mimics real-world hardware, VCV Rack provides an accessible and affordable introduction to the world of modular synthesis. Additionally, for students with no prior experience or only some studio DAW experience, Sha#Bang! Modules provide a wonderful gateway to patching languages like Max/MSP and Pure Data. Sha#Bang! Modules live on GitHub: https://github.com/jeremymuller/Sha-Bang-Modules
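To give a flavor of the probabilistic approach behind modules like StochSeq, here is a minimal sketch of a sequencer whose steps fire by chance rather than deterministically. The actual Sha#Bang! modules are VCV Rack plugins written in C++; this Python version, with its names and structure, is illustrative only:

```python
import random

def stoch_seq(probabilities, steps, rng=random.random):
    """Yield one gate (True/False) per clock tick.

    probabilities[i] is the chance that step i fires when the
    playhead cycles onto it, so the pattern varies on every pass.
    """
    for tick in range(steps):
        yield rng() < probabilities[tick % len(probabilities)]

pattern = [1.0, 0.25, 0.75, 0.0]           # per-step trigger probabilities
gates = list(stoch_seq(pattern, steps=8))  # step 1 always fires, step 4 never does
```

Because each pass through the pattern rolls fresh random values, the rhythm stays anchored to the clock while its details shift probabilistically, which is what makes these modules useful both on stage and for teaching chance procedures.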
Jeremy Muller is active as a percussionist, composer, and multimedia artist. He’s been described as “highly creative” by Take Effect and has performed as a featured soloist at many venues throughout the United States, Canada, and Australia including International Computer Music Conference, The Banff Centre for the Arts (Canada), Transplanted Roots (Australia), NYC Electroacoustic Music Festival, ZeroSpace (University of Virginia), Northern Illinois University, Southwest Electroacoustic Festival, Jacksonville Electroacoustic Music Festival, UNC Pembroke, Balance-Unbalance International Conference, World Forum on Acoustic Ecology, the Musical Instrument Museum, Phoenix Art Museum, and PASIC. He has given world premieres of works by many composers including Matthew Burtner, Alexandre Lunsqui, Cristyn Magnus, and an evening-length tour de force solo work by Stuart Saunders Smith.
As a composer, his music has been performed by Arcomusical, Heartland Marimba Quartet, Arizona Contemporary Music Ensemble, Crossing 32nd Street, Manuel Rangel, clarinetist Robert Spring, Sam Houston State University, Glendale Community College Percussion Ensemble, and many other universities across the U.S. Jeremy recently released his debut solo percussion album on Albany records which includes several recording premieres, and his music can also be heard on Arcomusical’s third album “Emigre & Exile.” Some of his research & publications can be found through Bachovich Music Publications & Percussive Notes. Jeremy is currently a Lecturer at Georgia Institute of Technology and an Artist Affiliate at Georgia State University. He received a Doctor of Musical Arts from Arizona State University, a Master of Music from the Cincinnati College-Conservatory of Music, and a Bachelor of Music degree from Appalachian State University.
Audiovisual composed in Max and Ableton, programmed in GLSL and rendered with ShaderBoi
Michele Zaccagnini’s creative work consists of both music and audiovisuals implemented with idiosyncratic algorithmic techniques that are aimed at exploring static, non-narrative music. Michele’s main area of theoretical research focuses on algorithmic composition; he published a paper on Aldo Clementi’s compositional process in Perspectives of New Music, a description of one of his own compositional processes in The OM Composer’s Book n.3 (published by IRCAM), and a paper on the Nonlinear Sequencer for the 2020 SEAMUS Conference, which is also available as a tool package in Max/MSP’s Package Manager.
Electronic Music in STEM School Environments
STEM (Science, Technology, Engineering, and Math) learning environments center on project-based learning: students expect class to involve hands-on application of course material, using their knowledge to build all sorts of things in messy maker spaces. This includes music class. This presentation will discuss some of the advantages of music class as a hub for learning and expressing all aspects of STEM education. Composers who use music technology will find STEM environments particularly interesting as venues for performance, new collaborative projects, and outreach.
Timothy Crist is a Professor of Music at Arkansas State University where he teaches composition, electronic music, theory, classical guitar, and conducts/directs the ASU New Music Ensemble and Guitar Ensembles. He is the recipient of the National Endowment for the Arts 2003 Individual Artist Fellowship for Music Composition. He has composed over 200 musical works for various media including solo, chamber, orchestral, band, and electronic. Crist’s music is performed throughout the United States and has been featured on National Public Radio, SEAMUS conferences, Southeastern Composers League, the Electronic Music Midwest Festival, the Florida Electroacoustic Music Festival, and the Society of Composers, Inc. conferences.