9:00 - 9:20
Welcoming address
Nicola Costa, Superintendent, Teatro dell'Opera di Genova Carlo Felice
Antonio Camurri, President, Associazione di Informatica Musicale Italiana (AIMI)
9:20 - 10:10
Models of Emotion
Chair: Antonio Camurri, University of Genova
WOLFGANG: "Emotions" and architecture which bias musical design
Doug Riecken (wolf@research.bell-labs.com)
Bell Labs Research - Lucent Technologies
A computational model of artificial emotions
Antonio Camurri (music@dist.unige.it), Pasqualino Ferrentino (linus@musart.dist.unige.it), Riccardo Dapelo (riccardo@musart.dist.unige.it)
Laboratory of Musical Informatics - http://MusArt.dist.unige.it, DIST-University of Genova.
10:20 - 11:35
Human-Machine Performance
Chair: Insook Choi, Human-Computer Intelligent Interaction Lab., Beckman Institute, USA
Interactivity vs. control: human-machine performance basis of emotion
Insook Choi (ichoi@ncsa.uiuc.edu)
HCII, Beckman Institute, University of Illinois at Urbana-Champaign
Musical transaction and interactive emergence
Jonathan Impett (jfi21@cam.ac.uk) Faculty of Music, University of Cambridge, West Road, Cambridge CB3 9DP, UK.
Performance with refractions: Understanding musical gestures for interactive live performance
Nicola Orio*, Carlo De Pirro**
* CSC - DEI, Università di Padova (orio@dei.unipd.it)
** Conservatorio di Rovigo (cdp@csc.unipd.it)
11:55 - 12:45
Agent Architectures for Interactive Performance (I)
Chair: Doug Riecken, Bell Laboratories Research
An architecture for multimodal environment agents
Antonio Camurri, Alessandro Coglio, Paolo Coletta, Claudio Massucco (music@dist.unige.it, {tokamak, ciota, alex}@musart.dist.unige.it)
Laboratory of Musical Informatics - http://MusArt.dist.unige.it, DIST-University of Genova.
Interactive poem
Naoko Tosa, Ryohei Nakatsu {tosa,nakatsu}@mic.atr.co.jp
ATR Media Integration & Communications Research Laboratories, Japan
12:45 - 12:55
Announcements
14:00 - 14:50
Agent Architectures for Interactive Performance (II)
Chair: Doug Riecken, Bell Laboratories Research
Gesture controlled music performance in a real-time network
Ioannis Zannos*, Paul Modler*, Kuniaki Naoi**
* Staatliches Institut für Musikforschung, Berlin {iani,modler}@sim.spk-berlin.de
** Technische Universität Berlin (naoi@cs.tu-berlin.de)
A multimedia environment for interactive music performance
Roberto Bresin, Anders Friberg {roberto, anders}@speech.kth.se
Royal Institute of Technology - Speech, Music and Hearing, Sweden
15:00 - 15:50
Movement and Gesture: from Virtual Musical Instruments to Dance/Music Systems (I)
Chair: Robin Bargar, National Center for Supercomputing Applications, USA
Instrumental Gestural Mapping Strategies as Expressivity Determinants in Computer Music Performance
Joseph B. Rovan, Marcelo M. Wanderley, Shlomo Dubnov, Philippe Depalle {rovan, wanderle, dubnov, phd}@ircam.fr
Analysis-Synthesis Team/Real-Time Systems Group, IRCAM, France
Toward Kansei evaluation of movement and gesture in music/dance interactive multimodal environments
Antonio Camurri, Roberto Chiarvetto, Alessandro Coglio, Massimiliano Di Stefano, Claudia Liconte, Alberto Massari, Claudio Massucco, Daniela Murta, Stefano Nervi, Giuliano Palmieri, Matteo Ricchetti, Riccardo Rossi, Alessandro Stroscio, Riccardo Trocca
Laboratory of Musical Informatics - http://MusArt.dist.unige.it, DIST-University of Genova.
16:10 - 17:25
Movement and Gesture: from Virtual Musical Instruments to Dance/Music Systems (II)
Chair: Robin Bargar, National Center for Supercomputing Applications, USA
Emotional aspects of gesture recognition by a neural network, using dedicated input devices
Paul Modler, Ioannis Zannos {modler, iani}@sim.spk-berlin.de,
Staatliches Institut für Musikforschung, Berlin
Empty-handed gesture analysis in Max/FTS
Axel Mulder, S. Sidney Fels, Kenji Mase {mulder, fels}@mic.atr.co.jp,
ATR Media Integration and Communications labs, Kyoto, Japan.
Music Composition, Improvisation, and Performance Through Body Movements
Roberto Morales-Manzanares*, Eduardo F. Morales** (roberto@kaliman.cimat.mx)
* Lab. de Informática Musical, Univ. de Guanajuato, Centro de Investigación en Matemáticas, Mexico.
** ITESM Campus Morelos
Multimedia Concert
Auditorium Montale, Teatro Carlo Felice, Friday, October 3, 21:00
Excerpts from Rolling Stone
Composer: Insook Choi; Producer: Robin Bargar
Small Vanities - Studies for meta-trumpet, computer and live electronics (1997)
Composer and performer: Jonathan Impett
Syncretism and multimediality: performative structures and models
By Gruppo Sincretica: Giovanni Cospito, audio computing; Rosanna Guida, image computing; Andrea Inglese, writer and narrating voice; Giancarlo Locatelli, clarinets; Simonetta Artuso, voice
Two Studies for sensorized stage, voice, actor and live electronics:
From S. Beckett: Words and Music
Composer: Andrea Nicoli
From W. Kandinsky: The Yellow Sound
Composer: Riccardo Dapelo
Reciting/singing voice: Daniela Aimale; Dancer: Natascia Ragni; Computer and live electronics: A. Nicoli, R. Dapelo.
Study for computer, dancer and robot
Composer: Giuliano Palmieri; Dancer: Natascia Ragni; Computer and live electronics: A. Massucco, G. Palmieri
Saturday, October 4
9:00 - 10:40
Kansei Information Processing
Chair: Shuji Hashimoto, Waseda University, Japan
KANSEI as the third target of information processing and related topics in Japan
Shuji Hashimoto (shuji@shalab.phys.waseda.ac.jp)
Dept.of Applied Physics, Waseda University, Japan
α-EEG indicated KANSEI evaluation on visual image granularity of textures
Tadao Maekawa*, Ryohei Nakatsu*, Emi Nishina**, Yoshitaka Fuwamoto+, Tsutomu Oohashi++
* ATR Media Integration & Communications Research Labs. (maekawa@mic.atr.co.jp)
** National Institute of Multimedia Education
+ Foundation for Advancement of International Science
++ ATR Human Information Processing Research Labs
Physiological and psychological effect of high frequency components above the audible range --- An approach to Kansei Information Processing
Tsutomu Oohashi*, Emi Nishina** (nishina@nime.ac.jp), Norie Kawai+, Yoshitaka Fuwamoto+, Reiko Yagi+ and Masako Morimoto++
* Chiba Institute of Technology and ATR Human Information Processing Laboratories
** National Institute of Multimedia Education
+ Foundation for Advancement of International Sciences
++ The University of Tokyo
Modeling of emotional sound space using neural networks
Kenji Suzuki, Shuji Hashimoto {kenji,shuji}@shalab.phys.waseda.ac.jp
Dept.of Applied Physics, Waseda University, Japan
11:10 - 12:50
Analysis and Interpretation
Chair: Giovanni De Poli, CSC-DEI, University of Padova
Musical structure and expressive intentions as sources of deviations in violin performance: a sonological analysis
Giovanni De Poli, Antonio Rodà and Alvise Vidolin (vidolin@dei.unipd.it)
CSC-DEI, University of Padova, Italy
Analysis of expressive intentions in pianistic performances
Giovanni Umberto Battel *, Riccardo Fimbianti **
* Conservatorio B. Marcello di Venezia
** CSC-DEI, University of Padova (rf@csc.unipd.it)
How are the players' ideas perceived by listeners: analysis of "How High the Moon" theme
Sergio Canazza, Nicola Orio {canazza, orio}@dei.unipd.it
CSC - DEI, University of Padova
A computer system for the automatic detection of perceptual onsets in a musical signal
Dirk Moelants *, Christian Rampazzo **
* IPEM, University of Ghent, Belgium. Dirk.Moelants@rug.ac.be
** DEI-University of Padova, Italy
14:00 - 15:15
Synthesis of Expressive Musical Performance
Chair: Ioannis Zannos, Staatliches Institut für Musikforschung, Berlin
Generating expressive musical performances with SaxEx
Josep Lluis Arcos (arcos@iiia.csic.es), Ramon Lopez de Mantaras, Xavier Serra
IIIA - Spanish Scientific Research Council
Expressive control by fuzzy logic of a physical model clarinet in CSound
Piergiorgio Sartor, Elio Parisi (red@inca.dei.unipd.it)
CSC-DEI, University of Padova. Italy.
Automatic musical punctuation: A rule system, and a neural network approach
Anders Friberg, Roberto Bresin (andersf@speech.kth.se, roberto@speech.kth.se)
Royal Institute of Technology - Speech, Music and Hearing, Sweden
15:35 - 17:15
Modeling Emotion?
Chair: Marc Leman, IPEM, University of Ghent
Emotion - Is it measurable?
Shlomo Dubnov (dubnov@ircam.fr)
IRCAM, France.
Understanding musical emotions
Mladen Milicevic (mmladen@sc.edu)
The University of South Carolina, USA
Authoring intelligent sound for synchronous human-computer interaction
Robin Bargar (rbargar@pop.ncsa.uiuc.edu)
National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign
Melodic expectancy: Comparison between behavioral test and neural network model
Jukka Louhivuori, Petri Toiviainen {louhivuo, petri}@tukki.jyu.fi
Department of Music, University of Jyväskylä, Finland
17:15 - 18:00
Discussion and Conclusion of the Workshop
Imparagiocando3 video presentation
Interactive science exhibition: A playground for true and simulated emotions
Antonio Camurri*, Maria Grazia Dondi**, Giuseppe Gambardella*
* DIST-University of Genova.
** Istituto di Fisica di Ingegneria dell'Università di Genova and Istituto Nazionale per la Fisica della Materia, Italy.
Organizing Committee
Antonio Camurri, Roberto Chiarvetto, Alessandro Coglio, Paolo Coletta, Riccardo Dapelo, Claudio Massucco, Stefano Nervi, Giuliano Palmieri, Matteo Ricchetti, Riccardo Rossi, Riccardo Trocca (Staff of the Laboratory of Musical Informatics at DIST - University of Genova)
More information: Antonio Camurri, music@dist.unige.it, Tel: +39-10-3532988, Fax: +39-10-3532948