Keynotes


SPEAKER: JEAN-PIERRE BRIOT

DEEP LEARNING FOR MUSIC GENERATION: ORIGINS, SUCCESSES AND CHALLENGES

Sunday, October 24, 18:00 – 19:30

https://youtu.be/wQjpSA32Ys4

A growing area of application of the current wave of deep learning (the hyper-vitamined return of artificial neural networks) is the generation of creative content, notably music (but also images and text). The motivation is to use machine learning techniques to automatically learn musical styles from arbitrary musical corpora and then to generate musical samples from the estimated distribution, with some degree of control over the generation. In this talk, we will first analyze some early works from the late 1980s that used artificial neural networks for music generation, and how their pioneering contributions prefigured recent techniques (e.g., hierarchical models and DeepDream). We will then present some recent achievements using latest-generation architectures such as VAEs, GANs and Transformers, and analyze successes and challenges. Last but not least, we will also address the issue of the authorship of the generated music and the corresponding rights issues.

Jean-Pierre Briot is a senior researcher in computer science at CNRS (Centre National de la Recherche Scientifique) and Sorbonne Université in Paris, France. He is also a permanent visiting professor at PUC-Rio and has recently been a visiting professor at UNIRIO, both in Rio de Janeiro, Brazil. His general research interests concern the design of intelligent, adaptive and cooperative software, at the crossroads of artificial intelligence, distributed systems and software engineering, with application fields such as the Internet of Things, decision support systems and computer music. He is the first author of a recent reference book on the use of deep learning (artificial intelligence/machine learning) techniques for music generation. His current interest is focused on AI and music creativity.
Jean-Pierre Briot holds a master's degree in mathematics (1980), a PhD in computer science (1984) and an “habilitation à diriger des recherches” in computer science (1989), all from Université Pierre et Marie Curie (aka Paris VI, renamed/merged into Sorbonne Université in 2018). He also holds degrees in music, musical acoustics and Japanese language. He has been a visiting researcher at various institutions (Kyoto University, Tokyo Institute of Technology, University of Illinois at Urbana-Champaign, University of Southern California, University of Tokyo…). He has advised more than 25 PhD students and has edited 12 books or journal special issues. In 2010, he created the CNRS permanent representation office in Rio de Janeiro for scientific cooperation with South America. For more details (including access to publications), please see http://www-desir.lip6.fr/~briot/cv/


SPEAKER: IGOR PEREIRA

Music Source Separation: Challenges and Solutions

Monday, October 25, 18:00 – 19:30

https://youtu.be/Yx9jFE4Tl0Q

Igor Pereira is Head of Machine Learning at the Moises app, where he leads the team developing intelligent algorithms for music information retrieval, with a focus on music source separation. He holds a PhD in Electrical and Computer Engineering from the Universidade Federal do Rio Grande do Norte (2019), including a sandwich doctorate internship (SWE) at the Institute of Applied Sciences and Intelligent Systems (ISASI) in Italy (2018). During his PhD, he researched synchronization strategies for multimedia systems using machine learning algorithms, developing state-of-the-art methods published in international journals. He also holds a master's degree in Electrical Engineering from the Universidade Federal da Paraíba (2014) and an undergraduate degree in Electrical Engineering from the Instituto Federal de Educação, Ciência e Tecnologia da Paraíba (2011). His areas of experience include music information retrieval, distributed systems, artificial neural networks, machine learning, IP cores, FPGAs and embedded systems.


SPEAKER: DANI RIBAS

Photography: Patrícia Soransso

Human Aspects of the Psychic Economy of Algorithms

Tuesday, October 26, 18:00 – 19:30

https://youtu.be/IdPRrk4JIGw

Dani Ribas is the director of Sonar Cultural Consultoria. She holds a PhD in Sociology from UNICAMP and, building on her thesis, created the ID_Musique method, which analyzes audience behavior and incorporates it into the creative and business strategies of music artists. She has been a consultant for UNESCO and Mercosul Cultural, a researcher at CPF-SESC SP, director of DATA SIM, and conducted research on the creative economy for IPEA. She is a member of the SateliteLAT network of women in the Latin American music industry. She teaches Music Business at Music Rio Academy, OnStage Lab, FESP SP and Música & Negócios PUC-Rio, and Cultural Management at UNICAMP. She is a consultant in music career planning and management, based on data analysis and audience behavior trends.


SPEAKER: MARCELO M. WANDERLEY

DESIGNING NIMES FOR WIDESPREAD USE

Wednesday, October 27, 18:00 – 19:30

https://youtu.be/19C2Uw6z19M

New interfaces for musical expression (NIMEs) allow unprecedented access to sound parameters and the possibility to interact with sound and music on a variety of levels: from acoustic instrument simulations, to interfaces for conducting pre-recorded music (e.g., interactive sequencers), to laptop orchestras and live coding. Despite the hundreds of interfaces proposed in recent decades, very few have been played by more than a dozen individuals. This talk will discuss the challenges and opportunities involved in designing NIMEs aimed at a large base of users. Could NIMEs become widespread tools for musical expression, or are musical interfaces doomed by the drive for novelty (the “N” in NIME)? To answer this question, I will discuss several examples of interfaces developed at the Input Devices and Music Interaction Laboratory (IDMIL), ranging from one-off prototypes to DMIs produced in small numbers (dozens of copies), focusing on issues inherent to academic institutions’ structures, the availability of resources, and the need for specialized expertise to produce commercial-level instruments.

Marcelo M. Wanderley is Professor of Music Technology at McGill University, Canada, and International Research Chair at Inria Lille – Nord Europe, France. His research interests include the design and evaluation of digital musical instruments and the analysis of performer movements. He co-edited the electronic book “Trends in Gestural Control of Music” (2000), co-authored the textbook “New Digital Musical Instruments: Control and Interaction Beyond the Keyboard” (2006), and chaired the 2003 International Conference on New Interfaces for Musical Expression (NIME03). He is a member of Computer Music Journal’s Editorial Advisory Board and a senior member of the ACM and the IEEE.