Partners

The consortium as a whole comprised five EU centres of MIR RTD excellence (UPF-MTG, OFAI, IRCAM, INESCP and QMUL), unsurpassed in their collective record of academic publications on MIR and in their role in establishing MIR as a discipline.

BMAT, Barcelona Music and Audio Technologies, brings a unique industry perspective as a pioneering organisation that has successfully focused on deploying MIR technologies commercially.

Stromatolite Design Lab is the only known design laboratory to have specialised in catalysing MIR RTD innovation through to industry by generating dynamic, user-oriented audio-visual interfaces for MIR.


Members of the consortium have an extensive track record of joint cooperation on large-scale projects. For example, IRCAM and UPF-MTG have collaborated on the EU-funded CUIDADO and SemanticHIFI projects. UPF-MTG, QMUL and OFAI have collaborated on the EU-funded SIMAC project. OFAI, INESCP and UPF-MTG have collaborated on the EU-funded S2S2 Roadmap for Sound and Music Computing (http://smcnetwork.org/roadmap). Over the past 5 years BMAT has partnered with UPF-MTG on a series of RTD projects. STRO and QMUL have collaborated over a period of 3 years on the UK EPSRC-funded OMRAS2 (Online Music Recognition and Searching) project.


Strong synergies exist among the partners in the areas of audio signal processing, audio content analysis and sound similarity search, while individual partners offer specific expertise within the field of Music Information ReSearch:


Universitat Pompeu Fabra - Music Technology Group (UPF-MTG)

The Music Technology Group (MTG) of the Universitat Pompeu Fabra (UPF) in Barcelona, part of its Department of Information and Communication Technologies, is actively pursuing research in sound and music computing technologies, with a strong focus on audio signal processing, sound and music description, advanced musical interaction, and sound and music communities. The UPF-MTG staff comprises more than 40 researchers (3 associate professors, 12 postdocs and 21 PhD students) from different and complementary disciplines. Representative quantitative indicators of the UPF-MTG's research excellence are its publications: 2 books, 11 journal articles, 47 conference papers and 2-3 PhD theses per year.

The UPF-MTG funds its activity through several competitive public calls and through R&D projects with companies. The UPF-MTG has been involved in several public projects; some examples are SIEMPRE (Social Interaction and Entrainment using Music PeRformance Experimentation), SAME (Sound And Music For Everyone Everyday Everywhere Everyway), PHAROS (audiovisual search across online spaces), and SALERO (intelligent content for media production). In terms of private R&D projects with companies, the UPF-MTG usually collaborates with its spin-offs (BMAT and Reactable), as well as with other companies such as Yamaha Research Center, Google Research, Pinnacle Systems, Steinberg, and Microsoft.


People

Xavier Serra is Associate Professor of the Department of Information and Communication Technologies and Director of the Music Technology Group at the Universitat Pompeu Fabra in Barcelona. After a multidisciplinary academic education he obtained a PhD in Computer Music from Stanford University in 1989 with a dissertation on the spectral processing of musical sounds that is considered a key reference in the field. His research interests cover the understanding, modelling and generation of musical signals by computational means, with a balance between basic and applied research and approaches from both scientific/technological and humanistic/artistic disciplines. Dr. Serra is very active in promoting initiatives in the field of Sound and Music Computing at the local and international levels, serving as editor and reviewer for a number of journals, conferences and European Commission research programmes, and giving lectures on current and future challenges of the field. He has been the principal investigator of more than 20 major research projects funded by public and private institutions, and is the author of 31 patents and of more than 75 research publications. His research excellence has recently been recognised by the European Commission with an ERC Advanced Grant to study World Music from a computational point of view.

Sergi Jordà holds a B.S. in Fundamental Physics and a Ph.D. in Computer Science and Digital Communication. He is a researcher in the Music Technology Group of the Universitat Pompeu Fabra in Barcelona, and a lecturer at the same university, where he teaches computer music, HCI, and interactive media arts. He has written many articles and books, given workshops, and lectured throughout Europe, Asia and America, always seeking to bridge HCI, music performance and interactive media arts. He has received several international awards, including the prestigious Ars Electronica Golden Nica in 2008. He is currently best known as one of the inventors of the Reactable, a tabletop musical instrument that achieved mass popularity after being integrated into Icelandic artist Björk's last world tour, and he is one of the founding partners of the spin-off company Reactable Systems.



Stromatolite Design Lab (STRO)

Stromatolite is an award-winning London-based design research and innovation lab whose clients include Apple, Nike, Nokia and the Financial Times. Over the past 10 years Stromatolite has focused on RTD of innovative digital interfaces and product futures for over 30 commercial clients, as well as on developing teaching methodologies for innovation at Design Products at the Royal College of Art in London, Design Critical Practice at Goldsmiths, University of London, and the University of the Arts London. Stromatolite co-founder Peter Russell-Clarke is now part of the award-winning Apple Industrial Design Team in Cupertino, US. Stromatolite co-founder Michela Magas has specialized in innovative navigation systems for media and music companies and, more recently, in catalysing innovation from EU MIR research centres for applications in the music industry.

The Stromatolite team now comprises 22 regular collaborators. The team's unique position as translational developers and innovation catalysts allows them to capture and interpret the results of scientific research and take them to market as finished products. Stromatolite has developed ideas and innovation in the areas of new systems for searching for music (Sonaris and Songlines), real-time fresh food networks (Blackboard Menu) and new frameworks for interconnected products and the Internet of Things (Open Product Licenses). Their innovative system for searching for music has led to a collaboration with Peter Gabriel and Ed Averdieck (ex OD2 and Nokia Music) on Cue Songs, a new system for licensing music, and has prompted the TSB to present a case study to the UK government showing how the results of innovation can change the music industry business model. Stromatolite has recently been selected for the BIS "Make It in Great Britain" exhibition at the Science Museum during the London Olympics, together with major brands such as Rolls Royce and McLaren, showcasing the best of British manufacturing and innovation.

Stromatolite has been awarded the UK Technology Strategy Board award for Collaboration in Digital Industries; the EU FP7 award for MIReS: the future of music tech; the UK Technology Strategy Board award for Open Product Licenses; and the NEM "art meets science" award.


People

Michela Magas (MA RCA) graduated from the Royal College of Art in London with an MA in Communication Design, specializing in innovation in media. During the 1990s she developed concepts for the newspaper in the digital age and spent 6 years redesigning the Financial Times newspaper and launching numerous publications for the Pearson Group in the context of digitization and rapid global expansion. She went on to create concepts for the Nike Futures Group and to develop novel online navigation systems for a series of global companies, including the precursor to the Apple Coverflow. Following innovative music browsing concepts for Apple iTunes and Peter Gabriel's music companies, she was invited by the University of London to join the £2m UK EPSRC-funded OMRAS2 project, where she developed mHashup, a novel audio-visual interface to large music collections for discovering musical relationships among tracks using MIR technologies. mHashup has been presented worldwide, including at the Science Museum and the British Library in London, the SIGGRAPH 2008 conference in Los Angeles, and the 127th AES Convention in New York, and is the only non-commercial application featured in the BBC Click music engines special, alongside Shazam, Midomi and Pandora. For her Songlines music search project uniting cultures through sound, she was awarded the NEM "art meets science" prize by the EU Commission at the 2010 NEM Summit in Barcelona. Described as an innovation catalyst by the EU, Michela now leads Stromatolite teams in taking research to industry.



Austrian Society for Cybernetic Studies - Austrian Research Institute for Artificial Intelligence (OSKG-OFAI)

The Austrian Research Institute for Artificial Intelligence (OFAI) is a private, non-profit research institution run by the Österreichische Studiengesellschaft für Kybernetik (Austrian Society for Cybernetic Studies). It is devoted to basic and applied research in all areas of Artificial Intelligence. It currently employs some 30 full-time researchers. In the present project it will be represented by its Intelligent Music Processing and Machine Learning Group (IMPML), which has extensive expertise in the areas of machine learning, pattern recognition, intelligent music and signal processing, Music Information Retrieval (MIR), music performance research, and intelligent audio-visual interfaces. The group has been involved in a number of EC projects. Through its leader, Prof. Gerhard Widmer, the research group cooperates closely with the Department of Computational Perception of the University of Linz, which provides access to additional expertise on computer perception and pattern recognition.

OFAI was one of the partners in the project S2S2 (Sound to Sense, Sense to Sound), which produced a Strategic Roadmap for Sound and Music Computing for the European Commission. Additional experience in running research projects comes from many EU and national projects such as: METAL (A Meta-learning Assistant for Providing User Support in Machine Learning and Data Mining), SOL-EU-NET (Data Mining and Decision Support for Business Competitiveness: A European Virtual Enterprise), 3DSearch (3D Ontology-based Web Search Application), MOSART (Music Orchestration Systems in Algorithmic Research and Technology), BioMinT (Biological Text Mining), and SIMAC (Semantic Interaction with Music Audio Contents).


People

Gerhard Widmer is full professor and head of the Department of Computational Perception at the Johannes Kepler University Linz, Austria, and head of OFAI's Intelligent Music Processing and Machine Learning Group. He holds M.Sc. and Ph.D. degrees in computer science from the University of Technology Vienna, and an M.Sc. from the University of Wisconsin-Madison, USA. He has been active both in 'mainstream' AI and machine learning research and in the application of AI techniques to musical and multimedia problems for many years. This is reflected in the diversity of both his publications and his scientific services (e.g., he is on the editorial boards of major publishers and journals both in the AI/machine learning area (AAAI Press, Machine Learning) and in music (Journal of New Music Research)). Dr. Widmer has coordinated OFAI's contributions to numerous national and European research projects, and has directed applied research projects with commercial partners (e.g., the development of the audio-based music recommendation functionality in the digital music player BeoSound5 by Bang & Olufsen (B&O)). He has been awarded several research prizes, including Austria's highest scientific award, the "Wittgenstein Prize" (2009), for his research on Artificial Intelligence, Intelligent Music Processing and Music Performance Analysis. In 2006 he was elected a Fellow of the European Coordinating Committee for Artificial Intelligence (ECCAI) for his contributions to European AI research.

Arthur Flexer is a senior researcher at the OFAI with about fifteen years of experience in basic and applied research on machine learning, signal processing and musical applications. He holds M.Sc. and Ph.D. degrees in psychology with an emphasis on Artificial Intelligence from the University of Vienna. He has been working in research since 1993 at the OFAI, the University of California San Diego (USA), the Okinawa Institute of Science and Technology (Japan) and as an Assistant Professor at the Center for Brain Research, Medical University of Vienna. He is author or co-author of more than 50 peer-reviewed articles. He has experience in leading research projects and is currently managing two nationally funded MIR-related projects.



Institut de Recherche et Coordination Acoustique/Musique (IRCAM)

The Institut de Recherche et Coordination Acoustique/Musique (IRCAM) is a non-profit organisation associated with the Centre Pompidou in Paris, France, dedicated to the relationships between science, technology and contemporary music production. IRCAM conducts multidisciplinary research in the field of Sciences and Technologies of Music and Sound (STMS) and hosts a joint STMS research unit with CNRS and University Paris 6 (Université Pierre et Marie Curie). The scientific fields involved include acoustics, digital audio signal processing, computer science (real-time systems, man-machine interfaces, languages, databases), auditory and music perception and cognition, and musicology. The IRCAM R&D department gathers 70 researchers, engineers and PhD students and is the largest public lab worldwide dedicated to STMS. The IRCAM R&D teams develop and support six software environments for professional music production and composition.

IRCAM has been a pioneer in the field of audio indexing and music information retrieval. Its work in this area started in the 1990s through collaborations with France Telecom (CTI/CRE) and French national projects: the Online Studio project (1996-98, 'Autoroutes de l'information' programme) implemented an online database server of 30,000 audio instrument samples with similarity search features based on psychoacoustic studies of timbre spaces. This work on audio samples was continued in the framework of the national Écrins project and the CUIDAD and CUIDADO projects (Content-based User Interfaces and Descriptors for Digital Audio Databases available Online, 2001-2003), in which IRCAM was the lead contractor. CUIDADO, which also featured content-based management of musical recordings, was the first large-scale European project dedicated to Music Information Retrieval and systematically addressed the automatic extraction of music descriptors from audio signals; it featured a music description data model and online server based on MPEG-7. Since CUIDAD and CUIDADO, IRCAM has been a major contributor to the MPEG-7 audio standard, and it was also in charge of MPEG-7 standardisation in the framework of the MUSIC NETWORK project. The results of CUIDADO, which focused on the server side, were then applied in the SemanticHIFI project, which aimed, in collaboration with major European actors such as Sony France/Europe and Fraunhofer IDMT, to design a new generation of hi-fi systems featuring innovative functionalities and interfaces for the content-based manipulation of musical recordings. All these projects were coordinated by IRCAM.

IRCAM currently leads the music indexing activities in the Quaero project, a €200M project between France and Germany targeting multimedia indexing (still image, video, audio, music, speech, NLP), with partners from the academic side (CNRS, INRIA, Telecom Paris-Tech, RWTH, Karlsruhe University, DGA) and from the industrial side (Technicolor, Orange, Exalead, Yacast, Jouve). IRCAM also plays a leading role in the international Music Information Retrieval community (ISMIR): it hosts the community music-ir@ircam.fr mailing list and hosted the first European edition of the ISMIR Conference in 2002. In France, IRCAM has also played a leading role in recent MIR-related projects such as the ANR Ecoute, Sample Orchestrator 1 and Sample Orchestrator 2 projects.


People

Geoffroy Peeters received his M.Sc. in electrical engineering from the Université Catholique de Louvain (Louvain-la-Neuve) in 1995 and his Ph.D. in computer science from the Université Paris VI, France, in 2001. During his Ph.D. he developed new signal processing algorithms for speech and audio processing. He has worked at IRCAM since 1999. His current research interests are in signal processing and pattern matching applied to audio and music indexing. He has developed new algorithms for timbre description, sound classification, audio identification, rhythm description, music structure discovery, audio summary generation, and music genre/mood recognition. Dr. Peeters holds several patents in these fields. He is co-author of the ISO MPEG-7 audio standard. He coordinated the indexing research activities for the CUIDAD, CUIDADO and SemanticHIFI European projects. He now leads the music indexing research activities of the Quaero project.



Instituto de Engenharia de Sistemas e Computadores do Porto (INESCP)

The Institute for Systems and Computer Engineering of Porto (INESCP) is a private non-profit association whose partners are INESC, the University of Porto, its School of Engineering and School of Sciences, and the Polytechnic Institute of Porto. INESC Porto has the status of a Public Utility Institution and was appointed by the Portuguese Government as an Associated Laboratory, following an international evaluation that awarded it the classification of "Excellent".

INESC Porto acts as an interface between the academic world and the information technology and electronics sector, carrying out scientific research and development as well as technology transfer and advanced professional training, under research contracts with industry and services and in the framework of research projects funded by national agencies and EC R&D programmes. The activities of INESC Porto in this project will be carried out by the Telecommunications and Multimedia Unit (UTM), which has extensive experience in research projects at both national and international level, as well as in development contracts and technology transfer. Over the past 15 years the Unit has actively participated in about 30 projects in the framework of EC programmes (ESPRIT, EUREKA, RACE, ACTS and IST). Examples include projects in the area of digital television and multimedia content chains, such as ATLANTIC, G-FORS, METAVISION, CONTESSA, ASSET, NUGGETS and, more recently, ENTHRONE, VISNET I, VISNET II and MOSAICA, as well as projects in the area of communications networks and services, such as ARROWS, DAIDALOS and Ambient Networks. As a result of research and development activities combined with advanced training and post-graduate programmes, a number of spin-off companies have been launched over the past five years.


People

Fabien Gouyon (Ph.D. Computer Science, UPF Barcelona; M.Sc. IRCAM Paris; M.Sc. Signal Processing, ENSEEIHT Toulouse; B.Sc. Theoretical Physics, UPS Toulouse) is Invited Assistant Professor at the Faculty of Engineering of the University of Porto, Portugal, and senior research scientist at the Telecommunications and Multimedia Unit (UTM) of INESC Porto, where he leads (together with Prof. Carlos Guedes) the Sound and Music Computing research group. His main research and teaching activities are in Music Information Retrieval and Music Pattern Recognition. He has published over 50 papers in peer-reviewed international conferences and journals, published a book on computational rhythm description, gave the first tutorial on the topic at the International Conference on Music Information Retrieval in 2006 (ISMIR 2006), and participated in the writing of the European Roadmap for Sound and Music Computing, published in 2007. He was General Chair and Scientific Programme co-chair of the Sound and Music Computing Conference 2009 and is General Chair of the 2012 Conference of the International Society for Music Information Retrieval (ISMIR 2012).

Carlos Guedes (PhD in Composition, NYU, 2005) is currently Associate Professor at the Faculty of Engineering of the University of Porto, where he teaches in the Master in Multimedia and the Doctoral Program in Digital Media of the UT Austin | Portugal partnership. His multifaceted compositional activity ranges from traditional instrumental music to works employing digital interactive systems in theatre and dance performance, and his work has been presented in Europe and the United States at venues such as De Waag, ARCO, SIGGRAPH 2008, and the Shanghai eArts Festival. As a researcher, he co-founded with Fabien Gouyon the Sound and Music Computing Group at INESC Porto, where he currently leads research projects in real-time automatic music generation. Carlos Guedes was music program co-chair (together with Pedro Rebelo, SARC) of the Sound and Music Computing Conference 2009.



Centre for Digital Music - Queen Mary University of London (C4DM-QMUL)

The Centre for Digital Music (C4DM) at Queen Mary University of London (QMUL) is a world-leading multidisciplinary research group in the field of Music & Audio Technology. C4DM has around 50 full-time members, including academic staff, research staff and research students, working on topics including music information retrieval, music signal processing, music knowledge representation, machine listening, audio engineering, human-machine interaction and digital performance. Research funding since 2001 totals over £14m, mainly from the EPSRC (14 projects) and the EU (6 projects), as well as from the Royal Society, the Leverhulme Trust, JISC, the Nuffield Foundation, and industry. EU-funded projects include DIGIBIC (2010-2013), SMALL (2009-2012), EASAIER (2006-2008), SIMAC (2004-2006) and SAVANT (2002-2004). Since 2001, members of the C4DM have published over 200 conference papers, journal articles and book chapters. C4DM's research has also featured widely in the media, including in New Scientist, The Economist, The Guardian, Scientific American, the Financial Times and BBC World Business News. Conferences hosted by C4DM include: the International Conferences on Digital Audio Effects (DAFx-03), Music Information Retrieval (ISMIR 2005), Auditory Display (ICAD 2006) and Independent Component Analysis (ICA 2007), the British Computer Society conference on Human-Computer Interaction (HCI 2006: Engage), the Audio Engineering Society conference on New Directions in High Resolution Audio (AES-31, 2007), and the 89th MPEG Meeting (2009).


People

Simon Dixon is a lecturer at QMUL and head of the Music Informatics group. He has a PhD in Computer Science and an LMusA in Classical Guitar. His research focuses on music informatics, including high-level music signal analysis and the representation of musical knowledge. He is principal investigator on Musicology for the Masses (2010-12, RCUK) and Linked Music Metadata (2010-11, JISC), and co-investigator on Sustainable Software for Digital Music and Audio Research (EPSRC, 2010-14), AudioMiner (WWTF Austria, 2010-12) and OMRAS2 (EPSRC, 2007-10). He is the author of the beat tracking software BeatRoot (ranked first in the MIREX 2006 evaluation) and the audio alignment software MATCH (Best Poster Award, ISMIR 2005), and co-author of the top-ranked Audio Chord Detection and Music Structure Segmentation systems (MIREX 2009). He was Programme Chair for ISMIR 2007 and General Co-chair of the 2011 Dagstuhl Seminar on Multimodal Music Processing, and has published over 70 refereed papers.

Anssi Klapuri received his PhD degree in Information Technology from Tampere University of Technology (TUT) in 2004. He was a senior researcher and research manager at TUT, before joining QMUL as lecturer in 2009. His research interests include audio signal processing, auditory modelling, and machine learning. He has worked as a principal investigator in industrial research projects worth over €1.5M since 2001. He received the IEEE Signal Processing Society 2005 Young Author Best Paper Award. He has co-edited one book and authored 14 journal papers and 6 book chapters, and a number of conference papers.

Mark Sandler is Head of the School of Electronic Engineering and Computer Science at QMUL and founder of the Centre for Digital Music. He became Professor of Signal Processing at QMUL in 2001, following 19 years at King's College London, where he was also Professor of Signal Processing. He has been Principal Investigator on many UK projects and was the lead investigator for QMUL on the SIMAC project. He was General Chair of the DAFx 2003 conference and General Co-chair of the ISMIR 2005 conference. He is Chair of the Audio Engineering Society Technical Committee on Semantic Audio Analysis and a Fellow of the IEE and the AES. He has published well over 300 papers in conferences and journals.



Barcelona Music and Audio Technologies (BMAT)

BMAT (Barcelona Music and Audio Technologies) is a technology company specialized in digital music products and services. BMAT is a spin-off company of the Music Technology Group (MTG) of the Universitat Pompeu Fabra (UPF) and an output of the IST project SIMAC. BMAT is one of the few companies worldwide able to offer a range of solutions around audio and music, including audio analysis and description software, music recommendation, and audio identification, among others. BMAT aims to be a key player in digital music technologies. Its mission is to analyse all the music in the world in order to describe it, organize it, and track it, enabling new ways of interaction and management. As of today BMAT is a team of 20 engineers, PhDs, musicians and business people headquartered in Barcelona, with representatives in Japan, Mexico, Russia, the UK and Portugal. BMAT's solutions serve partners across Europe, Africa, Asia, Latin America and the US, including companies such as Yamaha, Intel, Nielsen, Telefónica, The Orchard, Jamendo, SESAC, Grooveshark and EMI Music Publishing.


People

Alex Loscos received his B.S. and M.S. degrees in Signal Processing Engineering in 1997. In 1998 he joined the Music Technology Group (MTG), where he spent nine years, publishing in the most relevant proceedings and journals, co-authoring a couple of books and appearing as an author on more than 15 patents. After several years as a researcher, lecturer, developer and manager, he co-founded Barcelona Music & Audio Technologies (BMAT), the spin-off company of the MTG, in 2006. In 2007 he received his Ph.D. in Computer Science and shortly afterwards became Chief Strategy Officer at BMAT. A year and a half later he took over the position of BMAT's Chief Executive Officer. He is also passionate about music, an accomplished composer, and a former member of internationally distributed bands. In 2004 he co-founded Safari Music, a record label through which he has released his own music.

Salvador Gurrera holds an MBA and is a senior engineer in electronics and a telecommunications technician. After some years as a product engineer at HP he joined the Music Technology Group (MTG), where he became Head of Administration, Finance and Technology Transfer of the lab. He was responsible for managing a multi-million budget and for negotiations and agreements with companies and governments. Salvador co-founded Barcelona Music and Audio Technologies (BMAT) and is currently its Chief Finance and Operations Officer.

Oscar Paytuvi holds an MSc in Electrical Engineering from Ramon Llull University (URL), and degrees in Management Skills Applied to IT Projects and in Product Management and Innovation in Technology from the Polytechnic University of Catalonia (UPC). For more than 10 years, Oscar was an R&D software engineer in the field of multimedia and mobile technologies at companies such as Nokia and Philips. Since 2005, Oscar has been a technical product manager / programme manager on MIR projects (including publicly funded projects such as ASSETS and BUSCAMEDIA) at BMAT.
