CV

Education
PhD in Sound and Music Computing · 2020--2024
Universitat Pompeu Fabra (UPF) · Barcelona, Spain
Details

My dissertation, *Design, Development, and Deployment of Real-Time Drum Accompaniment Systems*, examines the generation of real-time symbolic drum accompaniments with a focus on live improvisation contexts. Over the course of this research, supervised by Dr. Sergi Jordà, I designed, developed, and evaluated three accompaniment systems of increasing complexity, each built around lightweight generative models optimized for real-time performance. Each system was evaluated using a combination of objective measures --- including computational performance benchmarks, model quality analyses (generation diversity, interpolation quality, and rhythmic characteristics) --- and qualitative evaluations through structured sessions with professional musician Raül Refree, whose iterative feedback directly shaped subsequent system designs, ultimately leading to a series of live public performances with the developed systems. Beyond the primary systems, this work produced several secondary contributions, including NeuralMidiFx, a wrapper for deploying neural networks as VST plugins, two novel datasets (TapTamDrum and El Bongosero), and an exploration of audio-domain adaptations. A hardware Eurorack version of one of the systems was also designed and deployed.

Master in Sound and Music Computing · 2017--2018
Universitat Pompeu Fabra (UPF) · Barcelona, Spain
Details

My master's studies at the Music Technology Group covered music perception and cognition, real-time interaction, audio programming, digital signal processing, and deep learning. Under the supervision of Sergi Jordà, I completed a thesis on generating basslines interlocked with drum patterns using LSTM networks.

BASc in Electrical Engineering · 2008--2013
University of British Columbia (UBC) · Vancouver, Canada
Details

I completed a degree in Applied Science with a major in Electrical and Computer Engineering, specializing in Digital Signal Processing (DSP) and acoustics during my final year.

Skills
Programming: Software Design, Software Development, Software Testing, Python, C/C++, Matlab, Pure Data, Max/MSP, Concurrent Programming, Multi-Threaded Programming, PyTorch, Libtorch, ONNX, TensorFlow, JUCE, Docker, Linux, VST Plugin Development, Interface Development, Deployment of Neural Networks, Essentia.
Hardware: PCB Design, PCB Manufacturing, Circuit Design, Testing, Fabrication, Eurorack Module Development.
Data Science: Deep Learning; NLP Techniques for Music Generation; Collection, Curation, and Processing of Large-Scale Datasets; Data Analysis and Visualization.
Acoustics: EASE, CadnaA, ARTA, winMLS, LEAP EnclosureShop and CrossoverShop.
Work Experience
Research Engineer · 2024--2025
Universitat Pompeu Fabra · Barcelona, Spain
Details

• Led research projects in collaboration with undergraduate, master's, and PhD students on interactive sound systems

• Designed and developed interactive sonic installations for general public audiences

• Translated research outcomes into polished, ready-to-use applications for music production, disseminating research-driven tools to practitioners beyond academia

• Curated, maintained, and packaged open-source tools developed at the Music Technology Group for public release

• Collaborated with artists on designing and staging live performances integrating interactive generative systems

• Designed and conducted experiments for collecting crowd-sourced, open-source datasets from public participants

Acoustic Engineer/Consultant · 2014--2019
BAP Acoustics · Vancouver, Canada
Details

• Conducted noise and vibration measurements

• Modeled and simulated noise emissions from existing and proposed future noise sources

• Conducted room acoustic measurements

• Modeled and simulated acoustics of indoor spaces

• Simulated loudspeaker drivers based on electrical and acoustical measurements

• Designed loudspeaker cabinets and cross-over circuits for specialized applications

• Designed and developed loudspeaker and haptic vibration systems for amusement park ride simulators, creating immersive acoustic experiences emulating real-world vehicles

• Simulated outdoor public warning systems

• Developed specialized software implementing acoustic standards

• Measured noise from railways to monitor the corrugation of tracks

• Wrote memorandums and reports

Teaching Experience
Introduction to Programming (First Year Engineering) · Fall 2019
Department of Information and Communications Technologies (DTIC) · Universitat Pompeu Fabra (UPF)
Details

This introductory course provides engineering students with a foundational understanding of programming, with a specific focus on Python. Through hands-on exercises, students are acquainted with the basics of Python programming, equipping them with essential coding skills that serve as a stepping stone for more advanced computational tasks.

Electronic Music Production Lab (Fourth Year Engineering) · Winter 2020
Department of Information and Communications Technologies (DTIC) · Universitat Pompeu Fabra (UPF)
Details

Tailored for senior engineering students, this lab-oriented course delves into the realm of electronic music production. Utilizing the Pure Data environment, participants explore the intricacies of audio digital signal processing (DSP), gaining hands-on experience in crafting sound and understanding the underlying technical processes.

Computer Organization (First Year Engineering) · Winter 2021, 2022, 2023
Department of Information and Communications Technologies (DTIC) · Universitat Pompeu Fabra (UPF)
Details

A fundamental course for first-year engineering students, Computer Organization offers a deep dive into the architecture and operation of computers. Key topics include memory management techniques, the nuances of assembly language, and other foundational concepts that underpin the organization and functioning of computing systems.

Computational Music Creativity (Master's) · Winter 2021, 2022, 2023
Department of Information and Communications Technologies (DTIC) · Universitat Pompeu Fabra (UPF)
Details

Situated at the intersection of music and technology, this master's level course delves into the world of computational music creativity. Participants are introduced to the basics of deep generative models, providing insights into how advanced algorithms can be leveraged to foster musical creativity and innovation.

Publications
2025

Exploring Situated Stabilities of a Rhythm Generation System Through Variational Cross-Examination

Błażej Kotowski, Nicholas Evans, Behzad Haki, Frederic Font, and Sergi Jordà
2025
Abstract

This paper investigates GrooveTransformer, a real-time rhythm generation system, through the postphenomenological framework of Variational Cross-Examination (VCE). By reflecting on its deployment across three distinct artistic contexts, we identify three stabilities: an autonomous drum accompaniment generator, a rhythmic control voltage sequencer in Eurorack format, and a rhythm driver for a harmonic accompaniment system. The versatility of its applications was not an explicit goal from the outset of the project. Thus, we ask: how did this multistability emerge? Through VCE, we identify three key contributors to its emergence: the affordances of system invariants, the interdisciplinary collaboration, and the situated nature of its development. We conclude by reflecting on the viability of VCE as a descriptive and analytical method for Digital Musical Instrument (DMI) design, emphasizing its value in uncovering how technologies mediate, co-shape, and are co-shaped by users and contexts.

Learning Microrhythm in Uruguayan Candombe using Transformers

Anmol Mishra, Satyajeet Prabhu, Behzad Haki, and Martín Rocamora
Proceedings of the International Computer Music Conference (ICMC), 2025
Abstract

Musicians rely on nuanced microrhythm, slight variations in timing, dynamics, and other aspects, to create an expressive rhythmic feel in music performance. Electronic music production often attempts to replicate these qualities through algorithmic manipulations to achieve similar effects. In this work, we address the generation of microrhythm using a method that learns microtiming and dynamics from onset timing and strength annotations of drum performances. We frame microrhythm learning as a sequence modeling task, leveraging a Transformer-based model. Our focus is on Uruguayan candombe drumming, where we explore its rhythmic patterns at both the beat and rhythmic cycle levels. To evaluate the model’s effectiveness in replicating the original microrhythm, we compare the mean, standard deviation, and histogram intersection of timing deviations and dynamics values at each subdivision for the original and the generated data. The model is deployed as a VST enabling artists to incorporate candombe grooves into drum scores. With this work, we aim to bridge the gap between algorithmic rhythm creation and the expressive qualities of live performance, striving to produce music with the authentic grooves of various Latin American genres.

Repurposing a Rhythm Accompaniment System for Pipe Organ Performance

Nicholas Evans, Behzad Haki, and Sergi Jordà
Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), 2025
Abstract

This paper presents an overview of a human-machine collaborative musical performance by Raül Refree utilizing multiple MIDI-enabled pipe organs at Palau Güell, as part of the Organic concert series. Our earlier collaboration focused on live performances using drum generation systems, where generative models captured rhythmic transient structures while ignoring harmonic information. For the organ performance, we required a system capable of generating harmonic sequences in real-time, conditioned on Refree's performance. Instead of developing a comprehensive state-of-the-art model, we integrated a more traditional generative method to convert our pitch-agnostic rhythmic patterns into harmonic sequences. This paper details the development process, the creative and technical considerations behind the final performance, and a reflection on the efficacy and adaptability of the chosen methodology.

2024

Design, Development, and Deployment of Real-Time Drum Accompaniment Systems

Behzad Haki
PhD dissertation, Universitat Pompeu Fabra, 2024
Abstract

This dissertation examines the generation of real-time symbolic drum accompaniments, with a particular focus on live improvisation contexts. While the research occasionally touches on the audio domain, the majority of it is centered on symbolic-to-symbolic systems. This dissertation addresses real-time drum accompaniment from multiple perspectives: (1) conceptual, where a target application is designed based on a set of specified requirements, (2) architectural, where specific generative models are designed and developed for the selected conceptual design, and (3) deployment, where the conceptual design is realized and evaluated. Throughout this work, three accompaniment systems were developed and refined. The first work, detailed in Chapters 3 and 4, aimed to develop a lightweight system on which more sophisticated future designs could be based. This system was built around a transformer model developed to convert a monotonic (single voice) rhythmic loop (groove) into a full multi-voice drum loop. The concept explored here was to investigate whether a loop-based system could be effectively used for generating drum accompaniments in long, evolving improvisational sessions. The resulting system was evaluated by professional musician Raül Refree, who provided valuable insights on how the design could be modified to better suit the task. Following these evaluations, the second system, GrooveTransformer, was developed (discussed in Chapter 5). In this work, rather than relying on our personal speculations, we collaborated with Refree from the outset of the project. As such, we were able to develop a system that was far more suitable for the task at hand, to the extent that the musician felt comfortable performing with the system in a public live improvisational session. While still loop-based, the generative model in this work was based on a variational transformer that enabled us to address the majority of the collaborating musician's requirements for the system. Although the system was initially deployed as software, we also developed a hardware Eurorack version (discussed in Chapter 6). The Eurorack module was designed to encourage experimentation and exploration beyond the system's original intent. In the third system (discussed in Chapter 7), we moved beyond the loop-based approach. The primary goal was to enhance the system's awareness of the evolving performance over extended durations. To this end, we developed a new generative model with a much larger context. The larger model's computational demands required a thorough exploration of both conceptual and technical deployment strategies. All of these systems focused on converting a monotonic groove into a multi-voice drum pattern. In Chapter 8, we first discuss the limitations and affordances of basing the generations solely on groove. Additionally, several works and proposals surrounding this groove-to-drum approach are discussed in detail: (1) how to improve the process of extracting grooves from polyphonic sources, (2) how to make this approach more accommodating for individuals with varying levels of musical experience, (3) how to expand the concept to generate general rhythms rather than exclusively drums, and (4) how to extract groove from audio sources. Beyond the primary objectives, this research also yielded several significant secondary contributions that arose from the explorations conducted.
One such achievement was establishing that our systems can also be adapted to work with audio without major architectural changes (Appendix A). Moreover, we created NeuralMidiFx (Appendix B), a wrapper designed to facilitate the deployment of neural networks in VST (Virtual Studio Technology) format. This tool was developed to overcome the technical challenges encountered during the real-time deployment of the generative models. Furthermore, two novel datasets, TapTamDrum (Appendix C) and El Bongosero (Appendix D), were created as part of this research. These datasets serve as valuable resources for future studies on both rhythm generation and rhythm analysis.

El Bongosero: A Crowd-sourced Symbolic Dataset of Improvised Hand Percussion Rhythms Paired with Drum Patterns

Nicholas Evans, Behzad Haki, Daniel Gomez, and Sergi Jordà
Proceedings of the 25th International Society for Music Information Retrieval Conference (ISMIR), 2024
Abstract

We present El Bongosero, a large-scale, open-source symbolic dataset comprising expressive, improvised drum performances crowd-sourced from a pool of individuals with varying levels of musical expertise. Originating from an interactive installation hosted at Centre de Cultura Contemporània de Barcelona, our dataset consists of 6,035 unique tapped sequences performed by 3,184 participants. To our knowledge, this is the only symbolic dataset of its size and type that includes expressive timing and dynamics information as well as each participant's level of expertise. These unique characteristics could prove to be valuable to future research, particularly in the areas of music generation and music education. Preliminary analysis, including a step-wise Jaccard similarity analysis on a subset of the data, demonstrates that this dataset is a diverse, non-random, and musically meaningful collection. To facilitate prompt exploration and understanding of the data, we have also prepared a dedicated website and an open-source API for interacting with the data.

Groove Transfer VST for Latin American Rhythms

Anmol Mishra, Behzad Haki, Satyajeet Prabhu, and Martín Rocamora
25th International Society for Music Information Retrieval Conference (ISMIR), 2024
Abstract

Latin American music relies on groove—small variations in timing, dynamics, and other aspects—to create an expressive rhythmic feel in music performance. Electronic music production often attempts to replicate these qualities through algorithmic manipulations to achieve similar effects. In this work, we employ a transformer-based model to learn microtiming and dynamics from onset timing and strength annotations of Uruguayan Candombe drum performances. The model is then deployed as a VST allowing users to apply the learnt Candombe microrhythms to quantized MIDI drum performances. With this work, we aim to bridge the gap between algorithmic rhythm creation and the expressive qualities of live performance, striving to produce music with the authentic grooves of various Latin American genres.

GrooveTransformer: A Generative Drum Sequencer Eurorack Module

Nicholas Evans, Behzad Haki, and Sergi Jordà
Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), 2024
Abstract

This paper presents the GrooveTransformer, a Eurorack module designed for generative drum sequencing. Central to its design is a Variational Auto-Encoder (VAE), around which we have designed a deployment context enabling performance through accompaniment and/or user interaction. This module allows the user to employ the system as an accompaniment generator while interacting with the generative processes in real-time. In this paper, we review the design principles and technical architecture of the module, while also discussing the potential and shortcomings of our work.

2023

TapTamDrum: A Dataset for Dualized Drum Patterns

Behzad Haki, Błażej Kotowski, Cheuk Lee, and Sergi Jordà
Proceedings of the 24th International Society for Music Information Retrieval Conference (ISMIR), 2023
Abstract

Drummers spend extensive time practicing rudiments to develop technique, speed, coordination, and phrasing. These rudiments are often practiced on "silent" practice pads using only the hands. Additionally, many percussive instruments across cultures are played exclusively with the hands. Building on these concepts and inspired by Einstein's probably apocryphal quote, "Make everything as simple as possible, but not simpler," we hypothesize that a dual-voice reduction could serve as a natural and meaningful compressed representation of multi-voiced drum patterns. This representation would retain more information than its corresponding monotonic representation while maintaining relative simplicity for tasks such as rhythm analysis and generation. To validate this potential representation, we investigate whether experienced drummers can consistently represent and reproduce the rhythmic essence of a given drum pattern using only their two hands. We present TapTamDrum: a novel dataset of repeated dualizations from four experienced drummers, along with preliminary analysis and tools for further exploration of the data.

NeuralMidiFx: A Wrapper Template for Deploying Neural Networks as VST3 Plugins

Behzad Haki, Julian Lenz, and Sergi Jordà
Proceedings of the 4th International Conference on AI and Musical Creativity, 2023
Abstract

Proper research, development and evaluation of AI-based generative systems of music that focus on performance or composition require active user-system interactions. To include a diverse group of users who can properly engage with a given system, researchers should provide easy access to their developed systems. Given that many users (i.e. musicians) are unfamiliar with AI and the development frameworks involved, the researchers should aim to make their systems accessible within the environments commonly used in production/composition workflows (e.g. in the form of plugins hosted in digital audio workstations). Unfortunately, deploying generative systems in this manner is highly expensive. As such, researchers with limited resources are often unable to provide easy access to their works, and subsequently, are not able to properly evaluate and encourage active engagement with their systems. Facing these limitations, we have been working on a solution that allows for easy, effective and accessible deployment of generative systems. To this end, we propose a wrapper/template called NeuralMidiFx, which streamlines the deployment of neural-network-based symbolic music generation systems as VST3 plugins. The proposed wrapper is intended to allow researchers to develop plugins with ease while requiring minimal familiarity with plugin development.

Completing Audio Drum Loops with Symbolic Drum Suggestions

Behzad Haki, Teresa Pelinski, Marina Nieto, and Sergi Jordà
Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), 2023
Abstract

Sampled drums can be used as an affordable way of creating human-like drum tracks, or perhaps more interestingly, can be used as a means of experimentation with rhythm and groove. Similarly, AI-based drum generation tools can focus on creating human-like drum patterns, or alternatively, focus on providing producers/musicians with means of experimentation with rhythm. In this work, we aimed to explore the latter approach. To this end, we present a suite of Transformer-based models aimed at completing audio drum loops with stylistically consistent symbolic drum events. Our proposed models rely on a reduced spectral representation of the drum loop, striking a balance between a raw audio recording and an exact symbolic transcription. Using a number of objective evaluations, we explore the validity of our approach and identify several challenges that need to be further studied in future iterations of this work. Lastly, we provide a real-time VST plugin that allows musicians/producers to utilize the models in real-time production settings.

2022

Real-Time Drum Accompaniment Using Transformer Architecture

Behzad Haki, Marina Nieto, Teresa Pelinski, and Sergi Jordà
Proceedings of the 3rd International Conference on AI and Musical Creativity, 2022
Abstract

This paper presents a real-time drum generation system capable of accompanying a human instrumentalist. The drum generation model is a transformer encoder trained to predict a short drum pattern given a reduced rhythmic representation. We demonstrate that with certain design considerations, the short drum pattern generator can be used as a real-time accompaniment in musical sessions lasting much longer than the duration of the training samples. A discussion on the potentials, limitations and possible future continuations of this work is provided.

2021

Transformer Neural Networks for Automated Rhythm Generation

Thomas Nuttall, Behzad Haki, and Sergi Jordà
Proceedings of the International Conference on New Interfaces for Musical Expression, 2021
Abstract

Recent applications of Transformer neural networks in the field of music have demonstrated their ability to effectively capture and emulate long-term dependencies characteristic of human notions of musicality and creative merit. We propose a novel approach to automated symbolic rhythm generation, where a Transformer-XL model trained on the Magenta Groove MIDI Dataset is used for the tasks of sequence generation and continuation. Hundreds of generations are evaluated using blind-listening tests to determine the extent to which the aspects of rhythm we understand to be valuable are learnt and reproduced. Our model is able to achieve a standard of rhythmic production comparable to human playing across arbitrarily long time periods and multiple playing styles.

2019

A Bassline Generation System Based on Sequence-to-Sequence Learning

Behzad Haki and Sergi Jordà
Proceedings of the International Conference on New Interfaces for Musical Expression, 2019
Abstract

This paper presents a detailed explanation of a system generating basslines that are stylistically and rhythmically interlocked with a provided audio drum loop. The proposed system is based on a natural language processing technique: word-based sequence-to-sequence learning using LSTM units. The novelty of the proposed method lies in the fact that the system is not reliant on a voice-by-voice transcription of drums; instead, in this method, a drum representation is used as an input sequence from which a translated bassline is obtained at the output. The drum representation consists of fixed-size sequences of onsets detected from a 2-bar audio drum loop in eight different frequency bands. The basslines generated by this method consist of pitched notes of varying durations. The proposed system was trained on two distinct datasets compiled for this project by the authors. Each dataset contains a variety of 2-bar drum loops with annotated basslines from two different styles of dance music: House and Soca. A listening experiment revealed that the proposed system is capable of generating basslines that are interesting and rhythmically well interlocked with the drum loops from which they were generated.

Supervisions
At UPF, I collaborated with, and together with Dr. Sergi Jordà co-supervised, a diverse group of students. Most of these collaborations led to publications at peer-reviewed conferences.
3 Master Students, 1 PhD Candidate · 2024-25
UPF, DTIC
3 Master Students, 2 Undergraduate Students · 2022-23
UPF, DTIC
3 Master Students · 2020-21
UPF, DTIC
3 Master Students, 1 Undergraduate Student · 2019-20
UPF, DTIC
Conference Reviewer
International Society for Music Information Retrieval Conference (ISMIR) · 2025
International Computer Music Conference (ICMC) · 2025
New Interfaces for Musical Expression Conference (NIME) · 2025
International Conference on AI and Musical Creativity (AIMC) · 2025
New Interfaces for Musical Expression Conference (NIME) · 2024
International Conference on AI and Musical Creativity (AIMC) · 2023
New Interfaces for Musical Expression Conference (NIME) · 2020
Datasets
TapTamDrum · 2023-24
[https://taptamdrum.github.io/](https://taptamdrum.github.io/)
Details

A novel dataset of repeated dualizations from four experienced drummers. This collection consists of 1116 dualized drum patterns.

El Bongosero · 2023-24
[https://elbongosero.github.io/](https://elbongosero.github.io/)
Details

A large dataset of concurrent drum and bongo improvisations collected as part of an AI exhibition held at CCCB. This dataset consists of over 6000 improvisations from over 3000 participants.

Open-Source Projects
GrooveTransformer: A Suite of Generative Models · 2020-2024
[https://github.com/behzadhaki/GrooveTransformer](https://github.com/behzadhaki/GrooveTransformer)
Details

This suite comprises a collection of generative transformer models tailored to the generation of symbolic drum patterns. The models leverage deep generative techniques to produce rhythmic sequences. The project is open source and accessible to the global community, inviting collaboration and further exploration.

GrooveTransformer: Eurorack Module · 2023
Details

In 2023, GrooveTransformer evolved from a purely software system into a hardware Eurorack module. By merging modern machine learning capabilities with the hands-on, modular approach of Eurorack, this initiative aimed to increase user engagement and control in music generation.

NeuralMidiFx · 2023
[https://neuralmidifx.github.io/](https://neuralmidifx.github.io/)
Details

Addressing the deployment challenges of AI music generators in musician-favored platforms like DAWs, I developed NeuralMidiFx. This open-source template streamlines the integration of neural-based music systems as VST3 plugins. With a focus on ease of use, NeuralMidiFx reduces technical hurdles, empowering researchers to easily share their generative tools with a broader audience.

Groove2Drum VST · 2021-2023
[https://github.com/behzadhaki/Groove2DrumVST](https://github.com/behzadhaki/Groove2DrumVST)
Details

A VST plugin deploying the GrooveTransformer models, Groove2Drum offers musicians immediate generative drum pattern capabilities for both composition and accompaniment, integrating AI-driven rhythms into standard music production workflows.

TapTamDrum Dataset · 2020-2023
[https://taptamdrum.github.io/](https://taptamdrum.github.io/)
Details

Inspired by traditional drumming practices, TapTamDrum offers a dual-voice reduction dataset, capturing the essence of complex drum patterns using hands-only interpretations. Collected from four experienced drummers, this resource simplifies multi-voiced sequences for rhythm analysis and generation, providing a unique lens for rhythmic exploration.

MonotonicGrooveTransformer · 2020-22
[https://github.com/behzadhaki/MonotonicGrooveTransformer](https://github.com/behzadhaki/MonotonicGrooveTransformer)
Details

Designed for live musical accompaniment, this system utilizes a transformer encoder model trained to generate drum patterns from rhythmic inputs. Despite its training on short samples, strategic design enables it to accompany extended musical sequences seamlessly in real-time.

TransformerGrooveInfilling · 2020-22
[https://transformergrooveinfilling.github.io/](https://transformergrooveinfilling.github.io/)
Details

Utilizing Transformer-based models, this tool augments audio drum loops with fitting symbolic drum events. By bridging raw audio and symbolic transcription through a reduced spectral representation, it is also provided as a real-time VST plugin, bringing its generative capabilities into live production.

Workshops
NeuralMidiFx Workshop at AIMC · 2023
University of Sussex, UK
Details

In this workshop, participants were introduced to NeuralMidiFx, a JUCE-based VST3 wrapper designed to simplify the integration of AI-driven symbolic music generative models into plugins. Tailored for those unfamiliar with plugin development, attendees were walked through the entire deployment process, culminating in the creation of a VST3 plugin using a pre-trained generative model.

Artist Collaborations
Collaboration with Raül Refree · 2022-2024
Barcelona, Spain
Details

A participatory design collaboration in which professional musician Raül Refree was involved from the outset as a key design partner. Through iterative evaluation sessions, his requirements and feedback directly shaped the architecture and interaction design of the GrooveTransformer system, culminating in a series of live public performances.

Collaboration with Desert (Cris Checa and Eloi Caballé) · 2023
Barcelona, Spain
Details

Collaborated with Barcelona-based duo Desert on a studio evaluation of the GrooveTransformer system, gathering qualitative insights on controllability and generative behaviour in a production context.

Showcases
+RAIN Film Festival · June 2025
Barcelona, Spain
Details

My colleague Nicholas Evans, together with *nara is neus*, performed at the +RAIN festival, where the system developed for the Palau Güell live show was used in a different context.

Palau Güell Live Show · Nov 2024
Barcelona, Spain
Details

The second performance of Raül Refree with the latest transformer-based generative system. In this live show, Refree played the organ at the venue while the generative system produced accompaniments performed on a secondary organ. The show explored whether and how a generative system can be used in a context for which it was not designed.

CCCB Live Show · Feb 2024
Barcelona, Spain
Details

The first performance of Raül Refree with the GrooveTransformer system. In this live show, Refree gave an improvised performance with the system.

+RAIN Film Festival · 2023
Barcelona, Spain · [https://www.upf.edu/es/web/rainfilmfest](https://www.upf.edu/es/web/rainfilmfest)
Details

In this concert, Nicholas Evans, my colleague and co-developer of the GrooveTransformer Eurorack Module, performed with the module.

Sonar+D Music Festival · 2023
Barcelona, Spain · [https://sonar.es/es/actividad/project-area-music-and-sound](https://sonar.es/es/actividad/project-area-music-and-sound)
Details

During the Sonar+D festival, we showcased the GrooveTransformer Eurorack module to the general public.

CCCB: El Bongosero · 2023-2024
Barcelona, Spain · [https://elbongosero.github.io/](https://elbongosero.github.io/)
Details

A six-month participatory installation at the Centre de Cultura Contemporània de Barcelona (CCCB), in which over 3000 members of the public engaged with a generative sonic system. Participants actively shaped the system's evolution through their interactions, contributing to an open-source, crowd-sourced dataset of over 6000 improvisations collected over the course of the exhibition.

Non-Academic Publications
Haki, Behzad, De Santis, Eric. “Good Communication in Restaurants: Acoustic Capacity”. BAP Acoustics Online Blog. May, 2015. [https://bapacoustics.com/good-communication-in-restaurants-acoustic-capacity/](https://bapacoustics.com/good-communication-in-restaurants-acoustic-capacity/)
Haki, Behzad. “How Good is Bluetooth at its best?”. Serene Audio Online Blog. November, 2015. [http://www.sereneaudio.com/blog/how-good-is-bluetooth-audio-at-its-best](http://www.sereneaudio.com/blog/how-good-is-bluetooth-audio-at-its-best)
Languages
English: Fluent
Farsi: Fluent