Abstract Language Model
2022 — Audio-visual performance, 45 min
Trained artificial neural network, custom software
For Abstract Language Model, an artificial neural network was trained on the complete character sets represented in the Unicode Standard. The resulting data models contain translations of all available human sign systems as equally representable, machine-created states. Through extraction and interpolation of these artificially created semiotic systems, a transitionless universal language emerges, which can be read as a trans-human / trans-machine language. The audio-visual performance presents the states Extraction > Analysis > Rearrange > Process > Transformation > Language as an abstract narration.
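As a rough illustration of the kind of corpus such a training process starts from (this is a hedged sketch, not the artist's actual pipeline), the assigned characters of the Unicode Standard can be enumerated programmatically, for example with Python's standard `unicodedata` module:

```python
import unicodedata

# Hypothetical preparation step: collect every assigned character in the
# Basic Multilingual Plane, i.e. every code point that carries a Unicode
# name. Unassigned code points, surrogates, and private-use characters
# have no name and are skipped.
def assigned_codepoints(limit=0x10000):
    chars = []
    for cp in range(limit):
        if unicodedata.name(chr(cp), None) is not None:
            chars.append(chr(cp))
    return chars

corpus = assigned_codepoints()
# The BMP alone yields tens of thousands of characters spanning
# Latin, Cyrillic, Arabic, CJK, Hangul, and many other scripts.
print(len(corpus))
```

A model trained on such a corpus, or on rendered glyph images of it, could then be sampled and interpolated between scripts, which is the gesture the work's concept describes.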
A detailed description of the research and process behind Abstract Language Model can be found here (Abstract Language Model with Monolith YW, 2020-2022).

In situ at Clujotronic, Cluj-Napoca / Romania, 2022 (Photos by Nicoleta Vlad)
In situ at Alliance Française, Karachi / Pakistan, 2022 (Photos by Justine Emard & Maham Qureshi)