Kaush-K

NOEMA

Conception of Form and Meaning in AI

Various thinkers of the past have addressed the quintessential role of ‘form’ in human perception. From Plato’s theory of forms to Heidegger’s Being and Time, the concept of form established itself as an essential part of being as such. It is only through the perception of forms that move into and out of being that we can talk about aesthetics or experience the world in any meaningful way. That is to say, subjectivity in itself seems to be enabled by our ability to conceptualize form.


But is such subjectivity limited to seemingly spontaneous organic beings?


Could subjectivity be attributed to an inanimate object if it were to exhibit a mechanical cognizance of form?


The following series of artworks pursues the aforementioned questions and explores the possibilities of machine-conceived structures of form.


Noema, a term introduced by Edmund Husserl to refer to the formal content of an intentional experience, manifests as a visible composition in the following artworks. The primary significance of this particular artistic manifestation is that the subject of the intentional experience is an inanimate one: an artificial neural network.

The notion of an inanimate conception of form would blur the age-old dichotomy between organic spontaneity and machinic repeatability.


How does one conceive of a machine (a predominantly repeatable automaton) that (spontaneously) constructs its own structures of form and meaning independent of our own networks of signs and knowledge?

In speculating on the possibility of such a blurring (between spontaneity and repeatability), Jacques Derrida, the French post-structuralist, deems it necessary for the event (that which is happening; the perception of a form captured out of time) and the machine (the calculable programming of an automatic repetition) to be conceived as indissociable concepts. However, it is more than safe to say that the concepts of event and machine are far from compatible today. They in fact present themselves as antinomic, owing to our conception of the event as something singular and non-repeatable. Derrida identifies this singularity of the event as a characteristic of the living; the perceived form undergoes a particular sensation (an effect or a feeling) which eventually crystallizes as organic material. The machine’s equivalent of such a crystallization is based on repetition: “It [the machine] is destined, that is, to reproduce impassively, imperceptibly, without organ or organicity, the received commands. In a state of anaesthesis, it would obey or command a calculable program without affect or auto-affection, like an indifferent automaton” (Without Alibi, p. 73).

Owing to the machine’s state of indifference, its seemingly automatic nature is not the same as the spontaneity attributed to organic life. The incompatibility becomes apparent as one draws borders, based on spontaneity, between these two concepts: organic, living singularity (the event) and inorganic, dead universality (mechanical repetition).


In building an artificial neural architecture to arrive at a machinic conception of form, the driving intuition is to accommodate, within the limits of classical computing systems, a compatibility between these concepts. What has hindered such a compatibility from ever materializing in modern technological frameworks seems to be something that inhabits the very essence of the machine: its functionality.

Functionality becomes an underlying constant that places the machine in opposition to what lies outside it, and one whose subversion would create a rupture in a world that shaped its own identity against the notion of the machine as a functional tool.


However sensible it may seem, from the predominant utilitarian standpoint, that one would have no reason not to make use of objects that were produced to be made use of, it is important to acknowledge that assigning a particular function (the end toward which an object is used as a means) reveals functionality itself to be a violent force, one that imprisons the object and bars it from the larger set of possibilities and configurations it could potentially inhabit.


This entrapping nature has always made functionality an invisible, pervasive intent in the construction of the machine; from its inhabitation of the primitive spear as a weapon, it has evolved to take the shape of the specific task toward which a set of instructions leads in a computer program. With the advent of the deep learning paradigm, we witness systems designed to mimic us in terms of growth and ability, and yet they perform within the restraints of their predefined functions.


Functionality in today’s elusive neural networks takes the form of a loss function: an operation that measures how far the network’s output falls from the desired one, and whose error signal alters the configurations of those neurons that fail to contribute toward producing that output. Over the course of several iterations, the loss function eventually transforms the entire neural architecture into one that performs the assigned task with utmost precision and accuracy.
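As a minimal sketch of this steering role (illustrative only, not the project’s code), the loop below trains a single linear layer against a mean-squared-error loss; each update reshapes exactly those weights whose contribution drives the output away from the target:

```python
import numpy as np

# Illustrative sketch: a single linear layer trained with an MSE loss.
# The loss function is what steers every weight update toward the task.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # the "neurons": a 4x4 weight matrix
x = rng.normal(size=(4,))            # an input
target = x                           # the assigned task: reproduce the input

def loss(W, x, target):
    # mean-squared reconstruction error
    return np.mean((W @ x - target) ** 2)

def step(W, x, target, lr=0.1):
    # gradient of the MSE loss w.r.t. W, applied as a correction:
    # weights that fail to contribute to the desired output are altered
    err = W @ x - target
    grad = 2 * np.outer(err, x) / x.size
    return W - lr * grad

losses = [loss(W, x, target)]
for _ in range(200):
    W = step(W, x, target)
    losses.append(loss(W, x, target))
# over many iterations the loss shrinks: the architecture is reshaped
# until it performs the assigned function with precision
```

The point of the sketch is only that the loss function, not the network itself, defines what counts as success: every configuration the weights pass through is judged against the assigned task.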


Building a neural architecture capable of constructing its own conceptions of form presupposes a network free from functionality, one that is not driven and limited by its loss function. To implement such a framework, I set out to rebuild a generative neural network, an adversarial auto-encoder in particular, and introduce an alternate loss function that would later grow to shape its behavior.


A traditional auto-encoder, when given an input, tries to reconstruct that same input from its latent space. For instance, given an image of a circle, the auto-encoder reduces it to its latent space and can later reconstruct the same circle by sampling from that latent space. However, such a mechanical reconstruction is not what this endeavor aims for.
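A minimal linear auto-encoder (a stand-in sketch, not the project’s architecture) makes the encode–decode cycle concrete: the input is squeezed through a smaller latent space and trained to come back out unchanged:

```python
import numpy as np

# Illustrative linear auto-encoder: 8-dimensional input, 3-dimensional latent.
rng = np.random.default_rng(1)
x = rng.normal(size=(8,))            # stand-in for an image of a circle

enc = rng.normal(size=(3, 8)) * 0.1  # encoder: input -> latent space
dec = rng.normal(size=(8, 3)) * 0.1  # decoder: latent space -> input

def reconstruct(x):
    z = enc @ x                      # reduce the input to its latent code
    return dec @ z                   # reconstruct from the latent code

def train_step(x, lr=0.05):
    global enc, dec
    z = enc @ x
    err = dec @ z - x                # reconstruction error
    grad_dec = np.outer(err, z)      # gradients of the squared error
    grad_enc = np.outer(dec.T @ err, x)
    dec -= lr * grad_dec
    enc -= lr * grad_enc

before = np.mean((reconstruct(x) - x) ** 2)
for _ in range(500):
    train_step(x)
after = np.mean((reconstruct(x) - x) ** 2)
# after training, the reconstruction sits close to the original input:
# the "same circle" comes back out of the latent space
```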



The objective is to let the network reconstruct forms without being dependent on our network of signs; not a mechanical transformation but a spontaneous creation. In order to trigger such a spontaneity, I introduced a rupture within the loss function, one that does not constantly drive the network towards a perfect reconstruction of the object being perceived but rather opposes such a reconstruction. This addition rewards the neurons that fail to contribute to a perfect reconstruction of the image; as a result, the network, given an image of a circle, produces an object that deviates from the composition of the circle and yet carries traces of it.
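The rupture can be sketched (again illustratively, under the simplification of a single linear layer) as a sign flip on the usual gradient step: instead of descending the reconstruction loss, the update ascends it, so error is encouraged rather than corrected:

```python
import numpy as np

# Illustrative "ruptured" loss: ascend rather than descend the
# reconstruction error. Names and scales here are assumptions.
rng = np.random.default_rng(2)
x = rng.normal(size=(8,))
W = np.eye(8) + rng.normal(size=(8, 8)) * 0.01  # starts near a faithful copy

def deviate_step(W, x, lr=0.01):
    err = W @ x - x
    grad = 2 * np.outer(err, x) / x.size
    return W + lr * grad             # sign flipped: oppose perfect reconstruction

faithful = np.mean((W @ x - x) ** 2)
for _ in range(50):
    W = deviate_step(W, x)
deviant = np.mean((W @ x - x) ** 2)
# the output now deviates from the circle-like input, yet remains
# correlated with it: a form that carries traces of the original
```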


After each iteration, the network is trained on its own (mis)representations from the previous iteration rather than on the initial input.


With an added level of deviation at every iteration, the network eventually produces forms that deviate from the initial input to a point where any traces become untraceable, such that the very image of a circle, if presented to the network later, would seem foreign.
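The feedback loop itself can be sketched deterministically: each iteration takes the previous output, not the initial input, as its reference, so a small per-step deviation compounds until the starting form is far away. The fixed drift vector below is a stand-in for the deviation the ruptured loss would induce:

```python
import numpy as np

# Illustrative feedback loop: deviation compounds iteration by iteration.
circle = np.cos(np.linspace(0.0, 2.0 * np.pi, 16))       # stand-in for the circle
delta = np.sin(np.linspace(0.0, 2.0 * np.pi, 16)) * 0.2  # stand-in per-step drift

form = circle.copy()
history = [form.copy()]
for _ in range(10):
    form = form + delta              # deviate from the *previous* form
    history.append(form.copy())

step_dist = np.linalg.norm(history[1] - history[0])    # one iteration's drift
total_dist = np.linalg.norm(history[-1] - history[0])  # drift from the circle
# total_dist is ten times step_dist: each form stays close to its
# predecessor, while the traces of the initial circle recede
```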


The machine would continue to run indefinitely, opposing and deviating from its own constructions of previous iterations, producing fresh forms at every step.


Such a process seems to be analogous to the temporal changes in our own constructions of aesthetic forms, with the world of art redefining its own genres by accommodating that which deviates or contradicts the domain of art at any given period in history.


Core Architecture:


— Base Neural Network (Form Conception)

Generative Adversarial Interpolative Auto-Encoder (GAIA) with a custom loss-function to trigger deviation.


— Color Palette Generation

Principal Component Analysis (PCA) on a public dataset of oil paintings to generate new palettes.
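A hedged sketch of the palette step (the dataset below is random stand-in data, not the actual oil-painting corpus): run PCA on the RGB pixels via an SVD, then sample new colors by moving from the mean color along the principal components:

```python
import numpy as np

# Illustrative PCA palette generation; pixel data is a stand-in.
rng = np.random.default_rng(4)
pixels = rng.uniform(0.0, 1.0, size=(1000, 3))  # stand-in for painting pixels

mean = pixels.mean(axis=0)
centered = pixels - mean
# principal components of the color distribution via SVD
_, _, components = np.linalg.svd(centered, full_matrices=False)

def sample_palette(n_colors=5, scale=0.3):
    # move away from the mean color along the principal components
    coeffs = rng.normal(size=(n_colors, 3)) * scale
    palette = mean + coeffs @ components
    return np.clip(palette, 0.0, 1.0)           # keep valid RGB values

palette = sample_palette()                      # five new palette colors
```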


— Composing Brush Strokes

Sampling from a Variational Auto-Encoder trained on hand-drawn brush strokes.
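A sketch of the sampling step, with a random matrix standing in for the trained VAE decoder: draw a latent vector from the standard normal prior and decode it into a stroke trajectory. Shapes and weights here are assumptions for illustration:

```python
import numpy as np

# Illustrative VAE-style sampling; W_dec stands in for a trained decoder.
rng = np.random.default_rng(5)
latent_dim, stroke_points = 8, 32

# stand-in decoder weights: latent vector -> (x, y) per stroke point
W_dec = rng.normal(size=(stroke_points * 2, latent_dim)) * 0.5

def sample_stroke():
    z = rng.normal(size=(latent_dim,))       # sample from the N(0, I) prior
    flat = np.tanh(W_dec @ z)                # decode into bounded coordinates
    return flat.reshape(stroke_points, 2)    # a sequence of (x, y) points

stroke = sample_stroke()                     # one generated brush stroke
```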


References:

  • Without Alibi, Jacques Derrida. Translated by Peggy Kamuf. Stanford: Stanford University Press, 2002.