What is a dimension in the simplest version of itself? Can it exist and present itself to us without relations to other dimensions, or does it become trapped in its context as soon as there is more than one? Can it be turned into a material? In terms of geometry the answers can be more or less straightforward, depending on perspective. In artistic exploration, however, new modes and models can emerge. Our reality is understood to be three dimensions of space and one dimension of time. A point without any reference is dimensionless, a line in this context is one-dimensional, a square is two-dimensional, and a cube is three-dimensional. Add transformation as the propagation of time and the four-dimensional spacetime is complete. Nevertheless, the linearity of geometrical interdependence in this concept is difficult to leave behind, as these basic geometrical forms always build themselves up from forms with a lower number of dimensions. If, however, new ideas and processes, new ways of mapping, and less linear relationships are introduced into the interplay of dimensions, new types of experiences and artworks become possible. Combined with the interplay of the physical and the digital, and with presence placed at the center of everything, the infinity of reality as well as the reality of infinity can be explored. In my systems everything feeds back onto everything. From its capture on a chip inside a sensor, light takes many different paths and splits into different forms that shape various complex experiences. From ultimately static ultra-high-resolution light scans to dynamic hyper-dimensional mapping of spacetime, the n-verse project aims to experiment with dimensions and light-enabled information in its raw form through non-linearity and abstraction, both conceptual and expressive.
Applying a nonlinear multidimensional approach not only to capture, but also to express the complexity and mesmerizing depth of a human facial expression, and the special way our perceptual system treats this visual input, requires both simplification and addition. First, I imprint a face into a reflection mask that turns a basic material, aluminum foil, into a reflective structure with an almost infinite number of mirrors oriented in all possible directions. Directions that are not random: together with the size of these mirrors, their relations reveal and create both the structure and the process of hands shaping the details of one's own face into mountains and valleys. Second, I carefully place the mask onto a high-resolution flatbed scanner, choosing a position and orientation. Then, as the one-dimensional scanner head slowly moves along its linear path and digitizes, line after line, how light and its color interact with the reflective physical structure and its depth, I paint on it with laser light, tracing features and edges, searching for moments of complex exploding reflections as well as moments of light silence. It takes approximately fifteen minutes for the scanner head to capture over half a billion pixels. Distance, angle, laser type, the shakiness of my hands, and above all the overall intention as well as the momentary idea and its expression determine the imprinted light information. While the mask scans are the key to my compositions of portraits, I use additional layers of scans for background and blending. Through experimentation with several different techniques, the best outcomes have come from direct laser painting onto the scanning head - a process I first explored ten years ago and one that inspired me to return to ultra-high-resolution scanning. In some compositions, the background is captured simply by scanning a mostly flat crumpled aluminum sheet.
The search for the expressive outcome of each portrait in Photoshop is a rather long process in which a small number of layers, combined through exposure, offset, and gamma settings together with blending modes, lets the final result emerge.
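The layer arithmetic behind such a compositing step can be sketched in miniature. The adjustment and blend below are standard image operations on normalized channel values; the exact formulas Photoshop applies internally, and the parameter names used here, are illustrative approximations rather than the actual compositing code.

```python
# Sketch of two building blocks of a layer stack: an exposure/offset/gamma
# adjustment and the "screen" blend mode. Channel values are in [0, 1].
# Formulas approximate the standard definitions, not Photoshop internals.

def adjust(v, exposure=0.0, offset=0.0, gamma=1.0):
    """Apply exposure (in stops), offset, and gamma to one channel value."""
    v = v * (2.0 ** exposure) + offset   # exposure scales, offset shifts
    v = max(0.0, min(1.0, v))            # clamp back into valid range
    return v ** (1.0 / gamma)            # gamma is a power curve

def screen(a, b):
    """'Screen' blend mode: invert both layers, multiply, invert back."""
    return 1.0 - (1.0 - a) * (1.0 - b)
```

Screening a mid-gray onto itself, for instance, brightens it: `screen(0.5, 0.5)` gives `0.75`, which is why the mode is used to let laser highlights from one scan shine through another layer.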
The transformation of real-time multidimensional information into expressive forms is the main subject of the Figures interactive installation, where a highly complex system and many of its interlinked subsystems enable unique experiences. Two four-dimensional scanners stream hundreds of thousands of spatially distributed points thirty times each second. These points represent the three-dimensional physical reality of the installation space. A digital version of this physical reality is then reconstructed and reinjected into its position, or rather into its expressive two-dimensional orthographic re-representation. A combination of overlaps of spatially calibrated colored point clouds with expressive mapping of their forms acts as a mirror - a mirror where perspective is replaced with a flattened cross-section of space. Two perpendicularly arranged multi-monitor walls thus re-represent the presence of human bodies and their real-time movement: a four-dimensional system folded into two two-dimensional structures arranged in a three-dimensional setup. While a dimensionality reduction does take place in the spatial setup, an increase in dimensionality is achieved at the same time through the transformation of raw colored points into lines, rectangles, and cubes. Furthermore, the parameters, or rather variables, that determine the length of the lines and the size of the rectangles and cubes are subject to mathematical operations that enable expression through abstraction and displacement. The result is that a massive technological structure and sculpture composed of twenty-eight monitors and three computers with a total of eight graphics cards enables a simple interplay of dimensions through the proprioceptive actions of participants. Acting as a bridge between physical and digital realities, the concept of this installation is scalable and can also be realized with projections.
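The two operations described above - flattening space into a perspective-free cross-section, and raising the dimensionality of each raw point into a sized primitive - can be sketched as follows. The function names, the depth-to-size formula, and the size threshold are illustrative assumptions, not the installation's actual pipeline.

```python
# Minimal sketch of orthographic flattening and point-to-primitive mapping.
# A point is (x, y, z); the projection simply discards depth, and an
# assumed mathematical operation turns depth into primitive size.

def flatten_orthographic(point):
    """Orthographic cross-section: drop the depth axis, keep no perspective."""
    x, y, z = point
    return (x, y)

def point_to_primitive(point, scale=0.5, cube_threshold=1.0):
    """Map a raw point to a higher-dimensional primitive whose size is a
    function of its depth - where abstraction and displacement happen."""
    x, y, z = point
    size = abs(z) * scale
    kind = "cube" if size > cube_threshold else "rectangle"
    return {"pos": (x, y), "kind": kind, "size": size}

# A shallow point becomes a small rectangle, a deep one a cube.
points = [(0.2, 1.1, 0.4), (1.5, 0.3, 3.0)]
primitives = [point_to_primitive(p) for p in points]
```

In the installation such a mapping would run over hundreds of thousands of points thirty times per second on the GPU; the per-point logic above is only the conceptual core.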
Dimensional interplay, in both the simplest and the most complex version of itself, takes place within the n-verse project inside the Traces virtual reality installation. While its simplicity comes from the straightforward idea that a digitized version of physical reality streamed from two four-dimensional scanners can be aligned with its physical source and explored through a virtual reality system, its complexity lies in the manipulation of time and in advanced mapping of variables enabled by computer vision. In addition to real-time point cloud streaming into the system, mathematical operations combined with skeleton tracking are used not only to transform raw colored points into their higher-dimensional representations, but also to filter out the portions of these points belonging to human bodies and to apply specific functions to them in relation to time. In other words, one can explore the self in the flux of time. Up to ten seconds, or three hundred frames, of movement, change, and spatial translation in the form of multidimensional traces can be explored by the participants, depending on their position within the installation space. Furthermore, additional skeleton tracking data, such as hand positions, are combined with random ranges to slightly influence other forms in the system. The resulting experience is visually overwhelming. Realistic forms turn into abstract structures with complex levels of detail composed from the smallest scales. The interplay of points, lines, rectangles, and cubes with delay and hyper-interactivity invites proprioceptive exploration that is physical and digital at the same time, even as time itself shifts.
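The time-trace mechanism described above - up to three hundred frames retained, with a participant's position selecting how far back in time they look - amounts to a ring buffer of point-cloud frames. The sketch below assumes a simple linear mapping from position across the room to delay into the buffer; the real installation's mapping is not specified in this text.

```python
from collections import deque

# Ring buffer for the time traces: ten seconds at thirty frames per
# second. The position-to-delay mapping is an illustrative assumption.
MAX_FRAMES = 300

class TraceBuffer:
    def __init__(self):
        # deque with maxlen silently discards the oldest frame when full
        self.frames = deque(maxlen=MAX_FRAMES)

    def push(self, frame):
        """Append the latest point-cloud frame."""
        self.frames.append(frame)

    def trace_at(self, position_x, room_width=10.0):
        """Map a position across the room to a frame back in time:
        one wall shows the present, the opposite wall the oldest trace."""
        fraction = max(0.0, min(1.0, position_x / room_width))
        delay = int(fraction * (len(self.frames) - 1))
        return self.frames[-1 - delay]

buf = TraceBuffer()
for t in range(5):                     # five frames of a moving body
    buf.push({"t": t})
```

Standing at one side of the room (`position_x = 0`) returns the newest frame; walking to the far side reaches ten seconds into the past, so the participant's own movement through space becomes a scrub through time.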
Context and Conclusion
Turning things inside out to explore their limits has been part of my artistic practice for over a decade. Looking back, it started with long-exposure light painting with the Moon as a brush. What followed were first experiments with light scanning techniques; projecting video feedback onto a scanner scanning itself; using a hybrid physical-digital, three-dimensionally tracked puppet head as a handheld device to manipulate the virtual world inside that very head; swapping four-dimensional sensors between two adjacent virtual reality installations to enable a meeting of two participants; composing with dozens of four-dimensional streams to create massive virtual environments; placing a virtual world a thousand times larger, filled with volumetric recordings, inside a giant light sculpture, all enabled by four-dimensional scanning; and asking two painters to wear hats with webcams streaming the image of their faces onto projections for a live painting session. All these experiments were driven by a simple thought - to look at things from a different perspective, to search for ideas and interventions that are experimental and that, through the exposure of themselves, also expose complex relationships in these systems and open different paths for them and for us. The n-verse project is both a culmination and a continuation of this exploration. Oscillation between simplicity and complexity, as well as between an idea and its form, is what now drives my processes. By grounding my experimentation in multidimensional systems enabled by light and scanning techniques, I already imagine many new interventions and artworks that will over time build up my n-verse. While the future online presence of the project is extremely challenging, especially with regard to real-time four-dimensional scanning and its transmission over the network, various cross-combinations of techniques and approaches will make it possible.
Starting with expressive mapping of three-dimensional static scans of human figures, and continuing with the same approach applied to volumetric recordings of short performances, such outputs can be shared online using existing technologies and, in some cases, standard data formats. However, my ultimate goal is to build the n-verse as an open system for exploring not only my own creations, but also for integrating the works of other creators.
Jakub Grosz 2021