Everything we see is meant to be seen


"Everything we see is meant to be seen" is a statement I came up with after reading the book 100 Whites (Hara 1). In his book, Hara briefly explains that evolution has made it necessary for some things to be sensed and others not. Why can we see the moon, but not look directly at the sun? Hara claims that if evolution had required us to look directly at the sun, our senses probably would have adapted accordingly; because there was no such necessity, we are incapable of it.

Hara's theory not only inspired me to write this essay; it also suggests that there is a reason I can see the things I see. This essay is a closer inspection of the claim that everything we see is meant to be seen. I have tested the statement against three different images. All three have in common that, each in its own specific way, they trigger an unconscious part of image processing in our brain. I have chosen these images to show that even visually incomprehensible images, which leave room for speculation, still have a reason to be seen, and yet that a gap also seems to open up between the human programme created by evolution and the technological programmes used for modern imagery.

1. Hara, Kenya. 100 Whites. Lars Müller Publishers, 2019. ISBN 978-3-03778-579-9.



Image I



Fig. 1. Seal, Ivan.
“bleach-flik-spitchudeeinmimowf.” 2012. 50 × 40 cm.
www.ivanseal.com. Accessed 28 Jan. 2021.



The first image in front of me is a painting by Ivan Seal named bleach-flik-spitchudeeinmimowf (Fig. 1). Seal's paintings are translated from cognitive images in his own memory. The architecture of the brain (2) is the reference Seal uses to sculpt and brush his paint onto his relatively small canvases. Each work questions the border between a thing and an object (3). Things are the undefined objects we see when we imagine something: the result of abstractions and compressions performed by the architecture of the brain in order to remember. When we imagine something, the image we see is not a precise one; we cannot measure it, for example. This marks the distinction between the preciseness of a real-life object and the undefinedness of a thing. Seal plays with these things in his work as he tries to define them back into objects again. Size and title also play an important role in questioning and challenging the architecture of the brain. Seal considers the size of his canvases, usually slightly bigger than the human head, the perfect size for something to be an object. His titles are coalescences of existing words assembled by computers; the original words lose their authentic connotations and receive new meaning through the painting.

What Seal's painting reveals is the confabulation of the brain. Even though Seal does not suffer from a memory disorder himself, his work is closely related to dementia and senility in general (Tan 3). Seal presents us with an image we define as unrecognisable; he intentionally lets us experience the conflict between memory and image.

Everything we see is meant to be seen


Even though we do not recognise the thing in bleach-flik-spitchudeeinmimowf (Fig. 1), we do recognise that it is derived from something we know. By showing us this conflict and letting us experience it subliminally, Seal proves that everything we see is relative to something we know. If the things we saw were not relative, we would see only a combination of shapes; but we don't, because we recognise it as a thing, something derived from an object.

Architecture of the brain: the subliminal sorting process that determines which things we remember and how we remember them


2. ‘Artist Ivan Seal in Conversation with Harriet Loffler’. Vimeo, uploaded by Contemporary Art Society, 19 Apr. 2013, vimeo.com/65272769.


3. Tan, Declan. ‘The Noise In-Between: An Interview With Ivan Seal’. The Quietus, 10 Mar. 2018, thequietus.com/articles/24144-ivan-seal-interview.



Image II



Fig. 2. NASA.
“Face on Mars (35A72).” 1976.
https://photojournal.jpl.nasa.gov/catalog/PIA01141. Accessed 28 Jan. 2021.



“This picture is one of many taken in the northern latitudes of Mars by the Viking 1 Orbiter in search of a landing site for Viking 2.

The picture shows eroded mesa-like landforms. The huge rock formation in the center, which resembles a human head, is formed by shadows giving the illusion of eyes, nose and mouth. The feature is 1.5 kilometers (one mile) across, with the sun angle at approximately 20 degrees. The speckled appearance of the image is due to bit errors, emphasized by enlargement of the photo. The picture was taken on July 25 from a range of 1873 kilometers (1162 miles). Viking 2 will arrive in Mars orbit next Saturday (August 7) with a landing scheduled for early September.” (5)

In July 1976 a spacecraft landed on the surface of Mars. The spacecraft, named Viking 1, consisted of two elements: the Viking 1 orbiter and the Viking 1 lander. The orbiter was an orbiting spacecraft that placed the lander (a controlled surface vehicle) on the surface of Mars. After the arrival of the two machines, they separated and, for the first time in human history, successfully sent images from Mars to Earth. The purpose of the vehicles was to send data about the surface of Mars back to Earth, in order to discover its potential to support life. Meanwhile a second spacecraft, named Viking 2, was on its way to explore Mars as well. After its arrival, the Viking 1 orbiter started looking for a potential landing spot for Viking 2, constantly sending information to NASA while on its quest. A couple of days later, on 31 July, NASA released image 35A72 (Fig. 2).

The camera on the spacecraft (Walrecht 6) was a stereoscopic vidicon camera; the stereoscopic setup was necessary to show the altitude of the terrain. The camera could photograph plots of 40 by 44 km at a resolution of 1056 lines by 1182 pixels. The viewing angle and the focal length of both lenses were fixed, at 1.54 by 1.69 degrees and 475 mm respectively, and the shutter speed could be set between 0.0003 and 2.7 seconds. The image released by NASA is a combination of manually corrected images layered on top of each other. According to an article by Malin Space Science Systems (Malin 7), the raw images still needed “bit-error correction, reseau removal, very slight brightness alteration, and projection to a standard map view (mercator projection)”. Image 35A72 is, in the end, a manually corrected merge of different images.
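To give a sense of scale, here is a rough back-of-envelope calculation (my own arithmetic, not taken from the sources) of the ground resolution implied by these figures:

```python
# Approximate ground resolution of a Viking orbiter frame,
# derived from the figures quoted above (my own estimate).
plot_metres = (40_000, 44_000)   # photographed plot: 40 x 44 km
pixels = (1_056, 1_182)          # 1056 scan lines x 1182 pixels per line

for size_m, px in zip(plot_metres, pixels):
    print(f"{size_m / px:.0f} m per pixel")  # roughly 37-38 m per pixel
```

At roughly 37 metres per pixel, the 1.5-kilometre "face" spans only about forty pixels, which helps explain how shadows and bit errors could suggest facial features at this scale.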

Image 35A72 is an image born of innovation and technological advancement, intended to render the terrain of Mars as objectively and analytically understandable as possible. Yet however objectively the camera may have been programmed, the image never reached its intended objectivity, because no one could unsee the face in the middle of the picture.

Everything we see is meant to be seen


The fact that everyone sees a face in the middle of the picture is called apophenia, a term introduced by the German neurologist and psychiatrist Klaus Conrad (Mishara 8). Humans are (overly) developed at recognising faces: before humans invented speech, the face was the main tool for communication. Recognising faces was an evolutionary necessity we had to rely on (Goodale and Milner 9), yet when looking at this image we seem to suffer from it.

The image is one of the first to let us experience the constancy of the brain in relation to technological image development. In Image I, the conflict of not recognising the thing did not interfere with the image, because it was the creator's intention; but because technological images do not run on the same programme as we humans do, a boundary and a conflict arise between the programmes. The human programme, created by evolution, and the camera's programme, created to show precise terrain, do not correlate. When looking at the image, we are therefore unwillingly suffering from our own evolution.

5. Original statement by NASA released with image 35A72.


6. Walrecht, H. ‘Beelden uit de Ruimte’ [‘Images from Space’]. Hansonline, 2014, http://www.hansonline.eu/beelden/viking_orbiter.html.


7. Malin, Michael C. ‘The “Face on Mars”’. Malin Space Science Systems, 1995, www.msss.com/education/facepage/face.html.


8. Mishara, Aaron. ‘Klaus Conrad (1905–1961): Delusional Mood, Psychosis, and Beginning Schizophrenia’. Schizophrenia Bulletin, 2009, doi:10.1093/schbul/sbp144.


9. Goodale, Melvyn A., and A. David Milner. Sight Unseen: An Exploration of Conscious and Unconscious Vision. Ebsco Publishing, 2004.



Image III



Fig. 3. @melip0ne.
“Name one thing in this photo.” Twitter, 2019.
https://twitter.com/melip0ne/status/1120503955526750208. Accessed 28 Jan. 2021.



In 2014 the programmer Ian Goodfellow designed an algorithm called a GAN (Giles 10). GAN, short for Generative Adversarial Network, is made to generate realistic images. The algorithm works with a database of images, which it uses as a reference to create new ones.

The database consists of the images from which the GAN gets its input; the algorithm works as a competition between two parts of the same programme. The first part, as Goodfellow calls it, is the generator, whose algorithm is designed to generate images. The generator is fed many different images of the same thing so that it can learn what that thing looks like. The second part of the GAN is the discriminator. Its task is to continuously check, against the same database of imagery, whether the image the generator creates is realistic. The generator's task is to keep producing images until the discriminator is fooled. The essence of a GAN is that it learns from what it is fed through a conflict within the algorithm. According to Goodfellow, this idea could potentially be extended into different fields (11), for example letting an algorithm design medicine for not-yet-discovered illnesses, or training self-driving cars for random situations. A minimal sketch of this competition is given below.
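To make the competition concrete, here is a minimal toy sketch of a GAN, assuming the PyTorch library. The setup, imitating a simple one-dimensional number distribution instead of photographs, is my own simplification for illustration; it is not Goodfellow's code.

```python
# Toy GAN sketch (my own illustration, assuming PyTorch).
# The generator learns to imitate a target distribution; the
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()
real_label = torch.ones(64, 1)   # "realistic" verdict
fake_label = torch.zeros(64, 1)  # "fake" verdict

for step in range(2000):
    # The "database": real samples the discriminator checks against.
    real = torch.randn(64, 1) * 0.5 + 3.0
    # The generator's attempt, produced from random noise.
    fake = generator(torch.randn(64, 8))

    # Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    d_loss = (bce(discriminator(real), real_label)
              + bce(discriminator(fake.detach()), fake_label))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(discriminator(generator(torch.randn(64, 8))), real_label)
    g_loss.backward()
    opt_g.step()
```

The conflict described above sits in the two loss terms: the discriminator improves by catching the generator, and the generator improves by not being caught, until its output becomes hard to distinguish from the database.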

When, on 23 April 2019, Twitter user @melip0ne posted the picture on the left page with the title “Name one thing in this photo” (Fig. 3), people could not figure out what they saw. The image provoked comments such as “This is driving me crazy. Is that a black flamingo in the front? I just don't know” (@yomanshouse), “This is the only ‘Name one thing...’ tweet where I can't actually name anything” (@ansyul), and “i’m seriously so distressed looking at the almost-recognizable stuff” (@jurassicseb). People clearly could not figure out what they were seeing in the image, yet the image could still be seen.

Everything we see is meant to be seen


What makes images produced by a GAN distinct from any other image is that the GAN's intention is not to produce an image at all; its intention is only to fool its programmed discriminator. Compared to Images I and II, this image is therefore even further removed from the human programme of perceiving images.

In Image I the intention and the end result were the same: both derived from the idea of something we vaguely know. In Image II the conflict between the programme created by human evolution and the programme of the technological camera became visible, yet there was still overlap between the two programmes, in that Image II still had the intention to be an image. Only a small part of the image triggered a different part of the human programme, so the differentiation between human and camera dominated not the whole image but only a small part of it.

In Image III there is almost no overlap between the human and the technological programme anymore; the only overlap lies in the database the GAN used to create the final image. If this database consisted of images not understandable by humans, we would fail to see both the process of creation and the end result, leaving us with unrecognisable imagery.

10. Giles, Martin. ‘The GANfather: The Man Who’s given Machines the Gift of Imagination’. MIT Technology Review, 2018, www.technologyreview.com/2018/02/21/145289/the-ganfather-the-man-whos-given-machines-the-gift-of-imagination.


11. ‘Interview With Ian Goodfellow, RE•WORK Deep Learning Summit’. YouTube, uploaded by RE•WORK, 12 Apr. 2019, www.youtube.com/watch?v=MrK95Iv95So&t=233s.



Conclusion


I have shown three images that trigger unconscious parts of image processing within the human brain. The images show how evolution has created the architecture of the brain and why everything we see is meant to be seen. The three images also show how much the human brain still relies on its evolution when processing technological imagery.

Image I had the intention to portray a conflict of recognition
Image I showed the conflict of recognition it intended to portray

Image II had the intention to be an objective informative image
Image II showed a face derived from human evolution within an objective landscape

Image III had the intention to beat its own algorithm
Image III showed no clear sign of production and became irritatingly unrecognisable

Instead of being able to recognise algorithms, humans still depend on the tools provided to them by evolution. The more technologically advanced images become, the bigger the gap will grow between the intention of the algorithm and what humans recognise in its final production, leaving us with images understandable to technology but incomprehensible to humanity.

Everything we see is meant to be seen
Everything we don't see needs to be seen