The Transcriptions of Space is an experimental application built with
deep learning algorithms that explores artificial intelligence's
capacity for an inherently human kind of creativity. By connecting two
independent neural networks, the application observes the world around
it and expresses its thoughts based on previously acquired knowledge.
The first algorithm, an image-recognition pipeline built on a
convolutional neural network, mimics the human tendency to detect
patterns and find meaning in vague visual stimuli. Through the camera
interface, it identifies letters in the shapes of surrounding objects
and records them sequentially. An approximate string matching
algorithm then uses an English dictionary to convert the sequence of
found letters into a readable word.
The second algorithm, a language model based on a recurrent neural
network, takes the converted word as input and predicts the most
likely next words in the sequence, thereby creating entire sentences
of text.
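The generation loop can be sketched as follows. The app's actual model is a recurrent neural network trained on the corpora described below; here a toy hard-coded next-word table stands in for it, purely to show how a seed word is extended into a sentence one predicted word at a time.

```python
import random

# Stand-in next-word table; in the app, an RNN trained on the
# Nature / Built-environment corpora provides these predictions.
NEXT_WORDS = {
    "street": ["lights", "was"],
    "lights": ["flickered"],
    "flickered": ["softly"],
}

def generate_sentence(seed_word, max_len=6, seed=0):
    """Starting from the word recovered by string matching,
    repeatedly sample a likely next word until the model has
    no continuation or the length limit is reached."""
    rng = random.Random(seed)
    words = [seed_word]
    while len(words) < max_len and words[-1] in NEXT_WORDS:
        words.append(rng.choice(NEXT_WORDS[words[-1]]))
    return " ".join(words)

print(generate_sentence("street"))
```

Swapping the table for real RNN output changes nothing in the loop: the model is simply queried for a next-word distribution at each step.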
To support use in different settings, two language models were
trained: one for natural environments and one for built environments.
The natural-environment model was trained on a text corpus of
scientific and fiction books about nature (2,763,098 words).
The built-environment model's corpus consisted of books describing the
architectural features of cities and the history of urban space and
its interaction with humans, as well as science and fiction literature
on the human future in cyber cities (3,134,152 words).
Walking - in the city, a local neighbourhood, or on a hiking trail -
lost in thought, let the app be your companion and inspire you as it
expresses its thoughts about the specific space.
Try it
ttos.artem.st
(mobile only).
Creative Applications Network review /  
The Transcriptions of Space – AI assisted visual stimuli