Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don’t. So let’s take a look at some simple techniques for peeking inside these networks.
We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.
One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows. For example, the first layer might look for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations: these neurons activate in response to very complex things such as entire buildings or trees.
One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana (see related work in [1], [2], [3], [4]). By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
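The loop described above — start from noise, nudge the image toward a target class, and regularize toward natural-image statistics — can be sketched with a toy stand-in for the network. Here a fixed random linear scorer plays the role of the trained net's "banana" logit (purely hypothetical; a real run would backpropagate through a trained convnet), and a total-variation-style penalty stands in for the correlated-neighboring-pixels prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained network's "banana" logit:
# a fixed random linear scorer over an 8x8 grayscale image.
w = rng.normal(size=(8, 8))

def score(img):
    return float(np.sum(w * img))

def prior_grad(img):
    # Gradient of a total-variation-like penalty that encourages
    # neighboring pixels to be correlated, as in natural images.
    g = np.zeros_like(img)
    g[:-1, :] += img[:-1, :] - img[1:, :]
    g[1:, :] += img[1:, :] - img[:-1, :]
    g[:, :-1] += img[:, :-1] - img[:, 1:]
    g[:, 1:] += img[:, 1:] - img[:, :-1]
    return g

# Start from random noise, then gradually tweak the image toward
# whatever maximizes the score, regularized by the smoothness prior.
img = rng.normal(size=(8, 8))
start_score = score(img)
lr, lam = 0.1, 0.05
for _ in range(200):
    img += lr * (w - lam * prior_grad(img))  # ascend d(score - lam*prior)/d(img)

print(score(img) > start_score)  # the optimized image scores higher
```

The same two ingredients — gradient ascent on a class score plus an image prior — are what the cited papers vary in their different ways.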
The Library of Babel is a place for scholars to do research, for artists and writers to seek inspiration, for anyone with curiosity or a sense of humor to reflect on the weirdness of existence - in short, it’s just like any other library. If completed, it would contain every possible combination of 1,312,000 characters, including lower case letters, space, comma, and period. Thus, it would contain every book that ever has been written, and every book that ever could be - including every play, every song, every scientific paper, every legal decision, every constitution, every piece of scripture, and so on. At present it contains all possible pages of 3200 characters, about 10^4677 books.
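The figures in the passage can be checked directly. Assuming the site follows Borges's format of 410 pages per book (which is what makes 410 × 3200 = 1,312,000 characters) and a 29-symbol alphabet (26 lower-case letters plus space, comma, and period):

```python
chars_per_page = 3200
pages_per_book = 410     # Borges's format: 410 pages per book (assumed here)
alphabet = 29            # 26 lower-case letters + space, comma, period

chars_per_book = pages_per_book * chars_per_page
print(chars_per_book)    # 1312000 -- matches the 1,312,000 in the text

distinct_pages = alphabet ** chars_per_page   # 29^3200 possible pages
books = distinct_pages // pages_per_book      # packed into 410-page books
print(len(str(books)) - 1)                    # 4677 -- on the order of 10^4677 books
```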
Since I imagine the question will present itself in some visitors’ minds (a certain amount of distrust of the virtual is inevitable) I’ll head off any doubts: any text you find in any location of the library will be in the same place in perpetuity. We do not simply generate and store books as they are requested - in fact, the storage demands would make that impossible. Every possible permutation of letters is accessible at this very moment in one of the library's books, only awaiting its discovery. We encourage those who find strange concatenations among the variations of letters to write about their discoveries in the forum, so future generations may benefit from their research.
Fragmentarium is an open source, cross-platform IDE for exploring pixel based graphics on the GPU. It is inspired by Adobe's Pixel Bender, but uses GLSL, and is created specifically with fractals and generative systems in mind.
Born in 1982. His works, centered on real-time, computer-programmed audiovisual installations, have been shown at national and international art exhibitions as well as media art festivals. He is a recipient of many awards, including the Excellence Prize at the Japan Media Art Festival in 2004 and the Award of Distinction at Prix Ars Electronica in 2008. Involved in a wide range of activities, he has worked on a concert piece production for Ryoji Ikeda, collaborated with Yoshihide Otomo, Yuki Kimura and Benedict Drew, participated in the Lexus Art Exhibition at Milan Design Week, and has started performing live as Typingmonkeys.
A selection of links about generative and new media art by Marius Watz
Do you hate having to write your artist statement? Generate your own here for free, and if you don't like it, generate another one. For use with funding applications, exhibitions, curriculum vitae, websites ...
Mandelbulbs are a new class of 3D Mandelbrot fractals. Unlike many other 3D fractals, the Mandelbulb continues to reveal finer details the closer you look.
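The usual power-8 Mandelbulb iterates z → z⁸ + c, with the power taken in spherical coordinates (the common White/Nylander formulation; renderers vary in their exact angle conventions). A minimal escape-time membership test looks like this:

```python
import math

def mandelbulb_escapes(cx, cy, cz, power=8, max_iter=32, bailout=2.0):
    """Escape-time test for z -> z^power + c, with the power taken in
    spherical coordinates (one common Mandelbulb formulation)."""
    x = y = z = 0.0
    for _ in range(max_iter):
        r = math.sqrt(x * x + y * y + z * z)
        if r > bailout:
            return True              # escaped: the point is outside the set
        if r == 0.0:
            x, y, z = cx, cy, cz     # any power of the origin is the origin
            continue
        theta = math.acos(z / r) * power
        phi = math.atan2(y, x) * power
        rn = r ** power
        x = rn * math.sin(theta) * math.cos(phi) + cx
        y = rn * math.sin(theta) * math.sin(phi) + cy
        z = rn * math.cos(theta) + cz
    return False                     # still bounded: treat as inside

print(mandelbulb_escapes(0.0, 0.0, 0.0))   # False: the origin stays bounded
print(mandelbulb_escapes(2.0, 0.0, 0.0))   # True: distant points blow up fast
```

Rendering software raymarches a distance estimate derived from this same iteration, which is what lets the surface keep revealing detail at any zoom level.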
Otomata is a generative sequencer that employs cellular-automaton-style logic.
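A minimal sketch of what cellular-automaton-style sequencing can mean — simplified and illustrative, not Otomata's actual rule set: cells walk across a grid, and when one hits a wall it triggers the note indexed by where it landed, then bounces back.

```python
GRID = 9
SCALE = ["C", "D", "E", "G", "A", "c", "d", "e", "g"]  # illustrative 9-note scale

def step(cells):
    """Advance each cell (x, y, dx, dy) one square; return (cells, notes)."""
    out, notes = [], []
    for x, y, dx, dy in cells:
        nx, ny = x + dx, y + dy
        if not 0 <= nx < GRID:       # hit a vertical wall
            notes.append(SCALE[y])   # trigger the note for this row
            dx = -dx
            nx = x + dx
        if not 0 <= ny < GRID:       # hit a horizontal wall
            notes.append(SCALE[x])   # trigger the note for this column
            dy = -dy
            ny = y + dy
        out.append((nx, ny, dx, dy))
    return out, notes

cells = [(0, 4, 1, 0), (4, 0, 0, 1)]  # two cells heading right and down
for tick in range(10):
    cells, notes = step(cells)
    if notes:
        print(f"tick {tick}: play {notes}")
```

Because every cell follows the same local rule, a handful of starting positions yields long, evolving rhythmic patterns — the appeal of this family of sequencers.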
Okapi is an open-source framework for building digital, generative art in HTML5.
The embedding of the subject in a parametric figuration can give us back our responsibility to our environment. Recognizing the consequences of our own (and others') actions in space can be a key concept for orientation, integration, and understanding…
Generative drawing dream.
Let's get flurrious, create snowflakes.
Fractal 4D is a simple and very useful Adobe AIR app that enables you to draw beautiful fractal swirls.
A cadKIT for Processing (v1.0) for object-oriented geometry: a kit of libraries based on the (anar+) parametric modeling scheme.
Stripgenerator is a free-of-charge project created to embrace the internet blogging and strip-creation culture, helping people with no drawing abilities to express their opinions via strips. Yeah, I am one of them as well.
The ad generator is a generative artwork that explores how advertising uses and manipulates language. Words and semantic structures from real corporate slogans are remixed and randomized to generate invented slogans.
Build, Share, Download Fonts