2005 - 2017

New Media (988)



  1. An ongoing series of digital installations situated in Google Earth. DDD was created by building 3D digital models, then locating and animating them in Google Earth using KML code. The soundtrack was created from a well-known song about the white cliffs of Dover.
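
    A minimal sketch of the kind of KML involved in placing a 3D model in Google Earth; the coordinates, scale, and model file name here are hypothetical, and an animated installation would additionally use a <gx:Tour> or timestamped updates:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Installation model</name>
    <Model>
      <altitudeMode>relativeToGround</altitudeMode>
      <Location>
        <longitude>1.359</longitude>
        <latitude>51.135</latitude>
        <altitude>100</altitude>
      </Location>
      <Scale><x>10</x><y>10</y><z>10</z></Scale>
      <Link><href>model.dae</href></Link>
    </Model>
  </Placemark>
</kml>
```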

    1 year ago
  2. An Archive of 10,000 Cylinder Recordings Readied for the Spotify Era. The UCSB Library invites you to discover and listen to its online archive of cylinder recordings.


    1 year ago
  3. 1 year ago
  4. Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don’t. So let’s take a look at some simple techniques for peeking inside these networks.

    We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.

    One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows. For example, the first layer maybe looks for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations—these neurons activate in response to very complex things such as entire buildings or trees.

    One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana (see related work in [1], [2], [3], [4]). By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
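
    The procedure described above can be sketched in miniature. Everything in this sketch is a toy stand-in (a hand-written score function instead of a trained network, finite differences instead of backpropagation), but the shape is the same: start from random noise, follow the gradient of the class score, and regularize toward natural-image statistics:

```python
import random

def banana_score(img):
    # toy stand-in for a trained network's "banana" output neuron:
    # it simply rewards bright pixels near the center (illustrative only)
    n = len(img)
    return sum(p * (1.0 - abs(i - n / 2) / n) for i, p in enumerate(img))

def smoothness_penalty(img):
    # natural-image prior: neighboring pixels should be correlated
    return sum((a - b) ** 2 for a, b in zip(img, img[1:]))

def objective(img, weight=0.5):
    return banana_score(img) - weight * smoothness_penalty(img)

def ascend(img, steps=200, lr=0.01, eps=1e-4):
    # gradient ascent on the objective, with finite differences
    # standing in for backpropagation through the network
    img = list(img)
    for _ in range(steps):
        for i in range(len(img)):
            lo = img[:i] + [img[i] - eps] + img[i + 1:]
            hi = img[:i] + [img[i] + eps] + img[i + 1:]
            g = (objective(hi) - objective(lo)) / (2 * eps)
            img[i] = min(1.0, max(0.0, img[i] + lr * g))
    return img

random.seed(0)
noise = [random.random() for _ in range(16)]  # start from random noise
tuned = ascend(noise)                         # tweak it toward "banana"
```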

  5. A video and audio tribute we did for MF DOOM, flipping his beats and re-creating them using the original samples, with videos added on top. We've played this live as an opening act for MF DOOM. Mixed, assembled, and edited by Sims, Mass & Alan the G; artwork by Julien Sens.

    2 years ago
  6. 2 years ago
  7. Automatic Cinema aims at an artistic audience. The software can be used for exhibitions or installations, where a variety of media are served on various screens and channels – synchronized or not. Since all media assets are stored in a database, Automatic Cinema is also useful for documentarians and researchers with a structural approach to their material. And last but not least, Automatic Cinema is open source and can be developed by anybody. Instead of cutting a bunch of video clips the hard way, Automatic Cinema generates countless versions based upon predefined styles. You'll probably end up seeing a movie you'd never have thought of: serendipity at its best.

    2 years ago
  8. The Library of Babel is a place for scholars to do research, for artists and writers to seek inspiration, for anyone with curiosity or a sense of humor to reflect on the weirdness of existence - in short, it’s just like any other library. If completed, it would contain every possible combination of 1,312,000 characters, including lower case letters, space, comma, and period. Thus, it would contain every book that ever has been written, and every book that ever could be - including every play, every song, every scientific paper, every legal decision, every constitution, every piece of scripture, and so on. At present it contains all possible pages of 3200 characters, about 10^4677 books.

    Since I imagine the question will present itself in some visitors’ minds (a certain amount of distrust of the virtual is inevitable) I’ll head off any doubts: any text you find in any location of the library will be in the same place in perpetuity. We do not simply generate and store books as they are requested - in fact, the storage demands would make that impossible. Every possible permutation of letters is accessible at this very moment in one of the library's books, only awaiting its discovery. We encourage those who find strange concatenations among the variations of letters to write about their discoveries in the forum, so future generations may benefit from their research.
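
    The arithmetic behind that figure can be checked from the numbers given here, assuming a 29-symbol alphabet (26 lower-case letters, space, comma, period), 3,200 characters per page, and 410 pages per book (1,312,000 / 3,200). One reading that matches the quoted figure is the number of 410-page books needed to hold one copy of every possible page:

```python
import math

SYMBOLS = 29                                   # 26 letters, space, comma, period
CHARS_PER_PAGE = 3200
PAGES_PER_BOOK = 1_312_000 // CHARS_PER_PAGE   # 410 pages per book

# log10 of the number of distinct pages, and of the books needed
# to hold one copy of each of them
log_pages = CHARS_PER_PAGE * math.log10(SYMBOLS)
log_books = log_pages - math.log10(PAGES_PER_BOOK)
# log_books comes out at roughly 4677, i.e. about 10^4677 books
```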

    2 years ago
  9. Playscii is an open source ASCII art program, the successor to EDSCII. It runs on Windows and Linux, and will run on Mac OS X soon, after a bit more work.

    More info: http://vectorpoem.com/playscii/

    Please note that Playscii is open source, still in early development, and is offered as a pay-what-you-want download here on itch. Testing and bug reports are appreciated!

    2 years ago
  10. Echo Nest Remix is the Internet Synthesizer. Make amazing things from music, automatically.

    Turn any music or video into Python or JavaScript code.

    Echo Nest Remix lets you remix, re-edit, and reimagine any piece of music and video, automatically and algorithmically.

    Remix has done the following: played a song forever, walkenized and cowbellized hundreds of thousands of songs in a week, reversed basically everything, beat matched two songs, split apart DJ mixes by their individual tracks, made new kinds of video mashups, corrected sloppy drumming, synced video to a song, transitioned between multiple covers of the same song, made a cat play piano, and taught dogs to play dubstep. Check out all the examples here.

    Remix is available as an open source SDK for you to use, for Mac, Linux, and Windows:

    Install for Python: sudo pip install remix. Full installation details, packages for Mac and Windows, and complete Python documentation are here.

    Try JavaScript: Test out remix.js here.

    Download JavaScript: remix.js. Full JavaScript install details and documentation are here.
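
    The style of editing these scripts perform can be illustrated with a self-contained toy (this is not the actual SDK API): treat a song as a list of analyzed beat segments, re-order them algorithmically, and lay them back out on a fresh timeline:

```python
# a "song" modelled as beat segments with start times and durations,
# loosely mirroring the beat list a music-analysis step produces
beats = [{"start": i * 0.5, "duration": 0.5, "label": f"beat{i}"}
         for i in range(8)]

def reverse_edit(beats):
    # re-order the segments back to front, then lay them out on a fresh
    # timeline; swapping in a different re-ordering rule is how such
    # scripts produce different remixes
    out, t = [], 0.0
    for b in reversed(beats):
        out.append({**b, "start": t})
        t += b["duration"]
    return out

reversed_song = reverse_edit(beats)
```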

    2 years ago
  11. [ About Re:Sound Bottle -second mix- ]
    Experimental sound medium that transforms recorded everyday sounds into music

    [ Concept ]
    • Allows anyone to create music using sounds from daily life
    • Communication that arises from intuitive sound interaction

    The conventional way of experiencing music is through existing technologies such as the iPod or the radio. But music experienced this way arrives in a fixed, static form, and as a result it leaves us dissatisfied.

    To really enjoy music, we need to find it in the sounds around us: rather than being tied to gadgets that provide music for us, we should search for music ourselves.

    A series of ideas like these led me to create this device.

    This creation's main concept is ‘collecting sounds in a bottle’: recording sounds from daily life and choosing which of the collected sounds to use. Using everyday sounds as musical components establishes a new understanding of the sounds we hear every day. By collecting your own sampling of sounds, you encounter a unique piece of music that can be experienced only once.

    This device will bring a smile to anyone: it lets people experience the charm of music first-hand, and turn music into something they love.

    Created by Jun Fujiwara

    2 years ago
  12. The Pannini projection is a mathematical rule for constructing perspective images with very wide fields of view. It is named in honor of Gian Paolo Pannini, an 18th Century Roman painter and professor of perspective, who may very well have used it to draw spectacular views such as the one above; for it can be realized with drawing instruments almost as easily as the standard rectilinear perspective projection. However it is not now taught in art schools, and was apparently never described in print before its recent rediscovery by a team of open source software developers.
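
    The projection itself is compact; here is a sketch of the general Pannini mapping as its rediscoverers published it, taking view angles in radians (d is the projection parameter: d = 0 reduces to ordinary rectilinear perspective, d = 1 is the classic form):

```python
import math

def pannini(lon, lat, d=1.0):
    """General Pannini mapping from a view direction (lon, lat, in
    radians) to image-plane coordinates (x, y)."""
    # a single scale factor applies to both axes; it grows as the view
    # direction swings away from the image center
    s = (d + 1.0) / (d + math.cos(lon))
    return s * math.sin(lon), s * math.tan(lat)

center = pannini(0.0, 0.0)  # the view center lands at the origin
```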

    2 years ago

  13. Fragmentarium is an open source, cross-platform IDE for exploring pixel based graphics on the GPU. It is inspired by Adobe's Pixel Bender, but uses GLSL, and is created specifically with fractals and generative systems in mind.

    2 years ago
  14. 2 years ago
  15. We describe a novel algorithm for extracting a resolution-independent vector representation from pixel art images, which enables magnifying the results by an arbitrary amount without image degradation. Our algorithm resolves pixel-scale features in the input and converts them into regions with smoothly varying shading that are crisply separated by piecewise-smooth contour curves. In the original image, pixels are represented on a square pixel lattice, where diagonal neighbors are only connected through a single point. This causes thin features to become visually disconnected under magnification by conventional means, and it causes connectedness and separation of diagonal neighbors to be ambiguous. The key to our algorithm is in resolving these ambiguities. This enables us to reshape the pixel cells so that neighboring pixels belonging to the same feature are connected through edges, thereby preserving the feature connectivity under magnification. We reduce pixel aliasing artifacts and improve smoothness by fitting spline curves to contours in the image and optimizing their control points.
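
    The ambiguity resolution at the heart of the abstract can be illustrated with a toy version of one of the paper's heuristics (exact-match colors and a hypothetical pixel window stand in for the real similarity measure): when both diagonals of a 2x2 block connect matching colors, keep the diagonal of the rarer color, since thin foreground features tend to be sparse:

```python
def ambiguous(block):
    # block = ((tl, tr), (bl, br)): both diagonals connect matching
    # colors, so their crossings cannot both be kept
    (tl, tr), (bl, br) = block
    return tl == br and tr == bl and tl != tr

def resolve(block, window):
    # sparse-pixels heuristic: connect the diagonal whose color occurs
    # less often in the surrounding window of pixels
    (tl, tr), (bl, br) = block
    return "tl-br" if window.count(tl) < window.count(tr) else "tr-bl"

# a thin dark line crossing a light background: the dark diagonal
# should stay connected under magnification
block = (("X", "O"), ("O", "X"))
window = ["O"] * 10 + ["X"] * 3
choice = resolve(block, window)
```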

    2 years ago
Page 1 of 66: December 2014 - November 2015 (2005 - 2017)
