2005 - 2017

Technology (673)


  1. Open Stage Control is a libre, bidirectional OSC control-surface application for the desktop. It is built with HTML, JavaScript & CSS on top of the Electron framework.

     

    Download here: https://github.com/jean-emmanuel/open-stage-control/releases

     

    Features

    • mouse & multi-touch sensitive widgets
    • modular & responsive layout
    • built-in live editor
    • bidirectional OSC bindings
    • headless server mode with any number of clients using Chromium
    • app state store / recall & import / export
    • themes
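A bidirectional OSC binding ultimately comes down to exchanging OSC packets, which have a simple binary layout: a null-padded address pattern, a type tag string, then big-endian arguments. A minimal encoder sketch in pure Python, following the OSC 1.0 spec (the `/fader` address is a hypothetical widget path, not one defined by Open Stage Control):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, per the OSC spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message with float32 arguments (big-endian)."""
    typetags = "," + "f" * len(args)          # e.g. ",f" for one float
    msg = osc_pad(address.encode()) + osc_pad(typetags.encode())
    for a in args:
        msg += struct.pack(">f", a)           # 32-bit big-endian float
    return msg

# "/fader" (6 bytes) pads to 8; ",f" pads to 4; plus one 4-byte float = 16 bytes.
packet = osc_message("/fader", 0.5)
```

Sending `packet` over UDP to the application's OSC port would be the "binding" half; a real client would also listen for messages coming back.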
    10 months ago
  2. PabloDraw is an ANSI/ASCII text and RIPscrip vector graphic art editor/viewer with multi-user capabilities.


    1 year ago
  3. Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don’t. So let’s take a look at some simple techniques for peeking inside these networks.

    We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.
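The stacked-layer forward pass described above can be sketched in a few lines of numpy. This is a toy illustration only: the layer sizes are arbitrary and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# A toy stack of layers: input -> hidden -> hidden -> output.
# Real image classifiers are 10-30 layers deep; these sizes are illustrative.
sizes = [64, 32, 16, 10]
weights = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

def forward(image_vector):
    """Feed the input through each layer in turn; the final layer 'answers'."""
    activation = image_vector
    for W in weights[:-1]:
        activation = relu(W @ activation)      # each layer talks to the next
    return softmax(weights[-1] @ activation)   # output layer: class scores

probs = forward(rng.normal(size=64))           # a distribution over 10 classes
```

Training would adjust `weights` by backpropagation until `probs` matches the desired classifications; here only the forward direction is shown.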

    One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows. For example, the first layer may look for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations—these neurons activate in response to very complex things such as entire buildings or trees.

    One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana (see related work in [1], [2], [3], [4]). By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
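The "turn the network upside down" idea is gradient ascent on the input: start from noise and repeatedly nudge the image to raise one class score, with a smoothness penalty acting as the natural-image prior. A toy sketch, with a fixed linear "detector" standing in for the trained network's class neuron (the detector, image size, and step sizes are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained class neuron ("banana"): a fixed linear template.
detector = rng.normal(size=(16, 16))

def score(img):
    """How strongly the stand-in neuron responds to this image."""
    return float((detector * img).sum())

def smoothness_penalty_grad(img, weight=0.1):
    """Gradient of a quadratic penalty that encourages neighboring
    pixels to be correlated (the 'natural image' prior)."""
    g = np.zeros_like(img)
    dx = img[:, 1:] - img[:, :-1]
    dy = img[1:, :] - img[:-1, :]
    g[:, 1:] += dx
    g[:, :-1] -= dx
    g[1:, :] += dy
    g[:-1, :] -= dy
    return weight * g

img = rng.normal(size=(16, 16))        # start with an image of random noise
before = score(img)
for _ in range(100):
    # Ascend the class score while keeping the image smooth.
    img += 0.05 * (detector - smoothness_penalty_grad(img))
after = score(img)                     # the image now "looks like" the class
```

In a real network the score gradient comes from backpropagating through all the layers instead of a fixed template, but the loop is the same shape.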


  4. 2 years ago
  5. The Gallery of Concept Visualization features projects which use pictures to communicate complex and difficult ideas (not just data).



    2 years ago
  6. 2 years ago
  7. Automatic Cinema aims at an artistic audience. The software can be used for exhibitions or installations, where a variety of media are served on various screens and channels, synchronized or not. Since all media assets are stored in a database, Automatic Cinema is also useful for documentarians and researchers with a structural approach to their material. And last but not least, Automatic Cinema is open source and can be developed by anybody. Instead of cutting a bunch of video clips the hard way, Automatic Cinema generates countless versions based upon predefined styles. You'll probably end up seeing a movie you had never thought of: serendipity at its best.

    2 years ago
  8. Playscii is an open source ASCII art program, the successor to EDSCII. It runs on Windows and Linux, and will run on Mac OS X soon, after a bit more work.

    More info: http://vectorpoem.com/playscii/

    Please note that Playscii is open source, still in early development, and is offered as a pay-what-you-want download here on itch. Testing and bug reports are appreciated!

    2 years ago
  9. Echo Nest Remix is the Internet Synthesizer. Make amazing things from music, automatically.

    Turn any music or video into Python or JavaScript code.

    Echo Nest Remix lets you remix, re-edit, and reimagine any piece of music and video, automatically and algorithmically.

    Remix has done the following: played a song forever, walkenized and cowbellized hundreds of thousands of songs in a week, reversed basically everything, beat matched two songs, split apart DJ mixes by their individual tracks, made new kinds of video mashups, corrected sloppy drumming, synced video to a song, transitioned between multiple covers of the same song, made a cat play piano, and taught dogs to play dubstep. Check out all the examples here.
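The common trick behind most of those examples is slicing audio at analyzed beat boundaries and reassembling the slices in a new order. A library-free sketch of that idea on a synthetic signal (the beat times are invented here; the real library gets them from the Echo Nest analyzer):

```python
import numpy as np

SR = 8000  # sample rate in Hz, arbitrary for this sketch

# Pretend the analyzer found a beat every half second in a 4-second clip.
beat_times = np.arange(0.0, 4.0, 0.5)
audio = np.sin(2 * np.pi * 440 * np.arange(4 * SR) / SR)

def slice_by_beats(signal, times, sr):
    """Cut the signal into per-beat chunks at the analyzed boundaries."""
    bounds = (np.append(times, len(signal) / sr) * sr).astype(int)
    return [signal[a:b] for a, b in zip(bounds[:-1], bounds[1:])]

beats = slice_by_beats(audio, beat_times, SR)
beats.reverse()                    # "reversed basically everything", beat-wise
remixed = np.concatenate(beats)    # same samples, new musical order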

    Remix is available as an open source SDK for you to use, for Mac, Linux, and Windows:

    Install for Python: sudo pip install remix. Full installation details, packages for Mac and Windows, and complete Python documentation are here.

    Try JavaScript: Test out remix.js here.

    Download JavaScript: remix.js. Full JavaScript install details and documentation are here.

    2 years ago
  10. [ About Re:Sound Bottle -second mix- ]
    Experimental sound medium that transforms recorded everyday sounds into music

    [ Concept ]
    • Allows anyone to create music using sounds from daily life
    • Communication that arises from intuitive sound interaction

    The conventional way of experiencing music is usually through existing technologies such as the iPod or the radio. However, this style of experiencing music comes in a fixed, static form, and as a result it leaves us dissatisfied.

    To really enjoy music, we need to find it in the sounds around us. We need to stop being tied to gadgets that provide the music for us, and instead search for music ourselves.

    Ideas like these led me to create this device.

    This creation's main concept is to record sounds from daily life: the concept of ‘collecting sounds in a bottle’. You choose the sounds collected in the bottle. Using everyday sounds as musical components establishes a new understanding of the sounds we listen to every day. By collecting your own sampling of sounds, you encounter a unique piece of music that can be experienced only once.

    This device will bring a smile to anyone, as many will be able to experience the charm of music, leading them to turn music into something they love and adore.

    Created by Jun Fujiwara

    2 years ago



  11. Fragmentarium is an open source, cross-platform IDE for exploring pixel-based graphics on the GPU. It is inspired by Adobe's Pixel Bender, but uses GLSL, and is created specifically with fractals and generative systems in mind.
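A fragment shader in a tool like this runs the same small program once for every pixel. The classic generative example is Mandelbrot escape-time iteration, sketched here in numpy rather than GLSL to show the per-pixel computation (resolution and iteration count are arbitrary):

```python
import numpy as np

def mandelbrot_escape(width, height, max_iter=50):
    """Per-pixel escape-time counts: the computation a fragment
    shader would perform once per fragment over z -> z^2 + c."""
    xs = np.linspace(-2.0, 1.0, width)
    ys = np.linspace(-1.5, 1.5, height)
    c = xs[None, :] + 1j * ys[:, None]       # one complex c per pixel
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for _ in range(max_iter):
        mask = np.abs(z) <= 2.0              # pixels still iterating
        z[mask] = z[mask] ** 2 + c[mask]
        counts[mask] += 1
    return counts

img = mandelbrot_escape(60, 40)              # escape counts, ready to colormap
```

In GLSL the outer pixel loops disappear: the GPU runs the inner iteration in parallel for every fragment, which is what makes exploring such systems interactive.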

    2 years ago
  12. How NASA, ESA and MIT joined forces with a Dutch artist to create a bizarre work of art using the International Space Station, the James Webb Telescope and the Universe itself.

    2 years ago
  13. 2 years ago
  14. This interactive map visualises the estimated concentration of floating plastic debris in the world's oceans. The densities are computed with a numerical model calibrated against a series of field data collected from the five main oceans and the Mediterranean Sea.

    It also shows the various expeditions of the sailing vessels participating in the data collection effort from 2007 to 2013, and allows the exploration of all plastic concentrations measured using surface net tows and visual sightings.

  15. We describe a novel algorithm for extracting a resolution-independent vector representation from pixel art images, which enables magnifying the results by an arbitrary amount without image degradation. Our algorithm resolves pixel-scale features in the input and converts them into regions with smoothly varying shading that are crisply separated by piecewise-smooth contour curves. In the original image, pixels are represented on a square pixel lattice, where diagonal neighbors are only connected through a single point. This causes thin features to become visually disconnected under magnification by conventional means, and it causes connectedness and separation of diagonal neighbors to be ambiguous. The key to our algorithm is in resolving these ambiguities. This enables us to reshape the pixel cells so that neighboring pixels belonging to the same feature are connected through edges, thereby preserving the feature connectivity under magnification. We reduce pixel aliasing artifacts and improve smoothness by fitting spline curves to contours in the image and optimizing their control points.
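The diagonal ambiguity the abstract describes shows up wherever a 2×2 block pairs same-colored pixels across both diagonals: both cannot stay connected. A simplified version of one of the paper's disambiguation heuristics, the sparse-pixel rule (keep the diagonal whose color is rarer in the image, since it is more likely the foreground feature), sketched in Python; the full algorithm combines several such heuristics by voting:

```python
import numpy as np

def resolve_diagonal(img, r, c):
    """For the 2x2 block at (r, c), decide which diagonal stays connected
    when both diagonals pair same-colored pixels. Returns 'main' for the
    top-left-to-bottom-right diagonal, 'anti' for the other, or None if
    the block has no crossing to resolve."""
    a, b = img[r, c], img[r, c + 1]
    d, e = img[r + 1, c], img[r + 1, c + 1]
    if not (a == e and b == d and a != b):
        return None                           # no crossing: nothing to resolve
    # Sparse-pixel heuristic: the rarer color keeps its connection.
    count_main = np.count_nonzero(img == a)   # color on the main diagonal
    count_anti = np.count_nonzero(img == b)   # color on the anti diagonal
    return "main" if count_main < count_anti else "anti"

# A mostly-0 image with a thin anti-diagonal line of 1s: the 1s are sparse,
# so their connections should survive magnification.
img = np.zeros((4, 4), dtype=int)
img[0, 1] = img[1, 2] = img[2, 3] = 1
```

Once every crossing is resolved, the pixel cells can be reshaped so connected neighbors share an edge, which is what lets the spline fitting preserve thin features.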


    2 years ago
December 2014 - November 2015
