Our wiki is a comprehensive encyclopedia of online and offline aesthetics! We are a community dedicated to the identification, observation, and documentation of visual schemata.
What is an aesthetic? Why does everyone always argue about what aesthetics should be on this wiki?
The short answer: A collection of visual schemata that creates a "mood."
Some types of aesthetics include:
Aesthetics that originated in Internet communities (Ex: Cottagecore, Dark Academia)
National cultures (Ex: Americana, Traditional Polish) Note: Most articles that try to describe a national culture will be deleted; such articles are held to a higher quality standard because they risk stereotyping a nation.
Genres of fiction with established visual tropes (Ex: Cyberpunk, Gothic)
Holidays with iconic imagery and colors (Ex: Christmas, Halloween)
Locations that have expected activities, components, and types of people (Ex: Fanfare, Urbancore)
Music genres with consistent visual motifs present in cover art, music videos, etc (Ex: City Pop, Emo)
This does not mean all music genres should be present. For example, Pop and Alternative bands do not have shared visual traits.
Periods of history with distinct visuals (Ex: Victorian, Y2K)
Stereotypes (Ex: Brocore, VSCO)
Subcultures that share music genres and fashion styles (Ex: Raver, Skinheads)
The long answer:
The word "aesthetic" originated as the philosophical discussion about what beauty is, how we should approach it, and why it exists. However, Millennials and Generation Z started using that term as an adjective that describes what they personally consider beautiful. For example: "After Denise finished watching The Virgin Suicides, she said, 'Wow. That was so aesthetic.'"
"Aesthetic" has now come to mean a collection of images, colors, objects, music, and writings that creates a specific emotion, purpose, and community. It is largely dependent on personal taste, cultural background, and exposure to different pieces of media. This definition is not official and can be debated; no dictionary definition yet captures the complexity of this phenomenon, which arose among Internet youth. Rather, people who participate in the community "know it when they see it." Whether particular aesthetics exist or are valid is constantly debated, especially since everyone's personal life factors into their opinions.
Here is an example of a debate going on within the community. Whether or not Lolita is an aesthetic depends on what counts as a visual element. On one hand, lace, petticoats, and bows are valid elements of visual schema; those elements combine to spark feelings of kawaii, de-sexualization, rebellion, and appreciation of the antique. On the other hand, aesthetics are made up of elements other than fashion, such as home decor or music. On this view, the fashion itself is the visual element, rather than the components making up the coord/outfit, and that element is part of broader schemas such as Goth and Victorian. What counts as an element and what qualifies as sparking an emotion is a complicated subject.
So right now, the community is still trying to define the subject. Whether something fits into a larger schema or is distinct enough to warrant its own aesthetic is difficult to say and depends on who you ask.
CLIP retrieval works by converting the text query to a CLIP embedding, then using that embedding to query a KNN index of CLIP image embeddings.
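The retrieval step boils down to a nearest-neighbour search over precomputed image embeddings. A minimal pure-Python sketch, assuming the embeddings are already computed (a real system would embed with a CLIP model and query an index library such as FAISS instead of brute force; the file names and 3-dimensional vectors below are made up for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def knn_query(text_embedding, image_embeddings, k=2):
    """Return ids of the k images whose embeddings are closest to the
    text embedding (a brute-force stand-in for a KNN index)."""
    scored = [(cosine_similarity(text_embedding, emb), image_id)
              for image_id, emb in image_embeddings.items()]
    scored.sort(reverse=True)  # highest similarity first
    return [image_id for _, image_id in scored[:k]]

# Hypothetical 3-d embeddings; real CLIP vectors have hundreds of dims.
images = {"cat.jpg": [0.9, 0.1, 0.0],
          "dog.jpg": [0.8, 0.3, 0.1],
          "car.jpg": [0.0, 0.2, 0.9]}
query = [1.0, 0.0, 0.0]          # embedding of a text query, e.g. "a cat"
print(knn_query(query, images))  # most similar image ids first
```

The same shape scales up directly: swap the dict for an approximate index and the toy vectors for model outputs.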
The road to wisdom?
-- Well, it's plain
and simple to express:
Err
and err
and err again
but less
and less
and less.
Hence the name LessWrong. We might never attain perfect understanding of the world, but we can at least strive to become less and less wrong each day.
We are a community dedicated to improving our reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. More generally, we work to develop and practice the art of human rationality.[1]
To that end, LessWrong is a place to 1) develop and train rationality, and 2) apply one’s rationality to real-world problems.
“Although our study doesn’t present ways to mitigate negative hunger-induced emotions, research suggests that being able to label an emotion can help people to regulate it, such as by recognising that we feel angry simply because we are hungry. Therefore, greater awareness of being ‘hangry’ could reduce the likelihood that hunger results in negative emotions and behaviours in individuals.”
This app allows you to simulate how any origami crease pattern will fold. It may look a little different from what you typically think of as "origami" - rather than folding paper in a set of sequential steps, this simulation attempts to fold every crease simultaneously. It does this by iteratively solving for small displacements in the geometry of an initially flat sheet due to forces exerted by creases. You can read more about it in our paper:
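The iterative idea can be shown with a toy one-dimensional version: a single fold angle nudged each step by a force proportional to its distance from the target angle. This is only a sketch of the relaxation principle; the actual solver works on 3D vertex displacements of a triangulated sheet, and the function name and constants here are invented:

```python
def relax_crease(target_angle, steps=200, stiffness=0.1):
    """Iteratively move a single fold angle toward its target angle,
    taking a small displacement each iteration (toy 1D relaxation)."""
    angle = 0.0  # the sheet starts out flat
    for _ in range(steps):
        force = stiffness * (target_angle - angle)  # crease 'spring' force
        angle += force                              # small displacement step
    return angle

print(relax_crease(180.0))  # converges toward the target fold angle
```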
Fast, Interactive Origami Simulation using GPU Computation by Amanda Ghassaei, Erik Demaine, and Neil Gershenfeld (7OSME)
All simulation methods were written from scratch and are executed in parallel in several GPU fragment shaders for fast performance. The solver extends work from the following sources:
Origami Folding: A Structural Engineering Approach by Mark Schenk and Simon D. Guest
Freeform Variations of Origami by Tomohiro Tachi
This app also uses the methods described in Simple Simulation of Curved Folds Based on Ruling-aware Triangulation to import curved crease patterns and pre-process them in a way that realistically simulates the bending between the creases.
Originally built by Amanda Ghassaei as a final project for Geometric Folding Algorithms. Other contributors include Sasaki Kosuke, Erik Demaine, and others. Code available on Github. If you have interesting crease patterns that would make good demo files, please send them to me (Amanda) so I can add them to the Examples menu.
https://cuttle.xyz/@forresto/Origami-simulator-tips-W4lDXuB5m0xh
https://nitter.42l.fr/kellianderson/status/1454871569981902848
Bitmap Image to 'Pixel Perfect' Vector Graphic or 3D model
The HTML5 application on this page converts your bitmap image into a Scalable Vector Graphic or 3D model, right in the browser.
The result is 'pixel perfect'/lossless.
Hydra is a platform for live coding visuals, in which each connected browser window can be used as a node of a modular and distributed video synthesizer.
Built using WebRTC (peer-to-peer web streaming) and WebGL, hydra allows each connected browser/device/person to output a video signal or stream, and receive and modify streams from other browsers/devices/people. The API is inspired by analog modular synthesis, in which multiple visual sources (oscillators, cameras, application windows, other connected windows) can be transformed, modulated, and composited via combining sequences of functions.
Features:
Written in javascript and compatible with other javascript libraries
Available as a platform as well as a set of standalone modules
Cross-platform and requires no installation (runs in the browser)
Also available as a package for live coding from within the Atom text editor
Experimental and forever evolving!!
Conceptual artist exploring our use of technology and its ethical and aesthetic implications.
_
[EN] Filipe Vilas-Boas was born in Portugal in 1981. He is a self-taught conceptual artist currently living and working in Paris. Without being a naive tech utopist or a reluctant technophobe, he explores our use of technology and its ethical and aesthetic implications. His installations, performances and conceptual artworks question the global digitalization of our societies, mostly by merging our physical (IRL) and digital (URL) worlds.
His works were highlighted in the Portuguese Emerging Art Books, 2018 & 2019 Editions, and have been shown internationally, notably at Nuit Blanche Paris, UNESCO, Biennale Siana, Le Cube, the French Ministry of Culture, Biennale Némo - Le 104 (FR), Athens Digital Art Festival, Monitor - Heraklion Contemporary Arts Festival (GR), Zaratan, MAAT Museum (PT) and the Tate Modern (UK).
20 alternative interfaces for creating and editing images and text
https://github.com/constraint-systems
Flow
An experimental image editor that lets you set and direct pixel-flows.
Fracture
Shatter and recombine images using a grid of viewports.
Tri
Tri is an experimental image distorter. You can choose an image to render using a WebGL quad, adjust the texture and position coordinates to create different distortions, and save the result.
Tile
Layout images using a tiling tree layout. Move, split, and resize images using keyboard controls.
Sift
Slice an image into multiple layers. You can offset the slices to create interference patterns and pseudo-3D effects.
Automadraw
Draw and evolve your drawing using cellular automata on a pixel grid with two keyboard-controlled cursors.
Span
Lay out and rearrange text, line by line, using keyboard controls.
Stamp
Image-paint from a source image palette using keyboard controls.
Collapse
Collapse an image into itself using ranked superpixels.
Res
Selectively pixelate an image using a compression algorithm.
Rgb
Pixel-paint using keyboard controls.
Face
Edit both the text and the font it is rendered in.
Pal
Apply an eight-color terminal color scheme to an image. Use the keyboard controls to choose a theme, set thresholds, and cycle hues.
Bix
Draw on binary to glitch text.
Diptych
Pixel-reflow an image to match the dimensions of your text. Save the result as a diptych.
Slide
Divide and slide-stretch an image using keyboard controls.
Freeconfig
Push around image pixels in blocks.
Moire
Generate angular skyscapes using Asteroids-like ship controls.
Hex
A keyboard-driven, grid-based drawing tool.
Etch
A keyboard-based pixel drawing tool.
About
Constraint Systems is a collection of experimental web-based creative tools. They are an ongoing attempt to explore alternative ways of interacting with pixels and text on a computer screen. I hope to someday build these ideas into something larger, but the plan for now is to keep the scopes small and the releases quick.
New Art City’s mission is to develop an accessible toolkit for building virtual installations that show born-digital artifacts alongside digitized works of traditional media.
Our curation and product design prioritize those who are disadvantaged by structural injustice. An inclusive and redistributive community is as important to our project as the toolkit itself.
Web scraping describes techniques for automatically downloading and processing web content, or converting online text and other media into structured data that can then be used for various purposes. In short, the user writes a program to browse and analyze the web on their behalf, rather than doing so manually. This is a common practice in Silicon Valley, where open HTML pages are transformed into private property: Facebook began as a (horny) web scraping project, as did Google and all other search engines. Web scraping is also frequently used to acquire the massive datasets needed to train machine learning models, and has become an important research tool in fields such as journalism and sociology.
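The basic move — a program parsing a page into structured data — can be sketched with the standard-library HTML parser. Here the page is a hardcoded string for illustration; a real scraper would download it first (e.g. with `urllib.request`), and the URLs are hypothetical:

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect (href, text) pairs from anchor tags -- turning a page
    of markup into structured data."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

# Hardcoded page for illustration; a real scraper would fetch it, e.g.
# urllib.request.urlopen(url).read().decode()
page = '<p><a href="/about">About</a> and <a href="/power">Power</a></p>'
scraper = LinkScraper()
scraper.feed(page)
print(scraper.links)  # [('/about', 'About'), ('/power', 'Power')]
```

From here, collecting, filtering, and sorting is ordinary data wrangling over `scraper.links`.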
I define "scrapism" as the practice of web scraping for artistic, emotional, and critical ends. It combines aspects of data journalism, conceptual art, and hoarding, and offers a methodology to make sense of a world in which everything we do is mediated by internet companies. These companies surveill us, vacuum up every trace we leave behind, exploit our experiences and interject themselves into every possible moment. But in turn they also leave their own traces online, traces which when collected, filtered, and sorted can reveal (and possibly even alter) power relations. The premise of scrapism is that everything we need to know about power is online, hiding in plain sight.
This is a work-in-progress guide to web scraping as an artistic and critical practice, created by Sam Lavigne. I will be updating it over the coming months! I'll also be doing occasional live demos either on Twitch or YouTube.
Orca is an esoteric programming language designed by @hundredrabbits to create procedural sequencers.
This playground lets you use Orca and its companion app Pilot directly in the browser and allows you to publish your creations by sharing their URL.
Originally captured as the medium for Ed Ruscha’s creative work, the more than 65,000 photographs selected from this archive present a unique view of one of Los Angeles’ quintessential streets, Sunset Boulevard, and how it has changed over the past 50 years. Ed Ruscha, with help from Getty and Stamen Design, is making this amazing collection accessible to you: explore his images of Sunset and discover your own story of Los Angeles.
Comprehensive overview of existing tools, strategies and thoughts on interacting with your data
TLDR: when I read, I try to read actively, which for me mainly involves using various tools to annotate content: highlighting and leaving notes as I read. I've programmed data providers that parse these annotations and provide a nice interface for interacting with this data from other tools. My automated scripts use them to render the annotations as human-readable, searchable plaintext and to generate TODOs/spaced repetition items.
In this post I'm gonna elaborate on all of that: give some motivation, a review of these tools (mainly focused on open source, thus extendable, software) and my vision of how they could work in an ideal world. I won't try to convince you that my method of reading and interacting with information is superior for you: it doesn't have to be, and there are people out there more eloquent than me who do that. I assume you want this too and are wondering about the practical details.
This is an interactive editor for making face filters with WebGL.
The language below is called GLSL, you can edit it to change the effect.
a project to excavate shut down, abandoned web ruins and restore them to surfable, accessible, searchable, remixable condition
somewhere between a library and a living museum, we're working on experimental new ways to close the gap between archival and visibility of the web that was lost
launched
geocities
myspace music
on deck
aol hometown
netscape web sites
geocities japan
FortuneCity
tba
This is a directory of 249 links in 73 categories.
This directory is somewhat inspired by the old, failed link collections like the original Yahoo! and DMOZ. They were terrible—you couldn’t find anything, but what you did find was often unexpected. My ‘archivist’/‘forager’ tendencies want to do this.
Linking has kind of died in the wild. Google views a site like this as a link farm—so, directories have died off. Yeah, well, I find many of the ‘link farms’ in my Web/Directory list to be immensely ‘great’ and ‘satisfying’. More than anything, I hope mine intrigues you to build your own. This directory forms my connection to the rest of society.
I reserve the right to link to dipshits and crazies. I link to what piques my curiosity, what amazes me or what horrifies me. This includes you. (You know you want to participate.)
You might also look at it like: maybe I’ve friended these links. But instead of putting them in a big number that represents my friends—my 249 friends, you see—I list my friends out neatly and try to coax you to meet them.
Perhaps there is no need for friending. For likes. For upvotes. For hashtags. For boosts. For trending. For rank. For followers. For an algorithm.
Perhaps plain ole linking—and spending time telling you why I linked—is good enough, was always good enough. Perhaps it’s superior!
No Home Like Place Airbnb is a global hotel filled with the same recurring items. Bed, chair, potted plant, all catered to our cosmopolitan sensibilities. We end up in a place that's completely interchangeable; a room is a room is a room. An algorithm finds these recurring items and replaces them with the same items from other listings.
Audio stream : http://icecast.spc.org:8000/longplayer
Longplayer is a one thousand year long musical composition. It began playing at midnight on the 31st of December 1999, and will continue to play without repetition until the last moment of 2999, at which point it will complete its cycle and begin again. Conceived and composed by Jem Finer, it was originally produced as an Artangel commission, and is now in the care of the Longplayer Trust.
How does Longplayer work?
Early calculations made while trying to establish the correct increments. At the bottom is an estimation of the playing positions on the 7th of January 2000 based on these values.
The composition of Longplayer results from the application of simple and precise rules to six short pieces of music. Six sections from these pieces – one from each – are playing simultaneously at all times. Longplayer chooses and combines these sections in such a way that no combination is repeated until exactly one thousand years has passed. At this point the composition arrives back at the point at which it first started. In effect Longplayer is an infinite piece of music repeating every thousand years – a millennial loop.
The six short pieces of music are transpositions of a 20’20” score for Tibetan Singing Bowls, the ‘source music’.[1] These transpositions vary from the original not only in pitch but also, proportionally, in duration.[2]
Every two minutes a starting point in each of the six pieces is calculated, from which they then play for the next two minutes. Each starting point is calculated by adding a specific length of time to its previous starting point.[3] For each of the six pieces of music this length of time is unique and unvarying. The relationships between these six precisely calculated increments are what gives Longplayer its exact one thousand year long duration.
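The scheme just described can be sketched directly: every two-minute period, each piece's start point advances by its own fixed increment, wrapping around that piece's duration. The durations and increments below are made-up numbers for illustration, not Longplayer's actual values (which are chosen so that no combination repeats within a thousand years):

```python
def start_points(lengths, increments, elapsed_periods):
    """Start point of each piece after some number of two-minute periods.
    Each piece advances by its own fixed increment, modulo its own
    transposed duration."""
    return [(inc * elapsed_periods) % length
            for length, inc in zip(lengths, increments)]

# Hypothetical durations (seconds) and per-period increments (seconds).
lengths    = [1220.0, 1400.0, 1600.0]
increments = [7.3, 11.9, 0.002]

print(start_points(lengths, increments, 1))   # after two minutes
print(start_points(lengths, increments, 10))  # after twenty minutes
```

Because the increments never change, the whole composition's state at any moment is a pure function of elapsed time — which is how Longplayer can be restarted or relocated without losing its place.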
Rates of Change
In the diagram below, the six simultaneous transpositions are represented by six circles, whose circumferences represent the lengths of the transposed source music. The solid rectangles represent the two-minute sections presently playing. The unique increments by which these six sections advance determine their respective rates of change. These reflect different flows of time, from a glacial crawl to the almost perceptible sweep of an hour hand. The incremental advance of the third circle is so small that it will take the full thousand years for it to pass once through the source music. Conversely, the increment for the second circle is such that it makes its way through the music every 3.7 days. The diagram updates every 2 minutes.
https://eclipticalis.com/
http://teropa.info/loop
https://daily.bandcamp.com/lists/generative-music-guide
https://github.com/npisanti/ofxPDSP
Today, you are an Astronaut. You are floating in inner space 100 miles above the surface of Earth. You peer through your window and this is what you see. You are people watching. These are fleeting moments.
These videos come from YouTube. They were uploaded in the last week and have titles like DSC 1234 and IMG 4321. They have almost zero previous views. They are unnamed, unedited, and unseen (by anyone but you).
Astronaut starts when you press GO. The video switches periodically. Click the button below the video to prevent the video from switching.
Astronaut was created by Andrew Wong and James Thompson on a sunny day in San Francisco in 2011.
Beautiful footage of our earth is provided by the Earth Science and Remote Sensing Unit, NASA Johnson Space Center.
Soundtrack provided by Claude Debussy's Clair de Lune performed by Caela Harrison (cc).
Try pressing spacebar.
Declassifier
Custom Software, COCO Dataset (corrected). 2 days 5 hours 25 min. 2019
Declassifier processes pictures using the YOLO computer vision algorithm. Instead of showing the program's prediction, the picture is overlaid with images from COCO, the training dataset from which the algorithm learned in the first place.
The data by which machine learning algorithms learn to make predictions is hardly ever shown, let alone credited. By doing both, Declassifier exposes the myth of magically intelligent machines, instead applauding the photographers who made the technical achievement possible. In fact, when showing the actual training pictures, credit is not only due but mandatory.
The tl;dr
We should all be automating our image compression.
Image optimization should be automated. It’s easy to forget, best practices change, and content that doesn’t go through a build pipeline can easily slip. To automate: Use imagemin or libvips for your build process. Many alternatives exist.
Most CDNs (e.g. Akamai) and third-party solutions like Cloudinary, imgix, Fastly’s Image Optimizer, Instart Logic’s SmartVision or ImageOptim API offer comprehensive automated image optimization solutions.
The amount of time you’ll spend reading blog posts and tweaking your configuration is greater than the monthly fee for a service (Cloudinary has a free tier). If you don’t want to outsource this work for cost or latency concerns, the open-source options above are solid. Projects like Imageflow or Thumbor enable self-hosted alternatives.
ArchiveBox takes a list of website URLs you want to archive, and creates a local, static, browsable HTML clone of the content from those websites (it saves HTML, JS, media files, PDFs, images and more).
You can use it to preserve access to websites you care about by storing them locally offline. ArchiveBox imports lists of URLs, renders the pages in a headless, authenticated, user-scriptable browser, and then saves an archive of the content in multiple redundant common formats (HTML, PDF, PNG, WARC) that will last long after the originals disappear off the internet. It automatically extracts assets and media from pages and saves them in easily-accessible folders, with out-of-the-box support for git repositories, audio, video, subtitles, images, PDFs, and more.
Created by Satoshi HORII at Rhizomatiks, (centiscript) is a JavaScript-based creative code environment for creating experimental graphics. Imagined as an endless exploration from one script to another, Satoshi sees (centiscript) as a tool for visual thinking. Each experiment can be shared online since it relies on JavaScript + HTML + Canvas.
Riot and Shredder source.
Net.art projects from back in the day, and recent migrations.
A project put together by Tero Parviainen: a web-based version of a musical tool originally by Laurie Spiegel for creating music by moving your mouse. It also has MIDI support.
MyBrother.tv is an artwork that uses the APIs of YouTube and Wikipedia.
The intent of mybrother.tv is to provide a user-generated content channel drawn from YouTube and Wikipedia. It uses a word to trigger the language and the flow of the clips. The engines are composed with four different intents.
EngineEntertainment: composes a floating stream-channel around the word.
EngineWikText: a fast clip producer working from the word.
EngineMultikulti: mixes languages to get cultural spread around the word.
EngineAssoN: uses hebs-roule on the word.
+: entropic
-: epistemologic
hot tip: use an ad blocker
actions
link to the actual video on youtube.com
fullscreen
show the clip within a playlist
restart mybrother.tv
This is my tribute to Pablo Picasso’s most famous artwork, Guernica (1937). My main reason for making it is to echo Picasso’s antiwar message, which I strongly believe is needed now more than ever. On the back side of this artwork I added a few other Picasso artworks to advocate peace, however washed out and fragmented they are. The ox, the “sleeping” soldier, and Pegasus are from one of his early Guernica sketches. The others, most notably his Bouquet of Peace (1958), are sampled from his later works with a peace theme. The only three animated elements are the flower, the lamp, and the light bulb. To me the flower symbolizes life, the lamp represents hope, and the light bulb embodies technological destruction. As long as life continues and hope lasts, humanity will go on.
Open Stage Control is a libre, bi-directional OSC control surface application for the desktop. It's built with HTML, JavaScript & CSS on top of the Electron framework
Download here : https://github.com/jean-emmanuel/open-stage-control/releases
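On the wire, an OSC message is just a null-padded address string, a type-tag string, and big-endian arguments. A minimal encoder for a single-float message following the OSC 1.0 spec — only a sketch of the format (real projects would use a library such as python-osc, and the `/fader1` address is a hypothetical control name):

```python
import struct

def osc_string(s):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    data = s.encode("ascii") + b"\x00"
    pad = (4 - len(data) % 4) % 4
    return data + b"\x00" * pad

def osc_message(address, value):
    """Encode an OSC message carrying one 32-bit big-endian float."""
    return osc_string(address) + osc_string(",f") + struct.pack(">f", value)

msg = osc_message("/fader1", 0.5)
# This datagram could be sent to an OSC app over UDP, e.g.:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, (host, port))
print(len(msg), msg[:8])
```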
Satellite Collections
digital prints
2009-2011
You can see from pole to pole and across oceans and continents and you can watch it turn and there's no strings holding it up, and it's moving in a blackness that is almost beyond conception.
-Eugene Cernan, an astronaut on the Apollo 17, on seeing the Earth from space
In all of these prints, I collect things that I've cut out from Google Satellite View-- parking lots, silos, landfills, waste ponds. The view from a satellite is not a human one, nor is it one we were ever really meant to see. But it is precisely from this inhuman point of view that we are able to read our own humanity, in all of its tiny, repetitive marks upon the face of the earth. From this view, the lines that make up basketball courts and the scattered blue rectangles of swimming pools become like hieroglyphs that say: people were here.
The alienation provided by the satellite perspective reveals the things we take for granted to be strange, even absurd. Banal structures and locations can appear fantastical and newly intricate. Directing curiosity toward our own inimitably human landscape, we may find that those things that are most recognizably human (a tangle of carefully engineered water slides, for example) are also the most bizarre, the most unlikely, the most fragile.
The Library of Babel is a place for scholars to do research, for artists and writers to seek inspiration, for anyone with curiosity or a sense of humor to reflect on the weirdness of existence - in short, it’s just like any other library. If completed, it would contain every possible combination of 1,312,000 characters, including lower case letters, space, comma, and period. Thus, it would contain every book that ever has been written, and every book that ever could be - including every play, every song, every scientific paper, every legal decision, every constitution, every piece of scripture, and so on. At present it contains all possible pages of 3200 characters, about 10^4677 books.
Since I imagine the question will present itself in some visitors’ minds (a certain amount of distrust of the virtual is inevitable) I’ll head off any doubts: any text you find in any location of the library will be in the same place in perpetuity. We do not simply generate and store books as they are requested - in fact, the storage demands would make that impossible. Every possible permutation of letters is accessible at this very moment in one of the library's books, only awaiting its discovery. We encourage those who find strange concatenations among the variations of letters to write about their discoveries in the forum, so future generations may benefit from their research.
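The "computed, not stored" idea can be illustrated with an invertible mapping between page numbers and page text: write the number in base 29 over the library's alphabet. This is only a sketch of the principle, not the site's actual algorithm, and it uses 8-character pages instead of the real 3200 to keep the demo short:

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."  # 29 characters, as in the library

def number_to_page(n, length=8):
    """Render page number n as text: its base-29 digits, one character
    per digit, least significant first."""
    chars = []
    for _ in range(length):
        n, digit = divmod(n, 29)
        chars.append(ALPHABET[digit])
    return "".join(chars)

def page_to_number(page):
    """Invert the mapping: recover the page number from its text."""
    n = 0
    for ch in reversed(page):
        n = n * 29 + ALPHABET.index(ch)
    return n

n = 123456789
page = number_to_page(n)
assert page_to_number(page) == n  # every page sits at exactly one location
print(page)
```

Because the mapping is a bijection, nothing needs to be stored: any page's text is recomputed on demand from its location, and the same location always yields the same text.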
Echo Nest Remix is the Internet Synthesizer. Make amazing things from music, automatically.
Turn any music or video into Python or JavaScript code.
Echo Nest Remix lets you remix, re-edit, and reimagine any piece of music and video, automatically and algorithmically.
Remix has done the following: played a song forever, walkenized and cowbellized hundreds of thousands of songs in a week, reversed basically everything, beat matched two songs, split apart DJ mixes by their individual tracks, made new kinds of video mashups, corrected sloppy drumming, synced video to a song, transitioned between multiple covers of the same song, made a cat play piano, and taught dogs to play dubstep. Check out all the examples here.
Remix is available as an open source SDK for you to use, for Mac, Linux, and Windows:
Install for Python: sudo pip install remix. Full installation details, packages for Mac and Windows, and complete Python documentation are here.
Try JavaScript: Test out remix.js here.
Download JavaScript: remix.js. Full JavaScript install details and documentation are here.
Why are some ideas, processes and products (or memes) popular, and others not? What is the unit of culture? For that matter: what is 'Culture'? This short book synthesizes the Systems Model of Creativity (Csikszentmihalyi 1988, 2014) and Evolutionary Epistemology (Campbell 1974) to explain why some things are popular, and defines and describes the structure of the meme, the unit of culture (Dawkins 1976).
Spacebrew is an open, dynamically re-routable software toolkit for choreographing interactive spaces. Or, in other words, a simple way to connect interactive things to one another. Every element you hook up to the system is identified as either a subscriber (reading data in) or a publisher (pushing data out). Data is in one of three standardized formats: a boolean (true/false), a number range (0-1023) or a string (text); it can also be sent as a custom format you specify. Once these elements are set up, you can use a web based visual switchboard to connect or disconnect publishers and subscribers to each other.
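The publisher/subscriber routing can be sketched as a tiny switchboard. This is a Python illustration of the concept only, not Spacebrew's actual websocket protocol, and the route names are invented:

```python
class Switchboard:
    """Minimal stand-in for a re-routable switchboard: publishers push
    values, routes deliver them to subscriber callbacks."""
    def __init__(self):
        self.routes = {}  # publisher name -> list of subscriber callbacks

    def connect(self, publisher, subscriber):
        self.routes.setdefault(publisher, []).append(subscriber)

    def publish(self, publisher, value):
        # Spacebrew standardizes payloads as boolean, range (0-1023),
        # or string; here any value is passed through.
        for subscriber in self.routes.get(publisher, []):
            subscriber(value)

board = Switchboard()
received = []
board.connect("light_sensor", received.append)  # route: sensor -> log
board.publish("light_sensor", 512)              # a 'range' value, delivered
board.publish("button", True)                   # no route yet, goes nowhere
print(received)  # [512]
```

Connecting and disconnecting routes at runtime — Spacebrew's web switchboard — amounts to editing the `routes` table while publishers keep publishing.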
Our memory is dissipating. Hard drives only last five years, a webpage is forever changing and there’s no machine left that reads 15-year-old floppy disks. Digital data is vulnerable. Yet entire libraries are shredded and lost to budget cuts, because we assume everything can be found online. But is that really true? For the first time in history, we have the technological means to save our entire past, yet it seems to be going up in smoke. Will we suffer from collective amnesia?
This VPRO Backlight documentary tracks down the amnesiac zeitgeist starting at the Royal Tropical Institute in Amsterdam, whose world-famous 250-year-old library was lost to budget cuts. The 400,000 books were saved from the shredder by Ismail Serageldin, director of the world-famous Library of Alexandria, who is turning the legendary library of classical antiquity into a new knowledge hub for the digital world. Images as well as texts risk being lost in this ‘Digital Dark Age’. In an old McDonald’s restaurant in Mountain View, CA, retired NASA engineer Dennis Wingo is trying to retrieve the very first images of the moon. In upstate New York, Jason Scott has founded The Archive Team, a network of young activists that saves websites at risk of disappearing forever. In San Francisco, we visit Brewster Kahle’s Internet Archive, which is going against the trend of destroying archives, and the Long Now Foundation, which has put the long term back on the agenda by building a clock that ticks only once a year and should last 10,000 years, in an attempt to reconnect with generations thousands of years from now.
Directed by Bregtje van der Haak / produced by VPRO Backlight, The Netherlands. You can watch the Dutch episode here: http://tegenlicht.vpro.nl/afleveringe... For broadcast rights: www.nposales.com / info@nposales.com.
"The average life of a web page is about 100 days before it's either changed or deleted," says Kahle. "Even if it's supported by big companies: Google Video came down, Yahoo Video came down, Apple went and wiped out all the pages in Mobile Me." Capturing this transient web was Kahle's original mission for the Internet Archive when he founded it in 1996. Nearly two decades later, the 53-year-old compares his organization to a "Library of Alexandria, version two."
That may be an understatement. In addition to hosting the Wayback Machine, an ever-growing collection of more than 400 billion copies of web pages, the Internet Archive has also expanded its services by providing millions of free digitized books, TV shows, movies, songs, documents, and software titles. Want to see what MotherJones.com looked like in 1996? Here you go. Are you a Deadhead in search of rare recordings? There are more than 9,000 to choose from. Remember when federal websites were closed for business during the government shutdown? They were still available thanks to the Internet Archive.
http://www.monegraph.com/
What had become clear was that, for any given digital work, it only takes two steps to ensure its originality. First, a public claim to ownership or creation of that work has to be asserted. And second, that claim and a representation of the work itself have to be captured in the block chain, so there is a public record in the ledger and a way to record any transfers of that title in the future. Thus, there are just three key parts to verifying digital art with monegraph (here with examples of each):
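The two steps can be sketched with a content hash and an append-only ledger. This is a toy illustration of the idea only, not monegraph's actual mechanism, and the names and sample bytes are invented:

```python
import hashlib

ledger = []  # stand-in for the public block chain ledger

def claim(owner, work_bytes):
    """Assert a claim and capture (owner, work) in the ledger.
    The work is represented by its SHA-256 digest."""
    digest = hashlib.sha256(work_bytes).hexdigest()
    ledger.append({"owner": owner, "work": digest})
    return digest

def verify(owner, work_bytes):
    """Check whether the ledger records this owner's claim to this work."""
    digest = hashlib.sha256(work_bytes).hexdigest()
    return any(e["owner"] == owner and e["work"] == digest for e in ledger)

art = b"<svg>...</svg>"       # any digital work; a hypothetical file here
claim("alice", art)
print(verify("alice", art))   # the claim is on the ledger
print(verify("bob", art))     # no such claim recorded
```

Recording transfers of title would just mean appending further ledger entries that reference the same digest.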
Thousands of Gangnam Style and Harlem Shake videos on YouTube are the proof: remix is now a mass phenomenon. While the 20th century was still shaped by centralized cultural production, today computers, camera phones and the Internet invite creative, public interaction with cultural goods.
Many of the most successful videos on YouTube and Facebook profit from other users creating their own versions of them, thereby adding to the fame of the original. The range extends from shaky phone videos to elaborate remix versions. Drawing on existing material to create new works is not a new phenomenon. The blogger Malte Welding once illustrated this point with reference to Wolfgang Amadeus Mozart, who reworked Bach fugues and replaced the preludes preceding them with compositions of his own suited to strings: "He remixed Bach. He mashed him up, he plundered the dead notes and created something new."
This bookmarklet lets you touch any web page. It's convenient.
Immersive 3D space bubble. Use Chrome to see it.