A lightweight Rust API for Multiresolution Stochastic Texture Synthesis [1], a non-parametric, example-based algorithm for image generation.
Pixel-art scaling algorithms are graphical filters that are often used in video game emulators to enhance hand-drawn 2D pixel art graphics. The re-scaling of pixel art is a specialist sub-field of image rescaling.
As pixel-art graphics are usually in very low resolutions, they rely on careful placing of individual pixels, often with a limited palette of colors. This results in graphics that rely on a high amount of stylized visual cues to define complex shapes with very little resolution, down to individual pixels. This makes image scaling of pixel art a particularly difficult problem.
A number of specialized algorithms[1] have been developed to handle pixel-art graphics, as the traditional scaling algorithms do not take such perceptual cues into account.
Since a typical application of this technology is improving the appearance of fourth-generation and earlier video games on arcade and console emulators, many are designed to run in real time for sufficiently small input images at 60 frames per second. This places constraints on the type of programming techniques that can be used for this sort of real-time processing. Many work only on specific scale factors: 2× is the most common, with 3×, 4×, 5× and 6× also present.
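To make the neighborhood-comparison idea concrete, here is a minimal Python sketch of one of the simplest such algorithms, Scale2x (also known as EPX / AdvMAME2x). It doubles an image by copying a neighbor into a sub-pixel only when that neighbor agrees with the pixel along an unambiguous diagonal edge; this is an illustrative sketch, not any particular emulator's implementation:

```python
def scale2x(img):
    """Scale2x / EPX: double a pixel-art image without blurring.

    `img` is a list of rows of pixel values; out-of-bounds neighbors
    are clamped to the border pixel.
    """
    h, w = len(img), len(img[0])

    def get(x, y):
        return img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    out = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            e = img[y][x]
            up, left = get(x, y - 1), get(x - 1, y)
            right, down = get(x + 1, y), get(x, y + 1)
            # Copy a neighbor only along an unambiguous diagonal edge;
            # otherwise keep the center pixel (plain 2x nearest-neighbor).
            out[2 * y][2 * x] = left if left == up and up != right and left != down else e
            out[2 * y][2 * x + 1] = right if up == right and up != left and right != down else e
            out[2 * y + 1][2 * x] = left if left == down and left != up and down != right else e
            out[2 * y + 1][2 * x + 1] = right if down == right and left != down and up != right else e
    return out
```

Later algorithms in this family (Eagle, hqx, xBR) elaborate on the same local-neighborhood comparison, which is what keeps them fast enough for real-time emulation.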
Plugin for GIMP : https://github.com/bbbbbr/gimp-plugin-pixel-art-scalers
Waifu2x
https://en.wikipedia.org/wiki/Waifu2x
https://github.com/lltcggie/waifu2x-caffe/releases
https://github.com/imPRAGMA/W2XKit
https://old.reddit.com/r/WaifuUpscales/new/
https://github.com/BlueCocoa/waifu2x-ncnn-vulkan-macos/releases
https://old.reddit.com/r/Dandere2x/
https://old.reddit.com/r/waifu2x
https://github.com/AaronFeng753/Waifu2x-Extension
https://github.com/K4YT3X/video2x
https://old.reddit.com/r/AnimeResearch
Quote from a reddit comment :
A short list, ordered by output quality and setup time:
SRGAN, Super-resolution generative adversarial network : https://github.com/topics/srgan,
Other implementations: https://github.com/tensorlayer/srgan
https://github.com/brade31919/SRGAN-tensorflow
https://github.com/titu1994/Super-Resolution-using-Generative-Adversarial-Networks
Neural Enhance: https://github.com/alexjc/neural-enhance/
Photoshop: The newest PS version (19.x, since the October 2017 release) also has a new upscaling method, called "Preserve Details 2.0 Upscale", but compared to SRGAN the results clearly lack sharp and fine details. You asked for an app, and PS is easy to use and can be automated.
Overview of the most popular algorithms:
https://github.com/IvoryCandy/super-resolution
(VDSR, EDSR, DCRN, SubPixelCNN, SRCNN, FSRCNN, SRGAN)
Not in the list above:
LapSRN: https://github.com/phoenix104104/LapSRN
SelfExSR: https://github.com/jbhuang0604/SelfExSR
RAISR, developed by Google:
https://github.com/MKFMIKU/RAISR
https://github.com/movehand/raisr
Evoboxx is a synthesizer based on the Game of Life, the cellular automaton created by mathematician John Horton Conway in 1970. The Game of Life is a zero-player game: its evolution is determined by its initial state, requiring no further input. One interacts with it by creating an initial configuration and observing how it evolves or, for advanced players, by creating patterns with particular properties.
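The rules themselves fit in a few lines. A minimal sketch of one generation over an unbounded grid, using the standard B3/S23 rules (this illustrates the cellular automaton, not Evoboxx's actual code):

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life (B3/S23).

    `live` is a set of (x, y) coordinates of live cells. A dead cell
    with exactly 3 live neighbors is born; a live cell with 2 or 3
    live neighbors survives; everything else dies or stays dead.
    """
    # Count, for every cell adjacent to a live cell, how many live neighbors it has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" oscillates between a horizontal and a vertical bar of 3 cells:
blinker = {(0, 0), (1, 0), (2, 0)}
```

Because evolution depends only on the initial state, every sound Evoboxx-style patch is fully determined by the starting configuration you draw.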
Mosaic is an open-source, multi-platform (macOS, Linux, Windows) live-coding and visual programming application based on openFrameworks.
This project explores integrating and amplifying human-machine communication, offering a real-time, flowchart-based visual interface for high-level creative coding. Just as live-coding scripting languages offer a high-level coding environment, ofxVisualProgramming, with the Mosaic project as its parent container, aims at a high-level visual-programming environment with multiple embedded scripting languages (Lua, GLSL, Python, and Bash (macOS & Linux)).
As this project is based on openFrameworks, one of its goals is to offer as many objects as possible, using the predefined OF classes for trans-media manipulation (audio, text, image, video, electronics, computer vision), plus the gigantic ofxaddons ecosystem currently available (machine learning, protocols, web, hardware interfaces, and much more).
While these characteristics could yield extremely complex results (the OF and ofxaddons ecosystem is huge, and the availability of multiple scripting languages could easily overwhelm inexperienced users), the interface design aims to avoid that complexity: a direct and natural drag-and-drop connect/disconnect interface (mouse/trackpad) at the most basic level of interaction, text editing (keyboard) at an intermediate level (script editing), and a most advanced level of interaction for experienced users (external device communication, automated interaction, etc.).
Beats is a command-line drum machine. Feed it a song notated in YAML, and it will produce a precision-milled Wave file of impeccable timing and feel.
http://beatsdrummachine.com/tutorial/
http://tropone.de/2019/02/21/ungewoehnliche-wege-rhythmen-zu-programmieren-teil-2-beats-cl/
Each letter of the alphabet is an operation: lowercase letters operate on bang, uppercase letters operate each frame. Orca is designed to control other applications, create procedural sequencers, and to experiment with livecoding. See the documentation and installation instructions (links below), or have a look at the tutorial video.
Operators:
A add: Outputs the sum of inputs.
B bool: Bangs if input is not empty, or 0.
C clock: Outputs a constant value based on the runtime frame.
D delay: Bangs on a fraction of the runtime frame.
E east: Moves eastward, or bangs.
F if: Bangs if both inputs are equal.
G generator: Writes distant operators with offset.
H halt: Stops southward operators from operating.
I increment: Increments southward operator.
J jumper: Outputs the northward operator.
K konkat: Outputs multiple variables.
L loop: Loops a number of eastward operators.
M modulo: Outputs the modulo of input.
N north: Moves northward, or bangs.
O offset: Reads a distant operator with offset.
P push: Writes an eastward operator with offset.
Q query: Reads distant operators with offset.
R random: Outputs a random value.
S south: Moves southward, or bangs.
T track: Reads an eastward operator with offset.
U uturn: Reverses movement of inputs.
V variable: Reads and writes globally available variables.
W west: Moves westward, or bangs.
X teleport: Writes a distant operator with offset.
Y jymper: Outputs the westward operator.
Z zoom: Moves eastward, respawns west on collision.
* bang: Bangs neighboring operators.
# comment: Comments a line, or characters until the next hash.
: midi: Sends a MIDI note.
^ cc: Sends a MIDI CC value.
; udp: Sends a UDP message.
= osc: Sends an OSC message.
Keyboard shortcuts:
enter: bang selected operator.
shift+enter: toggle insert/write.
space: toggle play/pause.
>: increase BPM.
<: decrease BPM.
shift+arrowKey: expand cursor.
ctrl+arrowKey: leap cursor.
alt+arrowKey: move selection.
ctrl+c: copy selection.
ctrl+x: cut selection.
ctrl+v: paste selection.
ctrl+z: undo.
ctrl+shift+z: redo.
]: increase grid size vertically.
[: decrease grid size vertically.
}: increase grid size horizontally.
{: decrease grid size horizontally.
ctrl/meta+]: increase program size vertically.
ctrl/meta+[: decrease program size vertically.
ctrl/meta+}: increase program size horizontally.
ctrl/meta+{: decrease program size horizontally.
ctrl+=: zoom in.
ctrl+-: zoom out.
ctrl+0: zoom reset.
tab: toggle interface.
backquote: toggle background.
Download the app here : https://hundredrabbits.itch.io/orca
Source code : https://github.com/hundredrabbits/Orca
Video tutorial : https://www.youtube.com/watch?v=RaI_TuISSJE
To test MIDI on macOS : http://notahat.com/simplesynth
Activate the virtual MIDI input on macOS : https://help.ableton.com/hc/en-us/articles/209774225-Using-virtual-MIDI-buses
Pilot (another way to create music with Orca, from the same creators) :
Download the app here : https://hundredrabbits.itch.io/pilot
Source code : https://github.com/hundredrabbits/Pilot
A good explanation of the software in German : http://tropone.de/2019/03/13/orca-ein-sequenzer-der-kryptischer-nicht-aussehen-kann-und-ein-versuch-einer-anleitung/
Vuo is a kit for making a million different projects — apps, videos, prototypes, plugins, exhibits, live performance effects, and more. Even if you don't have programming experience, Vuo lets you build your own stuff for Mac.
Vuo is the Finnish word for flow, and that's what Vuo is about — supporting your creative flow. When you're creating, you want to focus on your ideas. You don't want to be distracted or frustrated trying to figure out how your tools work. Vuo helps you stay in the groove by making it easy to find the building blocks you want, put them together, and tweak your creation until it's just the way you want it.
Field is a development environment for experimental code and digital art in the broadest of possible senses. While there are a great many development environments and digital art tools out there today, this one has been constructed with two key principles in mind:
Embrace and extend — rather than make a personal, private and pristine code utopia, Field tries to bridge to as many libraries, programming languages, and ways of doing things as possible. The world doesn't necessarily need another programming language or serial port library, nor do we have to pick and choose between data-flow systems, graphical user interfaces or purely textual programming — we can have it all in the right environment and we can both leverage the work of others and take control of our own tools and methods.
Live code makes anything possible — Field tries to replace as many "features" with editable code as it can. Its programming language of choice is Python — a world class, highly respected and incredibly flexible language. As such, Field is intensely customizable, with the glue between interface objects and data modifiable inside Field itself. Field takes seriously the idea that its user — you — are a programmer / artist doing serious work and that you should be able to reconfigure your tools to suit your domain and style as closely as possible.
Sustainability practitioners have long relied on images to display relationships in complex adaptive systems on various scales and across different domains. These images facilitate communication, learning, collaboration and evaluation as they contribute to shared understanding of systemic processes. This research addresses the need for images that are widely understood across different fields and sectors for researchers, policy makers, design practitioners and evaluators with varying degrees of familiarity with the complexity sciences. The research identifies, defines and illustrates 16 key features of complex systems and contributes to an evolving visual language of complexity. Ultimately the work supports learning as a basis for informed decision-making at CECAN (Centre for the Evaluation of Complexity Across the Nexus) and other communities engaged with the analysis of complex problems.
We call them "seeds". Each seed is a machine learning example you can start playing with. Explore, learn and grow them into whatever you like.
This channel was created for anyone who is curious about audio programming, digital signal processing (DSP) and creative coding, from the very basic concepts with no previous programming knowledge all the way up to building your own software instruments and applications in C++ with frameworks like JUCE and openFrameworks.
MoviePy is a Python module for video editing, which can be used for basic operations (like cuts, concatenations, title insertions), video compositing (a.k.a. non-linear editing), video processing, or to create advanced effects. It can read and write the most common video formats, including GIF.
Created by Satoshi HORII at Rhizomatiks, (centiscript) is a JavaScript-based creative-code environment for creating experimental graphics. Satoshi sees (centiscript) as a tool for visual thinking, an endless exploration from one script to another. Each experiment can be shared online since it relies on JavaScript + HTML + Canvas.
This is the official on-line repository for the code from the Graphics Gems series of books (from Academic Press). This series focuses on short to medium length pieces of code which perform a wide variety of computer graphics related tasks. All code here can be used without restrictions. The code distributions here contain all known bug fixes and enhancements.
An extensive book introducing C++ and openFrameworks
A free and open-source intermedia sequencer
Enables precise and flexible scripting of interactive scenarios. Control and score any OSC-compliant software or hardware : Max/MSP, PureData, OpenFrameworks, Processing...
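The OSC wire format such a sequencer speaks is simple enough to build with the standard library: a null-terminated address pattern and type-tag string, each padded to a multiple of 4 bytes, followed by big-endian arguments. A minimal sketch for int32 arguments (the address /tempo and port 9000 are made up for illustration; real targets like Max/MSP or PureData listen on whatever port you configure):

```python
import socket
import struct

def osc_pad(s: bytes) -> bytes:
    # OSC 1.0 strings are null-terminated, then padded to 4-byte alignment
    # (always at least one null byte).
    return s + b"\x00" * (4 - len(s) % 4)

def osc_message(address: str, *ints: int) -> bytes:
    """Encode an OSC message whose arguments are all int32."""
    typetags = "," + "i" * len(ints)          # e.g. ",i" for one int argument
    return (osc_pad(address.encode())
            + osc_pad(typetags.encode())
            + b"".join(struct.pack(">i", v) for v in ints))

# Fire-and-forget over UDP to an OSC-compliant listener (port is hypothetical):
msg = osc_message("/tempo", 120)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 9000))
sock.close()
```

Real deployments would normally use a library (python-osc, liblo) that also handles floats, strings, and bundles, but the framing above is the whole core of the protocol.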
An open source collection of 20+ computational design tools for Clojure & Clojurescript by Karsten Schmidt.
In active development since 2012, and totalling almost 39,000 lines of code, the libraries address concepts related to many disciplines: animation, generative design, data analysis / validation / visualization with SVG and WebGL, interactive installations, 2D/3D geometry, digital fabrication, voxel modeling, rendering, linked-data graphs & querying, encryption, OpenCL computing, and more.
Many of the thi.ng projects (especially the larger ones) are written in a literate programming style and include extensive documentation, diagrams and tests, directly in the source code on GitHub. Each library can be used individually. All projects are licensed under the Apache Software License 2.0.
Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don’t. So let’s take a look at some simple techniques for peeking inside these networks.
We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.
One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows. For example, the first layer maybe looks for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations—these neurons activate in response to very complex things such as entire buildings or trees.
One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana (see related work in [1], [2], [3], [4]). By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
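The "start with noise, then tweak the image" loop can be sketched without a real network. Below, a fixed vector w stands in for the class score of a trained model (a deliberate toy assumption, not an actual convnet), and a penalty on neighboring-pixel differences plays the role of the natural-image prior; the 1-D "image" is optimized by plain gradient ascent:

```python
import math
import random

random.seed(0)

n = 32
# Toy stand-in for "what the network considers a banana": a fixed linear score.
w = [math.sin(3 * math.pi * i / (n - 1)) for i in range(n)]
lam = 0.1   # weight of the smoothness prior
lr = 0.01   # gradient-ascent step size

def objective(x):
    score = sum(wi * xi for wi, xi in zip(w, x))               # "banana-ness"
    smooth = sum((x[i + 1] - x[i]) ** 2 for i in range(n - 1)) # prior penalty
    return score - lam * smooth

x = [random.gauss(0, 1) for _ in range(n)]   # start from random noise
j0 = objective(x)
for _ in range(200):
    # Analytic gradient of the smoothness penalty sum((x[i+1]-x[i])^2).
    g = [0.0] * n
    for i in range(n - 1):
        d = x[i + 1] - x[i]
        g[i + 1] += 2 * d
        g[i] -= 2 * d
    # Ascend the objective: pull x toward the detector, smooth it out.
    x = [xi + lr * (wi - lam * gi) for xi, wi, gi in zip(x, w, g)]
```

In the real technique the gradient of the class score comes from backpropagating through the trained network rather than from a fixed w, but the optimization loop has exactly this shape.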
Build beautiful interactive books using GitHub/Git and Markdown.
A plain-text version of Bret Victor’s reading list : https://gist.github.com/nickloewen/10565777