moving things around

pull/65/head
Patricio Gonzalez Vivo 8 years ago
parent b0120ca139
commit 327de614b3

.gitignore vendored

@ -1,4 +1,3 @@
/src
.DS_Store
.dropbox
tmp.md
@ -8,4 +7,4 @@ book.pdf
log/*
/.idea
.idea/
idea/
idea/

@ -1,10 +0,0 @@
http://github.prideout.net/coordinate-fields/
https://briansharpe.wordpress.com/2011/12/01/optimized-artifact-free-gpu-cellular-noise/
http://www.rhythmiccanvas.com/research/papers/worley.pdf
http://webstaff.itn.liu.se/~stegu/GLSL-cellular/GLSL-cellular-notes.pdf
http://www.iquilezles.org/www/articles/voronoise/voronoise.htm
http://www.iquilezles.org/www/articles/smoothvoronoi/smoothvoronoi.htm
http://www.iquilezles.org/www/articles/voronoilines/voronoilines.htm

@ -1,14 +1,9 @@
http://github.prideout.net/coordinate-fields/
https://docs.google.com/spreadsheets/d/194IVZR_xLVsw5H0zJZ7CWuimOQA_n8KU2eeIZBDwIH0/edit#gid=0
https://briansharpe.wordpress.com/2011/12/01/optimized-artifact-free-gpu-cellular-noise/
http://www.rhythmiccanvas.com/research/papers/worley.pdf
http://webstaff.itn.liu.se/~stegu/GLSL-cellular/GLSL-cellular-notes.pdf
http://heman.readthedocs.org/en/latest/generate.html#archipelagos
http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2012/10/Tatarchuk-Noise(GDC07-D3D_Day).pdf
http://www.campi3d.com/External/MariExtensionPack/help/MARI%20Extension%20Pack.html?Understandingsomebasicnoiseterms.html
http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/white-paper-procedural-terrain-generation-with-r2452
http://www.iquilezles.org/www/articles/warp/warp.htm
http://www.iquilezles.org/www/articles/morenoise/morenoise.htm
https://github.com/NVIDIAGameWorks/OpenGLSamples/blob/master/samples/es3aep-kepler/TerrainTessellation/assets/shaders/noise.glsl
http://www.iquilezles.org/www/articles/voronoise/voronoise.htm
http://www.iquilezles.org/www/articles/smoothvoronoi/smoothvoronoi.htm
http://www.iquilezles.org/www/articles/voronoilines/voronoilines.htm

@ -1,93 +1,5 @@
## Emerging patterns
We made pseudo-random values from a sine wave, and from them we constructed noise. We went from absolute chaos to smooth random variations we can control.
With that we were able to suggest more organic visual gestures, but we are still far away from the "real" thing. If we look at satellite images, coherent structures emerge from mountain formations; looking closely at the surface of a leaf we see a clear inner pattern. These surfaces speak about the forces involved in their creation: the tension between the laws applied to them and the forces of their surroundings.
The next step in our quest to learn how to mimic nature is to learn about iteration. More precisely, iteration in time and iteration in space.
### Fractal Brownian Motion
Noise tends to mean different things to different people. Musicians will think of it as disturbing sounds, communicators as interference, and astrophysicists as the cosmic microwave background. Most of these concepts have one thing in common that brings us back to the beginning of randomness: waves and their properties. Whether audio or electromagnetic, a wave is the fluctuation over time of a signal. That change happens in amplitude and frequency. The equation for it looks like this:
<div class="simpleFunction" data="
float amplitude = 1.;
float frequency = 1.;
y = amplitude * sin(x * frequency);
"></div>
* Try changing the values of the frequency and amplitude to understand how they behave.
* Using shaping functions, try changing the amplitude over time.
* Using shaping functions, try changing the frequency over time.
By doing the last two exercises you have managed to "modulate" a sine wave, creating AM (amplitude modulated) and FM (frequency modulated) waves. Congratulations!
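For instance, a minimal sketch of both modulations, assuming ```u_time``` is available in the sandbox (the rates and depths here are arbitrary choices):

```glsl
// Amplitude modulation (AM): an envelope scales the carrier over time
y = abs(sin(u_time)) * sin(x * 10.0);

// Frequency modulation (FM): the carrier frequency itself oscillates
// y = sin(x * (10.0 + 5.0 * sin(u_time)));
```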
Another interesting property of waves is their ability to add up. Add the following lines to the previous example and pay attention to how the frequencies and amplitudes change as we add different waves.
```glsl
float t = 0.01*(-u_time*130.0);
y += sin(x*2.1 + t)*4.5;
y += sin(x*1.72 + t*1.121)*4.0;
y += sin(x*2.221 + t*0.437)*5.0;
y += sin(x*3.1122+ t*4.269)*2.5;
y *= 0.06;
```
* Experiment by changing their values.
* Is it possible to cancel two waves out? What would that look like?
* Is it possible to add waves in such a way that they amplify each other?
In music, each note is associated with a specific frequency. These frequencies follow a pattern among themselves, in what we call a scale.
By adding different iterations of noise (*octaves*), successively incrementing the frequencies (*lacunarity*) and decreasing the amplitude (*gain*) of the **noise**, we can obtain a finer level of granularity in the noise. This technique is called Fractal Brownian Motion (*fBm*), and in its simplest form it looks like the following code:
<div class="simpleFunction" data="// Properties
const int octaves = 1;
float lacunarity = 2.0;
float gain = 0.5;
//
// Initial values
float amplitude = 0.5;
float frequency = x;
//
// Loop of octaves
for (int i = 0; i < octaves; i++) {
&#9;y += amplitude * noise(frequency);
&#9;frequency *= lacunarity;
&#9;amplitude *= gain;
}"></div>
* Progressively change the number of octaves from 1 to 2, 4, 8 and 10. See what happens.
* With more than 4 octaves, try changing the lacunarity value.
* Also with more than 4 octaves, change the gain value and see what happens.
Note how with each octave the noise seems to gain more detail. Also note the self-similarity as more octaves are added.
The following code is an example of how fBm can be implemented in two dimensions.
<div class='codeAndCanvas' data='2d-fbm.frag'></div>
* Reduce the number of octaves by changing the value on line 37.
* Modify the lacunarity of the fBm on line 47.
* Explore by changing the gain on line 48.
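For reference, the octave loop can be wrapped into a reusable function. This is only a minimal sketch, assuming a 2D ```noise()``` function is defined; the constants are illustrative and not necessarily the exact values used in 2d-fbm.frag:

```glsl
#define OCTAVES 6

float fbm (in vec2 st) {
    // Initial values
    float value = 0.0;
    float amplitude = 0.5;
    // Add successive octaves of noise
    for (int i = 0; i < OCTAVES; i++) {
        value += amplitude * noise(st);
        st *= 2.0;          // lacunarity: double the frequency
        amplitude *= 0.5;   // gain: halve the amplitude
    }
    return value;
}
```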
This technique is commonly used to construct procedural landscapes. The self-similarity of fBm is perfect for mountains, together with a close relative known as *turbulence*: essentially an fBm, but constructed from the absolute value of a signed noise.
```glsl
for (int i = 0; i < OCTAVES; i++) {
    value += amplitude * abs(snoise(st));   // fold the signed noise with abs()
    st *= 2.;                               // increase the frequency
    amplitude *= .5;                        // decrease the amplitude
}
```
<a href="../edit.html#12/turbulence.frag"><canvas id="custom" class="canvas" data-fragment-url="turbulence.frag" width="520px" height="200px"></canvas></a>
Another member of this family is the *ridge*, constructed similarly to the turbulence but with some extra calculations:
```glsl
n = abs(n); // create creases
n = offset - n; // invert so creases are at top
n = n * n; // sharpen creases
```
<a href="../edit.html#12/ridge.frag"><canvas id="custom" class="canvas" data-fragment-url="ridge.frag" width="520px" height="200px"></canvas></a>
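Dropping those three lines into an fBm-style loop gives something like the following sketch (the ```offset``` value and the use of ```OCTAVES``` are illustrative assumptions; ```snoise()``` stands for a signed noise that returns values between -1.0 and 1.0):

```glsl
float value = 0.0;
float amplitude = 0.5;
float offset = 0.9;              // illustrative value
for (int i = 0; i < OCTAVES; i++) {
    float n = abs(snoise(st));   // create creases
    n = offset - n;              // invert so creases are at top
    n = n * n;                   // sharpen creases
    value += n * amplitude;
    st *= 2.0;
    amplitude *= 0.5;
}
```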
The next chapters in our quest to learn how to mimic nature will be about iteration: more precisely, iteration in time and iteration in space.
## Cellular noise

@ -1,7 +1,7 @@
<?php
$path = "..";
$subtitle = ": Fractal Brownian Motion";
$subtitle = ": More noise";
$README = "README";
$language = "";


@ -1,27 +1,87 @@
## Fractals
## Fractal Brownian Motion
https://www.shadertoy.com/view/lsX3W4
Noise tends to mean different things to different people. Musicians will think of it as disturbing sounds, communicators as interference, and astrophysicists as the cosmic microwave background. In fact most of these concepts have one thing in common that brings us back to the beginning of randomness: waves and their properties. Whether audio or electromagnetic, a wave is the fluctuation over time of a signal. That change happens in amplitude and frequency. The equation for it looks like this:
https://www.shadertoy.com/view/Mss3Wf
<div class="simpleFunction" data="
float amplitude = 1.;
float frequency = 1.;
y = amplitude * sin(x * frequency);
"></div>
https://www.shadertoy.com/view/4df3Rn
* Try changing the values of the frequency and amplitude to understand how they behave.
* Using shaping functions, try changing the amplitude over time.
* Using shaping functions, try changing the frequency over time.
https://www.shadertoy.com/view/Mss3R8
By doing the last two exercises you have managed to "modulate" a sine wave, creating AM (amplitude modulated) and FM (frequency modulated) waves. Congratulations!
https://www.shadertoy.com/view/4dfGRn
Another interesting property of waves is their ability to add up. Add the following lines to the previous example and pay attention to how the frequencies and amplitudes change as we add different waves.
https://www.shadertoy.com/view/lss3zs
```glsl
float t = 0.01*(-u_time*130.0);
y += sin(x*2.1 + t)*4.5;
y += sin(x*1.72 + t*1.121)*4.0;
y += sin(x*2.221 + t*0.437)*5.0;
y += sin(x*3.1122+ t*4.269)*2.5;
y *= 0.06;
```
https://www.shadertoy.com/view/4dXGDX
* Experiment by changing their values.
* Is it possible to cancel two waves out? What would that look like?
* Is it possible to add waves in such a way that they amplify each other?
https://www.shadertoy.com/view/XsXGz2
In music, each note is associated with a specific frequency. These frequencies follow a pattern among themselves, in what we call a scale.
https://www.shadertoy.com/view/lls3D7
By adding different iterations of noise (*octaves*), successively incrementing the frequencies (*lacunarity*) and decreasing the amplitude (*gain*) of the **noise**, we can obtain a finer level of granularity in the noise. This technique is called Fractal Brownian Motion (*fBm*), and in its simplest form it looks like the following code:
https://www.shadertoy.com/view/XdB3DD
<div class="simpleFunction" data="// Properties
const int octaves = 1;
float lacunarity = 2.0;
float gain = 0.5;
//
// Initial values
float amplitude = 0.5;
float frequency = x;
//
// Loop of octaves
for (int i = 0; i < octaves; i++) {
&#9;y += amplitude * noise(frequency);
&#9;frequency *= lacunarity;
&#9;amplitude *= gain;
}"></div>
https://www.shadertoy.com/view/XdBSWw
* Progressively change the number of octaves from 1 to 2, 4, 8 and 10. See what happens.
* With more than 4 octaves, try changing the lacunarity value.
* Also with more than 4 octaves, change the gain value and see what happens.
https://www.shadertoy.com/view/llfGD2
Note how with each octave the noise seems to gain more detail. Also note the self-similarity as more octaves are added.
The following code is an example of how fBm can be implemented in two dimensions.
<div class='codeAndCanvas' data='2d-fbm.frag'></div>
* Reduce the number of octaves by changing the value on line 37.
* Modify the lacunarity of the fBm on line 47.
* Explore by changing the gain on line 48.
This technique is commonly used to construct procedural landscapes. The self-similarity of fBm is perfect for mountains, together with a close relative known as *turbulence*: essentially an fBm, but constructed from the absolute value of a signed noise.
```glsl
for (int i = 0; i < OCTAVES; i++) {
    value += amplitude * abs(snoise(st));   // fold the signed noise with abs()
    st *= 2.;                               // increase the frequency
    amplitude *= .5;                        // decrease the amplitude
}
```
<a href="../edit.html#12/turbulence.frag"><canvas id="custom" class="canvas" data-fragment-url="turbulence.frag" width="520px" height="200px"></canvas></a>
Another member of this family is the *ridge*, constructed similarly to the turbulence but with some extra calculations:
```glsl
n = abs(n); // create creases
n = offset - n; // invert so creases are at top
n = n * n; // sharpen creases
```
<a href="../edit.html#12/ridge.frag"><canvas id="custom" class="canvas" data-fragment-url="ridge.frag" width="520px" height="200px"></canvas></a>
https://www.shadertoy.com/view/Mlf3RX


@ -1,7 +1,7 @@
<?php
$path = "..";
$subtitle = ": Fractals";
$subtitle = ": Fractal Brownian Motion";
$README = "README";
$language = "";


@ -0,0 +1,14 @@
https://docs.google.com/spreadsheets/d/194IVZR_xLVsw5H0zJZ7CWuimOQA_n8KU2eeIZBDwIH0/edit#gid=0
http://heman.readthedocs.org/en/latest/generate.html#archipelagos
http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2012/10/Tatarchuk-Noise(GDC07-D3D_Day).pdf
http://www.campi3d.com/External/MariExtensionPack/help/MARI%20Extension%20Pack.html?Understandingsomebasicnoiseterms.html
http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/white-paper-procedural-terrain-generation-with-r2452
http://www.iquilezles.org/www/articles/warp/warp.htm
http://www.iquilezles.org/www/articles/morenoise/morenoise.htm
https://github.com/NVIDIAGameWorks/OpenGLSamples/blob/master/samples/es3aep-kepler/TerrainTessellation/assets/shaders/noise.glsl


@ -1,78 +1,27 @@
# Image processing
## Fractals
## Textures
https://www.shadertoy.com/view/lsX3W4
![](01.jpg)
https://www.shadertoy.com/view/Mss3Wf
Graphic cards (GPUs) have special memory types for images. On CPUs images are usually stored as arrays of bytes, but GPUs store images as ```sampler2D```, which is more like a table (or matrix) of floating point vectors. More interestingly, the values of this *table* of vectors are continuous: values between pixels are interpolated at a low level.
https://www.shadertoy.com/view/4df3Rn
In order to use this feature we first need to *upload* the image from the CPU to the GPU, and then pass the ```id``` of the texture to the right [```uniform```](../05). All of that happens outside the shader.
https://www.shadertoy.com/view/Mss3R8
Once the texture is loaded and linked to a valid ```uniform sampler2D```, you can ask for the color value at specific coordinates (formatted as a [```vec2```](index.html#vec2.md) variable) using the [```texture2D()```](index.html#texture2D.md) function, which will return a color formatted as a [```vec4```](index.html#vec4.md) variable.
https://www.shadertoy.com/view/4dfGRn
```glsl
vec4 texture2D(sampler2D texture, vec2 coordinates)
```
https://www.shadertoy.com/view/lss3zs
Check the following code, where we load Hokusai's Wave (1830) as ```uniform sampler2D u_tex0``` and call it for every pixel of the billboard:
https://www.shadertoy.com/view/4dXGDX
<div class="codeAndCanvas" data="texture.frag" data-textures="hokusai.jpg"></div>
https://www.shadertoy.com/view/XsXGz2
If you pay attention you will notice that the coordinates for the texture are normalized! What a surprise, right? Texture coordinates are consistent with the rest of the things we have seen, and their coordinates go from 0.0 to 1.0, which matches perfectly with the normalized space coordinates we have been using.
https://www.shadertoy.com/view/lls3D7
Now that you have seen how to load a texture correctly, it is time to experiment and discover what we can do with it. Try the following (a starting sketch follows this list):
https://www.shadertoy.com/view/XdB3DD
* Scaling the previous texture by half.
* Rotating the previous texture 90 degrees.
* Hooking the mouse position to the coordinates to move it.
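A starting point for those exercises could be a sketch like the one below, where the coordinates are transformed before sampling (the transforms shown are illustrative; ```u_tex0```, ```u_mouse``` and ```u_resolution``` follow the uniforms used in the examples of this book):

```glsl
vec2 st = gl_FragCoord.xy/u_resolution.xy;

// Scale: multiplying the coordinates by 2.0 around the center
// makes the image appear at half its size
st = (st - 0.5) * 2.0 + 0.5;

// Rotate 90 degrees around the center
// float a = 3.1415926 * 0.5;
// st = mat2(cos(a), -sin(a), sin(a), cos(a)) * (st - 0.5) + 0.5;

// Move: offset the coordinates with the mouse
// st -= u_mouse/u_resolution;

gl_FragColor = texture2D(u_tex0, st);
```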
https://www.shadertoy.com/view/XdBSWw
Why should you be excited about textures? First of all, forget about the sad 255 values per channel; once your image is transformed into a ```uniform sampler2D``` you have all the values between 0.0 and 1.0 (depending on what you set the ```precision``` to). That's why shaders can make really beautiful post-processing effects.
https://www.shadertoy.com/view/llfGD2
Second, the [```vec2()```](index.html#vec2.md) coordinates mean you can get values even between pixels. As we said before, textures are a continuum. This means that if you set up your texture correctly you can ask for values all around the surface of your image, and the values will smoothly vary from pixel to pixel with no jumps!
Finally, you can set up your image to repeat at the edges, so if you give values above or below the normalized range of 0.0 and 1.0, the values will wrap around and start over.
All these features make your image more like an infinite spandex fabric. You can stretch and shrink your texture without noticing the grid of bytes it was originally composed of or where it ends. To experience this, take a look at the following code, where we distort a texture using [the noise function we already made](../11/).
<div class="codeAndCanvas" data="texture-noise.frag" data-textures="hokusai.jpg"></div>
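The distortion in that example comes down to offsetting the sampling coordinates with a noise field before calling ```texture2D()```, roughly like this sketch (the frequencies and amounts are arbitrary and not necessarily the ones used in texture-noise.frag; ```noise()``` stands for the 2D noise from the previous chapter):

```glsl
vec2 st = gl_FragCoord.xy/u_resolution.xy;

// Push the sampling coordinates around with a noise field
st.x += noise(st * 10.0 + u_time) * 0.1;
st.y += noise(st * 10.0 - u_time) * 0.1;

gl_FragColor = texture2D(u_tex0, st);
```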
## Texture resolution
The above examples play well with square images, where both sides are equal and match our square billboard. For non-square images things can be a little trickier, and unfortunately centuries of pictorial art and photography have found non-square proportions more pleasant to the eye.
![Joseph Nicéphore Niépce (1826)](nicephore.jpg)
How can we solve this problem? We need to know the original proportions of the image in order to stretch the texture correctly and preserve the original [*aspect ratio*](http://en.wikipedia.org/wiki/Aspect_ratio). For that, the texture width and height are passed to the shader as a ```uniform```; in our example framework they arrive as a ```uniform vec2``` with the same name as the texture followed by the suffix ```Resolution```. Once the shader has this information it can compute the aspect ratio by dividing the ```width``` by the ```height``` of the texture resolution. Finally, by multiplying the ```y``` coordinate by this ratio we shrink that axis to match the original proportions.
Uncomment line 21 of the following code to see this in action.
<div class="codeAndCanvas" data="texture-resolution.frag" data-textures="nicephore.jpg"></div>
* What do we need to do to center this image?
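As a reference, the correction described above boils down to something like this sketch, assuming the image arrives as ```u_tex0``` with its size in ```u_tex0Resolution```:

```glsl
vec2 st = gl_FragCoord.xy/u_resolution.xy;

// Aspect ratio of the image: width divided by height
float aspect = u_tex0Resolution.x/u_tex0Resolution.y;

// Multiply the y coordinate by the ratio to shrink that axis
st.y *= aspect;

gl_FragColor = texture2D(u_tex0, st);
```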
## Digital upholstery
![](03.jpg)
You may be thinking that this is unnecessarily complicated... and you are probably right. Still, this way of working with images leaves enough room for all sorts of hacks and creative tricks. Imagine that you are an upholsterer: by stretching and folding a fabric over a structure you can create new and better patterns and techniques.
![Eadweard Muybridge's study of motion](muybridge.jpg)
This level of craftsmanship links back to some of the first optical experiments ever made. For example, *sprite animations* are very common in games, and it is impossible not to see in them reminiscences of the phenakistoscope, the zoetrope and the praxinoscope.
This could seem simple, but the possibilities of modifying texture coordinates are enormous. For example:
<div class="codeAndCanvas" data="texture-sprite.frag" data-textures="muybridge.jpg"></div>
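A sprite animation like the one above comes down to choosing which cell of the sheet to sample each frame. A minimal sketch of the idea follows (the 5x5 grid and the frame rate are illustrative assumptions, not the actual layout of muybridge.jpg):

```glsl
vec2 st = gl_FragCoord.xy/u_resolution.xy;

// Illustrative sheet layout: 5 columns by 5 rows of frames
vec2 grid = vec2(5.0, 5.0);

// Advance one frame every tenth of a second
float frame = mod(floor(u_time * 10.0), grid.x * grid.y);
vec2 cell = vec2(mod(frame, grid.x), floor(frame / grid.x));

// Squeeze the coordinates into a single cell and jump to the chosen one
st = (st + cell) / grid;

gl_FragColor = texture2D(u_tex0, st);
```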
Now it is your turn:
* Can you make a kaleidoscope using what we have learned?
* Way before Oculus or Google Cardboard, stereoscopic photography was a big thing. Could you code a simple shader to re-use these beautiful images?
<a href="../edit.html#10/ikeda-03.frag"><canvas id="custom" class="canvas" data-fragment-url="ikeda-03.frag" width="520px" height="200px"></canvas></a>
* What other optical toys can you re-create using textures?
In the next chapters we will learn how to do some image processing using shaders. You will note that the complexity of shaders finally makes sense, because they were in a big sense designed to do this type of processing. We will start with some image operations!
https://www.shadertoy.com/view/Mlf3RX


@ -1,6 +1,7 @@
<?php
$path = "..";
$subtitle = ": Fractals";
$README = "README";
$language = "";


@ -1,18 +1,78 @@
## Image operations
# Image processing
## Textures
### Invert
![](01.jpg)
<div class="codeAndCanvas" data="inv.frag" data-imgs="00.jpg,01.jpg"></div>
Graphic cards (GPUs) have special memory types for images. On CPUs images are usually stored as arrays of bytes, but GPUs store images as ```sampler2D```, which is more like a table (or matrix) of floating point vectors. More interestingly, the values of this *table* of vectors are continuous: values between pixels are interpolated at a low level.
### Add, Subtract, Multiply and others
In order to use this feature we first need to *upload* the image from the CPU to the GPU, and then pass the ```id``` of the texture to the right [```uniform```](../05). All of that happens outside the shader.
![](02.jpg)
Once the texture is loaded and linked to a valid ```uniform sampler2D```, you can ask for the color value at specific coordinates (formatted as a [```vec2```](index.html#vec2.md) variable) using the [```texture2D()```](index.html#texture2D.md) function, which will return a color formatted as a [```vec4```](index.html#vec4.md) variable.
<div class="codeAndCanvas" data="operations.frag" data-imgs="00.jpg,01.jpg"></div>
```glsl
vec4 texture2D(sampler2D texture, vec2 coordinates)
```
### PS Blending modes
Check the following code, where we load Hokusai's Wave (1830) as ```uniform sampler2D u_tex0``` and call it for every pixel of the billboard:
<div class="codeAndCanvas" data="texture.frag" data-textures="hokusai.jpg"></div>
If you pay attention you will notice that the coordinates for the texture are normalized! What a surprise, right? Texture coordinates are consistent with the rest of the things we have seen, and their coordinates go from 0.0 to 1.0, which matches perfectly with the normalized space coordinates we have been using.
Now that you have seen how to load a texture correctly, it is time to experiment and discover what we can do with it. Try the following:
* Scaling the previous texture by half.
* Rotating the previous texture 90 degrees.
* Hooking the mouse position to the coordinates to move it.
Why should you be excited about textures? First of all, forget about the sad 255 values per channel; once your image is transformed into a ```uniform sampler2D``` you have all the values between 0.0 and 1.0 (depending on what you set the ```precision``` to). That's why shaders can make really beautiful post-processing effects.
Second, the [```vec2()```](index.html#vec2.md) coordinates mean you can get values even between pixels. As we said before, textures are a continuum. This means that if you set up your texture correctly you can ask for values all around the surface of your image, and the values will smoothly vary from pixel to pixel with no jumps!
Finally, you can set up your image to repeat at the edges, so if you give values above or below the normalized range of 0.0 and 1.0, the values will wrap around and start over.
All these features make your image more like an infinite spandex fabric. You can stretch and shrink your texture without noticing the grid of bytes it was originally composed of or where it ends. To experience this, take a look at the following code, where we distort a texture using [the noise function we already made](../11/).
<div class="codeAndCanvas" data="texture-noise.frag" data-textures="hokusai.jpg"></div>
## Texture resolution
The above examples play well with square images, where both sides are equal and match our square billboard. For non-square images things can be a little trickier, and unfortunately centuries of pictorial art and photography have found non-square proportions more pleasant to the eye.
![Joseph Nicéphore Niépce (1826)](nicephore.jpg)
How can we solve this problem? We need to know the original proportions of the image in order to stretch the texture correctly and preserve the original [*aspect ratio*](http://en.wikipedia.org/wiki/Aspect_ratio). For that, the texture width and height are passed to the shader as a ```uniform```; in our example framework they arrive as a ```uniform vec2``` with the same name as the texture followed by the suffix ```Resolution```. Once the shader has this information it can compute the aspect ratio by dividing the ```width``` by the ```height``` of the texture resolution. Finally, by multiplying the ```y``` coordinate by this ratio we shrink that axis to match the original proportions.
Uncomment line 21 of the following code to see this in action.
<div class="codeAndCanvas" data="texture-resolution.frag" data-textures="nicephore.jpg"></div>
* What do we need to do to center this image?
## Digital upholstery
![](03.jpg)
<div class="codeAndCanvas" data="blend.frag" data-imgs="04.jpg,05.jpg"></div>
You may be thinking that this is unnecessarily complicated... and you are probably right. Still, this way of working with images leaves enough room for all sorts of hacks and creative tricks. Imagine that you are an upholsterer: by stretching and folding a fabric over a structure you can create new and better patterns and techniques.
![Eadweard Muybridge's study of motion](muybridge.jpg)
This level of craftsmanship links back to some of the first optical experiments ever made. For example, *sprite animations* are very common in games, and it is impossible not to see in them reminiscences of the phenakistoscope, the zoetrope and the praxinoscope.
This could seem simple, but the possibilities of modifying texture coordinates are enormous. For example:
<div class="codeAndCanvas" data="texture-sprite.frag" data-textures="muybridge.jpg"></div>
Now it is your turn:
* Can you make a kaleidoscope using what we have learned?
* Way before Oculus or Google Cardboard, stereoscopic photography was a big thing. Could you code a simple shader to re-use these beautiful images?
<a href="../edit.html#10/ikeda-03.frag"><canvas id="custom" class="canvas" data-fragment-url="ikeda-03.frag" width="520px" height="200px"></canvas></a>
* What other optical toys can you re-create using textures?
In the next chapters we will learn how to do some image processing using shaders. You will note that the complexity of shaders finally makes sense, because they were in a big sense designed to do this type of processing. We will start with some image operations!


@ -1 +1,18 @@
## Kernel convolutions
## Image operations
### Invert
<div class="codeAndCanvas" data="inv.frag" data-imgs="00.jpg,01.jpg"></div>
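Inverting an image is just subtracting each channel from 1.0. A minimal sketch, assuming the image arrives as ```u_tex0```:

```glsl
vec2 st = gl_FragCoord.xy/u_resolution.xy;
vec4 color = texture2D(u_tex0, st);

// Invert: white becomes black and vice versa, alpha is left untouched
color.rgb = 1.0 - color.rgb;

gl_FragColor = color;
```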
### Add, Subtract, Multiply and others
![](02.jpg)
<div class="codeAndCanvas" data="operations.frag" data-imgs="00.jpg,01.jpg"></div>
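Since colors are vectors of floats between 0.0 and 1.0, these operations are plain per-channel arithmetic between the two samples. A minimal sketch, assuming the two images arrive as ```u_tex0``` and ```u_tex1```:

```glsl
vec2 st = gl_FragCoord.xy/u_resolution.xy;
vec3 colorA = texture2D(u_tex0, st).rgb;
vec3 colorB = texture2D(u_tex1, st).rgb;

vec3 added      = clamp(colorA + colorB, 0.0, 1.0);  // brightens
vec3 subtracted = clamp(colorA - colorB, 0.0, 1.0);  // darkens
vec3 multiplied = colorA * colorB;                    // always stays in range

gl_FragColor = vec4(multiplied, 1.0);
```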
### PS Blending modes
![](03.jpg)
<div class="codeAndCanvas" data="blend.frag" data-imgs="04.jpg,05.jpg"></div>
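Photoshop-style blending modes are built from the same ingredients; *screen*, for example, multiplies the inverted layers and inverts the result. A minimal sketch, again assuming ```u_tex0``` and ```u_tex1``` for the two layers:

```glsl
vec2 st = gl_FragCoord.xy/u_resolution.xy;
vec3 base  = texture2D(u_tex0, st).rgb;
vec3 blend = texture2D(u_tex1, st).rgb;

// Screen: invert both layers, multiply them, then invert the result
vec3 screen = 1.0 - (1.0 - base) * (1.0 - blend);

gl_FragColor = vec4(screen, 1.0);
```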


@ -1 +1 @@
## Filters
## Kernel convolutions

@ -0,0 +1 @@
## Filters


