About

How this site came to be, what’s used to build it and where it will be going.

But why?

Like many interesting things, this site’s raison d’être came from an overlap of interests: having a test project that no-one relies on (yet), where I can learn, practice and experiment with practical uses of AI; simply being a geek who always has a pet project on the go; and, most importantly, trying to use technology to make something that entertains and educates (my) children, something cool that hopefully makes life better for someone else too.

History

This started out as a simple script run through GPT-4, with some prompt hacking to create fairy-tales on a given subject. It worked okay-ish, but the end result wasn’t all that exciting for the kids. The next step was to have GPT also describe some scenes. I then manually ran each description through Stable Diffusion to generate a dozen or so images, selected one, and inserted it into the HTML. “The Sleeping Mouse” was made that way. The main issues with that workflow were the amount of manual labour and the lack of consistent characters.

Today’s Site

After hacking together the HTML pages, it was time to drop the hand-written CSS I had improvised. The styling was rebased on a set of SCSS ‘modules’ I use for most of my sites, improving those in the process. One interesting feature added specifically for this site is the (hopefully) nice way the text flows around the image outlines, giving the pages a bit of a playful look. Another is the use of CSS text shadows, CSS variables and a tiny bit of JavaScript to render pixel-perfect text outlines.

Pixel-perfect text outlines

This leverages two different ‘hacks’: one uses a text-shadow to create an outline; the other is a way to draw lines of exactly n device pixels (1 in this case), regardless of zoom level.

Outlines

This site uses SCSS to make writing styles more re-usable and modular.

// The CSS variable '--color-fg' contains the hue and saturation together
$color-fg: hsl(var(--color-fg), var(--l-fg));

// Subtitle-style outline of text. Not suitable for large amounts of text,
// as it renders 8 shadows per element.
// 0.707106781186 ≈ cos(45deg) = sin(45deg), for the diagonal offsets.
@mixin text-outline($color: $color-fg, $width: var(--border-thickness)) {
    text-shadow:
        0 calc(-1 * #{$width}) 0 #{$color},
        calc(0.707106781186 * #{$width}) calc(-0.707106781186 * #{$width}) 0 #{$color},
        #{$width} 0 0 #{$color},
        calc(0.707106781186 * #{$width}) calc(0.707106781186 * #{$width}) 0 #{$color},
        0 #{$width} 0 #{$color},
        calc(-0.707106781186 * #{$width}) calc(0.707106781186 * #{$width}) 0 #{$color},
        calc(-1 * #{$width}) 0 0 #{$color},
        calc(-0.707106781186 * #{$width}) calc(-0.707106781186 * #{$width}) 0 #{$color};
}

The text-outline mixin shown above takes a color and a line width. The text-shadow property it generates consists of eight separate shadows, offset at 45-degree intervals, which in my opinion is the optimum for a high-quality result. The four diagonal shadows could be dropped to speed up rendering of large amounts of text, but I never found that to give good results, so I didn’t make it configurable.
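To show where the 0.707106781186 constants come from (cos 45° = sin 45° = √2⁄2), here is a small sketch, in JavaScript purely for illustration, that generates the same eight offsets as unit vectors at 45-degree steps:

```javascript
// Generate the eight shadow offsets as unit vectors at 45-degree steps.
// The diagonal components come out as ±0.7071... (cos 45° = sin 45°).
const outlineOffsets = (steps = 8) =>
  Array.from({ length: steps }, (_, i) => {
    const angle = (2 * Math.PI * i) / steps;
    return { x: Math.cos(angle), y: Math.sin(angle) };
  });

// Turn the offsets into a text-shadow value for a given width and color.
const textShadowValue = (width, color) =>
  outlineOffsets()
    .map(({ x, y }) =>
      `calc(${x.toFixed(12)} * ${width}) calc(${y.toFixed(12)} * ${width}) 0 ${color}`)
    .join(', ');

console.log(textShadowValue('var(--border-thickness)', 'black'));
```

The result is the same set of offsets the mixin hard-codes, just starting from a different compass point.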

Lines sized to device pixels

In CSS, if you specify a line to be 5px wide, what you actually get is a line of 5 CSS pixels, which is what you want most of the time. But in some cases you want to draw in ‘device’ pixels, for example to create ‘hairlines’ (the thinnest possible line that can be displayed properly).

We will be using and updating a CSS variable called --device-pixel-ratio, which is simply window.devicePixelRatio exported to CSS. First we make sure it always exists by adding a default to our stylesheet:

:root {
    --device-pixel-ratio: 2;
}

Next we need a bit of script and some event listeners to update the CSS variable on any resize or zoom. Note that this does not capture ‘pinch zoom’; as far as I know there is no good way to listen for that. But for most cases it does the job.

const r = document.querySelector(':root');
const set_device_pixel_ratio_var = () => {
    r.style.setProperty('--device-pixel-ratio', window.devicePixelRatio || 1);
};

document.addEventListener('DOMContentLoaded', () => {
    // Fires when the display crosses the 2dppx threshold, e.g. on browser zoom
    window.matchMedia('screen and (min-resolution: 2dppx)')
        .addEventListener('change', set_device_pixel_ratio_var);
    window.addEventListener('resize', set_device_pixel_ratio_var);
    set_device_pixel_ratio_var();
});

Now we can define a hairline thickness using calc, so it is recalculated whenever the variable changes.

:root {
    --thickness-hairline: calc( 1px / var(--device-pixel-ratio) );
}
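To make the arithmetic concrete, here is the same calculation in JavaScript (again just for illustration): the hairline is 1 CSS pixel divided by the device-pixel ratio, so it always covers exactly one device pixel.

```javascript
// One device pixel expressed in CSS pixels: 1px / devicePixelRatio.
const hairlineCssPx = (devicePixelRatio) => 1 / devicePixelRatio;

console.log(hairlineCssPx(1)); // 1   CSS px on a standard display
console.log(hairlineCssPx(2)); // 0.5 CSS px on a 2x ("Retina") display
```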

And last but not least, we can apply it all to the h1 tags in our fairy-tales.

.fairytale {
    h1 {
        @include text-outline(purple, var(--thickness-hairline));
    }
}

Next Steps

Narration

I compared samples of different voices from several text-to-speech services.

The favourite, standout performer was Amazon Polly. An added benefit is that its API can report which word in a text is currently being spoken, using ‘speech marks’, something not every API offers. I’d like to use that to highlight the word being narrated, helping children associate text with speech to stimulate language development.
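As a sketch of how that highlighting could work (this is not the site’s code, and the sample data below is made up): Polly can return speech marks as newline-delimited JSON, one object per word, with a `time` field in milliseconds into the audio that can be matched against the playback position.

```javascript
// Parse Polly-style speech marks (newline-delimited JSON) and keep
// only the word-level marks.
const parseSpeechMarks = (ndjson) =>
  ndjson
    .trim()
    .split('\n')
    .map((line) => JSON.parse(line))
    .filter((mark) => mark.type === 'word');

// Find the word being spoken at a given playback time (in ms):
// the last mark whose start time is at or before that moment.
const wordAt = (marks, timeMs) => {
  let current = null;
  for (const mark of marks) {
    if (mark.time <= timeMs) current = mark;
    else break;
  }
  return current;
};

// Hypothetical sample data in Polly's speech-mark shape.
const sample = [
  '{"time":0,"type":"word","start":0,"end":3,"value":"The"}',
  '{"time":300,"type":"word","start":4,"end":12,"value":"Sleeping"}',
  '{"time":900,"type":"word","start":13,"end":18,"value":"Mouse"}',
].join('\n');

const marks = parseSpeechMarks(sample);
console.log(wordAt(marks, 500).value); // "Sleeping"
```

Wired to an audio element’s timeupdate event, `wordAt` would tell you which word to highlight at any moment.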

Consistent characters in the illustrations

A long-term goal: generate a series of images from a list of character / object descriptions plus one scene prompt per image.