UI - Past, present and future

I was just going to make a small post regarding the BEdit UI system as a response to Simon Anciaux's questions on my previous blog entry... at least I tried to keep it small.

During my days as a game programmer (well, I still develop games, but that's not what pays my salary anymore) I witnessed several UI systems: Java AWT, the Android and Apple families of devices with their built-in layout editors and systems, different games on custom engines with a retained-mode object-oriented framework, and even a bit of Unity and Unreal Engine 4. I've even had a run-in with Crazy Eddie's GUI and HTML. All these UI systems have one thing in common: from my point of view they are horrible! Observe the "from my point of view"; I am a programmer, I want the UI to reflect what is in the code, and a lot of UI systems require a lot of work to add simple stuff compared to adding a couple of lines of code. I can only assume that the target audience for these systems is not me.

The first UI system I accepted as something usable was recommended to me by a friend I know to be a good programmer: Dear ImGui. Funnily enough, I realized later on that it's inspired by one of Casey's talks on immediate mode UI.

The UI of BEdit is the fourth (and so far most feature-rich) UI system I've made from scratch. The first UI system I made was just a silly little debug UI for a mobile game, the second was to be used in VR with a haptic device as input, and the third is for our work-in-progress game with (mostly) gamepad input. As you might realize, these UI systems are trying to solve very different problems for very different audiences. Understanding the audience, and who will use your system, is key when making any UI system.

For games the end-users are the players; on console and PC they expect common buttons and mouse support, and on mobile, multi-touch for zooming and similar gestures needs to be supported. Who will use your code? Is it a UI / UX designer? A game designer? A programmer? If it's a UI / UX designer you probably need to make a stand-alone tool that looks and feels just like the tools the designer is already familiar with. A game designer probably wants to integrate it with other game software such as a level / map maker, in whatever programming language that is written in. BEdit in this way has the simplest possible target audience and users: me. It's only used for making the UI of BEdit, and as I'm the only developer on the project (well, to be frank, I delegated the making of the icon), I am the only one who can say whether the API is good or bad. And who is the end user? Who debugs binary files? Programmers. That's me!

What do programmers as end-users value in a UI (your mileage may vary), and how do these translate into code?
  • Responsiveness - avoid lengthy, even if pretty, transition animations.
  • => No need to keep (much of) the previous frame's state around, minimal interpolation, probably no need for audio.
  • Fast workflow - hotkeys for common actions, start where you left off.
  • => Dealing with winapi and keyboard chords isn't fun, but it must be done.
  • Configurability - make good defaults but permit end-users who care to change all of them (a sketch follows below this list).
  • => Some dump their configuration to json...
  • Functionality - if your UI does less than the end-user can program in, say, an hour, your UI isn't going to be used for very long.
  • => Spend time on it!
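
As a sketch of that configurability point: later snippets in this post reference Config.scrollSensitivity and Config.scrollBarWidth, so think of a global configuration struct with sensible defaults that a dumped settings file can override. The exact fields and values here are made up for illustration.

struct Configuration
{
    R32 scrollSensitivity; // Referenced by the scrolling code later in this post.
    R32 scrollBarWidth;    // Referenced by the scrollbar code later in this post.
    U32 tabWidthInSpaces;  // Hypothetical example field.
};

// Good defaults baked in; end-users who care can override any of these,
// e.g. from a json file dumped next to the executable.
static Configuration Config = {
    30.0f, // scrollSensitivity
    12.0f, // scrollBarWidth
    4,     // tabWidthInSpaces
};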


Keeping this in mind I set out to start the UI of BEdit. I already had a 2D renderer with font support and clipping so getting started was easy. In the beginning it looked something like
struct RenderGroup
{
    Resources* resources; // Font, textures, sprites...
    
    Vec2 nextWritePos; // Position where next character gets drawn.
    RenderPass pass; // Vertex storage, clip region, shader bindings...
};

RenderGroup StartGroup(Resources* resources, IntRect winRegion);
void DrawString(RenderGroup* group, String str); // Adds vertices to pass and moves group->nextWritePos
void Render(RenderGroup* group);
void NewLine(RenderGroup* group); // Moves group->nextWritePos to beginning of next row.
...

void UpdateApp(AppContext* context)
{
    IntRect winRegion = GetWinRegion(context);
    
    // Hex display of binary file:
    RenderGroup group = StartGroup(&context->resources, winRegion);
    for (U64 byteIndex = 0; byteIndex < context->binaryFile.sizeInBytes; ++byteIndex)
    {
        String digits = HexToString(context->binaryFile.bytes[byteIndex]);
        DrawString(&group, digits);

        if (group.nextWritePos.x + context->resources.font->charWidth * 2 > winRegion.width)
        {
            NewLine(&group);
        }
    }
    Render(&group);
}


That piece of code was pretty much the start of the UI. The next part was to make it interactive. Pretty simple thing to do. To start out: a simple click to start editing a byte, and when the byte has been written, move on to the next one.

struct RenderGroup
{
    ...
    
    Vec2 interactionStart;
};

...

void BeginInteraction(RenderGroup* group) 
{
    group->interactionStart = group->nextWritePos;
    group->interactionStart.y -= group->resources->font.ascent;
}

B32 EndInteraction(RenderGroup* group, Mouse* mouse)
{
    Vec2 interactionEnd = group->nextWritePos;
    interactionEnd.y += group->resources->font.descent;
    
    B32 wasPressed = (mouse->isDown && InsideMinMax(group->interactionStart, mouse->pos, interactionEnd));
    return wasPressed;
}

...

struct AppContext
{
    ...
    U64 focusedHalfByteIndex;
};

void UpdateApp(AppContext* context)
{
    ...
    for (U32 keyInput = 0; keyInput < context->key.count; ++keyInput)
    {
        U32 codepoint = context->key.input[keyInput];
        if (IsHexDigit(codepoint))
        {
            ReplaceHalfByte(context->binaryFile, context->focusedHalfByteIndex++, ToHexDigit(codepoint));
        }
    }
    
    for (U64 byteIndex = 0; byteIndex < context->binaryFile.sizeInBytes; ++byteIndex)
    {
        if (context->focusedHalfByteIndex / 2 == byteIndex)
        {
            DrawEditMarker(&group, context->focusedHalfByteIndex % 2);
        }
        BeginInteraction(&group);
        String digits = HexToString(context->binaryFile.bytes[byteIndex]);
        DrawString(&group, digits);
        if (EndInteraction(&group, &context->mouse))
        {
            context->focusedHalfByteIndex = byteIndex * 2;
        }
    }
    Render(&group);
}


This worked pretty well (for a starting point) so I moved on to do something more interesting: actually running the BEdit interpreter. The interpreter is pretty simple: it tokenizes the layout code, generates instructions, then goes through the instructions one by one and lets you intercept each of them. The layout language, just like C, C++, Java and... well, most languages I know of, uses a stack-based runtime. A stack entry in BEdit (usually) represents a type. That is
struct MyOtherType
{
    string(4) str;
};

struct MyType
{
    u(4) myInt;
    MyOtherType nested;
    f(8) myDouble;
};

layout MyType;

emits
// struct MyOtherType
 0: goto 4

// string(4) str;
 1: @global_address 4 bytes member-ascii name="str"
 2: global_address += 4

 3: goto local_return
// end of - MyOtherType

// struct MyType
 4: goto 14

// u(4) myInt
 5: @global_address 4 bytes member-unsigned name="myInt"
 6: global_address += 4

// MyOtherType nested;
 7: push-frame type="MyOtherType" name="nested"
 8: local_return = 10
 9: goto 1
10: pop-frame

// f(8) myDouble
11: @global_address 8 bytes member-float name="myDouble"
12: global_address += 8

13: goto local_return
// end of - MyType

// layout MyType
14: push-frame type="MyType" name=null
15: local_return = 17
16: goto 5
17: pop-frame


From a UI point-of-view, it feels pretty natural to represent this as a tree view. As frames get pushed I push a tree node, and as they get popped I pop a tree node. Pretty simple thing to do.

struct TreeNode 
{ 
    U16 parentIndexPlusOne;
    B16 isOpen; 
};
struct TreeState
{
    TreeNode treeNodes[256]; // Nobody needs more than 256 tree nodes right?
};

void UpdateTreeView(RenderGroup* group, Instructions* instructions, Mouse* mouse, Keyboard* keyboard, TreeState* state)
{
    U32 indentation = 0;
    U16 currentTreeIndexPlusOne = 0;
    U16 treeCount = 0;
    for (Runtime runtime = StartEvaluating(instructions); !HasReachedEnd(&runtime); StepOne(&runtime))
    {
        switch (runtime.currentInstruction->type)
        {
            case PushFrame:
            {
                ++treeCount;
                if (TreeIsFullyOpen(state, treeCount - 1))
                {
                    MoveToColumn(group, indentation++); // Moves the nextWritePos.x

                    StartInteractible(group);
                    TreeNode* thisNode = state->treeNodes + (treeCount - 1);
                    thisNode->parentIndexPlusOne = currentTreeIndexPlusOne;
                    currentTreeIndexPlusOne = treeCount;
                    if (thisNode->isOpen) 
                    {
                        DrawTreeMarkerOpen(group); 
                    }
                    else 
                    { 
                        DrawTreeMarkerClosed(group); 
                    }
                    DrawString(group, runtime.currentInstruction->pushFrame.typeName);
                    DrawString(group, runtime.currentInstruction->pushFrame.memberName);

                    if (EndInteractible(group, mouse))
                    {
                        thisNode->isOpen = !thisNode->isOpen; // Note that this is done after drawing, this will have one frame latency.
                    }
                    NextLine(group);
                }
            } break;

            case PopFrame:
            {
                --indentation;
                currentTreeIndexPlusOne = state->treeNodes[currentTreeIndexPlusOne - 1].parentIndexPlusOne;
            } break;

            case ScalarMember:
            {
                if (TreeIsFullyOpen(state, treeCount - 1))
                {
                    MoveToColumn(group, indentation);
                    // MemberEditor(group, runtime.currentInstruction, keyboard, mouse); // <- who has keyboard input?
                    DrawMemberValue(group, runtime.currentInstruction);
                    NextLine(group);
                }
            } break;
        }
    }
}


At this point I have two of the views I'd like to have in the application. But I'd like to have them displayed side-by-side, with maybe some menu and buttons on top. The hex editor probably wants a "goto address" widget, and obviously save and open buttons.

The current approach has two flaws that need to be fixed.
1. Clip region specification - can't the clip region just be as big as required?
2. Keyboard input - even if multiple UI elements can take input, only one should get it.

Number 1 is really easy to solve. Since the clip region isn't actually needed until rendering the group, we can specify it after the draw calls: while drawing we simply keep a record of the maximum value of nextWritePos.

The style of UI I'm going for looks like:
+--------+--------+
| MenuA  | MenuB  |
| ...    |--------|
|--------|        |
|   ...  |   ...  |
|        |        |
+--------+--------+


The left side is one view with its specific menu, and the right side potentially a different view with its menu. I know the width of each of the views before I start drawing them (by default half of the screen) so determining where line breaks are required is easy - but most of the time I'm content with just assuming it'll fit. Note that I'm not taking scroll bars into account at this time; we'll get to that later.

The changes required for this are:
struct RenderGroup
{
    ...
    
    Vec2 maxDimDrawn;
};

...

RenderGroup StartAutosizedGroup(Resources* resources, IVec2 minPoint)
{
    return StartGroup(resources, RectMinDim(minPoint, {0, 0}));
}

void SetWidth(RenderGroup* group, U32 width)
{
    group->pass.clip.dim.width = width;
}

void Render(RenderGroup* group)
{
    if (group->pass.clip.dim.width == 0)
    {
        group->pass.clip.dim.width = CeilToU32(group->maxDimDrawn.width);
    }
    if (group->pass.clip.dim.height == 0)
    {
        group->pass.clip.dim.height = CeilToU32(group->maxDimDrawn.height);
    }
    ...
}


and the usage code

void UpdateApp(AppContext* context)
{
    ...
    
    R32 y = 0.0f;
    R32 width = 0.5f * GetWidth(context->window);
    RenderGroup leftMenu = StartAutosizedGroup(&context->resources, TopLeft(context->window));
    SetWidth(&leftMenu, width);
    
    DrawMenu(&leftMenu, context->leftView);
    Render(&leftMenu);
    
    y += leftMenu.pass.clip.dim.height;
    
    RenderGroup leftContent = StartGroup(&context->resources, RectMinDim({0.0f, y}, {width, GetHeight(context->window) - y}));
    DrawContent(&leftContent, context->leftView);
    Render(&leftContent);
    
    // Same for right side.
    
    ...
}


Drawing the menu first lets me know the available size left for the content.

For the second part (the keyboard input) the widgets need unique identifiers. If you have experience with Dear ImGui or watched the Handmade Hero episodes where Casey does the UI you already know how to do that part. Pretty simple thing to do.

With identifiers for each widget I changed from the exploratory implementation to an implementation I could lean on more heavily. The UI got widget state storage so any widget could allocate what it needed and fetch it by id; TreeNode, the hex editor's focusedHalfByteIndex, etc. ended up using that system. Regarding widget identifiers, it is very easy to come up with unique ones in BEdit. There are two views, so the index of the view acts as one part. Since you're either representing members (which have a unique ordering) or byte indices (same there), it's trivial to assemble a unique identifier.
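
To give an idea of what that state storage can look like, here's a minimal sketch (not BEdit's actual implementation): the WidgetID with four U64 values is shown in the comments at the end of this post, while AllocateZeroed and globalUIState are placeholders.

struct UIStateSlot
{
    WidgetID id;
    void* data;
};

struct UIStateStore
{
    UIStateSlot slots[256];
    U32 slotCount;
};

static B32 SameID(WidgetID a, WidgetID b)
{
    return (a.e[0] == b.e[0] && a.e[1] == b.e[1] && a.e[2] == b.e[2] && a.e[3] == b.e[3]);
}

void* GetOrCreateUIStateBytes(UIStateStore* store, WidgetID id, U64 sizeInBytes)
{
    // Linear scan; the number of widgets with persistent state stays small.
    for (U32 slotIndex = 0; slotIndex < store->slotCount; ++slotIndex)
    {
        if (SameID(store->slots[slotIndex].id, id))
        {
            return store->slots[slotIndex].data; // Same widget as a previous frame.
        }
    }
    // First time this id asks for state: allocate zeroed storage and remember it.
    UIStateSlot* slot = store->slots + store->slotCount++;
    slot->id = id;
    slot->data = AllocateZeroed(sizeInBytes); // Placeholder allocator.
    return slot->data;
}

#define GetOrCreateUIState(Type, id) ((Type*)GetOrCreateUIStateBytes(&globalUIState, (id), sizeof(Type)))

In this sketch, GetUIState would be the same lookup minus the allocation, returning null if the widget hasn't stored anything yet.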

Now to the tricky part, scrolling.

But uhm... scrolling is easy, you just add an offset to the rendering in RenderGroup, right? Well, that's what I tried, and I found a category of bugs I'd never encountered visually before (but I've heard war stories).

struct Scroll
{
    Vec2 value;
};

struct RenderGroup
{
    ...
    
    Vec2 scroll;
    WidgetID id;
};

void Begin(RenderGroup* group)
{
    if (Scroll* scroll = (Scroll*)GetUIState(group->id))
    {
        group->scroll = scroll->value;
    }
}

void AddVertex(RenderGroup* group, Vertex v) // Called by all DrawXXX functions to add vertex.
{
    v.pos -= group->scroll;
    AddVertex(&group->pass, v);
}

void EndGroup(RenderGroup* group, Mouse* mouse)
{
    ...
    if (Scroll* scroll = (Scroll*)GetUIState(group->id))
    {
        if (InsideGroup(group, mouse->pos))
        {
            scroll->value += Config.scrollSensitivity * mouse->dScroll; // Note this has one frame latency.
        }
        scroll->value = ClampScrollToInsideView(group, scroll->value);
    }
    ...
}

...


Pretty simple stuff so far: if the mouse cursor is inside the view, scroll it if it has a widget id. I tried it, and after I added culling it worked fine. It worked fine for my test.bmp and my test.wav; then I opened the intro_art.hha. Other than being really, really, really laggy it worked fine... then I scrolled down.

The intro_art.hha had some interesting problems: when I scrolled down to about 75% of the file, the font got taller. But when I changed the scrolling to 74%, the font shrunk. At this point in time I still had some iffy code around that was marked for upgrading, and the size of the file is 460 MB, so I decided in my mind that I was overwriting the font height somewhere. A typical write off the end of a buffer. I stepped into the debugger, spread some data breakpoints around the font data and... nothing. No hit. But I found something else. Can you see it? The cause of the bug is visible in the previous snippet.

The reason is floating point representation errors of Vec2 scroll (select the text if you gave up on figuring it out).
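
To get a feel for the magnitudes involved (my numbers, not measured from BEdit): with tens of millions of lines at roughly 16 pixels per line, a pixel-space scroll offset ends up in the hundreds of millions, and up there a 32-bit float can only represent every 32nd value. A standalone demonstration:

#include <stdio.h>

int main(void)
{
    // Around 4.6e8 the gap between adjacent representable floats is 32, so
    // vertex positions computed against such a scroll value snap to a 32 pixel
    // grid. The top and bottom of a glyph can snap in different directions,
    // which is exactly the "font got taller" / "font shrunk" symptom.
    float scroll = 460000000.0f;
    printf("%.1f\n", scroll + 7.0f);  // Prints 460000000.0 - the +7 is rounded away.
    printf("%.1f\n", scroll + 20.0f); // Prints 460000032.0 - snaps to the next representable float.
    return 0;
}

Whether a given glyph ends up taller or shorter depends on which way its top and bottom happen to round, which is why nudging the scroll a little changed the symptom.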

Just in case you still want to think about it, let's move on and deal with the laggy part. A half-gigabyte file is pretty big to view in an editor; I wouldn't expect a general-purpose editor to handle it, but BEdit must handle this. My initial thought was that the lack of graphics culling was the main source of spent time. As a quick check, I disabled the rendering and turned on -O2. Still slow. Adding profiling data I saw all the time was spent in the hex editor, so I removed almost everything in there, but even just the act of iterating through the entire 460 MB of bytes proved to be too slow. I could cache the results, but I (and I hope you as a fellow programmer agree) don't accept a large one-frame latency. Another possibility would be to hide the latency by spawning a thread to do the work and displaying some loading UI... but I'm not a fan of that approach. BEdit must be able to load any file in a reasonable time (even if we get into the terabyte region).

Other than fixing the performance, the scrollbar needs to be implemented. The scrollbar wants to know how much you draw, how much is visible, and where the first visible line is.

This poses a small chicken-and-egg problem. The RenderGroup needs to know how much you draw to determine the view size, but I need to know the size to determine how much to draw. The scrollbar also needs to know how much you drew before the visible region as well as the entire size.

There are two use-cases at this time: the hex editor and the tree layout. For the hex editor we know exactly how much to draw. We have the width of a glyph (assuming a monospaced font) and the visible width of the view, and the visible height is known too. We can determine the number of lines required to draw the entire file without actually iterating through it. The scroll can change from being a float to a line index plus a fraction of a line (which also fixes the representation error).

struct Scroll
{
    U64 firstVisibleLine;
    R32 fractionPart;
};

struct RenderGroup
{
    ...

    // Replaces nextWritePos:
    R32 nextWriteX;
    U64 nextWriteLine;
};

void AddVertex(RenderGroup* group, Vertex v) // Called by all DrawXXX functions to add vertex.
{
    if (group->scroll.firstVisibleLine <= group->nextWriteLine)
    {
        R32 lineDiff = (R32)(group->nextWriteLine - group->scroll.firstVisibleLine);
        // y grows downward, so lines after the first visible one move down, minus the scrolled fraction of a line.
        v.pos.y += (lineDiff - group->scroll.fractionPart) * group->resources->font.lineHeight;
        AddVertex(&group->pass, v);
    }
}

void SetLineCount(RenderGroup* group, U64 lineCount)
{
    group->maxDimDrawn.y = lineCount * group->resources->font.lineHeight;
}

void SetCurrentLine(RenderGroup* group, U64 line)
{
    group->nextWriteLine = line;
}
...

void HexEditor(RenderGroup* group, ...)
{
    U32 bytesPerLine = group->pass.clip.dim.width / (2 * GetGlyphWidth(group->resources)); // Pre-calculation of line width
    U32 lineCount = (context->binaryFile.sizeInBytes + bytesPerLine - 1) / bytesPerLine; // Ceiled integer division
    SetLineCount(group, lineCount);
    
    U64 visibleLineCount = (U64)(group->pass.clip.dim.height / group->resources->font.lineHeight) + 1; // Partially visible lines count too.
    U64 firstByteIndexVisible = 0;
    if (Scroll* scroll = (Scroll*)GetUIState(group->id))
    {
        firstByteIndexVisible = scroll->firstVisibleLine * bytesPerLine;
        SetCurrentLine(group, scroll->firstVisibleLine);
    }

    for (U64 byteIndex = firstByteIndexVisible; 
         byteIndex < Min(context->binaryFile.sizeInBytes, firstByteIndexVisible + visibleLineCount * bytesPerLine);
         ++byteIndex)
    {
        String digits = HexToString(context->binaryFile.bytes[byteIndex]);
        DrawString(group, digits);

        if (group->nextWriteX + GetGlyphWidth(group->resources) * 2 > group->pass.clip.dim.width)
        {
            Assert((byteIndex + 1) % bytesPerLine == 0); // If triggered, "Pre-calculation of line width" is incorrect.
            NewLine(group);
        }
    }
}


That works for the hex editor, but how about the other use-case, the tree view? We have no idea how many lines that takes, nor does it remain consistent between frames if the user toggles a node. This is the cake programming part I mentioned in the previous post: we can do the old thing by default and let you do the smart thing if you can. The smart thing being that you tell the UI what row you're drawing on and how tall the area is, and the default being that you are completely clueless - please figure it out on my behalf. The latter works with the original code, as long as you're not iterating half a GB of data!

How many lines can a tree view have? As one row means one member, maybe in the range of thousands at most. We can iterate that.

How many lines can a hex view have? Billions. We can't iterate that.

This style of UI does have some issues: you need to know whether the data given to the UI is going to be huge or not, and if it's huge, you need to know the number of lines before you draw them.

As an example, let's say we have a file in hex as 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ab 00 and we'd like that file to be displayed as 00 <repeats 14 times> ab 00. Now we can't determine the size of a line anymore, since the file size in bytes doesn't give us the width required to display a line. Well, just use the second version! But that might prove too slow. In these cases caching with a pre-pass is required. This is one of the most significant limitations of this UI.
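
For completeness, the pre-pass cache I have in mind is nothing fancier than recording where each display line starts during one walk over the file. BEdit doesn't have this yet, and the helpers below (CountBytesOnOneDisplayLine, PushArray, Arena) are placeholders for whatever collapsing rule and allocator would actually be used; it's just a sketch of the shape such a cache takes.

struct LineCache
{
    U64* lineStartByteIndex; // lineStartByteIndex[i] = first byte shown on display line i.
    U64 lineCount;
};

// Built once per file (and patched up on edits); the walk over the whole file
// happens here instead of in the per-frame UI code.
LineCache BuildLineCache(BinaryFile* file, Arena* arena)
{
    LineCache cache = {};
    cache.lineStartByteIndex = PushArray(arena, U64, file->sizeInBytes); // Worst case: one line per byte.
    for (U64 byteIndex = 0; byteIndex < file->sizeInBytes; )
    {
        cache.lineStartByteIndex[cache.lineCount++] = byteIndex;
        byteIndex += CountBytesOnOneDisplayLine(file, byteIndex); // Run collapsing, bytes per line, etc.
    }
    return cache;
}

With that, the per-frame code can call SetLineCount with cache->lineCount and jump straight to cache->lineStartByteIndex[scroll->firstVisibleLine] without touching anything outside the visible range.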

On the positive side, we can now implement a scrollbar pretty trivially!

struct Scroll
{
    ...
    B32 scrollBarVisible;
};

void DrawScrollbar(RenderGroup* group);

void SetLineCount(RenderGroup* group, U64 lineCount)
{
    ...
    R32 heightAvailable;
    if (group->pass.clip.dim.height != 0)
    {
        heightAvailable = group->pass.clip.dim.height;
    }
    else
    {
        heightAvailable = GetWindowHeight() - group->pass.clip.minPos.y;
    }
    
    if (Scroll* scroll = (Scroll*)GetUIState(group->id))
    {
        scroll->scrollBarVisible = (heightAvailable < lineCount * group->resources->font.lineHeight);
    }
}

R32 GetWidthAvailableForLayout(RenderGroup* group)
{
    R32 result = group->pass.clip.dim.width;
    if (Scroll* scroll = (Scroll*)GetUIState(group->id))
    {
        if (scroll->scrollBarVisible)
        {
            result -= Config.scrollBarWidth;
        }
    }
    return result;
}

void Render(RenderGroup* group)
{
    ...
    if (Scroll* scroll = (Scroll*)GetUIState(group->id))
    {
        if (scroll->scrollBarVisible)
        {
            DrawScrollbar(group);
        }
    }

    Render(&group->pass);
}
...

void HexEditor(RenderGroup* group, ...)
{
    U32 bytesPerLine = GetWidthAvailableForLayout(group) / (2 * GetGlyphWidth(group->resources)); // Note this is off by one frame
    U32 lineCount = (context->binaryFile.sizeInBytes  + bytesPerLine - 1) / bytesPerLine; // Ceiled integer division
    SetLineCount(group, lineCount);

    ...
}


There are some issues we haven't touched yet: text alignment for column layout, the dual nature of lines (how big a line is supposed to be versus how long it actually is), the key input, and all of those one-frame delays.

The tree view at its current state looks like:
- SomeType
    SomeMember 123
    Other 0xababab
    - NestedMember
        a 0.123
        strMember "abc"

but wouldn't it look better as
- SomeType
    SomeMember 123
    Other      0xababab
    - NestedMember
        a         0.123
        strMember "abc"

?

This can be done pretty easily with a double pass: one to determine the widths of the columns, and a second to actually draw them. I decided to instead draw in the first pass while recording the column widths, and use those widths the next frame to get the proper layout - causing a one frame delay.
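
Roughly, the shape of that (the names here are made up for illustration, not BEdit's actual code) is: record the widest cell per column while drawing, and lay out against the widths recorded on the previous frame.

struct ColumnWidths
{
    R32 widthLastFrame[8]; // Used for layout this frame.
    R32 widthThisFrame[8]; // Recorded while drawing this frame.
};

// Called before drawing a cell: position it using last frame's measurements.
void MoveToAlignedColumn(RenderGroup* group, ColumnWidths* columns, U32 columnIndex, R32 rowStartX)
{
    R32 x = rowStartX;
    for (U32 i = 0; i < columnIndex; ++i)
    {
        x += columns->widthLastFrame[i];
    }
    group->nextWriteX = x;
}

// Called after drawing a cell, with the width it actually took.
void RecordColumnWidth(ColumnWidths* columns, U32 columnIndex, R32 widthDrawn)
{
    if (columns->widthThisFrame[columnIndex] < widthDrawn)
    {
        columns->widthThisFrame[columnIndex] = widthDrawn;
    }
}

// At the end of the frame the recorded widths become next frame's layout,
// which is where the one frame delay comes from.
void PromoteColumnWidths(ColumnWidths* columns)
{
    for (U32 i = 0; i < 8; ++i)
    {
        columns->widthLastFrame[i] = columns->widthThisFrame[i];
        columns->widthThisFrame[i] = 0.0f;
    }
}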

Key input has a problem. The current way of dealing with key presses looks like:
void DoMyView(UI* ui, MyState* state)
{
    if (HasKeyboardFocus(state->id))
    {
        for (U32 keyIndex = 0; keyIndex < ui->keyboard.keyCount; ++keyIndex)
        {
            Handle(ui->keyboard.keys[keyIndex]);
        }
    }
    ...
}


This poses a problem: what if the key modifies what the current view is? It would be much simpler if the keyboard input were just one key instead of an array, but then we run the risk of missing a key press - and that is very bad for text input.

These one frame delays are really piling up, and the keyboard input needs to be improved. I think there is a way to solve both problems in one go. I would call it adaptive update: just call the whole application update multiple times but only render the last update.

For keyboard input, it would look like
void PlatformMain()
{
    ...
    while (!quitting)
    {
        PollMessages();
        Keyboard* keyboard = GetKeyInputSinceLastFrame();
        renderer.disabled = true;
        for (U32 i = 0; i < keyboard->keyCount; ++i)
        {
            UpdateApp(..., keyboard->keys[i]);
        }
        renderer.disabled = false;
        UpdateApp(..., 0);
    }
}


And if there's an algorithm that needs a double pass, it might be able to utilize the same functionality.

As I haven't tried it yet I'm not sure if this is a good idea or the most horrible idea I've ever had; it has the potential to be either. Updating the entire app might not be cheap when it comes to performance, but in all honesty it probably should be. It's just laying out text after all! But there is a possibility of an infinite "don't render yet!" loop.

After the remaining issues mentioned here have been fixed, the UI system is pretty much done. Of course, making it look pretty is a different story, as well as making it work on all resolutions (although that's easier since there are no image files).

If you're not tired of reading yet and you're interested in how the UI actually looks in its current state, head over to the main page and look at the screenshots. If you got curious about the layout language and that hha-file, there's a series of articles you can have a look at on the wiki page.
Simon Anciaux,
Thanks for the post. It was interesting to see that you had similar issues to the ones I did, and sometimes similar solutions.

Could you be more precise about what exactly the WidgetID struct is?

Am I understanding correctly that you process keyboard input outside of the widget code? I haven't done proper keyboard input in my UI system (except for textboxes and list navigation) but I was thinking about handling that in the widget update code.
Jens,
The WidgetID is very simple

struct WidgetID
{
    U64 e[4];
};
#define CreateID(...) WidgetID{ {(U64)__FILE__, __LINE__, ##__VA_ARGS__} }

...

void SomeView(U32 viewIndex)
{
    if (TextButton(CreateID(viewIndex), "foo"))
    {
        // You pressed foo!
    }
    
    WidgetID inputId = CreateID(viewIndex);
    if (HasKeyboardFocus(inputId))
    {
        TextInput* inputWidget = GetOrCreateUIState(TextInput, inputId);
        B32 returnWasPressed = HandleKeyInput(keyboardInput, inputWidget);
        DrawTextInput(inputWidget);
        
        if (returnWasPressed)
        {
            ... modify memory based on what's inside `inputWidget`
            ClearKeyboardFocus();
        }
    }
    else if (TextButton(inputId, "Click to edit"))
    {
        SetKeyboardFocus(inputId);
    }
}


Currently the keys are handled inside the widget, but that poses a small problem with hotkeys. If there are two views open, both with a Save button, what does Ctrl+S do? Since I only have two views I suppose the "last interacted with" view could take keyboard focus for hotkeys but that might be very unintuitive. Another solution might be to just make sure all hotkeys are unique.
Simon Anciaux,
My id struct is very similar (four u64), but I use it in a different way. The most common way I create an id is by passing a pointer to the data used or modified by the widget. It's just that the pointer is guaranteed to be unique (unless I use several widgets to modify the same data). The id can have 4 u64 because I use a sort of inheritance in my widgets. For example if I create a dropdown list, the dropdown button uses the user-provided id, the list uses an additional u64, and if there is a scrollbar in the list the scrollbar uses an additional u64, and the scrollbar buttons use the last u64. I'm still not sure this is a good solution, but it seems to work for the moment, although it sometimes adds a bit of complexity.

I hadn't thought much about it before, but your comment points out that there is a difference between keyboard navigation/activation and hotkeys. And it seems hotkeys should be handled outside of widgets. Maybe, I'm not sure.

Is it that common to have the same shortcuts for different items in different views? You point to "save", but I think what I've seen in the past is different save options (think of them as options in the file menu): "Save", "Save scene", "Save project"... and they would have different shortcuts.
Jens,
Save in particular is a bit tricky. Say you're editing with the hex view: you do a little edit here and there, then Ctrl+S to save the binary file. When I add text editing capabilities for the layout code you'll probably do something similar, but then Ctrl+S should save the layout code. I could try to teach the user to press, for example, Ctrl+Shift+S to disambiguate, but I have a hunch it might be a bit too unintuitive.

"Goto" might have a similar issue. Ctrl+G is a common hotkey to jump to a line, and a line in one view is not the same as in the other. Also, there's the case where both the left and right views are showing the same type of view - then what should be the target of the jump?

As I spell it out, I'll probably need to add a "focused view" concept to the UI.