FloBrain: The Neural Backend

Designing the multi-model operating system that routes user queries to the best AI for the job.

Product Design

User Experience Design

Research

My Role

TL;DR

Impact

01 TRUST: 75% of users trust our AI's decisions, via the "View Reasoning" feature

02 EFFICIENCY: 80% faster identification of bottlenecks

03 USABILITY: 9.2/10 average SUS score from pilot testers

04 FLEXIBILITY: 100% reduction in manual model switching

| Problem & Research

1. The Transparency Gap (Black Box)

User interviews revealed that 85% of developers found debugging AI behavior through raw, unformatted JSON logs to be their primary cause of burnout.

Data is input and a result is output, but the internal processes are hidden and unknown.

A transparent AI, where steps like "Context Analysis" and "Syncing" are visible and understandable.

The black box is present, and the desired steps are visible but disconnected and inaccessible, highlighting the lack of a UI to surface them.

The user is left confused, and the model's output is uncertain and potentially incorrect.

2. The Cost of Tab-Hopping

To find the best output, users are forced into a fragmented workflow, manually copying prompts between isolated interfaces.

This leads to "Model Lock-in," where developers stick to one AI simply to avoid the high friction of switching platforms.

Observational studies showed developers spend significant time manually validating outputs across 3+ isolated platforms, leading to inconsistent data and slower production cycles.

| Feature Categorization

Building the Intelligence Layer

1. System Observability

The Dashboard

The Objective

Provide a "control tower" view of AI health.

Features include real-time token tracking and the Workflow Engine to catch API failures before they hit the user.

Design Rationale: Reducing interaction cost via sidebars
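The "catch API failures before they hit the user" idea can be sketched as a small per-model health aggregator. This is a minimal illustration, not FloBrain's actual backend: the class names, field set, and 20% error threshold are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCall:
    """One API call made through the orchestration layer."""
    model: str
    tokens_used: int
    latency_ms: float
    ok: bool

@dataclass
class WorkflowEngine:
    """Aggregates per-model health so the dashboard can flag failures early."""
    calls: list = field(default_factory=list)
    error_threshold: float = 0.2  # hypothetical cutoff: >20% failed calls = degraded

    def record(self, call: ModelCall) -> None:
        self.calls.append(call)

    def health(self, model: str) -> dict:
        calls = [c for c in self.calls if c.model == model]
        if not calls:
            return {"status": "unknown", "tokens": 0, "avg_latency_ms": 0.0}
        failure_rate = sum(not c.ok for c in calls) / len(calls)
        return {
            "status": "degraded" if failure_rate > self.error_threshold else "healthy",
            "tokens": sum(c.tokens_used for c in calls),
            "avg_latency_ms": sum(c.latency_ms for c in calls) / len(calls),
        }
```

A dashboard polling `health()` can surface a "degraded" badge the moment a model starts failing, rather than waiting for a user-visible error.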

2. Relational Knowledge

The Memory

The Objective

Move beyond linear chat logs to a persistent, relational database.

Features a 3D Knowledge Graph to visualize exactly what data the AI is retrieving in real-time.

Design Rationale: Spatial cognition for complex relationships
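The "relational, not linear" memory model can be sketched as a tiny graph store, where retrieval walks edges instead of scrolling a chat log. The names and fields here are illustrative assumptions about the data shape behind the 3D view.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryNode:
    id: str
    content: str
    cluster: str                      # colour-coded cluster shown in the graph view
    edges: set = field(default_factory=set)

class MemoryGraph:
    """Relational store: retrieval follows edges instead of a linear log."""
    def __init__(self) -> None:
        self.nodes: dict = {}

    def add(self, node: MemoryNode) -> None:
        self.nodes[node.id] = node

    def link(self, a: str, b: str) -> None:
        # Undirected edge between two related memories
        self.nodes[a].edges.add(b)
        self.nodes[b].edges.add(a)

    def neighbourhood(self, node_id: str) -> list:
        """Roughly what the AI would retrieve alongside this node."""
        return sorted(self.nodes[node_id].edges)
```

Because every node carries a `cluster` label, the UI can colour-group related memories, which is what makes context drift visually auditable.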

3. Unified Orchestration

The Universal Router

The Objective

Eliminate "model lock-in" through a unified router.

Features include live Confidence Scoring and a "View Reasoning" panel to transform the AI from a black box to a transparent system.

Design Rationale: Progressive disclosure of complexity
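The router-plus-reasoning pattern can be sketched as follows: each model gets a scoring function that returns both a confidence and a human-readable reason, and the winner's reason is what a "View Reasoning" panel would display. The keyword scorers below are hypothetical stand-ins; a production router would use a learned classifier.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class Candidate:
    model: str
    confidence: float   # 0..1
    reason: str         # surfaced in a "View Reasoning"-style panel

def route(query: str, scorers: Dict[str, Callable[[str], Tuple[float, str]]]) -> Candidate:
    """Score every registered model on the query and return the best candidate,
    keeping the reasoning string so the UI can show *why* it was chosen."""
    best = None
    for model, scorer in scorers.items():
        confidence, reason = scorer(query)
        if best is None or confidence > best.confidence:
            best = Candidate(model, confidence, reason)
    return best

# Hypothetical keyword-based scorers for illustration only.
scorers = {
    "code-model": lambda q: (0.9 if "function" in q else 0.3, "query mentions code"),
    "chat-model": lambda q: (0.6, "general-purpose fallback"),
}
```

The design point is that whatever produces the confidence score also produces a reason, so transparency comes for free rather than being bolted on afterwards.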

| Trade-offs

1. Tabular Efficiency vs. Spatial Clarity

Exploration:

I initially sketched a high-density Table View to manage memory nodes, which is standard for technical tools. However, I questioned if the heavy color-coding would be "distracting" rather than functional for the user.

Decision:

I traded tabular speed for a Spatial Cluster Map.

Rationale:

AI memory is relational, not linear. Grouping memories into visually distinct, colour-coded clusters allows developers to audit the knowledge graph at a glance, making it easier to spot hallucinations or "context drift".

2. Chronological Feed vs. Real-time Process Monitoring

Exploration:

I initially treated the dashboard like a standard chat interface, dedicating primary real estate to a chronological "Recent Interactions" feed to help users track their past queries.

Decision:

I pivoted the layout to prioritize "Active Processes" (Neural Path Sync, Context Analysis) and "System Health" as the central focus.

Rationale:

FloBrain is a technical "Operating System," not a standard chatbot. In a developer environment, monitoring the current orchestration (latency and token usage) is more critical than scrolling through history. This ensures a "Glass Box" experience where the user can identify a system bottleneck at a glance.

Micro Details that matter

Final Designs

Logged-in user dashboard

Logged-out user's home page

Challenges and Some Gyan

How do I design a UI for a process that is traditionally invisible (Syncing, Context Analysis) without overwhelming the user with raw data?

How do I reduce cognitive load in a high-density environment?

Is it fair to trade familiarity for clarity?

(Gyan: a Sanskrit term for knowledge and wisdom gained through experience)

Thanks for reading.