Pretext AI shows how Pretext fits modern AI interfaces: streaming chat, virtualized message lists, multilingual responses, and any UI where text height needs to be known before the DOM catches up.
If your AI product renders thousands of messages, citations, tool outputs, or agent logs, DOM measurement quickly becomes the bottleneck. Pretext AI keeps layout predictable and fast.
AI UI Preview
Drag the slider — watch Pretext predict height before the DOM can measure it
Yes. Prepare the text once, predict height for the target width, and let the interface reserve space before the response finishes streaming.
Pretext predicts height before the DOM can measure it.
From ChatGPT-style chat to Perplexity-style search answers — these are the interfaces where DOM-based text measurement becomes a bottleneck.
LLM responses arrive token by token. Without height prediction, every new token triggers a DOM reflow and the chat bubble jumps. Pretext lets you reserve the right amount of space before the first token arrives.
Long conversation histories with thousands of messages need virtual scrolling. Libraries like react-window require row heights upfront. Pretext provides that estimate without mounting every message to the DOM.
AI search engines render citations, expandable source panels, and inline references. Each element has variable height that changes with content and viewport width. Pretext sizes them before render.
Coding agents produce dense tool output — diffs, terminal logs, file trees. These variable-length blocks need stable layout as new outputs stream in. Pretext handles the measurement without DOM thrashing.
AI-generated cards and summaries must reflow across mobile, tablet, and desktop. Pretext reuses the same prepared text to compute height at any width instantly — one prepare, many layouts.
English, Chinese, Japanese, Korean, and Arabic wrap differently at the same width. Pretext measures all scripts in one pipeline — no per-language measurement hacks.
Streaming responses make chat bubbles grow while content is still arriving. If your UI waits for the DOM to finish rendering before it can know the final height, nearby messages shift, scroll positions drift, and the whole interface feels less trustworthy.
Virtualized chat lists make the problem sharper. They often need a row estimate before an item mounts, not after. Without that estimate, you end up mounting, measuring, correcting, and doing more work than the interface should need in the first place.
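The row estimate a virtualizer needs before mount can come from the text alone. A minimal sketch of that idea (the fixed per-character advance and the `estimateRowHeight`/`itemSize` names are illustrative stand-ins, not the Pretext API):

```js
// Illustrative sketch: a virtualizer row estimator that never touches the DOM.
// The fixed-advance model below is a crude stand-in for Pretext's real
// prepared-text measurements.
const AVG_CHAR_WIDTH = 8;   // px, assumed average glyph advance
const LINE_HEIGHT = 20;     // px

function estimateRowHeight(text, containerWidth) {
  const charsPerLine = Math.max(1, Math.floor(containerWidth / AVG_CHAR_WIDTH));
  const lineCount = Math.max(1, Math.ceil(text.length / charsPerLine));
  return lineCount * LINE_HEIGHT;
}

// Feed the estimate to a react-window-style itemSize callback:
const messages = [
  'Hi!',
  'A much longer streamed answer that will wrap onto several lines.',
];
const itemSize = (index) => estimateRowHeight(messages[index], 320);
```

With a real measurement pipeline the per-glyph advances replace the average, but the shape of the integration is the same: the virtualizer asks for a size, and the answer is computed, not measured.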
The challenge gets harder once multilingual output enters the picture. English, Chinese, Japanese, Korean, Arabic, and mixed-script text wrap differently at the same width. DOM-based measurement can answer the question eventually, but it answers late and at a higher performance cost than many AI products can afford.
```js
// Measure after render, then patch the layout
const bubble = renderMessage(streamingText);
list.appendChild(bubble);

// DOM has to catch up before we know the size
const height = bubble.getBoundingClientRect().height;
virtualizer.updateRow(messageId, height);
scrollState.reconcile(); // layout shift

// Repeat while tokens stream in
stream.onChunk(() => {
  bubble.textContent = streamingText;
  const nextHeight = bubble.scrollHeight;
  virtualizer.updateRow(messageId, nextHeight);
});
```
Install the package, prepare your text once, then lay it out at any width. No DOM reads, no reflows.
```js
import { prepare, layout } from '@chenglou/pretext'

// 1. Prepare the streaming text once
const prepared = prepare(streamingText, '14px Inter')

// 2. Predict height at any container width — instant
const { height, lineCount } = layout(prepared, containerWidth, 20)

// 3. Reserve space before the DOM renders
virtualizer.updateRow(messageId, height)

// No getBoundingClientRect(). No layout shift.
```
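The same prepare-once pattern extends to streaming: re-run `layout` on each chunk instead of re-measuring the DOM. A sketch of the loop, with `prepare` and `layout` stubbed by a naive fixed-advance model purely so the snippet runs standalone (in a real app they come from `@chenglou/pretext`):

```js
// Stub stand-ins for prepare/layout, using an assumed 8px-per-character
// advance so this sketch is self-contained. Not the real Pretext internals.
const prepare = (text) => ({ advance: text.length * 8 });
const layout = (prepared, width, lineHeight) => {
  const lineCount = Math.max(1, Math.ceil(prepared.advance / width));
  return { height: lineCount * lineHeight, lineCount };
};

// Streaming loop: grow the reserved space as tokens arrive, no DOM reads.
let streamingText = '';
const heights = [];
for (const chunk of ['Hello', ' world, this is a streamed reply that keeps growing.']) {
  streamingText += chunk;
  const { height } = layout(prepare(streamingText), 160, 20);
  heights.push(height); // e.g. hand this to virtualizer.updateRow(messageId, height)
}
```

Each chunk costs one arithmetic pass over the text rather than one DOM write followed by one forced reflow.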
Pretext AI moves text sizing earlier in the UI pipeline, so interfaces can make steadier layout decisions before DOM measurement becomes the bottleneck.
Estimate bubble height from text, width, and line height so the UI stays stable while tokens stream in.
Use Pretext to size rows ahead of time instead of measuring every message after mount.
Measure English, Chinese, Japanese, Korean, Arabic, and mixed-script text with the same layout pipeline.
Once text is prepared, layout is arithmetic instead of a repeated DOM read or write cycle.
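To make "layout is arithmetic" concrete, here is the shape of the computation once per-word advances are cached (the advance table and `heightAtWidth` helper are invented for illustration; Pretext derives real advances during prepare):

```js
// Once a text is prepared (advances known), height at any width is pure
// arithmetic: a greedy word wrap over cached widths, with no DOM involved.
const SPACE = 4; // px, assumed space advance
const wordWidths = [30, 52, 18, 44, 61, 27]; // cached advances for six words

function heightAtWidth(widths, containerWidth, lineHeight) {
  let lines = 1;
  let lineWidth = 0;
  for (const w of widths) {
    const needed = lineWidth === 0 ? w : lineWidth + SPACE + w;
    if (needed > containerWidth) {
      lines += 1;       // word doesn't fit: break to a new line
      lineWidth = w;
    } else {
      lineWidth = needed;
    }
  }
  return lines * lineHeight;
}

// One "prepare", many layouts: recompute per breakpoint instantly.
const heights = [120, 240, 480].map((w) => heightAtWidth(wordWidths, w, 20));
```

The loop is O(words) with no allocation, which is why re-running it per width, per resize, or per streamed chunk stays cheap.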
| Use case | DOM measurement | With Pretext |
|---|---|---|
| Chat bubble sizing | Measure after render | Predict before render |
| Streaming stability | Often shifts during updates | More stable layout |
| Virtualized lists | Needs mounting to know size | Can estimate size earlier |
| Multilingual text | Depends on render cycle | Handled in one text layout pipeline |
| Performance model | DOM plus reflow | Prepare once, layout repeatedly |
This does not mean the DOM disappears from your product. It means repeated text sizing no longer has to wait on DOM measurement loops.
Common questions about using Pretext for AI chat interfaces, streaming layouts, and multilingual text measurement.
Explore the playground, test text layout with real AI output, and see how Pretext AI fits chat, agents, and multilingual UI.