# Performance
This guide covers performance optimization techniques for InAppAI React, helping you build fast, responsive chat experiences.
## Overview
InAppAI React is optimized out-of-the-box, but there are several ways to further improve performance depending on your use case.
## Bundle Size Optimization

### 1. Import Only What You Need

```tsx
// ✅ Good - tree-shakeable imports
import { InAppAI } from '@inappai/react';
import type { Message, Tool } from '@inappai/react';

// ❌ Bad - imports everything
import * as InAppAI from '@inappai/react';
```
### 2. Lazy Load the Component

For apps where the chat isn't immediately visible:

```tsx
import { lazy, Suspense } from 'react';

// Lazy load InAppAI
const InAppAI = lazy(() =>
  import('@inappai/react').then(mod => ({
    default: mod.InAppAI,
  }))
);

function App() {
  return (
    <Suspense fallback={<div>Loading chat...</div>}>
      <InAppAI
        agentId="your-agent-id"
        messages={messages}
        onMessagesChange={setMessages}
      />
    </Suspense>
  );
}
```

This reduces the initial bundle size by roughly 95 KB.
### 3. Code Splitting by Route

Only load the chat on routes that need it:

```tsx
// App.tsx
const ChatPage = lazy(() => import('./pages/ChatPage'));

function App() {
  return (
    <Routes>
      <Route path="/" element={<Home />} />
      <Route
        path="/support"
        element={
          <Suspense fallback={<Loading />}>
            <ChatPage />
          </Suspense>
        }
      />
    </Routes>
  );
}
```

```tsx
// pages/ChatPage.tsx
import { InAppAI } from '@inappai/react';

export default function ChatPage() {
  return <InAppAI {...props} />;
}
```
## Message State Optimization

### 1. Debounce Persistence

When saving messages to your backend, debounce the writes to avoid excessive requests:

```tsx
import { useMemo, useState } from 'react';
import debounce from 'lodash/debounce';

function App() {
  const [messages, setMessages] = useState<Message[]>([]);

  // Debounced save to backend; useMemo keeps a single stable debounced instance
  const saveToBackend = useMemo(
    () =>
      debounce(async (messages: Message[]) => {
        await fetch('/api/messages', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ messages }),
        });
      }, 1000),
    []
  );

  const handleMessagesChange = (newMessages: Message[]) => {
    setMessages(newMessages);
    saveToBackend(newMessages); // Debounced
  };

  return (
    <InAppAI
      messages={messages}
      onMessagesChange={handleMessagesChange}
    />
  );
}
```
### 2. Limit Message History

For long conversations, limit the messages passed to the component:

```tsx
const MAX_DISPLAYED_MESSAGES = 100;

function App() {
  const [allMessages, setAllMessages] = useState<Message[]>([]);

  // Show only recent messages
  const displayedMessages = allMessages.slice(-MAX_DISPLAYED_MESSAGES);

  return (
    <InAppAI
      messages={displayedMessages}
      onMessagesChange={setAllMessages}
    />
  );
}
```
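Note that if the component calls `onMessagesChange` with only the recent window it was given, wiring it straight to `setAllMessages` would silently drop older messages. A hedged sketch of a merge helper, assuming `onMessagesChange` receives the updated recent window (the `Message` shape here is a minimal stand-in for illustration):

```typescript
// Minimal stand-in for the library's Message type (assumption for illustration)
type Message = { id: string; content: string };

const MAX_DISPLAYED_MESSAGES = 100;

// Merge an updated "recent window" back into the full history so that
// older messages outside the window are retained.
function mergeRecent(
  all: Message[],
  updatedRecent: Message[],
  max: number = MAX_DISPLAYED_MESSAGES
): Message[] {
  const retained = all.length > max ? all.slice(0, all.length - max) : [];
  return [...retained, ...updatedRecent];
}
```

You would then call `setAllMessages(prev => mergeRecent(prev, newMessages))` in the change handler instead of passing `setAllMessages` directly.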
## Context Optimization

### 1. Memoize Context Functions

Avoid recreating context functions on every render:

```tsx
import { useCallback } from 'react';

function App() {
  // ❌ Bad - new function on every render
  // <InAppAI
  //   context={() => ({
  //     currentUrl: window.location.pathname,
  //   })}
  // />

  // ✅ Good - memoized function
  const getContext = useCallback(() => ({
    currentUrl: window.location.pathname,
    scrollPosition: window.scrollY,
  }), []);

  return <InAppAI context={getContext} />;
}
```
### 2. Keep Context Lightweight

```tsx
// ❌ Bad - large context sent with every message
context={{
  user: entireUserObject,       // Could be 50KB
  products: allProducts,        // Could be 1MB
  history: fullBrowsingHistory, // Could be huge
}}

// ✅ Good - minimal context
context={{
  userId: user.id,
  currentProduct: products.find(p => p.id === currentId),
  recentPages: history.slice(-3),
}}
```
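One way to keep yourself honest about context weight is to measure the serialized payload. A minimal sketch (`approxContextBytes` is an illustrative helper, not part of the library; the 5 KB threshold matches the guidance in the checklist):

```typescript
// Approximate the serialized size of a context object, in bytes.
// JSON.stringify length is a close approximation for mostly-ASCII payloads.
function approxContextBytes(ctx: unknown): number {
  return JSON.stringify(ctx).length;
}

const CONTEXT_WARN_BYTES = 5 * 1024;

// Warn in development if the context is getting heavy
function checkContextSize(ctx: unknown): void {
  const size = approxContextBytes(ctx);
  if (size > CONTEXT_WARN_BYTES) {
    console.warn(`Context is ${size} bytes; consider trimming it below 5KB`);
  }
}
```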
### 3. Use Static Context When Possible

```tsx
// If context doesn't change, define a static object once, outside the component
const staticContext = {
  appVersion: '2.1.0',
  environment: 'production',
};

<InAppAI context={staticContext} />
```
## Tool Optimization

### 1. Make Tool Handlers Async

Use async handlers so tool work doesn't block the main thread:

```tsx
const tools: Tool[] = [
  {
    name: 'search',
    handler: async ({ query }) => {
      // ✅ Non-blocking async operation
      const results = await searchAPI(query);
      return { success: true, results };
    },
  },
];
```
### 2. Batch Tool Operations

If a tool updates multiple items, batch them into a single state update:

```tsx
{
  name: 'addMultipleTodos',
  handler: async ({ todos }) => {
    // ✅ Good - single state update
    setTodos(prev => [...prev, ...todos]);

    // ❌ Bad - one state update per item
    // todos.forEach(todo => setTodos(prev => [...prev, todo]));

    return { success: true, count: todos.length };
  },
}
```
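The same batching idea applies to network writes inside a tool handler. A sketch of a generic chunking helper, assuming your backend can accept several items per request (`chunk` and the `/api/todos/bulk` endpoint are illustrative, not part of the library):

```typescript
// Split items into fixed-size chunks so a tool handler can issue a few
// bulk requests instead of one request per item.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage sketch inside a handler (hypothetical bulk endpoint):
// await Promise.all(chunk(todos, 25).map(batch =>
//   fetch('/api/todos/bulk', { method: 'POST', body: JSON.stringify(batch) })
// ));
```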
### 3. Optimize Tool Results

Return minimal data in tool results:

```tsx
{
  name: 'searchProducts',
  handler: async ({ query }) => {
    const products = await searchProducts(query);

    // ❌ Bad - return full product objects
    // return { success: true, products };

    // ✅ Good - return only the fields the agent needs
    return {
      success: true,
      products: products.map(p => ({
        id: p.id,
        name: p.name,
        price: p.price,
      })),
    };
  },
}
```
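The field-trimming shown above can be generalized with a small `pick` helper (illustrative, not part of the library):

```typescript
// Keep only a whitelist of fields from an object - the generic form of
// mapping each product to { id, name, price }.
function pick<T extends object, K extends keyof T>(obj: T, keys: K[]): Pick<T, K> {
  const out = {} as Pick<T, K>;
  for (const k of keys) {
    out[k] = obj[k];
  }
  return out;
}

// e.g. products.map(p => pick(p, ['id', 'name', 'price']))
```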
## Rendering Optimization

### 1. Avoid Unnecessary Re-renders

Use `React.memo` for parent components:

```tsx
import { memo } from 'react';

const ChatSection = memo(({ messages, onMessagesChange }: Props) => {
  return (
    <InAppAI
      agentId="your-agent-id"
      messages={messages}
      onMessagesChange={onMessagesChange}
    />
  );
});
```
### 2. Stabilize Callbacks

Use `useCallback` for event handlers:

```tsx
const handleMessageSent = useCallback((message: Message) => {
  analytics.track('Message Sent', { length: message.content.length });
}, []);

const handleError = useCallback((error: Error) => {
  Sentry.captureException(error);
}, []);

<InAppAI
  onMessageSent={handleMessageSent}
  onError={handleError}
/>
```
### 3. Memoize Custom Styles

```tsx
const customStyles = useMemo(() => ({
  primaryColor: '#6366f1',
  headerTitle: 'Support',
  buttonIcon: '💬',
}), []); // Empty deps = stable reference

<InAppAI customStyles={customStyles} />
```
## Real-World Benchmarks

Typical performance metrics:

| Metric | Value |
|---|---|
| Initial render | ~50ms |
| Message render | ~5ms |
| Tool execution | <10ms (local) |
| Network request | 200-2000ms (varies) |
| Re-render on message | ~3ms |
| Memory per message | ~2KB |
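These numbers vary with device, payload size, and network, so treat them as rough guides and measure in your own app. A minimal timing sketch using `Date.now()` (`timeIt` is an illustrative helper; in browsers, `performance.now()` gives sub-millisecond resolution, and for component render times the React Profiler is the better tool):

```typescript
// Measure how long a synchronous function takes, in milliseconds.
function timeIt(fn: () => void): number {
  const start = Date.now();
  fn();
  return Date.now() - start;
}

// e.g. const ms = timeIt(() => renderHeavyList(items));
```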
## Performance Checklist
- Lazy load InAppAI if not immediately visible
- Debounce backend persistence (1-2 seconds)
- Limit message history (50-100 recent messages)
- Keep context under 5KB
- Use memoization for callbacks and styles
- Monitor response times in production
## Common Performance Issues

### Issue: Slow Initial Load

**Solution:** Lazy load the component.

```tsx
const InAppAI = lazy(() =>
  import('@inappai/react').then(mod => ({ default: mod.InAppAI }))
);
```

### Issue: Laggy Typing

**Solution:** Debounce backend saves.

```tsx
const saveToBackend = debounce(async (messages: Message[]) => {
  await api.save(messages);
}, 1000);
```

### Issue: High Memory Usage

**Solution:** Limit message history.

```tsx
const recentMessages = allMessages.slice(-100);
```

### Issue: Slow Tool Execution

**Solution:** Make handlers async and optimize the underlying logic.

```tsx
handler: async ({ query }) => {
  const results = await optimizedSearch(query);
  return { success: true, results };
},
```
## Next Steps
- Architecture - Understand internal workings
- Security - Security best practices
- Troubleshooting - Debug performance issues