Initial commit

This commit is contained in:
Zhongwei Li
2025-11-29 18:26:05 +08:00
commit 8bcde7080b
26 changed files with 5957 additions and 0 deletions

View File

@@ -0,0 +1,151 @@
# Block Handlers
Run logic on every block or at intervals using the `onBlock` API (v2.29+).
## Basic Usage
```typescript
import { onBlock } from "generated";
onBlock(
{
name: "MyBlockHandler",
chain: 1,
},
async ({ block, context }) => {
context.log.info(`Processing block ${block.number}`);
}
);
```
**Note:** Block handlers don't require contract or event entries in `config.yaml`, and no codegen run is needed after adding one (but see the `preload_handlers` requirement below).
## Options
| Option | Required | Description |
|--------|----------|-------------|
| `name` | Yes | Handler name for logging/metrics |
| `chain` | Yes | Chain ID to run on |
| `interval` | No | Block interval (default: 1 = every block) |
| `startBlock` | No | Block to start from |
| `endBlock` | No | Block to end at |
## Handler Function
**Important:** Block handlers require `preload_handlers: true` in config.yaml.
```typescript
onBlock(
{ name: "HourlyStats", chain: 1, interval: 300 },
async ({ block, context }) => {
// block.number - The block number
// block.chainId - The chain ID
// context - Same as event handlers
}
);
```
## Time-Based Intervals
Convert time to blocks:
```typescript
// Every 60 minutes on Ethereum (12s blocks)
const mainnetInterval = (60 * 60) / 12; // 300 blocks
// Every 60 minutes on Optimism (2s blocks)
const optimismInterval = (60 * 60) / 2; // 1800 blocks
```
## Multichain Block Handlers
Use `forEach` for multiple chains:
```typescript
import { onBlock } from "generated";
[
{ chain: 1 as const, startBlock: 19783636, interval: 300 },
{ chain: 10 as const, startBlock: 119534316, interval: 1800 },
].forEach(({ chain, startBlock, interval }) => {
onBlock(
{ name: "HourlyPrice", chain, startBlock, interval },
async ({ block, context }) => {
// Handle block...
}
);
});
```
## Different Historical vs Realtime Intervals
Speed up historical sync with larger intervals:
```typescript
const realtimeBlock = 19783636;
// Historical: every 1000 blocks
onBlock(
{
name: "HistoricalHandler",
chain: 1,
endBlock: realtimeBlock - 1,
interval: 1000,
},
async ({ block, context }) => { /* ... */ }
);
// Realtime: every block
onBlock(
{
name: "RealtimeHandler",
chain: 1,
startBlock: realtimeBlock,
interval: 1,
},
async ({ block, context }) => { /* ... */ }
);
```
## Preset/Initial Data Handler
Load initial data on block 0:
```typescript
onBlock(
{
name: "Preset",
chain: 1,
startBlock: 0,
endBlock: 0,
},
async ({ block, context }) => {
// Skip preload phase for initial data
if (context.isPreload) return;
const response = await fetch("https://api.example.com/users");
const users = await response.json();
for (const user of users) {
context.User.set({
id: user.id,
address: user.address,
name: user.name,
});
}
}
);
```
## Use Cases
- **Hourly/Daily aggregations** - Price snapshots, volume totals
- **Time-series data** - Create periodic data points
- **Initial state loading** - Populate entities on startup
- **State snapshots** - Capture state at intervals
## Limitations
- Requires `preload_handlers: true`
- Ordered multichain mode not supported
- Only EVM chains (no Fuel)
- No test framework support yet
- Only `block.number` and `block.chainId` available

View File

@@ -0,0 +1,280 @@
# HyperIndex Configuration Reference
Complete reference for `config.yaml` options.
## Basic Structure
```yaml
# yaml-language-server: $schema=./node_modules/envio/evm.schema.json
name: indexer-name
# Global options
preload_handlers: true # Enable preload optimizations
unordered_multichain_mode: true # For multichain indexing
networks:
- id: 1 # Chain ID
start_block: 12345678
contracts:
- name: ContractName
address: 0x...
handler: src/EventHandlers.ts
events:
- event: EventSignature(...)
```
## Network Configuration
### start_block
The `start_block` field specifies where indexing begins.
**With HyperSync (default):** Setting `start_block: 0` is perfectly fine. HyperSync is extremely fast and can sync millions of blocks in minutes, so there's no performance penalty for starting from genesis.
**With RPC:** If using RPC as the data source (for unsupported networks), consider setting `start_block` to the contract deployment block to avoid slow sync times.
```yaml
networks:
- id: 1
start_block: 0 # Fine with HyperSync - it's fast!
contracts:
- name: MyContract
address: 0xContractAddress
```
### Single Network
```yaml
networks:
- id: 1
start_block: 0
contracts:
- name: MyContract
address: 0xContractAddress
handler: src/EventHandlers.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
```
### Multiple Networks (Multichain)
```yaml
# Global contract definitions
contracts:
- name: Factory
handler: src/factory.ts
events:
- event: ContractCreated(address indexed newContract, address indexed token)
networks:
- id: 1 # Ethereum
start_block: 12345678
contracts:
- name: Factory
address:
- 0xFactoryAddress1
- id: 10 # Optimism
start_block: 98765432
contracts:
- name: Factory
address:
- 0xFactoryAddress2
```
**Important:** When using multichain, define handlers and events in global `contracts` section. Network sections only specify addresses.
## Contract Configuration
### Single Address
```yaml
contracts:
- name: MyContract
address: 0xContractAddress
handler: src/EventHandlers.ts
events:
- event: Transfer(...)
```
### Multiple Addresses
```yaml
contracts:
- name: MyContract
address:
- 0xAddress1
- 0xAddress2
- 0xAddress3
handler: src/EventHandlers.ts
events:
- event: Transfer(...)
```
### Dynamic Contracts (No Address)
For contracts created by factories, omit the address field:
```yaml
contracts:
- name: Pair
handler: src/core.ts
events:
- event: Mint(address sender, uint256 amount0, uint256 amount1)
- event: Burn(address sender, uint256 amount0, uint256 amount1)
```
Register dynamically in handler:
```typescript
Factory.PairCreated.contractRegister(({ event, context }) => {
context.addPair(event.params.pair);
});
```
## Event Configuration
### Basic Event
```yaml
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
```
### With Transaction Fields
```yaml
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
field_selection:
transaction_fields:
- hash
- from
- to
- value
```
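With those fields selected, they become available on `event.transaction` inside handlers. A minimal sketch (the contract name and log message are illustrative):
```typescript
MyContract.Transfer.handler(async ({ event, context }) => {
  // Available because of the field_selection above
  const { hash, from } = event.transaction;
  context.log.info(`Transfer in tx ${hash} sent by ${from}`);
});
```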
### With Block Fields
```yaml
events:
- event: Transfer(...)
field_selection:
block_fields:
- number
- timestamp
- hash
```
## Advanced Options
### Preload Handlers
Enable for Effect API usage and performance optimization:
```yaml
preload_handlers: true
```
When enabled:
- Handlers run twice (preload + execution)
- External calls MUST use Effect API
- Use `!context.isPreload` to skip logic during preload
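For example, a handler might load entities in both phases, route external calls through the Effect API, and guard everything else behind `context.isPreload`. A minimal sketch, assuming a `getTokenMetadata` effect (as described in the Effect API guide) and a `Token` entity:
```typescript
import { getTokenMetadata } from "./effects/tokenMetadata";

MyContract.Transfer.handler(async ({ event, context }) => {
  // Runs in both phases - reads are batched and cached during preload
  const existing = await context.Token.get(event.srcAddress);
  // External calls must go through the Effect API
  const metadata = await context.effect(getTokenMetadata, event.srcAddress);

  // Everything below runs only during the execution phase
  if (context.isPreload) return;

  context.Token.set({
    id: event.srcAddress,
    name: metadata.name,
    symbol: metadata.symbol,
    decimals: BigInt(metadata.decimals),
    totalSupply: (existing?.totalSupply ?? 0n) + event.params.value,
  });
});
```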
### Unordered Multichain Mode
For multichain indexing without cross-chain ordering:
```yaml
unordered_multichain_mode: true
```
Benefits:
- Faster indexing
- Each chain processes independently
Tradeoffs:
- No guaranteed cross-chain event order
- Use when chains are independent
### Wildcard Indexing
Index by event signature across all addresses:
```yaml
contracts:
- name: ERC20
handler: src/erc20.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
# No address = wildcard indexing
```
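With no addresses listed, events are matched by signature alone. Depending on your envio version, wildcard mode is also enabled on the handler registration itself; a sketch assuming the `wildcard: true` handler option and a `Transfer` entity (verify against the wildcard indexing docs for your version):
```typescript
import { ERC20 } from "generated";

ERC20.Transfer.handler(
  async ({ event, context }) => {
    context.Transfer.set({
      id: `${event.chainId}-${event.block.number}-${event.logIndex}`,
      token_id: event.srcAddress, // the emitting token contract
      from: event.params.from,
      to: event.params.to,
      value: event.params.value,
    });
  },
  { wildcard: true }
);
```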
## Common Configurations
### DEX Indexer
```yaml
name: dex-indexer
preload_handlers: true
unordered_multichain_mode: true
contracts:
- name: Factory
handler: src/factory.ts
events:
- event: PairCreated(address indexed token0, address indexed token1, address pair, uint256)
- name: Pair
handler: src/core.ts
events:
- event: Swap(address indexed sender, uint256 amount0In, uint256 amount1In, uint256 amount0Out, uint256 amount1Out, address indexed to)
- event: Mint(address indexed sender, uint256 amount0, uint256 amount1)
- event: Burn(address indexed sender, uint256 amount0, uint256 amount1, address indexed to)
- event: Sync(uint112 reserve0, uint112 reserve1)
field_selection:
transaction_fields:
- hash
networks:
- id: 1
start_block: 10000835
contracts:
- name: Factory
address: 0x5C69bEe701ef814a2B6a3EDD4B1652CB9cc5aA6f
```
### Token Tracker
```yaml
name: token-tracker
networks:
- id: 1
start_block: 12345678
contracts:
- name: ERC20
address:
- 0xToken1
- 0xToken2
handler: src/EventHandlers.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
field_selection:
transaction_fields:
- hash
- event: Approval(address indexed owner, address indexed spender, uint256 value)
```
## Validation
Use the schema for validation:
```yaml
# yaml-language-server: $schema=./node_modules/envio/evm.schema.json
```
Run codegen to validate:
```bash
pnpm codegen
```

View File

@@ -0,0 +1,261 @@
# Accessing Contract State
Read on-chain contract state (token metadata, balances, etc.) from your event handlers using the Effect API with viem.
## When You Need This
- Token metadata (name, symbol, decimals) not in events
- Current balances or allowances
- Contract configuration values
- Any on-chain data not emitted in events
## Basic Setup
### 1. Create a Viem Client
```typescript
// src/effects/client.ts
import { createPublicClient, http } from "viem";
import { mainnet } from "viem/chains";
export const client = createPublicClient({
chain: mainnet,
batch: { multicall: true }, // Enable multicall batching
transport: http(process.env.RPC_URL, { batch: true }),
});
```
### 2. Create an Effect for RPC Calls
```typescript
// src/effects/tokenMetadata.ts
import { experimental_createEffect, S } from "envio";
import { client } from "./client";
const ERC20_ABI = [
{ name: "name", type: "function", inputs: [], outputs: [{ type: "string" }] },
{ name: "symbol", type: "function", inputs: [], outputs: [{ type: "string" }] },
{ name: "decimals", type: "function", inputs: [], outputs: [{ type: "uint8" }] },
] as const;
export const getTokenMetadata = experimental_createEffect(
{
name: "getTokenMetadata",
input: S.object({
tokenAddress: S.string,
chainId: S.number,
}),
output: S.object({
name: S.string,
symbol: S.string,
decimals: S.number,
}),
cache: true, // Cache results - same token won't be fetched twice
},
async ({ input }) => {
const address = input.tokenAddress as `0x${string}`;
const [name, symbol, decimals] = await client.multicall({
allowFailure: false,
contracts: [
{ address, abi: ERC20_ABI, functionName: "name" },
{ address, abi: ERC20_ABI, functionName: "symbol" },
{ address, abi: ERC20_ABI, functionName: "decimals" },
],
});
return { name, symbol, decimals: Number(decimals) };
}
);
```
### 3. Use in Handler
```typescript
import { getTokenMetadata } from "./effects/tokenMetadata";
UniswapV3Factory.PoolCreated.handler(async ({ event, context }) => {
// Fetch token metadata via Effect API
const token0Data = await context.effect(getTokenMetadata, {
tokenAddress: event.params.token0,
chainId: event.chainId,
});
context.Token.set({
id: event.params.token0,
name: token0Data.name,
symbol: token0Data.symbol,
decimals: token0Data.decimals,
});
});
```
## Handling Non-Standard Tokens
Some tokens (like MKR, SAI) return `bytes32` instead of `string` for name/symbol:
```typescript
import { hexToString } from "viem";
import { experimental_createEffect, S } from "envio";
import { client } from "./client";
// ERC20_ABI is the standard ABI from the previous snippet; ERC20_BYTES_ABI is defined below.
export const getTokenMetadata = experimental_createEffect(
{
name: "getTokenMetadata",
input: S.string,
output: S.object({
name: S.string,
symbol: S.string,
decimals: S.number,
}),
cache: true,
},
async ({ input: tokenAddress }) => {
const address = tokenAddress as `0x${string}`;
// Try standard ERC20 first
try {
const [name, symbol, decimals] = await client.multicall({
allowFailure: false,
contracts: [
{ address, abi: ERC20_ABI, functionName: "name" },
{ address, abi: ERC20_ABI, functionName: "symbol" },
{ address, abi: ERC20_ABI, functionName: "decimals" },
],
});
return { name, symbol, decimals: Number(decimals) };
} catch {
// Fallback: Try bytes32 variant
try {
const [name, symbol, decimals] = await client.multicall({
allowFailure: false,
contracts: [
{ address, abi: ERC20_BYTES_ABI, functionName: "name" },
{ address, abi: ERC20_BYTES_ABI, functionName: "symbol" },
{ address, abi: ERC20_BYTES_ABI, functionName: "decimals" },
],
});
return {
name: hexToString(name).replace(/\u0000/g, ""),
symbol: hexToString(symbol).replace(/\u0000/g, ""),
decimals: Number(decimals),
};
} catch {
// Final fallback
return { name: "Unknown", symbol: "???", decimals: 18 };
}
}
}
);
const ERC20_BYTES_ABI = [
{ name: "name", type: "function", inputs: [], outputs: [{ type: "bytes32" }] },
{ name: "symbol", type: "function", inputs: [], outputs: [{ type: "bytes32" }] },
{ name: "decimals", type: "function", inputs: [], outputs: [{ type: "uint8" }] },
] as const;
```
## Multichain RPC
For indexers spanning multiple chains:
```typescript
import { experimental_createEffect, S } from "envio";
import { createPublicClient, http, type Chain } from "viem";
import { mainnet, optimism, arbitrum } from "viem/chains";
const CHAINS: Record<number, { chain: Chain; rpcUrl: string }> = {
1: { chain: mainnet, rpcUrl: process.env.ETH_RPC_URL! },
10: { chain: optimism, rpcUrl: process.env.OP_RPC_URL! },
42161: { chain: arbitrum, rpcUrl: process.env.ARB_RPC_URL! },
};
function getClient(chainId: number) {
const config = CHAINS[chainId];
if (!config) throw new Error(`No RPC configured for chain ${chainId}`);
return createPublicClient({
chain: config.chain,
batch: { multicall: true },
transport: http(config.rpcUrl, { batch: true }),
});
}
export const getBalance = experimental_createEffect(
{
name: "getBalance",
input: S.object({ chainId: S.number, address: S.string }),
output: S.string,
cache: true,
},
async ({ input }) => {
const client = getClient(input.chainId);
const balance = await client.getBalance({
address: input.address as `0x${string}`,
});
return balance.toString();
}
);
```
## Important: Current vs Historical State
RPC calls return **current** blockchain state, not historical state at the event's block.
For most use cases (token metadata), this is fine - names/symbols/decimals rarely change.
For historical data (balance at specific block), you'd need archive node access:
```typescript
const balance = await client.getBalance({
address: "0x...",
blockNumber: BigInt(event.block.number), // Requires archive node
});
```
## Error Handling Pattern
```typescript
MyContract.Event.handler(async ({ event, context }) => {
try {
const metadata = await context.effect(getTokenMetadata, {
tokenAddress: event.params.token,
chainId: event.chainId,
});
context.Token.set({
id: event.params.token,
...metadata,
});
} catch (error) {
context.log.error("Failed to fetch token metadata", {
token: event.params.token,
error,
});
// Create with defaults or skip
context.Token.set({
id: event.params.token,
name: "Unknown",
symbol: "???",
decimals: 18,
});
}
});
```
## Best Practices
1. **Always use Effect API** - Never call RPC directly in handlers
2. **Enable caching** - `cache: true` prevents duplicate calls
3. **Use multicall** - Batch multiple contract reads into one RPC call
4. **Handle non-standard tokens** - Try string first, fallback to bytes32
5. **Provide fallback values** - Don't let RPC failures crash your indexer
6. **Document required env vars** - List all RPC URLs in `.env.example`
## File Organization
```
src/
├── effects/
│ ├── index.ts # Export all effects
│ ├── client.ts # Viem client setup
│ └── tokenMetadata.ts # Token-related effects
├── EventHandlers.ts
└── .env.example # RPC_URL=https://...
```

View File

@@ -0,0 +1,208 @@
# Database Indexes & Query Optimization
Optimize query performance with strategic indexing.
## Why Indexes Matter
| Data Size | Without Indexes | With Indexes |
|-----------|-----------------|--------------|
| 1,000 rows | ~10ms | ~1ms |
| 100,000 rows | ~500ms | ~2ms |
| 1,000,000+ rows | 5+ seconds | ~5ms |
## Single-Column Indexes
Use `@index` directive on frequently queried fields:
```graphql
type Transaction {
id: ID!
userAddress: String! @index
tokenAddress: String! @index
amount: BigInt!
timestamp: BigInt! @index
}
```
**Use when:**
- Frequently filter by a field
- Sort results by a field
- Field has many different values (high cardinality)
## Composite Indexes
For multi-field queries, use entity-level `@index`:
```graphql
type Transfer @index(fields: ["from", "to", "tokenId"]) {
id: ID!
from: String! @index
to: String! @index
tokenId: BigInt!
value: BigInt!
timestamp: BigInt!
}
```
Creates:
1. Individual indexes on `from` and `to`
2. Composite index on `from + to + tokenId`
**Use when:**
- Query multiple fields together
- "Find transfers from X to Y for token Z"
## Multiple Composite Indexes
```graphql
type NFTListing
@index(fields: ["collection", "status", "price"])
@index(fields: ["seller", "status"]) {
id: ID!
collection: String! @index
tokenId: BigInt!
seller: String! @index
price: BigInt!
status: String! @index # "active", "sold", "cancelled"
createdAt: BigInt! @index
}
```
Supports:
- Active listings for collection, sorted by price
- Listings by seller with status
- Recently created listings
## Automatic Indexes
HyperIndex auto-indexes:
- All `ID` fields
- All `@derivedFrom` fields
No manual indexing needed for these.
## Common Index Patterns
### Token Transfers
```graphql
type TokenTransfer {
id: ID!
token_id: String! @index
from: String! @index
to: String! @index
amount: BigInt!
blockNumber: BigInt! @index
timestamp: BigInt! @index
}
```
### DEX Swaps
```graphql
type Swap @index(fields: ["pair", "timestamp"]) {
id: ID!
pair_id: String! @index
sender: String! @index
amountIn: BigInt!
amountOut: BigInt!
timestamp: BigInt! @index
}
```
### User Activity
```graphql
type UserAction @index(fields: ["user", "actionType", "timestamp"]) {
id: ID!
user: String! @index
actionType: String! @index
timestamp: BigInt! @index
amount: BigInt!
}
```
## Performance Tradeoffs
### Write Impact
| Index Level | Write Slowdown | Read Speed |
|-------------|----------------|------------|
| No indexes | Baseline | Slowest |
| Few targeted | 5-10% | Fast |
| Many indexes | 15%+ | Fastest |
Blockchain data is read-heavy, so the extra indexes are usually worth the write overhead.
### Storage
- Each index: 2-10 bytes per row
- Consider for very large tables (millions+ rows)
## Query Optimization Tips
### Fetch Only What You Need
```graphql
# Good
query {
Transfer(where: { token: { _eq: "0x123" } }, limit: 10) {
id
amount
}
}
# Bad - unnecessary fields
query {
Transfer(where: { token: { _eq: "0x123" } }, limit: 10) {
id
from
to
amount
timestamp
blockNumber
transactionHash
# ... more fields
}
}
```
### Always Paginate
```graphql
query {
Transfer(
where: { token: { _eq: "0x123" } }
limit: 20
offset: 40 # Page 3
) {
id
amount
}
}
```
### Filter on Indexed Fields
```graphql
# Fast - userAddress is indexed
query {
Transaction(where: { userAddress: { _eq: "0x..." } }) { ... }
}
# Slow - amount is not indexed
query {
Transaction(where: { amount: { _gt: "1000" } }) { ... }
}
```
## Index Checklist
When designing schema:
- [ ] Index fields used in `where` clauses
- [ ] Index fields used in `order_by`
- [ ] Add composite indexes for multi-field queries
- [ ] Consider cardinality (high variety = good index candidate)
- [ ] Don't over-index write-heavy entities
- [ ] Test query performance with realistic data volumes

View File

@@ -0,0 +1,215 @@
# Deployment
Deploy your indexer to Envio's hosted service for production-ready infrastructure without operational overhead.
## Hosted Service Overview
Envio's hosted service provides:
- **Git-based deployments** - Push to deploy (like Vercel)
- **Zero infrastructure management** - No servers to maintain
- **Static production endpoints** - Consistent URLs, zero-downtime deploys
- **Built-in monitoring** - Logs, sync status, deployment health
- **Alerting** - Discord, Slack, Telegram, Email notifications
- **GraphQL API** - Production-ready query endpoint
- **Multi-chain support** - Single codebase, multiple networks
## Pre-Deployment Checklist
Before deploying, verify your indexer works locally:
```bash
# 1. Install dependencies
pnpm install
# 2. Generate types
pnpm codegen
# 3. Type check
pnpm tsc --noEmit
# 4. Run locally
pnpm dev
# 5. Test with TUI off to see all logs
TUI_OFF=true pnpm dev
```
### Verify:
- [ ] No TypeScript errors
- [ ] Entities are being created/updated
- [ ] No runtime errors in logs
- [ ] GraphQL queries return expected data (localhost:8080)
## Deployment Steps
### 1. Push to GitHub
Your indexer must be in a GitHub repository.
### 2. Connect to Envio
1. Go to [envio.dev/explorer](https://envio.dev/explorer)
2. Install the Envio Deployments GitHub App
3. Select your repository
### 3. Configure Deployment
- **Root directory**: Where your indexer lives (for monorepos)
- **Config file**: Path to `config.yaml`
- **Deployment branch**: Which branch triggers deploys
### 4. Deploy
Push to your deployment branch. The hosted service will:
1. Clone your repo
2. Install dependencies
3. Run codegen
4. Build and deploy
5. Start indexing
## Environment Variables
Set secrets in the Envio dashboard, not in your repo:
```bash
# Common env vars for hosted service
RPC_URL=https://... # If using custom RPC
ETH_RPC_URL=https://... # For multichain
POLYGON_RPC_URL=https://...
```
## Production Config Tips
```yaml
# config.yaml for production
name: my-production-indexer
rollback_on_reorg: true # Always enable for production
networks:
- id: 1
start_block: 18000000 # Don't start from 0 unless needed
confirmed_block_threshold: 250
contracts:
- name: MyContract
address: "0x..."
handler: src/EventHandlers.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
# Only include fields you actually use
field_selection:
transaction_fields:
- "from"
- "hash"
```
## Monitoring
The hosted service dashboard shows:
- **Sync progress** - Current block vs chain head
- **Logs** - Real-time and historical
- **Deployment status** - Build logs, errors
- **Health metrics** - Uptime, performance
## Alerting
Configure alerts for:
- Indexer stopped or crashed
- Sync falling behind
- Deployment failed
- Errors in handlers
Channels: Discord, Slack, Telegram, Email
## GraphQL Endpoint
Your production endpoint:
```
https://indexer.hyperindex.xyz/YOUR_INDEXER_SLUG/v1/graphql
```
Example query:
```graphql
query {
Transfer(limit: 10, order_by: { blockNumber: desc }) {
id
from
to
value
}
  _meta {
    progressBlock
    isReady
  }
}
```
## Version Management
- **Multiple versions** - Keep old versions running while testing new ones
- **One-click rollback** - Instantly switch to previous version
- **Zero-downtime deploys** - New version starts, traffic switches when ready
## Self-Hosting Alternative
For custom infrastructure needs:
```bash
# Basic self-hosting with Docker
git clone https://github.com/enviodev/local-docker-example
cd local-docker-example
docker-compose up
```
**Note:** Self-hosting requires managing:
- PostgreSQL database
- Docker/container orchestration
- Monitoring and alerting
- Scaling and backups
Recommended only for teams with infrastructure expertise.
## Deployment Workflow
```
┌─────────────────┐
│   Local Dev     │
│   pnpm dev      │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  Test Locally   │
│  TUI_OFF=true   │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Push to GitHub  │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  Auto Deploy    │
│  (Hosted Svc)   │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│    Monitor      │
│   Dashboard     │
└─────────────────┘
```
## Best Practices
1. **Test locally first** - Always verify before deploying
2. **Use environment variables** - Never commit secrets
3. **Enable reorg support** - `rollback_on_reorg: true`
4. **Set reasonable start_block** - Don't index from genesis unless needed
5. **Monitor after deploy** - Watch logs for first few minutes
6. **Configure alerts** - Know immediately if something breaks

View File

@@ -0,0 +1,252 @@
# Effect API for External Calls
When `preload_handlers: true` is enabled in config.yaml, all external calls (RPC, API, fetch) MUST use the Effect API.
## Why Effect API?
With preload optimizations enabled:
- Handlers run twice (preload phase + execution phase)
- Direct external calls would execute twice
- Effect API caches results and handles preload correctly
## Creating an Effect
```typescript
import { S, createEffect } from "envio";
export const getTokenMetadata = createEffect(
{
name: "getTokenMetadata", // For debugging
input: S.string, // Input schema
output: S.object({ // Output schema
name: S.string,
symbol: S.string,
decimals: S.number,
}),
rateLimit: false, // Disable rate limiting
cache: true, // Enable caching
},
async ({ input: tokenAddress, context }) => {
// External call here
const response = await fetch(`https://api.example.com/token/${tokenAddress}`);
return response.json();
}
);
```
## Schema Definition (S module)
The `S` module provides schema builders:
```typescript
import { S } from "envio";
// Primitives
S.string
S.number
S.boolean
S.bigint
// Nullable
S.union([S.string, S.null])
// Objects
S.object({
name: S.string,
value: S.number,
})
// Arrays
S.array(S.string)
// Tuples
S.tuple([S.string, S.number])
```
Full schema API: https://raw.githubusercontent.com/DZakh/sury/refs/tags/v9.3.0/docs/js-usage.md
## Using Effects in Handlers
```typescript
import { getTokenMetadata } from "./effects";
MyContract.Event.handler(async ({ event, context }) => {
// Call effect with context.effect()
const metadata = await context.effect(getTokenMetadata, event.params.token);
// Use the result
const entity = {
id: event.params.token,
name: metadata.name,
symbol: metadata.symbol,
decimals: BigInt(metadata.decimals),
};
context.Token.set(entity);
});
```
## RPC Calls with Viem
For blockchain RPC calls, use viem with Effect API:
```typescript
import { createEffect, S } from "envio";
import { createPublicClient, http, parseAbi } from "viem";
import { mainnet } from "viem/chains";
const ERC20_ABI = parseAbi([
"function name() view returns (string)",
"function symbol() view returns (string)",
"function decimals() view returns (uint8)",
"function totalSupply() view returns (uint256)",
]);
const client = createPublicClient({
chain: mainnet,
transport: http(process.env.RPC_URL),
});
export const fetchTokenData = createEffect(
{
name: "fetchTokenData",
input: S.string,
output: S.object({
name: S.string,
symbol: S.string,
decimals: S.number,
totalSupply: S.string,
}),
cache: true,
},
async ({ input: tokenAddress }) => {
try {
const [name, symbol, decimals, totalSupply] = await Promise.all([
client.readContract({
address: tokenAddress as `0x${string}`,
abi: ERC20_ABI,
functionName: "name",
}),
client.readContract({
address: tokenAddress as `0x${string}`,
abi: ERC20_ABI,
functionName: "symbol",
}),
client.readContract({
address: tokenAddress as `0x${string}`,
abi: ERC20_ABI,
functionName: "decimals",
}),
client.readContract({
address: tokenAddress as `0x${string}`,
abi: ERC20_ABI,
functionName: "totalSupply",
}),
]);
return {
name,
symbol,
decimals: Number(decimals),
totalSupply: totalSupply.toString(),
};
} catch (error) {
// Return defaults on error
return {
name: "Unknown",
symbol: "???",
decimals: 18,
totalSupply: "0",
};
}
}
);
```
## Multichain RPC
For multichain indexers, pass chainId to select correct RPC:
```typescript
import { createEffect, S } from "envio";
import { createPublicClient, http } from "viem";
const RPC_URLS: Record<number, string> = {
1: process.env.ETH_RPC_URL!,
10: process.env.OP_RPC_URL!,
137: process.env.POLYGON_RPC_URL!,
};
export const fetchBalance = createEffect(
{
name: "fetchBalance",
input: S.object({
chainId: S.number,
address: S.string,
}),
output: S.string,
cache: true,
},
async ({ input }) => {
const rpcUrl = RPC_URLS[input.chainId];
if (!rpcUrl) throw new Error(`No RPC for chain ${input.chainId}`);
const client = createPublicClient({
transport: http(rpcUrl),
});
const balance = await client.getBalance({
address: input.address as `0x${string}`,
});
return balance.toString();
}
);
// Usage in handler
MyContract.Event.handler(async ({ event, context }) => {
const balance = await context.effect(fetchBalance, {
chainId: event.chainId,
address: event.params.user,
});
});
```
## Skipping Preload Logic
Use `!context.isPreload` to skip non-essential logic during preload:
```typescript
MyContract.Event.handler(async ({ event, context }) => {
// This always runs
const entity = await context.Token.get(event.params.token);
if (!context.isPreload) {
// This only runs during actual execution
console.log("Processing event:", event.logIndex);
}
if (entity) {
  context.Token.set(entity);
}
});
```
## Best Practices
1. **Always cache when possible** - Set `cache: true` for idempotent calls
2. **Handle errors gracefully** - Return default values on failure
3. **Batch calls** - Use `Promise.all()` for multiple independent calls
4. **Organize effects** - Create `src/effects/` directory for effect definitions
5. **Use typed inputs** - Define proper schemas for type safety
6. **Document env vars** - List required RPC URLs in `.env.example`
## File Organization
```
src/
├── effects/
│ ├── index.ts # Export all effects
│ ├── tokenMetadata.ts # Token-related effects
│ └── pricing.ts # Price fetching effects
├── EventHandlers.ts
└── utils.ts
```

View File

@@ -0,0 +1,301 @@
# Entity Patterns in HyperIndex
Patterns for defining and working with entities in HyperIndex.
## Schema Definition
### Basic Entity
```graphql
type Token {
id: ID!
name: String!
symbol: String!
decimals: BigInt!
totalSupply: BigInt!
}
```
**Key differences from TheGraph:**
- No `@entity` decorator needed
- Use `String!` instead of `Bytes!`
- Use `BigInt!` for raw on-chain integers such as `uint256` values (`BigDecimal!` is reserved for decimal-adjusted amounts)
### Entity Relationships
Use `_id` suffix for relationships:
```graphql
type Transfer {
id: ID!
from: String!
to: String!
amount: BigInt!
token_id: String! # References Token.id
blockNumber: BigInt!
}
type Token {
id: ID!
name: String!
symbol: String!
transfers: [Transfer!]! @derivedFrom(field: "token")
}
```
**Critical:** Entity arrays MUST have `@derivedFrom`:
```graphql
# WRONG - Will fail codegen
type Transaction {
mints: [Mint!]! # Missing @derivedFrom
}
# CORRECT
type Transaction {
id: ID!
mints: [Mint!]! @derivedFrom(field: "transaction")
}
type Mint {
id: ID!
transaction_id: String! # The relationship field
}
```
### Optional Fields
Use nullable types for optional fields:
```graphql
type Token {
id: ID!
name: String!
symbol: String!
logoUrl: String # Optional - no !
}
```
**Important:** Use `undefined` not `null` in TypeScript:
```typescript
const token = {
id: "0x...",
name: "Token",
symbol: "TKN",
logoUrl: undefined, // Not null
};
```
## Creating Entities
### Basic Creation
```typescript
MyContract.Transfer.handler(async ({ event, context }) => {
const transfer = {
id: `${event.chainId}-${event.transaction.hash}-${event.logIndex}`,
from: event.params.from,
to: event.params.to,
amount: event.params.amount,
token_id: event.srcAddress, // Relationship
blockNumber: BigInt(event.block.number),
};
context.Transfer.set(transfer);
});
```
### With Multichain ID
Always prefix IDs with chainId for multichain:
```typescript
const id = `${event.chainId}-${event.params.tokenId}`;
```
### Entity ID Patterns
```typescript
// Event-based (unique per event)
`${event.chainId}-${event.transaction.hash}-${event.logIndex}`
// Address-based (singleton per address per chain)
`${event.chainId}-${event.srcAddress}`
// Composite (multiple keys)
`${event.chainId}-${event.params.user}-${event.params.tokenId}`
// Time-based (daily aggregates)
`${event.chainId}-${dayTimestamp}-${event.srcAddress}`
```
## Updating Entities
**Entities are immutable.** Use spread operator for updates:
```typescript
MyContract.Transfer.handler(async ({ event, context }) => {
const token = await context.Token.get(event.srcAddress);
if (token) {
// CORRECT - Use spread operator
const updatedToken = {
...token,
totalSupply: token.totalSupply + event.params.amount,
lastUpdated: BigInt(event.block.timestamp),
};
context.Token.set(updatedToken);
}
});
```
**Never mutate directly:**
```typescript
// WRONG - Entities are read-only
token.totalSupply = newSupply;
context.Token.set(token); // Won't work
```
## Loading Entities
### Get by ID
```typescript
const token = await context.Token.get(tokenId);
if (token) {
// Token exists
}
```
### Get or Create Pattern
```typescript
MyContract.Transfer.handler(async ({ event, context }) => {
let token = await context.Token.get(event.srcAddress);
if (!token) {
token = {
id: event.srcAddress,
name: "Unknown",
symbol: "???",
decimals: BigInt(18),
totalSupply: BigInt(0),
};
}
const updatedToken = {
...token,
totalSupply: token.totalSupply + event.params.amount,
};
context.Token.set(updatedToken);
});
```
## Querying Related Entities
`@derivedFrom` arrays are virtual - cannot access in handlers:
```typescript
// WRONG - derivedFrom arrays don't exist in handlers
const transfers = token.transfers;
// CORRECT - Query using indexed field
// (If using indexed field operations)
const transfers = await context.Transfer.getWhere.token_id.eq(tokenId);
```
## BigDecimal Handling
For precision in calculations, use BigDecimal:
```typescript
import { BigDecimal } from "generated";
const ZERO_BD = new BigDecimal(0);
const ONE_BD = new BigDecimal(1);
// Convert token amount to decimal
function convertToDecimal(amount: bigint, decimals: bigint): BigDecimal {
const divisor = new BigDecimal((10n ** decimals).toString());
return new BigDecimal(amount.toString()).div(divisor);
}
```
**Schema types vs code types:**
- Schema `BigInt!` → TypeScript `bigint`
- Schema `BigDecimal!` → TypeScript `BigDecimal`
- Schema `Int!` → TypeScript `number`
## Timestamp Handling
Always cast timestamps:
```typescript
// CORRECT
timestamp: BigInt(event.block.timestamp)
// For day calculations
const dayTimestamp = Math.floor(event.block.timestamp / 86400) * 86400;
const dayId = `${event.chainId}-${dayTimestamp}`;
```
## Common Entity Types
### Token Entity
```graphql
type Token {
id: ID!
address: String!
name: String!
symbol: String!
decimals: BigInt!
totalSupply: BigInt!
transfers: [Transfer!]! @derivedFrom(field: "token")
}
```
### Transfer Entity
```graphql
type Transfer {
id: ID!
token_id: String!
from: String!
to: String!
amount: BigInt!
blockNumber: BigInt!
blockTimestamp: BigInt!
transactionHash: String!
}
```
### Daily Aggregate Entity
```graphql
type TokenDayData {
id: ID! # chainId-dayTimestamp-tokenAddress
token_id: String!
date: Int! # Unix timestamp of day start
volume: BigDecimal!
txCount: BigInt!
open: BigDecimal!
high: BigDecimal!
low: BigDecimal!
close: BigDecimal!
}
```
## Entity Checklist
When defining entities:
- [ ] Use `ID!` for id field
- [ ] Use `String!` for addresses (not `Bytes!`)
- [ ] Use `_id` suffix for relationships
- [ ] Add `@derivedFrom` to all entity arrays
- [ ] No `@entity` decorator
- [ ] Consider multichain ID prefixes
- [ ] Match field types exactly (BigInt vs BigDecimal vs Int)

View File

@@ -0,0 +1,327 @@
# GraphQL Querying
Query indexed data via GraphQL. Works locally during development or on hosted deployments.
**Local endpoint:** `http://localhost:8080/v1/graphql`
**Hasura Console:** `http://localhost:8080` (password: `testing`)
## Checking Indexing Progress
**Always check sync status first** before assuming data is missing.
### Using `_meta` (Recommended)
```bash
curl -s http://localhost:8080/v1/graphql \
-H "Content-Type: application/json" \
-d '{"query": "{ _meta { chainId startBlock progressBlock sourceBlock eventsProcessed isReady } }"}'
```
Or in GraphQL:
```graphql
{
_meta {
chainId
startBlock
progressBlock
sourceBlock
eventsProcessed
isReady
readyAt
}
}
```
**Fields:**
- `progressBlock` - Last fully processed block
- `sourceBlock` - Latest known block from data source (target)
- `eventsProcessed` - Total events processed
- `isReady` - `true` when historical sync is complete
- `readyAt` - Timestamp when sync finished
**Example response:**
```json
{
"_meta": [
{
"chainId": 1,
"progressBlock": 22817138,
"sourceBlock": 23368264,
"eventsProcessed": 2380000,
"isReady": false
}
]
}
```
### Filter by Chain
```graphql
{
_meta(where: { chainId: { _eq: 1 } }) {
progressBlock
isReady
}
}
```
### Using `chain_metadata`
More detailed chain information:
```bash
curl -s http://localhost:8080/v1/graphql \
-H "Content-Type: application/json" \
-d '{"query": "{ chain_metadata { chain_id start_block latest_processed_block num_events_processed is_hyper_sync } }"}'
```
**Additional fields:**
- `is_hyper_sync` - Whether using HyperSync (fast) or RPC
- `latest_fetched_block_number` - Latest block fetched from source
- `num_batches_fetched` - Number of batches processed
## Basic Queries
### Query Entities
```bash
curl -s http://localhost:8080/v1/graphql \
-H "Content-Type: application/json" \
-d '{"query": "{ Transfer(limit: 10) { id from to amount blockNumber } }"}'
```
### With Ordering
```graphql
{
Transfer(order_by: { blockNumber: desc }, limit: 10) {
id
from
to
amount
}
}
```
### With Filters
```graphql
{
Transfer(where: { from: { _eq: "0x123..." } }, limit: 100) {
id
from
to
amount
}
}
```
### Filter by Chain (Multichain)
```bash
curl -s http://localhost:8080/v1/graphql \
-H "Content-Type: application/json" \
-d '{"query": "{ Transfer(where: {chainId: {_eq: 42161}}, limit: 10) { id chainId from to amount } }"}'
```
## Filter Operators
| Operator | Description | Example |
|----------|-------------|---------|
| `_eq` | Equals | `{field: {_eq: "value"}}` |
| `_neq` | Not equals | `{field: {_neq: "value"}}` |
| `_gt` | Greater than | `{amount: {_gt: "100"}}` |
| `_gte` | Greater than or equal | `{amount: {_gte: "100"}}` |
| `_lt` | Less than | `{amount: {_lt: "100"}}` |
| `_lte` | Less than or equal | `{amount: {_lte: "100"}}` |
| `_in` | In list | `{chainId: {_in: [1, 10]}}` |
| `_nin` | Not in list | `{chainId: {_nin: [1]}}` |
| `_is_null` | Is null | `{field: {_is_null: true}}` |
| `_like` | Pattern match | `{id: {_like: "1_%"}}` |
| `_ilike` | Case-insensitive pattern | `{user: {_ilike: "%abc%"}}` |
**Important:** BigInt values must be quoted strings: `{amount: {_gt: "1000000000000000000"}}`
## Logical Operators
```graphql
# AND - all conditions must match
where: { _and: [{ chainId: { _eq: 1 } }, { amount: { _gt: "0" } }] }
# OR - any condition matches
where: { _or: [{ from: { _eq: "0x123" } }, { to: { _eq: "0x123" } }] }
# NOT - negate condition
where: { _not: { amount: { _eq: "0" } } }
```
## Pagination
### Limit and Offset
```graphql
{
Transfer(limit: 100, offset: 200) {
id
}
}
```
### Cursor-based (by primary key)
```graphql
{
Transfer(limit: 100, where: { id: { _gt: "last_seen_id" } }, order_by: { id: asc }) {
id
}
}
```
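From application code, cursor pagination is a simple loop; a minimal sketch against the local endpoint, assuming the `Transfer` entity from the examples above:
```typescript
async function fetchAllTransferIds(): Promise<string[]> {
  const endpoint = "http://localhost:8080/v1/graphql";
  const pageSize = 100;
  let lastId = "";
  const ids: string[] = [];
  while (true) {
    const res = await fetch(endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        query: `{
          Transfer(
            limit: ${pageSize}
            where: { id: { _gt: "${lastId}" } }
            order_by: { id: asc }
          ) { id }
        }`,
      }),
    });
    const { data } = await res.json();
    const page: { id: string }[] = data.Transfer;
    ids.push(...page.map((t) => t.id));
    if (page.length < pageSize) break; // last page reached
    lastId = page[page.length - 1].id; // cursor for the next request
  }
  return ids;
}
```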
## Common Query Patterns
### Recent Transfers for User
```graphql
query UserTransfers($address: String!) {
Transfer(
where: {
_or: [
{ from: { _eq: $address } },
{ to: { _eq: $address } }
]
}
order_by: { blockTimestamp: desc }
limit: 50
) {
id
from
to
amount
blockTimestamp
}
}
```
### Polling for Updates
```graphql
query NewTransfers($lastTimestamp: BigInt!) {
Transfer(where: { blockTimestamp: { _gt: $lastTimestamp } }) {
id
from
to
amount
blockTimestamp
}
}
```
### Get by Primary Key
```bash
curl -s http://localhost:8080/v1/graphql \
-H "Content-Type: application/json" \
-d '{"query": "{ Transfer_by_pk(id: \"1_0xabc..._0\") { id from to amount } }"}'
```
## Discovering Schema
### List All Query Types
```bash
curl -s http://localhost:8080/v1/graphql \
-H "Content-Type: application/json" \
-d '{"query": "{ __schema { queryType { fields { name } } } }"}'
```
### Get Entity Fields
```bash
curl -s http://localhost:8080/v1/graphql \
-H "Content-Type: application/json" \
-d '{"query": "{ __type(name: \"Transfer\") { fields { name type { name } } } }"}'
```
## Aggregations
**Local:** Aggregation queries work in Hasura console.
**Hosted Service:** Aggregations are **disabled** for performance.
**Best Practice:** Compute aggregates at indexing time:
```graphql
# Schema
type GlobalStats {
id: ID!
totalTransfers: Int!
totalVolume: BigDecimal!
}
```
```typescript
// Handler - update on each transfer
const stats = (await context.GlobalStats.get("global")) ?? {
  id: "global",
  totalTransfers: 0,
  totalVolume: new BigDecimal(0),
};
context.GlobalStats.set({
  ...stats,
  totalTransfers: stats.totalTransfers + 1,
  totalVolume: stats.totalVolume.plus(transferAmount),
});
```
Then query precomputed values:
```graphql
{
GlobalStats(where: { id: { _eq: "global" } }) {
totalTransfers
totalVolume
}
}
```
## Hasura Console
Open `http://localhost:8080` for the visual interface.
**API Tab:**
- Execute GraphQL queries
- Explorer shows all entities
- Test queries before frontend integration
**Data Tab:**
- View database tables directly
- Check `db_write_timestamp` for freshness
- Manually inspect entities
## Fetch from Code
```typescript
const response = await fetch('http://localhost:8080/v1/graphql', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
query: `
query {
Transfer(limit: 10) {
id
from
to
amount
}
}
`
})
});
const data = await response.json();
```
## Best Practices
1. **Check `_meta` first** - Verify indexer progress before assuming data is missing
2. **Fetch only needed fields** - Reduces response size
3. **Use pagination** - Never query unlimited results
4. **Filter on indexed fields** - Use `@index` columns for faster queries
5. **Poll with timestamps** - Fetch only new data for real-time updates
6. **Precompute aggregates** - At indexing time, not query time
7. **BigInt as strings** - Always quote large numbers in filters

View File

@@ -0,0 +1,158 @@
# Logging & Debugging
Effective logging is essential for troubleshooting indexer issues. HyperIndex uses [pino](https://github.com/pinojs/pino) for high-performance logging.
## context.log Methods
Use the logging methods available on the context object in handlers:
```typescript
MyContract.Event.handler(async ({ event, context }) => {
// Different severity levels
context.log.debug(`Processing transfer ${event.transaction.hash}`);
context.log.info(`Transfer from ${event.params.from} to ${event.params.to}`);
context.log.warn(`Large transfer detected: ${event.params.value}`);
context.log.error(`Failed to process: ${event.transaction.hash}`);
});
```
## Structured Logging
Pass an object as the second argument for structured logs:
```typescript
context.log.info("Processing transfer", {
from: event.params.from,
to: event.params.to,
value: event.params.value.toString(),
block: event.block.number,
});
// With error object
context.log.error("Handler failed", {
error: err,
event: event.transaction.hash,
});
```
## Debugging Workflow
### Disable TUI for Full Logs
The Terminal UI can hide errors. Disable it to see all output:
```bash
# Option 1: Environment variable
TUI_OFF=true pnpm dev
# Option 2: Flag
pnpm dev --tui-off
```
### Recommended Debug Command
```bash
TUI_OFF=true pnpm dev 2>&1 | tee debug.log
```
This shows all output AND saves to a file for later analysis.
## Environment Variables
### Log Level
```bash
# Console log level (default: "info")
LOG_LEVEL="debug" # Show debug logs
LOG_LEVEL="trace" # Most verbose
# File log level (default: "trace")
FILE_LOG_LEVEL="debug"
```
### Log Strategy
```bash
# Default: Human-readable with colors
LOG_STRATEGY="console-pretty"
# ECS format for Elastic Stack / Kibana
LOG_STRATEGY="ecs-file"
LOG_STRATEGY="ecs-console"
# Efficient file-only logging
LOG_STRATEGY="file-only"
LOG_FILE="./indexer.log"
# Both console and file
LOG_STRATEGY="both-prettyconsole"
LOG_FILE="./debug.log"
```
## Common Debugging Patterns
### Log Entity State
```typescript
MyContract.Event.handler(async ({ event, context }) => {
const entity = await context.Account.get(event.params.user);
context.log.debug("Entity state before update", {
id: event.params.user,
exists: !!entity,
currentBalance: entity?.balance?.toString() ?? "N/A",
});
// ... update logic
});
```
### Log Only During Execution (Skip Preload)
```typescript
MyContract.Event.handler(async ({ event, context }) => {
// Preload phase: load data
const account = await context.Account.get(event.params.user);
// Only log during actual execution
if (!context.isPreload) {
context.log.info("Processing account", {
id: event.params.user,
balance: account?.balance?.toString(),
});
}
// ... rest of handler
});
```
### Debug Missing Data
```typescript
MyContract.Event.handler(async ({ event, context }) => {
const token = await context.Token.get(event.params.token);
if (!token) {
context.log.warn("Token not found - may be created by later event", {
tokenAddress: event.params.token,
block: event.block.number,
txHash: event.transaction.hash,
});
return;
}
// ...
});
```
## Preload Phase Logging Note
**Important:** `context.log` calls are ignored during the preload phase. Logs only appear during the execution phase. This is intentional - it prevents duplicate log entries since handlers run twice with preload optimization enabled.
## Troubleshooting Checklist
1. **Can't see errors?** → Run with `TUI_OFF=true`
2. **Need more detail?** → Set `LOG_LEVEL="debug"` or `"trace"`
3. **Want persistent logs?** → Set `LOG_STRATEGY="both-prettyconsole"` with `LOG_FILE`
4. **Logs appearing twice?** → Usually a `console.log` outside a `!context.isPreload` guard; it runs in both phases (`context.log` is deduplicated automatically)
5. **No logs at all?** → Remember `context.log` output only appears during the execution phase; check the handler isn't returning before the logging call

View File

@@ -0,0 +1,190 @@
# Multichain Indexing
Index contracts across multiple blockchain networks in a single indexer.
## Config Structure
Define contracts globally, addresses per network:
```yaml
# Global contract definitions
contracts:
- name: Factory
handler: src/factory.ts
events:
- event: PairCreated(address indexed token0, address indexed token1, address pair)
- name: Pair
handler: src/pair.ts
events:
- event: Swap(...)
# Network-specific addresses
networks:
- id: 1 # Ethereum
start_block: 10000835
contracts:
- name: Factory
address: 0xEthereumFactoryAddress
- id: 10 # Optimism
start_block: 1234567
contracts:
- name: Factory
address: 0xOptimismFactoryAddress
- id: 137 # Polygon
start_block: 9876543
contracts:
- name: Factory
address: 0xPolygonFactoryAddress
```
## Entity ID Namespacing
**Critical:** Always prefix IDs with chainId to avoid collisions:
```typescript
// CORRECT - Unique across chains
const id = `${event.chainId}-${event.params.tokenId}`;
const pairId = `${event.chainId}-${event.srcAddress}`;
// WRONG - Collision between chains
const id = event.params.tokenId.toString();
```
## Multichain Modes
### Unordered Mode (Recommended)
Process events as soon as available from each chain:
```yaml
unordered_multichain_mode: true
```
**Benefits:**
- Better performance
- Lower latency
- Each chain processes independently
**When to use:**
- Operations are commutative (order doesn't matter)
- Entities from different networks don't interact
- Processing speed more important than cross-chain ordering
### Ordered Mode (Default)
Strict deterministic ordering across all chains:
```yaml
# Default - no flag needed (will change to unordered in future)
```
**When to use:**
- Bridge applications requiring deposit-before-withdrawal ordering
- Cross-chain governance
- Multi-chain financial applications requiring exact sequence
- Data consistency systems
**Tradeoffs:**
- Higher latency (waits for slowest chain)
- Processing speed limited by slowest block time
- Guaranteed deterministic results
## Handler Patterns
Access chainId in handlers:
```typescript
Factory.PairCreated.handler(async ({ event, context }) => {
// Use chainId for unique IDs
const pairId = `${event.chainId}-${event.params.pair}`;
const token0Id = `${event.chainId}-${event.params.token0}`;
const token1Id = `${event.chainId}-${event.params.token1}`;
context.Pair.set({
id: pairId,
chainId: event.chainId,
token0_id: token0Id,
token1_id: token1Id,
address: event.params.pair,
});
});
```
## Schema for Multichain
Include chainId in entities when needed:
```graphql
type Pair {
id: ID! # chainId-address format
chainId: Int!
address: String!
token0_id: String!
token1_id: String!
}
type Token {
id: ID! # chainId-address format
chainId: Int!
address: String!
symbol: String!
}
```
## Best Practices
1. **ID Namespacing** - Always include chainId in entity IDs
2. **Error Handling** - Failures on one chain shouldn't stop others
3. **Use Unordered Mode** - Unless cross-chain ordering is critical
4. **Monitor Resources** - Multiple chains increase load
5. **Test All Networks** - Verify handlers work on each chain
## Troubleshooting
**Different Network Speeds:**
- Use unordered mode to prevent bottlenecks
**Entity Conflicts:**
- Verify IDs are properly namespaced with chainId
**Memory Usage:**
- Optimize entity structure
- Implement pagination in queries
## Example: Multichain DEX
```yaml
name: multichain-dex
unordered_multichain_mode: true
contracts:
- name: Factory
handler: src/factory.ts
events:
- event: PairCreated(address indexed token0, address indexed token1, address pair, uint256)
- name: Pair
handler: src/pair.ts
events:
- event: Swap(address indexed sender, uint256 amount0In, uint256 amount1In, uint256 amount0Out, uint256 amount1Out, address indexed to)
networks:
- id: 1
start_block: 10000835
contracts:
- name: Factory
address: 0x5C69bEe701ef814a2B6a3EDD4B1652CB9cc5aA6f
- id: 10
start_block: 1234567
contracts:
- name: Factory
address: 0xOptimismFactory
- id: 8453
start_block: 1234567
contracts:
- name: Factory
address: 0xBaseFactory
```

View File

@@ -0,0 +1,179 @@
# Preload Optimization
> **Key concept:** Handlers run TWICE - first for preloading, then for execution.
Preload optimization is HyperIndex's flagship performance feature. It reduces database roundtrips from thousands to single digits by batching reads across events.
## Why Preload Exists
**The Problem:**
```typescript
// Without preload: 5,000 Transfer events = 10,000 DB calls
ERC20.Transfer.handler(async ({ event, context }) => {
const sender = await context.Account.get(event.params.from); // DB call 1
const receiver = await context.Account.get(event.params.to); // DB call 2
});
```
**With Preload:** All 5,000 events preload concurrently, batching identical entity types into single queries. Result: **10,000 calls → 2 calls**.
## How It Works
### Phase 1: Preload (Concurrent)
- All handlers run in parallel for the entire batch
- Database reads are batched and deduplicated
- Entity writes are SKIPPED
- `context.log` calls are SKIPPED
- Errors are silently caught (won't crash)
### Phase 2: Execution (Sequential)
- Handlers run one-by-one in on-chain order
- Reads come from in-memory cache (instant)
- Entity writes persist to database
- Logging works normally
- Errors will crash the indexer
## Configuration
```yaml
# config.yaml
preload_handlers: true # Default since envio@2.27
```
## Checking Which Phase You're In
```typescript
MyContract.Event.handler(async ({ event, context }) => {
// This runs in BOTH phases
const account = await context.Account.get(event.params.user);
if (context.isPreload) {
// Preload phase only - skip heavy logic
return;
}
// Execution phase only
context.log.info("Processing...");
// CPU-intensive operations
// Side effects
});
```
## Optimize with Promise.all
Concurrent reads in preload = fewer batched queries:
```typescript
// GOOD: Concurrent reads
ERC20.Transfer.handler(async ({ event, context }) => {
const [sender, receiver] = await Promise.all([
context.Account.get(event.params.from),
context.Account.get(event.params.to),
]);
// ...
});
// LESS OPTIMAL: Sequential reads
ERC20.Transfer.handler(async ({ event, context }) => {
const sender = await context.Account.get(event.params.from);
const receiver = await context.Account.get(event.params.to);
// ...
});
```
## Critical Footguns
### Never Call fetch() Directly
```typescript
// WRONG - fetch runs TWICE
MyContract.Event.handler(async ({ event, context }) => {
const data = await fetch(`https://api.example.com/${event.params.id}`);
});
// CORRECT - Use Effect API
import { getMetadata } from "./effects";
MyContract.Event.handler(async ({ event, context }) => {
const data = await context.effect(getMetadata, event.params.id);
});
```
### Never Use External APIs Without Effect API
Any external call (RPC, REST, GraphQL) must use the Effect API. See `effect-api.md` for details.
### Side Effects Run Twice
```typescript
// WRONG - Analytics call runs twice!
MyContract.Event.handler(async ({ event, context }) => {
await sendToAnalytics(event); // Called in preload AND execution
});
// CORRECT - Guard with isPreload
MyContract.Event.handler(async ({ event, context }) => {
if (!context.isPreload) {
await sendToAnalytics(event); // Only runs once
}
});
```
## When to Use context.isPreload
Use the `context.isPreload` check for:
1. **CPU-intensive operations** - Skip during preload
2. **Side effects that can't be rolled back** - Analytics, webhooks
3. **Logging** - Already skipped by default, but explicit if needed
4. **Operations that depend on previous events' writes**
```typescript
MyContract.Event.handler(async ({ event, context }) => {
// ALWAYS runs (both phases) - data loading
const [entity1, entity2] = await Promise.all([
context.Entity1.get(event.params.id1),
context.Entity2.get(event.params.id2),
]);
// Early return after loading in preload phase
if (context.isPreload) return;
// ONLY execution phase - actual processing
const result = expensiveCalculation(entity1, entity2);
context.Entity1.set({
...entity1,
processedValue: result,
});
});
```
## Preload Behavior Summary
| Operation | Preload Phase | Execution Phase |
|-----------|---------------|-----------------|
| `context.Entity.get()` | Batched, cached | From cache |
| `context.Entity.set()` | Ignored | Persisted |
| `context.log.*()` | Ignored | Works |
| `context.effect()` | Batched, cached | From cache |
| Exceptions | Silently caught | Crash indexer |
| Direct `fetch()` | Runs (BAD!) | Runs again (BAD!) |
## Performance Impact Example
Indexing 100,000 Transfer events:
| Approach | DB Roundtrips | Time |
|----------|---------------|------|
| No preload | 200,000 | ~10 min |
| With preload (sequential reads) | 2 | ~5 sec |
| With preload + Promise.all | 1 | ~3 sec |
## Best Practices
1. **Place reads at handler start** - Maximize preload benefit
2. **Use Promise.all for multiple reads** - Reduce to single batch
3. **Use Effect API for ALL external calls** - Automatic batching/caching
4. **Skip non-essential logic with `context.isPreload`** - Faster preload
5. **Don't worry about "entity not found"** - Preload is optimistic; execution phase has correct data

View File

@@ -0,0 +1,181 @@
# Chain Reorganization (Reorg) Support
HyperIndex automatically handles chain reorganizations to keep your indexed data consistent with the blockchain's canonical state.
## What Are Reorgs?
Chain reorganizations occur when the blockchain temporarily forks and then resolves to a single chain. When this happens:
- Some previously confirmed blocks get replaced
- Transactions may be dropped or reordered
- Indexed data may no longer be valid
HyperIndex detects reorgs and automatically rolls back affected data.
## Configuration
### Enable/Disable (Default: Enabled)
```yaml
# config.yaml
rollback_on_reorg: true # Default - recommended for production
```
### Confirmation Threshold
Configure how many blocks must pass before data is considered "final":
```yaml
# config.yaml
rollback_on_reorg: true
networks:
- id: 1 # Ethereum
confirmed_block_threshold: 250 # Higher for Ethereum mainnet
- id: 137 # Polygon
confirmed_block_threshold: 150 # Lower for faster chains
- id: 42161 # Arbitrum
# Uses default: 200 blocks
```
**Default threshold:** 200 blocks for all networks.
## What Gets Rolled Back
When a reorg is detected:
| Rolled Back | NOT Rolled Back |
|-------------|-----------------|
| All entity data | External API calls |
| Schema entities | Webhooks sent |
| Database writes | Logs written to files |
| | Analytics events |
## Example Configuration
```yaml
# Production config with reorg handling
name: my-indexer
rollback_on_reorg: true
networks:
- id: 1 # Ethereum Mainnet
confirmed_block_threshold: 250
start_block: 18000000
contracts:
- name: MyContract
address: "0x..."
handler: src/EventHandlers.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
- id: 10 # Optimism
confirmed_block_threshold: 100 # Faster finality
start_block: 100000000
contracts:
- name: MyContract
address: "0x..."
handler: src/EventHandlers.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
```
## Best Practices
### 1. Keep Reorg Support Enabled
```yaml
rollback_on_reorg: true # Always for production
```
Only disable for development/testing when you need faster iteration.
### 2. Use HyperSync for Guaranteed Detection
Reorg detection is **guaranteed** when using HyperSync (the default data source).
With custom RPC endpoints, edge cases may go undetected depending on the provider.
### 3. Avoid Non-Rollbackable Side Effects
```typescript
// BAD - Can't be rolled back if reorg happens
MyContract.Event.handler(async ({ event, context }) => {
await sendWebhook(event); // This stays even if block is reorged
await postToAnalytics(event);
});
// BETTER - Use Effect API with caching
// Or guard side effects appropriately
MyContract.Event.handler(async ({ event, context }) => {
// Entity writes ARE rolled back
context.Transfer.set({
id: `${event.chainId}_${event.transaction.hash}_${event.logIndex}`,
// ...
});
// For critical external calls, consider confirmation delay
// or handle in a separate system that reads from your indexed data
});
```
### 4. Higher Thresholds for High-Value Apps
For financial applications or high-stakes data:
```yaml
networks:
- id: 1
confirmed_block_threshold: 300 # Extra conservative
```
### 5. Adjust Per Network
Different networks have different reorg characteristics:
| Network | Typical Reorg Depth | Recommended Threshold |
|---------|---------------------|----------------------|
| Ethereum | Rare, shallow | 200-300 |
| Polygon | More frequent | 150-200 |
| Arbitrum | Very rare (L2) | 100-150 |
| Optimism | Very rare (L2) | 100-150 |
| BSC | Occasional | 150-200 |
## Reorg Handling in Code
You generally don't need special code for reorgs - HyperIndex handles it automatically. However, be aware:
```typescript
MyContract.Event.handler(async ({ event, context }) => {
// This entity write will be rolled back if the block is reorged
context.Transfer.set({
id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
from: event.params.from,
to: event.params.to,
amount: event.params.value,
// Include block info for debugging
blockNumber: BigInt(event.block.number),
blockHash: event.block.hash,
});
});
```
## Debugging Reorg Issues
If you suspect reorg-related data inconsistencies:
1. Check if `rollback_on_reorg: true` is set
2. Verify you're using HyperSync (not custom RPC)
3. Check block explorer for the affected block range
4. Look for "reorg detected" in indexer logs
## Summary
| Setting | Value | Use Case |
|---------|-------|----------|
| `rollback_on_reorg` | `true` | Production (default) |
| `rollback_on_reorg` | `false` | Dev/testing only |
| `confirmed_block_threshold` | 200 | Default for all networks |
| `confirmed_block_threshold` | 250-300 | High-value Ethereum apps |
| `confirmed_block_threshold` | 100-150 | L2s with fast finality |

View File

@@ -0,0 +1,150 @@
# RPC as Data Source
Use RPC for unsupported networks or as fallback for HyperSync.
## When to Use RPC
- **Unsupported Networks** - Chains not yet on HyperSync
- **Private Chains** - Custom EVM networks
- **Fallback** - Backup when HyperSync unavailable
**Note:** HyperSync is 10-100x faster. Use it when available.
## Basic RPC Configuration
```yaml
networks:
- id: 1
rpc_config:
url: https://eth-mainnet.your-provider.com
start_block: 15000000
contracts:
- name: MyContract
address: "0x1234..."
```
## Advanced RPC Options
```yaml
networks:
- id: 1
rpc_config:
url: https://eth-mainnet.your-provider.com
initial_block_interval: 10000 # Blocks per request
backoff_multiplicative: 0.8 # Scale back after errors
acceleration_additive: 2000 # Increase on success
interval_ceiling: 10000 # Max blocks per request
backoff_millis: 5000 # Wait after error (ms)
query_timeout_millis: 20000 # Request timeout (ms)
start_block: 15000000
```
| Parameter | Description | Recommended |
|-----------|-------------|-------------|
| `initial_block_interval` | Starting batch size | 1,000-10,000 |
| `backoff_multiplicative` | Reduce batch on error | 0.5-0.9 |
| `acceleration_additive` | Increase batch on success | 500-2,000 |
| `interval_ceiling` | Max batch size | 5,000-10,000 |
| `backoff_millis` | Wait after error | 1,000-10,000ms |
| `query_timeout_millis` | Request timeout | 10,000-30,000ms |
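
The table above describes an adaptive batching scheme. The sketch below only illustrates the documented semantics of these options; it is not HyperIndex's actual implementation:

```typescript
// Illustration of how the RPC batch size adapts, per the options above.
let interval = 10_000;      // initial_block_interval
const ceiling = 10_000;     // interval_ceiling
const acceleration = 2_000; // acceleration_additive
const backoffFactor = 0.8;  // backoff_multiplicative
const backoffMillis = 5_000; // backoff_millis

async function nextBatch(fetchLogs: (blocks: number) => Promise<void>) {
  try {
    await fetchLogs(interval);
    // Successful request: grow the batch, capped at the ceiling.
    interval = Math.min(interval + acceleration, ceiling);
  } catch {
    // Failed request: shrink the batch and wait before retrying.
    interval = Math.max(1, Math.floor(interval * backoffFactor));
    await new Promise((resolve) => setTimeout(resolve, backoffMillis));
  }
}
```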
## RPC Fallback for HyperSync
Add fallback RPC when HyperSync has issues:
```yaml
networks:
- id: 137
# Primary: HyperSync (automatic)
# Fallback: RPC
rpc:
- url: https://polygon-rpc.com
for: fallback
- url: https://backup-polygon-rpc.com
for: fallback
initial_block_interval: 1000
start_block: 0
contracts:
- name: MyContract
address: 0x...
```
**Simple fallback:**
```yaml
networks:
- id: 137
rpc: https://polygon-rpc.com?API_KEY={POLYGON_API_KEY}
```
The fallback activates when no new block has been received for 20+ seconds.
## eRPC for Enhanced Reliability
Use [eRPC](https://github.com/erpc/erpc) for production deployments:
**Features:**
- Permanent caching
- Auto failover between providers
- Re-org awareness
- Auto-batching
- Load balancing
**erpc.yaml:**
```yaml
logLevel: debug
projects:
- id: main
upstreams:
- endpoint: evm+envio://rpc.hypersync.xyz # HyperRPC primary
- endpoint: https://eth-mainnet-provider1.com
- endpoint: https://eth-mainnet-provider2.com
```
**Run eRPC:**
```bash
docker run -v $(pwd)/erpc.yaml:/root/erpc.yaml \
-p 4000:4000 -p 4001:4001 \
ghcr.io/erpc/erpc:latest
```
**Use in config.yaml:**
```yaml
networks:
- id: 1
rpc_config:
url: http://erpc:4000/main/evm/1
start_block: 15000000
```
## Environment Variables
Use env vars for API keys:
```yaml
rpc: https://eth-mainnet.g.alchemy.com/v2/{ALCHEMY_API_KEY}
```
Set in `.env`:
```
ALCHEMY_API_KEY=your-key-here
```
## Best Practices
1. **Use HyperSync when available** - Much faster
2. **Start from recent blocks** - Faster initial sync
3. **Tune batch parameters** - Based on provider limits
4. **Use paid RPC services** - Better reliability
5. **Configure fallback** - For production deployments
6. **Consider eRPC** - For complex multi-provider setups
## Comparison: HyperSync vs RPC
| Feature | HyperSync | RPC |
|---------|-----------|-----|
| Speed | 10-100x faster | Baseline |
| Configuration | Minimal | Requires tuning |
| Rate Limits | None | Provider-dependent |
| Cost | Included | Pay per request |
| Networks | Supported networks only | Any EVM |
| Maintenance | Managed | Self-managed |

View File

@@ -0,0 +1,284 @@
# Testing HyperIndex Indexers
Unit test handlers with MockDb and simulated events.
## Setup
1. Install test framework:
```bash
pnpm i mocha @types/mocha
```
2. Create test folder and file: `test/test.ts`
3. Add to `package.json`:
```json
"test": "mocha"
```
4. Generate test helpers:
```bash
pnpm codegen
```
## Basic Test Structure
```typescript
import assert from "assert";
import { TestHelpers, UserEntity } from "generated";
const { MockDb, Greeter, Addresses } = TestHelpers;
describe("Greeter Indexer", () => {
it("NewGreeting creates User entity", async () => {
// 1. Create mock database
const mockDb = MockDb.createMockDb();
// 2. Create mock event
const mockEvent = Greeter.NewGreeting.createMockEvent({
greeting: "Hello",
user: Addresses.defaultAddress,
});
// 3. Process event
const updatedDb = await mockDb.processEvents([mockEvent]);
// 4. Assert result
const user = updatedDb.entities.User.get(Addresses.defaultAddress);
assert.equal(user?.latestGreeting, "Hello");
});
});
```
## MockDb API
### Create Empty Database
```typescript
const mockDb = MockDb.createMockDb();
```
### Add Entities
```typescript
const mockDb = MockDb.createMockDb();
const dbWithEntity = mockDb.entities.User.set({
id: "user-1",
balance: BigInt(1000),
name: "Alice",
});
```
### Get Entities
```typescript
const user = updatedDb.entities.User.get("user-1");
```
### Process Events
```typescript
const updatedDb = await mockDb.processEvents([event1, event2, event3]);
```
## Creating Mock Events
### Basic Event
```typescript
const mockEvent = MyContract.Transfer.createMockEvent({
from: "0x123...",
to: "0x456...",
value: BigInt(1000),
});
```
### With Custom Metadata
```typescript
const mockEvent = MyContract.Transfer.createMockEvent(
{
from: "0x123...",
to: "0x456...",
value: BigInt(1000),
},
{
chainId: 1,
srcAddress: "0xContractAddress",
logIndex: 0,
block: {
number: 12345678,
timestamp: 1699000000,
hash: "0xblockhash...",
},
transaction: {
hash: "0xtxhash...",
},
}
);
```
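Overriding block metadata matters when a handler derives values from it. A sketch, reusing the imports from the basic test structure and assuming a hypothetical `DailyVolume` entity keyed by day number:

```typescript
it("buckets volume by day using block.timestamp", async () => {
  const mockDb = MockDb.createMockDb();

  const event = MyContract.Transfer.createMockEvent(
    { from: "0x123...", to: "0x456...", value: BigInt(1000) },
    { block: { number: 12345678, timestamp: 1699000000, hash: "0xblockhash..." } }
  );

  const updatedDb = await mockDb.processEvents([event]);

  // 1699000000 / 86400 ≈ day 19664 (hypothetical entity and id scheme)
  const day = updatedDb.entities.DailyVolume.get("19664");
  assert.equal(day?.volume, BigInt(1000));
});
```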
## Test Patterns
### Entity Creation
```typescript
it("creates entity on event", async () => {
const mockDb = MockDb.createMockDb();
const event = MyContract.Created.createMockEvent({
id: "token-1",
name: "Token",
});
const updatedDb = await mockDb.processEvents([event]);
const token = updatedDb.entities.Token.get("token-1");
assert.ok(token, "Token should exist");
assert.equal(token.name, "Token");
});
```
### Entity Updates
```typescript
it("updates entity on subsequent events", async () => {
const mockDb = MockDb.createMockDb();
const userAddress = Addresses.defaultAddress;
// First event
const event1 = Greeter.NewGreeting.createMockEvent({
greeting: "Hello",
user: userAddress,
});
// Second event
const event2 = Greeter.NewGreeting.createMockEvent({
greeting: "Hi again",
user: userAddress,
});
const updatedDb = await mockDb.processEvents([event1, event2]);
const user = updatedDb.entities.User.get(userAddress);
assert.equal(user?.numberOfGreetings, 2);
assert.equal(user?.latestGreeting, "Hi again");
});
```
### Pre-existing Entities
```typescript
it("updates existing entity", async () => {
// Start with entity in database
const mockDb = MockDb.createMockDb()
.entities.Token.set({
id: "token-1",
totalSupply: BigInt(1000),
});
const event = MyContract.Mint.createMockEvent({
tokenId: "token-1",
amount: BigInt(500),
});
const updatedDb = await mockDb.processEvents([event]);
const token = updatedDb.entities.Token.get("token-1");
assert.equal(token?.totalSupply, BigInt(1500));
});
```
### Multiple Event Types
```typescript
it("handles multiple event types", async () => {
const mockDb = MockDb.createMockDb();
const mintEvent = MyContract.Mint.createMockEvent({ ... });
const transferEvent = MyContract.Transfer.createMockEvent({ ... });
const burnEvent = MyContract.Burn.createMockEvent({ ... });
const updatedDb = await mockDb.processEvents([
mintEvent,
transferEvent,
burnEvent,
]);
// Assert final state
});
```
## Debugging Tests
### Log Entity State
```typescript
const user = updatedDb.entities.User.get(userId);
console.log(JSON.stringify(user, null, 2));
```
### Check Entity Exists
```typescript
assert.ok(
updatedDb.entities.User.get(userId),
`User ${userId} should exist`
);
```
## Common Issues
### "Cannot read properties of undefined"
- Entity doesn't exist - check IDs match
- Entity wasn't created - verify handler logic
### Type Mismatch
- Match schema types (BigInt vs number)
- Use `BigInt()` or a `1000n` literal for BigInt fields (see the sketch below)
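For example, with a schema field declared as `BigInt` (the `Token` entity and `totalSupply` field are illustrative):

```typescript
// Schema BigInt fields map to TypeScript bigint, not number.
// totalSupply: 1000 would be a type error; use BigInt(1000) or 1000n.
const dbWithToken = MockDb.createMockDb().entities.Token.set({
  id: "token-1",
  totalSupply: BigInt(1000),
});
```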
### Missing Imports
```typescript
import { TestHelpers, EntityType } from "generated";
const { MockDb, ContractName, Addresses } = TestHelpers;
```
## Running Tests
```bash
# Run all tests
pnpm test
# Run specific test file
pnpm mocha test/transfers.test.ts
# Watch mode (with nodemon)
pnpm mocha --watch test/
```
## Test Template
```typescript
import assert from "assert";
import { TestHelpers, TokenEntity } from "generated";
const { MockDb, MyContract, Addresses } = TestHelpers;
describe("MyContract Indexer", () => {
describe("Transfer events", () => {
it("creates Transfer entity", async () => {
const mockDb = MockDb.createMockDb();
const event = MyContract.Transfer.createMockEvent({
from: Addresses.defaultAddress,
to: "0x456...",
value: BigInt(1000),
});
const updatedDb = await mockDb.processEvents([event]);
// Add assertions
});
});
});
```

View File

@@ -0,0 +1,214 @@
# Wildcard Indexing & Topic Filtering
Index events by signature without specifying contract addresses.
## Basic Wildcard Indexing
Index all events matching a signature across ALL contracts:
**config.yaml:**
```yaml
networks:
- id: 1
start_block: 0
contracts:
- name: ERC20
handler: ./src/EventHandlers.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
# No address = wildcard indexing
```
**Handler:**
```typescript
import { ERC20 } from "generated";
ERC20.Transfer.handler(
async ({ event, context }) => {
context.Transfer.set({
id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
from: event.params.from,
to: event.params.to,
token: event.srcAddress, // The actual contract address
});
},
{ wildcard: true } // Enable wildcard
);
```
## Topic Filtering
Filter wildcard events by indexed parameters:
### Single Filter
Only index mints (from = zero address):
```typescript
const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";
ERC20.Transfer.handler(
async ({ event, context }) => {
// Handle mint event...
},
{ wildcard: true, eventFilters: { from: ZERO_ADDRESS } }
);
```
### Multiple Filters
Index both mints AND burns:
```typescript
const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";
const WHITELISTED = [
"0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266",
"0x70997970C51812dc3A010C7d01b50e0d17dc79C8",
];
ERC20.Transfer.handler(
async ({ event, context }) => {
// Handle mint or burn...
},
{
wildcard: true,
eventFilters: [
{ from: ZERO_ADDRESS, to: WHITELISTED }, // Mints to whitelisted
{ from: WHITELISTED, to: ZERO_ADDRESS }, // Burns from whitelisted
],
}
);
```
### Per-Network Filters
Different filters for different chains:
```typescript
const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";
const WHITELISTED: Record<number, string[]> = {
  1: ["0xEthereumAddress1"],
  137: ["0xPolygonAddress1", "0xPolygonAddress2"],
};
ERC20.Transfer.handler(
async ({ event, context }) => {
// Handle transfer...
},
{
wildcard: true,
eventFilters: ({ chainId }) => [
{ from: ZERO_ADDRESS, to: WHITELISTED[chainId] },
{ from: WHITELISTED[chainId], to: ZERO_ADDRESS },
],
}
);
```
## Wildcard with Dynamic Contracts
Track ERC20 transfers to/from dynamically registered contracts:
**config.yaml:**
```yaml
networks:
- id: 1
contracts:
- name: SafeRegistry
address: 0xRegistryAddress
handler: ./src/EventHandlers.ts
events:
- event: NewSafe(address safe)
- name: Safe
handler: ./src/EventHandlers.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
# No address - dynamically registered
```
**Handler:**
```typescript
// Register Safe addresses dynamically
SafeRegistry.NewSafe.contractRegister(async ({ event, context }) => {
context.addSafe(event.params.safe);
});
// Track transfers to/from registered Safes
Safe.Transfer.handler(
async ({ event, context }) => {
context.Transfer.set({
id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
from: event.params.from,
to: event.params.to,
});
},
{
wildcard: true,
eventFilters: ({ addresses }) => [
{ from: addresses }, // Transfers FROM Safe addresses
{ to: addresses }, // Transfers TO Safe addresses
],
}
);
```
## Filter in Handler
Additional filtering inside handler:
```typescript
const USDC: Record<number, string> = {
  1: "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48",
  137: "0x2791Bca1f2de4661ED88A30C99A7a9449Aa84174",
};
Safe.Transfer.handler(
async ({ event, context }) => {
// Only process USDC transfers
if (event.srcAddress !== USDC[event.chainId]) {
return;
}
context.USDCTransfer.set({
id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
from: event.params.from,
to: event.params.to,
amount: event.params.value,
});
},
{
wildcard: true,
eventFilters: ({ addresses }) => [{ from: addresses }, { to: addresses }],
}
);
```
## Contract Register with Filters
Filter factory events when registering contracts:
```typescript
const DAI = "0x6B175474E89094C44Da98b954EedeAC495271d0F";
// Only register pools containing DAI
UniV3Factory.PoolCreated.contractRegister(
async ({ event, context }) => {
context.addUniV3Pool(event.params.pool);
},
{ eventFilters: [{ token0: DAI }, { token1: DAI }] }
);
```
## Use Cases
- **Index all ERC20 transfers** - Track any token transfer
- **Index all NFT mints** - Track mints across collections
- **Track protocol interactions** - Monitor transfers to/from your contracts
- **Cross-contract analysis** - Analyze patterns across all contracts
- **Factory-created contracts** - Index contracts created by factories
## Limitations
- Only one wildcard per event signature per network
- Either `contractRegister` OR `handler` can have eventFilters, not both
- The RPC data source supports only a single wildcard event with topic filtering